Newsletters

From the Editor

Oriol T. Valls

Let’s get uncomfortable

We have been able to obtain several very nice articles for this issue. Lieber and Press have submitted a fascinating article on the current challenges in nuclear deterrence. I have been happy to make an exception to our usual length guidelines for this article: it is worth reading every word. Alvin Saperstein gives us some deep insight into the difference between “is” and “does”, which is so often forgotten in academia. Anybody who has thought about this question will love it. We also have a follow-up to our special issue (October 2017) on Marie Curie: an article by a member of our own board of Editors, Maury Goodman, edited in a special way by Laura Berzak Hopkins. We also have our usual two book reviews. No letters to the editor this time: the previous issue did not contain anything controversial after all.

Tabitha Colter, our recently appointed Media Editor, will really ramp up our Newsletter’s media presence starting with this issue.

We need contributions: we accept articles on any relevant topic, and do not shy away from controversy. This time I have a suggestion: we need articles that will get the readership thinking and make some of them uncomfortable. Take superstition, for example. We all love to think that it is confined to uneducated people, but it is thriving at universities. Look at the website https://www.csh.umn.edu/ at my own institution. Do you think our silence, as scientists, on such matters diminishes our credibility? I think so. Or take Darwinism: we love to pretend that only nuts do not believe in Darwin’s theories. Please go to Chapters XI and XII (I bet you never got that far, did you?) of “On the Origin of Species” and you will see many things that would not be too popular among the educated green set. There are many other examples. Please pick your favorite and write an article or persuade a friend to write one.

Oriol

Oriol T. Valls
University of Minnesota

The New Era of Nuclear Arsenal Vulnerability

Keir A. Lieber and Daryl G. Press

Nuclear deterrence rests on the survivability of nuclear arsenals. A weapons arsenal that can survive a disarming strike and inflict unacceptable damage on the attacker is the foundation of a robust deterrent. For much of the nuclear age, arsenal survivability seemed straightforward. “Counterforce” disarming attacks — those aimed at eliminating an enemy’s retaliatory forces — were nearly impossible to undertake because potential victims could easily hide and protect their weapons.1 Today, however, leaps in weapons accuracy and breakthroughs in remote sensing are undermining states’ efforts to secure their arsenals. Specifically, pinpoint accuracy and improved sensors are negating two key approaches that countries have relied upon to ensure arsenal survivability: hardening and concealment. The computer revolution has also spawned dramatic advances in data processing, communication, and artificial intelligence, and has opened a new cyber domain of strategic operations — compounding the growing vulnerability of nuclear delivery systems.

Nuclear arsenals around the world are not becoming equally vulnerable to attack. Countries with considerable resources can buck these trends in technology and keep their forces survivable, albeit at substantial cost and effort. However, other countries — especially those facing wealthy, technologically advanced adversaries — will find it increasingly difficult to secure their arsenals. The implications for nuclear policy are far-reaching: “the new era of counterforce,” as we label it,2 will reduce deterrence stability, undercut the logic of future nuclear arms reductions, and compel U.S. leaders to balance the risks and opportunities of honing U.S. counterforce capabilities.

Eroding Foundation of Nuclear Deterrence
Nuclear weapons are the ultimate instruments of deterrence. There could be no conceivable benefit from invading or attacking a rival if doing so would trigger nuclear retaliation. As long as nuclear arsenals are secure, and hence could survive an adversary’s attack and then retaliate, nuclear weapons are a tremendous source of security for those who possess them. For this reason, military planners have employed three basic approaches to protect nuclear forces from attack: hardening, concealment, and redundancy. In terms of hardening, planners place missiles in reinforced silos; deploy aircraft in hardened shelters; disperse mobile missiles to protective sites; and bury command and control sites, as well as the secure means used to communicate launch orders. Nuclear planners also rely heavily on concealment to ensure force survivability, particularly by dispersing ballistic missile submarines (SSBNs) and mobile missile launchers within vast deployment areas. Aircraft are harder to hide because they require airfields for takeoff and landing, but they too can employ concealment by dispersing to alternate airfields or remaining airborne during alerts. Finally, redundancy is used to bolster every aspect of the nuclear mission, especially force survivability. Most nuclear-armed countries use multiple types of weapons and delivery systems to hedge against design flaws and complicate enemy strike plans. They spread their forces across multiple bases and employ redundant communication networks, command and control arrangements, and early warning systems.

Major technological trends are undermining these strategies of survivability. Leaps in weapons accuracy have diminished the value of hardening, while breakthroughs in remote sensing threaten nuclear forces that depend on concealment. Another major change since the end of the Cold War — the significant reduction of nuclear arsenals — weakens the third strategy of survivability: redundancy. Deploying survivable nuclear forces in this environment is possible, but the challenge of protecting those forces is growing.

Counterforce in the Age of Accuracy
Throughout most of the Cold War, nuclear delivery systems were too inaccurate to conduct effective disarming strikes against large arsenals comprising hundreds of hardened targets. As late as 1985, the largest yield warhead on the best U.S. intercontinental ballistic missile (ICBM) had only a 54% chance of destroying a Soviet missile silo. Missiles fired from submarines were even less effective, offering less than 10% chance per warhead of destroying a hardened missile silo.3 A nuclear disarming strike against Soviet missile fields would have left hundreds of silos intact to inflict a devastating counterblow.
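
Readers curious where such percentages come from can use the standard open-literature approximation for the single-shot kill probability (SSPK) of one warhead against a hardened point target; this is an illustrative textbook formula, not necessarily the model the authors themselves employ:

    \mathrm{SSPK} = 1 - 0.5^{\,(LR/\mathrm{CEP})^{2}}, \qquad LR \propto (Y/H)^{1/3},

where CEP is the warhead’s circular error probable (the radius within which half of the warheads fall), LR is the lethal radius against the target, Y is the warhead yield, and H is the overpressure the silo is hardened to withstand. Because SSPK depends on the square of LR/CEP, halving the CEP has the same effect as an eightfold increase in yield, which is why the accuracy gains described below matter far more than bigger bombs.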

But technological advances in navigation and guidance, which began to enter the superpower arsenals in the mid-1980s, have significantly increased the vulnerability of hardened targets. Improved inertial sensors and stellar navigation systems greatly enhanced missile accuracy. GPS and other geolocation technologies let submarines precisely determine their position before launch, allowing their weapons to surpass the accuracy of land-based weapons. Over the subsequent two decades, new missiles and improved guidance systems on older missiles transformed offensive strike capabilities. Today, a single warhead delivered by a U.S. ICBM would have roughly a 75% chance of destroying a hardened missile silo, and the most effective warhead currently deployed on U.S. submarines would have roughly an 80% chance of destroying the same target. In short, the precision revolution — prominently displayed by the United States in each of its recent conventional wars — has had equally dramatic consequences for nuclear targeting and deterrence.

The unfolding accuracy revolution and the computer revolution that spawned it have had other complementary effects on the vulnerability of hard targets. For decades, nuclear targeters understood that effective disarming attacks would be impossible unless they could strike each individual target with multiple weapons. After all, even a 90% effective strike against an enemy’s arsenal would be a failure, since the surviving weapons could inflict a devastating counterattack. The simple solution to that problem, striking each target multiple times, had long been thought infeasible because of the problem of fratricide: the danger that incoming weapons might destroy each other. The accuracy revolution, however, also offers a solution to the fratricide problem, opening the door to assigning multiple warheads to a single target and thus paving the way to disarming counterforce strikes.

One type of fratricide occurs when the prompt effects of nuclear detonations — principally radiation, heat, and overpressure — destroy or deflect nearby warheads. To protect those warheads, targeters must separate the incoming weapons by at least 3-5 seconds. A second source of fratricide is harder to overcome. Destroying hard targets typically requires low-altitude detonations (so-called ground bursts), which vaporize material on the ground. When the debris begins to cool, 6-8 seconds after the detonation, it forms a dust cloud that envelops the target. Even small dust particles can be lethal to incoming warheads speeding through the cloud to the target. For decades, these two sources of fratricide posed a major problem for nuclear planners. Multiple warheads could be aimed at a single target only if they were separated by at least 3-5 seconds (to avoid interfering with each other); yet, all inbound warheads had to arrive within 6-8 seconds of the first (before the dust cloud formed). As a result, assigning more than two weapons to each target would produce only marginal gains: if the first one resulted in a miss, the target would likely be shielded when the third or fourth warhead arrived.4

Improvements in accuracy, however, have greatly mitigated the problem of fratricide. The proportion of misses — the main culprit of fratricide — compared to hits is declining. To be clear, some weapons will still malfunction; that is, they will be prevented from destroying their targets because of malfunctioning missile boosters, faulty guidance systems, or defective warheads. Those kinds of failures, however, do not generally cause fratricide, because the warheads do not reach or detonate near the target. Only those that travel to the target area, yet detonate outside the lethal radius, will create a dust cloud that shields the target from other incoming weapons. In short, leaps in accuracy are essentially reducing the set of three outcomes (hit, miss, and malfunction) to just two: hit or malfunction. The “miss” category, the key cause of fratricide, has virtually disappeared.

Improved missile accuracy and the end of fratricide are just two of the developments that have helped negate hardening and increased the vulnerability of nuclear arsenals. The computer revolution has led to other improvements that, taken together, significantly increase counterforce capabilities.

First, increased submarine-launched ballistic missile (SLBM) accuracy has added hundreds of submarine-based warheads to the counterforce arsenal; it has also unlocked other advantages that submarines possess over land-based missiles. For example, submarines have flexibility in firing location, allowing them to strike targets that are out of range of ICBMs or that are deployed in locations that ICBMs cannot hit. Submarines also permit strikes from close range, reducing an adversary’s response time. And because submarines can fire from unpredictable locations, SLBM launches are more difficult to detect than ICBM attacks, further reducing adversary response time before impact.

Second, new “compensating” fuses, which exist on most U.S. SLBMs and will soon be deployed across the entire force, are making ballistic missiles even more capable than the figures reported above suggest.5 Reentry vehicles equipped with this fusing system use an altimeter to measure the difference between the actual and expected trajectory of the reentry vehicle, and then compensate for inaccuracies by adjusting the warhead’s height of burst. Specifically, if the altimeter reveals that the warhead will detonate “short” of the target, the fusing system lowers the height of burst, allowing the weapon to travel farther (hence, closer to the aimpoint) before detonation. Alternatively, if the reentry vehicle is going to detonate beyond the target, the height of burst automatically adjusts upward to allow the weapon to detonate before it travels too far. This improved fuse greatly increases the effectiveness of ballistic missiles. For example, more than half of the warheads currently deployed on U.S. submarines otherwise have too small an explosive yield to carry out the type of attack described in the previous paragraphs; the compensating fuse gives them this “hard target kill” capability. Putting aside all the other improvements described above, the new fuse more than doubles the counterforce capabilities of the U.S. submarine fleet.
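
The geometry behind this compensation can be sketched in a few lines (a schematic illustration only; the function and parameter names below are hypothetical, and this is not the actual fuse algorithm): a reentry vehicle descending at an angle theta below the horizontal gains roughly delta_h / tan(theta) of extra downrange travel for every delta_h by which its burst height is lowered.

    # Schematic sketch of burst-height compensation; illustrative values only.
    import math

    def compensated_height_of_burst(nominal_hob_m, predicted_shortfall_m,
                                    reentry_angle_deg, hob_min_m=0.0):
        """Slide the detonation point along the reentry path toward the aimpoint.

        predicted_shortfall_m > 0 means the warhead is predicted to detonate
        short of the target; a negative value means it would overshoot.
        """
        theta = math.radians(reentry_angle_deg)
        # Height change needed to shift the detonation point downrange by the
        # predicted shortfall (simple flat-Earth, straight-line geometry).
        delta_h = predicted_shortfall_m * math.tan(theta)
        return max(nominal_hob_m - delta_h, hob_min_m)

    # Example: a warhead predicted to fall 200 m short on a 30-degree reentry
    # has its burst height lowered by about 115 m.
    print(compensated_height_of_burst(400.0, 200.0, 30.0))   # ~285 m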

Third, the computer revolution has made possible rapid missile reprogramming — which increases the effectiveness of ballistic missiles by reducing the consequences of malfunctions. In the age of pinpoint accuracy, missile reliability has become the main hurdle to attacks on hardened targets. For decades, analysts have recognized a solution to this problem: if missile failures can be detected, the targets assigned to the malfunctioning missiles can be rapidly reassigned to other missiles held in reserve. The capability to retarget missiles in a matter of minutes was installed at U.S. ICBM launch control centers in the 1990s and on U.S. submarines in the early 2000s, and both systems have since been upgraded. We do not know if the United States has adopted war plans that fully exploit rapid reprogramming to minimize the effects of missile failures. Nevertheless, such a targeting approach is within the technical capabilities of the United States and other major nuclear powers and may already be incorporated into war plans.

The cumulative consequences of these improvements are profound. Given a hypothetical target set of 200 hardened missile silos, a 1985-era U.S. ICBM strike — with two warheads assigned per target — would have been expected to leave 42 surviving silos. A comparable strike in 2018 could destroy every hardened silo.
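
The 1985 figure can be reproduced with the simplest possible arithmetic, treating each warhead as an independent trial (a minimal sketch only; the full model behind the numbers above also accounts for missile reliability, fusing, and retargeting):

    # Expected surviving silos under the independence assumption.
    def expected_survivors(n_silos, sspk, warheads_per_silo):
        p_survive = (1.0 - sspk) ** warheads_per_silo
        return n_silos * p_survive

    # 1985: roughly 54% single-shot kill probability, two warheads per silo.
    print(expected_survivors(200, 0.54, 2))   # about 42 silos survive

    # 2018: roughly 75-80% per warhead; the residue is dominated by missile
    # failures, which rapid retargeting (discussed above) can cover, driving
    # the expected number of survivors toward zero.
    print(expected_survivors(200, 0.80, 2))   # about 8 before retargeting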

These results are simply the output of a nuclear targeting model. In the real world, the effectiveness of any strike would depend on many factors not modeled here, including the skill of the attacking forces, the accuracy of target intelligence, the ability of the targeted country to detect an inbound strike and “launch on warning,” and other factors that depend on the political and strategic context. As a result, these calculations tell us less about the precise vulnerability of a given arsenal at a given time — though one can reach arresting conclusions based on the evidence — and more about trends in how technology is undermining survivability.

At the same time, our model substantially understates the vulnerability of hard targets because it does not capture the growing contribution of nonnuclear forces to counterforce missions. As nuclear arsenals shrink — and hence offer fewer targets that must be destroyed — and as conventional strike forces proliferate, the challenges for ensuring force survivability will grow.

Counterforce in the Age of Transparency
While advances in accuracy are negating the value of hardening, leaps in remote sensing are chipping away at the other main approach to achieving survivability: concealment. Finding concealed forces, particularly mobile ones, remains a major challenge. Trends in technology, however, are eroding the security that mobility once provided. In the ongoing competition between “hiders” and “seekers,” waged by ballistic missile submarines, mobile land-based missiles, and the forces that seek to track them, the hider’s job is growing more difficult over time.

At least five trends are ushering in an age of unprecedented transparency. First, sensor platforms have become more diverse. The mainstays of Cold War technical intelligence — satellites, submarines, and piloted aircraft — continue to play a vital role, and they are being augmented by new platforms, including remotely piloted aircraft, undersea drones, and various autonomous sensors hidden on the ground or tethered to the seabed. Additionally, the past two decades have witnessed the development of a new “virtual” sensing platform: cyberspying. Second, sensors are collecting a widening array of signals using a growing list of techniques. Early Cold War strategic intelligence relied heavily on photoreconnaissance, underwater acoustics, and the collection of adversary communications — all of which remain important. Modern sensors now gather data from across the entire electromagnetic spectrum and exploit an increasing number of analytic techniques, such as spectroscopy to identify the vapors leaking from faraway facilities, interferometry to discover underground structures, and the signal-processing techniques that underpin synthetic aperture radar (SAR).

Third, remote sensing platforms increasingly provide persistent observation. At the beginning of the Cold War, strategic intelligence was hobbled by sensors that collected snapshots rather than streams of data. Spy planes sprinted past targets, and satellites passed overhead and then disappeared over the horizon. Over time those sensors were supplemented with platforms that remained in place and soaked up data, such as signals intelligence antennas, undersea hydrophones, and geostationary satellites. The trend toward persistence is continuing. Today, remotely piloted vehicles can loiter near enemy targets and autonomous sensors can monitor critical road junctures for months or years. Persistent observation is essential if the goal is not merely to count enemy weapons, but also to track their movement.

The fourth factor in the ongoing remote sensing revolution is the steady improvement in sensor resolution. In every field that employs remote sensing, improved sensors and advanced data processing are permitting more accurate measurements and fainter signals to be discerned from background noise. The leap in satellite image resolution is but one example: the first U.S. reconnaissance satellite (Corona) could detect objects as small as 25 feet across. Today, even commercial satellites can collect images with 1-foot resolution. (Spy satellites can do much better.) Advances in resolution are not merely transforming optical remote sensing systems; they are increasing the capabilities of infrared sensors, advanced radars, interferometers, and spectrographs. High-resolution data, however, would have limited utility if it were not for the fifth leg of the sensor revolution: improved data transmission. In the past, analysts sometimes had to wait weeks before they could examine satellite images. (Early satellites had to finish a roll of film and eject the canister, which was then recovered and processed.) Today, intelligence gathered by aircraft, satellites, and drones can be transmitted in nearly real time.

The impact of the remote sensing revolution on nuclear force survivability is probably greater than the consequences of improved accuracy, but the payoff of improved sensors is more difficult to demonstrate using unclassified models. We illustrate some of the effects of improved sensors by using commercial geospatial software to estimate the fraction of North Korea’s road network that can be monitored using existing SAR satellites, standoff UAVs, and stealthy UAVs (which can probably operate within North Korea’s airspace). We find that the existing constellation of U.S. satellites can image the roads surrounding North Korean missile bases at least every 90 minutes, and existing UAVs could maintain persistent observation of the entire road network indefinitely. Our analysis understates U.S. surveillance capabilities by not accounting for optical satellites (which are effective in daylight), ground-based sensors (which have likely been emplaced at key locations in Korea), the satellite capabilities of allies, and other remote sensing techniques that would all likely come into play in a hunt for North Korean missiles.
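
As a back-of-envelope check on the 90-minute figure (our own illustration, not the authors' geospatial analysis), the revisit timescale follows directly from orbital mechanics: a satellite at a typical SAR altitude of about 500 km circles the Earth roughly every 95 minutes.

    # Kepler's third law for a circular low Earth orbit (illustrative only).
    import math

    MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6     # mean Earth radius, m

    def orbital_period_minutes(altitude_km):
        a = R_EARTH + altitude_km * 1e3          # orbital radius in meters
        return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60.0

    print(orbital_period_minutes(500))   # roughly 94 minutes

Actual revisit rates also depend on swath width, orbital inclination, and the number of satellites in the constellation, which is what the geospatial analysis is needed to capture.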

To be clear, even with improved sensors, finding concealed forces, particularly mobile ones, remains a major challenge. But in the ongoing competition between “hiders” and “seekers,” waged by ballistic missile submarines, mobile land-based missiles, and the forces that try to track them, the hider’s job is growing more difficult than ever before. Nuclear survivability through concealment can no longer be assumed.

Policy Dilemmas in the New Era of Counterforce
The growing threat to nuclear forces raises major policy questions for U.S. national security planners. One set of questions relates to the wisdom of future bilateral arms reductions. Since the end of the Cold War, U.S. and Russian leaders have used arms control agreements to reduce their arsenals, increase strategic stability, prevent attacks, and soothe relations between former adversaries. Yet as the effectiveness of nuclear counterforce systems grows, further arms cuts risk creating unintended consequences: a situation in which the major nuclear-armed states can envision victory in nuclear war. To make matters worse, nonnuclear means of counterforce are growing — for example, through improved conventional weapons, missile defenses, anti-submarine warfare systems, and cyber operations. The problem is stark: arms control agreements that only cut nuclear weapons reduce the number of targets that must be destroyed in a disarming strike, even as the nonnuclear forces aimed at those targets grow in number and capability. For years, arms control was a policy that made war less winnable and therefore less likely; today arms cuts — however well-meaning — may have perverse consequences.

Second, the new era of nuclear arsenal vulnerability should also reopen debates in the United States about the wisdom of developing effective counterforce systems. Fielding those capabilities — nuclear, conventional, and other — may prove invaluable by enhancing nuclear deterrence during conventional wars, and allowing the United States to defend itself and its allies if nuclear deterrence fails. Enhancing counterforce capabilities, however, may also trigger arms races and other dynamics (such as dangerous deployment modes) that exacerbate political and military risks.

In the past, the state of technology bolstered the case for proponents of nuclear restraint: after all, disarming strikes seemed impossible, so enhancing counterforce capabilities would trigger arms racing without creating useful military capabilities. Today, however, technological trends support the advocates of counterforce. Modern conventional military power depends heavily on intelligence, surveillance, and reconnaissance (ISR) capabilities, as well as precision conventional weapons; but those capabilities are also the foundation of a counterforce arsenal. The United States will surely continue to enhance ISR and precision strike — as well as missile defenses, anti-submarine warfare, and cyber techniques — whether or not Washington decides to maximize its nuclear counterforce capabilities. In this new era of counterforce, where arms racing seems nearly inevitable, exercising restraint may limit options without yielding much benefit.

Conclusion
For most of the nuclear age, there were many impediments to effective counterforce. Weapons were too inaccurate to reliably destroy hardened targets; fratricide prevented many-on-one targeting; the number of targets to strike was huge; target intelligence was poor; conventional weapons were of limited use; and any attempt at disarming an adversary would be expected to kill vast numbers of people. Today, in stark contrast, highly accurate weapons aim at shrinking enemy target sets. The fratricide problem has been swept away. Conventional weapons can destroy most types of counterforce targets, and low-fatality nuclear strikes can be employed against others. Target intelligence, especially against mobile targets, remains the biggest obstacle to effective counterforce, but the technological changes under way in that domain are revolutionary. Of the two key strategies that countries have employed since the start of the nuclear age to keep their arsenals safe, hardening has been negated, and concealment is under great duress.

Nuclear weapons are still the ultimate tools of deterrence. Even in the new era of counterforce, nuclear arsenals can be deployed in a manner that protects them from disarming strikes. But technological trends are making the nuclear deterrence mission more demanding, and hence widening the gap between stronger and weaker nuclear-armed countries. The most powerful countries should be able to deploy survivable deterrent forces and field potent counterforce capabilities, while relatively weaker countries with smaller nuclear arsenals will struggle to keep their forces secure. Moreover, the technological trends that are causing this shift show no signs of abating. Weapons will grow even more accurate. Sensors will continue to improve. How countries adapt to the new strategic landscape will greatly shape the prospects for international peace, stability, and conflict for years to come.

References

  Keir A. Lieber is Director of the Security Studies Program and Associate Professor in the Edmund A. Walsh School of Foreign Service at Georgetown University (Kal25@georgetown.edu). Daryl G. Press is Associate Professor in the Department of Government at Dartmouth College.

  1. During the last decades of the Cold War, the massive nuclear arsenals deployed by the United States and the Soviet Union also seemed to make a perfect disarming strike impossible.
  2. We use a set of unclassified models and geospatial analysis to illustrate the growing effectiveness of counterforce capabilities in Keir A. Lieber and Daryl G. Press, “The New Era of Counterforce: Technological Change and the Future of Nuclear Deterrence,” International Security, Vol. 41, No. 4 (Spring 2017), pp. 9-49.
  3. The calculations underpinning this analysis are in Lieber and Press, “The New Era of Counterforce,” and its online appendix, http://dx.doi.org/10.7910/DVN/NKZJVT.
  4. It would take approximately 20 minutes for the heavier particles in a dust cloud to settle. In that time interval, relatively slow-moving missiles could launch upward through the dust cloud, but very fast-moving reentry vehicles could not penetrate the cloud to strike the target again. See discussion and sources in Lieber and Press, “The New Era of Counterforce,” pp. 21-22.
  5. Theodore Postol, “Monte Carlo Simulations of Burst-Height Fuse Kill Probabilities,” unpublished presentation, July 28, 2015. See also Hans M. Kristensen, “Small Fuze, Big Effect,” Strategic Security blog (Federation of American Scientists), March 14, 2007, https://fas.org/blogs/security/2007/03/small_fuze_-_big_effect/.

“Facts” and Opinion: What One “is” vs What One “does” as Seen by Modern Physical Science

Alvin M. Saperstein, Professor of Physics Emeritus, Wayne State University, Detroit Michigan

There is much ado in the current political world about facts, alternate facts, fake facts, etc.1 What is a “fact”? It is a piece of information about the world — external or internal — obtained either from trustworthy others, by direct observation and measurement, or by an explicitly rational chain of reasoning from other well-established facts.2

If a fact is obtained by observation, either by direct use of the human senses or via instrumentation, the observer must be cognizant of the possibilities and limitations of the instrument used. The process of establishing the “fact” must be open to, and agreed upon by, many other independent individuals; otherwise it is impossible to differentiate between illusion and established fact. Despite Freud3, it is very difficult for different individuals to share the essence of their respective dreams. (Actually, all meaning is communal, since almost all thinking, whether awake or asleep, requires the use of words, words which are created by the community and implanted in the brain of the growing child.)

It is also vital to distinguish between a “fact” and an “opinion”4 — which also may be shared by many others. Whether or not a Picasso painting is beautiful, or a politician is a credible candidate for high office, is an opinion. Whether or not the Earth is roughly spherical or billions of years old is a well-established fact. The choice between fact and opinion cannot be left to popular vote (which may change rapidly and radically) but must rely on a well-established process, usually referred to as a “science”. Whether a crowd watching an event is large or small is an opinion.1 How many people were in the crowd can be a fact, if suitable — numerical — observations were made. Thus the use of numbers must be a key attribute of any distinction between fact and opinion. Whether Shakespeare was a great dramatist is opinion. It is possible, and many scholars have tried, to make a credible estimate — with a wide range of uncertainty — of the number of people who have read him: an estimated “fact”. When the word is used correctly, there should be no need to add the qualifier “established” to the word “fact”.

It might be useful to view facts as modern physical science — specifically quantum physics — sees them. Traditional philosophers (and later “scientists”) have always wondered what the “world” is: material or spiritual? Real and external, or internal to our mind? Controllable or not? Harmful or not? Good or evil? In addition to the rather rare human curiosity — the sense of wonder as to how the world “works” — there is also a common feeling that knowledge could lead to control of our environment, private or public, possibly beneficial: e.g., a sturdy bridge could be built, dangerous illnesses may be cured, effective public policy may be created. This search for what is has led from the macro-world to the micro-world — molecules, atoms, nuclei, and other “elementary particles”5. In the 20th century, these attempts at understanding our micro-world — the foundations of our macro-world — led theoretical physicists to the creation and growth of Quantum Mechanics6 and its application to more and more practical aspects of the world — e.g., new structural materials, solid state electronics, molecular biology. But as the applicability of Quantum Mechanics has spread and understanding of it deepened, physicists have come to realize that they cannot understand what the micro-world is, only what it does — and the latter to an ever increasing degree. Thus the fundamental “fact” about the real physical world is what it does, not what it is.

The concept that something “is” implies permanence; it exists whether or not it is being observed. Its state of being is independent of whether there is an external world cognizant of its existence. On the contrary, what something “does” is a response to an external stimulus; an external world is required — an observer, a measurement. Prior to twentieth-century physics, the two descriptors — is and does — were always assumed to go together. Something, living or otherwise, could not be seen as “doing” without existing between observations. (To the thinkers of that time, a tree falling in the forest made a sound whether or not there was a human present to hear it.) Since the advent of Quantum Mechanics, this is no longer the case for microscopic entities — we can no longer definitely say what something “is” between observations, only how it responds to the measurement process — what it “does”.

The progress of microphysics — from molecules to atoms to nuclei and electrons to nuclear constituents — rested on the assumption of smaller and smaller “particles”. All of matter, and the light with which we interact with that matter, was shown to be made up of particles. Contrary to Aristotle, for whom the world was a continuum with no end to its divisibility, the entire physical world as now known is a collection of particles. But what is a particle? Like most words in most languages, the word “particle” comes from ordinary, daily human experience — in this case marbles, pebbles, or grains of sand. As observed, these particles have definite shape, color, hardness, electric charge, mass, location, and motion. As we, conceptually and experimentally, went down in size to the most fundamental entities, those of which the entire rest of the physical world is constituted — the so-called “elementary particles” — we gave them a smaller set of standard characteristics: mass, electric charge (and other “coupling” strengths to other particles), and spin. Other physical characteristics of matter — shape, color, hardness, etc. — could be the manifestations of the particles acting in concert. And so these elementary particles — of which the universe is thought to be made — were conceived to be tiny balls of definite mass, electrical charge, spin, and size — though to date, no measurements have shown the electron to act as if it were larger than a geometric point.

The “is-ness” of such a ball, tiny or otherwise, is that, at any instant of time, it has a definite location and state of motion (motion is just a coordinated sequence of locations); being at rest is just a particular form of motion. Thus each such particle traverses a definite trajectory through space, having, at each instant of time, a definite position and a definite velocity. And the universe, as we came to know it, is made up of such particles, interacting with each other via various forces — gravitational, electrical, nuclear, etc. Or so we thought until the first quarter of the twentieth century.

By 1900, the electron was thought to be such a particle, uniquely associated with the atoms of matter. It could be dissociated from its atom and fired from an “electron gun” in a vacuum tube, and thus have its mass, spin, and electrical charge accurately measured. When a low-intensity beam was fired at a phosphorescent screen or photographic plate, a collection of individual spots was observed — certainly characteristic of particles. As the intensity of the beam, or the observation time interval, increased, the density of spots would increase, eventually covering the entire screen or plate. Since the electron gun had a known, predetermined accelerating voltage, the electrons hitting the target had a known velocity. And so they had the expected behavior of particles — definite trajectories. Therefore the electron was a particle, and so were all of the other atomic and sub-atomic constituents of matter. And so, by 1905, was a light beam understood to be a stream of particles called “photons”.

However, experimental physics moved on. What would happen if one placed an opaque screen (a thin metal sheet) between the electron gun and the photographic or phosphorescent film, parallel to the latter? As expected, nothing would appear on the film. If a thin, rectangular slit were cut into the blocking screen, a thin line of electron dots would appear on the film, directly behind the slit — again, as expected. With increasing time or beam intensity, the density of dots would increase, eventually becoming a solid image of the slit. The image line on the film would be somewhat thicker and more diffuse than the cut slit; presumably some of the electrons were hitting the edges of the slit and so were slightly deflected from their original straight-line trajectories.

In the next experiment, a second slit was cut into the blocking sheet, parallel and close to the first slit. Instead of there being two line images, one behind each cut (as expected from the two possible trajectories through the blocking sheet), a series of line images was observed! The brightest one appeared on the film directly behind the central obstructed region between the two cuts, a place unreachable by either of the two expected trajectories. The remaining images, evenly spaced lines, got dimmer and dimmer the farther they appeared from the central bright line. The observation certainly contradicted the conception of electrons as particles! The pattern was, however, well known to anyone familiar with the behavior of waves — water or sound — approaching a barrier with two openings in it: a wave phenomenon called “two-slit interference”. Could the electron beam approaching the barrier with the two slits be a wave? But as the beam intensity was greatly diminished, it was noted that the images were built up of a number of individual spots — just as expected for particles.
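
The spacing of those evenly spaced lines is predicted quantitatively by applying wave optics to the electron's de Broglie wavelength (a standard textbook relation, added here for concreteness rather than taken from Saperstein's text):

    \lambda = \frac{h}{p} = \frac{h}{\sqrt{2 m_e e V}}, \qquad \Delta y = \frac{\lambda L}{d},

where V is the accelerating voltage of the electron gun, d is the separation of the two slits, and L is the distance from the slits to the film. For electrons accelerated through 100 volts, the wavelength is about 0.12 nm, which is why the slits must be extremely narrow and closely spaced for the fringes to be resolvable.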

But if they were particles, which of the two possible trajectories — from electron gun, through one or the other of the two slits in the barrier sheet, to the film — did they follow? If the experimenter blocked one of the slits, the observed image was identical to that previously observed in the one-slit experiment. No surprise. If both slits were kept open, but an electron counter was placed behind one of them and the film configured so that it only registered when the counter indicated that an electron had passed through its slit, the resultant pattern on the film was that of the single-slit experiment. The electrons could pass through either slit, but if you could determine which of the two they had actually passed through, the observed pattern was the single-slit pattern! Thus if your observation technique is such as to determine which trajectory the electron follows, it acts like a particle; if the observation is ignorant of which trajectory is followed, the electron acts as a wave. You can no longer say what it is, only what it does in response to your observation.

Similar experiments were repeated many times with electrons and many of the other “elementary particles” (protons, neutrons, etc.) — and similar results were obtained. When only a single trajectory was possible, the “elementary particle” behaved as a particle would be expected to behave. When several paths were open to it, and the path that was chosen was unknown to the observer, it behaved as a wave would be expected to behave. The “superposition” of two or more possible particle paths led to wave-like behavior. Hence we cannot know what an “elementary particle” is, only what it does under specified measurement procedures. The physical “fact” is not an is but a does!

This quantum behavior of the elementary particles — universally accepted by science as the fundamental building blocks of our universe — is certainly peculiar. But there have been other important examples of the transition from is to does in our human world. For example, the biological sciences know of animals, e.g., the chameleon or the octopus, of which it cannot be said what their skin color “is”, since the color or pattern varies with the conditions under which the animal is observed. The skin does change. In human psychology, it is difficult to say what a person’s mood is but relatively easy to say what the mood does under differing interactions with others. In the border area between psychology and philosophy, there is a long-simmering question as to the relation between brain and mind. We know that the brain is; an anatomist can hold the complete brain in one hand; we know a great deal about its constituents, their chemistry, structure, and electrical circuitry. But the mind is only known by what it does. We do not know how it creates a poem or a theory, only that it does.

The transition from is to does does not occur only for facts; it has also occurred for commonly held opinions. For example, many groups of people, in the past and in the present, talk about, and worship, a God that is; e.g., “I, the Lord, am your God” (Exodus 6:7). But the same God was also known via doing: “God answered him in thunder” (Exodus 19:19). This “is” God evolved from the personification of imperfect natural objects, to beings with human-like attributes, to the singular all-knowing, all-powerful, formless God of the later Hebrew Bible. But through this evolution, from Gods to God, the deity was always conceived of as is. However, many in modern theology have abandoned this notion of the deity as a being and opted for the concept of God as the process by which persons are creative and moral, as “the spirit that promotes righteousness in the world”7, i.e., this modern deity does.

And, of course, many among us — some politicians, for example — loudly proclaim what they are (a form of is) but never do (a form of does) what they say.

Whether a fact is or does — and whether a statement is fact or opinion — our knowledge of it is the product of a long chain of observation and rational thought. Denying a fact’s reality may lead to some short-term local gains, but casts doubt on the process by which it was ascertained — rational thought and science. It is this rational process which has proven vital for the well-being and advance of the human world, as well as for the creation of devices and systems which may lead to the obliteration of that world. Just as denial of biological facts has historically led to deadly epidemics and much shortening of life, so denial of physical and social facts has led to much starvation, war, and misery. It is the task of all of us not to deny the facts but to use them for the betterment of our common world.

Footnotes

  1. “White House Pushes ‘Alternative Facts,’” The New York Times, January 22, 2017; see also “Facts vs. Alternative Facts,” Time.com.
  2. https://en.wikipedia.org/wiki/Fact
  3. https://en.wikipedia.org/wiki/Sigmund_Freud
  4. https://en.wikipedia.org/wiki/Opinion
  5. https://en.wikipedia.org/wiki/Elementary_particle
  6. https://en.wikipedia.org/wiki/Quantum_mechanics
  7. Mordecai M. Kaplan, “The Future of the American Jew” (New York, 1949), pp. 381-82.

Marie Curie, a Role Model for Future Scientists of All Ages

By Maury Goodman, Argonne National Laboratory

Photo: Emma Goodman working on a lab set.

The life story of Marie Curie teaches a number of lessons, related both to the pioneering chemistry and physics for which she is famous and to geopolitical and cultural issues in the world at large. As a woman, the young Marie Curie could not obtain a university education in Poland, so despite her patriotic feelings she moved to France in order to study the physics and chemistry that interested her. On top of that, in Russian-controlled Poland there were strong discriminatory policies that affected daily life. And yet, despite these impediments, Curie pursued her passions and left an enduring legacy of scientific impact, an inspiration to many.

Joan Schaeffer, a former elementary school teacher, is someone who took inspiration from Curie’s story and, through a unique linkage between science and the performing arts, founded a theater company called “Historical Perspectives for Children” to share Curie’s story as well as the stories of other impactful people from history. Begun in 1993, the company aims to provide strong character role models for children in grades K-8 by performing one-person plays geared to elementary school auditoriums, with a format which entertains, educates, and engages young students.1 Joan in particular wanted to include strong female role models from STEM fields, and so the life story of Marie Curie was a natural choice. And, as the author’s daughter can attest, a popular choice! The play based on Marie Curie has been performed over 1,000 times, including 75 times by the author’s daughter.

The inevitable question with a venture such as “Historical Perspectives for Children” is how to bridge entertaining and educating. How do you hold a room’s attention while still conveying factual information? How can you explain radioactivity to elementary school children? Use the term and many might get excited, but Spider-Man villains or the song by Imagine Dragons could be the motivation. (See https://www.youtube.com/watch?v=ktvTqknDobU if you’ve missed out on the recent pop-rock music scene.)

Here is how Marie Curie, through Joan’s character, introduces the concept of radioactivity2:

Curie: Do you know what I mean by rays? Some do, some don't? Let me explain…Now I need you all to listen very carefully.

The sun gives off rays, and the rays of the sun give off light and heat. Now, you can't see the rays coming from the sun, can you? But you know they give off light because it's bright outside, right? And you know they give off heat because you can feel the warmth of the sun on your skin, right? Well — Uranium is something like a little sun which is found inside some rocks. The rays from the uranium can go through the rock and make the air around it electrical. I decided to call rocks which gave off these electrical rays "radioactive" and tested other rocks to see if they were radioactive. Some rocks, like this one called pitchblende, gave off very, very strong rays of electricity even though there was only a little amount of uranium inside. I thought there must be something else, a different element inside this rock which was also giving off these electrical rays. I decided to call this new element "radium." Now I had to find it. I had to break down this rock and see the radium, touch it and feel it to prove that radium really existed.

Moreover, to include further scientific content at an age-appropriate level, the performance includes a number of simple chemistry demonstrations. The performers who play Curie present a narrative telling the story of her life while performing chemistry “experiments” behind a desk of beakers and test tubes. The demonstrations are not based on experiments that Curie actually performed during her career, which from a safety standpoint is probably a very good idea. Rather, the experiments provide a setting in which the performer tells the story of Curie’s life while at the same time promoting the exciting mysteries of chemistry and physics.

The youngest students may not be able to grasp the concept of radioactivity, but being introduced to the term plus witnessing a volcano made from baking soda and vinegar, a smoke machine, acetone “dissolving” Styrofoam and dishwashing soap, or even the blowing of large bubbles can motivate a curiosity in math and science. The experiments are designed to be safe and simple enough that interested students could even perform them at home. So, the audience can take away the facts of Curie’s life, struggles, and accomplishments as well as a seed of excitement in science and research and a way to reinforce and nurture that seed at home.

The accompanying photo shows one of the performers, the daughter of the author. In addition to her performances as Marie Curie, she performed another 200 shows as Amelia Earhart. That latter show featured numerous costume changes but did not have comparable STEM content. The reactions of the student audiences differed: questions during the Earhart show focused on historical events and the mystery surrounding her disappearance, whereas most of the questions during the Curie shows were about the science demonstrations, as well as her experiences in Poland and France. Even if more of the students had heard of Amelia Earhart than of Marie Curie before the respective performances, the enthusiasm of the students in the two cases was similar.

As with any enterprise related to science, we search for metrics: how can we measure the success of educational theatre in providing role models for tomorrow’s scientists and citizens? Applause is one standard in the world of theatre. Requests from school administrators for additional performances each year are another. But the ability to keep six- and seven-year-olds enthralled in their seats for 45 minutes is certainly a compelling and inspiring metric, and one which continues to motivate ongoing efforts to communicate and share the passion and excitement about science that Marie Curie felt and that modern-day researchers and educators continue to feel.

References

  1. See the website http://www.historicalperspectives.net/ for the current shows available from this company.
  2. From “MARIE CURIE, SCIENTIST, MOTHER, HUMANITARIAN,” a play in one act for young audiences by Joan Schaeffer.

The War on Science

By Shawn Otto (Milkweed Editions 2016), 514 pages; ISBN: 978-1571313539, $20

In The War on Science, Shawn Otto argues that humanity stands at a critical crossroads, with the survival of our species contingent on which path we choose moving forward. Unfortunately for us, the optimal path is currently obfuscated by the fog of a war being waged perniciously on scientific ideals and the very foundations of democratic society. The forces of antiscience are attacking these hallowed ideals from multiple fronts and without a preferred political persuasion. Left-leaning postmodernist academics elevating subjectivity; journalists overly concerned with preserving a false sense of balance; conservative religious fundamentalists proselytizing their ideologies; and myopic political lobbyists concerned only with their return on investment are among the forces Otto describes broadly as antiscientific. As their influence continues to grow, the Jeffersonian ideal of a well-informed electorate capable of self-governance and a government guided by rational policy decisions becomes ever more unlikely. We have been left with a fractured society in which most citizens, including our political leaders, are incapable of discerning fact from opinion and evidence-based arguments from rhetorical propaganda. According to Otto, our current trajectory is wholly unsustainable and has us quickly approaching a catastrophic precipice.

As it weaves its way through 426 pages of history, social commentary, and analysis, Otto’s book frames and then unpacks this war on science, leaving us with his proposed battle plan to defeat antiscience and reestablish the Jeffersonian ideal. In “Part I: Democracy’s Science Problem,” the author immediately presents us with the dilemma we are facing: “Vast areas of scientific knowledge and the people that work in them are under daily attack in a fierce worldwide war on science.” Furthermore, “political and religious institutions are pushing back against science and reason in a way that is threatening social and economic stability.” Implicitly invoking the ideals of science as universal and inherently antiauthoritarian, Otto explains that while the forces of antiscience come from diverse social groups, they share a common political end. By undercutting science’s legitimacy, these antiscientists cripple its natural capacity to challenge authoritarianism and tyranny, thereby putting democracy and modernity in peril.

In “Part II: The History of Modern Science Politics” Otto traces the entangled history of democracy and science. While many believe that the United States was founded on a core framework of religious ideals, Otto reminds us that the founding fathers were explicit about creating a separation of church and state and insisted on making scientific ideals the DNA of our democracy. In this relatively short yet grand narrative of the history of modern science, Otto gives us many examples of how science is inherently political, claiming that the process of science continuously questions established authority by developing theories and experiments that can be rigorously tested by anyone. Inevitably, this process leads us to knowledge that is independent of personal belief, independently verifiable, universal, and fundamentally objective.

Time and again, Otto weaves in historical anecdotes of scientific accomplishments like the theories of evolution, relativity, and the big bang, in each case showing how science ended up on the opposite side of the battle lines from religious and ideological authoritarians. Then, during World War II, science became a literal weapon against fascism and tyranny as scientists harnessed and weaponized concepts like nuclear energy and radar. As science became indispensable to democratic freedom, it became part of a military-industrial complex that placed it on the front lines of the Cold War. While this made science indispensable and well funded, it also made it partially complicit politically, a key cog in the grand machinery of Cold War technological development and ideological propaganda. As a result of science’s changing political status, scientists began to retreat from wider society, deepening the chasm between C.P. Snow’s two cultures. Today, those who inform the public about science (journalists) and those who make public policy decisions (politicians) are guided by a way of thinking that is antithetical to science. Like lawyers in a courtroom, they begin with a particular worldview or goal and selectively use evidence to build a corroborating narrative. It shouldn’t surprise us, then, that this argumentative rhetorical style has resulted in a polarized and controversial view of science.

In “Part III: The Three-Front War on Science,” Otto takes aim at what he considers the primary agents of antiscience. The first front is made up of the postmodern academic intellectuals and activists who sparked the science wars of the 1990s. Using Thomas Kuhn as a framework, they began launching attacks on scientific authority, objectivity, and the scientific method as early as the mid-1960s. These attacks coincided with scientists’ retreat from society, thereby allowing relativists and social constructivists plenty of room to discredit science, distort journalism, and give authoritarians ammunition on the other two battle fronts. The second front consists of ideologues from various religious and political persuasions who have latched onto the postmodern ideals of subjectivity and relativism, thereby renouncing scientific objectivity and calling into question any scientific fact or theory that challenges their deep-seated value system. Finally, Otto takes aim at industrial lobbyists and public-relations campaigns funded by corporations that are intent on leveraging scientific illiteracy by disseminating misinformation and deception, shaping public opinion, and ensuring public policy decisions that favor their bottom line. As a prime example, Otto presents a thorough analysis of the deception and public manipulation that surround the denial of anthropogenic climate change.

In the final section of the book, “Part IV: Winning the War,” Otto lays out a plan to defeat the antiscientists on all fronts. The crux of his plan begins by resetting how we measure economic reality. Otto claims that the tension between individual freedom and a responsibility for the “commons” is at the center of all our political strife. The myopic accounting practices that fail to recognize externalities like environmental resources as real measurable capital are completely unsustainable. As Otto points out, the threat to our survival is very real and we are running out of time. As a result, he suggests fourteen explicit battle plans. These range in specificity from a general plea to all citizens to get involved and “Do Something” to the creation of a “National Center for Science and Self-Governance” that would explicitly address the eight most pressing vulnerabilities that Otto sees as threatening voters’ ability to be properly informed and self-govern.

My graduate training and subsequent teaching experience in physics, the history of science, and Science, Technology, and Society (STS) studies afford me a perspective that leaves me a bit torn in reading Shawn Otto’s book. On the one hand, I agree with much of Otto’s commentary on our current sociopolitical dilemma and greatly appreciate the thoughtful case studies and analytical insights he brings to our understanding of science and its place in our social fabric. For me, the strength of this book comes after chapter 8. In particular, Otto’s analysis of the “controversy” surrounding anthropogenic climate change is an excellent case study that helps to support his argument. Unfortunately, the framing and setup of this argument are too sprawling and often feel a bit haphazard and superficial. For example, in chapter 8, Otto unpacks the rise of postmodernism and the deepening of the chasm between C.P. Snow’s two cultures. He seems to take particular issue with Kuhn’s work in the history and philosophy of science, yet he significantly misreads Kuhn. Kuhn was trained as a theoretical physicist and absolutely believed in scientific progress and an objective reality. Some subsequent interpretations of Kuhn have extrapolated his paradigmatic concepts and rejected objective reality altogether, but for the most part these are extreme stances. By and large, science studies, as a diverse and multidisciplinary field, should not be conflated with extreme forms of relativism.

This speaks to a more fundamental issue I have with The War on Science. At times throughout the book it feels as though Otto stretches and contorts his narrative to overstate intentional conflicts between science and the outside forces of antiscience. While I don’t doubt that conflicts exist, by grouping people into oversimplified categories Otto leaves us no choice but to count them as enemies of science. A more nuanced understanding of science studies, for example, might instead allow science to gain important allies. Finally, while Otto acknowledges that science sometimes fails to live up to its ideals, he avoids holding it accountable for our current predicament. The framing of Otto’s analysis is “us against them,” and from that oversimplified and polarized perspective we shouldn’t be surprised that the view of science presented is not at all self-critical. Taking a closer look at scientific practice might reveal that some of the responsibility for today’s “War on Science” is rooted in its own internal rhetoric. Einstein once observed that “science as something existing and complete is the most objective thing known to man. But science in the making, science as an end to be pursued, is as subjective and psychologically conditioned as any other branch of human endeavor.” If we learn to grapple with this difficult dissonance in the very DNA of science, we might take important strides towards a lasting peace.

Jose Perillan

Assistant Professor of Physics & Science, Technology & Society

Vassar College

joperillan@vassar.edu

Bombing the Marshall Islands: A Cold War Tragedy

By Keith M. Parsons and Robert A. Zaballa, Cambridge University Press, 2017, 230 pages, paperback

This important and well-written book is the story of the nuclear tests that blasted and poisoned the Marshall Islands, and of the accompanying political and technical results. It also attempts to understand why a program of atmospheric testing was considered necessary despite the knowable risks. The 1946 tests, known as “Crossroads,” were meant to assure the Navy that its fleet could survive a nuclear attack; that assurance was not forthcoming. There followed “Sandstone” (1948) and “Greenhouse” (1951), designed to improve fission bombs. “Ivy” (1952), “Castle” (1954), and other tests were designed to develop fusion devices. Drawing on numerous, sometimes conflicting, sources, Parsons and Zaballa describe the test preparations, explosions, and effects (medical and otherwise) upon test participants and non-participants alike. There are chapters on the cultural impact of fallout (for example, in commercial movies) and on attempts to understand Cold War history and the apparent callousness of the developing U.S. nuclear policy. Unfortunately, no relevant maps are included. Two appendices for the layman, one on the physics of nuclear weapons and the other on the biology of nuclear radiation, provide handy summaries of these two topics.

Al Saperstein

Wayne State University

a_saperstein@wayne.edu

These contributions have not been peer-refereed. They represent solely the view(s) of the author(s) and not necessarily the view of APS.

Physics and Society is the non-peer-reviewed quarterly newsletter of the Forum on Physics and Society, a division of the American Physical Society. It presents letters, commentary, book reviews and articles on the relations of physics and the physics community to government and society. It also carries news of the Forum and provides a medium for Forum members to exchange ideas. Opinions expressed are those of the authors alone and do not necessarily reflect the views of the APS or of the Forum. Contributed articles (up to 2500 words), letters (500 words), commentary (1000 words), reviews (1000 words) and brief news articles are welcome. Send them to the relevant editor by e-mail (preferred) or regular mail.

Editor: Oriol T. Valls. Assistant Editor: Laura Berzak Hopkins. Reviews Editor: Art Hobson. Electronic Media Editor: Tabitha Colter. Editorial Board: Maury Goodman, Richard Wiener, Jeremiah Williams. Layout at APS: Leanne Poteet. Website for APS: webmaster@aps.org.