Newsletters

From the Editor

Oriol T. Valls

The articles in this issue include essays by two recent award winners. Neil Johnson is the recipient of the Forum’s 2018 Burton Forum Award, while Robert Kleinberg is the 2018-19 APS Distinguished Lecturer on the Applications of Physics. I am very grateful that they agreed to contribute to this Newsletter. We also have articles on “Voodoo Physics” and on the social applications of complexity.

I saw some of you in Boston and received many suggestions for this Newsletter. Unfortunately, I won’t be able to make it to Denver.

We are continuing to expand our media presence. Several of the Forum sponsored APS meeting talks were video recorded and the files (or information on how to get them) will be posted at the Forum’s web site. Please send suggestions to our Media Editor, Tabitha Colter.

Contributions from our readership and their friends are always welcome, as are suggestions for invited contributions. Suggestions and articles should be sent to me, except for book reviews, which should go directly to the reviews editor (ahobson@uark.edu). Content is not peer reviewed, and the opinions expressed are the authors’, not necessarily mine nor the Forum’s. We are very open as to what is appropriate. Controversy is good.

Oriol T. Valls
University of Minnesota

FPS Invited Speaker Sessions at April 2019 APS Meeting

(Some of the titles are tentative)

New Energy Technologies & Policies (Session 07, April 15, 10:45 - 12:33 - Chair: Richard Wiener)

  • Dan Kammen (UC Berkeley) - New Energy Technologies and New Energy Policies
  • Adilson Motter (Northwestern) - North American Power-Grid Network: Failures and Opportunities
  • Amory Lovins (Rocky Mountain Institute) - Integrative Design for Radical Energy Efficiency

New Challenges to International Science Collaborations (Session C07, April 13, 13:30 - 15:18 - Chair: Anna Quider)

  • Amy Flatten (APS Director of International Affairs; staff, APS Task Force on Expanding International Engagement) - Long-term Strategic Planning for APS International Activities
  • Bill Colglazier (Science Advisor to the Secretary of State in the Obama administration, now at AAAS Center for Science Diplomacy) - Challenges and Opportunities for International Science Collaboration over the Next Ten Years
  • Karla Hagan (2015 APS Congressional Fellow and current Senior Policy Advisor for science and innovation at the British Embassy in DC) - The US-UK Science Collaboration Landscape: Status and Opportunities for the Future

FPS Prize Session (Session H06, April 14, 10:45 - 12:33 - Chair: Joel Primack)

  • Shirley Jackson (RPI) - Burton Forum Award Speech
  • FPS Luncheon 12:45 - 13:30

Secrecy and Espionage in Science (Session D06, April 13, 15:30 - 17:18, Co-sponsored with FHP - Chair: Paul Cadden-Zimansky, FHP)

  • Alex Wellerstein (Stevens Institute of Technology) - Secrecy and the Control of Science and Technology: Lessons from History and Sociology
  • Audra Wolfe - Freedom's Laboratory: The Cold War Struggle for the Soul of Science
  • Doug O'Reagan (MIT) - Allied Scientific Espionage and the Exploitation of German Technology after the Second World War

Attracting Young People to Science and Science Policy (Session B07, April 13, 10:45 - 12:33, Co-sponsored with FECS - Chair: Kevin Ludwick, FECS)

  • Meredith Drosback (AAAS SciLine, Chair of the APS Congressional Fellow Selection Committee, former APS Congressional Fellow) - Opportunities in Public Engagement: Sharing your scientific expertise with policymakers and the media
  • David Maiullo (Rutgers) - Physics for all: using physics demonstrations to both excite and educate the public in science and science policy
  • Brian Jones (CO State U) - A warm planet in a cold universe: Making climate change concepts accessible (and acceptable) to a wide audience

Detecting and Protecting the Earth from Earth-Crossing Asteroids (Session X05, April 16, 10:45 - 12:33, Co-sponsored with DAP - Chair: David Gerdes, UMich)

  • Amy Mainzer (JPL) - Finding Near Earth Objects from space with NEOWISE and NEOCam
  • Emily Kramer (JPL) - Finding Near Earth Objects from the ground with ZTF and LSST
  • Lindley Johnson (NASA Planetary Defense Office) - Defending Earth from Asteroids

Fellowship and Award Nominations

Please remember that each year the Forum can nominate a certain number of its members to become APS Fellows. Please send suggestions to Joel Primack. Nominees should have made significant contributions both to physics research and to physics-and-society issues.

The Forum also gives two special awards: the Burton Forum Award, “to recognize outstanding contributions to the public understanding or resolution of issues involving the interface of physics and society”, and the Leo Szilard Lectureship Award, “to recognize outstanding accomplishments by physicists in promoting the use of physics for the benefit of society in such areas as the environment, arms control, and science policy”. In addition, the Nicholson Medal was established in 1994 by the Forum and the Division of Plasma Physics "to recognize the humanitarian aspect of physics and physicists". Please see our honors page for details and nomination procedures.

How Physics Helps Lift the Lid on Online Extremism

Neil F. Johnson

Several companies will soon be launching satellites aimed at bringing Internet access to every possible place on the planet. Even before that happens, approximately half the world’s population (~3 billion people) is already using social media, with the dominant platform being Facebook. Each Facebook user is typically a member of more than one Facebook group and a follower of more than one Facebook page. Facebook and its international competitors, such as VKontakte, purposely design their online features to bring people together into relatively tight-knit clusters so that they can focus on some shared interest or purpose, e.g. jazz fans visiting Washington D.C. But such online clusters can be used for bad as well as good. They can bring together people who are against science, e.g. against vaccination programs. They can also serve to aggregate individuals with a potential interest in extremism or hatred against a particular sector of society, e.g. individuals who are anti-immigration or anti-Semitic. There are plenty of recent examples where online narratives helped incite individuals to commit violent acts — from Charlottesville, Parkland, Orlando, Maryland, Washington D.C., Pittsburgh and Tallahassee through to Manchester, London and mainland Europe. Detecting who among the 3 billion online users will ever carry out a real-world attack sounds like looking for a needle in a haystack — but it is actually worse, since prior to any attack each ‘needle’ may be effectively indistinguishable from any other strand of hay.

In Physics terms, this new online world is one of the largest and most complex ‘many-body’ systems in existence. It is far from equilibrium. It has many interacting particle types (i.e. humans, algorithms, bots) that can now interact instantaneously through a complex online web, with interaction strengths that are independent of spatial separation. It features particles (i.e. people) that don’t do the same things under the same circumstances and that are adaptive. And it is a system that is continually being perturbed by a shifting environment of news, rumors, presidential tweets and other world events.

But it is precisely because of this that Physics can provide new insights. Starting with an article in Science in 2016 and continuing in subsequent papers in Physical Review, our research has shown that the key ingredient in the evolution of online extremism lies in the many-body correlations that define these tight-knit online clusters — in particular, the online pages and groups. Though media attention has focused on lone-wolf narratives, and it may very well be that a single individual carries out such an attack, such individuals are likely to have had some prior online exposure to pro-extremist narratives through these clusters (i.e. pages and/or groups). So the correct focus for understanding future attacks likely lies in these cluster dynamics. After all, any physicist knows that it would be wrong to pin the boiling of water on what a single water molecule is doing, or on isolated molecules scattered across the system. Instead the answer lies in their many-body behavior, specifically the clustering of correlations. In the everyday world, taking apart every single car on the planet would never help you explain how clusters of drivers interact to cause traffic jams, or why traffic jams emerge universally in large cities.

Indeed, this message about the importance of developing a physics perspective on this problem now seems to be resonating outside of physics, reaching other disciplines and also policy-makers. Despite the tendency of the media to focus on ‘lone wolf’ actors, social scientists are now coming around to this collective view (see B. Schuurman et al., End of the Lone Wolf: The Typology that Should Not Have Been, Studies in Conflict & Terrorism (2018), DOI: 10.1080/1057610X.2017.1419554).

Our journey along this research path started in 2014, when we set out to study the many-body dynamics of pro-ISIS online support. We found that Facebook rapidly shuts down such pro-ISIS groups, but its overseas competitors can be slower to act, probably because doing so requires significant resources and time. The most important among these is VKontakte, which has more than 350 million users spread across the world and is physically based in St. Petersburg, Russia, closer to ISIS’ major area of operations. Our study of freely available, open-source information on VKontakte between January 1 and August 31, 2015 revealed an ultrafast ecology of 196 pro-ISIS groups [1] that shared operational information and propaganda among 108,086 individual followers. Although these online groups are typically shut down by online moderators within a few weeks of being created, we found that their members would simply go on to form another online group or join an existing one that was still evading shutdown. Remarkably, all of this information is freely available: these online groups need to attract newcomers and recruits, and hence their need for openness tends to outweigh any risk of capture. There had been competing research focused on analyzing extremism through messaging on Twitter, with the aim of identifying influential online individuals. However, such individual-level approaches met with only limited success from a security perspective, in part because removing the individual ranked No. 1 from any extremist network automatically leads to the individual ranked No. 2 becoming No. 1, the individual ranked No. 3 becoming No. 2, and so on.

Membership of these pro-ISIS online clusters changed on a daily timescale during our study. On the most active day, the total number of follower links reached 134,857, since individual followers can become members of many separate groups. This process of data collection, analysis and modeling provided us with a living road map of online pro-ISIS activity. The high resolution of our data also meant that this study moved beyond the network science field’s current focus on identifying group structure in time-aggregated networks. Instead, we could see followers’ behavior in real time, down to a timescale on the order of seconds. It also moved the understanding of human dynamics beyond the current focus on quasi-static links related to family or long-term friends, toward operationally relevant dynamical interactions.

We were surprised to see that the evolution of this online group ecosystem resembled dynamical processes that had been observed in physics (e.g. polymers). However, unlike physical systems where individual units might break off from a group of molecules, or a group of molecules might break into a few pieces, we found that the fragmentation of these online groups is like a shattering process reflecting the sudden moderator shutdown of an online group. Upon deeper analysis, we noticed that the evolution of this online group ecosystem follows a rather precise mathematical form. As the size — i.e. the number of members — of each online group evolves over time, it produces a shark-fin shape as shown in the figure. It is the same shark-fin shape we find in the natural sciences when groups of interacting objects (particles, animals) follow a process of so-called ‘coalescence and fragmentation’. In other words, these online groups of ISIS supporters come together (coalescence) and break up (fragmentation) like fish in schools or birds in a flock might. There’s one difference, though. When they break up, they fragment completely because some external, anti-ISIS entity or online moderator has shut them down. That’s why you see the abrupt drop-off like the edge of a shark fin. This identification of a specific process governing the ecology of these online groups enabled us to write down a set of coupled mathematical equations describing their evolution (see figure). Solving these equations yields a distribution of group sizes which is essentially the same as that observed in the data, and reproduces their characteristic shark-fin shapes in time. We also identified new evolutionary adaptations that these pro-ISIS online groups have managed to invent and adopt. Some may go invisible for a while, and also occasionally reincarnate, appearing at a later date with a different identity and yet managing to retain most of their members. So just as Darwin predicted for biological evolution, pro-ISIS support has adapted to exploit features afforded by its new online environment (i.e. social media websites) in order to survive longer.
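
For readers who want a feel for this mechanism, here is a minimal simulation sketch of the coalescence-and-fragmentation process described above, written in Python. All parameter values and names are illustrative choices, not the research code behind the cited papers.

    import random
    from collections import Counter

    def simulate(n_agents=1000, steps=20_000, frag_prob=0.05, seed=1):
        """Toy coalescence-and-fragmentation process (illustrative parameters).

        Each step: with probability frag_prob, a cluster picked with
        probability proportional to its size shatters back into isolated
        agents (the abrupt edge of the 'shark fin'); otherwise two
        size-weighted clusters merge. Returns size statistics over time.
        """
        random.seed(seed)
        clusters = [1] * n_agents              # start from isolated agents
        size_counts = Counter()
        for t in range(steps):
            if random.random() < frag_prob:
                i = random.choices(range(len(clusters)), weights=clusters)[0]
                s = clusters.pop(i)
                clusters.extend([1] * s)       # total shattering, not splitting
            else:
                i, j = random.choices(range(len(clusters)), weights=clusters, k=2)
                if i != j:                     # merge two distinct clusters
                    a, b = max(i, j), min(i, j)
                    clusters[b] += clusters.pop(a)
            if t % 100 == 0:                   # sample the size distribution
                size_counts.update(clusters)
        return size_counts

Tracking a single cluster through such a run reproduces the shark-fin profile: gradual, accelerating growth ended by an instantaneous collapse. In steady state, the sampled size distribution approaches the power law (exponent near 5/2) that mean-field theory predicts for this class of models.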

There are many practical consequences of these findings. Identification of the online group coalescence-fragmentation mechanism suggests that anti-ISIS agencies can step in and break up small online groups before they develop into larger, potentially powerful ones. If anti-ISIS agencies aren’t active enough in their countermeasures, pro-ISIS support will quickly grow from a number of smaller online groups into one super-group. It also warns that if online-group shutdown rates drop below a certain critical value [2], any piece of pro-ISIS material will then be able to spread globally across the Internet — ultimately leading to an Internet arms race. Moreover, we find that the birth-rate of these online groups escalates in a particular way ahead of real-world mass onslaughts, just as clusters of correlations begin to proliferate ahead of a phase transition in a physical system, such as water boiling — except that this is now a dynamical phase transition in time. The important role of these online groups also ties in nicely with earlier work that we did on guilds in the massively multiplayer online game World of Warcraft [3]. Furthermore, it means that instead of having to sift through millions of Internet users and track specific individuals through controversial profiling techniques, an anti-ISIS agency can usefully shift its focus toward open-source information on the relatively small number of online groups in order to gauge what is happening in terms of hard-core global ISIS support. As for the future, even if pro-ISIS support moves onto the dark net where open access is not possible, or if a new entity beyond ISIS emerges, these findings should still apply, since they appear to capture a basic process of human collective behavior. Independent of cause, we can expect the same types of many-body coalescence-fragmentation phenomena to arise.

For the next steps in this work, we are now carefully teasing apart the composition of these online groups, to find what differences they exhibit apart from their size — and what might make certain groups more influential than others. As part of this study, we have begun to look at characteristics such as gender. This is relevant since we find that about 40 percent of all users in these online groups declare themselves as women. The role of women in extremist activity has become of particular interest recently. For example, in 2016, three women were arrested in Paris for attempting to detonate a car bomb outside Notre Dame Cathedral. “If at first it appeared that women were confined to family and domestic chores by the Daesh terrorist organization, it must be noted that this view is now completely outdated,” François Molins, a French prosecutor, told reporters in announcing the arrests. Molins used the French term in referring to ISIS.

We were surprised to find that in the online pro-ISIS groups, the women tend to act as a far stronger “glue” than the men in terms of holding the network together, despite women being in the minority [4]. In the language of social networks, this means the women possess a “higher betweenness.” They provide a disproportionately richer resource for conflict resolution within the network, as well as better conduits for propaganda, financing and operational information. In plain terms, the women effectively hold the key to the flow of information, ideas and material between members within the group.

The following diagram shows what having “high betweenness” means for a representative portion of these networks in which men are the majority (M1, M2, M3 and M4) and there is one woman (W). Remove anyone from this portion, other than the woman, and all other members still remain connected. But remove the woman and this portion becomes fragmented. Thus, if you are a man in the network, you could not possibly reach all the other men without the woman being present. The connections between the men, who form the majority population, therefore, rely on the women — who are in the minority.
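
To make the betweenness claim concrete, here is a minimal sketch in Python using the networkx library. The edge list is an assumption consistent with the description above (the figure itself is not reproduced here): two internally linked pairs of men, bridged only by the woman.

    import networkx as nx

    # One topology consistent with the description: M1-M2 and M3-M4 are
    # linked pairs, and W is the only bridge between the two pairs.
    G = nx.Graph([("W", "M1"), ("W", "M2"), ("W", "M3"), ("W", "M4"),
                  ("M1", "M2"), ("M3", "M4")])

    print(nx.betweenness_centrality(G))
    # W scores ~0.67 and every man scores 0.0: W lies on all shortest
    # paths between the two pairs.

    G.remove_node("W")                        # remove the woman...
    print(nx.number_connected_components(G))  # ...and the network splits in 2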

Specifically, our results suggest that any given woman will, on average, be a conduit for at least twice as many pieces of information, know-how and materials as a man. We have also found that the women simultaneously manage to maintain fairly low profiles. This turns out to be favorable for survival, given the risks involved in such extreme activities. The lifetime, or resilience, of an online group of pro-ISIS followers faced with continual shutdowns by the online moderators tends to increase as the ratio of women to men increases. Such a result is consistent with women’s tendency to be better embedded in the network. One practical consequence of our findings is that a sensible way of dealing with a terrorist network would be to engage with the women involved. This is true even if the women are in the minority and may not currently be deemed key figures.

Of course, much remains to be done. Every day, there are undoubtedly individuals online developing the intent and capability to carry out further violent attacks. So how might a many-body physics theory help detect them before they act? Suppose you meet someone at your university and are interested in knowing the next step in their career. Instead of asking them their current thoughts, and getting a potentially vague answer since they themselves may not yet know, you could simply ask them what courses they have taken so far. This tells you the spectrum of things they have been exposed to, and hence lets you narrow down what job they will likely end up in — perhaps better than they themselves could at that stage. In an analogous way, such generalized many-body physics models, in the hands of security specialists, could play a similar role for terrorism, extremism and hate by revealing which individuals have passed through which groups, and hence which likely have the necessary intent and capability. Certainly not a perfect solution, and definitely unconventional, but arguably better than waiting until they do something.

References:

[1] N.F. Johnson et al. “New online ecology of adversarial aggregates” Science 352, 1459 (2016)

[2] Z. Zhao et al. “Effect of social group dynamics on contagion”, Physical Review E 81, 056107 (2010)

[3] N.F. Johnson et al. “Human group formation in online guilds and offline gangs driven by a common team”, Physical Review E 79, 066117 (2009)

[4] P. Manrique et al. “Women's connectivity in extreme networks”, Science Advances 2, e1501742 (2016)

Figure: Example of the online group size (i.e. number of members in an aggregate of users) as time increases, for three example online groups (i.e. clusters). Below it is the equation that describes these online cluster dynamics within a so-called mean-field approximation.

Figure: Example of a portion of the network showing one woman (W) and four men (M1, M2, M3 and M4). The woman has a higher betweenness than any of the men, which means that she acts like the glue holding the network together.

The Transition from Academic Physics to the Wider World

Robert L. Kleinberg, Schlumberger

Introduction
It is universally acknowledged that strong educational systems are among the principal foundations of modern societies. Science education is particularly prized, as it is perceived to be intimately connected to the technological progress that is a hallmark of prosperous nations with improving standards of living.

In many cases, the connection between academic science and the economic engines of the broader society is reasonably clear. The paths from academic studies of semiconductor physics or polymer physics to corresponding industrial enterprises are reasonably direct. Yet one must ask: how exactly do studies of high energy physics or astrophysics connect to the concerns of the wider society? No one would propose that we confine physics education to fields viewed as practical. Yet we owe it to physics students to give them some idea of how the hard work of undergraduate and graduate physics connects to their future in the outside world.

As the 2018 - 2019 American Physical Society Distinguished Lecturer on the Applications of Physics, I have had the opportunity to visit a wide variety of academic physics departments. Faculty head-counts have ranged from six to well over sixty, with proportionate populations of undergraduate physics majors and physics graduate students. The most rewarding aspect of these visits has been roundtable discussions with students. I was looking for "directionless" students, and I found no shortage of them. 

Not that the students I talked to were disillusioned with physics or aimless in their studies. Far from it. They talked about their coursework and research projects with evident enthusiasm. But many had only a vague notion of what they might do with their education once they had finished their studies. This is not surprising. According to a recent letter to Physics Today [1], a professor at a major research university supervises an average of fifteen Ph.D. students over the course of a career, far above the replacement rate. Between 2002 and 2016 the number of physics faculty hired by all degree-granting schools varied between 230 and 360 per year [2], a period during which between 1100 and 1850 physics Ph.D. degrees per year were granted by American universities [3]. Thus although about half of students express an interest in an academic career, only about 20% will be able to pursue one (roughly 300 hires per year against roughly 1500 new Ph.D.s).

For the other 80%, the course is far from clear. It is surprising but true that many physics students have never met a physicist outside of the university environment. We often speak of the lack of role models for women and minorities in physics, and indeed this is a real concern. But the fact is that few physics students, male or female, majority or minority, have role models that help them envision how their careers might evolve in the future.

The APS Distinguished Lecturer program is a small but helpful step toward mitigating this problem. Each lecturer decides what message he or she will carry forth during a one-year tenure. One recent lecturer highlighted the fragile mental health status of physics graduate students [4], a startling thesis that nonetheless has the ring of truth.

My approach was more conservative: a fully worked example of how one Ph.D. physicist made the transition from studying the exotic physics of superfluid helium-3, which exists below three millikelvin, to designing geophysical instrumentation that could survive and operate ten kilometers below the surface at temperatures as high as 175 °C. Hence the title: “mK to km: How Millikelvin Physics is Reused to Explore the Earth Kilometers Below the Surface”.

One must first ask why I made this career choice. I certainly did not go to graduate school with the intent of learning to design geophysical instrumentation. However, my years in graduate school coincided with the oil embargoes of the 1970s. The trauma of long lines of automobiles outside of gasoline stations generated pressure for improvement and innovation in the provision of energy.

Such was the milieu when I finished graduate school in 1978. At forty years’ remove, it may seem strange that increasing the supply of fossil fuels was widely seen as an indispensable element of continued national security and economic well-being. But the study of the effect of fossil fuel emissions on climate was in an early phase, and advances in renewable energy and energy storage were still in the future. So I joined the petroleum industry, with the explicit goal of finding ways to make the extraction of oil from the earth more efficient.

Surprisingly, my graduate school training in millikelvin physics stood me in good stead in achieving this goal. To illustrate my transition from esoteric academic studies of superfluidity in liquid helium-3 to very practical inventions used to improve oil production, I present two case studies: one device that measures which way the wind was blowing ten million years ago, and another that measures the sizes of micrometer-scale interstices in porous media.

Electrical Conductivity Imaging of the Borehole
As will be shown below, the problem of measuring the direction of paleoclimatic winds reduces to a simpler and more generally useful measurement: the magnitude (“dip”) and orientation (“strike”) of layers of rock in the subsurface. Indeed, geometric measurement of sedimentary layers within the earth is fundamental to the delineation of oil and gas reservoirs in the subsurface.

Figure 1 is a cartoon of an archetypal oil and gas reservoir. Oil and gas originate deep within the earth, where organic matter, co-deposited with inorganic sediments, is thermally converted to light hydrocarbons. These hydrocarbons are less dense than water, which is ubiquitous in the subsurface, and therefore rise through the permeable rock column toward the surface. In favorable circumstances, they are trapped under a dome of impermeable cap rock. Otherwise they continue to the surface where they appear as oil or gas seeps, and are quickly oxidized in the atmosphere or ocean.

A principal goal of exploration geophysics is to find these domes. Then an exploratory well is drilled, but these wells often drill into underlying water-filled rock, as suggested by the figure. In order to drill into oil and gas deposits, subsequent wells must be drilled toward the top of the structure, determined by measuring the dip and strike of layers of sedimentary rock, shown as black curves in Figure 1.

Geologists routinely measure dip and strike at rock outcrops, but it is daunting to do so from within a 20 cm diameter borehole, several kilometers inside the earth, with instrumentation confined to the interior of a pressure housing able to withstand hydrostatic pressures of 140 MPa. Rather than putting a sample into a machine, the usual physics modality, the machine must be put into the sample, which is in fact the earth. Moreover, to achieve commercial viability, borehole measurement apparatus must satisfy a long list of engineering requirements, outlined in Box 1.

Few physics students leave their undergraduate or graduate education having had experience in building borehole instrumentation. Nonetheless, the recent graduate is far from helpless. Measuring the geometry of rock layers in a borehole depends on finding physical properties that are likely to vary from layer to layer. At outcrops, visual clues such as color and texture are often adequate. In the borehole, properties such as the speed of sound or electrical conductivity are likely to be more useful [5]. The electrical conductivity of fluid-saturated sedimentary rocks commonly varies between 0.001 S/m and 1 S/m, thus making conductivity an interesting measurement target.

Then the question becomes how to measure electrical conductivity outside of a pressure housing at 175 °C. I took as inspiration the apparatus I built to measure the temperature of helium-3 below three millikelvin, shown in Figure 2 [6]. At left is an expanded view of the heat flow tower, within which is a 10 mg pellet of cerium magnesium nitrate (CMN), a paramagnetic salt used to measure the temperature of the helium. CMN obeys the Curie-Weiss Law, which relates paramagnetic susceptibility to absolute temperature with two calibration constants. The susceptibility is measured by positioning the pellet in one leg of a balanced-secondary mutual inductance coil set. The in-phase signal is proportional to susceptibility; the quadrature signal is rejected by phase sensitive detection.
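
For reference, the Curie-Weiss law in its standard form (a textbook result, not specific to this apparatus) is

    χ = C / (T − Δ),

where the Curie constant C and the Weiss constant Δ are the two calibration constants mentioned above; a measurement of the susceptibility χ therefore yields the absolute temperature T directly.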

To make an electrical conductivity measurement of rock surrounding a borehole, I turned the mutual induction concept inside out [7], see Figure 3. A single-turn primary loop generates a primary magnetic field which drives currents in the rock formation. These currents in turn generate a secondary magnetic field with unequal fluxes in the single-turn receiver loops, which are symmetrical with respect to the primary coil but not with respect to ground currents in the rock formation. The net quadrature signal is directly proportional to conductivity. The in-phase signal, which includes the magnetic susceptibility of the formation, is uninteresting in this context and is rejected.

In order to withstand the rigors of field deployment, the sensor had to be enclosed in a metal housing, with the side of the sensor closest to the borehole wall protected by a tough non-conductive ceramic face plate, see Figure 4. Metals generally have conductivities in the range of 10⁷ S/m, many orders of magnitude greater than the most conductive salt-water-saturated rock. A naïve expectation might be that signal from the metal would overwhelm the signal from the rock formation, but this is not the case. The metal acts as a mirror, contributing nothing to the loss (conductivity) signal, but unbalancing the primary magnetic flux threading the two receiver loops. In order to restore the balance, the spacings between the primary and individual secondary loops are adjusted minutely. Residual imbalance is included in the discarded in-phase (permeability) signal.

The hardware shown in Figure 4 is clearly not waterproof, and fails to satisfy many other specifications included in Box 1. Nonetheless, it is an electromagnetically correct model, was used in a battery of laboratory tests, and showed that there were no barriers to the construction of a field-worthy instrument.

The story thus far has been told from the point of view of the experimental physicist, but theory and mathematical modeling played equally important roles. Alternative designs are evaluated, dimensions and materials optimized, and performance predicted more quickly and easily on the computer than in hardware [8].

To measure dip and strike from within a borehole, four such sensors are mounted on spring loaded arms that press the sensors against the borehole wall at 90º azimuths, as shown in Figure 5. The arms, mounted on a central mandrel, are pulled up the borehole on multiconductor cable, which transmits power and telemetry.

The measurement method is sketched in Figure 6. The boundary between dipping rock beds, having contrasting electrical conductivities σ1 and σ2, is detected by the four sensors at different depths in the borehole. The four electrical records of conductivity vs depth (of which only two are shown here) are compared to determine the dip (angle from horizontal) and strike (the azimuth of the dipping rock body, assumed to be planar).
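
To make the geometry concrete, here is a minimal sketch in Python. The assumptions of a vertical borehole and a planar bed boundary, and the function itself, are simplifications of mine, not the tool's actual processing chain. Opposing sensors sit one borehole diameter apart, so the depth offset at which they cross the same boundary gives one component of the boundary's depth gradient; the two orthogonal sensor pairs together determine dip and strike.

    import math

    def dip_and_strike(dz_ns, dz_ew, diameter):
        """Idealized dip and strike from a four-arm tool.

        dz_ns, dz_ew: depth offsets (same units as diameter) at which the
        north/south and east/west sensor pairs cross one bed boundary.
        Returns (dip in degrees from horizontal, strike azimuth in degrees).
        """
        # components of the boundary's depth gradient per unit horizontal distance
        gy = dz_ns / diameter
        gx = dz_ew / diameter
        dip = math.degrees(math.atan(math.hypot(gx, gy)))
        dip_azimuth = math.degrees(math.atan2(gx, gy)) % 360.0
        strike = (dip_azimuth + 90.0) % 360.0  # strike is perpendicular to dip
        return dip, strike

    # Example: a 10 cm depth offset north-south across a 20 cm borehole,
    # and none east-west, implies a bed dipping ~26.6 degrees.
    print(dip_and_strike(0.10, 0.0, 0.20))    # -> (26.56..., 90.0)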

The direction of the wind is ephemeral, and would seem to be an unlikely product of the measurement of rock properties deep inside the earth. Nonetheless, wind direction is encoded in the internal structure of sand dunes. Sand is blown up the windward side of the dune, leading to periodic avalanches on the leeward side, as shown in Figure 7. Each avalanche forms a distinct layer, which given appropriate environmental conditions can be fossilized. Eventually, the dune field can be preserved by burial. Thus the prevailing direction of the wind is encoded in the subtle layering of the dune, corresponding to the down-dip direction of the rock layers.

Once covered by cap rock, dunes make excellent oil reservoirs, due to their high porosity and permeability to fluid flow. Because dune fields tend to elongate in the direction of the wind, once oil is found in a sand dune environment, it is a better bet to drill the next well upwind or downwind, rather than across the wind direction. This method has been found to have commercial value [9].

Pore Size Distribution of Sedimentary Rock
Despite suggestive nomenclature, and the impression some might have from exposure to children’s fiction [10], oil reservoirs are not large subterranean lakes. Oil and gas are found in micrometer-scale interstices of sedimentary (more rarely, igneous or metamorphic) rock formations. Figure 8 is a micrograph of a quarried rock, similar to rock found in very high grade petroleum reservoirs. One property of an excellent reservoir rock is a substantial volume fraction of pore space (“porosity”). Good reservoirs usually have porosities around 0.3, i.e. 30% of the rock is available to be filled with fluids. The other desirable property is permeability to fluid flow. Permeabilities of oil and gas reservoirs range over eight orders of magnitude [11]; adequate flow rate is essential to economic viability.

Roughly speaking, the permeability is proportional to the square of pore size. A number of empirical and semi-empirical correlations have been established [12], all of which require some knowledge of pore size. Unless one brings samples of rock to the surface for laboratory investigation – an expensive and time-consuming process – pore size is very difficult to estimate from standard borehole measurement methods. Nuclear magnetic resonance has proved to be the most reliable method of determining pore size.
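
One widely quoted correlation of this type is the so-called SDR equation, given here for orientation (the coefficient must be fitted for each formation):

    k ≈ a φ⁴ (T2LM)²,

where k is permeability, φ is porosity, T2LM is the logarithmic-mean NMR relaxation time (a proxy for pore size, as explained below), and a is an empirical constant.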

At the simplest level, the connection between magnetic resonance and pore size is illustrated in Figure 9. Panel 1: A static magnetic field, B0, polarizes nuclei in fluid molecules residing in porous media. Panel 2: A pulse of oscillating magnetic field, B1, reorients the nuclear spins into the plane perpendicular to B0. Panel 3: The orientation of a nuclear spin is largely unaffected by rotational and translational motions of the molecule in which it resides. Panel 4: Upon contact with a magnetic ion on a solid grain surface, the spin can relax back to its low-energy state parallel to B0. The smaller the pore, the less time it takes for a fluid molecule to diffuse to a grain surface. A quantitative treatment of this phenomenon shows that the nuclear magnetic relaxation rate is proportional to the local surface-to-volume ratio of the pore space [12].
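
Written out, the standard fast-diffusion result for a single pore is

    1/T2 ≈ ρ2 (S/V),

where T2 is the transverse relaxation time, ρ2 is the surface relaxivity of the grain surfaces, and S/V is that pore's surface-to-volume ratio [12]. Because a rock contains pores of many sizes, the measured magnetization decay is a sum of exponentials; inverting this multi-exponential decay yields the distribution of S/V, i.e. the pore size distribution.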

Thus the magnetic resonance measurement requires a constant static magnetic field, B0, and a pulsed oscillating magnetic field, B1, perpendicular to B0. These conditions are easily satisfied in the laboratory, where the sample, which might be a human being, is inserted into the apparatus. It is less obvious how to implement the measurement when the sample is external to the sources of the fields, particularly because the signal to noise ratio is proportional to the square of B0, and this field drops off rapidly with distance from the source.

This problem has been solved in a number of different ways [13]. The solution I and my colleagues found is shown in Figure 10. The apparent simplicity of the design belies the myriad complications encountered in its development [14,15].

Because spectrometers operating at the sensor’s operating frequency of 2 MHz were not commercially available, one was designed and built as part of the project, incorporating a number of novel circuits [16]. Low signal to noise, strict performance specifications, and the necessity of suppressing spin dynamics errors forced us to invent a new pulse sequence [17]. Significant theoretical and experimental efforts were needed to understand the magnetic resonance properties of fluids in porous rock, culminating in a comprehensive microscopic theory [18,19].

More than thirty years after its invention, the borehole nuclear magnetic resonance measurement continues to prove its utility in ways the inventors could not have foreseen. For example, it has been found to be useful for other applications where conditions excluded the use of conventional laboratory equipment. One such application was an assay of methane hydrate generated in porous rock at the seafloor offshore Monterey, California. For this purpose, modified borehole NMR apparatus was mated with a remotely operated submersible vehicle [20,21], see Figure 11. In another deployment, the apparatus was used to characterize permafrost on the Arctic coast of Alaska [22].

Conclusion
Few physics students are able to follow in the footsteps of their academic advisors, but few have a clear idea of the opportunities open to physicists elsewhere. Physicists are not usually hired to continue their thesis work, and are not generally hired to perform routine technical tasks. They are expected to be innovators. Indeed the physicist’s tool box is a rich one, including theory, mathematical modeling, instrument design, and data processing.

Dynamic economic systems respond to societal needs by confronting and solving problems. Incumbents ramp up research and development efforts [23] while entrepreneurs (often physicists themselves) start new enterprises. In such circumstances, there is rarely an adequate supply of experienced workers whose skills exactly match emerging needs. Employers look for smart, adaptable candidates who can bring technical skills and an instinct for innovation to a novel set of problems. Innovation starts at the employment interview, where the candidate needs to show how his or her experience, often in a very esoteric field of study, intersects with the priorities of the prospective employer.

Historically, physicists have become leaders in fields seemingly remote from their textbooks, calculations, and labs. Molecular biology was revolutionized by physicists [24,25], and physicists have taken a leading role in bringing quantitative methods to financial markets [26]. Neither biology nor finance is a standard element of the physics curriculum. Today’s world does not just reward adaptability, it demands it, and few professions provide better preparation for the challenges of the future — whatever they may be — than physics.

Acknowledgements
I owe an enormous debt of gratitude to my colleagues and collaborators over the years. Here I only cite those with whom I worked on the projects mentioned in this article. At the University of California, San Diego: Richard Johnson, Richard Webb, and John C. Wheatley. At Schlumberger: Douglas Griffin, Weng Cho Chew, Brian Clark, Apo Sezginer, Masafumi Fukuhara, Lawrence Latour, Partha Mitra, Suhail Farooqui, Imelda Foley, Charles Flaum, and Christian Straley. At the Monterey Bay Aquarium Research Institute: Peter Brewer, George Malby, and James Yesinowski.

References

  1. T. Christensen, “Rebutting Remarks on Feynman and Wheeler”, Physics Today 71(9), 1-13 (2018)
  2. American Institute of Physics, Number of Faculty Hired by Physics Departments, Fall 2017 https://www.aip.org/sites/default/files/statistics/physics-trends/fall17-faculty-hired-p.pdf
  3. American Institute of Physics, Number of Doctorates Earned in Physics, Classes 1972 through 2017, 15 October 2018 https://www.aip.org/statistics/data-graphics/number-doctorates-earned-physics-classes-1972-through-2017
  4. R. Kleinberg, R. Tromp, T. Brintlinger, C. Bailey, The APS Distinguished Lectureship on the Applications of Physics, APS News, January 2018 https://www.aps.org/publications/apsnews/201801/lectureship.cfm
  5. D.V. Ellis, J.M. Singer, Well Logging for Earth Scientists. Berlin: Springer (2007)
  6. R.T. Johnson, R.L. Kleinberg, R.A. Webb, J.C. Wheatley, Journal of Low Temperature Physics, 18, 501-517 (1975)
  7. R.L. Kleinberg, W.C. Chew, D.D. Griffin, "Noncontacting Electrical Conductivity Sensor for Remote, Hostile Environments", IEEE Transactions on Instrumentation and Measurement, 38, 22-26 (1989)
  8. W.C. Chew, R.L. Kleinberg, Theory of Microinduction Measurements, IEEE Transactions on Geoscience and Remote Sensing, 26, 707-718 (1988)
  9. Schlumberger, Oil-Base Mud Dipmeter Tool, Publication M-090195, June 1987
  10. J. Verne, A Journey to the Center of the Earth, Classics Illustrated No. 138, Gilberton (1957)
  11. R.L. Kleinberg, J. Boak, Unconventional Fossil Fuels Nomenclature, Oklahoma Geology Notes, 77 (3) 20-25 (July 2017) http://ou.edu/content/dam/ogs/documents/geologynotes/GN-V77N3.pdf
  12. R.L. Kleinberg, Nuclear Magnetic Resonance, in P.-z. Wong, ed., Experimental Methods in the Physical Sciences, Volume 35: Methods in the Physics of Porous Media, Academic Press, 1999
  13. Special Issue: The History of NMR Well Logging, Concepts in Magnetic Resonance, 13(6), 340-411 (2001)
  14. R.L. Kleinberg, A Sezginer, D.D. Griffin, M. Fukuhara, Novel NMR Apparatus for Investigating an External Sample, Journal of Magnetic Resonance, 97, 466 (1992)
  15. A. Sezginer, D.D. Griffin, R.L. Kleinberg, M. Fukuhara, D.G. Dudley, RF Sensor of a Novel NMR Apparatus, Journal of Electromagnetic Waves and Applications, 7, 13 (1993)
  16. D.D. Griffin, R.L. Kleinberg, M. Fukuhara, Low Frequency NMR Spectrometer, Measurement Science and Technology, 4, 968 (1993)
  17. A. Sezginer, R.L. Kleinberg, M. Fukuhara, L.L. Latour, "Very Rapid Simultaneous Measurement of Nuclear Magnetic Resonance Spin-Lattice Relaxation Time and Spin-Spin Relaxation Time", Journal of Magnetic Resonance, 92, 504 (1991)
  18. R.L. Kleinberg, W.E. Kenyon, P.P. Mitra, "Mechanism of NMR Relaxation of Fluids in Rocks", Journal of Magnetic Resonance, A108, 206 (1994)
  19. I. Foley, S.A. Farooqui, R.L. Kleinberg, "Effect of Paramagnetic Ions on NMR Relaxation of Fluids at Solid Surfaces", Journal of Magnetic Resonance A123, 95 (1996)
  20. R.L. Kleinberg, C. Flaum, C. Straley, P.G. Brewer, G.E. Malby, E.T. Peltzer, G. Friederich and J.P. Yesinowski, Seafloor nuclear magnetic resonance assay of methane hydrate in sediment and rock, Journal of Geophysical Research 108(B3): 2137 (2003); doi:10.1029/2001JB000919
  21. R.L. Kleinberg, C. Flaum, D.D. Griffin, P.G. Brewer, G.E. Malby, E.T. Peltzer and J.P. Yesinowski, Deep sea NMR: Methane hydrate growth habit in porous media and its relationship to hydraulic permeability, deposit accumulation, and submarine slope stability, Journal of Geophysical Research 108(B10): 2508 (2003); doi:10.1029/2003JB002389
  22. R.L. Kleinberg and D.D. Griffin, NMR measurements of permafrost: Unfrozen water assay, pore scale distribution of ice, and hydraulic permeability of sediments, Cold Regions Science and Technology 42, 63-77 (2005)
  23. R.L. Kleinberg, M.N. Fagan, Business Cycles and Innovation Cycles in the U.S. Upstream Oil & Gas Industry, submitted for publication
  24. R. Holliday, Physics and the origins of molecular biology, Journal of Genetics, 85, 93-97 (2006)
  25. A. Jogalekar, Physicists in Biology; And Other Quirks of the Genomic Age, The Curious Wavefunction Blog, Scientific American (2012) https://blogs.scientificamerican.com/the-curious-wavefunction/physicists-in-biology-and-other-quirks-of-the-genomic-age/
  26. Anonymous, Net gains, Nature Physics 9, 119 (2013) https://doi.org/10.1038/nphys2588, https://rdcu.be/bfHRL

Voodoo Fusion Energy

Daniel L. Jassby, Princeton Plasma Physics Lab (ret.)

“It is much easier to fool people than to convince them that they have been fooled.” - Mark Twain

Modern Fusion Fantasies
During the last decade a host of fusion energy "startups" have captured the attention of the technology press and blogosphere. These startups promise to develop practical fusion electric power generators in 5 to 15 years, and incidentally will achieve ITER's planned performance in a fraction of the time at 1% of the cost. With few exceptions, journalists have accepted these claims without criticism and propagated them with enthusiasm. 

But these projects are nothing more than modern-day versions of Ronald Richter's arc discharges of 1948-54, the inaugural fusion energy brouhaha [1]. Just as Richter's contraption could not generate a single fusion reaction, none of the current projects has given evidence of more than token fusion-neutron production, if any at all.

It was principally the absence of neutron emission that doomed claims of “cold fusion”, so why should more elaborate assemblies get a free pass, just because they use plasmas heated beyond room temperature? A tepid plasma of deuterium cannot produce measurable levels of fusion neutrons because one or more of the ion temperature, ion density or plasma volume is too small. As far as energy production is concerned, such systems are the functional equivalent of cold fusion but cost orders of magnitude more.
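
To make this quantitative: the total D-D reaction rate of a plasma is approximately

    R ≈ (1/2) nD² ⟨σv⟩ V,

where nD is the deuteron density, V is the plasma volume, and ⟨σv⟩ is the Maxwellian-averaged reactivity, which collapses by many orders of magnitude as the ion temperature falls from ~10 keV toward ~1 keV. A large deficit in any one factor (density, volume, or, through ⟨σv⟩, ion temperature) drives the neutron yield below detectability.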

Robert Park was the longtime director of the Washington office of the American Physical Society, and author of the book “Voodoo Science” [2]. In his book and numerous columns under the heading “What’s New,” Park demolished “cold fusion” but never mentioned any of the failed “warm plasma” fusion schemes of his era. Unlike cold fusion, plasma-based fusion attempts are generally not voodoo science but most of these enterprises can be classified as voodoo technology.

For present purposes, we define “voodoo fusion” as those plasma systems that have never produced any fusion neutrons, but whose promoters claim will put net electrical power on the grid or serve as portable electric power generators within a decade or so. As in Richter’s pioneering fiasco, all the modern voodoo schemes offer perfect examples of one axiom of fusion energy R&D, the Inverse Timescale Axiom: for any fusion concept, the smaller the achieved fusion neutron production, the shorter the predicted time to a working power reactor.

The total absence of any fusion neutron production has an inexplicable psychological effect: It encourages both promoters to predict and onlookers to believe that tinkering with a tepid plasma can result in commercial fusion electric power generators within a decade.

Voodoo incantations are necessary both to induce a trance in journalists, investors and politicians in order to procure financing, and eventually to command the fusion neutrons to materialize by witchcraft as those neutrons cannot be produced by the touted plasma concepts. Today the messianic incantations of the voodoo priest-promoters invoke the aura of “the energy source that powers the sun and stars” as well as the myth that terrestrial fusion energy is “clean and green” in order to cast a spell over credulous investors and politicians.

Fusion Neutrons are Critical

Unlike solar fusion reactions, which produce no neutrons, the most favored Earth-bound fusion reaction (deuterium-tritium) releases 80% of its energy in streams of high-energy neutrons. Because of the difficulty of handling radioactive tritium, experimenters commonly use deuterium alone, and 50% of D-D reactions produce high-energy neutrons. Some enterprises propose to use the neutron-free D-³He reaction, but neutrons are still produced in unavoidable D-D reactions. In all cases, no neutrons means no fusion. Finally, some ventures propose to use the aneutronic proton-boron reaction, but there the only convincing way to discern progress toward reactor conditions is by doping the plasma with deuterium and measuring the D-D neutron output.
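
For reference, the underlying reaction energetics are standard:

    D + T → ⁴He (3.5 MeV) + n (14.1 MeV), so the neutron carries 14.1/17.6 ≈ 80% of the energy;
    D + D → ³He (0.8 MeV) + n (2.45 MeV), roughly half of D-D reactions;
    D + D → T (1.0 MeV) + p (3.0 MeV), the other half.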

Numerous fusion “startups” promise a practical fusion reactor delivering net electric power in 5 to 10 years, but almost all have apparently never produced a single D-D fusion reaction. The currently most notorious (in alphabetical order) are General Fusion [3], Helion Energy [4], Lockheed-Martin Compact Fusion [5], and Tri-Alpha Energy [6], all of which have made that promise for the last 5 to 15 years.

For reference, one watt of D-D fusion power is accompanied by approximately one trillion neutrons per second. Fusion concepts that have attained some level of fusion activity include tokamaks in numerous labs worldwide, a few stellarators, laser-compressed pellets at Livermore (NIF) and U. Rochester, MagLIF at Sandia, electrostatic fusors, and the dense plasma focus (DPF). Neutron production by itself has various practical uses, such as isotope production and radiography [7]. More than 90% of fusion concepts have never produced measurable levels of fusion neutrons, which means those systems may have little practical value.
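
The one-watt figure above is simple arithmetic: the two D-D branches release an average of about 3.65 MeV ≈ 5.8×10⁻¹³ J per reaction, so one watt of D-D fusion power corresponds to roughly 1.7×10¹² reactions per second, half of which yield a neutron, i.e. about 10¹² n/s.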

This discussion excludes Tokamak Energy and Commonwealth Fusion from the voodoo class despite their preposterous and insupportable declarations of near-term electrical power production [8], solely because their schemes are based on tokamaks. For 50 years many tokamak facilities have demonstrated that they are capable of producing a significant level of D-D fusion reactions, increasing from 1×10⁸ n/s in T-3 in 1969 to more than 2×10¹⁶ n/s in TFTR in 1988 [9], and comparable rates years later in JET and DIII-D [10]. We also exclude LPP Fusion, because its DPF does produce meaningful levels of D-D fusion neutrons (3×10¹¹ n/pulse), as the DPF has done since the 1960s.

Naked Emperors Flaunt Their New(tronless) Clothes
One sketchy way to track the predicted dates for each player’s commercial power plant is to review the articles issued periodically by Brian Wang on nextbigfuture.com [11]. For the last dozen years, Wang has quoted uncritically the rash predictions of future accomplishments with dates furnished by project promoters. Wang treats all projects and unjustified claims seriously, but you, dear reader, will merely take note of the dates promised for commercial fusion reactors.

Here are samples of the more recent predictions for energy breakeven and commercial power plants from the currently most notorious voodoo fusion enterprises. The symbol NBF denotes the website nextbigfuture.com.

General Fusion (GF)
“GF targets prototype by 2015 and a working reactor by 2020,” from NBF 5/19/2012.
“GF will demonstrate DD-equiv. net (energy) gain in 2016,” and “GF targeting commercial reactor for 2020,” from NBF 5/24/2013.
GF will demonstrate net gain in 2018 and “GF targeting commercial reactor for 2023,” from NBF 8/18/2015.
“GF Demo nuclear fusion plant around 2023”, quoting C. Mowry, CEO of GF, from NBF 5/23/2018.

Helion Energy
“The Helion Fusion Engine will enable profitable fusion energy in 2019,” from NBF 7/18/2014.
“If our physics holds, we hope to reach that goal (net energy gain) in the next three years,” D. Kirtley, CEO of Helion, told The Wall Street Journal in 2014.
“Helion will demonstrate net energy gain within 24 months, and 50-MWe pilot plant by 2019,” from NBF 8/18/2015.
“Helion will attain net energy output within a couple of years and commercial power in 6 years,” Science News 1/27/2016.
“Helion plans to reach breakeven energy generation in less than three years, nearly ten times faster than ITER,” from NBF 10/1/2018.

Lockheed-Martin Compact Fusion
“Lockheed will have a small fusion reactor prototype (power plant) in five years…and a commercial application within a decade,” from MIT Technology Review, 10/20/2014.
“Net energy gain in 2020 and commercial power plant targeted for 2024,” from NBF May 3, 2016.

Tri-Alpha Energy (now TAE Technologies).
“Tri Alpha says it will produce a working commercial reactor between 2015 and 2020,” from NBF 8/16/2011.
“Tri Alpha Energy now likely 2020 - 2025….. for commercial nuclear fusion,” from NBF 10/16/2015.
“Tri-Alpha Fusion to develop commercial fusion by 2027,” from NBF 1/19/2017.
“The company will generate net energy from fusion…. in about five or six years,” from K. Bourzac [8], 8/6/2018.

Another collection of postings describing plasma fantasies masquerading as practical energy sources along with projected commercialization dates can be found on The Polywell Blog [12], maintained by Matthew Moynihan.

There are also numerous wannabe fusion contenders that have popped up in the last 5 years or so, making the usual preposterous claims on the basis of nothing but hot air or cold plasma, but these outfits are not yet sufficiently well-known to warrant more than a mention. Examples are Dynomak, First Light, HyperJet, and numerous members of the delusional Fusion Industry Association.

Journalists and promoters rarely mention neutrons, because most journalists have never heard of them, while the promoters assume that neutrons can eventually be made to issue from their contraptions by the appropriate voodoo recitation. Wang and Moynihan, who monitor all relevant press releases, have probably heard of fusion neutrons but they care not a whit about their absence — in their eyes any gaseous plasma is the basis of a working fusion reactor simply because its promoters claim that it is. But you, dear reader, will actually search for reports of fusion neutron production and you will find essentially nothing!

The permanent fusion R&D organizations, mainly government-supported labs, are the silent spectators of the parade of naked emperors, only occasionally challenging their insupportable assertions and predictions. One feature that voodoo fusion schemes do share with their neutron-producing rivals is that while they will never put electricity onto the grid, all of them take plenty of energy from the grid. The voracious consumption of electricity is an inescapable feature of all terrestrial fusion schemes.

In January 2019, well after this article was first submitted, TAE Technologies reported achieving a token fusion reaction rate [13]. TAE Tech’s experiments with deuterium plasmas have produced a maximum of 5×10⁹ n/s (~5 milliwatts of fusion) in a 5 ms pulse, amounting to about 2×10⁷ D-D neutrons per pulse, with the injection of at least 5 MW of neutral beams and tens of MW of total electric power consumption.

As noted above, the Dense Plasma Focus at LPP has produced a D-D neutron yield per pulse that’s 10,000 times larger than TAE’s best, while the TFTR, JET and DIII-D tokamaks have produced D-D neutron rates that are 10 million times larger.

Despite its device producing fusion power that’s at least nine orders of magnitude smaller than its electric power consumption, TAE Tech has just issued its most outlandish claim ever. That’s described by Brian Wang on nextbigfuture.com (16 Jan. 2019) under the headline “CEO of TAE Technologies Says They Will Begin Commercialization of Fusion by 2023.”

A Cousinly Voodoo Plasma Enterprise
So how will these ballyhooed boondoggles end up? For guidance, let’s look at the fate of a notorious voodoo technology enterprise in the field of medical diagnostics. Theranos (a contraction of THERApy and diagNOSis) is a California-based company that purported to have a revolutionary blood-testing system, garnered nearly a billion dollars of investment, and stacked big names on its Board of Directors. The company’s claim was that with only a tiny drop of blood it could run hundreds of tests and detect diseases in their early stages. Over the past few years it was exposed as a total sham, principally by the investigative work of John Carreyrou, which culminated in his book “Bad Blood” [14].

Theranos and fusion energy enterprises are both centered on fluids called “plasma,” liquid in the case of Theranos, and gaseous for the others. But Theranos and the so-called fusion energy startups have a lot more in common than the name of their working fluid. It’s striking how the characteristics of the Theranos undertaking are similar to those of the phantom fusion enterprises that promise a practical fusion reactor delivering substantial net electric power in 5 to 10 years, but have never produced more than a token amount of D-D fusion reactions, if any at all.

Here are some of the features of the Theranos sensation, derived from Carreyrou’s book and published interviews with Carreyrou [14]. The voodoo fusion analogs are in parentheses.

  • Holmes preached that the Theranos device was “the most important thing that humanity has ever built.” (That’s the same refrain heard from fusion technologists about their beloved contraptions.)
  • Theranos’s technology was either not ready or unworkable during the initial period of bombast, and when put into service was never validated. (Today’s much-heralded voodoo fusion equipment can produce only token fusion neutrons, if any, and to the end of time will be capable only of phantom energy production.)
  • Holmes beguiled high-profile people into serving as directors or investors. They ignored or could not recognize all indications that they were being fooled. (Trusting billionaires and celebrities serve as investors or board members in General Fusion, Helion, TAE Tech., etc.)
  • Holmes’ game was always that the ends justify the means. She thought the technology would eventually catch up with all the promises she made, and practiced “fake it till you make it.” (Fusion promoters assume that high-energy neutrons will magically show up someday in the abundance needed for promised electricity generation.)

The founders and chief executives of General Fusion, Helion Energy, Lockheed Fusion and Tri-Alpha are reincarnations of Ronald Richter, the first practitioner of voodoo fusion energy. Like Theranos’s Elizabeth Holmes, these modern priests of voodoo fusion have cast a spell over journalists, investors and politicians.

The pin-pricks of blood plasma extracted by Theranos could not produce enough usable data for meaningful tests. It was voodoo diagnosis. Similarly, the tepid plasmas of the voodoo fusioneers can never produce enough fusion neutrons, if any at all, to have practical use. Very recently, Theranos announced that it would dissolve and its investors would receive at most one cent on the dollar. At best, that same outcome awaits the voodoo fusion ventures when it becomes apparent that their power plant foolishness, fantasies and deception have zero factual basis. And at worst? Follow the Theranos case.

References

  1. Jose A. Balseiro, “Report on the September 1952 Inspection of the Isla Huemul Project,” Comisión Nacional de Energía Atómica, Buenos Aires, Argentina (1988); M. A. J. Mariscotti, “El Secreto Atómico de Huemul,” Sudamericana-Planeta, Buenos Aires, Argentina (1985); many references in Wikipedia entry for “Huemul Project.”
  2. Robert L. Park, “Voodoo Science,” Oxford Univ. Press, 2000.
  3. General Fusion website
  4. Helion Energy website
  5. Lockheed Martin website
  6. TAE Technologies (Tri-Alpha Energy) website
  7. APS Panel on Public Affairs Report, “Neutrons for the Nation,” Amer. Physical Soc., July 2018.
  8. D. Kramer, Physics Today, August 2018, p. 25; K. Bourzac, Chemical & Engin. News (ACS), August 6, 2018.
  9. M.B. Bell et al. (TFTR results), Proc. IAEA Fusion Energy Conf. (Nice, 1988), Vol. 1, p. 27.
  10. P.B. Snyder et al. (DIII-D results); H-T Kim et al. (JET results); both presented at 27th IAEA Fusion Energy Conf. (India, 2018).
  11. Brian Wang, www.nextbigfuture.com
  12. Matthew Moynihan, www.thepolywellblog.com
  13. R. M. Magee et al., Nature Physics, published online 14 Jan. 2019.
  14. John Carreyrou, “Bad Blood,” Alfred Knopf, 2018; book review by R. Lowenstein, New York Times, May 21, 2018; interview of Carreyrou by Vox, June 19, 2018.

Appendix
Neutron Distractions

  • Some years ago, when “sonofusion” was a short-lived obsession, General Fusion applied an electrically driven shock wave to a sphere of deuterated water and claimed to have produced up to 50,000 neutrons per shot. That was never confirmed, and in any case it has nothing to do with the company’s fusion reactor concept.
  • TAE Technologies is pursuing another venture that uses ion beams striking solid targets to produce neutrons for cancer therapy. This technique is real, having been pioneered by cyclotron inventor Ernest Lawrence and his physician brother in 1938 and used for eight decades.

Daniel L. Jassby

Complexity and Emergence – Essential Physics for Our Troubled Times

Lars English, Dickinson College

Times have been changing in physics. No, I’m not talking about string theory, multiverses, dark matter, or any other buzzwords in the news lately. Something more fundamental is happening. There is a new mindset that challenges old assumptions, and its implications arguably extend far beyond the discipline. But to appreciate just how much things have shifted, and why it matters, it is useful to start with a brief history.

The story of physics begins in earnest with Isaac Newton. His formulation of mechanics in the late 17th century commonly marks the birth of our discipline. In fact, classical mechanics (as it’s now called) proved so successful, so durable, that the first and second revolutions had to await the early 20th century. First came the sudden appearance of Einstein’s theory of relativity, and then the gradual arrival of quantum mechanics. In the late 1940s and 50s, a third paradigm shift unfolded in the form of the first quantum field theory, which would culminate in the standard model of particle physics. Now we are in the midst of what feels like a fourth paradigm shift — towards complexity and emergence — one that can perhaps trace its early beginnings to the chaos theory of the 1970s.

But let’s go back to Newton for a second. Ever since his famous deduction of the mathematics of gravity, physicists have appreciated the power of simplicity. Newton had considered interactions between only two objects at a time — earth and moon, or earth and apple. Such “two-body” problems are usually mathematically tractable, and this explains their appeal in physics. Physicists kept coming back to them again and again. In the 1920s, for instance, the young quantum wave mechanics trained its sights on a single electron around a single proton — aka hydrogen. In the 1940s, it was the electron and the photon. What all these advances had in common was the “divide-and-conquer” strategy. It was an approach that isolated only a few parts in order to ferret out some of the general laws that govern them, only to then use those laws in more complex situations. One could label this strategy reductionist, but it has been highly successful in physics over the centuries.

Now, however, the limitations of this kind of approach are becoming increasingly apparent. The strategy of breaking things apart, studying the parts in isolation, and from there reconstructing the whole, does not work well on a broad range of new and important problems. These range from making sense of shape-memory alloys, to designing the electromagnetic metamaterials used in cloaking; from understanding the next generation of high-temperature superconductors, to investigating the weird topological excitations that could form the basis for quantum computers. Then there are problems somewhat outside of physics proper that are attracting many physicists, like studying the robustness or fragility of the nation’s electrical grid.

Such modern problems share common features that preclude a reductionist approach: nonlinearity, feedback, and self-organization. They typically involve a large number of strongly interacting parts (many more than two). In tackling such problems, physicists have had to cultivate a new mindset that recognizes as a central theme the possibility of emergence: the notion that the whole cannot be understood through its parts alone.

I think it is fair to say that most physicists tend to be pragmatic, non-ideological people, and from a practical standpoint, the reductionist strategy works well for certain types of scientific problems, and not so well for others. Yes, physics is now busy crafting an alternative framework going by the name of emergence, but this is motivated by a desire to understand nature in its full complexity and manipulate it. Meanwhile, however, reductionism has long left the exclusive realm of physics and made its way into the larger society, where it continues to derive power from its close former association with physics.

Here is the problem. Reductionism turned into a philosophy, devoid of scientific context, encourages unrealistic expectations and a dangerous intellectual overreach in its followers. Once adopted, it has a way of spreading out in all directions to infect one’s thinking. How?

Reductionism says that nothing fundamentally novel can happen when the parts of a system assemble to form the whole. It asserts that the behavior of the whole is already contained in the properties of the isolated parts. Moreover, the laws governing the parts can allegedly also be used to derive the rules operating at the system level, at least in principle. Within this old paradigm, then, the science discovered by microbiologists about DNA transcription and protein production in cells is already fully contained within the laws of quantum mechanics. High-level cognitive function can ultimately be reduced to our genes. Consciousness is a by-product of the firing of neurons — the list goes on.

The logical conclusion of such a mindset is that ultimately there are only the elementary particles and the laws they obey. Ultimately, we are all just collections of myriad particles that move about in random fashion, collide with one another and interact via fields that are themselves comprised of particles. All the rest can be reconstructed from this layer of reality, and since this layer does not contain any deeper meaning, such meaning is also absent at any other layer. We are left with a sweeping scientific materialism.

The great physicist Philip Anderson once lamented that molecular biologists “seem determined to reduce everything about the human organism to ‘only’ chemistry, from the common cold to all mental disease to the religious instinct.” It is indeed a fundamentally nihilistic view of the world. If science demands reductionism, and reductionism implies materialism and nihilism, do we really want to place our trust in science? It is a fair question — some of the broader science skepticism may be attributable to a rejection of such implications.

Things look no better when we transport the reductionist mindset into the arena of sociology. Here it nudges us to suspect that the reason certain societies happen to be presently wealthy or poor must be traceable back to the individuals making up those societies. It furthers precisely the “cultural” or “ignorance” hypotheses of wealth and poverty that Daron Acemoglu and James Robinson, in their recent book Why Nations Fail, reject as poor predictors of the future. Such hypotheses usually go hand in hand with ethnic or racial prejudice, and they deceptively insinuate that the status quo is somehow preordained, unchangeable and genetic.

In philosophy this is referred to as the fallacy of composition, and in physics we encounter some of the most dramatic illustrations that expose this type of fallacy. Think of diamond and graphite — two substances entirely composed of carbon atoms. While their microscopic composition is exactly the same, their macroscopic properties are nothing alike. Anything you could measure about them — mechanically, optically, electrically, thermally, acoustically — would be vastly different. There is no overlap. Nevertheless, if you burn a piece of graphite and a piece of diamond, the carbon dioxide you get is indistinguishable. This example (and many others within materials science) gives us a glimpse of emergence as articulated in the physical sciences.

The central premise of emergence is that the whole is qualitatively more than the sum of its parts. Much more important than mere composition is structure, organization, or architecture — things that transcend the individual parts. Furthermore, the architecture acts back on the individual parts and affects their behavior — a phenomenon sometimes called downward control. In the graphite versus diamond example, it is the layered honeycomb structure of graphite that nudges the individual carbon atoms to manifest certain electron states, whereas the tetrahedral diamond structure forces carbon electrons into different orbitals. At the same time, of course, the whole cannot demand of its parts what these are not somehow capable of doing. We can speak of a kind of bi-directionality and a co-emergence between levels.

What are the larger implications that follow from the limits of reductionism that are now increasingly appreciated in physics? In a nutshell, the emergence revolution saves us from all kinds of ominous implications that strict reductionism demands. We attain a flexibility of mind where we can admire the wondrous world of sub-atomic particles while not immediately thinking that everything reduces to it. We don’t see science as a hierarchical ladder with a bottom (the “fundamental”) and a top (the “derivative” and “applied”), but as an interweaving tapestry. We avoid a scientific materialism — all-encompassing in scope — that denies among other things the reality of consciousness and human agency. We steer clear of racist or sexist narratives so pervasive in human thinking, as we appreciate the larger feedback loops ever-present in society. We can easily acknowledge stereotype threat as a functioning mechanism with causal powers (i.e., the well-established effect where the mere awareness of a societal stereotype against you suppresses your performance). We can understand gender as a social construct without then going to the extreme of rejecting chromosomes.

In short, emergence is an indispensable idea for our troubled times. It allows us to avoid conceptual extremes in an intellectually honest way. Physics can no longer be enlisted as an ally by opponents of these ideas. Much of physics has become a proponent of emergence, and physicists have put their own distinctive stamp on it. The result will be novel materials, yes, but the ramifications will go far beyond the practical applications. We would do well to take note.

Accessory to War: The Unspoken Alliance Between Astrophysics and the Military

By Neil deGrasse Tyson and Avis Lang, W. W. Norton & Co., New York, New York, 404 pages, plus end notes, acknowledgements and references. ISBN 978-0-393-06444-5, hardback, $30.

Neil deGrasse Tyson is one of the best, if not THE best, communicators of science to the general public alive today. In Accessory to War: The Unspoken Alliance Between Astrophysics and the Military, he and co-author/researcher/editor Avis Lang tackle a complex issue that at first glance appears self-evident (at least to the technically inclined): that astrophysicists and warfighters use much the same knowledge and equipment to pursue their respective vocations. While some of the historic and modern parallels between astronomy/astrophysics and military operations recounted in this book may already be known to some readers, the sheer number of examples given, and the amount of detail on each, make for an extremely thorough treatment, much of it unlikely to be familiar to most readers. It is probable that no other source collects all these topics and examples in one place. Further, while Dr. Tyson often writes for the general public (and this book contains no math to work through), I see it as more of a textbook for graduate students of military history, the history of science, or governmental policy. In this role, it should be a required addition to such courses throughout academia.

Examples Tyson and Lang give of synergism between astrophysics and military operations include, but are certainly not limited to:

  1. Orbiting satellites of all kinds (obvious connections).
  2. Adaptive optics and laser guide stars (removing atmospheric distortions for telescopes and laser weapons).
  3. Radar (for radio astronomy and enemy systems detection).
  4. X-ray, UV and infrared detectors (for space object spectral studies and military sensor requirements).
  5. Space (Hubble) telescopes and imaging spy satellites.
  6. Particle accelerators (for simulating astrophysical processes in the laboratory and particle beam weapons).
  7. Atomic and nuclear processes in stars and nuclear weapons.
  8. The VELA nuclear-burst detection satellites giving birth to the entire field of gamma-ray burst astrophysics.

As can be seen from the above list, a certain amount of technical knowledge is required to read Accessory to War, hence the presumption of its use in a graduate course, or by scientists and engineers. But historians will also find it useful. Tyson is an adept conveyor of history as well as science, and begins the book by describing several ancient military exploits that borrowed communications and surveillance techniques from astronomers (sometimes from astrologers). The book’s discussion of the early telescope shows how it not only aided humankind’s understanding of the heavens, but also was a key instrument in land and naval warfare and in commercial navigation, thus the exploration of the earth’s surface as well as its sky. Another intersection of military, astrophysics and commercial interests that the book covers in detail is the use of weather and global positioning satellites by all three communities. The fact that America no longer dominates the “high ground” of space receives a good deal of historical explanation from Tyson and Lang. Another subject that receives considerable attention in the second half of the book is the multiple bi-lateral and multi-lateral treaties and agreements concerning space and nuclear weapons that the U.S. has entered into since the start of the Space Age. These are loosely tied to astrophysics (verification instrumentation), more so to science in general, and explicitly to military development and operations. While of some interest to scientists and the lay reader, historians should find these sections to be of great value.

Tyson and Lang end their book with an attempt to separate fact from hype — and not just hype in the popular press. They quote numerous official Department of Defense papers and memoranda from the past 20 years claiming that America needs to be ready to fight a kinetic, directed-energy, electronic, and cyber war in space. But factually, 50 years after the Moon landing, the only counter-space weapons actually deployed by the U.S. and other countries are kinetic-kill ASAT missiles (which none dare use because of the fratricidal space debris they generate) and satellite communications jammers. Tyson and Lang make a strong case that cooperation and negotiation, attributes of the international astrophysics community, offer more hope for the future of mankind in space than militarism. Space is simply too vast and free of national borders for it to be otherwise.

Ronald I. Miller

DoD/DIA/Missile & Space Intelligence Center (Retired)

Hello World: Being Human in the Age of Algorithms

By Hannah Fry, W.W.Norton & Co. 2018, 246 pages, ISBN 978-0-393-63499-0, $25.95 hardcover.

This book is about computer algorithms that, according to the author, have a large and growing influence on nearly all aspects of our lives. Hannah Fry is Professor of the Mathematics of Cities at University College London and also a regular presenter for the BBC. Hello World covers algorithms as straightforward as the statistical analysis of crime rates and as powerful as the machine learning techniques used to play chess, make medical diagnoses, and more. The book’s emphasis is on what these algorithms are designed to do, how well they achieve their goals, and the effect of these increasingly invasive algorithms on our lives. There is minimal technical discussion about the mathematics and programming of computers. The book is well written and easily accessible to non-technical readers.

One of the book’s strengths is its wide range and number of topics. Chapter one is a general discussion of how ubiquitous these algorithms have become and the frighteningly important role they sometimes play in critical decisions, such as when to launch a retaliatory nuclear missile attack. The author then turns to a very different topic: the enormous power people can have over us by using data mining procedures and how much they can glean about intimate aspects of our lives. For example, a Target store was able to determine that a teenager was pregnant from the items she had bought there. Her parents, with whom she was living, first learned she was pregnant from the coupons Target started sending her. Fry describes how the Chinese government, through its Sesame Credit citizen scoring system, uses similar techniques. Based on a wide range of data gathered from online activity, Sesame Credit determines a single score for each person. When such scoring becomes mandatory in 2020 it will give the government enormous power over people’s lives.

Chapter two reviews algorithms that calculate the most appropriate sentences for convicted criminals or predict which defendants will skip bail or commit a crime while on bail. A concern expressed throughout the book is our bias toward believing computerized results, even though we know that dubious input data or computational methods imply dubious results. Moreover, widely used algorithms such as COMPAS are proprietary, so only their creators know how they work. This makes it particularly difficult to understand COMPAS’ known biases. Fry believes judges rely on the output of COMPAS more than they should. One reason for this over-reliance is the judges’ desire to be held less accountable. But the author also notes that human judges often make terrible mistakes; for example, they can reach different conclusions from essentially the same data depending on the time of day they judged a case.

The chapter on algorithms in medicine focuses mostly on medical diagnosis. It includes pattern recognition to detect cancerous cells in biopsy slides, as well as more challenging tasks such as predicting which cancers might kill you. Part of the difficulty in making accurate diagnoses lies in the poor quality and accessibility of many people’s complete medical data. Another concern is more familiar: maintaining the privacy of medical data such as DNA test results while at the same time making these data available to those doing R&D in, for example, medically-related artificial intelligence. The author emphasizes how difficult it is to strike an appropriate balance between such competing demands, particularly when the benefits of an algorithm are overstated and the risks are obscured.

Another chapter discusses the development of semi-autonomous and completely driverless cars. Fry describes using Bayes’ theorem to improve the decision-making of the algorithms that drive these vehicles. Critical issues include: What level of performance is good enough for us to allow driverless cars on the streets? How will this technology interact with such random phenomena as unruly people? What about the moral decisions involved in prioritizing whose safety to protect in a potential accident? Fry also discusses the interesting problem of the large time lag between when a semi-autonomous vehicle senses trouble and when the backup human driver becomes sufficiently engaged to deal with the accident about to occur.
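Fry keeps the mathematics offstage, but the core of such a Bayesian update is compact enough to sketch. The toy Python below is illustrative only; the prior, hit rate, and false-alarm rate are invented numbers, not anything from the book. It shows how repeated readings from a noisy obstacle sensor sharpen a vehicle’s belief via Bayes’ theorem.

```python
# Toy Bayesian update for a noisy obstacle detector.
# All numbers (prior, hit rate, false-alarm rate) are illustrative
# assumptions, not values taken from Fry's book.

def bayes_update(prior, p_detect_given_obstacle, p_detect_given_clear):
    """Return P(obstacle | "detected") via Bayes' theorem."""
    evidence = (p_detect_given_obstacle * prior
                + p_detect_given_clear * (1.0 - prior))
    return p_detect_given_obstacle * prior / evidence

belief = 0.01  # prior probability that an obstacle is ahead
for reading in range(1, 4):  # three successive "obstacle detected" readings
    belief = bayes_update(belief, 0.90, 0.05)  # 90% hit rate, 5% false alarms
    print(f"after reading {reading}: P(obstacle) = {belief:.3f}")

# One detection from this mediocre sensor is ambiguous (P ~ 0.15);
# three consistent detections push the belief above 0.98.
```

This is the basic mechanism by which individually unreliable sensors can be fused into decisions reliable enough to act on.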

One chapter deals with predicting where and when crimes might occur and who might commit them. Fry is concerned about the efficacy and fairness of the mathematical models in these algorithms, as well as the growing use and accuracy of facial recognition software. The final chapter discusses computer programs that produce such artistic works as music in the style of Bach.

This is an interesting and enjoyable read. Fry’s main concern is finding a way to use algorithms to improve our lives while recognizing their weaknesses and strengths. She ultimately concludes that the best result is a partnership between human and algorithm with the final decisions made by the human.

Martin Epstein

California State University, Los Angeles

Figure 1. Archetypal oil and gas reservoir. Buoyant hydrocarbons are trapped beneath a dome of impermeable cap rock. Black lines mark strata within major (shaded) rock units.

Box 1: Standard Engineering Requirements for Borehole Instrumentation

Figure 2. Apparatus used to measure the thermal conductivity of helium-3 at very low temperatures. At left is an expanded view of the heat flow tower, within which is a small pellet of cerium magnesium nitrate (CMN), a paramagnetic salt used to measure the temperature of helium [6].

Figure 3. Concept for electrical conductivity measurement of a rock formation surrounding a borehole. The primary loop is energized by an alternating current source I. Symmetrically placed secondary loops are unbalanced by the presence of the conductive rock formation. The voltage VL, measured across impedance ZL, is linearly proportional to the conductivity of the formation [7].

Figure 4. (a) Exploded view of conductivity sensor and associated electronic circuits in a brass housing. (b) Assembled laboratory prototype sensor and housing.

Figure 5. Four sensors, mounted on spring-loaded arms that press them against the borehole wall, are pulled up the well to make the measurement. [Schlumberger]

Figure 6. Method of measuring dip and strike of a layered sedimentary formation. The boundary between dipping rock beds, having contrasting electrical conductivities σ1 and σ2, is detected by the four sensors at different depths in the borehole. The four electrical records of conductivity vs. depth (of which only two are shown here) are compared to determine the dip and strike.

Figure 7. Wind-driven sedimentation in sand dunes.

Figure 8. Micrograph of sandstone quarried in Berea, Ohio. White areas are sand grains, black areas are clay particles, and blue areas are pore space, which can be occupied by oil, water, or natural gas. The permeability to fluid flow, i.e. the rate at which liquids and gases can flow from the rock to the wellbore, is proportional to the square of the pore size.

Figure 9. Simplified explanation of the connection between magnetic resonance measurements and pore size.

Figure 10. Borehole nuclear magnetic resonance apparatus. (a) Side view of borehole instrument, showing electronics cartridge (top) and sensor section (“CMR skid”) (bottom). The zone of sensitivity (red rectangle) is outside the apparatus. (b) Cross-sectional view of sensor. Permanent magnets generate a constant magnetic field, B0, outside of the borehole, several centimeters inside the rock formation. The split coax antenna generates a pulsed oscillating field, B1, which is perpendicular to B0 in the sensitive zone.

Figure 11. The author (kneeling) installing modified borehole nuclear magnetic resonance apparatus on a remotely operated submersible vehicle at the Monterey Bay Aquarium Research Institute.


These contributions have not been peer-refereed. They represent solely the view(s) of the author(s) and not necessarily the view of APS.

Physics and Society is the non-peer-reviewed quarterly newsletter of the Forum on Physics and Society, a division of the American Physical Society. It presents letters, commentary, book reviews and articles on the relations of physics and the physics community to government and society. It also carries news of the Forum and provides a medium for Forum members to exchange ideas. Opinions expressed are those of the authors alone and do not necessarily reflect the views of the APS or of the Forum. Contributed articles (up to 2500 words), letters (500 words), commentary (1000 words), reviews (1000 words) and brief news articles are welcome. Send them to the relevant editor by e-mail (preferred) or regular mail.

Editor: Oriol T. Valls. Assistant Editor: Laura Berzak Hopkins. Reviews Editor: Art Hobson. Electronic Media Editor: Tabitha Colter. Editorial Board: Maury Goodman, Richard Wiener, Jeremiah Williams. Layout at APS: Leanne Poteet. Website for APS: webmaster@aps.org.