January 2024 Newsletter

From the Editor

Oriol T. Valls, the current Physics and Society Newsletter Editor, is a Condensed Matter theorist at the University of Minnesota.

Somewhat to my surprise, the article on controlled fusion that we recently published (see July 2023 issue) has elicited a large response. I emphasize in every issue that I welcome controversy: therefore I am very happy about this. As a result we have three articles in this January issue all related to this topic, written by people who have the right expertise and strong opinions.

Also, at the request of Fred Lamb, our current Chair, and me, several of the speakers in the Forum-sponsored April meeting session on "The increasing danger of nuclear weapons" have been asked to contribute articles on the subjects of their respective talks. One of them submitted his contribution in time for this issue, and I expect to get more for the next one. I hope this encourages you to attend the session in person, if you can, or at least helps you understand more about this fundamental issue.

We also have our usual complement of book reviews, organized by Quinn Campagna, our Reviews Editor.

I am also very pleased to announce in this issue the winners of the Forum-sponsored awards and the new APS Fellows who were nominated by the Forum. Congratulations to all of them.

We are again advertising for a Media Editor. This is a great opportunity for somebody up to date on everything related to social media, and who wants to get more involved in Forum activities. Please apply or get somebody to apply.

I remind you again that the contents of this newsletter are mostly reader driven. Please consider sending your contribution or persuading somebody to do it. Manuscripts should be sent to me, preferably in .docx format, except Book Reviews which should be sent directly to book reviews editor Quinn Campagna (qcampagn@go.olemiss.edu). Content is not peer reviewed and opinions given are the author’s only, not necessarily mine, nor the Forum’s nor, a fortiori, the APS’s either. But no pertinent subject needs to be avoided on the grounds that it might be controversial. Controversy is good.

Oriol T. Valls                                                                                                                                          
University of Minnesota                                                                                                                             
otvalls@umn.edu

Forum News

FPS Newsletter Media Editor Wanted

The FPS newsletter is looking for a Media Editor to help increase the electronic and social media presence of the Forum and its quarterly newsletter. The Media Editor works with the Editor and the APS Media team. Responsibilities and duties are quite flexible, but might include developing outreach through the Engage community site and expanding the media presence of the newsletter. This is a great opportunity for a media savvy person who is interested in getting more involved with the Forum and with APS. Please contact the Newsletter Editor at otvalls@umn.edu or the current Media Editor tabithacolter@gmail.com if you think you might be interested in volunteering for this position.

2024 Forum Award Winners

We are pleased to announce the winners of the 2024 Forum Awards.

2024 Leo Szilard Lectureship

Robert J. Budnitz
Lawrence Berkeley National Laboratory (retired)

Citation: "For outstanding leadership in formulating and guiding the US Nuclear Regulatory Research program in areas of reactor safety, waste management, and fuel-cycle safety, and for significantly advancing seismic probabilistic risk assessments as applied to nuclear power worldwide."

Joseph A. Burton Forum Award

Galileo Violini
Director Emeritus, Centro Internacional de Física, Bogota, Colombia

Citation: "For establishing programs in physics education and research in Latin America and the Caribbean that increased regional scientific capacity, for promoting international scientific cooperation across continents and regions of the world, and for creating the Centro Internacional de Física in Colombia."

New APS Fellows Nominated by the Forum

We are pleased to announce the new APS Fellows that were nominated by our Forum. Congratulations to all of them.

Kartik Sheth
NASA
Citation: "For outstanding, innovative, and sustained leadership of inclusion, diversity, and equity efforts in astronomy and astrophysics and in the nation." 

Kathleen R. Turner
DOE Office of High Energy Physics
Citation: "For leadership as a program manager at the Department of Energy, enabling significant advances in the areas of cosmology, astronomy, and astrophysics."

Reba M. Bandyopadhyay
National Science Foundation
Citation: "For outstanding contributions to the nation through informing, crafting, and advancing innovative, inclusive, and data-driven science and technology policy."

Kausik S Das
University of Maryland Eastern Shore
Citation: "For leadership in promoting the progress of underrepresented groups in the field of physics, paired with notable contributions to advance diversity, education, and science communication, and for significantly contributing to the growth and inclusivity of the scientific community."

Irvy Gledhill
University of the Witwatersrand
Citation: "For decades of leadership to advance women in physics in South Africa and globally, for research solving problems important to society, and for exceptional, wide-ranging service to the physics community."

Jeffrey Kovac
University of Tennessee
Citation: "For innovative, scholarly, multidimensional, and persistent contributions to scientific ethics and ethics education along with numerous thoughtful contributions on other complex issues at the interface of science and society."

Forum Election Results

We are pleased to announce the results of the latest election of Forum officers.

Vice Chair

Don Lincoln
Fermi National Accelerator Laboratory

Councilor

Beverly Karplus Hartline
Montana Technological University

Executive Committee Members-at-Large

Rachel Carr
United States Naval Academy

M.V. Ramana
University of British Columbia, Vancouver, Canada

These officers will assume their offices on January 1, 2024.

This is a good opportunity to thank the members of the Forum Executive Committee whose terms of service are now ending. These are:

  • Past Chair: Henry Kelley
  • Executive Committee Members-at-Large:
      • Jennifer Dailey Lambert, Johns Hopkins University
      • Tara Drozdenko, Union of Concerned Scientists

Articles

Dreams of Fusion Energy

S.J. Zweben

stewart.zweben@gmail.com

“Fusion is the process that powers the sun and the stars, releasing vast amounts of energy that makes all life on Earth possible. When we bring the process of that power to Earth, it will bring about an age of safe, clean, and unlimited energy that will transform our planet.”
Princeton Plasma Physics Laboratory: https://www.pppl.gov/about/

“Culham Center for Fusion Energy is turning the process that powers the Sun into carbon-free, safe and abundant electricity for a cleaner planet.”
UK Atomic Energy Authority: https://ccfe.ukaea.uk/

“The surest path to limitless, clean fusion energy...2025: SPARC achieves commercially relevant net energy from fusion; early 2030s: first fusion power plant, called ARC, is completed.”
Commonwealth Fusion Systems: https://cfs.energy/

“Helion is building the world’s first fusion power plant, enabling a future with unlimited clean electricity. Microsoft has agreed to purchase electricity from Helion’s first fusion power plant, scheduled for deployment in 2028.”
Helion Energy: https://www.helionenergy.com/

1. Introduction

As illustrated by the quotes above from 2023, there has recently been great enthusiasm for fusion energy, especially among new private companies such as Commonwealth Fusion Systems and Helion Energy. Even much older and larger government-funded fusion labs such as PPPL (US) and Culham (UK) continue to promote the dream of unlimited and clean fusion energy.

Here I will discuss why these claims for fusion energy research are at best wildly optimistic but more often mistaken, delusional, deceitful, or fraudulent. This perspective is based on my long career working on tokamak experiments at Oak Ridge, UCLA, Caltech, MIT, and PPPL. My graduate thesis advisor at Cornell justified fusion research in 1973 by saying that “we have to find out if it will work”. That was a good motivation then, but after 50 more years of worldwide effort and about 10 more generations of graduate students, it is unfortunately nearly certain that a commercially practical fusion reactor will never be made. 

2. Physics requirements for a fusion reactor

Serious research on controlled fusion began in the early 1950’s at several governmental labs worldwide. These programs were based in part on a hoped-for analogy with fission reactors: fusion seemed like another promising new source of electricity. It was clear from the beginning that fusion reactors required hot plasmas such as those which fuel the nuclear power of the sun and hydrogen bombs. Many clever ideas for magnetic fusion reactor designs were proposed and tried in the 1950’s such as tokamaks, stellarators, magnetic mirrors, and various types of magnetic pinches. Laser-driven inertial fusion reactor designs were proposed by the early 1960’s.

The minimal requirements for a fusion reactor were first published by Lawson in 1957, as discussed in Ref. [1]. The simplest requirement is that the fusion energy production rate within the plasma is greater than the plasma energy loss rate. This “scientific breakeven” criterion favors higher plasma densities “n” to increase the fusion reaction rate and longer energy confinement times “τE” to reduce the rate of plasma energy loss, whatever the mechanism. By far the largest fusion rates occur for deuterium-tritium (D-T) reactions, which produce a 14.1 MeV neutron and a 3.5 MeV alpha particle. For a D-T plasma in the ion temperature range of Ti=5-20 keV, where the D-T reactivity increases roughly as the square of Ti, this criterion is roughly: n(ions/m³) × Ti(keV) × τE(sec) ≥ 10²¹. This “triple product” requirement can be satisfied, for example, with n=10²⁰ m⁻³, Ti=10 keV, and τE=1 sec. So far only a few fusion devices have gotten close to scientific breakeven, and breakeven itself is still very far from a practical fusion reactor, due in part to the very large cost of such devices.

Almost all the scientific difficulties of fusion have to do with the complexity of plasma physics, not atomic physics or nuclear physics. Plasmas are complicated because their motion is strongly affected by electric and magnetic fields, which in turn can be created or distorted by the plasma itself. This causes a strong self-organization of plasmas which makes them resistant to outside control. Fusion plasma control and confinement are made more difficult by the very high speed of the ions, typically 10⁸ cm/sec.
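
As a quick sanity check on the triple product example above and the quoted ion speed (a minimal sketch in Python; the inputs are the illustrative values from the text, not design parameters):

```python
# Check the example triple product against the D-T breakeven threshold.
n = 1e20       # ion density, ions/m^3
Ti = 10.0      # ion temperature, keV
tauE = 1.0     # energy confinement time, s
print(f"n*Ti*tauE = {n * Ti * tauE:.1e}")   # ~1e21, right at the threshold

# Speed of a 10 keV deuteron, v = sqrt(2E/m)
E = Ti * 1e3 * 1.602e-19   # kinetic energy in joules
m_D = 3.344e-27            # deuteron mass, kg
v = (2 * E / m_D) ** 0.5
print(f"v = {v * 100:.1e} cm/s")            # ~1e8 cm/s, as stated above
```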

Plasma instabilities are the most difficult physics problem in fusion research. In general, these instabilities drive plasma out from the hot core toward the colder edge. Unfortunately, there is no general theory which can predict the energy confinement time τE for a specific plasma configuration, which is why fusion experiments have improved mainly by trial and error for the past 70 years.

The motion of a fusion plasma is usually turbulent, with interacting fluctuations in density, temperature, electric fields, magnetic fields, and velocity. Hundreds of plasma scientists have been working for many years on increasingly complex and realistic computer simulations of the nonlinear equations of plasma turbulence, but so far with only partial success. Reliable theoretical predictions about future fusion plasmas are beyond our present capabilities.

3. Dreams of fusion

The history of fusion has been reviewed in many books [2], so only a few highlights are mentioned below. This history is aptly summarized by the subtitle of Seife’s book: “The science of wishful thinking”.

3.1 - ZETA: The ZETA machine, a toroidal pinch at the UK Atomic Energy Authority at Harwell, produced D-D fusion reactions in 1958. Their results were published in Nature, alongside some of the early results from US fusion researchers. There was a frenzy of media coverage in England which promised “unlimited power from seawater” and “a sun of our own”. However, it quickly became clear that the neutrons in ZETA were not of thermonuclear origin, since they were asymmetrically emitted in space, and so were not relevant for a fusion reactor. The UK fusion effort moved from Harwell to Culham in 1965.

3.2 - FRCs: The “field reversed configuration”, discovered by accident in the late 1950’s, is a beautiful theoretical idea spoiled by strong instabilities. The idea is a toroidal magnetic bottle like a free-floating smoke ring. It was first found in short-lived pinch experiments, and later imagined to be a perfect fusion reactor geometry. My first experience in fusion research was on an electron beam driven FRC at Cornell in the early 1970’s, but it didn’t work well and FRC experiments died out by the 1990’s. They were reimagined this century by private fusion companies like TAE Technologies and Helion Energy, but still don’t work well (compared to tokamaks). 

3.3 - TFTR: The Tokamak Fusion Test Reactor (TFTR) was the largest magnetic fusion device ever built in the US and began operation at PPPL in 1982 (see Fig. 1). It was designed to reach scientific breakeven with D-T fuel in the 1980’s (I worked on TFTR from 1984-1997). TFTR initially had a great confinement time of about τE=500 msec, but this fell disastrously to about 50 msec as more heating was applied. After years of trial and error, by 1994 TFTR obtained 10 MW of D-T fusion power with 40 MW of plasma heating power, still far short of breakeven. In retrospect, TFTR was at the limits of the capability, resources, and enthusiasm of the US fusion program.

3.4 - NIF: The National Ignition Facility (NIF) is the largest laser fusion experiment in the world. It was funded by the US nuclear weapons program to simulate H-bomb physics, but it is also used for research on inertial fusion energy (IFE).

Figure 1: The Tokamak Fusion Test Reactor (TFTR) in 1989. It operated from 1982-1997 at the Princeton Plasma Physics Laboratory (PPPL). The goal of this machine was to obtain scientific breakeven using D-T fuel, but its best results fell short of this goal. People in blue coats at the upper and lower right give a sense of scale. Source: https://www.pppl.gov/tokamak-fusion-test-reactor

Construction began in 1997 and a D-T fusion yield of 10 MJ per shot with a fusion gain ratio of 10 was planned by 2012, at a cost of about $5B. The best fusion yield achieved by 2012 was less than 1% of that expected. The yield increased to about 3 MJ by 2023, still far short of the original expectation. However, scientific breakeven was achieved and ignition was claimed by NIF proponents. There are very serious problems with the feasibility of this technology as a fusion reactor, such as low laser efficiency, high cost of targets, target alignment and chamber clearing at high pulse rate, and tritium breeding.

3.5 - ITER: ITER is a tokamak about 30 m high being built in France and is the largest controlled fusion device ever attempted (see Fig. 2). Its engineering design was started in 1988, construction started in 2013, and full D-T operation is planned for 2035. It is being funded by the European Community and 6 other international partners and is expected to cost roughly $50B. Yet it is very far from being a real fusion reactor. ITER is expected to make 500 MW of fusion power from 50 MW of plasma heating power for 500 sec pulses, but if this fusion power were converted to electrical power (which it will not be), it would barely be able to power itself. After the start of D-T operation it will need to be maintained remotely due to its intensely radioactive structure. ITER could be seriously damaged in a fraction of a second by a bad plasma “disruption”, a large-scale instability seen in all tokamaks.
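
To make the "barely power itself" claim concrete, here is a rough power balance under assumed efficiencies (the conversion and wall-plug figures below are generic assumptions of mine, not ITER design numbers):

```python
# Rough ITER electrical balance (all efficiencies are assumed, not official).
P_fusion = 500.0       # MW of fusion power (design goal)
eta_th = 0.40          # assumed thermal-to-electric conversion efficiency
P_gross = P_fusion * eta_th          # ~200 MW(e) gross, if converted

P_heat = 50.0          # MW of plasma heating power delivered to the plasma
eta_wallplug = 0.30    # assumed wall-plug efficiency of the heating systems
P_heat_elec = P_heat / eta_wallplug  # ~167 MW(e) drawn from the grid

P_site = 50.0          # MW(e), assumed cryogenics, magnets, cooling, controls
print(f"net: {P_gross - P_heat_elec - P_site:+.0f} MW(e)")  # roughly zero
```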

4. Realities of fusion

A nice summary of historical and contemporary fusion experiments as of 2022 is presented in Ref. [1]. The best magnetic fusion energy (MFE) results were obtained in the 1990’s on the three large tokamaks TFTR, JET, and JT-60U, with a triple product nTiτE ~ 6-8×10²⁰, which is near the ~10²¹ needed for scientific breakeven. The best result so far for a private fusion company is nTiτE ~ 2×10¹⁷ for an FRC at TAE Technologies, which is comparable to the tokamak results of the early 1970’s. The best NIF results were about nTiτE ~ 5×10²¹, and the ITER and SPARC tokamaks are projected to have nTiτE ~ 6-8×10²¹.

ITER is the most ambitious and most difficult fusion project ever attempted. However, in my opinion, it will also be one of the biggest failures in the history of science. It may never be completed due to design or construction problems or budget overruns. Even if it is completed, it will likely fail to reach its goal due to poor confinement, impurity radiation, wall erosion, water leaks, magnet failure, lack of tritium, or catastrophic major disruption. Even if it reaches its goal, it should be obvious that this is not a good way to make electricity due to the huge cost, long downtimes, and high levels of radioactivity. Even if this wasn’t obvious, the gap between ITER and a DEMO fusion reactor [3] is so large that it will very likely never be crossed.

Yet of all the paths to fusion energy, ITER is by far the most likely to succeed. Its design was based on the best experimental evidence worldwide and the most reliable technology available. Thousands of very capable scientists and engineers worked on its design for over 20 years. There is little or no new physics expected in ITER. If anything, the design is too conservative, since it is basically the largest existing tokamak JET (at Culham) scaled up in size by a factor of 2. But ITER is still an extremely large step: JET ended a 40-year run in 2022 with a 10 MW D-T shot of 5 second duration [4], but ITER is supposed to make 500 MW D-T shots with a 500 second duration.

However, in my opinion inertial fusion energy (IFE) activities are chasing a delusion. There is no way that the recent NIF 3 MJ shot is relevant for a fusion reactor: 3 MJ of fusion energy can be converted to just $0.10 in electricity. The idea of making 100 times larger explosions (equal to about 70 kg of TNT) at a rate of 10 times per second to make a 1 GW electrical power plant is just not credible (to put it mildly). The LLNL press conference [5] touting the 3 MJ fusion energy “breakthrough” in 2022 was wildly overhyped, with both gullible media and US Department of Energy (DOE) officials equally to blame. Much more likely than an IFE reactor would be the discovery of a new method to trigger a large H-bomb without a fission bomb. This is what NIF is trying to do, but with a very small bomb and a very large driver. In this respect IFE research is dangerous.
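
The arithmetic behind these statements is simple (a sketch; the electricity price and conversion efficiency are my assumptions):

```python
# Back-of-envelope IFE economics and scale (price and efficiency assumed).
E_shot = 3.0                    # MJ, recent NIF yield
kWh = E_shot / 3.6              # 1 kWh = 3.6 MJ
print(f"${kWh * 0.12:.2f}")     # ~$0.10 at an assumed $0.12/kWh

E_big = 100 * E_shot            # MJ, "100 times larger explosions"
print(f"{E_big / 4.184:.0f} kg TNT")   # ~72 kg (1 kg TNT = 4.184 MJ)

rep_rate = 10                   # shots per second
eta_th = 0.33                   # assumed thermal-to-electric conversion
print(f"{E_big * rep_rate * eta_th:.0f} MW(e)")  # ~1 GW electric (MJ/s = MW)
```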

Private fusion energy companies which claim to be delivering fusion power within about 10 years, such as Helion Energy and Commonwealth Fusion Systems, are either deceitful or fraudulent. These companies’ scientists must know this cannot be delivered, given the long history of fusion research. Many of these companies are based on ideas developed in cancelled government-funded programs. For example, the SPARC tokamak at CFS [6] is based on earlier DOE-funded designs for a high field tokamak after TFTR; namely, CIT (compact ignition tokamak), BPX (burning plasma experiment), and FIRE (fusion ignition research experiment). Helion Energy and TAE Technologies are based on DOE work on FRCs by scientists from Los Alamos, the University of Washington, and Cornell. These company founders apparently wanted to continue their research and found that there were enough gullible rich people to fund them, but only if they promised fusion power soon enough.

Large government-funded fusion labs such as Culham or PPPL continue to sell the long-term promise of fusion to their sponsors every year, but they are certainly not deluded by the dream of near-term fusion energy. Instead, they delicately position these labs in the twilight zone between optimism and deceit by claiming that there may be a practical fusion reactor in about 30 years. These labs have been sustaining this unrealistic optimism for over 60 years.

Figure 2: The ITER tokamak under construction in southern France in early 2023. One of the 18 toroidal magnetic coils is suspended in the center, and one of the 9 D-shaped vacuum vessel segments is at the left. The vacuum vessel segments fit inside the toroidal field coils. The first D-T operation of ITER is scheduled for 2035. Source: https://www.iter.org/construction/TokamakAssembl

5. Some specific problems

There is no single physics or engineering issue which completely prohibits a fusion reactor. Instead, there are many very difficult problems all of which need to be solved together, making a practical reactor nearly impossible. This section describes some of these problems for the tokamak, the most successful magnetic fusion energy device. Many of these issues are common to other fusion devices, and all of them have been known for at least 40 years. 

5.1 - Energy confinement: Plasma energy confinement has been the main physics issue in tokamak research since the 1950’s. For example, the energy confinement time of ITER needs to be τE≥4 sec to achieve its goal, which is about 10 times higher than that of the largest existing tokamak JET. The ITER energy confinement predicted from existing tokamak data using “empirical scaling” is about τE=3.0±0.5 sec, which is marginal for its success, and even higher confinement times are needed for a reactor.
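
For readers curious where such predictions come from: the "empirical scaling" usually meant here is the IPB98(y,2) H-mode fit to the world tokamak database. The sketch below uses the standard published coefficient set as I recall it, with nominal ITER baseline parameters; the result depends on the assumed loss power, so treat it as illustrative.

```python
# IPB98(y,2) thermal energy confinement scaling (ELMy H-mode database fit):
# tauE = 0.0562 * Ip^0.93 * Bt^0.15 * n19^0.41 * P^-0.69 * R^1.97
#        * kappa^0.78 * eps^0.58 * M^0.19
# Units: Ip [MA], Bt [T], n19 [10^19 m^-3], P [MW], R [m], M [amu].
Ip, Bt, n19 = 15.0, 5.3, 10.0     # nominal ITER baseline values
P = 100.0                         # assumed total heating/loss power, MW
R, a, kappa, M = 6.2, 2.0, 1.7, 2.5
eps = a / R                       # inverse aspect ratio

tauE = (0.0562 * Ip**0.93 * Bt**0.15 * n19**0.41 * P**-0.69
        * R**1.97 * kappa**0.78 * eps**0.58 * M**0.19)
print(f"tauE ~ {tauE:.1f} s")     # ~3.3 s, in line with the quoted 3.0 +/- 0.5 s
```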

5.2 - Impurity contamination: Impurity ions in the plasma core originating from the vessel wall or from helium “ash” from D-T reactions will reduce the fusion power for a given plasma configuration. The impurity fractions are almost entirely unpredictable in ITER due to the uncertainties in the plasma-wall interaction and the impurity ion particle confinement. There is presently no demonstrated method to preferentially remove impurities or helium ash from a tokamak, so the level of impurities in future tokamaks may become unacceptable. 

5.3 - Disruptions: The most dangerous tokamak instability is a plasma “disruption”, which causes a very rapid (few msec) loss of the entire plasma energy and plasma current to the wall. Disruptions occur in all tokamaks and can cause extremely large electromagnetic forces and heat loads on the vessel and wall components. Prediction and mitigation of disruptions is planned, but it is still possible that a single large disruption could significantly damage the ITER tokamak and may even lead to a loss of coolant accident (steam explosion). 

5.4 - Wall erosion: There will inevitably be a gradual erosion and redeposition of the internal tokamak walls due to plasma heat and particle loss. The location and rate of this erosion are difficult to predict or control since they depend on the largely unknown turbulent transport loss in the edge plasma. Excessive wall erosion or cyclic thermal stress could lead to a leak from the water cooling lines just below the walls, which would immediately shut down operation.

5.5 - Magnet failure: ITER will have the largest and most complex set of superconducting magnets ever built, many of which need to be pulsed every shot. All these magnets must be cooled with liquid helium and restrained against huge electromagnetic forces. These magnets can fail due to coolant leaks, mechanical stress, or electrical arcing. It would be very difficult or impossible to repair or replace any of the major coils of a tokamak reactor after D-T operation, since the whole structure will be radioactive, necessitating full robotic maintenance.

5.6 - Tritium inventory: The tritium fuel for a D-T tokamak reactor needs to be created in on-site breeding blankets located outside the plasma but inside the toroidal field coils. This is theoretically possible using neutron-lithium reactions with neutron multipliers such as beryllium. The design of these blankets is extremely complicated due to neutronic, thermal, and mechanical interactions, and none has been tested so far in a D-T neutron environment. 

5.7 - Radiation damage: In a tokamak reactor the first wall will be subject to very high 14 MeV neutron radiation loads, typically a few MW/m² over many years. This will eventually cause radiation-induced damage of the first-wall materials, typically measured as the average number of displacements per atom of the material lattice (perhaps 100 dpa). Radiation damage causes changes to metals such as softening, swelling, and helium embrittlement which could eventually result in structural failure of the wall.

5.8 - Availability: A practical tokamak reactor should be operated with a full power availability factor comparable to other electrical power plants, which ranges from nuclear fission at 90% to solar at 25%. At present the longest D-D tokamak pulses run for about 1000 sec a few times per day, or <5% of the time. Long shutdowns are also expected in ITER due to the difficult repair and maintenance needs. An order-of-magnitude increase in availability is required for a tokamak reactor, which is difficult since expensive external current drive will be needed for long pulses.

5.9 - Safety: A tokamak reactor will have at least a few kg of tritium and radioactive dust inside the vacuum vessel, so a public evacuation plan will be needed in case of a vacuum accident. A tokamak reactor will also create a huge amount (thousands of tons) of low-level radioactive waste due to neutron activation of the interior walls, which will require a long-term decommissioning and storage process. Also, any fusion reactor poses a threat of nuclear proliferation since fissile material can be made by placing natural uranium near fusion neutrons.

5.10 - Cost: Assuming a tokamak reactor could be built to produce net electricity, it will be practical only if its cost of electricity is comparable to that from other sources. This seems extremely unlikely based on the $50B cost of ITER, which cannot produce any net electricity. Given the simplicity and falling costs of solar and wind power, it is highly unlikely that a tokamak reactor could ever be cost competitive.

Each of these 10 problems is independent of the others and has been unresolved for at least 40 years. Assigning an optimistic probability of ½ for the solution of each, the probability of a successful tokamak reactor is at best 1/1000.
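
The closing estimate is just the product rule for independent events:

```python
# Ten independent problems, each with an optimistic 1/2 chance of solution.
p = 0.5 ** 10
print(p, 1 / p)   # 0.0009765625, i.e. about 1 in 1024 ~ "at best 1/1000"
```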

6. Psychology of fusion

Technological optimism has been a driving force in fusion energy research since the early 1950’s. For most fusion scientists it doesn’t matter that the reactor goal is many years away. Gradual progress has been made by developing better understanding and bigger machines, and old concepts such as the pinch and mirror machines were largely left behind. The fusion field has provided enjoyable day-to-day work and long-term employment, so if the funding continues people will continue to chase the fusion dream.

Fusion program leaders have become adept at maintaining government support for their programs. There has never been a public admission that a fusion reactor was out of reach, just that it required more time and more money. The machines have become larger and fewer, and some older fusion labs like Oak Ridge and Los Alamos were phased out in favor of single-purpose labs like PPPL. Remarkably few mainline fusion scientists have told the hard truth about the difficulty of fusion [7]. Dissent was discouraged as fusion work became more collaborative both nationally and internationally, eventually leading to ITER. The shrinking scope of fusion research helped stimulate the private fusion start-up companies, which brought back some of the naïve enthusiasm of the 1950s, along with many of the same mistakes and delusions.

A key ingredient in fusion psychology is the slippery slope between the scientific optimism of early fusion researchers and the science fiction of recent private companies such as Helion Energy, General Fusion, and TAE Technologies. Unfortunately, it is hard to judge the relative merits of various unlikely fusion concepts since their problems lie hidden in the subtleties of plasma physics and in the multiplicity of engineering challenges. Mainline fusion scientists have been reluctant to criticize the private fusion companies since their own agendas appear so similar. Indeed, PPPL is collaborating with CFS and Tokamak Energy Inc. and Culham is collaborating with General Fusion and First Light Fusion, in newly fashionable “public-private partnerships”.

But surely there must be some unanticipated technological spin-offs which make fusion research worthwhile, even if we don’t succeed in making a fusion reactor? Unfortunately, there have been none so far. It is true that low temperature plasmas are very useful in many applications such as in chip making and plasma processing of surfaces. But there have been no applications for the high temperature (keV) plasmas used in fusion research. The main societal benefit of fusion research has been friendly international collaboration, which began shortly after controlled fusion was declassified in 1958. Fusion research has enjoyed an almost unique worldwide community spirit because the goal of fusion energy is so distant. 

7. Conclusion

The plasma physics difficulties in fusion research might eventually be overcome by building even bigger and more expensive machines. The problems are mainly due to various plasma instabilities which cause energy to leak out. These plasma instabilities are not understood well enough to predict the performance in future devices. New ideas for confinement might help, but it would take decades of development to match the performance of present tokamaks.

The engineering difficulties are a much bigger obstacle. A D-T fusion reactor will be extremely radioactive and can only be maintained by remote control. The inside walls will be eroded by the plasma and damaged by the neutrons over time, and the whole structure will be left radioactive after plant closure. Such a reactor would be extremely complex and most likely unreliable. Non-D-T fuels such as D-3He might reduce some of the engineering problems, but the plasma physics problems would be much more difficult due to the higher temperatures and confinement times required. 

The very high cost of a fusion reactor is by far the biggest problem. It is hard to imagine that a practical fusion reactor could ever cost less than ITER or NIF, neither of which can make net electricity, and both of which cost more than a fission power plant. Despite some highly optimistic fusion reactor design studies, there is unfortunately no real chance that a fusion reactor could compete with solar or wind energy, powered by the amazing fusion reactor in the sun.

Acknowledgment

I thank several former colleagues for helpful comments on this article.

References

[1] S.E. Wurtzel and S.C. Hsu, “Progress toward fusion energy breakeven and gain as measured against the Lawson criterion”, Phys. Plasmas 29, 062103 (2022)

[2] For example: Charles Seife, Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking, Viking Books (2008); C.M. Braams and P.E. Stott, Nuclear Fusion: Half a Century of Magnetic Confinement Fusion Research, IOP Publishing Ltd. (2002)

[3] G. Federici et al, “Overview of the DEMO staged design approach in Europe”, Nucl. Fusion 59, 066013 (2019)

[4] J. Mailloux et al, “Overview of JET results for optimising ITER operation”, Nucl. Fusion 62, 042026 (2022)

[5] Lawrence Livermore National Laboratory, https://www.llnl.gov/news/ignition

[6] A.J. Creely et al, “SPARC as a platform to advance tokamak science”, Phys. Plasmas 30, 090601 (2023)

[7] D. Jassby, “The quest for fusion energy”, Inference Vol. 7, No. 1 (May 2022); L.M. Lidsky, “The Trouble With Fusion”, MIT Technology Review, 86(7), 32 (1983)

The Irrelevance of the National Ignition Facility for Fusion Power

Stephen E. Bodner, sebodner@mailaps.org

The achievement of ignition and scientific breakeven on the National Ignition Facility (NIF) was a Kitty Hawk moment that proved the scientific feasibility of inertial fusion. Targets for commercial laser fusion power will probably use direct laser illumination of a spherical target, rather than the NIF’s indirect illumination method. The physics understanding of these “direct-drive” targets is not as advanced as the “indirect-drive” NIF target. Therefore a variety of new research facilities will be needed to develop the science and technology of laser fusion energy.

All but one of the above sentences are fundamentally incorrect. This does not in any way deprecate the accomplishments of the NIF scientists. I congratulate them for demonstrating the ignition of thermonuclear fuel in the laboratory. This article is instead an analysis of the relationship of the NIF accomplishment to the development of fusion power. There are several points to consider:

  1. The “indirect-drive” target concept used with the NIF cannot be extended to commercial fusion power, because it cannot produce the high energy gain needed to overcome the inefficiencies of the laser and the thermal-to-electric conversion.
  2. The NIF indirect-drive target cannot provide any useful physics knowledge that could be applied to direct-drive targets, because radiation-driven implosions and laser-driven implosions have fundamentally different physics parameters and behavior. The NIF result was not in any way a “Kitty Hawk” moment.
  3. The NIF program overall has delayed progress toward laser fusion energy by approximately two decades.
  4. The NIF glass laser technology is also of no value, because glass lasers cannot meet the various target physics requirements of direct-drive targets. Only a gas laser such as ArF is potentially useful.
  5. There are currently two proposals for laser fusion power using a gas laser: symmetric direct-drive and hybrid asymmetric direct-drive. Only the symmetric direct-drive concept is potentially viable. This fusion concept also has a physics basis that is superior to the NIF program.

The three basic types of laser fusion targets. (a) Direct-drive that directly illuminates the sphere with laser beams. (b) Indirect-drive that converts the laser light to x-rays on a gold can and uses the x-rays to illuminate the sphere. (c) Hybrid-drive that first illuminates metal rings on the can to create x-rays and then uses two-sided direct laser illumination.

The requirement of high target gain

Some of the electric power generated in a fusion plant will have to be recycled back to feed the laser. To keep that recycled fraction to 1/4, the product of laser efficiency times target gain must reach the value of 10. [1] Because tritium recycling and target fabrication will also require power, the product of laser efficiency times target gain will have to somewhat exceed 10.
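
A minimal sketch of where the factor of 10 comes from (my notation; the ~40% thermal conversion efficiency is an assumed value, not a number from Ref. [1]): the recirculating fraction is f = 1/(ηL·ηth·G), so requiring f ≤ 1/4 with ηth ≈ 0.4 gives ηL·G ≥ 10.

```python
# Recirculating power fraction in a laser fusion plant (sketch).
# eta_L: laser wall-plug efficiency; eta_th: assumed thermal-to-electric
# conversion; G: target energy gain. f = 1 / (eta_L * eta_th * G).
def recirculating_fraction(eta_L: float, eta_th: float, G: float) -> float:
    return 1.0 / (eta_L * eta_th * G)

print(recirculating_fraction(0.10, 0.40, 100))  # ArF at 10%: G=100 -> f=0.25
print(recirculating_fraction(0.16, 0.40, 60))   # glass at 16%: G~60 -> f~0.26
```

These two cases correspond to the gain requirements of roughly 100 and 60 discussed in the next paragraphs.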

A spherical target with symmetric direct illumination by an ArF gas laser is calculated to achieve energy gains in the vicinity of 200 using just 1 MJ of laser energy. [2] Gains would be even higher with laser energies above 1 MJ. With a calculated ArF laser efficiency of 10%, the estimated target gain of 200 far exceeds the minimum requirement of 100. This safety factor may be needed to cope with remaining uncertainties in this fusion concept.

If one instead considers an indirect-drive target, using a glass laser with a calculated efficiency of 16%, then the target would need a minimum gain of 60. The NIF indirect-drive target originally had a “point-design” energy gain of 13, using 1.35 MJ of absorbed laser energy. [3] That target design failed in 2012. After a decade of design improvements, the NIF target has achieved an energy gain of less than 2 using 2 MJ of absorbed laser energy. Increasing that gain from 2 to 60 seems unlikely at reasonable laser energies. My pessimism about the indirect-drive approach is shared by many others in the fusion community. [4]

The “indirect-drive” target used on the NIF is geometrically complex. A sphere with its fuel is placed inside a cylindrical can lined with gold. The laser heating of the gold produces x-rays. The x-rays are then used to heat the outside of the sphere. This indirect method of first producing x-rays is very inefficient, because of the energy wasted in heating the can wall.

Producing high enough energy gains for a fusion power plant will thus require “direct-drive”, in which the laser directly heats the outside of the spherical target.

The physics irrelevance of a radiation-driven target

The heating of the outside of a sphere with thermal x-rays produces fundamentally different radial profiles of density, temperature, and velocity than does heating with UV laser light. Angular variations are also fundamentally different, with low-mode perturbations dominant using x-rays and high-mode perturbations dominant using lasers. These different radial and angular profiles determine the convergence behavior of the imploding shell.

Thus the achievement of ignition with x-rays does not in any way even suggest that ignition is also possible with laser light. I believe that there is strong evidence that direct-drive implosions can succeed, but my belief is not based in any way upon the NIF results. The NIF demonstrated that high spherical convergence and ignition is possible using indirect x-ray drive, but it was definitely not a Kitty Hawk moment for fusion energy using direct laser drive.

The early history of laser fusion

I transferred into the laser fusion program at Lawrence Livermore National Lab near the end of 1971, when it was still a highly classified program with a total of about a dozen scientists. I moved to the Naval Research Laboratory (NRL) in 1974 and soon became the leader of their modest-size laser fusion program. For about 30 years I was able to closely observe the entire national laser fusion program.

In 1972 the laser fusion program had the single internal objective of developing laser fusion for commercial power. The target design was of course direct-drive. Indirect-drive was rejected as too inefficient for a fusion power plant. No one back then considered laser fusion research to have any value for the nuclear weapons program, because of differences in basic physical parameters. However the senior lab management sought funding from the nuclear weapons side of the AEC (a predecessor of DOE) rather than from the energy side of AEC. They explained to me that it was easier to obtain significant additional funding within the weapons budget. But this easier source of funding had a downside: the program was not properly reviewed and assessed in terms of its actual goal.

Major physics problems were quickly found in both the laser and the laser-target interaction. In 1975 the Livermore program switched to indirect-drive targets, rather than try to solve the apparently intractable physics problems associated with direct-drive and laser fusion energy. We now know that their judgement was not quite accurate. It took that laboratory 47 more years just to reach ignition with indirect-drive. Meanwhile, the basic problems producing high gain with direct-drive were solved by other scientific groups using very little funding.

In my view, the first major management error with their laser fusion program was that 1972 decision to seek weapons funding rather than energy funding. The second major error, in 1975, was to abandon the fusion energy goal when the wider community found several challenging physics problems. The third major error, after the end of nuclear testing, was the decision to construct the NIF. By the end of the 1990s a durable KrF gas laser with superb optical smoothing had been developed and used at NRL. Methods had also been invented and demonstrated to control the growth of fluid instabilities. The technology and science basis of high-gain direct-drive was not complete by the late 1990s, but it was far enough along to be reasonably confident of some success.

The relevance of glass lasers to high-gain fusion

Some fusion scientists had hoped that a glass laser somewhat similar to the NIF, but with diodes replacing flash lamps, would be useful in a fusion power plant. However glass lasers cannot meet the many severe requirements for a high-gain direct-drive implosion. [1] The fusion laser for direct-drive must have a far-UV wavelength to maximize the plasma pressure and minimize the risks of laser-plasma instabilities. [2] There has to be optical smoothing of all laser perturbation modes via induced-spatial-incoherence, to reduce laser intensity modulations at the target from about 100% down to about 1% or less. This optical smoothing also prevents laser filamentation in the plasma. The laser also needs a broad and continuous bandwidth of about 10 THz, to minimize several laser-plasma instabilities and to maximize the laser illumination uniformity. It also needs zooming, in which the focal spot size is reduced at least twice during the few-nanosecond implosion, to better match the laser focal size to the imploding target size.

Glass lasers fail all of the above important requirements, partly because the converted frequency is in the near-UV not the far-UV, but mostly because of the nonlinear index of refraction of all laser glass. The only way to simultaneously and easily satisfy all of the above constraints for direct-drive is with the excimer gas lasers KrF and ArF. [1]

Scientific knowledge of fusion targets

Some people ignore all of the above and just want to know how many neutrons have been generated, and how much money has been spent. They then conclude that the NIF with indirect-drive must be far more advanced scientifically than an ArF laser with direct-drive. But actually it is the opposite. Very little of the NIF target physics is well understood because the gold layer restricts diagnostic access to the inside of the can. Where are the laser beams actually depositing the various portions of their energy, including the effects of refraction? Does any of the laser light refract directly onto the sphere? What are the density, temperature, and velocity profiles of the plasmas that ablate from the can wall and the spherical target? Where inside the can are the various residual laser-plasma instabilities generated? What about nonparallel density and temperature gradients that would generate massive currents and magnetic fields? Why are the neutron results not reproducible? There is very little knowledge of most of the NIF target’s physics behavior. That of course makes their eventual achievement of ignition all the more impressive.

Direct-drive laser fusion is much more advanced scientifically. The fusion target is just a spherical shell. The most dangerous and complex physics that need controlling have a high mode number around the sphere: various laser-plasma instabilities that scatter light or produce suprathermal electrons; Rayleigh-Taylor and Richtmyer-Meshkov fluid instabilities that distort the shell; early-time shock imprinting from the residual laser nonuniformities. All have been approximated and studied by accelerating a foil, using one-sided irradiation with multiple laser beams. The foil is an approximation to an early-time section of the spherical shell. One can diagnose the plasmas on both hot and cold sides of the foil, and measure the growing spatial and temporal mass variations across the foils with high resolution using x-rays and curved crystals. At NRL, using a few kJ KrF gas laser with superb time-averaged beam uniformity, the laser-target coupling and the hydrodynamic acceleration have been measured in detail over many years. Mostly the experimental data is in good agreement with complex 2D computer calculations that were performed before the experiments.

If the target chamber has 40 or more portholes, and the laser beams are well balanced in energy and aimed well, then there should be no significant low-mode distortion of the imploding shell. However because of the finite number of portholes there will be perturbations with intermediate spherical mode numbers. This problem needs further study, but does not currently appear to be a basic physics problem.

My conclusion is that the foil experiments at NRL, whose program I led until my retirement in 1999, have provided much more useful information about the performance of a future high-gain target than has the NIF target, which is almost a black box. (Pun intended.) One should thus ignore the neutron derby and ignore dollars spent as measures of scientific knowledge of fusion implosions.

Hybrid asymmetric illumination of direct-drive targets

The company Xcimer Energy has proposed a KrF laser design that would use very large amplifiers producing a total of at least 10 MJ. The laser pulse duration is compressed using stimulated scatter in several large gas tubes. Fluence levels are so high that final mirrors cannot be used. Their fusion target concept is a hybrid, with indirect-drive early in the pulse, followed by asymmetric 2-sided direct-drive for the main pulse. [5]

Two-sided illumination would create a low-mode illumination asymmetry on the sphere, with preferential heating at the two poles. To uniformly heat the sphere all the way to the equator, the laser focal profile has a higher intensity at the edge. We know however that because of refraction in the underdense plasma, the laser deposition at the equator would be at a lower plasma density than at the poles. Good spherical symmetry is thus not possible. To attempt to maintain the uniformity of illumination as the target implodes, the size of the focal spot also has to be decreased many times during the implosion, apparently requiring at least 14 different lasers, all in the horizontal plane.

The concept of introducing a strong low-mode asymmetry by using just two incoming laser beam directions, and then trying to remove this asymmetry using many non-collinear ring-shaped laser profiles, is not a good approach to fusion if one is concerned, as one should be, about achieving a high spherical convergence.

Because many separate lasers are required, with a high total laser energy, the lasers’ capital cost would likely be much higher, not lower as claimed, than with the standard approach using an ArF or KrF laser with multiplexing. Also, studies indicate that the laser would be only about 30% of the total cost of a fusion facility. And without optical beam smoothing there would be laser filamentation in the plasma that would enhance most of the laser-plasma instabilities. The Xcimer Energy proposal is basically a flawed approach to fusion energy that disregards the many past advances in understanding and controlling direct-drive implosions.

Symmetric illumination of direct-drive targets

The only potentially viable approach to a laser fusion power plant would use direct, symmetric illumination of a spherical target with an ArF gas laser that has broadband induced-spatial-incoherence optical smoothing. [6] The major remaining physics uncertainties need to be addressed with a new 30 kJ ArF laser-target facility that would use foil targets to evaluate the early-time laser-target interaction, but under conditions closer to those of a high gain target. [1] The program could then proceed immediately to a ~0.75 MJ high-gain test facility. The conceptual design of the reactor chamber has an inner surface wall of tungsten fibers and grazing-incidence final mirrors that may survive the repetitive thermonuclear explosions.

References

[1] Bodner, S.E., J. Fusion Energy 42, 33 (2023). https://doi.org/10.1007/s10894-023-00372-w

[2] Schmitt, A.J., Obenschain, S.P., Phys. Plasmas 30, 012702 (2023). https://doi.org/10.1063/5.0118093

[3] Lindl, J.D., Amendt, P., Berger, R.L., Glendinning, S.G., et al., Phys. Plasmas 11, 349 (2004)

[4] Kramer, D., Physics Today, pg. 25 (March 2023)

[5] Galloway, C., ARPA-E Workshop, https://arpa-e.energy.gov/sites/default/files/A06-Galloway-FusionWorkshop_03-07-23.pdf

[6] Obenschain, S.P., Fusion Power Assoc. Mtg., http://www.firefusionpower.org/FPA22_ArF_LaserX_Obenschain.pdf


A Necessary New Approach for the American Fusion Effort

Wallace Manheimer, NRL (retired), wallymanheimer@yahoo.com

This FPS essay is offered in the hope that it can persuade a major national DoE fusion-oriented lab to accept and sell direct drive laser fusion as its major effort, and thereby achieve leadership in the national fusion energy program. This suggestion is motivated largely by the success that the Lawrence Livermore National Laboratory (LLNL) has had with its indirect drive experimental program (1-4). I made this point as emphatically as I could in a book recently published by Generis Publishing (5).

Direct drive laser fusion using an excimer laser has been the approach that the Naval Research Laboratory has taken under the leadership of both Steven Bodner (~1975-1999) and Steven Obenschain (1999-2022). I was a participant in this program. Since I am one of the few at NRL who also worked on the tokamak project, I have a pretty fair idea of what tokamaks can and cannot do. Accordingly, at many informal meetings and at a formal seminar or two at NRL, I went out on a limb claiming that we had the best approach to fusion. However, the NRL group has been losing support and personnel recently, and I believe it is unlikely to recover. After all, laser fusion for the civilian economy is of little interest to the Navy. However, as I will argue shortly, there is still an important role NRL can play. The University of Rochester Lab for Laser Energetics (URLLE) is also investigating both indirect and direct drive laser fusion. I believe it is time for a larger effort, in a national lab dedicated to fusion, to take the baton. (Also, I have argued very strongly for fusion breeding (5-8), but this is a subject for another day.)

The LLNL laser called the National Ignition Facility (NIF) uses what is called indirect drive. That is, the target is placed inside a container called a hohlraum. The laser illuminates the inner high-Z walls to produce a black body of temperature 250-300 eV, producing an intense X-ray flux, which irradiates and implodes the target. A schematic of their configuration is shown in Figure 1, taken from (4).

The whole idea of laser fusion is to create a small thermonuclear burn in the center of the implosion. The 14 MeV neutrons escape, but the 3.5 MeV alphas are absorbed locally and heat the surrounding plasma. This allows the possibility of a burn wave, or in other words an ignited plasma. After years of trial and error, NIF finally produced this. Look at their measurement of fusion production as a function of time, from 3 shots, after the time of maximum compression. It is reproduced in Figure 2 taken from (3).

Notice that the maximum fusion power occurs after the time of peak compression; in other words, the peak fusion is from an expanding plasma. This could only result from setting up an alpha heated burn wave. Further evidence of an alpha burn wave can be gleaned from their direct measurements of radius and temperature of the expanding plasma. These have been presented in an online seminar (9) and in a conference plenary talk (10), but apparently have not yet been written up in the archival scientific literature. I attended both online presentations, and my sketches of their results are shown in Figure 3 below.

Figure 1: The configuration of the successful LLNL NIF experiment on producing an ignited plasma.

Figure 2: Fusion power vs time, relative to peak compression (t=0, gray dashed line). The shaded bands denote 1σ uncertainty.

Figure 3: Other LLNL diagnostics of the alpha burn wave.

Apparently, so far, they have had 3 such successful shots, the first with a Q = 0.76, the second with a Q = 0.63, and the third with a Q = 1.5.

I believe the LLNL result of getting a Q = 1.5, and more importantly, demonstrating an alpha burn wave in a laser fusion target in an indirect drive configuration, is a breakthrough for the ages. They achieved this at a much lower cost, nearly 20 years before ITER hopes to do anything like this. I believe that 100 years from now, it will be regarded as one of the most important experiments of the 21st century. The Secretary of Energy was at the presentation of their results. At the high holiday service at my synagogue, the rabbi, when presenting some of the good news of 5783, mentioned this result!

The path LLNL is on has quite a few problems if the goal is energy rather than nuclear stockpile stewardship. First, each shot involves a hohlraum, a precisely engineered container, made with expensive materials like gold or uranium and currently costing thousands of dollars each. Mass manufacturing of hohlraums will undoubtedly bring down their price considerably. But even if the target produces a total energy of ~100 MJ, this would translate to only ~33 MJ of electric energy, or ~10 kWh, worth about a dollar; this sets a very low limit on the ultimate economically acceptable hohlraum price. Second, only a small fraction of the laser light (in the form of X-rays) makes it to the target; the rest is lost through other channels. This is shown in Figure 4, taken from the LLNL publication (11).
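
The dollar figure can be checked directly (the conversion efficiency and electricity price below are assumed values):

```python
# Electricity value of one ~100 MJ shot (efficiency and price assumed).
E_fusion = 100.0            # MJ of fusion yield per shot
eta_th = 0.33               # assumed thermal-to-electric conversion
kWh = E_fusion * eta_th / 3.6        # ~9 kWh (1 kWh = 3.6 MJ)
print(f"{kWh:.1f} kWh ~ ${kWh * 0.11:.2f}")  # about a dollar at $0.11/kWh
```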

Finally, the LLNL configuration is fine for one shot, with the target precisely positioned in a small ‘tent’, so focusing the laser on it is relatively simple. To do this continually, as would be required for a power generator, targets would have to be continuously shot in at high speed. Not only must the target engagement be such that the laser focuses on it, wherever it is, but the hohlraum must also be oriented so that its axis aligns precisely with the laser light.

The sponsor of NIF is not energy but nuclear weapons simulation. Hence the sponsor has little interest in quantities crucial for fusion energy, namely laser efficiency, rep rate capability, bandwidth, and laser engagement of a wobbling, fast moving target. This argues in my mind for a large effort by a major DoE lab to set up a second laser fusion facility, this one focused on energy and direct drive, and using a more appropriate laser, most likely an excimer laser, which certainly has a shorter wavelength than that used at LLNL, and most likely has higher average power (i.e. faster rep rate) and higher efficiency. Tammy Ma also made the point in this forum that laser fusion should be supported for energy as well as nuclear weapon simulation (12). I wrote a letter supporting her suggestion (13). This essay continues this argument. Much more detail is in sections 4 and 5 of the book (5).

Specifically, if the NIF result can be replicated in a direct drive configuration, i.e. a configuration which discards the hohlraum and has nearly 100% of the laser light directly illuminating the spherical target, it would be an event of incredible importance.

Figure 4: A schematic of where the laser energy goes for an indirect drive configuration. Only 10-15% of the laser energy makes it to the target in the form of X-rays.

The American government-sponsored fusion effort for energy has focused only on magnetic fusion energy (MFE). However, these efforts, at various DoE labs, seem to have hit something of a brick wall. They all seem to be waiting for results from ITER, probably ~20 years from now, if there are no further delays. Hence, there is a strong argument for replacing the government sponsored magnetic fusion effort in the United States with a direct drive laser driven inertial fusion energy (IFE) effort, while still honoring its commitments to ITER.

The national DoE fusion lab I am most familiar with is the Princeton Plasma Physics Lab (PPPL, almost a neighbor of NRL). I know several people there and have always greatly respected their work on tokamaks. Accordingly, before submitting this essay to the FPS, I wrote a letter, dated September 26, 2023, to the president of Princeton University, the organization that manages PPPL, and made this very same suggestion to them.

I’ll start with the history of the PPPL as I see it. Up to ~1965 the lab concentrated on stellarators, which had very poor confinement. Then the Russians published their experiments on tokamaks, which showed much better confinement. Almost immediately the lab changed to tokamaks and had a wonderful 35 year run with them. PPPL led the world in magnetic fusion. For instance, Figure 3 of part 4 of (5) showed a chart of the 30-year accomplishments (1970-2000) of the tokamak effort. Princeton machines appeared 5 times: ST (symmetric tokamak), PLT (Princeton large torus), PDX (Princeton divertor experiment), and TFTR (tokamak fusion test reactor) twice. TFTR was the first tokamak to operate with DT plasmas and produced respectable amounts of 14 MeV neutrons. However, at the choice of either the lab or its sponsors, these experiments ended much too soon. While they got neutron production only in a hot ion mode, the JET tokamak in England showed DT fusion in both a hot ion and a thermal ion mode. I think PPPL should have attempted fusion in a thermal ion mode.

Furthermore, TFTR’s two main competitors, JET in England and JT-60 in Japan, did not quit in ~2000, as TFTR did; they continued to advance in a variety of ways. They especially did experiments on longer pulse times. After all, if you get results for hundreds of milliseconds, or a second or two, and steady state is what is needed, you should at least try to get results for several tens of seconds, as JET and JT-60 both successfully did. In the mid-1990s, PPPL did submit a proposal for a follow-on tokamak, TPX (tokamak physics experiment), which would have explored the possibilities of steady-state operation, one of the key difficulties confronting tokamaks. It was an excellent proposal, I think, but it was either rejected by sponsors or abandoned by the lab. A pity!

After ~2000, in my opinion, the lab seemed to flounder, needing a new focus. It is difficult to deny this. As far as I am aware, the lab has not had a major, world-class fusion experiment since then.

A search of the PPPL web site (14) in October 2023 shows two main experimental programs the lab is now involved with. One is helping the Japanese with their tokamak JT-60SA. The other is rebuilding its spherical tokamak (ST), which has been down for quite some time because of a broken coil. The lab claims this project is now 76% complete, and it will investigate liquid metal linings. However, STs almost certainly cannot lead directly to an economical fusion reactor, as it is unlikely that the center post can withstand the intense neutron flux in a reactor and remain superconducting. In other words, PPPL is now a lab in service to other labs, no longer the leader of the fusion project it was from 1970 to 2000. Switching to direct drive laser fusion is most likely the best way PPPL, and the United States, can reclaim a leadership position in fusion. There is no doubt in my mind that PPPL has the scientific and engineering expertise to pull this off. Of course, there is no reason another national DoE fusion lab, ORNL, LANL, or Sandia, cannot also vie for this role. As an admirer of PPPL, I simply contacted them first.

The book (5) makes a detailed case that there are significant advantages laser fusion has over tokamaks. Summarizing:

  1. Tokamak researchers still do not know how to drive the plasma current in steady state at an acceptably low power.
  2. ITER’s magnetic field stores the energy of a ~1-ton bomb; the plasma, which we do not understand well, the energy of a ~100-pound bomb (a rough estimate follows this list). IFE is much safer.
  3. Experimentally, MFE has no idea what to do with the alphas; it seems to regard them as a nuisance. IFE knows just what to do with them: they are a crucial, integral part of the entire concept, their role being to produce an alpha-driven burn wave, which LLNL/NIF has already demonstrated.
  4. IFE has no problem with recycling from the wall back into the plasma; MFE most likely has a big problem.
  5. Tokamaks are constrained by conservative design rules (5-8). Laser fusion has no such constraint that we are aware of at this point. We now know that IFE works at both the megajoule and megaton energy levels.
  6. Tokamaks have no flexibility in where to place the wall; IFE has several options, including a simple one using a liquid liner with a free surface (5).
  7. ITER has hit one delay and cost overrun after another. These delays have been going on for ~20 years, and its cost has so far increased by a factor of 4 or 5.
  8. While the LLNL/NIF laser fusion effort was also plagued by delays and cost overruns, these were nothing compared with those continually plaguing ITER.
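
As a rough sanity check on item 2 (using assumed round numbers, not ITER’s official figures): the energy density of a magnetic field is

    u = \frac{B^2}{2\mu_0} \approx \frac{(5\,\mathrm{T})^2}{2 \times 4\pi \times 10^{-7}\,\mathrm{H/m}} \approx 10^{7}\,\mathrm{J/m^3},

and over an assumed ~10^3 m^3 of field volume this gives ~10^10 J. Since 1 ton of TNT releases about 4.2 x 10^9 J, the stored magnetic energy is indeed of ton scale (one to a few tons of TNT equivalent).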

So let the rest of the world do tokamaks; let the privately funded ‘fusion start-ups’ make promises of quick fusion development, promises they have little chance of keeping (4, 8, 15-23); and let the American government-sponsored controlled fusion program, in a DoE lab, aided at least initially by NRL, do direct drive laser fusion with an excimer laser. Let the competition begin. It seems obvious to me that the DoE ought to switch its principal effort to direct drive laser fusion; in fact, considering the difficulties with MFE and the recent triumph of IFE, it seems to me nearly a no-brainer. If PPPL is the choice, the lab would do just what it did 60 years ago, when it abruptly, and correctly, changed to a much better course than the one it had been on.

The politics and strategies for pulling this off are way above my pay grade. However, I believe it is a goal fully consistent with the DoE’s mission, and I believe NRL can play an important role in bringing this transition about: it has unique experience with high-energy excimer lasers, which will likely be the best choice for direct drive laser fusion. I realize that I may come across here as having incredible chutzpah; I am hardly the world’s expert on either magnetic or inertial fusion. However, in my own mind at least, this FPS essay is offered with humility; with reasonable, but obviously incomplete, knowledge; and with faith that controlled fusion, in one form or another, will be the power source our future civilization really needs.

References

  1. H. Abu-Shawareb et al. (Indirect Drive ICF Collaboration), Lawson Criterion for Ignition Exceeded in an Inertial Fusion Experiment, Phys. Rev. Lett. 129, 075001 (2022)
  2. A. Kritcher et al., Design of an inertial fusion experiment exceeding the Lawson criterion for ignition, Phys. Rev. E 106, 025201 (2022)
  3. A.B. Zylstra et al., Experimental achievement and signatures of ignition at the National Ignition Facility, Phys. Rev. E 106, 025202 (2022)
  4. A.B. Zylstra et al., Burning plasma achieved in inertial fusion, Nature 601, 542 (January 27, 2022)
  5. Wallace Manheimer, Mass Delusions: How they harm sustainable energy, climate policy, fusion and fusion breeding, Parts 4 and 5, Generis Publishing, 2023, https://www.amazon.com/dp/B0C2SY69PW
  6. Wallace Manheimer, Fusion Breeding for Midcentury Sustainable Power, Journal of Fusion Energy 33, 199 (June 2014, open access), https://link.springer.com/article/10.1007/s10894-014-9690-9
  7. Wallace Manheimer, Magnetic Fusion is tough - if not impossible - fusion breeding is much easier, Forum on Physics and Society, July 2021, page 10, https://higherlogicdownload.s3.amazonaws.com/APS/04c6f478-b2af-44c6-97f0-6857df5439a6/UploadedImages/P_S_JLY21.pdf
  8. Wallace Manheimer, Fusion breeding and pure fusion development perceptions and misperceptions, International Journal of Engineering, Applied Science and Technology 7(7), 125-154 (2022), https://www.ijeast.com/papers/125-154,%20Tesma0707.pdf
  9. Online seminar offered by various members of the LLNL NIF team after their first successful fusion shot (Q ~ 0.76)
  10. Laurent Divol, plenary talk, APS-DPP meeting, Spokane, November 2022
  11. ‘Hybrid’ Experiments Drive NIF Toward Ignition, https://lasers.llnl.gov/news/hybrid-experiments-drive-nif-toward-
  12. Tammy Ma, Fostering a New Era in Inertial Confinement Research, Forum on Physics and Society, April 2022, https://engage.aps.org/fps/resources/newsletters/newsletter-archives/april-2022#Fostering
  13. Wallace Manheimer, Letter to the editor, Forum on Physics and Society, July 2022, https://engage.aps.org/fps/resources/newsletters/newsletter-archives/july-2022#Letters
  14. www.pppl.gov
  15. Daniel Jassby, Voodoo Fusion, Forum on Physics and Society, April 2019, https://engage.aps.org/fps/resources/newsletters/newsletter-archives/april-2019
  16. Daniel Jassby, Fusion Frenzy, a Recurring Pandemic, Forum on Physics and Society, October 2021, https://higherlogicdownload.s3.amazonaws.com/APS/a05ec1cf-2e34-4fb3-816e-ea1497930d75/UploadedImages/P_S_OCT21.pdf
  17. L.J. Reinders, The Fairy Tale of Nuclear Fusion, Springer, 2021, chapter 15
  18. Don Steiner, Commentary: Fusion energy isn't going to solve today's climate problem, Albany Times Union, January 3, 2023, https://www.timesunion.com/opinion/article/Commentary-Fusion-energy-isn-t-going-to-solve-17683732.php
  19. Martin Lampe and Wallace Manheimer, Comments on the Colliding Beam Fusion Reactor Proposed by Rostoker, Binderbauer and Monkhorst for Use with the p-11B Fusion Reaction, NRL Memorandum Report NRL/MR/6709--98-8305, October 1998
  20. https://www.youtube.com/watch?v=3vUPhsFoniw
  21. The next three references are, first, a proposal by General Atomics to build a tokamak pilot plant; second, this author’s article skeptical of the plan; and third, GA’s response.
  22. R.J. Buttery et al., The advanced tokamak path to a compact net electric fusion pilot plant, Nucl. Fusion 61, 046028 (2021)
  23. Wallace Manheimer, Comment on ‘The advanced tokamak path to a compact net electric fusion pilot plant’, Nucl. Fusion 62(12) (2022)
  24. R.J. Buttery et al., Reply to Comment on ‘The advanced tokamak path to a compact net electric fusion pilot plant’, Nucl. Fusion 62, 128002 (2022)

Top


Advances in Earth Observation Capabilities and their Impact on Nuclear Deterrence

Igor Moric, imoric@princeton.edu

Introduction 

Miniaturization of technology and cheaper launches, coupled with increased military and civilian demand for data, are driving a revolution in space-based Earth observation capabilities. Constellations of commercial and nationally operated imaging satellites already provide imagery of the surface at sub-daily frequency. If current trends continue, swarms of novel observation satellites can be expected, in the near future, to establish an environment with persistent, high-resolution, multispectral coverage and real-time delivery of raw data or AI-processed insights. This will affect the relationships of nuclear weapon states: overhead transparency may instigate a destabilizing nuclear arms race, or provide a path to increased nuclear stability.

Revolution of Earth Observation Capabilities

Starting in the 1980s, with the rise of the commercial Earth Observation (EO) industry, satellite imagery has become increasingly present in everyday life. Rising demand, followed by the gradual relaxation and, recently, the essential elimination of restrictions on data distribution, spurred a proliferation of private companies from all over the world that build imaging satellites, launch them, and offer imagery for purchase. Compared to national (“spy”) systems, commercial EO systems are smaller and typically have lower resolution, but there are many more of them, allowing for a higher frequency of observation and wider geographical coverage. In addition, such data is more widely available and can be shared by states with their partners without revealing their own classified capabilities.

In addition to systems that image the surface in the visible optical band (with a top resolution of 0.3 m for commercial systems and about 0.1 m for national systems), satellites carrying sensors sensitive to other bands have recently emerged. Synthetic aperture radars (SAR) emit their own radiation, which lets them maintain visibility at night and through clouds. Such sensors also allow very precise topographic models of the surface, easier detection of metallic and human-made objects, and even detection of surface deformations at the millimeter scale. Change detection is easier with SAR, since observations can be made from the same angle and at the same time, undisturbed by varying atmospheric conditions. Infrared (IR) imagery permits observation of thermal gradients on the surface, which facilitates detection of different types of human activity, including ground fighting and missile launches. As solar radiation bounces off ground materials, it carries some information about their chemical composition; hyperspectral imagers sample many bands of the electromagnetic (EM) spectrum to collect this information and perform a level of “imaging spectroscopy” of whatever they observe. Radio-frequency emissions from the ground can also be gathered by sensors placed in orbit, which can reportedly detect radio transmissions and even the use of ground radar.
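
To make the hyperspectral idea concrete, here is a minimal sketch (in Python) of the spectral angle mapper, a standard technique for comparing an observed pixel spectrum against a reference material spectrum. Everything below, including the three-band spectra (real instruments collect tens to hundreds of bands), is an invented illustration, not a description of any operational system.

    import numpy as np

    def spectral_angle(pixel, reference):
        # Angle (in radians) between two spectra treated as vectors;
        # smaller angles indicate a closer material match.
        cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    # Hypothetical reference spectrum for a human-made material, and a
    # hypothetical observed pixel spectrum (three bands, for illustration).
    reference = np.array([0.31, 0.28, 0.55])
    observed = np.array([0.30, 0.27, 0.52])
    print(spectral_angle(observed, reference))  # small angle -> likely match

The appeal of the angle (rather than a plain difference) is that it is insensitive to overall brightness, which varies with illumination.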

During most of the Cold War, it might have been possible to obtain snapshots of ground activity for a site of interest perhaps once per month, if weather conditions allowed. Today, a much larger number of operating satellites allows observation of ground activity in more detail and more often than ever before. If we were to combine all the operating commercial sensors into one super-constellation, it would already be possible to obtain 5-m-resolution imagery of most locations in the northern hemisphere daily or better with optical satellites, and within a few hours with SAR. [1] If national (“spy”) satellites are added, the coverage becomes even better.
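
The arithmetic behind such a super-constellation is simple: many satellites, each revisiting a site only occasionally, combine into a short mean revisit time. A toy estimate, with assumed (not actual) fleet numbers:

    # Toy revisit model: N satellites, each imaging a given site on average
    # once every T days with independent timing, yield a combined mean
    # revisit of roughly T/N. Both numbers below are assumptions.
    N = 200     # assumed number of imaging satellites in the combined fleet
    T = 10.0    # assumed per-satellite revisit interval (days)
    print(f"combined mean revisit: {24 * T / N:.1f} hours")  # -> 1.2 hours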

As the amount of data gathered increases, it becomes impossible for human analysts to process it all manually. The growth of sensing capabilities is therefore linked to the rise of artificial intelligence (AI) algorithms, which are used today for automated processing of raw data, fusion of different data sources, and analysis and distribution of the insights hidden within. For example, commercial providers offer product feeds that automatically detect buildings and roads in the imagery their sensors collect. More complex algorithms are able not only to detect but also to classify vehicles, aircraft, and ships. Palantir, for instance, offers closed-source, AI-based intelligence-gathering and battle-management software that automatically aggregates information from various sources, including secret intelligence tools, spy satellites, and commercial data. Its promotional videos show a human operator communicating with the software, which speeds up the kill chain from detection to classification, all the way to optimizing which unit to use for a strike.
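
The production systems just described rely on learned models whose details are not public. The following is only a minimal, hypothetical sketch of the first step of such a pipeline: flagging changed pixels between two co-registered images, so that downstream classifiers (or human analysts) only look where something happened. All data here is synthetic.

    import numpy as np

    def change_mask(before, after, threshold=0.2):
        # Flag pixels whose reflectance changed by more than `threshold`
        # between two co-registered, radiometrically matched images.
        diff = np.abs(after.astype(float) - before.astype(float))
        return diff > threshold

    # Synthetic 4x4 'images'; a real pipeline would ingest calibrated
    # scenes and pass the flagged regions to a classifier.
    rng = np.random.default_rng(0)
    before = rng.random((4, 4)) * 0.5
    after = before.copy()
    after[1, 2] += 0.4                 # simulate new construction at one pixel
    print(change_mask(before, after))  # True only where the scene changed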

Benefits and Dangers to Nuclear Stability

If recent trends continue, in the next 10-20 years it may become possible to probe the EM spectrum from a variety of orbits, allowing monitoring of all types of ground activity without gaps in observation. These capabilities will affect the relationships of nuclear weapon states, how the next generation of nuclear arms control treaties is negotiated, and how compliance is verified.

With overhead transparency it becomes easier to estimate an adversary's capabilities and monitor their activity. Detection of new construction and monitoring of facility operation are possible even without ground access. Imagery also makes possible the geolocation of military targets such as airfields, military bases, weapons storage facilities, and air defense installations; the detection of large troop movements; and signatures indicative of future activity such as increased supply-line activity, site parking and storage patterns, and even increased training activity. Other indirect observables of future military and nuclear activity include the strengthening of security, gathering of equipment, addition of roads, increased traffic, and accumulation of excavation debris in areas of strategic significance or near known nuclear facilities. With real-time persistent observation (and the help of AI algorithms) it may become possible to automatically discover, classify, and track vehicles, ships, and aircraft on a global scale. Nuclear programs have been shrouded in secrecy since their beginnings in the 1940s. Since it is difficult to know what your nuclear competitor is doing, the only safe assumption is the worst case: that, at any point in time, the adversary is building as many weapons as it can, as powerful as it can make them.

Nuclear relationships are therefore based not on trust but on a balance of sorts, enforced by a deterrent of massive retaliatory power. Survival is maintained at the edge of mutual annihilation; the choice is no longer between war and peace, but between war and suicide. [2]

Such a relationship is only stable if the leadership of nuclear weapon states is rational, and if the probability of success for a first strike is low or uncertain. To increase the uncertainty of success, some states rely on opacity and intentional ambiguity in their nuclear decision-making and operations. They may also attempt to appear irrational, hoping that, in a crisis, the adversary will be more careful and back off to reduce risk.

Lacking information, states rely on anticipation and speculation to derive what the other side will do, what they fear, and how they perceive risk. Yet the people who make up the leadership of any state have emotions, imperfect reasoning, and less control than they imagine, and they rarely account for accidents. They may also misread the adversary's signals, and the adversary may misread theirs. All this typically leads to worst-case thinking: the assumption that the adversary is always out to get you, developing new capabilities, deploying forces, and preparing for a strike.

Therefore, if nuclear deterrence relies on the rationality of all sides, and if we accept that the greatest danger of a nuclear war comes from an accident or a misjudgment, then overhead transparency can mitigate some of the problems of irrationality. It can disincentivize states from pursuing a threatening capability by making the attempt more costly when the adversary has time to prepare and react. It enables clearer communication, and makes it easier to recognize peaceful intentions where they exist. In general, transparency closes the gap between the perceived, worst-case speculated intent of the adversary and their actual capability.

However, growing transparency can be abused and can increase the likelihood of a nuclear conflict erupting. With more information available, imbalances between adversaries are revealed. The stronger side may decide to attack to exploit its advantage, or the weaker side may decide to strike preemptively, anticipating the attack. In addition, remote sensor data is only partial information: it may reveal what the other side is doing but not what its intention is, so biases are confirmed rather than challenged. Wider availability of data means more people can verify state findings, but it also increases public pressure on decision-makers and makes crisis management more difficult. How would the Cuban Missile Crisis have been resolved if spy-plane imagery of ballistic missiles in Cuba had been made public at the time, with everyone commenting on it online from inside their information bubbles?

Finally, nuclear deterrence relies on the survivability of nuclear retaliatory forces, which must be able to survive a first strike and retaliate if needed. To ensure the survivability of their arsenals, states rely on secrecy in nuclear development and operational practices, build more weapons than they may need, diversify how weapons can be delivered, and harden, conceal, and make mobile their delivery vehicles. The most survivable retaliatory forces are considered to be ballistic missile submarines (SSBNs) and land-based mobile missile transporter-erector launchers (TELs). The sea is not transparent, SSBNs are not vulnerable with current technology, and it is unlikely this will change in the near future; the same cannot be said of TELs. [3]

Persistent, multi-angle, multi-band monitoring provided by thousands of in-orbit sensors may be able to defeat most of the countermeasures available to a TEL operator. Beyond optical imagery and direct detection, the searcher can use SAR to construct 3D maps of terrain and derive where a missile launcher can and cannot move. SAR also provides visibility at night and in cloudy conditions, and can help identify tracks even after a vehicle has passed. Hyperspectral data can be used to identify human-made materials in a non-urban environment, and IR sensors can detect signatures of human activity. In addition, launchers are crewed by humans, and humans are imperfect: they leave tracks, use phones when they are not supposed to, and post on social media. Crews can operate optimally for a while, but not over a long period. Information collected and fused over a period of years would, in time, sufficiently decrease the uncertainty about the locations of TELs, and AI algorithms may even be able to derive a fleet's operational patterns, making it possible to deduce the approximate locations of launchers even when they are not directly visible to sensors.
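
A minimal sketch of how such fusion narrows uncertainty, assuming invented detection and false-alarm probabilities and a toy one-dimensional road network (nothing here reflects real sensor performance): each imperfect "look" multiplies a prior probability map by a likelihood ratio, and repeated looks concentrate the probability mass.

    import numpy as np

    def bayes_update(prior, flagged, p_hit=0.6, p_false=0.05):
        # One Bayesian update over a discretized area. `flagged[i]` is True
        # if a sensor reported activity in cell i during this pass; p_hit
        # and p_false are assumed detection and false-alarm probabilities.
        ratio = np.where(flagged, p_hit / p_false, (1 - p_hit) / (1 - p_false))
        posterior = prior * ratio
        return posterior / posterior.sum()

    belief = np.full(50, 1.0 / 50)   # uniform prior over 50 road cells
    for _ in range(5):               # five passes flagging the same cells
        flagged = np.zeros(50, dtype=bool)
        flagged[[17, 18]] = True
        belief = bayes_update(belief, flagged)
    print(belief.argmax(), belief.max())  # mass concentrates at cells 17-18

In reality the launcher moves between passes, which blurs the posterior; the argument in the text is that enough sensors and enough passes may outrun that blurring.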

Nevertheless, the probability of a successful strike on TELs does not change significantly even if they are localized, and their credibility as a deterrent is maintained. For example, the United States does not have enough missiles and warheads to strike an entire fleet of moving TELs and at the same time destroy all of an adversary's other nuclear forces, including command and control, silos, bombers, and submarines. Vulnerability is not binary: while being able to locate the launchers may decrease their survivability, it does not make it significantly likely that they can all be destroyed. A lack of nuclear parity does not default to all-out nuclear war, and as historical examples show, a non-zero chance of retaliation is sufficient to maintain deterrence between rational parties.

Conclusion: Accepting Transparency

The Cuban Missile Crisis made it clear to both the United States and the Soviet Union that a nuclear war could have no winners, and that a surprise attack was neither feasible nor rational. In the 1970s, the two countries agreed on a series of arms control treaties aimed at maintaining the nuclear stalemate by limiting the development and deployment of defensive and offensive nuclear capabilities.

Satellite imagery played an important role in verifying these agreements and was incorporated as part of national technical means (NTM), starting with the 1972 Anti-Ballistic Missile (ABM) Treaty. The Americans and the Soviets were not allowed to impede verification by the other side's NTM. Similar language remains in the last remaining nuclear arms treaty, New START.

Transparency provided by emerging EO capabilities moderated the Cold War nuclear competition between the United States and the Soviet Union. Even though revelations of an adversary's novel capabilities at times led to temporary instability, in the long term the increased visibility limited the worst-case scenarios of military planners and contributed to the implementation of arms control treaties, once the superpowers recognized the reality of their relationship and the political environment allowed it.

In a future driven by technological change but possibly without effective arms control treaties to manage the competition, nuclear weapon states face dangerous challenges and have a choice. They can counter increased transparency and technological development by becoming even less transparent about their activities and by pursuing further enlargement and diversification of their delivery vehicles and weapons. This would encourage adversaries' worst-case projections, lead to a new nuclear arms race, and may weaken nuclear deterrence. [4]

An alternative is to accept that rational leaders do not start nuclear wars. Nuclear symmetry is not a condition for cooperation, and as shown many times during the Cold War, fluctuations in the nuclear capabilities of nuclear weapon states do not default to nuclear war. The greatest danger of a nuclear war therefore comes from miscommunication and miscalculation.

Overhead transparency can bring more predictability to the relationships of nuclear weapon states and facilitate a new generation of arms control. The capabilities of EO systems have advanced significantly since the first generation of satellites was launched more than 60 years ago, and today's systems can produce much more information than was available for verifying the first generation of arms control treaties. This makes the initial assessment of an adversary's forces and capabilities easier, facilitates negotiation of new agreements, allows easier demonstration of compliance, and makes deception more difficult in the long term.

As a first step, satellite observation could take on an increased role in a New START follow-up agreement. Overhead transparency may also allow other, more advanced scenarios not possible before. For example, if all sides can monitor each other's nuclear facilities and bases without gaps in observation, they can be assured that nuclear components are not secretly transferred away after being placed in storage. This provides an increased level of control over non-deployed weapons, and also permits some de-mating scenarios. The higher resolution now available makes it possible to detect missile uploading, to see whether missile silos or submarine hatches are loaded, and even to count warheads from space to obtain an upper limit.

To conclude, rational nuclear powers must restrain themselves from provoking nuclear escalation, but, if they wish to survive, they are also obliged to reassure their nuclear adversaries and find ways to persuade them to act rationally. New technologies may make this possible, but only if nuclear weapon states accept the conditions of increased visibility and establish norms under which visibility does not fuel instability and arms racing but reduces the probability of escalation. The nuclear danger began in secrecy; it can only end in plain sight, with full transparency.

References

[1] Igor Moric, "Capabilities of Commercial Satellite Earth Observation Systems and Applications for Nuclear Verification and Monitoring," Science & Global Security 30, no. 1 (2022): 22-49, https://scienceandglobalsecurity.org/archive/2022/01/capabilities_of_commercial_sat.html.

[2] Richard Rhodes, The Making of the Atomic Bomb, Simon & Schuster, 1986.

[3] Igor Moric, "Nuclear stability in a world with overhead transparency," Comparative Strategy 42, no. 5 (2023): 621-654, DOI: 10.1080/01495933.2023.2236489, https://www.tandfonline.com/doi/full/10.1080/01495933.2023.2236489.

[4] Charles L. Glaser, James M. Acton, and Steve Fetter, "The U.S. Nuclear Arsenal Can Deter Both China and Russia: Why America Doesn't Need More Missiles," Foreign Affairs, October 2023, https://www.foreignaffairs.com/united-states/us-nuclear-arsenal-can-deter-both-china-and-russia.

Top

Reviews

Astrotopia: The Dangerous Religion of the Corporate Space Race

Mary-Jane Rubenstein (University of Chicago Press, 2022), 225 pages, ISBN 9780226821122, $24.00

When I first received ‘Astrotopia’ by M.J. Rubenstein in the mail, I did not expect it to claim a place in the “religion” section of my library. I assumed the subtitle, “the dangerous religion of the corporate space race,” used the word ‘religion’ as a pejorative. The book is actually a serious work of religious studies written for a popular audience. In it, Rubenstein sets out to examine the religious rhetoric and undertones of our current, corporate approach to space. This broad project is an interdisciplinary adventure that exposes the reader to history, biblical scholarship, literary theory, and international law, among other topics.

One of the main threads of the book is that outdated Jewish and Christian beliefs still motivate seemingly modern, secular perspectives on space. Rubenstein uses this angle to analyze the arguments of a range of contributors to the outer-space conversation, including businessmen like Elon Musk, politicians like Mike Pence, and scientists like Gerard O’Neill. She argues that Christianity is responsible for promoting a conquest-oriented view of nature that allows for the dangerous, often absurd excesses we see in our current hopes for space. The case is built historically by detailing how the spread of early Christianity in Europe worked against folk religions that personified natural entities (animist religions) and so tended toward a great respect for nature. In contrast, Christianity taught that nature is devoid of spiritual meaning and ripe for human exploitation. Rubenstein argues this perspective is the seed of our abrasive approach to nature today: the Christian idea that man is the pinnacle of creation has justified environmental destruction throughout history and remains alive, in a residual yet secularized form, in our thinking about space.

The book draws similar comparisons for much of the rhetoric of New Space discourse, like the idea that there is a paradise in the sky, or the religious idea of claiming land for the people of God. Rubenstein is careful to point out that while many of these views are historically related to Christianity and Judaism, they are not the current views held within those communities. Ironically, our modernist saviors espouse pseudo-religious beliefs that not even the religions are holding on to. The book is clearly written and accessible to a mass audience; those who are not experts in religious studies will find comfort in its unpresumptuous and entertaining prose. Astrotopia strikes a nice balance between historical, analytical, and argumentative writing. In my view, the extent of polemical writing in the book is well deserved by the absurdities of the New Space movement. Rubenstein is unapologetic in her criticisms while being refreshingly careful with her prescriptions. The later parts of the book are dedicated to building a counter-narrative to the New Space orthodoxy; Rubenstein recognizes that her thorough criticism calls for an alternative, and she presents that alternative with modesty. For instance, a section of the book makes the case that space itself has a right to be left alone. The argument is presented carefully, with full self-awareness of its foreignness to the modern mind. There is a lack of cavalierness in her prescriptions that contrasts nicely with the norms of the New Space community.

A few criticisms, which do not detract from the overall quality of the book, are possible. The project of the book is so broad that the prose must move quickly through many disparate topics. At times, this can feel like scrolling through an intellectual news feed that jumps from discussions of official American iconography to the optimism of afro-futurist art. The speed can leave a curious reader desiring more detail, a trade-off made for the book's breadth. Thankfully, the book has an ample bibliography that such readers can use. Personally, I found the pace exciting and enjoyed the tasteful indulgences and omissions of detail; I write this only to warn the most erudite among us who may desire something truly encyclopedic.

That said, I greatly enjoyed this book, and I give it my endorsement. The prose is as delightful and dynamic as its argument is novel and interesting. Physicists like us should read this book, especially those so buried in technical research that they rarely engage with work from the humanities. The book opens with Rubenstein sharing a story of a time she was nervous about presenting to a group of astronomers: how was she, a scholar of religion, supposed to make her work relevant to a community of quantitative scientists? Writing within these pages of the APS, I am happy to say that she successfully communicated her work (and its importance) at least to this lone physicist. I suspect the book will achieve the same effect for you!

Michael Cairo
University of Pennsylvania

The Climate Book

Greta Thunberg (Penguin Press, 2022), 446 pages, ISBN 9780593492307, $30.00

This is a huge book with contributions by many authors, collected by Greta Thunberg, the well-known climate activist nominated for the Nobel Peace Prize at an early age. She introduces and summarizes their work and the points they make.

This important work, which argues that solving the global climate crisis is essential to our survival in the short term, is a unique one that physicists need to be aware of. Many of its articles, particularly those written by Thunberg, are a plea for more climate activism.

The book consists of five parts.

Part One is titled “How Climate Works.” It deals with the spread of pollution and its consequences for mankind up to the present, with multiple articles arranged in different sections. To begin, there are articles by a science journalist, a professor of ecology, and an evolutionary biologist at UC Santa Cruz. Another section of Part One begins with an essay by Thunberg entitled “The science is as good as it gets,” followed by articles from an atmospheric scientist at Princeton, a professor of earth science at Harvard, and the director of the Potsdam Institute for Climate Impact Research, and finally Thunberg’s summary essay, “This is the biggest story in the world.” I found Part One very interesting, since it dwells on early humans’ global extinction of herds of large beasts; I had not realized that our problem with climate went back that far.

Part Two is entitled “How Our Planet is Changing.” This part deals with atmospheric changes, including the dangerous weather and temperature changes human beings have wrought. Thunberg presents experts who show that droughts and floods are increasing, ice sheets are changing, and deserts are forming.

Part Three is entitled “How It Affects Us.” Thunberg and her panel of experts detail health-related problems brought on by climate change, including heat and illness, vector-borne diseases, antibiotic resistance, and problems with food and nutrition. In a section entitled “We are all not in the same boat,” she and her experts point out that climate change presents different challenges in different parts of the world.

Part Four is “What We’ve Done About It.” She and her experts point out that we are, unfortunately, not moving in the right direction, and that those in charge keep saying one thing while doing another.

In Part Five, “What We Must Do Now,” Thunberg summarizes the whole book. First, she and her panel discuss the most effective ways to get out of the mess. Thunberg made her reputation as an outspoken climate activist, and the book reflects her outlook. Not surprisingly, it presents to the North American community a European perspective on our lifestyles, of which it is critical.

The book ends with a summary of what we can do as a society, and what you can do as an individual, to combat climate change.

It is clear even to a first-time reader of this book that our climate problems will not go away soon or easily.

As physicists, we are in a position to educate ourselves and others and to participate actively in the research and development Thunberg recommends.

Dr. Ruth Howes

Ball State University

rhowes@bsu.edu

Top