January 2025 Newsletter

From the Editor

Oriol T. Valls, the current Physics and Society Newsletter Editor, is a Condensed Matter theorist at the University of Minnesota.

This issue of the newsletter contains four news items, which is more than usual; but after all, this is called a newsletter because it is supposed to include news.

Congratulations to the new FPS-sponsored Fellows and to the FPS award winners. We also introduce the new Forum officials, who were just elected. Many thanks are due to the officials whose terms are expiring. They have been very helpful whenever difficulties with the newsletter have arisen.

We are also publishing a Letter to the Editor in this issue, one that agrees well with our repeatedly expressed wish to be controversial.

We also have an article that continues the discussion about fusion which has been going on for a while in these pages. We had a last-minute cancellation of another article, on the trade-offs between decarbonization and equity, a topic that was the subject of Forum-sponsored talks at the last March meeting. This is very important and very relevant to Physics and Society. Unfortunately, the authors work for the government and are now feeling the need to be very cautious about what they say. This is extremely unfortunate. I am in the process of soliciting articles on this topic from people free of these external constraints.

This newsletter and its contents are largely reader driven. All topics related to Physics and Society are acceptable, excluding only undiluted politics and anything containing invective, particularly of the ad hominem variety. Manuscripts should be sent to me, preferably in .docx format, except Book Reviews, which should be sent directly to Book Reviews Editor Quinn Campagna (qcampagn@go.olemiss.edu). Content is not peer reviewed, and opinions given are the authors’ only, not necessarily mine, nor the Forum’s nor, a fortiori, the APS’s either. But subject to the mild restrictions mentioned above, no pertinent subject needs to be avoided on the grounds that it might be controversial. On the contrary, controversy is welcome.

Oriol T. Valls                                                                                                                                          
University of Minnesota                                                                                                                             
otvalls@umn.edu


News

Fellows

Areg Danagoulian
2024 APS Fellow, nominated by the Forum on Physics and Society

For seminal technological contributions in the field of arms control and cargo security, which significantly benefit international security.

Kazi Rajibul Islam
2024 APS Fellow, nominated by the Forum on Physics and Society

For exceptional efforts in promoting science education in rural India and Bangladesh through community outreach, including founding the Bengali online journal Bigyan and co-founding the Open Quantum Design for open-source quantum computing.

Robert Tchitnga
2024 APS Fellow, nominated by the Forum on Physics and Society

For work elevating physics in Cameroon, including outreach to the public, school children, university students, and fellow faculty, and for use of physics to provide low


Prizes

Alex Glaser - Szilard Prize Winner
Princeton

For seminal scientific contributions and innovations to advance nuclear arms control, nonproliferation, and disarmament verification, and for leading the Princeton Program on Science and Global Security and mentoring many students and young researchers over the years.



Sebastien Philippe - Burton Award Winner
Princeton

For accurately estimating radiation doses from French and U.S. nuclear tests and effectively communicating these findings to the public, as well as assessing potential radiation from nuclear attacks on U.S. ICBM silos, demonstrating the importance of addressing scientific findings and consulting affected individuals.


Forum Election Results

The election of the new Forum officers for 2025 took place in November. We had 5,252 eligible voters, and 637 ballots were cast, that is, 12.13% of the eligible voters. While low, this is a higher percentage than last year, when 11% of 5,163 Forum members voted.

We congratulate our newly elected officers: 

Vice Chair: Jason Gardner, Oak Ridge National Laboratory

FPS representative to POPA: Savannah Thais, Data Science Institute, Columbia University

Two Executive Committee Members-at-Large: 

Philip (Bo) Hammer, Institute for Mathematical and Statistical Innovation, University of Chicago 

Mark C. Harvey, Texas Southern University 

These officers will assume their offices on January 1, 2025. They replace the outgoing executive committee members whose terms are ending, whom we thank for their dedication and service to the unit: 

Frederick Lamb, University of Illinois, Past Chair 

Laura Grego, Union of Concerned Scientists, FPS Rep to POPA 

Eliane Lessner, DOE Office of Science, Member-at-large 

Idalia Ramos, University of Puerto Rico, Member-at-large 


APS Global FPS Meeting Sessions

ACC = Anaheim Convention Center; M = Marriott

“March” meeting

SESSION 1: Intersections of quantum science and society

*co-sponsored with DQI 

5 speakers
Room: ACC 156 (Level 1) Thursday, March 20, 11:30am - 2:30pm

SESSION 2: Science communication in an age of misinformation and disinformation

3 speakers

Room: ACC 156 (Level 1) Friday, March 21, 8:00am - 11:00am

SESSION 3: Fusion Energy - lab to grid commercial development and climate impacts/ramifications

*co-sponsored with DNP

11 speakers
Room: ACC Livestream 159 (Level 1) Wednesday, March 19, 8:00am-11:00am

“April” meeting

SESSION 1: History and physics of the Manhattan Project and the bombings of Hiroshima and Nagasaki

*co-sponsored with FHPP

3 speakers
Room: M Platinum 9; Monday, March 17, 1:30pm-3:18pm

SESSION 2: Building Bridges through International Collaboration

*co-sponsored by FIP

3 speakers
Room: M Platinum 9; Tuesday, March 18, 3:45pm-5:33pm

SESSION 3: Awardee Session and light reception

Room: M Platinum 9; Wednesday, March 19, 10:45am-12:33pm

SESSION 4: The Physics of Climate Change: Unraveling Aerosols, Radiation, Clouds, and Precipitation for Future Projections and Societal Impact

3 speakers plus one moderator
Room: M Platinum 9; Monday, March 17, 3:45pm-5:33pm

Letters

Physics Students Should Not Have to Take Nontechnical Courses 

Physics students should not have to take nontechnical courses. The content of these courses can be easily learned outside university. Also, taking nontechnical courses means there is less time to take physics, mathematics, and other science and engineering courses. However, if physics students want to take nontechnical courses, they should be able to do so. 

Physics courses are much more important for physics students just like arts courses are much more important for arts students. Arts students don’t have to take physics courses so why do physics students have to take arts courses? It’s a form of bigotry against science for universities to assign greater importance to arts courses than physics courses. Courses in acting, dancing, film studies, and literature aren’t more important than courses in astrophysics, classical mechanics, quantum mechanics, and electromagnetism. If people in social sciences and humanities are offended by the assertion that science, mathematics, and engineering courses are more important for physics students to take, this shows bias against science, mathematics, and engineering. 

High school and elementary school are for providing a general education. High school and elementary school could be significantly enhanced to cover more important nontechnical material and material in science, mathematics, and engineering. In university, students should be able to focus on their fields of study. 

An undergraduate degree in science often requires about 10 nontechnical courses. Science, mathematics, and engineering courses are usually much more work than nontechnical courses so perhaps physics students could take 5 science, mathematics, or engineering courses instead of 10 nontechnical courses. 

If physics students can forgo nontechnical courses, then they can take additional courses in classical mechanics, quantum mechanics, relativistic mechanics, optics, acoustics, astrophysics, condensed matter physics, particle physics, nuclear physics, thermodynamics, electromagnetism, semiconductor device physics, quantum computing, engineering physics, medical physics, biophysics, biology, physical chemistry, chemical physics, chemistry, environmental physics, environmental science, geophysics, earth science, computational physics, software development, mathematical physics, mathematics, etc. The preceding courses in science, mathematics, and engineering would be more useful than nontechnical courses for most physicists.

If physics students are able to take more physics courses, more other science courses, more mathematics courses, and engineering courses instead of nontechnical courses in undergraduate degrees, they will be less likely to need graduate degrees and more likely to get employment in the field of physics. 

People don’t need to learn everything from university courses. I was able to get over 1000 reviewed publications in English without taking any university courses on English. I got numerous research papers published in the fields of politics, public policy, and education without taking any courses on these subjects. I was able to get numerous published op-ed articles on politics and public policy without taking any courses on journalism. I got numerous political poems and short stories published without taking any university courses on creative writing. I learned what I needed to know from the preceding subjects on my own. My university education was primarily focused on engineering, science, and mathematics. 

Accreditation organizations and universities requiring students to take nontechnical courses instead of physics courses are doing more harm than good. Accreditation organizations and universities should stop requiring physics students to take nontechnical courses. 

Biography: Ashu M. G. Solo has over 1000 reviewed publications and over 250 reprints of these publications. These publications are in the fields of engineering, mathematics, science, military studies, politics, public policy, computational psychology, education, etc. 

Ashu M. G. Solo, Maverick Trailblazers Inc.™, amgsolo@mavericktrailblazers.com


Articles

Fusion Power Plants Won't Happen Anytime Soon 

By Daniel L. Jassby (retired from Princeton Plasma Physics Lab) 

“Common sense ain’t common.” —Will Rogers 

In the last five years hundreds of journalists, editorialists and government policy makers have apparently fallen victim to the salesmanship of fusion energy promoters. They have succumbed to the propaganda line that essentially all the scientific issues of fusion power have been settled, and only residual engineering problems need to be resolved. Other proponents assure us that even the engineering issues are in hand, and the implementation of fusion power requires only ensuring a robust industrial supply chain and accommodating nuclear safety regulations. 

What is the reality of a fusion energy source? For 75 years national governments have funded programs to achieve terrestrial fusion energy. That period has been bookended by two versions of the only technique that has proved capable of igniting a thermonuclear burn, namely, implosion of a fusion fuel capsule driven by soft X-ray ablation of the capsule surface. The principal difference in the two versions was the source of the X-rays: a 200-terajoule fission explosion in the Ivy Mike (“H-bomb”) shot of 1952 [1], and a 2-megajoule laser pulse in Lawrence Livermore’s NIF (National Ignition Facility) in 2022 [2]. Despite the 8 orders of magnitude difference in supplied energy, the X-ray pulse length (3 to 10 ns) and effective black-body “temperature” (300 eV in NIF and 1,000 eV in Ivy Mike) were remarkably similar.

The achievement of ignition and propagating thermonuclear burn in the NIF has been called a “Kitty Hawk” moment for fusion R&D, but a better analogy is the first manned moon landing in 1969. Within a few years after the Wright Brothers’ first flights, dozens of fliers in the US and Europe made even more impressive demonstrations, but in half a century no entity has been able to repeat NASA’s 1969-72 lunar landings (not even NASA, to date). The NIF results were achieved by a uniquely skilled assemblage of scientists and technologists with highly coordinated support teams of unparalleled capabilities embracing countless disciplines. Those capabilities cannot be readily duplicated, and it will take decades for another laboratory in the US or abroad to replicate NIF’s achievement. There exist large laser-fusion facilities in France, China and Russia, but the fusion output from France’s LMJ is reminiscent of the US’s ICF performance in the 1980’s, and no significant results have been reported from China’s SG-III or Russia’s UFL-2M. 

Scientific feasibility remains a formidable challenge 

Can any other fusion scheme demonstrate scientific feasibility? For a power-producing reactor, “scientific feasibility” is the ability of the underlying fusioning plasma to reach thermonuclear ignition, or at least to demonstrate fusion energy gains of 10 or more. The fusion energy gain, Q, is the ratio of fusion energy output to injected heating energy during a pulse. In radiatively-driven implosion the denominator of Q should logically be taken as the X-ray energy deposited on the fuel capsule. However, it is commonly taken as the 5 to 10 times larger laser energy that produces the X-rays. Consequently, in the first ignition-threshold shot of August 2021, Q was only 0.7, but it has since reached 2.4, and the NIF may attain Q = 10 some years hence. Whatever the definition of Q, ignition has occurred when the core of the compressed fuel capsule continues to rise in temperature after the compression phase is complete, as observed in the NIF’s Dec. 2022 shot and numerous later shots [2].
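In symbols, and using only the numbers quoted above (a sketch, not data from [2]; the laser pulse is taken as roughly 2 MJ and the 5-to-10 factor is the laser-to-X-ray conversion estimate just mentioned):

Q = E_fusion / E_input.

With the laser energy in the denominator, Q_laser ≈ 0.7 for the August 2021 shot implies E_fusion ≈ 0.7 × 2 MJ ≈ 1.4 MJ. With the 5 to 10 times smaller X-ray energy actually deposited on the capsule in the denominator, the same shot would be credited with Q_capsule ≈ (5 to 10) × 0.7 ≈ 3.5 to 7.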

The successful NIF results promise nothing about the feasibility of any other proposed method for controlled fusion. There do exist two other plausible concepts, namely laser compression of a spherical fusion fuel capsule by the laser beams themselves (called “direct drive”), and the magnetically confined tokamak. Using direct drive, the OMEGA facility at the Univ. of Rochester’s LLE has produced 900 J of fusion energy with a 28-kJ laser pulse [3], giving Q = 0.03. Any attempt to reach ignition with direct drive will require a new facility delivering much higher laser energy, a decade-long project to implement. 

The leading magnetic confinement concept, the tokamak, has attained a maximum Q of 0.6, but half of that comes from beam-thermal reactions, that is, fusion reactions between injected beam ions and plasma thermal ions, which are not scalable to high Q. The highest thermonuclear Q achieved in any tokamak is at most 0.3, nearly two orders of magnitude below the Q = 10 required for a power reactor. There have been no advances in the thermonuclear performance of tokamaks since 1997 [4]. The JET tokamak’s much-heralded “record fusion energy pulse” of 59 MJ in 2021 was achieved by injecting deuterium beams into a tritium target plasma [5], and while the total Q was about 0.35, the thermonuclear (non-beam) Q was only 0.08 for that shot [6].
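To unpack those JET numbers (a rough decomposition implied by the quoted gains; the injected-energy figure is inferred here, not taken from [5] or [6]):

E_input ≈ E_fusion / Q_total ≈ 59 MJ / 0.35 ≈ 170 MJ of injected heating energy,
E_thermonuclear ≈ Q_thermo × E_input ≈ 0.08 × 170 MJ ≈ 13 MJ,

leaving roughly 46 of the 59 MJ attributable to reactions involving the injected beam ions rather than to a self-heated thermal plasma.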

Achieved fusion energy gains

Fusion concepts that have never used tritium can be compared using their operation in deuterium alone. This author made a compilation of record neutron yields in 1979 [8]. Except for tokamaks and laser-driven systems, little has changed in 45 years, with the old records still standing for most pinches of all types, magnetic mirrors, plasma focus, field-reversed configurations and electron-beam-driven systems. Since then, only beam-injected tokamaks, one RF-heated tokamak and one stellarator have achieved Q substantially greater than that produced simply by bombarding a deuterated target with a deuteron beam, viz. Q = 0.001% [7]. The stellarator is the tokamak’s close relation, and has achieved fusion confinement parameters and neutron output a factor of 20 below the tokamak’s best. Just two other concepts, the DPF (Dense Plasma Focus) and MagLIF (an imploding liner), have achieved fusion gains in deuterium comparable with the ultra-simple beam/solid-target method. All other magnetic confinement and magneto-inertial concepts that can produce any neutrons are 3 to 6 orders of magnitude behind the tokamak in the vital parameters Q and fusion-neutron output, despite their promoters’ perennial claims of reaching “breakeven next year” (i.e., Q = 1).

In summary, no fusion concept other than radiatively-driven fuel compression has come anywhere close to ignition conditions, and none will for decades, if ever. The SPARC tokamak under construction in Massachusetts [9] may attain Q = 1 by the early 2030’s, and the repeatedly delayed ITER tokamak [10] is supposed to reach Q = 10 in the 2040’s, but there is no certainty of achieving either result. Recently announced delays in the ITER implementation schedule reduce the probability, already marginal, that it will ever become operational [11]. It’s likely that new problems will arise leading to further setbacks in the schedule. If the ITER project collapses, a successful SPARC program will not save the world as its promoters claim, but at least it may save tokamaks from oblivion. 

Because of their inability to enhance fusion parameters since 1997, tokamak and stellarator operators in the last decade have concentrated on increasing discharge pulse length, asserting that longer pulse length signifies progress toward power reactor operation. But they report nothing about the variation of D-D neutron output during the pulse, if indeed there is any, and that is the most vital parameter for a reactor. Considering the inescapable adverse interactions between the plasma and first-wall components in toroidal confinement devices, it remains an open question whether fusion production, if any, can be maintained during long pulses. 

Resurrecting discarded fusion concepts 

As noted above, most fusion concepts reached their maximum possible performance decades ago, yielding Q-values well below that from a simple beam-target system [8]. Many especially hapless schemes can produce no neutrons whatever, and no neutrons means no fusion. In all cases obstacles to improved performance of the underlying plasmas are not merely challenging but insuperable. Nevertheless, private fusion companies are attempting to resurrect many of those zombie concepts to serve as corporate centerpieces. 

Promoters claim that the latest supercomputers and artificial intelligence (AI) will now make discarded fusion approaches feasible. While AI and machine learning can process experimental datasets and other information orders of magnitude faster than older methods, these supposed cure-alls for ailing plasmas will not bring recalcitrant fusion schemes into line. AI optimizes performance by processing all existing data, but it can salvage no concept whose optimal performance falls short for physics reasons.

The trumpeted importance of supercomputers and AI is belied by the origins of today’s leading controlled-fusion concepts. Fuel compression driven by soft X-ray ablation was conceived in 1951 as the Teller-Ulam configuration and successfully implemented in the Ivy Mike device in 1952 with the help of computing machines that were nothing more than glorified desk calculators. The tokamak was conceived and developed in the 1950’s and 1960’s in the Soviet Union with no computer assistance whatever. Both methods originated and evolved with Natural Intelligence (NI), and nothing has yet supplanted them, although a more convenient source of X-rays (the laser) was found for the first method, and new heating methods (particle beams and RF waves) were applied to the second.

Government planners embrace fusion frenzy 

Recently, there has appeared a new phenomenon in the “strategic planning” of government energy agencies. National governments have actually begun to believe the preposterous claims of several dozen private firms that they will deliver fusion-based electricity to the grid in the 2030’s. So the governments of the US, UK, Japan and South Korea, among others, have panicked, pushed aside their plans for DEMO’s in the 2040’s or 2050’s, and now aver that they will implement far more ambitious electricity-producing “fusion pilot plants” by 2040. 

Accordingly, the two distinct spheres of government-supported labs and private fusion companies have put forward a host of proposed engineering test reactors, demonstration plants, and fusion power “pilot plants” that have absolutely no scientific or technological justification, as they are based on magnetically confined plasmas with high energy gain that nobody has come close to producing and will not for decades, if ever. Devoid of NI but bolstered by AI, the design teams might as well be planning a crewed spaceship voyage to Mars using a Piper Cub. 

Besides the lack of scientific feasibility, there are two more fundamental show-stoppers. First, every fusion facility consumes megawatts to hundreds of megawatts of electricity, but no device has ever produced even a token amount of electricity (kilowatts) while gorging on megawatts [12]. It may well be decades before anyone can make even that modest a demonstration. Second, 80% of D-T fusion energy emerges as streams of hugely energetic neutrons, but no one in any line of endeavor — reactors or accelerators — has ever converted neutron barrages into electricity. Apparently nobody can, but every one of the public and private fusion schemes proposes to realize, in a single step from today’s primitive plasma toys, not just electricity production, but net electrical power.
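That 80% figure follows directly from the kinematics of the D-T reaction (standard textbook numbers, not specific to any device):

D + T → He-4 (3.5 MeV) + n (14.1 MeV),

so the neutron share of the fusion yield is 14.1 / (14.1 + 3.5) = 14.1 / 17.6 ≈ 0.80, and only the remaining 20%, carried by the charged alpha particle, stays in the plasma.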

Every one of the public and private grand plans is a castle of sand that can collapse at any time. Some already have crumbled, such as Lockheed’s imaginary compact fusion reactor, General Fusion’s cancelled demo reactor in the UK, and South Korea’s K-DEMO originally to be implemented by 2030. The remaining grandiose fantasies will evaporate like their 20th century predecessors, most gone before the end of the USDOE’s “bold decadal vision.” 

Time scale for technology development 

Many supposedly reactor-relevant technologies are under development for those fusion concepts that are orders of magnitude away from basic feasibility. Ironically, no reactor technologies have been developed for the single concept (X-ray-induced fuel implosion) that has demonstrated scientific feasibility. 

Those missing technologies include a laser or particle driver providing a repetitive 5-ns, 5-MJ pulse with electrical efficiency of at least 10%; the manufacturing and mass production of highly sophisticated fuel targets for about $1 each; means of target injection, tracking and engagement by the incident driver beam; and means of removing target debris and positioning the next target. Most daunting is that all those functions must be performed 20,000 to 200,000 times per day, depending on the size of the fuel capsule and laser energy pulse. That rate contrasts starkly with the present NIF shot rate of just once per day. The target chamber must be protected by a wall of flowing liquid metal or molten salt that can withstand the explosive output of fusion neutrons and radiation and convert this incident energy to electricity. 
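To put those repetition rates in perspective (simple arithmetic on the figures above):

200,000 shots/day ÷ 86,400 s/day ≈ 2.3 shots per second, and even 20,000 shots/day ≈ one shot every 4.3 seconds,

versus the NIF’s present rate of about one shot per day, a gap of four to five orders of magnitude in repetition rate.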

The assembly of singular technologies required to transform NIF’s unique scientific achievement into a viable power generator will require many decades of innovation and development. This challenge and the improbable prospects for a demonstration of ignition or high Q with any other concept explain why there is no likelihood of a working fusion power pilot plant in the foreseeable future, and perhaps not in this century. 

References 

[1] Wikipedia entry for “Ivy Mike.”

[2] A. L. Kritcher et al., “Design of the first fusion experiment to reach target energy gain G > 1,” Phys. Rev. E 109, 025204 (Feb. 2024). https://doi.org/10.1103/PhysRevE.109.025204

[3] C. A. Williams et al., “Demonstration of hot-spot fuel gain exceeding unity in direct-drive ICF implosions,” Nature Physics, 5 Feb. 2024. https://doi.org/10.1038/s41567-023-02363-2

[4] D. L. Jassby, “The Quest for Fusion Energy,” Inference Vol. 7, No. 1, 2022. https://doi.org/10.37282/991819.22.30

[5] M. Maslov et al., “JET D-T scenario with optimized non-thermal fusion,” Nucl. Fusion 63, 112002 (2023). https://doi.org/10.1088/1741-4326/ace2d8

[6] Daniel Jassby, “Magnetic Fusion’s Finest is a Glorified Beam-Target Neutron Generator,” APS Forum on Physics & Society, Vol. 51, No. 3, July 2022 (online). https://engage.aps.org/fps/resources/newsletters/newsletter-archives/july-2022

[7] C. E. Moss et al., “Survey of Neutron Generators for Active Interrogation,” LANL technical report LA-UR-17-23592 (2017). https://indico.fnal.gov/event/21088/contributions/60822/attachments/38056/46206/Survey_of_Neutron_Generators_for_Active_Interrogation.pdf

[8] D. L. Jassby, “Maximum Neutron Yields in Experimental Fusion Devices,” PPPL report PPPL-1515 (1979). https://digital.library.unt.edu/ark:/67531/metadc1111062/m2/1/high_res_d/6202769.pdf

[9] A. J. Creely et al., “Overview of the SPARC Tokamak,” Journal of Plasma Physics 86, No. 5, 2020. https://doi.org/10.1017/S0022377820001257

[10] Michel Claessens, ITER: The Giant Fusion Reactor, 2nd Edition (Cham: Springer, 2023). https://doi.org/10.1007/978-3-031-37762-4

[11] Daniel Clery, “Giant international fusion project is in big trouble,” Science, 3 July 2024. https://www.science.org/content/article/giant-international-fusion-project-big-trouble

[12] Daniel Jassby, “Fusion Frenzy—A Recurring Pandemic,” APS Forum on Physics & Society, Vol. 50, No. 4, October 2021. https://higherlogicdownload.s3.amazonaws.com/APS/a05ec1cf-2e34-4fb3-816e-ea1497930d75/UploadedImages/P_S_OCT21.pdf


Reviews

Not the End of the World: How We Can Be the First Generation to Build a Sustainable Planet 

Hannah Ritchie (Little, Brown Spark, New York, 2024). 341 pp. $30. ISBN 978-0-316-53675-2.

Hannah Ritchie opens this book with the following paragraph: 

It has become common to tell kids that they’re going to die from climate change. If a heatwave doesn’t get you then a wildfire will. Or a hurricane, a flood, or mass starvation. Incredibly, many of us hardly blink before telling our children this story. It shouldn’t, then, come as a surprise that most young people think their future is in peril. There is an intense feeling of anxiety and dread about what the planet has in store for us.

She then goes on to explain how her studies in environmental geoscience (2010-2014) depressed her about the future until she found encouragement in Hans Rosling’s look at big picture data rather than individual depressing events in the news. We can have a future if we plan for it, and “it’s up to us to decide how many people,” she writes (p. 7). One encouraging example that she cites is that people are heeding climate scientists. “We need to believe that it is possible to tackle . . . the world’s environmental problems,” she states (p. 10), and she promises to do this by addressing “the seven biggest environmental crises we must solve if we are to achieve sustainability,” (p. 12) from the top down: air pollution, climate change, deforestation, food production, biodiversity loss, ocean plastics, and overfishing. 

As a senior researcher in the Programme on Global Development at the University of Oxford and deputy editor and lead researcher at Our World in Data, she does this in terms of graphs like those she prepared for Steven Pinker’s Enlightenment Now. In her opening chapter, on sustainability, she observes that the world has never met the United Nations criterion for sustainability of “meeting the needs of the present without compromising the ability of future generations to meet their own needs” (p. 17) and emphasizes that this means that both present and future generations have their needs met. What gives her optimism for the future are seven indicators: 1) reduced child mortality, 2) reduced childbirth mortality, 3) increased life expectancy, 4) reduced hunger and malnutrition, 5) improved sanitation and access to electricity, 6) improved education, and 7) reduced poverty. 

Whereas previous prognosticators have focused concern on Earth’s future population, Ritchie does not. Instead, she observes, the population growth rate has declined so that 2017 was the year of maximum children, and this will lead to a maximum population between 10 and 11 billion in the 2080s; and reducing the number of children born to 1.5 per woman would maximize population at 7-8 billion in 2100. Elevating everyone on Earth to Denmark’s standard of living would require five times the present global economy; and assuring everyone a daily wage of $30 would require more than twice the present global economy. This will be enabled by new technologies, which Ritchie says “are allowing us to decouple a good and comfortable life from an environmentally destructive one.” (p. 34) She adds that “The . . . technologies we need to fix our environmental problems have become viable . . . only in the last few years.” (p. 35) 

Air Pollution. In her chapter on air pollution, Ritchie observes that, except for depletion of stratospheric ozone by chlorofluorocarbons (which have been dealt with by the Montreal Protocol), the cause of air pollution is “burning stuff” – for home heating, transportation, and electric power. “Wood is worse than coal; coal is worse than kerosene; kerosene is worse than [natural] gas,” (p. 55) she writes. This is also the chronological sequence in which these fuels have been used for heating and power, and it led to buildups of air pollutant emissions in the last half of the twentieth century which have since seen reductions (98% for sulfur dioxide, 76% for nitrogen oxides, 94% for black carbon, 73% for volatile organic compounds, and 90% for carbon monoxide in the United Kingdom), following the Kuznets Curve, which shows things “getting worse before they get better.” (p. 50) Similar curves are seen for these emissions in China, but they are only now starting to come down from their peak. To further reduce air pollution (and also oppose climate change), we need to replace combustion with nuclear or renewable electricity (both of which are safer than burning fossil fuels) – and rely less on personal automobiles.

Climate Change. In her chapter on climate change, Ritchie writes that “We have to accept two things: climate change is happening, and human emissions of greenhouse gases are responsible . . . . The time for debating whether climate change is . . . happening is over. We need to move past it to the question of what we’re going to do about it.” (p. 73) If there are no climate policies, the world’s temperature will increase between 4°C and 5°C above pre-industrial levels by 2100, she states. If the present climate policies continue, the temperature increase will range between 2.5°C and 2.9°C; and if all present climate pledges are fulfilled, the temperature increase will be 2.1°C.

Large decreases in the cost of generating electricity by solar and wind, in addition to greater energy conservation and efficiency, have enabled global per capita carbon dioxide emissions to peak at 4.9 tons per person in 2012. Three quarters of our greenhouse gas emissions come from energy use in various aspects of everyday life (14% of them from transportation), and these emissions can be eliminated by using energy sources that don’t emit these gases. For personal transportation, this means driving electric vehicles: although “it takes more energy to manufacture the battery [of an EV] than it does to produce a combustion engine” (p. 94), an EV driven in the UK makes up the difference, cumulatively emitting fewer greenhouse gases, in less than two years. And 2017 was the year of peak sales of “petrol cars.”

The other quarter of greenhouse gas emissions comes from land use, primarily agriculture. To reduce greenhouse gas emissions from food production, we need to eliminate beef (which emits 50 kilograms of carbon dioxide equivalent to produce 100 grams of protein) and lamb (which emits 20 kg – all other protein sources emit less than 11 kilograms). 

Deforestation. In her chapter on deforestation, Ritchie observes that there are two principal reasons to cut down forests: 1) to get wood for building materials and energy; 2) to clear land for agriculture. After cutting down their forests, developed countries have new building materials and energy sources and improved agricultural yields through technology, and their forests are growing again. Now developing countries are beginning the same sequence, but we must head off their deforestation by providing them the same technology and even paying them for not cutting down their forests. To disincentivize deforestation, we must reduce demand for the products that drive it. The two leading drivers are beef (responsible for 41%) and oil seeds (18%). Beef is also an inefficient use of land to produce protein (164 square meters are required to produce 100 grams), outdone only by lamb (185 square meters). Short of eating less beef, Ritchie suggests switching from grass-fed to grain-fed beef and selectively using only the most efficient means to produce it.

Food. The thrust of Ritchie’s chapter on food is that “From overhunting animals for food and claiming their habitats for farmland, to killing off ecosystems with pesticides and fertilizers, the largest threat to the world’s animals is human demand for food.” (pp. 161-162) Although agriculture allowed humans, who had been nomadic hunter-gatherers, to establish settlements, they had to move their settlements when the soil became depleted – until they realized that they could overcome this depletion by planting peas or beans or spreading manure. Fertilizer, combined with new varieties bred by Norman Borlaug, has now led to increased crop yields, except in Africa, to the point that the world produces twice the food it needs, although it is not distributed equitably, and half is fed to livestock or made into synthetic fuels. 

But livestock are an inefficient way to produce food. Chickens produce 13 calories as meat for 100 calories fed to them, which is more than nine calories for pigs, four for lambs, and three for cows, in consonance with the disproportionately high carbon dioxide emissions to produce 100 grams of protein from beef or lamb and the disproportionately large amount of land to do the same. Eliminating beef and lamb would halve the land used for agriculture, accelerating a decline from a peak in 2000. Replacing this land with forests would absorb carbon dioxide as well as reduce its emission from agriculture. This would require meat substitutes, which have a much smaller carbon footprint, or, Ritchie suggests, a hybrid burger of beef and soy. 

Although the carbon footprint and land requirement for cow’s milk are only a fifth of those for beef, Ritchie recommends replacing cow’s milk with plant-based milks, “now often fortified with vitamins D and B12” (found only in animal-based food). (p. 179) This would further halve the amount of land needed for agriculture. She closes with a scenario in which all the world’s inhabitants follow her recommendations and in 2060, 10 billion of them are happily well-fed. In this she is able to deprecate Paul R. Ehrlich’s dire predictions in The Population Bomb.

Biodiversity Loss. In her chapter on biodiversity loss, Ritchie notes that 1.4% of mammalian species have gone extinct since 1500, as have 1.3% of bird species, 0.6% of amphibians, 0.2% of reptiles, and 0.2% of bony fishes. To those asking whether this constitutes a sixth mass extinction, Ritchie provides the definition that a mass extinction is characterized by the extermination of 75% of species within two million years. A simple extrapolation, sketched after this paragraph, shows that all the extinction rate percentages since 1500, if allowed to continue, would amount to a mass extinction. Interestingly, animals constitute only 0.4% of the Earth’s biomass (the rest is 0.8% protists, 1.8% archaea, 2% fungi, 13% bacteria, and 82% plants).
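Taking the mammalian figure as representative, a back-of-the-envelope linear extrapolation (my arithmetic, not a calculation from the book) runs as follows:

rate ≈ 1.4% / 520 years ≈ 0.0027% of species per year,
time to reach 75% ≈ 75% / 0.0027% per year ≈ 28,000 years,

far inside the two-million-year window, so the present rates, if sustained, would indeed qualify as a mass extinction.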

Ocean Plastics. Ritchie writes in her chapter on ocean plastics that “The world produces around 460 million tonnes of plastic each year, and 350 million tonnes of it becomes waste.” (p. 232) She notes that plastics play an important role in medicine, transportation (where they lighten vehicles and reduce the energy needed to run them), and food preservation. Because chemical recycling is too expensive, only mechanical recycling is currently being done, and most of the rest of plastic waste goes to landfills. But 80 million tonnes of waste plastic per year are mismanaged, and about one million tonnes enter the ocean (0.3% of the total waste), 80% of it from 1,656 of the world’s 100,000 river outlets, mostly rivers in Asian nations whose waste infrastructure has not kept up with their rapidly developing economies.

Overfishing. Were it not for government-imposed restrictions, the seas would be overfished, Ritchie notes in her chapter on overfishing. The need for restrictions was first recognized for whales, whose bones were used as well as their oil and meat; commercial whaling was made illegal in 1987. Fishing is still allowed, but only at the “maximum sustainable yield,” which leaves about half the pre-fishing population in the ocean. Aquaculture now produces more fish than are caught in the sea, with minimal carbon footprint now that farmed fish are fed plant-based food instead of wild-caught fish. A low carbon footprint is also needed to protect corals, which are threatened by rising ocean temperature; and two kinds of seafood to be avoided, because their carbon footprints are more than twice that of chicken, are lobster and flounder, owing to the fuel for fishing boats and the refrigeration of the catch.

In her conclusion, Ritchie notes that because our environmental problems are interconnected, they also share common solutions: shifting away from fossil fuels, not eating beef and lamb, and improving crop yields. “Lab-grown meat, dense cities and nuclear energy need a rebrand,” she writes. (p. 294) 

Ritchie also points out that vegans and “flexitarians,” nuclear fans and renewables advocates, cyclists and electric vehicle drivers must realize that they are all on the same team and must pull together against climate change deniers, fossil fuel companies, and meat lobbies (she shows this with a vector diagram). She “believe[s] that we can be the generation that meets the needs of everyone while leaving the environment in a better place than we found it.” (pp. 298-299) “What makes us different from our ancestors is that economic and technological changes mean we have options,” Ritchie concludes. But “a sustainable future is not guaranteed – if we want it we need to create it.” (p. 299)

I would add that Ritchie has written her message of hope for the future in an engaging manner that makes it very accessible to her readers. The graphs provide an added pictorial dimension to her text, and a detailed set of clear headings and subheadings adds to the clarity with which it is organized.

 

John L. Roeder
The Calhoun School, NYC
JLRoeder@aol.com


When Science Meets Power

Geoff Mulgan (Polity, 2024). ISBN 978-1509553068. $29.95.

"When Science Meets Power” is a thorough exploration of the intersection between science and government. This is a colossal topic that could easily fill many volumes of dry prose. The author attempts to tackle the subject in one concise book by breaking it into general explorations of history, philosophy, and politics throughout the six numbered sections of the book. These six sections answer questions like “How have states used science?” and “How can science work in a globalized world?”, with two sections each being dedicated to history, philosophy, and politics respectively. There is no doubt that this book establishes its points on a large body of research and experience. The book is also rich with insights and novel framings for common problems. However, I found the book hard to finish for its frequent lack of synthesis in response to each posed question. With a few exceptions, most of the book reads like a catalog of disparate facts that left me feeling confused about what each chapter was trying to argue. My excitement to read such a prima facie interesting book was lost through attrition and confusion, to the point where I cannot honestly recommend it. 

I first noticed issues with this book at its very beginning, where the history of the interface between science and politics is discussed. This section is broken into stories about the different uses governments have had for science. Some of these stories are patiently told, while others move rapidly between time and place in a way that can feel disorienting and tiring. One instance zooms from ancient Egypt to the moon landing in a single page (p. 58). Although such an extreme tendency is not typical of this section, this is where my frustrations came into focus: generally the prose has a quality where, looking in detail, one encounters a rapid patter of facts, and, zooming out, one struggles to understand what narrative is being developed across the length of the book. One can skim through the book and find many examples of this rapid toggling of narrative focus. Just now, I opened to p. 37 and saw Francis Bacon, Thomas Jefferson, and the USSR mentioned on one page. Perhaps serendipitously, I next opened to p. 215 to find “In 1867, Marx” and “Los Alamos in 1943” mentioned within a paragraph of each other.

My issue isn’t that one is not allowed to move quickly between topics; rather, it’s that this happens so frequently throughout the book that it becomes noticeable and displeasing. Just after I noticed this tendency in the history section, the philosophy section confirmed it to me. Section III, “The Problem of Truths and Logics”, was a veritable marathon of name-drops. The prose presented such a kaleidoscope of different philosophies that I felt at times the author was getting carried away with merely listing thinkers without taking the time to explain their ideas. Again, in the span of one page (pp. 106-107), we hear of Walter Benjamin, François Mitterrand, Max Weber, the Jesuits, Francis Bacon, and Carl Schmitt, each in a separate thought. I was left feeling desperate for the moment the author would slow down and explain something in detail, but such a moment rarely occurred throughout the remainder of the book.

I don’t want to give the impression that this book accomplishes nothing. The high flux of information and the ample bibliography show that this is a thoroughly researched work. My critiques above are an attempt to put into words why I felt frustrated and exhausted by the prose of the book, but they aren’t meant to imply the book is an unserious work. There are also several novel ideas that are cogently communicated, the different logics of science, politics, and bureaucracy being one example. The book is also clearly written in the shadow of the Covid-19 pandemic, in a way that felt like a thoughtful epilogue to all the madness that occurred during those years. The latter part of the book looks forward to the challenges ahead for politics and science, like climate change and AI, and the author offers some nice reflections on what could be done. Ultimately, despite my respect for the author and the quality of research behind this book, the difficulty of following its line of argument and its rushed presentation of facts keep me from recommending it to any friends. And as a result, I do not recommend it to you.

Michael Cairo
University of Pennsylvania
mcairo@sas.upenn.edu
