Archived Newsletters

Letters

A group of four letters continues the previous discussion on low-level radiation and nuclear power; the reader is reminded that the January 1997 issue of this journal was devoted to this important topic. As usual, all letters in Physics and Society may be edited or shortened by the Editor.

Principle of Collective Dose

In the January issue of Physics and Society, Richard Garwin claims that the views expressed by Bertram Wolfe are only "part of the story". He then goes on to apply the principle of collective dose to the public exposure that resulted from the Chernobyl accident. He implies that the majority of evidence and the scientific community consensus supports such an application.

I don't believe anything could be further from the truth. In 1995, the Health Physics Society (HPS) issued a position statement containing the following:

"In discussing the question of the limitations of extrapolation to estimate radiogenic risk in the millirem range, the National Academy of Sciences, in its 1990 BEIR V report noted '...the possibility that there may be no risks from exposures comparable to external natural background radiation cannot be ruled out. At such low doses and dose rates, it must be acknowledged that the lower limit of the range of uncertainty in the risk estimates extends to zero.' The Health Physics Society recommends that assessments of radiogenic health risks be limited to dose estimates near and above 10 rem. Below this level, only dose is credible and statements of associated risks are more speculative than credible." [1]

This statement, issued by a scientific organization dedicated exclusively to the protection of people and the environment from radiation and composed of more than 6,800 scientists, physicians, engineers, lawyers and other professionals representing academia, industry, government, national laboratories, trade unions and other organizations, does not appear to agree with Mr. Garwin's application of collective dose. He states that the average individual dose was 7 mSv (700 mrem), which is quite a bit lower than the cutoff level recommended by HPS for making risk estimates.

Also, an extensive amount of data exists that contradicts the use of the ICRP risk model, not only the single study in China referred to by Mr. Garwin. For a recent and brief review of those data, see Dr. Pollycove's article on the nonlinearity of radiation health effects [2]. In this article, data are reviewed from several studies that contradict the ICRP risk model Mr. Garwin has applied. A much more extensive review of the evidence contradicting this model can be found in the data document compiled by Radiation, Science, and Health, Inc. [3] This document references hundreds of studies that do not support the application of the ICRP model to the population exposure that resulted from the Chernobyl accident.

Finally, what is the harm in applying the linear ICRP risk model? Shouldn't nuclear power be able to stand up in a fair comparison with other energy sources? Absolutely, but the key word is "fair". Our regulatory policies are based on models similar to the ICRP risk model. They do not serve to protect the occupational worker or the public and may even endanger public health. [4-5] Monetary resources are scarce for simple everyday medical care, while billions of dollars are spent to reduce perceived radiation hazards that are trivial. These billions of dollars increase the cost of producing electricity from nuclear power without a benefit to the public. Nuclear power is being forced to compete on an uneven playing field.

While nuclear power can be competitive in the current environment, as Mr. Garwin points out, why should it be forced to compete at a disadvantage when science simply does not support the basis of that competition?

[1] "Position Statement on Risk Assessment" Health Physics Society, April 1995.

[2] Pollycove, Myron, "Nonlinearity of Radiation Health Effects," Environmental Health Perspectives, Vol. 106 (suppl. 1), February 1998.

[3] "Low Level Radiation Health Effects," Radiation, Science, and Health, Inc., Edited by J. Muckerheide, March 19, 1998. http://cnts.wpi.edu/rsh/Data_Docs/index.cfm

[4] Rockwell, Theodore, "Our Radiation Protection Policy Is A Hazard To Public Health," The Scientist, Vol. 11(5), March 3, 1997. http://www.the-scientist.library.upenn.edu/yr1997/mar/opin_970303.cfm

[5] Rockwell, Theodore, "Discussions Of Nuclear Power Should Be Based In Reality," The Scientist, Vol. 12(6), March 16, 1998. http://www.the-scientist.library.upenn.edu/yr1998/mar/opin_980316.cfm

Michael C. Baker, Ph.D., P.E., (505) 667-7334; Fax: (505) 665-8346

Radioassay and Nondestructive Testing Team, Environmental Science and Waste Technology Group Mail Stop J594

Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87545 mcbaker@lanl

Defending the Linear No-Threshold Hypothesis

Michael Baker (see previous Letter) responds to my letter in the January P&S in which I note that the International Commission on Radiological Protection (ICRP) and the National Academy of Sciences Committee on the Biological Effects of Ionizing Radiation (BEIR) provide a best estimate of cancer deaths at the rate of 0.04 per person-sievert (one sievert = 100 rem; for external gamma radiation, one sievert equals one gray). I said "There is little dispute over the collective exposure ... at 600,000 person-Sv ("p-Sv"). The cancer deaths are thus likely to be 24,000 ..." I did not say there was "scientific community consensus" (with the meaning of "unanimity"), but I did quote the two bodies that have been set up by the community to make these estimates. There is certainly great uncertainty as to the coefficients, not excluding the value of zero, as asserted by Mr. Baker. But quoting a statement of "6,800 scientists, physicians, engineers, lawyers and other professionals ..." does not add anything to our estimate.

Let's look at some of the evidence Mr. Baker cites (in Reference 3) attacking the "Linear No-Threshold Hypothesis" that underlies the use of "collective dose" with the recommended 0.04 cancer deaths per person-sievert. Ref. 3 states "... low-level radiation DNA damage is insignificant compared to normal oxidative DNA damage (0.3 cGy causes approximately six DNA damage events per cell, roughly the normal background radiation per year, compared to 240,000 per cell per day, or about 90 million per year, from normal oxidative DNA damage)."

John Graham(1) quotes the same 240,000 DNA events per day but goes on to say that "Double-stranded breaks constitute 5% of the single-stranded breaks, so that with a background level of 240,000 breaks per cell per day, there are 10,000 to 12,000 double breaks." But that is incorrect. According to Maurice Tubiana(2), 4% of the breaks due to radiation are double-stranded, while only one in 15,000 is double-stranded in the case of the spontaneous damage. So Ref. 3 is telling us there are six DNA damage events per cell per year due to background radiation and 240,000/15,000 = 16 per day, or 6000 per year, from normal oxidative DNA damage.

Is the ICRP coefficient 0.04 cancer deaths per p-Sv inconsistent with these data? Assuming (without asserting) that double-stranded DNA damage is the cause of all cancer, the 6,000 double-stranded DNA breaks per year, accumulated for some 40 years, would account for the eventual 20% of the people who die from cancer in every developed society. It is generally believed that it requires on the average some 5-8 unrepaired damages to the DNA in a cell to permit that cell to escape from the normal strict regulation of growth and to be a source of cancer; the abnormal cell must also escape "apoptosis," cell death imposed by internal monitors. Using Tubiana's "4% of the DNA damage events from radiation are double-stranded" and Ref. 3's "six DNA damage events per cell per 0.3 cGy", we find that some 80 double-stranded breaks are caused by one Gy (or one Sv for gamma radiation).

If we assume that the 80 double-stranded breaks due to 1 Sv are simply additive and of the same kind as the 6000 per year due to natural (non-radiation) events, we must consider that the 1 Sv effect is added to some 40 years of accumulated spontaneous damage--to some 240,000 spontaneous double-stranded breaks. But the increase in cancer rate would not be 80/240,000 = 1/3000, because the increased rate could apply to any one of the (let's say eight) steps required to transform a cell to be cancerous. A Taylor-series expansion(3) shows, in fact, that if there are n steps, then increasing the rate of each step by a factor (1 + e) increases the overall effect by a factor of about (1 + n e), so that 1 Sv of radiation would increase the cancer rate not by 1/3000 but by about 1/400.
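
A back-of-the-envelope Python script makes this arithmetic explicit; the inputs (six damage events per cell per 0.3 cGy, a 4% double-strand fraction for radiation damage, one in 15,000 for spontaneous damage, and an assumed eight steps to transform a cell) are the figures quoted above:

# Rough check of the double-strand-break (DSB) arithmetic quoted above.
events_per_cGy = 6 / 0.3                    # DNA damage events per cell per cGy of radiation
dsb_per_Sv = events_per_cGy * 100 * 0.04    # 4% double-stranded; ~100 cGy per Sv for gamma rays
print(dsb_per_Sv)                           # ~80 double-strand breaks per cell per sievert

spont_dsb_per_day = 240_000 / 15_000        # one in 15,000 spontaneous events is double-stranded
spont_dsb_40yr = spont_dsb_per_day * 365 * 40   # ~240,000 accumulated over 40 years

fractional_increase = dsb_per_Sv / spont_dsb_40yr    # ~1/3000 per sievert
n_steps = 8                                          # assumed number of steps to a cancerous cell
print(1 / fractional_increase, 1 / (n_steps * fractional_increase))  # ~2900 and ~370, i.e. ~1/3000 and ~1/400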

This is hardly a quantitative justification of the ICRP coefficient, which would predict 0.04/0.20 = 1/5 rather than 1/400; the ICRP coefficient would require radiation induced double-stranded breaks to be 100 times as effective as spontaneous breaking in leading to cancer.

Baker's Ref. 3 ridicules the linear no-threshold hypothesis as "... equivalent to predicting that: if 5 persons die in each group of 10 persons given 100 aspirins each, giving one aspirin each to 1000 persons will result in five deaths." In fact, if one out of 100 sugar pills carries a lethal dose of poison, then if ten persons are given 100 sugar pills each, six or seven will die, and giving one sugar pill each to 1000 persons will result in ten deaths--a nearly linear relation. The question is not one of arithmetic but of understanding which model is appropriate.
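
The arithmetic behind the sugar-pill comparison can be checked directly, treating each pill as independently poisoned with probability 1/100 (an assumption adopted here purely for illustration):

# Expected deaths under the sugar-pill model described above.
p_poison = 1 / 100
p_die_100_pills = 1 - (1 - p_poison) ** 100    # chance that at least one of 100 pills is poisoned
print(10 * p_die_100_pills)                    # ~6.3 of the ten heavy consumers die
print(1000 * p_poison)                         # 10 of the 1000 single-pill takers die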

Another major data point is also presented in Ref. 3, but, contrary to the implication, it does not contradict the ICRP coefficient. In brief, some 65,000 people were studied in a High Background Radiation Area in China, in which external radiation exceeded that in the control area (25,000 people) by 1.77 mGy/yr. The data in Table 3 of Ref. 3 show "all-cancer" rates equal in the two groups with 90% confidence limits of 0.86-1.15 for the risk ratio between the HBRA and the control area. What would the ICRP coefficient of 0.04 cancer deaths per p-Sv predict? Forty years of exposure would correspond to about 70 mSv which corresponds to an individual cancer risk of 0.28%. Compared with the normal cancer incidence of 20%, this would be an increase in cancer risk by 0.28/20 = 1.4%. Since the 90% confidence interval is +/- 15%, the experiment really has no power to detect predicted cancer augmentation to 1.014 times that in the control area; in no way does this epidemiological study contradict the ICRP estimate.
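
The prediction quoted here is simple arithmetic; a minimal sketch using the 1.77 mGy/yr excess exposure, 40 years, the ICRP coefficient of 0.04 per person-Sv, and a 20% baseline cancer mortality:

# Predicted excess cancer risk in the Chinese high-background-radiation area (HBRA).
excess_dose_rate_sv = 1.77e-3                     # Sv per year (1.77 mGy/yr external gamma radiation)
lifetime_dose = excess_dose_rate_sv * 40          # ~0.07 Sv over 40 years
excess_risk = 0.04 * lifetime_dose                # ~0.0028, i.e. 0.28% (ICRP coefficient)
print(excess_risk / 0.20)                         # ~1.4% relative increase, well inside the +/-15% band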

One can perfectly well understand the frustration of those dealing every day with radiation when confronted with statements or even regulations that demonize the slightest dose. And one can likewise sympathize with those who are incensed with a portrayal of the spontaneous rate of double-strand lesions as 10,000 per day, when it is almost a thousand times less.

Combined with a 20% natural cancer incidence rate, Taylor's expansion tells us that there is sure to be a linear, no-threshold rate of induction of cancer by radiation; the coefficient might be negative.

I agree with Mr. Baker that it makes no sense to have a criterion of "as low as reasonably achievable" for radiation exposure risk. If we take the ICRP coefficient as 0.04 cancer deaths per Sv, and the value of a life lost (or saved) as $1 million, then it is worth $40,000 to avoid an exposure to the public of one p-Sv. According to the 1993 report of the United Nations Scientific Committee on the Effects of Atomic Radiation(4), one gigawatt-year of electrical energy produced (GWy) from coal contributed some 7 p-Sv of exposure, while the operation of a typical nuclear plant contributed some 1.8 p-Sv. The energy produced sells for some $400 million, so the damage to society from reactor operation, $72,000, is tiny in comparison. UNSCEAR data show that the reprocessing of fuel from the same 1 GWy, as it was done commercially in France, provided a further global exposure of 1250 p-Sv, 99% of which comes from carbon-14. In addition, mining and milling of ore contribute, on the average, 150 p-Sv for the once-through nuclear fuel cycle, while the 20% economy in uranium from reprocessing and recycle reduces the mining and milling component to 120 p-Sv per GWy. The British Nuclear Fuels Limited operation at Sellafield now captures C-14, reducing the exposure from reprocessing; and modern mining and milling can reduce that component by a factor of 100 or more. The ICRP dose-response coefficient can play an important role in allocating resources, despite the uncertainty in its magnitude.
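
A minimal sketch of this cost-benefit arithmetic, using only the figures stated above (the $1 million value of a life is the assumption named in the text):

# Monetizing collective dose with the ICRP coefficient.
cost_per_person_sv = 0.04 * 1e6      # 0.04 deaths/p-Sv times $1 million per life = $40,000 per p-Sv
print(cost_per_person_sv * 1.8)      # ~$72,000 damage per GWy of reactor operation (vs. ~$400 million of sales)
print(cost_per_person_sv * 1250)     # ~$50 million per GWy for reprocessing as practiced, mostly from C-14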

Richard L. Garwin, IBM Fellow Emeritus
Thomas J. Watson Research Center
P.O. Box 218, Yorktown Heights, NY 10598-0218
(914)945-2555; FAX (914)945-4419

References

1 "The Benefits of Low Level Radiation", a speech to the Uranium Institute Annual Symposium, London, 1996.

2. Tubiana, Maurice, Radioprotection, 1996.

3. Crump, K.S., Hoel, D.G., Langley, C.H., and Peto, R. (1976), "Fundamental Carcinogenic Processes and Their Implications for Low Dose Risk Assessment," Cancer Research, 36:2973-2979, quoted in Wilson, R. (1997), "Low-Dose Linearity: An Introduction," Physics and Society, 26: January 1997.

4. UNSCEAR, 1993, Table 53.

Both Wolfe and Garwin are Right, but Incomplete

The argument between Dr. Wolfe and Dr. Garwin - each of whom obtained his Ph.D. in experimental nuclear or particle physics - is interesting because both are right in some respects and incomplete in others. Dr. Wolfe is correct: the deaths attributable to the Chernobyl accident are only about 40. But he is incomplete and, to this extent, misleading because, as Garwin points out, the calculated number of deaths worldwide may be 20,000. It is important to say "may be" because the calculation is based on a linear no-threshold theory, and most of the deaths (if they come) will come from very low levels that are indistinguishable from the variable background. Neither scientist made the important point that the general arguments for a linear no-threshold theory apply to a vast number of other situations in society where society conventionally ignores the effects of low exposures. Air pollution is the best-known example but only one of many. Many scientists believe that air pollution is causing the (delayed) deaths of tens of thousands of people in the USA every year.

Unless this or a similar comparison is made (which Dr. Garwin did not do here, and has not done on many other occasions), the discussion of the large number of calculated deaths from Chernobyl can be highly misleading.

Richard Wilson
Mallinckrodt Professor of Physics
Harvard University, Cambridge, MA 02138
(617) 495-3387; fax: (617) 495-0416; home (617) 332-4823

Wolfe on Garwin and the LNT

Richard Garwin, in his letter (P&S January 1999), attacks one part of my October letter in which I raise the point that low-level radiation, like low-level sunlight, may be healthy. Garwin quotes the International Commission on Radiological Protection (ICRP) and the National Academy of Sciences Committee on the Biological Effects of Ionizing Radiation (BEIR), which adopt the linear no-threshold (LNT) hypothesis.

"The LNT says in effect, that if gulping down thirty ounces of liquor will kill a person, then if thirty people each drink an ounce one of them will die. The linear theory was adopted at the start of the nuclear industry as a conservative means to protect the public at a time when the effects of low level radiation were not known or understood. It still is in effect. But the question is whether it is doing more harm than good.

Because the effects are so small, it is hard to develop a clear measure of low level radiation effects. But despite Garwin's arguments, the data we are now collecting [1] seem to indicate that at worst there is no effect below about 300 mSv, and at best it is healthy. Dr. Bernard Cohen of the University of Pittsburgh [2] found that people living in areas with high levels of radioactive radon develop less cancer than those in low level areas. Dr. Sohei Kondo of Japan [3] finds that those Japanese who received low levels of radiation from the atomic bombs live longer than those who received no radiation. And as mentioned in my letter, those living in high background radiation areas, like Denver, live longer than those in low radiation areas. [4] Despite Garwin's arguments about Chernobyl, there are still no measurable added death rates in the public of the former Soviet Union.

There is much additional data that seems to support the positive effects of low level radiation. But the effects are small so that the statistics are hard to verify.

One should understand that the linear theory, if wrong, can kill people. Even if one believes in the linear theory, it can do harm if it is not properly explained to the public. The European Chernobyl abortions are one example. Should one have aborted when the extra radiation was less than the normal variations in nature's background radiation? And should we be spending many billions of dollars, and risking lives, on activities and transportation intended to reduce radiation at military sites (like Hanford, WA) below the normal background variations? Is the public being helped by being frightened about lifesaving radiation sterilization of food?

Garwin and I apparently agree on the public benefits of nuclear energy. My concern is that those knowledgeable about radiation have not properly educated the public; and the extremists opposed to nuclear energy hurt the public by frightening them with distorted views about low level radiation.

References

1. Low Level Radiation Effects: Compiling the Data (1998). Prepared by Radiation, Science, and Health, Inc. Editor: James Muckerheide.

2. Cohen, B.L. (1995) Test of the linear-no-threshold theory of radiation carcinogenesis for inhaled radon decay products. Health Physics, 68, pp. 157-174.

3.Kondo, S. (1993) Health Effects of Low-Level Radiation. Kinki University Press, Osaka; and Medical Physics Publishing Co., Madison, WI

4. Yalow, R.S. (1994) Concerns with low level ionizing radiation. Mayo Clinic Proc., Vol. 69, pp. 436-440; ANS Transactions, Vol. 71.

Dr. Bertram Wolfe
Vice President of GE, Manager of its Nuclear Energy Division (Retired).
15453 Via Vaquero, Monte Sereno, CA 95030
Phone and Fax: 408 395 9039

Is Science Bad for the Poor?

I found Caroline Herzenberg's article on planning for the future of American science very interesting. However, in commenting on attitudes toward science, even she fell prey to a common misconception. She wrote that "it (science) has been instrumental in the development of a civilian technology that systematically widens the gulf between the rich and the poor."

In the long term (centuries) this is manifestly untrue. In Western countries, the vast majority of the population now has adequate food, clothing, clean water, and shelter, necessities that were not universally available in the 18th and 19th centuries. Life expectancy has risen dramatically. Transport speeds have increased from 5 to 50 or 500 mph for everyone, not just the rich.

Even in the last fifty years technology has not "widened the gulf." It might appear that way due to the nature of news: expensive new technologies are newsworthy but cheaper existing technologies are not. The first televisions, antibiotics, lasers, personal computers, ... were not for the poor. In addition to the necessities, the vast majority of the population now has a telephone, refrigerator, television, and automobile, as well as access to vaccines and emergency medical care. On an absolute scale, the improvement in the material condition of the poor has been phenomenal.

Only if you choose your time period carefully, define "rich" and "poor" restrictively, and define "gulf" as the ratio of incomes or the relative availability of luxuries can you argue that the gulf has perhaps widened. It is not reasonable, but all too human, to focus on a small relative change in the status of two groups over a time when the absolute wealth of both has increased extraordinarily.

Science clearly does have a perception problem. It stems from several sources: 1) people do not make absolute comparisons, only relative ones; 2) humans have a short memory: today's consumer good is disconnected from yesterday's scientific advance; and 3) science has been so successful within its own realm that our failure to be equally successful in the realm of societal problems causes disappointment and even hostility.

To partially overcome these perception problems, we need to remind people (and ourselves) of the tremendous differences scientific and technological advances have made in their own lives.

Lawrence Weinstein
Associate Professor of Physics
Old Dominion University
Norfolk, VA 23529
757 683 5803

Should Physicists Dismiss Speculation?

Two letters (R. Riehemann, July 98, p. 2; V. Raman, October 98, p. 2) have questioned the value of drawing philosophical or religious lessons from modern science. Riehemann is "personally acquainted with persons who have been seriously misled by such books," and wonders whether "these books help or harm the public...." Raman claims that "writers...created an altogether new genre of scientific knowledge which consists largely of poetic and picturesque world views, dubiously related to hard-core science.... This has become fertile ground for unbridled imagination, mystical interpretations, and theological extrapolations...."

I agree that there is plenty of published nonsense connecting modern physics to broader notions. But before we physicists dismiss all such speculations, we should note that we ourselves have for centuries indulged in such speculations.

The "mechanical universe" has been part and parcel of physics since Newton. Although today we no longer accept Newton's version of it, we still use the term to describe classical physics. And many of us contribute, wittingly or unwittingly, to the perception among the general public that the universe described by science is in fact precisely this impersonal, automatic, mechanical universe, allowing little room for such commonly-believed notions as free will or ultimate purpose.

The mechanical universe, whether the one that Newton and Descartes puzzled over centuries ago, or a more modern version, is in fact a grand philosophical scheme with immediate religious implications, of just the sort criticized by those who are nervous about broad interpretations of modern physics. For one famous example, it represents a grand extrapolation of classical physics, far beyond its then-known range of validity, to conclude, with Laplace, that "an intelligence which at a given instant knew all the forces acting in nature and the positions of every object in the universe...could describe with a single formula the motions of the largest astronomical bodies and those of the lightest atoms. To such an intelligence, nothing would be uncertain; the future, like the past, would be an open book." Talk about sweeping generalizations!

Much of the philosophical speculation from modern physics is less far-fetched than the mechanical universe. For example, consider the notion that every particle in the universe is "entangled" with every other particle, and that every detection-type event (of the sort that causes "collapse of the wave packet") therefore causes a simultaneous quantum jump of every particle in the universe. That is, every eye-shift as you read this page causes a subtle instantaneous shiver throughout every particle in the universe. This seemingly outrageous notion is a straightforward implication of quantum theory coupled with the rather broadly accepted notion that the universe originated in a single quantum event.

We physicists are awfully quick to criticize broad implications suggested by modern physics, but we have failed, for more than three centuries now, to seriously critique the mechanical, predetermined universe that is suggested by Newtonian physics. This is inconsistent, to say the least, and should cause us to adopt a rather broad-minded and humble view of the philosophizing of others.

Art Hobson
University of Arkansas

Equipment Efficiency Standards: Mitigating Global Climate Change at a Profit

Howard S. Geller and David B. Goldstein
Presented at FPS-APS Awards Session, Columbus, Ohio, April 14, 1998

Introduction: Physics in the Public Interest

A. Refrigerators and Power Plants That Were Never Built.
One of Leo Szilard’s claims to fame is the invention, with Albert Einstein, of several new technologies for domestic refrigerators. But due to the Depression and to unexpected progress from vapor-compression cycle refrigerators based on CFCs, the Szilard-Einstein refrigerators were never built. This talk is also about refrigerators that were never built: inefficient mass-produced refrigerators, along with inefficient air conditioners, washing machines, furnaces, and many other products. It is also about many costly and polluting power plants that were never built thanks to appliance and equipment efficiency standards.

The refrigerator that was never produced might be consuming as much as 8,000 kWh/year, if 1947-1972 trends had continued. (See Figure.) Instead, as a result of six iterations of standards, the average American refrigerator sold after the year 2001 will consume 475 kWh/year, down from an estimated 1826 kWh/year in 1974 despite continuing increases in size and features. (See Figure.) Peak demand savings, estimated on the assumption — which nobody realized was untrue at the time — that 1972 energy consumption would have remained constant without standards (rather than increasing), are about 13,000 MW today. But if we had extrapolated pre-1970’s performance, peak demand by refrigerators today would be about 120,000 MW, compared to the actual level of about 15,000 MW. The difference exceeds the capacity of all U.S. nuclear power plants.

Exponential extrapolation of past trends was not an unrealistic assumption at the time. Virtually every utility in the country, backed by its regulatory agency and by Department of Energy forecasters, was assuming that residential electricity use would continue to grow at about the 9.5% annual rate of the prior decades. The total growth in electricity consumption by refrigerators was also about 9.5% per year. Suggesting that this rate would come down in the future, as one of the authors did, was highly controversial.
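
A rough sketch of how such figures relate to peak demand; the stock of roughly 120 million household refrigerators, the present-day stock average of 1,100 kWh/yr, and the treatment of a refrigerator's peak contribution as approximately its average power (refrigerators cycle around the clock) are assumptions made here only for illustration:

# Rough illustration of the peak-demand comparison quoted above.
stock = 120e6            # assumed number of household refrigerators in use
hours_per_year = 8760

def fleet_power_gw(kwh_per_year_per_unit):
    # Average electrical power drawn by the whole refrigerator stock, in GW.
    return stock * kwh_per_year_per_unit / hours_per_year / 1e6

print(fleet_power_gw(8000))   # ~110 GW if the pre-1970s consumption trend had continued
print(fleet_power_gw(1100))   # ~15 GW for a present-day stock averaging ~1,100 kWh/yr (assumed)
print(fleet_power_gw(475))    # ~6.5 GW once the stock turns over to post-2001 units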

Why were the projected inefficient refrigerators and other products not built? The overwhelming cause was the development of efficiency standards for the products. Non-governmental organizations (NGOs), of which the authors are representatives, played a seminal role in creating the policy and legal atmosphere in which standards could be promulgated.

B. Institutionalizing Public Interest Physics
We are honored to receive an award as two representatives of the non-profit sector, both professionally trained scientists working full-time in public interest institutions that value scientific training and expertise. This is a new phenomenon in America, which appears to be ahead of the rest of the world in this regard. Leo Szilard of course pioneered this type of work in his founding of the Council for a Livable World.

Non-profit organizations promoting environmental quality or energy efficiency have been around for all of the 20th Century, but until the late 1960’s, these organizations were based primarily on volunteer effort and did not widely employ the knowledge of scientific or other professionals on a regular basis. This situation changed with the rising environmental awareness of the last three decades, and the non-profit sector has reached a level of scientific maturity that we believe is recognized by our receipt of the Szilard Award. Such awards are no longer solely for scientists working for universities, large laboratories, or the private sector. This year marks the second time that a scientist in one of our organizations has received this honor.

The accomplishments for which this award is presented this year are based, we believe, on a different perspective in looking at energy problems and their environmental consequences. There are two sources of this perspective. The first is our base in the non-profit sector. Scientists working full-time for NGOs had the resources to analyze the problems of energy use from a policy viewpoint as well as a technical viewpoint and to pursue answers — and solutions — to the questions of why the world was using so much energy.

Another source of this new perspective is the problem-solving approach that is provided by physics, as contrasted to the traditional economics approach.

Traditional economics tends to see energy as merely one of a set of commodities in the economy. Demand and supply of energy are determined by market equilibration.

When the first energy crisis hit, this line of reasoning predominated. It held that energy underlay nearly all of the productive processes of the United States and that its use was already optimized, so reductions in energy use necessitated by supply constrictions or high prices would come only at a sacrifice.

Physicists began to question this theory. First, analyses of the technologies for energy use found widespread unexploited potentials to reduce energy use by 30% or 50% or more with payback periods of three years or less.

Physicists began looking at broader ways of defining the problem that energy was being used to solve, and at broader views of different design principles that would allow large energy savings. This systems approach frequently could offer larger energy savings, lower overall costs, and higher quality energy services compared to a component approach.

The systems approach can be applied to the entire energy sector, comparing efficiency improvements with energy supply upgrades and developing a policy framework that picks the cheapest and most secure options first. This approach has been used in California, the Pacific Northwest, New England, and Wisconsin, saving consumers in these jurisdictions tens of billions of dollars.

Another key intellectual contribution by physicists was the insistence on comparing theory with experiment. Economic theory asserts optimization, but remarkably little study had ever been performed to determine whether this hypothesis was validated or contradicted by real-world practice. NGOs found, performed, or encouraged empirical research that showed massive market failures in the area of equipment efficiency.

Most recently, the hypothesis of market optimization has been falsified on a grand scale by the elaborate measurement and evaluation of utility incentive programs in California and elsewhere. Studies confirmed that California utilities had found over $2 billion of societal benefit, averaging a benefit-cost ratio of more than 2:1, during the early 1990s.

II. Benefits of Appliance Standards
Minimum efficiency standards on appliances and equipment provide broad benefits. Consumers save money; energy savings yield reduced pollutant emissions in the home and at the power plant; utilities benefit from the reduced need for investment in new power plants, transmission lines, and distribution equipment; and appliance manufacturers as well as retailers can benefit from selling higher priced, higher value-added products.

Appliance standards in the U.S. were initiated through a complex process involving the interplay of national and state regulatory initiatives. The first standards were adopted by states in the mid-1970s. Federal legislation called for national standards by 1980, but this effort was dropped by the Department of Energy in 1983. NGOs and states challenged this DOE decision in court; at the same time California responded to an NRDC petition and initiated proceedings on refrigerator and air conditioner standards in 1983. Following California’s adoption, other states began to promulgate their own standards.

In this atmosphere, the manufacturers agreed to negotiate consensus national standards with our organizations in return for preemption of further state standards. These discussions bore fruit, and national efficiency standards were adopted on a wide range of residential appliances, lighting products, and other equipment through the National Appliance Energy Conservation Act (NAECA) in 1987, along with amendments to NAECA adopted in 1988 and 1992. Pursuant to these laws, the Department of Energy (DOE) issued tougher standards via rulemaking on four occasions so far.

Standards already adopted are expected to save about 1.2 Quads (1.3 EJ) per year of primary energy in 2000, rising to 3.1 Quads (3.4 EJ) per year by 2015 (see Table 1). Since most of the savings is electricity, standards are expected to reduce national electricity use in 2000 by 88 TWh -- equivalent to the power typically supplied by 31 large (500 MW) baseload power plants. By 2015, the electricity savings from standards already adopted is expected to reach 245 TWh.
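
The plant-equivalence figure follows from a simple conversion; in the sketch below, the 65% capacity factor for a baseload plant is an assumption, since none is stated in the text:

# How many 500 MW baseload plants does 88 TWh/yr of savings displace?
plant_output_twh = 500 * 8760 * 0.65 / 1e6   # ~2.85 TWh/yr from one 500 MW plant at 65% capacity factor
print(round(88 / plant_output_twh))          # ~31 plants in 2000
print(round(245 / plant_output_twh))         # ~86 plants by 2015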

These standards will save consumers about $160 billion net (i.e., energy cost savings minus the increased first cost, expressed as net present value in 1996 dollars). This means average savings of over $1500 per household. Consumers save $3.20 for each dollar added to the first cost of appliances.
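
These per-household and benefit-cost figures are mutually consistent; the sketch below assumes roughly 100 million U.S. households, a figure not given in the text:

# Consistency check of the consumer-savings figures.
net_savings = 160e9              # net present value of savings, 1996 dollars
households = 100e6               # assumed number of U.S. households
print(net_savings / households)  # ~$1,600 per household (text: over $1,500)

benefit_per_dollar = 3.20        # gross savings per dollar of added first cost
added_first_cost = net_savings / (benefit_per_dollar - 1)
print(added_first_cost / 1e9)    # implied added first cost of roughly $73 billion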

Appliance standards reduce air pollution and greenhouse gas emissions substantially. Lawrence Berkeley Lab estimates that existing standards will prevent 29 million tons of carbon emissions, 286,000 tons of NOx emissions, and 385,000 tons of SO2 emissions in 2000. The carbon savings by 2010, around 65 million tons, is equivalent to removing around 30 million automobiles from the road.

Manufacturers' bottom lines will not be adversely affected by standards. Manufacturers incur additional costs to improve the energy efficiency of their products, but recoup these costs by selling higher value-added, higher priced products. Whereas competitive pressures make it difficult for an individual manufacturer to enhance energy efficiency unilaterally, uniform regulations level the playing field.

The benefits of appliance standards extend worldwide. Many products covered by the U.S. standards are produced and traded internationally, leading to diffusion of new technologies worldwide. For example, today refrigerators are more efficient in Brazil because many compressors used in U.S. refrigerators are manufactured in Brazil. The U.S. standards led to steady improvements in the efficiency of these compressors, which are used in Brazil as well as exported.

Following the U.S. lead, appliance standards have also been adopted by Canada, Mexico, Brazil, Japan, Korea, and the European Community. These countries are extending standards to additional products, and other countries, including China, are developing standards in order to reap even greater benefits.

III. Future Standards and the Kyoto Climate Protocol
NAECA requires the Department of Energy to consider amended standards on a regular schedule. It contains specific criteria for such standards, including cost-effectiveness for consumers and manufacturers. Physics and economics are supposed to guide the setting of standards.

Appliance efficiency standards — and their close cousins, efficiency standards for new buildings — could be a significant contributor to the U.S. goal under the Kyoto Protocol to reduce greenhouse gas emissions by 7% from their 1990 level. This goal entails a reduction of around 505 megatons of carbon equivalent by 2010. New appliance standards could provide roughly 30 megatons (see Table 2). This is 6 percent of the entire goal, coming mainly from the buildings sector, which accounts for 30% of total U.S. carbon emissions.

The effect of standards is amplified if we include the potential savings from new building efficiency standards. States that have taken a leadership role in promulgating appliance and equipment efficiency standards have also been global leaders in building energy efficiency standards. When pursued in tandem, savings from each policy have been comparable in magnitude.

If the rest of the United States is able to achieve the improvements in new building efficiency standards adoption and enforcement that West Coast states have already achieved, these standards will provide an additional 44 megatons of avoided carbon emissions by 2010 (see Table 3). These emissions reductions will be achieved with a net benefit of about $65 billion.

Total carbon savings from building and appliance efficiency standards cover 15% of the total U.S. goal under the Kyoto Protocol.
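
The percentages quoted in this section follow directly from the megaton figures; a minimal check:

# Appliance and building standards as a share of the U.S. Kyoto reduction goal.
kyoto_goal_mtc = 505    # required reduction by 2010, megatons of carbon equivalent
appliance_mtc = 30      # new appliance standards by 2010 (Table 2)
building_mtc = 44       # new building standards by 2010 (Table 3)
print(appliance_mtc / kyoto_goal_mtc)                   # ~0.06, i.e. about 6% of the goal
print((appliance_mtc + building_mtc) / kyoto_goal_mtc)  # ~0.15, i.e. about 15% of the goal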

Efficiency standards are not the only policy that can promote expanded energy efficiency in the building sector. Market transformation programs, tax credits, utility energy efficiency programs as facilitated through a public benefits charge, private or public research and development on energy efficiency, and information services can build upon the savings achieved by standards.

Adoption of these economically attractive measures greatly reduces the likelihood that unprofitable measures will be needed to meet the Kyoto target.

But the benefits from standards are dependent on policymakers' taking prompt action. Savings from standards take a relatively long time to occur. The standard-setting process itself takes two years or more, and manufacturers must be provided three years or more of lead time.

After standards take effect, energy savings will be obtained from that portion of the stock of equipment (or buildings) that is turned over. Energy-using capital tends to be long-lived: 10-25 years for equipment and 45-100 years for buildings.

These considerations limit the amount of energy savings and emissions reductions that can be achieved by the year 2010. Setting new standards on a wide range of products over the next three years could result in an emissions reduction of 59 MtC by 2020, but only 30 MtC by 2010. And if setting these standards is delayed by three years, the avoided emissions by 2010 would drop by about 50%.
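
A toy stock-turnover model shows why a three-year delay roughly halves the 2010 savings; the 15-year equipment lifetime, constant sales, and the assumed effective dates are illustrative assumptions, not figures from the analysis:

# Toy model: savings grow as the equipment stock turns over after a standard takes effect.
def stock_fraction_replaced(effective_year, target_year, lifetime=15):
    # Assumes constant annual sales and a uniform equipment lifetime.
    return min(1.0, max(0, target_year - effective_year) / lifetime)

full_savings_mtc = 59   # savings once the entire stock complies (the 2020 figure above)
for effective in (2004, 2007):   # prompt standard-setting vs. a three-year delay
    print(effective, full_savings_mtc * stock_fraction_replaced(effective, 2010))
# Output: roughly 24 MtC vs. 12 MtC by 2010 -- the delay cuts the savings about in half.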

This point has policy importance. Many policymakers are undecided as to whether the U.S. should ratify the Kyoto Protocol, primarily because of concerns about its economic effects. But appliance standards have a positive economic impact, and there is essentially no scientific dispute about this fact. It would be an economic (as well as an environmental) mistake to delay the adoption of new standards, particularly if the Kyoto Protocol is eventually ratified.

Considering both appliance and building efficiency standards, annual carbon emissions savings from the building sector more than double between 2010 and 2020, even assuming that no new standards are adopted after 2010. This calculation shows that if the U.S. building sector meets its share of the U.S. 7% greenhouse gas emissions reduction goal for 2010, even larger savings can be achieved automatically in 2020 and beyond.

In conclusion, this analysis presents multiple reasons why the United States should move forward aggressively with new appliance standards and other climate mitigation measures that can be justified without considering environmental benefits. By doing so, we not only mitigate global warming, but we facilitate compliance with the Kyoto Protocol painlessly — indeed, profitably — if the United States decides to ratify the Protocol.

Appliance efficiency standards have been one of the most successful public policy initiatives to promote energy conservation in the United States if not the world. We are proud of the results and proud of being recognized for the leadership we provided. While Dr. Szilard's refrigerators were not commercialized, we think he would approve of the "refrigerator revolution" brought about by these efficiency standards.

TABLE 1 - SAVINGS FROM EXISTING STANDARDS

                                   Electricity Saved   Peak Capacity      Primary Energy       Net Economic
                                   (TWh/yr)            Saved (GW)         Saved (Quads/yr)     Benefit
Standard                           2000      2015      2000     2015      2000     2015        (billion $)
NAECA                                 8        43       1.4     15.7      0.21     0.58           46.3
Ballasts                             18        24       5.7      7.5      0.21     0.28            8.9
NAECA updates in 1989, 1991          20        39       3.6      7.3      0.23     0.45           15.2
EPAct lamps                          35        90       7.0     18.0      0.40     1.04           65.5
EPAct other                           7        26       3.1      9.5      0.19     0.55           18.7
Refrigerators (2001)                  0        21       0        2.7      0        0.21            5.9
Room AC (2000)                        0         2       0        1.5      0        0.02            0.6
TOTAL                                88       245      20.8     62.2      1.24     3.13          161.2
Percentage of projected U.S. use    2.7%      6.0%      2.6%     6.5%      1.2%     2.7%            ---

Notes:
1) The percentage of projected U.S. use is based on forecasts in the Annual Energy Outlook 1998, Energy Information Administration, Washington, DC.
2) Net economic benefit is expressed in 1996 dollars, using a 7% real discount rate to calculate net present value.

TABLE 2 - ESTIMATED NATIONAL SAVINGS FROM FUTURE EFFICIENCY STANDARDS

                                      Savings in 2010            Savings in 2020
Product                            Energy     Carbon          Energy     Carbon
                                   (Quads)    (MtC)           (Quads)    (MtC)
Clothes washers                      0.08       2.2             0.24       6.6
Central ACs and heat pumps           0.21       4.5             0.48       9.2
Water heaters                        0.53       9.9             1.04      18.7
Fluorescent ballasts                 0.11       2.3             0.20       4.1
Transformers                         0.06       1.3             0.15       2.9
Comm'l packaged ACs & heat pumps     0.08       1.6             0.12       2.4
Packaged refrigeration               0.02       0.5             0.05       0.9
Furnaces                             0.07       1.0             0.26       3.7
Refrigerators/freezers               0.03       0.7             0.12       2.5
Room air conditioners                0.01       0.3             0.04       0.9
Power supplies                       0.21       4.3             0.29       6.0
Dishwashers                          0.02       0.3             0.05       0.6
Reflector lamps                      0.02       0.4             0.02       0.4
Gas ranges/ovens                     0.01       0.1             0.02       0.3
TOTAL                                1.51      29.7             3.14      59.1

Note:
Avoided carbon emissions are expressed in million metric tons assuming electricity savings come from fossil fuel-based power plants. Assumptions about power plant heat rates and carbon coefficients are derived from the Annual Energy Outlook 1998, Energy Information Administration, Washington, DC.

TABLE 3 - ESTIMATED SAVINGS FROM FUTURE BUILDING EFFICIENCY STANDARDS

                      Savings in 2010                          Savings in 2020
Sector            Energy    Carbon    Cost                 Energy    Carbon    Cost
                  (Quads)   (MtC)     (Billions)           (Quads)   (MtC)     (Billions)
Commercial          1.65     36.7      $50                   3.68     81.6      $90
Residential         0.33      7.2      $15                   0.59     13.0      $20
TOTAL               1.98     44.0      $65                   4.27     94.6     $110

NOTE: Calculations of energy and carbon savings are based on NRDC modifications of the Pacific Northwest National Laboratory-developed model for DOE input to the Government Performance and Results Act. Costs are estimated by summing over annual results of the modified model runs.

Howard S. Geller, American Council for an Energy-Efficient Economy

hgeller@aceee.org

David B. Goldstein, Natural Resources Defense Council

dgoldstein@nrdc.org

New Automotive Technologies

Marc Ross

The Need for Changes in Automobiles
One major goal for automobiles is reduced local pollution. In the US, grams-per-mile emissions of carbon monoxide and hydrocarbons are restricted in formal tests to be at most 4% of their mid-1960s levels. Nitrogen oxides are also strongly regulated. Unfortunately, the average car on the road emits 2 to 5 times more than the test levels.1 As a result, a great variety of new regulatory initiatives have been undertaken; and manufacturers have taken major steps to meet them, as discussed below. While this problem is being solved in new autos, emission of particles is of increasing concern, as discussed briefly in connection with diesels.

Another concern is fuel economy and greenhouse gas emissions. In the US, the fuel economy of new automobiles improved dramatically from 1975 to 1982 primarily in response to the Corporate Average Fuel Economy (CAFE) regulations, supported by gasoline price increases. Since then the fuel economy of new automobiles has stalled at an average test-value of 25 miles per gallon.2 Presumably because the inflation-corrected price of gasoline is low, individual buyers of new vehicles are not interested in fuel economy. However, as citizens rather than as individual buyers, people do want society to achieve higher fuel economy; they are concerned about the long-term future of fuel availability and, perhaps, about climate change.

Although fuel economy has stagnated, manufacturers have introduced more-efficient technology, using it to increase power, size and weight at fixed fuel economy. Two classes of large autos have also been introduced: minivans and "sport utility vehicles". The minivans provide transportation for one to seven people, offering great versatility for the cost. The SUVs are essentially alternative styling: where the pickup truck has long been a popular car-substitute among modest-income households, the SUV is popular with wealthier households. Roughly 80% of pickups and nearly 100% of SUVs are used just like cars, in spite of being regulated as trucks, and thus being less safe, polluting more, and using more fuel per mile. There is pressure to bring these vehicles under more stringent regulation.

In a sense, the strongest driver for new societal goals for automobiles is new technological capability. The capability to design and reliably manufacture high-tech products is revolutionary, with new materials and new kinds of sensors and controls based on microprocessors. Sensors are at the heart of this revolution. Where sensors once had to give simple, unambiguous signals, sharply limiting their application, today their responses are interpreted by a microprocessor, enabling the practical development of sophisticated controls on board the vehicle and in its manufacture.

New Technologies for Conventional Autos
Conventional vehicles are achieving ever-higher performance and reliability while meeting stricter emissions and safety standards. And the petroleum-fueled internal combustion engine will continue to be improved. Consider emissions first.

A recent technological surprise with major consequences is that the conventional automobile can be extraordinarily clean in actual use. Two improvements are responsible. First, a proliferation of sensors coupled to the microprocessor that manages the engine enables improved and durable control of the air-fuel ratio, the variable to which catalytic exhaust clean-up is most sensitive. Second, more durable and rapid-acting catalytic converters have been developed, using coatings that degrade less at high temperature; as a result, a small catalytic converter can be placed next to the exhaust manifold where it heats up quickly, converting much of the pollution during cold start. In addition, the manufacturers, in meeting a regulatory requirement for on-board diagnostic equipment, have learned much more about how their emissions controls function in the real world.

In the future this success can be strengthened. Conventional gasoline-fueled vehicles with essentially "zero emissions" are feasible. Almost complete control of combustion, based on measuring the performance of each cylinder in each cycle, is in hand, using sensors with good time resolution. One type examines the angular acceleration of the flywheel, another examines properties of the exhaust. Pressure measurement within each cylinder is also in development. Coupled with this, "direct injection" of fuel into each cylinder, being introduced by Mitsubishi and others, improves opportunities for control of the fuel-air ratio. (In today's spark-ignition engines, fuel injection is into the intake manifold. As a result, only about half of the fuel taken into the cylinder is injected during the same stroke, the rest being swept up from previously injected fuel.)

Energy efficiency improvement is a more difficult challenge than after-treatment of the exhaust. The laws of physics make it more difficult, and so does the absence of new regulatory pressure. Nevertheless, powerful energy-saving technologies are being adopted and are in development. An example is, again, the direct-injection gasoline engine. With a fuel spray controlled in space and time, a stratified charge can be created such that combustion is reliable even with overall fuel-air mass ratios about one-third of stoichiometric. (With a uniform static mixture, the flame goes out if the gasoline-air ratio is less than about 0.7 of stoichiometric.) This has three benefits. First, air molecules are a good thermodynamic fluid: more of the heat goes into increased pressure than with the complex fuel molecules. Second, work output can be varied without changing the amount of air, reducing the need for throttling. Third, heat loss is reduced. Overall, a 25% efficiency improvement is feasible in urban driving. There is also a disbenefit: reduction of nitrogen oxides in oxygen-rich exhaust is difficult, which calls for inventive solutions.

Modern direct-injection diesels offer even better efficiency. However, with petroleum-based diesel fuels there are many particles in the exhaust. Small particles, perhaps coated with toxic fuel components, can lodge in the lungs, causing major health problems. However, the information is complex, in part because there are several types of particles with different causes. In addition, the largest group of particles is tens of nanometers in size, while the regulations are stated in terms of the mass of particles less than 10 microns in size! Some size distribution studies even suggest that as diesel engines are being designed to meet stricter particle-mass regulations, the number of particles emitted is actually increasing, with adverse impacts on health. Particles may also be an important problem for gasoline engines. Badly needed is fast on-line measurement technology which can count particles in different size ranges.

Much of the efficiency improvement achieved in the last two decades has come indirectly from increasing the engine's specific power (the maximum power per unit of displacement). In the past 20 years the specific power of the average car was increased by 90%! This achievement enabled a 58% engine downsizing and a 26% reduction in 0-to-60 mph acceleration time. Engine downsizing means reduced engine friction and reduced weight. The increase in specific power was enabled in part by adding valves and by more accurate manufacturing. If the improved specific power had all been dedicated to fuel economy, fuel economy would have increased about 15%. The opportunity to continue increasing specific power is excellent.

Substantial progress in transmission efficiency will also occur. Two developments are: 1) adding gears (e.g. 5-speed automatic transmissions) so one drives at a modest engine speed more of the time, and thus with less friction, and 2) continuously variable transmissions (CVTs). Moreover, a CVT can be less lossy than a typical automatic transmission.

Progress may also be made in technologies that reduce vehicle load: reduced air drag, tire rolling resistance, mass and accessory loads. The appearance of streamlining is popular with buyers; and much more can be done. Lower-energy tires continue to be introduced as original equipment to help meet the CAFE standard.

Although typical vehicle masses have been increasing recently, mass presents the best opportunity for load reduction. Lighter materials, especially high-strength steel, plastics and aluminum, are taking increasing shares. The manufacturers are optimistic about the prospects for up to 40% mass reduction. Amory Lovins and collaborators have proposed the "Hypercar" with more-radical mass reduction.

Hybrid propulsion is a design which combines the engine with a supplemental motor and energy storage such as a battery. One promising type of hybrid uses a small petroleum-fueled engine, like a 1.2 liter three-cylinder direct-injection engine. The engine is turned off and the supplemental system drives the vehicle when very little power is needed. They are used together at very high power. The supplemental system is recharged by regenerative braking. A hybrid with an advanced engine and with practical reductions in vehicle load would achieve two to three times greater fuel economy. I hope such a breakthrough will soon be seen in an SUV.

Two interesting technical points: The battery in this hybrid is quite different from that in an all-electric vehicle: high power density rather than high energy density is needed, and that is an easier target for electrochemists. Second, one wants to turn the engine off at low power because the frictional work in a conventional engine at normal engine speed is about 7 kW, while the power loss at low output with a motor-inverter-battery system is about 1 kW.

Given all the technological options for improving fuel economy, the question is whether improvement will occur - in the absence of stronger regulations and/or very large increases in fuel prices.

Small Vehicles
A quite different approach to fuel economy is the small vehicle. A small narrow vehicle could meet several goals: much higher fuel economy, reduced emissions, and less congestion, both on the street and in parking.3 The basic rationale for small cars is that some 87% of trips involve only one or two riders. The disadvantage is the household's need for another vehicle when more people or large loads are involved.

Safety is critical to the future of small and light automobiles. Given a collision with another vehicle, lighter vehicles are of course at a disadvantage in principle; but the danger is sensitive to design, and the correlation of accident severity with vehicle weight is observed to be small in the frontal crash test with today's vehicles. (The danger correlates with the length of the vehicle, however, as one might expect.) Moreover, only about one-fourth of fatal accidents are collisions between two automobiles. It should be required that all vehicles be designed to reduce the danger to other vehicles with which they may collide.4 The safety issues need more research and public discussion. Certainly the notion I have heard expressed, that driving an SUV will make one safer because it is a killer of others, is bad logic as well as bad morals.

Alternative Energy Systems
Alternative automotive fuels - ethanol, methanol, natural gas, electricity (called a fuel here), hydrogen, and others - have enthusiastic, and often single-minded, advocates. There are huge investment implications, often including huge subsidies. One has to be skeptical of many claims.

A few comments: 1) Ethanol is now extracted from corn. This process is subsidized, undertaken as a result of lobbying by agriculture and the agro-industrial giant ADM. Alcohols based on growing plants will be serious contenders when economical technology is developed for making fuel from the woody material rather than from the seeds. 2) Natural gas is of interest. It is currently less expensive than gasoline but is awkward to store on board. It could be important for heavy-duty vehicles. 3) Methanol and dimethyl ether, made from methane, are potential alternatives to diesel fuel. They avoid production of soot, one component of diesel particle emissions, because they have no carbon-carbon bonds.

Proponents of electric vehicles fail to make it clear that with the available batteries one cannot achieve size and performance comparable to those of conventional vehicles. Consider a simple exercise: Gasoline is conventionally used on board with an overall efficiency of about 17%. With excellent design, battery electricity might be used on board with an efficiency perhaps four times greater. This is a rough general result including the benefits of regenerative braking. With today's best batteries, 30 kWh of storage requires a lot of weight, roughly 1/3 ton. Multiply 30 kWh by 4 to get the gasoline-equivalent energy: it's about 3 gallons. When I have three gallons in my car's tank, I'm thinking I need a fill-up. Moreover, recharging this three-gallon battery set is not simple. EVs cannot be like today's vehicles, barring much better batteries than those recently publicized.
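
The gasoline-equivalence exercise can be written out explicitly; the 33.7 kWh of energy per gallon of gasoline is an added assumption, not a figure from the text:

# Gasoline-equivalent of a 30 kWh battery pack, as in the exercise above.
battery_kwh = 30
efficiency_advantage = 4         # battery electricity used on board ~4x as efficiently as gasoline
kwh_per_gallon = 33.7            # assumed energy content of a gallon of gasoline
print(battery_kwh * efficiency_advantage / kwh_per_gallon)   # ~3.6, i.e. "about 3 gallons" of equivalent fuel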

Unfortunately, many of the alternative fuels would not help much with greenhouse gas emissions. For example, electricity is mainly made with coal, and there would be little GHG reduction if EVs had the same size and performance as today's vehicles.

The most interesting alternative is the fuel-cell automobile, fueled with hydrogen. The proton exchange membrane (PEM) cell is now favored for autos. While the efficiency of heat engines is strongly constrained by the second law and friction, physics is more generous to fuel cells. The efficiency of a hydrogen-fueled cell can be as high as 55% at moderate load. This does not include the rest of the propulsion system.

Daimler-Benz, Ford and Toyota are among early PEM developers. Chrysler has announced a fuel cell initiative with the hydrogen to be generated on board from gasoline. The advantage of this approach is, of course, its use of the existing gasoline distribution system. This concept could get fuel-cell vehicles on the road.

Eventually, hydrogen may be produced centrally and stored on board as a gas. At present, hydrogen is produced in large quantities at refineries from natural gas; and it can be produced renewably. With their high fuel economy, fuel cell vehicles using hydrogen stored on board could become practical, although the storage is technologically challenging. This would be a clean vehicle, with low energy use and low greenhouse gas emissions. Fuel-cell autos are an excellent goal for the long run.

Notes

1. J. G. Calvert, J. B. Heywood, R. F. Sawyer and J. H. Seinfeld 1993, "Achieving Acceptable Air Quality: Some Reflections on Controlling Vehicle Emissions", Science, vol. 261, pp 37-45. M. Ross, R. Goodwin, R. Watkins, T. Wenzel & M. Q. Wang, "Real-World Emissions from Conventional Passenger Cars", Journal of the Air & Waste Management Assoc., vol. 48, pp 502-515, June 1998.

2. "Automobiles" refers here to passenger cars and light "trucks" under about 4 tons. The latter, pick- ups, minivans and "sport utility vehicles", are almost all used as passenger cars in the US.

3. Robert Q. Riley, 1994, Alternative Cars in the 21st Century: A New Personal Transportation Paradigm, Society of Automotive Engineers, Warrendale PA.

4. US Congress, Office of Technology Assessment 1995, Advanced Automotive Technology: Visions of a Super-Efficient Family Car, OTA-ETI-638, USGPO, pp 196-202. H. C. Gabler & W. T. Hollowell, "The Aggressivity of Light Trucks and Vans in Traffic Crashes", Society of Automotive Engineers technical paper 980908, 1998.

Marc Ross
Physics Dept., University of Michigan, Ann Arbor MI 48109-1120
phone 734-764-4459

Nuclear Weapons After the Cold War

Paper Given at APS Centennial Meeting, Atlanta, GA, March 24, 1999

W. K. H. Panofsky

The cold war is over, but little has changed with respect to United States nuclear weapons policy. Yet the nature of the threats to United States security from nuclear weapons has shifted dramatically since the end of World War II. Today the likelihood of a deliberate large-scale nuclear attack against the United States is much smaller than the risk of nuclear weapons accident or unauthorized use and the threat from the proliferation of nuclear weapons across the globe.

During the cold war we saw a dramatic nuclear build-up, reaching a rate on the U.S. side of more than 5,000 weapons per year, followed by a build-down of nuclear weapons, which currently proceeds at a rate of around 1,500 per year. Figure 1 shows the pattern, including the nuclear forces of both the United States and the Soviet Union-Russia. The peak of the build-up "enriched" the world with over 60,000 nuclear weapons, an insane figure on its face considering that two nuclear weapons, with explosive power about one-tenth that of the average weapon in current stockpiles, killed a quarter million people in Japan. The build-down as of today has cut the cold war peak by only about one-half.

This pattern is characterized by the fact that during the build-up the United States led the Soviet Union by roughly five years, but that when the United States ceased building, the Soviets did not. Much ink has been spilled explaining the causes of this inexcusable build-up. Two sources stand out. One is that nuclear weapons have become symbols of political power, with their physical reality relegated to oblivion. We as physicists have a major responsibility to maintain public awareness of the awesome reality of nuclear weapons. This task is made even more difficult in that, thanks to the tradition of non-use of nuclear weapons since 1945 and the cessation of atmospheric nuclear tests, only some old fogies have ever seen a nuclear explosion. The second reason for this vast nuclear arsenal has been the extension of the proclaimed utility of nuclear weapons beyond their "core mission" of deterring the use of nuclear weapons by others. This concept of extended deterrence, that is, using nuclear weapons to deter threats posed by non-nuclear (conventional, chemical and biological) weapons, or using the threat of nuclear weapons to protect the interests of other nations, denied the policy makers a meaningful answer to the fateful question "When is enough, enough?"

All this is now behind us -- or is it? Nuclear weapons are still viewed by many as symbols of power. The recent nuclear tests by India and Pakistan were largely motivated by politics, not by a profound and realistic analysis of security needs. The latest full review of United States policy concerning nuclear weapons -- the Nuclear Posture Review (NPR) of 1994 -- retained a great deal of cold war thinking. The magnitude of "required" forces was still determined by a list of thousands of nuclear targets which had to be covered. The policy underlying the NPR was designated as "reduce and hedge," meaning that while the reducing trend illustrated in Figure 1 should be continued, large non-deployed stockpiles should be retained in order to re-equip United States nuclear delivery systems with additional warheads should a more hostile Russia reemerge.

Since the Nuclear Posture Review, there has been only one additional revision of official United States nuclear weapons policy, which occurred last year. The only change was that the United States should no longer be prepared to fight a "protracted" nuclear war but should be able to reply to a large variety of threats with a single response. However, there was still a large target list, and deterrence was still intended to discourage a wide variety of conduct by other countries believed capable of hostile action against the United States, its allies or "interests."

These official policies tended to subordinate the threat posed by increasing nuclear weapons proliferation and the risk of accidental or unauthorized use to a role distinctly secondary to the perceived need for nuclear weapons to counter a large spectrum of conjectured threats. Yet the threat of nuclear proliferation is greater to the United States than to any other nation. The United States, being the world's dominant power both politically and as measured by its conventional prowess, has the most to lose if nuclear weapons proliferate. Nuclear weapons concentrate the destructive energy that can be delivered by any vehicle carrying a weapon of a given size and weight by approximately a factor of one million. Thus nuclear weapons are in many respects the "great equalizer" among nations powerful and less powerful, in the same sense that in the Middle Ages firearms equalized the power of physically weak and physically strong individuals.
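
A rough check of that factor of one million (my own sketch; the one-megaton yield and one-ton warhead mass are illustrative assumptions, not figures from the talk):

# Concentration of destructive energy: nuclear vs. chemical explosive.
TNT_energy_per_kg = 4.184e6    # joules per kilogram of chemical high explosive
megaton_in_joules = 4.184e15   # energy released by one megaton of TNT

warhead_yield_mt = 1.0         # assumed yield, megatons (illustrative)
warhead_mass_kg = 1000.0       # assumed delivered mass, about one ton (illustrative)

nuclear_energy_per_kg = warhead_yield_mt * megaton_in_joules / warhead_mass_kg
print(f"energy per kg, nuclear vs chemical: {nuclear_energy_per_kg / TNT_energy_per_kg:.0e}")
# prints 1e+06, i.e. roughly the factor of one million quoted above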

Potential proliferants can deliver small numbers of nuclear weapons in many ways. Note that the U.S. developed a nuclear projectile system, the Davy Crockett, which could be handled by a single soldier. Thus nuclear weapons could be detonated on ships in harbor, delivered by light aircraft, or smuggled across U.S. boundaries, as well as delivered by ballistic and cruise missiles of a variety of ranges. Meaningful defense against such a spectrum of delivery options is impossible -- note the continuing failure of the "war on drugs" to prevent surreptitious entry. The merits and lack of merit of the huge effort to prevent delivery of nuclear

Threat to Federally Funded Research

Last October, Congress passed an omnibus, 4,000-page appropriations bill funding most federal agencies. At the time, Bob Park (What's New) warned that the "late-night, closed-door deal would provide cover for all sorts of unsavory items". Such a stealth provision was inserted by Sen. Richard Shelby (R-AL), without hearings on its possible consequences. Scientists nationwide are reacting with alarm to this provision, and Rep. George Brown (D-CA) has introduced legislation to repeal it.

The provision calls on the Director of the Office of Management and Budget (OMB) to revise OMB regulations "to require Federal awarding agencies to ensure that all data produced under an award will be made available to the public through the procedures established under the Freedom of Information Act (FOIA)". In principle, this means that anyone could request scientific data from any federally funded researcher, even if the data have yet to be analyzed, peer-reviewed, or published.

Scientists are naturally concerned about the possible premature release and potential misuse of their data, the effect on property rights, the confidentiality of research subjects, and the delays in their work caused by the requirement to comply immediately with such requests.

In a letter to Jacob Lew, the OMB director, Bruce Alberts, President of the National Academy of Sciences, said that "this is an enormous change in federal policy regarding federally funded research. We are convinced that the new legislation will have serious, unintended consequences for the nation's research enterprise... One of the most troublesome aspects of the application of FOIA to federal grantee research data is the possibility that FOIA may not allow a federal research grantee to publish the results of his or her research in scientific journals before the underlying research data must be made available to the public under FOIA... Permitting the researcher who actually collected the data to be the first to analyze and publish conclusions concerning the data is an essential motivational aspect of research. Requiring public release of data prior to publication in scientific journals would seriously short-circuit the scientific research process that has been so effective in the United States. Moreover, it would severely disadvantage federally funded scientists while providing unreasonable advantages to the competitors... Premature release of research data before careful analysis of results, and without the independent scientific peer review that is part of the normal process of publication of scientific research, would also increase the risk of public disclosure of erroneous or misleading conclusions and confuse the public, which would not be in the public interest... The situation is further exacerbated by the fact that the term "data" in the new legislation is not defined... We must not allow the entire federally-funded research establishment in the United States to be seriously burdened by compliance with new bureaucratic requirements that are intended to address a legislative concern that is irrelevant to the vast majority of federally funded research projects."

How did such a measure become law? The Harvard School of Public Health conducted an EPA-funded epidemiological study to see if health effects from fine particulates could be discerned for several cities. Industry groups wanted to gain access to Harvard's raw data in order to do their own analysis. Harvard refused, and the industry groups went to House Republicans. An Alabama congressman introduced an amendment last year to require that data generated by federally-funded research be made available through the FOIA. The defense industry was opposed, and the amendment was defeated. Then, at the end of the last Congress, Senator Shelby, with the approval of the OMB director, inserted the language requiring that data generated by federally funded grants be made available through FOIA. No one other than a few insiders in Congress and OMB knew about the action until the massive omnibus bill was made public and analyzed. Ironically, Harvard did make its data available to an industry group to do an independent reanalysis.

Rep. Brown has introduced H.R. 88 as a bill to repeal the omnibus language. He called it "ironic that a provision described as a sunshine provision needed to be tucked into a 4,000-page bill in the dead of night". Brown and 22 other Representatives, in a letter to Jacob Lew, warned of the possible consequences of the legislation...."One area of concern pertains to research involving human subjects....volunteers currently make agreements with researchers and their institutions to divulge personal medical information on the condition that their information will remain strictly confidential...." Brown said that the provision "makes scientists fair game for lawsuits, threatens academic freedom and is a blatant abuse of the democratic process". Brown hopes scientists will get behind the bill by contacting their representatives and urging them to cosponsor H.R. 88.

The President's Budget Proposals

Last year, the Administration announced its support for doubling spending on civilian research and development within the next twelve years. However, since the Administration has declared the surplus off-limits until an agreement on Social Security is reached, the spending caps put into place two years ago (which cap discretionary spending) remain in force, making such increased support much more difficult. In January, the Administration released its budget proposal for FY2000. Overall, civilian R&D funding is up 3%, and civilian basic research is up 4%. For specific agencies, the results were a mixed bag:

National Science Foundation: NSF Director Rita Colwell said that "we're pleased with the support we've received from the Administration", describing the 5.8% total increase for the NSF. The Research and Related Activities account is slated for a 6.9% increase; Physics is up 3.0%. The "Information Technology for the 21st Century" initiative will be headed by the NSF, with a $146 million request. Other priorities include the South Pole Station Modernization, the Large Hadron Collider, a new "Network for Earthquake Engineering Simulation", a "Biocomplexity in the Environment" initiative and "Educating for the Future".

Department of Energy: Although a significant increase (5.1%) is requested for the DOE's Office of Science, much of that increase is due to various presidential initiatives in Information Technology and Climate Change Technology. Another chunk of the increase is for construction funding for the Spallation Neutron Source. High Energy Physics would receive an increase of just under 1.0%, and Nuclear Physics is up 1.3%. Fusion Energy Sciences is flat, and the Biological and Environmental Research account is down 5.8%.

In his budget briefing, described in FYI #14, Secretary Richardson highlighted construction of the SNS at Oak Ridge and cited a number of scientific user facilities that are just coming into operation: the Fermilab Main Injector, the SLAC B-factory, RHIC at Brookhaven, the Combustion Research Facility at Sandia and the National Spherical Torus Experiment at Princeton. "All of these", he said, are examples of "using science to serve society".

Office of Science Director Martha Krebs characterized the request for her office by saying "the bottom line is a pretty good one". It sustains real growth for DOE science, she said, supports a major role in information technology, and delivers new capability and increased utilization at the scientific user facilities. She described four themes for DOE's science portfolio: Fueling the Future, Protecting our Living Planet, Exploring Energy and Matter, and Extraordinary Tools for Extraordinary Science. The latter includes the SNS, the LHC and Information Technology. Exploring Energy and Matter includes the Sudbury/SNO detector, construction of Fermilab's Neutrino Experiment (NuMI), and a planned antimatter search to fly on the space station. Full support of $70 million was requested for the LHC.

Specific details can be found in FYI at http://www.aip.org/enews/fyi.

The increases in funding for the Fermilab Main Injector and the SLAC B-factory within the High Energy Physics request are partly offset by transferring the AGS to the Nuclear Physics budget. University funding in High Energy Physics is up 3.5%. In Nuclear Physics, Bates at MIT will receive a shut-down budget and will cease operation at the end of FY1999, although there has been serious discussion about trying to keep it open. Construction of the Relativistic Heavy Ion Collider will be completed in this fiscal year; Jefferson Lab will receive a 4% increase. Details can be found in FYI.

NASA: NASA's budget will decrease by 0.6%, making this the sixth consecutive year of decrease. Space science, Earth science and the Space Station all receive increases (the Space Station by 7.7%), while life and microgravity sciences, as well as aerospace technology, decrease. The Relativity/Gravity Probe-B and the TIMED (Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics) missions are scheduled for launch in 2000, as is a servicing mission for the Hubble Space Telescope. Two Explorer missions are also planned for launch next year. Details on the budget can be found in FYI #17 and #18.

Fermilab's Congressman Elected Speaker

Following the surprising withdrawal of Speaker-designate Robert Livingston, Republicans elected Denny Hastert, a seven-term Congressman from Illinois' 14th district, to the House Speakership. Hastert's district includes Fermilab, and he has been a strong advocate of both Fermilab and Argonne National Laboratory.

Hastert is viewed as a solid, pragmatic Congressman who believes in conservative principles but is willing to seek compromise. His style is considered similar to that of former Republican minority leader Robert Michel. He is also regarded as a supporter of science and technology: he opposed cuts in the NSF and NASA appropriations in 1995, pushed to fully fund the NSF during the government shutdowns, and has endorsed funding for energy efficiency programs.

Hastert taught high school history and government for 16 years, and as a member of the Illinois General Assembly he supported creation of the Illinois Math and Science Academy. More recently, he has opposed national education testing and supports giving education dollars to the states in the form of block grants.

With Republicans holding only a six-seat majority in the House, his compromising style may prove necessary. In his acceptance speech, he said, "To my Democratic colleagues, I will say that I will meet you halfway, and maybe more so on occasion".

The Joseph A. Burton Forum Award, The Nicholson Medal and The Szilard Award

At the APS Centennial meeting, the Joseph A. Burton Forum Award will be presented to Freeman J. Dyson, of the Institute for Advanced Study. The citation reads, "For his thoughtful, elegant and widely published writings regarding the impact of diverse science and technology developments on critical societal issues and on fundamental questions for humankind". The Nicholson Medal will be given to Vitaly Ginzburg, of the Russian Academy of Sciences, for "courageously supporting democratic reforms in the former Soviet Union, and for leading the Soviet scientific community in humane directions". The Leo Szilard Lectureship Award will go to John Alexander Simpson "for his leading role in educating scientists, members of Congress and the public on the importance of civilian control of nuclear policy and his critical efforts in the planning and execution of the International Geophysical Year, which established in 1957 a successful model for today's global-scale scientific endeavors". The recipients will speak at the Forum Awards session at the Centennial Meeting. Future issues of this newsletter will give more details on the contributions of these three Award recipients.

Henry Barschall's Survey Upheld

Over a decade ago, Henry Barschall, a long-time friend of the Forum, conducted a survey comparing the cost-effectiveness of physics journals, based on the cost to libraries per word relative to the citation rate. Gordon and Breach's journals came out at the bottom, while APS journals were at the top. Gordon and Breach, claiming that the survey constituted "unfair competition", sued Barschall, the APS and the AIP in several countries, including the United States. Five years ago, a federal judge ruled that the survey constituted protected free speech and found in favor of Barschall. Last month (January), the U.S. Court of Appeals upheld this decision. Henry Barschall passed away two years ago.

Dual-Career-Couple Survey

A little over a year ago, Laurie McNeil and Marc Sher began a Web-based survey on dual-science-career couples. Since the density of jobs in physics is so low, it is very difficult for both members of a couple to find positions in the same geographical area. Since 68% of all married female physicists are married to scientists (compared with 17% of married male physicists), this problem has a disproportionate effect on women, and many believe it is the single greatest obstacle to significantly increasing the percentage of women in physics. Although the survey was long and comprehensive, the response was spectacular, with over 630 responses received (close to 40% of the total target audience). A detailed report on the survey has been written, and is available through the dual-career-couple Website that they have established at http://www.aps.org/units/fps/dualcareer.html. The report discusses the details of the survey and demographic responses, has a long section noting what institutions are doing to make the problem worse (which includes many very disturbing quotes from the survey respondents), and then has an even longer section discussing various solutions and responses to the problem. These include split/shared positions, spousal hiring programs, alternative academic positions, alternative non-academic positions, commuting suggestions and legal responses. A brief summary appears in the February issue of APS News, and a longer summary has been submitted to Physics Today. The full report is available at the above Website.

Change in Missile Defense Policy

On January 20th, Secretary of Defense William Cohen announced a significant change in the Administration policy on national missile defense (NMD).

In the past, the Administration has trodden cautiously, supporting continued research on NMD but deferring any decision on deployment. There has been concern that going ahead with deployment would require abrogation of the ABM treaty, an action which could spell the end of the strategic arms reduction talks (START). The Russian Duma has been considering the START II treaty (which has passed the U.S. Senate); should it be ratified, the U.S. and Russian governments would immediately begin START III talks, with the hope of reducing the number of strategic weapons on each side to 1000-1500 (compared with tens of thousands at the height of the Cold War). The Russian government has consistently claimed that unilateral abrogation of the ABM treaty would doom START II ratification and/or START III negotiations. Recently, however, angered by the bombing of Iraq, the Duma has put START II ratification on hold.

In his announcement, Secretary Cohen outlined four elements of the new policy. First, "we are budgeting funds that would be necessary to pay for an NMD deployment", with the expectation that a decision on deployment would be made in June 2000. Second, "we are affirming that there is a threat, and the threat is growing, and that we expect it will soon pose a danger not only to our troops overseas but also to Americans here at home". This affirmation is based on the recent firing of the Taepo-Dong II missile by North Korea, which could, with a reduced payload, reach the Western Aleutian Islands, and on recent advances by Iran.

The third element is that "deployment might require modifications to the ABM treaty, and we will seek to amend the treaty if necessary". Cohen also noted that the U.S. has the right to withdraw from the treaty with six months' notice. This is the biggest change from current policy. Finally, the fourth element is that "we will phase key decisions to occur after critical integrated flight tests... we are projecting a deployment date of 2005".

Russia was predictably upset by this change in policy, and reiterated that any violation of the ABM treaty would result in the end of the START process.

Not to be outdone, Sen. Thad Cochran (R-MS) introduced S.257, "The National Missile Defense Act of 1999": "It is the policy of the United States to deploy as soon as is technologically possible an effective National Missile Defense system capable of defending the territory of the United States against a limited ballistic missile attack (whether accidental, unauthorized or deliberate)." Secretary Cohen said that this was the Administration's policy as well. However, in a letter to Sen. John Warner (R-VA), the National Security Advisor, Sandy Berger, objected to the Act for focusing solely on a determination that the system is "technologically possible". The cost of the system, the extent of the threat, and the impact on START II and START III must also be considered, he said.

Of course, all of this assumes that the system will work---most early tests of theatre defense systems have failed miserably. Furthermore, countermeasures could evade any currently planned defense system. In Bob Park's words, "It's not enough to kill a strapped-down chicken; any nation that can launch an ICBM can equip it with simple countermeasures".

There has been discussion of an amendment to the Act which would define "technologically possible" as being effective against missiles equipped with simple countermeasures. In the House of Representatives, a similar Act is being proposed; however, it leaves out the phrase "as soon as is technologically possible". President Clinton says that he will veto the Senate bill. START II ratification remains on hold.

Wanted - A New Articles Editor

Physics and Society is now soliciting applications and nominations for the position of articles editor. The articles editor plays a crucial part in determining the content and quality of P&S by evaluating and editing articles that have been submitted, as well as in inviting potential authors to contribute.

If you wish to apply, please send a letter outlining your interest, TOGETHER WITH A RESUME, to the Editor, Al Saperstein (addresses in the "boiler plate" section, top of page 2 of each issue of Physics and Society), for consideration by the P&S staff and the Editorial Board. We also welcome nominations of potential candidates.

We report with regret that the previous articles editor, Eliza Estafani, was forced to resign due to the pressure of other professional demands on her time. The remaining editorial staff of P&S thank her for her valuable contributions during the past two years and wish her continued success in her work.

Beyond Growth: The Economics of Sustainable Development

By Herman E. Daly, Beacon Press, Boston, 1996, ISBN 0-8070-4709-0 (paper)

Many of us believe that human impacts on the environment have brought us into a new era. Herman Daly, professor at the University of Maryland's School of Public Affairs and former member of the Environment Department of the World Bank, puts his finger on the economics of that change. We have moved, he argues, from a natural environment that was empty and thus effectively limitless to one that is filled up.

Every economist needs to read this book. It could be aptly subtitled "Economics As Though Earth and Natural Law Mattered." In the tradition of E. F. Schumacher's Small Is Beautiful, Amory Lovins's Soft Energy Paths, and Nicholas Georgescu-Roegen's The Entropy Law and the Economic Process, it steps outside the conventional economist's world of infinite environmental sources and sinks, into the real, finite world. This book will certainly interest environmentalists. Because it takes natural law into account, it should also interest scientists. My guess, however, is that most economists will not read it, because it begins from assumptions that fall outside their customary world view, a world view that Daly shows to be dangerously outdated but that is impervious to change because it is seldom explicitly acknowledged.

My own liberal-arts physics course includes environmental topics, and touches on their economic feedbacks. Although these feedbacks are increasingly obvious, business students inform me that their business courses have little to say about such matters. Judging from Daly's book, this situation is the rule. Today's academic economists operate in a sort of vacuum, an empty world bereft of natural and environmental limitations.

Daly's unconventional economics begins from assumptions that are by now commonplace among environmentalists and scientists. The conventional macroeconomic vision treats the economy as an isolated loop of industry, consumers, goods, services, and capital. Daly places this loop within the natural world, with its finite and degradable throughputs of energy and matter. He connects these fundamentals with the two great thermodynamic laws, for instance in a figure (page 29) showing Georgescu-Roegen's "entropy hourglass", in which low-entropy solar energy flows through Earth at a constant (hence limited) rate, while limited terrestrial resources of matter and energy flow from low to high entropy.

Starting from an open system changes everything. The closed-loop analysis might have made sense in a relatively empty world. But today, when 33-50% of the land has been transformed by human action, when atmospheric CO2 has increased by 30%, when more than half of all accessible surface fresh water is put to human use, when 25% of the bird species have been driven to extinction and two-thirds of marine fisheries are fully exploited or depleted, it is folly to assume that the environment does not enter into the economic equation (see Jane Lubchenco, "Entering the Century of the Environment," Science 23 January 1998, pp. 491-497). In fact we live in a full world in which exponential expectations must be replaced by steady-state sustainability. Unfortunately, according to Daly, conventional economists have yet to figure this out.

To convey a sense of Daly's ideas I will quote, without comment, several of his observations.

"Sustainable growth [is] a clear oxymoron to those who see the economy as a subsystem [of the natural world]. At some point quantitative growth must give way to qualitative development as the path of progress. I believe we are at that point today." (p. 7)

"There is thus an important asymmetry between our two sources of low entropy. The solar source is stock-abundant, but flow-limited. The terrestrial source is stock-limited, but flow-abundant (temporarily). Peasant societies lived off the abundant solar flow; industrial societies have come to depend on enormous supplements from the limited terrestrial stocks. Reversing this dependence will be an enormous evolutionary shift." (30)

"Our national accounts are designed in such a way that they cannot reflect the costs of growth, except by perversely counting the resulting defensive expenditures as further growth. ...Unsustainable consumption is treated no differently from sustainable yield production (true income) in GNP. ...To design national policies to maximize GNP is...practically equivalent to maximizing depletion and pollution." (40-42)

"The tradeable pollution permits scheme...is a beautiful example of the independence and proper relationship among allocation, distribution, and scale. . .This scheme limits the total scale of pollution, need not give away anything but can sell the rights for public revenue, yet allows reallocation among individuals in the interest of efficiency" (52-53).

"As our exactions from and insertions back into the ecosystem increase in scale, the qualitative change induced in the ecosystem must also increase, for two reasons: ...the first law of thermodynamics [and] the second law of thermodynamics." (58)

"While all countries must worry about both population and per capita resource consumption, it is evident that the South needs to focus more on population, and the North more on per capita resource consumption. ...Without for a minute minimizing the necessity of population control, it is nevertheless incumbent on the North to get serious about consumption control." (61)

"Kenneth Boulding got it right fifty years ago [when he stated that] 'the objective of economic policy should not be to maximize consumption or production, but rather to minimize it, i.e. to enable us to maintain the our capital stock'." (p. 68)

"In sum, we found that empirical evidence that GNP growth has increased economic welfare in the U.S. since about 1970 is nonexistent." (97)

"These considerations also suggest a concept of 'overdevelopment' as correlative to 'underdevelopment': an overdeveloped country might be defined as one whose level of per capita resource consumption is such that if generalized to all countries could not be sustained indefinitely." (106)

"Population control is the sine qua non of sustainable development... Birth control [in less developed countries] is already practiced by the upper and urban classes. The relatively high rate of reproduction of the lower class insures an 'unlimited' supply of labor at low wages which promotes inequality in the distribution of income. ...A low birth rate tends to equalize the distribution of per capita income, in two ways: it reduces the number of heads among which a wage must be shared in the short run, and it permits the wage to rise by moving away from an unlimited supply of labor in the long run." (125)

"None of this is meant to imply that carrying capacity is only relevant to developing countries. If the U.S. had worried about carrying capacity, it would not have become so dangerously dependent on depleting petroleum reserves belonging to other nations. If the U.S. cannot even pass a reasonable gasoline tax to discipline unsustainable consumption, is it realistic to expect Paraguay and Ecuador to control population?" (p. 128)

"We therefore need a compensatory tariff to correct for differences in internalization of external costs among the nations. This is derided as 'protectionism' by free traders. But ...the compensatory tariff ...protects an efficient national policy of cost internalization against standards-lowering competition from countries that, for whatever reason, do not count all environmental and social costs." (147)

"Globalism does not serve world community--it is just individualism writ large. We can either leave transnational capital free of community constraint, or create an international government capable of controlling it, or renationalize capital and put it back under control of the national community. I favor the last alternative. I know it is hard to imagine right now, but so are the others. It may be easier to imagine after an international market crash." (148)

"A maximum income [is] part of the institutional basis appropriate to a steady-state economy. The notion of a maximum income ...seems to strike people as mean, petty, and invidious. I believe this is because growth in total wealth is assumed to be unlimited. ...Inequality is increasing in the U.S. and will become an issue again in spite of political efforts to deny its importance. ...At some point distribution must become an issue. ...In 1960 after-tax average pay for chief executives was about twelve times that of the average factory worker. ...In 1995 it is well over one hundred. ...No one is arguing for an invidious, forced equality. ...But bonds of community break at or before a factor of one hundred. Class warfare is already beginning." (202-203)

"To capture the cluster of values expressed by 'sustainability-sufficiency-equity-efficiency' in one sentance, I suggest the following: We should strive for sufficient per capita wealth--efficiently maintained and allocated, and equitably distributed--for the maximum number of people that can be sustained over time under these conditions." (220)

Art Hobson
University of Arkansas at Fayetteville

The Truth of Science

by Roger G. Newton. Harvard University Press, Cambridge, MA, 1997, ISBN 0-674-91092-3

When David Mermin panned The Golem in the pages of Physics Today (March and April 1996), he served public notice that physics had discovered social constructivism and found it annoying. Physicists might grant that natural science, being pursued by interacting humans, provides scope for sociological study. But to suggest that scientific results are nothing but a social construct, constrained little or not at all by the real world, is flagrant nonsense.

Roger G. Newton's irritation prompted him to write The Truth of Science to describe the intellectual structure of physical science and the understanding of reality that modern physics engenders. The book is "not intended as a polemic," but the offending sociologists are said to be "robbing rational thought of all intellectual and cognitive value, leaving its expression a hollow rhetorical shell" (pp. 3-4). Their views are called "malignant," their statements "disingenuous," "cynical," and "arrogant," and they themselves are found guilty of "anti-intellectualism" and "hubris." These are intellectual fighting words. Unfortunately, the author's arguments are inadequate to back them up.

For example, Newton thinks he has caught David Bloor in a contradiction, between Bloor's "principle of symmetry" (that both true and false beliefs are to receive causal sociological explanation) and his statement that "when we talk of truth...we mean that some belief...portrays how things stand in the world" (p. 34). Newton takes "how things stand in the world" to amount to "an external criterion of truth" that must either conflict with the sociological explanation or render it superfluous, but there simply is no need to interpret Bloor's statement that way. It seems more likely that Bloor is just affirming the common sense of what people mean when they speak of truth.

An analogy offered by Alan Chalmers, adopted by Newton against Bloor, is similarly misplaced. The idea is that when science comes up with correct results, no more external explanation is required than when a soccer player kicks the ball into the opposing goal--"he just followed the rules" (p. 34). For this analogy to hold, there would have to be a set of hard and fast rules by which science is done, just as there is a set of rules for playing soccer. No one has successfully articulated such a set of rules. It would be the holy grail for philosophers of science, who by now seem to have given up the quest.

Let sociologists investigate the social processes by which consensus is achieved in science. Let them, if they choose, seek to view those processes from the vantage point of the participants, refraining from judgment in the light of subsequently settled scientific knowledge. Where they make specific false or exaggerated claims to the effect that scientific results are just a social construct, these may best be debunked on a case-by-case basis, as Newton himself does with regard to assertions by Paul Forman and Lewis Feuer that "acausal" quantum concepts were an outgrowth of the political milieu of Weimar Germany. By appealing to the historical record, he effectively deflates their thesis (pp. 27-28).

Newton devotes only one chapter to attacking the social constructivists, but they might quote from his book for their own purposes. For example, on pages 108-9 we learn that there are fashions and fads in science, and on page 169 we are told that "Richard Feynman went to extremes in replacing all remnants of wave fields...by the quantum motion of particles". "Feynman diagrams", the author complains, "have become firmly entrenched in the language and imagination of physics." Entrenched!

What is the truth of science? Newton identifies coherence as the key criterion establishing the truth of scientific knowledge, and approvingly quotes John Ziman: "Scientific knowledge eventually becomes a web or network of laws, models, theoretical principles, formulae, hypotheses, interpretations, etc., which are so closely woven together that the whole assembly is much stronger than any single element" (p. 209). Where this edifice, as Newton calls it, makes direct contact with our experience, it works, in that it enables us to build powerful technology, accurately predict the outcome of experiments, and reliably assert that certain things will not happen. Scientists rightly dismiss parapsychology, astrology, and witchcraft because they are incoherent with the established edifice.

The problem with all this is that we cannot enforce a criterion of coherence against outsiders while giving the members of our club a free pass--not without inviting the social constructivists to our table. If scientific knowledge is tightly interconnected, then the entire edifice stands or falls together. There can be no "revolutionary escape clause" (p. 106) permitting serious consideration of, say, the Bohr atom, which clearly violated the laws of physics when introduced. When Newton says that "even though parts of the edifice may be found to be rotten, the coherence of the body of scientific truths accounts for its stability" (p. 209), he is denying the effectiveness of coherence as a criterion. The "edifice" is the coherent body of scientific truth. How can part of it be "rotten"? If something rotten can be part of it, why not ESP?

The preface to this book assures us that "no specific knowledge of physics is assumed," but my impression is that any reader lacking a substantial background in physics will come away from this book more intimidated than enlightened. Being informed on page 215 that "the validity of scientific statements...rests on evidence that could, in principle, be checked by anyone with the needed fundamental knowledge and apparatus," our reader will understand that "anyone" is not likely ever to include him or her. "The truth of science" is to be judged by a tiny group of initiates. The role of everyone else is to appreciate from afar and send money.

Allan Walstad
Physics Department
University of Pittsburgh at Johnstown
Johnstown, PA 15904

Science in Culture

Special Issue of Daedalus, Winter 1998, $7.95

This 40th-anniversary issue of Daedalus, the journal of the American Academy of Arts and Sciences, is in honor of Gerald Holton, Mallinckrodt Professor of Physics and History of Science at Harvard, who launched the journal as a quarterly. The issue's ten articles were compiled from presentations at a conference honoring Holton. Editor Stephen Graubard writes that this issue "acknowledges that both the humanities and the social sciences are as relevant to [the cultural development of science] as are ... the physical and the biological sciences."

Holton's article "Einstein and the Cultural Roots of Modern Science" examines Einstein's creativity from the perspective of Holton's belief "that the full understanding of any particular scientific advance requires attention to both content and context." Holton writes that "Einstein's assertion of obstinate non-conformity enabled him to clear the ground ruthlessly of obstacles impeding his great scientific advances." Holton tracks Einstein's development from a rebellious youth that had a "selective reverence for tradition....The essential point for [Einstein] was freedom, the 'free play' of the individual imagination, within the empirical boundaries the world has set for us." Having access to many of Einstein's papers and correspondence, Holton was "impressed... by his courage to place his confidence, often against all available evidence, in a few fundamental guiding ideas or presuppositions." Holton attributes Einstein's breadth partly to his wide range of readings within an education that was much broader than a narrow scientific development. Holton concludes that Einstein had a "life-long struggle for simplicity and against ordinary convention."

The breadth of this collection is revealed in "Misconceptions: Female Imaginations and Male Fantasies in Parental Imprinting" by Wendy Doniger, Distinguished Service Professor of the History of Religions at the University of Chicago, and Gregory Spinner at Tulane University. The article discusses the belief in different cultures over many centuries that "a woman who imagines or sees someone other than her sexual partner at the moment of conception may imprint that image upon her child ... thus predetermining its appearance, aspects of its character, or both." The authors write that "this essay will consider a number of stories about the workings of maternal imagination, impression, or imprinting, terms that are often conflated." They discuss the Hebrew Bible, Aristotle, Heliodorus, Jerome and other Greek and Latin sources, Maimonides, various Midrashim, Goethe, Hoffmann, and Voltaire. They note "Christian theories grow out of the father's fear that his child may not be his child, or, rather, that he can never be sure that his child is his child ... unless, of course, he trusts his wife. Resemblance was the straw that men grasped in the storm of their sexual paranoia." The authors include a discussion of ancient India. They conclude that "what seems most astonishing in all of this is the extent to which the seemingly most plausible explanation for the birth of a child who does not resemble his father--namely, the fact that some other man fathered him--is rejected by most of our sources (with the notable exception of the Jewish sources) in favor of fantasies of a most extravagant nature."

In "The Americanization of Unity," Harvard's Peter Galison treats the unity of science, based on the work of Philipp Frank. He discusses a variety of attempts to develop a unified approach based upon "a unification through localized sets of common concepts."

"Fear and Loathing of the Imagination of Science," by Lorraine Daston, Director of the Max Planck Institute for the History of Science, is aimed at exploring "how and why large portions of the educated public--and many working scientists--came to think [that good science does not require imagination], systematically opposing imagination to science." Immanuel Kant believed that originality was necessary for genius and therefore denied Newton the title of genius because in Newton's work there were not the type of originalities to be found, for example, in Homer. Datson notes that, in the 19th century, "at the crossroads of the choice between objective and subjective modes stood the imagination" and that the French psychologist, Theodore Ribot "could not free himself from a certain suspicion that imagination was linked to scientific error: the 'false sciences' of astrology, alchemy, and magic." In this 19th century view, "pure facts, severed from theory and sheltered from the imagination, were the last, best hope for permanence in scientific achievement."

Edward O. Wilson, Research Professor in Entomology at Harvard, writes on "Consilience Among the Great Branches of Learning." Contrasting the Western development of science to that of Chinese science, he notes that the Chinese made brilliant advances "but they never acquired the habit of reductive analysis in search of general laws. ...Western scientists also succeeded because they believed that the abstract laws of the various disciplines in some manner interlock. A useful term to capture this idea is consilience." Wilson writes that using consilience enables one to see modern science in a clearer perspective. He illustrates this by the discipline of biology, discussing evolutionary space-time, ecological space-time, organismic space-time, cellular space-time, and biochemical space-time. He writes that "two superordinate ideas unite and drive the biological sciences at each of these space-time segments. The first is that all living phenomena are ultimately obedient to the laws of physics and chemistry; the second is that all biological phenomena are products of evolution, and principally evolution by natural selection." He argues that social sciences and humanities will change as the brain sciences and evolutionary biology will "serve as bridges between the great branches of learning. ...The boundary between [C. P. Snow's] two cultures is instead a vast, unexplored terrain of phenomena awaiting entry from both sides. The terrain is the interaction between genetic evolution and cultural evolution. ...The relation between biological evolution and cultural evolution is, in my opinion, both the central problem of the social sciences and humanities and one of the great remaining problems of the natural sciences." He believes "biology is the logical foundational discipline of the social sciences," an insight that may be a novel idea to many social scientists.

In "Physics and History," Nobel laureate Steven Weinberg considers "the uses that history has for physics, and the dangers both pose to each other." He believes that "one of the best uses of the history of physics is to help us teach physics to nonphysicists. Physicists get tremendous pleasure out of being able to calculate all sorts of things. Nonphysicists, for some reason, do not appear to experience a comparable thrill in considering such matters. This is sad but true. History offers a way around this pedagogical problem: Everyone loves a story." However, the danger that scientific knowledge presents to history is "a tendency to imagine that discoveries are made according to our present understandings." This leads us to "lose appreciation for the difficulties, for the intellectual challenges, that [past scientists] faced." While agreeing that "a scientific theory is in a sense a social consensus, it is unlike any other sort of consensus in that it is culture-free and permanent. This is just what many sociologists of science deny." He argues that the laws of nature are also "culture-free and...permanent," and that "as the number of women and Asians in physics has increased, the nature of our understanding of physics has not changed." He concludes that "physical theories are like fixed points, toward which we are attracted. Starting points may be culturally determined, paths may be affected by personal philosophies, but the fixed point is there nonetheless."

Harvard's Dudley Herschbach and Bretislav Friedrich contributed "Space Quantization: Otto Stern's Lucky Star," describing the Stern-Gerlach experiment and its historical development. The experiment showed the reality of space quantization and "provided compelling evidence that a new mechanics was required to describe the atomic world." This well-written article brings the reader back to those early days when quantum mechanics was debated and developed.

E.H. Gombrich, former historian at the University of London, contributed "Eastern Inventions and Western Response." He describes "the Western response to the technical inventions that had reached Europe from the East. ...In the venerable civilizations of the East, custom was king and tradition the guiding principle. The spirit of questioning, the systematic rejection of authority, was the one invention the East may have failed to develop. It originated in ancient Greece."

Harvard's James S. Ackerman contributed "Leonardo da Vinci: Art in Science." Reviewing many of da Vinci's writings, he focuses upon the artist's "gift and patience for intensive observation," which served as the foundation for both his scientific investigations and his artistry. Ackerman looks in particular at da Vinci's descriptions of the work of the human heart.

The final short essay "Educational Dilemmas for Americans" by Patricia Albjerg Graham, historian of education at Harvard, asks "[why] does education present itself as such a persistent dilemma in the United States?" Her conclusion is that Americans change their mind about what tasks schools should accomplish, are ambivalent about what the adolescent experience should be, and are not really sure how important school is in a child's education. She notes the tremendous disruption that was caused when teachers who had spent years teaching in one manner suddenly had to adjust to the demands of the special education movement and to a philosophy whose purpose was "to prevent dropouts, not to create learners." Noting that there is a growing tendency for students to work during the school year, she writes "calculus is more valuable in terms of discipline than a perfect attendance record at McDonald's." She concludes that "schools are more important for the children of the poor than they are for the children of the affluent" and that "it is thus an extraordinary tragedy that the worst schools--whether in terms of faculty and administrative skills or per-pupil expenditures--serve the children who most need excellent schools, the children of the poor, while the best ones serve the children who have the most educational alternatives, the children of well-educated and prosperous."

This edition of Daedalus is remarkable in the breadth of the articles presented. Each brings an insight into an aspect of how culture and science intertwine.

John F. Ahearne

Possibly Vast Greenhouse Gas Sponge Ignites Controversy

by Jocelyn Kaiser, Science, 1998, vol. 282, pp. 386 - 387.

Science staff writer Jocelyn Kaiser first noticed a problem of inconsistency in certain papers [1,2] published in the October 16 issue of Science. The Earth's carbon cycle is not known well enough to account for a missing one or two petagrams (10^15 grams) of carbon per year. Both of the studies reported huge new carbon sinks. Such sinks would bear on possible "credits" for industrialized carbon output (human activity puts some 7.1 petagrams of carbon into the atmosphere yearly) under the recent Kyoto agreement on worldwide greenhouse gas production. Not only does each study report a huge local sink (40% of the missing carbon in South America [1]; 90% in North America [2]), but the two do not agree closely with one another. The disagreement might easily be attributed to differences in methodology and the primitive state of the art in carbon budgeting.

Perhaps unfortunately, the membership of each reporting team [1,2] had considerable representation from the national labs in the region reported as a sink. Adding this to the recent commitment at Kyoto, one has a difficult time imagining how these analyses might have been done completely free of national interest. The issue is credibility, not exaggeration or dishonesty. We must keep in mind that, if scientific work is to be used for treaty enforcement, the people required to limit carbon emissions can be expected to adhere to the treaty only if they know the evidence is certain and the enforcement is fair. Kaiser points out several studies, done in different ways, which contradict the results of [1,2]. In particular, the sink rate per unit area in North America has been estimated by a study in press at only about one-third of the rate reported here. Of the numerous possible ways to determine a carbon sink rate over continents, [1] used on-site measurement of tree-trunk thicknesses, and [2] used quasi-stable computer simulation based on seawater and atmospheric CO2.

One feels obliged to point out that, should Kyoto be given enforcement teeth, and should "credits" for local sinks be allowed to permit excess local industrial greenhouse gas production, on-site measurement or computer inference from distantly related data is not going to be credible in any kind of world enforcement system. One would suggest either that "credits" be abandoned as too prone to bias, or that an OBJECTIVE system of greenhouse gas assessment be set up, agreeable to all signers. Because worldwide averages over large areas would seem to be the proper context for monitoring greenhouse gas balance over months or years, we would suggest the use of published satellite analyses to document every enforcement action. This means specification, if not development, of satellite measurement protocols and standards before any more action is taken. In the future, under Kyoto, advisory or appellate actions might be based on local interests, but the system never will work if individual nations are permitted court-like introduction of their own evidence to argue for special treatment.
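
To put the claimed sinks in context, here is a back-of-the-envelope sketch (mine, not Kaiser's) using only the rounded figures quoted above:

# The two claimed sinks compared with the "missing" carbon in the global budget.
emissions = 7.1                  # Pg C per year put into the atmosphere by human activity
missing_estimates = (1.0, 2.0)   # Pg C per year unaccounted for in the budget

frac_south_america = 0.40        # share of the missing carbon claimed in [1]
frac_north_america = 0.90        # share of the missing carbon claimed in [2]

for missing in missing_estimates:
    sa = frac_south_america * missing
    na = frac_north_america * missing
    print(f"missing sink of {missing:.0f} Pg C/yr: "
          f"South America ~{sa:.1f}, North America ~{na:.1f} Pg C/yr "
          f"(together {100*(sa + na)/emissions:.0f}% of annual emissions)")

# Taken at face value the two papers together claim 40% + 90% = 130% of the
# missing carbon, which is one way to see that they cannot both be right.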

[1] Oliver L. Phillips et al., "Changes in the Carbon Balance of Tropical Forests: Evidence from Long-Term Plots", Science, 1998, vol. 282, pp. 439-442.

[2] S. Fan et al., "A Large Terrestrial Carbon Sink in North America Implied by Atmospheric and Oceanic Carbon Dioxide Data and Models", Science, 1998, vol. 282, pp. 442-446.

John Michael Williams

Killing Detente: The Right Attacks the CIA

Anne Cahn, Penn State Press, 1998, 232 pages, $35, $18 (paper).

Science aims to link cause and effect for natural phenomena. Linking cause and effect for historical events is often more difficult, since history cannot be rerun with varied parameters. Despite the difficulty, it is worthwhile to review the causes behind the magnitude of the U.S. nuclear buildup. Two critical questions should guide this analysis: How much of the $5.8 trillion (1996 dollars) that the U.S. spent to build 70,000 nuclear warheads, deployed on 75,000 missiles and 8,600 bombers, was too much? And was the effectiveness of the Soviet military exaggerated with false predictions? These two books, Killing Detente and Atomic Audit, go a long way toward quantifying the costs, and explaining the large size, of the buildup.

Atomic Audit, edited by Stephen Schwartz, thoroughly compiles the U.S. costs--$5.8 trillion between 1940 and 1996--for its atomic arsenal, from the fuel cycle to the weapons to the delivery systems to decommissioning. As a companion book, Killing Detente supplies some of the causes that prompted this level of spending. Anne Cahn describes the history behind the 1970s Team B estimates of the Soviet nuclear triad. These estimates provided some of the rationale for the $5.8 trillion in spending, and for the demise of U.S.-Soviet detente. At the time of the 1993 Senate hearings1 on the Evaluation of the U.S. Strategic Nuclear Triad, former Secretary of Defense Caspar Weinberger (1981-87) said, "Yes, we used a worst-case analysis. You should always use a worst-case analysis in this business. You can't afford to be wrong. In the end, we won the Cold War, and if we won by too much, if it was overkill, so be it." Make no mistake about it: it was better to have too much nuclear hardware and avoid Armageddon than the converse. However, since there is no limit to spending under this argument, and since more nuclear hardware can be destabilizing, it is useful to separate truth from fantasy if we are to learn from the past.

Atomic Audit provides an excellent, comprehensive description of each of the nuclear-capable systems: 14 types of deployed heavy bombers, 47 deployed and 25 canceled missiles, and 91 types of nuclear warheads. It describes and quantifies the technical and economic facts for the Manhattan Project ($25 billion), command and control, SIOP targets, defensive systems, dismantlement, nuclear waste, plutonium disposition, production and testing victims, secrecy, congressional oversight, and much more. It describes programs that were developed but never fully deployed, such as the nuclear-powered aircraft ($7 billion), the 1970s Safeguard ABM system ($25 B), SDI ($40 B), nuclear-propelled missiles ($3 B), Project Orion (a nuclear-explosion-propelled rocket to the stars, $50 million), and Plowshare (peaceful nuclear explosions, $0.7 B). Certainly the SIOP strategic target list, which peaked at 12,000 targets in 1990, dictated the size of the U.S. triad. The Brookings group concludes the obvious: (1) Nuclear weapons were much more expensive than the projected 3% of all military spending. The $5.8 T total nuclear spending was 31% of all military spending ($18.7 T) and 44% of all non-nuclear military spending ($13.2 T). (2) Congress, the Defense Science Board, and most presidents made errors of omission and commission. (3) The SIOP target list of 10,000 targets was more than excessive, increasing the size of U.S. nuclear forces.
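
As a quick arithmetic check on the percentages quoted above (using only the totals stated in this review; rounding accounts for small discrepancies among the subtotals):

% Nuclear spending as a fraction of the totals given above:
\[
  \frac{\$5.8\ \mathrm{T}}{\$18.7\ \mathrm{T}} \approx 0.31
  \qquad\text{and}\qquad
  \frac{\$5.8\ \mathrm{T}}{\$13.2\ \mathrm{T}} \approx 0.44 ,
\]
% consistent with the 31% and 44% figures cited.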

In Killing Detente, Anne Cahn gives us the detailed background of the 1970s "Team B" estimates. At the end of President Ford's term, the future members of the Committee on the Present Danger lobbied hard to create a process which would, in essence, write National Intelligence Estimates (NIEs) from outside the government. This happened at a time when the CIA was weakened by attacks from both the left and the right. The left was angry about CIA covert action in Vietnam, such as Project Phoenix, which was responsible for the deaths of 20,000 civilians. The right was angry about projections of an unwinnable war in Vietnam, projections that "no amount of bombing would deter North Vietnam from its objective." Because of the very close primary elections in 1976, Ford stopped using the word "detente" and allowed a team of outsiders to create a rival NIE. CIA Director George Bush signed off on the birth of Team B with "Let her fly, OK, GB."

Cahn describes how Professor Richard Pipes of the Harvard History Department became the leader of Team B, an unbalanced panel which greatly feared the Soviets. In only three months, Team B developed three NIE alternatives, on Soviet strategic objectives (a catch-all that covered every possibility), ICBM accuracy, and low-altitude air defense. These Team B reports laid the foundation for the nuclear buildup of the 1980s. When Pipes testified before a House committee, he stated that if "it [the intelligence community's NIEs] were presented in a seminar paper to me at Harvard, I would have rejected or failed [it]." Now that 20 years have passed since the Team B trilogy was written, it seems only fair to judge Professor Pipes' work alongside the NIEs. As one who consulted many NIEs over a decade at the Senate Foreign Relations Committee, the State Department, and ACDA, I have benchmarks by which to grade Pipes' work, and I give him an "F" for failure. So that you can judge for yourself, I have included parts of Team B's declassified (formerly Top Secret) reports at the end of this review, with my comments in brackets.

In conclusion, the Soviet Union caused honest fear in the U.S., but we could have used better scholarship than that offered by Team B. Perhaps we have learned this lesson. In 1995, Congress was concerned that the NIE on the emerging ballistic missile threat from smaller countries had understated the threat. In contrast to Team B, the congressionally mandated commission that examined that NIE, the Rumsfeld Panel, was well balanced, and it stayed clear of the extremists who could have captured the process. The Panel analyzed the cases that were possible, but did not specify which ones were likely.

I strongly recommend both books as excellent studies on the Cold War nuclear arms race. Hopefully, we can learn from the errors of our predecessors.

Statements by Team B:

On Soviet Low Altitude Air Defense: "It specifically does not address Soviet capability against the B-1, cruise missiles or advanced penetration aids." [This forced a comparison between future Soviet air defenses and 1976 U.S. airplanes without cruise missiles and without B-1 and B-2 bombers.]

"Put more starkly, it is not inconsistent with current evidence that the Soviets believe they have and may actually possess the inherent ability to prevent most, if not all, penetrating U.S. bombers (of the kind presently in the force, in raid sizes of a few hundred) from reaching targets the Soviets value." [It is hard to believe "most if not all."]

"In future years, high-energy laser weapons may play a role in the air defense of the Soviet Union.... Accordingly, they could possibly begin deploying ground-based laser antiaircraft weapons in the 1985-1990 time period, if they so desire." [The U.S. continues to fund laser anti-aircraft weapons, but without being able to successfully deploy it.]

On Soviet ICBM Accuracy: "Considering the magnitude of this effort and the fact that much of the Western research in this area is available to them, we find it hard to believe... that Soviet G&G errors will be significantly greater than those of the United States. For this reason, we will assume these errors to be equal to those of the Minuteman III in 1975, 70 m downrange and 35 m crossrange." [It is the strong consensus view that Soviet accuracy has always been poorer than U.S. accuracy, by more than a factor of two. Since a factor of 2 in accuracy corresponds to a factor of 8 in yield, this is a very large effect. To say that the Soviet missiles will have an accuracy of 70 m downrange and 35 m crossrange is far beyond the pale.]
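
The factor of 8 follows from the standard hard-target lethality ("counter-military potential") scaling; this derivation is added here only as an illustration and is not part of the Team B text:

% Lethality scales as yield to the 2/3 power over the square of the CEP:
\[
  K \;\propto\; \frac{Y^{2/3}}{\mathrm{CEP}^{2}} .
\]
% Doubling the CEP cuts K by a factor of 4; restoring K requires
% (Y')^{2/3} = 4\,Y^{2/3}, i.e. Y' = 4^{3/2}\,Y = 8\,Y.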

On Soviet Strategic Objectives: "After some apparent division of opinion intermittently in the 1960's, the Soviet leadership seems to have concluded that nuclear war could be fought and won." [Since our SLBMs have always been invulnerable and since some bombers and ICBMs would survive, it is not logical to think that a Soviet leader could believe the Soviet Union could actually destroy all U.S. nuclear forces and prevent the U.S. from destroying Russian cities. Since the Cuban missile crisis, the Soviet leadership has indicated no desire to risk a nuclear confrontation.]

"We have good evidence that it [the Backfire bomber] will be produced in substantial numbers, with perhaps 500 aircraft off the line by early 1984. [The Soviets had 235 Backfire bombers in 1984, which need considerable in-air refueling.]

"Given this extensive commitment of resources and the incomplete appreciation in the U.S. of the full implications of many of the [ASW] technologies involved, the absence of a deployed system by this time is difficult to understand. The implication could be that the Soviets have, in fact, deployed some operational non-acoustic systems and will deploy more in the next few years." [The logic is that if powerful Soviet ASW has not been observed, it will be there in the next few years.]

"... we cannot with any assurance whatever forecast the probability or extent of success of Soviet ASW efforts. However, we are certain that these probabilities are not zero, as the current NIE implies." [The strong consensus view is that at-sea U.S. SLBMs were never threatened by Soviet ASW, and that hypothetical new ASW technologies have all failed." (See reference 1, GAO report on the triad.)]

"... it is clear that the Soviets have mounted ABM efforts in both areas of a magnitude that is difficult to overestimate." [New Soviet ABM systems did not make much technical progress, nor were they deployed. Thus, "difficult to overestimate" is fear mongering.]

"While it may be possible (though often erroneously, in our view) to disparage the effectiveness of each component of Strategic Defense taken separately, the combined and cumulative efforts may posses considerable strategic significance." [Twenty years later SDI is not capable of destroying ICBMs.]

Reference: [1] Evaluation of the U.S. Strategic Nuclear Triad, Senate Hearing 103-457, June 10, 1993.

David Hafemeister
Physics Department
California Polytechnic State University
San Luis Obispo, CA 93407