Archived Newsletters

Global Warming and Gasoline Tax, continuation

May I reply to Philip Ryan's attack (April 1993) on my letter (January 1993)? He describes my letter as "criticism of higher gas taxes," but my main intent was to encourage quantitative discussions of this (see also John McGervey's letter, October 1992).

Dr. Ryan begins by misspelling my name, and this lack of attention to detail appears elsewhere in his screed. His suggestion that Americans should pay a higher gas tax because other countries do is fit for the Washington Post but not for a scientific newsletter. He goes on to say that the gas tax "should pay for much more than road construction -- pollution and road accidents, to name just a few." As economist Thomas Sowell points out wryly, liberals expect such propositions to be accepted without proof or debate. Ryan's example of road accidents is a bad one. Through their auto insurance, drivers already pay the real costs of these, and in addition they have to support a welfare program for well-heeled lawyers and predatory "victims" taking advantage of our absurd tort system. Ryan might answer that at least the gas tax should pay for police investigations of road accidents. Well, okay, Dr. Ryan, but how many cents of gas tax would that justify? Even this might be a bad precedent. Should Koreans be taxed to pay for investigations of hate crimes against their grocery stores? Maybe it's better if such costs are paid by society at large, or by the wrongdoers where feasible.

As for making drivers pay for pollution: Well and good, but once again, how many cents would that justify? And can Ryan assure us that a gas tax to abate auto pollution wouldn't be raided to abate other pollution? Why should drivers pay for that? I don't know whether our present gas tax is too high or too low. But I do know that in Maryland it's being raided to support at least two mass-transit systems (Washington suburbs and Baltimore). Why should poor drivers subsidize rich subway riders? That makes no sense to me.

Ryan says a higher price for gas "is the only real way to spur conservation and -- new energy systems." This tired argument is not regarded highly by free-market economists. If we were urged to tax Catholics in order to build more Protestant churches, to provide them alternative religion sources, we would ridicule that idea. If that's the only way to build churches, we would say, maybe we have enough of them already.

Finally, Ryan says a gas tax would "reduce national debt." Drivers may not feel that they alone should pay the national debt. I hope that readers of this exchange between me and Drs. Ryan and McGervey will infer three points: (1) there is no revealed truth about these matters; (2) a quantitative, rather than a hortatory and moralistic, approach may be useful when we are persuading people to pay their fair share; (3) objections by the "common people" to a high gas tax may not be entirely stupid.

James E. Felten
8569 Greenbelt Road, #204
Greenbelt, Maryland 20770

Quantum Theory and Relevant Education

I was interested in your comment "Quantum Theory and Relevant Education" (April 1993). I particularly noticed the sentence, "The universe is quite non-Newtonian, but few students coming out of two semesters of introductory physics would suspect any such thing." There is a reason for that, I believe. I won't argue your statement that the universe is non-Newtonian. But the universe that the students are familiar with when they start studying physics is quite Newtonian. Thus the students' whole array of reflexes, prejudices, and even instincts is Newtonian-based. But they are not really aware of this. Moreover, they don't have the background, or even the vocabulary, to be made aware of it and to begin to grasp the limitations that it implies. And they won't get even "a rough idea of how the universe actually works" until they have acquired some of that background and vocabulary.

I'm not expressing myself too well, and I don't mean to claim that our introductory courses are ideal. I do maintain, though, that we need to start with our students where they are, which is in a milieu that for practical purposes--their purposes--is Newtonian. We then need to enable them to understand how that part of the universe works so that they are in a position to know what we are talking about when we tell them about the non-Newtonian part. Unfortunately, that takes a good deal of time. I'll agree that we can probably get along without the Bohr atom. Even torque and geometric optics may be expendable as subjects. But how do you deal with spinning electrons if you don't have some understanding of angular motion? How do you relate magnetism to atomic structure if you don't know about electric circuits?

I agree that we should start with a goal and decide what we need to teach to reach it. But I am afraid that we need to teach a lot more than you seem to think we do.

George L. Trigg
Technical Editor
275 Beaver Dam Road
Brookhaven, New York 11719

Symposium On Physicists in Environmental Affairs

Physics and Society presents here articles based on the four talks given at an invited session, "Physicists in Environmental Affairs," sponsored by the Forum on Physics and Society at the March 1993 APS meeting in Seattle.

Technology for Containment of Underground Wastes: Containment Now

J.G. Dash

Effluents from underground storage tanks and waste dumps of toxic and hazardous materials are entering or threatening aquifers that supply drinking water in many regions of the nation. Under current policy, cleanup cost is estimated at $10^11 to $10^12 over several decades (1), although no existing technology provides complete remediation within this cost estimate (2). Instead of proceeding with costly yet inadequate cleanup, we should try to safeguard water supplies immediately by waste containment, while reconsidering current policy and buying time for more R&D on improved methods of remediation.

Among several available containment methods, cryogenic barriers offer advantages of effectiveness and economy (3). Barriers of frozen ground can provide a complete enclosure of buried wastes, in virtually all soils and site geometries. The enclosure is effected by freezing pipes inserted around and below the site, so as to form a boat-shaped rib cage (Fig. 1). The pipe insertion and refrigeration use off-the-shelf technology in common use on many engineering projects to prevent seepage and stabilize wet soils during construction of dams, tunnels, mines, and foundations (4). Installation and maintenance costs are competitive with other containment methods; more importantly, ground-freezing can provide a complete hermetic enclosure, preventing downward as well as lateral migration of water-borne hazardous, radioactive, and mixed wastes. Thick barriers provide great thermal inertia, so that frozen ground 20 m thick in typical soils can sustain a hiatus of 2 years without refrigeration before breaching. Molecular diffusion through such barriers is estimated to be below detectability for 10^4 years. A cryogenic enclosure can be maintained as long as needed; whenever remediation is completed, the barrier can be completely removed by shutting off the refrigeration and withdrawing the pipes.

Figure 1. Schematic cross section of a cryogenic barrier enclosing underground storage tanks (5). The refrigeration piping consists of two arrays of pipes, spaced 4 to 8 feet apart. The pipes are concentric tubes of heavy-wall mild steel such as is used for oil well casing. Refrigeration is by circulating chilled fluid, supplied through the inner tube and returned through the annulus. The fluid may be one of several (e.g. brine, aqueous ammonia, or propylene glycol) commonly employed in large freezing plants. The pipes can be inserted by angle drilling, pile driving, or vibrating. With this design, a 20 m thick barrier with a core temperature of -30°C can be formed 6 to 12+ months after refrigeration begins. Monitoring pipes and instruments are not shown.
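The 2-year endurance figure can be checked with a back-of-envelope melt estimate. The sketch below is ours, not the author's; the soil properties and temperature difference are assumed typical values rather than site data.

```python
# Rough plausibility check (not from the article) of the claim that a
# 20 m frozen barrier can ride out a ~2-year refrigeration outage.
# All material properties are assumed "typical soil" values.

k        = 2.0     # W/(m K), soil thermal conductivity (assumed)
dT       = 40.0    # K, ambient ground (~+10 C) minus core (-30 C) (assumed)
L        = 10.0    # m, distance over which that gradient acts (assumed)
porosity = 0.3     # ice-filled pore fraction of the frozen soil (assumed)
rho_ice  = 917.0   # kg/m^3, density of ice
L_fus    = 334e3   # J/kg, latent heat of fusion of water

q = k * dT / L                        # heat flux into the barrier, ~8 W/m^2
E_melt = porosity * rho_ice * L_fus   # J needed to thaw 1 m^3 of frozen soil

thaw_per_year = q / E_melt * 3.156e7  # metres of barrier thawed per year
print(f"heat flux ~ {q:.0f} W/m^2, thaw rate ~ {thaw_per_year:.1f} m/yr")
# ~2.7 m/yr: even thawing from both faces removes only a few metres of a
# 20 m barrier in two years, consistent with the stated thermal inertia.
```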

Patented designs for cryogenic barriers around various DOE sites have been prepared by an engineering firm (5). The cost of a containment system for a large underground waste tank, such as the single-shell tanks at Hanford, is estimated to be less than $5 x 10^6. The refrigeration of such a barrier can be maintained at an annual power cost under $10^4. On this basis, containment of all 177 underground storage tanks at Hanford could be accomplished for less than $10^9, with an annual power cost below $2 x 10^6. Plans are under way for tests at a number of DOE sites.
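The site-wide figures follow from simple per-tank scaling, as this quick check (our arithmetic, not the firm's estimate) shows:

```python
# Consistency check of the Hanford figures quoted above, assuming the
# site-wide totals are simple per-tank scalings (the text does not say
# whether barriers might instead enclose groups of tanks).

n_tanks        = 177    # single-shell tanks at Hanford
cost_per_tank  = 5e6    # $, upper estimate per containment system
power_per_tank = 1e4    # $/yr, upper estimate of refrigeration power cost

print(f"capital: ${n_tanks * cost_per_tank:.2e}")      # 8.85e+08 < $10^9
print(f"power:   ${n_tanks * power_per_tank:.2e}/yr")  # 1.77e+06 < $2x10^6
```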

  1. M. Russell, E.W. Colglazier, and M.R. English, Hazardous Waste Remediation: The Task Ahead (Waste Management Research and Education Institute, University of Tennessee, 1991). Also see Colglazier's article, this issue.
  2. Basic Research for Environmental Restoration (DOE Office of Energy Research, 1989).
  3. J.G. Dash, Waste Management 11, 183 (1991).
  4. See, e.g., H. Wind, Eng. Geol. 13, 417 (1979); M.B. Jones, Tunnels and Tunneling 14, 31 (1982).
  5. RKK, Ltd., Arlington, Washington, estimate 1993.

The author is at the Department of Physics, University of Washington, Seattle, Washington 98195, and RKK, Ltd., Arlington, Washington.

Research for Environmental Management

The environmental management problems facing the country and world are tremendous in scope and complexity. Government and industrial practices of the past century have created a legacy of contaminated soils and groundwater that must be restored and protected to ensure ecological and human well-being for future decades and centuries.

Two US government departments, Energy and Defense, have extensive environmental problems resulting from weapons production activities during a 50-year period starting in the 1940s. Unique materials were used in these activities, and as a result extremely complex waste byproducts were produced. Unfortunately, the approaches used to dispose of or store these wastes did not consider environmental consequences. Long-term environmental restoration and waste-management programs are now being implemented to deal with these wastes and with the contaminated environments and facilities resulting from these weapons production activities. Our challenge is to restore the environment where it has been contaminated, deal with the complex stored waste in an environmentally acceptable way, and develop environmentally benign manufacturing techniques, processes, and products.

As we set out to tackle these problems, we encounter impeding limitations. We must overcome several knowledge gaps: We don't clearly understand the risk that contaminants pose, so regulatory standards that specify the degree of clean-up may be overly conservative; our understanding of natural systems and their ability to accommodate pollutants for long durations is limited; we don't have the science or tools needed to clearly define the extent of our problems; and we don't have the technologies needed to restore the environment, deal with huge waste inventories, and develop an environmentally benign industrial infrastructure. These uncertainties plus an arbitrary and changing regulatory environment have cost large sums of money with little to show for it.

In addition to knowledge gaps, we run into financial limitations that impact cleanup activities. The projected cost of national environmental restoration far exceeds the nation's financial resources and the public's willingness to spend on cleanup.

To be successful, we need to establish environmental restoration programs that are soundly grounded in science, technology, and policy. The science and technology base resulting from a rational approach will allow us to apply available financial resources to environmental problems that have been determined to be significant on the basis of true risk. The fundamental knowledge resulting from this approach will also be applicable in solving other national problems, including health and economic competitiveness.

There are four areas where environmental R&D investments could lead to tremendous returns: human and ecological health effects; soils and groundwater, especially in situ analysis, remediation, and monitoring; waste processing technology and waste forms for permanent storage; and characterization and analysis technology.

Health Effects
The effects of man-made toxins on human health have been the driving force behind environmental standards, but environmental "health" has recently assumed a similar importance. From a scientific perspective, human health and environmental health are similar problems. Our understanding of the effect of chemical and radioactive materials on health is based largely on epidemiological or animal studies. The setting of standards from animal studies involves extrapolations from high dosages over short times to environmentally-relevant low-level chronic exposures. These extrapolations assume a linear dose response, resulting from a genotoxic effect of the toxin, and a single-step process to induce cancer, or another effect, in a cell. Current standards are not generally based on a molecular-level analysis of disease in humans or the environment.

Several factors could change this approach, leading to more realistic standards. We now know that the path to disease usually involves multiple steps, leading to a nonlinear relationship between dose and health effects. Many toxins now assumed to be genotoxic in fact induce disease in other ways and exhibit distinct thresholds for health effects. At low concentrations, living systems have a variety of defense and repair mechanisms that are overwhelmed at the levels encountered in laboratory studies. An individual's or environment's susceptibility to disease from toxins is determined by its genetic makeup, which can vary significantly among individuals of any species. To properly understand the environmental problem, we must understand the mechanistic effect that chemicals have on all types of living systems, the mechanisms that can defend living systems, and the genetic pathways these defense systems take. One part of the problem is that some man-made materials (e.g. plutonium and some halogenated hydrocarbons) have not been accommodated by nature, so no defense mechanisms have evolved for them as they have for many naturally-produced toxins.
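The contrast between the two pictures can be made concrete with a toy model. The functional forms and parameters below are illustrative assumptions only, not the models actually used in standard-setting:

```python
# Toy illustration of the contrast drawn above: a linear no-threshold
# extrapolation versus a multi-step response in which low doses are
# absorbed by repair mechanisms. All forms and numbers are assumptions.

import math

def linear_no_threshold(dose, slope=1.0):
    """Risk proportional to dose, as in the classical extrapolation."""
    return slope * dose

def multistep_with_repair(dose, threshold=0.5, slope=1.0, steepness=8.0):
    """Negligible risk until repair capacity (threshold) is overwhelmed."""
    return slope * dose / (1.0 + math.exp(-steepness * (dose - threshold)))

for d in (0.05, 0.2, 1.0, 5.0):   # arbitrary dose units
    print(f"dose {d:4}: LNT {linear_no_threshold(d):6.3f}  "
          f"threshold model {multistep_with_repair(d):6.3f}")
# At high doses the two models agree; at low, environmentally relevant
# doses the threshold model predicts far smaller risk -- the basis of the
# argument that current standards may be overly conservative.
```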

Solving the environmental problem is only a small part of the much larger problem of explaining how living systems survive among the many other toxic materials in the environment. It is widely accepted that inherited or induced faults in the genetically determined defensive makeup of individuals lead to increased susceptibility to toxin-caused disease. The development of new tests involving "biomarkers" will allow the extent of accumulated dose and damage from toxins to be determined before overt health effects are observed, and will indicate whether the proper genetic defense and repair mechanisms are in place. This approach will allow intervention before overt disease is encountered. Thus, we will not only learn what materials we need to remove from the environment, but also how to protect individuals having genetic susceptibilities to disease. Adopting this approach will help us move our entire health strategy from treating to preventing disease.

Soils and Groundwater
The behavior of complex waste streams, including their transport, transformation, and entry into the ecosystem, is only beginning to be understood. Contaminants are subjected to physical, chemical, and biological processes in a complex milieu of minerals, organic materials, biota, and liquids. The huge volumes of contaminated soils and groundwater lead us to try to develop in situ methods for characterizing, remediating, monitoring, and controlling contamination. In most cases, this problem is far more complex than any industrial chemical process; the number of variables is far greater, and our ability to monitor and control the process is limited. Further, the rates of reaction and movement are generally slow so that long times may pass before the efficacy or wisdom of an intervention is understood, and the consequences over decades or centuries cannot be predicted.

Early development of in situ technologies must be based on models, since trial and error could be disastrous. To develop realistic models, a broader base of fundamental understanding of subsurface chemistry and biology is needed. The ability to model these processes from the molecular level through the field scale is essential, and the problem of scaling through that range is formidable. The end product must be a new set of models and intervention protocols that can be successfully steered through the government permitting process and then widely applied.

A problem related to health effects at the molecular level is bioremediation, a promising technique for cleaning up soil and groundwater. Bioremediation would use natural or genetically-altered microbes or plants to generate enzymes that chemically render compounds harmless or that sequester elements such as heavy metals or radionuclides. The molecular basis for bioremediation technology is usually the same as that for human health defense or environmental survival, and it could also form the basis of new generations of bioprocesses for industrial applications. Thus, progress in this critical area could yield significant long-term societal benefits.

Recent discoveries of microbes at extreme subsurface depths offer hope that natural or minimally-modified species can deal with contaminants in deep aquifers or soils. In addition, microbes have been found to exist in extreme environments: thermophiles at temperatures above 100°C, halophiles at greater than 4N saline solution, acidophiles functioning at pH levels as low as 2.5, and radiophiles that survive exposure to a megarad of radiation. These exotic species may offer clues to creating microbes that are tolerant of a variety of environmental or industrial conditions and are useful in performing many chemical and physical tasks. Use of microbes or plants may be the only technically or financially feasible solution to cleaning up the extensive current soil and groundwater contamination. Possibly as important, the science and technology base for wide-scale bioremediation would be a cornerstone for the biotechnology industry.

Managing Wastes
The most difficult and potentially most expensive waste accumulations are the large volumes of complex radioactive and chemical or "mixed" wastes stored around the country. This problem has many facets, including characterization of complex nonhomogeneous mixtures; interim stabilization for safety reasons; retrieval; treatment, including incorporation in some inert sequestering waste form; and "permanent" storage of any nonreducible residue. The cost and time needed to complete this waste disposal will be driven by the tremendous volumes of waste and the problem of establishing a national repository strategy. While the actual volume of radioactive elements is quite small, its incorporation in large volumes of mixed chemical components requires that the entire volume be treated and disposed of as mixed waste.

Clearly, the strategy for reducing the cost of this effort must be based on a reduction in the volume of waste to be treated. This approach will require new separations technologies, and new processing technologies to permanently dispose of, rather than store, organic and inorganic chemical constituents. Separations technologies will be based on materials designed to selectively remove specific radionuclides and heavy metals from complex mixtures. The crown ethers, pillared clays, zeolites, membranes, and other physical and chemical separations concepts will be considered in this effort. Once the critical radionuclides, such as technetium, cesium, plutonium, and strontium, have been separated, a variety of processing techniques can be used to deal with the remaining chemical wastes, with only a small volume of residue requiring permanent storage. The materials needed for these processes will be difficult to develop because of the extreme environment in which they must function, but an important benefit of this effort will be the development of new concepts and technologies for future efficient and waste-free industrial processes.

Characterization, Analysis, Monitoring, and Control
In dealing with either environmental or waste problems, several technologies must be developed. To achieve broad understanding of these problems, powerful analytical techniques and methods must be developed and used. In general these problems require capabilities that are either at or beyond the current state-of-the-art. Once basic understanding is in hand, analytical tools to quantify particular species and markers need to be applied to characterize individual cases. When attempting remediation or control using advanced processing tools, real-time analysis is required. Finally, there must be long-term monitoring of the end-products. Throughout all these activities, methods must be developed to monitor worker and public exposure to biological, chemical and physical threats. The long-term solution to problems of analysis, control, monitoring, and human exposure lies in development of new microsensors. The advent of microengineering and the ability to manipulate surfaces, materials, and biological systems at the molecular level makes possible a wide range of microsensor concepts. These tools are also needed in efforts to develop advanced approaches to health care and industrial processing.

Summary
The environmental restoration and waste management problems facing the nation are complex and enduring. Investment in a new generation of science and technology will not only make the effort technically, socially, and financially tractable, but will enable us to develop a wide range of new concepts and technologies to deal with other problems, such as national competitiveness, and thus improve the general quality of life in the United States.

The author is Senior Director of Science and Technology at the Pacific Northwest Laboratory, Richland, Washington 99352

Panel on Public Affairs Workshop on Electricity from Renewable Sources

David Bodansky

Problems of fossil fuel resource limitations and pollutants, including CO2, continue to motivate the exploration of alternatives, in particular nuclear and renewable energy sources. A major use of these would be for electricity generation.

Electricity production and use

There has been rapid growth in the use of electricity in the US throughout the past century. During the second half of that period, from 1950 through 1992, electricity sales increased almost tenfold, from 33 to 315 GWyr, and the fraction of primary commercial energy consumption devoted to electricity generation rose from 14% to 36% (1). (For units and abbreviations used here, see (2).) In more recent years, since the beginning of the first perceived energy crisis in 1973, electricity growth has slowed, but it has nonetheless outstripped growth in total energy consumption and in population (Table 1).

Table 1. Comparison of increases in US electricity use, 1973-1992, with changes in related parameters.

Parameter                                         Increase (%)
Energy input for electricity generation (quad)    49
Electricity sales                                 61
Population                                        20
Gross domestic product (constant $)               51
Total primary energy consumption (quad)           11
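A short sketch (our arithmetic; the conversion factors are those given in reference (2) below) reproduces the growth rate implied by the figures in the text:

```python
# Quick check of the growth comparison above: electricity sales grew from
# 33 to 315 GWyr over 1950-1992. The unit conversion uses the factors
# quoted in reference (2); everything else is simple arithmetic.

sales_1950, sales_1992 = 33.0, 315.0     # GWyr
years = 1992 - 1950

cagr = (sales_1992 / sales_1950) ** (1.0 / years) - 1.0
print(f"average growth 1950-92: {100 * cagr:.1f}%/yr")   # ~5.5%/yr

# Unit check: 1 quad = 1e15 BTU at 10335 BTU/kWh, 1 GWyr = 8.76e9 kWh
gwyr_per_quad = 1e15 / 10335 / 8.76e9
print(f"1 quad ~ {gwyr_per_quad:.1f} GWyr")              # ~11 GWyr
```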

Fossil fuels and nuclear power account for most electricity generation. In 1990, the last year for which comprehensive data are readily available, renewable sources collectively provided only 12% of generation (Table 2). Most of this was from hydroelectric, with smaller amounts from biomass and geothermal, and near-negligible amounts from wind, solar thermal, and photovoltaics. Except for hydroelectric, most renewable generation is not by utilities but by so-called non-utility generators (NUG). There was little change in total generation between 1990 and 1992, with nuclear generation increasing 7%, hydroelectric dropping 14%, and fossil fuels (collectively) almost unchanged.

Table 2. US electricity generation by source for 1990: net generation (GWyr), fraction from non-utility generators (NUG), and share of total generation.

Source           Net generation (GWyr)   Percent from NUG   Percent of total
Coal             181                     2                  53
Natural Gas      41                      26                 12
Petroleum        14                      4                  4
Nuclear          66                      0                  19
Hydroelectric    33                      2                  9.5
Biomass          4.5                     95                 1.3
Geothermal       1.7                     43                 0.5
Wind             0.24                    100                0.07
Solar Thermal    0.07                    100                0.02
Other            2.6                     100                0.75
Total            344                     7                  100

In recent years, there have been projections of a much larger future renewable contribution, including greatly increased electricity generation from photovoltaic, wind, and solar thermal power. For example, a 1990 study spearheaded by the Solar Energy Research Institute (since redesignated the National Renewable Energy Laboratory) had scenarios that projected future electricity generation from renewable sources (excluding hydroelectric power) in 2030 extending up to a primary energy input of 33 quads, compared to under 1 quad today (3). Other projections vary widely, some considerably less optimistic (4). A great deal depends upon future costs. A target for competitiveness with fossil fuel or nuclear generation is often taken to be about 4 or 5 cents/kWh, although the actual level will depend in part upon future fossil fuel prices and on taxes or emission standards established for fossil fuels.

Motivation for Workshop
In view of the importance of the issues, it is desirable to subject such projections to careful analysis. The projections for nuclear power, made when enthusiasm was high and skepticism relatively muted, can provide a cautionary note as an example of the poor track record of past projections. Thus, a 1972 Atomic Energy Commission report foresaw nuclear growth from about 0.3 quad (of primary energy) in 1970 to over 21 quads in 1990 and to 48 quads in 2000, while actual US nuclear energy consumption in 1990 was only 6 quads and is very unlikely to be significantly higher in 2000.

The APS Panel on Public Affairs (POPA) has a substantial history of studies of energy issues, including a 1979 study on photovoltaic energy conversion (5). POPA concluded that it would be useful to have an independent, analytic study of electricity generation from renewable sources, although in view of the nature of some of the issues it was not clear that the APS was the proper group to undertake such a study. To assess the appropriateness of an APS study in this area, POPA organized a workshop, held in November 1992 in Washington, DC (6). Talks and discussion at the Workshop focused primarily on solar thermal, photovoltaic, wind, and biomass electricity sources (7).

Renewable Sources for Electricity Generation
In solar thermal electricity generation, sunlight is concentrated by large factors and used to heat a fluid to drive a steam or gas turbine. Three geometric configurations are being actively explored: (a) parabolic trough arrays; (b) central receivers (power towers); and (c) parabolic dishes.

In parabolic trough systems, the heated fluid passes through tubes lying on the focal line of long troughs.  The troughs track the sun with single-axis rotation.  By 1991, nine units with a total capacity of 354 MWe had been installed in California, but the company installing them declared bankruptcy in 1991 due to a lapse in tax advantages and low natural gas prices.  The units continue to provide electricity to the California grid, despite the cessation of new construction.  They represent the only solar thermal technology that provides commercial power.

In central receiver systems, a large number of individual heliostats, with two-axis tracking, reflect sunlight upon an elevated receiver.  A 10 MWe pilot plant facility, Solar One, operated in California from 1982 to 1987.  It is scheduled to be replaced by a more advanced unit, Solar Two, at the same site.  Pending experience with the latter, construction may be undertaken in the late 1990s on a 100 MWe commercial unit. Technical advances being explored for Solar Two include:  stretched membrane reflectors in which two membranes are used, with their curvature determined by the pressure in the gap between them; molten nitrate salt for heat reception, transmission and storage; and air, with suitable particle loading, for heat reception and transmission (with bricks for heat storage).  Higher efficiencies are anticipated for an air + gas turbine system than for a molten salt + steam turbine system.

Parabolic dish units can have individual engines at the focus of each dish, or linked receivers driving a common larger engine.  Typical units are in the 25 kWe range, and are more suited to remote locations than as suppliers to the electricity grid.  Of the three technologies, it is expected that the central receiver will prove to be the most economical, eventually providing electricity at 5 to 7 cents/kWh.

The use of solar photovoltaic power is at present limited by high costs to specialized applications, such as for portable consumer devices, satellites, and remote locations such as lighthouses.  World sales of photovoltaic modules in 1992 amounted to only 60 MW of capacity, divided roughly equally between suppliers in the US, Japan, and Europe.  Crystalline silicon flat plate units remain the dominant technology, but many alternatives are being explored in the search for low cost, high efficiency, and durability.  These include other crystalline materials, thin films, multiple-junction units, and Fresnel lens concentrator units.

Since 1980, the estimated cost of electricity from photovoltaic sources has dropped from about 90 cents/kWh to about 25 to 30 cents/kWh.  If this rate of price reduction continues for another 20 or so years, photovoltaic power would become economically competitive.  Learning curve methodology suggests that such a price decrease may occur, in conjunction with a large increase in the volume of photovoltaic sales (8).
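The learning-curve arithmetic can be sketched as follows; the constant exponential rate of decline is our simplifying assumption (the methodology of reference (8) is more elaborate):

```python
# Rough extrapolation (our arithmetic, not the workshop's) of the
# photovoltaic cost trend quoted above: ~90 cents/kWh in 1980 falling to
# ~27 cents/kWh by ~1992, assuming a constant exponential rate of decline.

import math

c0, c1 = 90.0, 27.0        # cents/kWh: 1980, and ~1992 (midpoint of 25-30)
years = 12
rate = math.log(c1 / c0) / years          # ~ -10%/yr

for t in (10, 20, 30):                    # years beyond ~1992
    print(f"+{t} yr: {c1 * math.exp(rate * t):5.1f} cents/kWh")
# After ~20 more years the trend reaches ~3-4 cents/kWh, below the 4-5
# cents/kWh competitiveness target -- illustrating why continued decline
# at the historical rate would make PV competitive, as the text suggests.
```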

In principle, wind resources are sufficient to supply a large fraction of the total US electricity demand (9).  Aside from cost, possible constraints on the extent to which wind power will contribute arise from the uneven national distribution of wind resources (greatest in the upper midwest), the intermittent nature of the wind, and the possible environmental impacts of large wind energy farms.  At present, there is appreciable use of wind in the US only in California, where it provides about 1% of the electricity.  World generation by wind was under 0.4 GWyr in 1990, of which about 75% was in California.  However, there is increasing interest in other countries, and at present most of the new wind turbines being installed in the US are from Japan or Denmark.

The present cost of electricity from wind is 7-10 cents/kWh. It is projected to drop to 5 cents/kWh by 1995 and to 4 cents/kWh by the year 2000. Together with a 1.5 cents/kWh incentive incorporated in the 1992 Energy Policy Act, this would make wind power economically competitive with fossil fuels or nuclear power.

Wind turbines installed in the early 1980s were mostly under 100 kWe in capacity, but new US units are now several hundred kWe and still larger units are being investigated in Europe. There have been significant improvements in recent years in wind turbine materials and blade configurations. This may permit an increase in unit size, while preserving long term durability under rapidly varying stresses. Although land requirements are high, of the order of 600 km^2 per gigawatt of average output, most of the land can also be used for grazing or other agricultural applications.

There has been increasing recent interest in biomass for electricity generation. Biomass now provides about 4% of total US energy consumption and, in 1990, a little over 1% of electricity generation, mostly produced for internal use of companies in the wood product industries. Any large expansion in the use of biomass for electricity generation is expected to rely on new plantations of dedicated energy crops. It is estimated that 140,000 to 800,000 km^2 of land is available in the US for such plantations.

Harvested crops can produce steam directly or can be converted into liquid or gaseous fuels for use with steam turbines, gas turbines, or, eventually, fuel cells.  Gas turbines offer high efficiency, especially in conjunction with combined cycle operation or co-generation.  They can be started relatively quickly, making them complementary to intermittent sources.  Under favorable assumptions for crop growth rates and turbine efficiencies, about 2,000 km^2 of land would be needed per gigawatt of average output.
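For a sense of scale, a hypothetical bounding calculation (ours, not the workshop's) compares these land requirements against total US electricity use:

```python
# Comparison, using the per-gigawatt figures quoted above, of the land
# that wind and biomass would need to supply all US electricity. This is
# a bounding illustration, not a proposal from the workshop.

km2_per_GW_wind    = 600.0    # km^2 per GW of average output (from text)
km2_per_GW_biomass = 2000.0   # km^2 per GW of average output (from text)

us_demand = 315.0             # GWyr/yr, i.e. ~315 GW average (1992 sales)

for name, footprint in [("wind", km2_per_GW_wind),
                        ("biomass", km2_per_GW_biomass)]:
    print(f"{name:8s}: ~{footprint * us_demand:,.0f} km^2 "
          "to supply all US electricity")
# Wind: ~190,000 km^2 (much of it dual-use); biomass: ~630,000 km^2,
# approaching the upper end of the 140,000-800,000 km^2 estimated to be
# available for energy plantations.
```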

The intermittent nature of wind, solar thermal, and photovoltaic sources can create problems if these represent a large fraction of the electricity supply.  This is not yet the case in California, where intermittent sources now account for 1 to 2% of the electricity supply.  The problems can be ameliorated by complementary renewable sources, such as hydroelectric power and biomass, or through explicit storage.  At present, only pumped hydroelectric systems provide large storage capacities.  Other storage possibilities include compressed air, thermal storage, batteries, flywheels, and superconducting magnets.  Of course, if electricity is used more efficiently, demand is less and supply problems are eased.

The proponents of the various renewable technologies envisage that their costs will become competitive with those of fossil fuels, at 4 to 5 cents/kWh, before 2000 or 2010 for biomass and wind, and somewhat later for solar thermal and photovoltaic generation.

Workshop Conclusions
Overall, the organizing group was impressed by the substantial progress made in the development of renewable technologies, but concluded that it would be inappropriate for the APS to attempt an assessment of the overall prospects of renewable electricity generation, including future costs and market penetration. Many of the issues related to costs and environmental impacts have little physics-related technical component, and therefore do not naturally lend themselves to evaluation by a physics group.

However, it was concluded that it would be valuable to carry out a somewhat more limited study, emphasizing a critical evaluation of technological aspects of the main generation methods, including their current status and their potential progress, problems, and opportunities.  Possible topics include metal fatigue and structural dynamics problems for wind turbines, fuel cells for electricity generation from gasified biomass, photodegradation in amorphous silicon cells, and energy storage methods.  A study with a technical emphasis could be of value in identifying possible strengths and weaknesses in proposed technologies, without necessarily attempting to answer broader questions as to the future role of renewable energy.

Acknowledgements.  I have drawn heavily upon talks and discussions at the POPA workshop, plus the report prepared subsequently by members of the POPA organizing committee, and I am indebted to these contributors.

  1. Unless otherwise indicated, data on energy consumption are based on Annual Energy Review 1991, Report DOE/EIA-0384(91) (US Department of Energy, Washington, DC, 1992) and Monthly Energy Review, March 1993, Report DOE/EIA-0035(93/03) (US Department of Energy, Washington, DC, 1993).
  2. Common units are kilowatt-hour (kWh) and gigawatt-year (GWyr), where 1 GWyr = 8.76 x 10^9 kWh. For renewable energy, it is common to assign a nominal primary energy input based on the energy content of the displaced fossil fuel, presently in the US at 10335 BTU/kWh (33% thermal efficiency). Thus, 1 quad of primary energy corresponds to an output of 11 GWyr, where 1 quad = 10^15 BTU = 1.055 x 10^18 J. Often, generation capacity is specified in units such as megawatts-electric (MWe), to emphasize that the reference is to the electrical output rather than the thermal input.
  3. The Potential of Renewable Energy, An Interlaboratory White Paper, Report SERI/TP-260-3674 (Solar Energy Research Institute, Golden, 1990).
  4. For example, in the National Energy Strategy (1991), electricity from renewable sources corresponds to only 12 quads of primary energy in 2030, while the scenarios of reference (3) have primary inputs ranging from 13 to 38 quads.
  5. Principal Conclusions of the APS Study Group on Solar Photovoltaic Energy Conversion, H. Ehrenreich, Chairman (APS, New York, 1979).
  6. The POPA Committee which organized the Workshop consisted of: David Bodansky (Chair), Henry Ehrenreich, Anthony Fainberg, Daniel Fisher, J.D. Garcia, Pierre Hohenberg (Chair, POPA), Roberta Saxon, and Francis Slakey. Talks were given by: George D. Cody (Exxon), William Fulkerson (ORNL), Allan R. Hoffman (DOE), Pascal De Laquil (Bechtel), Arthur H. Rosenfeld (Berkeley), Robert A. Stokes (NREL), Carl J. Weinberg (PG&E), and Robert H. Williams (Princeton).
  7. A brief summary of the Workshop is presented in: Report on POPA Workshop on Electricity Generation from Renewable Sources (1993). Single copies may be obtained by writing (with an enclosed self-addressed mailing label) to Ms. Nancy Passemante, APS, 525 14th Street NW, Suite 1050, Washington, DC 20045.
  8. G.D. Cody and T. Tiedje, "The potential for utility scale photovoltaic technology in the developed world: 1990-2010," in Energy and the Environment, B. Abeles, A.J. Jacobson, and P. Sheng, editors (World Scientific, Singapore, 1992), pp. 147-215; and POPA workshop talk.
  9. D.L. Elliott, L.L. Wendell, and G.L. Gower, An Assessment of the Available Windy Land Area and Wind Energy Potential in the Contiguous United States, Report PNL-7789/UC-261 (Pacific Northwest Laboratory, Richland, 1991). For winds of Class 4 or higher (mean wind speeds greater than about 5.5 m/sec (12.4 mi/hr) at a height of 10 m), wind turbine hubs at a height of 50 m, "severe" environmental restrictions on the placement of wind turbines, and an overall efficiency of about 19% in converting wind power to electrical power, the authors conclude that wind could generate 250 GWyr per year.

The author is at the Department of Physics, University of Washington, Seattle, Washington 98195.

Hazardous Waste Remediation: The Task Ahead

E. William Colglazier

The US has embarked on a massive effort to clean up land and water that has become contaminated with hazardous materials resulting from private and public activities. This effort has been mounted through federal, state, and private actions that stretch back a decade and a half. Most of these efforts were begun with only the haziest notion of the money and manpower that ultimately would be required; virtually nothing was known about the environmental consequences or costs of any of the possible choices.

Because of the work that has been done, there has been a dramatic expansion in the level of information and understanding of environmental contamination and its remediation.  With it has come the sobering realization that the extent of that contamination, the technical challenges of its remediation, and the resources required are all much greater than envisioned when the cleanup programs were initiated.  Further, what once seemed straightforward now seems much more complicated.  Many actions, though unidimensionally beneficial, are now seen to have disturbing trade-offs among environmental end points, populations, media, and generations.

The nation is at a transition. Its hazardous waste remediation programs are moving out of adolescence and into maturity. The time is ripe to use the past decade of experience to ensure that the course set in the early days is right for the future. One opportunity for reassessment will come in public debates over the reauthorization of the Resource Conservation and Recovery Act (RCRA) and the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA, the Superfund law). Another will come as key decisions are made on the scope and speed of cleanups at federal facilities.

A Study of Hazardous Waste Remediation
The purpose of a University of Tennessee study (1) was to come to as complete an understanding as possible of the magnitude of the overall resources required for hazardous waste cleanup. Resource requirements were estimated separately for the Superfund National Priorities List, RCRA Corrective Action, underground storage tanks, federal facilities, and state and private cleanup programs. The costs of the physical activities to clean up sites were separately estimated, including the costs of site investigation to determine what is to be done. For that reason the building blocks for the studies were taken to be the remediation technologies to be applied, the extent of those applications, and the number of sites at which they would be applied. The estimates systematically excluded the transactions component.

Somewhat arbitrarily, the study adopted a thirty-year time horizon. No shorter period appears adequate to put the inherited hazardous waste problem to rest, and yet no longer period can easily be comprehended. Moreover, extrapolating beyond thirty years becomes so speculative as to lose much utility.

Estimates were presented on a timeless "as built" basis using current costs as a proxy for the resources required.  That is, there is no consideration of the effect of when the remediation tasks are undertaken.  Consequently, the costs are not discounted from some base period, and cannot in any meaningful way be compared--either as to magnitudes or benefits--with alternative expenditures today.
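To illustrate why such undiscounted totals cannot be compared directly with present expenditures, here is a minimal sketch; the uniform spending profile and the discount rates are assumptions for illustration only:

```python
# Illustration (assumed figures, not from the study) of how strongly the
# present value of a 30-year spending stream depends on the discount rate,
# which is why the study's "as built" totals are left undiscounted.

total = 750e9           # $, current-policy best guess (Results section)
years = 30              # study time horizon
annual = total / years  # assume, for illustration, uniform annual spending

for r in (0.0, 0.03, 0.07):   # assumed discount rates
    pv = sum(annual / (1 + r) ** t for t in range(1, years + 1))
    print(f"r = {r:4.0%}: present value ${pv / 1e9:,.0f} billion")
# At 0% the PV is the full $750 billion; at 7% it is roughly $310 billion,
# well under half -- so the choice of base period would dominate any
# comparison with alternative expenditures today.
```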

Three Scenarios
The approach adopted was to take existing behavior and practices as the basis for inferring current policy and then to posit alternative policies--"less stringent" and "more stringent"--that would lead, in the researchers' judgment, to approximately the same level of risk to human health and the environment, but with lesser or greater certainty and with lesser or greater achievement of other goals. The current policy case was therefore grounded in observed behavior, while the two alternatives were more speculative.

Generally, application of current policy results in all sites with meaningful levels of contamination being restored to some degree. Current policy utilizes a combination of technologies to detoxify wastes to levels where risks are low; it restores most contaminated sites to the point where future use can be made of them with few restrictions, and in other cases isolates the contamination to limit potential exposure.

The less stringent policy option assumes the goal of eliminating current and future exposure of people and natural systems to significant levels of risk.  In this sense it is no less protective of human health and the environment than is current policy.  It is distinguished from the current policy case by depending less on destruction and more on isolating the contamination to minimize exposure.  It addresses the same sites, and allows no higher risks to exposed people.  Because this case uses a greater degree of containment and isolation, it may become necessary to revisit more sites than in the current policy case if containment fails or if use restrictions are removed.

The more stringent policy starts with the premise that destruction of contamination is the basic goal and that containment or isolation is acceptable only when technological feasibility is a constraint or when heroic measures and extremely large resource expenditures would be required.  Consequently, even the more stringent policy does not envision destruction when contamination is very low (in order to achieve pristine conditions), nor does it try to restore sites everywhere to the point where unrestricted use is appropriate.

In short, these three policy cases were conceived to lie along a continuum of more or less permanent destruction of contamination leading to unrestricted use of land and groundwater.  In design, it was the intent that these options not differ in residual health and environmental risks to which people and natural systems may be exposed.

Results
In the current policy scenario, total resources required will be approximately $750 billion if the country maintains its present course. But contamination could be substantially less than now perceived, and if so the total could be as low as about $480 billion. Conversely, the total could plausibly rise over $1 trillion in direct remediation costs over the next three decades, if contamination is as great as some people suspect.

An appropriate interpretation of a tilt toward less stringent policy is that resource requirements would be reduced about one-third from current policy levels.  Costs would drop from about $750 billion to less than $500 billion.  This would occur through use of technologies that emphasized containment and waste isolation rather than destruction, but that would not be expected to change significantly the ultimate impacts on human health and the environment.  This policy would leave more wastes in place, require somewhat greater restrictions on land and groundwater use than under current policy, and could present future generations with additional expenditures if they wished to remove those restrictions.  This estimate is accompanied by a plausible lower bound of less than $400 billion in case contamination is less than projected, and a plausible upper bound of about $700 billion if contamination is greater.

The interpretation of a more stringent policy follows along that described for the current policy and less stringent policy cases.  The best guess for resource requirements rises from $750 billion under current policy to well over $1 trillion if greater use is made of more intensive treatment technologies.  This best guess is bounded at somewhat less than $1 trillion if contamination is less than expected, and over $1.5 trillion if it is greater than contemplated.

The more stringent policy scenario differs from current policy primarily in the degree to which it lessens the contingent burden on future generations by allowing unrestricted use of more sites and groundwater and by freeing them from the need to be cognizant of wastes that remain in place.  Arguably, it also offers a greater margin of safety against future exposure to substances that may prove hazardous.  In contrast, by leading to greater handling and treatment of contaminated material, it increases exposure of those living now to potentially harmful substances.

Conclusions
The overriding conclusion is that policy toward hazardous waste remediation deserves the most serious attention from the public and decision makers. It would be comforting to say that major decisions are behind us and that the course is set, but the facts suggest that major questions are, and should be, open.

Take first the magnitude of the task. The current policy best guess of $750 billion would, if strictly comparable to current expenditures, absorb as much of the country's productive potential as one year of non-defense federal expenditures or a decade of total public and private expenditures on all other environmental quality objectives at FY 1990 levels. As significant, the difference in expenditures between a less and a more stringent policy--both of which are feasible and have strong advocates--would be about the same as the nation is now poised to spend on cleanups. Programs of this magnitude, and choices of this significance, deserve the closest scrutiny of operations, goals, and benefits.

But the gross numbers themselves lead to no conclusions, and certainly not to the conclusion that we cannot "afford" to do the job of cleaning up the wastes left by past generations.  As daunting as the task is, it is well within our capacity, especially since the effort will be spread over several decades.  The issue, rather, is one of incremental costs of different policies relative to their incremental benefits--and those benefits must be interpreted broadly to include matters beyond simple calculations of risks reduced or property values enhanced.

A second observation concerns the degree to which costs are being taken into account at federal facilities, particularly DOE facilities.  In the case of DOE cleanups, the decisions are being made in negotiations between DOE, EPA, and the states.  None of these parties, for understandable reasons, are especially interested in keeping costs down.  DOE is attempting to change its culture toward environmental stewardship, and is concerned about legal liabilities of its personnel if environmental regulations are not met.  For these reasons, DOE is asking for everything it thinks necessary to comply fully with the regulations and is going beyond what a private party might do.  The states are interested not only in cleaning up sites the public perceives as a threat, but also in protecting jobs at sites that would have a significant employment downturn if it were not for the cleanup effort.  And with DOE sites (unlike non-federal sites), EPA does not have to ask the President and Congress to add money to its own budget.  This perhaps explains why in examining the DOE cleanups it appears that the decisions being made are at the upper end of what might be required in terms of stringency, with obvious cost implications.  For many of the contaminated DOE sites that are isolated from the public, public health can likely be protected with institutional controls and containment remedies for the foreseeable future.  These alternatives should receive greater weight than they now do in decisions on DOE cleanups.

Given the uncertainties in what can be achieved by the DOE cleanup effort and at what cost, one approach would be for Congress to set an annual level of funding and for the key parties--DOE, EPA, and state and tribal governments--to agree on a priority-setting mechanism for allocating these funds.  That mechanism should emphasize protecting public health based on risk estimates, and reaching agreement with stakeholders on future land uses, and should place less emphasis on trying to comply as soon as possible with regulations designed for other cleanup problems.  With a tiered prioritization scheme that first emphasizes risks, more time might be available to ensure not only that funding is being wisely spent, but also that new technologies might become available that would reduce costs in the long run.  In the case of the DOE cleanup, continued investments in R&D on new cleanup technologies could have a high payoff over 30 years.

  1. Milton Russell, E. William Colglazier, and Mary R. English, "Hazardous Waste Remediation: The Task Ahead" (Waste Management Research and Education Institute, University of Tennessee, Knoxville, December 1991).

Forum Election Results and New Officers

The Forum's recent elections elicited 900 votes, 20% of the Forum's membership. The newly-elected officers are Alvin M. Saperstein as Vice Chair, and two new Executive Committee members: Tina Kaarsberg and Robert Lempert. The complete list of Forum Officers for 1993-94 is as follows:

  • Chair:  Marc Ross
  • Chair-Elect:  Anthony Nero
  • Vice Chair:  Alvin Saperstein
  • Past Chair:  Anthony Fainberg
  • Secretary-Treasurer:  Caroline Herzenberg
  • Forum Councillor:  Barbara Levi
  • Executive Committee:  Lisbeth Gronlund, Tina Kaarsberg, Robert Lempert, Cindy Schwarz, Julia Thompson, Jill Wittels

Councillor's Report on APS Council Meeting

The APS Council met in St. Charles, Illinois on 24 October 1992. The following were some of the agenda items that might be of interest to Forum members:

Report from the APS Executive Committee: (1) The future of NSF and its possible redirection towards greater support of industrial research has been of concern to the APS. The Physics Planning Committee drafted a letter to NSF, voicing APS concerns, and the letter has been well received. (2) APS has appointed a task force to plan the celebration of its centennial in 1999. Mildred Dresselhaus chairs the task force. (3) APS also appointed a task force on discriminatory behavior, which has representatives from the Committee on Membership, the Committee on Minorities, and the Committee on the Status of Women in Physics.

Ground has been broken for the American Center for Physics, APS's future home near the University of Maryland. Richard Werthamer is hopeful that APS can occupy the new building in less than a year.

During the report of the Committee on Constitution and Bylaws, there was some discussion of the requirement that the membership of each subunit of APS be at least 3% of APS membership to entitle that subunit to at least one seat on the Council. The criterion will be reviewed in another year. There are now four subunits that call themselves a Forum (Physics and Society, History of Physics, Education, and International Physics). Is there some short name by which the Forum on Physics and Society can distinguish itself from the other forums? Think about it.

As part of its ongoing charge to conduct periodic reviews of APS committees, the Committee on Committees this year reviewed the Committee on Membership (COM). The COM recently finished analyzing a survey of APS members, and the report is quite interesting. An announcement in APS News will tell members how to obtain a copy of the survey.

Harry Lustig reported on the finances of the Society. The fiscal picture is good at present, but Lustig expects finances to be tighter in future years, especially with rising journal-publication costs and increasing cancellations of library subscriptions. The Council approved a dues increase but added a new "junior" membership category with lower dues.

The candidates for fellowship were approved. The new Forum Fellows are Carol Jo Crannell, Art Hobson, and Ruth Howes. Congratulations to all three.

The big issues in publishing are electronic publishing and declining library subscriptions. The committee is studying ways to accelerate the former and decelerate the latter. Physical Review will celebrate its 100th anniversary this year; Physics Today plans to publish some relevant articles, and there will be special sessions at Society meetings.

The Committee on Membership reported on various activities being planned to help physicists caught in the tight job market. APS has run a trial career workshop at Fermilab, and others along the same lines are planned. Some subunits and some universities are planning special career counseling at their meetings. Physics Today hopes to run vignettes of physicists in non-traditional occupations.

The Panel on Public Affairs has decided to undertake as its next major study "The Technical Aspects of Renewable Energy." Specifically, the panel will focus on photovoltaics, solar thermal, wind, and biomass. It is seeking a chair (or co-chairs) and will then try to solicit funding.

The Committee on International Scientific Affairs (CISA) has proposed a cooperative basic-research program in physics with Vietnam. The Council voted to approve CISA's exploring how it might obtain the resources and institutional cooperation needed to implement such a program.

Innumerable other projects conducted by subunits or committees of APS were barely discussed due to time limits.

Barbara Levi, Forum Councilor

Nominations Needed for Awards!

The Forum is primarily responsible for two of the APS's annual awards: the Forum Award, for promoting public understanding of issues at the interface between physics and society, and the Leo Szilard Award, for the use of physics for the benefit of society in such areas as the environment, arms control, and science policy. The Forum invites nominations for both. Nomination forms may be obtained from Lawrence Badash, Department of History, University of California, Santa Barbara, CA 93106.

Committee Volunteers Needed!

In its operations, the Forum on Physics and Society depends on committees such as the Awards Committee, the Fellowship Committee, the Nominating Committee, and the Editorial Board of Physics and Society. If you are a member of our Forum and would be willing to help, please let us know. Contact our Chair: Marc Ross, Physics Department, University of Michigan, Ann Arbor, MI 48109.

Organize a Forum Invited Session!

One of the Forum's most effective vehicles for developing and presenting important issues within the APS is the invited-paper session. Those attending Forum sessions get the direct benefit of hearing outstanding researchers and policy makers in their fields, and this benefit is extended to the entire Forum membership through the publication of most Forum sessions in Physics and Society. So think about what session you might like to see happen and, most importantly, organize! In choosing potential topics, think of the meetings you are most likely to attend and of what would interest that part of the APS. Send suggestions, or volunteer, to Anthony Nero, the upcoming Program Chair, at Building 90, Room 3058, Lawrence Berkeley Laboratory, Berkeley, CA 94720.

Contribute to the Contributed Paper Session!

Given the upsurge in research on Forum-related issues arising from national and international changes of the last two or three years, the Forum is encouraging contributed papers for a session at the next April meeting, 18-22 April 1994, in Crystal City, VA. The overall session subject will be "Energy, Environment, and Arms Control: A New Era?" To help ensure enough papers for the session, contributors are encouraged to draft an abstract by September or so and to send a copy to the Forum's program chair: Anthony Nero, Building 90, Room 3058, Lawrence Berkeley Laboratory, Berkeley, CA 94720. And if you know someone who ought to contribute a paper, you might encourage that person to do so.

Join the Forum! Receive Physics and Society!

Physics and Society, the quarterly of the Forum on Physics and Society, a division of the American Physical Society, is distributed free to Forum members and libraries. Nonmembers may receive it by writing to the editor; voluntary contributions of $10 per year are most welcome, payable to the APS/Forum. We hope that libraries will archive Physics and Society. Forum members should request that their libraries do this! APS members can join the Forum and receive Physics and Society by mailing the following information to the editor (see page 2 for address)!

From the Outgoing Chair

This year has been, I think, quite a successful one from our Forum's point of view. We are managing well the transition from arms-control and defense-oriented issues to other matters dictated by the direction in which the Nation and the world are heading. At the intersection of physics and society in the US, there is a return of interest in energy and efficiency issues, as well as in global climate (and other) change; there is an awakening of interest in science and the law (partly fueled by the Daubert v. Merrell Dow case now before the Supreme Court); there is renewed interest in public perceptions of science and scientific issues; and, above all, there is interest in the job market, especially for new physicists. There is, of course, still more than a residual interest in security affairs, including the proliferation of weapons of mass destruction and advanced conventional weapons.

We have had sessions, both at the Seattle and the Spring meetings, reflecting most of the above. On the job front, our new Vice-Chair Al Saperstein has begun a study on jobs and career opportunities in physics which will, we hope, produce a useful document within about 18 months.

Further, thanks to Norm Chonacky's tireless work, there is now a means of electronic communication available to the Forum. If you wish to be connected, contact CHONACKY@UPS.EDU or MACY@POLAR.BOWDOIN.EDU. We will try to improve our capabilities in this area even further in the coming year. And Lisbeth Gronlund, with Norm Chonacky, Nancy Forbes, and Art Hobson, has begun an outreach program to try to expand our Forum's membership. We had tables at the two major meetings this year to recruit members and stimulate interest, and we hope to continue these efforts.

Finally, we have decided, along with the Division on Plasma Physics, to cosponsor a Dwight Nicholson Award for Humanitarian Service, in memory of the recent Chair of the Physics Department at the University of Iowa who was killed by a deranged graduate student.

The Newsletter continues to be fresh and interesting under the able editorship of Art Hobson. We are making progress in widening our scope and our membership, and I am confident we shall proceed successfully in this direction with Marc Ross as our Chair for the coming year. I welcome him to his new post, thank my colleagues for all their help during the past year, and offer my services to aid the incoming Executive Board in any way I can.

Anthony Fainberg

Three Story Lines for a Physics Course

I've had the good fortune to spend the past two years writing a textbook for a liberal-arts physics course. It is tied together by four story lines: energy and the three topics discussed below. I welcome your comments.

Social effects of science
Occasionally, when I talk with groups, I ask them to call out any significant contemporary problems that come to mind. It doesn't take long to amass quite a list: overpopulation, urban decay, extinctions, cults, and all the rest. As the suggestions accumulate, it becomes apparent that they all have significant science and technology components.

Without science we would have other problems, such as early death by disease, but we would not have the particular problems we have today. For instance, because medical science has partly solved the problem of death by disease, we now have the problem of overpopulation. The death problem has been replaced, in a sense, by a birth problem. Because we have accepted science's help in solving the "death" side of the birth-and-death equation, but have not simultaneously taken responsibility for the "birth" side, the planet has more people than it can handle.
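The point can be made roughly quantitative with the simplest textbook growth model, offered here only as an illustration, with b and d the per-capita birth and death rates per year:

    \[ \frac{dN}{dt} = (b - d)\,N \quad\Longrightarrow\quad N(t) = N_0\, e^{(b-d)t} \]

If medicine cuts d while b stays fixed, the exponent grows: dropping the death rate from, say, 35 to 10 per thousand per year against a steady birth rate of 40 per thousand raises the growth rate from 0.5% to 3% per year, shrinking the population's doubling time from about 140 years to about 23.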

Science gives us great power, and we can use that power in helpful or harmful ways. Without science and technology, we would not have the automobile or television, for example. When you turn the switch on either device, you bring great power to bear on yourself, on others, and on Earth.

The problems of science and society really come down to this: humankind is not paying its dues for the fruits of science. We are quick to accept the speed of the automobile, the fun of television, and the cures of medicine, but we are slow to clean up our exhaust fumes, to maintain intelligent reading habits, or to control our birth rate. The ozone story is one good example: humankind enjoyed its CFC-based air conditioning and styrofoam products for many decades before anybody took the trouble to investigate what problems all this might be causing, and even then it was 15 more years before we got serious about eliminating CFCs. Now the atmosphere's ozone layer is damaged, and it will not soon recover. Earth, including us, is paying the price.

We dare not accept science's benefits without accepting its responsibilities.

Speaking as a science teacher who is doubtless prejudiced in the matter, I suggest first that we all, and especially scientists, learn much more science. Scientists, usually trained narrowly in one specialty, are among the scientifically illiterate. We scientists need to learn, and to communicate, more broadly, especially with non-scientists. Non-scientists need to learn more about the physical universe and our planet. Humankind is using great power today without knowledge. We are still in our technological childhood. If we want this experiment to succeed, we had better begin to understand what we are doing, for the use of great power without knowledge is always a prescription for disaster.

Comparing Newtonian and post-Newtonian physics
This topic runs deep into the cultural roots of western industrial civilization. Modern culture, including plenty of scientists, still perceives science largely in outdated Newtonian terms. Newton's mechanical universe embodies a materialistic worldview that leaves little room (no room, really) for freedom, chance, or creativity. Many people would argue that it leaves little or no room for spiritual values.

But modern physics paints a quite non-Newtonian picture. Its most basic elements are not really particles but fields and energy, as structured by relativity and quantum theory. Many non-Newtonian forms of energy exist, and the universe emerges not as a predictable clockwork mechanism but as a dynamic and unpredictable network of energy. Material and nonmaterial particles pop unpredictably and briefly in and out of their fields. This is nothing like a clock; in many ways it is the opposite of a clock. Considering the hidden and often distant interactions that seem to be the essence of the quantum world, many have suggested that, if we are to use images at all (and apparently we are, for physicists have used the clockwork image for many centuries now), the universe is more comparable to a living, integrated, unpredictable organism than to a dead, reducible, predictable clock.
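Just how briefly can be estimated from the energy-time uncertainty relation, a standard back-of-the-envelope exercise added here for illustration: a fluctuation of energy ΔE can persist only for a time of order

    \[ \Delta t \sim \frac{\hbar}{2\,\Delta E}, \]

so an electron-positron pair, with ΔE ≈ 2mₑc² ≈ 1 MeV, flickers in and out of existence in roughly 3 × 10⁻²² seconds.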

It is by no means clear what philosophical view will emerge from all of this. I feel that the world is just beginning to absorb the impact of relativity and, especially, of quantum theory. This is not surprising. After all, more than a century elapsed after Copernicus' death in 1543 before Europe began to absorb the cultural impact of post-medieval science. The post-Newtonian century will be the 21st, not the 20th.

One practical example of the importance of forming a post-Newtonian worldview is the ongoing reaction, in the US at least, against the theory of biological evolution. The reaction comes from a perceived threat to religious beliefs: fundamentalists typically view evolutionary theory as mechanical, deterministic, and materialistic, with no room for spiritual values. Thus the real opposition may be not to evolution itself, but to evolution as interpreted through the concepts of Newtonian physics. A post-Newtonian culture might relieve these old religiously based anxieties about science.

How do we know?
Science's answer to this question is surprisingly simple: We know by experience, as interpreted through intelligence. Science's "method" is simple, but fundamental. It is to take nothing for granted, to form one's views on the basis of careful observations and hard honest thinking, and to be willing to modify those views in the light of new observations. It comes down to being observant, open-minded, and honest.

The 20th century has been torn by rigidly held and conflicting ideologies. The nationalistic, religious, economic, and ideological forms come in every imaginable variety, many in utter contradiction with one another, yet those who hold them are absolutely convinced that they are right. The result has been war, fear, prejudice, fanaticism, and, perhaps most frightening of all, willful ignorance. Science's view is that the danger lies not so much in the beliefs themselves as in their absolute nature. Even wrong and harmful beliefs can be corrected if one is willing to trust experience and to be intellectually honest. And even correct and healthy beliefs can become dangerous if accepted uncritically or absolutely.

In thinking about how we might do better in the 21st century than we have in the 20th, we should perhaps ponder science's most basic value: All ideas are subject to testing by experience, and to challenge by critical rational thought.

It is a simple but demanding code. It is often uncomfortable, even painful, to honestly re-evaluate one's beliefs in the light of experience. But this willingness to re-evaluate might be science's most important benefit.

Art Hobson

