Archived Newsletters

Help for "Physics in Perspective"

I'm looking for textbook suggestions for a new and unusual physics course. Maybe somebody out there can help me.

The course is part of our new Bachelor of Arts physics program, for students who want a physics degree but who are headed for careers in medicine, journalism, law, business, science teaching, etc. This program is algebra-based, not calculus-based. The new course will follow a sequence of two semesters of algebra-based introductory physics plus one semester of algebra-based modern physics.

The course is "Physics In Perspective," with the following catalog description: Human implications of physics, including life's place in the universe, the methods of science, human sense perceptions, energy utilization, social impacts of technology, and the effect of physics on modern world views.

The course discusses two broad themes, and I'll probably need a separate text for each. The first theme is the meaning and implications of modern physics: the methods of science, life's place in the universe, the interpretation of modern physics and especially quantum physics, the effect of physics on modern world views. The second theme is societal topics: energy resources, global warming, ozone depletion, or other topics. I'm looking for an algebra-based textbook that could help me with either one of these two themes. Any textbook suggestions? Any suggestions for useful articles?

Art Hobson
Department of Physics
University of Arkansas
Fayetteville, AR 72701

Teaching versus Research?

David Pushkin's letter in the January 1997 P&S, commenting on the editorial in the October 1996 issue, perpetuates the myth that research differs from teaching. In the current climate, "teaching" means exclusively undergraduate teaching. In reality, research is also teaching: it is teaching our graduate students how to ferret out nature's secrets. It is also teaching ourselves and our peers, through papers and conferences, the latest scientific results obtained throughout the world. I maintain that it is this "research" type of teaching which has the greater beneficial impact on society, through the research and development accomplishments of our graduate students while in school and after they graduate, and by replenishing the knowledge banks upon which future generations will draw.
When we allow politicians and others to define "teaching" as only undergraduate teaching, and research as something other than teaching, we become our own enemy. Until we as a community agree on this point, we remain vulnerable to the type of criticism that is so prevalent now and that is likely to produce unwise decisions, to the detriment of both science and society as a whole.

Allen Rothwarf
Dept. of Electrical and Computer Engineering
Drexel University, Phila. PA 19104

The Future of Industrial Careers

Roland W. Schmitt

Trends in Funding of Industrial R&D: After a decade of fast growth, industrial R&D expenditures began to flatten in 1984, due largely to a drop in federal support. This was somewhat offset by continued growth of industry's own funds until 1992, when they, too, flattened. Thus the principal changes in industrial R&D funding over the last two decades have been:

  • flattening of growth rate in ~'84
  • significant shift in support from federal to industrial
  • drop in expenditures beginning ~ '92

There are several projections of future industrial R&D spending.

  • A forecast for 1996 by the Industrial Research Institute indicates growth of 6%; while this varies by industry, only the petroleum and energy industries and fabricated materials are projected to cut their R&D, and neither is likely to be a big employer of physicists.
  • Battelle also makes projections of industrial R&D. It forecasts that a further drop in federal support will be more than offset by industry's own expenditures - the first increase in several years - noting that "The underlying strength of ... industries ... like telecommunications, pharmaceuticals, automotive and aerospace, computers, electronics, and software ... and the rapid changes they are going through mean R&D funding growth is likely to continue for several years."

Conclusion: Indications are that industrial R&D expenditures may have "bottomed" and will start back up - though modestly so - over the next few years.

Even more important than the expenditures on industrial R&D are the changes that have taken place in it over the recent past and will, undoubtedly, persist into the future.

The common way of describing these changes is as the "squeezing out" of long-term, basic research - especially in the corporate labs of major corporations such as Bell, GE, IBM, and DuPont. But I believe this is an inadequate characterization of what's going on. Quite a number of people have written about these changes; they are driven by industry's desire to improve the effectiveness and productivity of the money it spends on R&D, just as it is trying to do in all of its other corporate activities.

Let's look at some of the factors and features driving the changes. Andrew Odlyzko of Bell Labs has written a fine article about what is happening. He says "...the accumulation of technical knowledge has made it easy to build new products and services, and so the most promising areas for research have moved away from the traditional ones that focus on basic components, and toward assembling these components into complete systems." To the extent that these observations are true, the question becomes how physicists fit in. But before answering that question, let's look at another factor in the changing environment of industrial R&D.

As corporations search for stronger competitive positions in their markets, and for greater efficiency and productivity in their R&D, they are asking themselves how best to acquire and develop the technologies needed for their products. They have found that tapping outside sources is often preferable to doing the job in house.

So, what's happening in corporate R&D is a shift toward what's been called the "Virtual R&D Laboratory": a strategy for developing or acquiring the technology the corporation needs that employs a variety of modes - in-house research, joint ventures, partnerships with suppliers, and sponsored research at universities. In short, it means moving away from the old notion that all of your research, both the pioneering R&D and the R&D for evolutionary innovations, must be done in-house.

Companies are increasingly identifying the heart of their competitive advantage - their core competencies - and concentrating on leadership work in that arena. For all the other technology they need, they look for the best, most effective way of acquiring it.

What I've said so far pertains to large firms. Small and mid-sized firms, too, have to concentrate on core technologies. But for them, having the internal competence to spot and recognize what they need from outside is crucial.

Finally, there is the entrepreneurial world. All I'll say about this is that some physicists are pretty good in this arena, but it is a different life from what you learn as a graduate student. But, don't forget it.

How do physicists fit into this new industrial world? The answer is "very well". The new APS Forum on Industrial and Applied Physics and AIP's new magazine "The Industrial Physicist" are, in my opinion, doing a fine job of letting the physics community generally know what life as an industrial physicist is like. The Chairman of FIAP, Abbas Ourmazd, and Leonard Feldman wrote a great little piece last November called "But Is It Physics?" They say that we have to move "...beyond such questions of definition, because ultimately, they do not matter." We have to recognize "...the evolving nature of science and technology, and the central role that can be played by physicists in this evolution."

The opportunity to exercise conceptual powers, inventiveness, originality - all characteristics developed in a physics education - lurks in many, many corners of the industrial enterprise. As Ourmazd and Feldman say, "Physics produces a 'can make anything, can fix anything' attitude" - a trait of immense value in industry.

But, coming back to the present view of physics and physicists and how they are faring in the job market, one has to say things are mixed. For the past several years the job market for scientists and engineers at the bachelor's level has been cool and, of course, physics has shared in this.

One campus placement officer told me recently that demand has exploded this year for the first time in a decade, and on his campus the number of corporate recruiters is up 30%. Moreover, beginning salaries are up. Areas of high demand seem to be electrical engineering, systems engineering, and computer science; he also explicitly mentioned master's-level physics majors.

The situation for Ph.D. physicists - again in "conventional" positions - seems a bit more problematical. In 1995 the total production of physics Ph.D.s in the U.S. dropped a bit from 1994 - from 1481 to 1450. For initial employment, 60% of these went to post-doctoral positions and only about a quarter went to "potentially permanent" positions. Among the latter group, 25% went to academic positions, and 58% to industry. The fraction going to academe has been constant over the last few years, while the fraction going to industry has grown at the expense of government labs and non-profits.
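
To put those percentages in absolute terms, here is a quick back-of-envelope computation using the 1995 figures quoted above (the rounding to whole numbers is mine):

```python
# Approximate breakdown of 1995 U.S. physics Ph.D. initial employment,
# using the percentages quoted in the text.
total_phds = 1450

postdocs = 0.60 * total_phds               # post-doctoral positions
potentially_permanent = 0.25 * total_phds  # "potentially permanent" positions

academic = 0.25 * potentially_permanent    # share of the latter going to academe
industry = 0.58 * potentially_permanent    # share of the latter going to industry

print(f"postdocs: ~{postdocs:.0f}")
print(f"potentially permanent: ~{potentially_permanent:.0f} "
      f"(academic ~{academic:.0f}, industry ~{industry:.0f})")
# -> roughly 870 postdocs, 360 potentially permanent (about 90 academic, 210 industry)
```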

In one instance, (my own former lab) out of 66 Ph.D.s hired in 1995, only 3 were physicists! I'm disappointed to have to report this.

The conclusion is pretty simple. An education in physics, at all levels - bachelor's to Ph.D. - is a flexible and versatile asset that is applicable to many types of careers. But two things get in the way of taking full advantage of this: the perception of newly minted physicists and the perception of employers! Physicists are not taught to think of themselves as versatile scientists or technologists, so they tend not to look broadly enough for exciting career opportunities. And employers, by and large, don't have a clear perception of what physicists can do. Both sides of this equation have to be tackled. Again, I think that the APS Forum on Industrial and Applied Physics and AIP's magazine The Industrial Physicist will help.

Roland W. Schmitt is Chairman of the Board of Governors of AIP and President Emeritus of RPI and Sr. Vice President (Ret.) of General Electric. rws@aip.org

End of Nuclear Testing

Jeremiah D. Sullivan

The technical basis for the U.S. adoption of a "Comprehensive Test Ban" national policy was a Department of Energy sponsored study conducted by the JASON group [1] during the summer of 1995. My remarks here draw from my participation in that study together with the knowledge and experience I have gained from two decades of work as an academic and as a consultant to the US Department of Defense, Department of Energy, and Arms Control and Disarmament Agency on arms control and defense technologies. The study itself is classified, but the Summary and Conclusions are publicly available [2], and I speak primarily about them.

The primary task of the JASON study was to determine the technical utility and importance of information the United States could gain from continued underground nuclear testing...

The study panel was unique in having four senior weapons scientist-engineers as members together with ten physicists, primarily from the academic community, all of whom had considerable experience in the issues addressed by the panel. The panel had full and unrestricted access to information held by the US nuclear weapons design laboratories: Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories (SNL), and it received full cooperation from these laboratories.

US Policy and Plans Concerning Nuclear Weapons
The key assumptions of the study were given by US policies and plans regarding its nuclear forces operative during the summer of 1995, which remain unchanged today.

  1. The US intends to maintain a credible nuclear deterrent.
  2. The US is committed to world-wide nuclear non-proliferation efforts.
  3. The US will not develop any new nuclear weapons designs. (A policy first announced by President Bush in 1992 and reaffirmed by President Clinton.)

In practice, Policy 1 means that after START II reductions (scheduled for the year 2003), the US will have approximately 3,500 nuclear warheads and associated launchers in its active strategic arsenal, a smaller number of strategic warheads in the inactive reserve, and a sizable number of tactical warheads in reserve as well [3]. The warheads in the active reserve are referred to as the "enduring stockpile." The US is currently dismantling about 1,500 nuclear warheads per year and is manufacturing no new ones.

The enduring stockpile consists of nine distinct designs of various ages; six are LANL designs and three are from LLNL. All are variations of a general design type referred to as "hollow-boosted primaries with fission-enhanced secondaries." Policy 1 also means that the US will require guarantees of the surety, safety, and performance (reliability) of weapons in the enduring stockpile.

Surety means that rigorous measures are in place to ensure that no nuclear weapon is used without authorization or falls into the hands of an unauthorized individual. To this end, physical mechanisms are built into warheads to prevent any nuclear yield should an unauthorized party acquire a weapon, and permissive action links and related measures are included in the overall weapon systems to prevent use by authorized holders without approval from the National Command Authority - the President under normal circumstances. None of the technical or operational procedures associated with surety requires nuclear testing, so surety has never been an issue in debates over nuclear test ban policy.

Safety means that weapon designs and handling procedures are chosen to ensure, to the highest possible levels, that in the normal storage, transport, and basing of nuclear weapons, no accident, fire, collision, or other mishap will result in any nuclear yield or dispersal of fissile material (weapons-grade plutonium or highly enriched uranium).

Reliability means the yield of the weapon will fall within specified limits even in the worst-case environment: just prior to tritium boost gas supply replenishment, operation in the high neutron environment of a nuclear war, or implosion of the primary at sub-freezing initial temperature.

Questions Facing the US Nuclear Weapons Community in a CTBT Era

  1. What technical capabilities will be required to maintain confidence in the enduring stockpile for an indefinite period?
  2. What should the response be to aging effects uncovered during inspections of existing weapons?
  3. How will human expertise in the science and technology of nuclear weapons be maintained?
  4. What types of zero-nuclear-yield experiments are important in maintaining confidence in the enduring stockpile? (These are often referred to as above-the-ground experiments, or AGEX.)
    and most importantly,
  5. What contributions would testing at low levels of nuclear yield make to maintaining confidence in the enduring stockpile? Such testing might be "permitted" for a finite period of time as a transition into a true CTBT era, or perhaps be allowed indefinitely, in effect defined as "not counting as a nuclear test."

These technical questions do not exist in a vacuum, especially Question 5. Arms control policy, non-proliferation policy, international relations, and a host of other factors are impacted by the answers. My remarks below focus entirely on the fifth question. DOE initiatives to assure answers to the other four questions are contained in its Science Based Stockpile Stewardship Program (see the following paper in this issue).

Physics of Modern Nuclear Weapons
Modern nuclear weapons consist of a primary and a secondary stage [4, 5]. The primary consists of a hollow shell of fissile material (the "pit"), surrounded by an array of high explosive (HE) charges which, when detonated by an appropriate signal, cause a spherically symmetric implosion that compresses the pit to a supercritical configuration. At the optimal moment, a burst of neutrons is released into the imploded pit, triggering a flood of fission chain reactions.

In the primaries of all modern nuclear weapons, a boost gas mixture consisting of deuterium (D) and tritium (T) is introduced into the pit just prior to HE initiation. As the fission energy released from the supercritical assembly builds, the DT gas in the pit is heated above the threshold for thermonuclear processes, the most important of which is D + T -> alpha + n. The sudden, intense flood of neutrons created in these fusion reactions induces vastly more fission chains, thereby greatly enhancing the fraction of the fissile material in the pit that undergoes fission. The direct contribution of fusion to the net primary yield is minor; the indirect contribution is very large.

The energy released from the primary couples to the secondary by radiative transport, creating the conditions for thermonuclear burn. The basic ingredient in a secondary is solid lithium deuteride (6Li-D), which contains the deuterium needed for the D-T fusion process and generates in a timely manner the needed tritium through the "catalytic" process n + 6Li -> alpha + T. Depleted or enriched uranium can be added to the secondary to enhance substantially the secondary yield, given the intense flux of energetic 14 MeV neutrons that result from D-T processes there. Modern secondary designs exploit this synergy between fusion and fission, and for this reason a substantial fraction of the overall secondary yield comes from fission processes. This is the basis for the popular rule of thumb for estimating fallout: 50% of the yield from fusion, 50% from fission [6].
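
As an aside on the "14 MeV" figure: in the D + T -> alpha + n reaction, roughly 17.6 MeV is released and, by momentum conservation, the lighter neutron carries about four-fifths of it. A minimal sketch of that standard kinematics (non-relativistic, reactants taken to be at rest):

```python
# Energy partition in D + T -> alpha + n, treating the reactants as at rest.
# Momentum conservation gives each product a share of the energy release
# inversely proportional to its mass.
Q = 17.6            # MeV released per reaction (approximate)
m_alpha = 4.0026    # alpha-particle mass in atomic mass units
m_n = 1.0087        # neutron mass in atomic mass units

E_neutron = Q * m_alpha / (m_alpha + m_n)
E_alpha = Q * m_n / (m_alpha + m_n)

print(f"neutron: {E_neutron:.1f} MeV, alpha: {E_alpha:.1f} MeV")
# -> neutron ~14.1 MeV, alpha ~3.5 MeV, consistent with the "14 MeV" quoted above
```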

Primaries and secondaries present quite different challenges to designers. Primaries are notoriously sensitive to small changes since the basic implosion process, being hydrodynamic in nature, is accompanied by non-linear effects and a penchant for turbulent behavior. In addition, one has low-density material driving high-density material, which raises the specter of Rayleigh-Taylor and other instabilities. Finally, the primary depends critically on the uniform performance of the high explosive, a notoriously idiosyncratic chemical that is sensitive to environmental conditions, grain size, and other details of manufacture.

In contrast, the physics of secondaries, while complex, is generally forgiving because of the speed-of-light transport of radiation vs. the relatively slow supersonic speed of material transport. Computer modeling of secondaries, however, presents enormous challenges because of the need for fine time steps to capture the rates of change of radiation processes while also following hydrodynamic motion operating on much longer time scales.

Why the Study Was Commissioned
Following the successful indefinite extension of the Nuclear Non-Proliferation Treaty in May 1995, a major debate developed within the US security and arms control communities. This debate had three main voices: (i) some argued that the United States needed to maintain the right to do nuclear testing at approximately the half-kiloton level for a fixed period, say ten years, to retain confidence in the safety and reliability of its nuclear weapons; (ii) others argued that retaining a right to do hydronuclear testing (defined roughly as less than four pounds of nuclear yield) indefinitely would be required to retain confidence in the enduring stockpile as it aged; (iii) yet others argued that neither sub-kiloton nor hydronuclear testing was necessary and that neither contributed usefully to maintaining confidence in the enduring stockpile. (Hydro tests, which study implosive assemblies that never achieve a supercritical configuration, were not at issue.)

Conclusions of the Study
The primary task of the JASON study was to determine the technical utility and importance of information the United States could gain from continued underground nuclear testing at various levels of nuclear yield for weapons in its enduring stockpile as they age [2].

Conclusion 1-The first conclusion of the study panel was that the United States could have high confidence in the safety, reliability, and performance margins of weapons in the enduring stockpile. This confidence is based on 50 years of experience and analysis of more than 1,000 nuclear tests, including approximately 150 nuclear tests of modern weapon types in the past 20 years.

Conclusion 2-The report recommends a number of activities that are important for retaining confidence in the safety and reliability of the weapons in the enduring stockpile whether or not testing at the half-kiloton level or less is permitted. The recommended non-nuclear tests, most being extensions of above-ground experiments that have long been part of laboratory operations, would be designed to detect, anticipate, and evaluate potential aging problems and to plan for refurbishment and remanufacture as part of the DOE Science Based Stockpile Stewardship Program.

Conclusion 3-Weapons in the enduring stockpile have a range of performance margins, all of which were judged by the panel to be adequate. The performance margin of a nuclear weapon is defined as the difference between the minimum expected primary yield [worst case] and the minimum yield required to ignite the secondary. Simple measures requiring little effort, such as increasing the tritium load in boosting or changing boost gas reservoirs more frequently to compensate for tritium decay, could substantially increase performance margins to hedge against unforeseen effects.

Conclusion 4-The primary argument for retaining the right to do testing at or around the half-kiloton level for a finite period was that it would give the US valuable information about the effects of aging of weapons designs in the enduring stockpile. Extrapolation using computer codes from yields obtained in tests of primaries that demonstrate the initiation of boosting (half-kiloton, roughly) to an expected full-boosted primary yield is quite robust, provided the codes are well calibrated.

After careful examination the study panel concluded that a finite period of half-kiloton testing would give little or no useful information about aging effects beyond what can be obtained from inspections and other AGEX in a well conceived stewardship program. Namely, nuclear tests at the half-kiloton level for a finite term would help develop codes further and improve theoretical understanding of the boosting process but would not contribute to understanding aging effects. Similarly, such testing would not provide useful checks of refurbished or remanufactured primaries made after the time the testing ceased. In view of its limited utility, the study panel found that half-kiloton testing for a limited time had a very low priority in comparison to the activities recommended in Conclusions 2 and 3.

To be useful, half-kiloton testing would need to be continued indefinitely. This, the study panel observed, would be tantamount to converting the CTBT into a threshold test ban treaty.

Conclusion 5-The study panel concluded that testing of nuclear weapons at any yield below that required to initiate boosting is of limited value to the United States whether done for a finite period or indefinitely. This yield range includes 100-ton testing and hydronuclear testing. The hydronuclear case merited special discussion.

The arguments for continued hydronuclear testing were that such tests would provide a valuable tool for monitoring the expected performance of weapons in the enduring stockpile as they age. The basic idea would be to do a set of baseline hydronuclear tests of (modified) primaries of the types now in the stockpile and to follow up in the future with regular hydronuclear testing of primaries drawn from the stockpile and similarly modified. The test results would be compared to the baseline data.

The panel concluded that a persuasive case could not be made for the utility of hydronuclear testing to detect small changes in the performance of primaries. The fundamental problem with hydronuclear testing is that primaries need to be modified drastically to reduce the nuclear yield to a few pounds of TNT equivalent. This can be done either by removing an appropriate amount of the fissile material from the pit and replacing it by non-fissile material, e.g., depleted uranium, or by leaving the pit intact and replacing the boost gas in the pit by material that will halt the implosion just after the point of supercriticality is achieved. Neither method exercises a pit through its normal sequence, and so one does not obtain data relevant to the real implosion. Extrapolation of hydronuclear test results to actual weapons performance would be of doubtful reliability and low utility.

Hydronuclear testing can be useful for checking the one-point safety of a primary design, as was done in the past by the U.S. [7] Today, however, all US weapons designs in the enduring stockpile are known and certified to be one-point safe to a high degree of confidence, so hydronuclear testing is not needed for safety tests. Moreover, the development of 3-D codes further reduces any need to do actual tests to evaluate issues of one-point safety. Modeling capabilities will further improve as computer systems and codes advance.

Conclusion 6-A repeated concern raised by the nuclear weapons community about entering into a CTBT of unlimited duration is the ultimate unpredictability of the future. No one can predict with certainty the behavior of any complex system as it ages. The study panel found that should the United States encounter problems in an existing stockpile design that lead to an unacceptable loss of confidence in the safety or reliability of a weapon type, it is possible that testing of the primary at full yield, and ignition of the secondary, would be required to certify a specified fix. Useful tests to address such problems generate nuclear yields in excess of 10 kt. A "supreme national interest" withdrawal clause - standard in all arms control treaties - would permit the United States to respond appropriately should such a need arise. It is highly unlikely that even major problems with one or two designs would cause the United States to withdraw from the CTBT, given the political implications involved. Major problems with all or almost all of the designs in the enduring stockpile would most likely be required.

Conclusion 7- The study panel observed that its Conclusions 1-6 were consistent with US agreement to enter into a true zero-yield Comprehensive Test Ban Treaty of unending duration, which includes a supreme national interest clause.

Jeremiah D. Sullivan is with the Department of Physics and the Program in Arms Control, Disarmament and International Security, University of Illinois at Urbana-Champaign; jdsacdis@uxl.cso.uiuc.edu

References

Study panel members were: Sidney Drell (Chair), John Cornwall, Freeman Dyson, Douglas Eardley, Richard Garwin, David Hammer, Jack Kammerdiener, Robert LeLevier, Robert Peurifoy, John Richter, Marshall Rosenbluth, Seymour Sack, Jeremiah Sullivan, and Fredrik Zachariasen.

  1. JASON Nuclear Testing Study, summer 1995. The study itself is classified; its unclassified Summary and Conclusions appear in the sources listed in reference 2.
  2. Congressional Record-Senate, Vol. 141, No. 129, pp. S11368-S11369, August 4, 1995; also "JASON Nuclear Testing Study," Arms Control Today, pp. 36-37, September 1995. (At this time no other portion of the report is unclassified.)
  3. R. S. Norris and W. M. Arkin, "Nuclear Notebook" (Natural Resources Defense Council), Bulletin of the Atomic Scientists, July/August 1995, pp. 76-79.
  4. P. P. Craig and J. A. Jungerman, Nuclear Arms Race: Technology and Society, McGraw Hill, 1986, pp. 185-190.
  5. D. Schroeer, Science, Technology, and the Arms Race, John Wiley and Sons, 1984, pp. 62-65.
  6. S. Glasstone and P. J. Dolan, The Effects of Nuclear Weapons, 3rd Edition, United States Government Printing Office, 1977.
  7. R. N. Thorne and R. R. Westervelt, "Hydronuclear Experiments," LA-10902-MS UC-2, Los Alamos National Laboratory, 1987.

Stewardship of the Nuclear Stockpile Under a Comprehensive Test Ban Treaty (CTBT)

Philip D. Goldstone

In July 1993, President Clinton directed that means other than nuclear testing be sought to maintain the safety, reliability, and performance of U.S. weapons. This was elaborated later that year in a Presidential Decision Directive as well as in Congressional language. The 1994 Nuclear Posture Review established a "lead but hedge" policy of seeking further nuclear stockpile reductions (toward START II and if possible beyond) and determined that there was, for the first time in decades, no requirement for new-design U.S. nuclear warhead production. But it also required the DOE to maintain the stockpile, maintain a readiness to test if required, and sustain the capability to develop and certify new weapons in the future if needed. In the same year, the President reaffirmed the role of nuclear deterrence in U.S. national security strategy. In declaring the U.S. intent to negotiate a "zero yield" test ban (August 1995), he codified the "supreme national interest" in confidence in stockpile safety and reliability. In addition he established six "safeguards" as conditions for our entry into the CTBT.

These safeguards include a science-based stockpile stewardship program, a new annual certification procedure--and the possibility of conducting necessary nuclear tests even under a CTBT as a matter of supreme national interest, should this science-based program be unable to provide the needed confidence in the stockpile at some time in the future. When ratifying the START II treaty in 1996, the Senate also reaffirmed the U.S. commitment to stockpile stewardship and other nuclear-security capabilities.

Technical Challenge. What, then, is the challenge of stewardship without nuclear testing? The answer lies in three broad areas:

  1. The certainty that issues will arise in ever-aging weapon systems which will require evaluation and repair;
  2. The important role that nuclear testing played in validating not only the operation of new warheads and bombs, but also the quality of production practices and the quality of the scientific tools and expert judgment applied to evaluating weapon safety, reliability, and performance;
  3. The scientific unknowns and technical gaps that we must necessarily fill in order to provide sufficient quality in such evaluations for the U.S. stockpile in the future. (On the other hand, a far lower level of science and technology is needed just to enter the nuclear arena; 1945's "Little Boy" was used first in war without a nuclear test.)

Aging. Historically, retirement and replacement of old nuclear weapons with new or different types kept the average age of the U.S. stockpile relatively low. Until as late as 1975 the stockpile was, on average, less than a decade old. Through the 1980's, its average age was roughly constant, about 13 years. Selective retirement of older weapons in the large stockpile reductions at the end of the Cold War reduced the average age for a time. But without new weapons replacing old, the stockpile age is now increasing year by year even as we further dismantle. In 1997/1998, the average age of the U.S. stockpile will exceed our historical experience, and by 2005 it will exceed 20 years. Many individual weapons will, of course, be considerably older than the average.

Instead of replacing old weapons with new types tested through underground nuclear explosions, now the task is assessment, revalidation, and renewal of the stockpile without nuclear testing. This requires a continuous process of surveillance, assessment, and response as an organizing principle. Specific revalidation and life extension of individual stockpile weapon types, and refurbishment of their nuclear packages, will be a part of this process so that each system in the stockpile has periodic intensive review and renewal in addition to annual certification of all systems. The first systematic "dual revalidation" of a weapon system (i.e., involving formal peer review with both weapon design laboratories) began in 1996. A life-extension program for another warhead is also under way.

In general, stockpile aging will raise a wide range of issues that will require enhancing the scientific and technical capabilities that can be applied to the surveillance-assessment-response process. High explosive and other organic-compound components will undergo chemical degradation. Materials may corrode and radiation damage may occur in some. Cracks may occur in components. Even plutonium itself can age through alpha decay and the ingrowth of helium in the material. Evaluating the effects of these changes will be complex without nuclear testing since in general, changes or defects may be localized and not amenable to symmetry assumptions, requiring three-dimensional analysis.

Aging of weapon components is known to occur. Roughly 14,000 nuclear weapons have been disassembled and examined since 1958 as part of a rigorous surveillance program. There have been a number of "findings" from this process, which are called "actionable" when corrective action has been needed to preserve safety or reliability. Of about 400 distinct actionable findings since 1958, over 100 have involved the nuclear explosive package itself, and most of those (but not all) have involved the weapon's primary stage,1 which includes both high explosive and plutonium. Past defects have on occasion systematically affected thousands of individual weapons. From the historical data, age-related findings (e.g. due to deteriorating components) can appear at any time; but there is little data on weapons more than 25 years after their introduction to the stockpile, since there were so few of these. The data suggest that statistically, one or two of these "actionable" defects could be discovered each year in the continuing stockpile through formal surveillance and other processes.

Aging: an example. One example of weapon aging, and of the role of nuclear testing in validating predictions of performance, is the story of the now-retired W68 warhead for the Poseidon submarine-launched ballistic missile. A premature degradation of the high explosive in that weapon, which would have ultimately made the weapon inoperable, was found through routine surveillance. The weapons were disassembled and the high explosive replaced by a different and more chemically stable formulation that had, it turned out, been used successfully in earlier nuclear tests of the W68 design. Unfortunately, some of the other materials that had to be replaced in rebuilding this warhead were no longer available from the commercial infrastructure, so additional changes in the rebuilt weapon were required. The best available computational simulations, normalized to nuclear test data, were used to evaluate the repair and assure that the fix did not compromise the weapon's performance or reliability. A nuclear test was performed to validate this answer; however, the test data showed a reduced yield compared to calculational expectations, for which the cause remained unclear. While the reduced yield was ultimately deemed acceptable, this result did require the military to modify some maintenance procedures to assure reliability of the warhead over the full range of potential operating conditions.2

Safety. Past actionable findings have included those related to weapon safety assurance as well as to performance and reliability. While our current nuclear weapons are judged, on the basis of considerable data, to be adequately safe against credible accidents, the nation needs to be able to evaluate weapon safety with the highest possible confidence. U.S. weapon experts will, for example, continue to address questions about safety in complex abnormal environments (for example, with multiple insults). They also must be able to ensure that aging or remanufacture of components does not compromise the appropriate safety margins. For example, age-induced changes could alter the expected behavior of explosives or fire-resistant features in accidents.

Remanufacture? Remanufacture and replacement of weapon components is obviously an important part of maintaining stockpile safety and reliability. Why not simply rely on routine remanufacture of nuclear weapons to original specifications, keeping their average age down? For one thing, large-scale production to ensure replacement rates comparable to those of the last 50 years would be more costly in terms of production infrastructure, and would entail additional materials and environmental management issues. For another, identical remanufacture (in the sense of full replication of components and manufacturing processes to original specifications, without detailed evaluation via computation and experiments) is not generally feasible, and is harder with each passing decade. There are several reasons for this. Many of the Cold War manufacturing process lines have been disassembled, so both the facilities and the people involved in fabrication will be different by necessity. As in the W68 case, there are commercial materials, practices, and nuclear weapon manufacturing processes that either have become or will become unavailable or obsolete, sometimes because of environmental and health concerns. Stockpiling obsolescent materials would not be a solution, because they too will age, and the associated materials processing practices and knowledge cannot be "stockpiled" indefinitely either. For technologically complex systems, establishing a "complete" set of specifications adequately prescribing all relevant processes (e.g. those that could affect dynamic material behavior) is generally problematic.

Both aging and remanufacture can introduce changes which may affect the dynamic behavior of a weapon, either in the way it is designed to function, or in response to potential accidents. Since we need confidence in the outcome, a capability for component remanufacturing and replacement is essential, but would not be sufficient without adequate means for evaluation. In essence, weapon scientists will find themselves assessing the health of their "patient" and making judgments about whether, or when, to subject the patient to "surgery"--recognizing that the process of surgery entails potential risks and must itself be carefully evaluated.

Stewardship. Present-day nuclear weapons are complex objects. There are many technical issues associated with developing an adequately fundamental understanding of the consequences of aging and manufacturing processes on weapon safety and reliability. To predict when refurbishment will be needed before problems arise and avoid "remanufacturing crises", the stewardship community has begun to develop enhanced surveillance and assessment processes so that we may anticipate aging related phenomena, and predict stockpile lifetime issues perhaps ten to fifteen years into the future. This in turn will require linking existing engineering and nuclear test data, assessment of disassembled components, "forensic" surveillance techniques, computational modeling of materials phenomena and processes up to fully integrated simulations of weapons behavior, and a variety of laboratory experimentation to develop fundamental data and test theoretical models and understanding. A significant fraction of the necessary research involves fundamental science and technology; most of it is challenging.

The technical challenges of stewardship include the development of improved scientific data and models, as well as the tools necessary to explore and apply them. For example, to evaluate nuclear weapons primaries, improvements are needed in current U.S. capabilities for flash radiography of materials dynamics experiments and non-nuclear hydrodynamic tests--in which inert mockups of primaries (e.g. with tantalum or depleted uranium replacing the plutonium) are imploded. These tests help assess the quality of the implosion that in a real weapon would ultimately lead to the ignition of the deuterium-tritium boost gas, and can map the density so that criticality can be predicted. Since localized, aging-related perturbations, as well as hypothetical accident scenarios, generally would result in asymmetric hydrodynamic behavior, three dimensional imaging of very high density, high-velocity objects with sub-millimeter resolution is needed. The Dual-Axis Radiographic Hydrodynamic Test facility under construction at Los Alamos will provide the first two-view, high-resolution capability available to the U.S. when it is completed near the end of this decade. Of course, since these mockup experiments do not produce a nuclear explosion, the link to weapons safety or reliability must occur through computational simulations.

There is a related need for improved physical models and data on the dynamic behavior of materials at high strain and strain rate (i.e. constitutive properties including equation of state, spall, and dynamic deformation). To properly predict the effects of age-induced material changes or defects, such models must be incorporated, with sufficiently few assumptions or simplifications, into computational simulations for weapon performance and safety assessment. Plutonium is a unique material; experiments on its specific properties, including subcritical experiments at the Nevada Test Site, are needed to provide essential data. Such experiments are consistent with the CTBT since there is no nuclear explosion (DOE recently announced its decision to conduct subcritical experiments at the NTS). Similarly there is a need for more scientifically predictive models of high explosive initiation, burn, and detonation, including the effects of aging on these phenomena.

The detonation of high explosive and the subsequent release of nuclear energy from a weapon primary result in extreme conditions of high energy-density in which such issues as fluid instabilities and the nonlinear development into turbulence, material properties, and radiation transport must be better understood. We will need to use accurate and predictive models of fusion ignition and burn, both in the boost process in the primary and in the secondary. It will be necessary to link phenomena at vastly different scales to understand the effects of microscopic aging-related structural and chemical changes on the dynamic properties of the macroscopic engineering materials in weapons.

A variety of experimental capabilities would provide these data and help develop and validate theoretical models. But the integrating factor in science-based stockpile stewardship, tying together the scientific and engineering data (including past nuclear test data) and models, is high performance computing. Without testing, weapon performance or safety must ultimately be calculated. Because of the 3-D complexity of the task and the need to replace currently oversimplified models, up to 10,000 times today's capabilities will be needed (platforms capable of 10-100 teraFLOPS, handling many terabytes of data, and the computational techniques to exploit them). The Accelerated Strategic Computing Initiative, organized to develop this computational capability, has engaged supercomputer manufacturers to develop what will be the world's most powerful supercomputers--and produced some early hardware and software successes, like the recent achievement of a 1 teraFLOPS milestone.
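
For a sense of where requirements like "10-100 teraFLOPS" and "many terabytes" come from, here is a purely illustrative back-of-envelope estimate for a generic 3-D multiphysics calculation. Every number in it (grid size, variables per cell, cost per cell-step, number of steps) is an assumption chosen for illustration, not a figure from the ASCI program:

```python
# Illustrative (not program-specific) cost estimate for a 3-D multiphysics
# simulation; all numbers below are assumed round figures.
cells = 2000 ** 3            # 3-D grid, 2000 cells on a side
vars_per_cell = 50           # hydrodynamic, material, and radiation variables
bytes_per_value = 8          # double precision

flops_per_cell_step = 1e4    # assumed cost of all physics packages per cell per step
timesteps = 1e5              # assumed number of fine time steps

memory_tb = cells * vars_per_cell * bytes_per_value / 1e12
total_flops = cells * flops_per_cell_step * timesteps
hours_at_10_tflops = total_flops / 10e12 / 3600   # perfect efficiency assumed

print(f"memory: ~{memory_tb:.1f} TB")
print(f"work:   ~{total_flops:.1e} floating-point operations")
print(f"runtime on a 10-teraFLOPS machine: ~{hours_at_10_tflops:.0f} hours")
# Several terabytes and nearly 10^19 operations: hundreds of hours even at
# 10 TFLOPS, which is why capabilities of 10-100 TFLOPS are the target.
```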

Reducing Global Nuclear Danger. Stewardship of the nuclear stockpile without testing will be a grand technical challenge--one that the US is committed to meeting to provide unquestioned continuity of U.S. nuclear deterrence. The reason for science-based stewardship in lieu of nuclear testing is to help reduce global nuclear dangers while maintaining security in a nuclear world. Ultimately, any path toward reduced global nuclear dangers must be pursued in the context of broader processes of global security; major reductions of nuclear inventories have not stood completely separate from other factors, nor will they. For example, it can be argued that the step to START-II was only practically achievable as a result of the sociopolitical changes in the then-Soviet Union. Furthermore, it is the net dangers that need to be reduced, and this may not always equate to simply reducing nuclear inventories or postures.

Further steps along the path to reduced nuclear dangers will have to address the potential for altering the balance of security and deterrence equations as stockpiles become smaller, and acknowledge that the possibility of creating nuclear weapons cannot be eradicated from human knowledge. Many factors will be intertwined with, and may pace, the achievability of future arms reductions.3 For example, if reductions continue toward zero, developing acceptable national and international responses to latent nuclear forces in times of crisis would be critical. Reducing nuclear dangers, it seems, will not only be a technical challenge; it will also be a grand challenge of policy and diplomacy.

Philip D. Goldstone is at Los Alamos National Laboratory, Los Alamos, New Mexico 87545; pgoldstone@lanl.gov. This article summarizes and updates the paper by John D. Immele and the author, presented at the American Physical Society 1996 Annual Meeting in a session on the CTBT sponsored by the APS Forum on Physics and Society. Immele was the Program Director for Nuclear Weapon Technology at Los Alamos at that time. Acknowledgements go particularly to R. Wagner and T. Scheber for valuable discussions on issues of latency and stability.

Footnotes

  1. "Stockpile Surveillance: Past and Future", Johnson et al. 1995, available as Sandia National Laboratory Report SAND95-2751. This report contains considerable unclassified summary data on stockpile surveillance findings from 1958 to 1995.

  2. Report to Congress on Stockpile Reliability, Weapon Remanufacture, and the Role of Nuclear Testing, G.H. Miller, P.S. Brown, and C.T. Alonso, Lawrence Livermore National Laboratory, 1987, UCRL-53822.

  3. See for example "An Evolving Nuclear Posture," A. J. Goodpaster et al., Henry L. Stimson Center Report #19, 1995, and "Phased Nuclear Disarmament and U.S. Defense Policy," M.E. Brown, Henry L. Stimson Center Occasional Paper #30, October 1996.

The Role of Fusion in the Future World Energy Market

John Sheffield

Introduction
Today, nuclear energy contributes some 6% of the world annual energy use of about 9000 million tonnes of oil equivalent (Mtoe/a; 1 toe = 42 GJ). Renewables contribute 14%. Fossil fuels account for the bulk, about 80%, of the energy use. In the future, it is postulated that these roles will change as energy demand rises and cheap oil and gas are depleted, even without consideration of global warming effects. The changes will be driven mainly by the developing areas, which have relatively smaller fossil fuel reserves.

A study of historical trends suggests that population growth rate may be viewed as depending, roughly, on a mixture of two factors relating to culture and standard of living respectively [1, 2]. A good surrogate for standard of living appears to be the annual energy use per capita [1]. To illustrate how energy demands might evolve, the author has developed a simple relationship, which permits coupled annual values of population growth rate and energy use per capita to be derived for each part of the developing world - Africa, China, East Asia, South Asia, and Latin America. The growth rate is updated every decade.

Annual growth rate = (Ec - Ea) / (160 Ea^0.38)

where Ea is the annual energy use per capita, adjusted for efficiency gains obtained after the year 2000 (i.e., a 50% efficiency improvement allows a given amount of primary energy to be worth 1.5 times as much in supporting the standard of living), and Ec is the annual effective energy use per capita at which the population growth rate is zero (typically 2 to 3 toe/cap.a today). Ec may be viewed as a measure of the cultural factors.
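
As a concrete illustration of how this relation can drive a projection, here is a minimal sketch in Python that steps one developing region forward a decade at a time. The starting population and the ramp-up of primary energy per capita are illustrative placeholders of mine; only the growth-rate formula, the reference-case efficiency schedule, and the drift of Ec from 2.5 toward 2.0 (as in the South Asia example) come from the text.

```python
# Sketch of the growth-rate relation applied decade by decade to one region.
# Starting values and the energy ramp are illustrative, not the paper's inputs.

def annual_growth_rate(Ea, Ec):
    """Population growth rate (fraction per year).

    Ea: annual energy use per capita (toe/cap.a), adjusted for efficiency gains.
    Ec: effective energy use per capita at which growth is zero (toe/cap.a).
    """
    return (Ec - Ea) / (160.0 * Ea ** 0.38)

population = 1.2e9            # illustrative starting population
primary_per_capita = 0.5      # primary energy use, toe/cap.a (illustrative)
Ec = 2.5                      # drifts toward 2.0 by 2100, cf. the South Asia example

for decade in range(2000, 2100, 10):
    # Reference-case efficiency schedule: ~0.75% gain per year
    # (37.5% by 2050, 75% by 2100).
    efficiency_gain = 0.0075 * (decade - 2000)
    Ea = primary_per_capita * (1.0 + efficiency_gain)
    rate = annual_growth_rate(Ea, Ec)
    population *= (1.0 + rate) ** 10          # apply this decade's growth rate
    print(f"{decade}s: Ea={Ea:.2f} toe/cap.a, growth={rate * 100:+.2f}%/a, "
          f"population ~{population / 1e9:.2f} billion")
    primary_per_capita += 0.1                 # assumed ramp-up of energy supply
    Ec -= 0.05                                # slow decline of the cultural factor
```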

Reference Case
The reference case for projecting world population and energy use assumes that steady efficiency gains will be made across-the-board from 2000, rising to 37.5% by 2050, 75% by 2100, and 100% by 2200.

The World Bank population projections [3] and the energy per capita predicted by the IEA [4] for 2010 are used for the OECD countries of Europe, the East, and North America, and for the Former Soviet Union, Centrally Planned Europe, and the Middle East. These countries either have low population growth rates or, in the latter cases, an unclear connection of population to energy use. Their energy per capita is decreased in proportion to the efficiency improvements. The alternative case, in which these areas take efficiency improvements as an increase in standard of living, is also calculated.

For the developing areas, the primary annual energy per capita is increased systematically beyond the level predicted for 2010 by the IEA so as to stabilize each population in the time period 2100 to 2150. The starting factor Ec is chosen to match present-day trends and is allowed to decrease with time; see figure 1 for the example of South Asia, for which Ec = 2.5 initially and decreases to 2.0 by 2100.

Historical values are used for 1971-1992; predictions of the IEA [4] and the World Bank [3] for 1992-2010; and the reference case of this paper for 2010 to 2100. Interestingly, this approach led to population evolutions for the developing areas similar to those of the World Bank's standard case [3].

Reference case of this paper (designed to stabilize population by 2150):

Area            Ec(2010)-Ec(2100)   Stable Population   Peak Energy Demand
Africa               2.6 - 2.2       3,140 million       3,270 Mtoe/a
China                2.5 - 2.0       1,700 million       2,250 Mtoe/a
East Asia            3.0 - 2.4       1,140 million       1,530 Mtoe/a
South Asia           2.5 - 2.0       2,850 million       3,000 Mtoe/a
Latin America        3.5 - 2.8         940 million       1,410 Mtoe/a
World                               11,810 million      15,900 Mtoe/a

Value of Efficiency Gains

For the reference case, the world peak annual energy use is 15,900 Mtoe/a.

  1. If the efficiency improved more slowly, e.g., to 25% by 2050, 50% by 2100, and 95% by 2200, the peak energy would be 18,100 Mtoe/a. The increase in energy demand from 2010-2100 would be 138,000 Mtoe, comparable to the proven plus projected recoverable natural gas reserves [5].
  2. If the developed world, the Former Soviet Union, Centrally Planned Europe, and the Middle East chose to take efficiency gains as an increase in their standard of living, rather than as a reduction in energy use, the total energy use would be 20,800 Mtoe/a and the energy increase from 2010-2100 would be an additional 110,000 Mtoe.

Total proven plus projected recoverable oil reserves are 212,000 Mtoe [5]. Thus, uncertainties in this simple assessment of needs are comparable with the projected, readily accessible oil and gas reserves.

It appears that the availability of cheap oil and gas may be a one-time chance for the developing areas to increase their energy use, improve their standards of living, and stabilize their populations, prior to the development and deployment of the long-term (renewable) energy sources.

Renewable Energy Deployment
A substantial potential is believed to exist in the world for renewables: an aggressive approach to biomass is described in reference [6], 4,900 Mtoe/a; hydropower [5] could provide 780 Mtoe/a (electric); and a substantial potential for windpower, 4,540 Mtoe/a (electric), is described in reference [7]. Roughly half of these energy resources are in developing countries. The rest of the energy will have to be supplied by fossil, solar, geothermal, and nuclear (fission and fusion) sources. While China and Latin America have large fossil energy resources compared to their expected demands, Africa, East Asia, and South Asia do not. An example distribution of world energy sources for the reference case is shown in figure 2. For each area preference was given to indigenous energy sources; on average, 55% of the potential biomass, 95% of the hydropower, and 20% of the wind power was deployed, with the highest percentages of the potential in the areas which had the greatest need.

The distribution of the balance - solar and nuclear (fission + fusion) energy use, about 5,000 Mtoe/a - will depend upon relative costs, the technical capabilities of each area, and public acceptance. Where large, centrally generated power is required, the nuclear options offer distinct advantages. The alternative to their use is almost certainly a greater use of fossil fuels (coal), and a more rapid depletion of fossil resources.

Fusion Energy Deployment
Substantial progress has been made in recent years in both magnetic and inertial fusion: achievement of 10 MW of fusion power in the Tokamak Fusion Test Reactor, and the successful Halite-Centurion test in the inertial area; demonstration that energetic fusion products behave as expected; calibration of plasma modelling codes; and demonstration of some of the key technologies. These successes support the design studies of the International Thermonuclear Experimental Reactor (ITER), the National Ignition Facility (NIF), and a similar inertial facility in France. Assuming successful operation of these facilities, and success in the development of radiation-resistant materials and a heavy-ion-beam driver for inertial fusion, a Demonstration Fusion Power Plant could be operating by 2030. A commercial power plant might then be operating by 2050. The most likely initiators of the fusion era are countries which have deployed substantial nuclear power, plan more, will need even more as cheap fossil fuel becomes scarce, and have the technical capability; e.g., Japan and Europe.

Build-up rates may be constrained by the energy payback time - about 1.5 years for some reference fusion plants - and by the tritium build-up rates needed to support new plants. Consideration of these factors suggests that a doubling period of 5 years or less should be possible. Following the demonstration of commercial fusion energy, a systematic deployment of fusion plants is anticipated. It may be expected that fusion and fission plants will be built and operated by international consortia, allowing deployment in countries which do not have all the in-house skills needed.
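
As a rough energy-balance check on the five-year doubling claim (setting the tritium constraint aside): for a fleet growing exponentially, the fraction of fleet output that must be reinvested in building new plants is approximately the growth rate times the energy payback time. A sketch using the 1.5-year payback figure quoted above:

```python
import math

# Rough energy-balance check of a 5-year fleet doubling time, ignoring the
# tritium supply constraint.  Each new plant is assumed to embody an energy
# investment equal to (payback time) x (plant output rate).
payback_time = 1.5               # years, energy payback time quoted in the text
doubling_time = 5.0              # years, proposed fleet doubling period

growth_rate = math.log(2) / doubling_time        # per year, exponential growth

# Plants added per year = growth_rate * N, so the energy invested per year is
# growth_rate * N * payback_time * P, i.e. this fraction of the fleet's output:
reinvested_fraction = growth_rate * payback_time

print(f"fraction of fleet output reinvested in new plants: ~{reinvested_fraction:.0%}")
# ~21%: the growing fleet still delivers most of its output as net energy,
# so a doubling period of five years looks energetically feasible.
```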

For the reference case, the following examples are considered for 2100 and 2200. The energy quoted is the replacement value for fossil fuel at 40% thermal-electric conversion efficiency. Note that 500 Mtoe/a (fossil replacement value) corresponds to 350 fusion reactors of 1,000 MWe capacity, operating at a 75% capacity factor.

Example    2100 (Mtoe/a)    2200 (Mtoe/a)
1               50                500
2              100              1,000
3              150              1,500

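As a quick check of the equivalence quoted above (500 Mtoe/a of fossil replacement value from 350 plants of 1,000 MWe at a 75% capacity factor and 40% thermal-electric conversion):

```python
# Check of the quoted equivalence: 350 fusion plants of 1,000 MWe each at a
# 75% capacity factor, displacing fossil fuel burned at 40% conversion efficiency.
TOE_JOULES = 42e9                # 1 tonne of oil equivalent = 42 GJ (as in the text)

plants = 350
capacity_w = 1000e6              # 1,000 MWe per plant
capacity_factor = 0.75
seconds_per_year = 8766 * 3600

electricity_j = plants * capacity_w * capacity_factor * seconds_per_year
fossil_replacement_j = electricity_j / 0.40      # primary fossil energy displaced

print(f"fossil replacement value: ~{fossil_replacement_j / TOE_JOULES / 1e6:.0f} Mtoe/a")
# -> roughly 490-500 Mtoe/a, consistent with the figure quoted in the text
```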

The consequences of not having the energy available, within this model, would be massive population increases in the deprived areas and, ultimately, the same demand for energy but from more poverty-stricken populations! It is essential, therefore, that energy efficiency improvements and all energy sources, including fusion, be developed and deployed rapidly to ensure that the converse occurs - population stabilization with a decent standard of living for all!

John Sheffield is at Oak Ridge National Laboratory, Sheffieldj@ornl.gov. The submitted manuscript has been authored by a contractor of the U.S. Government under contract No.DE-AC05-96OR22464. Accordingly, the U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes.

References

  1. J. Goldemberg and T. B. Johansson, "Energy as an Instrument for Socio-Economic Development", United Nations Development Programme, New York, NY, p. 9, 1995.
  2. F. Duchin, "Global Scenarios about Lifestyle and Technology", The Sustainable Future of the Global System, United Nations University, Tokyo, October 1995.
  3. E. Bos, My T. Vu, E. Massiah, and R. A. Bulatao, "World Population Projections: 1994-95 Edition", published for the World Bank by The Johns Hopkins University Press, Baltimore and London, 1994.
  4. International Energy Agency, "World Energy Outlook, 1995 Edition", OECD Publications, 2 rue Andre Pascal, Paris, France, 1995.
  5. World Energy Council, "1995 Survey of Energy Resources", Holywell Press Ltd, Oxford, England, 1995.
  6. T. B. Johansson, H. Kelly, A. K. N. Reddy, and R. H. Williams, in T. B. Johansson et al. (eds), Renewable Energy: Sources for Fuels and Electricity (Washington: Island Press), 1993.
  7. B. Sorensen, Annual Review of Energy and the Environment, Vol. 20, Annual Reviews Inc, Palo Alto, CA, USA, p. 387, 1995.

Looking Forward: The Status of Renewable Technologies

Allan R. Hoffman

Introduction
Renewable electric technologies have been under development since the mid-1970s. Considerable progress has been made in improving technical performance and in reducing capital and energy costs. As the technologies have matured, their use has increased, both in the U.S. and in other countries. This paper outlines the factors that are encouraging the growth of renewables, describes the current policy environment, and discusses the status of renewable electric technologies.

Converging trends
Improving technological performance and falling costs have enabled renewables, under certain conditions, to be the low-cost option for generating power. In addition to standing on their own merits, renewables are being pushed toward widespread use by several external driving forces: increasing environmental awareness, the availability of new technology options, world energy demand growth, increasing business interest, and energy security.

Increasing environmental awareness--It is clear that if the rest of the world powers up the way we did, the environmental impacts could be very serious. If we do not want countries like China or India to rely on their coal, we have to be willing to offer them affordable alternatives. We have the opportunity to sell them vehicles that are less polluting, or renewable technologies that reduce their dependence on coal. DOE is working to develop advanced energy systems for cars that do not require petroleum and to develop various forms of renewable energy that can replace coal.

Availability of new technology options--Many new technology options are becoming available, both for energy supply and for the more efficient use of energy. The most notable new technologies on the horizon are advanced fission, fusion, efficient gas turbines, renewable energy, storage technologies, and hydrogen. Efficient gas turbines are a reality and compete with renewables today; natural gas could also be the transition fuel to a renewable/hydrogen economy, and gas and renewables are natural partners in many ways. New storage technologies are being developed by DOE in partnership with U.S. industry.

World energy demand growth--Increased deployment of renewables is also being driven by the recognition that renewables are competing for a total target market worth trillions of dollars. The World Bank has estimated that, over the next 30-40 years, developing countries alone will require 5 million megawatts of new generating capacity, compared with a total world capacity of about 3 million megawatts today. At a capital cost of $1,000-2,000 per kilowatt, this corresponds to a $5-10 trillion market, exclusive of associated infrastructure costs.
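
The market figure follows directly from the capacity and cost numbers quoted above (illustrative arithmetic only):

    # 5 million MW of new capacity at an assumed $1,000-2,000 per kW
    new_capacity_kw = 5e6 * 1e3            # 5 million MW expressed in kW
    for cost_per_kw in (1000, 2000):       # capital-cost range, $/kW
        print(f"${new_capacity_kw * cost_per_kw / 1e12:.0f} trillion")
    # prints $5 trillion and $10 trillion, exclusive of infrastructure costs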

Increasing business interest--There is increasing understanding among corporations that renewable energy can mean big business and high profits in the longer term. An example of this interest is provided by Jeffrey Eckel of EnergyWorks, a joint venture between PacifiCorp and Bechtel:
"The market for human-scale energy systems rather than gigantic projects is enormous... You have got 2 billion people in the world today who do not have a light bulb, many of these people cannot be reached by traditional power lines."

--Jeffrey Eckel, CEO, EnergyWorks, October 17, 1995

Energy security--Currently nearly 50% of our petroleum needs are met by imports, primarily from Saudi Arabia, Venezuela, Canada, Mexico, and Nigeria, resulting in about $50 billion in revenue going overseas. That figure may increase to $100 billion over the next ten years as we import more from the rest of the world. Energy security requires us to develop alternatives so that we can deal effectively with the next oil price shock and with increasing competition for petroleum resources.

Current policy environment
The current policy environment strongly supports deployment of renewables and energy efficiency technologies. The following statements underline the Clinton Administration's support and encouragement of these programs:
"The administration will launch initiatives to develop new, clean renewable energy sources that cost less and preserve the environment." -- President Bill Clinton, from A Vision of Change for America (1993)
"The Administration's energy policy promotes the development and deployment of renewable resources and technologies...

The Administration supports fundamental and applied research that helps the renewable industry develop technologically advanced products...
The Administration is working throughout the Federal Government to identify and overcome market barriers and accelerate market acceptance of cost-effective renewable energy technologies." -- National Energy Policy Plan, July 1995.

One example of the Administration's commitment to sustainable development is DOE's program to showcase energy efficiency and renewable technologies at the 1996 Summer Olympic Games in Atlanta. Demonstrations of photovoltaic technologies, solar thermal water heating, solar thermal dish/Stirling generators, geothermal heat pumps, fuel cells, energy efficiency technologies, and alternative fuel vehicles will be presented at the Olympics. The Olympic showcase will also include a display of the "Cool Communities" concept, in which strategically planted trees and light-colored building materials are used to reduce air temperatures in urban areas.

A vision of the future
Given the prospect that fossil fuel supplies will peak and then begin to diminish before the middle of the next century, and the need to move to sustainable economic systems, there should be a gradual transition to a global energy system largely dependent on renewable energy. Previous energy transitions, e.g., from wood to coal and from coal to oil, have taken 50 to 100 years, and this transition should be no different. Over this time period, hydrogen may well emerge as an important energy carrier to complement electricity, given its ability to be used in all end-use sectors and its benign environmental characteristics.

In this vision, all renewables will be widely used: biomass for fuels and power generation; geothermal in selected locations for power generation and direct heating; and wind, hydro, photovoltaics (PV), and solar thermal for power generation. Large amounts of renewable power generated in dedicated regions (e.g., wind in the Midwest and solar in the Southwest) will be transmitted thousands of miles over high-voltage direct-current power lines to load centers. Electricity and the services it provides will be available to almost everyone on the planet.

Technology challenges and accomplishments
This vision of the future can be realized only if substantial investments in renewable development are made today. A brief outline of technology challenges and accomplishments in PV, wind, solar thermal, geothermal, and biomass is presented here. Cost trends for these technologies, shown in Figure 1, indicate that costs have been falling steadily for a number of years.

Photovoltaics - Recent accomplishments:

  • Achieving world-record efficiencies of 17.7% for copper-indium-gallium-diselenide (CIGS) thin-film polycrystalline cells and 10.9% for amorphous silicon cells

  • System lifetimes have doubled since 1991

  • PV system costs have been reduced by 30% since 1990

  • Companies involved in DOE's PV Manufacturing Technology (PVMaT) pro

Henry H. Barschall, 1911-1997

The Forum has lost a good friend. Henry H. Barschall died at his home in Madison, Wisconsin, on February 4. A member of the National Academy of Sciences and recipient of the first Bonner Prize in nuclear physics, Barschall had a distinguished career in nuclear physics dating back to the Manhattan Project. He was 81 and a valued member of the Forum on Physics and Society.

Heinz was a busy person who certainly did not need more to do, but his dedication to the issues and to open discussion of them enticed him to serve the Forum as secretary-treasurer from 1988 to 1993. His example as a distinguished senior physicist and his untiring attention to organizational detail were crucial in establishing the Forum's present mainstream role in the American Physical Society. Dedicated to seeking the truth, he refused to back down from well-demonstrated positions even in the face of lawsuits. We'll miss his hard work, his good advice, and his great personal kindliness.

The Forum and the Division of Nuclear Physics will sponsor a memorial session at the April APS/AAPT meeting, probably on Sunday, April 20, at 11 am; the speakers will be D. Allan Bromley, Sam Austin, Robert Adair, Jay C. Davis, Ruth Howes, and Robert Sachs.

Ruth Howes

Ball State University

rhhowes@bsuvc.bsu.edu

Szilard Award Goes to Tom Neff

At the April meeting in Washington, DC, the APS will give the Leo Szilard Award for Physics in the Public Interest to Thomas Neff, a senior member of MIT's Center for International Studies. Neff is being cited "for proposing and working to keep on track the historic agreement for the US to purchase uranium from the former Soviet Union weapons stockpile and to transform it from highly enriched uranium to low-enriched uranium for civilian purposes, thereby significantly reducing the numbers of nuclear weapons." According to Frank von Hippel (Princeton University), this agreement has also helped to solidify Ukraine's decision to become a non-nuclear-weapon state by giving that new nation an economic interest in disarmament.

After receiving his PhD in theoretical physics from Stanford University in 1973, Neff held postdoctoral positions at Berkeley, Stanford, and MIT. At Stanford, he worked as an assistant to Wolfgang Panofsky, who was APS president at the time, on matters of science and public policy, helping with the first studies of these topics and assisting in the formation of the Panel on Public Affairs. From 1977 to 1985, Neff was manager and director of MIT's International Energy Studies program. He has been an advisor to numerous US government agencies as well as to governments and companies around the world.

Over the years, Tom has worked on issues of energy policy and nuclear nonproliferation. He was a senior staff member of the Ford Foundation's study, "Nuclear Power Issues and Choices." He has written books and articles on oil and nuclear fuel markets and on solar energy.

In a New York Times op-ed on 24 October 1991, Neff proposed the purchase of Soviet weapons-grade uranium by the US government. The government picked up his idea and agreed in January 1994 to purchase 500,000 kilograms of Soviet weapons-grade uranium after it had been blended down to low-enriched uranium for use in power-reactor fuel. Since then Neff has worked creatively and successfully to help devise strategies to overcome the obstacles to the agreement as they have arisen. The obstacles have ranged from trivial to profound, involving commercial and policy problems on both the Russian and US sides of the deal. In Neff's words, "I've had to become part of the interagency process in Russia as well as the US, broker a deal between the US Administration and Congress (the Domenici legislation) and keep the commercial players true to the terms of the deal. The larger issue is that policy processes and institutions still reflect the cold war, rather than being redesigned to undo it. I hope to talk about this in Washington."

Barbara G. Levi

Senior Editor, Physics Today

bgl@worldnet.att.net

Forum Award Recognizes Martin Gardner

The Forum Award for Promoting Public Understanding of the Relationships of Physics and Society will go to Martin Gardner, now retired from his 25-year tenure as the mathematical games columnist for Scientific American. Gardner is being honored "for his popular columns and books on recreational mathematics which introduced generations of readers to the pleasures and uses of logical thinking; and for his columns and books which exposed pseudoscientific bunk and explained the scientific process to the general public."
Gardner earned a BA from the University of Chicago in 1936 and spent his career as a journalist and writer. His jobs included a stint as a contributing editor of Humpty Dumpty (1952-62) as well as his term at Scientific American. Now retired, Gardner remains active as a member of the executive council of the Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) and as a columnist for its magazine, Skeptical Inquirer.

Gardner has published dozens of books on everything from cryptography to pseudoscience and an exegesis of Alice in Wonderland. One nominator praised the approach Gardner took in all these books: "It is characterized by exemplary scholarly thoroughness, excellent taste in the choice of topics, refusal to take anything for granted, and a keen nose for nonsense."

Two of his books, Fads and Fallacies in the Name of Science and Science: Good, Bad and Bogus, were among the first to directly attack fraudulent science and pseudoscience, and they have been used in many science courses around the country. A review of the former noted that "Gardner has written a highly critical and at times hilariously entertaining account of cults and fad sciences in various fields." Gardner delved into the philosophy of science and its relation to theology in The Whys of a Philosophical Scrivener and in The Flight of Peter Fromm. Among his contributions to science education are Great Essays in Science and Entertaining Science Experiments with Everyday Objects. Accepting the award and speaking for Mr. Gardner will be James Randi, a famous "magician" and previous award winner.

Barbara G. Levi

Looking for Work?

It is with regret that we announce the resignation of Lee Sorrell from his position as articles editor for this newsletter, owing to other professional commitments. Lee will remain as articles editor through the publication of the July 1997 issue of Physics and Society. We, the remaining editorial staff at P&S, thank Lee for his contributions during the past year and extend our very best wishes to him. We'll miss him!

Which also means: we need a new articles editor! Might you be interested? The work is interesting, and you won't pay a penny of taxes on the money you make editing for P&S (...it's a volunteer position....). If you might be interested in helping produce this periodical, contact Laurie Fathe, Chair of the Editorial Board, whose address is Department of Physics, Occidental College, Los Angeles, CA 90041, phone 213-259-2812, and e-mail address fathe@oxy.edu. Or, if you know somebody who likes to write and who might be interested, please pass on Laurie's name.

Forum Sponsored Sessions at Washington APS/AAPT Meeting

  • Federal Funding of Science Education, 8 am, 4/18

  • What Do Scientists Owe Society?, 8 am, and Awards Session, 2:30 pm, 4/19

  • Memorial Session for Heinz H. Barschall, 11 am, 4/20

  • The Low-Level Radiation Risk Controversy, 8 am, and Teaching in Other Countries, 11 am, 4/21

Political Prospects for Fusion Energy

The AIP's FYI #165 contains quotations from Energy Secretary O'Leary's May 1996 speech to Congress. Some of the numbers cited in her talk are sobering: by 2010, worldwide energy consumption is expected to increase to over 3.5 billion tons of oil equivalent. By 2020, India and China alone will have an energy demand that is twice today's entire world demand, assuming that per capita consumption in those nations rises to that of present-day South Korea and that population grows as presently expected.

O'Leary went on to say that a good way to try to meet such energy demands is via fusion energy research. In particular, she said, "Fusion is our longest term option that shows significant promise....Fusion research is exactly the kind of program government should support. The payback period is long term. Industry can't and won't do it alone because of the payback period and because of high front end costs." She went on to argue that progress in fusion energy research has far outstripped progress in increasing the capacity of semiconductor chips. She also argued that international collaboration will be essential, since even the U.S. government does not have sufficient funds to support the required research effort unilaterally.

Quality Control in Science

In line with many U.S. corporations, the Federal Government is in the process of instituting formal quality controls as part of departmental and agency management, in the form of written progress assessment reports. The American Institute of Physics' FYI #140 describes a report from the National Science and Technology Council (NSTC) Committee on Fundamental Science, entitled "Assessing Fundamental Science," concerning the assessment of the government's role in fundamental research and its relationship to the Government Performance and Results Act (GPRA) of 1993.

Government agencies that are involved in fundamental science are learning what technology corporations have long known: Quantitative measurement of the effectiveness of basic research programs is very difficult. Government agencies that perform science are now involved in pilot studies to develop metrics for this task. In the meantime, the Clinton administration has identified the intermediate goal of U.S. leadership across the frontiers of scientific knowledge as a yardstick against which government research is to be measured. It is not clear how such leadership is to be measured.

According to "Assessing Fundamental Science", "...merit review based on peer evaluation will continue to be the primary vehicle..." for assessing the quality of scientific work. Although the report warns that methods for measuring impacts on creativity, innovation, and risk-taking are as yet not well developed, it concludes that, "The passage of GPRA offers scientists and science managers the best planning and management methods to build world-class science programs."

Clinton Administration Space Policy

FYI #139 from the American Institute of Physics summarizes President Clinton's new National Space Policy, a fact sheet that can be seen at www.whitehouse.gov/WH/EOP/OSTP/NSTC/html/fs/fs-5.cfm

The National Space Policy sets objectives for civilian, military, and commercial uses of space. As far as NASA is concerned, the Policy unveils no dramatic changes from current activities. In particular, it reiterates the U.S. commitment to the International Space Station. It also urges NASA to work with the private sector on the development of a next-generation reusable launch vehicle to reduce the cost of access to space. Perhaps in line with budget realities, the Policy advocates a robotic presence on Mars by the year 2000, but not manned exploration of Mars or any other destination in the immediate future.

In terms of long-term goals, the Policy directs NASA to explore other bodies in our solar system and planets in other solar systems.

With respect to commercial space activities, NASA is directed to promote partnerships with the private sector and to facilitate private access to NASA expertise. In this connection, the Policy recommends whatever modifications of laws and regulations are needed to remove current impediments to private space activities.

No Health Effects of Residential EM Fields

An expert panel convened by the National Academy of Sciences has made a definitive statement regarding health effects of residential electromagnetic fields: there aren't any. The panel concluded, at the end of a three-year study funded by DOE, that there is no evidence linking exposure to such fields with human illnesses, including cancer and neurobehavioral, reproductive, and developmental abnormalities. For information regarding a similar study by the APS, see the May 5, 1995 issue of What's New.

History of Physics Web Site

There is a new WWW site (http://www.aip.org/history/) for the History of Physics, Astronomy, and Geophysics. The site gives information about the programs and services of the Center for History of Physics (e.g., grants) and about AIP's Niels Bohr Library and access to it, and offers the Emilio Segre Visual Archives, links to related Web sites, an Einstein exhibit, and more. A search engine is being developed to support on-line access to abstracts of all the Niels Bohr Library's archival holdings, its catalog of books, and the entire International Catalog of Sources for History of Physics and Allied Sciences.

Physics and Government Net

The Physics and Government Network (PGNet) was established by the APS three years ago to provide physicists' input into science policy formulation and appropriations in Washington, D.C. Physicists who join PGNet are alerted, via e-mail, to pending science crises and issues in Washington that potentially affect the national well-being. They are also given specific suggestions for actions that they can take.

In a review of PGNet's 1996 activities, D. Allan Bromley (APS President) and Robert Schrieffer (APS Past President) described five ACTION ALERTS issued during 1996, including the strategies and outcomes for each ALERT.

ALERT #1 concerned the proposed sale of the nation's strategic helium reserves. The final outcome was bipartisan acceptance of an amendment requiring the NAS to report on the implications of the sale; the amendment was eventually signed into law.

Other alerts concerned the lack of full-year appropriations for NSF during FY1996 (outcome: full funding was restored to NSF in March), proposed major cuts to the DOE FY1997 research budget (outcome: a slightly more than 1% increase in spending for the Office of Energy Research relative to FY1996), and requests for additional NSF funds to come from $350 million that had been added to the VA-HUD appropriations bill (outcome: zilch extra for NSF).

Physicists who are (1) interested in public affairs and (2) willing to work with the support of, and in conjunction with, the APS can join PGNet by contacting opa@aps.org or by phoning Delia Victoria at 202-662-8700.

International Comparison of Mathematics Education

The Third International Mathematics and Science Study (TIMSS), considered by many to be the most important study of education during this decade, is described at http://www.ams.org/notices/199605/comm-timss.html. The study included curriculum analysis, student surveys, performance assessment, teacher questionnaires, teacher reports on content goals, and other components. In this article, I describe some of the contents of the Executive Summary of the study, which can be seen in its entirety at http://www.ed.gov/NCES/timss/97198-2.html.
The summary starts out by stating that TIMSS involved, during the 1995 school year, the testing of a half-million students from 41 nations at five different grade levels.

With respect to President Bush's exhortation that U.S. students be number one in math and science by the year 2000: U.S. eighth graders scored below the international average of the 41 participating countries in mathematics. In science, they scored above the average, performing comparably to Canadian, German, and English students at grade eight. In geometry, measurement, and proportionality, our eighth graders performed below average. Five percent of U.S. students scored in the top 10% of all students (in the 41 TIMSS countries) in math; for science, the corresponding American fraction is 13%.

With respect to curriculum, the summary states, "U.S. policy makers are concerned about whether expectations for our students are high enough, and in particular whether they are as challenging as those of our foreign economic partners....[The U.S. is] atypical among TIMSS countries in its lack of a nationally defined curriculum." Much of what is taught in eighth-grade math in the U.S. is taught in the seventh grade in other TIMSS countries, and the content of U.S. classes is not as focused as in Germany and Japan. Yet U.S. eighth graders spend more hours per year in math and science classes than German and Japanese students.

Other topics covered in the summary include teaching ("U.S. mathematics teachers' typical goal is to teach students how to do something, while Japanese teachers' goal is to help them understand mathematical concepts."), teachers' lives ("Unlike new U.S. teachers, new Japanese and German teachers undergo long-term structured apprenticeships in their profession."), students' lives ("Eighth-grade students of different abilities are typically divided into different classrooms in the U.S., and into different schools in Germany. In Japan, no ability grouping is practiced at this grade level."), and conclusions (e.g., "Evidence suggests that U.S. teachers do not receive as much... daily support as their German and Japanese colleagues.").

For people interested in math education reform, the web site http://ourworld.compuserve.com:80/homepages/mathman/#where is the homepage of Mathematically Correct, a group of parents and scientists concerned with math reform and standards. The site is rich with links to related sites and is a great place to enter the fray of math education reform.

Possible Health Effects of Exposure to Residential Electric and Magnetic Fields

National Research Council, National Academy Press, Washington, DC, 1996,
ISBN 0-309-05447-8, $39.95, 314 pages.

Those of us living near power lines can breathe a little easier knowing that the National Research Council of the National Academy of Sciences has determined that we are safe from power-line electromagnetic fields (EMF) in the following sense: as scientists we know that we cannot prove the negative - that cancer from power lines is impossible - but we can determine the absence of proof of a positive link between cancer and EMF. The Academy panel determined the latter, the lack of a positive link. The panel's main conclusion follows: "Based on a comprehensive evaluation of published studies relating to the effects of power-frequency electric and magnetic fields on cells, tissues, and organisms (including humans), the conclusion of the committee is that the current body of evidence does not show that exposure to these fields presents a human-health hazard. Specifically, no conclusive and consistent evidence shows that exposures to residential electric and magnetic fields produce cancer, adverse neurobehavioral effects, or reproductive and developmental effects."

Physicists have long been skeptical of the 1979 paper by Nancy Wertheimer and Ed Leeper that began the controversy over the potential danger of power lines. Wertheimer and Leeper [1] reported a weak association (a correlation) between childhood leukemia and the wire-code classification of power lines near residences; the wire-code classifications are based on the current capacity of the power lines. However, a correlation does not prove causality: other factors can correlate with the wire-code designations and "confound" the data, resulting in false conclusions. This is particularly true for power lines, since the Academy and others conclude that "Magnetic fields from external wiring, however, often constitute only a fraction of the field inside the home." When the magnetic fields in homes are actually measured, a large problem with the wire-code-based data is discovered: the magnetic fields inside the house do not correlate with the wire-code predictions, since fields from sources inside the home predominate. The Academy panel report includes the following conclusion: "Magnetic fields measured in the home after diagnosis of disease in a residence have not been found to be associated with an excess incidence of childhood leukemia or other cancers." That would end the story except for the existence of a "weak but statistically significant" association between childhood leukemia and the power-line wire codes. The Academy is uncertain as to the cause of this weak association, but it concludes that a link with magnetic fields is not proven: "More important, no association between the incidence of childhood leukemia and magnetic-field exposure has been found in epidemiologic studies that estimated exposure by measuring present-day average magnetic fields. Studies have not identified the factors that explain the association between wire codes and childhood leukemia."

It is very difficult for epidemiologists to determine true associations of very small (or zero) effects on relatively rare causes of death, since confounding effects confuse the data. Over one hundred EMF epidemiology studies have been carried out. Many conclude that EMF epidemiology has reached its practical limitations, particularly when one considers the difficulty of doing good EMF dosimetry on great numbers of people. However, the panel opens the door for more work in this area by stating, "...the epidemiologic evidence is not entirely persuasive for the modest numbers of homes with the high-wire-code categories." In 1965 Hill established criteria by which one can assign causality from epidemiologic data. It is clear that the childhood leukemia data dramatically fail to satisfy Hill's criteria for assigning causality to EMF.

After many years of biomedical EMF experimentation, the Academy panel found no convincing cancer links with biological systems. The Academy report concludes as follows: "... typical residential exposures (0.1 to 10 mGauss) do not produce significant in vitro effects that have been replicated in independent studies.... The overall conclusion, based on evaluation of these studies, is that exposure to electric and magnetic fields at 50-60 Hz induce changes in cultured cells only at field strengths that exceed typical residential field strengths by factors of 1,000 to 100,000." With regard to total living systems, the Academy report concludes that "There is no convincing evidence that exposure to 60-Hz electric and magnetic fields causes cancer in animals." The Academy report further concludes as follows: "There is no evidence of any adverse effects on reproduction or development in animals, particularly mammals, from exposure to power frequency 50- or 60-Hz electric and magnetic fields.... The general conclusion from these studies is that power-frequency electric and magnetic fields are not directly a genotoxic agent: If they were, a wider range of positive responses would have been observed."

The rejection of the claims for an EMF-cancer effect is not surprising, since the Academy panel concludes that "typically externally induced currents are 1,000 times less than the naturally occurring currents." These results are to be expected, since basic physics calculations show that the electric fields induced in the body by the time-varying magnetic fields from power lines are much smaller than the electric fields arising from natural thermal motion in the body. The basic physics results alone are not a sufficient proof, but they are certainly a strong guidepost that we are safe from EM fields near power lines.
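
To give a sense of the scale of that basic-physics argument, here is an order-of-magnitude sketch in the spirit of Adair's estimates (the parameter values are illustrative assumptions, not figures from the Academy report):

    # Compare the 60-Hz electric field induced in the body by a residential
    # magnetic field with the thermal (Johnson) noise field across a cell.
    import math

    k_B, T = 1.38e-23, 310.0        # Boltzmann constant (J/K), body temperature (K)

    # Faraday's law for a loop of radius r in a uniform sinusoidal field: E = pi*f*B*r
    f, B, r = 60.0, 1e-6, 0.1       # 60 Hz, 10 mG (1 microtesla), 10-cm loop in the body
    E_induced = math.pi * f * B * r

    # Johnson-noise field over a cell-sized volume: <E^2> = 4*k*T*rho*df / L^3
    rho, df, L = 2.0, 100.0, 20e-6  # tissue resistivity (ohm*m), bandwidth (Hz), cell size (m)
    E_thermal = math.sqrt(4 * k_B * T * rho * df / L**3)

    print(f"induced ~ {E_induced:.1e} V/m, thermal ~ {E_thermal:.1e} V/m")
    print(f"thermal/induced ~ {E_thermal / E_induced:.0f}")   # roughly a factor of 1000

With these round numbers the induced field is about 2e-5 V/m against a thermal-noise field of about 2e-2 V/m, consistent with the factor-of-1,000 statement quoted above.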

At the Academy's press conference the chair of the panel, Charles Stevens, was asked how the Academy's report compared to the results of the Oak Ridge [2] and American Physical Society POPA [3] reports. He responded that the Academy agreed with the results of these two studies, but that the Academy did a better job of proving its results. Of course, the multidisciplinary and well-financed Academy study did a good job, but it did not do some of the tasks that the APS-POPA report did. For example, the Academy did not give conclusions on the occupational data, which, in my view, clearly fail to show a viable cancer link. In addition, the Academy did not consider the breadth and depth of the physics papers of Robert Adair, referencing but one of his works. The 1995 APS conclusion on cancer mechanisms continues to be valid: "No plausible biophysical mechanisms for the systematic initiation or promotion of cancer by these power line fields have been identified." Unfortunately, the Academy panel failed to emphasize the basic physics because it wanted primarily to emphasize the experimental biomedical and epidemiological data. Lastly, the Academy panel did not consider economic effects. I agree with this omission in that it makes no sense to consider a cost/benefit analysis if there is "no conclusive and consistent evidence" of cancer, but it would have been useful to point out the extravagant waste of billions of dollars without a known life being saved. Along the same lines, the Academy study avoided picking a "safe" level for living in magnetic fields because the panel "wouldn't know how to pick the level." Similarly, Stevens deflected the "prudent avoidance" trap of requiring mitigation without proof of a cancer cause by stating, "It is a personal decision; we wouldn't know what to suggest people avoid." Nevertheless, our society continues to add an extra 4% to power-line construction costs to satisfy the prudent-avoidance theory, in spite of the "innocent" verdict of the Academy and others.

It is amazing that the Academy was able to produce a unanimous report, since one-half of the panel members are professional ELF/EMF researchers. This coalition frayed somewhat after the press ran absolutist headlines such as "Panel Finds EMFs Pose No Threat" and "Panel of Scientists Finds No Proof of Health Hazards From Power Lines." Recognizing the present lack of viable cancer data, three panelists - the president and two former presidents of the Bioelectromagnetics Society - later cautioned against the attitude that "a lack of confirmed proof at this point in the study of EMF effects means that the question can be ignored." And, indeed, the Academy panel states that "Continued research is important, however, because the possibility that some characteristic of the electric or magnetic field is biologically active at environmental strengths cannot be totally discounted." The congressionally mandated ELF research program will be completed in about two years. At that time one would expect a follow-up analysis of all the ELF data and a debate on funding levels for ELF research.

At the press conference one could almost feel the ghost of Paul Brodeur, author of CURRENTS OF DEATH and THE GREAT POWER LINE COVER-UP: HOW THE UTILITIES AND THE GOVERNMENT ARE TRYING TO HIDE THE CANCER HAZARD POSED BY ELECTROMAGNETIC FIELDS. Stevens was asked about authors who make "a career scaring people." Stevens adroitly ducked the question by stating that "we didn't go into the sociology of this."

  1. N. Wertheimer and E. Leeper, Am. J. Epidemiology 109, 273-284 (1979).
  2. Health Effects of Low Frequency Electric and Magnetic Fields, Presidential Committee on Interagency Radiation Research and Policy Coordination, Oak Ridge Associated Universities, Oak Ridge, TN, 1992.
  3. D. Hafemeister, Am. J. Phys. 64, 974-981 (1996).

David Hafemeister

California Polytechnic State University

San Luis Obispo, CA 93407
