Archived Newsletters

On the Road in 2020, A Customer's View

Vince Fazio

This paper discusses some of the challenges to getting high-mileage and environmentally clean vehicles on the road, and progress being made by Ford Motor Company in this area. It is based on a presentation given in October 2000 (but updated in Nov 2001) by Vince Fazio, Ford's Director (now retired) for the Partnership for a New Generation of Vehicles, at the MIT Energy Laboratory in response to the paper "On the Road in 2020, a life-cycle analysis of new automobile technologies."

Partnership for a New Generation of Vehicles
The Partnership for a New Generation of Vehicles (PNGV) was established on September 29, 1993. It draws on the resources of seven federal agencies, the national laboratories, universities, suppliers, and the United States Council for Automotive Research (USCAR), a cooperative, pre-competitive research effort among DaimlerChrysler Corporation, Ford Motor Company, and General Motors Corporation.

The PNGV goals are to collaboratively conduct long-range, high-risk research on technologies that will result in breakthrough improvements in fuel efficiency for passenger vehicles. To ensure customer acceptance, and hence high-volume market penetration, safety, emissions, and consumer expectations must be met. The research focuses on technologies that are pre-competitive and combines the manufacturing capabilities and technical expertise of the auto industry with the advanced research talent of the national laboratories.

Collaborative research on new high-risk technologies benefits society, the industry, and the individual automakers by lowering their costs, sharing the risks, and shortening the time it takes to bring new technologies to market. The partners have been working together despite their competitive nature and have already begun to reap the benefits of this collaboration.

PNGV is already a success, as many technologies have been put to use in production vehicles. For example, there are over 410 pounds of aluminum in the Lincoln LS, yielding great weight savings over steel. PNGV advances in powertrain technology will be entering the market soon, with a hybrid-electric version of the Ford Escape compact SUV scheduled for production in the 2003 calendar year. A fuel-cell-powered version of the Focus, the world's best-selling car, will debut in 2004.

In January 2000 Ford met its target of demonstrating a proof-of-concept prototype vehicle with the debut of the Prodigy hybrid electric sedan. Prodigy demonstrates that the technology exists today to significantly improve fuel economy and still meet emissions requirements. Many of the technologies are currently in production and others are planned. However, great advances are still needed before the affordability targets can be met for the other design approaches.

As these projects have progressed, the PNGV is becoming a process for technology development rather than simply a set of one-time goals.

Environmental Effectiveness
A good indicator of the industry's improvement is total fuel consumed. It is a function of both the fuel economy of the vehicle and the total distance traveled. Additionally, fuel consumption shows the effect of vehicles on the United States' goal of energy independence. Most of the petroleum used in producing automotive fuel for the United States is imported. Decreasing automotive fuel consumption therefore reduces the nation's dependence on foreign oil. But significantly lowering overall fuel consumption is more complicated than simply improving the fuel efficiency of every car on the road.

Harmonic Average and Diminishing Returns
One of the keys to understanding the environmental effectiveness of applying advanced technology is the principle of harmonic averages. Put simply, it is more beneficial to make modest improvements in the worst-performing vehicles than to make the same improvements only in the vehicles that already do well. For example, consider a fleet consisting of two vehicles. One travels ten miles per gallon of gasoline, the other twenty. If both vehicles are driven 10,000 miles, they will use a combined 1500 gallons of fuel. This gives an average mileage of 13.3 miles per gallon (Figure 1), not the 15 that might be expected at first glance. Two cars with a mileage of 15 mpg would, in fact, require only 1333 gallons to cover the same distance (Figure 2).

Now let's take the original example and improve the fuel economy of just one vehicle at a time.
As Figure 3 shows, which car we choose to improve has a profound impact on how effective that improvement is. Improving the first car by 10 mpg gives a lower total fuel consumption (1000 gal) than the same 10 mpg improvement applied to the second one (1333 gal). From this we can see that the better a vehicle already is, the less the payback for improving it further.
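The arithmetic behind the harmonic average is easy to check. The short Python sketch below is illustrative only; it uses the numbers from the example above to reproduce the fuel totals and fleet-average mileage for each case.

```python
# Harmonic-average fuel economy for a two-car fleet, each car driven the same distance.
# Numbers follow the example in the text: a 10 mpg car and a 20 mpg car, 10,000 miles each.

def fleet_fuel_and_average(mpgs, miles_per_vehicle=10_000):
    """Return (total gallons used, fleet-average mpg) for vehicles driven equal distances."""
    gallons = sum(miles_per_vehicle / mpg for mpg in mpgs)
    average_mpg = len(mpgs) * miles_per_vehicle / gallons  # the harmonic mean of the mpg values
    return gallons, average_mpg

print(fleet_fuel_and_average([10, 20]))  # 1500 gal, 13.3 mpg (Figure 1)
print(fleet_fuel_and_average([15, 15]))  # 1333 gal, 15.0 mpg (Figure 2)
print(fleet_fuel_and_average([20, 20]))  # improve the 10 mpg car: 1000 gal
print(fleet_fuel_and_average([10, 30]))  # improve the 20 mpg car instead: 1333 gal (Figure 3)
```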

Sales
For advanced technology vehicles to help improve the environment, they have to be driven in place of traditional vehicles, and in high volumes. A few thousand expensive vehicles that get 80 mpg have an insignificant effect on the nation's car fleet. The key to getting high sales volume isn't simply manufacturing capacity. Customers must want to buy these vehicles because they have value, and manufacturers must be able to make a profit so that they want to produce them.

Customer Requirements
One key to customers accepting these vehicles is that the changes must be transparent, i.e., the new vehicles maintain performance, utility, safety, and cost of ownership comparable to the baseline vehicle. Although most consumers say that they want to improve the environment, they are unwilling to make many personal sacrifices for a public benefit. This is a global issue: even in Germany, where gas is far more expensive than in the U.S., a recent study revealed that 98% of drivers agreed people should drive less, but 80% said they didn't intend to drive any less themselves. The success of the automobile industry in the U.S. depends on the fact that American consumers want personal mobility. They want their car to take them where they want to go, whenever they want, quickly and inexpensively. The vast majority will not accept being inconvenienced by a battery that needs charging overnight, an exotic fuel that they must drive farther to get, or a car that only goes 100 miles before refueling. They also will not pay substantially extra just to help the environment. The purchase decision must be economically attractive: any increase in purchase price must pay back in fuel savings and other benefits.

Cost
Another key to customers accepting advanced technology vehicles is the cost. This is also a major concern for the manufacturers, who must be able not only to sell the vehicles at a price the consumers will pay, but also to develop the technology and manufacture the vehicles affordably.

Table 1: Annual gallons and dollars saved at fuel economy multiples
(gallons saved / $US saved per year, relative to the base vehicle)

Vehicle Class   Today's F.E. (mpg)   1.5x          2.0x          3.0x
Compact car     40                   125 / $188    188 / $282    250 / $375
Midsize car     27.5                 182 / $273    273 / $410    364 / $546
SUV             20.5                 244 / $366    366 / $549    488 / $732

(based on 15,000 miles/year, $1.50 per gallon gasoline)

Table 1 is a comparison of the customer's savings in fuel and fuel cost associated with incremental improvements in the mileage of various classes of vehicles. As you can see, the mileage of a midsize car has to nearly double to equal the savings of a 1.5x increase in an SUV, and that of a compact car has to triple. Additionally, the savings from the first increase, from base to 1.5x, in any size of car are twice the additional savings of going from 1.5x to 2x. With the current state of technology, the cost premium for doubling mileage would be in the $3000-$8000 range, well above the savings realized through fuel efficiency. Figure 4 clearly shows why these savings aren't enough.

With the current trend of customers leasing vehicles, only the first few years should be considered in calculating savings. Over the two years in which the typical customer wants to break even, the savings realized by doubling the vehicle's fuel efficiency are only $820. It would take nearly eight years for the savings to offset the premium, and that assumes the minimum cost premium.
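A rough payback calculation, sketched below in Python, makes the point. It uses the midsize-car savings from Table 1 and the $3,000-$8,000 premium range quoted above; the two-year break-even horizon is the one stated in the text.

```python
# Payback time for doubling the fuel economy of a midsize car.
# Figures from the article: $410/year saved at 2.0x (Table 1), a $3,000-$8,000 cost premium,
# and a typical customer who wants to break even within about two years.

annual_savings = 410.0  # $/year, midsize car at 2.0x fuel economy
print("savings over a 2-year lease:", 2 * annual_savings)  # $820

for premium in (3000.0, 8000.0):
    years = premium / annual_savings
    print(f"premium ${premium:,.0f}: payback in {years:.1f} years")
# $3,000 premium -> about 7.3 years (the "nearly eight years" above); $8,000 -> about 19.5 years
```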

Progress
In terms of overall environmental impact, a lot of progress has already been made with conventional approaches, and today's vehicles are inherently cleaner than those of a few years ago. A model year 2000 Ford Taurus driven the equivalent of two Boston Marathons™ gives off fewer emissions than a similar 1970 model sitting all day in the sun with its engine off. As another example, it would take 50 of Ford's European sub-compact Ka models to produce the same emissions as one 1976 Ford Fiesta. Figure 5 illustrates the changes over time in allowable hydrocarbon emissions. Again, this shows that the incremental benefits decrease with incremental improvements in the standards. Additionally, the hardware required to achieve these improved emissions levels actually decreases fuel economy.

Fuel
The challenge of making high mileage, low emission vehicles is not simply one of components, but of systems. Fuel is an integral part of the system, whether the vehicle uses it in an engine or a fuel cell. Only by recognizing and understanding this relationship can we make real advances.

While efficient combustion is obviously the first step to clean-burning engines, clean fuel is also important. Impurities in the fuel will either leave the tailpipe as emissions or stay in the vehicle and degrade operation of the whole system. Sulfur, for example, binds to sites in the catalyst that would otherwise assist in NOx, hydrocarbon, and CO reduction. The EPA's Tier 2 emissions standards place the major focus on NOx emissions in driving all of the pollutants to lower and lower levels. Tier 2 requires the average vehicle to achieve emissions standards of 0.07 g/mi NOx and 0.01 g/mi particulate matter (PM) starting in 2004, far less than today's 0.4 g/mi NOx and 0.04 g/mi PM. Achieving Tier 2 levels will require 90 percent catalytic efficiency, an 81 percent improvement, which is virtually impossible with the sulfur content in today's fuel. Figure 6 shows that to achieve the 81 percent improvement required, gasoline sulfur content must be reduced beyond the 30 ppm average mandated by the EPA beginning in 2005.

Like improving the operation of vehicles, getting clean fuel isn't just a matter of the government mandating that it be done. The automotive and fuel industries must work together to find and implement the best solutions toward their common goals of improving the environmental impact of their products while maintaining affordability.

Ford's Voluntary Leadership
Ford Motor Company has been voluntarily working for many years to improve the environmental impact of its vehicles. As of the 2000 model year, all Ford, Lincoln, and Mercury light trucks, SUVs, and minivans are certified as Low Emission Vehicles (LEV). Ford has also committed to a 25 percent improvement in the fuel economy of its SUV fleet by 2005. This is over and above the levels currently required by CAFE (Corporate Average Fuel Economy) standards, which mandate average minimum miles per gallon for the fleet of automobiles and trucks sold in America by each manufacturer, because the company recognizes its environmental responsibility for this improvement and does not need government intervention in order to improve. Chairman Bill Ford Jr. explained: "We have three obligations: to provide superior returns to our shareholders; to give customers exactly what they're looking for; and to do it in a way that has the least impact — or the most benefit — for the environment and for society in general. Keeping our eye on all three of those equations is the recipe for future success."

Given the market position of Ford's vehicles, its fleet of LEVs is more environmentally effective than such high-mileage vehicles as the Toyota Prius, even if the Prius were selling at full plant capacity. In addition to LEV gasoline vehicles, Ford is also the top seller of alternative fuel vehicles (AFVs), offering more alternatives in AFVs than any other automaker (Table 2). The electric Ranger pickup truck is the top-selling electric vehicle in the world.

Table 2 U.S. Alternative Fuel Vehicles by Manufacturer for 2002 Model Year

Manufacturer      Fuel              Models
Ford Motor Co.    CNG               Crown Victoria; bi-fuel F-150 or dedicated Econoline & F-150
                  Propane           Bi-fuel F-150
                  Ethanol           Flex-fuel Explorer, Ranger & Taurus
                  Electric          Ranger & THINK City
DaimlerChrysler   CNG, Propane      Ram Van, Wagon
                  Ethanol           Flex-fuel Chrysler Town & Country, Voyager (& Grand), Caravan (& Grand)
General Motors    CNG               Chevrolet/GM Medium Duty Truck
                  Ethanol           Flex-fuel Chevrolet S-10, GMC Sonoma, Tahoe/Yukon, Suburban
Honda             CNG               Civic
                  Hybrid Electric   Insight
Toyota            CNG               Camry
                  Hybrid Electric   Prius
                  Electric          RAV-4 EV

While most AFVs are sold to government and commercial fleets, Ford's customers can benefit directly from its commitment to environmental vehicles. The newest models of Ford's Taurus, Ranger, and Explorer offer a flex-fuel engine at no extra cost to the customer. These vehicles are capable of running on E85 ethanol fuel, gasoline, or any mixture of the two. As availability of E85 increases at gas stations across the country, more and more drivers will be able to choose the environmentally friendly fuel.

Conclusions
Great progress has been made in the last 30 years, and it will continue into the future. The challenge now is for automakers and policy makers to align affordable solutions with what customers want, through a robust approach that ensures high-volume adoption. The challenge lies with scientists and engineers throughout the US and around the globe to develop the technology to provide market-driven solutions.

Misha Hill, writing for Vince Fazio
Partnership for a New Generation of Vehicles
Ford Motor Company
World Headquarters, Dearborn, Michigan

Losing Weight to Save Lives: A Review of the Role of Automobile Weight and Size in Traffic Fatalities

Marc Ross and Tom Wenzel

This is a short version of a 3/13/01 report to the National Research Council's Committee on Effectiveness and Impact of Corporate Average Fuel Economy (CAFE) Standards.

Critics of higher fuel economy standards argue that improving fuel economy will require reducing vehicle weight, and that the weight reduction would result in more deaths. In this report we study the safety implications of improving fuel economy in ways more sophisticated than across-the-board mass/size reduction.

Fatality statistics, 1999. The Fatality Analysis Reporting System (FARS) produced annually by the National Highway Traffic Safety Administration (NHTSA) includes a record on every fatal highway crash, with about 340 variables for each.

Table 1 shows the distribution of deaths by type of crash. If one excludes pedestrian and bicyclist deaths, essentially half of all fatalities are the result of collisions between two vehicles, while the other half are the result of one-vehicle crashes. The latter includes crashes due to the driver’s loss of control rather than to a collision with an object.

Table 1. Fatalities by Type of Crash, 1999 FARS, as Defined by "First Harmful Event"

Multi-vehicle (a)   One-vehicle w/ object (b)   Non-collision (c)   Pedestrian/bicycle   Total
17,541              12,916                      4,809               5,831                41,097

  (a) Of these, there were 6,193 deaths in car-to-light truck crashes and 1,289 in motorcycle-to-vehicle crashes.
  (b) Stationary objects like trees, guard rails, and utility poles, but not temporarily stopped vehicles.
  (c) 80% of these are primary rollovers.

The advantage for occupants of heavy vehicles is evident in Table 2. For instance, buses and heavier trucks are 9% of vehicles in crashes, but their occupants account for only 2% of fatalities.

Table 2. Vehicle Types Involved in Fatal Crashes, 1999 (pedestrian & bicycle deaths excluded)

Cars Light Trucks Other trucks, buses motorcycles
Vehicles, total involved 49% 35% 9% 4.8%
Fatalities to vehicle occupants 58% 31% 2% 7.4%

The relatively high rate of light-truck rollover fatalities can be seen in Table 3. Kahane’s (1997) regression analysis also found that weight reduction in cars would increase the rollover risk, because, for 1980s model years, light weight was correlated with fatal rollovers. But this tendency largely disappeared in the 1990s as manufacturers improved car designs. The correlation of rollovers with weight is not inherent, but depends on features such as height of center of mass, track width, and stiffness of suspension. With rollover standards, changes will also be made to reduce rollovers for light trucks.

Table 3. Number of Vehicles in Fatal One-Vehicle Crashes, by "First Harmful Event", 1999

Cars Light trucks Other
One-vehicle collision with objects 7,024 4,025 1,426
Non-collision 1,433 2,390 731

Seat belts are the most important safety technology. Their importance is evident from comparing belt use by drivers killed in crashes with that observed in the careful National Occupant Protection Use Survey, Table 4. Seat belt use among drivers killed is roughly half as great as among all drivers! Part of the reason is that careless drivers, who are more likely to be involved in a crash, are less likely to use seat belts.

Table 4. Seat Belt Use by Drivers in Percent

Of Those Killed in Fatal Crashes (FARS 1999) In a Survey of All Drivers (2000)
Passenger Cars 41 75
Vans and SUVs 30 75
Pickup Trucks 21 61

Historical trends. The total number of fatal crashes and fatalities declined 18% from 1979 to 1999, even though the number of vehicles in use increased 55% and estimated vehicle miles traveled increased 75%. There was a dramatic decline of fatalities in car-to-car collisions, accompanied by a sharp rise in the fatalities involving light trucks. While car-to-car crashes became much safer, the number of light trucks and dangerous car-to-light truck crashes increased.

Analysis of fatalities in one- or two-year-old cars which crash with cars of any age reveals patterns for late-1990s models. Dividing by the number of new cars, one obtains the deaths in car-to-car crashes per million new cars. We find the extraordinary result that deaths in new cars from car-to-car head-on collisions were only a third as great in 1998 as one decade earlier, and only 20% as great as in 1980. The improvement shown in Table 5 is partly due to standardized testing programs which have emphasized crash types in the order shown. These developments suggest a point of departure: Driving cars in an environment of cars has become much safer, and there are great opportunities for further progress. Before building on this concept, consider some background.

Table 5 Decline in Driver Fatalities per Car, for Cars One and Two Years Old, 1988 to 1998

Kind of Crash Denominator Decrease 1987-1997
car-to-car head-on new cars 65%
car-to-car side impact new cars 45%
car collision w/stationary objects new cars 30%

Exposure. Qualitatively, the risk of a fatality is the product of the risk of a crash and the risk of fatality given that crash:

R(fatality) = R(crash) × R(fatality | crash)    (1)

Often analysts choose the total number of vehicles as the denominator or the measure of "exposure" to a fatal crash. The exposure is then the number of registered vehicles. Consider, however, young male drivers. Teenage male drivers are so much more likely to crash that they have a fatality risk about 4 times that of 35-to-50 year old males, and 7 times that of 35-to-50 year old females, even though they are less fragile (Kahane 1997, p 6). Consider driving on rural roads, excluding expressways. In a survey of 1999 fatalities at the county level, we find a striking pattern: several times more deaths per resident in counties with low density (Scientific American, Aug. 1987, for 1980). The uncertainties inherent in exposure bedevil interpretation of crash data.

Standardized crash tests. The crashworthiness of new vehicle designs is regulated through Federal Motor Vehicle Safety Standards, and publicized through a rating program, the New Car Assessment Program (NCAP). Both are managed by NHTSA and based on standardized crash tests. The long-running test is for frontal collisions. New vehicles are equipped with instrumented dummies and crashed head-on into a fixed barrier, at 30 mph for the regulation, and 35 mph for NCAP.

A striking but not so surprising consequence of the measurements reported under NCAP is the great reduction which has occurred over time in frontal-collision ratings for many vehicle models. The capability of engineers to improve products so they meet challenging requirements in standardized tests is well established. These improvements are partly responsible for the declining fatality rates in FARS.

Elementary physics. In moderately severe head-on crashes, one may be able to neglect the occupant's contact with hard objects, and analyze the deceleration of the vehicle. The collision of two vehicles tends to leave them attached. In the simplest case, conservation of momentum shows that the lighter vehicle, L, experiences a larger change in velocity than the heavier vehicle, H:

Δv_L / Δv_H = m_H / m_L    (2)

The average deceleration to which the vehicle is subject, <a>, is approximated by

<a> = v_c Δv / [2 (s_L + s_H)]    (3)

where s_L and s_H are the crush distances of vehicles L and H, and v_c is the closing speed. Eq. (2) shows that a light vehicle is less safe in collision with a heavy vehicle. Eq. (3) shows that a vehicle with a small crush space is less safe. However, the relative roles of mass and space are not well understood.

Peak deceleration correlates with severity of injury. One way to mitigate peak deceleration is to lengthen the crash duration by increasing crush space. (A serious collision lasts roughly one-tenth of a second.) Another is vehicle design that spreads the deceleration more evenly over the crash. However, not all fatalities can be eliminated, and occupants usually cannot be protected adequately at high speeds.
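To make Eqs. (2) and (3) concrete, here is a small numerical illustration. The masses correspond roughly to the 1.6 mass ratio discussed below; the closing speed and crush distances are assumed for illustration only.

```python
# Illustration of Eqs. (2) and (3) for a head-on, perfectly inelastic two-car collision.
# The masses, closing speed, and crush distances below are assumed purely for illustration.

m_H, m_L = 1450.0, 900.0   # kg: a heavier and a lighter car (mass ratio ~1.6)
v_c = 22.0                 # m/s closing speed (~50 mph), assumed
s_H, s_L = 0.6, 0.5        # m of crush distance for each car, assumed

# Eq. (2): momentum conservation splits the closing speed in inverse proportion to mass.
dv_L = v_c * m_H / (m_H + m_L)   # velocity change of the lighter car
dv_H = v_c * m_L / (m_H + m_L)   # velocity change of the heavier car
print(dv_L / dv_H, m_H / m_L)    # both ratios ~1.61

# Eq. (3): average deceleration <a> = v_c * dv / [2 (s_L + s_H)], expressed here in g's.
g = 9.8
print(v_c * dv_L / (2 * (s_L + s_H)) / g)  # ~14 g for the lighter car
print(v_c * dv_H / (2 * (s_L + s_H)) / g)  # ~9 g for the heavier car
```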

Car-to-car frontal collisions. Consider the ratio of the fatality risks of drivers in two-vehicle collisions (Joksch 1998, section 5). This ratio doesn't involve the likelihood of collision. To separate the effects of gross vehicle structure from vehicle mass/size, we consider car-to-car head-on collisions. As suggested by Eq. (2), the risk of death depends empirically on the ratio of the car masses. Joksch finds, for early-1990s crashes:

R_L / R_H ≈ (m_H / m_L)^n, with n ≈ 3    (4)

This means (for an average case) that if car H is 1.6 times as heavy as car L, then about 4 times as many drivers are killed in car L as in car H. (For current four-door sedans, the Cadillac Seville and Honda Civic, the mass ratio is 1.58.) Improving safety technology has affected these mass-risk relations. For example, average results for model years 1995-99 imply a fatality risk ratio of 2.3, rather than 4, for a mass ratio of 1.6. Of course, this is still a large ratio. Although presented in terms of mass, these are dependences on mass and correlated dimensions.
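A quick numerical check of Eq. (4) against the figures just quoted (a sketch, not part of the original analysis):

```python
import math

# Eq. (4): fatality risk ratio ~ (mass ratio)^n.
mass_ratio = 1.6
print(mass_ratio ** 3)  # ~4.1: about 4 times the risk in the lighter car, as quoted for n ~ 3

# The 1995-99 model-year figure of 2.3 for the same mass ratio implies a smaller exponent:
print(math.log(2.3) / math.log(1.6))  # n ~ 1.8
```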

The risk to passengers in both vehicles. While it is more risky to be in a light/small vehicle than in a heavy/large vehicle if the two collide, let us ask: What is the risk to society considering the occupants of both vehicles? And how does that risk change as masses/sizes are changed?

In the most convincing analysis, Joksch shows that the dependence of the ratio R_L/R_H just discussed establishes rather generally that in collisions between two cars of the same mass there are many fewer fatalities than in collisions between cars of different mass but the same average mass. That is:

R_L + R_H declines as m_H / m_L → 1, with m_H + m_L unchanged    (5)

This analysis is straightforward and convincing because it is based on fatality ratios.

Cars vs light trucks. Fatalities in car-to-light truck head-on collisions are about 5 times higher in the car than in the light truck (Joksch 1998, Gabler & Hollowell 1998). In collisions where an SUV strikes the left side of a car, there are 30 driver fatalities in the struck car for each fatality in the striking SUV! Today’s light trucks are incompatible with cars.

Table 6 enables comparison of the fatality risk for both drivers in car-to-car and car-to-light truck collisions (Joksch 2000, pp 9-10). This approach shows, for example, that the ratio of fatalities in SUV-to-car collisions to those in car-to-car collisions is 2.32/1.28 = 1.8. Based on the 1013 driver fatalities in SUV-to-car collisions (Table 7), the ratio 1.8 leads to an estimate that about 450 excess drivers' lives were lost in 1999 due to the use of SUVs as substitutes for cars; a short worked version of this estimate follows Table 7. On this basis, an annual excess of about 2200 deaths may be associated with the use of light trucks as car-substitutes.

Table 6 Drivers Killed per 1000 Drivers Involved: Car-to-Light Duty Vehicle Collisions, 1991-1997. The "denominator" is 1000 times the number of police-reported crashes.

                     Other vehicle type
Death in:            car      SUV      van      pickup
car                  0.64     1.98     1.57     2.11
other vehicle        0.64     0.34     0.26     0.49
both vehicles        1.28     2.32     1.83     2.60

Table 7 Driver Fatalities in Car-to-Car and Car-to-Light Truck Collisions, 1999 FARS

                     Other vehicle type
Death in:            car      SUV      van      pickup
car                  2,850    624      526      1,407
other vehicle        --       389      302      956
both vehicles        2,850    1,013    828      2,363
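The excess-fatality estimate described before Table 6 can be retraced from the two tables; a minimal sketch for the SUV case:

```python
# Excess driver deaths in SUV-to-car collisions, attributed to SUVs substituting for cars.
# Rates taken from Table 6 ("both vehicles" row) and the 1999 count from Table 7.

rate_suv_car = 2.32    # driver deaths per 1000 crashes, SUV-to-car collisions (both drivers)
rate_car_car = 1.28    # driver deaths per 1000 crashes, car-to-car collisions (both drivers)
deaths_suv_car = 1013  # driver fatalities in SUV-to-car collisions, 1999 (Table 7)

ratio = rate_suv_car / rate_car_car        # ~1.8
expected_if_cars = deaths_suv_car / ratio  # deaths expected had those SUVs been cars
excess = deaths_suv_car - expected_if_cars
print(round(ratio, 2), round(excess))      # 1.81, ~454 -- the "about 450" quoted in the text
```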

Ford recently announced design changes to improve the compatibility of SUVs and pickups with cars, starting in MY2004. This is a promising, if modest, beginning. Presumably others are taking similar steps.

A Safety-Fuel Economy Scenario. In order to address both fuel economy and safety, we propose making heavier vehicles lighter but not smaller, and making lighter vehicles larger but not lighter. These changes would be enabled by mass-reduction technologies. Two or three major kinds of changes could be made to reduce mass, independent of size:

  1. The basic structural design of those light-truck car-substitutes which are now "body on frame" could be unibody, like today’s cars, or a skin-on-frame design called space frame. These structures would also improve compatibility between light trucks and cars.
  2. Lightweight materials would be emphasized, such as high and ultrahigh strength steels, aluminum and engineering plastics.
  3. High-efficiency propulsion systems would be lighter. The technologies could include: a) smaller displacement engines with high ratio of power-to-displacement, b) automatic transmissions without torque converter (with motor-shifted standard transmissions using a sophisticated clutch and management to assure smooth acceleration, or with continuously variable transmission), and c) on-shaft starter-generators, with the imminent 42-volt electrical system, enabling idle-off and other modest hybrid-drive capabilities without a significant battery mass penalty. The shift busyness and slight shift delays that characterize driving with a powerful but small engine could interfere with marketing. But minimizing this problem should be viewed as an engineering challenge. The concept is to achieve smooth control of acceleration through intelligence rather than friction.

To illustrate, we assume that the mass range (curb weights) of most light duty vehicles would be reduced to between 2400 and 3300 lbs (the present distribution is 2400 to 5000 lbs). Mass reductions of up to 33% are projected. About half of light-duty vehicles would be reduced in mass, with an average 1000 lb reduction. To achieve such major mass reductions while maintaining the size and performance characteristics that attract customers would be a major engineering challenge. In addition, safety technologies would continue to be developed and applied.

We estimate this scenario would save about 2200 lives a year in two-vehicle crashes. This estimate is conservative, based on the excess of deaths in light truck-to-car crashes, for only those light trucks used as car substitutes, and using 1999 fatality rates. There would also be many lives saved in car-to-car crashes, but some extra lives lost in heavy truck-to-light duty vehicle crashes. We also estimate an increase of less than 400 deaths per year in one-vehicle crashes with stationary objects.

We project improvement in combined city-highway fuel economy of 55% (DeCicco et al. 2001). Fuel economy improvement in gas guzzlers is especially important for fuel savings and CO2 reduction.

There would be increases in manufacturing cost-per-unit in this scenario if the lightweight materials aluminum and engineering plastics are substantially involved. However, ultralight steel techniques are not costly, and can reduce the mass of steel bodies and associated parts 15% to 20%. While there would be substantial development and re-tooling costs, distinct from the per-unit manufacturing costs, in the larger picture of automotive model changes and safety they appear reasonable (DeCicco et al. 2001).

Marc Ross
Physics Dept, University of Michigan, Ann Arbor, MI 48104-1120
734 764-4459

Tom Wenzel
Lawrence Berkeley National Laboratory, Berkeley, CA 94720
510 486-5753

References

  1. DeCicco, John, Feng An & Marc Ross, Technical Options for Improving the Fuel Economy of US Cars and Light Trucks by 2010-2015, American Council for an Energy-Efficient Economy, Washington DC, 2001.
  2. Gabler, HC, & WT Hollowell, The Aggressivity of Light Trucks and Vans in Traffic Crashes, Society of Automotive Engineers, 980908, 1998.
  3. Joksch, Hans, Vehicle Design versus Aggressivity, NHTSA DOT HS 809 194, April 2000.
  4. Joksch, Hans, Vehicle Aggressivity: Fleet Characterization Using Traffic Collision Data, NHTSA, DOT HS 808 679, 1998.
  5. Kahane, Charles J, Relationships Between Vehicle Size and Fatality Risk in Model Year 1985-93 Passenger Cars and Light Trucks, NHTSA, DOT HS 808 570, 263 pages, 1997.

"A Renaissance for Nuclear Energy?"

Andrew C. Kadak

Current Situation
The California electricity "crisis" gave national prominence, once again, to the issue of energy supply. The "crisis" apparently came and went and will likely soon be forgotten. What it did accomplish, however, was a livelier discussion of the importance of supply, recognizing the ever-increasing demand as we electrify. In this discussion of demand came the realization that approximately 20% of the nation's electricity was being generated by nuclear energy, which was not subject to the escalating cost of natural gas that drove electricity prices far above what the public would tolerate. The net consequence of a number of factors, such as a faulty deregulation scheme that drove one of the major electric companies in California into bankruptcy, was rolling blackouts due to a lack of generation at any price. Given that, in California, the construction of new plants of any type was frowned upon and the regulatory climate was somewhat hostile, no company was interested in making generation investments. So a "crisis" was created that has, for the time being, abated at a cost to the economy of at least $5 billion.

In the rest of the world, serious people are debating the existence and implications of increasing greenhouse gases in our environment due to the burning of these same fossil fuels. While the environmental ministers of nations from around the world seek ways to meet the 1992 Kyoto accords, which call for reductions in CO2 and other greenhouse gases to 10% below 1990 levels, the reality almost 10 years later is that CO2 emissions have not decreased at all but have increased by 10%. As most know, one of the key advantages of nuclear energy is that it is essentially a greenhouse-gas-emission-free technology. Yet, at the most recent meeting of the Conference of the Parties, in Bonn in July, these same environmental ministers voted to specifically exclude nuclear energy from helping address the global warming problem. Clearly, there is something wrong here since, in the United States, nuclear energy provided over 69% of the emission-free generation, far exceeding the 30% from hydroelectric power; solar and other renewables provide the rest (~1%).

For many years, nuclear energy, while arguably a non-CO2-emitting energy source, has been judged unacceptable for reasons of safety, an unstable regulatory climate, the lack of a waste disposal solution and, more recently, economics. In recent years, however, the nuclear industry has made a remarkable turnaround. While a number of older plants have been shut down for largely economic reasons, the performance of the 104 operating nuclear plants has increased to the point that, as an overall fleet, their capacity factor was 89% in 2000. This means that these plants were operating at full power for about 89% of the year. This improvement over the last 10 years is essentially the same as building 23 new 1,000 MWe plants in that time period, based on historical performance averages. In addition, all safety statistics, as measured by the Nuclear Regulatory Commission, have shown dramatic improvements as well. The Three Mile Island accident occurred over 22 years ago, and the image of nuclear energy as an unsafe technology still persists. Yet the record is quite the opposite.

There has not been a new order for a nuclear plant since the mid-1970s. The reason for the lack of new orders was the high capital cost. Operating in a difficult regulatory environment, utility executives simply avoided new nuclear construction and went to the cheapest and fastest way to bring generation on line, which was natural gas. Combined cycle gas plants were the generation source of choice for many years for those companies that needed to build plants. Hence the beginning of the "crisis."

Today, utility executives still do not have new nuclear plant construction in their future plans even though the regulatory regime has stabilized. Nuclear plants are performing extremely well, safety issues have been addressed with no new issues emerging, and slow progress is being made toward finally disposing of spent fuel at Yucca Mountain. What has happened is a consolidation of the utility and nuclear industry, with some larger utilities purchasing existing nuclear plants from companies that do not want to be in the business. To address the inevitable problem of replacing existing generation, utilities have chosen to extend the licenses of existing plants from the current 40 years to 60 years. Several nuclear plants have applied for and received Nuclear Regulatory Commission approval to do so. These extensions will allow utilities to continue to use these plants as long as they are economic and continue to be safely operated. Unfortunately, we still don't see anybody ready to build a new nuclear plant. The reason is that there is no new nuclear plant on the market that can compete with natural gas or mine-mouth coal plants.

However, there are developments in two parts of the world that are aimed at changing that situation. The objective of these related efforts is to design, license, and build a nuclear power plant that can compete with natural gas. The two projects started independently but reached the same conclusion: that small, modular, high temperature gas reactors, which are naturally safe, can be built in two to three years and can compete in the electricity market. While the basic technology is over 20 years old, the application and concepts are quite new. The leader in this effort is ESKOM, the 5th largest utility in the world, located in South Africa. The other effort is being led by the Massachusetts Institute of Technology with support from the Idaho National Engineering and Environmental Laboratory.

The nuclear energy plant that both groups are developing is a modular 110 MWe high temperature pebble bed reactor that uses helium gas as the coolant and working fluid together with gas turbine technology. The fundamental concept of the reactor is that it takes advantage of the high temperature properties of helium, which permit thermal efficiencies upwards of 50%. It utilizes an online refueling system that can yield capacity factors in the range of 95%. Its modular design, in which the balance of plant can fit on a flatbed truck and be shipped from the factory, allows for a 2-to-3-year construction period, with expansion capability to meet merchant plant or large utility demand projections.

Economic projections for the plant in South Africa indicate capital costs of between $800 and $1,000 per kilowatt. Staffing levels for an 1,100 MWe, 10-module plant are about 85, and fuel costs are about 0.5 cents/kWh. When all is combined, the total busbar cost of power ranges from 1.6 to 2 cents/kWh. Very preliminary estimates in the US for the MIT project show higher costs, but, at 3.3 cents per kilowatt-hour, the numbers are well within the range of prices competitive with new combined cycle plants.

What is a Pebble Bed Reactor?
Pebble bed reactors were developed in Germany over 20 years ago. At the Juelich Research Center, the AVR pebble bed research reactor, rated at 40 MWth and 15 MWe, operated for 22 years, demonstrating that this technology works. The reactor produced heat by passing helium gas through a reactor core consisting of uranium-fueled pebbles. A steam generator was used to generate electricity through a conventional steam electric plant. Germany also built a 300 MWe version of the pebble bed reactor, but it suffered some early mechanical and political problems that eventually led to its shutdown. In December of 2000, the Institute of Nuclear Energy Technology of Tsinghua University in Beijing, China, achieved first criticality of its 10 MWth pebble bed research reactor. In the Netherlands, the Petten Research Institute is developing pebble bed reactors for industrial applications in the range of 15 MWth. The attraction of this technology is its safety, simplicity of operation, modularity, and economics.

Advances in basic reactor and helium gas turbine technology have produced a new version of the pebble bed reactor concept. Instead of wasting heat by using steam to produce electricity, new designs are moving to direct or indirect cycle helium gas turbines to produce electricity. By avoiding the use of high temperature water, all the difficulties associated with maintaining high temperature water systems are eliminated. The optimum size for a pebble bed was concluded to be about 250 MWth, to allow for rapid and modular construction as well as to maintain its inherent safety features. These designs do not require expensive and complicated emergency core cooling systems since the core cannot melt. These advances have led ESKOM and the MIT team to independently conclude that the modular pebble bed reactor can meet the safety and economic requirements for a new generation of plants. Each group is working cooperatively to develop and demonstrate the technology for commercial application. The timeline for demonstration in South Africa is to have the first reference plant in startup testing in 2005 and commercial operation in 2006. Presently, a detailed design feasibility study is underway that will lead to a decision in November of 2001 as to whether the South African project will continue to build the demonstration plant. Licensing submittals are being prepared for submission to the South African nuclear regulator. In short, within the next 5 years, should this project be successful, there will be a credible nuclear alternative to fossil fuels that is projected to be competitive with natural gas even at the old, uninflated prices.

A pebble bed reactor is graphically illustrated in Figure 1. The reactor core contains approximately 360,000 uranium-fueled pebbles about the size of tennis balls. Each pebble contains 9 grams of low enriched uranium in 10,000 tiny, sand-grain-sized coated microsphere particles, each with its own hard silicon carbide shell. These microspheres are embedded in a graphite matrix material, as shown in Figure 2. The unique feature of pebble bed reactors is the online refueling capability, in which the pebbles are recirculated with checks on integrity and consumption of uranium. This system allows new fuel to be inserted during operation and used fuel to be discharged and stored on site for the life of the plant. It is projected that each pebble will pass through the reactor 10 times, over a three-year period on average, before discharge. Due to the online refueling capability, plant maintenance outages are now required only every 6 years.

The key reactor specifications for the modular pebble bed reactor are shown in Table 1.

Table 1

Nuclear Specifications for the MIT Pebble Bed Reactor

Thermal Power 250 MW
Core Height 10.0 m
Core Diameter 3.5 m
Pressure Vessel Height 16 m
Pressure Vessel Diameter 5.6 m
Number of Fuel Pebbles 360,000
Microspheres/Fuel Pebble 11,000
Fuel UO2
Fuel Pebble Diameter 60 mm
Fuel Pebble Enrichment 8%
Uranium Mass/Fuel Pebble 7g
Coolant Helium
Helium Mass Flow Rate 120 kg/s (100% power)
Helium entry/exit temperatures 450 C/850 C
Helium Pressure 80 bar
Mean Power Density 3.54 MW/cubic meter
Number of Control Rods 6
Number of Absorber Ball Systems 18


The pebbles are located in the reactor core structure whose cross section is shown in Figure 3. The internals are made of carbon blocks which act as a reflector and structural support for the pebble bed. A picture of the Chinese internal carbon structure is shown in Figure 4. Please note the pebbles in the bottom of the central portion of the graphite discharge section.

Balance of Plant
There are two options under development: a direct cycle helium gas turbine system being developed by ESKOM, and an indirect cycle (helium-to-helium intermediate heat exchanger) gas turbine system being developed by MIT. Each has its advantages and disadvantages, with the key being the bottom-line cost as measured in cents per kilowatt-hour. The direct cycle plant configuration of the ESKOM PBMR design is shown in Figure 5. In this design there are essentially two large vessels, one containing the reactor and the other the balance of plant. The indirect cycle being developed by MIT is shown in Figure 6. Conceptually, the MIT turbomachinery module could be built in a factory and truck-shipped to the site for simple assembly. If this modularity strategy is realized, it would revolutionize how nuclear energy plants are built. The MIT schematic of the thermal-hydraulic system for its indirect cycle is shown in Figure 7. The basis for this preliminary design is that all components are commercially available today. Advanced designs are being developed to simplify the plant even further.

Should there be a need for an 1,100 MWe plant, 10 modules could be built at the same site. The concept calls for a single control room operating all 10 units through an advanced control system, employing many of the multi-plant lessons of modern gas-fired power plants in terms of modularity and automatic operation. Construction plans and schedules were developed to refine the cost estimates and schedule expectations. The preliminary schedule calls for getting the first unit on line in slightly over 2 years, with additional modules coming on line every three months. A unique feature of this modular approach is that it allows one to generate income during construction, as opposed to simply accruing interest during construction.
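A small sketch (module count and quarterly spacing as described above; the exact start date for the first unit is assumed) shows how a ten-module site would ramp up and why revenue begins long before the site is complete.

```python
# Ramp-up of a 10-module (10 x 110 MWe) pebble bed site, per the preliminary schedule above:
# first unit on line in slightly over 2 years (assumed 2.25 here), then one module every 3 months.

modules, unit_mwe = 10, 110
first_online_yr, interval_yr = 2.25, 0.25

for n in range(1, modules + 1):
    online = first_online_yr + (n - 1) * interval_yr
    print(f"module {n:2d} on line at year {online:.2f}, site capacity {n * unit_mwe} MWe")
# The full 1,100 MWe is reached around year 4.5, while the earliest modules have been
# generating income since about year 2.25 -- the "income during construction" noted above.
```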

The Safety Case
The basis for the safety of pebble bed reactors is founded on two principles. The first is the very low power density of the reactor, which means that the amount of energy and heat produced is volumetrically low and that natural mechanisms such as conductive and radiative heat transfer will remove the heat even if no core cooling is provided. This is significant since the temperature reached in a complete loss-of-coolant accident is far below the core melt temperature, and it takes about 70 to 80 hours to reach the peak temperature. Hence the conclusion that the core will not melt is valid and is supported by tests and analysis performed in Germany, Japan, South Africa and the US.

The second principle is that the silicon carbide, which forms the tiny containment for each of the 10,000 fuel particles in a pebble, needs to be of sufficient quality that it can retain the fission products. In tests performed to date on fuel reliability, it has been shown that microspheres can be routinely manufactured with initial defects of less than 1 in 10,000. In safety analyses, it is therefore assumed that 1 in 10,000 of these microspheres has a defect that would release its fission products into the coolant. Since the amount of fuel in each particle is very small, only 0.0007 grams, even with this assumption and under accident conditions, the release from the core would be so low that no offsite emergency plan would be required. In essence, it is recognized that the fuel cannot be made perfectly, but the plant is still safe because it has natural safety features that prevent meltdowns. Manufacturing fuel quality is a key factor in the safety of high temperature gas reactors.
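A back-of-the-envelope check (a sketch only, combining figures quoted in this article: 360,000 pebbles, roughly 10,000 coated particles per pebble, 0.0007 g of fuel per particle, and the assumed 1-in-10,000 defect rate) shows how small the vulnerable fuel inventory is.

```python
# Rough estimate of the fuel sitting outside an intact silicon carbide coating, under the
# safety-analysis assumption of 1 defective coated particle per 10,000. Figures from the article.

pebbles = 360_000
particles_per_pebble = 10_000   # the article quotes roughly 10,000-11,000 per pebble
fuel_per_particle_g = 0.0007    # grams of fuel per coated particle
defect_rate = 1.0 / 10_000      # assumed defect fraction used in the safety analyses

defective_particles = pebbles * particles_per_pebble * defect_rate
vulnerable_fuel_g = defective_particles * fuel_per_particle_g
print(defective_particles, vulnerable_fuel_g)
# ~360,000 defective particles core-wide, but only ~250 g of fuel -- compared with roughly
# 2.5 tonnes of fuel in the whole core at ~7 g per pebble.
```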

The other safety issue that needs to be addressed with all graphite reactors is that of air ingress. At high temperatures, oxygen reacts with carbon to form CO and CO2. This oxidation and corrosion of the graphite can be either an exothermic or an endothermic reaction, depending upon the conditions. Analyses and tests in Germany have shown that it is very difficult to "burn" the graphite in the traditional sense, but it can be corroded and consumed. Some have made references to Chernobyl as an example of the problems with graphite reactors. Fortunately, however, the Chernobyl design is radically different from the pebble bed: the pebble bed contains no water (the source of steam explosions), no zirconium that readily burns in air at high temperature, and it cannot reach the temperatures needed to melt fuel; all of these factors fueled the Chernobyl fire.

The key issue for the pebble bed reactor is the amount of air available in the core from the reactor cavity and whether a chimney can form allowing for a flow of air to the graphite internal structure and fuel balls. Tests and analyses have shown that at these temperatures graphite is corroded and consumed but the natural circulation required for "burning" is not likely due to the resistance of the pebble bed to natural circulation flow. The corrosion process is more of a diffusion process. MIT is now performing confirmatory analyses to understand the fundamental behavior of air flow into a pebble bed reactor under the assumption of a major break in the circulating pipes or vessels.

Economics
No matter what environmental, public health, safety, and energy security advantages nuclear energy may offer, if the product is not competitive, it will not be used. The MIT team used a comparative analysis of energy alternatives that was performed in 1992 by the Nuclear Energy Institute. The results of this comparative analysis for capital costs for a 10-unit modular plant show that the base plant overnight construction cost was $1.65 billion. Applying a contingency of 23% and an overall cost of money of 9.47%, the total capital cost estimate was $2.3 billion, or about $2,000/kW installed. Per module, the capital cost for a 110 MWe unit is estimated to be about $200 million. This estimate is approximately double that of the PBMR proposed by ESKOM.
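The capital-cost arithmetic quoted above can be retraced in a few lines (a sketch; the split between contingency and financing is only approximate):

```python
# Capital-cost arithmetic for the 10-module, 1,100 MWe MIT plant, using the figures above.

overnight_cost = 1.65e9  # $: base plant overnight construction cost
contingency = 0.23       # 23% contingency
total_capital = 2.3e9    # $: quoted total after contingency and a 9.47% cost of money

per_module_overnight = overnight_cost * (1 + contingency) / 10
print(per_module_overnight)            # ~ $2.0e8: the "about $200 million" per 110 MWe module
print(total_capital / (10 * 110_000))  # ~ $2,090 per kW installed: the "about $2,000/kW"
```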

If construction costs were all that mattered, the pebble bed reactor would clearly not be economic compared to natural gas plants. However, when one includes fuel and operating and maintenance costs, and given that the pebble bed plant requires far fewer staff than conventional reactors because of its simplicity, the total cost of power is estimated to be 3.3 cents per kilowatt-hour, well within the competitive range for new natural gas plants.

Financing Strategy
The financial community is justifiably skittish about nuclear investments due to the huge write-offs that were required for the latest generation of nuclear plants. The beauty of this small modular plant is that the initial investment for a module may range from $100 to $200 million. This is not an astronomical amount of money. Also, the plant should be producing electricity within two and a half years, a fairly short time to be nervous about getting a return. These two factors should provide sufficient confidence to make the required investments, as opposed to the billions of dollars and six years needed to see similar generation and returns from conventional light water reactor plants.

Nuclear Waste Disposal
The lack of a final repository for used nuclear fuel has been cited by some as a major obstacle to the building of new nuclear plants. While the need for a permanent waste disposal facility is real for existing and future plants, the progress being made in the US and in other parts of the world in actually siting a number of these facilities is encouraging. As most scientists and engineers realize, geological disposal is the right strategy. In the US, studies of the Yucca Mountain repository site, on the grounds of the former Nevada nuclear weapons test site, continue to show that this location is a good site for the burial of nuclear wastes for tens of thousands of years. This year, the Department of Energy will send to President Bush a recommendation to proceed with licensing. In Sweden and Finland, underground repository experimental facilities have gained local community support to actually build a test facility. Some nations are looking to reprocessing and long-term storage, since they do not feel an urgent need to have a facility in operation while the quantity of spent fuel in storage today is still relatively small. One repository could store all the spent nuclear fuel from all of this nation's operating nuclear reactors over their 40-year licensed lives. Under optimistic circumstances, a repository at Yucca Mountain could be open by 2010, according to DOE. After 15 years of study and exploration of Yucca Mountain, it is likely that the facility can be safely built at the site for long-term storage of spent fuel for hundreds of thousands of years.

Summary
As the California electricity crisis reaches the consciousness of the American public, the politics of nuclear energy will surely improve. Right now, the industry does not have a product that can compete in the marketplace even with currently high natural gas prices. It is vital to develop that product and not wait for the price of natural gas to rise so high that nuclear energy becomes competitive, since such prices would hurt the US and world economy. The MIT and ESKOM PBMR projects are working to provide a product that is not only competitive but will also gain the public's support due to its safety advantages. A lot of work is still required to demonstrate the capabilities of the pebble bed, but all work to date by ESKOM, its US partners, and MIT continues to show positive results. While not a replacement for natural gas today, in the next five to ten years, nuclear plants powered by pebbles may not

Andrew C. Kadak, Ph.D.
Professor of the Practice, Nuclear Engineering Department
Massachusetts Institute of Technology
Room 24-207 A, Massachusetts Ave., Cambridge, MA 02139
617-253-0166, fax: 617-258-8863

References

  1. AVR: Experimental High Temperature Reactor, 21 Years of Successful Operation for a Future Energy Technology, Association of German Engineers (VDI), the Society for Energy Technologies, Dusseldorf, 1990.
  2. "Advanced Design Nuclear Power Plants: Competitive, Economic Electricity," Nuclear Energy Institute, 1992.
  3. "Evaluation of the Gas Turbine Helium Reactor," DOE-HTGR-90380, Dec. 1993.
  4. MIT Nuclear Engineering Web site: web.mit.edu/pebble-bed/
  5. G. Melese and R. Katz, Thermal and Flow Design of Helium Cooled Reactors, American Nuclear Society, 1984.
  6. R.A. Knief, Nuclear Engineering: Theory and Technology of Commercial Nuclear Power, second edition, Hemisphere Publishing Corp., 1992.

"Life Views and Particle Physics"

David Hafemeister

Physicists have made very positive contributions to "science and society" issues. In this brief essay I thought it would be interesting to describe the particle-physics subset, to see how its members have reached beyond their discipline to make a difference in the areas of national security and energy/environmental matters. The basic conjecture is that those who think the deep physics thoughts have the basic tools to reach outside of themselves and consider science-related societal issues. This is not to say that one must have deep scientific thoughts to make a contribution; it only says that scientific training can assist in this pursuit. Let us explore this conjecture by describing seven particle physicists who also did credible work on nuclear arms and energy/environment. This essay is not statistically based; rather, it contains anecdotal observations of the lives of particle physicists that I have known over the past couple of decades. Because of the quality of their work, the American Physical Society has honored them with the Szilard and Burton Forum awards.

W.K.H. (Pief) Panofsky obtained his PhD from Caltech and became a Professor at UC Berkeley and at Stanford University, first in the Physics Department and then at the Stanford Linear Accelerator Center. As graduate students, many of us first became aware of Pief because of his well-received text, Classical Electricity and Magnetism (with Melba Phillips). Panofsky carried SLAC from its basic concepts to a viable, two-mile accelerator, which he directed from 1961 to 1984. Panofsky's contributions to particle physics convinced the American Physical Society in 1985 to create the Panofsky Prize "to recognize and encourage outstanding achievements in experimental particle physics." The Panofsky ratio results from one of his discoveries, which determined the ratio of the electromagnetic to the strong interaction in pion-nucleon interactions. Using a negative pion beam stopping in a hydrogen target, two outgoing channels were observed, namely the electromagnetic n + γ channel and the strong n + π0 channel (observed as n + 2γ after the π0 decays). The experiment using a deuterium target established that the intrinsic parity of the pion is odd, since the channel leading to two neutrons alone was observed from capture in deuterium.

As a twenty-three-year-old, Panofsky first encountered nuclear weapons at Los Alamos; he was in an airplane above the Trinity test, using monitoring devices he had developed. During the Eisenhower Administration he chaired the technical working group on the detection of nuclear weapons exploded in space for the Geneva negotiations on the Limited Test Ban Treaty. More recently, Panofsky chaired the National Academy of Sciences' Committee on International Security and Arms Control from 1985 to 1993. Under his leadership, the Academy panel has made many useful recommendations on arms control and nonproliferation.

Arthur Rosenfeld obtained his PhD from the University of Chicago and became a professor of physics at the University of California at Berkeley and the founding Director of the Center for Building Sciences at Lawrence Berkeley National Laboratory. He is currently a Commissioner on the California Energy Commission. Rosenfeld coauthored the 1949 classic text Nuclear Physics with E. Fermi, J. Orear and R. Schluter. Rosenfeld led the particle physics group at Berkeley after Luis Alvarez shifted his interests to astrophysics. After the 1973-74 oil embargo, the American Physical Society created a panel to study the question of enhanced energy efficiency from improved technologies. This issue captured Rosenfeld's heart, and he soon established what is today's foremost center for research on energy savings in buildings and appliances, an area that consumes 40% of U.S. energy. The Center for Building Sciences and its predecessors, with a staff of 200, soon developed successful products, each of which, as it "saturates" the market, will save about $5 billion/year, or 1% of the U.S. energy bill. The list includes electronic power supplies for fluorescent lamps (which in turn led to compact fluorescent lamps), low-emissivity windows and later selective windows, and the DoE-2 program for energy design of buildings. It also advanced understanding of indoor air quality and of cool roofs and shade trees to mitigate summer urban heat. For this work, Rosenfeld received the Carnot Award for Energy Efficiency from DOE in 1993.

Sid Drell obtained his PhD from the University of Illinois in 1949, became Professor of Physics at Stanford University in 1956, and was Deputy Director of the Stanford Linear Accelerator Center from 1969 to 1998. In 1970 Drell and Tung-Mow Yan developed the Drell-Yan process, which extended the parton/quark concept to time-like regions for the creation of lepton pairs as a result of quark–anti-quark annihilation in high energy hadron-hadron collisions. For collisions in which the lepton pair is produced with a large positive (time-like) invariant mass squared by an intermediate virtual photon or W/Z weak vector boson, they derived the analogue of Bjorken scaling for deep inelastic lepton scattering. Drell and James Bjorken also authored the well-known texts Relativistic Quantum Mechanics and Relativistic Quantum Fields.

Drell has held a wide variety of government positions over the years, including being the chair of the House Armed Services Panel on Nuclear Weapons Safety, the Senate Intelligence Committee Panel on Technology Review, and various JASON Panels, as well as being a member of the President’s Foreign Intelligence Advisory Board and Science Advisory Committee. Drell’s technical analysis on the safety and reliability of nuclear weapons played an important part in the U.S. decision to support the Comprehensive Test Ban Treaty. Among his awards are the Fermi and Lawrence Awards from the Department of Energy and the MacArthur prize fellowship.

Frank von Hippel received his PhD from Oxford in 1962 and is Professor of Public and International Affairs at Princeton University. For ten years he worked mostly on tests of SU(3)xSU(3) symmetries and symmetry-breaking in low-energy interactions and also on neutrino-proton interactions. In 1974, von Hippel shifted his research interests to public policy and organized a group at Princeton on nuclear-security problems. Von Hippel is well known for his insightful calculations on public policy issues, but more importantly for working with the Soviet scientists who advised Gorbachev on initiatives to reduce nuclear tensions between the two superpowers. In many ways von Hippel was ahead of his time and of the U.S. government in making contacts with Russian scientists. At the end of the Cold War, the U.S. government used these contacts to help set the agenda for cooperative programs to help Russia safeguard and dispose of nuclear materials and downsize its nuclear-weapon complex.

Martin Perl obtained his PhD from Columbia University and is a Professor at the Stanford Linear Accelerator Center. Perl is best known for receiving the 1995 Nobel Prize for the discovery of the tau lepton, the heaviest known member of the electron-muon-tau sequence of charged leptons, work that also led to the discovery of the third generation of elementary particles. While Perl was doing his basic research on the tau lepton, he was one of the key people in organizing the APS's Forum on Physics and Society. While chair of the Forum he organized workshops on graduate education held at Penn State University in 1974 and 1976; the proceedings from these workshops gave excellent and still-timely advice to the physics profession.

Henry Kendall obtained his PhD from the Massachusetts Institute of Technology and was a professor at MIT until his death in 1999. Working at Stanford, Kendall used 20 GeV electrons to examine the internal structure of the proton. By examining highly inelastic electron-proton scattering events, Kendall, Jerome Friedman, and Richard Taylor observed copious events indicating that the proton was made up of point-like charges, or quarks. In 1990 Friedman, Kendall, and Taylor were awarded the Nobel Prize for the discovery of quarks, which vindicated the standard model of particle physics. In the 1960s Kendall did classified work for the Pentagon, but ultimately withdrew from much of it as he led the Union of Concerned Scientists from 1973 to 1999. In this role, Kendall helped shepherd a number of arms control studies.

Kurt Gottfried obtained his PhD from MIT and is Professor Emeritus at Cornell University. He was chair of the Division of Particles and Fields of the American Physical Society. His work in particle physics encouraged him to write the books, Concepts of Particle Physics (Vol. I/II, with V.F. Weisskopf) and Quantum Mechanics, Vol. I: Fundamentals. Gottfried (with J.D. Jackson) studied meson-nucleon reactions, devised sum rules for deep inelastic scattering, and was very active in unraveling the spectroscopy of heavy-quark bound states.

Gottfried has a long history of research and publication on arms control matters, publishing in Scientific American and elsewhere on such topics as "Space-based Ballistic-Missile Defense," "No First Use of Nuclear Weapons," and "Anti-satellite Weapons," and he directed a major study on Crisis Stability and Nuclear War (Oxford University Press). Gottfried worked diligently, both through his individual efforts and through the SOS (Sakharov, Orlov, and Sharansky) Committee, to support the human rights movement during the years of oppression in the Soviet Union. He is a cofounder of the Union of Concerned Scientists and is currently the chair of its board of directors.

It has been a pleasure to know these particle physicists and to write their brief biographies. They have given us important lessons, which must be recorded for the next generations before these physicists pass on to a higher calling. They have been the thin blue line that has examined national security and energy/environment issues when those issues needed quantification. We rejoice that this fine group of particle physicists laid aside their busy cares to think about future implications. We of the physics community thank you and toast you, wherever you are.

David Hafemeister
National Academy of Sciences, Washington, DC

Population, Fossil Fuel, and Food

Richard D. Schwartz

The world's looming energy crisis can be tied directly to the exploding world population and the attendant need for expanded energy resources. A key to population growth over the past 80 years has been the increased production of food supplies. Perhaps one of the most important factors behind the increase in food supplies is the Haber process (developed by Fritz Haber in 1909) for the production of anhydrous ammonia. As a widely applied fertilizer, anhydrous ammonia has increased crop yields by a factor of two to three over those achieved on a wide scale before its introduction. Aside from the expansion of agricultural lands, the introduction of anhydrous ammonia is arguably the most important development of the 20th century promoting world population growth, from about 1.5 billion persons in 1920 to about 6 billion in 2000.

When addressing the issue of fossil fuel consumption as it relates to food production, one automatically thinks of the intense use of machinery in modern agriculture. Obviously, significant amounts of fuel are consumed in tilling, planting, harvesting, processing, and transport to markets and consumer outlets. When it is stated that "modern agriculture is the process whereby fossil fuels are turned into food," our first thoughts are of the immense fuel consumption involved. However, a primary constituent in the production of anhydrous ammonia is natural gas. Perhaps as much as half of the biomass in our foods today is derived from the use of anhydrous ammonia, which is produced directly from natural gas. Quite literally, it can be said that natural gas is being turned into food.

An important process for the production of anhydrous ammonia is the conventional steam-reforming process, wherein high temperatures (450-600 K) and high pressures (100-200 bar) are employed. The production proceeds via the following steps:

0.88 CH₄ + 1.26 Air + 1.24 H₂O → 0.88 CO₂ + 1.0 N₂ + 3 H₂

1.0 N₂ + 3 H₂ → 2 NH₃
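As a rough sanity check on the stoichiometry above, the short Python sketch below verifies that carbon, hydrogen, oxygen, and nitrogen balance to within rounding; it assumes (an assumption not stated in the article) that dry air is about 79% N₂ and 21% O₂ by mole.

    # Element balance for: 0.88 CH4 + 1.26 Air + 1.24 H2O -> 0.88 CO2 + 1.0 N2 + 3 H2
    # Assumption (not from the article): dry air is ~0.79 N2 + 0.21 O2 per mole.
    CH4, AIR, H2O = 0.88, 1.26, 1.24      # reactant moles
    CO2, N2, H2 = 0.88, 1.00, 3.00        # product moles

    O2_from_air = 0.21 * AIR              # ~0.26 mol O2 supplied by the air
    N2_from_air = 0.79 * AIR              # ~1.00 mol N2, matching the product side

    balances = {
        "C":  (CH4,                   CO2),
        "H":  (4 * CH4 + 2 * H2O,     2 * H2),
        "O":  (2 * O2_from_air + H2O, 2 * CO2),
        "N2": (N2_from_air,           N2),
    }
    for element, (reactants, products) in balances.items():
        print(f"{element}: in = {reactants:.2f}, out = {products:.2f}")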

The second (ammonia-synthesis) reaction requires a catalyst. A complete discussion of this process is available on the website of the European Fertiliser Manufacturers Association at www.efma.org/publications/ (see publication 1, on the Production of Ammonia, section 04). For a typical natural-gas-feedstock plant, about 22 GJ of feedstock are required to produce one metric tonne (t) of ammonia. In addition, about 8 GJ of fuel (usually natural gas) is required to power the process, for a total of about 30 GJ per tonne of ammonia produced. In 1997, North American nitrogen fertilizer production was about 13 Mt, so about 3.9 x 10^17 J was required. The energy content of natural gas is about 1.05 MJ per cubic foot (cf), so this translates to the use of about 0.37 tcf of natural gas. North American natural gas production was about 18 tcf in 1997 (Ristinen and Kraushaar, 1999, Energy and the Environment (John Wiley: New York), p. 48). Thus about 2.1% of the natural gas produced was used in the production of ammonia fertilizer.
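The arithmetic in the preceding paragraph can be reproduced in a few lines. The Python sketch below simply restates the figures quoted above (30 GJ per tonne, 13 Mt of production, 1.05 MJ per cubic foot, 18 tcf of North American output); it is an illustration of the calculation, not an independent estimate.

    # North American natural gas devoted to ammonia fertilizer, 1997 (figures from the text)
    GJ_PER_TONNE = 30.0      # 22 GJ feedstock + 8 GJ process fuel per tonne of NH3
    PRODUCTION_MT = 13.0     # North American nitrogen fertilizer production, Mt
    GAS_MJ_PER_CF = 1.05     # energy content of natural gas, MJ per cubic foot
    NA_GAS_TCF = 18.0        # North American natural gas production, tcf

    energy_joules = PRODUCTION_MT * 1e6 * GJ_PER_TONNE * 1e9   # ~3.9e17 J
    gas_tcf = energy_joules / (GAS_MJ_PER_CF * 1e6) / 1e12     # ~0.37 tcf
    share = gas_tcf / NA_GAS_TCF                               # ~2.1 percent

    print(f"Energy: {energy_joules:.2e} J; gas: {gas_tcf:.3f} tcf; share: {share:.1%}")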

Worldwide, about 85 Mt of ammonia fertilizer was produced in 1997. Whereas ammonia production saw only small increases in developed nations during the 1990s, developing countries increased ammonia output from 42 Mt to 51 Mt between 1991 and 1997, more or less tracking the population increase in those countries. It is likely that a much higher percentage of domestic natural gas is used to produce fertilizer in developing countries (such as China) than in developed countries. As the world population continues to increase and natural gas production rates peak and go into decline (barring the development of a fundamentally new natural gas resource, such as seafloor methane ices), an increasing percentage of natural gas will be used for fertilizer production, at increasing cost. Coal gasification can also provide feedstock for ammonia production, but the present production cost per tonne is about 1.7 times greater than for ammonia produced from natural gas. In coming decades, pressure will build for the construction of coal gasification plants.

Perhaps a more important limit on population growth will be the lack of additional agricultural land and fresh water supplies, as well as the degradation of existing irrigated lands. Modern agriculture has benefited from extensive irrigation of dry land (e.g., the high plains of the U.S.), but at the cost of depleting aquifers that cannot last more than a few decades at present extraction rates. Moreover, extensive fertilization with anhydrous ammonia has produced copious quantities of soil nitrates, many of which have leached into groundwater supplies and polluted the drinking water of a large area of the central U.S.

The bottom line would appear to be that the end of exponential growth in population and food supplies is in sight. If a Herculean effort at worldwide birth control is not successfully launched within this decade, nature will undoubtedly level the playing field with a combination of malevolent acts, not the least of which will be mass starvation and wars over the world's resources. Some would argue that, given world events, we have already stepped over that precipice.

Richard Dean Schwartz
Dept. of Physics and Astronomy
University of Missouri-St. Louis

Weapons Plutonium Disposition: MOX Gets Go Ahead; Immobilization Dead in Water

A. DeVolpi

Long a disputed issue, the disposal of excess weapons plutonium seems to be headed for technical resolution in U.S. national policy. In a report posted on the Internet, the National Academy of Sciences Committee on International Security and Arms Control (CISAC) has concluded that irradiating the plutonium in MOX (mixed oxides of uranium and plutonium) in a once-through nuclear reactor fuel cycle would meet its "spent-fuel" standard for resistance to theft and proliferation.

A close look at the report also indicates that previously considered methods of immobilization by underground burial are not now approved or available for weapons plutonium.

Because of the longstanding debate about plutonium disposition, it’s a little odd that these significant NAS results have not gotten more public attention. I’ve been following this issue for years, partly in connection with a book being prepared with some colleagues about Cold-War nuclear arsenals and legacies, yet it was only this year (2001) that I noticed the July 1999 "Interim Report." No final report is available, although it was scheduled for the Fall of 1999.

It seems strange that all three of their significant conclusions–regarding MOX burning, immobilization, and demilitarization–are only published in an unheralded interim report. It is hard not to wonder whether the apparently imminent final report was not issued because it was considered ideologically unacceptable in certain quarters.

CISAC adopted for their 1999 review a systematic methodology to compare options for plutonium disposal. Perhaps because it is only "interim," the 1999 report is difficult to read and understand, with many of its important findings rather obscure or relegated to footnotes. It is not a simple matter to sort through the jargon, but a careful reading indicates that today's state of the art does not permit the spent-fuel standard to be met through immobilization (either by vitrification or can-in-canister).

Those of us who have long been calling attention to the advantages of isotopic "denaturing" of weapons plutonium can find some satisfaction in a footnote (no. 13) in the CISAC report. For the first time in print, the official body explicitly acknowledges that any attempt to insert isotopically demilitarized plutonium in existing weapon configurations would be "likely" to run into an abundance of difficulties, such as "design modifications and . . . new nuclear-explosive tests . . . to confirm that the change in isotopic composition had not unacceptably degraded performance."

The chairperson of CISAC, John Holdren, explained his own views about reactor-grade plutonium in an article he wrote for The Bulletin of the Atomic Scientists in 1997:

. . .because the isotopics are different, weapons using this plutonium would have to be redesigned, which would require nuclear tests. That means the path to reuse of spent fuel would be more difficult technically and politically–as well as easier to detect–than reusing weapons plutonium extracted from glass.

Thus the Committee has definitively embraced the proposition that demilitarized plutonium is really not useful for making military-quality weapons. This has important implications in evaluating treaty-breakout scenarios for nuclear-weapons states after deep cuts in arsenals have taken place. The report now effectively supports the contention that isotopic demilitarization would make plutonium inherently unsuitable for rapid recovery into weapons. Demilitarized plutonium is more securely protected against reversal into high-quality weapons than is buried "immobilized" plutonium that has not been put through a reactor fuel cycle.

After deciphering the oblique vocabulary of the report, it looks as though the Academy is saying not only that MOX irradiation (which results in chemical, metallurgical, and isotopic demilitarization of plutonium) meets the disposition standard, but that irradiation in reactors is the only practical way currently available to dispose of weapons plutonium.

Am I the only one who has noticed this progress?

Alexander DeVolpi
Retired from Argonne National Laboratory
21302 W. Monterrey Dr., Plainfield, IL 60544

References:

  1. Committee on International Security and Arms Control, National Academy of Sciences, "Interim Report for the US Dept. of Energy by the Panel to Review the Spent-Fuel Standard for Disposition of Weapons Plutonium," http://nationalacademies.org (July 1999).
  2. A. DeVolpi, V.E. Minkov, V.A. Simonenko, and G.S. Stanford, Nuclear Shadowboxing: Myths, Realities, and Legacies of Cold War Weaponry (unpublished).
  3. J.P. Holdren, "Work with Russia," The Bulletin of the Atomic Scientists (Mar/Apr 1997).
  4. A. DeVolpi, "The Physics and Policy of Plutonium Disposition," Physics and Society, Vol. 23, No. 4 (1994); "A Coverup of Nuclear Test Information," Physics and Society (Oct. 1996).

Our Daily Minimum of Uranium?

I found John Cameron's article, "Is Radiation an Essential Trace Energy" (FPS, 2001, 30(4), 14-16), a bit wild. The problem is in the use of epidemiological evidence. The purpose of epidemiology is to detect statistical trends for further study. There is no validity at all in the use of epidemiology to conclude anything, or to prove or disprove any hypothesis. The underlying principle here is that statistical correlations in populations cannot distinguish population characteristics from cumulated individual characteristics and, furthermore, cannot distinguish cause from effect.

Yet, Cameron uses epidemiological data to test a hypothesis that ionizing radiation stimulates the human immune system. For no obvious reason, he assumes that a stimulated (more active) immune system is a good thing. He suggests that a double blind study should be performed to explore this hypothesis further.

Over and above the logical issues here are other problems with Cameron's idea:

  1. The function of the immune system is to identify and destroy foreign biological agents, such as bacteria. Unless there is some rationale for believing that ionizing radiation introduces foreign biological material, why should the hypothesis in question even be formulated, not to mention tested, on animals or humans? True, radiation can kill human cells, causing activation of components of the immune system during scavenging of the dead material, but why should destruction of functioning cells somehow be viewed as good?
  2. A highly active immune system in the absence of harmful biological agents has another, more common name: Allergy. Why should induction of allergy, even at a mild level, be considered a good feature of irradiation?
  3. There is some evidence that HIV causes AIDS and death by exhaustion of certain immune system components (T-cells). Why should hastening of any such exhaustion be a good thing? It is true that vaccination can activate the immune system and cause it to be programmed to defend against the biological agent of the vaccine; but, it is the programming, not the activation, which is beneficial in vaccination.

So, I read Cameron's hypothesis as being more or less equivalent to the statement that radiation improves life expectancy by making people sicker. He then proposes to administer a sub-sickening dose of radiation to help people live longer. A practitioner of homeopathy might view this as a stroke of genius, but I find it obnoxious.

John Michael Williams

The Sacred Depths of Nature

by Ursula Goodenough, Oxford University Press, 1998

This is a most interesting and ambitious, and yet personal and beautiful, book. The author asks in the preface whether it is possible to feel religious emotions in the context of a fully modern understanding of nature. This book is her answer, and the answer is in the affirmative. The book presents an insightful overview of contemporary scientific knowledge, touching lightly on the physical sciences while emphasizing biological science and particularly evolution, and combines it elegantly with associated religious reflections.

The author is a cell biologist and professor at Washington University in St. Louis, and a former president of the American Society for Cell Biology. In parallel with her life as a scientist she has a deep religious orientation and a fine religious sensibility, and has also served in the leadership of the Institute on Religion in an Age of Science. The religious reflections that she presents are non-denominational and non-theistic, but she has great respect for and wide knowledge of different religions, and includes quotations from a variety of religious traditions.

In the introduction, the author says that it is the goal of this book to present an accessible account of our scientific understanding of Nature and then suggest ways that this account can call forth appealing and abiding religious impulses, an approach that she refers to as religious naturalism. Thus, her ambitious agenda for this book is to outline the foundations for a planetary ethic. (She indicates however that such an ethic would make no claim to supplant existing traditions but would seek to coexist with them, informing our global concerns while we continue to orient our daily lives in our cultural and religious contexts.) She indicates that such a global ethic must be anchored both in an understanding of human nature and an understanding of the rest of Nature. She believes that this can be achieved if we start out with the same perspective on how Nature is put together, "and how human nature flows forth from whence we came".

Goodenough points out in her introduction that the major religions address two fundamental human concerns: "how things are" and "which things matter." "How things are" becomes formulated as a cosmology, and these aspects have been essentially superseded by science. She refers to the scientific world picture as "the story, the one story, that has the potential to unite us, because it happens to be true." "Which things matter" has remained to a larger extent in the domain of religion. The author recognizes and affirms the human quest for meaning. This book appears to be an attempt to restore to the understanding of science a sense of the sacred in Nature, while viewing "how things are" within the world-view of science.

Thus, this eloquent book is addressed toward a coordination of science and religion. In form, it consists of 12 concise chapters, each lucidly treating a topic in science starting with the origin of the earth, and discussing various topics including the evolution of biodiversity. Following the science content of each chapter is a short religious "Reflection" that presents an associated religious or spiritual perspective, and corresponds to a religious meditation on the meaning of the topic.

While the direct experiences of our encounters with Nature have always had the power to evoke emotions such as awe, reverence and spiritual responses in human beings, the scientific interpretations of Nature unfortunately too often elicit emotional detachment or even negative attitudes, especially in non-scientists. Early in the book the author describes her own brushes with unemotional rationalism and nihilism and even existential fear in conjunction with her early studies of science, thereby providing an empathetic connection with many readers. Then she invites the reader along and shows the reader, eloquently and effectively and with great charm, that scientific understanding can be a source of positive emotions such as awe, wonder, reverence, and joy. What Goodenough is aiming for and to some extent accomplishes is to bring in positive emotions as accompaniments to the scientific understanding of Nature. She helps the reader to realize personally that the origin of life and the universe can be even more meaningful with our increased scientific understanding of them. It seems to me that this is a very good thing and that such efforts should be widely supported by other scientists, even for mundane and practical reasons such as improving public understanding of science.

Goodenough's efforts to lay the foundations for a planetary ethic come across with mixed success. Her evocations of joy and wonder in contemplating the scientific understanding of the biological world are really very fine. She discusses sex and sexuality in the biological evolutionary context, and then reflects on nurturing and the love of God and Christian love. Yes, there is real wisdom here, but what about the intense emotions of human sexuality? She addresses the issue of death sensitively in context as the price we pay for life as multi-cellular beings. This is a clear insight, but does not resonate with the agony of suffering. It is important, but it is not enough. She does not bring up the most primitive issues of evil, such as the fact that the very existence of living beings requires food and thus the death of other living beings, nor does she address the darker aspects of evil and destruction in the world and in human culture.

This is a very valuable book that makes an important contribution to the ongoing dialog between science and religion. It is my hope that it will not only open a spiritual vista on contemporary science but also inspire readers from both scientific and non-scientific backgrounds to think long and hard and meditate on the topics that it touches and illuminates.

Caroline L. Herzenberg

Figure 1: Fleet Fuel Economy Example
Figure 2: Fleet Fuel Economy Second Example
Figure 3: Total fuel used and average mileage over 10,000 miles for various fleets
Figure 4: Fuel cost savings over vehicle life, mid-size sedan
Figure 5: Hydrocarbon emissions standards over time; NOx and other materials show similar trends
Figure 6: Improvement in catalyst efficiency compared to sulfur content


Physics and Society is the non-peer-reviewed quarterly newsletter of the Forum on Physics and Society, a division of the American Physical Society. It presents letters, commentary, book reviews and articles on the relations of physics and the physics community to government and society. It also carries news of the Forum and provides a medium for Forum members to exchange ideas. Opinions expressed are those of the authors alone and do not necessarily reflect the views of the APS or of the Forum. Contributed articles (up to 2500 words, technicalities are encouraged), letters (500 words), commentary (1000 words), reviews (1000 words) and brief news articles are welcome. Send them to the relevant editor by e-mail (preferred) or regular mail.

Editor: Al Saperstein, Physics Department, Wayne State University, Detroit, MI 48202, (313) 577-2733 / fax (313) 577-3932. Articles Editor: Betsy Pugel, Loomis Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801. Reviews Editor: Art Hobson, Physics Department, University of Arkansas, Fayetteville, AR 72701, (501) 575-5918 / fax (501) 575-4580. Electronic Media Editor: Marc Sher. Layout (for paper issues): Alicia Chang. Web manager for APS: Joanne Fincham