Setting Radiation Protection Standards: Science, Politics, and Public Attitudes in Historical Perspective
Barton C. Hacker
Radiation safety is a peculiarly 20th century issue, dating from the discovery of x-rays and radioactivity in the late 1890s. Initially, only a few doctors and technicians risked harm from prolonged or intense contact with x-ray machines or radium. Working together they devised radiological safety codes to protect themselves and, in due course, others whose work might put them at risk. These self-imposed standards defined what informed medical judgment accepted as safe levels of exposure to external x-rays and gamma rays or to radioactive substances that somehow entered the body. By and large, they worked.
Early Standards
From 1928 onward, standard setters expressed acceptable limits for external radiation in roentgens. Technically defined in terms of radiation-caused ionization of air, the roentgen strictly speaking measured exposure, not dose. Specifying dose required another unit that considered both energy absorbed in tissue and the relative biological effect of the kind of radiation. Although one such unit, the rem, was devised during World War II, many practitioners persisted in using roentgens to express exposure or dose indifferently for another decade or more. By the late 1950s, however, the rem had become the standard unit of dose, while still another unit, the rad, was coming into use to express energy absorbed. Ordinarily, shifting from one unit to another did not greatly alter the numbers, which may account for the sometimes casual usage of the several units, even among knowledgeable practitioners.
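The relation between rad and rem that eventually settled the confusion can be sketched in a few lines. The quality factors below are illustrative modern textbook values, not figures drawn from the essay's period sources, and the function name is mine:

```python
# Illustrative sketch: dose equivalent (rem) = absorbed dose (rad) x quality factor.
# Quality factors are modern textbook values (assumption, not from the essay).
QUALITY_FACTOR = {"gamma": 1, "beta": 1, "alpha": 20}

def dose_rem(absorbed_rad, radiation_type):
    """Convert absorbed dose in rads to dose equivalent in rems."""
    return absorbed_rad * QUALITY_FACTOR[radiation_type]

# For gamma rays the quality factor is 1, so rads and rems coincide numerically,
# which helps explain the casual interchange of units the essay describes.
print(dose_rem(100, "gamma"))  # 100 rem
print(dose_rem(100, "alpha"))  # 2000 rem
```

For external gamma exposure, then, shifting from roentgens to rads to rems barely changed the numbers; only for densely ionizing radiation like alpha particles did the distinction matter greatly.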
What do these units mean? According to an authoritative 1950 manual, acute whole-body exposures up to 50 roentgens produced little more than blood changes; serious injury and any likelihood of disability took more than 100. Table 1, first published in 1950, summarized the acute effects of radiation exposure. Essentially similar tables, with "r" standing for rads instead of roentgens, can be found in the latest textbooks. Controversy over health effects of radiation, however, barely touches the area defined by this table. It centers rather at the very lowest end of the exposure spectrum, where "no obvious injury" tends toward no directly observable effect of any kind.
Table 1. Probable Early Effects of Acute Radiation Doses over Whole Body
Dose (roentgens)    Probable effect
0-25                No obvious injury
25-50               Possible blood changes but no serious injury
50-100              Blood-cell changes, some injury, no disability
100-200             Injury, possible disability
200-400             Injury and disability certain, death possible
400                 Fatal to 50%
600                 Fatal
Source: Samuel Glasstone, ed., The Effects of Atomic Weapons (Los Alamos: Los Alamos Scientific Laboratory, September 1950), Table 11.28.
World War II
The advent of controlled nuclear fission and then atomic bombs during World War II did not so much transform the nature of radiation hazards as vastly expand their scope. Radioactivity that had once mattered chiefly in the laboratory or clinic came to concern much of postwar society, especially after 1951 when the new Atomic Energy Commission (AEC) began regularly testing nuclear weapons. Test fallout stirred widespread fears and provoked public outcries over the hazards of radiation at any level of exposure. Undoubtedly, thousands of people who lived and worked in regions affected by fallout received varying, though almost always very low, doses of external gamma and beta radiation during the era of aboveground testing, from 1945 to 1963. Many also must have ingested radionuclides through normal eating, drinking, and breathing in fallout regions.
At issue were the hidden risks of fallout. Only in a few well-documented and widely known instances, most notably the Marshall Islanders and Japanese fishermen caught by fallout from the Castle Bravo test in 1954, were doses high enough to cause radiation sickness and even threaten life. Radiation at such high doses produces well known and obvious effects, as Table 1 shows. But most doses were far lower, so low as to defy detection except by laboratory analysis of blood samples from those exposed, and sometimes not even then; they certainly produced no evident damage to health, despite controversial retrospective studies suggesting an unusually high incidence of radiation-related disease in regions affected by fallout.
The question of low doses
Scientifically, the real question is whether very low levels of exposure have had disproportionately great health consequences. It remains unanswerable. Mainstream scientific opinion still judges the danger minute; that is, very low dose implies very low risk, though experts disagree about precisely what that means quantitatively. Apparently contradictory conclusions about the risks of low-level radiation in successive reports of the National Academy of Sciences-National Research Council Advisory Committee on the Biological Effects of Ionizing Radiation (BEIR III in 1980, BEIR V in 1990) partly reflected new data, but also arose from persistently divergent viewpoints about how to evaluate them.
What limits should be imposed on exposure to radiation? The question persists because it depends as much on public policy, social values, and philosophy as on science.
Ambiguity begins with the study of radiation-caused harm to living things and the concept of dose itself. For individuals, damage depends on both dose (how much radiation absorbed) and dose rate (how fast). A dose lethal if received in a day might well be survived if spread over a month and prove harmless if stretched through years. Precisely what physical processes convert absorbed radiation dose into biological damage are still unclear. Inevitable death follows only very high acute doses. Exposure at lower levels and rates produces much less clear-cut results. In many instances, no one can say for sure that a certain dose will cause a certain injury to a certain living thing.
Prognosis becomes instead an exercise in probability, as witness the so-called median lethal dose: the dose that will kill half of a large number of exposed subjects within a specified time. Statistically, one might know that half those exposed in 24 hours to 350 roentgens will die in 30 days, yet remain unable to predict which persons that half will comprise. Efforts to reconcile such statistical group effects with individual harm have been a source of constant tension in the medical-legal sphere, and a fertile source of confusion in popular discussions. Fortunately, as we shall see, setting the problem in a new frame can do much to clarify the issues.
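The gap between group prediction and individual prediction can be made concrete with a toy simulation. Everything here is a hypothetical illustration: at the median lethal dose, each subject is modeled as an independent coin flip with a 0.5 chance of dying within the specified period.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def simulate_outcomes(n_exposed, p_fatal=0.5):
    """Model each exposed subject as an independent Bernoulli trial.

    At the median lethal dose, p_fatal = 0.5 by definition.
    Returns a list of booleans: True = death within the specified period.
    """
    return [random.random() < p_fatal for _ in range(n_exposed)]

outcomes = simulate_outcomes(10_000)
print(sum(outcomes) / len(outcomes))  # close to 0.5: the group rate is predictable
# ...but nothing in the model identifies *which* individuals fall in that half.
```

The group fatality rate converges reliably toward one half as the population grows, yet the model offers no basis whatever for predicting any single subject's fate, which is precisely the medical-legal tension the essay describes.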
All such questions grow harder to answer as dose and dose rate decrease. Acute or short-term radiation effects, the result of large and rapid exposures, have never seemed baffling or controversial. Acute effects were clear, how to deal with them obvious: Straightforward measures like shielding sufficed to safeguard those at risk. Chronic or long-term effects were another matter entirely. That exposure to relatively low levels of radiation could cause harmful late effects had been known since the early years of the century. Such effects might be downplayed or ignored in the midst of war, including Cold War, but everyone knew that some forms of cancer and other disease sometimes occurred many years after exposure.
Evidence of damage appears more slowly and rates of injury decline as dose and dose rate fall. Someone exposed to a massive gamma ray burst quickly shows the effects, leaving no doubt about the cause. Some lesser dose might induce leukemia years later, when the cause will seem less clear-cut. At still lower levels and longer times between exposure and injury, causal links grow fainter yet. Just how very low doses trigger biological responses remains obscure. So, too, does the full range of possible late effects, which may include metabolic or immune system disorders as well as cancers. Medical and legal questions alike multiply as ties between cause and effect loosen, and the evidence of injury becomes statistical rather than clinical.
Different approaches to low doses
Scientific opinion divides about the shape of the dose-response curve at the very lowest levels, where cause and effect become hardest to measure or even to detect (Figure 1). Scientists taking one approach in effect graph the curve as a straight line from known higher values through lower unknown values to zero. Any exposure thus implies some chance of harm, even if damage cannot always be detected. Linear extrapolation, in other words, means that only zero dose causes zero damage.
Other approaches adjust the curve for biology. Biologically active agents, such as drugs or poisons, normally must exceed some threshold before working damage. Biological systems exposed at levels below threshold can restore themselves and so suffer no lasting harm. Radiation at low enough levels, in this model, causes no cumulative damage. The dose-response curve turns sharply downward toward zero damage at some dose higher than zero.
Still a third view, much less widely held, has disproportionately greater risk at very low doses. This seeming paradox is resolved by explaining that very low dose allows damaged cells to survive and become the seeds of cancer.
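The three views just described can be contrasted as schematic curve shapes. The functions and parameter values below are purely illustrative assumptions, not fitted to any data, and merely sketch the qualitative disagreement:

```python
# Schematic sketch (not fitted to data) of the three dose-response views:
# linear no-threshold, threshold, and supralinear. All parameters are arbitrary.

def linear(dose, slope=1.0):
    # Risk proportional to dose all the way down: only zero dose means zero risk.
    return slope * dose

def threshold(dose, threshold_dose=25.0, slope=1.0):
    # Below the threshold, biological repair leaves no lasting harm.
    return 0.0 if dose <= threshold_dose else slope * (dose - threshold_dose)

def supralinear(dose, slope=1.0, exponent=0.5):
    # Disproportionately high risk per unit dose at very low doses.
    return slope * dose ** exponent

for d in (0, 10, 50, 100):
    print(d, linear(d), threshold(d), supralinear(d))
```

The curves agree tolerably well at high doses, where the evidence is firm; they diverge exactly where the data give out, which is why the choice among them has always been as much a matter of judgment as of measurement.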
Since the mid-1970s scientists have begun to frame the problem in different terms, stochastic versus nonstochastic effects of radiation, although the basic issues remain the same. Stochastic effects are those for which incidence is the chief function of dose. Higher dose makes cancer more likely, for instance, not more severe, just as flicking a switch turns a lamp on or off but has no bearing on its brightness. Nonstochastic effects are precisely those for which severity is the chief function of dose. Cataracts of the eye offer a well-known instance: how great the injury depends on how high the dose. Radiation damage to blood vessels likewise increases as dose rises.
Restating the problem in these terms also helps distinguish group from individual effects. Stochastic effects are in essence population effects, the kind that can be predicted statistically but not individually; they also are low-dose effects and, as the word's Greek root implies, matters of conjecture. Nonstochastic effects, in contrast, are the predictable and unambiguous consequences of higher radiation doses. Because such effects can, in general, be observed only at fairly high levels of exposure, they also provide the strongest evidence for thresholds.
The view that radiation damage had biological thresholds largely prevailed during the first half-century of radiation protection, and finds many supporters today. Experiment cannot easily resolve the issue. Meaningful data on the rare and often minor damage inflicted by very low doses or dose rates could come only from armies of animals studied over many generations. Possible in theory, such studies simply exceed the limits of any realistic research program, all the more so because animal findings will not necessarily apply to humans; even closely related species show markedly different effects. Practically, this seeming impasse poses no insuperable problem. Radiation safety has never relied on final answers. Pragmatic safeguards countered the everyday hazards long before science could explain either hazard or safeguard.
The concept of permissible exposure
Threshold thinking shaped early safety codes. "Tolerance" expressed the basic idea: Living things could survive, without patent ill effect, some defined level of radiation for an indefinitely long time. Inhabitants of Denver, after all, seem as healthy as New Yorkers, although Denver's altitude means they receive double the background radiation of dwellers at sea level. "Permissible exposure" first emerged as an alternate concept in the mid-1930s, and gained wide currency only in the early 1950s. The newer term added social-political views to medical-biological judgments about what might be harmless. Its adoption would, in effect, shift the thrust of radiation protection from seeking biological thresholds to weighing risks and benefits. Although the bulk of evidence in fact argued threshold, guideline writers assumed philosophically that any exposure was risky. Whatever they believed about physical realities, many experts came to prefer erring on the side of caution, acting as if any exposure could be harmful.
Steadily falling dose limits should not be construed as solely, or even chiefly, a product of greater knowledge. Certainly, better data and new findings have affected standards, but no new danger needed proving to invest exposures regarded calmly in one decade with deep concern in another. Technology has played a crucial role over the years in changing what "low" means with respect to radiation exposure. Improving instruments detect ever smaller amounts of radiation in the field as well as the laboratory. As for many other toxic hazards, detection in practice if not in theory implies danger. Technical prowess, in other words, rather than assured hazard tends to define safe limits, for radiation as for other potential hazards. Changing political climate has often affected standards far more than new data or deeper understanding.
For radiation the crucial question has scarcely varied since turn-of-the-century x-ray and radium users began to worry about the side effects of their wonderful new tools: What limits should be imposed on human exposure to ionizing radiation? The question persists because the answer depends at least as much on public policy, social values, and even philosophy as on science. Social concerns in the widest sense have always molded safety standards, science at best setting guidelines for decision makers.
That radiation protection standards are socially constructed and politically, rather than scientifically, decided may be the most pervasively misunderstood point in the entire public controversy. Whatever the dose, more often than not a showing of exposure is simply assumed by critics to be proof of damage, a transition so easy as hardly to be noticed. It reflects a long history of public apprehensions about nuclear matters and the subjective judgment of their unseen risks. Quantitatively assessed risk, the "real" danger, need bear little relationship to danger perceived or risk deemed acceptable; this is perhaps even more true of the atom than of other technological hazards. Unresolved questions about test veterans or downwinders have simply added to nagging doubts about long-term health effects of exposure to low levels of ionizing radiation.
Discrepancies between real and perceived risk surely owe something to public misunderstandings and confusion about the scientific bases of radiation protection standards, about distinctions between natural law and practical guidelines, about the ambiguities of cause and effect in radiation injury. But such discrepancies may owe even more to wide and growing mistrust of government motives and suspected conflicts of interest in choosing and applying standards. Unfortunately, skepticism seems all too well deserved.
When fallout from nuclear weapons testing became an issue during the 1950s, government officials mostly preferred to reassure rather than inform. Practically no one doubted that testing could be conducted safely, in other words without seriously endangering either test participants or members of the public, provided suitable precautions were observed. On the one hand convinced that trying to explain risks so small would simply confuse people and cause panic, on the other hand fearing to jeopardize the testing vital to American security, officials simply refused to admit any risk at all. When outsiders revealed that fallout might indeed be hazardous, government credibility suffered prompt and long-lasting damage.
Moderately greater openness in the 1960s with the advent of commercial nuclear power, and even more in the late 1970s after the AEC's demise, proved inadequate: What looked like efforts to downplay risks came to seem no less suspect than to deny them altogether. Each time the AEC or one of its successors faced questions about possible hazards, it tended to issue reassuring statements and discount the danger. Assuming that the public could not grasp the nuances of minor versus major risk, agency representatives preferred to claim no risk at all. No one likely thought of that as lying, but powerful officials free of much direct accountability in a secrecy-shrouded program found it all too easy to deny, dissemble, or mislead as a matter of course without a second thought. Forgetting how much their special knowledge owed to their places rather than their virtues, they too lightly dismissed the costs their high purposes may have imposed on their fellow citizens.
Copyright, 1995, by Barton C. Hacker. All rights reserved. The author is at L-451, Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94551, hacker1@llnl.gov . He will be happy to provide an annotated version of this essay upon request. Alternatively, interested readers may consult the two books, both published by the University of California Press, in which he more fully discusses and documents the issues raised here. See Barton C. Hacker, The Dragon's Tail: Radiation Safety in the Manhattan Project, 1942-1946 (1987) and Elements of Controversy: The Atomic Energy Commission and Radiation Safety in Nuclear Weapons Testing, 1947-1974 (1994).