Newsletters

FHP Essay Contest Winners

The FHP Student Essay Contest received a record number of entrants in 2019, with submissions from four continents and essay writers ranging from high school students to graduate students. The winning essay “A Changing Dichotomy: The Conception of the ‘Macroscopic’ and ‘Microscopic’ Worlds in the History of Physics” was submitted by Zhixin Wang, a graduate student in applied physics at Yale University. Wang's essay examines scientists' shifting views on what distinguishes the micro and macro worlds over four centuries, from 17th-century hypotheses of hidden mechanical mechanisms as explanations for visible phenomena to more contemporary distinctions that appeal not to size but to quantum metrics. For winning the contest Mr. Wang will receive a cash award of $500 for his work. Both essays are posted on the FHP's website.

Zhixin Wang

Melia Bonomo

The Stony Brook Nuclear Structure Laboratory: A Personal History

By Linwood Lee, Emeritus Professor of Physics and Astronomy, Stony Brook University

Prologue
The State University of NY (SUNY) was established in 1948, as a unit of the New York State Department of Education, in response to the expected demand for higher education following WWII. It initially brought together 29 State-supported institutions, mostly Teachers Colleges, under a central administration. Since there was only one State institution on Long Island, a new “State University College on Long Island” was established in 1957 on a donated estate in Oyster Bay. SUNY got its big start when Nelson Rockefeller, a strong supporter of SUNY, became Governor in 1958. He appointed a select commission, chaired by Henry Heald, then Chairman of the Ford Foundation, to recommend the framework for the future of SUNY. The resulting “Heald Report” called for major expansion of SUNY and the creation of two comprehensive University Centers “to stand with the finest in the Country”, one on Long Island and one upstate. The SUNY Trustees' 1960 Master Plan expanded the Centers to four, with one to be built at Stony Brook on land donated by Ward Melville, a shoe magnate. The other three were to be in Albany, Buffalo, and Binghamton on expanded existing campuses. Ground was broken for the Stony Brook campus in 1960 with Governor Rockefeller turning the first spade.

It was into this situation that T. Alexander (Alec) Pond arrived at Stony Brook in the Fall of 1962 as the new Chairman of the Physics Department replacing Leonard Eisenbud, the original Chairman. Leonard had led the recruiting of a high-quality faculty, mostly theorists, and had received approval to establish a PhD program starting Fall 1962, but wanted to step down. Alec was the ideal person to take over. He had great ambition for the Department and the University and was developing his plans well before his arrival. He looked for funding opportunities within the State and federal system and, with Department approval, had proposals ready to go. He knew it was important to initiate projects large enough to be noticed in the academic community. The Physics Department proposed to study the structure of the atomic nucleus and create the Stony Brook Nuclear Structure Laboratory (NSL), which would have a tandem Van de Graaff accelerator, and appropriate researchers to exploit its capability. Proposals were submitted to both SUNY and the NSF, which was already supporting labs at several Universities. Strong support for the proposals came in the form of a letter from Maurice Goldhaber, the Director of nearby Brookhaven National Laboratory (BNL) pointing out that the energy range complemented the energies of existing and future facilities at BNL, facilitating collaborations.

The proposal for purchase of the tandem and construction of its building was submitted to SUNY officials in December 1962 and won approval from Harry Porter, the SUNY Provost, in mid-February. The State legislature finally appropriated funds, in April 1964, for a standard EN tandem and a 25,000-square-foot laboratory building. When a subsequent grant from NSF “for half the estimated cost of the building” was received in October 1964, Alec persuaded the Stony Brook leadership to use the NSF funds to acquire the newly developed model FN (King) tandem, which proved to be far superior to the EN.

With the funds in hand the Physics Department had its high-profile facility but no experimental nuclear physicists (and only one theorist) to develop and utilize the Nuclear Structure Laboratory. A search was initiated for the Laboratory Director and, at the urging of Jim Raz, the only nuclear theorist, with whom I had worked at Argonne National Laboratory, I agreed to be a candidate.

Stony Brook
In the late fall of 1964 I visited Stony Brook for two days at the invitation of Alec Pond. In addition to presenting a Colloquium on our research at Argonne I met with most of the Physics faculty as well as the Dean of Arts and Sciences, Stanley Ross. In our discussions Alec exhibited great enthusiasm for the (yet unannounced) aspirations of Stony Brook. Among these was that John Toll, the dynamic Chairman of Physics at The University of Maryland, was expected to become Stony Brook’s President in the fall of 1965, finalizing a multiyear search. Alec also expressed confidence that Nobelist C.N. (Frank) Yang would leave The Institute for Advanced Study to hold an Einstein Professorship at Stony Brook and establish an Institute for Theoretical Physics.

One discussion regarding the Laboratory was over equipment funds which would be immediately available. New York State recognizes that a new building must be suitably equipped and lists are prepared and budgeted as construction proceeds. Alec and his colleagues had done well in “equipping” the existing new Physics Building and a large portion had been allocated to the nuclear program. This would provide the means to start preparing for experiments as accelerator acquisition and building design and construction proceeded.

Stony Brook concluded its search by late 1964 and I was offered the position of Professor and Director of the Nuclear Structure Laboratory (NSL). The decision to move was very difficult. We were very happy at Argonne and in the Chicago suburbs. The years at Argonne had been extremely productive and personally enjoyable. The Physics Division at Argonne was almost family - a marvelous group of friends and collaborators. However the opportunity to be part of the creation of a possibly great University proved decisive and I accepted the position and prepared to start at Stony Brook at the start of the Fall 1965 semester.

Alec’s “predictions”, and more, started to become reality even before I arrived. The announcement that John Toll would become Stony Brook’s President was made in the early Spring of 1965, followed by the appointment of H. Bentley Glass, a very distinguished biologist, as Academic Vice President. A four-page article in “Science” (July 30, 1965) summarized Stony Brook’s recent accomplishments, including a discussion of the NSL and the likelihood of Yang’s appointment, which was formally announced in November. Articles in “Physics Today”, “Time”, and “Newsweek” soon followed and it was clear that Stony Brook had arrived. It was no longer necessary to start discussions by describing the University.

Meanwhile there was much to be done. The Laboratory building had to be sited and designed; the contract with High Voltage Engineering (HVEC) had to be negotiated and finalized and NSL faculty and staff had to be recruited. All of these had to be addressed simultaneously and immediately. It was agreed that I would take some of my accumulated vacation time at Argonne to be a consultant to Stony Brook during part of the period before my arrival. For help with the building and accelerator I was fortunate to recruit Karl Eklund who had worked with Allan Bromley in setting up the Wright Nuclear Structure Laboratory at Yale. His work at Yale was essentially finished and Allan recommended him highly.

Funds for SUNY buildings and equipment are provided through the State University Construction Fund (SUCF) which normally chooses and negotiates with providers. Fortunately, Karl and I were allowed in these cases to work directly with HVEC and the architects on behalf of the Fund. A grant from NSF for part of the building costs enabled us to purchase a larger model FN (King) tandem and before discussions with HVEC I visited several FN installations to obtain information on any problems and suggested improvements. Two improvements prompted by these visits were the choice of pure sulfur hexafluoride insulating gas with liquid storage and a new high intensity Helium ion source. The resulting contract with HVEC also included their supervision of the tandem installation for which funds were provided.

The architect chosen by SUCF, Smith, Haines Lundberg & Wheeler, had done much of the early Stony Brook campus, and had also done much work for AT&T. They produced rather dull buildings with good engineering, the latter of which was very helpful for us. Once suggestions to locate the Lab at the edge of the campus because of radiation were rejected (shielding would suffice), a site was chosen. The Lab would be a separate building adjacent to Physics and positioned to expand into part of a new Physics building in the future. The space arrangement was to be very simple: large shielded areas for the tandem and experimental areas, and a single large area combining accelerator control, data stations, and evolving spaces such as a library/seminar area, computer stations and grad student desks. This proved to be a singular asset for the Laboratory. Instead of dispersing to their offices elsewhere, students, postdocs and staff all tended to stay in the Lab, leading to a collegial atmosphere for the exchange of ideas.

The offer from Stony Brook included two junior faculty positions and four support positions to be filled before the opening of the NSL. As soon as I accepted I contacted many friends and colleagues in the Nuclear Physics community for help in locating the best possible faculty candidates. Outstanding among the early applicants was Dave Fossan from the small but excellent Lockheed group using the Stanford tandem. Dave had been a postdoc at Copenhagen before Lockheed and was strongly recommended by colleagues at both laboratories. Dave and I visited Stony Brook together right after the Spring 1965 APS meeting and it became clear that he was the perfect person for us. We were able to offer Dave an Assistant Professorship starting in Fall 1965 and, fortunately for us, he promptly accepted. Dave was the backbone of our Laboratory from his arrival until his untimely death in 2003. He mentored by far the most students and his skill and enthusiasm for Physics inspired all of us.

After considering a number of other candidates for the second position we chose Bob Weinberg, a former student of Leon Lidofsky at Columbia and a NATO postdoc at Harwell UK. He was very highly recommended by the faculty at Columbia and by George Morrison with whom he was working at Harwell. His expertise in the use of computers to analyze reaction data matched our needs well and he accepted our offer to start in the fall of 1966.

The architect produced an excellent building design which met all of our requirements very well. Unfortunately, the estimated cost was about double the amount budgeted by the Construction Fund. Their people had treated the Lab building as if it were classroom space with no consideration for the needs of a Nuclear Laboratory. My response was to contact colleagues who were involved in recent or current Laboratory construction regarding their building costs. All responses confirmed the accuracy of the Smith-Haines estimate. This, coupled with the potential embarrassment of an accelerator arriving and no place to put it, convinced the Fund to revise the budget to an amount which met our needs after a few “luxuries” were dropped out of the proposed design. Even after this agreement I never felt really comfortable until a bulldozer arrived to destroy the beautiful small woods which was to be the Lab location.

Getting Going
The contract with HVEC called for assembly and testing of the accelerator at the factory before disassembly and shipping to Stony Brook. This was slowly proceeding with minor difficulties through much of 1966 and early 1967, anticipating delivery in late 1967. Meantime Alec Pond had created the position of “Director of the Physical Laboratories” in the Department office and moved Karl Eklund into that position. While searching for a replacement Associate Director for NSL I received a note from Georges Temmer at Florida State on the possible availability of Tony Bastin, who had recently left their lab for England (and had been replaced) but now was hoping to return to accelerator technology. I immediately contacted Tony and was able to convince him to join us as Associate Lab Director starting January 1968. We were also able to augment our technical staff with Gene Schultz, an experienced accelerator technician who had just worked on installation of an FN Tandem at Argonne and was looking for a change. His experience there was a great asset that helped our upcoming assembly go very smoothly.

Tony and Gene now occupied two of the State-funded support positions and it was necessary to fill the rest. I decided to devote one of the remaining positions to the making of targets (usually thin films) for our future experiments. Traditionally, making targets for their own experiments was part of the “learning experience” for graduate students, but as experiments became more demanding, target development was becoming a field for professionals. By about 1970 there were “target makers”, mostly in national laboratories, and I felt our lab should have one. The man we recruited, Dan Riel, worked hard to acquire the specialized skills required and we were able to provide him with the equipment necessary for a target laboratory. Target making is an art and Dan became one of its masters. In May 1972, 16 target makers (all but four Canadian) met in Montreal to discuss the art and share experiences. Dan was one of the attendees and suggested that the group consider becoming a more formal organization. This was followed by a meeting in October 1973 at Stony Brook at which the “Nuclear Target Development Association” (which still exists) decided to incorporate, and with formal incorporation in 1975 Dan was elected President. Dan left us in 1979 for a job at JLab in Virginia and we were very fortunate to immediately recruit Andre Lipski to replace him. He had been the target maker at the Rutgers Nuclear Lab, which had just lost its funding, so he was ready to continue and expand our target preparation work. Stony Brook became a major target source for many other labs where the skills were not available. The targets were usually free or “at cost”; any payment was by barter. Sadly, now that the NSL is no longer active, Andre is no longer making targets but performing other tasks for the Department.

Meanwhile Dave Fossan and I were recruiting graduate students and arranging for the equipment needed to initiate our experimental program as soon as the tandem became available. Dave was able to start some experiments at Brookhaven Lab (BNL) and spent the summer of 1966 there. I was at BNL one month and finished up some Argonne work the other. Back at Stony Brook we submitted our first proposal for operating support from the National Science Foundation (NSF), the only federal agency funding new University nuclear physics initiatives. It was for $19,555 to support grad students and some faculty during the summer of 1967 and was funded. I should say at this point that Bill Rodney, who did an excellent job running the Nuclear Physics desk at NSF for many years, was supportive of Stony Brook from the beginning and did his best for us. It was NSF policy to favor installations which they had financed so our State financed program had to prove itself outstanding among university nuclear laboratories.

As plans for the Laboratory proceeded it became clear that a program involving only three faculty could not take full advantage of the capabilities and promise of the NSL. Fortunately, I became aware that Peter Paul, an outstanding young visitor at Stanford, would be leaving there for another position. I was able to convince the Physics Department of our needs and that Peter represented an unusual opportunity, and we were able to invite him to join us. Fortunately, Peter welcomed the excitement of building a new program and accepted our offer to start in January of 1968. He has been a driving force in the Laboratory and the University right from his arrival.

Delivery and Assembly
At HVEC construction of the Tandem proceeded with the usual minor setbacks and successes. Factory tests were completed in late 1967 and delivery scheduled for early Spring 1968. Meanwhile, after finally obtaining adequate funds, ground was broken for the building, with completion just in time for acceptance of the accelerator. The high voltage portion of the accelerator system is contained inside a pressure tank 44 ft. long and 13 ft. in diameter. To get the tank into the building one end of the accelerator vault had to be left open (to be closed later) and a ramp prepared to move the tank down into the vault. The tank was to be shipped by rail to western Connecticut and by oversize truck from there to Stony Brook. It was too large to be handled by the Long Island Rail Road.

The building was completed just in time and the tank was due to arrive on a beautiful spring day in early April of 1968. The arrival was not without excitement and humor. On entry to the campus at the North entrance the truck carrying the tank got “hung up” at the crest, as its length-to-clearance ratio was too large for the slope. After much wiggling and scraping it finally made it to the slope into our building (which also had to be reduced). The tank was painted with a chrome yellow primer and as it moved slowly into the building a group of students cheerfully serenaded the procedure with renditions of “We all live in a yellow submarine”.

Once the tank was in place assembly of the tandem and associated experimental equipment proceeded at a rapid pace. At the accelerator Gene Schultz took charge of the work inside the tank and Tony Bastin the ion source, analyzing magnet, gas handling system, and other externals. Students (both graduate and undergrad) and faculty worked feverishly to set up beam lines and experimental stations, run cables, and set up electronics to have experiments ready when the tandem passed acceptance specifications.

This was accomplished by mid August and the first of hundreds of papers reporting results from experiments in the Stony Brook Nuclear Structure Laboratory (NSL) appeared in the Physical Review in early 1969.

Nuclear Theory
A top experimental group should be complemented by a similar group of top nuclear theorists. The opportunity to form such a group came when Frank Yang arrived to head the Institute of Theoretical Physics. Shortly after his arrival in early 1966 I had lunch with Frank and asked for his help in establishing nuclear theory at Stony Brook. His immediate response was “Who is the best nuclear theorist in the world?” After some discussion it was decided that Gerry Brown, at that time a Professor at Princeton and Nordita, was the best theorist for Stony Brook. I knew Gerry from when we were both briefly at The University of Minnesota and from his frequent visits to Argonne and, of course, Frank knew him from their both being in Princeton. Frank contacted Gerry and, as Gerry has said, they went for “a walk in the woods” and discussed a possible move to join Frank’s Institute and establish a nuclear theory group at Stony Brook.

Gerry and his family visited on a beautiful spring weekend, getting a good first impression in spite of the “construction site” Gerry has referred to. That initiated negotiations, mostly informal, as Gerry returned to Nordita (on leave from Princeton) “before thinking more about future plans”.

There followed a correspondence between Frank and Gerry which included a tentative offer of a Professorship in ITP and other considerations which were important to Gerry. One significant source of confusion occurred when Frank’s letter containing the offer inadvertently went by surface mail, taking about six weeks to reach Gerry. Meanwhile Frank persisted although Gerry, partly out of concern about what he called “the Vietnam business” was reluctant to give up his foothold at Nordita. Gerry proposed an arrangement in which he would establish a group at Stony Brook while alternating his presence between Copenhagen and Long Island. This would not have been possible at Princeton but Frank and our President, John Toll quickly agreed and an offer was sent to Gerry (this time by air).

Gerry indicated he would probably accept but delayed his reply until the Nordita Board approved the arrangement. On receiving approval in late April 1967, Gerry accepted the offer to come to Stony Brook in the fall of 1968, asking us to postpone any announcement until he had returned to Princeton in the fall of 1967. Along with Gerry, in the fall of 1968, Tom Kuo and Andy Jackson came as tenure-track faculty and Akito Arima joined the group as a Professor. Thus began the Stony Brook Nuclear Theory Group, which continues to evolve and excel. The opportunity to establish this “Institute” was a major factor in Gerry’s decision to come to Stony Brook.

Normalcy
I will not attempt to describe the many experiments conducted in the following decades of normal operation. Initially groups formed around interests as they developed. Dave Fossan concentrated on gamma ray spectroscopy; Peter Paul’s main interest was in Giant Resonances; Bob Weinberg and I studied charged particle reactions. We also had to develop external support for operation of the NSL. As mentioned above, Bill Rodney at NSF was supportive and we received a grant which enabled us to support a number of graduate students and add our first postdoc, Nelson Cue, a student of Dave Bodansky from the University of Washington.

Of course we faculty were all teaching during the academic year. In particular Bob Weinberg found great success and satisfaction teaching one of the large introductory courses. He also became heavily involved in University affairs as an “ombudsman” during the student unrest here, as on many campuses, during the late ’60s. For Bob this resulted in reduced interest in the NSL, although he continued to contribute to our experimental buildup. Finally Bob decided that his future in Physics was to be in innovative teaching. He decided he would no longer be doing research in the NSL and requested a tenure decision based mostly on his teaching and University service. I told the Department that, while Bob had been a fine colleague, we would need a replacement for the research program in the NSL. Bob was not offered tenure and left in 1969 for a faculty position at Temple University where he has had a very successful teaching career. Meanwhile we were fortunate in immediately recruiting Bob McGrath, then a postdoc at Berkeley, as his replacement. He has been a perfect match, with a fine career in our Lab, mentoring a large number of very successful students. Later, as activity in the NSL slowed, he gradually moved into University administration, becoming Provost in 2000.

The Lab continued to produce a number of significant experimental results. Support from NSF picked up and we were able to consider adding more faculty. One direction I felt we should go was the study of “hyperfine interactions”, the interaction of the nucleus with its atomic environment. Stan Hanna, a friend from Argonne who was now at Stanford, told me of his student Gene Sprouse, who had done some beautiful work on the accelerator-induced Mössbauer effect and other work on hyperfine interactions. I was able to convince the Department that adding him could benefit a variety of groups and in 1970 we were able to recruit him. Gene has become a world leader in the field. He and his students have performed a series of remarkable and difficult experiments, testifying to his deft leadership.

With Gene’s arrival the Lab reached its (more or less) equilibrium level: five faculty, four or five postdocs, twelve to fifteen graduate students, and selected undergrads. I will not attempt here to describe the many and varied experiments performed over the early years as our program grew and flourished. Support from NSF grew appropriately and a number of remarkable PhD students came through the Lab and have gone on to distinguished careers. During this period Nuclear Structure Physics began to move to studies of reactions induced by accelerated heavy ions for which our tandem was an ideal tool.

Two notable events occurred during the mid 1970’s. Now that Stony Brook was recognized as a major center for nuclear studies we were able to expand and recruit an outstanding young experimenter, Peter Braun-Munzinger, from the Max Planck Institute for Nuclear Physics, Heidelberg. In the NSL Peter initiated a series of novel experiments using beams of heavy ions, greatly expanding the lab’s output of exciting results. He later led the Department’s efforts as our priorities moved into the study of relativistic heavy ion interactions at BNL. His leadership in that transition kept Stony Brook at the frontiers of nuclear studies. In these efforts he was joined by Johanna Stachel, who had joined us as a Noether fellow and moved into a faculty position.

The other major development was the 1975 move of the Physics Department into a new and very large Physics Building. In siting the NSL we had anticipated the new building, which was positioned so its basement extended to the NSL and provided a complete new target room which could receive beams from the Tandem. This gave us the ability to set up experiments in one target room while the beam was in the other and also proved crucial in the future expansion of the NSL described below.

The Linac
During this period experiments in Nuclear Structure Physics moved toward use of beams of heavy ions (C-12 and heavier), for which tandem Van de Graaffs are excellent tools. However, higher beam energies are needed if one wishes to study reactions with a variety of beams. Argonne National Lab was exploring the use of superconducting resonators in a linear accelerator booster injected by their tandem. Meanwhile a select NAS committee chaired by Gerhard Friedlander of BNL had recommended construction of two boosters at University Labs. Led by Peter Paul’s great enthusiasm we all agreed that the possibility of proposing to build a booster linac was a great opportunity for our laboratory.

When we expressed our enthusiasm to Bill Rodney at NSF he suggested joining with the applied superconductivity group at Caltech who, with NSF funding, were developing a suitable prototype superconducting resonator which would not be used at Caltech. Stony Brook could be given the task of making the Caltech resonators into an accelerator. This seemed a marvelous opportunity for us and discussions were quickly initiated, resulting in an informal collaboration agreement in May 1975. The Caltech group would produce and improve resonators and Stony Brook would test their performance with beams of particles. NSF would provide additional funds to support these developments. As part of this collaboration Gene Sprouse joined Peter in heading this effort and spent a semester at Caltech working on the resonators and developing an elegant method for resonator fine tuning. In a little over one year the work at Caltech produced a 150 MHz resonator of lead-plated copper (lead is a superconductor at liquid helium temperatures) ready for testing with beam at Stony Brook.

Meanwhile, with the early work looking so promising, Peter and Gene produced a massive proposal for a full booster linac based on the Caltech resonators. The booster was to have forty liquid-helium-cooled 150 MHz resonators in 12 modular cryostats to reach energies of 10 MeV/A for A up to over 100. Features of the proposal were a NY State contribution of one million dollars and the ability to fit the entire accelerator complex into the existing building, avoiding any new construction. The detailed beam dynamics were provided by Ernest Courant, one of the world’s leading accelerator theorists, who was in our Institute of Theoretical Physics. The proposal was submitted to NSF July 7, 1977 with Peter and Gene as Co-Principal Investigators.

Meanwhile other university labs were also considering the possibility of major upgrades. In particular the nuclear group at Stanford proposed a booster drawing from the experience at SLAC with Niobium superconducting resonators. This resulted in a direct competition between their proposal and ours for NSF support. In response to this NSF set up a committee of accelerator experts to evaluate the two proposals. The panel was chaired by Robert Behringer from Yale and included Lowell Bollinger who was directing superconducting booster development at Argonne National Laboratory which was already well along. The panel made extensive visits to both Stanford and Stony Brook and were to advise which, if any, proposal to fund.

Tests of the Caltech 150 MHz resonator cited above went very well and, in anticipation of our proposal being chosen, a meeting including Bill Rodney was held at Caltech to determine responsibilities. A full four-resonator module was ready to be sent to Stony Brook for testing with beam. For the final accelerator construction Caltech would fabricate the resonators and controllers and send them to Stony Brook for plating and assembly. A budget was worked out which was in rough agreement with the proposal. Work toward the accelerator would continue as rapidly as possible awaiting the recommendation by the panel.

In early January 1979 members of the two competing groups met with the panel at NSF offices in Washington DC to reach a final decision. Stony Brook emerged as the winner and we were off and running. On advice of the panel the proposed budget was increased and NSF funds were to be available in July. The New York State equipment funds were available immediately and could be used for “long range” items. One of these was the 400 kV ion source table required for suitable beam injection into the tandem. Specifications had already been sent out for bid and on January 9, 1979 a purchase order contract was issued by the State Office of General Services to General Ionex Corp. (GIC) for a “Heavy Ion Injection System”, a 400 kV table and all associated instrumentation. GIC was the commercial supplier of the ion sources we had been using and had come up with what appeared to be a good design. All components on the table were to be controlled from the accelerator control table through fiber optic telemetry. The remaining State funds were designated for an extensive upgrading of the tandem.

Now came the task of creating an accelerator. A prototype four-resonator module was successfully tested in March 1979 and everything looked good. The team that Peter and Gene put together was truly remarkable, ranging from experienced and aspiring accelerator physicists to a number of graduate and undergraduate students. Prominent among these were Ilan Ben-Zvi, on leave from the Weizmann Institute, who joined us from the Stanford team, and Mike Brennan, a new postdoc from Rutgers who had decided to do accelerator physics. Both of them are now leaders in the accelerator programs at Brookhaven National Lab. Important visitors were Bala Kurup and Raj Pillay from the Tata Institute in Mumbai and Chen Chia-erh from Beijing University, who later became President of that University. The resonator development at Caltech was led by Jean Delayen, who later led superconducting resonator development at the Thomas Jefferson Lab in Virginia. A very special contribution was the system for computer control of the whole accelerator system, which was developed by Al Scholdorf, a graduate student, John Hasstedt, our computer specialist, and Chuck Pancake, the director of the department electronics shop. The controls linked a state-of-the-art system of small computers to provide user-friendly control of the entire accelerator system. The development was part of Al Scholdorf’s PhD thesis.

By mid 1982 construction was well along. Gene Sprouse was handling the very difficult job of the cryogenic system, John Noe, our Associate Lab Director, was supervising the tandem upgrades and assembly of the accelerator modules was proceeding well. It seemed sure that we would have an operating tandem-linac system within a year. An operating Ion Source Table was delivered by GIC and beams became available for testing the linac modules as they were installed. The ion source and tandem injection beam optics were operated without the expected telemetry which GIC was having trouble completing.

With the end in sight and with support from NSF we announced an International Conference on nuclear physics with heavy ions at energies below 20 MeV/A to be held at Stony Brook April 14-16, 1983 to celebrate the dedication of our LINAC. This provided a target for us to obtain full energy performance of the tandem-linac system and on March 17 a 280 MeV S-32 beam was produced on target using all twelve of the accelerator modules. This called for the bottle of Champagne I had put in the lab refrigerator to help in the celebration.

The Dedication/Conference was a great festive occasion. The keynote speaker at the Dedication was Edward Knapp, the Director of NSF, who emphasized the linac as an outstanding example of NSF support of University laboratories. Peter and Gene also spoke, along with Jim Mercereau, the head of the Caltech group. April 14 was proclaimed “LINAC DAY” by our County Legislature. In the next two days over 200 physicists from as far away as Japan and Australia presented the most recent work from their institutions, 25 papers in all. The Proceedings, edited by Peter Braun-Munzinger, were published by Harwood Academic Publishers. Most important, the addition of the superconducting linac booster, the second in the world and the first at a University, placed Stony Brook among the leading centers of nuclear structure research. Peter and Gene had done a magnificent job in creating this new facility. It was a singular moment for them and for Nuclear Physics at Stony Brook.

The successful operation of both the Stony Brook and Argonne superconducting linacs demonstrated that this new technology could be utilized at a variety of levels. Within a year from our dedication a number of laboratories were considering proposals and one, Florida State, was funded by NSF. Future resonator construction was greatly enhanced by the development, by Mike Brennan and Ilan Ben-Zvi at Stony Brook, of an improved resonator design, the Quarter Wave Resonator (QWR), which was sturdier and easier to fabricate than the Caltech and Argonne Split Loop Resonators (SLR). The lead-plated QWR was chosen by The University of Washington for a DOE-funded booster and the QWR became the design of choice for major superconducting linacs such as ALPI at INFN Legnaro, Italy and the Facility for Rare Isotope Beams (FRIB) being built at Michigan State University.

This seemed to me a good time to step down as Laboratory Director after 18 exciting years. The group decided to rotate the job of Director among the senior faculty in three-year terms and Gene Sprouse was selected to be the first. Peter Paul became Chairman of the Physics Department. I decided to take a long-postponed sabbatical at Argonne in the Spring of 1984, working with a group who were using their linac to study few-nucleon transfer in reactions induced by Ni beams. It was to be a very productive visit which set the stage for the research I would do on returning to Stony Brook.

Soon after the dedication the linac was in regular use for experiments with a remarkably fast learning curve. While construction had been proceeding those not closely involved were preparing the target area for future experiments and beam lines were instrumented and ready. From May through the rest of 1983 a wide variety of experimental runs were producing good results; we were back in business with new capabilities. This was interrupted by the discovery, during a routine tank opening in January, of cracks on two sections of the glass column supporting the entire tandem structure. This was serious, and after careful study by us and HVEC it was determined that those sections of the column must be replaced, a very big job. The complete rebuilding was accomplished with great support from HVEC over a six month period and operation for experiments was resumed in July 1984.

Those six months provided an opportunity to complete several major improvement projects which did not require a tandem beam. Among these was the completion of the linac beam pre- and re-buncher arrangements in their final form. The re-buncher was positioned in the target area and a new “double-drift” pre-buncher was developed and installed. Development of the final linac control system was also completed including a computer-assisted linac phasing system which used the +45 degree beam line for energy analysis. This provided a systematic method for setting up selected linac beams, greatly increasing the efficiency of the process.

Substantial improvements were also made to the 400 kV ion source table. These included a more intense ion source and (finally) installation of a CAMAC fiber-optic control system. This enabled the final closing of the contract with General Ionex. Other projects were rebuilding of the Laddertron charging chain and the designing and installation of a radiation-protection system for the linac and target areas.

Operation with the rebuilt tandem and the complete linac resumed with great enthusiasm. The experimental stations evolved with regular improvements and new and extended efforts developed. In addition to the increase in beam energy, the linac beam arrives at the target in sub-nanosecond bunches at chosen intervals, about 100 nanoseconds in normal operation. With rebunching these can be as short as 100 picoseconds or less, and we achieved a record 12 ps FWHM for a 140 MeV C-12 beam. The availability of these pulsed beams opened up new experimental opportunities.

Our ability to perform experiments involving charged particles was greatly enhanced by the installation in the summer of 1985 of a very large (2.4 m diameter) scattering chamber designed by Bob McGrath. Its size provided space for the use of a variety of detector arrays and long paths for time-of-flight experiments. Its appearance and designer gave it the lab nickname of Big Mac.

As linac operating experience developed during the next few years, avenues for improvements were identified. Small ones were routinely implemented, providing modest gains in operational effectiveness. Others were more challenging and required long range solutions. Most important of these problems was a mechanical instability which developed in the low beta resonators seriously limiting the power level at which they could be phase locked. The ultimate solution for this was to replace all of the low beta split loop resonators with quarter wave resonators following the design developed by Mike Brennan and Ilan Ben-Zvi (see discussion above).

As earlier I will not try to describe the many and varied experiments performed but will say a few words about the work of each of the varied research groups in the decades following initial linac operation.

Dave Fossan’s group were able to set up an array of six large Compton-suppressed Germanium gamma detectors and 14 small BGO multiplicity detectors to continue his spectroscopic studies in a wide variety of nuclei. Dave had been one of the designers of Gammasphere, a large spherical array of Ge detectors and when it began to circulate among National Labs he extended many of his experiments to its use.

Peter Paul and his group extended their giant resonance studies to resonances built on excited nuclei and to a series of experiments using the GDR as a time reference in studies of fission dynamics. He developed an array of parallel-plate gas avalanche counters to measure fission fragment distributions in coincidence with the GDR. He also developed a series of electron-positron detectors to study internal pair decays in nuclei, particularly to search for Giant Monopole transitions, which can determine the compressibility of nuclear matter.

After a spectacular success in measuring magnetic moments of fission isomers, Gene Sprouse embarked on an ambitious and very successful program of laser spectroscopy of nuclei produced in reactions induced by beams from the linac. An innovative “recoil into gas” method was developed and isotope shifts and hyperfine interactions were measured for several systems. Gene and his students then turned to studies of Francium, the heaviest of the alkali elements, which is unstable and not available in nature but can be produced by beams from our linac. Francium is particularly attractive as a laboratory to explore for parity non-conserving (PNC) effects, which test the Standard Model and become more probable for heavier nuclei. In collaboration with Luis Orozco from our Atomic Physics group Gene’s group were able to capture Francium atoms in a magneto-optical trap and perform a series of laser spectroscopic studies - a major series of accomplishments over a number of years. In the end our linac beams could not produce enough Francium for PNC studies and the experiment has now moved to TRIUMF, a large nuclear physics lab in Vancouver BC, where Luis (now at the University of Maryland) and several of Gene’s students and postdocs are leading a collaboration.

With the availability of the “Big Mac” Bob McGrath and his group, in collaboration with John Alexander from Chemistry, embarked on a series of experiments to study nuclear properties at high temperature, the formation and decay of heavy composite nuclei formed in fusion-evaporation reactions. They set up an electrostatic deflector to separate the evaporation residues from beam and used arrays of small detectors and position sensitive large detectors to study their decay. Analysis of correlations between detectors gave information on the source size and energy spectra gave a measure of the effective “temperature” of the emitting nucleus. These results provided tests of various statistical models being developed to treat highly excited nuclei.

When I returned from my sabbatical at Argonne I decided to initiate a program to study the influence of few-nucleon transfer on heavy ion reactions at energies near the threshold for a reaction, the “Coulomb Barrier”. Measurements at several labs had observed larger fusion cross sections than predicted by barrier penetration models, so along with two graduate students and one postdoc, we set out to measure transfer strength near the barrier. To identify the transfer products we measured total energy, energy loss and mass in counters one meter from our target, with velocity obtained from time-of-flight using the linac bunching as our time reference. Using the Big Mac we were able to set up an array of ten detectors separated by ten-degree intervals to get an almost complete angular distribution in one run.
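To illustrate the time-of-flight mass identification just described (a rough sketch with illustrative numbers, not the actual run parameters): for a non-relativistic ion of kinetic energy $E$ traversing a flight path $d$ in time $t$,

$$ v = \frac{d}{t}, \qquad m = \frac{2E}{v^{2}} = \frac{2\,E\,t^{2}}{d^{2}} . $$

For example, an ion with $E = 100$ MeV arriving about 25 ns after its beam bunch over a 1 m flight path has $v \approx 4\times10^{7}$ m/s and hence $m \approx 12$ u, identifying it as a carbon-like fragment; combining the total energy with the energy loss then distinguishes ions of similar mass by their nuclear charge.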

In the late nineteen eighties a new approach to nuclear studies was emerging; the study of heavy ion reactions at very high energies - relativistic heavy ion collisions (RHI). The Bevalac at LBL and the AGS at Brookhaven were modified to produce beams of heavy ions and experiments were being planned. Peter Braun-Munzinger and Johanna Stachel left our lab to set up a group to work at BNL. They quickly became leaders as these studies showed promise to become a major avenue of future nuclear physics research. As particle physicists realized earlier, relativistic energy experiments are best performed in a collider mode and a proposal was developed to build a Relativistic Heavy Ion Collider (RHIC) at BNL. In 1989 the NSF/DOE Nuclear Science Advisory Committee (NSAC), chaired by Peter Paul, designated RHIC the highest priority for new construction and urged prompt funding. With the decision to build RHIC at BNL the next new direction in nuclear physics was established. All of us agreed that this was a marvelous opportunity for Stony Brook nuclear physics. The NSL would continue its very successful program but new investments, both in theory and experiment, should be in RHI physics. Peter and Johanna were among the leaders in the experiments at the AGS and in 1990 they were able to add Tom Hemmick as an assistant professor in their group. Tom has had great success in RHI physics and also in teaching our first year courses.

As RHIC approached reality potential user groups deluged BNL with proposals for its utilization. BNL management urged groups with similar interests to merge and eventually two large detector systems and two smaller efforts were approved for construction. Peter, Johanna and Tom, along with Michael Marx from our particle physics group, took a major role in building and using PHENIX, one of the large systems. In 1995 Peter and Johanna both accepted offers of top Professorships in Germany, Johanna at Heidelberg and Peter at Darmstadt and GSI. We were soon able to establish new leadership by recruiting Barbara Jacak from Los Alamos who was already one of the PHENIX leaders. Barbara was a wonderful colleague and, with Barbara as leader, the Stony Brook group became one of the top contributors to the successes of PHENIX which continue to this day. Unfortunately for us, Barbara was lured away from us by Berkeley in 2010 but the success of the group continues. (see below)

This left NSL with the “core” faculty who had been together since 1970. In the fall of 1994 Bob McGrath accepted a part time appointment as an Associate Provost which did not diminish his research program but in 1996 a new Provost asked him to be Vice Provost, a full time job. Bob managed to graduate two more students but effectively left the Lab, becoming our Provost from 2000 until his retirement in 2010.

Meanwhile important things were happening at BNL. In the summer of 1997 the Department of Energy decided to terminate the contract with Associated Universities Inc. (AUI), which had managed BNL since its inception in 1947, and requested proposals for taking over the job. Bob suggested to the Provost and President that this was a great opportunity for Stony Brook and was given the job of coordinating the proposal preparation. The result of hard work by Bob and many colleagues was the formation of Brookhaven Science Associates (BSA), an alliance between Stony Brook and Battelle (an experienced lab management company) including representation from six elite Eastern Universities. BSA won the award and took over the management of BNL on January 1, 1998. Jack Marburger, our former President, became Director of BNL and Peter Paul became his Deputy Director for Science. At Stony Brook Bob McGrath became Vice President for Brookhaven Affairs and, when our Provost moved on in 1999, acting Provost and then, in 2000, Provost.

In 1998 I turned 70. My small program had produced some good results in “near barrier” physics and, more important, two PhD students who obtained good postdoc positions. I decided that I should not commit to any more graduate students, so I told our Chairman I would retire if a nuclear physicist were added to replace me and I could continue to teach some of our first year courses with a modest compensation; at that time new faculty could not be added without a retirement. The next year our Chair, Janos Kirz, received approval from the Dean and I retired effective Dec. 31, 1999. My replacement was Axel Drees, one of the current leaders in our RHI program and, as of this writing (2020), Department Chairman.

Even with the depletion of faculty, research progress in NSL continued to be high thanks to Dave Fossan’s extremely active spectroscopy group and the continuing successes in Gene Sprouse’s trapping and studying of Francium (the world’s only Francium lab). Dave would form collaborations for approved experiments at Gammasphere and obtain preliminary data here with his smaller array. Armed with this information the Gammasphere experiments usually went well and their productivity was remarkable. In 2002 we were also able to add Norbert Pietralla, an outstanding young PhD from Cologne by way of Yale. His interests in Nuclear Spectroscopy complemented Dave’s nicely and he quickly attracted graduate students and produced good results.

A singular moment in the history of Stony Brook Physics and the NSL occurred in the spring of 2000 when the Stony Brook Physics Department hosted a “Reunion”, inviting all former students, postdocs and faculty for a weekend of physics and remembrance. The response for our lab was very good. Almost a third of the people we were able to contact were able to come and many others sent extensive regrets. On Saturday afternoon there were breakout sessions for short talks and general discussion. Ours was, as always, in the middle of the control room, and was very lively. It was especially gratifying to see different generations of our students sharing their memories and telling of their varied current undertakings. The gathering reminded all of us that the greatest contribution of our Lab to science was not our experiments but our 62 PhD students, many of whom have had distinguished careers, and the many postdocs and visitors who passed through the NSL.

The 90’s had been exciting years for the NSL but a great loss for Nuclear Physics, and for all of us, occurred when Dave Fossan suffered a massive fatal heart attack in the summer of 2003. Dave was the first faculty member I brought here and was a marvelous personal friend and colleague for all those years. We all were deeply saddened. He had become a world leader in nuclear spectroscopy and was by far the most productive member of our group (21 PhD students, 17 postdocs and 260 publications). His presence is still felt in nuclear physics with the successes of his students and the Gammasphere detector.

Dave’s passing was a huge loss for Nuclear Structure Physics and, of course, for the NSL. Gene and Norbert continued to make good use of beams from the linac for several years and completed the thesis work for five more students, but choices had to be made. Norbert’s choice became obvious; he was offered the Professorship in Nuclear Physics at the University of Darmstadt, one of the most distinguished in Germany. That was an offer one does not refuse and he left us in 2005. Gene completed beautiful Atomic Physics studies of Francium and some other unstable systems and the linac produced its last beams in November 2006, which was close to the end of our last NSF grant. Almost immediately Gene was offered the position of Editor in Chief for the American Physical Society, one of the three leadership positions in APS. Gene accepted this new challenge and did a magnificent job for eight years before returning to Stony Brook.

As word of the closing down of our linac spread we were contacted by Shengyun Zhu, who had been a visitor in the NSL working with Gene, suggesting that our linac might become a booster for the tandem at the China Institute of Atomic Energy, where he had a leadership position. Discussions followed and it was finally agreed that China would “purchase” the entire linac for a sum representing its market value. In 2009 a small army of technicians and workers came to the NSL and everything associated with the linac was loaded into shipping containers bound for Beijing. Recent word from Zhu suggests it is almost installed in their laboratory.

The tandem was used for a few years for experiments in our grad/senior laboratory but as of this writing it is “operative” but suffering from lack of SF6 and funds for even minor repairs. Meanwhile Nuclear Physics at Stony Brook is thriving. The PHENIX detector at RHIC continues to produce groundbreaking results and the Stony Brook group is one of its leaders. Before she left us in 2010 Barbara Jacak had been “spokesman” for the PHENIX collaboration, and in 2004 Abhay Deshpande was added to the group as a BNL-RIKEN fellow. He is a leader in “RHIC SPIN” studies in which polarized proton and deuteron beams are used to study the quark contribution to the spin of the nucleon.

The Stony Brook Nuclear group is also looking farther into the future. The study of nuclear structure is evolving into nucleon structure, the study of the behavior of the quark content of the proton and neutron, best done with high energy electrons as probes. Toward this end the 2015 NSAC Long Range Plan (chaired by Don Geesaman, a student of Bob McGrath) endorsed the creation of an Electron Ion Collider (EIC). Designs are being developed and it is hoped that an EIC will be the next major investment in Nuclear Science. A Center for Frontiers in Nuclear Science (CFNS) has been created in collaboration with BNL with Abhay Deshpande as Director. The center combines both theory and experimental work and will concentrate on the physics and development of an EIC. As I complete this “History”, word has come from DOE that BNL has been chosen as the site for the EIC, assuring that Stony Brook’s leadership in Nuclear Science will continue.

Reflections on the Centenary of Lord Rayleigh's Death

By John Meurig Thomas and Edward A Davis, Department of Materials Science and Metallurgy, University of Cambridge

Introduction
Even when judged by the highest standards of intellectual achievement, the legacy of the Third Baron Rayleigh (John William Strutt 1842-1919) ranks among the very highest in the entire realm of the physical sciences. The list of eponymous phenomena and equations associated with him (Figure 1) reflects his prodigious versatility and pre-eminence as both a theoretician and experimentalist. His enduring influence on a vast range of present-day scientific and technological endeavours requires no elaboration. Suffice it to recall that Rayleigh waves are of vital importance in seismology, his mathematical procedures are still used by acoustic engineers and quantum theorists, the Rayleigh criterion limits the performance of optical instruments, and it is appropriate to recall also that he was the first to explain theoretically why light from the sky is blue and polarized, and sunsets are red.
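As a reminder of the physics behind that last point (a standard textbook statement of Rayleigh's result rather than a quotation from his papers): for scatterers much smaller than the wavelength of light, the scattered intensity varies as the inverse fourth power of the wavelength,

$$ I_{\text{scattered}} \propto \frac{1}{\lambda^{4}} , $$

so blue light at roughly 450 nm is scattered about $(700/450)^{4} \approx 6$ times more strongly than red light at roughly 700 nm, which is why scattered skylight is blue (and polarized at right angles to the sun) while directly transmitted sunlight at sunset is reddened.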

Figure 1

Figure 1: Compilation of effects, laws, mathematical methods, etc. named after Lord Rayleigh

Rayleigh died in the summer of 1919 at his Baronial home in Terling, Essex (Figure 2), where he carried out most of his work [1]. To commemorate the centenary of his death, the present (sixth) Lord Rayleigh and his family held an ‘Open House’ on May 11th 2019, which was attended by many eminent physicists, engineers, chemists, geologists and historians of science, who were invited to tour Rayleigh’s still extant laboratories (Figure 3), his workshops and his study, to see the visitors’ book as well as many other items, including some of his extensive correspondence in the family archive. Amongst the large collection of letters, equipment, experimental rigs, photographs and specimens are items donated to him by his contemporaries, for example William Crookes (1832-1919), George Stokes (1819-1903), Joseph Larmor (1857-1942), J.J. Thomson (1856-1940), Lord Kelvin (1824-1907), Henry Rowland (1848-1901), and Nikola Tesla (1856-1943).

Figure 2

Figure 2: Terling Place, home of the Strutt family. The laboratories created by the Third Baron Rayleigh are on two floors in the nearest wing of the house.

Figure 3

Figure 3: View of one of the rooms of the Third Baron Rayleigh's laboratories.

The visitors’ book alone makes fascinating reading. Apart from the entry for May 1904 (Figure 4), when Rayleigh brought Kelvin and Rutherford together (as a result of their conflicting views of the age of the Earth [2]), there is the July 1908 list of guests that includes Lord Rayleigh’s brother-in-law, the Prime Minister A.J. Balfour, and the future Prime Minister H. H. Asquith with his wife and the 25-year-old lady with whom he later became besotted, Venetia Stanley [3]. Eminent scientists whose names also appear in the visitors’ book include Albert Michelson, Edward Morley, Alfred Mayer, Hermann von Helmholtz, Alfred Cornu, Oliver Lodge, J J Thomson, William Thomson, G G Stokes, John Tyndall and William Crookes.

Figure 4

Figure 4: Extract from the visitors' book at Terling Place, May 1904, confirming the meeting between Lord Kelvin, Rutherford and Professor Schuster.

Rayleigh’s network of contacts was enormous, partly because he held so many important posts: Cavendish Professor (1879-1884) and later Chancellor of the University of Cambridge, President of The Royal Society (1905-1908), progenitor of the National Physical Laboratory, Teddington, and Scientific Advisor to Trinity House, quite apart from his closeness to another Prime Minister, Lord Salisbury, who was the uncle of his wife, Evelyn Balfour. Furthermore, his correspondence with scientists abroad, including Mendeleev, Nernst and Helmholtz, was considerable.

A great deal has been published about the life and legacy of Lord Rayleigh [4-8], but new information came to light at the Open House which has not hitherto been reported, and which adds to the esteem in which Rayleigh is held.

One of us (EAD) elaborated in an illustrated presentation on the breakdown of Rayleigh’s publications (Figure 5), all 445 of them [9], and the other (JMT) traced Rayleigh’s persistence, which began with the letter he published in Nature in 1892 when he was preoccupied with determining the density of nitrogen (Figure 6). This was to lead to the discovery of argon, which earned Rayleigh the 1904 Nobel Prize in Physics – the first to be awarded to a British physicist. His discovery of argon led, in turn, to the monumental paper that he published with Sir William Ramsay (the Nobel Prize winner in Chemistry in 1904) entitled Argon, a New Constituent of the Atmosphere [10].

Figure 5

Figure 5: Breakdown of the Third Baron Rayleigh's publications into topics. 

An Outline of His Early Career
J. W. Strutt was Senior Wrangler – the top graduate in mathematics at Cambridge – in 1865. He had been tutored, like other scientific luminaries in Cambridge such as J.J. Thomson, J.A. Larmor, G. Darwin and A.N. Whitehead, by one of the most gifted tutors, E.J. Routh, who, in a 33-year career, coached over 600 pupils and guided 28 of them to become Senior Wranglers.

Writing almost 60 years after Strutt’s graduation, Sir James Jeans disclosed that “… there still lingers in Cambridge a tradition as to the lucidity and literary finish of his answers in the examination …”. The fine sense of literary style that Strutt exhibited, even under pressure in the examinations, never deserted him. Every paper he wrote, even those dealing with the most abstruse subjects, is a model of clarity and simplicity and conveys the impression of having been written with effortless ease – see, for example, the opening sentence of his eighth paper, “On the Light from the Sky” (Figure 6).

Figure 6

Figure 6: Handwritten manuscript of Rayleigh's classic paper explaining why the sky is blue.

Shortly after he graduated, the young Strutt struck up a friendship with the taciturn George Gabriel Stokes (1819-1903), another Senior Wrangler, who had become Lucasian Professor, and who, like another earlier occupant of that illustrious chair, Isaac Newton, was, in addition to being a redoubtable mathematician, also an accomplished experimentalist [11]. It was this latter quality of Stokes’, as much as anything else, that was responsible for Strutt’s affinity towards him.

In 1866, shortly after he was appointed a Fellow of Trinity College, Strutt felt dissatisfied that there was no course available in the University of Cambridge at that time that would enable him to acquire the experimental skills desirable for someone eager to pursue natural philosophy. Whilst Stokes gave him some encouragement and guidance in this regard, Strutt resorted to seeking help from G.D. Liveing, the then Professor of Chemistry at Cambridge. He received from this source, for six months or so, a course on analytical chemistry, which greatly assisted him in developing experimental skills. Indeed, he was to become an adroit experimentalist in addition to blossoming as a fiendishly able theoretician.

Rayleigh the Physicist
Rayleigh wrote the majority of volume 1 of his classic text ‘The Theory of Sound’ in 1872 while cruising on the Nile – a trip that had been prescribed as a recuperative measure following a serious attack of rheumatic fever. It contains detailed descriptions of the mechanics of vibrating systems, while acoustic wave propagation is treated in volume 2. Both texts are still in use today by acousticians. Such is the generality of Rayleigh’s mathematical descriptions of vibrations and waves that many of them could be applied to, or analogies found in, other fields – for example hydrodynamics, optics and electromagnetism – as indeed he demonstrated over much of his scientific life.

From an early age Strutt (as he then was) made himself an accomplished photographer, constructing pin-hole cameras and developing his own photographs. He used his expertise in this regard to fabricate diffraction gratings by contact printing and produced optical zone plates with light focusing properties. He established formulae for the resolving power of gratings, determined the criterion for the ultimate optical resolution achievable by optical instruments and contributed to an early understanding of the behaviour of spectroscopes. It was during this phase of his work that Rayleigh struck up correspondence with H.A. Rowland in Baltimore.

An extremely productive period of his life was his tenure as Cavendish Professor from 1879 to 1885. His work there on electrical standards, in which he was assisted by his sister-in-law, Eleanor Sidgwick (née Balfour), provided new values for the units of resistance, current and voltage, which stood for many decades. No doubt these studies also influenced his later support for the establishment of the National Physical Laboratory in Teddington. His other interests were not neglected during this period; for example, he developed the Rayleigh disc for measuring the intensity of sound and introduced the concept of surface waves in solids, which are important in seismology, as already mentioned, but also in today’s electronic circuits as filters and delay lines. In total, 60 of his papers were published in this period, during which he also introduced practical classes as an important part of undergraduate teaching.

On his return to Terling, Rayleigh continued to work on a wide variety of problems, including wave propagation in periodic systems, stimulated by his observations of the colours of thin films, beetles, butterfly wings, etc. Magnetic phenomena, electrodynamics, viscosity, elasticity, wave-guide theory, optics and, of course, his first love, acoustics, all figure in his numerous later publications. The story of the discovery of argon and his work in parallel with Ramsay to isolate the gas has been well documented [4,10,12]. What has recently come to light [13] is a ‘pli cacheté’ (sealed packet) deposited by Ramsay with the French Académie des Sciences in July 1894 and first opened as recently as 2004. Deposition of such packets (sealed with wax) allowed an individual to lay claim to a discovery without actually publishing it. A translation (by Alwyn Davies [14]) of the document signed by Ramsay is as follows:

Guided by the experiments of Lord Rayleigh on the density of nitrogen, which show that the nitrogen obtained from the air by means of copper metal is more dense by one part in 231 than the nitrogen obtained by chemical means from ammonia, or from nitrous acid, I have studied the residue after a large quantity of nitrogen from air has been absorbed by means of magnesium at red heat. …….

….. I hope to decide soon if the gas is a modification of nitrogen, perhaps N3, or a new element, by carrying out the absorption by magnesium, making each experiment more quantitative by volume, and by measuring the ammonia or a combination of X with hydrogen, I can decide what is its approximate atomic weight. If it happens that I have a new gaseous element, I intend to name it Eikazote, with the symbol Ez.

To end this note, it should be recognized that it is to Lord Rayleigh, with whom I have been in communication, that this discovery is credited; he has shown the place to explore; I found the means to isolate this novel gas.

In fact, both Rayleigh and Ramsay worked independently to isolate the gas and both succeeded within a month of the packet being deposited (Figure 7). A joint paper and two Nobel Prizes followed, not to mention a whole new column in the Periodic Table!

Figure 7

Figure 7: The opening sentences of the letters in which Rayleigh and Ramsay report their simultaneous isolation of argon (from reference [12]).

This section of the paper cannot be concluded without mention of the Rayleigh-Jeans law. Rayleigh’s work embraced essentially all aspects of classical physics known at the time. His theoretical treatment of the spectrum of black-body radiation accounted exceedingly well for the spectral distribution of energies at long wavelengths but, being classical, failed at short wavelengths – the so-called ultraviolet catastrophe. Although he was still very active at the age of 58 when Planck produced his formula and quantum concepts came to the fore to account for this evident failure, Rayleigh was rather reluctant to embrace these ‘new’ ideas and preferred to leave them to future researchers.
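For reference, in modern notation the classical law that bears his name (jointly with Jeans) gives, for the spectral radiance per unit wavelength, B_λ(T) = 2ck_BT/λ⁴, where c is the speed of light and k_B is Boltzmann’s constant; the expression grows without bound as λ → 0. Planck’s formula, B_λ(T) = (2hc²/λ⁵)/[exp(hc/λk_BT) − 1], removes this divergence and reduces to the Rayleigh-Jeans form in the long-wavelength limit hc ≪ λk_BT.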

Rayleigh the Chemist
It is noteworthy that Rayleigh assembled in the 1870s one of the best-equipped chemical-physical laboratories in England. The only one that surpassed it in the range of chemicals and equipment had been established in the early 1800s, first by Humphry Davy and then enlarged by Michael Faraday, at the Royal Institution in London [15]. Rayleigh’s laboratory, even to this day, is impressive. It is equipped with an unusually large collection of chemicals – far greater than was typical in a British university chemical laboratory fifty years ago.

Rayleigh’s skills as a practical chemist are reflected in the many papers he wrote, having carried out the experimental work himself. Thus, in his 1902 paper on the ‘Distillation of Binary Mixtures’ [16] he describes his detailed work on mixtures of alcohol, ammonia, and several acids with water, preceded by a theoretical outline of the phenomenon of distillation of pure liquids and liquids that form homogeneous mixtures. Other papers, such as his work on the electrochemical equivalent of silver [17] and periodic precipitation of crystals [18] (one of his last publications), testify to his skills as a laboratory-oriented chemist. Moreover, in connection with his discovery of argon – which followed from his finding that ‘pure’ nitrogen derived from the atmosphere was always slightly heavier than nitrogen he prepared from chemical sources, such as ammonia and urea – he devised several ingenious chemical methods of eliminating traces of oxygen from atmospheric nitrogen [12,19].

Rayleigh and Dimensional Analysis
Rayleigh’s landmark paper [20], in which he established that the intensity of scattered light varies inversely as the fourth power of the wavelength, used the so-called method of dimensional analysis. He also derived this relationship, and the facts pertaining to the polarization of scattered light, by rigorous mathematical analysis. Throughout his life he commended the virtue of the method of dimensional analysis. Indeed, forty-five years after his landmark paper he wrote an article in Nature, under the title ‘The Principle of Similitude’ [21], which began with the following remark: “I have often been impressed by the scanty attention paid even by original workers in physics to the great principle of similitude”. In this article, Rayleigh bemoans the fact that the method was not used as extensively as it should be. To press home the point, he recites fifteen examples of how information of vital importance can be readily retrieved by dimensional analysis. Here are just three (a worked illustration of the first is given after the list):

The velocity of propagation of periodic waves on the surface of deep water is as the square root of the wavelength,

The resolving power of an object-glass, measured by the reciprocal of the angle with which it can deal, is directly as the diameter and inversely as the wavelength of the light,

In a gaseous medium, of which the particles repel one another with a force inversely as the n’th power of the distance, the viscosity is as the (n + 3)/(2n – 2) power of the absolute temperature. Thus, if n = 5, the viscosity is proportional to the temperature.
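To illustrate the method with the first of these examples: if one assumes that the speed v of waves on deep water can depend only on the acceleration due to gravity g and the wavelength λ, then writing v ∝ g^a λ^b and matching dimensions – [v] = L T⁻¹, [g] = L T⁻², [λ] = L – forces a = b = 1/2, so that v ∝ √(gλ): the velocity of propagation varies as the square root of the wavelength, with no detailed hydrodynamics required. The dimensionless prefactor (1/√(2π) for the phase velocity) is, of course, beyond the reach of the method.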

Concluding Remarks
The impression left on all the visitors to the ‘Open House’ celebration was that they were exposed afresh not only to a supreme theorist and experimentalist, but also to an individual who had an unquenchable curiosity for an enormous range of natural phenomena and their scientific interpretation. The soaring of birds [22], the sailing flight of the albatross [23] and the fascination of iridescent crystals [24] were topics that engaged his interest in the 1900s, and one of his last papers published in the year of his death was entitled On the optical character of some brilliant animal colours [25]. Few practitioners and historians of science in its broadest aspects would dispute that Lord Rayleigh was one of the most remarkable physical scientists of his generation.

Notes and References

  1. Contrary to the information given in Wikipedia, Rayleigh did not spend his whole scientific life in the University of Cambridge. He was the second occupant of the Cavendish Chair of Experimental Physics (after Maxwell died) but stipulated that he would hold this post for only five years (1879-1884). He was later engaged as Professor of Natural Philosophy at the Royal Institution, where, in 1898, he became joint Director, with Sir James Dewar, of the Davy-Faraday Research Laboratory. For the rest of his life he worked in his own, privately-built laboratory in converted stables at the family seat at Terling.
  2. Rutherford gave a Friday Evening Discourse at the Royal Institution in early 1904, during the course of which he declared that the age of the Earth, based on his work on radioactivity, could be as great as a thousand million years, in blatant contradiction to the estimates made by Lord Kelvin, who was in the audience. After the Discourse, Rayleigh told Kelvin that Rutherford was likely to be nearer the truth than Kelvin. He said he would arrange for the two to meet at Terling; the visitors’ book records exactly when they met and who else attended.
  3. H.H. Asquith, the 54-year-old Prime Minister of the day, became intensely associated with the 25-year-old Venetia Stanley, who was present, along with Asquith and his wife and A.J. Balfour, at Terling Place in July 1908. The definitive biography of Asquith (by Roy Jenkins, published in 1964) disclosed that Asquith was besotted by Miss Stanley and wrote frequently to her, even while he chaired cabinet meetings and even when his country was at war.
  4. Life of John William Strutt, Third Baron Rayleigh, by Robert John Strutt (Fourth Baron Rayleigh), University of Wisconsin Press, 1968.
  5. Lord Rayleigh: The Man and His Work, by Robert Bruce Lindsay, Pergamon Press, 1966.
  6. J.H. Howard, Research of the Third Baron Rayleigh, Proc. of the Royal Institution, 60 (1988) p73.
  7. Maxwell’s Enduring Legacy; A Scientific History of the Cavendish Laboratory by Malcolm Longair, Cambridge University Press 2016, p69.
  8. E.A. Davis, Lord Rayleigh: His Works and Laboratories, in The Roots of Physics in Europe, Proceedings of the first European Symposium on the History of Physics, ed. P.M. Schuster, Living Edition Science, Austria 2010, p35.
  9. All but a few are single-author papers which Rayleigh submitted in long hand.
  10. Lord Rayleigh and Professor William Ramsay, Argon, a New Constituent of the Atmosphere, Phil. Trans. Roy. Soc. A (1895) pp187-241.
  11. It was Stokes who discovered the phenomenon of fluorescence, which he demonstrated at a Friday Evening Discourse at the Royal Institution in 1853.
  12. J.M. Thomas, Argon and the Non-Inert Pair, Angewandte Chemie International Edition, 43 (2004) p6418.
  13. Yves Jeannin, Comptes Rendus Chimie, 8 (2009), pp3-7. The assertion of priority while avoiding publication is discussed in this case and more generally by Michael Jewess, Royal Society of Chemistry Historical Group Newsletter, No 71 (Winter 2017), 22-27 (hard copy), 12-14 (online).
  14. Alwyn Davies, Royal Society of Chemistry Historical Group Newsletter, No. 70 (Summer 2016), pp33-37 (hard copy), pp18-20 (online).
  15. The Clarendon Laboratory was established in Oxford in 1868, the Cavendish Laboratory in Cambridge in 1872. Thanks to the efforts of Prince Albert, the Royal College of Chemistry in London (by 1907 indirectly subsumed into Imperial College) was established in 1845, and Prince Albert facilitated the appointment of one of Germany’s foremost experimentalists, A. W. Hofmann, as the first Professor of Chemistry in the College. The Royal Institution had an impressive laboratory from 1801 onward.
  16. Lord Rayleigh, On the Distillation of Binary Mixtures, Phil. Mag. IV (1902) p521.
  17. Lord Rayleigh, The Electrochemical Equivalent of Silver, Nature, LXI (1897) p292.
  18. Lord Rayleigh, Periodic precipitations of crystals, Phil. Mag. XXXVIII (1919) p738.
  19. In a letter to Nature XLVI (1892) p512 he wrote: ‘I am much puzzled by some results on the density of nitrogen, and I shall be obliged if any of your chemical readers can offer suggestions as to the cause. According to the methods of preparation, I obtain two quite distinct values. The relative difference, amounting to about one part in 1000, is small in itself, but it lies entirely outside the errors of experiment, and can only be attributed to a variation of the character of the gas.’
  20. Lord Rayleigh, On the Light from the Sky, its Polarization and Colour, Phil. Mag. XLI (1871) pp107-120, 274-279.
  21. Lord Rayleigh, The Principle of Similitude, Nature, XCV (1915) p66.
  22. Lord Rayleigh, Soaring of Birds, Nature, XXVII (1883) p534.
  23. Lord Rayleigh, The Sailing Flight of the Albatross, Nature, XI (1889) p14.
  24. Lord Rayleigh, On some Iridescent Films, Phil. Mag. XXIV (1912) p751.
  25. Lord Rayleigh, On the Optical Character of Some Brilliant Animal Colours, Phil. Mag. XXXVII (1919) p98.

On Future High-Energy Colliders

By Gian Francesco Giudice, CERN, Theoretical Physics Department, Geneva, Switzerland

While the Large Hadron Collider (LHC) physics program is still in full swing, the preparations for the European Strategy for Particle Physics and the recent release of the Future Circular Collider Conceptual Design Report bring attention to the future of high-energy physics. How do results from the LHC impact the future of particle physics? Why are new high-energy colliders needed?

Where do we stand after the LHC discovery of the Higgs boson?
Undoubtedly, the highlight of the LHC programme so far has been the discovery of the Higgs boson, which was announced at CERN on 4 July 2012. The result has had a profound impact on particle physics, establishing new fundamental questions and opening a new experimental programme aimed at exploring the nature of the newly found particle. The revolutionary aspect of this discovery can be understood by comparing it with the previous discovery of a new elementary particle — that of the top quark, which took place at Fermilab in 1995. The situation at the time was completely different. The properties of the top quark were exactly what was needed to complete satisfactorily the existing theoretical framework, and the new particle fell into place like the missing piece of a jigsaw puzzle. The discovery of the Higgs boson, by contrast, leaves us wondering about many questions still left unanswered. The top quark was the culmination of a discovery process; the Higgs boson appears to be the starting point of an exploration process.

The Higgs boson is not simply another particle to be added to the list of the fundamental building blocks of matter, but it is the manifestation of a completely new physical phenomenon, whose dynamical origin still remains poorly understood and largely mysterious in the context of the Standard Model. This is the phenomenon of spontaneous symmetry breaking — something that has been seen at play in superconductivity and other condensed-matter systems, but never before in the realm of elementary particles. The Higgs boson is something truly different from anything we have found so far in the particle world. Paradoxically, the intricacy of the Higgs boson lies in its simplicity. Unlike any other known elementary particle, the Higgs boson has no spin, no electric charge, and it superficially appears as a simple bundle of mass. This simplicity is at the origin of a puzzle that has mystified theoretical physicists for decades.

For every massive particle, it is always possible to conceive of an observer who catches up with the particle, thus seeing it at rest. When viewed at rest, a particle cannot distinguish any preferred direction, as a consequence of the rotational invariance of empty space. Therefore, a massive particle must be allowed to spin along all possible directions and must contain all possible polarisation states. This is not necessarily true for a massless particle, which, according to special relativity, must travel at the speed of light, so that no physical observer can ever measure it at rest. This is why a massless particle can be found spinning clockwise with respect to the direction of motion while lacking the state corresponding to anti-clockwise spin. In other words, for spinning particles there is a clear distinction between being massless and being massive, because the two cases correspond to a different number of quantum states. This is not true for a spinless particle like the Higgs boson, for which the massive and massless cases are described by exactly the same number of quantum states.

The conundrum shows up when quantum mechanics enters the game. One of the rules of the uncanny world of quantum mechanics is that everything not forbidden is compulsory. This rule is due to the ubiquitous quantum fluctuations which blur every physical system, rendering special and non-generic configurations highly unlikely. For spinning particles, the mismatch of states between the massless and massive cases prevents quantum fluctuations from pumping energy into a massless particle and turning it into a massive one. But nothing prevents them from doing so in the case of a spinless particle, and indeed theoretical calculations predict that the Higgs boson should gain an enormous mass in any realistic quantum system. The discovery of a relatively light Higgs boson clashes with the logic of quantum mechanics, leaving theoretical physicists bewildered. This problem is known as ‘Higgs naturalness.’

Another important consequence of the structural simplicity of the Higgs boson is that it provides a natural bridge between the known particle world — the one described by the Standard Model — and other possible hidden sectors, related perhaps to dark matter or to other still-undiscovered particles. This special property is related, once again, to the spinless nature of the Higgs boson. Within the Standard Model, the Higgs boson is the only particle that can form combinations that preserve all spacetime symmetries and interact with hidden particle sectors in a way that is not suppressed by short-distance scales. This unique property makes the Higgs boson one of the best keys at our disposal to unlock the door towards the mysteries of possible hidden worlds.

One of the most striking results of the LHC exploration was the discovery of a completely new type of force. Before the discovery of the Higgs boson, we knew of four fundamental forces governing natural phenomena (strong, weak, electromagnetic forces and gravity). All of them were successfully deduced from a single conceptual principle — called ‘gauge symmetry.’ In the meantime, the LHC has discovered the existence of new forces in nature. These forces control how the Higgs boson interacts with quarks and leptons and, so far, the LHC has identified their effect on top and bottom quarks and on tau leptons (the so-called ‘third generation’). According to the Standard Model, these new forces are as fundamental as those previously known, but of a different nature — not ‘gauge-like.’ While the Standard Model properly describes these forces, it is silent on the origin of the many free parameters associated with them or on any deeper explanation of their complex structure.

The ‘non-gauge’ nature of the newly discovered forces means that we are confronted with a completely new phenomenon. For instance, the interaction strengths of the new forces are not quantised, unlike all other quantum forces we have probed so far. When compared with the logical purity and conceptual simplicity of the ‘gauge forces,’ the new forces associated with the Higgs boson look provisional. Their structure raises conceptual puzzles, which are not necessarily inconsistencies of the Standard Model, but possible clues indicating the existence of a deeper theoretical description.

While the Higgs boson is playing a central role in studies at the LHC, it is only one aspect of a very broad scientific programme. The LHC has already produced a wealth of precision measurements on parameters as fundamental as the W and top-quark masses, on important properties of B mesons, and on observables relevant to strong interactions. With these measurements we are acquiring refined knowledge about the Standard Model, which is a necessary prerequisite to advance our understanding of the particle world and to search for new phenomena.

The status of high-energy physics
At the end of the 19th century, physics was in a satisfactory state, able to describe all known phenomena. This was summarised by the slogan: “There is nothing more to discover in physics; all that remains is more and more precise measurements.” Nonetheless, in a famous speech delivered in 1900, Lord Kelvin identified a few ‘clouds’ obscuring the bright sky of 19th century physics. Luckily, some physicists did not ignore those clues: relativity and quantum mechanics followed.

Similarly, today the Standard Model of particle physics, together with the current model of cosmology and a sufficient number of free parameters, is able to describe all known physical phenomena and the large-scale properties of the universe. But, watching attentively, we see some ‘clouds’ on the horizon.

Puzzles in today's particle physics come from structural problems of the Standard Model (the nature of the Higgs boson, Higgs naturalness, the origin of symmetry breaking dynamics, the stability of the Higgs potential, the existence of three generations of matter, the pattern of quark and lepton masses and mixings, the dynamics generating neutrino masses), from embedding the Standard Model into a broader framework (the unification of forces, quantum gravity, the cosmological constant), from attempts to give particle-physics explanations of cosmological properties (the nature and origin of dark matter, dark energy, cosmic baryon asymmetry, inflation).

Just as at the end of the 19th century, one may argue today that the Standard Model is satisfactory and there is nothing more to discover in particle physics, other than making more precise measurements. Nonetheless, clues should not be dismissed because those ‘clouds’ could be symptoms of a more fundamental disease. The real limitation of 19th century physics was describing nature in terms of disconnected theories (mechanics, gravitation, electromagnetism, thermodynamics, optics, etc.) lacking a unified vision. Today, the Standard Model's shortcomings may be indicators of a deep structural limitation of our theoretical description. Without searching, we will never find answers.

Special puzzles for the LHC
Certainly not all the problems listed above can be addressed by the LHC, although studies at present and future colliders are essential to guide research. But out of the many ‘clouds’ that could turn into storms shattering the Standard Model, two of them are playing a special role at the LHC: Higgs naturalness and dark matter. This is because there are good arguments to link these problems to the energy domain explored by the LHC. In other words, these are the places where surprises are most likely to be found at the LHC.

Starting from these clues, in the past decades theorists have elaborated hypotheses to address some of the shortcomings of the Standard Model, proposing new bold ideas such as supersymmetry, technicolour, extra dimensions, Higgs compositeness, and a variety of other speculative theories. So far, the LHC has given no positive indication for the existence of new phenomena, ruling out many versions of the postulated theories.

This result is nothing but the scientific method at work. Experiments act as a ‘natural selection’ process in which some theoretical hypotheses become extinct and others evolve, according to ‘survival of the fittest.’ Although the LHC programme is still at an intermediate phase and it is premature to draw definitive conclusions, it is already clear that the LHC is radically reshaping our perspective on the particle world. The LHC experimental results are forcing theorists to think differently about problems and to search for new solutions. In this situation of renewed theoretical exploration, experimental physics is needed — more than ever — to break new ground.

Measurements versus discoveries
The rugged and twisted path towards scientific knowledge is punctuated not only by discoveries, but also by disproved hypotheses and null results that redirect research. As shown repeatedly by history, the non-discovery of expected results can be as effective as the discovery of unexpected results in igniting momentous paradigm changes.

When 19th century theorists were puzzled about how electromagnetic waves could propagate, they addressed the conundrum using the known framework of Maxwell's theory and added the hypothesis of a new medium permeating space: the aether. The lack of discoveries in the Michelson-Morley experiment ruled out this hypothesis and eventually triggered a much more revolutionary solution: special relativity.

When Le Verrier was puzzled by the discrepancy in the precession of Mercury's perihelion, he turned to Newton's gravity for a solution and predicted the existence of a new planet, named Vulcan. The planet was never discovered and the real solution turned out to be much more revolutionary: general relativity.

These two examples illustrate an identical pattern. When scientists are confronted with a problem, they first look for solutions within the accepted theoretical framework. When experiments declare a non-discovery, then scientists are forced to think differently and this may spark a paradigm change.

The lesson for us could be that the solutions devised so far by particle physicists to deal with the shortcomings of the Standard Model are overly rooted in the currently accepted framework. In spite of their bold appearance, the proposed theories only look like logical extensions of the same principles that govern the Standard Model. The LHC experimental results may push theoretical thinking into the territory of radical changes.

Measurements, and not only new discoveries, can provide the decisive steps for advancements in science and knowledge. A pertinent example is the story of the Large Electron-Positron Collider (LEP). In spite of the lack of any discovery of new particles, LEP transformed particle physics like few other experimental projects before. It established the gauge paradigm as the ruling principle of the Standard Model. It pruned away a variety of alternative hypotheses. It substantiated the existence of only three generations of quarks and leptons (by ‘not discovering’ additional light neutrinos). It brought the question of force unification to a quantitative level. It revolutionised the use of precision measurements to gain knowledge of short-distance phenomena. It is largely because of LEP that today we recognize the formidable power of gauge theories and the conceptual synthesis of the Standard Model.

What can we expect from future high-energy colliders?
Fundamental research beyond the frontiers of knowledge is – by definition – unpredictable. When Galileo perfected the telescope, he could not foretell how many moons were to be discovered around Jupiter. When we propose a new high-energy collider, we cannot predict the number of particles we are going to discover, but we can define the questions we want to address. The value of a scientific project dedicated to the exploration of the unknown does not lie in the number of foreseen new discoveries, but in the knowledge that will be gained from its results.

With the information that we have gathered so far from the LHC, we can already identify some broad research programs that have guaranteed goals of expanding our knowledge. Moreover, we know that only a vigorous program of exploration can address the many fundamental questions that still remain unanswered in particle physics.

The Higgs program
The discovery of the Higgs boson has opened a new compelling experimental program, whose goal is the precise determination of the new particle's properties. This program is a search into unexplored territory because the dynamics behind the Higgs boson has still to be understood and the LHC has so far only scratched the surface of the phenomenon. Exploring the characteristics of the Higgs boson and its associated forces, establishing their nature and understanding their origin are essential objectives of future collider projects. Indeed, all existing proposals for future accelerators have the Higgs program as a primary goal.

The Higgs program is a new frontier for particle physics, whose targets are well defined. The first goal is to measure the Higgs coupling constants (which determine the interaction strengths of all forces involving Higgs bosons) with precision down into the per mille region. Such measurements are sensitive to any substructure possibly present inside the Higgs boson, if the particle is not truly elementary. Future collider projects can measure the inner Higgs structure with an astounding experimental resolution, which can reach distances 100,000 times smaller than the size of a proton. The new Higgs forces have been measured so far only for the third generation of matter (top, bottom, tau). The next challenge is to test the Higgs forces for second-generation particles, especially the charm quark and the muon. Since the origin of the wide difference in mass between particle generations is still a mystery, probing Higgs interactions with second-generation particles is a conceptually new test of the mechanism that feeds mass into elementary particles. Another important test is the measurement of the so-called ‘invisible Higgs decay’, which corresponds to the disintegration of the Higgs boson into particles that cannot be directly revealed by particle detectors at high-energy colliders, such as neutrinos. ‘Invisible Higgs decays’ are especially interesting because they explore possible new types of elusive particles, maybe related to the nature of dark matter. Another target of the Higgs program is the measurement of the Higgs self-interaction, which is a direct test of the new kind of force that is thought to be responsible for generating the non-trivial structure of the Higgs vacuum. Such a measurement will also provide us with the elements to reconstruct theoretically the details of the phase transition that is believed to have occurred in the universe only 10⁻¹¹ seconds after the Big Bang. Finally, another goal is to measure a variety of rare Higgs decays, which are rich with important information. For instance, the Higgs decay into a Z and a photon is an efficient probe of new physics effects occurring at very short distances; Higgs decays into two different leptons (τμ, τe or μe) or into CP-violating final states can probe the nature of symmetries in the particle world.

Interestingly, the different proposals for future high-energy collider projects have complementary roles in the exploration of the Higgs properties. Linear or circular electron-positron colliders can measure the main Higgs interactions with high accuracy and make the first measurements of the as-yet untested interaction with the charm quark. Exploiting the formidable rates of Higgs-boson production, future proton-proton colliders can then pursue these studies further and test rare Higgs decays, including the measurement of the Higgs interaction with muons at the percent level. Moreover, future high-energy proton-proton colliders can measure the Higgs self-interaction with a precision better than ten percent. Measurements at proton-proton colliders are most precise for ratios of Higgs couplings, where the large systematic uncertainties cancel out. For this reason, global studies benefit from combining results from hadron colliders with results from lepton colliders, whose cleaner environment allows for very precise absolute determinations of certain Higgs couplings. For instance, the synergy between different projects is illustrated by the measurement of the Higgs interaction with top quarks, which can reach percent-level precision at proton-proton colliders only thanks to information derived from future electron-positron collider data.

These measurements are not done purely for the sake of precision, but because they provide us with new knowledge about the least understood and most puzzling sector of the Standard Model. From these measurements we will learn about untested fundamental forces of nature, about the dynamics of the space-time vacuum, and about the mechanism responsible for the masses of Standard Model particles. Our knowledge of the electroweak symmetry breaking mechanism will be radically different after a precision Higgs program. Much as LEP transformed our knowledge of the Standard Model gauge sector, future colliders will transform our knowledge of the Higgs sector.

The goals of the Higgs program may sound rather abstract. But, on the contrary, the Higgs boson is an essential element that determines the properties of the ordinary matter we are made of. This can be understood by imagining a hypothetical world in which one could dial the vacuum configuration of the Higgs field and change its intensity with respect to the value we observe in our world. By reducing the intensity of the Higgs field, one would gradually make atoms grow in size. All matter (including ourselves) would puff up, until bound atoms could no longer exist. Were one to rotate the dial in the opposite direction and make the Higgs field more intense, the consequences would be equally dramatic. If the vacuum configuration of the Higgs field were only a factor of five stronger than what we observe in our world, atomic nuclei would not be stable because neutrons would rapidly decay: hydrogen would be the only chemical element that could possibly exist in the universe. This would make for a pretty dull universe without any chemical structure — let alone life. In summary, the universe in which we live depends critically on the existence of the Higgs field and matter would behave very differently if the Higgs boson differed even slightly from what we observe. Understanding the properties of the Higgs boson means understanding the underlying reasons for the observed structure of matter.

The precision program
One of the most important legacies of the LHC concerns the experimental capabilities of hadron colliders. Progress in data analysis and detector performance, combined with advanced theoretical calculations, has led to previously unimaginable precision in measurements at proton-proton colliders. Today the LHC is performing measurements that would have been unthinkable at the time the project was designed. This wealth of experience opens up the possibility of a full program of precision studies at future colliders.

A campaign of precise measurements of all Standard Model observables is essential to test the theoretical framework and to sharpen the tools necessary to look for the small departures between theoretical predictions and experimental data that would indicate the presence of new phenomena. A crucial aspect of the precision program rests on a characteristic feature of quantum mechanics. The result of a high-energy scattering process is sensitive not only to the properties of the particles that participate in the collision, but also to particles with masses much larger than the available initial energy. This counterintuitive result is due to quantum fluctuations, which bring into existence, even if only for a very short time, ‘virtual particles’ whose presence naively seems to violate the requirement of energy conservation. These particles, although too heavy to be directly produced with the available energy of the collider, nonetheless leave an ‘indirect’ trace in the scattering process as a consequence of their ephemeral ‘virtual’ quantum existence. These ‘indirect’ traces can be detected through a combined effort involving painstakingly precise experimental measurements and extremely accurate theoretical calculations of the Standard Model prediction. The quantum effects from virtual particles make precision measurements in high-energy collisions especially interesting because they allow us to probe, in specific cases, new phenomena occurring at distances much smaller than those directly explored by the collider energy.

Another important aspect of ‘indirect’ searches for new phenomena is that ‘virtual’ effects generally grow with the energy of the collision. This means that the higher the collider energy, the more effective indirect searches become. Indirect searches are optimized by the right compromise between energy and precision. Different high-energy collider proposals have different strengths for reaching the best conditions.

As for the Higgs program, electron-positron and proton-proton colliders can complement each other in the precision program. By measuring precisely the properties of known particles (such as the W, the Z, and the top quark) under extreme high-energy conditions, not only do we gain knowledge about those particles, but we also explore new phenomena that could help us to resolve some of the Standard Model's shortcomings. The stupendous number of W and Z bosons and top quarks that can be produced at future colliders provides an unprecedented probe of the detailed properties of the Standard Model and a powerful exploration tool into new phenomena.

In many respects, the Higgs and precision programs are two aspects of the same scientific strategy. According to the Standard Model, the Higgs boson and the longitudinal polarizations of the W and Z bosons are only different states of the same fundamental entity. Therefore, precision Higgs and electroweak measurements are intimately related and complement each other in terms of the information that can be extracted from data.

A particularly interesting aspect of the precision program is the study of a special class of decay processes involving quarks and leptons (the so-called ‘flavor physics’). These processes offer a unique opportunity for new discoveries because in the Standard Model, for accidental reasons related to the structure of the theory, they happen to be very rare or even absolutely forbidden. To a certain extent, the study of these processes investigates, from a different angle, the nature of the same new forces explored by the Higgs program.

The exploration program
Exploration of the unknown is the main driver of fundamental science, and it embodies one of the aspirations that define human civilization. The thirst for pure knowledge is a powerful vehicle for progress, which propagates from basic science into technological and social advancements. Particle physics has always been driven by the spirit of exploration, leading it to push the frontier of knowledge deep into the smallest fragments of spacetime and to unravel the natural phenomena that occur at the most minute distance scales. This exploration was rewarded with amazing insights into the fundamental laws of nature. The same spirit of exploration remains today the primary motivation for any high-energy collider project beyond the LHC. Exploration is the very spirit and essence of research.

Addressing many of the mysteries in particle physics requires a bold step in the exploration of Nature at the smallest possible distances. With present available technology, this can be done only by means of high-energy colliders, which allow for direct observation of the phenomena that occur in the microworld. Breaking the frontier of knowledge is what drives particle physics towards exploration into ever smaller distances with more powerful colliders.

Many of the conceptual tools that led to progress in our understanding of the early evolution of the universe and its large-scale structure came — almost paradoxically — from our study of the microworld. The deep connection existing in nature between inner space and outer space first emerged with the understanding of how stars shine in terms of nuclear reactions and later led to the successful prediction of the cosmic abundance of light chemical elements in terms of nucleosynthesis. A more modern and pertinent example of the link between elementary particles and the cosmos is provided by inflation. According to this theory, a Higgs-like field is the engine driving the early evolution of the universe and its quantum fluctuations are the seeds that created the structures in matter and radiation that we observe in the sky.

Today particle physics and cosmology are inextricably intertwined. More generally, research in particle physics is evolving towards an interplay of different subjects addressing, from different angles, related questions in fundamental physics. In this situation it is futile to expect substantial progress by pushing only a single line of research. Further advancements require a global scientific effort with a diversified experimental program that ranges from particle physics to observational cosmology, astroparticle physics and beyond.

In the context of this broad scientific effort, high-energy colliders remain an indispensable and irreplaceable tool to continue our exploration of the inner workings of the universe. While each experimental technique can contribute to this search from a different and complementary perspective, high-energy colliders remain the best microscopes at our disposal, with a formidable exploration power into the mysteries of matter at short distances. No other instrument or research program can replace high-energy colliders in the search for the fundamental laws governing the universe.

Acknowledgements
I am much indebted to my colleagues Fabiola Gianotti, Michelangelo Mangano, Matthew McCullough, Riccardo Rattazzi, and Gavin Salam for interesting discussions, useful suggestions, and for sharing with me their insight on the subject.

Fifteen Manhattan Project Myths and Misconceptions

By B. Cameron Reed, Department of Physics (Emeritus), Alma College

Over many years of studying the Manhattan Project, I have come across a number of distorted or outright erroneous statements regarding either the physics involved or how various aspects of the Project unfolded. These statements usually turn up in semi-popular treatments of the Project, where the authors are not physicists and likely just reiterated something claimed in another source without bothering to check an authoritative record. In some cases one has to wonder if claims are made to bring notoriety to the claimant or to advance some agenda. As with most myths and misconceptions, I have no idea where most of them started, nor can I cite a specific source for some of them beyond saying that I know I have heard them over the years. Some do contain germs of truth, but have grown into perversions of the facts.

These misconceptions will likely never be stamped out. My motivation in preparing this article is the hope that a physicist who does hear them can try to set the record straight. The items that follow are listed in roughly chronological order. Details on many of these issues can be found in my book The History and Science of the Manhattan Project; several other supporting references are also cited [1]. When I cite a source, it is by no means to imply that this is where something started, but rather where I happened to come across it recently.

I would be pleased to hear from readers who have information on how any of these misconceptions originated or have others they would like to share.

(1) Albert Einstein was “The Father of the Bomb”

The idea that the work of an avowed pacifist lay behind the most destructive weapon in history has a powerful appeal, but it is largely incorrect. Einstein’s most significant involvement in the Project was that he signed a letter to President Roosevelt warning him of the potential dangers of nuclear energy. The letter, however, was written by Leo Szilard and Eugene Wigner, who recruited Einstein on the basis that his name would be recognized by Roosevelt. Einstein actually dispatched four letters to FDR between 1939 and 1945; copies and transcriptions can be found at various sites [2]. I have read that Einstein was consulted on problems involving diffusion at Oak Ridge, but this does not seem to have amounted to much. Nuclear physics was never Einstein’s research area, and the idea of a chain reaction apparently came as a revelation to him when he was approached by Szilard and Wigner. Einstein would surely have been regarded by Manhattan Project commander General Leslie Groves as a dangerous security threat.

Along the same line, E = mc² had little direct connection to the Manhattan Project. To be sure, the energy released in fission arises from a minute loss in mass during that process, but this famous equation was never used to predict fission, was not directly involved in its interpretation (which came through the liquid-drop model), played no role in the experiments which led to the understanding of the roles of different isotopes in the fission process, was not involved in uranium enrichment or plutonium synthesis, and had nothing to do with establishing the values of critical masses or in designing the bombs themselves. The use of fission as a weapon can be treated as a largely empirical affair. Quantum physics played no role at all.


Left: In this 1946 photo, Einstein and Szilard re-enact the preparation of the letter to President Roosevelt. Source: Courtesy Atomic Heritage Foundation

Right: President Roosevelt signs the declaration of war against Japan, December 8, 1941. Source: Wiki Franklin Roosevelt

(2) The energy of fission can make a grain of sand visibly jump*

I analyzed the physics of this claim in a paper published in the December 2018 edition of The Physics Teacher [3]. The idea of a grain of sand being propelled off your desk by the fission of a single uranium nucleus is an appealing image, but the physics doesn’t hold up. This myth was recently repeated in Gerard DeGroot’s book The Bomb: A Life, where he mistakenly attributes it to having been calculated by Lise Meitner and Otto Frisch when they were preparing their paper on the physics of fission [4].

Some numbers: The most common form of sand is silicon dioxide, usually in the form of quartz, which has a density of 2.65 g cm⁻³. A grain of diameter 1 mm will have a mass of about 1.4 milligrams. The energy released in fission averages about 170 million electron-volts (MeV) per event, or about 2.7 × 10⁻¹¹ J. If all of this energy is directed into projecting the grain upwards, then the usual mgh formula for potential energy shows that we can expect to reach a maximum height of about 0.002 millimeters, or a mere 1/250 of the radius of the grain itself. Unless you have super-power eyes, you are unlikely to be able to discern this. The visual acuity of the eye is about one minute of arc. If we optimistically assume half a minute, we can use a standard arc-length calculation to estimate the distance from which we would have to view the jump in order to resolve it. For a jump of 0.002 mm we get about 14 mm, or a little over a half-inch. The near point of the eye (the closest distance at which one can still focus) is about 25 cm, so we are out of luck. If you want an accurate figure to quote, you are safe in saying that the complete fission of a single kilogram of uranium-235 releases as much energy as does exploding some 17 million kg of dynamite.
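The arithmetic is easy to check. A short script along the following lines, a sketch using the round numbers quoted above and assuming a spherical grain and g = 9.8 m s⁻², reproduces the grain mass, the jump height, and the viewing distance required to resolve it.

    # Check of the grain-of-sand estimate, using the values quoted in the text.
    # Assumes a spherical quartz grain 1 mm in diameter and g = 9.8 m/s^2.
    import math

    rho = 2650.0                                # density of quartz, kg/m^3
    diameter = 1.0e-3                           # grain diameter, m
    mass = rho * (4.0 / 3.0) * math.pi * (diameter / 2.0) ** 3   # ~1.4e-6 kg (1.4 mg)

    energy = 170.0e6 * 1.602e-19                # ~170 MeV per fission event, in joules
    g = 9.8                                     # m/s^2
    height = energy / (mass * g)                # all the energy into m*g*h: ~2e-6 m

    half_arcmin = 0.5 * (1.0 / 60.0) * math.pi / 180.0   # optimistic visual acuity, radians
    viewing_distance = height / half_arcmin     # distance at which the jump subtends that angle

    print(f"grain mass       : {mass * 1e6:.2f} mg")              # ~1.39 mg
    print(f"jump height      : {height * 1e3:.4f} mm")            # ~0.0020 mm
    print(f"viewing distance : {viewing_distance * 1e3:.1f} mm")  # ~14 mm, far closer than the ~25 cm near point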

(3) American scientists and government officials were relatively inactive on atomic matters before Pearl Harbor*

The period before Pearl Harbor could be considered a sort of “quiet phase” of the Manhattan Project. The Szilard-Wigner-Einstein letter reached President Roosevelt in October, 1939, and resulted in the establishment of the “Advisory Committee on Uranium” headed by the Director of the National Bureau of Standards, Lyman Briggs. One of the group’s first actions was to contribute funds for early fission experiments; by the time of Pearl Harbor, some $300,000 had been let for contracts for fission and isotope separation research to various universities, industries, government agencies, and private research institutions; details can be found in Chapter 4 of Ref. 1. Of particular importance during this time were three reports prepared by a committee under the chairmanship of Arthur Compton, which examined the possibilities for reactors and bombs. The second of these (June, 1941) related that plutonium had been synthesized and tested for fissility by Glenn Seaborg, and the third, dated November 6, 1941, examined the physics of a putative fission bomb in considerable detail [5]. This latter report was heavily (although unofficially) influenced by a British report. Vannevar Bush had been briefing FDR periodically since inheriting the project, and during an October 9 meeting, the President, clearly recognizing the possible implications, ordered that any considerations of atomic policy were to be restricted to a group comprising himself, the Vice President, the Secretary of War, the Chief of Staff of the Army, and Bush and Conant: the “Top Policy Group”. At the same meeting, the possibility of having the Army take over the project was also discussed. A theme in Bush’s reports was that while he had no information on what might be happening in the way of nuclear research in Germany, this was certainly a concern. Bush briefed FDR on the third Compton report on November 27, about the time the Japanese fleet was setting sail for Pearl Harbor. The President authorized further research and development, and more detailed plans were developed at a meeting between Bush and his scientific advisors in Washington on December 6, the day before the attack. The rest, as they say, is history.


Some of the key figures of wartime research, April, 1940: Left to right: Ernest Lawrence, Arthur Compton, Vannevar Bush, James Conant, Karl Compton, Alfred Loomis. Source: Wiki Image

(4) The headquarters of the Manhattan Project were in Manhattan*

This one is partially true, although many people know little of how this came to be. When control of the uranium project was handed over to the Army in June, 1942, the first officer to whom it was assigned, Colonel James C. Marshall of the Corps of Engineers, set up his headquarters in an office building located at 270 Broadway in New York City, the location of the Corps’ North Atlantic Division. One of the contractors for the project, Stone and Webster Engineering, had offices in the same building, and it was also convenient to Columbia University, where research on uranium enrichment was underway.

Marshall’s assignment was unusual. The Army divided the country into eleven geographical divisions, each under the authority of a Division Engineer. Within these divisions, smaller areas within which individual projects were sited (camps, airfields, ordnance plants, depots, ports, etc.) fell under the authority of “District Engineers”. Marshall’s new “Manhattan Engineer District” had no geographical restrictions; in effect, he had all of the authority of a Division Engineer.

In September, 1942, Marshall was replaced by then-Colonel Leslie R. Groves, although Marshall remained as District Engineer until July, 1943, when Groves eased him out in favor of Marshall’s own deputy, Colonel Kenneth D. Nichols; at the same time, the District headquarters were shifted to Oak Ridge. Groves’ appointment came with promotion to Brigadier General. His personal headquarters was a small suite of offices on the fifth floor of the New War Building at the intersection of Twenty-First Street and Virginia Avenue NW in Washington. This building is now part of the Department of State.

Groves graduated from West Point in November 1918, and also trained at the Army Engineer School, the Command and General Staff School, and the Army War College. His career in the Corps of Engineers was marked by steady advancement, and by 1942 he was responsible for overseeing all Army construction within the United States as well as at off-shore bases. His intimate knowledge of how the War Department and Washington bureaucracies functioned and of contractors who could be depended upon to undertake the design, construction, and operation of large plants and housing projects made him the perfect candidate to oversee Manhattan. In the spring of 1942, one of the projects on his plate was the construction of the Pentagon, which was completed within sixteen months of ground being broken.

While the terms Manhattan Project and Manhattan Engineer District are often used interchangeably, the legal Army term was the latter; “Manhattan Project” only came into general use after the war.

(5) A “suicide squad” manned Enrico Fermi’s CP-1 reactor in case it ran out of control*

General Leslie Groves (1896-1970).

A group of three young physicists were perched atop CP-1 at its startup, armed with jugs of neutron-absorbing cadmium-sulfate solution to dump into the pile in case it threatened to go into an uncontrollable divergent reaction. Richard Rhodes characterizes the group as a suicide squad [6]. A Department of Energy publication on CP-1 lists the group as Harold Lichtenberger, Warren Nyer, and Alvin Graves [7]. In a reminiscence published in 1982, Albert Wattenberg attributed the idea to Samuel Allison, and recalled that several people were upset by it: if an accidental breakage had occurred, the pile would have been ruined [8]. But the term “suicide squad” is literary license; “Chicago pile hit squad” might be more apt. Fermi had designed the pile to be barely critical, and provided for redundant over-control. Cadmium-sheathed wooden rods were used as control rods; inserting any one of them into the pile would bring it below criticality, but several were used (the exact number now seems unknown; Fermi himself later described it as “several”). In addition, two safety rods (known as “zip” rods) and one automatic control rod were also incorporated into the design. During operation, all but one of the rods would be withdrawn from the pile. If neutron detectors signaled too great a level of activity, the vertically-arranged zip rods would be automatically released, accelerated by 100-pound weights. The automatic control rod could be operated manually, but was also normally under the control of a circuit which would drive it into the pile if the level of reactivity rose above a desired level and withdraw it if the intensity fell below that level. During the historic startup, one of the safety rods was tied off to a balcony railing, with Norman Hilberry standing by with an axe in order to cut the rope in case the automatic system failed. According to some sources, the phrase “to scram” a reactor – execute an emergency shutdown – is an acronym for “safety control rod axe man.”

Artist’s conception of the startup of CP-1.

(6) The Clinton Engineer Works at Oak Ridge was using 1/7 of the electricity being generated in the United States*

Robert Oppenheimer (1904-1967), ca. 1944.

This claim seems to have originated in the autobiography of Colonel Nichols [9]. As described in an article published in the Spring 2015 FHP newsletter, this one has an element of numerical truth [10].

Data on electrical facilities at Oak Ridge can be found in Vincent Jones’s meticulously researched book on the Army’s role in the Manhattan Project [11]. By mid-1945, transmission facilities there could provide electrical power up to 310 megawatts (MW), of which 200 MW were for Ernest Lawrence’s electromagnetic separation plant. In August, 1945, electricity used by all facilities totaled about 200 million kilowatt-hours (MkWh). Statistics on national generation can be obtained from back issues of the Statistical Abstracts of the United States [12]. Figures published in the 1949 edition indicate that national generation remained fairly steady between 1943 and 1945 at an average of about 272.8 billion kWh per year. One month’s worth would be about 22,700 MkWh, of which the Oak Ridge consumption cited above would have represented about 0.9%.

Oak Ridge got its power from the Tennessee Valley Authority (TVA), and Nichols’ figure corresponds to roughly 1/7 of that agency’s generating capacity. Details are given in the newsletter article cited above, but the short version is that by mid-1945, TVA capacity stood at about 2,500 MW. The full CEW capacity of 310 MW would represent just over 12% of the latter figure, or about one-eighth.
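As a quick check on these figures, a minimal back-of-the-envelope calculation (in Python; the input numbers are simply those quoted above from Refs. 10-12, and the variable names are my own) reproduces both percentages:

# Rough check of the Oak Ridge electricity figures quoted above.
oak_ridge_aug_1945_kwh = 200e6     # kWh used by all Oak Ridge facilities, August 1945
national_kwh_per_year = 272.8e9    # national generation, 1943-1945 average, kWh/year
national_kwh_per_month = national_kwh_per_year / 12    # ~22,700 million kWh

cew_capacity_mw = 310              # full Clinton Engineer Works capacity, MW
tva_capacity_mw = 2500             # approximate TVA generating capacity, mid-1945, MW

print(f"Oak Ridge share of national generation: {100 * oak_ridge_aug_1945_kwh / national_kwh_per_month:.1f}%")
print(f"CEW share of TVA capacity: {100 * cew_capacity_mw / tva_capacity_mw:.1f}%")

The two printed lines come out to about 0.9% and 12.4%, matching the figures quoted above.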

The Trinity fireball a few seconds after the explosion.

(7) Robert Oppenheimer viewed the Trinity test from Campañia Hill*

This assertion appears in David Schwartz’s otherwise excellent biography of Enrico Fermi, and is included here as an example of how errors can slip into reputable sources [13]. Campañia Hill was located some 20 miles to the northwest of the test site, and served as a viewing location for personnel who were not needed at the control station during the countdown. The Campañia group included Hans Bethe, James Chadwick, Richard Feynman, Ernest Lawrence, Edward Teller, and Robert Serber. Oppenheimer viewed the test from the control station 10,000 yards to the South of ground zero; Groves was at Base Camp at about 17,000 yards.

(8) The light from the Trinity test could have been seen reflected from the Moon

This appears in Richard Rhodes’ The Making of the Atomic Bomb; I analyzed it in some detail in a paper published in 2006, but a refined estimate is given here [14,15]. The issue boils down to the change in astronomical magnitude that would have been involved, which depends in part on the phase of the Moon at the time, so the numbers here are necessarily back-of-the-envelope. The radiant energy at 10,000 yards from the explosion was estimated at about 12,000 Joules per square meter. If this was emitted over a time of one microsecond (probably too short, which will make the numbers overly optimistic), then scaling by the inverse square of the Earth-Moon distance gives a power of ~ 7 Watts per square meter at the Moon. The solar flux at the Moon is essentially the same as that at the Earth, ~ 1400 Watts per square meter. A change of one part in 200 corresponds to about 0.005 magnitudes. A variation of this size is certainly detectable now with a large telescope and modern detectors, but would have been very iffy in 1945. At best, this assertion is a stretch. At the Trinity site, the Moon had set about 4.5 hours before the test. Had an observer on the Moon been looking toward New Mexico at the time, however, the explosion would have momentarily appeared thirty times brighter than Venus.
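The same estimate can be restated numerically; the following Python sketch assumes isotropic emission, the one-microsecond flash time used above, and a mean Earth-Moon distance of 3.84 × 10^8 m:

import math

fluence_10k_yd = 12_000    # radiant energy at 10,000 yards, J/m^2
r0 = 10_000 * 0.9144       # 10,000 yards in meters
d_moon = 3.84e8            # mean Earth-Moon distance, m
flash_time = 1e-6          # assumed emission time, s
solar_flux = 1400          # solar flux at the Moon, W/m^2

fluence_moon = fluence_10k_yd * (r0 / d_moon) ** 2    # inverse-square scaling, J/m^2
flux_moon = fluence_moon / flash_time                 # ~7 W/m^2
fraction = flux_moon / solar_flux                     # ~1/200
delta_mag = 2.5 * math.log10(1 + fraction)            # ~0.005 magnitudes

print(f"flux at Moon ~ {flux_moon:.1f} W/m^2, fraction ~ 1/{1/fraction:.0f}, delta m ~ {delta_mag:.4f}")

This prints a flux of about 6.8 W/m^2, a brightening of about 1/206 of the solar flux, and a magnitude change of about 0.005, in line with the figures above.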


(9) The Enola Gay and Bockscar B-29 bombers had fighter escorts

I have occasionally heard stories of fighter pilots describing with great emotion how they participated in the bombing missions at Hiroshima (Enola Gay) and Nagasaki (Bockscar) by valiantly protecting the bombers against possible Japanese action; that there were escorts is claimed in James Kunetka’s 2015 book on Oppenheimer and Groves [16]. Copies of the mission orders can be found on a Time magazine website, and indicate no fighter escorts; a copy of the Nagasaki order also appears in John Coster-Mullen’s fantastic book on the bombs, as does a field order indicating that no friendly aircraft were to be within 50 miles of any of the targets from four hours prior to six hours after planned strike times [17,18]. Fighters would have been too fragile to withstand the shock waves from the bombs, and General Groves wanted nothing that would draw extra attention to the bombers; the Japanese were used to seeing lone or small formations of bombers on reconnaissance and weather missions. Weather-report bombers had preceded the strike bombers to each possible target, and the bombers were accompanied by observation bombers which dropped instrument packages.

(10) The Little Boy and Fat Man bombs were dropped by parachute

The bombs each weighed about five tons. Parachute drops would not only have been impractical and would have complicated the designs of the bombs, but they would also have been undesirable, giving Japanese anti-aircraft crews time to take aim. However, instrument packages were dropped by parachute, so it is entirely plausible that survivors recalled seeing parachutes.

(11) The crews of the Enola Gay and Bockscar all suffered from cancers, radiation poisoning, sterility, and mental problems after the war

A total of 24 men flew on the Enola Gay and Bockscar; one of these, radar officer Jacob Beser, flew both missions. A few years ago I undertook an extensive online search for obituary data, and was able to find information for all but four. Ages at death ranged from 46 (acute leukemia in 1967) to nearly 94; three survived to over 90. The average age at death was 76. Causes of death did include five cancers (including the leukemia), but this is certainly not out of line with the fact that about 20% of the population overall succumbs to cancers. Other causes included the menu of woes one would expect for an aging population: pneumonia, emphysema, heart failure, heart attacks, strokes, cardiac arrest following prostate surgery, and an automobile accident. Between them, these men fathered at least 50 children (surely an incomplete count), including 10 by Bockscar pilot Charles Sweeney and 5 by Enola Gay Co-Pilot Robert Lewis. Not unlike any group that served in any war, some wrote memoirs of their experiences while others said little to family and friends.

Left: Partial crew of the Enola Gay: Standing (l-r): John Porter (ground maintenance officer), Theodore Van Kirk, Thomas Ferebee, Paul Tibbets, Robert Lewis, Jacob Beser; kneeling (l-r): Joseph Stiborik, George Robert Caron, Richard Nelson, Robert Shumard, Wyatt Duzenbury. Right: Partial Bockscar crew. Standing (l-r): Kermit Beahan, James Van Pelt, Don Albury, Fred Olivi, Charles Sweeney; kneeling (l-r): Edward Buckley, John Kuharek, Ray Gallagher, Albert Dehart, Abe Spitzer. Photos courtesy John Coster-Mullen.

(12) Only a few bombs were available; the Japanese could have kept fighting and not suffered further nuclear attacks

This one is just plain wrong. To be able to enrich enough uranium or synthesize enough plutonium to make a bomb in a short enough time to affect the war – two years, say – required building facilities which, once operating, turned out material continuously. To get a sense of this, we can do no better than to quote from memos from General Groves to Army Chief of Staff General George C. Marshall. On August 10, 1945, the day after the bombing of Nagasaki, Groves informed Marshall that [19]

The next bomb of the implosion type had been scheduled to be ready for delivery on the target on the first good weather after 24 August 1945. We have gained 4 days in manufacture and expect to ship from New Mexico on 12 or 13 August the final components. Providing there are no unforeseen difficulties in manufacture, in transportation to the theatre or after arrival in the theatre, the bomb should be ready for delivery on the first suitable weather after 17 or 18 August.

A couple of weeks earlier, on July 30, Groves had outlined anticipated bomb production figures:

In September, we should have three or four bombs. One of these will be made from 235 material and will have a smaller effectiveness, about two-thirds that of the test type, but by November, we should be able to bring this up to full power. There should be either four or three bombs in October, one of the lesser size. In November there should be at least five bombs and the rate will rise to seven in December and increase decidedly in early 1946. By some time in November, we should have the effectiveness of the 235 implosion type bomb equal to that of the tested plutonium implosion type.

Thus, some 18-20 bombs were expected to be available from September through the end of the year, or about one every six to seven days. This rapid pace testifies to the fact that all fissile-material production plants were reaching full capacity at that time.
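Taking the monthly estimates in Groves’ July 30 memo at face value (September and October as three to four bombs each, November as five, December as seven), a two-line tally reproduces the pacing just quoted; the snippet below is merely an arithmetic restatement of those figures:

# Tally of Groves' July 30, 1945 production estimate, as quoted above.
monthly_estimates = {"September": (3, 4), "October": (3, 4), "November": (5, 5), "December": (7, 7)}
low = sum(lo for lo, hi in monthly_estimates.values())
high = sum(hi for lo, hi in monthly_estimates.values())
days = 30 + 31 + 30 + 31    # September 1 through December 31
print(f"{low}-{high} bombs expected, or one every {days/high:.0f}-{days/low:.0f} days")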

(13) Most of the victims of the bombings died by radiation poisoning

Various estimates of the fraction of victims who died of radiation exposure have been published, but to claim that even the majority of victims perished in this way is an exaggeration. Surveys of the effects of the bombs were carried out by the Manhattan Engineer District, the United States Strategic Bombing Survey, and the Atomic Bomb Casualty Commission [20]. If you are close enough to a nuclear explosion to receive an injurious dose of radiation, you are more likely to have been blasted or burnt to death already. The Manhattan Project’s medical director, Dr. Stafford Warren, estimated that some 7% of deaths resulted primarily from radiation, although some estimates ran as high as 15-20%. This said, radiation effects were unpleasant to say the least: depressed blood counts, loss of hair, bleeding into the skin, inflammations of the mouth and throat, vomiting, diarrhea, and fever. Deaths from radiation began about a week after exposure, peaked in about 3-4 weeks, and ceased by 7-8 weeks. A person who survived but remained continuously in a bombed city for six weeks afterwards could expect to receive a dosage estimated at 6-25 rems (Hiroshima) or 30-110 rems (Nagasaki), with the latter figure referring to a localized area; the usual benchmark for a lethal single-shot dose is ~ 500 rems. The USSBS report states that of women in Hiroshima in various stages of pregnancy who were known to be within 3,000 feet of ground zero, all suffered miscarriages, and some miscarriages and premature births where the infant died shortly after birth were recorded up to 6,500 feet. Two months after the bombing, the city’s total incidence of miscarriages, abortions, and premature births ran to 27%, as opposed to a normal rate of 6%.

(14) President Truman knew little of the Manhattan Project before the bombs were dropped

While Truman had only an inkling of the project before President Roosevelt’s sudden death on April 12, 1945 [the Vice President of the Top Policy Group in item (3) above was Truman’s predecessor, Henry Wallace], he was brought up to speed on the new weapon very quickly. After a brief Cabinet meeting following his swearing-in, Truman was approached by Secretary of War Henry Stimson, who related that he wished to inform the new President “about … a project looking to the development of a new explosive of almost unbelievable destructive power.” The next afternoon, James Byrnes, head of the Office of War Mobilization (and soon to be Truman’s Secretary of State), dramatically told Truman that “we are perfecting an explosive great enough to destroy the whole world. It might well put us in a position to dictate our own terms at the end of the war.”

At noon on April 25, Stimson and Groves briefed Truman on the project. Two days earlier, Groves had submitted to Stimson a background memorandum to be given to the President. Essentially a primer on the entire Manhattan Project, this memorandum, titled “Atomic Fission Bombs,” ran to only 24 double-spaced pages, but managed to cover every aspect of the work from the idea of uranium fission up to the prospects for fusion weapons [21]. The report opened by relating that “Within four months we shall in all probability have completed the most terrible weapon ever known in human history, one bomb of which could destroy a whole city,” and went on to state that “The successful development of the Atomic Fission Bomb will provide the United States with a weapon of tremendous power which should be a decisive factor in winning the present war more quickly with a saving in American lives and treasure. … Each bomb is estimated to have the equivalent effect of from 5,000 to 20,000 tons of TNT now, and ultimately, possibly as much as 100,000 tons.”

At the time of the Trinity test on July 16, Truman was in Germany for the Potsdam conference, but he received a report that evening that the test was successful. In an entry in his personal diary for July 25, Truman indicates that he clearly understood the power of the new weapon (excerpted):

We have discovered the most terrible bomb in the history of the world … we think we have found the way to cause a disintegration of the atom. An experiment in the New Mexico desert was startling - to put it mildly. Thirteen pounds of the explosive caused the complete disintegration of a steel tower 60 feet high, created a crater 6 feet deep and 1,200 feet in diameter, knocked over a steel tower 1/2 mile away and knocked men down 10,000 yards away. The explosion was visible for more than 200 miles and audible for 40 miles and more. This weapon is to be used against Japan between now and August 10th. I have told the Sec. of War, Mr. Stimson, to use it so that military objectives and soldiers and sailors are the target and not women and children. … It is certainly a good thing for the world that Hitler’s crowd or Stalin’s did not discover this atomic bomb. It seems to be the most terrible thing ever discovered, but it can be made the most useful…[22]

The notion that the effects of the bomb could be limited to purely military objectives was illusory, but it cannot be said that Truman did not appreciate the implications of the new weapon.

(15) The Germans were close to having an atomic bomb

That German scientists maintained an active nuclear research program during the war, particularly devoted to experimental piles, is unquestionably true: They constructed some 20 piles (if not more) of various designs, mostly involving heavy water. The last of these, pile B-VIII, came close to achieving a chain reaction in the closing weeks of the war [23]. However, the German effort was always at a much smaller scale of funding, priority, and personnel than was the Allied effort, and was hobbled by personality clashes and bureaucratic turf wars. In 2005, historian Rainer Karlsch published a book titled Hitlers Bombe in which he made the remarkable assertion, based on German documents captured by the Russians and returned to the Max Planck Society in 2004, that a group of scientists under Kurt Diebner and Walther Gerlach achieved a chain reaction and detonated two hybrid fission/fusion bombs before the end of the war. An English-language summary of the work was published by Karlsch and Mark Walker, a noted historian of wartime German nuclear efforts [24]. However, historian of science Dieter Hoffmann has concluded that, while Karlsch produced a valuable work that brought to light much previously unknown archival material, the bomb assertion is not borne out by the book’s content: The reaction rates and pressures of the purported design are too small by at least two orders of magnitude to initiate a fusion reaction, it is not made clear how the Germans obtained plutonium or enriched uranium, there is no physically plausible description of the bomb’s design, and there is no reliable analysis of the purported test regions to show that nuclear reactions really occurred [25]. There may yet be more to the German program to be revealed, but these sorts of claims are examples of how tantalizing but unverifiable information can be extrapolated to sensationalized, premature conclusions.

References

[1] Reed, B. C. The History and Science of the Manhattan Project (Springer, Berlin, 2019).

[2] https://hypertextbook.com/eworld/einstein/

[3] Reed, B. C. “Can the Energy of Fission Make a Grain of Sand Visibly Jump?,” The Physics Teacher 56, 583 (2018).

[4] DeGroot, G. The Bomb: A Life (Harvard University Press, Cambridge, MA, 2004), p. 16.

[5] B. C. Reed, “Arthur Compton’s 1941 report on explosive fission of U-235: A look at the physics,” Am. J. Phys. 75(12), 1065-1072 (2007).

[6] R. Rhodes, The Making of the Atomic Bomb (New York: Simon and Schuster, 1986), p. 438.

[7] https://www.energy.gov/sites/prod/files/The%20First%20Reactor.pdf

[8] A. Wattenberg, “December 2, 1942: the event and the people,” Bull. Atom. Sci. 38(10) 22-32 (1982).

[9] Nichols, K. D., The Road to Trinity (New York: William Morrow and Company, 1987), p. 146.

[10] Reed, B. C. “Kilowatts to Kilotons: Wartime Electricity Use at Oak Ridge,” History of Physics Newsletter XII(6), 5-6 (2015).

[11] V. C. Jones, Manhattan: The Army and the Atomic Bomb (Washington: Center of Military History, United States Army, 1985), p. 391.

[12] https://www.census.gov/library/publications/time-series/statistical_abstracts.html

[13] D. N. Schwartz, The Last Man Who Knew Everything: The Life and Times of Enrico Fermi, Father of the Nuclear Age (New York: Basic Books, 2017), p. 257.

[14] Ref. 6, p. 672.

[15] Reed, B. C., “Seeing the Light: Visibility of the July ’45 Trinity Atomic Bomb Test from the inner solar system,” The Physics Teacher 44, 604-606 (2006).

[16] J. Kunetka, The General and the Genius: Groves and Oppenheimer – The Unlikely Partnership That Built the Atomic Bomb (Washington, DC: Regnery History, 2016), p. 357.

[17] https://time.com/3980421/hiroshima-nagasaki-operations-orders/

[18] J. Coster-Mullen, Atom Bombs: The Top Secret Inside Story of Fat Man and Little Boy (Coster-Mullen, Waukesha, WI, 2016). See pp. 326 and 331.

[19] Groves’ July 30 memo can be found at https://nsarchive2.gwu.edu//NSAEBB/NSAEBB162/45.pdf. The August 10 memo can be found at National Archives and Records Administration microfilm record M1109, reel 3, images 0653-0654. This microfilm set is the Correspondence (“Top Secret”) of the Manhattan Engineer District, 1942-1946 (Records of the Office of the Chief of Engineers; Record Group 77; 5 rolls).

[20] MED: http://www.atomicarchive.com/Docs/MED/index.shtml

USSBS: https://www.trumanlibrary.gov/library/research-files/united-states-strategic-bombing-survey-effects-atomic-bombs-hiroshima-and?documentid=NA&pagenumber=1

ABCC: http://www.nasonline.org/about-nas/history/archives/collections/abcc-1945-1982.html

[21] Groves’ April 23 memo can be found at http://www.gwu.edu/%7Ensarchiv/NSAEBB/NSAEBB162/3a.pdf

[22] Excerpts from Truman’s diary can be found in R. H. Ferrell, Harry S. Truman and the Bomb (High Plains Publishing Co., Worland, WY, 1996)

[23] B. C. Reed, “Piles of piles: An inter-country comparison of nuclear pile development during World War II,” https://arxiv.org/abs/2001.09971.

[24] R. Karlsch and M. Walker, “New light on Hitler’s bomb,” Physics World 18(6), 15-18 (2005).

[25] D. Hoffmann, “The race for the bomb: How close was Nazi Germany to developing atomic weapons?” Nature 436 (7047), 25-26 (2005).

History of Materials Science

Overview of History of Materials Research by Arne Hessenbruch (MIT)
In 2004, Bernadette Bensaude-Vincent and I asked whether Materials Science was about to explode, in the sense of centrifugal forces overpowering its centripetal ones. In 2020, I revisited that question. We identified Merton Flemings’ metaphor of the tetrahedron as a centripetal force: it held together the four equal poles of structure, properties, performance, and process, and was embodied in curricula such as Sam Allen and Ned Thomas’ 1999 textbook. The centrifugal forces were visible in the change of session topics at the Materials Research Society (MRS).

What does it look like in 2020? The MRS sessions continue to diversify. Five years ago the MRS began to classify topics to get a conceptual grip on the approximately 50 parallel sessions at Fall Meetings. One large category is biomaterials (which was still minor in 2004). But even the MRS’ classification changes from year to year. The main reason is that the MRS business model relies on the number of attendees, and hence permanence of categories is pushed aside by the desire, to put it tendentiously, to catch the latest hype wave.

Whereas such centrifugal forces continue unabated, the centripetal ones have weakened. The Allen and Thomas textbook was able to bring metals, semiconductors, and ceramics under a single conceptual framework, but it did not address biomaterials, nor could it have. And no textbook integrating bio has emerged.

Nano is now a category of its own, on a par with bio, at the MRS meetings. Ore-extracted materials, or metals I guess, are being squeezed out, much to the chagrin of older materials scientists I have spoken to. Among the centrifugal forces I would add the nano wave, with the NBIC program as an attempt to re-organize the entire field of materials design at the molecular level, and the still fashionable biomimetic wave, as an environmentally friendly approach to the design of materials. Both are related to materials by design. What about raw materials (extracted from ores)? Are they still part of Materials Science and Engineering?

Our 2004 conclusion is at least not refuted: materials research is the prototype of unstable disciplines, characterized by a closer integration with industrial (and, I would now add, military) demand. How to teach foundational courses in this environment is a growing challenge.

From Hidden Utility to Heroic Machines by E. F. Spero (Department of Materials Science and Engineering, MIT)
How do scientists imagine (and possibly enable) futures through their practices of computation? How might particular tools and approaches transition from serving as hidden utilitarian elements to those that give rise to new subfields and styles of thought? This presentation takes a historical approach to the emergence of computation in macromolecular science, an interdiscipline focused on polymers that bridges physics, chemistry, and materials science and engineering. In this field today there is a palpable enthusiasm for the power and speed of computation, invested in the promise of big data to revolutionize what is possible within these disciplines. Employing tools of high-throughput simulation and new machine learning algorithms, scientists aim to close the gap between theory and experiment and to redefine what it means to create, designing new sustainable materials built with atomic precision. However, in the late 1950s, when computation was still a nascent tool, often used for verification of existing models rather than for creation, scientists studying the physical behavior of long-chain molecules downplayed, mistrusted, or even openly disdained computational methods. Along with those of my fellow panelists, this presentation will open space for reflection on tools, methods, and imagination.

History of Materials Science Institutions by Robert P. Crease (State University of New York – Stony Brook)
An institution may be defined as a basic physical, organizational, educational, or regulatory structure needed for the operation of scientific research. Institutions are a fundamental feature of science, but tend to drop out of its history. Scientific research requires institutional networks, and the character of these networks depends on scientific field and national context. It is impossible to understand an institution apart from its network.

The networks of materials science research institutions have a markedly different character from the networks of other kinds of science. Materials science research can be thought of as advancing through a series of “performances” that are possible thanks to the intersection of five different kinds of institutions. Laboratories are the specially equipped physical “stages” where researchers can mount the performances; educational institutions, such as university departments, train researchers to carry out such performances; professional institutions tend to the organization and professionalization of the researchers and their communities; governmental organizations fund the research; and communicative institutions disseminate the results. The contribution consists of surveys of these five types of institutions in materials science and how their networks vary by region.

Computation in the History of Physics (cosponsored with DCOMP)

Physics in the History of Computing by Peter Freeman (Georgia Institute of Technology)
My focus here is on the organizational history of the U.S. National Science Foundation (NSF) in providing advanced computational resources for the research community. It is a history of push-pull between the advancement in computational capabilities and the need for greater capabilities. It is also a story of interaction specifically between physics and computer science. At NSF this push-pull has led to an on-going tension between the user community (initially the physical, geological, and atmospheric sciences and more recently the biological and social sciences) and the research communities on which computer designers rely (primarily physics, electrical engineering, and computer science). That tension has caused NSF’s organizational responsibility for providing advanced computing services to migrate among different directorates and offices from the 1950s to this day. For convenience I’ve divided the story into six time periods, each loosely marked by an important event. I have been a participant in this history from the early 1960s in several ways, as a primary decision maker about High Performance Computing (HPC) in the early 2000s, and now as a chronicler of it.

By 1950, when NSF came into existence, the scientific need for HPC was clear to those whose research demanded it. For example, in 1953, the concept of using a computer to perform an experiment in silico was first demonstrated at Los Alamos by Enrico Fermi, John Pasta, and Stanislaw Ulam. One of the first organizational recognitions of the need for computers occurred in the 1955 NSF annual report. In May 1955, the National Science Board decided that NSF should provide computers to universities, although a formal funding program was not established until 1959. A collateral, unplanned effect of some of the resulting grants to computational scientists, at least through the 1960s, was to provide support for research which later led to the foundations of computer science. Studies commissioned in 1966 and 1967 led NSF to create the Office of Computing Activities.

By the late 1970s many in the research community felt future scientific advances would be impeded by the lack of advanced computers. In mid-1983, an internal NSF working group, led by Marcel Bardon, Division Director of Physics, and Kent Curtis, Division Director of Computer Research, recommended that NSF provide “supercomputer services for academic research and science education” and support “networks linking universities and laboratories with each other.” Starting in 1985, five awards for NSF supercomputer centers were made: to the San Diego Supercomputer Center (SDSC), the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, the John von Neumann Center at Princeton, the Pittsburgh Supercomputing Center, and the Cornell Theory Center. The directors of all the centers were physicists! Except for the Princeton and Cornell centers, all are still in operation today as resources for the broad scientific community.

In May 1989 Senator Al Gore introduced Senate Bill 1067: “To provide for a coordinated Federal research program to ensure continued United States leadership in high-performance computing.” A later version, known as the High Performance Computing Act of 1991, was enacted on December 9, 1991, and is colloquially known as the “Gore Bill.” It led to the development and funding of the National Research and Education Network (NREN) and advanced HPC. The NREN eventually became the Internet.

The rapid advance of computing, the emergence of the Internet, and the explosion of computer usage created an environment in which NSF often struggled to keep up with the demand and research opportunity in HPC. It seemed as though a new or revised HPC program was barely started before studies and panels were convened to recommend a successor. NSF followed up with two Partnerships for Advanced Computational Infrastructure, called PACIs. Funding was greatly expanded in the period 2000 – 2004. The first awards for terascale computers were made in 2001. By 2004 it was clear to most observers that, for many scientific problems, it was essential to have available the most advanced computational infrastructure possible; it was no longer a discretionary choice in order to be competitive.

Ultimately, NSF created a new Office of Cyberinfrastructure reporting directly to the NSF Director. The Office has been reorganized several times, but to this day NSF continues to provide academic researchers with the latest computational resources. The organizational and programmatic changes at NSF in providing HPC for the general scientific community are a result of two intertwined forces: the continuing, even accelerating, pace of change in the technologies available and the ability of the general scientific community to utilize them. The pace of change is a well-known story. On the other hand, the adoption and utilization of new technology often takes much longer. NSF’s organizational changes, then, are often a direct result of the push-pull between technological advance and scientific need. For the past thirty years the organizational home of HPC has oscillated between the Office of the Director and the Computer and Information Science and Engineering (CISE) Directorate in response to competing demands. As advocacy intensifies, first for more service to the scientific community in the HPC arena (which advocates believe the Director’s office will ensure), and then for an increased focus on utilization of what is already available (which leads the Director to move responsibility back to CISE), the office migrates. The broad history of computing and NSF affords a fairly rich context for further historical research.

The above synopsis is based on Chapter 10 of Computing and the National Science Foundation, 1950 – 2016: Building a Foundation for Modern Computing, Peter A. Freeman, W. Richards Adrion, and William Aspray, ACM Books, 2019, New York.

On the Status of Landauer’s Principle by Katherine Robertson (University of Birmingham)
Maxwell’s demon is a creature who cunningly violates the second law of thermodynamics. In what sense is such a demon possible? Whilst thermodynamics legislates against such a creature, the demon looks eminently possible according to the underlying classical or quantum dynamics: Poincaré’s recurrence theorem and Loschmidt’s reversibility objection reveal that entropy can decrease in certain situations.

The orthodoxy is that Maxwell’s demon is vanquished by Landauer’s principle, according to which there is an entropy cost to reset the demon’s memory - a vital step in the cyclic process that supposedly leads to a violation of the second law. But the status of Landauer’s principle is controversial: some take it as obviously true, others (such as John Norton) have criticised the proofs of this principle.
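For context (a standard textbook statement, not part of the abstract itself): Landauer’s principle asserts that erasing one bit of information in contact with a heat bath at temperature \(T\) dissipates heat of at least

\[ Q_{\mathrm{erase}} \;\ge\; k_B T \ln 2, \]

equivalently raising the entropy of the bath by at least \(k_B \ln 2\), where \(k_B\) is Boltzmann’s constant.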

In this talk, I clarify the status of Landauer’s principle. First I discuss which assumptions are required to establish Landauer’s principle, and argue that establishing to which theory (thermodynamics, statistical mechanics or quantum mechanics) these principles belong reveals the status of Landauer’s principle. I then consider one of Norton’s counterexamples to Landauer’s principle, and discuss how it depends on certain views about the physical implementation of computation.

Simulation Model Skill in Cosmology by Eric Winsberg (University of South Florida)
What role can or could simulation play in supporting, puzzle-solving, modifying, disconfirming, or falsifying ΛCDM and its competitors? I review some of the problems cosmologists have solved or hope to solve using computer simulation, and examine some of the problems and successes that have emerged. I draw some conclusions regarding the kind of simulation model skill we should expect to find in cosmology. Simulation models have been used for over 50 years now to test and explore the Lambda Cold Dark Matter model of cosmology and its precursors and rivals. The approach has been to simulate the gravitational evolution of CDM in order to predict the evolution of structure in visible matter. But such simulations are highly non-trivial, and give rise to many overlapping Duhem-type problems. Not all such simulations, however, have had the same epistemic goals, and hence not all have been subject to the same problems. My talk explores various approaches to tackling the various problems that arise.

Cosmology in Silico by Marie Gueguen (University of Pittsburgh)
Computer simulations constitute an indispensable tool of contemporary cosmology. They are necessary to extract predictions from cosmological models, to design the observational surveys that will collect the data with which models are assessed, and to supplement sparse or non-existent observations. Their ubiquity at every stage of the scientific inquiry must be met with a rigorous methodology for evaluating when simulations faithfully track the physical consequences of the model, especially when expensive observational facilities are built based on simulation-based arguments. Yet a few astrophysicists (e.g., Melott et al. [1], Baushev et al. [2], van den Bosch et al. [3]) have expressed their concerns with respect to current methods for assessing the reliability of simulations, i.e., convergence studies and code comparisons. Consider, for instance, one of the main sources of artefacts in N-body simulations: the fact that dark matter is substituted by fewer, but more massive, particles owing to limited resolution. Such an idealization exposes simulations to discreteness-driven effects, e.g., collisional effects or two-body relaxation. Code comparisons, which search for ‘robust’ properties that remain the same across different codes, cannot diagnose artefacts that stem from an assumption common to all the codes. Convergence studies, on the other hand, look for properties within a given code that resist a change in the value assigned to purely numerical parameters and thus are not sensitive to the specifics of the calibration, e.g., to higher mass or force resolution. Increasing the resolution to diagnose discreteness-driven artefacts does not help, however: in the Cold Dark Matter model, structure forms bottom-up, so the first objects to form always contain only a small number of particles. A higher resolution results in a denser environment, from which these first objects condense out with a higher physical density, thereby increasing two-body relaxation effects (Diemand et al. [4]). As this example shows, higher resolution is not always better resolution!
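To illustrate the discreteness worry just described, here is a small, purely illustrative Python sketch of the textbook scaling for the two-body relaxation time, t_relax ≈ (0.1 N / ln N) t_cross (the particle numbers below are arbitrary examples, not figures from the abstract):

import math

def relax_to_cross_ratio(n_particles: int) -> float:
    # Textbook estimate: t_relax / t_cross ~ 0.1 * N / ln N
    return 0.1 * n_particles / math.log(n_particles)

for n in (100, 10_000, 1_000_000):
    print(f"N = {n:>9,}: t_relax ~ {relax_to_cross_ratio(n):,.0f} crossing times")

A structure resolved by only a hundred bodies relaxes within a couple of crossing times, i.e., it behaves collisionally, whereas one resolved by a million bodies is effectively collisionless over the simulated timescales; replacing dark matter by fewer, more massive particles therefore injects spurious collisionality.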

Hence, I suggest a method to diagnose artefacts that does not promote the race to ever-increasing resolution but rather facilitates the local evaluation of distinct components of simulations. I refer to this method as that of “crucial simulations”. A ‘crucial simulation’ proposes an idealized, simplified scenario in which a physical hypothesis can be tested against a numerical one, by allowing the observation of a prediction drawn from one of the hypotheses and absent from its rival. The observation of the phenomenon in the outcome of the simulation then disproves one of the alternatives, thereby confirming the other; i.e., it disproves or confirms the numerical or physical nature of a given property of a simulation outcome. Interesting examples of this method can be found in the literature of the 1980s, when astrophysicists were developing and testing their P3M codes. Efstathiou and Eastwood [5], for instance, tested their code against two-body relaxation effects by including in their simulations particles of different masses, successfully detecting mass segregation effects. In the simplified scenario upon which crucial simulations are based, moreover, an analytically tractable solution is more likely to be available for comparison with the outcomes of the simulations, and thus for better understanding the impact of specific numerical or physical components. This strategy was used in Efstathiou et al. [6]: find a simple case where exact solutions are available (here, simulating a one-dimensional cloud of particles to verify whether the cloud remains one-dimensional, as it should if the system is collisionless) and test whether these idealized simulations succeed in reproducing the analytically known results. Such a method, given its simplicity, is not computationally expensive and does not have to be put in competition with other methods; rather, it can be strategically used to complement them when the epistemic opacity of simulations, due to the complexity of the simulated systems and the lack of understanding of how different modules contribute to the final outcome, makes the task of evaluating their reliability especially difficult.

References

[1] Melott, A. L., Shandarin, S. F., Splinter, R. J., & Suto, Y. (1997). Demonstrating discreteness and collision error in cosmological N-body simulations of dark matter gravitational clustering. The Astrophysical Journal Letters, 479(2), L79.

[2] Baushev, A. N., del Valle, L., Campusano, L. E., Escala, A., Muñoz, R. R., & Palma, G. A. (2017). Cusps in the center of galaxies: a real conflict with observations or a numerical artefact of cosmological simulations? Journal of Cosmology and Astroparticle Physics, 2017(05), 042.

[3] Van den Bosch, F. C., & Ogiya, G. (2018). Dark matter substructure in numerical simulations: a tale of discreteness noise, runaway instabilities, and artificial disruption. Monthly Notices of the Royal Astronomical Society, 475(3), 4066-4087.

[4] Diemand, J., Moore, B., Stadel, J., & Kazantzidis, S. (2004). Two-body relaxation in cold dark matter simulations. Monthly Notices of the Royal Astronomical Society, 348(3), 977-986.

[5] Efstathiou, G., & Eastwood, J. W. (1981). On the clustering of particles in an expanding universe. Monthly Notices of the Royal Astronomical Society, 194(3), 503-525.

[6] Efstathiou, G., Davis, M., White, S. D. M., & Frenk, C. S. (1985). Numerical techniques for large cosmological N-body simulations. The Astrophysical Journal Supplement Series, 57, 241-260.

History of Physics in India

The Making of Modern Physics in Colonial India by Somaditya Banerjee (Austin Peay State University)
How did modern physics establish itself in India—a British colony—in the early 20th century? Who were the key actors and why did they develop physics in a colonized country far away from a European metropole? By using the case studies of Jagadish Chandra Bose, Satyendranath Bose and Chandrasekhara Venkata Raman, I explore their physics, nationalism and social identity as "well-mannered intelligentsia" who played a key role in the making of modern physics in a country still under colonial domination. Finally, I argue that the local and the global were entangled in the worldview of these colonial intellectuals, and that the correlations between the discontinuous ‘light quantum’ and Indian history played a key role in ushering in modern Indian physics.

Women and Physical Sciences in India: Bimla Buti and efforts to flourish a physical plasma community in her home country by Inianara Silva (Graduate Program in History, Philosophy and Science Teaching, Federal University of Bahia)
Bimla Buti is the first Indian woman physicist to be elected a Fellow of the Indian National Science Academy (INSA) and of The Academy of Sciences of the Developing World (TWAS). Her contributions to the physical sciences have been celebrated with awards such as the INSA-Vainu Bappu Award, the Vikram Sarabhai Award, and the Jawaharlal Nehru Birth Centenary Lectureship Award. Buti received her scientific recognitions with surprise as, using her own words, “it was almost impossible for me, a woman scientist in a man-dominated field, to get nominated for prestigious awards like the Bhatnagar award” (Buti, 2008, p. 38). The man-dominated field was plasma physics. Besides writing papers and books, she contributed to Indian physical sciences by developing a research program on plasma physics at the Physical Research Laboratory (PRL) and by founding the Plasma Science Society of India. “We managed to establish a very strong group in plasma physics, both theoretical and experimental, at PRL” (Buti, 2008, p. 39). I trace her contributions to the physical sciences, her struggles to become a female physicist, and her efforts to build a career and community in plasma physics, thus contributing to the history of science in India.

“The free side of the meter”: Trustworthiness, theft and class identity in reading the domestic electric meter in early twentieth-century Calcutta by Animesh Chatterjee (Leeds Trinity University)
Of all the devices used in early electric supply projects in early twentieth-century Calcutta, the domestic meter was perhaps the most controversial. Introduced as a reliable billing method to measure consumption by customers connected to the newly introduced electric supply system, the electric meter was also at the centre of cases of “improper use” of electricity supply. The term “improper use” is used broadly here to refer to a variety of consumer practices that the Calcutta Electric Supply Corporation believed to be interferences with its property and operations. These included theft of electricity by bypassing or breaking seals on electric meters, or using electricity for purposes other than that for which it was supplied to the customer. This paper examines some of the disputes between customers and the engineers and inspectors of the Calcutta Electric Supply Corporation over the deployment and use of electric supply and meters in early twentieth-century Calcutta. Following recent works on users and non-users of technologies, and on trust and the morality of measurements, this paper examines how electric meters became central to concerns over the quantities measured by meters, the class identity of customers, and trust between the supplier, consumers and the electric meter. In doing so, this paper focuses on both the design of measurement instruments and the agency and discretion of the electrical consumer, thereby providing new perspectives on how consumers, suppliers and electrical measurement technologies interacted during the early days of electricity supply in colonial Calcutta.

Scientific Creativity in Peripheral Locations: The Madras Triple Helix Model of G.N. Ramachandran by Deepanwita Dasgupta (The University of Texas at El Paso)
The name of the Indian scientist G. N. Ramachandran is forever associated with the discovery of the triple-helix structure of collagen. Present most abundantly in almost all connective tissues of the human or animal body, collagen was the third great structural discovery in biomolecules, after the discovery of the alpha helix by Linus Pauling and the DNA double helix by Crick and Watson. Unlike the first two, however, this third discovery came from a young peripheral scientist who worked alone in an obscure, newly founded department at the University of Madras. The discovery of the triple-helix structure of collagen was thus truly a case in which a peripheral scientist won the race for discovery against numerous Goliaths working in the field. In this presentation, my goal is to trace the lines of reasoning that led Ramachandran to develop the triple-helix model from his early X-ray diffraction images, and the subsequent controversy on collagen that finally led to the creation of the Ramachandran plot.

FHP 2020 Essay Contest Announcement

The Forum on the History of Physics (FHP) of the American Physical Society is proud to announce the 2020 History of Physics Essay Contest.

The contest is designed to promote interest in the history of physics among those not, or not yet, professionally engaged in the subject. Entries can address the work of individual physicists, teams of physicists, physics discoveries, or other appropriate topics. Entries should not exceed 2,500 words, including notes and references. Entries should be both scholarly and generally accessible to scientists and historians.

The contest is intended for undergraduate and graduate students, but is open to anyone without a PhD in either physics or history. Entries with multiple authors will not be accepted. Entries will be judged on originality, clarity, and potential to contribute to the field. Previously published work, or excerpts thereof, will not be accepted. The winning essay will be published as a Back Page in APS News and its author will receive a cash award of $1,000, plus support for travel to an APS annual meeting to deliver a talk based on the essay. The judges may also designate one or more runners-up, with a cash award of $500 each.

Entries will be judged by members of the FHP Executive Committee and are due by September 1, 2020, at 11:59 p.m. US Eastern Time. They should be submitted, as Word documents or PDFs, by emailing fhp@aps.org with “Essay Contest” in the subject line. Entrants should supply their names, institutional affiliations (if any), mail and email addresses, and phone numbers. Winners will be announced by October 1, 2020. 

Resource Link

Website dedicated to the history of the University of Texas at Austin Physics Department: http://web2.ph.utexas.edu/utphysicshistory/

Officers and Committees 2020 - 2021

Forum Officers
Chair: Joseph D Martin
Chair-Elect: Michel H P Janssen
Past Chair: Paul Cadden-Zimansky
Secretary-Treasurer: Dwight E Neuenschwander

Forum Councilor
Virginia Trimble

Other Executive Board Members
Melinda Baldwin
Chanda Prescod-Weinstein
Katemari Rosa
Audra Wolfe
Gregory Good (AIP)
Donald Salisbury (newsletter)

Program Committee
Michel H P Janssen (chair)
Paul Cadden-Zimansky
Joseph D Martin

Nominating Committee
Chair: Paul Cadden-Zimansky
Michel H P Janssen
Chanda Prescod-Weinstein
Audra J Wolfe

Fellowship Committee
Chair: Paul Halpern
Peggy Kidwell
Katemari Rosa

Pais Prize Committee
Chair: Helge Kragh
Melinda Baldwin
Dieter Hoffmann
Jinyan Liu
Gregory Good (ex-officio)


The articles in this issue represent the views of their authors and are not necessarily those of the Forum or APS.