
CHAPTER SIX

Atmospheric Chemistry

The Martian surface is fried by ultraviolet light because there’s no ozone layer there. So the nearby planets provide important cautionary tales on what dumb things we should not do here on Earth.

—Carl Sagan, 1992

During the 1980s, NASA’s planetary program essentially ended. The first planetary launch of the decade was the 1989 Galileo mission, which did not arrive at its target planet, Jupiter, until 1995. Due to the Challenger explosion, the Mars Observer mission approved in 1985 did not launch until 1993. It disappeared right before arriving at Mars, an apparent victim of an explosion.1 The Magellan high-resolution radar-mapping mission to Venus did not launch until 1989. There were two causes of this cessation of a vibrant and successful planetary science program. The first was the budgetary environment of the first several years of the decade. President Ronald Reagan had not initially been a supporter of space exploration, and he and his budget director had imposed budget rescissions that ended a number of programs and blocked all new mission starts.2 While this changed eventually, the interruption ensured no data until the 1990s.

The second reason for the hiatus in space science was the failure of the Space Shuttle to deliver on its promise of inexpensive, reliable space access. Instead, the Shuttles were relatively quickly recognized as unreliable and extraordinarily expensive. Late in 1984, the Reagan administration began to look for a new launch vehicle, and after the 1986 Challenger accident it began to force payloads off the Shuttle and onto expendable rockets again.3 The Upper Atmosphere Research Satellite (UARS) had to be redesigned after the Challenger explosion, producing cost growth that put its ultimate price tag over the $1 billion mark. This money came out of other programs, as did the cost of making fixes to the Shuttle and of delaying and modifying other missions like Galileo.4 The increasing costs of approved missions meant fewer new approvals.

With no new missions, the only planetary data scientists had to work with for the decade was from the Voyager outer planet flyby missions launched in 1977, which had encounters with Jupiter in 1979, Saturn in 1980 and 1981, Uranus in 1986, and Neptune in 1989. Many planetary scientists therefore turned to more Earthly questions, in search of intellectual stimulation and funding. There were plenty of available scientific questions, but two stood out that were of global interest and had national support: atmospheric chemistry and climate change. NASA began to devise a global climate observing system in the late 1970s, but this did not begin to garner political support, and therefore funding, until 1989. Instead, NASA’s major atmospheric science effort during the 1980s was its atmospheric chemistry program. There were two primary efforts: Robert Watson’s Upper Atmosphere Research Program (UARP) and Robert “Joe” McNeal’s Tropospheric Chemistry Program.

These programs did not rely on space hardware, however. The UARS that was supposed to be the backbone of the stratospheric research program during the 1980s did not get a ride into orbit until 1991. A single flight of the Jet Propulsion Laboratory (JPL) Atmospheric Trace Molecule Spectroscopy (ATMOS) instrument in 1985 and the aging TOMS/SBUV onboard Nimbus 7 were the only sources of space-based ozone chemistry data during the decade, and they were insufficient to answer the major scientific questions. Aircraft-, balloon-, and ground-based research were the basis of NASA’s atmospheric chemistry program in the 1980s. When UARS finally launched, it provided corroboration and demonstrated the global extent of stratospheric conditions measured locally by these other means. But it was not the source of fundamental advances in knowledge. The head of NASA’s Earth Observations Program, Shelby Tilford, believed that a proper research program needed to be comprehensive, involving laboratory-, aircraft-, and model-based studies as well as providing for spacecraft instrument development; these proved the salvation of the agency’s scientific reputation as UARS sat in storage.

TROPOSPHERIC CHEMISTRY

In 1978, Jack Fishman, a researcher at the National Center for Atmospheric Research (NCAR), and Paul Crutzen published an analysis of tropospheric ozone that launched an extensive series of investigations during the 1980s.5 Ground-level ozone, while a pollutant that caused health problems for humans, was largely considered to be chemically inert in the troposphere. In the troposphere, ozone was produced by photolysis of nitrogen oxides, which are industrial emissions and are also generated within internal combustion engines. It was destroyed by plant life. The scientific community believed until the late 1970s that the primary natural source of ozone in the troposphere was the stratosphere. Ozone-rich stratospheric air descended into the troposphere somewhere in the high mid-latitudes (although the “where” was rather speculative) and was eventually removed at the ground.

Fishman and Crutzen argued that this could not be true. There was much more land in the Northern Hemisphere than in the Southern Hemisphere, and thus greater ozone destruction. For the two hemispheres to have roughly the same ozone concentrations, stratospheric descent into the Northern Hemisphere had to be much larger than into the Southern Hemisphere. But there was no evidence that this was true, and, they contended, no theoretical basis for believing it either. There had to be a significant ozone production in the troposphere to make up the difference. This had to be particularly true for the Southern Hemisphere, where much less industrial activity occurred but where tropospheric ozone levels were nonetheless similar to Northern Hemisphere levels. They speculated that photochemical oxidation of carbon monoxide could provide some of the additional ozone, but there were other possibilities as well, including as yet unidentified sources of nitrogen oxides.6
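The bookkeeping behind this argument can be written as a simple steady-state budget; the notation below is an illustrative sketch of the reasoning, not Fishman and Crutzen’s own formulation.

```latex
% Steady-state hemispheric ozone budget: stratospheric influx F plus
% in-situ photochemical production P balances destruction L at the surface.
\[
  F_{\mathrm{strat}} + P_{\mathrm{chem}} \approx L_{\mathrm{surf}}
\]
% More land in the Northern Hemisphere means larger surface destruction there,
% so L_surf(NH) > L_surf(SH). If P_chem were negligible, comparable ozone
% burdens in the two hemispheres would require F_strat(NH) to be far larger
% than F_strat(SH), for which there was no evidence; hence P_chem had to be
% substantial, especially in the Southern Hemisphere.
```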

Combined with interest in chemical cycling triggered by James Lovelock’s speculations about climate regulation, this argument set in motion a tropospheric chemistry program at NASA that effectively paralleled the UARP. Originally called the Air Quality Program and later the Tropospheric Chemistry Program, it was, like the UARP, comprehensive in nature, involving laboratory studies, model development, and field experiments. The field component was the Global Tropospheric Experiment (GTE), a name that failed to reflect its true nature as a continuing series of instrument development and field studies lasting from 1982 through 2003.

The manager for the Tropospheric Chemistry Program from its founding through 1999 was Robert “Joe” McNeal. McNeal had started the National Science Foundation’s (NSF) tropospheric chemistry program in 1978, after working as an atmospheric chemist at the Aerospace Corporation in Santa Monica for more than a decade. NSF’s program managers rotated every two years, and in 1980 Shelby Tilford asked McNeal to come to NASA to create a tropospheric chemistry program there. Atmospheric chemistry was just beginning to become institutionalized within the government, and McNeal recalls that his initial focus was on figuring out what NASA’s contribution to the field could be. Whereas NASA had been made lead agency for stratospheric chemistry, no one had been for tropospheric chemistry. So his first task was to assess what NASA’s capabilities were, what unique abilities it possessed vis-à-vis other research agencies, and what it could contribute to the new research area.7

Atmospheric chemistry was, in McNeal’s view, a “measurement limited” field. The trace species that were important were present in the atmosphere in minute quantities, at one part per billion and in some cases one part per trillion levels. The ability to measure at these tiny levels was new, and just as was true for stratospheric chemistry, the number of chemical species that could be measured was small. Further, no one had attempted to systematically evaluate which techniques gave the best results—the intercomparison problem. Finally, the laboratory instruments that could measure to these levels could not be taken into the field, which was where researchers wanted to make the measurements, and so development of instruments that could be put into an airplane or on a balloon was an obvious priority. The Upper Atmosphere Research Office was performing this role for stratospheric chemistry, and the development and intercomparison of instruments for field research became one part of McNeal’s area.8

McNeal also believed that one of NASA’s primary strengths as a research organization was management resources. The field experiments that he anticipated carrying out during the 1980s would involve the efforts of hundreds of people from many different universities and government agencies, and hence coordination was a significant challenge. Further, a global-scale effort would involve collaboration with other governments, which was something that NASA, which maintained its own foreign affairs staff, could also handle. Finally, it also had research aircraft that were relatively underutilized and thus available in the earlier years of the program, stationed at the Ames Research Center in California and at Wallops Island in Virginia.

McNeal chose Langley Research Center to manage the day-to-day operation of the Tropospheric Chemistry Program. He had met Don Lawrence and his chief scientist at the time, Robert Harriss, during a “get acquainted” visit, and had been impressed with their knowledge and interest in this general research area. Harriss had been hired away from Florida State University for an ocean science program at Langley Research Center, but this was terminated when the oceans program manager at NASA headquarters had decided to centralize physical oceanography at JPL. Harriss had then become Lawrence’s chief scientist. Harriss’s own specialty was biogeochemistry, and he focused on the exchange of gases between the ocean surface and the tropospheric boundary layer, the perfect skill set for the program McNeal intended to forge. So McNeal asked Harriss to be the project scientist for the effort, and assigned the project management function to Langley, with the caveat that Lawrence had to keep the project management functions separate from the science functions. That way, Langley’s scientists would have to compete alongside researchers from other NASA centers, other agencies, and universities.9

Early the following year, a small group of atmospheric chemists and meteorologists met at NCAR in Boulder to discuss the scientific questions pertinent to the new field. This group sent a letter report to NSF calling for a coordinated study of tropospheric chemistry; NSF then asked the National Research Council (NRC) to form a committee to draft this plan. NRC formed a Panel on Global Tropospheric Chemistry, chaired by University of Rhode Island biogeochemist Robert Duce, to carry out this task. Harriss and McNeal formulated their scientific program from this committee’s deliberations and report.10

One of the central themes of the committee’s discussions was that while knowledge of tropospheric chemistry was growing explosively, the research being done was crisis-driven. It was formulated in response to short-term policy needs. It lacked the comprehensiveness that was necessary to produce a well-integrated understanding of the full range of the atmosphere’s chemical processes and fluxes. Anthropogenic sources of ozone-generating trace gases were well-inventoried in North America and Europe due to the decades of pollution research carried out there, but the policy focus of the research had resulted in relative neglect of source gases of natural origin as well as of origins outside the developed world. Hence, one of the committee’s recommendations was that the proposed program be long-term in nature, not focused on the immediate problems of the early 1980s.11

In the NRC committee’s view, progress in atmospheric chemistry was also being inhibited by the tendency of individual scientists to focus on a small piece of the overall challenge. In one sense, of course, this was vital. As was the case in stratospheric chemistry, for example, one needed individual reaction rate measurements in order to lay the foundations for chemical models. But building a better understanding of biogeochemical cycles also required that studies be carried out at larger scales. Investigation of sources and sinks within the biosphere, of transport from one region to another, and of the chemical transformation and removal processes all needed to be performed to fully understand the complex chemical cycles. Such studies would require the participation of many scientists in organized field experiments, drawn from a variety of specialties.

The biogeochemical cycles that were of most interest were those of nitrogen, sulfur, and carbon. The committee recommended investigating potential natural sources of these trace gases in places relatively remote from industrial society: in Arctic tundra, tropical rainforest, the open ocean, and African savanna. Further, they recommended studying the chemical impact of biomass burning on the atmosphere. From basic chemistry, biomass burning had to be a source for these trace gases, but the magnitude of its impact on the global atmosphere was unknown.12

Finally, the committee argued that new instruments with faster response times needed to be developed and validated. Existing instrumentation largely did not respond quickly enough to changing levels. This made it difficult to link chemical concentrations to specific air masses so that the chemicals could be traced to their sources. Since one focus of the proposed program was on transport, being able to credibly trace gases to their sources was an important factor. Fast-response instruments seemed to be possible for a number of important species, and the committee sought support for their development.13

Thus, the first part of the program McNeal and Harriss assembled was instrument development and comparison. This became known as the Chemical Instrumentation Test and Evaluation (CITE) series of missions. The first of these, carried out in July 1983, will serve to illustrate the CITE series. Harriss recalled later that the initial focus was on getting an instrument to measure the hydroxyl radical. Hydroxyl was suspected of being the atmosphere’s cleanser, able to bond with and convert other chemically active trace species and remove them from the atmosphere. But it was extraordinarily difficult to measure because of its chemical reactivity. It was also present in minute quantities, in the parts per quadrillion range. A number of scientists believed they had effective hydroxyl instruments, yet there was a great deal of doubt in the chemistry community that they really produced good results. Hence, the first instrument comparison done by the new GTE was for hydroxyl and two trace species related to it, carbon monoxide and nitric oxide.14

The first CITE experiment was held at Wallops Island, Virginia, and took place in two phases, ground-based and airborne. GTE’s project office had arranged for a cluster of trailers to be set up at the northern end of the island, open to the expected sea breeze, with the trailers equipped with necessary test and air handling equipment to support the experimental equipment. McNeal had accepted proposals from researchers at ten different institutions for this experiment, including the Ames, Langley, Wallops, and Goddard centers, the University of Maryland, National Oceanic and Atmospheric Administration (NOAA), Georgia Institute of Technology, Washington State University, and Ford Motor Company. Three different measurement techniques for each chemical were chosen, with all but one being in situ sampling instruments. The one remote sensing instrument was a lidar proposed by Ford Company researcher Charles C. Wang; this was intended to measure hydroxyl via laser-induced fluorescence.15

The experimental procedure that the GTE project team established was to test the nitric oxide and carbon monoxide instruments against samples of known test gas concentrations as well as against ambient air samples fed to them through common manifolds. Hence, the three carbon monoxide instruments, for example, would sample essentially the same atmosphere at the same time via a single air duct. This could not be done for the hydroxyl instruments, however, because one was a remote sensing lidar. Further, because of the molecule’s extremely short lifespan, no laboratory standard gas mixtures for hydroxyl existed against which to test these instruments. For hydroxyl, the strategy was simply to compare the ambient measurements in the hope that they would at least be within the same order of magnitude.
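The logic of these manifold comparisons can be illustrated with a short sketch. The instrument names and readings below are hypothetical rather than CITE data, and the bias and percent-agreement figures are simply one plausible way of summarizing such a test.

```python
# Hypothetical illustration of a common-manifold intercomparison: three
# instruments sample the same air, and each pair is summarized by its mean
# bias and percent disagreement. None of these numbers are CITE results.
import itertools
import statistics

readings_ppb = {  # simultaneous carbon monoxide readings, parts per billion
    "instrument_A": [152.0, 148.5, 150.2, 149.8],
    "instrument_B": [150.7, 147.9, 151.0, 148.6],
    "instrument_C": [153.1, 149.4, 149.5, 150.9],
}

for (name_a, a), (name_b, b) in itertools.combinations(readings_ppb.items(), 2):
    diffs = [x - y for x, y in zip(a, b)]
    mean_bias = statistics.mean(diffs)
    percent = 100.0 * abs(mean_bias) / statistics.mean(a + b)
    print(f"{name_a} vs {name_b}: mean bias {mean_bias:+.2f} ppb ({percent:.1f}%)")
```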

The ground test results for carbon monoxide and nitric oxide were definitive in the eyes of the GTE experiment team. The two groups of instruments agreed to within 10 percent, with no detectable biases in any of the instruments. Given that atmospheric variability of these trace species was much higher than the instruments’ demonstrated error levels, these were excellent results. The hydroxyl measurements, however, were disappointing. The three instruments all had operational difficulties, and there were few periods of overlapping data by the end of the experiment. It was therefore impossible to draw any conclusions about their levels of agreement that would have any statistical relevance. The conclusion the GTE team drew from this was that none of the hydroxyl instruments they had tested was capable of producing reliable measurements.16 In fact, hydroxyl proved so difficult to measure that GTE did not get a reliable hydroxyl instrument until 1999.

After the disappointing hydroxyl results, but with good results from many other instruments, Harriss convinced McNeal that it was still worth conducting field experiments to begin characterizing fluxes of carbon, sulfur, and nitrogen in and out of the biosphere. This led to measurement campaigns in Barbados, Brazil, and the Alaskan and Canadian Arctics. These missions went by the acronym ABLE (for Atlantic [Arctic] Boundary Layer Experiment), and involved use of the NASA Electra aircraft to measure trace species at very low altitudes in the atmospheric boundary layer as well as the establishment of ground measurement stations for comparison and for pinning down specific emission sources.17

BOUNDARY LAYER EXPERIMENTS

In July 1981, while the new tropospheric chemistry program was being organized, Harriss had arranged for a new instrument from the Langley Research Center, the Differential Absorption Lidar (DIAL), to be flown on the NASA Electra from Wallops Island, Virginia, to Bermuda. The DIAL had been developed by a group led by Edward V. Browell to detect aerosols via a backscatter technique. It could also measure ozone. The DIAL system produced a continuous profile of the aerosol and ozone content of the air either above or below the aircraft, and still more useful, it produced its output in real time.18 It could therefore be used to guide the aircraft toward interesting phenomena. And because it produced a continuous readout, it could also be used to examine the continuity of atmospheric structure.

On the July 1981 flight, and a second set of flights to Bermuda in August 1982, Harriss, Browell, and some of their colleagues used the DIAL and some companion instruments designed to sense ozone and carbon monoxide to trace the movement of haze layers from the continental United States eastward into the Atlantic. These haze layers extended more than 300 kilometers into the Atlantic from the U.S. East Coast, clearly demonstrating the existence of long-range transport of pollutants. More interesting to the team, the lidar returns clearly showed that the layers of aerosols maintained a consistent vertical structure over great distances. They did not blend together as the air mass moved. This fact offered the ability to link individual layers to specific sources. Hence chemical transport could be studied in detail. Based on the vertical distribution of ozone and aerosols, their initial data supported the Fishman-Crutzen hypothesis that ozone was produced within the boundary layer due to surface emissions.19

In 1984, Harriss’s group mounted a similar expedition, also named ABLE, to Barbados. Their target was study of the Saharan dust clouds that blew westward over the island. These had been known since at least the 1840s, when naturalist Charles Darwin had witnessed them, but Browell’s DIAL system allowed investigation of the transport mechanism. The lidar revealed that the dust actually formed many very thin layers that remained distinct over very long distances. This was useful knowledge, as the lack of mixing meant that specific air parcels could conceivably be traced to their origins.

The next field mission, ABLE 2, was considerably more complex an undertaking than its predecessors. In November 1981, Hank Reichle’s Measurement of Air Pollution from Satellites (MAPS) instrument had flown aboard Space Shuttle Columbia, and had returned a surprising result: there appeared to be high concentrations of carbon monoxide over the Amazon basin. This was unexpected, as carbon monoxide is a combustion product that at the time was typically associated with industrial emissions.20 Instead, this appeared to be from biomass burning. In 1977, for example, the National Academy of Sciences had evaluated biomass burning’s contribution to global carbon monoxide concentrations as about 3 percent, a number that could not possibly be true given Reichle’s new data. Figuring out where this anomalous concentration of carbon monoxide was coming from was the scientific basis for ABLE 2. To accomplish this, McNeal worked with Luis Molione of the Brazilian space agency, Instituto Nacional de Pesquisas Espaciais, to arrange logistics, identify potential Brazilian collaborators, and gain all the necessary permissions. This was not an easy thing to do, as Brazil’s government at the time was a military dictatorship and not particularly interested in science or in having its territory overflown by the aircraft of other governments. But Molione was able to get the government to grant permission for the experiment, and was also able to establish a parallel ground-based program that continued after the GTE portion was over.21

The ABLE 2 field experiment was carried out during two phases, in order to capture data from the two dominant tropical seasons, wet and dry. Chemical conditions would obviously be different, affecting biogenic emissions. Hence, the first field phase of the experiment was carried out during 1985’s dry season, July and August, with the NASA Electra operating out of Manaus, Brazil. Surface measurement stations were established at Reserva Ducke, a biological preserve about 20 kilometers northeast of Manaus, on the research vessel R/V Amanai, and on an anchored floating laboratory on Lago Calado. A tethered balloon station, radiosonde and ozonesonde launches, and a micrometeorological tower completed the experimental apparatus. The surface measurements included enclosures designed to identify specific emission sites; by establishing the location of specific emissions, the science teams could link surface emissions to the airborne measurements. They also provided some of the most interesting results of the experiment.

The science team encountered astonishing chemical conditions as the dry season evolved. In the nineteen research flights, the GTE group measured carbon monoxide levels above the forest canopy that slowly increased through the experiment period. These eventually averaged 3 to 6 times that of the “clean” tropical ocean atmosphere. The primary source for these high levels was agricultural fires that had been set to clear fields; the haze layers produced by burning had carbon monoxide concentrations more than 8 times that of the clean atmosphere. By early August, the haze layers were clearly visible in imagery from the Geostationary Operational Environmental Satellites (GOES), and covered several million square kilometers.22 This finding was the most significant of the expedition, strongly suggesting that biomass burning was capable of influencing tropospheric chemistry on a global scale.

There were other interesting results. Steven Wofsy of Harvard found that the rainforest was a net source of carbon dioxide at night and a net sink during the day, with the forest soil appearing to be the dominant emission source. Rivers were net sources of carbon dioxide regardless of diurnal effects, and wetlands showed a weaker diurnal cycle than the forest soils. Another discovery by Wofsy’s group was that the forest soil was a large producer of nitric oxide and isoprene. This was a surprise because the forest soils in the mid-latitudes were not significant producers of these chemicals, and thus were not implicated in ozone production. But the levels being emitted by the Amazonian soils were high enough to initiate substantial ozone production, leading the researchers to conclude that the natural emissions of the rainforests influenced the photochemistry of the global troposphere.

The wet season expedition, carried out in April and May 1987, was considerably less dramatic. The science teams deployed a similar arrangement of ground-, tower-, balloon-, and aircraft-based measurements to examine the chemistry of the boundary layer in the rainy season. This permitted them to measure the “respiration” of the rainforest, via monitoring carbon dioxide levels within and above the canopy, as well as the fluxes of other chemicals. One significant finding was that nitric oxide emissions from the soil were much higher from pasturelands than from the rainforest soil itself. Hence continued conversion of rainforest to crop or pastureland could itself affect the chemistry of the atmosphere.

GTE’s next set of field expeditions were to the Arctic. Harriss and his colleagues believed that the lightly inhabited Arctic region was an area that was likely to be very sensitive to the effects of anthropogenic changes in the atmosphere. The soils in the region were high in carbon content that might be released as the Earth warmed, providing a potential positive feedback effect, and ground data suggested that even the most remote areas of the Arctic were being affected by air pollution from mid-latitude sources. Because of the complex linkages among atmospheric trace gases and the biosphere, and because in winter Arctic air masses moved southeastward across North America, the changing chemistry of the Arctic could also impact air pollution in the mid-latitudes. The Arctic, like the Amazon, had largely been neglected, however, justifying field research in the region. These expeditions became known as ABLE 3A, carried out during July and August 1988, and ABLE 3B, carried out during July and August 1990.23

The first phase of the Arctic boundary layer expedition took place primarily in Alaska, with the Wallops Electra operating out of Barrow and Bethel for most of the experiment. As in the Amazonian expeditions, the GTE project office had erected a micrometeorological tower as well as placing enclosure measurements in selected areas to sample soil emissions. The mission scientists had been particularly interested in the methane emissions of the lowland tundra, which was dominated by peatland and shallow lakes, and placed their ground instrumentation in the Yukon-Kuskokwim Delta region for this study. Methane emissions were known to be widely variable, dependent on the wetness or dryness of the soil and on soil temperature. Other recent examinations of tundra emissions had indicated that methane emissions increased as temperatures rose; since methane is a greenhouse gas, this would provide a positive climate feedback. In the eyes of Harriss, it would also provide an early warning system for global environmental change, as one could monitor the methane emissions as a proxy measurement for soil warming. The ABLE 3A results from the enclosure and aircraft measurements of methane emissions confirmed this earlier work; the Arctic tundra was very sensitive to temperature, exhibiting a 120 percent increase in methane emissions for a 2 degree C increase in temperature.24

The mission scientists also measured nitrogen species. The Arctic region was widely believed to be a net sink for tropospheric ozone, with destruction processes outweighing the combined effects of tropospheric production and intrusions of high-ozone air from the stratosphere. The ability of the region to continue destroying ozone depended upon concentrations of nitrogen oxides remaining low so that the Arctic troposphere itself did not become a source of ozone; given their growing awareness of the ability of long-range transport to move pollutants across thousands of miles, it was not clear this would be the case. McNeal had chosen two groups of scientists to make the nitrogen oxides measurements, one led by Harvard University’s Steven Wofsy, the other by John Bradshaw from Georgia Institute of Technology. Their results indicated that the region was, as expected, a net sink for nitrogen species and ozone during summer. However, doubling the nitrogen oxides levels would transform the Arctic into a net source of ozone, and growing levels of industrial pollution being transported in could achieve that.25

ABLE 3B expanded on these results two years later with a deployment to the Canadian subarctic region around Hudson Bay. This area was the second largest wetland in the world and thus a major source of natural methane emissions; studying it, the study’s scientists believed, would contribute to overall understanding of what they called the “chemical climatology” of North America. Their results here, however, were remarkably different from the ones in the Alaska experiment. Methane flux from the wetlands region was much lower than expected, while stratospheric intrusions of ozone were higher than they had been in the previous expedition. Ed Browell’s DIAL laser system showed that aerosol plumes from forest fires affected a significant portion of the troposphere in the region. Steve Wofsy concluded that these fires were the primary source of hydrocarbons and of carbon monoxide in the high latitudes, with industrial emissions contributing less than a third of these contaminants. The DIAL laser’s ability to distinguish air masses by their aerosol and ozone composition also led the mission scientists to conclude that they had found several examples of air parcels transported in from the tropical Pacific, raising questions about very long-range transport. The chemical age of these air masses was about fifteen days, and they contained much lower amounts of most of the trace species the mission scientists were looking for than the Arctic background air.26 This led Harriss and McNeal to propose shifting future missions to the Pacific, in order to examine the Pacific basin’s chemical climate. These expeditions took place later in the 1990s, after some other important investigations had expanded knowledge of atmospheric chemistry.

The final major field experiment prior to GTE’s Pacific shift took place in 1992. This was TRACE-A, the Transport and Atmospheric Chemistry near the Equator-Atlantic expedition. Planned in conjunction with a major international field experiment to characterize the atmospheric impact of biomass burning in Africa, the Southern African Fire-Atmosphere Research Initiative (SAFARI), the TRACE-A mission was conducted out of Brazil to enable study of transatlantic transport properties. The overall initiative had grown out of a meeting in November 1988 at Dookie College, Victoria, Australia, that had been held to develop the scientific goals for a new International Global Atmospheric Chemistry Program. GTE had already shown that biomass burning and the associated land-use changes had global-scale impacts, but its two Brazilian field expeditions had been too limited in scope to establish longer-range processes. With TRACE-A in Brazil, equipped for long overwater flights, and SAFARI’s scientists to the east in Africa, the scientific community could begin to quantify more fully the chemistry and transport of fire emissions.27

For TRACE-A, GTE switched from the low-altitude, short-range Electra aircraft to the higher-altitude, longer-range DC-8 from Ames Research Center. The expedition’s goal was to trace the movement of burning-derived aerosol plumes as they moved eastward across the Atlantic, and this happened at higher altitudes than the Electra could efficiently reach. The instrumentation on the DC-8, however, was essentially the same. It was equipped to measure tracer species and aerosols as well as the reactive gases that produced ozone. The greater range enabled it to make flights across the entire tropical Atlantic to Africa, permitting it to trace air masses through the whole region. This ability enabled the science teams to determine that the high levels of ozone that Reichle’s MAPS instrument and that Donald Heath’s Total Ozone Mapping Spectrometer (TOMS) instrument both saw over the tropical Atlantic came from reactive nitrogen species, some of which flowed in from Africa, some from South America, and some, in the upper troposphere, that seemed to be generated in situ. This led the scientists to speculate that lightning was a significant source of active nitrogen in the upper troposphere. But the fact that polluted air flowed into the Atlantic basin from both directions (at different altitudes) was surprising; the meteorology of the tropics was more complex than they had expected.28

The 1980s, then, witnessed a dramatic shift in scientific understanding of tropospheric chemistry. While NASA was hardly the only organization studying the question, it had the ability to mount large, multi-investigator experiments to examine the full range of scales in the atmosphere. McNeal, Harriss, and the many other GTE experimenters used that capability to develop new knowledge about the biosphere’s interaction with the atmosphere, demonstrating in the process that nonindustrial, but still anthropogenic, emissions were substantial suppliers of ozone precursors to the atmosphere. While there is no doubt that the local residents of South American and African regions already knew that their activities produced polluted air, these expeditions demonstrated the global reach of these fire emissions and caused Western scientists to begin accounting for them in global studies. After GTE’s missions of the 1980s and early 1990s, pollution was no longer merely a problem of modern industrial states. Instead, the increasing scale of biomass burning, tied directly to increasing population pressures in the underdeveloped world, contributed at least as much to global ozone pollution as the industrial regions.

In the process of carrying out its own research agenda, the Tropospheric Chemistry Program also contributed to the scientific capacities of the host nations. Many of the investigators who participated in the field expeditions were funded by the Brazilian and Canadian governments. McNeal’s strategy had been to use the prominence of a NASA visit to raise the visibility of local scientists to their own governments, improving their status and funding prospects.

VOLCANIC AEROSOLS

Another area of research NASA entered during the late 1970s was aerosols, particles suspended in the atmosphere. In trying to understand Venus’s atmosphere, James Pollack’s group at Ames Research Center had undertaken detailed studies of sulfate aerosol chemistry and radiative impacts. In a similar vein, Brian Toon had written his doctoral thesis on a model comparison between Martian climate shifts and potential shifts in Earth climate caused by volcanic explosions. He had employed data from the Mariner 9 mission and data from historic volcanoes on Earth to conduct his study. He had moved to Ames from Cornell in 1973 to work with Pollack on the Pioneer Venus project.

Sulfate aerosols also tended to be injected into the Earth’s atmosphere by both human sources (coal-fired power plants, for example) and volcanoes, and as the Mars and Venus efforts wound down, Pollack’s group turned their efforts toward understanding how volcanic eruptions on Earth might affect climate. There was some historical evidence that large volcanic explosions did cause global cooling. In 1815, Mount Tambora in the Dutch East Indies had exploded, turning 1816 into a “year without a summer.” Half a world away, in New England, frosts continued throughout the summer of 1816, causing widespread crop losses. Speculation at the time connected the sudden cold to dust from the explosion; the much better recorded (although smaller) Krakatoa eruption of 1883 had been identified as the probable cause of a weaker cooling the following year.29

The Krakatoa explosion was the first from which usable information about the distribution of volcanic debris was available, primarily from a scattered handful of astronomical observatories; a better set was available from the 1963 eruption of Mount Agung on Bali, Indonesia. Pollack’s group used their model to examine the response of the model climate system to elevated levels of volcanic debris in the stratosphere, finding that their model results were consistent with the observed cooling from the Agung explosion. This they traced largely to the radiative effects of the sulfate aerosols, not to the ash component of the ejecta plume. They concluded that a single, large explosion could produce globally averaged cooling of up to 1 degree K. If a series of such explosions occurred over a period of years, this cooling effect could be deepened and prolonged. In their published paper, they also discussed a number of sources of potential errors, including the limited available data and the inability of their model to simulate cloudiness changes (a flaw common to all such models).30

The 18 May 1980 eruption of Mount St. Helens in Washington State happened to be well timed. NASA had two satellites with aerosol instruments in orbit, the Stratospheric Aerosol Measurement (SAM II) on Nimbus 7 and the Stratospheric Aerosol and Gas Experiment (SAGE) on AEM 2, with which to study the evolution of the volcanic plume, as well as aircraft-based instruments and ground-based lidars. Beginning late in 1978, Pat McCormick, Jim Russell, and colleagues from Wallops Flight Center, Georgia Institute of Technology, NCAR, NOAA, and the University of Wyoming had carried out ground truth experiments to evaluate how well the SAM II and SAGE instruments characterized the stratosphere’s aerosol layers. To accomplish these intercomparisons, they had employed a variety of balloon and aircraft sensors. NASA’s P-3, a large, four-engine aircraft originally developed for ocean surveillance, carried Langley’s lidar instrument to examine aerosol size and density by laser backscatter measurements. NCAR’s Sabreliner, which could fly in the lower stratosphere, was equipped with direct sampling instruments. The University of Wyoming’s balloon group, finally, deployed dustsondes, which provided measurements from ground level through about 28 kilometers.31

The ground truth experiments were international in scope, starting with flights from Sondrestrom, Greenland, and then moving to White Sands, New Mexico; Natal, Brazil; Poker Flat, Alaska; and finally Wallops Island, Virginia. International partners in Britain, France, Italy, and West Germany had also developed ground-based lidars to use in evaluating the SAGE data on their own. The researchers had chosen the sites to permit examining performance at both high and low latitudes; as luck would have it, while the teams were in Brazil, the volcano Soufriere on St. Vincent erupted, permitting the P-3 and SAGE satellite to examine its plume.32 The Soufriere measurements suggested that the mass of the material lofted into the stratosphere by the eruptions represented less than half of 1 percent of the global stratospheric aerosol loading, leading McCormick to conclude in a 1982 paper that the eruption was unlikely to have had any significant climate effect.33

Analysis of the Soufriere data had been delayed by the eruption of St. Helens, whose location made it a prime opportunity to document the eruption of a large volcano. St. Helens’s plume would move west-east across North America, where the existing meteorological and astronomical observing network would be able to track and record it in great detail. NASA had also been in the process of finalizing an agreement with NOAA and the U.S. Geological Survey to initiate a program called RAVE (Research on Atmospheric Volcanic Emissions), another of Pollack’s ideas, when the eruption began. McCormick’s team at Langley, using the Wallops P-3, and groups from Ames Research Center, using a U-2, and the Johnson Space Center, using an RB-57, all flew missions to characterize the volcano’s emissions as the plume moved east, the P-3 from below via airborne lidar and the other two aircraft from inside the cloud. The SAGE satellite’s orbit carried it over the plume between 20 and 28 May, adding its larger-scale data to the aircraft and ground measurements.34

In November, Langley hosted a symposium to discuss the findings of the St. Helens effort.35 The St. Helens explosion was the first Plinian-type eruption to be subjected to modern measurement technology, and it contained some surprises. The plume from the initial explosion had reached to 23 kilometers, well into the stratosphere. Most of the silica ash had fallen out of the stratosphere quickly, with researchers from Ames, Lewis Research Center in Cleveland, and the University of Wyoming’s balloon group all finding that the ash was no longer present after three months. Sulfate aerosols from the eruption, as expected, remained in the stratosphere six months later, but St. Helens had turned out to be unusually low in sulfur emissions. Based on this, Brian Toon’s group at Ames had predicted the eruption would not have any significant climate impact. Another surprise to the conferees was that chlorine species in the stratosphere had not increased as they had anticipated. Throughout the several eruption events, chlorine remained at essentially normal background levels. Many types of rock contained chlorine, and they had assumed that some of this would be transported into the stratosphere by the ejecta plume. That had not happened. Instead, the missing chlorine became a bit of a mystery.

If St. Helens’s eruption had not been expected to have a measurable impact on the Earth’s climate because it had not propelled large enough amounts of sulfates into the stratosphere, the late March 1982 eruption of El Chichón in Mexico was another question entirely. It was not a particularly large eruption, but whereas St. Helens had been unusually low in sulfur, El Chichón proved unusually high.36 Located in the state of Chiapas, this volcano produced the largest aerosol cloud of at least the previous seventy years. The cloud impacted the operation of satellite instruments that had not been designed to study aerosols—it played havoc with the Advanced Very High Resolution Radiometer (AVHRR) instrument on the operational weather satellites, for example, invalidating much of its surface temperature data, while suggesting that AVHRR might be able to provide information on global aerosol density once appropriate algorithms were developed—but the eruption offered the opportunity to study the climate effects of these aerosols in a way that St. Helens had not.

Writing for Geofisica Internacional, Pat McCormick explained that the eruption’s potential value for testing models of stratospheric transport, aerosol chemistry and radiative effects, and remote sensors had caused NASA to organize a series of airborne lidar campaigns, with supporting ground-based and airborne measurements, to examine the movement of the volcano’s aerosols from 56S to 90N latitudes. (The SAGE satellite had failed late in 1981 due to contamination of its batteries and was not available to help follow the movements of the aerosols.) The first airborne mission, in July 1982, flown from Wallops Island to the Caribbean, was exploratory, with the researchers trying to determine whether the edge of the aerosol cloud was detectable. For a later effort in October and November, they orchestrated a series of ground and other in situ measurements timed to coincide with their aircraft’s flight path to provide data for comparison purposes. These flights extended from the central United States to southern Chile, to assess the spread of the aerosols into the Northern Hemisphere mid-latitudes. Two more series of flights in 1983 examined the aerosol dispersion into the northern high latitudes and into the Pacific region, and a final series of flights into the Arctic in 1984 reached the North Pole.37

Even without SAGE, however, satellites proved able to provide unexpected information about the volcanic plume. Because the cloud was so dense, the geosynchronous imaging satellites had been able to track it throughout its first trip around the world. During this first circumnavigation, it spread to cover a 25 degree latitude belt. After it could no longer be followed on the visible-light imagery, other satellites could still detect it. The Solar Mesosphere Explorer, which had a stratospheric infrared water vapor sensor, was able to follow the mass via its infrared signature. It showed that most of the aerosol mass remained south of 30N for more than six months, the result of an unexpected stratospheric blocking pattern. Similarly, the TOMS instrument on Nimbus 7, which utilized ultraviolet backscatter for detection, proved able to trace the sulfur dioxide from the eruption.

And unlike the St. Helens eruption, the El Chichón eruption was followed by substantial ozone losses extending into the mid-latitudes. It was not clear what the cause of this was. Using an infrared spectrometer, two researchers at NCAR had measured greatly increased levels of hydrogen chloride in the leading edge of the cloud as it passed over North America six months later. This was one of the reservoir species resulting from the complex ozone depletion reaction, and its presence was evidence that chlorine was responsible for the missing ozone.38 It was not clear how it had gotten there, however. While they assumed that it was derived from chlorine species released from the volcano, that was not necessarily the case. No one had actually measured this chlorine species close to the source, and, of course, the previous investigation of Mount St. Helens had shown that chlorine levels had not been significantly affected. In the El Chichón case, the chemistry mystery was further deepened by a clear correlation between maximum aerosol density and maximum ozone loss in time and space. This suggested that aerosols were somehow involved, but in ways that were not at all obvious or easy to parse out. The data from El Chichón itself could not resolve the mystery.

Finally, the predicted climate impact of the eruption was not apparent. In his review of El Chichón’s impact on scientific knowledge, David Hofmann noted that whatever climate impact El Chichón might have had was masked by an unusually strong El Niño the following year. El Niño, a phenomenon that causes dramatic, short-term meteorological changes, begins with formation of a large, unusually warm body of water in the Pacific. Eventually, that warm pool forces a temporary reversal of the Pacific equatorial current, bringing the warm water to the west coast of North and South America. There it produces torrential rains, which effectively transfer the excess heat of the upper ocean into the troposphere. Hence, a strong El Niño translates into general tropospheric warming over a period of about six months. By removing El Niño’s effect mathematically, James K. Angell of NOAA had argued that the El Chichón cooling had occurred. In effect, it had slightly weakened the El Niño that had been forming. The cooling effect he found had been slightly weaker than predicted by the climate models, but was the same order of magnitude, providing a measure of confirmation.39 This result met with some skepticism, however. While mathematical adjustments to data were normal practice in science, they could lead one astray. The only way to prove Angell’s point definitively would have been to do the eruption over again while preventing an El Niño from forming, thus eliminating one variable from contention. That, of course, was not possible. Confirmation would have to wait for another volcanic explosion to occur.40
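The general technique of removing an El Niño signal can be sketched briefly: regress the temperature record on an ENSO index, subtract the fitted component, and inspect what remains. The index and anomaly values below are invented for illustration, and this is not a reconstruction of Angell’s actual procedure.

```python
# Illustrative only: remove an ENSO-correlated component from a temperature
# series by linear regression and look at the residual. Invented numbers.
import numpy as np

years = np.arange(1982, 1986)
enso_index = np.array([1.8, 2.0, -0.3, -0.4])        # a Nino-3.4-like index (hypothetical)
temp_anomaly = np.array([0.28, 0.22, -0.12, -0.05])  # tropospheric anomaly, deg C (hypothetical)

# Least-squares fit of the anomaly onto the ENSO index, then subtract the fit.
slope, intercept = np.polyfit(enso_index, temp_anomaly, 1)
residual = temp_anomaly - (slope * enso_index + intercept)

for year, value in zip(years, residual):
    print(f"{year}: ENSO-removed anomaly {value:+.2f} C")
```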

HOLES IN THE OZONE LAYER

In May 1985, as the first international assessment of the state of ozone science was being prepared under the World Meteorological Organization’s (WMO) auspices, researchers at the British Antarctic Survey revealed that there were huge seasonal losses of ozone occurring over one of the Dobson instruments located in the Antarctic.41 The 30 percent losses they were seeing were far more than expected under existing theory; at this point in time, the consensus was that there had been about 6 percent depletion globally. Nor had the TOMS/SBUV science team reported the existence of deeply depleted regions. If the Dobson instruments were to be believed, both theory and satellites were in error somehow.

The Antarctic Survey’s paper came out when many of the leading stratospheric scientists were meeting in Les Diablerets, Switzerland, to review the draft of the next major international ozone assessment, which was due that year. This document updated the status of the laboratory-based efforts to refine the rate constants of the nitrogen and chlorine catalytic cycles involved in ozone chemistry and their incorporation into chemical models, and it examined the recent history of stratospheric measurements. The Balloon Intercomparison Campaigns (BIC) that Bob Watson’s office had funded in the early 1980s and the fact that the most recent several years of laboratory measurements had not produced major changes to reaction constants had resulted in a belief that the gas-phase chemistry of ozone was finally understood. In reviewing and summarizing all this, however, the 1985 assessment, which was also the first multinational assessment sponsored by WMO, had grown to three volumes, totaling just over a thousand pages.

The Antarctic Survey’s announcement thus came as a bit of a shock. Some of the conferees at Les Diablerets were already aware of the paper, as Nature’s editor had circulated the paper to referees during December 1984.42 The paper’s authors had raised the question of a link to chlorine and nitrogen oxides, in keeping with the prior hypotheses regarding depletion mechanisms, but did not provide a convincing chemical mechanism. The gas-phase chemistry of ozone did not appear to enable such large ozone losses. The paper was too late to incorporate into the 1985 assessment, however, and the conference did not formally examine it. It was much discussed, however, in the informal hallway and dinner conversations that accompany conferences. Adrian Tuck, then of the U.K. Meteorological Office, recalls that many of the attendees were inclined to ignore the paper, as it was based on only the measurements of one station.43 Several earlier attempts to find an overall trend in the Dobson data had not succeeded, defeated by the combination of quite significant natural variation in ozone levels and the inability to resolve differing calibrations. The network’s data had gained some disrepute in the community because of this. But Tuck had found Farman’s paper well enough done to be taken seriously. He knew Joseph Farman, the paper’s lead author, to be a very careful researcher. There was clearly something wrong with either the Dobson instrument or the stratosphere.

Farman’s paper caused Goddard Space Flight Center’s Richard Stolarski to look again at the TOMS/SBUV data. Farman had written to Donald Heath, the TOMS/SBUV principal investigator, well before publishing his paper but had not gotten a response. But the TOMS/SBUV should have detected the depleted region described in the paper if it were real and not an artifact of the Dobson instrument, and Stolarski found that it had. The TOMS/SBUV’s inversion software contained quality-control code designed to flag ozone concentrations below 180 Dobson units as “probably bad” data.44 Concentrations that low had never been seen in Dobson network data, and could not be generated by any existing model. It was impossible as far as anyone knew, and it was a reasonable quality-control setting based on that knowledge. But the Antarctic ozone retrievals had come in well below the 180 unit setting. The TOMS team’s map of error flags had shown the flagged retrievals concentrated over the Antarctic in October. The team had ignored them, however, assuming the instrument itself was faulty.
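How such a threshold can hide a real signal is easy to sketch. The 180 Dobson-unit cutoff comes from the account above; the function name and sample values below are hypothetical, not the actual TOMS/SBUV processing code.

```python
# Illustrative sketch of threshold-based quality control: retrievals below a
# fixed cutoff are set aside as "probably bad," so a genuinely low Antarctic
# spring value never reaches the ozone maps. Values here are hypothetical.
QC_THRESHOLD_DU = 180.0  # total ozone below this was flagged "probably bad"

def screen_retrievals(total_ozone_du):
    """Split retrievals into accepted values and flagged ones."""
    accepted, flagged = [], []
    for value in total_ozone_du:
        (flagged if value < QC_THRESHOLD_DU else accepted).append(value)
    return accepted, flagged

# A hypothetical October scan over Antarctica: the real, very low springtime
# values all land in the flagged bin and drop out of the accepted data.
october_scan = [305.2, 288.7, 172.4, 151.9, 148.3, 296.1]
good, suspect = screen_retrievals(october_scan)
print("accepted:", good)
print("flagged as probably bad:", suspect)
```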

By the end of June that year, Stolarski had reexamined the data and found that the depleted region encompassed all of Antarctica—the “ozone hole” that rapidly became famous. In August, at a meeting in Austria, Heath showed images generated from the data for 1979–83 depicting a continent-sized region in which ozone levels dropped to 150 Dobson units.45 Plate 2 shows the phenomenon two years later. It was after these images began circulating that a great many atmospheric scientists (and policymakers) began to take the ozone question seriously again. On one level, the images offered confirmation that Farman’s data reflected a real phenomenon and not an instrument artifact, and thus merited scientific investigation. They also demonstrated that it was not a localized phenomenon. The Dobson measurements were point measurements, taken directly overhead of the station, while the TOMS/SBUV data covered the entire Earth.

The TOMS data images placed the depleted region into perspective, in a sense, showing the geographic magnitude of the phenomenon. On another level, like the older images of the Earth from space, these images were viscerally powerful. They evoked an emotional response, suggesting a creeping ugliness beginning to consume the Southern Hemisphere. JPL’s Joe Waters, for one, saw it as a cancer on the planet.46 While almost no one lived within the boundaries of the depleted region, if it grew very much in spatial extent, it would reach populated landmasses. And since no one knew the mechanism that produced the hole, no one could be certain that it would not grow.

Susan Solomon at the NOAA Aeronomy Lab had also been struck by the Farman paper, but also recalled that a group in New Zealand had measured unusually low nitrogen dioxide levels.47 In the stratosphere, nitrogen dioxide molecules react with chlorine monoxide to form sink species, thereby removing the chlorine from an active role in destroying ozone. She speculated that the relative lack of nitrogen dioxide could mean increased chlorine levels through some unknown mechanism. She also drew on two other bits of recent work to forge a model of what that mechanism might be. In 1982, Pat McCormick had published a paper based on SAM II data showing the presence of what he called Polar Stratospheric Clouds (PSCs). While these had been known since the late nineteenth century, they had been thought to be rare—there had been few written accounts of them. But they were actually quite extensive, according to his data.

The other bit of information she drew on was work done during 1983 and early 1984 by Donald Wuebbles at Lawrence Livermore and Sherry Rowland. They had begun investigating the possibility that chemical reactions in the stratosphere might happen differently on the surface of aerosols than they did in a purely gaseous phase. At a meeting in mid-1984, they had shown data from laboratory experiments that suggested that the presence of aerosols did serve to alter the chemical reaction pathways. Their model, which was very preliminary and based on a number of hunches, had suggested this heterogeneous chemistry could be responsible for depletion rates of up to 30 percent.48

The ozone hole extended back to 1979, and therefore could not be related to either El Chichón or volcanic activity more generally. There had been no significant eruptions between 1963 and 1979, eliminating the possibility that volcanic aerosols were to blame. But McCormick’s PSCs were an obvious alternate suspect. While the composition of the PSCs could not be determined from the SAM II data, the temperatures they seemed to be forming at were consistent with water ice and with nitric acid trihydrate—also mostly water. The mechanism Solomon and her co-workers proposed depended upon the presence of water, which would react with chlorine monoxide to release chlorine. This could only happen if nitrogen oxide species concentrations were extremely low, however, because they would normally remove the chlorine monoxide into a reservoir species more quickly. To get the very low concentrations of nitrogen oxides, a reaction on the surface of PSC crystals involving chlorine nitrate was necessary. This reaction created nitric acid and a photolabile chlorine compound; the latter would decompose under the Antarctic sunrise to release chlorine.49
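The cloud-surface and sunrise steps of such a mechanism are commonly written as the heterogeneous and photolysis reactions below; this is a sketch in now-standard notation, offered as an illustration rather than a reproduction of the equations in the 1986 paper.

```latex
% Chlorine nitrate reacting on polar stratospheric cloud (PSC) particles
% produces nitric acid, which stays in the particle, plus a photolabile
% chlorine compound (both reactions occur on PSC surfaces):
\[
  \mathrm{ClONO_2 + H_2O \rightarrow HOCl + HNO_3}
\]
\[
  \mathrm{ClONO_2 + HCl \rightarrow Cl_2 + HNO_3}
\]
% At polar sunrise the chlorine-bearing products photolyze, releasing atomic
% chlorine, while the sequestered nitric acid keeps nitrogen oxides too low
% to re-form the chlorine nitrate reservoir:
\[
  \mathrm{Cl_2 + h\nu \rightarrow 2\,Cl}, \qquad
  \mathrm{HOCl + h\nu \rightarrow OH + Cl}
\]
```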

As the TOMS satellite images of stratospheric ozone spread through the atmospheric research community during late 1985, the number of potential mechanisms expanded. A group led by Michael McElroy at Harvard University proposed a chemical mechanism that also included reactions involving PSC surfaces, but was focused on the release of bromine. Various meteorologists proposed several dynamical mechanisms, generally postulating ways that low-ozone tropospheric air might have ascended into the stratosphere, causing the hole simply by displacing the normally ozone-rich stratospheric air. Finally, Linwood Callis at Langley Research Center proposed a solar mechanism. In his hypothesis, odd nitrogen produced by high-energy particles hitting the upper atmosphere might be descending into the stratosphere, where it would destroy ozone.50 The solar cycle had reached an unusually high maximum in 1979, coinciding with the formation of the ozone hole. This mechanism had the happy implication that the hole would eliminate itself naturally: as solar activity returned to normal in the late 1980s, the hole would disappear on its own.

There were, then, three general classes of mechanisms proposed to explain the ozone hole by mid-1986: anthropogenic chemical (i.e., variations on chlorofluorocarbon [CFC] depletion chemistry), natural chemical (odd nitrogen), and dynamical. These hypotheses were testable, in principle, by measurements. During the preceding years, NASA and NOAA had fostered instruments for stratospheric chemistry that could look for chemical species required by the chemical hypotheses and had also developed the capacity to examine stratospheric dynamics. The instruments aboard the UARS had been intended to measure the key species as well as stratospheric temperature and circulation, and had it been available would have made selection of the most appropriate hypothesis far simpler than the process that actually played out. But while finished, it could not get into space. Instead, NASA resorted to field expeditions to resolve the controversy.

In February 1986, shortly before a workshop scheduled to discuss the proliferation of depletion hypotheses, the supporters of a chemical explanation for the ozone hole gained a significant boost from JPL. Barney Farmer’s Atmospheric Trace Molecule Spectroscopy (ATMOS) instrument had flown on Space Shuttle Challenger (STS-51B) during the first two weeks of May 1985. Capable of measuring all of the chlorine and nitrogen species involved in the photochemical depletion hypothesis except the key active species chlorine monoxide, the instrument had permitted the science team to produce the first complete inventory of these species in the May 1985 stratosphere. The inventory included several first detections of trace species and covered the entire active nitrogen family and all the chlorine source species, including the manmade CFC-11, CFC-12, HCFC-22, and the primary natural chlorine source, methyl chloride. It also measured the primary sink species, hydrogen chloride, and the data showed the expected diurnal variation in concentrations.51

The ATMOS measurements were also important in that they were the first simultaneous measurement of all the trace species. Previous measurements of the various chemicals had been made at different times and places, by different investigators, using different techniques. This had made it very difficult to claim that differences between measurements were chemical in origin and not a product of experimental error or natural variation. ATMOS effectively eliminated those sources of error. It provided confirmation that some of the species predicted to exist in the stratosphere actually were there, that they existed in approximately the expected ratios, and that they varied in the course of a day the way theory said they should. While ATMOS could not see the poles from the Shuttle’s orbit, and had flown at the wrong time of year for the Antarctic phenomenon in any case, it provided a substantial credibility boost by demonstrating that all of the chemical species required by the CFC thesis existed in the stratosphere.

At a meeting at the Aeronomy Lab in March 1986, the proponents of each of the ozone hole theories had their chance to explain their ideas to their peers. While the meeting had not been called to plan a research program to demonstrate which, if any, of these hypotheses happened to be true, the collected scientists came up with one anyway. Adrian Tuck recalls that Arthur Schmeltekopf, one of the laboratory’s senior researchers, pointed out that the instrumentation necessary to select between the hypotheses already existed in one form or another.52 For either of the chemical mechanisms to be correct, certain molecules had to be found in specific ratios relative to other molecules. Barney Farmer’s infrared spectrometer could measure most of the chlorine and nitrogen species in question. It could not fly on any of the Shuttles, which were grounded after the loss of Challenger in a launch accident that January, and balloons large enough to carry it could not be launched from the American Antarctic station at McMurdo. But his balloon version, known simply as the Mark IV interferometer, would work perfectly well as a ground-based instrument. David Hofmann’s ozonesonde and dustsonde balloons could provide measurements at various altitudes within the polar vortex, permitting evaluation of the dynamical hypothesis, and he was already a veteran at making these measurements in the Antarctic. Robert de Zafra at the State University of New York, Stony Brook, had developed a microwave spectrometer that could remotely measure chlorine monoxide, the key species Farmer’s instrument could not sense. Finding high levels of chlorine monoxide in the Antarctic stratosphere was crucial to verifying the anthropogenic chemical hypotheses. This too was a ground instrument. Finally, the Aeronomy Laboratory’s Schmeltekopf had developed another remote sensing instrument to measure nitrogen dioxide and chlorine dioxide. This was a visible-light spectrometer that employed moonlight.

The meeting participants did not expect that a single, primarily ground-based expedition would be conclusive. It would not, for example, provide the kinds of data necessary to disprove the dynamical thesis. Demonstrating that the upwelling proposed by the dynamics supporters was not happening would require simultaneous measurements from a network of sites within the vortex, very similar to the meteorological reporting networks used for weather forecasting in the industrialized nations. NASA’s Watson wanted to launch this first expedition in August 1986, which left much too little time to arrange for additional ground stations. Further, expedition scientists at McMurdo would only be making their measurements from the edge of the polar vortex, not deep within it. This limited the utility of the results. Finally, the remote sensing instruments being used would not necessarily be seen as credible. Much of the larger scientific community still resisted remote sensing, preferring in situ measurements. At the very least, in situ measurements provided corroboration of potentially controversial results.

To better address these potential criticisms, Watson, Tuck, Schmeltekopf, and others also sketched out a plan for a second expedition using aircraft. This was based upon the payload assembled for a joint experiment planned for early 1987 to examine how tropospheric air was transported into the stratosphere. Known as STEP, for Stratosphere-Troposphere Exchange Project, this had been the idea of Edwin Danielsen at Ames Research Center. Danielsen had conceived of ways to use tracer molecules, including ozone and nitrogen oxides, to investigate vertical air motion, and had assembled an instrument payload for the NASA ER-2 (a modified U-2 spyplane). A key unknown in the transport process was how moist tropospheric air dried as it moved into the stratosphere, and this question was STEP’s principal target. Two new instruments, an in situ ozone sampler devised by Michael Proffitt at the Aeronomy Lab, and an in situ NOy instrument built by David Fahey, also of the Aeronomy Lab, had been chosen for this tracer study, supplemented by a water vapor instrument built by Ken Kelly at the Aeronomy Lab, aerosol instruments from NCAR and the University of Denver, and nitrous oxide instruments from Ames Research Center and NCAR.53 Finally, a new version of James Anderson’s chlorine monoxide instrument rounded out the payload.

The first ground-based expedition to figure out the ozone hole was carried out between August and October 1986. The NSF, which ran McMurdo Station, handled the logistics of moving the thirteen members and their equipment down to the Antarctic. Susan Solomon had volunteered to be the expedition leader after Art Schmeltekopf had not been able to go. The four experiment teams were all able to make measurements successfully, with Hofmann’s dustsondes showing that aerosols were descending, not ascending, tending to refute the dynamical hypotheses, while the NOAA spectrometer showed high levels of chlorine dioxide and very low levels of nitrogen dioxide, in keeping with the anthropogenic chemistry hypothesis and in opposition to the solar cycle thesis. The SUNY Stony Brook instrument recorded high levels of chlorine monoxide, again as expected under the anthropogenic hypothesis. Only the JPL team could not reduce their instrument’s readings to chemical measurements in the field. They needed access to computers back in California. But the information from the other instruments was sufficient to convince the researchers that the anthropogenic hypothesis was probably correct. The data they had was clearly consistent with the chlorofluorocarbon theory, and clearly inconsistent with the others.

Before the team left Antarctica, they participated in a prearranged press conference to explain their results, and here they raised what was probably an inevitable controversy. Solomon, who was too young to have been a participant in the ozone wars of the 1970s, made the mistake of giving an honest answer: that the evidence they had supported the chlorofluorocarbon depletion hypothesis, and not the others. Widely quoted in the mainstream press, her statement outraged proponents of the other hypotheses, and they were quite vocal in complaining about it to reporters. The most aggrieved parties were the meteorologists who had proposed the dynamical theses. The evidence against the dynamical thesis, Hofmann’s aerosol measurements, had not been circulated in the community (it was still in Antarctica), so no one could check or absorb it. The team thus returned to what appeared to be a vicious little interdisciplinary conflict, carried out via the mainstream press.54

Yet this conflict in the popular press did not really have an impact on the research effort, suggesting that it was far less real than the press accounts at the time implied. British meteorologist Adrian Tuck, for example, who had been involved in the planning for the expedition, did not doubt that both dynamics and chemistry had roles in the hole’s formation. The relevant questions involved the details of the processes and their relative contributions, considerations that were not well described in the press.

Hence, the media controversy did not much affect planning for the airborne experiment. The expedition’s goals had included closer attention to dynamical concerns in any case. It was obvious that even if dynamics were not solely responsible for the depleted region, they certainly played a role in establishing the conditions that formed and maintained it.55 The use of aircraft, particularly the fragile and difficult-to-fly ER-2, required a more extensive meteorological infrastructure for the mission, whose data would also be available to test the dynamical thesis. Watson had been able to convince the Ames Research Center’s leadership to provide the ER-2 as well as the DC-8, equipped with many of the instruments deployed by GTE and with Barney Farmer’s Mark IV interferometer. Estelle Condon was assigned to be the project manager for the expedition. Watson had also chosen Adrian Tuck, who had joined the Aeronomy Lab in 1986, as the mission scientist. Tuck had gained considerable experience at carrying out airborne sampling missions in the U.K. Meteorological Office, where he had originally been hired to help figure out the Concorde’s impact on the stratosphere.56 Brian Toon from Ames was his second, due to his long-standing aerosol research interests.

The airborne mission was flown from Punta Arenas, Chile, beginning in August 1987. Condon had arranged for the conversion of one of the hangars at the local airfield into a laboratory and office complex. Art Schmeltekopf had convinced Watson that the principal investigators should be made to convert their instruments’ data into geophysical variables (i.e., into temperature, pressure, concentrations of a particular molecule) within six hours after a flight, and post them for the other scientists to see. This was intended to solve a perennial problem. In many other NASA and NOAA field campaigns, investigators had sent their graduate students and not come themselves, meaning data did not get reduced until long after the expedition was over. Sometimes, it disappeared entirely and no one ever saw results. This expedition was too important to permit that. Further, the expedition’s leaders needed to know what the results of one flight were before planning the next. Moreover, some of the instruments needed daylight, and some of the observations needed to be done at night. Hence sound planning demanded a quick data turnaround. One consequence was that the hangar at Punta Arenas had to be converted into a fairly sophisticated laboratory complex, complete with computers to do the data reduction. Ames’s Steve Hipskind shipped four standard cargo containers’ worth of gear there, and rented an Air Force C-141 cargo aircraft to carry the scientists’ equipment down in August.57

The expedition’s leadership had also established a satellite ground station at Punta Arenas, just as had been done for the GARP Atlantic Tropical Experiment (GATE) in 1974. Up-to-date meteorological information was necessary to plan the aircraft missions. The ER-2 was very limited in the range of winds it could take off and land in, and both it and the DC-8 were at risk of fuel tank freezing if temperatures at their cruise altitudes were too cold. Further, the scientists wanted the aircraft to fly into specific phenomena, requiring an ability to predict where they would be when the aircraft reached the polar vortex.

Tuck arranged to borrow from the U.K. Met Office a pair of meteorologists who had done forecasting for the Royal Navy during the Falkland Islands war. He also chose to use forecast analyses from the Met Office, and a meteorological facsimile link was established to permit near-real-time transmission from Bracknell. Further, to ensure the flight plans carried the two aircraft into the ozone hole, and through and under PSCs, the expedition leaders and pilots wanted access to real-time TOMS and SAM II data, and to data from the second-generation SAGE instrument, launched in 1984. Because the TOMS, SAM, and SAGE instruments all required sunlight to operate and the mission leaders wanted the DC-8 to make nighttime flights, the Met Office also borrowed an algorithm from the Centre National de Recherches Météorologiques that computed total ozone maps from the High-resolution Infrared Radiation Sounder (HIRS/2) instrument on the NOAA operational weather satellites.58 Because HIRS sensed emission, and not absorption or backscatter, it was independent of the Sun.


(Opposite) The experiments carried during AAOE. AAOE Equipage: A. F. Tuck, et al., “The Planning and Execution of ER-2 and DC-8 Aircraft Flights over Antarctica, August and September 1987,” Journal of Geophysical Research 94:D9 (30 August 1989), 11,183. Reproduced with the permission of the American Geophysical Union.

About 150 scientists took up residence in Punta Arenas for the expedition that August. During the two-month expedition, the research aircraft flew twenty-five missions, twelve for the ER-2 and thirteen for the DC-8, surprising many of the participants. The difficult weather conditions in the early spring Straits of Magellan, the thirty-plus-year ages of the two aircraft, and the great distance from spare parts had led the expedition’s leaders to hope for half that many before the winter vortex broke up. Meteorologically, they were lucky, and the aircraft ground crews provided sterling service in keeping the aircraft ready. There were incidents that colored future expeditions, however. The predicted winds for the DC-8’s cruise altitude were off by half (too low) on two occasions, forcing emergency aborts due to insufficient fuel. And the temperature at the ER-2’s cruise altitude also tended to be overestimated, leading to the wing-tip fuel tanks freezing. The chief ER-2 pilot, Ron Williams, had expected that, however, and had calculated the rate at which the fuel would thaw and become available for the return trip. These incidents served as reminders that this research was also dangerous business.

The principal technological challenge during the mission turned out to be Anderson’s chlorine monoxide instrument. His group at Harvard had had about six months to prepare it, and had assembled and flight-tested it for the first time in June. In the expedition’s first ER-2 flight, however, it had failed just as the aircraft reached the polar vortex. But it worked again when the aircraft landed, leading the team to suspect that the intense cold was triggering the failure. One of his assistants wrote new software prior to the second flight that logged all of the instrument’s activities in hope of determining the fault; this led them to a space-qualified connector between the instrument and its control computer that was opening under the intense cold.59

Hence the third flight, 23 August, produced the first useful data from the instrument. This flight showed chlorine monoxide approaching levels nearly 500 times normal concentrations within the polar vortex, while ozone, as measured by Proffitt’s instrument, appeared to be about 15 percent below normal. As flights through September continued, the ozone losses deepened, and the two instruments demonstrated a clear anti-correlation between chlorine monoxide and ozone. The most striking anti-correlation occurred on 16 September. The ER-2’s flight path took it through a mixed area in which chlorine monoxide and ozone moved repeatedly in opposition, as if locked together, leaving little doubt among the experimenters that chlorine was responsible for the ozone destruction. By the end of the third week of September, the ER-2 was encountering parts of the polar vortex in which nearly 75 percent of the ozone at its flight altitude had been destroyed.60 While correlation did not in and of itself prove causation, none of the other hypotheses could explain this piece of evidence.

There was quite a bit of additional data gathered during the expedition that was also relevant to theory selection. David Fahey’s experiment produced data that strongly suggested that the PSCs were composed of nitric acid ice. It had found highly elevated levels of nitric acid while flying through them; cloud edges were clearly visible in his data. JPL’s Mark IV spectrometer’s data showed vapor-phase nitric acid increasing toward the end of September, as the stratosphere warmed, also suggesting that it existed in a condensed form prior to that. Its measurements of the active nitrogen family also clearly showed that these trace species were substantially reduced, corroborating the in situ measurements. Measurements made by Max Lowenstein from Ames Research Center, by Michael Coffey and William Mankin from NCAR, and by Barney Farmer’s Mark IV provided clear evidence that stratospheric air within the vortex was descending throughout the period, not ascending as required by the dynamics theories.61

The evidence gathered during the Airborne Antarctic Ozone Experiment (AAOE), then, was clearly consistent with one of the three hypotheses the investigators had carried into the Antarctic, and equally clearly inconsistent with the other two. This left the researchers with little choice but to accept the hypothesis that anthropogenic chlorine was the proximate cause of the ozone hole, with the major caveat that particular meteorological conditions also had to exist to enable it. This did not necessarily mean that the hypothesis was true in an absolute sense. It remained possible that a fourth hypothesis might emerge to explain the observations even better. But in the absence of a better hypothesis, the expedition’s leaders had little choice but to accept the anthropogenic thesis as the correct one. The participants in the AAOE thus drafted an end-of-mission statement that was released 30 September, two weeks after the end of negotiations over an international protocol to limit CFC production. The statement concluded that the “weight of observational evidence strongly suggests that both chemical and meteorological mechanisms perturbed the ozone. Additionally, it is clear that meteorology sets up the special conditions required for the perturbed chemistry.”62


The anti-correlation between chlorine monoxide and ozone from 16 September 1987. This chart is often referred to as the mission’s “smoking gun” result. From: J. G. Anderson and W. H. Brune, “Ozone Destruction by Chlorine Radicals Within the Antarctic Vortex: The Spatial and Temporal Evolution of ClO-O3 Anticorrelation based on in Situ ER-2 Data,” Journal of Geophysical Research 94:D9 (30 August 1989), 11,475, figure 14. Reproduced with the permission of the American Geophysical Union.

A series of scientific meetings after the expedition served as forums to discuss its results and those of related efforts that had gone on during it. One other significant result had to be considered, and its implications for the AAOE’s results thought through. Mario Molina and his wife, Luisa Tan, working at JPL, had carried out a series of elegant experiments that demonstrated a third chlorine-based mechanism could be the primary cause of the ozone hole. This thesis proposed that if the PSCs were mostly nitric acid ice instead of water ice, they would scavenge hydrogen chloride and hold the molecules on their surfaces, where they became more available for reaction with another chlorine species (chlorine nitrate). This second reaction released a chlorine monoxide dimer while trapping nitrogen dioxide in the ice. Because nitrogen dioxide was necessary to convert active chlorine into inert reservoir species, its removal would permit very high concentrations of chlorine to occur. At virtually the same time, researchers in Britain isolated the difficult-to-measure dimer and quantified the rates of its catalytic cycle.63
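
The dimer cycle at the heart of the Molinas’ proposal is conventionally written as follows; this is a sketch of the form usually cited in the literature, not their original notation.

\begin{align*}
&\mathrm{ClO + ClO + M \rightarrow Cl_2O_2 + M}\\
&\mathrm{Cl_2O_2 + h\nu \rightarrow Cl + ClOO}\\
&\mathrm{ClOO + M \rightarrow Cl + O_2 + M}\\
&2 \times (\mathrm{Cl + O_3 \rightarrow ClO + O_2})\\
&\text{Net:}\quad \mathrm{2\,O_3 + h\nu \rightarrow 3\,O_2}
\end{align*}

Because the cycle requires sunlight but not atomic oxygen, it can run efficiently in the cold, dimly lit lower stratosphere of the polar spring, where the depletion was actually observed.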

At a meeting in Dahlem, Germany, that November, the dynamical and odd nitrogen hypotheses were again discarded in favor of a new synthesis centered on chlorine chemistry, with meteorology providing some necessary preconditions—relative confinement of the vortex and very cold temperatures. The participants also attempted to assess the relative contributions of the chemical mechanisms suggested by the laboratory efforts. The AAOE data had shown low levels of bromine oxide, limiting the potential impact of the bromine catalytic cycle that McElroy had proposed. Solomon’s mechanism could not be evaluated: its telltale species was hydrogen dioxide (HO2), which the expedition had not measured. Finally, the Molinas’ mechanism was able to explain the observed results very closely, leaving it the dominant thesis at the end of the meeting.64

This meeting’s participants also sketched out where the uncertainties still lay in their understanding of the ozone hole. The PSCs were one locus of uncertainty. While their composition seemed to be known, what particles served as the nuclei for their formation, at what temperature they formed, over what range they remained stable, and how large the PSC crystals could grow before they sedimented out of the stratosphere were all unknown. The details of the various catalytic cycles were unclear, of course, as the inability to test Solomon’s idea suggested. Further, none of the chemical models, even those including the heterogeneous chemistry, could replicate the observed depletion pattern. The depleted region extended downward in altitude more deeply than models predicted, and the overall ozone loss predicted by the models remained less than observed. This suggested that feedback processes were at work that the models did not capture. Finally, it was not clear whether the Antarctic ozone hole was relevant to the mid-latitudes where most humans lived. Models tended to treat the hole as if it were completely contained in a leak-proof vessel, but to many of the empirical scientists, this was absurd. Most of the Earth’s stratospheric ozone was produced in the tropics (where the requisite solar radiation was strongest) and was transported to the poles. That also meant the Antarctic’s ozone-poor air would be transported out after the vortex breakup in October. The difficulty, as it had been throughout the ozone conflict, lay in proving quantitatively what was qualitatively obvious.

While the AAOE and National Ozone Expedition (NOZE) II expeditions had been in progress, diplomats had been in Montreal negotiating a treaty that would cut CFC production by 50 percent. The negotiations had been deliberately isolated from the expeditions’ findings to prevent biasing them with undigested data; scientific briefings to the conferees had been provided by Daniel Albritton, director of the Aeronomy Laboratory, who did not go on the expedition.65 The resulting Montreal Protocol, of course, had no force until ratified by national governments, and this was where the science results could have an impact.66 Further, the protocol contained a clause requiring the signatory nations to occasionally revisit and revise its terms in the light of new scientific evidence—an escape clause, if the science of depletion fell apart. Instead, this became a point of conflict as additional research suggested that high depletion rates might be possible in the northern mid-latitudes, producing pressure for rapid elimination of the chemicals.

In the absence of the Antarctic data, the scientific basis for the Montreal Protocol had been the 1985 WMO-NASA assessment. While this had been limited to the gas-phase-only chemistry that had been the basis of the prior decade’s research, it had clearly documented that CFCs and their breakdown products were rising rapidly in the stratosphere. By this time, the laboratory kinetics work organized by NASA had also resulted in stabilization of the rate constant measurements. These no longer showed the large swings of the late 1970s, giving confidence that the gas-phase chemistry for the nitrogen and chlorine catalytic cycles was reasonably well understood.67 Finally, the assessment had also established a clear scientific position that ozone depletion in the mid-latitudes should be happening. From a policy standpoint, however, the assessment was weak in demonstrating that the depletion expected by the chemists was actually happening in the atmosphere.68 There were solid economic reasons to keep the CFCs flowing, and powerful economic interests relied upon the lack of evidence for real-world depletion to insist that there was no basis for regulation of CFCs.

But during the busy year of 1986, Donald Heath had circulated a paper prior to publication claiming to find very substantial mid-latitude loss based upon the Nimbus 7 TOMS/SBUV instrument. Separately, Neil Harris, one of Sherry Rowland’s graduate students, had deployed a new method of analyzing the Dobson data. In previous analyses, the annual data from all of the stations had been lumped together, which had the effect of masking potential seasonal trends. Because the stations were not all of equivalent quality, it also tended to contaminate the dataset, making data from the good stations less reliable. Harris decided to reexamine each of the twenty-two stations northward of 30N individually, using monthly averages instead of annual averages. By comparing pre-1970 and post-1970 monthly averages, he found a clear wintertime downward trend.69 The trend Harris and Rowland found was half that indicated by Heath’s data, but the two results at least pointed in the same direction.

Hence Albritton and Watson established a new group, called the Ozone Trends Panel, to resolve these conflicting bits of evidence and determine whether the data really revealed a trend. The twenty-one panel members reanalyzed the data from each of the thirty-one Dobson stations with the longest records by cross-checking them against ozone readings from satellite overflights and using the result as a diagnostic with which to correct the ground station data. This revised data revealed a clear trend of post-1970 ozone erosion that was strongest in winter, and that increased with latitude. They also performed their trend analysis with the unrevised data, which showed the same general trend, but less clearly.70 They were not able to confirm the Solar Backscatter Ultraviolet (SBUV) findings, but it remained possible that the instrument was correct. The group found clear evidence that ozone in the troposphere was increasing, which would partially mask stratospheric ozone loss from the Dobson instruments. This would result in an underestimate of ozone erosion from the Dobson network data. Tropospheric ozone would not be detectable to the SBUV, however. The SBUV might therefore have been the more accurate in this case, but the assembled scientists did not have a basis on which to evaluate the instrument’s own degradation.
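
The panel’s two-step procedure lends itself to a small illustration. The sketch below is hypothetical and greatly simplified; the function names, the simple mean-offset correction, and the winter-only least-squares fit are illustrative assumptions, not the panel’s actual algorithm. It merely captures the logic described here: coincident satellite overpasses serve as a transfer standard to adjust a station record, and a seasonal trend is then fitted to the corrected data.

import numpy as np

def correct_station(dobson, satellite):
    """Adjust one Dobson station's monthly record using coincident satellite
    overpass values as a transfer standard.

    dobson, satellite: 1-D arrays of monthly-mean total ozone (Dobson units)
    for the same months; NaN marks months with no coincident overpass.
    Hypothetical, simplified stand-in for the panel's station-by-station
    procedure, which also dealt with instrument changes and drift.
    """
    offset = np.nanmean(dobson - satellite)  # mean station-minus-satellite bias
    return dobson - offset

def winter_trend(values, years, months, winter=(12, 1, 2)):
    """Least-squares trend (Dobson units per year) from winter months only,
    the season in which the panel found the clearest post-1970 decline."""
    mask = np.isin(months, winter) & ~np.isnan(values)
    slope, _ = np.polyfit(years[mask], values[mask], 1)
    return slope

A consistently negative winter slope across the corrected stations, growing with latitude, is the kind of signal the panel reported.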

The panel’s findings were formally released 15 March 1988, one day after the American Senate voted to ratify the Montreal Protocol requiring a 50 percent reduction in CFC production. Its general conclusions were already well known, as Watson and Albritton had briefed policymakers and politicians on them, and inevitably, they had been leaked. Yet their formal press conference still drew great attention. At it, Watson, Sherry Rowland, and John Gille stated unequivocally that human activity was causing rapid increases in CFCs and halons in the stratosphere, and that these gases controlled ozone. They reported that the downward trend found in the reanalysis was twice that predicted by the gas-phase models. However, they also felt the need to specifically reject the TOMS/SBUV data as an independent source of knowledge. The large downward trend in the TOMS/SBUV data was, in the panel’s opinion, primarily an artifact of instrument degradation.71

Watson also took the occasion to call for more stringent regulation than that contained in the protocol, and this caused some controversy. As it stood in 1988, the protocol would reduce CFC production by half, and it would be capped at that level. But this was not enough for him. The inability of models to provide a credible prediction of ozone destruction meant that it was impossible to forecast a safe level of CFC production that was not zero. Watson believed a ban on the chemicals was necessary to restore the stratosphere to its original state.

The research effort did not stop with ratification of the Montreal Protocol by the Senate. There were considerable remaining uncertainties over the precise chemical mechanisms behind the unexpectedly high levels of chlorine in the Antarctic, and over the meteorological conditions that were necessary antecedents to the hole phenomenon. Further, the protocol contained a clause requiring that its terms be reexamined regularly so that it could be adjusted in the light of new scientific evidence. Since the protocol had been negotiated on the basis of the gas-phase chemistry of 1985, it might well need to be altered to reflect the heterogeneous chemistry of 1987. When added to the revised Dobson data showing clear indications of greater ozone loss in the wintertime Arctic than expected, these uncertainties left NASA’s Watson in need of more data. Conditions on the periphery of the Antarctic hole were similar to those in the core of the Arctic polar vortex in terms of temperature, for example, raising the possibility that, as chlorine continued to increase, an Arctic version of the hole might form. Far more people (including all the people paying for this research) lived in the Northern Hemisphere than in the Southern Hemisphere, and both the human and political implications were obvious.

Two indications that the chemistry of the Antarctic phenomenon might also exist in the Arctic had already been found. On 13 February 1988, the Ames ER-2 had carried its Antarctic payload on a flight from its home at Moffett Field north to Great Slave Lake, Canada. This was still south of the Arctic polar vortex, but the data from the flight clearly showed highly elevated levels of chlorine monoxide appearing suddenly north of 54N.72 The second indication that Arctic chemistry might also be perturbed came from Aeronomy Lab scientists, who had carried the spectrometer used during the NOZE and NOZE II expeditions to Thule, Greenland, the last week of January 1988. This was inside the Arctic vortex during the time they were present. They had found elevated levels of chlorine dioxide and very depressed levels of nitrogen dioxide, suggesting that the wintertime chemical preconditioning that happened in the Antarctic was also happening in the Arctic. This did not mean that a similar ozone hole would form, however. As Susan Solomon pointed out in the resulting paper, the Arctic stratosphere was much warmer, and in the years since records started in 1956, average monthly temperatures in the Arctic stratosphere had never fallen to the minus 80 degrees C that seemed to be the point at which PSCs formed.73 While PSCs did form during shorter periods of extreme cold in the Arctic, the lack of prolonged periods of extreme cold suggested that depletion would not reach the extreme levels found in the Antarctic.

These findings caused Watson to seek the opinions of Adrian Tuck, Art Schmeltekopf, Jim Anderson, and some of the other mission scientists on whether to try to do an Arctic expedition modeled on AAOE in the winter of that year or whether to wait until the winter of 1989. The principal investigators associated with the instruments had been involved in either expeditions or post-expedition conferences of one sort or another since STEP in January 1987, and it was asking a lot to send them back into the field in the winter of 1988. They decided—or, rather, Watson, Tuck, and Anderson convinced the rest—to mount the Arctic expedition sooner rather than later, in January 1989. This mission became the Airborne Arctic Stratospheric Expedition (AASE), flown out of Stavanger, Norway.74

The AASE made thirty-one flights into the northern polar vortex in January and February 1989. This particular winter proved to be unusually warm and windy at the surface, correlating to an unusually cold, stable stratosphere. The expedition’s scientists were therefore able to collect a great deal of information about the expedition’s primary targets, the PSCs. The resulting special issue of Geophysical Research Letters contained twenty-three papers on PSCs. The observations confirmed that both nitric acid and water ice clouds formed, with the nitric acid clouds dominating as expected from thermodynamic considerations, and several instruments provided characterizations of the nuclei around which the ice crystals formed. The expedition did not settle all of the questions surrounding PSCs, unsurprisingly; it was still not clear, for example, how they facilitated the process of denitrification. Richard Turco, in his summary of the expedition results, remarked that a “consistent and comprehensive theory of denitrification remains elusive.”75

As expected, the expedition found that the chemistry of the Arctic polar vortex was highly disturbed. The low levels of nitrogen species and high levels of chlorine species mirrored those in the Antarctic, and the final ER-2 flight in February actually found higher chlorine monoxide levels than had been measured in the Antarctic. This confirmed to the science teams that the same chemical pre-processing that happened in the Antarctic had happened in the Arctic. Based on measurements by Ed Browell’s lidar instrument on the DC-8, this resulted in ozone destruction just above the ER-2’s cruise altitude. This finding was corroborated by Mike Proffitt’s ozone instrument and the Ames Research Center nitrous oxide tracer measurements. It did not result in a hole like that in the Antarctic, however, because extensive downwelling was simultaneously bringing ozone-rich air in from higher altitudes, and because the polar vortex broke down before the Sun was fully up.76

Hence the message that the expedition’s scientists took out of the AASE was that all of the chemical conditions necessary to reproduce the Antarctic ozone hole existed in the Arctic, but that unusual meteorological conditions would have to occur in order for one to actually happen. Very cold temperatures would have to prevail into March, and the atmospheric waves that normally roiled the Arctic stratosphere would have to be quiescent. Such conditions were not impossible.

The combined results of the Ozone Trends Panel and the four field expeditions caused the Montreal Protocol to be renegotiated. In his history of the protocol, Edward Parson explains that the results finally produced industry acceptance that actual harm had been done by CFCs. CFCs would therefore be regulated based on what had already happened, not on what might happen in the future. And because the chemicals had lifetimes measured in decades, there was no longer any doubt that damage would continue to happen. And further, of course, there was still no way to determine what level of chlorine emissions might be harmless. It continued to be the case that the models did not reflect reality—reality was worse. Hence in a series of meetings culminating in London in June 1990, the protocol was revised to include a complete ban on the manufacture of CFCs, as well as other anthropogenic chemicals that introduced chlorine into the stratosphere. CFC production was scheduled to cease in 2000; the other chemicals had deadlines ranging from 2005 to 2040.77

THE GLOBAL VIEW, FINALLY: CORROBORATION

On 15 September 1991, the crew of Space Shuttle Discovery finally deployed the long-awaited UARS. As it had with the SAGE satellite in 1979, NASA arranged a field mission to provide data against which to compare its results. This was planned as a second deployment by the ER-2 and DC-8 to the Arctic during January and February 1992. As fortune would have it, however, the catastrophic eruption of Mount Pinatubo in the Philippines in June 1991 provided another natural laboratory in which to study the thorny question of mid-latitude ozone loss. The eruption was followed by substantial mid-latitude ozone depletion, and several lines of evidence, from the second Arctic expedition, from UARS, from a 1992 flight of the ATMOS instrument on Shuttle Atlantis, and from balloon-based measurements, all pointed to sulfate aerosols as additional actors in ozone chemistry. But the eruption of Pinatubo also launched a final outbreak of the ozone wars, leading political critics of the anthropogenic chlorine hypothesis to return to long-discredited claims that volcanic chlorine was primarily responsible for stratospheric depletion.

During 1991, more disturbing evidence of Northern Hemisphere depletion came from Goddard Space Flight Center. In 1989, the TOMS science team had found a way to correct the TOMS/SBUV data for instrument degradation by exploiting differences in the way the individual channels degraded. This permitted them to produce a calibration that was independent of the Dobson network, and they had then spent two years revising the data archive in accordance with the new calibration. This revised data brought the TOMS globally averaged depletion measurement to within the instrument error of the Dobson stations. The data showed a clear poleward gradient to the ozone loss, with no significant reduction in the tropics but loss in the mid-latitudes, increasing toward the poles. It also displayed the expected seasonality, with considerably greater loss in winter than in summer.78

The vertical distribution of the mid-latitude loss, however, was confusing to the research community. The gas-phase depletion chemical models indicated that most of the ozone loss should be in the upper stratosphere, but it clearly was not. Instead, the ozone loss was concentrated in the 17-to-24-kilometer region, the lower stratosphere, the same altitudes at which the polar ozone destruction happened. This suggested that the primary mechanism for ozone destruction in the mid-latitudes was heterogeneous chemistry, which required the presence of particulates. But certainly the PSCs could not be at fault in the mid-latitudes, where the stratosphere was much too warm to permit their formation. Some of the mid-latitude loss could be attributed to movement of ozone-poor air from the collapsing polar vortex into the mid-latitudes, but this did not seem a complete explanation for the phenomenon. So some researchers turned back to the sulfate aerosols that were prevalent throughout the stratosphere at this altitude.79

The second Arctic expedition, AASE II, was designed somewhat differently from the preceding two airborne missions. Its purpose was to examine the evolution of the stratosphere from fall until the end of winter, and over a latitude range extending from the tropics to the Arctic. The mission’s research goals were primarily directed at understanding the mid-latitude depletion phenomenon and whether the severely perturbed Arctic chemistry had anything to do with it. The ER-2 flew six missions out of Ames Research Center beginning in October 1991, and then shifted to Bangor, Maine, for the Arctic flights. These continued through the end of March, in order to examine the breakdown of the vortex and the distribution of its air to lower latitudes. While Bangor was too far from the vortex edge to permit deep penetration, this site had been chosen for its lower risk.80 The wind and icing conditions at Stavanger had been more dangerous than a repeat mission warranted, particularly since the major goals of this mission did not require reaching far inside the polar vortex.


Wireframe of the Upper Atmosphere Research Satellite. Because this satellite was deployed by the Space Shuttle, it could not reach a polar orbit. Its instruments leave a data gap at the highest latitudes. Courtesy NASA.

The AASE II expedition provided a great deal of new information about the conditions necessary to produce large-scale ozone destruction in the Arctic. It confirmed the suspicion from the first Arctic expedition that extensive downwelling of higher-altitude air continually introduced fresh ozone, limiting the total column destruction. It also demonstrated that the Arctic stratosphere fostered more rapid destruction of chlorine monoxide than did the Antarctic, limiting the chemical’s southward movement as the vortex broke up. This was a product of nitric acid photolysis once the Sun had risen far enough to light the stratosphere. Because the photolysis rate increased with temperature, the warmer Arctic stratosphere destroyed the chlorine monoxide more rapidly. This had the effect of further reducing the rate of ozone destruction. In their expedition summary, Jim Anderson and Brian Toon pointed out that large-scale ozone losses in the Arctic would require substantial removal of nitrates via extensive PSC formation, and that had not happened during the expedition. The Arctic stratosphere had been too warm that year.81
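
The deactivation pathway being invoked here is usually written as nitric acid photolysis followed by re-formation of the chlorine nitrate reservoir; the two reactions below are a simplified sketch of the commonly cited steps, not a quotation from the expedition summary.

\begin{align*}
&\mathrm{HNO_3 + h\nu \rightarrow OH + NO_2}\\
&\mathrm{ClO + NO_2 + M \rightarrow ClONO_2 + M}
\end{align*}

The nitrogen dioxide freed by photolysis ties chlorine monoxide back up in the chlorine nitrate reservoir, shutting down the catalytic destruction.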

However, there had been enough PSC formation that the expedition was able to determine that they were not primarily ice after all. Using a lidar instrument aboard the DC-8, a team led by Edward Browell of Langley Research Center demonstrated that the PSCs were largely composed of droplets in a supercooled liquid state. Brian Toon had suggested in 1990 that this might be the case based on a thermodynamic study; other measurements made during the AASE had also suggested the possibility. The difficulty in proving the case simply lay in the fact that both states existed. There were PSCs composed of ice and others composed of liquid, and many PSCs contained both. Disentangling the true state of the clouds was difficult due to the limited number of measurements available and the complexity of the cloud phenomenon. This sent the chemical kineticists back to the laboratory, to investigate how the chlorine and nitrate reactions differed between the liquid surface and the ice-phase surfaces they had been working with previously.82 It also made modeling the cloud phenomenon far more difficult, since the relative abundance of the different types of clouds, and the ratio of liquid to solid particles in the clouds, now mattered, but there were not enough observations available to determine the ratios empirically.

The Mount Pinatubo eruption of 1991 further complicated the chemical picture. The Pinatubo eruption was one of the largest volcanic explosions of the century, and its position in the tropics, where tropospheric air ascends into the stratosphere, had ensured that its ejecta cloud was rapidly transported throughout the stratosphere. Large increases in stratospheric sulfate aerosols were measured in the volcano’s wake, and widespread ozone depletion had followed. A tentative mechanism to explain sulfate-aerosol-mediated ozone destruction had been proposed in 1988 based on laboratory studies. This postulated that hydrolysis of certain nitrogen species on sulfate aerosol droplets could alter the balance between the chlorine and nitrogen catalytic cycles, accelerating the release of active chlorine. This might occur outside the polar regions anywhere the stratosphere happened to cool enough, explaining some of the observed mid-latitude loss. By increasing the sulfate aerosol loading of the stratosphere (and therefore the total surface area available for the reaction), a volcanic eruption would accelerate the conversion of chlorine from inactive reservoir species to active species, thereby increasing ozone loss.83
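
The 1988 proposal is generally summarized by a single hydrolysis step on the aerosol surface. The text does not name the nitrogen species involved, so the reaction below is an inference from the standard literature rather than a restatement of the original paper.

\begin{align*}
&\mathrm{N_2O_5 + H_2O \xrightarrow{\text{sulfate aerosol}} 2\,HNO_3}
\end{align*}

By converting reactive nitrogen into nitric acid, the reaction leaves less nitrogen dioxide available to hold chlorine in its reservoir forms, so a larger share of the chlorine remains active; the more aerosol surface area an eruption adds, the more of this conversion takes place.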

While the Pinatubo eruption provided new observations that tended to confirm a role for sulfate aerosols in ozone loss, it also produced a reeruption of the ozone war. There was a line of argument in conservative circles that the anthropogenic chlorine depletion hypothesis was being hyped by the scientific community in the interests of increased funding, and this was revived after the eruption.84 The eruption, of course, had been followed by extensive ozone loss and thus could easily be blamed for it directly. One simply had to contend that the volcano, and not humans, was the original source of the chlorine.

Second, the mission scientists for the AASE II had made an unwise decision: they held a press conference midway through the expedition. On 3 February 1992, James Anderson, Brian Toon, and Richard Stolarski had announced that the chemical conditions in the polar vortex were primed for an ozone hole, and that one might extend as far south as Bangor if conditions within the polar vortex remained cold enough through March. This statement was taken up in the press, often without the important caveat “if conditions remained cold enough,” and broadcast throughout the nation. It was picked up by Senator Albert Gore, Jr. (D, Tenn.) the next day, who used it to attack then-President George H. W. Bush for resisting a resolution before Congress to accelerate the phase-out of CFC production by five years. Senator Gore proclaimed that Bush would be responsible for an ozone hole forming over his home of Kennebunkport, Maine.85

At virtually the same time, but much more quietly, the first chemical maps of the Arctic stratosphere produced by Joe Waters’s Microwave Limb Sounder team at JPL made their way to Washington (reproduced in Plate 3). These put in a visual form the relationship between temperature, active chlorine species, and ozone throughout the Arctic stratosphere. These maps, combined with the announced intention of a major producer, Du Pont, to cease producing most CFCs by 1996 regardless of additional regulation, led Bush to rescind his instructions to Senate Republican leaders to block the resolution, and it passed without dissent on 6 February.

The volcano and the press conference created what Science writer Gary Taubes labeled the “ozone backlash.” The Arctic hole did not happen; instead, the Arctic stratosphere warmed within days of the press conference, eliminating any chance of a substantial hole forming. The nonappearance of Senator Gore’s Kennebunkport hole triggered a series of attacks by conservative writers. But these were not directed primarily at Gore for his exploitation of ozone depletion for political gain—of which he was certainly guilty. The attacks were directed at the expedition scientists and at the larger thesis of CFC-induced ozone depletion. The argument these writers made was that volcanoes, not CFCs, were the primary source of stratospheric chlorine, and it included the false claim that no one had ever measured CFCs in the stratosphere.86

This line of attack, broadcast to millions of people on Rush Limbaugh’s radio show, was disturbing enough that Sherry Rowland felt compelled to make it the subject of his 1993 presidential address to the American Association for the Advancement of Science. Framing the controversy as a failure of the scientific community to properly educate the public, Rowland deconstructed the root of this claim. A 1980 paper in Science had argued for a volcanic origin for most chlorine in the stratosphere. This argument was based upon measurements of chlorine gas trapped in bubbles within the ash fall of an Alaskan volcano that had erupted in 1976, not measurements of chlorine species in the stratosphere. Rowland then sketched the ways in which chlorine might be removed from the volcanic plume on its way to the stratosphere that the author had not considered, and he pointed to the 1982 measurements made by Mankin and Coffey of NCAR in the El Chichón plume. They had documented an increase in hydrogen chloride of less than 10 percent after this eruption, an increase that had not appeared until six months afterward. He then pointed to all the measurements of CFCs in the stratosphere that had been made since 1975 to discredit the claim that they had not been measured.87

In April 1993, Joe Waters’s Microwave Limb Sounder group at JPL published the results of their first eighteen months of operations in Nature. Their data, displayed as colored maps of temperature, chlorine monoxide, and ozone, clearly showed the expected anti-correlation of ozone and chlorine monoxide in a way that made clear the spatial extent of the stratosphere’s altered chemistry. The simultaneity of the instrument’s data also provided important corroboration of the temperature dependence of the PSC-accelerated reactions and followed the complete cycle of evolution and collapse of the vortex.88 Eventually, they began to make movies from the data showing the complete lifecycle of the phenomenon.

The following year, the ATMOS science team published an inventory of carbonyl fluoride based on the measurements taken on the 1985 Spacelab 3 flight and the late March 1992 Atlas 1 flight. Carbonyl fluoride had no natural source in the stratosphere, and was produced solely by breakdown of CFCs. It could therefore be used to trace the buildup of anthropogenic gases in the stratosphere. The team found that between 1985 and 1992, the amount of carbonyl fluoride increased by 67 percent, in keeping with estimates of the amount of CFCs released.89 Similarly, the ATMOS team argued in a separate paper that their measured 37 percent increase in hydrogen chloride and 62 percent increase in hydrogen fluoride between 1985 and 1992 were in agreement with other ground measurements and with the estimates of CFC release. Further, after pointing out that there had been no measured increase in the stratospheric burden of hydrogen chloride immediately after the Pinatubo eruption, they argued that the sizeable increase in hydrogen fluoride during this seven-year period gave the game away in any case.90 The primary origin of these gases was human. Finally, the HALOE instrument on UARS, which also measured fluorine species worldwide, corroborated their results.


In 1995, Paul Crutzen, Mario Molina, and Sherry Rowland received the Nobel Prize in Chemistry for their 1970s work in ozone chemistry. While the gas-phase chemistry that had been the center of their effort during those years had not turned out to be the primary mechanism for ozone destruction, they, along with Hal Johnston, Richard Stolarski, and Ralph Cicerone, had set in motion a complex series of research efforts that led eventually to the correct mechanism. These efforts were comprehensive in time, space, and methodologies, encompassing laboratory kinetic studies, numerical models, balloon and aircraft measurements employing direct sampling and remote sensing, and space-borne observations. The twin spines of this effort were NASA’s UARP and the NOAA Aeronomy Laboratory; without their support, the difficult question of ozone depletion would not have been resolved as quickly.

There were weaknesses, however. As the space agency, NASA could have been expected to make better use of the global view than it did, and if one merely counts up the dollars spent, space viewing was its emphasis. But the long delays in getting the UARS into orbit left the agency without the space hardware it had intended to use to solve the ozone question during critical years. It deployed a fallback science program using aircraft to carry out its statutory responsibilities; when the expensive space assets finally reached orbit, they wound up confirming what the scientific community already believed. The vibrant aircraft research program NASA had maintained primarily as a means of testing and validating new space instrumentation wound up being its primary science program. Tying its science programs to the Shuttle program had clearly been a mistake. What the science program had needed was reliable space access, and the Shuttle had not delivered on that promise.

Yet the global view did produce important corroboration, for while the active research community seemed comfortable with the results of in situ measurements, the larger scientific and policy communities did not find them convincing. Satellites provided confirmation that point measurements made inside the atmosphere adequately reflected the global atmosphere. Their principal weakness during the 1980s (besides the lack of reliable space access) had been the reliability of their calibration. While Pat McCormick’s occultation instruments had not had this problem, they also either did not measure the all-important ozone (SAM II and SAGE) or did not yet have a long enough record to make a convincing case (SAGE II). TOMS/SBUV, which fortunately operated for fifteen years, took many years to get a reliable calibration technique. Hence despite the determined efforts of the UARP to get better satellite data in the 1970s, they did not really succeed until the early 1990s.

Modeling, too, proved to be a weakness, although one that was not limited to NASA. The theoretical models deployed to study the ozone question, if evaluated as means of prediction, were failures. No one generated a model of ozone chemistry that accurately predicted ozone depletion. During the 1970s, the real atmosphere showed much less depletion than the models did, while during the 1980s it showed much more depletion than did the models. The atmosphere proved much more complicated than the chemistry community had believed when they began this research effort, and contained far more phenomena (and variations of phenomena) than the collective intelligence of the research community could program into a model. Instead, the models were numerical thought experiments, serving as guides to the observational research that needed to be done and helping to clarify the relative importance of various chemical and dynamical processes.91 In the field expeditions, they also helped guide decisions about where to send the aircraft to obtain desired measurements. They were essential to the ozone research program for these reasons. But like the weather forecast models developed during GARP, their ability to predict had limits.

Beginning with the GTE’s early missions, NASA developed the ability to carry out large-scale, multi-instrument atmospheric studies that combined in situ and remote sensing measurements. New in the late 1970s, by the end of the 1980s they were well-established means of gaining new knowledge of the atmosphere. They were methodologically complex, requiring not only large-scale participation from scientists during the experiment but significant post-expedition coordination as well. Program managers began to draw scientists into workshops to compare and study data, and in the most contentious areas, they began to organize formal assessments of the state of relevant knowledge. These served multiple purposes. They served to accelerate the normal process of scientific argumentation that takes place in journals, providing forums where interpretations could be argued over before being committed to print. They also served to identify areas that were not well understood, helping direct future efforts. Finally, in addition to these scientific purposes, they increasingly served policy purposes. The assessments in particular were written to provide scientific bases for policy action; by the end of the decade, they began to include “Summary for Policymakers” sections, making this purpose clear.

The scientific community’s knowledge of the Earth’s atmosphere changed dramatically during the 1980s. In the 1970s, tropospheric ozone was believed to be largely derived from stratospheric injections into the troposphere, with an additional human industrial component to the ozone budget (smog); by the end of the 1980s, atmospheric chemists had demonstrated that photochemical production of ozone in the troposphere had a significant biogenic component as well. They had also shown that agricultural burning had global-scale chemical impacts on the atmosphere. Combined with the dramatic imagery of the ozone hole, they had conclusively demonstrated that humans had achieved the ability to fundamentally alter the chemistry of the atmosphere.
