From Ultra Rays To Astroparticles: A Historical Introduction To Astroparticle Physics

Editors
Brigitte Falkenburg
Philosophy and Political Science, Faculty 14
Technische Universitaet Dortmund
Dortmund, Germany

Wolfgang Rhode
Physics, Faculty 2
Technische Universitaet Dortmund
Dortmund, Germany
Copyright for the cover illustration: Michael Backes, Technische Universität Dortmund, http://app.tu-dortmund.de/~backes/media/presse/Magic%201.JPG.
In 1912, Victor Hess discovered cosmic rays. His discovery opened the skies in
many regards: for detecting extraterrestrial particles, for making energies beyond
the MeV scale of nuclear physics accessible, for interpreting all kinds of astrophys-
ical data in terms of cosmic messenger particles, and finally, for giving cosmology
an empirical basis. In the 1920s, it turned out that the so-called Höhenstrahlung
has an extraterrestrial origin and contains charged particles such as the electron.
The discovery of the positron in 1932 inaugurated the detection of a plethora of
new subatomic particles. With the rise of the big particle accelerators in the early
1960s, cosmic ray studies shifted from particle physics to astrophysics. The cos-
mic microwave background discovered in 1965 gave support to the big bang model
of cosmology and made cosmology an empirical science. In the late 1980s, with
the experiments that measured the solar neutrino flux, part of the particle physics
community moved back to cosmic ray studies and astroparticle physics began. The
experiments of astroparticle physics use particle detectors arranged into telescopes.
Hence, astroparticle physics is doing particle physics by means of telescopes, and
vice versa, doing astrophysics by means of particle detectors. Cosmic rays are mes-
senger particles that carry information about exploding and collapsing stars (in par-
ticular, supernovae, their remnants, and black holes), the large-scale structure of the
universe, and the microwave afterglow of the big bang. Their investigation is one of
the most fascinating fields of modern physics.
This book may be read on its own as an introduction to a fascinating multi-faceted
field of research, but also used in addition to undergraduate or graduate lectures in
astroparticle physics. It covers the historical experiments and lines of thought to
which lectures cannot give sufficient attention. The material presented here makes
the bridge from the beginnings of radioactivity research, particle physics, astro-
physics, and cosmology in the early days of quantum theory and relativity, to the
current foundations of physical knowledge, and to the questions and methods of a
future physics. It shows that fundamental research is fascinating and of great impor-
tance, and that physicists consider it worth tremendous effort.
At the centenary of the pioneering discovery made by Victor Hess, we present
a historical introduction to astroparticle physics. We think that the historical
approach is a good thread for understanding the many experimental methods, phe-
nomena, and models employed in astroparticle physics, the ways in which they are
linked to each other, as well as their relations to their neighboring disciplines of par-
ticle physics, astrophysics, and cosmology. Each of these fields on its own is highly
complex, and to learn a mixture of them before getting to the bottom of any may
be confusing for beginners. We hope that this complex body of knowledge is made
more transparent by a historical account of the different research traditions which
come together in current astroparticle physics.
Astroparticle physics has emerged from several distinct fields of research. In-
deed, these fields cannot completely grow together as long as physics does
not have a unified “theory of everything”. Nevertheless the models and ex-
periments of astroparticle physics are much more than provisional or piecemeal
physics. They are no less and no more than surveys and maps of our knowledge of
the universe at a small scale and at a large scale. On the way in terra incognita, care-
ful cartography of the details is indispensable. Indeed astroparticle physics aims at
establishing as many experimental details as possible about cosmic rays, their parti-
cle nature, their spectrum, their astrophysical sources, and the mechanisms of their
acceleration. But in contrast to other scientific disciplines, this gathering of de-
tails does not give rise to increasing specialization. Quite to the contrary, the history
of the different branches of physics grown together in astroparticle physics shows
the merging of very distinct scientific traditions.
The book is addressed to undergraduate and graduate students of physics and to
their teachers. It may serve as background material for lectures. It may also serve
the students and teachers of other faculties, in particular philosophers and historians
of science, and everybody interested in a fascinating field of research in physics.
To historians and philosophers of science it gives an overview as well as detailed
information about a new sub-discipline of physics that has not yet been studied as a
whole, but only in partial approaches to the history of particle physics, cosmology,
etc., and to their epistemological aspects. Historians of science will read the book
as a history written by the physicists, with all the advantages and disadvantages
of objective expert knowledge combined with subjective memory. Philosophers of
science will find in the book a lot of epistemological material, most of which has
been neglected by a philosophy of physics that has traditionally been focusing on
the theories rather than the phenomena of physics, even though the latter are most
important constraints of the former. The history, the current shape, and the goals
of astroparticle physics raise deep epistemological questions about the grounds of
a discipline grown together from distinct scientific traditions in search for unified
knowledge. But these philosophical questions are kept apart here. They will be dis-
cussed in a follow-up volume on the question of what kind of knowledge astroparticle
physics gathers about particles from cosmic sources.
The authors of the book reflect the various approaches to astroparticle physics.
All of them substantially contributed to developing the many-faceted methods and
to the results of this field of research. We should add that the collection of subjects
presented here is far from being complete. We thank the authors, and we apologize
for all neglected subjects and for all the colleagues whose merits could not be
included in the book.
This volume emerged from a workshop on the history and philosophy of as-
troparticle physics which took place in Dortmund in October 2009, and which was
supported by the German Physical Society. We would like to thank the authors,
Kirsten Theunissen from Springer, whose support made this edition possible, and
Raphael Bolinger, who prepared Appendices A–D.
Dortmund, Germany Brigitte Falkenburg
Wolfgang Rhode
Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Wolfgang Rhode
2 From the Discovery of Radioactivity to the First Accelerator
Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Michael Walter
3 Development of Cosmology: From a Static Universe to Accelerated
Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Matthias Bartelmann
4 Evolution of Astrophysics: Stars, Galaxies, Dark Matter, and
Particle Acceleration . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Peter L. Biermann
5 Development of Ultra High-Energy Cosmic Ray Research . . . . . . 103
Karl-Heinz Kampert and Alan A. Watson
6 Very-High Energy Gamma-Ray Astronomy: A 23-Year Success
Story in Astroparticle Physics . . . . . . . . . . . . . . . . . . . . . . 143
Eckart Lorenz and Robert Wagner
7 Search for the Neutrino Mass and Low Energy Neutrino Astronomy 187
Kai Zuber
8 From Particle Physics to Astroparticle Physics:
Proton Decay and the Rise of Non-accelerator Physics . . . . . . . . 215
Hinrich Meyer
9 Towards High-Energy Neutrino Astronomy . . . . . . . . . . . . . . 231
Christian Spiering
10 From Waves to Particle Tracks and Quantum Probabilities . . . . . 265
Brigitte Falkenburg
Chapter 1
Introduction

Wolfgang Rhode
Taking the discovery of the “Ultra Rays”, nowadays called “Cosmic Radiation”, by
Victor Hess in 1912 as its beginning, the research field of Astroparticle Physics cel-
ebrates its 100th birthday in 2012. It is a unique research field, to which results
from a large number of other physical disciplines contribute and which treats funda-
mental questions of astrophysics, particle physics and cosmology. In this book, the
development of Astroparticle Physics since its beginning in the early 20th century
is described in contributions by authors who have all left distinctive footprints in the
fields they report on. Emphasizing the basic ideas of this development, this first
chapter offers a tour d’horizon of the path connecting the following contributions.
W. Rhode ()
Physics Department, TU Dortmund University, Otto-Hahn-Str. 4, 44227 Dortmund, Germany
e-mail: [email protected]
analytical interest of the researchers, physical disciplines and branches and later on
complete engineering faculties were established. A ‘physical discipline’ in this way
appears as physics within a special branch of interest.
It seems that the analytic–synthetic way of research is perfectly adapted to the
needs of physics – as long as a further extension of the analysis, a better control of the
experimental parameters or an enlargement of the range of measurements in a labo-
ratory is still possible. But how can one proceed, if there is no or only limited control
over the experimental parameters, if the improvement of the resolution of the detec-
tor no longer leads to a better understanding of a subsystem, which in turn
solves the primary question? Then one might feel unwillingly thrown into the posi-
tion of Bacon’s judge. The ‘evidence’ here means results of measurements, often
correlated, in experiments, which can be controlled only partially and in a figurative
sense. The ‘verdict’ relies on a simultaneous solution of the ‘evidence’-puzzle with
physical, mathematical or even modern statistical methods.
Since new techniques and current research are always confronted with this
uncontrolled or uncontrollable part of the experiments, new science is in this sense
always at the borderline between Galileo and Bacon. How easily, for example, ques-
tions in particle physics could be answered, if only the physicist had the freedom to
switch on just the reaction that he wants to investigate! The subject of this book,
the development of Astroparticle Physics as a physical discipline, is an example of
how physical questions are solved in a much more ‘Baconian’ than ‘Galilean’ way.
A second approach to the question of the special way of Astroparticle
Physics concerns nothing less than the goal of all physical research. The episte-
mological key question is: Does one intend to draw a picture of the world, which
is as true as possible, or should it be as simple and beautiful as possible? Given the
small part of the physical world, in which singular measurements are possible, the
truth-requirement depicts this world as a patchwork of approximations. The require-
ment of beauty and simplicity on the other hand leads into that Platonic world in
which nature is described by a small number of forces. The price for this beauty
is, however, an inclusion of assumptions and of extrapolations in the mathematical
model and finally a loss of provable truth. While Mach’s critical positivistic ques-
tions helped to identify the right questions in the dusk of classical physics, in the
dawn of quantum mechanics and the theory of relativity physicists quickly adopted
the goal of finding the unifying world formula.
The heart of most physicists who think about this problem still seems to beat
more for the beauty model than for the ugly truth. Therefore it is no surprise that in
disciplines like elementary particle physics, the search for a mathematical expres-
sion of the unified forces between the particles is one of the driving forces to build
step by step larger and faster and more precise and better controlled experiments.
Given the success of the standard model of elementary particle physics in describing
the results of these high precision experiments, is there still room for Astroparticle
Physics on the beauty side of that discussion, or is this discipline condemned to live
with its passive measurements in a positivistic patchwork world?
The answer to this question has two sides. First, one has to be aware that a search
for more beauty in describing the theory alone is not a sufficient reason to jus-
tify those huge investments necessary to build new particle accelerators and exper-
iments. There have to be real problems with the original model. In the case of the
standard model of particle physics these problems occur if the model is applied to as-
trophysical and cosmological processes. The flatness of space, the non-observation
of magnetic monopoles, the matter–anti-matter asymmetry, the nature of dark matter
and of dark energy indicate such problems. Those measurements, pinning down the corre-
sponding problems of the standard model of particle physics are, however, measure-
ments from the field of Astroparticle Physics. These problems in turn direct the
laboratory research to experiments revealing hints for the next improvement of the
theory. Experiments in Astroparticle Physics are therefore important for asking relevant
questions of nature. But can they also answer questions relevant to more than one
part of the sky?
This is the second side of the answer: Those processes, which are observed by
the experiments of Astroparticle Physics, only in rare cases can be replaced by a
laboratory experiment on Earth. However poor the control over the fusion process within a
star or its supernova explosion may ever be, in many cases an astrophysical or cos-
mological setup for an experiment is the only source of information that we have.
For example, the energies and spatial scales surveyed in experiments of Astroparticle Physics
exceed the reach of laboratory experiments by many orders of magnitude. Accord-
ingly, the only extension of the standard model of particle physics found up to now,
the fact that neutrinos are not massless, was inferred by analysis of the neutrino flux
from the sun. We conclude that there are precise answers in Astroparticle Physics,
though the puzzles to be solved are in general more complicated than in laboratory
experiments. Quite a lot of these experiments are discussed in this book.
During the century covered by this book, the rôle and position of both theory and
experiment have changed substantially. On the side of the theory, the general theory
of relativity and quantum field theory were developed and elaborated, both describ-
ing their subject of interest with impressive precision – while also showing that
the two cannot be trivially unified. This structural break between the most
fundamental and powerful theories is a problem for the requirement of a description
of nature by a unified and beautiful theory. If even the columns on which the monu-
ment of the physical description of the world is based are not commensurable, then
how can one require that for all special cases? One might ask, however, why such a
unification should not be invented in the future, or why the fact that the physical picture
is composed of two theories should mean that it is in truth composed of a giant
number of approximations. One way to a solution of these questions leads through
measurements at the highest available energies, produced in sources of the largest
energy densities in the universe, probing as much of the space as possible, in other
words: through experiments of Astroparticle Physics.
These experiments started in small setups and laboratories, even if the lo-
cation of those laboratories was always slightly extraordinary: the discipline saw
its ‘first light’ in balloons, glacier adits, vessels fixed under rowing boats and parti-
cle counters on the top of high mountains. Within the first half of the 20th century,
the investigation of the constituents of cosmic radiation could be told as the his-
tory of particle physics, at a time when no man-made accelerators could provide the
investigated energies above the MeV scale of nuclear physics. The development of
particle accelerators then enabled the physicist to build even larger and more precise
and better controlled experiments. The complexity of the experiments occasionally
exceeded the size up to which an experiment could be completely understood and
executed by a single person or even a few scientists. Finally, the combined knowl-
edge of hundreds and thousands of physicists became necessary to experiment in
particle physics. ‘Multi-person’ is the label that Pedro Waloschek invented for this
new super-individual scientist.
Only in niches between particle physics and astronomy, and still driven by a
small number of scientists, experiments using the technique of high-energy physics
were done without accelerators with growing success until the beginning of the
1990s. With the transition from the LEP to the LHC era, and also connected to
the end of accelerator experiments in high-energy physics laboratories at DESY,
SLAC, and Fermilab, the possibility to fill in blank spots on the map of funda-
mental science became of increasing interest. Especially collaborations working on
experiments with the techniques of charged particle detection, gamma and neutrino
astronomy, but also accelerator-less neutrino experiments, grew rapidly. By intro-
ducing the science-policy structures of particle physics also in these fields, huge
experiments could be planned and realized. The individual astroparticle physicist
adapted to doing science as part of a ‘multi-person’.
The various techniques necessary to detect different particles from astrophysical
and cosmological sources led to the construction of different experiments on satel-
lites and balloons, with the Cherenkov or fluorescence light of the atmosphere, with
particle counters and calorimeters, below the surface – in tunnels through moun-
tains, mine shafts and deep below the water or ice surface. Each of them was built
with a dedicated physical program. The full physical picture, however, can only be
obtained if all physical information carried by the messenger particles from their
astrophysical or cosmological source to the detector on Earth is combined. An end
point of this development would be the combined observation of the full sky with all
particles at all detectable energies and at all possible times. Given the size and num-
ber of experiments reached until now, the last step to a ‘world detector’ seems viable.
This ‘world detector’ would, as required by Bacon, observe ‘everything’ at the
same time. The ‘multi-person’ physicist treats the observational puzzle containing
many pieces, not to be further investigated in a controlled experiment on Earth. This,
however, also means that hardly any further information is possible. The world formula,
the unified theory of everything that physicists are aiming at, will not have more
experimental results from the sky than provided by this detector. The scope of that
world formula would then be clear: to explain everything measured by, or relevant
to that final detector.
The experimental investigation of all forms of radiation and their theoretical under-
standing was one of the main subjects of physics of the 19th century. Since this
research yielded one of the primary reasons to abandon the classical mathematical
picture of nature in favor of the new and uncertain world of quantum statistics, it
is still among the most discussed subjects of physics lectures today. In parallel,
starting with analyses of the colorful patterns of the gas-discharge tubes (Geissler
tubes), the first experiments in the physics of the ‘electron’ elementary particle were
performed. In cathode-ray tubes, the properties of this radiation type were investigated
systematically, always looking at the interaction between these ‘rays’ and ‘matter’
as a special key aspect. It was the latter investigations which opened the window
to modern physics. At low energies (1 eV), the interaction between light and mat-
ter provided (after the investigation of the blackbody radiation) the second step to
quantum mechanics in the form of the discovery and theoretical understanding of
the photo effect. At moderate energies in the keV region, through Röntgen’s discov-
ery of X-rays, new ways of obtaining structural information about matter became accessible.
With Becquerel’s discovery of (MeV-scale) radioactivity, the stability of matter was
called into question.
For our purpose, the understanding of the roots of Astroparticle Physics, one
more technical side step of these discoveries became important. Besides the tran-
sient fluorescence, the blackening of photographic plates was the first way to con-
struct a detector with spatial resolution. An investigation of the absolute intensity
of charged radiation and its temporal change became possible by the analysis of the
discharge velocity of electroscopes, consisting essentially of two flexible gold
leaves repelling each other as long as they are equally charged. The dis-
charge rate of the electroscopes depends both on the construction of the instrument
and on the flux of charged particles through this detector. It was finally the attempt
to calibrate these instruments far from the charged products of the radioactive decay
of elements in the crust of the Earth which led to the balloon flights of Victor Hess.
Unexpectedly, the electroscope discharge rate grew with rising altitude, giving rise
to the investigation of this newly discovered ‘Ultra Radiation’.
From Michael Walter we learn in this chapter how systematic physical research
by independent physicists with very different experiments unveiled the nature of this
radiation step by step.
We will further notice the dispute between the German and Anglo–American in-
terpretations of the first measured signals. Whilst in Germany one tended to be
convinced of the wave nature of this radiation, in the Anglo–American region
one had no major problems assuming its particle nature. It might be interesting to
mention that this dispute was to some extent also a consequence of the philosophy
of science discussion in the two regions. In Germany, the 19th century was the time
of the so-called ‘German Idealism’. One might characterize this as a time in which
philosophers tried to construct – similarly to the physicists in classical me-
chanics – huge, wonderful, closed and more or less logical systems explaining the
whole world. These efforts had positive consequences also for our physical un-
derstanding of the world. The concept of energy conservation, for example, was conceived
by Robert Mayer under such a philosophical inspiration. However, Idealism contained
more than enough elements that were in complete contradiction with the attempt to
explain the world from its physical observation and description. No wonder that at
the end of this epoch, the opposite thinking of Ernst Mach gained huge influence
among German physicists. Mach wanted to establish physics based only on clear ob-
servations and without any metaphysical elements. Though his criticism of the con-
cept of the existence of elementary particles (i.e. atoms) was partially exaggerated,
in the end it helped to pave the way to a clean formulation of quantum mechanics.
His aversion to metaphysically claiming the existence of ‘particles’ as causes
of observed effects was a living heritage among many German physicists at a time
when the final formulation of quantum mechanics had not yet been found. Out-
side of Germany, one had fewer problems accepting newly postulated particles,
metaphysically defined as real, as causes of observations. Therefore the Anglo–American
community had much less of a problem to accept the particle nature of the new
‘Ultra Radiation’ than the German community.
After a journey through the beautiful and systematic investigation of the cosmic
radiation, leading to the development of new experimental techniques and new in-
sights into the nature of elementary particles, their interaction with electromagnetic
radiation, and their theoretical description, this chapter leaves us at the dawn of
accelerator physics.
In this second chapter, we direct our view to Astroparticle Physics from the point
of view of cosmology. Since the beginning of human thinking, cosmology was a
subject of religion; since the beginning of science, a subject of philosophy, and only
in parallel to the investigation of cosmic rays did it become subject to experimental in-
vestigations. Unlike cosmic ray research, however, this field was primarily
driven by theoretical considerations. The birthday of modern cosmology was, as
Matthias Bartelmann explains, the day in 1915 on which Einstein’s general theory
of relativity was finished. The transition from cosmology as a subject of philoso-
phy to a subject of physics was driven by two epistemic discussions. Ernst Mach’s
positivistic requirement to base physics on measured facts led Einstein on his way
from the Newtonian absolute understanding of space and time to special and gen-
eral relativity. That the theory of relativity, however, was intended to be much more
than a construct to explain facts known before its construction already follows from
Einstein’s own suggestion (1916) of experiments to test the theory. These ‘classical
tests of general relativity’ were the questions whether the perihelion precession of
Mercury’s orbit could be calculated correctly, whether a deflection of light by the
Sun could be observed and whether light undergoes a gravitational redshift. This
understanding of the theory as a suggested model carrying a deeper truth than its
progenitors, one which can be tested by experiments, is paradigmatic of the epistemic
understanding of science in Karl Popper’s critical rationalism.
The second important discussion concerns the question of whether the physical facts
measured today on Earth are also valid at other times and other places in the Uni-
verse. Do we perform our measurements at a special location? Nicolaus
Copernicus already answered this question in the Copernican Principle with ‘no’. For pur-
poses of cosmology, this principle was complemented by assumptions about the
uniform distribution of matter in the Universe and about its geometry in the Cosmological
Principle (i.e. that we do not live at a special location). The Perfect Cosmologi-
cal Principle even required, additionally, that we do not live at a special time: the
Universe as a whole should not undergo any temporal change. The belief in such a
perfectly homogeneous and static universe was widespread among physicists at the
beginning of the 20th century.
Matthias Bartelmann follows the interplay between theory and experiments
through the 20th century, showing the way in which the mentioned primarily meta-
physical principles were investigated by experiments. The Perfect Cosmological
Principle was rejected due to the Hubble expansion of the Universe. The validity
of the Cosmological Principle was examined by investigations of the homogeneity
of the Universe, tracing back from the present distribution of matter, stars and galaxies,
through the time of emission of the microwave background, to the interactions
in the very early Universe. These investigations unveiled deep information about
the forms and properties of the matter in the Universe. The geometrical flatness of
the space could be shown and epochs with different rates of expansion could be
identified. The essential mechanism of the formation of structure could be estab-
lished. The present end point of this research is a picture of a world dominated by
dark matter and dark energy sustaining its expansion with increasing velocity – and
still containing a multitude of questions to be answered by further experiments and
research in Astroparticle Physics.
To gain knowledge about the Universe, one needs well understood experimental
conditions. In the optimal case, one would know the particle accelerators delivering
particles with well-defined energy spectra; besides the interaction probabilities of
these particles, one would also know the composition of the matter and radiation that
the particles cross on their way to the detector – and of course also the properties
of the detector. Against this background, by defining clear theoretical predictions,
hypotheses can be set up and their validity can be tested in dedicated experiments.
In the case of Astroparticle Physics, the accelerators delivering the particles are
astrophysical objects like stars in their different states, all types of galaxies, and
types of objects perhaps still unknown to us today. Investigations of questions in this
field relate to the research in astrophysics. If the matter and radiation that is crossed
or the geometry of space is investigated, the research contributes to cosmology,
and if the properties of the messenger particles or their interactions are investigated,
it is a contribution to particle physics.
How the understanding of experimental and theoretical details in these fields is
nested and finally used to ask new relevant questions, is explained by Peter Bier-
mann. His chapter builds on the cosmology chapter, inte-
grating the accumulated astrophysical knowledge of the last century and pointing
forward to the current research questions in Astroparticle Physics. Still, the accelera-
tion mechanisms of the cosmic radiation at the different energy scales are not com-
pletely understood. Charged nuclei are observed up to the highest energies; however,
their sources have not yet been unambiguously identified, nor has a signal of
neutrinos produced in interactions of these nuclei been observed. Gamma rays are observed from
a multitude of galactic and extragalactic sources; however, these observations too
leave open the contribution of hadronic acceleration alongside the certainly present
leptonic acceleration. Finally, besides other details from the field of particle physics,
the question of dark matter is also still open and subject to experimental investigation.
The most frequent particles, detected directly or indirectly in experiments above the
Earth’s surface, are charged nuclei. In the chapter they wrote, we learn from Karl-
Heinz Kampert and Alan Watson how the investigation of this charged component
of cosmic radiation at the Earth’s surface developed. Overlapping in time with
Chap. 2 and taking up the scientific questions of Chap. 3, the description starts with
the experimental situation in the third decade of the last century. Starting from this
situation, the key questions are posed that still occupy cosmic ray research today:
What and how can we learn about the nature of the involved primary and secondary
particles? Which chemical composition and which energy spectra do the charged
cosmic rays have? What do we learn from the arrival direction of charged cosmic
rays?
The crucial physical phenomenon, which one had to discover and to understand to
answer these questions, is the development of air showers from the first interaction
of the cosmic particles in the upper atmosphere to the arrival of the cloud of sec-
ondary particles on the surface of the Earth. Therefore the history of Cosmic Ray
research starts with the discovery of extensive air showers. The development of ap-
propriate detectors from Geiger counters to phototube-based detectors, from small
coincidence experiments fitting in one room to the Auger detector covering an
area of 3 000 km², is described. Depending on the detector’s construction, the en-
ergy and nature of the particles arriving on the Earth’s surface can be identified and
studied. Investigations of the air shower development and insights about the primary
particles are, however, only possible if the fluorescence (or Cherenkov) light emitted
in the atmosphere is recorded with appropriate telescopes.
We observe in this chapter the interplay between the invention and develop-
ment of detection methods and the growth of physical knowledge. Thresholds in
the measurement quality were overcome when new techniques or new combinations
of detection methods were established. With the detector size, also the maximal en-
ergy inferred for the primary particles grew until the scale of the Greisen–Zatsepin–
Kuzmin cutoff was reached. Particles in these highest ever observed energy regions
will barely be observed in accelerator experiments. Thus, the high-energy end of
particle physics will be investigated in the same experimental situation as the one in
which particle physics had its roots. Furthermore, a small energy window below the
GZK cutoff is interesting, in which the charged primaries are only marginally de-
flected by the intergalactic magnetic fields. Here, astronomy with charged particles
is possible – if particles with sufficiently high energies are accelerated in galaxies
close enough. The special conditions, however, to which the use of this window to
the universe is connected, restrict its use. Therefore a different tech-
nique, applicable also to the detection of low-energy primaries and of sources at all
distance scales, is necessary to solve the quest for the origins of cosmic rays.
these sources. There is hardly any need to invoke proton acceleration to explain
this part of the spectrum. In the TeV region, the spectral behavior becomes crucial.
Some spectra are shaped so that an additional proton component might be needed
to explain the spectrum. Some structures in the temporal changes of the flux also in-
dicate that a purely leptonic explanation fails and that hadrons, as accelerated particles,
may be, or indeed are, needed.
In the 10 TeV region – the only region where the energy reached by the accel-
eration, or lost by gamma emission, could decide this question – the spec-
tra of sources energetic and distant enough to accelerate protons to the
highest energies are cut off by interactions with the infrared background. On the
one hand, this cutoff opens the possibility to measure this background flux; on the
other hand it lets us hope for an independent method to identify hadron acceleration:
the detection of neutrinos produced as secondaries of proton interactions.
1.2.6 Search for the Neutrino Mass and Low Energy Neutrino
Astronomy
In the history of Astroparticle Physics, the investigation of the charged products of
nuclear decay had already given rise to the development of the first detectors,
which led to the discovery of the Ultra Radiation. In parallel to the development
of cosmic ray research, the physics of the nucleus was also further investigated.
Kai Zuber traces for us the stepwise understanding of the properties of the nu-
cleus that fed into the puzzle of how to distribute, in beta decays, the energy and the
spin among the nuclear constituents in such a way that the conservation laws were
observed. To solve the puzzle, Wolfgang Pauli proposed in his famous letter of 1930,
as a ‘desperate remedy’, that in these decays a neutral and barely detectable particle
could be created. This hypothesis of a neutral particle, later called neutrino, became
an important part of the theory, supported better and better by decay physics.
The small interaction probability, however, prevented the discovery of the first
neutrinos for about a quarter of a century, until their detection by Clyde Lorrain
Cowan and Frederick Reines in 1956.
After the first discovery of a new particle, physicists immediately wish to ob-
serve large numbers of it to determine its properties. One such neutrino source
could be the fusion processes claimed, with growing understanding of the nuclear
properties, as the energy source of the Sun. Owing to their small interaction cross
section, the neutrinos would be able to cross the Sun and the Earth unhampered until, deep
enough under the Earth’s surface to shield against unwanted particles from the cosmic radi-
ation, the neutrinos could be detected in a large-volume detector. In the late 1960s,
in the Homestake experiment, the first single solar neutrinos were detected. On the
one hand this was the first step, even though radiochemical and thus non-directional,
to neutrino astronomy and a large success for the solar model. On the other hand,
only a third of the expected number of neutrinos was observed. This comparatively
small deviation was the origin of the so-called solar neutrino problem. Forty years
and several detector generations later, and after the development of water-Cherenkov
proton decay experiments, i.e. the technique that also tells us how to reconstruct the
neutrino direction, this solar neutrino problem led to the insight that neutrinos
oscillate and are not massless – in contrast to the way they had been defined in the
meanwhile established standard model of particle physics.
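The physics behind this insight can be summarised in a brief sketch: in the textbook two-flavour description, the vacuum oscillation probability is P = sin²(2θ) · sin²(1.27 Δm²[eV²] L[km]/E[GeV]), which vanishes unless the squared-mass difference Δm² is non-zero; observing oscillations therefore implies at least one massive neutrino. The parameter values in the following illustration are merely of the order of the solar oscillation parameters and serve only to show the L/E dependence.

# Illustrative sketch: two-flavour vacuum oscillation probability
# P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]).
# A non-zero P requires dm2 != 0, i.e. at least one massive neutrino.
import math

def oscillation_probability(dm2_eV2: float, theta_rad: float,
                            L_km: float, E_GeV: float) -> float:
    """Two-flavour appearance probability in vacuum (textbook formula)."""
    return (math.sin(2.0 * theta_rad) ** 2
            * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2)

# Values of the order of the solar parameters, used only for illustration:
print(oscillation_probability(dm2_eV2=7.5e-5, theta_rad=0.59,
                              L_km=180.0, E_GeV=0.004))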
The time of proton decay experiments was brightened in 1987 by a second astro-
physical neutrino signal: the signal from Supernova 1987A. The time difference
between the neutrinos arriving from this event could be used together with the up-
coming measurements of the fluctuations of the Microwave Background and the dif-
ferent beta-, double-beta and neutrinoless double-beta decay experiments to limit
the range of the neutrino masses. These investigations will be continued until this
mass range shrinks to an actual measurement of the mass.
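A minimal sketch of the time-of-flight argument behind these limits: a neutrino of mass m and energy E ≫ mc² arrives later than light from a source at distance L by roughly Δt ≈ (L/2c)(mc²/E)². The 50 kpc distance to the Large Magellanic Cloud and the 10 MeV energy used below are typical order-of-magnitude inputs, not numbers quoted in this volume.

# Illustrative sketch: time delay of a massive neutrino relative to light,
# dt ~ (L / 2c) * (m c^2 / E)^2, the relation behind the SN 1987A
# time-of-flight bound on the neutrino mass.
KPC_IN_M = 3.086e19      # one kiloparsec in metres
C = 2.998e8              # speed of light in m/s

def delay_seconds(m_eV: float, E_MeV: float, L_kpc: float = 50.0) -> float:
    """Arrival delay (s) relative to light for a neutrino of mass m_eV [eV/c^2]."""
    ratio = m_eV / (E_MeV * 1.0e6)          # m c^2 / E, dimensionless
    return 0.5 * (L_kpc * KPC_IN_M / C) * ratio ** 2

# A 1 eV neutrino of 10 MeV from about 50 kpc arrives roughly 25 ms late:
print(f"{delay_seconds(m_eV=1.0, E_MeV=10.0):.3f} s")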
It should be noticed that the neutrino story told up to here forms a root of As-
troparticle Physics that is largely independent of the cosmic ray and gamma-ray
research already discussed; however, it is increasingly connected to them by the studied sub-
jects, applied methods and nested theories. Astrophysics (the fusion process in the
Sun), particle physics (neutrino oscillations and masses) and cosmology
(the contribution of the neutrino as a hot dark matter particle) are studied with one family of
experiments.
We left particle physics at the point when accelerator experiments came
up. In this book we will not look at the details of how a large
number of particles were discovered with these accelerators. We will also not discuss how the standard
model was established and confirmed. One property of the standard model, how-
ever, had unexpected consequences for Astroparticle Physics. Its symmetry would
have required that matter and anti-matter annihilated in the early universe, so that no
world made of ‘matter’ could have formed. In 1968, Andrei Sakharov found under
which conditions the matter–anti-matter asymmetry could have formed. That the
universe was not in a state of thermal equilibrium was obvious in big bang cosmol-
ogy, and the required C and CP violation was found and is still investigated today
– for example in the LHC experiment LHCb. Proton decay, also required, however,
could only be investigated in large non-accelerator experiments. The size of the
first generation of such experiments depended on the idea of the unification of the
fundamental forces extending the standard model. In the middle of the 1980s, the
simplest extension of the standard model, called SU(5), implied a proton lifetime
of about 10²⁹ years. With detectors consisting of 1 000 tons of matter and hidden
from cosmic radiation as deep under the Earth’s surface as possible, one expected to
detect several proton decays per year. Hinrich Meyer was one of the leading physi-
cists who constructed the French–German Fréjus iron calorimeter for this purpose.
In this section, he reports on the path leading from accelerator laboratories to under-
ground physics.
Unfortunately, at the end of the 1980s, the attempts to detect proton decay failed,
and SU(5) – be it supersymmetric or not – had to be abandoned. With this failed
search, the experimental background to proton decay events, consisting of neutrinos
and muons, moved into the center of interest. In addition to the already discussed solar
neutrinos, atmospheric muon and electron neutrinos were also detected, and their
energy spectrum was measured. The overwhelming flux of atmospheric muons was
used for cosmic ray studies. Thus the experiments built primarily as detectors of
‘particle physics without accelerators’ became entirely and successfully cosmic ray
experiments.
The open question of the sources of the cosmic radiation at high energies had led
to first theories of the acceleration of protons and their energy loss by proton–proton
or proton–gamma interactions. Some of these theories predicted such high neutrino
fluxes that they should have been visible on top of the atmospheric neutrino flux
in existing experiments. By the non-detection of these neutrinos, these theories, as
well as theories predicting a measurable very-high-energy neutrino flux, for example
from the decay of topological defects, could be rejected. The observed data sample
was further used to set limits on the flux of possible point-like sources. In this way,
Cygnus X-3, the position at which the Kiel air shower array had claimed to see a
signal, was investigated (and with the given sensitivity rejected) as one of the first
point source positions.
The epistemological purpose of rejecting theories was fulfilled even more success-
fully than the physicists involved would have wished at that time. As a consequence,
however, the first results in the field of neutrino astronomy were published, and in nuce the meth-
ods still in use in present neutrino telescopes were developed. At the moment of its
appearance, the sensitivity of the Fréjus apparatus as a neutrino telescope had en-
tered a range in which physically meaningful results could be published, and also
meaningful questions for the construction of the next Astroparticle Physics detector
generation and the next generation of particle acceleration theories could be asked.
The road to Astroparticle Physics to be followed in this section starts with the first
neutrino detection in the 1950s. Since then, experiments to detect neutrinos from the
interactions of primary nuclei in the atmosphere were planned and
executed. Like all neutrino detectors, appropriate experiments had to be installed
as deep under the surface as possible to shield them against the flux of atmospheric muons.
Then muon events were searched for from directions with a shielding of more than 13 000 m w.e.
(metres of water equivalent, i.e. nearly horizontal events), from which atmospheric muons
could not reach the detector. As Christian Spiering explains, the first successful detection of at-
mospheric neutrinos came in 1965, nearly simultaneously in two experi-
ments (KGF and CWI).
lyzed data gain their true value through the connection with all other experiments –
through the connection of all experiments to the ‘world detector’.
1.3 Summary
In this introduction, the dependencies between the chapters of this book, illuminat-
ing the history, present and future of Astroparticle Physics, are presented. With
this help the reader may decide which track he wants to follow to obtain a basic
knowledge of the history of Astroparticle Physics in the past century.
Chapter 2
From the Discovery of Radioactivity to the First
Accelerator Experiments
Michael Walter
2.1 Introduction
The article reviews the historical phases of cosmic ray research from the very be-
ginning around 1900 until the 1940s, when the first particle accelerators replaced
cosmic particles as a source for elementary particle interactions. Contrary to the
discovery of X-rays or the ionising α-, β- and γ -rays, it was an arduous path to
the definite acceptance of the new radiation. The development in the years before
the discovery is described in Sect. 2.2. The following section deals with the work
of Victor F. Hess, especially with the detection of extraterrestrial radiation in 1912
and the years until the final acceptance by the scientific community. In Sect. 2.4 the
study of the properties of cosmic rays is discussed. Innovative detectors and meth-
ods like the cloud chamber, the Geiger–Müller counter and the coincidence circuit
brought new stimuli. The origin of cosmic rays was and is still an unsolved ques-
tion. The different hypotheses of the early time are summarised in Sect. 2.5. In the
1930s a scientific success story started which none of the first protagonists might
have imagined. The discovery of the positron by C.D. Anderson was the birth of
elementary particle physics. The 15 years until a new era started in 1947 with first
accelerator experiments at the Berkeley synchro-cyclotron are described in Sect. 2.6.
It is obvious that this article can only cover the main steps of the historical de-
velopment. An excellent description of the research on the “Höhenstrahlung” in the
years between 1900 and 1936 was given by Miehlnickel (1938). Two other volumes
are also recommended: Brown and Hoddeson (1983) and Sekido and Elliot (1985).
Both summarise the personal views of the protagonists or their coworkers of the
early time.
M. Walter ()
DESY, Platanenallee 6, 15738 Zeuthen, Germany
e-mail: [email protected]
In general it was assumed that dry air is a good insulator. Then, in 1785, Charles Au-
gustin de Coulomb observed that a very well insulated electrically charged conductor
loses its charge with time. Coulomb’s hypothesis was that the charge is taken away
from the conductor by the contact of dust and other particles contained in the air.
But this explanation was not generally accepted and for more than 100 years there
was no clear answer to the question why air becomes conductive.
In 1857 the electrical discharge tube was developed by Heinrich Geißler. It con-
sisted of a glass cylinder with two electrodes inside at both ends. Filled with gases
like air, neon or argon at low pressure, and operated at a high voltage of several kilo-
volts, the tube showed a plasma glow. These effects were first used for entertainment
demonstrations, but this discharge tube was finally the basis for the development of
cathode, X-ray and neon tubes. In 1869 William Crookes operated a tube at lower
gas pressure and found that cathode rays are produced at the negative electrode. At
the other end of the tube, close to the positively charged anode, they hit the glass wall
where fluorescence light was emitted. Like many others, W.C. Röntgen also investi-
gated the properties of cathode rays, using a tube provided by Philipp Lenard. At the
end of 1895, he observed by chance an energetic radiation penetrating the black cardboard
covering the tube. With the picture of his wife’s hand skeleton the discovery
of X-rays reached worldwide publicity within a few weeks. The new radiation and
the photographic imaging were a breakthrough for new developments in physics and
a revolution in medical diagnostics.
Only two months later H. Becquerel discovered also by chance a new penetrating
radiation in uranium minerals. Inspired by Röntgen’s discovery, he continued in
Fig. 2.1 C.R.T. Wilson, J. Elster and H. Geitel, A. Gockel and Th. Wulf
A new effect was discovered when gases were irradiated with α-, β- and γ -rays:
ionisation. The radiation is energetic enough to dissociate atoms and molecules into
positively and negatively charged ions, as it was assumed before the atomic structure
was known. These ions allow the transport of electricity and make gases conductive.
It was the Scottish physicist Charles Thomson Rees Wilson (Fig. 2.1) who found
in 1896 that the formation of clouds and fog is connected with the ionisation of air
molecules.

Table 2.1 Absorption coefficients of γ-rays for different substances and the necessary absorber
thickness to reduce the ionisation by a factor of two (Eve, 1906). (It should be emphasised that at
this time measured or calculated values were given without errors.)

Substance | Density (g cm⁻³) | Absorption coefficient λ (cm⁻¹) | Absorber thickness d for 50 % reduction (cm)

But the work on this topic was continued almost 15 years later, when he
developed a cloud chamber which visualised α- and β-rays. At first he investigated
the ionisation of gases using an electrometer, the standard detector at this time. It
consisted of two thin gold leaves mounted on a metal rod enclosed in a metallic
vessel. When the rod is charged, the gold leaves move away from each other because of
their equal charge. The distance is then a measure of the amount of charge. In a
publication in 1900 (Wilson, 1900), Wilson gave an explanation for the conductivity
of air in an insulated vessel. The reason that a charged metallic conductor loses its
charge in an insulated chamber filled with air is that there are small quantities of
radioactive substances present. These can be contaminations embedded in the chamber walls and
in the surrounding environment. At the same time, Julius Elster and Hans Geitel
(Fig. 2.1), two friends and physics teachers at a school in Wolfenbüttel, Germany, came
to the same conclusion (Geitel, 1900; Elster and Geitel, 1901). Forgotten in our days,
both were, with about 200 joint publications between 1880 and 1920,
internationally accepted authorities in the fields of atmospheric electricity, the
photo effect and radioactivity (Fricke, 1992). Several times Elster and Geitel were
nominated for the Nobel prize. They did not accept an offer of a professorship at
a university but preferred the independence of school teaching and of working in their
private laboratory.
The group around Ernest Rutherford at McGill University in Canada went a
step further in 1903. An electroscope was shielded with different materials, like wa-
ter and lead, to measure the ionisation as a function of the absorber thickness.
A decrease by about a factor of three was observed, but then the ionisation remained
constant. There is obviously radiation of high penetration power, which: “. . . may
have its origin in the radioactive matter which is distributed throughout the earth and
atmosphere” (Cooke, 1903). Several authors assumed then that the penetrating radi-
ation is γ-rays coming from radium in the Earth’s crust and from radium emanations
in the atmosphere. With his own measurements and the results of others, A.S. Eve
estimated the absorption coefficients λ of γ-rays for different substances given in
Table 2.1 (Eve, 1906). The dependence of the ionisation I on the distance d to the γ-
ray source is described by I = I₀ · e^(−λd). Interesting for later discussions is that 99 %
of the γ-rays from radium emanations will be absorbed by 1 000 m of atmosphere.
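As an illustrative check, the exponential law can be evaluated with the absorption coefficient for air that Hess later confirmed (λ = 0.0000447 cm⁻¹, quoted further below in this chapter); the distances in the short sketch correspond to the Eiffel tower height and to the 1 000 m of atmosphere mentioned above.

# Illustrative sketch of I = I0 * exp(-lambda * d) with the absorption
# coefficient for air quoted later in this chapter (Hess, 1911).
import math

LAMBDA_AIR = 4.47e-5     # absorption coefficient of air, cm^-1

def remaining_fraction(d_cm: float) -> float:
    """Fraction I/I0 of the gamma radiation surviving a thickness d_cm of air."""
    return math.exp(-LAMBDA_AIR * d_cm)

# Thickness of air that halves the ionisation (the quantity listed in Table 2.1):
half_thickness_m = math.log(2.0) / LAMBDA_AIR / 100.0
print(f"50 % reduction after about {half_thickness_m:.0f} m of air")

# Remaining fraction at 300 m (Eiffel tower height) and after 1 000 m of air:
for d_m in (300.0, 1000.0):
    f = remaining_fraction(d_m * 100.0)
    print(f"{d_m:6.0f} m: {100 * f:5.1f} % remaining, {100 * (1 - f):5.1f} % absorbed")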
Wilson in Scotland and Elster and Geitel in Germany were probably the first who in-
vestigated radioactivity in the environment outside of laboratories. They performed
measurements in a railway tunnel (Wilson) and in caves and salt mines (Elster and
Geitel). A comparison with the ionisation measured outside in the open showed
different results. Volcanic rock contains in general a higher fraction of radioactive
substances than sedimentary rock. Radioactive contamination is much smaller in rock salt
mines and in water.
A new level of measurement quality was reached by Theodor Wulf (Fig. 2.1), a Ger-
man Jesuit priest, who studied physics in Innsbruck and Göttingen. As physics lec-
turer at the Jesuit University Valkenburg in The Netherlands he investigated from
1905 to 1914 the electricity of the atmosphere and radioactivity. Wulf developed
a robust, transportable electrometer which became for many years the state of the
art instrument. The gold leaves were replaced by two metallised quartz strings. Fig-
ure 2.2 shows a schematic view of this two-string electrometer. It was produced by
the company Günther & Tegetmeyer in Braunschweig/Germany (Fricke, 2011), as
were many of its successor models distributed worldwide. In autumn 1908 Wulf
performed absorption measurements of γ -radiation in the area of Valkenburg and
concluded (Wulf, 1909a):
Then particularly observations in balloons and with kite flights could give very valuable
information whether the starting point of this radiation is the earth, or the atmosphere, or
the stars.
One can only speculate why Wulf himself did not explore the atmosphere with a
balloon. But he was focused first of all on detailed investigations inside and outside
of buildings and caves, in mines up to 980 m below ground, on lakes and the river
Maas above and below the surface. From these ionisation measurements he came to
the following summary (Wulf, 1909b):
Experiments are presented which demonstrate that the penetrating radiation at the place
of observation is caused by primary radioactive substances which are located in the upper
Earth’s layers, up to 1 m below the surface. If a part of the radiation comes from the atmo-
sphere, then it is so small that it is not detectable with the used methods. The time variations
of the γ -radiation can be explained by the shift of emanation-rich air masses in the Earth in
larger or smaller depths due to variations of the air pressure.
To prove this hypothesis he took at least a small step into the atmosphere. Wulf
followed an invitation to perform measurements on top of the Eiffel tower on four
days in April 1910. Assuming that the main part of the γ-radiation comes from the area
near the ground, one would expect a reduction of the ionisation at 300 m height to about
27 % of its ground value (see Table 2.2). In fact, Wulf measured a decrease of the ionisation by only 13 % com-
pared to the ground (Wulf, 1910). This significant difference was in clear disagree-
ment with his previous assumption that radioactive emanations in the atmosphere
are negligible.
In Italy Domenico Pacini, a physicist at the Agency of Meteorology and Geo-
dynamics, confirmed this result. Using electrometers of the Wulf-type, he performed
measurements in 1910 and 1911 (Pacini, 1910, 1912) on board a destroyer of the
Italian Navy at more than 300 m distance from the coast, where the water was at
least 4 m deep. Assuming that there is no influence of radiation from the Earth’s
solid ground, he found at sea a fraction of 66 % of the ionisation measured
in parallel on land. At the same time George Simpson, an English meteorologist,
and Charles Wright, a Canadian physicist, investigated the ‘atmospheric electric-
ity over the ocean’ (Simpson and Wright, 1911) on the way from England to New
Zealand. Both were scientific members of Robert Scott’s crew travelling in 1910
on board the ‘Terra Nova’ to Antarctica. They measured the ionisation also with a
Wulf electroscope made by Günther & Tegetmeyer. Whereas Pacini’s data showed
strong fluctuations on land and at sea, Simpson and Wright measured on average
6–7 ions cm⁻³ s⁻¹ over the sea without variations during a day. They stated (Simp-
son and Wright, 1911):
. . . it was seen that near land a high radioactive-content of the air almost synchronised with
a high natural ionisation. That this high ionisation is due to radioactive products deposited
on the ship itself is highly probable from the fact that the ionisation persists for some time
after the high air radioactivity had disappeared.
At the end of the 19th century balloon flights were very popular for military and
scientific purposes. Especially meteorologists and geophysicists used balloons to
study weather conditions, the electrical earth field and the electricity of the atmo-
sphere at high altitudes. Probably Elster and Geitel were the first to suggest using
a balloon for ionisation measurements in the higher atmosphere. It was apparently
forgotten or ignored that between 1902 and 1903 the German meteorolo-
gist Franz Linke had already performed 12 balloon flights with interesting results (Linke, 1904).
Starting in Berlin he reached altitudes up to 5 500 m and measured the electrical
field of the Earth and the ionisation in the atmosphere. There was an agreement that
Elster and Geitel measured the ionisation in Wolfenbüttel at the same time for com-
parison.

Table 2.2 Ionisation measurements in the atmosphere at different altitudes. The measured values
can be compared with the assumption that the radiation is concentrated close to the ground and
is absorbed by the air corresponding to the exponential dependence on the distance (Wulf, 1910;
Gockel, 1910)

Scientist | Location | Date | Position | Measured ions (cm⁻³ s⁻¹) | Expected ions (cm⁻³ s⁻¹)

Linke observed at altitudes between 1 and 3 km about the same ionisation
values as at ground and an increase by a factor of four at higher altitudes up to 5 km.
Obviously, the existence of penetrating radiation in the higher atmosphere was de-
tected too early to be recognised and appreciated by the physics community. A new
series of balloon flights began in 1908 with Flemming and Bergwitz. Because of
problems with their detectors, they did not achieve convincing results. In the end of
1909 Albert Gockel (Fig. 2.1) started the first of three balloon flights in Switzerland.
With a Wulf electrometer he could establish previous observations that the ionisa-
tion of the atmosphere decreases slowly with altitude (Gockel, 1910). In Table 2.2
the results of Wulf and Gockel are summarised.
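The ‘Expected ions’ column of Table 2.2 rests on a simple attenuation argument. As a minimal sketch (the purely exponential absorption with a single coefficient λ is the assumption stated in the table caption; the notation is ours): if the radiation originates entirely at the ground, the ionisation expected at height h is

    I(h) = I_0 e^{-λ h},

so only the fraction e^{-λ h} of the ground value I_0 should survive at the measurement height; the reductions actually measured by Wulf and Gockel were smaller than this expectation.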
Inspired by the work of Wulf, Bergwitz and Gockel, Hess started his own measurements. First, he wanted to prove experimentally the absorption of γ-rays in air. With
the strongest radium sources available in the institute Hess investigated the range of
γ -rays at different distances to the detector (Hess, 1911, p. 999): “The sources were
positioned at distances of 10, 20, 30, . . . up to 90 m from the electrometer and then
the saturation current was estimated as mean value of 5–10 single measurements.”
With an absorption coefficient for air of λ = 0.0000447, Hess confirmed the results
of previous estimates of Eve and others and concluded (p. 1 000): “. . . that the pen-
etrating radiation of the Earth must decrease rapidly with the altitude and at 500 m
only few percent would be expected of the values on the ground.” As Hess men-
tioned in the publication (Hess, 1913), Wulf proposed to him in winter 1911/1912 to
calibrate two-string electrometers with standard radium sources of different strengths. After some improvements of the electrometer construction, carried out by the company Günther & Tegetmeyer, Hess performed the calibration with radioactive sources of different strengths. The accuracy in measuring the radioactivity of a source of unknown strength could be improved to a few per mille (Hess, 1913); without this calibration, the same electrometer reached an accuracy of only about 3 %.
In 1911 Hess planned balloon flights to repeat the investigations of penetrating ra-
diation in the atmosphere. The Royal Imperial Austrian Aeronautical Club provided
a balloon for two flights. Already with the first flight he confirmed the results of
Gockel (Hess, 1911). The ionisation remained almost constant up to the maximum
height of 1 000 m. The second flight, in October 1911, took place during the night. In general, stable thermal conditions guarantee quiet flights at constant altitude, but in this case bad weather did not allow flying higher than 200–400 m above ground.
Nevertheless, the observation of the identical ionisation at day and night became an
important argument for later discussions.
That the Imperial Academy of Sciences in Vienna funded seven balloon flights
in 1912 shows the high standing of this research in Austria. To avoid the problems of Bergwitz and Gockel at higher altitudes, Hess had ordered from Günther & Tegetmeyer two pressure-sealed electrometers for γ-rays and a third one with thin zinc walls for
β-rays. Six flights were launched from the area of the Aeronautical Club in Vienna’s
Prater. The balloons were filled with illuminating gas which did not allow one to
reach very high altitudes. In Table 2.3 the characteristic data of these flights are
summarised, and Fig. 2.3 shows the flight routes.
The results of the six flights at relatively low altitudes can be summarised as
follows (Hess, 1912):
(i) All three electrometers showed identical variations with time and altitude.
Table 2.3 Seven balloon flights of V.F. Hess in 1912 (Hess, 1912). Columns: Flight, Date, Time, Height above ground (m), Ions (γ-1) (cm⁻³ s⁻¹), Ions (γ-2) (cm⁻³ s⁻¹), Ions (β-rays) (cm⁻³ s⁻¹).
the mean values of the observed ionisation for all three detectors. Unfortunately,
the β-ray detector was damaged accidentally by Hess before the maximum height
was reached. But between 500 and 3 000 m a continuous increase of ionisation was
measured. Both γ -detectors registered an increase of ionisation by a factor four
from 3 000 to 5 200 m. Summarising the results of the seven flights, Hess came to
the following conclusion (Hess, 1912):
The results of these observations seem to be best explained by the assumption that a radiation of very high penetrating power enters our atmosphere from above, which even in its lowest layers causes a fraction of the ionisation observed in the closed detectors. The intensity of this radiation seems to be subject to variations which are still noticeable at reading intervals of one hour. Since I found no decrease of the radiation either at night or during the solar eclipse, the sun can hardly be considered the source of this hypothetical radiation, at least as long as one assumes a direct γ-radiation with straight-line propagation.
The discovery of cosmic rays can be seen as a step-wise approach. First indi-
cations seen by Wulf, Bergwitz, Gockel and Pacini were convincingly established
by the measurements of Hess. The essential step was the detection of the strong in-
crease of penetrating radiation with growing altitude. Since γ -rays have the largest
penetration power of the three known ionising radiations, it was natural to assume
that also cosmic rays consist of energetic γ -rays.
World War I stopped most activities. Long-term measurements of Gockel and Hess in the Alps confirmed the balloon results for altitudes of 2 500–3 500 m (Gockel, 1915; Hess, 1917). This was not so important as a corroboration of the balloon results, but it was convincing enough to prompt thoughts about research stations on high mountains. There was a group of physicists who had serious doubts that a new radiation of cosmic origin had been discovered. Their main arguments ranged from a possible radiation originating in the upper atmosphere to measurement problems due to insulation leaks caused by the low temperatures at high altitudes.
A problematic role in these scientific debates was played by Robert A. Millikan at the California Institute of Technology. He received the Nobel Prize in 1923 for the measurement of the elementary charge of the electron, although his data analysis was challenged by experts. As director of the Norman Bridge Laboratory of Physics he started a cosmic ray research program. First results were presented by Russell M. Otis in 1923. He had measured the ionisation with a Kolhörster-like electrometer in balloons and airplanes up to 5 300 m altitude. A similar altitude dependence was observed as
before in Europe, although with a smaller increase. Another approach was tried by
Millikan and Bowen, who used a simple and light electrometer with automated data
recording. Their goal was to overcome the magic 10 000 m border with low-cost, un-
manned sounding balloons. Two of four ascents in 1921 were successful and reached
11 200 and 15 500 m. But with the detector only a single averaged ionisation value of 46.2 ions per cm³ per second could be obtained above 5 500 m. This was about a factor
three larger than at the surface. Nevertheless, Millikan concluded from this doubtful
result in April 1926 that there is “complete disagreement” with the data of Hess and
Kolhörster and one has therefore a “definite proof that there exists no radiation of
cosmic origin having an absorption coefficient as large as 0.57 per meter of water”
(Millikan and Bowen, 1926). A second publication from June 1926 summarised the experiments of Otis and measurements in the mountains. Here, too, no evidence for extraterrestrial radiation was found. But five months later, measurements in
snow-fed lakes at high altitudes (Millikan and Cameron, 1926) showed ‘suddenly’
that “This is by far the best evidence found so far for the view that penetrating rays
are partially of cosmic origin.”
An article with the title “Millikan Rays” appeared in the New York Times (NY-Times, 1925) on 12 November 1925; it referred to the sounding balloon measurements published five months later, in April 1926 (Millikan and Bowen, 1926), with the conclusion given above. This is an interesting example of Millikan's ‘abilities’ in publicity and science marketing. Parts of this article, which was even reprinted in ‘Science’ (Science, 1925), are reproduced here:
DR. R.A. MILLIKAN has gone out beyond our highest atmosphere in search for the cause
of a radiation mysteriously disturbing the electroscopes of the physicists. . . . The study had
to be made out upon the edge of what the report of his discovery calls “finite space,” many
miles above the surface of the earth in balloons that carry instruments of men’s devising
where man himself cannot go. His patient adventuring observations through 20 years have
at last been rewarded. He has brought back to earth a bit more of truth to add to what we
knew about the universe. . . . He found wild rays more powerful and penetrating than any
that have been domesticated or terrestrialized, travelling toward the earth with the speed of
light . . . The mere discovery of these rays is a triumph of the human mind that should be
acclaimed among the capital events of these days. The proposal that they should bear the
name of their discoverer is one upon which his brother-scientists should insist. . . . “Millikan
rays” ought to find a place in our planetary scientific directory all the more because they
would be associated with a man of such fine and modest personality.
The ‘brother-scientists’ in Europe insisted, but in the opposite way (Hess, 1926;
Kolhörster, 1926). They made clear that what was called the discovery of ‘Millikan
rays’ was nothing else than the radiation discovered in 1912 by Hess.
But Millikan's aggressive campaign had a strong impact. In several scientific books of this time, also by European authors (see e.g. De Broglie and De Broglie, 1930, p. 130), Millikan was credited as the discoverer of the extraterrestrial rays.
Finally, with the Nobel Prize awarded in 1936 to V.F. Hess, the real development
in this research field was put in perspective. Today it is assumed that Millikan created the terms ‘cosmic radiation’ and ‘cosmic rays’ for this radiation (Millikan and Cameron, 1926). But this, too, can be disputed. Gockel and Wulf had already used the term in a paper from 1908 (Gockel and Wulf, 1908) summarising the results of their investigations
in the Alps:
An influence of the altitude on the ionisation could not be verified. This allows the conclu-
sion that a cosmic radiation, if it exists at all, contributes with an inconsiderable fraction
only.
In English, French and Russian publications the term ‘cosmic rays’ became the standard after Millikan's paper of 1926, whereas in German the terms ‘Höhenstrahlung’ and ‘Ultrastrahlung’ remained in use until the end of the 1940s. Afterwards ‘cosmic rays’ and ‘cosmic particle physics’ were commonly used.
It was the higher penetration power which led to speculations that there could be something other than the known α-, β- and γ-rays. As discussed before, it took years to isolate cosmic rays from background γ-radiation caused by radioactive impurities in the detector walls, in the environment of the detector, in the Earth's crust and in the lower atmosphere. Until the early 1930s it was the general consensus
that cosmic rays are γ -rays. In many long-term experiments the time variation of
the ionisation was measured and correlations were investigated with temperature,
velocity of the wind, air pressure, and the position of the sun and the stars. Most of these results were contradictory, and, in the end, the main benefit was a better understanding of the electrometers used and of the experimental conditions. Besides the discovery
at high altitudes itself, the absorption measurements in water and ice as well as
with lead shielding brought new insights. Real progress came with new detection
methods like the cloud chamber, the Geiger–Müller counter and the possibility to
measure coincident signals.
Fig. 2.6 Schematic view of the experimental set-up to measure the ionisation variation with the
air pressure in the Neva river (from Myssowsky and Tuwim, 1926)
hits the upper atmosphere (Regener, 1932b). The automatic photographic recording
of ionisation, temperature and air pressure was adapted to the conditions at very
high altitudes. The balloon flight on 12 August 1932 reached an altitude of 28 km.
Figure 2.5 is an impressive demonstration of the continuation of Kolhörster’s mea-
surements into the stratosphere. The main conclusions of these investigations were
(Regener, 1932b):
. . . 3. At pressures below 150 mmHg (above 12 km altitude) the curve becomes flatter, i.e. the intensity of the radiation increases more slowly towards the end of the atmosphere. . . . 6. If a γ-radiation from the known radioactive substances existed in the cosmos, then it would penetrate . . . still 20 % of the corresponding air column. This would result in an increase of the radiation intensity in the upper part of the curve. Since this is not the case, one can conclude that such a radiation does not exist with observable intensity.
The influence of the air pressure on the ionisation rate was observed years before
the discovery of cosmic particles. Simpson and Wright (see also Sect. 2.2.3) stated
in their summary of atmospheric electricity measurements over the ocean in 1910
(Simpson and Wright, 1911):
A slight dependence of the natural ionisation upon barometric pressure has been observed
– a high barometer giving low value of ionisation.
The table of Wulf’s measurement results on the Eiffel tower (Wulf, 1910) showed
the same effect, but Wulf did not comment on it.
The effect was investigated by the Russian physicists L. Myssowsky and
L. Tuwim in 1926 in the Neva river (Myssowsky and Tuwim, 1926). To reduce
background radiation a Kolhörster electrometer was installed 1 m below the sur-
face (see Fig. 2.6). Between 21 May and 11 June the ionisation and air pressure
were registered. An increase by 1 mmHg (1.333224 hPa) reduced the ionisation by 0.7 %. Their conclusion that the barometric effect has to be taken into account in precision measurements was a first result that remains valid today.
Fig. 2.7 Schematic illustration of the latitude effect. The lines represent the intensity of cosmic particles in dependence on the latitude and longitude (from Johnson, 1938)
Fig. 2.8 Schematic view of Wilson’s cloud chamber from 1912 (from Wilson, 1912)
For many years it was unquestioned that cosmic rays are highly energetic γ-rays. Radioactive decays were the only known source, and γ-rays had the highest energy and penetration power. This view changed at the end of the 1920s, when new detection methods came into operation. The old-fashioned electrometers had been driven to high precision and stability, but they could not distinguish whether a γ- or a β-ray had ionised the air molecules. Only thicker detector walls could shield against the lower-energy electrons.
The cloud chamber was not entirely new. Wilson had made first studies in 1894, trying to understand the formation of clouds and fog. Motivated by his investigations on
natural radioactivity and the conductivity of air by ionisation, he came back to this
idea. In 1911 he published first results entitled “On a method of making visible
the paths of ionising particles through a gas” (Wilson, 1911). In the following year
Wilson produced, with an improved cloud chamber, impressive photographs of α-,
β- and X-rays (Wilson, 1912). The working principle of a cloud chamber is rather simple. By a fast expansion, a volume containing moist air reaches a supersaturated state. Irradiation with ionising rays produces ions in the air, which then act as condensation nuclei. Tiny water droplets mark the track of the ionising particle. Figure 2.8 presents a schematic view of this cloud chamber. Surprisingly, very straight tracks are visible on two photographs. Since cosmic rays had not yet been detected at that time, Wilson misinterpreted these tracks. One of them is shown in Fig. 2.9.
There can be no question that the possibility to visualise the path of atomic par-
ticles revolutionised the research. The installation of the chamber between strong
magnet coils opened for the first time the possibility of momentum and energy esti-
mates by measuring the track curvature.
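In modern units, the relation behind such curvature measurements is the standard one for a singly charged particle moving perpendicular to a magnetic field (not spelled out in the original text):

    p [GeV/c] ≈ 0.3 · B [T] · r [m],

so that measuring the radius of curvature r on the photograph, with the field strength B known, gives the momentum p directly.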
Fig. 2.9 Photograph with a straight charged track, which is possibly the first cosmic ray electron.
It was taken with Wilson’s cloud chamber before June 1912 (from Wilson, 1912)
Fig. 2.10 Layout of Bothe’s and Kolhörster’s coincidence experiment (from Bothe and Kolhörster,
1929). C1 and C2 are the Geiger–Müller counters. The coincidence condition requires the particles
to cross the detector from top to bottom
(ii) These charged particles have a penetration power comparable with cosmic rays
measured at high altitudes.
Therefore, it could be assumed that primary cosmic rays are also charged particles. The final answer to this question was then given in the following years by measuring the latitude dependence of the cosmic particle rate, as discussed in Sect. 2.4.3.
Bothe and Kolhörster still registered coincidences with a photographic method. Analysing the film strips on which the electrometer string positions were recorded, they looked for amplitudes appearing at the same time in both detectors. But the development of electronic components in the field of broadcasting and telephony allowed new solutions. At the end of 1929 Bothe published his pioneering idea under the title ‘Simplification of coincidence counting’ (Bothe, 1929). With an electronic circuit and a two-grid vacuum tube he could realise automatic coincidence counting. The circuit and the electronic components were subsequently improved by Bruno Rossi and others, and the coincidence method is still an essential component of modern particle and astroparticle experiments.
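In modern terms the coincidence logic is easy to state. The following Python sketch counts hits from two counters that fall within a common resolving time; the hit times and the 1 ms window are invented for the illustration and have nothing to do with Bothe's actual circuit.

    from bisect import bisect_left, bisect_right

    def count_coincidences(hits_1, hits_2, resolving_time=1e-3):
        """Count hits in counter 1 that have a partner hit in counter 2
        within +/- resolving_time (both hit lists sorted, times in seconds)."""
        n = 0
        for t in hits_1:
            lo = bisect_left(hits_2, t - resolving_time)
            hi = bisect_right(hits_2, t + resolving_time)
            if hi > lo:          # at least one hit of counter 2 in the window
                n += 1
        return n

    # Example with invented hit times (seconds):
    c1 = [0.0012, 0.0450, 0.1031, 0.2220]
    c2 = [0.0013, 0.1030, 0.3100]
    print(count_coincidences(c1, c2))    # -> 2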
• 1906: O.W. Richardson studied the diurnal variation of ionisation in closed ves-
sels (Richardson, 1906). He assumed that a correlation with the variation of the
electric earth field near the surface could be “caused by radiation from extra-
terrestrial sources”.
• 1908: Gockel and Wulf used in their paper on high altitude measurements in the
Alps (Gockel and Wulf, 1908) the term ‘cosmic radiation’ (kosmische Strahlung)
many years before Millikan.
• 1912: Hess discovered the cosmic radiation on 7 August 1912. From his previous balloon flights during the night and during a solar eclipse, he concluded that the sun could be excluded as the source.
• 1913: Kolhörster confirmed the discovery. Why he favoured the sun as the source is an open question. Perhaps he only wanted to distinguish himself from Hess. Especially in the first years he tried to convince the readers of his papers that Hess's results were not very reliable.
• 1915: For the ‘Elster-Geitel Festschrift’ Egon von Schweidler (Univ. Vienna)
performed theoretical estimates “about the possible sources of the Hess radia-
tion” (von Schweidler, 1915). Based on the existing knowledge about radioactivity, Schweidler could exclude most sources: the upper atmosphere, the moon, the sun, other planets and the fixed stars. He concluded that “the hypothesis of radioactive substances distributed in outer space sets the least extreme requirements.”
• 1921: Walther Nernst, Nobel Prize laureate of 1920 and founder of physical
chemistry, gave a public lecture on the status of the newest research (Nernst, 1921). He also discussed the implications of the cosmic radiation: “. . . if much primordial matter is concentrated in the Milky Way, this could be a region of stronger emission. . . . More detailed investigations should be done on high mountains. From there the fundamental question could be decided whether it (the radiation) is emitted uniformly in space or more strongly from the Milky Way.” Subsequent investigations did not give conclusive answers. The reason became clear later with the discovery of the particle character of cosmic rays: the galactic magnetic fields prevent a straight path from source to observer.
• 1926: In the publication, where Millikan and Cameron ‘rediscovered’ cosmic
rays, they also presented their view on the origin of the radiation (Millikan and
Cameron, 1926): “The cosmic rays are probably . . . generated by nuclear changes
having energy values not far from those recorded above. These changes may be
(1) the capture of an electron by the nucleus of a light atom, (2) the formation
of helium out of hydrogen, or (3) some new type of nuclear change, such as the
condensation of radiation into atoms. The changes are presumably going on not
in the stars but in nebulous matter in space, i.e., throughout the depths of the
universe.” It should be mentioned that Millikan was the last to give up the γ-ray nature of cosmic rays.
• 1933: With the findings of Skobeltzyn, Bothe and Kolhörster and the proof of the
latitude effect by Clay and Compton, the particle character of cosmic rays was
established. This naturally changed the assumptions about and requirements for their production.
• 1934: Fritz Zwicky, a Swiss astrophysicist, and Walter Baade, a German astronomer, introduced the term supernova for briefly flaring, extremely bright objects (Baade and Zwicky, 1934a): “. . . the whole visible radiation is emitted during the 25 days of maximum brightness and the total thus emitted is equivalent to 10⁷ years of solar radiation of the present strength.”
But, more importantly, they argued impressively that supernovae could be sources of cosmic rays (Baade and Zwicky, 1934b): “The hypothesis that supernovae emit cosmic rays leads to a very satisfactory agreement with some of the major observations on cosmic rays.” This concerns especially the energy release. They estimated the intensity of cosmic rays to be σ = (0.8–8) × 10⁻³ erg cm⁻² s⁻¹, in rather good agreement with experimental results. Assum-
ing that supernovae are the only source and knowing that very few appeared in
our galaxy in the last 1 000 years, Baade and Zwicky argued: “The intensity of
cosmic rays is practically independent of time. This fact indicates that the origin
of these rays can be sought neither in the sun nor in any of the objects of our own
Milky Way.”
• 1942: The rebirth of the hypothesis that the sun is a source of cosmic rays came with observations by Scott Forbush, a US geophysicist. He measured an increase of the cosmic ray rate during a strong solar flare in 1942 and concluded that at least a part of the cosmic rays comes from the sun (Forbush and Lange, 1942).
There were of course several publications discussing other ideas. Hannes Alfvén proposed in 1937 the magnetic fields of double star systems as an acceleration mechanism; Alfvén, Robert D. Richtmyer and Edward Teller discussed in 1949 the possibility that cosmic rays could have a solar origin. These and other suggestions are not discussed further here, since they had no lasting relevance for future developments.
Table 2.4 Cloud chamber experiments for cosmic particle detection operating in a magnetic field. Columns: Author, Year, Chamber diameter (cm), Magnetic field (tesla), Coincidence trigger counters, Discovery.
continued in 1931 by Paul Kunze in Germany, Patrick M.S. Blackett and Giuseppe
Occhialini in Great Britain and by Carl D. Anderson in the USA (see Table 2.4).
But also other experimental approaches to study the properties of cosmic par-
ticles yielded important results. More advanced arrangements of Geiger–Müller
counters in coincidence were used by Bruno Rossi, Bothe, Kolhörster and Erich Re-
gener. Another photographic method, photographic emulsion, was brought to per-
fection by the efforts of Marietta Blau.
Carl D. Anderson proposed at the end of his time as graduate student in 1929 a
magnet cloud chamber experiment. The goal was to study electrons produced in
a lead sheet within the chamber by 2.6 MeV γ -rays of a Th-C source. However,
Millikan forced him to construct a cloud chamber with a very strong magnet for
cosmic ray studies. First photographs taken in 1931 showed negatively and posi-
tively charged tracks. Mainly driven by Millikan's view of the nature of cosmic rays, they were interpreted as electrons and protons produced by high-energy cosmic γ-rays. But Anderson was in doubt, since for many positive particles the ionisation agreed with that of electrons. In August 1932 photographs with a 6 mm lead plate
in the centre of the chamber were taken. A short announcement appeared in Science
in September 1932. The more detailed publication from February 1933 (Anderson,
1933) presented the often-cited Fig. 2.11. It unambiguously demonstrated that the
track must be a positively charged electron. A proton would have a ten times shorter
track length.
At the same time, Blackett and Occhialini published a first analysis of pho-
tographs taken with their triggered cloud chamber (Blackett and Occhialini, 1933).
The efficiency for taking cosmic track photographs was 80 % compared to 2 % for
Anderson’s untriggered chamber. The sketch in Fig. 2.12 shows the experimental
set-up. Many photographs contained particle showers. To estimate momentum or
energy of the tracks was difficult because of the weak magnetic field. But both par-
ticle charges were observed with almost identical fractions and ionisation values,
which confirmed the assumption that electron–positron pairs were produced. Black-
ett and Occhialini discussed several hypotheses for the shower production and the
properties of the positron:
In this way one can imagine that negative and positive electrons may be born in pairs during
the disintegration of light nuclei. If the mass of the positive electron is the same as that of
the negative electron, such a twin birth requires an energy of 2mc² ∼ 10⁶ eV, that is much less than the translationary energy with which they appear in general in the showers.
The existence of positive electrons in these showers raises immediately the question of why
they have hitherto eluded observation. It is clear that they can have only a limited life as
free particles since they do not appear to be associated with matter under normal conditions.
. . . it seems more likely that they disappear by reacting with a negative electron to form two
or more quanta. This latter mechanism is given immediately by Dirac’s theory of electrons.
Anderson was aware of Dirac’s prediction of the positron (Dirac, 1930). But as
he stated in (Anderson, 1983), “. . . the discovery of the positron was wholly acci-
dental. . . . Dirac’s relativistic theory . . . played no part whatsoever in the discovery
of the positron.” For the paper of Blackett and Occhialini, Dirac computed the mean
free path and the range of positrons in water for different energies. A positron with
1 MeV energy annihilates on average after 0.45 cm, and at 100 MeV the range is
about 28 cm.
Probably, the visualisation of electron–positron particle showers initiated new
theoretical activities. Heisenberg, Oppenheimer, Bethe, Heitler and others pub-
lished models and theories. At the same time, experiments with cloud chambers
and counter set-ups yielded new results. These important developments on particle
showers will be discussed in the following article, by K.-H. Kampert and A. Watson.
After the discovery of the positron the main goal of the research with triggered
cloud chambers and pure counter experiments was a better understanding of the
particle properties. For theorists, the energy spectra of electrons, positrons and pro-
tons were important to verify and adjust their models. Cloud chambers in strong magnetic fields and triggered by counters had clear advantages over other methods. The track visualisation, the momentum measurement and the mass estimate
using the ionisation information allowed one to shed light on the complicated pro-
cesses.
One of these paradoxes mentioned by Anderson appeared in photographs taken
in 1934 with a 0.35 cm thick lead sheet in the centre of the chamber. S.H. Nedder-
meyer and Anderson found particles which were much less absorbed than electrons
but had masses smaller than the proton mass. To solve this problem a new exposure
of 6 000 photographs was performed with a 1 cm platinum plate in the chamber
centre. In terms of electron absorption, this was more than a factor of five thicker than the lead sheet in the previous experiment. The data contained 55 events where the energy loss
in platinum could be measured. Fourteen of them were identified as electrons and
positrons with a considerable loss. For a large fraction the absorption was signifi-
cantly smaller. Neddermeyer and Anderson announced the muon discovery in 1937
(Neddermeyer and Anderson, 1938) and concluded:
. . . that there exist particles of unit charge, but with a mass (which may not have a unique
value) larger than that of a normal free electron and much smaller than that of a proton; this
assumption would also account for the absence of numerous large radiative losses, as well
as for the observed ionisation.
The name of the new particle was supposed to express that its mass lies between those of the electron and the proton. In the first years the term ‘mesotron’ was used. After the discovery of the pion it was called ‘μ-meson’ and finally ‘muon’, to emphasise that it
is a lepton. Anderson later wrote about the history (Anderson, 1983):
The discovery of the meson, unlike that of the positron, was not sudden and unexpected. Its
discovery resulted from a two-year series of careful, systematic investigations all arranged
to follow certain clues and to resolve some prominent paradoxes which were present in the
cosmic rays.
Paul Kunze had published the first photograph of a probable muon four years earlier, in 1933 (Kunze, 1933b), without knowing that he had missed a sensational discovery.
He interpreted Fig. 2.13 as
. . . a thin electron track of 37 MeV and a considerably stronger ionising positive particle
of smaller curvature. The nature of this particle is unknown; for a proton the ionisation is
probably too small, and for a positive electron too large.
In 1938 Yukawa and Sakata published a more detailed version of the theory. The lifetime of the Yukawa particle was predicted to be about 10⁻⁸ seconds, roughly 100 times smaller than the measured lifetime of the muon (Yukawa et al., 1938). This contradiction made it even more difficult to accept the possible identity of the two particles.
Almost ten years later, the mystery was finally solved with the discovery of the
Yukawa-meson in a photographic emulsion plate.
This detection method was developed by Marietta Blau in the 1930s in Austria.
Photographic emulsions accumulate the ionisation information of through-going
tracks or interactions. The big advantage for the registration of rare processes is the
long-term exposure from hours to months. Supported by Hess, Marietta Blau ex-
posed an emulsion package in 1937 at the Hafelekar cosmic ray station in the Alps.
One of the developed emulsion plates showed a ‘star’ of heavy particles (Blau and
Wambacher, 1937). It was interpreted as the interaction of a cosmic particle with
a nucleus of the emulsion material, leading to its disintegration into several parts.
Because of her Jewish roots, Blau had to emigrate and went to Mexico. Her successful work was continued by others after the war, but, unfortunately, she had no possibility to participate.
In Great Britain Cecil Powell, Donald Perkins and others started in 1946 the
development of photographic emulsions in cooperation with the Ilford company.
Perkins, a graduate student at the Imperial College, performed an exposure of emul-
sion plates in an airplane at 10 km altitude (Perkins, 1947). He found about 20
‘stars’, one of them with an incoming particle track. From the measured ionisation
and estimates for the elastic scattering of protons and lighter particles in the emul-
sion, Perkins concluded that the incoming particle is a meson of 100–300 electron
masses.
Just a month later Occhialini and Powell published six events of the same signature (Occhialini and Powell, 1947), confirming Perkins's discovery. The group at the University of Bristol around Powell subsequently analysed 65 meson tracks, of which 25 showed an interaction in the emulsion (Lattes et al., 1947). The estimated
meson mass of 240 ± 50 electron masses agreed rather well with the pion mass. In
Fig. 2.14 a pion interaction in a photographic emulsion is shown.
Table 2.5 Results in elementary particle physics with cosmic rays and with experiments at the
first particle accelerator, the 184 inch synchro-cyclotron at LBL Berkeley
Columns: Year, Discovery with cosmic particles, Reference, Detector.
Just about 50 years had passed since the first investigations on the conductivity of air and the search for the sources of the radiation causing the ionisation of gases. With the discovery of cosmic rays, research activities had been started in many countries and over a wide range of scientific topics. Particle physics, one of the very strong and interesting branches since the beginning of the 1930s, began in 1948 to take the first steps into an autonomous life of its own. Discoveries made with cosmic particles, milestones for the development of elementary particle physics, are summarised in Table 2.5. Results are also shown from the world's first accelerator used for particle physics investigations, in operation since 1948. Pions were produced at the 184 inch Berkeley synchro-cyclotron by accelerated α-particles hitting a wire target. Most of the first small experiments used photographic emulsions as detectors. Both the success in detecting new short-lived heavy mesons and baryons in cosmic particle experiments and the convincing first results at the Berkeley synchro-cyclotron triggered
the construction of new accelerators and particle detectors. The table demonstrates to some degree the transition that particle physics underwent within a few years. In 1954 the 6.2 GeV Bevatron and the first hydrogen bubble chamber initiated a new era in elementary particle physics.
Finally, let some of the heroes in cosmic particle research present their view of
this transition time in their own words.
Carl D. Anderson (Anderson, 1983):
. . . the ever-encroaching larger and larger accelerators clearly indicated the end of the period
when cosmic rays could be useful in studies of particle physics. . . . However, undaunted by
the irresistible encroachment of the accelerators, Cowan built a complex arrangement of
eight flat ionisation chambers and 12 flat cloud chambers of a total height of 20 ft, designed
for investigations at energies above those obtainable in any accelerator, and he continued
his studies of cosmic-ray particle events until 1971.
Cecil F. Powell (Powell, 1950):
Even when the new machines have been brought successfully into operation, however, it
will still be necessary to turn to natural sources in order to study the nuclear transmutations
produced by particles of greatest energy. . . . As a result of these developments there is to-
day no line of division between nuclear physics and the study of cosmic radiation. The latter
can be regarded as nuclear physics of the extreme high energy region.
Bruno B. Rossi (Rossi, 1983):
Today, thinking back to the work that produced these results and to the work in which other
colleagues were engaged at that time, I am overtaken by a feeling of unreality. How is it
possible that results bearing on fundamental problems of elementary particle physics could
be achieved by experiments of an almost childish simplicity, costing a few thousand dollars,
requiring only the help of one or two graduate students?
In the few decades that have elapsed since those days, the field of elementary particles has
been taken over by the big accelerators. These machines have provided experimentalists
with research tools of a power and sophistication undreamed of just a few years before.
All of us oldtimers have witnessed this extraordinary technological development with the
greatest admiration; yet, if we look deep into our souls, we find a lingering nostalgia for
what, in want of a better expression, I may call the age of innocence of experimental particle
physics.
References
Anderson, C.D.: The positive electron. Phys. Rev. 43, 491–494 (1933)
Anderson, C.D.: Unraveling the particle content of cosmic rays. In: Brown, L.M., Hoddeson, L.
(eds.) The Birth of Particle Physics, pp. 1–412. Cambridge University Press, Cambridge (1983)
Armenteros, R., et al.: Decay of V-particles. Nature 167, 501–503 (1951)
Baade, W., Zwicky, F.: On super-novae. Proc. Natl. Acad. Sci. USA 20, 254–259 (1934a)
Baade, W., Zwicky, F.: Cosmic rays from super-novae. Proc. Natl. Acad. Sci. USA 20, 259–263
(1934b)
Barkas, W.H., et al.: Meson to proton mass ratios. Phys. Rev. 82, 102–103 (1951)
Bjorklund, R., et al.: High energy photons from proton–nucleon collisions. Phys. Rev. 77, 213–218
(1950)
Blackett, P.M.S., Occhialini, G.: Some photographs of the tracks of penetrating radiation. Proc. R.
Soc. Lond. Ser. A 139, 699–726 (1933)
Blau, M., Wambacher, H.: Disintegration processes by cosmic rays with the simultaneous emission
of several heavy particles. Nature 140, 585 (1937)
Bothe, W.: Zur Vereinfachung von Koinzidenzzählungen. Z. Phys. 59, 1–5 (1929)
Bothe, W., Kolhörster, W.: Das Wesen der Höhenstrahlung. Z. Phys. 56, 751–777 (1929)
Brown, R., et al.: Observations with electron-sensitive plates exposed to cosmic radiation. Nature
163, 47–51 and 82–86 (1949)
Brown, L.M., Hoddeson, L. (eds.): The Birth of Particle Physics. Cambridge University Press,
Cambridge (1983)
Clay, J.: Penetrating radiation. Proc. R. Soc. Amst. 30, 1115–1127 (1927)
Clay, J., Berlage, H.P.: Variation der Ultrastrahlung mit der geographischen Breite und dem Erd-
magnetismus. Naturwissenschaften 20, 687–688 (1932)
Compton, A.H.: Variation of the cosmic rays with latitude. Phys. Rev. 41, 111–113 (1932)
Compton, A.H.: A geographic study of cosmic rays. Phys. Rev. 43, 387–403 (1933)
Conversi, M., Pancini, E., Piccioni, O.: On the decay process of positive and negative mesons.
Phys. Rev. 68, 232 (1945)
Cooke, H.L.: A penetrating radiation from the earth’s surface. Philos. Mag. 34, 403–411 (1903)
De Broglie, M., De Broglie, L.: Einführung in die Physik der Röntgen- und Gamma-Strahlen,
pp. 1–205. Verlag Johann Ambrosius, Barth/Leipzig (1930)
Dirac, P.A.M.: A theory of electrons and protons. Proc. R. Soc. Lond. Ser. A 126, 360–365 (1930)
Elster, J., Geitel, H.: Weitere Versuche über die Elektrizitätszerstreuung in abgeschlossenen Luft-
mengen. Phys. Z. 2, 560–563 (1901)
Eve, A.S.: On the radioactive matter in the earth and the atmosphere. Philos. Mag. 12, 189–200
(1906)
Forbush, S.E., Lange, I.: Further note on the effect on cosmic-ray intensity of the magnetic storm
of March 1. Terr. Magn. Atmos. Electr. 47, 331–334 (1942)
Fricke, R.: J. Elster & H. Geitel – Jugendfreunde, Gymnasiallehrer, Wissenschaftler aus Passion,
pp. 1–159. Döring Druck, Druckerei und Verlag, Braunschweig (1992)
Fricke, R.: Günther & Tegetmeyer 1901–1958, pp. 1–299. AF-Verlag, Wolfenbüttel (2011)
Geiger, H., Müller, W.: Elektronenzählrohr zur Messung schwächster Aktivitäten. Naturwis-
senschaften 16, 617–618 (1928)
Geitel, H.: Über die Elektrizitätszerstreuung in abgeschlossenen Luftmengen. Phys. Z. 2, 116–119
(1900)
Gockel, A., Wulf, Th.: Beobachtungen über die Radioaktivität der Atmosphäre im Hochgebirge.
Phys. Z. 9, 907–911 (1908)
Gockel, A.: Luftelektrische Beobachtungen bei einer Ballonfahrt. Phys. Z. 11, 280–282 (1910)
Gockel, A.: Beiträge zur Kenntnis der in der Atmosphäre vorhandenen durchdringenden Strahlung.
Phys. Z. 16, 345–352 (1915)
Hess, V.F.: Über die Absorption der γ -Strahlen in der Atmosphäre. Phys. Z. 12, 998–1001 (1911)
Hess, V.F.: Über Beobachtungen der durchdringenden Strahlung bei sieben Freiballonfahrten.
Phys. Z. 13, 1084–1091 (1912)
Hess, V.F.: Über Neuerungen und Erfahrungen an den Radiummessungen nach der Gamma-
Strahlenmethode. Phys. Z. 14, 1135–1141 (1913)
Hess, V.F., Kofler, M.: Ganzjährige Beobachtungen der durchdringenden Strahlung auf dem Obir
(2044 m). Phys. Z. 18, 585–595 (1917)
Hess, V.F.: Über den Ursprung der Höhenstrahlung. Phys. Z. 18, 159–163 (1926)
Johnson, Th.H.: Cosmic ray intensity and geomagnetic effects. Rev. Mod. Phys. 10, 193–244
(1938)
Kolhörster, W.: Messungen der durchdringenden Strahlung im Freiballon in grösseren Höhen.
Phys. Z. 14, 1153–1155 (1913)
Kolhörster, W.: Messungen der durchdringenden Strahlungen bis in Höhen von 9300 m. Verh.
Dtsch. Phys. Ges. 16, 719–721 (1914)
Kolhörster, W.: Messungen der durchdringenden Strahlung während der Sonnenfinsternis vom 21
August 1914. Naturwissenschaften 7, 412–415 (1919)
Kolhörster, W., Salis, G.v.: Intensitäts- und Richtungsmessung der durchdringenden Strahlung.
Sitz.ber. Preuss. Akad. Wiss. 34, 366–379 (1923)
Kolhörster, W.: Bemerkungen zu der Arbeit von R.A. Millikan: “Kurzwellige Strahlen kosmischen
Ursprungs”. Ann. Phys. 14, 621–628 (1926)
Kolhörster, W.: Die durchdringende Strahlung in der Atmosphäre. In: Wegener, A. (ed.) Physik der
Erde, pp. 565–580. Vieweg, Braunschweig (1928)
Kunze, P.: Magnetische Ablenkung der Ultrastrahlen in der Wilsonkammer. Z. Phys. 80, 559–572
(1933a)
Kunze, P.: Untersuchung der Ultrastrahlung in der Wilsonkammer. Z. Phys. 83, 1–18 (1933b)
Lattes, C.M.G., et al.: Processes involving charged mesons. Nature 159, 694–697 (1947)
Linke, F.: Luftelektrische Messungen bei zwölf Ballonfahrten. Abh. Ges. Wiss. Göttingen 3, 1–90
(1904)
Menon, M.G.K., O’Ceallaigh, C.: Observations on the decay of heavy mesons in photographic
emulsions. Proc. R. Soc. A 221, 295–318 (1954)
Miehlnickel, E.: Höhenstrahlung, pp. 1–313. Verlag von Theodor Steinkopf, Dresden und Leipzig
(1938)
Millikan, R.A., Bowen, I.S.: High frequency rays of cosmic origin I. Sounding balloon observa-
tions at extreme altitudes. Phys. Rev. 27, 353–363 (1926)
Millikan, R.A., Cameron, G.H.: High frequency rays of cosmic origin III. Measurements in Snow-
Fed lakes at high altitudes. Phys. Rev. 28, 851–869 (1926)
Myssowsky, L., Tuwim, L.: Unregelmässige Intensitätsschwankungen der Höhenstrahlung in
geringer Seehöhe. Phys. Z. 39, 146–150 (1926)
Neddermeyer, S.H., Anderson, C.D.: Cosmic ray particles of intermediate mass. Phys. Rev. 54,
88–89 (1938)
Nernst, W.: Das Weltgebäude im Lichte der Forschung, pp. 1–63. Springer, Berlin (1921)
NY-Times Editorial: Millikan rays. New York Times, November 12 (1925)
Occhialini, G.P.S., Powell, C.F.: Nuclear disintegrations produced by slow charged particles of
small mass. Nature 159, 186–190 (1947)
Pacini, D.: Penetrating radiation on the sea. Le Radium 8, 307–312 (1910). See also De Maria, M.,
De Angelis, A.: arXiv:1101.3015v3 (2011)
Pacini, D.: Penetrating radiation at the surface of and in water. Nuovo Cimento 6, 93–100 (1912).
See also De Angelis, A.: arXiv:1002.1810v2 (2011)
Panofsky, W.K.H., et al.: The gamma-ray spectrum from the absorption of π -mesons in hydrogen.
Phys. Rev. 78, 825–826 (1950)
Perkins, D.H.: Nuclear disintegration by meson capture. Nature 159, 126–127 (1947)
Piccard, A., Stahel, E., Kipfer, P.: Messung der Ultrastrahlung in 16000 m Höhe. Naturwis-
senschaften 20, 592–593 (1932)
Powell, C.F.: Mesons. Rep. Prog. Phys. 13, 350–424 (1950)
Rasetti, F.: Disintegration of slow mesotrons. Phys. Rev. 60, 198–204 (1941)
Regener, E.: Über das Spektrum der Ultrastrahlung. Z. Phys. 74, 433–454 (1932a)
Regener, E.: Messung der Ultrastrahlung in der Stratosphäre. Naturwissenschaften 20, 695–699
(1932b)
Richardson, O.W.: Diurnal variation of ionisation in closed vessels. Nature 73, 607 (1906)
Richardson, J.R.: The lifetime of the heavy meson. Phys. Rev. 74, 1720–1721 (1948)
Richman, C., Wilcox, H.: Production cross sections for π + and π − mesons by 345 MeV protons
on carbon at 90° to the beam. Phys. Rev. 78, 496 (1950)
Rochester, G.D., Butler, C.C.: Evidence for the existence of new unstable elementary particles.
Nature 160, 855–857 (1947)
Rossi, B.B.: The decay of “Mesotrons” (1939–1943): experimental particle physics in the age
of innocence. In: Brown, L.M., Hoddeson, L. (eds.) The Birth of Particle Physics, pp. 1–412.
Cambridge University Press, Cambridge (1983)
von Schweidler, E.: Über die möglichen Quellen der Hessschen Strahlung. In: Bergwitz, K. (ed.)
Elster- und Geitel Festschrift, pp. 1–719. Vieweg, Braunschweig (1915)
Science: Millikan rays. Science 62, 461–462 (November 20, 1925)
Sekido, Y., Elliot, H. (eds.): Early History of Cosmic Ray Studies, pp. 1–408. Reidel, Dordrecht
(1985)
Simpson, G.C., Wright, C.S.: Atmospheric electricity over the ocean. Proc. R. Soc. A 85, 175–199
(1911)
Skobeltzyn, D.: Die Intensitätsverteilung in dem Spektrum der γ -Strahlen von RaC. Z. Phys. 43,
354–378 (1927)
Skobeltzyn, D.: Über eine neue Art sehr schneller β-Strahlen. Z. Phys. 54, 686–703 (1929)
Williams, E.J., Roberts, G.E.: Evidence for transformation of mesotrons into electrons. Nature 145,
102–103 (1940)
Wilson, C.T.R.: On the leakage of electricity through dust-free air. Proc. Camb. Philos. Soc. 11, 52
(1900)
Wilson, C.T.R.: On the ionisation of atmospheric air. Proc. R. Soc. Lond. 68, 151–161 (1901)
Wilson, C.T.R.: On a method of making visible the paths of ionising particles through a gas. Proc.
R. Soc. Lond. A 85, 285–288 (1911)
Wilson, C.T.R.: On an expansion apparatus for making visible the tracks of ionising particles in
gases and some results obtained by its use. Proc. R. Soc. Lond. A 87, 277–297 (1912)
Wulf, Th.: Über die in der Atmosphäre vorhandene Strahlung von hoher Durchdringungsfähigkeit.
Phys. Z. 10, 152–157 (1909a)
Wulf, Th.: Über den Ursprung der in der Atmosphäre vorhandenen γ -Strahlung. Phys. Z. 10, 997–
1003 (1909b)
Wulf, Th.: Beobachtungen über Strahlung hoher Durchdringungsfähigkeit auf dem Eiffelturm.
Phys. Z. 11, 811–813 (1910)
York, C.M., Leighton, R.B., Bjonerud, E.K.: Direct experimental evidence for the existence of a
heavy positive V particle. Phys. Rev. 90, 167–168 (1953)
Yukawa, H.: On the interaction of elementary particles. Proc. Phys. Math. Soc. Jpn. 17, 48–57
(1935)
Yukawa, H., Sakata, S., Taketani, M.: On the interaction of elementary particles IV. Proc. Phys.
Math. Soc. Jpn. 20, 720–745 (1938)
Chapter 3
Development of Cosmology: From a Static
Universe to Accelerated Expansion
Matthias Bartelmann
M. Bartelmann ()
Zentrum für Astronomie, Institut für Theoretische Astrophysik, Universität Heidelberg,
Albert-Ueberle-Str. 2, 69120 Heidelberg, Germany
e-mail: [email protected]
published in 1888 by the Danish astronomer Johan L.E. Dreyer, already contained
7840 objects which were evidently not individual stars, but of which many did not
reveal their nature to the largest telescopes then available.
In 1920, the discussion about the nature of the nebulae culminated in the so-
called Great Debate between Harlow Shapley and Heber D. Curtis, both renowned
US-American astronomers. While Shapley was convinced that the nebulae were
part of our own galaxy, Curtis took the view that they were extra-galactic. Both
debaters had good arguments, and the debate was difficult to settle at the time. Only
a few years later, in May 1925, Edwin Hubble announced his measurement of the
distance to the nebula in the constellation of Andromeda, which settled the debate:
This nebula turned out to be so far away that it had to be a galaxy of its own, much
like the Milky Way itself (Hubble, 1925).
It is perhaps astounding in hindsight that the discovery that the Universe extends beyond our Galaxy is not even 100 years old. In any case, when Einstein published the field equations of General Relativity in 1915, it was not known whether the Universe consisted of anything other than the Milky Way. It is staggering how
profoundly and quickly the picture changed thereafter.
In 1917, General Relativity was first applied to the Universe as a whole in two
different papers. The first was by Einstein (1917). He discusses the difficulty New-
ton’s theory has with static, extended mass distributions of constant density because
of the boundary conditions to be set at infinity. To avoid having to set boundary
conditions at all, he introduces a world model which is static in time and closed in
space. Alexander Friedman later called this model Einstein’s cylindric world. Since
such a model does not satisfy Einstein’s field equations of 1915, he extends them
by introducing the cosmological constant. He closes the paper writing: “To arrive at
this consistent interpretation, though, we had to introduce a new extension into the
field equations of gravity, which is not justified by our actual knowledge of grav-
ity. It is to be emphasised, however, that a positive curvature of space also results
from the matter it contains if that additional term is not introduced; we require the
latter only to enable a quasi-static matter distribution, as it corresponds to the fact
of small stellar velocities” (Einstein, 1917, p. 152, my translation). Here it is: Spa-
tially closed world models are possible with matter alone, but the conviction that the
world is static forces Einstein to introduce the cosmological constant.
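In modern notation (conventions for signs and units vary), the extension mentioned here amounts to adding a term proportional to the metric to the 1915 field equations:

    R_{μν} − (1/2) R g_{μν} + Λ g_{μν} = (8πG/c⁴) T_{μν},

where Λ is the cosmological constant; setting Λ = 0 recovers the original equations.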
In the second paper, Willem de Sitter considered a Universe devoid of matter (de
Sitter, 1917). This was an intriguing world model that he constructed to discuss the
relation between gravity and inertia and Ernst Mach’s hypothesis that the inertia of
one body is caused by the presence of all others: He wanted to study the inertial
motion of a single test particle in absence of any others. This model is, however, not
globally static.
From today’s point of view, it is hard to understand why Einstein overlooked
that his model is unstable. Any small perturbation drives it to collapse or expand.
This was a problem that had already disturbed Sir Isaac Newton. In 1693, he wrote
to Bishop Bentley that if a static, infinitely extended universe was possible at all, it
would be as unstable as a system of infinitely many needles standing on their points.
It seems that Einstein was so firmly convinced that the Universe was static that he
was satisfied to show that such a universe was in fact compatible with his theory, if
only at the cost of introducing the cosmological constant.
A first and decisive step towards its resolution was taken in 1944 when Walter
Baade realised that there are two distinct stellar populations in our Galaxy: a metal-
rich Population I and a metal-poor Population II (Baade, 1944). In Baade’s own
words: “Although the evidence presented in the preceding discussion is still very
fragmentary, there can be no doubt that, in dealing with galaxies, we have to distin-
guish two types of stellar population, one which is represented by the ordinary H–R
[Hertzsprung–Russell] diagram (type I), the other by the H–R diagram of the glob-
ular clusters (type II) (. . .) Characteristic of the first type are highly luminous O-
and B-type stars and open clusters; of the second, globular clusters and short-period
Cepheids” (Baade, 1944, p. 145).
For cosmology, Baade’s discovery was of paramount importance because Hubble
had used a certain class of variable stars, the so-called Cepheids, to measure the dis-
tance first to the Andromeda galaxy and then to other distant galaxies. Cepheid stars
pulsate with a period increasing with their absolute luminosity. From the measurable
period of pulsation, their luminosity can be inferred, and by comparison with the ob-
served flux the distance can be measured. Of course, this requires that this period–
luminosity relation has been accurately calibrated. Baade found that Cepheids of
population II have shorter periods than those of population I at the same luminosity.
Hubble had mistaken the brighter Cepheids of population I for those of population II and thus underestimated their intrinsic luminosity. At the same flux, they could
thus be much farther away. While Hubble had estimated the Andromeda galaxy to be
285 kpc away, its distance is now given as 765 kpc, higher by a factor of 2.7. Baade
immediately remarked: “(. . .) it is now quite certain that Hubble’s value of the dis-
tance modulus [a photometric expression for the distance, here to the Andromeda
galaxy] is somewhat too small” (Baade, 1944, p. 141). With this correction of the
distances, the Hubble constant shrank by the same factor, and the age of the Universe
grew accordingly to approximately 4.3 billion years. Even though this was still not
comfortably larger than the age of the Earth, it was reassuring that the Earth could
now be younger than the Universe. After various further corrections, the value of the
Hubble constant has today been measured to be 70.4 (+1.3/−1.4) km s⁻¹ Mpc⁻¹, more than 8.5 times lower than the result published by Hubble and Humason. This illustrates
impressively how difficult it is to measure cosmological distances reliably.
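The distance determination sketched here rests on the inverse-square law. In modern notation (ours, not Baade's), a source of luminosity L observed with flux F lies at the distance given by

    F = L / (4π d²),  i.e.  d = [ L / (4π F) ]^{1/2},

and the distance modulus quoted by Baade is m − M = 5 log₁₀(d / 10 pc). An underestimated luminosity therefore translates directly into an underestimated distance.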
A cosmological model needs not only explain how the Universe is geometrically
shaped and how it develops, but also how it could be filled with the structures it
contains. We are surrounded by cosmological structures on all scales, ranging from
planets to stars, star clusters, galaxies, galaxy clusters to the long filaments of matter
surrounding huge voids. Perhaps the most obvious, if anthropocentric, question is
how planets like the Earth and stars like the Sun could have been formed in the
Universe we find ourselves in.
conditions, this radiation should have a Planck spectrum, which is fully charac-
terised by its temperature. Applying and refining a very elegant argument originally
due to Gamow, it was even possible for Alpher and Herman to predict this temper-
ature. Going through different sets of parameters compatible with the constraints
from helium production, they arrived at a remarkable conclusion. Alpher and Her-
man write: “(. . .) the temperature during the element-forming process must have
been of the order of 10⁸–10¹⁰ K. This temperature is limited, on the one hand, by
photo-disintegration and thermal dissociation of nuclei and, on the other hand, by
the lack of evidence in the relative abundance data for resonance capture of neutrons.
For purposes of simplicity we have chosen (. . .) [a radiation density of 1 g cm⁻³] which corresponds to T ≈ 0.6 × 10⁹ K at the time when the neutron capture pro-
cess became important (. . .) which corresponds to a temperature now of the order of
5 K. This mean temperature for the universe is to be interpreted as the background
temperature which would result from the universal expansion alone” (Alpher and
Herman, 1949, p. 1093). This was the prediction, in 1949, of the cosmic microwave
background (CMB): Not only did the amount of helium in the Universe suggest a
very hot and early phase of cosmic evolution, but the temperature of the remaining
thermal radiation could even be estimated to lie around a few degrees Kelvin. Un-
fortunately, this remarkable insight seems to have gone utterly unnoticed when it
was first published.
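The step from roughly 0.6 × 10⁹ K at the epoch of neutron capture to a few kelvin today is, in modern terms, just the redshifting of a blackbody spectrum. As a sketch (the notation is ours, not Alpher and Herman's), the temperature of the background radiation scales inversely with the cosmic scale factor a,

    T ∝ 1/a,  i.e.  T(z) = T_0 (1 + z),

so an expansion by a factor of order 10⁸ since the element-forming epoch brings 0.6 × 10⁹ K down to the few-kelvin range.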
The problem posed by the existence of carbon and other “metals” in the astro-
nomical sense was not solved, however. The main obstacle was that there are no
stable elements with an atomic weight number of five. The first steps of cosmic
nuclear fusion were the formation of deuterium from protons and neutrons, then
the formation of helium-4 through tritium and helium-3, and finally of lithium-7
either directly through fusion of helium-4 with tritium, or indirectly by the decay
of beryllium-7 formed by fusion of helium-3 with helium-4. In the rapidly diluting
plasma in the early Universe, further fusion would have had to combine protons with
helium-4, forming nuclei of atomic weight number five, but there is no such stable
nucleus.
This problem was finally solved by Fred Hoyle in 1954. Quoting from the mon-
umental review of 1957 on the Synthesis of the elements in stars (Burbidge et al.,
1957, p. 565) by E. Margaret and Geoffrey Burbidge, William Fowler and Fred
Hoyle: “Even though very small, the equilibrium concentration of Be8 is sufficient
to lead to considerable production of C12 through radiative alpha-particle capture by
the Be8 , and of O16 , Ne20 , etc., by succeeding alpha-particle captures. (. . .) Detailed
consideration of the reaction rates and of the resulting relative abundances of He4 ,
C12 , and O16 led Hoyle (. . .) to the prediction that the foregoing second reaction,
in which C12 is produced, must exhibit resonance within the range of energies at
which the interaction between Be8 and He4 effectively occurs. Hoyle’s predicted
value for the resonance energy was 0.33 MeV, corresponding to an excited state in
C12 at 7.70 MeV. (. . .) The experiments reported (. . .) show (. . .) that the excitation
energy of C12∗ is (. . .) 7.653 ± 0.008 MeV (. . .).”
In other words, Hoyle had recognised that the formation of carbon-12 in stars
could be understood only if beryllium-8 and helium-4 could combine in such a way
that a suitable resonance in the carbon-12 nucleus could accept the excess energy in
the reaction. The discovery of this resonance at almost exactly the predicted energy
marks one of the most outstanding masterpieces in astrophysics. The problem of the
formation of carbon-12, necessary for our own existence, was thereby solved.
Let us briefly recapitulate what has happened so far. We began with Einstein’s theory
of General Relativity in its form reported to the Prussian Academy of Sciences in
November 1915. Within little more than a decade, a static universe the size of the
Milky Way turned into an expanding Universe in which the Milky Way was one of
very many other galaxies, separated by huge distances and driven apart by cosmic
expansion. Lemaître’s paper from 1927 already contained the essence of modern
cosmology. About 20 years later, by the end of the 1940s, the grave problem of an
old Earth in a young universe had been solved by correcting the distance scale, and
the considerable amount of helium had been recognised as a strong piece of evidence
for a hot, early phase in the evolution of the Universe. After a further decade, by the
mid-1950s, the origin of carbon and heavier elements had essentially been solved.
Yet, there was substantial opposition against this emerging picture of the evolving
Universe for which Fred Hoyle coined the intentionally derogatory term of a “Big
Bang” conception. The age problem was still considered as potentially severe, and
it was unclear how the cosmic structures could have formed against the expansion
of space. At a more fundamental level, however, it seems that there were fierce
objections against the idea of a Big Bang because it reminded one of an act of
creation for which no scientific reason could be given. Yet, the recession of the
galaxies was an undeniable observational fact. Could an alternative model for the
Universe be conceived that avoided an act of creation and could nonetheless account
for the recession of the galaxies?
In 1948, two articles appeared in the same volume 108 of the Monthly Notices
of the Royal Astronomical Society. The first, entitled The steady-state theory of the
expanding universe, had been written by Herman Bondi and Thomas Gold (1948),
the second, A new model for the expanding universe, by Fred Hoyle (1948). Bondi
and Gold begin with a fundamental discussion of the conditions under which the
laws of physics known on Earth can with some faith be applied to the Universe as
a whole. They write: “As the physical laws cannot be assumed to be independent of
the structure of the universe, and as conversely the structure of the universe depends
upon the physical laws, it follows that there may be a stable position. We shall pursue
the possibility that the universe is in such a stable, self-perpetuating state (. . .). We
regard the reasons for pursuing this possibility as very compelling, for it is only in
such a universe that there is any basis for the assumption that the laws of physics are
constant; and without such an assumption our knowledge, derived virtually at one
instant of time, must be quite inadequate for an interpretation of the universe (. . .)”
(Bondi and Gold, 1948, p. 254). They pose an exciting epistemic problem: Can we
with any reason believe that the laws of physics known to us could be extrapolated
to the Universe? Their answer is that if this should at all be possible, then only in
a universe that is as time-independent as we assume the physical laws to be. They
postulate the “perfect cosmological principle”, which is translation invariance not
only in space, but also in time.
Intriguing as Bondi’s and Gold’s epistemic reasoning may be, how could it cope
with the empirical fact that distant galaxies are receding the faster the farther they
are? Bondi and Gold write: “If we considered that the principle of hydrodynamic
continuity were valid over large regions and with perfect accuracy then it would
follow that the mean density of matter was decreasing, and this would contra-
dict the perfect cosmological principle. It is clear that an expanding universe can
only be stationary if matter is continuously created within it. The required rate of
creation, which follows simply from the mean density and the rate of expansion,
can be estimated as at most one particle of proton mass per litre per 10⁹ years”
(Bondi and Gold, 1948, p. 256). Even though matter had to be continuously cre-
ated in a steady-state universe, the required rate of production was reassuringly
low.
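For orientation, the order of magnitude of this creation rate can be checked in a few lines of Python. The sketch below is an illustration added here, not a calculation from Bondi and Gold's paper; it assumes a 1948-era Hubble constant of roughly 500 km/s/Mpc and a mean density near the corresponding critical density, neither of which is taken from their text:

# Order-of-magnitude check of the matter creation rate in a steady-state universe:
# keeping the density rho constant while the volume grows as exp(3 H t)
# requires creation at a rate of 3 * H * rho.
import math

G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.673e-27                # proton mass, kg
Mpc = 3.086e22                 # metres per megaparsec

H = 500e3 / Mpc                # assumed 1948-era Hubble constant (~500 km/s/Mpc) in 1/s
rho = 3 * H**2 / (8 * math.pi * G)     # assumed mean density: the critical density for this H

protons_per_litre = rho * 1e-3 / m_p   # roughly 0.3 proton masses per litre
rate_per_Gyr = 3 * H * protons_per_litre * 1e9 * 3.156e7   # creations per litre per 10^9 years

print(f"required creation rate ≈ {rate_per_Gyr:.1f} proton masses per litre per 10^9 years")

The result, a few tenths of a proton mass per litre per 10⁹ years, is indeed of the order of, and below, the “at most one” quoted above.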
A severe initial problem of the Steady-State theory was that it had to violate lo-
cal mass conservation, which is ensured by (the vanishing divergence of) Einstein’s
field equations. Bondi and Gold wish to retain a metric theory of gravity, though,
and argue that the metric of the Universe then has to be of the exponentially ex-
panding de Sitter type. Regarding the age problem, they notice: “The ages of the
nebulae follow therefore a merely statistical law and there is no reason to suppose
that a particular nebula (such as our Milky Way) is of some age rather than an-
other” (Bondi and Gold, 1948, p. 264). Old and young galaxies should occur next to
each other everywhere in time and space. From here, a suggestion already emerged
for an observational test of the Steady-State model: In the Big-Bang model, young
galaxies should all be distant, while they could also be nearby in the Steady-State
model.
Bondi’s and Gold’s article remains tentative regarding the physics of creation.
They acknowledge that the new matter must be created in such a way as to obey
the observed recession velocity, that is, Hubble’s law. Hoyle builds upon this in-
sight and writes: “We now diverge from the usual procedure [of deriving Friedman’s
equations] by introducing at each point P of space-time a vector Cμ of fixed length
directed along the geodesics from [a fixed space-time point] O to P. The sense of
this vector is always taken as being away from O.” Then he continues: “By dif-
ferentiation, a symmetrical tensor field Cμν is obtained. (. . .) The essential step in
the present work is the introduction of the tensor Cμν into the Einstein field equa-
tions. (. . .) The Cμν term in (. . .) [the field equations] plays a rôle similar to that
of the cosmological constant in the de Sitter model, with the important difference,
however, that there is no contribution from the C00 component. As we shall see, this
difference enables a universe, formally similar to the de Sitter model, to be obtained,
but in which [the matter density] ρ is non-zero” (Hoyle, 1948, p. 376). Now the the-
ory could be considered complete: Hoyle had specified a modification of Einstein’s
field equations by introducing the “creation field” C. The Steady-State model of the
Universe quickly gained sympathy because it was undeniably elegant and seemed
free of the difficulties that plagued the Big-Bang model.
The Steady-State model did not receive a severe blow until 1961, when Mar-
tin Ryle and Randolph Clarke published the results of their new survey for radio
galaxies, undertaken with the Mullard Radio Astronomy Observatory at 178 MHz
(Ryle and Clarke, 1961). (Note that the term “cycles per second”, abbreviated c/s,
was at that time used instead of “Hertz”.) They derived the number density of faint
radio galaxies as a function of radio flux. If the radio-galaxy population did not
change in time, as required within the Steady-State model, the intrinsic distribution
of radio galaxies with radio luminosity would have to be independent of time, and
thus of distance. Their expected radio-flux distribution could therefore be predicted
within the theory without any further assumptions on the radio-galaxy population
itself. Ryle and Clarke found that faint radio galaxies were substantially more abun-
dant than expected in the Steady-State model: “A comparison of the predicted with
the observed curves [of the number density as a function of flux] shows a marked
discrepancy, even when the smallest permissible source luminosity is adopted; the
observed number of sources in the range 0.5 < S < 2 × 10⁻²⁶ watts (c/s)⁻¹ m⁻²
is 3 ± 0.5 times that predicted by the steady-state model. If a luminosity function
similar to that of the identified sources is assumed, the discrepancy is 11 ± 2” (Ryle
and Clarke, 1961, p. 361). Ryle and Clarke had discovered that radio galaxies were
significantly farther away than they should have been in the Steady-State model; in
other words, they had discovered that the radio-galaxy population must have un-
dergone pronounced evolution over cosmic time scales. Evidently, cosmic evolution
was not compatible with the idea of a steady state.
When another severe blow followed in 1965, the Steady-State model, elegant
and compelling as it was, quickly disappeared in favour of the Big-Bang scenario.
The pioneering discovery of 1965, however, needs to be described in a section of its
own.
Fig. 3.3 Horn antenna of the AT&T-Bell Laboratories at Crawford Hill, New Jersey, with which
Penzias and Wilson detected the cosmic microwave background
This summarises the winding route towards one of the most fundamental dis-
coveries of modern cosmology: Well ahead of their time, Alpher and Herman had
published a firm prediction of the microwave background and its temperature, based
on the abundance of helium in the Universe. Dicke and collaborators searched for
this radiation, apparently without knowing Alpher’s and Herman’s earlier tempera-
ture estimate. A suitable radiometer had already been constructed when Penzias and
Wilson accidentally discovered the radiation as part of their noise budget without
having searched for it. “Boys, you have been scooped” is the statement attributed to
Robert Dicke when he put down the receiver, having been informed of Penzias’ and
Wilson’s discovery by phone. 38 years after Lemaître had published his idea that we
may be living in an expanding universe, the remains of its hot beginning had been
found.
structures be visible in the cosmic microwave background? And how could the pri-
mordial density fluctuations have been created?
Two years after the discovery of the cosmic microwave background, in 1967,
Ray Sachs and Arthur Wolfe calculated the expected temperature fluctuations in
the CMB on large angular scales (Sachs and Wolfe, 1967). They concluded: “We
have estimated that anisotropies of order 1 per cent should occur in the microwave
radiation if the radiation is cosmological. This figure is a reasonable lower limit
provided even rather modest 10 per cent density fluctuations with a scale of 1/3 the
Hubble radius occur at present. (. . .) Conversely, if isotropy to within 1 per cent or
better could be established, this would be a quite powerful null result” (Sachs and
Wolfe, 1967, p. 85). This estimate was necessarily rough because nobody knew the
density-fluctuation level of large-scale structures. Nonetheless, the calculation by
Sachs and Wolfe remained valid. On large scales, fluctuations in the gravitational
potential cause temperature fluctuations in the CMB by gravitational redshift and
time delay.
In 1968, Joseph Silk (1968) considered the effect of a finite mean-free path for
the CMB photons prior to recombination and concluded that “primordial fluctua-
tions may account for masses of the order of a typical galaxy; smaller fluctuations
would not have survived to an epoch when condensation may occur. Primordial
fluctuations of cosmogonic significance are found to imply anisotropy of the 3 K
background radiation on an angular scale of between 10′ and 30′, depending on the
cosmological model assumed” (Silk, 1968, p. 459). The damping by free stream-
ing, aptly called Silk damping thereafter, suppresses small-scale fluctuations in the
baryonic matter distribution and the corresponding CMB temperature fluctuations
on angular scales of a few arc minutes.
Sachs and Wolfe had approached CMB temperature fluctuations from large an-
gular scales, Silk from small angular scales. The treatment was completed by two
pioneering studies in 1970, one by Rashid Sunyaev and Yakov Zeldovich (1970),
the other by Peebles and Yu (1970). They went through the fairly complicated cal-
culation of how temperature fluctuations could have been imprinted by density fluc-
tuations during the formation of the CMB, which requires the solution of the col-
lisional Boltzmann equation. As Peebles and Yu put it: “To obtain a more accurate
description of the evolution through this complicated phase of recombination, we
have resorted to direct numerical integration of the collision equation for the pho-
ton distribution function” (Peebles and Yu, 1970, p. 816). From their calculations,
Sunyaev and Zeldovich concluded: “We note especially that perturbations corre-
sponding to small masses in comparison with 10¹⁵ M⊙ give quite a small contribu-
tion to δT/T; for example, for a single object with mass M = 10¹¹ M⊙, in the case
Ω = 1 and (δρ/ρ) = 1 for z₀ = 2 (. . .) we obtain (. . .) δT/T = 10⁻⁸” (Sunyaev and
Zeldovich, 1970, p. 15). Peebles and Yu arrived at compatible results and wrote, re-
ferring to larger angular scales: “Our result (. . .) yields characteristic angular scale
(width at half-maximum) ∼ 7, and δT/T ∼ 1.7 × 10⁻³ at this angular resolution”
and added the cautionary note: “It is well to bear in mind that in this calculation the
initial density fluctuations are invoked in an ad hoc manner because we do not have
a believable theory of how they may have originated. (. . .) Our calculation thus is at
best exploratory (. . .)” (Peebles and Yu, 1970, p. 834).
Even though the numbers had to be revised later for several reasons, the physical
mechanisms leading to fluctuations in the CMB had now been put together. What
we now call the Sachs–Wolfe effect is the imprint of gravitational-potential fluctu-
ations on the largest angular scales. Silk damping removes fluctuations by photon
diffusion on angular scales of a few arc minutes and smaller. In between, the in-
terplay between gravity and radiation pressure gives rise to oscillations resembling
sound waves in the cosmic plasma immediately prior to the release of the CMB.
Despite these detailed and pioneering calculations, the Big-Bang model posed a
severe conceptual difficulty which appeared in different guises. It can perhaps best
be highlighted in the following way. It is quite straightforward to estimate that the
Universe must have been approximately 400 000 years old when it had cooled down
sufficiently for atoms to form. Once electrons and nuclei combined to form mainly
hydrogen and helium-4, free charges disappeared, the mean-free path of the photons
increased abruptly, and the photons of the CMB were set free. This process is called
recombination even though there had been no combination before.
This implies, however, that there is a firm upper limit for the size of causally
connected regions at the end of recombination. During the first ∼ 400 000 years
after the Big Bang, light could evidently travel by no more than ∼ 400 000 light
years. But this length scale corresponds to a small patch on the sky, not very much
larger than the Sun or the full Moon. How was it possible then that the CMB had
a single temperature all over the sky? How could regions in the primordial plasma
have adapted to the same temperature even though they must have been well outside
any causal contact?
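The “small patch” can be quantified with a rough calculation. The short Python sketch below is an illustration added here, not part of the original text; the redshift of recombination z ≈ 1 100 and a comoving distance to the last-scattering surface of roughly 14 Gpc are assumed round numbers, not values quoted in this chapter:

# Rough angular size of a causally connected region at recombination.
import math

z_rec = 1100                 # assumed redshift of recombination
D_C = 14_000                 # assumed comoving distance to last scattering, in Mpc
horizon_ly = 400_000         # causal horizon at recombination in light years (see text)

horizon_Mpc = horizon_ly / 3.262e6    # physical size; 1 Mpc is about 3.262 million light years
d_A = D_C / (1 + z_rec)               # angular diameter distance, in Mpc
theta_deg = math.degrees(horizon_Mpc / d_A)

print(f"angular size ≈ {theta_deg:.2f} degrees")

The result, roughly half a degree, is indeed about the angular size of the Sun or the full Moon, as stated above.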
In 1981, Alan Guth (1981) pointed out this problem and wrote: “I have tried to
convince the reader that the standard model of the very early universe requires the
assumption of initial conditions which are very improbable for two reasons: (i) The
horizon problem. Causally disconnected regions are assumed to be nearly identical;
in particular, they are simultaneously at the same temperature.” Guth added a second
difficulty, called the flatness problem, and proposed a solution which sounded utterly
speculative: “Both of these problems would disappear if the universe supercooled
by 28 or more orders of magnitude below the critical temperature for some phase
transition. (Under such circumstances, the universe would be growing exponentially
in time.)” (Guth, 1981, p. 353f)
It was not at all clear what could be driving the exponential expansion of the
Universe during such a phase of cosmological inflation, and how inflation could
have ended. Initially, there was only the insight that Big-Bang cosmology had a
severe causality problem that a period of inflation might remedy. However, it was
recognised almost immediately by Viatcheslav Mukhanov and Gennady Chibisov
in 1981 that an epoch of inflationary expansion might at the same time explain how
structures could have been created (Mukhanov and Chibisov, 1981). They asked:
“Might not perturbations of the metric, which would be sufficient for the forma-
tion of galaxies and galactic clusters, arise in this stage?” (Mukhanov and Chibisov,
1981, p. 534). They carried out the quantum-theoretical calculations needed and
concluded: “The fluctuation spectrum is thus nearly flat. (. . .) these perturbations
can lead to the observed large-scale structure of the universe. The form of the spec-
trum (. . .) is completely consistent with modern theories for the formation of galax-
ies. (. . .) Thus we have one possible approach for solving the problem of the ap-
pearance of the original perturbation spectrum” (Mukhanov and Chibisov, 1981,
p. 535).
while we can predict the distribution of mass, what we see is the distribution of
galaxies” (Davis et al., 1985, p. 393f).
By the mid-1980s, therefore, it had become clear that cold dark matter provided
essentially the only way to reconcile the low, so far unseen, level of temperature
fluctuations in the CMB with the existence of pronounced cosmic structures. The
existence of dwarf galaxies argued against warm dark matter. The hypothesis of
cosmological inflation was invoked to solve the horizon problem and at the same
time provided a seed mechanism for cosmic structures. The spatial distribution of
galaxies and galaxy clusters as well as their growth over cosmological time scales
argued against a high cosmic matter density, while the lack of fluctuations in the
CMB required that the matter density should be moderate, but not very low. In hind-
sight, it appears that all essential ingredients of what is now called the cosmological
standard model had been in place around 1985. What was missing, however, was an
experimental confirmation of temperature fluctuations in the CMB.
an altitude of 37 km. With its bolometer array cooled to 0.28 K, it observed 1 800
square degrees of the sky in four frequency bands between 90 and 400 GHz with
an angular resolution near 15′. Its results, announced by Paolo de Bernardis and his
team in 2000 (de Bernardis et al., 2000), were summarised by: “We (. . .) find a peak
[in the angular power spectrum] at Legendre multipole ℓpeak = (197 ± 6), with an
amplitude ΔT200 = (69 ± 8) µK. This is consistent with that expected for cold dark
matter models in a flat (Euclidean) Universe, as favoured by standard inflationary
models” (de Bernardis et al., 2000, p. 955).
This statement, important as it is, may require a little more explanation. Together
with the sound speed in the cosmic plasma prior to recombination, the time elapsed
between the Big Bang and the recombination sets the characteristic length scale
for CMB temperature fluctuations. The CMB originated ≈400 000 years after the
Big Bang, and the sound speed was very nearly c/√3. The largest wavelength of CMB
temperature fluctuations is thus ≈230 000 light years, or ≈71 kpc. However, the
angle that we see spanned by this length on the sky depends on the spatial curvature.
From the angular scale of the peak in the power spectrum of the CMB fluctuations,
first discovered by the BOOMERanG experiment, the spatial curvature could thus
be directly inferred: it turned out to be compatible with zero. Within measurement
uncertainties, the space in our Universe is flat.
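As a quick arithmetic check of the length scale quoted above (an illustration added here; the 400 000-year figure and the sound speed c/√3 are taken from the preceding paragraph):

# Largest wavelength of CMB temperature fluctuations, from the simple estimate in the text.
import math

t_rec_years = 400_000              # approximate age of the Universe at recombination
sound_speed = 1 / math.sqrt(3)     # sound speed in units of the speed of light

wavelength_ly = sound_speed * t_rec_years   # in light years
wavelength_kpc = wavelength_ly / 3261.6     # 1 kpc is about 3261.6 light years

print(f"largest wavelength ≈ {wavelength_ly:,.0f} light years ≈ {wavelength_kpc:.0f} kpc")

This reproduces the ≈230 000 light years and ≈71 kpc stated above.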
This was a confirmation and a surprise at the same time. It was a confirmation
of the expectation from inflationary cosmology that the brief period of exponential
expansion should in fact have driven any finite curvature radius towards infinity. It
was a surprise because spatial flatness requires all energy-density contributions in
the Universe to add up to a critical value, the critical density. It was known from ob-
servations of cosmic structures as well as the CMB itself, however, that the density
of baryonic and dark matter together should not amount to more than ∼30 % of this
critical density. The most obvious candidate for the missing ∼70 % of the cosmic
energy budget was the cosmological constant.
The verification that the remaining ∼70 % of the present energy density could
indeed be assigned to the cosmological constant formed the headstone in the edifice
of the cosmological standard model. It came from observations of a certain class
of stellar explosion, the so-called type-Ia supernovae. Supernovae of this type have
a reasonably well-defined luminosity whose scatter can be substantially reduced
by an empirical correction scheme. They form what has been called standardisable
candles. By comparison of their luminosity with their measurable flux, their distance
can be inferred. The relation between distance and redshift, however, depends on
cosmology and thus allows the inference of cosmological parameters.
In September 1998, Adam Riess and the High-z Supernova Search Team, among
them Brian Schmidt, published measurements based on 16 distant and 34 nearby
supernovae (Riess et al., 1998) from which they concluded: “We find the luminosity
distances to well-observed SNe with 0.16 ≤ z ≤ 0.97 measured by two methods to
be in excess of the prediction of a low mass density (ΩM ≈ 0.2) [ΩM is the matter
density in units of the critical density] universe by 0.25 to 0.28 mag [i.e., by a factor
of 1.25 to 1.29]. A cosmological explanation is provided by a positive cosmological
constant with 99.7 % (3.0σ ) to more than 99.9 % (4.0σ ) confidence using the com-
plete spectroscopic SN Ia sample and the prior belief that ΩM ≥ 0” (Riess et al.,
1998, p. 1034). Shortly thereafter, in June 1999, Saul Perlmutter and the members
of The Supernova Cosmology Project announced the cosmological results obtained
from a set of 42 type-Ia supernovae with redshifts up to 0.83 (Perlmutter et al.,
1999). They found that: “A flat, ΩΛ = 0 cosmology is a quite poor fit to the data.
The (ΩM , ΩΛ ) = (1, 0) line on Fig. 3.2b shows that 38 out of 42 high-redshift su-
pernovae are fainter than predicted for this model” (Perlmutter et al., 1999, p. 580).
This was the essential message from both teams: The distant supernovae appeared
significantly fainter in reality than they should have appeared in a universe without
cosmological constant. Assuming a spatially flat Universe, the cosmological con-
stant should contribute ≈72 % of the critical energy density. The loop was closed.
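To translate the magnitude differences quoted by both teams into ratios: a magnitude difference Δm corresponds, by definition, to a flux ratio of 10^{0.4 Δm} and, through the distance modulus m − M = 5 log₁₀(dL/10 pc), to a luminosity-distance ratio of 10^{0.2 Δm}. The following minimal sketch (an illustration added here, not taken from the original papers) spells out the arithmetic:

# Magnitude excesses of the distant type-Ia supernovae, translated into ratios.
for dm in (0.25, 0.28):
    flux_ratio = 10 ** (0.4 * dm)      # how much fainter the supernovae appear
    distance_ratio = 10 ** (0.2 * dm)  # how much larger the inferred luminosity distance is
    print(f"Δm = {dm:.2f} mag: flux ratio ≈ {flux_ratio:.2f}, distance ratio ≈ {distance_ratio:.2f}")

For Δm = 0.25 to 0.28 mag the flux ratio is about 1.26 to 1.29, which matches the factor given in square brackets in the quotation above, while the luminosity distances themselves come out larger by about 12 to 14 per cent.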
References
Alpher, R.A., Herman, R.C.: Remarks on the evolution of the expanding universe. Phys. Rev. 75,
1089–1095 (1949). doi:10.1103/PhysRev.75.1089
Baade, W.: The resolution of Messier 32, NGC 205, and the central region of the Andromeda
Nebula. Astrophys. J. 100, 137 (1944). doi:10.1086/144650
Blumenthal, G.R., Faber, S.M., Primack, J.R., Rees, M.J.: Formation of galaxies and large-scale
structure with cold dark matter. Nature 311, 517–525 (1984). doi:10.1038/311517a0
Bondi, H., Gold, T.: The steady-state theory of the expanding Universe. Mon. Not. R. Astron. Soc.
108, 252 (1948)
Burbidge, E.M., Burbidge, G.R., Fowler, W.A., Hoyle, F.: Synthesis of the elements in stars. Rev.
Mod. Phys. 29, 547–650 (1957). doi:10.1103/RevModPhys.29.547
Davis, M., Efstathiou, G., Frenk, C.S., White, S.D.M.: The evolution of large-scale struc-
ture in a Universe dominated by cold dark matter. Astrophys. J. 292, 371–394 (1985).
doi:10.1086/163168
de Bernardis, P., Ade, P.A.R., Bock, J.J., Bond, J.R., Borrill, J., Boscaleri, A., Coble, K., Crill, B.P.,
De Gasperis, G., Farese, P.C., Ferreira, P.G., Ganga, K., Giacometti, M., Hivon, E., Hristov, V.V.,
Iacoangeli, A., Jaffe, A.H., Lange, A.E., Martinis, L., Masi, S., Mason, P.V., Mauskopf, P.D.,
Melchiorri, A., Miglio, L., Montroy, T., Netterfield, C.B., Pascale, E., Piacentini, F., Pogosyan,
D., Prunet, S., Rao, S., Romeo, G., Ruhl, J.E., Scaramuzzi, F., Sforna, D., Vittorio, N.: A flat
Universe from high-resolution maps of the cosmic microwave background radiation. Nature
404, 955–959 (2000). arXiv:astro-ph/0004404
de Sitter, W.: Einstein’s theory of gravitation and its astronomical consequences. Third paper. Mon.
Not. R. Astron. Soc. 78, 3–28 (1917)
Dicke, R.H., Peebles, P.J.E., Roll, P.G., Wilkinson, D.T.: Cosmic black-body radiation. Astrophys.
J. 142, 414–419 (1965). doi:10.1086/148306
Eddington, A.S.: The internal constitution of the stars. Observatory 43, 341–358 (1920)
Einstein, A.: Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie, pp. 142–152.
Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften, Berlin (1917)
Einstein, A., de Sitter, W.: On the relation between the expansion and the mean density of the
Universe. In: Contributions from the Mount Wilson Observatory, vol. 3, pp. 51–52 (1927)
Friedman, A.: Über die Krümmung des Raumes. Z. Phys. 10, 377–386 (1922). doi:10.1007/
BF01332580
Gamow, G.: My world line: an informal autobiography (1970)
Guth, A.H.: Inflationary Universe: a possible solution to the horizon and flatness problems. Phys.
Rev. D 23, 347–356 (1981). doi:10.1103/PhysRevD.23.347
Hoyle, F.: A new model for the expanding Universe. Mon. Not. R. Astron. Soc. 108, 372 (1948)
Hubble, E., Humason, M.L.: The velocity-distance relation among extra-galactic nebulae. Astro-
phys. J. 74, 43 (1931). doi:10.1086/143323
Hubble, E.P.: Cepheids in spiral nebulae. Observatory 48, 139–142 (1925)
Lemaître, G.: Un Univers homogène de masse constante et de rayon croissant rendant compte de
la vitesse radiale des nébuleuses extra-galactiques. Ann. Soc. Sci. Brux. 47, 49–59 (1927)
Mather, J.C., Cheng, E.S., Eplee, R.E. Jr., Isaacman, R.B., Meyer, S.S., Shafer, R.A., Weiss, R.,
Wright, E.L., Bennett, C.L., Boggess, N.W., Dwek, E., Gulkis, S., Hauser, M.G., Janssen,
M., Kelsall, T., Lubin, P.M., Moseley, S.H. Jr., Murdock, T.L., Silverberg, R.F., Smoot, G.F.,
Wilkinson, D.T.: A preliminary measurement of the cosmic microwave background spectrum
by the Cosmic Background Explorer (COBE) satellite. Astrophys. J. Lett. 354, L37–L40 (1990).
doi:10.1086/185717
Mukhanov, V.F., Chibisov, G.V.: Quantum fluctuations and a nonsingular Universe. JETP Lett. 33,
532 (1981)
Peebles, P.J.E.: Large-scale background temperature and mass fluctuations due to scale-invariant
primeval perturbations. Astrophys. J. Lett. 263, L1–L5 (1982). doi:10.1086/183911
Peebles, P.J.E., Yu, J.T.: Primeval adiabatic perturbation in an expanding Universe. Astrophys. J.
162, 815 (1970). doi:10.1086/150713.
Penzias, A.A., Wilson, R.W.: A measurement of excess antenna temperature at 4080 Mc/s. Astro-
phys. J. 142, 419–421 (1965). doi:10.1086/148307
Perlmutter, S., Aldering, G., Goldhaber, G., Knop, R.A., Nugent, P., Castro, P.G., Deustua, S.,
Fabbro, S., Goobar, A., Groom, D.E., Hook, I.M., Kim, A.G., Kim, M.Y., Lee, J.C., Nunes,
N.J., Pain, R., Pennypacker, C.R., Quimby, R., Lidman, C., Ellis, R.S., Irwin, M., McMa-
hon, R.G., Ruiz-Lapuente, P., Walton, N., Schaefer, B., Boyle, B.J., Filippenko, A.V., Math-
eson, T., Fruchter, A.S., Panagia, N., Newberg, H.J.M., Couch, W.J.: The supernova cosmology
project: measurements of Ω and Λ from 42 high-redshift supernovae. Astrophys. J. 517, 565–
586 (1999). doi:10.1086/307221. arXiv:astro-ph/9812133
Riess, A.G., Filippenko, A.V., Challis, P., Clocchiatti, A., Diercks, A., Garnavich, P.M., Gilliland,
R.L., Hogan, C.J., Jha, S., Kirshner, R.P., Leibundgut, B., Phillips, M.M., Reiss, D., Schmidt,
B.P., Schommer, R.A., Smith, R.C., Spyromilio, J., Stubbs, C., Suntzeff, N.B., Tonry, J.: Obser-
vational evidence from Supernovae for an accelerating Universe and a cosmological constant.
Astron. J. 116, 1009–1038 (1998). doi:10.1086/300499. arXiv:astro-ph/9805201
Ryle, M., Clarke, R.W.: An examination of the steady-state model in the light of some recent
observations of radio sources. Mon. Not. R. Astron. Soc. 122, 349 (1961)
Sachs, R.K., Wolfe, A.M.: Perturbations of a cosmological model and angular variations of the
microwave background. Astrophys. J. 147, 73 (1967). doi:10.1086/148982
Silk, J.: Cosmic black-body radiation and galaxy formation. Astrophys. J. 151, 459 (1968).
doi:10.1086/149449
Slipher, V.M.: Nebulae. Proc. Am. Philos. Soc. 56, 403–409 (1917)
Smoot, G.F., Bennett, C.L., Kogut, A., Wright, E.L., Aymon, J., Boggess, N.W., Cheng, E.S.,
de Amici, G., Gulkis, S., Hauser, M.G., Hinshaw, G., Jackson, P.D., Janssen, M., Kaita, E.,
Kelsall, T., Keegstra, P., Lineweaver, C., Loewenstein, K., Lubin, P., Mather, J., Meyer, S.S.,
Moseley, S.H., Murdock, T., Rokke, L., Silverberg, R.F., Tenorio, L., Weiss, R., Wilkinson,
D.T.: Structure in the COBE differential microwave radiometer first-year maps. Astrophys. J.
Lett. 396, L1–L5 (1992). doi:10.1086/186504
Sunyaev, R.A., Zeldovich, Y.B.: Small-scale fluctuations of relic radiation. Astrophys. Space Sci.
7, 3–19 (1970). doi:10.1007/BF00653471
Uson, J.M., Wilkinson, D.T.: New limits on small-scale anisotropy in the microwave background.
Astrophys. J. Lett. 277, L1–L3 (1984). doi:10.1086/184188
Wirtz, C.: Einiges zur Statistik der Radialbewegungen von Spiralnebeln und Kugelsternhaufen.
Astron. Nachr. 215, 349 (1922)
Chapter 4
Evolution of Astrophysics: Stars, Galaxies, Dark
Matter, and Particle Acceleration
Peter L. Biermann
Astrophysics began when star-gazers started asking, already millennia ago, where
comets came from, what caused the observed novae and supernovae, how the
planetary system began, and what the Milky Way really was. Other galaxies were ob-
served, but thought of as nebulae like any other, such as those around the Pleiades or
in the sword of Orion. Even what we now know to be relativistic jets had already been
observed about 100 years ago in the case of the very large elliptical galaxy M87,
without other galaxies even being known at the time.
Modern astrophysics began in the early 1900s with several developments, most
importantly quantum mechanics, but also the realization by Oort that we live in a
flattened distribution of stars far from the center. At this location optical emission
by the stars near the center is obscured by interstellar dust. Quantum mechanics
finally allowed one to quantitatively interpret the spectra of stars, to start studying
their structure, and their evolution. Building on this understanding, the study of
tenuous emission nebulae also began, with Woltjer (1958) considering the Crab nebula
P.L. Biermann
Institute for Nucl. Phys., Karlsruhe Inst. for Technology, Karlsruhe, Germany
P.L. Biermann
Dept. of Physics & Astronomy, Univ. of Alabama, Tuscaloosa, AL, USA
P.L. Biermann
Dept. of Physics, University of Alabama at Huntsville, Huntsville, AL, USA
P.L. Biermann
Dept. of Physics & Astron., Bonn, Germany
In the following we will expand on the early developments in cosmology, stars, dark
matter, and cosmic rays, and will add an outlook into the future:
• The big bang: The universe expands, and seems to do so for ever. As it expands it
cools, and so the very early stages must have been very hot. The radiation of this
hot phase was postulated to exist by Alpher et al. (1948). The residual emission
from this hot phase was detected by Penzias and Wilson (1965), and immediately
properly explained by Dicke et al. (1965). Today we use the very weak spatial
wiggles in this radiation, and its polarization characteristics to learn about the
very early universe. What is 99.5 percent of the universe made of?
• Dark energy: From 1998 the calibration of the light-curves of exploding white
dwarfs (called Supernova type Ia) demonstrated that the universe expands with
acceleration, most readily described by a cosmic equation of state with negative
relativistic pressure; such a description has so far defied a physical explanation with
predictive power and testable consequences. Today the very precise measurements of the spa-
tial wiggles in the cosmic microwave background give very precise information
on the existence of dark energy. What can we say about the physics of dark en-
ergy?
• Magnetic fields: Polarization of star-light as well as the containment of the en-
ergetic charged particles (after 1945), called, for historical reasons, cosmic rays,
require that magnetic fields permeate the interstellar medium, the gas between
the stars in our Milky Way. Today we expect that magnetic fields run through
the entire universe. Magnetic fields are readily produced in almost any rotating
star consisting of ionized gas (referred to as Biermann-battery, 1950), and also in
shock waves running through ionized gas (the Weibel instability, 1959). However,
phase transitions in the very early universe can also give rise to magnetic fields.
Stars can distribute magnetic fields through their winds, or when they explode,
and galaxies can distribute magnetic fields in their winds, or when the environ-
ment of their central super-massive black holes explodes. Where do magnetic
fields come from?
• Cosmic rays: Cosmic rays were originally discovered through their ionizing prop-
erties by Hess in balloon flights (1912) and by Kolhörster (1913); they were later
identified as charged particles through the east–west asymmetry of their arrival
directions; and today we have measured their entire spectrum accessible inside the
magnetic Solar wind (discovered by Biermann, 1951). A model with predictions based on com-
mon explosions of stars into their stellar winds has passed a number of quanti-
tative tests today (Biermann, 1993; Stanev et al., 1993; Biermann and de Souza,
2012), but we still are not completely sure whether this model is unique. Where
do cosmic-ray particles come from?
• Massive black holes: In the 1960s it became apparent that super-massive black
holes might help understand the huge luminosities seen in the centers of some
galaxies (often so bright that the galaxy is nearly invisible from a large distance).
Only in the 1990s did it become clear that all massive galaxies contain at their
center a super-massive black hole, with a direct correlation between the mass of
the spheroidal distribution of stars, and the mass of the central black hole. The
mass distribution of these massive black holes starts essentially at a few million
Solar masses, thus, far above stellar masses. What is the origin and evolution of
these super-massive black holes?
• Dark matter: Dark matter implies that there is invisible matter which exerts grav-
itational attraction, holding a cluster of stars or galaxies (Zwicky, 1933) together,
but we cannot identify this matter. The term was coined by Oort (1932) to explain
the large perpendicular velocity dispersion of stars in the Galactic disk: these
large motions have to be contained, otherwise the stellar system would fly apart.
He argued correctly that the unseen component was due to stars. Zwicky used the
same kind of reasoning to ask what is required of clusters of galaxies to hold them
together. In this case we do need dark matter, which cannot be stars or even gas;
it has to be of a form that can cluster, but does not participate in nuclear reactions
in the early universe. What is dark matter?
• Ultra-high-energy cosmic-ray particles: From 1963 it became obvious (Linsley,
1963) that some particles of ultra-high energy exist that cannot be contained in
the disk of our Galaxy by the interstellar magnetic fields. Their near isotropy
as well as their energy implies that they originate far outside our own Galaxy,
probably either from Gamma-ray Bursts (the formation of a stellar black hole
probably), or from a feeding episode of a super-massive black hole, quite possibly
after a merger of two host galaxies, and the subsequent merger of the two central
massive black holes. These particles come in at an energy far above what we can
attain in an accelerator such as the “Large Hadron Collider (LHC)”, even when
considering the center-of-mass frame of a collision with a particle at rest. Where
do ultra-high-energy cosmic-ray particles come from, and what exactly are they?
It can safely be predicted that the mutual interaction between fundamental physics
and astronomy will grow ever stronger.
The universe started in a state of extremely high density and temperature and has
expanded almost explosively ever since; this initial stage is called the big bang. This
phase was very dense and very hot; emission from this phase was postulated in a fa-
mous paper by Alpher et al. (1948). The residual radiation from this background
was detected by Penzias and Wilson (1965), and immediately properly explained by
Dicke et al. (1965). – This radiation is called the microwave background (MWBG)
or cosmic microwave background (CMB).
The background is slightly mottled by the weak gravitational disturbances due
to dark matter at the epoch of last scattering, about redshift 1 100, corresponding
to about 300 000 years after the big bang. The discovery of these spatial wiggles
(Smoot, 2007; Mather, 2007) in the residual radiation (Penzias and Wilson, 1965)
from the big bang seen in the sky allows us to measure the geometry of the universe
to exquisite precision: The various contributing components add to unity, and so
the universe is mathematically flat. However, the three main components, which we
know about, are largely not understood, with about 72 percent dark energy, 23 per-
cent dark matter, and 5 percent normal matter (WMAP seven year results, Komatsu
et al., 2011):
(a) Dark energy drives the universe into ever faster expansion; its density is independent
of redshift (Riess et al., 1998; Goldhaber and Perlmutter, 1998; for a review see
Frieman et al., 2008).
(b) Dark matter dominates the gravitational potential of galaxies, clusters of galax-
ies, and the soap-bubble-like filigree network of large scale structure.
(c) Normal matter participates in nuclear reactions, so is made up of hydrogen, he-
lium, carbon, oxygen, iron and the like (see Burbidge et al., 1957). All of the
hydrogen, most of the helium, and none of the heavy elements were made in
the first few minutes of the big bang (e.g. Reeves, 1994). We now know (e.g.
Woosley et al., 2002) that all the heavy elements are produced in nuclear reac-
tions in massive stars; massive stars have powerful winds; and when such stars
explode, most of their original mass has already been ejected and constitutes
a wind with a shell from interacting with the surrounding interstellar medium:
These stars blow up into their winds and expel much of their heavy element pro-
duction. Considering then the gas and stars we observe in galaxies and groups
and clusters of galaxies, one problem with the normal baryonic matter is that
we have trouble finding it, even though we know from the helium abundance
and several rare isotopes what the normal matter fraction is. Probably the best
hypothesis is that this missing normal matter is in hot and warm gas (Ostriker
et al., 2005), which is very difficult to detect. Obviously this missing baryonic
mass could also be in cool extremely tenuous clouds. The first stars and galax-
ies were made from the baryonic soup, just hydrogen, helium and a trace of
light elements like lithium. We do not yet know how early the first stars and the
first galaxies were made; suggestions run from about 40 million years to a few
hundred million years. But we can expect the gamut of stellar phenomena very
quickly after the first stars polluted the environment, making cooling instabil-
ities and so the formation of a second generation of stars easier (e.g. Rollinde
et al., 2009). Within just a few million years after the first massive stars one can
expect heavy element enrichment, dust, magnetic fields, and energetic particles,
also called, for historical reasons “cosmic rays”.
The best published analyses of the spatial wiggles of the residual microwave-
background radiation are based on the data from the WMAP satellite; these analyses
are sequentially numbered: The WMAP7 analysis of the MWBG fluctuations (Ko-
matsu et al., 2011) gives the latest determinations and error bars. Ω is the fraction of
critical density, given by ρcrit = 3H0²/(8π GN), where H0 is the Hubble expansion
parameter, and GN is Newton’s constant of gravitation. If the universe were at the
critical density it would expand parabolically, so expand forever, but with an ever
slower expansion speed, just like a stone thrown out from Earth that would reach in-
finity with zero speed (which, obviously, would take forever), a mathematical limit.
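For orientation, a worked number (added here for illustration, using the Hubble parameter of Eq. (4.5) below): ρcrit = 3H0²/(8π GN) ≈ 9.3 × 10⁻²⁷ kg m⁻³, which corresponds to roughly five to six proton masses per cubic metre.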
For dark energy,
ΩΛ = 0.725 ± 0.016.  (4.1)
For dark matter,
Ωdm = 0.229 ± 0.015.  (4.2)
For baryonic matter (stars, planets, gas),
Ωb = 0.0458 ± 0.0016.  (4.3)
Adding these three components yields
Ωk = 1 − ΩΛ − Ωdm − Ωb = −0.0125 (+0.0064/−0.0067).  (4.4)
This quantity, Ωk, measures whether the sum of the three angles in a (very large) triangle
is 180 degrees; it vanishes for flat space – of course provided the path does not pass too
closely to a black hole. It is useful to work out the angular mismatch due to black
holes: Using a typical distance between black holes today of about 6 Mpc, and a
typical mass of a black hole of, say, 10⁷ M⊙, one obtains a minimal angle of deviation
of a light path of ∼3 × 10⁻¹³ rad, which is 0.06 µarc-seconds. Of course in a random
walk across the universe, this will increase by a factor of order 30 or more, given the
density of black holes, so it would become of order 2 µarc-seconds or more.
One of the most precise astrometry measurements was in the µarc-seconds range
(Marcaide et al., 1984). Therefore, the astrometry for high redshift objects ought to
be limited to such a scale. In mathematical language the geometry is “flat”, flat like
a tabletop, but with this precision.
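The quoted deflection angle follows from the standard weak-field formula α = 4GNM/(c²b) for light passing a point mass M at impact parameter b. The short sketch below is an illustration added here, not from the original text, with b taken to be the quoted typical separation of 6 Mpc:

# Gravitational light deflection by a typical super-massive black hole,
# alpha = 4 G M / (c^2 b), for the round numbers quoted in the text.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg
Mpc = 3.086e22         # m

M = 1e7 * M_sun        # typical black-hole mass assumed in the text
b = 6 * Mpc            # typical distance between black holes assumed in the text

alpha_rad = 4 * G * M / (c**2 * b)
alpha_muas = alpha_rad * 206265 * 1e6   # radians -> micro-arcseconds

print(f"alpha ≈ {alpha_rad:.1e} rad ≈ {alpha_muas:.2f} micro-arcseconds")

This gives ≈3 × 10⁻¹³ rad, or a few hundredths of a micro-arcsecond, in line with the estimate above.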
The expansion scale of the universe is the Hubble parameter,
H0 = 70.2 ± 1.4 km/s/Mpc. (4.5)
For moderate distances the speed of a galaxy scales as H0 times the distance; at
very small distances peculiar velocities enter, at the scale of about 500 km/s.
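As a worked number (added here for illustration): at a distance of 10 Mpc the Hubble velocity is about 700 km/s, comparable to the peculiar velocities just quoted, so only at distances of many tens of Mpc does the Hubble flow clearly dominate.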
This then enters into the distance scale and time integral (e.g. Frieman et al.,
2008):
r(z) = ∫₀^z c dz′/H(z′),  (4.6)
which gives the various distance scales such as luminosity distance dL = (1+z)r(z),
angular diameter distance dA = r(z)/(1+z), and proper motion distance dM = r(z).
These distances are required to determine the luminosity of a source from its ap-
parent brightness, the size of a source from its angular diameter, and the motion of
a source from its angular speed. If we were to recognize that some object could be
thought of as a “standard candle”, then we could obtain the luminosity distance, and
so verify the cosmic distance scale. Supernova explosions of type Ia offer this possi-
bility, based on an empirical law of their luminosity as a function of other observed
properties, like color evolution, and time scales.
The time is
t(z) = ∫₀^z dz′/[(1 + z′)H(z′)].  (4.7)
Here and above, the cosmic expansion rate H(z) is given by
H(z) = H0 [(Ωdm + Ωb)(1 + z)³ + ΩΛ]^{1/2},  (4.8)
ignoring the radiation and neutrino component. We also ignore black holes, since
they are probably made from baryonic matter (baryonic matter is matter that par-
ticipates in nuclear reactions at sufficiently high density and temperature, such as
inside stars, or in the very early universe at an age of a few minutes), or perhaps even
dark matter (see, e.g. Zhang et al., 2008). The contribution of super-massive black
holes today can be estimated to ΩBH = 2 × 10^(−6±0.40) (Caramete and Biermann,
2010). Concerning mini black holes, the mass range between 10¹⁵ g and 10²⁶ g is
not constrained and could contribute to total mass balance, while higher and lower
mass black holes are strongly constrained (Abramowicz et al., 2009).
The baryonic gas in the universe starts off as hot and fully ionized, then recom-
bines at a redshift near 1 000, and is later re-ionized by bright stars, active black
holes, or in some other way (e.g. Wise and Abel, 2008a, 2008b; Mirabel et al.,
2011). Assuming instantaneous re-ionization (corresponding to about 3 × 10⁸ yr,
counting from the big bang), we have
zre-ion = 10.6 ± 1.2. (4.9)
Of course the re-ionization is very likely a very prolonged affair, with some argu-
ments suggesting that it might start as early as redshift 80 (Biermann and Kusenko,
2006), then being initiated by massive stars. How the first stars may have formed
has been reviewed by Ripamonti and Abel (2005).
The age of the universe is
t0 = (13.76 ± 0.11) × 10⁹ yr.  (4.10)
In a fair approximation for z ≳ 0.4 one can write the time from the big bang to
redshift z as t0 (1 + z)^{−3/2}.
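Equations (4.6)–(4.8) are straightforward to evaluate numerically. The following minimal Python sketch is an illustration added here, not part of the original text; it assumes scipy is available, ignores radiation and neutrinos as in Eq. (4.8), and uses the WMAP7 parameters quoted above:

import numpy as np
from scipy.integrate import quad

H0 = 70.2              # km/s/Mpc, Eq. (4.5)
Om = 0.229 + 0.0458    # dark matter plus baryons, Eqs. (4.2) and (4.3)
OL = 0.725             # dark energy, Eq. (4.1)
c = 299792.458         # speed of light in km/s
GYR = 977.8            # converts 1/(km/s/Mpc) into Gyr

def H(z):
    # expansion rate of Eq. (4.8), radiation and neutrinos ignored
    return H0 * np.sqrt(Om * (1.0 + z)**3 + OL)

def r(z):
    # comoving distance of Eq. (4.6), in Mpc
    return quad(lambda zp: c / H(zp), 0.0, z)[0]

def t(z):
    # time integral of Eq. (4.7), in Gyr
    return GYR * quad(lambda zp: 1.0 / ((1.0 + zp) * H(zp)), 0.0, z)[0]

z = 1.0
print("r(1)   =", round(r(z)), "Mpc")
print("d_L(1) =", round((1 + z) * r(z)), "Mpc")   # luminosity distance
print("d_A(1) =", round(r(z) / (1 + z)), "Mpc")   # angular diameter distance
print("t0     =", round(t(np.inf), 2), "Gyr")     # z -> infinity limit

The last line, the z → ∞ limit of the time integral, comes out at about 13.76 Gyr, in agreement with Eq. (4.10).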
The tally is that out there, say, in clusters of galaxies, where we believe we “see”
everything, most of the gravitational mass is dark matter; the second most important
component is hot gas, and only the third component is stars. A problem with these
ingredients is that we do not know what dark energy and dark matter are, and even
for the baryonic component we are confident of only the order of ten percent, the part we
have detected in stars and visible gas. So only for ∼0.1 Ωb ≈ 0.005 of the total do we know
where it is and what form it takes; for the remaining ≈0.995 we do not.
The conclusions discussed above derive from the measurements of the micro-
wave-background fluctuations and spectrum, from large scale structure observa-
tions, from abundances of certain isotopes and elements, and – of course – from
Supernova type Ia observations. But there are also other potential avenues to obtain
information about the universe: There appears a possibility to measure directly the
early expansion of the universe using frequency combs (Steinmetz et al., 2008). The
precision using frequency combs is also approaching such a level that for instance in
the not too distant future the cosmological change of some natural constants could
be observed within a few weeks of time. Using atomic clocks one can also use
time measurements to test relativity, geodesy and cosmology (Diddams et al., 2004;
Rosenband et al., 2008; Chou et al., 2010).
It is only relatively recently that quantum mechanics has allowed us to interpret the
observed absorption and emission line spectra of stars, their structure, their internal
nuclear reactions, and their internal energy transport by radiation or convection.
What we have learnt is that the lifetime of stars is a very strong function of their
initial mass, with about two million years reached asymptotically for high masses,
and a lifetime longer than the age of the universe at masses of about one Solar mass
(depicted with the very ancient symbol ⊙, which even looks similar in Chinese).
Massive stars explode, and pollute their environment with heavy elements, dust,
magnetic fields and cosmic rays. Lower mass stars live for times longer than the age
of the universe. Stars in binary systems can gain and lose mass via mass exchange,
and so show signs of extreme activity; the end point of moderate to low mass stars
is a very compact star, called a “white dwarf”, in which the electrons sit in a sort of
lattice, commonly referred to as a “degenerate quantum state”. Yet more compact
stars are neutron stars, also a quantum lattice, this time for neutrons. Both these
kinds of star have an upper limit as regards stability, and if too much mass is added
by accretion they blow up. This limit is called the Chandrasekhar mass for white
dwarfs (see below).
Cosmic rays are energetic particles that together make up as much pressure in
the interstellar medium as the magnetic fields, and both components together again
have an energy density almost as high as the thermal gas – which represents a fa-
mous stability limit due to Parker (1966). Exploding stars were the prime candi-
dates to produce the cosmic rays as argued already by Baade and Zwicky (1934).
The cosmic-ray particles have a near power-law energy spectrum, with a highest
energy of about 3 × 10¹⁸ eV for heavy nuclei accelerated by the shocks of explod-
ing stars. These highest energies can be reached in the case that a very massive star
explodes into its own magnetic wind. An interesting alternative is the enhancement
of the magnetic field strength by instabilities caused by the cosmic-ray particles
themselves (Weibel, 1959; Bell and Lucek, 2001; Bell, 2004), which can also allow
larger particle energies for a given space in the acceleration region. In this instabil-
ity the magnetic field can approach in energy density a modest fraction of the ram
pressure, produced by the stellar explosion. Cosmic rays will be discussed in more
detail below.
Magnetic fields are also readily produced in stars. Whenever a star forms, and
obtains angular momentum from shear or tidal forces, and so rotates, as is usually the
case – unless the thermodynamics of the star become extremely odd and simple –,
constant pressure and constant density do not coincide and so drive an electric cur-
rent, which in turn produces a magnetic field. This magnetic field then immediately
has the length scale characteristics of that star.
A simplified version of how this works is as follows (Biermann, 1950; Biermann
and Schlüter, 1951): One starts by writing down separately the equation of motion
for ions (charge Z, mass Mi , density ni , velocity vi ) and electrons (mass me , density
ne and velocity ve ). Then the electric current is given by
j = e(ni Zvi − ne ve ), (4.11)
where e is the elementary charge. Then one can work out the equation of motion for
the current from the difference of the equation of motion for the ions and electrons.
One can renormalize the equation to isolate the electric field E, and then take the
curl. This finally yields one important term:
curl[(mi/(eZρ)) ∇Pe] ∼ ∇ × [(mi/(eZρ)) ∇Pe] ∼ ∇ρ × ∇Pe.  (4.12)
This is zero exactly when the surfaces of constant density and constant pressure coincide.
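To spell out the curl step a little more explicitly (an illustrative reconstruction added here, not the authors' own derivation): neglecting electron inertia, the electron equation of motion gives an electric field E ≈ −ve × B − ∇Pe/(e ne). Inserting this into Faraday's law, ∂B/∂t = −∇ × E, yields

∂B/∂t = ∇ × (ve × B) − (1/(e ne²)) ∇ne × ∇Pe,

so that, starting from B = 0, a magnetic field is generated exactly when the gradients of electron density and electron pressure are not parallel. With quasi-neutrality, ne = Z ni, and ρ ≈ mi ni, the gradient ∇ne is proportional to ∇ρ, which connects this form to the ∇ρ × ∇Pe term of Eq. (4.12).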
It is a cross-product of two linear vectors and so is a rotational vector; it cannot be
balanced by a linear vector, such as an electric field. In a freshly forming rotating
star, as in an accretion disk, where density and temperature are governed by very
different microphysics, this term, the cross-product of the gradients of density and
pressure, is certainly non-zero, and drives an electric current. This driving of an
electric current can be compensated by an existing magnetic field (Mestel and Rox-
burgh, 1962), so this works strictly only when the prior magnetic field is extremely
close to vanishing or zero. In a second step, turbulence, shear and rotation enhance
this seed field by the dynamo process and so attain the relatively strong magnetic
fields we find ubiquitously in stars (work done by Steenbeck and Krause, 1965,
1966, 1966, 1967, and later papers; and in parallel by Parker, 1969, 1970a, 1970b,
1970c, and later papers). This can take very many rotation periods. Stars in turn
eject their magnetic fields in winds and explosions; this in turn leads to the injec-
tion of magnetic fields into the interstellar medium (e.g., Bisnovatyi-Kogan et al.,
1973), which are quite strong, possibly at the level of a few percent of equiparti-
tion. In the Galaxy shear, convection, and a cosmic-ray driven wind enhance the
magnetic field to its stability limit (Hanasz et al., 2004, 2009; Otmianowska-Mazur
et al., 2009; Kulesza-Zydzik et al., 2010; Kulpa-Dybel et al., 2011). These galac-
tic winds eject magnetic fields possibly as far as hundreds of kpc (Breitschwerdt
et al., 1991; Everett et al., 2008, 2010). As disk galaxies can be thought of as ac-
cretion disks (e.g. Y.-P. Wang and Biermann, 1998), the interstellar magnetic fields
get accreted to the central super-massive black holes, which in turn during their ac-
tivity episodes can eject powerful magnetic fields very far via their relativistic jets,
typically 100 kpc, but occasionally up to 3 Mpc. In this way it is possible to dis-
tribute magnetic fields throughout the cosmos (for simulations see Ryu et al., 1998,
2008; Dolag et al., 2009; Vazza et al., 2011; for observations see Kronberg, 1999,
2005; Kronberg et al., 2008; Gopal-Krishna and Wiita, 2001; Ferrière, 2010; Op-
permann et al., 2011). A review focusing on observations and theory alike is Beck
et al. (1996); a recent review on cosmic magnetic fields is in Kulsrud and Zweibel
(2008).
So exploding stars produce the bubbles of hot gas with lots of magnetic fields
and energetic particles, shining at many electromagnetic wavelengths like radio, X-
rays and gamma rays, as observed with the Cherenkov telescopes H.E.S.S., MAGIC
and VERITAS today, as well as with the Fermi satellite (e.g. H.E.S.S.-Coll., 2011).
These explosions can run through the immediate interstellar medium, or run through
an extended stellar wind for the more massive stars.
Lower mass stars normally do not explode, except in rare cases, when their old-
age remnant, a white dwarf (Shapiro and Teukolsky, 1983), is pushed over an in-
stability limit (Chandrasekhar, 1931; Landau, 1932) and creates what is called a
supernova Ia. Amazingly it turns out that this class of supernova can be calibrated
to provide a standard candle, so its distance can be measured reliably. This has con-
vinced us that the universe expands with acceleration, thus providing evidence for
dark energy (work by Riess et al., 1998, as well as Goldhaber and Perlmutter, 1998,
and later papers; for a review see Frieman et al., 2008). Reviews of such supernovae
are Branch (1998) and Hillebrandt and Niemeyer (2000).
However, we do not know what pushes these stars into an explosion (see, e.g.
Mazzali et al., 2007). The common idea that accretion pushes them over the sta-
bility limit for white dwarfs, called the Chandrasekhar limit, has encountered ob-
servational problems (Gilfanov and Bogdan, 2010). The fact that stellar explosions,
which we do not really understand, serve as stepping stones in an argument for the
existence of the prime cosmological ingredient, dark energy, is disconcerting. On
the other hand, the precision of the measurements of the microwave-background
fluctuations, coupled with Large Scale Structure data, is so excellent nowadays that
we now use the supernova data as a consistency check, and no longer completely
depend on them.
Low mass stars in binary systems may include white dwarfs and neutron stars
(Shapiro and Teukolsky, 1983); if such compact stars accrete, they display many
of the same phenomena as also shown by accreting black holes, and they can be
much easier to study (e.g. Ikhsanov, 2007). If the orbital periods are short and well
determined, we can even identify planets in the data (e.g., DP Leonis, Biermann
et al., 1985; Qian et al., 2010; Beuermann et al., 2011).
High mass stars explode, and make in rare cases a Gamma-ray Burst, probably
giving birth to a black hole (Shapiro and Teukolsky, 1983; see, e.g., Woosley et al.,
2002). These Gamma-ray Bursts constitute a narrow highly relativistic gas stream
(Vietri, 1995; Waxman, 1995; e.g. Cenko et al., 2010; Mészáros and Rees, 2010,
2011); this means that the flow runs at a speed βc very close to the speed of light, in fact so close that the difference from the speed of light is probably initially only about 1 in 100 000. We describe a velocity βc very close to the speed of light by its Lorentz factor, γ = (1 − β²)^{−1/2}; the fractional difference from the speed of light can then be approximated by 1/(2γ²). When such a relativistic jet flow happens to point at us, its emission is strongly enhanced, and so such explosions become observable all across the universe; they have been seen from an epoch less than one billion years after the big bang (e.g., Cucchiara et al., 2011, identify a GRB at redshift 9.4). A beautiful summary of Gamma-ray Burst physics
has been given by Piran (2004), and a recent book has been published by Mészáros
(2010).
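These two relations are easy to evaluate numerically; a minimal sketch in Python (the velocity deficit of 1 in 100 000 is simply the figure quoted above):

    # Lorentz factor gamma for a flow whose speed falls short of c by a given
    # fraction, using gamma = (1 - beta^2)**(-1/2), and the approximation
    # 1 - beta ~ 1/(2 gamma^2), valid for gamma >> 1.
    import math

    deficit = 1.0e-5                    # 1 - beta, "about 1 in 100 000"
    beta = 1.0 - deficit
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    print("gamma         = %.0f" % gamma)                      # ~224
    print("1/(2 gamma^2) = %.1e" % (1.0 / (2.0 * gamma**2)))   # ~1e-5, recovers the deficit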
For high mass stars one idea is that the collapse of a rotating star stops at the
angular momentum limit, when centrifugal forces balance the gravitational pull
inwards, then ejects the rotational energy via torsional Alfvén waves, and finally
collapses (Bisnovatyi-Kogan, 1970; Ardeljan et al., 1996; Moiseenko et al., 2006;
Bisnovatyi-Kogan and Moiseenko, 2008; an early concept is in Kardashev, 1964).
This picture has the advantage that it uses the observed magnetic fields in mas-
sive stars, and easily relates to the argument that such stars can also explode as a
Gamma-ray Burst (Langer et al., 2010), given a substantial further mass loss in a
binary system before the explosion. Another idea is that the ubiquitous neutrinos
cause the explosion, with the very good reasoning that most of the potential energy
is in fact transformed into neutrinos. However, despite considerable work by many (e.g. Bethe, 1990; Woosley and Weaver, 1986; Langer et al., 2010; Dessart et al., 2011) this has not yet yielded a final proof of what the explosion mechanism is.
Most high mass stars live in binary star systems, and when one of the stars ex-
plodes and makes a black hole, then tidal pull can attract gas from the other star onto
the black hole. This accreting gas can form an accretion disk, given enough angu-
lar momentum (Lüst, 1952; Prendergast and Burbidge, 1968; Shakura and Sunyaev,
1973; Novikov and Thorne, 1973; Lynden-Bell and Pringle, 1974). This accretion
disk lets the gas slowly spiral down into the black hole, getting very hot and emit-
ting a lot of light. Up to 30 percent of the accretion energy can be dissipated into
heat and emitted as light (Shakura and Sunyaev, 1973; Bardeen, 1970, e.g. Sun and
Malkan, 1989); for stellar mass black holes this light is in the form of X-rays. In
some cases, and we do not yet understand exactly how this happens, the accreting
matter is funneled into a fast jet along the rotation axis of the black hole (Blandford
and Znajek, 1977; Lovelace et al., 2002; e.g. Keppens et al., 2008). The flow along
the jet usually is relativistic, with Lorentz factors possibly reaching of order 100, and in Gamma-ray Bursts quite possibly many hundreds. Accretion and relativistic
jet formation also occur for less compact objects like neutron stars, allowing us to
study these phenomena in our own Galaxy. Active stars with such relativistic jets are
called “micro-quasars” (Mirabel and Rodriguez, 1998, 1999, 2003; Mirabel et al.,
2001, 2011; Mazzali et al., 2008).
Stars and their various activities present very many paradigms that we can use as stepping stones in our understanding of other phenomena, but we had better be sure about the steps (e.g. Plotkin et al., 2012). They teach us about the origin of the heavy elements, the acceleration of particles, the origin of magnetic fields, the birth of black holes, and the phenomena of compact accretion disks and relativistic jets; through their motions they teach us about dark matter, and through their explosions about dark energy. And yet we still do not understand key aspects of stars, like exactly why
they explode.
4.2.3.1 Galaxies
Galaxies are visibly huge collections of stars, but in fact are gravitationally domi-
nated by dark matter, which we of course do not see. In a very crude approximation
many can be described as isothermal oblate spheres of older stars, but some are
also tri-axial configurations; many galaxies have thin disks of a stellar population
with gas (including magnetic fields and cosmic rays), like our Milky Way. A typ-
ical configuration combines a disk of old and young stars, with gas, and a broader spheroidal distribution of old stars, all embedded in a near-spherical distribution of invisible dark matter. In our own Milky Way we observe the gas in
the constellation Orion in the “sword” with our own eyes.
Galaxies come in many masses and sizes, and it appears from the data that there
is a smallest mass, which is of order 5 × 10^6 M_⊙ (e.g. Gilmore et al., 2007); such
galaxies are called dwarf ellipticals, and many think that they represent the original
galaxies, the first ever to form. In this picture other galaxies grow out of merging
small galaxies. There appears to be a clear separation in properties between star clus-
ters and galaxies; star clusters never have any measurable dark matter, small galaxies
are dominated by dark matter (Gilmore et al., 2007). Star clusters are also usually
smaller. If we accept this picture, the scale of these small galaxies also represents the smallest scale in large scale structure, which gives an indication of a property of the dark matter particle (here its momentum) at the time when it was first produced, in processes which we do not know. Galaxies grow by merging, and then initiate
a starburst, a phase of very high temporary star formation (Sanders and Mirabel,
1996). In such a merger the central one or two black holes also get fed.
Galaxies come in many sizes, and have disk-like structures like spiral galaxies,
and ellipsoidal structures like elliptical galaxies (Disney et al., 2008). Most galaxies
are mixed, have a disk and an ellipsoidal distribution of old stars, usually called
“bulges”. The largest galaxies merge into groups of galaxies, and often dominate a
region in a cluster of galaxies. Along the Hubble-sequence the so-called “early-type
galaxies” have big bulges, and “late-type” galaxies have almost no bulges, only
disks. Galaxies cluster easily, and usually occur in groups or clusters of galaxies;
these groups and clusters form a soap-bubble-like large scale structure, with huge
voids.
There are now observations and source counts for the early population of galax-
ies, such as Lagache et al. (2003, 2005), and Dole et al. (2006). The claimed detec-
tion of a universal radio background (Kogut et al., 2011) might be explainable by a
very early activity of galaxies. The Planck satellite measurements will give strong
further constraints (e.g. Planck-Coll., 2011).
There are now many detailed cosmological simulations of how galaxies, their first stars and their central black holes may form and evolve (e.g. Kim et al., 2011; Abel et al., 2009; Wise and Abel, 2008a, 2008b).
abundance was still close to zero. Very massive stars live just a few million years;
they suffer from an instability as their mass approaches about 10^6 M_⊙, and so can
blow up (Appenzeller and Fricke, 1972a, 1972b). Fall-back of additional material
may increase the mass of the resulting black hole beyond that of the exploding star,
thus possibly explaining the lower mass limit of the distribution of super-massive
black holes. So in this way the first generation of super-massive black holes may
have formed. Again we are not yet sure whether this is the key process, although it
seems plausible.
The original number density of black holes can be derived (Caramete and Biermann, 2010) in two approximations: If their growth is all by baryonic accretion, then their number is conserved, and the number density of black holes with mass above 3 × 10^6 M_⊙ is

\[ 6 \times 10^{-3 \pm 0.40}\ \mathrm{Mpc}^{-3}. \]   (4.13)

If they grow mostly by mergers, then their mass is conserved, and the original number is

\[ 8 \times 10^{-2 \pm 0.40} \left(\frac{M_{\rm BH,min}}{3 \times 10^6\, M_\odot}\right)^{-1} \mathrm{Mpc}^{-3}. \]   (4.14)
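For illustration, a small sketch evaluating the central value of Eq. (4.14) for a few choices of the minimum black hole mass (the values other than 3 × 10^6 M_⊙ are hypothetical inputs, not numbers taken from the text):

    # Original comoving number density of super-massive black holes in the
    # merger-dominated approximation, Eq. (4.14), central value only
    # (the quoted uncertainty is +-0.40 dex).
    def n0_merger(m_bh_min_msun):
        """Number density in Mpc^-3 for a given minimum black hole mass in Msun."""
        return 8.0e-2 * (m_bh_min_msun / 3.0e6) ** (-1)

    for m_min in (3.0e6, 1.0e7, 1.0e8):   # solar masses; the last two are illustrative
        print("M_BH,min = %.0e Msun -> n0 ~ %.1e Mpc^-3" % (m_min, n0_merger(m_min)))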
Black holes can grow by accretion and by merging. If they grow by accretion
the accretion disk and the relativistic jet almost always seem to be there, in various
strengths, and provide enormous power to the environment, and great visibility, all
across the universe. The highest luminosities we observe currently reach about 10^13 times the luminosity of our Sun. If they grow by merging, then to current observations the process is much less visible, because we cannot yet detect gravitational waves. The
mass distribution of these super-massive black holes can be well interpreted by the
merging process, using a classical merger scenario (e.g. Smoluchowski, 1916; Silk
and Takahashi, 1979; Berti and Volonteri, 2008; Gergely and Biermann, 2009). But
we do not know for sure which of these processes is the more important. Observa-
tions show that black hole growth predates final galaxy assembly (Woo et al., 2010),
but this does not unequivocally decide which growth mechanism is preferred.
The creation of stellar black holes from merging binary stars, perhaps from white dwarfs or neutron stars, may provide an opportunity for direct observation by gravitational wave interferometers (e.g. Hild et al., 2011; Zhang et al., 2008; Abadie
et al., 2010).
Merging super-massive black holes produce ubiquitous gravitational waves,
which may also be observable. If black holes start at a few million solar masses,
then merge, and do this at some high redshift, then for each specific merger the inte-
gral gravitational wave spectrum rises steeply to a maximum frequency given by the
merger itself (e.g. Wyithe and Loeb, 2003). The best current limits are given by pul-
sar timing (e.g. Kramer, 2010; Liu et al., 2011). There is reasonable hope that within
a few years this observational method may detect the gravitational wave background
produced when black holes merged probably prodigiously in the early universe. Due
to the low mass cut in the black hole mass distribution there ought to be a rather sig-
nificant spike in the spectrum of gravitational waves, probably somewhere near the
mass of the Galactic Center black hole of about 3 × 10^6 M_⊙, probably also corre-
sponding to the mass of the first generation of massive black holes, modified by the
redshift factor (1 + z), which could be large (giving a lower frequency).
Almost all black holes studied with sufficient sensitivity show signs of activity, of-
ten in the form of a gaseous jet flow, with relativistic velocities. This jet flow is
detectable via the emission of energetic electrons with a power-law spectrum, giv-
ing what is called non-thermal emission.
Many of the phenomena of relativistic jet flow can be seen already in highly
supersonic flow, as studied by E. Mach in the 19th century. He already determined
the geometry of the basic shockwave patterns.
The production of a relativistic jet by a super-massive black hole leads to some
of the most spectacular sights in the universe. Since the emission from the jet is boosted by the relativistic motion, these jets (and by inference their hosting black holes) are so eminently detectable that in a random selection of radio sources at 5 GHz (or 6 cm wavelength) half of all sources show a jet pointed at us
(Gregorini et al., 1984). Looking at these jets in gamma rays, the percentage goes
up to nearly 100. The activity of such relativistic jets is almost certainly the origin of
the highest energy particles observed to date. Copious neutrino emission is expected
to accompany this (e.g. Becker, 2008).
For sources that are selected to show a relativistic jet pointing at Earth, there is
a spectral sequence commonly referred to as the blazar sequence, which shows a
double-bump spectrum, for which many observations and tests exist (Fossati et al.,
1998; Ghisellini et al., 1998, 2009a, 2009b, 2009c, 2010; Ghisellini, 2004; Guetta
et al., 2004; Celotti et al., 2007; Celotti and Ghisellini, 2008; Ghisellini and Tavec-
chio, 2008, 2009; Cavaliere and D’Elia, 2002). One common interpretation is that
we observe at radio to X-ray frequencies synchrotron emission from relativistic elec-
trons, and at TeV energies the corresponding inverse Compton emission (Keller-
mann and Pauliny-Toth, 1969; Maraschi et al., 2008). An alternative interpretation
is (see, e.g., Biermann et al., 2011) that the lower frequency peak is indeed primarily
synchrotron emission from relativistic electrons. But following some speculations
in Biermann and Strittmatter (1987), the higher frequency peak is synchrotron emis-
sion from protons (or nuclei). Obviously, under such a hypothesis many secondary emissions will also exist. On the other hand, under this hypothesis the frequency ratio between the primary emissions is fixed to (m_p/m_e)^3, the proton to electron mass ratio to the third power; the frequencies both scale with the black hole mass as the inverse square root, M_BH^{−1/2}.
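A small numerical illustration of these two scalings (a sketch; the black hole masses in the example are arbitrary illustrative choices):

    # Frequency ratio of the two peaks under the proton-synchrotron hypothesis,
    # and the M_BH^(-1/2) scaling of both peak frequencies.
    mp_over_me = 1836.15                             # proton-to-electron mass ratio
    print("(m_p/m_e)^3 ~ %.1e" % (mp_over_me**3))    # ~6.2e9

    def peak_shift(m1, m2):
        """Factor by which both peaks shift when the black hole mass goes from m1 to m2."""
        return (m2 / m1) ** (-0.5)

    print("shift for 1e8 -> 1e9 Msun: %.2f" % peak_shift(1e8, 1e9))   # ~0.32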
There are a variety of interesting effects that arise from emission from a relativis-
tic jet. First of all, and perhaps most famous, is the effect of apparent superluminal
motion: When an emitting region moves at a speed very close to the speed of light,
the energy of any particle moving along with this emitting region is enhanced in
the observer's frame by a factor given by the Lorentz factor, explained above. When this motion is very close to the line of sight to the observer, the apparent transverse motion can reach Γc. Second, the variability timescales seen can be shortened by up to 1/(2Γ²). And third, the emission is boosted into a narrow cone of angular width 1/Γ, while each photon is increased in its energy by Γ, so as a consequence the visible emission is enhanced by Γ³. This leads to enormous selection effects in source surveys, as noted above: Of all the isotropically distributed sources, only those within an angle of about 1/Γ to the line of sight to the observer have their apparent emission enhanced by at least Γ³ (or more depending on spectrum, and on whether the jet is
intermittent or steady); this means that we can detect such sources to much larger
distances, and so for a given observational sensitivity the number of sources visible
to us is very much greater if we see this boosted emission.
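These kinematic effects are straightforward to evaluate; the following sketch (with Γ = 10 as an arbitrary illustrative choice) computes the apparent transverse speed and the simple 1/Γ cone and Γ³ boosting estimates used in the text:

    import math

    # Apparent transverse speed (in units of c) of a blob moving at beta*c at an
    # angle theta to the line of sight: beta_app = beta sin(theta)/(1 - beta cos(theta)).
    def beta_apparent(gamma, theta):
        beta = math.sqrt(1.0 - 1.0 / gamma**2)
        return beta * math.sin(theta) / (1.0 - beta * math.cos(theta))

    gamma = 10.0                       # illustrative Lorentz factor
    theta = 1.0 / gamma                # the maximum occurs near theta ~ 1/Gamma
    print("beta_app near theta ~ 1/Gamma: %.1f c" % beta_apparent(gamma, theta))  # ~Gamma

    # Simple estimates quoted in the text: visible emission enhanced by ~Gamma^3, and a
    # fraction ~(1/Gamma)^2/4 of isotropically oriented beams lies within 1/Gamma of us.
    print("flux boost ~ Gamma^3 = %.0e" % (gamma**3))
    print("fraction of sources within 1/Gamma: %.1e" % (theta**2 / 4.0))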
Finally, since the early 1970s arguments by Bardeen, Bekenstein and Hawking have
shown that black holes have entropy. For a non-rotating black hole the entropy is
\[ \frac{S_{\rm BH}}{k_B} = \frac{1}{4}\,\frac{A}{l_{\rm Planck}^2}, \]   (4.15)

where A is the area of a black hole:

\[ A = 16\pi \left(\frac{G_N M_{\rm BH}}{c^2}\right)^2, \]   (4.16)

and

\[ l_{\rm Planck}^2 = \frac{\hbar G_N}{c^3}, \]   (4.17)

which boils down to

\[ \frac{S_{\rm BH}}{k_B} = 1.15 \times 10^{91} \left(\frac{M_{\rm BH}}{10^7\, M_\odot}\right)^2. \]   (4.18)
For maximal spin the entropy is lower by a factor of 2 (see also Bardeen, 1970;
Bekenstein, 1973, 2004; Bardeen et al., 1973; Hawking, 1971, 1973, 1976; Wald,
1997).
The characteristic energy of a graviton is of order

\[ h\nu = \frac{c\,h}{2\pi r_g}. \]   (4.19)

There is another equivalence, in that

\[ \frac{M_{\rm BH} c^2}{h\nu} = \frac{G_N M_{\rm BH}^2}{\hbar c} \simeq \frac{S_{\rm BH}}{k_B}, \]   (4.20)
which says that to within a factor of order unity the number of gravitons possibly
produced with a black hole is equal to its entropy in units of kB .
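The numbers in Eqs. (4.15)-(4.20) can be checked directly; the sketch below evaluates the entropy and the graviton-count estimate in cgs units, taking r_g = G_N M_BH/c² (this convention for r_g, and the rounded values of the constants, are assumptions of the sketch):

    import math

    G, c, hbar = 6.674e-8, 2.998e10, 1.055e-27     # cgs units
    Msun = 1.989e33                                # g
    M = 1.0e7 * Msun                               # reference mass of Eq. (4.18)

    # Eqs. (4.15)-(4.17): S/k_B = A/(4 l_Planck^2), with A = 16 pi (G M/c^2)^2
    A = 16.0 * math.pi * (G * M / c**2) ** 2
    l_planck_sq = hbar * G / c**3
    S_over_k = A / (4.0 * l_planck_sq)
    print("S_BH/k_B   ~ %.2e" % S_over_k)          # ~1e91, the order of Eq. (4.18)

    # Eqs. (4.19)-(4.20): graviton count ~ M c^2/(h nu) with h nu = hbar c/r_g
    r_g = G * M / c**2                             # gravitational radius (assumed convention)
    N_gravitons = M * c**2 * r_g / (hbar * c)      # = G M^2/(hbar c)
    print("M c^2/(h nu) ~ %.2e, ratio to S/k_B: %.0f" % (N_gravitons, S_over_k / N_gravitons))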
so contributes to helium and some rare isotope formation in the early universe, and
other kinds of mass, which have no relation to these nuclear reactions in the first few
minutes of the universe. As noted above, most of the normal matter is not seen either.
But dark matter, as defined today, is just that gravitating mass that is not participating in nuclear reactions; through its velocity distribution in the early universe it could limit the spatial scale of density fluctuations from below. Some think, as also
noted already, that this scale corresponds to the smallest dwarf elliptical galaxies
(Gilmore et al., 2007; Wyse and Gilmore, 2008; Donato et al., 2009). If we assume
that this is true, then we can estimate the mass of these particles very simply by
assuming that the population of these particles behaves thermodynamically just as
any normal gas. So we have three constraints: the velocity dispersion in the halo of
a galaxy gives us the typical speed of these unseen particles; they must go at this
speed just to match the radial profile of the gravitational well. We have the density
of these particles just from the volume and the mass to generate the potential well.
These two numbers give us “density” and “pressure”. And, finally, assuming that
this “gas” of dark matter particles behaves just as a normal gas, we also find that
the “pressure” has to scale as density to the 5/3 power. This last relationship allows
us then to obtain an estimate of the mass of these particles, and the result is in the
keV range, so at the level of one percent of the mass of an electron. Obviously, this
quasi-gas of dark matter might behave very differently from a normal gas, but sim-
plicity suggests this line of reasoning is worth pursuing (work done by Hogan and
Dalcanton, 2000; Dalcanton and Hogan, 2001; Boyanovsky et al., 2008a, 2008b;
de Vega and Sanchez, 2010, 2011; de Vega et al., 2011).
The simple version of the argument runs as follows (Hogan and Dalcanton, 2000;
Weinberg, 1972, Eq. (15.6.29)): Consider the phase-space density f (p) of some
particle of momentum p, which is produced in some unknown interaction and decay
chain, and which will give today the dark matter particles of mass m:
\[ f(p) = \frac{1}{e^{(pc-\mu)/(k_B T_D)} \pm 1}, \]   (4.23)
where μ is the chemical potential, TD is the temperature at decoupling, while ±
applies to fermions and bosons. We assume that we start in an equilibrium situa-
tion, which is initially highly relativistic. Expansion of the universe just lowers the
momenta of the particles without changing their distribution function. Using the
temperature of the distribution in its diluted form from decoupling to today, T0 , we
then obtain simple integrals. In the following we will assume that the particles are
today non-relativistic and that the chemical potential can be neglected.
Then we can work out density n and pressure P , using the number of spin degrees
of freedom g:
\[ n \sim \frac{g}{(2\pi)^3 \hbar^3} \int f(p)\, d^3p, \]   (4.24)

\[ P \sim \frac{g}{3m(2\pi)^3 \hbar^3} \int p^2 f(p)\, d^3p. \]   (4.25)
The “phase density” Q = ρ/⟨v²⟩^{3/2} is a measure of the inverse specific entropy, and so is conserved except in shock waves or “violent relaxation” (Lynden-
Bell, 1967); numerical experiments show that regions of enhanced entropy move
to the outer parts of halos, while the lowest entropy material sinks to the center.
One may conjecture that the cores of galaxies conserve a memory of the original
“phase density”. This is our key assumption here, and the data seem to confirm that
cores of smaller galaxies all show similar properties (Gilmore et al., 2007; Strigari
et al., 2008; Gentile et al., 2009; de Vega and Sanchez, 2011). So today’s quasi-
temperature T0 can be used to rewrite the two expressions for density and pressure:
\[ n \sim \frac{g (k_B T_0)^3}{(2\pi)^3 \hbar^3 c^3} \int f(x)\, d^3x, \]   (4.26)

\[ P \sim \frac{g (k_B T_0)^5}{3m(2\pi)^3 \hbar^3 c^5} \int x^2 f(x)\, d^3x, \]   (4.27)
where x is the dimensionless momentum, scaled with k_B T_0/c. Noting that ⟨v²⟩ = 3P/(nm), we combine the two expressions eliminating T_0 and write

\[ Q_X = q_X\, g_X\, m_X^4. \]   (4.28)
In the thermal case q ≃ 2 × 10^{−3}, just given by the ratio of two simple integrals; in a highly degenerate case of fermions this coefficient can be much larger. Translating this into astronomical units for a thermal neutrino-like relic particle gives

\[ Q_T \simeq 5 \times 10^{-4}\, \frac{M_\odot}{{\rm pc}^3} \left(\frac{\rm km}{\rm s}\right)^{-3} \left(\frac{m_X}{\rm keV}\right)^4. \]   (4.29)
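The coefficient q and the normalization of Eq. (4.29) follow from the integrals in Eqs. (4.26)-(4.27); the following sketch checks both numbers for a relativistically decoupled fermion with g = 2 and μ = 0 (these particular choices, and the factor ℏ³ restored for dimensional bookkeeping, are assumptions of the check):

    import math

    # Moments of the Fermi-Dirac occupation f(x) = 1/(e^x + 1), written with the
    # standard closed forms in terms of the Riemann zeta function.
    I2 = 1.5 * 1.2020569    # integral of x^2 f(x) dx = (3/4) Gamma(3) zeta(3)
    I4 = 22.5 * 1.0369278   # integral of x^4 f(x) dx = (15/16) Gamma(5) zeta(5)

    # Q = rho/<v^2>^(3/2) built from Eqs. (4.26)-(4.27) is Q = q g m^4/hbar^3, with
    q = I2**2.5 / (2.0 * math.pi**2 * I4**1.5)
    print("q ~ %.1e" % q)                           # ~2e-3, as quoted above

    # Convert q g m^4/hbar^3 to the units of Eq. (4.29) for m = 1 keV, g = 2 (cgs below)
    hbar, c = 1.055e-27, 2.998e10
    m_keV = 1.602e-9 / c**2                         # 1 keV/c^2 in grams
    Msun, pc, kms = 1.989e33, 3.086e18, 1.0e5
    unit = Msun / pc**3 / kms**3                    # Msun pc^-3 (km/s)^-3 in cgs
    Q_T = q * 2.0 * m_keV**4 / hbar**3 / unit
    print("Q_T(1 keV) ~ %.1e Msun pc^-3 (km/s)^-3" % Q_T)   # ~5e-4, Eq. (4.29)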
The application is that we determine the density of dark matter in a core of a
galaxy, and its velocity dispersion, calculate Q using the same units as above and
compare. For instance, 0.1 M_⊙ pc^{−2} over a core radius of 100 pc with a velocity
dispersion of 15 km/s (Gilmore et al., 2007) yields a value for Q lower than the
reference value above, suggesting a dark matter particle mass of slightly less than
1 keV, based on our simple argument.
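As a worked illustration of this step (a sketch; the core dark matter density of 0.1 M_⊙ pc^{−3} adopted below is an assumed representative value for a dwarf spheroidal core, not a number taken from the text):

    # Invert Eq. (4.29): from a measured core phase density Q the implied
    # thermal-relic mass is m_X ~ (Q/5e-4)^(1/4) keV.
    rho = 0.1                     # assumed core density, Msun pc^-3 (illustrative)
    sigma = 15.0                  # velocity dispersion, km/s, as quoted above

    Q = rho / sigma**3            # Msun pc^-3 (km/s)^-3
    m_keV = (Q / 5.0e-4) ** 0.25
    print("Q   ~ %.1e Msun pc^-3 (km/s)^-3" % Q)    # ~3e-5
    print("m_X ~ %.1f keV" % m_keV)                 # ~0.5 keV, i.e. below 1 keV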
Going any further requires a specific model for this particle, and one model typi-
cally explored is that of a right-handed sterile neutrino, which can decay into a left-
handed standard model neutrino and photons; these photons give a background, and
for each galaxy a specific emission line at a photon energy at half the mass of the par-
ticle. Such models can be compared with data and then suggest that such a particle
is actually sub-thermal (Kusenko, 2005, 2006, 2009; Biermann and Kusenko, 2006;
Loewenstein and Kusenko, 2010), and so our equations need to be modified; how-
ever, the principal route remains, and for each specific model this can be worked out
(see Boyanovsky et al., 2008a, 2008b; de Vega and Sanchez, 2010, 2011; de Vega
et al., 2011). This suggests in the end a somewhat higher mass, a few keV. There are
some speculations now on how such particles could naturally arise (Kusenko et al.,
2010; and Bezrukov et al., 2010).
This kind of reasoning runs directly counter to other arguments, also based on
simplicity, that derive from particle physics (for an observational argument derived
from simulations, see, e.g., Font et al., 2011). Along this line of reasoning it is noted
that in supersymmetry there ought to be a lightest particle, which cannot decay, and
which would be a natural candidate to explain dark matter. Such a particle is very
massive, of order 100 GeV or even more. The particle physics detector experiments based on this hypothesis have not yet yielded a definitive positive detection, but several have reported hints of a signal (e.g. Bernabei et al., 2011; Frandsen et al., 2011; Savage et al., 2011) with the expected behavior, such as a modulation with a yearly phase. This would be plausible, because in one part of
the year the Earth moves into the dark matter particle population, and so increases
the flux slightly, and in the opposite phase of the year it moves away from the pop-
ulation, and so the flux is expected to decrease. There is some evidence that the
detections do have this pattern. But no consistent signal across all experiments has
emerged yet. Obviously, this interpretation runs directly counter to the phase-space
argument described above, which is based on detailed astronomical data.
Some others have proposed to modify the theory of gravity (Milgrom and Beken-
stein, 1987), but this has not yet yielded any prediction that could be independently
verified; on the other hand, this approach has also passed many tests of consistency
(e.g. Sanders, 2008; Milgrom, 2009).
There is as yet no direct and fully convincing proof of any of these hypotheses, or of any other, as to what dark matter really is made of. A real detection of the dark
matter particles would very much help to clinch the case.
Already in 1912 it was noted by V. Hess (Hess, 1912; Kolhörster, 1913) that enclosed gas containers with two charged electrodes inside lost their charge rapidly with time, suggesting an invisible ionizing source outside. Going up in a balloon demonstrated that this effect increased with height above ground, and so the
term “cosmic rays” was coined. Today we know that these cosmic rays are charged
particles, and contain all the normal chemical elements, but with many isotopic vari-
ations and enhancements, as well as a component of electrons and positrons. Since
1934 (Baade and Zwicky) it has been thought that supernova explosions accelerate
particles to cosmic-ray energies. The basic concept is acceleration in flow disconti-
nuities, originally proposed by Fermi (1949, 1954), and then applied to shock waves
by Axford et al. (1977), Krymskii (1977), Bell (1978a, 1978b), Blandford and Os-
triker (1978), with a thorough review by Drury (1983). All the heavier elements in
cosmic rays are strongly increased in abundance, and it is thought that this arises because massive stars expose their inner layers through their powerful winds; the subsequent explosion then rips through these layers of dominant helium, or even carbon and oxygen, with significant variations in isotopes (work by Prantzos, 1984, 1991, and later papers), and so provides the particle injection for a shock accelerating cosmic rays. Textbooks are, e.g., Berezinsky et al. (1990), Gaisser (1991), Schlick-
eiser (2002), Aharonian (2004), Mészáros (2010), and Stanev (2010).
Cosmic-ray particles are charged and so they react strongly to magnetic fields;
this was already used very early to estimate the strength of the then unmeasured
magnetic fields in interstellar space (Biermann and Schlüter, 1951). The numbers
determined then still hold (Beck et al., 1996). It was also realized that cosmic-ray
particles can get reflected as they spiral along magnetic flux tubes that narrow as
they approach a magnetic pole structure. Such reflections were then used in almost
all cosmic-ray acceleration models from Fermi (1949, 1954) on.
Considering the explosion into the interstellar medium, most relevant for the
lower mass range of normal and most common supernova explosions, the shock
accelerates particles as described above, in a naive analogy to a tennis game. In a tennis game each racket moves towards the other player when it hits the ball, so that, taken together, the two rackets constitute a com-
pressing system. The two sides of a shock in an ionized gas with magnetic fields
(called “plasma”) also constitute a compressing system, since the helical motion of
energetic particles in a magnetic field connects the particles to the plasma; irregular-
ities in the magnetic field then change the direction of the particle’s motion, and so
the stronger the irregularities, the faster the particle forgets where it came from, and
so has a high probability to move back through the shock. As the speed at which the plasma flows into the shock is much faster than the speed at which it moves away from the shock downstream, the velocity divergence across the shock is negative, and so effectively we do have compression. Every time a particle goes through the shock it gets a kick, gains momentum and energy, and as long as it does not escape, it keeps gaining energy. Balancing the energy gain against the number of particles that escape at each cycle gives, in a simple limit for normal strong shock waves, a particle spectrum of E^{−2} (see Drury, 1983). The energy contained in this population is the integral of the spectrum times the energy; this integral diverges, and so the acceleration of the population naturally hits a limit, when the total energy in particles becomes comparable to the other energy densities in the shock system, like the magnetic field, or the upstream flow energy (called the “ram pressure”).
Following Drury (1983) the argument runs as follows: We consider a population
of particles of mass m, velocity v, and momentum p, injected near the shock, that
get scattered back and forth. Occasionally some particle gets lost from the system.
This means that we have to follow the momentum gains and the probability that
particles get lost.
The relative momentum gain per shock crossing is μ(U_1 − U_2)/v, with μ = cos θ. Averaging over the probability of crossing a given surface area of the shock gives a weighting factor proportional to μ; normalising the integral of μ from 0 to 1 gives another factor of 2, so that the weighting factor is 2μ. The average momentum gain for one pair of crossings, back and forth, is then (4/3) p (U_1 − U_2)/v.
In the downstream flow, particles are carried away from the shock at a net rate n U_2 per unit area. Just downstream of the shock, however, the particles have an isotropic distribution in momentum space, so the flux of particles crossing the shock plane is nv/4, with a factor 1/2 because only half of the particles move downstream and a factor 1/2 from the average of cos θ, where θ is the angle between the downstream direction and the momentum vector of a particle with velocity v. The fraction of particles lost
downstream per cycle is therefore 4U_2/v, which for relativistic particles in a standard strong shock equals U_1/c.
After n cycles the momentum is p_n ∼ p_0 Π_i (1 + (4/3)(U_1 − U_2)/v_i), or in logarithmic form ln(p_n/p_0) ∼ (4/3)(U_1 − U_2) Σ_i 1/v_i. Here the index i labels the successive cycles. The probability of surviving to cycle n is P_n ∼ Π_i (1 − 4U_2/v_i), or again in logarithmic form ln P_n ∼ −4U_2 Σ_i 1/v_i.
Now we have to remember that losses might intervene and so the momentum gain may be reduced, say by a factor of α_acc, and the probability to get lost might also be different, by a factor of β_acc.
Combining now the two expressions, and inserting the correction factors, we get for the integral spectrum

\[ P_n = \left(\frac{p_n}{p_0}\right)^{-3U_2 \beta_{\rm acc}/(\alpha_{\rm acc}(U_1 - U_2))}. \]   (4.30)

Taking the derivative gives the differential spectral index

\[ -\frac{3\beta_{\rm acc} U_2 + \alpha_{\rm acc}(U_1 - U_2)}{\alpha_{\rm acc}(U_1 - U_2)}. \]   (4.31)
For a strong shock in a normal gas of adiabatic index 5/3 we have U_1/U_2 = 4, and so for α_acc = β_acc = 1 we obtain a spectral index of −2, corresponding in synchrotron emission by relativistic electrons to a flux density spectrum of ν^{−1/2}. As an alternative one can also consider a strong non-relativistic shock in a relativistic gas of adiabatic index 4/3 with U_1/U_2 = 7; the spectrum then has index −3/2, and the spectrum of synchrotron emission is ν^{−1/4}. Now using a smaller α_acc < 1, so less energy gain per cycle, and also a higher probability to get lost, so β_acc > 1, yields a steeper spectrum; for example, for α_acc = 1/2 and β_acc = 2, and U_1 = 4U_2 again, we obtain a very steep spectrum of index −5, corresponding to a synchrotron spectrum of ν^{−2}.
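The index of Eq. (4.31) and the corresponding synchrotron slope are easily tabulated; a short sketch reproducing the three cases just discussed:

    # Differential spectral index from Eq. (4.31) and the synchrotron flux-density
    # slope F_nu ~ nu^(-(s-1)/2), where s is the (positive) particle spectral index.
    def spectral_index(r, alpha_acc=1.0, beta_acc=1.0):
        """r = U1/U2 is the shock compression ratio."""
        return -(3.0 * beta_acc + alpha_acc * (r - 1.0)) / (alpha_acc * (r - 1.0))

    def synchrotron_slope(index):
        s = -index
        return -(s - 1.0) / 2.0

    cases = [(4.0, 1.0, 1.0),     # strong shock, adiabatic index 5/3
             (7.0, 1.0, 1.0),     # strong shock in a relativistic gas, index 4/3
             (4.0, 0.5, 2.0)]     # reduced gain per cycle, enhanced escape
    for r, a, b in cases:
        idx = spectral_index(r, a, b)
        print("U1/U2=%g alpha=%g beta=%g -> index %.1f, synchrotron nu^%.2f"
              % (r, a, b, idx, synchrotron_slope(idx)))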
The configuration normally considered is for the shock normal to be parallel to the magnetic field, which for an isotropic random distribution of field angles is a very unlikely sit-
uation. Jokipii (1987) has introduced a concept to deal with highly oblique shocks
by introducing a specific limiting scattering coefficient; this does not change the
derivation of the spectrum, as sketched here, but it does modify the rate of accel-
eration, and the maximal particle energy attainable. Another aspect is that radio
polarization observations (e.g. Dickel et al., 1991), and now also theory, show that
the shock region is highly unstable (Bell, 2004; Caprioli et al., 2010; Bykov et al.,
2011), and therefore quite turbulent, and perhaps extremely unsteady. This is very
difficult to deal with, and has been done only in certain limiting situations. However,
it has been used as a starting point to derive scaling relations for the unsteadiness
and turbulent transport (Biermann, 1993), following Prandtl (1925).
Since charged particles also constitute an electric current, it appears possible that the magnetic fields get enhanced (Weibel, 1959; Bell and Lucek, 2001), and not just the particle population. So both particle population and magnetic field may
reach their limit of some fraction of order 0.1 of the kinetic energy of the flow going
into the shock. Considering the spherical shock around an exploding star, the shock
system also includes adiabatic losses, as the system expands, and then the spec-
trum is slightly steeper. When these particles encounter normal interstellar matter
outside the expanding shock shell they interact, and produce other emissions; these
emissions verify the spectrum. The material of the gas the expanding shock moves
through then determines the abundances of the energetic particles being picked up.
The mix of massive stars that explode into the interstellar medium (Lagage and Ce-
sarsky, 1983), and the very massive stars that explode into their own wind (Völk
and Biermann, 1988), with the chemical composition enhanced in heavy elements
allows us to understand the abundances of cosmic rays (Biermann, 1993, and later
papers; Binns et al., 2008). Gamma-ray Bursts exploding into Wolf–Rayet stars of-
fer an almost isomorphic alternative (Wang et al., 2008). Since these winds are very
massive they in turn form a thick shell in the interstellar medium around them, where
most of the former mass of the star is waiting for the later supernova shock to hit.
When this shock, loaded with energetic particles, does encounter the massive shell
a lot of nuclei get broken up, or are “spallated” (Silberberg and Tsao, 1990), and
this may explain the many secondary nuclei in cosmic rays. The wandering of these
very fast particles through the shocked shell is dominated by magnetic irregulari-
ties given by the cosmic rays themselves. This allows a ready understanding of the
spectra of these secondary nuclei.
The model for cosmic rays using a more detailed view of the stars that explode,
by invoking the magnetic stellar wind, was developed quantitatively by Biermann
et al. (Biermann, 1993; Biermann and Cassinelli, 1993; Biermann and Strom, 1993;
Stanev et al., 1993), then developed further in Biermann et al. (1995, 2001), Wiebel-
Sooth et al. (1998), and in Meli and Biermann (2006). The new tests began with
accounting for the cosmic-ray positrons and electrons (Biermann et al., 2009), then
for the high-frequency radio emission excess near the Galactic Center (called the
WMAP excess) as well as the 511 keV annihilation emission also near the Galactic
Center (Biermann et al., 2010a). Further tests involved the model that uses the galactic cosmic rays as seeds for ultra-high-energy cosmic rays (Gopal-Krishna et al., 2010), and the spectral curvature predicted in 1993, confirmed by the CREAM mission (Biermann et al., 2010b). Finally, in 2012 we had the spectral test of the quantitative 1993 model in Biermann and de Souza (2012), now accounting for the entire cosmic-ray spectrum in a first simple test. This especially used the cosmic-ray data only available in 2011 from the KASCADE-Grande experiment, in the energy range 10^15 eV to 10^18 eV, together with the KASCADE array on the lower energy side and the Auger array on the higher energy side, covering 10^14 eV through 10^20.5 eV. The key for
this development was:
(a) Massive stars explode into their winds.
(b) Cosmic rays interact with the matter in the shell snow-plowed together by that
stellar wind.
(c) These galactic cosmic rays can be seeds for ultra-high-energy cosmic-ray ac-
celeration by a relativistic jet, preserving basically the spectral shape and abun-
dances. A relativistic shock as pushed through the region of enhanced star for-
mation (Achterberg et al., 2001) can do this, as noted below.
There is also evidence about the origin of cosmic rays from observations of stellar
absorption lines of elements which are produced mainly in spallation of cosmic-ray
(2009), who have proposed some solutions such as a pair-production dip under the
assumption that the main particles are protons.
Since 1963 (Linsley, 1963) particles of much higher energy have been detected, up to 3 × 10^20 eV today, clearly not easily explained as originating from a source in our Galaxy. Already before Linsley’s publication Ginzburg and Syrovatskii (also 1963, 1964) suggested the famous radio galaxies Centaurus A (Cen A = NGC5128),
Virgo A (Vir A = M87 = NGC4486), and Fornax A (For A = NGC1316) as the best
candidates outside our own Galaxy for producing cosmic-ray particles in abundance.
Further development of radio galaxies as sources of ultra-high-energy cosmic rays
using observational cues was, e.g., in Lovelace (1976), Biermann and Strittmatter
(1987), and Rachen et al. (1993). These radio galaxies could be excluded as the explanation of the normal lower energy cosmic rays (readily understood via supernovae, as suggested first by Baade and Zwicky, 1934), but could easily explain the extra-
galactic cosmic rays observed up to now. The most recent reviews are Kotera and
Olinto (2011), and Letessier-Selvon and Stanev (2011); classical textbooks include
Berezinsky et al. (1990) and Gaisser (1991), and the most recent textbook is by
Stanev (2010).
The sites where ultra-high-energy cosmic-ray particles are accelerated are also
sites where many interactions and secondary particle production can occur. Since
even in the center-of-mass frame the energies are higher than at the Large Hadron
Collider (LHC) at CERN in Geneva, at least in principle we could learn more
about the nature of particles and their physics from these sources. However, the
extremely low flux arriving at Earth makes this very difficult. One of the first attempts
to discuss various interactions that are likely to happen was made by Gould and
Burbidge (1967). Others are described in the various textbooks and reviews men-
tioned.
The Auger experiment (Auger-Coll., 2011a, 2011b, 2011c, 2011d, 2011e) gives
a lot of information on particle character, energy and direction. There is a strong hint
suggesting that these particles at energies beyond 3 × 10^18 eV are usually heavy nu-
clei, and the composition seems to vary in this energy range. Their arrival directions
seem to cluster around the radio galaxy Cen A. The data do not unambiguously show
whether all particles come from Cen A or only a small fraction of all; the data allow
both, depending on various models for the spectrum and the magnetic scattering.
After the discovery of the cosmic microwave background by Penzias and Wil-
son (1965) it was quickly realized that energetic particles interact with this cosmic
background, and so suffer considerable losses. Assume as a reference example that sources are homogeneously distributed in space, that source spectra are E^{−2} to very high energy, and that particles propagate in straight lines outside galaxies. Then the observed spectrum turns down significantly at an energy now commonly referred to as the GZK-energy, after the authors who predicted it (Greisen, 1966; Zatsepin and Kuz’min, 1966). For protons this energy is 6 × 10^19 eV; for nuclei this can be
at lower energy (Allard et al., 2008). It follows that far below this energy the cos-
mos is effectively transparent and we should be able to “see” sources at very large
distances. So far there is no evidence that this “sub-GZK population” of sources
has been detected. All that we observe is compatible with some source in the ra-
dio galaxy Cen A that also produces heavy nuclei. There are a number of ways to
explain these findings.
One option is that a very massive star explodes as a Gamma-ray Burst, and the
relativistic shock plowing through the enriched wind of the former star picks up
heavy nuclei, pushes them up in energy and then sends these particles out into space
(Vietri, 1995; Waxman, 1995; Wang et al., 2008). Another option is that the black
hole observed near the center of Cen A pushes through the region of the massive
star formation inside the radio galaxy, triggered by the merger between an ellipti-
cal and a spiral galaxy some 50 million years ago (Gopal-Krishna et al., 2010). In
this picture the relativistic shock of the fresh jet picks up the existing population
of galactic cosmic rays, and pushes it up in energy (Achterberg et al., 2001). The
second possibility is specific enough to allow for a test with the observed spectrum,
and shows full consistency (Biermann and de Souza, 2012). However, the spectral
fit does suggest that all these particles basically go straight between Cen A and us,
and then get scattered into near isotropy in the magnetic wind of our Galaxy. This
may be possible, but sets a serious constraint on our Galactic wind and its magnetic
fields.
One can also use the concepts of relativistic jets to explore which sources out
there could contribute even in principle. Using a complete sample of radio galaxies
just confirms the 1963 arguments of Ginzburg and Syrovatskii, with Cen A pre-
dicted to be the strongest contributor (Caramete et al., 2011), just as suggested by
the Auger data. These new calculations suggest that the radio galaxies Cen A, Vir
A, and For A contribute as 10:1:0.6; so at the present level of statistics Cen A is
the only source discernible. Using a complete sample of starburst galaxies to allow
for a contribution from Gamma-ray Bursts suggests many possible sources, with
no obvious preference for some single source (Caramete et al., 2011). The radio
galaxy Cen A is a prime candidate (see, also, Anchordoqui et al., 2011; Fargion and
D’Armiento, 2011).
The collisions of the charged particles inside Cen A occur at much higher en-
ergy than at the LHC at CERN (Geneva), even when considering the center-of-mass
frame. But of course at a distance of order 3–4 Mpc (∼10^25 cm) the opportunities for high-flux observations are nil. Any particle physics experiment would have to be done very differently from an experiment on the ground. The fluxes at the source are very high, but we have no control over what exactly happens, and we will have to infer the “experimental set-up” in detail before learning anything really new of any significance.
The final conclusion and confirmation is still pending, but it appears possible that
our highest energy interaction experiment accessible to observations of charged as
well as neutral particles may be the radio galaxy Cen A. If that were the case, and
with a few years of statistics, we might be able to verify this possibility. Then Cen A
could be thought of as an extension of the LHC at CERN, with higher particle en-
ergies, higher luminosities, but lower fluxes per given area, and beyond any human
control. The combination of insights gathered at the LHC with measurements of
a distant particle interaction laboratory such as Cen A might help us push the
frontier.
Over the last 100 years the interaction between physics and astronomy has first
spawned astrophysics, and now has given rise to what is commonly called astro-
particle physics, or sometimes also particle astrophysics. We learn from the cosmos,
and in the future we can expect that this interaction will grow yet stronger: not only might radio galaxies turn into particle interaction laboratories at energies far beyond what we have achieved on Earth, but at the very highest energies only the very early universe, in combination with laboratories on Earth, might teach us more about the fundamental properties of matter and nature.
Acknowledgements The author wishes to thank his many collaborators, friends and partners
for intense discussions over many years, among them F. Aharonian, J.K. Becker, V. Berezin-
sky, G. Bisnovatyi-Kogan, H. Blümer, S. Britzen, L. Caramete, S. Casanova, J. Cassinelli,
L. Clavelli, A. Donea, R. Engel, T. Enßlin, J. Everett, H. Falcke, T.K. Gaisser, L.A. Gergely,
G. Gilmore, Gopal-Krishna, F. Halzen, B. Harms, A. Haungs, N. Ikhsanov, G.P. Isar, S. Jiraskova,
R. Jokipii, K.-H. Kampert, H. Kang, A. Kogut, P.P. Kronberg, A. Kusenko, N. Langer, R. Lovelace,
K. Mannheim, I. Mariş, S. Markoff, G. Medina-Tanco, A. Meli, S. Moiseenko, F. Munyaneza,
B. Nath, A. Obermeier, K. Otmianowska-Mazur, R. Protheroe, G. Pugliese, J. Rachen, W. Rhode,
M. Romanova, D. Ryu, N. Sanchez, M.M. Shapiro, V. de Souza, E.-S. Seo, R. Sina, Ch. Spiering,
T. Stanev, J. Stasielak, P.A. Strittmatter, R.G. Strom, H. de Vega, H.-J. Völk, Y.-P. Wang, T. Weiler,
P. Wiita, A. Witzel, Ch. Zier, and many others.
References
Abadie, J., et al.: Phys. Rev. D 81, id. 102001 (2010)
Abel, T., Bryan, G., Teyssier, R.: In: Chabrier, G. (ed.) Structure Formation in Astrophysics. Cam-
bridge Contemporary Astrophysics, p. 159. Cambridge University Press, Cambridge (2009)
Abramowicz, M.A., et al.: Astrophys. J. 705, 659–669 (2009). arXiv:0810.3140
Achterberg, A., et al.: Mon. Not. R. Astron. Soc. 328, 393 (2001)
Aharonian, F.A.: Very High Energy Cosmic Gamma Radiation: A Crucial Window on the Extreme
Universe. World Scientific, Singapore (2004)
Alcock, C., et al.: Astrophys. J. Lett. 499, L9 (1998)
Allard, D., et al.: J. Cosmol. Astropart. Phys. 10, 33 (2008)
Alpher, R.A., Bethe, H., Gamow, G.: Phys. Rev. 73, 803–804 (1948)
Anchordoqui, L.A., et al.: arXiv:1103.0536 [astro-ph] (2011)
Appenzeller, I., Fricke, K.: Astron. Astrophys. 18, 10 (1972a)
Appenzeller, I., Fricke, K.: Astron. Astrophys. 21, 285 (1972b)
Ardeljan, N.V., et al.: Astron. Astrophys. Trans. 10, 341–355 (1996)
Auger-Coll.: eprint arXiv:1107.4809 (2011a)
Auger-Coll.: eprint arXiv:1107.4804 (2011b)
Auger-Coll.: eprint arXiv:1107.4805 (2011c)
Auger-Coll.: eprint arXiv:1107.4806 (2011d)
Auger-Coll.: eprint arXiv:1107.4807 (2011e)
Axford, W.I., Leer, E., Skadron, G.: In: Proc. 15th Intern. C. R. Conf., Plovdiv, Bulgaria, vol. 11,
pp. 132–137 (1977)
Baade, W., Zwicky, F.: Proc. Natl. Acad. Sci. 20, 259 (1934)
Bardeen, J.M.: Nature 226, 64–65 (1970)
Bardeen, J.M., Carter, B., Hawking, S.W.: Commun. Math. Phys. 31, 161–170 (1973)
Ginzburg, V.L., Syrovatskii, S.I.: Astron. ž. 40, 466 (1963). Transl. in Sov. Astron., A.J. 7, 357
(1963)
Ginzburg, V.L., Syrovatskii, S.I.: The Origin of Cosmic Rays. Pergamon Press, Oxford (1964).
Orig. Russ. edn. (1963)
Goldhaber, G., Perlmutter, S.: Phys. Rep. 307, 325–331 (1998)
Goldstein, M.L., Roberts, D.A., Matthaeus, W.H.: Annu. Rev. Astron. Astrophys. 33, 283 (1995)
Gopal-Krishna, Wiita, P.J.: Astrophys. J. Lett. 560, L115–L118 (2001)
Gopal-Krishna, et al.: Astrophys. J. Lett. 720, L155–L158 (2010)
Goto, M., et al.: Astrophys. J. 688, 306–319 (2008)
Goto, M., et al.: Publ. Astron. Soc. Jpn. Lett. 63, L13–L17 (2011)
Gould, R.J., Burbidge, G.R.: Handb. der Physik, Band 46/2, pp. 265–309. Springer, Heidelberg
(1967)
Greene, J.E., Ho, L.C.: Astrophys. J. 670, 92–104 (2007a)
Greene, J.E., Ho, L.C.: In: Proc. the Central Engine of Active Galactic Nuclei. ADSP Conf.,
vol. 373, p. 33 (2007b)
Greene, J.E., Barth, A.J., Ho, L.C.: New Astron. Rev. 50, 739–742 (2006)
Greene, J.E., Ho, L.C., Barth, A.J.: Astrophys. J. 688, 159–179 (2008)
Gregorini, L., et al.: Astron. J. 89, 323–331 (1984)
Greisen, K.: Phys. Rev. Lett. 16, 748–750 (1966)
Guetta, D., et al.: Astron. Astrophys. 421, 877–886 (2004). arXiv:astro-ph/0402164
H.E.S.S. Coll.: Astron. Astrophys. 531, id. A81 (2011)
Halzen, F.: In: Proc. Exotic Nuclei and Nuclear/Particle Astrophysics (III): From Nuclei to Stars.
AIP Conf., vol. 1304, pp. 268–282 (2010)
Hanasz, M., et al.: Astrophys. J. Lett. 605, L33–L36 (2004)
Hanasz, M., et al.: Astron. Astrophys. 498, 335–346 (2009)
Häring, N., Rix, H.-W.: Astrophys. J. Lett. 604, L89–L92 (2004)
Hawking, S.W.: Phys. Rev. Lett. 26, 1344–1346 (1971)
Hawking, S.W.: In: Hegyi, D.J. (ed.) Proc. Sixth Texas Symp. on Rel. Astrop. Ann. N.Y. Acad.
Sci., vol. 224, p. 268 (1973)
Hawking, S.W.: Commun. Math. Phys. 43, 199–220 (1975)
Hawking, S.W.: Phys. Rev. D 13, 191–197 (1976)
Hess, V.F.: Physik. Z. 13, 1084 (1912)
Hewish, A., et al.: Nature 217, 709–713 (1968)
Hild, S., et al.: Class. Quantum Gravity 28, id. 094013 (2011)
Hillas, A.M.: eprint astro-ph/0607109 (2006)
Hillebrandt, W., Niemeyer, J.C.: Annu. Rev. Astron. Astrophys. 38, 191–230 (2000)
Hogan, C.J., Dalcanton, J.J.: Phys. Rev. D 62, id. 063511 (2000)
IceCube Coll.: eprint arXiv:1109.1017 (2011a)
IceCube Coll.: Astrophys. J. 740, id. 16 (2011b)
Ikhsanov, N.R.: Mon. Not. R. Astron. Soc. 375, 698–704 (2007)
Indriolo, N., et al.: Astrophys. J. 724, 1357–1365 (2010)
Jokipii, J.R.: Astrophys. J. 313, 842–846 (1987)
Kardashev, N.S.: Astron. ž. 41, 807 (1964)
Kellermann, K.I., Pauliny-Toth, I.I.K.: Astrophys. J. Lett. 155, L71–L78 (1969)
Keppens, R., et al.: Astron. Astrophys. 486, 663–678 (2008)
Kim, J.-H., et al.: Astrophys. J. 738, id. 54 (2011)
Kogut, A., et al.: Astrophys. J. 734, id. 4 (2011)
Kolhörster, W.: Physik. Z. 14, 1153 (1913)
Komatsu, E., et al.: Astrophys. J. Suppl. 192, id. 18 (2011)
Kormendy, J., Richstone, D.: Annu. Rev. Astron. Astrophys. 33, 581 (1995)
Kotera, K., Olinto, A.V.: Annu. Rev. Astron. Astrophys. 49, 119–153 (2011)
Kramer, M.: In: Relativity in Fundamental Astronomy: Dynamics, Reference Frames, and Data
Analysis. Proc. IAU Symp., vol. 261, pp. 366–376 (2010)
Kronberg, P.P.: In: Wielebinski, R., Beck, R. (eds.) Cosmic Magnetic Fields. Lect. Notes in Phys.,
vol. 664, p. 9 (2005)
Kronberg, P.P., Lesch, H., Hopp, U.: Astrophys. J. 511, 56–64 (1999)
Kronberg, P.P., et al.: Astrophys. J. 676, 70–79 (2008)
Krymskii, G.F.: Dokl. Akad. Nauk 234, 1306–1308 (1977). Transl. in Sov. Phys. Dokl. 22, 327–
328 (1977)
Kulesza-Zydzik, B., et al.: Astron. Astrophys. 522, id. A61 (2010)
Kulpa-Dybel, K., et al.: Astrophys. J. Lett. 733, id. L18 (2011)
Kulsrud, R.M., Zweibel, E.G.: Rep. Prog. Phys. 71, id. 046901 (2008)
Kusenko, A.: New Astron. Rev. 49, 115–118 (2005)
Kusenko, A.: Phys. Rev. Lett. 97, id. 241301 (2006)
Kusenko, A.: Phys. Rep. 481, 1–28 (2009)
Kusenko, A., Takahashi, F., Yanagida, T.T.: Phys. Lett. B 693, 144–148 (2010)
Lagache, G., Dole, H., Puget, J.-L.: Mon. Not. R. Astron. Soc. 338, 555–571 (2003)
Lagache, G., Puget, J.-L., Dole, H.: Annu. Rev. Astron. Astrophys. 43, 727–768 (2005)
Lagage, P.O., Cesarsky, C.J.: Astron. Astrophys. 125, 249–257 (1983)
Landau, L.D.: Phys. Z. Sov. 1, 285 (1932). Transl. in ter Haar, D. (ed.) (1965). “Collected papers
of L.D. Landau”, p. 60. Gordon & Breach, New York. Quoted on p. 505 and 716 of Binney, J.,
Tremaine, S., Galactic Dynamics. Princeton University Press, Princeton (1987)
Langer, N., van Marle, A.-J., Yoon, S.-C.: New Astron. Rev. 54, 206–210 (2010)
Letessier-Selvon, A., Stanev, T.: Rev. Mod. Phys. 83, 907–942 (2011)
Linsley, J.: Phys. Rev. Lett. 10, 146–148 (1963)
Liu, et al.: Mon. Not. R. Astron. Soc. 417, 2916–2926 (2011)
Loewenstein, M., Kusenko, A.: Astrophys. J. 714, 652–662 (2010)
Lovelace, R.V.E.: Nature 262, 649 (1976)
Lovelace, R.V.E., et al.: Astrophys. J. 572, 445 (2002)
Lüst, R.: Z. Naturforsch. 7, 87 (1952)
Lynden-Bell, D.: Mon. Not. R. Astron. Soc. 136, 101 (1967)
Lynden-Bell, D., Pringle, J.E.: Mon. Not. R. Astron. Soc. 168, 603–637 (1974)
Maoz, E.: Astrophys. J. Lett. 494, L181–L184 (1998)
Maraschi, L., et al.: Mon. Not. R. Astron. Soc. 391, 1981–1993 (2008). arXiv:0810.0145
Marcaide, J.M., et al.: In: Fanti, R., Kellermann, K., Setti, G. (eds.) VLBI and Compact Radio
Sources. Proc. IAU Symp., vol. 110, p. 361 (1984)
Mather, J.C.: Rev. Mod. Phys. 79, 1331–1348 (2007)
Matthaeus, W.H., Zhou, Y.: Phys. Fluids B 1, 1929–1931 (1989)
Mazzali, P.A., et al.: Science 315, 825 (2007)
Mazzali, P.A., et al.: Science 321, 1185 (2008)
Meli, A., Biermann, P.L.: Astron. Astrophys. 454, 687–694 (2006). astro-ph/0602308
Mestel, L., Roxburgh, I.W.: Astrophys. J. 136, 615 (1962)
Mészáros, P.: The High Energy Universe: Ultra-High Energy Events in Astrophysics and Cosmol-
ogy. Cambridge University Press, Cambridge (2010)
Mészáros, P., Rees, M.J.: Astrophys. J. Lett. 715, 967–971 (2010)
Mészáros, P., Rees, M.J.: Astrophys. J. Lett. 733, id. L40 (2011)
Milgrom, M.: Mon. Not. R. Astron. Soc. 398, 1023–1026 (2009)
Milgrom, M., Bekenstein, J.: In: Proc. Dark Matter in the Universe. IAU Symp., pp. 319–330
(1987)
Mirabel, I.F., Rodriguez, L.F.: Nature 392, 673–676 (1998)
Mirabel, I.F., Rodriguez, L.F.: Annu. Rev. Astron. Astrophys. 37, 409–443 (1999)
Mirabel, I.F., Rodrigues, I.: Science 300, 1119–1121 (2003)
Mirabel, I.F., et al.: Nature 413, 139–141 (2001)
Mirabel, I.F., et al.: Astron. Astrophys. 528, id. A149 (2011)
Moiseenko, S.G., Bisnovatyi-Kogan, G.S., Ardeljan, N.V.: Mon. Not. R. Astron. Soc. 370, 501–
512 (2006)
Nath, B.N., Biermann, P.L.: Mon. Not. R. Astron. Soc. 267, 447 (1994)
Nath, B.N., Silk, J.: Mon. Not. R. Astron. Soc. 396, L90–L94 (2009)
Novikov, I.D., Thorne, K.S.: In: Black Holes (Les astres occlus), pp. 343–450 (1973)
Oort, J.H.: B.A.N. 6, 249 (1932)
Oppermann, N., et al.: eprint arXiv:1111.6186 (2011)
Ostriker, J.P., Bode, P., Babul, A.: Astrophys. J. 634, 964–976 (2005)
Otmianowska-Mazur, K., et al.: In: Proc. Magnetic Fields in the Universe II: From Laboratory
and Stars to the Primordial Universe. Rev. Mex. de Astron. y Astrof. (Ser. de Conf.), vol. 36,
pp. CD266–CD271 (2009)
Parikh, M.K., Wilczek, F.: Phys. Rev. Lett. 85, 5042–5045 (2000)
Parker, E.N.: Astrophys. J. 145, 811 (1966)
Parker, E.N.: Astrophys. J. 157, 1129 (1969)
Parker, E.N.: Annu. Rev. Astron. Astrophys. 8, 1 (1970a)
Parker, E.N.: Astrophys. J. 160, 383 (1970b)
Parker, E.N.: Astrophys. J. 162, 665 (1970c)
Penzias, A.A., Wilson, R.W.: Astrophys. J. 142, 419–421 (1965)
Piran, T.: Rev. Mod. Phys. 76, 1143–1210 (2004)
Planck Coll.: eprint arXiv:1101.2028 (2011)
Plotkin, R.M., et al.: Mon. Not. R. Astron. Soc. 419, 267–286 (2012)
Portegies Zwart, S.F., et al.: Nature 428, 724–726 (2004)
Prandtl, L.: Z. Angew. Math. Mech. 5, 136 (1925)
Prantzos, N.: Adv. Space Res. 4, 109–114 (1984)
Prantzos, N.: In: van der Hucht, K.A., Hidayat, B. (eds.) Wolf-Rayet Stars and Interrelations with
Other Massive Stars in Galaxies. Proc. IAU Symp., vol. 143, p. 550 (1991)
Prendergast, K.H., Burbidge, G.R.: Astrophys. J. Lett. 151, L83–L88 (1968)
Qian, S.-B., et al.: Astrophys. J. Lett. 708, L66–L68 (2010)
Rachen, J.P., Stanev, T., Biermann, P.L.: Astron. Astrophys. 273, 377 (1993)
Ramaty, R., et al.: Astrophys. J. 534, 747 (2000)
Ramaty, R., Lingenfelter, R.E., Kozlovsky, B.: Space Sci. Rev. 99, 51 (2001)
Reeves, H.: Rev. Mod. Phys. 66, 193–216 (1994)
Rickett, B.J.: Annu. Rev. Astron. Astrophys. 15, 479–504 (1977)
Riess, A.G., et al.: Astron. J. 116, 1009–1038 (1998)
Ripamonti, E., Abel, T.: eprint arxiv:astro-ph/0507130. Lecture Notes for the Spring 2003 SIGRAV
Doctoral School The Joint Evolution of Black Holes and Galaxies (2005, to be published)
Rollinde, E., et al.: Mon. Not. R. Astron. Soc. 398, 1782–1792 (2009)
Rosenband, T., et al.: Science 319, 1808 (2008)
Ryu, D., Kang, H., Biermann, P.L.: Astron. Astrophys. 335, 19–25 (1998)
Ryu, D., et al.: Science 320, 909 (2008)
Sanders, R.H.: Astrophys. J. 162, 791 (1970)
Sanders, R.H.: Mon. Not. R. Astron. Soc. 386, 1588–1596 (2008)
Sanders, D.B., Mirabel, I.F.: Annu. Rev. Astron. Astrophys. 34, 749 (1996)
Savage, Ch., et al.: Phys. Rev. D 83, id. 055002 (2011)
Schlickeiser, R.: Cosmic Ray Astrophysics. Springer, Berlin (2002)
Schmidt, M.: Nature 197, 1040 (1963)
Schödel, R., Merritt, D., Eckart, A.: Astron. Astrophys. 502, 91–111 (2009)
Schwarzschild, K.: Sitz.-berichte der Königl. Preuss. Akad. d. Wiss. zu Berlin, Phys.-Math. Kl.,
189–196 (1916)
Shakura, N.I., Sunyaev, R.A.: Astron. Astrophys. 24, 337–355 (1973)
Shapiro, S.L., Teukolsky, S.A.: Black Holes, White Dwarfs, and Neutron Stars: The Physics of
Compact Objects. Wiley-Interscience, New York (1983), 663 pp.
Silberberg, R., Tsao, C.H.: Phys. Rep. 191, 351–408 (1990)
Silk, J., Rees, M.J.: Astron. Astrophys. Lett. 331, L1–L4 (1998)
Silk, J., Takahashi, T.: Astrophys. J. 229, 242–256 (1979)
Smoluchowski, M.: Z. Phys. 17, 557 (1916)
Smoot, G.F.: Rev. Mod. Phys. 79, 1349–1379 (2007)
Towards the end of the 1930s it was recognised from studies of the effect of the
geomagnetic field on cosmic rays that the energy spectrum of the primary particles,
not identified as being proton-dominated until 1941, extended to at least 10 GeV.
The discovery of extensive air showers in 1938, however, radically changed this
situation with the highest energy being pushed up by about 5 orders of magnitude,
probably the single largest advance in our knowledge of energy scales ever made. It is now known that the energy spectrum extends to beyond 10^20 eV, but it has taken
over 60 years to consolidate this picture. In this chapter we trace the history of the
discovery of extensive air showers, show how advances in experimental and theoret-
ical techniques have led to improved understanding of them, and describe how some
of the most recent work with contemporary instruments has provided important data
on the energy spectrum, the mass composition and the arrival direction distribution
of high-energy cosmic rays. These results are of astrophysical importance, but ad-
ditionally some aspects of the shower phenomenon promise to give new insights into hadronic physics at energies beyond those reached at the LHC.
In Chap. 2, the measurement of the properties of cosmic rays below about 10^14 eV per particle was discussed. The flux of particles falls so rapidly with energy (∝ E^{−γ} with γ ∼ 2.7) that around 10^14 eV it becomes impractical to make measurements of
high precision directly: the number of events falling on a detector of a size that can
be accommodated on a balloon or a space-craft is simply too small. However, at this
energy sufficiently many particles are produced in the atmosphere as secondaries to
the incoming primary cosmic rays for some to reach mountain altitudes and, as the
energy of the primary increases, even sea level. The transverse momentum acquired
by secondary particles at production and the scattering which the shower electrons,
in particular, undergo through interactions with the material of the atmosphere are
such that the secondaries are spread over significant areas at the observational level.
The phenomenon of the nearly simultaneous arrival of many particles over a large
area is called an Extensive Air Shower (EAS): at 10¹⁵ eV around 10⁶ particles cover approximately 10⁴ m² while at 10²⁰ eV some 10¹¹ particles are spread over about 10 km². It was quickly recognised that the phenomenon of the air shower offered
the possibility of answering four major questions.
1. What particle physics can be learned from understanding air shower evolution?
A detailed understanding of how an air shower develops is crucial to obtain-
ing an estimate of the primary energy and to learning anything about the mass
spectrum of the primary particles. It is worth recalling that when the shower phenomenon was first observed, in addition to the proton, neutron, electron and positron, only the muon was known, so that a realistic understanding of shower
development had to wait until the discovery of the charged pion and its decay
chain in 1947 and of the neutral pion in 1950. Indeed, much early thinking was
based on the hypothesis that showers were initiated by electrons and/or photons.
Once it was recognised that the initiating particle was almost always a proton or
a nucleus, the first steps in understanding the nuclear cascade focused on such
matters as whether a proton would lose all or only part of its energy in a nuclear
collision and how many pions were radiated in such a collision. A combination
of observations in air showers, made using Geiger counters and cloud chambers,
of data from studies in nuclear emulsions and of early accelerator information
was used to inform the debate. The issues of inelasticity (what fraction of the
energy is lost by an incoming nucleon to pion production) and the multiplicity
(the number of pions produced) are parameters which are still uncertain at most
of the energies of interest.
2. What can be inferred from the arrival direction distributions of the high-energy
particles?
From the earliest years of discovery of cosmic rays there have been searches
for directional anisotropies. Hess himself, from a balloon flight made during a
solar eclipse in April 1912, i.e. before his discovery flight in August of the same
year, deduced that the Sun was not a major source (Hess, 1912). There are a few
predictions of the level of anisotropy that might be expected. While there have
always been speculations as to the sources, the fact that the primary particles are
charged and therefore are deflected in the poorly known galactic and intergalactic
magnetic fields makes it difficult to identify them. One firm prediction was made
very early on by Compton and Getting (1935) that cosmic rays should show an
anisotropy because of the motion of the earth within the galaxy. Eventually it
was realised that this idea would be testable only with cosmic rays undeflected
by the solar wind (discovered much later) so measuring the Compton–Getting
effect became a target for air shower experiments. However, as the velocity of
the earth is only about 200 km s−1 , the effect is ∼0.1 % and it has taken around
5 Development of Ultra High-Energy Cosmic Ray Research 105
70 years for a convincing demonstration of its discovery. The search for point
sources has been largely unsuccessful, but one of the motivations for searching
for rarer and rarer particles of higher and higher energy has been the expectation
that anisotropy would eventually be found.
3. What is the energy spectrum of the primary cosmic rays?
A power-law distribution of cosmic rays was first described by E. Fermi in 1949
(Fermi, 1949) but until 1966 there were no predictions as to the power-law in-
dex or to further structures in the energy spectrum. Observations in 1959 had indicated a steepening at around 3 × 10¹⁵ eV (the “knee”), while in 1963 it was claimed from observations made with the first large shower array that the spectrum flattens just above 10¹⁸ eV. However, not only were there no predictions of
these features, interpretation of them remains controversial. By contrast the dis-
covery of the 2.7 K cosmic microwave background radiation in 1965 led, a year later, to the firm statement that if cosmic rays of energy above ∼4 × 10¹⁹ eV exist, they can come only from nearby sources. It took about 40 years to establish that there is indeed a steepening in the cosmic ray spectrum at about this energy, but whether this is a cosmological effect or a consequence of a limit to which sources can accelerate particles is unclear: 4 × 10¹⁹ eV is within a factor of ∼5 of the highest energy event ever recorded.
4. What is the mass composition of the primary cosmic rays?
One of the major tasks of the air shower physicist is to find the mass of the pri-
mary particles. This has proved extraordinarily difficult as even if the energy of
the primary that produces an event is known, the uncertainties in the hadronic
physics make it hard to separate protons from iron. Data from the LHC will
surely help, but above 10¹⁷ eV one has reached a regime where the centre-of-
mass energies in the collisions are above what is accessible to man-made ma-
chines. Indeed it may be that in the coming decades the highest-energy cosmic
rays provide a test bed for theories of hadronic interactions, mirroring the fact
that cosmic ray physics was the place where particle physics was born in the
1930s.
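The practical consequence of the steeply falling flux mentioned above can be made concrete with a short numerical sketch (in Python; the single power law and the normalisation of roughly one particle per m² per year above 10¹⁵ eV are assumed round numbers for illustration, not measurements quoted in this chapter). It shows why direct detection runs out of events near 10¹⁴ eV and why ground arrays of ever larger area are needed at the highest energies.

# Sketch: event rates for a detector of area A, assuming a single power-law
# integral flux J(>E) normalised to ~1 particle per m^2 per year above 1e15 eV.
# The real spectrum has structure (knee, ankle, suppression), so these numbers
# are order-of-magnitude only.
GAMMA = 2.7                      # assumed differential spectral index
J0, E_REF = 1.0, 1e15            # assumed normalisation: ~1 / m^2 / yr above 1e15 eV

def integral_flux(E_eV):
    """Particles per m^2 per year above energy E_eV."""
    return J0 * (E_eV / E_REF) ** (1.0 - GAMMA)

for E_eV, area_m2, label in [(1e14, 1.0, "1 m^2 balloon-borne detector"),
                             (1e19, 1e9, "1000 km^2 ground array")]:
    print(f"E > {E_eV:.0e} eV, {label}: ~{integral_flux(E_eV) * area_m2:.0f} events per year")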
In what follows we have chosen to emphasise the progress made since the 1940s
towards answering these four questions through an examination of the development
of different techniques, both experimental and analytical, introduced in the last 70
years. While new techniques have enabled air showers to be studied more effec-
tively, it is remarkable how the essentials of what one seeks to measure were recog-
nised by the pioneers in the 1940s and 1950s. Increasingly sophisticated equipment, operated on ever larger scales, has been developed and has led to some answers to the key questions, although many issues remain uncertain.
Galbraith (1958) and Cranshaw (1963) have written books in which early work, up to the end of the 1950s, is discussed in more detail than is possible below, while in Hillas’s classic book on Cosmic Rays (Hillas, 1972) there is an excellent discussion of some of the earliest papers in a context which includes fundamental ideas of cosmic ray physics, including shower physics.
We now move on by reviewing the history of the discovery of the air shower
phenomenon.
In retrospect, this experiment marked the birth of “rare event triggering”, which
became a key tool for progress in nuclear and particle physics experiments.
The development of the coincidence approach was crucial also for the discovery
and study of extensive air showers. In 1933 Rossi made a key observation which
was hard to accept for the scientific community and which, as Rossi recalled later
(Rossi, 1985, page 71), even “raised doubts about the legitimacy of the coincidence
method”. In an experimental arrangement as shown in Fig. 5.1, Rossi observed that
the coincidence rate between the three adjacent Geiger counters increased when an
absorber was placed above the counters. Only when the absorber thickness reached
a certain value did the coincidence rate fall, but much less than was expected even
from β-rays, the most penetrating particles known at that time. The curve of coincidence rate against absorber thickness became known as “Rossi’s transition curve”. Rossi, however, correctly concluded that soft secondary particles were pro-
duced by the cosmic particles entering the material. These secondary particles then
suffer increasing absorption with increasing total thickness of the absorber (Rossi,
1933). It is interesting to note that the same basic observation was made a year later by Regener and Pfotzer (1935) when studying the vertical intensity of cosmic rays at high altitudes.
Fig. 5.1 Rossi’s transition curve: The experiment in which the abundant production of secondary radiation by cosmic rays was discovered. Coincidences between Geiger–Müller counters, arranged as shown on the left, are produced by groups of secondary particles generated by cosmic rays in the lead shield above the counters. The curves labelled I–III refer to Pb and Fe absorbers of different thicknesses placed above the counters (Rossi, 1933)
Fig. 5.2 The discovery of extensive air showers (EAS): Decoherence curves measured with Geiger counters separated by up to 300 m. Data of Schmeiser and Bothe (1938) and Kolhörster et al. (1938) were measured at sea level with counters of 91 cm² and 430 cm² effective area, respectively, while data of Auger et al. (1939a) were measured with counters of 200 cm² at the Jungfraujoch at 3 450 m
Despite the work of Rossi and the two German groups, credit for the discovery
of extensive air showers has usually been given to Auger and his collaborators for
what seems to have been a serendipitous observation (Auger et al., 1939a) depend-
ing strongly on the electronic developments by Roland Maze who improved the re-
solving time of coincidence circuits to 5 µs (Maze, 1938). Auger, Maze and Robley
found that the chance rate between two counters separated by some distance greatly
exceeded the chance rate expected from the resolving time of the new circuitry. For a
while, the phenomenon was known as “Auger showers” (Auger, 1985, page 214). In
their measurements performed at the Jungfraujoch in the Swiss Alps they were able
to separate their detectors by up to 300 m. The decoherence curves are shown again
in Fig. 5.2. Differences in the coincidence rates between the three groups of authors
can be understood both by the different effective areas of the Geiger counters and
by the different altitudes at which the measurements were performed. In view of the
sequence of air shower observations, the important achievement of Auger and his
group, which distinguishes their work from that of Rossi, Schmeiser and Bothe, and
Kolhörster, appears to be not so much in separating their detectors by up to 300 m,
but in estimating the primary energy to be around 10¹⁵ eV. This estimate was based
on the number of particles in the showers, assuming that each particle carried, on
average, the critical energy.1 A factor of 10 was added to account for energy lost
in the atmosphere. A similar conclusion came from using the work of Bhabha and
Heitler, based on the ideas of quantum electrodynamics (QED). It is worth quoting
the final remarks of Auger from his paper presented at the 1939 Symposium held in
Chicago (Auger et al., 1939b):
One of the consequences of the extension of the energy spectrum of cosmic rays up to 10¹⁵ eV is that it is actually impossible to imagine a single process able to give a particle such an energy. It seems much more likely that the charged particles which constitute the primary cosmic radiation acquire their energy along electric fields of very great extension.
1 The critical energy is the energy at which energy losses by ionisation and by bremsstrahlung are equal.
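Auger’s estimate of the primary energy is easy to reproduce in outline (a sketch with assumed round numbers: the shower size of 10⁶ particles and a critical energy of about 85 MeV for electrons in air are illustrative values, not figures taken from his paper).

# Each of the N charged particles observed is assumed to carry, on average,
# the critical energy; a factor of ~10 allows for energy already dissipated
# in the atmosphere above the detectors.
N_particles = 1e6          # assumed shower size of a large event
E_critical_eV = 85e6       # assumed critical energy of electrons in air (~85 MeV)
atmospheric_factor = 10    # correction for energy lost higher in the atmosphere

print(f"Estimated primary energy: ~{N_particles * E_critical_eV * atmospheric_factor:.1e} eV")
# ~8.5e14 eV, i.e. of order 10^15 eV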
Kolhörster had little chance for cosmic ray work after 1939. Rossi left Manchester
for Chicago and, during his brief stay there before joining the Manhattan project,
his cosmic ray studies were focused largely on the problem of muon decay.
Only a few years after the discovery of extensive air showers, Skobeltzyn et al. (1947), working in the Pamir mountains at an altitude of 3 860 m above sea level, pushed
measurements of coincidences out to distances of 1 000 m. To suppress random
coincidences which would occur between single distant Geiger counters, they were
the first to apply so-called double-coincidences, meaning that coincidences were
first formed within trays of local Geiger counters, before a coincidence was formed
between the distant trays.
Work by Auger and his colleagues using cloud chambers triggered by arrays of Geiger counters allowed features of air showers to be understood relatively quickly.
By the late 1930s it was known that air showers contained hadronic particles, muons
and electrons and major advances in understanding took place in the late 1940s
and early 1950s after the existence of two charged and one neutral pion was es-
tablished and it was recognised that muons were secondary to charged pions. The
development of an air shower can be understood by studying Fig. 5.4 which we will
reference on occasion. In the figure a cloud chamber picture of a shower created
in lead plates by a cosmic ray proton of about 10 GeV is shown (Fretter, 1949).
The features shown in this photograph, except for scale, are extremely similar to
those present when a high-energy particle enters the earth’s atmosphere and creates
a shower.
Each lead plate (the dark bands running horizontally across the picture) is about two radiation lengths thick3 and the cross-sectional area of the cloud chamber is 0.5 × 0.3 m². The gas in the chamber was argon, effectively at atmospheric pres-
sure, and thus most of the shower development happens within the lead plates. Lit-
tle development of the cascade takes place in the gas, but the level of condensation
gives a snapshot of how the particle number increases and decreases as the shower
progresses through more and more lead. All of the important features of shower de-
velopment, such as the rise and fall of the particle numbers, and the lateral spreading
of the shower, are evident, as are some muons that penetrate more deeply into the
chamber than most of the electrons. Had such a proton interacted near sea level
in air, then the extent of the lateral spread of the shower would have been around
50 m.
The problem of identifying the nature and determining the energy of the particle
that initiated this shower, if there were data available from only one layer of gas
corresponding to the information available from a shower array at a single atmo-
spheric depth, can be appreciated from Fig. 5.4. But until the 1980s, when a tech-
nique was developed that allowed the build-up of the air shower to be studied on an
event-by-event basis, as is seen in the figure, this was the challenge faced by all air
shower experimenters. Assumptions had to be made as to where the particle had its first interaction and what the features of the hadronic interactions were. Key param-
eters such as the cross sections for the interaction of protons (and heavier nuclei)
with nuclei, pion–nucleus cross sections, the fraction of energy radiated as pions
in each collision and the number of particles produced are needed. By contrast de-
termination of the direction of the incoming primary is a relatively straightforward
exercise.
The basic key processes of cascade multiplication occurring in EAS were laid
out in 1934 by Bethe and Heitler based on QED (Bethe and Heitler, 1934) and were
formulated in terms of pair-production and bremsstrahlung processes by Bhabha
and Heitler (1937). Carlson and Oppenheimer (1937) finally completed the theory
by accounting also for energy losses of electrons by ionisation and for practical
calculations they pioneered the use of diffusion equations. Moreover, they demon-
strated quantitative agreement of their calculations with the experimental results by
Regener and Pfotzer (Pfotzer Maximum) (l.h.s. of Fig. 5.5), pointed out the impor-
tance of fluctuations of the shower maximum, and noted that a more penetrating burst-like component, as suggested by Heisenberg (1936) based on measurements
by Hoffmann4 was needed to allow electrons to penetrate the atmosphere to a thick-
ness of 30 radiation lengths (r.h.s. of Fig. 5.5). This paper presented the simple
Fig. 5.5 Left: Total number of electrons N(t) against t = X/X₀ (X₀: radiation length), calculated by Carlson and Oppenheimer for 2.5 GeV electrons in air and compared to experimental results (circles) of Pfotzer (1936). Right: Estimated number of electrons with Eₑ > 50 MeV in Pb for E₀ = 2.7, 20, 150, and 1 110 GeV (Carlson and Oppenheimer, 1937)
were the focus although there was always a drive to find the limiting energy that
Nature reached.
In the 1950s a relatively large array of Geiger counters that eventually covered ∼0.6 km² was developed by Cranshaw and Galbraith (1954, 1957) at Culham near sea level, the site of the UK Atomic Energy Establishment. In the USSR, investigations of
air showers were initiated by Skobeltzyn who encouraged George Zatsepin of the
Lebedev Institute to develop a program in the last years of WW II. The first Russian
activity was carried out in the Pamirs (3 860 m) and was the start of a major effort
on shower work at mountain stations by Soviet scientists which continued for many
decades, latterly at a well-serviced installation at Tien Shan (3 340 m) near Almaty.
The leaders of this work, in addition to Zatsepin, were N.A. Dobrotin, S.I. Nikolsky
and S.A. Slavatinsky. There was also a major effort in Moscow, headed first by
S.N. Vernov and later by G.B. Khristiansen. Until the start of construction of the
Yakutsk array in the late 1960s, the Soviet program was largely focused on studying
primary particles of less than 10¹⁷ eV.
The most important output from the early period of the Moscow work was the
discovery of a feature in the size spectrum of showers5 which became known as the “knee” of the cosmic ray spectrum (Kulikov and Khristiansen, 1959) and it had
considerable impact. It was verified with high precision relatively quickly by a num-
ber of groups (Fukui et al., 1960; Kameda et al., 1960; Allan et al., 1962; Kulikov
et al., 1965). Estimating the energy of the knee from the track-integral method (see
Sect. 5.8), Kulikov and Khristiansen had argued that the break may be caused by diffusion of cosmic rays out of the galaxy, so that cosmic rays at E > 10¹⁶ eV may have a metagalactic origin. Thus, an astrophysical feature in the cosmic ray spec-
trum may have been discovered. This started a long running debate, picked up by
Peters (1961) who proposed that what was being seen reflected a similar feature in
the primary spectrum of cosmic rays induced either by a limitation of the accelera-
tion processes or by a leakage of particles from the galaxy. There were competing
claims that this feature was due to a characteristic of nuclear interactions with a dramatic change occurring near 10¹⁵ eV, and the debate about an astrophysical or a particle-physics origin was not to be settled for a further 45 years, until precise data from KASCADE became available (see below).
The increasing availability of PMTs led to some significant advances in the air shower technique, including the use of Cherenkov radiation to study extensive air showers, suggested by Blackett (1947), Galbraith and Jelley (1953), and Chudakov and colleagues (1960). These data were used independently by Greisen (1956) and Nikolsky (1962) to derive a relationship between the primary energy and the shower size, which proved to be particularly important for early estimates of the primary energy.
5 The “shower size spectrum” or just “size spectrum” is a common notion used for the distribution of the shower size, i.e. of the total number of particles that reached ground. The shower size, N, is obtained by fitting the lateral distribution ρ(r) of shower particles at ground and evaluating the integral N = 2π ∫₀^∞ ρ(r) r dr.
Fig. 5.6 Scale comparison of the first water Cherenkov detectors used by Porter et al. (1958), of 1.44 m² read out by a single 5-inch diameter PMT, to those used by the Pierre Auger Observatory, of 10 m² read out by three 9-inch PMTs (Abraham et al., 2004)
Fig. 5.7 Schematic diagram of a scintillation counter used in the Agassiz shower array. The scintillator block was 105 cm in diameter and 10 cm thick. The inside of the box was painted white and the diffuse light reflected from the walls was collected by a Dumont 5-inch diameter PMT (Reproduction from Clark et al., 1957)
that the particles in the disk of the shower were spread over a thickness of only a
few metres and, by shielding one of them with up to 20 cm of lead, that the elec-
trons in the shower lead the muons close to the shower axis. The discovery that the
shower disk was relatively thin (∼10 ns) opened up the possibility of measuring the
direction of the primary particle. Assuming that the direction was perpendicular to
a plane tangent to the surface defined by the leading particles in the shower, it was
demonstrated that the direction of the shower could be found to within ∼2°. This
was a major advance as hitherto the very crude collimating effect of the atmosphere
had been used to define shower directions.
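The timing method can be illustrated by a minimal plane-front fit (a sketch assuming an exactly plane shower front, perfect timing and a flat array; real analyses must also account for the curvature and thickness of the front).

import numpy as np

# Plane-front fit: arrival times t_i (s) at detector positions (x_i, y_i) in metres.
# Model: c*t_i = c*t0 + u*x_i + v*y_i, with (u, v) the horizontal direction cosines
# of the shower axis; the zenith angle follows from sin(theta) = sqrt(u^2 + v^2).
c = 299_792_458.0  # m/s

def fit_direction(x, y, t):
    A = np.column_stack([np.ones_like(x), x, y])      # unknowns: [c*t0, u, v]
    coeff, *_ = np.linalg.lstsq(A, c * np.asarray(t), rcond=None)
    _, u, v = coeff
    theta = np.degrees(np.arcsin(np.hypot(u, v)))
    phi = np.degrees(np.arctan2(v, u))
    return theta, phi

# Toy example: three detectors on a 500 m triangle, shower at 30 deg zenith.
x = np.array([0.0, 500.0, 0.0])
y = np.array([0.0, 0.0, 500.0])
u_true, v_true = np.sin(np.radians(30.0)), 0.0
t = (u_true * x + v_true * y) / c
print(fit_direction(x, y, t))   # ~ (30.0, 0.0)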
This pioneering work led to the construction of a larger array at a partially wooded site, the Agassiz Astronomical Station of Harvard University. Un-
fortunately the liquid scintillators were flammable and after a lightning-induced fire
a method of making solid scintillator in large slabs with masses of ∼100 kg was de-
veloped (Clark et al., 1957). These could also be viewed by PMTs and a schematic
diagram of one scintillation counter is shown in Fig. 5.7.
At the Agassiz site an array of 15 such detectors was operated between 1954
and 1957 with the layout shown in Fig. 5.8. Members of the group included George
Clark, William Kraushaar, John Linsley, James Earl, Frank Scherb and Minoru Oda,
who became a leading figure in air shower work in Japan. An excellent first-hand
account of Rossi’s work at MIT has been given by Clark (2006).
Cosmic-ray research began in Japan in the 1930s at RIKEN, first under the guidance of Y Nishina and then under S Tomonaga. At the end of WW II, experimental work in nuclear physics in Japan was essentially terminated for some years following the destruction of the cyclotrons at RIKEN in Tokyo and those in Kyoto and Osaka. By contrast, cosmic ray work flourished: Tomonaga stimulated studies of extensive air
showers at Mt Norikura (2 770 m above sea level) (Ito et al., 1997) and played a key
role in establishing the Institute for Nuclear Studies (INS) in Tokyo. He was also
instrumental in encouraging Nishimura and Kamata to develop three-dimensional
analytical calculations of electromagnetic cascades, work which they began after
reading the Rossi and Greisen article of 1941 (Rossi and Greisen, 1941) during daily
visits to a US reading room in Tokyo. Japan has been one of the leading countries
in cosmic ray physics ever since.
By far the most important insights at that time came from combined data from
the four muon and 14 scintillator detectors operated at INS. Although the results
have long since been surpassed, the group was the first to point out the key infor-
mation that could be derived from a study of plots of muon versus electron number,
Nμ vs. Ne plots in modern language. One of the plots from the INS work is shown
in Fig. 5.9 for nearly vertical events. This type of diagram, with improved statistics
and smaller uncertainties in Nμ and Ne , when combined with detailed shower sim-
ulations, later proved to be a powerful tool for extracting information on primary
mass. In addition, it was soon recognised, when Monte Carlo studies developed,
that at fixed primary energy the fluctuations in electron number were greater than
those for the muons. Accordingly the muon number came to be used as a proxy for
shower energy.
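Why such Nμ–Ne plots are sensitive to the primary mass can be sketched with the simple superposition picture, in which a nucleus of mass number A and energy E₀ is treated as A independent proton showers of energy E₀/A; the muon-number scaling Nμ ∝ E^0.85 used below is a typical model value assumed for illustration, not a result of the INS work.

# Superposition-model sketch: because N_mu grows less than linearly with energy,
# a heavy nucleus (A sub-showers of energy E0/A) yields more muons than a proton
# of the same total energy, which is what separates iron from protons in such plots.
BETA = 0.85   # assumed scaling exponent, N_mu ~ E^BETA

def muon_number_relative_to_proton(A):
    return A * (1.0 / A) ** BETA          # = A**(1 - BETA)

for A, name in [(1, "proton"), (4, "helium"), (56, "iron")]:
    print(f"{name:6s}: N_mu / N_mu(proton) ~ {muon_number_relative_to_proton(A):.2f}")
# iron gives roughly 1.8, i.e. ~80 % more muons than a proton shower of equal energy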
The work at INS was the forerunner of other projects in air shower research. In
addition to the BASJE project at Chacaltaya (cf. Sect. 5.7), the activities led to the
Akeno and AGASA arrays in Japan and the Telescope Array in the USA.
The determination of the energy that has created a particular shower is not
straightforward and it is instructive to appreciate the various approaches that have
been adopted over the years. Although it was established relatively early that air
showers contained nucleons, pions and muons in addition to an abundance of elec-
trons and photons, the gross features of showers were found to be relatively well-
described under the assumption that the primaries were electrons. It thus became
the practice to infer the primary energy from a measurement of the total number of charged particles, N – dominantly electrons and positrons – in a shower, relating this to the primary energy using theory such as the Nishimura–Kamata
equations that describe the lateral distribution of charged particles for showers pro-
duced by photons or electrons. The number of particles was straightforward to mea-
sure when the detectors were Geiger counters as they respond predominantly to
charged particles. Also, for the study of the showers produced by primaries of en-
ergy less than ∼10¹⁷ eV, it was practical and economically feasible to build arrays
in which the average separation of the detectors was less than the Molière radius,7
about 75 m at sea level: roughly 50 % of the charged particles of a shower lie within
this distance.
As greater understanding of showers developed, there were moves away from
using the photon/electron approximation to estimate the primary energy from the
number of charged particles measured in the shower. Also a difficulty in obtaining
N was recognised as scintillation counters were increasingly introduced during the
1950s. Because of the success of the approach with Geiger counters and the lack
of other methods to find the energy on an event-by-event basis, considerable effort
was initially expended in relating the scintillator measurements to what would have
been the particle count had a Geiger counter been located at the same position as the
scintillation counter. This adjustment to particle number was reasonable while the spacing between detectors remained small.
7 The Molière radius is the root mean square distance that an electron at the critical energy is scattered in traversing one radiation length.
For example, at the Agassiz array, measurements were made at distances much closer to the shower core than one Molière
radius (see Fig. 5.8) and the scintillator response was converted to particle number
using an array of Geiger counters operated for that purpose. As more understanding
of shower structure developed, the importance of the thickness of the scintillators
was recognised and it was also realised that the conversion from scintillator signal
to number of charged particles depended on the distance of the scintillators from the
shower core because the energy spectrum of electrons and photons was distance-
dependent.
The MIT group also pointed out that to obtain an energy spectrum from the ob-
served size spectrum required “a quantitative knowledge of the cascade processes
initiated by primary particles of different energies”. This problem of quantitative
knowledge of the hadronic process is still an issue over 50 years on, though there is a growing understanding of the key hadronic interactions, most recently from the LHC. S Olbert, one of the MIT group (Olbert, 1957), had solved the shower equations to relate the shower size at different atmospheric depths to the primary energy. Using two models of high-energy interactions then current (the Landau model (Belenki and Landau, 1956) and the Fermi model (Fermi, 1950, 1951)), and making the assumptions of a collision mean free path for protons of 100 g cm⁻² and complete inelasticity, Olbert obtained relations between N and the primary energy E₀.
A study of the muon content of showers was made using a hodoscoped system
of Geiger counters shielded with lead. This work established that roughly 10 % of
the particles in an air shower were muons (Clark et al., 1958).
A final report on the work of the MIT group was made in 1961 (Clark et al., 1961)
where details of the largest event with a ‘Geiger counter size’ of N = 2.6 × 10⁹ were
given. The array at Agassiz had been operated for about a year, from July 1956 to
June 1957. In addition the group had run a small shower array at Kodaikanal in
India to search for anisotropies in the arrival direction pattern at energies just above
10¹⁴ eV.
The work directed by Rossi subsequently led to the establishment of the shower
array at Chacaltaya (with the Japanese group from INS as major partners). A particu-
lar motivation was to search for γ -rays by attempting to identify showers containing
fewer muons than average. This attempt was unsuccessful, but the first indirect deductions about the position at which the number of particles in showers of ∼10¹⁶ eV reaches its maximum were obtained. Additionally, attempts were made to find the
depth of shower maximum using the constant intensity cut method, a very important technical concept. Rossi also encouraged the work led by Linsley at Volcano Ranch in New Mexico to establish the first array with an area of over 1 km², built to
study the highest energy events.
array of radius 500 m with 15 × 0.85 m² scintillators, with five near the centre on 3 to 80 m spacing and 5 each at 150 and 500 m from the array centre. Like the MIT group,
the Cornell team did not have a fast computer available to them initially and devel-
oped some ingenious analogue methods to find the direction and the shower core.
This could only be adopted for a relatively small number of large events. Above 10¹³ eV, around 10⁴ events were recorded per day with several million accumulated in 1957 and 1958.
A measurement of the number spectrum above N = 6 × 10⁶ was made using an
approach similar to, but independent of, that of the MIT group and the two mea-
surements were found to be in good agreement. The largest event recorded with the Cornell array contained N ≈ 4 × 10⁹ particles.
Greisen and his group also studied muons in showers, extending what had been
done at MIT and elsewhere and he derived useful formulae to describe the lateral
distribution of muons above 1 GeV and also the energy spectrum of muons as a function of distance. Although the muon sample comprised only 559 muons, and the shower analysis was not done on an event-by-event basis, the relations established have been
found to fit a wide sample of modern work on the muon lateral distribution even
for showers of greater energy. The parameterisations of both the electron and muon
lateral distributions (LDF) presented in Greisen’s seminal reviews (Greisen, 1956,
1960) described the data well over a large range of distances from the shower core
and atmospheric depths. Greisen also noted that his parameterisation of the elec-
tromagnetic distribution was a close approximation to the analytical calculations for
electromagnetic showers performed by Kamata and Nishimura (1958). Greisen’s approximations to the Nishimura–Kamata functions became known as the Nishimura–Kamata–Greisen (NKG) function.
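The functional form referred to here is compact enough to write down. The sketch below implements the standard textbook NKG expression (the Molière radius of 75 m, the shower age s = 1.2 and the size of 10⁶ used in the toy check are assumed illustrative values), and integrating it over the ground plane recovers the shower size N as defined earlier (N = 2π ∫ ρ(r) r dr).

import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def nkg_density(r, N, s, r_m=75.0):
    """Standard NKG lateral distribution (particles per m^2) at core distance r [m];
    N: shower size, s: shower age, r_m: Moliere radius (~75 m at sea level)."""
    c_s = gamma(4.5 - s) / (2.0 * np.pi * gamma(s) * gamma(4.5 - 2.0 * s))
    x = r / r_m
    return (N / r_m**2) * c_s * x**(s - 2.0) * (1.0 + x)**(s - 4.5)

# Toy check: N = 2*pi * integral of rho(r) * r dr should return the input size.
N_true, s_age = 1e6, 1.2   # assumed illustrative values
recovered, _ = quad(lambda r: 2.0 * np.pi * r * nkg_density(r, N_true, s_age), 0.0, np.inf)
print(f"Recovered shower size: {recovered:.3e}")   # ~1.000e+06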
After work on the Cornell scintillator array had been completed, Greisen turned
his attention to the development of the fluorescence technique. His two reviews
remain important sources of insights. In particular, in the first of these reviews,
Greisen developed a method to estimate the energy that a primary particle would
need, on average, to produce a shower of a certain size.
Of the several laboratories to be developed for the study of air showers, one of the
most important, and certainly the highest, was constructed at Chacaltaya in Bo-
livia at 5 200 m and is still in operation. The mountain had already been used ex-
tensively for the exposure of nuclear emulsion plates in the 1940s. At Chacaltaya
important steps were taken to infer the depth of shower maximum, to measure the
energy spectrum and to study the mass of cosmic rays, including a search for pho-
tons.
As a first step to understanding the features of showers at high altitude, 11 of the scintillators used in the Agassiz experiment were deployed in an array of 700 m diameter on the Altiplano at El Alto, near La Paz, Bolivia, at an altitude of 4 200 m
in 1958. Showers of size ∼10⁷ were studied. It was found that, unlike those of a similar size at sea level, the steepness of the lateral distribution changed with zenith angle, being steeper for the more vertical showers. Furthermore, for N ∼ 3 × 10⁶ the change in shower size with depth from 630 to ∼800 g cm⁻² was small, suggesting that these showers had their maxima close to 630 g cm⁻² (Hersil et al., 1961,
1962).
In 1958, following a proposal by Oda, the MIT, Tokyo and La Paz groups joined
forces to establish the Bolivian Air Shower Joint Experiment (BASJE) at Mt Chacal-
taya which started taking data in the early 1960s. The basic shower array comprised the 20 Agassiz-like scintillators deployed within a circle of 150 m diameter, with five scintillators for fast timing, supplemented with a 60 m² muon detector.
The muon detector was constructed from 160 tonnes of galena (the natural mineral
form of lead sulfide) which was readily available locally. Modules of 4 m² commercial scintillator were developed by K Suga (INS) and were used together with a logarithmic time-to-height amplifier (Suga et al., 1961) to measure the muon flux in showers. The 60 m² of scintillator were placed below a concrete structure supporting the galena. The size of this muon detector exceeded those built previously by about an order of magnitude and made practical a search for showers produced by
primary gamma rays under the hypothesis that such showers would have low num-
bers of muons. Events with less than 10 % of the average number of muons were
found but they were not clearly separated from the bulk of the data and did not show
any anisotropy. In addition to the energy spectrum measurements and the photon
search, innovative studies of the mass composition of cosmic rays were made. Fur-
ther, Krieger and Bradt (1969) augmented the scintillator array with nine open PMTs to detect air-Cherenkov light and concluded that at ∼10¹⁶ eV the composition was much as it was at 10¹² eV.
Many small arrays were built to study the cosmic rays in the region from 10¹⁴ to 10¹⁷ eV at locations across the world, with scientists in Australia, Germany,
India, Italy, Japan, Poland, UK, the USA and the USSR making important contri-
butions. The early measurements have been replicated with very superior statistics
in the modern arrays built in Germany (KASCADE and KASCADE-Grande), in
Italy (EAS-TOP) and in Tibet: this applies particularly to the energy region 10¹⁴ to 10¹⁶ eV which includes the region where the energy spectrum steepens. We shall
discuss those briefly in Sect. 5.14.
By contrast the number of devices constructed with collecting areas of over 1 km² has been only 7, including the Pierre Auger Observatory, the Telescope Array and the Yakutsk array that are still operating, although with the latter reconfigured to study smaller showers. A Soviet proposal for a 1 000 km² array named EAS-1000,
led by Khristiansen, was given formal approval and construction began (Khris-
tiansen et al., 1989), but the project was hit by the political and economic problems that came with glasnost and perestroika and was never realised. Data from the Telescope Array and the Pierre Auger Observatory currently dominate from the Northern and Southern Hemispheres, respectively. In contrast to the low-energy arrays, it is useful to discuss the pioneering large arrays in some detail first, as at each of them different features of technique and analysis were introduced which were impor-
tant for later studies. The layout of these surface arrays can be found in the review
by Nagano and Watson (2000): essentially all arrays are variations of the style de-
veloped at MIT shown in Fig. 5.8. While methods of data recording evolved, the
analysis techniques were similar to those introduced at MIT.
The first of the giant shower arrays was constructed at Volcano Ranch, New
Mexico (1 770 m) by members of the MIT group (Linsley et al., 1961). It consisted of 19 plastic scintillation counters of 3.3 m² area, each viewed with a 5-inch PMT. The construction, maintenance and data analysis of Volcano Ranch was the
almost single-handed effort of Linsley who made many contributions to the under-
standing of giant showers. Figure 5.10 shows him together with his colleague Livio
Scarsi.
Data from this array yielded the first measurement of the energy spectrum of cosmic rays above 10¹⁸ eV, giving the earliest hint of a flattening of the spectrum
in that region (Linsley, 1963a), a hint that took over 20 years to confirm convinc-
ingly. Linsley also made the first exploration of the arrival direction distribution of
these exceptional events. The most energetic one was assigned an energy of 10²⁰ eV (Linsley, 1963b), an energy that was subsequently revised to 1.4 × 10²⁰ eV (Lins-
ley, 1980). This event, reported before the discovery of the 2.7 K cosmic microwave
background radiation and the subsequent prediction of a steepening of the spectrum,
remains one of the most energetic cosmic rays ever recorded.
Following the closure of the Culham array in 1958 it was decided, under the
strong influence of Blackett, that work on extensive air showers should continue in
the UK but be supported and developed within the university environment by a team
drawn from several universities. This led to the construction of the Haverah Park
array (1964–1987) under the leadership of J G Wilson until his retirement in 1976,
with strong support in the initial stages from R M Tennant. Prototype studies were
carried out at Silwood Park near London under H R Allan who led a small team to
examine the potential of the Cherenkov detectors developed by Porter at Culham
(Allan et al., 1962) and A W Wolfendale, who led an effort to evaluate the potential
of neon flash tubes.
While the Silwood studies were underway, a site search identified land about 25 km from the University of Leeds (200 m) where an array covering 12 km² was established and which operated for 20 years from 1967 to study features of showers from 10¹⁵ to 10²⁰ eV. The primary detectors were water-Cherenkov detectors of 2.25 m² × 1.2 m, with over 200 being deployed. In addition there was 10 m² of liquid
scintillator shielded by lead to provide muon detectors with an energy threshold of
250 MeV, and a muon spectrometer.
The team from the University of Sydney who designed ‘The Sydney University
Giant Air Shower Recorder (SUGAR)’ introduced a totally novel concept to the
detection of extensive air showers by an array of ground detectors. Before this inno-
vation, the practice had been to link the detectors with cables to some common point
where coincident triggers between them could be made and the signals recorded, in
the early days often using oscilloscopes. This method becomes impractical for areas much above 10 km² as it was rarely possible to have the relatively unrestricted land access enjoyed by Linsley at Volcano Ranch: the cost of cables, their susceptibility to damage and the problems of generating fast signals over many kilometres
were further handicaps. The concept, due to Murray Winn, was first discussed in
1963 (McCusker and Winn, 1963). The Sydney group proposed the construction of
an array of detectors that ran autonomously with the time at which a trigger above
a certain number of particles was recorded being measured with respect to a tim-
ing signal transmitted across the area covered by the detectors. The concept was
realised in the Pilliga State Forest near Narrabri (250 m) where 47 stations were deployed over an area of ∼70 km². Most of the detectors were on a grid of 1 mile
(1.6 km) with 9 on a smaller spacing to enable smaller showers to be studied. Time
and amplitude data were recorded locally on magnetic tape and coincidences be-
tween different stations found off-line some time after the event. A difficulty was
that the rate of triggers of a local station above a level that was low enough to be
useful is very high and the rate could not be handled with technologies available at
the time. The problem was solved by burying the detectors under 2 m of earth and
placing them in pairs 50 m apart.
While the concept was brilliant it was somewhat ahead of its time in terms of the
technology available. Calor gas had to be used to supply the power at each station
and the reel-to-reel tape recorders proved difficult to operate in the dusty environ-
ment. The array was thus quite difficult to maintain and the problem of handling
many magnetic tapes at a single computing site proved to be a challenge. The PMTs used were 7 inches in diameter and suffered from after-pulsing which complicated the measurement of the signals as logarithmic time-to-height converters were used to
find the amplitudes (Suga et al., 1961). Efforts were made to overcome this diffi-
culty. There was also a serious problem in estimating the energy of events as only
muons were detected and therefore there was total reliance on shower models with
little ability to test which was the best to use because of a lack of different types of
detector in the array. Attempts to overcome this with a fluorescence-light detector
and with a small number of unshielded scintillators were unsuccessful. Energy spec-
tra were reported in (Winn et al., 1986a). The measurement of the shower directions
to a precision of a few degrees was a demonstration that the timing stamp method
was effective and the most valuable data from the SUGAR array were undoubt-
edly from the measurements of directions, the first such measurement to be made from the Southern Hemisphere at energies above 10¹⁸ eV (Winn et al., 1986b). In
later analyses of the SUGAR database, the Adelaide group reported the detection
of a signal from the region of the Galactic Centre (Clay et al., 2000; Bellido et al.,
2001).
The concept of autonomous detection was tested at Haverah Park in an early attempt to devise methods to construct an array of ∼1 000 km² but the method had
its most effective realisation in the system that was designed for the surface detector
array of the Pierre Auger Observatory and subsequently at the Telescope Array.
The largest shower array constructed before the advent of the Pierre Auger Obser-
vatory and the Telescope Array was the ‘Akeno Giant Air Shower Array (AGASA)’
which was built outside Tokyo at Akeno (900 m). The AGASA team was led
by M Nagano and the array operated from 1990 until 2004. It consisted of 111 unshielded scintillator detectors each of 2.2 m² with an inter-detector spacing of ∼1 km. Muon detectors of various areas between 2.4 and 10 m² were installed at 27
of the 111 detectors. Each detector was serviced using a detector control unit that
recorded the arrival time and size of every incident signal and logged monitoring
information, the pulse height distribution, the voltage, counting rate and tempera-
ture in a manner that anticipated what is done at the Auger Observatory. An optical
fibre network was used to send commands, clock pulses and timer frames from the
central station to each module and to accept the trigger signals, shower data and
monitoring data.
Some important claims were made about the energy spectrum and the arrival di-
rection distributions at the highest energies. The energy spectrum was reported as extending beyond 10²⁰ eV, with 11 events observed above that energy and no sign of any cut-off. The energies were estimated using model calculations, and subsequent work, in which the energy spectrum has been found by the track-length integral method inferred from observations of fluorescence light, has shown that there were deficiencies in the model calculations used.
The use of Monte Carlo techniques in the study of the cascade characteristics of air
showers has grown enormously since they were first introduced in the early 1960s.
The techniques developed have become indispensable for the interpretation of data,
to model the performance of detectors and to understand the development of the
cascade itself (Wilson, 1952). Wilson’s work was carried out with what was essen-
tially a roulette wheel but subsequent activities depended on the computing power
available with particular ingenuity being shown in the earliest days to combat the
limitations of the times.
Early calculations of the cascade development made use of phenomenological models of the hadronic interactions such as the CKP-model of Cocconi et al. (1962), developed to calculate particle fluxes from future accelerators. Other phenomeno-
logical models were developed and were used in interpretation of data from many
experiments. A problem was recognised by Linsley in 1977 when he found that
some of the Monte Carlo calculations produced results that were in violation of his
elongation rate theorem (Linsley, 1977) in that the computation of the change of
some shower parameters with energy was greater than was physically possible. This
raised questions about the accuracy of some of the Monte Carlo codes. Accordingly
Linsley and Hillas (1982) organised a discussion targeted at having interested groups
use a common model within their codes to calculate the depth of shower maximum
and how it varied with energy. This exercise was partially successful and the results
from seven groups who contributed were reported and assessed. The problem of
following all of the particles in a shower was first discussed by Hillas (1982): he
introduced the concept of ‘thinning’ which has subsequently had very wide applica-
tion. He pointed out that it was not necessary in some cases to follow every particle
to get a good picture of a shower and reported that good results for muons were obtained efficiently by choosing a demarcation energy, D, set at 10⁻⁴ of the primary energy, and following all particles of energy > D but only a fraction of particles of energy E < D. The technique was also used for electromagnetic cascades.
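A minimal sketch of the idea is given below (this is a simplified keep-or-weight variant, not Hillas’s actual weighting scheme): particles above the demarcation energy are always followed, while a particle below it survives with probability proportional to its energy and, if kept, carries a compensating statistical weight so that energy is conserved on average.

import random

def thin(particles, E_primary, thinning_level=1e-4):
    """Keep-or-weight step applied to a list of (energy, weight) pairs.
    Particles above E_th = thinning_level * E_primary are always kept; below E_th
    a particle survives with probability E/E_th and, if kept, its weight is
    increased so that energy is conserved on average."""
    E_th = thinning_level * E_primary
    kept = []
    for energy, weight in particles:
        if energy >= E_th:
            kept.append((energy, weight))
        else:
            p = energy / E_th
            if random.random() < p:
                kept.append((energy, weight / p))
    return kept

# Toy usage: 10^5 low-energy secondaries from an assumed 10^17 eV primary.
random.seed(1)
secondaries = [(random.uniform(1e9, 1e12), 1.0) for _ in range(100_000)]
thinned = thin(secondaries, E_primary=1e17)
print(len(thinned), "particles followed instead of", len(secondaries))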
By the mid-1980s computing power had increased enormously and several major
programs were developed. Hillas created the MOCCA program at this time, written
in Pascal. Only a limited description of this code reached the literature but it was
made available to the designers of the Auger Observatory for which purpose it was
translated into FORTRAN in the early 1990s.
When work on the KASCADE project at Karlsruhe started towards the end of the 1980s, it had been realised that most of the cosmic ray projects used their own specific tools, which often became a source of errors. In case of disagreement between experiments it remained unknown whether the problem had been of a purely experimental nature related to the apparatus or whether it had been due to differences in the EAS simulations applied. Thus, parallel to preparing for constructing KAS-
CADE, an extremely important code was developed, with input by J Capdevielle
and by P Grieder who were early pioneers of the Monte Carlo method. The COR-
SIKA code (‘COsmic Ray SImulation for KAscade’), continuously maintained by
a team at Karlsruhe with support from all over the world, has the merit of allow-
ing different models of nuclear interactions to be included in an easy way and the
authors made it widely available to the community.8 Thus, over the years it has become a de facto standard in the field, similar to the GEANT simulation package in high-energy physics.
The important step made with CORSIKA is that, even though the EAS modelling
may not be perfect, the very same modelling can be applied to all experiments in
the field. As J Knapp, with D Heck one of the drivers behind CORSIKA, stated in
his rapporteur talk at the ICRC in Durban (Knapp, 1997):
Is the composition changing or not? The answer depends on the yardstick (i.e. the Monte
Carlo program) used for comparison. Use the same yardstick to get consistent results, use a
well-calibrated yardstick to get the correct result.
In addition to its application in shower modelling, the CORSIKA code has been used in many other investigations, ranging from mountain and pyramid tomography through muon measurements, via neutrino searches, to the possible link between cosmic rays and climate (see e.g. Usoskin and Kovaltsov, 2006).
8 http://www-ik.fzk.de/corsika/.
The primary purpose of the early km²-scale EAS experiments was to study the energy spectrum and arrival directions of ultra-high-energy primary cosmic rays for the information which these data give about the origin of cosmic rays. It had been realised that cosmic ray particles beyond 10²⁰ eV, which were believed to be atomic nuclei, would have a very great magnetic rigidity. Thus, the region in which such a particle originates must be large enough and possess a strong enough magnetic field so that RB ≳ (1/300) · (E/Z), where R is the radius of the region in cm, B is the magnetic field in Gauss and E is the energy in eV. Also, anisotropies were expected to be seen. However, estimates of the particle flux were over-optimistic.
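How restrictive the containment condition is can be seen from a quick evaluation for a 10²⁰ eV proton (a sketch; the field strengths and environments chosen below are assumed order-of-magnitude values for illustration and do not come from this chapter).

# Containment condition R*B >~ E/(300*Z), with R in cm, B in Gauss, E in eV.
PC_CM = 3.086e18   # one parsec in cm

def required_RB(E_eV, Z=1):
    return E_eV / (300.0 * Z)          # Gauss * cm

E = 1e20   # eV, proton (Z = 1)
print(f"Required R*B ~ {required_RB(E):.1e} G cm")
for name, B_gauss in [("galactic disc, B ~ 3 microgauss", 3e-6),
                      ("radio-galaxy lobe, B ~ 1 microgauss", 1e-6)]:
    R_kpc = required_RB(E) / B_gauss / (1e3 * PC_CM)
    print(f"{name}: R >~ {R_kpc:.0f} kpc")
# ~36 kpc for galactic-disc fields, far larger than the thickness of the disc,
# which is one reason extragalactic sources were favoured at these energies.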
In May 1965 Penzias and Wilson reported their serendipitous observation of the
cosmic microwave background radiation (CMB) (Penzias and Wilson, 1965). Only
a few months later, Gould and Schréder (1966) pointed out that high-energy photons of a few 10¹⁴ eV traversing cosmic distances would suffer rapid energy losses due to electron-positron pair production by photon-photon collisions in the CMB. Thus, some earlier claims of high-energy muon-poor showers, supposed to be initiated by photons of extragalactic origin, were questioned by the authors and no “window” was open for extragalactic γ-ray astronomy until well above 10¹⁴ eV
(Jelley, 1966). A few months later, Greisen (1966a) and independently Zatsepin and
Kuz’min (1966) noted a related effect for proton primaries, in this case photo-pion
production in the CMB being responsible for rapid attenuation of protons of energy
beyond 4 × 10¹⁹ eV. Figure 5.11 shows the key figure of Zatsepin and Kuz’min’s
paper including the data point from Linsley (1963b) which was hard to understand
after this finding. The title of Greisen’s paper “End to the Cosmic-Ray Spectrum?”
expressed the situation perfectly and the effect became known as the “GZK effect”. It is worth pointing out that Greisen as well as Zatsepin and Kuz’min also noted that light and heavy nuclei would suffer rapid photo-disintegration above about the same
energy threshold.
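The scale of the effect follows from simple relativistic kinematics. The sketch below evaluates the head-on photo-pion threshold against a typical CMB photon (assumed round values; the attenuation actually sets in somewhat lower, at a few times 10¹⁹ eV, because photons in the high-energy tail of the Planck distribution and the Δ resonance contribute).

# Head-on threshold for p + gamma_CMB -> p + pi0 (natural units, energies in eV):
# E_th = (2*m_p*m_pi + m_pi**2) / (4*E_gamma)
m_p = 938.272e6          # proton rest energy, eV
m_pi = 134.977e6         # neutral-pion rest energy, eV
kT = 2.35e-4             # eV, for T ~ 2.7 K
E_gamma = 2.7 * kT       # assumed "typical" CMB photon energy (~6e-4 eV)

E_threshold = (2.0 * m_p * m_pi + m_pi**2) / (4.0 * E_gamma)
print(f"Head-on photo-pion threshold: ~{E_threshold:.1e} eV")   # ~1e20 eV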
It is an interesting fact that the aforementioned large shower arrays that were developed in the UK, Siberia, and Australia, which dominated the studies of cosmic rays above 10¹⁷ eV during the 1970s and 1980s, were all planned before this discovery, which was to become one of the main motivations for their operation. By
contrast, planning of the Fly’s Eye detector, which detected fluorescence radiation,
was begun in 1973 long after the interaction of the CMB and ultra-high-energy cos-
mic rays had been recognised and verification of the GZK-effect was one of the
prime motivations for its construction. However, it turned out that none of these
devices had a sufficiently large aperture to establish the existence of a steepening
in the cosmic ray spectrum. In fact, the dispute between AGASA and Fly’s Eye
about the existence of a flux suppression at the highest energies became an important argument for the construction of the Pierre Auger Observatory towards the end of the 1990s.
Fig. 5.11 Left: Characteristic time for GZK-like collisions as a function of proton energy for
different photon gas temperatures. Right: Expected suppression of the energy spectrum for a sim-
plified source scenario (Zatsepin and Kuzmin, 1966)
Fig. 5.12 Concept of a PMT camera viewing the fluorescence light from an air shower collected
with a mirror. The similarity of the layout shown here to the devices constructed by the Utah, Auger
and TA groups is remarkable. Reproduction from Proceedings of Norikura Meeting in Summer
1957, INS Report 1958. The text translates as: “parabolic mirror” and “A proposal for the shower
curve measurement in Norikura symposium, 1958” in upper/lower lines
The method was first discussed at an international forum in La Paz in 1962 where
Suga outlined the idea and showed a spectrum of the emission in the ultra-violet
part of the spectrum using α-particle sources (Suga, 1962). The signal was expected
to be small, even from showers produced by primary cosmic rays of 10²⁰ eV, as
the isotropic emission is only about 4 photons per metre of electron track in the
wavelength range from 300 to 450 nm.
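A rough photon budget (a sketch with assumed round numbers: the particle number near shower maximum, the distance and the mirror area are illustrative, and atmospheric attenuation and night-sky background are ignored) indicates how faint this signal is and why large mirrors, sensitive PMTs and dark, clear nights are required.

import math

yield_per_metre = 4.0      # photons per metre of electron track (as quoted above)
n_electrons = 7e10         # assumed particle number near maximum of a ~1e20 eV shower
distance_m = 20_000.0      # assumed distance between shower and telescope
mirror_area_m2 = 10.0      # assumed mirror aperture

emitted_per_metre = yield_per_metre * n_electrons                # isotropic emission
collected_per_metre = emitted_per_metre * mirror_area_m2 / (4.0 * math.pi * distance_m**2)
print(f"~{collected_per_metre:.0f} photons collected per metre of shower track")   # a few hundred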
The fact that the light is emitted isotropically makes it feasible to observe showers
‘side-on’ from very great distances and thus it opens the possibility of monitoring
large volumes of air. It is clear from a diagram taken from a Japanese publication
of 1958 (Fig. 5.12) that discussions about using this method to detect high-energy
cosmic rays must have taken place in Japan, under the guidance of Suga and Oda,
for some years prior to Suga’s report at La Paz.10 During the discussions following
Suga’s presentation, Chudakov reported the results of measurements that he had
made in 1955–1957 of the same phenomenon. He examined this effect as he was
concerned that it might be a background problem in the detection of Cherenkov
radiation, a technique that was being developed strongly in the Soviet Union in the
1950s, but he was slow to write up his observations (Belyaev and Chudakov, 1966).
Chudakov also observed transition radiation in the same series of experiments.
The use of fluorescence radiation to detect air showers was already being studied in Greisen’s group, which included Bunner (1967, 1968), who measured the spectrum of the light produced by particles in air. Greisen did not mention this activity at La Paz
but in an important review talk in 1965 (Greisen, 1966b) he pointed out many of
the key issues and showed the band spectrum of the fluorescence light from 200 to
460 nm. This paper had a much wider distribution than did the report of Suga’s talk.
The Japanese plans did not develop immediately. Goro Tanahashi from the INS
group in Tokyo worked in Greisen’s team at Cornell in the mid-1960s where efforts
were being made to detect fluorescence radiation using a set of Fresnel lenses. On
his return to Japan Tanahashi played a major role in setting up a fluorescence detec-
tor at Mt Dodaira, with Fresnel lenses, and the successful detection of air showers by
the fluorescence method was reported in 1969 (Hara et al., 1970). Greisen acknowl-
edged this achievement generously11 and recently Bruce Dawson has confirmed the
INS conclusions using his experience from the Auger Observatory to re-examine
the INS data (Dawson, 2011). The use of fluorescence light as a detection technique
seems to have been thought of more or less simultaneously in three countries, but
it is clear that the Japanese air shower physicists were the first to make convincing
detections.
The work of Greisen’s group at Cornell ended in 1972. Although unsuccessful, his efforts had inspired many. Tanahashi attempted to introduce the fluores-
cence technique into the Sydney Air Shower array and Greisen’s work was taken
up in the USA by a team at the University of Utah, led first by Keuffel. Following
the Japanese efforts, another convincing demonstration of the method was finally
achieved through the operation of a small fluorescence detector in coincidence with
the Volcano Ranch scintillator array (Bergeson et al., 1977). Fluorescence detectors
could now be used as stand-alone devices.
Another lasting legacy of Greisen’s work was the diagram made by Bunner for
his 1964 master’s thesis (Fig. 5.13). Here the essence of the reconstruction method
is shown: the diagram has been reproduced many times but its source has rarely
been acknowledged.
(later extended to 36) identical units started operation in 1986. In monocular mode,
Fly’s Eye reached a collection area of about 1 000 km² (effectively about 100 km² if the ∼10 % duty cycle of night-time operation is taken into account).
A spectrum from a single eye was reported in 1975 along with a measurement
of the mass composition above 10¹⁸ eV before work with two Eyes started. Full
operation began in 1981. The science output culminated in the report of an event
of (3.2 ± 0.9) × 10²⁰ eV (51 J) recorded in 1991 (Bird et al., 1995), still the
highest energy ever claimed. The event fell only 12 km from the Fly’s Eye I detector,
allowing a good measurement of its profile and energy. However, it fell behind the
Fly’s Eye II detector, so it was not seen in stereo.
The aperture of this pioneering experiment was too small to measure the spec-
trum at 10^20 eV, and hence to observe the GZK cut-off. However, the Fly's Eye and
AGASA spectral measurements (see below) set the stage for work to come with the
HiRes and the Pierre Auger Observatories.
One of the consequences of the work on the cores of showers carried out at the
Kiel array was the impact of an unexpected result that was never confirmed. In 1983
Samorski and Stamm (1983) reported a surprising observation suggesting that the
11 kpc distant X-ray binary system, Cygnus X-3, was a source of photons above 2 × 10^15 eV. A signal of 4.4σ was found in the region around the object using
data obtained between 1976 and 1980 based on 16.6 events above a background of
14.4 ± 0.4. Cygnus X-3 has a periodicity of 4.8 hours and 13 of the events in the on-
source region were in one of the 10 phase bins into which the 4.8 hour period was
divided. The Kiel conclusion appeared to be confirmed by results from a sub-array
at Haverah Park (Lloyd-Evans et al., 1983), tuned to ∼10^15 eV, and also by mea-
surements made around the same time at lower energies using the air-Cherenkov
technique. The claims stimulated great interest and, although now regarded as in-
correct, gave a huge stimulus to activity in the fields of high-energy gamma ray
astronomy and ultra-high-energy cosmic rays.
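For orientation, the quoted significance can be reproduced with the simplest estimate for a counting excess, namely the excess divided by the square root of the expected background. This is only a naive Gaussian illustration, added here and not part of the original analysis; the numbers are those quoted above for the Kiel claim.

```python
import math

# Naive significance of a counting excess: excess / sqrt(background).
# Numbers are those quoted above for the Kiel Cygnus X-3 claim; the
# original analysis was more elaborate than this illustration.
excess = 16.6        # events above background in the source bin
background = 14.4    # expected background events in the same bin

significance = excess / math.sqrt(background)
print(f"~{significance:.1f} sigma")   # prints ~4.4 sigma
```

More careful treatments also account for the uncertainty of the background estimate, which is one reason why isolated excesses of this size in that era proved unreliable.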
For the air shower field an important consequence was the interest that James W. Cronin (University of Chicago) took in the subject. A Nobel Laureate for his work in
particle physics, Cronin entered the cosmic ray field with vigour and led a team from
the Universities of Chicago and Michigan to construct an air shower array, known
as CASA-MIA, of ∼0.24 km², to search specifically for signals from Cygnus X-3 (Borione et al., 1994). The array was on a different scale, in terms of numbers of detectors, from anything built previously, with 1 024 scintillators of 1.5 m² laid out on a rectangular grid with 15 m spacing, above the muon detectors, each of 64 m², buried 3 m deep at 16 locations. As with the Chacaltaya array built 30 years earlier,
the idea was that showers with small muon numbers were likely to be produced by
gamma rays. The area of the muon detector was over 40 times that at Chacaltaya.
No signals were detected from Cygnus X-3 suggesting that the results from Kiel,
Haverah Park and the TeV gamma ray observatories were spurious. However, what
this enterprise showed was that it was possible to build much larger detectors than
had been conceived previously and Cronin went on to be the leading player in
the planning and implementation of the Pierre Auger Observatory. Another con-
sequence of the Cygnus X-3 period was that other particle physicists, most notably
Werner Hofmann and Eckart Lorenz, began work at La Palma to search for signals
from Cygnus X-3 using a variety of novel methods, but they quickly moved into
high-energy gamma ray astronomy.
The Cygnus X-3 observations revitalised experimental efforts for studying cosmic
rays above 10^14 eV and resulted in a new generation of devices with sophisti-
cated instrumentation, including CASA-MIA, GRAPES, HEGRA, EAS-TOP, KAS-
CADE, MAKET-ANI, Tibet-ASγ , and others.
In Italy a group led by Gianni Navarra started in the mid 1980s to install a multicomponent detector, named EAS-TOP, at Campo Imperatore at 2 005 m a.s.l., above the underground Gran Sasso Laboratory. It consisted in its final stage of an array of 35 modules of unshielded scintillators, 10 m² each, separated by 17 m near the centre, and by 80 m at the edges of the field, covering an area of
about 0.1 km² for measurement of the shower size. A central 140 m² calorimeter of iron and lead, read out by 8 layers of position-sensitive plastic streamer tubes, allowed measurements of hadrons (E_h ≥ 30 GeV) and muons (E_μ ≥ 1 GeV) in the
shower core (Aglietta et al., 1989). Operation started in 1989 and a very important
feature of EAS-TOP was the unique possibility of correlated measurements with
the MACRO detector located underground in the Gran Sasso Laboratory, thereby combining shower information at ground level with TeV-energy muons measured underground.
EAS-TOP measured the cosmic ray mass composition across the knee, allowed
tests of hadronic interaction models, measured the p-Air interaction cross section,
and, very importantly, it provided stringent tests of the cosmic ray anisotropy as a check of the decreasing Galactic content of the cosmic rays. Before operation was finally terminated in 2000,12 contacts were made to explore the possibility of shipping
the scintillator stations to the KASCADE site in Karlsruhe to continue operation in
an enlarged experiment there. A summary of the results from EAS-TOP has been
given in Navarra (2006).
At the end of the 1980s, two institutes at the research centre in Karlsruhe, Germany (now KIT), led by G. Schatz and B. Zeitnitz, joined efforts with university groups from abroad to construct KASCADE (KArlsruhe Shower Core and Array DEtector). Again, this endeavour was motivated largely by the surprising results from the Kiel array, so that γ-ray astronomy was on the agenda. However, precise measurements of the cosmic ray composition and of hadronic interactions were realised to be of great need and the experiment was designed accordingly. Karlsruhe was chosen as the site mostly because of its direct proximity to all the infrastructure of the centre, needed to operate such a complex EAS experiment. It consisted of 252
array stations of e/γ- and μ-detectors spread over a 200 × 200 m² area, a highly complex 320 m² central detector, and a 130 m² μ-tracking detector, details of which are described in Antoni et al. (2003). The sampling fraction (the fraction of the fiducial area of the EAS experiment covered by counters) of 2.6 % and 3.3 % for the electromagnetic and muonic components, respectively, is the largest of all EAS experiments ever operated and was crucial for achieving good electron and muon number
information. Figure 5.14 shows an example event measured with KASCADE.
Data taking started in 1996 and, like the other projects already mentioned, KASCADE never found any significant diffuse or point-like γ-ray flux and provided only upper limits. Its main achievements, however, were tests of hadronic interaction
models and most importantly measurements of the cosmic ray composition across
the knee. The high experimental precision enabled a two-dimensional unfolding of
the measured N_e vs. N_μ distributions – 45 years after similar plots from the INS ar-
ray (cf. Fig. 5.9) were analysed. The results convincingly demonstrated that the knee
in the cosmic ray spectrum is caused by light particles and that the knee could be
seen in five different mass groups with their position shifting to higher energies with
increasing mass (Antoni et al., 2005), in good agreement with Peters' cycle (Peters, 1961). This achievement of combining high-precision EAS data with sophisticated mathematical tools marked another milestone in cosmic ray physics.
12 This was primarily for reasons of environmental protection arguments that applied to the Campo Imperatore site.
Fig. 5.14 Example of an EAS registered by the e/γ detectors of the KASCADE experiment in the energy range of the knee. Left: energy deposits; right: arrival times. The position of the shower core and the curvature of the shower front are well observed (Antoni et al., 2003)
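The two-dimensional unfolding of the N_e–N_μ distributions mentioned above can be illustrated, in a deliberately simplified one-dimensional form, as the inversion of a detector response matrix. The sketch below is only an illustration added here: all numbers are invented, and the actual KASCADE analysis is far more sophisticated.

```python
import numpy as np
from scipy.optimize import nnls

# Toy illustration of unfolding: the measured histogram m is modelled as a
# response matrix R acting on the true spectrum t, m ~ R t, and t is
# recovered by a constrained (non-negative) least-squares inversion.
# This is a 1-D caricature of the 2-D (Ne, Nmu) unfolding described above.
rng = np.random.default_rng(1)

n_bins = 5
# Invented response: each true bin leaks partly into neighbouring bins.
R = 0.70 * np.eye(n_bins) + 0.15 * np.eye(n_bins, k=1) + 0.15 * np.eye(n_bins, k=-1)
R /= R.sum(axis=0, keepdims=True)                  # normalise each column

t_true = 1.0e4 * np.arange(n_bins, 0, -1) ** 3.0   # steeply falling toy spectrum
m = rng.poisson(R @ t_true).astype(float)          # fluctuated "measurement"

t_unfolded, _ = nnls(R, m)                         # non-negative least squares
print(np.round(t_unfolded))
print(t_true)
```

In the real analysis the response connects primary energy and mass to the measured electron and muon numbers and is obtained from detailed air-shower and detector simulations, and regularisation is needed to tame the amplification of statistical fluctuations.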
Obviously, this observation showed the need for improved data up to 10^17 eV
where the break of the iron knee would be expected. The closure of EAS-TOP at about the same time prompted Navarra and Kampert to extend the KASCADE experiment with the scintillator stations of EAS-TOP to become KASCADE-
Grande (2010). It covered an area of about 0.5 km², operated from 2003 to 2010, and recently (Apel et al., 2011) demonstrated a knee-like structure in the energy spectrum of the heavy component of cosmic rays at E ≈ 8 × 10^16 eV. Does this mark
the end of the galactic cosmic ray spectrum? In fact, the cosmic ray energy spectrum appears to be much richer in its features than could be described by simple broken power laws; these features pose challenges to be addressed by future observations, such as the Tibet
Array, IceTop as part of IceCube, GAMMA, GRAPES, and TUNKA.
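A simple consistency check, added here and not part of the original text, connects the heavy-component knee quoted above with Peters' cycle under the assumption that the knee energy of each element scales with its nuclear charge Z (with Z = 26 taken as representative of the heavy, iron-like group):

```python
# Back-of-the-envelope rigidity scaling (assumption: knee energy of each
# element proportional to its charge Z, the essence of Peters' cycle).
E_knee_heavy = 8e16      # eV, heavy-component knee (Apel et al., 2011)
Z_iron = 26              # charge of iron, representative of the heavy group

E_knee_proton = E_knee_heavy / Z_iron
print(f"implied proton knee ~ {E_knee_proton:.1e} eV")   # ~3e15 eV
```

The implied value of about 3 × 10^15 eV lies where the light-particle knee itself is observed, which is why the 8 × 10^16 eV structure is read as a possible end of the Galactic component, as asked above.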
At the highest energies, the second-generation air-fluorescence experiment,
High-Resolution Fly’s Eye (HiRes) became the successor of Fly’s Eye. Proposed in
the early 1990s it was completed in 1997 (HiRes I) and 1999 (HiRes II). It was also
located at Dugway, Utah and also had two air-fluorescence detector sites, HiRes I
and HiRes II spaced 12.6 km apart. This detector had smaller phototubes resulting
in a pixel size of 1° by 1° in the sky. Amongst other improvements over the orig-
inal Fly’s Eye was an FADC data acquisition system at HiRes II which allowed a
much more precise measurement of the longitudinal shower profile. The HiRes-I
detector took data in monocular mode from 1997 to 2006, while HiRes II operated
from 1999 to 2006. The last years of operation of HiRes suffered from an accident at the military site of Dugway, which subsequently meant that only a very small number of people could go to the site for shifts. Despite these operational problems,
a rich spectrum of measurements of the cosmic ray composition, p-Air cross sec-
tion, anisotropies, and the energy spectrum was reported. Most notably, clear signs
of a cut-off in the energy spectrum, in good agreement with the GZK-effect, were
demonstrated. A comprehensive summary of the late Fly’s Eye and early HiRes
results can be found in Sokolsky and Thomson (2007).
The problem of the small number of events at the highest energies was recog-
nised in the 1980s, even before the AGASA and HiRes detectors had completed
construction, and a controversy about the existence or non-existence of a suppres-
sion of the cosmic ray flux at the GZK threshold of 5 × 10^19 eV became a major point of discussion. This led to the idea that 1 000 km² of instrumented area was needed if progress was to be made. Cronin argued that 1 000 km² was insufficiently
ambitious and in 1991 he and Watson decided to try to form a collaboration to build
two identical detectors of 3 000 km², one each in the Northern and the Southern Hemisphere. Initially named the Giant Air-shower Project (GAP), it later became the Pierre
Auger Observatory, in honour of Pierre Auger’s work on the discovery of exten-
sive air showers. Argentina was selected as the Southern site in a democratic vote at the UNESCO headquarters in Paris in November 1995, and construction of an engi-
neering array finally began in 2001 near Malargüe, Argentina. Physics data taking
started on January 1st, 2004, with about 150 water-Cherenkov tanks and six fluorescence telescopes; construction of all 1 600 surface detector stations, covering an area of 3 000 km², and of 24 telescopes was finished in mid-2008. As of today, Auger has reached
an exposure of nearly 25 000 km² sr yr, more than the sum achieved with all other
experiments. The Northern part of the project could not be realised yet because of
funding problems.
Highlights of results include the clear evidence for a suppression of the flux
above 4 × 10^19 eV, observations of anisotropies in the arrival directions above 5.6 × 10^19 eV, suggesting a correlation with the nearby matter distribution, measurements of the primary mass favouring a change from a light to a heavier composition above 10^19 eV, a measurement of the p-Air and pp inelastic cross sections at a centre-of-mass energy of √s = 57 TeV, almost 10 times higher in energy than
recent LHC data, and the most stringent upper limits on EeV photon and neutrino
fluxes, strongly disfavouring an exotic particle physics origin for the highest energy
cosmic rays. The Auger Observatory will continue running for at least 5 more years
and upgrade plans are being discussed.
When AGASA and HiRes were nearing the end of operation, a collaboration
consisting of key members from AGASA and HiRes started to prepare for the con-
struction of a large observatory, named Telescope Array (TA), in the Northern hemi-
sphere. Like the Auger Observatory, TA combines a large area ground array, largely
based on the AGASA design, with air-fluorescence telescopes based on the HiRes
system. TA is located in the central western desert of Utah, near the city of Delta, about 250 km south-west of Salt Lake City, and covers, with its 507 surface detector stations and 38 fluorescence telescopes, a total area of about 730 km². Data taking started in early 2008 and, because of this, the total number of events recorded is still
much less than from the Auger Observatory. Nevertheless, good agreement within
the systematic uncertainties is seen for the energy spectrum. Analyses of composi-
tion and anisotropies still suffer strongly from limited statistics; thus final statements
need to wait for more data.
5.15 Future
As discussed above, the situation at the upper end of the cosmic ray energy
spectrum has changed considerably with the advent of new large scale obser-
vatories. No doubts now exist about the existence of a flux suppression above
∼5 × 10^19 eV. However, is this the observation of the GZK-effect which was predicted 45 years ago? From the experimental point of view, the answer cannot be given, because the suppression could equally well be due to the limit-
ing energy reached in nearby cosmic accelerators, just as discussed by Hillas in
his seminal review (Hillas, 1984). In fact, the latter picture is supported by data from the Pierre Auger Observatory, which suggest an increasingly heavier composition towards the end of the spectrum and which place the suppression about 20 % lower in energy than expected for typical GZK scenarios. HiRes and TA, on the other hand, find no significant change in their composition, and their cut-off en-
ergy is in agreement with the GZK-expectation. Moreover, a directional correla-
tion of ultra high-energy cosmic rays on a 3° scale is hard to imagine for heavy
primaries. Could this indicate weaker extragalactic magnetic fields than thought,
or could it point to deficiencies of hadronic interaction models at the highest ener-
gies? These models must be employed to infer the elemental composition from EAS
data.
Obviously, nature does not seem ready to disclose the origin of the most ener-
getic particles in the Universe yet. More work is needed and the main players in the
field have intensified their co-operation, sharing data and analysis strategies to better understand systematic uncertainties which, despite being small, appear to be quite relevant for the conclusions to be drawn from the data. In parallel, experimental
efforts are underway to increase the statistics more quickly and to further improve
data quality. Most importantly muon detection capabilities, which are of key impor-
tance to understanding features of hadronic interactions at the highest energies, are
being added.
Understanding the origin of ultra high-energy cosmic rays demands high quality
data in the 10^19 to 10^20 eV energy range. While this is to be the major task of ground-based experiments during the next years, finding the long-sought point sources of cosmic rays simply requires much larger exposures. Plans for space-
based experiments exist as well as for further efforts on the ground.
In 1979 Linsley developed the idea to observe giant air showers from space
(Linsley, 1979). The advantages were obvious, as a fluorescence camera look-
ing downwards from space could survey huge areas on the ground simultaneously, with only one atmospheric thickness between the light source and the sensor; the major challenges are the faintness of the light, because of the distance to the shower, and the optical imaging required for geometrical reconstruction and X_max observations. Several projects of this type were proposed to space agencies in the US, Europe, Japan and Russia, with JEM-EUSO presently planned to be mounted on the International Space Station in 2018. The realisation of space-
based projects involves some uncertainty, and it is clear that the energy and mass
resolution for cosmic rays will be much worse than that achieved with ground-
based observations. The prime goal is to collect event statistics at the highest energies.
References
Abraham, J., et al.: Properties and performance of the prototype instrument for the Pierre Auger
Observatory. Nucl. Instrum. Methods A523, 50 (2004)
Abraham, J., et al. (Pierre Auger Collaboration): Correlations of the highest energy cosmic rays
with nearby extragalactic objects. Science 318, 938 (2007)
Afanasiev, B.N., et al.: Recent results from the Yakutsk experiment. In: Nagano, M. (ed.) Proc. Tokyo
Workshop on the Techniques for the Study of the Extremely High Energy Cosmic Rays, p. 35
(1993)
Aglietta, M., et al. (EAS-TOP Collaboration): The EAS-TOP array at E_0 = 10^14–10^16 eV: stability
and resolutions. Nucl. Instrum. Methods A 277, 23 (1989)
Allan, H.R., et al.: The distribution of energy in extensive air showers and the shower size spectrum.
Proc. Phys. Soc. 79, 1170 (1962)
Andrews, D., et al.: Evidence for the existence of cosmic ray particles with E > 5 × 10^19 eV.
Nature 219, 343 (1968)
Antoni, T., et al. (KASCADE Collaboration): The cosmic-ray experiment KASCADE. Nucl. In-
strum. Methods A513, 490 (2003)
Antoni, T., et al. (KASCADE Collaboration): KASCADE measurements of energy spectra for
elemental groups of cosmic rays: results and open problems. Astropart. Phys. 24, 1–25 (2005)
Apel, W.D., et al. (KASCADE-Grande Collaboration): The KASCADE-Grande experiment. Nucl.
Instrum. Methods A620, 202–216 (2010)
Apel, W.D., et al. (KASCADE-Grande Collaboration): Kneelike structure in the spectrum of the
heavy component of cosmic rays. Phys. Rev. Lett. 107, 171104 (2011)
Askaryan, G.A.: Excess negative charge of an electron-photon shower and its coherent radio emis-
sion. JETP 14, 441–443 (1962)
Auger, P.: In: Sekido, Y., Elliot, H. (eds.) Early History of Cosmic Ray Studies. Reidel, Dordrecht
(1985)
Auger, P., Maze, R., Robley: Extension et pouvoir pénétrant des grandes gerbes de rayons cos-
miques. Comptes Rendus 208, 1641 (1939a)
Auger, P., et al.: Extensive cosmic ray showers. Rev. Mod. Phys. 11, 288 (1939b)
Bassi, P., Clark, G., Rossi, B.: Distribution of arrival times of air shower particles. Phys. Rev. A
92, 441 (1953)
Belenki, S.Z., Landau, L.: Hydrodynamic theory of multiple production of particles. Suppl. Nuovo
Cim. 3(1), 15 (1956)
Bellido, J.A., et al.: Southern hemisphere observations of a 10^18 eV cosmic ray source near the
direction of the Galactic centre. Astropart. Phys. 15, 167 (2001)
Belyaev, V.A., Chudakov, A.E.: Ionization glow of air and its possible use for air shower detection.
Bull. USSR Acad. Sci. Phys. Ser. 30(10), 1700 (1966)
Bergeson, H.E., Boone, J.C., Cassiday, G.L.: In: Proc. 14th ICRC, Munich, vol. 8, p. 3059 (1975)
Bergeson, H.E., et al.: Measurement of light emission from remote cosmic-ray air showers. Phys.
Rev. Lett. 39, 847 (1977)
Bethe, H., Heitler, W.: On the stopping of fast particles and on the creation of positive electrons.
Proc. R. Soc. A 146, 83 (1934)
Bhabha, H., Heitler, W.: The passage of fast electrons and the theory of cosmic showers. Proc. R.
Soc. A 159, 432–458 (1937)
Bird, D.J., et al.: Detection of a cosmic ray with measured energy well beyond the expected spec-
tral cutoff due to cosmic microwave radiation. Astrophys. J. 441, 144–150 (1995)
Blackett, P.M.S.: In: Proceedings of the International Conference on the Emission Spectra of the
Night Sky and Aurorae, pp. 34–35. Physical Society, London (1947)
Blackett, P.M.S.: Cloud chamber researches in nuclear physics and cosmic radiation. Nobel Lecture
(13 December, 1948)
Blackett, P.M.S., Occhialini, G.: Photography of penetrating corpuscular radiation. Nature 130,
363 (1932)
Borione, A., et al.: A large air shower array to search for astrophysical sources emitting γ -rays
with energies > 10^14 eV. Nucl. Instrum. Methods 346, 329 (1994)
Bothe, W.: Zur Vereinfachung von Koinzidenzzählungen. Z. Phys. 59, 1–5 (1929)
Bunner, A.N.: The atmosphere as a cosmic ray scintillator. Master Thesis, Cornell University
(1964)
Bunner, A.N.: The atmosphere as a cosmic ray scintillator. PhD Thesis, Cornell University (1967)
Bunner, A.N., Greisen, K., Landecker, P.B.: An imaging system for EAS optical emission. Can. J.
Phys. 46, S266 (1968)
Carlson, J.F., Oppenheimer, J.R.: On multiplicative showers. Phys. Rev. 51, 220 (1937)
Chudakov, A.E., et al.: In: Proc. 6th ICRC, Moscow, vol. II, p. 50 (1960)
Clark, W.G.: The scientific legacy of Bruno Rossi. Università degli Padova, 7 (2006)
Clark, G.W., et al.: An experiment in air showers produced by high-energy cosmic rays. Nature
180, 353 (1957)
Clark, G., et al.: The M.I.T. air shower program. Suppl. Nuovo Cim. 8, 623 (1958)
Clark, G.W., et al.: Cosmic-ray air showers at sea level. Phys. Rev. 122(2), 637 (1961)
Clay, R.W., et al.: Cosmic rays from the galactic center. Astropart. Phys. 12, 249 (2000)
Cocconi, G., Koester, L.J., Perkins, D.H.: Calculation of particle fluxes. Lawrence Berkeley Labo-
ratory Report LBL 10022, pp. 167–192 (1962)
Compton, A.H., Getting, I.A.: An apparent effect of galactic rotation on the intensity of cosmic
rays. Phys. Rev. 47, 817 (1935)
Cranshaw, T.E.: Cosmic Rays. Clarendon Press, Oxford (1963)
Cranshaw, T.E., Galbraith, W.: Philos. Mag. 45, 1109 (1954)
Cranshaw, T.E., Galbraith, W.: Philos. Mag. 2, 797 (1957)
Dawson, B.: Comment on a Japanese detection of fluorescence light from a cosmic ray shower in
1969 (2011). arXiv:1112.5686
Falcke, H., Gorham, P.W.: Detecting radio emission from cosmic ray air showers and neutrinos
with a digital radio telescope. Astropart. Phys. 19, 477 (2003)
Falcke, H., et al. (LOPES Collaboration): Detection and imaging of atmospheric radio flashes from
cosmic ray air showers. Nature 435, 313 (2005)
Fermi, E.: On the origin of the cosmic radiation. Phys. Rev. 75, 1169 (1949)
Fermi, E.: High energy nuclear events. Prog. Theor. Phys. 5(4), 570 (1950)
Fermi, E.: Angular distribution of the pions produced in high energy nuclear collisions. Phys. Rev.
81(5), 683 (1951)
Fretter, W.B.: In: Proceedings of Echo Lake Cosmic Ray Symposium (1949)
Fukui, S., et al.: A study on the structure of the extensive air shower. Prog. Theor. Phys. Suppl. 16,
1–53 (1960)
Galbraith, W.: Extensive Air Showers. Academic Press, San Diego (1958)
Galbraith, W., Jelley, J.V.: Light pulses from the night sky associated with cosmic rays. Nature
171, 349 (1953)
Geiger, H., Müller, W.: Elektronenzählrohr zur Messung schwächster Aktivitäten. Naturwis-
senschaften 31, 617–618 (1928)
Gorham, P.W., et al.: Observations of microwave continuum emission from air shower plasmas.
Phys. Rev. D 78, 032007 (2008)
Gould, R.J., Schréder, G.: Opacity of the universe to high-energy photons. Phys. Rev. Lett. 16, 252
(1966)
Greisen, K.: Prog. Cosm. Ray Phys. 3, 1–141 (1956)
Greisen, K.: Cosmic ray showers. Annu. Rev. Nucl. Part. Sci. 10, 63 (1960)
Greisen, K.: End to the cosmic ray spectrum? Phys. Rev. Lett. 16, 748 (1966a)
Greisen, K.: In: Proc. 9th ICRC, London, vol. 2, p. 609 (1966b)
Hara, T., et al.: Detection of the atmospheric scintillation light from air showers. Acta Phys. Acad.
Sci. Hung. 29, 369 (1970)
Heisenberg, W.: Zur Theorie der Schauer in der Höhenstrahlung. Z. Phys. 101, 533 (1936)
Hersil, J., et al.: Observations of extensive air showers near the maximum of their longitudinal
development. Phys. Rev. Lett. 6(1), 22 (1961)
Hersil, J., et al.: Extensive air showers at 4200 m. J. Phys. Soc. Jpn. 17, 243 (1962)
Hess, V.F.: Über die Beobachtungen der durchdringenden Strahlung bei sieben Freiballonflügen. Phys. Z. 13, 1084 (1912)
Hillas, A.M.: Cosmic Rays. Pergamon Press, Elmsford (1972)
Hillas, A.M.: Two interesting techniques for Monte Carlo simulation of very high-energy hadron
cascades. In: Linsley, J., Hillas, A.M. (eds.) Proc. of the Paris Workshop on Cascade Simula-
tions, p. 193 (1982).
Hillas, A.M.: The origin of ultra-high-energy cosmic rays. Annu. Rev. Astron. Astrophys. 22, 425–
444 (1984)
Hoffmann, G., Pforte, W.S.: Zur Struktur der Ultrastrahlung. Phys. Z. 31, 347 (1930)
Hoover, S., et al.: Observation of ultrahigh-energy cosmic rays with the ANITA Balloon-Borne
radio interferometer. Phys. Rev. Lett. 105, 151101 (2010)
Ito, N., et al.: In: Proc. 25th ICRC, Durban, vol. 4, p. 117 (1997)
Jelley, J.V.: High-energy γ -ray absorption in space by a 3.5 K microwave field. Phys. Rev. Lett.
16, 479 (1966)
Kamata, K., Nishimura, J.: The lateral and the angular structure functions of electron showers.
Prog. Theor. Phys. Suppl. 6, 93 (1958)
Kameda, T., Toyoda, Y., Maeda, T.: J. Phys. Soc. Jpn. 15, 1565 (1960)
Kampert, K.-H., Unger, M.: Measurements of the cosmic ray composition with air shower experi-
ments. Astropart. Phys. 35, 660 (2012)
Khristiansen, G.B., et al.: The EAS-1000 array. Ann. N.Y. Acad. Sci. 571, 640 (1989)
Knapp, J.: In: Proc. 25th ICRC, Durban, vol. 8, p. 83 (1997)
Kolhörster, W., Matthes, I., Weber, E.: Gekoppelte Höhenstrahlen. Naturwissenschaften 26, 576
(1938)
Krieger, A.S., Bradt, H.V.: Cherenkov light in extensive air showers and the chemical composition
of primary cosmic rays at 10^16 eV. Phys. Rev. 185, 1629 (1969)
Kulikov, G.V., Khristiansen, G.B.: On the size spectrum of extensive air showers. JETP 35, 441
(1959)
Kulikov, K., et al.: In: Proc. 9th ICRC, London (1965)
Linsley, J.: In: Proceedings of the 8th International Cosmic Ray Conference, Jaipur, vol. 4, p. 77
(1963a)
Linsley, J.: Evidence for a primary cosmic-ray particle with energy 10^20 eV. Phys. Rev. Lett. 10(4),
146 (1963b)
Linsley, J.: In: Proc. 15th ICRC, Plovdiv, vol. 12, p. 89 (1977)
Linsley, J.: Study of 10^20 eV cosmic rays by observing air showers from a platform in space,
response to Call for Projects and Ideas in High Energy Astrophysics for the 1980’s, Astronomy
Survey Committee (Field Committee) (1979)
Linsley, J.: In: Wada, M. (ed.) Catalogue of Highest Energy Cosmic Rays. World Data Center of
Cosmic Rays, Institute of Physical and Chemical Research, Itabashi, Tokyo (1980)
Linsley, J., Hillas, A.M.: In: Proc. of the Paris Workshop on Cascade Simulations. Texas Center
for the Advancement of Science and Technology, Texas (1982)
Linsley, J., Scarsi, L., Rossi, B.: Extremely energetic cosmic-ray event. Phys. Rev. Lett. 6, 485
(1961)
Lloyd-Evans, J., et al.: Observations of γ-rays > 10^15 eV from Cygnus X-3. Nature 305, 784
(1983)
Matthews, J.: A Heitler model of extensive air showers. Astropart. Phys. 22, 387 (2005)
Maze, R.: Étude d’un appareil à grand pouvoir de résolution pour rayons cosmiques. J. Phys.
Radium 9(4), 162–168 (1938)
McCusker, C.B.A., Winn, M.M.: A new method of recording large cosmic-ray air showers. Nuovo
Cimento 28, 175 (1963)
Nagano, M., Watson, A.A.: Observations and implications of the ultrahigh-energy cosmic rays.
Rev. Mod. Phys. 72, 689 (2000)
Navarra, G.: Cosmic ray composition and hadronic interactions in the knee region. Nucl. Phys. B,
Proc. Suppl. 151(1), 79–82 (2006)
Nikolsky, S.I.: In: Proceedings of 5th Interamerican Seminar on Cosmic Rays, vol. 2. Universidad
Mayor de San Andreas, La Paz, Bolivia (1962)
Olbert, S.: Theory of high-energy N-component cascades. Ann. Phys. 1, 247–269 (1957)
Penzias, A.A., Wilson, R.W.: A measurement of excess antenna temperature at 4080 Mc/s. Astro-
phys. J. 142, 419 (1965)
Peters, B.: Primary cosmic radiation and extensive air showers. Nuovo Cimento 22, 800 (1961)
Pfotzer, G.: Dreifachkoinzidenzen der Ultrastrahlung aus vertikaler Richtung in der Stratosphere.
Z. Phys. 102, 41 (1936)
Porter, N.A., et al.: Philos. Mag. 3, 826 (1958)
Regener, E., Ehmert, A.: Über die Schauer der kosmischen Ultrastrahlung in der Stratosphäre.
Z. Phys. 111, 501 (1938)
Regener, E., Pfotzer, G.: Vertical intensity of cosmic rays by threefold coincidences in the strato-
sphere. Nature 136, 718 (1935)
Rossi, B.: Method of registering multiple simultaneous impulses of several Geiger’s counters. Na-
ture 125, 636 (1930)
Rossi, B.: Über die Eigenschaften der durchdringenden Korpuskularstrahlung im Meeresniveau.
Z. Phys. 82, 151 (1933)
Rossi, B.: Misure sulla distribuzione angolare di intensita della radiazione penetrante all’ Asmara.
Suppl. Ric. Sci. 1, 579 (1934)
Rossi, B.: In: Sekido, Y., Elliot, H. (eds.) Early History of Cosmic Ray Studies. Reidel, Dordrecht
(1985)
Rossi, B., Greisen, K.: Cosmic-ray theory. Rev. Mod. Phys. 13, 240 (1941)
Saltzberg, D., et al.: Observation of the Askaryan effect: coherent microwave Cherenkov emission.
Phys. Rev. Lett. 86, 2802 (2001)
Samorski, M., Stamm, W.: Detection of 2 × 1015 to 2 × 1016 eV γ -rays from Cygnus X-3. Astro-
phys. J. 268, L17 (1983)
Schmeiser, K., Bothe, W.: Die harten Ultrastrahlschauer. Ann. Phys. 424, 161 (1938)
Skobeltzyn, D.V.: Die Intensitätsverteilung in dem Spektrum der γ -Strahlen von RaC. Z. Phys. 43,
354 (1927)
Skobeltzyn, D.V.: Über eine neue Art sehr schneller β-Strahlen. Z. Phys. 54, 686 (1929)
Skobeltzyn, D.V., Zatsepin, G.T., Miller, V.V.: The lateral extension of auger showers. Phys. Rev.
71, 315 (1947)
Sokolsky, P., Thomson, G.B.: Highest energy cosmic rays and results from the HiRes experiment.
J. Phys. G 34, R401 (2007)
Suga, K.: In: Proceedings of 5th Interamerican Seminar on Cosmic Rays, vol. 2. Universidad
Mayor de San Andreas, La Paz, Bolivia (1962)
Suga, K., Clark, G.W., Escobar, I.: Scintillation detector of 4 m2 area and transistorized amplifier
with logarithmic response. Rev. Sci. Instrum. 32, 1187 (1961)
Usoskin, I.G., Kovaltsov, G.A.: Cosmic ray induced ionization in the atmosphere: full modeling
and practical applications. J. Geophys. Res. 111, D21 (2006)
Wilson, R.R.: Monte Carlo study of shower production. Phys. Rev. 86(3), 261 (1952)
Winn, M.M., et al.: The cosmic-ray energy spectrum above 10^17 eV. J. Phys. G 12, 653 (1986a)
Winn, M.M., et al.: The arrival directions of cosmic rays above 10^17 eV. J. Phys. G 12, 675 (1986b)
Zatsepin, G.T., Kuzmin, V.A.: Upper limit of the spectrum of cosmic rays. JETP Lett. 4, 78 (1966)
Chapter 6
Very-High Energy Gamma-Ray Astronomy:
A 23-Year Success Story in Astroparticle Physics
6.1 Introduction
Since early times, astronomy has been an important part of human culture. Astro-
nomical observations started way back in the Stone Age. Galileo opened the window of modern astronomy in 1609 AD by making astronomical observations with optical telescopes. By means of a simple optical telescope, he could observe the four largest moons of Jupiter for the first time. The same year, Kepler published the fundamental laws of planetary motion in the Astronomia Nova. During the
following 350 years, astronomers were exploring the Universe in the wavelength
range of visible light, successively investigating more and more of the so-called
thermal Universe, which comprises all radiation originating from thermal emission processes. In the year 1912, the Austrian physicist Victor Hess showed that some type
of high-energy radiation is constantly bombarding the Earth from outer space (Hess,
1912). These so-called cosmic rays (CR), later identified mostly as charged particles, were clear evidence of the existence of high-energy processes in our Universe, exceeding energies that could possibly be created in thermal emission processes.
A fundamental problem of CRs (below some 10^18 eV) is that these charged parti-
cles do not allow their trajectories to be traced back to any astrophysical object, as
they are deflected by (unknown) intergalactic magnetic fields and thus lose any di-
rectional information: the sources of the CRs cannot be identified. Even today, after
100 years of CR studies, many questions about the sources of CRs remain unsolved.
Shortly before and after the Second World War, new windows in energy bands
below and above the visible wavelengths of the electromagnetic spectrum were successfully opened by observations in radio waves, infrared and ultraviolet light, X-rays,
and, eventually, in gamma rays. By around 1980, it was possible to observe cosmic radiation in the entire range of the electromagnetic spectrum, from 10^−6 eV up to 10^9 eV. By such observations it could be shown that, besides the thermal Universe
(dominated by stellar production of photons), high-energy reactions are an essential
part of what can be observed from the Universe.
In 1989, the window of very-high energy (VHE) gamma-ray astronomy was
opened by the detection of TeV gamma rays from the Crab nebula by the Whipple
collaboration (Weekes et al., 1989). This seminal detection started a very productive
research field in an energy domain mostly accessible only by ground-based instru-
ments. In 2012, we are celebrating the 100th year of cosmic-ray studies. This article
will give an overview of the development of VHE gamma-ray astronomy. The rich-
ness of the results achieved over the years necessitated a selection of experiments,
discussed here, that reflect the steady progress in VHE gamma-ray astronomy. Obvi-
ously, this selection is somewhat personal, and emphasis is put on those experiments that made initial breakthroughs in new detection methods and new results, while less emphasis is put on later experiments using very similar techniques, which may, however, be of equal scientific productivity. Also, experiments that were optimized
for energies above 100 TeV are mostly skipped, because up to now no sources have
been discovered in that energy domain.
VHE gamma-ray astronomy is part of high-energy cosmic-ray astrophysics.
Many experiments of the past aimed both at the search for VHE gamma-ray emit-
ting sources, as well as at solving fundamental questions concerning the nature of
cosmic rays. Here, we concentrate on the discussion of gamma-ray studies and refer
to Chap. 5 for details on CR studies.
Cosmic rays result from and thus transmit information about distant high-energy
processes in our Universe. Besides their energy (and particle type), the most im-
portant information they carry is the location of the astrophysical object of their
origin. However, nearly all CRs are charged and therefore suffer deflection from
their original trajectories by the weak magnetic fields (∼1 μGauss) in our Galaxy and, if originating from somewhere in extragalactic space, also by very weak extragalactic magnetic fields, which are known to exist. Their direction and strength are, however, unknown. CRs up to about a few times 10^19 eV are nearly completely ran-
domized in direction and cannot be associated with any astrophysical object. Even
if the magnetic fields were known, it would currently be impossible to extrapolate observed charged CRs back to their point of origin, because the uncertainty in determining their energy would result in a much too large correlation area on the sky. Therefore,
only neutral particles are currently suited to serve as messenger particles. The two
particle types that ideally fall into this category are photons – gamma (γ ) quanta –
and the neutrinos. All other neutral particles are too short-lived. The neutron, with a lifetime of just below 15 minutes in its rest frame, would, even at the highest energies
of ≈10^19 eV, on average just travel over a distance from the center of our Galaxy to
the Earth. Neutrinos, being weakly interacting particles, are very difficult to detect
and huge volumes of dense material are required to observe a minuscule fraction
of them impinging on the Earth. A review of neutrino astronomy and its historical development is given in Chap. 9. VHE γ rays are therefore currently the best-suited messengers of the relativistic Universe. The challenge in explaining γ-ray production is that two fundamentally different production processes (or a combination of these!) can be at work, namely leptonic or hadronic processes, which currently cannot be distinguished experimentally. Neutrinos, however, can be created only in hadronic processes; their observation could therefore resolve this ambiguity. The main production processes of γ rays are as follows.
Inverse Compton scattering: VHE electrons upscatter low-energy photons over a
broad energy range above the initial one,
\[ e + \gamma_{\mathrm{low\ energy}} \longrightarrow e_{\mathrm{low\ energy}} + \gamma_{\mathrm{VHE}}. \]
Normally, there are plenty of low-energy photons in the environment of stars due
to thermal emission or due to synchrotron emission by the high energy electrons in
the normally present magnetic fields. In the lower energy range, the dominant pro-
duction process of gamma rays from leptons is via synchrotron radiation processes,
where electrons lose a fraction of their energy by synchrotron radiation when pass-
ing through local magnetic fields.
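As a rough orientation, added here for illustration: in the Thomson regime the mean energy of an inverse-Compton scattered photon is roughly (4/3)γ²ε, with γ the electron Lorentz factor and ε the seed photon energy. This is a standard textbook relation, not derived in this chapter, and the values below are purely illustrative.

```python
# Thomson-limit estimate of the mean inverse-Compton photon energy,
# <E_gamma> ~ (4/3) * gamma**2 * eps  (textbook relation; values illustrative).
m_e = 511.0e3                     # electron rest energy in eV
E_e = 1.0e12                      # a 1 TeV electron
eps = 6.3e-4                      # typical CMB seed photon energy in eV

gamma = E_e / m_e                 # Lorentz factor, ~2e6
assert gamma * eps < 0.1 * m_e    # crude check that the Thomson limit applies

E_ic = 4.0 / 3.0 * gamma**2 * eps
print(f"<E_IC> ~ {E_ic:.1e} eV")  # of order a few GeV
```

Higher electron energies and denser, more energetic seed photon fields push the upscattered photons into the VHE range.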
Another production process is by hadronic interactions. Accelerated protons or heavier nuclei interact hadronically with other protons or nuclei in stellar en-
vironments or cosmic gas clouds. Dominantly, charged and neutral pions are pro-
duced. Charged pions decay in a two-step process into electrons and two neutrinos
while neutral pions decay with >99 % probability into two gamma quanta; schemat-
ically:
\[ p + \mathrm{nucleus} \rightarrow p + \cdots + \pi^{\pm} + \pi^{0} + \cdots \quad \text{and} \]
\[ \pi^{0} \rightarrow 2\gamma; \qquad \pi^{\pm} \rightarrow \mu^{\pm}\,\nu_{\mu}; \quad \mu^{\pm} \rightarrow e^{\pm}\,\nu_{\mu}\,\nu_{e}. \]
Heavier secondary mesons, which are much rarer, normally decay into a variety of lighter ones and eventually mostly into π^± and π^0 and/or γ. From observing gamma rays alone it is impossible to distinguish whether they originate from a leptonic or a hadronic parent particle, while the observation of neutrinos would be an unambigu-
ous proof that these messengers come from hadronic interactions. Nevertheless, by
analyzing gamma-ray spectra the dominant parent particle process can sometimes
be deduced.
The main driving force for VHE gamma-ray astronomy was initially the search for
the sources of the charged cosmic rays, while now, after the discovery of many
sources, the interest has shifted to general astrophysics questions. In earlier times the searches were hampered by a number of fundamental problems.
The three decades from 1960 to the end of the 1980s saw very little progress in
discovering one or more sources of VHE gamma rays. Experiments were in a vi-
cious circle: Poor experiments gave very doubtful results and the funding agencies
were not willing to finance large installations. Many physicists who had started their careers in cosmic-ray physics turned to high energy physics (HEP) experiments at accelerators; this field was and still is in an extremely productive phase. In contrast to HEP, very-high energy cosmic-ray experimentalists basically developed no
new techniques; very little progress in understanding the fine structure of shower developments was made because of a lack of sophisticated experimental instruments, insufficient computing power, and a limited theoretical understanding of high-energy hadronic interactions. Two basic detector concepts1 were used (Fig. 6.1): detectors that measure particles of the shower tail hitting the ground, so-called extended air shower arrays (EAS) or, as particle physicists call them, "tail-catcher detectors", and Cherenkov telescopes for observing showers that essentially stop high up in the atmosphere. Both methods make use of the atmosphere as a calorimeter, in combination with either a tracking detector or a light sensor as the calorimetric measuring device.
1 Other detection principles, like the fluorescence detectors, make use of the very weak fluorescence light of an air shower; radio detectors detect radio waves emitted by the shower. Both detection principles are currently unsuited for VHE gamma-ray astronomy because of their extremely high threshold.
Fig. 6.1 Principle of the two commonly used detector techniques for observing cosmic VHE particles. a An extended air shower array. Primary particles hitting the Earth's atmosphere initiate an extended air shower. Shower tail particles, which penetrate down to ground level, are detected by an array of particle detectors. b Cherenkov light detection of air showers that do not need to penetrate down to ground. Cherenkov light generated by the shower particles can be observed by one or more so-called imaging atmospheric Cherenkov telescopes, comprising a large mirror focusing the light onto a matrix of high-sensitivity photosensors in the focal plane. Both detector principles are used for the observation of charged cosmic-ray showers as well as gamma-ray induced showers. Courtesy C. Spiering
High-energy cosmic rays (mostly protons and heavier nuclei, rarely gamma
rays, electrons and positrons) enter the Earth’s atmosphere and generate a cascade of
secondary particles, forming an extended air shower. Initially, in this shower process
the number of secondary particles is rapidly increasing. During this multiplication
process the energy of the primary is partitioned onto the secondaries until the energy of the secondary particles becomes so low that the multiplication process stops. Due to the energy loss of the charged particles by ionization, the shower eventu-
ally dies out. Depending on the primary energy and the nature of the incident particle, the shower might stop at high altitudes or reach the ground.
Fig. 6.2 Simulations of air showers. a Secondaries of a 50 GeV γ-ray primary particle. b Same, but only those secondaries that produce Cherenkov light are plotted. c Secondaries of a 200 GeV proton primary particle. d Same, but only those secondaries that produce Cherenkov light are plotted. In all figures, the particle type of the secondaries is encoded in their track color: red = electrons, positrons, gammas; green = muons; blue = hadrons. Figures courtesy Dario Hrupec (Institut Ruder Bošković, Zagreb), produced using code written by Fabian Schmidt (Leeds University), using CORSIKA (Color figure online)
Showers originating from
hadrons (“hadronic showers”) and electromagnetic showers, initiated by gamma
rays or electrons (positrons) can be discriminated by their development: Fig. 6.2
shows examples of a gamma-ray induced shower and a proton-induced hadronic
shower. If the charged secondary particles are moving faster than the speed of light
in the atmosphere, they emit Cherenkov light within a small angle, which depends
also on the (altitude-dependent) atmospheric density and particle energy. A hadronic
shower starts normally with many secondary pions and a few heavier mesons. Due to
the fact that about one third of the secondary particles in each interaction are π^0 particles, the electromagnetic component of hadronic showers becomes more and more enriched due to the decay π^0 → 2γ. Occasionally, charged pions decay (rather than interact) into muons, which can penetrate deep into the ground. Gamma-ray induced cascades are much narrower
in transverse extension. The dominant multiplication processes in electromagnetic showers are electron/positron bremsstrahlung, producing gamma rays, and e+e− pair production from gamma-ray conversion. The vertical atmosphere corresponds to 27
radiation lengths and 11 hadronic absorption lengths. Due to the transverse momentum in hadronic interactions, multiple scattering and deflections in the Earth's magnetic field, the showers are widened, facilitating their detection. In the case of the Cherenkov detector principle, the small emission angle of the Cherenkov light still illuminates a large area at ground level, typically 200–220 meters in diameter. A telescope anywhere in
this area can detect an electromagnetic shower, provided the Cherenkov light inten-
sity is high enough. Further details of the showering process can be found in Weekes
(2003) or in numerous publications about calorimetry in high energy physics exper-
iments.
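The multiplication and die-out just described is often captured by a simple Heitler-type toy model (cf. Matthews, 2005, cited in the references of Chap. 5): the particle number doubles every splitting length until the energy per particle falls below a critical energy. The sketch below is an illustration added here; the radiation length and critical energy for air are standard textbook values, assumed rather than quoted in this chapter.

```python
import math

# Heitler-type toy model of an electromagnetic air shower: the particle
# number doubles every splitting length d = X0*ln(2) until the energy per
# particle drops below the critical energy E_c; afterwards ionisation
# losses make the shower die out.
X0 = 37.0      # radiation length of air in g/cm^2 (textbook value, assumed)
E_c = 85.0e6   # critical energy in air in eV      (textbook value, assumed)

def heitler(E0):
    """Particle number and depth at shower maximum for primary energy E0 (eV)."""
    n_gen = math.log2(E0 / E_c)           # number of doubling generations
    N_max = E0 / E_c                      # particle number at shower maximum
    X_max = n_gen * X0 * math.log(2.0)    # depth of maximum in g/cm^2
    return N_max, X_max

for E0 in (1e12, 1e15, 1e18, 1e20):
    N_max, X_max = heitler(E0)
    print(f"E0 = {E0:.0e} eV:  N_max ~ {N_max:.1e},  X_max ~ {X_max:.0f} g/cm^2")
```

The toy model reproduces the two features used throughout this chapter: the particle number at maximum grows linearly with the primary energy, while the depth of maximum grows only logarithmically, so low-energy showers die out high in the atmosphere while the highest-energy showers reach ground level. The analogous bookkeeping for hadronic showers, in which roughly one third of the energy is transferred to the electromagnetic channel in each interaction, accounts for the progressive electromagnetic enrichment noted above.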
The air-shower array detectors used in most cases are derivatives of the initial
Geiger tube counters and nearly all followed the low active-density array concept.
The most advanced detectors used large scintillation counters viewed by photomultipliers and read out by simple electronics. These array detectors sampled the shower tail and measured the arrival signals in each hit counter, thus allowing the energy and direction of the shower to be determined. The active area fraction of the array area cov-
ered by detectors was normally below 1 % resulting in rather large uncertainties in
energy determination and modest angular resolution. A big problem was the precise
angular calibration of the detectors, as no reference source was available. Special
variants of the air shower array detectors tracked charged particles passing through the instruments. It was hoped to determine the incident particle direction from a few angular measurements of the secondary tracks in the shower tail. These measurements, however, provided only a very poor directional determination because most of the secondary particles at the shower tail were of low
momentum and multiple scattering was large. The air-shower arrays had basically a
24 h up-time and thus allowed the monitoring of a large fraction of the sky, i.e., they
were in principle well suited for searching for the sources of CRs. Depending on the altitude of the installation, the threshold was very high. At sea level, one achieved a threshold of around 10^14 eV for showers with vertical incidence. For large zenith-angle showers, the energy threshold scales strongly with the zenith angle θ, roughly as cos^−7 θ. The main deficiency of air shower array detectors is their
weak gamma/hadron separation power and poor energy and angular resolution at, and still quite far above, their energy detection threshold. Muons might be used as dis-
criminators. Gamma-ray induced showers contain, however, only very few muons
(originating from rare photo-production processes), while hadronic showers contain
quite a few muons mostly going down to ground level. Muons are normally iden-
tified by their passage through substantial amounts of matter. Therefore muon detectors had to be installed a few meters underground, thus making them an expensive component of the detector; consequently, muon detectors could normally complement only a small fraction of the arrays. The general procedure of searching for
the sources by means of cosmic γ rays was to look for locally increased rates in the
sky maps, because hadronic events would be isotropically distributed and should
form a smooth background. By means of the muon detectors it was hoped to sup-
press the hadronic background further.
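The strong zenith-angle dependence quoted above translates into large threshold factors even at moderate inclinations. A short illustration, using the cos^−7 θ scaling and the sea-level threshold of about 10^14 eV given in the text:

```python
import math

# Energy threshold of a sea-level air shower array as a function of zenith
# angle, using the approximate cos^-7(theta) scaling quoted in the text.
E_threshold_vertical = 1e14   # eV, approximate vertical threshold at sea level

for theta_deg in (0, 30, 45, 60):
    factor = math.cos(math.radians(theta_deg)) ** -7
    print(f"theta = {theta_deg:2d} deg:  E_threshold ~ {E_threshold_vertical * factor:.1e} eV")
```

At 60° zenith angle the threshold is already more than two orders of magnitude above the vertical value.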
The alternative techniques to the air shower arrays were detectors based on the
observation of Cherenkov light from air showers. In 1934, Pavel Cherenkov discov-
ered that charged particles emit some prompt radiation in transparent media when
moving faster than the speed of light in those media (Cherenkov, 1934). Later, Ilia
Frank and Igor Tamm developed the theory for this radiation, dubbed after its dis-
coverer Cherenkov radiation. All three were awarded the Nobel Prize in 1958. In
1947, the British physicist P.M.S. Blackett predicted that relativistic cosmic particles passing through the atmosphere should produce Cherenkov light and even contribute a small fraction (≈10^−4) of the night sky background light (Blackett, 1948). In
1953, W. Galbraith and J.V. Jelley built a simple detector and proved that air show-
ers generate Cherenkov light, which could be detected as a fast light flash during
Fig. 6.3 a The first design of an air Cherenkov counter in a garbage can used by W. Galbraith and
J.V. Jelley in 1953 (Galbraith and Jelley, 1953). Photograph courtesy T.C. Weekes. b Setup and
results of the observations of Galbraith and Jelley (figure taken from the original article)
clear dark nights (Galbraith and Jelley, 1953). With a threshold of around four times
the night sky noise level they observed signals with a rate of about one event per
two to three minutes. This was, by the way, the first demonstration that Cherenkov
light was generated also in gases. Later, they could demonstrate that these signals
were actually caused by air showers due to coincidences with the nearby Harwell
air shower array. The first detectors consisted of a very simple arrangement, i.e.,
a search-light mirror viewed by a photomultiplier, as shown in Fig. 6.3. The first
setup was installed in a garbage can for shielding from stray light. In the following
years the technique was refined by using larger mirrors, by replacing the single photomultiplier tube (PMT) with a few arranged in the focal plane, and even by operating a few of these simple telescopes in coincidence. As in optical astronomy, the air Cherenkov
telescopes had to track the source under observation. Nevertheless, all these many
pioneering efforts were not rewarded by any important discovery. The so-called air
Cherenkov telescopes had some important advantages compared to the air shower
arrays. The telescopes collected light from the entire development of the particle
shower and one could, in principle, measure the energy of the initial particle with
much higher precision and with a typically two orders of magnitude lower threshold
compared to “tail catcher” detectors. The main disadvantages were that one could
only observe with a very limited field of view of a few degrees. Thus one could study
only a single object at a time and observations could only be carried out during clear,
moonless nights. Similarly to the air shower arrays, the first-generation Cherenkov
telescopes could not discriminate between hadronic and electromagnetic showers.
Therefore observers tried to identify sources by just a change in the counting rate when pointing their telescope(s) at the source and then, for the same amount of time, slightly off the source. As the gamma-ray flux was very low compared to the CR flux, such
a method was prone to secondary effects generating rate changes, for example fluc-
tuations due to atmospheric transmission and the night sky light background from
stars.
Because Cherenkov detectors measured the light coming from the entire shower,
they had, besides their better energy resolution, also a better angular resolution.
Basically, the combination of the atmosphere and the detector forms a fully ac-
tive calorimeter with some imaging quality due to the directional distribution of the
Cherenkov light. Figure 6.2 shows simulations of the shower development of typi-
cal γ -ray and proton-induced air showers, particularly illustrating those secondaries
that produce Cherenkov light.
One should be aware that only a fraction of less than 10^−4 of the total shower
energy is converted into photons, and quite a few of these photons get lost before
hitting the ground. Losses are due to absorption by ozone molecules below around
300 nm, Rayleigh scattering (normally well predictable) and Mie scattering due to
fine dust, thin clouds or haze in the atmosphere. In the early times of Cherenkov detectors, and even until the 1980s, losses due to Mie scattering were largely unknown; these losses could not be fully accounted for because no adequate instruments for measuring them were used. Even around 1990, the predictions of the transmission of the
atmosphere for Cherenkov light varied by up to a factor of four. Adding to these uncertainties the systematic errors of the instruments, in particular in the photon detection efficiency (PDE) of the photomultipliers, left observers with measurements that were hardly consistent. Also, as previously mentioned, the first-generation
Cherenkov detectors did not allow one to discriminate between electromagnetic and
hadronic showers. The early Cherenkov telescopes plainly did not have the neces-
sary sensitivity to even observe the strongest sources, and often excesses of three
standard deviations (σ ) in the rate difference between On and Off source observa-
tion were claimed as a discovery.
HEP has made considerable progress in particle studies in laboratories in the
years since 1960. This success could be traced to advances in accelerator developments, as well as to the replacement of the optical readout techniques of bubble chambers or optical spark chambers by continuously developing, more powerful electronic devices, the intense use of computers, and the formation of large collabora-
tions. At the same time, CR physics progressed very little. Detectors were small and
completely inadequate for the necessary collection of complete shower information
and the important discrimination between γ -ray and hadronic showers was very
much hampered by a poor knowledge of the shower development, i.e., by the lack of adequate VHE measurements of, in particular, high-energy hadronic interactions.
Only modest progress in technology – in stark contrast to the progress in HEP experiments – was achieved, because of a lack of resources. Often leftover ma-
terial from dismantled HEP experiments was used, thus reflecting the state of the art
electronics of the 1950s and 1960s. Also, the use of computers was very restricted.
In HEP experiments, discoveries were often made as soon as an excess of at
least 3-σ above background was observed. When cuts based on poor knowledge of
shower developments were applied to CR data to find sources, failures were guaran-
teed because the used selection procedures did not deliver unbiased samples. Thus,
detections often were reported when a subset of cuts provided a 3-σ excess, and this
was then interpreted as a signal. Particularly the air shower arrays, although simple
to operate, suffered from their high and rapidly changing threshold with the zenith
angle. The results of that time were often highly controversial and often disagreed at the level of spectral analyses. These often contradictory 3-σ observations of claimed sources contributed very much to the low reputation of cosmic-ray
physics. Only a few physicists who did not change their focus to HEP in the 1950s
and 1960s continued this research. Even the Cherenkov technique, which looked
quite promising, was not delivering. In retrospect, the lack of success in finding the sources of the CRs is quite understandable; the reasons are summarized below.
In summary, the reasons for failure were the use of detectors of insufficient sensi-
tivity, the lack of information from precision VHE experiments at accelerators, the
lack of understanding the details of the dominant hadronic shower development and
the atmospheric response. Nevertheless, one of the reasons that the activities in the field did not completely fade away was a controversial high-significance result from an array detector set up by the University of Kiel.
At the University of Kiel a small but very active group pursued cosmic-ray research.
In the mid 1970s, the group improved their cosmic-ray experiment by extending the
existing scintillator detector array and measuring more parameters of air showers.
They added quite a few scintillation counters up to a distance of 100 m from the
previous core detector arrangement. Also, they improved the measurement of the
different shower components in the shower tail, such as a measurement of the elec-
tron, the muon and the hadron parameters of individual showers. Figure 6.4 shows
the layout of the inner part of their array. The array comprised the following detec-
tors:
• 27 unshielded scintillation counters for measuring the shower size and core posi-
tion.
Fig. 6.4 Layout of the inner part of the Kiel detector (figure taken from Bagge et al., 1977):
Central part of the EAS detector array. Squares represent scintillation counters of 1(0.25) m2 area
each. Shaded areas indicate a 1.25 × 1.25 m shielding of 2 cm lead plus 0.5 cm of iron on top
of the scintillator. Detectors with additional fast timing photomultipliers are labeled with diagonal
crosses
Fig. 6.5 Measurement of the cosmic-ray flux from the direction of Cygnus X-3 (right ascension distribution in the declination band 40.9° ± 1.5°) and the surrounding sky region as measured in 1983 by Samorski
and Stamm (1983) (upper panel) and 8 years later by HEGRA (lower panel). Figures taken from
Merck (1993)
candidate for the emission of TeV gamma rays. Quite a few experiments claimed
to have seen gamma rays with roughly a 3-σ excess. In their 1983 publication,
the Kiel physicists claimed a 4.4-σ excess at the position of Cygnus X-3, i.e., at a declination of 40.9° ± 1.5° and a right ascension of 307.8° ± 2.0°, from 3,838 hours of observation time and a sensitive area of 2,800 m². Figure 6.5
shows the published results. What enhanced the belief in the result was the finding
of a strong peak in the phase diagram of the 4.8 h periodicity known from
X-ray data ten orders of magnitude lower in energy (Fig. 6.6). As early as 1973,
the SAS-2 satellite had reported gamma radiation within a narrow interval
of the 4.8-h phase (Parsignault et al., 1976). The common belief was that Cygnus X-3
is a binary system generating gamma rays and that the eclipsing of the
compact star by its companion most likely causes the periodic signal. This
result created considerable interest and intense discussion not only in the CR community,
but also in the HEP community, and quite a few groups started to observe
Cygnus X-3 specifically. In essence, this result triggered a revival of interest in the
search for the sources of CRs. In the wake of the Kiel experiment quite a few other
experiments confirmed the result, most of them also claiming to observe a 4.8-h
periodicity. Details go beyond the scope of this article and we mention only a
few references (which also include some general discussion): Lloyd-Evans et al.
(1983); Marshak et al. (1985); Watson (1985). Later, some additional results surfaced
which could have reduced the excitement. It turned out that the excess was
about 1.5° off the position of Cygnus X-3 (Samorski, private communication), but
this was considered consistent with the systematic uncertainty of the shower arrival
direction determined by means of time-of-flight measurements. Also, although not
published in the 1983 article, the muon hodoscope results showed that nearly all
showers in the Cygnus X-3 bin had a muon content very similar to that of hadronic
showers, i.e., the excess showers, too, were consistent with hadronic showers. In
the absence of reliable gamma experiments at accelerators it was speculated that
electromagnetic showers above 10¹⁵ eV have a strong hadronic component, which would
explain the presence of a strong muon component. Cygnus X-3 is about 12 kiloparsecs
away from the Earth, and photons of >10¹⁵ eV from this distance should already
be strongly attenuated by interaction with the cosmic microwave background, see
Sect. 6.3. Again, CR physicists speculated that, in the absence of trustworthy
accelerator experiments, PeV gamma rays might behave quite differently from low-energy
gamma rays. The Kiel results again triggered quite a few 3-σ observations
as well as a similar number of contradicting results, and a flood of exotic theoretical
predictions for an energy range inaccessible to HEP accelerator experiments. It is
interesting to note that the Kiel physicists estimated the gamma-ray flux to be
about 1.5 % of the total VHE CR flux (Samorski and Stamm, 1983). Eight years
later, the HEGRA (High Energy Gamma Ray Astronomy) experiment, started by the Kiel
group on the Canary island of La Palma at a height of 2,200 meters above sea level,
could not confirm any signal from Cygnus X-3 despite much higher precision and
larger data statistics (Merck et al., 1991), see the lower panel of Fig. 6.5. The
CASA-MIA experiment, at that time the EAS experiment with the highest sensitivity
(array size 500 × 500 m²; median energy of 100 TeV), could not find any signal
from Cygnus X-3 either (Borione et al., 1997). As it cannot be excluded that the signal from
Cygnus X-3 is variable, the Kiel result does not necessarily contradict the later negative
observations.
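The attenuation argument invoked above can be made semi-quantitative with a rough, order-of-magnitude estimate, added here as an aside; it assumes a typical CMB photon energy of about 6 × 10⁻⁴ eV and a head-on collision:

\[
  E_\gamma\,\epsilon_{\mathrm{CMB}} \gtrsim (m_e c^2)^2
  \quad\Longrightarrow\quad
  E_\gamma \gtrsim \frac{(0.511\ \mathrm{MeV})^2}{6\times10^{-4}\ \mathrm{eV}}
  \approx 4\times10^{14}\ \mathrm{eV}.
\]

Photons of a few hundred TeV and above can thus pair-produce on CMB photons; the corresponding attenuation length reaches a minimum of roughly 10 kpc in the PeV range, comparable to the quoted 12 kpc distance of Cygnus X-3.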
Also, the detectors needed to be placed at high altitudes of a few thousand meters.
It took over 35 years from the initial observation of Cherenkov light from air showers
by J.V. Jelley until the air Cherenkov technique was rewarded with the first discovery
of a VHE γ-ray emitting source. The first-generation Cherenkov telescopes
generally used relatively small mirrors and very simple readouts in the
form of a single PMT. In 1968, a large 10-m telescope was completed at the Fred
Lawrence Whipple Observatory in Arizona, USA (Fazio et al., 1968). Figure 6.9
shows a photograph of the 10-m Whipple telescope at Mount Hopkins. Again, during
the first phase only a single PMT was used as a “camera”, and thus γ/hadron
discrimination was impossible. Therefore, no source could be detected although the
light-collecting mirror was sufficiently large. Then, under the leadership of Trevor
Weekes, both the instrument and the analysis methods were developed further to
increase the sensitivity, and a method for the crucial γ/hadron separation was implemented,
enabling the search for sources with much lower γ-ray fluxes than in other
experiments. In 1989, the Whipple collaboration published the first convincing observation
of gamma-ray emission from the Crab nebula (Weekes et al., 1989). It was
the culmination of 10 to 20 years of hard experimental work with many incremental
improvements. While quite a number of discoveries in particle physics were
surprise results, such as the discovery of the ψ particle at the SPEAR storage
ring at SLAC (Augustin et al., 1974), the opening of the new window of VHE
γ astronomy required a long and carefully prepared search for the first VHE γ source.
The collaboration concentrated on a source that turned out to be the strongest
steady-state galactic source. Already in 1958, Philip Morrison (1958) and, independently
in 1959, Giuseppe Cocconi (1959) had put forward strong arguments for
observing VHE gamma rays from the Crab nebula and had made predictions of high
γ-ray fluxes. Ever since that time the Crab nebula has been a target of VHE γ-astronomy;
the Whipple collaboration, however, invested a remarkably long observation time of 80 h
spread over three years.
They used a telescope with a large light-collection area and, for the first time, a
camera allowing an efficient γ/hadron separation of the data. The use of an “imaging
camera” was first proposed by T.C. Weekes and K.E. Turver (1977), but it took
another 10 years until the first useful imaging camera was built. This camera, with
only 37 PMTs, covered a field of view (FOV) of 3.5 degrees diameter. It allowed the
recording of coarse pictures of air showers and a simple discrimination between
electromagnetic and hadronic showers. This rudimentary camera was nevertheless
the starting point for a series of successively improved cameras with finer and finer pixel
sampling, while a FOV of 3.5° is still quite standard even for today’s telescopes.
Fig. 6.8 The image parameterization employed by Weekes et al. (1989): The shower image is
characterized by the width and length of the shower ellipse along with some parameters describing
the position and angle of the shower in the camera plane – showers originating from the source
should point back to the source position, i.e., have a small MISS value. Weekes et al. (1989)
showed that there are distinct differences in all of the given parameters between gamma- and hadron-initiated
showers. Today, the original MISS parameter has been superseded by the ALPHA parameter,
describing the angle between the major axis of the shower ellipse and the line connecting its weighted center with the camera center, and,
later still, by the θ² parameter, allowing for analyses without assumptions about the source position
The third and most important achievement was the introduction of a refined
γ/hadron separation method based on the calculation of image moments. This analysis,
developed by the Whipple collaboration in the mid-1980s, combined a
measurement of the shower image orientation, originally proposed
by T.C. Weekes in 1981 (Weekes, 1983), with an evaluation of the differences
between the images of gamma-ray and hadron showers, originally proposed
by A.A. Stepanian, V.P. Fomin and B.M. Vladimirsky (1983). The shower image
should point towards the position of the source in the camera (Fig. 6.8), and the images of
gamma and hadron showers should differ distinctly in shape, gamma showers
being rather slim and concentrated while hadron showers are much wider and more
irregular. Of course, shower fluctuations can sometimes make the discrimination difficult
and limit the discrimination power. The originally rather simple moment analysis,
commonly known as the Hillas parameterization analysis (Hillas, 1985), became the basic
concept for γ/hadron separation in later Cherenkov telescope experiments. It
is still in use in most of today’s experiments, with some refinements based on the
additional information provided by better cameras with finer pixel resolution and better
shower timing data.
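The moment analysis itself is compact enough to sketch. The following minimal Python illustration (not the original Whipple code; the pixel coordinates, the source position assumed at the camera center, and all function and variable names are hypothetical) computes the WIDTH, LENGTH, and ALPHA parameters from pixel positions and amplitudes:

```python
import numpy as np

def hillas_parameters(x, y, signal):
    """Second-moment (Hillas-type) parameters of a Cherenkov camera image.

    x, y   : pixel coordinates in the camera plane (e.g. degrees),
             with the assumed source position at the origin (0, 0)
    signal : calibrated pixel amplitudes (photoelectrons)
    """
    x, y, s = np.asarray(x, float), np.asarray(y, float), np.asarray(signal, float)
    size = s.sum()
    xm, ym = np.average(x, weights=s), np.average(y, weights=s)

    # central second moments = covariance of the light distribution
    sxx = np.average((x - xm) ** 2, weights=s)
    syy = np.average((y - ym) ** 2, weights=s)
    sxy = np.average((x - xm) * (y - ym), weights=s)

    # the eigenvalues of the 2x2 covariance matrix give LENGTH^2 and WIDTH^2
    term = np.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
    length = np.sqrt((sxx + syy) / 2.0 + term)
    width = np.sqrt(max((sxx + syy) / 2.0 - term, 0.0))

    # ALPHA: angle between the image major axis and the line joining the
    # image centroid to the assumed source position, folded into [0, 90] deg
    psi = 0.5 * np.arctan2(2.0 * sxy, sxx - syy)   # major-axis orientation
    phi = np.arctan2(ym, xm)                       # direction of the centroid
    alpha = np.degrees(np.abs(np.arcsin(np.sin(phi - psi))))

    return {"size": size, "width": width, "length": length, "alpha": alpha}
```

Gamma candidates would then be selected by requiring, for example, a small ALPHA (the image points back to the source) together with WIDTH and LENGTH values inside energy-dependent bounds; the actual cut values have to be tuned on simulations and background data.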
While the classical analysis method gave just a 1-σ excess, the γ/hadron analysis
based on the Hillas moments allowed the hadronic background to be reduced by
98 % (with a loss of around 50 % of the γ events). Eventually, an excess at the 9-σ level
was found. This observation was confirmed in the following years by a number of
other Cherenkov telescope experiments and opened the window for VHE γ astronomy.
Many other experiments followed the concept of the second-generation
Cherenkov telescope with a pixelized camera and confirmed the VHE γ emission
of the Crab nebula (Table 6.1).
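The gain from such cuts is conveniently expressed by a quality factor, the signal efficiency divided by the square root of the background efficiency. A back-of-the-envelope sketch with the efficiencies quoted above (the function name is hypothetical):

```python
from math import sqrt

def quality_factor(signal_efficiency, background_efficiency):
    """Improvement of a background-limited significance achieved by a cut:
    S/sqrt(B) scales with eps_signal / sqrt(eps_background)."""
    return signal_efficiency / sqrt(background_efficiency)

# Retaining 50 % of the gamma events while keeping only 2 % of the hadronic
# background improves the significance by roughly a factor of 3.5.
print(quality_factor(0.5, 0.02))
```

The full 9-σ result of course also reflects the accumulated exposure, not the cut quality alone.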
Fig. 6.9 Photo of the Whipple 10-m telescope at Mount Hopkins. Courtesy Brian Humensky
A very important, but hardly noticed, byproduct of the detection of gamma rays
from the Crab nebula was the first trustworthy measurement of the γ flux: ≈0.2 %
of the CR flux within a FOV of 2 degrees around the Crab nebula position and above
about 0.7 TeV. This low value explains why past experiments, with their low γ/hadron
separation power, had no chance of finding a real signal.
Not long after the discovery of VHE γ-emission from the Crab nebula and the search
for some other galactic sources, the Whipple collaboration started a search for γ-emission
from extragalactic sources. Candidates were AGNs of the blazar type that
had been detected in X-rays and low-energy gamma rays in satellite observations.
Amongst the five candidate AGNs they selected for their study, only the weakest
low-energy γ-emitter, the AGN Markarian (Mkn) 421, showed a strong VHE signal
of about 30 % of the Crab nebula flux (Punch et al., 1992). If converted naively to
the intrinsic brightness of the source, roughly 400 million light years (z = 0.031) away, Mkn 421 must
emit over 10⁶ times more VHE gamma rays than the Crab nebula. This observation
Table 6.1 Some imaging Cherenkov telescopes of the 1990s, similar to the Whipple telescope,
which later confirmed the VHE gamma-ray emission of the Crab nebula and also detected some other
gamma-ray sources
Telescope #Cameras/Pixels Collaboration Ref.
References – 1: Vladimirsky et al. (1989), 2: Fomin et al. (1991), 3: Aharonian et al. (1989), 4:
Nikolsky and Sinitsyna (1989), 5: Kifune (1992), 6: Aharonian et al. (1991), 7: Akerlof et al.
(1991), 8: Bowden et al. (1991), 9: Aiso et al. (1997), 10: Barrau et al. (1998), 11: Goret et al.
(1991)
opened the window for extragalactic γ-ray searches. Later, quite a few AGNs were detected,
and by now nearly as many are observed as galactic sources.
Nearly all of them are blazars, i.e. galaxies with an accreting super-massive black
hole in the center, a large accretion disc, and two jets orthogonal to the accretion
disc (sometimes only one jet is seen, presumably due to beaming effects), with one jet
pointing towards the Earth. Most current models assume that the γ-rays are produced in
the jets. Many gamma-detected blazars show rapidly varying γ-activity, which is called
“flaring”. Intensity variations by a factor of ten or more
are observed, in extreme cases up to a factor of ≈50 with respect to the lowest
gamma-ray fluxes seen from the respective blazars. It is likely that most blazars have
not yet been detected because they are currently in a “dormant” state. Also, the sensitivity
of current Cherenkov telescopes might only allow one to see the strongest
flaring sources, as up to now nearly all observed blazars have a super-massive black
hole of at least 10⁸ solar masses (Wagner, 2008).
6.5.2 A Persistently Flaring Blazar: Mkn 501 Flares for Over Six
Months
Soon after the discovery of Mkn 421, the Whipple collaboration discovered another
blazar, Mkn 501 (Quinn et al., 1996) at a redshift of z = 0.034, at nearly the same
Fig. 6.10 The light curve of Mkn 501 in summer 1997. Flux variations by up to a factor of 20 were
observed in the VHE domain. The flaring activity extended over the entire observation period of 6.7
months. Due to a new observation method introduced by HEGRA it was possible to observe such a
strong source also during partial moonlight. Data from D. Kranich’s Ph.D. thesis (Kranich, 2002).
The TeV data show much larger fluctuations than the X-ray data recorded by RXTE (Remillard
and Levine, 1997)
distance as Mkn 421 and with very similar properties. The VHE γ-emission of Mkn
501 was soon afterwards confirmed by the HEGRA collaboration (Bradbury et al.,
1997).
In 1997, Mkn 501 showed a series of extremely large outbursts extending over
the entire observation period of that year, a behaviour never seen from
any other AGN up to now. The flare intensities reached peak values exceeding the low state
by up to approximately a factor of 20. The flaring activity was observed by the HEGRA
stereoscopic system (Aharonian et al., 1999), TACTIC (Joshi et al., 2000), and the
Whipple telescope (Quinn et al., 1999). At that time, the HEGRA collaboration introduced
a new method for observing strong sources also during partial moonlight,
so that HEGRA was able to collect a nearly continuous nightly light curve over
nearly 6 months. Figure 6.10 shows this flux measurement above 1.5 TeV from the
HEGRA collaboration during the 1997 observation period. The data are compared
with the X-ray data from the RXTE satellite in the range 2 keV < E < 10 keV
(Remillard and Levine, 1997). Figure 6.10 highlights the enormous variation in the
Fig. 6.11 Photograph of the Crimean multi-telescope setup. Each barrel contains one Cherenkov
telescope. Three telescopes form a unit. Each of those units can be positioned along a railway
system and view the air showers under slightly different angles. This arrangement allowed both
coincidence measurements and simple multi-telescope observations. The system was used from
1960 until 1963. Figure courtesy T.C. Weekes
highest energy domain, while at lower energies a change in the X-ray flux was also
observed, but with much smaller and smoother variations.
Soon after the first Cherenkov telescopes were used to look for the sources of cosmic
rays, attempts were made to improve the sensitivity by means of the stereo technique,
i.e. by viewing the showers from spaced telescopes. Chudakov and coworkers at
the Catsiveli site in Crimea were the first to attempt designing a multi-telescope
system (Chudakov et al., 1963), which also facilitated simple stereo observations.
They used 12 detectors, each comprising a large mirror and only one
photomultiplier. Units of three detectors each were installed on a simple
mount, and these units could be separated on rails. Figure 6.11 shows a photo of their
arrangement. With normally only 20 m separation and a single large-diameter
photomultiplier per telescope, the stereo quality was rather poor, and the setup was more a coincidence
measurement for reducing accidental triggers. Some time later, J. Grindlay (Grindlay
et al., 1975) tried another stereo approach (Fig. 6.12) with only two similar
telescopes mounted on a circular rail system allowing a separation of up to 180 m.
Later, some other similar attempts were made, but none of them
led to a high-significance source detection. The lack of any discovery can be traced
back to the missing γ/hadron separation power. After the breakthrough discovery of
the Whipple collaboration using a pixelized camera, part of the extended Whipple
collaboration converted an 11-m solar telescope, originally located in New Mexico,
into a Cherenkov telescope with a 37-pixel camera, dubbed Granite, and genuine stereo
observations were pursued. Unfortunately, the sensitivity of the stereo system was
worse than that of the Whipple telescope alone. The reasons were mirrors of poorer optical
quality and a tendency towards icing due to radiative cooling, caused by the low heat
conductivity of the foam backing of the mirrors. Additionally, the spacing of ≈120 m between the two
telescopes did not yield enough events detected simultaneously in both
telescopes and was thus far from optimal. The first successfully operating stereo
system with significantly improved sensitivity was built by the HEGRA collaboration.
After the publication of a 4.4-σ excess from the direction of Cygnus X-3, the
Kiel physicists in 1987 started to build an improved scintillation counter array, the
HEGRA experiment, at the Roque de los Muchachos (2,200 m asl) observatory on
the Canary island of La Palma.
Already in the early 1990s, the Kiel institute leader, the late Otto Claus Allkofer,
had discussed with Felix Aharonian from the Armenian group in Yerevan the
possibility of adding five Cherenkov telescopes, given the excellent optical conditions
at the La Palma site. The Armenian group had already built a small imaging
Cherenkov telescope on Mount Aragats and had plans for a stereo system.
Eventually, a prototype Cherenkov telescope and five telescopes operating as a
stereo system were built. The system was very successful, with an increase in sensitivity
of about a factor of 10 compared to a single telescope of the same size. The
reasons were manifold and are shown in a sketch in Fig. 6.13. With a stereo system,
showers are observed from different directions. This can improve the γ/hadron separation,
because at least one telescope views the shower under close-to-optimal conditions and because
the so-called head-tail ambiguity of single telescopes is suppressed. In single-telescope
pictures recorded by a classical gated analog-to-digital converter (ADC) readout,
there is an ambiguity as to whether the shower direction points towards or away
from the potential source location. In stereo systems one can cut the background by
a factor of two by resolving this ambiguity. Stereo observations also provide a much better
shower energy determination and a better angular resolution, allowing the study
of extended sources. The HEGRA stereo system was the first to regularly use
a readout with flash ADCs, now common in all Cherenkov stereo systems.
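The geometric core of stereo reconstruction can be sketched in a few lines (a simplified illustration, not the HEGRA algorithm; real analyses combine all telescope pairs, weighted by image quality, in a common sky frame):

```python
import numpy as np

def intersect_image_axes(cog1, psi1, cog2, psi2):
    """Estimate the shower direction by intersecting the major axes of the
    images seen by two telescopes, given in a common coordinate system.

    cog1, cog2 : image centroids (x, y)
    psi1, psi2 : major-axis orientation angles (radians)
    """
    cog1, cog2 = np.asarray(cog1, float), np.asarray(cog2, float)
    d1 = np.array([np.cos(psi1), np.sin(psi1)])
    d2 = np.array([np.cos(psi2), np.sin(psi2)])
    # solve cog1 + t1*d1 = cog2 + t2*d2; nearly parallel axes make the
    # system ill-conditioned, which is why well-separated views are preferred
    t1, _ = np.linalg.solve(np.column_stack((d1, -d2)), cog2 - cog1)
    return cog1 + t1 * d1
```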
Fig. 6.14 The VHE (E > 300 GeV) sky map in the year 2000
In the last decade of the last century a few other stereo systems were built (Table 6.1),
but none reached the sensitivity of the HEGRA experiment. Nowadays,
stereo telescope systems are the main tool in VHE γ-astronomy.
The progress in discovering new VHE gamma-ray emitting sources after the detection
of the Crab nebula was initially rather slow. Figure 6.14 shows the VHE
sky map in the year 2000. Only eight more sources had been discovered, all of them by
“imaging” Cherenkov telescopes, which became the “workhorse” for the searches.
These second-generation Cherenkov telescopes were simply not sensitive enough
to observe sources emitting VHE gamma rays below 10 % of the Crab nebula flux.
Nevertheless, confidence in the observation techniques and analysis methods grew.
For nearly every group observing from the northern half of the Earth, the Crab
nebula was the test bench. The number of extragalactic sources found equalled
that of galactic ones. All extragalactic sources were blazars, while two
galactic sources were pulsar-wind nebulae and two were supernova remnants (SNR). The
community followed a suggestion by Trevor Weekes that observed sources be accepted
as discoveries only if their significance exceeded 5 σ; all sources on the
sky map were confirmed by at least one other experiment.
Table 6.2 Table of the third-generation observatories with large mirror telescopes. The overview
lists location and altitude of the observatories, the diameter and number (“#”) of the individual
telescopes, and the start dates of operations
Name Location Diameter # Altitude Start
Many astronomers were still not convinced that the new field would really contribute
to the fundamental understanding of the relativistic Universe, and the meager results
of the past did not seem to justify the diversion of funding from other areas.
Nevertheless, the results of mainly the last decade of the last century made it
obvious that new, better telescopes would lead to a breakthrough in the field. Also,
the stereo-observation technique was generally accepted as the approach that would
reach sensitivities of around 1 % of the Crab nebula flux, i.e., a 5-σ excess signal within
50 h of observation time. Eventually, four large projects materialized: Cangaroo III,
H.E.S.S., MAGIC and VERITAS.
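Such sensitivity figures translate into required observation times through a simple background-limited scaling argument. The sketch below uses the quoted reference point (1 % of the Crab flux in 50 h) and ignores the systematic limits that dominate for very faint sources:

```python
def required_time_hours(flux_in_crab_units, sensitivity_in_crab=0.01, t_ref_hours=50.0):
    """Background-limited scaling: the significance grows as flux * sqrt(t),
    so a 5-sigma detection needs t proportional to (sensitivity / flux)^2.
    The reference point is a detector reaching `sensitivity_in_crab` of the
    Crab flux in `t_ref_hours`."""
    return t_ref_hours * (sensitivity_in_crab / flux_in_crab_units) ** 2

# A 10 %-Crab source would then need only about half an hour,
# while a 0.5 %-Crab source would need about 200 h.
print(required_time_hours(0.10), required_time_hours(0.005))
```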
The plans for these improved third-generation telescopes started to evolve from around
1994 onwards. The construction of the first Cangaroo III telescope already started
in 1997, the main activities of H.E.S.S. began around 2000, those of MAGIC in
2002, and those of VERITAS in 2003. Table 6.2 lists some essential information about the
third-generation observatories.
6.6.3 H.E.S.S.
H.E.S.S. (High Energy Stereoscopic System) was built by a large international col-
laboration in the years 2000–2003 in Namibia, at 23°16′ S, 16°30′ E and 1,800 m
above sea level (Hofmann, 2001). H.E.S.S. comprises four 12-m diameter imaging
Cherenkov telescopes with a 110 m² mirror area and a multi-pixel camera of 960 PMTs
each. The observatory is suited for the study of gamma-ray sources in the energy
range between 100 GeV and 100 TeV. The stereoscopic system has a sensitivity of
0.7 % of the Crab nebula flux within 25 hours of observation time when pointing
to zenith. Like Cangaroo III, H.E.S.S. is located in the Southern hemisphere and is
particularly suited for the observation of sources in the central region of the galactic
plane. H.E.S.S. is currently the most successful observatory, as it has discovered
more than half of all known VHE sources. Due to its cameras with a large FOV of
5°, H.E.S.S. has studied quite a number of extended sources. For example,
a scan of the supernova remnant RX J1713.7-3946 in the Galactic plane (discovered
in X-rays by ROSAT, Pfeffermann and Aschenbach, 1996) highlights the detection
power for extended sources and is shown in Fig. 6.15 (Aharonian et al., 2006a).
In 2012/2013, H.E.S.S. will be extended by a central fifth telescope with a 28-m
diameter reflector and an energy threshold of 30–40 GeV.
6.6.4 MAGIC
The MAGIC collaboration pursued another path in the development. They designed
an ultra-large Cherenkov telescope with a 17-m diameter mirror (Baixeras et al.,
2003) on La Palma (28.8° N, 17.8° W, 2,225 m above sea level); a second telescope
was constructed later. The telescope is based on numerous novel concepts,
such as a low-weight carbon-fiber reinforced plastic space frame supporting the
diamond-turned, low-weight, sandwich aluminum mirrors. To counteract small deformations
during tracking, the matrix of small mirror elements, approximating a
parabolic mirror profile, is corrected by an active mirror control system. The total
moving part of the telescope has a weight of only ≈70 tons and can be repositioned
to any point on the sky within 20 seconds in order to observe at least part
of the emission of gamma-ray bursts (GRBs). The second telescope was built only after the novel features
of the first one had proven to work. The first telescope started to take data in 2004, and
stereo observations with both telescopes commenced in 2009. The first telescope
has a threshold of 60 GeV and initially had a sensitivity of ≈1.5 % of the Crab nebula
flux, while the stereo system has a threshold of 50 GeV and a sensitivity of 0.8 % of
the Crab nebula flux.
6.6.5 VERITAS
6.6.6 Milagro
Milagro was the first really successful tail-catcher detector. Progress in understanding
the shower development at its tail and the use of a detector with 100 % active area
around the shower core axis finally produced the first convincing detection of some
VHE gamma-ray sources by this technique. The detector, dubbed Milagro (Sinnis, 2009), made use
of a large water pond of 80 × 50 m with a depth of 8 m. It was located
near Los Alamos at an altitude of 2,630 m above sea level. 175 small water tanks
surrounded the water pond to collect information about the radial shower extension.
The charged shower tail particles generated Cherenkov light when passing through the water.
Electrons from γ-showers normally stop in the first 2 meters, while hadronic
showers contain some particles that penetrate deeply into the water pond. The water
pond was subdivided into two layers of 2.8 × 2.8 m cells. Each cell was viewed by
Fig. 6.16 Schematic cross section of the water pond of the Milagro detector. Depending on the
type of incident particle, PMTs in the upper and lower region of the pond would detect light,
as illustrated. The gamma/hadron separation of Milagro was based on these different penetrating
powers
one large PMT. The top layer of 450 PMTs was under 1.4 meters of water and the
bottom layer of 273 PMTs was under 6 m of water, as illustrated in Fig. 6.16.
Milagro had considerable γ/hadron separation power. Air showers induced
by hadrons contain a penetrating component (muons and hadrons that penetrate
deeply into the reservoir). This component resulted in a compact bright region in
the bottom layer of PMTs. A cut based on the distribution of light in the bottom
layer removed 92 % of the background cosmic rays while retaining 50 % of the
gamma-ray events. The detector was suited for the observation of showers above
2 TeV (for showers arriving close to the zenith) and had an up-time of 24 h per day. At
45° zenith angle the threshold was 20 TeV. The collaboration operated the detector
from 2002 to 2006.
Milagro, with its rather high threshold, was best suited for the search for galactic
sources in the outer part of the galactic plane. During a survey of the galactic
plane (Abdo et al., 2007), three new, partly quite extended sources were discovered
and a few already known sources were confirmed. Milagro stopped operation in 2007.
Another successful air-shower array is the Tibet AS array operated by a Japanese collaboration
(Huang et al., 2009). This detector, at 4,300 m asl, comprises a large number of
scintillation counters but still samples only a fraction of its surface area and
therefore has a threshold of 3 TeV. Air-shower detectors have a 24 h up-time and should
in principle be well suited for the detection of gamma-ray bursts, but their currently
high threshold has prevented any detection up to now.
Shortly after completion of the four H.E.S.S. telescopes, the collaboration started
scanning the inner part of the galactic disk with a sensitivity of 2 % of the Crab
nebula flux above 200 GeV. In order to achieve a nearly uniform sensitivity across
the galactic disk, the four telescopes were slightly re-adjusted to cover a strip of
±3° latitude relative to the Galactic plane. The scan extended from −30° to +30° in
Fig. 6.17 The H.E.S.S. scan of the inner region of the Galactic plane with 13 newly discovered
sources (Aharonian et al., 2006a)
longitude, covered by 500 pointings in a total of 230 hours. In total, 14 new sources
were discovered (Fig. 6.17), about half of them unidentified and the other
half mostly pulsar-wind nebulae (candidates for the sources of CRs) and SNRs, with
≥4-σ significance after all trials (Aharonian et al., 2006a). Later, a partial rescan
with higher sensitivity and an improved analysis method increased
the number of detected sources to over 30. Also, a few binary objects were found to
be gamma-ray emitters. This scan made H.E.S.S. the most successful observatory
for the detection of galactic sources. Quite a few sources could not be classified.
The richness of sources found in the galactic plane suggests that a significantly larger
number can be expected with the next generation of higher-sensitivity telescopes.
About one third of all stars are arranged in binary systems. Already during the
Cygnus X-3 studies by the Kiel and other groups, the most widely accepted model for
the VHE gamma-ray production was a binary system with a periodicity of 4.8 hours.
In the 1980s, binaries were considered to be the sources of cosmic
gamma rays. Later, after quite a few VHE gamma-ray sources had been discovered and
none of them could be explained as binary systems, the question was raised at nearly
every International Cosmic Ray Conference after the discovery of the Crab nebula and
before 2005: where are the binaries? Eventually, both H.E.S.S. and MAGIC
detected binaries in the Galactic plane. H.E.S.S. published the first discovery of a
VHE binary, PSR B1259-63 (Aharonian et al., 2005a), and of LS 5039 in the Southern
sky (Aharonian et al., 2005b). Soon afterwards MAGIC discovered the first binary
in the Northern sky, LS I+61 303 (Albert et al., 2006). Figure 6.18 shows the light
curves of the three binaries. The composition of the binaries is not evident; Fig. 6.19
shows the two preferred models.
The first decade of the 21st century saw considerable progress in VHE gamma-ray
astronomy. The third-generation Cherenkov telescopes achieved a sensitivity of
≈1 % of the Crab nebula flux, and currently about one new source per month is discovered.
Nevertheless, one sees a gradual shift from “source hunting” to the study
of the underlying physics and of fundamental physics issues. The recent successes
have triggered ideas for quite a few new detectors offering another large step in
sensitivity, which should be realized in the coming years. A very short overview of
the new ideas follows.
Around 2007 it became evident that a further large increase in sensitivity could
not be achieved by improving single telescopes, but only by considerably increasing the
Fig. 6.19 The two preferred models of a binary system emitting gamma rays. (a) The so-called
microquasar model, with a small black hole accreting mass from the companion star; gamma rays
are produced in the jets. (b) A binary system as proposed by Felix Mirabel (2006), in which a pulsar
orbits a Be star
number of telescopes in an array configuration. The idea for CTA (Cherenkov Telescope
Array) was born. Building a detector covering the energy range from 20 GeV
to 100 TeV (Actis et al., 2011) requires a large number of telescopes of three different
sizes (23 m, 12 m, and 3 to 5 m diameter, respectively) in order to achieve
a sensitivity 10 times higher than that of H.E.S.S. (see Fig. 6.21 for the predicted
sensitivity). The sites have not yet been selected. To cover the entire sky, it will
be necessary to select one site in the Southern hemisphere and one in the Northern
hemisphere. The energy range of CTA South will be extended to about 100 TeV
for the study of galactic sources, while CTA North will need only the two larger
telescope types, because multi-TeV gamma rays from extragalactic sources at higher
redshift are suppressed by the interaction with the low-energy photon fields (see
Sect. 6.3) and are consequently no longer detectable. The initially European project has
now been enlarged to a worldwide collaboration approaching 900 members. CTA will
start observations around 2015–2017. In their initial phase, the telescopes will be
relatively conservative copies of the current third-generation telescopes.
Four other projects have passed the level of first ideas and are currently under de-
tailed evaluation or in a first phase of construction. AGIS (Vandenbroucke, 2010)
and MACE (Koul et al., 2005) are Cherenkov telescopes, while HAWC (Salazar,
2009) is an extended air-shower (EAS) array at high altitude for achieving a low
threshold. LHAASO (Cao et al., 2011) is a facility that combines various air shower
detector elements and Cherenkov telescopes.
A final stage of stellar evolution is reached when a star runs out of the fuel nec-
essary for the fusion reactions that counteract the gravitational pressure. If the star
is heavy enough, the collapse of the stellar core is followed by the ejection of the
outer shells of the stellar material. Depending on the mass of the remaining object,
a neutron star or a black hole is formed; the ejected material may interact with inter-
stellar material. This expanding structure is called a supernova remnant. For a long
time, supernova remnants have been suspected to be the sources of charged cosmic
rays up to energies of at least 10¹⁵ eV. SNRs generally are extended objects, and any
VHE gamma-ray emission observed traces either, in the case of hadronic origin, regions
in which cosmic rays interact with target material, or, in the case of leptonic origin,
the populations of high-energy electrons that exist in SNRs. Showcase examples of SNRs detected and spatially
resolved in gamma rays so far are the four objects RX J1713.7-3946 (Aharonian
et al., 2006a), RX J0852.0-4622 (Lemoine-Gourmard et al., 2007), RCW 86 (Aharonian
et al., 2009b), and SN 1006 (Acero et al., 2010). Generally, the VHE emission
seems to resemble the X-ray morphology in these SNRs, favoring a leptonic origin
of the VHE emission; in particular, SN 1006 and RX J1713.7-3946 are most likely
dominated by leptonic acceleration. On the other hand, the gamma-ray emission may
be associated with the presence of a molecular cloud (traced by the CO density),
which can serve as target material for hadronic gamma-ray production. Such
an association is given in IC 443 (Albert et al., 2007), whereas in Tycho’s supernova
remnant a combination of Fermi-LAT (GeV) and VERITAS spectra (Acciari et al.,
2011) rules out leptonic acceleration models. The energy spectra of SNRs are particularly
hard, with a cutoff that sets in at about 20 TeV, indicating that the primary
particles responsible for the gamma-ray emission must have had energies of some
hundred TeV.
In these systems, high-energy electrons originating from the pulsar power the gamma-ray
emission. PWN are the most commonly found type of galactic gamma-ray source.
Nonetheless, not only the nebula itself may emit gamma radiation: as recently discovered
(Aliu et al., 2008), the pulsar in the center of the Crab nebula emits pulsed
VHE gamma radiation.
About one third of the sources found in scans of the galactic plane could not yet
be associated with counterpart objects. For these, spectral and temporal properties of
the TeV emission, and spatial co-location with known emission at other wavelengths
are being investigated to learn about their nature.
Strong stellar winds, as they typically exist in star-forming regions and stellar clus-
ters, may accelerate particles and lead to VHE gamma-ray production. Stellar winds
seem natural candidate regions for VHE gamma-ray production as they also drive
particle acceleration in binary systems and outflows in pulsar systems. Recently,
TeV gamma-ray emission has been discovered from the young stellar cluster Westerlund 2
(Aharonian et al., 2007; Abramowski et al., 2011a), and indications have
been found in the Cyg OB2 star association.
The galactic plane scan revealed a substantial number of sources with no evident
counterpart at any other wavelength – about 20 such “dark accelerators” are
now known. Some objects could later be identified as PWN or SNR by catalog
searches, by reassessing the likelihood of an association with a known object
(e.g., HESS J1303-631/PSR J1301-6305), or by targeted follow-up observations
(e.g., HESS J1813-178, Helfand et al., 2007). However, for quite a few unidentified
sources such methods have failed to reveal their nature (Aharonian et al., 2008). In particular,
a lack of X-ray emission may hint at a hadronic origin of the gamma-ray
emission. Detailed studies of the (temporal, spectral, and morphological) features
of these TeV-only emitters may help to identify the particle acceleration process at
work and may also help to answer the question of whether these objects represent a
source class of their own. However, as particle acceleration that leads to gamma-ray
production generally requires rather characteristic parameters of the accelerator
(like magnetic field strength, extension, and densities), it may be difficult to
establish a new class of TeV emitters.
In a certain sense, the gamma-ray source at the center of our Galaxy is also an
unidentified TeV source (Kosack et al., 2004; Aharonian et al., 2004). Here, the difficulties
come from source confusion, as the Galactic center region is a very busy one:
besides star-forming regions (Sgr B1, Sgr B2, Sgr D), the most prominent source
towards the Galactic center is Sgr A, within which Sgr A* has been identified as
possibly being a super-massive black hole. In addition, a dark-matter annihilation
signal could also be expected from the center of our Galaxy. The gamma-ray energy
spectrum determined for the Galactic center source is rather hard, favoring a PWN
origin and disfavoring a dark-matter origin. Dedicated searches for a dark-matter
signal are reported, e.g., in Abramowski et al. (2011c).
The second VHE gamma-ray source, detected in 1992, was the active galactic
nucleus Mkn 421. This source, like most of the well over 20 AGNs discovered
to date, is a blazar, a subclass of AGN with relativistically beamed
emission towards the observer. Blazars have so far been detected in a redshift range from
z = 0.031 (Mkn 421, Punch et al., 1992) up to z = 0.536 (3C 279, Albert et al.,
2008a). Active galactic nuclei are powered by the accretion of matter onto super-massive
black holes of some billion solar masses and show high variability down
to timescales of minutes and below, indicating complex particle acceleration and
cooling processes at work within the jet acceleration regions. The most remarkable
flaring activity so far has been observed in PKS 2155-304 (Acero et al., 2012), with
flux intensities exceeding the otherwise mostly “dormant” emission by an order of magnitude
(Aharonian et al., 2009c; Abramowski et al., 2010) and flux variations on
timescales of minutes.
The TeV AGN population was for a long time dominated by so-called high-peaked BL
Lac objects (Fig. 6.22), which are AGNs with the peak of their synchrotron emission
in the X-ray range of the energy spectrum. In leptonic acceleration models the
TeV emission is then interpreted as photons scattered off the same electron population
that created the X-ray emission. Lately, some “low-peaked” BL Lac objects
(with the synchrotron peak in the optical regime; e.g. BL Lac itself and W Comae) and
flat-spectrum radio quasars with synchrotron peaks at even lower energies have been discovered, e.g.,
3C 279 (Albert et al., 2008a) and PKS 1222+21 (Aleksić et al., 2011).
Fig. 6.22 A strictly simultaneously measured spectral energy distribution of the blazar Mkn 421
(Abdo et al., 2011). The low-energy peak is believed to represent synchrotron radiation from a population
of relativistic electrons, while the origin of the second, high-energy peak is debated. It may
be due to inverse-Compton radiation of the same electron and photon population (“synchrotron
self-Compton” emission), to external photons scattering off the electrons (“external Compton” emission),
or it may be of hadronic origin. High-energy gamma-ray observations play a crucial role in
discriminating between possible scenarios due to their sensitivity to time variations and to the spectral
shape of the SED at around GeV/TeV energies
Recently, nearby radio galaxies like M 87 and Centaurus A have also been
identified as gamma-ray emitters. These objects are close by and have jets misaligned
with the line of sight. This allows spatial studies of the jets and of the regions
within them responsible for the particle acceleration, particularly by combining
high-resolution radio observations with TeV light curves (Acciari et al., 2009a; Harris
et al., 2011; Abramowski et al., 2012).
Recently, the central galaxy of the Perseus Cluster was also detected in TeV gamma
rays. The emission seen so far, however, is compatible with what is expected from
the galaxy itself; no extended, cluster-wide emission could be claimed (Aleksić
et al., 2012a).
Fig. 6.23 The VHE (E > 100 GeV) sky map in the year 2010
6.8 The VHE Sky Map at the 98th Year of Cosmic-Ray Studies
The first decade of the new millennium saw a rapid expansion in the number of discoveries after the
large Cherenkov observatories became fully operational. Nearly every month a new
source was discovered. Figure 6.23 shows the E > 100 GeV sky map in the year
2010, with over 110 sources. About 60 % of all sources are located in the galactic
plane, while about 40 % are of extragalactic origin. The central part
of the galactic plane is well visible from the H.E.S.S. observatory site, while only
the outer wings of the galactic plane are visible to the two northern Cherenkov
observatories, MAGIC and VERITAS. Also, some sources were detected by the “tail
catcher” detector Milagro.
Currently, the productivity of the three large telescope installations is high. In
the 100th year of CR physics the number of discovered sources is approaching 150.
The two Northern installations have a sensitivity of about 0.8–1 % of the
Crab nebula flux for a 5-σ signal within 50 h of observation time, while H.E.S.S. has
a sensitivity close to 0.7 % of the Crab nebula flux. Still, most of the extragalactic
sky has not yet been scanned.
References
Abdo, A.A., et al. (Milagro collaboration): TeV gamma-ray sources from a survey of the galactic
plane with Milagro. Astrophys. J. Lett. 664, L91–L94 (2007)
Abdo, A.A., et al.: Fermi-LAT observations of Markarian 421: the missing piece of its spectral
energy distribution. Astrophys. J. 736, 131–152 (2011)
Abramowski, A., et al. (H.E.S.S. collaboration): VHE gamma-ray emission of PKS 2155-304:
spectral and temporal variability. Astron. Astrophys. 520, A83 (2010)
Abramowski, A., et al. (H.E.S.S. collaboration): Revisiting the Westerlund 2 field with the H.E.S.S.
telescope array. Astron. Astrophys. 525, A46 (2011a)
Abramowski, A., et al. (H.E.S.S. collaboration): Search for Lorentz invariance breaking with a
likelihood fit of the PKS 2155-304 Flare data taken on MJD 53944. Astropart. Phys. 34, 738–
747 (2011b)
Abramowski, A., et al. (H.E.S.S. collaboration): Search for a dark matter annihilation signal from
the galactic center halo with H.E.S.S. Phys. Rev. Lett. 106, 161301 (2011c)
Abramowski, A., et al. (H.E.S.S. collaboration, MAGIC collaboration, VERITAS collaboration):
The 2010 very high energy gamma-ray flare & 10 years of multi-wavelength observations of
M87. Astrophys. J. 746, 151–169 (2012)
Acciari, V.A., et al. (VERITAS collaboration, MAGIC collaboration, H.E.S.S. collaboration): Ra-
dio imaging of the very-high-energy gamma-ray emission region in the central engine of a radio
galaxy. Science 325, 444–448 (2009a)
Acciari, V.A., et al. (VERITAS collaboration): A connection between star formation activity and
cosmic rays in the starburst galaxy M82. Nature 462, 770–772 (2009b)
Acciari, V.A., et al. (VERITAS collaboration): Discovery of TeV gamma ray emission from Ty-
cho’s Supernova Remnant. Astrophys. J. Lett. 730, L20 (2011)
Acero, F., et al. (H.E.S.S. collaboration): Detection of gamma rays from a starburst galaxy. Science
326, 1080–1082 (2009)
Acero, F., et al. (H.E.S.S. collaboration): First detection of VHE γ -rays from SN 1006 by HESS.
Astron. Astrophys. 516, A62 (2010)
Acero, F., et al. (H.E.S.S. collaboration): A multiwavelength view of the flaring state of PKS 2155-
304 in 2006. Astron. Astrophys. 539, A149 (2012)
Actis, M., et al. (CTA consortium): Design concepts for the Cherenkov telescope array CTA: an
advanced facility for ground-based high-energy gamma-ray astronomy. Exp. Astron. 32, 193–
316 (2011)
Aharonian, F., et al.: Cherenkov imaging TeV gamma-ray telescope. In: Very High Energy Gamma
Ray Astronomy: Proceedings of the International Workshop, Crimea, USSR, April 17–21, 1989,
p. 36 (1989)
Aharonian, F.A., Akhperjanian, A.G., Kankanian, A.S., Mirzoyan, R.G., Stepanian, A.A., Müller,
N., Samorski, M., Stamm, W., Bott-Bodenhausen, M., Lorenz, E., Sawallisch, P.: A system of
air Cherenkov telescopes in the HEGRA array. In: Proceedings of the 22nd International Cosmic
Ray Conference, vol. 2, Dublin, Ireland, pp. 615–618 (1991)
Aharonian, F., et al. (HEGRA collaboration): The temporal characteristics of the TeV gamma-
radiation from MKN 501 in 1997. I. Data from the stereoscopic imaging atmospheric Cherenkov
telescope system of HEGRA. Astron. Astrophys. 342, 69–86 (1999)
Aharonian, F., et al. (H.E.S.S. collaboration): Very high energy gamma rays from the direction of
Sagittarius A*. Astron. Astrophys. 425, L13–L17 (2004)
Aharonian, F., et al. (H.E.S.S. collaboration): Discovery of the binary pulsar PSR B1259-63 in
very-high-energy gamma rays around periastron with HESS. Astron. Astrophys. 442, 1–10
(2005a)
Aharonian, F., et al. (H.E.S.S. collaboration): Discovery of very high energy gamma rays associated
with an X-ray binary. Science 309, 746–749 (2005b)
Aharonian, F., et al. (H.E.S.S. collaboration): The H.E.S.S. survey of the inner galaxy in very
high-energy gamma-rays. Astrophys. J. 636, 777–797 (2006a)
Aharonian, F., et al. (H.E.S.S. collaboration): 3.9 day orbital modulation in the TeV gamma-ray
flux and spectrum from the X-ray binary LS 5039. Astron. Astrophys. 460, 743–749 (2006b)
Aharonian, F., et al. (H.E.S.S. collaboration): Detection of extended very-high-energy gamma-ray
emission towards the young stellar cluster Westerlund 2. Astron. Astrophys. 467, 1075–1080
(2007)
Aharonian, F., et al. (H.E.S.S. collaboration): HESS very-high-energy gamma-ray sources without
identified counterparts. Astron. Astrophys. 477, 353–363 (2008)
Aharonian, F., et al. (H.E.S.S collaboration): H.E.S.S. observations of the prompt and afterglow
phases of GRB 060602B. Astrophys. J. 690, 1068–1073 (2009a)
Aharonian, F., et al. (H.E.S.S. collaboration): Discovery of gamma-ray emission from the shell-
type supernova remnant RCW 86 with HESS. Astrophys. J. 692, 1500–1505 (2009b)
Aharonian, F., et al. (H.E.S.S. collaboration): Simultaneous observations of PKS 2155-304 with
H.E.S.S., Fermi, RXTE and ATOM: spectral energy distributions and variability in a low state.
Astrophys. J. 696, L150 (2009c)
Aharonian, F., et al. (H.E.S.S. collaboration): Very high energy gamma-ray observations of the
binary PSR B1259-63/SS2883 around the 2007 Periastron H.E.S.S. collaboration. Astron. As-
trophys. 507, 389–396 (2009d)
Aiso, S., et al.: The detection of TeV gamma rays from Crab using the telescope array prototype.
In: Proceedings of the 25th International Cosmic Ray Conference, vol. 3, Durban, South Africa,
pp. 177–180 (1997)
Akerlof, C.W., et al.: Granite, a new very high energy gamma-ray telescope. Nucl. Phys. B, Proc.
Suppl. 14, 237–243 (1991)
Albert, J., et al. (MAGIC collaboration): Variable very-high-energy gamma-ray emission from the
microquasar LS I+61 303. Science 23, 1771–1773 (2006)
Albert, J., et al. (MAGIC collaboration): Discovery of VHE gamma radiation from IC443 with the
MAGIC telescope. Astrophys. J. Lett. 664, L87–L90 (2007)
Albert, J., Jordí, A., et al. (MAGIC collaboration): Very high energy gamma rays from a distant
quasar: how transparent is the Universe? Science 320, 1752–1754 (2008a)
Albert, J., et al. (MAGIC collaboration): Probing quantum gravity using photons from a flare of
the active galactic nucleus Markarian 501 observed by the MAGIC telescope. Phys. Lett. B 668,
253–257 (2008b)
Aleksić, J., et al. (MAGIC collaboration): MAGIC discovery of VHE emission from the FSRQ
PKS 1222+21. Astrophys. J. 730, L8 (2011)
Aleksić, J., et al. (MAGIC collaboration): Detection of very-high energy γ -ray emission from NGC
1275 by the MAGIC telescopes. Astron. Astrophys. 539, L2 (2012a)
Aleksić, J., et al. (MAGIC collaboration): Phase-resolved energy spectra of the Crab pulsar in
the range of 50–400 GeV measured with the MAGIC telescopes. Astron. Astrophys. 540, A69
(2012b)
Aliu, E., et al. (MAGIC collaboration): Observation of pulsed γ -rays above 25 GeV from the crab
pulsar with MAGIC. Science 322, 1221–1224 (2008)
Aliu, E., et al. (VERITAS collaboration): Detection of pulsed gamma rays above 100 GeV from
the Crab pulsar. Science 334, 69–72 (2011)
Aliu, E., et al. (VERITAS collaboration): VERITAS deep observations of the dwarf spheroidal
galaxy Segue 1. Phys. Rev. D 85, 062001 (2012)
Auger, P., Ehrenfest, P., Maze, R., Daudin, J., Fréon, R.A.: Extensive cosmic-ray showers. Rev.
Mod. Phys. 11, 288–291 (1939)
Augustin, J.-E., et al.: Discovery of a narrow resonance in e+ e− annihilation. Phys. Rev. Lett. 33,
1406–1408 (1974)
Bagge, E.R., Samorski, M., Stamm, W.: A new air shower experiment at Kiel. In: Proceedings of
the 15th International Cosmic Ray Conference, vol. 12, Plovdiv, Bulgaria, pp. 24–29 (1977)
Baixeras, C., et al. (MAGIC collaboration): The MAGIC telescope. Nucl. Phys. B, Proc. Suppl.
114, 247–252 (2003)
Barrau, A., et al. (CAT collaboration): The CAT imaging telescope for very-high-energy gamma-
ray astronomy. Nucl. Instrum. Methods Phys. Res., Sect. A, Accel. Spectrom. Detect. Assoc.
Holder, J., et al. (VERITAS collaboration): The first VERITAS telescope. Astropart. Phys. 25,
391–401 (2006)
Huang, J., et al. (Tibet-AS collaboration): The complex EAS hybrid arrays in Tibet. Nucl. Phys. B,
Proc. Suppl. 196, 147–152 (2009)
Kifune, T.: The energy threshold of imaging Cherenkov technique and 3.8 m telescope of CAN-
GAROO. In: Conference Proceedings: Towards a Major Atmospheric Cherenkov Detector for
TeV Astroparticle Physics, pp. 229–237 (1992)
Kranich, D.: Temporal and spectral characteristics of the active galactic nucleus Mkn 501 during a
phase of high activity in the TeV range. Ph.D. thesis, Technische Universität München (2002)
Kolhörster, W.: Messungen der durchdringenden Strahlung in Freiballons in größeren Höhen.
Phys. Z. 14, 1153–1156 (1913)
Kosack, K., et al.: TeV gamma-ray observations of the galactic center. Astrophys. J. Lett. 608,
L97–L100 (2004)
Koul, R., et al.: The Himalayan gamma ray observatory at Hanle. In: Proceedings of the 29th
International Cosmic Ray Conference, vol. 5, Pune, India, pp. 243–246 (2005)
Joshi, U., et al. (TACTIC collaboration): Coordinated TeV gamma-ray and optical polarization
study of BL Lac object Mkn 501. Bull. Astron. Soc. India 28, 409–411 (2000)
Lloyd-Evans, J., Coy, R.N., Lambert, A., Lapikens, J., Patel, M., Reid, R.J.O., Watson, A.A.:
Observation of gamma rays with greater than 1000 TeV energy from Cygnus X-3. Nature 305,
784–787 (1983)
Lorenz, E.: High-energy astroparticle physics. Nucl. Instrum. Methods Phys. Res., Sect. A, Accel.
Spectrom. Detect. Assoc. Equip. 567, 1–11 (2006)
Lemoine-Gourmard, M., et al. (H.E.S.S. collaboration): HESS observations of the supernova
remnant RX J0852.0-4622: shell-type morphology and spectrum of a widely extended VHE
gamma-ray source. In: Proceedings of the 30th International Cosmic Ray Conference, vol. 2,
Merida, Mexico, 667–670 (2007)
Maier, G., Skilton, J. (VERITAS collaboration, H.E.S.S. collaboration): VHE Observations of the
binary candidate HESS J0632+057 with H.E.S.S. and VERITAS. In: Proceedings of the 32nd
International Cosmic Ray Conference, Beijing, China (2011). Available at arXiv:1111.2155
[astro-ph]
Marshak, M.L., et al.: Evidence for muon production by particles from Cygnus X-3. Phys. Rev.
Lett. 54, 2079–2082 (1985)
Merck, M., et al. (HEGRA collaboration): Search for steady and sporadic emission of neutral radi-
ation above 50 TeV with the HEGRA array. In: Proceedings of the 22nd International Cosmic
Ray Conference, vol. 1, Dublin, Ireland, 261–264 (1991)
Merck, M.: Suche nach Quellen ultrahochenergetischer kosmischer Strahlung mit dem HEGRA-
Detektor. Ph.D. thesis, Ludwig-Maximilians-Universität München (1993)
Millikan, R.A., Cameron, G.H.: New results on cosmic rays. Nat. Suppl. 121, 19–26 (1928)
Mirabel, I.F.: Very energetic gamma-rays from microquasars and binary pulsars. Science 312,
1759–1760 (2006)
Morrison, P.: On gamma-ray astronomy. Nuovo Cimento 7, 858–865 (1958)
Nikolsky, S.I., Sinitsyna, V.G.: Investigation of gamma-sources by mirror telescopes. In: Very High
Energy Gamma Ray Astronomy: Proceedings of the International Workshop, Crimea, USSR,
April 17–21, 1989, p. 11 (1989)
Nolan, P.L., et al. (Fermi-LAT collaboration): Fermi large-area telescope second source catalog.
Astron. J. Suppl. Ser. 199, 31–77 (2012)
Parsignault, D.R., Schreier, E., Grindlay, J., Gursky, H.: On the stability of the period of Cygnus
X-3. Astrophys. J. Lett. 209, L73–L75 (1976)
Pfeffermann, E., Aschenbach, R.: ROSAT observation of a new supernova remnant in the constel-
lation Scorpius. In: Proceedings of International Conference on X-ray Astronomy and Astro-
physics: Röntgenstrahlung from the Universe, pp. 267–268 (1996)
Penzias, A., Wilson, R.W.: A measurement of excess antenna temperature at 4080 Mc/s. Astrophys.
J. Lett. 142, 419–421 (1965)
Punch, M., et al. (Whipple collaboration): Detection of TeV photons from the active galaxy
Markarian 421. Nature 358, 477–478 (1992)
Quinn, J., et al. (Whipple collaboration): Detection of Gamma Rays with E > 300 GeV from
Markarian 501. Astrophys. J. Lett. 456, L83–L86 (1996)
Quinn, J., et al. (Whipple collaboration): The flux variability of Markarian 501 in very high energy
gamma rays. Astrophys. J. 518, 693–698 (1999)
Remillard, R.A., Levine, M.L.: The RXTE all sky monitor: first year of performance. In: Proceed-
ings All Sky X-Ray Observations in the Next Decade, RIKEN, Japan, p. 29 (1997)
Salazar, H.: The HAWC observatory and its synergies at Sierra Negra Volcano. In: Proceedings of
the 31st International Cosmic Ray Conference, Łódź, Poland (2009)
Samorski, M.: private communication
Samorski, M., Stamm, W.: Detection of 2 × 1015 to 2 × 1016 eV gamma-rays from Cygnus X-3.
Astrophys. J. Lett. 268, L17–L21 (1983)
Sinnis, G.: Cosmic-ray physics with the Milagro gamma-ray observatory. J. Phys. Soc. Jpn.
Suppl. A 78, 84–87 (2009)
Stepanian, A.A., Fomin, V.P., Vladimirsky, B.M.: A method to distinguish the gamma-ray
Cherenkov flashes from proton component of cosmic rays. Izv. Krimskoi Astrofiz. Obs. 66,
234–240 (1983)
Vandenbroucke, J.: AGIS: a next-generation TeV gamma-ray observatory. Bull. Am. Astron. Soc.
41, 909 (2010)
Vladimirsky, B.M., Zyskin, Yu.L., Neshpor, Yu.I., Stepanian, A.A., Fomin, V.P., Shitov, V.G.:
Cherenkov gamma-telescope GT-48 of the Crimean astrophysical observatory of the USSR
Academy of Sciences. In: Very High Energy Gamma Ray Astronomy: Proceedings of the Inter-
national Workshop, Crimea, USSR, April 17–21, 1989, p. 11 (1989)
Wagner, R.: Synoptic studies of seventeen blazars detected in very high-energy gamma-rays. Mon.
Not. R. Astron. Soc. 385, 119–135 (2008)
Watson, A.A.: High-energy astrophysics: is Cygnus X-3 a source of gamma rays or of new parti-
cles? Nature 315, 454–455 (1985)
Weekes, T.C., Turver, K.E.: Gamma-ray astronomy from 10–100 GeV: a new approach. In: ESA
Recent Advances in Gamma-Ray Astronomy, pp. 279–286 (1977)
Weekes, T.C.: A fast large aperture camera for very high energy gamma-ray astronomy. In: Pro-
ceedings of the 17th International Cosmic Ray Conference, vol. 8, Paris, France, pp. 34–37
(1983)
Weekes, T.C., et al. (Whipple collaboration): Observation of TeV gamma rays from the Crab nebula
using the atmospheric Cherenkov imaging technique. Astrophys. J. 342, 379–395 (1989)
Weekes, T.C.: Very High Energy Gamma-Ray Astronomy. Institute of Physics, Bristol (2003)
Chapter 7
Search for the Neutrino Mass and Low Energy
Neutrino Astronomy
Kai Zuber
Inst. for Nuclear and Particle Physics, Zellescher Weg 19, 01069 Dresden, Germany
e-mail: [email protected]
Historical details and most references for this section can be found in Pais (1986).
The hidden entrance of neutrinos into our understanding of nature started with the
discovery of radioactivity by Becquerel in 1896. He recognised that photographic
plates got “fogged” while being close to uranium salts, and he called this strange
phenomenon les rayons uraniques – uranium rays. Around this time Rutherford was
studying the ionisation of gases due to X-rays, which had been discovered shortly before by Roentgen.
He realised that Becquerel had found a quite similar behaviour while studying
his uranium rays in air. This led him to a two-year systematic investigation of the
absorption features of the uranium rays, in which he could resolve two components of
the rays. According to their penetration ability he called them α-rays (easily absorbed)
and β-rays (more penetrating); an even more penetrating radiation, nowadays called
γ-rays, was discovered by Villard in 1900. In another ten-year effort
Rutherford and Geiger could show that the emitted α-particle is mono-energetic
and “after losing his positive charge is like a helium atom”, observations which
ultimately led Rutherford to his concept of atomic nuclei.
In parallel, the investigation of β-rays continued. By 1900 it was known that they are negatively charged, and after Becquerel measured the e/m ratio of these rays he obtained a value similar to the one observed in 1897 for cathode rays by J.J. Thomson, strongly suggesting that the electron is the particle emitted in beta decay. It was Kaufmann in 1902 who convincingly showed that β-rays are electrons by placing a radium source in electric and magnetic fields. Only about five years later experiments raised the question of whether β-rays are mono-energetic like α-rays. Hahn and Meitner picked up this issue in 1907 while working in Berlin (actually
they had to start their work in a carpenter's workshop, as women were not allowed in the Chemistry Institute; this situation changed two years later), and the very first indications, obtained with photographic plates, indeed supported the idea of mono-energetic electrons. However, von Baeyer in the Physics Institute was building one of the first magnetic beta spectrometers, and by collaborating with him they could show in 1911 that β-rays are continuous. Nevertheless, the idea remained that mono-energetic electrons are emitted which lose energy through "secondary causes". Independently, Chadwick, while working with Geiger in Berlin, picked up the investigation using a new counter system (nowadays known as Geiger counters) instead of photographic plates. He used a magnetic spectrometer with a small slit and counted the electrons arriving at his counter. By tuning the spectrometer he was able to measure the energy spectrum. In 1914 he confirmed a continuous spectrum with a few lines superimposed, and he could explain why there might be misleading interpretations of the photographic plate data. Nowadays these lines are known to be internal conversion lines, due to gamma rays knocking out electrons from inner atomic shells.
After recovery from World War I the investigation was revived in 1921, but people were now looking at the problem from a different point of view. In the meantime Rutherford had convincingly proved the concept of an atomic nucleus, and in 1913 Bohr had invented his first quantum theory of atoms, with electrons orbiting the massive but tiny nucleus in quantised states. Using energy arguments it became obvious that β-rays do not originate from the atomic shells but from the nucleus. Furthermore, the mass of the nucleus typically came out a factor of two too small if one counted just the protons necessary to compensate the charges of the shell electrons of a neutral atom. Thus, the preferred model of the nucleus was one with twice that number of protons, half of them electrically neutralised by a corresponding number of electrons inside the nucleus and the rest by the atomic shell electrons. In 1925 Wooster and Ellis set out to settle the issue of mono-energetic electrons in beta decay calorimetrically. If the electrons are really mono-energetic, a calorimetric measurement should always yield the full transition energy between the two involved nuclei, independent of any "secondary causes". However, if the electrons are really emitted with a continuous energy spectrum, the average energy measured in the calorimeter should be much smaller than the maximal allowed value. Using 210Bi (at that time called radium E) they measured in 1928 an average energy of 0.35 MeV, much smaller than the maximal energy of 1.16 MeV, and thus clearly proved a continuous spectrum. Meitner and Orthmann repeated the measurement and confirmed the result in 1929.
Meanwhile a second, independent problem had arisen in the context of beta decay. In 1925 Uhlenbeck and Goudsmit discovered that the electron has a spin of 1/2, and Dennison proposed the same spin for the proton. However, it was then impossible to explain a measurement performed in 1929 by Rasetti, namely that the spin of 14N is one. According to the accepted nuclear models this nucleus would contain 14 protons and 7 electrons, i.e. 21 spin-1/2 objects, which results in a non-integer total spin. Also the decay of 14C into 14N cannot be explained by the emission of a single spin-1/2 electron, as this nuclear transition is characterised as 0+ → 1+.
Given these severe puzzles, rather desperate solutions were discussed; in 1930 Bohr even considered that the conservation of energy in beta decay might hold only statistically. The way out came from Pauli, who in his famous letter of December 1930, addressed to the "radioactive ladies and gentlemen" at a meeting in Tübingen, proposed a light, neutral, weakly interacting particle emitted together with the electron: he suggested a new particle called the neutron (the real neutron was not yet known; it was discovered by Chadwick in 1932, even though Bothe and collaborators had already seen it in 1930 as an "unusual gamma radiation"). A first public mentioning of the idea of the "neutron" by Pauli was in his presentation on problems of hyperfine structure at the 171st regular meeting of the American Physical Society at Caltech in Pasadena on June 16th, 1931 (Pauli, 1931), which made it into the New York Times the next day in an article entitled "Dance of electrons heard by scientists". However, the issue was by no means settled, and the debate whether a new particle or a violation of energy conservation is responsible for the continuous energy spectrum in beta decay continued for another five years.
The whole situation changed dramatically in 1932 when the real neutron was discovered. Chadwick and coworkers had already been searching for quite some time for a "neutron" in the nucleus. After Joliot-Curie reported the observation of protons ejected from paraffin when shooting alpha-particles onto beryllium, Chadwick was able within three weeks to repeat the experiment and claimed the observation of a neutron via the reaction α + 9Be → 12C + n. Joliot-Curie had assumed high energy gammas to be responsible for the proton emission. However, in her further studies she discovered positron decay as well as delayed radioactive decays. The neutron itself was still considered to be a bound ep-state, hence the electron was still regarded as part of the nucleus, but the discovery of the neutron completely changed the view of the atomic nucleus, as became apparent at the famous Solvay Conference of 1933. Taking Pauli's idea for granted, it was shown in 1933, using a radium source, that the mean free path of "the neutron" is at least 150 km in nitrogen at 75 atmospheres of pressure, making it by far the most penetrating object known at that time, and still today. Shortly after, a radically new view, and one of the most insightful papers in modern physics, provided the first sophisticated theory to explain all these new nuclear effects. It was Enrico Fermi who in 1933/1934 developed the first theory in particle physics using quantised spin-1/2 particles, published in Italian and German (Fermi, 1934a,b). His description of the reactions, assuming a point interaction of four particles, is still valid today in the MeV range. In his seminal paper he also discussed the neutrino appearing in beta decay, and he renamed Pauli's neutron into neutrino (Italian for "small neutron"). Furthermore, he discussed the impact of a non-vanishing neutrino mass on the endpoint of a beta spectrum (Fig. 7.1), and from existing beta decay data he concluded that the best estimate is zero. Inspired by Fermi's work, Bethe and Peierls calculated the cross section for neutrinos interacting with a nucleus and producing an electron or positron (Bethe and Peierls, 1934). They estimated a mean free path of about 10^16 km in solid matter for 2–3 MeV antineutrinos and concluded:
It is therefore absolutely impossible to observe processes of that kind with the neutrinos
created in nuclear transformations.
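The scale of this number can be recovered from a simple estimate (round numbers chosen here for illustration, not taken from Bethe and Peierls' paper): with a cross section σ of order 10^−44 cm², and a target number density n of order 10^23–10^24 cm^−3 for solid matter, the mean free path is

    λ = 1/(n σ) ≈ 10^20–10^21 cm ≈ 10^15–10^16 km,

i.e. of the order of a hundred to a thousand light years of solid matter, which is the basis of the pessimistic conclusion just quoted.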
Thus, for a long time nuclear recoil measurements seemed to be the only option to prove its existence. In 1935 M. Goeppert-Mayer estimated the lifetime for double beta decay (Goeppert-Mayer, 1935), the simultaneous decay of two neutrons in a single nucleus, and obtained values of around 10^20 years, making this process extremely rare and hard to detect. Another theoretical masterpiece was published in 1937, when Majorana proposed a two-component theory of neutrinos (Majorana, 1937), which results in the neutrino being its own antiparticle (see Sect. 7.3). Immediately Racah (1937) and, two years later, Furry (1939) realised that this would allow for the process of neutrino-less double beta decay, with just two electrons emitted, and would provide an alternative neutrino mass measurement.
Fig. 7.2 The Project Poltergeist group, with Clyde Cowan on the left and Fred Reines on the right. The detector called "Herr Auge" (German for "Mr. Eye"), in the form of a liquid scintillator barrel, can be seen in the middle (with kind permission of Los Alamos Science)
Around 1951 Fred Reines, then at Los Alamos, together with Clyde Cowan came up with the idea of using a nuclear bomb explosion for neutrino detection. The detection reaction is inverse beta decay, ν̄e + p → e+ + n. They estimated that the observation of a few events via positron annihilation into two 511 keV photons would require a liquid scintillation detector of several tons if the bomb were about 50 m away. As the liquid scintillation detectors built so far with this newly developed technology had volumes of only one liter, the new detector was called "El Monstro". The detector was to be in free fall in a shaft, triggered by the explosion, ending up in a bath of feathers and foam rubber. The project was granted approval by Los Alamos, and Reines, Cowan and others began designing and building the detector in 1951. However, in the fall of 1952, urged once again to consider the option of using a nuclear power plant instead, they realised that detecting the neutron as well, by using a capture reaction producing high energy gamma rays and forming a short time coincidence of less than 100 ms between the positron and the neutron, would dramatically reduce backgrounds and guarantee a much better experimental environment. The activity was called Project Poltergeist, and a 300 liter detector, "Herr Auge", was built (Fig. 7.2).
First measurements were performed at the reactor in Hanford, Washington in 1953, which showed some indication of neutrinos but were not significant enough (Reines and Cowan, 1953). Hence a new detector was designed, consisting of three scintillator tanks of 1400 liters each, interleaved with two water tanks containing dissolved cadmium chloride (the "target tanks"). The neutrino reaction was supposed to happen on the protons in the water, and the thermalised neutron was captured on 113Cd, which has a huge cross section for this. The scintillators had to record all the gammas. The Los Alamos group moved to the new Savannah River Plant in South Carolina and proved the existence of the neutrino in 1956 (Cowan et al., 1956); see also Los Alamos (1997) for more details.
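As an aside, the kinematic threshold of the detection reaction follows from simple energy and momentum conservation (a standard result, quoted here for orientation rather than taken from the chapter):

    E_ν(threshold) = [(m_n + m_e)² − m_p²] / (2 m_p) ≈ 1.8 MeV,

so only the more energetic part of the reactor antineutrino spectrum contributes, and the positron carries away roughly E_ν − 1.3 MeV.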
In this period of time a lot of further fundamental experiments were performed, some of which are briefly listed here. First of all, Madame Wu and collaborators discovered parity violation in weak interactions (Wu et al., 1957). Subsequent investigations showed that parity is in fact maximally violated. This basically fixed the structure of the weak interaction to be of vector minus axial-vector type (V−A interaction), allowing only left-handed neutrinos and right-handed antineutrinos to participate. To prove that the neutrino is indeed always left-handed, Goldhaber and collaborators performed their deeply insightful experiment measuring its helicity (Goldhaber et al., 1958). A next milestone was the proof that the neutrinos emitted in pion decay are not identical to the ones from beta decay, so that at least two different neutrinos exist (Danby et al., 1962). Finally, experiments at the LEP accelerator at CERN in the 1990s showed very clearly, by studying the decay of the Z-boson, that there are only three light neutrinos with masses below 45 GeV. The first direct detection of the tau neutrino, via the produced tau-lepton, was achieved relatively recently, in 2000, by the DONUT experiment at Fermilab (Kodama et al., 2001).
An early claim of double beta decay had been made for tin (Fireman, 1949), and searches soon followed two other routes. The geochemical approach looked for the accumulated daughter isotope in old minerals by noble gas mass spectrometry. The radiochemical approach was to search for double beta decay of 238U into 238Pu, which decays with the emission of a characteristic α-particle. Both searches came up with half-life limits much longer than the claimed observation (Inghram and Reynolds, 1949; Levine et al., 1950). It is worthwhile to mention that in 1951 a geochemical 130Te half-life of 1.4 × 10^21 years was claimed, not too far away from the current laboratory value. A repetition of the tin experiment in 1952 with improved equipment could not confirm the original evidence (Fireman and Schwartzer, 1952). More sophisticated experiments based on the geochemical approach were performed in the 1960s and showed the first serious evidence for double beta decay of 130Te (Kirsten et al., 1967, 1968). It should be mentioned that this kind of approach cannot discriminate between the decay modes, as only the daughter isotope is detected; hence the signal is expected to be dominated by the neutrino-accompanied decay.
The first laboratory observation of this decay mode occurred in 1987, using selenium foils in a TPC inside a magnetic field. In this way the group at UC Irvine could announce a positive signal based on 36 events (Elliott et al., 1987). By now this decay mode has been observed for about a dozen isotopes, but the important neutrino-less mode is still awaiting its detection. However, in 2001 there was a claim of observation in the isotope 76Ge (Klapdor-Kleingrothaus et al., 2001), which is still awaiting confirmation. The next generation of large-scale experiments using huge amounts of isotopically enriched material is about to start; already in 2011 three projects, GERDA (76Ge) as well as EXO and KamLAND-Zen (136Xe), have started.
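For orientation, the standard way in which such half-life limits are translated into a neutrino mass is the textbook relation (not spelled out in the chapter; G denotes a calculable phase-space factor, M the nuclear matrix element and U_ei the elements of the neutrino mixing matrix):

    (T_1/2^0ν)^(−1) = G^0ν |M^0ν|² (⟨m_ββ⟩/m_e)²,   with  ⟨m_ββ⟩ = | Σ_i U_ei² m_i |,

so a lower limit on the half-life of the neutrino-less mode becomes an upper limit on the effective Majorana mass ⟨m_ββ⟩.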
It should be mentioned that over the last decade bounds on the sum of all neutrino masses from cosmology have also become more and more stringent thanks to much better data, reaching the region of an eV and below.
The ideas about energy production inside the Sun and stars have a long and changing history. An extensive discussion can be found in Longair (2006). In the 19th century thermodynamic arguments were used to explain the heat production by accretion. Bombardment by meteorites was considered, but the rate needed would be very large and in conflict with the observed meteoritic impacts on Earth. The Sun itself was considered to be a liquid sphere gradually contracting and cooling on time scales of 10^7 years, already causing conflicts with geological age determinations of the Earth. In 1869/1870 Lane explored whether the Sun could be gaseous, and he was the first to find the correct hydrodynamic equations and the principle of mass conservation. He was also the first to show that if a star loses energy by radiation it contracts, and that in this process the temperature actually increases rather than decreases, as expected from the virial theorem. This model was later supported and refined by the more sophisticated work of Ritter and Emden. Independently, Lockyer in the 1880s attempted to link the spectral classes of stars with an evolutionary sequence based on the mentioned meteoritic hypothesis, in which stars condense from a cloud of meteorites.
Applying the new quantum theory of tunnelling through the Coulomb barrier, Atkinson and Houtermans argued in 1929 that fusion of light nuclei would work most effectively for nuclei with small charge, as the Coulomb barrier is lower, and that preferentially particles in the high-energy tail of the Maxwell–Boltzmann distribution will succeed. Two immediate consequences of this are that nuclear
reactions might occur at lower temperatures than previously thought, and that the stellar luminosity should rise strongly as a function of temperature, due to the exponential dependence of the tunnelling probability on energy. Furthermore, by 1931 it had become more or less clear that hydrogen is the most abundant of all elements in the Universe. Atkinson used this information and proposed that heavier elements could be created by successively adding protons to nuclei until they become too massive for nuclear stability and emit an α-particle. This is a kind of precursor idea of the CNO-cycle proposed by Bethe and Weizsäcker in 1938 as a source of energy generation (Weizsäcker, 1937, 1938; Bethe, 1939). With the discovery of the neutron and Fermi's theory of the weak interaction (see Sect. 7.1) it finally became possible to calculate reaction rates for the production of 3He and 4He, which ultimately led to the proposal of the pp-chain by Bethe and Critchfield (Bethe and Critchfield, 1938; Bethe, 1939). They also found that the energy release of the pp-chain is sufficient to explain the solar luminosity, and they derived the scaling laws for the rate of energy production ε as a function of temperature (ε ∝ T^4 for the pp-chain and ε ∝ T^17 for the CNO cycle). In his papers Bethe safely ignored the neutrino in the initial fusion reaction: he wrote p + p → d + e+. Thus, the major energy source for stars on the main sequence, hydrogen burning into helium in hydrostatic equilibrium, was finally found.
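For orientation, here is the main branch of the pp-chain in modern notation (standard nuclear astrophysics, added for reference and not quoted from the chapter), including the neutrino that Bethe's shorthand p + p → d + e+ left out:

    p + p → d + e+ + ν_e      (E_ν ≤ 0.42 MeV)
    d + p → 3He + γ
    3He + 3He → 4He + 2p

The net result, 4p → 4He + 2e+ + 2ν_e, releases about 26.7 MeV per helium nucleus, of which only about two per cent is carried away by the neutrinos.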
After the world had recovered from the Second World War, Bruno Pontecorvo (Fig. 7.4) in 1946 studied radiochemical methods to detect solar neutrinos, should the nuclear processes mentioned above really occur inside the Sun (Pontecorvo, 1946). One of his suggestions was the use of the chlorine–argon method, based on the reaction νe + 37Cl → 37Ar + e−, which requires neutrinos of at least 814 keV energy. Notice, however, that at that time the subscript e did not yet exist and the neutrino had not even been observed; only one kind of neutrino was assumed. Luis Alvarez picked up the idea in 1948 and gave a rather detailed discussion of all the experimental issues involved, but did not try to convert the idea into an experiment. Fortunately there was Raymond Davis Jr., a radiochemist hired by the newly opened Brookhaven National Laboratory. Inspired by a review article on neutrinos by Crane (1948), he started to convert the chlorine idea into reality. First he built a 200 l detector but was unable to detect anything at the Brookhaven High Flux reactor, considered to be a neutrino source. Later on he upgraded his tank to 3 800 l, buried 5.8 m underground. Based on the data obtained he was able to give an upper limit on solar neutrinos of 40 000 SNU (solar neutrino units; 1 SNU corresponds to 10^−36 captures per target atom per second), about 15 000 times higher than what he measured later, which provoked the following response from a referee:
Any experiment such as this, which does not have the requisite sensitivity, really has no
bearing on the question of the existence of neutrinos. To illustrate my point, one would
not write a scientific paper describing an experiment in which an experimenter stood on a
mountain and reached for the moon, and concluded that the moon was more than eight feet
from the top of the mountain (from Davis, 2003).
After recognising that the flux of the reactor was too low, the experiment was moved to Savannah River and measured simultaneously with Project Poltergeist (see Sect. 7.3). While Reines and Cowan discovered the antineutrino, Davis again did not see any signal. He concluded that the cross section for neutrino capture was a factor of 5 smaller than predicted by theory, and within a few years he refined this to about a factor of 20. This was the proof that neutrinos and antineutrinos are different (reactors emit only antineutrinos, and the chlorine–argon method is insensitive to them), but this non-observation did not get much attention. In 1958, all of a sudden, a
solar neutrino measurement seemed to be feasible. Until then it had been assumed that the Sun basically uses the pp-I chain, resulting in solar neutrinos too low in energy to be detected with 37Cl. In that year Holmgren and Johnston measured the cross section for the reaction 3He + 4He → 7Be + γ, and it turned out to be 1 000 times greater than previously thought (Holmgren and Johnston, 1959). This opened the road to more energetic neutrinos resulting from the electron capture of 7Be and also from 8B decay (Fig. 7.5). Immediately Fowler and Cameron contacted Davis to convince him to consider an experiment, and he put his 3 800 l prototype in a limestone mine in Ohio. Unfortunately he was not able to detect a signal. In 1960 Kavanagh showed that the branching into 8B (the pp-III chain) is very small, hence the potential neutrino flux would be small. On the positive side, Bahcall in 1963 recalculated
the capture cross section of solar neutrinos on 37Cl, including the isobaric analogue state in 37Ar, and found it 20 times higher than before (Bahcall, 1964). This finally triggered a larger experiment, with an estimated event rate of 4–11 neutrino captures per day in 378 000 liters of C2Cl4 (Davis, 1964). This kind of enormous ratio (finding a few atoms among about 10^30 others) was the beginning of low-level (often called low-background) physics. C2Cl4 was chosen as it is a cleaning fluid that was available in huge amounts at that time. Based on the experience from the Ohio measurements concerning 37Ar production via other processes, Davis was able to calculate the depth he needed to actually observe solar neutrinos and found that at least 4 000 feet were necessary. In the end the decision was made to build the device at the Homestake Gold Mine in Lead, South Dakota. The expected solar neutrino spectrum based on the reaction chain and solar models is shown in Fig. 7.6.
The principle of the experiment was the counting of 37Ar atoms, which could only be produced by solar neutrino capture on 37Cl. For this the tank was left alone for 2–3 months to accumulate a few 37Ar atoms; this is feasible because the half-life of 37Ar is 35.04 days. After that, helium was bubbled through the tank via an eductor system to flush out the few argon atoms, which were then collected and counted in a small proportional counter.
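The choice of a 2–3 month exposure follows directly from the decay law (a simple estimate using the half-life quoted above, not a calculation from the original papers): if solar neutrinos produce 37Ar at a constant rate R, the number of atoms in the tank grows towards saturation as

    N(t) = (R/λ) (1 − e^(−λt)),   λ = ln 2 / 35.04 d,

so after about 70 days roughly 75 % of the saturation value has already been collected, and waiting much longer gains very little.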
Fig. 7.8 As it is pretty warm underground, R. Davis Jr. took a swim in the water shielding of the
chlorine experiment. This shielding was built to reduce external backgrounds and moderate high
energy neutrons (Courtesy of Brookhaven National Laboratory)
The radiochemical experiments are only sensitive to νe, and the water Cherenkov detectors are dominated by νe interactions as well, because the electron scattering cross section of the remaining two flavours is about a factor of 6 smaller. However, due to the energy the electron needs in order to emit significant Cherenkov light, this method can only be used for neutrinos of several MeV, i.e. only for the 8B and hep neutrinos (Fig. 7.6). At the energies involved in solar neutrino searches, the recoiling electron more or less follows the direction of the incoming neutrino. The water tank is equipped with photomultipliers; the energy can be reconstructed from the number of struck phototubes and the measured amount of light. Nowadays this technique is also used for neutrino telescopes. So far so good, but nobody would have built such a detector for solar neutrinos alone; fortunately another fundamental physics topic was on the bench, which triggered the construction of large-scale water detectors.
For decades people have been eager to find new physics not covered by our current understanding of particle physics. In the framework of Grand Unified Theories, attempts are made to merge the fundamental interactions of nature (apart from gravity). The most promising scheme discussed around 1980 had as a major prediction the fundamental instability of matter due to a possible decay of the proton. The favourite decay mode was into a positron and a neutral pion. As water is a cheap source of a large number of protons, and the potential decay products all end up as light, the idea of water Cherenkov detectors came up. In the end no proton decay was observed, but solar and supernova neutrinos were. In a kind of competition to find proton decay, two experiments based on this technology were built, one in the Morton salt mine in Ohio (IMB) and one in the Kamioka mine in Japan (Kamiokande).
Fig. 7.9 Slide from a talk by R. Davis Jr., shown in 1971, sketching the evolutionary timelines of experiments and theory of solar neutrinos (Courtesy of Brookhaven National Laboratory)
Fig. 7.10 Ray Davis Jr. (left) and Till Kirsten (right) on a Sunday afternoon in the Gardens of Schwetzingen Castle (near Heidelberg, Germany) during the Second International GALLEX planning meeting at the end of September 1979. T. Kirsten became Principal Investigator and spokesperson of GALLEX (with kind permission of D.D. Clayton)
Neutrino oscillations are a flavour mixing effect: even if the Sun emits only νe, at a certain distance one finds a mixture of all three neutrino flavours. The probability for a new flavour to show up at a distance L from the source depends, in the simple case that only two states contribute, on a mixing angle θ (determining the amplitude of the oscillation), the energy E of the neutrino, the distance L from the source to the detector, and the quantity Δm²_ij = m²_j − m²_i (i, j = 1, 2, 3), the difference of the squares of the two involved mass eigenvalues. Thus, at least one of the mass eigenstates has to be non-zero, otherwise this phenomenon cannot occur. The case described would imply that neutrinos oscillate on the way from the Sun to the Earth; hence it is called vacuum oscillation. L. Wolfenstein, A. Smirnov and S. Mikheyev realised around 1980 that a conversion can already occur within the Sun, now called the matter effect or MSW effect (Wolfenstein, 1978; Mikheyev and Smirnov, 1986). The physics behind this phenomenon relies on the fact that all neutrino flavours can interact with the electrons in the solar interior via the exchange of Z-bosons (so-called weak neutral currents), while only electron neutrinos can additionally interact via W-boson exchange (weak charged currents). This singles out electron neutrinos and provides them with an "additional effective mass" which is proportional to the electron density. Thus in the solar core, where the density is highest, the νe is heavier than the other neutrinos, an ordering which gets inverted as the neutrinos leave the Sun. Hence, somewhere on the way out the involved states are about equal in mass and there is a good chance for the conversion of one
Fig. 7.11 Proportional counter from the GALLEX experiment to detect the 71 Ge decay. It is
a modified and improved version of the counter used by the Homestake experiment. A similar
counter is also used in SAGE (Courtesy of Max-Planck Institute for Nuclear Physics, Heidelberg)
flavour into the other. In this way it is also possible to describe the three observations, and it allows for different, energy dependent oscillation probabilities. Interestingly, for certain parameter values of θ and Δm² there could be some regeneration effect for νe when the neutrinos have to travel through the Earth, due to the same matter effect. Hence, seen in neutrinos, the Sun could be brighter during the night than during the day. This effect is called the day–night asymmetry. However, no day–night effect has been seen yet.
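In formulas, the two-flavour case described in words above reads (textbook expressions, given here for reference and not tied to any particular experiment in this chapter):

    P(ν_e → ν_x) = sin²(2θ) · sin²(Δm² L / 4E) = sin²(2θ) · sin²(1.27 · Δm²[eV²] · L[km] / E[GeV]),

while the MSW matter effect adds an effective potential V = √2 G_F n_e for electron neutrinos only, proportional to the local electron density n_e, which is what modifies the mixing inside the Sun.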
The next player in the game was Super-Kamiokande, which started data taking in 1996 and is still operational. It was able to reduce the threshold below 5 MeV and accumulated high statistics with far more than 20 000 events from 8B neutrinos. This detector was also the first to claim the observation of neutrino oscillations, based on a deficit of upward going muons due to atmospheric neutrino interactions.
Already in 1984 an approach had been suggested by H. Chen, at a solar neutrino conference in Homestake, to solve the problem of the missing neutrinos independently of any solar model by using heavy water, D2O. On deuterium two reactions are possible: the flavour sensitive reaction νe + D → p + p + e− (charged current) and the flavour blind reaction νx + D → νx + p + n (neutral current). The latter is possible for all neutrino flavours with an energy above 2.2 MeV and thus measures the total solar neutrino flux (Chen, 1985). Of course neutrino–electron scattering is also possible, but it plays a minor role. Hence, by measuring the numbers of electrons and neutrons, the two possible solutions can be discriminated: if both rates are down by a factor of 2–3 with respect to expectation, the Sun indeed produces fewer neutrinos; if, however, only the electron events are reduced while the neutron rate corresponds to expectation, then neutrinos oscillate. Already in the 1960s, at the time when the chlorine experiment was being considered, groups at Case Western Reserve University, including F. Reines, were exploring various kinds of solar neutrino detectors. In one of the investigated approaches, T.L. Jenkins and his student F. Dix built a 2 000 liter heavy water Cherenkov detector, which suffered from a high background rate but could place a limit on the 8B flux more than two orders of magnitude above the model expectation. G. Ewan and H. Chen started a
project which later became the Sudbury Neutrino Observatory (SNO). After a feasibility study published in 1985, a full proposal for a 1 000 ton heavy water Cherenkov detector was released in 1987 (Fig. 7.13). The heavy water was borrowed from the Canadian Atomic Energy Commission and installed in a nickel mine close to Sudbury (Ontario, Canada), 2 km underground. The experiment started data taking in 1998 and finished at the end of 2006. Three different phases were performed. A first running period with pure D2O allowed a good measurement of the charged current reaction producing electrons, but also had some sensitivity to neutral current reactions. To enhance the sensitivity to the latter, 2 tons of salt were added to the detector, because 35Cl has a much higher neutron capture cross section. In a third phase, 3He-filled proportional counters were added, which allowed the detection of neutrons on an event-by-event basis. When the first neutral current data were released in 2001 it became obvious that the total neutrino flux is in agreement with solar model expectations; thus the neutrinos themselves must be responsible for the deficit (Ahmad et al., 2001). Furthermore, the charged current rate was down by about a factor of 3, confirming the decades-long claim of the Homestake experiment. Finally, one of the longest standing problems in particle astrophysics, that of the missing solar neutrinos, was solved (Fig. 7.14). The solution is that there are no missing solar neutrinos, but that 60–70 % of them pass the Earth in the "wrong" flavour, i.e. not as νe. One year after the release of the first SNO results the Nobel prize was awarded to R. Davis Jr. and M. Koshiba for the detection of cosmic neutrinos. With all the data obtained it became very likely that indeed matter effects are the solution.
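Schematically, the logic of the three SNO signals can be summarised as follows (φ denotes the individual solar neutrino fluxes arriving at Earth; the factor 0.15 is the approximate ratio of the ν_μ,τ to ν_e elastic scattering cross sections mentioned earlier):

    Φ_CC ∝ φ(ν_e)
    Φ_NC ∝ φ(ν_e) + φ(ν_μ) + φ(ν_τ)
    Φ_ES ∝ φ(ν_e) + 0.15 [φ(ν_μ) + φ(ν_τ)]

Finding Φ_NC in agreement with the solar model while Φ_CC/Φ_NC ≈ 1/3 is therefore direct evidence that about two thirds of the ν_e change flavour on their way to the detector.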
However, this is not the end of the story. Being now equipped with real-time and spectroscopic measurements of 8B neutrinos above 4 MeV, and with integral capture rates starting from 814 keV (Homestake) and 233 keV (GALLEX/GNO/SAGE), there is still information to be extracted about the solar interior and the neutrino spectrum, and it has always been a desire to measure neutrinos in real time below 1 MeV. Obviously the maximal information is available if the full solar neutrino spectrum can be measured in real time. As Cherenkov detectors do not allow for this, scintillation detectors are considered, like the one F. Reines used but bigger. This was the aim of the BOREXINO experiment installed at the Gran
Fig. 7.13 Picture from one of the first SNO collaboration meetings. Among others there are Herb Chen (fifth from right), George Ewan (third from right) and Art McDonald (right), the long-term spokesperson of SNO (with kind permission of the SNO collaboration)
Sasso Laboratory in Italy. (The name is a relic of the original idea of using a boron-loaded scintillator with a much larger target mass, proposed as BOREX; neutral current excitations and charged current reactions on 11B were considered. In the end, however, a scintillator without any boron was used and the detector became a "small Borex", i.e. Borexino.) It contains 300 tons of a liquid scintillator surrounded by photomultipliers. Although already proposed in 1991, a lot of effort had to be spent on purifying the scintillator and the materials used from any kind of radioactive contaminant, down to incredibly low levels. The effort paid off, however: already shortly after the start of the experiment in 2007 a first real-time detection of the mono-energetic 7Be neutrinos at 0.862 MeV could be announced (Arpesella et al., 2008). In the meantime the 8B spectrum has also been measured with a threshold of 2.8 MeV, and first evidence for the mono-energetic pep neutrinos at 1.44 MeV has been claimed. The experiment is still taking data. Another large-scale scintillator experiment, called KamLAND and using 1 000 tons, was installed in the Kamioka mine to use nuclear power plants for an independent proof that matter effects are the solution of the solar neutrino deficit. If matter effects occur in the Sun, the required Δm² would lead to oscillation effects in vacuum (the Earth's atmosphere and crust can be considered as vacuum in this respect) on a baseline of roughly 100–200 km. Surrounded by a large number of reactors at suitable distances, KamLAND was able to show that the neutrino energy spectrum from the reactors is distorted as one would expect from oscillations (Eguchi et al., 2003). This finally pinned down the oscillation solution and determined the Δm² involved precisely. In 2011 KamLAND also released its first 8B solar neutrino measurement.
Fig. 7.14 A letter from the late Hans Bethe to John Bahcall on solar modelling, dating from 2003 (with kind permission of C. Pena-Garay)
Further sources of low energy astrophysical neutrinos besides the Sun are supernova explosions. Unfortunately the world-wide data set relies on only one event, from 1987, which is now discussed in some detail. Stars with initial masses larger than about eight solar masses are expected to continue their fusion processes all the way up to the iron group elements. Beyond that, no further energy gain is possible by fusion. The star has by then developed an onion-like structure, with the innermost core made of iron. The following is a very simplified description of a very complex phenomenon which is still under intense research. The pressure of the degenerate electron gas in the core balances gravitation. The usual "trick" of a star, contraction → pressure increase → temperature increase → ignition of a new fusion stage, fails in the last two steps. First of all, the pressure of a degenerate electron gas is not a function of temperature, hence a pressure increase does not lead to a temperature rise. Secondly, iron group elements cannot undergo fusion anymore, because they have the highest binding energy per nucleon. Even worse, energy and electrons are taken away by electron capture processes on protons and heavy nuclei, and photo-disintegration of iron group elements occurs, which is strongly endothermic. As a consequence the iron core becomes unstable and collapses quickly. At densities beyond 10^12 g cm^−3 matter becomes opaque even for neutrinos and they start to diffuse, probably the only known scenario in the Universe where this happens. As the iron core contracts further it is compressed to densities beyond nuclear density (about 10^14 g cm^−3). This converts the collapse into an explosion, as matter at nuclear density is highly incompressible and the core bounces back. The resulting shock wave travels outwards and dissociates the still infalling iron nuclei. If it successfully leaves the iron core, the outer shells are not really a hurdle and are blown away in a giant explosion, which is called a supernova.
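A rough estimate of the energy reservoir involved (standard textbook numbers, not taken from the chapter): the gravitational binding energy released when a core of about 1.4 solar masses collapses to a protoneutron star of roughly 10 km radius is

    E_B ≈ (3/5) G M² / R ≈ 3 × 10^53 erg,

to be compared with the roughly 10^51 erg that end up in the kinetic energy and light of the explosion; this is the background to the statement below that about 99 % of the energy is emitted in neutrinos.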
So what about neutrinos? Well, even though a supernova is a spectacular optical event, the energy released in light is minor compared to that carried by neutrinos: about 99 % of the released binding energy is actually emitted in neutrinos! The expected neutrino spectrum, whose details are still a hot topic of research, basically consists of two parts. First comes the so-called deleptonisation burst, consisting of the trapped νe produced in the electron capture processes, which are released within milliseconds once the outgoing shock wave reaches sufficiently low densities to allow them to escape immediately; they have piled up behind the shock wave because in the dissociated material the mean free path for neutrinos is much larger. The second contribution comes from the Kelvin–Helmholtz cooling phase of the protoneutron star. This emits neutrinos of all flavours and lasts for about 10 s or so. Average neutrino energies are in the region of 10–30 MeV, which enables all solar neutrino detectors to detect them. So what happened in 1987? A lively account
can be found in Koshiba (1992, 2003). On February 23rd, 1987, the brightest supernova since Kepler's supernova of 1604 was discovered in the Large Magellanic Cloud, about 150 000 light years away. It was named Supernova 1987A and was first announced in IAU Circular 4316 on February 24th. As is clear from the previous section, several proton decay detectors were online, but it took them a while to loop through their data to find the supernova. Nevertheless, on February 28th, in IAUC 4323, the Mont Blanc Neutrino Observatory (Large Scintillator Detector, LSD) announced the observation of five events above 7 MeV within 7 seconds, an unusually high rate. The detection time was corrected to be 5 minutes earlier in IAUC 4332 on March 6th. The detector was a 60 ton liquid scintillation detector distributed over 72 counter units. In circular IAUC 4329, dated March 3rd, J. Bahcall, A. Dar and T. Piran made a first attempt to estimate the number of events expected in the various detectors, based on supernova models by Wilson and collaborators. Homestake was expected to see about one 37Ar atom, while the water Cherenkov detectors should see about a dozen events. The chlorine experiment did an immediate extraction but could not find any 37Ar, as announced in IAUC 4339 on March 10. The full announcement from the Homestake experiment, never released in this form (but see Davis 1994), reads:
Immediately after Supernova 1987A was announced, a special run was made with the
Homestake detector to look for electron neutrinos. The preceding solar neutrino ex-
traction, run 92, was fortunately completed on February 13th, just 9.1 days before
the neutrinos from SN1987A arrived at the earth. The extraction for the supernova-
produced 37Ar began on 1 March and was finished on 3 March. The run was counted
for 134 days (120 live-days) and only two 37Ar-like events were detected. These
events occurred 35 and 119 days after the time of extraction. These times of oc-
currence are best fit by attributing both of these events to counter background, and
yield an upper limit (one sigma) for the number of detected 37Ar atoms of less than
two.
If we assume that this limiting number of atoms was produced by the supernova, then, tak-
ing into account the 92 % extraction efficiency, the 40 % counting efficiency, the 8.6 day
delay between the time of neutrino arrival and the time of extraction, and the dead time
during counting, the limit of less than two detected atoms implies that less than 7.3 atoms
of 37Ar were produced in the C2Cl4 absorber by the neutrinos from the supernova. This
limit is consistent with the number expected to be produced – approximately one. A su-
pernova at 150 kiloparsecs distance is simply out of range for the Homestake detec-
tor.
Alternatively, if we assume the 37Ar production rate was constant during this run, the upper
limit of less than two detected atoms implies a production rate of less than 0.5 37Ar atoms
per day, not inconsistent with what is expected to be produced by the average solar neutrino
flux during the 17.7 day exposure period.
However, in the preceding circular (IAUC 4338) Kamiokande-II had reported the detection of 11 events within 13 s, but this burst was about 4.7 hours later than the reported Mont Blanc detection. Evidently this caused a lot of discussion. At the time of the Mont Blanc detection Kamiokande-II did not observe anything unusual. Kamiokande was lucky to observe the burst, for two reasons: first of all, two minutes before the supernova signal there was a 105 second dead time in data taking due to gain check monitoring, clearly visible in Fig. 7.15; secondly, the mine was on a substitute holiday and hence the experiment was running in its holiday trigger scheme. Under normal conditions the shift crew would have started to change the data tapes 5 minutes before the supernova event, with a fair chance that the experiment would have missed it. One day later, in IAUC 4340, the IMB-3 collaboration announced their discovery of SN1987A in the form of eight events within 6 seconds at the same time as Kam-II. Unfortunately, due to a failure of one of the four high voltage power supplies, 25 % of the photomultipliers were not operational. IMB-3 was an 8 000 ton rectangular water Cherenkov detector using 2 048 photomultipliers, located in the Morton Thiokol salt mine near Cleveland, Ohio. Last but not least, the Baksan Scintillator Telescope, with a total mass of 330 tons, investigated their data and could not find anything around the time of the LSD signal, but did find five events within 9 seconds, within about ±1 min of the Kam-II observation. To summarise, three experiments observed 24 events in coincidence, while one detector had seen five events about 5 hours earlier (Fig. 7.16). The number of publications based on these events is larger by at least an order of magnitude. For example, from the observed time spread of the events a neutrino mass limit could be set which was as good as the limits from beta decay at that time. Astonishingly, the relative timing of the events is known quite accurately, but the absolute time is not. The events observed around 7:35 UT are shown in Fig. 7.17. This event marks the birth of neutrino astrophysics, the first observation of astrophysical neutrinos from a source other than the Sun.
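The mass limit from the time spread follows from the standard time-of-flight relation (quoted here for orientation; 50 kpc corresponds roughly to the 150 000 light years mentioned above). Relative to a massless particle, a neutrino of mass m_ν and energy E arrives later by

    Δt ≈ (L / 2c) · (m_ν c² / E)² ≈ 2.6 s · (m_ν c² / 10 eV)² · (10 MeV / E)² · (L / 50 kpc),

so the observed spread of order 10 s over a range of energies translates into limits of a few tens of eV, indeed comparable to the laboratory beta decay limits of 1987.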
Fig. 7.16 Compilation of the events of all four experiments around 7:35 UT (from Alekseev et al., 1988, with kind permission of Elsevier)
7.7 Outlook
Neutrino physics has made major progress over the last two decades, especially in establishing a non-vanishing neutrino mass via the phenomenon of neutrino oscillations. The so-called flavour eigenstates participating in weak interactions are not identical to the mass eigenstates describing the propagation. Hence different flavours can occur even if the source produces only one kind of neutrino, and more precise measurements are ongoing or planned. The long-standing problem of the missing solar neutrinos was finally shown to be due to these effects of converting neutrinos within the Sun, called matter effects. However, a lot remains to be explored. The improvements in helioseismological observations and new abundance determinations from the photosphere make a measurement of CNO neutrinos desirable, as this would put one of the fundamental assumptions of stellar evolution, the homogeneous distribution of elements, to the test. After a first glimpse of the pep neutrinos, a precise measurement would be desirable to reduce the error on the oscillation parameter θ involved. Both issues might be addressed by SNO+, a new liquid scintillator detector using the SNO infrastructure. Last but not least, the ultimate goal would and should be a real-time measurement of the solar pp-neutrinos, which constitute by far the dominant flux. This would allow one to study the dynamics of energy production within the Sun, and some detector concepts exist to perform such a measurement.
Scientists are also better prepared now for the next supernova explosion. There are larger (Super-Kamiokande) and more detectors available (IceCube, Borexino, KamLAND, LVD and, in the near future, HALO and SNO+) to detect supernova neutrinos. In the era of GPS the absolute synchronisation of time will also no longer be an issue. Furthermore, the detectors are connected within the Supernova Early Warning System (SNEWS) to warn astronomers of a potential supernova. This is possible because the neutrinos leave the star significantly before the explosion becomes visible. Additionally, some of the mentioned detectors have already placed significant bounds on the diffuse supernova neutrino background (DSNB) resulting from the sum of all supernova explosions over the history of the Universe. A detection still remains to be achieved. The same is true for the cosmic neutrino background originating from the Big Bang: in analogy to the cosmic microwave background (CMB), there should be a 1.95 K relic neutrino background, whose detection is extremely difficult.
Absolute mass measurements are facing interesting times as well. The KATRIN experiment will perform a new tritium endpoint measurement down to 0.2 eV, another order of magnitude beyond existing limits, and is supposed to start data taking soon. Four new large-scale double beta decay experiments (GERDA, EXO, KamLAND-Zen and CANDLES) started data taking in 2011, and several others are in the preparation stage. These experiments also probe absolute neutrino masses in the sub-eV range and are complementary to the tritium measurements. Last but not least, the progress in cosmology over the last decade has been extremely rapid, so that, based on large-scale structure and CMB data, limits on the mass density Ων of neutrinos in the universe, and hence on the sum of the neutrino masses, could be derived.
Acknowledgements I would like to thank B. Cleveland for valuable information and for provid-
ing the SN1987A note of the Homestake experiment not published in this form before.
References
Abazov, A.I., et al.: Phys. Rev. Lett. 67, 3332–3335 (1991)
Abdurashitov, J.N., et al.: Phys. Rev. C 80, 015807 (2009)
Ahmad, Q.R., et al.: Phys. Rev. Lett. 87, 071301 (2001)
Alekseev, E.N., et al.: Phys. Lett. B 205, 209 (1988)
Altmann, M., et al.: Phys. Lett. B 616, 174 (2005)
Anselmann, P., et al.: Phys. Lett. B 285, 376–389 (1992)
Arpesella, C., et al.: Phys. Lett. B 658, 101 (2008)
Bahcall, J.N.: Phys. Rev. Lett. 12, 300–302 (1964)
Bahcall, J.N.: Phys. Rev. Lett. 23, 251 (1969)
Bahcall, J.N.: Neutrino Astrophysics. Cambridge University Press, Cambridge (1989)
Bahcall, J.N., Davis, R. Jr., Parker, P., Smirnov, A., Ulrich, R. (eds.) Solar Neutrinos – The First
Thirty Years. Addison Wesley, Reading (1994)
Bergkvist, K.E.: Nucl. Phys. B 39, 317 (1972)
Bethe, H., Peierls, R.: Nature 133, 532 (1934)
Bethe, H.A., Critchfield, C.L.: Phys. Rev. 54, 248 (1938)
Bethe, H.A.: Phys. Rev. 55, 434 (1939)
Chen, H.H.: Phys. Rev. Lett. 55, 1534–1536 (1985)
Cleveland, B.T., et al.: Astrophys. J. 496, 505 (1998)
Cowan, C., et al.: Science 124, 103 (1956)
Crane, H.R.: Rev. Mod. Phys. 20, 278 (1948)
Curran, S.C., Angus, J., Cockcroft, J.L.: Phys. Rev. 76, 853 (1949)
Danby, G., et al.: Phys. Rev. Lett. 9, 36 (1962)
Davis, R. Jr.: Phys. Rev. Lett. 12, 303–305 (1964)
Davis, R. Jr., Harmer, D.S., Hoffman, K.C.: Phys. Rev. Lett. 20, 1205–1209 (1968)
Davis, R.: Prog. Part. Nucl. Phys. 32, 13 (1994)
Davis, R.: Rev. Mod. Phys. 75, 985–994 (2003)
Eddington, A.: Observatory 43, 341–358 (1920)
Eguchi, K., et al.: Phys. Rev. Lett. 90, 021802 (2003)
Elliott, S.R., Hahn, A.A., Moe, M.: Phys. Rev. Lett. 59, 2020 (1987)
Fermi, E.: Z. Phys. 88, 161 (1934a)
Fermi, E.: Nuovo Cimento 11, 1 (1934b)
Fireman, E.L.: Phys. Rev. 75, 323 (1949)
Fireman, E.L., Schwartzer, D.: Phys. Rev. 86, 451 (1952)
Furry, W.H.: Phys. Rev. 56, 1184 (1939)
Goeppert-Mayer, M.: Phys. Rev. 48, 512 (1935)
Goldhaber, M., Grodzin, L., Sunyar, A.W.: Phys. Rev. 109, 1015 (1958)
Hampel, W., et al.: Phys. Lett. B 447, 127 (1999)
Hirata, K.S., et al.: Phys. Rev. Lett. 65, 1297–1300 (1990)
Holmgren, H.D., Johnston, R.L.: Phys. Rev. 113, 1556–1559 (1959)
Inghram, M.G., Reynolds, J.H.: Phys. Rev. 76, 1265 (1949)
Kaether, F., et al.: Phys. Lett. B 685, 47 (2010)
Kirsten, T., Gentner, W., Schaeffer, O.A.: Z. Phys. 202, 273 (1967)
Kirsten, T., Gentner, W., Schaeffer, O.A.: Phys. Rev. Lett. 20, 1300 (1968)
Klapdor-Kleingrothaus, H.V., et al.: Mod. Phys. J. A 12, 147 (2001)
Kodama, K., et al.: Phys. Lett. B 504, 218 (2001)
Koshiba, M.: Phys. Rep. 220, 229–381 (1992)
Koshiba, M.: Rev. Mod. Phys. 75, 1011–1020 (2003)
Kraus, C., et al.: Eur. Phys. J. C 40, 447 (2005)
Kuzmin, V.A.: Sov. Phys. JETP 22, 1051 (1966)
Lande, K.: Annu. Rev. Nucl. Part. Sci. 59, 21–39 (2009)
Langer, L.M., Moffat, R.J.D.: Phys. Rev. 88, 689 (1952)
Levine, C.A., Ghiorso, A., Seaborg, G.T.: Phys. Rev. 77, 296 (1950)
Lobashev, V.M.: Nucl. Phys. A 719, 153 (2003)
Los Alamos: Celebrating the neutrino, Los Alamos Science, vol. 25 (1997)
Longair, M.: The Cosmic Century. Cambridge University Press, Cambridge (2006)
Lubimov, V.A., et al.: Phys. Lett. B 94, 266 (1980)
Majorana, E.: Nuovo Cimento 14, 171 (1937)
Mikheyev, S.P., Smirnov, A.Y.: Nuovo Cimento 9, 17 (1986)
Otten, E.W., Weinheimer, C.: Rep. Prog. Phys. 81, 086201 (2008)
Pais, A.: Inward Bound. Oxford University Press, Oxford (1986)
Pauli, W.: Phys. Rev. 38, 579 (1931)
Pauli, W.: In: Aufsätze und Vorträge über Physik und Erkenntnistheorie, p. 156. Vieweg, Braun-
schweig (1961)
Pontecorvo, B.: Chalk River Laboratory Report PD-205 (1946)
Racah, G.: Nuovo Cimento 14, 322 (1937)
Reines, F., Cowan, C.: Phys. Rev. 92, 830 (1953)
v. Weizsäcker, C.F.: Phys. Z. 38, 176 (1937)
v. Weizsäcker, C.F.: Phys. Z. 39, 633 (1938)
Wolfenstein, L.: Phys. Rev. D 17, 2369 (1978)
Wu, C.S., et al.: Phys. Rev. 105, 1413 (1957)
Chapter 8
From Particle Physics to Astroparticle Physics:
Proton Decay and the Rise of Non-accelerator
Physics
Hinrich Meyer
8.1 Introduction
In this chapter, we report on the way leading from accelerator laboratories to underground physics, which, paradoxically enough, turned out to mean studying cosmic rays. The standard model of particle physics established in the early 1970s (see Riordan, 1987) had an unexpected consequence for astroparticle physics. Its symmetry would have required that matter and antimatter annihilated in the early universe, so that no world made up of ‘matter’ could have formed. In 1967, Andrei Sacharov showed that the matter–antimatter asymmetry might have formed in a state of the universe far from thermal equilibrium (such as is obviously given in big bang cosmology), together with C- and CP-violation (which today are well confirmed and further investigated, e.g. in the LHC experiment LHCb) and proton decay. The latter phenomenon, however, could only be investigated in large non-accelerator experiments. The size of the first generation of such experiments depended on the idea of unifying the fundamental forces beyond the standard model. In the middle of the 1980s, the simplest extension of the standard model, the SU(5) theory, implied a proton lifetime of about 10^29 years. With detectors consisting of 1 000 tons of matter, hidden from the cosmic radiation as deep as possible under the Earth's surface, such as the water Cherenkov detectors Kamiokande and IMB or the French–German Fréjus iron calorimeter, one expected to detect several proton decays per year.
The passage from accelerator physics to cosmic ray studies by means of under-
ground detectors began in the 1980s (for the following see Meyer, 1990). It was
based on detailed knowledge about the formation of the primordial light nuclei and
baryon number violation and had its first striking successes in the measurements of
the solar neutrino flux and the neutrino signals emitted by supernovae (see Chap. 7).
H. Meyer ()
DESY, Hamburg, Germany
e-mail: [email protected]
Non-accelerator particle physics is a field of high energy physics that exploits the sources of energy and the related particle beams provided by nature. It works on grand scales of space, time and energy, considering e.g. the matter content of the whole universe, single events of huge energy release like the big bang, supernova explosions and the process of energy production in stars, as well as the acceleration of protons and electrons up to energies that are still far beyond the capabilities of man-made accelerators. It is the photons and neutrinos that carry very important information on cosmic events to us observers on Earth, and they may even give new insights into the basic properties of those particles. The atmospheric neutrino and muon flux turned out to be particularly relevant.
Let us review the state of knowledge around 1990. It was well known then that one of the most successful fields of non-accelerator particle physics concerns the formation of the primordial light nuclei D, 3He, 4He and 7Li in the environment of an expanding gas of protons, neutrons, electrons, photons and neutrinos (Boesgaard and Steigman, 1985; Yang et al., 1984). In addition, other (very) weakly interacting stable or unstable particles may have been present in large numbers, and it was suspected that they may constitute the ubiquitous dark matter present on all scales in the universe (Trimble, 1987). Experimental information relevant for this problem became available in the late 1980s, namely a precise measurement of the neutron lifetime, a safe limit on the number of different light neutrinos from studies of the Z0 particle in e+e− annihilation, and finally a better estimate of the amount of primordial 4He from the 4He abundance as seen today. It had become possible to use these data to reassess the ratio of baryons to photons, η = n_b/n_γ, which is estimated to be in the range of 10^−9–10^−10. Measurements of the neutron lifetime using very different techniques were performed in the late 1980s at the reactor in Grenoble. The results for the neutron lifetime τn are 877 ± 10 sec using a magnetic neutron storage ring (Paul et al., 1989) and 887.6 ± 3 sec from storing ultracold neutrons in a glass box coated with reflecting oil (Mampe et al., 1989). Since the amount of 4He produced in the early universe, Yp, depends on the uncertainty Δτn of the neutron lifetime like Yp = 0.24 ± (2 × 10^−4 × Δτn [sec]), the small error in the neutron lifetime is no longer of great concern; its influence on Yp is about a factor of ten smaller than that of a change in the number of different neutrino flavors by one.
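A quick check of these numbers (the sensitivity of Y_p to the number of neutrino flavours, roughly 0.013 per additional flavour, is a standard nucleosynthesis value quoted here as an assumption, not taken from the text): the lifetime uncertainties of about 3–10 s give

    ΔY_p ≈ 2 × 10^−4 × Δτ_n[sec] ≈ (0.6–2) × 10^−3,

whereas one extra light neutrino flavour would shift Y_p by about 0.013, indeed roughly a factor of ten more.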
The measurements of the width of the Z0 essentially rule out four neutrino flavors and are in very good agreement with three different neutrino species, the well known electron, muon and tau neutrinos. From the measurements of the 7Li abundance in old stars and a careful reanalysis of the amount of primordial 4He one obtains as nominal values for η10 (η measured in units of 10^−10) a value of 2.2 from 7Li (choosing the lower of two possible values) and 1.6 from 4He. This would seem to be in mild conflict with the lower limit of η10 = 2.6 derived from the upper limit on the primordial D abundance.
In 1967, A.D. Sacharov suggested a very general reason to expect the instability of nucleons. Based on the existence of C- and CP-violation, together with the absence of significant amounts of antimatter in a uniformly expanding universe, he argued that nucleons should decay. In the 1970s, much more specific and detailed arguments came up in the context of attempts to achieve a grand unification of the fundamental interactions (Pati and Salam, 1973; Georgi and Glashow, 1974). The most specific prediction used SU(5) as the unification group, with the result for the proton lifetime

    τ_p = 10^(28±1) × (M_X / 2 × 10^14 GeV)^4 years.

By 1987, the evolution of the gauge couplings with energy, which is the basis of this prediction, was known with impressive precision (Amaldi et al., 1987); however, a common unification mass was apparently missing, and furthermore the lower limit on the proton lifetime had reached values incompatible with the SU(5) prediction for the dominant decay mode p → e+ π0. (See Fig. 8.1.)
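The steep dependence on the unification mass behind this prediction can be seen from the generic dimensional estimate underlying such calculations (a standard GUT scaling argument, not a quotation from the sources above):

    τ_p ∼ M_X^4 / (α_GUT² · m_p^5)   (in natural units),

so an uncertainty of a factor of about 3 in M_X already moves the predicted lifetime by almost two orders of magnitude, which is why the prediction is usually written with the fourth power of M_X pulled out explicitly.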
After World War II physicists started to consider the extraordinary stability of protons a problem that required an explanation. Many nuclei are known to be unstable against alpha emission, beta decay and even fission, but protons appear to last forever, which seems to require an extra conservation law for protons. It therefore seemed of interest to demonstrate to what extent protons are stable. If protons did decay, so much energy would be released that the host nucleus would be destroyed. Since this was not observed, a first lower limit on the proton lifetime of about 10²⁰ years was estimated (Wick et al., 1949; Wigner, 1952; Segre, 1952; Seaborg
et al., 1953; Feinberg and Goldhaber, 1959). Reines et al. (1954) then started with an experiment in which a large piece of scintillator was placed under about 100 feet of rock to reduce the counting rate from penetrating cosmic rays depositing more than about 15 MeV. It provided an improved lower limit on the proton lifetime of 10²² years from the non-observation of destroyed nuclei.
Cosmic ray interactions were considered the only source of the observed counts, which, at sufficient depth, are entirely due to muons. Therefore deeper and deeper sites were chosen, e.g. in mines and tunnels.
Backenstoss et al. (1960) went into the Loetschberg tunnel, with more than 800 meters of rock overhead to dramatically reduce the counts due to cosmic rays, and reached a limit of 10²⁶ years, depending somewhat on the assumed decay mode of the proton.
Reines and collaborators (Gurr et al., 1967) then decided to go even deeper, into a South African gold mine at a depth of 3 200 meters. Here another cosmic ray issue came into consideration: finding neutrino events, first of all those created in air showers in the earth's atmosphere. Even the detection of neutrinos from extraterrestrial sources was discussed. A similar experiment at about the same depth was placed in a gold mine in Southern India, in the Kolar Gold Field (Achar et al. 1965a, 1965b). Both experiments succeeded in detecting neutrinos, but for proton decay only limits could be established, at more than 10²⁸ and later up to 10³⁰ years (Reines
and Crouch, 1974).
We think that all we observe around us was created in the Big Bang a long time ago, when everything was very hot, dense and uniform. As the universe expands, structure can develop, very rich structure indeed. Everything consists of matter (protons) and very likely not of antimatter, although all basic interactions seem to be symmetric between matter and antimatter. Why is it that only matter has survived? What
(if at all) would matter decay to? Photons and neutrinos in the end, of course, and
indeed as a result of the hot early universe, neutrinos and photons fill and dominate
(by number) the universe today.
In the 1960s, A. Sacharov (Sacharov, 1967) started to wonder about this situation, that apparently only protons had survived from the big bang environment. Could it be that the just discovered CP-violation was involved, so that antiprotons decayed in the very early big bang event and only some protons survived?
This idea could not gain ground. At the same time, the discovery of the whole zoo of strongly interacting particles and of a rather strange theory ruling them, quantum chromodynamics (QCD), made it possible to propose a new hypothesis, called Grand Unification (GUT) (Georgi and Glashow, 1974; see also Pati and Salam, 1973), which had the surprising consequence that protons should decay, e.g. ultimately to photons and neutrinos, with lifetime predictions just an order of magnitude above the best experimental limit. Simple decay modes were also predicted, like e⁺π⁰. This was very challenging for experimenters, in particular for those working in cosmic ray physics, since the detectors had to be deep underground, of very large size and with good resolution. The existing experimental installations, however, were sufficient neither in size nor in resolution to obtain convincing evidence for proton decay.
Right from the outset it was clear that the proton lifetime is very long indeed: the early results put it somewhere beyond 10³⁰ years, otherwise proton decay would not have escaped attention. At such lifetimes, cosmic ray interactions become a strong, in fact the dominating, background for a proton decay search. At the surface of the earth, cosmic rays dominantly consist of muons, which are rather penetrating through any material. Going into deep mines, however, offers sufficient shielding. The gold mines of the Kolar Gold Field in India and of Witwatersrand in South Africa, with (by modern standards) rather simple setups consisting of scintillator detectors and drift tubes, could only be used to set limits on the proton lifetime of a little more than 10³⁰ years, with simple assumptions on possible decay modes.
With definite predictions for lifetimes and decay modes, in particular from the ideas of Grand Unification (GUT), it became clear that specific detector designs needed to be realized which had significant efficiencies for the observation of proton decay candidates and which were placed deep enough underground to provide sufficient shielding against cosmic ray events. Part of the cosmic rays are neutrinos, which cannot be shielded against, and their certain observation deep underground was indeed one of the main reasons to go deep enough. What material should be used? Any chemical element is considered safe for proton decay searches, given the large energy release compared to the binding energies of nucleons in heavier nuclei. Scintillator, generically (CH)n, is to be preferred because of its free proton content, and so is water (H₂O); but iron or argon are also considered well suitable. In addition, a mass of more than a few hundred tons was needed because of simple rate considerations.
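The rate argument can be made explicit with a back-of-the-envelope count of nucleons; the detector masses and the lifetime used below are illustrative round numbers consistent with the discussion above, not values taken from any particular proposal.

```python
# Back-of-the-envelope event rate for a nucleon decay search:
# a kiloton-scale detector contains of order 10^32-10^33 nucleons, so lifetimes
# around 10^31 years correspond to a handful of decays per year before efficiencies.

AVOGADRO = 6.022e23          # nucleons per gram of ordinary matter (A nucleons per A grams)

def decays_per_year(mass_tons, tau_years):
    nucleons = mass_tons * 1e6 * AVOGADRO     # 1 ton = 1e6 g
    return nucleons / tau_years

for mass in (300, 1000, 3300):               # a few hundred tons up to IMB-like masses
    rate = decays_per_year(mass, tau_years=1e31)
    print(f"{mass:5d} t, tau = 1e31 yr -> ~{rate:5.1f} candidate decays per year")
```

A few hundred tons thus give of order ten candidate decays per year for a lifetime of 10³¹ years, before detection efficiencies are applied, which is why masses well beyond that were aimed at.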
In the mid-1970s great new efforts started, and in the first half of the 1980s new experiments came into operation using water, scintillator and iron as the detector material: with photomultipliers to detect the Cherenkov light in water or the scintillation light, and with tracking elements like flash tubes and drift tubes in the iron detectors. The detectors based on water were the largest, with IMB (3 300 tons) (Seidel et al., 1988) and Kamiokande (800 tons) (Hirata et al., 1989).
The experiments based on iron were somewhat less massive, ranging between 120 tons (Kolar, Krishnaswamy et al., 1986, and NUSEX, Battistoni et al., 1983) and 900 tons (Fréjus, Berger et al., 1991). All experiments observed atmospheric neutrinos. The masses of the detectors were large enough that all particles from p-decay as well as from neutrino interactions would be contained and total energies could be well established. Also the separation into muon neutrino and electron neutrino events could be achieved reliably. Although several candidates for proton decay were discussed at times, no candidate was confirmed with increasing exposures, and only limits on proton decay lifetimes could be established. With values above 10³³ years, the predictions of Grand Unification were exceeded by about a factor of 100!
However, the experimental search for evidence of other possible decay modes of nucleons continued. This has been done notably by the three large experiments IMB, Kamiokande II and Fréjus (see Fig. 8.2) (Seidel et al., 1988; Hirata et al., 1989; Berger et al., 1989a; Phillips et al., 1989). The experiments came close to the unavoidable background level due to the interactions of atmospheric neutrinos with the nuclei of the detector material. The limits obtained then mainly depend on the detection efficiencies for a given channel and ranged from a few times 10³¹ years for
modes involving neutrinos in the final state up to a few times 10³² years for modes with only electrons and photons to be detected. Although IMB and Kamiokande II continued taking data, they achieved only small improvements. A big step forward could only be achieved by the construction of a really huge detector, even bigger than Kamiokande: Superkamiokande (Kajita et al., 1989). In order to suppress the neutrino background it had to have an energy resolution and a spatial resolution much better than those of the detectors running then.
Baryon number violation was also suspected to manifest itself through the ΔB = 2 process of neutron–antineutron (n–n̄) oscillations (Mohapatra and Marshak, 1980). Two very different experimental approaches were used to search for n–n̄ transitions. In the first method, a very cold neutron beam generated in a reactor and carefully shielded against the Earth's magnetic field passes through a target. Antineutrons created along the flight path would annihilate with neutrons and protons in the target to produce a multipion final state that has to be separated from cosmic ray interactions in the target (Bressi et al., 1989; Baldo-Ceolin et al., 1990). Secondly, the underground experiments to search for nucleon decay also yield very interesting limits on n–n̄ oscillations (Jones et al., 1984; Takita et al., 1986). The transition of a neutron to an antineutron would occur inside a nucleus (¹⁶O or ⁵⁶Fe) with subsequent n̄N annihilation. The only background is from atmospheric neutrino interactions in the underground detectors. No evidence for annihilation events has been seen in any of the experiments. The best limits obtained were τ(n–n̄) > 1–2 × 10⁸ sec from the nucleon decay experiments (Jones et al., 1984; Takita et al., 1986) and τ(n–n̄) > 10⁷ sec from the experiment at the reactor in Grenoble (Bressi et al., 1989; Baldo-Ceolin et al., 1990).
The relevance of these experimental limits for the underlying baryon number
violating mechanism is not easily assessed, since the theoretical framework still has
many unknowns. However, in the long run the water detectors were very successful
as far as the observation of neutrinos is concerned (see Chap. 7).
In February 1987 the two big water detectors Kamiokande and IMB were online just at the right moment to receive signals from a flash of low energy neutrinos originating from a supernova of type II that occurred in the Large Magellanic Cloud, a galaxy next to our own (SN 1987A), see e.g. Koshiba (1988). Furthermore, only a year later Kamiokande also saw neutrinos from the sun, but at a rate lower than predicted by solar models, in agreement with other experiments, indicating the possibility of neutrino oscillations. Further evidence for neutrino oscillations was obtained with increasing statistics on the ratio of electron neutrinos to muon neutrinos coming from the atmosphere, which also deviated from expectation. What
was believed to be background for the search for p-decay proved to be the source of
great discoveries. Thus the original motivation to go deep underground into mines
and tunnels, namely a quantitative determination of fluxes of neutrinos, was well
met. One of the key problems of cosmic ray physics was solved.
In the meantime, the water detector in Kamioka, Japan, had been increased in mass by another order of magnitude through the construction of Superkamiokande, while keeping the resolution, so that the hunt for the elusive proton decay could continue. But again only limits could be obtained, reaching a new lower limit of about 10³⁴ years for the favored decay modes and a few factors less for other decay modes (Nishino
et al., 2012; Regis et al., 2012). Now, 40 years after the GUT prediction, limits a factor of 100 larger than predicted have been achieved. Should one really dare to go even further, say, by another order of magnitude? Probably neutrino events set a natural limit.
supported, although with less statistics, by NUSEX (Aglietta et al., 1989), and fi-
nally, by Superkamiokande. The latter experiment, however, detected the neutrino
oscillations in a parameter space inaccessible to the former generation of experi-
ments.
Particles and antiparticles could annihilate into ordinary quarks and leptons. The
quarks hadronize to produce high energy neutrinos from their subsequent decay.
Since inside the sun and the earth one has a beam dump situation, a neutrino signal
would mainly come from charm and bottom particle decay (Ritz and Seckel, 1988;
Gaisser et al., 1986; Ng et al., 1987; Ellis et al., 1987; Hagelin et al., 1987). The
neutrinos can be observed in detectors deep underground while waiting for nucleon
decay to occur. The expected neutrino rates are very low but so is the background
from atmospheric neutrinos within the angular acceptance from the direction of the
sun (earth). None of the big underground detectors reported an excess in the neutrino
flux from the sun (LoSecco et al., 1987b; Totsuka, 1989; Daum, 1989), and for the
Fréjus detector also limits on the neutrino flux from the direction of the center of
the earth are available (Kuznik, 1989).
It may be convenient to convert these limits into limits on the abundance of galac-
tic dark matter for several species of dark matter particles. This can be done on the basis of quantitative calculations of capture and annihilation rates for heavy neutrinos of Majorana (νM) and Dirac (νD) type and for the supersymmetric sneutrinos of νe and νμ type.
For Dirac type neutrinos the limit at lower particle masses (< 10 GeV) covers the
interesting region not excluded by the direct scattering experiments using Ge de-
tectors. The pure (unmixed) photino has been proposed as a viable candidate for
the LSP but no useful abundance limit could be obtained (Goldberg, 1983; Kraus,
1983). It seems more appropriate, however, to consider the more general case of a mixed particle, the neutralino (Barbieri et al., 1989), at the expense of more unknown parameters. Limiting regions in this parameter space on the basis of sev-
eral sources of experimental limits have been discussed (Olive and Srednicki, 1989).
The dark matter particles could as well annihilate in the galactic halo, with photons,
antiprotons and positrons as annihilation products to be detected (Ellis et al., 1988;
Turner and Wilczek, 1989; Tylka, 1989; Bergstrom, 1989).
The primary energy spectrum of cosmic rays as observed near earth extends up to about 10²⁰ eV = 10¹¹ GeV, following roughly a power law with spectral index −2.75 at energies ≤ 3 × 10¹⁵ eV and −3.1 up to the upper end of the spectrum. The particle composition of the cosmic ray flux had been determined with balloon and satellite experiments up to about 1 000 GeV/nucleon, and protons seem to dominate (Jones et al., 1987; Grunsfeld et al., 1988). The basic acceleration mechanisms and their sites inside or even outside the Galaxy are still not known; both large-scale as well as point-like sources have been proposed (Berezinsky and Prilutsky, 1978; Gaisser and Stanev, 1987). Observations of primary photons (and neutrinos) are crucial to help solve this outstanding problem (Berezinsky and Ptuskim, 1989).
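The quoted spectral shape can be written as a simple broken power law; in the sketch below the normalization is arbitrary, and only the two spectral indices and the break ("knee") energy at 3 × 10¹⁵ eV are taken from the text.

```python
# Sketch of the all-particle cosmic ray spectrum as a broken power law:
# dN/dE ~ E^-2.75 below the knee at 3e15 eV and ~ E^-3.1 above it, up to ~1e20 eV.

E_KNEE = 3e15        # eV, break ("knee") energy quoted in the text

def flux(E, norm=1.0):
    """Differential flux dN/dE in arbitrary units."""
    if E <= E_KNEE:
        return norm * E**-2.75
    # match the two branches at the knee so the spectrum is continuous
    return norm * E_KNEE**(-2.75 + 3.1) * E**-3.1

for E in (1e12, 1e15, 3e15, 1e18, 1e20):
    print(f"E = {E:.0e} eV -> flux relative to 1 TeV: {flux(E)/flux(1e12):.2e}")
```

The steeply falling spectrum is the reason why the highest energies can only be reached with very large air shower arrays rather than with balloons or satellites.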
The most likely source of neutrinos will be the decay of π⁺ (and μ±) produced in collisions of protons with ambient matter (Berezinsky et al., 1985; Reno and Quigg, 1988). The neutrinos are difficult to detect on earth because typical detection
efficiencies are very low (between 10⁻¹² at 1 GeV and 10⁻⁵ at 10 TeV) and also because of the high atmospheric background. Photons, however, are much easier to detect at or near earth than neutrinos.
High energy photons will in general be produced through synchrotron radiation of high energy electrons, by inverse Compton scattering of low energy photons off high energy electrons and finally by the decay of high energy π⁰. Below about 100 GeV, detectors have to be located above the atmosphere on satellites. At larger energies, > 1 000 GeV, primary photons generate large air showers detectable in counter arrays at mountain altitudes and, alternatively, through air Cherenkov light detection on clear moonless nights using open photomultipliers (Weekes, 1988).
In the low energy range from 50 MeV to 5 GeV two satellite experiments, SAS-2 (Fichtel et al., 1975) and COS-B (Bignami et al., 1975), had provided a fascinating first look at the high energy gamma-ray sky. Mostly galactic γ-ray sources had been detected (Mayer-Hasselwander and Simpson, 1988), with the possible exception of the quasar 3C273. The strongest point sources are the pulsars Vela, Crab and PSR 1820-11, identified by the characteristic time structure of their photon emission. The Crab nebula has also been detected at > 700 GeV, with very high significance, using the air Cherenkov technique (Weekes et al., 1989). Most of the observed photon sources, however, had found no definite identification so far (Mayer-Hasselwander and Simpson, 1988), mainly because the large positional error boxes contained too many astronomical objects as possible candidates.
The photon flux from the galaxy was believed to be generally dominated by so-called diffuse emission (Bloemen, 1989). It indeed originates from π⁰ production in pp collisions. The interstellar hydrogen (both in atomic and molecular form) constitutes the target protons; the proton beam is provided by cosmic rays, assumed to have the same intensity and spectral shape as detected near earth. The column density of the proton gas along a given line of sight through our galaxy is rather well known on the basis of 21 cm line observations (atomic hydrogen) and, for H₂, using CO (carbon monoxide) as a tracer. This simple model provides a quantitative description of the observed diffuse photon flux (Mayer-Hasselwander and Simpson, 1988; Bloemen, 1989). A comparison of the energy dependence of the photon flux from the inner and the outer galaxy reveals a flatter energy spectrum (by about 0.4–0.5 in the spectral index) for photons from the outer part of the galaxy (Bloemen, 1987) and specifically at moderate galactic latitudes (Bloemen et al., 1988). The reason was considered to be a flatter energy spectrum of the primary protons in the outer part of the Galaxy.
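A minimal numerical sketch of this "cosmic ray beam on interstellar hydrogen" picture is given below. The gamma-ray emissivity per hydrogen atom and the column densities are assumed representative values inserted only for illustration; they are not numbers from the text.

```python
# Minimal sketch of the diffuse-emission model: the gamma-ray intensity along a line
# of sight is roughly the emissivity per hydrogen atom times the hydrogen column density,
#   I_gamma ~ q_gamma * N_H / (4 pi)      [photons cm^-2 s^-1 sr^-1]

import math

Q_GAMMA = 2e-26    # assumed emissivity above ~100 MeV, photons s^-1 per H atom
                   # (representative order of magnitude, not a value quoted in the text)

def diffuse_intensity(column_density_cm2):
    return Q_GAMMA * column_density_cm2 / (4 * math.pi)

for label, n_h in [("high galactic latitude", 1e21),
                   ("typical galactic plane", 1e22),
                   ("inner galaxy",           1e23)]:
    print(f"N_H = {n_h:.0e} cm^-2 ({label}): I ~ {diffuse_intensity(n_h):.1e} ph cm^-2 s^-1 sr^-1")
```

The point of the sketch is only that, once the column density is known from 21 cm and CO observations, the diffuse flux follows with essentially no free parameters, which is why deviations in the spectral shape are attributed to the cosmic ray spectrum itself.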
Driven by these motivations, it was decided to extend these measurements to
much higher energies, to approach the energy where the spectral index of the pri-
mary cosmic ray energy spectrum changes, E = 3 × 10¹⁵ eV, and where significant
leakage out of the galaxy is expected. This was attempted in a number of air shower
array experiments. In the 1990s, an increasing number of particle physicists took
the decision to join one of the astroparticle telescopes.
After about 50 years, on the basis of the great success of the big bang hypothesis and given the breathtaking developments in particle physics, it is time to remember that gravitation has presented us, for some 80 years already, with the enigma that there seems to be much more matter in the cosmos whose nature is completely open. It would outnumber protons by a factor of about 10, provided it exists at all. I expect that once a solution is found to include gravitation in a combined description with Grand Unification, an understanding of the deviations from Newtonian gravitation can be found along the way. This would most likely involve a new view of the fundamental question of the stability of protons.
References
Achar, C.V., et al.: Phys. Lett. 18, 196 (1965a)
Achar, C.V., et al.: Phys. Lett. 19, 78 (1965b)
Aglietta, M., et al.: Europhys. Lett. 8, 611 (1989)
Amaldi, U., et al.: Phys. Rev. D 36, 2191 (1987)
Auriemma, G., et al.: Phys. Rev. D 37, 665 (1988)
Ayres, D.S., et al.: Phys. Rev. D 29, 902 (1984)
Backenstoss, G.K., et al.: Nuovo Cimento 16, 749 (1960)
Baldo-Ceolin, M., et al.: Phys. Lett. B 236, 95 (1990)
Barbieri, R., et al.: Nucl. Phys. B 313, 725 (1989)
Barger, V., Wishnant, K.: Phys. Lett. B 220, 308 (1988)
Barr, S., et al.: Phys. Lett. B 214, 147 (1988)
Barr, G., et al.: Phys. Rev. D 39, 3523 (1989)
Battistoni, G., et al.: Phys. Lett. B 133, 454 (1983)
Berezinsky, V.S., Prilutsky, O.F.: Astron. Astrophys. 66, 325 (1978)
Berezinsky, V.S., Ptuskim, V.S.: Astrophys. J. 340, 351 (1989)
Berezinsky, V.S., et al.: Nuovo Cimento C 8, 185 (1985)
Berger, Ch., et al.: Nucl. Instrum. Methods A 262, 463 (1987)
Berger, C., et al.: Nucl. Phys. B 313, 509 (1989a)
Berger, C., et al.: Phys. Lett. B 227, 489 (1989b)
Berger, C., et al.: Z. Phys. C 50, 385 (1991)
Bergstrom, L.: Phys. Lett. B 225, 372 (1989)
Bignami, G.F., et al.: Space Sci. Instrum. 1, 245 (1975)
Bloemen, J.B.G.M.: Astrophys. J. 317, L15 (1987)
Bloemen, H.: Annu. Rev. Astron. Astrophys. 27, 469 (1989)
Bloemen, J.B.G.M., et al.: Astron. Astrophys. 204, 88 (1988)
Boesgaard, A.M., Steigman, G.: Annu. Rev. Astron. Astrophys. 23, 319 (1985)
Bressi, G., et al.: Z. Phys. C 43, 175 (1989)
Bugaev, E.V., Naumov, V.A.: Phys. Lett. B 232, 391 (1989)
Bugaev, E.V., Naumov, V.A.: Sov. J. Nucl. Phys. 45, 857 (1987)
Carlson, D.: Phys. Rev. D 34, 1454 (1986)
Daum, H.J.: Preprint WUB89-22 Univ. of Wuppertal (1989)
Daum, K., et al.: Z. Phys. C 66, 417–428 (1995)
Ellis, J., et al.: Phys. Lett. B 198, 393 (1987)
Ellis, J., et al.: CERN-TH 5062/88 (1988)
Feinberg, G., Goldhaber, M.: Proc. Natl. Acad. Sci. 45, 1301 (1959)
9 Towards High-Energy Neutrino Astronomy
Christian Spiering
9.1 Introduction
The search for the sources of cosmic rays is a three-fold assault, employing charged cosmic rays, gamma rays and neutrinos. The first conceptual ideas on how to detect high energy neutrinos date back to the late 1950s. The long evolution towards detectors with a realistic discovery potential started in the 1970s and 1980s with the pioneering works in the Pacific Ocean close to Hawaii (DUMAND) and in the Siberian Lake Baikal (NT200). But only now, half a century after the first concepts, is such a detector in operation: IceCube at the South Pole. We do not yet know for sure whether with IceCube we will indeed detect extraterrestrial high-energy neutrinos or whether this will remain the privilege of next generation telescopes. But whatever the answer will be, the path to the present detectors was already a remarkable journey. This chapter sketches its milestones. It focuses on the first four decades and keeps the developments of the last decade comparatively short. I refer the reader to the 2011 review of the field (Katz and Spiering, 2012) for more detailed information on current results and plans for future detectors.
Earth they must hit matter, generate pions and neutrinos as decay products of the
pions.
The first ideas to detect cosmic high-energy neutrinos underground or underwa-
ter date back to the late 1950s. In the 1960 Annual Review of Nuclear Science,
K. Greisen and F. Reines discuss the motivations and prospects for such detectors.
In his paper entitled Cosmic Ray Showers (Greisen, 1960), Greisen writes:
As a detector, we propose a large Cherenkov counter, about 15 m in diameter, located in a
mine far underground. The counter should be surrounded with photomultipliers to detect the
events, and enclosed in a shell of scintillating material to distinguish neutrino events from
those caused by μ mesons. Such a detector would be rather expensive, but not as much as
modern accelerators and large radio telescopes. The mass of the sensitive detector could be
about 3000 tons of inexpensive liquid.
Later he estimates the rate of neutrino events from the Crab Nebula as one count per
three years and optimistically concludes:
Fanciful though this proposal seems, we suspect that within the next decade cosmic ray
neutrino detection will become one of the tools of both physics and astronomy.
At this time, he could not be aware of the physics potential of atmospheric neutrinos
and continues:
The situation is somewhat simpler in the case of cosmic-ray neutrinos (“atmospheric neu-
trinos” in present language. C.S.) – they are both more predictable and of less intrinsic
interest.
In the same year, at the 1960 Rochester Conference, M. Markov published his
ground-breaking idea (Markov, 1960)
. . .to install detectors deep in a lake or a sea and to determine the direction of charged
particles with the help of Cherenkov radiation.
This appeared to be the only way to reach detector volumes beyond the scale of 10⁴ tons.
During the 1960s, no predictions or serious estimates for neutrino fluxes from
cosmic accelerators were published. Actually, many of the objects nowadays con-
sidered as top candidates for neutrino emission were discovered only in the 1960s
and 1970s (the first quasar 1963, pulsars 1967, X-ray binaries with a black hole
1972, gamma-ray bursts 1973). The situation changed dramatically in the 1970s,
when these objects were identified as possible neutrino emitters, triggering an enor-
mous amount of theoretical activity.
In contrast to extraterrestrial neutrino fluxes, the calculation of the flux of atmo-
spheric neutrinos became more reliable. First serious estimates were published in the
early 1960s. These pioneering attempts are described in detail in the recollections
of Igor Zheleznykh (2006) who was a student of Markov. In his diploma work from
1958 he performed early estimates for the flux of atmospheric neutrinos and for the
Fig. 9.1 Kenneth Greisen (1918–2007), Frederick Reines (1918–1998) and Moisej Markov
(1908–1994)
flux of neutrinos from the Crab nebula. The real explosion of papers on atmospheric
neutrinos, however, happened between 1980 and 1990 when the large underground
detectors became operational and the field turned into a precision science (see the
contribution of Kai Zuber).
Still, from the perspective of the 1960s and early 1970s, the study of atmospheric neutrinos appeared as interesting as the search for extraterrestrial neutrinos (Markov and Zheleznykh, 1961). Neutrino oscillations did not yet play a role in the discussions of the late 1960s and appeared on the shopping list only in the 1970s. However, atmospheric neutrinos offered a possibility to study neutrino cross sections in an energy region which was not accessible to accelerator experiments at
that time. Using the language of the 1970s, these studies would have given infor-
mation on the mass of the intermediate W-boson. Without proper guidance on the
W-mass, these effects were expected to be visible already in the few-GeV range,
and actually this was one of the main motivations to build the first underground
neutrino detectors (Zheleznykh, 2006). More generally, the availability of neutrinos
with energies beyond what could realistically be expected from accelerator beams
was recognized as a tempting method to search for phenomena beyond the standard
model; however, not by everybody! F. Reines notes in his summary of the Neutrino-
81 conference (Reines, 1981):
Estimates of the atmospheric flux suggest that interactions of this source of ≥1 TeV neutri-
nos might be usefully observed, although our accelerator-based colleagues are not keen on
this as a source of new information.
Actually, the attitude of the broader particle physics community with respect to the physics potential of atmospheric neutrinos changed only with the detection of neutrino oscillations in the 1990s.
Before atmospheric neutrinos could be studied, they had to be detected. This was achieved in 1965, almost simultaneously, by two groups. One was led by F. Reines (Case-Witwatersrand group, later Case-Witwatersrand-Irvine, CWI). The group operated two walls of segmented liquid scintillator in a South African gold mine, at a depth of 8 800 m water equivalent. The configuration was chosen to iden-
tify horizontal muon tracks which, at this depth, could be due only to neutrino
Fig. 9.2 The first neutrino sky map with the celestial coordinates of 18 KGF neutrino events
(Krishnaswamy et al., 1971). Due to uncertainties in the azimuth, the coordinates for some events
are arcs rather than points. The labels reflect the numbers and registration mode of the events (e.g.
“S” for spectrograph). Only for the ringed events the sense of the direction of the registered muon
is known
interactions (the detector could measure the direction but not the sense of the direction, so that an upward moving muon could not be distinguished from a downward moving one). Between February and July 1965, seven such tracks were recorded, with a background of less than one event from muons not induced by neutrinos. It is interesting to note that the first of these tracks was recorded on February 23, 1965, exactly 22 years before the neutrinos from supernova SN1987A reached the Earth (23/2/1987). The detector of the other group (a Bombay–Osaka–Durham collaboration) was operated in the Indian Kolar Gold Field (KGF) mine, at a depth of 7 500 m water equivalent. It consisted of two walls of plastic scintillators and flash tubes. The KGF group started data taking nearly six months after the CW group, and saw the first of three neutrino candidates two months later than Reines (20/4/1965), but published two weeks earlier than the CW group: KGF on August 15, 1965 (submitted 12/7/1965, Achar et al., 1965), CW on August 30, 1965 (submitted 26/7/1965, Reines et al., 1965). So, indeed, Reines recorded the first cosmic neutrino ever, but
the formal priority is with the KGF group. A historic race which had no losers but
two winners.
With improved detectors, the two groups continued measurements for many
years collecting a total sample of nearly 150 neutrino events. The KGF group was
the first to release a sky map (see Fig. 9.2).
In 1978, the Baksan Neutrino Telescope (BNT) in the Caucasus started (partial)
operation. It was followed by a phalanx of new detectors in the 1980s, which mostly
were motivated by the search for proton decay. The largest of them was the IMB
detector which produced the first neutrino sky map of reasonable quality, with 187
events from 396 live-days (Svoboda et al., 1987).
The study of atmospheric neutrinos and of MeV-neutrinos from a galactic su-
pernova seemed to be feasible with detectors of a few hundred or thousand tons,
with the main unknown being the rate at which those supernovae occur. Predic-
tions ranged from several per millennium up to a few per century. Therefore a
Fig. 9.4 Left: Sources of muons in deep underwater/ice detectors. Cosmic nuclei – protons (p),
α particles (He), etc. – interact in the Earth atmosphere (light-colored). Sufficiently energetic
muons produced in these interactions (“atmospheric muons”) can reach the detector (white box)
from above. Upward-going muons must have been produced in neutrino interactions. Right: De-
tection principle for muon tracks
At the 1976 workshop and, finally, at the 1978 DUMAND workshop at the
Scripps Institute in La Jolla (Roberts and Wilkins, 1978) the issue was settled
in favor of an array which combined the last two options, ATHENE and UNI-
CORN.
The principle of the detector was to record upward-traveling muons generated in
charged current muon neutrino interactions. Neutral current interactions, which produce no muons, had only been discovered in 1973. They result in final-state charged particles of rather low energy and did not play a role for the design studies. The
upward signature guarantees the neutrino origin of the muon since no other particle
can traverse the Earth. Since the 1960s, a large depth was recognized as necessary
in order to suppress downward-moving muons which may be mis-reconstructed as
upward-moving ones (Fig. 9.4, left). Apart from these, only one irreducible back-
ground to extraterrestrial neutrinos remains: neutrinos generated by cosmic ray in-
teractions in the Earth’s atmosphere (“atmospheric neutrinos”). This background
cannot be reduced by going deeper. On the other hand, it provides a standard cali-
bration source and a reliable proof of principle.
Fig. 9.5 The originally conceived DUMAND cubic kilometer detector and the phased downgrad-
ing to the 1988 plan for a first-generation underwater neutrino telescope DUMAND-II
Confronted with the oceanographic and financial reality, the 1.26 km³ array was
abandoned. A half-sized configuration (1980) met the same fate, as did a much
smaller array with 756 phototubes (1982). The latter design was comparable in size
to the AMANDA detector at the South Pole (see Sect. 9.7) and the ANTARES
telescope in the Mediterranean Sea, close to Toulon (see Sect. 9.8). What finally
emerged as a technical project was a 216-phototube version, dubbed DUMAND-II
or "The Octagon" (eight strings at the corners of an octagon and one in the center),
100 m in diameter and 230 m in height (Bosetti et al., 1988) (see Fig. 9.5). The plan
was to deploy the detector 30 km off the coast of Big Island, Hawaii, at a depth of
4.8 km.
The evolution of the detector design, largely following financial and technological boundary conditions, was one side of the story. What about the flux predic-
tions?
At the 1978 workshop first investigations on neutron star binary systems as point
sources of high-energy neutrinos were presented, specifically Cygnus X-3 (D. Eich-
ler/D. Schramm and D. Helfand in Roberts and Wilkins, 1978). The connection to
the indications for sources of TeV γ-rays (none of them significant at that time!) was discussed by T. Weekes. At the same time, the possibilities for diffuse source
detection were disfavored (R. Silberberg, M. Shapiro, F. Stecker).
The gamma-neutrino connection was discussed further by V. Berezinsky at the
1979 DUMAND Workshop in Khabarovsk and Lake Baikal (Learned, 1979). He
emphasized the concept of “hidden” sources which are more effectively (or only)
detectable by neutrinos rather than by γ rays. Among other mechanisms, Berezin-
sky also investigated the production of neutrinos in the young, expanding shell of a
supernova which is bombarded by protons accelerated inside the shell (“inner neu-
trino radiation from a SN envelope”). He concluded that a 1 000 m2 detector should
be sufficient to detect high-energy neutrinos from a galactic supernova over several
weeks or months after the collapse. Naturally, eight years later, in 1987, much attention was given to this model in the context of SN1987A. But alas! – this supernova was at about 50 kpc distance, more than five times farther away than the Galactic center. Moreover, all underground detectors existing in 1987 had areas much smaller than 1 000 m². Therefore the chances to see inner neutrino radiation from the envelope were rather small, and actually "only" the MeV burst neutrinos and no high-energy neutrinos were recorded.
A large number of papers on expected neutrino fluxes was published during the
1980s. The expected neutrino fluxes were found to depend strongly
(a) on the energy spectrum of the γ -ray sources which could only be guessed
since the first uncontroversial TeV-γ observation was the Crab nebula in 1989
(Weekes et al., 1989), and
(b) on the supposed ν/γ ratio which depends on the unknown thickness of matter
surrounding the source.
The uncertainty of expectations is reflected in the DUMAND-II proposal (Bosetti
et al., 1988). Pessimistic and optimistic numbers differed by 2–3 orders of magni-
tude and left it open whether DUMAND-II would be able to detect neutrino sources
or whether this would remain the realm of a future cubic kilometer array. Two
years later, V. Berezinsky reiterated his earlier estimates that for neutrinos from
a fresh neutron star a detector with an effective area of 1 000 m² (i.e. a large underground detector) would be sufficient, but that the detection of extragalactic sources would require detectors of 0.1–1.0 km² size (Berezinsky, 1990). DUMAND-II, with 25 000 m² area, fell just below these values. Again citing A. Roberts (1992):
These calculations serve to substantiate our own gut feelings. I have myself watched the
progression of steadily decreasing size . . . at first with pleasure (to see it become more
practical), but later with increasing pain. . . . The danger is, that if DUMAND II sees no
neutrino sources, the funding agencies will decide it has failed and, instead of expanding it,
will kill it.
After various site surveys and technical tests, in 1983 the Department of Energy
(DOE) approved the funding for DOE-supported U.S. groups to deploy the “Short
Prototype String” (SPS). With additional support from NSF, ICRR in Japan and the
University of Bern in Switzerland, the SPS was conceived to develop and test the
basic detector techniques, to further study the environmental effects, to demonstrate
that muons can be reconstructed and to measure the muon vs. depth dependence. In
1987, this 7-phototube test string was successfully deployed for some hours from
a high-stability Navy vessel (Babson et al., 1990). It provided a measurement of the muon intensity as a function of depth.
After the successful SPS test, in 1988 the DUMAND-II proposal was submit-
ted to DOE and NSF. The collaboration groups of this proposal were: UC Irvine, CalTech, U. Hawaii, Scripps Institution of Oceanography, U. Vanderbilt, U. Wisconsin (USA), U. Kinki, ICRR Tokyo (Japan), TH Aachen, U. Kiel (Germany). DUMAND-II, with its 100 m diameter and 230 m height, would have detected three down-going muons per minute and about 3 500 atmospheric neutrinos per year.
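These two numbers already illustrate the background situation such a detector would have faced; the small conversion below uses only the rates quoted above.

```python
# Ratio of down-going atmospheric muons to atmospheric neutrino events
# expected in DUMAND-II, from the rates quoted in the proposal.

muons_per_year = 3 * 60 * 24 * 365        # three down-going muons per minute
neutrinos_per_year = 3500                  # atmospheric neutrino events per year

print(f"down-going muons per year : {muons_per_year:.2e}")
print(f"atmospheric neutrinos/year: {neutrinos_per_year}")
print(f"muon-to-neutrino ratio    : ~{muons_per_year / neutrinos_per_year:.0f} : 1")
```

Even at 4.8 km depth, roughly 450 down-going muons would have been recorded for every atmospheric neutrino, which is why the upward signature was so essential.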
A wealth of technological solutions was found within the design study for the
SPS, and many remained as a legacy for other neutrino telescopes. For the first
time, relevant optical properties of sea water were measured. Tests and investiga-
tions made during many cruises pushed ahead of oceanographic practice at that
time. Some of the innovative solutions were only possible since basic technologies
had only recently appeared on the market.
One example is the photomultiplier tube (PMT) which had to be large (to collect
much light), fast (to allow for fast timing and good muon angular resolution) and
to have a good amplitude resolution (allowing identification of the 1-photoelectron
(PE) peak and separation from noise and possibly from the 2-PE signals). Various
innovative attempts were made but eventually discarded after Hamamatsu Comp.
(Japan) committed to develop a spherical 15-inch PMT R2018. This PMT fitted into
a 17-inch commercial pressure sphere. The only other practicable design for a light
sensor was the PHILIPS “smart” photomultiplier XP2600 (van Aller et al., 1983).
I will sketch its operation principle in the context of the Baikal neutrino telescope,
where a similar tube was developed.
Another example of applying brand-new techniques is the use of optical fibers for data transmission. The processed signal is sent as an optical pulse through a multi-mode fiber cable. Fibers for use in undersea cables had become available just in
the late 1970s. This was the second fortunate event with remarkable consequences
since it removed the low-data-rate barrier imposed by shore cables with copper lines
of 40 km length.
Russian participation in the DUMAND project was strong from the beginning.
However, in the context of the Soviet invasion of Afghanistan, in the early 1980s the Reagan administration terminated the cooperation. As A. Roberts remembers (Roberts,
1992):
The severing of the Russian link was done with elegance and taste. We were told, confiden-
tially, that while we were perfectly free to choose our collaborators as we liked, if perchance
they included Russians it would be found that no funding was available.
About the same time, however, A. Chudakov proposed to use the deep water of
Lake Baikal in Siberia as the site for a “Russian DUMAND”. The advantages of
Lake Baikal seemed obvious: it is the deepest freshwater lake on Earth, with its largest depth at nearly 1 700 meters; it is famous for its clean and transparent water; and in late winter it is covered by a thick ice layer which allows installing winches and other heavy equipment and deploying underwater instrumentation without any use of ships.
In 1981, a first shallow-site experiment with small PMTs started. G. Domogatsky, a theoretician, became chair of a dedicated laboratory at the Institute for Nuclear Research of the USSR Academy of Sciences (INR) in Moscow, flanked by L. Bezrukov as leading experimentalist.
Soon a site in the southern part of Lake Baikal was identified as suitable. It was about 30 km south-west of the outflow of Lake Baikal into the Angara river and approximately 60 km from the large city of Irkutsk. A location at a distance of 3.6 km to shore, where the lake is about 1 370 m deep, was identified as optimal for a detector to be installed at a depth of about 1.0–1.1 km. Detectors could be installed between late February and early April from the ice cover, and operated over the full year via a cable to shore.
In the US, these efforts were noticed but obviously not understood as a com-
petition to DUMAND. V. Stenger, who was the leading Monte Carlo expert of the
DUMAND project, repeatedly expressed his doubts that one could separate neu-
trinos from background in Lake Baikal: the lake was too shallow and the back-
ground of downward-going muons much too high; the necessary cuts to reject the
background would inevitably also strongly diminish the signal of upward-going
muons from neutrino interactions, with the exception of rather small and dense ar-
rays.
After operation of the first underwater modules with a 15-cm PMT in 1982, a small string was operated for several days in the following year. In 1984, a first stationary string was deployed (Girlanda-84) and recorded downward moving muons (Bezrukov et al., 1984). It consisted of three floors, each with four PMTs in two pressure-tolerant cylinders of glass-fiber reinforced epoxy. At that time, no pressure-tight glass spheres were available in the USSR. The ends of the cylinders were closed by caps of plexiglass. The PMT was a Russian tube with a 15 cm flat pho-
tocathode and modest amplitude and time resolution. An electrical cable connected
the string to the shore. The 1984 string was followed by another stationary string
in 1986 (Girlanda-86). Data from this string were used to set stringent limits on the
flux of magnetic monopoles catalyzing proton decays along their path (Domogatsky
et al., 1986).
Girlanda-84 took data for a total of 50 days and then sank due to leaking buoys which had held the string in vertical position. But the cable penetrators through the epoxy cylinders, as well as the cap-to-cylinder hermetic connections, also tended to leak and were a notorious source of headaches. Moreover, it was clear that the PMT used was much too small and too slow for a neutrino telescope. Therefore a technology using glass spheres and a new type of photo-sensor was developed.
The Russians started by testing the "smart" XP2600 from PHILIPS (see
Sect. 9.3) in Lake Baikal (Bezrukov et al., 1987). In parallel, the development of
an equivalent Russian device, the QUASAR, was tackled, in cooperation with the
EKRAN company in Novosibirsk. The QUASAR (Fig. 9.7) is a hybrid device simi-
lar to the PHILIPS 2600 developed for the DUMAND project. Photoelectrons from
a 370 mm diameter cathode (K2 CsSb) are accelerated by 25 kV to a fast, high-gain
scintillator placed near the center of the glass bulb. The light from the scintilla-
tor is read out by a small conventional photomultiplier (type UGON). One pho-
toelectron from the hemispherical photocathode yields typically 20 photoelectrons
in the small photomultiplier. This high multiplication factor results in an excellent
1-PE resolution, clear distinction between 1-PE and 2-PE pulses, a time jitter as
small as 2 ns and negligible sensitivity to the Earth’s magnetic field (Bagduev et al.,
1999).
In 1988, the Baikal experiment was approved as a long-term direction of re-
search by the Soviet Academy of Sciences and the USSR government which in-
cluded considerable funding. A full-scale detector (yet without clear definition of
its size) was planned to be built in steps of intermediate detectors of growing size.
In the same year 1988, our group from the East German Institute of High En-
ergy Physics in Zeuthen (part of DESY from 1992 on) joined the Baikal experi-
ment.
After German unification in 1990, the Zeuthen group had access to the Western
market and contributed Jena glass spheres and some underwater connectors to the strings which were deployed from 1991 to 1993 (see below). In parallel, Russian
spheres were developed in collaboration with industry, as well as penetrators and
connectors which tolerated water depths down to 2 km – not suitable for large-depth
ocean experiments but sufficient for Lake Baikal.
In 1989, a preliminary version of what later was called the NT200 project was
developed, an array comprising approximately 200 optical modules. The final ver-
sion of the project description was finished in 1992 (Sokalski and Spiering, 1992).
At this time, the participating groups came from INR Moscow, Univ. Irkutsk,
Moscow State Univ., Marine Techn. Univ. Petersburg, Polytechnical Institutes in
Nizhni Novgorod and Tomsk, JINR Dubna, Kurchatov Inst. (Moscow), Limno-
logical Inst. Irkutsk (all Russia), DESY-Zeuthen (Germany) and KFKI Budapest
(Hungary).
NT200 (Fig. 9.8, left) is an array of 192 optical modules carried by eight strings
which are attached to an umbrella-like frame consisting of 7 arms, each 21.5 m in
length. The strings are anchored by weights at the lake floor and held in a vertical
position by buoys at various depths. The configuration spans 72 m in height and
43 m in diameter. The finely balanced mechanics of this frame, with all its buoys,
anchor weights and pivoted arms is another stunning feature of the Baikal experi-
ment. The detector is deployed (or hauled up for maintenance) within a period of
about six weeks in February to April, when the lake is covered with a thick ice layer
providing a stable working platform. It is connected to shore by several copper ca-
bles on the lake floor, which allows for operation over the full year.
The optical modules with the QUASAR-370 phototubes are grouped pair-wise along a string. In order to suppress accidental hits from dark noise (about 30 kHz) and bio-luminescence (typically 50 kHz but seasonally rising up to hundreds of kHz), the two photomultipliers of each pair are operated in coincidence. The time calibration is done using several nitrogen lasers in pressure-tight glass cylinders.
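The gain from pairing two photomultipliers can be estimated with the standard accidental-coincidence formula; the coincidence window of 100 ns used below is an assumed illustrative value, only the noise rates are taken from the text.

```python
# Accidental coincidence rate of two photomultipliers with single rates R1, R2
# and coincidence window tau:  R_acc ~ 2 * R1 * R2 * tau

def accidental_rate(r1_hz, r2_hz, window_s):
    return 2.0 * r1_hz * r2_hz * window_s

single_rate = 30e3 + 50e3      # dark noise plus typical bio-luminescence, per PMT (Hz)
window = 100e-9                # assumed coincidence window of 100 ns (illustrative)

r_acc = accidental_rate(single_rate, single_rate, window)
print(f"single PMT rate     : {single_rate/1e3:.0f} kHz")
print(f"accidental pair rate: {r_acc/1e3:.2f} kHz")
print(f"suppression factor  : ~{single_rate / r_acc:.0f}")
```

Under these assumptions the pairing reduces the 80 kHz single rate to an accidental pair rate of order 1 kHz, which is what makes the local coincidence worthwhile despite the loss of some genuine single hits.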
Fig. 9.8 Left: The Baikal Neutrino Telescope NT200. Right: One of the first upward moving
muons from a neutrino interaction recorded with the 4-string stage of the detector in 1996 (Balka-
nov et al., 1999). The Cherenkov light from the muon is recorded by 19 channels
The construction of NT200 coincided with the collapse of the USSR and an eco-
nomically desperate period. Members of the collaboration and even some industrial
suppliers had to be supported by grants from Germany; nevertheless many highly
qualified experimentalists left the collaboration and tried to survive in the private
sector. Over a period of three years, a large part of the food for the winter campaigns
at Lake Baikal had to be bought in Germany and transported to Siberia. Still, a nu-
cleus of dedicated Russian physicists heroically continued to work for the project.
Under these circumstances, the construction of NT200 extended over more than
five years. It started with the deployment of a 3-string array (Wischnewski et al.,
1993) with 36 optical modules in March/April 1993. The first two upward moving muon candidates were extracted from the 1994 data. In 1996, a 96-OM array with four
NT200 strings was operated (Balkanov et al., 1999) and provided the first textbook
neutrinos like the one shown in Fig. 9.8 (right).
NT200 was completed in April 1998 and has been taking data since then. The
basic components have been designed and built in Russia, notably the mechanics
of the detector, the optical module, the underwater electronics and cabling. The
non-Russian contributions all came from DESY: the laser time calibration system,
a transputer farm in the shore station for fast data processing, an online monitor-
ing system and a special underwater trigger tailored to register slowly moving, very bright particles (such as GUT monopoles), not to mention the supply of Western electronics and glass spheres.
The small spacing of modules in NT200 leads to a comparatively low energy
threshold of about 15 GeV for muon detection. About 400 upward muon events
were collected over 5 years. This comparatively low number reflects the notori-
ously large number of failures of individual channels during a year rather than what
would correspond to the effective area. Still, NT200 could compete with the much
larger AMANDA for a while by searching for high-energy cascades below NT200,
surveying a volume about ten times as large as NT200 itself (Aynutdinov et al.,
2006a). In order to improve pattern recognition for these studies, NT200 was surrounded in 2005–2006 by three sparsely instrumented outer strings (six optical module pairs per string). This configuration is named NT200+ (Aynutdinov et al., 2006b), but
suffered from several problems (from the new strings as well as from the mean-
while antiquated NT200 itself) so that no satisfying statistics and no convincing
results have yet been obtained.
The high threshold for getting a detector working in a hostile environment such as the deep Pacific or under the harsh conditions on frozen Lake Baikal resulted in apparently long preparatory periods for both DUMAND and Baikal. This led others to think about detectors near the surface (for a review see Belotti and Laveder, 1993). The advantages seemed tempting: much easier access and a less challenging environment. Moreover, proven techniques like tracking chambers or Cherenkov techniques à la Kamiokande could be used. However, none of these projects was realized, be it for financial reasons, because of the failure to convincingly demonstrate the background rejection capabilities, or because shallow lake water parameters turned out to be worse than expected.
At the same time, underground detectors moved from success to success. Re-
markably, two of these successes had not been on the top priority list of the ex-
periments: neutrino oscillations (since the trust in their existence was low in the
1980s) and neutrinos from supernova SN1987A (since Galactic or near-Galactic
supernovae are rare). Meanwhile, neutrinos from high-energy astrophysical sources did not show up. Even the data from the largest detectors with about 1 000 m² area (MACRO and Super-Kamiokande) did not show any indication of an excess over atmospheric neutrinos. Seen from today, the search for sources of high-energy neutrinos with detectors of 1 000 m² or less appears to be hopeless from the be-
ginning, with the possible exception of certain transient Galactic sources. But when
these detectors were constructed, this knowledge was not common and the search
for point sources appeared as a legitimate (although not priority) goal.
In this situation, a new, spectacular idea appeared on stage. In 1988, Francis Halzen
from the University of Wisconsin gave a talk at the University of Kansas. On this occasion he was contacted by Ed Zeller, a Kansas glaciologist. Zeller told him about
a small test array of radio antennas at the Soviet Vostok station, close to the geo-
magnetic South Pole. The Russians were going to test whether secondary particles
generated in neutrino interactions could be detected via their radio emission. The
idea that showers of charged particles would emit radio signals had been published
back in 1962 by the Soviet physicist Gurgen Askaryan. Together with his colleagues
Enrique Zas and Todor Stanev, Halzen realized that the threshold for this method was discouragingly high (Halzen, 1995). Instead he asked himself whether optical detection via Cherenkov light, i.e. the DUMAND principle, would also be feasible
for ice. Halzen (1998) remembers:
I suspect that others must have contemplated the same idea and given up on it. Had I not
been completely ignorant about what was then known about the optical properties of ice
I would probably have done the same. Instead, I sent off a flurry of E-mail messages to
my friend John G. Learned, then the spokesman of DUMAND. . . . Learned immediately
appreciated the advantages of an Antarctic neutrino detector.
A few months later, Halzen and Learned released a paper “High energy neutrino
detection in deep Polar ice” (Halzen and Learned, 1988). With respect to the light
attenuation length they
. . . proceeded on the hope that a simple test will confirm the belief that it is similar to the observed 25 m attenuation length for blue to mid UV light in clear water in ocean basins.
Bubble-free ice was hoped to be found at depths smaller than 1 km. Holes drilled
into the ice were supposed to refreeze or, alternatively, to be filled with a non-
freezing liquid.
Halzen is a theorist and Learned had his hands full with DUMAND, so neither of them proceeded to do an experiment. But the idea made it to Buford Price's group at the University of California, Berkeley. In 1989, two young physicists of the Price
group, Doug Lowder and Andrew Westphal, joined a Caltech group drilling holes
in Antarctic ice and tried to measure the ice transparency using existing boreholes.
It would take, however, another year until the first successful transparency measure-
ment of natural ice was performed – this time in Greenland. Bob Morse from the
University of Wisconsin and Tim Miller (Berkeley) lowered photomultipliers into a
3 km hole drilled by glaciologists (Lowder et al., 1991).
In parallel to these first experimental steps, Buford Price, Doug Lowder and Steve
Barwick (Berkeley), Bob Morse and Francis Halzen (Madison) and Alan Watson
(Leeds) met at the International Cosmic Ray Conference in Adelaide and decided to
propose the Antarctic Muon And Neutrino Detector Array, AMANDA.
In 1991 and 1992, the embryonic AMANDA collaboration deployed photomultipliers at various depths in the South Polar ice. Holes were drilled using a hot-water drilling technique which had been developed by glaciologists. Judging from the count rate of coincidences between photomultipliers (which are due to down-going muons), the light absorption length of the ice was estimated to be about 20 m, and scattering
effects were supposed to be negligible. It would turn out later that this was a funda-
mental misinterpretation of the rates. But exactly this interpretation encouraged the
AMANDA physicists to go ahead with the project.
With ongoing activities in Hawaii and at Lake Baikal and the first ideas on a tele-
scope in polar ice, the exploration of the Mediterranean Sea as a site for an under-
water neutrino telescope was natural. First site studies along a route through the
Mediterranean Sea were performed in 1989 by Russian physicists who also mea-
sured the muon counting rate as a function of depth (Deneyko et al., 1991). In July
1991, a Greek/Russian collaboration led by Leonidas Resvanis from the Univer-
sity of Athens performed a cruise and deployed a Russian-built hexagonal structure made of titanium and carrying 10 photomultipliers down to a depth of 4 100 m. The site was close to Pylos on the west coast of the Peloponnesus. They measured the vertical muon intensity and the angular distribution of down-going muons. This was the start of the NESTOR project (NESTOR: http://www.nestor.noa.gr/), which was named after the mythical king of Pylos who counseled the Greeks during the Trojan war.
The advantages of the Pylos site were obvious: the depth can be chosen down to 5 200 m, depending on the acceptable distance to shore, deeper than at any other candidate site. This would reduce the background of mis-reconstructed downward muons faking upward muons from neutrino interactions and would even allow looking above the horizon. The water quality was excellent and the bio-luminescence seemed to
be lower than at other Mediterranean sites.
In 1992, the collaboration included the University of Athens, the Scripps Institution of Oceanography in San Diego (USA), the Universities of Florence (Italy), Hawaii,
Wisconsin (USA), Kiel (Germany) and the Institute for Nuclear Research Moscow.
Naturally, more Greek institutes joined. French institutes joined and left again to
pursue their own project ANTARES. More Italian institutes joined but later also de-
cided to follow their own project, NEMO, close to Sicily. LBNL Berkeley provided
essential electronics for the first stationary hexagonal floor which was deployed in
2004.
The results of the early cruises and the concept for the NESTOR detector were
developed and presented during a series of Workshops in Pylos. NESTOR was con-
ceived to consist of a hexagon of hexagonal towers covering an area of about 10⁵ m²
as shown in Fig. 9.10.
A single tower should carry 168 PMTs on 12 hexagonal floors, vertically spaced
by 20–30 m, each with six omni-directional modules at the end of 16 m arms and
one in the center (Resvanis et al., 1994). The “omni-directional module” contained
two 15-inch PMTs each in a 17-inch glass pressure sphere.
The philosophy of NESTOR was to be not only sensitive to high-energy neutrinos
(therefore the large area covered by seven towers) but also to study atmospheric
neutrino oscillations (hence the 5 GeV threshold inside the geometrical volume of a
tower and the omni-directionality of the modules).
In 1993 and 1994, three collaborations were going to deploy detectors with three
or more strings. Three strings are the minimum to achieve full spatial reconstruc-
tion.
The DUMAND collaboration was working towards installation of the first three
of the nine DUMAND-II strings. Two of these strings were to be equipped with
“Japanese Optical Modules” (JOMs) containing a 15-inch PMT “R2018” from
Hamamatsu, and one string with "European Optical Modules" (EOMs) containing
the hybrid XP2600 from Philips. This stage was christened TRIAD.
9.7 AMANDA
AMANDA is located a few hundred meters from the Amundsen–Scott station. Holes
60 cm in diameter were drilled with pressurized hot water; strings with optical mod-
ules were deployed in the water which subsequently refreezes. Installation opera-
tions at the South Pole are performed in the Antarctic summer, November to Febru-
ary. For the rest of the time, two operators (of a winter-over crew of 25–40 persons
in total) maintained the detector, connected to the outside world via satellite com-
munication.
The first AMANDA array with 80 optical modules on four strings was deployed
in the austral summer 1993/1994, at depths between 800 and 1 000 m (Askebjer
et al., 1995). Surprisingly, light pulses sent from one string to a neighboring string over a distance of 20 m did not arrive after the expected 100 ns, but were considerably delayed. The surprise was resolved at the 1994 Venice Workshop on Neutrino Telescopes. Here, Grigorij Domogatsky informed Francis Halzen about results from an ice core extracted at the geomagnetic South Pole, where the Russian Vostok station is located. The data proved that air bubbles remaining from the original firn ice at the surface had not yet disappeared at 1 km depth. The delay was due to light scattering at the bubbles. Light would not travel in a straight line but via a random walk and would become nearly isotropic after a few times a characteristic distance called the effective scattering length. The effective scattering length was found to be between 40 cm at 830 m depth and 80 cm at 970 m. The scattering by air bubbles trapped in the ice made track reconstruction impossible.
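These numbers make the observed delays plausible. As a rough order-of-magnitude estimate (assuming simple isotropic photon diffusion with a mean free path equal to the effective scattering length λ_eff and neglecting absorption), the time needed to diffuse over a distance d is

\[
t_{\mathrm{diff}} \;\sim\; \frac{d^{2}}{6D} \;=\; \frac{d^{2}}{2\,c_{\mathrm{ice}}\,\lambda_{\mathrm{eff}}},
\qquad D = \tfrac{1}{3}\,c_{\mathrm{ice}}\,\lambda_{\mathrm{eff}} .
\]

For d = 20 m and λ_eff ≈ 0.5 m this gives times on the microsecond scale, more than an order of magnitude longer than the roughly 100 ns expected for straight-line propagation – consistent with the delayed, smeared-out pulses that were observed.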
A great story might have been over before it really got started. AMANDA seemed
to be "nothing but a big calorimeter" – as Leonidas Resvanis sarcastically put it – without any real tracking capabilities. This could have been the point to give
up the project. Nevertheless, our group from DESY joined. We were encouraged by
a trend seen in the AMANDA data themselves, as well as by ice core data taken at
the Russian Vostok station: below 1 300 meters bubbles should disappear.
This expectation was confirmed with a second 4-string array which was deployed
in 1995/1996. The remaining scattering, averaged over 1 500–2 000 m depth, corre-
sponds to an effective scattering length of about 20 m and is assumed to be due to
dust. This is still considerably worse than for water but sufficient for track recon-
struction (Ackermann et al., 2006). The proof that it indeed was sufficient took some
time, as well as the development of the suitable reconstruction methods and selec-
tion criteria (Ahrens et al., 2004a). The array was upgraded stepwise until January
2000 and eventually comprised 19 strings with a total of 677 optical modules, most
of them at depths between 1 500 and 2 000 m. Figure 9.11 shows the AMANDA
configuration.
In Fig. 9.12, absorption and scattering coefficients are shown as functions of
depth and wavelength (Ackermann et al., 2006). The variations with depth are due
to bubbles at shallow depth leading to very strong scattering and, at larger depths, to
dust and other material transported to Antarctica during varying climate epochs. The
quality of the ice improves substantially below a major dust layer at a depth of about
2 000–2 100 m, with a scattering length about twice as large as for the region above
2 000 m. The depth dependence of the optical properties complicates the analysis of
the experimental data. Furthermore, the large delays in photon propagation due to the strong scattering result in a worse angular resolution for deep-ice detectors than for water detectors. On the other hand, the large absorption length, with a cut-off below 300 nm
instead of 350–400 nm in water, results in better photon collection.
corresponding neutrino flux would not fit any reasonable assumption on the ener-
getics of the source (Reimer et al., 2005), the other claiming that scenarios yield-
ing such fluxes were conceivable (Halzen and Hooper, 2005). Since the analysis
was not a fully blind analysis, it turned out to be impossible to determine chance
probabilities for this event, and actually the result was never published in a jour-
nal. However, it initiated considerations to send alerts to gamma-ray telescopes in
case time-clustered events from a certain direction would appear. Such a “Target-
of-Opportunity” alert is currently operating between IceCube and the gamma-ray
telescopes MAGIC (La Palma) and VERITAS (Arizona).
Fig. 9.15 The “curious” coincidence of neutrino events from the direction of an AGN with gamma
flares from the same source. The second and third of the three events recorded in 2002 (dashed)
coincide within about one day with peaks seen by Whipple
projects, and if problems from this corner add to the inherent problems of deep un-
derwater projects, delays or even failure are inevitable.
Currently NESTOR is part of the KM3NeT framework which is directed towards
a multi-cubic kilometer detector in the Mediterranean Sea.
French collaborators, who temporarily had been members of NESTOR, pursued
an independent strategy from the mid-1990s. Together with collaborators from Italy
and the Netherlands they presented a full proposal for a 12-string detector in 1999
(Aslanides et al., 1999). In 2001, a German group from the University of Erlangen also joined the experiment. ANTARES stands for Astronomy with a Neutrino Telescope and
Abyss environmental RESearch (ANTARES: http://antares.in2p3.fr). This proposal
was based on the operation of a demonstrator string (Blondeau, 1998; Feinstein,
1999) as well as on the results of extensive site exploration campaigns in the region
off Toulon at the French Mediterranean coast, indicating that the optical background
(Amram et al., 2000) as well as sedimentation and biofouling (Amram et al., 2003)
are acceptable at that site. However, taking all criteria together (depth, optical clarity, optical background and sedimentation), the site is inferior to the Greek and Italian sites.
The construction of ANTARES started in 2002 with the deployment of a shore
cable and a junction box, the central element connecting the shore cable to the de-
tector. In 2002/2003, a preproduction string was deployed and operated for a few
months. Several technical problems were identified that required further studies,
design modifications and the operation of a mechanical test string (Ageron et al.,
2007). The detector in its final 12-string configuration was installed in 2006–2008
and has been operational since then, with a break of a few months in 2009 due to a
failure of the main cable that required repair.
ANTARES consists of 12 strings, each carrying 25 “storeys” equipped with three
optical modules, an electronics container and calibration devices. The optical mod-
ule consists of a 17-inch glass sphere housing a hemispherical 10-inch photomultiplier.
Fig. 9.16 Schematic of the ANTARES detector. Indicated are the 12 strings and the instrumenta-
tion line in its 2007 configuration (IL07). Shown as an inset is a photograph of a storey carrying three photomultipliers
Fig. 9.17 Top: Number of reconstructed muons in the 2008 ANTARES data, as a function of
the reconstructed zenith angle (horizontal bars). Also indicated are the simulation results for at-
mospheric muons (dashed), and muons induced by atmospheric neutrinos (full line). The shaded
band indicates the systematic uncertainties. Figure taken from Aguilar et al. (2011). Bottom: Equa-
torial skymap of neutrino-induced muon events from 295 days of ANTARES data in 2007/2008.
The background color scale indicates the sky visibility in percent of the time. The most significant
accumulation of events, marked with a circle, is fully compatible with the background expectation
(Eberl, 2011)
long and each equipped with four 10-inch PMs. The floors are tilted against each
other and form a three-dimensional structure (Capone et al., 2009). A tower can
be folded together and deployed to the sea floor as a compact object that is subse-
quently unfurled. Contrary to single strings and similarly to the NESTOR concept,
the 3-dimensional arrangement of photomultipliers per tower allows for local recon-
struction of muon directions.
A suitable site at a depth of 3.5 km, about 100 km off Capo Passero on the south-eastern coast of Sicily, has been identified and investigated during various
campaigns. During the first prototyping phase, a cable to a test site near Catania at
a depth of 2 km was installed and equipped with a junction box. In 2007, a “mini-
tower” with 4 bars was deployed, connected and operated for several weeks. Al-
though the data taking period was limited to a few months due to technical prob-
lems, the mini-tower provided the proof of concept for the technologies and most of
the components employed.
The setup of a second phase (Taiuti et al., 2011) includes shore infrastructure at
Capo Passero and a 100 km long cable to the site at 3.5 km depth; both are currently
in place. A remotely operated vehicle (ROV) is available for the deep-sea operations.
A mechanical test tower of limited size was successfully deployed and unfurled in
early 2010. The plans to deploy a full-size prototype tower will be pursued in the
KM3NeT framework.
9.9 IceCube
With IceCube (IceCube: http://www.icecube.wisc.edu/), the idea of a cubic-
kilometer detector was finally realized (Ahrens et al., 2004b). However, the way towards the first installation was anything but smooth.
Actually, the first initiative beyond AMANDA was a concept called DeepIce,
a proposal for multi-disciplinary investigations, including neutrino and cosmic ray
astrophysics, glaciology, glacial biology, seismology and climate research. As an
example, we note the relation between the layered impurities from dust and climatic
effects or volcano eruptions (Ackermann et al., 2006). The reviewers, however, concluded that a neutrino detector was being sold under the flag of multi-disciplinary research, (mis)using the NSF funding model for multi-disciplinary centers. The advice was to go ahead with a
dedicated project for a neutrino telescope.
As a consequence, already in November of the same year a first 67-page IceCube
proposal was submitted to NSF. It was signed essentially by the collaborators of
the old AMANDA collaboration. Soon, a number of additional institutions became
interested and a new collaboration was formed, the IceCube collaboration, which
meanwhile has grown to more than 30 institutions. Paradoxically, the two collaborations co-existed until 2005, before merging into a single collaboration, IceCube.
For IceCube construction, the thermal power of the hot-water drill factory was
upgraded to 5 MW, compared to 2 MW for AMANDA. This reduced the average
time to drill a 2 450 m deep hole to 35 hours. The commissioning of the drill during
the first deployment season 2004/2005 turned out to be extremely challenging, but
eventually a first, single string was deployed in January 2005: The first step was
made! The following seasons resulted in 8, 13, 18, 19, 20 and 7 strings, respectively.
The last of the 86 strings was deployed on December 18, 2010.
IceCube consists of 5 160 digital optical modules (DOMs) installed on 86 strings
at depths of 1 450 to 2 450 m. A string carries 60 DOMs with 10-inch photomultipli-
ers Hamamatsu R7081-02 housed in a 13-inch glass sphere. Signals are digitized in
the DOM and sent to shore via copper cables. 320 further DOMs are installed in Ice-
Top, an array of detector stations on the ice surface directly above the strings (see
Fig. 9.18). AMANDA, initially running as a low-energy sub-detector of IceCube,
was decommissioned in 2009 and replaced by DeepCore, a high-density sub-array
of six strings at large depths (i.e. in the best ice layer) at the center of IceCube.
DeepCore collects photons with about six times the efficiency of full IceCube, due
to its smaller spacing, the better ice quality and the higher quantum efficiency of
new PMTs. Together with the veto provided by IceCube, this results in an expected
threshold of about 10 GeV. This opens a new window for oscillation physics and
indirect dark matter search.
The muon angular resolution achieved by present reconstruction algorithms is
about 1° for 1 TeV muons and below 0.5° for energies above 10 TeV. Unlike un-
derwater detectors with their environment of high optical noise, IceCube can be op-
erated in a mode that is only possible in ice: The detection of burst neutrinos from
supernovae. The low dark-count rate of the PMTs allows for detection of the fee-
ble increase of the summed count rates of all PMTs during several seconds, which
would be produced by millions of interactions of few-MeV neutrinos from a super-
nova burst (Abbasi et al., 2011b). IceCube records the counting rate of all PMTs
in millisecond steps. A supernova in the center of the Galaxy would be detected
with extremely high significance and the onset of the pulse could be measured in
Fig. 9.20 90 % C.L. integral upper limits on the diffuse flux of extraterrestrial neutrinos. The
horizontal lines extend over the energy range which would cover 90 % of the detected events from
an E⁻² source (5 % would be below and 5 % above the range). All model predictions have been
normalized to one flavor, i.e. all of the all-flavor limits have been divided by 3. The colored band
indicates the measured flux of atmospheric neutrinos (see also Fig. 9.14), the broadening at higher
energies reflects the uncertainties for prompt neutrinos. The limits on muon neutrinos are from
807 days of AMANDA, 334 days of ANTARES and 375 days of IceCube-40. Cascade/all-flavor limits are from 807 days of AMANDA, 1 038 days of Baikal-NT200 and 257 days of IceCube-22. See Katz and Spiering
(2012) for references. Also indicated is the Waxman–Bahcall (WB) bound (Waxman and Bahcall,
1999)
kinds of background has been observed so far, resulting in upper limits on the dif-
fuse flux of extraterrestrial high-energy neutrinos. Figure 9.20 summarizes the lim-
its obtained in the TeV–PeV region. For each experiment and each method only
the best limit is shown. Remarkably, from the first limit derived from the under-
ground experiment Frejus to the 2010 IceCube-40 limit, a factor of 500 improve-
ment has been achieved. Several models such as e.g. the blazar model of Stecker
(2005) shown in the figure can be excluded. A further factor of 10 improvement is
expected over the next 2–3 years, using the full IceCube detector and combining
muon and cascade information. The expected sensitivity is more than an order of
magnitude below a theoretical upper bound of Waxman and Bahcall (1999), and
prompt atmospheric neutrinos will be detectable for all but the lowest predictions
(Kowalski, 2005).
Five decades after the first conceptual ideas, and three decades after first practical
attempts to build high-energy neutrino telescopes, we may be close to a turning
point. IceCube has started data taking in its full cubic-kilometer configuration but
has not yet detected an extraterrestrial neutrino signal.
The strong case for high-energy neutrino astronomy has remained unchanged
over time, but the requirements on the necessary sensitivity have tightened continu-
ously. Whereas underground detectors on the kiloton mass scale (or on the 10³ m² muon area scale) seemed sufficient in the 1960s, predictions from the 1970s and
1980s already favored scales of 10⁵–10⁶ m². Actually, DUMAND was conceived
as a cubic kilometer configuration in 1978. On the other hand, underground detec-
tors like MACRO or Super-Kamiokande were still given a certain potential for high-
energy neutrino astronomy. Therefore, it is no surprise that, in spite of their declared
goal of the kilometer scale, also the underwater/ice community did hope for early
discoveries with NT200, AMANDA, NESTOR and ANTARES. This hope turned
out to be illusory. Neither did the observations of GeV and TeV gamma rays in the
last two decades support higher flux expectations, nor has any of these detectors seen a signal indication with more than 3σ significance. Therefore, the detection of the first extraterrestrial high-energy neutrino sources still lies ahead. With some optimism, we may expect it within the next few years. Galactic "Pevatrons" such as those observed
in gamma rays by the Milagro detector are within reach after a few years of IceCube
data taking if the corresponding predictions are correct. Models assigning the most
energetic cosmic rays to gamma-ray bursts are challenged by recent IceCube data
(Abbasi et al., 2012) and will be more strongly scrutinized within a couple of years.
However, clear detections are all but guaranteed.
Whereas the identification of the first extraterrestrial neutrinos with IceCube has not yet been achieved, projects of similar or greater size in the Northern hemisphere are under
preparation. In 2002, an expert committee installed by the International Union of
Pure and Applied Physics (IUPAP) concluded (HENAP, 2002) that “a km3 -scale
detector in the Northern hemisphere should be built to complement the IceCube
detector being constructed at the South Pole”. Following this recommendation, the
Mediterranean neutrino telescope groups have formed the KM3NeT collaboration
to prepare, construct and operate such a device. A design study from 2006 to 2009
resulted in a Conceptual Design Report (CDR) (Bagley et al., 2008) and a Technical
Design Report (TDR) (Bagley et al., 2010). At present, the project is in a Preparatory
Phase and envisages installing a detector with a volume of 6 km³ from 2014 on. The total investment cost is estimated to be around 225 MEuro. A top view of a possible detector configuration consisting of two blocks of 3 km³ each is sketched in Fig. 9.21.
In Russia, the Baikal Collaboration plans the stepwise installation of a kilometer-
scale array in Lake Baikal, the Gigaton Volume Detector, GVD (Aynutdinov et al.,
2009). Since the presently planned size of half a cubic kilometer is considered no longer sufficient, a three times larger array is being studied, as sketched at the bottom of Fig. 9.21. Note that, due to the shallower depth of Lake Baikal, the height of GVD will be smaller than that of IceCube and KM3NeT.
The realization of these projects depends on several factors. First of all, Ice-
Cube results will play a strong role. Secondly, future gamma-ray data must provide
stronger indications that the observed gamma rays are pion-decay counterparts of
neutrinos and not only the result of inverse Compton scattering. Last but not least,
the considerable funding must be found.
Missing or marginal evidence for sources from IceCube may have various conse-
quences. If one is going to continue along the avenue of detectors which explore the energy range most characteristic for GRBs and AGNs, one has to envisage an order-of-
magnitude step in sensitivity, i.e. beyond what is presently scheduled by KM3NeT
and GVD.
The second option would be an even larger leap in size. It would address ener-
gies above 100 PeV with the help of new technologies like radio or acoustic detec-
tion and envisage 100–1 000 cubic kilometers of instrumented volume. This option
might still have sensitivity to neutrinos from AGN jets but would also cover well
the energy range of neutrinos from cosmic ray interactions with the 3-Kelvin mi-
crowave background. In contrast to optical detectors, the new-technology detectors are still in the R&D phase and lack a natural calibration source such as the atmospheric neutrinos available to optical detectors.
The third option would define, at least for the time being, an end to the search for
neutrinos from cosmic accelerators. It would focus on optical detection with small
spacing optimized to investigate oscillations with accelerator neutrinos (Mediter-
ranean Sea) and atmospheric neutrinos, or, even more ambitiously, to study supernova bursts beyond our own Galaxy or even proton decay.
Taking all this together, we may be close to a turning point. We have made a factor-of-a-thousand step in sensitivity compared to a dozen years ago. This is far more than the traditional factor of ten which so often led to the discovery of new phenomena (Harwit, 1981). For instance, looking beyond our own field, the prospects for discovery had not been rated highly before the launch of the first X-ray rocket in 1962, or before the detection of the Crab Nebula in TeV gamma rays in 1989. History has
told another story, as we know today. The same may be the case for high-energy
neutrino astronomy. The journey is not yet finished!
References
Abbasi, R., et al. (IceCube Coll.): Phys. Rev. D79, 062001 (2009). arXiv:0809.1646
Abbasi, R., et al. (IceCube Coll.): Phys. Rev. D83, 012001 (2011a). arXiv:1010.3980
Abbasi, R., et al. (IceCube Coll.): Astron. Astrophys. 535, A109 (2011b). arXiv:1108.0171
Abbasi, R., et al. (IceCube Coll.): Neutrinos challenge gamma ray burst origin of cosmic rays.
Nature 484, 351 (2012)
Achar, C., et al.: Phys. Lett. 18, 196 (1965)
Ackermann, M.: Searches for signals from cosmic point-like sources of high energy neutrinos in 5
years of AMANDA-II data. Ph.D. thesis, Humboldt University Berlin (2006)
Ackermann, M., et al. (AMANDA Coll.): J. Geophys. Res. 111, D13203 (2006)
Ageron, M., et al. (ANTARES Coll.): Nucl. Instrum. Methods A 581, 695 (2007)
Ageron, M., et al. (ANTARES Coll.): Nucl. Instrum. Methods A 656, 11 (2011). arXiv:1104.1607
Aggouras, G., et al. (NESTOR Coll.): Nucl. Instrum. Methods A 552, 420 (2005a)
Aggouras, G., et al. (NESTOR Coll.): Astropart. Phys. 23, 377 (2005b)
Aguilar, J.A., et al. (ANTARES Coll.): Astropart. Phys. 34, 652 (2011)
Ahrens, J., et al. (AMANDA Coll.): Nucl. Instrum. Methods A 524, 169 (2004a). arXiv:astro-ph/
0407044
Ahrens, J., et al. (IceCube Coll.): Astropart. Phys. 20, 507 (2004b). arXiv:astro-ph/0305196
Amram, P., et al. (ANTARES Coll.): Astropart. Phys. 13, 127 (2000)
Amram, P., et al. (ANTARES Coll.): Astropart. Phys. 19, 253 (2003). arXiv:astro-ph/0206454
Andres, E., et al. (AMANDA Coll.): Nucl. Phys. Proc. Suppl. 91, 423 (2001). arXiv:astro-ph/
0009242
ANTARES homepage: http://antares.in2p3.fr
Antonioli, P., et al.: New J. Phys. 6, 114 (2004). arXiv:astro-ph/0406214
Askebjer, P., et al. (AMANDA Coll.): Science 267, 1147 (1995)
Aslanides, E., et al. (ANTARES Coll.): A deep sea telescope for high-energy neutrinos.
arXiv:astro-ph/9907432 (1999)
Aynutdinov, V., et al. (Baikal Coll.): Astropart. Phys. 25, 140 (2006a). arXiv:astro-ph/0508675
Aynutdinov, V., et al. (Baikal Coll.): Nucl. Instrum. Methods A 567, 433 (2006b). arXiv:astro-ph/
0609743
Aynutdinov, V., et al. (Baikal Coll.): Nucl. Instrum. Methods A 602, 227 (2009). arXiv:0811.1110
Babson, E., et al. (DUMAND Coll.): Phys. Rev. D 42, 3613 (1990)
Bagduev, R., et al. (Baikal Coll.): Nucl. Instrum. Methods A 420, 138 (1999). arXiv:astro-ph/
9903347
Bagley, P., et al. (KM3NeT Coll.): Conceptual design report (2008). isbn:978-90-6488-031-5.
Available from: www.km3net.org
Bagley, P., et al. (KM3NeT Coll.): Technical design report (2010). isbn:978-90-6488-033-9. Avail-
able from: www.km3net.org
Balkanov, R.V., et al. (Baikal Coll.): Astropart. Phys. 12, 75 (1999). arXiv:astro-ph/9705244
Belotti, E., Laveder, M.: In: Proc. 5th Int. Workshop on Neutrino Telescopes, Venice, p. 275 (1993)
Berezinsky, V.: In: Proc. Int. Workshop on Neutrino Telescopes, Venice, p. 125 (1990)
Bezrukov, L.B., et al. (Baikal Coll.): In: Proc. XI. Conf. on Neutrino Physics and Astrophysics,
Nordkirchen, Germany, p. 550 (1984)
Bezrukov, L.B., et al.: In: Proc. 2nd Int. Symp. Underground Physics-87, Baksan Valley, USSR,
p. 230 (1987)
Blondeau, F. (for the ANTARES Coll.): Prog. Part. Nucl. Phys. 40, 413 (1998)
Bosetti, P., et al. (DUMAND Coll.): DUMAND II: Proposal to construct a deep-ocean laboratory
for the study of high energy neutrino astrophysics and particle physics. Tech. Rep. HDC-2-88,
Hawaii DUMAND Center. University of Hawaii (1988)
Capone, A., et al. (NEMO Coll.): Nucl. Instrum. Methods A 602, 47 (2009)
Deneyko, A.O., et al.: In: Proc. 3rd Int. Workshop on Neutrino Telescopes, Venice, p. 407 (1991)
Domogatsky, G.V., et al.: In: Proc. XII. Conf. on Neutrino Physics and Astrophysics, Sendai, Japan,
p. 737 (1986)
Eberl, T. (for the ANTARES Coll.): Prog. Part. Nucl. Phys. 66, 457 (2011)
Feinstein, F. (for the ANTARES Coll.): Nucl. Phys. Proc. Suppl. 70, 445 (1999)
Greisen, K.: Annu. Rev. Nucl. Part. Sci. 10, 63 (1960)
Halzen, F.: Ice fishing for neutrinos (1995). Available from: http://icecube.berkeley.edu/amanda/
ice-fishing.html
Halzen, F.: Antarctic dreams (1998). Available from: http://www.exploratorium.edu/origins/
antarctica/tools/dreams1.html
Halzen, F., Hooper, D.: Astropart. Phys. 23, 537 (2005). arXiv:astro-ph/0502449
Halzen, F., Learned, J.G.: In: Proc. 5th Int. Symp. on Very High-Energy Cosmic-Ray Interactions,
Lodz, Poland (1988)
Harwit, M.: Cosmic Discovery. Basic Books, New York (1981)
IceCube homepage: http://www.icecube.wisc.edu/
Katz, U., Spiering, C.: High-energy neutrino astrophysics: status and perspectives. Prog. Part. Nucl.
Phys. 67, 651 (2012). arXiv:1111.0507
Kotzer, P. (ed.): DUMAND-75, Proc. 1975 Summer DUMAND Study, Western Washington State
College, Bellingham (1976)
Kowalski, M.: J. Cosmol. Astropart. Phys. 0505, 010 (2005). arXiv:astro-ph/0505506
Krishnaswamy, M., et al.: Proc. R. Soc. Lond. 323, 489 (1971)
Learned, J. (ed.): DUMAND-1979. Proc. of Khabarovsk and Lake Baikal Summer Workshops
(1979)
Lowder, D., et al.: Nature 353, 331 (1991)
Markov, M.A.: In: Proc. 10th ICHEP, Rochester, p. 578 (1960)
Markov, M., Zheleznykh, I.: Nucl. Phys. 27, 385 (1961)
NEMO homepage: http://nemoweb.lns.infn.it
NESTOR homepage: http://www.nestor.noa.gr/
Reimer, A., Böttcher, M., Postnikov, S.: Astrophys. J. 630, 186 (2005). arXiv:astro-ph/0505233
Reines, F.: Annu. Rev. Nucl. Part. Sci. 10, 1 (1960)
Reines, F.: In: Peterson, V. (ed.) Proc. 30th Int. Conf. on Neutrino Physics and Astrophysics, vol. 2,
p. 496 (1981)
Reines, F., et al.: Phys. Rev. Lett. 15, 9 (1965)
Resvanis, L.K., et al. (NESTOR Coll.): Nucl. Phys. Proc. Suppl. 35, 294 (1994)
Roberts, A. (ed.): DUMAND-76, Proc. 1976 Summer DUMAND Workshop, University of Hawaii,
Honolulu (1977)
Roberts, A.: Rev. Mod. Phys. 64, 259 (1992)
Roberts, A., Wilkins, G. (eds.): DUMAND-78, Proc. 1978 Summer DUMAND Study (1978)
Sokalski, I., Spiering, C. (eds.) (Baikal Coll.): The Baikal Neutrino Telescope NT-200. Tech. Rep.
Baikal-92-03, DESY/INR (1992)
Stecker, F.W.: Phys. Rev. D 72, 107301 (2005). arXiv:astro-ph/0510537
Svoboda, R., et al.: Astrophys. J. 315, 480 (1987)
Taiuti, M., et al. (NEMO Coll.): Nucl. Instrum. Methods A 626, S25 (2011)
The High Energy Neutrino Astrophysics Panel: High energy neutrino observatories (2002). Avail-
able from: www.lngs.infn.it/lngs/infn/contents/docs/pdf/panagic/henap2002.pdf
van Aller, G., et al.: IEEE Trans. Nucl. Sci. NS-30, 1119 (1983)
Waxman, E., Bahcall, J.: Phys. Rev. D 59, 023002 (1999). arXiv:hep-ph/9807282
Weekes, T.C., et al.: Astrophys. J. 342, 379 (1989)
Wischnewski, R., et al.: In: Proc. 3rd Int. NESTOR Workshop, Pylos, Greece, p. 213 (1993)
Zheleznykh, I.: Int. J. Mod. Phys. A 21S1, 1 (2006)
Chapter 10
From Waves to Particle Tracks and Quantum
Probabilities
Brigitte Falkenburg
Later I was asked to sit and count the tic-tac of the cosmic rays
and only then did I accept for the first time the existence of
cosmic rays. Until then I had been convinced that they were an
invention of Bruno and other scientists
Nora Lombroso (Rossi, 1990, 167)
10.1 Introduction
This chapter is on measurement theory. It investigates how to measure cosmic rays
and how one came to know about their nature. In particular, it explains the laws in-
volved in the methods of data analysis in a ‘genetic’ account that aims at making the
growth of knowledge transparent. In the history and philosophy of physics it is well
known that the experimental data are theory-laden. Of course, without a detailed
theory of the measuring devices and the ways in which they measure the phenom-
ena, no precise experimental results are available. This measurement theory should
be well-confirmed and independent of the theory which is under test, in order to
avoid circularity. But the way in which such an independent measurement theory is
developed, empirically confirmed, and extended in the course of time is a neglected
topic in the history and philosophy of physics.
The existing historical accounts of cosmic ray studies and particle physics omit
this issue as a matter of tacit background knowledge. When physicists describe
the history of their discipline (Pais, 1986; Riordan, 1987), they more or less presup-
pose this background knowledge and focus on discoveries and new theories. Cur-
rent historians of science, in turn, stand in the tradition of Thomas S. Kuhn (1962).
They mainly focus on the schools, skills and pragmatic aspects of scientific practice,
on “external”, social factors (Pickering, 1984), or on the historical aspects of what
scientists consider to be objective knowledge (Daston, 2000; Daston and Galison,
2007). In general these approaches miss the multiple ways in which theory and ex-
periment are partially interwoven, partially independent, giving rise to a growing
The first particles ever detected in subatomic physics were the electron, the α-
particle, and the photon. It was not so easy, however, to establish evidence of the
particle nature of the electron and the photon. The meandering between waves and
particles began with cathode rays and the search for a measurement of isolated elec-
tron charges and ended up in the wave–particle duality of the photon and all other particles.
The discovery of the electron is usually attributed to J.J. Thomson and dated to
the year 1897, when the debate about the wave or particle nature of cathode rays
was still ongoing. Thomson measured the ratio e/m as a property of cathode rays, but he localized neither the mass nor the charge associated with it. It was a long way from his e/m measurement to Millikan's measurement of isolated electron
charges. Whoever did not yet believe in 1897 that a massive fundamental charge
unit existed would not be convinced by Thomson's measurement result either. The
measurement did not in any way test the hypothesis that cathode rays consist of
single massive charged particles. It just confirmed that cathode rays are deflected by
the Lorentz force and hence are carriers of mass and charge. Although this result
was a strong indication for the existence of the electron, many physicists did not
regard Thomson’s conclusion as sufficient. This was in particular true of defenders
of energetism like Ostwald, or the empiricist physicist and philosopher Mach.
Additional measurements were needed in order to identify the electrons as parti-
cles, i.e., as single carriers of charge and mass, rather than measuring only the rays
that carry these properties. Still in 1897 and in Thomson’s laboratory, Townsend
made the first step towards such a measurement. He succeeded in using the prin-
ciple of the cloud chamber (developed by Wilson in 1895) for localizing single
charge carriers unsharply, and he determined their charges independently of mass.
Townsend determined the charge of the single condensation droplets in the cloud
chamber from the total charge of the steam and the number of droplets per vol-
ume. The total charge of the steam was measured electro-statically, and the number
of droplets was calculated from the weight of the cloud and the mean weight of the
single droplets (Townsend, 1897; Millikan, 1917, 43–47). As opposed to Thomson’s
e/m measurement, the force law here is not only applied to invisible particles but to
the macroscopic properties of the droplet cloud. After 1898, Thomson carried out
measurements based on this principle, too (Thomson, 1899), even though in a more
indirect way and adding considerably to the experimental uncertainties, as Millikan
stressed (Millikan, 1917, 47–52).
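The arithmetic behind this droplet method is simple. Schematically (leaving aside the corrections discussed by Millikan), the mean charge per droplet follows from two macroscopic measurements:

\[
\bar q \;=\; \frac{Q}{N}, \qquad N \;=\; \frac{M_{\mathrm{cloud}}}{\bar m_{\mathrm{droplet}}},
\]

where Q is the electrostatically measured total charge of the steam, M_cloud the weight of the cloud, and m̄_droplet the mean weight of a single droplet. Only macroscopic quantities enter the measurement, while the resulting value q̄ is attributed to the individual, unsharply localized charge carriers.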
It is not the e/m determination from cathode rays but only such a charge mea-
surement from condensation droplets that assigns a characteristic value of charge to
entities identified one by one in a detector. However, the early measurements still
allowed doubt as to whether every single condensation droplet carries an elemen-
tary charge unit, or at least an integer multiple of it. Only around 1910 were these
doubts largely dispelled by Millikan’s oil droplet experiments. In lengthy and dif-
ficult measurements, Millikan determined the force which an electric field exerts
on single charged oil droplets (Millikan, 1911). When he published his value for e,
he emphasized that it resulted from the first method for measuring the charge of individual charge carriers (Millikan, 1911). Millikan himself, however, did not see the
principal benefit of his results in the experimental validation of isolated elementary
charges, but rather in the precision measurement for e made possible by his method.
Indeed he was so eager to have a high precision measurement that he even arbitrar-
ily omitted some of the droplet events he had measured (as is known today from
a careful analysis of his notebooks; see Franklin, 1986, 140–157). In this way, and
probably in order to keep his measurement error small, he simply violated the rules
of good scientific practice. However, at this time the existence of the electron and
the atomistic constitution of matter were well-established due to an abundance of
experimental evidence from other areas of physics. Indeed, Millikan’s results were
confirmed by many later experiments without any need of awkward data selection.
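The principle of Millikan's measurement can be captured by a simple force balance (a minimal sketch; the actual analysis included the Stokes drag of the moving droplet and further corrections): a droplet of mass m carrying the charge q is held in equilibrium by a vertical electric field E when

\[
qE \;=\; mg \quad\Longrightarrow\quad q \;=\; \frac{mg}{E}.
\]

With the droplet mass determined from its motion in the absence of the field, the charge of each individual droplet could be measured; the measured values cluster at integer multiples of a smallest unit, the elementary charge e.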
A few years after Thomson’s e/m measurement, the particle nature of α-rays
became evident. As Crookes and others discovered in 1903, a screen laminated with
zinc sulfide starts to phosphoresce in total darkness when it is exposed to α-rays.
Observed with a magnifying glass, this glow could be resolved into a variety of
single light flashes. Later, Rutherford, Chadwick and Ellis commented on this phe-
nomenon as follows in their textbook on radioactivity:
On viewing the surface of the screen with a magnifying glass, the light from the screen is
seen not to be distributed uniformly but to consist of a number of scintillating points of light
scattered over the surface and of short duration. Crookes devised a simple apparatus called
a ‘spinthariscope’ to show the scintillations. A small point coated with a trace of radium
is placed several millimeters away from a zinc sulfide screen which is fixed at one end of
a short tube and viewed through a lens at the other end. In a dark room the surface of the
screen is seen as a dark background dotted with brilliant points of light which come and
go with great rapidity. This beautiful experiment brings vividly before the observer the idea
that the radium is shooting out a stream of projectiles each of which causes a flash of light
on striking the screen. (Rutherford et al., 1930, 54–55)
From 1906, Rutherford and his assistants carried out their famous scattering experi-
ments with α-rays. Rutherford measured the ratio E/M of charge E and mass M of
the α-rays (Rutherford et al., 1930, 41–46), and then he proceeded to use the scintillation method to measure the scattering angle of single α-particles. The discovery
of backward scattering, the inference to the atomic nucleus, and the experimental
check of Rutherford’s famous scattering formula were later based on this method,
too (Geiger and Marsden, 1913; Trigg, 1971). It turned out that the α-particles are
helium nuclei.
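For reference, Rutherford's scattering formula (stated here in modern notation and Gaussian units) predicts for α-particles of charge ze and kinetic energy E scattered off a nucleus of charge Ze the differential cross-section

\[
\frac{d\sigma}{d\Omega} \;=\; \left(\frac{zZe^{2}}{4E}\right)^{2} \frac{1}{\sin^{4}(\theta/2)} .
\]

It was the counting of scintillation flashes as a function of the scattering angle θ by Geiger and Marsden that confirmed this angular dependence, including the rare events at large angles.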
A decisive step towards establishing the atomistic structure of matter was made
in 1908, when Perrin’s measurements confirmed Einstein’s 1905 theory of Brow-
nian motion (Perrin, 1909). Now, the atomistic hypothesis was accepted even by
defenders of energetism like Ostwald (Blackmore, 1972, 217; Nye, 1972).
The story shows that in order to establish the particle nature of electrons and
α-rays, two kinds of evidence had to come together: the empirical observation of a
local event, and the unambiguous attribution of particle properties such as mass and
charge to this event. This requirement is in accordance with a twofold, causal and
mereological particle concept (Falkenburg, 2007). According to the causal particle
concept, particles in contradistinction to waves cause local events in a measurement
device. This was first directly observed by means of the scintillation method. The
mereological particle concept dates back to ancient atomism and its renaissance in
Galileo’s and Newton’s beliefs. (“Mereology” is the logic of wholes and parts; see
Simons, 1987.) It means that particles are the parts of matter, and that their dynamic
properties add up to the properties of macroscopic matter according to well-defined
sum rules for conserved quantities such as mass and charge.
So far, so good. When the cosmic rays were discovered by Victor Hess in 1912,
no one thought that they might have mass and charge. Until the early 1930s, they
were supposed to be gamma rays, i.e., electromagnetic waves rather than charged
particles (see Chap. 2 and below).
However, it turned out that electromagnetic waves have particle aspects, too. Due to
quantum theory, the classical distinction between particles and waves no longer held.
Einstein’s light quantum hypothesis of 1905 seemed to step backwards to Newton’s
long-refuted atomistic theory of light, a fact that puzzled no one more than Einstein
himself. Indeed, the physics community did not accept Einstein’s light quantum hy-
pothesis of 1905 for many years. Only after the observation made in 1919 that light is bent by gravity, as predicted by general relativity, did Einstein's public reputation increase so enormously that in 1921 he was awarded the Nobel prize, not for relativity but for his light quantum hypothesis, because it made the bridge to Bohr's
atomic model (Wheaton, 1983, 279–281). The Nobel prizes of 1921 and 1922 were
given to Einstein and Bohr at the same ceremony in 1922.
Only afterwards was the photon hypothesis definitely confirmed by experimental
proof of the Compton effect (Wheaton, 1983, 282–286). For the experimental confir-
mation the energy–momentum conservation of relativistic kinematics was decisive.
It had a significance similar to that of the Lorentz force for Thomson's e/m measurement.
It explained the measured decrease in frequency of scattered X rays in terms of the
momentum transfer of the photon to an electron, i.e., in terms of a particle property.
From Einstein’s light quantum hypothesis (Einstein, 1905) up to the experimental
validation of the photon by Bothe and Geiger (1925) almost two decades passed. In
Einstein’s paper of 1905, the light quantum hypothesis had primarily a heuristic
value, namely its unifying power. It explained in a uniform way the photo effect
and several other experimental phenomena observed in the interaction of light and
matter.
That light can cause a photocurrent had been known since the end of the 19th century. But the way in which the photocurrent depends on the frequency and intensity of the incoming light was not understood. Einstein explained the threshold of the photocurrent in terms of light quanta and the energy needed to release an electron.
In 1914, Millikan tested this theoretical explanation experimentally and confirmed
it with high precision (Wheaton, 1983, 238–241; Trigg, 1971). But the light quan-
tum hypothesis was in conflict with the wave theory of light, and Millikan’s results
were not taken as sufficient experimental evidence. At that time, the light quantum
hypothesis lacked both ingredients for indicating a particle. It was neither based on
particles properties that could be measured nor did it correspond to any local events
such as the observable scintillation flashes caused by α-particles. In the years after
1905, it was only considered to be a well-confirmed phenomenological law which
lacked theoretical understanding, like the other laws of early quantum theory such as
Planck’s law of black-body radiation and the quantum postulates of Bohr’s atomic
model.
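In modern notation (a standard formulation rather than a quotation from Einstein's paper), this phenomenological law states that the maximum kinetic energy of the released electrons is

\[
E_{\mathrm{kin}}^{\max} \;=\; h\nu - W ,
\]

where W is the work needed to release an electron from the metal; below the threshold frequency ν₀ = W/h no photoelectrons are emitted, however intense the light. It is this linear dependence on the frequency ν that Millikan confirmed in 1914.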
In 1916, Einstein extended his light quantum hypothesis of 1905 to a first, statis-
tically well-founded theory of the absorption and emission of light (Einstein, 1917).
This theory aggravated rather than cured Einstein’s life-long discomfort about the
probabilistic nature of quantum processes. However, it laid the grounds for the ac-
ceptance of the photon hypothesis by enlarging its empirical content in two decisive
respects. On the one hand, it connected the light quantum hypothesis to Bohr’s quan-
tum postulates and made it possible to derive the frequency of the radiative transi-
tions in the hydrogen atom. On the other hand, Einstein now attributed to the photon, in addition to its energy E = hν, a momentum p = ℏk, where k is the wave vector of an electromagnetic wave with wave number k = 2πν/c. In this way, the energy–momentum relation E² = p²c² of relativistic kinematics became applicable to it. The photon hypothesis was thus transformed into the theory of a relativistic particle of zero rest mass with an inertial mass m = hν/c², which may give a recoil to a massive particle.
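To make the connection explicit (a standard textbook summary, not a quotation from Einstein): for the photon, the relativistic relation E² = p²c² + m²c⁴ holds with m = 0, since

\[
E = h\nu, \qquad p = \hbar k = \frac{h\nu}{c} \quad\Longrightarrow\quad E = pc .
\]

Energy and momentum conservation can therefore be applied to the scattering of a photon off an electron exactly as to a collision of two relativistic particles.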
The breakthrough for the photon hypothesis came in 1922, when Compton ap-
plied relativistic kinematics to the scattering of gamma radiation and electrons. His
quantitative prediction of the effect fitted surprisingly well with the existing data
(Compton, 1923; Debye, 1923). As in the case of the discovery of the electron, the
attribution of a typical particle property to the photon was a milestone toward accept-
ing the photon hypothesis, but it was not yet regarded as sufficient. Classical elec-
trodynamics was at stake, and Bohr, Kramers, and Slater invented the BKS theory
in order to save it (Bohr et al., 1924). It predicted violations of energy–momentum
conservation for individual subatomic processes and saved the classical picture only
at the probabilistic level (thus paving the way for the probabilistic interpretation of
quantum mechanics). To establish the photon hypothesis against the BKS theory,
the localization of single photons of energy hν was needed. Unlike the measurement of single elementary electric charge units by Millikan, it was not long in coming.
Bothe and Geiger proved in 1925 that relativistic energy–momentum conservation
is not only valid in the time average for the scattering of light at electrons but also
for single scattering processes. Their experiment tested energy–momentum conser-
vation in the individual case, showing by means of a coincidence counter that in
the Compton effect every single photon is actually correlated with a recoil electron
(Bothe and Geiger, 1925). The coincidences were accepted empirical evidence for
the effects of individual photons.
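The quantitative content of Compton's argument can be stated in one line (in modern notation, as it follows from the relativistic energy–momentum conservation that Bothe and Geiger tested for the individual case): the wavelength of the scattered radiation is shifted by

\[
\lambda' - \lambda \;=\; \frac{h}{m_{e}c}\,\bigl(1 - \cos\theta\bigr),
\]

where θ is the scattering angle and h/(m_e c) ≈ 2.4 × 10⁻¹² m is the Compton wavelength of the electron. The corresponding decrease in frequency is exactly what the photon picture attributes to the recoil given to the electron.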
But this is not the whole story. There are some parallels with Millikan’s e mea-
surement about 15 years after the electron hypothesis found support by Thom-
son’s measurement; and with the confirmation of the neutrino hypothesis by in-
direct β-decay, about two and a half decades after establishing this hypothesis.
Since the early days of quantum theory, several semi-classical theories of the in-
teraction of light and matter have been developed which quantize the atom but
keep classical electromagnetic radiation. They do indeed explain the experimental
phenomena that were taken as evidence for the photon, including the photoelec-
tric effect and the Compton effect (Greenstein and Zajonc, 1997, 23–26). The most
Defenders of the wave picture of cosmic rays like Millikan, however, still resisted applying quantum theory to them, whereas defenders of the particle picture interpreted the charged particles in terms of quantum electrodynamics. In the
late 1920s, Millikan still maintained a classical picture of the interactions of atoms
with matter and believed that cosmic rays are γ -rays, i.e., electromagnetic waves.
He defended an atom-building theory according to which these γ -rays stem from
the synthesis of atoms. To this “birth cry of atoms” theory, he adhered for religious
reasons, and he explained it in intuitive classical terms rather than within the frame-
work of quantum mechanics (Galison, 1987, 80–89). In opposition to Millikan’s
views, Bothe and Kolhörster (1929) invented the coincidence measurement method
in order to prove that cosmic rays consist of charged particles, and in 1930 Rossi
substantially improved their method (see Rossi, 1990). In 1932, Anderson (who was
a member of Millikan’s group) discovered the positron and turned from Millikan’s
views to the interpretation of cosmic rays in terms of the Dirac equation. In
1933, the New York Times even published a sharp controversy between Millikan and
Compton (see Galison, 1987, 93): The classical physicist and Nobel prize winner
Millikan insisted that cosmic rays are electromagnetic waves. The quantum physi-
cist and Nobel prize winner Compton on the contrary argued that cosmic rays are
charged particles.
At that time, however, the particle picture used in cosmic ray studies remained associated with the model of a classical trajectory, as if there were no quantum theory. According to quantum mechanics, there is no continuous track. The
appearance of a particle track stems from a sequence of discrete position measure-
ments. It is just due to the repeated localization of a conserved quantity of mass
and charge. Indeed, non-relativistic quantum mechanics predicts the appearance of
a quasi-classical track with extremely high precision in terms of ionization events
that occur with extremely high probability along the corresponding classical trajec-
tory (Mott, 1929; Heisenberg, 1930; see below). In view of these quasi-classical
tracks recorded by the Wilson chamber, the traditional term “particle” was kept and
adopted for the discipline of particle physics that emerged from the cosmic ray stud-
ies of the 1930s and 1940s. But after almost a century of philosophical debates about
the foundations of quantum mechanics and its probabilistic interpretation, it has to
be emphasized that there is no (relativistic) quantum theory of individual particles
that describes an individual, quasi-classical track. The discovery of the positron,
which was a first and crucial confirmation of quantum electrodynamics, gave rise to
a less naive picture of the quantum processes that may happen along a particle track.
It is worth looking into the details of the phenomenological analysis and the
quantum mechanics of particle tracks, on which the data analysis of particle and
astroparticle physics rests up to the present day. Today, after 100 years of quantum
theory, cosmic ray studies and particle physics, the meaning of the term “particle” is
predominantly operational. It is based on the detection of repeated “clicks”, particle
tracks, and local scattering events measured by particle detectors.
The Geiger–Müller coincidence counters and the Wilson chamber paved the way
to modern particle physics (see Chap. 2). The efficiency of the cloud chamber was
improved by means of triggering it with coincidence counters, and starting with
the discovery of the positron in 1932, many new kinds of particle were found in
the tracks of cosmic rays. In order to determine their mass and charge from the
measured tracks, it became crucial to refine the measurement methods.
The most important marks for identifying the particles were the track curvature
in the magnetic field, the range of the particles in the vapor of the cloud chamber,
and the ionization density or density of the measurement points. The latter is a mea-
sure of the frequency of the interactions of the particle with the hydrogen atoms of
the Wilson chamber, and hence a measure for the ionization degree which in turn in-
dicates the mass of an unidentified particle as compared to the mass of well-known
particles. For α-particles or protons, the ionization degree is substantially larger than
for electrons, as was known from the first photos of particle tracks since 1912. The
tracks of α-particles, protons, and electrons in the Wilson chamber look significantly
different. For α-rays only the tracks and no condensation droplets are seen on the
cloud chamber photographs, while for β-rays the individual measurement points of
the tracks can be clearly distinguished:
Owing to the density of the ionization, the path of the α-particle shows as a continuous line
of water drops. A swift β-particle, on the other hand, gives so much smaller ionization that
the individual ions formed along its track can be counted. (Rutherford et al., 1930, 57)
The ionization density depends on the velocity of a particle of given mass and charge
and gives no hint of the absolute mass value (Skobeltzin, 1985, 114). The same is
true of the Lorentz force and the curvature of a charged particle due to a magnetic
field. The range of a particle in matter gives some more information. The range of
a particle (or the length of its track) is related to its energy loss during its passage
through the detector until being stopped or absorbed. A half-empirical law, the so-
called energy–range relation, was already formulated in the early days of particle
physics. It was based on the scattering experiments with particles from radioactive
radiation sources, as performed in Rutherford’s laboratory. The relation connects the
kinetic energy of a massive charged particle to its range (or track length) in different
materials (Rutherford et al., 1930, 294).
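Schematically, and in modern units rather than in the historical notation, the two most important quantitative relations behind these track signatures are the curvature–momentum relation for a particle of charge ze in a magnetic field B and the range integral obtained from the energy loss per unit path length:

\[
p\,[\mathrm{GeV}/c] \;\approx\; 0.3\, z\, B[\mathrm{T}]\, \rho[\mathrm{m}],
\qquad
R(E) \;=\; \int_{0}^{E} \frac{dE'}{-\,dE'/dx} ,
\]

where ρ is the radius of curvature of the track and −dE′/dx the energy loss per unit path length in the traversed material. Curvature thus measures momentum and range measures energy; combined with the ionization density, they constrain the mass of the particle.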
These laws and relations (which were half classical, half empirical) made up the
measurement theory of particle physics and cosmic ray studies in the 1930s and
1940s. They made it possible to give a rough estimate of the mass of a charged particle from the track signature. In this way, it was tricky though not too difficult to discriminate between the electron and the proton mass. By doing so, Anderson discovered
the positron in 1932.
But as soon as particles of medium mass such as the muon or the pion came into
play, a rough mass estimation no longer helped to identify the particles. Only a
detailed theory of what happens to charged particles during their passage through
matter would have helped to make the mass measurement more precise. Such a the-
ory was in principle available since Bethe’s seminal work on the scattering processes
of charged particles in matter (Bethe, 1930) and its extension to quantum electro-
dynamic processes such as bremsstrahlung and pair creation. After the discovery
of the positron, quantum electrodynamics got much credit, and many calculations
were performed. But quantum electrodynamics did give rise to divergences beyond
first-order perturbation theory, and its first-order predictions for electron scattering
drastically disagreed with the cosmic ray measurements.
Hence, for two decades the theoretical predictions were elaborated and confirmed
in a zig-zag between cosmic ray measurements and phenomenological calculations
based on quantum electrodynamics. This zig-zag had to deal with the puzzles of
quantum electrodynamics and particle identification mentioned above, which are ex-
plained in the next section. In addition, it had to bridge a certain mismatch between
the quantum theory of scattering and the semi-classical model of an individual par-
ticle track. The physicists knew that particle tracks are due to ionization processes
described by the quantum mechanics of scattering, on the one hand, but they had to
use more or less classical methods in order to interpret the tracks, on the other hand.
With regard to the energy loss due to ionization or “collision loss”, this problem was
clearly stated in Rossi’s textbook of 1952 (Rossi, 1952, 29):
The energy loss of a charged particle in matter is a statistical phenomenon because the
collisions that are responsible for this loss are independent of each other. Thus particles of a
given kind and a given energy do not all lose exactly the same amount of energy traversing a
given thickness of material. The quantity k_col(E) defined as 'collision loss' [...] represents
only an average value.
The same is obviously true of the range of charged particles in matter, which in the
semi-classical model of an individual track derives from the energy loss along the
track. For non-relativistic particles, however, the statistical effects are low, as Rossi
also stated (Rossi, 1952, ibid.):
The statistical fluctuations in the energy loss by collision are comparatively small because
the average transfer of energy in each individual collision process is small and the number
of collisions necessary to cause any appreciable energy change is correspondingly large.
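For orientation, the average collision loss entering this semi-classical treatment has, in its simplest non-relativistic form (a schematic version of Bethe's result, in Gaussian units; the full expression with relativistic corrections is given in Rossi, 1952), the structure

\[
-\Bigl\langle \frac{dE}{dx} \Bigr\rangle \;=\; \frac{4\pi e^{4} z^{2} n_{e}}{m_{e} v^{2}}\,
\ln\frac{2 m_{e} v^{2}}{I} ,
\]

where z and v are the charge number and velocity of the passing particle, n_e the electron density of the medium, and I its mean ionization potential. The essential point is that this is an average over many small, statistically independent energy transfers.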
For relativistic particles, however, the nice correspondence of the quantum mechan-
ics of scattering to the classical picture of a track breaks down. Quantum electro-
dynamic processes such as bremsstrahlung or pair creation give rise to an abrupt
drastic energy loss of a charged particle. The corresponding track signatures are
observable kinks, the emergence of a pair of tracks of opposite curvature, or the ap-
pearance of a particle shower. For these processes, the correspondence of the quantum mechanical scattering to a classical trajectory is lost, and with it the validity of the semi-classical explanation of an individual particle track.
It is therefore worth taking a closer look at Mott's and Bethe's semi-classical model of particle tracks. This model is still
in use for data analysis up to the present day. It has first been employed in cosmic
ray studies, then in the high-energy scattering experiments of particle physics, and
finally in recent astroparticle physics. Given its limitations in the relativistic domain, it is of course no longer applied to individual particle tracks in current high-energy physics and astroparticle physics, but used for statistical data analysis.
The formulas for the energy loss along a track are genuine, probabilistic quantum
laws. They describe the dissipation of energy in a sequence of irreversible quantum
processes. As far as these processes are observable, i.e., give rise to a particle detec-
tion or position measurement, each of them results in an irreversible change of the
momentum state of the particle. Hence, in terms of quantum mechanics the particle
states after the measurement points of one-and-the-same track do not belong to one-
and-the-same quantum ensemble. Nevertheless it became experimental practice in
particle physics to apply them to individual particle tracks, as if there were no quantum measurement problem. This is no wonder. When Mott and Bethe developed
their semi-classical model of a track, there was no quantum theory of measurement.
When von Neumann laid the foundations for it (von Neumann, 1932) and the Bohr–
Einstein debate on the foundations of quantum mechanics went on (Bohr, 1927,
1949; Einstein et al., 1935), the physicists working on cosmic rays or on quantum
electrodynamics neglected these foundational problems. They did not consider them relevant to their practice.
And they did so for good pragmatic reasons. The semi-classical model of a parti-
cle track is based on trust in Bohr’s correspondence principle, or Heisenberg’s gen-
eralized version of it. As mentioned above, Mott and Heisenberg showed in 1930
that Born’s quantum mechanics of scattering predicts particle tracks with a classical
shape. Thus, the energy loss of charged particles in matter was calculated on the
basis of a naive, quasi-classical, realistic picture of subatomic reality. As Heisen-
berg stressed in his 1930 book on quantum mechanics, the probability of α-particle
deflection due to repeated ionization of molecules in the vapor is non-zero
only if the connecting line of the two molecules runs parallel to the velocity direction of the
α-particles (Heisenberg, 1930, 53; my translation).
The corresponding calculation was carried out by Mott in 1929. According to Born’s
quantum mechanical scattering theory, the scattering is not due to an impact but due
to diffraction, i.e., it is described in a wave model. In particular, the quantum me-
chanical description of scattering lacks the classical trajectory of a deflected particle
and the corresponding classical impact parameter. The squared wave function pre-
dicts only the probability (and hence the relative frequency) of particle detections at
a certain scattering angle (Born, 1926a,b).
Mott calculated the probability for two subsequent collisions of an α-particle with two hydrogen atoms, resulting in the ionization of both atoms. The ionized
atoms give rise to observable measurement points. They are the core of droplets
which condense in the vapor of the Wilson chamber. The observation of a droplet
is a position measurement, whereas the observation of the particle deflection given
by straight lines drawn between the adjacent droplets is a momentum measurement.
Heisenberg showed in his 1930 book by a heuristic consideration that the uncer-
tainty relation for position and momentum holds for any ionization process along
the track (Heisenberg, 1930, 18; Engl. transl., 24). Hence, the quantum mechani-
cal explanation of the single measurement points of a particle track is in perfect
correspondence to the classical particle picture, as long as the (unobservable) path
between the position measurements is neglected. Thus, for all practical purposes it
predicts a classical particle track and supports the application of quantum mechani-
cal scattering theory to individual particle tracks.
Mott’s calculation is probabilistic and it is performed in a quantum mechanics
without measurement. According to Born’s quantum mechanics of scattering, the
wave function is diffracted at two atoms at a given distance R. Mott’s main result is
that the first two orders of perturbation theory predict a classical track. To first order
(incoherent scattering), the outgoing wave is concentrated in a cone of very small
angle behind the ionized atom, in the direction of the incoming wave. To second
order (coherent scattering at two atoms), the contribution is non-zero if and only if
both atoms lie inside that angle in the same direction (Mott, 1929; Heisenberg, 1930,
56 and Engl. transl., 75–76). Generalized to N atoms, Mott's calculation predicts the following: To first order, the scattering probability is N times the probability for incoherent scattering at a single atom. Coherent scattering at more than one atom contributes only to second, third, ..., Nth order. However, to any order the scattered
wave propagates along a classical path.
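Schematically, and only as a reading aid rather than Mott's actual formulas, the first-order result amounts to adding probabilities instead of amplitudes, where f(θ) denotes the single-atom scattering amplitude:

\[ P^{(1)}(\theta) \;\approx\; \sum_{i=1}^{N} \bigl|f_i(\theta)\bigr|^{2} \;=\; N\,\bigl|f(\theta)\bigr|^{2}, \]

whereas the coherent contributions, in which two or more atoms enter one and the same scattering amplitude, appear only at higher orders and vanish unless those atoms lie within the narrow forward cone just described.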
As noted above, Mott’s 1929 calculation is based on the unrealistic idealization
that the energy loss associated with ionization is not taken into account. The par-
ticle is described as if it did not really transfer a definite amount of energy to the
hydrogen atom when ionizing it. Although the calculation deals with the amplitudes
of inelastic collisions (and hence with the dissipative process of energy loss), it is
performed as if the momentum state of the charged particle remained unaffected by
the energy transfer to the hydrogen atom which gives rise to ionization. That is, the
charged particle which gives rise to subsequent ionization processes and observable
droplets in the vapor of the Wilson chamber is treated as if its collisions with the
hydrogen atoms were elastic. This unrealistic idealization looks reasonable once we
notice that e.g. the energy loss of an α-particle due to ionization of hydrogen atoms
can be neglected. The ionization energy of hydrogen is very small compared to the
kinetic energy of the α-particle. Therefore the momentum of the α-particle remains
practically unchanged along its track in the Wilson chamber.
In the case of a substantial amount of energy loss along a particle track, however,
the agreement of the classical and the quantum descriptions vanishes. Nevertheless,
Mott’s semi-classical model of the scattering processes along an observable track
was maintained in all later calculations of the energy loss of charged particles in
matter. This semi-classical model comes together with the following intuitive classi-
cal picture of what happens when a charged particle loses its energy along a particle
track. The particle is slowed down repeatedly by inelastic collisions with detector
atoms, the track curvature in an external magnetic field increases, and the track ends
when the particle has transferred its total kinetic energy and momentum to the de-
tector atoms. Indeed the discovery of the positron was based on this picture. In order
to identify the sign of the charge of the particles from cosmic rays, Anderson mea-
sured the flight direction by putting a lead plate into the Wilson chamber. The lead
plate caused a substantial energy loss and gave rise to an observable increase of the
curvature of particle tracks in the magnetic field.
Until the early 1930s, there was no satisfactory quantum theory of ionization.
When the exploration of cosmic rays with the cloud chamber started in the late
1920s, there was only Bohr’s classical calculation of ionization (Bohr, 1913, 1915),
which was known to give wrong results for fast particles. Its predictions differed
from the half-empirical knowledge about particle tracks accumulated until the late
1920s, they were particularly in disagreement with the semi-empirical energy–range
relation. Around 1930, the classical theory of the energy loss of charged particles
in matter was known to give some correct results on average, but to be in need of
quantum theoretical corrections and to have no validity for individual particle tracks
(Rutherford et al., 1930, 439). In turn, Mott's and Heisenberg's quantum mechanical explanation of the appearance of a quasi-classical track did not take the energy loss due to ionization into account, not to speak of bremsstrahlung and pair creation.
The quantum theory for describing the interactions of charged particles with mat-
ter, however, already existed. Born’s seminal papers on the probabilistic interpreta-
tion of quantum mechanics laid the grounds for the quantum mechanics of scatter-
ing (Born, 1926a,b). Using it in the first order of perturbation theory (today known
as Born approximation), Bethe developed the quantum theory of the passage of
charged particles through matter in 1930 (Bethe, 1930). His paper gave the first reliable calculation of a non-negligible energy loss. The results only agree with Bohr's
classical result for vanishing particle velocity v and energy loss (for a detailed dis-
cussion, see Falkenburg, 2007, 178–183). This limit of zero velocity and no energy
loss is exactly the idealized case of Mott’s calculation discussed above. Indeed, for
the energy loss along a particle track, Bohr’s correspondence principle in general
fails, as becomes evident for relativistic processes such as bremsstrahlung and pair
creation.
Bethe calculated the quantum mechanical expectation value ⟨E⟩ for the energy loss at a given atom and implemented it into Mott's semi-classical model by applying it to the scattering processes along a particle track. From a quantum mechanical point of view, however, ⟨E⟩ is the mean energy loss per atom and per incoming particle in the limit of infinitely many incoming particles (N_in → ∞). In the classical part of his semi-classical model, Bethe applied his formula for ⟨E⟩ to the subsequent scattering processes along an individual particle track, be it with or without an observable effect. Then he calculated the number of observable and unobservable inelastic collisions along a track for N atoms per volume Δx³ in a given material
(Bethe, 1930, 358). Finally, he gave the following simple expression for the mean energy loss ΔE per length Δx of matter (Bethe, 1930, 360):

ΔE/Δx = N ⟨E⟩.
Here, Bethe interprets the expectation value ⟨E⟩ as the average energy loss of a charged particle by successive scattering from many detector atoms along an individual track, normalized to the number of atoms per path Δx. Of course, Bethe discussed neither the physical interpretation of his results nor their philosophical justification. He simply suggested that ⟨E⟩ applies to the subsequent individual quantum transitions along a track, whether their results be observable or not. This was completely in the spirit of Mott's quasi-classical results.
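A minimal sketch of how such a formula is used in the semi-classical spirit described here: the product N⟨E⟩ is read as an effective mean energy loss per unit path length and subtracted step by step along an individual track. All numerical values below are hypothetical placeholders, not Bethe's numbers.

    # Hedged sketch, not Bethe's calculation: treat N*<E> as an effective mean
    # energy loss per unit path length and apply it step by step along a track.
    def energy_along_track(e0_mev, mean_loss_mev_per_cm, dx_cm):
        """Yield the remaining kinetic energy after each step of length dx_cm."""
        e = e0_mev
        while e > 0.0:
            e = max(e - mean_loss_mev_per_cm * dx_cm, 0.0)
            yield e

    # Example: a 5 MeV particle losing on average 0.5 MeV/cm, in 1 mm steps;
    # the number of steps until the energy is used up gives a crude range (~10 cm).
    profile = list(energy_along_track(5.0, 0.5, 0.1))
    print(len(profile) * 0.1, "cm")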
One of Bethe’s quantitative results was the ratio of observable to unobservable
collisions. For hydrogen, 28.5 % of all inelastic collisions give rise to ionization
(Bethe, 1930, 360), causing observable droplets in the vapor of the Wilson cham-
ber. The calculation shows that a substantial amount of the energy lost along a par-
ticle track in the Wilson chamber is indeed measured, in striking contrast to Mott’s
idealized model in which the energy loss along a track was neglected. Hence, the
situation is no longer comparable with Mott’s idealized model in which the prepara-
tion of the particle and the expectation value of the scattering results do not change
along the track.
From a strict quantum mechanical point of view, Bethe’s semi-classical model
for the energy loss of a charged particle in matter is incoherent. Within a quantum
mechanics without measurement, Bethe calculates a formula which holds for the
energy dissipation due to the position measurements along a track. It can be shown,
however, that for the fast particles which were the subject of his calculation this may be neglected for all practical purposes, at least as long as the particles are not too fast, i.e., do not have relativistic velocities (Falkenburg, 2007, 182). In this case, the momentum dependence of ⟨E⟩ is very weak, making practically no difference for the particle states after the measurement points of an individual particle track. Therefore, Rossi's observation quoted above, according to which the statistical effects of ionization loss along a track are negligible (Rossi, 1952, 29), is supported not only by Mott's highly idealized case but also by Bethe's results. In this way,
the application of Bethe’s formula to the energy loss along an individual particle
track is justified for all practical purposes. (What looks queer from a philosophical
point of view may be a good approximation in physical practice.)
Like Born, Bethe used the non-relativistic quantum mechanics of scattering. His theory therefore did not apply to the tracks of the high-energy particles from cosmic
rays. In order to describe the interactions of relativistic charged particles, quantum
electrodynamics was needed. The basic equations were given by Dirac in 1927.
For relativistic particles, however, the naive quasi-classical picture of a track breaks
down. Quantum electrodynamics predicts that a particle does not lose its energy
smoothly. Due to bremsstrahlung and pair creation, the energy loss along a particle
track may become completely irregular and extreme deviations from the classical
path may occur. Therefore, with increasing particle energy the correspondence be-
tween the shape of individual particle tracks and the classical case breaks down
stepwise, and the semi-classical model of a particle track does, too.
Møller had already shown in 1931 how Dirac’s theory of the electron can be
combined with Born’s quantum mechanics of scattering. After the discovery of
the positron, Bethe, Bloch, and Heitler extended Bethe’s 1930 approach to calcu-
lations of bremsstrahlung and pair creation (Bethe, 1932; Bloch, 1933; Heitler and Sauter, 1933). The expectation value ⟨E⟩ of these processes depends on the energy of the charged particles. Only at non-relativistic particle energies is ionization predominant and the relative frequency of bremsstrahlung or pair creation negligible. In the relativistic domain, the relative frequency of the latter processes in-
creases rapidly with increasing particle energy (Rossi, 1952, 29–30 and 60). Thus, in
the transition from the non-relativistic to the relativistic domain the smooth quasi-
classical shape of the particle tracks predicted by Mott’s and Bethe’s 1929/1930
calculations gets lost for an increasing number of particle tracks. Today, in the data
analysis of high-energy scattering experiments or astroparticle physics, this is cor-
rected at the probabilistic level (see next section). There is no other way to take the
quantum electrodynamic fluctuations of the energy loss along a particle track into
account.
Therefore, the way in which these quantum electrodynamic formulas were used
did not change. All the same, they were simply inserted into Bethe's expression for the mean energy loss ΔE per length Δx of matter along a particle track. Again,
this procedure is justified by the incoherent first-order contributions to the quan-
tum mechanics of scattering. The fact that they dominate makes it unproblematic
to apply classical stochastic methods to the analysis of particle tracks. Bethe’s and
Bloch’s 1932–1933 calculations gave rise to the Bethe–Bloch formula for the mean
energy loss of fast charged particles per path length in matter which consists of
heavy atoms (Bloch, 1933; for a semi-classical calculation see Rossi, 1952, 17).
The Bethe–Bloch formula made it possible to calculate a theoretical value for the
average range of charged particles in a given kind of matter or detector material (see
Rossi, 1952, 22–27).
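For orientation only (this is the modern, simplified textbook form, not the historical 1932–1933 expressions, and it omits density and shell corrections), the Bethe–Bloch mean energy loss and the corresponding range estimate read roughly:

\[ -\Bigl\langle\frac{dE}{dx}\Bigr\rangle \;\approx\; K\,z^{2}\,\frac{Z}{A}\,\frac{1}{\beta^{2}} \left[\ln\frac{2\,m_{e}c^{2}\,\beta^{2}\gamma^{2}}{I}-\beta^{2}\right], \qquad R(E_{0}) \;\approx\; \int_{0}^{E_{0}} \frac{dE}{-\langle dE/dx\rangle}, \]

with K ≈ 0.307 MeV cm² mol⁻¹ when x is measured in g cm⁻², z the charge of the projectile, Z and A the charge and mass numbers of the medium, and I its mean excitation energy.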
Confidence in such calculations stood or fell with confidence in the Dirac equa-
tion. Before the discovery of the positron, the Dirac equation did not have much
credit with physicists. Its solutions corresponding to negative energy values had no
empirical interpretation and were considered to be “unphysical”. After Anderson’s
discovery, this equation had much more credit and many quantum electrodynamic
The positron was the first new particle found in cosmic radiation. Its discovery in
1932 was completely based on the phenomenological features of cloud chamber
tracks. Anderson identified it from a track with a density of ionization typical of an electron, but with the wrong sign of curvature. Supervised by Millikan, he had been
working with a cloud chamber since 1931 to examine cosmic rays (Anderson and
Anderson, 1983; Pais, 1986, 351–352). In order to identify the charge of the parti-
cles, he used a strong magnetic field. On his photographs he found quite a number
of tracks which indicated positive particles. At first he attributed them to protons, as
the proton was the only positively charged particle known at that time. But evidence
spoke against protons. The low ionization degree of the tracks indicated a mass of
the order of the electron rather than the proton mass, which is almost 2 000 times
larger. Under the assumption that it were electrons from cosmic rays, however, the
observed flight direction was not compatible with the track curvature in the magnetic
field. It seemed absurd that they should be electrons from cosmic rays traveling up-
wards. To determine the flight direction of the particles unambiguously, Anderson
put a lead plate of width 6 mm into the center of the cloud chamber. When passing
the lead plate, the particles lost energy and momentum. This energy loss gave rise
to an increase in the track curvature.
Anderson discovered a track particularly suitable for particle identification in
August 1932 (Anderson, 1932, 1933; see Fig. 2.11, Sect. 2.6.1). In his analysis
of the track, he discussed all degrees of freedom for the interpretation: the mass,
the amount of charge, the sign of the charge and the number of particles to which
the track may be imputed. From the ionization degree of the track he inferred that the charge could not differ in magnitude by more than about a factor of two from that of the electron. The magnitude of the mass was estimated indirectly using the track length,
the track curvature and the known values of the electron and the proton mass.
His interpretation of the track was based on the following reasoning. According
to the Lorentz force, for a particle of the proton charge and mass the track curvature
indicated an energy of 300 MeV. According to the semi-empirical energy–range
relation for protons, however, a proton of 300 MeV could only have a range of
around 5 mm (Rutherford et al., 1930, 294). And due to the ionization density of
the track, the mass had to be substantially smaller than the proton mass. But the
assumption that it was an electron would have implied a drastic violation of energy
conservation: due to the curvatures of the partial tracks, an electron causing the
complete track would not have lost energy in the 6 mm thick lead plate, but rather
have been accelerated by 40 MeV, as Anderson emphasized. Finally, it was very
implausible to assume that it consisted of two independent electron tracks which met by chance at the lead plate. The probability of such a coincidence was extremely
low. Therefore, only two possibilities remained. Either the track was due to a single
particle of positive charge which had lost energy at the lead plate and which had a
mass and charge comparable to the electron. Or it was due to a pair of particles with
opposite charges and equal mass which had arisen from one and the same reaction
in the lead plate, and of which one was an electron. The existence of a positive electron, the positron, followed from either possibility.
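To make the curvature step in such reasoning concrete: for a singly charged particle, the Lorentz force relates the momentum to the magnetic field and the radius of curvature by p[GeV/c] ≈ 0.3 · B[T] · r[m]. The sketch below uses purely illustrative values, not Anderson's actual field or measured radii.

    # Momentum from track curvature via the Lorentz force, p ≈ 0.3 * q * B * r
    # (p in GeV/c, B in tesla, r in metres, q in units of the elementary charge).
    # The numbers below are hypothetical and only indicate orders of magnitude.
    def momentum_from_curvature(b_tesla, radius_m, charge=1):
        """Transverse momentum in GeV/c for a particle of the given charge."""
        return 0.3 * charge * b_tesla * radius_m

    # A radius of curvature of 0.15 m in a 1.5 T field corresponds to ~68 MeV/c.
    p = momentum_from_curvature(1.5, 0.15)
    print(round(p * 1000), "MeV/c")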
Anderson’s way of proceeding teaches an important lesson about the relation be-
tween quantum theory and experimental cosmic ray studies. In the first decades of
particle physics, many decisive discoveries happened independently of theory for-
mation. The carrying out and evaluation of many crucial experiments was largely
autonomous with regard to the simultaneous development of new theoretical ap-
proaches. This autonomy of the experiment was also emphasized by Galison (1987).
It concerned the measurement theories. In the early cosmic ray studies as well as in
the current experiments of particle and astroparticle physics, the physicists only use
reliable, well-confirmed theoretical background knowledge for the analysis of their
experimental data. This is above all true when they explore new empirical ground, as in the investigation of cosmic rays. (And it is no less true when they search
for data that confirm a theoretical hypothesis, as in the case of the neutrino. See
Chap. 7.) Sometimes it happens that they disregard an existing theory, as in the case
of the positron. Anderson knew the Dirac equation, but he did not use it (Anderson
and Anderson, 1983, 140):
It has often been stated in the literature that the discovery of the positron was a conse-
quence of its theoretical prediction by Paul A.M. Dirac, but this is not true. The discovery
of the positron was wholly accidental. Despite the fact that Dirac’s relativistic theory of
the electron was an excellent theory of the positron, and despite the fact that the existence
of this theory was well known to nearly all physicists, including myself, it played no part
whatsoever in the discovery of the positron.
His careful analysis of the positron tracks available in his data since 1931 did not
consider the Dirac equation, as his published papers show as well (Anderson, 1932;
1933). In the 1983 review of his work he did not explain why he neglected the
Dirac equation. Probably he simply did so because Millikan was his supervisor. And
probably, like Bothe and Kolhörster or Rossi, he was more interested in the particle
content of cosmic rays than in the search for Dirac particles. Blackett and Occhialini, in contrast, were interested in the latter (Blackett and Occhialini, 1933; see Chap. 2).
Anderson tried as long as possible to interpret the atypical tracks in terms of the
proton charge and mass, as did Millikan. Indeed, Dirac himself had also tried to give
such a conservative interpretation, namely to assign the negative energy solutions of
his equation to the proton (Pais, 1986, 346–348). Blackett and Occhialini confirmed
Anderson’s discovery of the positron. In addition they identified processes of pair
creation, to which Anderson could track the positrons of the cosmic rays back, too,
still in 1933 (Anderson and Anderson, 1983). The process of pair creation was also
explained by the Dirac equation.
and goals, their strong belief that applications of QED failed at high energies, and their need
to find a rough limit below which they could safely apply the theory.
The calculations seemed to be valid only for the “soft”, non-penetrating electron-photon component of showers, but not for the “hard”, high-energy shower components. Hence, there was a phenomenology of quantum electrodynamics that turned out to be correct at low energies but seemed to go astray at high energies.
Only in 1936 did Anderson conclude that there was a new charged particle with a mass between the electron and proton masses. The lack of confidence in quantum
electrodynamics and in the calculation of the energy loss by scattering processes at
high particle energies delayed its identification for two years. The delay was due
to the vicious circle described above. With the insufficiently selective experimental methods of 1933, Anderson found some particle tracks which he regarded as
tracks of electrons in his photographs from the cosmic rays. However, these were the
still unidentified muon tracks. Their signature confirmed the suspicion that quantum
electrodynamics fails at high particle energies (Cassidy, 1981, 2 and 12–15; Ander-
son and Anderson, 1983, 143–146).
Anderson used the same cloud chamber as for the discovery of the positron,
but worked with a more refined experimental technique (Galison, 1987, 137).
Later, he further refined his measuring methods by installing a 1 cm thick platinum absorber in the cloud chamber. This absorber stopped the electrons and let
the muons pass through. The obvious conclusion was that the platinum absorber
was passed by a charged particle which was heavier than the electron. In their 1936
paper, Anderson and Neddermeyer finally concluded that there was another option besides the failure of quantum electrodynamics at high energies, namely (after Cassidy, 1981, 22):
that either the theory of absorption breaks down for energies greater than about 1000 MeV,
or else that these high energy particles are not electrons.
Quantum electrodynamics was much better validated for the energy loss by ioniza-
tion processes. It predicted that the new particle had to be lighter than the proton, if
its predictions for particles of low to medium energy were correct at all (Pais, 1986,
432; Cassidy, 1981, 14).
The process which led step by step to the identification of the muon has been
investigated in detail by the historian of science Galison (1987, Chap. 3; in particular
126–133). He emphasizes that the exact instant of the muon discovery cannot be
defined because the discovery was due to a collective learning process amongst
particle physicists (as for the neutrino). In this collective learning process, more and
more explanatory options were carefully eliminated:
The move towards acceptance of the muon was not the revelation of a moment. But by
tracing an extended chain of experimental reasoning like this one, we have seen a dynamic
process that, while sometimes compressed in time, has occurred over and over in particle
physics. With the discovery of the neutrino, for instance, one sees such a gradual elimination
of alternatives (Galison, 1987, 133).
Due to the discovery of the muon, the confidence in quantum electrodynamics in-
creased. But further puzzles remained. Until 1947, it was not possible to distinguish the muon discovered in 1936 from the pion, which had already been predicted in 1935 by Yukawa's theory of the strong interaction. The two particles had been confused with each other from 1936 to 1947, since no precise mass measurement was available. The muon and the pion have masses of 106 MeV/c² and 140 MeV/c², respectively (Rossi,
1952, 162–163; Lattes, 1983). They could only be discriminated when more precise
independent methods of mass measurement became available. Anderson could not
have dreamed of resolving such a small mass difference with his methods of 1936.
The puzzles of particle identification of the mid-1930s–1940s could only be
resolved when nuclear emulsions became available. They were developed by the
physicist Marietta Blau (Halpern, 1993; Galison, 1997; Strohmaier and Rosner, 2006), who worked at the Vienna Institute of Radium Research from 1927 until 1937, without any salary, however. From 1932 on, she worked with her former PhD student Hertha Wambacher on the improvement of photographic plates. In 1937, she had the opportunity to expose her plates for five months in Hess' laboratory at the top of the Hafelekar (Innsbruck), at an altitude of 2 300 m. Blau and
Wambacher discovered not only very long tracks of protons of extremely high en-
ergy but also star-shaped tracks which stemmed from nuclear disintegration, i.e., a
scattering event in which a cosmic ray particle of extremely high energy
made an atomic nucleus burst, giving rise to several tracks of protons or α-particles.
The discovery gave rise to a publication in Nature (Blau and Wambacher, 1937). In
1938, she emigrated, first to Oslo and later to Mexico and the USA, where she arrived in 1944; there, however, she had no opportunity to return to her scientific work. Blau's merits were finally honored with the Schrödinger Prize of the Austrian Academy of Sciences in 1962, a few years before she died, almost forgotten, in 1970.
In the 1940s, it was Powell who further improved the new photographic method and used it to investigate the particle content of cosmic rays. His nuclear
emulsions made it possible to record the tracks of charged particles and to develop
their photographs with a very high spatial resolution (of 1 µm) (Rossi, 1952, 127–
142; Powell et al., 1959, 26–32). Now, a semi-empirical method of mass measure-
ment was available that was completely independent of quantum electrodynamics.
The emulsions allowed one to determine the mass of a particle with high precision, independently of its range, using the density of the measurement points (Rossi, 1952, 138–142). Thus Powell discovered the pion in 1947 and received the Nobel Prize in
1950 for the development of the photographic method and the discovery of the pion.
Further improvements in the recording and analyzing of particle tracks from cos-
mic rays came with the bubble chamber invented by Glaser in 1952. When particle
physics shifted from cosmic ray studies to scattering experiments at particle accel-
erators, around 1960, the bubble chamber made it possible to measure the semi-
empirical energy–range relation with high precision for many particle types and
over a large energy range.
While the observation of particle tracks on a photo plate is possible even for the layman, the data analysis of these tracks makes use of a very complex measurement theory. This measurement theory has grown historically, and it has a layer structure.
Indeed it is based on Mott’s and Bethe’s semi-classical model of a particle track
up to the present day. Its foundations were laid in 1930 by Bethe’s seminal paper,
based on the idea of correspondence to the classical case and the quantum mechan-
ics of scattering. The formulas for the energy loss of charged particles in matter were
elaborated by Bethe, Bloch, Heitler, and others in the early days of quantum elec-
trodynamics. However, before the consolidation of quantum electrodynamics these
formulas could not be used for the mass measurement of cosmic rays. In order to
resolve the puzzles of particle identification, independent semi-empirical formulas
such as the energy–range relation were employed.
The resulting collection of laws for the analysis of particle tracks was probed
and refined in cosmic ray studies during the 1930s–1950s. After the consolidation
of quantum electrodynamics, more and more new particles were detected, and con-
servation laws for quantum properties such as spin, parity, isospin, etc. were added
in order to understand their properties and their reactions. The conservation laws brought theoretical structure (i.e., dynamic symmetries) into the growing particle zoo.
The measurement theory was taken over to particle physics in the 1960s. It was
complemented by the Breit–Wigner formula for the energy and width of a reso-
nance (which is due to the decay of an unstable particle), and further refined by
radiative corrections based on higher-order terms of perturbation theory. In the era
of particle accelerators, with increasing data sets, it was completed by statistical
methods. Finally, advanced computer methods such as Monte Carlo simulations en-
tered. The resulting collection of theoretical and semi-empirical laws has been used
for the analysis of high-energy scattering experiments up to the present day. With
the rise of modern astroparticle physics in the late 1980s, it was finally taken back to cosmic ray studies with neutrino telescopes and other highly advanced particle detectors.
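For orientation, the Breit–Wigner shape mentioned above can be stated in its standard non-relativistic form: near a resonance of energy E_R and width Γ, the cross section behaves as

\[ \sigma(E) \;\propto\; \frac{\Gamma^{2}/4}{(E-E_{R})^{2}+\Gamma^{2}/4}, \]

where the width Γ is related to the lifetime of the unstable particle by τ = ħ/Γ.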
semi-empirical laws, the formulas for bremsstrahlung and pair creation were used
for the analysis of particle tracks, and higher-order contributions to the scattering
amplitudes for the statistical data analysis of high-energy scattering experiments,
but before they were not.
Second, it has different theoretical layers. It is made up of classical and semi-
classical laws that apply to particle tracks, conservation laws that apply to scattering
events, quantum laws applied at the probabilistic level, and statistical laws for the
analysis of large data samples.
Let us now have a look at these theoretical layers and the way in which they are
connected.
On the one hand, it is a complex aggregation of semi-empirical and theoretical
background knowledge which has historically grown, is theoretically and empiri-
cally well-justified in all parts, but is far from being a well-defined theory. On the
other hand, however, it demonstrates that the shift from waves to particles in the cos-
mic ray studies of the late 1920s was not the last word about the nature of cosmic
rays, given that their quantum nature has to be taken into account.
Due to this quantum nature, the analysis of individual particle tracks has to be
corrected by quantum laws. But due to the irreducibly probabilistic character of
quantum laws, the corrections can only be applied at the probabilistic level, taking
into account sophisticated statistical methods such as unfolding procedures in order
to correct the data samples. Leaving all non-quantum measurement errors (such as mere statistical effects) aside, these methods indicate a shift from particles back
to waves, or to be more precise, to quantum waves with probability amplitudes.
This shift is due to the wave–particle duality and the probabilistic interpretation of
quantum theory, according to which the relative frequency of particles corresponds
to the (squared) scattering amplitudes of probability waves. So let us have a closer
look at the layers of this measurement theory and the shift from particles to quantum
waves, which it implies.
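Before turning to these layers, a deliberately simplified toy illustration of the unfolding step just mentioned may help (it is not the procedure of any particular experiment): if a known detector response matrix R smears the true spectrum t into the measured one, m ≈ R t, then a naive estimate of t is the least-squares solution of this linear system; realistic analyses add regularization and a full statistical treatment.

    # Toy unfolding sketch with a hypothetical 3-bin response matrix; real analyses
    # use regularized unfolding, this bare least-squares inversion is only illustrative.
    import numpy as np

    R = np.array([[0.8, 0.1, 0.0],   # element (i, j): probability that an event from
                  [0.2, 0.8, 0.2],   # true bin j is measured in bin i
                  [0.0, 0.1, 0.8]])
    true_spectrum = np.array([100.0, 50.0, 10.0])
    measured = R @ true_spectrum             # what the detector records on average

    unfolded, *_ = np.linalg.lstsq(R, measured, rcond=None)
    print(unfolded)                          # ~ [100. 50. 10.]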
The quantities attributed to individual particle tracks are mass and charge, which are
measured based on Mott’s and Bethe’s semi-classical model explained above. The
analysis of particle tracks employs classical and semi-classical laws. For charged
particles, the Lorentz force (Lorentz, 1895) connects the ratio of mass and charge
of a particle to its acceleration in electric and/or magnetic fields. It was applied to
charged particles from the very beginnings of subatomic physics (Thomson, 1897).
The mass of charged particles is determined by means of a large number of semi-empirical laws such as the energy–range relation (Rutherford et al., 1930; Rossi, 1952), which have been measured and used since the early radioactivity and cosmic ray studies. In addition, classical considerations suggest that the track length cor-
responds to the absorption length of a stable particle or to the decay time of an
unstable particle. Even though both are genuinely quantum, i.e., probabilistic quantities, there is no problem in attributing them to an individual particle track once the track is generated (and all quantum measurements are done).
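As a minimal numerical illustration of this point (standard muon constants, hypothetical momentum): once a track is generated, the mean decay length L = βγcτ can be attributed to it, with βγ = p/(mc).

    # Mean decay length L = beta*gamma*c*tau = (p / m c) * c*tau.
    # c*tau and the mass are standard muon values; the momentum is an example.
    C_TAU_MUON_M = 658.6      # c * tau for the muon, in metres
    MUON_MASS_GEV = 0.1057    # muon mass in GeV/c^2

    def mean_decay_length_m(p_gev, mass_gev=MUON_MASS_GEV, c_tau_m=C_TAU_MUON_M):
        """Mean decay length in metres for a particle of momentum p_gev (GeV/c)."""
        return (p_gev / mass_gev) * c_tau_m

    print(round(mean_decay_length_m(1.0)))   # a 1 GeV/c muon: roughly 6200 m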
For the measurement of the dynamic properties of uncharged particles, there are few laws at the level of individual particle tracks. Even though the energy–range relation may be used for them, too, in order to determine the mass of uncharged particles more precisely, scattering events or resonances have to be measured.
All quantities which derive from the quantum theory of scattering are irreducibly
probabilistic. This is in particular true of the cross section of a given kind of par-
ticle reaction, the quantity in which a quantum field theory comes down to earth.
Quantum field theory and the experiment meet in the scattering cross section of a
particle reaction. As an empirical quantity, the cross section is obtained by counting
the relative frequency of scattering events of a given dynamic type. As a theoreti-
cal quantity, it is calculated within the quantum mechanics of scattering from the
S-matrix element of a quantum field theory for the corresponding particle reaction.
scattering results. The particle aspect only shows up in the relative frequency of par-
ticle detections in a given direction. However, there is no classical path. In particular,
the classical impact parameter of an individual scattering process has no quantum
correlate.
The probabilistic nature of quantum waves is most obviously employed in the
measurement of CP violations or of neutrino oscillations. Here, quantum superposi-
tions of different particles (or quantum states of given dynamic properties) are mea-
sured. CP violations are measured in the scattering experiments of particle physics,
whereas neutrino oscillations have been detected by measuring the solar neutrino
flux. It has been argued that CP violations are a good example of quantum superpo-
sitions that cannot be interpreted in terms of an ignorance interpretation of quantum
probabilities (Müller, 1993). (The ignorance interpretation of quantum theory claims
that quantum probabilities are just due to missing knowledge about the quantum
properties, but not to something like objectively undetermined properties.) I sup-
pose that neutrino oscillations are, too.
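To indicate what is measured in the neutrino case, the standard two-flavour oscillation probability (a textbook simplification of the full three-flavour treatment) can be written as P = sin²(2θ) · sin²(1.27 Δm²[eV²] L[km]/E[GeV]); the parameter values in the sketch below are round illustrative numbers, not fit results.

    # Two-flavour neutrino oscillation probability in the usual textbook form.
    import math

    def oscillation_probability(sin2_2theta, dm2_ev2, baseline_km, energy_gev):
        """Appearance probability P(nu_a -> nu_b) for two-flavour mixing."""
        phase = 1.27 * dm2_ev2 * baseline_km / energy_gev
        return sin2_2theta * math.sin(phase) ** 2

    # An atmospheric-like example: maximal mixing, dm^2 = 2.5e-3 eV^2,
    # a 1 GeV neutrino travelling 500 km.
    print(oscillation_probability(1.0, 2.5e-3, 500.0, 1.0))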
The measurement theory of cosmic ray studies has been taken over to particle
physics, elaborated further in the era of the big accelerators, and finally taken back to
the measurement of cosmic rays in recent astroparticle physics. We have seen that
the foundations of this measurement theory date back to the first half of the 20th
century. Its core is a collection of classical laws, like the Lorentz force, the laws
of relativistic kinematics, the quantum mechanics of scattering, and the formulas of
quantum electrodynamics calculated in the early 1930s.
As sketched above, for almost two decades there were no independent measure-
ment methods for the mass attributed to a particle track. The analysis of particle
tracks was trapped in a vicious circle which was not easy to resolve, as the puzzles
This shift of interest has several pragmatic consequences, which open a new and
interesting field of investigation in the philosophy of physics. Let me just mention
two of them and sketch some of their philosophical implications.
First, astroparticle physics is mainly interested in uncharged particles which
point to their cosmic origin. Charged particles are deflected by cosmic magnetic
fields. Hence, they carry no information about their origin. Therefore, the classical Lorentz force no longer plays a crucial role in the data analysis of the particle tracks measured in astroparticle physics. The Lorentz law still serves to sort out the
charged particle background of the photon or neutrino flux which is measured. But
a precise measurement of the mass and momentum from a particle track is no longer
needed. Hence, it is no longer necessary to reconstruct the individual particle tracks.
The particle tracks are simply fitted as straight lines; their χ² is optimized by means of multivariate statistical analysis, and the probability that the track belongs to a specific kind of particle of given energy is calculated. The sample of the individual particle tracks reconstructed in the high-precision measurements of particle physics
is now replaced by a probability cloud.
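A deliberately minimal sketch of such a straight-line fit, with hypothetical hit positions and no detector specifics: a least-squares line through the recorded hits, whose residuals define the χ² that the multivariate analysis would then use as one input among others.

    # Minimal straight-line track fit by least squares; the hits are hypothetical.
    # Real reconstructions fit in three dimensions and feed the chi^2 together with
    # other track features into multivariate classifiers.
    import numpy as np

    z = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # layer positions along the track
    x = np.array([0.02, 0.51, 1.03, 1.48, 2.01])   # measured hit positions
    sigma = 0.03                                   # assumed single-hit resolution

    slope, intercept = np.polyfit(z, x, 1)         # least-squares straight line
    residuals = x - (slope * z + intercept)
    chi2 = float(np.sum((residuals / sigma) ** 2))

    print(f"slope = {slope:.3f}, intercept = {intercept:.3f}, chi2 = {chi2:.2f}")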
Second, there is an important kinematic distinction between the collider experi-
ments of recent high-energy physics and the measurements of astroparticle physics.
The dominant kinematic domain of collider experiments consists of scattering events with high transverse momentum. For a collider experiment, scattering events in the forward direction are neither experimentally accessible, nor can they be calculated with sufficient precision. In the experiments of astroparticle physics, however, the accessible kinematics is the reverse. This fact indicates that the collider experiments of particle physics and the cosmic particle flux measurements of astroparticle physics complement each other.
The first consequence raises an interesting question concerning the interpretation
of quantum theory. To replace the reconstruction of individual particle tracks by a
probability cloud means in a certain sense that the flux measurement of cosmic rays
directly captures the reference of quantum states to probability waves. Instead of
attributing dynamic properties to individual particle tracks in a semi-classical model
and correcting for the errors thus produced, the data analysis takes place only at the probabilistic level. Here, an obvious philosophical issue is: In which terms may this
probabilistic data analysis be understood? In terms of an ensemble interpretation of
quantum theory, in terms of decoherence, or in other terms?
The second consequence indicates that the scattering experiments of particle
physics and the flux measurements of cosmic rays are complementary regarding
their respective domains. Indeed they complement each other in a more general
sense, by exploring the grounds of ‘new physics’. The physicists of both fields
claim that their respective experiments are relevant for the ‘big questions’ of unified
physics at a small scale and at a large scale. The experiments of particle physics
attempt to make the bridge from the current standard model of particle physics to
physics beyond: to the realm of supersymmetry, superstring theory, loop quantum
gravity, or some other unifying theory. The experiments of astroparticle physics
make the bridge from subatomic particles to the cosmic sources which emit them.
Both approaches are complementary. Both rely on the same measurement theory
used with a different focus. But astroparticle physics employs measurement meth-
ods and concepts of astrophysics in addition to those of particle physics.
The complementarity of the respective experimental methods and concepts
should be investigated in more detail. Here, the philosophical issue in view of two
competing fields of investigation is: What is the specific significance of astrophys-
ical methods and concepts in astroparticle physics? For example, what role does the concept of messenger particles play in unifying the physics at a small scale and at a large scale? (For a first approach see Falkenburg and Rhode, 2007; Falkenburg,
2012.) These philosophical topics are beyond the scope of this historical introduc-
tion to astroparticle physics. However, the field for their investigation is now open.
References
Anderson, C.D.: The apparent existence of easily deflectable positives. Science 76, 238 (1932)
Anderson, C.D.: The positive electron. Phys. Rev. 43, 491–494 (1933)
Anderson, C.D., Anderson, H.L.: Unraveling the particle content of cosmic rays. In: Brown, L.M.,
Hoddeson, L. (eds.) The Birth of Particle Physics, pp. 131–154. Cambridge University Press,
Cambridge (1983)
Bethe, H.: Zur Theorie des Durchgangs schneller Korpuskularstrahlen durch Materie. Ann. Phys.
5, 325–400 (1930)
Bethe, H.: Bremsformel für Elektronen relativistischer Geschwindigkeit. Z. Phys. 76, 293–299
(1932)
Bethe, H., Heitler, W.: On the stopping of fast particles and on the creation of positive electrons.
Proc. R. Soc. A 146, 83–112 (1934)
Blackett, P.M.S., Occhialini, G.: Some photographs of the tracks of penetrating radiation. Proc. R.
Soc. Lond. Ser. A 139, 699–726 (1933)
Blackmore, J.T.: Ernst Mach; His Work, Life, and Influence. University of California Press, Berke-
ley (1972)
Blau, M., Wambacher, H.: Disintegration processes by cosmic rays with the simultaneous emission
of several heavy particles. Nature 140, 585 (1937)
Bloch, F.: Bremsvermögen von Atomen mit mehreren Elektronen. Z. Phys. 81, 363–376 (1933)
Bohr, N.: On the theory of the decrease of velocity of moving electrified particles on passing
through matter. Philos. Mag. 25, 10–31 (1913). Repr. in: Bohr Collected Works (BCW), vol. 2,
pp. 18–39. North-Holland (Elsevier), Amsterdam
Bohr, N.: On the decrease of velocity of swiftly moving electrified particles on passing through
matter. Philos. Mag., 30, 581–612 (1915). Repr. in: BCW, vol. 8, pp. 127–160
Bohr, N.: Über die Serienspektren der Elemente. Z. Phys. 2, 423–469 (1920). Engl. transl. in: Bohr
(1922)
Bohr, N.: The Theory of Spectra and Atomic Constitution. Cambridge University Press, Cambridge
(1922)
Bohr, N.: Como Lecture (1927). Modified version: The quantum postulate and the recent devel-
opment of atomic theory. Nature 121, 580–590 (1927). Repr. in: Wheeler and Zurek, 1983,
87–126. Both versions in: BCW, vol. 6, pp. 109–158
Bohr, N.: Discussion with Einstein on epistemological problems of atomic physics. In: Schilpp,
P.A. (ed.) Albert Einstein: Philosopher–Scientist. Library of Living Philosophers, pp. 115–150.
Evanston, Illinois (1949). Repr. in: Wheeler and Zurek, 1983, pp. 9–49
Bohr, N., Kramers, H.A., Slater, J.C.: The quantum theory of radiation. Philos. Mag. 47, 785–802
(1924). Repr. in: van der Waerden, B.L. (ed.): Sources of Quantum Mechanics, pp. 159–176.
Dover, New York (1967)
Born, M.: Zur Quantenmechanik der Stoßvorgänge. Z. Phys. 37, 863–867 (1926a). Engl. transl. in:
Wheeler and Zurek (1983), 52–55
Millikan, R.A.: The isolation of an ion, a precision measurement of its charge, and the correction
of Stokes’s law. Phys. Rev. 32, 349–397 (1911)
Millikan, R.A.: The Electron. University of Chicago Press, Chicago (1917)
Møller, Ch.: Über den Stoß zweier Teilchen unter Berücksichtigung der Retardation der Kräfte. Z.
Phys. 70, 786–795 (1931)
Mott, N.F.: The wave mechanics of α-rays. Proc. R. Soc. A 126, 79–84 (1929). Repr. in: Wheeler
and Zurek (1983), 129–134
Müller, A.: Complementarity in the neutral kaon system. Philos. Nat. 30, 247–253 (1993)
Nye, M.: Molecular Reality: A Perspective on the Scientific Work of Jean Perrin. MacDonald,
London (1972)
Noether, E.: Invariante Variationsprobleme. Nachr. Ges. Wiss. Gött., Math.-Phys. Kl., 235–257
(1918)
Pais, A.: Inward Bound. Clarendon Press, Oxford (1986)
Perrin, J.: Ann. Chim. Phys. 18, 1–14 (1909). Engl. transl. by F. Soddy: Brownian Motion and Molecular Reality. Dover Phoenix Editions, Book on Demand (1909)
Pickering, A.: Constructing Quarks. Edinburgh University Press, Edinburgh (1984)
Powell, C.F., Fowler, P.H., Perkins, D.H.: The Study of Elementary Particles by the Photographic
Method. An Account of the Principal Techniques and Discoveries. Illustrated by an Atlas of
Photomicrographs. Pergamon Press, London (1959)
Riordan, M.: The Hunting of the Quark. Simon & Schuster, New York (1987)
Rossi, B.: High-Energy Particles. Prentice-Hall, New York (1952)
Rossi, B.: The decay of “Mesotrons” (1939–1943). Experimental particle physics in the age of
innocence. In: Brown, L.M., Hoddeson, L. (eds.) The Birth of Particle Physics, pp. 183–205. Cambridge University Press, Cambridge (1983)
Rossi, B.: Moments in the Life of a Scientist. Cambridge University Press, Cambridge (1990)
Rutherford, E., Chadwick, J., Ellis, C.D.: Radiations from Radioactive Substances. Cambridge
University Press, Cambridge (1930). Repr.: (1951)
Schrödinger, E.: Über den Comptoneffekt. Ann. Phys. 82, 257–264 (1927)
Schweber, S.S.: QED and the Men Who Made It. Dyson, Feynman, Schwinger, and Tomonaga.
Princeton University Press, Princeton (1994)
Simons, P.: Parts. A Study in Ontology. Clarendon Press, Oxford (1987)
Skobeltzyn, D.V.: The early stage of cosmic particle research. In: Sekido, Y., Elliot, H. (eds.) Early
History of Cosmic Ray Studies – Personal Reminiscences with Old Photographs, pp. 47–51.
Reidel, Dordrecht (1985)
Strohmaier, B., Rosner, R. (eds.): Marietta Blau. Stars of Disintegration. Biography of a Pioneer
of Particle Physics. Ariadne, Riverside (2006)
Thomson, J.J.: Cathode rays. Philos. Mag. 44, 293–316 (1897)
Thomson, J.J.: On the masses of the ions in gases at low pressures. Philos. Mag. 48, 547–567
(1899)
Townsend, J.S.: On electricity in gases and the formation of clouds in charged gases. Proc. Camb.
Philos. Soc. 9, 244–258 (1897)
Trigg, G.L.: Crucial Experiments in Modern Physics. Crane, Russak & Co., New York (1971)
von Neumann, J.: Mathematische Grundlagen der Quantenmechanik. Springer, Berlin (1932).
Engl. transl. by R. Beyer: Mathematical Foundations of Quantum Mechanics. Princeton Uni-
versity Press, Princeton (1955)
Yukawa, H.: On the interaction of elementary particles. Proc. Phys.-Math. Soc. Jpn. 17, 48–57
(1935)
Wheaton, B.R.: The Tiger and the Shark. Cambridge University Press, Cambridge (1983)
Wheeler, J.A., Zurek, W.H. (eds.): Quantum Theory and Measurement. Princeton University Press,
Princeton (1983)
Wigner, E.P.: On unitary representations of the inhomogeneous Lorentz group. Ann. Math. 40,
149–204 (1939)
Appendix A
Timetable
1901 Wilson suggests looking for the source of the ionisation of charged conductors
outside the atmosphere.
1902 Linke performs twelve balloon flights at altitudes up to 5 500 meters, finding increasing ionisation at heights above 3 000 meters.
Kaufmann convincingly shows that β-rays are electrons.
Kennelly and Heaviside suggest an electrically conducting layer in the upper
atmosphere.
1903 Rutherford suggests terrestrial radioactive substances to be the cause of the
ionisation of charged conductors.
1905 Einstein publishes his light quantum hypothesis, the theory of Brownian mo-
tion and Special Relativity.
1906 Rutherford and his assistants carry out their scattering experiments.
1908 Perrin’s measurements confirm Einstein’s theory of Brownian motion.
Flemming and Bergwitz start a series of balloon-flights but due to problems
with their measurement apparatus, they yield no convincing results.
Gockel uses the term cosmic radiation.
1909 In three balloon flights Gockel confirms that ionisation decreases only slowly
with the altitude.
Under the direction of Rutherford, Geiger and Marsden measure unexpected
backward scattering and suggest that atoms have a nucleus.
1910 Millikan performs his oil droplet experiment.
After designing and building the electrometer, Wulf conducts measurements
on the Eiffel Tower. The results contradict the assumption that the radiation
leading to the ionisation of charged conductors is terrestrial. The same con-
clusion is reached by Pacini, Simpson and Wright.
1911 Wilson uses the cloud chamber to visualize α- and β-rays.
Hertzsprung reveals the main sequence.
1912 Victor Hess recognizes an increase of radiation at altitudes of over 3 000 me-
ters. As a reason he suggests the existence of Höhenstrahlung.
Leavitt discovers the correlation of brightness and period of Cepheid variable
stars.
1913 Bohr develops his first quantum theory of atoms.
1914 Walter Kolhörster confirms the results of Hess with further balloon flights.
Russell is able to obtain a diagram similar to the one of Hertzsprung.
1915 Einstein publishes the final version of his field equations of General Relativity.
Schweidler excludes several terrestrial and solar possibilities as origin of the
cosmic rays.
Gockel and Hess give further confirmation of Höhenstrahlung with long term
measurements.
1916 Einstein extends his light quantum hypothesis.
Schwarzschild shows the possibility of the existence of black holes.
1917 Einstein extends his field equations by introducing the cosmological constant.
Slipher finds first evidence for the expansion of the universe.
1919 Eddington gives, influenced by the work of de Sitter, evidence for the theory
of General Relativity by observing the eclipse on May 29th.
1940 Williams and Roberts present the first photograph of a decaying muon.
Bethe and Critchfield propose the pp-chain.
Reber discovers the radio source Cygnus A.
1941 Rasetti is the first to measure the lifetime of a meson.
The term nucleon is introduced.
1944 Walter Baade distinguishes between two different stellar populations and in
doing so can correct the Hubble constant by a factor 2.7. Further corrections
follow.
Discovery of the Seyfert galaxies.
1947 Blackett predicts that relativistic particles passing the atmosphere produce
Cherenkov light.
The Feynman diagrams are introduced.
1948 Alpher and Gamow formulate their theory of the origin of the elements.
Hoyle, Bondi and Gold support the steady-state model of the universe.
The first artificial pions are produced in the synchrocyclotron.
1949 Hoyle is the first to use the term Big Bang.
Gamow and collaborators predict the cosmic microwave background.
Fermi is the first to describe a power law distribution of cosmic rays.
Discovery of the extragalactic radio sources NGC 4486 (M87) and NGC 5128
(Centaurus A).
The K + is detected.
1950 The π⁰ is detected at the cyclotron at the University of California.
1951 The detector El Monstro is designed and built.
Biermann predicts the solar wind.
The Λ⁰ and the K⁰ are detected in cosmic rays.
1952 Glaser invents the bubble chamber.
Discovery of the Δ.
1953 Bassi, Clark and Rossi show that the disk of air showers is quite thin.
Galbraith and Jelley prove that air showers generate Cherenkov light.
First measurements of Project Poltergeist.
1954 Baade and Minkowski identify the optical counterpart of Cygnus A.
Yang and Mills develop the gauge theories, providing the theoretical founda-
tions for the later standard model.
1956 Kulikov and Khristiansen discover the knee of the cosmic ray spectrum.
First detection of neutrinos by Cowan and Reines near a nuclear reactor.
1957 Hoyle works on the synthesis of the elements in stars with impressive results.
Sputnik is launched.
Schwinger proposes the unification of the weak and electromagnetic interac-
tions.
1958 Porter is the first to succeed in preventing bacterial growth in unfiltered water
long enough to realise a stable detector.
Morrison puts forward strong arguments for observing very high energy
gamma rays from the Crab nebula.
The Bolivian Air Shower Joint Experiment (BASJE) at Mount Chacaltaya is
established.
Turner introduces the term Dark Energy to explain the acceleration of the
expansion of the universe.
The Gallium Neutrino Observatory (GNO) succeeds GALLEX.
The Sudbury Neutrino Observatory (SNO) starts taking data.
The Super-Kamiokande collaboration confirms the existence of neutrino-
oscillations and thus the existence of non-zero neutrino mass.
1999 Perlmutter and others publish the cosmological results from the investigations
of several supernovae.
The Collaboration of Australia and Nippon for Gamma-Ray Observation in
the Outback (CANGAROO III) starts taking data.
2000 The DONUT experiment detects the first tau neutrino, the presumed oscillation partner of the muon neutrino.
2001 AMANDA II is completed.
SNO announces the observation of neutral currents from solar neutrinos and
finally solves the problem of missing solar neutrinos.
2002 The High Energy Stereoscopic System (HESS) starts taking data, as does the first really successful tail-catcher detector, Milagro.
Kamioka Liquid-scintillator Anti-Neutrino Detector (KamLAND) begins
data taking.
Using antineutrinos from nuclear reactors, KamLAND confirms the results of the observations of solar neutrinos.
2003 The KASCADE array is expanded to KASCADE-Grande.
2004 MAGIC starts taking data.
Physics data collecting begins at the Pierre Auger Observatory.
2006 The Very Energetic Radiation Imaging Telescope Array System (VERITAS) starts taking data.
2007 Data taking starts at the BOREXINO experiment.
A first real-time detection of monoenergetic ⁷Be neutrinos is announced.
2008 The Telescope Array (TA) near Utah starts taking data.
HiRes and Auger discover the suppression at the GZK threshold.
2010 The neutrino detector IceCube is completed.
Appendix B
Nobel Prizes
1932 Werner Karl Heisenberg “for the creation of quantum mechanics, the appli-
cation of which has, inter alia, led to the discovery of the allotropic forms of
hydrogen”.
1933 Erwin Schrödinger and Paul Adrien Maurice Dirac “for the discovery of new
productive forms of atomic theory”.
1935 James Chadwick “for the discovery of the neutron”.
1936 Victor Franz Hess “for his discovery of cosmic radiation” and Carl David
Anderson “for his discovery of the positron”.
1938 Enrico Fermi “for his demonstrations of the existence of new radioactive el-
ements produced by neutron irradiation, and for his related discovery of nu-
clear reactions brought about by slow neutrons”.
1939 Ernest Orlando Lawrence “for the invention and development of the cyclotron
and for results obtained with it, especially with regard to artificial radioactive
elements”.
1945 Wolfgang Pauli “for the discovery of the Exclusion Principle, also called the
Pauli Principle”.
1948 Patrick Maynard Stuart Blackett “for his development of the Wilson cloud
chamber method, and his discoveries therewith in the fields of nuclear physics
and cosmic radiation”.
1949 Hideki Yukawa “for his prediction of the existence of mesons on the basis of
theoretical work on nuclear forces”.
1950 Cecil Frank Powell “for his development of the photographic method of
studying nuclear processes and his discoveries regarding mesons made with
this method”.
1951 Sir John Douglas Cockcroft and Ernest Thomas Sinton Walton “for their pi-
oneer work on the transmutation of atomic nuclei by artificially accelerated
atomic particles”.
1954 Max Born “for his fundamental research in quantum mechanics, especially
for his statistical interpretation of the wavefunction” and Walther Bothe “for
the coincidence method and his discoveries made therewith”.
1955 Divided equally between Willis Eugene Lamb “for his discoveries concerning
the fine structure of the hydrogen spectrum” and Polykarp Kusch “for his
precision determination of the magnetic moment of the electron”.
1957 Chen Ning Yang and Tsung-Dao (T.D.) Lee “for their penetrating investi-
gation of the so-called parity laws which has led to important discoveries
regarding the elementary particles”.
1958 Pavel Alekseyevich Cherenkov, Il’ja Mikhailovich Frank and Igor Yevgenye-
vich Tamm “for the discovery and the interpretation of the Cherenkov effect”.
1959 Emilio Gino Segrè and Owen Chamberlain “for their discovery of the antipro-
ton”.
1960 Donald A. Glaser “for the invention of the bubble chamber”.
1961 Divided equally between Robert Hofstadter “for his pioneering studies of
electron scattering in atomic nuclei and for his thereby achieved discover-
ies concerning the structure of the nucleons” and Rudolf Ludwig Mössbauer
“for his researches concerning the resonance absorption of gamma radiation
and his discovery in this connection of the effect which bears his name”.
1963 Divided, one half awarded to Eugene Paul Wigner “for his contributions to
the theory of the atomic nucleus and the elementary particles, particularly
through the discovery and application of fundamental symmetry principles”,
the other half jointly to Maria Goeppert Mayer and J. Hans D. Jensen “for
their discoveries concerning nuclear shell structure”.
1965 Jointly to Sin-Itiro Tomonaga, Julian Schwinger and Richard P. Feynman “for
their fundamental work in quantum electrodynamics, with deep-ploughing
consequences for the physics of elementary particles”.
1967 Hans Bethe “for his contributions to the theory of nuclear reactions, especially
his discoveries concerning the energy production in stars”.
1969 Murray Gell-Mann “for his contributions and discoveries concerning the clas-
sification of elementary particles and their interactions”.
1970 Divided equally between Hannes Olof Gösta Alfvén “for fundamental work
and discoveries in magnetohydro-dynamics with fruitful applications in dif-
ferent parts of plasma physics” and Louis Eugène Félix Néel “for fundamen-
tal work and discoveries concerning antiferromagnetism and ferrimagnetism
which have led to important applications in solid state physics”.
1974 Martin Ryle and Antony Hewish “for their pioneering research in radio astro-
physics: Ryle for his observations and inventions, in particular of the aperture
synthesis technique, and Hewish for his decisive role in the discovery of pul-
sars”.
1977 Jointly to Philip Warren Anderson, Sir Nevill Francis Mott and John Has-
brouck van Vleck “for their fundamental theoretical investigations of the elec-
tronic structure of magnetic and disordered systems”.
1978 Divided, one half awarded to Pyotr Leonidovich Kapitsa “for his basic inven-
tions and discoveries in the area of low-temperature physics”, the other half
jointly to Arno Allan Penzias and Robert Woodrow Wilson “for their discov-
ery of cosmic microwave background radiation”.
1979 Jointly to Sheldon Lee Glashow, Abdus Salam and Steven Weinberg “for their
contributions to the theory of the unified weak and electromagnetic interaction
between elementary particles, including, inter alia, the prediction of the weak
neutral current”.
1980 Jointly to James Watson Cronin and Val Logsdon Fitch “for the discovery
of violations of fundamental symmetry principles in the decay of neutral K-
mesons”.
1981 Divided, one half jointly to Nicolaas Bloembergen and Arthur Leonard
Schawlow “for their contribution to the development of laser spectroscopy”
and the other half to Kai M. Siegbahn “for his contribution to the development
of high-resolution electron spectroscopy”.
1983 Divided equally between Subramanyan Chandrasekhar “for his theoretical
studies of the physical processes of importance to the structure and evolution
of the stars” and William Alfred Fowler “for his theoretical and experimental
studies of the nuclear reactions of importance in the formation of the chemical
elements in the universe”.
1984 Jointly to Carlo Rubbia and Simon van der Meer “for their decisive contribu-
tions to the large project, which led to the discovery of the field particles W
and Z, communicators of weak interaction”.
1986 Divided, one half awarded to Ernst Ruska “for his fundamental work in elec-
tron optics, and for the design of the first electron microscope”, the other half
jointly to Gerd Binnig and Heinrich Rohrer “for their design of the scanning
tunneling microscope”.
1988 Jointly to Leon M. Lederman, Melvin Schwartz and Jack Steinberger “for the
neutrino beam method and the demonstration of the doublet structure of the
leptons through the discovery of the muon neutrino”.
1990 Jointly to Jerome I. Friedman, Henry W. Kendall and Richard E. Taylor “for
their pioneering investigations concerning deep inelastic scattering of elec-
trons on protons and bound neutrons, which have been of essential importance
for the development of the quark model in particle physics”.
1992 Georges Charpak “for his invention and development of particle detectors, in
particular the multiwire proportional chamber”.
1993 Jointly to Russell A. Hulse and Joseph H. Taylor Jr. “for the discovery of a
new type of pulsar, a discovery that has opened up new possibilities for the
study of gravitation”.
1995 Awarded “for pioneering experimental contributions to lepton physics” jointly
with one half to Martin L. Perl “for the discovery of the tau lepton” and with
one half to Frederick Reines “for the detection of the neutrino”.
1996 Jointly to David M. Lee, Douglas D. Osheroff and Robert C. Richardson “for
their discovery of superfluidity in helium-3”.
2002 One half jointly to Raymond Davis Jr. and Masatoshi Koshiba “for pioneering
contributions to astrophysics, in particular for the detection of cosmic neutri-
nos” and the other half to Riccardo Giacconi “for pioneering contributions to
astrophysics, which have led to the discovery of cosmic X-ray sources”.
2006 Jointly to John C. Mather and George F. Smoot “for their discovery of the
blackbody form and anisotropy of the cosmic microwave background radia-
tion”.
2011 Divided, one half awarded to Saul Perlmutter, the other half jointly to
Brian P. Schmidt and Adam G. Riess “for the discovery of the accelerating
expansion of the Universe through observations of distant supernovae”.
Appendix C
Textbooks
TICER, B., 2004, Gravity, Quasi Black Holes, and Cosmic Relativity. Bloomington: AuthorHouse.
UNSÖLD, A., AND BASCHEK, B., 2002, Der neue Kosmos – Einführung in die Astronomie und Astrophysik. Wiesbaden: Gabler Wissenschaftsverlage.
WEIGERT, A., WENDKER, H. J., AND WISOTZKI, L., 2009, Astronomie und Astrophysik – Ein Grundkurs. Weinheim: Wiley-VCH.
WEINBERG, S., 1993, The First Three Minutes – A Modern View of the Origin of the Universe, updated ed. United States: Basic Books.
WEINBERG, S., 2000, The Quantum Theory of Fields – Supersymmetry. Cambridge: Cambridge University Press.
WEINBERG, S., 2008, Cosmology. New York: Oxford University Press.
ZEIDLER, E., 2006, Quantum Field Theory, 1st ed. 2006. Corr. 2nd printing. Berlin, Heidelberg, New York: Springer.
ZUBER, K., AND KLAPDOR-KLEINGROTHAUS, H. V., 1997, Teilchenastrophysik. Wiesbaden: Teubner.
VAN, J. T. T., 1981, Cosmology and Particles. Paris: Atlantica Séguier Frontières.
WEINBERG, S., 1972, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity. New York: Wiley.
Appendix D
Books in History of Physics
ALPHER, R., AND HERMAN, R., 2001, Genesis of the Big Bang. New York: Oxford University Press.
ASHTEKAR, A., 2005, One Hundred Years of Relativity. Singapore: World Scientific.
BERENDZEN, R., HART, R., AND SEELEY, D., 1984, Man Discovers the Galaxies. New York: Columbia University Press.
BERNSTEIN, J., Ed., 1979, Hans Bethe: Prophet of Energy. Basic Books.
BERTOTTI, B., BALBINOT, R., BERGIA, S., AND MESSINA, A., 1990, Modern Cosmology in Retrospect, illustrated ed. Cambridge: Cambridge University Press.
BLEEKER, J., GEISS, J., AND HUBER, M., Eds., 2001, The Century of Space Science. Berlin, Heidelberg, New York: Springer.
BRECHT, P., 1990, History of Cosmology and Its Significance. Charlottesville: University of Virginia.
BROWN, L. M., DRESDEN, M., AND HODDESON, L., Eds., 2009, Pions to Quarks – Particle Physics in the 1950s. Cambridge: Cambridge University Press.
BROWN, L. M., AND HODDESON, L., Eds., 1986, The Birth of Particle Physics. Cambridge: Cambridge University Press.
CAHN, R. N., AND GOLDHABER, G., 2009, The Experimental Foundations of Particle Physics. Cambridge: Cambridge University Press.
CHARON, J. E., AND WALDHEIM, H. V., 1970, Geschichte der Kosmologie. München: Kindler.
CRANSHAW, T. E., 1963, Cosmic Rays. Oxford: Oxford University Press.
CROPPER, W. H., 2004, Great Physicists – The Life and Times of Leading Physicists from Galileo to Hawking. Oxford: Oxford University Press.
DA COSTA ANDRADE, E. N., 1973, Rutherford and the Nature of the Atom. New York: Peter Smith Pub.
DAHL, P. F., 1997, Flash of the Cathode Rays – A History of J.J. Thomson’s Electron, illustrated ed. Bristol: Institute of Physics Pub.
DICK, W. R., HAMEL, J., AND DUERBECK, H. W., Eds., 2008, Beiträge zur Astronomiegeschichte. Frankfurt am Main: Harri Deutsch Verlag.
KRAGH, H., 2004, Matter and Spirit in the Universe – Scientific and Religious Preludes to Modern Cosmology. Paris: OECD Publishing.
KRAGH, H. S., 2007, Conceptions of Cosmos – From Myths to the Accelerating Universe: A History of Cosmology. New York: Oxford University Press.
KRÖGER, B., 1981, Der Nachlass von Julius Elster und Hans Geitel. Frankfurt am Main: Klostermann.
KROPP, W., SOBEL, H., AND SCHULTZ, J., 1990, Neutrinos and Other Matters – Selected Works of Frederick Reines. Singapore: World Scientific.
KUHN, TH. S., 1962, The Structure of Scientific Revolutions. University of Chicago Press. 2nd ed.: 1970.
LANG, K. R., AND GINGERICH, O., 1979, A Source Book in Astronomy and Astrophysics, 1900–1975. Cambridge: Harvard University Press.
LAVRO, A. S., 2002, Neutrinos – A Bibliography with Indexes. Nova Publishers.
LONGAIR, M. S., 2006, The Cosmic Century – A History of Astrophysics and Cosmology. Cambridge: Cambridge University Press.
LOSEE, J., 2001, A Historical Introduction to the Philosophy of Science. New York: Oxford University Press.
MUNITZ, M. K., 1957, Theories of the Universe – From Babylonian Myth to Modern Science. New York: Simon and Schuster.
NORTH, J., AND SENGERLING, R., 2001, Viewegs Geschichte der Astronomie und Kosmologie. Berlin, Heidelberg, New York: Springer.
NORTH, J. D., 2008, Cosmos – An Illustrated History of Astronomy and Cosmology, revised ed. Chicago: University of Chicago Press.
NYE, M. J., 2003, The Cambridge History of Science: The Modern Physical and Mathematical Sciences. Cambridge: Cambridge University Press.
PAIS, A., 1988, Inward Bound – Of Matter and Forces in the Physical World. Oxford: Clarendon Press.
PICKERING, A., 1984, Constructing Quarks. Edinburgh University Press.
REEVES, R., 2008, A Force of Nature – The Frontier Genius of Ernest Rutherford. New York: Norton.
RIORDAN, M., 1987, The Hunting of the Quark – A True Story of Modern Physics. New York: Simon & Schuster.
ROSNER, R., AND STROHMAIER, B., 2003, Marietta Blau – Sterne der Zertrümmerung: Biographie einer Wegbereiterin der modernen Teilchenphysik. Wien: Böhlau Verlag.
ROSNER, R., AND STROHMAIER, B., Eds., 2006, Marietta Blau – Stars of Disintegration: Biography of a Pioneer of Particle Physics. Riverside, California: Ariadne. Engl. transl. of Rosner and Strohmaier (2003).
ROSSI, B. B., 1965, High-Energy Particles. New York: Prentice-Hall.
ROSSI, B., 1990, Moments in the Life of a Scientist. Cambridge: Cambridge University Press.
SCHMEIDLER, F., 1962, Alte und moderne Kosmologie. Berlin: Duncker & Humblot.
SCHWEBER, S. S., 1994, QED and the Men Who Made It – Dyson, Feynman, Schwinger, and Tomonaga. Princeton: Princeton University Press.
Citation Index
Alpher et al. (1948), 72, 74, 96 Bagley et al. (2008), 260, 262
Altmann et al. (2005), 200, 212 Bagley et al. (2010), 260, 262
Amaldi et al. (1987), 217, 227 Bahcall (1964), 197, 212
Amram et al. (2000), 253, 262 Bahcall (1969), 199, 212
Amram et al. (2003), 253, 262 Bahcall (1989), 199, 212
Anchordoqui et al. (2011), 95, 96 Bahcall et al. (1994), 199, 212
Anderson (1932), 281, 282, 293 Baixeras et al. (2003), 167, 182
Anderson (1933), 38, 39, 43, 44, 281, 282, 293 Baldo-Ceolin et al. (1990), 221, 227
Anderson (1983), 39, 40, 44 Balkanov et al. (1999), 243, 262
Anderson and Anderson (1983), 281, 282, 284, Barbieri et al. (1989), 225, 227
293 Bardeen (1970), 80, 85, 96
Andres et al. (2001), 258, 262 Bardeen et al. (1973), 85, 96
Andrews et al. (1968), 122, 138 Barger and Wishnant (1988), 223, 227
Anselmann et al. (1992), 200, 212 Barkas et al. (1951), 43, 44
ANTARES, 253, 262 Barr et al. (1988), 222, 227
Antoni et al. (2003), 132, 133, 138 Barr et al. (1989), 222, 227
Antoni et al. (2005), 132, 138 Barrau et al. (1998), 160, 182
Antonioli et al. (2004), 258, 262 Barrau et al. (2003), 86, 96
Apel et al. (2010), 133, 138 Bassi et al. (1953), 114, 138
Apel et al. (2011), 133, 138 Battistoni et al. (1983), 220, 227
Appenzeller and Fricke (1972a), 83, 96 Beck et al. (1996), 79, 90, 97
Appenzeller and Fricke (1972b), 83, 96 Becker (2008), 84, 97
Ardeljan et al. (1996), 80, 96 Becker et al. (2011), 93, 97
Armenteros et al. (1951), 43, 44 Bekenstein (1973), 85, 86, 97
Arpesella et al. (2008), 206, 212 Bekenstein (2004), 85, 97
Askaryan (1962), 136, 138 Belenki and Landau (1956), 118, 138
Askebjer et al. (1995), 248, 262 Bell (1978a), 89, 97
Aslanides et al. (1999), 253, 262 Bell (1978b), 89, 97
Auger (1985), 108, 138 Bell (2004), 78, 91, 97
Auger et al. (1939), 146, 182 Bell and Lucek (2001), 78, 91, 97
Auger et al. (1939a), 108, 138 Bellido et al. (2001), 123, 138
Auger et al. (1939b), 108, 138 Belotti and Laveder (1993), 244, 262
Auger-Coll. (2011a), 94, 96 Belyaev and Chudakov (1966), 128, 138
Auger-Coll. (2011b), 94, 96 Berezinsky (1990), 239, 262
Auger-Coll. (2011c), 94, 96 Berezinsky (2009), 93, 97
Auger-Coll. (2011d), 94, 96 Berezinsky and Prilutsky (1978), 225, 227
Auger-Coll. (2011e), 94, 96 Berezinsky and Ptuskim (1989), 225, 227
Augustin et al. (1974), 157, 182 Berezinsky et al. (1985), 225, 227
Auriemma et al. (1988), 222, 227 Berezinsky et al. (1990), 89, 94, 97
Axford et al. (1977), 89, 96 Berger et al. (1987), 223, 227
Aynutdinov et al. (2006a), 244, 262 Berger et al. (1989a), 220, 227
Aynutdinov et al. (2006b), 244, 262 Berger et al. (1989b), 223, 227
Aynutdinov et al. (2009), 260, 262 Berger et al. (1991), 220, 227
Ayres et al. (1984), 222, 227 Bergeson et al. (1975), 129, 138
Bergeson et al. (1977), 129, 138
B Bergkvist (1972), 192, 212
Baade (1944), 54, 68 Bergstrom (1989), 225, 227
Baade and Zwicky (1934), 77, 89, 94, 96 Bernabei et al. (2011), 89, 97
Baade and Zwicky (1934a), 37, 44 Berti and Volonteri (2008), 83, 97
Baade and Zwicky (1934b), 37, 44 Bethe (1930), 274, 277, 278, 293
Babson et al. (1990), 239, 262 Bethe (1932), 279, 283, 293
Backenstoss et al. (1960), 218, 227 Bethe (1939), 196, 212
Bagduev et al. (1999), 241, 262 Bethe (1990), 80, 97
Bagge et al. (1977), 153, 182 Bethe and Critchfield (1938), 196, 212
Bethe and Heitler (1934), 111, 138, 283, 293 Bohr et al. (1924), 270, 293
Bethe and Peierls (1934), 190, 212 Bondi and Gold (1948), 58, 69
Beuermann et al. (1985), 93, 97 Borione et al. (1994), 131, 138
Beuermann et al. (2011), 79, 97 Borione et al. (1997), 156, 183
Bezrukov et al. (1984), 241, 262 Born (1926a), 275, 277, 293
Bezrukov et al. (1987), 241, 262 Born (1926b), 275, 277, 293
Bezrukov et al. (2010), 88, 97 Bosetti et al. (1988), 238, 262
Bhabha and Heitler (1937), 107, 111, 138 Bothe (1929), 35, 44, 106, 139
Biermann (1950), 72, 78, 97 Bothe and Geiger (1925), 269, 270, 294
Biermann (1951), 73, 93, 97 Bothe and Kolhörster (1929), 34, 35, 43, 45,
Biermann (1993), 73, 91, 92, 97 272, 294
Biermann and Cassinelli (1993), 92, 97 Bowden et al. (1991), 160, 183
Biermann and de Souza (2012), 73, 92, 95, 97 Boyanovsky et al. (2008a), 87, 88, 97
Biermann and Kusenko (2006), 76, 88, 97 Boyanovsky et al. (2008b), 87, 88, 97
Biermann and Schlüter (1951), 78, 90, 97 Bradbury et al. (1997), 161, 183
Biermann and Strittmatter (1987), 84, 94, 97 Branch (1998), 79, 97
Biermann and Strom (1993), 92, 97 Breit and Wigner (1936), 289, 294
Biermann et al. (1985), 79, 97 Breitschwerdt et al. (1991), 79, 97
Biermann et al. (1995), 92, 97 Bressi et al. (1989), 221, 227
Biermann et al. (2001), 92, 97 Brown and Hoddeson (1983), 17, 45
Biermann et al. (2009), 92, 97 Brown et al. (1949), 42, 43, 45
Biermann et al. (2010a), 92, 93, 97 Bugaev and Naumov (1987), 222, 227
Biermann et al. (2010b), 92, 97 Bugaev and Naumov (1989), 223, 227
Biermann et al. (2011), 84, 97 Bunner (1964), 130, 139
Bignami et al. (1975), 226, 227 Bunner (1967), 128, 139
Binns et al. (2008), 92, 97 Bunner et al. (1968), 128, 139
Bird et al. (1995), 130, 138 Burbidge et al. (1957), 57, 69, 74, 97
Bisnovatyi-Kogan (1970), 80, 97 Bykov et al. (2011), 91, 97
Bisnovatyi-Kogan and Moiseenko (2008), 80,
97 C
Bisnovatyi-Kogan et al. (1973), 78, 97 Cao et al. (2011), 174, 183
Bjorklund et al. (1950), 43, 44 Capone et al. (2009), 255, 262
Blackett (1947), 113, 138 Caprioli et al. (2010), 91, 97
Blackett (1948), 106, 138, 149, 183 Caramete and Biermann (2010), 76, 82, 83, 97
Blackett and Occhialini (1932), 106, 138 Caramete et al. (2011), 95, 97
Blackett and Occhialini (1933), 38, 40, 44, Carlson (1986), 222, 227
282, 293 Carlson and Oppenheimer (1937), 111, 112,
Blackmore (1972), 268, 293 139
Blandford and Ostriker (1978), 89, 97 Cartwright (1983), 291, 294
Blandford and Znajek (1977), 80, 97 Casanova et al. (2011), 93, 97
Blau and Wambacher (1937), 42, 44, 285, 293 Cassidy (1981), 282–284, 294
Bloch (1933), 279, 283, 293 Cavaliere and D’Elia (2002), 84, 97
Bloemen (1987), 226, 227 Celotti and Ghisellini (2008), 84, 98
Bloemen (1989), 226, 227 Celotti et al. (2007), 84, 98
Bloemen et al. (1988), 226, 227 Cenko et al. (2010), 80, 98
Blondeau (1998), 253, 262 Chandrasekhar (1931), 79, 98
Blumenthal et al. (1984), 65, 68 Chen (1985), 204, 212
Boesgaard and Steigman (1985), 216, 227 Cherenkov (1934), 149, 183
Bohr (1913), 271, 277, 293 Chi et al. (1992), 156, 183
Bohr (1915), 277, 293 Chou et al. (2010), 77, 98
Bohr (1920), 271, 293 Chudakov et al. (1960), 113, 139
Bohr (1922), 271, 293 Chudakov et al. (1963), 162, 183
Bohr (1927), 271, 275, 293 Clark (2006), 115, 139
Bohr (1949), 275, 293 Clark et al. (1957), 115, 116, 139
Fukui et al. (1960), 113, 117, 139 Gould and Burbidge (1967), 94, 99
Furry (1939), 190, 213 Gould and Schréder (1966), 126, 139, 156, 183
Greene and Ho (2007a), 82, 99
G Greene and Ho (2007b), 82, 99
Gaisser (1991), 89, 94, 98 Greene et al. (2006), 82, 99
Gaisser and Grillo (1987), 222, 228 Greene et al. (2008), 82, 99
Gaisser and Stanev (1984), 222, 228 Greenstein and Zajonc (1997), 270, 294
Gaisser and Stanev (1987), 225, 228 Gregorini et al. (1984), 84, 99
Gaisser et al. (1983), 222, 228 Greisen (1956), 113, 119, 122, 139
Gaisser et al. (1986), 225, 228 Greisen (1960), 119, 122, 139, 232, 262
Gaisser et al. (1988), 222, 228 Greisen (1966), 94, 99
Galbraith (1958), 105, 139 Greisen (1966a), 126, 139
Galbraith and Jelley (1953), 113, 139, 150, 183 Greisen (1966b), 128, 139
Galison (1987), 266, 272, 281–285, 291, 294 Grindlay et al. (1975), 162, 163, 183
Galison (1997), 285, 294 Grunsfeld et al. (1988), 225, 228
Gamow (1970), 53, 69 Guetta et al. (2004), 84, 99
Garcia-Munoz et al. (1987), 93, 98 Gurr et al. (1967), 218, 228
Geiger and Marsden (1913), 268, 294 Guth (1981), 63, 69
Geiger and Müller (1928), 34, 45, 106, 139
Geitel (1900), 20, 45 H
Gentile et al. (2009), 88, 98 H.E.S.S.-Coll. (2011), 79, 99
Genzel et al. (2010), 82, 98 Hagelin et al. (1987), 225, 228
Georgi and Glashow (1974), 217, 219, 228 Halpern (1993), 285, 294
Gergely and Biermann (2009), 83, 98 Halzen (1995), 245, 262
Ghisellini (2004), 84, 98 Halzen (1998), 245, 263
Ghisellini and Tavecchio (2008), 84, 98 Halzen (2010), 93, 99
Ghisellini and Tavecchio (2009), 84, 98 Halzen and Hooper (2005), 252, 263
Ghisellini et al. (1998), 84, 98 Halzen and Learned (1988), 245, 263
Ghisellini et al. (2009a), 84, 98 Hampel et al. (1999), 200, 213
Ghisellini et al. (2009b), 84, 98 Hanasz et al. (2004), 79, 99
Ghisellini et al. (2009c), 84, 98 Hanasz et al. (2009), 79, 99
Ghisellini et al. (2010), 84, 98 Hara et al. (1970), 129, 139
Giacconi et al. (1962), 72, 98 Häring and Rix (2004), 82, 99
Gibbons and Hawking (1977), 86, 98 Harris et al. (2011), 178, 183
Gilfanov and Bogdan (2010), 79, 98 Harwit (1981), 261, 263
Gilliland et al. (1986), 224, 228 Hawking (1971), 85, 99
Gilmore et al. (2007), 81, 87, 88, 98 Hawking (1973), 85, 99
Ginzburg and Syrovatskii (1963), 94, 98 Hawking (1975), 86, 99
Ginzburg and Syrovatskii (1964), 94, 99 Hawking (1976), 85, 99
Gockel (1910), 23, 45 Hegyi and Olive (1989), 224, 228
Gockel (1915), 28, 45 Heisenberg (1930), 271, 272, 275, 276, 294
Gockel and Wulf (1908), 29, 36, 45 Heisenberg (1932), 288, 294
Goeppert-Mayer (1935), 190, 213 Heisenberg (1936), 111, 139
Goldberg (1983), 225, 228 Heitler and Sauter (1933), 279, 294
Goldhaber and Perlmutter (1998), 74, 79, 99 Helfand et al. (2007), 177, 183
Goldhaber et al. (1958), 192, 213 HENAP (2002), 260, 263
Goldstein et al. (1995), 93, 99 Hersil et al. (1961), 120, 139
Gopal-Krishna and Wiita (2001), 79, 99 Hersil et al. (1962), 120, 139
Gopal-Krishna et al. (2010), 92, 95, 99 Hess (1911), 24, 45
Goret et al. (1991), 160, 183 Hess (1912), 24–27, 45, 73, 89, 99, 104, 140,
Gorham et al. (2008), 136, 139 143, 183
Goto et al. (2008), 93, 99 Hess (1913), 24, 45
Goto et al. (2011), 93, 99 Hess (1926), 29, 45
Gould (1987), 224, 228 Hess and Kofler (1917), 28, 45
Langer and Moffat (1952), 192, 213 Maze (1938), 108, 140
Langer et al. (2010), 80, 100 Mazzali et al. (2007), 79, 100
Lattes (1983), 285, 294 Mazzali et al. (2008), 80, 100
Lattes et al. (1947), 42, 43, 46 McCusker and Winn (1963), 123, 140
Learned (1979), 238, 263 Meli and Biermann (2006), 92, 100
Learned et al. (1988), 223, 228 Menon and O‘Ceallaigh (1954), 43, 46
Lee and Bludman (1988), 222, 228 Merck (1993), 154, 184
Lemaître (1927), 51, 52, 69 Merck et al. (1991), 155, 184
Lemoine-Gourmard et al. (2007), 175, 184 Mestel and Roxburgh (1962), 78, 100
Letessier-Selvon and Stanev (2011), 94, 100 Mészáros (2010), 80, 89, 100
Levine et al. (1950), 194, 213 Mészáros and Rees (2010), 80, 100
Linke (1904), 22, 46 Mészáros and Rees (2011), 80, 100
Linsley (1963), 72, 73, 94, 100 Meyer (1990), 215, 218, 228
Linsley (1963a), 121, 137, 140 Miehlnickel (1938), 17, 46
Linsley (1963b), 121, 126, 140 Mikheyev and Smirnov (1986), 203, 213
Linsley (1977), 124, 140 Milgrom (2009), 89, 100
Linsley (1979), 135, 140 Milgrom and Bekenstein (1987), 89, 100
Linsley (1980), 121, 140 Millikan (1911), 267, 294
Linsley and Hillas (1982), 124, 140 Millikan (1917), 267, 295
Linsley et al. (1961), 121, 140 Millikan and Bowen (1926), 28, 46
Liu et al. (2011), 83, 100 Millikan and Cameron (1926), 28–30, 36, 46
Lloyd-Evans et al. (1983), 131, 140, 154, 184 Millikan and Cameron (1928), 146, 184
Lobashev (2003), 193, 213 Mirabel (2006), 173, 184
Loewenstein and Kusenko (2010), 88, 100 Mirabel and Rodrigues (2003), 80, 100
Longair (2006), 194, 213 Mirabel and Rodriguez (1998), 80, 100
Lorentz (1895), 287, 294 Mirabel and Rodriguez (1999), 80, 100
Lorenz (2006), 156, 184 Mirabel et al. (2001), 80, 100
Los Alamos (1997), 192, 213 Mirabel et al. (2011), 76, 80, 100
LoSecco et al. (1987a), 223, 228 Mohapatra and Marshak (1980), 221, 228
LoSecco et al. (1987b), 225, 228 Moiseenko et al. (2006), 80, 100
Lovelace (1976), 94, 100 Møller (1931), 283, 295
Lovelace et al. (2002), 80, 100 Morrison (1958), 157, 184
Lowder et al. (1991), 246, 263 Mott (1929), 272, 276, 295
Lubimov et al. (1980), 192, 213 Mukhanov and Chibisov (1981), 63, 64, 69
Lüst (1952), 80, 100 Müller (1993), 290, 295
Lynden-Bell (1967), 88, 100 Myssowsky and Tuwim (1926), 31, 46
Lynden-Bell and Pringle (1974), 80, 100
N
M Nagano and Watson (2000), 121, 140
Maier et al. (2011), 176, 184 Nath and Biermann (1994), 93, 100
Majorana (1937), 190, 213 Nath and Silk (2009), 93, 100
Mampe et al. (1989), 216, 228 Navarra (2006), 132, 140
Maoz (1998), 82, 100 Neddermeyer and Anderson (1938), 38, 40, 43,
Maraschi et al. (2008), 84, 100 46
Marcaide et al. (1984), 75, 100 NEMO, 254, 263
Markov (1960), 232, 263 Nernst (1921), 36, 46
Markov and Zheleznykh (1961), 233, 263 NESTOR, 246, 263
Marshak et al. (1985), 154, 184 Ng et al. (1987), 225, 228
Mather (2007), 74, 100 Nikolsky (1962), 113, 140
Mather et al. (1990), 66, 69 Nikolsky and Sinitsyna (1989), 160, 184
Matthaeus and Zhou (1989), 93, 100 Nishino et al. (2012), 221, 222, 228
Matthews (2005), 112, 140 Noether (1918), 288, 295
Mayer-Hasselwander and Simpson (1988), Nolan et al. (2012), 171, 184
226, 228 Novikov and Thorne (1973), 80, 101
O Q
Occhialini and Powell (1947), 42, 46 Qian et al. (2010), 79, 101
Olbert (1957), 118, 141 Quinn et al. (1996), 160, 185
Olive and Srednicki (1989), 225, 228 Quinn et al. (1999), 161, 185
Oort (1932), 73, 86, 101
Oppermann et al. (2011), 79, 101 R
Ostriker et al. (2005), 74, 101 Racah (1937), 190, 213
Otmianowska-Mazur et al. (2009), 79, 101 Rachen et al. (1993), 94, 101
Otten and Weinheimer (2008), 193, 213 Ramaty et al. (2000), 93, 101
Ramaty et al. (2001), 93, 101
P Rasetti (1941), 41, 46
Pacini (1910), 22, 46 Reeves (1994), 74, 101
Pacini (1912), 22, 46 Regener (1932a), 30, 46
Pagels and Primack (1982), 224, 228 Regener (1932b), 30, 31, 46
Pais (1986), 187, 213, 265, 281–284, 295
Regener and Ehmert (1938), 107, 141
Panofsky et al. (1950), 43, 46
Regener and Pfotzer (1935), 106, 141
Parikh and Wilczek (2000), 86, 101
Regis et al. (2012), 222, 228
Parker (1966), 77, 93, 101
Reimer et al. (2005), 252, 263
Parker (1969), 78, 101
Reines (1960), 232, 263
Parker (1970a), 78, 101
Reines (1981), 233, 263
Parker (1970b), 78, 101
Reines and Cowan (1953), 191, 213
Parker (1970c), 78, 101
Reines and Crouch (1974), 218, 228
Parsignault et al. (1976), 154, 184
Reines et al. (1954), 218, 228
Pati and Salam (1973), 217, 219, 228
Paul et al. (1989), 216, 228 Reines et al. (1965), 234, 263
Pauli (1931), 189, 213 Remillard and Levine (1997), 161, 185
Pauli (1961), 189, 213 Reno and Quigg (1988), 225, 228
Peebles (1982), 64, 65, 69 Resvanis et al. (1994), 247, 252, 263
Peebles and Yu (1970), 62, 69 Richardson (1906), 36, 46
Penzias and Wilson (1965), 60, 69, 72, 74, 94, Richardson (1948), 43, 46
101, 126, 141, 156, 185 Richman and Wilcox (1950), 43, 46
Perkins (1947), 42, 43, 46 Rickett (1977), 93, 101
Perlmutter et al. (1999), 68, 69 Riess et al. (1998), 67, 70, 74, 79, 101
Perrin (1909), 268, 295 Riordan (1987), 215, 228, 265, 295
Peters (1961), 113, 132, 133, 141 Ripamonti and Abel (2005), 76, 101
Pfeffermann and Aschenbach (1996), 167, 185 Ritz and Seckel (1988), 225, 228
Pfotzer (1936), 112, 141 Roberts (1977), 235, 263
Phillips et al. (1989), 220, 228 Roberts (1992), 235, 237, 239, 240, 263
Piccard et al. (1932), 30, 46 Roberts and Wilkins (1978), 236, 238, 263
Pickering (1984), 265, 295 Rochester and Butler (1947), 43, 46
Piran (2004), 80, 101 Rollinde et al. (2009), 75, 101
Planck-Coll. (2011), 82, 101 Rosenband et al. (2008), 77, 101
Plotkin et al. (2012), 81, 101 Rossi (1930), 106, 141
Pontecorvo (1946), 196, 213 Rossi (1933), 106, 107, 141
Portegies Zwart et al. (2004), 82, 101 Rossi (1934), 107, 141
Porter et al. (1958), 114, 141 Rossi (1952), 274, 278–280, 283, 285, 287,
Powell (1950), 44, 46 295
Powell et al. (1959), 285, 295 Rossi (1983), 44, 46, 283, 295
Prandtl (1925), 91, 101 Rossi (1985), 106, 141
Prantzos (1984), 89, 101 Rossi (1990), 265, 272, 283, 295
Prantzos (1991), 89, 101 Rossi and Greisen (1941), 115, 141
Rutherford et al. (1930), 268, 273, 277, 281, Steenbeck and Krause (1965), 78, 102
287, 295 Steenbeck and Krause (1966), 78, 102
Ryle and Clarke (1961), 59, 70 Steenbeck et al. (1966), 78, 102
Ryu et al. (1998), 79, 93, 101 Steenbeck et al. (1967), 78, 102
Ryu et al. (2008), 79, 93, 101 Steinmetz et al. (2008), 77, 102
Stepanian et al. (1983), 158, 185
S Strigari et al. (2008), 88, 102
Sacharov (1967), 219, 228 Strohmaier and Rosner (2006), 285, 295
Sachs and Wolfe (1967), 62, 70 Suga (1962), 128, 141
Salazar (2009), 173, 185 Suga et al. (1961), 120, 123, 141
Saltzberg et al. (2001), 136, 141 Sun and Malkan (1989), 80, 102
Samorski, 155, 185 Sunyaev and Zeldovich (1970), 62, 70
Samorski and Stamm (1983), 130, 141, Svoboda et al. (1987), 234, 263
153–155, 185
Sanders (1970), 82, 101 T
Sanders (2008), 89, 101 Taiuti et al. (2011), 256, 263
Sanders and Mirabel (1996), 81, 101 Takita et al. (1986), 221, 228
Savage et al. (2011), 89, 101 Thomson (1897), 287, 295
Schlickeiser (2002), 89, 101 Thomson (1899), 267, 295
Schmeiser and Bothe (1938), 107, 108, 141 Totsuka (1989), 225, 228
Schmidt (1963), 72, 101 Townsend (1897), 267, 295
Schödel et al. (2009), 82, 101 Trigg (1971), 268, 269, 295
Schrödinger (1927), 271, 295 Trimble (1987), 216, 224, 228
Schwarzschild (1916), 82, 101 Turner and Wilczek (1989), 225, 228
Schweber (1994), 282, 283, 295 Tylka (1989), 225, 229
Science (1925), 28, 46
U
Seaborg et al. (1953), 217, 218, 228
Uson and Wilkinson (1984), 64, 70
Segre (1952), 217, 228
Usoskin and Kovaltsov (2006), 125, 141
Seidel et al. (1988), 220, 228
Sekido and Elliot (1985), 17, 46 V
Shakura and Sunyaev (1973), 80, 101 Van Aller et al. (1983), 239, 263
Shapiro and Teukolsky (1983), 79, 80, 101 Vandenbroucke (2010), 173, 185
Silberberg and Tsao (1990), 92, 101 Vazza et al. (2011), 79, 102
Silk (1968), 62, 70 Vietri (1995), 80, 95, 102
Silk and Rees (1998), 82, 101 Vladimirsky et al. (1989), 160, 185
Silk and Takahashi (1979), 83, 101 Völk and Biermann (1988), 92, 93, 102
Silk et al. (1985), 224, 228 Volkova (1980), 222, 229
Simons (1987), 268, 295 Von Neumann (1932), 275, 295
Simpson and Wright (1911), 22, 31, 46 Von Schweidler (1915), 36, 46
Sinnis (2009), 168, 185
Skobeltzyn (1927), 34, 38, 47, 106, 141 W
Skobeltzyn (1929), 34, 43, 47, 106, 141 Wagner (2008), 160, 185
Skobeltzyn (1985), 273, 295 Wald (1997), 85, 102
Skobeltzyn et al. (1947), 110, 141 Wang and Biermann (1998), 79, 82, 102
Slipher (1917), 51, 70 Wang et al. (2008), 92, 95, 102
Smoluchowski (1916), 83, 101 Watson (1985), 154, 185
Smoot (2007), 74, 101 Waxman (1995), 80, 95, 102
Smoot et al. (1992), 66, 70 Waxman and Bahcall (1999), 259, 263
Sokalski and Spiering (1992), 242, 263 Weekes (1983), 158, 185
Sokolsky and Thomson (2007), 134, 141 Weekes (1988), 226, 229
Spangler and Gwinn (1990), 93, 101 Weekes (2003), 148, 185
Stanev (2010), 89, 94, 102 Weekes and Turver (1977), 157, 185
Stanev et al. (1993), 73, 92, 102 Weekes et al. (1989), 144, 157, 158, 185, 226,
Stecker (2005), 259, 263 229, 238, 263
Weibel (1959), 72, 78, 91, 102 Woo et al. (2010), 83, 102
Weinberg (1972), 87, 102 Woosley and Weaver (1986), 80, 102
Weizsäcker (1937), 196, 213 Woosley et al. (2002), 74, 80, 102
Weizsäcker (1938), 196, 213 Wu et al. (1957), 192, 213
Wheaton (1983), 269, 295 Wulf (1909a), 21, 47
Wheeler and Zurek (1983), 293, 295 Wulf (1909b), 22, 47
Wick et al. (1949), 217, 229 Wulf (1910), 22, 23, 31, 47
Wiebel-Sooth et al. (1998), 92, 102 Wyithe and Loeb (2003), 83, 102
Wigner (1939), 288, 295 Wyse and Gilmore (2008), 87, 102
Wigner (1952), 217, 229
Williams and Roberts (1940), 41, 47 Y
Wilson (1900), 20, 47 Yang et al. (1984), 216, 229
Wilson (1901), 35, 47 York et al. (1953), 43, 47
Wilson (1911), 33, 47 Yukawa (1935), 41, 47, 288, 295
Wilson (1912), 33, 34, 47 Yukawa et al. (1938), 42, 47
Wilson (1952), 124, 141 Yungelson et al. (2008), 82, 102
Winn et al. (1986a), 123, 141
Winn et al. (1986b), 123, 141 Z
Wirtz (1922), 51, 70 Zatsepin and Kuz’min (1966), 94, 102
Wischnewski et al. (1993), 243, 263 Zatsepin and Kuzmin (1966), 126, 127, 141
Wise and Abel (2008a), 76, 82, 102 Zhang et al. (2008), 76, 83, 102
Wise and Abel (2008b), 76, 82, 102 Zheleznykh (2006), 232, 233, 263
Wolfenstein (1978), 203, 213 Zwicky (1933), 73, 86, 102
Woltjer (1958), 71, 102 Zwicky (1937), 86, 102
Name Index
P S
Pacini, D., 22, 26, 298 Sacharov, A., 12, 215, 217, 219
Parker, E.N., 77, 78 Sachs, R.K., 62, 302
Parsons, W., 3rd Earl of Rosse, 297 Sakata, S., 42, 302
Pauli, W., 11, 37, 189, 299, 300, 306 Salam, A., 307
Pease, F.G., 299 Saltzberg, D., 136
Peebles, J., 60, 62, 64, 65, 302 Samorski, M., 130, 303
Peierls, R., 190, 300 Sandage, A., 302
Penzias, A.A., 60, 61, 68, 72, 74, 126, 156, Scarsi, L., 122
302, 307 Schatz, G., 132
Perkins, D., 42 Schawlow, A.L., 307
Perl, M.L., 303, 308 Scherb, F., 115
Perlmutter, S., 68, 79, 304, 308 Schmeiser, K., 107, 108
Perrin, J., 268, 298 Schmidt, B.P., 67, 68, 308
Peters, B., 113 Schmidt, M., 302
Pfotzer, J.G., 106, 111, 300 Schréder, G., 126, 156, 302
Pickering, E.C., 195 Schrödinger, E., 271, 299, 302, 306
Planck, M.K.E.L., 5, 18, 297, 305 Schwartz, M., 212, 308
Pontecorvo, B., 196 Schwarzschild, K., 82, 298
Popper, K., 7, 291 Schwinger, J., 301, 307
Porter, N.A., 114, 301 Scott, R., 22
Powell, C.F., 42, 44, 285, 306 Segré, E., 306
Price, B., 246 Seyfert, C.K., 301
Primack, J., 65 Shapley, H., 50, 299
V Z
Vacanti, G., 303 Zas, E., 245
Van Allen, J., 302
Van der Meer, S., 308 Zeitnitz, B., 132
Van Vleck, J.H., 307 Zeldovich, Y., 62, 302
Vernov, S.N., 113 Zeller, E., 245
Villard, P.U., 297 Zheleznykh, I., 232
Vladimirsky, B.M., 158 Zuber, K., 11
Von Baeyer, A., 188 Zweig, G., 302
Von Jolly, P., 18 Zwicky, F., 37, 65, 73, 77, 86, 94, 300
Subject Index
Cosmic rays, 1, 3, 7–10, 13, 23, 26, 27, 29, 30, Dirac particles, 282
34, 36, 37, 72, 73, 75, 77, 81, 89, 90, Dirac-νD and s-neutrino νe , νμ type, 225
92–95, 103–105, 113, 119, 120, 127, Dirac’s theory, 279
128, 131–135, 137, 143, 145, 146, 162, Discharge tube, 18
169, 215, 218, 219, 225, 231, 260, 266, Discovery, 19, 36
269, 271–273, 277, 278, 280, 282, 283, Discovery of cosmic rays, 18
285, 290–292 DMR, 66
Cosmological constant, 50, 51, 53, 59, 67, 68 DOM, 256, 257
Cosmological inflation, 63 DONUT, 192
Cosmological principle, 7 Doppler shift, 51
Cosmological principle, perfect form of, 8, 58 DUMAND, 14, 231, 235, 237, 240, 241, 244,
Cosmological standard model, 66, 67 245, 247, 248, 260
Cosmology, 5 DUMAND-II, 238, 239, 247
Counters, 43 Dwarf galaxy, 179
Coupling constants, 217, 218
CP violation, 219, 290 E
Crab Nebula, 71, 157–160, 165, 168, 169, 171, e+ e− -annihilation, 216
175, 176, 180, 226, 232, 233, 238, 261 Earth, age of, 53
Crab pulsar, 171 Earth magnetic field, 32, 148
CREAM, 92 EAS, 111, 126, 132, 135
Crimean GT48, 160 EAS modelling, 125
Crimean multi-telescope, 162 EAS simulations, 125
Critical rationalism, 7 EAS-1000, 120
CTA, 173 EAS-TOP, 120, 131–133
Culham, 113, 114, 121 Effective scattering length, 249
Culham array, 121 EGRET, 171
CW group, 234 Einstein’s cylindric world, 50
CWI, 13, 233 Electrical earth field, 22
Cygnus X-3, 13, 130, 131, 153–155, 170, 238 Electricity of the atmosphere, 21, 22
Cylindric world, 50 Electromagnetic and hadronic showers, 151,
157
D Electromagnetic cascades, 111, 112, 115, 125
Dark accelerators, 176 Electromagnetic radiation, 270
Dark energy, 3, 8, 72, 74, 75, 77, 79, 81 Electromagnetic shower, 112, 119, 148, 153,
Dark matter, 8, 14, 55, 64–67, 71–77, 81, 155, 283
86–89, 179, 216, 224, 225, 251, 257 Electromagnetic spectrum, 144
Dark matter and energy, 3 Electromagnetic theory, 41
Dark matter particle mass, 88 Electromagnetic waves, 266, 269, 272
Dark matter particle neutrino, 12 Electron, 19, 30, 37, 92, 104, 110–112, 117,
De Sitter model, 59 145, 147, 152, 188–190, 203, 216, 221,
Decaying muon, 41 266–271, 273, 280, 281, 284
Deep Underwater Muon and Neutrino Electron mass, 41, 42, 189
Detector, 235 Electron neutrino, 220–223
DeepCore, 257 Electron scattering, 201
DeepIce, 256 Electron track, 41, 128, 281
Density fluctuations, 64 Electron-, muon- and tau-neutrinos, 216
Density of ionization, 280 Electron- and tau-neutrino, 258
DESY, 4, 249 Electron–photon, 284
Detectors, 26 Electron–positron pair production, 126
Differential microwave radiometer, 66 Electron–positron pairs, 39
Diffuse emission, 226 Electronic circuit, 35
Diffuse flux, 251, 258, 259 Electroscope, 6, 19–24, 28–35, 43
Digital optical modules, 256 Elementary particle physics, 43
Dirac equation, 282, 283 Emulsion plates, see nuclear emulsions, 119
Grenoble, 216 IceCube, 14, 93, 133, 210, 231, 252, 254,
GUT, 218, 222 256–261
GVD, 260, 261 IceTop, 133
GZK, 9, 94, 126, 127, 130, 133–135, 137 Illuminating gas, 24
Image parameterization, 158
H Imaging camera, 157
H.E.S.S., 79, 166, 167, 169–171, 173, 174, 180 IMB, 14, 202, 209, 220, 221, 234, 235
Hadron, 152, 169 Inflation, period of, 63
Hadron showers, 158 Infrared background, 11
Hadronic absorption lengths, 148 Infrared (IR) photons, 156
Hadronic particles, 110 Inner neutrino radiation, 238
Hadronic primaries, 112 INS, 116, 120, 128, 132
Hadronic processes, 145, 179 Interstellar medium, 77
Hadronic showers, 148, 152 Inverse Compton scattering, 145, 178, 260
HALO, 210 Ionization, 19–32, 35, 36, 38–44, 89, 108, 136,
Harwell air shower array, 150 147, 187, 272, 275–280, 283, 284, 289
Haverah Park, 114, 122, 123, 131, 137 Ionization degree, 273, 281
Haverah Park array, 121 Ionization density, 273
HAWC, 173 Ionization dependence, 32
Heavy neutrinos, 225 Ionization measurements, 22, 30
HEGRA, 131, 154, 155, 160, 161, 163, 164, Ionization of gases, 20
251 Ionization of the atmosphere, 23
HEGRA Observatory, 163 Ionization rate, 25, 27, 31
Helium, 55 Ionizing radiation, 19
HEP experiments, 151 Ionizing rays, 33
HESS J1303-631/PSR J1301-6305, 176 Irvine-Michigan-Brookhaven detector, 202
Hidden sources, 238 ‘Island Universe’ theory, 51
High altitudes, 35 Isospin, 286, 288, 291
High energy gamma ray astronomy, 131
High energy neutrino astronomy, 13, 231 J
High energy neutrinos, 225, 231, 232, 260 J0632+057, 176
High energy particles, 89
High energy photon, 126, 226 K
High energy physics, 146, 148, 216 K⁺-, K⁻-meson, 43
High mass stars, 80 KL⁰-meson, 43
High-z Supernova Search Team, 67 Kamioka Nucleon Decay Experiment, 202
Highest-energy cosmic rays, 105 Kamiokande, 14, 202, 208, 220, 221, 223, 235,
Hillas parameterization, 158 244
HiRes, 130, 133–135, 137 Kamiokande II, 202, 209, 220, 221, 223
HiRes II, 133 KamLAND, 206, 210
Hoffmann bursts, 111 KamLAND-Zen, 194, 211
Höhenstrahlung, 17, 27, 29 Kaons, 137
Homestake, 198, 199, 204, 205, 208, 212 KASCADE, 113, 120, 125, 131–133, 136
Homestake Gold Mine, 198 KASCADE-Grande, 120, 133, 137
Horizon problem, 63, 66 Kaskade array, 92
Hubble constant, 53, 54 Kaskade-Grande, 92
Hubble expansion, 8, 75 KATRIN, 193, 211
Hubble time, 53 KDF9, 122
Hubble’s law, 52 Kepler supernova in 1604, 208
Hydrogen filled balloon, 25 KGF, 13, 234
Hyperons, 137 Kiel, 155
Kiel detector, 13, 153
I KM3NeT, 253, 260, 261
IBM, 215 Knee, 105, 132, 133, 137
Muon, 40–43, 104, 110, 111, 116, 117, 119, Neutron stars, 79, 80, 83, 175, 176
132, 137, 149, 152, 169, 216, 218, 233, Newtonian, 7
234, 237, 251, 273, 284, 285 Newtonian gravitation, 227
Muon content, 118 Newtonian gravity, 49
Muon detection, 135 NGC 253, 178
Muon detectors, 120, 121, 124 Night sky background light, 149
Muon measurements, 125 NKG, 119
Muon neutrino, 220, 222, 223, 258 NKG function, 119
Muon neutrino interactions, 222 nn oscillations, 221
Muon neutrinos, 221 Non-accelerator physics, 12, 215, 216
Muon number, 117 Non-baryonic dark matter, 224
Muon spectrometer, 121 Non-accelerator experiments, 12, 215
Muon versus electron number, 116 NT200, 231, 242–244, 260
MWBG, 74 Nuclear emulsions, 104, 266, 280, 285, 288,
291
N Nuclear physics, 4, 5, 19, 44, 115, 195
νe –νμ and νμ –ντ oscillations, 223 Nuclei, 94
Narrabri, 160 Nucleon decay, 217, 221
NEMO, 246, 252, 254 Nucleon decay detectors, 223
NESTOR, 246, 247, 252–255, 260 Nucleons, 117
Neutral particles, 144 Nucleus, 37, 41, 42, 188
Neutralinos, 179, 225 Nucleus–nucleus collisions, 93
Neutrino astronomy, 4, 145, 231, 237, 248, Number of different light neutrinos, 216
259, 261 Numerical simulations, 65
Neutrino component, 76 NUSEX, 220, 223, 224
Neutrino detection, 191, 196
Neutrino emission, 84 O
Neutrino energy spectra, 206, 222 Operational particle concept, 272
Neutrino events, 222 Optical modules, 242, 243, 247–249, 253
Neutrino flavors, 216 Origin of magnetic fields, 81
Neutrino flux, 222, 225, 232, 238, 252 Origin of the elements, 54
Neutrino interactions, 232 Orphan flare, 251
Neutrino mass, 11, 12, 187, 190, 192, 210 Oscillation results, 206
Neutrino mass limit, 193, 209 Oscillations, 63, 89, 203, 223
Neutrino mass searches, 192 Cosmic ray flux, 134
Neutrino oscillation, 12, 196, 202, 204, 210,
212, 221–223, 233, 244, 247, 258, 290 P
Neutrino physics, 196 π 0 decay, 226
Neutrino searches, 125 π 0 production, 226
Neutrino sky map, 234 π ± - and μ± -mass, 43
Neutrino source, 11 π ± -lifetime, 43
Neutrino telescope, 201, 241, 249, 286 π o -mass, 43
Neutrino tracing, 237 π o -meson, 43
Neutrino–electron scattering, 200, 204 P- and CP-violations, 215, 217
Neutrino-less double beta decay, 12, 190, 192 Pair creation, 274, 277–279, 282, 283, 287,
Neutrinos, 37, 65, 80, 88, 93, 134, 137, 144, 289
145, 187, 190–193, 196–198, 201, 203, Pamir mountains, 110, 113
206–208, 210, 216, 218, 219, 221, 225, Parity, 286, 288, 291
231, 234, 235, 260, 270, 284, 292, 338 Parity violation, 192
Neutrinos emission, 232 Particle, 105, 266, 267, 270–272, 277, 287,
Neutrinos to antineutrinos, 222 290
Neutron, 37, 104, 189–191, 196, 216 Particle acceleration, 13, 71, 74, 176–178
Neutron capture, 56 Particle accelerator, 3, 4, 8, 17, 43, 285, 289
Neutron lifetime, 216 Particle aspect, 269, 290
Y Z
Yakutsk array, 113, 120, 122 Z0 particle, 216