Physics
I INTRODUCTION
Physics, major science dealing with the
fundamental constituents of the universe, the forces they exert on one another,
and the results produced by these forces. Sometimes in modern physics a more
sophisticated approach is taken that incorporates elements of the three areas
listed above; it relates to the laws of symmetry and conservation, such as
those pertaining to energy, momentum, charge, and parity. See Atom;
Energy.
See also separate articles on the different aspects
of physics and the various sciences mentioned in this article.
II SCOPE OF PHYSICS
Physics is closely related to the other natural
sciences and, in a sense, encompasses them. Chemistry, for example, deals with
the interaction of atoms to form molecules; much of modern geology is essentially a study of the physics of the earth and is known as geophysics; and astronomy
deals with the physics of the stars and outer space. Even living systems are
made up of fundamental particles and, as studied in biophysics and biochemistry,
they follow the same types of laws as the simpler particles traditionally
studied by a physicist.
The emphasis on the interaction between
particles in modern physics, known as the microscopic approach, must often be
supplemented by a macroscopic approach that deals with larger elements or
systems of particles. This macroscopic approach is indispensable to the
application of physics to much of modern technology. Thermodynamics, for
example, a branch of physics developed during the 19th century, deals with the
elucidation and measurement of properties of a system as a whole and remains
useful in other fields of physics; it also forms the basis of much of chemical
and mechanical engineering. Such properties as the temperature, pressure, and
volume of a gas have no meaning for an individual atom or molecule; these
thermodynamic concepts can only be applied directly to a very large system of
such particles. A bridge exists, however, between the microscopic and
macroscopic approach; another branch of physics, known as statistical
mechanics, indicates how pressure and temperature can be related to the motion
of atoms and molecules on a statistical basis (see Statistics).
Physics emerged as a separate science only in
the early 19th century; until that time a physicist was often also a
mathematician, philosopher, chemist, biologist, engineer, or even primarily a
political leader or artist. Today the field has grown to such an extent that
with few exceptions modern physicists have to limit their attention to one or
two branches of the science. Once the fundamental aspects of a new field are
discovered and understood, they become the domain of engineers and other
applied scientists. The 19th-century discoveries in electricity and magnetism,
for example, are now the province of electrical and communication engineers;
the properties of matter discovered at the beginning of the 20th century have
been applied in electronics; and the discoveries of nuclear physics, most of
them not yet 40 years old, have passed into the hands of nuclear engineers for
applications to peaceful or military uses.
III EARLY HISTORY OF PHYSICS
Although ideas about the physical world date from
antiquity, physics did not emerge as a well-defined field of study until early
in the 19th century.
A Antiquity
The Babylonians, Egyptians, and early Mesoamericans
observed the motions of the planets and succeeded in predicting eclipses, but
they failed to find an underlying system governing
planetary motion. Little was added by the Greek civilization, partly because
the uncritical acceptance of the ideas of the major philosophers Plato and
Aristotle discouraged experimentation.
Some progress was made, however, notably in
Alexandria, the scientific center of Greek
civilization. There, the Greek mathematician and inventor Archimedes designed
various practical mechanical devices, such as levers and screws, and measured
the density of solid bodies by submerging them in a liquid. Other important
Greek scientists were the astronomer Aristarchus of Sámos, who measured the ratio of the distances from the
earth to the sun and the moon; the mathematician, astronomer, and geographer
Eratosthenes, who determined the circumference of the earth and drew up a catalog of stars; the astronomer Hipparchus,
who discovered the precession of the equinoxes (see Ecliptic); and the
astronomer, mathematician, and geographer Ptolemy, who proposed the system of
planetary motion that was named after him, in which the earth was the center and the sun, moon, and stars moved around it in
circular orbits (see Ptolemaic System).
B Middle Ages
Little advance was made in physics, or in any other
science, during the Middle Ages, other than the
preservation of the classical Greek treatises, for which the Arab scholars such
as Averroës and Al-Quarashi,
the latter also known as Ibn al-Nafīs,
deserve much credit. The founding of the great medieval universities by
monastic orders in Europe, starting in the 13th century, generally failed to
advance physics or any experimental investigations. The Italian Scholastic
philosopher and theologian Saint Thomas Aquinas, for instance, attempted to
demonstrate that the works of Plato and Aristotle were consistent with the
Scriptures. The English Scholastic philosopher and scientist Roger Bacon was
one of the few philosophers who advocated the experimental method as the true
foundation of scientific knowledge and who also did some work in astronomy,
chemistry, optics, and machine design.
C 16th and 17th Centuries
The advent of modern science followed the Renaissance
and was ushered in by the highly successful attempt by four outstanding
individuals to interpret the behavior of the heavenly
bodies during the 16th and early 17th centuries. The Polish natural philosopher
Nicolaus Copernicus propounded the heliocentric system, in which the planets move around the sun. He was convinced, however, that
the planetary orbits were circular, and therefore his system required almost as
many complicated elaborations as the Ptolemaic system it was intended to
replace (see Copernican System). The Danish
astronomer Tycho Brahe,
believing in the Ptolemaic system, tried to confirm it by a series of
remarkably accurate measurements. These provided his assistant, the German
astronomer Johannes Kepler, with the data to
overthrow the Ptolemaic system and led to the enunciation of three laws that
conformed with a modified heliocentric theory. Galileo, having heard of the
invention of the telescope, constructed one of his own and, starting in 1609,
was able to confirm the heliocentric system by observing the phases of the
planet Venus. He also discovered the surface irregularities of the moon, the
four brightest satellites of Jupiter, sunspots, and many stars in the Milky
Way. Galileo's interests were not limited to astronomy; by using inclined
planes and an improved water clock, he had earlier demonstrated that bodies of
different weight fall at the same rate (thus overturning Aristotle's dictums),
and that their speed increases uniformly with the time of fall. Galileo's
astronomical discoveries and his work in mechanics foreshadowed the work of the
17th-century English mathematician and physicist Sir Isaac Newton, one of the
greatest scientists who ever lived.
IV NEWTON AND MECHANICS
Starting about 1665, at the age of 23, Newton
enunciated the principles of mechanics, formulated the law of universal
gravitation, separated white light into colors,
proposed a theory for the propagation of light, and
invented differential and integral calculus. Newton's contributions covered an
enormous range of natural phenomena: He was thus able to show that not only Kepler's laws of planetary motion but also Galileo's discoveries about falling bodies follow from a combination of his own second law of motion and the law of gravitation, and to predict the appearance of comets,
explain the effect of the moon in producing the tides, and explain the
precession of the equinoxes.
A The Development of Mechanics
The subsequent development of physics owes much to
Newton's laws of motion (see Mechanics), notably the second, which states that the force acting on a body is equal to the product of its mass and its acceleration (F = ma). If the force and the initial position and velocity of a body are given, subsequent positions and velocities can be computed, even when the force varies with time or position; in such cases, Newton's calculus must be applied. This simple law
contained another important aspect: Each body has an inherent property, its
inertial mass, which influences its motion. The greater this mass, the slower
the change of velocity when a given force is impressed. Even today, the law
retains its practical utility, as long as the body is not very small, not very
massive, and not moving extremely rapidly. Newton's third law, expressed simply
as “for every action there is an equal and opposite reaction,” recognizes, in
more sophisticated modern terms, that all forces between particles come in
oppositely directed pairs, although not necessarily along the line joining the
particles.
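To make the computational idea concrete, the following Python sketch steps a body forward in time under Newton's second law; the mass, force, and time step are illustrative values not taken from the text, and a simple Euler update stands in for Newton's calculus.

    # Stepwise solution of F = m*a for a constant force (illustrative values only)
    mass = 2.0        # kg, assumed
    force = 10.0      # N, assumed constant; a time- or position-dependent force could be used instead
    x, v = 0.0, 0.0   # initial position (m) and velocity (m/s)
    dt = 0.01         # time step, s

    for _ in range(1000):         # advance 10 seconds in small steps
        a = force / mass          # acceleration from the second law
        v = v + a * dt            # new velocity
        x = x + v * dt            # new position

    print(f"after 10 s: x = {x:.2f} m, v = {v:.2f} m/s")
    # exact result for a constant force: v = (F/m)*t = 50 m/s, x = 0.5*(F/m)*t**2 = 250 m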
B Gravity
Newton's more specific contribution to the description
of the forces in nature was the elucidation of the force of gravity. Today
scientists know that in addition to gravity only three other fundamental forces
give rise to all observed properties and activities in the universe: those of
electromagnetism, the so-called strong nuclear interactions that bind together
the neutrons and protons within atomic nuclei, and the weak interactions
between some of the elementary particles that account for the phenomenon of
radioactivity. Understanding of the force concept, however, dates from the
universal law of gravitation, which recognizes that all material particles, and
the bodies that are composed of them, have a property called gravitational
mass. This property causes any two particles to exert attractive forces on each
other (along the line joining them) that are directly proportional to the
product of the masses, and inversely proportional to the square of the distance
between the particles. This force of gravity governs the motion of the planets about the sun, gives rise to the earth's own gravitational field, and is also responsible for gravitational collapse, which may be the final stage in the life cycle of stars. See Black Hole; Gravitation; Star.
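As a numerical illustration of this inverse-square law, the short Python sketch below evaluates the gravitational attraction between the sun and the earth; the constant G and the masses and distance used are standard reference values, not figures quoted in this article.

    # Newton's law of universal gravitation: F = G * m1 * m2 / r**2
    G = 6.674e-11          # gravitational constant, N*m^2/kg^2
    m_sun = 1.989e30       # kg
    m_earth = 5.972e24     # kg
    r = 1.496e11           # mean earth-sun distance, m

    force = G * m_sun * m_earth / r**2
    print(f"sun-earth gravitational force: {force:.2e} N")   # roughly 3.5e22 N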
One of the most important observations of
physics is that the gravitational mass of a body (which is the source of one of
the forces existing between it and another particle), is effectively the same
as its inertial mass, the property that determines the motional response to any
force exerted on it (see Inertia). This equivalence, now confirmed
experimentally to within one part in 10¹³, holds in the sense of
proportionality—that is, when one body has twice the gravitational mass of
another, it also has twice the inertial mass. Thus, Galileo's demonstrations,
which antedate Newton's laws, that bodies fall to the ground with the same
acceleration and hence with the same motion, can be explained by the fact that
the gravitational mass of a body, which determines the forces exerted on it,
and the inertial mass, which determines the response to that force, cancel out.
The full significance of this equivalence between
gravitational and inertial masses, however, was not appreciated until Albert
Einstein, the theoretical physicist who enunciated the theory of relativity,
saw that it led to a further implication: the inability to distinguish between
a gravitational field and an accelerated frame of reference (see the Modern
Physics: Relativity section of this article).
The force of gravity is the weakest of the
four forces of nature when elementary particles are considered. The
gravitational force between two protons, for example, which are among the
heaviest elementary particles, is at any given distance only 10⁻³⁶
the magnitude of the electrostatic forces between them, and for two such
protons in the nucleus of an atom, this force in turn is many times smaller
than the strong nuclear interaction. The dominance of gravity on a macroscopic
scale is due to two reasons: (1) Only one type of mass
is known, which leads to only one kind of gravitational force, which is
attractive. The many elementary particles that make up a large body, such as
the earth, therefore exhibit an additive effect of their gravitational forces
in line with the addition of their masses, which thus become very large. (2)
The gravitational forces act over a large range, and decrease only as the
square of the distance between two bodies.
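The 10⁻³⁶ figure quoted above can be checked directly, since the distance cancels out of the ratio of two inverse-square forces. A minimal Python sketch, using standard values for the constants:

    # Ratio of gravitational to electrostatic force between two protons
    G = 6.674e-11      # N*m^2/kg^2
    k = 8.988e9        # Coulomb constant, N*m^2/C^2
    m_p = 1.673e-27    # proton mass, kg
    e = 1.602e-19      # proton charge, C

    ratio = (G * m_p**2) / (k * e**2)
    print(f"gravity / electrostatic = {ratio:.1e}")   # about 8e-37, i.e., of order 10^-36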
By contrast, the electric charges of elementary
particles, which give rise to electrostatic and magnetic forces, are either
positive or negative, or absent altogether. Only particles with opposite
charges attract one another, and large composite bodies therefore tend to be
electrically neutral and inactive. On the other hand, the nuclear forces, both
strong and weak, are extremely short range and become hardly noticeable at
distances of the order of 1 million-millionth of an inch.
Despite its macroscopic importance, the force of gravity
remains so weak that a body must be very massive before its influence is
noticed by another. Thus, the law of universal gravitation was deduced from
observations of the motions of the planets long before it could be checked
experimentally. Not until 1798 did the British physicist and chemist Henry Cavendish confirm it, by using large spheres of lead to attract small masses attached to a torsion balance; from these measurements he also deduced the density of the earth.
In the two centuries after Newton, although
mechanics was analyzed, reformulated, and applied to complex systems, no new
physical ideas were added. The Swiss mathematician Leonhard
Euler first formulated the equations of motion for rigid bodies, while Newton
had dealt only with masses concentrated at a point, which thus acted like
particles. Various mathematical physicists, among them Joseph Louis Lagrange of France and Sir William Rowan Hamilton of Ireland, extended Newton's second law in more sophisticated and elegant reformulations. Over the same period, Euler,
the Dutch-born scientist Daniel Bernoulli, and other scientists also extended
Newtonian mechanics to lay the foundation of fluid mechanics.
C Electricity and Magnetism
Although the ancient Greeks were aware of the
electrostatic properties of amber, and the Chinese as early as 2700 BC made crude
magnets from lodestone, experimentation with and the understanding and use of
electric and magnetic phenomena did not occur until the end of the 18th
century. In 1785 the French physicist Charles Augustin
de Coulomb first confirmed experimentally that electrical charges attract or
repel one another according to an inverse square law, similar to that of
gravitation. A powerful theory to calculate the effect of any number of static
electric charges arbitrarily distributed was subsequently developed by the
French mathematician Siméon Denis Poisson and the
German mathematician Carl Friedrich Gauss.
A positively charged particle attracts a negatively
charged particle, tending to accelerate one toward the other. If the medium through which the particle moves offers resistance to that motion, the motion may be reduced to one of constant velocity (rather than continuing acceleration), and the medium will be heated and may also be otherwise affected. The
ability to maintain an electromotive force that could continue to drive
electrically charged particles had to await the development of the chemical
battery by the Italian physicist Alessandro Volta in 1800. The classical theory
of a simple electric circuit assumes that the two terminals of a battery are
maintained positively and negatively charged as a result of its internal
properties. When the terminals are connected by a wire, negatively charged
particles will be simultaneously pushed away from the negative terminal and
attracted to the positive one, and in the process heat up the wire that offers
resistance to the motion. When the particles arrive at the positive terminal, the battery will force them back toward the negative terminal, overcoming the opposing forces of Coulomb's law. The German physicist Georg
Simon Ohm first discovered the existence of a simple proportionality constant
between the current flowing and the electromotive force supplied by a battery,
known as the resistance of the circuit. Ohm's law, which states that the
resistance is equal to the electromotive force, or voltage, divided by the
current, is not a fundamental and universally applicable law of physics, but
rather describes the behavior of a limited class of
solid materials. See Electric Circuit.
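A minimal worked example of Ohm's law as stated here, with an assumed battery voltage and circuit resistance (the numbers are illustrative only):

    # Ohm's law: current = voltage / resistance; the dissipated power heats the wire
    voltage = 12.0      # volts, assumed electromotive force of the battery
    resistance = 6.0    # ohms, assumed resistance of the circuit

    current = voltage / resistance        # amperes
    heat_rate = voltage * current         # watts dissipated in the resistance
    print(f"current = {current:.1f} A, heat dissipated = {heat_rate:.1f} W")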
The historical concepts of magnetism, based on the existence of pairs of opposite poles, had started in the 17th century and owe
much to the work of Coulomb. The first connection between magnetism and
electricity, however, was made through the pioneering experiments of the Danish
physicist and chemist Hans Christian Oersted, who in
1819 discovered that a magnetic needle could be deflected by a wire nearby
carrying an electric current. Within one week after learning of Oersted's discovery, the French scientist André Marie Ampère showed experimentally that two current-carrying
wires would affect each other like poles of magnets. In 1831 the British
physicist and chemist Michael Faraday discovered that an electric current could
be induced (made to flow) in a wire without connection to a battery, either by
moving a magnet or by placing another current-carrying wire with an
unsteady—that is, rising and falling—current nearby. The intimate connection
between electricity and magnetism, now established, can best be stated in terms
of electric or magnetic fields, or forces that will act at a particular point
on a unit charge or unit current, respectively, placed at that point.
Stationary electric charges produce electric fields; currents—that is, moving
electric charges—produce magnetic fields. Electric fields are also produced by
changing magnetic fields, and vice versa. Electric fields exert forces on
charged particles as a function of their charge alone; magnetic fields will
exert an additional force only if the charges are in motion.
These qualitative findings were finally put into a
precise mathematical form by the British physicist James Clerk Maxwell who, in
developing the partial differential equations that bear his name, related the
space and time changes of electric and magnetic fields at a point with the
charge and current densities at that point. In principle, they permit the
calculation of the fields everywhere and at any time from a
knowledge of the charges and currents. An unexpected result arising from
the solution of these equations was the prediction of a new kind of
electromagnetic field, one that was produced by accelerating charges, that was
propagated through space with the speed of light in the form of an
electromagnetic wave, and that decreased with the inverse square of the
distance from the source. In 1887 the German physicist Heinrich Rudolf Hertz
succeeded in actually generating such waves by electrical means, thereby laying
the foundations for radio, radar, television, and other forms of
telecommunications. See Electromagnetic Radiation.
The behavior of electric
and magnetic fields in these waves is quite similar to that of a very long taut
string, one end of which is rapidly moved up and down in a periodic fashion.
Any point along the string will be observed to move up and down, or oscillate,
with the same period or with the same frequency as the source. Points along the
string at different distances from the source will reach the maximum vertical
displacements at different times, or at a different phase. Each point along the
string will do what its neighbor did, but a little later if it is farther removed from the vibrating source (see Oscillation).
The speed with which the disturbance, or the message to oscillate, is
transmitted along the string is called the wave velocity (see Wave Motion).
This is a function of the medium, its mass, and the tension in the case of a
string. An instantaneous snapshot of the string (after it has been in motion
for a while) would show equispaced points having the
same displacement and motion, separated by a distance known as the wavelength,
which is equal to the wave velocity divided by the frequency. In the case of
the electromagnetic field one can think of the electric-field strength as
taking the place of the up-and-down motion of each piece of the string, with
the magnetic field acting similarly at a direction at right angles to that of
the electric field. The electromagnetic-wave velocity away from the source is
the speed of light.
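The relation just stated, wavelength equals wave velocity divided by frequency, can be applied directly; the sketch below uses an arbitrary radio frequency chosen only for illustration.

    # wavelength = wave velocity / frequency, here for an electromagnetic wave in vacuum
    c = 2.998e8         # speed of light, m/s
    frequency = 1.0e8   # Hz (100 MHz, an illustrative radio frequency)

    wavelength = c / frequency
    print(f"wavelength = {wavelength:.2f} m")   # about 3 m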
D Light
The apparent linear propagation of light had been known since antiquity, and the ancient Greeks believed that light consisted of a
stream of corpuscles. They were, however, quite confused as to whether these
corpuscles originated in the eye or in the object viewed. Any satisfactory
theory of light must explain its origin and disappearance and its changes in
speed and direction while it passes through various media. Partial answers to
these questions were proposed in the 17th century by Newton, who based them on
the assumptions of a corpuscular theory, and by the English scientist Robert Hooke and the Dutch astronomer, mathematician, and
physicist Christiaan Huygens, who proposed a wave
theory. No experiment could be performed that distinguished between the two
theories until the demonstration of interference in the early 19th century by
the British physicist and physician Thomas Young. The subsequent work of the French physicist Augustin Jean Fresnel decisively favored the wave theory.
Interference can be demonstrated by placing a thin
slit in front of a light source, stationing a double slit farther away, and
looking at a screen spaced some distance behind the double slit. Instead of
showing a uniformly illuminated image of the slits, the screen will show equispaced light and dark bands. Particles coming from the
same source and arriving at the screen via the two slits could not produce
different light intensities at different points and could certainly not cancel
each other to yield dark spots. Light waves, however, can produce such an
effect. Assuming, as did Huygens, that each of the double slits acts as a new
source, emitting light in all directions, the two wave trains arriving at the
screen at the same point will not generally arrive in phase, though they will
have left the two slits in phase. Depending on the difference in their paths, “positive” displacements from one slit arriving at the same time as “negative” displacements from the other will tend to cancel out and produce darkness, while the
simultaneous arrival of either positive or negative displacements from both
sources will lead to reinforcement or brightness. Each apparent bright spot
undergoes a timewise variation as successive in-phase
waves go from maximum positive through zero to maximum negative displacement
and back. Neither the eye nor any classical instrument, however, can determine
this rapid “flicker,” which in the visible-light range has a frequency from 4 × 10¹⁴ to 7.5 × 10¹⁴ Hz, or cycles per second. Although it
cannot be measured directly, the frequency can be inferred from wavelength and
velocity measurements. The wavelength can be determined from a simple
measurement of the distance between the two slits, and the distance between
adjacent bright bands on the screen; it ranges from 4 × 10⁻⁵ cm (1.6 × 10⁻⁵ in) for violet light to 7.5 × 10⁻⁵ cm (3 × 10⁻⁵ in) for red light, with intermediate wavelengths for the other colors.
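Under the usual small-angle approximation, the wavelength follows from the slit separation, the spacing of the bright bands, and the slit-to-screen distance; all of the numbers in this Python sketch are assumed for illustration and do not come from the article.

    # Double-slit estimate: wavelength ≈ slit_separation * fringe_spacing / screen_distance
    slit_separation = 1.0e-4    # m (0.1 mm between the slits), assumed
    fringe_spacing = 5.0e-3     # m (distance between adjacent bright bands), assumed
    screen_distance = 1.0       # m (from double slit to screen), assumed

    wavelength = slit_separation * fringe_spacing / screen_distance
    print(f"wavelength ≈ {wavelength * 100:.1e} cm")   # 5e-05 cm, within the visible range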
The first measurement of the velocity of light was
carried out by the Danish astronomer Olaus Roemer in
1676. He noted an apparent time variation between successive eclipses of
Jupiter's moons, which he ascribed to the intervening change in the distance
between Earth and Jupiter, and to the corresponding difference in the time
required for the light to reach the earth. His measurement was in fair
agreement with the improved 19th-century observations of the French physicist
Armand Hippolyte Louis Fizeau,
and with the work of the American physicist Albert Abraham Michelson and his coworkers, which extended into the 20th century. Today the velocity of light in a vacuum is known very accurately as 299,792 km/sec (about 186,282 mi/sec). In matter, the velocity is less and varies with frequency, giving
rise to a phenomenon known as dispersion. See also Optics; Spectrum;
Vacuum.
Maxwell's work contributed several important results to
the understanding of light by showing that it was electromagnetic in origin and
that electric and magnetic fields oscillated in a light wave. His work
predicted the existence of nonvisible light, and
today electromagnetic waves or radiations are known to cover the spectrum from
gamma rays (see Radioactivity), with wavelengths of 10⁻¹² cm (4 × 10⁻¹³ in), through X rays, visible light, microwaves, and radio
waves, to long waves of hundreds of kilometers in
length (see X Ray). It also related the velocity of light in vacuum and
through media to other observed properties of space and matter on which
electrical and magnetic effects depend. Maxwell's discoveries, however, did not
provide any insight into the mysterious medium, corresponding to the string,
through which light and electromagnetic waves had to travel (see the Electricity
and Magnetism section above). Based on the experience with water, sound,
and elastic waves, scientists assumed a similar medium to exist, a “luminiferous ether” without mass, which was all-pervasive
(because light could obviously travel through the emptiness of a vacuum), yet had to act like a solid (because electromagnetic waves were known
to be transverse and the oscillations took place in a plane perpendicular to
the direction of propagation, and gases and liquids could only sustain longitudinal
waves, such as sound waves). The search for this mysterious ether occupied
physicists' attention for much of the last part of the 19th century.
The problem was further compounded by an extension
of a simple problem. A person walking forward with a speed of 3.2 km/h (2 mph)
in a train traveling at 64.4 km/h (40 mph) appears, to an observer on the ground, to move at 67.6 km/h (42 mph). In terms of the
velocity of light the question that now arose was: If light travels at about
300,000 km/sec (about 186,000 mi/sec) through the ether, at what velocity
should it travel relative to an observer on earth while the earth also moves
through the ether? Or, alternately, what is the earth's velocity through the
ether? The famous Michelson-Morley experiment, first performed in 1887 by
Michelson and the American chemist Edward Williams Morley using an
interferometer, was an attempt to measure this velocity; if the earth were traveling through a stationary ether, a difference should
be apparent in the time taken by light to traverse a given distance, depending
on whether it travels in the direction of or perpendicular to the earth's
motion. The experiment was sensitive enough to detect even a very slight
difference by interference; the results were negative. Physics was now in a
profound quandary from which it was not rescued until Einstein formulated his
theory of relativity in 1905.
E Thermodynamics
A branch of physics that assumed major stature
during the 19th century was thermodynamics. It began by disentangling the
previously confused concepts of heat and temperature, by arriving at meaningful
definitions, and by showing how they could be related to the heretofore purely
mechanical concepts of work and energy. See also Heat Transfer.
E1 Heat and Temperature
A different sensation is experienced when a hot or a
cold body is touched, leading to the qualitative and
subjective concept of temperature. The addition of heat to a body leads to an
increase in temperature (as long as no melting or boiling occurs), and in the
case of two bodies at different temperatures brought into contact, heat flows
from one to the other until their temperatures become the same and thermal
equilibrium is reached. To arrive at a scientific measure of temperature,
scientists used the observation that the addition or subtraction of heat
produced a change in at least one well-defined property of a body. The addition
of heat, for example, to a column of liquid maintained at constant pressure
increased the length of the column, while the heating of a gas confined in a
container raised its pressure. Temperature, therefore, can be measured indirectly through some other physical property, such as the length of the mercury column in an ordinary thermometer, provided the other relevant properties remain unchanged.
The mathematical relationship between the relevant physical properties of a
body or system and its temperature is known as the equation of state. Thus, for
an ideal gas, a simple relationship exists between the pressure, p,
volume V, number of moles n, and the absolute temperature T,
given by pV = nRT,
where R is the same constant for all ideal gases. Boyle's law, named
after the British physicist and chemist Robert Boyle, and Gay-Lussac's law or Charles's law, named after the French
physicists and chemists Joseph Louis Gay-Lussac and
Jacques Alexandre César
Charles, are both contained in this equation of state (see Gases).
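A quick check of the equation of state pV = nRT for one mole of an ideal gas at 0°C in a volume of 22.4 liters (illustrative values chosen to land near atmospheric pressure):

    # Ideal-gas equation of state: p = n * R * T / V
    R = 8.314        # gas constant, J/(mol*K)
    n = 1.0          # moles
    T = 273.15       # kelvins (0 degrees Celsius)
    V = 0.0224       # m^3 (22.4 liters)

    p = n * R * T / V
    print(f"pressure = {p:.0f} Pa")   # close to one atmosphere, about 101,325 Pa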
Until well into the 19th century, heat was
considered a massless fluid called caloric, contained
in matter and capable of being squeezed out of or into it. Although the
so-called caloric theory answered most early questions on thermometry and calorimetry, it failed to provide a sound explanation of
many early 19th-century observations. The first true connection between heat
and other forms of energy was observed in 1798 by the Anglo-American physicist and statesman Benjamin Thompson (Count Rumford), who noted that the heat produced in the boring of cannon was roughly proportional to the amount of work done. In mechanics,
work is the product of a force on a body and the distance through which the
body moves during its application.
E2 The First Law of Thermodynamics
The equivalence of heat and work was explained by
the German physicist Hermann Ludwig Ferdinand von Helmholtz
and the British mathematician and physicist William Thomson, 1st Baron Kelvin,
by the middle of the 19th century. Equivalence means that doing work on a
system can produce exactly the same effect as adding heat; thus the same
temperature rise can be achieved in a gas contained in a vessel by adding heat
or by doing an appropriate amount of work through a paddle wheel immersed in the container, with the paddle driven by falling weights. The numerical
value of this equivalent was first demonstrated by the British physicist James
Prescott Joule in several heating and paddle-wheel experiments between 1840 and
1849.
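A rough Joule-style estimate shows the equivalence numerically; the falling mass, drop height, and amount of water below are invented for illustration and are not Joule's actual figures.

    # Work done by a falling weight reappears as heat in stirred water
    g = 9.81              # m/s^2
    falling_mass = 10.0   # kg, assumed
    drop_height = 2.0     # m, assumed
    water_mass = 0.5      # kg, assumed
    c_water = 4186.0      # specific heat of water, J/(kg*K)

    work = falling_mass * g * drop_height       # joules of mechanical work
    delta_T = work / (water_mass * c_water)     # resulting temperature rise
    print(f"work = {work:.0f} J, temperature rise = {delta_T:.3f} K")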
It was thus recognized that doing work on a system and adding heat to it are both means of transferring energy to it. Therefore,
the amount of energy added by heat or work had to increase the internal energy
of the system, which in turn determined the temperature. If the internal energy
remains unchanged, the amount of work done on a system must equal the heat
given up by it. This is the first law of thermodynamics, a statement of the
conservation of energy. Not until the action of molecules in a system was
better understood by the development of the kinetic theory could this internal
energy be related to the sum of the kinetic energies of all the molecules
making up the system.
E3 The Second Law of Thermodynamics
While the first law indicates that energy must
be conserved in any interactions between a system and its surroundings, it
gives no indication whether all forms of mechanical and thermal energy exchange
are possible. That overall changes in energy proceed in one direction was first
formulated by the French physicist and military engineer Nicolas Léonard Sadi Carnot,
who in 1824 pointed out that a heat engine (a device that can produce work
continuously while only exchanging heat with its surroundings) requires both a
hot body as a source of heat and a cold body to absorb heat that must be
discharged. When the engine performs work, heat must be transferred from the
hotter to the colder body; to have the inverse take place requires the
expenditure of mechanical (or electrical) work. Thus, in a continuously working
refrigerator, the absorption of heat from the low temperature source (the cold
space) requires the addition of work (usually as electrical power), and the
discharge of heat (usually via fanned coils in the rear) to the surroundings (see
Refrigeration). These ideas, based on Carnot's
concepts, were eventually formulated rigorously as the second law of
thermodynamics by the German mathematical physicist Rudolf Julius Emanuel Clausius and by Lord Kelvin in various alternate, although
equivalent, ways. One such formulation is that heat cannot flow from a colder
to a hotter body without the expenditure of work.
From the second law, it follows that in an
isolated system (one that has no interactions with the surroundings) internal
portions at different temperatures will always adjust to a single uniform
temperature and thus produce equilibrium. This can also be applied to other
internal properties that may be different initially. If milk is poured into a
cup of coffee, for example, the two substances will continue to mix until they
are inseparable and can no longer be differentiated. Thus, an initial separate
or ordered state is turned into a mixed or disordered state. These ideas can be
expressed by a thermodynamic property, called the entropy (first formulated by Clausius), which serves as a measure of how close a system
is to equilibrium—that is, to perfect internal disorder. The entropy of an
isolated system, and of the universe as a whole, can only increase, and when
equilibrium is eventually reached, no more internal change of any form is
possible. Applied to the universe as a whole, this principle suggests that the temperature throughout space eventually becomes uniform, resulting in the so-called heat death of the universe.
Locally, the entropy can be lowered by external
action. This applies to machines, such as a refrigerator, where the entropy in
the cold chamber is being reduced, and to living organisms. This local increase in order is, however, possible only at the expense of an entropy increase in the surroundings, where more disorder must be created.
This continued increase in entropy is related to the
observed nonreversibility of macroscopic processes.
If a process were spontaneously reversible—that is, if, after undergoing a
process, both it and all the surroundings could be brought back to their
initial state—the entropy would remain constant in violation of the second law.
While this is true for macroscopic processes, and therefore corresponds to
daily experience, it does not apply to microscopic processes, which are
believed to be reversible. Thus, chemical reactions between individual
molecules are not governed by the second law, which applies only to macroscopic
ensembles.
From the promulgation of the second law, thermodynamics
went on to other advances and applications in physics, chemistry, and
engineering. Most chemical engineering, all power-plant engineering, and
air-conditioning and low-temperature physics are just a few of the fields that
owe their theoretical basis to thermodynamics and to the subsequent
achievements of such scientists as Maxwell, the American physicist Willard
Gibbs, the German physical chemist Walther Hermann Nernst,
and the Norwegian-born American chemist Lars Onsager.
F Kinetic Theory and Statistical Mechanics
The modern concept of the atom was first proposed
by the British chemist and physicist John Dalton in 1808 and was based on his
studies that showed that chemical elements enter into combinations based on
fixed ratios of their weights. The existence of molecules as the smallest
particles of a substance that can exist in the free—that is, gaseous—state and
have the properties of any larger amount of the substance, was first
hypothesized by the Italian physicist and chemist Amedeo
Avogadro in 1811, but did not find general acceptance until about 50 years
later, when it also formed the basis of the kinetic theory of gases (see Avogadro's
Law). Developed by Maxwell, the Austrian physicist Ludwig Boltzmann,
and other physicists, it applied the laws of mechanics and probability to the behavior of individual molecules, and drew statistical
inferences about the properties of the gas as a whole.
A typical but important problem solved in this
manner was the determination of the range of speeds of
molecules in the gas, and from this the average kinetic energy of the
molecules. The kinetic energy of a body, as a simple consequence of Newton's second law, is ½mv², where m is the mass of the body and v its velocity. One of the achievements of kinetic theory was
to show that temperature, the macroscopic thermodynamic property describing the
system as a whole, was directly related to the average kinetic energy of the
molecules. Another was the identification of the entropy of a system with the
logarithm of the statistical probability of the energy distribution. This led
to the demonstration that the state of thermodynamic equilibrium corresponding
to that of highest probability is also the state of maximum entropy. Following
the success in the case of gases, kinetic theory and statistical mechanics were
subsequently applied to other systems, a process that is still continuing.
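The link between temperature and average molecular kinetic energy can be made quantitative with the standard kinetic-theory result (not quoted in this article) that the average translational kinetic energy per molecule is (3/2)kT; the sketch below applies it to nitrogen at an assumed room temperature.

    import math

    # Average kinetic energy (3/2)*k*T and root-mean-square molecular speed sqrt(3*k*T/m)
    k = 1.381e-23           # Boltzmann constant, J/K
    T = 300.0               # K, roughly room temperature (assumed)
    m = 28 * 1.6605e-27     # kg, mass of a nitrogen (N2) molecule

    avg_ke = 1.5 * k * T
    v_rms = math.sqrt(3 * k * T / m)
    print(f"average KE = {avg_ke:.2e} J, rms speed = {v_rms:.0f} m/s")   # about 517 m/s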
G Early Atomic and Molecular Theories
The development of Dalton's atomic theory and
Avogadro's molecular law had overriding influence on the development of
chemistry, in addition to their importance in physics.
G1 Avogadro's Law
Avogadro's law, which was easily proved by kinetic
theory, indicated that a specified volume of a gas at a given temperature and
pressure always contained the same number of molecules, irrespective of the gas
selected. This number, however, could not be accurately determined, and the
19th-century physicists therefore had no sound knowledge of molecular or atomic
mass and size until the turn of the 20th century, when subsequent to the
discovery of the electron, the American physicist Robert Andrews Millikan carefully determined its charge. This finally
permitted accurate determination of the so-called Avogadro's number, which is the number of molecules in an amount of a substance whose mass in grams is numerically equal to its molecular weight.
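One standard route from the electron charge to Avogadro's number, consistent with the statement above, divides the faraday (the total charge carried by one mole of electrons, known from electrolysis) by e; the faraday value used in this sketch is the modern one.

    # Avogadro's number from the faraday and the electron charge
    faraday = 96485.0    # coulombs per mole of elementary charges
    e = 1.602e-19        # coulombs, the electron charge

    avogadro = faraday / e
    print(f"Avogadro's number ≈ {avogadro:.3e} per mole")   # about 6.02e23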
Besides the mass, another quantity of interest was
the size of an atom. Various and only partly successful attempts at finding the
size of an atom were made during the latter part of the 19th century; the most
successful applied the results of kinetic theory to nonideal
gases—that is, gases the behavior of which depended
on the fact that molecules were not points but had finite volumes. Only later
experiments involving the scattering of X rays, alpha particles, and other
atomic and subatomic particles by atoms led to more precise measurements of
their size as being between 10⁻⁸ and 10⁻⁷ cm (4 × 10⁻⁹ and 4 × 10⁻⁸ in) in diameter. A precise statement about the size of
an atom, however, requires some explicit definition of what is meant by size,
since most atoms are not exactly spherical and can exist in various states that
change the distance between the nucleus and the electrons within the atom.
G2 Spectroscopy
One of the most important developments leading
to the exploration of the interior of the atom, and to the eventual overthrow
of the classical theories of physics, was spectroscopy; the
other was the discovery of the subatomic particles themselves.
In 1823 the British astronomer and chemist Sir
John Frederick William Herschel suggested that a chemical substance might be
identified by examining its spectrum—that is, the discrete wavelength pattern
in which light from a gaseous substance is emitted. In the years that followed,
the spectra of a great many substances were cataloged
by two Germans, the chemist Robert Wilhelm Bunsen and the physicist Gustav
Robert Kirchhoff. Helium was first discovered as a
new element following the discovery of an unexplained spectral line in the
sun's spectrum by the British astronomer Sir Joseph Norman Lockyer
in 1868. From the standpoint of atomic theory, however, the most important
contributions were made by the study of the spectra of simple atoms, such as
hydrogen, which showed few spectral lines. See Chemical Analysis.
Discrete line spectra originate from gaseous substances
where, in terms of modern knowledge, the electrons have been excited by heat or
by bombardment with subatomic particles. In contrast, a heated solid has a
continuous spectrum over the full visible range and into the infrared and
ultraviolet regions. The total amount of energy emitted depends strongly on the
temperature, as does the relative intensity of the different wavelength
components. As a piece of iron is heated, for example, its radiation is first
in the infrared spectrum and cannot be seen; it then extends into the visible
spectrum where the glow shifts from red to white as the peak of its radiant
spectrum shifts toward the middle of the visible range. Attempts to explain the
radiation characteristics of solids, using the tools of theoretical physics
available at the end of the 19th century, led to the prediction that at any
given temperature the amount of radiation increased with frequency and without
limit. This calculation, in which no error was found, was in disagreement with
experiment and also led to an absurd conclusion: A body at a finite temperature
could radiate an infinite amount of energy. This required a new way of thinking
about radiation and, indirectly, about the atom. See Infrared Radiation;
Ultraviolet Radiation.
H The Breakdown of Classical Physics
By about 1880 physics was serene; most phenomena
could be explained by Newtonian mechanics, Maxwell's electromagnetic theory,
thermodynamics, and Boltzmann's statistical
mechanics. Only a few problems, such as the determination of the properties of
the ether and the explanation of the radiation spectra from solids and gases,
appeared unsolved. These unexplained phenomena, however, formed the seeds of
revolution, a revolution that was augmented by a series of remarkable
discoveries within the last decade of the 19th century: the discovery of X rays
by Wilhelm Conrad Roentgen of Germany in 1895; of the electron by Sir Joseph John Thomson of Great Britain in 1897; of radioactivity by Antoine Henri
Becquerel of France in 1896; and of the photoelectric effect by Hertz, Wilhelm Hallwachs, and Philipp Eduard Anton Lenard of Germany
during the period from 1887 to 1899 (see Photoelectric Cell). Coupled
with the disturbing results of the Michelson-Morley experiments and the
discovery of cathode rays, or electron stream, the experimental evidence in
physics outstripped all available theories to explain it.
V MODERN PHYSICS
Two major new developments during the first third
of the 20th century, the quantum theory and the theory of relativity, explained
these findings, yielded new discoveries, and changed the understanding of
physics as it is known today.
A Relativity
To extend the example of relative velocity
introduced with the Michelson-Morley experiment, two situations can be
compared. One consists of a person, A, walking forward with a velocity v
in a train moving at velocity u. The velocity of A with regard to
an observer B stationary on the ground is then simply V = u
+ v. If, however, the train were at rest in the station and A was
moving forward with velocity v while observer B walked backward
with velocity u, the relative speed between A and B would
be exactly the same as in the first case. In more general terms, if two frames
of reference are moving relative to each other at constant velocity,
observations of any phenomena made by observers in either frame will be
physically equivalent. As already mentioned, however, the Michelson-Morley experiment failed to confirm this simple addition of velocities for light: two observers, one at rest and the other moving toward a light source with velocity u, both observe the same light velocity, commonly denoted by the symbol c.
Einstein incorporated the invariance of c into his
theory of relativity. He also demanded a very careful rethinking of the
concepts of space and time, showing the imperfection of intuitive notions about
them. As a consequence of his theory, it is known that two clocks that keep
identical time when at rest relative to each other must run at different speeds
when they are in relative motion, and two rods that are identical in length (at
rest) will become different in length when they are in relative motion. Space
and time must be closely linked in a four-dimensional continuum where the
normal three-space dimensions must be augmented by an interrelated time
dimension.
Two important consequences of Einstein's relativity
theory are the equivalence of mass and energy and the limiting velocity of the
speed of light for material objects. Relativistic mechanics describes the
motion of objects with velocities that are appreciable fractions of the speed
of light, while Newtonian mechanics remains useful for velocities typical of
the macroscopic motion of objects on earth. No material object, however, can
have a speed equal to or greater than the speed of light.
Even more important is the relation between
the mass m and energy E. They are coupled by the relation E = mc², and because c² is very large, the energy equivalent of a given mass is enormous. The change of mass that accompanies an energy change is significant in nuclear reactions, as in reactors or nuclear weapons,
and in the stars, where a significant loss of mass accompanies the huge energy
release.
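The enormous scale of the mass-energy equivalence is easy to see numerically; here one gram is chosen purely for illustration.

    # Mass-energy equivalence: E = m * c**2
    c = 2.998e8      # speed of light, m/s
    mass = 0.001     # kg (one gram)

    energy = mass * c**2
    print(f"energy equivalent of 1 g: {energy:.2e} J")   # roughly 9e13 J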
Einstein's original theory, formulated in 1905 and known
as the special theory of relativity, was limited to frames of reference moving
at constant velocity relative to each other. In 1915, he generalized his
hypothesis to formulate the general theory of relativity that applied to
systems that accelerate with reference to each other. This extension showed
gravitation to be a consequence of the geometry of space-time and predicted the
bending of light in its passage close to a massive body like a star, an effect
first observed in 1919. General relativity, although less firmly established
than the special theory, has deep significance for an understanding of the
structure of the universe and its evolution. See also Cosmology.
B Quantum Theory
The quandary posed by the observed spectra emitted
by solid bodies was first explained by the German physicist Max Planck.
According to classical physics, all molecules in a solid can vibrate with the
amplitude of the vibrations directly related to the temperature. All vibration
frequencies should be possible and the thermal energy of the solid should be
continuously convertible into electromagnetic radiation as long as energy is
supplied. Planck made a radical assumption by postulating that the molecular
oscillator could emit electromagnetic waves only in discrete bundles, now
called quanta, or photons. See Photon; Quantum Theory. Each photon has a
characteristic wavelength in the spectrum and an energy E given by E
= hf, where f is the frequency
of the wave. The wavelength λ is related to the frequency by λf = c, where c is the speed of light. With the frequency specified in hertz (Hz), or cycles per second, h, now known as Planck's constant, is extremely small (6.626 × 10⁻²⁷ erg-sec). With his theory, Planck again introduced a partial duality into the
theory of light, which for nearly a century had been considered to be wavelike
only.
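Combining E = hf with λf = c gives the energy of a photon of any wavelength; the sketch below uses Planck's constant in SI units (6.626 × 10⁻³⁴ joule-seconds rather than the erg-sec value quoted above) and a green-light wavelength chosen for illustration.

    # Photon energy from its wavelength: f = c / wavelength, E = h * f
    h = 6.626e-34        # Planck's constant, J*s
    c = 2.998e8          # speed of light, m/s
    wavelength = 5.0e-7  # m (green light, illustrative)

    frequency = c / wavelength
    energy = h * frequency
    print(f"f = {frequency:.2e} Hz, photon energy = {energy:.2e} J")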
C Photoelectricity
If electromagnetic radiation of appropriate wavelength
falls upon suitable metals, negative electric charges, later identified as
electrons, are ejected from the metal surface. The important aspects of this
phenomenon are the following: (1) the energy of each photoelectron depends only
on the frequency of the illumination and not on its intensity; (2) the rate of
electron emission depends only on the illuminating intensity and not on the
frequency (provided that the minimum frequency to cause emission is exceeded);
and (3) the photoelectrons emerge as soon as the illumination hits the surface.
These observations, which could not be explained by Maxwell's electromagnetic
theory of light, led Einstein to assume in 1905 that light can be absorbed only
in quanta or photons, and that the photon completely vanishes in the absorption
process, with all of its energy E (=hf)
going to one electron in the metal. With this simple assumption Einstein
extended Planck's quantum theory to the absorption of electromagnetic
radiation, giving additional importance to the wave-particle duality of light.
It was for this work that Einstein was awarded the 1921 Nobel Prize in physics.
D X Rays
These very penetrating rays, first discovered by
Roentgen, were shown to be electromagnetic radiation of very short wavelength
in 1912 by the German physicist Max Theodor Felix von
Laue and his coworkers. The
precise mechanism of X-ray production was shown to be a quantum effect, and in
1914 the British physicist Henry Gwyn-Jeffreys
Moseley used his X-ray spectrograms to prove that the atomic number of an
element, and hence the number of positive charges in an atom, is the same as
its position in the periodic table (see Periodic Law). The photon theory
of electromagnetic radiation was further strengthened and developed by the
prediction and observation of the so-called Compton effect
by the American physicist Arthur Holly Compton in 1923.
E Electron Physics
That electric charges were carried by extremely small
particles had already been suspected in the 19th century and, as indicated by
electrochemical experiments, the charge of these elementary particles was a
definite, invariant quantity. Experiments on the conduction of electricity
through low-pressure gases led to the discovery of two kinds of rays: cathode
rays, coming from the negative electrode in a gas discharge tube, and positive
or canal rays from the positive electrode. Sir Joseph John Thomson's 1897 experiment measured the ratio of the charge q to the mass m of
the cathode-ray particles. Lenard in 1899 confirmed
that the ratio of q to m for photoelectric particles was
identical to that of cathode rays. The American inventor Thomas Alva Edison had noted in 1883 that very hot wires emit electricity, a phenomenon called thermionic emission (also known as the Edison effect), and in 1899 Thomson showed that this
form of electricity also consisted of particles with the same q to m
ratio as the others. About 1911 Millikan finally
determined that electric charge always arises in multiples of a basic unit e, and measured the value of e, now known to be 1.602 × 10⁻¹⁹ coulombs. From the measured value of the q to m ratio, with q set equal to e, the mass of the carrier, called the electron, could now be determined as 9.109 × 10⁻³¹ kg.
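The arithmetic of that last step is simple: dividing the charge e by the measured charge-to-mass ratio gives the electron mass. The q/m value in this sketch is the modern figure for the electron.

    # Electron mass from the charge e and the charge-to-mass ratio q/m
    e = 1.602e-19         # coulombs
    q_over_m = 1.759e11   # coulombs per kilogram, electron charge-to-mass ratio

    m_electron = e / q_over_m
    print(f"electron mass ≈ {m_electron:.3e} kg")   # about 9.11e-31 kg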
Finally, Thomson and others showed that the
positive rays also consisted of particles, each carrying a charge e, but
of the positive variety. These particles, however, now recognized as positive
ions resulting from the removal of an electron from a neutral atom, are much
more massive than the electron. The smallest, the hydrogen ion, is a single proton with a mass of 1.673 × 10⁻²⁷ kg, about 1,836 times that of the electron (see Ion; Ionization). The “quantized” nature
of electric charge was now firmly established and, at the same time, two of the
fundamental subatomic particles identified.
F Atomic Models
In 1911 the New Zealand-born British physicist Ernest Rutherford, making use of the newly discovered radiations from
radioactive nuclei, found Thomson's earlier model of an atom with uniformly
distributed positive and negative charged particles to be untenable. Some of the very fast, massive, positively charged alpha particles he employed were found to be deflected sharply in their passage through matter. This effect required an atomic model with a heavy positive scattering center.
Rutherford then suggested that the positive charge of an atom was concentrated in a massive stationary nucleus, with the negative electrons moving in orbits about it, held there by the electric attraction between opposite charges. This solar-system-like atomic model, however, could not persist according to Maxwell's theory, in which the revolving electrons would emit electromagnetic radiation, lose energy, and bring about a total collapse of the system in a very short time.
Another sharp break with classical physics was
required at this point. It was provided by the Danish physicist Niels Henrik David Bohr, who
postulated the existence within atoms of certain specified orbits in which
electrons could revolve without electromagnetic radiation emission. These
allowed orbits, or so-called stationary states, are determined by the condition that the angular momentum J of the orbiting electron must be a positive integral multiple of Planck's constant divided by 2π, that is, J = nh/2π, where the quantum number n may have any positive
integer value. This extended “quantization” to dynamics, fixed the possible
orbits, and allowed Bohr to calculate their radii and the corresponding energy
levels. The model was confirmed experimentally in 1914 by the German-born American physicist James Franck and the German physicist Gustav Hertz.
Bohr developed his model much further. He explained
how atoms radiate light and other electromagnetic waves, and also proposed that
an electron “lifted” by a sufficient disturbance of the atom from the orbit of
smallest radius and least energy (the ground state) into another orbit, would soon “fall” back to the ground state. This
falling back is accompanied by the emission of a single photon of energy E
= hf, where E is the difference
in energy between the higher and lower orbits. Each orbit shift emits a
characteristic photon of sharply defined frequency and wavelength; thus one
photon would be emitted in a direct shift from the n = 3 to the n
= 1 orbit, which will be quite different from the two photons emitted in a
sequential shift from the n = 3 to n = 2 orbit, and then from
there to the n = 1 orbit. This model now allowed Bohr to account with
great accuracy for the simplest atomic spectrum, that of hydrogen, which had
defied classical physics.
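The Bohr picture can be made concrete with the standard result (not derived in this article) that the hydrogen energy levels are E(n) = -13.6 eV/n²; the photon emitted in a shift between two orbits carries the energy difference, as in this Python sketch for the n = 3 to n = 2 transition.

    # Bohr-model hydrogen levels and the photon emitted in an orbit shift
    def level_ev(n):
        return -13.6 / n**2          # energy of orbit n, in electron volts

    photon_ev = level_ev(3) - level_ev(2)      # energy released falling from n=3 to n=2
    wavelength_nm = 1240.0 / photon_ev         # using hc ≈ 1240 eV*nm
    print(f"n=3 -> n=2 photon: {photon_ev:.2f} eV, wavelength ≈ {wavelength_nm:.0f} nm")  # about 656 nm, red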
Although Bohr's model was extended and refined, it could
not explain observations for atoms with more than one electron. It could not
even account for the intensities of the spectral lines of the simple hydrogen atom. Because it had no more than a limited ability to
predict experimental results, it remained unsatisfactory for theoretical
physicists.
G Quantum Mechanics
Within a few years, roughly between 1924 and
1930, an entirely new theoretical approach to dynamics was developed to account
for subatomic behavior. Named quantum mechanics or
wave mechanics, it started with the suggestion in 1923 by the French physicist
Louis Victor, Prince de Broglie, that not only electromagnetic radiation but also matter could have wave as well as particle aspects. The wavelength of the so-called matter waves
associated with a particle is given by the equation λ = h/mv, where m is the particle mass and v
its velocity. Matter waves were conceived of as pilot waves guiding the
particle motion, a property that should result in diffraction under suitable
conditions. This was confirmed in 1927 by the experiments on electron-crystal
interactions by the American physicists Clinton Joseph Davisson
and Lester Halbert Germer
and the British physicist George Paget Thomson.
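The de Broglie relation λ = h/mv explains why crystals can diffract electrons: for an electron at a speed typical of such experiments (the speed below is an assumed, illustrative figure), the matter wavelength comes out comparable to atomic spacings in a crystal.

    # de Broglie wavelength of an electron: lambda = h / (m * v)
    h = 6.626e-34     # Planck's constant, J*s
    m_e = 9.109e-31   # electron mass, kg
    v = 4.2e6         # m/s, roughly an electron accelerated through about 50 volts (assumed)

    wavelength = h / (m_e * v)
    print(f"electron wavelength ≈ {wavelength:.2e} m")   # about 1.7e-10 m, similar to atomic spacings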
Subsequently, Werner Heisenberg, Max Born, and Ernst Pascual
Jordan of Germany and the Austrian physicist Erwin Schrödinger developed de Broglie's idea into a mathematical form capable of dealing
with a number of physical phenomena and with problems that could not be handled
by classical physics. In addition to confirming Bohr's postulate regarding the
quantization of energy levels in atoms, quantum mechanics now provides an
understanding of the most complex atoms, and has also been a guiding spirit in
nuclear physics. Although quantum mechanics is usually needed only on the
microscopic level (with Newtonian mechanics still satisfactory for macroscopic
systems), certain macroscopic effects, such as the properties of crystalline
solids, also exist that can only be satisfactorily explained by principles of
quantum mechanics.
Going beyond de Broglie's notion
of the wave-particle duality of matter, additional important concepts have
since been incorporated into the quantum-mechanical picture. These include the
discovery that electrons must have some permanent magnetism and, with it, an
intrinsic angular momentum, or spin, as a fundamental property. Spin was
subsequently found in almost all other elementary particles. In 1925 the
Austrian physicist Wolfgang Pauli expounded the
exclusion principle, which states that in an atom no two electrons can have
precisely the same set of quantum numbers. Four quantum numbers are needed to
specify completely the state of an electron in an atom. The exclusion principle
is vital for an understanding of the structure of the elements and of the
periodic table. Heisenberg in 1927 put forth the uncertainty principle, which
asserted the existence of a natural limit to the precision with which certain
pairs of physical quantities can be known simultaneously.
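A simple example illustrates the exclusion principle at work: in helium the two electrons can both occupy the lowest orbit because their spins point in opposite directions, but in lithium the third electron is excluded from that orbit and must occupy a higher one, and it is this filling of successive shells that underlies the periodic table. The uncertainty principle can likewise be stated concretely for one such pair of quantities: the uncertainty in a particle's position multiplied by the uncertainty in its momentum can never be smaller than about h/4π, so that pinning down an electron's location more precisely necessarily blurs knowledge of its motion.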
Finally, a synthesis of quantum mechanics and
relativity was made in 1928 by the British mathematical physicist Paul Adrien Maurice Dirac, leading to
the prediction of the existence of the positron and bringing the development of
quantum mechanics to a culmination.
Largely as a result of Bohr's ideas, a
different and statistical approach developed in modern physics. The fully
deterministic cause-effect relations produced by Newtonian mechanics were
supplanted by predictions of future events in terms of statistical probability
only. Thus, the wave property of matter also implies that, in accordance with
the uncertainty principle, the motion of the particles can never be predicted
with absolute certainty even if the forces are known completely. Although this
statistical aspect plays no detectable role in macroscopic motions, it is
dominant on the molecular, atomic, and subatomic scale.
H Nuclear Physics
The understanding of atomic structure was also
facilitated by Becquerel's discovery in 1896 of radioactivity in uranium ore (see
Uranium). Within a few years radioactive radiation was found to consist of
three types of emissions: alpha rays, later found by Rutherford to be the
nuclei of helium atoms; beta rays, shown by Becquerel to be very fast
electrons; and gamma rays, identified later as very short wavelength
electromagnetic radiation. In 1898 the French physicists Marie and Pierre Curie
separated two highly radioactive elements, radium and polonium, from uranium
ore, thus showing that radiations could be identified with particular elements.
By 1903 Rutherford and the British physical chemist Frederick Soddy had shown
that the emission of alpha or beta rays resulted in the transmutation of the
emitting element into a different one. Radioactive processes were shortly
thereafter found to be completely statistical; no method exists that could
indicate which atom in a radioactive material will decay at any one time. These
developments, in addition to leading to Rutherford's and Bohr's model of the
atom, also suggested that alpha, beta, and gamma rays could only come from the
nuclei of very heavy atoms. In 1919 Rutherford bombarded nitrogen with alpha
particles and converted it to hydrogen and oxygen, thus producing the first
artificial transmutation of elements.
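The statistical character of radioactivity can be expressed quantitatively: each nucleus has a fixed probability of decaying in a given interval, so a sample of N0 atoms dwindles according to N = N0 × (1/2)^(t/T), where T is the half-life, the time for half of any sample to decay. Half-lives range from tiny fractions of a second to billions of years, depending on the nucleus.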
Meanwhile, a knowledge of the
nature and abundance of isotopes was growing, largely through the development
of the mass spectrograph. A model emerged in which the nucleus contained all the
positive charge and almost all the mass of the atom. The nuclear-charge
carriers were identified as protons, but except for hydrogen, the nuclear mass
could be accounted for only if some additional uncharged particles were
present. In 1932 the British physicist Sir James Chadwick discovered the
neutron, an electrically neutral particle of mass 1.675 × 10^-27 kg,
slightly more than that of the proton. Now nuclei could be understood as
consisting of protons and neutrons, collectively called nucleons, and the
atomic number of the element was simply the number of protons in the nucleus.
On the other hand, the isotope number, also called the atomic mass number, was
the sum of the neutrons and protons present. Thus, all atoms of oxygen (atomic
no. 8) have eight protons, but the three isotopes of oxygen, O-16, O-17, and O-18, also contain within their respective nuclei eight, nine, or ten neutrons.
Positive electric charges repel each other, and because
atomic nuclei (except for hydrogen) have more than one proton, they would fly
apart except for a strong attractive force, called the nuclear force, or strong interaction, that binds the nucleons to each other. The energy associated with
this strong force is very great, millions of times greater than the energies
characteristic of electrons in their orbits or chemical binding energies. An
alpha particle (consisting of two protons and two neutrons), therefore, must overcome this strong force to escape from a radioactive nucleus such as uranium. This apparent paradox was explained by the physicists Edward U. Condon, George Gamow, and Ronald Wilfred Gurney, who applied quantum mechanics to the problem of
alpha emission in 1928 and showed that the statistical nature of nuclear
processes allowed alpha particles to “leak” out of radioactive nuclei, even
though their average energy was insufficient to overcome the nuclear force.
Beta decay was explained as a result of a neutron disruption within the
nucleus, the neutron changing into an electron (the beta particle), which is
promptly ejected, and a residual proton. The proton left behind leaves the
“daughter” nucleus with one more proton than its “parent” and thus increases
the atomic number and the position in the periodic table. Alpha or beta
emission usually leaves the nucleus with excess energy, which it unloads by
emitting a gamma-ray photon.
In all these nuclear processes a large amount
of energy, given by Einstein's E = mc^2 equation, is
released. After the process is over, the total mass of the product is less than
that of the parent, with the mass difference appearing as energy. See Nuclear
Energy.
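The scale involved can be judged from a representative figure, using standard values: a mass difference of one atomic mass unit, about 1.66 × 10^-27 kg, corresponds through E = mc^2 to roughly 1.5 × 10^-10 J, hundreds of millions of times the energy released per atom in typical chemical reactions.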
VI DEVELOPMENTS IN PHYSICS SINCE 1930
The rapid expansion of physics in the last
few decades was made possible by the fundamental developments during the first
third of the century, coupled with recent technological advances, particularly
in computer technology, electronics, nuclear-energy applications, and
high-energy particle accelerators.
A Accelerators
Rutherford and other early investigators of nuclear
properties were limited to the use of high-energy emissions from naturally
radioactive substances to probe the atom. The first artificial high-energy
emissions were produced in 1932 by the British physicist Sir John Douglas Cockcroft and the Irish physicist Ernest Thomas Sinton
Walton, who used high-voltage generators to accelerate protons to about 700,000
eV and to bombard lithium with them, transmuting it
into helium. One electron volt is the energy gained by an electron when the accelerating
voltage is 1 V; it is equivalent to about 1.6 × 10^-19 joule (J). Modern accelerators produce energies measured in
million electron volts (usually written mega-electron volts, or MeV), billion electron volts (giga-electron
volts, or GeV), or trillion electron volts (tera-electron volts, or TeV).
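For a sense of scale, a single proton accelerated to 1 TeV carries 10^12 × 1.6 × 10^-19 J, or about 1.6 × 10^-7 J, an amount that is minute by everyday standards but enormous when concentrated in a single subatomic particle.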
Higher-voltage sources were first made possible by the invention, also in 1932,
of the Van de Graaff generator by the American
physicist Robert J. Van de Graaff.
This was followed almost immediately by the
invention of the cyclotron by the American physicists Ernest Orlando Lawrence and Milton Stanley Livingston. The
cyclotron uses a magnetic field to bend the trajectories of charged particles
into circles, and during each half-revolution the particles are given a small
electric “kick” until they accumulate the high energy level desired. Protons
could be accelerated to about 10 MeV by a cyclotron,
but higher energies had to await the development of the synchrotron after the
end of World War II (1939-1945), based on the ideas of the American physicist
Edwin Mattison McMillan and the Soviet physicist
Vladimir I. Veksler. After World War II, accelerator
design made rapid progress, and accelerators of many types were built,
producing high-energy beams of electrons, protons, deuterons, heavier ions, and
X rays. For example, the accelerator at the Stanford
Linear Accelerator Center (SLAC) in Stanford,
California, accelerates electrons down a straight “runway,” 3.2 km (2 mi) long,
at the end of which they attain an energy of more than
20 GeV.
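Underlying both the cyclotron and the synchrotron is a simple relation of classical physics: a particle of charge q and mass m moving across a magnetic field B sweeps around a circle, completing each revolution at a frequency f = qB/2πm. At speeds well below that of light this frequency does not depend on the particle's energy, so the cyclotron can deliver its electric “kicks” at a fixed rate; as the particles approach the speed of light the synchronism breaks down, and the synchrotron compensates by adjusting the magnetic field and the accelerating frequency as the energy grows.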
While lower-energy accelerators are used in various
applications in industry and laboratories, the most powerful ones are used in
studying the structure of elementary particles, the fundamental building blocks
of nature. In such studies elementary particles are broken up by hitting them
with beams of projectiles that are usually protons or electrons. The
distribution of the fragments yields information on the structure of the
elementary particles.
To obtain more detailed information in this manner,
the use of more energetic projectiles is necessary. Since the acceleration of a
projectile is achieved by “pushing” it from behind, to obtain more energetic
projectiles it is necessary to keep pushing for a longer time. Thus, high-energy
accelerators are generally larger in size. The highest beam energy reached at
the end of World War II was less than 100 MeV. A
bigger accelerator, reaching 3 GeV, was built in the
early 1950s at the Brookhaven National Laboratory at Upton, New York. A
breakthrough in accelerator design occurred with the introduction of the strong
focusing principle in 1952 by the American physicists Ernest D. Courant,
Livingston, and Hartland S. Snyder. Today the world's largest accelerators have
been or are being built to produce beams of protons beyond 1 TeV. Two are located at the Fermi National Accelerator
Laboratory, near Batavia, Illinois, and at the European Organization for
Nuclear Research, known as CERN, in Geneva, Switzerland. See Particle
Accelerators.
B Particle Detectors
Detection and analysis of elementary particles were
first accomplished through the ability of these
particles to affect photographic emulsions and to energize fluorescent
materials. The actual paths of ionizing particles were first observed by the
British physicist Charles Thomson Rees Wilson in a cloud chamber, where water
droplets condensed on the ions produced by the particles during their passage.
Electric or magnetic fields can be used to bend the particle paths, yielding
information about their momentum and electric charges. A significant advance on
the cloud chamber was the construction of the bubble chamber by the American
physicist Donald Arthur Glaser in 1952. It uses a liquid, usually hydrogen,
instead of air, and the ions produced by a fast particle become centers of boiling, leaving an observable bubble track.
Because the density of the liquid is much higher than that of air, more
interactions take place in a bubble chamber than in a cloud chamber.
Furthermore, the bubbles clear out faster than water droplets, allowing more
frequent cycling of the bubble chamber. A third development, the spark chamber,
evolved in the 1950s. In this device, many parallel plates are kept at a high
voltage in a suitable gas atmosphere. An ionizing particle passing between the
plates breaks down the gas, forming sparks that delineate its path.
A different type of detector, the discharge
counter, was developed early during the 20th century, largely by the German
physicist Hans Geiger, and later improved by the German American physicist
Walther Müller. It is now commonly known as the
Geiger-Müller counter, and although small and convenient, it has been largely replaced by faster solid-state counting devices, such as the scintillation counter, developed
about 1947 by the German American physicist Hartmut
Paul Kallmann and others. It uses the ability of
ionized particles to produce a flash of light as they pass through certain
organic crystals and liquids. See Particle Detectors.
C Cosmic Rays
About 1911 the Austrian-American physicist Victor
Franz Hess discovered that cosmic radiation, consisting of rays originating
outside the earth's atmosphere, arrived in a pattern determined by the earth's
magnetic field (see Cosmic Rays). The rays were found to be positively
charged and to consist mostly of protons with energies ranging from about 1 GeV to 10^11 GeV
(compared to about 30 GeV for the fastest particles
produced by artificial accelerators). Cosmic rays trapped into orbits around the
earth account for the Van Allen radiation belts discovered during an
artificial-satellite flight in 1958 (see Radiation Belts).
When a very energetic primary proton smashes into
the atmosphere and collides with the nitrogen and oxygen nuclei present, it
produces large numbers of different secondary particles that spread toward the
earth as a cosmic-ray shower. The origin of the cosmic-ray protons is not yet
fully understood; some undoubtedly come from the sun and the other stars.
Except for the slowest rays, however, no mechanism can be found to account for
their high energies, and the likelihood is that weak galactic fields operate
over very long periods to accelerate interstellar protons (see Galaxy;
Milky Way).
D Elementary Particles
To the electron, proton, neutron, and photon have
been added a number of fundamental particles. In 1932 the American physicist
Carl David Anderson discovered the antielectron, or
positron, predicted in 1928 by Dirac. Anderson found
that the stopping of an energetic cosmic gamma ray near a heavy nucleus yielded
an electron-positron pair out of pure energy. When a positron subsequently
meets an electron, they annihilate each other with a burst of photons of
energy.
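The energies follow directly from E = mc^2, using the standard electron mass: the rest mass of an electron or positron is equivalent to about 0.51 MeV, so a pair annihilating essentially at rest yields two gamma-ray photons of about 0.51 MeV each; conversely, a gamma ray must carry at least about 1.02 MeV, and must pass near a nucleus so that momentum can be conserved, in order to create an electron-positron pair.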
D1 Discovery of the Muon
In 1935 the Japanese physicist Yukawa Hideki developed a theory explaining how a nucleus
is held together, despite the mutual repulsion of its protons, by postulating
the existence of a particle intermediate in mass between the electron and the
proton. In 1936 Anderson and his coworkers discovered
a new particle of 207 electron masses in secondary cosmic radiation; now called
the mu-meson or muon, it
was first thought to be Yukawa's nuclear “glue.”
Subsequent experiments by the British physicist Cecil Frank Powell and others
led to the discovery of a somewhat heavier particle of 270 electron masses, the
pi-meson or pion (also obtained from secondary cosmic
radiation), which was eventually identified as the missing link in Yukawa's theory.
Many additional particles have since been found in
secondary cosmic radiation and through the use of large accelerators. They
include numerous massive particles, classed as hadrons (particles that take
part in the “strong” interaction, which binds atomic nuclei together),
including hyperons and various heavy mesons with masses ranging from about one
to three proton masses; and intermediate vector bosons such as the W and Z^0
particles, the carriers of the “weak” nuclear force. They may be electrically
neutral, positive, or negative, but never have more than one elementary
electric charge e. With lifetimes ranging from about 10^-8 down to 10^-14 sec, they decay into a variety of lighter particles. Each
particle has its antiparticle and carries some angular momentum. They all obey
certain conservation laws involving quantum numbers, such as baryon number,
strangeness, and isotopic spin.
In 1931 Pauli, in order
to explain the apparent failure of some conservation laws in certain
radioactive processes, postulated the existence of electrically neutral
particles of zero rest mass that nevertheless could carry energy and momentum.
This idea was further developed by the Italian-born American physicist Enrico Fermi, who named the missing particle the neutrino.
Uncharged and tiny, it is elusive, easily able to penetrate the entire earth
with only a small likelihood of capture. Nevertheless, it was eventually
discovered in a difficult experiment performed by the Americans Frederick Reines and Clyde Lorrain Cowan, Jr. Understanding of the internal structure of protons and
neutrons has also been derived from the experiments of the American physicist
Robert Hofstadter, using fast electrons from linear accelerators.
In the late 1940s a number of experiments
with cosmic rays revealed new types of particles, the existence of which had
not been anticipated. They were called strange particles, and their properties
were studied intensively in the 1950s. Then, in the 1960s, many new particles
were found in experiments with the large accelerators. The electron, proton,
neutron, photon, and all the particles discovered since 1932 are collectively
called elementary particles. But the term is actually a misnomer, for most of
the particles, such as the proton, have been found to have very complicated
internal structure.
Elementary particle physics is concerned with (1) the
internal structure of these building blocks and (2) how they interact with one
another to form nuclei. The physical principles that explain how atoms and
molecules are built from nuclei and electrons are already known. At present,
vigorous research is being conducted on both fronts in order to learn the
physical principles upon which all matter is built.
One popular theory about the internal structure of
elementary particles is that they are made of so-called quarks (see Quark),
which are subparticles of fractional charge; a
proton, for example, is made up of three quarks. This theory was first proposed
in 1964 by the American physicists Murray Gell-Mann and George Zweig. The theory explains a number of phenomena, and
physicists have collected a great deal of evidence of quarks in combinations
with each other. No individual quarks have been observed, however, and current
theory suggests that quarks may never be released as separate entities except
under such extreme conditions as those found during the very creation of the
universe. The theory postulated three kinds of quarks, but later experiments,
especially the discovery of the J/psi particle in
1974 by the American physicists Samuel C. C. Ting and Burton Richter, called
for the introduction of three additional kinds.
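The arithmetic of the fractional charges is simple: a proton is pictured as two “up” quarks, each of charge +2/3, and one “down” quark of charge -1/3, giving a total of one elementary charge, while a neutron, made of one up and two down quarks, has a total charge of zero.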
D2 Unified Field Theories
The interaction between elementary particles—and
if quarks exist, between the quarks—is a more difficult area of research. The most successful theories, thus far, are called gauge
theories. In these, the interaction between two kinds of particles is
characterized by symmetry. The symmetry between neutrons and protons, for
example, is such that if the identities of the particles are interchanged,
nothing changes as far as the “strong” force is concerned. The first of the
gauge theories applied to the electric and magnetic interactions between
charged particles. Here, the symmetry consists in the fact that changes in the
combination of electric and magnetic potentials have no effect on the results.
A powerful gauge theory, which has since been verified, was that proposed
independently by the American physicist Steven Weinberg and the Pakistani
physicist Abdus Salam in
1967 and 1968. Their model linked the intermediate vector boson with the
photon, thus uniting the electromagnetic and weak interactions, although only
for leptons. Later work by others (Sheldon Lee Glashow,
J. Iliopoulos, and L. Maiani)
showed how the model could be applied to hadrons (the strongly interacting
particles) as well.
Gauge theory, in principle, can be applied to any
force field, holding out the possibility that all the interactions, or forces,
can be brought together into a single unified field theory. Such efforts
inevitably involve the concept of symmetry. Generalized symmetries extend to
particle interchanges that vary from point to point in space and time. The
difficulty for physicists is that such symmetries, while mathematically
elegant, do not extend scientific understanding of the underlying nature of
matter. For this reason, many physicists are exploring the possibilities of
so-called supersymmetry theories, which would
directly relate fermions and bosons to one another by postulating further
particle “twins” to those now known, differing only in spin. Doubts have been expressed
about such efforts, but another approach known as “superstring” theory is
attracting a good deal of interest. In such theories, fundamental particles are
considered not as dimensionless objects but as “strings” that extend
one-dimensionally to lengths of no more than 10^-35 meters. Such
theories solve a number of problems for the physicists who are working on
unified field theories, but they are still only highly theoretical constructs.
E Nuclear Physics
In 1931 the American physicist Harold Clayton Urey discovered the hydrogen isotope deuterium and made
heavy water from it. The deuterium nucleus, or deuteron (one proton plus one
neutron), makes an excellent bombarding particle for inducing nuclear
reactions. The French physicists Irène and Frédéric Joliot-Curie produced the first artificially
radioactive nucleus in 1933 and 1934, leading to the production of
radioisotopes for use in archaeology, biology, medicine, chemistry, and other
sciences.
Fermi and many collaborators attempted a series of
experiments to produce elements beyond uranium by bombarding uranium with
neutrons. They succeeded, and now at least a dozen such transuranium
elements have been made. As their work continued, an even more important
discovery was made. Irène Joliot-Curie, the German
physicists Otto Hahn and Fritz Strassmann, the
Austrian physicist Lise Meitner,
and the British physicist Otto Robert Frisch found that some uranium nuclei
broke into two parts, a phenomenon called nuclear fission. At the same time, a
huge amount of energy was released by mass conversion, as well as some
neutrons. These results suggested the possibility of a self-sustained chain
reaction, and this was achieved by Fermi and his group in 1942, when the first
nuclear reactor went into operation. Technological developments followed
rapidly; the first atomic bomb was produced in 1945 as a result of a massive
program under the direction of the American physicist J. Robert Oppenheimer,
and the first nuclear power reactor for the production of electricity went into
operation in England in 1956, yielding 78 million watts. See Nuclear
Weapons.
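The numbers involved convey the scale, using representative values: each fissioning uranium nucleus releases roughly 200 MeV, most of it as kinetic energy of the two fragments, whereas a chemical reaction such as combustion releases only a few electron volts per atom. This factor of tens of millions is what makes both nuclear weapons and nuclear power plants possible with comparatively small amounts of fuel.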
Further developments were based on the investigation of
the energy source of the stars, which the German American physicist Hans
Albrecht Bethe showed to be a series of nuclear
reactions occurring at temperatures of millions of degrees. In these reactions,
four hydrogen nuclei are converted into a helium nucleus, with two positrons
and massive amounts of energy forming the by-products. This nuclear-fusion
process was adopted in modified form, largely based on ideas developed by the
Hungarian-American physicist Edward Teller, as the basis of the fusion or
hydrogen bomb. First detonated in 1952, it is a weapon much more powerful than
the fission bomb. A small fission bomb provides the high temperature necessary
to trigger fusion of hydrogen.
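A rough illustration, using standard atomic masses, shows the scale of the stellar energy source: the four hydrogen nuclei are about 0.7 percent heavier than the helium nucleus they form, and that small mass difference, converted through E = mc^2, amounts to roughly 26 MeV for every helium nucleus produced, which is why stars can shine for billions of years on their hydrogen supply.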
Much current research is devoted to producing a
controlled, rather than an explosive, fusion device, which would be less
radioactive than a fission reactor and would provide an almost limitless source
of energy. In December 1993 significant progress was made toward this goal when
researchers at Princeton University used the Tokamak
Fusion Test Reactor to produce a controlled fusion reaction that generated 5.6 million watts of power. However, the tokamak consumed
more power than it produced during its operation.
F Solid-State Physics
In solids, the atoms are closely packed,
leading to strong interactive forces and numerous interrelated effects that are
not observed in gases, where the molecules largely act independently.
Interaction effects lead to the mechanical, thermal, electrical, magnetic, and
optical properties of solids, an area that remains difficult to handle
theoretically, although much progress has been made.
A principal characteristic of most solids is their
crystalline structure, with the atoms arranged in
regular and geometrically repeating arrays (see Crystal). The specific
arrangement of the atoms may arise from a variety of forces; thus, some solids,
such as sodium chloride, or common salt, are held together by ionic bonds
originating in the electric attraction between the ions of which the materials
are composed. In others, such as diamond, atoms share electrons, giving rise to
covalent bonding. Inert substances, such as neon, exhibit neither of these
bonds. Their existence is a result of the so-called van der
Waals forces, named after the Dutch physicist
Johannes Diderik van der Waals. These forces exist between neutral molecules or
atoms as a result of electric polarization. Metals, on the other hand, are
bonded by a so-called electron gas, or electrons that are freed from the outer
atomic shell and shared by all atoms, and that define most properties of the
metal (see Metallography; Metals).
The sharp, discrete energy levels permitted to the
electrons in individual atoms become broadened into energy bands when the atoms
become closely packed in a solid. The width and separation of these bands
define many properties: separation by a wide, so-called forbidden band, in which no electron states exist, restricts the motion of the electrons and makes the material a good electrical and thermal insulator, whereas overlapping energy bands, with their associated ease of electron motion, make it a good conductor of electricity and heat. If the forbidden band is narrow, a few fast
electrons may be able to jump across, yielding a semiconductor. In this case
the energy-band spacing may be greatly affected by minute amounts of
impurities, such as arsenic in silicon. The lowering of a high-energy band by
the impurity results in a so-called donor of electrons, or an n-type
semiconductor. The raising of a low-energy band by an impurity like gallium
results in an acceptor, where the vacancies or “holes” in the electron
structure act like movable positive charges and are characteristic of p-type
semiconductors. A number of modern electronic devices, notably the transistor,
developed by the American physicists John Bardeen,
Walter Houser Brattain, and William Bradford
Shockley, are based on these semiconductor properties.
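Representative figures, using commonly quoted values, make the distinction concrete: in silicon the forbidden band is about 1.1 eV wide, while the typical thermal energy of an electron at room temperature is only about 0.025 eV, so only a small fraction of electrons can jump the gap; in an insulator such as diamond the gap is roughly 5 eV and essentially none can. The deliberate addition of impurities at levels of the order of one part in a million is enough to change the conductivity of silicon by many orders of magnitude.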
Magnetic properties in a solid arise from the electrons'
acting like tiny magnetic dipoles. Electron spin plays a big role in magnetism,
leading to spin waves that have been observed in some solids. Almost all solid
properties depend on temperature. Thus, ferromagnetic materials, including iron
and nickel, lose their normal strong residual magnetism at a characteristic
high temperature, called the Curie temperature. Electrical resistance usually
decreases with decreasing temperature, and for certain materials, called superconductors, it vanishes entirely at temperatures near absolute zero. These and many other phenomena observed in solids depend on energy
quantization and can best be described in terms of effective “particles” such
as phonons, polarons, and magnons.
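For example, iron retains its strong ferromagnetism only below its Curie temperature of about 770°C; above that temperature thermal agitation overwhelms the alignment of the atomic magnetic dipoles and the permanent magnetism disappears.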
G Cryogenics
At very low temperatures (near absolute zero), many
materials exhibit strikingly different characteristics (see Cryogenics).
At the beginning of the 20th century the Dutch physicist Heike Kamerlingh Onnes developed techniques
for producing these low temperatures and discovered the superconductivity of
mercury: It loses all electrical resistance at about 4 K. Many other elements,
alloys, and compounds do the same at their characteristic near-zero
temperature, with originally magnetic materials becoming magnetic insulators.
The theory of superconductivity, developed largely by the American physicists
John Bardeen, Leon N. Cooper, and John Robert Schrieffer, is extremely complicated, involving the pairing
of electrons in the crystal lattice.
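In rough outline, the pairing works as follows: electrons of opposite momentum and spin form bound pairs, called Cooper pairs, through a weak attraction transmitted by vibrations of the crystal lattice; the paired electrons then move through the lattice collectively and can no longer be scattered individually by impurities and lattice vibrations, so the electrical resistance vanishes.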
Another fascinating discovery was that helium does not
freeze but changes at about 2 K from an ordinary liquid, He I, to the superfluid He II, which has no viscosity and has a thermal
conductivity about 1000 times greater than silver. Films of He II can creep up
the walls of their containing vessels and He II can readily permeate some
materials like platinum. No fully satisfactory theory is yet available for this
behavior.
H Plasma Physics
A plasma is any substance
(usually a gas) whose atoms have one or more electrons detached and therefore
become ionized. The detached electrons remain, however, within the gas, which as a whole stays electrically neutral. The ionization can be
effected by the introduction of large concentrations of energy, such as
bombardment with fast external electrons, irradiation with laser light, or by
heating the gas to very high temperatures (see Laser). The individually
charged plasma particles respond to electric and magnetic fields and can
therefore be manipulated and contained.
Plasmas are found in gas-filled light sources,
such as a neon lamp, in interstellar space where residual hydrogen is ionized
by radiation, and in stars whose great interior temperatures produce a high
degree of ionization, a process closely connected with the nuclear fusion that
supplies the energy of stars. For the hydrogen nuclei to fuse into heavier
nuclei, they must be fast enough to overcome their mutual electric repulsion.
This implies high temperature (millions of degrees) when the hydrogen ionizes
into a plasma. In order to produce a controlled fusion, or thermonuclear
reaction, it is necessary to generate and contain plasmas magnetically; this is
an important but difficult problem that falls in the field of magnetohydrodynamics.
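A rough estimate indicates why such temperatures are required: to approach within the short range of the nuclear force, two hydrogen nuclei must have kinetic energies of the order of thousands of electron volts, and since the average thermal energy of a particle is only about 10^-4 eV for each degree of temperature, energies of that size are reached only at temperatures of tens of millions of degrees, comparable to those in the sun's core.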
I Lasers
An important recent development is that of the
laser, an acronym for light amplification by stimulated emission of radiation.
In lasers, which may have gases, liquids, or solids as the working substance, a
large number of atoms are raised to a high energy level and caused to release
this energy simultaneously, producing coherent light where all waves are in
phase. Similar techniques are used for producing microwave emissions by the use
of masers. The coherence of the light allows for very high intensity, sharp
wavelength light beams that remain narrow over tremendous distances; they are
far more intense than light from any other source. Continuous lasers can
deliver hundreds of watts of power, and pulsed lasers can produce millions of
watts of power for very short periods. Developed during the 1950s and 1960s,
largely by the American engineer and inventor Gordon Gould and the American
physicists Charles Hard Townes, T. H. Maiman, Arthur Leonard Schawlow,
and Ali Javan, the laser today has become an
extremely powerful tool in research and technology, with applications in communications,
medicine, navigation, metallurgy, fusion, and material cutting.
J Astrophysics
The construction of large and specially designed
optical telescopes has led to the discovery of new stellar objects, including a
number of quasars, which are billions of light-years away, and has led to a
better understanding of the structure of the universe. Radio astronomy has
yielded other important discoveries, such as pulsars and the cosmic
background radiation, which probably dates from the origin of the universe. The
evolutionary history of the stars is now well understood in terms of nuclear
reactions. As a result of recent observations and theoretical calculations, the
belief is now widely held that all matter was originally in one dense location
and that between 10 and 20 billion years ago it exploded in one titanic event
often called the big bang. The aftereffects of the
explosion have led to a universe that appears to be still expanding. A puzzling
aspect of this universe, recently revealed, is that the galaxies are not
uniformly distributed. Instead, vast voids are bordered by galactic clusters
shaped like filaments. The pattern of these voids and filaments lends itself to
nonlinear mathematical analysis of the sort used in chaos theory. See also Inflationary
Theory.
Microsoft ® Encarta ® Reference Library 2003. ©
1993-2002 Microsoft Corporation. All rights reserved.