Astronomy, its influence and some of its uses and applications in the last few centuries

From a historical point of view, astronomy is one of the oldest natural and exact sciences, with observations and scientific explanations by various cultures dating back to early Antiquity.

Spherical or observational astronomy is the oldest branch of astronomy. Observations of celestial objects have been important for religious and astrological purposes, and for timekeeping and navigation.

Celestial navigation has been used for position fixing by observing the positions of celestial bodies, including the Sun, Moon, planets and stars. Instruments for measuring the altitudes of stars and the angular distances between celestial objects have been used since medieval times; the sextant, developed in the 18th century, became the standard tool for this purpose.
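As a simplified sketch of one classical technique, the noon sight: at local apparent noon, with the Sun due south of a northern-hemisphere observer, latitude equals 90° minus the observed altitude plus the Sun's declination. The helper function below is illustrative only, not a practical navigation routine (real sights require corrections for refraction, dip, and the Sun's semidiameter).

```python
# Sketch of a classical noon-sight latitude calculation (illustrative only):
# at local apparent noon, with the Sun due south of a northern-hemisphere
# observer, latitude = 90 deg - observed altitude + solar declination.

def noon_sight_latitude(altitude_deg: float, declination_deg: float) -> float:
    """Latitude (degrees north) from a sextant noon altitude of the Sun."""
    return 90.0 - altitude_deg + declination_deg

# Example: observed altitude 50 deg, solar declination +10 deg
print(noon_sight_latitude(50.0, 10.0))  # -> 50.0 (degrees north)
```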

What is known as the scientific revolution essentially started in the 16th century with the publication in 1543 of Nicolaus Copernicus's work on heliocentric astronomy. It continued with the works, observations and ideas of scientists and astronomers such as Tycho Brahe, Galileo Galilei, and Johannes Kepler, and culminated with Isaac Newton’s Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687, which expounded his laws of motion and the law of universal gravitation and established the discipline of classical mechanics.

Newton’s Principia historically had an indirect, albeit significant, influence on the progress of navigation and on related topics such as tide analysis and prediction.

Newton’s theory of gravitation first enabled an explanation of why there were generally two tides a day, not one, and gave hope for a detailed understanding of tidal forces and behavior.

In the Principia, Newton presented his mathematical theory of tides and lunar motion. Sea travel was essential for trade, and British and other navigators needed a good knowledge of tidal cycles and patterns and of how they affect navigation and the determination of longitude.

In the 17th century, the creation of learned societies, such as the Royal Society in England under royal patronage and with the help of well-known personalities, and the French Academy of Sciences founded by Louis XIV and his minister Colbert, helped advance interest in scientific and astronomical research.

During the 18th and 19th centuries, the study of the three-body problem by Euler, Clairaut, and d’Alembert led to more accurate predictions about the motions of the Moon and planets. This work was further refined by Joseph-Louis Lagrange and Pierre Simon Laplace, allowing the masses of the planets and moons to be estimated from their perturbations.

The scientists and astronomers who came after Newton and continued or developed his work made additional contributions to the theory of tides. For example, in 1776 Laplace formulated a set of linear partial differential equations for tidal flow described as a barotropic two-dimensional sheet flow (a flow whose density is a function of pressure only). Laplace obtained these equations by simplifying the fluid dynamics equations.
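In modern notation, the linearized equations Laplace obtained (the Laplace tidal equations) can be written schematically, in local Cartesian form and with the tidal forcing terms omitted, where ζ is the surface elevation, (u, v) the horizontal velocity components, D the mean depth, f the Coriolis parameter, and g the gravitational acceleration:

$$\frac{\partial u}{\partial t} - f v = -g\,\frac{\partial \zeta}{\partial x}, \qquad \frac{\partial v}{\partial t} + f u = -g\,\frac{\partial \zeta}{\partial y}, \qquad \frac{\partial \zeta}{\partial t} + \frac{\partial (D u)}{\partial x} + \frac{\partial (D v)}{\partial y} = 0.$$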

Nathaniel Bowditch, regarded as the founder or one of the founders of modern maritime navigation, read Newton’s Principia as a young man, and then translated Laplace’s Mécanique céleste (Celestial Mechanics), a work that extended and completed Newton’s Principia and Newton’s theories.

Sometimes scientists and astronomers were supported by the state, and sometimes by influential personalities. In addition to his contributions to mathematics, Carl Friedrich Gauss is also known for his contributions to astronomy and planetary theory, having among other things published a treatise entitled Theoria motus corporum coelestium in sectionibus conicis solem ambientium (Theory of the Motion of the Celestial Bodies Moving in Conic Sections around the Sun). Gauss was financially supported during his years of study by the Duke of Brunswick.

Larger and more powerful telescopes were developed and built during the 18th and 19th centuries, contributing to the progress in observational and theoretical astronomy. One of the famous applications of astronomical theories and celestial mechanics around the middle of the 19th century was the prediction of the existence and position of the planet Neptune, mainly by Urbain Le Verrier, using only mathematics and astronomical observations of the planet Uranus. Telescopic observations confirming the existence of a major planet (subsequently named Neptune) were made in September 1846 at the Berlin Observatory.

William Thomson (Lord Kelvin) applied Fourier analysis to the determination of tidal motion, explaining tidal phenomena by means of harmonic analysis. As a practical application of the astronomical theory of tides and of lunar theory (the theory of the Moon’s motion as deduced from the law of gravitation, with its perturbations), at the end of the 19th century Thomson and others designed tide-predicting machines: special-purpose mechanical analog computers built to predict the ebb and flow of sea tides and the irregular variations in their heights, which change in mixtures of rhythms that never (in the aggregate) repeat themselves exactly. Their purpose was to shorten the difficult and error-prone computations of tide prediction. These machines provided predictions valid from hour to hour and day to day for a year or more ahead. They were widely used for constructing official tidal predictions for general marine navigation, and were viewed as being of strategic military importance until the second half of the 20th century.

The image below shows the tide-predicting machine built by Sir William Thomson in 1876. This machine combined ten tidal components, with one pulley for each component, and could trace the heights of the tides for one year in about four hours.

(Image source: https://en.m.wikipedia.org/wiki/File:DSCN1739-thomson-tide-machine.jpg )
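What the machine computed mechanically can be sketched numerically: the predicted height is a mean level plus a sum of cosine constituents, one per pulley. The amplitudes, speeds and phases below are illustrative made-up values, not those of any real port (only the constituent speeds are the standard astronomical ones).

```python
import math

# Harmonic tide prediction: mean sea level plus a sum of cosine constituents.
# Each constituent has an angular speed (degrees/hour), an amplitude (metres)
# and a phase lag (degrees). Amplitudes/phases below are illustrative only.
CONSTITUENTS = [
    # (name, speed deg/h, amplitude m, phase deg)
    ("M2", 28.984, 1.20, 35.0),   # principal lunar semidiurnal
    ("S2", 30.000, 0.40, 60.0),   # principal solar semidiurnal
    ("K1", 15.041, 0.15, 120.0),  # lunisolar diurnal
    ("O1", 13.943, 0.10, 95.0),   # lunar diurnal
]
MEAN_LEVEL = 2.0  # metres above chart datum

def tide_height(t_hours: float) -> float:
    """Predicted tide height at t hours after the reference epoch."""
    h = MEAN_LEVEL
    for _name, speed, amplitude, phase in CONSTITUENTS:
        h += amplitude * math.cos(math.radians(speed * t_hours - phase))
    return h

# Tabulate a day of hourly predictions, as the machine traced on paper.
heights = [tide_height(t) for t in range(24)]
print(min(heights), max(heights))
```

Summing a handful of such terms, hour by hour for a year, is exactly the tedious computation the machines were built to mechanize.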


Important advances were made in astronomy during the 18th and 19th centuries due to observations as well as theoretical investigations and publications. These advances were accompanied or followed by applications related to navigation and nautical astronomy, sometimes stimulated or supported by societal and commercial interests or needs.

Progress in astronomical theory, research and applications continued during the 20th century in several directions.
In the late 19th century and for most of the 20th century, astronomical images were made using photographic equipment. Modern images are obtained using digital detectors, particularly charge-coupled devices (CCDs), and recorded on modern media.

Radio astronomy flourished mostly in the second half of the 20th century.
The discovery of the cosmic microwave background radiation, regarded as evidence for the Big Bang theory, was made through radio astronomy.
Radio astronomy uses large radio antennas known as radio telescopes, which are used either singly or as multiple linked telescopes employing the techniques of radio interferometry and aperture synthesis.

Other observational branches of astronomy include infrared astronomy, X-ray astronomy, and ultraviolet astronomy.

Related fields or subfields of astronomy that were developed during the 20th century include astrophysics, astrochemistry, stellar astronomy, galactic astronomy, physical cosmology, and astrobiology, among others.

Theoretical astronomy in the 20th century studied the existence of objects such as black holes and neutron stars, which have been used to explain observed phenomena such as quasars, pulsars, blazars, and radio galaxies. Physical cosmology made great advances during the 20th century: the Big Bang model was formulated in the first half of the century and is supported by the cosmic microwave background radiation, Hubble’s law, and the cosmological abundances of the elements.

Space telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or blurred by the atmosphere.

Operational space telescopes orbiting Earth outside the atmosphere avoid light pollution from artificial light sources on Earth. Their angular resolution is often much higher than that of a ground-based telescope with a similar aperture.

The image below shows the Hubble Space Telescope, which was launched into low Earth orbit in 1990, and remains in operation in 2023:

(Image source: https://en.m.wikipedia.org/wiki/File:HST-SM4.jpeg )

During the 1990s, the measurement of the stellar wobble (or Doppler spectroscopy) of nearby stars was used to detect large extrasolar planets orbiting those stars.
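A rough sketch of the idea: the orbiting planet makes the star execute a small reflex orbit, producing a periodic Doppler shift whose semi-amplitude K follows from Kepler's laws. The parameter values below are approximations for a 51 Pegasi b-like system (circular, edge-on orbit assumed), chosen for illustration.

```python
import math

# Radial-velocity semi-amplitude of a star's wobble due to an orbiting planet:
#   K = (2*pi*G/P)^(1/3) * m_p*sin(i) / ((M_star + m_p)^(2/3) * sqrt(1 - e^2))
# Illustrative values roughly matching 51 Pegasi b (P ~ 4.23 d, m_p ~ 0.47 M_Jup).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg

def rv_semi_amplitude(period_days, m_planet_kg, m_star_kg, e=0.0, sin_i=1.0):
    """Stellar radial-velocity semi-amplitude K in m/s."""
    period_s = period_days * 86400.0
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet_kg * sin_i
            / ((m_star_kg + m_planet_kg) ** (2 / 3) * math.sqrt(1 - e * e)))

K = rv_semi_amplitude(4.23, 0.47 * M_JUP, 1.06 * M_SUN)
print(round(K, 1), "m/s")  # a few tens of m/s, within reach of 1990s spectrographs
```

The point of the sketch is the order of magnitude: a Jupiter-mass planet in a tight orbit shifts the star's spectral lines by tens of metres per second, which is why such "hot Jupiters" were the first extrasolar planets detected this way.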

Human missions and crewed spaceflights have explored outer space (until now mostly in the vicinity of Earth and to the Moon) since the second half of the 20th century.
Interplanetary space probes have flown to all the planets in the Solar System, as well as to the dwarf planets Pluto and Ceres and to several asteroids. Orbiters and landers usually return more information than fly-by missions.

The difference between quantum mechanics and quantum field theory, some basic explanations

Let’s start with some historical notes and considerations.

Quantum mechanics was gradually created and formulated during the first decades of the 20th century. Important milestones include Planck’s quantum hypothesis of 1900, according to which any energy-radiating atomic system can theoretically be divided into a number of discrete “energy elements”, each proportional to the frequency ν with which it radiates energy, and then Einstein’s 1905 interpretation of the photoelectric effect.
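In modern notation, Planck's hypothesis and the photoelectric relation that followed from it are usually written as:

$$E = n h \nu \;\; (n = 1, 2, 3, \ldots), \qquad E_{\mathrm{kin,max}} = h\nu - W,$$

where h is Planck's constant, ν the frequency of the radiation, and W the work function of the illuminated metal.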

The origins of quantum field theory date to the 1920s and to the problem of creating a quantum theory of the electromagnetic field.
The first coherent and acceptable theory of quantum electrodynamics, which included the electromagnetic field and electrically charged matter as quantum mechanical objects, was created by Paul Dirac in 1927.

A further development for quantum field theory (QFT) came with the discovery of the Dirac equation, originally formulated and interpreted as a single-particle equation analogous to the Schrödinger equation; unlike the latter, however, the Dirac equation satisfies both Lorentz invariance (the requirements of special relativity) and the rules of quantum mechanics.
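In natural units (ħ = c = 1), the Dirac equation for a free particle of mass m reads:

$$\left(i\gamma^\mu \partial_\mu - m\right)\psi = 0,$$

where the γ^μ are the 4×4 Dirac matrices and ψ is a four-component spinor.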

Theoretical formulations and advances during the 1940s and 1950s resulted in the introduction of renormalized quantum electrodynamics (QED).

Quantum chromodynamics (QCD), the theory of the strong interaction between quarks mediated by gluons, was formulated during the 1960s.

In the 1960s and 1970s it was shown that the weak nuclear force and quantum electrodynamics could be merged into a single electroweak interaction.

The Standard Model of particle physics, the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles, is a paradigm of a quantum field theory for theorists, exhibiting a wide range of physical phenomena.

Quantum field theory is a quantum mechanical theory. In this theory, fields with quantized normal modes of oscillation represent particles. So particles are regarded as excitations of quantum fields filling all of space. Relativistic theories of quantized fields depict the interactions between elementary particles.
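Schematically, a free real scalar field illustrates this: the field is expanded in normal modes with annihilation and creation operators a_p and a†_p, and a†_p acting on the vacuum creates a one-particle excitation of momentum p:

$$\phi(\mathbf{x}) = \int \frac{d^3 p}{(2\pi)^3} \, \frac{1}{\sqrt{2 E_{\mathbf{p}}}} \left( a_{\mathbf{p}} \, e^{i \mathbf{p}\cdot\mathbf{x}} + a_{\mathbf{p}}^\dagger \, e^{-i \mathbf{p}\cdot\mathbf{x}} \right), \qquad E_{\mathbf{p}} = \sqrt{|\mathbf{p}|^2 + m^2}.$$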

In general, quantum field theory is a theoretical framework combining classical field theory, special relativity, and quantum mechanics. It is used to build physical models of subatomic particles (in relation to particle physics) and quasi-particles (in relation to condensed matter physics).

Below is a helpful description and explanation of quantum field theory, taken from the book A Modern Introduction to Quantum Field Theory, by Michele Maggiore:

“Quantum field theory is a synthesis of quantum mechanics and special relativity, and it is one of the great achievements of modern physics. Quantum mechanics, as formulated by Bohr, Heisenberg, Schrodinger, Pauli, Dirac, and many others, is an intrinsically non-relativistic theory. To make it consistent with special relativity, the real problem is not to find a relativistic generalization of the Schrodinger equation. Actually, Schrodinger first found a relativistic equation, that today we call the Klein–Gordon equation. He then discarded it because it gave the wrong fine structure for the hydrogen atom, and he retained only the non-relativistic limit. Wave equations, relativistic or not, cannot account for processes in which the number and the type of particles changes, as in almost all reactions of nuclear and particle physics.[…] Furthermore, relativistic wave equations suffer from a number of pathologies, like negative-energy solutions.

A proper resolution of these difficulties implies a change of viewpoint, from wave equations, where one quantizes a single particle in an external classical potential, to quantum field theory, where one identifies the particles with the modes of a field, and quantizes the field itself. The procedure also goes under the name of second quantization.

The methods of quantum field theory (QFT) have great generality and flexibility and are not restricted to the domain of particle physics. In a sense, field theory is a universal language, and it permeates many branches of modern research. In general, field theory is the correct language whenever we face collective phenomena, involving a large number of degrees of freedom, and this is the underlying reason for its unifying power. For example, in condensed matter the excitations in a solid are quanta of fields, and can be studied with field theoretical methods. An especially interesting example of the unifying power of QFT is given by the phenomenon of superconductivity which, expressed in the field theory language, turns out to be conceptually the same as the Higgs mechanism in particle physics. As another example we can mention that the Feynman path integral, which is a basic tool of modern quantum field theory, provides a formal analogy between field theory and statistical mechanics, which has stimulated very important exchanges between these two areas.”

For additional info and details about these topics, the following links can be viewed or consulted:

Differences between principles of QM and QFT

What is the difference between QM and non-relativistic QFT

Formalism of Quantum Field Theory vs Quantum Mechanics

Quantizing gravity, or “gravitizing” quantum theory?

Roger Penrose’s writings show that he favors the theory of general relativity and determinism in physics, and that he is critical of quantum mechanics, its description of reality, and its probabilistic formulation. Other physicists and scientists hold somewhat similar or comparable views.

Many physicists seem to be trying to formulate a theory of quantum gravity. But there is a different or opposite approach highlighted by some insightful remarks by Penrose.

In a 2013 paper entitled “On the Gravitization of Quantum Mechanics 1: Quantum State Reduction”, Penrose wrote:

” This paper argues that the case for “gravitizing” quantum theory is at least as strong as that for quantizing gravity. Accordingly, the principles of general relativity must influence, and actually change, the very formalism of quantum mechanics. Most particularly, an “Einsteinian”, rather than a “Newtonian” treatment of the gravitational field should be adopted, in a quantum system, in order that the principle of equivalence be fully respected. This leads to an expectation that quantum superpositions of states involving a significant mass displacement should have a finite lifetime[…]”

Penrose continues:

” The title of this article contains the phrase “gravitization of quantum Mechanics” in contrast to the more usual “quantization of gravity”. This reversal of wording is deliberate, of course, indicating my concern with the bringing of quantum theory more in line with the principles of Einstein’s general relativity, rather than attempting to make Einstein’s theory—or any more amenable theory of gravity—into line with those of quantum mechanics (or quantum field theory).[…]

I think that people tend to regard the great twentieth century revolution of quantum theory, as a more fundamental scheme of things than gravitational theory. Indeed, quantum mechanics, strange as its basic principles seem to be, has no evidence against it from accepted experiment or observation, and people tend to argue that this theory is so well established now that one must try to bring the whole of physics within its compass. Yet, that other great twentieth century revolution, namely the general theory of relativity, is also a fundamental scheme of things which, strange as its basic principles seem to be, also has no confirmed experiments or observations that tell against it[…]”

Penrose thinks that it is essentially quantum mechanics that ought to be changed, not general relativity. In his book The Road to Reality, Penrose states:

” My own viewpoint is that the question of ‘reality’ must be addressed in quantum mechanics—especially if one takes the view (as many physicists appear to) that the quantum formalism applies universally to the whole of physics—for then, if there is no quantum reality, there can be no reality at any level (all levels being quantum levels, on this view). To me, it makes no sense to deny reality altogether in this way. We need a notion of physical reality, even if only a provisional or approximate one, for without it our objective universe, and thence the whole of science, simply evaporates before our contemplative gaze![…]

We must think of a wavefunction as one entire thing. If it causes a spot to appear at one place, then it has done its job, and this apparent act of creation forbids it from causing a spot to appear somewhere else as well. Wavefunctions are quite unlike the waves of classical physics in this important respect. The different parts of the wave cannot be thought of as local disturbances, each carrying on independently of what is happening in a remote region. Wavefunctions have a strongly non-local character; in this sense they are completely holistic entities.

[…]there are powerful positive reasons […] to believe that the laws of present-day quantum mechanics are in need of a fundamental (though presumably subtle) change. These reasons come from within accepted physical principles and from observed facts about the universe. Yet, I find it remarkable how few of today’s quantum physicists are prepared to entertain seriously the idea of an actual change in the ground rules of their subject. Quantum mechanics, despite its extraordinary exception-free experimental support and strikingly confirmed predictions, is a comparatively young subject, being only about three-quarters of a century old (dating this from the establishment of the mathematical theory by Dirac and others, based on the schemes of Heisenberg and Schrodinger, in the years immediately following 1925). When I say ‘comparatively’, I am comparing the theory with that of Newton, which lasted for nearly three times as long before it needed serious modification in the form of special and then general relativity, and quantum mechanics. […]

Moreover, Newton’s theory did not have a measurement paradox.[…]

Newton’s gravitational theory has the particular mathematical elegance that the gravitational forces always add up in a completely linear fashion; yet this is supplanted, in Einstein’s more precise theory, by a distinctly subtle type of non-linearity in the way that gravitational effects of different bodies combine together. And Einstein’s theory is certainly not short on elegance—of a quite different kind from that of Newton.[…]

Einstein’s theory […] involved a completely radical change in perspective. This, it seems to me, is the general kind of change in the structure of quantum mechanics that we must look towards, if we are to obtain the (in my view) needed non-linear theory to replace the present-day conventional quantum theory. Indeed, it is my own perspective that Einstein’s general relativity will itself supply some necessary clues as to the modifications that are required. The 20th century gave us two fundamental revolutions in physical thought— and, to my way of thinking, general relativity has provided as impressive a revolution as has quantum theory (or quantum Field theory). Yet, these two great schemes for the world are based upon principles that lie most uncomfortably with each other. The usual perspective, with regard to the proposed marriage between these theories, is that one of them, namely general relativity, must submit itself to the will of the other. There appears to be the common view that the rules of quantum Field theory are immutable, and it is Einstein’s theory that must bend itself appropriately to fit into the standard quantum mould. Few would suggest that the quantum rules must themselves admit to modification, in order to ensure an appropriately harmonious marriage. Indeed, the very name ‘quantum gravity’, that is normally assigned to the proposed union, carries the implicit connotation that it is a standard quantum (field) theory that is sought. Yet, I would claim that there is observational evidence that Nature’s view of this union is very different from this! I contend that her design for this union must be what, in our eyes, would be a distinctly non-standard one, and that an objective state reduction must be one of its important features.”

While recognizing the experimental verifications and successes of quantum physics, Penrose uses the word faith in relation to quantum mechanics, and thinks there are limitations to that “faith”.

The following quoted lines are taken from the book Fashion, Faith, and Fantasy in the New Physics of the Universe by Penrose:

” Quantum theory explains the phenomenon of chemical bonding, the colours and physical properties of metals and other substances, the detailed nature of the discrete frequencies of light that particular elements and their compounds emit when heated (spectral lines), the stability of atoms (where classical theory would predict a catastrophic collapse with the emission of radiation as electrons spiral rapidly into their atomic nuclei), superconductors, superfluids, Bose–Einstein condensates […]

When we combine quantum theory with special relativity we get quantum field theory, which is essential for, in particular, modern particle physics.[…]

Quantum theory is commonly regarded as a deeper theory than the classical scheme of particles and forces that had preceded it.[…]

The dogma of quantum mechanics is thus seen to be very well founded indeed, as it rests on an enormous amount of extremely hard evidence. With systems that are simple enough that detailed calculation can be carried out and sufficiently accurate experiments performed, we find an almost incredible precision in the agreement between the theoretical and observational results that are obtained.[…]

Perhaps the multitude of theoreticians involved in the formulation of quantum mechanics is a manifestation of the totally non-intuitive nature of that theory. Yet, as a mathematical structure, there is a remarkable elegance; and the deep coherence between the mathematics and physical behaviour is often as stunning as it is unexpected.[…]

Quantum mechanics provides, indeed, an overarching framework that would appear to apply to any physical process, at no matter what scale. There is perhaps no puzzle, therefore, in the fact that a profound faith has arisen among physicists, that all the phenomena of nature must adhere to it.[…]”

Penrose also mentions and criticizes the Copenhagen interpretation or view:

“According to standard quantum mechanics, the information in the quantum state of a system – or the wave function ψ – is what is needed for probabilistic predictions to be made for the results of experiments that might be performed upon that system.[…]

[According to the Copenhagen interpretation and] to various other schools of thought also, ψ is to be regarded as a calculational convenience with no ontological status other than to be part of the state of mind of the experimenter or theoretician, so that the actual results of observation can be probabilistically assessed. It seems that a good part of such a belief stems from the abhorrence felt by so many physicists that the state of the actual world could suddenly “jump” from time to time in the seemingly random way that is characteristic of the rules of quantum measurement.”

Moreover, Penrose is of the opinion that the de Broglie–Bohm theory (also known as pilot-wave theory or Bohmian mechanics)

“provides an interesting alternative ontology to that provided (or not really provided!) by the Copenhagen view, and it is fairly widely studied, though certainly not qualifying as a fashionable theory. It claims no alternative observational effects from that of conventional quantum mechanics, but provides a much more clear-cut picture of the “reality” of the world.”

I think that remarks, views and ideas similar to those of Roger Penrose ought to be taken into consideration. Quantizing gravity is not the only available option. It would be beneficial if alternatives such as those proposed by Penrose were considered, discussed, studied, and developed, in order to determine the best directions for future research and to formulate appropriate, consistent and comprehensive theoretical explanations and advances with regard to gravity and to physics in general.

Fields, rings, groups, their history and why they are called that way in math

In short, mathematicians usually coin terms (starting from the native language they use in their mathematical work) to describe or define theories or mathematical ”things”. These words are then translated into other languages, acquire better and more precise definitions, and become widely known and used.

For more details, let us review how or when mathematicians began using these words, starting with the origin or history of the word field in mathematics:

“The term Zahlenkörper (body of numbers) is due to Richard Dedekind (1831-1916). Dedekind used the term in his lectures of 1858 but the term did not come into general use until the early 1890s. Until then, the expression used was “rationally known quantities,” which means either the field of rational numbers or some finite extension of it, depending on the context.[…]

Dedekind used Zahlenkörper [literally “number body”] in Supplement X of his 4th edition of Dirichlet’s Vorlesungen über Zahlentheorie, section 159. In a footnote, he explained his choice of terminology, writing that, in earlier lectures (1857-8) he used the term ‘rationalen Gebietes’ and he says that Kronecker (1882) used the term ‘Rationalitaetsbereich’.

Dedekind did not allow for finite fields; for him, the smallest field was the field of rational numbers. According to a post in sci.math by Steve Wildstrom, ‘Dedekind’s ‘Koerper’ is actually what we would call a division ring rather than a field as it does not require that multiplication be commutative.’

Eliakim Hastings Moore (1862-1932) was apparently the first person to use the English word field in its modern sense and the first to allow for a finite field. He coined the expressions ‘field of order s’ and ‘Galois-field of order s=q^n.‘ These expressions appeared in print in December 1893 in the Bulletin of the New York Mathematical Society III. 75.”

(Source: https://jeff560.tripod.com/f.html)

In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word Körper, which means “body” or “corpus” (to suggest an organically closed entity).

In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. Kronecker’s notion did not cover the field of all algebraic numbers (which is a field in Dedekind’s sense), but on the other hand was more abstract than Dedekind’s in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as Q(π) abstractly as the rational function field Q(X).

The first clear definition of an abstract field is due to Heinrich Weber (1893).
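The abstract definition can be illustrated concretely: the integers modulo n form a field exactly when n is prime, since every nonzero element then has a multiplicative inverse. A small sketch:

```python
# Check whether Z/nZ is a field: every nonzero element must have a
# multiplicative inverse modulo n. This holds exactly when n is prime.
def is_field(n: int) -> bool:
    if n < 2:
        return False
    return all(any((a * b) % n == 1 for b in range(1, n))
               for a in range(1, n))

print(is_field(5))  # True: Z/5Z is the finite field GF(5)
print(is_field(4))  # False: 2 has no inverse mod 4 (2*2 = 0 mod 4)
```

Note that Dedekind's original notion excluded such finite fields; in his sense the smallest field was the rational numbers.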

Now about the origin of the word ring in mathematics:

In 1871, Richard Dedekind defined the concept of the ring of integers of a number field. In this context, he introduced the terms “ideal” (inspired by Ernst Kummer’s notion of ideal number) and “module” and studied their properties. But Dedekind did not use the term “ring” and did not define the concept of a ring in a general setting.

The term “Zahlring” (number ring) was coined by David Hilbert in 1892 and published in 1897 (in relation to algebraic number theory).

In 19th century German, the word “Ring” could mean “association”, which is still used today in English in a limited sense (e.g., spy ring), so if that were the etymology then it would be similar to the way “group” entered mathematics by being a non-technical word for “collection of related things”. According to Harvey Cohn, Hilbert used the term for a ring that had the property of “circling directly back” to an element of itself.

The image below is a photograph of David Hilbert, from 1912:

(Image source: https://en.m.wikipedia.org/wiki/File:Hilbert.jpg)

The first axiomatic definition of a ring was given by Adolf Fraenkel in 1914, but his axioms were stricter than those in the modern definition. For instance, he required every non-zero-divisor to have a multiplicative inverse. In 1921, Emmy Noether gave the modern axiomatic definition of (commutative) ring and developed the foundations of commutative ring theory in her paper Idealtheorie in Ringbereichen.

The mathematical disciplines or areas of study that influenced the formulation of group theory at the end of the 18th century and in the 19th century were geometry, number theory, and the theory of algebraic equations (related to the study of permutations).

Moreover,

“Evariste Galois introduced the term group in the form of the expression groupe de l’équation in his ‘Mémoire sur les conditions de résolubilité des équations par radicaux’ (written in 1830 but first published in 1846) Oeuvres mathématiques. p. 417. Cajori (vol. 2, page 83) points out that the modern definition of a group is somewhat different from that of Galois, for whom the term denoted a subgroup of the group of permutations of the roots of a given polynomial.

Group appears in English in Arthur Cayley, ‘On the theory of groups, as depending on the symbolic equation \theta^n=1, ‘Philosophical Magazine, 1854, vol. 7, pp. 40-47. […] The paper also introduced the term theory of groups. At the time this more abstract notion of a group made little impact.[…]

Klein and Lie use the term ‘closed system’ in their ‘Ueber diejenigen ebenen Curven, welche durch ein geschlossenes System von einfach unendlich vielen vertauschbaren linearen Transformationen in sich übergehen,’Mathematische Annalen, 4, (1871), 50-84. Klein adopted the term gruppe in his ‘Vergleichende Betrachtungen über neuere geometrische Forschungen’ written in 1872 [in relation to the Erlangen program].

Group-theory is found in English in 1888 in George Gavin Morrice’s translation of Felix Klein, Lectures on the Ikosahedron and the solution of Equations of the Fifth Degree.”

(Source: http://jeff560.tripod.com/g.html)

The image below shows a portrait of Évariste Galois (1811-1832):

(Image source: https://en.m.wikipedia.org/wiki/File:Evariste_galois.jpg)

After novel geometries such as hyperbolic and projective geometry (dealing with the behavior of geometric figures under various transformations) had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884.

The convergence of various sources into a uniform theory of groups started with Camille Jordan’s Traité des substitutions et des équations algébriques (Treatise on Substitutions and Algebraic Equations) in 1870.

Walther von Dyck (1882) introduced the idea of specifying a group by means of generators and relations, and was also the first to give an axiomatic definition of an “abstract group”, in the terminology of the time.

In the 20th century, groups gained wide recognition through the work of Ferdinand Georg Frobenius and William Burnside, who worked on the representation theory of finite groups, Richard Brauer’s modular representation theory, and Issai Schur’s papers. The theory of Lie groups, and more generally of locally compact groups, was studied by Hermann Weyl, Élie Cartan and others.

Additional information about fields, rings, groups and related topics can be found in Wikipedia and similar online resources.

Relevant and miscellaneous info about the Red Planet

It seems November 28 is called “Red Planet Day”. I have already written a post about Mars, but there are always additional interesting facts and information about Mars, some of which I will present here.

The atmosphere of Mars is composed of carbon dioxide (about 95%), molecular nitrogen (2.8%), and argon (2%). It also contains trace levels of water vapor, oxygen, carbon monoxide, hydrogen, and noble gases. The atmosphere of Mars is much thinner than Earth’s: the average surface pressure is only about 610 pascals (0.088 psi), which is less than 1% of Earth’s value. The currently thin Martian atmosphere precludes the existence of liquid water on the surface of Mars, but many studies suggest that the Martian atmosphere was thicker in the past. The Martian atmosphere is an oxidizing atmosphere: photochemical reactions in the atmosphere tend to oxidize organic species and turn them into carbon dioxide or carbon monoxide.
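The pressure comparison above is easy to verify with a few lines of arithmetic; the Earth sea-level value used here is the standard 101,325 Pa, assumed for illustration:

```python
# Quick check of the Mars/Earth surface-pressure comparison.
MARS_PA = 610          # average Martian surface pressure, Pa
EARTH_PA = 101_325     # standard Earth sea-level pressure, Pa (assumption)
PA_PER_PSI = 6894.757  # pascals per psi

ratio = MARS_PA / EARTH_PA
print(round(100 * ratio, 2))            # percentage of Earth's pressure, ~0.6%
print(round(MARS_PA / PA_PER_PSI, 3))   # the same 610 Pa expressed in psi, ~0.088
```

So 610 Pa is indeed about 0.6% of Earth’s sea-level pressure, matching the “less than 1%” figure, and converts to roughly 0.088 psi.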

Mars has two permanent polar ice caps. During a pole’s winter, it lies in continuous darkness, chilling the surface and causing the deposition of 25–30% of the atmosphere into slabs of CO2 ice (dry ice). The temperature and circulation on Mars vary every Martian year, as expected for any planet with an atmosphere and axial tilt.
The surface of Mars has a very low thermal inertia, meaning it heats quickly when the sun shines on it. Typical daily temperature swings, away from the polar regions, are around 100 K.

An example of a well-known geological feature on Mars is Olympus Mons, a large shield volcano. The volcano has a height of over 21.9 km (13.6 miles or 72,000 feet) as measured by the Mars Orbiter Laser Altimeter (MOLA). Olympus Mons is the youngest of the large volcanoes on Mars, having formed during Mars’s Hesperian Period, with eruptions continuing well into the Amazonian. The volcano is located in Mars’s western hemisphere, with its center at 18°39′N 226°12′E, just off the northwestern edge of the Tharsis bulge. There is a possibility that Olympus Mons is still active.

The image below shows a colorized topographic map of the volcano Olympus Mons, together with its surrounding aureole, from the Mars Orbiter Laser Altimeter (MOLA) instrument of the Mars Global Surveyor spacecraft:

(Image source: https://en.wikipedia.org/wiki/File:Olympus_Mons_aureole_MOLA_zoom_64.jpg)

Now for some explanations of the red color of Mars. The surface of the planet Mars appears reddish from a distance because of rusty dust suspended in the atmosphere and an omnipresent dust layer that is typically on the order of millimeters thick. A large part of the regolith of Mars, or its surface material, consists of iron oxide. Basically, rocks on Mars contain a lot of iron, and when they are exposed to the various atmospheric phenomena they ‘oxidize’ and turn a reddish color. The surface iron on Mars became oxidized, forming the iron oxide known more commonly as rust, a compound made of two iron atoms and three oxygen atoms, the chemical formula of iron(III) oxide being Fe₂O₃. The massive oxidation most likely occurred when Mars had flowing water and a thicker atmosphere.
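As a small check of the Fe₂O₃ stoichiometry just mentioned, using standard atomic masses (rounded values, assumed for illustration):

```python
# Molar mass and iron content of iron(III) oxide, Fe2O3 (rust).
FE, O = 55.845, 15.999          # standard atomic masses, g/mol
molar_mass = 2 * FE + 3 * O     # two iron atoms, three oxygen atoms
iron_fraction = 2 * FE / molar_mass
print(round(molar_mass, 2))            # ~159.69 g/mol
print(round(100 * iron_fraction, 1))   # iron is ~69.9% of the compound by mass
```

By mass, then, rust is roughly 70% iron, which is why iron-rich regolith rusts into such a strongly colored oxide.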

Detailed observations of the position of Mars were made in Antiquity by Babylonian astronomers who developed arithmetic techniques to predict the future position of the planet. The late ancient philosophers and astronomers (such as Hipparchus, and then Claudius Ptolemy in his work known as the Almagest) developed a geocentric model to explain the planet’s motions, using systems and combinations of circular tracks called deferents and epicycles.

During the seventeenth century CE, Tycho Brahe measured the diurnal parallax of Mars, which Johannes Kepler used to make a preliminary calculation of the relative distance to the planet. Kepler studied for years the motion and the orbit of the planet Mars.
Kepler tried several oval curves for the orbit of Mars that might fit the observations, including the ellipse. He was not happy with the physical reasons for choosing any of them until he noticed that one focus of an approximating ellipse coincided with the Sun. The curve and focus made it clearer for Kepler to elaborate a physical explanation.

Kepler’s initial attempt to define the orbit of Mars as a circle was off by only eight minutes of arc, but it led him to spend six years resolving the discrepancy. The data seemed to produce a symmetrical oviform curve inside of his predicted circle. He first tested an egg shape, then engineered a theory of an orbit which oscillates in diameter, and returned to the egg. In early 1605, he geometrically tested an ellipse, which he had previously assumed to be too simple a solution for earlier astronomers to have overlooked. He had already derived this solution trigonometrically many months earlier.
In his Astronomia Nova, published in 1609, Kepler presented a proof that Mars’ orbit is elliptical. Evidence that the other known planets’ orbits are elliptical was presented only in 1621. Kepler published his first two laws about planetary motion in 1609, having found them by analyzing the astronomical observations of Tycho Brahe.
Kepler gradually discovered that all planets orbit the Sun in elliptical orbits, with the Sun at one of the two focal points. This result became the first of Kepler’s three laws of planetary motion.

The image below depicts the orbits of the planets Mercury, Venus, Earth, and the elliptical orbit of Mars around the Sun. The date is November 28, 1613 (image made with the Starry Night astronomy software):

mars elliptical orbit

The first person to draw a map of Mars that displayed any terrain features was the Dutch astronomer Christiaan Huygens.

Mars comes closer to Earth than any other planet save Venus: 56 million km is the closest distance between Mars and Earth, whereas the closest Venus comes to Earth is 40 million km. Mars comes closest to Earth every other year, around the time of its opposition, when Earth is sweeping between the sun and Mars. Extra-close oppositions of Mars happen every 15 to 17 years, when we pass between Mars and the sun around the time of its perihelion (its closest point to the sun in orbit). The minimum distance between Earth and Mars has been declining over the years, and in 2003 the minimum distance was 55.76 million km, nearer than any such encounter in almost 60,000 years (circa 57,617 BCE). In 2729 the record minimum distance between Earth and Mars will stand at 55.65 million km. In the year 3818, the record will stand at 55.44 million km, and the distances will continue to decrease for about 24,000 years.

Starting in 1960, the Soviet Union launched and sent a series of probes to Mars including the first attempted flybys and hard (impact) landing. The first successful flyby of Mars was on 14–15 July 1965, by NASA’s Mariner 4. On November 14, 1971, Mariner 9 became the first space probe to orbit another planet when it entered into orbit around Mars.
The first to contact the surface were two Soviet probes: the Mars 2 lander on November 27 and the Mars 3 lander on December 2, 1971—Mars 2 failed during descent and Mars 3 about twenty seconds after the first Martian soft landing. Mars 6 failed during descent but did return some corrupted atmospheric data in 1974. The 1975 NASA launches of the Viking program consisted of two orbiters, each with a lander that successfully soft landed in 1976. Viking 1 remained operational for six years, Viking 2 for three. The Viking landers relayed the first color panoramas of Mars.

The image below shows the clearest image of craters of Mars taken by Mariner 4:

(Image source: https://en.wikipedia.org/wiki/File:Mariner_4_craters.gif)

To understand and study the gravity of Mars, its gravitational field strength g and gravitational potential U are frequently measured. Because Mars is a non-spherical planetary body shaped by complex geological processes, its gravitational potential is described with spherical harmonic functions, following the conventions in geodesy.
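In standard geodesy notation this expansion is commonly written as follows (a standard textbook form with normalized coefficients; normalization conventions vary between authors):

```latex
U(r,\varphi,\lambda) = \frac{GM}{r}\left[\,1+\sum_{l=2}^{\infty}\sum_{m=0}^{l}\left(\frac{R}{r}\right)^{l}\bar{P}_{lm}(\sin\varphi)\,\bigl(\bar{C}_{lm}\cos m\lambda+\bar{S}_{lm}\sin m\lambda\bigr)\right]
```

Here G is the gravitational constant, M the mass of Mars, r the distance from the center of mass, R a reference (equatorial) radius, φ and λ the areocentric latitude and longitude, P̄lm the normalized associated Legendre functions, and C̄lm and S̄lm the normalized harmonic coefficients, determined in practice from spacecraft tracking data.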


Mars will be at opposition on December 8, 2022. This means that Mars and the Sun will be on opposite sides of planet Earth, around which time the two planets are closest together in their respective orbits.

The following image shows planetary orbits, with the shining Sun in the middle of the image and with Mars in opposition to Earth, on December 8, 2022 (image made with the Mobile Observatory astronomy app):

An important event related to Mars will evidently be the first human mission to the Red Planet. It should be the outcome of thorough preparation and international cooperation, so that the prepared, trained and qualified crew of the first human trip to Mars will be able to travel through space, land on the Red Planet, stay and explore for a limited period of time, and then return safely to Earth.

Finally, the image below shows planet Mars and the orbit of one of its moons (Phobos) as seen from the surface of planet Saturn at 0°N 0°E, on November 28, 2022 (image made with the Starry Night astronomy software):

Some particular opinions and results in Newton’s Principia, and aftermath

Isaac Newton is regarded as the greatest scientist/mathematician, or natural philosopher (as scientists, physicists and mathematicians were called at that time) during the second half of the seventeenth century and the first half of the eighteenth century, and his Principia, first published in 1687, can be regarded as the greatest scientific work during that same period of time.

Newton’s Philosophiæ Naturalis Principia Mathematica was a product of its time, containing mathematical tools or methods used at that time, some of them developed by Newton himself. The work also contains important scientific theories and scientific information or knowledge that have been subjected to scrutiny, criticized, updated and/or corrected with the progress in theory and experimentation during the last three centuries.

Below is an image of the title page of Newton’s Principia, from the first edition published in 1687:

(Image source: https://en.m.wikipedia.org/wiki/File:Prinicipia-title.png)

Newton gave a determination of the speed of sound in Proposition 49 of Book II of the Principia. He provided a value (979 ft/sec) which is too low by approximately 15%. The discrepancy is due primarily to neglecting the (then unknown) effect of rapidly-fluctuating temperature in a sound wave. In modern terms, sound wave compression and expansion of air is an adiabatic process, not an isothermal process. About a century later, Pierre-Simon Laplace corrected the flaw or deficiency in Newton’s analysis and gave a more accurate formula to calculate the speed of sound.
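The gap between the isothermal and adiabatic formulas can be reproduced with round modern values for air (the numbers below are standard textbook values, assumed for illustration; Newton’s own data gave his published 979 ft/s, about 298 m/s):

```python
import math

# Newton's isothermal speed of sound vs. Laplace's adiabatic correction.
P = 101_325      # sea-level air pressure, Pa (assumed round value)
RHO = 1.293      # density of air near 0 degrees C, kg/m^3 (assumed round value)
GAMMA = 1.4      # adiabatic index of (diatomic) air

newton = math.sqrt(P / RHO)            # isothermal: what the Principia computes
laplace = math.sqrt(GAMMA * P / RHO)   # adiabatic: Laplace's corrected formula

print(round(newton), round(laplace))                    # ~280 m/s vs ~331 m/s
print(round(100 * (1 - newton / laplace)))              # shortfall of ~15%
```

The ratio between the two results is exactly 1/√γ ≈ 0.845, which accounts for the roughly 15% shortfall in Newton’s value.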

Newton was also a product of his time and of his social environment. He was a religious person who believed in the Bible, and he reconciled his beliefs by adopting the idea that God set in place at the beginning of time the “mechanical” laws of nature, but retained the power to intervene and alter that mechanism at any time.

Newton thought that gravitation is based somewhat directly on divine influence. He sometimes mentioned in private correspondence that the force of gravity was due to a divine or an immaterial influence.

Newton’s General Scholium, published with the second edition of the Principia, contains (among other things) theological views and discussions. Newton was unable to explain all the intricacies and the details of the motions and orbits of the planets in the solar system. According to Newton, divine Providence was sometimes required to intervene in order to rectify the orbits and paths of the planets and to make the entire celestial system work correctly. In the following century and beyond, mathematicians and astronomers, such as Laplace, Lagrange, and others, provided mathematical explanations for the perturbations in the orbits of planets and for the stability of the solar system.

In the following decades and centuries, scientists and philosophers were inspired or influenced by Newton’s methods of analysis and by his scientific ideas, but many did not necessarily agree with his religious and theological views.

A shift away from Newton’s religious ideas started mainly with David Hume’s criticism of miracles and with the criticism of organized religion by philosophers of the Enlightenment. In the next two or three centuries after Newton, scientists, physicists and philosophers of science, when they were not agnostic or non-religious, tended to separate their religious views from their scientific work and research.

Clarifying the subsequent influence of Newton’s ideas and theories, and summing up:

“The test of Newtonian mechanics was its congruence with physical reality. At the beginning of the 18th century it was put to a rigorous test. Cartesians insisted that the Earth, because it was squeezed at the Equator by the etherial vortex causing gravity, should be somewhat pointed at the poles, a shape rather like that of an American football. Newtonians, arguing that centrifugal force was greatest at the Equator, calculated an oblate sphere that was flattened at the poles and bulged at the Equator. The Newtonians were proved correct after careful measurements of a degree of the meridian were made on expeditions to Lapland and to Peru. The final touch to the Newtonian edifice was provided by Pierre-Simon, marquis de Laplace, whose masterly Traité de mécanique céleste (1798–1827; Celestial Mechanics) systematized everything that had been done in celestial mechanics under Newton’s inspiration. Laplace went beyond Newton by showing that the perturbations of the planetary orbits caused by the interactions of planetary gravitation are in fact periodic and that the solar system is, therefore, stable, requiring no divine intervention.”

(Source: https://www.britannica.com/science/history-of-science/Newton)

Every great work of science becomes progressively and increasingly scrutinized, debated, and in need of updates or rectifications with the passing of time and centuries. This does not necessarily take away from its historical importance.

The development of calculus, and explaining the law of gravitation

Isaac Newton started developing methods of differential calculus as early as 1666. He called his findings and methods on this subject the Fluxional Calculus, or the ‘method of fluxions and fluents’.

Gottfried Wilhelm Leibniz developed his own form of calculus a few years after Newton had developed the principles of differential calculus, but Leibniz published his discovery of differential calculus a few years before Newton published his own findings about the subject.

Newton published his book Philosophiæ Naturalis Principia Mathematica or Mathematical Principles of Natural Philosophy in 1687.
In this important book Newton’s laws of motion are stated, and Kepler’s laws of planetary motion as well as Newton’s law of universal gravitation are derived.
However, in the Principia the modern language of calculus is absent, and Newton mostly gave proofs in a geometric form of infinitesimal calculus, based on limits of ratios of vanishingly small geometric quantities.

As an example, Kepler’s second law of planetary motion states that:
The line joining a planet to the Sun sweeps out equal areas in equal times as the planet travels along its elliptical orbit.

In the Principia, Newton stated Kepler’s second law as follows:
The areas, which revolving bodies describe by radii drawn to an immovable centre of force do lie in the same immovable planes, and are proportional to the times in which they are described.
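Newton’s statement, like his geometric proof, applies to any central force, not just the inverse-square one. The equal-areas property can be checked numerically; the sketch below (in illustrative units with GM = 1, an assumption for this example) integrates an orbit under an inverse-square force and compares the areas swept out in successive equal time intervals:

```python
# Numerical check of Kepler's second law: equal areas in equal times.
GM = 1.0  # illustrative unit system, not a physical value

def accel(x, y):
    """Inverse-square acceleration directed toward the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

dt = 1e-4
steps_per_interval = 20_000   # each interval lasts 2.0 time units

def sweep_area(x, y, vx, vy):
    """Advance one interval with a leapfrog integrator; return swept area."""
    area = 0.0
    for _ in range(steps_per_interval):
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        x_new, y_new = x + dt * vx, y + dt * vy
        # Thin triangle swept out by the radius vector during this step.
        area += 0.5 * abs(x * y_new - y * x_new)
        x, y = x_new, y_new
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    return x, y, vx, vy, area

# Start at perihelion with a speed chosen to give an elliptical orbit.
state = (1.0, 0.0, 0.0, 1.2)
areas = []
for _ in range(4):
    *state, a = sweep_area(*state)
    areas.append(a)

spread = (max(areas) - min(areas)) / max(areas)
print([round(a, 6) for a in areas], spread)
# All four intervals sweep (almost exactly) the same area, even though
# the body moves fastest near perihelion and slowest near aphelion.
```

Because the force is always directed toward the center, angular momentum (twice the areal velocity) is conserved, so the four areas agree to numerical precision.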

Below is the image taken from Newton’s Principia with which he proved geometrically Kepler’s second law:

And below is an image used by Newton in a later section of the Principia to prove geometrically (using limits of vanishingly small quantities) the universal law of gravitation:

Newton proved with the help of the image above that the (centripetal) force varies as the inverse of the square of the distance between the center of force and the planet P.

The rigorous development of calculus, the clarification of its important notions and principles, and its extensive use in classical and celestial mechanics and physics were carried out in the two centuries after Newton by scientists and mathematicians such as the Bernoullis, Euler, Lagrange, Laplace, Cauchy, and others.

Additional information about the universal law of gravitation and the history of calculus can be found in resources such as Wikipedia and other online encyclopedias and websites.

The text of Newton’s Philosophiæ Naturalis Principia Mathematica can be found freely online at the Internet Archive website.

Physics with or without philosophy, and the relationship between them

According to dictionaries such as the Oxford English Dictionary and the Wiktionary, the word philosopher comes originally from Anglo-Norman filosofre, philosophre variant of Old French and modern French philosophe, from Latin philosophus, from ancient Greek philosophos (φιλόσοφος) lover of wisdom, formed as philo- + sophos, meaning wise. It is thought that Pythagoras coined this term to describe himself.

In Antiquity, thinkers such as Pythagoras and Aristotle were first defined as philosophers before being called mathematicians or scientists.

Things changed gradually with the advance of knowledge and technology, and disciplines got more or less separated, but the expression “Natural Philosophy” was still widely used a couple of centuries ago to designate physics and the physical sciences.

Physics as an exact science or a scientific field of study cannot “function” or provide exact measurable results without mathematics, and physics also needs or is enriched by philosophical reflections, ideas and considerations. Philosophy can interact with physics, and vice versa.

As an example, Kant built a philosophical system taking into account ideas from classical and Newtonian physics. At the beginning of the twentieth century, Henri Poincaré wrote instructive books about the philosophy of science, mathematics, and physics, such as “Science and Hypothesis” and “Science and Method”. Einstein (and Infeld) wrote a book entitled “The Evolution of Physics” about the history of the important concepts and ideas in physics, mostly from a philosophical point of view. Werner Heisenberg wrote a book entitled “Physics and Philosophy: The Revolution in Modern Science”, mainly dealing with quantum mechanics, its history, its implications, and its philosophical consequences.

More recently, there have been instances of books about the philosophy of science trying to discuss or explore “The trouble with physics” (title of a book by Lee Smolin), or trying to consider or examine why certain theories are possibly “Not even wrong” (title of a book by Peter Woit), because sometimes by their actions and theories physicists might be at risk of saying “Farewell to Reality” (title of a book by Jim Baggott), with authors sometimes cautioning against getting “Lost in Math: How Beauty Leads Physics Astray” ( title of a book by Sabine Hossenfelder).

As a branch of philosophy, the epistemology of science and of physics is useful for studying the nature, sources, scope and limitations of scientific knowledge. Ideas, notions and concepts related to philosophy, epistemology, science, and physics include induction, deduction, empiricism, positivism, rationalism, and skeptical enquiry.

Each scientist or physicist is usually guided or influenced by his or her own philosophy, philosophical background or readings.

Some useful books about the history of modern mathematics

There are several books that deal with the history of modern mathematics, whether of the last 100 years, the 20th century, or the last two centuries.

Two books by Jean Dieudonné:

A History of Algebraic and Differential Topology, 1900-1960

History of Functional Analysis

Here is a good book about the history of group theory and Lie groups:

Emergence of the Theory of Lie Groups, An Essay in the History of Mathematics 1869-1926, by Thomas Hawkins.

Other relevant or informative books include the following ones:

Elements of the History of Mathematics, by Nicolas Bourbaki.

The Mathematical Century: The 30 Greatest Problems of the Last 100 Years, by Piergiorgio Odifreddi.

Symmetry and the Monster: The Story of One of the Greatest Quests of Mathematics, by Mark Ronan.

History of Topology, Edited by I.M. James.

Mathematics and its History (3rd edition), by John Stillwell. The last chapters of Stillwell’s book deal with modern or recent mathematical topics such as non-Euclidean geometry, group theory, hypercomplex numbers, algebraic number theory, topology, simple groups, sets, logic and computation, and combinatorics.

Another useful book dealing with mathematics in the last two centuries:

Changing Images in Mathematics: From the French Revolution to the New Millennium (Routledge Studies in the History of Science, Technology and Medicine), by Umberto Bottazini, Amy Dahan Dalmedico.

Below is a book focusing on five mathematical theorems and results of the twentieth century:

5 Golden Rules: Great Theories of 20th-Century Mathematics and Why They Matter, by John L. Casti.

The following related links can also be helpful:

Books about history of recent mathematics

20th century mathematics

The state of quantum physics at the time of the 1927 Solvay conference

The fifth Solvay conference took place in October 1927. Here is a summary of some of the principal ideas, theories and advances related to quantum physics, elaborated before or at the time of the conference.

Let’s start with what is called the old quantum theory (from 1900 to 1924–1925).

Max Planck introduced in 1900 the idea that energy is quantized in order to derive a formula for the observed frequency dependence of the energy emitted by a black body. This has been called Planck’s law.
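For reference, Planck’s law for the spectral radiance of a black body at temperature T can be written as:

```latex
B_{\nu}(\nu,T) = \frac{2h\nu^{3}}{c^{2}}\,\frac{1}{e^{h\nu/(k_{B}T)}-1}
```

where h is Planck’s constant, c the speed of light, and k_B Boltzmann’s constant.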

Planck’s constant h was conceived as the proportionality constant between the minimal increment of energy, E, of a hypothetical electrically charged oscillator in a cavity that contained black body radiation, and the frequency, f, of its associated electromagnetic wave.

Einstein explained the photoelectric effect in 1905 by postulating that light, or more generally all electromagnetic radiation, can be divided into a finite number of “energy quanta” that are localized points in space.

Einstein theorized that the energy in each quantum of light was equal to the frequency multiplied by Planck’s constant. A photon above a threshold frequency has the required energy to eject a single electron, creating the observed effect. The 1905 paper where Einstein described the photoelectric effect was entitled “On a Heuristic Viewpoint Concerning the Production and Transformation of Light”.
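The threshold behavior follows directly from E = hf. The sketch below computes a photon’s energy from its wavelength and compares it to a metal’s work function; the 2.1 eV and 4.7 eV values are illustrative round numbers (roughly cesium and copper), not precise material constants:

```python
# Planck-Einstein relation applied to the photoelectric effect.
H = 6.62607015e-34      # Planck's constant, J*s (exact SI value)
C = 299_792_458         # speed of light, m/s (exact SI value)
EV = 1.602176634e-19    # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon, E = h*f = h*c/lambda, in eV."""
    return H * C / wavelength_m / EV

def ejects_electron(wavelength_m, work_function_ev):
    # One photon must carry at least the work function's worth of
    # energy -- raising the intensity alone does not help.
    return photon_energy_ev(wavelength_m) >= work_function_ev

e = photon_energy_ev(400e-9)            # violet light, 400 nm
print(round(e, 2))                      # ~3.1 eV per photon
print(ejects_electron(400e-9, 2.1))     # ejected from a low-work-function metal
print(ejects_electron(400e-9, 4.7))     # but not from a higher one
```

This is exactly the observed effect: below the threshold frequency no electrons are ejected regardless of intensity, while above it each photon can eject one electron.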

Niels Bohr proposed, in 1913, what is now called the Bohr model of the atom. He suggested that electrons could only have certain classical motions:

  • Electrons in atoms orbit the nucleus.
  • The electrons can only orbit stably, without radiating, in certain orbits at a certain discrete set of distances from the nucleus. These orbits are associated with definite energies and are also called energy shells or energy levels.
  • Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency ν determined by the energy difference of the levels according to the Planck relation: ΔE = E₂ − E₁ = hν.

Moreover, the angular momentum L of the orbiting electron is quantised such that

L = nħ,

where n = 1, 2, 3, … is called the principal quantum number, and ħ = h/2π is the reduced Planck constant.
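The Bohr levels and the Planck relation together predict hydrogen’s spectral lines. A small sketch, using the standard 13.6 eV Rydberg energy as an assumed input:

```python
# Bohr-model hydrogen levels and the wavelength of an emitted photon.
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84       # h*c expressed in eV*nm

def level_energy_ev(n):
    """Bohr's quantized energy levels: E_n = -13.6 eV / n^2."""
    return -RYDBERG_EV / n**2

def emitted_wavelength_nm(n_upper, n_lower):
    # Planck relation: the photon carries exactly the level difference.
    delta_e = level_energy_ev(n_upper) - level_energy_ev(n_lower)
    return HC_EV_NM / delta_e

# The n = 3 -> 2 jump gives the red H-alpha line of the Balmer series.
print(round(emitted_wavelength_nm(3, 2), 1))   # ~656 nm
```

The computed value of roughly 656 nm matches the observed Hα line, one of the early successes of the Bohr model.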

The basic idea of the old quantum theory is that the motion in an atomic system is quantized, or discrete. The system obeys classical mechanics except that not every motion is allowed, only those motions which obey the quantization condition

∮ pᵢ dqᵢ = nᵢ h,

where the pᵢ are the momenta of the system and the qᵢ are the corresponding coordinates. The quantum numbers nᵢ are integers and the integral is taken over one period of the motion at constant energy (as described by the Hamiltonian). The integral is an area in phase space, a quantity called the action, which is quantized in units of Planck’s constant; h was therefore often called the quantum of action.

Between the 1910s and the 1920s, many problems were attacked using the old quantum theory with mixed results. Molecular rotation and vibration spectra were understood and the electron’s spin was discovered, leading to the confusion of half-integer quantum numbers. Max Planck introduced the zero point energy and Arnold Sommerfeld semiclassically quantized the relativistic hydrogen atom. Hendrik Kramers explained the Stark effect. Bose and Einstein gave the correct quantum statistics for photons.

The following image shows the participants in the 1927 Solvay conference:

(Image source: https://en.m.wikipedia.org/wiki/File:Solvay_conference_1927.jpg )

Louis de Broglie introduced his theory of electron waves in his 1924 thesis entitled Recherches sur la théorie des quanta (Research on the Theory of the Quanta). The theory included the wave–particle duality of matter: de Broglie stated that any moving particle or object had an associated wave, thus creating a new field in physics, the mécanique ondulatoire, or wave mechanics, uniting the physics of energy (wave) and matter (particle).
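The central relation of de Broglie’s wave mechanics ties the momentum p of any particle to an associated wavelength λ:

```latex
\lambda = \frac{h}{p}
```

so that larger momenta correspond to shorter wavelengths, which is why the wave character of macroscopic objects goes unnoticed.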

Modern quantum mechanics was born in 1925, when Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and Erwin Schrödinger invented wave mechanics and the non-relativistic Schrödinger equation as an approximation to the generalized case of de Broglie’s theory. Schrödinger subsequently showed that the two approaches were equivalent.
Heisenberg’s paper “Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen” (English “Quantum theoretical re-interpretation of kinematic and mechanical relations”), published in September 1925, laid the groundwork for matrix mechanics, later developed further by Born and Pascual Jordan.

Matrix mechanics gave an account of quantum jumps that supplanted the electron orbits of the Bohr model. It interpreted the physical properties of particles as matrices that evolve in time.
In January 1926, Schrödinger published in Annalen der Physik the paper “Quantisierung als Eigenwertproblem” (Quantization as an Eigenvalue Problem) on wave mechanics and presented what is now known as the Schrödinger equation. In this paper, he gave a “derivation” of the wave equation for time-independent systems and showed that it gave the correct energy eigenvalues for a hydrogen-like atom.
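In modern notation, the time-independent (eigenvalue) form of the equation is usually written as:

```latex
-\frac{\hbar^{2}}{2m}\nabla^{2}\psi(\mathbf{r}) + V(\mathbf{r})\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r})
```

with the allowed energy eigenvalues E reproducing, for the Coulomb potential of a hydrogen-like atom, the Bohr energy levels.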
Heisenberg formulated his uncertainty principle or uncertainty relations in 1927, and the Copenhagen interpretation started to take shape at about the same time.

In his March 1927 paper, entitled “Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik” (“On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics”), Heisenberg described the uncertainty relation as the minimum amount of unavoidable momentum disturbance caused by any position measurement. He used the German word “Ungenauigkeit” (“indeterminacy”) to describe the basic theoretical principle. Only in the endnote did he switch to the word “Unsicherheit” (“uncertainty”).
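In the modern formulation (proved by Kennard later in 1927), the uncertainty relation is an exact inequality for the standard deviations of position and momentum:

```latex
\sigma_{x}\,\sigma_{p} \ge \frac{\hbar}{2}
```

Heisenberg’s own 1927 discussion gave an order-of-magnitude estimate of the form Δx·Δp ∼ h rather than this exact bound.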

P. A. M. Dirac published a number of papers about quantum mechanics between 1925 and 1927. It is worth noting that Dirac derived his famous relativistic wave equation in 1928.

Einstein argued for the incompleteness of quantum mechanics in the Solvay conference of 1927, even before the EPR paper by Einstein, Podolsky and Rosen of 1935.

De Broglie published a paper about pilot wave theory in May 1927. At that time Einstein also intended to publish a paper presenting an alternative version of pilot-wave theory, but he withdrew it from publication.

De Broglie’s pilot-wave theory was criticized by Pauli and was not accepted or adopted during the conference.