Lunar eclipse followed by total solar eclipse, March and April 2024

The penumbral lunar eclipse takes place on Monday, March 25, 2024.

A penumbral lunar eclipse happens when only the Earth’s outer, fainter shadow, called the penumbra, falls on the lunar disk. During this type of eclipse, the lunar disk appears only slightly darkened. During the March eclipse, about 95.6% of the Moon will be inside the penumbral shadow.

The image below explains the different types of lunar eclipses:

Image source:

https://starwalk.space/en/news/penubral-lunar-eclipse-march-25-2024

The March full “worm” moon announces the coming of spring. Named for the emergence of earthworms when the ground thaws, the worm moon is considered to represent renewal and growth.

We will take the example of the city of Rio de Janeiro in Brazil, where the penumbral lunar eclipse begins on Monday, March 25, 2024, at 01:53 (AM). The maximum eclipse occurs at 04:12, and the eclipse ends at 06:32. The duration of the eclipse is about 4 hours and 39 minutes.
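As a quick arithmetic check, the duration follows directly from the local start and end times; here is a minimal sketch in Python (the same computation applies to the solar eclipse times given further below):

```python
from datetime import datetime

# Local start and end times of the penumbral eclipse in Rio de Janeiro
start = datetime(2024, 3, 25, 1, 53)
end = datetime(2024, 3, 25, 6, 32)

print(end - start)  # 4:39:00, i.e. about 4 hours and 39 minutes
```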

The image below shows how the Moon looks when viewed from Rio de Janeiro at about 2:00 AM, and at the time of the maximum eclipse at 4:12 AM (Moon appearance and images made with the Stellarium astronomy app).


Historically, the first prediction of a solar eclipse is generally attributed to the ancient philosopher and astronomer Thales. The eclipse in question most likely took place on May 28, 585 BCE.

A solar eclipse happens when the Moon passes between the Sun and the Earth, blocking the Sun’s light and casting its shadow on Earth.

The following image shows the positions of planet Earth, the Moon and the Sun in the solar system at the time of the total solar eclipse on April 8, 2024 (image made with the Mobile Observatory app):

The image below shows an animation of the path of the solar eclipse:

Image source: https://en.m.wikipedia.org/wiki/File:SE2024Apr08T.gif

Taking as an example the town of Tipton, Indiana, USA, the animated image below shows three stages of the total solar eclipse. The maximum of the eclipse occurs on Monday, April 8, 2024 at 15:08, and the eclipse ends at 16:23. The duration of the solar eclipse is 2 hours and 32 minutes (eclipse images made with the Stellarium astronomy app):

Animated solar eclipse

In the image above, Zeta Piscium A is the primary component of Zeta Piscium (ζ Piscium, abbreviated Zet Psc, ζ Psc), a quintuple star system in the zodiac constellation of Pisces. Zeta Piscium A consists of a pair of A-type sub-giant stars with an angular separation of 0.15 arcseconds and visual magnitude 5.28.

The image below shows the solar eclipse taking place on April 8 in Torreon, Mexico, at 11:46 AM.

Lunar and solar eclipses come in pairs, separated by an interval of about two weeks (half a lunar synodic month).

The March lunar eclipse is the first of the year 2024. It is followed by three more eclipses: a total solar eclipse on April 8, a partial lunar eclipse on September 18, and an annular solar eclipse on October 2.

Some thoughts about educational costs, reform and economics

I have written in earlier posts about the need to change and reform the existing educational system, whose general structure and requirements date back to the last couple of centuries. With the passage of time, students and learners have become able to understand and acquire established as well as recent and advanced knowledge at an earlier age.

It would be beneficial and advisable to give young learners more possibilities, and to allow them to begin their college/university and higher formal education at a younger age.

New, alternative methods of education include academic acceleration, skipping grades, homeschooling, online education, e-learning, etc., as well as suitable combinations of these methods.

A detailed study of the consequences of such educational reforms on a large scale could be undertaken from a financial perspective, or in relation to the economics of education. Here I will try to provide a short, practical analysis, and offer some considerations on the possible implementation of these changes.

In countries such as Germany and France, institutions of higher education can be public or private. In public universities, tuition costs and fees are very low or minimal, mostly consisting of yearly enrollment fees. In France, for example, these minimal fees apply in public universities to all students, including international students, who can thus benefit from the low cost of education.

Many countries, on a larger international scale involving all continents, could negotiate and agree to implement a reformed educational system in which young learners are given the opportunity to finish their secondary education and enter university at a younger age, thus being able to finish their university studies before the end of their teenage years. For example, a young, perceptive, fast learner could follow an accelerated educational process (with or without homeschooling), enter university at the age of 11 or 12, and finish his or her studies with a master’s degree or a PhD at about the age of 16 to 19.
An additional useful change would be to accompany this educational acceleration with a tuition system similar to the one found in public universities in countries like Germany or France, where students pay low or minimal tuition fees.

As a general remark, students or learners who are able, or who are given the possibility, to finish their university studies in their teens, instead of having to wait until their twenties or thirties to obtain the same diploma, will save considerable time and expense for themselves and for the community, society or country where they live. Such a reformed education system would help reduce educational costs, as well as expenditures related to the allocation of educational resources and the use of facilities and buildings for educational purposes.

Considering, for example, the educational system in the USA and the related student debt issues, implementing such educational changes and reforms would represent an effective solution to the student debt and student loan problems, since students would be able to finish their college and university studies at a younger age, with little or no debt. Within this reformed educational framework, a young person who leaves the university and the formal education system as a teenager, at about the age of 16 or 17, could do so having already obtained a university degree such as an engineering degree, a master’s or a PhD, instead of leaving with only high school qualifications.

An idea I alluded to before is that changes, progress and advances in knowledge, information and scientific theories should regularly be accompanied by a reassessment, re-evaluation and reform of the education system and of the methods through which these theories and this information are presented, transmitted or taught. I think the considerations and suggestions presented here represent a significant step in the right direction toward implementing the reformed, inclusive, affordable, effective and efficient educational system of the future.

Astronomy, its influence and some of its uses and applications in the last few centuries

From a historical point of view, astronomy is one of the oldest natural and exact sciences, with observations and scientific explanations by various cultures dating back to early Antiquity.

Spherical or observational astronomy is the oldest branch of astronomy. Observations of celestial objects have been important for religious and astrological purposes, and for timekeeping and navigation.

Celestial navigation has been used for position fixing by observing the positions of celestial bodies, including the Sun, the Moon, the planets and the stars. Instruments such as the astrolabe and, later, the sextant have been used for measuring the altitudes of stars and the angular distances between celestial objects.

In short, what is known as the scientific revolution essentially started in the 16th century with the publication by Nicolaus Copernicus of his work about heliocentric astronomy in 1543. This was followed by the works, observations and ideas of other scientists and astronomers such as Tycho Brahe, Galileo Galilei, and Johannes Kepler, culminating with Isaac Newton’s work entitled Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687, which expounded his laws of motion and the law of universal gravitation, and established the discipline of classical mechanics.

Newton’s Principia historically had an indirect, albeit significant, influence on the progress of navigation and on related topics such as tide analysis and prediction.

Newton’s theory of gravitation first enabled an explanation of why there were generally two tides a day, not one, and gave hope for a detailed understanding of tidal forces and behavior.

Newton presented in the Principia his mathematical theory concerning tides and lunar motion. Sea travel was essential for trade for the British and other navigators, who needed a good knowledge of tidal cycles and patterns, and of how they affect navigation and the determination of longitude.

In the 17th century, the creation of learned societies, like the Royal Society in England under the patronage of the king and with the help of some well-known personalities, and the French Academy of Sciences founded by Louis XIV and his minister Colbert, helped advance the interest in scientific and astronomical research.

During the 18th and 19th centuries, the study of the three-body problem by Euler, Clairaut, and d’Alembert led to more accurate predictions about the motions of the Moon and planets. This work was further refined by Joseph-Louis Lagrange and Pierre Simon Laplace, allowing the masses of the planets and moons to be estimated from their perturbations.

The scientists and astronomers who came after Newton and continued or developed his work made additional contributions to the theory of tides. For example, in 1776, Laplace formulated a set of linear partial differential equations for tidal flow described as a barotropic two-dimensional sheet flow (a flow whose density is a function of pressure only). Laplace obtained these equations by simplifying the fluid dynamics equations.
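For reference, a common modern form of Laplace’s tidal equations on the sphere, as found in standard treatments (here ζ is the surface elevation, u and v the eastward and northward velocity components, D the mean depth, Ω the Earth’s rotation rate, a its radius, g the gravitational acceleration, U the tidal forcing potential, and λ, φ longitude and latitude), is:

\[
\begin{aligned}
\frac{\partial \zeta}{\partial t} + \frac{1}{a\cos\varphi}\left[\frac{\partial (uD)}{\partial \lambda} + \frac{\partial (vD\cos\varphi)}{\partial \varphi}\right] &= 0,\\
\frac{\partial u}{\partial t} - 2\Omega v\sin\varphi &= -\frac{1}{a\cos\varphi}\,\frac{\partial}{\partial \lambda}\left(g\zeta + U\right),\\
\frac{\partial v}{\partial t} + 2\Omega u\sin\varphi &= -\frac{1}{a}\,\frac{\partial}{\partial \varphi}\left(g\zeta + U\right).
\end{aligned}
\]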

Nathaniel Bowditch, regarded as the founder or one of the founders of modern maritime navigation, read Newton’s Principia as a young man, and then translated Laplace’s Mécanique céleste (Celestial Mechanics), a work that extended and completed Newton’s Principia and Newton’s theories.

Sometimes scientists and astronomers were helped or supported by the state, and sometimes they were helped by influential personalities. In addition to his contributions to mathematics, Carl Friedrich Gauss is also known for his contributions to astronomy and planetary theory, having among other things published a book or work entitled Theoria motus corporum coelestium in sectionibus conicis solem ambientum (Theory of motion of the celestial bodies moving in conic sections around the Sun). Gauss was financially supported during his years of study by the Duke of Brunswick.

Larger and more powerful telescopes were developed and built during the 18th and 19th centuries, contributing to the progress in observational and theoretical astronomy. One of the famous applications of astronomical theories and celestial mechanics around the middle of the 19th century was the prediction of the existence and position of planet Neptune, mainly by Urbain Leverrier, using only mathematics and astronomical observations of planet Uranus. Telescopic observations confirming the existence of a major planet (subsequently named Neptune) were made in September 1846 at the Berlin Observatory.

William Thomson (Lord Kelvin) applied Fourier analysis to the determination of tidal motion, explaining tidal phenomena through harmonic analysis. As a practical application of the astronomical theory of tides and of lunar theory (i.e. the theory of the Moon’s motion as deduced from the law of gravitation with its perturbations), at the end of the 19th century Thomson and others conceived tide-predicting machines: special-purpose mechanical analog computers built to predict the ebb and flow of sea tides and the irregular variations in their heights, which change in mixtures of rhythms that never (in the aggregate) repeat themselves exactly. Their purpose was to shorten the laborious and error-prone computations of tide prediction. These machines provided predictions valid from hour to hour and day to day for a year or more ahead. They were widely used for constructing official tidal predictions for general marine navigation, and were viewed as of strategic military importance until the second half of the 20th century.
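The principle behind these machines, harmonic synthesis, is easy to sketch in modern code: the predicted tide height is a mean level plus a sum of cosine constituents, each with its own amplitude, angular speed and phase. In the minimal Python sketch below, the constituent speeds are the standard ones, but the amplitudes, phases and mean level are illustrative placeholders, not real harmonic constants for any port:

```python
import math

# (amplitude in meters, speed in degrees/hour, phase in degrees)
# M2, S2 and K1 speeds are standard; amplitudes and phases are made up.
constituents = [
    (1.20, 28.984, 40.0),   # M2: principal lunar semidiurnal
    (0.40, 30.000, 65.0),   # S2: principal solar semidiurnal
    (0.25, 15.041, 110.0),  # K1: lunisolar diurnal
]
mean_level = 2.0  # mean sea level in meters (example value)

def tide_height(t_hours):
    """Predicted height (m) at t_hours after the reference epoch."""
    return mean_level + sum(
        amp * math.cos(math.radians(speed * t_hours - phase))
        for amp, speed, phase in constituents
    )

for t in range(0, 25, 6):
    print(f"t = {t:2d} h  height = {tide_height(t):.2f} m")
```

Thomson’s machine performed this summation mechanically, with one pulley per constituent.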

The image below shows the tide-predicting machine built by Sir William Thomson in 1876. This machine combined ten tidal components, one pulley for each component. It could trace the heights of the tides for one year in about four hours.

(Image source: https://en.m.wikipedia.org/wiki/File:DSCN1739-thomson-tide-machine.jpg )


Important advances were made in astronomy during the 18th and 19th centuries due to observations as well as theoretical investigations and publications. These advances were accompanied or followed by applications related to navigation and nautical astronomy, sometimes stimulated or supported by societal and commercial interests or needs.

Progress in astronomical theory, research and applications continued during the 20th century in several directions.
In the late 19th century and most of the 20th century, astronomical images were made using photographic equipment. Modern images are obtained using digital detectors, particularly charge-coupled devices (CCDs), and recorded on modern media.

Radio astronomy flourished mostly in the second half of the 20th century.
The discovery of the cosmic microwave background radiation, regarded as evidence for the Big Bang theory, was made through radio astronomy.
Radio astronomy uses large radio antennas known as radio telescopes, which are used either singly or as multiple linked telescopes employing the techniques of radio interferometry and aperture synthesis.

Other observational branches of astronomy include infrared astronomy, x-ray astronomy, and ultraviolet astronomy.

Related fields or subfields of astronomy that were developed during the 20th century include astrophysics, astrochemistry, stellar astronomy, galactic astronomy, physical cosmology, and astrobiology, among others.

Theoretical astronomy in the 20th century studied the existence of objects such as black holes and neutron stars, which have been used to explain observed phenomena such as quasars, pulsars, blazars, and radio galaxies. Physical cosmology made great advances during the 20th century. In the first half of the century, the Big Bang model was formulated; it is supported by the cosmic microwave background radiation, Hubble’s law, and the cosmological abundances of the elements.

Space telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or blurred by the atmosphere.

Operational space telescopes orbiting Earth outside the atmosphere avoid light pollution from artificial light sources on Earth. Their angular resolution is often much higher than a ground-based telescope with a similar aperture.

The image below shows the Hubble Space Telescope, which was launched into low Earth orbit in 1990, and remains in operation in 2023:

(Image source: https://en.m.wikipedia.org/wiki/File:HST-SM4.jpeg )

During the 1990s, the measurement of the stellar wobble (or Doppler spectroscopy) of nearby stars was used to detect large extrasolar planets orbiting those stars.

Human missions and crewed spaceflights have been sent to explore outer space (so far mostly in the vicinity of planet Earth and to the Moon) since the second half of the 20th century.
Interplanetary space probes have flown to all the observed planets in the Solar System, as well as to the dwarf planets Pluto and Ceres and several asteroids. Orbiters and landers usually return more information than fly-by missions.

The difference(s) between stable and unstable isotopes

Most chemical elements in the natural world consist of a mixture of isotopes.

By definition, an isotope is one of two or more forms of a specific chemical element whose atoms have different numbers of neutrons in the nucleus, i.e. different relative atomic masses and different nuclear properties, but the same chemical properties.

From a historical perspective, the possibility of isotopes was first suggested in 1913 by Frederick Soddy, after studying radioactive decay chains that indicated about 40 different species, referred to as radioelements (or radioactive elements), between uranium and lead. Soddy proposed that several types of atoms, differing in radioactive properties, could occupy the same place in the periodic table. In 1922, Frederick Soddy (the reserved 1921 prize) and Francis William Aston (the 1922 prize) were awarded Nobel Prizes in Chemistry for their work related to isotopes.

Stable isotopes are atoms with stable nuclei that do not emit radiation. They do not go through radioactive decay.

Unstable isotopes are essentially atoms having unstable nuclei that emit radiation. They are radioactive isotopes.

The first evidence for the existence of multiple isotopes of a stable, non-radioactive, element was found by J. J. Thomson in 1912 as part of his exploration into the composition of canal rays (or positive ions).

The two main influences on the stability of a nucleus are the total number of protons and neutrons (the mass number) and the ratio of neutrons to protons.

The unstable isotope of an element undergoing radioactive decay is called a radioisotope.

Stable as well as unstable isotopes have various applications in science, medicine, agriculture, and other fields.

Hydrogen has three naturally occurring isotopes: protium (1H) with zero neutrons, deuterium (2H) with one neutron, and tritium (3H) with two neutrons. Protium is by far the most abundant isotope; protium and deuterium are stable, while tritium is radioactive, with a half-life of about 12.3 years. Deuterium is stable but far less abundant, and in the form of heavy water it is used in laboratory and reactor applications.

The following image shows the three isotopes of hydrogen:

Image source: https://en.m.wikipedia.org/wiki/File:Hydrogen_Deuterium_Tritium_Nuclei_Schmatic-en.svg

A well-known application is isotopic labeling, the use of unusual isotopes as tracers or markers in chemical reactions. Normally, atoms of a given element are indistinguishable from each other. By using isotopes of different masses, even different nonradioactive stable isotopes can be distinguished by mass spectrometry or infrared spectroscopy. For example, in stable isotope labeling with amino acids in cell culture (SILAC), stable isotopes are used to quantify proteins. If radioactive isotopes are used, they can be detected by the radiation they emit (this is called radioisotopic labeling).

A technique similar to radioisotopic labeling is radiometric dating: using the known half-life of an unstable isotope, one can calculate the amount of time that has elapsed since a known concentration of the isotope existed. The most widely known example is radiocarbon dating, used to determine the age of carbonaceous materials.
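As a minimal sketch of the arithmetic involved (assuming a simple exponential decay model), the age follows directly from the measured fraction of the isotope remaining:

```python
import math

C14_HALF_LIFE = 5730.0  # half-life of carbon-14, in years

def age_from_fraction(remaining_fraction):
    """Years elapsed, given the fraction of the original carbon-14 left."""
    return C14_HALF_LIFE * math.log2(1.0 / remaining_fraction)

# A sample retaining 25% of its carbon-14 is two half-lives old:
print(age_from_fraction(0.25))  # 11460.0
```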

Isotope geochemistry is an aspect of geology based upon the study of natural variations in the relative abundances of isotopes of various elements. Variations in isotopic abundance are measured by isotope ratio mass spectrometry, and can reveal information about the ages and origins of rock, air or water bodies, or processes of mixing between them.

Stable isotope geochemistry is largely concerned with isotopic variations arising from mass-dependent isotope fractionation, whereas radiogenic isotope geochemistry is concerned with the products of natural radioactivity.

As an application in medicine, radiation from radioactive isotopes can be used to treat diseased tissue: gamma rays emitted by an isotope are aimed at the tissue to be destroyed. Cobalt-60 is an example of a very convenient isotope used for this kind of procedure.

Anemia is a condition in which there are not enough healthy red blood cells or not enough hemoglobin to carry oxygen to the body’s tissues, resulting in shortness of breath and lack of energy. The radioisotope cobalt-57 can be used as a tracer and diagnostic agent for pernicious anemia, and for other conditions related to poor absorption of vitamin B12. It is also useful for calibrating radiometric equipment and medical gamma cameras.

Radioactive iodine may be used to image the thyroid gland for medical and diagnostic applications and purposes.

Several forms of spectroscopy rely on the unique nuclear properties of specific isotopes, both radioactive and stable. For example, nuclear magnetic resonance (NMR) spectroscopy can be used only for isotopes with a nonzero nuclear spin.

The difference between quantum mechanics and quantum field theory, some basic explanations

Let’s start with some historical notes and considerations.

Quantum mechanics as a theory was gradually created and formulated during the first decades of the 20th century. Important milestones include Planck’s 1900 quantum hypothesis, which proposed that any energy-radiating atomic system can theoretically be divided into a number of discrete “energy elements”, each proportional to the frequency ν with which it individually radiates energy, followed by Einstein’s 1905 interpretation of the photoelectric effect.
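In modern notation, each such “energy element” is proportional to the radiation frequency ν:

\[ E = h\nu, \qquad h \approx 6.626 \times 10^{-34}\ \mathrm{J\,s}, \]

where h is Planck’s constant.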

The origins of quantum field theory date to the 1920s and to the problem of creating a quantum theory of the electromagnetic field.
The first coherent and acceptable theory of quantum electrodynamics, which included the electromagnetic field and electrically charged matter as quantum mechanical objects, was created by Paul Dirac in 1927.

A further development for quantum field theory (or QFT) came with the discovery of the Dirac equation, which was originally formulated and interpreted as a single-particle equation analogous to the Schrödinger equation, but which additionally satisfies both Lorentz invariance, i.e. the requirements of special relativity, and the rules of quantum mechanics.
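For reference, in natural units (ħ = c = 1) the Dirac equation for a free particle of mass m reads

\[ \left( i\gamma^{\mu}\partial_{\mu} - m \right)\psi = 0, \]

where the γ^μ are the Dirac gamma matrices and ψ is a four-component spinor.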

Theoretical formulations and advances took place during the 1940s and 1950s, resulting in the introduction of renormalized quantum electrodynamics (or QED).

Quantum chromodynamics (QCD), the theory of the strong interaction between quarks mediated by gluons, was formulated during the 1960s and early 1970s.

In the 1960s and 1970s it was shown that the weak nuclear force and quantum electrodynamics could be merged into a single electroweak interaction.

The Standard Model of particle physics, the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles, is a paradigm of a quantum field theory for theorists, exhibiting a wide range of physical phenomena.

Quantum field theory is a quantum mechanical theory. In this theory, fields with quantized normal modes of oscillation represent particles. So particles are regarded as excitations of quantum fields filling all of space. Relativistic theories of quantized fields depict the interactions between elementary particles.

In general, quantum field theory is a theoretical framework combining classical field theory, special relativity, and quantum mechanics. It is used to build physical models of subatomic particles (in relation to particle physics) and quasi-particles (in relation to condensed matter physics).

Below is a helpful description and explanation of quantum field theory, taken from the book A Modern Introduction to Quantum Field Theory, by Michele Maggiore:

“Quantum field theory is a synthesis of quantum mechanics and special relativity, and it is one of the great achievements of modern physics. Quantum mechanics, as formulated by Bohr, Heisenberg, Schrodinger, Pauli, Dirac, and many others, is an intrinsically non-relativistic theory. To make it consistent with special relativity, the real problem is not to find a relativistic generalization of the Schrodinger equation. Actually, Schrodinger first found a relativistic equation, that today we call the Klein–Gordon equation. He then discarded it because it gave the wrong fine structure for the hydrogen atom, and he retained only the non-relativistic limit. Wave equations, relativistic or not, cannot account for processes in which the number and the type of particles changes, as in almost all reactions of nuclear and particle physics.[…] Furthermore, relativistic wave equations suffer from a number of pathologies, like negative-energy solutions.

A proper resolution of these difficulties implies a change of viewpoint, from wave equations, where one quantizes a single particle in an external classical potential, to quantum field theory, where one identifies the particles with the modes of a field, and quantizes the field itself. The procedure also goes under the name of second quantization.

The methods of quantum field theory (QFT) have great generality and flexibility and are not restricted to the domain of particle physics. In a sense, field theory is a universal language, and it permeates many branches of modern research. In general, field theory is the correct language whenever we face collective phenomena, involving a large number of degrees of freedom, and this is the underlying reason for its unifying power. For example, in condensed matter the excitations in a solid are quanta of fields, and can be studied with field theoretical methods. An especially interesting example of the unifying power of QFT is given by the phenomenon of superconductivity which, expressed in the field theory language, turns out to be conceptually the same as the Higgs mechanism in particle physics. As another example we can mention that the Feynman path integral, which is a basic tool of modern quantum field theory, provides a formal analogy between field theory and statistical mechanics, which has stimulated very important exchanges between these two areas.”

For additional info and details about these topics, the following links can be viewed or consulted:

Differences between principles of QM and QFT

What is the difference between QM and non-relativistic QFT

Formalism of Quantum Field Theory vs Quantum Mechanics

Quantizing gravity, or “gravitizing” quantum theory?

Roger Penrose shows in his writings that he is favorable to the theory of general relativity and to determinism in physics, and that he is critical of quantum mechanics, its description of reality, and its probabilistic formulation. Other physicists and scientists have somewhat similar or comparable views.

Many physicists seem to be trying to formulate a theory of quantum gravity. But there is a different or opposite approach highlighted by some insightful remarks by Penrose.

In a 2013 paper entitled “On the Gravitization of Quantum Mechanics 1: Quantum State Reduction”, Penrose wrote:

” This paper argues that the case for “gravitizing” quantum theory is at least as strong as that for quantizing gravity. Accordingly, the principles of general relativity must influence, and actually change, the very formalism of quantum mechanics. Most particularly, an “Einsteinian”, rather than a “Newtonian”, treatment of the gravitational field should be adopted, in a quantum system, in order that the principle of equivalence be fully respected. This leads to an expectation that quantum superpositions of states involving a significant mass displacement should have a finite lifetime[…]”

Penrose continues:

” The title of this article contains the phrase “gravitization of quantum mechanics” in contrast to the more usual “quantization of gravity”. This reversal of wording is deliberate, of course, indicating my concern with the bringing of quantum theory more in line with the principles of Einstein’s general relativity, rather than attempting to make Einstein’s theory—or any more amenable theory of gravity—into line with those of quantum mechanics (or quantum field theory).[…]

I think that people tend to regard the great twentieth century revolution of quantum theory, as a more fundamental scheme of things than gravitational theory. Indeed, quantum mechanics, strange as its basic principles seem to be, has no evidence against it from accepted experiment or observation, and people tend to argue that this theory is so well established now that one must try to bring the whole of physics within its compass. Yet, that other great twentieth century revolution, namely the general theory of relativity, is also a fundamental scheme of things which, strange as its basic principles seem to be, also has no confirmed experiments or observations that tell against it[…]”

Penrose thinks that it is essentially quantum mechanics that ought to be changed, not general relativity. In his book The Road to Reality, Penrose states:

” My own viewpoint is that the question of ‘reality’ must be addressed in quantum mechanics—especially if one takes the view (as many physicists appear to) that the quantum formalism applies universally to the whole of physics—for then, if there is no quantum reality, there can be no reality at any level (all levels being quantum levels, on this view). To me, it makes no sense to deny reality altogether in this way. We need a notion of physical reality, even if only a provisional or approximate one, for without it our objective universe, and thence the whole of science, simply evaporates before our contemplative gaze![…]

We must think of a wavefunction as one entire thing. If it causes a spot to appear at one place, then it has done its job, and this apparent act of creation forbids it from causing a spot to appear somewhere else as well. Wavefunctions are quite unlike the waves of classical physics in this important respect. The different parts of the wave cannot be thought of as local disturbances, each carrying on independently of what is happening in a remote region. Wavefunctions have a strongly non-local character; in this sense they are completely holistic entities.

[…]there are powerful positive reasons […] to believe that the laws of present-day quantum mechanics are in need of a fundamental (though presumably subtle) change. These reasons come from within accepted physical principles and from observed facts about the universe. Yet, I find it remarkable how few of today’s quantum physicists are prepared to entertain seriously the idea of an actual change in the ground rules of their subject. Quantum mechanics, despite its extraordinary exception-free experimental support and strikingly confirmed predictions, is a comparatively young subject, being only about three-quarters of a century old (dating this from the establishment of the mathematical theory by Dirac and others, based on the schemes of Heisenberg and Schrodinger, in the years immediately following 1925). When I say ‘comparatively’, I am comparing the theory with that of Newton, which lasted for nearly three times as long before it needed serious modification in the form of special and then general relativity, and quantum mechanics. […]

Moreover, Newton’s theory did not have a measurement paradox.[…]

Newton’s gravitational theory has the particular mathematical elegance that the gravitational forces always add up in a completely linear fashion; yet this is supplanted, in Einstein’s more precise theory, by a distinctly subtle type of non-linearity in the way that gravitational effects of different bodies combine together. And Einstein’s theory is certainly not short on elegance—of a quite different kind from that of Newton.[…]

Einstein’s theory […] involved a completely radical change in perspective. This, it seems to me, is the general kind of change in the structure of quantum mechanics that we must look towards, if we are to obtain the (in my view) needed non-linear theory to replace the present-day conventional quantum theory. Indeed, it is my own perspective that Einstein’s general relativity will itself supply some necessary clues as to the modifications that are required. The 20th century gave us two fundamental revolutions in physical thought— and, to my way of thinking, general relativity has provided as impressive a revolution as has quantum theory (or quantum Field theory). Yet, these two great schemes for the world are based upon principles that lie most uncomfortably with each other. The usual perspective, with regard to the proposed marriage between these theories, is that one of them, namely general relativity, must submit itself to the will of the other. There appears to be the common view that the rules of quantum Field theory are immutable, and it is Einstein’s theory that must bend itself appropriately to fit into the standard quantum mould. Few would suggest that the quantum rules must themselves admit to modification, in order to ensure an appropriately harmonious marriage. Indeed, the very name ‘quantum gravity’, that is normally assigned to the proposed union, carries the implicit connotation that it is a standard quantum (field) theory that is sought. Yet, I would claim that there is observational evidence that Nature’s view of this union is very different from this! I contend that her design for this union must be what, in our eyes, would be a distinctly non-standard one, and that an objective state reduction must be one of its important features.”

While recognizing the experimental verifications and successes of quantum physics, Penrose uses the word faith in relation to quantum mechanics, and thinks there are limitations to that “faith”.

The following quoted lines are taken from the book Fashion, Faith, and Fantasy in the New Physics of the Universe by Penrose:

” Quantum theory explains the phenomenon of chemical bonding, the colours and physical properties of metals and other substances, the detailed nature of the discrete frequencies of light that particular elements and their compounds emit when heated (spectral lines), the stability of atoms (where classical theory would predict a catastrophic collapse with the emission of radiation as electrons spiral rapidly into their atomic nuclei), superconductors, superfluids, Bose–Einstein condensates […]

When we combine quantum theory with special relativity we get quantum field theory, which is essential for, in particular, modern particle physics.[…]

Quantum theory is commonly regarded as a deeper theory than the classical scheme of particles and forces that had preceded it.[…]

The dogma of quantum mechanics is thus seen to be very well founded indeed, as it rests on an enormous amount of extremely hard evidence. With systems that are simple enough that detailed calculation can be carried out and sufficiently accurate experiments performed, we find an almost incredible precision in the agreement between the theoretical and observational results that are obtained.[…]

Perhaps the multitude of theoreticians involved in the formulation of quantum mechanics is a manifestation of the totally non-intuitive nature of that theory. Yet, as a mathematical structure, there is a remarkable elegance; and the deep coherence between the mathematics and physical behaviour is often as stunning as it is unexpected.[…]

Quantum mechanics provides, indeed, an overarching framework that would appear to apply to any physical process, at no matter what scale. There is perhaps no puzzle, therefore, in the fact that a profound faith has arisen among physicists, that all the phenomena of nature must adhere to it.[…]”

Penrose also mentions and criticizes the Copenhagen interpretation or view:

“According to standard quantum mechanics, the information in the quantum state of a system – or the wave function ψ – is what is needed for probabilistic predictions to be made for the results of experiments that might be performed upon that system.[…]

[According to the Copenhagen interpretation and] to various other schools of thought also, ψ is to be regarded as a calculational convenience with no ontological status other than to be part of the state of mind of the experimenter or theoretician, so that the actual results of observation can be probabilistically assessed. It seems that a good part of such a belief stems from the abhorrence felt by so many physicists that the state of the actual world could suddenly “jump” from time to time in the seemingly random way that is characteristic of the rules of quantum measurement.”

Moreover, Penrose is of the opinion that the de Broglie-Bohm theory, the pilot wave theory, or Bohmian mechanics

“provides an interesting alternative ontology to that provided (or not really provided!) by the Copenhagen view, and it is fairly widely studied, though certainly not qualifying as a fashionable theory. It claims no alternative observational effects from that of conventional quantum mechanics, but provides a much more clear-cut picture of the “reality” of the world.”

I think remarks, views and ideas similar to those of Roger Penrose ought to be taken into consideration. Quantizing gravity is not the only available option. It would be beneficial if alternatives such as those proposed by Penrose were considered, discussed, studied and developed, in order to determine the best future direction for research, and to formulate comprehensive, appropriate and consistent theoretical explanations and advances with regard to gravity and to physics in general.

Time travel, its possibility, and its representation in the media

Over the past several decades, the topic of time travel has been considered and discussed by physicists, philosophers, journalists, presenters, and lay people. Movies have been made that feature time travel as the main plot or as one of the essential elements of the movie plot.

Some of these movies were entertaining or pleasant, but that does not mean that they are accurate or that time travel is possible.

Time travel, whether backwards or forwards, including changing or turning back time, is essentially a speculative, theoretical extrapolation of existing physical theories such as the theory of special relativity. Without getting into all the details or into long philosophical considerations, I consider that time travel is not really possible or realistically doable.

Various explanations or interpretations have been provided in books or textbooks concerning time travel.

In some good textbooks dealing with the theory of relativity, it is pointed out that experiments (for example, experiments with muons) have verified the time dilation equation, but it is also indicated that the phenomenon is called apparent time dilation. In a similar way, the phenomenon of length contraction is called apparent length contraction.
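For reference, the time dilation equation in question relates a proper time interval Δt₀ (measured by a clock at rest, e.g. the muon’s rest-frame lifetime) to the interval Δt measured in a frame in which the clock moves at speed v:

\[ \Delta t = \gamma\,\Delta t_0 = \frac{\Delta t_0}{\sqrt{1 - v^2/c^2}}. \]

For a muon with a rest-frame mean lifetime of about 2.2 μs moving at v = 0.99c, γ ≈ 7.1, so the lifetime measured in the laboratory frame is about 15.6 μs.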

Let’s analyze, as examples, two movies featuring time travel and related effects in their plots.

In the movie Superman I starring Christopher Reeve, Superman turns back time as an emotional reaction to the death of Lois Lane. This type of action can be realistically described as a useless, naive action going against the rules of physics.

The scene in the Superman movie seems to assume that the entire world consists of planet Earth. What about “turning back” the rest of the solar system, or the Galaxy, or the Local Group of galaxies, and so on?

Turning back the rotation of the Earth could very well have devastating effects on everything and everyone on the planet. Not to mention the gravitational perturbations and disturbances affecting the Moon, the planets, the entire solar system, and beyond.

I think it would have been better if the writers had told the story differently, without killing Lois Lane or having Superman “turn back time”.

Time travel has been used and abused in sci-fi movies, and in movies and TV series based on DC Comics and Marvel Comics, as some sort of deus ex machina or ultimate solution to fix everything or to set everything straight. Regrettably, this does not add to the accuracy or credibility of these movies. Nor does it make them more realistic or convincing, even when one tries to exercise suspension of disbelief.

In the movie Interstellar, physicist Kip Thorne worked out the equations that depict the path of light waves traveling through a wormhole or around a black hole. The visual effects in the film are based on the gravitational theory and the field equations of general relativity.

“Interstellar” is based on generally accurate existing theoretical and scientific concepts like neutron stars, spinning black holes, accretion disks, and time dilation.

Wormholes are theoretical physical entities that are considered to be like tunnels or shortcuts through the geometry of spacetime, connecting different parts of the universe.

According to the story in the movie, a crew of space explorers travels on an extragalactic journey through a wormhole. On the other side, they reach another planetary system with a spinning black hole in place of a sun.

The spaceship’s destination is Gargantua, a supermassive black hole with a mass 100 million times that of the sun, located about 10 billion light-years from Earth. Gargantua rotates at 99.8 percent of the speed of light.

The movie refers to five-dimensional reality, and five-dimensional space is described in the movie as a form of extra-dimensional “tesseract” where time appears as a spatial dimension. The movie plot mentions and uses the concepts of time travel and time dilation.

I want to note that while this movie uses mostly accurate existing theoretical notions of physics, the concepts of time travel and time dilation remain, in my view, debatable theoretical and speculative extrapolations of physical theories such as the theory of relativity. I think time travel cannot physically happen, and that the observed effects can generally be clarified by other explanations, such as being apparent effects. In this sense, the use of time travel and time dilation diminishes the precision and undermines the realistic, plausible character of the movie.

If, in the future, rigorous scientific experiments were to prove the possibility of faster-than-light speeds and travel, the effects, nature and consequences of such speeds should be carefully studied, but I don’t think time travel will be one of those consequences.

Then perhaps new equations, explanations or physical laws would have to be formulated. Or perhaps the speed of light would come to be viewed somewhat like the speed of sound, as a limiting speed representing a certain type of singularity. In any case, these are just speculations or general ideas at the present time.

To conclude (again), I think time travel (to the past as well as to the future) is not possible and will not happen.

A way of proving this could be found not only in physics or in the physical or natural sciences, but also in the objective study of the structure and the rules of (human) historical events, and the realization that there are ‘laws’, patterns and regularities which govern these events.

There will regrettably be no “quantum leap time travel machine” and no “quantum realm time machine”, these expressions illustrating how the word “quantum” is inaccurately used as a hype word in an attempt to add a veneer of “scientificity” or plausibility to the movies using it. Nobody will be able to travel in time to kill this or that person, or to change history. The Terminator will not and cannot be sent back in time, neither to save nor to kill John and Sarah Connor. The time machine in the eponymous novel by H.G. Wells is not feasible and will not work. The DC Comics character the Flash will sadly not be able to change and reverse timelines, or to travel back in time to change past events. I could go on mentioning other examples, novels, works and movies, but I think the idea is clear.

The last several decades witnessed a craze or fad for entertaining movies involving time travel. I consider that time travel movies, and in general movies that rely on time travel as a plot twist, plot device or deus ex machina to solve everything, lack scientific accuracy, and I suggest that moviemakers stop using time travel altogether, because in the coming decades these types of movies will likely be viewed and assessed negatively.

Some AI generated poetry, questions and answers

I have recently been trying the ChatGPT AI chatbot application, asking some questions and obtaining answers that can be described as interesting and educational. One can also obtain good answers to questions related to various programming languages.

Without further ado, here are some questions (aka chats) and the answers provided.

Q: Write a sonnet with rhyme about nuclear and particle physics

A:

Nuclear and particle physics,
Fields that are complex, yet so precise,
Unravelling the secrets of matter and energy,
Discoveries that are quite a surprise.

From quarks and leptons to protons and neutrons,
From fission and fusion to radioactivity,
The mysteries of the atomic nucleus
Are revealed through experiments with intensity.

But the study of these fundamental forces
Is not just about equations and theories,
It’s about understanding the universe
And how it works, from its smallest to largest entities.

So let us delve into the depths of physics,
Uncovering new truths and unlocking mysteries.

Q: Write a limerick about planet Mars

A:

There once was a planet called Mars,
Whose red hue caused people to pause,
It was filled with dry dust,
And was a place most people would cuss,
But some saw it as an exciting new cause.

Q: Write a limerick about differential equations

A:

There was a differential equation,
That caused quite a stir and commotion,
It was tough to solve,
But eventually it was resolved,
And the solution brought great elation.

Q: Write a stanza about the Higgs boson

A:

The Higgs boson, a particle so small,
Discovered at the Large Hadron Collider,
It gives mass to all,
A force that’s hard to recall,
A fundamental part of the universe’s order.

Named for physicist Peter Higgs,
Who proposed its existence back in 1964,
It took decades to find,
But now we’re sure it’s intertwined,
In the fabric of space and time, forevermore.

So let us celebrate this discovery,
A triumph of science and technology,
The Higgs boson, a marvel to see,
A testament to the mysteries yet to be.

Q: Write a limerick about Newton and Leibniz

A:

Newton and Leibniz, two giants of math,
Each had their own approach to the path,
Newton developed calculus,
Leibniz came up with a basis,
Together, they expanded human’s math.

Q: distance between planet Mars and Earth on September 15, 2035 in astronomical units

A: The distance between Mars and Earth on September 15, 2035, in astronomical units (AU) will be approximately 0.614 AU. This is equivalent to 91.5 million kilometers (56.9 million miles). It’s worth noting that one astronomical unit (AU) is defined as the average distance between the Earth and the Sun, which is approximately 149.6 million kilometers (93 million miles). So, 0.614 AU is equivalent to 91.5 million kilometers (56.9 million miles) approximately. Also, keep in mind that this prediction is based on current knowledge of planetary dynamics, and small errors may occur due to unmodeled perturbations or other factors.

Actually, the distance between Mars and Earth given by ChatGPT is not accurate: according to several astronomy programs, applications and software packages, the distance between Mars and Earth on September 15, 2035 will be approximately 0.381 AU, or about 57 million kilometers.
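Such a figure can be checked programmatically. Below is a minimal sketch in Python, assuming the Skyfield library and the JPL de421 ephemeris (a file Skyfield downloads on first use; it covers dates up to the year 2053):

```python
from skyfield.api import load

ts = load.timescale()
t = ts.utc(2035, 9, 15)

eph = load('de421.bsp')  # JPL ephemeris, downloaded on first use
earth, mars = eph['earth'], eph['mars']

# Astrometric Earth-to-Mars distance on the given date
distance = earth.at(t).observe(mars).distance()
print(f"{distance.au:.3f} AU")  # approximately 0.38 AU
```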

Q: Was Prometheus punished justly?

A:

Prometheus was a figure from Greek mythology who is known for stealing fire from the gods and giving it to humans. According to the myth, Prometheus’s actions angered Zeus, the king of the gods, and he punished Prometheus by having him chained to a rock, where an eagle would come each day and eat his liver, which would then grow back overnight, causing Prometheus to suffer eternally.

Whether or not Prometheus was punished justly depends on one’s interpretation of the myth and their personal beliefs. Some may view Prometheus’s punishment as justified because he disrespected the gods and broke a divine rule by stealing fire. Others may see it as excessive and unjust because Prometheus was only trying to help humanity by giving them fire, which was a vital resource for survival. Ultimately, the question of whether Prometheus was punished justly is a matter of personal perspective.

Here is a somewhat more serious question:

Q: What is the meaning of life?

A: The meaning of life is a question that has puzzled philosophers, theologians, and scientists for centuries. There is no one answer that can be given, as the meaning of life is likely to be different for every individual. Some people believe that the meaning of life is to find happiness and fulfillment, while others believe that it is to contribute to the greater good of society. Still others believe that the meaning of life is to seek enlightenment and understanding of the world and our place in it. Ultimately, the meaning of life is something that each person must determine for themselves, based on their own values, beliefs, and experiences.

Questions can be asked and answers provided in other languages too. It should be noted that this application has its limitations and can still be enhanced, improved and upgraded.

Fields, rings, groups, their history and why they are called that way in math

In short, mathematicians usually like to use words or concepts (starting from the native language they use in their mathematical work) to describe or define theories or mathematical “things”. Then these words get translated into other languages, get defined better and more accurately, and become widely known and used.

For more details, let us review how or when mathematicians began using these words, starting with the origin or history of the word field in mathematics:

“The term Zahlenkörper (body of numbers) is due to Richard Dedekind (1831-1916). Dedekind used the term in his lectures of 1858 but the term did not come into general use until the early 1890s. Until then, the expression used was “rationally known quantities,” which means either the field of rational numbers or some finite extension of it, depending on the context.[…]

Dedekind used Zahlenkörper [literally “number body”] in Supplement X of his 4th edition of Dirichlet’s Vorlesungen über Zahlentheorie, section 159. In a footnote, he explained his choice of terminology, writing that, in earlier lectures (1857-8) he used the term ‘rationalen Gebietes’ and he says that Kronecker (1882) used the term ‘Rationalitaetsbereich’.

Dedekind did not allow for finite fields; for him, the smallest field was the field of rational numbers. According to a post in sci.math by Steve Wildstrom, ‘Dedekind’s ‘Koerper’ is actually what we would call a division ring rather than a field as it does not require that multiplication be commutative.’

Eliakim Hastings Moore (1862-1932) was apparently the first person to use the English word field in its modern sense and the first to allow for a finite field. He coined the expressions ‘field of order s’ and ‘Galois-field of order s=q^n.‘ These expressions appeared in print in December 1893 in the Bulletin of the New York Mathematical Society III. 75.”

(Source: https://jeff560.tripod.com/f.html)

In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word Körper, which means “body” or “corpus” (to suggest an organically closed entity).

In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. Kronecker’s notion did not cover the field of all algebraic numbers (which is a field in Dedekind’s sense), but on the other hand was more abstract than Dedekind’s in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as Q(π) abstractly as the rational function field Q(X).

The first clear definition of an abstract field is due to Heinrich Weber (1893).

Now about the origin of the word ring in mathematics:

In 1871, Richard Dedekind defined the concept of the ring of integers of a number field. In this context, he introduced the terms “ideal” (inspired by Ernst Kummer’s notion of ideal number) and “module” and studied their properties. But Dedekind did not use the term “ring” and did not define the concept of a ring in a general setting.

The term “Zahlring” (number ring) was coined by David Hilbert in 1892 and published in 1897 (in relation to algebraic number theory).

In 19th century German, the word “Ring” could mean “association”, which is still used today in English in a limited sense (e.g., spy ring), so if that were the etymology then it would be similar to the way “group” entered mathematics by being a non-technical word for “collection of related things”. According to Harvey Cohn, Hilbert used the term for a ring that had the property of “circling directly back” to an element of itself.

The image below is a photograph of David Hilbert, from 1912:

(Image source: https://en.m.wikipedia.org/wiki/File:Hilbert.jpg)

The first axiomatic definition of a ring was given by Adolf Fraenkel in 1914, but his axioms were stricter than those in the modern definition. For instance, he required every non-zero-divisor to have a multiplicative inverse. In 1921, Emmy Noether gave the modern axiomatic definition of (commutative) ring and developed the foundations of commutative ring theory in her paper Idealtheorie in Ringbereichen.

The mathematical disciplines or areas of study that influenced the formulation of group theory at the end of the 18th century and in the 19th century were geometry, number theory, and the theory of algebraic equations (related to the study of permutations).

Moreover,

“Evariste Galois introduced the term group in the form of the expression groupe de l’équation in his ‘Mémoire sur les conditions de résolubilité des équations par radicaux’ (written in 1830 but first published in 1846), Oeuvres mathématiques, p. 417. Cajori (vol. 2, page 83) points out that the modern definition of a group is somewhat different from that of Galois, for whom the term denoted a subgroup of the group of permutations of the roots of a given polynomial.

Group appears in English in Arthur Cayley, ‘On the theory of groups, as depending on the symbolic equation θⁿ = 1,’ Philosophical Magazine, 1854, vol. 7, pp. 40-47. […] The paper also introduced the term theory of groups. At the time this more abstract notion of a group made little impact.[…]

Klein and Lie use the term ‘closed system’ in their ‘Ueber diejenigen ebenen Curven, welche durch ein geschlossenes System von einfach unendlich vielen vertauschbaren linearen Transformationen in sich übergehen,’ Mathematische Annalen, 4, (1871), 50-84. Klein adopted the term gruppe in his ‘Vergleichende Betrachtungen über neuere geometrische Forschungen’ written in 1872 [in relation to the Erlangen program].

Group-theory is found in English in 1888 in George Gavin Morrice’s translation of Felix Klein, Lectures on the Ikosahedron and the solution of Equations of the Fifth Degree.”

(Source: http://jeff560.tripod.com/g.html)

The image below shows a portrait of Évariste Galois (1811-1832):

(Image source: https://en.m.wikipedia.org/wiki/File:Evariste_galois.jpg)

After novel geometries such as hyperbolic and projective geometry (dealing with the behavior of geometric figures under various transformations) had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884.

The convergence of various sources into a uniform theory of groups started with Camille Jordan’s Traité des substitutions et des équations algébriques (Treatise on Substitutions and Algebraic Equations) in 1870.

Walther von Dyck (1882) introduced the idea of specifying a group by means of generators and relations, and was also the first to give an axiomatic definition of an “abstract group”, in the terminology of the time.
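As a concrete illustration of von Dyck's idea (my examples, written in modern notation), the cyclic group of order $n$ and the dihedral group of order $2n$ can each be specified by generators and relations:

$C_n = \langle a \mid a^n = 1 \rangle, \qquad D_n = \langle r, s \mid r^n = s^2 = (rs)^2 = 1 \rangle$

Every element of such a group is a product of the generators and their inverses, and two products represent the same element exactly when the relations force them to.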

In the 20th century, groups gained wide recognition through the work of Ferdinand Georg Frobenius and William Burnside, who worked on the representation theory of finite groups, through Richard Brauer’s modular representation theory, and through Issai Schur’s papers. The theory of Lie groups, and more generally of locally compact groups, was studied by Hermann Weyl, Élie Cartan and others.

Additional information about fields, rings, groups and related topics can be found in Wikipedia and similar online resources.

Relevant and miscellaneous info about the Red Planet

It seems November 28 is called “Red Planet Day”. I already wrote a post about Mars, but there are always additional interesting facts and information about Mars, some of which I will present here.

The atmosphere of Mars is composed of carbon dioxide (about 95%), molecular nitrogen (2.8%), and argon (2%). It also contains trace levels of water vapor, oxygen, carbon monoxide, hydrogen, and noble gases. The atmosphere of Mars is much thinner than Earth’s. The average surface pressure is only about 610 pascals (0.088 psi), which is less than 1% of the Earth’s value. The currently thin Martian atmosphere precludes the existence of liquid water on the surface of Mars, but many studies suggest that the Martian atmosphere was thicker in the past. The Martian atmosphere is an oxidizing atmosphere. The photochemical reactions in the atmosphere tend to oxidize organic species and turn them into carbon dioxide or carbon monoxide.
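As a quick sanity check of the “less than 1%” figure, here is a toy Python calculation (using the standard sea-level pressure of 101,325 Pa for Earth):

# Compare the average Martian surface pressure to Earth's sea-level pressure.
mars_pressure_pa = 610.0        # average surface pressure on Mars, in pascals
earth_pressure_pa = 101_325.0   # standard sea-level pressure on Earth, in pascals

ratio = mars_pressure_pa / earth_pressure_pa
print(f"Mars surface pressure is {ratio:.2%} of Earth's")  # prints about 0.60%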

Mars has two permanent polar ice caps. During a pole’s winter, it lies in continuous darkness, chilling the surface and causing the deposition of 25–30% of the atmosphere into slabs of CO2 ice (dry ice). The temperature and circulation on Mars vary every Martian year, as expected for any planet with an atmosphere and axial tilt.
The surface of Mars has a very low thermal inertia, meaning it heats quickly when the sun shines on it. Typical daily temperature swings, away from the polar regions, are around 100 K.

An example of a known geological feature on Mars is Olympus Mons, a large shield volcano on Mars. The volcano has a height of over 21.9 km (13.6 miles or 72,000 feet) as measured by the Mars Orbiter Laser Altimeter (MOLA). Olympus Mons is the youngest of the large volcanoes on Mars, having formed during Mars’s Hesperian Period with eruptions continuing well into the Amazonian. The volcano is located in Mars’s western hemisphere, with the center at 18°39′N 226°12′E, just off the northwestern edge of the Tharsis bulge. There is a possibility that Olympus Mons is still active.

The image below shows a colorized topographic map of the volcano Olympus Mons, together with its surrounding aureole, from the Mars Orbiter Laser Altimeter (MOLA) instrument of the Mars Global Surveyor spacecraft:

(Image source: https://en.wikipedia.org/wiki/File:Olympus_Mons_aureole_MOLA_zoom_64.jpg)

Now for some explanations of the red color of Mars. The surface of the planet Mars appears reddish from a distance because of rusty dust suspended in the atmosphere, with an omnipresent dust layer that is typically on the order of millimeters thick. A large amount of the regolith of Mars, or its surface material, comprises iron oxide. Basically, rocks on Mars contain a lot of iron, and when they are exposed to the various atmospheric phenomena, they ‘oxidize’ and turn into a reddish color. The surface iron on Mars became oxidized, forming the iron oxide known more commonly as rust, a compound made of two iron atoms and three oxygen atoms, the chemical formula of iron(III) oxide being Fe₂O₃. The massive oxidation most likely occurred when Mars had flowing water and a thicker atmosphere.
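A small worked example of the chemistry mentioned above, using standard atomic masses (values rounded):

# Mass fraction of iron in iron(III) oxide, Fe2O3 ("rust").
IRON = 55.845     # atomic mass of iron, g/mol
OXYGEN = 15.999   # atomic mass of oxygen, g/mol

molar_mass = 2 * IRON + 3 * OXYGEN      # about 159.69 g/mol
iron_fraction = (2 * IRON) / molar_mass
print(f"Fe2O3 molar mass: {molar_mass:.2f} g/mol")
print(f"Iron mass fraction: {iron_fraction:.1%}")  # about 69.9%

So roughly seven-tenths of the mass of the rust compound is iron itself.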

Detailed observations of the position of Mars were made in Antiquity by Babylonian astronomers who developed arithmetic techniques to predict the future position of the planet. The late ancient philosophers and astronomers (such as Hipparchus, and then Claudius Ptolemy in his work known as the Almagest) developed a geocentric model to explain the planet’s motions, using systems and combinations of circular tracks called deferents and epicycles.
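In modern notation (my paraphrase, not Ptolemy’s own formulation), a deferent-and-epicycle path can be written as the sum of two uniform circular motions in the complex plane:

$z(t) = R\,e^{i\omega t} + r\,e^{i\Omega t}$

where $R$ and $\omega$ are the radius and angular speed of the deferent, and $r$ and $\Omega$ those of the epicycle riding on it.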

During the seventeenth century CE, Tycho Brahe measured the diurnal parallax of Mars, which Johannes Kepler used to make a preliminary calculation of the relative distance to the planet. Kepler studied the motion and the orbit of the planet Mars for years.
Kepler tried several oval curves for the orbit of Mars that might fit the observations, including the ellipse. He was not happy with the physical reasons for choosing any of them until he noticed that one focus of an approximating ellipse coincided with the Sun. The curve and its focus made it easier for Kepler to formulate a physical explanation.

Kepler’s initial attempt to define the orbit of Mars as a circle was off by only eight minutes of arc, but this discrepancy led him to spend six years resolving it. The data seemed to produce a symmetrical oviform curve inside his predicted circle. He first tested an egg shape, then engineered a theory of an orbit which oscillates in diameter, and returned to the egg. In early 1605, he geometrically tested an ellipse, which he had previously assumed to be too simple a solution for earlier astronomers to have overlooked. He had already derived this solution trigonometrically many months earlier.
In his Astronomia Nova, published in 1609, Kepler presented a proof that Mars’ orbit is elliptical. Evidence that the other known planets’ orbits are elliptical was presented only in 1621. Kepler published his first two laws about planetary motion in 1609, having found them by analyzing the astronomical observations of Tycho Brahe.
Kepler gradually discovered that all planets orbit the Sun in elliptical orbits, with the Sun at one of the two focal points. This result became the first of Kepler’s three laws of planetary motion.
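To make the first two laws concrete, here is a minimal Python sketch (my own illustration, using rounded modern orbital elements for Mars) that solves Kepler’s equation $M = E - e\sin E$ by Newton’s method and converts the result to a heliocentric distance:

import math

# Approximate orbital elements of Mars (modern values, rounded).
a = 1.5237    # semi-major axis, AU
e = 0.0934    # orbital eccentricity

def eccentric_anomaly(mean_anomaly, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E via Newton's method."""
    E = mean_anomaly  # good starting guess for small eccentricity
    while True:
        delta = (E - e * math.sin(E) - mean_anomaly) / (1 - e * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            return E

# Heliocentric distance a quarter of the way through the orbit (M = 90 degrees).
M = math.pi / 2
E = eccentric_anomaly(M, e)
r = a * (1 - e * math.cos(E))
print(f"r = {r:.4f} AU")  # between perihelion (~1.381 AU) and aphelion (~1.666 AU)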

The image below depicts the orbits of the planets Mercury, Venus, Earth, and the elliptical orbit of Mars around the Sun. The date is November 28, 1613 (image made with the Starry Night astronomy software):

mars elliptical orbit

The first person to draw a map of Mars that displayed any terrain features was the Dutch astronomer Christiaan Huygens.

At its nearest, Mars comes closer to Earth than any other planet except Venus: the closest Mars gets to Earth is about 56 million km, whereas the closest Venus comes is about 40 million km. Mars comes closest to Earth every other year, around the time of its opposition, when Earth sweeps between the Sun and Mars. Extra-close oppositions of Mars happen every 15 to 17 years, when we pass between Mars and the Sun around the time of its perihelion (its closest point to the Sun). The minimum distance between Earth and Mars has been declining over the years; in 2003 the minimum distance was 55.76 million km, nearer than any such encounter in almost 60,000 years (since circa 57,617 BCE). The record minimum distance in 2729 will stand at 55.65 million km, in the year 3818 at 55.44 million km, and the distances will continue to decrease for about 24,000 years.
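A back-of-the-envelope check of the ~56 million km figure, as a rough Python sketch (my own, ignoring orbital inclination and precession and using rounded orbital elements): the smallest possible separation is approximately Mars’s perihelion distance minus Earth’s aphelion distance.

AU_KM = 149_597_870.7  # kilometers per astronomical unit

# Rounded orbital elements (semi-major axis in AU, eccentricity).
a_mars, e_mars = 1.5237, 0.0934
a_earth, e_earth = 1.0000, 0.0167

mars_perihelion = a_mars * (1 - e_mars)    # ~1.3814 AU
earth_aphelion = a_earth * (1 + e_earth)   # ~1.0167 AU

min_distance_km = (mars_perihelion - earth_aphelion) * AU_KM
print(f"Approximate minimum Earth-Mars distance: {min_distance_km / 1e6:.1f} million km")

This prints about 54.6 million km, slightly below the observed 2003 record of 55.76 million km, because a perfect perihelion-aphelion alignment never quite occurs.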

Starting in 1960, the Soviet Union launched and sent a series of probes to Mars including the first attempted flybys and hard (impact) landing. The first successful flyby of Mars was on 14–15 July 1965, by NASA’s Mariner 4. On November 14, 1971, Mariner 9 became the first space probe to orbit another planet when it entered into orbit around Mars.
The first probes to contact the surface were two Soviet landers: Mars 2 on November 27 and Mars 3 on December 2, 1971. Mars 2 failed during descent, and Mars 3 failed about twenty seconds after the first Martian soft landing. Mars 6 failed during descent but did return some corrupted atmospheric data in 1974. The 1975 NASA launches of the Viking program consisted of two orbiters, each with a lander that successfully soft-landed in 1976. Viking 1 remained operational for six years, Viking 2 for three. The Viking landers relayed the first color panoramas of Mars.

The image below shows the clearest image of Martian craters taken by Mariner 4:

(Image source: https://en.wikipedia.org/wiki/File:Mariner_4_craters.gif)

In order to understand and study the gravity of Mars, its gravitational field strength g and gravitational potential U are frequently measured. Mars being a non-spherical planetary body influenced by complex geological processes, the gravitational potential is described with spherical harmonic functions, following the conventions in geodesy, via the following standard expansion:

$U(r, \phi, \lambda) = \frac{GM}{r}\left[1 + \sum_{l=2}^{\infty}\sum_{m=0}^{l}\left(\frac{R}{r}\right)^{l} P_{lm}(\sin\phi)\left(C_{lm}\cos m\lambda + S_{lm}\sin m\lambda\right)\right]$

Here is an explanation of the potential formula above: $G$ is the gravitational constant, $M$ is the mass of Mars, $r$ is the radial distance from the planet’s center of mass, $R$ is a reference (mean equatorial) radius of Mars, $\phi$ and $\lambda$ are the latitude and longitude of the point considered, $P_{lm}$ are the associated Legendre functions of degree $l$ and order $m$, and $C_{lm}$ and $S_{lm}$ are dimensionless spherical harmonic coefficients determined from spacecraft tracking data.
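As a minimal numeric illustration, here is a Python sketch (my own, keeping only the leading term $GM/r$ of the expansion) that estimates the surface field strength from $g = GM/r^2$, using the commonly cited values $GM \approx 4.2828 \times 10^{13}$ m³/s² and a mean radius of about 3389.5 km:

# Leading-term estimate of the surface gravity of Mars, g = GM / r^2.
GM_MARS = 4.2828e13   # gravitational parameter GM of Mars, m^3/s^2
R_MARS = 3.3895e6     # mean radius of Mars, m

g = GM_MARS / R_MARS**2
print(f"g on Mars is about {g:.2f} m/s^2")  # roughly 3.73 m/s^2

The higher-degree terms of the expansion (starting with the degree-2 oblateness term) adjust this value slightly depending on latitude and longitude.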
Mars will be at opposition on December 8, 2022. This means that Mars and the Sun will be on opposite sides of planet Earth, with Earth and Mars near their minimum separation in their respective orbits.

The following image shows planetary orbits, with the shining Sun in the middle of the image and with Mars in opposition to Earth, on December 8, 2022 (image made with the Mobile Observatory astronomy app):

An important upcoming event related to Mars will evidently be the first human mission to the Red Planet. It should be the outcome of thorough preparation and international cooperation, so that the trained and qualified crew of the first human trip to Mars can travel through space, land and set foot on the Red Planet, stay and explore for a limited period of time, and then return safely to Earth.

Finally, the image below shows planet Mars and the orbit of one of its moons (Phobos) as seen from the surface of planet Saturn at 0°N 0°E, on November 28, 2022 (image made with the Starry Night astronomy software):