Draft dated November 9, 2007

HADRONIC MATHEMATICS, MECHANICS AND CHEMISTRY
Volume I: Iso-, Geno-, Hyper-Formulations for Matter and Their Isoduals for Antimatter

Ruggero Maria Santilli
CV: http://www.i-b-r.org/Ruggero-Maria-Santilli.htm
Institute for Basic Research
P. O. Box 1577, Palm Harbor, FL 34682, U.S.A.
ibr@gte.net, http://www.i-b-r.org
http://www.neutronstructure.org
http://www.magnegas.com

International Academic Press

Copyright © 2007 owned by Ruggero Maria Santilli
P. O. Box 1577, Palm Harbor, FL 34682, U.S.A.
All rights reserved.
This book can be reproduced in its entirety or in part, or stored in a retrieval system or transmitted in any form, provided that the source and its paternity are identified fully and clearly. Violators will be prosecuted in the U. S. Federal Court and lawsuits will be listed in the web site http://www.scientificethics.org

U. S. Library of Congress Cataloguing in publication data:
Santilli, Ruggero Maria, 1935 –
Foundations of Hadronic Mathematics, Mechanics and Chemistry
Volume I: Iso-, Geno-, and Hyper-Formulations for Matter and their Isoduals for Antimatter
with Bibliography and Index
Additional data supplied on request

ISBN

INTERNATIONAL ACADEMIC PRESS

This volume is dedicated to the memory of Professor Grigorios Tsagas in recognition of his pioneering work on the Lie-Santilli isotheory.

Contents

Foreword xi
Preface xiv
Ethnic Note xxxi
Legal Notice xxxiii
Acknowledgments xxxv

1. SCIENTIFIC IMBALANCES OF THE TWENTIETH CENTURY 1
1.1 THE SCIENTIFIC IMBALANCE CAUSED BY ANTIMATTER 1
1.1.1 Needs for a Classical Theory of Antimatter 1
1.1.2 The Mathematical Origin of the Imbalance 3
1.1.3 Outline of the Studies on Antimatter 4
1.2 THE SCIENTIFIC IMBALANCE CAUSED BY NONLOCAL-INTEGRAL INTERACTIONS 4
1.2.1 Foundations of the Imbalance 4
1.2.2 Exterior and Interior Dynamical Problems 7
1.2.3 General Inapplicability of Conventional Mathematical and Physical Methods for Interior Dynamical Systems 13
1.2.4 Inapplicability of Special Relativity for Dynamical Systems with Resistive Forces 15
1.2.5 Inapplicability of Special Relativity for the Propagation of Light within Physical Media 16
1.2.6 Inapplicability of the Galilean and Poincaré Symmetries for Interior Dynamical Systems 18
1.2.7 The Scientific Imbalance Caused by Quark Conjectures 21
1.2.8 The Scientific Imbalance Caused by Neutrino Conjectures 24
1.2.9 The Scientific Imbalance in Experimental Particle Physics 30
1.2.10 The Scientific Imbalance in Nuclear Physics 33
1.2.11 The Scientific Imbalance in Superconductivity 37
1.2.12 The Scientific Imbalance in Chemistry 38
1.2.13 Inconsistencies of Quantum Mechanics, Superconductivity and Chemistry for Underwater Electric Arcs 47
1.3 THE SCIENTIFIC IMBALANCE CAUSED BY IRREVERSIBILITY 49
1.3.1 The Scientific Imbalance in the Description of Natural Processes 49
1.3.2 The Scientific Imbalance in Astrophysics and Cosmology 52
1.3.3 The Scientific Imbalance in Biology 53
1.4 THE SCIENTIFIC IMBALANCE CAUSED BY GENERAL RELATIVITY AND QUANTUM GRAVITY 55
1.4.1 Consistency and Limitations of Special Relativity 55
1.4.2 The Scientific Imbalance Caused by General Relativity on Antimatter, Interior Problems, and Grand Unifications 56
1.4.3 Catastrophic Inconsistencies of General Relativity due to Lack of Sources 58
1.4.4 Catastrophic Inconsistencies of General Relativity due to Curvature 65
1.4.5 Concluding Remarks 70
1.5 THE SCIENTIFIC IMBALANCE CAUSED BY NONCANONICAL AND NONUNITARY THEORIES 71
1.5.1 Introduction 71
1.5.2 Catastrophic Inconsistencies of Noncanonical Theories 71
1.5.3 Catastrophic Inconsistencies of Nonunitary Theories 82
1.5.4 The Birth of Isomathematics, Genomathematics and their Isoduals 94
1.5.5 Hadronic Mechanics 98
References 102

2. ISODUAL THEORY OF POINT-LIKE ANTIPARTICLES 109
2.1 ELEMENTS OF ISODUAL MATHEMATICS 109
2.1.1 Isodual Unit, Isodual Numbers and Isodual Fields 109
2.1.2 Isodual Functional Analysis 113
2.1.3 Isodual Differential and Integral Calculus 114
2.1.4 Lie-Santilli Isodual Theory 114
2.1.5 Isodual Euclidean Geometry 115
2.1.6 Isodual Minkowskian Geometry 117
2.1.7 Isodual Riemannian Geometry 118
2.2 CLASSICAL ISODUAL THEORY OF POINT-LIKE ANTIPARTICLES 120
2.2.1 Basic Assumptions 120
2.2.2 Need for Isoduality to Represent All Time Directions 121
2.2.3 Experimental Verification of the Isodual Theory of Antimatter in Classical Physics 122
2.2.4 Isodual Newtonian Mechanics 123
2.2.5 Isodual Lagrangian Mechanics 125
2.2.6 Isodual Hamiltonian Mechanics 126
2.2.7 Isodual Galilean Relativity 127
2.2.8 Isodual Special Relativity 129
2.2.9 Inequivalence of Isodual and Spacetime Inversions 133
2.2.10 Dunning-Davies Isodual Thermodynamics of Antimatter 134
2.2.11 Isodual General Relativity 136
2.3 OPERATOR ISODUAL THEORY OF POINT-LIKE ANTIPARTICLES 137
2.3.1 Basic Assumptions 137
2.3.2 Isodual Quantization 138
2.3.3 Isodual Hilbert Spaces 139
2.3.4 Isoselfduality of Minkowski's Line Elements and Hilbert's Inner Products 140
2.3.5 Isodual Schrödinger and Heisenberg's Equations 141
2.3.6 Isoselfdual Re-Interpretation of Dirac's Equation 142
2.3.7 Equivalence of Isoduality and Charge Conjugation 145
2.3.8 Experimental Verification of the Isodual Theory of Antimatter in Particle Physics 148
2.3.9 Elementary Particles and their Isoduals 148
2.3.10 Photons and their Isoduals 149
2.3.11 Electrons and their Isoduals 151
2.3.12 Protons and their Isoduals 151
2.3.13 The Hydrogen Atom and its Isodual 152
2.3.14 Isoselfdual Bound States 153
2.3.15 Resolution of the Inconsistencies of Negative Energies 155
References 157
3. LIE-ISOTOPIC BRANCH OF HADRONIC MECHANICS AND ITS ISODUAL 159
3.1 INTRODUCTION 159
3.1.1 Conceptual Foundations 159
3.1.2 Closed Non-Hamiltonian Systems 163
3.1.3 Need for New Mathematics 166
3.2 ELEMENTS OF SANTILLI'S ISOMATHEMATICS AND ITS ISODUAL 173
3.2.1 Isounits, Isoproducts and their Isoduals 173
3.2.2 Isonumbers, Isofields and their Isoduals 177
3.2.3 Isospaces and Their Isoduals 184
3.2.4 Isofunctional Analysis and its Isodual 188
3.2.5 Isodifferential Calculus and its Isodual 194
3.2.6 Kadeisvili's Isocontinuity and its Isodual 199
3.2.7 TSSFN Isotopology and its Isodual 201
3.2.8 Iso-Euclidean Geometry and its Isodual 204
3.2.9 Minkowski-Santilli Isogeometry and its Isodual 209
3.2.10 Isosymplectic Geometry and its Isodual 225
3.2.11 Isolinearity, Isolocality, Isocanonicity and Their Isodualities 229
3.2.12 Lie-Santilli Isotheory and its Isodual 231
3.2.13 Unification of All Simple Lie Algebras into Lie-Santilli Isoalgebras 240
3.2.14 The Fundamental Theorem for Isosymmetries and Their Isoduals 242
3.3 CLASSICAL LIE-ISOTOPIC MECHANICS FOR MATTER AND ITS ISODUAL FOR ANTIMATTER 243
3.3.1 Introduction 243
3.3.2 Insufficiencies of Analytic Equations with External Terms 244
3.3.3 Insufficiencies of Birkhoffian Mechanics 246
3.3.4 Newton-Santilli Isomechanics for Matter and its Isodual for Antimatter 248
3.3.5 Hamilton-Santilli Isomechanics for Matter and its Isodual for Antimatter 253
3.3.6 Simple Construction of Classical Isomechanics 261
3.3.7 Invariance of Classical Isomechanics 262
3.4 OPERATOR LIE-ISOTOPIC MECHANICS FOR MATTER AND ITS ISODUAL FOR ANTIMATTER 263
3.4.1 Introduction 263
3.4.2 Naive Isoquantization and its Isodual 264
3.4.3 Isohilbert Spaces and their Isoduals 265
3.4.4 Structure of Operator Isomechanics and its Isodual 267
3.4.5 Dynamical Equations of Operator Isomechanics and their Isoduals 269
3.4.6 Preservation of Quantum Physical Laws 271
3.4.7 Isoperturbation Theory and its Isodual 274
3.4.8 Simple Construction of Operator Isomechanics and its Isodual 276
3.4.9 Invariance of Operator Isomechanics and of its Isodual 278
3.5 SANTILLI ISORELATIVITY AND ITS ISODUAL 279
3.5.1 Limitations of Special and General Relativities 279
3.5.2 Minkowski-Santilli Isospaces and their Isoduals 283
3.5.3 Poincaré-Santilli Isosymmetry and its Isodual 286
3.5.4 Isorelativity and Its Isodual 292
3.5.5 Isorelativistic Hadronic Mechanics and its Isoduals 295
3.5.6 Isogravitation and its Isodual 297
Appendices 303
3.A Universal Enveloping Isoassociative Algebras 303
3.B Recent Advances in the TSSFN Isotopology 305
3.C Recent Advances on the Lie-Santilli Isotheory 309
3.D Lorentz versus Galileo-Roman Relativistic Symmetry 315
References 323
4. LIE-ADMISSIBLE BRANCH OF HADRONIC MECHANICS AND ITS ISODUAL 327
4.1 INTRODUCTION 327
4.1.1 The Scientific Imbalance Caused by Irreversibility 327
4.1.2 The Forgotten Legacy of Newton, Lagrange and Hamilton 329
4.1.3 Early Representations of Irreversible Systems 331
4.2 ELEMENTS OF SANTILLI GENOMATHEMATICS AND ITS ISODUAL 335
4.2.1 Genounits, Genoproducts and their Isoduals 335
4.2.2 Genonumbers, Genofunctional Analysis and Their Isoduals 337
4.2.3 Genogeometries and Their Isoduals 339
4.2.4 Santilli Lie-Admissible Theory and Its Isodual 340
4.2.5 Genosymmetries and Nonconservation Laws 341
4.3 LIE-ADMISSIBLE CLASSICAL MECHANICS FOR MATTER AND ITS ISODUAL FOR ANTIMATTER 342
4.3.1 Fundamental Ordering Assumption on Irreversibility 342
4.3.2 Newton-Santilli Genoequations and Their Isoduals 343
4.3.3 Hamilton-Santilli Genomechanics and Its Isodual 346
4.4 LIE-ADMISSIBLE OPERATOR MECHANICS FOR MATTER AND ITS ISODUAL FOR ANTIMATTER 349
4.4.1 Basic Dynamical Equations 349
4.4.2 Simple Construction of Lie-Admissible Theories 352
4.4.3 Invariance of Lie-Admissible Theories 354
4.5 APPLICATIONS 355
4.5.1 Lie-admissible Treatment of Particles with Dissipative Forces 355
4.5.2 Direct Universality of Lie-Admissible Representations for Nonconservative Systems 358
4.5.3 Pauli-Santilli Lie-Admissible Matrices 360
4.5.4 Minkowski-Santilli Irreversible Genospacetime 363
4.5.5 Dirac-Santilli Irreversible Genoequation 364
4.5.6 Dunning-Davies Lie-Admissible Thermodynamics 365
4.5.7 Ongoing Applications to New Clean Energies 367
References 369

5. HYPERSTRUCTURAL BRANCH OF HADRONIC MECHANICS AND ITS ISODUAL 371
5.1 The Scientific Imbalance in Biology 371
5.2 The Need in Biology of Irreversible Multi-Valued Formulations 371
5.3 Rudiments of Santilli Hyper-Mathematics and Hypermechanics 373
5.4 Rudiments of Santilli Isodual Hypermathematics 376
5.5 Santilli Hyperrelativity and Its Isodual 377
Appendices 381
5.A Eric Trell's Hyperbiological Structures (to be completed and edited) 381
References 382

Postscript 383
Index 389

Foreword

Mathematics is a subject which possibly finds itself in a unique position in academia in that it is viewed as both an Art and a Science. Indeed, in different universities, graduates in mathematics may receive Bachelor Degrees in Arts or Sciences. This probably reflects the dual nature of the subject. On the one hand, it may be studied as a subject in its own right. In this sense, its own beauty is there for all to behold; some as serene as da Vinci's "Madonna of the Rocks", others as powerful and majestic as Michelangelo's glorious ceiling of the Sistine Chapel, yet more bringing to mind the impressionist brilliance of Monet's Water Lily series. It is this latter example, with the impressionists' interest in light, that links up with the alternative view of mathematics; that view which sees mathematics as the language of science, of physics in particular, since physics is that area of science at the very hub of all scientific endeavour, all other branches being dependent on it to some degree. In this guise, however, mathematics is really a tool, and any results obtained are of interest only if they relate to what is found in the real world; if results predict some effect, that prediction must be verified by observation and/or experiment. Again, it may be remembered that physics is really a collection of related theories. These theories are all man-made and, as such, are incomplete and imperfect. This is where the work of Ruggero Santilli enters the scientific arena.
Although "conventional wisdom" dictates otherwise, both the widely accepted theories of relativity and quantum mechanics, particularly quantum mechanics, are incomplete. The qualms surrounding both have been muted, but possibly more has emerged concerning the inadequacies of quantum mechanics because of the people raising them. Notably, although it is not publicly stated too frequently, Einstein had grave doubts about various aspects of quantum mechanics. Much of the worry has revolved around the role of the observer and over the question of whether quantum mechanics is an objective theory or not. One notable contributor to the debate has been that eminent philosopher of science, Karl Popper. As discussed in my book, "Exploding a Myth", Popper preferred to refer to the experimentalist rather than the observer, and expressed the view that that person played the same role in quantum mechanics as in classical mechanics. He felt, therefore, that such a person was there to test the theory. This is totally opposed to the Copenhagen Interpretation, which claims that "objective reality has evaporated" and "quantum mechanics does not represent particles, but rather our knowledge, our observations, or our consciousness, of particles". Popper points out that, over the years, many eminent physicists have switched allegiance from the pro-Copenhagen view. In some ways, the most important of these people was David Bohm, a greatly respected thinker on scientific matters who wrote a book presenting the Copenhagen view of quantum mechanics in minute detail. However, later, apparently under Einstein's influence, he reached the conclusion that his previous view had been in error and also declared the total falsity of the constantly repeated dogma that the quantum theory is complete. It was, of course, this very question of whether or not quantum mechanics is complete which formed the basis of the disagreement between Einstein and Bohr; Einstein stating "No", Bohr "Yes". However, where does Popper fit into anything to do with Hadronic Mechanics? Quite simply, it was Karl Popper who first drew public attention to the thoughts and ideas of Ruggero Santilli. Popper reflected on, amongst other things, Chadwick's neutron. He noted that it could be viewed, and indeed was interpreted originally, as being composed of a proton and an electron. However, again as he notes, orthodox quantum mechanics offered no viable explanation for such a structure. Hence, in time, it became accepted as a new particle. Popper then noted that, around his (Popper's) time of writing, Santilli had produced an article in which the "first structure model of the neutron" was revived by "resolving the technical difficulties which had led, historically, to the abandonment of the model". It is noted that Santilli felt the difficulties were all associated with the assumption that quantum mechanics applied within the neutron and disappeared when a generalised mechanics is used. Later, Popper goes on to claim that Santilli belongs to a new generation of scientists which seemed to him to move on a different path. Popper identifies quite clearly how, in his approach, Santilli distinguishes the region of the arena of incontrovertible applicability of quantum mechanics from nuclear mechanics and hadronics. He notes also his most fascinating arguments in support of the view that quantum mechanics should not, without new tests, be regarded as valid in nuclear and hadronic mechanics.
Ruggero Santilli has devoted his life to examining the possibility of extending the theories of quantum mechanics and relativity so that the new, more general theories will apply in situations previously excluded from them. To do this, he has had to go back to the very foundations and develop new mathematics and new mathematical techniques. Only after these new tools were developed was he able to realistically examine the physical situations which originally provoked this lifetime's work. The actual science is his, and his alone, but, as with the realization of all great endeavours, he has not been alone. The support and encouragement he has received from his wife Carla cannot be exaggerated. In truth, the scientific achievements of Ruggero Santilli may be seen, in one light, as the results of a team effort; a team composed of Ruggero himself and Carla Gandiglio in Santilli. The theoretical foundations of the entire work are contained in this volume; a volume which should be studied rigorously and with a truly open mind by the scientific community at large. This volume contains work which might be thought almost artistic in nature and is that part of the whole possessing the beauty so beloved of mathematicians and great artists. However, the scientific community should reserve its final judgement until it has had a chance to view the experimental and practical evidence which may be produced later in support of this elegant new theoretical framework.

Jeremy Dunning-Davies,
Physics Department, University of Hull, England.
September 8, 2007

Preface

Our planet is afflicted by increasingly cataclysmic climatic changes. The only possibility for their containment is the development of new, clean energies and fuels. But all possible energies and fuels that could be conceived with quantum mechanics, quantum chemistry, special relativity, and other conventional theories had been discovered by the middle of the 20-th century, and all of them proved to be environmentally unacceptable, either because of an excessive production of atmospheric pollutants or because of the release of dangerous waste. Hence, the scientific community of the 21-st century is faced with the quite complex duties of, firstly, broadening conventional theories into forms permitting the prediction and quantitative study of new clean energies and fuels and, secondly, developing them up to the needed industrial maturity. These volumes outline the efforts conducted by the author and a number of other scientists, as well as industrialists, toward these pressing needs of human society.

To begin, we shall say that a theory is: 1) exactly valid for given physical conditions when it allows a numerically exact representation of all experimental data from unadulterated first axioms; 2) approximately valid for different physical conditions when requiring the use of unknown parameters to fit the experimental data; and 3) basically inapplicable for yet different conditions when unable to provide any quantitative treatment even with the use of arbitrary parameters. Note that quantum mechanics, quantum chemistry, special relativity and other theories of the 20-th century cannot be claimed to be "violated" for conditions 2) and 3) since, as we shall see, they were not conceived for the latter conditions.

There is no doubt that quantum mechanics permitted in the 20-th century the achievement of historical advances in various fields.
These successes caused a widespread belief that quantum mechanics is exactly valid for all possible conditions of particles existing in the universe. Such a belief is ascientific, particularly when ventured by experts. As established by history, science will never admit final theories. No matter how valid any theory may appear at a given time, its structural generalization for the representation of previously unknown conditions is only a matter of time.

Needless to say, quantum mechanics is exactly valid for the physical conditions of its original conception, point-like particles and electromagnetic waves propagating in vacuum, as occurring in the structure of the hydrogen atom, the structure of crystals, the motion of particles in an accelerator, and numerous other conditions.

Contrary to a rather popular belief, quantum mechanics is only approximately valid for a number of particle conditions at short mutual distances. A clear example is the Bose-Einstein correlation, in which protons and antiprotons collide at very high energy, annihilate each other, and result in the production of a large number of mesons that remain correlated at large mutual distances. On strict scientific grounds, a theory constructed for the orbiting of point-like electrons in vacuum around atomic nuclei is not expected to be exactly valid for the dramatically different conditions occurring in the mutual penetration of the hyperdense protons and antiprotons. In fact, the fit of experimental data by the two-point function of the Bose-Einstein correlation requires four arbitrary parameters of unknown physical or mathematical origin (significantly called the "chaoticity parameters"). But the Hamiltonian is Hermitian and two-dimensional, thus allowing only two parameters for the diagonal elements 11 and 22. Additionally, the remaining two parameters interconnect the off-diagonal elements 12 and 21, a feature absolutely prohibited by the quantum axiom of expectation values for a Hermitian, thus diagonal, Hamiltonian. These and other features establish beyond scientific or otherwise credible doubt that the four parameters needed to fit experimental data are a direct measure of the approximate character of quantum mechanics for the Bose-Einstein correlation.

During the course of our analysis we shall identify numerous additional cases of approximate validity of quantum mechanics because of irreconcilable incompatibilities with the ultimate axioms of the theory, such as: the approximate character of quantum mechanics in nuclear physics (due to the incompatibility of the spin 1 of the deuteron with quantum axioms requiring spin 0 for the ground state of two particles with spin 1/2, and numerous other reasons); the approximate character of the conventional "potential scattering theory" for deep inelastic scatterings of extended and hyperdense hadrons (due to the need for contact, non-Hamiltonian, thus nonunitary contributions outside the class of equivalence of quantum mechanics); the approximate character of superconductivity (because of structural problems in the Cooper pair); and other cases.

Finally, quantum mechanics is basically inapplicable for a number of particle events, such as the synthesis of the neutron from protons and electrons as occurring in stars or, more generally, the synthesis of strongly interacting particles (called hadrons), such as the synthesis of the π0 meson from electrons and positrons.
All consistent quantum bound states (such as nuclei, atoms and molecules) require a negative binding energy, for which the rest energy (or mass) of the final state is smaller than the sum of the rest energies (or masses) of the constituents. However, experimental data establish that the synthesis of the neutron via the familiar reaction p+ + e− → n + ν requires a positive binding energy, because the rest energy of the neutron is 0.78 MeV bigger than the sum of the rest energies of the proton and the electron. Under these conditions, quantum mechanics is unable to provide any meaningful treatment at all because, as we shall see in detail in this and in the subsequent volume, Schrödinger's equation admits no physical solution for positive binding energies, as the skeptic reader is encouraged to verify. The attempt at salvaging quantum mechanics via the conjugate reaction p+ + e− + ν̄ → n, namely, the dream of using the hypothetical antineutrino to provide the missing energy, has no credibility because the hypothetical antineutrino has an absolutely null cross section with protons and electrons. A similar basic inapplicability of quantum mechanics occurs for numerous other cases whose treatment is generally ignored or claimed as not needed, such as the synthesis of the π0 via the known reaction e+ + e− → π0, as well as for the synthesis of all unstable particles.

The author has dedicated his research life to the study of the limitations of conventional theories, the construction of suitable generalizations, and their application to the industrial development of new clean energies and fuels. The studies were initiated with paper [1] of 1956 (written when the author was an undergraduate student of physics at the University of Napoli, Italy), on the conception of space, or vacuum, as a universal medium (or substratum) of high density and energy. The paper was written for the resolution of the controversy on the "ethereal wind" raging at that time via the reduction of all particles constituting matter, such as the electron, to "pure oscillations of space," namely, oscillations of the space itself without any oscillating conventional mass. Under these conditions, when masses are moved, there cannot be any ethereal wind since we merely move oscillations of space from given points to others [1]. According to this view, and in dramatic contrast with our sensory perception, matter is completely empty in the sense that it can be entirely reduced to pure oscillations of space without any oscillation of conventional masses, as apparently necessary for the structure of the electron. Consequently, the view requires that space is completely full of a medium of extremely high density (as suggested by the very large value of the speed of light). Also, space was conceived in Ref. [1] as possessing a feature approximating our notion of rigidity, in view of the purely transversal character of light.

The study of space as a universal medium is significant for the main objectives of these volumes, including the search for new clean energies. In cosmology, we have the long-standing hypothesis of the continuous creation of matter in our universe.
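As an editorial aside, the 0.78 MeV figure quoted above can be checked from the standard tabulated rest energies (values assumed here; the author does not list them explicitly):

\[
m_n c^2 \simeq 939.565~\text{MeV}, \qquad m_p c^2 \simeq 938.272~\text{MeV}, \qquad m_e c^2 \simeq 0.511~\text{MeV},
\]
\[
m_n c^2 - \left( m_p c^2 + m_e c^2 \right) \simeq 939.565 - 938.783 \simeq +0.78~\text{MeV},
\]

so the final state is indeed heavier than the sum of the assumed constituents, which is the origin of the "positive binding energy" discussed above. This is also the minimum missing energy that enters the continuous-creation hypothesis considered next.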
In the event this hypothesis is correct, the most plausible origin of the creation of matter is precisely the synthesis of the neutron from protons and electrons in the core of stars, because the minimum missing energy of 0.78 MeV in the reaction p+ + e− → n + ν could originate precisely from space and, in any case, the hypothesis is so fundamental for our entire scientific knowledge as to mandate quantitative studies, of course jointly with other hypotheses (such as the drawing of the missing energy from the star environment). As we shall see in this and in the following volume, a first meaning of the novel hadronic mechanics is that of providing the first known methods for quantitative studies of the possible interplay between matter and its underlying universal substratum. The understanding is that space is the final frontier of human knowledge, with potential outcomes beyond the most vivid science fiction of today, whose study will likely require the entire third millennium.

During graduate studies in physics at the University of Torino, Italy, the author learned that Lie algebras with product [A, B] = AB − BA (where A, B are matrices, operators, etc.) are the ultimate foundations of classical and quantum mechanics, special relativity and other quantitative sciences. Hence, the author dedicated his graduate studies for the Ph.D. thesis to the search for a structural generalization of Lie algebras. These studies resulted in the first publication in a physics journal [2] of 1967 (see also the more general study [3] of 1968) of the covering Lie-admissible algebras with product (A, B) = pAB − qBA, where p and q are non-null parameters. Some twenty years later these algebras produced a large number of papers under the name of "q-deformations" with the simplified product (A, B) = AB − qBA. Lie-admissible algebras were selected not only because of their covering character over Lie algebras, but also for their capability of representing irreversible processes, a crucial feature for the main objectives of these studies.

Following a decade of papers in conventional fields, the construction of a Lie-admissible covering of quantum mechanics under the name of hadronic mechanics was proposed by the author in two memoirs [4,5] of 1978, when at Harvard University under support from the U. S. Department of Energy. The proposal was based on the Lie-admissible generalization of the Heisenberg equation, i dA/dt = (A, H) = A P H − H Q A [5] (today known as the Heisenberg-Santilli genoequations), where P and Q are nonsingular matrices or operators. The equations were proposed for the treatment of open irreversible events (such as energy-releasing particle processes). The original proposal [5] also presented the Lie-isotopic particularization i dA/dt = [A, H]* = A T H − H T A (today known as the Heisenberg-Santilli isoequations) for the representation of closed-isolated systems of particles at small mutual distances (such as the structure of hadrons, nuclei and stars). The latter systems are expected to have conventional potential interactions represented by the Hamiltonian H(r, p) and the most general possible nonlinear, nonlocal and nonpotential interactions represented by the operator T(t, r, p, ψ, ...).
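The following minimal numerical sketch (an editorial addition, not part of the original proposal) illustrates the algebraic statements just made: for any fixed nonsingular matrix T the isotopic bracket [A, B]* = A T B − B T A is antisymmetric and satisfies the Jacobi identity, because the underlying product A T B remains associative, while the Lie-admissible product (A, B) = pAB − qBA is not antisymmetric for p ≠ q, yet its attached antisymmetric bracket is a conventional Lie product.

```python
# Editorial illustration (not from the original text): numerical check of the
# bracket properties quoted in the Preface, using generic random matrices.
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
T = np.eye(4) + 0.2 * rng.standard_normal((4, 4))   # generic, almost surely nonsingular

def isobracket(X, Y):
    """Isotopic bracket [X, Y]* = X T Y - Y T X."""
    return X @ T @ Y - Y @ T @ X

# Antisymmetry and the Jacobi identity hold because X * Y := X T Y is associative.
print(np.allclose(isobracket(A, B), -isobracket(B, A)))                    # True
print(np.allclose(isobracket(A, isobracket(B, C))
                  + isobracket(B, isobracket(C, A))
                  + isobracket(C, isobracket(A, B)), 0))                   # True

# Lie-admissible product (A, B) = p A B - q B A: not antisymmetric for p != q,
# but its attached antisymmetric bracket equals (p + q) times the ordinary
# commutator, i.e. a conventional Lie product, which is what "Lie-admissible" requires.
p, q = 0.7, 1.9
def geno(X, Y):
    return p * (X @ Y) - q * (Y @ X)

print(np.allclose(geno(A, B) - geno(B, A), (p + q) * (A @ B - B @ A)))     # True
```

When T is the identity and p = q = 1, both products reduce to the conventional commutator, which is the sense in which the isotopic and genotopic equations are coverings of the Heisenberg equation.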
Hadronic mechanics was proposed in memoirs [4,5] specifically for the achievement of a quantitative representation of the synthesis of the neutron, as well as of composite hadrons at large, for which scope the name "hadronic mechanics" was suggested. An evident necessary condition to achieve a quantitative representation of the neutron synthesis was (and remains) that the covering mechanics had to exit from the class of unitary equivalence of quantum mechanics, namely, hadronic mechanics had to have a nonunitary structure (when referred to a conventional Hilbert space over a conventional field). Since unitary transformations are a trivial particular case of nonunitary ones, the basic nonunitarity condition, particularly when realized via the Lie-admissible covering of Lie algebras, assured the covering character of hadronic over quantum mechanics ab initio [4,5].

Via the use of the Heisenberg-Santilli isoequation, the validity of hadronic mechanics was established already in the original proposal [4,5] with the achievement of a numerically exact representation of all the characteristics of the π0 meson in the reaction e− + e+ → π0, including a numerically exact representation of features that are beyond the representational capabilities of the standard model, such as the size (charge radius) and mean life (see Section 5 of memoir [5]). Following the necessary construction of a nonunitary covering of the Lorentz and Poincaré symmetries (today known as the Lorentz- and Poincaré-Santilli isosymmetries) [6,7] and of special relativity [8] (today known as Santilli isorelativity), a numerically exact representation of all characteristics of the neutron in the reaction p+ + e− → n + ν was reached in paper [9] of 1990 at the nonrelativistic level and in paper [10] of 1993 at the relativistic level.

As clearly stated in the original proposal [4,5], the construction of hadronic mechanics was specifically recommended for the conception and development of new clean energies. The neutron is one of the biggest reservoirs of clean energy available to mankind, because it is naturally unstable (when isolated or part of unstable isotopes) and decays via the release of a highly energetic electron, easily stopped with a metal shield, plus the innocuous and hypothetical neutrino. In fact, hadronic mechanics has permitted the conception of fundamentally new energies, today known as hadronic energies [11] because they originate in the structure of hadrons, rather than in the structure of nuclei, atoms or molecules. These new energies are now seeing large industrial investments and developments reported in the subsequent volume. A quantitative representation of the neutron synthesis is an evident prerequisite for the stimulated decay of the neutron, one of the possible forms of hadronic energies, and this explains the relentless, decades-long efforts in the study of the synthesis of the neutron from protons and electrons as occurring in stars.

As indicated earlier, quantum mechanics admits conditions of exact validity. By comparison, quantum chemistry admits no conditions of exact validity, and it is either approximately valid for chemical structures and processes or basically inapplicable in its conventional formulation.
In fact, quantum chemistry failed to achieve an exact representation from unadulterated primitive axioms of the binding energy of the simplest possible molecule, the hydrogen molecule H2 = H−H, in view of the historical 2% still missing after one century of attempts, with bigger deviations for the water molecule H2O = H−O−H, and rather embarrassing deviations for complex molecules. Consequently, on rigorous scientific grounds, quantum chemistry can only be claimed to be approximately valid for molecular structures.

In view of the above well known insufficiency, chemists introduced in the last part of the 20-th century the "screening of the Coulomb law," namely, the Coulomb law V(r) = q1q2/r was multiplied by an arbitrary function f(r) whose explicit value was fitted to the experimental data. This mechanism did indeed improve the representation of molecular binding energies although, regrettably for science, the resulting discipline was still called "quantum chemistry." It is well known that quantized orbits can only be formulated for the unadulterated Coulomb law V(r) = q1q2/r, while the notion of quantum does not exist for the screened law V*(r) = f(r) q1q2/r. Also, it is well known to experts to qualify as such that the Coulomb law is a fundamental invariant of quantum mechanics and chemistry. Consequently, the transition from the Coulomb law to its screened version requires a nonunitary transform, namely, a necessary exit from the class of equivalence of quantum chemistry. At any rate, the representation of the binding energy via the screened Coulomb law is merely approximate. Hence, on serious scientific grounds, quantum chemistry is only approximately valid and cannot be credibly claimed to be exactly valid for molecular structures even after the screening of the Coulomb law.

Additionally, quantum chemistry is basically inapplicable for fundamental chemical features, such as the notion of valence. A "scientific treatment" of the valence requires: i) the precise identification of the origin of the bonding force; ii) the proof that such a force is indeed attractive; and iii) the achievement, with such an attractive force, of an exact representation of the binding energies and other features. By comparison, despite its widespread use generally without a serious inspection, the quantum chemical notion of valence used throughout the 20-th century is pure nomenclature deprived of quantitative content. Following one century of studies, quantum chemistry has failed to identify the origin of the force responsible for valence bonds and, consequently, cannot even address its needed attractive character, let alone provide a quantitative representation of the bond itself. To render the scientific scene embarrassing, the two identical electrons in a valence bond should repel, rather than attract, each other according to quantum chemistry, evidently in view of their identical charge. Additionally, quantum chemistry is basically inapplicable for irreversible chemical reactions, particularly those producing energy, because its axioms are structurally reversible in time (that is, reversible for any possible Hamiltonian), while said reactions are not.

Yet another embarrassing insufficiency of quantum chemistry is the prediction that all substances are paramagnetic, contrary to reality.
This is due to the lack of a sufficiently strong attractive force between valence electrons, as a result of which electron orbitals are essentially independent of each other, thus being orientable under an external magnetic field, with a resulting paramagnetic character for all substances that is in dramatic disagreement with reality.

The construction of hadronic mechanics was additionally submitted for the purpose of achieving a covering of quantum chemistry, today known as hadronic chemistry [12], that is capable of resolving the above limitations. In view of numerous reasons studied in these volumes, quantum mechanics can be exactly valid only for conditions permitting an effective point-like abstraction of particles. These conditions are verified for one hydrogen atom. However, the same conditions fail to be verified for two hydrogen atoms bonded into the hydrogen molecule H−H, because in the latter case we have the deep mutual penetration of the two valence electrons (in singlet coupling), resulting in contact, nonpotential interactions over the finite volume of overlapping. Under these conditions, quantum mechanics and chemistry simply cannot be exactly valid for numerous technical reasons, beginning with the inapplicability of the underlying local-differential topology, which can only represent a finite number of isolated points.

The contact, nonpotential character of the deep mutual penetration of the wavepackets of identical electrons in a singlet valence bond clearly identifies its non-Hamiltonian character, namely, the impossibility for the Hamiltonian to provide a complete description of the valence bond. In turn, the non-Hamiltonian character demands that a covering chemistry be necessarily nonunitary, as confirmed by the need for a nonunitary map of the Coulomb law into a screened form. A nonunitary transform, UU† ≠ I, of Heisenberg's equation then yields precisely the Heisenberg-Santilli isoequation U(i dA/dt)U† = i dA′/dt = U(AH − HA)U† = A′TH′ − H′TA′, with A′ = UAU†, H′ = UHU†, T = 1/(UU†) [5] (a one-line verification is given below). In this case the Hamiltonian represents all conventional interactions of the 20-th century, and T represents the new non-Hamiltonian interactions and effects. Such a nonunitary structure allowed hadronic chemistry to [12]: admit as particular cases all infinitely possible screenings of the Coulomb law, not as unknown adulterations, but derived from first axiomatic principles; achieve the first known quantitative theory of the valence verifying all three main requirements i), ii) and iii) identified above; reach the first known numerically exact representation of the binding energies of the hydrogen, water and other molecules; and resolve other insufficiencies of quantum chemistry, such as the prediction that all substances are paramagnetic. Moreover, hadronic chemistry has indeed achieved the main scope for which it was proposed, the conception and development of new clean fuels with complete combustion, today known as magnegases, that is, gaseous fuels possessing the new chemical structure of Santilli magnecules [12,13] studied in the second volume, now seeing rather large industrial investments.
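For the reader who wishes to verify the identity just quoted, the algebra is elementary (an editorial note; it assumes only that U is invertible, with 1/(UU†) understood as the matrix inverse):

\[
A' T H' = (U A U^\dagger)\,(U U^\dagger)^{-1}\,(U H U^\dagger)
        = U A\,U^\dagger (U^\dagger)^{-1} U^{-1} U\,H U^\dagger
        = U A H U^\dagger ,
\]

and likewise \(H' T A' = U H A U^\dagger\), so that

\[
A' T H' - H' T A' = U (A H - H A)\, U^\dagger ,
\]

which is the isocommutator form quoted above with isotopic element \(T = (U U^\dagger)^{-1}\). For a unitary transform, \(U U^\dagger = I\), one has \(T = I\) and the conventional Heisenberg equation is recovered, consistently with the covering character claimed for the theory.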
One of the biggest scientific imbalances of the sciences of the 20-th century has been the treatment of biological structures via quantum mechanics without the identification of the limitations of such studies. We teach in first year graduate schools that quantum mechanics is incompatible with the deformation theory because the latter causes the breaking of the central pillar of quantum mechanics, the rotational symmetry. This is the reason for the great effectiveness of quantum mechanics for the treatment of crystals. But then, any use of quantum mechanics in biology implies that biological structures are perfectly rigid, something beyond the boundary of science. Additionally, we also teach in first year graduate school that the very axioms of quantum mechanics are reversible in time. This is the reason for the great effectiveness of quantum mechanics in representing the reversible atomic orbitals, as well as in providing an explanation for their eternal character. But then, quantum mechanical studies in biology imply that biological structures are eternal, something truly beyond any minimum of scientific ethics and accountability. The complexities of biological structures, beginning with a simple cell, are such as to be beyond our most vivid imagination. Any attempt at treating these complexities with a theory conceived for the atomic structure should be dismissed as non-scientific.

The author has stated several times in his papers that special relativity has a "majestic axiomatic structure and validity" for the original conditions of applicability limpidly stated by Einstein, namely, for point-like particles and electromagnetic waves propagating in vacuum, such as for the structure of the hydrogen atom, particles moving in accelerators, etc. However, for numerous different conditions, special relativity is either approximately valid or basically inapplicable.

There are numerous conditions for which special relativity is basically inapplicable (rather than violated, because not conceived for the conditions at hand). For instance, special relativity is inapplicable for the classical treatment of antimatter, as clearly established by the absence of any differentiation between neutral matter and antimatter. Special relativity is also inapplicable for the classical representation of charged antiparticles because, in view of the existence of only one quantization channel, the operator image of a classical antiparticle is that of a "particle" (rather than a charge-conjugated antiparticle) with the wrong sign of the charge. At any rate, antimatter had not yet been conceived, let alone detected, at the time of the inception of special relativity. Hence, the current widespread use of special relativity for the classical description of antimatter is a scientific manipulation by Einstein's followers, and definitely not a scientific blunder by Albert Einstein.

Similarly, special relativity is inapplicable for a quantitative treatment of the chemical valence or, along much similar lines, for the contact, nonlocal and nonpotential conditions of deep inelastic scatterings of particles, because its mathematical structure simply cannot represent forces not admitting a Hamiltonian representation. When passing to the main scope of these volumes, energy-releasing processes, their study via special relativity is outside the boundaries of science. This is due to the fact that all energy-releasing processes are structurally irreversible in time, in the sense of being irreversible for all possible Hamiltonians, while special relativity is known to be structurally reversible in time (since all known Hamiltonians are reversible in time).
It is evident that a theory proved to be valid for the representation of the time-reversal-invariant orbits of atomic electrons cannot permit a serious scientific study of irreversible energy-releasing processes. As an example, special relativity predicts that, following the combustion of petroleum, the produced smoke, ashes and thermal energy spontaneously reproduce the original petroleum.

In the author's view, the above physical insufficiencies are due to insufficient mathematics, because the mathematics that proves to be so effective for the treatment of a given physical problem does not necessarily apply to basically different physical conditions. As a matter of fact, major physical insufficiencies are generally created by the insistence on treating new physical conditions via old mathematics. At any rate, the author has stated several times in his works that there cannot be truly new physical theories without truly new mathematics, and there cannot be truly new mathematics without new numbers. For this reason, as a theoretical physicist, the author had to dedicate the majority of his research time to the search for and development of basically new mathematics specifically constructed for the quantitative treatment of the physical conditions at hand.

By far the biggest efforts were devoted to the search for new numbers, that is, numbers verifying the conventional axioms of a field, without which no physical application is possible. The search appeared impossible prima facie, because the mathematical literature emphatically indicated that all fields had been classified since Hamilton's time and were given by the real, complex, and quaternionic numbers (octonions are not "numbers" as conventionally understood because their multiplication is nonassociative). The solution came from the fact that pure mathematics is afflicted by a number of beliefs essentially originating from protracted use without rigorous scrutiny. An inspection revealed that the axioms of a field are insensitive to the numerical value as well as to the sign of the (multiplicative) unit, provided that the product is modified in such a way as to preserve all axioms. The author discovered in this way that contemporary treatments of number theory are not mathematically accurate, because statements such as "2 × 3 = 6" or "4 is not a prime number" should be completed with the specification that they are solely valid under the assumption of the unit 1 dating back to biblical times. In fact, as we shall see, the assumption of the value 1/3 as unit implies that 2 × 3 = 18 and "4 is a prime number" (as illustrated in the short sketch below).

These studies led to new numbers (that is, rings verifying all conventional axioms of a field) characterized by a unit with an arbitrary (nonsingular) value, today called Santilli isonumbers, genonumbers and hypernumbers for the treatment of matter, and their anti-Hermitian versions, known as Santilli isodual isonumbers, isodual genonumbers and isodual hypernumbers, for the treatment of antimatter. The new numbers were presented for the first time in paper [14] of 1993. The author considers this paper his most important mathematical contribution, because the novel iso-, geno- and hyper-mathematics for matter and their isoduals for antimatter were constructed via simple compatibility conditions with the new basic numbers.
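A minimal sketch of the arithmetic just alluded to (an editorial illustration; the convention assumed here is that the product is lifted jointly with the unit, as done systematically in the chapters that follow): if the unit is taken to be Î = 1/3, the multiplication must be lifted to the isoproduct

\[
a \,\hat{\times}\, b = a \times T \times b , \qquad T = \hat I^{-1} = 3 ,
\]

so that Î is indeed a left and right unit, \(\hat I \,\hat\times\, a = a \,\hat\times\, \hat I = a\). Then

\[
2 \,\hat\times\, 3 = 2 \times 3 \times 3 = 18 , \qquad 2 \,\hat\times\, 2 = 12 \neq 4 ,
\]

and, more generally, \(a \,\hat\times\, b = 3ab = 4\) has no solution in integers greater than one, so that 4 admits no nontrivial factorization under the lifted product and behaves as a prime of the new arithmetic, as stated above.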
The first clear illustration of the need for new mathematics is given by the classical treatment of antimatter. As recalled above, special relativity has no means whatsoever to differentiate between neutral matter and antimatter, thus leaving as the only possible solution that of a new, appropriate mathematics. A search in the mathematical libraries of the Cantabrigian area in the early 1980s revealed that a mathematics for the classical treatment of antimatter did not exist and had to be built. Recall that charge conjugation is anti-automorphic, although solely applicable on a Hilbert space over the field of complex numbers. Hence, a mathematics suitable for the corresponding classical treatment has to be anti-homomorphic or, more generally, anti-isomorphic to conventional mathematics, as an evident necessary condition to achieve compatibility with the operator treatment. This identifies the need for numbers, spaces, differential calculus, topology, algebras, symmetries, etc., that are anti-isomorphic to the conventional formulations. Following laborious trials and errors, the author had to construct the needed new mathematics, beginning with the original proposal to construct hadronic mechanics; it is known today as Santilli isodual mathematics, with the related isodual special relativity, for the classical and operator treatment of antimatter. These new formulations turned out to have far-reaching implications, such as: the prediction of antigravity experienced by antimatter in the field of matter, or vice versa [15]; the consequential prediction of a non-Newtonian, spacetime geometric locomotion with unlimited speeds without any violation of causality laws, although only for certain states called "isoselfdual" [16]; the prediction that light emitted by antimatter is detectably different from that emitted by matter, thus offering for the first time in history the possibility, in due time, of ascertaining whether a far-away galaxy or quasar is made up of matter or of antimatter; and other advances [16,17] (see monograph [18] for a review).

The second illustration of the need for new mathematics is given by the relativistic description of deep mutual penetrations of the wavepackets and/or charge distributions of particles, as occurring in the synthesis of hadrons, deep inelastic scatterings, electron valence bonds, etc. The very foundation of conventional mathematics, its local-differential topology, identifies quite clearly its inapplicability to the problem considered, due to the latter's nonlocal-integral character. An additional time-consuming search in the mathematical libraries of the Cambridge-Boston area was conducted in the late 1970s with no result. More specifically, the search did identify a number of new topologies, some of them of integral type, but they violated the central physical condition of being a covering of the conventional local-differential topology, so as to allow the new physical theories to be coverings of the old ones. Hence, the mathematics needed for the quantitative treatment of the indicated nonlocal-integral conditions of particles had to be built. Following additional trials and errors, the new mathematics was constructed beginning with the original memoirs of 1978 proposing hadronic mechanics [4,5] and then continuing in numerous works (see the mathematical presentation [19] of 1996). The new mathematics carries today the name of Santilli isomathematics and permits a classical and operator treatment of extended, nonspherical and deformable particles under linear and nonlinear, local and nonlocal, and potential as well as nonpotential interactions.
Santilli isodual isomathematics then holds for the corresponding conditions of antiparticles [18]. In turn, the new mathematics permitted the structural generalization of special relativity into a covering today known as Santilli isorelativity [4-8] for closed-isolated systems with conventional potential interactions, as well as the most general possible nonlinear, nonlocal and nonpotential forces, as needed for true advances in the structure of hadrons, nuclei and stars. Santilli isodual isorelativity [18] then represents the corresponding antimatter systems.

All the above efforts turned out to be merely preliminaries for the central objective of these studies, the search for new clean energies and fuels, because the latter are characterized by the irreversible processes recalled earlier, while all the preceding four mathematics (conventional and isotopic mathematics for matter and their isoduals for antimatter) are structurally reversible. Hence, the author had to initiate an additional laborious search for, and construction of, yet another new mathematics, this time with an ordering in its very axiomatic structure and an inequivalent dual that could be physically used to represent motion forward and backward in time. This additional new mathematics was eventually built and is today known as Santilli's genomathematics, for the treatment of extended, nonspherical and deformable particles under unrestricted irreversible conditions [4,8,18]. Santilli isodual genomathematics then applies to antiparticles in corresponding irreversible conditions. The original proposal [4,5] to construct hadronic mechanics was formulated precisely via genomathematics, isomathematics being a particular case. Genomathematics was then studied in various works (see memoirs [19,20]).

The most complex efforts dealt with an irreversible generalization of special relativity into a form today known as Santilli genorelativity, admitting isorelativity as well as conventional special relativity as particular cases, with the corresponding isodual for antimatter. A main difficulty was given by the need to achieve structurally irreversible symmetries characterizing the time rates of variation of physical quantities, as occurring in nature. The solution was permitted by the Lie-admissible covering of the Lie theory, along studies initiated in 1967 [2] (see also [19,20]).

The indicated lack of final theories in science was confirmed by the fact that all the preceding six different mathematics (conventional, isotopic and genotopic, and their isoduals) proved to be insufficient for serious studies in biology since, for reasons we shall see, the latter require multi-valued methods. This occurrence can be intuitively seen from the fact that, e.g., a few atoms in a DNA molecule can generate a complex organ with a huge number of cells. A multi-valued mathematics did exist in the literature, the so-called hyperstructures, but it had no possibility of application to biological structures due to the absence of a left and right unit (evidently crucial to permit measurements), the use of rather abstract operations not compatible with experiments, and other reasons. These limitations led the author to the construction of a final form of mathematics, today known as Santilli hypermathematics, which is irreversible, multi-valued and possesses a left and right unit at all levels. Santilli isodual hypermathematics is then the corresponding form for antimatter [21].
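As an editorial aid, the hierarchy of mathematics just described can be summarized through its units and products (a schematic only; the precise definitions are given in the chapters that follow, and the symbols P and Q below are those of the genoequation quoted earlier in this Preface):

\[
\text{iso-mathematics:}\quad \hat I = 1/\hat T, \qquad A \,\hat\times\, B = A\,\hat T\,B ;
\]
\[
\text{geno-mathematics:}\quad {}^{<}\hat I = 1/P,\ \ \hat I^{>} = 1/Q, \qquad A < B = A\,P\,B,\ \ A > B = A\,Q\,B ;
\]
\[
\text{hyper-mathematics:}\quad \text{the same structure with multi-valued units;}
\qquad
\text{isoduality:}\quad \hat I \to \hat I^{d} = -\hat I^{\dagger} .
\]

Irreversibility is encoded in the inequivalence of the two ordered genounits, while all conventional formulations are recovered in the limit of the trivial unit, Î → I.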
After the above laborious research, including the construction of the above new mathematics and the related broadenings (called liftings) of quantum mechanics, quantum chemistry, and the special and general relativities, the author had still failed to achieve by the early 1990s a property truly crucial for serious physical value, the invariance of the numerical predictions under the time evolution of the theory, namely, the prediction of the same numerical values under the same conditions at different times. The indicated new mathematics did indeed provide a sequence of generalizations of Hamilton's classical equations, Heisenberg's operator equations, Einstein's axioms for special relativity, etc., but their numerical predictions under the same physical conditions turned out to change over time, a catastrophic inconsistency that delayed the applications of hadronic mechanics for decades, because the author simply refused to publish papers he considered catastrophically inconsistent.

Again, major physical problems generally originate from insufficient mathematics, and the solution emerged from the identification and dismissal of another popular belief in pure mathematics, the belief that the differential calculus does not depend on the basic field. It turned out that this mathematical belief is correct only for constant units, since said belief is no longer valid whenever the generalized units depend on the local coordinates. This occurrence permitted the discovery of basically new differential calculi, today known as the Santilli iso-, geno- and hyper-differential calculi and their isoduals, published for the first time in memoir [19] of 1996. These new calculi finally permitted the achievement of the invariance of numerical predictions over time, so much needed for physical applications.

In summary, the studies presented in these two volumes deal with eight different mathematics: the conventional, iso-, geno- and hyper-mathematics for the treatment of matter in conditions of progressively increasing complexity, and their isoduals for the treatment of antimatter. The strict understanding with the word "new mathematics" is that each of them requires the appropriate new formulation of the totality of the mathematics used in the physics of the 20-th century, including numbers, fields, spaces, differential calculus, functional analysis, algebras, geometries, topologies, etc. The absence of only one proper formulation, for instance, the treatment of isomechanics with the conventional functional analysis, leads to catastrophic inconsistencies.

While special relativity does indeed admit physical conditions of exact validity, general relativity at large, and Einstein's formulation of gravitation via the hypothetical curvature, have been known for decades to verify several theorems of catastrophic inconsistencies [22], reviewed in Chapter 1 of this volume. To avoid technical issues in these introductory lines, we merely mention that it is impossible to represent via curvature a most basic gravitational event, the free fall of masses in a gravitational field along a straight radial line.
Similarly, the "bending of light" when passing near a star (that was used by Einstein's supporters to promote the acceptance of general relativity) is known to be due to Newtonian attraction and, when used as "evidence" of the curvature of space, it leads to known inconsistencies, such as either the incompatibility of Einstein's views with Newtonian gravitation, or the prediction of a value of the bending of light double the experimentally measured one, one contribution for the Newtonian attraction and another for curvature. Besides the indicated catastrophic inconsistencies, a most unreassuring implication of Einstein's gravitation is that its central formulation via curvature has prohibited basic advances for about one century, such as: the failed attempts at achieving quantum gravity; the impossibility of achieving a consistent grand unification; cosmological theories of pure theological character; and other ascientific conditions. It should be admitted by serious scholars that gravitation on a Riemannian space is a noncanonical theory with a consequential nonunitary operator image that, as such, sees the unavoidable collapse of quantum axioms, violations of causality, and other irreconcilable problems. Similarly, serious scholars should admit that any attempt at a grand unification of electroweak theories and Einstein's gravitation faces catastrophic inconsistencies for the electroweak interactions, originating from the lack of any symmetry in Einstein's gravitation.

Following numerous years of research, a resolution of these inconsistencies was reached via a geometric unification of the special and general relativities, beginning with a geometric unification of the Minkowskian and Riemannian geometries [23] based on the abstract Minkowskian axioms, thanks to the power of isomathematics. According to this view, any Riemannian metric g(x) is decomposed into the Minkowskian metric η = Diag(1, 1, 1, −1) multiplied by a positive-definite four-dimensional matrix T̂(x) carrying the entire gravitational content, g(x) = T̂(x) × η (a worked example is given below). Gravity is then reformulated on the Minkowski-Santilli isospace M̂(x, η̂, Û) with isometric η̂(x) ≡ g(x), but formulated on an isofield with isounit given by the inverse of the gravitational matrix, Î(x) = 1/T̂(x). This procedure eliminates the origin of all problems of Einstein's gravitation, curvature, since M̂ is flat (this formulation of gravity was presented by the author at the Marcel Grossmann Meeting on Gravitation of 1994 [24]). The new conception of gravitation without curvature permitted, apparently for the first time, the resolution of century-old controversies on Einstein's gravitation, as well as: the achievement of a universal symmetry for all possible gravitational elements, the Poincaré-Santilli isosymmetry [7]; the achievement of a fully consistent operator formulation of gravity, including a fully valid PCT theorem, via the embedding of gravitation in the unit of relativistic quantum mechanics; and an axiomatically consistent grand unification of electroweak and gravitational interactions including, for the first time, matter and antimatter, based on the universal Poincaré-Santilli isosymmetry (unification presented at the Marcel Grossmann Meeting on Gravitation of 1998 [25]).
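As a concrete illustration of the decomposition g(x) = T̂(x) × η quoted above (an editorial example based on the standard Schwarzschild solution; the splitting shown is one natural reading in which T̂ absorbs the angular factors of the spherical coordinates together with the gravitational content), consider the exterior Schwarzschild line element

\[
ds^2 = \frac{dr^2}{1 - 2GM/(rc^2)} + r^2\, d\theta^2 + r^2 \sin^2\theta\, d\varphi^2 - \left(1 - \frac{2GM}{rc^2}\right) c^2\, dt^2 .
\]

With respect to η = Diag(1, 1, 1, −1) in the coordinates (r, θ, φ, ct), this corresponds to

\[
\hat T(x) = \mathrm{Diag}\!\left( \frac{1}{1 - 2GM/(rc^2)},\; r^2,\; r^2 \sin^2\theta,\; 1 - \frac{2GM}{rc^2} \right),
\qquad \hat I(x) = \hat T(x)^{-1},
\]

which is positive-definite outside the horizon, as required. The line element itself is unchanged; what changes is the geometric bookkeeping, since the gravitational content is now carried by the isounit Î(x) while the underlying isospace remains flat, as stated above.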
As we shall see, all the above studies, the most crucial one being the representation of gravity without curvature, suggest rather radical new vistas in cosmology, such as:

1) The possibility of resolving experimentally, in due time, whether faraway galaxies and quasars are made of matter or of antimatter, via the predicted gravitational repulsion exerted by matter on light emitted by antimatter, and via other experimental means;

2) The most logical interpretation of the expansion of the universe as permitted by matter and antimatter galaxies and quasars, since their gravitational repulsion allows a quantitative representation not only of the expansion of the universe but also of the increase of that expansion in time;

3) Dramatic revisions in the notion of time, which becomes local, i.e., varying from one astrophysical body to another and with opposite signs for matter and antimatter, with a possible "null total time of the universe" that would avoid immense discontinuities at creation, such as those implied by the "big bang";

4) The first known cosmology with a universal symmetry, the Poincaré-Santilli isosymmetry for matter multiplied by its isodual for antimatter; and

5) The first "cosmology" in the Greek sense of the word, thus including biological structures.

We cannot close these introductory words without a few comments on the most fundamental equations of physics, Newton's equations, from which all physical formulations can be derived via compatibility conditions. Due to extended use over three centuries, Newton's equations have been believed to be "universal", namely, applicable to all possible classical conditions of particles in the universe. This popular belief turned out to be untrue. Newton's equations have no meaningful capability to represent antiparticles, whether charged or neutral; they lack the mathematics needed for the representation of the actual, extended, nonspherical and deformable shape of particles and of their irreversible conditions when the force is time independent while the system is nonconservative, thus irreversible; and they suffer from other insufficiencies. At any rate, no broadening of quantum mechanics, special relativity and other disciplines can have any serious scientific value without a broadening of their ultimate foundations, Newton's equations.

The studies on the generalization of Newton's equations were conducted by following Newton's teaching, and not the teaching of Newton's followers. Recall that, as a necessary condition to achieve his historical equations, Newton had to discover first the differential calculus (jointly with Leibniz). Hence, the discovery of new numbers for the generalization of the mass in the celebrated equations was basically insufficient. Newton's teaching then became instrumental in achieving the new iso-, geno-, and hyper-differential calculi for matter and their isoduals for antimatter, which led to the sequence of generalized equations, today called the Newton-Santilli iso-, geno-, and hyper-equations for matter and their isoduals for antimatter, presented for the first time in memoir [19] of 1996, which the author considers his most important physics paper.

In this volume, we report the mathematical and theoretical contributions that initiated the various aspects outlined in this Preface, plus subsequent contributions by colleagues too numerous to be mentioned here (see the General Bibliography). This first volume is intended as an upgrade of the two volumes of Elements of Hadronic Mechanics [26] published by the author in the early 1990s.
Nevertheless, the study of these volumes is recommended for a serious knowledge of the new theories, since numerous detailed treatments presented in volumes [26] are not reproduced in this volume for brevity. In Volume II, we report experimental verifications, theoretical advances and industrial applications.

Ruggero Maria Santilli
Carignano (Torino), Italy, July 4, 2007
Revised: Hermosillo, Mexico, August 14, 2007

[1] R. M. Santilli, "Why space is rigid" (in Italian), Pungolo Verde, Campobasso, Italy (1956).
[2] R. M. Santilli, "Imbedding of Lie algebras in nonassociative structures," Nuovo Cimento 51, 570–576 (1967).
[3] R. M. Santilli, "An Introduction to Lie-admissible Algebras," Nuovo Cimento Suppl. 6, 1225–1249 (1968).
[4] R. M. Santilli, "On a possible Lie-admissible covering of the Galilei Relativity for nonconservative and Galilei form-noninvariant systems," Hadronic J. 1, 223–423 (1978), and Addendum 1, 1279–1342 (1978).
[5] R. M. Santilli, "Need for subjecting to an experimental verification the validity within a hadron of Einstein's Special Relativity and Pauli's Exclusion Principle," Hadronic J. 1, 574–902 (1978).
[6] R. M. Santilli, "Lie-isotopic lifting of the special relativity for extended deformable particles," Lett. Nuovo Cimento 37, 545–555 (1983).
[7] R. M. Santilli, "Nonlinear, nonlocal and noncanonical isotopies of the Poincaré symmetry," J. Moscow Phys. Soc. 3, 255–280 (1993).
[8] R. M. Santilli, Isotopic Generalizations of Galilei and Einstein Relativities, Volumes I and II, Hadronic Press, Palm Harbor, Florida (1991).
[9] R. M. Santilli, "Apparent consistency of Rutherford's hypothesis of the neutron as a compressed hydrogen atom," Hadronic J. 13, 513–532 (1990).
[10] R. M. Santilli, "Recent theoretical and experimental evidence on the cold fusion of elementary particles," Communication of the Joint Institute for Nuclear Research No. E4-93-252 (1993), also published in Chinese J. Syst. Eng. & Electr. 6, 177–199 (1995).
[11] R. M. Santilli, "Hadronic energy," Hadronic J. 17, 311–348 (1994).
[12] R. M. Santilli, Foundations of Hadronic Chemistry with Applications to New Clean Energies and Fuels, Kluwer Academic Publishers, Boston-Dordrecht-London (2001). Russian translation available in pdf-zip format at http://i-b-r.org/docs/sanrus.pdf.zip.
[13] R. M. Santilli, The Physics of New Clean Energies and Fuels According to Hadronic Mechanics, Special issue of the Journal of New Energy, 318 pages (1999).
[14] R. M. Santilli, "Isonumbers and genonumbers of dimension 1, 2, 4, 8, their isoduals and pseudoduals, and hidden numbers of dimension 3, 5, 6, 7," Algebras, Groups and Geometries 10, 273–321 (1993).
[15] R. M. Santilli, "Classical isodual theory of antimatter and its prediction of antigravity," Intern. J. Modern Phys. A 14, 2205–2238 (1999).
[16] R. M. Santilli, "Does antimatter emit a new light?", invited paper, Hyperfine Interactions 109, 63–81 (1997).
[17] R. M. Santilli, "Iso-, geno-, hyper-relativities for matter and their isoduals for antimatter, and their novel applications in physics, chemistry and biology," Journal of Dynamical Systems and Geometric Theories 1, 121–193 (2003).
[18] R. M. Santilli, Isodual Theory of Antimatter and its Application to Antigravity, Grand Unifications and Cosmology, Springer (2006).
[19] R. M. Santilli, "Nonlocal-integral isotopies of differential calculus, geometries and mechanics," Rendiconti Circolo Matematico Palermo, Suppl. Vol. 42, 7–82 (1996).
[20] R. M. Santilli, "Lie-admissible invariant representation of irreversibility for matter and antimatter at the classical and operator levels," Nuovo Cimento B 121, 443–498 (2006).
[21] R. M. Santilli, Isotopic, Genotopic and Hyperstructural Methods in Theoretical Biology, Ukrainian Academy of Sciences, Kiev (1996).
[22] R. M. Santilli, "Nine inconsistency theorems of general relativity and their apparent resolution via the Poincaré invariant isogravitation," Galilean Electrodynamics 17, Special Issue 3, 42–54 (2006).
[23] R. M. Santilli, "Isominkowskian geometry for the gravitational treatment of matter and its isodual for antimatter," Int. J. Mod. Phys. D 7, 351–407 (1998).
[24] R. M. Santilli, "Isotopic quantization of gravity and its universal iso-Poincaré symmetry," in Proceedings of the VII M. Grossmann Meeting on Gravitation, R. T. Jantzen, G. Mac Keiser and R. Ruffini, Editors, World Scientific, Singapore, 500–505 (1996).
[25] R. M. Santilli, "Unification of gravitation and electroweak interactions," contributed paper in the Proceedings of the Eighth Marcel Grossmann Meeting in Gravitation, T. Piran and R. Ruffini, Editors, World Scientific, 473–475 (1999).
[26] R. M. Santilli, Elements of Hadronic Mechanics, Vols. I and II, Ukrainian Academy of Sciences, Kiev (second edition 1995).

Ethnic Note

The author has repeatedly stated in his works that Albert Einstein is, unquestionably, the greatest scientist of the 20-th century, but he is also the most exploited scientist in history to date, because a large number of researchers have exploited Einstein's name for personal gains in money, prestige, and power. In these two volumes, we shall honor Einstein's name as much as scientifically possible, but we shall jointly express the strongest possible criticisms of some of Einstein's followers, by presenting a plethora of cases in which Einstein's name has been abused for conditions dramatically beyond those conceived by Einstein, under which conditions his theories are inapplicable (rather than violated) because not intended for such conditions. In so doing, Einstein's followers have created one of the biggest scientific obscurantisms in history, superior to that caused by the Vatican during Galileo's time. This obscurantism has to be contained, initiating with open denunciations, and then resolved via advances beyond Einstein's theories, for the very survival of our society since, as technically shown in these volumes, the resolution of our current environmental problems requires new scientific vistas.

As known by all, Albert Einstein was Jewish. The countless denunciations of Einstein's followers presented and technically motivated in these volumes will likely spark debates to keep historians occupied for generations. It is my pleasant duty to indicate that Jewish scientists have been among the best supporters of the author's research, as established by the following facts:

1) The author had the privilege of participating in the Marcel Grossmann Meeting on General Relativity held at the Hebrew University, Jerusalem, in June 1997, with a contribution showing various inconsistencies of Einstein's gravitation and proposing an alternative theory with gravitation embedded in a generalized treatment of the unit. Unfortunately, the author had to cancel his trip to Jerusalem at the last moment. Nevertheless, the organizers of the meeting had the chairman of the session read the author's transparencies and did indeed publish his paper in the proceedings.
2) One of the first formal meetings "beyond Einstein" was organized in Israel at Ben Gurion University, in 1998, under the gentle title of "Modern Modified Theories of Gravitation and Cosmology," in which the author had the privilege of participating with a contributed paper criticizing and going beyond Einstein's theories.

3) Numerous Jewish mathematicians, theoreticians and experimentalists have collaborated with and/or supported the author in the development of hadronic mechanics, as we see in many of the papers reviewed throughout the presentation. As a matter of fact, the author has received to date more support from Jewish scientists than from Italian colleagues, the author being a U. S. citizen of Italian birth and education. Such a statement should not be surprising to readers who know the Italian culture as being based on the most virulent possible mutual criticisms, which are perhaps a reason for the greatness of Italian contributions to society.

Needless to say, the denial of a Jewish component in the scientific controversies raging on Einstein's followers would be a damaging hypocrisy, but we are referring to a very small segment of the Jewish scientific community, as established by 1), 2), 3) and additional vast evidence. At any rate, we have similar ethnic components: in Italy, for Galileo's initiation of quantitative science; in England, for Newton's historical discoveries; in Germany, for Heisenberg's quantum studies; in Japan, for Yukawa's advances in strong interactions; in France, for de Broglie's pioneering research; in Russia, for Bogoliubov's advances; in India, for Bose's pioneering discoveries; and so on. The point the author wants to stress with clarity, and document with his personal experience, is that in no way can this variety of small ethnic components affect scientific advances because, unlike politics, science belongs to all of mankind, positively without any ethnic or other barrier.

Ruggero Maria Santilli
Palm Harbor, Florida, October 27, 2007

Legal Notice

The undersigned, Ruggero Maria Santilli, states the following:

1) To be the sole person responsible for the content of Hadronic Mathematics, Mechanics and Chemistry, Volumes I and II; to be the sole owner of the Copyrights on these two volumes; and to have recorded, beginning in 1992, the copyright ownership of a number of his main contributions in the field.

2) The undersigned hereby authorizes anybody to copy, and/or use, and/or study, and/or criticize, and/or develop, and/or apply any desired part of these volumes without any advance authorization by the Copyrights owner, under the sole condition of implementing known rules of scientific ethics, namely: 2A) The originating papers are clearly quoted in the initial parts; 2B) Scientific paternity is clearly identified and documented; and 2C) Any desired additional papers are additionally quoted at will, provided that they are directly relevant and quoted in chronological order. Violators of these known ethical rules will be notified with a request for immediate corrections, essentially consisting of publishing missed basic references. In the event of delays or undocumented excuses, authors who violate the above standard rules of scientific ethics will be prosecuted in the U. S. Federal Court jointly with their affiliations and funding sources.
3) There are persistent rumors that organized interests in science are waiting for the author's death to initiate premeditated and organized actions for paternity fraud via the known scheme, often used in the past, based on new papers in the field without the identification of the author's paternity, which papers are then quickly quoted as originating papers by pre-set accomplices, the fraud then being accepted by often naive or ignorant followers merely blinded by the academic credibility of the schemers. Members of these rumored rings should be aware that the industrial applications of hadronic mathematics, mechanics and chemistry have already provided sufficient wealth to set up a Paternity Protection Trust solely funded to file lawsuits against immoral academicians attempting paternity fraud, their affiliations and their funding agencies.

This legal notice has been made necessary because, as shown in Section 1.5, the author has been dubbed "the most plagiarized scientist of the 20-th century," as is the case for the thousands of papers on deformations published without any quotation of their origination by the author in 1967. These, and other attempted paternity frauds, have forced the author to initiate the legal actions reported in web site [1].

In summary, honest scientists are encouraged to copy, and/or study, and/or criticize, and/or develop, and/or apply the formulations presented in these volumes in any way desired without any need of advance authorization by the copyrights owner, under the sole condition of implementing the standard ethical rules 2A, 2B, 2C. Dishonest academicians, paternity fraud dreamers, and other schemers are warned that legal actions to enforce scientific ethics are already under way [1], and will be continued after the author's death.

In faith,
Ruggero Maria Santilli
U. S. Citizen acting under the protection of the First Amendment of the U. S. Constitution guaranteeing freedom of expression, particularly when used to contain asocial misconduct.
Tarpon Springs, Florida, U. S. A.
October 11, 2007

[1] International Committee on Scientific Ethics and Accountability, http://www.scientificethics.org

Acknowledgments

The author expresses his deepest appreciation in memory of: the late British philosopher Karl Popper, for his strong support in the construction of hadronic mechanics, as shown in the Preface of his last book Quantum Theory and the Schism in Physics; the late Nobel Laureate Ilya Prigogine, for pioneering the need for a nonunitary broadening of quantum theory and for his personal support in the organization of the Hadronic Journal since its inception; the late Italian physicist Piero Caldirola, for his pioneering work on the noncanonical broadening of conventional canonical theories as well as for his support in the construction of hadronic mechanics; the Greek mathematician Grigorios Tsagas, for fundamental contributions to the Lie-isotopic methods underlying hadronic mechanics; the late Italian physicist Giuliano Preparata, for pioneering anisotropic departures from the geometric structure of special relativity, extended by hadronic mechanics into anisotropic and inhomogeneous media; the late mathematician Robert Oehmke, for pioneering work on the Lie-admissible structure of hadronic mechanics; the mathematician Jaak Lõhmus, whose studies on nonassociative algebras, with particular reference to the octonion algebra, have been particularly inspiring for the construction of hadronic mechanics; and other scholars who will be remembered by the author until the end of his life.

The author expresses his appreciation for invaluable comments to all participants of: the International Workshop on Antimatter Gravity and Anti-Hydrogen Atom Spectroscopy held in Sepino, Molise, Italy, in May 1996; the Conference of the International Association for Relativistic Dynamics, held in Washington, D.C., in June 2002; the International Congress of Mathematicians, held in Hong Kong, in August 2002; the International Conference on Physical Interpretation of Relativity Theories, held in London, September 2002 and 2004; and the XVIII Workshop on Hadronic Mechanics held in Karlstad, Sweden, in June 2005.

The author would like also to express his deepest appreciation to Professors: A. van der Merwe, Editor of Foundations of Physics; P. Vetro, Editor of Rendiconti Circolo Matematico Palermo; G. Langouche and H. de Waard, Editors of Hyperfine Interactions; V. A. Gribkov, Editor of the Journal of the Moscow Physical Society; B. Brosowski, Editor of Mathematical Methods in Applied Sciences; D. V. Ahluwalia, Editor of the International Journal of Modern Physics; T. N. Veziroglu, Editor of the International Journal of Hydrogen Energy; H. Feshbach, Editor of the (MIT) Annals of Physics; the Editors of the Italian, American, British, French, Russian, Indian and other physical and mathematical societies; and other Editors for very accurate refereeing in the publication of papers that have a fundamental character for the studies presented in these monographs.

Particular thanks are also due for invaluable and inspiring, constructive and critical remarks to Professors A. K. Aringazin, P. Bandyopadhyay, P. A. Bjorkum, J. Dunning-Davies, T. L. Gill, E. J. T. Goldman, I. Guendelman, F. W. Hehl, M. Holzscheiter, L. Horwitz, S. Kalla, J. V. Kadeisvili, N. Kamiya, A. U. Klimyk, S. Johansen, D. F. Lopez, J. P. Mills, jr., R. Miron, P. Rowlands, G. Sardanashvily, K. P. Shum, H. M. Srivastava, N. Tsagas, E. Trell, C. Udriste, C. Whitney, F. Winterberg, and others.
Special thanks are finally due to Professor D. V. Ahluwalia for an invaluable critical reading of an earlier version of the manuscript and for suggesting the addition of isodual space and time inversions. Additional thanks are due to Professors J. Dunning-Davies, V. Keratohelcoses and H. E. Wilhelm for an accurate reading of a later version of the manuscript. Thanks are also due to Prof. Richard Koch of the University of Oregon for assistance in composing this monograph with TeXShop, and to Dr. I. S. Gandzha for assistance in the LaTeX composition, without which help these volumes would not have been printed. Thanks are also due to various colleagues for a technical control, including Drs. G. Mileto, M. Sacerdoti and others, and to Mrs. Dorte Zuckerman for proofreading assistance. Needless to say, the author is solely responsible for the content of this monograph, due also to several additions and improvements in the final version.

Chapter 1
SCIENTIFIC IMBALANCES OF THE TWENTIETH CENTURY

1.1 THE SCIENTIFIC IMBALANCE CAUSED BY ANTIMATTER

1.1.1 Needs for a Classical Theory of Antimatter

The first large scientific imbalance of the 20-th century studied in this monograph is that caused by the treatment of matter at all possible levels, from Newtonian to quantum mechanics, while antimatter was solely treated at the level of second quantization [1]. Besides an evident lack of scientific democracy in the treatment of matter and antimatter, the lack of a consistent classical treatment of antimatter left open a number of fundamental problems, such as the inability to study whether a faraway galaxy or quasar is made up of matter or of antimatter, because such a study requires first a classical representation of the gravitational field of antimatter, as an evident pre-requisite for the quantum treatment (see Figure 1.1).

Figure 1.1. An illustration of the first major scientific imbalance of the 20-th century studied in this monograph, the inability to conduct classical quantitative studies as to whether faraway galaxies and quasars are made up of matter or of antimatter. In-depth studies have indicated that the imbalance was not due to insufficient physical information, but instead was due to the lack of a mathematics permitting the classical treatment of antimatter in a form compatible with charge conjugation at the quantum level.

It should be indicated that classical studies of antimatter simply cannot be done by merely reversing the sign of the charge, because of inconsistencies due to the existence of only one quantization channel. In fact, the quantization of a classical antiparticle solely characterized by the reversed sign of the charge leads to a particle (rather than a charge conjugated antiparticle) with the wrong sign of the charge. It then follows that the treatment of the gravitational field of suspected antimatter galaxies or quasars cannot be consistently done via the Riemannian geometry with a simple change of the sign of the charge, as rather popularly done in the 20-th century, because such a treatment would be structurally inconsistent with the quantum formulation.

At any rate, the most interesting astrophysical bodies that can be made up of antimatter are neutral. In this case general relativity and its underlying Riemannian geometry can provide no difference at all between matter and antimatter stars due to the null total charge. The need for a suitable new theory of antimatter then becomes beyond credible doubt.
As we shall see in Chapter 14, besides all the above insufficiencies, the biggest imbalance in the current treatment of antimatter occurs at the level of grand unifications, since all pre-existing attempts to achieve a grand unification of electromagnetic, weak and gravitational interactions are easily proved to be inconsistent under the request that the unification should hold not only for matter, as universally done until now, but also for antimatter. Hence, prior to venturing judgments on the need for a new theory of antimatter, serious scholars should inspect the entire scientific journey, including the iso-grand-unification of Chapter 14.

1.1.2 The Mathematical Origin of the Imbalance

The origin of this scientific imbalance was not of physical nature; rather, it was due to the lack of a mathematics suitable for the classical treatment of antimatter in such a way as to be compatible with charge conjugation at the quantum level. Charge conjugation is an anti-homomorphism. Therefore, a necessary condition for a mathematics to be suitable for the classical treatment of antimatter is that of being anti-homomorphic, or, better, anti-isomorphic to conventional mathematics. Therefore, the classical treatment of antimatter requires numbers, fields, functional analysis, differential calculus, topology, geometries, algebras, groups, symmetries, etc. that are anti-isomorphic to their conventional formulations for matter. The absence in the 20-th century of such a mathematics is soon established by the lack of a formulation of trigonometric, differential and other elementary functions, let alone complex topological structures, that are anti-isomorphic to the conventional ones.

In the early 1980s, due to the absence of the needed mathematics, the author was left with no other alternative than its construction along the general guidelines of hadronic mechanics, namely, the construction of the needed mathematics from the physical reality of antimatter, rather than the adaptation of antimatter to pre-existing insufficient mathematics.1 After considerable search, the needed new mathematics for antimatter resulted in being characterized by the most elementary and, therefore, most fundamental possible assumption, that of a negative unit,

−1,   (1.1.1)

and then the reconstruction of the entire mathematics and physical theories of matter in such a way as to admit −1 as the correct left and right unit at all levels. In fact, such a mathematics resulted in being anti-isomorphic to that representing matter, applicable at all levels of study, and equivalent to charge conjugation after quantization.2

1 In the early 1980s, when the absence of a mathematics suitable for the classical treatment of antimatter was identified, the author was (as a theoretical physicist) a member of the Department of Mathematics at Harvard University. On seeing the skepticism of colleagues toward such an absence, the author used to suggest that colleagues go to Harvard's advanced mathematics library, select any desired volume, and open any desired page at random. The author then predicted that the mathematics presented on that page would result in being fundamentally inapplicable to the classical treatment of antimatter, as indeed turned out to be the case without exception. In reality, the entire content of the advanced mathematical libraries of the early 1980s did not contain the mathematics needed for a consistent classical treatment of antimatter.
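As a purely illustrative aside, the role of the negative unit (1.1.1) can be made concrete with a minimal numerical sketch of the isodual rules as they are commonly stated (and detailed in Chapter 2): the isodual of a number a is a^d = −a, and the isodual product uses the inverse of the isodual unit. The function names and the restriction to real numbers below are assumptions of the sketch, not part of the formal theory; the sketch only verifies that −1 then acts as the correct left and right unit and that ordinary products are carried into isodual products.

```python
# Isodual unit and isodual product, as commonly stated in the isodual literature:
# I^d = -1,  a^d = -a,  a x^d b = a * (I^d)^(-1) * b = -a*b.
I_d = -1.0

def isodual(a):
    return -a

def iso_prod(a, b):
    return a * (1.0 / I_d) * b        # a x^d b = -a*b

a, b = 3.0, 7.0
print(iso_prod(I_d, isodual(a)) == isodual(a))           # True: -1 acts as the left unit
print(iso_prod(isodual(a), I_d) == isodual(a))           # True: ... and as the right unit
print(iso_prod(isodual(a), isodual(b)) == isodual(a * b))  # True: products are carried into isodual products
```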
2 In 1996, the author was invited to make a 20-minute presentation at a mathematics meeting held in Sicily. The presentation initiated with a transparency solely containing the number −1 and the statement that such a number was assumed as the basic left and right unit of the mathematics to be presented. Unfortunately, this first transparency created quite a reaction by most participants, who bombarded the author with questions that prevented the advancement of his presentation, questions often repeated with an evident waste of precious time and without the author having an opportunity to provide a technical answer. This behavior continued for the remainder of the time scheduled for the talk, to such an extent that the author could not present the subsequent transparencies proving that numbers with a negative unit verify all axioms of a field (see Chapter 2). The case illustrates that the conviction of absolute generality is so ingrained among most mathematicians as to prevent their minds from admitting the existence of new mathematics.

1.1.3 Outline of the Studies on Antimatter

Recall that "science" requires a mathematical treatment producing numerical values that can be confirmed by experiments. Along these lines, Chapter 2 is devoted, first, to the presentation of the new mathematics suggested by the author for the classical treatment of antimatter under the name of isodual mathematics, with Eq. (1.1.1) as its fundamental isodual left and right unit. The first comprehensive presentation was made by the author in monograph [94]. The field is, however, in continuous evolution, thus warranting an update.

Our study of antimatter initiates in Chapter 2, where we present the classical formalism, proposed under the name of isodual classical mechanics, which begins with a necessary reformulation of Newton's equations and then passes to the needed analytic theory. The operator formulation turned out to be equivalent, but not identical, to the quantum treatment of antiparticles, and was submitted under the name of isodual quantum mechanics. Following these necessary foundational studies, Chapter 2 includes the detailed verification that the new isodual theory of antimatter does indeed verify all classical and particle experimental evidence. In subsequent chapters we shall then study some of the predictions of the new isodual theory of antimatter, such as antigravity, a causal time machine, the isodual cosmology in which the universe has null total characteristics, and other predictions that are so far reaching as to be at the true edge of imagination. All these aspects deal with point-like antiparticles. The study of extended, nonspherical and deformable antiparticles (such as the antiproton and the antineutron) initiates in Chapter 3 for reversible conditions and continues in the subsequent chapters for broader irreversible and multi-valued conditions.

1.2 THE SCIENTIFIC IMBALANCE CAUSED BY NONLOCAL-INTEGRAL INTERACTIONS

1.2.1 Foundations of the Imbalance

The second large scientific imbalance of the 20-th century studied in this monograph is that caused by the reduction of contact nonlocal-integral interactions among extended particles to pre-existing action-at-a-distance local-differential interactions among point-like particles (see Figure 1.2).

Figure 1.2. A first illustration of the second major scientific imbalance of the 20-th century studied in this monograph, the abstraction of extended hyperdense particles, such as protons and neutrons, to points, with the consequential ignorance of the nonlocal and nonpotential effects caused by the deep overlapping of the hyperdense media in the interior of said particles. As we shall see, besides having major scientific implications, such as a necessary reformulation of Feynman's diagrams, the quantitative treatment of the nonlocal and nonpotential effects of this figure permits truly momentous advances, such as the conversion of divergent perturbative series into convergent forms, as well as the prediction and industrial development of basically new, clean energies and fuels.

It should be indicated that there exist numerous definitions of "nonlocality" in the literature, a number of which have been adapted to be compatible with pre-existing doctrines. The notion of nonlocality studied by hadronic mechanics is that specifically referred to interactions of contact type not derivable from a potential and occurring over a surface, as in the case of resistive forces, or over a volume, as in the case of the deep mutual penetration and overlapping of the wavepackets and/or charge distributions of particles.

The imbalance was mandated by the fact (well known to experts to qualify as such) that nonlocal-integral and nonpotential interactions are structurally incompatible with quantum mechanics and special relativity, beginning with their local-differential topology, because the interactions here considered cause the catastrophic collapse of the mathematics underlying special relativity, let alone the irreconcilable inapplicability of the physical laws.

In fact, the local-differential topology, calculus, geometries, symmetries, and other mathematical methods underlying special relativity permit the sole consistent description of a finite number of point-like particles moving in vacuum (empty space). Since points have no dimension and, consequently, cannot experience collisions or contact effects, the only possible interactions are at-a-distance, thus being derivable from a potential. The entire machinery of special relativity then follows. For systems of particles at large mutual distances for which the above setting is valid, such as the structure of the hydrogen atom, special relativity is then exactly valid. However, classical point-like particles do not exist; hadrons are notoriously extended; and even particles with point-like charge, such as the electron, do not have "point-like wavepackets". As we shall see, the representation of particles and/or their wavepackets as they really are in nature, that is, extended, generally nonspherical and deformable, causes the existence of contact effects of nonlocal-integral as well as zero-range nonpotential type that are beyond any hope of quantitative treatment via special relativity. This is the case for all systems of particles at short mutual distances, such as the structure of hadrons, nuclei and stars, for which special relativity is inapplicable (rather than "violated") because not conceived or intended for the latter systems. The understanding is that its approximate character remains beyond scientific doubt.
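As a purely illustrative order-of-magnitude aside, the "volume" character of the overlap interactions mentioned above can be quantified with the closed form of the overlap integral of two Gaussian densities; the widths, separations and units in the sketch below are hypothetical choices of the sketch itself, not quantities taken from hadronic mechanics, and the point is only that such overlap effects switch on sharply when the separation becomes comparable to the size of the distributions.

```python
import numpy as np

# Closed form of the 3D overlap integral of two normalized Gaussian densities
# of common width sigma whose centers are a distance d apart.
def overlap(d, sigma):
    return (4.0 * np.pi * sigma**2) ** -1.5 * np.exp(-d**2 / (4.0 * sigma**2))

sigma = 0.5                      # fm; hypothetical hadron-scale width
for d in (2.0, 1.0, 0.5, 0.0):   # center separations in fm
    print(d, overlap(d, sigma) / overlap(0.0, sigma))   # negligible at 2 fm, of order one at contact
```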
Well known organized academic interests on Einsteinian doctrines then mandated the abstraction of nonlocal-integral systems to point-like, local-differential forms as a necessary condition for the validity of special relativity. This occurrence caused a scientific distortion of simply historical proportions because, while the existence of systems for which special relativity is fully valid is beyond doubt, the assumption that all conditions in the universe verify Einsteinian doctrines is a scientific deception for personal gains.

In Section 1.1 and in Chapter 2, we show the structural inability of special relativity to permit a classical representation of antimatter in a form compatible with charge conjugation. In this section and in Chapter 3, we show the inability of special relativity to represent extended, nonspherical and deformable particles or antiparticles and/or their wavepackets under nonlocal-integral interactions at short distances. In Section 1.3 and in Chapter 4, we show the irreconcilable inapplicability of special relativity for all possible classical and operator irreversible systems of particles and antiparticles. The widely ignored theorems of catastrophic inconsistencies of Einstein's gravitation are studied in Section 1.4 and in Chapter 3.

A primary purpose of this monograph is to show that the political adaptation of everything existing in nature to special relativity, rather than the construction of new relativities to properly represent nature, prevents the prediction and quantitative treatment of new clean energies and fuels so much needed by mankind. In fact, new clean energies are permitted precisely by the contact, nonlocal-integral and nonpotential effects in hadrons, nuclei and stars that are beyond any dream of treatment via special relativity. Therefore, the identification of the limits of applicability of Einsteinian doctrines and the construction of new relativities are nowadays necessary for scientific accountability vis-à-vis society, let alone science. Needless to say, due to the complete symbiosis of special relativity and relativistic quantum mechanics, the inapplicability of the former implies that of the latter, and vice versa. In fact, quantum mechanics will also emerge from our studies as being only approximately valid for systems of particles at short mutual distances, such as hadrons, nuclei and stars, for the same technical reasons implying the lack of exact validity of special relativity. The resolution of the imbalance due to nonlocal interactions is studied in Chapter 3.

1.2.2 Exterior and Interior Dynamical Problems

The identification of the scientific imbalance here considered requires the knowledge of the following fundamental distinction:

DEFINITION 1.2.1: Dynamical systems can be classified into: EXTERIOR DYNAMICAL SYSTEMS, consisting of particles at sufficiently large mutual distances to permit their point-like approximation under sole action-at-a-distance interactions, and INTERIOR DYNAMICAL SYSTEMS, consisting of extended and deformable particles at mutual distances of the order of their size under action-at-a-distance interactions as well as contact nonpotential interactions. Interior and exterior dynamical systems of antiparticles are defined accordingly.

Typical examples of exterior dynamical systems are given by planetary and atomic structures.
Typical examples of interior dynamical systems are given by the structure of planets at the classical level and by the structure of hadrons, nuclei, and stars at the operator level.

The distinction of systems into exterior and interior forms dates back to Newton [2], but was analytically formulated by Lagrange [3], Hamilton [4], Jacobi3 [5] and others (see also Whittaker [6] and quoted references). The distinction was still assumed as fundamental at the beginning of the 20-th century, but thereafter it was ignored. For instance, Schwarzschild wrote two papers on gravitation, one on the exterior gravitational problem [7], and a second paper on the interior gravitational problem [8]. The former paper reached historical relevance and is presented in all subsequent treatises on gravitation of the 20-th century, but the same treatises generally ignore the second paper and actually ignore the distinction into exterior and interior gravitational problems.

3 Contrary to popular belief, the celebrated Jacobi theorem was formulated precisely for the general analytic equations with external terms, while all reviews known to this author in treatises on mechanics of the 20-th century present the reduced version of the Jacobi theorem for the equations without external terms. Consequently, the reading of the original work by Jacobi [5] is strongly recommended over simplified versions.

The reasons for ignoring the above distinction are numerous, and have yet to be studied by historians. A first reason is due to the widespread abstraction of particles as being point-like, in which case all distinctions between interior and exterior systems are lost, since all systems are reduced to point-particles moving in vacuum. An additional reason for ignoring interior dynamical systems is due to the great successes of the planetary and atomic structures, thus suggesting the reduction of all structures in the universe to exterior conditions. In the author's view, the primary reason for ignoring interior dynamical systems is that they imply the inapplicability of the virtual totality of the theories constructed during the 20-th century, including classical and quantum mechanics, special and general relativities, etc., as we shall see.

The most salient distinction between exterior and interior systems is the following. Newton wrote his celebrated equations for a system of n point-particles under an arbitrary force not necessarily derivable from a potential,

\[
m_a \times \frac{d v_{ak}}{dt} = F_{ak}(t, r, v), \qquad (1.2.1)
\]

where: k = 1, 2, 3; a = 1, 2, 3, ..., n; t is the time of the observer; r and v represent the coordinates and velocities, respectively; and the conventional associative multiplication is denoted hereon with the symbol × to avoid confusion with the numerous additional inequivalent multiplications we shall identify during our study.
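As a purely illustrative aside, the practical difference made by a force not derivable from a potential in Eq. (1.2.1) can be seen in a few lines of numerical integration. The sketch below uses hypothetical parameter values of its own choosing: it adds a contact-type resistive force to a harmonic one and shows that the quantity which would be conserved in the exterior case steadily decreases, anticipating the selfadjoint/nonselfadjoint decomposition introduced just below.

```python
# One particle in one dimension: Eq. (1.2.1) with a potential force plus a
# contact-type resistive force (hypothetical parameters, arbitrary units).
m, k, c = 1.0, 1.0, 0.3
dt, steps = 1.0e-3, 20000

x, v = 1.0, 0.0
E0 = 0.5 * m * v**2 + 0.5 * k * x**2
for _ in range(steps):
    F_SA  = -k * x                 # derivable from the potential V = k x^2 / 2
    F_NSA = -c * v * abs(v)        # contact resistive force, not derivable from a potential
    v += (F_SA + F_NSA) / m * dt   # explicit integration of Eq. (1.2.1)
    x += v * dt

E = 0.5 * m * v**2 + 0.5 * k * x**2
print(E0, E)   # the would-be conserved energy has decreased: dE/dt = F_NSA * v <= 0
```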
Exterior dynamical systems occur when Newton's force F_{ak} is entirely derivable from a potential, in which case the system is entirely described by the sole knowledge of a Lagrangian or a Hamiltonian and by the truncated Lagrange and Hamilton analytic equations, those without external terms,

\[
\frac{d}{dt}\frac{\partial L(t, r, v)}{\partial v_{ak}} - \frac{\partial L(t, r, v)}{\partial r_{ak}} = 0, \qquad (1.2.2a)
\]
\[
\frac{d r_{ak}}{dt} = \frac{\partial H(t, r, p)}{\partial p_{ak}}, \qquad \frac{d p_{ak}}{dt} = -\frac{\partial H(t, r, p)}{\partial r_{ak}}, \qquad (1.2.2b)
\]
\[
L = \tfrac{1}{2} \times m_a \times v_a^2 - V(t, r, v), \qquad (1.2.2c)
\]
\[
H = \frac{p_a^2}{2 \times m_a} + V(t, r, p), \qquad (1.2.2d)
\]
\[
V = U_{ak}(t, r) \times v_{ak} + U_o(t, r); \qquad (1.2.2e)
\]

where: v and p represent three-vectors; and the convention of summation over repeated indices is hereon assumed.

Interior dynamical systems occur when Newton's force F_{ak} is partially derivable from a potential and partially of contact, zero-range, nonpotential type, thus admitting additional interactions that simply cannot be represented with a Lagrangian or a Hamiltonian. For this reason, Lagrange, Hamilton, Jacobi and other founders of analytic dynamics presented their celebrated equations with external terms representing precisely the contact, zero-range, nonpotential forces among extended particles. Therefore, the treatment of interior systems requires the true Lagrange and Hamilton analytic equations, those with external terms,

\[
\frac{d}{dt}\frac{\partial L(t, r, v)}{\partial v_{ak}} - \frac{\partial L(t, r, v)}{\partial r_{ak}} = F_{ak}(t, r, v), \qquad (1.2.3a)
\]
\[
\frac{d r_{ak}}{dt} = \frac{\partial H(t, r, p)}{\partial p_{ak}}, \qquad \frac{d p_{ak}}{dt} = -\frac{\partial H(t, r, p)}{\partial r_{ak}} + F_{ak}(t, r, p), \qquad (1.2.3b)
\]
\[
L = \tfrac{1}{2} \times m_a \times v_a^2 - V(t, r, v), \qquad (1.2.3c)
\]
\[
H = \frac{p_a^2}{2 \times m_a} + V(t, r, p), \qquad (1.2.3d)
\]
\[
V = U_{ak}(t, r) \times v_{ak} + U_o(t, r), \qquad (1.2.3e)
\]
\[
F(t, r, v) = F(t, r, p/m). \qquad (1.2.3f)
\]

Comprehensive studies were conducted by Santilli in monographs [9] (including a vast historical search) on the necessary and sufficient conditions for the existence of a Lagrangian or a Hamiltonian, known as the conditions of variational selfadjointness. These studies permitted a rigorous separation of all acting forces into those derivable from a potential, or variationally selfadjoint (SA) forces, and those not derivable from a potential, or variationally nonselfadjoint (NSA) forces, according to the expression

\[
F_{ak} = F_{ak}^{SA}(t, r, v) + F_{ak}^{NSA}(t, r, v, a, \ldots). \qquad (1.2.4)
\]

In particular, the reader should keep in mind that, while selfadjoint forces are of Newtonian type, nonselfadjoint forces are generally non-Newtonian, in the sense of having an unrestricted functional dependence, including a dependence on the accelerations a and other non-Newtonian forms.4

As we shall see, nonselfadjoint forces generally have a nonlocal-integral structure that is usually reduced to a local-differential form via power series expansions in the velocities. For instance, the contact, zero-range, resistive force experienced by a missile moving in our atmosphere is characterized by an integral over the surface of the missile and is usually approximated by a power series in the velocities, e.g., F^{NSA} = k_1 × v + k_2 × v^2 + k_3 × v^3 + ... (see Figure 1.3).

4 There are serious rumors that a famous physicist from a leading institution visited NASA in 1998 to propose a treatment of the trajectory of the space shuttle during re-entry via (the truncated) Hamiltonian mechanics, and that NASA engineers kindly pushed that physicist through the door.

Figure 1.3. A reproduction of a "vignetta" presented by the author in 1978 to the colleagues at the Lyman Laboratory of Physics of Harvard University, as part of his research under his DOE contract number DE-ACO2-80ER-10651.A001, to denounce the truncation of the external terms in Lagrange's and Hamilton's equations that was dominating the physical theories of the time for the clear intent of maintaining compatibility with Einsteinian doctrines (since the latter crucially depend on the truncation depicted in this figure). The opposition by the Lyman colleagues at Harvard was so great that, in the evident attempt of trying to discourage the author from continuing the research on the true Lagrange's and Hamilton's equations, the Lyman colleagues kept the author without salary for one entire academic year, even though the author was the recipient of a DOE grant and he had two children of tender age to feed and shelter. Most virulent was the opposition by the Lyman colleagues to the two technical memoirs [39,50] presented in support of the "vignetta" of this figure, for the evident reason that they dealt with a broadening of Einsteinian doctrines beginning with their title, and then continuing with a broadening of algebras, symmetries, etc. But the author had no interest in a political chair at Harvard University, was solely interested in pursuing new scientific knowledge, and continued the research by dismissing the fierce opposition by his Lyman colleagues as ascientific and asocial (the episode is reported with real names in book [93] of 1984 and in the 1,132 pages of documentation available in Ref. [94]). As studied in detail in these two volumes, the proper mathematical treatment of the true, historical, analytic equations, those with external terms, permits indeed the advances opposed by the Lyman colleagues, namely, the achievement of coverings of Einsteinian doctrines that, being invariant (as shown later on), will indeed resist the test of time, while permitting the prediction and industrial development of new clean energies and fuels.

Figure 1.4. Another illustration of the major scientific imbalance studied in this monograph. The top view depicts a typical Newtonian system with nonlocal and nonpotential forces, such as a missile moving in atmosphere, while the bottom view depicts its reduction to point-like constituents conjectured throughout the 20-th century for the evident purpose of salvaging the validity of quantum mechanics and Einsteinian doctrines. However, the consistency of such a reduction has now been disproved by theorems, thus confirming the necessity of nonlocal and nonpotential interactions at the primitive elementary level of nature.

Moreover, the studies of monographs [9] established that, for the general case in three dimensions, Lagrange's and Hamilton's equations without external terms can only represent, in the coordinates of the experimenter, exterior dynamical systems, while the representation of interior dynamical systems in the given coordinates (t, r) of the experimenter requires the necessary use of the true analytic equations with external terms.

Whenever exposed to dynamical systems not entirely representable via the sole knowledge of a Lagrangian or a Hamiltonian, a rather general attitude is that of transforming them into an equivalent purely Lagrangian or Hamiltonian form. These transformations are indeed mathematically possible, but they are physically insidious.
It is known that, under sufficient continuity and regularity conditions, and under the necessary reduction of the nonlocal external terms to local approximations such as that in Eq. (1.2.4), Darboux's theorem of the symplectic geometry or, equivalently, the Lie-Koenig theorem of analytic mechanics assures the existence of coordinate transformations

\[
\{r, p\} \rightarrow \{r'(r, p),\; p'(r, p)\}, \qquad (1.2.5)
\]

under which nonselfadjoint systems (1.2.3) can be turned into the selfadjoint form (1.2.2), thus eliminating the external terms. However, coordinate transforms (1.2.5) are necessarily nonlinear. Consequently, the new reference frames are necessarily noninertial. Therefore, the elimination of the external nonselfadjoint forces via coordinate transforms causes the necessary loss of Galileo's and Einstein's relativities. Moreover, it is evidently impossible to place measuring apparata in new coordinate systems of the type r' = exp(k × p), where k is a constant. For these reasons, the use of Darboux's theorem or of the Lie-Koenig theorem was strictly prohibited in monographs [9,10,11]. Thus, to avoid misrepresentations, the following basic assumption is hereon adopted:

ASSUMPTION 1.2.1: The sole admitted analytic representations are those in the fixed reference frame of the experimenter without the use of integrating factors, called direct analytic representations. Only after direct representations have been identified may the use of the transformation theory have physical relevance.

Due to its importance, the above assumption will also be adopted throughout this monograph. As an illustration, the admission of integrating factors within the fixed coordinates of the experimenter does indeed allow the achievement of an analytic representation without external terms for a restricted class of nonconservative systems, resulting in Hamiltonians of the type H = e^{f(t,r,...)} × p²/(2 × m). This Hamiltonian has a fully valid canonical meaning of representing the time evolution. However, this Hamiltonian loses its meaning as representing the energy of the system. The quantization of such a Hamiltonian then leads to a plethora of illusions, such as the belief that the uncertainty principle for energy and time is still valid while, for the example here considered, such a belief has no sense because H does not represent the energy (see Ref. [9b] for more details). Under the strict adoption of Assumption 1.2.1, all these ambiguities are absent because H always represents the energy, irrespective of whether conserved or nonconserved, thus setting up solid foundations for correct physical interpretations.

1.2.3 General Inapplicability of Conventional Mathematical and Physical Methods for Interior Dynamical Systems

The impossibility of reducing interior dynamical systems to an exterior form within the fixed reference frame of the observer causes the loss, for interior dynamical systems, of all conventional mathematical and physical methods of the 20-th century. To begin, the presence of irreducible nonselfadjoint external terms in the analytic equations causes the loss of their derivability from a variational principle. In turn, the lack of an action principle and related Hamilton-Jacobi equations causes the lack of any possible quantization, thus illustrating the reasons why the voluminous literature in quantum mechanics of the 20-th century carefully avoids the treatment of analytic equations with external terms.
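As a purely illustrative aside on the exponential-factor Hamiltonians mentioned under Assumption 1.2.1 above, the claim that such an H generates the correct time evolution while losing its meaning as the energy can be verified in two lines for the hypothetical choice f = −γt (a Caldirola-Kanai-type Hamiltonian, used here only as a sketch and not as the specific case the text has in mind):

```latex
\[
H = e^{-\gamma t}\,\frac{p^2}{2m}, \qquad
\dot r = \frac{\partial H}{\partial p} = e^{-\gamma t}\,\frac{p}{m}, \qquad
\dot p = -\frac{\partial H}{\partial r} = 0 ,
\]
\[
\Rightarrow \quad m\,\ddot r = -\gamma\, m\,\dot r , \qquad
E_{\rm kin} = \tfrac{1}{2}\, m\, \dot r^{\,2}
            = \frac{p_0^{\,2}}{2m}\, e^{-2\gamma t}
      \;\neq\; H = \frac{p_0^{\,2}}{2m}\, e^{-\gamma t}.
\]
```

Hamilton's equations with this H indeed reproduce a particle subject to a linear resistive force, yet the numerical value of H differs from the physical kinetic energy at all times t > 0, which is exactly the ambiguity Assumption 1.2.1 is designed to avoid.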
By contrast, one of the central objectives of this monograph is to review the studies that have permitted the achievement of a reformulation of Eqs. (1.2.3) fully derivable from a variational principle in conformity with Assumption 1.2.1, thus permitting a consistent operator version of Eqs. (1.2.3) as a covering of conventional quantum formulations.

Recall that Lie algebras are at the foundations of all classical and quantum theories of the 20-th century. This is due to the fact that the brackets of the time evolution as characterized by Hamilton's equations,

\[
\frac{dA}{dt} = \frac{\partial A}{\partial r_{ak}} \times \frac{d r_{ak}}{dt} + \frac{\partial A}{\partial p_{ak}} \times \frac{d p_{ak}}{dt}
= \frac{\partial A}{\partial r_{ak}} \times \frac{\partial H}{\partial p_{ak}} - \frac{\partial H}{\partial r_{ak}} \times \frac{\partial A}{\partial p_{ak}} = [A, H], \qquad (1.2.6)
\]

firstly, verify the conditions to characterize an algebra as currently understood in mathematics, that is, the brackets [A, H] verify the right and left scalar and distributive laws,

\[
[n \times A, H] = n \times [A, H], \qquad (1.2.7a)
\]
\[
[A, n \times H] = [A, H] \times n, \qquad (1.2.7b)
\]
\[
[A \times B, H] = A \times [B, H] + [A, H] \times B, \qquad (1.2.7c)
\]
\[
[A, H \times Z] = [A, H] \times Z + H \times [A, Z], \qquad (1.2.7d)
\]

and, secondly, the brackets [A, H] verify the Lie algebra axioms

\[
[A, B] = -[B, A], \qquad (1.2.8a)
\]
\[
[[A, B], C] + [[B, C], A] + [[C, A], B] = 0. \qquad (1.2.8b)
\]

The above properties then persist following quantization into the operator brackets [A, B] = A × B − B × A, as is well known. When adding external terms, the resulting new brackets,

\[
\frac{dA}{dt} = \frac{\partial A}{\partial r_{ak}} \times \frac{d r_{ak}}{dt} + \frac{\partial A}{\partial p_{ak}} \times \frac{d p_{ak}}{dt}
= \frac{\partial A}{\partial r_{ak}} \times \frac{\partial H}{\partial p_{ak}} - \frac{\partial H}{\partial r_{ak}} \times \frac{\partial A}{\partial p_{ak}} + \frac{\partial A}{\partial r_{ak}} \times F_{ak}
= (A, H, F) = [A, H] + \frac{\partial A}{\partial r_{ak}} \times F_{ak}, \qquad (1.2.9)
\]

violate the right scalar law (1.2.7b) and the right distributive law (1.2.7d) and, therefore, the brackets (A, H, F) do not constitute any algebra at all, let alone verify the basic axioms of the Lie algebras [9b].

The loss of the Lie algebras in the brackets of the time evolution of interior dynamical systems, in their historical treatment by Lagrange, Hamilton, Jacobi and other founders of analytic dynamics, causes the loss of all mathematical and physical formulations built in the 20-th century. The loss of basic methods constitutes the main reason for the abandonment of the study of interior dynamical systems. In fact, external terms in the analytic equations were essentially ignored through the 20-th century, therefore adapting the universe to the analytic equations (1.2.2), today known as the truncated analytic equations. By contrast, another central objective of this monograph is to review the studies that have permitted the achievement of a reformulation of the historical analytic equations with external terms that is not only derivable from an action principle, as indicated earlier, but also characterizes brackets of the time evolution that, firstly, constitute an algebra and, secondly, are such that this algebra results in being a covering of the Lie algebras.
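As a purely illustrative aside, the violation of the right scalar law and of the antisymmetry axiom by the brackets (1.2.9) can be checked symbolically in a few lines; the sketch below works in one dimension with generic functions A, H, F (names and the one-degree-of-freedom restriction are choices of the sketch, not of the text).

```python
import sympy as sp

r, p, n = sp.symbols('r p n')
A = sp.Function('A')(r, p)
H = sp.Function('H')(r, p)
F = sp.Function('F')(r, p)

def poisson(X, Y):
    # conventional Poisson brackets [X, Y] of Eq. (1.2.6), one degree of freedom
    return sp.diff(X, r) * sp.diff(Y, p) - sp.diff(Y, r) * sp.diff(X, p)

def bracket_ext(X, Y, Fext):
    # brackets (X, Y, Fext) with an external term, Eq. (1.2.9)
    return poisson(X, Y) + sp.diff(X, r) * Fext

# Right scalar law: holds for [.,.], fails for (., ., F).
print(sp.simplify(poisson(A, n * H) - n * poisson(A, H)))               # 0
print(sp.simplify(bracket_ext(A, n * H, F) - n * bracket_ext(A, H, F))) # (1 - n)*F*dA/dr, nonzero

# Antisymmetry (1.2.8a): holds for [.,.], fails for (., ., F).
print(sp.simplify(poisson(A, H) + poisson(H, A)))                       # 0
print(sp.simplify(bracket_ext(A, H, F) + bracket_ext(H, A, F)))         # F*(dA/dr + dH/dr), nonzero
```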
1.2.4 Inapplicability of Special Relativity for Dynamical Systems with Resistive Forces

The scientific imbalance caused by the reduction of interior dynamical systems to systems of point-like particles moving in vacuum is indeed of historical proportions, because it implied the belief of the exact applicability of special relativity and quantum mechanics for all conditions of particles existing in the universe, thus implying their applicability under conditions for which these theories were not intended.

A central scope of this monograph is to show that the imposition of said theories on interior dynamical systems causes the suppression of new clean energies and fuels already in industrial, let alone scientific, development, thus raising serious problems of scientific ethics and accountability.

At the classical level, the "inapplicability" (rather than the "violation") of (the Galilean and) special relativities for the description of an interior system such as a missile in atmosphere (as depicted in Figure 1.4) is beyond credible doubt, as any expert should know to qualify as such, because said relativities can only describe systems with action-at-a-distance potential forces, while the forces acting on a missile in atmosphere are of contact, zero-range, nonpotential type. Despite this clear evidence, the resiliency of organized academic interests on conventional relativities knows no boundaries. As indicated earlier, when faced with the above evidence, a rather general posture is that the resistive forces are "illusory" because, when the missile in atmosphere is reduced to its elementary point-like constituents, all resistive forces "disappear." Such a belief is easily proved to be nonscientific by the following property, which can be proved by a first year graduate student in physics:

THEOREM 1.2.1 [9b]: A classical dissipative system cannot be consistently reduced to a finite number of quantum particles under sole potential forces and, vice versa, no ensemble of a finite number of quantum particles with only potential forces can reproduce a dissipative classical system under the correspondence or other principles.

Note that the above property causes the inapplicability of conventional relativities for the description of the individual constituents of interior dynamical systems, let alone for their description as a whole. Rather than adapting nature to pre-existing organized interests on Einsteinian doctrines, the scope of this monograph is that of adapting the theories to nature, as requested by scientific ethics and accountability.

1.2.5 Inapplicability of Special Relativity for the Propagation of Light within Physical Media

Another case of manipulation of scientific evidence to serve organized academic interests on conventional relativities is the propagation of light within physical media, such as water. As is well known, light propagates in water at a speed C much smaller than the speed c in vacuum, approximately given by the value

\[
C = \frac{c}{n} = \frac{2}{3} \times c \ll c, \qquad n = \frac{3}{2} \gg 1. \qquad (1.2.10)
\]

It is well known that electrons can propagate in water at speeds bigger than the local speed of light, actually approaching the speed of light in vacuum. In fact, the propagation of electrons faster than the local speed of light is responsible for the blueish light, called Cerenkov light, that can be seen in the pools of nuclear reactors. It is equally well known that special relativity was built to describe the propagation of light IN VACUUM, and certainly not within physical media. In fact, the setting of a massive particle traveling faster than the local speed of light is in violation of the basic axioms of special relativity. To salvage the principle of causality it is then often assumed that the speed of light "in vacuum" is the maximal causal speed "within water".
However, in this case there is the violation of the axiom of relativistic addition of speeds, because the sum of two speeds of light in water does not yield the speed of light, as required by a fundamental axiom of special relativity,

\[
V_{tot} = \frac{C + C}{1 + C^2/c^2} = \frac{12}{13} \times c \neq C. \qquad (1.2.11)
\]

Vice versa, if one assumes that the speed of light "in water" C is the maximal causal speed "in water", the axiom of relativistic composition of speeds is verified,

\[
V_{tot} = \frac{C + C}{1 + C^2/C^2} = C, \qquad (1.2.12)
\]

but there is the violation of the principle of causality, evidently due to the fact that ordinary massive particles such as the electron (and not hypothetical tachyons) can travel faster than the local causal speed.

Again, the resiliency of organized interests on established relativities has no boundaries. When faced with the above evidence, a general posture is that, when light propagating in water is reduced to photons scattering among the atoms constituting water, all axioms of special relativity are recovered in full. In fact, according to this belief, photons propagate in vacuum, thus recovering the conventional maximal causal speed c, while the reduction of the speed of light is due to the scattering of light among the atoms constituting water.

Figure 1.5. Further visual evidence of the lack of applicability of Einstein's doctrines within physical media: the refraction of light in water, due to the decrease of its speed, contrary to the axiom of the "universal constancy of the speed of light". Organized academic interests on Einsteinian doctrines have claimed throughout the 20-th century that this effect is "illusory" because Einsteinian doctrines are recovered by reducing light to the scattering of photons among atoms. The political nature of the argument, particularly when proffered by experts, is established by the numerous experimental evidence reviewed in this section.

The nonscientific character of the above view is established by the following evidence, known to experts to qualify as such:

1) Photons are neutral, thus having a high capability of penetration within electron clouds or, more technically, the scattering of photons on atomic electron clouds (called Compton scattering) is rather small. Explicit calculations (that can be done by a first year graduate student in physics via quantum electrodynamics) show that, in the most optimistic of the assumptions and corrections, said scattering can account for only 3% of the reduction of the speed of light in water, thus leaving about 30% of the reduction quantitatively unexplained. Note that the deviation from physical reality is of such a magnitude that it cannot be "resolved" via the usual arbitrary parameters "to make things fit."

2) The reduction of speed occurs also for radio waves with one meter wavelength propagating within physical media, in which case the reduction to photons has no credibility due to the very large value of the wavelength compared to the size of atoms. The impossibility of a general reduction of electromagnetic waves to photons propagating within physical media is independently confirmed by the existence of vast experimental evidence on non-Doppler effects, reviewed in Chapter 9, indicating the existence of contributions outside the Doppler law even when adjusted to the local speed.
3) There exists today a large volume of experimental evidence, reviewed in Chapter 5, establishing that light propagates within hyperdense media, such as those in the interior of hadrons, nuclei and stars, at speeds much bigger than the speed in vacuum,

$$ C = \frac{c}{n} \gg c, \qquad n \ll 1, \qquad (1.2.13) $$

in which case the reduction of light to photons scattering among atoms loses any physical sense (because such propagation can never reach the speed c, let alone speeds bigger than c).

In conclusion, experimental evidence beyond credible doubt has established that the speed of light C is a local quantity dependent on the characteristics of the medium in which the propagation occurs, with speed C = c in vacuum, speeds C << c within physical media of low density, and speeds C >> c within media of very high density. The variable character of the speed of light then seals the lack of universal applicability of Einsteinian doctrines, since the latter are notoriously based on the philosophical assumption of the "universal constancy of the speed of light".

1.2.6 Inapplicability of the Galilean and Poincaré symmetries for Interior Dynamical Systems

By remaining at the classical level, the inapplicability of Einsteinian doctrines within physical media is additionally established by the dramatic dynamical differences between the structure of a planetary system, such as our Solar system, and the structure of a planet, such as Jupiter.

The planetary system is a Keplerian system, that is, a system in which the heaviest component is at the center (actually in one of the two foci of the elliptical orbits) and the other constituents orbit around it without collisions. By contrast, planets absolutely do not constitute a Keplerian system, because they do not have a Keplerian center with lighter constituents orbiting around it (see Figure 1.6).

Figure 1.6. Another illustration of the second major scientific imbalance studied in this monograph, the dramatic structural differences between exterior and interior dynamical systems, here represented with the Solar system (top view) and the structure of Jupiter (bottom view). Planetary systems have a Keplerian structure with the exact validity of the Galilean and Poincaré symmetries. By contrast, interior systems such as planets (as well as hadrons, nuclei and stars) do not have a Keplerian structure because of the lack of the Keplerian center. Consequently, the Galilean and Poincaré symmetries cannot possibly be exact for interior systems, in favor of the covering symmetries and relativities studied in this monograph.

Moreover, for a planetary system isolated from the rest of the universe, the total conservation laws for the energy, linear momentum and angular momentum are verified for each individual constituent. For instance, the conservation of the intrinsic and orbital angular momentum of Jupiter is crucial for its stability.

On the contrary, for the interior dynamical problem of Jupiter, conservation laws hold only globally, while no conservation law can be formulated for individual constituents. For instance, in Jupiter's structure we can see in a telescope the existence of interior vortices in Jupiter's atmosphere with variable angular momentum, yet always in such a way as to verify the total conservation laws. We merely have internal exchanges of energy, linear and angular momentum, but always in such a way that they cancel out globally, resulting in total conservation laws.

In the transition to particles the situation remains the same as that at the classical level.
For instance, nuclei do not have nuclei and, therefore, nuclei are not Keplerian systems. Similarly, the Solar system is a Keplerian system, but the Sun is not. At any rate, any reduction of the structure of the Sun to a Keplerian system directly implies the belief in the perpetual motion within a physical medium, because 20 RUGGERO MARIA SANTILLI electrons and protons could move in the hyperdense medium in the core of a star with conserved angular momenta, namely, a belief exiting all boundaries of credibility, let alone of science. The above evidence establishes beyond credible doubt the following: THEOREM 1.2.2 [10b]: Galileo’s and Poincar´e symmetries are inapplicable for classical and operator interior dynamical systems due to the lack of Keplerian structure, the presence of contact, zero-range, non-potential interactions, and other reasons. Note the use of the word “inapplicable”, rather than “violated” or “broken”. This is due to the fact that, as clearly stated by the originators of the basic spacetime symmetries (rather than their followers of the 20-th century), Galileo’s and Poincar´e symmetries were not built for interior dynamical conditions. Perhaps the biggest scientific imbalance of the 20-th century has been the abstraction of hadronic constituents to point-like particles as a necessary condition to use conventional spacetime symmetries, relativities and quantum mechanics for interior conditions. In fact, such an abstraction is at the very origin of the conjecture that the undetectable quarks are the physical constituents of hadrons (see Section 1.2.7 for details).. Irrespective of whether we consider quarks or other more credible particles, all particles have a wavepacket of the order of 1 F = 10−13 cm, that is, a wavepacket of the order of the size of all hadrons. Therefore, the hyperdense medium in the interior of hadrons is composed of particles with extended wavepackets in conditions of total mutual penetration. Under these conditions, the belief that Galileo’s and Poincar´e symmetries are exactly valid in the interior of hadrons implies the exiting from all boundaries of credibility, let alone of science. The inapplicability of the fundamental spacetime symmetries then implies the inapplicability of Galilean and special relativities as well as of quantum nonrelativistic and relativistic mechanics. We can therefore conclude with the following: COROLLARY 1.2.2A [10b]: Classical Hamiltonian mechanics and related Galilean and special relativities are not exactly valid for the treatment of interior classical systems such as the structure of Jupiter, while nonrelativistic and relativistic quantum mechanics and related Galilean and special relativities are not exactly valid for interior particle systems, such as the structure of hadrons, nuclei and stars. Another important scope of this monograph is to show that the problem of the exact spacetime symmetries applicable to interior dynamical systems is not a mere academic issue, because it carries a direct societal relevance. In fact, HADRONIC MATHEMATICS, MECHANICS AND CHEMISTRY 21 we shall show that broader spacetime symmetries specifically built for interior systems predict the existence of new clean energies and fuels that are prohibited by the spacetime symmetries of the exterior systems. 
As we shall see in Section 1.2.7, Chapter 6 and Chapter 12, the assumption that the undetectable quarks are physical constituents of hadrons prohibits possible new energy based on processes occurring in the interior of hadrons (rather than in the interior of their ensembles such as nuclei). On the contrary, the assumption of hadronic constituents that can be fully defined in our spacetime and can be produced free under suitable conditions, directly implies new clean energies. 1.2.7 The Scientific Imbalance Caused by Quark Conjectures One of the most important objectives of this monograph, culminating in the presentation of Chapter 12, is to show that the conjecture that quarks are physical particles existing in our spacetime constitutes one of the biggest threats to mankind because it prevents the orderly scientific process of resolving increasingly cataclysmic environmental problems. It should be clarified in this respect, as repeatedly stated by the author in his writings that the unitary, Mendeleev-type, SU(3)-color classification of hadron into families can be reasonably considered as having a final character (see e.g., Ref. [99] and papers quoted therein), in view of the historical capability of said classification to predict several new particles whose existence was subsequently verified experimentally. All doubts herein considered solely refer to the joint use of the same classification models as providing the structure of each individual element of a given hadronic family (for more details, see memoirs [100,101] and preprint [102] and Chapter 6). Far from being alone, this author has repeatedly expressed the view that quarks cannot be physical constituents of hadrons existing in our spacetime for numerous independent reasons. On historical grounds, the study of nuclei, atoms and molecules required two different models, one for the classification and a separate one for the structure of the individual elements of a given SU(3)-color family. Quark theories depart from this historical teaching because of their conception to represent with one single theory both the classification and the structure of hadrons. As an example, the idea that the Mendeleev classification of atoms could jointly provide the structure of each individual atom of a given valence family is outside the boundary of science. The Mendeleev classification was essentially achieved via classical theories, while the understanding of the atomic structure required the construction of a new theory, quantum mechanics. Independently from the above dichotomy classification vs structure, it is well known by specialists, but rarely admitted, that quarks are purely mathematical 22 RUGGERO MARIA SANTILLI quantities, being purely mathematical representations of a purely mathematical unitary symmetry defined in a purely mathematical complex-valued unitary space without any possibility, whether direct or implied, of being defined in our spacetime (representation technically prohibited by the O’Rafearthaigh theorem). It should be stressed that, as purely mathematical objects, quarks are necessary for the consistency of SU(3)-color theories. Again, quarks are the fundamental representations of said Lie symmetry and, as such, their existence is beyond doubt. All problems emerge when said mathematical representation of a mathematical symmetry in the mathematical unitary space is assumed as characterizing physical particles existing in our spacetime. 
It follows that the conjecture that quarks are physical particles is afflicted by a plethora of major problematic aspects today known to experts as catastrophic inconsistencies of quark conjectures, such as: 1) No particle possessing the peculiar features of quark conjectures (fraction charge, etc.), has ever been detected to date in any high energy physical laboratory around the world. Consequently, a main consistency requirement of quark conjectures is that quarks cannot be produced free and, consequently, they must be “permanently confined” in the interior of hadrons. However, it is well known to experts that, despite half a century of attempts, no truly convincing “quark confinement” inside protons and neutrons has been achieved, nor can it be expected on serious scientific grounds by assuming (as it is the case of quark conjectures) that quantum mechanics is identically valid inside and outside hadrons. This is due to a pillar of quantum mechanics, Heisenberg’s uncertainty principle, according to which, given any manipulated theory appearing to show confinement for a given quark, a graduate student in physics can always prove the existence of a finite probability for the same quark to be free outside the hadron, in catastrophic disagreement with physical reality. Hence, the conjecture that quarks are physical particles is afflicted by catastrophic inconsistencies in its very conception [100]. 2) It is equally well known by experts to qualify as such that quarks cannot experience gravity because quarks cannot be defined in our spacetime, while gravity can only be formulated in our spacetime and does not exist in mathematical complex-unitary spaces. Consequently, if protons and neutrons were indeed formed of quarks, we would have the catastrophic inconsistency that all quark believers should float in space due to the absence of gravity [101]. 3) It is also well known by experts that “quark masses” cannot possess any inertia since they are purely mathematical parameters that cannot be defined in our spacetime. A condition for any mass to be physical, that is, to have inertia, is that it has to be the eigenvalue of a Casimir invariant of the Poincar´e symmetry, while quarks cannot be defined via said symmetry because of their hypothetical fractional charges and other esoteric assumptions. This aspect alone implies numerous catastrophic inconsistencies, such as the impossibility of having the HADRONIC MATHEMATICS, MECHANICS AND CHEMISTRY 23 energy equivalence E = mc2 for any particle composed of quarks, against vast experimental evidence to the contrary. 4) Even assuming that, because of some twist of scientific manipulation, the above inconsistencies are resolved, it is known by experts that quark theories have failed to achieve a representation of all characteristics of protons and neutron, with catastrophic inconsistencies in the representation of spin, magnetic moment, means lives, charge radii and other basic features [102]. 5) It is also known by experts that the application of quark conjectures to the structure of nuclei has multiplied the controversies in nuclear physics, while resolving none of them. As an example, the assumption that quarks are the constituents of the protons and the neutrons constituting nuclei has failed to achieve a representation of the main characteristics of the simplest possible nucleus, the deuteron. 
In fact, quark conjectures are afflicted by the catastrophic inconsistencies of being unable to represent the spin 1 of the deuteron (since they predict spin zero in the ground state while the deuteron has spin 1), they are unable to represent the anomalous magnetic moment of the deuteron, they are unable to represent the deuteron stability, they are unable to represent the charge radius of the deuteron, and when passing to larger nuclei, such as the zirconium, the catastrophic inconsistencies of quark conjectures can only be defined as being embarrassing [102]. In summary, while the final character of the SU(3)-color classification of hadrons into families has reached a value beyond scientific doubt, the conjecture that quarks are the actual physical constituents of hadrons existing in our spacetime is afflicted by so many and so problematic aspects to raise serious issues of scientific ethics and accountability, particularly in view of the ongoing large expenditures of public funds in the field. On a personal note the author remembers some of the seminars delivered by the inventor of quarks, Murray Gell Mann, at Harvard University in the early 1980s, at the end of which there was the inevitable question whether Gell Mann believed or not that quarks are physical particles. Gell Mann’s scientific caution (denoting a real scientific stature) is still impressed in the author’s mind because he routinely responded with essentially the viewpoint outlined here, namely, Gell Mann stressed the mathematical necessity of quarks, while avoiding a firm posture on their physical reality. It is unfortunate that such a serious scientific position by Murray Gell-Manns was replaced by his followers with nonscientific positions mainly motivated by money, power and prestige. Subsequently, quark conjectures have become a real “scientific business”, as established by claim proffered by large high energy physics laboratories to have “discovered that and that quark”. while in reality they had discovered a new particle predicted by SU(3)-color classification. 24 RUGGERO MARIA SANTILLI The decay of scientific ethics in the field is so serious, and the implications for mankind so potentially catastrophic (due to the suppression by quark conjectures as physical particles of possible new clean energies studied in Volume II) that, in the author’s view, quark conjectures have been instrumental in the creation of the current scientific obscurantism of potentially historical proportions (see the Open Denunciation of the Nobel Foundation for Heading an Organized Scientific Obscurantism available in the web site http://www.scientificethics.org/NobelFoundation.htm). 1.2.8 The Scientific Imbalance Caused by Neutrino Conjectures Another central objective of this monograph is to show that neutrino conjectures constitute a political obstacle of potentially historical proportions against the orderly prediction and development of much needed new clean energies of ”hadronic type”, that is, new energies originating in the structure of individual hadrons, rather than in their collection as occurring in nuclei. Moreover, we shall show that neutrino conjectures constitute an additional political obstacle also of potentially historical proportions against the study of one of the most important scientific problems in history, the interplay between matter and the universal substratum needed for the existence and propagation of electromagnetic waves and elementary particles. 
To prevent misrepresentations by vociferous (yet self-destructing) organized interests in the field, it should be stressed up-front that, as is the case for quark conjectures, neutrino conjectures are necessary for the "current" treatment of weak interactions. Therefore, a large scientific imbalance emerges only from the political use and interpretation of neutrino conjectures that has been dominant in the 20-th century and remains dominant to this day, namely, the use and interpretation of neutrino conjectures conceived and implemented in a capillary way for the continuation of the dominance of Einsteinian doctrines for all of physics.

Most distressing are contemporary claims of "neutrino detections" (denounced technically in Volume II), when the originator of neutrinos, Enrico Fermi, is on record as stressing that "neutrinos cannot be detected." Hence, the scientifically correct statement would be the "detection of physical particles predicted by neutrino conjectures." As it was the case for Murray Gell-Mann, it is unfortunate that the scientific caution by Enrico Fermi was replaced by his followers with political postures essentially aiming at money, prestige and power.

In this subsection we shall show the political character of neutrino conjectures via a review of the historical objections against the belief that the current plethora of neutrinos constitute actual physical particles in our spacetime. Alternative theoretical interpretations can be presented only in Chapter 6, with industrial applications in Chapter 12, following the prior study and verification of the new mathematics that is notoriously needed for true new vistas in science.

Figure 1.7. A view of the historical "bell shaped" curve representing the variation of the energy of the electron in nuclear beta decays (see, e.g., Ref. [13]). As soon as the apparent "missing energy" by the electron was detected in the early part of the 20-th century, it was claimed to be experimental evidence of the existence of a new particle with spin 1/2, charge zero and mass zero, called by Fermi the "little neutron" or "neutrino".

As is well known, Rutherford [104] submitted in 1920 the conjecture that hydrogen atoms in the core of stars are compressed into a new particle he called the neutron according to the synthesis (p+, e−) → n. The existence of the neutron was subsequently confirmed experimentally in 1932 by Chadwick [105].

However, numerous objections were raised by the leading physicists of the time against Rutherford's conception of the neutron as a bound state of one proton p+ and one electron e−. Pauli [106] first noted that Rutherford's synthesis violates the angular momentum conservation law because, according to quantum mechanics, a bound state of two particles with spin 1/2 (the proton and the electron) must yield a particle with integer spin and cannot yield a particle with spin 1/2 and charge zero such as the neutron. Consequently, Pauli conjectured the existence of a new neutral particle with spin 1/2 that is emitted in the synthesis (p+, e−) → n, or in similar radioactive processes, so as to verify the angular momentum conservation law.
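A minimal sketch of the angular-momentum bookkeeping behind Pauli's objection may help here; it is an illustration of standard quantum-mechanical spin addition under the assumption of zero orbital angular momentum, and is not taken from the source.

```python
# Minimal sketch (not from the source) of the angular-momentum bookkeeping behind
# Pauli's objection: two spin-1/2 particles with zero orbital angular momentum can
# only couple to total spin |s1 - s2|, ..., s1 + s2 in integer steps, i.e. 0 or 1,
# and never to the spin 1/2 carried by the neutron.

from fractions import Fraction

def allowed_total_spins(s1, s2):
    """Quantum-mechanical addition of two angular momenta."""
    s1, s2 = Fraction(s1), Fraction(s2)
    s, s_max = abs(s1 - s2), s1 + s2
    values = []
    while s <= s_max:
        values.append(s)
        s += 1
    return values

proton, electron = Fraction(1, 2), Fraction(1, 2)
couplings = allowed_total_spins(proton, electron)
print(couplings)                       # [0, 1]: only integer total spins
print(Fraction(1, 2) in couplings)     # False: spin 1/2 cannot be obtained
```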
Fermi [107] adopted Pauli’s conjecture, coined the name neutrino (meaning in Italian a “little neutron”) and presented the first comprehensive theory of the underlying interactions (called “weak”), according to which the neutron synthesis 26 RUGGERO MARIA SANTILLI should be written (p+, e−) → n + ν, where ν is the neutrino, in which case the inverse reaction (the spontaneous decay of the neutron) reads n → p+ + e− + ν¯, where ν¯ is the antineutrino. Despite the scientific authority of historical figures such as Pauli and Fermi, the conjecture on the existence of the neutrino and antineutrino as physical particles was never universally accepted by the entire scientific community because of: the impossibility for the neutrinos to be directly detected in laboratory; the neutrinos inability to interact with matter in any appreciable way; and the existence of alternative theories that do not need the neutrino conjecture (see Refs. [108-110] and literature quoted therein, plus the alternative theory presented in Chapter 6). By the middle of the 20-th century there was no clear experimental evidence acceptable by the scientific community at large confirming the neutrino conjecture beyond doubt, except for experimental claims in 1959 that are known today to be basically flawed on various grounds, as we shall see below and in Chapter 6. In the last part of the 20-th century, there was the advent of the so-called unitary SU(3) theories and related quark conjectures studied in the preceding subsection. In this way, neutrino conjectures became deeply linked to and their prediction intrinsically based on quark conjectures. This event provided the first fatal blow to the credibility of the neutrino conjectures because serious physics cannot be done via the use of conjectures based on other conjectures. In fact, the marriage of neutrino and quark conjectures within the standard model has requested the multiplication of neutrinos, from the neutrino and antineutrino conjectures of the early studies, to six different hypothetical particles, the so called electron, muon and tau neutrinos and their antiparticles. In the absence of these particles the standard model would maintain its meaning as classification of hadrons, but would lose in an irreconcilable way the joint capability of providing also the structure of each particle in a hadronic multiplet. In turn, the multiplication of the neutrino conjectures has requested the additional conjecture that the electron, muon and tau neutrinos have masses and, since the latter conjecture resulted in being insufficient, there was the need for the additional conjecture that neutrinos have different masses, as necessary to salvage the structural features of the standard model. Still in turn, the lack of resolution of the preceding conjectures has requested the yet additional conjecture that neutrinos oscillate, namely, that “they change flavor” (transform among themselves back and forth). In addition to this rather incredible litany of sequential conjectures, each conjecture being voiced in support of a preceding unverified conjecture, all conjectures being crucially dependent on the existence of quarks as physical particles despite their proved lack of gravity and physical masses, by far the biggest con- HADRONIC MATHEMATICS, MECHANICS AND CHEMISTRY 27 Figure 1.8. A schematic illustration of the fact that the electron in beta decays can be emitted in different directions. 
When the energy in the beta decay is computed with the inclusion of the Coulomb interactions between the expelled (negatively charged) electron and the (positively charged) nucleus at different expulsion directions, the nucleus acquires the “missing energy,” without any energy left for the hypothetical neutrino. As we shall see in Chapter 6, rather than being a disaster, the occurrence is at the foundation of a possible basically new scientific horizon with implications sufficient to require studies over the entire third millennium. troversies have occurred in regard to experimental claims of neutrino detection voiced by large collaborations. To begin, both neutrinos and quarks cannot be directly detected as physical particles in our spacetime. Consequently, all claims on their existence are indirect, that is, based on the detection of actual physical particles predicted by the indicated theories. This occurrence is, per se, controversial. For instance, controversies are still raging following announcements by various laboratories to have “discovered” one or another quark, while in reality the laboratories discovered physical particles predicted by a Mendeleev-type classification of particles, the same classification being admitted by theories that require no quarks at all as physical particles, as we shall indicate in Chapter 6. In the 1980s, a large laboratory was built deep into the Gran Sasso mountain in Italy to detect neutrinos coming from the opposite side of Earth (since the mountain was used as a shield against cosmic rays). Following the investment of large public funds and five years of tests, the Gran Sasso Laboratory released no evidence of clear detection of neutrino originated events. Rather than passing to a scientific caution in the use of public funds, the failure of the Gran Sasso experiments to produce any neutrino evidence stimulated 28 RUGGERO MARIA SANTILLI massive efforts by large collaborations involving hundred of experimentalists from various countries for new tests requiring public funds in the range of hundred of millions of dollars. The increase in experimental research was evidently due to the scientific stakes, because, as well known by experts but studiously omitted, the lack of verification of the neutrino conjectures would imply the identification of clear limits of validity of Einsteinian doctrines and quantum mechanics. These more recent experiments resulted in claims that, on strict scientific grounds, should be considered “experimental beliefs” by any serious scholars for numerous reasons, such as: 1) The predictions are based on a litany of sequential conjectures none of which is experimentally established on clear ground; 2) The theory contain a plethora of unrestricted parameters that can essentially fit any pre-set data (see next subsection); 3) The “experimental results” are based on extremely few events out of hundreds of millions of events over years of tests, thus being basically insufficient in number for any serious scientific claim; 4) In various cases the “neutrino detectors” include radioactive isotopes that can themselves account for the selected events; 5) The interpretation of the experimental data via neutrino and quark conjectures is not unique, since there exist nowadays other theories representing exactly the same events without neutrino and quark conjectures (including a basically new scattering theory of nonlocal type indicated in Chapter 3 and, more extensively, in monograph [10b]). 
To understand the scientific scene, the serious scholar (that is, the scholar not politically aligned to the preferred ”pet theories” indicated in the Preface) should note that neutrino and quark conjectures have requested to date the expenditure of over one billion dollars of public funds in theoretical and experimental research with the result of increasing the controversies rather than resolving any of them. Therefore, it is now time for a moment of reflection: scientific ethics and accountability require that serious scholars in the field exercise caution prior to venturing claims of actual physical existence of so controversial and directly unverifiable conjectures. Such a moment of reflection requires the re-inspection of the neutrino conjecture at its foundation. In fact, it is important to disprove the neutrino conjecture as originally conceived, and then disprove the flavored extension of the conjecture as requested by quark conjectures. As reported in nuclear physics textbooks (see, e.g., Ref. [13]), the energy experimentally measured as being carried by the electron in beta decays is a bell-shaped curve with a maximum value of 0.782 MeV, that is the difference in value between the mass of the neutron and that of the resulting proton in the HADRONIC MATHEMATICS, MECHANICS AND CHEMISTRY 29 Figure 1.9. A picture of one of the “neutrino detectors” currently under construction at CERN for use to attempt “experimental measurements” of neutrinos (which one?) at the Gran Sasso Laboratory in Italy. The picture was sent to the author by a kind colleague at CERN and it is presented here to have an idea of the large funds now feverishly obtained from various governments by organized interests on Einsteinian doctrines in what can only be called their final frantic attempts at salvage the large litany of unverified and unverifiable quark, neutrino and other conjectures needed to preserve the dominance of Einstein doctrines in physics. For an understanding of the potential immense damage to mankind, we suggest the reader to study this monograph up to and including Chapter 12 on the necessity of abandoning these clearly sterile trends to achieve new clean energies. neutron decay. As soon as the “missing energy” was identified, it was instantly used by organized interests in Einsteinian doctrines as evidence of the neutrino hypothesis for the unspoken yet transparent reasons that, in the absence of the neutrino conjectures, Einsteinian doctrines would be grossly inapplicable for the neutron decay. As it is equally well known, the scientific community immediately accepted the neutrino interpretation of the “missing energy” mostly for academic gain, as it must be the case whenever conjectures are adopted without the traditional scientific process of critical examinations. It is easy to see that the neutrino interpretation of the “missing energy” is fundamentally flawed. In fact, the electron in beta decays is negatively charged, while the nucleus is positively charged. Consequently, the electron in beta decays experiences a Coulomb attraction from the original nucleus. 30 RUGGERO MARIA SANTILLI Moreover, such an attraction is clearly dependent on the angle of emission of the electron by a decaying peripheral neutron. The maximal value of the energy occurs for radial emissions of the electron, the minimal value occurs for tangential emissions, and the intermediate value occur for intermediate directions of emissions, resulting in the experimentally detected bell-shaped curve of Figure 1.7. 
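For reference, the 0.782 MeV endpoint quoted above can be checked by a one-line computation from standard rest energies (the mass values below are assumed, not quoted in the source); the quoted figure is recovered when the electron rest energy is also subtracted from the neutron-proton difference.

```python
# Quick arithmetic check (mass values assumed, not from the source) of the
# 0.782 MeV endpoint quoted above for the electron energy in neutron beta decay:
# it equals the neutron-proton rest-energy difference minus the electron rest energy.

m_n = 939.565   # neutron rest energy, MeV (standard value, assumed)
m_p = 938.272   # proton rest energy, MeV (standard value, assumed)
m_e = 0.511     # electron rest energy, MeV (standard value, assumed)

endpoint = m_n - m_p - m_e
print(f"maximum electron energy ~ {endpoint:.3f} MeV")   # ~0.782 MeV
```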
When the calculations are done without political alignments on pre-existing doctrines, it is easy to see that the “missing energy” in beta decays is entirely absorbed by the nucleus via its Coulomb interaction with the emitted electron. Consequently, in beta decays there is no energy at all available for the neutrino conjecture, by reaching in this way a disproof of the conjecture itself at its historical origination. Supporters of the neutrino conjecture are expected to present as counterarguments various counter-arguments on the lack of experimental evidence for the nucleus to acquire said “missing energy.” Before doing so, said supporters are suggested to exercise scientific caution and study the new structure models of the neutron without the neutrino conjecture (Chapter 6), as well as the resulting new structure models of nuclei (Chapter 7) and the resulting new clean energies (Chapter 12). Only then, depending on the strength of their political alignment, they may eventually realize that, in abusing academic authority to perpetrate unproved neutrino conjectures they may eventually be part of real crimes against mankind. The predictable conclusion of this study is that theoretical and experimental research on neutrino and quark conjectures should indeed continue. However, theoretical and experimental research on theories without neutrino and quark conjectures and their new clean energies should be equally supported to prevent a clear suppression of scientific democracy on fundamental needs of mankind, evident problems of scientific accountability, and a potentially severe judgment by posterity. For technical details on the damage caused to mankind by the current lack of serious scientific caution on neutrino conjectures, interested readers should study Volume Ii and inspect the Open Denunciation of the Nobel Foundation for Heading an Organized Scientific Obscurantism available in the web site http://www.scientificethics.org/Nobel-Foundation.htm. 1.2.9 The Scientific Imbalance in Experimental Particle Physics Another central objective of this monograph is to illustrate the existence at the dawn of the third millennium of a scientific obscurantism of unprecedented proportions, caused by the manipulation of experimental data via the use of experimentally unverified and actually unverifiable quark conjectures, neutrino conjectures and other conjectures complemented by a variety of ad hoc parameters HADRONIC MATHEMATICS, MECHANICS AND CHEMISTRY 31 for the unspoken, but transparent and pre-meditated intent of maintaining the dominance of Einsteinian doctrines in physics. At any rate, experimental data are elaborated via the conventional scattering theory that, even though impeccable for electromagnetic interactions among pointlike particles, is fundamentally insufficient for a serious representation of the scattering among extended, nonspherical and hyperdense hadrons (Figure 1.2 and Chapter 3). As a matter of fact, serious scholars and, above all, future historians, should focus their main attention on the fact that the climax of unscientific conduct by organized interests on Einsteinian doctrines occurs primarily in the manipulation of experiments, beginning with the control of the conditions of funding, then following with the control of the conduction of the experiments and, finally, with the control of the theoretical elaboration of the data to make sure that the orchestrated compliance with Einsteinian doctrines occurs at all levels. 
Among an unreassuringly excessive number of cases existing in the literature, some of which are reviewed in Chapter 6, a representative case is that of the Bose-Einstein correlation, in which protons and antiprotons collide at high energy by annihilating each other and forming the so-called "fireball", that, in turn, emits a large number of unstable particles whose final product is a number of correlated mesons (see, e.g., review [7] and Figure 1.7).

The simplest possible case is that of the two-point correlation function

$$ C_2 = \frac{P(p_1, p_2)}{P(p_1) \times P(p_2)}, \qquad (1.2.14) $$

where p_1 and p_2 are the linear momenta of the two mesons and the P's represent their probabilities. By working out the calculations via unadulterated axioms of relativistic quantum mechanics one obtains expressions of the type

$$ C_2 = 1 + A \times e^{-Q_{12}} - B \times e^{-Q_{12}}, \qquad (1.2.15) $$

where A and B are normalization parameters and Q_{12} is the momentum transfer. This expression is dramatically far from representing experimental data, as shown in Chapter 5.

To resolve the problem, supporters of the universal validity of quantum mechanics and special relativity then introduce four arbitrary parameters of unknown physical origin and motivation, called "chaoticity parameters" c_µ, µ = 1, 2, 3, 4, and expand expression (1.2.15) into the form

$$ C_2 = 1 + A \times e^{-Q_{12}/c_1} + B \times e^{-Q_{12}/c_2} + C \times e^{-Q_{12}/c_3} - D \times e^{-Q_{12}/c_4}, \qquad (1.2.16) $$

which expression does indeed fit the experimental data, as we shall see. However, the claim that quantum mechanics and special relativity are exactly valid is a scientific deception, particularly when proffered by experts.

Figure 1.10. A schematic view of the Bose-Einstein correlation originating in proton-antiproton annihilations, for which the predictions of relativistic quantum mechanics from unadulterated first principles are dramatically far from experimental data. In order to salvage the theory and its underlying Einsteinian doctrines, organized interests introduce "four" ad hoc parameters deprived of any physical meaning or origin, and then claim the exact validity of said doctrines. The scientific truth is that these four arbitrary parameters are in reality a direct measurement of the deviation from the basic axioms of relativistic quantum mechanics and special relativity in particle physics.

As we shall see in technical detail in Chapter 5, the quantum axiom of expectation values (needed to compute the probabilities) solely permits expression (1.2.15), since it deals with Hermitian, thus diagonalized, operators of the type

$$ \langle \psi_1 \times \psi_2 | \times P \times | \psi_1 \times \psi_2 \rangle = P_{11} + P_{22}, \qquad (1.2.17) $$

while the representation of a correlation between mesons 1 and 2 necessarily requires a structural generalization of the axiom of expectation values in such a form as to admit off-diagonal elements for Hermitian operators, for instance of the type

$$ \langle \psi_1 \times \psi_2 | \times T \times P \times T \times | \psi_1 \times \psi_2 \rangle = P_{11} + P_{12} + P_{21} + P_{22}, \qquad (1.2.18) $$

where T is a 2 × 2-dimensional nonsingular matrix with off-diagonal elements (and P remains diagonal).
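To make the diagonal versus off-diagonal distinction of Eqs. (1.2.17)-(1.2.18) concrete, the toy computation below (with made-up matrices, purely illustrative and not drawn from the source) shows how dressing a diagonal Hermitian P with a nonsingular T introduces the cross terms P_12 and P_21.

```python
# Toy numerical illustration (matrices are made-up values, not from the source) of
# the difference between Eqs. (1.2.17) and (1.2.18): with a diagonal Hermitian P,
# the ordinary expectation value picks up only diagonal terms, while dressing it
# with a nonsingular T that has off-diagonal elements introduces the cross terms.

import numpy as np

P = np.diag([0.7, 0.3])                 # diagonal Hermitian "probability" operator (toy)
T = np.array([[1.0, 0.4],
              [0.4, 1.0]])              # nonsingular matrix with off-diagonal elements (toy)
psi = np.array([1.0, 1.0]) / np.sqrt(2) # toy two-component state

conventional = psi @ P @ psi            # analogue of Eq. (1.2.17): only P_11, P_22 contribute
dressed      = psi @ (T @ P @ T) @ psi  # analogue of Eq. (1.2.18): cross terms now appear

print(conventional)                     # 0.5  (no cross terms)
print(dressed)                          # differs from 0.5
print(T @ P @ T)                        # nonzero off-diagonal entries play the role of P_12, P_21
```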
The scientific deception occurs because quantum mechanics and special relativity are claimed to be exactly valid for the Bose-Einstein correlation when HADRONIC MATHEMATICS, MECHANICS AND CHEMISTRY 33 experts, to qualify as such, know that the representation requires a structural modification of the basic axiom of expectation values as well as for numerous additional reasons, such as: 1) The Bose-Einstein correlation is necessarily due to contact, nonpotential, nonlocal-integral effects originating in the deep overlapping of the hyperdense charge distributions of protons and antiprotons inside the fireball; 2) The mathematical foundations of quantum mechanics (such as its topology), let alone its physical laws, are inapplicable for a meaningful representation of said nonlocal and nonpotential interactions as outlined in preceding sections; and 3) Special relativity is also inapplicable, e.g., because of the inapplicability of the basic Lorentz and Poincar´e symmetries due to lack of a Keplerian structure, the approximate validity of said theories remaining beyond scientific doubt. Admittedly, there exist a number of semiphenomenological models in the literature capable of a good agreement with the experimental data. Scientific deception occurs when these models are used to claim the exact validity of quantum mechanics and special relativity since the representation of experimental data requires necessary structural departures from basic quantum axioms. Of course, the selection of the appropriate generalization of quantum mechanics and special relativity for an exact representation of the Bose-Einstein correlation is open to scientific debate. Scientific deception occurs when the need for such a generalization is denied for personal gains. As we shall see, relativistic hadronic mechanics provides an exact and invariant representation of the experimental data of the Bose-Einstein correlation at high and low energies via unadulterated basic axioms, by providing in particular a direct representation of the shape of the p − p¯ fireball and its density, while recovering the basic invariant under a broader realization of the Poincar´e symmetry. An in depth investigation of all applications of quantum mechanics and special relativity at large reveals that they have provided an exact andinvariant representation from unadulterated basic axioms of all experimental data of the hydrogen atom, as well as of physical conditions in which the mutual distances of particles is much bigger than the size of the charge distribution (for hadrons) or of the wavepackets of particles (for the case of the electron). 1.2.10 The Scientific Imbalance in Nuclear Physics There is no doubt that quantum mechanics and special relativity permitted historical advances in also nuclear physics during the 20-th century, as illustrated, for instance, by nuclear power plants. However, any claim that quantum mechanics and special relativity are exactly valid in nuclear physics is a scientific deception, particularly when proffered by experts, because of the well known inability of these theories to achieve an exact and invariant representation of numerous nu- 34 RUGGERO MARIA SANTILLI Figure 1.11. 
The first historical experimental evidence on the lack of exact validity of quantum mechanics in nuclear physics was given by data on nuclear magnetic moments that do not follow quantum mechanical predictions, and are instead comprised between certain minimal and maximal values, called the Schmidt Limits [13], without any possible quantum treatment. The additional suppression of the impossibility for the Galilean and Poincar´e symmetries to be exact in nuclear physics due to the lack of a Keplerian center (see next figure), have essentially rendered nuclear physics a religion without a serious scientific process. clear data despite one century of attempts and the expenditure of large public funds. To resolve the insufficiencies, the use of arbitrary parameters of unknown physical origin and motivation was first attempted, semiphenomenological fits were reached and quantum mechanics and special relativity were again claimed to be exact in nuclear physics, while in the scientific reality the used parameters are a direct representation of deviations from the basic axioms of the theories as shown in detail in Chapter 5. Subsequently, when the use of arbitrary parameters failed to achieve credible representations of nuclear data (such as nuclear magnetic moments as indicated below), organized academic interests claimed that “the deviations are resolved by deeper theories such as quark theories”. At that point nuclear physics left the qualification of a true science to become a scientific religion. Besides a plethora of intrinsic problematic aspects or sheer inconsistencies (such as the impossibility for quarks to have gravity mentioned earlier), quark HADRONIC MATHEMATICS, MECHANICS AND CHEMISTRY 35 theories failed to achieve any credible representation even of the spin of individual nucleons, let alone achieve exact representations of experimental data for their bound states. Admittedly, the deviations here considered are at times small. Nevertheless, as we shall see in Chapter 6, small deviations directly imply new clean energies that cannot be even conceived, let alone treated, via quantum mechanics. Therefore, we have a societal duty to conduct serious investigations on broader mechanics specifically conceived for nuclear physics. The first evidence on the lack of exact character of quantum mechanics in nuclear physics dates back to the birth of nuclear physics in the 1930s where it emerged that experimental values of nuclear magnetic moments could not be explained with quantum mechanics, because, starting with small deviations for small nuclei, the deviations then increased with mass, to reach deviations for large nuclei, such as the Zirconium so big to escape any use of unknown parameters “to fix things” (see Figure 1.8). Subsequently, it became clear that quantum mechanics and special relativity could not explain the simplest possible nucleus, the deuteron, despite vast efforts. In fact, quantum mechanics missed about 1% of the deuteron magnetic moment despite all possible relativistic corrections, as well as the questionable assumptions that the ground state of the deuteron is a mixture of various states in a way manifestly against experimental evidence. Next, quantum mechanics and special relativity were unable to represent the spin of the deuteron, an occurrence well known to experts in the field but carefully undisclosed. 
The axioms of quantum mechanics require that the ground state of two particles with spin 1/2 (such as the proton and the neutron) must have spin zero (anti-parallel or singlet coupling), while the case with spin 1 (parallel spins or triplet coupling) is unstable, as a first year graduate student in physics can prove. By contrast, the deuteron has spin 1, thus remaining fundamentally unexplained by quantum mechanics and special relativity to this day.5

5 As we shall see in Chapter 6, the correct interpretation of the spin 1 of the deuteron has implications so deep as to require a revision of the very notion of the neutron.

Additionally, quantum mechanics has been unable to represent the stability of the neutron, its charge radius, and numerous other data.

Perhaps the most distressing, yet generally undisclosed, insufficiency of quantum mechanics and special relativity in nuclear physics has been the failure to understand and represent nuclear forces. Recall that a necessary condition for the applicability of quantum mechanics is that all interactions must be derivable from a potential. The original concept that nuclear forces were of central type soon resulted in being disproved by nuclear reality, thus requiring the addition of non-central, yet still potential, forces. The insufficiency of this addition requested the introduction of exchange, van der Waals, and numerous other potential forces.

Figure 1.12. A visual evidence of the impossibility for quantum mechanics to be exactly valid in nuclear physics: the fact that "nuclei do not have nuclei." Consequently, the Galilean and Poincaré symmetries, as well as nonrelativistic and relativistic quantum mechanics, cannot possibly be exact for the nuclear structure since said symmetries demand the heaviest constituent at the center. The above occurrence establishes the validity of covering symmetries for interior systems without Keplerian centers, which symmetries are at the foundation of the covering hadronic mechanics.

As of today, after about one century of adding new potentials to the Hamiltonian, we have reached the unreassuring representation of nuclear forces via some twenty or more different potentials in the Hamiltonian [13],

$$ H = \sum_{k=1,2,\dots,n} \frac{p_k^2}{2 m_k} + V_1 + V_2 + V_3 + V_4 + V_5 + V_6 + V_7 + V_8 + V_9 + V_{10} + V_{11} + V_{12} + V_{13} + V_{14} + V_{15} + V_{16} + V_{17} + V_{18} + V_{19} + V_{20} + \dots, \qquad (1.2.19) $$

and we still miss a credible understanding and representation of the nuclear force!

It is evident that this process cannot be kept up indefinitely without risking a major condemnation by posterity. The time has long come to stop adding potentials to nuclear Hamiltonians and seek fundamentally new approaches and vistas. In the final analysis, an inspection of nuclear volumes establishes that nuclei are generally composed of nucleons in conditions of partial mutual penetration, as illustrated in Figure 1.9. By recalling that nucleons have the largest density measured in laboratory until now, the belief that all nuclear forces are of action-at-a-distance, potential type, as necessary to preserve the validity of quantum mechanics and special relativity, is pure academic politics deprived of scientific value.

As we shall see in Chapter 7, a central objective of hadronic mechanics is that of truncating the addition of potentials and re-examining instead the nuclear force from its analytic foundations, by first separating potential and nonpotential forces, and then examining each of them in detail.
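A schematic rendering of the additive structure of Eq. (1.2.19) above is sketched below; the potential terms and numbers are placeholders standing in for the twenty-plus phenomenological potentials, not actual nuclear potentials.

```python
# Schematic rendering (placeholders only, not actual nuclear potentials) of the
# structure of Eq. (1.2.19): a kinetic term per nucleon plus an open-ended sum of
# phenomenological potential terms added over the years.

def hamiltonian(momenta, masses, potential_terms, coordinates):
    """H = sum_k p_k^2 / (2 m_k) + V_1 + V_2 + ..., evaluated at one configuration."""
    kinetic = sum(p**2 / (2.0 * m) for p, m in zip(momenta, masses))
    potential = sum(V(coordinates) for V in potential_terms)
    return kinetic + potential

# Toy usage: two nucleons and two placeholder potentials standing in for V_1 ... V_20 ...
V_central = lambda r: -50.0 if r[0] < 2.0 else 0.0   # crude square-well stand-in (MeV, fm)
V_extra   = lambda r: 0.0                            # placeholder for every further V_i
print(hamiltonian(momenta=[100.0, 100.0], masses=[938.9, 938.9],
                  potential_terms=[V_central, V_extra], coordinates=[1.5]))
```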
In summary, the lack of exact character of quantum mechanics and special relativity in nuclear physics is beyond scientific doubt. The open scientific issue is the selection of the appropriate generalization, but not its need. As we shall see in Chapter 6, the covering hadronic mechanics and isospecial relativity resolve the fundamental open problems of nuclear physics by permitting the industrial development of new clean energies based on light natural and stable elements without the emission of dangerous radiations and without the release of radioactive waste. 1.2.11 The Scientific Imbalance in Superconductivity The condition of superconductivity in the 20-th century can be compared to that of atomic physics prior to the representation of the structure of the atom. Recall that individual electrons cannot achieve a superconducting state because their magnetic fields interact with electromagnetic fields of atoms by creating in this way what we call electric resistance. Superconductivity is instead reached by deeply correlated-bonded pairs of electrons in singlet couplings, called Cooper pairs. In fact, these pairs have an essentially null total magnetic field (due to the opposite orientations of the two fields), resulting in a substantial decrease of electric resistance. There is no doubt that quantum mechanics and special relativity have permitted the achievement of a good description of an “ensemble” of Cooper pairs, although each Cooper pair is necessarily abstracted as a point, the latter condition being necessary from the very structure of the theories. However, it is equally well known that quantum mechanics and special relativity have been unable to reach a final understanding and representation of the structure of one Cooper pair, trivially, because electrons repel each other according to the fundamental Coulomb law. The failure of basic axioms of quantum mechanics and special relativity to represent the attractive force between the two identical electrons of the Cooper pairs motivated the hypothesis that the attraction is caused by the exchange of a new particle called phonon. However, phonons certainly exist in sounds, but they have found no verification at all in particle physics, thus remaining purely conjectural to this day. In reality, as we shall see in Chapter 7, the interactions underlying the Cooper pairs are of purely contact, nonlocal and integral character due to the mutual penetration of the wavepackets of the electrons, as depicted in Figure 1.10. As 38 RUGGERO MARIA SANTILLI such, they are very similar to the interactions responsible for Pauli’s exclusion principle in atomic structures. Under these conditions, the granting of a potential energy, as necessary to have phonon exchanges, is against physical evidence, as confirmed by the fact that any representation of Pauli’s exclusion principle via potential interactions cause sizable deviations from spectral lines. Therefore, the belief that quantum mechanics and special relativity provide a complete description of superconductivity is pure academic politics deprived of scientific content. Superconductivity is yet another field in which the exact validity of quantum mechanics and special relativity has been stretched in the 20-th century well beyond its limit for known political reasons. At any rate, superconductivity has exhausted all its predictive capacities, while all advances are attempted via empirical trials and errors without a guiding theory. 
As it was the case for particle and nuclear physics, the lack of exact character of quantum mechanics and special relativity in superconductivity is beyond doubt. Equally beyond doubt is the need for a deeper theory. As we shall see in Chapter 7, the covering hadronic mechanics and isospecial relativity provide a quantitative representation of the structure of the Cooper pair in excellent agreement with experimental data, and with basically novel predictive capabilities, such as the industrial development of a new electric current, that is characterized by correlated electron pairs in single coupling, rather than electrons. 1.2.12 The Scientific Imbalance in Chemistry There is no doubt that quantum chemistry permitted the achievement of historical discoveries in the 20-th century. However, there is equally no doubt that the widespread assumption of the exact validity of quantum chemistry caused a large scientific imbalance with vast implications, particularly for the alarming environmental problems. After about one century of attempts, quantum chemistry still misses a historical 2% of molecular binding energies when derived from axiomatic principles without ad hoc adulterations (see below). Also, the deviations for electric and magnetic moments are embarrassing not only for their numerical values, but also because they are wrong even in their sign [14], not to mention numerous other insufficiencies outlined below. It is easy to see that the reason preventing quantum chemistry from being exactly valid for molecular structures is given by contact, nonlocal-integral and nonpotential interactions due to deep wave-overlappings in valence bonds that, as such, are beyond any realistic treatment by local-differential-potential axioms, such as those of quantum chemistry (Figure 1.10). HADRONIC MATHEMATICS, MECHANICS AND CHEMISTRY 39 Figure 1.13. A schematic view of the fundamental conditions studied in this monograph, the deep overlapping of the extended wavepackets of electrons in valence bonds and Cooper pairs according to a singlet coupling as required by Pauli’s principle. Recall that, for quantum mechanics and special relativity, electrons are points and, therefore, the conditions of this figure have no meaning at all. However, said point character can only be referred to the charge structure of the electron, since “point-like wavepackets” do not exist in nature. For the covering hadronic mechanics, superconductivity and chemistry, the point-like charge structure of the electrons remains, with the additional presence of the contact nonpotential interactions due to the overlapping of the extended wavepackets represented via a nonunitary structure. As shown in Chapters 8, 9 and 11, the treatment of the latter interactions via hadronic mechanics and chemistry has permitted the achievement, for the first time in scientific history, of an “exact and invariant” representations of molecular data from first axioms without ad hoc adulterations. Recall that quantum mechanics achieved an exact and invariant representation of all experimental data of one hydrogen atom. Nevertheless, quantum mechanics and chemistry miss 2% of the binding energy of two hydrogen atoms coupled into the hydrogen molecule (Figure 1.11). The only possible explanation is that in the hydrogen atom all interactions are of action-at-a-distance potential type due to the large mutual distances of the constituents with respect to the size of their wavepackets. 
By contrast, in the hydrogen molecule we have the mutual penetration of the wavepackets of valence electrons with the indicated contact, nonlocal-integral and nonpotential interactions at short mutual distances that are absent in the structure of the hydrogen atom. Alternatively and equivalently, the nuclei of the two hydrogen atoms of the H2 molecule cannot possibly be responsible for said 2% deviation. Therefore, the deviation from basic axioms can only originate in the valence bond. 40 RUGGERO MARIA SANTILLI Figure 1.14. A first clear evidence of the lack of exact validity of quantum chemistry. The top view depicts one hydrogen atom for which quantum mechanics resulted in being exactly valid. The bottom view depicts two hydrogen atoms coupled into the H2 molecule in which case quantum chemistry has historically missed a 2% of the binding energy when applied without adulteration of basic axioms “to fix things” (such as via the used of the screening of the Coulomb law and then claim that quantum chemistry is exact). Since nuclei do not participate in the molecular bond, the origin of the insufficiency of quantum mechanics and chemistry rests in the valence bond. By no means the above insufficiencies are the only ones. Quantum chemistry is afflicted by a true litany of limitations, insufficiencies or sheer inconsistencies that constitute the best kept secret of the chemistry of the 20-th century because known to experts (since they have been published in refereed journals), but they remain generally ignored evidently for personal gains. We outline below the insufficiencies of quantum chemistry for the simplest possible class of systems, those that are isolated from the rest of the universe, thus verifying conventional conservation laws of the total energy, total linear momentum, etc., and are reversible (namely, their time reversal image is as physical as the original system). The most representative systems of the above class are given by molecules, here generically defined as aggregates of atoms under a valence bond. Despite undeniable achievements, quantum chemical models of molecular structures have the following fundamental insufficiencies studied in detail in monograph [11]: HADRONIC MATHEMATICS, MECHANICS AND CHEMISTRY 41 Figure 1.15. A schematic view of the fact that the total Coulomb force among the atoms of a molecular structure is identically null. As a consequence, conventional Coulomb interactions cannot provide credible grounds for molecular bonds. At the same time, existing chemical conjectures, such as the exchange and van der Waals forces, are weak, as known from nuclear physics. These facts establish that the chemistry of the 20-th century is like nuclear physics before the discovery of the strong interactions, because chemistry missed the identification of an attractive force sufficiently strong to represent molecular structure. As we shall see in Chapter 8, hadronic chemistry will indeed provide, for the first time in scientific history, the numerical identification of the missed “attractive strong attractive valence force” as being precisely of contact, nonlocal and nonpotential type. The achievement of an exact representation of molecular data is then consequential. 1: Quantum chemistry lacks a sufficiently strong molecular binding force. After 150 years of research, chemistry has failed to identify to this day the attractive force needed for a credible representation of valence bonds. 
In the absence of such an attractive force, names such as "valence" are pure nomenclatures without quantitative meaning. To begin, the average of all Coulomb forces among the atoms constituting a molecule is identically null. As an example, the currently used Schrödinger equation for the H2 molecule is given by the familiar expression [15],

\left( -\frac{\hbar^2}{2\mu_1}\nabla_1^2 - \frac{\hbar^2}{2\mu_2}\nabla_2^2 - \frac{e^2}{r_{1a}} - \frac{e^2}{r_{2a}} - \frac{e^2}{r_{1b}} - \frac{e^2}{r_{2b}} + \frac{e^2}{R} + \frac{e^2}{r_{12}} \right)|\psi\rangle = E\,|\psi\rangle, \qquad (1.2.20)

which equation contains the Coulomb attraction of each electron by its own nucleus, the Coulomb attraction of each electron by the nucleus of the other atom, the Coulomb repulsion of the two electrons, and the Coulomb repulsion of the two protons. It is easy to see that, in semiclassical average, the two attractive forces of each electron toward the nucleus of the other atom are compensated by the average of the two repulsive forces between the electrons themselves and between the protons, under which Eq. (1.2.20) reduces to two independent neutral hydrogen atoms without attractive interaction, as depicted in Figure 1.15 (a rough numerical illustration of this cancellation is sketched after point 2 below),

\left[ \left( -\frac{\hbar^2}{2\mu_1}\nabla_1^2 - \frac{e^2}{r_{1a}} \right) + \left( -\frac{\hbar^2}{2\mu_2}\nabla_2^2 - \frac{e^2}{r_{2b}} \right) \right]|\psi\rangle = E\,|\psi\rangle. \qquad (1.2.21)

In view of the above occurrence, quantum chemistry tries to represent molecular bonds via exchange, van der Waals and other forces [15]. However, the latter forces were historically introduced for nuclear structures, in which they are known to be very weak, thus being insufficient to provide a true representation of molecular bonds. It is now part of history that, due precisely to the insufficiencies of exchange, van der Waals and other forces, nuclear physicists were compelled to introduce the strong nuclear force. As an illustration, calculations show that, under the currently assumed molecular bonds, the molecules of a three-leaf clover should be decomposed into individual atomic constituents by a weak wind of the order of 10 miles per hour. To put it in a nutshell, after about one century of research, quantum chemistry still misses, in molecular structures, the equivalent of the strong force in nuclear structures. As we shall see in Chapter 8, one of the objectives of hadronic chemistry is precisely to introduce the missing force, today known as the strong valence force, that is, firstly, ATTRACTIVE, secondly, sufficiently STRONG, and, thirdly, INVARIANT. The exact and invariant representation of molecular data will then be a mere consequence.

2: Quantum chemistry admits an arbitrary number of atoms in the hydrogen, water and other molecules. This inconsistency is proved beyond scientific doubt by the fact that the exchange, van der Waals, and other forces used in current molecular models were conceived in nuclear physics for the primary purpose of admitting a large number of constituents. When the same forces are used for molecular structures, they also admit an arbitrary number of constituents. As specific examples, when they are applied to the structure of the hydrogen or water molecule, any graduate student in chemistry can prove that, under exchange, van der Waals and other forces of nuclear type, the hydrogen, water and other molecules admit an arbitrary number of hydrogen atoms (see Figure 1.16). Rather than explaining why nature has selected the molecules H2 and H2O as the sole possible ones, current molecular models admit "molecules" of the type H5, H23, H7O, H2O121, H12O15, etc., in dramatic disagreement with experimental evidence.
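The semiclassical cancellation invoked for the reduction of Eq. (1.2.20) to Eq. (1.2.21) can be made plausible with a rough Monte Carlo estimate. The sketch below samples each electron from an undistorted hydrogen 1s density centered on its own nucleus (a simplifying assumption: no exchange, polarization or correlation) and averages the four "cross" Coulomb terms; the internuclear distance R = 1.4 bohr and all other numerical details are purely illustrative and are not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
R = 1.4                                  # internuclear distance in bohr (illustrative value)
Ra = np.array([0.0, 0.0, 0.0])
Rb = np.array([R, 0.0, 0.0])

def sample_1s(center, n):
    """Sample points from a hydrogen 1s density centered at `center` (atomic units)."""
    # the 1s radial density 4 r^2 exp(-2r) is a Gamma(shape=3, scale=1/2) distribution
    r = rng.gamma(shape=3.0, scale=0.5, size=n)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return center + r[:, None] * v

r1 = sample_1s(Ra, N)                    # electron 1 kept on atom a
r2 = sample_1s(Rb, N)                    # electron 2 kept on atom b
dist = lambda x, y: np.linalg.norm(x - y, axis=1)

intra = -1.0 / dist(r1, Ra) - 1.0 / dist(r2, Rb)        # each electron with its own nucleus
cross = (-1.0 / dist(r1, Rb) - 1.0 / dist(r2, Ra)       # attraction by the other nucleus
         + 1.0 / dist(r1, r2) + 1.0 / R)                # electron-electron and proton-proton repulsion

print("intra-atomic Coulomb energy (hartree):", intra.mean())   # about -2
print("net cross Coulomb energy    (hartree):", cross.mean())   # small overlap residual, near zero
```

Under these assumptions the intra-atomic terms average to about -2 hartree, while the net cross term comes out close to zero (of the order of 10^-3 to 10^-2 hartree, within Monte Carlo noise), which is the sense in which the text speaks of a null average Coulomb force among neutral atoms.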
3: Quantum chemistry has been unable to explain the correlation of valence electrons solely into pairs. Experimental evidence clearly establishes that valence correlations only occur between electron pairs in singlet coupling. By contrast, another known insufficiency of quantum chemistry is its intrinsic inability to restrict correlations to valence pairs. This insufficiency is then passed on to orbital theories, which work well at the semiempirical level but remain afflicted by yet unresolved problems, eventually resulting in deviations of the predictions of the theory from experimental data that generally grow with the complexity of the molecule considered. The inability to restrict correlations to valence pairs also provides an irrefutable additional confirmation that quantum chemistry predicts an arbitrary number of constituents in molecular structures. As we shall see in Chapter 8, thanks to the advent of the new strong valence bond, the covering hadronic chemistry does indeed restrict valence bonds strictly and solely to electron pairs. The resolution of inconsistency 2 will then be a mere consequence.

4: The use in quantum chemistry of "screened Coulomb potentials" violates basic quantum principles. The inability of quantum chemistry to achieve an exact representation of binding energies stimulated the adulteration of the basic Coulomb law into the so-called screened Coulomb law of the type

F = \pm f(r)\,\frac{e^2}{r}, \qquad (1.2.22)

which did indeed improve the representation of experimental data. However, the Coulomb law is a fundamental invariant of quantum mechanics, namely, the law remains invariant under all possible unitary transforms,

F = \pm\frac{e^2}{r} \;\rightarrow\; U\times\left(\pm\frac{e^2}{r}\right)\times U^{\dagger} = \pm\frac{e^2}{r}, \qquad (1.2.23a)

U\times U^{\dagger} = I. \qquad (1.2.23b)

Therefore, any structural deviation from the Coulomb law implies deviations from the basic quantum axioms. It then follows that the only possibility of achieving screened Coulomb laws is via the use of nonunitary transforms of the type

F = \pm\frac{e^2}{r} \;\rightarrow\; W\times\left(\pm\frac{e^2}{r}\right)\times W^{\dagger} = \pm\,e^{A\times r}\times\frac{e^2}{r}, \qquad (1.2.24a)

W\times W^{\dagger} = e^{A\times r} \neq I. \qquad (1.2.24b)

Therefore, by their very conception, screened Coulomb laws imply exiting from the class of equivalence of quantum chemistry. Despite that, organized academic interests have continued to claim that screened Coulomb laws belong to quantum chemistry, thus exiting the boundaries of science.

Figure 1.16. A schematic view of the fact that quantum chemistry predicts an arbitrary number of atoms in molecules, because the exchange, van der Waals, and other bonding forces used in chemistry were identified in nuclear physics for an arbitrary number of constituents. Consequently, quantum chemistry is basically unable to explain why nature has selected the molecules H2, H2O, CO2, etc. as the sole possible molecular structures, and why other structures such as H5, H23, H7O, HO21, H12O15, etc., cannot exist. As we shall see in Chapter 8, the "strong valence force" permitted by hadronic chemistry can only occur among "pairs" of valence electrons, thus resolving this historical problem in a quantitative way.

Irrespective of the above, a first-year graduate student in chemistry can prove that screened Coulomb laws cause the abandonment of the very notion of quantum in favor of the continuous emission or absorption of energy. In fact, quantized emissions and absorptions of photons crucially depend on the existence of quantized orbits that, in turn, solely exist for unadulterated Coulomb potentials, as is well known.
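The role played by the nonunitary dressing in Eqs. (1.2.23)-(1.2.24) can be illustrated, in a purely scalar caricature that ignores operator ordering, with a short symbolic check: a pure phase (the commuting analogue of a unitary transform) leaves the Coulomb law untouched, while a real exponential factor with W W† = e^{Ar} produces exactly a screened law of type (1.2.22). The symbols below are generic and carry no physical values.

```python
import sympy as sp

e, r, A = sp.symbols('e r A', positive=True)
theta = sp.symbols('theta', real=True)
V = e**2 / r                                    # unadulterated Coulomb law

U = sp.exp(sp.I * theta)                        # scalar analogue of a unitary transform: U*conj(U) = 1
W = sp.exp(A * r / 2)                           # scalar analogue of a nonunitary transform: W*conj(W) = exp(A*r)

print(sp.simplify(U * V * sp.conjugate(U)))     # e**2/r           -> Coulomb law left invariant
print(sp.simplify(W * V * sp.conjugate(W)))     # e**2*exp(A*r)/r  -> a "screened" Coulomb law appears
print(sp.simplify(W * sp.conjugate(W)))         # exp(A*r) != 1    -> the transform is indeed nonunitary
```

For A tending to zero, or for any screening confined to very short distances as discussed next, the conventional Coulomb law is recovered.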
This insufficiency establishes the need to generalize quantum chemistry into a covering theory, since the Coulomb law is indeed insufficient to represent molecular data. Rather than adapting a theory to adulterated basic axioms, it is scientifically more appropriate to build a new theory based on the needed broader axioms. As we shall see in Chapter 8, the covering hadronic chemistry has been conceived to have a nonunitary structure as an evident necessary condition for novelty. In so doing, hadronic chemistry naturally admits all infinitely possible screened Coulomb laws of type (1.2.22). However, such screenings are solely admitted in the nonlocal-integral region of deep wave-overlapping of the valence electrons, which is of the order of 1 F = 10^-13 cm, while the conventional Coulomb law is recovered automatically for all distances greater than 1 F. This conception permits the achievement of an exact representation of molecular binding energies while preserving in full the quantum structure of the individual atoms.

5: Quantum chemistry cannot provide a meaningful representation of thermochemical reactions. The missing 2% in the representation of binding energies appears deceptively small, because it corresponds to about 1,000 Kcal/mole, while an ordinary thermochemical reaction (such as the formation of the water molecule) involves an average of 50 Kcal/mole. No scientific calculation can be conducted when the error is about twenty times the quantity to be computed.6 As we shall see in Chapter 8, our covering hadronic chemistry does indeed permit exact thermochemical calculations, because it has achieved exact representations of molecular characteristics.

6: Computer usage in quantum chemical calculations requires excessively long periods of time. This additional, well known insufficiency is notoriously due to the slow convergence of conventional quantum series, an insufficiency that persists to this day despite the availability of powerful computers. As we shall also see in Chapter 8, our covering hadronic chemistry resolves this additional insufficiency as well, because the mechanism permitting the exact representation of molecular characteristics implies a fast convergent lifting of the conventional, slowly convergent series.

7: Quantum chemistry predicts that all molecules are paramagnetic. This inconsistency is a consequence of the most rigorous discipline of the 20-th century, quantum electrodynamics, which establishes that, under an external magnetic field, the orbits of peripheral atomic electrons must be oriented in such a way as to offer a magnetic polarity opposite to that of the external field (a polarization that generally occurs via the transition from a three-dimensional to a toroidal distribution of the orbitals). According to quantum chemistry, the atoms belonging to a molecule preserve their individuality. Consequently, quantum electrodynamics predicts that the peripheral atomic electrons of a molecule must acquire polarized orbits under an external magnetic field.

6 The author received a request from a U. S. public company to conduct paid research on certain thermochemical calculations.
Upon discovering that the calculations had to be based on quantum chemistry, due to the political need of the company to be aligned with organized academic interests, the author refused the research contract on the grounds that it would constitute a fraud of public funds, given the excessively large error of all thermochemical calculations based on quantum chemistry.

Figure 1.17. A schematic view of the prediction by quantum chemistry that water is paramagnetic, in dramatic disagreement with experimental evidence. In fact, quantum chemistry does not restrict the correlation of valence electrons to pairs. As a result, the individual valence electrons of the water molecule remain essentially independent. Quantum electrodynamics then demands the capability of polarizing all valence electrons under an external magnetic field, resulting in the net magnetic polarity of this figure and in the consequential paramagnetic character of the water (as well as of all other) molecules. As we shall see in Chapter 8, hadronic chemistry resolves this additional historical problem because its "strong valence force" deeply correlates valence electron pairs, thus permitting a global polarization of a molecule only in special cases, such as those with unbound electrons.

As a result, quantum chemistry predicts that the application of an external magnetic field to the hydrogen H-H, water H-O-H and other molecules implies the acquisition of a net total magnetic polarity opposite to the external field, H↑-H↑, H↑-O↑-H↑, etc., a polarization that is in dramatic disagreement with experimental evidence. The above inconsistency can also be derived from the inability of quantum chemistry to restrict the correlation solely to valence pairs. By contrast, the strong valence bond of the covering hadronic chemistry eliminates the independence of the individual atoms in a molecular structure, thereby correctly representing the diamagnetic or paramagnetic character of substances.

No serious advance in chemistry can occur without, firstly, the admission of the above serious insufficiencies and/or inconsistencies, secondly, their detailed study, and, thirdly, their resolution via a covering theory. Most importantly, we shall show in Chapter 10 that no resolution of the now alarming environmental problems is possible without a resolution of the above serious inconsistencies of quantum chemistry.

1.2.13 Inconsistencies of Quantum Mechanics, Superconductivity and Chemistry for Underwater Electric Arcs

Submerged electric arcs among carbon-based electrodes are known to permit the production of cost-competitive and clean burning gaseous fuels via a highly efficient process, since the primary source of energy is the combustion of carbon by the arc, the electric energy used by the arc being comparatively small. As such, submerged electric arcs have particular relevance for the main objectives of hadronic mechanics, as studied in Chapter 10 (see also monograph [11]).
An understanding of the motivations for the construction of hadronic mechanics, superconductivity and chemistry requires the knowledge that, contrary to popular beliefs, submerged electric arcs provide undeniable evidence of the following deviations from established doctrines:

1) When the liquid feedstock is distilled water and the electrodes are made of essentially pure graphite, quantum mechanics and chemistry predict that the produced gas is composed of 50% H2 and 50% CO. However, CO is combustible in atmosphere and its exhaust is given by CO2. Therefore, in the event said prediction were correct, the combustion exhaust of the gas should contain about 42% CO2. Numerous measurements conducted by an EPA-accredited automotive laboratory [11] have established that the combustion exhaust contains about 4%-5% CO2, without an appreciable percentage of unburned CO. Consequently, the error of quantum mechanics and chemistry is about ten times the measured value, the error being in defect.

2) For the same type of gas produced from distilled water and carbon electrodes, quantum mechanics and chemistry predict that the thermochemical processes underlying the formation of the gas release about 2,250 British Thermal Units (BTU) per standard cubic foot (scf) (see Ref. [11]). In reality, systematic measurements have established that the heat produced is of the order of 250 BTU/scf. Therefore, the error of quantum mechanics and chemistry is again of the order of ten times the measured quantity, the error being this time in excess. Note that deviation 1) is fully compatible with deviation 2). In fact, the primary source of heat is the production of CO. Therefore, the production of 1/10-th of the predicted heat confirms that the CO content is about 1/10-th of the value predicted by quantum mechanics and chemistry.

3) Again for the case of the gas produced from distilled water and graphite electrodes, quantum mechanics and chemistry predict that no oxygen is present in the combustion exhaust, since the prediction is that, under the correct stoichiometric ratio between atmospheric oxygen and the combustible gas, the exhaust is composed of 50% H2O and 50% CO2. In reality, independent measurements conducted by an EPA-accredited automotive laboratory have established that, under the conditions here considered, the exhaust contains about 14% of breathable oxygen. Therefore, in this case the error of quantum mechanics and chemistry is about fourteen times the measured value.

4) Quantum mechanics and chemistry predict that the H2 component of the above considered gas has the conventional specific weight of 2.016 atomic mass units (amu). Numerous measurements conducted in various independent laboratories have established instead that the hydrogen content of said gas has a specific weight of 14.56 amu, thus implying a seven-fold deviation from the prediction of conventional theories.

5) Numerous additional deviations from the predictions of quantum mechanics and chemistry also exist, such as the fact that the gas has a variable energy content, a variable specific weight, and a variable Avogadro number, as shown in Chapters 8 and 10, while conventional gases have constant energy content, specific weight and Avogadro number, as is well known.
Above all, the most serious deviations for submerged electric arcs occur with respect to Maxwell's electrodynamics, to such an extent that any industrial or governmental research in the field based on Maxwell's electrodynamics is a misuse of corporate or public funds. At this introductory level we restrict ourselves to indicating the axial attractive force between the electrodes and other features structurally incompatible with Maxwell's electrodynamics. Needless to say, structural incompatibilities with Maxwell's electrodynamics automatically imply structural incompatibilities with special relativity, due to the complete symbiosis of the two theories. Note the re-emergence of the distinction between exterior and interior problems also in regard to Maxwell's electrodynamics. In fact, an arc in vacuum constitutes an exterior problem, while an arc within a liquid constitutes an interior problem. The impossibility of conducting serious industrial research via Maxwell's electrodynamics for submerged electric arcs can then be derived from the inapplicability of special relativity under the conditions considered.

The departures also extend to quantum superconductivity, because the initiation of a submerged electric arc causes the collapse of the electric resistance from a very high value (as is the case for distilled water) down to fractional ohms. As a consequence, a submerged electric arc has features reminiscent of superconductivity, but the arc occurs at about 10,000 times the maximal temperature predicted by quantum superconductivity. The limitations of the theory are then beyond credible doubt, the only open scientific issue being the selection of the appropriate generalization.

In summary, under the above deviations, any use of quantum mechanics, superconductivity and chemistry for the study of submerged electric arcs exits the boundaries of scientific ethics and accountability. The departures of experimental evidence from the old doctrines are just too big to be removed via arbitrary parameters "to fix things", thus mandating the construction of suitable covering theories.

1.3 THE SCIENTIFIC IMBALANCE CAUSED BY IRREVERSIBILITY

1.3.1 The Scientific Imbalance in the Description of Natural Processes

Numerous basic events in nature, including particle decays, such as

n \to p^{+} + e^{-} + \bar{\nu}, \qquad (1.3.1)

nuclear transmutations, such as

C(6, 12) + H(1, 2) \to N(7, 14), \qquad (1.3.2)

and chemical reactions, such as

H_2 + \tfrac{1}{2}\,O_2 \to H_2O, \qquad (1.3.3)

are called irreversible when their images under time reversal, t \to -t, are prohibited by causality and other laws. Systems are instead called reversible when their time reversal images are as causal as the original ones, as is the case for planetary and atomic structures when considered isolated from the rest of the universe.

Yet another large scientific imbalance of the 20-th century has been the treatment of irreversible systems via formulations developed for reversible systems, such as Lagrangian and Hamiltonian mechanics, quantum mechanics and chemistry, and special relativity. In fact, all these formulations are strictly reversible, in the sense that all their basic axioms are fully reversible in time, thereby causing limitations in virtually all branches of science.
The imbalance was compounded by the use of the truncated Lagrange and Hamilton equations (see Section 1.2.2), based on conventional Lagrangians or Hamiltonians,

L = \sum_{k=1,2,\ldots,n} \tfrac{1}{2}\, m_k\, v_k^2 - V(r), \qquad (1.3.4a)

H = \sum_{a=1,2,\ldots,n} \frac{p_a^2}{2\, m_a} + V(r), \qquad (1.3.4b)

in the full awareness that all known potentials (such as those for electric, magnetic, gravitational and other interactions) and, therefore, all known Hamiltonians, are reversible. This additional scientific imbalance was dismissed by academicians with vested interests in reversible theories via unsubstantiated statements, such as "irreversibility is a macroscopic occurrence that disappears when all bodies are reduced to their elementary constituents". The underlying belief is that mathematical and physical theories that are so effective for the study of one electron in a reversible orbit around a proton are equally effective for the study of the same electron when in irreversible motion in the core of a star, with local nonconservation of energy, angular momentum and other characteristics. Along these lines, a vast literature grew during the 20-th century on the dream of achieving compatibility of quantum mechanics with the evident irreversibility of nature at all levels, most of which studies were of manifestly political character due to the strict reversibility of all methods used for the analysis. These academic beliefs have been disproved by the following:

THEOREM 1.3.1 [10b]: A classical irreversible system cannot be consistently decomposed into a finite number of elementary constituents all in reversible conditions and, vice versa, a finite collection of elementary constituents all in reversible conditions cannot yield an irreversible macroscopic ensemble.

The property established by the above theorem dismisses all nonscientific beliefs on irreversibility and identifies the real need: the construction of formulations that are structurally irreversible, that is, irreversible for all known reversible potentials, Lagrangians or Hamiltonians, and applicable at all levels of study, from Newtonian mechanics to second quantization.

The historical origin of the above imbalance can be outlined as follows. One of the most important teachings in the history of science is that of Lagrange [2], Hamilton [3] and Jacobi [4], who pointed out that irreversibility originates from contact interactions not representable with a potential, for which reason they formulated their equations with external terms, as in Eqs. (1.2.3). In the planetary and atomic structures there is no need for external terms, since all acting forces are of potential type. In fact, these systems admit an excellent approximation as being made up of massive points moving in vacuum without collisions (exterior dynamical problems). In these cases, the historical analytic equations were "truncated" via the removal of the external terms. In view of the successes of the planetary and atomic models, the main scientific development of the 20-th century was restricted to the "truncated analytic equations", without any visible awareness that they are not the equations conceived by the founders of analytic mechanics (a minimal numerical illustration of the role of the external terms is sketched after Figure 1.18 below).

Figure 1.18. A pictorial view of the impossibility for quantum mechanics to be exactly valid in nature: the growth of a seashell. In fact, quantum mechanics is structurally reversible, in the sense that all its axioms, geometries, symmetries, potentials, etc., are fully reversible in time, while the growth of a seashell is structurally irreversible. The need for an irreversible generalization of quantum mechanics is then beyond credible doubt, as studied in detail in Chapter 4.
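The difference between the truncated analytic equations and the historical equations with external terms can be illustrated with a minimal numerical sketch (all parameter values are arbitrary and purely illustrative): for a one-dimensional oscillator, the truncated Hamilton equations conserve H and are time-reversal invariant, while adding a contact, nonpotential force -γv as an external term makes dH/dt = -γv² ≤ 0, i.e. the dynamics becomes structurally irreversible even though the Hamiltonian itself remains the conventional, reversible one.

```python
import numpy as np

m, k, gamma, dt, steps = 1.0, 1.0, 0.3, 1e-3, 20000   # arbitrary illustrative values
H = lambda x, p: p**2 / (2 * m) + k * x**2 / 2        # conventional (reversible) Hamiltonian

def evolve(external_term=True):
    x, p = 1.0, 0.0
    for _ in range(steps):
        x += dt * p / m                               # dx/dt =  dH/dp
        f_ext = -gamma * p / m if external_term else 0.0
        p += dt * (-k * x + f_ext)                    # dp/dt = -dH/dx + external (nonpotential) term
    return H(x, p)

print("truncated equations, H(final):", evolve(external_term=False))  # ~0.5, conserved up to integrator error
print("with external term,  H(final):", evolve(external_term=True))   # well below 0.5, energy monotonically lost
```

The external term is not derivable from any potential, so it cannot be absorbed into L or H; this is a toy analogue of the contact interactions discussed in the text, not a model of any specific physical system.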
Therefore, the origin of the scientific imbalance on irreversibility is the general dismissal, by the scientists of the 20-th century, of the historical teaching of Lagrange, Hamilton and Jacobi, as well as academic interests vested in the truncated analytic equations and the theories built on them, such as quantum mechanics and special relativity. In fact, as outlined earlier, the use of external terms in the basic analytic equations causes the inapplicability of the mathematics underlying said theories. It then follows that no serious scientific advance on irreversible processes can be achieved without first identifying a structurally irreversible mathematics and then the compatible generalizations of conventional theories, a task studied in detail in Chapter 4. As we shall see, contrary to popular beliefs, the origin of irreversibility lies at the ultimate level of nature, that of elementary particles in interior conditions. Irreversibility then propagates all the way to the macroscopic level, so as to avoid the inconsistency of Theorem 1.3.1.

1.3.2 The Scientific Imbalance in Astrophysics and Cosmology

Astrophysics and cosmology are new branches of science that saw their birth in the 20-th century, with a rapid expansion and majestic achievements. Yet, these new fields soon fell prey to organized interests in established doctrines, with particular reference to quantum mechanics, special relativity and gravitation, resulting in yet another scientific imbalance of large proportions.

To begin, all interior planetary or astrophysical problems are irreversible, as shown by the very existence of entropy and by known thermodynamical laws, studiously ignored by supporters of Einsteinian doctrines. This feature alone is sufficient to cause a scientific imbalance of historical proportions because, as stressed above, irreversible systems cannot be credibly treated with reversible theories. Also, quantum mechanics has been shown in the preceding sections to be inapplicable to all interior astrophysical and gravitational problems for reasons other than irreversibility. Any reader with an independent mind can then see the limitations of astrophysical studies of the interior of stars, galaxies and quasars based on a theory that is intrinsically inapplicable to the problems considered.

The imposition of special relativity as a condition for virtually all relativistic astrophysical studies of the 20-th century caused an additional scientific imbalance. To illustrate its dimensions and implications, it is sufficient to note that all calculations of astrophysical energies have been based on the relativistic mass-energy equivalence

E = m \times c^2, \qquad (1.3.5)

namely, on the philosophical belief that the speed of light c is the same for all conditions existing in the universe (the well known "universal constancy of the speed of light"). As indicated earlier, this belief has been disproved by clear experimental evidence, particularly for interior astrophysical media, in which the maximal causal speed has resulted in being C = c/n \gg c, with n \ll 1, in which case the correct calculation of astrophysical energies is given by the equivalence principle of the isospecial relativity (see Chapter 3)

E = m \times C^2 = m \times c^2/n^2 \gg m \times c^2, \qquad n \ll 1, \qquad (1.3.6)

thus invalidating current views on the "missing mass" and other issues.
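As a purely numerical illustration of formula (1.3.6) — with a value of n that is hypothetical, chosen only for the arithmetic and not taken from any measurement in the text — an interior index of refraction n = 0.5 would already double the maximal causal speed and quadruple the energy equivalence with respect to E = mc².

```python
c = 2.998e8      # speed of light in vacuum, m/s
m = 1.0          # one kilogram of converted mass, for illustration only
n = 0.5          # hypothetical interior index; the text only requires n << 1 in extreme media

E_conventional = m * c**2
E_iso = m * c**2 / n**2                                 # Eq. (1.3.6)
print(E_conventional, E_iso, E_iso / E_conventional)    # the ratio is 1/n**2 = 4 for n = 0.5
```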
A further large scientific imbalance in astrophysics and cosmology was caused by the imposition of general relativity, namely, of one of the most controversial theories of the 20-th century, because it is afflicted by problematic aspects and sheer inconsistencies so serious as to be called catastrophic, as outlined in the next section. It is hoped that these preliminary comments are sufficient to illustrate the weakness of the scientific foundations of the astrophysical studies of the 20-th century.

1.3.3 The Scientific Imbalance in Biology

By far one of the biggest scientific imbalances of the 20-th century occurred in biology, because biological structures were treated via quantum mechanics in full awareness that the systems described by that discipline are dramatically different from biological structures. To begin, quantum mechanics and chemistry are strictly reversible, while all biological structures and events are structurally irreversible, since biological structures, such as a cell or a complete organism, admit a birth, then grow, and then die. Moreover, quantum mechanics and chemistry can only represent perfectly rigid systems, as is well known from the fundamental rotational symmetry, which can only describe "rigid bodies". As a consequence, the representation of biological systems via quantum mechanics and chemistry implies that our body should be perfectly rigid, without any possibility of introducing deformable-elastic structures, because the latter would cause catastrophic inconsistencies with the basic axioms.

Moreover, another pillar of quantum mechanics and chemistry is the verification of total conservation laws, for which Heisenberg's equation of motion became established. In fact, the quantum time evolution of an arbitrary quantity A is given by

i\times\frac{dA}{dt} = [A, H] = A\times H - H\times A, \qquad (1.3.7)

under which expression we have the conservation of the energy and other quantities, e.g.,

i\times\frac{dH}{dt} = H\times H - H\times H \equiv 0. \qquad (1.3.8)

A basic need for a scientific representation of biological structures is instead the representation of the time rate of variation of biological characteristics, such as size, weight, density, etc. This identifies another structural incompatibility between quantum mechanics and biological systems (a minimal numerical illustration is sketched below).

When passing to deeper studies, the insufficiencies of quantum mechanics and chemistry emerge even more forcefully. As an example, quantum theories can well represent the shape of sea shells, but not their growth in time. In fact, computer visualizations [16] have shown that, when the geometric axioms of quantum mechanics and chemistry (those of the Euclidean geometry) are imposed as being exactly valid, sea shells first grow in a deformed way and then crack during their growth. Finally, the ideal systems described with full accuracy by quantum mechanics, such as an isolated hydrogen atom or a crystal, are eternal. Therefore, the description of biological systems via quantum theories implies that they too are eternal.
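The conservative and reversible character of the quantum time evolution, Eqs. (1.3.7)-(1.3.8), can be checked numerically on a generic finite-dimensional example; the random Hermitian H and the state below are arbitrary and do not model any biological system.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 4
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (B + B.conj().T) / 2                       # a generic Hermitian "Hamiltonian"

commutator = lambda X, Y: X @ Y - Y @ X
print(np.allclose(commutator(H, H), 0))        # [H, H] = 0, so i dH/dt = 0: the energy cannot vary

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
for t in (0.0, 1.0, 5.0):
    psi_t = expm(-1j * H * t) @ psi            # unitary (hence reversible) time evolution
    print(np.vdot(psi_t, H @ psi_t).real)      # the same expectation value of H at every time t
```

Any characteristic represented as the expectation value of a quantity commuting with H is therefore rigorously constant under this evolution, which is the structural incompatibility with time-varying biological characteristics noted above.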
These occurrences should not be surprising to inquisitive minds, because the birth and growth, e.g., of a seashell are strictly irreversible and nonconservative, while the geometric axioms of quantum theories are perfectly reversible and conservative, as indicated earlier, thus resulting in a structural incompatibility, this time at the geometric level, without any conceivable possibility of reconciliation, e.g., via the introduction of unknown parameters "to fix things".

Additional studies have established that the insufficiencies of quantum mechanics and chemistry in biology are much deeper than the above, and invest the very mathematics underlying these disciplines. In fact, Illert [16] has shown that a minimally correct representation of the growth in time of sea shells requires the doubling of the Euclidean axes. However, sea shells are perceived by the human mind (via our three Eustachian tubes) as growing in our three-dimensional Euclidean space. As we shall see in Chapter 8, the only known resolution of such a dichotomy is that via multivalued irreversible mathematics, that is, mathematics in which operations such as product, addition, etc., produce a set of values, rather than the single value of quantum mechanics and chemistry. At any rate, the belief that the simplistic mathematics underlying quantum mechanics and chemistry can explain the complexity of the DNA code has no scientific credibility, the only serious scientific issue being the search for broader mathematics.

In conclusion, science will never admit "final theories". No matter how valid any given theory may appear at any point in time, its structural broadening for the description of more complex conditions is only a matter of time. This is the fate also of quantum mechanics and chemistry, as well as of the special and general relativities, which cannot possibly be considered "final theories" for all infinitely possible conditions existing in the universe. After all, following only a few centuries of research, rather than having reached a "final stage", science is only in its infancy.

1.4 THE SCIENTIFIC IMBALANCE CAUSED BY GENERAL RELATIVITY AND QUANTUM GRAVITY

1.4.1 Consistency and Limitations of Special Relativity

As is well known, thanks to historical contributions by Lorentz, Poincaré, Einstein, Minkowski, Weyl and others, special relativity achieved a majestic axiomatic consistency.7 After one century of studies, we can safely identify the origins of this consistency in the following crucial properties:

1) Special relativity is formulated in the Minkowski spacetime over the field of real numbers;

2) All laws of special relativity are invariant (rather than covariant) under the fundamental Poincaré symmetry;

3) The Poincaré transformations and, consequently, all time evolutions of special relativity are canonical at the classical level and unitary at the operator level, with implications crucial for physical consistency.

Consequently, since canonical or unitary transforms conserve the unit by their very definition, special relativity admits basic units and numerical predictions that are invariant in time. After all, the quantities characterizing the dynamical equations are the Casimir invariants of the Poincaré symmetry.

7 It should be indicated that the name "Einstein's special relativity" is political, since a scientifically correct name would be "Lorentz-Poincaré-Einstein relativity". Also, it is appropriate to recall (as now reviewed in numerous books under testimonials by important eyewitnesses) that Einstein ended up divorcing his first wife, Mileva Maric, because she was instrumental in writing the celebrated 1905 paper on special relativity and, for that reason, had originally been listed as a co-author of that article, a co-authorship that was subsequently removed when the article appeared in print. In fact, Einstein awarded the Nobel Prize money for that article to Mileva. Similarly, it should be recalled that Einstein avoided quoting Poincaré in his 1905 article following his consultation, and in documented knowledge that Poincaré had preceded him in various features of special relativity (see, e.g., the historical account by Logunov [96] or the instructive books [97,98]).
As a result of the above features, special relativity has been and can be confidently applied to experimental measurements, because the units selected by the experimenter do not change in time, and the numerical predictions of the theory can be tested at any desired time under the same conditions without fear of internal axiomatic inconsistencies. It is well established at this writing that special relativity is indeed "compatible with experimental evidence" for the arena of its original conception, the classical and operator treatment of "point-like" particles and electromagnetic waves moving in vacuum.

Despite these historical results, it should be stressed that, as is the fate of all theories, special relativity has numerous well defined limits of applicability, whose identification is crucial for any serious study of gravitation, since general relativity is known to be an extension of the special one. Among the various limitations, we quote the following:

INAPPLICABILITY # 1: Special relativity is inapplicable for the classical treatment of antiparticles, as shown in Section 1.1 and Chapter 2. This is essentially due to the existence of only one quantization channel. Therefore, the quantization of a classical antiparticle characterized by special relativity (essentially via the sole change of the sign of the charge) clearly leads to a quantum mechanical particle with the wrong sign of the charge, and definitely not to the appropriate charge-conjugated antiparticle, resulting in endless inconsistencies.

INAPPLICABILITY # 2: Special relativity has also been shown to be inapplicable (rather than violated) for the treatment of both particles and antiparticles when represented as they are in physical reality, namely as extended, generally nonspherical and deformable particles (such as protons or antiprotons), particularly when interacting at very short distances. In fact, these conditions imply the mutual penetration of the wavepackets and/or of the hyperdense media constituting the particles, resulting in nonlocal, integro-differential and nonpotential interactions that cannot be entirely reduced to potential interactions among point-like constituents.

INAPPLICABILITY # 3: Special relativity is also afflicted by the historical inability to represent irreversible processes. This inapplicability has been identified in Section 1.3 in the reversibility of the mathematical methods used by special relativity, under which the reversibility in time of its basic axioms is a mere consequence.
INAPPLICABILITY # 4: An additional field of clear inapplicability of special relativity is that of all biological entities, since the former can only represent perfectly rigid and perfectly reversible, thus eternal, structures, while biological entities are notoriously deformable and irreversible, having a finite life.

INAPPLICABILITY # 5: In addition, serious scholars should keep in mind that the biggest limitation of special relativity may well turn out to be the forgotten universal medium needed for the characterization and propagation not only of electromagnetic waves, but also of elementary particles, since truly elementary particles such as the electron appear to be pure oscillations of said universal medium. Rather than being forgotten, the issue of the privileged reference frame and of its relationship to the reference frames of our laboratory settings appears to be more open than ever.

1.4.2 The Scientific Imbalance Caused by General Relativity on Antimatter, Interior Problems, and Grand Unifications

As indicated above, special relativity has a majestic axiomatic structure with clear verifications in the field of its original conception. By contrast, it is safe to state that general relativity (see, e.g., monograph [17]) has been the most controversial theory of the 20-th century, for a plethora of inconsistencies that have grown in time rather than being addressed and resolved. We now address some of the inconsistencies published by numerous scholars in refereed technical journals, yet generally ignored by organized interests on Einsteinian doctrines, inconsistencies so serious as to be known nowadays as "catastrophic". The apparent resolution of the inconsistencies will be presented in Chapters 3, 4, 5, 13, and 14. Let us begin with the following basic requirements for any classical theory of gravitation to be consistent:

REQUIREMENT 1: Any consistent classical theory of gravitation must allow a consistent representation of the gravitational field of antimatter. General relativity does not verify this first requirement because, in order to attempt compatibility of the classical and quantum formulations, antimatter requires negative energies, while general relativity solely admits positive-definite energies, as is well known. Even assuming that this insufficiency is somehow bypassed, general relativity can only represent antimatter via the reversal of the sign of the charge. But the most important astrophysical bodies expected to be made up of antimatter are neutral. This confirms the structural inability of general relativity to represent antimatter in a credible way.

REQUIREMENT 2: Any consistent classical theory of gravitation must be able to represent interior gravitational problems. General relativity fails to verify this second requirement for numerous reasons, such as the inability to represent the density of the body considered, its irreversible condition (e.g., due to the increase of entropy), the locally varying speed of light, etc.

REQUIREMENT 3: Any consistent classical theory of gravitation must permit a grand unification with the other interactions. It is safe to state that this requirement too is not met by general relativity, since all attempts to achieve a grand unification have failed to date, since Einstein's time (see Chapter 12 for details).

REQUIREMENT 4: Any consistent classical theory of gravitation must permit a consistent operator formulation of gravity.
This requirement too has not been met by general relativity, since its operator image, known as quantum gravity [18], is afflicted by additional, independent inconsistencies mostly originating from its unitary structure, as studied in the next section.

REQUIREMENT 5: Any consistent classical theory of gravitation must permit the representation of the locally varying nature of the speed of light. This requirement too is clearly violated by general relativity.

The above insufficiencies are not of marginal character, because they caused serious imbalances in most branches of quantitative science. As an illustration, the first insufficiency prevented any study whatever as to whether a far-away galaxy or quasar is made up of matter or of antimatter. The second insufficiency created a form of religion related to the so-called "black holes", since, before claiming their existence, gravitational singularities must evidently come out of interior gravitational problems, and definitely not from theoretical abstractions solely dealing with exterior gravitation. The third insufficiency has been responsible for one of the longest lists of failed attempts at grand unification, without addressing the origin of the failures in the gravitational theory itself. The fourth insufficiency prevented, throughout the entire 20-th century, a consistent quantum formulation of gravity, with large implications in particle physics. The fifth insufficiency caused cosmological models that can only be qualified as scientific beliefs, rather than quantitative theories based on sound physical foundations.

It is hoped that even the most representative members of organized interests on Einsteinian doctrines will admit that any additional support for said interests is now counterproductive, since it has already passed the mark for a severe condemnation by posterity. It is time to provide a scientific identification of the basic insufficiencies of general relativity and to initiate systematic studies for their resolution.

1.4.3 Catastrophic Inconsistencies of General Relativity due to Lack of Sources

There exist subtle distinctions between "general relativity", "Einstein's gravitation", and the "Riemannian" formulation of gravity. For our needs, we here define Einstein's gravitation of a body with null electric and magnetic moments as the reduction of exterior gravitation in vacuum to pure geometry, namely, gravitation is solely represented via curvature in a Riemannian space R(x, g, R) with spacetime coordinates x = {x^\mu}, \mu = 1, 2, 3, 0, and a nowhere singular, real-valued and symmetric metric g(x) over the reals R, with field equations [19,20]8

G_{\mu\nu} = R_{\mu\nu} - g_{\mu\nu}\times R/2 = 0, \qquad (1.4.1)

in which, as a central condition for having Einstein's gravitation, there are no sources for the exterior gravitational field in vacuum for a body with null total electromagnetic field (null total charge and magnetic moment). For our needs, we define as general relativity any description of gravity on a Riemannian space over the reals with Einstein-Hilbert field equations with a

8 The dubbing of Eqs. (1.4.1) as "Einstein's field equations" is political, since it is known, or it should be known by any "expert" in the field who qualifies as such, that Hilbert independently published the same equations, and that Einstein consulted Hilbert without quoting his work in his 1916 gravitational paper, as done by Einstein in other cases.
It is also appropriate to recall that the publication of his 1916 paper on gravitation cost Einstein the divorce from his second wife, Elsa Loewenstein, for essentially the same reason as his first divorce. In fact, unlike Einstein, Elsa was a true mathematician, had trained Einstein in the Riemannian geometry (a topic for only very few pure mathematicians at that time), and was supposed to be a co-author of Einstein's 1916 paper, a co-authorship denied, as was the case for the suppression of the co-authorship of his first wife Mileva for his 1905 paper on special relativity (see the instructive books [97,98]). To avoid a scandal over the 1905 paper, Einstein donated to Mileva the proceeds of his Nobel Prize. However, he did not receive a second Nobel Prize with which to quiet down his second wife Elsa. A scandal over the 1916 paper was then avoided via the complicity of the Princeton community, a complicity that is in full force and effect to this day. Hence, Princeton could indeed be considered an academic community truly leading in new basic advances during Einstein's times. By contrast, Princeton is nowadays perceived as a "scientific octopus" with kilometric tentacles reaching all parts of our globe for the studious suppression, via the abuse of academic credibility, of any spark of advance over Einsteinian doctrines. In fact, no truly fundamental advance has come out of Princeton since Einstein's times, thus leaving Einstein as the sole source of money, prestige and power. The documentation of the actions by Princeton academicians to oppose, jeopardize and disrupt research beyond Einstein is vast and includes hundreds of researchers in all developed countries. It is their ethical duty, if they really care for scientific democracy and human society, to come out and denounce publicly the serious misconduct by Princeton academicians that they had to suffer (for which denunciations I am sure that the International Committee on Scientific Ethics and Accountability will offer its website http://www.scientificethics.org). In regard to the author's documented experiences, it is sufficient to report here, for the reader in good faith, the rejection by the Princeton academic community, with offensive language, of all requests by the author (when still naive) to deliver an informal seminar on the isotopic lifting of special relativity with the intent of receiving technical criticisms. There is also documentation that, while serving as the unfortunate session chairman of the second World Congress in Mathematics of the new century, the president of the Institute for Advanced Studies in Princeton prohibited presentations on Lie-isotopic and Lie-admissible algebras not only by the author, but also by the late Prof. Grigorios Tsagas, then Chairman of the Mathematics Department of Aristotle University in Thessaloniki, Greece. This volume has been dedicated to the memory of Prof. Gr. Tsagas also in view of the vexations he had to suffer for his pioneering mathematical research from a decaying U. S. academia. The climax of putrescence in the Princeton academic community is reached by the mumbo-jumbo research in the so-called "controlled hot fusion", funded with more than one billion dollars of public funds, all spent under the condition of compatibility with Einsteinian doctrines and despite clear technical proofs of the impossibility of its success (see Volume II for technical details).
The author spares the reader the agony of additional documented episodes of scientific misconduct, because they are too demeaning, and expresses the view that, with a few exceptions, the Princeton academic community is nowadays an enemy of mankind.

Figure 1.19. When the "bending of light" by astrophysical bodies was first measured, organized interests in Einsteinian doctrines immediately claimed such a bending to be an "experimental verification" of "Einstein's gravitation", and the scientific community accepted that claim without any critical inspection (for evident academic gains), according to an unreassuring trend that lasts to this day and lies at the foundation of the current scientific obscurantism of potentially historical proportions. It can be seen by first-year physics students that the measured bending of light is that predicted by the NEWTONIAN attraction. The representation of the same "bending of light" as being entirely due to curvature, as necessary in "Einstein's gravitation", implies its formulation in such a way as to avoid any Newtonian contribution, with catastrophic inconsistencies in other experiments (see, e.g., the next figure).

source due to the presence of electric and magnetic fields,

G_{\mu\nu} = R_{\mu\nu} - g_{\mu\nu}\times R/2 = k\times t_{\mu\nu}, \qquad (1.4.2)

where k is a constant depending on the selected units, whose value is here irrelevant. For the scope of this section it is sufficient to assume that the Riemannian description of gravity coincides with general relativity according to the above definition. In the following, we shall first study the inconsistencies of Einstein's gravitation, that is, the inconsistencies of the entire reduction of gravity to curvature without source, and then study the inconsistencies of general relativity, that is, the inconsistencies caused by curvature itself even in the presence of sources. It should be stressed that a technical appraisal of the content of this section can only be reached following the study of the axiomatic inconsistencies of grand unified theories of electroweak and gravitational interactions whenever gravity is represented with curvature on a Riemannian space, irrespective of whether with or without sources, as studied in Chapter 12.

THEOREM 1.4.1 [21]: Einstein's gravitation and general relativity at large are incompatible with the electromagnetic origin of mass established by quantum electrodynamics, thus being inconsistent with experimental evidence.

Proof. Quantum electrodynamics has established that the mass of all elementary particles, whether charged or neutral, has a primary electromagnetic origin, that is, all masses have a first-order origin given by the volume integral of the 00-component of the energy-momentum tensor t_{\mu\nu} of electromagnetic origin,

m = \int d^4x \; t^{\rm elm}_{00}, \qquad (1.4.3a)

t_{\alpha\beta} = \frac{1}{4\pi}\left( F_{\alpha\mu} F^{\mu}{}_{\beta} + \frac{1}{4}\, g_{\alpha\beta}\, F_{\mu\nu} F^{\mu\nu} \right), \qquad (1.4.3b)

where t_{\alpha\beta} is the electromagnetic energy-momentum tensor and F_{\alpha\beta} is the electromagnetic field (see Ref. [11a] for explicit forms of the latter with retarded and advanced potentials). Therefore, quantum electrodynamics requires the presence of a first-order source tensor in the exterior field equations in vacuum, as in Eqs. (1.4.2). Such a source tensor is absent in Einstein's gravitation (1.4.1) by conception. Consequently, Einstein's gravitation is incompatible with quantum electrodynamics. The incompatibility of general relativity with quantum electrodynamics is established by the fact that the source tensor in Eqs. (1.4.2) is of higher order in magnitude, thus being ignorable in first approximation with respect to the gravitational field, while according to quantum electrodynamics said source tensor is of first order, thus not being ignorable in first approximation.
The inconsistency of both Einstein's gravitation and general relativity is finally established by the fact that, for the case in which the total charge and magnetic moment of the body considered are null, Einstein's gravitation and general relativity allow no source at all. By contrast, as illustrated in Ref. [21], quantum electrodynamics requires a first-order source tensor even when the total charge and magnetic moment are null, due to the charge structure of matter. q.e.d.

The first consequence of the above property can be expressed via the following:

COROLLARY 1.4.1A [21]: Einstein's reduction of gravitation in vacuum to pure curvature without source is incompatible with physical reality.

A few comments are now in order. As is well known, the mass of the electron is entirely of electromagnetic origin, as described by Eq. (1.4.3), therefore requiring a first-order source tensor in vacuum as in Eqs. (1.4.2) (a classical order-of-magnitude illustration is sketched at the end of this passage). Therefore, Einstein's gravitation for the case of the electron is inconsistent with nature. Also, the electron has a point charge. Consequently, the electron has no interior problem at all, in which case the gravitational and inertial masses coincide,

m^{\rm Grav.}_{\rm Electron} \equiv m^{\rm Inert.}_{\rm Electron}. \qquad (1.4.4)

Next, Ref. [21] proved Theorem 1.4.1 for the case of a neutral particle by showing that the \pi^0 meson also needs a first-order source tensor in the exterior gravitational problem in vacuum, since its structure is composed of one charged particle and one charged antiparticle in high dynamical conditions. In particular, said source tensor has a value so large as to account for the entire gravitational mass of the particle [21],

m^{\rm Grav.}_{\pi^0} = \int d^4x \; t^{\rm elm}_{00}. \qquad (1.4.5)

For the case of the interior problem of the \pi^0, we have the additional presence of short range weak and strong interactions, representable with a new tensor \tau_{\mu\nu}. We therefore have the following:

COROLLARY 1.4.1B [21]: In order to achieve compatibility with the electromagnetic, weak and strong interactions, any gravitational theory must admit two source tensors: a traceless tensor for the representation of the electromagnetic origin of mass in the exterior gravitational problem, and a second tensor representing the contribution to interior gravitation of the short range interactions, according to the field equations

G^{\rm Int.}_{\mu\nu} = R_{\mu\nu} - g_{\mu\nu}\times R/2 = k\times\left( t^{\rm elm}_{\mu\nu} + \tau^{\rm ShortRange}_{\mu\nu} \right). \qquad (1.4.6)

A main difference between the two source tensors is that the electromagnetic tensor t^{\rm elm}_{\mu\nu} is notoriously traceless, while the second tensor \tau^{\rm ShortRange}_{\mu\nu} is not. A more rigorous definition of these two tensors will be given shortly. It should be indicated that, for a possible solution of Eqs. (1.4.6), various explicit forms of the electromagnetic fields, as well as of the short range fields originating the electromagnetic and short range energy-momentum tensors, are given in Ref. [21]. Since both source tensors are positive-definite, Ref. [21] concluded that the interior gravitational problem characterizes the inertial mass according to the expression

m^{\rm Inert.} = \int d^4x\; \left( t^{\rm elm}_{00} + \tau^{\rm ShortRange}_{00} \right), \qquad (1.4.7)

with the consequential general law

m^{\rm Inert.} \geq m^{\rm Grav.}, \qquad (1.4.8)

where the equality solely applies for the electron.
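As a classical, textbook-level illustration of the idea of an "electromagnetic mass" (this is the standard classical-electron-radius estimate, not an evaluation of Eq. (1.4.3) itself), one may ask at what radius the electrostatic field energy outside a spherical shell of charge e equals the electron rest energy:

```python
import math
import scipy.constants as const

rest_energy = const.m_e * const.c**2                       # electron rest energy, J
# field energy outside a shell of radius a: U(a) = e^2 / (8 pi eps0 a); set U(a) = m c^2 and solve for a
a = const.e**2 / (8 * math.pi * const.epsilon_0 * rest_energy)
print(a)                                                   # ~1.4e-15 m, one half of the classical electron radius
```

The estimate shows only that electromagnetic field energies of the right order of magnitude are available at femtometer scales; the first-order source tensor of Eqs. (1.4.2)-(1.4.3) is a separate, field-theoretic statement made in Ref. [21].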
Finally, Ref. [21] proved Theorem 1.4.1 for the exterior gravitational problem of a neutral massive body, such as a star, by showing that the situation is essentially the same as that for the \pi^0. The sole difference is that the electromagnetic field requires the sum of the contributions from all elementary constituents of the star,

m^{\rm Grav.}_{\rm Star} = \sum_{p=1,2,\ldots} \int d^4x\; t^{\rm elm}_{p\,00}. \qquad (1.4.9)

In this case, Ref. [21] provided methods for the approximate evaluation of the sum, which resulted in being of first order also for stars with null total charge. When studying a charged body, there is no need to alter Eqs. (1.4.6), since that particular contribution is automatically contained in the indicated field equations. Once the incompatibility of general relativity at large with quantum electrodynamics has been established, the interested reader can easily prove the incompatibility of general relativity with quantum field theory and quantum chromodynamics, as implicitly contained in Corollary 1.4.1B.

An important property, apparently first reached in Ref. [11a] in 1974, is the following:

COROLLARY 1.4.1C [21]: The exterior gravitational field of a mass originates entirely from the total energy-momentum tensor (1.4.3b) of the electromagnetic field of all elementary constituents of said mass.

In different terms, a reason for the failure to achieve a "unification" of the gravitational and electromagnetic interactions, initiated by Einstein himself, is that said interactions can be "identified" with each other and, as such, they cannot be unified. In fact, in all unifications attempted until now, the gravitational and electromagnetic fields preserve their identity, and the unification is attempted via geometric and other means, resulting in redundancies that eventually cause inconsistencies. Note that conventional electromagnetism is represented with the tensor F_{\mu\nu} and the related Maxwell equations. When electromagnetism is identified with exterior gravitation, it is represented with the energy-momentum tensor t_{\mu\nu} and the related Eqs. (1.4.6). In this way, gravitation results as a mere additional manifestation of electromagnetism. The important point is that, besides the transition from the field tensor F_{\mu\nu} to the energy-momentum tensor t_{\mu\nu}, there is no need to introduce a new interaction to represent gravity.

Note finally the irreconcilable alternatives emerging from the studies herein considered:

ALTERNATIVE I: Einstein's gravitation is assumed as being correct, in which case quantum electrodynamics must be revised in such a way as to avoid the electromagnetic origin of mass; or

ALTERNATIVE II: Quantum electrodynamics is assumed as being correct, in which case Einstein's gravitation must be irreconcilably abandoned in favor of a more adequate theory.

By remembering that quantum electrodynamics is one of the most solid and experimentally verified theories in scientific history, it is evident that the rather widespread assumption of Einstein's gravitation as having a final and universal character is non-scientific.

THEOREM 1.4.2 [22,10b]: Einstein's gravitation (1.4.1) is incompatible with the Freud identity of the Riemannian geometry, thus being inconsistent on geometric grounds.

Proof.
The Freud identity [11b] can be written as

R^{\alpha}_{\beta} - \frac{1}{2}\,\delta^{\alpha}_{\beta}\, R - \frac{1}{2}\,\delta^{\alpha}_{\beta}\,\Theta = U^{\alpha}_{\beta} + \partial V^{\alpha\rho}_{\beta}/\partial x^{\rho} = k\times\left( t^{\alpha}_{\beta} + \tau^{\alpha}_{\beta} \right), \qquad (1.4.10)

where

\Theta = g^{\alpha\beta} g^{\gamma\delta}\left( \Gamma_{\rho\alpha\beta}\,\Gamma^{\rho}_{\gamma\delta} - \Gamma_{\rho\alpha\gamma}\,\Gamma^{\rho}_{\beta\delta} \right), \qquad (1.4.11a)

U^{\alpha}_{\beta} = -\frac{1}{2}\,\frac{\partial\Theta}{\partial g^{\rho\alpha}{}_{|\rho}}\; g^{\gamma\beta}{}_{\uparrow\gamma}, \qquad (1.4.11b)

V^{\alpha\rho}_{\beta} = \frac{1}{2}\left[ g^{\gamma\delta}\left( \delta^{\alpha}_{\beta}\,\Gamma^{\rho}_{\gamma\delta} - \delta^{\rho}_{\beta}\,\Gamma^{\alpha}_{\gamma\delta} \right) + \left( \delta^{\rho}_{\beta}\, g^{\alpha\gamma} - \delta^{\alpha}_{\beta}\, g^{\rho\gamma} \right)\Gamma^{\delta}_{\gamma\delta} + g^{\rho\gamma}\,\Gamma^{\alpha}_{\gamma\beta} - g^{\alpha\gamma}\,\Gamma^{\rho}_{\gamma\beta} \right]. \qquad (1.4.11c)

Therefore, the Freud identity requires two first-order source tensors for the exterior gravitational problem in vacuum, as in Eqs. (1.4.6) of Ref. [21]. These terms are absent in Einstein's gravitation (1.4.1) which, consequently, violates the Freud identity of the Riemannian geometry. q.e.d.

By noting that trace terms can be transferred from one tensor to the other in the r.h.s. of Eqs. (1.4.10), it is easy to prove the following:

COROLLARY 1.4.2A [10b]: Except for possible factorizations of common terms, the t- and \tau-tensors of Theorem 1.4.2 coincide with the electromagnetic and short range tensors, respectively, of Corollary 1.4.1B.

A few historical comments regarding the Freud identity are in order. It has been popularly believed throughout the 20-th century that the Riemannian geometry possesses only four identities (see, e.g., Ref. [17]). In reality, Freud [22] identified in 1939 a fifth identity that, unfortunately, was not aligned with Einstein's doctrines and, as such, was ignored in virtually the entire literature on gravitation of the 20-th century, as was also the case for Schwarzschild's interior solution [8]. However, as repeatedly illustrated by scientific history, structural problems do not simply disappear with their suppression, and actually grow in time. In