Kicking the Sacred Cow:
Questioning the Unquestionable and Thinking the Impermissible
James P. Hogan
Scientists are Only Human and Not Immune to Dogma.
A New York Times Bestselling Writer Examines the Facts in the Most Profound Controversies in Modern Science.
Galileo may have been forced to deny that the Earth moves around the Sun; but in the end, science triumphed. Nowadays science fearlessly pursues truth, shining the pure light of reason on the mysteries of the universe. Or does it? As bestselling author James P. Hogan demonstrates in this fact-filled and thoroughly documented study, science has its own roster of hidebound pronouncements which are Not to be Questioned. Among the dogma-laden subjects he examines are Darwinism, global warming, the big bang, problems with relativity, radon and radiation, holes in the ozone layer, the cause of AIDS, and the controversy over Velikovsky. Hogan explains the basics of each controversy with his clear, informative style, in a book that will be fascinating for anyone with an interest in the frontiers of modern science.
Copyright
Copyright © 2004 by James P. Hogan
All rights reserved, including the right to reproduce this book or portions thereof in any form.
A Baen Books Original
Baen Publishing Enterprises
P.O. Box 1403
Riverdale, NY 10471
www.baen.com
ISBN: 0-7434-8828-8
Cover art by Allan Pollack
First printing, July 2004
Library of Congress Cataloging-in-Publication Data
Hogan, James P.
Kicking the sacred cow: questioning the unquestionable and thinking the impermissible / by James P. Hogan.
p. cm.
Includes bibliographical references.
ISBN 0-7434-8828-8 (HC)
1. Science I. Title.
Q158.5.H65 2004
500 – dc22
2004009764
Distributed by Simon & Schuster
1230 Avenue of the Americas
New York, NY 10020
Production by Windhaven Press, Auburn, NH
Printed in the United States of America
DEDICATION
To Halton Arp, Peter Duesberg... and all other scientists of integrity who followed where the evidence pointed, and stood by their convictions.
ACKNOWLEDGMENTS
The help and advice of the following people is gratefully acknowledged – for generously giving their time in describing their work and answering questions; providing invaluable material without which the book would not have been possible; giving more of their time to reading, criticizing, and offering suggestions; and in some cases for the plain, simple moral support of wanting to see it finished. A few words at the front never seems enough to repay this kind of cooperation.
John Ackerman, Firmament & Chaos, Philadelphia, PA; Halton Arp, Max-Planck Institut fur Astrophysik, Germany; Russell T. Arndts; Andre Assis, Universidade Estadual del Campinas-Unicamp, Sao Paulo, Brazil; Petr Beckmann, Professor Emeritus of Electrical Engineering, University of Colorado, Boulder; Michael J. Bennett; Tom Bethell, Hoover Institution, Stanford, CA, and American Spectator, Washington, DC; Anthony Brink, South African Bar, Pietermaritzburg, South Africa; Candace Crandall, Science & Environmental Policy Project, Arlington, VA; David Crowe, Reappraising AIDS Society, Alberta, Canada; Peter Duesberg, Department of Molecular & Cell Biology, University of California, Berkeley; Fintan Dunne, AIDS Watch, Dublin, Ireland; Hugh Ellsaesser, visiting scientist, Lawrence Livermore Laboratories, Livermore, CA; Scott Fields; Charles Ginenthal, The Velikovskian, Queens, NY; Tim Gleason, Unionville, CT; Larry Gould, Department of Physics, University of Connecticut, Storrs; Tina Grant, Venice, CA; Lewis Greenberg, Kronos, Deerfield Beach, FL; Sheryl Guffrey, Tulsa, OK; Ron Hatch, GPS Consultant, Wilmington, CA; Howard Hayden, Professor Emeritus of Physics, University of Connecticut, Storrs; Marjorie Hecht, 21st Century Science Associates, Leesburg, VA; Alex Hogan; Jackie Hogan; Joe Hogan; Mike Hogan; Bob Holznecht, Auto Air, Coco Beach, FL; Kent Hovind, Pensacola, FL; Les Johnson, NASA, Marshall Spaceflight Center, Huntsville, AL; Phillip Johnson, Professor of Law, University of California, Berkeley; Jeff Kooistra, Champagne, IL; Eric Lerner, Princeton, NJ; Robert Lightfoot, Chattanooga, TN; Anthony Liversidge, New York, NY; Scott Lockwood, Lubbock, TX; Christine Maggiore, Alive & Well, Venice, CA; George Marklin, Houston, TX; Paul Marmet, University of Ottawa, Canada; Mike Miller, Quackgrass Press; Bill Nichols, Seattle, WA; Mike Oliver, Carson City, NV; Henry Palka, Farmington, CT; Robert Pease, Professor Emeritus of Physical Climatology at the University of California, Riverside; Peter Perakos; Thomas E. Phipps Jr., Urbana, IL; C. J. Ransom, Colleyville, TX; Lynn E. Rose, Solana Beach, CA; Peter Saint-Andre, Monadnock, NH; S. Fred Singer, SEPP, Arlington, VA; Michael Sisson, Tampa, FL; Patrick Small; Toren Smith, Studio Proteus, San Francisco, CA; E. D. Trimm, Covington, GA; Valendar Turner, Royal Perth Hospital, Australia; Ruyong Wang, St. Cloud State University, MN; Brent Warner, NASA, Goddard Spaceflight Center, Greenbelt, MD; Jonathan Wells, Olympia, WA; Eleanor Wood, Spectrum Literary Agency, New York, NY.
Introduction
Contents
ONE Humanistic Religion The Rush to Embrace Darwinism
Science, Religion, and Logic
Darwinism and the New Order
A Cultural Monopoly
Rocks of Ages — The Fossil Record
Anything, Everything, and Its Opposite: Natural Selection
The Origin of Originality? Genetics and Mutation
Life as Information Processing
TWO Of Bangs and Braids Cosmology’s Mathematical Abstractions
Mathematical Worlds — and This Other One
Cosmologies as Mirrors
Matters of Gravity: Relativity’s Universes
After the Bomb: The Birth of the Bang
The Plasma Universe
Other Ways of Making Light Elements... And of Producing Expansion
Redshift Without Expansion at All
The Ultimate Heresy: Questioning the Hubble Law
The God of the Modern Creation Myth
THREE Drifting in the Ether Did Relativity Take A Wrong Turn?
Some Basics
Extending Classical Relativity
The New Relativity
Dissident Viewpoints
The Famous Faster-Than-Light Question
FOUR Catastrophe of Ethics The Case for Taking Velikovsky Seriously
Early Work: The Makings of an Iconoclast
Worlds in Collision
Science in Convulsion: The Reactions
Testimony from the Rocks: Earth in Upheaval
Orthodoxy in Confusion
Slaying the Monster: The AAAS Velikovsky Symposium, 1974
After the Inquisition: The Parallel Universe
FIVE Environmentalist Fantasies Politics and Ideology Masquerading As Science
Garbage In, Gospel Out: Computer Games and Global Warming
Holes in the Ozone Logic — But Timely for Some
Saving The Mosquitoes: The War On DDT
The 1971 EPA Hearings
“Vitamin R”: Radiation Good for Your Health
Rip-Out Rip-Off: The Asbestos Racket
SIX CLOSING RANKS AIDS Heresy In The Viricentric Universe
Science by Press Conference
“Side Effects” Just Like AIDS: The Miracle Drugs
A Virus Fixation
AFTERWORD Gothic Cathedrals And The Stars
REFERENCES & FURTHER READING
Introduction
Engineering and the Truth Fairies
Science really doesn’t exist. Scientific beliefs are either proved wrong, or else they quickly become engineering. Everything else is untested speculation.
– JPH
My interest in science began at an early age, as a boy growing up in postwar England. One of my older sisters, Grace – I was the baby by a large gap in a family with four children, two boys and two girls – was married to a former Royal Air Force radio and electronics technician called Don. He was one of the practical kind that people described as “good with his hands,” capable of fixing anything, it seemed.
The shelves, additions to the house, and other things that he created out of wood were always true and square, with the pieces fitting perfectly. He would restore pieces of machinery that he had come across rusting in the local tip, and assemble a pile of electrical parts and a coil wound on a cardboard custard container into a working radio. I spent long summer and Christmas vacations at Grace and Don’s, learning the art of using and taking care of tools (“The job’s not finished until they’re cleaned and put away” was one of his maxims), planning the work through (“Measure twice; cut once” was another), and talking with people across the world via some piece of equipment that he’d found in a yard sale and refurbished. Kids today take such things for granted, but there was no e-mail then. Computers were unheard of. Don would never pass by a screw or a bolt lying on the roadside that might be useful for something one day. His children once told me ruefully that they never got to play with their presents on Christmas Day because the paint was never dry.
Although Don was not a scientist, working with him imbued in me an attitude of mind that valued the practicality of science as a way of dealing with life and explaining much about the world. Unlike all of the other creeds, cults, and ideologies that humans had been coming up with for as long as humanity had existed, here was a way of distinguishing between beliefs that were probably true and beliefs that were probably not in ways that gave observable results that could be repeated. Its success was attested to by the new world that had come into existence in – what? – little more than a century. From atoms to galaxies, phenomena were made comprehensible and predictable that had remained cloaked in superstition and ignorance through thousands of years of attempts at inquiry by other means. Airplanes worked; magic carpets didn’t. Telephones, radio, and TV enabled anyone, at will, anytime, to accomplish things which before had been conceivable only as miracles. The foot deformities that I had been born with were corrected by surgery, not witch doctoring, enabling me later to enjoy a healthy life mountain hiking and rock climbing as a teenager. Asimov’s nonfiction came as a topping to the various other readings I devoured in pursuit of my interest: Science was not only effective and made sense; it could actually be fun too!
I would describe science as formalized common sense. We all know how easily true believers can delude themselves into seeing what they want to see, and even appearances reported accurately are not always to be relied upon. (My older brother was something of a card sharp, so there was nothing particularly strange in the idea of things sometimes not being what they seemed.) What singled science out was its recognition of objective reality: that whatever is true will remain true, regardless of how passionately someone might wish things to be otherwise, or how many others might be induced to share in that persuasion. A simple and obvious enough precept, one would have thought. Yet every other belief system, even when professing commitment to the impartial search for truth, acted otherwise when it came to recruiting a constituency. And hence, it seemed, followed most of the world’s squabbles and problems.
So it was natural enough for me to pursue a career in the Royal Aircraft Establishment, Farnborough – a few miles from where Grace and Don lived – after passing the requisite three days of qualifying examinations, as a student of electrical, mechanical, and aeronautical engineering. On completion of the general course I went on to specialize in electronics. Later, I moved from design to sales, then into computers, and ended up working with scientists and engineers across-the-board in just about every discipline and area of application. Seeing the way they went about things confirmed the impressions I’d been forming since those boyhood days of working with Don.
The problems that the world had been getting itself into all through history would all be solved straightforwardly once people came around to seeing things the right way. Wars were fought over religions, economic resources, or political rivalries. Well, science showed that men made gods, not vice versa. Sufficiently advanced technologies could produce plenty of resources for everybody, and once those two areas were taken care of, what was there left to create political rivalries over? Then we could be on our way to the stars and concern ourselves with things that were truly interesting.
When I turned to writing in the mid-seventies – initially as a result of an office bet, then going full-time when I discovered I liked it – a theme of hard science-fiction with an upbeat note came naturally. I was accused (is that the right word?) of reinventing the genre of the fifties and sixties from the ground up, which was probably true to a large degree, since I had read very little of it, having come into the field from a direction diametrically opposed to that of most writers. The picture of science that I carried into those early stories reflected the idealization of intellectual purity that textbooks and popularizers portray. Impartial research motivated by the pursuit of knowledge assembles facts, which theories are then constructed to explain. The theories are tested by rigorous experiment; if the predicted results are not observed, the theories are modified accordingly, without prejudice, or abandoned.
Although the ideal can seldom be achieved in practice, free inquiry and open debate will detect and correct the errors that human frailty makes inevitable. As a result, we move steadily through successively closer approximations toward the Truth.
Such high-flying fancy either attains escape velocity and departs from the realities of Earth totally, or it comes back to ground sometime. My descent from orbit was started by the controversy over nuclear energy. It wasn’t just political activists with causes, and journalists cooking a story who were telling the public things that the physicists and engineers I knew in the nuclear field insisted were not so.
Other scientists were telling them too. So either scientists were being knowingly dishonest and distorting facts to promote political views; or they were sincere, but ideology or some other kind of bias affected what they were willing to accept as fact; or vested interests and professional blinkers were preventing the people whom I was talking to from seeing things as they were. Whichever way, the ideal of science as an immutable standard of truth where all parties applied the same rules and would be obliged to agree on the same conclusion was in trouble.
I quickly discovered that this was so in other fields too. Atmospheric scientists whom I knew deplored the things being said about ozone holes. Chemists scoffed at the hysteria over carcinogens. A curious thing I noticed, however, was that specialists quick to denounce the misinformation and sensationalized reporting concerning their own field would accept uncritically what the same information sources and media said with regard to other fields. Nuclear engineers exasperated by the scares about radiation nevertheless believed that lakes formed in some of the most acidic rock on the continent had been denuded of fish (that had never lived there) by acid rain; climatologists who pointed out that nothing could be happening to the ozone layer since surface ultraviolet was not increasing signed petitions to ban DDT; biologists who knew that bird populations had thrived during the DDT years showed up to picket nuclear plants; and so it went on. Clearly, other factors could outweigh the objective criteria that are supposed to be capable of deciding a purely scientific question.
Browsing in a library one day, I came across a creationist book arguing that the fossil record showed the precise opposite of what evolutionary theory predicts. I had never had reason to be anything but a staunch supporter of Darwinism, since that was all I’d been exposed to, and everyone knew the creationists were strange anyway. But I checked the book out and took it home, thinking it would be good for a laugh. Now, I didn’t buy their Scriptural account of how it all began, and I still don’t. But contrary to the ridicule and derision that I’d been accustomed to hearing, to my own surprise I found the evidence that they presented for finding huge problems with the Darwinian theory to be solid and persuasive. So, such being my bent, I ordered more books from them out of curiosity to look a bit more deeply into what they have to say. Things got more interesting when I brought my findings up with various biologists whom I knew. While some would fly into a peculiar mix of apoplexy and fury at the mere mention of the subject – a distinctly unscientific reaction, it seemed – others would confide privately that they agreed with a lot of it; but things like pressures of the peer group, the politics of academia, and simple career considerations meant that they didn’t talk about it. I was astonished. This was the late-twentieth-century West, not sixteenth-century Spain.
Shortly afterward, I met Peter Duesberg, one of the nation’s leading molecular biologists, tipped by many to be in line for a Nobel Prize, suddenly professionally ostracized and defunded for openly challenging the mainstream dogma on AIDS. What was most disturbing about it after talking with him and his associates and reading their papers was that what they were saying made sense; the official party line didn’t. Another person I got to know was the late Petr Beckmann, professor emeritus of electrical engineering, whose electrical interpretation of the phenomena conventionally explained by the Einstein Relativity Theory (ERT) is equally compatible with all the experimental results obtained to date, simpler in its assumptions, and more powerful predictively – but it is ignored by the physics community. I talked to an astrophysicist in NASA who believed that Halton Arp – excommunicated from American astronomy for presenting evidence contradicting the accepted interpretation of the cosmic redshifts that the Big Bang theory rests on – was “onto something.” But he would never say so in public, nor sign his name to anything to that effect on paper. His job would be on the line, just as Arp’s had been.
Whatever science might be as an ideal, scientists turn out to be as human as anyone else, and they can be as obstinate as anyone else when comfortable beliefs solidify into dogma. Scientists have emotions – often expressed passionately, despite the myths – and can be as ingenious as any senator at rationalizing when a reputation or a lifetime’s work is perceived to be threatened. They value prestige and security no less than anyone else, which inevitably fosters convergences of interests with political agendas that control where the money and the jobs come from. And far from least, scientists are members of a social structure with its own system of accepted norms and rewards, commanding loyalties that at times can approach fanaticism, and with rejection and ostracism being the ultimate unthinkable.
This book is not concerned with cranks or simple die-hards, who are entitled to their foibles and come as part of life’s pattern. Rather, it looks at instances of present-day orthodoxies tenaciously defending beliefs in the face of what would appear to be verified fact and plain logic, or doggedly closing eyes and minds to ideas whose time has surely come. In short, where scientific authority seems to be functioning more in the role of religion protecting doctrine and putting down heresy than championing the spirit of the free inquiry that science should be.
The factors bringing this about are various. Massive growth of government funding and the direction of science since World War II have produced symbiotic institutions which, like the medieval European Church, sell out to the political power structure as purveyors of received truth in return for protection, patronage, and prestige. Sometimes vested commercial interests call the tune. In areas where passions run high, ideology and prejudice find it easy to prevail over objectivity. Academic turf, like any other, is defended against usurpers and outside invasion. Some readily trade the anonymity and drudgery of the laboratory for visibility as celebrities in the public limelight. Peer pressure, professional image, and the simple reluctance to admit that one was wrong can produce the same effects at the collective level as they do on individuals.
I used to say sometimes in flippant moments that science was the only area of human activity in which it actually matters whether or not what one believes is actually true. Nowadays, I’m not so sure. It seems frequently to be the case that the cohesiveness that promotes survival is fostered just as effectively by shared belief systems within the social-political structures of science, whether those beliefs be true or not. What practical difference does it make to the daily routine and budget of the typical workaday scientist, after all, if the code that directs the formation and behavior of the self-assembling cat wrote itself out of random processes or was somehow inspired by a Cosmic Programmer, or if the universe really did dance out of the head of a pin? Scientific truth can apparently be an elusive thing when you try to pin it down, like the Irish fairies.
So today, I reserve the aphorism for engineering. You can fool yourself if you want, and you can fool as many as will follow for as long as you can get away with it. But you can’t fool reality. If your design is wrong, your plane won’t fly. Engineers don’t have the time or the inclination for highfalutin’ theories. In fact, over-elaborate theories that try to reach too far, I’m beginning to suspect, might be the biggest single menace affecting science. Maybe that’s why I find that the protagonists of the later books that I’ve written, now that I look back at them and think about it, have tended to be engineers.
ONE
Humanistic Religion The Rush to Embrace Darwinism
I think a case can be made that faith is one of the world’s great evils, comparable to the smallpox virus but harder to eradicate.
– Richard Dawkins, professor of zoology, Oxford University
History will judge neo-Darwinism a minor twentieth-century religious sect within the sprawling religious persuasion of Anglo-Saxon biology.
– Lynn Margulis, professor of biology, University of Massachusetts
Science, Religion, and Logic
Science and religion are both ways of arriving at beliefs regarding things that are true of the world.
What distinguishes one from the other? The most common answer would probably be that religion derives its teaching from some kind of supreme authority, however communicated, which must not be questioned or challenged, whereas science builds its world picture on the available facts as it finds them, without any prior commitment to ideas of how things ought to be.
This is pretty much in accord with our experience of life, to be sure. But I would submit that, rather than being the primary differentiating quality in itself, it comes about as a consequence of something more fundamental. The difference lies in the relationship between the things that are believed and the reasons for believing them. With a religion, the belief structure comes first as an article of faith, and whatever the recognized authority decrees is accepted as being true. Questioning such truth is not permitted. Science begins by finding out what’s true as impartially as can be managed, which means accepting what we find whether we like it or not, and the belief structure follows as the best picture that can be made as to the reasons for it all. In this case, questioning a currently held truth is not only permissible but encouraged, and when necessary the belief structure is modified accordingly. Defined in that way, the terms encompass more than the kinds of things that go on in the neighborhood church or a research laboratory, and take on relevance to just about all aspects of human belief and behavior. Thus, not walking under ladders because it brings bad luck (belief in principle, first; action judged as bad, second) is “religious”; doing the same thing to avoid becoming a victim of a dropped hammer or splashed paint (perceiving the world, first; deciding there’s a risk, second) is “scientific.”
Of course, this isn’t to say that scientific thinking never proceeds according to preexisting systems of rules. The above two paths to belief reflect, in a sense, the principles of deductive and inductive logic.
Deduction begins with a set of premises that are taken to be incontestably true, and by applying rules of inference derives the consequences that must necessarily follow. The same inference rules can be applied again to the conclusions to generate a second level of conclusions, and the procedure carried on as far as one wants. Geometry is a good example, where a set of initial postulates considered to be self-evident (Euclid’s five, for example) is operated on by the rules of logic to produce theorems, which in turn yield further theorems, and so on. A deductive system cannot originate new knowledge. It can only reveal what was implicit in the assumptions. All the shelves of geometry textbooks simply make explicit what was implied by the choice of axioms. Neither can deduction prove anything to be true. It demonstrates merely that certain conclusions necessarily follow from what was assumed. If it’s assumed that all crows are black, and given that Charlie is a crow, then we may conclude that Charlie is black.
So deduction takes us from a general rule to a particular truth. Induction is the inverse process, of inferring the general rule from a limited number of particular instances. From observing what’s true of part of the world, we try to guess on the basis of intuition and experience – in other words, to “generalize” – what’s probably true of all of it. “Every crow I’ve seen has been black, and the more of them I see, the more confident I get that they’re all black.” However, inductive conclusions can never be proved to be true in the rigorous way that deductions can be shown to follow from their premises.
Proving that all crows are black would require every crow that exists to be checked, and it could never be said with certainty that this had been done. One disconfirming instance, on the other hand – a white crow – would be sufficient to prove the theory false.
This lack of rigor is probably why philosophers and logicians, who seek precision and universally true statements, have never felt as comfortable with induction as they have with deduction, or accorded it the same respectability. But the real world is a messy place of imperfections and approximations, where the art of getting by is more a case of being eighty percent right eighty percent of the time, and doing something now rather than waste any more time. There are no solid guarantees, and the race doesn’t always go to the swift nor the battle to the strong – but it’s the way to bet.
Deduction operates within the limits set by the assumptions. Induction goes beyond the observations, from the known to the unknown, which is what genuine innovation in the sense of acquiring new knowledge must do. Without it, how could new assertions about the world we live in ever be made? On the other hand, assertions based merely on conjecture or apparent regularities and coincidences – otherwise known as superstition – are of little use without some means of testing them against actuality. This is where deduction comes in – figuring out what consequences should follow in particular instances if our general belief is correct. This enables ways to be devised for determining whether or not they in fact do, which of course forms the basis of the scientific experimental method.
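To make the contrast concrete, here is a minimal sketch in Python (an illustration only, built on the crow example above; the function names and data structures are invented for the purpose). Deduction applies an assumed general rule to a particular case and can yield nothing that was not already implicit in the premises, while induction generalizes tentatively from observed cases and is overturned by a single disconfirming instance:

# Illustrative sketch only: deduction versus induction, using the crow example.

# Deduction: a general rule is assumed true and applied to a particular case.
def deduce_color(rules, individual):
    """Apply an assumed rule such as 'all crows are black' to one individual."""
    return rules.get(individual["species"], "unknown")

rules = {"crow": "black"}                        # premise taken as incontestable
charlie = {"name": "Charlie", "species": "crow"}
print(deduce_color(rules, charlie))              # -> black (implicit in the premises)

# Induction: a general rule is guessed from particular observations.
def induce_crow_rule(observations):
    """Generalize from the crows seen so far; one white crow falsifies the guess."""
    colors = {obs["color"] for obs in observations if obs["species"] == "crow"}
    if len(colors) == 1:
        return "all crows are " + colors.pop()   # tentative, never proved
    return "falsified: crows come in more than one color"

seen = [{"species": "crow", "color": "black"} for _ in range(100)]
print(induce_crow_rule(seen))                        # -> all crows are black
seen.append({"species": "crow", "color": "white"})   # the one disconfirming instance
print(induce_crow_rule(seen))                        # -> falsified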
Darwinism and the New Order
The Triumph of the Enlightenment
Scientific method played the central role in bringing about the revolutionary world view ushered in by such names as Roger Bacon, Descartes, and Galileo, which by the time of the seventeenth-century “Age of Enlightenment” had triumphed as the guiding philosophy of Western intellectual culture. No longer was permissible Truth constrained by interpretation of the Scriptures, readings of Aristotle and the classics, or logical premises handed down from the medieval Scholastics. Unencumbered by dogma and preconceptions of how reality had to be, Science was free to follow wherever the evidence led and uncover what it would. Its successes were spectacular indeed. The heavenly bodies that had awed the ancients and been regarded by them as deities were revealed as no different from the matter that makes up the familiar world, moved by the same forces. Mysteries of motion and form, winds and tides, heat and light were equally reduced to interplays of mindless, mechanical processes accessible to reason and predictable by calculation. The divine hand whose workings had once been invoked to explain just about everything that happened was no longer necessary. Neither, it seemed to many, were the traditional forms of authority that presented themselves as interpreters of its will and purpose. The one big exception was that nobody had any better answers to explain the baffling behavior of living things or where they could have come from.
The Original in “Origins”: Something for Everyone
A widely held view is that Charles Darwin changed the world by realizing that life could appear and diversify by evolution. This isn’t really the way it was, or the reason he caused so much excitement. The notion of life appearing spontaneously through some natural process was not in itself new, being found in such places as the Babylonian creation epic, Enuma Elish, and ancient Chinese teachings that insects come from nothing on the leaves of plants. Ideas of progressive development are expressed in the philosophies of Democritus and Epicurus, while Anaximander of Miletus (550 B.C.) held that life had originated by material processes out of sea slime – in some ways anticipating modern notions of a prebiotic soup. Empedocles of Ionia (450 B.C.) proposed a selection-driven process to account for adaptive complexity, in which all kinds of monstrosities were produced from the chance appearance of various combinations of body parts, human and animal, out of which only those exhibiting an inner harmony conducive to life were preserved and went on to multiply. The line continues down through such names as Hume, who speculated that the random juggling of matter must eventually produce ordered forms adapted to their environment; Lamarck, with his comprehensive theory of evolution by the inheritance of characteristics acquired through the striving of the parents during life; to Charles Darwin’s grandfather, Erasmus Darwin, who studied the similarities of anatomy between species and speculated on common ancestry as the reason.
The full title of Charles Darwin’s celebrated 1859 publication was On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. The case it presents hardly needs to be elaborated here. Essentially, species improve and diverge through the accumulation of selected modifications inherited from common ancestors, from which arise new species and eventually all of the diversity that makes up the living world. The solution that Darwin proposed was simple and elegant, requiring three premises that were practically self-evident: that organisms varied; that these variations were inherited; and that organisms were engaged in a competition for the means of survival, in the course of which the better equipped would be favored. Given variations, and given that they could be inherited, selection and hence adaptive change of the group as a whole was inevitable.
And over sufficient time the principle could be extrapolated indefinitely to account for the existence of anything.
None of the ingredients was especially new. But in bringing together his synthesis of ideas that had all been around for some time, Darwin provided for the first time a plausible, intellectually acceptable naturalistic and materialist explanation for the phenomenon of life at a time when many converging interests were desperately seeking one. Enlightenment thinkers, heady with the successes of the physical sciences, relished the opportunity to finish the job by expelling the last vestiges of supernatural agency from their world picture. The various factions of the new political power arising out of commerce and manufacturing found common ground from which to challenge the legitimacy of traditional authority rooted in land and Church, while at the same time, ironically, the nobility, witnessing the specter of militant socialist revolution threatening to sweep Europe, took refuge in the doctrine of slow, imperceptible change as the natural way of things. Meanwhile, the forces of exploitation and imperialism, long straining against the leash of moral restraint, were freed by the reassurance that extermination of the weak by the strong, and domination as the reward for excellence were better for all in the long run.
There was something in it for everyone. Apart from the old order fighting a rearguard action, the doctrine of competitive survival, improvement, and growth was broadly embraced as the driving principle of all progress – the Victorian ideal – and vigorously publicized and promoted. Science replaced the priesthood in cultural authority, no longer merely serving the throne but as supreme interpreter of the laws by which empires and fortunes flourish or vanish. Darwin’s biographer, Gertrude Himmelfarb, wrote that the theory could only have originated in laissez-faire England, because “Only there could Darwin have blandly assumed that the basic unit was the individual, the basic instinct self-interest, and the basic activity struggle.” 1
A Cultural Monopoly
Since then the theory has become established as a primary guiding influence on deciding social values and shaping relationships among individuals and organizations. Its impact extends across all institutions and facets of modern society, including philosophy, economics, politics, science, education, and religion. Its advocates pronounce it to be no longer theory but incontestable fact, attested to by all save the simple-minded or willfully obtuse. According to Daniel Dennett, Director of the Center for Cognitive Studies at Tufts University and a staunch proponent of Darwinism, “To put it bluntly but fairly, anyone today who doubts that the variety of life on this planet was produced by a process of evolution is simply ignorant – inexcusably ignorant.” 2
And from Oxford University’s professor of zoology, Richard Dawkins, one of the most vigorous and uncompromising popularizers of Darwinism today: “It is absolutely safe to say that, if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid or insane (or wicked, but I’d rather not consider that).” 3
Dennett also expresses reservations about the suitability of anyone denying Darwinism to raise children. 4
Like the majority of people in our culture, I suppose, I grew up accepting the Darwinian picture unquestioningly because the monopoly treatment accorded by the education system and the scientific media offered no alternative, and the authority images that I trusted at the time told me there wasn’t one.
And nothing much had happened to change that by the time of my own earlier writings. The dispute between Hunt and Danchekker in Inherit the Stars 5 isn’t over whether or not the human race evolved, but where it happened. And eleven years later I was still militantly defending the theory. 6 By that time, however, my faith in many of the things that “everyone knows” was being eroded as a result of getting to know various people with specialized knowledge in various fields, who, in ways I found persuasive, provided other sides to many public issues, but which the public weren’t hearing. Before long I found myself questioning and checking just about everything I thought I knew.
Sweeping Claims – and Reservations
As far as I recall, doubts about evolution as it was taught began with my becoming skeptical that natural selection was capable of doing everything that it was supposed to. There’s no question that it happens, to be sure, and that it has its effects. In fact, the process of natural selection was well known to naturalists before Darwin’s day, when the dominant belief was in Divine Creation. It was seen, however, as a conservative force, keeping organisms true to type and stable within limits by culling out extremes.
Darwin’s bold suggestion was to make it the engine of innovation. Observation of the progressive changes brought about by the artificial selection applied in animal and plant breeding led him – a pigeon breeder himself – to propose the same mechanism, taken further, as the means for transforming one species into another, and ultimately to something else entirely.
But on rereading Origin, I developed the uneasy feeling of watching fancy flying away from reality, as it is all too apt to do when not held down by the nails of evidence. The changes that were fact and discussed in great detail were all relatively minor, while the major transitions that constituted the force and substance of the theory were entirely speculative. No concrete proof could be shown that even one instance of the vast transformations that the theory claimed to explain had actually happened. And the same pattern holds true of all the texts I consulted that are offered today. Once the fixation on survival to the exclusion of all else sets in, a little imagination can always suggest a way in which any feature being considered “might” have conferred some advantage. Dull coloring provides camouflage to aid predators or protect prey, while bright coloring attracts mates. Longer beaks reach more grubs and insects; shorter beaks crack tougher seeds. Natural selection can explain anything or its opposite. But how do you test if indeed the fittest survive, when by definition whatever survives is the “fittest”?
By Scaffolding to the Moon
All breeders know there are limits beyond which further changes in a characteristic can’t be pushed, and fundamental innovations that can never be induced to any degree. Some varieties of sheep are bred to have a small head and small legs, but this can’t be carried to the point where they reduce to the scale of a rat. You can breed a larger variety of carnation or a black horse, but not a horse with wings. A given genome can support a certain amount of variation, giving it a range of adaptation to alterations in circumstances – surely to be expected for an organism to be at all viable in changeable environments. But no amount of selecting and crossing horses will produce wings if the genes for growing them aren’t there. As Darwin himself had found with pigeons, when extremes are crossed at their limit, they either become nonviable or revert abruptly to the original stock.
Horizontal variations within a type are familiar and uncontroversial. But what the theory proposes as occurring, and to account for, are vertical transitions from one type to another and hence the emergence of completely new forms. It’s usual in the literature for these two distinct types of change to be referred to respectively as “microevolution” and “macroevolution.” I’m not happy with these terms, however. They suggest simply different degrees of the same thing, which is precisely the point that’s at issue. So I’m going to call them “adaptive variation” and “evolutionary transition,” which as a shorthand we can reduce to “adaption” and “evolution.” What Darwin’s theory boils down to is the claim that given enough time, adaptive variations can add up to become evolutionary transitions in all directions to an unlimited degree. In the first edition of Origin (later removed) he said, “I can see no difficulty in a race of bears being rendered, by natural selection, more and more aquatic in their habits, with larger and larger mouths, till a creature was produced as monstrous as a whale.” But, unsubstantiated, this is the same as seeing no difficulty in adding to scaffolding indefinitely as a way to get to the Moon, or changing a Chevrolet a part at a time as a workable way of producing a Boeing 747. Regarding the generally held contention that there are limits to natural variation, he wrote, “I am unable to discover a single fact on which this belief is grounded.” 7 But there wasn’t a single fact to support the belief that variation could be taken beyond what had been achieved, either, and surely it was on this side that the burden of proof lay.
And the same remains true to this day. The assurance that adaptations add up to evolution, presented in textbooks as established scientific fact and belligerently insisted on as a truth that can be disputed only at the peril of becoming a confessed imbecile or a sociopath, is founded on faith. For decades researchers have been selecting and subjecting hundreds of successive generations of fruit flies to X rays and other factors in attempts to induce faster rates of mutation, the raw material that natural selection is said to work on, and hence accelerate the process to observable dimensions. They have produced fruit flies with varying numbers of bristles on their abdomens, different shades of eye colors, no eyes at all, and grotesque variations with legs growing out of their heads instead of antennas. But the results always remain fruit flies. Nothing comes out of it suggestive of a house fly, say, or a mosquito. If selection from variations were really capable of producing such astounding transformations as a bacterium to a fish or a reptile to a bird, even in the immense spans of time that the theory postulates, then these experiments should have revealed some hint of it.
Rocks of Ages – The Fossil Record
Very well, if neither the undisputed variations that are observed today, nor laboratory attempts to extend and accelerate them provide support for the kind of plasticity that evolution requires, what evidence can we find that it nevertheless happened in the past? There is only one place to look for solid testimony to what actually happened, as opposed to all the theorizing and excursions of imagination: the fossil record. Even if the origin of life was a one-time, nonrepeatable occurrence, the manner in which it took place should still yield characteristic patterns that can be predicted and tested.
Slow-Motion Miracles – The Doctrine of Gradualism
Transforming a fish into a giraffe or a dinosaur into an eagle involves a lot more than simply switching a piece at a time as can be done with Lego block constructions. Whole systems of parts all have to work together. The acquisition of wolf-size teeth doesn’t do much for the improvement of a predator if it still has rat-size jaws to fit them in. But bigger jaws are no good without stronger muscles to close them and a bigger head to anchor the muscles. Stronger muscles need a larger blood supply, which needs a heavier-duty circulatory system, which in turn requires upgrades in the respiratory department, and so it goes. For all these to come about together in just the right amounts – like randomly changing the parts of a refrigerator and ending up with a washing machine – would be tantamount to a miracle, which was precisely what the whole theory was intended to get away from.
Darwin’s answer was to adopt for biology the principle of “gradualism” that his slightly older contemporary, the Scottish lawyer-turned-geologist, Sir Charles Lyell, was arguing as the guiding paradigm of geology. Prior to the mid nineteenth century, natural philosophers – as investigators of such things were called before the word “scientist” came into use – had never doubted, from the evidence they found in abundance everywhere of massive and violent animal extinctions, oceanic flooding over vast areas, and staggering tectonic upheavals and volcanic events, that the Earth had periodically undergone immense cataclysms of destruction, after which it was repopulated with radically new kinds of organisms. This school was known as “catastrophism,” its leading advocate being the French biologist Georges Cuvier, “the father of paleontology.” Such notions carried too much suggestion of Divine Creation and intervention with the affairs of the world, however, so Lyell dismissed the catastrophist evidence as local anomalies and proposed that the slow, purely natural processes that are seen taking place today, working for long enough at the same rates, could account for the broad picture of the Earth as we find it.
This was exactly what Darwin’s theory needed. Following the same principles, the changes in living organisms would take place imperceptibly slowly over huge spans of time, enabling all the parts to adapt and accommodate to each other smoothly and gradually. “As natural selection acts solely by accumulating slight, successive, favourable variations, it can produce no great or sudden modifications; it can act only by short and slow steps.” 8 Hence, enormous numbers of steps are needed to get from things like invertebrates protected by external shells to vertebrates with all their hard parts inside, or from a bear- or cowlike quadruped to a whale. It follows that the intermediates marking the progress over the millions of years leading up to the present should vastly outnumber the final forms seen today, and have left evidence of their passing accordingly. This too was acknowledged freely throughout Origin and in fact provided one of the theory’s strongest predictions. For example:
“[A]ll living species have been connected with the parent-species of each genus, by differences not greater than we see between the natural and domestic varieties of the same species at the present day; and these parent species, now generally extinct, have in turn been similarly connected with more ancient forms; and so on backwards, always converging to the common ancestor of every great class. So that the number of intermediate and transitional links, between all living and extinct species, must have been inconceivably great. But assuredly, if this theory be true, such have lived upon the earth.” 9
Life’s Upside-Down Tree: The First Failed Prediction
The theory predicted not merely that transitional forms would be found, but implied that the complete record would consist mainly of transitionals; what we think of as fixed species would turn out to be just arbitrary – way stations in a process of continual change. Hence, what we should find is a treelike branching structure following the lines of descent from a comparatively few ancient ancestors of the major groups, radiating outward from a well-represented trunk and limb formation laid down through the bulk of geological time as new orders and classes appear, to a profusion of twigs showing the diversity reached in the most recent times. In fact, this describes exactly the depictions of the “Tree of Life” elaborately developed and embellished in Victorian treatises on the wondrous new theory and familiar to museum visitors and anyone conversant with textbooks in use up to quite recent times.
But such depictions figure less prominently in the books that are produced today – or more commonly are omitted altogether. The reason is that the story actually told by the fossils in the rocks is the complete opposite. The Victorians’ inspiration must have stemmed mainly from enthusiasm and conviction once they knew what the answer had to be. Species, and all the successively higher groups composed of species – genus, family, order, class, phylum – appear abruptly, fully differentiated and specialized, in sudden epochs of innovation just as the catastrophists had always said, without any intermediates leading up to them or linking them together. The most remarkable thing about them is their stability thereafter – they remain looking pretty much the same all the way down to the present day, or else they become extinct. Furthermore, the patterns seen after the appearance of a new population are not of divergence from a few ancestral types, but once again the opposite of what such a theory predicted. Diversity was most pronounced early on, becoming less, not greater with time as selection operated in the way previously maintained, weeding out the less suited. So compared to what we would expect to find, the tree is nonexistent where it should be in the greatest evidence, and what does exist is upside down.
Darwin and his supporters were well aware of this problem from the ample records compiled by their predecessors. In fact, the most formidable opponents of the theory were not clergymen but fossil experts. Even Lyell had difficulty in accepting his own ideas of gradualism applied to biology, familiar as he was with the hitherto undisputed catastrophist interpretation. But ideological fervor carried the day, and the generally agreed answer was that the fossil record as revealed at the time was incomplete. Now that the fossil collectors knew what to look for, nobody had any doubt that the required confirming evidence would quickly follow in plenitude. In other words, the view being promoted even then was a defense against the evidence that existed, driven by prior conviction that the real facts had to be other than what they seemed.
Well, the jury is now in, and the short answer is that the picture after a century and a half of assiduous searching is, if anything, worse now than it was then. Various ad hoc reasons and speculations have been put forward as to why, of course. These include the theory that most of the history of life consists of long periods of stasis during which change was too slow to be discernible, separated by bursts of change that happened too quickly to have left anything in the way of traces (“punctuated equilibrium”); that the soft parts that weren’t preserved did the evolving while the hard parts stayed the same (“mosaic evolution”); that fossilization is too rare an occurrence to leave a reliable record; and a host of others. But the fact remains that if evolution means the gradual transformation of one kind of organism into another, the outstanding feature of the fossil record is its absence of evidence for evolution.
Elaborate gymnastics to explain away failed predictions are almost always a sign of a theory in trouble.
Luther Sunderland describes this as a carefully guarded “trade secret” of evolutionary theorists and refers to it as “Darwin’s Enigma” in his book of the same name, which reports interviews conducted during the course of a year with officials of five natural history museums containing some of the largest fossil collections in the world. 10
The plea of incompleteness of the fossil record is no longer tenable. Exhaustive exploration of the strata of all continents and across the ocean bottoms has uncovered formations containing hundreds of billions of fossils. The world’s museums are filled with over 100 million fossils of 250,000 species. Their adequacy as a record may be judged from estimates of the percentage of known, living forms that are also found as fossils. They suggest that the story that gets preserved is much more complete than many people think. Of the 43 living orders of terrestrial vertebrates, 42, or over 97 percent, are found as fossils. Of the 329 families of terrestrial vertebrates the figure is 79 percent, and when birds (which tend to fossilize poorly) are excluded, 87 percent. 11 What the record shows is clustered variations around the same basic designs over and over again, already complex and specialized, with no lines of improvement before or links in between. Forms once thought to have been descended from others turn out to have been already in existence at the time of the ancestors that supposedly gave rise to them. On average, a species persists fundamentally unchanged for over a million years before disappearing – which again happens largely in periodic mass extinctions rather than by the gradual replacement of the ancestral stock in the way that gradualism requires. This makes nonsense of the proposition we’re given that the bat and the whale evolved from a common mammalian ancestor in a little over 10 million years, which would allow at the most ten to fifteen “chronospecies” (a segment of the fossil record judged to have changed so little as to have remained a single species) aligned end to end to effect the transitions. 12
Flights of Fancy: The Birds Controversy
It goes without saying that the failure to find connecting lines and transitional forms hasn’t been from want of trying. The effort has been sustained and intensive. Anything even remotely suggesting a candidate receives wide acclaim and publicity. One of the most well-known examples is Archaeopteryx, a mainly birdlike creature with fully developed feathers and a wishbone, but also a number of skeletal features such as toothed jaws, claws on its wings, and a bony, lizardlike tail that at first suggest kinship with a small dinosaur called Compsognathus and prompted T. H. Huxley to propose originally that birds were descended from dinosaurs. Presented to the world in 1861, two years after the publication of Origin, in Upper Jurassic limestones in Bavaria conventionally dated at 150 million years, its discovery couldn’t have been better timed to encourage the acceptance of Darwinism and discredit skeptics.
Harvard’s Ernst Mayr, who has been referred to as the “Dean of Evolution,” declared it to be “the almost perfect link between reptiles and birds,” while a paleontologist is quoted as calling it a “holy relic... The First Bird.” 13
Yet the consensus among paleontologists seems to be that there are too many basic structural differences for modern birds to be descended from Archaeopteryx. At best it could be an early member of a totally extinct group of birds. On the other hand, there is far from a consensus as to what might have been its ancestors. The two evolutionary theories as to how flight might have originated are “trees down,” according to which it all began with exaggerated leaps leading to parachuting and gliding by four-legged climbers; and “ground up,” where wings developed from the insect-catching forelimbs of two-legged runners and jumpers. Four-legged reptiles appear in the fossil record well before Archaeopteryx and thus qualify as possible ancestors by the generally accepted chronology, while the two-legged types with the features that would be more expected of a line leading to birds don’t show up until much later.
This might make the trees-down theory seem more plausible at first sight, but it doesn’t impress followers of the relatively new school of biological classification known as “cladistics,” where physical similarities and the inferred branchings from common ancestors are all that matters in deciding what gets grouped with what. (Note that this makes the fact of evolution an axiom.) Where the inferred ancestral relationships conflict with fossil sequences, the sequences are deemed to be misleading and are reinterpreted accordingly. Hence, by this scheme, the animals with the right features to be best candidates as ancestors to Archaeopteryx are birdlike dinosaurs that lived in the Cretaceous, tens of millions of years after Archaeopteryx became extinct. To the obvious objection that something can’t be older than its ancestor, the cladists respond that the ancestral forms must have existed sooner than the traces that have been found so far, thus reintroducing the incompleteness-of-the-fossil-record argument but on a scale never suggested even in Darwin’s day. The opponents counter that in no way could the record be that incomplete, and so the dispute continues. In reality, therefore, the subject abounds with a lot more contention than pronouncements of almost-perfection and holy relics would lead the outside world to believe.
The peculiar mix of features found in Archaeopteryx is not particularly conclusive of anything in itself. In the embryonic stage some living birds have more tail vertebrae than Archaeopteryx, which later fuse. One authority states that the only basic difference from the tail arrangement of modern swans is that the caudal vertebrae are greatly elongated, but that doesn’t make a reptile. 14 There are birds today such as the Venezuelan hoatzin, the South African touraco, and the ostrich that have claws. Archaeopteryx had teeth, whereas modern birds don’t, but many ancient birds did. Today, some fish have teeth while others don’t, some amphibians have teeth and others don’t, and some mammals have teeth but others don’t. It’s not a convincing mark of reptilian ancestry. I doubt if many humans would accept that the possession of teeth is a throwback to a primitive, reptilian trait.
So how solid, really, is the case for Archaeopteryx being unimpeachable proof of reptile-to-bird transition, as opposed to a peculiar mixture of features from different classes that happened upon a fortunate combination that endured in the way of the duck-billed platypus, but which isn’t a transition toward anything in the Darwinian sense (unless remains unearthed a million years from now are interpreted as showing that mammals evolved from ducks)? Perhaps the fairest word comes from Berkeley law professor Phillip Johnson, no champion of Darwinism, who agrees that regardless of the details, the Archaeopteryx specimens could still provide important clues as to how birds evolved. “[W]e therefore have a possible bird ancestor rather than a certain one,” he grants, “... on the whole, a point for the Darwinists.” 15 But he then goes on to comment, “Persons who come to the fossil evidence as convinced Darwinists will see a stunning confirmation, but skeptics will see only a lonely exception to a consistent pattern of fossil disconfirmation.” It was Darwin himself who conceded that the number of transitional forms that must once have existed would have been “inconceivably great.”
Lines of Horses
The other example that everyone will be familiar with from museum displays and textbooks is the famous “horse series,” showing with what appears to be incontrovertible clarity the 65-million-year progression from a fox-sized ungulate of the lower Eocene to the modern-day horse. The increase in size is accompanied by the steady reduction of the foreleg toes from four to one, and the development of relatively plain leaf-browsing teeth into high-crowned grazing ones. Again, this turns out to be a topic on which the story that scientists affirm when closing ranks before the media and the public can be very different from that admitted off the record or behind closed doors. 16
The first form of the series originated from the bone collections of Yale professor of paleontology O. C. Marsh and his rival Edward Cope, and was arranged by the director of the American Museum of Natural History (AMNH), Henry Fairfield Osborn, in 1874. It contained just four members, beginning with the four-toed Eohippus, or “dawn horse,” and passing through a couple of three-toed specimens to the single-toed Equus of modern times, but that was sufficient for Marsh to declare that “the line of descent appears to have been direct and the remains now known supply every important form.” More specimens were worked into the system and the lineage filled in to culminate in a display put on by the AMNH in 1905 that was widely photographed and reproduced to find its way as a standard inclusion in textbooks for generations afterward. By that time it was already becoming apparent to professionals that the real picture was more complicated and far from conclusive. But it was one of those things that once rooted, takes on a life of its own.
In the first place, given the wide diversity of life and the ubiquity of the phenomenon known as convergence – which evolutionists interpret as the arrival of closely similar forms from widely separated ancestral lines, for example sharks and porpoises, or marsupial and placental dogs – inferring closeness of relationships purely from skeletal remains is by no means a foolproof business. The coelacanth, an early lobe-finned fish, was once confidently thought to have been a direct ancestor of the types postulated to have invaded the land and given rise to the amphibians. And then the surprise discovery of living specimens in the 1930s and thereafter showed from examination of previously unavailable soft parts that the assumptions based on the fossil evidence alone had been incorrect, and the conclusion was no longer tenable. Hence, if the fossil record is to provide evidence for evolutionary continuity as opposed to the great divisions of nature seen by Cuvier, it is not sufficient that two groups merely resemble each other in their skeletal forms. Proof that it had actually happened would require, at the least, one unambiguous continuum of transitional species to be shown, possessing an incontestable progression of gradations from one type to another. Such a stipulation does, of course, invite the retort that every filling of a gap creates two more gaps, and no continuity could ever be demonstrated that would be capable of pleasing a sufficiently pedantic critic. But a Zeno-like reductio ad absurdum isn’t necessary for an acceptance of the reality of continuity beyond reasonable doubt to the satisfaction of common sense and experience. As an analogy, suppose that the real numbers were scattered over the surface of the planet, and a survey of them was conducted to test the theory that they formed a continuum of infinitesimally small gradations. If the search turned up repeated instances of the same integers in great amounts but never a fraction, our knowledge of probabilities would soon cast growing suspicion that the theory was false and no intermediates between the integers existed. A more recent study of the claim of evolutionary transition of types, as opposed to the uncontroversial fact of variation within types, stated: “The known fossil record fails to document a single example of phyletic (gradual) evolution accomplishing a major morphologic transition and hence offers no evidence that the gradualistic school can be valid.” 17
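As a toy illustration of the statistical reasoning in that analogy, here is a short sketch in Python (my own, with an arbitrary sample size and tolerance rather than anything from the sources cited) showing how unlikely an all-integer survey would be if the numbers really did form a continuum:

    import random

    def survey_under_continuum(n_samples=1000, tolerance=1e-3, seed=0):
        # Simulate a survey on the assumption that the values form a continuum,
        # modeled here as uniform random reals between 0 and 100. Returns the
        # fraction of sampled values lying within 'tolerance' of a whole number,
        # i.e., values that could be mistaken for integers.
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_samples):
            x = rng.uniform(0, 100)
            if abs(x - round(x)) < tolerance:
                hits += 1
        return hits / n_samples

    print(survey_under_continuum())
    # Under the continuum hypothesis only about 0.2 percent of values should look
    # like integers, so the chance that a survey of 1,000 values turns up nothing
    # but integers is roughly 0.002 ** 1000, which is effectively zero. A search
    # that keeps finding integers and never a fraction is therefore overwhelming
    # evidence against the continuum, which is the point of the analogy.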
Later finds and comparisons quickly turned the original impressive linear progression into a tangled, bushlike structure of branches from assumed common ancestors, most of which led to extinction.
The validity of assigning the root genus, Eohippus, to the horse series at all had been challenged from the beginning. The animal looks nothing like a horse; Eohippus was simply the name given to the North American form placed first in Osborn’s original sequence. Subsequently, it was judged to be identical to a European genus already discovered by the British anatomist and paleontologist Robert Owen, and named Hyracotherium on account of its similarities in morphology and habitat to the hyrax running around alive and well in the African bush today, still equipped with four fore-toes and three hind ones, and more closely related to tapirs and rhinoceroses than anything horselike. Since Hyracotherium predated the North American discovery, by the normally observed custom Eohippus is not the valid name. But the suggestiveness has kept it entrenched in conventional horse-series lore. Noteworthy, however, is that Hyracotherium is no longer included in the display at Chicago’s Museum of Natural History.
In the profusion of side branches, the signs of relentless progress so aptly discerned by Victorians disappear in contradictions. In some lines, size increases only to reduce again. Even with living horses, the range in size from the tiny American miniature ponies to the huge English shires and Belgian warhorse breeds is as great as that collected from the fossil record. Hyracotherium has 18 pairs of ribs, the next creature shown after it has 19, then there is a jump to 15, and finally a reversion back to 18 with Equus.
Nowhere in the world are fossils of the full series as constructed found in successive strata. The series charted in school books comes mainly from the New World but includes Old World specimens where the eye of those doing the arranging considered it justified. In places where successive examples do occur together, such as the John Day formation in Oregon, both the three-toed and one-toed varieties that remain if the doubtful Hyracotherium is ignored are found at the same geological levels. And even more remarkable on the question of toes, of which so much is made when presenting the conventional story, is that the corresponding succession of ungulates in South America again shows distinctive groupings of full three-toed, three-toed with reduced lateral toes, and single-toed varieties, but the trend is in the reverse direction, i.e., from older single-toed to later three-toed. Presumably this was brought about by the same forces of natural selection that produced precisely the opposite in North America.
Keeping Naturalism Pure: Orthogenesis Wars
The perfection and complexity seen in the adaptations of living things are so striking that even among the evolutionists in Darwin’s day there was a strong, if not predominant, belief that the process had to be directed either by supernatural guidance or the imperative of some yet-to-be-identified force within the organisms themselves. (After all, if the result of evolution was to cultivate superiority and excellence, who could doubt that the ultimate goal at the end of it all was to produce eminent Victorians?) The view that some inner force was driving the evolutionary processes toward preordained goals was known as “orthogenesis” and became popular among paleontologists because of trends in the fossil record that it seemed to explain – the horse series being one of the most notable. This didn’t sit well with the commitment to materialism a priori that dominated evolutionary philosophy, however, since to many it smacked of an underlying supernatural guidance one step removed from outright creationism. To provide a purely materialist source of innovation, Darwin maintained that some random agent of variation had to exist, even though at the time he had no idea what it was. A source of variety of that kind would be expected to show a radiating pattern of trial-and-error variants with most attempts failing and dying out, rather than the linear progression of an inner directive that knew where it was going. Hence, in an ironic kind of way, it has been the efforts of the Darwinians, particularly since the 1950s, that have contributed most to replacing the old linear picture of the horse series with the tree structure in their campaign to refute notions of orthogenesis.
But even if such a tree were to be reconstructed with surety, it wouldn’t prove anything one way or the other; the introduction of an element of randomness is by no means inconsistent with a process’s being generally directed. The real point is that the pattern was constructed to promote acceptance of a preexisting ideology, rather than from empirical evidence. Darwin’s stated desire was to place science on a foundation of materialistic philosophy; in other words, the first commitment was to the battle of ideas.
Richard Dawkins, in the opening of his book The Blind Watchmaker, defines biology as “the study of complicated things that give the appearance of having been designed for a purpose.” 18 The possibility that the suggestion of design might be anything more, and that appearances might actually mean what they say is excluded as the starting premise: “I want to persuade the reader, not just that the Darwinian worldview happens to be true, but that it is the only known theory that could, in principle, solve the mystery of our existence.” The claim of a truth that must be so “in principle” denotes argument based on a philosophical assumption. This is not science, which builds its arguments from facts. The necessary conclusions are imposed on the evidence, not inferred from it.
Left to themselves, the facts tell yet again the ubiquitous story of an initial variety of forms leading to variations about a diminishing number of lines that either disappear or persist to the present time looking much the same as they always did. And at the end of it all, even the changes that are claimed to be demonstrated through the succession are really quite trivial adjustments when seen against the background of equine architecture as a whole. Yet we are told that they took sixty-five million years to accomplish. If this is so, then what room is there for the vastly more complex transitions between forms utterly unlike one another, of which the evidence shows not a hint?
Anything, Everything, and Its Opposite: Natural Selection
Dissent in the Ranks: Logical Fallacy and Tautology
Norman Macbeth’s concise yet lucid survey of the subject, Darwin Retried, began when he used some idle time while convalescing in Switzerland to read a volume of essays commemorating the 1959 centenary of Origin’s publication. His conclusion after the several years of further research that his first impressions prompted was that “in brief, classical Darwinism is no longer considered valid by qualified biologists.” 19 They just weren’t telling the public. One of the most startling things Macbeth discovered was that while natural selection figured almost as a required credo on all the lists of factors cited in the experts’ writings as contributing to evolution, the importance they assigned to it ranged from its being “the only effective agency,” according to Julian Huxley, to virtually irrelevant in the opinions of others – even though it was just this that formed the substantive part of the title to Darwin’s book.
The reason for this backing off from what started out as the hallmark of the theory is that while mechanisms showing the effectiveness of natural selection can be readily constructed in imaginative speculations, any actual example of the process in action in the real world proceeds invisibly. Early Darwinians were carried away into concluding that every aspect of an animal down to the number of its spots or bristles was shaped by natural selection and thus was “adaptive,” i.e., relevant to survival.
Purporting to explain how the selective value of a particular, possibly trivial characteristic arose became something of a game among the enthusiasts, leading to such wild flights of just-so-story fancy and absurd reasoning that the more serious-minded gave up trying to account for the specifics, which were observable, while retaining undiminished faith in the principle, which wasn’t.
Put another way, it was claimed that natural selection worked because the results said to follow from it were evident all around. But this is the logical fallacy of saying that because A implies B, B must imply A. If it rained this morning, the grass will necessarily be wet. But finding the grass wet doesn’t mean necessarily that it rained. The sprinklers may have been on; the kids could have been playing with the hose; a passing UFO might have vented a coolant tank, and so on. Confirming the deductions from a theory only lends support to the theory when they can be shown to follow from it uniquely, as opposed to being equally consistent with rival theories. If only naturalistic explanations are allowed by the ground rules, then that condition is satisfied automatically since no explanation other than natural selection, even with its problems, has been offered that comes even close to being plausible. But being awarded the prize through default after all other contenders have been disqualified is hardly an impressive performance.
The Darwinists’ reaction to this entanglement was to move away from the original ideas of struggle and survival, and redefine evolution in terms of visible consequences, namely that animals with certain features did well and increased in numbers, others declined, while yet others again seemed to stay the same. Although perpetuating the same shaky logic, this had the benefit of making the theory synonymous with facts that couldn’t be denied, without the burden of explaining exactly how and why they came about, which had been the original intention. In the general retreat from what Darwinism used to mean, “evolution” became a matter of the mathematics of gene flows and population dynamics, in a word, differential reproduction, in the course of which “natural selection” takes on the broader meaning of being simply anything that brings it about. 20 So evolution is defined as change brought about by natural selection, where natural selection, through a massive circularity, arrives back at being anything that produces change. What Macbeth finds staggering in this is the ease with which the leaders of the field not only accept such tautologies blithely as inherent in their belief system, but are unable to see anything improper in tautological reasoning or the meaninglessness of any conclusions drawn from it. 21
Moth Myths. The Crowning Proof?
A consequence of such illogic is that simple facts which practically define themselves become celebrated as profound revelations of great explanatory power. Take as an example the case of the British peppered moth, cited in virtually all the textbooks as a perfect demonstration of “industrial melanism” and praised excitedly as living proof of evolution in action before our eyes. In summary, the standard version of the story describes a species of moth found in the British Midlands that was predominantly light-colored in earlier times but underwent a population shift in which a dark strain became dominant when the industrial revolution arrived and tree trunks in the moths’ habitat were darkened by smoke and air pollution. Then, when cleaner air resulted from the changes and legislation in modern times and the trees lightened again, the moth population reverted to its previous balance. The explanation given is that the moths depend on their coloring as camouflage to protect them from predatory birds. When the tree barks were light, the lighter-colored variety of moths was favored; with darker barks the darker moths did better; and the changing conditions were faithfully mirrored in the population statistics. Indeed, all exactly in keeping with the expectations of “evolution” as now understood.
The reality, however, is apparently more complicated. Research has shown that in at least some localities the darkening of the moths precedes that of the tree barks, suggesting that some common factor – maybe a chemical change in the air – affects both of them. Further, it turns out that the moths don’t normally rest on the trunks in daylight in the way textbook pictures show, and in conditions not artificially contrived for experiments, birds in daylight are not a major influence. The pictures were faked by gluing dead moths to tree trunks. 22
But even if the facts were as presented, what would it all add up to, really? Light moths do better against a light background, whereas dark moths do better against a dark background. This is the Earth-shattering outcome after a century and a half of intensive work by some of the bestknown names in science developing a theory that changed the world? Both light strains and dark strains of moth were already present from the beginning. Nothing changed or mutated; nothing genetically new came into existence. If we’re told that of a hundred soldiers sent into a jungle wearing jungle camouflage garb along with a hundred in arctic whites, more of the former were still around a week later, are we supposed to conclude that one kind “evolved” into another, or that anything happened that wouldn’t have been obvious to common sense?
If that’s what we’re told “evolution” in the now-accepted use of the word means, then so be it. But now we’ll need a different word to explain how moths came into existence in the first place. Yet along with such examples as Archaeopteryx and the horse series, the peppered moth is offered as proof that sets the theory on such incontestable grounds that to question it is evidence of being dim-witted or malicious. While other sciences have progressed from sailing clippers to spaceships, Morse telegraph to satellite nets, steam engines to nuclear reactors, these constitute the best evidence that can be mustered after a hundred and fifty years.
The Origin of Originality? Genetics and Mutation
Recombination: Answering the Wrong Question
Natural selection in itself originates nothing. It can only select out of what is already present to be selected from. In order to be the driving engine of evolution, it needs a source of new raw material to be tested and either preserved for further experimentation or rejected. Much is written about genetic transposition and recombination – the insertion, deletion, and duplication of the genes carried by the chromosomes, and their rearrangement into new permutations. And it is true that an enormous variety of altered programs for directing the form that an organism will assume can be produced in this way – far greater than could ever be realized in an actual population. Lee Spetner, a former MIT physicist and information scientist who has studied the mathematics of evolution for forty years, calculates the number of possible variations that could occur in a typical mammalian genome to be on the order of one followed by 24 million zeros. 23 (Yes, I did get that right. Not 24 orders of magnitude; 24 million orders of magnitude.) Of this, the fraction that could be stored in a population of a million, a billion, ten billion, or a hundred billion individuals – it really doesn’t make much difference – is so close to zero as to be negligible. And indeed this is a huge source of potential variety. But the attention it gets is misleading, since it’s the same sleight of hand we saw before of presenting lots of discussion and examples of adaptive variations that nobody doubts, and assuming evolutionary transitions to be just more of the same. The part that’s assumed is precisely what the exercise is supposed to be proving. For all that’s going on, despite the stupendous number of combinations it can come up with, is reshuffling the genes that already make up the genome of the species in question. Recombination is a very real and abundant phenomenon, taking place through sexual mixing whenever a mating occurs and well able to account for the variation that we see – it’s theoretically possible for two siblings to be formed from exactly complementary gametes (the half set of parental genes carried by a sperm or egg cell) from each parent, and thus to not share one gene in common. But it can’t work beyond the species level, where inconceivably greater numbers of transitions are supposed to have happened, and it is precisely those that we don’t see.
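To get a feel for the scale of those numbers, here is a short Python sketch of my own that works with the exponents directly, since a quantity like one followed by 24 million zeros overflows any ordinary calculation; the population sizes are the ones mentioned above:

    # The figure quoted from Spetner: the number of possible variations of a
    # typical mammalian genome is taken as 10 to the power 24,000,000.
    LOG10_POSSIBLE_VARIANTS = 24_000_000

    # Population sizes from the text, as powers of ten: a million, a billion,
    # ten billion, and a hundred billion individuals.
    for log10_population in (6, 9, 10, 11):
        # Largest fraction of the variant space the population could carry,
        # worked out in logarithms to avoid astronomically large numbers.
        log10_fraction = log10_population - LOG10_POSSIBLE_VARIANTS
        print(f"population 10^{log10_population}: fraction at most 10^{log10_fraction}")

    # Every line prints an exponent of about minus 24 million: whatever the
    # population size, the fraction of possible variants it can hold is so close
    # to zero as to be negligible, which is the point being made above.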
Random Mutation: Finally, the Key to New Things Under the Sun
The source of original variation that Darwin sought was eventually identified as the mechanism of genetic mutation deduced from Mendel’s studies of heredity, which was incorporated into Darwinian theory in what became known in the 1930s as the neo-Darwinian synthesis. By the 1940s the nucleic acid DNA was known to be the carrier of hereditary information, and in 1953 James Watson and Francis Crick determined the molecule’s double-helix structure with its “cross-rungs” of nucleotide base pairs that carry the genetic program. This program is capable of being misread or altered, leading the molecular biologist Jacques Monod, director of the Pasteur Institute, to declare in 1970 that “the mechanism of Darwinism is at last securely founded.” 24 Let’s take a deeper look, then, at what was securely founded.
An Automated Manufacturing City
Sequences of DNA base pairs – complementary arrangements of atoms that bridge the gap between the molecule’s two “backbones” like the steps of a helical staircase – encode the instructions that direct the cellular protein-manufacturing machinery to produce the structural materials for building the organism’s tissues, as well as molecules like hormones and enzymes to regulate its functioning. The operations that take place in every cell of the body are stupefyingly complex, embodying such concepts as real-time feedback control, centralized databanks, error-checking and correcting, redundancy coding, distributed processing, remote sensing, prefabrication and modular assembly, and backup systems that are found in our most advanced automated factories. Michael Denton describes it as a miniature city:
To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules.... We would see all around us, in every direction we looked, all sorts of robot-like machines. We would notice that the simplest of the functional components of the cell, the protein molecules, were astonishingly complex pieces of molecular machinery, each one consisting of about three thousand atoms arranged in highly organized 3-D spatial conformation. We would wonder even more as we watched the strangely purposeful activities of these weird molecular machines, particularly when we realized that, despite all our accumulated knowledge of physics and chemistry, the task of designing one such molecular machine – that is one single functional protein molecule – would be completely beyond our capacity at present and will probably not be achieved until at least the beginning of the next century.” 25
And this whole vast, mind-boggling operation can replicate itself in its entirety in a matter of hours.
When this happens through the cell dividing into two daughter cells, the double-stranded DNA control tapes come apart like a zipper, each half forming the template for constructing a complete copy of the original DNA molecule for each of the newly forming cells. Although the copying process is monitored by error-detection mechanisms that surpass anything so far achieved in our electronic data processing, copying errors do occasionally happen. Also, errors can happen spontaneously or be induced in existing DNA by such agents as mutagenic chemicals and ionizing radiation. Once again the mechanism for repairing this kind of damage is phenomenally efficient – if it were not, such being the ravages of the natural environment, no fetus would ever remain viable long enough to be born – but at the end of the day, some errors creep through to become part of the genome written into the DNA. If the cell that an error occurs in happens to be a germ cell (sperm or egg), the error will be heritable and appear in all the cells of the offspring it’s passed on to. About 10 percent of human DNA actually codes for structural and regulatory proteins; the function of the rest is not known. If the inherited copying error is contained in that 10 percent, it could (the code is highly redundant; for example, several code elements frequently specify the same amino acid, so that mutating one into another doesn’t alter anything) be expressed as some physical or behavioral change.
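The redundancy referred to in the parenthesis can be illustrated with a few entries from the standard genetic code; the little Python sketch below is my own toy example, not something taken from the sources cited:

    # A few entries from the standard genetic code: DNA codon -> amino acid.
    CODON_TABLE = {
        "GAA": "glutamic acid",
        "GAG": "glutamic acid",  # a different codon specifying the same amino acid
        "GTA": "valine",
    }

    def point_mutation_effect(before, after):
        # Describe whether a single-base change alters the amino acid specified.
        if CODON_TABLE[before] == CODON_TABLE[after]:
            return f"{before} -> {after}: silent, still codes for {CODON_TABLE[before]}"
        return f"{before} -> {after}: {CODON_TABLE[before]} becomes {CODON_TABLE[after]}"

    print(point_mutation_effect("GAA", "GAG"))  # third base mutates, nothing changes
    print(point_mutation_effect("GAA", "GTA"))  # second base mutates, the protein is altered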
The Blind Gunman: A Long, Hard Look at the Odds
Such “point mutations” of DNA are the sole source of innovation that the neo-Darwinian theory permits to account for all life’s diversity. The theory posits the accumulation of tiny, insensible fluctuations to bring about all major change, since large variations would cause too much dislocation to be viable.
They must occur frequently enough for evolution to have taken place in the time available; but if they occur too frequently no two generations would be the same, and no “species” as the basis of reproducing populations could exist. The key issue, therefore, is the rate at which the mutations that the theory rests on take place. More specifically, the rate of favorable mutations conferring some adaptive benefit, since harmful ones obviously contribute nothing as far as progress toward something better is concerned.
And here things run into trouble straight away, for beneficial mutations practically never happen.
Let’s take some of the well-known mutations that have been cataloged in studies of genetic diseases as examples.
All body cells need a certain amount of cholesterol for their membranes. It is supplied in packages of cholesterol and certain fats manufactured by the liver and circulated via the cardiovascular system.
Too much of it in circulation, however, results in degeneration and narrowing of the large and medium-size arteries. Cholesterol supply is regulated by receptor proteins embedded in the membrane wall that admit the packages into the cell and send signals back to the liver when more is needed. The gene that controls the assembly of this receptor protein from 772 amino acids is on chromosome 19 and consists of about 45,000 base pairs. Over 350 mutations of it have been described in the literature.
Every one of them is deleterious, producing some form of disease, frequently fatal. Not one is beneficial.
Another example is the genetic disease cystic fibrosis that causes damage to the lungs, digestive system, and in males the sperm tract. Again this traces to mutations of a gene coding for a transmembrane protein, this time consisting of 1,480 amino acids and regulating chloride ion transport into the cell. The controlling gene, called CFTR, has 250,000 base pairs to carry its instructions, of which over 200 mutations are at present known, producing conditions that range from severe lung infections leading to early deaths among children, to lesser diseases such as chronic pancreatitis and male infertility. No beneficial results have ever been observed.
“The Blind Gunman” would be a better description of this state of affairs. And it’s what experience would lead us to expect. These programs are more complex than anything running in the PC that I’m using to write this, and improving them through mutation would be about as likely as getting a better word processor by randomly changing the bits that make up the instructions of this one.
The mutation rates per nucleotide that Spetner gives from experimental observations are between 0.1 and 10 per billion transcriptions for bacteria and 0.01 to 1 per billion for other organisms, giving a geometric mean of 1 per billion. 26 He quotes G. Ledyard Stebbins, one of the architects of the neo-Darwinian theory, as estimating 500 successive steps, each step representing a beneficial change, to change one species into another. To compute the probability of producing a new species, the next item required would be the fraction of mutations that are beneficial. However, the only answer here is that nobody knows for sure that they occur at all, because none has ever been observed. The guesses found here and there in the evolutionary literature turn out to be just that – postulated as a hypothetical necessity for the theory to stand. (Objection: What about bacteria mutating to antibiotic-resistant strains? A well-documented fact. Answer: It can’t be considered meaningful in any evolutionary sense. We’ll see why later.)
But let’s follow Spetner and take it that a potentially beneficial mutation is available at each of the 500 steps, and that it spreads into the population. The first is a pretty strong assumption to make, and there’s no evidence for it. The second implies multiple cases of the mutation appearing at each step, since a single occurrence is far more likely to be swamped by the gene pool of the general population and disappear. Further, we assume that the favorable mutation that exists and survives to spread at every step is dominant, meaning that it will be expressed even if occurring on only one of the two parental chromosomes carrying that gene. Otherwise it would be recessive, meaning that it would have to occur simultaneously in a male and a female, who would then need to find each other and mate.
Even with these assumptions, which all help to oil the theory along, the chance that the postulated mutation will appear and survive in one step of the chain works out at around 1 in 300,000, which is less than that of flipping 18 coins and having them all come up heads. For the comparable thing to happen through all 500 steps, the number becomes one with more than 2,700 zeros.
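Those two figures are easy to check; the few lines of Python below are simply my own verification of the arithmetic quoted above:

    import math

    # Chance per step quoted above: about 1 in 300,000 that the needed mutation
    # appears and survives.
    PER_STEP_ODDS = 300_000

    # Flipping 18 coins and getting all heads is a 1 in 2**18 chance.
    print(2 ** 18)  # 262144, so 1 in 300,000 is indeed a little less likely

    # Compounding 1 in 300,000 over all 500 steps: the reciprocal probability is
    # 300,000 ** 500, whose length in digits is 500 * log10(300,000), rounded up.
    print(math.ceil(500 * math.log10(PER_STEP_ODDS)))  # 2739 digits, i.e., more than 2,700 zeros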
Let’s slow down for a moment to reflect on what that means. Consider the probability of flipping 150 coins and having them all come up heads. The event has a chance of 1 in 2^150 of happening, which works out at about 1 in 10^45 (1 followed by 45 zeros, or 45 orders of magnitude). This means that on average you’d have to flip 150 coins 10^45 times before you see all heads. If you were superfast and could flip 150 coins, count them, and pick them up again all in one second you couldn’t do it in a lifetime. Even a thousand people continuing nonstop for a hundred years would only get through 3 trillion flips, i.e., 3 x 10^12 – still a long, long way from 10^45.
So let’s try simulating it on a circuit chip that can perform each flip of 150 coins in a trillionth of a second. Now build a supercomputer from a billion of these chips and then set a fleet of 10 billion such supercomputers to the task... and they should be getting there after somewhere around 3 million years.
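Again the arithmetic can be checked directly; the short Python sketch below is my own verification of that estimate, using the rounded figure of 10^45 trials from the previous paragraph:

    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    trials_needed = 10 ** 45              # the rounded figure used for 2**150 above
    flips_per_chip_per_second = 10 ** 12  # one 150-coin flip per trillionth of a second
    chips_per_supercomputer = 10 ** 9     # a billion chips per machine
    supercomputers = 10 ** 10             # a fleet of ten billion machines

    flips_per_second = flips_per_chip_per_second * chips_per_supercomputer * supercomputers
    years = trials_needed / (flips_per_second * SECONDS_PER_YEAR)
    print(f"{years:.1e} years")  # about 3.2e+06, roughly the 3 million years quoted
    # Using the exact value of 2**150 (about 1.4 x 10^45) stretches the answer to
    # around 4.5 million years, which doesn't change the conclusion.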
Well, the outcome that we’re talking about, producing just one new species even with favorable assumptions all the way down the line, is over two thousand orders of magnitude more improbable than that.
But it Happened! Science or Faith?
This is typical of the kinds of odds you run into everywhere with the idea that life originated and developed by accumulated chance. Spetner calculates odds of some 600 orders of magnitude, about 1 in 10^600, against the occurrence of any instance of “convergent evolution,” which is invoked repeatedly by evolutionists to explain physical similarities that by no stretch of the imagination can be attributed to common ancestry.
The British astronomer Sir Fred Hoyle gives as 5 in 10^19 the probability that one protein could have evolved randomly from prebiotic chemicals, and for the 200,000 proteins found in the human body, a number with 40,000 zeros. 27 The French scientist Lecomte de Nouy computed the time needed to form a single protein in a volume the size of the Earth as 10^243 years. 28 These difficulties were already apparent by the mid-sixties. In 1967 a symposium was held at the Wistar Institute in Philadelphia to debate them, with a dazzling array of fifty-two attendees from the ranks of the leading evolutionary biologists and skeptical mathematicians. Numbers of the foregoing kind were produced and analyzed. The biologists had no answers other than to assert, somewhat acrimoniously from the reports, that the mathematicians had gotten their science backward: Evolution had occurred, and therefore the mathematical problems in explaining it had to be only apparent. The job of the mathematicians, in other words, was not to assess the plausibility of a theory but to rubber-stamp an already incontestable truth.
Life as Information Processing
Evolution Means Accumulating Information
The cell can be likened to a specialized computer that executes the DNA program and expresses the information contained in it. Cats, dogs, horses, and Archaeopteryxes don’t really evolve, of course, but live their spans and die still being genetically pretty much the same as they were when born. What evolves, according to the theory, is the package of genetic information that gets passed down from generation to generation, accumulating and preserving beneficial innovations as it goes. The species that exists at a given time is a snapshot of the genome expressing itself as it stands at the point it has reached in accumulating information down the line of descent from the earliest ancestor. Although the process may be rapid at times and slow at others, every mutation that contributes to the process adds something on average. This is another way of saying that to count as a meaningful evolutionary step, a mutation must add some information to the genome. If it doesn’t, it contributes nothing to the building up of information that the evolution of life is said to be.
No mutation that added information to a genome has ever been observed to occur, either naturally or in the laboratory. This is the crucial requirement that disqualifies all the examples that have been presented in scientific papers, reproduced in textbooks, and hyped in the popular media as “evolution in action.” We already saw that the case of the peppered moth involves no genetic innovation; what it demonstrates is an already built-in adaptation capacity, not evolution. This isn’t to say that mutations never confer survival benefits in some circumstances. Such occurrences are rare, but they do happen.
However, every one that has been studied turns out to be the result of information being lost from a genome, not gained by it. So what’s going on in such situations is just part of the normal process of existing organisms shuffling and jostling in their own peculiar ways for a better place in the sun, but not turning into something new.
Bacterial Immunity Claims: A False Information Economy
A frequently cited example is that of bacteria gaining resistance to streptomycin and some other mycin drugs, which they are indeed able to do by a single-point mutation. The drug molecule works by attaching to a matching site on a ribosome (protein-maker) of the bacterium, rather like a key fitting into a lock, and interfering with its operation. The ribosome strings the wrong amino acids together, producing proteins that don’t work, as a result of which the bacterium is unable to grow, divide, or propagate, and is wiped out. Mammalian ribosomes don’t have similar matching sites for the drug to attach to, so only the bacteria are affected, making such drugs useful as antibiotics. However, several mutations of the bacterial genome are possible that render the drug’s action ineffective. In a population where one of them occurs, it will be selected naturally to yield a resistant strain which in the presence of the antibiotic indeed has a survival benefit.
But the “benefit” thus acquired turns out to be a bit like gaining immunity to tooth decay by losing your teeth. Every one of the resistance-conferring mutations does so by altering one part or another of the ribosome “lock” in such a way that the drug’s molecular “key” will no longer match. This is another way of saying that the specific set of lock parts that enables the key to fit is replaced by one of several randomly determined alternative sets that it won’t fit. The significant point is that a single, unique state is necessary to bring about the first condition, “key fits,” whereas any one of a number of states is sufficient to produce the second condition, “key doesn’t fit.” Thinking of it as a combination lock, only one combination of all digits will satisfy the first condition, but altering any digit (or more) meets the second. This makes a number less specific – such as by changing 17365 to 173X5, where X can be any digit. Loss of specificity means a loss of information. The same applies to pests becoming resistant to insecticides such as DDT. Although a survival benefit may be acquired in certain circumstances, the mutant strains invariably show impairment in more general areas, such as by slowed metabolism or sluggish behavior. Hence, they turn out to be not “super species” at all, as the media love to sensationalize, but genetic degenerates which if the artificial conditions were taken away would rapidly be replaced by the more all-round-rugged wild types.
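The combination-lock illustration can be made quantitative; the Python sketch below is my own way of expressing the point in information terms, using just the five-digit example from the text:

    import math

    DIGITS = 5       # a five-digit combination such as 17365
    ALPHABET = 10    # each digit can take ten values

    def specificity_bits(matching_combinations):
        # Information, in bits, implied by narrowing the possibilities down to the
        # given number of matching combinations out of all 10**5 of them.
        return math.log2(ALPHABET ** DIGITS / matching_combinations)

    print(specificity_bits(1))         # ~16.6 bits: exactly one combination opens the lock
    print(specificity_bits(ALPHABET))  # ~13.3 bits: 173X5 matches any of ten combinations
    # Wildcarding one digit, like the mutations that merely spoil the antibiotic's fit,
    # discards about 3.3 bits of specificity: information is lost, not gained, even
    # though the change happens to be useful in those particular circumstances.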
Losing the genes that control the growth of teeth might produce a strain of survivors in a situation where all the food that required chewing was poisoned and only soup was safe. But it couldn’t count as meaningful in any evolutionary sense. If evolution means the gradual accumulation of information, it can’t work through mutations that lose it. A business can’t accumulate a profit by losing money a bit at a time.
Neither can it do so through transactions that break even. Some bacteria can become resistant through infection by a virus carrying a gene for resistance that the virus picked up from a naturally resistant variety. Some insects seem to get their uncannily effective camouflage by somehow having acquired the same color-patterning genes as are possessed by the plants they settle on. 29 Similar results can also be achieved artificially by genetic engineering procedures for transferring pieces of DNA from one organism to another. Although it is true that information is added to the recipient genomes in such cases, there is no gain for life as a whole in the sense of a new genetic program being written. The program to direct the process in question was already in existence, imported from somewhere else. Counting it as contributing to the evolution of life would be like expecting an economy to grow by having everyone take in everyone else’s laundry. For an economy to grow, wealth must be created somewhere. And as we’ve seen, considerations of the probabilities involved, limitations of the proposed mechanisms, and all the evidence available, say that theories basing large-scale evolution on chance don’t work.
More Bacteria Tales: Directed Mutation
Cases of adaptations occurring not through selection of random changes but being directed by cues in the environment have been reported for over a century. 30 But since any suggestion of nonrandom variation goes against the prevailing beliefs of mainstream biology, they have largely been ignored. Take, for example, the backup feeding system that the laboratory staple bacterium E. coli is able to conjure up on demand. 31
The normal form of E. coli lives on the milk sugar lactose and possesses a set of digestive enzymes tailored to metabolize it. A defective strain can be produced that lacks the crucial first enzyme of the set, and hence cannot utilize lactose. However, it can be raised in an alternative nutrient. An interesting thing now happens when lactose is introduced into the alternative nutrient. Two independent mutations to the bacterium’s genome are possible which together enable the missing first step to be performed in metabolizing lactose. Neither mutation is any use by itself, and the chances of both happening together are calculated to be vanishingly small, at about 1 in 10^18. For the population size in a typical experiment, this translates into the average waiting time for both mutations to happen together by chance being around a hundred thousand years. In fact, dozens of instances are found after just a few days. But only when lactose is present in the nutrient solution. In other words, what’s clearly indicated in experiments of this kind – and many have been described in the literature 32 – is that the environment itself triggers precisely the mutations that the organism needs in order to exploit what’s available.
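For a rough sense of where a waiting time like that comes from, here is a back-of-the-envelope Python sketch; the joint mutation probability is the figure quoted above, while the population size and generation rate are my own illustrative assumptions, not Spetner’s actual numbers:

    # Joint probability quoted above for both mutations arising in the same cell.
    P_DOUBLE = 1e-18

    # Illustrative assumptions of my own, not Spetner's figures: a culture of a few
    # billion cells turning over about ten generations per day.
    POPULATION = 3e9
    GENERATIONS_PER_DAY = 10

    expected_events_per_day = P_DOUBLE * POPULATION * GENERATIONS_PER_DAY
    expected_wait_years = 1 / (expected_events_per_day * 365)
    print(f"{expected_wait_years:,.0f} years")  # roughly 90,000 years of waiting on average
    # Yet the experiments report dozens of these double mutants within days, and only
    # when lactose is present, which is why the result is so hard to square with
    # purely random, undirected mutation.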
And So, Back to Finches
The forms of adult animal bone-muscle systems are influenced to a large degree by the forces that act on them while they are growing. Jaws and teeth have to bear the forces exerted when the animal chews its food, and these forces will depend in strength and direction on the kind of food the animal eats. The adult form of jaws and teeth that develops in many rodents, for example, can vary over wide ranges with changes in diet, brought about possibly by environmental factors or through a new habit spreading culturally through a population. If the new conditions or behavior become established, the result can be a permanent change in the expressed phenotype of the animal.
In 1967, a hundred or so finches of the same species were brought from Laysan, an island in the Pacific about a thousand miles northwest of Hawaii, forming part of a U.S. government bird reservation, to a small atoll called Southeast Island, somewhat southeast of Midway, which belongs to a group of four small islands all within about ten miles of each other. Twenty years later, the birds had dispersed across all the islands and were found to have given rise to populations having distinct differences, particularly with regard to the shapes and sizes of their beaks. 33 Clearly this wasn’t the result of randomly occurring mutations being naturally selected over many generations. The capacity to switch from one form to another was already present in the genetic program, and the program was switched to the appropriate mode by environmental signals. The ironic aspect of this example, of course, is that observations of precisely this type of variety in beak forms among finches of the Galapagos Islands led Darwin to the notion that he was witnessing the beginnings of new species.
Confronting the Unthinkable
By the above, if a population of rodents, say, or maybe horses, were to shift their diet abruptly, the phenotype would change abruptly even though the genotype does not. The fossil record would show abrupt changes in tooth and bone structure, even though there had been no mutation and no selection. Yet the evolution read into the fossil record is inferred largely from bones and teeth. In his reconstruction of the story of horse evolution, Simpson tells that when the great forests gave way to grassy plains, Mesohippus evolved into Merychippus, developing high-crowned teeth through random mutation and selection, for “It is not likely to be a coincidence that at the same time grass became common, as judged by fossil grass seeds in the rocks.” 34
It may indeed have been no coincidence. But neither does it have to be a result of the mechanism that Simpson assumes. If these kinds of changes in fossils were cued by altered environments acting on the developing organisms, then what has been identified as clear examples of evolution could have come about without genetic modification being involved, and with random mutation and selection playing no role at all.
Should this really be so strange? After all, at various levels above the genetic, from temperature regulation and damage repair to fighting or fleeing, organisms exhibit an array of mechanisms for sensing their environment and adjusting their response to it. The suggestion here is that the principle of sensing and control extends down also to the genetic level, where genes can be turned on and off to activate already-existing program modules, enabling an organism to live efficiently through short-term changes in its environment. Nothing in the genome changes. The program is set up for the right adaptive changes in the phenotype to occur when they are needed.
The problem for Darwinism, and maybe the reason why suggestions of directed evolution are so fiercely resisted, is that if there was trouble enough explaining the complexity of genetic programs before, this makes it immeasurably worse. For now we’re implying a genome that consists not only of all the directions for constructing and operating the self-assembling horse, but also all the variations that can be called up according to circumstances, along with all the reference information to interpret the environmental cues and alter the production specification accordingly. Fred Hoyle once observed that the chances of life having arisen spontaneously on Earth were about on a par with those of a whirlwind blowing through a junkyard containing all the pieces of a 747 lying scattered in disarray, and producing an assembled aircraft ready to fly. What we’re talking about now is a junkyard containing parts for the complete range of Boeing civil airliners, and a whirlwind selecting and building just the model that’s best suited to the current situation of cost-performance economics and projected travel demands.
Intelligence at Work? The Crux of It All
So finally we arrive at the reason why the subject is not just a scientific issue but has become such a battle of political, moral, and philosophic passions. At the root of it all, only two possibilities exist: Either there is some kind of intelligence at work behind what’s going on, or there is not. This has nothing to do with the world’s being six thousand years old or six billion. A comparatively young world – in the sense of the surface we observe today – is compatible with unguided Catastrophist theories of planetary history, while many who are of a religious persuasion accept orthodox evolution as God’s way of working. What’s at the heart of it is naturalism and materialism versus belief in a creative intelligence of some kind. Either these programs which defy human comprehension in their effectiveness and complexity wrote themselves accidentally out of mindless matter acting randomly; or something wrote them for a reason. There is no third alternative.
Darwin’s Black Box Opened: Biochemistry’s Irreducible Complexity
At the time Darwin formulated his original theory, nothing was known of the mechanism of heredity or the internal structures of the organic cell. The cell was known to possess a dark nucleus, but the inner workings were pretty much a “black box,” imagined to be a simple unit of living matter, and with most of the interesting things taking place at higher levels of organization. With the further development of sciences leading to the molecular biology that we see today, this picture has been dramatically shattered and the cell revealed as the stupendous automated factory of molecular machines that we glimpsed in Michael Denton’s description earlier. The complexity that has been revealed in the last twenty years or so of molecular biochemistry is of an order that dwarfs anything even remotely imagined before then.
These findings prompted Michael Behe, professor of biochemistry at Lehigh University in Pennsylvania, to write what has become an immensely popular and controversial book, Darwin’s Black Box, 35 in which he describes systems ranging from the rotary bearings of the cilia that propel mobile cells, to vision, the energy metabolism, and the immune system, which he argues cannot have come into existence by any process of evolution from something simpler. His basis for this assertion is the property they all share, of exhibiting what he terms “irreducible complexity.” The defining feature is that every one of the components forming such a system is essential for its operation. Take any of them away, and the system is not merely degraded in some way but totally incapable of functioning in any way at all. Hence, Behe maintains, such systems cannot have arisen from anything simpler, because nothing simpler – whatever was supposed to have existed before the final component was added – could have done anything; and if it didn’t do anything, it couldn’t have been selected for any kind of improvement. You either have to have the whole thing – which no variation of evolution or any other natural process could bring into existence in one step – or nothing.
The example he offers to illustrate the principle is the common mousetrap. It consists of five components: a catch plate on which the bait is mounted; a hammer that delivers the killing blow; a holding bar that sets and restrains the hammer; a spring to provide the hammer with lethal force; and a platform for mounting them all on and keeping them in place. Every piece is essential. Without any one, nothing can work. Hence, it has to be built as a complete, functioning unit. It couldn’t assume its final form by the addition of any component to a simpler model that was less efficient.
An example of reducible complexity, by contrast, would be a large house built up by additions and extensions from an initial one-room shack. The improvements could be removed in reverse order without loss of the essential function it provides, though the rendering of that function would be reduced in quality and degree.
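The contrast can be caricatured in a few lines of Python; this is my own toy model of the distinction, not anything from Behe: the mousetrap-like system delivers no function at all unless every part is present, while the house-like system merely degrades as pieces are removed.

    MOUSETRAP_PARTS = {"platform", "spring", "hammer", "holding bar", "catch plate"}

    def mousetrap_function(parts):
        # Irreducible: a missing part means no function at all, not reduced function.
        return 1.0 if MOUSETRAP_PARTS <= set(parts) else 0.0

    def house_function(rooms):
        # Reducible: a one-room shack already shelters; extra rooms only add comfort.
        return 0.0 if rooms < 1 else min(1.0, 0.2 + 0.1 * rooms)

    print(mousetrap_function(MOUSETRAP_PARTS))               # 1.0: the complete trap works
    print(mousetrap_function(MOUSETRAP_PARTS - {"spring"}))  # 0.0: remove any piece and nothing works
    print(house_function(1), house_function(8))              # 0.3 1.0: function only degrades gradually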
Here, from Behe’s book, are the opening lines of a section that sketches the process of vision at the biochemical level. Nobody has been able to offer even a speculation as to how the system could function at all if even one of its molecular cogs were removed.
When light first strikes the retina a photon interacts with a molecule called 11-cis-retinal, which rearranges within picoseconds [a picosecond is about the time light takes to cross the width of a human hair] to trans-retinal. The change in the shape of the retinal molecule forces a change in the shape of the protein rhodopsin, to which the retinal is tightly bound. The protein’s metamorphosis alters its behavior. Now called metarhodopsin II, the protein sticks to another protein, called transducin. Before bumping into metarhodopsin II, transducin had tightly bound a small molecule called GDP. But when transducin interacts with metarhodopsin II, the GDP falls off, and a molecule called GTP binds to transducin.
Concluding, after three long, intervening paragraphs of similar intricacy:
Trans-retinal eventually falls off rhodopsin and must be reconverted to 11-cis-retinal and again bound by rhodopsin to get back to the starting point for another visual cycle. To accomplish this, trans-retinal is first chemically modified by an enzyme to trans-retinol – a form containing two more hydrogen atoms. A second enzyme then converts the molecule to 11-cis-retinol. Finally, a third enzyme removes the previously added hydrogen atoms to form 11-cis-retinal, and the cycle is complete. 36
The retinal site is now ready to receive its next photon. Behe gives similarly comprehensive accounts of such mechanisms as blood clotting and the intracellular transport system, where the functions of all the components and their interaction with the whole are known in detail, and contends that only purposeful ordering can explain them. In comparison, vague, less precisely definable factors such as anatomical similarities, growth of embryos, bird lineages, or the forms of horses become obsolete and irrelevant, more suited to discussion in Victorian drawing rooms. The response from the evolutionists to these kinds of revelations has been almost complete silence. In a survey of thirty textbooks of biochemistry that Behe conducted, out of a total of 145,000 index entries, just 138 referred to evolution. Thirteen of the textbooks made no mention of the subject at all. As Behe notes, “No one at Harvard University, no one at the National Institutes of Health, no member of the National Academy of Sciences, no Nobel prize winner – no one at all can give a detailed account of how the cilium, or vision, or blood clotting, or any other complex biochemical process might have developed in a Darwinian fashion.” 37 Behe unhesitatingly sees design as the straightforward conclusion that follows from the evidence itself – not from sacred books or sectarian beliefs. He likens those who refuse to see it to detectives crawling around a body lying crushed flat and examining the floor with magnifying glasses for clues, while all the time ignoring the elephant standing next to the body – because they have been told to “get their man.” In the same way, Behe contends, mainstream science remains doggedly blind to the obvious because it has fixated on finding only naturalistic answers. The simplest and most obvious reason why living systems should show over and over again all the signs of having been designed – is that they were.
Acknowledging the Alternative: Intelligent Design
Others whom we have mentioned, such as Denton, Hoyle, and Spetner, express similar sentiments – not through any prior convictions but purely from considerations of the scientific evidence. Interest in intelligent design has been spreading in recent years to include not just scientists but also mathematicians, information theoreticians, philosophers, and others dissatisfied with the Darwinian theory or opposed to the materialism that it implies. Not surprisingly, it attracts those with religious interpretations too, including fundamentalists who insist on a literal acceptance of Genesis. But it would be a mistake to characterize the whole movement by one constituent group with extreme views in a direction that isn’t really relevant, as many of its opponents try to do – in the same way that it would be to belittle the notion of extraterrestrial intelligence because UFO abduction believers happen to subscribe to it. As Phillip Johnson says, “ID is a big tent” that accommodates many diverse acts. All that’s asserted is that the evidence indicates a creative intelligence of some kind. In itself, the evidence says nothing about the nature of such an intelligence nor what its purpose, competence, state of mind, or inclination to achieve what we think it should, might be.
|
||
The argument is sometimes put forward that examples of the apparent lack of perfection in some aspects of biological function and adaptation mean that they couldn’t be the work of a supreme, all-wise, all-knowing creator. This has always struck me as curious grounds for scientists to argue on, since notions of all-perfect creators were inventions of people more interested in devising means for achieving social control and obedience to ruling authorities than in interpreting scientific evidence. Wrathful gods who pass judgments on human actions and mete out rewards or retribution make ideal moral traffic policemen, and it seems to be only a matter of time (I put it at around 200-300 years) before religions, founded perhaps on genuine insights for all I know, are taken over by opportunists and sell out to, or are co-opted by, the political power structure. In short, arguments are made for the reality of some kind of creative intelligence; human social institutions find that fostering belief in a supreme moral judge is to their advantage. Nothing says that the two have to be one and the same. If the former is real, there’s no reason why it needs to possess the attributes of perfection and infallibility that are claimed for the latter. Computers and jet planes are products of intelligence, but nobody imagines them to be perfect.
|
||
Those who are persuaded by religious interpretations insist on the need for a perfect God to hand down the absolute moral standards which they see as the purpose in creating the world – and then go into all kinds of intellectual convolutions trying to explain why the world clearly isn’t perfect. I simply think that if such an intelligence exists it would do things for its reasons not ours, and I don’t pretend to know what they might be – although I could offer some possibilities. An analogy that I sometimes use is to imagine the characters in a role-playing game getting complex enough to become aware that they were in an environment they hadn’t created, and which they figure couldn’t have created itself. Their attempts to explain the reason for it all could only be framed in terms of the world that they know, that involves things like finding treasures and killing monsters. They could have no concept of a software writer creating the game to meet a specification and hold down a job in a company that has a budget to meet, and so on.
|
||
I sometimes hear the remark that living things don’t look like the products of design. True enough, they don’t look very much like the things we’re accustomed to producing. But it seems to me that anyone capable of getting self-assembling protein systems to do the work would find better things to do than spend their existence bolting things together in factories. Considering the chaotically multiplying possibilities confronting the development of modules of genetic code turned loose across a range of wildly varying environments to make what they can of themselves, what astounds me is that they manage as well as they do.
|
||
These are all valid enough questions to ask, and we could spend the rest of the book speculating about them. But they belong in such realms of inquiry as theology and philosophy, not science.
|
||
Is Design Detectable?
|
||
How confident can we be that design is in fact the necessary explanation, as opposed to some perhaps unknown natural process – purely from the evidence? In other words, how do you detect design? When it comes to nonliving objects or arrangements of things, we distinguish without hesitation between the results of design and of natural processes: a hexagonal, threaded nut found among pebbles on a beach; the Mount Rushmore monument as opposed to a naturally weathered and eroded rock formation; a sand castle on a beach, distinguished from mounds heaped by the tide. Exactly what is it that we are able to latch on to? If we can identify what we do, could we apply it to judging biological systems? William Dembski, who holds doctorates in mathematics and philosophy from the Universities of Chicago and Illinois, has tackled the task of setting out formally the criterion by which design is detected. 38 His analysis boils down to meeting three basic conditions.
|
||
The first is what Dembski terms “contingency”: that the system being considered must be compatible with the physics of the situation but not required by it. This excludes results that follow automatically and couldn’t be any other way. Socrates, for example, believed that the cycles of light and darkness, or the progressions of the seasons pointed toward design. But what else could follow day except night? What could come after cold but warming, or after drought other than rain?
|
||
Second is the condition most people would agree on, that of “complexity,” which is another way of describing a situation that has a low probability of occurring. Of all the states that the components of a watch might assume from being thrown in a pile or joined together haphazardly, if I see them put together in precisely the configuration necessary for the watch to work, I have no doubt that someone deliberately assembled them that way.
|
||
But complexity in itself isn’t sufficient. This is the point that people whom I sometimes hear from – and others writing in books, who should know better – miss when they argue that the information content of a genome is nothing remarkable, since there’s just as much information in a pile of sand. It’s true that spelling out the position and orientation of every sand grain to construct a given pile of sand would require a phenomenal amount of information. In fact, it would be the maximum for the number of components involved, for there’s no way of expressing a set of random numbers in any shorter form, such as a formula, or in the way a computer program of a few lines could be set up to generate, say, all the even numbers up to ten billion. But the only thing the numbers would be good for is to reconstruct that specific pile of sand. And the specificity means nothing, since for the purposes served by a pile of sand on the ground, one pile is as good as another, and so you might as well save all the bother and use a shovel. The same can’t be said of the sequences of DNA base pairs in a genome.
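As a rough illustration of the compressibility point, here is a minimal Python sketch of my own (the numbers are only for illustration, not anything from the text): a highly ordered sequence of billions of values can be regenerated from a one-line rule, while a random collection can only be reproduced by storing every element.

    import random

    # Ordered data compresses: all the even numbers up to ten billion are
    # fully specified by a one-line rule.
    evens = range(0, 10_000_000_000, 2)
    print(len(evens))                    # 5,000,000,000 values from one short description

    # Random data does not: the only complete description of this pile of
    # numbers is the list of the numbers themselves.
    sand_pile = [random.random() for _ in range(10)]
    print(sand_pile)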
|
||
Suppose someone comes across a line of Scrabble tiles reading METHINKS IT IS LIKE A WEASEL, with spaces where indicated. Asked to bet money, nobody would wager that it was the result of the cat knocking them out of the box or wind gusting through the open window. Yet it’s not the improbability of the arrangement that forces this conclusion. The sequence is precisely no more and no less probable than any other arrangement of twenty-eight letters and spaces. So what is it? The typical answer, after some chin stroking and a frown, is that it “means something.” But what does that mean? This is what Dembski was possibly the first to recognize and spell out formally. What we apprehend is that the arrangement is not only highly improbable but also specifies a pattern that is intelligible by a convention separate from the mere physical description. Knowledge of this convention – Dembski calls this “side information” – enables the arrangement to be constructed independently of merely following physical directions. In this case the independent information is knowledge of the English language, Shakespeare, and awareness of a line spoken by Hamlet. Dembski’s term for this third condition is “specificity,” which leads to “specified complexity” as the defining feature of an intelligently contrived arrangement.
|
||
Specifying a pattern recognizable in English enables the message to be encoded independently of Scrabble tiles, for example into highly improbable configurations of ink on paper, electron impacts on a screen, magnetic dots on a VHS sound track, or modulations in a radio signal. Michael Behe’s irreducible complexity is a special case of specified complexity, where the highly improbable organizations of the systems he describes specify independent patterns in the form of unique, intricate biological processes that the components involved, like the parts of a watch, could not perform if organized in any other way.
|
||
Philosophers’ Fruit-Machine Fallacy
|
||
A process that Richard Dawkins terms “cumulative complexity” is frequently put forward as showing that Darwinian processes are perfectly capable of producing such results. An example is a contrived analogy, given by the philosopher Elliott Sober, that uses the same phrase from Hamlet quoted above. 39 The letters are written on the edges of randomly spun disks, one occupying each position of the target sentence like the wheels of a slot machine. When a wheel happens to come up with its correct letter it is frozen thereafter until the sentence is complete. Ergo, it is claimed, pure randomness and selection can achieve the required result surprisingly rapidly. The idea apparently comes from Richard Dawkins and seems to have captured the imagination of philosophers such as Michael Ruse and Daniel Dennett, who also promote it vigorously.
|
||
But their enthusiasm is hard to understand, for the model shows the opposite of what it purports to. Who is deciding which disks to freeze, and why? What the analogy demonstrates is an intelligence directing the assembly of a complex system toward a preordained target already constructed independently of the mechanics by other means – in this case the creativity of Shakespeare. Yet the whole aim of Darwinism was to produce a nonteleological explanation of life, i.e., one in which purpose played no role. Hence, either these advocates don’t understand their own theory, or they fail to register that they’ve disproved their own assumptions.
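For readers who want to see the model in action, here is a minimal Python sketch of the spinning-disk scheme as described above (my own illustration, not code from Sober, Dawkins, or Dennett). Note that the target sentence has to be handed to the procedure before it starts, which is exactly the point of the criticism.

    import random
    import string

    ALPHABET = string.ascii_uppercase + " "          # 27 symbols, as in the Scrabble example
    TARGET = "METHINKS IT IS LIKE A WEASEL"

    def spin_until_matched(target=TARGET):
        """Spin one 'disk' per position and freeze a disk once it happens to
        show the target letter. Returns how many times the bank was spun."""
        frozen = [None] * len(target)
        spins = 0
        while None in frozen:
            spins += 1
            for i, wanted in enumerate(target):       # the target steers the freezing
                if frozen[i] is None and random.choice(ALPHABET) == wanted:
                    frozen[i] = wanted
        return spins

    print(spin_until_matched())    # typically around a hundred spins, not 27**28 of them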
|
||
Testing for Intelligence
|
||
Given that little if anything in life is perfect, how confident could we be in a test using these principles to detect the signature of intelligence in nature? As with a medical test it can err in two ways: by giving a “false positive,” indicating design when there isn’t any, or a “false negative,” by failing to detect design when it was actually present.
|
||
We live with false negatives all the time. When the information available is simply insufficient to decide – a rock balanced precariously on another; a couple of Scrabble tiles that happen to spell IT or SO – our tendency is to favor chance, since the improbabilities are not so high as to rule it out, but we’re sometimes wrong. Such instances are specific, yes, but not complex enough to prove design. Intelligence can also mimic natural processes, causing us to let pass as meaningless something encrypted in an unrecognized code or to accept as an accident what had been set up to appear as such when in fact it was arson or a murder. Although we have entire professions devoted to detecting such false negatives, such as police detectives, forensic scientists, and insurance claim investigators, we can get by with imperfection.
|
||
False positives are another thing entirely. A test that can discern design where there is none is like reading information into entrails, tea leaves, or flights of birds that isn’t there, which makes the test totally useless. Hence, a useful test needs to be heavily biased toward making false negatives, rejecting everything where there’s the slightest doubt and claiming a positive only when the evidence is overwhelming. Thinking of it as a net, we’d rather it let any number of false negatives slip through. But if it catches something, we want to be sure that it’s a real positive. How sure can we be?
|
||
What the criterion of specified complexity is saying is that once the improbabilities of a situation become too vast (27^28 possible combinations in the Scrabble example above), and the specification too tight (one line from Hamlet), chance is eliminated as a plausible cause, and design is indicated. Just where is the cutoff where chance becomes unacceptable? The French mathematician Emile Borel proposed 10^-50 as a universal probability bound below which chance could be precluded – in other words, a specified event as improbable as this could not be attributed to chance. 40 This is equivalent to saying it can be expressed in 166 bits of information. How so? Well, imagine a binary decision tree, where the option at each branch point is to go left or right. The first choice can be designated by “0” or “1,” which is another way of saying it encodes one bit of information. Since each branch leads to a similar decision point, the number of branches at the next level will be four, encoded by two bits: 00, 01, 10, and 11. By the time the tree gets to 166 levels, it will have sprouted 10^50 branches. The information to specify the path from the starting point to any one of the terminal points increases by one bit for each decision and hence can be expressed as a binary number of 166 bits.
|
||
The criterion that Dembski develops applies a bound of 10^-150. That’s 100 zeros more stringent than the limit beyond which Borel said chance can be discounted. This translates into 500 bits of information. 41
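The conversion between probability bounds and bits is simple arithmetic; the small Python sketch below (mine, not Dembski's) makes the figures quoted above easy to check.

    import math

    def bits(probability):
        """Number of bits equivalent to a given probability bound."""
        return -math.log2(probability)

    print(bits(1e-50))             # ~166 bits: Borel's universal bound
    print(bits(1e-150))            # ~498 bits: Dembski's bound, rounded to 500 in the text

    # The Scrabble sentence: 28 positions, each holding one of 27 symbols.
    combos = 27 ** 28
    print(combos)                  # about 1.2e40 possible arrangements
    print(bits(1.0 / combos))      # ~133 bits of specified information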
|
||
According to Dembski’s criterion, specified information of greater than 500 bits cannot be considered as having come about via chance processes. The bacterial flagellum that Behe presents as one of his cases of irreducible complexity is a whiplike rotary paddle used for propulsion, driven by an intricate molecular machine that includes an acid-powered engine, stator housing, O-rings, bushings, and a drive shaft, and is built from over 40 interacting proteins, every one of them essential. Its complex specified information is well above 500 bits. So are those of all the other cases Behe gives. And we’ve already come across improbabilities that are way beyond this bound, such as Fred Hoyle’s figure for the likelihood of the full complement of human proteins arising through chance, or Lee Spetner’s for speciation and convergence.
|
||
Many other examples could be cited. But those who commit a priori to a philosophy that says the universe consists of nothing but matter and motion must accept evolution. The worldview that they have stated as fact leaves no alternative. Things like fossils, genetics, probabilities, and complexity have no real bearing apart from a need for being somehow interpreted to fit, because the issue has already been decided independently of any evidence. So, to repeat what we said above, either mindless, inanimate matter has the capacity to organize itself purposelessly into the things we’ve been talking about, or some kind of intelligence caused it to be organized. Now let’s go back to the question posed right at the beginning. Based on what we see today, which belief system constrains permissible answers only to those permitted by a prespecified dogma, and which simply follows the evidence, without prejudice, to wherever it seems to be leading? Which, in other words, is the religion, and which is the science? Some defenders of the Darwinist view evade the issue by defining science as the study of naturalistic, materialistic phenomena and the search for answers to all things only in those terms. But what if the simple reality is that some questions don’t have answers in those terms? One response is that science could only be enriched by abandoning that restrictive philosophy and opening its horizons in the way the spirit of free inquiry was supposed to. The alternative could be unfortunate. For in taking such a position, science could end up excluding itself from what could well be some of the most important questions confronting us.
|
||
Section Notes
|
||
1 Himmelfarb, 1962
2 Dennett, 1995, p. 46
3 New York Times, April 9, 1989, Sec 7, p. 34
4 Dennett, 1995, pp. 515-516
5 Hogan, 1977
6 Hogan, 1988
7 Darwin, 1859, p. 184
8 The Origin of Species, 1872, 6th edition, John Murray, London, p. 468
9 The Origin of Species, 1872, 6th edition, John Murray, London, p. 309
10 Sunderland, 1998
11 Denton, 1985, p. 190
12 Johnson, Phillip, 1991, p. 51
13 Wells, 2000, Chapter 6
14 Sunderland, 1998, p. 86
15 Johnson, 1991, p. 79
16 See, for example, Sunderland, 1998, p. 94
17 Stanley, 1979, p. 39
18 Dawkins, 1986, p. 1
19 Macbeth, 1971, p. 5
20 According to Simpson, “anything tending to produce systematic, heritable change in populations between one generation and the next.” Quoted in Macbeth, 1971, p. 48
21 Macbeth, 1971, p. 48
22 See Wells, 2000, Chapter 7 for more details and a discussion on the implications of needing to falsify textbooks when we’re assured that the evidence for evolution is “overwhelming.” A full account of the story is available online at the Nature Institute, http://www.netfuture.org/index.html
23 Spetner, 1997, p. 63
24 Judson, 1979, p. 217
25 Denton, 1985, pp. 328-29
26 Spetner, 1997, p. 92
27 Hoyle, 1983, pp. 12-17
28 Sunderland, 1996, p. 152
29 Hoyle, 1983, Chapter 5
30 For examples of reviews see Ho and Saunders, 1979; Cook, 1977; Rosen and Buth, 1980
31 Original research reported in Hall, 1982
32 See Spetner, 1997, Chapter 7 for more examples
33 Spetner, 1997, p. 204
34 Simpson, 1951, p. 173
35 Behe, 1996
36 Behe, 1996, p. 20
37 Behe, 1996, p. 187
38 Dembski, 1998, 1999, 2002
39 Sober, 1993
40 Borel, 1962, p. 28
41 Dembski, 1998, Section 6.5
|
||
TWO
|
||
Of Bangs and Braids
Cosmology’s Mathematical Abstractions
|
||
It’s impossible that the Big Bang is wrong. – Joseph Silk, astrophysicist
|
||
Can we count on conventional science always choosing the incorrect alternative between two possibilities? I would vote yes, because the important problems usually require a change in paradigm, which is forbidden to conventional science.
|
||
– Halton Arp, observational astronomer
|
||
Mathematical Worlds – and This Other One
|
||
Mathematics is purely deductive. When something is said to be mathematically “proved,” it means that the conclusion follows rigorously and necessarily from the axioms. Of itself, a mathematical system can’t show anything as being “true” in the sense of describing the real world. All the shelves of volumes serve simply to make explicit what was contained in the assumptions. If some mathematical procedures happen to approximate the behavior of certain real-world phenomena over certain ranges sufficiently closely to allow useful predictions to be made, then obviously that can be of immense benefit in gaining a better understanding of the world and applying that knowledge to practical ends. But the only measure of whether, and if so to what degree, a mathematical process does in fact describe reality is actual observation. Reality is in no way obligated to mimic formal systems of symbol manipulation devised by humans.
|
||
Cosmologies as Mirrors
|
||
Advocates of this or that political philosophy will sometimes point to a selected example of animal behavior as a “natural” model that is supposed to tell us something about humans – even if their rivals come up with a different model exemplifying the opposite. I’ve never understood why people take much notice of things like this. Whether some kinds of ape are social and “democratic,” while others are hierarchical and “authoritarian” has to do with apes, and that’s all. It’s not relevant to the organizing of human societies. In a similar kind of way, the prevailing cosmological models adopted by societies throughout history – the kind of universe they believe they live in, and how it originated – tend to mirror the political, social, and religious fashion of the times.
|
||
Universes in which gods judged the affairs of humans were purpose-built and had beginnings. Hence, the Greek Olympians with their creation epics and thunderbolts, and mankind cast in a tragic role, heroic only in its powers to endure whatever fate inflicted. These also tend to be times of stagnation or decline, when the cosmos too is seen as running downhill from a state of initial perfection toward ruin that humans are powerless to avert. Redemption is earned by appeasing the supernatural in such forms as the God of Genesis and of the Christendom that held sway over Europe from the fall of the Roman Empire to the stirring of the Renaissance.
|
||
But in times of growth and confidence in human ability to build better tomorrows, the universe too evolves of itself, by its own internal powers of self-organization and improvement. Thoughts turn away from afterlives and retribution, and to things of the here and now, and the material. The gods, if they exist at all, are at best remote, preoccupied with their own concerns, and the cosmos is conceived as having existed indefinitely, affording time for all the variety and complexity of form to have come about through the operation of unguided natural forces. Thus, with Rome ruling over the known world, Lucretius expounded the atomism of Epicurus, in which accidental configurations of matter generated all of sensible physical reality and the diversity of living things. A millennium later, effectively the same philosophy reappeared in modern guise as the infinite machine set up by Newton and Laplace to turn the epochal wheels for Lyell and Darwin. True, Newton maintained a religious faith that he tried to reconcile with the emerging scientific outlook; but the cosmos that he discovered had no real need of a creator, and God was reduced to a kind of caretaker on the payroll, intervening occasionally to tweak perturbed orbits and keep the Grand Plan on track as it unfolded.
|
||
Even that token to tradition faded, and by the end of the nineteenth century, with Victorian exultation of unlimited Progress at its zenith, the reductionist goal of understanding all phenomena from the origin of life to the motions of planets in terms of the mechanical operations of natural processes seemed about complete. This was when Lord Kelvin declared that the mission of science was as good as accomplished, and the only work remaining was to determine the basic constants to a few more decimal places of accuracy.
|
||
That world and its vision self-destructed in the trenches of 1914-18. From the aftermath emerged a world of political disillusionment, roller-coaster economics, and shattered faith in human nature.
|
||
Mankind and the universe, it seemed, were in need of some external help again.
|
||
Matters of Gravity: Relativity’s Universes
|
||
In 1917, two years after developing the general relativity theory (GRT), Albert Einstein formulated a concept of a finite, static universe, into which he introduced the purely hypothetical quantity that he termed the “cosmological constant,” a repulsive force increasing with the distance between two objects in the way that the centrifugal force in a rotating body increases with radius. This was necessary to prevent a static universe from collapsing under its own gravitation. (Isaac Newton was aware of the problem and proposed an infinite universe for that reason.) But the solution was unstable, in that the slightest expansion would increase the repulsive force and decrease gravity, resulting in runaway expansion, while conversely the slightest contraction would lead to total collapse.
|
||
Soon afterward, the Dutch astronomer Willem de Sitter found a solution to Einstein’s equations that described an expanding universe, and the Russian mathematician Alexander Friedmann found another. Einstein’s static picture, it turned out, was one of three special cases among an infinity of possible solutions, some expanding, some contracting. Yet despite the excitement and publicity that the General Theory had aroused – publication of Einstein’s special relativity theory in 1905 had made comparatively little impact; his Nobel Prize, awarded in 1921, was for a 1905 paper on the photoelectric effect – the subject remained confined to the circle of probably not more than a dozen or so specialists who had mastered its intricacies until well into the 1920s. Then attention turned to the possible significance of observational data that had been accumulating since 1913, when the astronomer V. M. Slipher (who, as is often the case in instances like this, was looking for something else) inferred from redshifts of the spectra of about a dozen galaxies in the vicinity of our own that the galaxies were moving away at speeds ranging up to a million miles per hour.
|
||
An Aside on Spectra and Redshifts
|
||
A spectrum is the range of wavelengths over which the energy carried by a wave motion such as light, radio, sound, disturbances on a water surface, is distributed. Most people are familiar with the visible part of the Sun’s spectrum, ranging from red at the low-frequency end to violet at the high-frequency end, obtained by separating white sunlight into its component wavelengths by means of a prism. This is an example of a continuous, or “broadband” spectrum, containing energy at all wavelengths in the range. Alternatively, the energy may be concentrated in just a few narrow bands within the range.
|
||
Changes in the energy states of atoms are accompanied by the emission or absorption of radiation. In either case, the energy transfers occur at precise wavelength values that show as “lines,” whose strength and spacings form patterns – “line spectra” – characteristic of different atomic types. Emission spectra consist of bright lines at the wavelengths of the emitted energy. Absorption spectra show as dark lines marking the wavelengths at which energy is absorbed from a background source – for example, of atoms in the gas surrounding a star, which absorb certain wavelengths of the light passing through. From the line spectra found for different elements in laboratories on Earth, the elements present in the spectra from stars and other astronomical objects can be identified.
|
||
A “redshifted” spectrum means that the whole pattern is displaced from its normal position toward the red – longer wavelength – end. In other words, all the lines of the various atomic spectra are observed to lie at longer wavelength values than the “normal” values measured on Earth. A situation that would bring this about would be one where the number of waves generated in a given time were stretched across more intervening space than they “normally” would be. This occurs when the source of the waves is receding. The opposite state of affairs applies when the source is approaching and the wavelengths get compressed, in which case spectra are “blue-shifted.” Such alteration of wavelength due to relative motion between the source and receiver is the famous Doppler shift. 43 Textbooks invariably cite train whistles as an example at this point, so I won’t.
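For concreteness, here is a minimal Python sketch of the calculation (my own illustration; the hydrogen-alpha rest wavelength is a standard laboratory value, and the observed figure is invented for the example). The simple formula below is the non-relativistic approximation, adequate only for small shifts.

    # Redshift from an observed wavelength versus its laboratory ("rest") value,
    # and the recession velocity given by the simple non-relativistic formula.
    C = 299_792_458.0                     # speed of light, m/s

    def redshift(observed_nm, rest_nm):
        return (observed_nm - rest_nm) / rest_nm

    def recession_velocity(z):
        return C * z                      # a good approximation only for z << 1

    # Illustrative numbers only: the hydrogen-alpha line, 656.3 nm in the lab,
    # observed at 659.3 nm, gives z of about 0.0046, or roughly 1,370 km/s.
    z = redshift(659.3, 656.3)
    print(z, recession_velocity(z) / 1000.0, "km/s")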
|
||
A Universe in the Red and Lemaitre’s Primeval Atom
|
||
By 1924 the reports of redshifts from various observers had grown sufficiently for Carl Wirtz, a German astronomer, to note a correlation between the redshifts of galaxies and their optical faintness, which was tentatively taken as a measure of distance. The American astronomer Edwin Hubble had recently developed a new method for measuring galactic distances using the known brightnesses of certain peculiar variable stars, and along with his assistant, Milton Humason, conducted a systematic review of the data using the 60-inch telescope at the Mount Wilson Observatory in California, and later the 100-inch – the world’s largest at that time. In 1929 they announced what is now known as Hubble’s Law: that the redshift of galaxies increases steadily with distance. Although Hubble himself always seemed to have reservations, the shift was rapidly accepted as a Doppler effect by the scientific world at large, along with the startling implication that not only is the universe expanding, but that the parts of it that lie farthest away are receding the fastest.
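In its modern form the law is a one-line relation; the sketch below is my own Python illustration, using today's conventional value of the Hubble constant rather than any figure from the text.

    # Hubble's law: recession velocity proportional to distance, v = H0 * d.
    # H0 below is the modern conventional value of roughly 70 km/s per megaparsec,
    # not a figure quoted in the text.
    H0 = 70.0                                  # km/s per Mpc

    def recession_velocity(distance_mpc, h0=H0):
        return h0 * distance_mpc               # km/s

    for d in (10, 100, 1000):                  # distances in megaparsecs
        print(d, "Mpc ->", recession_velocity(d), "km/s")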
|
||
A Belgian priest, Georges Lemaitre, who was conversant with Einstein’s theory and had studied under Sir Arthur Eddington in England, and at Harvard where he attended a lecture by Hubble, concluded that the universe was expanding according to one of the solutions of GRT in which the repulsive force dominated. This still left a wide range of options, including models that were infinite in extent, some where the expansion arose from a state that had existed indefinitely, and others where the universe cycled endlessly through alternating periods of expansion and contraction. However, the second law of thermodynamics dictated that on balance net order degenerates invariably, one way or another, to disorder, and the process is irreversible. The organized energy of a rolling rock will eventually dissipate as heat in the ground as the rock is brought to a halt by friction, but the random heat motions of molecules in the ground never spontaneously combine to set a rock rolling. This carries the corollary that eventually everything will arrive at the same equilibrium temperature everywhere, at which point all further change must cease. This is obviously so far from being the case with the universe as seen today that it seemed the universe could only have existed for a limited time, and it must have arrived at its present state from one of minimum disorder, or “entropy.” Applying these premises, Lemaitre developed his concept of the “primeval atom,” in which the universe exploded somewhere between 10 billion and 20 billion years ago out of an initial point particle identified with the initial infinitely large singularity exhibited by some solutions to the relativistic equations. According to this “fireworks model,” which Lemaitre presented in 1931, the primeval particle expanded and split up into progressively smaller units the size of galaxies, then stars, and so forth in a process analogous to radioactive decay.
|
||
This first version of a Big Bang cosmology was not generally accepted. The only actual evidence offered was the existence of cosmic rays arriving at high energies from all directions in space, which Lemaitre argued could not come from any source visible today and must be a leftover product of the primordial breakdown. But this was disputed on the grounds that other processes were known which were capable of providing the required energy, and this proved correct. Cosmic-ray particles were later shown to be accelerated by electromagnetic forces in interstellar space. The theory was also criticized on the grounds of its model of stellar evolution based on a hypothetical process of direct matter-to-energy annihilation, since nuclear fusion had become the preferred candidate for explaining the energy output of stars, and Willem de Sitter showed that it was not necessary to assume GRT solutions involving a singularity. Further, the gloomy inevitability of a heat death was rejected as not being necessarily so, since whatever might seem true of the second law locally, nothing was known of its applicability to the universe as a whole. Maybe the world was deciding that the period that had brought about such events as the Somme, Verdun, and the end of Tsarist Russia had been an aberration, and was recovering from its pessimism. Possibly it’s significant, then, that the resurrection of the Big Bang idea came immediately following World War II.
|
||
After the Bomb: The Birth of the Bang
|
||
Gamow’s Nuclear Pressure-Cooker
|
||
In 1946, Russian-born George Gamow, who had worked on the theory of nuclear synthesis in the 1930s and been involved in the Manhattan Project, conjectured that if an atomic bomb could, in a fraction of a millionth of a second, create elements detectable at the test site in the desert years later, then perhaps an explosion on a colossal scale could have produced the elements making up the universe as we know it. Given high enough temperatures, the range of atomic nuclei found in nature could be built up through a succession starting with hydrogen, the lightest, which consists of one proton. Analysis of astronomical spectra showed the universe to consist of around 75 percent hydrogen, 24 percent helium, and the rest a mix of the various heavier elements, continuing on through lithium, beryllium, boron, and so on. Although all of the latter put together formed just a trace in comparison to the amount of hydrogen and helium, earlier attempts at constructing a theoretical model had predicted far less than was observed – the discrepancy being about ten orders of magnitude in the case of intermediate-mass elements such as carbon, nitrogen, and oxygen, and getting rapidly worse (in fact, exponentially) beyond those.
|
||
Using pointlike initial conditions of the GRT equations, Gamow, working with Ralph Alpher and Robert Herman, modeled the explosion of a titanic superbomb in which, as the fireball expanded, the rapidly falling temperature would pass a point where the heavier nuclei formed from nuclear fusions in the first few minutes would cease being broken down again. The mix of elements that existed at that moment would thus be “locked in,” providing the raw material for the subsequently evolving universe. By adjusting the parameters that determined density, Gamow and his colleagues developed a model that within the first thirty minutes of the Bang yielded a composition close to that which was observed.
|
||
Unlike Lemaitre’s earlier proposal, the Gamow theory was well received by the scientific community, particularly the new generation of physicists versed in nuclear technicalities, and became widely popularized. Einstein had envisaged a universe that was finite in space but curved and hence unbounded, as the surface of a sphere is in three dimensions. The prevailing model now became one that was also finite in time. Although cloaked in the language of particle physics and quantum mechanics, the return to what was essentially a medieval worldview was complete, raising again all the metaphysical questions about what had come before the Bang. If space and time themselves had come into existence along with all the matter and energy of the universe as some theorists maintained, where had it all come from? If the explosion had suddenly come about from a state that had endured for some indefinite period previously, what had triggered it? It seemed to be a one-time event. By the early 1950s, estimates of the total amount of mass in the universe appeared to rule out the solutions in which it oscillated between expansion and contraction. There wasn’t enough to provide sufficient gravity to halt the expansion, which therefore seemed destined to continue forever. What the source of the energy might have been to drive such an expansion – exceeding all the gravitational energy contained in the universe – was also an unsolved problem.
|
||
Hoyle and Supernovas as “Little Bang” Element Factories
|
||
Difficulties for the theory mounted when the British astronomer Fred Hoyle showed that the unique conditions of a Big Bang were not necessary to account for the abundance of heavy elements; processes that are observable today could do the job. It was accepted by then that stars burned by converting hydrogen to helium, which can take place at temperatures as low as 10 million degrees – attainable in a star’s core. Reactions beyond helium require higher temperatures, which Gamow had believed stars couldn’t achieve. However, the immense outward pressure of fusion radiation balanced the star’s tendency to fall inward under its own gravity. When the hydrogen fuel was used up, its conversion to helium would cease, upsetting the balance and allowing the star to collapse. The gravitational energy released in the collapse would heat the core further, eventually reaching the hundred million degrees or so necessary to initiate the fusion of helium nuclei into carbon, with other elements appearing through neutron capture along the lines Gamow had proposed. A new phase of radiation production would ensue, arresting the collapse and bringing the star into a new equilibrium until the helium was exhausted. At that point another cycle would repeat in which oxygen could be manufactured, and so on through to iron, in the middle of the range of elements, which is as far as the fusion process can go. Elements heavier than iron would come about in the huge supernova explosions that would occur following the further collapse of highly massive stars at the end of their nuclear burning phase – “little bangs” capable of supplying all the material required for the universe without need of any primordial event to stock it up from the beginning.
|
||
This model also accounted for the observational evidence that stars varied in their makeup of elements, which was difficult to explain if they all came from the same Big Bang plasma. (It also followed that any star or planet containing elements heavier than iron – our Sun, the Earth, indeed the whole Solar System, for example – must have formed from the debris of an exploded star from an earlier generation of stars.) Well, the images of starving postwar Europe, shattered German cities, Stalingrad, and Hiroshima were fading. The fifties were staid and prosperous, and confidence in the future was returning. Maybe it was time to rethink cosmology again.
|
||
The Steady-State Theory
|
||
Sure enough, Fred Hoyle, having dethroned the Big Bang as the only mechanism capable of producing heavy elements, went on, with Thomas Gold and Herman Bondi, to propose an alternative that would replace it completely. The Hubble redshift was still accepted by most as showing that the universe we see is expanding away in all directions to the limits of observation. But suppose, Hoyle and his colleagues argued, that instead of this being the result of a one-time event, destined to die away into darkness and emptiness as the galaxies recede away from each other, new matter is all the time coming into existence at a sufficient rate to keep the overall density of the universe the same. Thus, as old galaxies disappear beyond the remote visibility “horizon” and are lost, new matter being created diffusely through all of space would be coming together to form new galaxies, resulting in a universe populated by a whole range of ages – analogous to a forest consisting of all forms of trees, from young saplings to aging giants.
|
||
The rate of creation of new matter necessary to sustain this situation worked out at one hydrogen atom per year in a cube of volume measuring a hundred meters along a side, which would be utterly undetectable. Hence, the theory was not based on any hard observational data. Its sole justification was philosophical. The long-accepted “cosmological principle” asserted that, taken at a large-enough scale, the universe looked the same anywhere and in any direction. The Hoyle-Bondi-Gold approach introduced a “perfect cosmological principle” extending to time also, making the universe unchanging. It became known, therefore, as the steady-state theory.
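It is easy to see why such a creation rate would be undetectable; the following back-of-the-envelope Python sketch (mine, using the standard mass of a hydrogen atom) converts the figure quoted above into a mass density rate.

    # One hydrogen atom per year appearing in a cube 100 meters on a side,
    # expressed as a mass density rate.
    HYDROGEN_MASS_KG = 1.67e-27
    CUBE_VOLUME_M3 = 100.0 ** 3                # one million cubic meters

    rate = HYDROGEN_MASS_KG / CUBE_VOLUME_M3
    print(rate, "kg per cubic meter per year")     # about 1.7e-33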
|
||
The steady-state model had its problems too. One in particular was that surveys of the more distant galaxies, and hence ones seen from an earlier epoch because of the delay in their light reaching Earth, showed progressively more radio sources; hence the universe hadn’t looked the same at all times, and so the principle of its maintaining a steady, unvarying state was violated. But it attracted a lot of scientists away from the Big Bang fold. The two major theories continued to rival each other, each with its adherents and opponents. And so things remained through into the sixties.
|
||
Then, in 1965, two scientists at Bell Telephone Laboratories, Arno Penzias and Robert Wilson, after several months of measurement and double-checking, confirmed a faint glow of radiation emanating evenly from every direction in the heavens with a frequency spectrum corresponding to a temperature of 2.7°K. 44 This was widely acclaimed and publicized as settling the issue in favor of the Big Bang theory.
|
||
The Cosmic Background Radiation: News but Nothing New
|
||
Big Bang had been wrestling with the problem of where the energy came from to drive the expansion of the “open” universe that earlier observations had seemed to indicate – a universe that would continue expanding indefinitely due to there being too little gravitating mass to check it. Well, suppose the estimates were light, and the universe was in fact just “closed” – meaning that the amount of mass was just enough to eventually halt the expansion, at which point everything would all start falling in on itself again, recovering the energy that had been expended in driving the expansion. This would simplify things considerably, making it possible to consider an oscillating model again, in which the current Bang figures as simply the latest of an indeterminate number of cycles. Also, it did away with all the metaphysics of asking who put the match to whatever blew up, and what had been going on before.
|
||
A group at Princeton looked into the question of whether such a universe could produce the observed amount of helium, which was still one of Big Bang’s strong points. (Steady state had gotten the abundance of heavier elements about right but was still having trouble accounting for all the helium.) They found that it could. With the conditions adjusted to match the observed figure for helium, expansion would have cooled the radiation of the original fireball to a diffuse background pervading all of space that should still be detectable – at a temperature of 30°K. 45 Gamow’s collaborators, Ralph Alpher and Robert Herman, in their original version had calculated 5°K for the temperature resulting from the expansion alone, which they stated would be increased by the energy production of stars, and a later publication of Gamow’s put the figure at 50°K. 46
|
||
The story is generally repeated that the discovery of the 2.7°K microwave background radiation confirmed precisely a prediction of the Big Bang theory. In fact, the figures predicted were an order of magnitude higher. We’re told that those models were based on an idealized density somewhat higher than that actually reported by observation, and (mumble-mumble, shuffle-shuffle) it’s not really too far off when you allow for the uncertainties. In any case, the Big Bang proponents maintained, the diffuseness of this radiation across space, emanating from no discernible source, meant that it could only be a relic of the original explosion.
|
||
It’s difficult to follow the insistence on why this had to be so. A basic principle of physics is that a structure that emits wave energy at a given frequency (or wavelength) will also absorb energy at the same frequency – a tuning fork, for example, is set ringing by the same tone that it sounds when struck.
|
||
An object in thermal equilibrium with – i.e., that has reached the same temperature as – its surroundings will emit the same spectrum of radiation that it absorbs. Every temperature has a characteristic spectrum, and an ideal, perfectly black body absorbing and reradiating totally is said to be a “blackbody” radiator at that temperature. The formula relating the total radiant energy emitted by a blackbody to its temperature was found experimentally by Joseph Stefan in 1879 and derived theoretically by Ludwig Boltzmann in 1884. Thus, given the energy density of a volume, it was possible to calculate its temperature.
|
||
Many studies had applied these principles to estimating the temperature of “space.” These included Guillaume (1896), who obtained a figure of 5°-6°K, based on the radiative output of stars; Eddington (1926), 3.18°K; Regener (1933), 2.8°K, allowing also for the cosmic ray flux; Nernst (1938), 0.75°K; Herzberg (1941), 2.3°K; Finlay-Freundlich (1953 and 1954), using a “tired light” model for the redshift (light losing energy due to some static process not involving expansion), 1.9°K to 6°K. 47 Max Born, discussing this last result in 1954, and the proposal that the mechanism responsible for “tiring” the light en route might be photon-photon interactions, concluded that the “secondary photons” generated to carry away the small energy loss suffered at each interaction would be in the radar range. The significant thing about all these results is that they were based on a static, nonexpanding universe, yet consistently give figures closer to the one that Arno Penzias and Robert Wilson eventually measured than any of the much-lauded predictions derived from Big Bang models.
|
||
Furthermore, the discrepancy was worse than it appeared. The amount of energy in a radiation field is proportional to the fourth power of the temperature, which means that the measured background field was thousands of times less than was required by the theory. Translated into the amount of mass implied, this measurement made the universe even more diffuse than Gamow’s original, nonoscillating model, not denser, and so the problem that oscillation had been intended to solve – where the energy driving the expansion had come from – became worse instead of better. An oscillating model was clearly ruled out. But with some modifications to the gravity equations – justified by no other reason than that they forced an agreement with the measured radiation temperature – the open-universe version could be preserved, and at the same time made to yield abundances for helium, deuterium, and lithium which again were close to those observed. The problem of what energy source propelled this endless expansion was still present – in fact exacerbated – but quietly forgotten. Excited science reporters had a story, and the New York Times carried the front-page headline “Signals Imply a ‘Big Bang’ Universe.”
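The "fourth power" point can be checked in a couple of lines of Python; this sketch (my own arithmetic, using the temperatures quoted above) shows how large the energy shortfall actually was.

    # Radiant energy density scales as the fourth power of temperature
    # (the Stefan-Boltzmann relation), so compare predictions with measurement.
    measured = 2.7                              # Penzias and Wilson, in kelvins
    for predicted in (30.0, 50.0):              # Princeton figure; Gamow's later figure
        ratio = (predicted / measured) ** 4
        print(predicted, "K predicted ->", round(ratio), "times the measured energy")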
|
||
Resting upon three pillars of evidence – the Hubble redshifts, light-element abundance, and the existence of the cosmic background radiation – Big Bang triumphed and became what is today the accepted standard cosmological model.
|
||
Quasar and Smoothness Enigmas: Enter, the Mathematicians.
|
||
At about this time, a new class of astronomical objects was discovered that came to be known as quasars, with redshifts higher than anything previously measured, which by the conventional interpretation of redshift made them the most distant objects known. To be as bright as they appeared at those distances they would also have to be astoundingly energetic, emitting up to a hundred thousand times the energy radiated by an entire galaxy. The only processes that could be envisaged as capable of pouring out such amounts of energy were ones resulting from intense gravity fields produced by the collapse of enormous amounts of mass. This was the stuff of general relativity, and with Big Bang now the reigning cosmology, the field became dominated by mathematical theoreticians. By 1980, around ninety-five percent of papers published on the subject were devoted to mathematical models essentially sharing the same fundamental assumptions. Elegance, internal consistency, and preoccupation with technique replaced grounding in observation as modelers produced equations from which they described in detail and with confidence what had happened in the first few fractions of a millionth of a second of time, fifteen billion years ago. From an initial state of mathematical perfection and symmetry, a new version of Genesis was written, rigorously deducing the events that must have followed. That the faith might be... well, wrong, became simply inconceivable.
|
||
But in fact, serious disagreements were developing between these idealized realms of thought and what astronomers surveying reality were actually finding. For one thing, despite all the publicity it had been accorded as providing the “clincher,” there was still a problem with the background radiation. Although the equations could be made to agree with the observed temperature, the observed value itself was just too uniform – everywhere. An exploding ball of symmetrically distributed energy and particles doesn’t form itself into the grossly uneven distribution of clustered matter and empty voids that we see. It simply expands as a “gas” of separating particles becoming progressively more rarified and less likely to interact with each other to form into anything. To produce the galaxies and clusters of galaxies that are observed, some initial unevenness would have to be present in the initial fireball to provide the focal points where condensing matter clouds would gravitate together and grow. Such irregularities should have left their imprint as hot spots on the background radiation field, but it wasn’t there. Observation showed the field to be smooth in every direction to less than a part in ten thousand, and every version of the theory required several times that amount. (And even then, how do galaxies manage to collide in a universe where they’re supposed to be rushing apart?)
|
||
Another way of stating this was that the universe didn’t contain enough matter to have provided the gravitation for galaxies to form in the time available. There needed to be a hundred times more of it than observation could account for. But it couldn’t simply be ordinary matter lurking among or between the galaxies in some invisible form, because the abundance of elements also depended critically on density, and increasing density a hundredfold would upset one of the other predictions that the Big Bang rested on, producing far too much helium and not enough deuterium and lithium. So another form of matter – “dark matter” – was assumed to be there with the required peculiar properties, and the cosmologists turned to the particle physicists, who had been rearing their own zoo of exotic mathematical creations, for entities that might fill the role. Candidates included heavy neutrinos, axions, a catch-all termed “weakly interacting massive particles,” or “WIMPS,” photinos, strings, superstrings, quark nuggets, none of which had been observed, but had emerged from attempts at formulating unified field theories. The one possibility that was seemingly impermissible to consider was that the reason why the “missing mass” was missing might be that it wasn’t there.
|
||
Finally, to deal with the smoothness problem and the related “flatness” problem, the notion of “inflation” was introduced, whereby the universe began in a superfast expansion phase of doubling in size every 10^-35 seconds until 10^-33 seconds after the beginning, at which point it consisted of regions flung far apart but identical in properties as a result of having been all born together, whereupon the inflation suddenly ceased and the relatively sluggish Big Bang rate of expansion took over and has been proceeding ever since.
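Taking the quoted figures at face value, the arithmetic of that inflationary phase is easy to reproduce; this is only a back-of-the-envelope Python sketch of the numbers as stated, not a statement about any particular inflationary model.

    # Doubling every 1e-35 seconds from roughly t = 0 until t = 1e-33 seconds.
    doublings = 1e-33 / 1e-35
    print(doublings)                   # about 100 doublings
    print(2.0 ** doublings)            # overall growth factor, roughly 1.3e30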
|
||
Let’s pause for a moment to reflect on what we’re talking about here. We noted in the section on evolution that a picosecond, 10^-12 seconds, is about the time light would take to cross the width of a human hair. If we represent a picosecond by the distance to the nearest star, Alpha Centauri (4.3 light-years), then, on the same scale, 10^-35 seconds would measure around half a micron, or a quarter the width of a typical bacterium – far below the resolving power of the human eye. Fine-tuning of these mathematical models reached such extremes that the value of a crucial number, specified to fifty-eight decimal places at an instant some 10^-43 seconds into the age of the universe, made the difference between its collapsing or dispersing in less than a second.
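The scale analogy can be checked numerically; here is a small Python sketch of my own using the standard length of a light-year.

    # Map one picosecond (1e-12 s) onto the 4.3 light-years to Alpha Centauri,
    # then see what 1e-35 s corresponds to on the same scale.
    LIGHT_YEAR_M = 9.461e15
    scale = 4.3 * LIGHT_YEAR_M / 1e-12          # meters of analogy-distance per second

    print(1e-35 * scale, "m")                   # about 4e-7 m, i.e. roughly half a micron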
|
||
But theory had already dispersed out of sight from reality anyway. By the second half of the 1980s, cosmic structures were being discovered and mapped that could never have come into being since the time of the Big Bang, whatever the inhomogeneities at the beginning or fast footwork in the first few moments to smooth out the background picture. The roughly spherical, ten-million-or-so-light-year-diameter clusters of galaxies themselves turned out to be concentrated in ribbonlike agglomerations termed superclusters, snaking through space for perhaps several hundred million light-years, separated by comparatively empty voids. And then the superclusters were found to be aligned to form planes, stacked in turn as if forming parts of still larger structures – vast sheets and “walls” extending for billions of light-years, in places across a quarter of the observable universe. The problem for Big Bang is that relative to the sizes of these immense structures, the component units that form them are moving too slowly for these regularities to have formed in the time available. In the case of the largest void and shell pattern identified, at least 150 billion years would have been needed – eight times the longest that Big Bang allows. New ad-hoc patches made their appearance: light had slowed down, so things had progressed further than we were aware; another form of inflation had accelerated the formation of the larger, early structures, which had then been slowed down by hypothetical forces invented for the purpose. But tenacious resistance persisted to any suggestion that the theory could be in trouble.
|
||
Yet the groundwork for an alternative picture that perhaps explains all the anomalies in terms of familiar, observable processes had been laid in the 1930s.
|
||
The Plasma Universe
|
||
Hannes Alfvén, the Pioneer: Cosmic Cyclotrons.
|
||
Hannes Alfvén studied the new field of nuclear physics at the University of Uppsala, in Sweden, and received his doctorate in 1934. Some of his first research work was on cosmic rays, which Lemaitre had wrongly attributed to debris from the primeval atom in his first version of a Big Bang theory. Although such renowned names as America’s Robert Millikan and Britain’s Sir James Jeans were still ascribing them to some kind of nuclear fission or fusion, Alfvén followed the line of the Norwegian experimental scientist Kristian Birkeland in proposing electromagnetic processes. This set the tone of what would characterize his approach to science through life: reliance on observation in the laboratory as a better guide to understanding the real world than deduction from theory, and a readiness to question received wisdom and challenge the authority of prestigious scientists.
|
||
That decade had seen the development of the cyclotron accelerator for charged particles, which uses an electric field to get them up to speed and a magnetic field to confine them in circular paths. (Electrical dynamics are such that a particle moving through a magnetic field experiences a force at right angles to the direction of motion – like that of a ship’s rudder.) It had been established that the Sun possesses a magnetic field, which seemed likely to be the case with other stars also. A binary system of two stars orbiting each other – of which there are many – could form, Alfvén theorized, the components of a gigantic natural cyclotron capable of accelerating particles of the surrounding plasma to the kinds of energies measured for cosmic rays. This would also explain why they arrived equally from all directions, until then taken as indicating that their source lay outside the galaxy. The streams of high-energy particles would form huge electrical currents flowing through space – Alfvén estimated them to be typically in the order of a billion amperes – which would generate magnetic fields traversing the galaxy. These in turn would react back on the cosmic ray particles, sending them into all manner of curving and spiraling paths, with the result that those happening to arrive at the Earth could appear to have come from anywhere.
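The underlying relation here is the gyroradius formula, r = mv/qB; the Python sketch below is my own illustration with invented but plausible values for a cosmic-ray proton in a weak interstellar field, not figures taken from Alfvén.

    # A charged particle crossing a magnetic field follows a circular arc of
    # radius r = m*v / (q*B). Values below are invented but plausible:
    # a proton at a tenth of light speed in a one-nanotesla interstellar field.
    PROTON_MASS_KG = 1.67e-27
    PROTON_CHARGE_C = 1.60e-19

    def gyroradius(mass_kg, speed_m_s, charge_c, field_tesla):
        return mass_kg * speed_m_s / (charge_c * field_tesla)

    r = gyroradius(PROTON_MASS_KG, 3.0e7, PROTON_CHARGE_C, 1.0e-9)
    print(r, "m")          # about 3e8 m: even very weak fields bend cosmic-ray paths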
|
||
It would be twenty years – not until the fifties – before the electromagnetic acceleration of cosmic rays was generally accepted. The existence of large-scale plasma currents was not confirmed until the seventies. At the time Alfvén put forward his ideas, virtually all scientists believed that space had to be an empty, nonconducting vacuum. One reason why they resisted the notion of an electrically active medium was that it complicated the elegant, spherically symmetrical mathematics of fields constrained to isolated bodies. It often happens when ideas come before their time that when they are eventually accepted, the person who originated them gets forgotten. Ten years after Alfvén’s paper, the electromagnetic acceleration of cosmic rays was proposed by Enrico Fermi and has since been known as the Fermi process.
|
||
Alfvén next applied these concepts to the aurora, which had also interested Birkeland, and explained the effect as the result of plasma currents from the Sun being deflected to the Earth’s poles by its magnetic field, where they produce displays of light by ionizing atoms in the upper atmosphere. (The same process takes place in a neon tube, where the applied voltage creates an ionizing current through a gas. The gas atoms absorb energy from the current and reemit it as visible light.) Although noncontroversial today, this was again resisted for a long time by a mathematically indoctrinated orthodoxy who thought of space in terms of an idealized vacuum
and refused to accept that it could conduct electricity. Alfvén used mathematics more in the mode of an engineer – as a tool for quantifying and understanding better what is observed, not as something to determine what reality is allowed to be. On one occasion, in a visit to Alfvén’s home in Sweden, the Cambridge theoretician Sydney Chapman, who had steadfastly opposed Alfvén’s views and declined to debate them, refused to go down to the basement to observe a model that Alfvén had constructed in the hope of swaying him. Alfvén commented, “It was beneath his dignity as a mathematician to look at a piece of laboratory apparatus!” 48
The tradition of the professors who wouldn’t look through Galileo’s telescope was alive and well, it seemed. It wasn’t until the mid 1960s that satellites began detecting the highly localized incoming currents in the auroral zones that proved Alfvén to have been correct.
The Solar System as a Faraday Generator
But Alfvén was already turning to larger things. The currents that produced the aurora led back to the Sun, where the rotating vortexes that appear as sunspots act as generators in the Sun’s magnetic field, accelerating plasma particles outward in flares and prominences that can cause displays extending for hundreds of thousands of miles above the surface. According to the conventional picture of how the Solar System had formed, which went back to Pierre-Simon Laplace, the Sun and planets condensed out of a spinning disk of gas and dust as it contracted under gravity. But there were two problems with this. The first was that as a rotating body contracts it speeds up (conservation of angular momentum), and calculation showed that the outwardly directed centrifugal force would balance gravity and halt any further collapse long before the core region became dense enough to form a star. To reach the form it is in today, Laplace’s disk needed to get rid of the greater part of the angular momentum it had started out with – in fact, about 99.9 percent of it. Second, of the amount that remained, most ought to have ended up concentrated in the Sun, causing it to rotate in something like thirteen hours instead of the twenty-eight days that is found. In fact, most of the angular momentum in the Solar System lies with the planets – the lion’s share of it in Jupiter, most of the rest in Saturn, and only a trace distributed among the remaining rubble – leaving barely 2 percent in the Sun itself. How, then, did the bulk of the angular momentum get transferred to where it is?
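For readers who like to check such figures, a rough back-of-the-envelope comparison makes the imbalance concrete. The Python sketch below uses round published values for Jupiter's orbit and the Sun's size and spin, and treats the Sun as a centrally condensed sphere (a moment-of-inertia factor of about 0.07); the numbers are approximations chosen for illustration, but the two-orders-of-magnitude gap between Jupiter's orbital angular momentum and the Sun's spin is the point.

    import math

    # Rough round-number inputs (SI units); illustrative values only
    M_jup = 1.9e27        # Jupiter mass, kg
    r_jup = 7.8e11        # Jupiter orbital radius, m
    v_jup = 1.3e4         # Jupiter orbital speed, m/s

    M_sun = 2.0e30        # Sun mass, kg
    R_sun = 7.0e8         # Sun radius, m
    k_sun = 0.07          # moment-of-inertia factor for a centrally condensed star
    P_sun = 28 * 86400    # Sun rotation period (~28 days), seconds

    L_jup = M_jup * v_jup * r_jup           # orbital angular momentum, m*v*r
    I_sun = k_sun * M_sun * R_sun**2        # Sun's moment of inertia
    L_sun = I_sun * 2 * math.pi / P_sun     # spin angular momentum, I*omega

    print(f"Jupiter orbital L ~ {L_jup:.1e} kg m^2/s")
    print(f"Sun spin L        ~ {L_sun:.1e} kg m^2/s")
    print(f"ratio Jupiter/Sun ~ {L_jup / L_sun:.0f}")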
If the central region, rotating faster as it contracts, develops a magnetic field, the field will sweep through the surrounding cloud of plasma, inducing currents to flow inward toward the core. Because the currents are in a magnetic field, they will experience a force accelerating the plasma in the direction of the rotation – in other words, transferring angular momentum outward from the central region and allowing it to collapse further. Following the field lines, the currents will complete a return path back via the proto-Sun, the effect there being to slow its rotation. A metal disk rotated in a magnetic field shows the same effect and is known as a homopolar generator; Michael Faraday demonstrated it in 1831.
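To see the principle in miniature, the voltage a Faraday disk develops between its axle and its rim is V = 0.5 * omega * B * R^2. The snippet below plugs in made-up bench-top numbers purely to illustrate the relation the protostellar cloud is being compared to; none of the figures come from the source.

    import math

    def faraday_disk_emf(b_tesla, radius_m, rpm):
        """Voltage between axle and rim of a conducting disk spinning in an
        axial magnetic field: V = 0.5 * omega * B * R**2."""
        omega = rpm * 2 * math.pi / 60.0
        return 0.5 * omega * b_tesla * radius_m**2

    # Hypothetical bench-top example: a 10 cm disk at 3000 rpm in a 0.5 tesla field
    print(f"{faraday_disk_emf(0.5, 0.10, 3000) * 1000:.0f} millivolts between axle and rim")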
A Skater’s Waltz Among the Planets
Two parallel wires carrying currents flowing in the same direction experience a force that draws them together. If the conducting medium is a plasma rather than wires, the plasma will tend to pull itself together into filaments. But the movement of charged plasma particles toward
each other also constitutes a current that generates its own magnetic field, with the result that the filaments tend to twist around each other like the braided strands of a thread. These filamentary structures are seen clearly in laboratory plasma discharges, solar prominences, and the shimmering draperies of the aurora, kinking and writhing unpredictably under their own internally generated fields, as fusion researchers trying to contain plasmas have learned to their consternation. This braiding repeats on a larger scale like threads twisting to form ropes, creating inhomogeneity and complexity as an inherent tendency of plasma structures.
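The attraction described here is the standard force between parallel currents, F/L = mu0 I1 I2 / (2 pi d). The sketch below applies that formula, purely for a sense of scale, to two filaments carrying currents of the billion-ampere order mentioned earlier and separated by an assumed one light-year; the real plasma dynamics are far richer, but it shows why such filaments pull together over enormous lengths.

    import math

    MU_0 = 4e-7 * math.pi    # permeability of free space, T*m/A

    def force_per_length(i1_amps, i2_amps, separation_m):
        """Attractive force per unit length between two parallel currents flowing
        in the same direction: F/L = mu0 * I1 * I2 / (2 * pi * d)."""
        return MU_0 * i1_amps * i2_amps / (2 * math.pi * separation_m)

    LIGHT_YEAR = 9.46e15   # metres
    f_per_m = force_per_length(1e9, 1e9, LIGHT_YEAR)   # two assumed billion-ampere currents
    print(f"{f_per_m:.1e} newtons per metre of filament")
    print(f"{f_per_m * LIGHT_YEAR:.1e} newtons summed over a light-year of length")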
This mechanism also accounted for the origin of the angular momentum of a planetary system, which straightforward collapse under gravitation had never really been able to explain. Two forces applied to a rigid object along lines of action that are offset from each other will cause it to rotate about some center, and are said to exert a “torque,” or turning moment, about that point. In the same way, two bodies moving along such offset lines possess angular momentum about that point, even though each is traveling in a straight line. This can be seen with two skaters approaching each other on paths that are slightly offset. If they link arms as they pass, they will go into a spin about each other; angular momentum has to be conserved, and so it must have been there all along. In a plasma made up of particles of differing masses such as electrons and protons, a magnetic field will accelerate the masses at different rates, concentrating them into polarized regions of opposite charge. When two current filaments are pulled together under their mutual interaction, the forces acting are not center-to-center but offset, like the courses of the skaters. This is what causes filaments to twist around each other and braid into more complex forms.
By the sixties Alfvén was proposing this as the basis of the formation of the entire Solar System. It was generally rejected on the grounds that electrical currents could not be supported in such plasmas. Ironically, the reason that was given went back to work on solar electrodynamics that Alfvén himself and a few colleagues had done during the early years of World War II, in which Sweden remained neutral. For an electrical current to flow, there must be an electric field maintaining a voltage difference to drive it, in the same way that for a water current to flow, a pipe must have a gradient to maintain a pressure difference. But, it was argued, a conducting plasma would short out any electric field that tried to form, preventing any voltage difference from developing, and so no current could be driven.
This does come close to being true in the Sun, and the success of Alfvén’s own theory in representing solar phenomena was used as justification for treating all plasma models the same way. Alfvén tried to point out that the limitation on electric fields only applied to dense plasmas, but it was in vain. Whereas before his ideas had been opposed on the grounds of space being a mathematically idealized insulator, now the criticism was that he couldn’t be right because the space he described was assumed to be a perfect conductor. Nevertheless, his earlier work had been so thoroughly vindicated, providing much of what became standard reference material for plasma work, that in 1970 Alfvén was awarded a Nobel Prize, particular mention being made of the very theory whose limitations he had been trying to get the physics community to appreciate. He probably made history by being the only recipient of the prize to criticize, at the award ceremony, the reasons for which his own work was being recognized. “But it is only the plasma that does not understand how beautiful the theories are,” he said, “and absolutely refuses to obey them.” 49
Space probes pushing out to Jupiter, Saturn, then Uranus through the end of the seventies and
into the eighties confirmed the whole system of magnetic fields, ionization belts, and twisting plasma currents that Alfven had theorized. This time the initial proponent of the ideas that led to it all was not overlooked. The vast plasma circuits extending across space are known today as Birkeland currents.
Solar System to Galaxy
After spending a short while by invitation in the Soviet Union, in 1967 Alfven moved to the U.S.A. and settled in San Diego. Electrical forces, not gravity, he was by now convinced, had been the primary influence in shaping the Solar System. Gravitation became a significant factor only later, when the natural tendency of plasmas to organize coherent structures out of a diffuse medium at much faster rates had already produced higher-density regions – the “clumpiness” that Big Bang cosmologists had been unable to bring about by means of gravity alone. Only when matter cooled sufficiently for electrically neutral atoms to form could objects like planets arise that moved essentially in response to gravity alone and which allowed the familiar celestial dynamics that worked well enough within the local neighborhood of the Solar System. But local behavior couldn’t be extrapolated to describe a universe existing 99 percent in the form of plasma in stars at temperatures of millions of degrees or charged particles streaming through space.
Wasn’t the disk-shaped galaxy little more than scaled-up Solar-System geometry? A protogalaxy rotating in an intergalactic magnetic field would generate electric fields in the same way, which in turn would produce filamentary currents flowing inward through the galactic plane to the center, and then up along the rotational axis to loop back in a return path reentering around the rim. As in the case of the Solar System, the self-“pinching” effect would compress these currents into twisting vortexes sweeping around the galaxy like immense fan blades and gathering the matter together into high-density regions along which proto-stars would form as subvortexes. However, it will be a long time yet before man-made probes are able to venture out into the galactic disk with instruments to test such theories.
Peratt’s Models and Simulations: Galaxies in the Laboratory
Encouragement came, nevertheless, from a different direction. In 1979, Anthony Peratt, who had been a graduate student of Alfvén’s ten years previously, was working with the aerospace defense contractor Maxwell Laboratories on a device called Blackjack V, which generated enormous pulses of electrical power – 10 trillion watts! – to vaporize wires into filaments of plasma, producing intense bursts of X rays. The purpose was to simulate the effects of the electromagnetic pulse produced by a hydrogen bomb on electronics and other equipment. High-speed photographs showed the filaments of plasma moving toward each other under the attraction of their magnetic fields, and then wrapping around each other in tight spiral forms strikingly suggestive of familiar astronomical pictures of galaxies. Computer simulations of plasma interactions that Peratt performed later at the Los Alamos National Laboratory duplicated with uncanny faithfulness the features of all known galaxy types. By varying the parameters of the simulations, Peratt was able to match the result with every one of the pictures shown in Halton Arp’s Atlas of Peculiar Galaxies and guess with confidence just what electromagnetic forces were shaping the galaxies.
These simulations also suggested a possible answer to another mystery that astronomers had been debating for a long time. In a galaxy held together purely by gravity, the velocity of the component stars about the center as it rotates should decrease with distance from it – as with the Solar System, in which the outer planets move more slowly in their orbits around the Sun. Observations, however, show that the speeds of stars orbiting the galactic center remain fairly constant regardless of distance. This is just what the simulations showed would be expected of an electrically formed galaxy, where the spiral arms form coherent structures that trail back like the cords of a gigantic Weed Eater, moving with the same velocity along their whole length. Conventional theory had been forced to postulate an invisible halo of the strange gravitating but otherwise noninteracting dark matter surrounding a galaxy – there for no other reason than to produce the desired effect. But with electromagnetic forces, behaving not peculiarly but in just the way they are observed to on Earth, the effect emerges naturally.
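The contrast can be seen with one line of Newtonian mechanics: if essentially all of a galaxy's mass sat at its center, the orbital speed at radius r would fall off as v = sqrt(GM/r), just as planetary speeds do. The fragment below assumes, purely for illustration, a central mass of about a hundred billion Suns and prints the Keplerian speeds, which drop steadily with radius; observed stellar speeds instead stay roughly flat out to the visible edge.

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # solar mass, kg
    KPC = 3.086e19     # metres per kiloparsec

    def keplerian_speed(central_mass_kg, radius_m):
        """Circular orbital speed if all the mass were concentrated at the centre."""
        return math.sqrt(G * central_mass_kg / radius_m)

    M_galaxy = 1e11 * M_SUN    # assumed central mass, ~10^11 Suns (illustrative)
    for r_kpc in (2, 5, 10, 20, 30):
        v = keplerian_speed(M_galaxy, r_kpc * KPC)
        print(f"r = {r_kpc:2d} kpc  ->  v ~ {v / 1000:3.0f} km/s")
    # Observed rotation curves stay roughly flat instead of falling like this,
    # which is the discrepancy discussed in the text.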
An Explanation for X-ray Flashes
The most intense X-ray emission in the Blackjack V plasmas came from the center of the spiral form. This was evocative of the high-energy bursts from galactic centers that cosmologists were trying to explain in terms of black holes and other exotic concepts. Blackjack V didn’t use black holes. But there was a way in which sudden explosive releases of energy could come about from purely electrical causes – the same that sometimes causes the plug of an appliance to spark when it’s pulled out of a wall socket.
An electric field that drives currents and accelerates particles in a cyclotron, a neon light, or a TV tube is produced by a changing magnetic field (in other words, not by a steady one). A magnetic field accompanies an electric current. In the late fifties, Alfven had been called in by the Swedish power company ASEA to investigate a problem they were having with explosions in mercury arc rectifiers used in the transmission grid. The rectifiers used a low-pressure mercury vapor cell containing a current-carrying plasma. It turned out that under certain conditions the ions and electrons forming the plasma could separate in a positive-feedback process that created a rapidly widening gap in the plasma, interrupting the current. The fall in the magnetic field that the current had been supporting generated an electric field that built up a high voltage, accelerating the electrons to the point where the ensuing heat caused an explosion.
Alfvén’s work had shown that analogous effects involving suddenly collapsing magnetic fields could also operate at larger scales to produce such results as solar flares. The energy released in such an event is nonlocal in that it derives not just from the conditions pertaining at the point where the current break occurs, but from the magnetic field sustained around the entire circuit. The energy stored in a galactic circuit thousands of light-years long and carrying ten million trillion amperes can be a staggering 10^57 ergs – as much energy as a typical galaxy generates in 30 million years. The electric fields produced by that kind of release could accelerate electrons to enormous velocities, approaching that of light. Accelerated charges radiate electromagnetic waves. Black-hole-density concentrations of gravity are not necessary to generate jets of radio brilliance that can be heard on the far side of the universe.
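The arithmetic behind that figure is the ordinary formula for the energy stored in a current-carrying circuit, W = 0.5 * L * I^2. The sketch below combines the current quoted in the text with an assumed inductance for a circuit thousands of light-years around; the inductance is an order-of-magnitude guess inserted purely for illustration, not a value taken from Alfvén or Lerner.

    def stored_magnetic_energy(inductance_h, current_amps):
        """Energy held in the magnetic field of a circuit: W = 0.5 * L * I**2."""
        return 0.5 * inductance_h * current_amps**2

    L_circuit = 1e12    # assumed inductance of a galaxy-sized circuit, henries (a guess)
    I_circuit = 1e19    # "ten million trillion" amperes, the figure quoted in the text

    joules = stored_magnetic_energy(L_circuit, I_circuit)
    print(f"~{joules:.0e} joules, i.e. ~{joules * 1e7:.0e} ergs")   # within reach of the 10^57 ergs quoted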
Eric Lerner and the Plasma Focus
Peratt published his findings in a small astronomy journal, Astrophysics and Space Science, in 1983,50 and the following year in the more widely read amateur magazine Sky and Telescope. 51 Little reaction came from mainstream astrophysicists. Then, toward the end of 1984, he was contacted by Eric J. Lerner, a theoretician who had been pursuing a parallel line of thought, though not within the recognized establishment. Lerner’s interest in the subject had been stimulated at an early age by an illustration in an astronomy book of all the trains that would be needed to haul the billions of tons of coal whose burning would equal the Sun’s output in one second. He studied physics at Columbia University and the University of Maryland, with an emphasis on nuclear fusion, and in the mid seventies formed an association with Winston Bostick, who was working on an approach to controlled fusion known as the plasma focus. Invented independently in the sixties by a Soviet, N. V. Filippov, and an American, Joseph Mather, the device first compresses electrical energy a millionfold into a sub-millimeter-size donut of filamentary plasma called a plasmoid, and then collapses the associated magnetic field to shoot out two intense, high-energy beams, each in the order of a micron (one ten-thousandth of a centimeter) wide – electrons in one direction and ions in the other. In the course of this, some of the confined ions are heated to sufficient temperatures to fuse.
Bostick too thought that filamentary processes might be involved in galaxy formation, and this led Lerner to wonder if something like the energy concentration mechanism of the plasma focus might account for the distant, highly energetic, yet compact quasars mentioned earlier. Since 1980, the new Very Large Array (VLA) radio telescope, consisting of twenty-seven dish antennas spread over miles of the New Mexico desert, had revealed enormously energetic jets of energy emanating from quasars, similar to the ones already known to power the emissions of radio galaxies, which Alfvén’s work attributed to collapsing magnetic fields. If the visible core region of a typical radio galaxy is pictured as a spinning dime, two narrow jets of particles shoot out along the axis in opposite directions for a distance of about a foot before cooling and dissipating into football-size “lobes,” where the energy is radiated away as radio waves. The same processes occur at lesser intensity in the jets created by ordinary galaxies also. In the case of quasars, conventional theory postulated charged particles spiraling inward in the intense gravity fields of black holes as the source. Maybe black holes weren’t needed.
Going All the Way: Galaxies to the Universe
A plasma focus can increase the power density of its emission by a factor of ten thousand trillion over that of energy supplied. (Power signifies concentration in time; density, concentration in space.) The flow of current inward along a galaxy’s spiral arms, out along the axis, and looping back around via the rim reproduced the geometry of the plasmoid – the same that Alfvén had arrived at about four years earlier. But the suggestion of structures produced via electrical processes didn’t stop there. Astronomers were producing maps showing the galaxies to be not distributed uniformly across space but in clusters strung in “superclusters” along lacy, filament-like threads running through vast voids – scaled-up versions of the filaments that Lerner had visualized as forming within galaxies, from which stars formed as matter densities increased and gravitation broke them up. These larger filaments – vast rivers of electricity flowing through space – would create the magnetic fields that galaxies rotated in, enabling them to become generators; indeed, it would be from the initial drawing together and twisting of such large-scale
filaments that galaxies formed in the first place.
To establish some kind of firm foundation for his ideas, Lerner needed to know the scaling laws that related laboratory observations to events occurring on a galactic scale – the relationships that changed as the scale of the phenomenon increased, and the ones that remained invariant. This was when a colleague introduced him to Alfvén’s Cosmic Electrodynamics, first published in 1963, which set out the scaling laws that Alfven had derived. These laws provided quantitative support for the hierarchical picture that Lerner had envisaged – a series of descending levels, each repeating the same basic process of plasma twisting itself into vortex filaments that grow until self-gravitation breaks them up.
Few outside a small circle were receptive to such ideas, however. The majority of astrophysicists didn’t believe that such currents could flow in space, because a plasma’s resistance is so low that the electric fields needed to drive them would be shorted out – the same objection that Alfvén had encountered two decades before, now reiterated at the galactic level. Then bundles of helically twisted filaments a light-year across and a hundred light-years long, looping toward the center and arcing out along the axis of our galaxy – the sizes predicted by Lerner’s model – were mapped with the VLA telescope by a Columbia University graduate student, Farhad Yusef-Zadeh, and carried on the cover of the August 1984 issue of Nature. Yusef-Zadeh’s colleague, Mark Morris, later confirmed that magnetic forces, not gravity, must have controlled their formation. Encouraged, and at Peratt’s suggestion, Lerner submitted a paper describing his theory to Astrophysics and Space Science, the journal that Peratt had published in, but it was rejected, the reviewer dismissing the analogy between galaxies and the plasma focus as absurd. The black-hole explanation of quasars and the cores of energetic galaxies is still favored, sometimes being invoked to account for Yusef-Zadeh’s filaments. Lerner’s paper did finally appear in Laser and Particle Beams in 1986.52
The scaling laws implied that the smaller an object is in the hierarchy, the more isolated it will be from neighboring objects of the same kind, in terms of the ratio of separation to size. Thus stars are separated from each other by a distance of 10 million times their diameters, galaxies by thirty times their diameters, clusters by ten times their diameters. Hence there was nothing strange about space being so filled in some places and empty in others. Far from being a mystery in need of explanation, the observed clumpiness was inevitable.
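The bottom of that hierarchy can be checked with familiar round numbers. The snippet below compares the Sun's diameter with the distance to the nearest star, and a typical galaxy diameter with the spacing to Andromeda; the values are approximations intended only to show the flavor of the scaling argument, not to reproduce Lerner's derivation.

    LIGHT_YEAR = 9.46e15   # metres

    # Round illustrative figures
    sun_diameter      = 1.4e9                 # metres
    nearest_star_dist = 4.2 * LIGHT_YEAR      # to Proxima Centauri
    galaxy_diameter   = 1.0e5 * LIGHT_YEAR    # ~100,000 light-years
    galaxy_spacing    = 2.5e6 * LIGHT_YEAR    # Milky Way to Andromeda, roughly

    print(f"star separation   ~ {nearest_star_dist / sun_diameter:.1e} stellar diameters")
    print(f"galaxy separation ~ {galaxy_spacing / galaxy_diameter:.0f} galactic diameters")
    # Smaller objects sit proportionally much farther from their neighbors,
    # which is the sense of the scaling relation described in the text.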
An upper size limit also emerged, beyond which filaments will fail to form from a homogenous plasma because of the distortion of particle paths by internal gravitation. The maximum primordial filament would be in the order of ten billion light-years in diameter and compress itself down to around a fifth that size before breaking into several dozen smaller filaments spaced 200 million light-years apart – which corresponded well with the observed values for the superclusters. Beyond this, therefore, there should exist a further, larger structure of elongated, filamentary form, a billion or so light-years in radius and a few billion light-years long. It turned out to have contracted a bit more than Lerner’s calculations said. Brent Tully’s 1986 paper in Astrophysical Journal announcing the discovery of “supercluster complexes” put their radius at around six hundred million light-years.
Older Than the Big Bang
These were far too massive and ancient to have formed since the Big Bang, requiring a
trillion years or more for the primordial filaments to differentiate themselves. Although this news caused a sensation among cosmologists, the plasma-universe alternative remained virtually unknown, since papers on it had been rejected by recognized astrophysical journals, while the few journals in which they had appeared were not read by astrophysicists. However, through contacts in the publishing world Lerner was invited to write a specialized science article for the New York Times Magazine and promptly proposed one on Alfven and the plasma universe. Alfven had been skeptical of the Big Bang theory ever since he first came across it in 1939. Nevertheless, in discussing the New York Times offer with Lerner, he cautioned that in his opinion an article challenging the Big Bang would be premature; instead it should focus on the electrical interpretation of more familiar and observable phenomena to prepare the ground. “Wait a year,” he advised. “I think the time will be riper next year to talk about the Big Bang.” 53
But Lerner couldn’t let such an opportunity pass, and after further consulting with Peratt and much editing and rewriting, he submitted an article giving a full exposition of his theory. It was not only accepted by the editorial staff but scheduled as the cover story for the October 1986 edition. Lerner was elated. But Alfvén’s experience of the business turned out to be well rooted, and his advice prescient. Upon routine submission to the science section of the daily paper for review, the article was vetoed on the grounds that Alfvén was a maverick, without support in the scientific community. (Being awarded a Nobel Prize apparently counts for little against entrenched dogma.) A revised version of Lerner’s article did eventually appear in Discover magazine in 1988.54
Other Ways of Making Light Elements...
The existence of large-scale structures posed difficulties for the Big Bang. But it still rested
solidly on its two other pillars of helium abundance and microwave background radiation – at least, as far as the general perception went. We’ve already seen that the widespread acceptance of the background radiation was a peculiar business, since it had been predicted more accurately without any Big Bang assumptions at all. More recent work showed that a Big Bang wasn’t necessary to account for the helium abundance either.
The larger a star, the hotter its core gets, and the faster it burns up its nuclear fuel. If the largest stars, many times heavier than the Sun, tended to form in the earlier stages of the formation of our galaxy, they would long ago have gone through their burning phase, producing large amounts of helium, and then exploded as supernovas. Both in Lerner’s theoretical models and Peratt’s simulations, the stars forming along the spiral arms as they swept through the plasma medium would become smaller as the density of the medium increased. As the galaxy contracted, larger stars would form first, and smaller, longer-lived ones later. The smaller, more sedate stars – still four to ten times the size of the Sun – would collapse less catastrophically at the end of the burning phase, blowing off the outer layers where the helium had been formed initially, but not the deeper layers where heavier elements would be trapped. Hence the general abundance of helium would be augmented to a larger degree than that of the elements following it; there is no need for a Big Bang to have produced all the helium in a primordial binge.
Critics have argued that this wouldn’t account for the presence of light elements beyond helium such as lithium and boron, which would be consumed in the stellar reactions. But it seems stars aren’t needed for this anyway. In June 2000, a team of astronomers from the University of Texas at Austin and the University of Toledo in Ohio, using the Hubble Space Telescope and the McDonald Observatory, described a process they termed “cosmic-ray spallation,” in which energetic cosmic rays consisting mainly of protons traveling near the speed of light break apart nuclei of elements like carbon in interstellar space. The team believed this to be the most important source of the lighter elements. 55
And of Producing Expansion
That pretty much leaves only the original Hubble redshift as the basis for the Big Bang. But as we’ve already seen, the steady-state theory proposed another way in which it could be explained. And back in the early sixties, Alfvén gave some consideration to another.
A theory put forward by an old colleague and teacher of his, Oskar Klein, had proposed antimatter as the energy source responsible. Antimatter had been predicted from quantum mechanics in the 1920s, and its existence subsequently confirmed in particle experiments. For every type of elementary particle, there also exists an “antiparticle,” identical in all properties except for carrying the opposite electrical charge (assuming the particle is charged). If a particle and its antiparticle meet, they annihilate each other and are converted into two gamma rays equal in energy to the total mass-energy of the particles that created them, plus whatever kinetic energy they were carrying. (The thermonuclear reaction in a hydrogen bomb converts about one percent of the reacting mass to energy.) Conversely, sufficiently energetic radiation can be converted into particles. When this occurs, it always produces a particle-antiparticle pair, never one of either kind on its own.
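The bookkeeping is just E = mc^2 applied to both particles. As a minimal worked example (electron-positron annihilation, the simplest case, using standard constants), each particle contributes a rest-mass energy of about 511 keV, so the two gamma rays carry roughly 511 keV apiece plus whatever kinetic energy the pair brought in.

    C = 2.998e8               # speed of light, m/s
    M_ELECTRON = 9.109e-31    # electron (and positron) mass, kg
    EV = 1.602e-19            # joules per electron-volt

    rest_energy = M_ELECTRON * C**2     # rest-mass energy of one particle (E = m*c**2)
    total = 2 * rest_energy             # electron + positron annihilating at rest

    print(f"each photon: ~{rest_energy / EV / 1e3:.0f} keV")
    print(f"total released: ~{total / EV / 1e6:.2f} MeV")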
The fact that particles and antiparticles are always created in pairs leads to the supposition that the universe too ought to consist of equal amounts of both. Klein hypothesized that in falling together under gravity, a particle-antiparticle mixture (too rarefied to undergo more than occasional annihilating collisions) would separate according to mass; at the same time, if the motion were in a magnetic field, positive charges would be steered one way and negative charges the other. The result would be to produce zones where either matter or antimatter dominated, with a layer of energetic reactions separating them and tending to keep them apart while they condensed into regions of galaxies, stars, and planets formed either of ordinary matter, as in our own locality, or of antimatter elsewhere.
Should such matter and antimatter regions later meet, the result would be annihilation on a colossal scale, producing energy enough, Klein conjectured, to drive the kind of expansion that the redshift indicated. This would make it a “Neighborhood Bang” rather than the Bang, producing a localized expansion of the part of the universe we see, which would be just part of a far vaster total universe that had existed long before. Although this allows time for the formation of large structures, there are questions as to how they could have been accelerated to the degree they apparently have without being disrupted, and others that require a lot more observational data, and so the idea remains largely speculative.
Redshift Without Expansion at All
Molecular Hydrogen: The Invisible Energy-Absorber
The steady-state and Klein’s antimatter theories both accepted the conventional interpretation of the redshift but sought causes for it other than the Big Bang. But what if it has nothing to do with expansion of the universe at all? We already saw that Finlay-Freundlich’s derivation of the background temperature in the early fifties considered a “tired light” explanation that Born analyzed in terms of photon-photon interactions. More recently, the concept has found a more substantial grounding in the work of Paul Marmet, a former physicist at the University of Ottawa, and before that, senior researcher at the Herzberg Institute of Astrophysics of the National Research Council of Canada.
It has long been known that space is permeated by hydrogen, readily detectable by its 21-centimeter emission line, or absorption at that wavelength from the background radiation. This signal arises from a spin-flip transition in the hydrogen atom. Monatomic hydrogen, however, is extremely unstable and reacts promptly to form diatomic hydrogen molecules, H2. Molecular hydrogen is very stable, and once formed does not easily dissociate again. Hence, if space is pervaded by large amounts of atomic hydrogen, then molecular hydrogen should exist there too — according to the calculations of Marmet and his colleagues, building up to far greater amounts than the atomic kind. 56 Molecular hydrogen, however, is extraordinarily difficult to detect — in fact, it is the most transparent of diatomic molecules. But in what seems a peculiar omission, estimates of the amount of hydrogen in the universe have traditionally failed to distinguish between the two kinds and reported only the immediately detectable atomic variety.
Using the European Space Agency’s Infrared Space Observatory, E. A. Valentijn and P. P. van der Werf recently confirmed the existence of huge amounts of molecular hydrogen in NGC891, a galaxy seen edge-on, 30 million light-years away. 57 This discovery was based on new techniques capable of detecting the radiation from rotational state transitions that occur in hydrogen molecules excited to relatively hot conditions. Cold molecular hydrogen is still undetectable, but predictions from observed data put it at five to fifteen times the amount of atomic hydrogen that has long been confirmed. This amount of hitherto invisible hydrogen in the universe would have a crucial effect on the behavior of light passing through it.
Most people having a familiarity with physics have seen the demonstration of momentum transfer performed with two pendulums, each consisting of a rod weighted by a ball, suspended adjacently such that when both are at rest the balls just touch. When one pendulum is moved away and released, it stops dead on striking the other, which absorbs the momentum and flies away in the same direction as the first was moving. The collision is never perfectly “elastic,” meaning that some of the impact energy is lost as heat, and the return swing of the second pendulum will not quite reverse the process totally, bringing the system eventually to rest.
Something similar happens when a photon of light collides with a molecule of a transparent medium. The energy is absorbed and reemitted in the same, forward direction, but with a slight energy loss — about 10^-13 of the energy of the incoming photon. 58 (Note this is not the same as the transverse “Rayleigh scattering” that produces angular dispersion and gives the sky its blueness, which is far less frequent. The refractive index of a transparent medium is a measure of light’s being slowed down by successive forward re-emissions. In the case of air it is 1.0003, indicating that photons traveling 100 meters are delayed 3 centimeters, corresponding to about a
billion collisions. But there is no noticeable fuzziness in images at such distances.)
What this means is that light traveling across thousands, or millions, or billions of light-years of space experiences innumerable such collisions, losing a small fraction of its energy at each one and hence undergoing a minute reddening. The spectrum of the light will thus be shifted progressively toward the red by an amount that increases with distance — a result indistinguishable from the distance relationship derived from an assumed Doppler effect. So no expansion of the universe is inferred, and hence there’s no call for any Big Bang to have caused it.
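A toy calculation shows how such minute losses add up over cosmological path lengths. If each forward re-emission removes a fraction eps (about 10^-13) of the photon's energy, then after N collisions the redshift is z = (1 - eps)^(-N) - 1. The collision rate per light-year in the sketch below is a made-up placeholder chosen only to display the build-up with distance; it is not a figure taken from Marmet's papers.

    import math

    # Toy "tired light" accumulation; the collision rate is an assumed placeholder.
    EPSILON = 1e-13            # fractional energy loss per forward re-emission
    COLLISIONS_PER_LY = 1e4    # assumed collisions per light-year of path (illustrative)

    def redshift(distance_ly):
        n = COLLISIONS_PER_LY * distance_ly
        # Energy falls as (1 - eps)**n, so 1 + z = (1 - eps)**(-n)
        return math.expm1(-n * math.log1p(-EPSILON))

    for d in (1e6, 1e8, 1e9, 5e9):
        print(f"{d:.0e} light-years -> z ~ {redshift(d):.3f}")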
Two further observations that have been known for a long time lend support to this interpretation. The Sun has a redshift not attributable to gravity, which is greater at the edges of the disk than in the center. This could be explained by sunlight from the edge having to pass through a greater thickness of lower solar atmosphere, where more electrons are concentrated. (It’s the electrons in H2 molecules that do the absorbing and reemitting.) Second, it has been known since 1911 that the spectra of hot, bright blue OB-type stars — blue-white stars at the hot end of the range that stars come in — in our galaxy show a slight but significant redshift. No satisfactory explanation has ever been agreed. But it was not concluded that we are located in the center of an expanding shell of OB stars.
So the redshift doesn’t have to imply an expansion of the universe. An infinite, static universe is compatible with other interpretations — and ones, at that, based on solid bodies of observational data rather than deduction from assumptions. However, none of the models we’ve looked at so far questions the original Hubble relationship relating the amount of the shift to distance (although the value of the number relating it has been reappraised several times). But what if the redshifts are not indicators of distance at all?
The Ultimate Heresy: Questioning the Hubble Law
The truly revolutionary threat to the last of the Big Bang’s supporting pillars came not from outside mavericks or the fringes, but from among the respected ranks of the professionals. And from its reactions, it seems that the Establishment reserves its most savage ire for insiders who dare to question the received dogma by putting observation before theory and seeing the obvious when that is what the facts seem to say.
Halton Arp’s Quasar Counts
Halton Arp has long been one of America’s most respected and productive observational astronomers, an old hand at the world-famous observatories in California and a familiar face at international conferences. Arp’s Atlas of Peculiar Galaxies has become a standard reference source. Then, in the 1960s and ‘70s, “Chip” started finding excess densities of high-redshift quasars concentrated around low-redshift galaxies.
A large redshift is supposed to mean that an object is receding rapidly away from us; the larger the shift, the greater the recession velocity and the distance. With the largest shifts ever measured, quasars are by this reckoning the most distant objects known, located billions of light-years away. A galaxy showing only a moderate shift would, by the same reckoning, be thousands or even millions of times less distant. But the recurring pattern of quasars lying conspicuously close to certain kinds of bright galaxies suggested some kind of association between them. Of course, chance alignments of background objects are bound to happen from time to time in a sky containing millions of galaxies. However, calculating how frequently they should occur was a routine statistical exercise, and what Arp was saying was that they were being found in significantly greater numbers than chance could account for. In other words, these objects were associated in some kind of way. A consistently recurring pattern was that the quasars appeared as pairs straddling a galaxy.
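The "routine statistical exercise" is essentially Poisson statistics: given the average surface density of background quasars, the number expected by chance within a small circle around any one galaxy is tiny, and the probability of finding two or more there falls off sharply. The figures in the sketch below are invented purely to show the shape of the calculation; they are not Arp's survey numbers.

    import math

    def prob_at_least(k, expected):
        """Chance of k or more objects when 'expected' are predicted at random
        (Poisson distribution): 1 minus the sum of P(0) through P(k-1)."""
        return 1 - sum(math.exp(-expected) * expected**i / math.factorial(i)
                       for i in range(k))

    # Invented illustration: 5 background quasars per square degree on average,
    # searched within 2 arc-minutes of a galaxy (a circle of ~0.0035 sq. deg.)
    density = 5.0
    area = math.pi * (2 / 60) ** 2
    expected = density * area
    print(f"expected by chance: {expected:.4f}")
    print(f"chance of 2 or more: {prob_at_least(2, expected):.6f}")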
The first reactions from the orthodoxy were simply to reject the observations as being incorrect – because they had to be. Then a theoretician named Claude Canizares suggested an
explanation whereby the foreground galaxy acted as a “gravitational lens,” magnifying and displacing the apparent position of a background quasar. According to Einstein’s theory, light rays passing close to a massive body will be bent by its gravity (although, as discussed later in the section on relativity, other interpretations see it as regular optical refraction). So imagine a massive foreground galaxy perfectly aligned with a distant quasar as viewed from Earth. As envisaged by the lensing explanation, light from the quasar that would otherwise pass by around the galaxy is pulled inward into a cone – just like light passing through a convex optical lens – and focused in our vicinity. Viewed back along the line of sight, it would be seen ideally as a magnified ring of light surrounding the galaxy. Less than ideal conditions would yield just pieces of the ring, and where these happened to be diametrically opposed they would create the illusion of two quasars straddling the intervening galaxy. In other cases, where the alignment is less than perfect, the ring becomes a segment of arc to some greater or lesser degree, offset to one side – maybe just a point. So quasar images are found close to galaxies in the sky more often than you’d expect.
But the locations didn’t match fragmented parts of rings. So it became “microlensing” by small objects such as stars and even planets within galaxies. But for that to work, the number of background quasars would need to increase sharply with faintness, whereas actual counts showed the number flattening off as they got fainter. Such a detail might sound trivial to the lay public, but it’s the kind of thing that can have immense repercussions within specialist circles. When Arp submitted this fact to Astronomy and Astrophysics, the editor refused to believe it until it was substantiated by an acknowledged lens theorist. When Arp complied with that condition, he was then challenged for his prediction as to how the counts of quasars should vary as a function of their apparent brightness. By this time Arp was becoming sure that, regardless of the wrecking ball it would send through the whole cosmological edifice, the association was a real, physical one, and so the answer was pretty easy. If the quasars were associated with bright, nearby galaxies, they would be distributed in space the same way. And the fit between the curve of quasar counts by apparent magnitude and that of luminous Sb spiral galaxies such as M31 and M81 – galaxies resembling our own – was extraordinarily close, matching even the humps and minor nonlinearities. 59
Arp’s paper detailing all this, giving five independent reasons why gravitational lensing could not account for the results and demonstrating that only physical association with the galaxies could explain the quasar counts, was published in 1990.60 It should have been decisive. But four years later, papers were still reporting statistical associations of quasars with “foreground” galaxy clusters. Arp quotes the authors of one as stating, “We interpret this observation as being due to the statistical gravitational lensing of background QSO’s [QuasiStellar Objects, i.e., quasars] by galaxy clusters. However, this... overdensity... cannot be accounted for in any cluster lensing model...” 61
You figure it out. The first part is obligatory, required by custom; the second part is unavoidable, demanded by the data. So I suppose the only answer is to acknowledge both with an Orwellian capacity to hold two contradictory statements and believe both of them. Arp’s paper conclusively disproving lensing was not even referenced. Arp comments wearily, “As papers multiply exponentially one wonders whether the end of communication is near.”
Taking on an Established Church
It’s probably worth restating just what’s at stake here. The whole modern-day picture of extragalactic astronomy has been built around the key assumption that the redshifts are Doppler effects and indicate recessional velocity. Since 1929, when Edwin Hubble formulated the law that redshift increases proportionally with distance, redshift has been the key to interpreting the size of the universe as well as being the prime evidence indicating it to be expanding from an initially compact object. If the redshifts have been misunderstood, then inferred distances can be wrong by a factor of 10 to 100, and luminosities and masses wrong by factors up to 10,000. The founding premise of an academic, political, and social institution that has stood for three generations would be not just in error but catastrophically misconceived. It’s not difficult to see why, to many, such a possibility would be literally inconceivable. As inconceivable as the thought once was that Ptolemy could have been wrong.
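The jump from a factor of 100 in distance to a factor of 10,000 in luminosity is simply the inverse-square law at work: the luminosity inferred from a measured flux scales as the square of the assumed distance. A two-line check with an arbitrary flux value makes the point.

    import math

    flux = 1.0e-15                      # an arbitrary measured flux, W/m^2 (illustrative)
    d_assumed, d_actual = 100.0, 1.0    # relative distances: off by a factor of 100

    L_assumed = 4 * math.pi * d_assumed**2 * flux    # luminosity inferred from the wrong distance
    L_actual  = 4 * math.pi * d_actual**2 * flux     # luminosity at the true distance
    print(f"luminosity overestimated by a factor of {L_assumed / L_actual:.0f}")   # 10000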
It began when Arp was studying the evolution of galaxies and found a consistent pattern showing pairs of radio sources sitting astride energetic, disturbed galaxies. It seemed that the sources had been ejected from the galaxies, and the ejection had caused the disturbance. This was in line with accepted thinking, for it had been acknowledged since 1948 that galaxies eject radio-emitting material in opposite directions. Then came the shock that time and time again the sources turned out to be quasars, often showing other attributes of matter in an excited state, such as X-ray emissions and optical emission lines of highly energized atoms. And the galaxies they appeared to have been ejected from were not vastly distant from our own, but close by.
These associations had been accumulating since the late sixties, but in that time another kind of pattern made itself known also. A small group of Arp’s less conformist colleagues, even if perhaps not sharing his convictions totally, remained sufficiently open-minded to be sympathetic. From time to time one of them would present observational data showing another pair of radio or X-ray sources straddling a relatively nearby low-redshift galaxy which coincided with the optical images of Blue Stellar Objects – quasar candidates. To confirm that they were quasars required allocation of observation time to check their spectra for extreme quasar redshifts. At that point a dance of evasion would begin – literally a refusal to look through the telescopes. The requests would be turned down or ignored, even when they came from such figures as the director of the X-Ray Institute. When resourceful observers cut corners and made their own arrangements, and their findings were eventually submitted for publication, hostile referees would mount delaying tactics in the form of finicky fussing over detail or petty objections that could hold things up for years.
In 1943, the American astronomer Carl Seyfert had discovered a class of energetic galaxies characterized by having a sharp, brilliant nucleus with an emission line spectrum signifying that large amounts of energy were being released there. Arp found their association with quasar pairs to be so strong that it could almost be said to be a predictable attribute of Seyfert galaxies. Spectroscopically, quasars look like pieces of Seyfert nuclei. One of the most active nearby spiral galaxies, known by the catalog reference NGC4258, has a Seyfert nucleus from which the French astronomer G. Courtès, in 1961, discovered a pair of proto-spiral arms emerging, consisting of glowing gaseous matter also emitting the “synchrotron” radiation of high-energy electrons spiraling in magnetic fields. An X-ray astronomer called Wolfgang Pietsch established that the arms of gas led like rocket trails to a pair of X-ray sources coinciding perfectly with two Blue Stellar Objects. When the ritual of obstructionism to obtain the spectra
of the BSOs ensued, Margaret Burbidge, a Briton with over fifty years of observational experience, bypassed the regular channels to make the measurement herself using the relatively small 3-meter reflector telescope on Mount Hamilton outside San Jose in California, and confirmed them to be quasars. Arp put the probability of such a chance pairing at less than 1 in 2.5 million.
His paper giving all the calculations deemed to be scientifically necessary, along with four other examples each with a chance of being coincidental that was less than one in a million, was not even rejected – just put on indefinite hold and never acted upon since. When the number of examples continued growing, as did Arp’s persistence, his tenure was suddenly terminated and he was denied further access to the major American observatories. After facing censorship from the journals and ferocious personal attacks in public by prestigious figures at conferences, he left the U.S. in 1984 to join the Max-Planck-Institut fur Astrophysik in Germany, who he says have been cooperative and hospitable.
Eyes Closed and Eyes Open: Professionals and Amateurs
A new generation of high-resolution telescopes and more-sensitive instruments produced further examples of gaseous bridges emitting in the X-ray bands, connecting the quasars to their source galaxies. The configurations could be seen as a composite, physically connected object. But the response of those trained to the orthodox view was not to see them. They were dismissed as artifacts of random noise or instrument errors. I’ve witnessed this personally. On mentioning Arp’s work to a recent astrophysics graduate I was cut off with, “Those are just background noise,” although I hadn’t mentioned bridges. I asked him if he’d seen any of the pictures. He replied stonily, “I haven’t read anything of Arp’s, but I have read the critics.” Whence, knowing the approved answers is presumably all that is needed. Shades of the Scholastics.
In 1990, the Max-Planck-Institut für Extraterrestrische Physik (MPE) launched the X-ray telescope ROSAT (Röntgen Observatory Satellite Telescope), which was later used to look for a filament connecting the violently disrupted spiral galaxy NGC4319 to the quasarlike object Markarian 205, whose association had been disputed since 1971. Although the prime aim failed (Arp thinks the connection is probably too old now to show up at the energies searched for), it did reveal two new X-ray filaments coming out of Mark205 and leading to point-like X-ray
sources. So the high-redshift, quasarlike Seyfert ejected from the low-redshift spiral was itself ejecting a pair of yet-higher-redshift sources, which turned out to be quasars.
The NGC4319-Mark205 connection was subsequently established by a high-school teacher, when NASA announced a program making 10 percent of the time on the orbiting Hubble Space Telescope available to the community of amateur astronomers. It seems that the amateur community – for whom Halton Arp has an extremely high regard – had taken a great interest in his work and were arranging more investigations of nearby quasar connections, drawing their subject matter mainly from Arp’s 1987 book, Quasars, Redshifts, and Controversies, which the NASA committees that allocated observation time had been avoiding like the plague. After another amateur used his assigned time for a spectroscopic study of an Arp connecting filament, the Space Telescope Science Institute suspended the amateur program on the grounds that it was “too great a strain on its expert personnel.” No doubt.
Quasar Cascades: Redshifts as a Measure of Galaxy Age
On this basis, quasars turn out to be young, energetic, high-redshift objects ejected recently, typically from Seyfert galaxies of lower redshift – in fact, high-resolution X-ray images of the Seyfert galaxy NGC4151 clearly show proto-quasars forming in its nucleus prior to being ejected.
The quasars are not very luminous but grow in brightness as they age and evolve. The enormous brightness that’s conventionally attributed to them arises from incorrectly assigned
distances that place them on the edge of the observable universe. Arp found that on charts showing quasar positions, pairing the quasars by redshift almost always leads to finding a cataloged Seyfert close to the center point between them.
The process can be taken further. The Seyferts in turn usually occur in matched pairs about some larger, still-lower-redshift galaxy from which they appear to have been originally ejected. This yields a cascade in which large, older galaxies have ejected younger material that has formed into younger companion galaxies around them. The younger galaxies in turn eject material as quasars, which evolve through a sequence of stages eventually into regular galaxies. Corresponding to the age hierarchy at every step is the hierarchy of redshifts, reducing as the associated objects become older. Such cascades lead back to massive central spiral galaxies whose advanced age is marked by their large populations of old, red stars. Typically they are found with smaller companion galaxies at the ends of the spiral arms. Companion galaxies are found to be systematically redshifted with respect to the central galaxy, indicating them to be first-generation descendants. The same pattern extends to groupings of galaxies in clusters and of clusters in superclusters.
Our own Milky Way galaxy is a member of the Local Group, centered on the giant Sb spiral M31, known as the “Andromeda” galaxy, which is the most massive of the group. All members of the group, including our galaxy, are redshifted with respect to M31, indicating it to be the source from which the rest were ejected as young, high-energy objects at some time. So, when gazing at the immense disk of M31, now some two and a half million light-years away, familiar from almost every astronomy book, we’re looking back at our “parent” galaxy – and indeed, we see M31 as having a slight negative redshift, or “blueshift,” indicating it to be older.
The next nearest major group to us is the M81 group, again centered on the same kind of massive Sb spiral galaxy as M31. Once more, every major companion to M81 is redshifted with respect to it. In fact there are many clusters like the M31 and M81 groups, which together form the Local Supercluster. At its center one finds the Virgo Cluster, which consists of the full range of morphological galaxy types, the smaller ones showing a systematic redshift with respect to the giant spirals. Apart from M31, only six other major galaxies show a negative redshift. All six are in the Virgo Cluster and consist of giant spiral types of galaxy, marking them as the older and originally dominant members. It’s quite possible, therefore, that these are the origin of M31 and our entire Local Group. So with Virgo we are looking back at our “grandparent.”
On a final note, all the way down, this hierarchy has exhibited the pattern of new objects being produced in pairs. The Virgo Supercluster itself, viewed in terms of the configuration of its dominant originating galaxies and the clusters of groups they have spawned, turns out to be a virtual twin of the Fornax Supercluster, seen from the Southern Hemisphere.
What Happens to the Distances?
If redshift isn’t a measure of a recessional velocity at all, and hence not of distance either, what does this do to the scale of distances that has been constructed, mapping structures out to 10 billion or more light-years away? Although the observational evidence has been there for twenty years, conventional astronomy has never really accepted that the redshifts are quantized, and has tried strenuously to find arguments to show that there is no quantization. Quantized means that the values are not continuous through the range like heights of points on a hill from bottom to
top, but occur in a series of jumps like a staircase. Since, in general, an object can be moving in any direction relative to us, the radial components of the velocities, i.e., the part of the motion that is directly toward or directly away (which is what the Doppler effect measures) should, if redshift indicates velocity, come in all values. Hence, conventional theory cannot allow the redshifts to be anything but continuous.
If redshift correlates with galaxy ages, then what quantization would imply is that the ejections of new generations of proto-galaxies in the form of quasars occur episodically in bursts, separated by periods of quiescence – rather like the generations of cell division in a biological culture. This fits the way we’d imagine a cascade model of the kind we’ve sketched would work. It also has the interesting implication that interpreting the redshift as distance instead of age would give the appearance of galaxies occurring in sheets separated by empty voids, which of course is what the conventional picture shows.
So what happens to the immense distances? It appears that they largely go away. Arp’s studies indicate that on an age interpretation, the Local Supercluster becomes a far more crowded place than is commonly supposed, with all of the quasars and other objects that we feel we know much about existing within it, and not very much at all beyond. So suddenly the universe shrinks back to something in the order of the size it was before Hubble (or, more correctly, the Hubble advocates who grabbed his constant and ran with it) detonated it. No wonder the Establishment puts Arp in the same league as the medieval Church did Giordano Bruno.
What Causes Redshift? Machian Physics and the Generalization of GRT
Through the last several pages we’ve been talking about a hierarchy in which redshift correlates inversely with the ages of galaxies and other cosmological objects – i.e., as redshift increases, they become younger. Is it possible, then, to say what, exactly, redshift is indicating? In short, what causes it?
Isaac Newton performed an experiment in which he suspended a pail containing water on a twisted rope. When the pail is released it spins, and the centrifugal force causes the water to pile up toward the sides, changing the shape of the surface from flat to curved. The question is, in an otherwise empty universe, how would the water “know” whether to assume a flat surface or a curved one? In other words, what determines rotation – or for that matter, accelerations in general? Ernst Mach, an Austrian physicist who lived around the turn of the twentieth century, argued that the only sense in which the term has meaning is with respect to the “fixed,” or distant stars. So the property an object exhibits when it resists changes of motion – its “inertial mass” – arises from its interacting with the total mass of the universe. It “senses” that the rest of the universe is out there. Einstein believed that Mach was correct and set out with the intention of developing GRT on a fully Machian basis, but somewhere along the way it turned into a “local” theory.
Jayant Narlikar is director of the Inter University Center for Astronomy and Astrophysics in Pune, India, and has collaborated with Fred Hoyle and others in looking deeply at some of the fundamental issues confronting physics. In 1977 he rewrote the equations of GRT in a more general form, yielding solutions in which mass is not a constant but can take the form of a
quantity that increases with time. 62 Now, the way mathematics is taught is that the proper way to solve an equation is to derive the general form first, and then make any simplifications or approximations that might be appropriate to a particular problem. The approximations that Aleksandr Friedmann used in 1922 in solving the GRT equations to produce the expanding universe solution were made in such a way as to force any changes in mass to be expressed in the geometry of the situation instead. This is what leads to the models involving the curved spacetime that helps give relativity its reputation for incomprehensibility, and which science-fiction writers have so much fun with. But with the full range of dynamical expressions that permit mass to vary, curved spacetime isn’t needed.
According to Narlikar’s version, a newly created particle, new to the universe, begins its existence with zero mass. That’s because it doesn’t “know” yet of the existence of any other mass out there, which is necessary for it to begin exhibiting the properties of mass. Its “awareness” grows as an ever-widening sphere of interaction with other masses, and as it does so the particle’s own mass proceeds to increase accordingly, rapidly at first and leveling off exponentially. Note, this isn’t the same process as pair production in an accelerator, which is matter conversion from already existing (and hence “aged”) energy. It represents the introduction of new mass-energy into the universe, induced in the vicinity of concentrations of existing matter – in the form of short-lived “Planck particles,” which according to quantum mechanical dynamics rapidly decay into the more familiar forms.
This, then, is what’s going on in the nuclei of energetic galaxies like Seyferts. New matter is coming into existence and being ejected at high velocities because of its low initial mass. As the mass increases it slows to conserve momentum, forming the sequence of quasars, BL Lac Objects (highly variable radio and X-Ray sources transitional between quasars and more regular galaxies), BSOs, and the like, eventually evolving into the galaxy clusters that we see. The universe thus grows as a pattern of new generations appearing and maturing before giving rise to the next, unfolding from within itself. This is certainly no more bizarre than a Big Bang that has all the matter in the universe being created at once in a pinpoint. Furthermore, its fundamental process is one of continual production and ejection of material, which is what’s seen everywhere we look, unlike exotic mechanisms built around black holes whose function is just the opposite. And to survive as a theory it doesn’t have to depend on the burying and suppression of observational data.
But here’s the really interesting thing. Consider an electron in some remote part of the universe (in the Local Supercluster if that’s all there is to it), that’s still relatively new and therefore of low mass. If it has joined with a nucleus to become part of an atom, and if it makes a transition from one energy state to another, the energy of the transition will be less than that of the same transition measured in a laboratory here on Earth, because the mass involved is less. Thus the emitted or absorbed photon will be lower in energy, which means longer in wavelength, i.e., redder. So the correlation between the age hierarchy and the redshift hierarchy is explained. The reason why young objects like quasars have high redshifts is that high redshifts mean exactly that: recently created matter. Redshifts don’t measure velocities; they measure youth, decreasing as matter ages. And for objects that are even older than the massive, luminous spiral that we inhabit, such as its parent, Andromeda, or the dominant galaxies in Virgo that are of the generation before that, it becomes a blueshift.
The God of the Modern Creation Myth
We’ve looked briefly at several alternatives that have been developed to the Big Bang model of cosmology that dominates the thinking of our culture at the present time. In many ways the alternatives seem better supported by the way reality is observed to work at both the laboratory and astronomical scale. Certainly, some of the alternatives might appear to be in conflict; yet in other ways they could turn out to be complementary. I don’t pretend to have all the answers. I doubt if anyone has.
The Alfvén-Lerner plasma universe builds larger structures up from small, while the Arp-Narlikar cascade of “mini-bangs” produces enlarging, maturing objects from compact, energetic ones.
Conceivably they could work together, the magnetic fields and currents of the former shaping and ordering into coherent forms the violently ejected materials that would otherwise disperse chaotically.
Paul Marmet’s molecular hydrogen produces a redshift that increases with distance, preserving the conventional scale and structure without involving expansion velocities or a finite time. But this could be compatible with an age-related redshift too. Quasars appear to be enveloped in extremely fuzzy, gaseous clouds. If this comes with the matter-creation process, subsequent sweeping and “cleaning up” of the area by gravity could give an initially high absorption redshift that reduces with time. Nothing says that the redshift has to be the result of one single cause. It could be a composite effect, with several factors contributing.
Some critics assert that Lerner’s electrical forces simply wouldn’t be strong enough to confine stars in their orbits and hold galaxies together. Marmet points out that the existence of ten times as much virtually undetectable molecular hydrogen as the measured amount of atomic hydrogen – readily attainable by his estimation – would provide all the gravity that’s needed, without resorting to exotic forms of “missing mass.” And another possibility is that the law of gravitation assumed to be universal but which has only been verified locally could turn out to be just an approximation to something more complex that deviates more with increasing distance.
The point is that enormous opportunities surely exist for cross-fertilizations of ideas and a willingness to consider innovative answers that admit all the evidence, instead of a closed-minded adherence to sacred assumptions that heretics deny on pain of excommunication. Surely it’s a time for eclecticism, not ecclesiasticism. Maybe the metaphor is more than superficial.
We noted at the outset that there seems to be a historical correlation between creation-type cosmologies being favored at times when things seem in decline and gods are in vogue, and unguided, evolutionary cosmologies when humanity feels in control and materialism prevails. Well, the philosophy dominating the age we currently live in is probably about as reductionist and materialist as it gets. It seems curious that at a time when, one would think, an ageless plasma universe or a self-regenerating matter-creation universe would be eagerly embraced, what has to be the ultimate creation story is so fiercely defended. An age that has disposed of its creator God probably more thoroughly than any in history produces a cosmology that demands one. The throne is there, but there’s nobody to sit on it.
Or is there?
Maybe there’s some kind of a Freudian slip at work when the cardinals of the modern Church of Cosmology make repeated allusions to “glimpsing the mind of God” in their writings, and christen one of their exotic theoretical creations the “God Particle.”
The servant, Mathematics, who was turned into a god, created the modern cosmos and reveals Truth in an arcane language of symbols accessible only to the chosen, promising ultimate fulfillment in the enlightenment to come on the promised day of the Theory of Everything.
To be told that if they looked through the telescope at what’s really out there, they’d see that the creator they had deified really wasn’t necessary, would make the professors very edgy and angry indeed.
THREE
Drifting in the Ether: Did Relativity Take a Wrong Turn?
Nature and Nature’s laws lay hid in night: God said Let Newton be! And all was light.
– Alexander Pope. Epitaph intended for Sir Isaac Newton
It did not last. The Devil, shouting “Ho! Let Einstein be!” restored the status quo.
– J. C. Squire
It is generally held that few things could be more solidly grounded than Einstein’s theory of relativity which, along with quantum mechanics, is usually cited as one of the twin pillars supporting modern physics. Questioning it is a risky business, since it’s a subject that attracts cranks in swarms. Nevertheless, a sizeable body of well-qualified, far-from-crankish opinion exists which feels that the edifice may have serious cracks in its foundations. So, carried forth by the touch of recklessness that comes with Irish genes, and not having any prestigious academic or professional image to anguish about, I’ll risk being branded as one of the swarm by sharing some of the things that my wanderings have led me to in connection with the subject.
The objections are not so much to the effect that relativity is “wrong.” As we’re endlessly being reminded, the results of countless experiments are in accord with the predictions of its equations, and that’s a difficult thing to argue with. But neither was Ptolemy’s model of the planetary system “wrong,” in the sense that if you want to make the Earth the center of everything you’re free to, and the resulting concoction of epicycles within epicycles correctly describes the heavenly motions as seen from that vantage point. Coming up with a manageable force law to account for them, however, would be monumentally close to an impossibility. 63 Put the Sun at the center, however, and the confusion reduces to a simplicity that reveals Keplerian order in a form that Newton was able to explain concisely in a way that was intuitively satisfying, and three hundred years of dazzlingly fruitful scientific unification followed.
In the same kind of way, critics of relativity maintain that the premises relativity is founded on, although enabling procedures to be formulated that correctly predict experimental results, nevertheless involve needlessly complicated interpretations of the way things are. At best this can only impede understanding of the kind that would lead to another explosion of enlightenment reminiscent of that following the Newtonian revolution. In other words, while the experimental results obtained to date are consistent with relativity, they do not prove relativity in the way we are constantly being assured, because they are not unique to the system that follows from relativity’s assumptions. Other interpretations have been proposed that are compatible with all the cited observations, but which are conceptually and mathematically simpler. Moreover, in some cases they turn out to be more powerful predictively, able to derive from basic principles quantities that relativity can only accept as givens. According to the criteria that textbooks and advocates for the scientific method tell us are the things to go by, these should be the distinguishing features of a preferred theory.
However, when the subject has become enshrined as a doctrine founded by a canonized saint, it’s not quite that simple. The heliocentric ideas of Copernicus had the same thing going for them too, but he circulated them only among a few trusted friends until he was persuaded to publish in 1543, after which he became ill and died. What might have happened otherwise is sobering to speculate on. Giordano Bruno was burned at the stake in 1600 for combining similar thoughts with indiscreet politics. The Copernican theory was opposed by Protestant leaders as being contrary to Scriptural teachings and declared erroneous by the Roman Inquisition in 1616. Galileo was still being silenced as late as 1633, although by then heliocentrism was already implicit in Kepler’s laws, enunciated between 1609 and 1619. It wasn’t until 1687, almost a century and a half after Copernicus’s death, that the simpler yet more-embracing explanation, unburdened of dogma and preconceptions, was recognized openly with the acceptance of Newton’s Principia.
Fortunately, the passions loosed in such issues seem to have abated somewhat since those earlier times. I experienced a case personally at a conference some years ago, when I asked a well-known physicist if he’d gotten around to looking at a book I’d referred him to on an alternative interpretation to relativity, written by the late Czech professor of electrical engineering Petr Beckmann 64 (of whom, more later). Although he was a friend of many years, his face hardened and changed before my eyes. “I have not read the book,” he replied tightly. “I have no intention of reading the book. Einstein cannot be wrong, and that’s the end of the matter.”
Some Basics
Reference Frames and Transforms
The principle of relativity is not in itself new or something strange and unfamiliar, but goes back to the physics of Galileo and Newton. It expresses the common experience that some aspects of the world look different to observers who are in motion relative to each other. Thus, somebody on the ground following a bomb released from an aircraft will watch it describe a steepening curve (in fact, part of an ellipse) in response to gravity, while the bomb aimer in the plane (ignoring air resistance) sees it as accelerating on a straight line vertically downward. Similarly, they will perceive different forms for the path followed by a shell fired upward at the plane and measure different values for the shell’s velocity at a given point along it.
So who’s correct? It doesn’t take much to see that they both are when speaking in terms of their own particular viewpoint. Just as the inhabitants of Seattle and Los Angeles are both correct in stating that San Francisco lies to the south and north respectively, the observers on the ground and in the plane arrive at different but equally valid conclusions relative to their own frame of reference. A frame of reference is simply a system of x, y, and z coordinates and a clock for measuring where and when an event happens. In the above case, the first frame rests with the ground; the other moves with the plane. Given the mathematical equation that describes the bomb’s motion in one frame, it’s a straightforward process to express it in the form it would take in the other frame. Procedures for transforming events from the coordinates of one reference frame to the coordinates of another are called, logically enough, coordinate transforms.
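For concreteness (this is just the standard textbook form of the transform, nothing peculiar to the bombing example), if the plane’s frame moves at a steady speed v along the x direction of the ground frame, the classical, or Galilean, prescription is

$$
x' = x - vt, \qquad y' = y, \qquad z' = z, \qquad t' = t,
$$

with velocities adding in the obvious way, u' = u - v. That innocent-looking last equation, t' = t, is the assumption of a universal clock that will come in for scrutiny later.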
On the other hand, there are some quantities about which the two observers will agree. They will both infer the same size and weight for the bomb, for example, and the times at which it was released and impacted. Quantities that remain unvarying when a transform is applied are said to be “invariant” with respect to the transform in question.
Actually, in saying that the bomb aimer in the above example would see the bomb falling in a straight line, I sneaked in an assumption (apart from ignoring air resistance) that needs to be made explicit. I assumed the plane to be moving in a straight line and at constant speed with respect to the ground. If the plane were pulling out of a dive or turning to evade ground fire, the part-ellipse that the ground observer sees would transform into something very different when measured within the reference frame gyrating with the aircraft, and the bomb aimer would have to come up with something more elaborate than a simple accelerating force due to gravity to account for it.
But provided the condition is satisfied in which the plane moves smoothly along a straight line when referred to the ground, the two observers will agree on another thing too. Although their interpretations of the precise motion of the bomb differ, they will still conclude that it results from a constant force acting in a fixed direction on a given mass. Hence, the laws governing the motions of bodies will still be the same. In fact they will be Newton’s familiar Laws of Motion. This is another way of saying that the equations that express the laws remain in the same form, even though the terms contained in them (specific coordinate readings and times) are not themselves invariant. Equations preserved in this way are said to be covariant with respect to the transformation in question. Thus, Newton’s Laws of Motion are covariant with respect to transforms between two reference frames moving relative to one another uniformly in a straight line. And since any airplane’s frame is as good as another’s, we can generalize this to all frames moving uniformly in straight lines relative to each other. There’s nothing special about the frame that’s attached to the ground. We’re accustomed to thinking of the ground frame as having zero velocity, but that’s just a convention. The bomb aimer would be equally justified in considering his own frame at rest and the ground moving in the opposite direction.
Inertial Frames
Out of all the orbiting, spinning, oscillating, tumbling frames we can conceive as moving with the various objects, real and imaginable, that fill the universe, what we’ve done is identify a particular set of frames within which all observers will deduce the same laws of motion, expressed in their simplest form. (Even so, it took two thousand years after Aristotle to figure them out.) The reason this is so follows from one crucial factor that all of the observers will agree on: Bodies not acted upon by a force of any kind will continue to exist in a state of rest or uniform motion in a straight line – even though what constitutes “rest,” and which particular straight line we’re talking about, may differ from one observer to another. In fact, this is a statement of Newton’s first law, known as the law of inertia. Frames in which it holds true are called, accordingly, “inertial frames,” or “Galilean frames.” What distinguishes them is that there is no relative acceleration or rotation between them. To an observer situated in one of them, very distant objects such as the stars appear to be at rest (unlike from the rotating Earth, for example). The procedures for converting equations of motion from one inertial frame to another are known as Galilean transforms. Newton’s laws of motion are covariant with respect to Galilean transforms.
And, indeed, far more than just the laws of motion. For as the science of the eighteenth and nineteenth centuries progressed, the mechanics of point masses was extended to describe gravitation, electrostatics, the behavior of rigid bodies, then of continuous deformable media, and so to fluids and things like kinetic theories of heat. Laws derived from mechanics, such as the conservation of energy, momentum, and angular momentum, were found to be covariant with respect to Galilean transforms and afforded the mechanistic foundations of classical science. Since the laws formulated in any Galilean frame came out the same, it followed that no mechanical experiment could differentiate one frame from another or single out one of them as “preferred” by being at rest in absolute space. This expresses the principle of “Galilean-Newtonian Relativity.” With the classical laws of mechanics, the Galilean transformations, and the principle of Newtonian relativity mutually consistent, the whole of science seemed at last to have been integrated into a common understanding that was intellectually satisfying and complete.
Extending Classical Relativity
Problems with Electrodynamics
As the quotation at the beginning of this section says, it couldn’t last. To begin with, the new science of electrostatics appeared to be an analog of gravitation, with the added feature that electrical charges could repel as well as attract. The equations for electrical force were of the same form as Newton’s gravitational law, known to be covariant under Galilean transform, and it was expected that the same would apply. However, as the work of people like André-Marie Ampère, Michael Faraday, and Hans Christian Oersted progressed from electrostatics to electrodynamics, the study of electrical entities in motion, it became apparent that the situation was more complicated. Interactions between magnetic fields and electric charges produced forces acting in directions other than the straight connecting line between the sources, and which, unlike the case in gravitation and electrostatics, depended on the velocity of the charged body as well as its position. Since a velocity in one inertial frame can always be made zero in a different frame, this seemed to imply that under the classical transformations a force would exist in one that didn’t exist in the other. And since force causes mass to accelerate, an acceleration could be produced in one frame but not in the other when the frames themselves were not accelerating relative to each other – which made no sense. The solution adopted initially was simply to exclude electrodynamics from the principle of classical relativity until the phenomena were better understood.
But things got worse, not better. James Clerk Maxwell’s celebrated equations, developed in the period 1860-64, express concisely yet comprehensively the connection between electric and magnetic quantities that the various experiments up to that time had established, and the manner in which they affect each other across intervening space. (Actually, Wilhelm Weber and Neumann derived a version of the same laws somewhat earlier, but their work was considered suspect on grounds, later shown to be erroneous, that it violated the principle of conservation of energy, and it’s Maxwell who is remembered.) In Maxwell’s treatment, electrical and magnetic effects appear as aspects of a combined “electromagnetic field” – the concept of a field pervading the space around a charged or magnetized object having been introduced by Faraday – and it was by means of disturbances propagated through this field that electrically interacting objects influenced each other.
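In the compact vector notation used today (not the form Maxwell himself wrote, which sprawled over some twenty equations in twenty variables), the laws read:

$$
\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla\cdot\mathbf{B} = 0, \qquad \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}.
$$

The last two are the ones that matter for what follows: a changing magnetic field makes an electric field and vice versa, which is what allows a disturbance to keep regenerating itself and propagate across space as a wave.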
An electron is an example of a charged object. A moving charge constitutes an electric current, which gives rise to a magnetic field. An accelerating charge produces a changing magnetic field, which in turn creates an electric field, and the combined electromagnetic disturbance radiating out across space would produce forces on other charges that it encountered, setting them in motion – a bit like jiggling a floating cork up and down in the water and creating ripples that spread out and jiggle other corks floating some distance away. A way of achieving this would be by using a tuned electrical circuit to make electrons surge back and forth along an antenna wire, causing sympathetic charge movements (i.e., currents) in a receiving antenna, which of course is the basis of radio. Another example is light, where the frequencies involved are much higher, resulting from the transitions of electrons between orbits within atoms rather than oscillations in an electrical circuit.
Maxwell’s Constant Velocity
The difficulty that marred the comforting picture of science that had been coming together up until then was that the equations gave a velocity of propagation that depended only on the electrical properties of the medium through which the disturbance traveled, and was the same in every direction. In the absence of matter, i.e., in empty space, this came out at 300,000 kilometers per second and was designated by c, now known to be the velocity of light. But the appearance of this value in the laws of electromagnetism meant that the laws were not covariant under Galilean transforms between inertial frames. For under the transformation rules, in the same way that our airplane’s velocity earlier would reduce to zero if measured in the bomb aimer’s reference frame, or double if measured in the frame of another plane going the opposite way, the same constant velocity (depending only on electrical constants pertaining to the medium) couldn’t be true in all of them. If Maxwell’s equations were to be accepted, it seemed there could only exist one “absolute” frame of reference in which the laws took their standard, simplest form. Any frame moving with respect to it, even an inertial frame, would have to be considered “less privileged.”
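The number itself drops straight out of the equations. Combining the two curl equations in empty space gives a wave equation whose propagation speed depends only on the electric and magnetic constants of the vacuum:

$$
c = \frac{1}{\sqrt{\mu_0\,\varepsilon_0}} \approx 3\times10^{8}\ \mathrm{m/s} = 300{,}000\ \mathrm{km/s},
$$

with no reference anywhere to how fast the source or the observer happens to be moving – which is exactly the awkwardness being described.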
Putting it another way, the laws of electromagnetism, the classical Galilean transforms of space and time coordinates, and the principle of Newtonian relativity, were not compatible. Hence the elegance and aesthetic appeal that had been found to apply for mechanics didn’t extend to the whole of science. The sense of completeness that science had been seeking for centuries seemed to have evaporated practically as soon as it was found. This was not very intellectually satisfying at all.
One attempt at a way out, the “ballistic theory,” hypothesized the speed of light (from now on taken as representing electromagnetic radiation in general) as constant with respect to the source. Its speed as measured in other frames would then appear greater or less in the same way as that of bullets fired from a moving airplane. Such a notion was incompatible with a field theory of light, in which disturbances propagate at a characteristic rate that has nothing to do with the movement of their sources, and was reminiscent of the corpuscular theory that interference experiments were thought to have laid to rest. But it was consistent with the relativity principle: Light speed would transform from one inertial frame, that of the source, to any other just like the velocity of a regular material body.
However, observations ruled it out. In binary star systems, for example, where one star is approaching and the other receding, the light emitted would arrive at different times, resulting in distortions that should have been unmistakable but which were not observed. A series of laboratory experiments 65 also told against a ballistic explanation. The decisive one was probably one with revolving mirrors conducted by A. A. Michelson in 1913, which also effectively negated an ingenious suggestion that lenses and mirrors might reradiate incident light at velocity c with respect to themselves – a possibility that the more orthodox experiments hadn’t taken into account.
Another thought was that whenever light was transmitted through a material medium, this medium provided the local privileged frame in which c applied. Within the atmosphere of the Earth, therefore, the speed of light should be constant with respect to the Earth-centered frame. But this runs into logical problems. For suppose that light were to go from one medium into another moving relative to the first. The speeds in the two domains are different, each being determined by the type of medium and their relative motion. Now imagine that the two media are progressively rarified to the point of becoming a vacuum. The interaction between matter and radiation would become less and less, shown as a steady reduction of such effects as refraction and scattering to the point of vanishing, but the sudden jump in velocity would still remain without apparent cause, which is surely untenable.
Once again, experimental evidence proved negative. For one thing, there was the phenomenon of stellar aberration, known since James Bradley’s report to Newton’s friend Edmond Halley, in 1728. Bradley found that in the course of a year the apparent position of a distant star describes an ellipse around a fixed point denoting where it “really” is. The effect results from the Earth’s velocity in its orbit around the Sun, which makes it necessary to offset the telescope angle slightly from the correct direction to the star in order to allow for the telescope’s forward movement while the light is traveling down its length. It’s the same as having to tilt an umbrella when running, and the vertically falling rain appears to be coming down at a slant. If the incoming light were swept along with the atmosphere as it entered (analogous to the rain cloud moving with us), the effect wouldn’t be observed. This was greeted by some as vindicating the corpuscular theory, but it turns out that the same result can be derived from wave considerations too, although not as simply. And in similar vein, experiments such as that of Armand Fizeau (1851), which measured the speed of light through fast-flowing liquid in a pipe, and Sir George Airy (1871), who repeated Bradley’s experiment using a telescope filled with water and showed aberration didn’t arise in the telescope tube, demonstrated that the velocity of light in a moving medium could not be obtained by simple addition in the way of airplanes and machine-gun bullets or as a consequence of being dragged by the medium.
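The size of Bradley’s effect follows directly from the umbrella picture: the tilt is set by the ratio of the Earth’s orbital speed to the speed of light,

$$
\tan\alpha \approx \frac{v_{\mathrm{orbit}}}{c} = \frac{29.8\ \mathrm{km/s}}{300{,}000\ \mathrm{km/s}} \;\Rightarrow\; \alpha \approx 20.5\ \mathrm{arcseconds},
$$

which is the annual swing actually measured. (The figures are the standard ones; nothing in the argument here hangs on the fine details.)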
Relativity is able to provide interpretations of these results – indeed, the theory would have had a short life if it couldn’t. But the claim that relativity is thereby “proved” isn’t justified. As the Dutch astronomer M. Hoek showed as early as 1868, attempts at using a moving material medium to measure a change in the velocity of light are defeated by the effect of refraction, which cancels out the effects of the motion. 66
Michelson, Morley, and the Ether That Wasn’t
These factors suggested that the speed of light was independent of the motion of the radiation source and of the transmitting medium. It seemed, then, that the only recourse was to abandon the relativity principle and conclude that there was after all a privileged, universal, inertial reference frame in which the speed of light was the same in all directions as the simplest form of the laws required, and that the laws derived in all other frames would show a departure from this ideal. The Earth itself cannot be this privileged frame, since it is under constant gravitational acceleration by the Sun (circular motion, even at constant speed, involves a continual change of direction, which constitutes an acceleration) and thus is not an inertial frame. And even if at some point its motion coincided with the privileged frame, six months later its orbit would have carried it around to a point where it was moving with double its orbital speed with respect to it. In any case, whichever inertial frame was the privileged one, sensitive enough measurements of the speed of light in orthogonal directions in space, continued over six months, should be capable of detecting the Earth’s motion with respect to it.
Many interpreted this universal frame as the hypothetical “ether” that had been speculated about long before Maxwell’s electromagnetic theory, when experiments began revealing the wave nature of light. If light consisted of waves, it seemed there needed to be something present to be doing the “waving” – analogous to the water that carries ocean waves, the air that conducts sound waves, and so on. The eighteenth to early nineteenth centuries saw great progress in the development of mathematics that dealt with deformation and stresses in continuous solids, and early notions of the ether sought an interpretation in mechanical terms. It was visualized as a substance pervading all space, being highly rigid in order to propagate waves at such enormous velocity, yet tenuous enough not to impede the motions of planets. Maxwell’s investigations began with models of fields impressed upon a mechanical ether, but the analogy proved cumbersome and he subsequently dispensed with it to regard the field itself as the underlying physical reality. Nevertheless, that didn’t rule out the possibility that an “ether” of some peculiar nature might still exist. Perhaps, some concluded, the universal frame was none other than that within which the ether was at rest. So detection of motion with respect to it could be thought of as measuring the “ether wind” created by the Earth’s passage through it in its movement through space.
The famous experiment that put this to the test, repeated and refined in innumerable forms since, was performed in 1887 by Albert Michelson and Edward Morley. The principle, essentially, was the same as comparing the round-trip times for a swimmer first crossing a river and back, in each case having to aim upstream of the destination in order to compensate for the current, and second covering the same distance against the current and then returning with it. The times will not be the same, and from the differences the speed of the current can be calculated. The outcome was one of the most famous null results in history. No motion through an ether was detected. No preferred inertial reference frame could be identified that singled itself out from all the others in any way.
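In the swimmer analogy, the round-trip times classically expected for an interferometer arm of length L carried through the ether at speed v are (a standard textbook calculation, included only to show what the experiment was looking for):

$$
t_{\parallel} = \frac{2L/c}{1 - v^{2}/c^{2}}, \qquad t_{\perp} = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}}.
$$

For the Earth’s orbital speed the difference is only of order v²/c², about one part in a hundred million, but Michelson’s interferometer was sensitive enough to see it – and it wasn’t there.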
So now we have a conundrum. The elaborate experimental attempts to detect a preferred reference frame indicated an acceptance that the relativity principle might have to be abandoned for electromagnetism. But the experimental results failed to identify the absolute reference frame that this willingness allowed. The laws of electromagnetism themselves had proved strikingly successful in predicting the existence of propagating waves, their velocity and other quantities, and appeared to be on solid ground. And yet an incompatibility existed in that they were not covariant under the classical transforms of space and time coordinates between inertial frames. The only thing left to question, therefore, was the process involving the transformations themselves.
Lorentz’s Transforms for Electromagnetics
Around the turn of the twentieth century the Dutch theoretical physicist Hendrik Lorentz followed the path of seeking alternative transformation laws that would do for electromagnetics what the classical transforms had done for mechanics. Two assumptions that few people would question were implicit in the form of the Galilean transforms: (1) that observers in all frames will measure time the same, as if by some universal clock that ticks the same everywhere; and (2) while the space coordinates assigned to points on a rigid body such as a measuring rod might differ, the distance between them would not. In other words, time intervals and lengths were invariant.
In the Lorentz Transforms, as they came to be called, this was no longer so. Time intervals and lengths measured by an observer in one inertial frame, when transformed to another frame, needed to be modified by a factor that depended on the relative motion between them. Lorentz’s system retained the notion of an absolute frame in which the ether is at rest. But the new transforms resulted in distances being reduced in the direction of motion relative to it, and it was this fact which, through an unfortunate coincidence of effects, made the motion itself undetectable. As a matter of fact, an actual physical shrinkage of precisely this form – the “Fitzgerald Contraction” – had been proposed to explain the Michelson-Morley result as due to a shortening of the interferometer arms in the affected direction. Some textbook writers are of the opinion that Lorentz himself took the contractions as real; others, that he used them simply as mathematical formalisms, symbolizing, as it were, some fictitious realm of space and time that applied to electromagnetic phenomena. I don’t claim to know what Lorentz thought. But here was a system which acknowledged a preferred frame as required by Maxwell’s equations (defined by the constancy of c), yet at the same time observed the relativity that the optical experiment seemed to demand. Okay, maybe things were a bit messy in that a different system applied to mechanics. But everything more or less worked, and maybe that was just the way things from now on would have to be.
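For reference, the transforms themselves, for relative motion at speed v along the x direction, are

$$
x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
$$

with y and z unchanged. They collapse back into the Galilean forms given earlier when v is small compared with c, and they carry within them the shrinking of lengths by a factor 1/γ and the slowing of clocks by the same factor that the rest of this chapter keeps coming back to.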
Except that somebody called Albert Einstein wasn’t happy with it.
The New Relativity
Einstein: Transforming All of Physics
Neither mechanics nor – regardless of the constant in Maxwell’s equations – electromagnetics had revealed an absolute frame of reference. All experiments seemed to indicate that any inertial frame was as good as another. What this suggested to Einstein was that some kind of relativity principle was in evidence that applied across the whole of science, according to which physics should be the same for all observers. Or putting it another way, the equations expressing all physical laws should be covariant between inertial frames. Following Lorentz, but with an aim that was general and not restricted to a subset of physics, Einstein set out to discover a system of transforms that would make this true. Two postulates formed his starting point. (1) The relativity principle applies for all of physics across all inertial frames, which was what the intuitively satisfying solution he was searching for required. (2) The velocity of light, c, is the same for observers in all inertial frames regardless of their state of motion relative to each other. For that’s what Maxwell’s equations said, and being a physical law, it had to apply in all frames for (1) to be true.
And what he did in his paper on special relativity, published in 1905, was rediscover the Lorentz Transforms. This was hardly surprising, since they gave the right answers for electromagnetism – hence anything saying otherwise would have been wrong. But there was a crucial difference. Whereas Lorentz’s application of them had been restricted to the special area of electromagnetism, Einstein maintained that they applied to everything – mechanics as well.
But, wait a minute. If the relativity principle was to be observed, and the new transforms applied, how could they still be compatible with Newton’s long-established mechanics, which was enthroned as being consistent with the classical Galilean transforms, not with the new Lorentzian ones?
The only answer could be that Newtonian mechanics wasn’t as invincibly established as everyone thought it was. Recall the two assumptions we mentioned earlier that the Galilean transforms imply: that space and time intervals are invariant. What Einstein proposed was that the velocity-dependencies deduced by Lorentz were not part of some fudge-factor needed for electromagnetism, but that they expressed fundamental properties of the nature of space and time that were true universally, and hence called for a revision of mechanics. However, the new mechanics could hardly render invalid the classical results that centuries of experimenting had so strongly supported. And indeed, this turned out to be so; at the low velocities that classical science had been confined to, and which shape the common sense of everyday experience, the equations of the new mechanics merged into and became indistinguishable for all practical purposes from the Newtonian ones.
Relativity’s Weird Results
Where the two systems began departing significantly was when very high velocities were involved – of the order of those encountered in electromagnetism and late-nineteenth-century experiments on fast-moving particles, where it had already become clear that classical mechanics couldn’t be correct. Space and time were no longer fixed and unchanging but behaved weirdly at extremes of velocity that everyday experience provided no schooling for, with consequences that Newtonian mechanics hadn’t anticipated. These are well-enough known now to require no more than that they be listed. All have been verified by experiment.
Addition of velocities. In classical mechanics, a bullet fired from an airplane will hit a target on the ground ahead with a velocity equal to that of the plane relative to the ground plus that of the bullet relative to the plane. But according to relativity (henceforth the “special relativity theory,” or “SRT”), what appears to be obvious isn’t exactly so. The velocity in the target’s frame doesn’t equal the sum of the two components – although at the speeds of planes and bullets you’d never notice the difference. The higher the velocities, the greater the discrepancy, the relationship being such that the bullet’s velocity in the target’s frame never manages to exceed c, the speed of light. Thus even if the plane is coming in at 90% c and fires a bullet that leaves the plane at 90% c, the bullet’s velocity measured by the target will be 90% c plus something, but not greater than c itself. (In fact it will be 99.45% c.) In the limit, when the bullet leaves the plane at c, the resultant, bizarre as it sounds, is still c. It has become a photon of light. Its speed is the same in both the frame of the airplane (source) and that of the target (receiver). Add two velocities – or as many as you like – each equal to c, and the result still comes out at c. And that’s what all the Michelson-Morley-type experiments confirm.
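Anyone who wants to check the 99.45 percent figure can do so in a few lines of Python (a throwaway sketch of the standard composition rule, working in units where c = 1; nothing here comes from the text above beyond the formula itself):

    def add_velocities(u, v):
        # Relativistic velocity addition, with speeds given as fractions of c.
        return (u + v) / (1 + u * v)

    print(add_velocities(0.9, 0.9))  # 0.9944... : the 99.45% c quoted above
    print(add_velocities(0.9, 1.0))  # 1.0 exactly: light stays at c

The rule being applied is w = (u + v)/(1 + uv/c²), which reduces to simple addition whenever the product uv is negligible compared with c².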
Relativity of simultaneity. The upper limit on velocity makes it impossible to devise a method for synchronizing clocks in a way that enables different frames to agree on whether two events happen simultaneously. Some arbitrary frame could be chosen as a reference, of course – such as the Sun-centered frame – and a correction applied to decide if two events were simultaneous as far as that frame was concerned, but it wouldn’t mean much. One person’s idea of simultaneity would still be no better or worse than any other’s, and the term loses any real significance. Establishing absolute simultaneity without a privileged frame would require an infinitely fast synchronizing signal, which SRT says we don’t have.
Mass increase. Mass measures the amount of resistance that an object exhibits to being accelerated – that is, having its state of motion (speed and/or direction) changed. A cannon ball has a large mass compared to a soccer ball of the same size, as kicking or trying to stop one of each will verify. Though unobservable at everyday levels, this resistance to being accelerated increases as an object moves with higher speed. In particle accelerators, far more energy is required to nudge the velocity of a particle an additional tenth of a percent c faster when it is already moving at, say, 90% c than to accelerate it the first tenth of a percent from rest.
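The quantitative version, in the form it is usually taught (with the “relativistic mass” defined as the rest mass scaled up by the Lorentz factor), is

$$
m(v) = \frac{m_0}{\sqrt{1 - v^{2}/c^{2}}},
$$

so that at 90 percent of c the effective inertia has already more than doubled, and it grows without limit as v approaches c – which is why no amount of accelerator power pushes a particle past the speed of light.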
Mass-energy equivalence. As the velocity of a body increases, it stores more kinetic energy. From the preceding paragraph, it also exhibits an increase in mass. This turns out to be more than just coincidence, for according to relativity mass and energy become equivalent and can be converted one into the other. This is true even of the residual mass of an object not moving at all, which still has the energy equivalent given by the famous equation E = m₀c², where E is the energy and m₀ the object’s mass when at rest. All energy transitions thus involve changes in mass, but the effect is usually noticeable only in nuclear processes such as the mass deficit of particles bound into a nucleus or the yield of fission and fusion bombs; also the mass-energy balances observed in particle creation and annihilation events.
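A single number conveys the scale involved: one kilogram of rest mass corresponds to

$$
E = m_0 c^{2} = 1\ \mathrm{kg}\times(3\times10^{8}\ \mathrm{m/s})^{2} \approx 9\times10^{16}\ \mathrm{joules},
$$

roughly three years’ output of a one-gigawatt power station, which is why the bookkeeping only becomes noticeable when nuclear binding energies are in play.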
Time dilation. Time, and hence processes that are time-dependent, runs slower in a moving frame than in one at relative rest. An example is the extended lifetimes shown by muons created when cosmic-ray protons bombard the upper atmosphere. The muons reach the Earth’s surface in numbers about nine times greater than their natural decay time (mean lifetime 2.2 microseconds) would lead one to expect. This is explained by time in the muon’s moving frame being dilated as measured from the surface, giving a longer decay period than would be experienced by a muon at rest. High-accuracy clocks on rocket sleds run slower than stationary clocks.
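A quick calculation shows how the dilation does the job (a sketch only; the muon speed of 0.998 c is assumed for illustration and isn’t a figure quoted above):

    import math

    c = 299_792_458.0        # speed of light, m/s
    lifetime = 2.2e-6        # muon mean lifetime in its own rest frame, s
    v = 0.998 * c            # assumed muon speed, for illustration only

    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    print(gamma)             # ~15.8 : the dilation factor
    print(gamma * lifetime)  # ~3.5e-5 s : lifetime as measured from the ground

Stretching microseconds into tens of microseconds is what lets particles created at altitudes of ten kilometers or more reach the surface in the numbers observed.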
The mathematician Hermann Minkowski developed the Einstein theory further by showing that it entailed a reality consisting not of the three-dimensional space and separate time that are ordinarily perceived, but of a strange, non-Euclidean, four-dimensional merging of the two, known ever since as spacetime. Only from the local standpoint of a particular Galilean frame do they separate out into the space and time of everyday life. But the space and time that they resolve into are different in different frames – which is what the transforms of SRT are saying.
Unifying Physics
Although many might remain unconvinced, this kind of thing is what scientists regard as a simplification. When phenomena that were previously thought to be distinct and independent – such as space and time in the foregoing – turn out to be just different aspects of some more fundamental entity, understanding of what’s going on is deepened even if the techniques for unraveling that understanding take some work in getting used to. In the same kind of way, momentum and energy become unified in the new four-dimensional world, as do the classical concepts of force and work, and electric current and charge.
This also throws light (pun unintended, but not bad so I’ll let it stand) on the interdependence of the electric and magnetic field quantities in Maxwell’s equations. In Maxwell’s classical three-dimensional space the electromagnetic field is formed from the superposition of an electric field, which is a vector field, and a magnetic field, which is a tensor field. In Minkowski’s spacetime these merge into a single four-dimensional tensor called the electromagnetic tensor, and the four three-dimensional equations that Maxwell needed to describe the relationships reduce to two four-dimensional ones. Hence the interdependence of electric and magnetic fields, which in the classical view had to be simply accepted as a fact of experience, becomes an immediate consequence of their being partial aspects of the same underlying electromagnetic entity.
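In the four-dimensional notation (the modern form, which postdates Minkowski’s own symbols), the reduction reads

$$
\partial_\mu F^{\mu\nu} = \mu_0 J^{\nu}, \qquad \partial_{[\alpha}F_{\beta\gamma]} = 0,
$$

where the electromagnetic tensor F carries the components of both the electric and magnetic fields, and the four-current J combines charge density and current into a single object.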
In SRT, Minkowski’s four-dimensional spacetime is considered to be “flat” – uncurved, like the classical Euclidean space of Newton. An object’s “world-line” – the path showing its history in spacetime – will be a straight line when the object is in a state of rest or uniform motion. What differentiates accelerating frames is that their world-lines become curved. In developing his general theory of relativity (GRT), Einstein sought to remove the restriction of inertial frames and extend the principle to frames in general. In doing so he proposed that a region of space subject to gravitation is really no different from a reference frame undergoing acceleration. Inside an elevator, for example, there’s no way of telling if a pen falling to the floor does so because the elevator is accelerating upward or because the floor is attracting it downward. 67
If a gravitational field is equivalent to acceleration, motions associated with it will also be represented by curved world-lines in spacetime. Hence, in GRT gravitation is interpreted geometrically. Instead of somehow attracting bodies like planets to move in curved paths through flat space, the presence of the Sun’s mass itself warps the geometry of spacetime such that the paths they naturally follow become curved. An analogy often used to illustrate this is a stretched rubber sheet, representing undeformed space. Placing a heavy object like a bowling ball on the sheet creates a “well,” with sides steepening toward the center, that the ball sits in, but which would be indiscernible to a viewer vertically above who had no knowledge of a dimension extending in that direction. If a marble is now rolled across the sheet, its trajectory will be deflected exactly as if the sheet were flat and the ball exerted an attraction. In the absence of any friction, the marble could be trapped in a closed path where the tendencies to fall down the well and to be lifted out of it by centrifugal force balance, causing it to orbit the bowling ball endlessly.
If spacetime itself is curved in the vicinity of masses, then not just massive objects but anything that moves through space will also follow paths determined by the nonflat geometry. So stars, for instance, should “attract” light, not just material bodies. That this is so is verified by the observed deflection of starlight passing close to the Sun. So once again, all forms of energy exhibit an equivalence to mass.
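The deflection in question is tiny but measurable. For a ray grazing the edge of the Sun, the relativistic prediction is

$$
\delta = \frac{4\,G M_{\odot}}{c^{2} R_{\odot}} \approx 1.75\ \mathrm{arcseconds},
$$

about twice what a naive Newtonian treatment of light as falling corpuscles gives, and the eclipse measurements made since 1919 have come down on the relativistic side.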
Finally we’re back to a situation where we have the principle of relativity, a universal statement of the laws of physics (the new mechanics, which subsumes electrodynamics), and a system of transformations that are mutually consistent. Science has been integrated into a common understanding that’s found to be intellectually satisfying and complete. Its successes are celebrated practically universally as the crowning achievement of twentieth-century science. So what are some people saying is wrong with it?
Dissident Viewpoints
As we said at the beginning, it’s not so much a case of being “wrong.” When a theory’s predictions accord with the facts as far as can be experimentally determined, it obviously can’t be rejected as an invalid way of looking at things. But that isn’t enough to make it the only valid way. And if other ways that can be shown to be equally valid by according with the same facts are able to do so more simply, they deserve consideration. The objection is more to the confident assurances that we now have all the answers, no way of doing better is conceivable, and the book is closed. When claims to a revelation of final Truth are heard, with all moves toward criticism being censured, ridiculed, or dismissed out of hand, then what’s going on is a drift toward intolerant dogmatism rather than science. Einstein would probably have been one of the first to agree. One of his more endearing quotes was that “I only had two original ideas in my life, and one of them was wrong.” I don’t think he would object at all to our taking a long, hard look at the other one too.
Elegant, Yes. But Is It Really Useful?
Not everyone is persuaded that the disappearance of such fundamental concepts as space and time into abstractions of mathematical formalism helps our understanding of anything, or sees it as necessary. Traditionally, length, time, and mass have constituted the elements of physics from which all other quantities, such as acceleration, force, energy, momentum, and so on, are derived. Elevating a velocity (length divided by time) to a privileged position as Nature’s fundamental reality, and then having to distort space and time to preserve its constancy, just has the feel about it, to many, of somehow getting things the wrong way around. This isn’t to say that what’s familiar and apparently self-evident can always be relied upon as the better guide. But a physics of comprehension built on a foundation of intuition that can be trusted is surely preferable to one of mere description that results from applying formalized procedures that have lost all physical meaning. We live in a world inhabited not by four-dimensional tensors but by people and things, and events that happen in places and at times. A map and a clock are of more use to us than being told that an expression couched in terms of components having an obscure nature is invariant. If other interpretations of the facts that relativity addresses can be offered that integrate more readily with existing understanding, they deserve serious consideration.
Lorentz’s Ether Revisited
A good place to start might be with Lorentz’s ether theory (LET). Recall that it was compatible with all the electromagnetic results that SRT accounts for but postulated a fixed ether as the propagating medium, which is what the c in Maxwell’s equations referred to. In another reference frame the velocity of light will be c plus or minus that frame’s velocity relative to the privileged frame defined by the ether. “But measurements don’t show c plus or minus anything. They show c.” Which was where all the trouble started. Well, yes, that’s what measurements show. But measurements are based on standards like meter-rules and clocks. While SRT was willing to give up the Lorentzian assumptions of space and time being immutably what they had always been, the proponents of an LET interpretation point out that SRT itself carries an assumption that would seem far closer to home and more readily open to question, namely that the measuring standards themselves are immutable. Before consigning the entire structure of the universe to deformities that it hasn’t recovered from since, wouldn’t it be a good idea to make sure that it wasn’t the rules and clocks that were being altered?
If this should be so, then the rest frame of the ether is the one the electromagnetic laws are correct in, which the c in Maxwell’s equations refers to. In frames that are moving relative to it, the speed of light will be different. However, motion through the ether alters physical structures in such a way that the standards used will still measure it as c. So nobody can detect their motion with respect to the ether frame, and the same experimental results as are derived from SRT follow. But space and time remain what they’ve always been, and light retains the same property as every other wave phenomenon in physics in that its velocity is a constant with respect to the medium that it’s traveling through.
If motion relative to the ether frame could be established, the notion of absolute simultaneity would be restored. The velocity of light within that frame is known, and it would be meaningful to say, for example, that signals sent from the ends of a measured distance arrive at the midpoint at the same time. Velocities in other frames could then be corrected with respect to that standard. The situation would be similar to using sound signals to synchronize a clock on the ground with one carried on a moving vehicle.
It might seem a remarkable coincidence that the distortions induced in the measuring standards should be of just the right amount to keep the apparent value of c at that given by Maxwell’s equations. But it isn’t really, since the Lorentz Transforms that yield the distortions were constructed to account for those experimental results in the first place.
Lorentz himself conducted theoretical investigations of the flattening of electrons, assumed to be normally symmetrical, in their direction of motion through the ether. If basic particles can be affected, the notion of physical objects being distorted becomes less difficult to accept. After all, “matter” comprises a volume of mostly empty space – or ether in the context of the present discussion – defined by a highly dispersed configuration of electrical entities linked by forces. (Think of those models made up from balls connected by webs of springs that you see in science displays in museums and high-school laboratories to represent molecules.) Maybe the idea that objects moving fast through the ether could ever not be distorted is what really needs explaining.
Such distortions would perturb the energy dynamics of electron shell structures and atomic nuclei, with consequent modifications to emitted frequencies and other time-dependent processes, and hence any measuring techniques based on them. So the assumption of immutable clocks stands or falls on the same ground.
An introduction to the arguments favoring an LET model, and to the philosophical considerations supporting it is given concisely by Dr. George Marklin. 68 The LET interpretation can also be extended to include gravitational effects by allowing the ether to move differentially. Such a general ether theory has been developed by Ilja Schmelzer. 69 It is mathematically equivalent to GRT but uses Euclidean space and absolute time. Schmelzer gives the ether a density, velocity and pressure tensor and satisfies all the appropriate conservation equations, but it’s a fairly recent development and there are still unresolved issues.
A comprehensive treatment that covers all the ground of SRT and GRT, as well as addressing the controversial experimental issues that are argued both ways, such as the interpretation of results from rotating frames, the transporting of atomic clocks around the world, and the calibrating of GPS satellite ranging, is Ronald Hatch’s “modified Lorentz ether theory,” MLET. 70 The “modified” part comes from its extension of using the same ether to account for material particles in the form of standing waves. The theory and its ramifications are explored in detail in Hatch’s book Escape from Einstein. 71
Entraining the Ether
The concept of a fixed ether pervading all of space uniformly like a placid ocean was perhaps something of an idealization that owed more to Aristotelian notions of perfection than the messy, turbulent world we find ourselves living in. The Michelson-Morley result showed that no motion through such an ether can be detected – at least not by present methods – from which one conclusion is that it might as well not be there, and therefore to all practical purposes it doesn’t exist. This is the path that SRT develops. However, the same result would be obtained if the ether in the vicinity of the Earth moved with it in its orbit around the Sun, accompanying it as a kind of “bubble” inside which the Earth and the local ether remain at rest relative to each other. Such an “entrained ether” interpretation was in fact favored by Michelson himself, who never accepted the SRT explanation. The general consensus, however, was that it was incompatible with the aberration effect on starlight described earlier, and it was rejected accordingly.
But aberration turns out, on closer examination, to be a more complex business than is often acknowledged. The typical SRT textbook explanation attributes the effect to relative velocity, for example: “... the direction of a light ray depends essentially on the velocity of the light source relative to the observer.... This apparent motion is simply due to the fact that the observed direction of the light ray coming from the star depends on the velocity of the earth relative to the star.” 72
This can’t be so, however, since stars in general possess velocities that vary wildly with respect to the Earth. Pointing a telescope at any patch of sky small enough to define a direction should still capture a representative sample of them, and if relative velocity were the cause they should show a corresponding spread of aberration displacements. But that isn’t what’s found. The displacements turn out to be all the same.
Then again, let’s consider what are known as spectroscopic binary stars, that is, double stars too close together to be resolved separately but which can be distinguished by their Doppler-shifted spectra. If aberration depended on velocity, the very difference in velocities that produces the Doppler shifts would be sufficient to separate the images resolvably – in which case they would no longer be spectroscopic binaries!
And further, even for a star that was not moving with respect to the Earth at all, the atoms in the star’s photosphere that do the actual emitting of light, and which therefore constitute its true sources, will be moving thermally in all directions randomly. If aberration were due to their velocities, the compound effect would be sufficient to expand the points seen in the sky to a size that could be discerned with a good pair of binoculars.
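The numbers make the point starkly. To first order the aberration angle is just v/c expressed as an angle, and what is actually observed is the same annual displacement of roughly 20.5 arcseconds for every star. The 100 km/s stellar velocity and the few-kilometers-per-second thermal speed below are illustrative assumptions, chosen only to show what source-velocity aberration would look like if it existed.

    import math

    ARCSEC_PER_RADIAN = 180 / math.pi * 3600
    c = 3.0e5  # speed of light, km/s

    def aberration_arcsec(v_km_s):
        """First-order aberration angle, alpha ~ v/c, converted to arcseconds."""
        return (v_km_s / c) * ARCSEC_PER_RADIAN

    print(f"Earth's orbital motion, 30 km/s:     {aberration_arcsec(30):.1f} arcsec")   # ~20.6
    print(f"A star moving at 100 km/s:           {aberration_arcsec(100):.1f} arcsec")  # ~68.8
    print(f"Photospheric thermal motion, 5 km/s: {aberration_arcsec(5):.1f} arcsec")    # ~3.4

If source velocities were what counted, stars differing by 100 km/s would be displaced by tens of arcseconds relative to one another, and thermal motions would smear each image over several arcseconds. Neither is observed.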
There is an apparent displacement of planets, unfortunately also called aberration, that results from the delay of light in reaching the Earth. It does depend on source velocity, but it isn’t the quantity we’re talking about here. Its effect diminishes with distance and is effectively zero for objects as remote as the stars. According to Thomas E. Phipps Jr., Einstein used the wrong one. 73 Howard Hayden, professor emeritus of physics at the University of Connecticut, Storrs, arrives at the same conclusion. 74
Stellar aberration affects all stars in a locally surveyed region equally and varies systematically with an annual cycle. The velocity that it depends on is clearly the orbital velocity of the Earth, which would seem to imply velocity with respect to the Sun’s frame. But there’s a difficulty. Suppose there were a streetlamp beyond the telescope, directly in line with the star being observed. If some kind of motion through an ether were responsible, you’d think that light from one would follow the same path as light from the other, and the same aberration should be observed. It isn’t. No measurable effect occurs at all. Relativists chuckle and say, “We told you so. It’s because there’s no relative motion between the streetlamp and the observer.” But the considerations above are enough to show that this can’t be true either.
It’s more as if different ethers were involved: one containing the Earth and the streetlamp, inside which there is no aberration; the other extending out to somewhere less than the Sun’s distance, such that its annual motion within the Sun’s frame produces the effect on starlight. There are further complications too, such as why long-baseline radio telescope arrays should detect aberration when there’s no tube for photons to move sideways in, and the theories and arguments currently doing the rounds to try to account for them could bog us down for the rest of this book. I’ve dwelt on it this far to show that the whole subject of aberration is a lot more involved than the standard treatments that dismiss it in a few lines would lead one to believe.
Field-Referred Theories
Petr Beckmann, a Czech professor of electrical engineering at the University of Colorado, developed an alternative theory in which the second of SRT’s two founding premises – that the speed of light is constant with respect to all observers everywhere – is replaced by its speed being constant with respect to the dominant local force field through which it propagates. (SRT’s first premise was the relativity principle, by which the same laws of physics apply everywhere.) For most of the macroscopic universe in which observers and laboratories are located, this means the gravitational field that dominates wherever one happens to be. On the surface of the Earth it means the Earth’s field, but beyond some distance that gives way to the Sun’s field, outside which the field of the local part of the galactic realm dominates, and so on. This gives a more tangible form to the notion of embedded “ether bubbles,” with light propagating at its characteristic speed within fields that move relative to each other – like the currents and drifts and doldrums that make up a real ocean, as opposed to a universally static, glassy abstraction. And since, as with any conservative vector field (one in which energy potentials can be defined), any point of a gravity field is described by a line of force and the equipotential passing through it, the field coordinate system can serve as a local standard of rest.
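Where the Earth’s field “gives way” to the Sun’s depends on what criterion is adopted for dominance, and Beckmann’s own choice need not be the one used here. As a simple illustration, the sketch below finds the distance at which the two gravitational accelerations are equal, ignoring the Earth’s orbital motion and everything else.

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # kg
    M_EARTH = 5.972e24   # kg
    AU = 1.496e11        # mean Earth-Sun distance, m

    # Distance r from Earth at which Earth's pull G*M_EARTH/r^2 falls to the
    # Sun's pull at 1 AU, G*M_SUN/AU^2 (treated as constant over the region).
    sun_accel = G * M_SUN / AU**2
    r = math.sqrt(G * M_EARTH / sun_accel)
    print(f"Sun's acceleration at 1 AU: {sun_accel:.2e} m/s^2")   # about 5.9e-3
    print(f"Crossover distance from Earth: {r / 1e3:,.0f} km")    # about 260,000 km

On that crude reckoning the crossover lies around 260,000 kilometers out, inside the Moon’s orbit; other reasonable criteria put the boundary at other distances, but all of them leave the Earth’s field in charge at the scale of any laboratory on its surface.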
Does this mean, then, that the gravitational field is, in fact, the long sought-for “ether”? Beckmann, in effect, asks who cares, since the answers come out the same. Marklin is more of a purist, insisting on philosophical grounds that whatever its nature finally turns out to be, a physically real medium must exist. A “field,” he pointed out when I visited him at his home in Houston while researching this book, is simply a mathematical construct describing what a medium does. The smile can’t exist without the Cheshire cat. I’m not going to attempt to sit in judgment on heavyweights like Petr and George. The purpose of this essay is simply to inform interested readers of some of the ideas that are out there.