The Universe is?
It From Bit(Information)?
What Time is it?
Fields of Dreams?
Brains or Branes?
Black Holes (Black Surface) or Black Stars and quantum gravity
Doom is a game?
My Model or Muddle?
Evolution or devolution?
We got Trouble right here in River City?
An Inquiry into the Heavy Metaphysics of the 2000's
by Dr. Leland Gilsen
Who am I? What am I? When am I? Where am I? How am I? Why am I? Self-portrait done in 1970.
"Brother, can you spare a paradigm?"....Graffiti
"Wake up and smell the science" (Leland Gilsen) 2017.
"I am convinced that my glass is neither half full nor half empty, because it has a head of quantum foam" (Leland Gilsen) 1990.
1. I am reading a great book on anti-gravity. I cannot put it down.
2. I have a new theory on inertia, but it does not seem to be gaining momentum.
3. Why cannot atheists solve exponential equations? Because they do not believe in higher powers.
4. A Schrodinger cat walks into a bar. And does not.
5. Do you know the name Pavlov? It rings a bell.
6. A group of protesters in front of a physics lab: What do we want? Time travel! When do we want it? Irrelevant.
7. What does a subatomic duck say? Quark!
8. A neutron walks into a bar and asks how much for a beer. Bartender replies: For you, no charge.
9. Two atoms are walking along. One of them says: Oh, no, I think I lost an electron. Are you sure? Yes, I am positive.
10. An optimist sees a glass half full. A pessimist sees it half empty. An engineer sees it twice as large as it needs to be.
"Science progresses one funeral at a time." (Max Planck)
If you have never seen Mindwalk, rent it. If you have not read A Short History of Nearly Everything by Bill Bryson, read it. And check out The Oxford Murders motion picture.
"The Athenians had a penchant for adorning their city with idols, which is why in 1638 Bishop John Wilkins pointed out the irony of a man who turned gods into stones being persecuted by people who turned stones into gods". (Singh 2004: 16)
"Sometimes I've believed as many as six impossible things before breakfast." (Lewis Carroll)
"Any universe simple enough to be understood is too simple to produce a mind able to understand it." (Barrow's Uncertainty Principle, 2008:71)
"Many people would rather die than think, in fact, most do". "Even when the experts all agree, they may well be mistaken". (Bertrand Russell)
"A theoretical construction is unlikely to be true, unless it is logically simple". (Albert Einstein)
"Very strange people, physicists - in my experience the ones who aren't dead are in some way very ill". (Douglas Adams)
"Cosmologists are often in error, but never in doubt". (Lev Landau)
"I'll never make that mistake again, reading the experts' opinion". (Richard Feynman)
"Before science was science (the study of nature through close observation), it was philosophy (the study of nature through deep thought)." (Richard Panek 2011:67)
"If an idea is not dangerous, it is not worth calling it an idea". (Oscar Wilde)
"The upshot of all this is that we live in a universe whose age we can't quite compute, surrounded by stars whose distances we don't altogether know, filled with matter we can't identify, operating in conformance with physical laws whose properties we don't truly understand" (Bryson 2003:172).
Every person interested in the history of science should check chapter XX of "The Travels of Sir John Mandeville", which claimed to cover 35 years of travel starting in 1322. The agreed-upon date for the book is c. 1357. The oldest copy dates to 1371. Chapter 20 describes how the earth and sea are round and includes astrolabe sightings of the "lode star" and the "Antarctic star" in different countries, indicating the earth is either 20,425 or 31,500 miles in circumference. The author also notes "For from what part of the earth that men dwell, either above or beneath, it seemeth always to them that dwell that they go more right than any other folk." In other words, their feet are on the ground no matter where on a round earth. He also reports a tale of a man who went so far that he returned to his starting place.
The origin of the universe appears to violate the second law of thermodynamics, so perhaps "origin" itself should be questioned as a concept. What does this question even mean?
The majority of the following are quotes; anything outside of quotes is mine. Omnipotence and powerlessness = null (the ability to do everything and to do nothing are null strings); omnipresence and absence = null (being present at all times and the absence of time are null strings); omniscience and ignorance = null (knowing everything and not knowing are null strings).
Let's put "time" into perspective:
|Cosmic Time|Era|Redshift|Event|Time from now|
|0|Singularity|Infinite|Big Bang|20 billion years|
|10^-43 seconds|Planck time|10^32|Particle creation|20 billion years|
|10^-6 seconds|Hadronic Era|10^14|Annihilation of proton-antiproton pairs|20 billion years|
|1 second|Leptonic Era|10^10|Annihilation of electron-antielectron pairs|20 billion years|
|1 minute|Radiation Era|10^9|Nucleosynthesis of helium and deuterium|20 billion years|
|1 week|.|10^7|Radiation thermalizes prior to this epoch|20 billion years|
|10,000 years|Matter Era|10^4|Visible Universe becomes matter-dominated|20 billion years|
|300,000 years|Decoupling Era|10^3|Universe becomes transparent|19.7 billion years|
|1-2 billion years|.|10-30|Galaxy formation begins|18-19 billion years|
|3 billion years|.|5|Galaxy clustering begins|17 billion years|
|4 billion years|.|.|Our protogalaxy collapses|16 billion years|
|4.1 billion years|.|.|The first stars form|15.9 billion years|
|5 billion years|.|3|Quasars are born, Population II stars form|15 billion years|
|10 billion years|.|1|Population I stars form|10 billion years|
|15.2 billion years|.|.|Our parent interstellar cloud forms|4.8 billion years|
|15.3 billion years|.|.|Collapse of protosolar nebula|4.7 billion years|
|15.4 billion years|.|.|Planets form|4.6 billion years|
|15.7 billion years|.|.|Intense bombardment of planets|4.3 billion years|
|16.1 billion years|Archeozoic Era|.|Oldest terrestrial rocks|3.9 billion years|
|17 billion years|.|.|Single-cell life forms|3 billion years|
|18 billion years|Proterozoic Era|.|Oxygenation of atmosphere|2 billion years|
|19 billion years|Paleozoic Era|.|Multi-cellular life forms|1 billion years|
|19.4 billion years|.|.|Earliest fossil records|600 million years|
|19.55 billion years|.|.|Earliest land plants|450 million years|
|19.6 billion years|.|.|Earliest fish|400 million years|
|19.7 billion years|.|.|Earliest ferns|300 million years|
|19.75 billion years|Mesozoic Era|.|Conifers, mountains form|250 million years|
|19.8 billion years|.|.|Reptiles, proto-mammals|200 million years|
|19.85 billion years|Cenozoic Era|.|Dinosaurs, continental drift|150 million years|
|19.95 billion years|.|.|Mammals|50 million years|
|20 billion years|.|.|Homo sapiens|2 million years|
Table from Silk (1980:66-67).
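The last column of Silk's table is straightforward arithmetic: his assumed 20-billion-year age of the universe minus cosmic time. A minimal sketch of that conversion (note that Silk's 1980 age estimate is much larger than the currently measured ~13.8 billion years):

```python
# Look-back time under Silk's (1980) assumed 20-billion-year age.
# Modern measurements put the age closer to 13.8 billion years.
SILK_AGE_YEARS = 20e9

def time_from_now(cosmic_time_years):
    """Convert a 'Cosmic Time' table entry into 'Time from now'."""
    return SILK_AGE_YEARS - cosmic_time_years

# Planets form at cosmic time 15.4 billion years:
print(time_from_now(15.4e9) / 1e9, "billion years ago")
```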
Based on extrapolations, all of the key decisions that will determine whether humanity survives as a species must be made in the next 10 years. What is belief? One person's trash is another person's treasure.
"There are two conflicting primal impulses of the human mind - one to simplify a thing to its essentials, the other to see through the essentials to the greater implications" (Laughlin 2005:ix).
"We cannot answer every correct question - but we can often answer questions which are not correctly asked, by first giving them a form in which they have meaning. Often the process of reformulating the question and giving the answer is the same process... This is the scientific approach. Do not expect answers before you have found clear meanings. Do not throw away unclear questions. Keep them on file until you have the means at the same time to clarify and to answer them. Often these means result from developments in other fields, which at first sight appear to have nothing to do with the question" (Reichenbach 1971:2-3).
"Today, the world of physics can be divided into two areas. First, there are the laws of nature - timeless, immutable. We have no influence over them. Second, there are the initial (or boundary) conditions...The regime of eternal laws of nature within the boundary conditions of a given point in time lends the world a measure of uniqueness. The laws of nature apply to numerous natural phenomena that differentiate themselves by their initial conditions. We don't know whether there are laws of nature beyond these - laws that set the initial conditions of the universe" (Genz 1999:49).
"Physicists had tended to become less and less human; and now they were hardly more than semisubstantial extrapolations of their own theories" (a short story by Edmond Hamilton: M81: Ursa major).
"I am increasingly persuaded that all physical law we know about has collective origins, not just some of it. In other words, the distinction between fundamental laws and the laws descending from them is a myth, as is the idea of mastery of the universe through mathematics alone" (Laughlin 2005:xv).
Laughlin may be throwing in a fundamental wrench into the mechanics of the universe... causing it to stop (in a theoretical way). "The great power of science is its ability, through brutal objectivity, to reveal to us the truth we did not anticipate" (Laughlin 2005:xvi).
"To us, natural law must be validated by experience - by observation or experimentation. In philosophy, this is called a contingency condition. The laws of nature are not true because of their logical deduction; they are contingent on verification. Things could be otherwise. The laws of nature, such as we see them, make statements about our world that could be conceivably be found to be invalid by observation. We might even say that every so-called verification of a law of nature is tantamount to a failed attempt at falsifying it. There is no such thing as definite verification" (Genz 1999:70).
"Wittgenstein said, in his Tractatus, Not how the world is, is the mystical, but that it is."
I have to ask myself basic questions about "space" and "time".
HERE IS MY BASIC TAKE ON THINGS (2013-2014): Spacetime consists of four variables, three of which we call "space" and one we call "time". Spacetime is a quadratic ratio LWD:T (length*width*depth : time) within mass and time limits (see next paragraph). Spacetime is never at rest. Even if the three spatial variables appear to be at rest in relation to each other, they are always moving through time, which is why I put time on the other side of the ratio. There are two extremes in this ratio: any part of space approaches unity as time approaches infinity, OR the variables approach unity as time approaches zero. These are the two limits on the ratio. The former is the approached-but-never-achieved singularity horizon. The latter is the black horizon (so-called black holes are empty because they are not a hole, but a horizon). Gravity is the effect of spatial distortion created by the presence of mass in space (which defines space). Gravity is a variable related to spacetime distortions; it is an emergent quality of the spacetime ratio. Because "black holes" are "horizons" they do not violate the conservation of information law. The information is incorporated into the horizon and stored as time.
Mass has 4 dimensions. Particles are defined by dimension. There is one massless particle that has only 1 dimension: time. Gravity is the context of 4-dimensional interaction; it is the outcome of the interaction of time with the other three dimensions. Gravity is a variable, as it is defined by the interaction of time with "mass". The larger the mass, the slower time appears to tick, as for example when approaching a "black surface". The smaller the mass, the faster time ticks, until at zero mass it approaches infinity in relation to all other mass. When we observe gravitational effects we are actually observing time effects. Time holds the universe together, and the so-called "expansion" is time expansion. As the "mass bubbles" of the universe separate, relative time speeds up, so the rate of expansion speeds up.
There is a ratio between the "universe" and a "black bubble": the mass of the black bubble approaches infinity while time approaches zero as one limit, while the universe's mass approaches zero while time approaches infinity as the other limit. Our "reality" hovers between these two limits. Time is the direction between these two limits that distorts space in the other three dimensions into spacetime.
Black Horizons are Hawking engines that create dark particles from virtual particles, increasing dark matter and dark energy and driving the expansion of the Universe. One possibility is that the black horizons create dark matter with a different Planck length, so it is not perceivable except through gravity... or time is responsible for this effect.
If there is a set of parallel non-Planck particles, then there may be a parallel dynamic to the equation E = mc^2: DE = DM * DP^2, where DE = dark energy, DM = dark matter, and DP = the speed of the "dark photon", which may not be the same as the speed of a photon. It may be possible to work out these ratios by examining the effects of DM and DE on normal matter and energy, and the effects of their distortion of spacetime.
Imagine a box with a single point in it. Even if the point appears not to move, it is always moving through the other spatial direction we call "time". The ratio is NEVER still. For an observer phased into "time", the point does not appear to move. The observer can use time to observe the present position, recall the previous position, and predict the future position (i.e., its movement through time). If a box has spatial variables that change, time is required to define this change. The so-called spooky-action-at-a-distance conundrum of quantum physics is misleading because ALL matter is connected via time, no matter how much the "3-D physical" relationships change.
How to visualize time as the fourth dimension: Create a three-dimensional plot (L x W x H). Imagine that time is the dimension within which (as its context) the 3-D plot sits (it really does sit in such a context). Place a point on that plot. That is an instant in spacetime. Plot a second point, which by necessity happens later in time, and then draw a line from the first point to the second. The two points appear to be separated by the space between them, and that space can be measured. NOW rotate that plot (in the time dimension) around the first point until, from the viewpoint of the observer, the line becomes a point. For the observer, the spatial difference between the points disappears. THIS IS TIME AS A DIMENSION! Note that the "two" points are equal in their dimensionality, so they are now connected in time regardless of their separation in space (from the special condition of the observer), so what happens to the initial point happens to the second, regardless of the now-meaningless spatial line connecting them. But to any observer in the three-dimensional plot, that connection is invisible, so the interaction appears to be an unexplainable spooky action at a distance. The special condition of the observer decoheres the relationship because the observer has shifted its position in time and observes the linear distance, now conceived as a "spooky action at a distance". In one of two special cases, the linear distance is zeroed out and equal in time: rotating the spatial box around either point. But all observers are limited to one special case, rotating around the first point to merge with the second point they marked in spacetime.
The sum of the connections of many 3-D points in space forms the context of what we see as "reality" and constrains local choices/outcomes... a kind of coherence of histories defined by an "instantaneous" dimension?
"The morphing of space and time is not just the stuff of exotic physics. It governs the motion of any falling object. Baseballs, wineglasses, expensive smartphones: things that slip out of your hand accelerate towards the floor because Earth's mass warps time. (The warping of space plays only a minor role in these cases.) Down is defined by the direction in which time passes more slowly. Clocks at sea level tick more slowly than clocks on the summit of Denali; a watch strapped to your ankle will fall behind one on your wrist. In human terms, the deviations are small - parts in a trillion at most - but enough to account for the rate at which falling objects pick up speed. When you see an apple fall from a tree, you are watching it roll across the contours of time" (Musser 2015:72).
TIME IS THE CRITICAL DIMENSION. ALL EXPERIMENTS OPERATE WITHIN CHANGING TIME. ALL ATTEMPTS AT QUANTUM MEASUREMENT HAVE A TIME FACTOR THAT MUST BE ACCOUNTED FOR. TIME BINDS ALL THINGS. "Different observers may ascribe different locations to a place but will agree on the relations that places bear to one another" (Musser 2015:73). Time is the "spooky" aspect of physical reality. Visualizing time is the most important key to physics and cosmology.
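The "parts in a trillion" figure Musser cites can be sanity-checked with the weak-field approximation for gravitational time dilation, in which a clock raised by height h runs faster by roughly a fraction g*h/c^2. A minimal sketch (Denali's ~6,190 m elevation is my assumed height):

```python
# Weak-field gravitational time dilation: a clock raised by height h
# runs faster by a fraction of approximately g*h / c^2.
g = 9.81          # m/s^2, Earth's surface gravity
c = 299_792_458   # m/s, speed of light
h = 6_190.0       # m, rough summit elevation of Denali above sea level

fraction = g * h / c**2
print(f"fractional rate difference: {fraction:.2e}")  # ~7e-13, parts in a trillion
```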
The Planck scale is 10^-35 meters, so small it has only been proposed through quantum theory. In time scales: 3.3 yoctoseconds (10^-24 s) for a photon to travel past a proton; 0.33 picoseconds (10^-12 s) for a photon to travel 0.1 millimeters; 3.3 nanoseconds (10^-9 s) for a photon to travel 1 meter; 500 seconds for a photon to travel from the sun to the earth; 100 million seconds (10^8 s) for a photon from our sun to travel to Proxima Centauri; and one hundred quadrillion seconds (10^17 s) for a photon to travel from the cosmic horizon to us. AND for the photon we capture into our 3-dimensional interaction, NO TIME HAS PASSED FOR THAT PHOTON - linking the horizon with us in spacetime instantaneously for that photon from that horizon. Spacetime is a network of photon interactions with mass defining space and time. This is the spooky reality of the complexity of our framework.
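These coordinate-time figures are just distance divided by c. A quick check (the distances, e.g. ~1e-15 m for a proton's width and ~4.24 light years to Proxima Centauri, are my assumed round numbers):

```python
c = 299_792_458          # speed of light, m/s
LIGHT_YEAR_M = 9.461e15  # meters per light year

distances_m = {
    "across a proton (~1e-15 m)": 1e-15,
    "0.1 millimeters": 1e-4,
    "1 meter": 1.0,
    "Sun to Earth (1 AU)": 1.496e11,
    "Sun to Proxima Centauri (~4.24 ly)": 4.24 * LIGHT_YEAR_M,
}
for label, d in distances_m.items():
    print(f"{label}: {d / c:.3g} s")
```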
Time is very complex. Every experiment that attempts to minimize time is actually telling us something about time. Every experimenter and experiment exists in a SUM of events. You cannot get out of time. Any experiment that approaches the singular event must predict the event outcome. There is a ratio between the sum of events and singular events equal to the predictability and outcome of the experiment: the ratio between a sum and a singular. The so-called quantum computer is a time machine. And physics suggests a time machine is barred by the SUM... thus a quantum computer is also barred by the SUM. What you see is what you get, and what you get is what you see. Every attempt to isolate an event predicts its result. The only experiment on SUMS large enough to measure any significant change in the effect of "time sums" is the deep look through time that suggests the expansion of the Universe is speeding up. Singular particles are entangled in time. They express the ratio between SUM and SINGULAR.
Mass = the time dimension (spacetime), and gravity is time! High mass = slow time and small mass = fast time. Massless particles = NO TIME. A photon is instantaneous until it interacts with mass. It only exists in time in relationship to another mass. The spooky action at a distance is an illusion. Black holes are bubbles in Planck spacetime and a glowing bubble in my Gilsen spacetime model. Mass is converted from Planck scale to Gilsen scale as time approaches 0 as a limit under "black hole" scale mass. Gravity is a time well. Large mass is a "time bandit". The laws of thermodynamics are the laws of time. Black bubbles distort time because they do not allow time to escape, so they are time bubbles. Black bubbles distort spacetime in the centers of galaxies... producing the look of black energy and black mass. Black bubbles are time bubbles that distort the TIME side of SPACE such that we observe effects we call "black matter". The distortion causes galaxies to rotate as if they are under the influence of some distributed hidden mass. They are under the distortions of the central black time bubble we call a "black hole".
Quick question: what if the "missing" particles are not that heavy, but are non-detectable except in their gravitational effects because they have a different Planck mass, size and "time" (out-of-phase except in dimensional gravitational effects)? Could the so-called dark matter be non-Planck anti-particles (dark energy the out-of-phase effect that has the outcome of acting like a different dimension)? A non-Planck dimension? Could super-symmetric particle creation only be detected by quantized gravitational interactions? Could the non-detection under current experimental limitations not be an issue at all?
THEREFORE: so-called "dark matter" is the emergent phenomenon of large-scale distortion of spacetime derived from matter in rotation at huge scales. "Dark energy" is the emergent effect of this "history" of distortions. The further back an observer looks into space, the more time distorts space. This effect only becomes large enough to be perceived by the observer as time approaches zero (the earliest phase of the Universe). The result is the perception that spacetime expansion has recently begun to speed up within the observer paradigm.
The PHOTON is TIMELESS; it is a 3-dimensional zero-time oddity. It is light and sets the two limits of time: zero and 186,000 miles per second. That second is defined by the relative positions of the other three dimensions... for the photon NO TIME HAS ELAPSED. The interaction of a photon with matter collapses it into the third dimension. At large scales, the sum of these collapses produces the odd effects of so-called dark matter and energy. The measurements that suggest increasing rates in the expansion of the universe are actually a "map" of the structure of the Universe in its bizarre instantaneous collapse of photons. A light year is the distance light travels in one year. That distance is 5,900,000,000,000 miles. Alpha Centauri, which is 4.5 light years distant from us, is hard to visualize when you think that 4.5 light years is almost 25 trillion miles away. The farthest galaxies we can see today, we see after their light has traveled the universe for the last 13.2 billion years. As these photons arrive for us to see, they carry with them a picture of their place of origin. In essence, the further light has traveled to us, the further back in time we are seeing. When we look at these 13-billion-light-year-distant galaxies, we are seeing the light that left them 13.2 billion years ago. Most of the photons from those sources have collapsed by encountering matter... defining their relationship with matter and defining our concept of spacetime. The photons we record as we look at the "past" have "arrived" with zero time elapsed as far as each photon is concerned, i.e., they have not intersected with space until we record them. This TIES their origin to our spacetime. Time is really complex, and what we see (photons) and measure is the strange fact that time is the structure of the universe. "Expansion" is TIME, and time is the interaction of photons with matter (the other three dimensions).
At large scales this interaction creates the strange looking effects of dark matter and dark energy.
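The light-year arithmetic is easy to reproduce (4.37 ly, the commonly cited Alpha Centauri distance, is my substitution for the text's rounded 4.5):

```python
c_miles_per_s = 186_282              # speed of light in miles per second
seconds_per_year = 365.25 * 24 * 3600

light_year_miles = c_miles_per_s * seconds_per_year
print(f"one light year: {light_year_miles:.3e} miles")  # ~5.88e12, i.e. ~5.9 trillion

alpha_cen_ly = 4.37                  # commonly cited distance to Alpha Centauri
print(f"Alpha Centauri: {alpha_cen_ly * light_year_miles:.3e} miles")
```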
Imagine that you are traveling along with a photon that came into existence within our time frame 13.5 billion years ago and has not impinged on matter... that photon is at the leading edge of the universe, with the other three dimensions in flux, but for this photon NO TIME HAS OCCURRED. That non-matter-impinging photon is defining the scale of the universe without any "time" passing. Get your head around this simple fact. Do an Einstein thought experiment concerning all the other photons that have interacted or melded, defining the spacetime of our Universe, with all the "space" revolving around all the "times" from those interactions and meldings.
The PHOTON IS THE FUNDAMENTAL UNIT OF TIME. It is the zero limit of time. When the Universe began to expand it flashed out light in all directions. Those photons created a "bubble" of expanding light. The interactions of photons with matter create spaceTIME... otherwise space has no time dimension. The initial burst of photons created a space bubble and once matter started to interact with photons then SPACEtime existed. The difference between the initial time/nospace wave and spacetime is the so called inflationary phase. The interaction of each "timeless" photon with spatial objects (matter) melds space and time into spacetime. THE PHOTON IS TIME
If you are still riding along with that photon that is defining the expansion of the universe: as space builds around it, its fellow initial photons get less and less dense as the universe expands, and at some three-dimensional level it vanishes over the horizon. At that phase spatial geometry becomes null and the photon we have been traveling with has no dimension... it becomes the so-called singularity in spatial terms, which it cannot be, and releases all of its time energy as the beginning of a new universe?
The Penrose concept that the Big Bang is the reverse of the Black Hole indicates to me that there is no singularity at either end of the spectrum; the "Event Horizon" of a "Black Hole" is the same as the "Event Horizon" of the "Big Expansion". No singularity is cloaked by an Event Horizon because there is only the Black Horizon and the Expansion Horizon. There is no "cosmic censorship hypothesis". There is no breakdown of physical laws. There is conservation of information. A Black Hole is actually a dimensional bubble in which the three space dimensions form a minimal surface limited by the time dimension. Black bubbles exist in time and space; they just form an extreme limit.
There is no such thing as the big BANG. I am convinced it was the unfolding of dimensions. Obviously it "started" with unfolding time. The spatial dimensions followed. I visualize a one-dimensional point followed by a two-dimensional surface followed by the three-dimensional space. As each unfolded, the internal structure of energy was modified from uniform to patchy. The unfolding of the third dimension is what is termed the "theory of cosmic inflation". It was the BIG UNFOLDING. The last unfolding produced what is seen in the background radiation.
Black horizons are pseudo-surfaces powered by pressure to refold space and time back from three to two to one dimensions. The amount of energy needed to complete the refold would be equal to the energy of the universe as a whole. They can never refold, but they distort space into a pseudo-two-dimensional surface we mistakenly term a "black hole".
The black horizon or black surface, in my opinion, requires a much greater mass and energy to form than the black hole model. The missing matter and energy and the dark matter and dark energy theory may be explained by the black horizon. The "missing matter" is actually tied up in the black horizon surface. The gravity of the "black hole" is spread out over a surface area. The spacetime distortions of the (relatively) large surface area may account for the effects attributed to dark matter and dark energy.
Matter is locked-up energy. Energy is locked-up space. Matter and energy can be traded. Time is the ultimate lock. The black surface is a time lock. Figure out how to manipulate that lock and you have the key to the universe. There is a fundamental relationship between the black surface, where time = 0, and the photon, where time = 0. They are in opposition, one approaching infinite mass and the other approaching zero mass AS A LIMIT. They define the limits of space(time). They are the SAME THING at different scales. There is no loss of "information" at either limit; it is simply bound by TIME.
The big expansion started with infinite mass and zero time. The rate of expansion of the universe should be thought of as a mirror image set of two exponential lines where mass approaches infinite density on the upper left side and mass approaches zero density on the lower right side. Where these two curves meet in the middle is a period of relative balance. The exponential aspect at the beginning curve normalized the relative distribution of matter. The evidence for the increase in the expansion of the universe is simply that the universe has moved past the mid-point between the mirrored exponential curves. As the universe expands and relative density falls, the rate of expansion will speed up heading to infinity as a limit. Time will be normalized.
In a 2015 article resolving black hole conundrums, Polchinski discusses the entropy and information problem and concludes: "Furthermore, if the firewall exists, what is it? One idea is that the firewall is simply the end of space. Perhaps the conditions for spacetime to form do not exist inside the black hole. As Marolf once remarked, maybe the interior cannot form, because the black hole's quantum memory is full. If spacetime cannot occur inside, then space ends at the horizon, and an infalling astronaut who hits it dissolves into quantum bits residing on this boundary." (Polchinski 2015:41). This is what I have been saying for several years. His "firewall" is my black horizon.
I emailed Polchinski about this bold section. He replied: "Thank you for your message. There is a lot in there, it is difficult to respond to, but to go beyond what's in my article would really take getting into mathematical details." So: a polite non-reply on Mar 26, 2015 from a busy theorist.
This is a recent Aug 2015 news bit: "Stephen Hawking says he may have solved a problem that has plagued astrophysics for 40 years: the information loss paradox.
For decades, scientists have argued about what happens to the information relating to the death of a star that forms a black hole. It is known that nothing, not even light, can escape from a black hole owing to its intense gravitational pull. Quantum mechanics, though, says that information cannot be destroyed; general relativity says it must be. Hence, the information loss paradox.
In the 1970s, Hawking said black holes could emit information-less photons via quantum fluctuations - tiny perturbations in space-time - called Hawking radiation, but in 2004 he produced a new theory that claimed information could actually escape from a black hole. How that would occur was not clear, but now he says he has an answer.
"I propose that the information is not stored in the interior of the black hole as one might expect, but on its boundary, the event horizon," he said today at the KTH Royal Institute of Technology in Stockholm, Sweden. Specifically, he says a "supertranslation" takes place, which is essentially a hologram of the information. It means that information can survive and escape from a black hole at the event horizon, the boundary at which nothing is said to be able to break free.
The key to this theory is Hawking radiation. Hawking says it can pick up information and move it beyond the event horizon. But it is not all good news; the information is essentially useless. "The information about ingoing particles is returned, but in a chaotic and useless form," said Hawking. "This results in the information paradox. For all practical purposes, the information is lost." So Hawking is now talking about the event horizon as an information surface, exactly what I have been saying for over a year. The information is not lost, it is stored.
Everything that is problematic in physics is about TIME as a dimension: the so-called spooky aspect of quantum physics; the big bang expansion and rate of change; the cosmological constant; black matter and black energy; black "holes"; and matter/energy and gravity. The "normal" dimensions can wrap around the time dimension and can change while still maintaining a zero-time connection, but when any of the other dimensions impinge (the measuring issue, because the process IS time change), this creates the spooky effect of "quantum" physics. This applies to the slit experiment as well. The so-called "inflation" is a time-dependent effect of a spaceTIME distortion from the emergence of matter from energy. Gravity is time-dependent distortion of space... it is TIME as a dimension. We see time all the time (bad pun?) as a dimension, but do not internalize it in that way.
Entanglement IS time. Time approaches zero as a limit, and space approaches infinity as a limit, at the surface of a black horizon. Virtual particles are entangled in TIME, as the one inside the horizon exists in a quasi-spacetime dimension that preserves information. The black horizon STORES INFORMATION.
The so-called problem of the "end of inflation" (Afshordi, Mann & Pourhasan 2014: 41) is resolved by the emergence of matter (both visible and dark) defining dimension. The changing inflation is a result of the conversion of visible matter into dark matter and energy. The universe is fluctuating between visible matter/energy (Planck) and dark matter/energy (non-Planck) as a driving force. Since this is my idea... I will call the non-Planck versions Gilsen matter and Gilsen energy (grin).
In a June 2015 article in Scientific American, Clara Moskowitz writes that observations of the Abell 3827 cluster of colliding galaxies suggested that dark matter significantly lagged behind the ordinary matter, hinting that dark particles were interacting with one another and slowing themselves down. The astronomers from Durham University suggested that an exchange of "dark photons" could have created such a force. In other words, dark matter is filled with corresponding dark particles that act in ways analogous to ordinary matter. Another suggestion that my non-Planck matter exists.
From Science magazine 2014:
Researchers in Portsmouth and Rome have found hints that dark matter, the cosmic scaffolding on which our Universe is built, is being slowly erased, swallowed up by dark energy. The findings appear in the journal Physical Review Letters, published by the American Physical Society. In the journal, cosmologists at the Universities of Portsmouth and Rome argue that the latest astronomical data favours a dark energy that grows as it interacts with dark matter, and this appears to be slowing the growth of structure in the cosmos.
Professor David Wands, Director of Portsmouth's Institute of Cosmology and Gravitation, is one of the research team. He said: "This study is about the fundamental properties of space-time. On a cosmic scale, this is about our Universe and its fate. If the dark energy is growing and dark matter is evaporating we will end up with a big, empty, boring Universe with almost nothing in it. Dark matter provides a framework for structures to grow in the Universe. The galaxies we see are built on that scaffolding and what we are seeing here, in these findings, suggests that dark matter is evaporating, slowing that growth of structure."
Cosmology underwent a paradigm shift in 1998 when researchers announced that the rate at which the Universe was expanding was accelerating. The idea of a constant dark energy throughout space-time (the "cosmological constant") became the standard model of cosmology, but now the Portsmouth and Rome researchers believe they have found a better description, including energy transfer between dark energy and dark matter.
Research students Valentina Salvatelli and Najla Said from the University of Rome worked in Portsmouth with Dr Marco Bruni and Professor Wands, and with Professor Alessandro Melchiorri in Rome. They examined data from a number of astronomical surveys, including the Sloan Digital Sky Survey, and used the growth of structure revealed by these surveys to test different models of dark energy.
Professor Wands said: "Valentina and Najla spent several months here over the summer looking at the consequences of the latest observations. Much more data is available now than was available in 1998 and it appears that the standard model is no longer sufficient to describe all of the data. We think we've found a better model of dark energy. Since the late 1990s astronomers have been convinced that something is causing the expansion of our Universe to accelerate. The simplest explanation was that empty space (the vacuum) had an energy density that was a cosmological constant. However there is growing evidence that this simple model cannot explain the full range of astronomical data researchers now have access to; in particular the growth of cosmic structure, galaxies and clusters of galaxies, seems to be slower than expected."
Professor Dragan Huterer, of the University of Michigan, has read the research and said scientists need to take notice of the findings. He said: "The paper does look very interesting. Any time there is a new development in the dark energy sector we need to take notice since so little is understood about it. I would not say, however, that I am surprised at the results, that they come out different than in the simplest model with no interactions. We've known for some months now that there is some problem in all data fitting perfectly to the standard simplest model."
While cosmology is only a hobby for me, I have always felt that physicists do not understand time. I personally feel that the "silly" idea that observation is required in the quantum process is shockingly stupid. Quantum collapse is happening all the time, everywhere.
I repeat: everything that is problematic in physics is about TIME as a dimension: the so-called spooky aspect of quantum physics; the big-bang expansion and its rate of change; the cosmological constant; dark matter and dark energy; black "holes"; and matter/energy and gravity. The "normal" dimensions can wrap around the time dimension and can change while still maintaining a zero-time connection, but when any of the other dimensions impinge (the measuring issue, because the process IS time change), this creates the spooky effect of "quantum" physics. This applies to the double-slit experiment as well. The so-called "inflation" is a time-dependent effect of a spaceTIME distortion from the emergence of matter from energy. Gravity is a time-dependent distortion of space... it is TIME as a dimension. We see time all the time (bad pun?) as a dimension, but do not internalize it in that way.
Acceleration and gravity are equivalent. Measurements indicate the expansion of the universe is accelerating; therefore gravity is increasing as well. As a gravity field gets stronger, time slows down. The rate of expansion appears to be increasing, but time is decreasing. Visualize this paradox. What do the measurements really portend?
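The claim that clocks run slower in a stronger gravitational field is standard general relativity. A minimal sketch of the Schwarzschild dilation factor, using textbook solar values (the constants and the solar example are my illustration, not from the text):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def dilation_factor(M, r):
    """Schwarzschild factor: fraction of far-away time that elapses
    for a clock held at radius r outside a mass M."""
    return math.sqrt(1.0 - 2.0 * G * M / (r * c**2))

# Illustration: a clock on the Sun's surface vs. one far away.
M_sun = 1.989e30   # kg
R_sun = 6.957e8    # m
print(dilation_factor(M_sun, R_sun))  # just under 1: the deeper in the field, the slower the clock
```

The factor approaches zero as r approaches the Schwarzschild radius, which is the horizon behavior the later paragraphs lean on.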
What is the universe we live in like? "If you were to see something at a random moment, then there is a 95% chance that you will be glimpsing it during the middle 95% of its total lifetime" (Barrow 2008:109). I wonder what the real chance is that we are in that middle percentage when we observe our universe? Based on many models, we are at the beginning of the bell curve, a great distance from the average. I wonder what that means? "The universe has been expanding for 13.7 billion years. If this is how long it has been in existence, then with 95% probability it will last for more than another 351 million years and less than 534.3 billion years" (Barrow 2008:110).
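Barrow's bounds follow from the Copernican "delta-t" argument: if we observe the universe at a random moment, then with 95% confidence we sit inside the middle 95% of its lifetime, so the remaining lifetime lies between 1/39 and 39 times the current age. A quick check of the arithmetic:

```python
# With 95% confidence, the fraction of the lifetime already elapsed is
# between 2.5% and 97.5%, so the remaining lifetime is between
# (0.025/0.975) and (0.975/0.025) times the current age,
# i.e. between age/39 and 39 * age.
age = 13.7  # billion years (Barrow's figure)

lower = age * (0.025 / 0.975)   # remaining lifetime, lower bound
upper = age * (0.975 / 0.025)   # remaining lifetime, upper bound

print(round(lower, 3), "to", round(upper, 1), "billion years")
# lower is ~0.351 billion years (351 million); upper is ~534.3 billion years
```

The upper bound reproduces Barrow's 534.3 billion years exactly, which is why the lower bound must be 351 million (not billion) years.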
As pointed out by Barrow (1991: 38), initial conditions can be so all-pervasive that they look like natural laws, for example, the second law of thermodynamics. It can be theorized that the initial condition of the universe created many more ways for things to go from order to disorder than from disorder to order. This condition established time-order... the arrow of time is a reflection of entropy and the improbability of the initial conditions. Space and time are just dimensions, where three directions define spatial parameters and the fourth direction defines movement within those other three. There may be other dimensions as well, tied up into the minute tube-like strings we call particles so completely that they react with each other as nearly point-like objects... their other-dimensionality interfering with the ordinary dimensions to produce the "illusion" of structure (i.e., matter). The tension in strings is high in low-energy environments, which binds the strings up into particle-like structures. Their stringiness is only seen in high-energy environments, like the early stages of the big bang. Again, as Barrow noted (1999: 69), if the Universe is unique then the initial conditions are unique and become a law of nature. But if the Universe is just one of many possible universes, then the initial conditions have no special status... but this just pushes the "origin" question back another level... and the "why" question back another level... the easy way out.
"Over the past three centuries, obsessive attention to detail has slowly revealed that some physical quantities are not only accurately reproducible from one experiment to the next but are completely universal. It is hard to overstate how astonishing and disturbing this is. The extreme reliability and exactness of these quantities elevates their status from mere useful fact to a kind of moral certainty...The deeper meaning of these discoveries is still being debated, but everyone agrees they are important, for such certainty is uncommon in nature and demands explanation" (Laughlin 2005:12-13).
"A universal constant is a measurement that comes out the same every time. A physical law is a relationship between measurements that comes out the same every time" (Laughlin 2005:30).
"Etched into a tombstone in the Zentralfriedhof in Vienna, near the graves of Beethoven, Brahms, Schubert, and Strauss, is a single equation, S=k log W, which expresses the mathematical formulation of a powerful concept known as entropy. The tombstone bears the name of Ludwig Boltzmann, one of the most insightful physicists working at the turn of the last century" (Greene 2004:151).
"The entropy of a system in a given macrostate is, roughly speaking, the amount of information - the number of bits - necessary to specify one of the microstates in that macrostate, with the microstates all treated as if they were equally likely" (Gell-Mann 1994:219).
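Boltzmann's S = k log W and Gell-Mann's bits-per-microstate reading are two views of the same count. A toy sketch (the 1024-microstate macrostate is my invented example, not from either author):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (fixed exactly in the 2019 SI)

# A toy macrostate with W equally likely microstates.
W = 2**10            # 1024 microstates (invented example)

S = k_B * math.log(W)   # Boltzmann: S = k log W, in J/K
bits = math.log2(W)     # Gell-Mann: bits needed to pick out one microstate

print(S, bits)  # bits is exactly 10.0
```

The two quantities differ only by the constant factor k_B ln 2, which is why entropy can be read either thermodynamically or informationally.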
Determinism has been nullified by both quantum mechanics and our understanding of chaos theory. There is a relationship between initial conditions, the laws of nature, chaos theory and quantum mechanics that yields the degrees of freedom needed for our complex universe to operate. That all things must have a cause is not true in the strange world of quantum theory. Observed effects cannot always be traced to specific causes, and to some, this explains how the universe can be the ultimate free lunch.
"We can't ever know the exact location and exact velocity of even a single particle. We can't predict with total certainty the outcome of even the simplest of experiments, let alone the evolution of the entire universe. Quantum mechanics shows that the best we can ever do is predict the probability that an experiment will turn out this way or that" (Greene 2004: 79).
Thank the Universe for the Law of Large Numbers
"We also know that systems with small numbers of atoms are motivated by simple deterministic laws of motion and nothing else. We also know that attempts to discover the scale at which these laws cease to work or are supplanted by others have failed. And finally, we know that elementary laws have the ability in principle to generate phases and phase transitions as organizational phenomena. Thus when one strips away the unhelpful complexities, one is left with the following simple argument: microscopic laws are true and could plausibly cause phases; therefore we are sure they do cause them, even though we cannot prove this deductively" (Laughlin 2005:35-36).
"Quantum mechanics is starkly efficient: it explains what you see but prevents you from seeing the explanation" (Greene 2004: 183).
"The fact that our universe is young and evolving puts the question of the origin of the laws of nature in a quite different light. If the universe is eternal, there are two possible answers for the question of why the laws of nature are as we find them to be: religion or Platonism. Either God (who is, in most tellings, eternal) made the laws of nature as he made the world; or they are as they are because there is a mathematical form for the laws that is somehow fixed by some abstract principle. But although deism and Platonism seem, at first, poles apart, in a certain sense these two kinds of explanation are not really very different. Mathematical truth is supposed to be eternal, as is god. Mathematical truth is supposed to be something that holds irrespective of what is in the world, or indeed whether the world exists at all. A world made by mathematical laws, like a world made by a god, is a world constructed by something that exists eternally and outside of the world it creates" (Smolin 1997: 17-18).
Quantum theory makes the universe enjoyable. It upends intuition and logic as we think we know it. At the macroscopic level, logic and process work. At the quantum level, there are no facts. "The problem is not epistemological (about what we know) but ontological (about what is)" (Albert & Galchen 2009:32).
Entanglement is part of the heart of quantum "spookiness". "Entanglement... appears to entail the deeply spooky and radically counterintuitive phenomenon called nonlocality - the possibility of physically affecting something without touching it or touching any series of entities reaching from here to there" (Albert & Galchen 2009:34).
But if one simply recognizes that time is a direction, no different from up and down, or right and left, and that time defines the issue of nonlocality, one can see that the time direction binds the so-called nonlocal phenomena. They are local in the time direction. Forget about thinking of time as a non-direction. It simply IS A DIRECTION. A particle can be in a different space direction but still be tied into the same time direction. Every experiment is trapped inside time as a direction (just as it is in the other three spatial dimensions). Entanglement is simply that direction appearing to be spooky.
Albert and Galchen (2009:36) conclude that "...the actual physical world is nonlocal. Period." But if one takes into consideration the requirement that the time direction must be included in the experiments, then the world, in my opinion, becomes local again. This resolves the paradox of faster-than-light actions, as the entanglements are entangled in time. We are seeing time as a physical direction interphased with the classical directions.
Remember: "No object can move through space faster than light, but there is no restriction on how quickly space itself can move" (Nadis 2011:34). (i.e., there is no restriction on how quickly time can move: the speed-of-light limit on movement in the classical dimensions does not limit the speed of the time dimension... it is instantaneous).
A possible verification for my view: "What is uncanny about the way that quantum mechanical particles can nonlocally influence one another is that it does not depend on the particles' spatial arrangements or their intrinsic physical characteristics.... but only on whether or not the particles in question are quantum mechanically entangled with one another" (Albert & Galchen 2009:38). Create a mental image of two particles in a three-dimensional box. Now create another, similar box with the particles rotated to different locations around a time axis. Time hooks the two boxes together and ties them together as a direction.
I came up with my own explanation by making time more complex: that large scale spacetime is a warped version of quantum spacetime, and that entangled particles are "joined at the hip" by instantaneous time (as a direction) not changing as far as the two particles are concerned... only the other "three" dimensions rotating around time until some large scale interaction breaks that no-time-has-passed connection. In other words it breaks the time joined-at-the-hip connection to reveal what appears at the macro scale spooky action at a distance. It was not. The distance between them is/was "TIME" as a dimension. Time is the elastic that ties things together.
TIME is a dimension. I am convinced that the passage of time is a byproduct of the expansion of the universe. We are inside that expansion, and its limits are set by the speed of light AND the shape of space. The speed of light is a "relative constant" in that it is a byproduct of the expansion, the cosmological variable (I cannot call it a constant, as it is changing). "C" is the relative local limit of change. Think of a three-dimensional point traveling along the expansion of "space" through "time" as the time side of the spacetime variable shifts the location of the "space" side of spacetime. As an entity approaches the speed of light, it approaches the limits of cosmological expansion, and thus the limits of time in spacetime, as they are one and the same. Black holes have a spacetime horizon... they have no interior, as time approaches zero as a limit at the horizon. A black hole is only a black horizon; it has no "interior". The arrow of time is the arrow of expansion. As there is evidence that expansion is speeding up, that limit is changing. Time as the 4th dimension comes from the cosmic variable driving change in spacetime, resulting in increased entropy.
Spacetime = 3 directions + time as a direction (4 variables), as a quadratic equation or ratio. Each of the variables can approach zero as a limit or infinity as a limit. The three spatial dimensions approach zero as time approaches unity, OR space approaches unity as time approaches zero. The former is the so-called "singularity". The latter is the so-called "black hole". But the latter is not a hole; it is a horizon in time. Black holes are empty; they have no interior, only a surface horizon. Gravity is an effect due to the distortion of the three spatial directions in this ratio. Gravity is a variable based on mass dimension and time distortions. Gravity is an emergent quality of spacetime ratios.
My model meets the reality criterion and the locality assumption because the speed of light is not the speed of time; it deals with Bell's problem with von Neumann's proof; it resolves the information problem; it resolves the "faster than light" problem because the speed of time is not the speed of light (C is a limit only in the other three dimensions); it keeps entanglement from violating the theory of relativity; it resolves the quantum lottery; and it resolves the "it from bit" information issue and all of the "entanglement" problems.
Time is an independent variable/dimension. The speed of light relates to massless particles interacting through dimensions. Time itself is instantaneous. As any mass approaches the speed of light, its mass increases and its time clock shrinks in relation to all other mass, approaching zero as a limit; yet for the object, time appears to be normal, because it is following its own time dimension.
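The shrinking time clock described above is, in standard special relativity, the Lorentz factor γ = 1/√(1 − v²/c²): a moving clock ticks at dt/γ, which approaches zero as v approaches c. A short illustration:

```python
import math

def gamma(beta):
    """Lorentz factor for speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# Moving clocks tick at dt/gamma: as beta -> 1, gamma blows up and the
# onboard clock's rate (relative to us) approaches zero, while onboard
# time still feels normal to the traveler.
for beta in (0.1, 0.5, 0.9, 0.99, 0.999):
    print(beta, round(gamma(beta), 3))
```

At 99.9% of light speed the factor is already above 22, so the moving clock accumulates less than 1/22 of the stationary clock's elapsed time.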
David Wiltshire (University of Canterbury, New Zealand) feels time near galaxies could be slower than empty space. "In a truly relativistic view, the age of the universe differs from place to place....In empty space, over 18 billion years have elapsed since the big bang, but within galaxies only about 15 billion years have passed." (Merali 2012: 49)
..."Wiltshire claims cosmologists have misinterpreted the positions of the distant supernova explosions used to determine how quickly the universe is expanding. Light from a supernova travels to Earth's telescopes after passing through both patches of empty space (where the universe expands more rapidly) and through intervening galaxies filled with matter (where the expansion slows). As a result, Wiltshire says, cosmologists expect supernovas to be closer than they appear, creating the illusion that the expansion of the universe is speeding up. Supernova measurements are the key evidence for dark energy. But Wiltshire thinks physicists have been chasing shadows rather than zeroing in on reality for years." (Merali 2012: 49-51)
I think this strengthens my model. Time changes near matter, and thus the apparent age of the universe changes near matter.
Imagine this model: a box containing a point. Even if the "point" does not move in any of the three physical directions, it is constantly moving in the time direction. All 4 dimensions are always changing because of the time dimension. From an observer phased into time, the point does not appear to move. If the point changes in any one of the physical dimensions, only then does the observer perceive physical change. One physical variable changing = a timeline in physical space. Two variables changing = a timeplane in physical space. Three variables changing = a timecone in space. Four variables changing = a spacetime sphere.
Dark matter is an emergent quality of large-scale spaceTIME distortions. Dark energy is the emergent quality of SPACEtime distortions on the "history" of the universe. As we observers look back into time, the effects become significant as the spatial parameters of the universe approach zero as a limit, giving distorted data.
We have to GIVE UP the idea that length (space) and duration (time) are different. Time is a length, in no way different from length as space. They are flexible variables, constantly changing from one frame of reference (a length/duration) to another. The relationships MUST change. Local length (space) can appear to be motionless, but it is not, because if those three variables appear to be locally "frozen", the duration (time) variable cannot be frozen. They are a sum of parts that equals unity. If length (space) is set to zero, then time is set to infinity. If length (space) is set to infinity, then time is set to zero. We cannot "see" duration (time) as a length, except as the passage of time.
Time is not the same throughout the infinite range of lengths (space) because they are mutual sets of ratios. Time is a direction. Every local measurement is relative. There is no absolute spacetime (absolute space or absolute time)... only a dance of ratios. NOTHING shares the same length (space) and duration (time). There is no absolute frame of reference for the Universe. There is NO SIMULTANEITY for any frame of reference, only the illusion of local shared approximations.
Gravity is not a "force" in itself; it is an emergent quality of the relationship of spacetime ratios of matter/mass. Matter/mass creates a dent in the geometry of spacetime, a distortion around which the ratio must navigate.
The question about time being infinite or eternal is actually asking whether the Universe has always had four dimensions. It has, and it will. If that dimensionality changes, then there will be a new ratio for spacetime.
The spacetime ratio can approach zero or infinity as a limit, but never reach either extreme. This holds true for the Universe as a whole (space) and for so-called "black holes" (time). A black hole is not a hole; it is a horizon where one or all of the spatial dimensions (length) approach zero as a limit and duration (time) approaches infinity as a limit... which none can reach. So a "black hole" is empty; it has no interior, only an event horizon, and all of the information that tries to approach that limit is faced with the "infinite time" pressure that brings it up against the demands of the ratio. That information will reemerge as the demands of the ratio are modified over duration... the surface evaporates mass and leaves the extremes.
Almost everything in this booklet has nothing to do with "religion", therefore it has everything to do with "religion". Throw away every preconception you have. Be prepared to be non-linear.
"Dissatisfied emotion has frequently been projected into logic. In theories of the universe it often reappears in the guise of logical queries and pseudo-logical constructions. A philosopher argues that he has discovered a puzzle of Being which logic cannot solve - he might as well say that he has discovered a fact that arouses his emotional resistance" (Reichenbach 1971: 4).
"The Theologians think they know the questions but cannot understand the answers. The physicists think they know the answers, but do not know the questions. An optimist might thus regard a dialogue as a recipe for enlightenment, whilst the pessimist might predict the likely outcome to be a state in which we find ourselves knowing neither the questions nor the answers" (Barrow 1991: 1).
"Irrationality is the square root of all evil." - Hofstadter
"Maybe I am being a bit harsh on philosophers, but they have not been very kind to me. My approach has been described as naive and simpleminded. I have been variously called a nominalist, an instrumentalist, a positivist, a realist, and several other ists. The technique seems to be refutation by denigration: If you can attach a label to my approach, you don't have to say what is wrong with it. Surely everyone knows the fatal errors of all those isms" (Hawking 1993:42).
Cosmology: A branch of philosophy dealing with the origin, processes, and structure of the universe. Divided into physics and metaphysics.
Epistemology: The division of philosophy that investigates the nature and origin of knowledge.
Metaphysics: The branch of philosophy that systematically investigates the nature of the first principles and problems of ultimate reality. It includes the study of being (ontology) and the study of the structure of the universe (cosmology) where theory cannot be verified or tested.
I have added Epistemic Structural Reality: the branch of philosophy that says that we can only know the relations among things and not the things themselves; that relations are all that is. This implies that particle physics and quantum models are chimera (Kuhlmann 2013:45) (von Berger 2013:47-51).
I have added Meta-physics or perhaps Heavy Metaphysics or even Heavy Metalphysics?: Multiverse, parallel universes, post-inflation bubbles, quantum universes and other mathematical structures such as string theory.
Physics: The science (learning or study concerned with demonstrated truths or observable phenomena and characterized by the systematic application of scientific method) of matter and energy and of interaction of the two.
Quantum Theory: Is there a logical definition? Is there something that defies logic yet defines/controls everything? But then see next definition below. Is quantum mechanics the religion of science?
Religion: The expression of the belief in and reverence for a superhuman power or powers regarded as creating or governing the universe and any personal or institutionalized system of beliefs or practices embodying this belief.
Universe: All existing things, including the earth, the heavens, the galaxies, and all therein, regarded as a whole; the cosmos; the sphere or realm in which everything exists and takes place.
"By this very definition of "universe", one might expect the notion of a multiverse to be forever in the domain of metaphysics. Yet the borderline between physics and metaphysics is defined by whether a theory is experimentally testable, not by whether it is weird or involves unobservable entities" (Tegmark 2003:41). Unfortunately, none of the meta-physics/heavy metaphysics/heavy metalphysics theories appear to be testable.
"Any physical theory is always provisional, in the sense that it is only a hypothesis: you can never prove it. No matter how many times the results of experiments agree with some theory, you can never be sure that the next time the result will not contradict the theory. On the other hand, you can disprove a theory by finding even a single observation that disagrees with the predictions of the theory" (Hawking 1988:10).
"The best theories are ones that are unique in two senses. First of all, there should be no uncertainty about their consequences. The theory should predict all that is possible to predict and no more. But there is a second kind of uniqueness that would be especially treasured in what Steven Weinberg calls a final theory. It is a kind of inevitability - a sense that the theory could not be any other way. The best theory would not only be of everything, but it would be the only possible theory of everything" (Susskind 2006:113-114).
"The complaint about weirdness is aesthetic rather than scientific, and it really makes sense only in the Aristotelian world-view. Yet what did we expect? When we ask a profound question about the nature of reality, do we not expect an answer that sounds strange?" (Tegmark 2003:51).
"Something that is logically necessary is not subject to change. If the world with all its present properties corresponds to this definition, it has to remain the same throughout the ages. Properties we call contingent are those that might also be different. Contingencies are subject to change and cannot be uniquely captured by thought" (Genz 1999: 308).
So we have four possible constructs for everything around us: 1) There was no previous space/time = matter, energy, space and time just suddenly started (logical conundrum for a beginning out of nothing); 2) There was a primeval quantum state out of which space/time emerged (logical conundrum of a pre-space/time state); 3) There is a multiverse from which our universe budded (logical conundrum of space/time outside of space/time and pushes origins into non-question); and 4) The universe cycles through expansion and collapse (logical conundrum of non-origin, ultimate recycling with total recovery of process/entropy).
That leaves a future of: 1) Expansion slows and the universe expands eternally, with the last stars burning out in about 100 billion years; 2) Expansion continues and in about 30 billion years all galaxies move outside any local horizon; 3) Expansion increases and in about 50 billion years the "big rip" tears apart all structures; and 4) Expansion reverses in about 30 billion years and the universe collapses into a new expansion. Too bad I will not live for 30-50 billion years to check it out.
What do these definitions have in common? They are all systematic models for the way things are and the way things work in the universe.
"Cosmology is the study of the universe as a whole, including its size and shape, its history and destiny, from one end to the other, from the beginning to the end of time. That's a big topic. And it's not a simple one. It's not even simple to define what those concepts mean, or even if they have meaning" (Isaacson 2007:249).
The meta-physics of cosmology uses the scientific approach beyond its limits in one sense, but it approaches religion and epistemology as a limit in another sense. Things merge and get fuzzy. Just how something can exist outside of time (i.e., never have existed) and then suddenly exist and set the parameters of time and space (i.e., simply define existence)... is weird stuff.
Nobody should be afraid of "truth" as discovered by science. A religion or philosophy that cannot embrace reality derived from the scientific method shows a lack of sense and intelligence (i.e., is dull, obtuse = stupid). Fear of knowledge reflects dogma or doctrines used to define "us" versus "them"... creeds of the "enlightened" as opposed to the rest of the "barbarians". How can anyone declare enlightenment by excluding truth? All belief systems must contain their beliefs, and thus define their believers. That is just the way things are. Those who oppose science are opposing reality, and if there is a "god" in the religious sense, then that "god" created reality, and denying the truth of reality is denying "god".
Nobody ever said human beings are logical creatures. Most religions require their members to "believe" on faith (i.e., deny reality) the tenets of the system. In other words, they require that you become stupid in order to join.
People seem to flee from science because it offers no personal assurance for their moral and emotional needs. One does not have to have a religious creed to be moral, and most morals are defined by cultural values as well as human ecology. Since meta-physics and cosmology cannot prove or disprove "god"... any more than religion or philosophy can... science cannot be the dogmatic basis for such beliefs or faith. Some people twist science to that end. But most people turn to religion or philosophy for defining their belief or faith systems. But people should not deny science or truth or reality when they choose a belief system, and they should never use science to "disprove" religion. Science has shown one thing: it is impossible to fully prove anything. It has shown that it is possible to "confirm" things by experiments and predictions that are tested against results. Science is an optimistic kind of thing. Scientific pessimism is not science; it is a belief system disguised as science.
"Any string of symbols that can be given an abbreviated representation is called algorithmically compressible.... we recognize science to be the search for algorithmic compressions.... Science is predicated upon the belief that the Universe is algorithmically compressible and the modern search for a Theory of Everything is the ultimate expression of that belief, a belief that there is an abbreviated representation of the logic behind the Universe's properties that can be written down in finite form by human beings" (Barrow 1991: 11).
"The problem of fitting human life into the impersonal tapestry of cosmic space and time has been pondered by mystics, philosophers, theologians, and scientists of all ages. Their views straddle the entire range of options. At one extreme is painted the depressing materialistic picture of human life as a local accident, totally disconnected and irrelevant to the inexorable march of the Universe from the "Big Bang" into the future "Big Crunch" of devastating heat, or the eternal oblivion of the "Heat Death". At the other is preached the traditional teleological view that the Universe has some deep meaning, and part of that meaning is ourselves" (Barrow 1991: 164).
"Borrowing from the Big Bang example, astronomers gave the first options the cheerfully inadequate names Big Crunch (too much matter) and the Big Chill (too little matter); the third option was the Goldilocks universe (just right)" (Panek 2011: 58). Again, do not say science has no sense of humor.
Digression into authority:
Because people wrote things down in the past, and such writings are considered as coming from a "god" or through a "god"... anything that shows that the "word" is secular rather than spiritual is a threat. For some reason, people want to believe that founders of belief systems were somehow more connected directly to divinity than themselves... and have some special "authority".
I have some basic news for you... everyone who ever lived, who lives, and who ever will live, has no better connection or authority than anyone else. Simply because someone said they were better connected, or others say that about them, does not make it true. If you believe that there are authorities on "god", then I have this bridge I would like to sell you. Simply because people drop out of normal society to "study" religion does not make them experts on anything except the dogma they are studying. That they dropped out of society should tell you something right away. That someone should try to become an expert on a system of faith and belief by studying and pondering everything related to that system should again tell you something about that person's emotional needs and drives. That some reject the universe for the sake of a religious belief also speaks loudly.
My basic advice:
Put your faith and trust in yourself (in moderation). Learn, but learn in moderation. Believe, but believe in moderation.... be moderately heretical about your own beliefs. Be moderately certain you are right, but you might be moderately wrong as well. Be open. Be willing to change. Do not blame. Affirm in moderation. Be tolerant of others and listen with an open mind and heart, yet question in moderation. Be moderately true to yourself and to others. Like yourself in moderation, and you can like others in moderation. Never fear change. Never fear "truth" but take it in moderation. Never fear what science brings, because even science must be taken in moderation. Scientific "truth" changes over time as more data comes in and more tests are done. Embrace reality, never fear it. Embrace the universe and respect your place in it as well as the place of others. Think about the consequences of your actions in relation to others. Be selfish in moderation. Be loving in moderation. Eat, drink and be merry in moderation. Exercise in moderation. You are the center of your universe, but you share that universe in connection with all non-living and living things. Live in moderation.
"If I were to put it into a very few words, my dear sir, I should say that our prevalent belief is in moderation. We inculcate the virtue of avoiding excess of all kinds - even including, if you will pardon the paradox, excess of virtue itself. .... We rule with moderate strictness, and in return are satisfied with moderate obedience. And I think I can claim that our people are moderately sober, moderately chaste and moderately honest. .... I can add that our community has various faiths and usages, but we are moderately heretical about them" (Hilton 1934:90-91).
"Laziness in doing stupid things can be a great virtue" (Hilton 1934:187).
The reason WHY everyone has been wrong about "religion" is simple:
1) The people thinking about it have been humans (a bad starting point);
2) The people thinking about religion have been (living) amateurs (they have no experience as dead people);
3) God, or at least the Universe, is counter-intuitive. If "god" was intuitive, Xhe/she/it would be just like us, although most people assume "god", at the very least, is intelligent. After looking at some belief systems, that is questionable among humans.
Also, as explained in a book for children (The Phantom Tollbooth), problems are not as simple as most thinkers want you to believe. While traveling, a group of critters met the Dodecahedron, a creature with many faces, at a place where the road split into three roads:
"Then perhaps you can help us decide which road to take," said Milo.
"By all means," he replied happily. "There's nothing to it. If a small car carrying three people at thirty miles an hour for ten minutes along a road five miles long at 11:35 in the morning starts at the same time as three people who have been traveling in a little automobile at twenty miles per hour for fifteen minutes on another road exactly twice as long as one half the distance of the other, while a dog, a bug, and a boy travel an equal distance in the same time or the same distance in an equal time along a third road in mid-October, then which one arrives first and which is the best way to go?"
"Seventeen!" shouted the Humbug, scribbling furiously on a piece of paper.
"Well, I'm not sure, but--"Milo stammered after several minutes of frantic figuring.
"You'll have to do better than that," scolded the Dodecahedron, "or you'll never know how far you've gone or whether or not you've ever gotten there."
"I'm not very good at problems," admitted Milo.
"What a shame," sighed the Dodecahedron. "They're so very useful. Why, did you know that if a beaver two feet long with a tail a foot and a half long can build a dam twelve feet high and six feet wide in two days, all you would need to build Boulder Dam is a beaver sixty-eight feet long with a fifty-one foot tail?"
"Where would you find a beaver that big?" grumbled the Humbug as his pencil snapped.
"I'm sure I don't know," he replied, "but if you did, you'd certainly know what to do with him."
"That's absurd," objected Milo, whose head was spinning from all the numbers and questions.
"That may be true," he acknowledged, "but it's completely accurate, and as long as the answer is right, who cares if the question is wrong? If you want sense, you'll have to make it yourself."
"All three roads arrive at the same place at the same time," interrupted Tock, who had patiently been doing the first problem.
"Correct!" shouted the Dodecahedron. "And I'll take you there myself. Now you can see how important problems are. If you hadn't done this one properly, you might have gone the wrong way."
"I can't see my mistake," said the Humbug, frantically rechecking his figures.
"But if all the roads arrive at the same place at the same time, then aren't they all the right way?" asked Milo.
"Certainly not!" he shouted, glaring from his most upset face. "They're all the WRONG way. Just because you have a choice, it doesn't mean that any of them HAS to be right."
He walked to the sign and quickly spun it around three times. As he did, the three roads vanished and a new one suddenly appeared, heading in the direction that the sign now pointed.
"Is every road five miles from Digitopolis?" asked Milo.
"I'm afraid it has to be," the Dodecahedron replied, leaping onto the back of the car. "It's the only sign we've got."
Does this sound familiar? Have you run into models that were internally consistent but have nothing to do with phenomenal reality? Have you found answers to questions that were simply wrong? Are you stuck with the only facts you have?
Many models can be internally consistent, but be completely absurd. So much of human belief is built on such GIGO. Most is so absurd that suspension of common sense is a basic tenet of the system: suspend your common sense and BELIEVE... have FAITH... "After all, it's the only SIGN we've got!" Most religions ask their adherents to cultivate ignorance as a high art: instead of "Don't worry... be happy" it's more like "Be STUPID... be happy." Or perhaps, as the disco lyrics go: "Lookin' for God in all the wrong places.... lookin' for God.... lookin' for God."
"The laws of nature themselves, like the biological species, may not be eternal categories, but rather the creations of natural processes occurring in time. There will be reasons why the laws of physics are what they are, but these reasons may be partly historical and contingent, as in the case of biology" (Smolin 1997: 18).
What "god" and the Universe ain't:
1) OMNIPRESENT: I can definitely tell you from personal experience that loss of time sense is sheer madness. If "god" is omnipresent, Xhe/she/it is bonkers. Time and entropy are part of the basic structure of the universe. The speed of light sets limits.
2) OMNISCIENT: Possession of universal and complete knowledge would be a total dead end. There is no intelligence in total knowledge, because there is no change. Without change, there is no meaning. A completely meaningful being is a meaningless being. Intelligence is defined as the ability to learn or understand or to deal with new or trying situations. An omniscient being cannot learn or interact with new or trying situations ... as they have already happened. The universe is not determined.
3) OMNIPOTENT: The ability to have unlimited authority or influence is again worthless. With nothing impossible, then why bother? Ultimate banality. The universe is currently not a singularity, and a singularity may have never existed.
4) OMNIFICENT: Having unlimited (endless, boundless, infinite) creative power is again useless. Creation without struggle has no merit. Infinity is really big. It is bigger than any bigness you can imagine. But with no bounds, there is no meaning. The universe may be the ultimate free lunch, but it has limits. The total energy of the universe is set.
So an omnipresent, omniscient, omnipotent and omnificent being is an insane, meaningless, banal, meritless being and universe. I guess that to some people, that is a pretty good description of "god" or the universe. It certainly sounds like some people I know. Maybe one of them is "god"? I should probably ask.
Barrow (1991: 23-30) put things into perspective, which I have modified and commented upon. Using three variables (G = God, U = Universe, and L = Law), he looked at the following:
1) U is a subset of L: There was a pre-existing set of laws or logic that defined the nature of the universe when it was born; there is a structure larger than the universe. This is the position of cosmologists looking for the Theory of Everything. Since no theorem of the Universe can possess a larger information content than the axioms of the Universe, this may be impossible to discover... unless the Universe is incomplete (?) or flawed. It would be quite amusing to discover that the Universe is a flaw in an otherwise perfect symmetry.
2) L is a subset of U: Law did not exist before the universe. In some places in the universe, it does not apply. Law perhaps evolves out of the structure of the universe and can change.
3) L is U: This is more-or-less the singularity argument for the beginning of the universe: that law began with the birth of the universe, which began out of nothing... it cannot explain why creation should occur. It implies a prior #5 below!
4) L is non-existent: There may be no deep structure; everything may be chaos. What we think is law is illusion.
5) U is non-existent: This is interesting because it is a logical outgrowth of version 3 above... a singularity requires the non-existence of everything before it exists. A logical problem.
6) U is a subset of G: This is panentheism: God is in all things but not identical to all things; God existed before the universe and created it.
7) G is a subset of U: God is a superbeing limited to this universe; when the universe ends, God will end. This implies the prior existence of #9 below.
8) G is U: God as nature, found in many Eastern philosophies; non-personal.
9) G is non-existent: The view of the atheist. It is a precursor of #7 above with its logical problems.
10) U is non-existent: Same as possibility 5 above.
11) L is a subset of G: The laws of nature are imposed by God the lawmaker.
12) G is a subset of L: The evolving Deity, where God is constrained by some higher-order logic. If this relates to the universe, then it implies #14 below as a pre-existing condition.
13) G is L: The impersonal God as law or logic of nature.
14) L is non-existent: Same as item 4 above. If it relates to the universe, then it can raise a logic issue with #12 above.
15) G is non-existent: Same as item 9 above.
"Set theory can be viewed as a form of exact theology" - Rudy Rucker.
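In the spirit of Rucker's remark, Barrow's fifteen options really are just the possible set relations among three collections. A toy Python sketch of that bookkeeping (the set members are invented purely for illustration; nothing here is from the source):

```python
# "Exact theology": classify how one set stands in relation to another,
# the way Barrow's options relate G (God), U (Universe) and L (Law).
def relation(a, b):
    """Describe the set relation of a to b."""
    if not a:
        return "non-existent"
    if a == b:
        return "identical to"
    if a < b:                      # proper subset
        return "a proper subset of"
    if a > b:                      # proper superset
        return "a proper superset of"
    return "overlapping or disjoint with"

U = frozenset({"matter", "energy", "spacetime"})           # the Universe
L = frozenset({"matter", "energy", "spacetime", "logic"})  # Law, taken here as larger
G = frozenset()                                            # options 9/15: G is non-existent

print("U is", relation(U, L), "L")   # option 1: U is a subset of L
print("G is", relation(G, U), "U")   # the atheist's option 9
```

Swapping in different toy sets walks you through the other options; the point is only that the fifteen positions exhaust the formal possibilities.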
Meditate on the above. Remember that some physicists think there was no singularity: that the Universe has neither beginning nor end, just as the surface of a sphere has no beginning or end.
Scientific American (Feb 2011) says the current estimated number of stars in the visible Universe is: 300,000,000,000,000,000,000,000! Many of these will have planets.
Oct. 28, 2010
Trent Perrotto, Headquarters, Washington, 202-358-0321
Whitney Clavin, Jet Propulsion Laboratory, Pasadena, Calif., 818-354-4673
NASA SURVEY SUGGESTS EARTH-SIZED PLANETS ARE COMMON
WASHINGTON -- Nearly one in four stars similar to the sun may host planets as small as Earth, according to a new study funded by NASA and the University of California.
The study is the most extensive and sensitive planetary census of its kind. Astronomers used the W.M. Keck Observatory in Hawaii for five years to search 166 sun-like stars near our solar system for planets of various sizes, ranging from three to 1,000 times the mass of Earth.
All of the planets in the study orbit close to their stars. The results show more small planets than large ones, indicating small planets are more prevalent in our Milky Way galaxy.
"We studied planets of many masses -- like counting boulders, rocks and pebbles in a canyon -- and found more rocks than boulders, and more pebbles than rocks. Our ground-based technology can't see the grains of sand, the Earth-size planets, but we can estimate their numbers," said Andrew Howard of the University of California, Berkeley, lead author of the study. "Earth-size planets in our galaxy are like grains of sand sprinkled on a beach -- they are everywhere," Howard said.
The study is in the Oct. 29 issue of the journal Science.
A computer simulation by David Nesvorny (Southwest Research Institute in Colorado) of the development of our planetary system strongly suggests that a gas giant outside the orbit of Saturn was thrown out during planetary development, which allowed Jupiter to stabilize and the sun to retain Uranus and Neptune. It is now (2012) estimated that our galaxy has hundreds of billions of rogue planets ejected during planetary system development.
The research provides a tantalizing clue that potentially habitable planets also could be common. These hypothesized Earth-size worlds would orbit farther away from their stars, where conditions could be favorable for life. NASA's Kepler spacecraft also is surveying sun-like stars for planets and is expected to find the first true Earth-like planets in the next few years.
Howard and his planet-hunting team, which includes principal investigator Geoff Marcy, also of the University of California, Berkeley, looked for planets within 80-light-years of Earth, using the radial velocity, or "wobble," technique.
They measured the numbers of planets falling into five groups, ranging from 1,000 times the mass of Earth, or about three times the mass of Jupiter, down to three times the mass of Earth. The search was confined to planets orbiting close to their stars -- within 0.25 astronomical units, or a quarter of the distance between our sun and Earth.
A distinct trend jumped out of the data: smaller planets outnumber larger ones. Only 1.6 percent of stars were found to host giant planets orbiting close in. That includes the three highest-mass planet groups in the study, or planets comparable to Saturn and Jupiter. About 6.5 percent of stars were found to have intermediate-mass planets, with 10 to 30 times the mass of Earth -- planets the size of Neptune and Uranus. And 11.8 percent had the so-called "super-Earths," weighing in at only three to 10 times the mass of Earth.
"During planet formation, small bodies similar to asteroids and comets stick together, eventually growing to Earth-size and beyond. Not all of the planets grow large enough to become giant planets like Saturn and Jupiter," Howard said. "It's natural for lots of these building blocks, the small planets, to be left over in this process."
The astronomers extrapolated from these survey data to estimate that 23 percent of sun-like stars in our galaxy host even smaller planets, the Earth-sized ones, orbiting in the hot zone close to a star. "This is the statistical fruit of years of planet-hunting work," said Marcy. "The data tell us that our galaxy, with its roughly 200 billion stars, has at least 46 billion Earth-size planets, and that's not counting Earth-size planets that orbit farther away from their stars in the habitable zone."
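Marcy's 46-billion figure is plain arithmetic on the survey's extrapolated percentage. A quick check (the numbers come from the release; the code itself is just an illustration):

```python
stars_in_galaxy = 200e9     # "roughly 200 billion stars" in the Milky Way
close_in_earth_rate = 0.23  # 23 percent of sun-like stars, per the extrapolation

earth_size_planets = stars_in_galaxy * close_in_earth_rate
print(f"{earth_size_planets:,.0f}")  # 46,000,000,000 -- the quoted 46 billion
```

And, as the release notes, this counts only close-in planets; worlds farther out in the habitable zone would be on top of that.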
The findings challenge a key prediction of some theories of planet formation. Models predict a planet "desert" in the hot-zone region close to stars, or a drop in the numbers of planets with masses less than 30 times that of Earth. This desert was thought to arise because most planets form in the cool, outer region of solar systems, and only the giant planets were thought to migrate in significant numbers into the hot inner region. The new study finds a surplus of close-in, small planets where theories had predicted a scarcity.
Goldsmith suggests there will be more planets in habitable zones in trillions of years than are available now. Planets will be more common and be more enriched with the chemicals of life. Red dwarfs have the possibility of life bearing planets for "epochs well beyond easy imagination" (Goldsmith 2012:39).
"We are at the cusp of understanding the frequency of Earth-sized planets among planetary systems in the solar neighborhood," said Mario R. Perez, Keck program scientist at NASA Headquarters in Washington. "This work is part of a key NASA science program and will stimulate new theories to explain the significance and impact of these findings."
"The logical unity of the Universe demands a single invariance that remains unchanged in the face of all the complexity and transience we see about us from the smallest sub-atomic scales to the farthest reaches of outer space. Identifying this over-arching symmetry, if it does exist and is manifest in a form that is intelligible to us, may be the nearest thing we could get to discovering the "secret of the Universe"" (Barrow 1991:31).
"Some philosophers of science have used Gödel's theorems regarding the incompleteness of arithmetic (and hence of any logical system containing arithmetic) to argue that we can never know everything about the physical universe in terms of mathematical laws of Nature because we cannot produce all the true, and only the true, statements of arithmetic, nor can all arithmetic statements be decided true or false" (Barrow 1991: 37).
But Barrow points out that while physical reality may be mathematical, it can be flawed in that it does not use all of it, and therefore can be proved (1991: 38). I like a flawed universe theory.
Probably the most remarkable thing about our universe is that at the sub-atomic level, the playing field is perfect... every particle that exists is an EXACT copy of every other particle of its type. To make a pun, if you have seen one photon, you have seen every photon! This remarkable fact is basic to the organization of the Universe. NOTHING at the macroscopic level is ever an exact copy of anything else... which is the saving grace of the universe! These two differences are the key to everything. "It is this repeatability of things that is the hallmark of most basic entities in Nature and at root it is the reason why there can be accuracy and reliability in the physical world, whether it be in DNA replication or in the stability of the properties of matter" (Barrow 1991: 73).
But every exact copy is different in one respect: no two particles can occupy the same spacetime. Their "location" must be different. I suspect that the expansion (big bang) is simply an expression of this fact. My model explains why the universe is expanding, and expanding at an increasing rate. If time = 0% then space = 100%: a particle traveling at the speed of light = 0% time and 100% space. Any shift from 0% time changes space and creates the arrow of time. Light, which is massless, travels at the speed of light. If light had mass... it could not.
"... at 10 percent of the speed of light an object's mass is only 0.5 percent more than normal, while at 90 percent of the speed of light it would be more than twice its normal mass. As an object approaches the speed of light, its mass rises ever more quickly, so it takes more and more energy to speed it up further. It can in fact never reach the speed of light, because by then its mass would have become infinite, and by the equivalence of mass and energy, it would have taken an infinite amount of energy to get there. For this reason, any normal object is forever confined by relativity to move at speeds slower than the speed of light. Only light, or other waves that have no intrinsic mass, can move at the speed of light"(Hawking 1988:21).
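Hawking's two data points both fall out of the relativistic factor gamma = 1/sqrt(1 - v^2/c^2). A quick Python check (an illustration, not from the source):

```python
import math

def gamma(v_over_c):
    """Relativistic factor 1/sqrt(1 - v^2/c^2); mass grows by this factor."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

# Hawking's two data points:
print(f"{(gamma(0.10) - 1) * 100:.2f}% heavier at 10% of light speed")  # 0.50%
print(f"{gamma(0.90):.2f}x normal mass at 90% of light speed")          # 2.29x
```

As v/c approaches 1 the denominator approaches zero and gamma diverges, which is the "infinite energy" barrier in the quote.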
"If there is space between two objects... we can and do consider the two objects to be independent. We regard them as separate and distinct entities. Space, whatever it is fundamentally, provides the medium that separates and distinguishes one object from another. That is what space does. Things occupying different locations in space are different things" (Greene 2004: 79).
I would revise this and substitute "spacetime" for space and add that things occupying different times are different things as well. Also warp your brain around this: I think time is just another physical direction, no different from length, width and depth. We simply experience the time dimension differently, it is always changing in a specific direction. At a Planck time interval the length, width and depth "box" moves along the time dimension, always unique and tying the other three dimensions together.
But things are odder yet: locality. What does it mean? It should mean that objects influence each other locally... they interact because they are close to each other.
"But a class of experiments performed during the last couple of decades has shown that something we do over here (such as measuring certain properties of a particle) can be subtly entwined with something that happens over there (such as the outcome of measuring certain properties of another distant particle), without anything being sent from here to there" (Greene 2004: 80).
In my opinion... there is something "sent"... time!
There is no "space" and there is no "time"... there is only "spacetime". And time has an arrow because it is connected to space... as "space" expands, "time" has a direction (spacetime expansion). What is entangled is time. Time allows/defines probability as well as entropy. Quantum mechanics simply adds time to the equation, where it belongs. Both are at "light speed" and therefore at zero time... so entanglement is not "spooky" as non-local entities simultaneously interact. The process of measurement is never at the speed of light (timeless).
Actually spacetime is simply time. Time is a dimension, and we see it all the time as we experience change in that direction, regardless of whether the other three dimensions seem to be fixed at any "point" in time. In reality, the other three dimensions are moving along the fourth dimension (time), so they are never fixed. Distance between the other three dimensions is a change in time.
"Recall another of Leibniz's principles, the identity of the indiscernible which requires that any two particles which have the same relationship with the other things in the universe must be in fact the same. For if things are only distinguished by their relations, then there is no way to tell them apart. A world constructed according to these principles must be complex enough to allow observers to distinguish each particle uniquely, by talking about their relationships with the other particles in the universe.
"How differentiated does the universe have to be, according to Leibniz's principles, in order to speak meaningfully of the universe as a three-dimensional space that exists in time? To use a word favored by Leibniz, the universe must have so much variety that no two observers experience the same thing, and no moment ever repeats itself"
"The common view, which we have inherited from Newtonian science, is that we live in a universe composed from a great many identical parts. The parts - the elementary particles - are each very simple, and each is identical to every other of its kind. Their arrangement happens to be very complex, but this is in no way necessary - it is just our good luck. The opposing picture, posited, each by Leibniz and Einstein, is of a world made by a great many particles, each of which is different. While each proton has the same charge and mass as every other, each is different, because each occupies a different place. Each elementary particle has a unique relation to the whole. The world they make is necessarily complex because a certain minimal complexity is required if each proton is to be distinguished from all others by its relationships to the rest. We may say that where something is, is determined by its view of the rest, which is to say by its relationship to the others. If each of a vast number of particles is to have a unique view of the rest, the world must have a fantastic variety of views" (Smolin 1997: 218-220).
"Whereas Newton imagined gravity as a force that acts across space, Einstein's equations cast gravity as a property that belongs to space. In Newton's physics, space was passive, a vessel for a mysterious force between masses. In Einstein's physics, space was active, collaborating with matter to produce what we perceive as gravity's effects" (Panek 2011: 14).
The implications of this are interesting in relationship to the big bang and the entropy of the universe.
The Universe is filled with things that change through time and move through space. Life evolves out of non-life. Each living thing is born, lives and dies, and each is unique. There is some small variation in everything. This variation allows life to evolve and meet differing circumstances. Some change is detrimental to the individual. This is neither good nor bad, it just is. Some change is beneficial to the individual. This is neither good not bad, it just is. Shit happens.
The Universe was born, ages and dies. This is neither good, not bad, it just is. Life evolves in this universe. This is simply a mechanical process based on the structure of the universe. Life exists in those universes where life is possible. Life requires active systems that map the matter and energy needed to sustain life.... awareness is simply a basic aspect of living systems: how they seek and obtain the matter and energy required to sustain life. Reproduction is life's only meaningful goal ... to sustain continued life. What each life experiences, is all there is, nothing more, nothing less. Shit happens.
The Universe is really big. There are about one hundred billion stars in the Milky Way. Star Trek notwithstanding, if you started counting one star a second, 24 hours a day, your descendants would still be counting 3,000 years from now (Guth 1997:1)! There are at least a hundred billion other galaxies in the observable Universe (20-30 billion light years). According to Guth (1997:186) the Universe is at least 10^23 (100,000,000,000,000,000,000,000) times bigger than the small parcel we can observe through our telescopes! Multiply that hundred billion galaxies times that number: that is a lot of galaxies. That is a lot of the Universe we can never see or participate in. "If the inflationary theory is correct, then the observed universe is only a minute speck in a universe that is many orders of magnitude larger" (Guth 1997: 186).
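Guth's counting claim checks out with simple division (illustrative code, not from the source):

```python
stars = 100e9                          # about one hundred billion stars in the Milky Way
seconds_per_year = 3600 * 24 * 365.25  # seconds in a year

years_to_count = stars / seconds_per_year  # one star per second, around the clock
print(f"about {years_to_count:,.0f} years")  # about 3,169 years
```

Roughly 3,200 years of non-stop counting, which is why the descendants of anyone starting today would still be at it three millennia from now.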
"Galaxies formed first, at redshifts of 2 to 4 - or roughly nine to twelve billion years ago. Then those galaxies gathered into clusters, at redshifts of less than 1 - or less than roughly six billion years ago. And now, today (in a cosmic sense), those clusters are gathering into superclusters" (Panek 2011: 190).
Guth suggests that "false vacuum" is the key to understanding the creation of a universe. This false vacuum both grows and decays exponentially. Where it decays, it creates a universe, where it grows, it creates the probability of creating more universes... a process that goes on forever, increasing the rate of creation of universes, like the spreading complexity of a fractal pattern, an infinity of universes upon universes... "While life in our pocket universe will presumably die out, life in the universe as a whole will thrive for eternity" (1997: 248).
"The classical big bang didn't happen at a specific place within an infinite void; rather it happened everywhere because it was everything. There was "nothing" - not even empty space - outside of it. Hence the radiation is everywhere and goes in all directions, and will continue to do so as long as the universe exists..."(Smoot & Davidson 1993: 85).
Guth's inflation jumped the beginning universe from the size of a proton to the size of a grapefruit in an inconceivably brief period of time. One physicist, Joao Magueijo, believes that the speed of light has changed: that during the early birth of the universe, the speed of light was almost infinite, and that as the universe inflated and cooled, there was a phase transition establishing the current speed of light (Folger 2003:36).
"The physicists assumed their theory affected only the very early universe. Once the speed of light froze to its current rate, the standard rules of physics would apply. But in that brief initial moment, a variable speed of light would solve two fundamental puzzles of cosmology" (Folger 2003:37).
"The first is something physicists call the horizon problem: No matter which way astronomers look in the sky, the universe - at the very largest scales - looks the same. Clusters of galaxies spangle the cosmos in a remarkably uniform manner" (Folger 2003:37).
"The second challenge facing Magueijo and Albrecht's theory was more daunting. Cosmologists say that the shape of the universe is "flat", meaning that it's delicately poised between two extremes: eternal expansion and imminent implosion" (Folger 2003:38).
"If energy and matter distort space-time, then the spontaneous creation or destruction of energy and matter, in Magueijo's theory, would change the curvature of the universe accordingly" (Folger 2003:38).
Oddly enough, light can be slowed down! Lene Hau, profiled by Marguerite Holloway, slowed light to 17 meters per second, and by switching a coupling laser off and on in a Bose-Einstein condensate she stopped a light pulse entirely. "Light had come in with information, conveyed that information to matter and disappeared. Then matter produced light with that same information" (Holloway 2007:53).
"I think of my lifetime in physics as divided into three periods. In the first period ... I was in the grip of the idea that Everything is particles ... I call my second period Everything is Fields ... Now I am in the grip of a new vision that Everything is Information" (Wheeler 1998).
"For every type of particle in nature, there is a field, and for every field there is a particle" (Susskind 2006:95). and, "...wherever there is field, there is energy" (Susskind 2006:100).
Is space empty? Faraday created the concept of the "field", which pervades "space"... in other words, it pervades spacetime: the electromagnetic field, the Higgs field, and the gravitational field. So space is not empty.
The thing to keep in mind is not that everything is moving because of the bang, but that space is expanding, and thus clumps of matter are getting farther apart, like ink dots on an expanding balloon. There is some motion of matter due to gravity, but most of the movement is the result of space changing its size. This expansion is the Hubble flow, but galaxies also have independent motion that adds noise to the measurement of the Hubble constant. Galaxies are expanding with space rather than into space (Smoot & Davidson 1993: 53). This makes me wonder: is the relative density of matter decreasing over time with the expansion of space?
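The standard answer to that question is yes: when all distances stretch by a scale factor a, a fixed collection of matter occupies a volume larger by a^3, so its density falls as 1/a^3. A minimal sketch of this dilution (the numbers are illustrative, not measured densities):

```python
# Sketch: how matter density dilutes as space expands.
# In standard cosmology, non-relativistic matter density scales as
# 1/a^3, where a is the scale factor (a = 1 today).

def matter_density(rho_now, a):
    """Matter density when all distances are scaled by factor a."""
    return rho_now / a**3

rho_today = 1.0  # arbitrary units for today's average matter density
for a in (1, 2, 4):
    print(f"scale factor {a}: relative density {matter_density(rho_today, a):.4f}")
```

Doubling every distance cuts the matter density to one eighth, which is why an expanding universe becomes ever emptier even though no matter is destroyed.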
"It is easy to have parts of the universe moving apart at greater than the speed of light (without violating special relativity) if space is expanding. If space is expanding, then two parts separated by a distance greater than the speed of light divided by the expansion rate must move apart faster than the speed of light even though neither of them is moving or moving very fast relative to its local neighbors or space-time. What is impossible is keeping things synchronized and matches. It is exactly this synchronization problem that leads to defects or a highly mismatched and lumpy universe" (Smoot & Davidson 1993: 177).
"Bad inflation means a period of acceleration whose outcome conflicts with what we observe.... The difference between good and bad hinges on the precise shape of the potential energy curve, which is controlled by a numerical parameter that could, in pirnciple, take on any value whatsoever. Only a very narrow range of values could produce the observed temperature variation" (Steinhardt 2011:40-41).
"Not only is bad inflation more likely than good inflation, but no inflation is more likely than either....Penrose's shocking conclusion, though, was that a flat universe without inflation is much more likely than with inflation-- by a factor of 10 to the google (10100) power!(Steinhardt 2011:41).
Steinhardt proposes that the universe is cyclical on a scale of about a trillion years and that the universe is smoothed by the contraction phase (Steinhardt 2011:43).
Forget about "space" and forget about "time"... instead wrap your mind around "spacetime" as a single unified concept.
"But, the term "big bang" is rather misleading because it was neither big nor loud, nor was it an explosion in the usual sense. If fact, the big bang did not occur anywhere in space, nor did it have an origin in time, because initially space and time did not exist. Instead, our current view of the big bang is that spacetime and energy were initially combined in an infinitely dense and infinitely hot state. Under these conditions everything was extremely simple. The four fundamental forces were unified; there were no particles because energy and mass were interchangeable; and there were no measurable events. Suddenly, 12 to 15 billion years ago, spacetime began expanding and as it did, mass-energy began cooling. Almost instantly, in a process known as symetry breaking, the force of gravity separated from the grand unified force (the still unified "strong electroweak force"). At this epoch, quarks and leptons, and their antiparticles were in equilibrium with energy (they materialized from energy and dematerialized back to energy in rapid succession). Another brief instant later, spacetime began inflating, exponentially, at speeds faster than light, in a process called inflation. (According to Einstein's theories, spacetime can expand faster than light. Mass-energy cannot travel through spacetime faster than light.) Inflation had the effect of flattening the geometry of spacetime, and expanding spacetime beyond the "light horizon."
"Yet the old dichotomy between the big bang and the steady state still points to the heart of a great cosmological dilemma - one that has yet to be resolved. The choice is simple: either the universe always existed or it did not. If the universe always existed (in any form at all), then we have to accept something that simply is - something infinite in time. Conversely, if the universe did not always exist, then we are forced to accept that existence arose from nonexistence - an equally daunting concept" (Bernstein 2000: 70).
If time existed before the big bang then space existed before the big bang as they are a single spacetime entity.
"But really, the full three-dimensional space and the full four-dimensional spacetime are warped. Time is warped because it too is a dimension from the vantage point of special and general relativity" (Randall 2005:110).
"... in all cases matter tells spacetime how to curve, and spacetime tells matter how to move. Curved spacetime sets up the geodesic paths along which, in the absense of other forces, things will travel. Gravity is encoded into the geometry of spacetime" (Randall 2005:110-111).
This implies, in Ayn Rand fashion, that every action I take as a material being changes the "outcome" of the universe.
"Among its many merits, general relativity eliminated the annoying action-at-a-distance of Newtonian gravity, which asserted that an object's gravitational effects would be felt everywhere as soon as it appeared or moved. With general relativity, we know that before gravity can act, spacetime has to deform. This process does not happen instantaneously. It takes time. Gravity waves (space deformation) travel at the speed of light(time deformation)" (Randall 2005:112). (Italics my comments).
The relative smoothness of the universe led to the theory of inflation... an early time when all things could interact within the constraints of space-time.
"Our most refined theories of the origin of the universe - our most refined cosmological theories - tell us that by the time the universe was a couple of minutes old, it was filled with a nearly uniform hot gas composed of roughly 75 percent hydrogen, 23 percent helium, and small amounts of deuterium and lithium. The essential point is that this gas filling the universe had extraordinary low entropy. The big bang started the universe off in a state of low entropy, and that state appears to be the source of the order we currently see. In other words, the current order is a cosmological relic" (Greene 2004: 171).
"And of the fundamental forces of nature, gravity is the one that exploits this feature of entropy tally to the hilt. Because gravity operates across vast distances and is universally attractive, it instigates the formation of the ordered clumps - stars - that give off the light we see in a clear night sky, all in keeping with the net balance of entropy increase" (Greene 2004: 173).
In the October 2009 Scientific American, black holes were questioned and replaced by black stars. So substitute in your mind "black hole/star" for each quoted use of the term "black hole" in the following texts. Quotes from the star concept are found in the section dedicated to black "stellar objects".
"The more squeezed, dense, and massive the clumps of gas are, the larger the overall entropy. Black holes, the most extreme form of gravitational clumping and squeezing in the universe, take this to the limit. The gravitational pull of a black hole is so strong that nothing, not even light, is able to escape, which explains why black holes are black. Thus, unlike ordinary stars, black holes stubbornly hold on to all the entropy they produce: none of it can escape the black hole powerful gravitational grip. In fact.... nothing in the universe contains more disorder - more entropy - than a black hole" (Greene 2004: 173).
"When gravity flexes its muscles to the limit, it becomes the most efficient generator of entropy in the known universe" (Greene 2004: 173).
"Because the universe is expanding under the influence of the cosmological constant, cosmology also has its horizon. Our cosmic horizon is about fifteen billion light-years away, where things are moving so rapidly away from us that light from them can never reach us, nor can any other signal. It is exactly the same as a black hole horizon - a point of no return. The only difference is that the cosmic horizon surrounds us, whereas we surround a black hole horizon" (Susskind 2006:15).
How does this fit into the conservation of information issue?
But some physicists now believe there is an alternative to the black hole, a naked singularity. Simulations of star collapse yield a model of a "moment in time" with a high but finite density, "the place where the physical world ends. We should think of it as an event rather than an object, a moment when collapsing matter reaches the edge and ceases to be, like the big bang in reverse" (Joshi 2009: 42). "In that case, questions such as what will come out of a naked singularity are not really meaningful; there is nothing to come out of, because the singularity is just a moment in time. What we see from a distance is not the singularity itself but the processes that occur in the extreme conditions of matter near this event, such as shock waves caused by inhomogeneities in this ultradense medium or quantum-gravitational effects in its vicinity" (Joshi 2009: 42-43).
Does this imply that dark energy is the most efficient generator of order in the universe?
"In a Universe with dark energy the connection between geometry and destiny is severed" (Michael Turner quoted in Panek 2011:208). And roughly five billion years ago, dark energy took over from gravity as the dominant feature of the Universe (Panek 2011: 210).
"But Wheeler observes that the black hole keeps a record of the information it engulfs. The more information swallowed, the bigger the black hole is - and the more space on the black hole's surface to accommodate boxes depicting bits. To Wheeler, this realization is curious and profound. A black hole can consume anything that exists and still be described in terms of how much information it has digested. In other words, the black hole converts all sorts of real things into information. Somehow, Wheeler concludes, information has some connection to existence, a view he advertises with the slogan "It from bit"" (Siegfried 2000: 2-3).
"It is not easy to grasp Wheeler's idea of connecting information to existence. He seems to be saying that information and reality have some sort of mutual relationship. On the one hand, information is real, not merely an abstract idea. On the other hand reality - or existence - can somehow be, described, or qualified, in terms of information" (Siegfried 2000: 3).
"Many scientists now conceive of information as something real, as real as space, time, energy and matter" (Siegfried 2000: 7).
"Until recently, information was regarded as unphysical, a mere record of the tangible, material universe, existing beyond and essentially decoupled from the domain governed by the laws of physics. This view is no longer tenable" (Wojciech Zurek 1991: )
"In the beginning was the bit" (LLoyd 2006: ix). "Things, or 'its', arise out of information or 'bits'" (LLoyd 2006: ix).
"The universe is made of bits. Every molecule, atom, and elementary particle registers bits of information. Every interaction between those pieces of the universe processes that information by altering those bits. That is, the universe computes and because the universe is governed by the laws of quantum mechanics, it computes in an intrinsically quantum-mechanical fashion; its bits are quantum bits." (LLoyd 2006: 3).
"So it seems that the idea that information has a definite location in space is wrong" (Susskind 2006:337).
"What does the universe compute? It computes itself." (LLoyd 2006: 3). "Physical systems speak a language whose grammar consists of the laws of physics" (LLoyd 2006: 9).
"It has been known that any desired logical expression, including complex mathematical calculations, can be built up of of NOT, COPY, AND, and OR. They make up a universal set of logic gates" (LLoyd 2006: 33).
"The energy we see around us,then - in the form of Earth, stars, light, heat - was drawn out of the underlying quantum fields by the expansion of our universe. Gravity is an attractive force that pulls things together. (As high school students will tell you, 'Gravity sucks'). As the universe expands (which it continues to do), gravity sucks energy out of the quantum fields. The energy in the quantum fields is almost always positive, and this positive energy is exactly balanced by the negative energy of gravitational attraction." (LLoyd 2006: 40).
"Energy makes physical systems do things. Information tells them what to do" (LLoyd 2006: 33). "To do anything requires energy. To specify what is done requires information" (LLoyd 2006: 44).
"The theory of quantum mechanics gives rise to large scale structure because of its intrinsically probabilistic nature. Counter intuitive as it may seem, quantum mechanics produces detail and structure because it is inherently uncertain" (LLoyd 2006: 49).
"The kinetic energy associated with the quantum jitters is called zero-point energy, and it cannot be eliminated" (Susskind 2006:29).
"Chance is a crucial element of the language of nature. Every roll of the quantum dice injects a few more bits of detail into the world. As these details accumulate, they form the seeds for the variety of the universe". "Gambling for money may be infernal, but betting on throws of the quantum dice is divine" (LLoyd 2006: 50).
"The existence of complex and intricate patterns does not require that these patterns be produced by a complex and intricate machine or intelligence" (LLoyd 2006: 59).
"...the first law of thermodynamics is a statement about the energy: energy is conserved when it is transformed from mechanical energy to heat. The second law of thermodynamics, however, is a statement about information, and about how it is processed at the microscopic scale. The law states that entropy (which is a measure of information) tends to increase. More precisely, it states that each physical system contains a certain number of bits of information - both invisible and information (or entropy) and visible information - and that the physical dynamics that process and transform that information never decrease that total number of bits" (LLoyd 2006: 66).
"Information can be created, but it can't be destroyed" "Any process that erases a bit in one place must transform that same amount of information somewhere else" (LLoyd 2006: 77).
The laws of physics preserve information. The number of bits registered by a system (such as a helium-filled balloon) does not decrease (Lloyd 2006: 79). "Suppose an unknown bit of information interacts with a known bit of information. After the interaction, the first bit is still unknown, but now the second bit is unknown too. The unknown bit has infected the known bit, spreading the lack of knowledge, and increasing the entropy of the system" (Lloyd 2006: 81).
"In fact, as pointed out by Edward Fredkin of Carnegie Mellon University and Tommaso Toffoli of Boston University, atomic collisions naturally perform AND, OR, NOT, and COPY logic operations. In the language of information processing, atomic collisions are computationally universal" (Lloyd 2006: 97).
"Planck found that if the energy of each of these (photon) particles (measured in joules) was equal to 6.63 x 10^-34 times the wave's frequency per second, then energy was conserved by the radiant heat. Planck's constant relates energy to frequency. It is so ubiquitous in physics that it has been given its own special symbol, h" (Lloyd 2006: 103).
The so-called spooky double-slit experiment with light:
"Perform the double-slit experiment with particles. What do you see? The spots made by the individual particles fall across the photographic plate in a series of bands. When you cover one of the slits, the interference pattern disappears. Evidently, the particles behave as if they were waves" (Lloyd 2006: 106).
"Suppose you place a detector on the right-hand slit. The detector registers the presence or absence of a particle at the slit, letting the particle pass through otherwise unchanged. When the detector detects a particle, it clicks. Now perform the double-slit experiment with the detector operating. Look at the screen. The interference pattern has disappeared!" (Lloyd 2006: 107).
"Observation (or measurement, as it is conventionally called) destroys interference. Without measurement, the particle merrily goes through both slits at once; with measurement, it goes through one or the other. In other words, measurement intrinsically disturbs the particle" (Lloyd 2006: 108).
"It is now clear why big things tend to show up in one place or another, but not both. Pebbles, people, and planets are constantly interacting with their surroundings. Each interaction with an electron, a molecule of air, a particle of light tends to localize a system. Big things interact with lots of little things, each of which gets information about the location of the big things. As a result, big things tend to appear here or there instead of here and there at the same time" (Lloyd 2006: 108).
Makes me wonder if one could block all interactions, if the so-called "warp" drive could exist. Be there instead of here? Blocking all interactions would put you outside the universe... but could you block enough to be elsewhere? How much is enough?
"The process by which the environment destroys the wavelike nature of things by getting information about a quantum system is called 'decoherence'" (LLoyd 2006: 108).
"The uncertainty principle states that if the value of some physical quantity is certain, then the value of a complementary quantity is uncertain" (LLoyd 2006: 111).
"What's going on is that quantum mechanics unlike classical mechanics can create information out of nothing" and "..entanglement is responsible for the generation of information in the universe" and "In fact, entanglement does not involve action at a distance, spooky or otherwise" (LLoyd 2006: 117-120).
"In the case of the double-slit experiment, for example, there are two possible histories. In one of them, the particle goes through the left slit and lands on the wall. In the other, the particle goes through the right slit and lands on the wall. These histories are coherent, not decoherent: they interfere with each other to create the pattern of bands on the wall"
"Now add the detector to the right-hand slit. There are still two possible histories. In one of them, the particle goes through the left slit and lands on the wall. In the other, the particle goes through the right slit, trips the detector, and lands on the wall. Because of the detector, the interference pattern goes away. These histories are decoherent: they do not interfere with each other" (LLoyd 2006: 125).
"The sentence, "A particle has a position AND a momentum," must be replaced by, "A particle has a position OR a momentum." Likewise, light is particles, OR light is waves" (Susskind 2006:334-335).
Look hard at those sentences. The first half has nothing directly to do with time. The second half has everything to do with time. Momentum and waves require time. The issue of spookiness has to do with a key aspect of the universe, spaceTIME. Time removes all issues, if one thinks deeply.
"You are to an atom as Earth is to an ant: very large. Atoms are typically a few ten-billionths of a meter across - tiny, bouncy spheres held together by electricity. An atom consists of a compact nucleus (Latin for 'nut') 100,000 times smaller still, made up of protons (which are positively charged) and neutrons (lacking a charge). Most of the mass of the atom lies in its nucleus, which is surrounded by a cloud of electrons, whose masses are a couple of thousands times smaller than those of protons or neutrons" (LLoyd 2006: 129).
If you have never seen the movie Mindwalk, rent it. It uses the following image: atoms are essentially empty space. If the nucleus were about the size of a marble, the first electron shell would be about one-half a kilometer away (1,640 feet), and the electrons as particles would be like grains of sand. But the electrons are more like a smear of probabilities. The particles do not exist in a definite place but as probabilities across spacetime. Classical matter is solid because probability patterns are hard to compress. Classical solids are interaction probability patterns.
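The marble image follows from the roughly 1:100,000 ratio of nuclear to atomic size; a sketch with order-of-magnitude radii (the values are illustrative, not precise measurements):

```python
# Scaling an atom up so its nucleus is marble-sized.
# Typical orders of magnitude: nucleus ~1e-15 m radius,
# atom (first electron shell) ~1e-10 m radius.

nucleus_radius_m = 1e-15
atom_radius_m = 1e-10
marble_radius_m = 0.005          # a 1 cm marble

scale = marble_radius_m / nucleus_radius_m   # blow-up factor
shell_distance_m = atom_radius_m * scale

print(f"Scale factor: {scale:.0e}")
print(f"Electron shell at: {shell_distance_m:.0f} m")   # ~500 m away
```

A marble-sized nucleus with its nearest electrons half a kilometer away: everything in between is, classically speaking, empty.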
"The simplest wave that can fit around a nucleus is a sphere the wave wraps smoothly all the way around. The next simplest wave has one peak as it wraps; then comes a wave with two peaks, and so on. Each of these waves corresponds to an electron in a definite energy state" (LLoyd 2006: 130).
"When an electron jumps from a higher energy state to a lower one, it emits a chunk, or quantum of light - a photon - whose energy is equal to the difference between the energies of the two states" (LLoyd 2006: 130).
"Not only can atoms emit light, they can absorb it. Just as an atom can jump from a higher energy state to a lower one, emitting a photon in the process, an atom can absorb a photon and jump from a lower energy state to a higher one" (LLoyd 2006: 131).
A quantum bit is a qubit.
"When you zap an atom with light whose photons have the right energy, you can make the atom flip its state from [0> to [1> and back again. You are flipping the atom's bit. In other words, you are performing the logical operation known as NOT" and "Unlike classical bits, qubits can be in quantum superpositions of [0> and [1>; that is, they can register 0 and 1 at the same time" (LLoyd 2006: 136).
"A quantum computer given 10 input qubits can do 1,024 things at once. A quantum computer given 20 qubits can do 1,048,576 things at once. One with 300 qubits of input can do more things at once that there are elementary particles in the universe" (LLoyd 2006: 138-139).
"In a quantum computer, however, there is no distinction between analog and digital computation" (LLoyd 2006: 152).
"Thus at bottom, the universe can be thought of as performing a quantum computation" and "...a simulation of the universe on a quantum computer is indistinguishable from the universe itself" (LLoyd 2006: 154).
"Information can't travel any faster than the speed of light. Because the universe has a finite age and because the speed of light is finite, the part of the universe about which we can have information is also finite. The part of the universe about which we can have information is said to be within the horizon. Beyond the horizon we can only guess as to what is happening" (LLoyd 2006: 164).
"When we look through a telescope, we also look back in time, and the most remote objects we can see appear as they were a little under 14 billion years ago. In the intervening time, because of the expansion of the universe, those objects have moved farther away, and right now they are 42 billion light-years from us" (LLoyd 2006: 164).
"The horizon is 42 billion light-years away. On average, every cubic meter of the universe withing the horizon contains a mass of about one hydrogen atom. Each hydrogen atom contributes energy E=mc2. Toting up all the energy in the universe, we find that the universe contains about 100 million billion billion billion billion billion billion billion (1071) joules of energy" (LLoyd 2006: 164).
"The result is that every second, a computer made up of all the energy in the universe could perform 100,000 googol (10105) operations. Over the 14 billion years the universe has been around, this cosmological computer could have performed about 10,000 billion billion googol (10122) ops" (LLoyd 2006: 165).
"The result is that the cosmological computer could store 100 billion billion billion billion billion billion billion billion billion billion (1092) bits of information" (LLoyd 2006: 165).
"The fact that a quantum computation doesn't care how it is embedded in spacetime means that the spacetime derived from the quantum computation obeys the laws of general relativity. Why? Because Einstein derived the laws of general relativity by requiring that those laws don't care how the underlying physical dynamics of matter is embedded in spacetime. Under the proper assumptions, general relativity is the only theory of gravity that is generally covariant" (LLoyd 2006: 172).
"The primary consequence of the computational nature of the universe is that the universe naturally generates complex systems, such as life. Although the laws of physics are comparatively simple in form, they give rise, because they are computationally universal, to systems of enormous complexity" (LLoyd 2006: 176).
"Logical depth referred to bit strings, computer programs and logical operations. Heinz wanted a measure of complexity that referred to physical systems - energy and entropy. So he and I concocted a physical analog to logical depth, which we called thermodynamic depth, to emphasize the connection to Bennett's work" (LLoyd 2006: 191).
"Recall that entropy is measured in bits. Entropy consists of random, unknown bits. The opposite of entropy is called negentropy. Negentropy consists of known, structured bits. A systems negentropy is a measure of how far away that system is from its maximum possible entropy. A living, breathing human being has lots of negentropy, as opposed to, say, a gas of helium atoms at room temperature, which has no negentropy" (LLoyd 2006: 191).
"..effective complexity, a measure of the amount of regularity in a system; this definition of complexity was originally proposed by Murray Gell-Mann. Over the last decade, Gell-Mann and I have worked to make the notion of effective complexity mathematically precise" (LLoyd 2006: 193).
"The amount of information required to describe a system's regularities is its effective complexity" (LLoyd 2006: 193).
"The computational capacity of the universe means that logically and thermodynamically deep things necessarily evolve spontaneously" (Lloyd 2006: 200)).
In other words, LIFE is a probable outcome where the environmental variables allow it to survive.
The universe revisited
While our sun has been fusing hydrogen into helium for 4.5 billion years, it can continue for another 5 billion (Rees 1979: 9). All of the heavy elements were created in solar furnaces after the "big bang" and were distributed into space by supernovae. So the stars and planets of the early universe were lifeless, as the chemicals required for life as we know it did not exist. The universe evolves and changes. Over time, heavy elements are becoming more common and lighter elements are being used up by stars: "We are stardust - the ashes of long dead stars" (Rees 1997: 17). But since most star systems are binaries... the chances for life on planets get less likely.
"Planetary nebulae were named, or rather misnamed, two centuries ago by English astronomer William Herschel. He was a prodigious discoverer of nebulae - fuzzy, cloudlike objects visible only through a telescope. Many had a vaguely round shape that reminded Herschel of the greenish planet Uranus (which he discovered), and he speculated that they might be planetary systems taking shape around young stars. The name stuck even though the opposite turned out to be true: this type of nebula consists of gas molted from dying stars. It represents not only our past but our future and our fate. In five billion years or so our sun will end its cosmic tenure in the elegant violence of a planetary nebula" (Balick & Frank (2004: 51,52).
"Over the past century, astronomers have come to realize that stars cleanly separate into two distinct classes as they die. The elite massive stars - those with a birth weight exceeding eight solar masses - explode suddenly as supernovae. More modest stars, such as the sun, have a drawn-out death. Instead of detonating, they spend their last years burning their fuel spasmodically, like an automobile engine running out of gas" (Balick & Frank (2004:52).
"Initially the loosely bound outer layers stream off the star at 10 to 20 kilometers per second - a relatively slow outflowing wind that will carry the bulk of the nebula's eventual mass. As the star strips down to its still hot core, it evolves from orange to yellow, then white, and finally blue. When its surface temperature exceeds about 25,000 kelvins, it bathes the surrounding gas in harsh ultraviolet light, which has enough punch to dismember molecules and strip atoms of their electrons."
"The stellar wind carries ever less mass at ever increasing speed. After 100,000 to one million years, depending on the original mass of the star, it ceases altogether, and the remnant star settles down as an extremely dense and hot white dwarf - a stellar ember crushed by gravity into a nearly crystalline orb about the size of earth" (Balick & Frank (2004:52).
"At least 50 percent of the all the "stars" you see at night are really pairs of stars orbiting each other. In most of these systems, the stars are so far apart they develop independently. But in a small fraction, the gravity of one star can deflect or even control the material flowing out of another. This fraction matches the fraction of planetary nebulae that are bipolar" (Balick & Frank (2004:57).
"Our Galaxy, the Milky Way, is a huge disk 100,000 light-years across and containing a hundred billion stars. Its oldest stars formed more than 10 billion years ago. The primordial material contained only the simplest atoms - no carbon, no oxygen, and no iron. Our Sun, a middle-aged star (some others are more than twice as old), formed 4.5 billion years ago, by which time several generations of heavy stars could have been through their entire life cycles. The chemically interesting atoms - those essential for complexity and life - were forged inside these stars. Their death throes, supernova explosions, flung these atoms back into interstellar space" (Rees 1997: 18).
The Milky Way is rotating about 161,000 kilometers per hour faster than previously understood. This means the mass of our galaxy is about 50% larger than previously thought. We are not the smaller sister of Andromeda, but about the same size. In addition, the Milky Way probably has four arms, not two.
"Gradually, though, it has become clear that the Milky Way is not a finished work but rather a body that is forming" (Wakker & Richter 2004:40).
"Our galaxy contains about 100 billion stars most of which are concentrated in a thin disk about 100,000 light years across and 3,000 light-years thick. These starts revolve around the galactic center in nearly circular orbits. The sun, for example, trundles around at nearly 200 kilometers per second. Another 10 billion stars form the galactic "halo", a huge spherical envelope that surrounds the disk" (Wakker & Richter 2004:40-41).
There is what is called the Magellanic Stream, an arc of gas in the orbit of two small companion galaxies that orbit the Milky Way. These are the large and small Magellanic Clouds.
"The Slaon Digital Sky Survey has bridged some of the gap between theory and observation by finding 15 more dwarf galaxies surrounding the Milky Way. Because the survey covers only one quarter of the sky and must look past various obstacles, both local and cosmic, it probably missed another 60 to 80 similar dwarf galaxies, according to Gerry Gilmore of the University of Cambridge" (Lemonick 2009:63)>
The dwarf galaxies contain only about 1,000 stars but have the mass of a million... dark matter far in excess of the predictions of models. And the data indicate that luminous matter began forming very early after the big bang (Lemonick 2009:63-64).
Fritz Zwicky coined the term "dark matter" ("dunkle Materie") in 1933 (Panek 2011:48).
"Other Sloan researchers have identified a new class of white dwarfs, the cores left over after sun-sized stars die, and have sighted elusive brown dwarfs, objects too big to be planets but not quite massive enough to ignite fusion reactions and become stars (Lemonick 2009:65)>
"Most galaxies are scattered through space far from their nearest neighbor, and of these only 10 to 20 percent are ellipticals,; spirals dominate. The remaining galaxies, however, are packed into clusters, and for them the situation is reversed. Ellipticals are the majority, and the spirals that do exist are anemic systems depleted of gas and young stars. This so-called morphology-density relation has long puzzled astronomers" (Kauffman & van den Bosch 2002:16).
"The high efficiency of star formation during mergers explains why ellipticals typically lack gas: they have used it up. The merger model also accounts for the morphology-density relation: a galaxy in a high-density environment will undergo more mergers and is thus more likely to become an elliptical" (Kauffman & van den Bosch 2002:19).
"To be fair, we've made two experimental discoveries in the past few decades: the neutrinos have mass and that the universe is dominated by a mysterious dark energy that seems to be accelerating its expansion. But we have no idea why neutrinos (or any other particles) have mass or what explains their mass value. As for the dark energy, it's not explained in terms of any existing theory. Its discovery cannot be counted on as a success, for it suggests that there is some major fact we are missing. And except for the dark energy, no new particle has been discovered, no new force found, no new phenomenon encountered that was not known and understood twenty-five years ago" Smolin 2006: xii).
"Skeptics liked to quote a saying: You get to invoke the tooth fairy only once - meaning dark matter - but now we have to invoke the tooth fairy twice - meaning dark energy" (Panek 2011: 172).
"... researchers are still far from working out all the processes involved. Moreover, they have yet to resolve some troubling inconsistencies. The simple picture of a gas cooling inside dark matter halos faces an important problem known as the cooling catastrophe. Calculations of the cooling rates imply that the gas should have cooled briskly and pooled in the centers of halos, leaving the intergalactic space virtually empty. yet the space between galaxies is far from empty. Some extra input of energy must have prevented the gas from cooling down" (Kauffman & van den Bosch 2002:21).
In 2006, The "bullet Cluster" seemed to demonstrate the presence of a dark matter halo from gravitational lensing. The dark matter seemed to correspond to the galaxies, not the gas produced from their collision. But the "Abell 250" collision cluster appears to show dark matter in the gas center of the colliding galaxies and not near the galaxies at all (Schilling 2007:32). The data almost suggests a fifth force of nature, but needs better observations to clarify the issue.
My model, I believe, resolves this issue.
"Another problem concerns angular momentum. The amount of angular momentum imparted to protogalaxies in the models is comparable to the angular momentum that we actually see in spiral galaxies. So long as the gas retains its angular momentum, the CDM (cold dark matter) picture reproduces the observed sizes of spirals. Unfortunately, in the simulations the angular momentum leaks away. Much of it is transferred to the dark matter during galaxy mergers. As a result, the disks emerging from these simulations are a factor of 10 too small. Apparently the models are still missing an essential ingredient" (Kauffman & van den Bosch 2002:21).
Again, I believe my model resolves this issue.
"A third inconsistency has to do with the number of dwarf galaxies. Hierarchical theories predict a proliferation of low-mass dark matter halos and, by extension, dwarf galaxies. These are simply not seen. In the neighborhood of the Milky Way, the number of low-mass dwarfs is a factor of 10 to 100 lower than theories predict. Either the dark matter halos do not exist or they are present but have eluded decection because stars do not form within them" (Kauffman & van den Bosch 2002:21).
Again, my model resolves this issue.
There was a time when it was thought the earth was the universe. People thought the heavens rotated around the earth, including the sun. Then it was found that the earth rotated around the sun and that the earth was not the center of our planetary system. So people thought our sun was the center of the universe. Later, it was realized that the sun was just a minor star in our galaxy, the "Milky Way". So people thought our galaxy was the universe. Then it was discovered that our galaxy was one of many. Our galaxy was not even a particularly big or impressive one. Still later, it was discovered that the galaxies group along bubble-like regions and that our area is not within a very impressive wall of galaxies.
Over time, we have fallen from the center of things as a planet and star. So now we think that life is so rare that we are the center of the universe because we are alive. Or, if life is common, intelligence is not, and thus we are the center of the intelligent universe. Will we ever learn what the history of research has been telling us for a very long time? Some day, this life-or-intelligence provincialism will fall as well.
Believe it or not, "In October 1995 Michel Mayor and Didier Queloz of Geneva Observatory in Switzerland reported the first planet" (Marcy & Butler 1998:11).
As more planets were found, many proved to move in eccentric, oval orbits, and many large gas giants were discovered close to their stars. The old ideas about planetary formation had to be abandoned as another provincial view. The so-called 51 Peg planets are gas giants with orbits as short as 1.5 days that would fit inside the orbit of Mercury (Marcy & Butler 1998:13).
"It transpires that there exist a number of very unusual coincidences regarding the values of particular combinations of the constants of nature which are necessary conditions for our own existence. Were the fine-structure constant to differ by roughly one per cent form its actual value, then the structure of stars would be dramatically different. Indeed, there is every reason to suspect that we would not be here to discuss the matter. For the biological elements like carbon, nitrogen, oxygen, and phosphorus are produced during the final explosive death throes of the stars. They are blown out into space where they become incorporated into the planets and, ultimately, into people. But, carbon, the crucial biological element which we believe to be essential for the spontaneous evolution of life, should really only exist as the minutest trace element in the Universe instead of in the healthy abundance that we find. This is because the explosive nuclear reactions that make carbon in the late stages of stellar evolution are typically rather slow at producing it. However, there exists a remarkable coincidence of Nature that allows carbon to be produced in unexpected abundance" (Barrow 1991: 95).
The values of the particles' properties are important; they are parameters rather than absolutes:
"Although many different kinds of elementary particles have been discovered, almost all the matter in the universe is made of four kinds: protons, neutrons, electrons and neutrinos. These interact via four basic forces: gravity, electromagnetism and the strong and weak electromagnetic forces. Each of these forces is characterized by a few numbers. Each has a range, which tells us the distances over which the forces can be felt. Then, for each kind of particle and each force, there is a number which tells us the strength by which that particle participates in interactions governed by that force. These are called the coupling constants" (Smolin 1997: 37).
Smolin (1997: 59) says there are three universal phenomena: 1) everything that moves is described by the principles of relativity; 2) everything that exists is described by quantum theory; and 3) gravity, which applies to everything universally. I think he is wrong; in my opinion there is a fourth universal that applies to everything: time. He notes there are three universal physical constants: 1) Newton's gravitational constant (G); 2) Planck's constant (h); and 3) the speed of light (c). The Planck mass is about 10^-5 grams and the Planck length is 10^-33 cm (18 powers of 10 smaller than a proton or neutron!).
"How about the natural unit of time? That comes out to be about 10-42 seconds. That's unimaginably small" (Susskind 2006:85).
"Planck realized that physicists had never seen the granular nature of energy because the "size" of each packet was incredibly tiny (determined by the number h=6.5x10-27 erg sec, now called "Planck's Constant"). This number is so astronomically small that we never see quantum effects in everyday life" (Kaku & Thompson 1987:38).
"...The fabric of space on scales smaller than the Planck length - a millionth of a billionth of a billionth of a billionth (10-33) of a centimeter - space becomes a seething, boiling cauldron of frenzied fluctuations.... the usual notions of left/right, back/forth, and up/down become so jumbled by the ultramicroscopic tumult that they loose all meaning. Even the usual notion of before/after... is rendered meaningless by quantum fluctuations on time scales shorter than the Planck time, about a tenth of a millionth of a trillionth of a trillionth of a trillionth (10-43) of a second (which is roughly the time it takes for light to travel a Planck length)" (Greene 2004:333).
Gravity is universal, has infinite range, is always attractive, but is very weak (10^-38). Because of its peculiar nature, it dominates the large structure of the universe (as does time... dwell on that). The weakness of gravity is critical for the existence of stars and the life of stars. Weak gravity allows large masses to exist to create stars that can burn for billions of years.
"If the gravitational force were stronger by only a factor of ten, the lifetime of a typical star would decrease from about ten billion years to the order of ten million years. If its strength were increased by still another factor of ten, making the gravitational force between two protons still an effect of order of one part in 1036, the lifetime of a star would shrink to ten thousand years" (Smolin 1997: 39).
"Carbon originates in the Universe via a two-step process from nuclei of helium, or alpha particles as we usually call them. Two alpha particles combine under stellar conditions to make a nucleus of the element beryllium. The addition of a further alpha particle is necessary to transform this into a carbon nucleus. One would have expected this two-step process to be extremely improbable, but remarkably the last step happens to possess a rare property called "resonance" which enables it to process at a rate far in excess of our naive expectation. In effect, the energies of the participating particles plus the ambient heat energy in the star add to a value that lies just above a natural energy level of the carbon nucleus and so the product of the nuclear reaction finds a natural state to drop into. It amounts to something akin to the astronomical equivalent of a hole-in-one. But this is not all. While it is doubly striking enough for there to exist not only a carbon resonance level but one positioned just above the incoming energy total within the interior of the star, it is well-neigh miraculous to discover that there exists a further resonance level in the oxygen nucleus that would be made in the next step of the nuclear reaction chain when a carbon nucleus interacts with a further alpha particle. But this resonance level lies just above the total energy of the alpha particle, the carbon nucleus, and the ambient environment of the star. Hence, the precious carbon fails to be totally destroyed by further resonant nuclear reaction. This multiple coincidence of the resonance levels is a necessary condition for our existence" (Barrow 1991: 95).
"But for the existence of stars requires not only that the gravitational force be incredibly weak. Stars burn through nuclear reactions that fuse protons and neutrons into a succession of more and more massive nuclei. For these processes to take place, protons and neutron must be able to stick together, creating a large number of different kinds of atomic nuclei. For this to happen, it turns out that the actual values of the masses of the elementary particles must be chosen very delicately. Other parameters, such as those that determine the strengths of the different forces, must also be carefully tuned" (Smolin 1997: 39).
"Were the electron's mass not about the same size as the amount that the neutron outweighs the proton, and were each of these not much smaller than the proton's mass, it would be impossible for nuclei to stick together to form stable nuclei" (Smolin 1997: 40).
"Mystery number one is why the proton mass is so tiny compared to the Planck mass. Mystery number two is why the cosmological constant is so much tinier still. Between the scale of the cosmological constant and the Planck mass is a ratio of 1060. It is extraordinary that such a huge ratio should come into fundamental physics. But this is not all. Taking these values into account, it turns out, apparently coincidentally, that the lifetime of a typical star is about the same as the lifetime of the universe, measured as best we can by the speed of its expansion"
"In fact, we will see that the history of the universe is, to a large extent, the history of symmetry. The most pivotal moments in the evolution of the universe are those in which balance and order suddenly change, yielding cosmic arenas qualitatively different from those of preceding eras. Current theory holds that the universe went through a number of these transitions during its earliest moments and that everything we've ever encountered is a tangible remnant of an earlier, more symmetric cosmic instance" (Greene 2004: 219-220).
"What Noether (Emmy Noether: German Mathematician) discovered is that whenever nature displays a continuous symmetry, a conservation law comes along for the ride, and vice versa. In particular, spatial symmetry dictates that momentum is conserved; rotational symmetry ensures angular momentum is conserved; and time symmetry means that energy is conserved" (Davis 2010:41).
"Most, if not all, of the attributes set by symmetry breaking appear to be fine-tuned. Changing their values by modest amounts would have resulted in a quantitatively different universe - one in which we probably would not exist. If protons were 0.2 percent heavier, they would decay into neutrons, destabilizing atoms. If the electromagnetic force were 4 percent weaker, there would be no hydrogen and no normal stars. If the weak interaction were much weaker, hydrogen would not exist; if it were much stronger, supernovae would fail to seed interstellar space with heavy elements. If the cosmological constant were much larger, the universe would have blown itself apart before galaxies could form" (Tegmark, 2003:46).
"Why should the expansion rate of the universe have been set to the scale of the lifetime of stars, if the first stars formed millions of years after the big bang? What kind of physical mechanism could account for this" (Smolin 1997: 42)?
In summary, Smolin says: "... we should ask just how probable is it that a universe created by randomly choosing the parameters will contain stars.... The answer, in round numbers, comes to about one chance in 10^229" (1997: 45).
But this creative power is also destructive power. Recent research into the center of our galaxy has shown it is a very chaotic mess. Stellar matter is being swirled into giant blobs that are thrown out into the spiral arms. Near the center of our galaxy, one (northern) arm is distorted and fanned out by this massive tumult. Our galaxy has a disk, bulge and halo. The bright central region is about 10-30 parsecs across (a parsec is 3.26 light-years). The nuclear bulge is filled with stars so close together that they cannot be resolved. Near the center of the bulge is the circumnuclear ring, about 1.5 parsecs from the center and about 180 parsecs in circumference. This is a cloud of nuclear material from exploded stars. The center is a black hole with a mass of about 3 million solar masses. The nearby 15-star cluster (IRS 16) is the fuel pouring into this black hole; its stars are shedding mass in winds of about 500-700 km/sec. The gas outflow from IRS 16 is thrown out as huge blobs about every 100 years. It is these blobs that are fracturing the structure of the inner northern spiral arm. The gaseous wind is so powerful it creates a tail of ionized gas around the supergiant IRS 7 and a huge shock wave in the circumnuclear ring behind IRS 7. This entire complex could collapse at any time, reshaping the center of our galaxy with effects throughout our Milky Way.
Molecular clouds create stars and planets. These clouds generally range from 100 to 300 light-years in diameter. The presence of dust and ice provides the process for collapse into proto-stars and planetary systems. Our Milky Way has at least 6,000 such clouds. The matter is cold, only about 30 Kelvin, and contains only about 1,000 million atoms per cm^3... so one liter would weigh only 3 billionths of a gram. The chemistry of such clouds is complex. It includes hydrogen, helium, ethanol, ring and acetylenic chains, carbon, oxygen, nitrogen, silicon, neon, magnesium, iron and sulphur. In the Orion nebula, the molecular cloud is several times larger than the entire nebula! Nagoya University studied clouds in the direction of Cepheus and Cassiopeia. Yonekura, Dobashi, Mizuno, Ogawa, and Fukui found that clouds occur all over the place. Out of 48,000 positions, they observed 1015 clouds, of which 188 were creating proto-stars, and of these 101 had never been catalogued before. This indicates there is a lot more matter out there than previously known, and a lot more star and planetary formation.
But these star forming and relatively cool clouds exist because they contain carbon that radiates excess heat and because they are dirty with dust that shields them from star light and heat. This dust is mostly carbon. Since the elements within these clouds were made in stars... how did the early stars form from the relatively pure hydrogen and helium that filled the early universe (Smolin 1997:110)?
Light takes 25,000 years to reach our sun from our galactic center. Our sun completes one full turn around our galaxy every 200-240 million years. We are about 28,000 light years from the center of our galaxy. We use one trip around our Sun as our "solar year". If we assume the universe is 10-15 billion years old and our Milky Way was created about a billion years after the big bang, then one trip around the Milky Way for our Sun could be called our "galactic year". This galactic year would take about 200-240 million years and would make our Milky Way a mere 58 galactic years old! (Magee 2000: 106). Galaxies in these terms are still young systems, still gobbling up other galaxies and growing like children in their early childhood.
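The "galactic year" arithmetic above is easy to check against the text's own figures: 15 billion years minus the 1 billion before the galaxy formed, divided by a 240-million-year orbit, gives the 58 galactic years; the 4.5-billion-year-old Sun itself has completed only about 19-23 orbits. A quick sketch:

```python
# Checking the "galactic year" arithmetic using the figures quoted above.
universe_age  = 15e9     # years (upper end of the 10-15 billion range)
galaxy_formed = 1e9      # years after the big bang
galactic_year = 240e6    # years per solar orbit of the Milky Way (upper estimate)

milky_way_age  = universe_age - galaxy_formed    # 14 billion years
galactic_years = milky_way_age / galactic_year   # ~58 "galactic years"
print(f"The Milky Way is about {galactic_years:.0f} galactic years old")

# By the same measure, the 4.5-billion-year-old Sun has completed
# far fewer orbits:
sun_orbits = 4.5e9 / galactic_year
print(f"The Sun has completed about {sun_orbits:.0f} orbits")
```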
"But although the Milky Way may be a glorious sight, it is a constant source of frustration for astronomers who study the universe beyond our galaxy. The disk blocks a full 20 percent of the cosmos, and it seems to be a very exciting 20 percent."
"Somewhere behind the disk, for example, are crucial parts of the two biggest structures in the nearby universe: the Perseus-Pisces supercluster of galaxies and the "Great Attractor," a gargantuan agglomeration of matter whose existence has been inferred from the motions of thousands of galaxies through space" (Kraan-Korteweg & Lahav 2000: 75).
In 1994, the Sagittarius dwarf galaxy was discovered only 80,000 light years away, in fact, well inside the Milky Way but on the other side of our galaxy. We are colliding with this galaxy, and it is being incorporated into the Milky Way in several orbits. The Magellanic Clouds pass through our galaxy every billion years and lose energy and matter each time. In about 10 billion years (when the universe is twice its present age), the Milky Way will have eaten them as well. Bizarrely enough, many of the oldest stars in the Milky Way galactic halo move in retrograde orbits... which suggests the proto-galaxy out of which we formed captured sizeable fragments moving in the opposite direction! (van den Bergh & Hesser 2000: 111).
"The Milky Way was though to have about 10 satellites, but within the last year or so, that number has nearly doubled" (Selim 2007:12). and..."the large and small Magellanic clouds are shooting by us at about 200 miles a second, faster than a satellite would" (Selim 2007:12-13).
Andromeda, our nearest neighbor, is 2 million light years away and is heading for our galaxy and will hit in about 5 billion years. It has been found to be at least 5 times larger than previous estimates, with a halo of stars half a million light years out from the center, and "much of the space between the Milky Way and Andromeda is filled with stars that belong to the galaxies...They practically overlap. It really challenges the notion of galaxies as groups of stars with empty space between them" (Selim 2007:12 quoting astronomer Raja Guhathakurta of the University of California, Santa Cruz).
Our local group of galaxies consists of us, Andromeda, and about 20 smaller galaxies. We are part of the Virgo cluster, centered about 50 million light years away, which is part of a larger wall-like filament about 200 million light years away (Rees 1997: 31). While the so-called dark matter has not been defined, there is about 0.1 atom per cubic meter of space in the universe and 400 million quanta of radiation per cubic meter as well. There are about a billion photons for every atom in the universe (Rees 1979: 50). In the centers of some galaxies are black holes weighing as much as millions or billions of stars. M87 has a dark central mass weighing about 3 billion suns, and Andromeda's weighs about 30 million suns (Rees 1997: 88). Galaxies are at least 10 times bigger and heavier than previously thought based on dark matter. Our Milky Way requires about 10 times as much dark matter as luminous matter to maintain its structure and rotation. Galactic clusters need about 100 times their visible mass to maintain their gravitational attraction (Genz 1999: 298-299). Baryonic matter is heavy: mostly protons and neutrons. Since 99% of the matter in the universe is dark (and if the universe is flat), only about 10% can be baryonic. What this matter consists of is unknown.
"The skinny black line on a plot of stellar rotation speed versus distance was expected to go down - stars close to the galactic center should orbit faster than stars at the edge because all the mass concentrated at the center of the galaxy pulls most powerfully on the closest stars. The same thing happens in the solar system: Mars moves faster than Jupiter because the sun's gravity pulls harder on it. Jupiter orbits faster than Saturn, and so on, out to Pluto and beyond. A plot of orbital speeds and distance - a rotation curve of the solar system - does decrease with distance. The skinny black line just falls, just as Newton's laws say it should" (Frank 2006:34).
"The rotation curves for spiral galaxies do not. At a certain distance from the galactic center, the rotation curves for stars in most spiral galaxies simply do not fall; instead, at some point they flatten. All the stars in the middle and out parts of these galaxies orbit with the same speed, in seeming defiance of newton;s laws. Why don't the outer stars move more slowly than the inner ones" (Frank 2006:34)?
"Faced with flat rotation curves that seemed to flout Newton's laws, astronomers assumed the existence of a halo or dark matter around every spiral galaxy. Whatever the stuff was, it did not emit light, but it did exert gravitational pull. The dark matter tugged on the stars, cranking up their speeds and creating the flat rotational curves" (Frank 2006:34).
"Common knowledge has it that part of this extra mass consists of ordinary matter that gives off too little radiation for present technology to detect: planets, dwarf stars, warm gas. Such material is more precisely called dim matter. It could represent up to 10 times as much matter as astronomers see, but even so it could account for only a fraction of the missing mass. When researchers refer to dark matter, they usually mean an exotic breed of matter that makes up the difference" (Milgrom 2003:4).
"In sum, astronomers widely believe the current energy content of the universe to be roughly 4 percent ordinary (or "baryonic") matter, about a tenth of which is seen as stars and gas; a third dark matter in some unknown form; and two thirds dark energy, the nature of which is even less understood" (Milgrom 2003:4).
"Milgrom found that the best way to resolve the problem of the flat rotation curves was to modify this hallowed equation" (i.e. -Newton) (Frank 2006:35):
"I (Milgrom) assumed that when the acceleration due to gravitational forces becomes very small, the formula changes to F = ma2/ao" (Frank 2006:35).
"According to Milgrom, this change holds only when accelerations fall below one 10-billionth of a meter per second every second. Not only does this work best with the data, he adds, but the new constant, ao, may be of cosmological significance: Accelerating at this rate will take you from a resting state to the speed of light in the lifetime of the universe" (Frank 2006:35).
Milgrom called this idea MOND (Modified Newtonian Dynamics). "How does MOND fare when confronted with the data? Orbital velocities in spiral galaxies, instead of declining with increasing distance from the galactic center, flatten out to a constant value, as predicted by MOND. Moreover, according to an observed correlation known as the Tully-Fisher relation, this constant velocity is proportional to the fourth root of the galaxy's luminosity. This, too, emerges naturally from MOND" (Milgrom 2003:5).
"Just as Planck's constant appears in many roles in quantum theory, so does ao appear in many ways in MOND's predictions for galactic systems. It is a part of the success of the theory that the same value - approximately one angstrom per second per second - works for all these diverse appearances" (Milgrom 2003:7).
"MOND appears to suggest that inertia - the responsiveness of a body to a force - is not an inherent property of bodies but is acquired by the body by dint of its interaction with the universe at large.This idea falls within the framework of an old concept, Mach's principle, which attributes inertia to such an interaction.... If so, what could be the agent whose presence impedes acceleration and thus produces inertia" (Milgrom 2003:9)?
"An Exciting possibility is the vacuum. The vacuum is what is left when one annihilates all matter (or, equivalently, energy) that can be annihilated. According to quantum theory, the remnant is not a complete void but rather a minimal representation of all forms of energy. The interaction of all the vacuum with particles might contribute to the inertia of objects" (Milgrom 2003:9).
Back to the Universe.
The Coma Cluster consists of about 100 galaxies containing 10^11 stars (Rees 1979: 116). The Hubble deep scan indicates that in our 10 billion light year volume of viewing, there are at least 100 billion galaxies. Puts things into perspective, if one can grasp such numbers and scales. The group of galaxies making up the "Great Attractor" contains at least 600 galaxies and is larger in scale and mass than the Coma Cluster. This would dominate our sky if we were on the other side of the Milky Way.
"The farthest you can see is the distance light has been able to travel during the 14 billion years since the big bang expansion began. The most distant visible objects are now about 4 x 1026 meters away - a distance that defines our observable universe, also called our Hubble volume, our horizon volume or simply our universe" (Tegmark, 2003:41).
"A typical group is 50 million times as massive as the Sun, and has a temperature of 10 million degrees C. By comparison, a typical cluster weighs 1,000 trillion Suns and registers a temperature of 75 million degrees C., the heaviest known cluster is five times as massive and nearly three times as hot" (Henry, Briel & Böhringer 2000: 46).
The idea that we are alone on such scales approaches silly as a limit (to paraphrase what little I remember from calculus). Ksanformality (Space Research Institute in Moscow) and Arkipov think life is so prevalent that we are bombarded by artifacts from other civilizations, at a rate of about 4,000 artifacts over the last 4.5 billion years.
"Astromomers suspect that our Hubble volume has at least 1020 habitable planets; some might well look like Earth" (Tegmark, 2003:42).
"Those beautiful spiral patterns that one sees in pictures of galaxies are not, in most cases, the patterns of where the stars are. In many cases, if one looked at a picture of where the stars are actually distributed, one would not see a spiral pattern. The spirals are only the region in which new stars are currently being formed. As a result, while it is true that spiral galaxies rotate, it is not true that the spiral structure, which is only the trace of the process of star formation, rotates with the stars of the galaxy. Instead, observations suggest that it moves through the galaxy, dissolving and reforming on scales somewhat slower than the rotation of the galaxy" (Smolin 1997: 121).
"Embedded in the halo one finds a disk of stars, gas and dust rotating slowly around an axis through the halo's center. This rotational motion is definitely not random; in any region of the disk the velocities of nearby stars differ by not more than ten percent from the overall rotational speed. One of the interesting and unexpected facts about the disk is that it does not rotate rigidly, like a merry-go-round or a top. Instead, the stars and clouds of gas rotate with roughly the same velocity no matter how far they are from the center, so that those father out take more time to complete a rotation.
The constancy of the rotational speed of the stars in a spiral disk is one of the spectacular scientific discoveries of the second half of the twentieth century. This is because it is possible to use Newton's laws to deduce this distribution of matter in a galaxy, given only the knowledge of the velocities of the stars. In most galaxies, the result of this is a very different distribution of matter than is seen in the stars and gas. Typically, between 80 and 90 percent of the matter of a galaxy is found to be spread out beyond the disk and is not in the form of visible stars and gas" (Smolin 1997: 121-122).
"As they have been mapped by astronomers, the interstellar medium of spiral galaxies are quite complex. The different phases of the medium, which differ dramatically one from another in density, temperature and composition, coexist side by side. One of these phases consists of the very cold and dense molecular clouds in which stars are formed. Very different from this is an extremely hot plasma phase, in which the electrons and nuclei have become disassociated. Still another phase consists of normal atomic gas, with rather moderate temperatures extending up to room temperature" (Smolin 1997: 123).
Smolin deduces that spiral galaxies are not in thermal equilibrium and that there is a flow of matter and energy as a complex system through feedback mechanisms. Supernova explosions fuel the system (input), star formation eats the fuel (output), and the spiral galaxy forms the means for the process (throughput). Star formation occurs in groups in the dense clouds of gas and dust... and this causes the cloud to heat up and be pushed away... stopping local star formation (feedback). The push of gas hits other dense clouds, which triggers new star formation (feedback). As stars age, those that go supernova refuel the system with gas and dust and also send out pressure waves that trigger star formation (feedback) in relatively stable clouds (Smolin 1997: 123-128).
"It has been estimated that in a typical spiral galaxy an amount of material equal to about three to five times the mass of the Sun in each year converted from gas to stars. On the other hand, the estimates are that each year the stars return, on the average, at least half of this same amount of material to the interstellar medium, through stellar winds and supernova explosions" (Smolin 1997: 131).
The universe is full of structure, from the smallest to the largest things we have observed: "At the upper end, the largest scales we have been able to probe are about half a billion light years, which is roughly 10^59 times the fundamental Planck length. The smallest scale we have so far been able to probe is about one hundredth the diameter of the proton, which is 10^18 Planck lengths. Thus from the largest to the smallest phenomena we have yet studied, the known world spans forty-one orders of magnitude" (Smolin 1997: 163). At the top end this is within one percent of its visible diameter and it is still structured.
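Smolin's forty-one orders of magnitude can be checked with quick arithmetic (the Planck length, light-year, and proton-diameter values below are standard approximations, not figures from the text):

```python
import math

PLANCK_LENGTH_M = 1.6e-35      # Planck length in meters (assumed value)
LIGHT_YEAR_M = 9.46e15         # meters per light-year
PROTON_DIAMETER_M = 1.7e-15    # approximate proton diameter

largest = 0.5e9 * LIGHT_YEAR_M        # half a billion light-years
smallest = PROTON_DIAMETER_M / 100    # one hundredth of a proton diameter

upper = math.log10(largest / PLANCK_LENGTH_M)    # ~59 orders above Planck scale
lower = math.log10(smallest / PLANCK_LENGTH_M)   # ~18 orders above Planck scale
span = upper - lower                             # ~41 orders of magnitude probed
```

The arithmetic lands on Smolin's numbers: about fifty-nine orders of magnitude at the top, eighteen at the bottom, forty-one in between.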
What about the other universal: time?
"The present is the only reality. While it slips away, we enter into a new present, thus always remaining in the eternal Now" (Reichenbach 1971: 2).
"The deterministic conception of time flow may be compared to the happenings seen in a motion picture theater. While we watch a fascinating scene, its future development is already imprinted on the film; Becoming an illusion, because it makes no difference to the happenings at what point we look at them. What we regard as Becoming is merely our acquisition of knowledge of the future, but it has no relevance to the happenings themselves" (Reichenbach 1971: 11).
Reichenbach (1971: 20-23) argues the following: 1) Time goes from the past to the future; 2) The present, which divides the past from the future, is now; 3) The past never comes back; 4) We cannot change the past, but we can change the future; 5) We have records of the past, but not of the future; 6) The past is determined; the future is undetermined.
He argues that time order is related to causal order... that "every asymmetrical, connected, and transitive relation establishes a serial order" (Reichenbach 1971: 26). The laws of thermodynamics enter here:
"The first law states that in all changes there exists a certain quantity, called energy, which retains a constant value. In its classical form the second law states that there exists another quantity, called entropy, which in some changes remains constant, but in other changes increases, whereas it is impossible that this quantity ever decrease. Irreversible processes are those in which entropy increases" (Reichenbach 1971: 50). In other words: 1) you can't win.
"Since completely reversible processes do not occur, the second law of thermodynamics can be stated in the form that the entropy of a closed system increases as long as any processes are going on within it, or, in other words, so long as a state of equilibrium has not been reached" (Reichenbach 1971: 53). In other words: 2) you can't break even.
"We see that our conception of causality, of the past that determines the present and the future, is closely connected with our definition of positive time in terms of growing entropy. In opposite time we find it's equivalent in a conception of finality, according to which the future determines the present and the past" (Reichenbach 1971: 154). In other words: 3) you can't get out of the game.
Thus the cause produces the effect and the effect records the cause. There is a close connection between entropy and information. Information is derived from the past. Information is created in the now. "Entropy measures the degree of randomness of a system" (Gatlin 1972: 26), while Information measures the degree of order of a system.
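The pairing of entropy with randomness and information with order can be illustrated with Shannon's entropy formula, H = -sum(p log2 p). A sketch (the three toy messages are invented for illustration):

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy in bits per symbol: H = -sum(p * log2 p)."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

ordered = "aaaaaaaaaaaaaaaa"       # one symbol only: perfectly ordered
mixed = "aabbaabbaabbaabb"         # two symbols, equally likely
random_like = "abcdefghabcdefgh"   # eight symbols, equally likely

# More order -> lower entropy (0 bits); more randomness -> higher entropy.
```

The perfectly ordered message carries zero bits per symbol; as the symbol mix becomes more random, the entropy climbs, which is exactly the degree-of-randomness reading Gatlin gives.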
"If I stretch my imagination, I can just begin to believe in the idea that space is not something fundamental, but emerges only as an approximate way of describing the way things are organized and interrelated. Temperature is such an emergent property; it has no meaning on the atomic level. It is only a measure of the average energy of vast numbers of molecules. In the picture I described in the last chapter, space is something like this; there is a fundamental level in which there are only the connections among the nodes and edges of a network. These networks do not exist in space - they simply are. It is their network of interconnections that define, in appropriate circumstances, the geometry of space, just as the jumps and dances of all the atoms in a cubic centimeter of air define its temperature. Perceived at vastly larger scales than the Planck length, the network seems to trace a continuous geometry, just as the cloth of my shirt is woven from a network of threads. Perhaps, just perhaps, this is the way the world is" (Smolin 1997:286).
"But what about time? Could time also be something that emerges from some more fundamental level? Is it possible that at this level there is no time, no change" (Smolin 1997: 286)?
The indeterminacy principle of Heisenberg now enters the equation. It says that for every physical quantity there exist other quantities that cannot be measured simultaneously with it.
Heisenberg discovered that one could determine either the position (space) or the momentum (time) of light, but never both precisely at the same time. The irreducible uncertainty turned out to be on the order of Planck's constant divided by the mass of the particle. At this level, partitioning by space or time into successive events is not possible. This area of uncertainty is what I call god's loaded dice.
"The Uncertainty Principle makes it impossible to predict the precise behavior of individual atoms, let alone the universe. Moreover, according to the theory, in the subatomic realm, only probabilities can be calculated" Kaku & Thompson 1997: 44).
When you read the following "classical" version, recall the "it from bit" version noted earlier. Compare and contrast the two.
"The uncertainty principle, so simple and yet so startling, was a stake in the heart of classical physics. It asserts that there is no objective reality - not even an objective position of a particle - outside of our observations. In addition, Heisenberg's principle and other aspects of quantum mechanics undermine the notion that the universe obeys strict causal laws. Chance, indeterminacy, and probability took the place of certainty" (Isaacson 2007:332).
"This inability to know a so-called "underlying principle" meant that there was no strict determinism in the classical sense. "When one wishes to calculate 'the future" from 'the present' one can only get statistical results, "Heisenburg said, "since one can never discover every detail of the present"" (Isaacson 2007:333).
There is another term that is nice to snuggle up to on a long cold winter night: eigenstate. Eigen is German for "particular". It can be said, for states where the uncertainty principle applies, that as long as no measurement of a system has been made, it is impossible to know the eigenstate of that system... and the system itself does not "know" what its eigenstate is until the observer measures it. The thought experiment called "Schrödinger's Cat" illustrates this concept. A cat is placed into a box where a small sample of radium, with a 50-50 probability of a decay in any one hour, could trip a switch that breaks a beaker filled with cyanide gas. After an hour, the lid is lifted to see if the cat is alive or dead. According to one extreme view of quantum mechanics, at that instant the system is forced to jump to one of the two eigenstates: alive or dead (like a wave or a particle)... and until the observer looks, the cat is in both states, partly alive and partly dead(!)... the cat exists in two possible parallel universes, one in which it is alive and one in which it is dead. Is light a wave or is light a particle... does light exist in two parallel universes, where the action of the observer determines which one? Do previous states constrain future states? This implies that the history of the universe impacts the future of the universe... but that each "present" can branch into different paths at the "now".
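The 50-50 statistics of the cat experiment can be mimicked with a toy simulation, where a seeded coin flip stands in for "the radium decayed within the hour" (a caricature, of course: it models the measurement outcomes, not the superposition itself):

```python
import random

def run_cat_trials(n_trials: int, p_decay: float = 0.5, seed: int = 42) -> float:
    """Fraction of boxes in which the atom decayed (cat found dead) on opening."""
    rng = random.Random(seed)
    decayed = sum(1 for _ in range(n_trials) if rng.random() < p_decay)
    return decayed / n_trials

fraction_dead = run_cat_trials(100_000)
# Over many boxes, roughly half the cats are found dead -- but each single
# opening yields one definite eigenstate: alive or dead, never both.
```

The probabilities only emerge across the ensemble; any individual lid-lift gives a single definite answer, which is the jump to an eigenstate described above.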
"One distinguishes two things about a wave. First, a wave has a front, and a succession of wave fronts forms a system of surfaces like the layers of an onion. A two-dimensional analog is the beautiful wave circles that form of the smooth surface of a pond when a stone is thrown in. The second characteristic of a wave, less intuitive, is the path along which it travels - a system of imagined lines perpendicular to the wave fronts. These lines are known as the wave "normals" or "rays".
We can make the provisional assertion that these rays correspond to the trajectories of particles. Indeed, if you cut a small piece out of a wave, approximately 10 or 20 wave-lengths along the direction of propagation and about as much across, such a "wave packet" would actually move along a ray with exactly the same velocity and change of velocity as we might expect from a particle of this particular kind at this particular place, taking into account any force fields acting on the particle" (Schrödinger 2000: 28-29).
"One interpretation of wave phenomena extensively supported by experiments is this: at each position of a uniformly propagating wave train, there is a twofold structural connection of interactions, which may be distinguished as "longitudinal" and "transversal". The transversal structure is that of the wave fronts and manifests itself in diffraction and interference experiments; the longitudinal structure is that of the wave normals and manifests itself in the observation of single particles" (Schrödinger 2000: 29).
Ultimate Zen: This version implies our choice, within the constraints of the history of previous choice, is directed by conscious thought!?
Einstein showed that space and time are really space/time.
"Raffiniert ist der Herr Gott, aber boshaft ist er nicht" meaning "Subtle is the Lord, but malicious he is not" (Isaacson 2007:297).
"The relativity of space and time is a startling conclusion. I have known about it for more than twenty-five years, but even so, whenever I quietly sit and think it through, I am amazed. From the well-worn statement that the speed of light is constant, we conclude that space and time are in the eye of the beholder. each of us carries our own clock, our own monitor of the passage of time. Each clock is equally precise, yet when we move relative to one another, these clocks do not agree. They fall out of synchronization; they measure different amounts of elapsed time between two chosen events.* The same is true of distance. Each of us carries our own yardstick, our own monitor of distance in space. Each yardstick is equally precise, yet when we move relative to one another, these yardsticks do not agree; they measure different distances between the locations of two specified events.* If space and time did not behave this way, the speed of light would not be constant and would depend on the observer's state of motion. But it is constant; space and time do behave this way. Space and time adjust themselves in an exactly compensating manner so that observations of light's speed yield the same result, regardless of the observer's velocity" (Greene 2004:47).
*Remember.. it is spacetime... not space and time... so they are, in my opinion, connected in an "elastic way" to each other.
"It is hardly possible to explain how a length of space and a length of time can each appear differently to differently moving observers, while their combined space-time interval remains the same for all, except by pointing out a peculiar mathematical fact: that the interval squared always equals the difference between its space squared and its time squared, a difference that is constant and unaffected by shifting observer's viewpoints or the relative proportions of space and time involved" (Murchie 1961: 547).
The faster you move relative to other objects in the Universe, as a fraction of the velocity of light... the slower your clocks appear to tick in comparison to theirs. A person traveling near the speed of light may age only a few minutes while the slower person would age hundreds of years. Fast objects operate more in space than time, and slow objects operate more in time than space. So if someone calls you "spacey", you work it out.
"Interval is the sole objective physical relation between events, the mathematician's fundamental invariant, the prime ingredient of world texture and probably one of the few absolutes left in our fathomless new ocean of relativity" (Murchie 1961: 547).
Now for some additional counter-intuitive stuff (stuff is a highly technical term used by great scientists):
"If you are surprised to see that the past meets the future only here, not elsewhere, just remember the Einstein has quite thoroughly exploded the myth that simultaneity prevails throughout the universe. And this means that the only definite location of now is here. In fact, every man's now is here (the "here" meaning "where he is"). While beyond here, now becomes more and more a matter of viewpoint, or relative motion" (Murchie 1961: 556).
"That the whole abstraction of place, which we once learned to trust, turns out to be nothing but a viewpoint that is different from every side .... it is neither the point in space, nor the instant in time, at which something happens that has physical reality, but only the event itself" (Murchie 1961: 560561).
Einstein found that gravity and accelerated motion are profoundly intertwined. His thought experiment showed that an observer confined inside a compartment cannot distinguish between acceleration and gravity: the equivalence principle. But in reality, I think there is a profound difference. Accelerated motion requires the expenditure of energy on the part of the moving object. What energy is expended by gravity? It takes energy to accelerate... does it take energy to gravitate? If they are equivalent, then something must be expending energy to create the equivalent effect. What energy? From where? The observer must know about the source of energy expended for the accelerated motion. Then why does the observer not observe the energy expended by gravity? This indicates to me that they are not equivalent... or is time perhaps the observable energy change?
Two stories about relativity:
1) "The skeptical Silberstein came up to Eddington and said that people believed that only three scientists in the world understood general relativity. He had been told that Eddington was one of them.
The shy Quaker said nothing. "Don't be so modest, Eddington!" said Silberstein.
Replied Eddington, "On the contrary. I'm just wondering who the third might be."" (Isaacson 2007:262).
2) "It was by all accounts, a pleasant Atlantic crossing, during which Einstein tried to explain relativity to Weizmann. Asked upon their arrival whether he understood the theory, Weizmann gave a delightful reply: "During the crossing, Einstein explained his theory to me every day, and by the time we arrived I was fully convinced that he really understands it."" (Isaacson 2007:292).
I finally think I understand relativity a bit more after reading the following:
"Einstein proclaimed that all objects in the universe are always traveling through space/time at one fixed speed - that of light. This is a strange idea; we are used to the notion that objects travel at speeds considerably less than that of light.... We are presently talking about an object's combined speed through all four dimensions - three space and one time - and it is the object's speed in this generalized sense that is equal to that of light... If an object is sitting still (relative to us) and consequently does not move through space at all, then ... all of the object's motion is used to travel through one dimension - in this case, the time dimension. Moreover, all objects that are at rest relative to us and to us and to each other move through time - they age - at exactly the same rate or speed. If an object does move through space, however, this means that some of the previous motion through time must be diverted... the object will travel more slowly through time than its stationary counterparts, since some of its motion now is being used to move through space... We see that this framework immediately incorporates the fact that there is a speed limit to an object's spatial velocity: the maximum speed through space occurs if all of an object's motion through time is diverted to motion through space. This occurs when all of its previous light-speed motion through time is diverted to light-speed motion through space. But having used up all of its motion through time, this is the fastest speed through space that the object - any object - can possibly achieve... Thus light does not get old; a photon that emerged from the big bang is the same age today as it was then. There is no passing of time at light speed" (Greene 1999: 50-51).
I tried to visualize our solar system in this way... that the light generated by matter is standing still and that the matter is moving and distorting space at the "speed attributed to light", and that it is us, not light that is traveling through the universe... and it twisted my brain into a knot.
Thus, I thought these factors can be viewed as vectors whose total value is equal to the speed of light. I wondered if it was possible to substitute this vector as an equation for the "c" in E = mc^2... and then solve the equation for the Planck length of a string to gain a numeric value for "time": E = m(vectors)^2, where "vectors" includes directionality and time as an equation. I sent an email to Brian Greene, asking this question. He replied that the factors refer to different quantities in the units and it is more like a conversion factor than something that can be solved for... so this is not an equation in that sense. I also wondered if it is possible to solve for the amount of information in the universe. I have this GUT feeling for a law of the conservation of information parallel to the conservation of energy... that information cannot be destroyed, only converted to chaos, and that there is a simple isomorphic relationship between E = mc^2 and disorder, order and some constant: E(disorder) = m(order)c^2... chaos = (information)c^2. Let's call the non-Higgs concept "higgsnon". In this scheme, dark energy would be "enernon" (e*), and dark matter would be "matternon" (m*). In this system, e* = m*(c*)^2 may or may not be equivalent, depending on c* (the speed of light in the higgsnon system). If it differs, then the space/time aspects of the higgsnon would be different. That may explain the odd distribution of m* seen through gravitational studies. The dark photon ("photnon") may differ in its c value. Can a model be created that fits a higgsnon system and explains the distribution of matternon and the effects of enernon? Anyway, it is time to give names to dark energy and dark matter!
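Greene's picture of everything traveling through spacetime at one fixed speed can be sketched as the vector idea suggested above: with c = 1, speed through time is sqrt(1 - v^2), so motion through space is "diverted" from motion through time (a heuristic reading of the metaphor, not a formal derivation):

```python
import math

def speed_through_time(spatial_speed: float) -> float:
    """With total spacetime speed fixed at c = 1: time-speed = sqrt(1 - v^2)."""
    return math.sqrt(1.0 - spatial_speed ** 2)

at_rest = speed_through_time(0.0)   # all motion through time: clocks run fastest
fast = speed_through_time(0.999)    # nearly all motion diverted into space
photon = speed_through_time(1.0)    # light: no motion through time, so no aging
```

At rest the full budget goes into time; at v = 0.999c almost none is left for time, which is the time dilation of the fast traveler; at v = c the time component is exactly zero, Greene's "light does not get old".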
Hermann "Minkowski imagined that the spatial distance measured by two observers in relative motion is a projection of an underlying four-dimensional spacetime distance onto the three-dimensional space that thet can sense; and, similarly, that the temporal "distance" between two events is a projection of the four-dimensional spacetime distance onto their own spacetime" (Krauss, 1995:29).
"So the crazy invariance of the speed of light for all observers provided the clue to unravel the true nature of the fout-dimensional universe of spacetime in which we actually live. Light displays the hidden connection between space and time. Indeed the speed of light defines the connection" (Krauss, 1995:29).
"...if light rays map out spacetime, then spacetime must bend in a gravitational field. Finally, since matter produces a gravitational field, then matter must bend spacetime" (Krauss, 19595:32-33).
"The central premise of Einstein's general relativity is simple to state in words: the curvature of spacetime is directly determined by the distribution of matter and energy contained within it. Einstein's equations, in fact, provide simply the strict mathematical relation between curvature on the one hand and matter and energy on the other:
"The left had side of this equation fixes the geometry of spactime. The right hand side fixes the matter and energy distribution" (Krauss, 19595:49).
Douglas Adams put it best:
"We live in strange times..... We also live in strange places: each in a universe of our own. The people with whom we populate our universes are the shadows of whole other universes intersecting with our own. Being able to glance out into this bewildering complexity of infinite recursion and say things like, "Oh, hi, Ed! Nice tan. How's Carol?" involves a great deal of filtering skill for which all conscious entities have eventually to develop a capacity in order to protect themselves from the contemplation of the chaos through which they seethe and tumble. So give your kid a break, okay?
Extract from Practical Parenting in a Fractally Demented Universe" (Adams 1997:737).
I wish I had said that. I like the image of a fractally demented universe....but dementia is in the mind of the beholder.
There is no time at the speed of light. Light does not age, it just is. Does this sound like a description of a god-like property?
"It's name not withstanding, Einstein's theory does not proclaim that everything is relative. Special relativity does claim that some things are relative: velocities are relative; distances across space are relative; durations of elapsed time are relative. But the theory actually introduces a grand new sweepingly absolute concept: absolute spacetime. Absolute spacetime is as absolute for special relativity as absolute space and absolute time were for Newton, and partly for this reason Einstein did not suggest or particularly like the name "relativity theory." Instead, he and other physicists suggested invariance theory, stressing that the theory, at its core, involves something that everyone agrees on, something that is not relative" (Greene 2004:51).
"Absolute space does not exist. Absolute time does not exist. But according to special relativity, absolute spacetime does exist" (Greene 2004:59).
Keep in mind:
There is a distance, called the Planck distance (after the German physicist Max Planck), of 1.61 x 10^-33 centimeters where general relativity ceases to apply. If you divide this distance by the speed of light you get the Planck time: 5.35 x 10^-44 seconds (Morris 1985: 194-195).
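Morris's numbers reproduce with one division, Planck time = Planck length / c (CGS values to match the quote):

```python
PLANCK_LENGTH_CM = 1.61e-33      # Planck length, centimeters (from the quote)
SPEED_OF_LIGHT_CM_S = 2.998e10   # speed of light, cm per second

planck_time_s = PLANCK_LENGTH_CM / SPEED_OF_LIGHT_CM_S
# Comes out near 5.4e-44 seconds, matching Morris's 5.35e-44.
```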
Time stops at the speed of light. Light is unchanging, non-aging, so it is the perfect "bit" of information. But light expands the observer's universe at the speed of light, and from the point of view of the observer, light expands across a larger and larger area of the universe. As it expands, the amount of light per surface area is reduced. So while light is instantaneous and unchanging, it changes in relation to any observer. So what is changing, the light, or the observer?
If the entropy of the universe increases to a ball of universal light, does the universe convert to a ball of ultimate information? Is there a correlation between matter and chaos and energy and entropy?
Is a black hole the fundamental unit of chaos, anti-information, anti-order?
"It appears, therefore, that we exist in a very improbable kind of universe, one that was fine-tuned to an accuracy of one part in 1015 at a time of one second after the big bang. In fact, this fine-tuning was even greater at earlier times. At some point, when the universe was only a fraction of a second old, it would have been not one part in 1015, but one part in 1050.... If this fine-tuning had not taken place, we would not exist. In a universe that had slightly less matter than ours, the stars and galaxies would never have formed. Matter would have expanded outward at such a rate that gravity could never have created the condensations of hydrogen and helium gas from which the galaxies were formed. On the other hand, if the matter density had differed from the critical value by slightly more than a factor of one part in 1015 in the other direction, gravity would have been too strong. The expansion would have halted, and the universe would have collapsed in a big crunch long before life had a chance to evolve" (Morris 1990:53-54).
The Great Attractor: In 1977, study of the cosmic microwave background radiation discovered that it was slightly red-shifted on one side of the sky and slightly blue-shifted on the other, suggesting a motion over and above the velocity due to the expansion of the universe. This motion was about 600 kilometers per second. In 1987, a group of astrophysicists, known as the "Seven Samurai", checked the relative motion of 400 galaxies, which resulted in an astounding discovery: "The Local Group, the cluster of galaxies in Virgo, and two superclusters in the Hydra-Centaurus and Pavo-Indus region were all caught in the gravitational grip of some huge mass... All the galaxies in our region of the universe were caught up in a streaming motion toward an attractor that had a mass at least 5 x 10^16 times greater than that of the sun, and equal to that of tens of thousands of galaxies, and which was situated at least 400 million lightyears from the Milky Way. The velocity of the streaming motion was about 600 kilometers per second in the vicinity of our galaxy. In places close to the Great Attractor, it rose to 1,000 kilometers per second or more" (Morris 1990: 129).
If there was an origin as a Big bang, then at zero time, the universe had zero size, therefore the density of the universe was infinite: this is called a singularity. Singularities may not exist, and only approach nothingness as a limit. But particles were created after the big bang and appear to have sprung into existence everywhere at the same time as a phase transition...something else other than matter approached nothingness as a limit.
"Instead of a singularity, Turok and Hawking proposed the instanton - a particle of highly compressed space and time, having the mass of a pea but the size a millionth of a trillionth of a trillionth that of a pea. The instanton is called so because it is a particle that exists for only an instant. Before the instanton, time does not exist, and neither does space. Unlike the big bang singularity, the instanton is smooth. And as the instanton explodes, cosmic inflation begins just as predicted by Alan Guth. Eventually the universe that sprang out of the instanton expands forever" (Aczel 1999:213).
"...but one of the emerging theories of 21st-century cosmology is that the known universe, the sum of all we can see, may just be a tiny region in the full extent of space" (Burgess & Quevedo 2007:53).
"The weak anthropic principle has been stated by the British physicist Brandon Carter as follows: "What we can expect to observe must be restricted by the conditions necessary for our presence as observers." In other words, if the universes did not have the properties it does, we would not be here to see it" (Morris 1990: 216).
"The strong principle has been stated by Carter as follows: "The Universe must be such as to admit the creation of observers within it at some state". In other words, a universe that does not have the potential for the creation of life is impossible" (Morris 1990: 218). As Wheeler put it, there was no "before" the Big bang, and there would be no "after" a Big Crunch. The universe is space/time, and it contains space and time, space and time did not create the universe. Wheeler came up with the principle he called "Genesis by Observership", meaning the observer creates reality: "Do billions upon billions of elementary acts of observer-participancy add up to all that we call creation?"
"... we have to confront the problem of how to construct a rational and complete understanding of the world that allows the observer to be in the world. But observers are not simple things, and any universe that naturally gives rise to, and is hospitable to, an observer must be complex. Thus, a theory of a whole universe, if it is to be consistent with what we know of quantum theory and relativity, must be of a complex, self-organized universe" (Smolin 1997: 19).
Barrow (1991: 164-167) notes that the current universe is as big as it is because we exist right now. We exist because the chemicals needed for carbon based life exist in the necessary and sufficient quantities for life.
"Living systems on Earth are based upon the subtle chemical properties of carbon and their interplay with hydrogen, nitrogen, phosphorus, and oxygen. These biological elements, and all much-vaunted alternatives like silicon, do not emerge as fossils from the inferno of the Big Bang. They are the results of nuclear reactions in the interiors of the stars. There, primordial hydrogen and helium nuclei are burnt into heavier elements by the process of nuclear fusion. When these stars reach the ends of their lives, they explode and disperse these heavier biological elements into space where they become incorporated into molecules, planets, and eventually people. Almost all the carbon atoms in our bodies have this dramatic astral history" (Barrow 1991: 165).
"This process whereby Nature produces the biological building blocks of life from inert relics of the Big Bang is long and slow by terrestrial standards. It takes more than ten billion years. This vast period of stellar alchemy is necessary to provide the necessary precursors to life. Since the Universe is expanding, we now discern why it is necessary for it to be at least ten billion light years in size. A universe simply as big as our galaxy indeed has room for a hundred billion stars, but it would be little more than a month old. There is a niche in the history of the Universe when life could and did evolve spontaneously. That niche is bounded on one side by the requirement that the Big Bang cool off sufficiently to allow stars, atoms, and biochemicals to exist, and on the other side by the fact that all the stars will have burned out after a hundred billion years" (Barrow 1991: 165).
Genz (1999) reminds me that there is no such thing as nothingness in space or vacuum. He asked "How empty can space be and still remain in consonance with the laws of nature?" (1999: vii). Space is filled with radiation, even if it is not filled with "mass":
"According to quantum mechanics - more specifically, to Heisenberg's uncertainty relation - we can never precisely fix the amount of energy that fills a certain region of space in a certain amount of time. The amount of energy will fluctuate. Consequently, we will never be able to define a zero-scale for energy. One might say that the vacuum of physics emits energy - more of it the shorter the time span we define, less of it for longer times...
According to Albert Einstein's famous formula E = mc2, energy and mass are the same thing. Mass therefore also fluctuates, and empty space will see a constant emergence and disappearance of particles that carry this mass. These particles don't last, and physicists call them virtual particles...
The physical vacuum is by no means empty and devoid of characteristics. Rather, anything that can exist at all will oscillate and spin in it in a random, disordered fashion. In this vacuum, quantities will emerge that, in an abstract space of particle properties, will define directions; these quantities, which in their abstract space act somewhat like magnets in real space, are called fields" (Genz 1999: viii).
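Genz's "more energy the shorter the time span" is the energy-time uncertainty relation, roughly (delta E)(delta t) ~ hbar. A sketch of the bookkeeping, including the mass such a fluctuation can "buy" via E = mc^2 (the time spans chosen are arbitrary illustrations):

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.998e8              # speed of light, m/s

def borrowed_energy(delta_t_s: float) -> float:
    """Energy scale available to a fluctuation lasting delta_t: E ~ hbar / dt."""
    return HBAR / delta_t_s

def equivalent_mass(energy_j: float) -> float:
    """Mass equivalent of that energy via E = m c^2."""
    return energy_j / (C * C)

short_lived = borrowed_energy(1e-21)   # a very brief fluctuation
longer_lived = borrowed_energy(1e-15)  # a much longer one
# Shorter time span -> larger energy fluctuation, exactly as Genz describes;
# the briefer the loan, the heavier the virtual particles it can fund.
```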
Fields are odd little things. There was an argument about whether the electron has a negative charge and the proton a positive charge, or simply that they were opposites. Hermann Weyl resolved this issue by finding an explanation of why we can make whatever choice we want. He found that if the particles did not interact directly, but instead interacted with a field in space, then the charge of each is felt through its impact on the field, and all that matters is the relationship between the charge and the field. When Weyl did the math to describe this, he discovered it was the same as the equations for the electromagnetic field! The concept of fields has become what is called the gauge principle (Smolin 1997: 51-53).
"...let us consider a more complicated kind of electron, which can have not one, but three kinds of charge. Let us name these charges after the three primary colors, so now we can have red charges, yellow charges and blue charges. Following Weyl's ideas, several people then asked what would happen if we were able to change our minds about which color was which, freely, in different places at different times. They found that this could be accomplished if there was a field that interacted with the particles. This new field is a fancier object than the electromagnetic fields; it is something like eight different electrodynamic fields, which interact not only with the colored particles but with each other. The new theories were called Yang-Mills theories after C. N. Yang and Robert Mills, who together were among those who proposed them in 1954" (Smolin 1997: 53).
"Shortly after this, several people realized that, when combined with quantum mechanics, Yang-Mills fields can have some remarkable properties. Everyone is familiar with the fact that in electricity opposite charges attract. But when there is more than one kind of charge, as in our example with colors, this tendency can be realized in a way that is much more drastic. Opposite colors not only attract - they cannot be separated from each other. No combination of colored particles can be separated from others unless all the color averages out completely. This property is called the confinement of colors. It means that one can never observe a colored particle in nature. One can only see combinations of particles in which the colors cancel each other out.
"As soon as people understood this property of confinement, the application to physics was obvious. Physicists had a good reason to believe that protons and neutrons are each composed of three particles, which had been called quarks. Moreover, every one of the many strongly interacting particles that had been seen in experiments could be interpreted as containing an equal mix of the three colors. The result was one can understand all the phenomena of the strong interactions, including all of nuclear physics, by supposing that each of the quarks comes in three colors, and that the forces between them are the result of their interactions with the Yang-Mills field.
"This new theory, called quantum chromodynamics, or QCD, for short, must be considered to be one of the triumphs of twentieth century science" (Smolin 1997: 53-54).
A basic first example of such a field is the Higgs field. To visualize this field, one must think about phase transitions. The freezing of water is such a transition, with a critical temperature (energy) of 0 degrees Celsius. The critical temperature for the Higgs field transition was so high that it occurred in the very earliest phase of the big bang, fractions of a second after its creation (Genz 1999: 11). The Higgs field "pops up in our empty space simply because, in its presence, space is in a state of lower energy than in its absence" (page 11). This field "connotes a highly ordered state - just as water molecules become ordered when they undergo a phase transition into ice. As long as they move in the liquid water phase, they wander about in random motions; in the crystalline ice phase, they oscillate about a point of rest" (page 13).
Liquid water has symmetry with respect to arbitrary translations and rotations; ice does not. Symmetry is important to understanding transitions. "Ice has only very limited translational and rotational symmetries, leaving its crystal structure unchanged. This means that certain distances and certain directions in geometrical space are singled out as special. The same is true for the Higgs field; this field also distinguishes certain directions in its own abstract space - the space spanned by the properties of elementary particles" (Genz 1999: 13-14).
Order, chaos and symmetry are connected, and symmetry can be broken; think, for example, of the symmetries of the circle, the line, or the plane. "Patterns with symmetries that are based on the ordering of finite elements will, by that very definition, break higher symmetries based on continuous parameters" (Genz 1999: 15). Nondeterministic symmetry breaking is called spontaneous symmetry breaking. "Empty" space contains nothing that could be changed by translation, rotation or mirror imaging, so it is symmetric under all three. In addition, there are symmetries between "empty" space and natural laws, excluding changes in velocity (acceleration and deceleration):
"The only property of space that can be defined by itself objectively and free of arbitrary choices is the physical concept of its acceleration; this is true because Newton's laws are not symmetrical under changes in acceleration" (Genz 1999: 20).
"If a space of lowest energy is in a state that is not invariant under symmetry transformations of the basic laws of nature, then it contains "something"; it is not a void. The transformations of the basic laws of nature act on this "something" and change it. The very fact that this space has distinguishable states, equivalent to each other through symmetry transformations of the laws, permits the definition of a direction, a distance, a concept that defines a particular rotational motion, and these very transformations can take the space from one such state to another.... any space that distinguishes directions and accelerations cannot be "empty"..." (Genz 1999: 20-21).
"If we remove from space as much energy as possible, there may be a "spontaneous" materialization of structures that we have named Higgs fields... virtual particles incessantly emerge and vanish in this space... The vacuum of physics contains in its facilities everything that the laws of nature will permit. It fluctuates - the virtual particles come and go. The only thing that may be missing is the energy it would take to make them appear as real particles. All that can appear in reality must be present as a possibility - as a state of virtual particles - in a vacuum. Add energy to the vacuum and those virtual states may appear as particles" (Genz 1999: 24).
Thus it appears that a Higgs field will appear if energy is at its minimum, but that the transformation from a virtual universe to a real one requires an energy input. "Real systems are, in this sense, "excitations of the vacuum" ... The vacuum in itself is shapeless, but it may assume specific shapes: in so doing, it becomes a physical reality, a "real world"" (Genz 1999: 26). The implications of this are interesting: a singularity of infinitely low energy, interacting with an infinitely hot, dense flash of pure, structureless energy, generates a Higgs field, which in turn generates structure in which space, matter and time share a common origin. Infinite heat means infinite disorder. I ask: is there a phase transition between information and energy that generates the heat needed to begin a universe?
"... the mass of the electron is not intrinsic; it comes from its interaction with certain particles, which are called Higgs particles. If there were no Higgs particles, the electron would have no mass. It would move at the speed of light, like a photon. But if it finds itself surrounded by a gas of Higgs particles, an electron is not able to move so quickly. The electron seems to gain mass because it is moving, not through empty space, but through a muck of Higgs particles; it becomes heavier because when one pushes it, one also pushes all the Higgs particles around it.
"In fact there is good reason to believe that the world is filled with a gas of Higgs particles, which are responsible for giving the electron its mass. But that's not the whole story. Adding the Higgs particles doesn't really remove the distinction between electrons and neutrinos: they are now different because the electron interacts with the Higgs particles while the neutrino does not" (Smolin 1997: 54-55).
There should be a neutrino Higgs as well. Where is it? It was discovered that the only stable configuration was one in which there was only one type of Higgs particle. Spontaneous symmetry breaking created our universe with its electron Higgs field.
"... the true vacuum at zero temperature still has an infinite supply of radiation energy. As we proceed, we shall see that electromagnetic radiation is in fact only one component, albeit infinite in quantity, of the unfathomable energy supply of the vacuum" (Genz 1999: 189).
"We have seen that the vacuum of physics is distinct from a simple nothing by virtue of the nonrenewable activity we called zero-point radiation. Fields and particles originate and disappear; virtual particles appear for the briefest of time - shorter for the heavier ones, a bit longer for the lighter ones. In this framework, real particles appear - in a fashion reminiscent of what Heraclitus called panta rhei - as phenomena accompanied by excitations of the vacuum. We can loosely compare the vacuum to a farmer's field of wheat that is swaying in the wind, and the particles to the wave patterns excited by the wind on its surface" (Genz 1999: 189).
"There is a saying that mass is, in a way, frozen energy" (Genz 1999: 190) in that the rest mass is "frozen" but the other energy of the particle can be transferred to other particles. In addition, "the uncertainty relation implies that there cannot be a region in space totally devoid of electromagnetic fields" (Genz 1999: 202).
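The "frozen energy" saying can be made concrete with E = mc^2. A minimal sketch (standard constants; the electron is my choice of example) converting the rest mass of one electron into energy:

```python
# "Frozen energy": the rest mass of an electron expressed as energy via E = m*c^2.
M_ELECTRON = 9.109e-31   # electron mass, kg
C = 2.998e8              # speed of light, m/s
EV = 1.602e-19           # joules per electron-volt

rest_energy_joules = M_ELECTRON * C**2
rest_energy_mev = rest_energy_joules / EV / 1e6
print(f"electron rest energy: {rest_energy_mev:.3f} MeV")
```

This reproduces the familiar 0.511 MeV figure: a tiny amount of matter "freezes" a substantial amount of energy, which is only released when the particle itself is destroyed.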
"... every particle has its antiparticle of opposite charge and that particle plus antiparticle are nothing but an excitation of the vacuum accessible from the Dirac sea once there is enough energy for the transition. Conversely, we saw that every real particle-antiparticle pair annihilates into a pure energy excitation of the vacuum. These are the results that count; and the uncertainty relation tells us that pair creation and pair annihilation happen in the vacuum at all times, in all places" (Genz 1999: 205).
There is a field for every particle. The Higgs field pervades the entire universe, as it originated when the universe cooled down. This field has a lower energy content than space in its absence. It emerged into the universe when the temperature dropped below 10^15 degrees Celsius.
"The parameter we hold responsible for the ability or inability of a particle to move with a velocity other than that of light is its mass. When the mass is zero, such as that of a real photon, the particle has to move with the velocity of light at all times; for finite masses, its velocity can take any value below the velocity of light, and can also be zero. We saw that quantum mechanics treats light as a current of mass-zero particles called photons, which have to move with the greatest possible velocity across space. That space, real space, contains the pervasive Higgs field. The photon is one of the particles - it might even be the only one! - that does not interact with the Higgs field and can therefore move in this field without being slowed down. We might redefine the influence of the Higgs field on the velocities of the particles it interacts with in the following way: The Higgs field imparts effective masses to the particles - which, above the critical temperature of 10^15 degrees, are massless like the photon" (Genz 1999: 231).
A massless particle seems rather pointless to me (bad pun).
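The claim that particles behave as massless above the critical temperature can be sanity-checked with one line of arithmetic: at 10^15 degrees the typical thermal energy k_B*T lands right at the electroweak scale. A sketch (standard constants; the comparison masses are well-known values I added, not Genz's):

```python
# Thermal energy k_B*T at the Higgs transition temperature, in GeV,
# compared with the rest energies the Higgs field imparts to particles.
K_B = 1.380_649e-23   # Boltzmann constant, J/K
EV = 1.602e-19        # joules per electron-volt

def thermal_energy_gev(temp_k: float) -> float:
    """Typical thermal energy k_B * T, expressed in GeV."""
    return K_B * temp_k / EV / 1e9

e_thermal = thermal_energy_gev(1e15)
print(f"k_B T at 10^15 K: ~{e_thermal:.0f} GeV")
print("electron rest energy: 0.000511 GeV; W boson rest energy: ~80 GeV")
```

Thermal collisions at that temperature carry about as much energy as the heaviest masses the Higgs field hands out, which is why the distinction between massive and massless particles melts away above it.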
I wonder if light defines the Higgs field? That perhaps the field was created when the photon separated out as a phase transition? The Higgs field is frictionless, particles move with constant motion through empty space. Could the Higgs field be the result of the symmetry break that created the photon... that the universe is larger (in time) than its space/time, and that the time out-of-phase defines the false vacuum? And that time/phase allows it to be larger than the space it contains? Just a thought from an amateur for a Monday afternoon.
ZENDELICIOUS: "... there may be something that has less energy than nothing" (Genz 1999: 261).
The following table is from Genz 1999: 270 (modified by myself)
| Time (s) | Temp (K) | Gigavolts | Collision Distance (cm) | Size of Universe (cm) | Example | Comment |
| 0 | infinite? | infinite? | 0 | 0 | or smear? | Origin (TOE) |
| 10^-44 | 10^32 | 10^19 | 10^-33 | 0 | Planck Time | (GUT) Gravity |
| 10^-36 | 10^28 | 10^15 | 10^-29 | 0 | Inflation Starts | Strong Break |
| 10^-33 | 10^27 | 10^14 | 10^-28 | 10 | Inflation Ends | |
| 10^-10 | 10^15 | 100 | 10^-16 | 10^15 | (Solar System) | Electroweak Break |
TOE = Theory of Everything
GUT = Grand Unified Theory
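The time and temperature columns of the table hang together under the standard radiation-dominated scaling T proportional to 1/sqrt(t). A quick consistency check, using the rough normalization T ~ 10^10 K at t = 1 s (the normalization and function name are mine, not Genz's; the inflationary rows are skipped since inflation breaks the simple scaling):

```python
import math

def temp_at(t_seconds: float) -> float:
    """Radiation-era estimate: T (kelvin) ~ 1e10 / sqrt(t in seconds)."""
    return 1e10 / math.sqrt(t_seconds)

# time (s) -> temperature (K) from the table above
rows = {1e-44: 1e32, 1e-36: 1e28, 1e-10: 1e15}
for t, table_temp in rows.items():
    est = temp_at(t)
    print(f"t = {t:.0e} s: estimate {est:.0e} K, table {table_temp:.0e} K")
```

All three non-inflationary rows of the table come out on the nose, which is reassuring for a back-of-the-envelope model.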
Prior to the Planck time, all forces (gravity, weak, strong and electromagnetic) were a single "ur-force" that was probably dominated by gravity. The first critical instant came when gravity broke away in a phase transition, leaving the remaining three forces unified under the GUT. When the strong force broke away, the GUT was broken and inflation began.
ZENLET: "As long as we limit ourselves to average values, there is no way of telling complete chaos and perfect symmetry apart" (Genz 1999: 276). (i.e. - chaos is symmetric with respect to translation or rotation?).
Spontaneous symmetry breaking implies the existence of waves. The Goldstone theorem is that every symmetry that is not also the symmetry of the ground state implies a particle and fixes the properties of that particle (Genz 1999: 283). Spontaneous symmetry breaking and phase transitions are the processes that created the particles in our universe.
There appears to be no difference between gravitational mass (the force that attracts mass to mass) and inertial mass (resistance to acceleration). Even the best vacuum we can make contains 10,000 molecules per cubic meter, and the estimated density in interstellar space is about 1 hydrogen atom per cubic meter. The critical matter density for the Universe is estimated at 10^-7 hydrogen atoms per cm^3 (Genz 1999: 142).
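For comparison, the critical density can be estimated from the Friedmann relation rho_c = 3H^2/(8*pi*G). A sketch assuming a Hubble constant of about 70 km/s/Mpc (my assumption; the value was still hotly contested in the 1990s):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_HYDROGEN = 1.67e-27  # mass of a hydrogen atom, kg
MPC = 3.086e22         # metres per megaparsec
H0 = 70e3 / MPC        # assumed Hubble constant, ~70 km/s/Mpc, in 1/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
atoms_per_m3 = rho_crit / M_HYDROGEN
print(f"critical density: ~{atoms_per_m3:.1f} hydrogen atoms per m^3")
```

This comes out at a few hydrogen atoms per cubic meter; the exact number scales with the square of whatever H0 one assumes, which is part of why published figures for the critical density vary by orders of magnitude.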
"... in the general theory of relativity... Its space is in no way absolute; it is, we might say, adaptable. It adapts to the masses of the universe ... All masses act on space, and space reacts by fixing their trajectories... Einstein's general theory of relativity elevates the concept of inertia rather than that of Euclidean geometry to the central postulate of its edifice: Mass points, or probes, will move along lines of shortest distances in an appropriately chosen geometry" (Genz 1999: 171).
Genz concludes that light does not need a material medium for "propagation". It is easy to visualize light as a particle zooming through empty space but hard to view it as an oscillation that oscillates no substance. I have no problem with the latter when I view light as the medium... it oscillates itself. There is no space if there is no light. Light is the excitation of empty space because, in my opinion, light is space. Light is pure energy in that respect; it defines the medium, the vacuum. Light is the "field" of dreams. Light does not lose energy in traveling "through" space, because light "defines" space. This, in my opinion, removes the statistical problems of the wave nature of light as well. No "black box" experiment can remove all energy from a sub-system of the universe... or it will pop out of the universe (which is what a black hole tries to do, but cannot fully). Empty space free of all radiation is simply not possible except outside our universe, a place impossible to reach. A zero-matter, zero-energy space has no "measure" nor "observability". "Casimir, Sparnaay, and Lamoreaux really do show that there are electromagnetic oscillations in free space that is as empty as it could be - as cold as possible, and without boundaries" (Genz 1999: 186).
"... the true vacuum at zero temperature still has an infinite supply of radiation energy. As we proceed, we will see that electromagnetic radiation is in fact only one component, albeit infinite in quantity, of the unfathomable energy supply of the vacuum" (Genz 1999: 189).
Thus mass is like a phase transition of energy. And light, in my opinion, acts like a "massless particle" and thus is constantly there as a "real particle" of its own medium. This is why there is a fundamental connection between matter, energy and light (E = mc^2). The "frozen" energy of the rest mass of particles can only be removed by breaking the particles, but the other energy a particle has is the useful energy in our macro-universe that can be transferred from one form to another.
Smoot & Davidson (1993) summarize the thermal history of the universe:
"At a ten-millionth of a trillionth of a trillionth (10^-42) of a second after the big bang - the earliest moment about which we can sensibly talk, and then only with some suspension of disbelief - all the universe we can observe today was the tiniest fraction of the size of a proton. Space and time had only just begun. (Remember, the universe did not expand into existing space after the big bang; its expansion created space-time as it went.) The temperature at this point was a hundred million trillion trillion (10^23) degrees, and the three forces of nature - electromagnetism and the strong and weak nuclear forces - were fused into one.
By a ten-billionth of a trillionth of a trillionth of a second (10^-34 second) inflation had expanded the universe (at an accelerating rate) a million trillion trillion (10^30) times, and the temperature had fallen to below a billion billion billion (10^27) degrees. The strong nuclear force had separated, and matter underwent its first phase transition, existing now as quarks (the building blocks of protons and neutrons), electrons, and other fundamental particles.
Hark... hark... do I smell a snark among the quarks?
The next three phase transitions occurred at a ten-thousandth of a second when quarks began to bind together to form protons and neutrons (and antiprotons and antineutrons). Annihilation of particles of matter and antimatter began, eventually leaving a slight residue of matter. All the forces of nature were now separate.
The temperature had fallen sufficiently after about a minute to allow protons and neutrons to stick together when they collided, forming the nuclei of hydrogen and helium, the stuff of stars. This thick soup of matter and radiation, which initially was the density of water, continued expanding and cooling for another three hundred thousand years, but was too energetic for electrons to stick to the hydrogen and helium nuclei to form atoms. The energetic photons existed in a frenzy of interactions with the particles in the soup. The photons could only travel a very short distance between interactions. The universe was essentially opaque.
When the temperature fell to about 3,000 degrees, at three hundred thousand years, a crucial further phase transition occurred. The photons were no longer energetic enough to dislodge electrons from around hydrogen and helium nuclei, and so atoms of hydrogen and helium formed and stayed together. The photons, no longer interacting with the electrons, were free to escape and travel great distances. With this decoupling of matter and radiation, the universe became transparent, and radiation streamed in all directions - to course through time as the cosmic background radiation we experience still. The radiation released at that instant gives us a snapshot of the distribution of the matter within the universe at three hundred thousand years of age. Had all matter been distributed evenly, the fabric of space would have been smooth, and the interaction of photons with particles would have been homogeneous, resulting in a completely uniform cosmic background radiation. Our discovery of the wrinkles reveals that matter was not uniformly distributed, that it was already structured, thus forming the seeds out of which today's complex universe has grown" (Smoot & Davidson 1993: 283-285).
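The 3,000-degree figure for decoupling can be sanity-checked with arithmetic: compare the typical photon energy k_B*T against hydrogen's 13.6 eV ionization energy. A sketch (standard constants only):

```python
# Typical photon energy at the decoupling temperature, versus the
# energy needed to knock an electron off a hydrogen atom.
K_B = 1.380_649e-23  # Boltzmann constant, J/K
EV = 1.602e-19       # joules per electron-volt

typical_photon_ev = K_B * 3000 / EV
print(f"typical photon energy at 3,000 K: {typical_photon_ev:.2f} eV")
print("hydrogen ionization energy: 13.6 eV")
```

Note that the typical photon at 3,000 degrees carries far less than 13.6 eV. Recombination waits until so far below the naive threshold because there are roughly a billion photons per atom, so even the rare high-energy photons in the tail of the distribution keep atoms ionized until the temperature drops this low.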
What is probable in "god", the universe and everything:
Only some things are probable. The logic of probabilities is statistics. There is no such thing as fundamental chaos. Chaos is the primary tool of matter. Just as "In relativity, Matter tells Space how to curve, and Space tells Matter how to move" (John Archibald Wheeler), Chaos tells Matter how to function, and Matter tells Chaos how to form itself. The information level of the universe remains constant. Just as:
E = mc^2, where E = energy, m = matter, c = [speed of] light (a constant)
Here is my guess, just based on consistency:
Ch = IT^2, where Ch = chaos, I = information, T = time?
My GUT-level feeling is that energy and chaos are isomorphic, as are information and matter. Just as matter and energy can be converted into each other, chaos and information can be converted into each other. The total amount of matter, energy and light stays the same... and the total amount of chaos, information and time remains the same... and light and time are reciprocals of each other?
"Since it varies directly with the entropy, high entropy means high potential information... Are we to conclude that as the entropy increases the information always increases? No, it is not quite that simple. The information concept is far richer than that" (Gatlin 1972: 48-49). Gatlin feels that stored information varies at least inversely with entropy.
"The second law of thermodynamics is indeed an order-degrading principle in itself and without constraint; but when we place it under the control of the higher laws of information theory, it becomes directly responsible for the production of order of a very important type. This is why life has arisen " (Gatlin 1972: 190).
"When something falls into a black hole, one cannot expect it ever to come flying back out. The information coded in the properties of its constituent atoms is, according to Hawking, impossible to retrieve. The problem... is that if the information is truly lost, quantum mechanics breaks down. Despite its famed indeterminacy, quantum mechanics controls the behavior of particles in a very specific way: it is reversible. When one particle interacts with another, it may be absorbed or reflected or may even break up into other particles. But one can always reconstruct the initial configuration of the particles from the final products."
"If this rule is broken by black holes, energy may be created or destroyed, threatening one of the most essential underpinnings of physics. The conservation of energy is ensured by the mathematical structure of quantum mechanics, which also guarantees reversibility; losing one means losing the other. As Thomas Banks, Michael Peskin and I showed in 1980 at Stanford University, information loss in a black hole leads to enormous amounts of energy being generated. For such reasons... I believe the information that falls into a black hole must somehow become available to the outside world" (Susskind 2000: 118).
I like the idea that information contains huge amounts of energy!
"Hot objects also possess an intrinsic disorder called entropy, which is related to the amount of information a system can hold. Think of a crystal lattice with N sites; each site can house one atom or none. Thus every site holds one "bit" of information, corresponding to whether an atom is there or not; the total lattice has N such bits and can contain N units of information. Because there are two choices for each of the N sites, the total system can be in one of 2^N states (each of which corresponds to a different pattern of atoms). The entropy (or disorder) is defined as the logarithm of the number of possible states. It is roughly equal to N - the same number that quantifies the capacity of the system for holding information."
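The counting in this passage is easy to verify by brute force for small lattices: enumerate every occupancy pattern of N sites and take the base-2 logarithm of the count.

```python
from itertools import product
from math import log2

def count_states(n_sites: int) -> int:
    """Enumerate every occupancy pattern of a lattice with n_sites,
    each site either empty (0) or holding one atom (1)."""
    return sum(1 for _ in product([0, 1], repeat=n_sites))

for n in (1, 4, 10):
    states = count_states(n)
    print(f"N = {n:2d}: {states} states, entropy = log2(states) = {log2(states):.0f} bits")
```

As Susskind says, the entropy (the logarithm of the number of states) lands exactly on N, the number of bits the lattice can hold.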
"Bekenstein found that the entropy of a black hole is proportional to the area of its horizon. The precise formula, derived by Hawking, predicts an entropy of 3.2 x 10^64 per square centimeter of horizon area. Whatever physical system carries the bits of information at the horizon must be extremely small and densely distributed: their linear dimensions have to be 1/10^20 the size of a proton's."
"The discovery of entropy and other thermodynamic properties of black holes led Hawking to a very interesting conclusion. Like other hot bodies, a black hole must radiate energy and particles into the surrounding space. The radiation comes from the region of the horizon and does not violate the rule that nothing can escape from within. But it causes the black hole to lose energy and mass. In the course of time an isolated black hole radiates away all its mass and vanishes" (Susskind 2000: 121).
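Hawking's conclusion that an isolated black hole radiates itself away can be given a timescale. A sketch using the standard textbook evaporation estimate t = 5120*pi*G^2*M^3/(hbar*c^4) (the formula and constants are standard values, not taken from Susskind's article):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34    # reduced Planck constant, J*s
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
YEAR = 3.156e7       # seconds per year

def evaporation_time_years(mass_kg: float) -> float:
    """Hawking evaporation estimate: t = 5120*pi*G^2*M^3 / (hbar*c^4)."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

t_sun_years = evaporation_time_years(M_SUN)
print(f"solar-mass black hole evaporates in ~{t_sun_years:.1e} years")
```

The M^3 scaling means a solar-mass hole takes on the order of 10^67 years, vastly longer than the age of the universe, which is why evaporation matters as a point of principle (the information paradox) rather than as something astronomers expect to watch.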
"The evaporation is the nasty point, says Susskind. That evaporation, as well as the peculiar nature of what has come to be called the Hawking radiation that black holes emit, seems to contradict one of the most fundamental principles of physics. All present physics is based very heavily on the assumption that you can recover the past from the present - in principle, if not always in practice, says Susskind. But black holes, much to the consternation of physicists, seem to break this rule" (Folger 1993: 100).
"When something falls into a black hole, says Susskind, as far as we know the information that distinguished whatever fell into it is erased. The products of the outgoing radiation appear, as far as we can tell, to be featureless and completely independent of what fell in. Black holes, in other words, sever the bonds between past, present, and future" (Folger 1993: 100).
"But Hawking radiation doesn't come from within the black hole; it erupts from the vacuum of space around but outside the hole, safely beyond the zone from which nothing escapes. So unlike the energy and debris released when a supernova explodes, or the ashes and energy produced when a magazine burns, or unlike even the steam rising from a cup of coffee, Hawking radiation doesn't have a direct connection with its source. If cups of coffee behaved this way, the steam would materialize far from the cup and would have no unique aroma; the temperature of the steam would have nothing to do with the temperature of the coffee" (Folger 1993: 100).
Try to get your mind around that concept. Breaking fundamental laws of entropy.
What I feel is meta-physics.
Susskind believes string theory resolves the information paradox:
"A string is a minute object, 1/10^20 the size of a proton. But as it falls into a black hole its vibrations slow down, and more and more of them become visible. Mathematical studies done at Stanford by Amanda Peet, Larus Thorlacius, Arthur Mezhlumian and me have demonstrated the behavior of a string as its higher modes freeze out. The string spreads and grows, just as if it were bombarded by particles and radiation in a very hot environment. In a relatively short time the string and all the information that it carries are smeared out over the entire horizon."
"This picture applies to all the material that ever fell into the black hole - because according to string theory, everything is ultimately made of strings. Each elementary string spreads and overlaps all the others until a dense tangle covers the horizon. Each minute segment of string, measuring 10^-33 centimeters across, functions as a bit. Thus, strings provide a means for the black hole's surface to hold the immense amount of information that fell in during its birth and thereafter."
"It seems then, that the horizon is made up of all the substance in the black hole, resolved into a giant tangle of strings. The information, as far as the outside observer is concerned, never actually fell into the black hole; it stopped at the horizon and was later radiated back out" (Susskind 2000: 122).
But what about gravity and time? If light is timeless, then it is space... gravity may not be a force; it is merely bent space... bent light. If you are looking for the fabric of the Universe, it is light. The old concept of the "ether" is just "light". Mass bends light, so one gets the appearance of a force we call gravity. Stop light from bending... and you have an anti-gravity device. Think of space/time: light, traveling at the speed of light, does not exist in time... but in fact sets a time barrier which is also the limit of the Universe. Light = space/no time. Energy = no space/time. Mass = space/time.
A black hole is ultimate mass: it bends light so completely that light is converted into pure energy within no space (i.e., light is converted from space to time). Light is space, energy is time, and matter is a mixture of both. (Light-space)/information is one variable, (energy-time)/entropy is another, and (matter-space-time)/chaos is the third variable that defines the Universe. Light removes time and useful information from the Universe and builds space. Mass and black holes bend or remove light and space from the Universe and modify or build time. The speed of light as space sets the limits. Light can travel slower than its maximum as the vacuum. The total amount of information is a constant, but information levels can change just as light can slow down.
The arrow of time is set by the conversion of matter/energy to light, which expands the limits of space... at the expense of matter, energy and their related chaos and entropy. The amount of information remains the same, but the conversion of information from useful to non-useful is directly related to the expansion of space. Life converts light into information (!!!) by organizing matter into structure at the overall expense of the Universe. Light near matter (mass) is bent, and light hitting matter is partially converted to energy as heat and partially reflected.
The conversion of light to heat REDUCES the density of light around mass! This bends light, and therefore space, since light is space. Reflect all light, and space is not bent (there is no gravity)! "I am the light and the life".
"The energy in the gravitational fields that hold stars, planets, galaxies, and clusters of galaxies together is far greater than all the other forms of energy combined. This is due to the long range of the gravitational force. Although the gravitational force is relatively weak, every particle in the universe attracts every other.... In the cosmic scheme of things, then, gravity is far more important than heat, light, chemical energy, or radioactivity. There is much more gravitational energy in the universe than there is nuclear energy. Furthermore, this gravitational energy is negative. It is so large a negative quantity that all the positive contributions of other kinds of energy are unimportant... It turns out that the contribution of matter is a very large positive number, and that the contribution of gravity is a very large negative quantity. Do they exactly balance one another? No one really knows, but they very well could" (Morris 1990: 66-68).
"Gravity is an extraordinarily weak force. On the scale of atoms and everyday objects, it is ten followed by 37 noughts times weaker than the other forces of Nature. Hence it is very difficult to detect. Its effects are overwhelmed by the other forces: magnets stop pieces of metal falling to the ground; the sub-atomic forces of Nature prevent elementary particles just falling into a heap on the floor. Moreover, gravity acts upon everything: you cannot turn it off or shield it as you can other forces. For, whereas electricity and magnetism come in positive and negative varieties which cancel out, the gravitational "charge" is mass and that only comes in positive doses. And it is this that allows gravity to rule in the domain of the very large. For when astronomically large bodies of matter accumulate the net positive effects of the forces of nature tend to cancel out because they exist in positive and negative varieties. Mass, by contrast, just accumulates in the positive sense and eventually wins out despite its intrinsic weakness" (Barrow 1991: 84).
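Barrow's "ten followed by 37 noughts" can be checked with a two-line calculation. A sketch comparing the electric and gravitational attraction between two protons (standard constants; the exact power of ten depends on which particles one compares, which is why quoted figures range from about 10^36 to 10^42):

```python
# Ratio of electrostatic to gravitational force between two protons.
# Both forces fall off as 1/r^2, so the ratio is independent of distance.
K_E = 8.988e9         # Coulomb constant, N m^2 C^-2
E_CHARGE = 1.602e-19  # elementary charge, C
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_PROTON = 1.673e-27  # proton mass, kg

ratio = (K_E * E_CHARGE**2) / (G * M_PROTON**2)
print(f"electric/gravitational force ratio for two protons: ~{ratio:.1e}")
```

For two protons the ratio comes out around 10^36; for an electron and a proton it is larger still, in the neighborhood of Barrow's figure. Either way, gravity only wins at astronomical scales because mass never cancels.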
If light is a particle, a photon, that has energy but no mass, and if light is massless, timeless space, then light becomes the medium within which mass and energy operate. Mass does not define space, light does, but mass forms the texture of space. Therefore, gravity is not a "force"; it is merely the warping of space by the presence of mass. I wonder if gravity may not be a particle (graviton); it may be simply the warped medium of light energy in which mass sits. Its effects are infinite, because the warping of the medium has changed the shape of the universe. The amount of mass determines the amount of warping... so the balance between matter/mass (as a positive number) and "gravity" (as a negative number) is always equal. Light-warping gives the appearance of a force. Light is the medium, mass is the cause. Without mass, space has no warping. Without space, mass has no medium to exist within and warp. The stretching and warping of the massless, timeless space/light is a form of thinning or concentrating of space/light that creates the illusion of gravity. Gravity is, I suspect, a density issue of the fabric of space. Mass changes the density of space like a ball bearing dropped onto a thin rubber sheet changes the density of the sheet as it dimples down... giving the appearance of a gravity field for any object rolled across the rubber sheet. I suspect that, instead of a "graviton", physics needs to look at the relative density of the photon as "gravity"... and that they may have already found the "graviton"... it is the photon? But according to superstring theory, the graviton has spin 2 while the photon has spin 1... so there must be a graviton after all. But then... many late-1990's physicists think string theory is dead.
"But perhaps the most spectacular conclusion of the superstring theory is that it can actually make statements about what happened before the Big Bang, at the beginning of time. The superstring theory, in fact, views the Big Bang as a by-product of a much more violent explosion, the breakdown of a ten dimensional universe into a four dimensional one" (Kaku & Thompson 1987:129).
The speed at which gravity propagates is no higher than the speed of light. When mass changes, the rate of bending of light changes, and this change moves out at no more than the speed of light, because it is light. Since light is space, the effects of gravity are defined by the defining character of the universe, the ball of light, within which any observer is confined by its limiting character... one cannot exceed the speed of light (exit the universe). The only possible appearance of an exception is the black hole, where space is converted to time that approaches infinity as a limit. It does not reach infinity because of the second law of thermodynamics.
As to dimensions: space has three physical dimensions: 1) length; 2) width; and 3) depth. The other dimension 4) is time. If light is space, then it has the three dimensions.... but a photon has a time dimension of zero, so light is a unique form of space, its limiting factor. I suspect that energy has three dimensions as well: 5) chaos; 6) entropy; and 7) information. Energy operates in time as well, except for light as a form of energy. Matter (mass) is a space/time mixture. Time has three dimensions: 8) past; 9) present; and 10) future, which modify space. Space has another dimension 11) density which is related to the density of time/light (gravity). Energy has density as well (variable 12?). A black hole is the other limiting factor for space, where light is warped to the point it leaves the universe... and is perhaps converted to infinite time as a limit.
Gravity and time are entangled in my view. I wonder if we have found the 'graviton'... it is time?
Brains or Branes?
More meta-physics, in my opinion.
"A membrane-like object in higher dimensional space that can carry energy and confine particles and forces" (Randall 2005: 460).
"Dirichler Brane - D-brane, for short. D-branes are large, massive surfaces that float within space. They act like slippery sheets of flypaper: the ends of open strings move on them but cannot be pulled off. Subatomic particles such as electrons and protons may be nothing more than open strings and, if so, are stuck to a brane. Only a few hypothetical particles, such as the graviton (which transmits the force of gravity), must be closed strings and are thus able to move completely freely through the extra dimensions" (Burgess & Quevado 2007:56).
There is a P-brane model as well; again, there is humor in physics. It is hard for my pea-brain to unwrap itself to surround some of these theories.
Theoretical physicist Lisa Randall thinks two branes are better than one.
Lisa Randall proposes that "Gravity, unlike all other forces, is never confined to a brane. Brane-bound gauge bosons and fermions are the result of open strings, but in string theory, the graviton - the particle that communicates gravity - is a closed string. Closed strings have no ends, and therefore there are no ends to pin down on a brane" (Randall 2005: 327).
"The gravitron, unlike gauge bosons or fermions, must travel through the entire higher-dimensional spacetime" (Randall 2005: 327).
Randall proposes a two-brane solution: "space is so strongly warped in the presence of two flat boundary branes that the hierarchy problem of particle physics is automatically solved - without the need for a large dimension, or for any arbitrary large number at all" (Randall 2005: 387).
"We'll see that one of the amazing consequences of warped geometry is that size, mass, and even time depend on position along the fifth dimension. The warping of space and time in this two-brane setup is like the warping of time near the horizon of a black hole. But in this case, time dilates, geometry expands, and on one of the branes particles have a small mass - so the heirarchy problem gets automatically solved" (Randall 2005: 387).
"Because each slice of spacetime is completely flat in our warped geometry, the gravitron's probability function doesn't vary along the three standard dimensions - it changes only along the fifth dimension" (Randall 2005: 392).
"The distance between branes in this warped geometry need only be a little larger than the Planck scale length" (Randall 2005: 394).
Think of a flat plane with gravity (the gravitybrane) and a nearby weakbrane with the particles that make up the matter, energy and information that we see around us. The standard model resides on the weakbrane with four dimensions. Gravity is not restricted to the four-dimensional weakbrane, which explains its relative weakness... it is spread out over five dimensions.
Burgess, Quevedo, Majumdar, Rajesh, Zhang and Nolte have proposed the antibrane. "Antibranes are to branes what antimatter is to matter. They attract each other, much as electrons attract their antiparticles (positrons). If a brane came near an antibrane, the two would pull each other together. The energy inside the branes could provide the positive energy needed to start inflation, and their mutual attraction could provide the reason for it to end, with the brane and antibrane colliding to annihilate each other in a grand explosion" (Burgess & Quevedo 2007: 58).
They speculate that this process causes other branes to fragment, some into 3 dimensional branes.
"Those string theories we know how to study are known to be wrong. Those we cannot study are thought to exist in such vast numbers that no conceivable experiment could ever disagree with all of them" (Smolin 2006: xiv).
In a 1998 Scientific American, Andrei Linde presented a model based on fractals and scalar fields that produces possible models for self-reproducing multiple universes:
"If this model is correct, then physics alone cannot provide a complete explanation for all properties of our allotment of the universe. The same physical theory may yield large parts of the universe that have diverse properties. According to this scenario, we find ourselves inside a four-dimensional domain with our kind of physical laws, not because domains with different dimensionality and with alternative properties are impossible or improbable but simply because out kind of life cannot exist in other domains... Does this mean that understanding all the properties or our region of the universe will require, besides a knowledge of physics, a deep investigation of our own nature, perhaps even including the nature of consciousness? This conclusion would certainly be one of the most unexpected that one could draw from the recent developments in inflationary cosmology" (Linde 1998: 104).
Anything is possible. Fewer things are probable. The art of possibilities is fuzzy logic. Information operates by fuzzy logic. The information content of the Universe is a constant. Matter is converted to energy, and matter is converted into chaos when it is concentrated at critical densities. A star is a matter-to-energy converter. A black hole simply converts matter to chaos. The creation of energy allows information to be reintegrated as life on blobs of matter below the threshold density (planets). The possibility of life is thus probable in a statistical sense. Probabilities, out of possibilities, arise.
I have this GUT feeling that if gravity relates to mass changing the curve of the Universe, and the universe is expanding, hence changing its curvature, then gravity is relative as well... it slowly changes as the universe changes in size. What this means when figuring out how much mass is required for a universe to slow down and collapse is beyond my non-mathematical mind. I had this idea while looking at two beach balls creating pseudo-gravity wells in my grandchildren's plastic pool as I filled it up... and it expanded a bit with the pressure of the water. Is gravity a variable that has changed in strength over time? I have another GUT feeling that its effect will be inverse... that it will increase as the universe expands and was weaker when it began... perhaps zero at the moment of the big bang... how does that relate to ideas like the inflation concept? I learned after I had written these words that Paul Steinhardt has authored a theory about extended inflation by incorporating a gravitational constant that changes in strength during the early phases of the growth of the universe.
Jacob Bekenstein proposed that "the area of a black hole was not analogous to entropy, it was entropy.... based on a twentieth-century spin on the idea of entropy: that it was negative information, that is to say, that disorder destroys meaning... Information could be quantified into "bits" like the contents of a computer memory. In 1972 he calculated the number of bits it took to characterize the details that had been erased about matter lost inside a black hole and showed that they were proportional to the area of the event horizon. In effect, what Bekenstein proposed was that what a black hole really eats and is swelled by is information" (Overbye 1991: 106-107).
Black Holes or Black Stars
Hawking (ND) has looked into the gravitational entropy of black holes and found that black holes radiate exactly the energy required to prevent a violation of the second law of thermodynamics and that entropy is a global property. "Hawking had come full circle from the days when he resisted Bekenstein's notion that black holes had entropy. Now he seemed to see black holes as almost pure entropy, wreaking disorder and randomness on the universe, roaming like hungry sharks, eating information, and spreading unpredictability in their wakes. Because it came from the singularity, he said, the radiation from a black hole had an unpredictability that went beyond the already famous unpredictability of the uncertainty principle. In the latter case one could know either the velocity or the position of a particle. In the case of black hole radiation, he contended, you couldn't predict either one. This extra degree of randomness he called the "principle of ignorance". He concluded by alluding to a famous statement that Einstein had once made in arguing against quantum theory... "God doesn't play dice".... "God not only plays dice," Hawking announced, "he sometimes throws them where they can't be seen."" (Overbye 1991: 116-117).
"Perhaps the greatest achievement, so far, in the search for quantum gravity was the realization, in the mid-1970's, that black holes are thermodynamic systems. This means that, as discovered by Jacob Bekenstein and Steven Hawking, each black hole has a temperature and an entropy. The entropy of a system is a measure of the maximum amount of information it may contain. What is remarkable about a black hole is that its entropy is proportional to the area of its horizon. ... Since the entropy of a black hole is proportional to its area, the maximum amount of information any system can contain is proportional to the area of its boundary. ... So the entropy of any system contained within a finite region is bounded. But then the information it can contain is also bounded, as entropy is a measure of information" (Smolin 1997: 274).
"God may know how the universe began, but we cannot give any particular reason for thinking it began one way rather than another. On the other hand, the quantum theory of gravity has opened up a new possibility, in which there would be no boundary to space-time and so there would be no need to specify the behavior at the boundary. There would be no singularities at which the laws of science broke down and no edge of space-time at which one would have to appeal to God or some new law to set the boundary conditions for space-time. One could say: 'the boundary condition of the universe is that it has no boundary.' The universe would be completely self-contained and not affected by anything outside itself. It would neither be created nor destroyed. It would just BE"(Hawking 1988:136).
"... if we think of empty spacetime as some immaterial substance, consisting of a very large number of minute, structureless pieces, and if we let those microscopic building blocks interact with one another according to simple rules dictated by gravity and quantum theory, they will spontaneously arrange themselves into a whole that in many ways looks like the observed universe. It is similar to the way that molecules assemble themselves into crystalline or amorphous solids" (Ambjorn, Jurkiewicz & Loll: 2008: 42).
"Moreover, unlike other approaches to quantum gravity our recipie is very robust. When we vary the details in our simulations, the result hardy changes. This robustness gives up reason to believe we are on the right track" (Ambjorn, Jurkiewicz & Loll: 2008: 43).
"The insensitivity to a variety of small-scale details also goes under the name of 'universality'. It is a well-known phenomenon in statistical mechanics, the study of molecular motion in gases and fluids; these substances behave much the same whatever their detailed composition is. Universality is associated with properties of systems of many interacting parts and shows up on a scale much larger than that of the individual constituents" (Ambjorn, Jurkiewicz & Loll: 2008: 44).
"Euclidean quantum gravity took a big technological leap in the 1980's and 1990's with the development of powerful computer simulations" (Ambjorn, Jurkiewicz & Loll: 2008: 44).
"Unfortunately, these simulations revealed that Euclidean quantum gravity is clearly missing an important ingredient somewhere along the line. They found that nonperturbative superpositions of four-dimensional universes are inherently unstable" (Ambjorn, Jurkiewicz & Loll: 2008: 45).
"In our search for loopholes and loose ends in the Euclidean approach, we finally hit on the crucial idea, the one ingredient absolutely necessary to make the stir fry come out right: the universe must encode what physicists call causality. Causality means that empty spacetime has a structure that allows us to distinguish unambiguously between cause and effect. It has an integral part of the classical theories of special and general relativity" (Ambjorn, Jurkiewicz & Loll: 2008: 46).
"Euclidean quantum gravity does not build in a notion of causality. The term 'Euclidean' indicates that space and time are treated equally.... Because Euclidean universes have no distinct notion of time, they have no structure to put events into a specific order; people living in these universes would not have the words 'cause' or 'effect' in their vocabulary" (Ambjorn, Jurkiewicz & Loll: 2008: 46).
"The technical term for our method is causal dynamical triangulations. In it, we first assign each simplex an arrow of time pointing from the past to the future. Then we enforce causal gluing rules: two simplices must be glued together to keep their arrows pointing in the same direction. The simplices must share a notion of time, which unfolds steadily in the direction of these arrows and never stands still or runs backward. Space keeps its overall form as time advances; it cannot break up into disconnected pieces or create wormholes" (Ambjorn, Jurkiewicz & Loll: 2008: 46).
"Thus we held our breath in 2004 when our computer was about to give us the first calculations of a large causal superposition of four-simplices. Did this spacetime really behave on large distances like a four-dimensional, extended object and not like a crumpled ball or polymer" (Ambjorn, Jurkiewicz & Loll: 2008: 46)?
"Imagine our elation when the number of dimensions came out as four (more precisely, as 4.02 +/- 0.1). It was the first time anyone had ever derived the observed number of dimensions from first principles" (Ambjorn, Jurkiewicz & Loll: 2008: 42).
"Our next step was to study the shape of spacetime over large distances and to verify that it agrees with reality - that is, with the predictions of general relativity" (Ambjorn, Jurkiewicz & Loll: 2008: 47).
"It turned out for our model to work we needed to include from the outset a so-called cosmological constant, an invisible and immaterial substance that space contains even in the complete absence of other forms of matter and energy. This requirement is good news, because cosmologists have found observational evidence for such energy. What is more, the emergent spacetime has what physicists call a de Sitter geometry, which is exactly the solution to Einstein's equations for a universe that contains nothing but the cosmological constant. It is truly remarkable that by assembling microscopic building blocks in an essentially random manner - without regard to any symmetry or preferred geometric structure - we end up with a spacetime that on large scales has the highly symmetric shape of the de Sitter universe" (Ambjorn, Jurkiewicz & Loll: 2008: 47).
When they did the models at small scales, the dimensions dropped from 4 to 2, but the spacetime was still continuous, without wormholes. Does this explain the so-called "inflation"? A phase transition from 2 to 4 dimensions suddenly changing/expanding the geometry of spacetime? Does a smaller and earlier 2-dimensional model explain the relative homogeneity of matter in 4-dimensional spacetime as well? Just a passing thought from an interested amateur.
Carlos Barcelo, Stefano Liberati, Sebastiano Sonego and Matt Visser propose that quantum gravity effects may prevent black holes from forming. Instead, collapse leads to black stars.
They propose that the renormalized stress energy tensor (RSET) describing the curvature-producing quantities of an object turning into a black object "can acquire arbitrarily large and negative values in the region near the Schwarzschild radius - where the classical event horizon would have formed. A negative RSET produces a repulsion, which further slows the collapse. The collapse might come to a complete halt just short of forming a horizon, or it might continue forever at an ever slowing pace, becoming ever closer to forming a horizon but never actually producing one" (Barcelo, Liberati, Sonego and Visser 2009: 44).
"Thus, experience tells us that matter following the laws of quantum mechanics always seems to find ways of delaying gravitational collapse "(Barcelo, Liberati, Sonego and Vissser 2009:45).
"The resulting bodies would be the new kind of object we have named black stars. Because of their extremely small size and high density, they would share many observable properties with black holes, but conceptually they would be radically different. they would be material bodies, with a material surface and be interior filled with dense matter. They would be extremely dim because light emitted from their surface would be very redshifted - the light wave greatly stretched - in traveling from the intensely curved space near the black star to distant astronomers"(Barcelo, Liberati, Sonego and Vissser 2009:45).
"By having no horizon, the black star cannot lock away any information. Instead the emitted particles and whatever matter remains with the black star carry all the information. Standard quantum physics would describe the formation and evaporation process"(Barcelo, Liberati, Sonego and Vissser 2009:44).
The Universe does not care about what happens to you.
You must care for what happens to you in the Universe.
You should care very much about what happens to the Universe.
When a child is learning to walk, and falls down, and hurts itself on the hard, hard ground, the ground does not care one bit. The child must learn not to fall down if it does not want to get hurt.
When a rock falls off a cliff and hits someone in the head and kills them... that is not god's fault. It is fault-less. The rock just fell. Rocks fall. The person who was killed was just in the wrong place at the wrong time. If that person had been there a second earlier or later... the rock would have missed. There is no blame in natural occurrence or in simple statistical probability. When a plane crashes, and some die and some live, there should be no guilt in survival... it just happened. There is no justice or inequality implied... just that there is an outcome of some kind. If you survive, go on with life... if you didn't... that's just a fact you can no longer live with (because you're dead).
When a friend gets cancer and dies, the cancer does not care who the friend is. You should care about that friend. That is your job.
Your choice will change your Universe. Everything you do affects everything you will do. Everything you do impacts all other beings in your Universe. Everything that cooperating and non-cooperating other beings do affects others who share your Universe. It is a catch-22, because if you exit through death, that changes the Universe as well. Everything you do, or do not do, changes everything that will be.
There may possibly be no such thing as a trivial action... but there probably are trivial ones.
In CHAOS theory, an action, or any non-action, has some effect on any system of which it is a part. The Universe is the ultimate system... you cannot get out of it.
"Tiny differences in input could quickly become overwhelming differences in output - a phenomenon given the name "sensitive dependence on initial conditions". In weather, for example, this translates into what is only half-jokingly known as the Butterfly Effect - the notion that a butterfly stirring the air today in Peking can transform storm systems next month in New York" (Gleick 1987:8).
Chaos theory also teaches us that when any system is under pressure, its equilibrium level rises, and at some point splits into two states through which it fluctuates. As pressure increases, bifurcation comes faster and faster until the system turns chaotic... then, oddly, in the midst of choice verging on infinity... patterns emerge out of chaos. Such structure becomes infinitely deep and can have exquisitely fine structure. The implication is that order is driven by chaos. Perhaps genetic order is simply genetic chaos: not "negentropic" but very "entropic"? The study of chaos has created a geometry of nature, as seen in Mandelbrot sets and fractals.
Chaos is full of strange attractors (patterns and structure) that are recursive and self-referential. THINK ABOUT THIS: CHAOS CREATES INFORMATION
So to thine own self be true. Strangely enough, when systems bifurcate into chaos, the rate of change becomes constant - a fact discovered by a man named Feigenbaum.
"Feigenbaum's number let him predict when the period-doublings would occur. Now he discovered that he could also predict the precise values of each point on this ever-more-complicated attractor - two points, four points, eight points ... He could predict the actual populations reached in the year-to-year oscillations. There was yet another geometric convergence. These numbers, too, obeyed a law of scaling" (Gleick 1987:175).
He discovered that the model was recursive and self-referential, and this guided the behavior hidden inside.
"To Robert Shaw, strange attractors were engines of information. In his first and grandest conception, chaos offered a natural way of returning to the physical sciences, in invigorated form, the ideas that information theory had drawn from thermodynamics. Strange attractors, conflating order and disorder, gave challenging twist to the question of measuring a system's entropy. Strange attractors served as efficient mixers. They created unpredictability. They raised entropy. And as Shaw saw it, they created information where none existed" (Gleick 1987: 258).
Nature and living systems are chaotic patterns operating under the laws of thermodynamics. "Sensitive dependence on initial conditions serves not to destroy but to create" (Gleick 1987: 311). "Evolution is chaos with feedback," says Joseph Ford.
"Some are orderly in space but disorderly in time, others orderly in time but disorderly in space. Some patterns are fractal, exhibiting structures self-similar in scale. Others give rise to steady states or oscillating ones.... Thoughtful physicists concerned with the workings of thermodynamics realized how disturbing is the question of, as one put it, "how a purposeless flow of energy can wash life and consciousness into the world"" (Gleick 1987: 308).
So don't go around blaming "god" for the way things are. We are the way we are because of chaos. Chaos creates incredible beauty, and allows infinite variety. Praise "god" for such a methodology for creation. Be thankful for random chance... god's loaded dice, for without them, life and choice could not exist. So what if some beings are born different... without that difference, life could not exist. Sometimes life contains errors (as we see it) and appears to be unfair to the life form with such a "burden" (remember - this is a perception about functionality). A child born with a "defect" (variation far from the norm and function)... is not to be pitied, but simply accepted. Don't complain to god; that is not god's fault... there is no fault. It is simply how the universe works. Sometimes we can "fix" it, sometimes not (under current circumstances). Don't judge, or complain. Just accept, and look for creative ways to celebrate reality to the best of your ability.
I cannot see like an eagle, nor smell like my dog (although some people may say I do smell like my dog). I do not beat my chest and wail about how "god" is unjust in not making my eyes see infrared and ultraviolet or giving me a nose that can detect the tracks of my friends who walked across my lawn yesterday. I would not be the same creature if I had those abilities. Nor should I cry and moan because my child might be born with webbed fingers or with Down syndrome. Such an event is simply a fact that must be dealt with, just as having a "within normal limits" child is. Don't blame... accept. Don't cry; find the joy and celebrate it.
"There is a clear order to the evolution of the universe, moving from simplicity and symmetry to greater complexity and structure. As time passes, simple components coalesce into more sophisticated building blocks spawning a richer, more diverse environment. Accidents and chance, in fact, are essential in developing the overall richness of the universe. In that sense (although not in the sense of quantum physics), Einstein had the right idea: God does not play dice with the universe. Though individual events happen as a matter of chance, there is an overall inevitability to the development of sophisticated complex systems" (Smoot & Davidson 1993: 296).
"A gram of dried-out DNA - about the size of two pencil erasers - stores as much information as maybe a trillion CD-ROM disks" (Siegfried 2000: 97).
Gatlin (1972: 1) defined life as "an information processing system - a structural hierarchy of functioning units - that has acquired through evolution the ability to store and process the information necessary for its accurate reproduction"... and "DNA sequences are very long. The minimum DNA content per haploid cell (the genome size) ranges from about 10^4 base pairs for bacteria to over 10^9 base pairs for mammals..... Since there are four sets of DNA bases, over 4^(10^9) base sequences are possible for present-day organisms. This number is greater than the estimated number of particles in the universe" (Gatlin 1972: 4).
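Gatlin's comparison is quick to check in logarithms: 4^(10^9) is a number with roughly six hundred million digits, while the particle count of the observable universe is usually put near 10^80, a number of just 81 digits. A minimal sketch:

```python
import math

# Number of decimal digits in 4**(10**9): log10(4^N) = N * log10(4).
genome_bases = 10**9
digits = genome_bases * math.log10(4)   # roughly 6e8 digits
particle_digits = 81                    # ~10^80 particles in the universe

print(f"4^(10^9) has about {digits:.2e} decimal digits")
print(f"versus a mere {particle_digits}-digit particle count")
```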
There does not appear to be any such thing as fundamental chaos. For all things in the Universe to be random in relation to all other things, each thing must be aware of the location (position) and velocity of all other things: (i.e. - be teleological). This appears to relate to the uncertainty principle.
Does the uncertainty principle require that there is no fundamental chaos in the Universe? Does this in turn imply that there is always some potential for structure, some level of potential difference, some level of potential energy and therefore change?
If light is space and mass distorts space (light).... then matter is "aware" of light and light is "aware" of matter. The structure of the universe is based on this awareness (distorted structure or density). It structures the universe and is infinite in scope: mass distorts the structure... therefore the entire universe is changed. Mass and light have a fundamental teleological relationship to each other ... they break the uncertainty principle! Since bent space (or differential density of space) is the fundamental structure of the universe and since bent space is "gravity"... the structure of the universe can be described as being based on gravity. If one thinks of light as having a density of 1 (one) and mass as having a density of 0 (zero) then one can view gravity as a negative process (i.e. - "other" mass drops down your gravity well).
Does this mean there is a fundamental design? Does this teleology imply purpose?
In quantum physics there is a series of odd experiments. If one sends light from a source through tiny slits, the light shows interference patterns of light and dark stripes. Light appears to go through both slits and interferes with itself like it is a wave. It is now possible to shoot light out photon by photon. When each photon (particle) is shot at a double slit, it arrives on a photo plate as a single photon (particle).... but as one after another photons are shot, the points of light on the plate take on the appearance of the stripes of light and dark stripes. Each photon appears to leave as a particle, go through both slits like a wave, and arrive as a particle that knows how to arrange itself into the statistical interference pattern. When one hole is closed, the pattern stops and the light piles up like anything tossed into a pile.
"This behavior encompasses two mysteries. First, how does the single photon go through both holes at once? Secondly, even if it does perform this trick, how does it "know" where to place itself in the overall pattern? Why doesn't every photon follow the same trajectory and end up in the same spot on the other side" (Gribbin 1995: 5).
The same experiment has been done with electrons and helium atoms. Under what became known as the Copenhagen Interpretation, it was proposed that what passes through the wall is a wave of probability that collapses into a particle when it reaches the plate and is observed as to its location.
But if one slit is closed, how does the photon know this... that the other hole is closed? A simple particle could not know if other paths were present. When the experiment is changed to open or close the hole after the photon is fired, it will "choose" the appropriate path! If one looks at the slits to see if the photon is going through both holes or just one or the other, only particles are seen and the pattern disappears into a pile of light under each slit! "The act of observing the electron wave makes it collapse and behave like a particle at the crucial moment when it is going through the hole" (Gribbin 1995: 13) and if it went through the other hole (not under observation) it only goes through and arrives as a particle as well!
"Take the Copenhagen Interpretation literally, and it tells you that an electron wave collapses to a point on a detector screen because the entire Universe is looking at it. This is strange enough; but some cosmologists (among them Stephen Hawking) worry that it implies that there must actually be something "outside the Universe" to look at the Universe as a whole and collapse its overall wave function. Alternatively, John Wheeler has argued that it is only the presence of conscious observers, in the form of ourselves, that has collapsed the wave function and made the Universe exist. On this picture, everything in the Universe only exists because we are looking at it" (Gribbin 1995: 15-16).
Sort of OMNI-ZEN of Zens?
Remember Einstein's theory that all motion is relative: anybody can say they are at rest and measure all motion relative to themselves. Enter inertia and the Lorentz-Fitzgerald contraction: "while motion makes lengths shrink, it makes time intervals expand. The two effects are matched with one another, so that the amount by which a moving object shrinks is exactly balanced by the amount time expands for it" (Gribbin 1995: 77-78). Therefore:
"The Lorentz transformations tell us that time stands still for an object moving at the speed of light. From the point of view of the photon, of course, it is everything else that is rushing past at the speed of light. And under such extreme conditions, the Lorentz-Fitzgerald contraction reduces distance between all objects to zero. You can either say that time does not exist for an electrodynamic wave, so that it is everywhere along its path (everywhere in the Universe) at once; or you can say that distance does not exist for an electrodynamic wave, so that it "touches" everything in the Universe at once" (Gribbin 1995: 79).
Son of Zen of Zens?
Feynman (i.e. quantum electrodynamics, or QED) came up with the concept that photons and electrons take all possible routes in the experiment, not just straight lines but the most complex wiggly paths one can imagine. Take the same experiment and add two more slits, then more and more, until there are more slits than obstruction, and then no obstruction at all: this adds up to integrating the probabilities for every conceivable path. Those which approach a straight line as a limit are more probable than those that do not, and they make up the "classical" path of physics (the straight line).
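The sum-over-paths idea can be sketched numerically: assign each allowed path a complex phase exp(i·k·L), add the phases, and square the magnitude of the sum. A toy Python sketch, not Feynman's full integral (the geometry, wavelength, and straight-line-only paths are my simplifying assumptions):

```python
import cmath
import math

# Toy Feynman-style sum over paths: the amplitude at a screen point is
# the sum of unit phases exp(i*k*L) over the allowed paths, here taken
# as straight lines through each open slit.
WAVELENGTH = 1.0
K = 2 * math.pi / WAVELENGTH        # wavenumber
SLITS = [(-2.0, 0.0), (2.0, 0.0)]   # slit positions on the barrier
SCREEN_DISTANCE = 100.0             # barrier-to-screen distance

def intensity(x, open_slits):
    """|sum of path amplitudes|^2 at screen position x."""
    amp = sum(
        cmath.exp(1j * K * math.hypot(x - sx, SCREEN_DISTANCE))
        for sx, _ in open_slits
    )
    return abs(amp) ** 2

both_open = [intensity(0.5 * i, SLITS) for i in range(-20, 21)]
one_open = [intensity(0.5 * i, SLITS[:1]) for i in range(-20, 21)]
print(max(both_open) - min(both_open) > 1.0)   # True: intensity oscillates (fringes)
print(max(one_open) - min(one_open) < 1e-9)    # True: flat, no fringes
```

With both slits open the two path phases interfere and the intensity oscillates across the screen; close one slit and every point receives a single unit amplitude, so the pattern goes flat.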
QED does not distinguish between forward and backward time:
"Waves moving outwards from an electron, or a radio antenna, are called "retarded" waves, because they arrive somewhere else after they have been emitted. Waves traveling backwards in time are called "advanced" waves because they arrive somewhere before they have been emitted somewhere else" (Gribbin 1995: 104).
A digression into polarization. Light is stranger still. If a polarizer is held up in front of light, only those photons that are tilted in the correct direction get through, like a wiggled rope up and down through the slits in a fence. Wiggle the rope sideways, and the wave is stopped by the slits. Place two polarizers at right angles and light is stopped. Put the second at 45 degrees and, wow, half the light gets through! Add a third at right angles to the first and 25 percent of the light gets through.... but remove the center one and no light gets through!
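The fractions in the polarizer demonstration follow Malus's law: an ideal polarizer passes a fraction cos²θ of polarized light, where θ is the angle between the light's polarization and the polarizer's axis. A minimal sketch of the arithmetic (the `transmitted_fraction` helper is mine, and it tracks only light already aligned with the first polarizer):

```python
import math

def transmitted_fraction(angles_deg):
    """Fraction of light aligned with the first polarizer that survives a
    chain of ideal polarizers. By Malus's law, each stage passes cos^2 of
    the angle between successive polarizer axes."""
    fraction = 1.0
    for prev, curr in zip(angles_deg, angles_deg[1:]):
        fraction *= math.cos(math.radians(curr - prev)) ** 2
    return fraction

print(transmitted_fraction([0, 90]))      # crossed polarizers: ~0, no light
print(transmitted_fraction([0, 45]))      # ~0.5: half the light gets through
print(transmitted_fraction([0, 45, 90]))  # ~0.25: the middle filter restores transmission
```

Removing the middle 45-degree polarizer returns the chain to the crossed case, where nothing gets through, which is the counterintuitive result described above.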
Calcite crystals split light into two equally strong beams, one of which is polarized at 45 degrees from the other, but:
"When the beam is so weak that only single photons are passing through the experiment, the light behaves as if each individual photon has split in two and followed both paths through the experiment, recombining with itself to restore the original polarization" (Gribbin 1995: 114).
In 1992, Japanese researchers (Yutaka Mizobuchi and Yoshiyuke Ohtake) observed photons exhibiting both wave-like properties and particle-like properties at the same time. This implies a photon may be in two places at the same time! But is this explained by quantum uncertainty? A photon has a small probability of simply popping up in a vacuum, the so-called "quantum fluctuation". But the Japanese experiment seemed to indicate that while the original photon was detected in only one place, its effects were happening somewhere else at the same time. This has led a few cosmologists to assume the Universe is a quantum fluctuation.
In another experiment set up by NIST, beryllium ions were heated to test a quantum theory for transition states to see what would happen if they watched the experiment... which stopped the boiling of the ions! As long as they peeked at the experiment faster than the 256 milliseconds needed for 100% transition, the watched quantum pot refused to boil (Gribbin 1995: 135)!
David Bohm has created a model in which particles always have a distinct position and velocity, but any attempt to measure them destroys the information about them and alters the "pilot" wave associated with the particles. The shape of the pilot wave determines how it influences particles, no matter how strong or weak the change. This wave also responds instantaneously, everywhere, to any local disturbance (it is non-local). He developed this so that everything is connected to everything else and affected by everything that happens to anything else instantaneously (Gribbin 1995: 158-159). Another view is that of multiple Universes, each splitting for every choice at the quantum level, billions upon billions of universes in every dust mote... as many as it takes to carry out every option (supported by David Deutsch).
Ernst Mach was concerned with inertia. "How does an object that is given a push instantly take stock of how that push is going to affect its motion relative to all the matter in the Universe, and respond accordingly?" (Gribbin 1995: 179).
"The nature of time is fundamental to all of the scientific understanding of the world. In quantum physics, the "unmeasured" state of the Universe is a superposition of all possible states, and the physics has to take account (in principle) of all those states" (Gribbin 1995: 180).
Thermodynamics comes into play when lots of particles are in operation, and the arrow of time comes into being. Ilya Prigogine proposes that thermodynamics is real and the models of particle physics are not real, only approximations of reality. Any theory that violates thermodynamics is wrong, even Newton's laws or Schroedinger's equations (Gribbin 1995: 181).
Gribbin proposes that the instantaneous feedback in quantum physics is the core problem, as if: "each charged particle - including each electron - is instantaneously aware of its position in relation to all other charged particles in the Universe.... why do ordinary lumps of matter resist being pushed around, and how do they know how much resistance to offer when they are pushed? Where does inertia itself come from? ... Gravitational mass determines the strength of the force which an object extends out into the Universe to tug on other objects; inertial mass, as it is called, determines the strength of the response of an object to being pushed and pulled by outside forces - not just gravity, but any outside forces... Newton described what happens if you take a bucket of water hung from a long cord, twist the cord up tightly, and then let go... Newton pointed out that the concave shape of the surface of the rotating water shows that it "knows" that it is rotating. But what is it rotating relative to? The relative motion of the bucket and water seems completely unimportant. If the bucket and the water are both still, with no relative motion, the water is flat; if the bucket is rotating and the water is not, the surface is still flat even though there is relative motion between the water and the bucket; if the water is rotating and the bucket is not, there is relative motion between the two and the surface is concave; but if the water and the bucket are both rotating, so that once again there is no relative motion between the water and the bucket, the surface is concave. So, Newton reasoned, the water "knows" whether or not it is rotating relative to absolute space" (Gribbin 1995: 224-227).
"If absolute space really exists, why doesn't it provide a way of identifying where we are located in an absolute sense, one that need not use our position relative to other material objects as a reference point? And, if absolute space really exists, how come it can affect us (causing our arms to splay if we spin, for example) while we apparently have no way to affect it" (Greene 2004: 33)?
"Mach argued that in an otherwise empty universe there is no distinction between spinning and not spinning - there is no conception of motion or acceleration if there are no benchmarks for comparison - and so spinning and not spinning are the same" (Greene 2004:34-35).
Another detour into string theory. Chu proposed that particles interact in time-symmetrical ways with retarded and advanced waves that we see as gravity by averaging over all interactions. Since strings are so small that 10^20 of them would be needed to stretch across a proton, their average effect is smooth and continuous... so Newtonian physics emerges as a statistical averaging of the behavior of billions of particles. Keep in mind that many people think string theory is now dead (1999 at the edge of 2000).
"The link with thermodynamics is explicit. The key concept in thermodynamics is entropy, a property which measures how close a system is to equilibrium. Chu's description shows that Einstein's equation of motion is the correct description of particle trajectories under the equilibrium condition of maximum entropy. But as in the original Wheeler-Feynman theory (and attempts to incorporate Mach's Principle into the general theory of relativity) there must be complete absorption of all radiation from strings today into the future - in other words, the Universe must be closed" (Gribbin 1995: 232-233).
John Cramer suggests that a typical quantum transaction is a handshake somewhere in space and time. A particle sends out both a retarded wave (positive energy) and an advanced wave (negative energy). The retarded wave heads off into the future until it encounters an electron which absorbs the energy... this produces a new retarded wave that cancels out the original, so no retarded wave survives. The advanced wave heads off into the past along the same path until it hits the emitting particle, which sends out an advanced wave that cancels it out. No time has passed, but there was a double wave linking the two, half retarded and half advanced, and because two negatives make a positive, there is a retarded wave left... the direction of time. This is atemporal because it happens all at once, time canceling out time, but leaving time's arrow. This means there is no need for an observer in the original quantum experiments:
"In Cramer's view of events, a retarded "offer wave" (monitored in "pseudo-time" for the purpose of this discussion) sets off through both holes in the experiment. If the screen is set up, the wave is absorbed in the detector, triggering an advanced "confirmation wave" which travels back through both slits of the apparatus to the source. The final transaction forms along both possible paths (actually, as Feynman would have stressed, along every possible path), and there is interference.
If the screen is down, the offer wave passes on to the two telescopes trained on the slits. Because each telescope is trained on just one slit, it is only possible for any confirmation wave produced when the offer wave interacts with the telescope itself to go back to the source through the slit on which that telescope is trained. And, of course, the absorption event must involve a whole photon, not a part of a photon. Although each telescope may send back a confirmation wave through its respective slit, the source has to "choose" (at random) which one to accept, and the result is a final transaction which involves the passage of a single photon through a single slit. The evolving state vector of the photon "knows" whether the screen is going to be up or down because the confirmation wave really does travel back in time through the apparatus, but the whole transaction is, as before, atemporal" (Gribbin 1995: 214).
As Gribbin put it for the polarization experiment:
"If the confirmatory waves do not match an allowed polarization correlation, then they cannot be "verifying" the same transaction, and they will not be able to establish the handshake. From the perspective of pseudo-time, the pair of photons cannot be emitted until an arrangement has been made to absorb them, and that absorption arrangement itself determines the polarization of the emitted photons, even though they are emitted "before" the absorption takes place. It is literally impossible for the atoms to emit photons in a state that does not match the kind of absorption allowed by the detectors. Indeed, in the absorber model the atom cannot emit photons at all unless an agreement has already been reached to absorb them" (Gribbin 1995: 242).
Greene (1999) indicates that string theory resolves many issues... "the different vibrational patterns of a fundamental string give rise to different masses and force charges" (page 143).
"Greater energy means greater mass, and vice versa. Thus, according to string theory, the mass of an elementary particle is determined by the energy of the vibrational pattern of its internal string.... Since the mass of a particle determines its gravitational properties, we see that there is a direct association between the pattern of string vibration and a particle's response to the gravitational force.... The electric charge, the weak charge, and the strong charge carried by a particular string, for instance, are determined by the precise way it vibrates. Moreover, exactly the same idea holds for the messenger particles themselves. Particles like photons, weak gauge bosons, and gluons are yet other resonant patterns of string vibration. And of particular importance, among the vibrational string patterns, one matches perfectly the properties of the graviton, ensuring that gravity is an integral part of string theory" (Greene 1999: 145).
"So we see that, according to string theory, the observed properties of each elementary particle arise because its internal string undergoes a particular resonant vibrational pattern. This perspective differs sharply from that espoused by physicists before the discovery of string theory; in the earlier perspective the differences among the fundamental particles were explained by saying that, in effect, each particle was viewed as elementary, the kind of "stuff" each embodied was thought to be different. Electron "stuff", for example, had negative electric charge, while neutrino "stuff" had no electric charge. String theory alters this picture radically by declaring that the "stuff" of all matter and all forces is the same. Each elementary particle is composed of a single string - that is, each particle is a single string - and all strings are absolutely identical. Differences between particles arise because their respective strings undergo different resonant vibrational patterns" (Greene 1999: 145-146).
"In 1974, when Scherk and Schwarz proposed that one particular pattern of string vibration was the graviton particle, they were able to exploit such an indirect approach and thereby predict the tension on the strings of string theory. Their calculations revealed that the strength of the force transmitted by the proposed graviton pattern of string vibration is inversely proportional to the string's tension. And since the graviton is supposed to transmit the gravitational force - a force that is intrinsically quite feeble - they found that this implies a colossal tension of a thousand billion billion billion billion (10^39) tons, the so-called Planck tension. ... the huge string tension causes the loops of string theory to contract to minuscule size. Detailed calculation reveals that being under Planck tension translates into a typical string having the Planck length - 10^-33 centimeters" (Greene 1999: 148).
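Greene's Planck-length figure can be checked directly from the defining formula l_P = sqrt(hbar*G/c^3). A quick sketch using CODATA constant values (the variable names are mine):

```python
import math

# Planck length: l_P = sqrt(hbar * G / c^3)
HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
G = 6.674_30e-11           # Newton's gravitational constant, m^3 kg^-1 s^-2
C = 2.997_924_58e8         # speed of light, m/s

planck_length_m = math.sqrt(HBAR * G / C**3)
print(f"{planck_length_m:.3e} m")   # ~1.616e-35 m, i.e. ~1.6e-33 centimeters
```

The result, about 1.6 x 10^-33 centimeters, matches the Planck length Greene quotes.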
All of this means the energy levels of the theorized strings are very high. Particles have a fixed spin that never changes, a charge, and a mass... and so does a black hole... except that its spin is not fixed... and there is a fundamental lower limit to the size of the universe... the Planck length. So there is no "singularity"... the universe undergoes a cosmic bounce when pushed too hard.
"According to string theory, the universe is made up of tiny strings whose resonant patterns of vibration are the microscopic origin of particle masses and force charges. String theory also requires extra space dimensions that must be curled up to a very small size to be consistent with our never seeing them. But a tiny string can probe a tiny space. As a string moves about, oscillating as it travels, the geometrical form of the extra dimensions plays a critical role in determining resonant patterns of vibration. Because the patterns of string vibrations appear to us as the masses and charges of the elementary particles, we conclude that these fundamental properties of the universe are determined, in large measure, by the geometrical size and shape of the extra dimensions" (Greene 1999: 206).
"We concluded that as a Calabi-Yau shape goes through a space-tearing conifold transition, an initially massive black hole becomes ever lighter until it is massless and then it transmutes into a massless particle - such as a massless photon - which in string theory is nothing but a single string executing a particular vibrational pattern. In this way, for the first time, string theory explicitly establishes a direct, concrete, and quantitatively unassailable connection between black holes and elementary particles" (Greene 1999: 331-332).
In addition, Greene notes (page 358) that Brandenberger and Vafa showed that if one runs the clock backwards into the big bang, when the universe shrinks to the Planck length and gets hotter, all of the dimensions are curled up into a Planck-sized nugget; then the temperature decreases with a bounce back outward as some of the dimensions uncurl into spatial dimensions to create a new universe. Thus time is always there, there is always some level of fully curled dimensionality, and there is no singularity.
But hold everything: the 1998 Super-Kamiokande experiment to find neutrinos appears to suggest that they do have mass, about one ten-millionth of the mass of the electron. This, if true, spells doom for the current Standard Model! Muon neutrinos mutate into tau neutrinos. The Sudbury Neutrino Observatory (under construction) may resolve this question.
Some cosmologists, like Anatoli Vankov, think our universe is one small bubble in an infinite Grand Universe (GU) model. He believes there was no "big bang"; instead our universe is a matter-based fluctuation in an infinite chaotic 3D space of matter and anti-matter filled with limited-volume typical universes. Some of these are matter universes and some are anti-matter. When matter and anti-matter bump in the GU, a bubble is created, and whichever had the greater mass will determine the left-over matter or anti-matter in the resulting bubble. He feels this explains the Baryon asymmetry, cosmic ray energy, and dark matter in our bubble. Sort of an infinite foam that is constantly changing, creating and absorbing bubble universes like a bubble bath. Rees (1997) also goes along with the "multiverse" concept as a way to resolve the anthropic issue:
In a publication, "Starting Point", Steve Nadis (2013: 36-41) illustrates a basic conundrum about the origin of the universe. Quantum physics explains how the spontaneous creation of a universe out of nothing can occur, but this requires the laws of quantum physics to preexist spacetime. Sounds like the chicken-and-egg issue. The chicken and egg are explained by co-evolution, and so can this conundrum be: universes evolve and their physics evolves as well.
"Complex evolution would occur only in "oases" where the constants had propitious values. Our oasis must be at least 10 billion light-years across because the physical laws seem the same everywhere we can observe. But the "desert" beyond it could come into view in the remote future, when, maybe 10^12 years or more from now, light from the edges of our domain has had enough time to reach us... The other universes may even be completely disjoint from ours, so that they will never come within the horizon of our remotest descendants. We may be part of an infinite and eternal multiverse within which new domains "sprout" into universes whose horizons never overlap - ironically, the steady-state concept can then be revived, but applied to the multiverse rather than its constituent universes... The multiverse could encompass all possible values of fundamental constants, as well as universes that follow life cycles of very different durations: some, like ours, may expand for much more than 10 billion years; others may be "stillborn" because they recollapse after a brief existence, or because the physical laws governing them aren't rich enough to permit complex consequences... Natural selection of "favored" universes seems the stuff of science fiction. However, the American cosmologist Lee Smolin conjectures that the multiverse could display the effects of heredity and selection. When a black hole collapses, he speculates that another universe sprouts from its interior, creating a new expanse of space and time disjoint from our own. Small universes, in which there was too little space or time to form many black holes, would not leave many progeny. Nor, he argues, would even a large universe if its physics prohibited stars from ever terminating as black holes" (Rees 1997: 248-249).
Rees (1997: 23) also says "The size of our universe shouldn't surprise us: its extravagant scale is necessary to allow enough time for life to evolve on even one planet around one star in one galaxy" and "We are clearly not a typical planet in the universe: we are on a planet with special properties, orbiting around a stable star. Somewhat less trivially, we are observing the universe not at a random time, but at a time when the requirements for complex evolution can be met".
Goldsmith still says: "Why is there something rather than nothing? Like the origin of life, which has proven a much greater riddle than life's evolutionary history, the origin of the cosmos poses a sterner problem than deciphering its past to predict its future. The multiverse concept takes the problem and casts it away as far as possible, into infinite recesses of time and across cosmic boundaries. All may agree, however, that the problem remains: How did the multiverse itself begin?" (2000: 209).
An example of the absurdity of the multiverse concept is Robert A. Heinlein's book "The Number of the Beast". It allows him to engage in an egotistical fantasy of sex and a fear of death based on the concept. Anything a person can visualize/fantasize is true in an alternate universe.
The chapter aptly named "New Dimensions in Nonsense" in "Bankrupting Physics" (Unzicker and Jones 2013: 159-172) deals with "Branes, multiverses and other supersicknesses: physics goes nuts". As they noted: "The road to ignorance is paved with good editions". - George Bernard Shaw. "Should these concepts turn out to be true, I shall not be ashamed to be the last one to believe". - Ernst Mach.
In my own opinion, that we exist on a planet with a large moon is also a critical factor. Our geological activity and plate tectonics are probably related to our almost double-planet circumstance, with our constantly changing gravity well maintaining much of the long history of life. The tidal effects on the seas also probably greatly sped up Earth's evolutionary history, and the moon acted as an additional barrier to occasional hits from asteroids and comets... allowing greater time depth to evolutionary processes. The large outer planets also act like minesweepers, cleaning out incoming missiles as Jupiter recently did. There may be far fewer places that meet so many localized circumstances above and beyond simply having a planet of the right size and distance from its sun for life to evolve and to continue sufficiently long for intelligence sufficient for human-like consciousness.
"... rather than living in an eternal cosmos, we live in a young world, the story of whose maturation we see spread out before us as we look out with our telescopes and antennas. This makes it possible to ask, as scientific questions, not only how was the world we see around us made, but what existed before this world? It is not much of an exaggeration to say that the question of what happened during, and perhaps even before, the Big Bang is slowly coming into focus in the last years of this century in the same way that the question of what happened before the origin of our species came into focus during the last" (Smolin 1997: 17).
"Until recently, most astronomers believed that the universe had entered a very boring middle age. According to this paradigm, the early history of the universe - that is, until about six billion years after the big bang - was an era of cosmic fireworks: galaxies collided and merged, powerful black holes sucked in huge whirlpools of gas, and stars were born in unrivaled profusion. In the following eight billion years, in contrast, galactic mergers became much less common, the gargantuan black holes went dormant, and star formation slowed down to a flicker. Many astronomers were convinced that they were witnessing the end of cosmic history and that the future held nothing but the relentless expansion of a becalmed and senescent universe" (Barger 2005:48).
"By examining the x-rays emitted by the cores of these relatively close galaxies, researchers have discovered many tremendously massive black holes still devouring the surrounding gas and dust. Furthermore, a more thorough study of the light emitted by galaxies of different ages has shown that the star formation rate has not declined as steeply as once believed" (Barger 2005:48).
"The emerging consensus is that the early universe was dominated by a small number of giant galaxies containing colossal black holes and prodigious bursts of star formation, whereas the present universe has a more dispersed nature - the creation of stars and the accretion of material into black holes are now occurring in a large number of medium-size and small galaxies. Essentially, we are in the midst of a vast downsizing that is redistributing cosmic activity" (Barger 2005:48).
The January 1999 Scientific American contained a series of articles about cosmology. Hogan, Kirshner & Suntzeff report on current research on the age of the Universe based on supernovae, which indicates the universe is bigger and emptier, and expansion may be speeding up, not slowing down! Astronomers claim they can monitor how long a supernova lasts and compute its inherent brightness to within 12 percent. The "High-Z Team" has been looking for such events in the most distant galaxies and now has a few events from 4-7 billion years ago, when the universe was only half to two-thirds its present size. Both teams found that the supernovae were fainter than expected by about 25%.
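The logic behind "fainter than expected" is inverse-square arithmetic: a standard candle's flux falls off as 1/d², so a 25% deficit in brightness implies a distance larger by a factor of 1/sqrt(0.75), about 15%. A minimal sketch (an idealized candle in arbitrary units; the helper name is mine):

```python
import math

def candle_distance(luminosity, flux):
    """Distance implied by the inverse-square law: flux = L / (4*pi*d^2)."""
    return math.sqrt(luminosity / (4 * math.pi * flux))

expected_flux = 1.0                    # flux predicted for the assumed distance
observed_flux = 0.75 * expected_flux   # supernova appears 25% fainter
ratio = candle_distance(1.0, observed_flux) / candle_distance(1.0, expected_flux)
print(round(ratio, 3))   # 1.155 -> the supernova sits ~15% farther than predicted
```

Supernovae sitting farther away than the assumed expansion history predicts is exactly the discrepancy that pointed toward acceleration.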
Saul Perlmutter's team "had started this whole project years earlier hoping to measure the rate of deceleration of the universe - he never really expected our universe to be expanding faster all the time" (Aczel 1999:8).
"What puzzled scientists was the question: Why? What was the explanation for the unprecedented new findings? The answer seemed to be that there is yet another mysterious force in the universe - something which had never been directly observed. That something, which physicists call a negative pressure or a vacuum energy, or just a "funny energy" was counteracting the attracting force of gravity. There was something out there that was pushing away the galaxies - accelerating them on their mutual retreat from each other" (Aczel 1999:10-11).
"Electric and magnetic fields, like position and velocity, cannot be simultaneously determined. If one is known, the other is necessarily uncertain. For this reason the fields are in a constant state of jittering fluctuation that cannot be eliminated. And as you might expect, this leads to a certain amount of energy, even in absolutely empty space. This vacuum energy has led to one of the greatest paradoxes of modern physics and cosmology" (Susskind 2006: 29).
"Raffiniert ist der Herr Gott, aber boshaft ist er nicht" - Albert Einstein. Meaning "tricky (crafty, shrewd) is the Lord God, but malicious He is not" (Aczel 1999:13).
"If the universe is made of normal matter, gravity must steadily slow the expansion. Little slowing, as indicated by the supernovae measurements, thus implies that the overall density of matter in the universe is low" (Hogan, Kirshner & Suntzeff 1999: 50-51).
"Although this conclusion undermines theoretical preconceptions, it agrees with several other lines of evidence. For example, astronomers have noted that certain stars appear to be older than the accepted age of the universe - a clear impossibility. But if the cosmos expanded more slowly in the past, as the supernovae now indicate, the age of the universe must be revised upward, which may resolve the conundrum" (Hogan, Kirshner & Suntzeff 1999: 51).
"The big surprise is that the supernovae we see are fainter than predicted even for a nearly empty universe (which has a maximum negative curvature). Taken at face value, our observations appear to require that expansion is actually accelerating with time. A universe composed of only normal matter cannot grow in this fashion, because its gravity is always attractive. Yet according to Einstein's theory, the expansion can speed up if an exotic form of energy fills empty space everywhere. This strange "vacuum energy" is embodied in Einstein's equations as the so-called cosmological constant" (Hogan, Kirshner & Suntzeff 1999: 51).
"The universe is out of control. Not only is it expanding but the expansion itself is accelerating. Most likely, such expansion can end only in one way: in stillness and total darkness, with temperatures near absolute zero, conditions utterly inhospitable to life" (Kaku, 2004: 47).
"We may be living in the only epoch in the history of the universe when scientists can achieve an accurate understanding of the true nature of the universe" (Krauss & Scherrer 2008: 47).
In 5 billion years the sun will swell to a red giant and Andromeda galaxy will fill the sky. The sun will burn out. In 100 billion years the Milky Way will be a merged supergalaxy and all other galaxies will be beyond our view horizon. There will be no evidence of cosmic expansion. The big bang afterglow will be too dilute to measure. In 100 trillion years the last stars will burn out leaving a black hole in the center of what was the Milky Way. The universe will be black. (Krauss & Scherrer 2008: 50-51).
Evidence for Dark Energy.
"Physicists, scrambling to their blackboards, deduced that a "dark energy" of unknown origin must be acting as an antigravitational force, pushing galaxies apart. The more the universe expands, the more dark energy there is to make it expand even faster, ultimately leading to a runaway cosmos" (Kaku, 2004: 48).
"... dark energy makes up a full 73 percent of everything in the universe. Dark matter makes up 23 percent. The matter we are familiar with - the stuff of planets, stars, and gas clouds - makes up only about 4 percent of the universe" (Kaku, 2004: 48).
"Scientists are just starting the long process of figuring out what dark energy is and what its implications are. One realization has already sunk in: although dark energy betrayed its existence through its effect on the universe as a whole, it may also shape the evolution of the universe's inhabitants - stars, galaxies, galaxy clusters. Astronomers may have been staring at its handiwork for decades without realizing it" (Conselice 2007: 35).
"Supernova Explosions: In an expanding universe, galaxies move apart at a speed that depends on the distance between them. Supernovae offer a way to measure this effect: their spectral redshift reveals the speed of their host galaxies, and their brightness reveals distance. It turns out that galaxies billions of years ago were moving more slowly than a simple extrapolation from the current rate of expansion implies. The expansion rate must have increased over that time - the hallmark of dark energy" (Conselice 2007:37).
"Taking in the amount of matter, both visible and dark, to be about 30% of the critical density, the supernova researchers concluded that the accelerated expansion they had observed required an outward push of a cosmological constant whose dark energy contributes about 70 percent of the critical density" (Greene 2004: 300).
"This is a remarkable number. If it's correct, then not only does ordinary matter - protons, neutrons, electrons - constitute a paltry 5 percent of the mass/energy of the universe, and not only does some currently unidentified form of matter constitute at least five times that amount, but also the majority of the mass/energy in the universe is contributed by a totally different and rather mysterious form of dark energy that is spread throughout space"
"But there is a second, equally important reason why 70 percent is a remarkable number. A cosmological constant that contributes 70 percent of the critical density would, together with the 30 percent coming from ordinary matter and dark matter, bring the total mass/energy of the universe right up to the full 100 percent predicted by inflationary cosmology! Thus, the outward push demonstrated by the supernova data can be explained by just the right amount of dark energy to account for the unseen 70 percent of the universe that inflationary cosmologists had been scratching their heads over"
"Taking in the amount of matter, both visible and dark, to be about 30% of the critical density, the supernova researchers concluded that the accelerated expansion they had observed required an outward push of a cosmological constant whose dark energy contributes about 70 percent of the critical density" (Greene 2004: 301).
"Cosmic Microwave Background Radiation: Images of the background radiation contain spots whose apparent size reflects the overall geometry of space and therefore the density of the universe. This quantity exceeds the amount of matter (both ordinary and exotic), so a missing component such as dark energy must make up the difference. In addition, the background radiation has been slightly reworked by the gravitational fields of cosmic structures. The amount of reworking depends on how the expansion rate has changed over time and matches what dark energy would do." (Conselice 2007:37).
"Galaxy Configuration: Galaxies are not sprinkled randomly throughout the heavens. Instead they are arranged in patterns, one of which resembles the spots in the microwave background. It can be used to measure the total mass of the universe and confirm the need for dark energy." (Conselice 2007:37).
"Gravitational Lensing: A lump of mass can serve as a lens; its gravity bends light. Such a lens can produce multiple images, like a fun-house mirror. If the light source is directly behind it - an alignment becomes more probable the bigger the universe is, which in turn depends on the amount of dark energy. A weaker lens can still bend light by a small angle that depends on its mass. Studies on this process have revealed how clumps of matter have grown over time and found the imprint of dark energy" (Conselice 2007:37).
"Galaxy Clusters: X-ray observations trace the evolution of the mass of galaxy clusters. Dark energy is required to explain when and how they formed" (Conselice 2007:37).
"To understand the influence of dark energy on the formation of galaxies, first consider how astronomers think galaxies form. Current theories are based on the idea that matter comes in two basic kinds. First, there is ordinary matter, whose particles readily interact with one another and, if electrically charged, with electromagnetic radiation. Astronomers call this type of matter "baryonic" in reference to its main constituent, baryons, such as protons and neutrons. Second, there is dark matter (which is distinct from dark energy), which makes up 85 percent of all matter and whose salient property is that it comprises particles that do not react with radiation. Gravitationally, dark matter behaves just like ordinary matter" (Conselice 2007:37).
"According to models, dark matter began to clump immediately after the big bang, forming spherical blobs that astronomers refer to as "halos." The baryons, in contrast, were initially kept from clumping by their interactions with one another and with radiation. They remained in a hot, gaseous phase. As the universe expanded, this gas cooled and the baryons were able to pack themselves together. The first stars and galaxies coalesced out of this cooled gas a few hundred million years after the big bang. They did not materialize in random locations but in the centers of the dark matter halos that had already taken shape" (Conselice 2007:37).
"Detailed studies indicate that a galaxy gets bent out of shape when it merges with another galaxy. The earliest galaxies we can see existed when the universe was about a billion years old, and many of these indeed appear to be merging. As time went on, though, the fusion of massive galaxies became less common. Between two billion and six billion years after the big bang - that is, over the first half of cosmic history - the fraction of massive galaxies undergoing a merger dropped from half to nearly nothing at all. Since then, the distribution of galaxy shapes has been frozen, an indication that smashups and mergers have become relatively uncommon" (Conselice 2007:37).
"Since the universe was half its current age, only lightweight systems continued to create stars at a significant rate. This shift in the venue of star formation is called galaxy downsizing" (Conselice 2007:39).
"Another oddity is that the buildup of supermassive black holes, found in the centers of galaxies, seems to have slowed down considerably. Such holes power quasers and other types of active galaxies, which are rare in the current universe; the black holes in our galaxy and others are quiescent. Are any of these trends in galaxy evolution related? Is it really possible that dark energy is the root cause?" (Conselice 2007:39).
"Some astronomers have proposed that internal processes in galaxies, such as energy created by black holes and supernovae, turned off galaxy and star formation. But dark energy has emerged as possibly a more fundamental culprit, the one that can link everything together. The central piece of evidence is the rough coincidence in timing between the end of most galaxy and cluster formation and the onset of the domination of dark energy. Both happened when the universe was about half its present age" (Conselice 2007:39).
Some additional ideas on loop quantum gravity are needed before I advance my own model of how the universe works. This theory predicts that space and time are made up of discrete pieces. It assumes spacetime is not fixed, but is always changing, and a point in spacetime is defined by what is happening to it (background independence), not by its "location" (diffeomorphism invariance).
"The possible values of volume and area are measured in units of a quantity called the Planck length. This length is related to the strength of gravity, the size of quanta and the speed of light. It measures the scale at which the geometry of space is no longer continuous. The Planck length is very small: 10-33 centimeters. The smallest possible nonzero area is about a square Planck length, or about 10-66 cm2.The smallest nonzero volume is approximately a cubic Planck length, 10-99 cm3. This, the theory predicts that there are about 1099 atoms in every cubic centimeter of space. This quantum of volume is so tiny that there are more such quanta in a cubic centimeter than there are cubic centimeters in the visible universe (1085)" (Smolin 2004:71).
"Time flows not like a river, but like a ticking of a clock, with "ticks" that are about as long as the Planck time: 10-43 second" (Smolin 2004:72).
My model of the universe
I had this thought in the middle of the night: could the universe be modeled as having an overall frequency wave, expressed as the cosmological non-constant? Visualize a huge sine wave of such low frequency (a period on the order of 13 billion years) that "we" are riding along on it. Inflation would be the initial upswing near the base of the wave; the last 13 or so billion years would be the slow-changing but still upward movement along the middle of the wave; then, not that long ago, came the transition to faster change as we approach the top of the curve.
The universe would be ringing like a huge bell, and the effects of that (gravitational?) wave would shape the distribution and nature of the matter/energy and the "inflation" of the cosmos. This would mean that the cosmological constant is really a cosmological non-constant. Perhaps "gravity" becomes positive in the presence of matter and negative in its absence: not a weak force, but a complex force with different effects. The sum of the positive and negative effects would be a greater force than a measurement of the positive alone. This may explain the anomalies found in spacecraft exiting our sun's gravitational field?
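The sine-wave picture above can be written down as a toy numerical sketch. This is purely illustrative of the idea, not a fit to anything: the period, baseline, and amplitude are arbitrary assumed numbers, and the model itself is the author's speculation.

```python
# Toy sketch of the "cosmological non-constant": an expansion rate
# modulated by a single ultra-low-frequency sine wave. All numbers
# (period, baseline, amplitude) are arbitrary illustrative assumptions.
import math

PERIOD_GYR = 26.0    # assumed full cycle: roughly twice the age of the universe
BASE_RATE = 1.0      # arbitrary baseline expansion rate
AMPLITUDE = 0.5      # arbitrary modulation depth

def expansion_rate(t_gyr: float) -> float:
    """Expansion rate at cosmic time t (in Gyr) under the toy model."""
    phase = 2 * math.pi * t_gyr / PERIOD_GYR
    return BASE_RATE + AMPLITUDE * math.sin(phase)

# Sample the wave at an early, a middle, and a recent epoch.
for t in (0.5, 6.5, 13.0):
    print(f"t = {t:5.1f} Gyr  rate = {expansion_rate(t):.3f}")
```

A single slow cycle like this is one way to make the "constant" a variable that an observer riding the wave would measure differently at different epochs.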
During the initial big bang, before baryonic matter formed, there was a phase transition to "pre-Planck or non-Planck" physics, and this is what is called dark matter and dark energy. The pre-Planck or non-Planck matter/energy (positive and negative gravity) cannot interact with Planck-scale baryonic particles in the electromagnetic spectrum simply because of the difference in scale. Gravity, however, is not electromagnetic and still operates between dark matter and normal matter.
"One place where we expect special relativity to crumble is at the Planck length.... Quantum theory tells us that this scale represents a threshold below which the classical picture of spacetime disintegrates. Einstein's special theory of relativity is part of that classical picture, so we might expect it to break down at that point" Smolin 2006: 224.
As previously noted:
I came up with my own explanation by making time more complex: large-scale spacetime is a warped version of quantum spacetime, and entangled particles are "joined at the hip" by time (as a direction) not changing as far as the two particles are concerned... only the other "three" dimensions rotate around time until some large-scale interaction breaks that no-time-has-passed connection. In other words, breaking the time joined-at-the-hip connection reveals what appears at the macro scale as spooky action at a distance. It was not. The distance between them is/was "TIME".
TIME is a dimension. I am convinced that the passage of time is a byproduct of the expansion of the universe. We are inside that expansion, and its limits are set by the speed of light AND the shape of space. The speed of light is a "relative constant" in that it is a byproduct of the expansion, the cosmological variable (I cannot call it a constant, as it is changing). "C" is the relative local limit of change. Think of a three-dimensional point traveling along the expansion of "space" through "time" as the time side of the spacetime variable shifts the location of the "space" side of spacetime. As an entity approaches the speed of light, it approaches the limits of cosmological expansion, and thus the limits of time in spacetime, as they are one and the same. Black holes have a spacetime horizon... they have no interior, as time approaching zero is a limit at the horizon. A black hole is only a black horizon; it has no "interior". The arrow of time is the arrow of expansion. As there is evidence that expansion is speeding up, that limit is changing. Time as the 4th dimension comes from the cosmic variable driving change in spacetime, resulting in increased entropy.
In my model, non-Planck space, matter and energy are non-classical and spacetime is different.
Black holes, which seem to make up about 0.5 to 1 percent of the mass of each galaxy, are the connectors between Planck-scale matter/energy and dark matter/energy. This consistency makes me suspect that black holes convert normal matter/energy into dark matter/energy (i.e., convert baryonic matter/energy into non-baryonic matter/energy). Thus, the amount of dark matter and dark energy is increasing, and the amount of normal matter/energy is decreasing. The rate of conversion early on was extremely high: creating inflation and making the universe "flat". A huge amount of normal matter/energy was converted to dark matter/energy, fueling "inflation". As the amount of normal matter/energy dropped to some critical size, the rate of expansion dropped almost to zero, but continued on. Then, as the relative percentage of dark matter/energy reached a critical point, dark energy began speeding up the rate of expansion again.
In this model, Planck loop quantum gravity and a non-Planck loop quantum gravity have equal effects, while all other parts of the electromagnetic spectrum are not equal and are out of phase with each other.
Black holes create order in Planck physics by converting Planck matter/energy into dark matter and energy. The "cosmological constant" is a measure of this conversion process. Time's arrow is an expression of this conversion as well. Time ticks in two ways: Planck time in the normal-matter spacetime universe and non-Planck time in the "dark" matter spacetime universe. This difference drives the cosmological "constant" or rather cosmological "variable".
I wrote to David Spergel at Princeton about this theory. He replied: "We don't know what is happening beyond the Planck scale. There has been a lot of speculation about Planck scale stable dark matter particles; however, there are only toy models: we don't really understand the Planck scale!"
He also noted: "If there was physics going on at the Planck scale that was responsible for the dark energy, then we would expect that the energy density of the dark energy would be (M_Planck)^4 --- 10^120 times bigger than the observed value. It is the discrepancy between our expectation for Planck scale effects and the observed value that has driven physicists to be so interested in the problem."
The end of the universe?
Krauss (1999), in an article entitled "Cosmological Antigravity", summarizes thought about the cosmological constant as an outcome of virtual particles which produce measurable effects. The big problem is that quantum theory predicts a spectrum of virtual particles spanning every possible wavelength, creating an infinite energy. Even with a cutoff, existing particle theory predicts a constant at least 120 orders of magnitude larger than the total energy of the universe.
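The mismatch Spergel and Krauss describe can be put into numbers. A minimal sketch; both densities below are standard order-of-magnitude figures I am supplying for illustration, not values taken from the quoted sources:

```python
# The cosmological-constant problem as a ratio of two energy densities.
import math

planck_energy_density = 1e113   # J/m^3, roughly (Planck energy) / (Planck length)^3
observed_dark_energy = 6e-10    # J/m^3, roughly 0.7 of the critical density

ratio = planck_energy_density / observed_dark_energy
print(f"mismatch: about 10^{math.log10(ratio):.0f}")
```

Depending on exactly where the cutoff is placed, the mismatch comes out at roughly 120 orders of magnitude, which is the fine-tuning problem both passages refer to.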
"A lower density of matter, signifying an open universe with slower deceleration, would ease the tension somewhat. Even so, the only way to lift the age of the universe above 12.5 billion years would be to consider a universe dominated not only be matter but by a cosmological constant. The resulting repulsive force would cause the Hubble expansion to accelerate over time. Galaxies would have been moving apart slower than they are today, taking longer to reach their present separation, so the universe would be older" (Krause 1999: 56).
Research seems to show that luminous matter accounts for between 10 and 20 percent of the mass of the universe; even the total mass - including protons and neutrons as well as dark matter candidates - is still only about 60% of what is needed to flatten the universe.
"The cosmological constant changes the usual simple picture of the future of the universe. Traditionally, cosmology has predicted two possible outcomes that depend on the geometry of the universe, or equivalently, on the average density of matter. If the density of a matter-filled universe exceeds a certain critical value, it is "closed", in which case it will eventually stop expanding, start contracting and ultimately vanish in a fiery apocalypse. If the density is less than the critical value, the universe is "open" and will expand forever. A "flat" universe, for which the density equals the critical value, also will expand forever but at an ever slowing rate" (Krause 1999: 58).
"Yet these scenarios assume that the cosmological constant equals zero. If not, it - rather than matter - may control the ultimate fate of the universe. The reason is that the constant, by definition, represents a fixed density of energy in space. Matter cannot compete: a doubling in radius dilutes its density eightfold. In an expanding universe the energy density associated with a cosmological constant must win out. If the constant has a positive value, it generates a long-range repulsive force in space, and the universe will continue to expand even if the total energy density in matter and in space exceeds the critical value. (Large negative values of the constant are ruled out because the resulting attractive force would already have brought the universe to an end.)" (Krause 1999: 58).
The cosmological constant would have to account for 40-70% of the energy to make the universe flat... fine tuning to 123 decimal places that leaves the 124th untouched... or the "constant" may be a variable, changing over time... but that requires that we live in our universe at a time when the density of matter is comparable to the energy of space!
Bucher & Spergel (1999) tackle "Inflation in a Low-Density Universe" in the next article. The variable omega (Ω) is the ratio of the gravitational energy of the universe to the kinetic energy contained in the motion of matter as the universe expands. If Ω equals 1 it is stable; if it varies up or down, over time it will either drop to zero or become infinite. Since the universe is billions of years old, Ω must be exactly 1 or very close to 1 (within one part in 10^18). To deal with this factor, inflation was proposed to explain the uniformity of the universe. If Ω is greater than 1, the universe has a positive curvature like a ball. If Ω is less than 1, the universe has a negative curvature like a saddle. Inflation flattens the universe so that the occupied portion appears to approach 1 as a limit and the irregularities are evened out. Current research on matter gives an Ω of about 0.3. Is a flat universe the outcome of inflation?
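The "one part in 10^18" claim above is the flatness problem that motivated inflation, and it can be sketched numerically. In a matter-dominated universe the deviation |Ω - 1| grows roughly in proportion to the scale factor a (the standard textbook approximation), so running the clock backwards forces Ω fantastically close to 1 early on. The epochs below are illustrative:

```python
# Flatness-problem sketch: |Omega - 1| shrinks as we extrapolate back.
deviation_today = 0.7    # |Omega - 1| now, if Omega_matter ~ 0.3
a_today = 1.0

for a, label in [(1e-3, "recombination"), (1e-10, "early universe")]:
    deviation_then = deviation_today * (a / a_today)   # grows in proportion to a
    print(f"{label:>15}: |Omega - 1| ~ {deviation_then:.1e}")
```

Even a large deviation today maps back to a minuscule one at early times, which is why any early deviation at all would have blown up long ago without a mechanism like inflation to flatten things.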
"If the inflaton field had a different potential energy function, inflation would have bent space in a precise and predictable way - leaving the universe slightly curved rather than exactly flat. In particular, suppose the potential-energy function had two valleys - a false (local) minimum as well as a true (global) minimum. As the inflaton field rolled down, the universe expanded and became uniform. But then the field got stuck in the false minimum. Physicists call this state the "false vacuum", and any matter and radiation in the cosmos were almost entirely replaced by the energy of the inflaton field. The fluctuations inherent in quantum mechanics caused the inflaton field to jitter and ultimately enabled it to escape from the false minimum - just as shaking a pinball machine can free a trapped ball" (Bucher & Spergel 1999: 65).
"The escape, called false-vacuum decay, did not occur everywhere at the same time. Rather it took place at some random location and then spread. This process was analogous to bringing water to a boil" (Bucher & Spergel 1999: 65).
"In false-vacuum decay, quantum fluctuations played the role of the random atomic motion, causing bubbles of true vacuum to nucleate. Surface tension destroyed most of the bubbles, but a few managed to grow so large that quantum effects became unimportant. With nothing to oppose them, their radius continued to increase at the speed of light. As the outside of a bubble passed through a point in space, the inflaton field at that point was jolted out of the false minimum and resumed downward descent. Thereafter the space inside the bubble inflated much as in standard inflationary theory. The interior of this bubble corresponds to our universe. The moment that the inflaton field broke out of its false minimum corresponds to the big bang in older theories" (Bucher & Spergel 1999: 65-66).
"For points at different distances from the center of nucleation, the big bang occurred at different times. This disparity seems strange, to say the least. But careful examination of the inflaton field reveals what went on. The inflaton acted as a chronometer: its value at a given point represented the time elapsed since the big bang occurred at that point. because of the time lag in the commencement of the big bang, the value of the inflaton was not the same everywhere; it was highest at the wall of the bubble and fell off steadily toward the center. Mathematically, the value of the inflaton field was constant on surfaces with the shape of hyperbolas" (Bucher & Spergel 1999: 66).
"The value of the inflaton is no mere abstraction. It determined the basic properties of the universe inside the bubble - namely, its average density and the temperature of the cosmic background radiation (today 2.7 degrees C above absolute zero). Along a hyperbolic surface, the density, temperature and elapsed time were constant. These surfaces are what observers inside the bubble perceive as constant "time". It is not the same as time experienced outside the bubble.... loosely speaking, time represents the direction in which things change, and change inside the bubble is driven by the inflaton" (Bucher & Spergel 1999: 66).
"According to relativity, the universe has four dimensions - three for space, one for time. Once the direction of time is determined, the three remaining directions must be spatial; they are the directions in which time is constant. Therefore, a bubble universe seems hyperbolic from the inside. For us, to travel out in space is, in effect, to move along a hyperbola. To look backward in time is to look toward the wall of the bubble. In principle, we could look outside the bubble and before the big bang, but in practice, the dense, opaque early universe blocks the view" (Bucher & Spergel 1999: 66).
"This melding of space and time allows an entire hyperbolic universe (whose volume is infinite) to fit inside an expanding bubble (whose volume, though increasing without limit, is always finite). The space inside the bubble is actually a blend of both space and time as perceived outside the bubble. Because external time is infinite, so is internal space" (Bucher & Spergel 1999: 66).
"This seemingly bizarre concept of bubble universes frees inflationary theory from its insistence that equal one. Although the formation of the bubble created hyperbolas, it said nothing about their precise scale. The scale is instead determined by the details of the inflaton potential, and it varies over time in accordance with the value of . Initially, inside the bubble equals zero. During inflation, its value increase, approaching one. Thus, hyperbolas start off with an abrupt bend and gradually flatten out. The inflaton potential sets the rate and duration of flattening. Eventually inflation in the bubble come to an end, at which point is poised extremely near but very slightly below one. Then starts to decrease. If the duration of inflation inside the bubble is just right (to within a few percent), the current value of will match the observed value" (Bucher & Spergel 1999: 66).
But Luminet, Starkman and Weeks (1999) indicate there was a flurry of papers about the topology of the universe in the 1990s. They indicate the universe may be finite, with the illusion of infinite space created by light bent around the topology of space... like a hall of mirrors. They say that the "infinite" universe is an unwarranted conclusion from Einstein's general theory of relativity. The universe may be a multiply connected system, which is preferred by some schemes for unifying the fundamental forces of nature. They indicate that both Mach's inertia issue and the argument that a low-volume universe is more probable than a large-volume one favor a finite universe derived from a finite quantum fluctuation. A finite universe does not require an edge; it is a hypersphere or hyperbolic manifold and is not embedded in any higher-dimensional space.
"The universe, too, can be measured in units of radians. Diverse astronomical observations agree that the density of matter in the cosmos is only a third of that needed fro space to be Euclidean. Either a cosmological constant makes up the difference,.... or the universe has a hyperbolic geometry with a radius of curvature of 18 billion light-years. In the latter case, the observable universe has a volume of 180 cubic radians - enough room for nearly 200 of the Weeks polyhedra. In other words, if the universe has the Weeks topology, its volume is only 0.5 percent of what appears to be. As space expands uniformly, its proportions do not change, so the topology remains constant" (Luminet, Starkman & Weeks 1999: 94-95).
"There are three basic hypothesis for the birth of the universe, which are advocated, respectively, by Andrei Linde of Stanford University, Alexander Vilenkin of Tufts University and Stephen Hawking of the University of Cambridge. One salient point of difference is whether the expected volume of a newborn universe is very large (Linde's and Vilenkin's proposals) or very small (Hawking's). Topological data may be able to distinguish among these models" (Luminet, Starkman & Weeks 1999: 97).
"If observations do find the universe to be finite, it might help to resolve a major puzzle in cosmology: the universe's large-scale homogeneity. The need to explain this uniformity led to the theory of inflation, but inflation has run into difficulty of late, because in its standard form it would have made the cosmic geometry Euclidean - in apparent contradiction with the observed matter density. This conundrum has driven theorists to postulate hidden forms of energy and modifications to inflation ... An alternative is that the universe is smaller than it looks. If so, inflation could have stopped prematurely - before imparting a Euclidean geometry - and still have made the universe homogeneous. Igor Y. Sokolov of the University of Toronto and other used COBE data to rule out this explanations space is a 3-torus. But it remains viable if space is hyperbolic" (Luminet, Starkman & Weeks 1999: 97).
Interestingly enough, they state that "The theories of everything, such as string theory, are in their infancy and do not have testable consequences. But eventually the candidate theories will make predictions about the topology of the universe on large scales". This suggests some people think string theory is still viable.
"The new observations of faraway supernovae imply that space is not only expanding - it is accelerating its expansion. So something is pushing space outwards. What could it be? According to quantum physics, space, the "vacuum," is not a vacuum at all - it is teeming with energy. Virtual particles appear and disappear continuously in what we think of as empty space. There is a great amount of energy in what looks like perfect emptiness, and we don't understand this energy or where it's coming from. The vacuum is like a contracted spring that wants to burst out. The pressure exerted by the invisible spring packed with energy makes the space in which it is hidden expand. But the spring relaxes at a much slower rate than the expansion it is causing, and so the expansion is accelerating its pace. The energy of the vacuum, the force pushing space outwards, is modeled by Einstein's cosmological constant" (Aczel 1999: 179-180).
Paul Steinhardt calls this pressure force "quintessence"... a fifth force of nature.
"Based on observations of various z-levels, it now seemed that from the time right after the big bang to about seven billion years ago the universe was indeed slowing down its expansion. But the density of matter in the universe was simply not enough to slow the expansion to a halt. As the universe continued to grow, its mass diluted, allowing Einstein's "funny energy" to take over. Seven billion years ago, the expansion rate thus started to pick up speed, and the universe is now expanding faster all the time" (Aczel 1999: 216).
Goldsmith (2000: 1) describes the non-zero cosmological constant universe:
"Imagine a strange universe in which the expansion of the cosmos, instead of being slowed by gravity, undergoes a continuous acceleration from the presence of a mysterious form of energy. This energy, concealed from any direct detection by its complete transparency, permeates seemingly empty space, furnishing the cosmos with a "free lunch" of just the sort that old wives tales forbid. Just as amazingly, every cubic centimeter of the new space that the outgoing cosmic expansion creates likewise teams with this invisible energy, the existence of which endows each volume of space with a tendency to expand. As a result, the universe multiples its energy content many times over as time goes by. The increase in its hidden energy makes the universe accelerate ever more rapidly, eventually driving its basic units of matter to utterly unfathomable separations. Instead of a chance to contact, perhaps to recycle itself through another big bang, this universe faces a future in which all cosmic distances grow billions of times their present immense values. As this happens, the average density of matter in the universe falls ever more rapidly towards zero, because the energy of empty space makes the universe expand at a continuously increasing rate."
Recent observations of early supernovae suggest this may be true: that there is a non-zero cosmological constant and that it is changing over time. "If the cosmological constant equals zero, then the universe is positively curved and will eventually contract, if and only if the actual value of the density exceeds the critical density. Conversely, if the actual density falls below the critical density, the universe must be negatively curved and will expand forever. If the actual density exactly equals the critical density, space in the universe must be flat, and the universe will expand more and more slowly as time goes on, but will cease its expansion completely only after an infinite amount of time has passed" (Goldsmith 2000: 46).
Based on observation, space is flat (the universe looks the same in any direction, even in parts that have never been in causal contact). The mass of the universe combined with the cosmological constant must be unity (1). If the cosmological constant is zero, then the mass is 1, exactly the critical density for a contracting universe. If the cosmological constant has any positive value, then the mass density must be less than the critical density, and the negatively curved, constantly expanding universe is the resulting model.
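The classification in the Goldsmith passages above can be sketched as a small decision function. The function and its labels are a paraphrase of the quoted logic, not a library API; it ignores negative constants and subtler cases:

```python
# Geometry and fate of the universe from its density parameters.
def classify(omega_matter: float, omega_lambda: float) -> str:
    """Classify geometry and fate per the simple picture in the text."""
    total = round(omega_matter + omega_lambda, 9)   # guard float round-off
    if total > 1:
        geometry = "closed (positively curved)"
    elif total < 1:
        geometry = "open (negatively curved)"
    else:
        geometry = "flat"
    if omega_lambda > 0:
        fate = "expands forever (accelerating)"     # a positive constant wins out
    elif total > 1:
        fate = "recollapses"                        # matter above critical density
    else:
        fate = "expands forever (decelerating)"
    return f"{geometry}; {fate}"

print(classify(2.0, 0.0))   # dense universe, no constant
print(classify(0.3, 0.7))   # the observed mix discussed in the text
```

The second case shows the key point of the passage: with a positive constant, a flat universe expands forever and accelerates, regardless of the matter density alone.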
The early universe's mass was concentrated in a very small area. The initial "big bang" had a huge cosmological constant that allowed inflation: "In other words, inflation can turn a region thirteen orders of magnitude smaller than a proton into a volume a million times larger than the visible universe today! The exact - or even the approximate - sizes of the small and large volumes scarcely matter. What counts is that the inflationary era could take any incredibly small volume of space and - in a mere 10^-30 second - make it a region far, far larger than the visible universe" (Goldsmith 2000: 55). The effects of mass slowed this expansion to nearly zero, but not zero, as there was a non-zero cosmological constant. If the constant had been zero, the universe would have started collapsing at that point, as gravity took over. This huge quick expansion explains how the universe, which is not presently in causal contact, looks the same in all directions... it was in contact at the beginning of the expansion phase.
But what next? Because the constant was not zero, the universe continued to expand. At first, the constant was very close to zero and the mass was very close to 1. But over time, the relation has been changing. The relative amount of mass has been shrinking in its effects and the relative pressure of the cosmological constant has been increasing.
"No matter what the curvature of space may have been before the inflationary era began, the increase in size by a factor of 1060 or so would inevitably have made space effectively flat. More precisely, inflation would have made any small region of space, such as the visible universe today, seem perfectly flat (to about 1 part on 1060), just as a tiny fraction of a balloon's surface seems flat to those who remain within that region. If inflation did in fact increase the size of a once-tiny region of space by a factor of 1060, then, as we have seen, the entire visible universe, extending 15 billion light-years in all directions from us, would span less than 1 part on 10 of the inflated volume. In that case, everything we can observe, or hope to observe, within the volume that we call the visible universe amounts to looking at less than one square millimeter on the surface of a balloon the size of a town. Not surprisingly, this square millimeter would seen almost perfectly flat" (Goldmith 2000: 58-59).
Currently, measurements of the mass density of the universe suggest a value of about 0.3 and the cosmological constant about 0.7, so the cosmological constant is now more than double the value for the mass of the universe. In the far-off future, the mass will approach zero and the cosmological constant will approach unity as a limit.
This presents a mental model where at the initial "big bang", mass was close to zero and the cosmological constant was close to unity as a limit... then during the inflation, mass approached unity and the cosmological constant approached zero as a limit. This was followed by our current cycle, where mass is again approaching zero and the cosmological constant is approaching unity as a limit. This suggests a cyclical system where the ending condition describes the beginning condition. Perhaps there is no need to have a "big crunch" to recycle the universe... rather, it redefines itself by its limits?
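The shifting balance described above follows from how the two components dilute: matter density falls as the cube of the scale factor while a cosmological constant stays fixed. Here is a minimal sketch of my own, assuming a flat universe containing only matter and a cosmological constant with today's fractions of 0.3 and 0.7; it shows mass dominance in the past giving way to cosmological-constant dominance in the future.

```python
# Density fractions of matter vs. cosmological constant as the universe
# expands. Illustrative sketch only: flat universe, two components.

def omegas(a, omega_m0=0.3, omega_l0=0.7):
    """Return (matter fraction, lambda fraction) at scale factor a (a=1 today)."""
    rho_m = omega_m0 / a**3   # matter dilutes with volume (a^3)
    rho_l = omega_l0          # vacuum energy density stays constant
    total = rho_m + rho_l
    return rho_m / total, rho_l / total

for a in (0.1, 1.0, 10.0):
    om, ol = omegas(a)
    print(f"a={a:5.1f}  Omega_m={om:.3f}  Omega_lambda={ol:.3f}")
```

Run it and the matter fraction is nearly 1 in the small early universe, 0.3 today, and approaches zero in the far future, with the cosmological constant approaching unity, just as the mental model above describes.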
That we live in a time when the two factors are close together in order of magnitude reflects the evolutionary history of the universe. When mass was the heavy hitter, stars were producing the heavy elements and seeding dust clouds to produce planets with the required elements for life. In the future, this seeding may be so great as to poison any possible evolution of life with heavy metals, and later still, there will simply be a star shortage to power life.
Before Goldsmith's book came out, I was wondering: if mass expands along with space/time, then this has implications for a number of theories. If mass is along for the ride and its "relative density" changes over time as space expands, then this has implications, as noted before, for the cosmological constant issue and recent findings that the rate of expansion of the universe is increasing. If we are looking into the past, we are looking at a smaller universe where matter may have greater mass density than now. So the difference may account for the observation that things are faster now than in the past? As well, the very small changes in any single particle or string, when in very large structures, may create enough summation of effects to change how the structure looks and behaves... could this be the so-called "dark matter" effect? If this is true, what are the implications for black holes? Their density would change as well... causing... with enough time, their re-entry into the normal universe? If particles are folded strings with parts of their dimensions hyper-folded so that they do not affect the larger universe... there are implications as well.
Gregory Bothun (University of Oregon) indicated to me that the spinning of millisecond pulsars would be coupled to any string-based gravitational waves from the early quark-sized universe (if there were strings). Since there is no evidence for such a gravitational wave background (gravitational Brownian movement?), cosmic strings can be ruled out as physical entities. Other physicists are telling me that string theory is dead but that baryogenesis continues to be a problem. They also tell me that mass does expand with space/time, but the amount of change is vanishingly small: about 100 km/s per Mpc, so two objects separated by 1 megaparsec (3.26 million light-years) would have a relative expansion velocity of 100 km/s, and the Milky Way (0.1 Mpc across) would expand at only 10 km/s but has a rotation of about 250 km/s... so the expansion rate is overwhelmed by its rotation. Thus the change due to space/time is small in relation to the changing mass density of the universe. (Thanks to the physics department at the University of Oregon.)
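The Oregon figures follow directly from the Hubble law, v = H0 x d. A tiny sketch of my own, using the 100 km/s per Mpc value quoted to me (modern measurements put H0 closer to 70, but 100 keeps the arithmetic matching the text):

```python
# Hubble-law expansion velocities, using the H0 value quoted in the text.

H0 = 100.0  # km/s per Mpc, as quoted above (modern value is nearer 70)

def expansion_velocity(d_mpc):
    """Relative recession velocity in km/s of two points d_mpc megaparsecs apart."""
    return H0 * d_mpc

print(expansion_velocity(1.0))  # two objects 1 Mpc apart: 100 km/s
print(expansion_velocity(0.1))  # across the Milky Way (~0.1 Mpc): 10 km/s
# Compare with the galaxy's ~250 km/s rotation, which swamps the expansion.
```

The 10 km/s stretch across the galaxy versus its 250 km/s rotation is exactly the comparison made above: internal dynamics overwhelm cosmic expansion at galactic scales.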
If the universe is expanding at an increasing rate, then the cosmos will expand to a size where galactic matter will be so separated that it will no longer be visible... a very lonely universe indeed. The Star Trek universe of inter-galactic travel better happen soon, or it will never happen. Or perhaps we are in a packet of space where things are doing different things than in other areas of our universe... we simply do not know.
Adams & Laughlin summarize the fine-tuning of our universe (expanding on earlier aspects of this) ...
"Having looked at impossible events, and improbably events, let's now consider the most extraordinary event that did take place - the ascent of life. Our universe is rather convenient for life as we know it. In fact, all four windows of astrophysics play a vital role in its development. Planets, our smallest window of astronomy, provide the home for life. They provide the petri dishes which life can arise, evolve, and develop. Stares are obviously important as they provide the energy source that drives biological evolution. Stars play a second fundamental role, as the alchemists that produced the elements heavier than helium - the carbon, oxygen, calcium, and other nuclei that make up life forms"(Adams & Laughlin 1999: 195)
"Although less obvious, the galaxies are also important. Without the binding influence of galaxies, the heavy elements produced by stars would spread out over the universe. These heavy elements are the essential building blocks of both planets and life-forms. The galaxies, with their large masses and strong gravitational attraction, keep together the chemically enriched gas left over from stellar death. This previously processed gas is then incorporated into future generations of stars, planets, and people. The gravitational attraction of galaxies thus ensures that heavy elements are readily available for successive generations of stars, and for the production of rocky planets like earth" (Adams & Laughlin 1999: 196).
"Let's illustrate the required fine-tuning of our universe a bit more. Galaxies, one of the astrophysical entities required for life, are produced as gravity wins its battle with the expansion of the universe and instigates the collapse of local regions. If the gravitational force was much weaker; or the cosmological expansion rate was much faster; then no galaxies would have formed within the current age of the cosmos. The universe would continue to spread itself out, but would contain no gravitationally bound structures, at least not by this time in cosmic history. On the other hand, if gravitational forces was much stronger of the expansion rate was much slower, then the entire universe would recollapse into a big crunch before galaxies even begin to form. In either case, no life would evolve in our present universe. The interesting case of a universe filled with galaxies and other large cosmic structures thus requires a reasonably delicate compromise between the strength of gravity and the expansion rate. And our universe has realized just such a compromise" (Adams & Laughlin 1999: 197).
Adams & Laughlin see the universe as evolving through five stages or ages: 1) the primordial era that creates the structure and process of the universe, from which the forces are created and matter evolves; 2) the stelliferous era where stars are formed, galaxies evolve and stars cook up the elements needed for the development of planets (we live in this era); 3) the degenerate era when star formation has ended and stars have turned into red and brown dwarfs which radiate themselves away, and then protons decay so that all elements evaporate; 4) the black hole era, as black holes radiate heat until they explode; and 5) the dark era when the universe contains photons of colossal wave lengths as well as neutrinos, electrons and positrons that slowly annihilate each other.
Our sun is about 4.6 billion years old and will last for another five or six billion years before using up all of its hydrogen... but its power output will slowly increase. By about six billion years from now, the sun will swell to a red giant, the Earth will move out to about the orbit of Mars, and the Earth will be a burned-out molten rock. The core of the sun becomes a white dwarf star while the deep red stellar atmosphere expands. Then the sun will burn helium into carbon in a huge helium flash that will be comparable to the power produced by all the stars in the Milky Way. This is followed by a stable conversion of helium to carbon for about a hundred million years... and the Earth would resolidify with no hint of its past. The Stelliferous Era will end when the universe is about one hundred trillion years old.
A recent report suggests researchers have found frame-dragging around a massive black hole with the Rossi X-ray Timing Explorer... a distorted area of space rotating around the black hole, where, as the disk spins, the very space the disk inhabits is dragged along, twisting space and time. In another recent development, there is a gamma haze around our galaxy that should not be there, suggesting the annihilation of dark matter as one possible scenario.
If space/time did not exist before the Higgs excitation, and since the Higgs did not exist either, it not only created the universe, it created the conditions under which it could create the universe. Therefore one could view the Higgs as "god" (the ultimate definer of the universe). If this is a simple zero-sum process, then those that worshiped the sun (our local thermodynamic source) were probably closer to "god" than anyone. An infinite universe is a universe with infinite information. I am forced to conclude that the structure is irrelevant; it is the process that is important. The multiverse theory of infinite tension and release through phase transition seems to me the better solution. The resultant universes would inherit the characteristics of the parent universe. There is no beginning or end, only process.
Before talking about life, it is perhaps wise to talk about death. There is strong evidence that the extinction of the dinosaurs was caused by an asteroid (2/3rds of all species went extinct). What are asteroids? Currently there are about 50,000 catalogued. Most are found between Mars and Jupiter, the main asteroid belt. Three objects make up about half the mass of all of the objects: Ceres, Pallas and Vesta (933, 523 and 501 kilometers in diameter). The area of the belt is so huge that neighboring objects are several million kilometers apart (forget the movies with spaceships dodging asteroids). The Trojan asteroids reside at the 1:1 resonance with Jupiter. Comets come from the Kuiper Belt, and a cloud of objects called the Oort Cloud. Kuiper objects orbit beyond Neptune. Large objects have been found, and the count as of 1999 was 179. Pluto is now considered to be a Kuiper object rather than a planet by some astronomers... and the larger objects are now dubbed "plutinos".
Comet Shoemaker-Levy 9 proved the massive energies involved in a hit by such an object. The expanding rings were gravity waves rather than sound waves. Two years after the impacts, Jupiter was showing considerable changes in its atmospheric chemistry. We now know that objects larger than 1 kilometer cause global consequences. The Tunguska event was caused by a small 50-meter stony asteroid that hit with the power of 10-20 megatons of TNT! It flattened 2,000 square kilometers of forest.
"Dendrochronologist Mike Baillie at Queens University in Belfast, Northern Ireland, has found evidence of multiple short-term catastrophic climate changes over the past 6,000 years. Dendrochronolgy uses the study of tree rings to determine chronology and climate of a region. Baillie has found episodes of low growth rates at sites around the world in tree rings that coincide with upheaval in human affairs. The most recent corresponds to the Dark Ages in Europe around A.D. 530 to 540. Similar global climate changes have been found around 2345 B.C., 1628 B.C., 1150 B.C., 207 B.C. and 44 B.C. Baillie suggests that the impact of small objects exploding in the atmosphere or in the oceans would inject enough dust into the atmosphere to reduce sunlight - similar to the dark impact scars caused by Comet Shoemaker-Levy 9 when it hit Jupiter in 1994. This would cause years without summers, crop failures, famine and other side effects. These small objects, he believes, are part of the Taurid complex and have impacted during periods when the core of the debris complex intersects the orbit of earth. Many of the legends and mythology of mankind may well be linked to the appearance of bright comets in the sky overhead" (Scotti 2000: 186).
One final "bit"... John Wheeler suggests that physical phenomenon are somehow defined by the questions we ask of them... "the it from bit". Several physicists are now looking at quantum theory in terms of information theory "and have found the Heisenberg's uncertainty principle, wave-particle duality and nonlocality can be formulated more powerfully in the context of information theory" (Horgan 2000: 349).
I still like recursive dreams and a fractal-like universe... sweet dreams.
Smolin's Trouble with Physics
"Nature is in an obvious sense 'unified.' The universe we find ourselves in is interconnected, in that everything interacts with everything else. There is no way we can have two theories of nature covering different phenomena, as if one had nothing to do with the other. Any claim for a final theory must be a complete theory of nature. It must encompass all we know" (Smolin 2006: 5).
"Problem 1: Combine general relativity and quantum theory into a single theory that can claim to be the complete theory of nature" (Smolin 2006: 5).
"General relativity has a problem with infinities because inside a black hole the density of matter and the strength of the gravitational field quickly become infinite" (Smolin 2006: 5).
"Quantum theory, in turn, has its own trouble with infinities. They appear whenever you attempt to use quantum mechanics to describe fields, like the electromagnetic field. The problem is that the electric and magnetic fields have values at every point in space. This means that there are an infinite number of variables (even in a finite volume there are an infinite number of points, hence an infinite number of variables). In quantum theory, there are uncontrollable fluctuations in the values of every quantum variable. An infinite number of variables, fluctuating uncontrollably, can lead to equations that get out of hand and predict infinite numbers when you ask questions about the probability of some event happening, or the strength of some force" (Smolin 2006: 6).
"Problem 2: Resolve the problems in the foundations of quantum mechanics, either by making sense of the theory as it stands or by inventing a new theory that does make sense" (Smolin 2006: 8).
"Problem 3: Determine whether or not the various particles and forces can be unified in a theory that explains them all as manifestations of a single, fundamental entity" (Smolin 2006: 11).
"For all its usefulness, the standard model has a big problem: it has a long list of adjustable constants. When we state the laws of the theory, we must specify the values of those constants. As far as we know, any values will do, because the theory is mathematically consistent no matter which values we put in. These constants specify the properties of the particles. Some tell us the masses of the quarks and the leptons, while other tell us the strengths of the forces. We have no idea why these numbers have the values they do; we simply determine them by experiments and then plug in the numbers" (Smolin 2006: 12-13).
"There are about twenty such constants, and the fact that there are that many freely specifiable constants in what is supposed to be a fundamental theory is a tremendous embarrassment. Each one represents some basic fact of which we are ignorant: namely, the physical reason or mechanism for setting the constant to its observed value" (Smolin 2006: 13).
"Problem 4: Explain how the values of free constants in the standard model of particle physics are chosen in nature" (Smolin 2006: 13).
"Over the last decades, astronomers have done a very simple experiment in which they measure the distribution of mass in a galaxy in two different ways and compare the results. First they measure the mass by observing the orbital speeds of the stars; second they make a more direct measurement of the mass by counting all the stars, gas, and dust they can see in the galaxy. The idea is to compare the two measurements: Each should tell them both the total mass of the galaxy and how it is distributed. Given that we understand gravity well, and that all known forms of matter give off light, the two methods should agree" (Smolin 2006: 14).
"They don't. Astronomers have compared the two methods of measuring mass in more than a hundred galaxies. In almost all cases, the two measurements don't agree, and not by just a small amount but by factors of up to 10. Moreover, the error always goes in one direction: There is always more mass needed to explain the observed motions of the stars than is seen by directly counting up all the stars, gas, and dust" (Smolin 2006: 14).
In other words: dark matter not visible by normal observations.
"We have recently discovered that when we make observations at still larger scales, corresponding to billions of light-years, the equations of general relativity are not satisfied even when the dark matter is added in. The expansion of the universe, set in motion by the Big Bang some 13.7 billion years ago, appears to be accelerating, whereas, given the observed matter plus the calculated amount of dark matter, it should be doing the opposite - decelerating" (Smolin 2006: 15).
So either general relativity is wrong, "Or there is a new form of matter - or energy (recall Einstein's famous equation E = mc^2, showing the equivalence of energy and mass) - that becomes relevant on these very large scales: That is, this new form of energy affects only the expansion of the universe. To do this, it cannot clump around galaxies or even clusters of galaxies. This strange new energy, which we have postulated to fit the data, is called the dark energy" (Smolin 2006: 15).
"Most kinds of matter are under pressure, but the dark energy is under tension - that is, it pulls things together rather than pushed them apart. For this reason, tension is sometimes called negative pressure. In spite of the fat that the dark energy is under tension, it causes the universe to expand faster. If you are confused by this, I sympathize. One would think that a gas with negative pressure would act like a rubber band connecting the galaxies and slow the expansion down. But it turns out that when the negative pressure is negative enough, in general relativity it has the opposite effect. It causes the expansion of the universe to accelerate" (Smolin 2006: 15-16).
"Fully 70 percent of the matter density appears to be in the form of dark energy. Twenty-six percent is dark matter. Only 4 percent is ordinary matter. So less than 1 part in 20 is made out of matter we have observed experimentally or described in the standard model of particle physics. Of the other 96 percent, apart from the properties just mentioned, we know absolutely nothing" (Smolin 2006: 16).
And I think it is non-Planck matter and energy.
"Problem 5: Explain dark matter and dark energy. Or, if they don't exist, determine how and why gravity is modified on large scales. More generally, explain why the constants of the standard model of cosmology, including the dark energy, have the values they do" (Smolin 2006: 16).
"The chief lesson of general relativity was that there is no fixed-background geometry for space and time; ignoring this meant that you could simply choose the background. This sent us back toward a Newtonian point of view, in which particles and fields inhabit a fixed background of space and time - a background whose properties are fixed eternally. Thus the theories that developed from ignoring gravity are background-dependent" (Smolin 2006: 54).
"Once quantum mechanics was fully formulated, the quantum theorists turned their attention to unifying electromagnetism with quantum theory. As the basic phenomena of electromagnetism are fields, the unification that would eventually result is called quantum field theory. And because Einstein's special theory of relativity is the right setting for electromagetism, these theories can also be seen as unifications of quantum theory with special relativity" (Smolin 2006: 55).
"The quantum theorists already knew that for every electromagnetic wave there is a quantum particle, the photon. It took only a few years to work this out in detail, but the result was just a theory of photons moving freely, the next step would be to incorporate charged particles, such as electrons or protons, and describe how they interact with photons. This goal was a fully consistent theory of quantum electrodynamics, or QED" (Smolin 2006: 55).
"One QED was understood, the task was to extend quantum field theory to the strong and weal nuclear forces. This would take another quarter century, and the key would be the discovery of two new principles: The first defined what electromagnetism and these nuclear interactions have in common. It is called the gauge principle, and as I will describe, it leads to a unification of all three forces. The second principle explains why, although unified, the three forces are so different. It is called spontaneous symmetry breaking. These two principles together form the cornerstone of the standard model of particle physics. Their precise application had to await the discovery that particles like the proton and neutron are not elementary after all, instead, they are made of quarks" (Smolin 2006: 55-56).
"The gauge principle is best understood in terms of something physicists refer to as symmetry. Put simply, a symmetry is an operation that doesn't change how something behaves relative to the outside world. For example, if you rotate a ball, you don't change it; it's still a sphere. So when physicists talk of a symmetry, they can be referring to an operation in space, like rotation, that doesn't change the result of an experiment. But they can also be talking about any kind of change we make to an experiment that doesn't alter the outcome. For example, suppose you take two groups of cats--- say, east-side cats and west-side cats -- and you test their abilities in jumping. If there is no difference in the average jump a cat can make, then we say that cat-jumping is symmetric under the operation of trading all your east-side cats for west-side cats" (Smolin 2006: 56-57).
This makes me try to visualize the universe as a surface and its expansion as a symmetry, so that expansion does not change its operation as space.
"But there are special situations in which the symmetries completely determine the forces. This turns out to be the case for a special class of forces called gauge forces" (Smolin 2006: 57).
"There are two things we do need to know about the gauge principle. One is that the forces it leads to are conveyed by particles called gauge bosons. The other thing we need to know is that the electromagnetic, string, and weak forces each turned out to be forces of this kind. The gauge boson that corresponds to the electromagnetic force is called the photon. Those that correspond to the strong force holding the quarks together are called gluons. Those corresponding to the weak force have a less interesting name - they are called, simply, weak bosons" (Smolin 2006: 57-58).
"The phenomena you hope to unify are different - otherwise there would be nothing surprising about their unification. So even if you discover some hidden unity, you still have to understand why and how it is that they appear so different" (Smolin 2006: 58).
"....Einstein had a wonderful way of solving this problem for special and general relativity. He realized that the apparent differences between the phenomena were not intrinsic to the phenomena but were due entirely to the necessity of describing the phenomena from the viewpoint of an observer. Electricity and magnetism, motion and rest, gravity and acceleration were all unified by Einstein in this way. The difference that observers perceive between them are therefore contingent, because they reflect only the viewpoint of the observer" (Smolin 2006: 58-59).
"In the 1960's, a different solution to this general problem was proposed: The differences between unified phenomena were contingent, but not because of a viewpoint of particular observers. Instead, physicists made what seems at first an elementary observation: The laws may have a symmetry that is not respected by all features of the world they apply to" (Smolin 2006: 59).
"...physicists say that the symmetry is spontaneously broken. By this we mean that it is necessary that the symmetry break, but that how it breaks is highly contingent. This spontaneous symmetry breaking is the second great principle the underlies the standard model in particle physics" (Smolin 2006: 59-60).
"This mechanism of spontaneous symmetry breaking can happen to the symmetries between particles in nature. When it occurs for the symmetries that, by the gauge principle, give rise to the forces of nature, it leads to the differences in their properties. The forces become distinguished, they can have different ranges and different strengths. Before the symmetry breaks, all four fundamental forces can have an infinite range, like electromegnetism, but afterward some will have finite range, like the two nuclear forces" (Smolin 2006: 60).
"...there is a particle whose existence is a consequence of spontaneous symmetry breaking. This is called the Higgs boson" (Smolin 2006: 61). Named for Peter Higgs who was at Edinburgh University at the time.
"The use of spontaneous symmetry breaking in a fundamental theory was to have profound consequences, not just for the laws of nature but for the larger question of what a law of nature is. before this, it was thought that the properties of the elementary particles are determined directly by eternally given laws of nature. but in a theory with spontaneous symmetry breaking, a new element enters, which is that the properties of the elementary particles depend in part on history and environment. The symmetry may break in different ways, depending on conditions like density and temperature. More generally, the properties of the elementary particles depend not just on the equations of the theory but on which solution to those equations applies to our universe" (Smolin 2006: 61).
In reading this I wondered if the physicists that predict the future of our universe need to include the possibility of future symmetry breaking, changing the fundamental nature of nature. My question was answered.
"It opens up the possibility that many - or even all - properties of elementary particles are contingent and depend on which solution of the laws is chosen in our region of the universe or in our particular era. They could be different in different regions. They could even change in time" (Smolin 2006: 62).
"In the early 1970's, the gauge principle was applied to the strong nuclear force, the force that binds the quarks, and it was found that a gauge field is responsible for that force, too. The resulting theory is called quantum chromodynamics, or QCD for short" (Smolin 2006: 62).
"The discovery that all three forces are expressions of a single unifying principle - the gauge principle - is the deepest accomplishment of theoretical physics to date" (Smolin 2006: 62).
"All three forces were now understood to be expressions of the same principle, and it was obvious that they should be unified. To unify the particles, however, you need a big symmetry that includes them all" (Smolin 2006: 63).
"Of the big ideas that have been invented and studied during these years, the one that has gotten the most attention is called supersymmetry" (Smolin 2006: 67).
"Quantum theory says that particles are waves and waves are particles, but this does not really unify the particles with the forces. The reason is that in quantum theory there remain two broad classes of elementary objects. These are called fermions and bosons" (Smolin 2006: 67).
"Supersymmetry offers a way to unify these tow big classes of particles, the bosons and the fermions. And it dies so in a very creative way, by proposing that every known particle has a heretofore unseen superpartner" (Smolin 2006: 67).
"Fermions must obey the exclusion principle, invented by Wolfgang Pauli in 1925, which says that two fermions cannot occupy the same quantum state. This is why all electrons in an atom do not sit in the lowest orbital; once an electron is in a particular orbit, or quantum state, you cannot put another electron in the same state. The Pauli exclusion principle explains many properties of atoms and materials. Bosons, however, behave in the opposite way. They like to share states. When you put a photon into a certain state, you make it more likely that another photon will find its way to that same state. This affinity explains many properties of fields, like the electromagnetic field" (Smolin 2006: 67-68).
"The electrical repulsion between two protons is stronger than their gravitational attraction by a huge factor, around 1038. There are also huge differences in the masses of the particles. For example, the electron has 1/1,800 the mass of a proton. And the Higgs boson, if it exists, has a mass of at least 120 times that of the proton" (Smolin 2006: 70).
Smolin points out that there is a hierarchy of scales from the Planck mass to the vacuum energy of spacetime. "Why is nature so hierarchical? Why is the difference between the strength of the strongest force and weakest force so huge? Why are the masses of protons and electrons so tiny compared to the Planck mass or the unification scale? This problem is generally referred to as the hierarchy problem" (Smolin 2006: 71).
"The hierarchy problem contains two challenges. The first is to determine what sets the constants, what makes the ratios large. The second is how they stay there. This stability is puzzling, because quantum mechanics has a strange tendency to pull all masses together toward the value of the Planck mass" (Smolin 2006: 71).
I simply have the mental muddle of recognizing that the tendency to pull requires "time"... and if "particles" are SPACEtime, there is no "time" available for them to be pulled. They are out of the "time" loop in that respect? They are stable because they are outside the process of "time"? To add "time" converts the "particle of frozen space" into a "wave of massless time"???
"It turns out that to protect the mass of the Higgs from being pulled up to the Planck mass, we have to tune the constants of the standard model to the amazing precision of thirty-two decimal places" (Smolin 2006: 71).
Physicists began to create models with even more, even smaller particles that make up other particles. The suggestion of techniquarks, bound by a new QCD-like force, was dubbed Technicolor!
"In supersymmetry-theory convention, the superpartners of fermions begin with an 's,' like the selectron, while the superpartner of bosons end in 'ino'" (Smolin 2006: 74).
"A new superpartner is simply postulated to go along with each known particle. Not only are there squarks and sleptons and photinos, there are also sneutrinos to partner neutrinos, higgsinos with the Higgs, and gravitinos to go with gravitons. Two by two, a regular Noah's ark of particles. Sooner or later, tangled in the web of new snames and naminos, you begin to feel like Sbozo the clown. Or Bozo the clownino. Or swatever" (Smolin 2006: 75).
Don't tell me physics is not funny nor physicists lack a sense of humor.
"Doing this to the standard model of elementary-particle physics, with no additional assumptions, results in a contraption called the minimally supersymmetric standard model, os MSSM. As noted... the original standard model has about 20 free constants we have to adjust by hand to get predictions that agree with experiment. The MSSM adds 105 more free constants. The theorist is at liberty to adjust them all to ensure that the theory agrees with experiments. If this theory is right, the God is a techo-geek" (Smolin 2006: 75).
"The main lesson of general relativity is that the geometry of space is not fixed. It evolves dynamically, changing in time as matter moves about. There are even waves - gravitational waves - that travel through the geometry of space" (Smolin 2006: 81).
"It is important to absorb this point completely. The geometry of space is not part of the laws of nature. There is therefore nothing in those laws that specifies what the geometry of space is" (Smolin 2006: 81).
"This means that the laws of nature have to be expressed in a form that does not assume that space has any fixed geometry. This is the core of Einstein's lesson. We encapsulate it in a principle we described earlier, which is background independence" (Smolin 2006: 81).
"The key question for a quantum theory of gravity is then the following: Can we extend to quantum theory the principle that space has no fixed geometry? That is, can we make quantum theory background independent, at least with regard to the geometry of space? If we can do this, we will automatically merge gravity and quantum theory, because gravity is already understood to be an aspect of dynamical spacetime geometry" (Smolin 2006: 83).
"... a big problem because gravitational waves interact with each other. They interact with anything that has energy, and they themselves carry energy. (and) Once the gravitational waves interact with one another, they can no longer be seen as moving on a fixed background. They change the background as they travel" (Smolin 2006: 85).
Another big problem is with information and black holes. "During the life of a black hole, it will pull in huge amounts of matter, carrying huge amounts of intrinsic information. At the end, all that's left is a lot of hot radiation - which, being random, carries no information at all - and a tiny black hole. Did the information just disappear?" (Smolin 2006: 91).
"This is a puzzle for quantum gravity, because there is a law in quantum mechanics that says that information can never be destroyed. The quantum description of the world is supposed to be exact, and there is a result implying that when all the details are taken into account, no information can be lost. Hawking made a strong argument that a black hole that evaporates away loses information. This appears to contradict quantum theory, so he called this argument the black-hole information paradox. Any putative quantum theory of gravity needs to resolve it" (Smolin 2006: 91).
"Energy is mass. Motion and rest are indistinguishable. Acceleration and gravity are the same" (Smolin 2006: 96).
"In most theories, particle motion and the fundamental forces are two separate things. The laws of motion tell how the particle moves in the absence of external forces. Logically, there is no connection between that law and the laws that govern the forces" (Smolin 2006: 107).
"In string theory, the situation is very different. The laws of motion dictate the laws of the forces. This is because all forces in string theory have the same simple origin - they come from the breaking and joining of strings" (Smolin 2006: 107).
"As a one-dimensional string moves through time, it makes a two-dimensional surface in spacetime. This surface has a certain area, defined roughly as the product of its length and its duration in time." (Smolin 2006: 109).
"The string moves so as to minimize this area. That is the whole law. It explains the motion of strings and, once strings are allowed to break and join, the existence of all forces. It unifies all the forces we know with a description of the particles. And it is far simpler than the laws describing any of the things it unifies" (Smolin 2006: 109-110).
"Quantum theory - in particular, the uncertainty principle - appeared to require a huge cosmological constant. If something is exactly still, it has a definite position and momentum, and this contradicts the uncertainty principle, which says you cannot know both these things about a particle. A consequence is that even when the temperature is zero, things keep moving. There is a small residual energy associated with any particle and any degree of freedom, even at zero temperature. This is called the vacuum, or ground state energy. When quantum mechanics is applied to a field, such as the electromagnetic field, there is a vacuum energy for every mode of vibration of the field. But a field has a huge number of modes of vibration, hence quantum theory predicts a huge vacuum energy" (Smolin 2006: 152).
To be quite frank, I do not quite see the problem. Time is a fourth physical dimension, no different from the other three. Notice that the discussion of fields talks about modes of vibration, which are expressed through the fourth dimension, time. The vacuum, or ground-state, energy is, in my opinion, time. And the cosmological constant is an expression of the expansion of the fourth dimension: time. The illusion of cosmological expansion is related to, and diluted by, the effect of the fourth dimension on the other three dimensions? If there are even more dimensions, the effect is even more diluted? The relationship between location and momentum is dependent on time: variation in the fourth dimension.
"...the fact that we are in a biofriendly universe cannot be used as a confirmation of a theory that there is a vast population of universes" (Smolin 2006: 163).
Back to string theory: "The graviton, the particle that carries the gravitational force, comes out of the vibrations of loops (i.e., closed strings). The photon, carrier of the electromagnetic force, also emerges from the vibration of a string. The more complicated gauge fields, in terms of our understanding of the strong and weak nuclear forces, also come out automatically; that is, string theory predicts generally that there are gauge fields similar to these, although it does not predict the particular mix of forces we see in nature" (Smolin 2006: 183).
"Thus - at least on the level of the boson, or force carrying particles, on a background spacetime - string theory unifies gravity with the other forces. All four fundamental forces arise as vibrations of one fundamental kind of object, a string" (Smolin 2006: 183).
In other words, as I interpret this, a string is, at a minimum, a four-dimensional object in which the fourth dimension (time, i.e., vibration) is the key to unlocking string theory. The fourth dimension defines the nature of the string and its forces. Our senses are time dependent. We travel with time, not through time. Time must be a variable, but we cannot detect changes in time because we are within it. We can only look for possible effects of variations in time on the other dimensions. Time may be changing all the time and we cannot perceive it.
"What about unifying the bosons with the particles that make up matter, like quarks and electrons and neutrinos? It turns out that these also arise as states of vibrations of strings, when supersymmetry is added. Thus supersymmetric string theories unify all the different kinds of particles with one another" (Smolin 2006: 183).
"Moreover, string theory does all this with a simple law: that the strings propagate through spacetime so as to take on the least amount of area. Nor is there any need to have separate laws describing how particles interact; the laws by which strings interact follow directly from the simple law that describes how they propagate. And since the various forces and particles all are just vibrations of strings, the laws that describe them follow as well" (Smolin 2006: 184).
"Einstein's general theory of relativity is a background-independent theory. This means that the whole geometry of space and time is dynamical, nothing is fixed. A quantum theory of gravity should also be background-independent. Space and time should arise from it, not serve as a backdrop for the actions of strings" (Smolin 2006: 184).
Thus, as I understand it, TIME is dynamical and NOT fixed. What about the scale of time?
"Forces in nature are characterized by just a few numbers - for example, the distance over which a force travels and a charge to tell us how strong it is. What characterizes the cosmological constant is a scale, which is the distance scale over which it curves the universe. We can call this scale R. It is about 10 billion light years, or 10^27 centimeters. What is weird about the cosmological constant is that its scale is huge compared with other scales in physics. Scale R is 10^40 times the size of an atomic nucleus and 10^60 times the Planck scale (which is about 10^-20 times the size of a proton). So it is logical to wonder whether scale R might reflect some totally new physics" (Smolin 2006: 204-205).
Again, I think the cosmological scale is a field equal to the sum of the fourth dimension of strings: time.
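The scale comparisons Smolin quotes are easy to sanity-check with rough numbers (a quick sketch; constants are rounded, so only orders of magnitude are meaningful):

```python
# Quick sanity check of the scales quoted from Smolin.
LY_CM = 9.461e17            # one light-year in centimeters
R = 1e10 * LY_CM            # R ~ 10 billion light-years, in cm
planck = 1.6e-33            # Planck length, cm
nucleus = 1e-13             # typical nuclear size, cm (~1 femtometer)

print(f"R = {R:.1e} cm")                             # order 10^27-10^28 cm
print(f"R / nucleus = {R / nucleus:.1e}")            # order 10^40-10^41
print(f"R / Planck length = {R / planck:.1e}")       # order 10^60
print(f"Planck / nucleus = {planck / nucleus:.1e}")  # ~10^-20
```

The three quoted ratios hang together: 10^40 times a nucleus and 10^60 times the Planck length are consistent because the Planck length is itself about 10^-20 of a nuclear size.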
"It's an item of faith among cosmologists that at the largest scales the universe should be symmetric - that is, any one direction should be like any other direction. This is not what is seen. The radiation in these large scale modes is not symmetric; there is a preferred direction. (It has been called the "axis of evil" by the cosmologists Kate Land and Joao Magueijo.) No one has any explanation for this effect" (Smolin 2006: 208).
Again, a sense of humor.
"Consider R divided by the speed of light: R/c. This gives us a time, and the time it gives us is roughly the present age of our universe. The inverse, c/R, gives us a frequency - a very low note, one oscillation per lifetime of that universe" (Smolin 2006: 200).
"The next simplest thing to try is c^2/R. This turns out to be an acceleration. It is in fact the acceleration by which the rate of expansion of the universe is increasing - that is - the acceleration produced by the cosmological constant. Compared to ordinary scales, however, it is a very tiny acceleration: 10^-8 centimeters per second per second. Imagine a bug crawling across the floor. It manages to go perhaps 10 centimeters per second. If the bug doubled its speed over the lifetime of a dog, it would be accelerating about as much as c^2/R, a very small acceleration indeed" (Smolin 2006: 209).
"We do know things that accelerate this slowly. One example is a typical star orbiting in a typical galaxy. A galaxy orbiting another galaxy accelerates even more slowly. So, do we see anything different about the orbits of the stars with accelerations this tiny, compared to the orbits of stars with larger accelerations? The answer is yes, we do, and dramatically so. This is the problem of the dark matter" (Smolin 2006: 209).
"In each galaxy where the problem is found, it affects only stars moving outside a certain orbit. Within that orbit, there's no problem - the acceleration is what it should be if caused by the visible matter. So there seems to be a region in the interior of a galaxy within which Newton's laws work and there's no need for dark matter. Outside this region, things get messy" (Smolin 2006: 210).
"The key question is: Where is the special orbit that separates the two regions? We might suppose it occurs at a particular distance from the center of the galaxy. This is a natural hypothesis, but it is wrong. Is the dividing line at a certain density of stars or starlight? Again, the answer is no. What seems to determine the dividing line, surprisingly, is the rate of the acceleration itself. As one moves farther out from the center of the galaxy, accelerations decrease, and there turns out to be a critical rate that marks the breakdown of Newton's law of gravity. As long as the acceleration of the star exceeds this critical value, Newton's law seems to work and the acceleration predicted is the one seen. There is no need to posit any dark matter in these cases. But when the acceleration observed is smaller than the critical value, it no longer agrees with the prediction of Newton's law" (Smolin 2006: 210).
"What is this special acceleration? It has been measured to be 1.2 × 10^-8 centimeters per second per second. This is close to c^2/R, the value of the acceleration produced by the cosmological constant" (Smolin 2006: 210)!
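These dimensional-analysis coincidences are easy to verify numerically (a rough sketch; constants rounded, so only orders of magnitude are meaningful):

```python
c = 3.0e10                  # speed of light, cm/s
LY_CM = 9.461e17            # one light-year in centimeters
R = 1e10 * LY_CM            # cosmological scale R ~ 10 billion light-years, cm

t = R / c                   # a time: roughly the age of the universe
a = c ** 2 / R              # an acceleration
a0 = 1.2e-8                 # MOND critical acceleration, cm/s^2

print(f"R/c   = {t:.2e} s (~{t / 3.15e7:.1e} years)")  # ~10 billion years
print(f"c^2/R = {a:.1e} cm/s^2")                       # within an order of a0
print(f"ratio to MOND's a0: {a / a0:.1f}")
```

With these rounded inputs c^2/R comes out within a factor of ten of the measured 1.2 × 10^-8 cm/s^2, which is the "close to" that makes the coincidence tantalizing.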
"The other possibility is that there is no dark matter and Newton's law of gravity breaks down whenever accelerations get as small as the special value of c^2/R. In this case, there needs to be a new law that replaces Newton's law in these circumstances. In his 1983 paper, Milgrom proposed such a theory. He called it MOND, for 'modified Newtonian dynamics'" (Smolin 2006: 211).
"It would be easy to disregard MOND if not for the fact that Milgrom's law suggests that the scale of the mysterious cosmological constant somehow bears on whatever is determining how stars move in galaxies. Just from the data, it appears that the acceleration c^2/R plays a key role in how stars move" (Smolin 2006: 213).
Recently, astronomers using the Keck telescopes in Hawaii examined the spectra of light from very ancient quasars to see if alpha (the square of the electron charge divided by the product of the speed of light and Planck's constant) has always been the same. "They deduced from their data that around 10 billion years ago, alpha was smaller by about 1 part in 10,000" (Smolin 2006: 217).
"This is a small change, but if it holds up, it is a momentous discovery, the most important in decades. This would be the first time that a fundamental constant of nature had been seen to vary over time" (Smolin 2006: 217). In other words, a physics constant is a variable!
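Alpha itself can be computed from tabulated cgs constants (a sketch; note that alpha is conventionally defined with the reduced Planck constant ħ):

```python
# Fine-structure constant in Gaussian (cgs) units: alpha = e^2 / (hbar * c)
e = 4.803e-10        # electron charge, esu
hbar = 1.055e-27     # reduced Planck constant, erg*s
c = 2.998e10         # speed of light, cm/s

alpha = e ** 2 / (hbar * c)
print(f"alpha = {alpha:.6f}, i.e. about 1/{1 / alpha:.1f}")  # ~1/137
# The Keck claim is a shift of roughly alpha / 10,000:
print(f"claimed shift: ~{alpha / 1e4:.1e}")
```

Being dimensionless, alpha comes out the same in any unit system, which is why a measured drift in it (rather than in e, ħ, or c separately) would be unambiguous evidence of a varying "constant."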
"Yet another manifestation of the scale R may be the mysterious neutrino masses. You can convert the length scale R to a mass scale, using just the fundamental constants of physics, and the result is the same order of magnitude as the differences between the masses of the various kinds of neutrinos. No one knows why neutrinos, the lightest particles there are, should have masses related to R, but there it is - another tantalizing hint" (Smolin 2006: 217).
Smolin has worked on what is called deformed or doubly special relativity (DSR). Part of this theory relates to the Planck length: "The question is, will all observers agree on what this shortest length is" (Smolin 2006: 227)?
"According to Einstein's special theory of relativity, different observers disagree on the length of moving objects. An observer riding on the meter stick will say it is a meter long. But any observer moving with respect to it will observe it to be shorter. Einstein called this the phenomenon of length contraction" (Smolin 2006: 227).
"But this implies that there cannot be such a thing as a 'shortest length.' No matter how short something is, you can make it still shorter by moving relative to it at very close to the speed of light. Thus there appears to be a contradiction between the idea of the Planck length and special relativity" (Smolin 2006: 227).
In DSR II, it was proposed that, in the beginning, the speed of light was faster. "As you go farther back in time and the temperature approaches the Planck energy, the speed of light becomes infinite. It took somewhat longer to show that this led to a version of a variable-speed-of-light theory that was consistent with the principles of general relativity, but we eventually got there, too. We call this theory 'Gravity's Rainbow,' after Thomas Pynchon's novel" (Smolin 2006: 233).
I believe that my Planck model is quantum mechanical and that spacetime is an emergent phenomenon arising from the interaction between a Planck-scale "brane" and a non-Planck-scale "brane"... to put it in "brain" terminology. The causal structure of spacetime arises from this interaction and determines the emergent geometry of spacetime. Singularity "time" continues into the emergent non-Planck space. Black hole information continues into the emergent non-Planck spacetime, eliminating the information paradox.
Lee Smolin (1997: 156) defines life as:
1) a self-organized non-equilibrium system such that
2) its processes are governed by a program which is stored symbolically and
3) it can reproduce itself, including the program (with redundancy and variations).
But what is life? In Marc Kaufman's book "First Contact: Scientific Breakthroughs in the Hunt for Life Beyond Earth" (2011:57) this issue is discussed. As extremophiles are discovered all over the earth, this question becomes more muddied. Evolutionary biologist Andrew Ellington (University of Texas, Austin) declares "It is my position that there is no such thing as life" (Kaufman 2011:38). "If we haven't figured out what life is by now, there is little hope that we will figure out a definitive definition in the near term, and there is no research program that I can imagine, at any price, that will provide such a definition" (Kaufman 2011:38).
Life has been found between 12 and 25 miles up in the stratosphere in high ultraviolet light. Life has been found as deep into the earth as humans have drilled or mined. Life has been found inside glacial ice nearly three miles down, in fact a living ecology where bacteria produce a protein that alters the freezing of ice crystals (Kaufman 2011:24-33).
"Remarkably, even though scientists fully understand neither the physical basis of life nor the unfolding of the universe, they can make educated guesses about the destiny of living things. Cosmological observations now suggest that the universe will continue to expand forever - rather than, as scientists once thought, expanding to a maximum and then shrinking. Therefore, we are not doomed to perish in a fiery "big crunch" in which any vestige of our current or future civilization would be erased. At first glance, eternal expansion is cause for optimism. What could stop a sufficiently intelligent civilization from exploiting the endless resources to survive indefinitely" (Krauss & Starkman 2002:52)?
"Yet life thrives on energy and information, and very general scientific arguments hint that only a finite amount of energy and a finite amount of information can be amassed in even an infinite period. For life to persist, it would have to make do with dwindling resources and limited knowledge. We have concluded that no meaningful form of consciousness could exist forever under these conditions" (Krauss & Starkman 2002:52).
"Life as we know it depends on stars. But stars inevitably die, and their birth rate has declined dramatically since the initial burst about 10 billion years ago. About 100 trillion years from now, the last conventionally formed star will wink out, and a new era will commence. Processes currently too slow to be noticed will become important: the dispersal of planetary systems by stellar close encounters, the possible decay of ordinary and exotic matter, the slow evaporation of black holes" (Krauss & Starkman 2002:53).
"Even after an eternity of hard and well-planned labor, living beings could accumulate only a finite number of particles, a finite quantity of energy and a finite number of bits of information. What makes this failure all the more frustrating is that the number of available particles, ergs and bits may grow without bound. The problem is not necessarily the lack of resources, but rather the difficulty in collecting them" (Krauss & Starkman 2002:53).
"The culprit is the very thing that allows us to contemplate an eternal tenure: the expansion of the universe. As the cosmos grows in size, the average density of ordinary sources of energy declines. Doubling the radius of the universe decreases the density of atoms eightfold. For light waves, the decline is even more precipitous. Their energy drops by a factor of 16 because the expansion stretches them and thereby saps their energy" (Krauss & Starkman 2002:53).
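The eightfold and sixteenfold drops follow directly from how densities scale with the size of the universe (a minimal sketch, writing the expansion factor as a):

```python
# Matter dilutes with volume (1/a^3); radiation both dilutes and redshifts,
# so its energy density falls as 1/a^4 - hence 8-fold vs 16-fold on doubling.
def matter_density(rho0, a):
    return rho0 / a ** 3

def radiation_density(rho0, a):
    return rho0 / a ** 4

print(matter_density(1.0, 2))     # 0.125: an 8-fold drop
print(radiation_density(1.0, 2))  # 0.0625: a 16-fold drop
```

The extra factor for light is exactly the stretching of wavelengths that the quote describes: each photon's energy falls as 1/a on top of the 1/a^3 volume dilution.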
"As a result of this dilution, resources become ever more time-consuming to collect. Intelligent beings have two distinct strategies: let the material come to them or try to chase it down. For the former, the best approach in the long run is to let gravity do the work. Of all the forces of nature, just gravity and electromagnetism can draw things in from arbitrarily far away. But the latter gets screened out: oppositely charged particles balance one another, so that the typical object is neutral and hence immune to long-range electrical and magnetic forces. Gravity, on the other hand, cannot be screened out, because particles of matter and radiation only attract gravitationally; they do not repel" (Krauss & Starkman 2002:53).
"Even gravity, however, must contend with the expansion of the universe, which pulls objects apart and thereby weakens their mutual attraction. In all but one scenario, gravity eventually becomes unable to pull together larger quantities of material. Indeed, our universe may have already reached this point; clusters of galaxies may be the largest bodies that gravity will ever be able to bind together. The lone exception occurs if the universe is poised between expansion and contraction, in which case gravity continues indefinitely to assemble increasingly greater amounts of matter. But that scenario is now thought to contradict observations, and in any event it poses its own difficulties: after 10^33 years or so, the accessible matter will become so concentrated that most of it will collapse into black holes, sweeping up any life-forms. Being inside a black hole is not a happy condition. On the earth, all roads may lead to Rome, but inside a black hole, all roads lead in a finite amount of time to the center of the hole, where death and dismemberment are certain" (Krauss & Starkman 2002:53,54).
"The cosmic dilution of energy is truly dire if the universe is expanding at an accelerating rate. All distant objects that are currently in view will eventually move away from us faster than the speed of light and, in doing so, disappear from view. The total resources at our disposal are therefore limited by what we can see today, at most" (Krauss & Starkman 2002:54).
Based on what was said about the creation of carbon, iron, etc. in stars, and that the universe cooks up the stuff needed for life, does this mean life is a natural outcome? The best book I have seen on this issue is Paul Davies' The Fifth Miracle: The Search for the Origin and Meaning of Life. As he notes, some scientists now believe that the universe is rigged in favor of life... that life will evolve if conditions are right. This has been bolstered by the finding of planets around stars, the assumption being that more planets means greater chances for life to evolve. The finding of organic chemicals in space, comets and meteorites has had its impact as well. The ease with which organic molecules are created by "simple" lab experiments has contributed to this point of view.
Davies looks at all of this data carefully. He points out that life is not only molecular hardware but also is a complex information-processing software system. While it is easy to examine the hardware, it is a huge leap from that into the software: "Whatever remarkable chemistry may have occurred on the primeval Earth or some other planet, life was sparked not by a molecular maelstrom as such, but - somehow! - by the organization of information" (Davies 1999: 19).
But keep in mind that Davies thinks life was an act of god, not an act of random nature, which will come out in the following quotes.
The oldest true animal fossils known (Ediacara) date to about 560 million years ago. There were probably one-celled organisms around long before. Some people think life in some form is perhaps 3.8 billion years old, or older. This is based on carbon-isotope ratios in rocks.
"Biological complexity is instructed complexity or, to use modern parlance, it is information-based complexity.... I shall argue that it is not enough to know how life's immense structural complexity arose; we must also account for the origin of biological information" (Davies 1999: 31).
Davies lists the following attributes: autonomy; reproduction; metabolism; nutrition; complexity; organization; growth and development; information content; hardware/software entanglement; permanence and change as key variables in defining life.
"Inside each and every one of us lies a message. It is inscribed in an ancient code, its beginnings lost in the mists of time. Decrypted, the message contains instructions on how to make a human being. Nobody wrote the message; nobody invented the code. They came into existence spontaneously. Their designer was Mother Nature herself, working only within the scope of her immutable laws and capitalizing on the vagaries of chance. The message isn't written in ink or type, but in atoms, strung together in an elaborately arranged sequence to form DNA, short for deoxyribonucleic acid. It is the most extraordinary molecule on Earth" (Davies 1999: 40-41).
"DNA is incredibly, unimaginably ancient. It most certainly existed three and a half billion years ago. It makes nonsense of the phrase "old as the hills": DNA was there long before any surviving hills on Earth. Nobody knows how or where the first DNA molecule formed" (Davies 1999: 41).
In essence, all life is a thermodynamic machine. All living things obey the second law of thermodynamics. Life makes order, but at the expense of the greater environment. As long as there is any source of free energy, life is possible. What is probable is a much smaller set. Life also taps into metastable states, using enzymes to catalyze reactions that would otherwise be slow or not happen at all. As Shannon has shown, information is the opposite of entropy. Living systems exploit both energy and information from the environment. As Davies noted, "This is essentially what Schrödinger meant when he said that an organism makes a living by 'drinking orderliness'" (Davies 1999: 57).
"The error catastrophe is crucially important for the problem of biogenesis. In modern organisms, sophisticated proofreading and error-correction mechanisms are employed to keep the error rate down. Cells can call upon a suite of enzymes, evolved over billions of years, to finesse the copying process. No such enzymes would have been available to the first organisms. Their replication must have been extremely error-prone. According to Eigen's rule, this means that the genomes of the first organisms (or the prebiotic replicators) must have been very short in length if they were to evade this error catastrophe. But here we hit a paradox. If a genome is too short, it can't store enough information to build the copying machinery itself. Eigen believes that the simplest replication equipment requires much more information than could ever have been accommodated in a primitive nucleic-acid sequence. To reach the sort of length needed for the necessary copying enzymes, the genome risks falling foul of the very error catastrophe it is trying to combat. To put it simply: complex genomes demand reliable copying, and reliable copying demands complex genomes" (Davies 1999: 59-60).
Davies makes a key point that connects us back to the universe... DNA stores the instructions to build a functional organism: it contains information - where did the information content of the universe come from? Recall It from Bit. Gravity acted to concentrate mass and create stars, solar systems, galaxies, etc.
"Just as life seems to go 'the wrong way' thermodynamically, so too does gravitation go 'the wrong way.' A smooth gas grows into something clumpy and complex. Order appears spontaneously. In informational terms, this seems all back to front. A uniform gas, by its very simplicity, can be described with very little information, whereas a star cluster or a galaxy requires a lot of information to describe it. In some yet ill-understood way, a huge amount of information evidently lies secreted in the smooth gravitational field of a featureless, uniform gas. As the system evolves, the gas comes out of equilibrium, and information flows from the gravitational field to the matter. Part of this information ends up in the genomes of organisms, as biological information" (Davies 1999: 64).
"The upshot of these gravitational processes was that an entropy gap opened up in the universe, a gap between the actual entropy and the maximum possible entropy. The flow of starlight is one process that is attempting to close the gap, but in fact all sources of free energy, including the chemical and thermal energy inside the Earth, can be attributed to that gap. Thus all life feeds off the energy gap that gravitation has created. The ultimate source of biological information and order is gravitation" (Davies 1999: 64).
Thus there is a deep connection between life and the origin and development of the universe (page 66), and information is a global quantity (page 67). Davies also believes that meaningful information is closely tied to complexity, another aspect of life: non-linear feedback systems. He suggests that "we will not be able to trace the origin of biological evolution to the operation of local physical forces and laws" (page 67).
Lab experiments have shown that, under specific conditions, complex organic chemicals can be produced. Davies (pages 86-87) talks about how the famous Urey/Miller experiment with methane, ammonia, hydrogen, water and a spark to produce amino acids is misleading. The gas mixture was wrong. The best guess for Earth's early atmosphere is carbon dioxide and nitrogen, which does not yield amino acids. In addition, the water base is a problem:
"The second step on the road to life, or at least the road to proteins, is for amino acids to link together to form molecules known as peptides. A protein is a long peptide chain, or a polypeptide. Whereas the spontaneous formation of amino acids from an inorganic chemical mixture is an allowed downhill process, coupling amino acids together to form peptides is an uphill process. It therefore heads in the wrong direction, thermodynamically speaking. Each peptide bond that is forged requires a water molecule to be plucked from the chain. In a watery medium like a primordial soup, this is thermodynamically unfavorable. Consequently, it will not happen spontaneously: work has to be done to force the newly extracted water molecule into the water-saturated medium... So a watery soup is a recipe for molecular disassembly, not self-assembly" (Davies 1999: 89).
Davies says that it has been calculated that if the entire universe were the watery soup, perhaps once in its history a peptide bond would be made... but if it is strongly heated, driving out the water as steam... such linkages are much more possible (page 90).
"So far I have just been talking about making proteins by linking amino acids into peptides. But proteins are only a small part of the intricate fabric of life. There are lipids and nucleic acids and ribosomes, and so on. And we hit yet another snag. It is possible that scientists using complicated and delicate laboratory procedures may be able to synthesize piecemeal the basic ingredients of life. What is far less likely is that the same set of procedures would yield all the required pieces at the same time. Thus, not only is there a mystery about the self-assembly of large, delicate, and very specifically structured molecules from an incoherent mêlée of bits, there is also the problem of producing, simultaneously, a collection of many different types of molecules....
No single molecule carries the spark of life, no chain of atoms alone constitutes an organism. Even DNA, the biological supermolecule, is not alive. Pluck DNA from a living cell and it would be stranded, unable to carry out its familiar role. Only within the context of a highly specific molecular milieu will a given molecule play its role in life. To function properly, DNA must be part of a large team, with each molecule executing its assigned task alongside the others in a cooperative manner" (Davies 1999: 92).
He goes on to say "If everything needs everything else, how did the community of molecules ever arise in the first place?" Good question. It is obvious that the earliest "living" things were simple and "far sloppier" than any living organism today, with their billions of years of evolutionary history (experience and selection). "Crude machines are more robust than sophisticated ones" (page 93).
"In the previous section, I presented the fantastic odds against shuffling amino acids at random into the right sequence to form a protein molecule by accident. That was a single protein. Life as we know it requires hundreds of thousands of specialized proteins, not to mention nucleic acids. The odds against producing just the proteins by pure chance are something like 10^40,000 to 1. This is one followed by forty thousand zeros, which would take up an entire chapter of this book if I wanted to write it out in full. Dealing a perfect suit of cards a thousand times in a row is easy by comparison. In a famous remark, the British astronomer Fred Hoyle likened the odds against the spontaneous assembly of life to those for a whirlwind sweeping through a junkyard and producing a fully functioning Boeing 747" (Davies 1999: 95).
"On a recent trip to Europe to attend a conference on extraterrestrial life, I flipped through the airline's in-flight entertainment guide, only to find that the search for life beyond Earth was on offer as part of their program. The promotional description said "With a half-trillion stars wheeling through the spiral patterns of the Milky Way Galaxy, it seems illogical to assume that among them only one world supports intelligent life". The use of the word "illogical" was unfortunate, because the logic is perfectly clear. There are indeed a lot of stars - at least ten billion billion in the observable universe. But this number, gigantic as it may appear to us, is nevertheless trivially small compared with the gigantic odds against the random assembly of even a single protein molecule. Though the universe is big, if life formed solely by random agitation in a molecular junkyard, there is scant chance it has happened twice" (Davies 1999: 95).
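The sheer scale of such numbers is easier to grasp with logarithms than with zeros. The sketch below is a toy calculation, not Davies's or Hoyle's actual derivation: the chain length (200 residues), the proteome size (150 proteins), and the assumption that every position must be exactly right are all illustrative choices.

```python
import math

def log10_odds(n_amino_acids=20, chain_length=200, n_proteins=150):
    """Log10 of the odds against assembling n_proteins exact amino
    acid sequences by chance (toy model: every position must be
    right; real proteins tolerate many substitutions, so this
    overstates the difficulty)."""
    per_protein = chain_length * math.log10(n_amino_acids)
    return n_proteins * per_protein

# A single 200-residue protein: odds of about 10^260 to 1 against.
print(round(log10_odds(n_proteins=1)))   # 260
# A toy proteome of 150 such proteins lands near Davies's
# "one followed by forty thousand zeros".
print(round(log10_odds()))               # 39031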
But Davies has an axe to grind against natural evolution. Complex hydrocarbons are found scattered through the universe. All matter, living or dead, incorporates information about its environment consisting of other matter, energy and information. Some are pretty simple, or at least appear simple. But as noted, our knowledge of what matter, energy and information are is still limited. But all "things" map into this environmental complex through some form of matter, energy, and information exchange. A sandy beach is a sandy beach because of its environmental relationships. A chemical molecule is what it is because of its environmental relationships.
Life is a highly probable outcome, not a highly improbable one. In a 2008 experiment with slime molds, biophysicist Toshiyuki Nakagaki modified temperature and moisture conditions for 10 minutes in each hour for Physarum slime-mold amoebas crawling across an agar plate. They changed their behavior. They continued their response when the conditions were no longer changed for a while, gradually going back to their stable mode. But when a single temperature and moisture change was repeated, they resumed their previously learned response. They chemically "remembered" the environmental changes and responded without a brain or nerve tissue. Nakagaki suggested reconsidering the definition of intelligence.
"About 65 million years ago, the Earth was struck by an asteroid some 10 km in diameter with a mass of well over a trillion tonnes. We now know the immediate impact of this event: megatsunamis, global wildfires ignited by giant clouds of superheated ash and, of course, the mass extinction of land-based life on Earth." (The following quotes about the impact are from: http://www.technologyreview.com/blog/arxiv/27720/)
"But in recent years, astrobiologists have begun to study a less well known consequence: the ejection of billions of tons of life-bearing rocks and water into space. By some estimates, the impact could have ejected as much mass as the asteroid itself."
"The question that fascinates them is what happened to all this stuff."
"Today, we get an answer from Tetsuya Hara and buddies at Kyoto Sangyo University in Japan. These guys say a surprisingly large amount of Earth could have ended up not just on the Moon and Mars, as might be expected, but much further afield."
"In particular, they calculate how much would have ended up in other places that seem compatible for life: the Jovian moon Europa, the Saturnian moon Enceladus, and Earth-like exoplanets orbiting other stars."
"Their results contain a number of surprises. First, they calculate that almost as much ejecta would have ended up on Europa as on the Moon: around 10^8 individual Earth rocks in some scenarios. That's because the huge gravitational field around Jupiter acts as a sink for rocks, which then get swept up by the Jovian moons as they orbit."
"But perhaps most surprising is the amount that makes its way across interstellar space. Last year, we looked at calculations suggesting that more Earth ejecta must end up in interstellar space than all the other planets combined."
"Hara and co go further and estimate how much ought to have made its way to Gliese 581, a red dwarf some 20 light years from here that is thought to have a super-Earth orbiting at the edge of the habitable zone."
"They say about a thousand Earth-rocks from this event would have made the trip, taking about a million years to reach their destination." "Of course, nobody knows if microbes can survive that kind of journey or even the shorter trips to Europa and Enceladus. But Hara and buddies say that if microbes can survive that kind of journey, they ought to flourish on a super-Earth in the habitable zone."
"That raises another interesting question: how quickly could life-bearing ejecta from Earth (or anywhere else) seed the entire galaxy?"
"Hara and co calculate that it would take some 10^12 years for ejecta to spread through a volume of space the size of the Milky Way. But since our galaxy is only 10^10 years old, a single ejection event could not have done the trick."
"However, they say that if life evolved at 25 different sites in the galaxy 10^10 years ago, then the combined ejecta from these places would now fill the Milky Way."
"There's an interesting corollary to this. If this scenario has indeed taken place, Hara and co say: "then the probability is almost one that our solar system is visited by the microorganisms that originated in extra solar system.""
Davies explores DNA and its four-base code (A, G, C and T). Perhaps the best line in his book is found in this description: "DNA contains the total information needed to build and operate the organism to which it belongs. Viewed like this, life is just a string of four-letter words" (page 104). DNA, he notes, is helpless without its proteins that build things like cell walls and act as enzymes to supervise and accelerate chemical reactions required to fuel life. RNA uses four bases as well (A, G, C and U) and one form serves as a messenger between the DNA and the places where proteins are made. RNA may be the key candidate for the earliest replicating molecule from which life originated. RNA does many tasks and is itself a weak enzyme. In an experiment, Sol Spiegelman used a small RNA virus called Qß in a medium with its own replication enzyme plus a supply of salts and raw materials. He decanted some into a new medium, and then did this a number of times. The RNA dropped parts of its genome, going from 4,500 bases to 220 so it could replicate as fast as possible. It was called Spiegelman's monster (Davies 1999: 126-127). In a later experiment by Manfred Eigen, the same medium and enzymes produced spontaneous RNA strands that reproduced... but this only worked with a specially prepared replication enzyme extracted from a living organism... so life was not created from non-life (Davies 1999: 127-128).
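The selection dynamic behind Spiegelman's monster (shorter templates replicate faster, so serial transfer strips a genome down to whatever minimum the enzyme will still copy) can be caricatured in a few lines. Everything quantitative below (deletion rate, deletion sizes, population cap) is invented for illustration; only the 4,500-to-220-base endpoints come from the text.

```python
import random

def serial_transfer(generations=200, seed=1):
    """Toy Spiegelman-style evolution: copies per generation scale
    inversely with genome length, deletions shorten genomes, and
    serial dilution caps the population. Illustrative only -- not
    the actual Qbeta chemistry."""
    random.seed(seed)
    pool = [4500] * 10                     # start with full-length genomes
    for _ in range(generations):
        offspring = []
        for length in pool:
            copies = 1 + 4500 // length    # shorter genomes copy faster
            for _ in range(copies):
                if random.random() < 0.1:  # occasional deletion mutation
                    length = max(220, length - random.randint(1, 400))
                offspring.append(length)
        pool = random.sample(offspring, min(100, len(offspring)))
    return sum(pool) / len(pool)           # mean genome length

# Mean length collapses from 4,500 toward the ~220-base floor.
print(serial_transfer())
```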
Because DNA uses an intermediary for many purposes, a single code change in the messenger is lethal in that it changes not one, but many dependent proteins... so normal Darwinian error/selection is questionable.
"A possible resolution has been suggested by Carl Woese. He thinks the code assignments and the translation mechanism evolved together. Initially there was only a rough-and-ready code, and the translation process was very sloppy. At this early stage, which is likely to have involved less than the present complement of twenty amino acids, organisms had to make do with very inefficient enzymes: the highly specific and refined enzymes life uses today had not yet evolved. Obviously some coding assignments would prove better than others, and an organism that employed the least error-prone assignments to code for its most-important enzymes would be on to a winner" (Davies 1999: 111).
Here is the kernel of it all: "The striking utility of encoded genetic information stems from the fact that amino acids "understand" it. The information distributed along a strand of DNA is biologically relevant. In computerspeak, genetic data are semantic data."
"Another way of expressing this is to say that genes and proteins require exceedingly high degrees of specificity in their structure. As I stated in my list of properties in chapter 1, living organisms are mysterious not for their complexity per se, but for their tightly specified complexity. To comprehend fully how life arose from nonlife, we need to know not only how biological information was concentrated, but also how biologically useful information came to be specified, given that the milieu from which the first organism emerged was presumably just a random mix of molecular building blocks. In short, how did meaningful information emerge spontaneously from incoherent junk" (Davies 1999: 112-113)?
"Viewed this way, the problem of the origin of life reduces to one of understanding how encoded software emerged spontaneously from hardware. How did it happen? How did nature "go digital"? We are dealing here not with a simple matter of refinement and adaptation, an amplification of complexity, or even the husbanding of information, but a fundamental change of concept" (Davies 1999: 115).
Here is a tough thing to understand... using the algorithmic definition of randomness already discussed earlier, a DNA sequence is more information-rich the more random it appears as a sequence... because its content cannot be reduced to any simpler formula! So an information-rich DNA sequence must look "random"... and genomes must incorporate this randomness. Thus, a functioning genome is a random sequence, but a sequence that encodes biologically relevant information!
"The conclusion we have reached is clear and it is profound. A functional genome is both random and highly specific - properties that seem almost contradictory. It must be random to contain substantial amounts of information, and it must be specific for that information to be biologically relevant. The puzzle we are then faced with is how such a structure came into existence. We know that chance can produce randomness and we know that law can produce a specific, predictable end-product. But how can both properties be combined into one process? How can a blend of chance and law cooperate to yield a specific random structure" (Davies 1999: 119-120)?
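Davies's "random yet specific" point leans on algorithmic information theory, where a sequence is random precisely when no shorter description of it exists. Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a usable upper bound, which is enough to show the contrast the quote describes (the sequences below are made up for the demonstration):

```python
import random
import zlib

def compressed_size(seq: str) -> int:
    """Compressed length: a computable upper-bound proxy for the
    algorithmic information content of a sequence."""
    return len(zlib.compress(seq.encode()))

repetitive = "GATTACA" * 500                        # 3,500 patterned bases
random.seed(0)
shuffled = "".join(random.choices("ACGT", k=3500))  # 3,500 random bases

# The patterned sequence collapses to a short description;
# the random one barely compresses at all.
print(compressed_size(repetitive), compressed_size(shuffled))
```

By this measure a functional genome sits near the "random" end while still being biologically meaningful, which is exactly the puzzle Davies poses.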
Davies reports that "Recently Reza Ghadiri of the Scripps Institute in San Diego discovered that some small peptide chains can indeed self-replicate. Moreover, they can apparently correct replication errors "as if they had a mind of their own"" (page 133). He also talks about Freeman Dyson's view that life is a fusion or symbiosis of two separate organic processes: a protein with metabolism and a replicative molecule that had no metabolism (page 134). Others have suggested that chaotic complexity in feedback was part of this process, where self-organization was a critical bifurcation: "That life is a consequence, not of special organic chemistry, but of universal mathematical rules that govern the behavior of all complex systems regardless of what they are made of." But then he goes on to point out that life is not just self-organization, but specified internal organization (Davies 1999: 140-141).
Davies is certain that the abundant life found deep inside the Earth's rocks is the basis for understanding biogenesis. Life probably evolved and survived the massive bombardment early in planetary evolution by living deep inside rock. The large impacts 3-4 billion years ago would have created surface temperatures up to 3000 degrees Celsius, melting rock to a depth of a kilometer. The finding of "superbugs", life that survives on sulfur, in salt, on hydrogen gas (making methane), or at low or high temperatures... is revolutionizing our view of life and its origins. Pyrodictium occultum, for example, was still alive after an hour in an autoclave at 121 degrees Celsius. At three kilometers down, the Taylorsville borehole found ten million bacteria per gram of rock. Seabed drills have found up to ten million bacteria at 750 feet, with the numbers rising with depth (pages 168-173). There is increasing evidence that the oldest life forms were hyperthermophiles that oxidized sulphur or hydrogen sulfide and did not need or want light... and many were oxygen haters... so as Davies notes, "Eden" was "Hell" (hot and sulphurous) (page 180).
Research on hot black smokers like the Lost City complex has redefined the limits of life. "In a series of recent studies over the past few years, biochemist William Martin of Heinrich-Heine University in Germany and geochemist Michael Russell of the NASA Jet Propulsion Laboratory in Pasadena examined the precise chemical steps required to produce methane abiotically, that is, without living organisms in environments such as in Lost City. They found that each step is replicated in the biological pathways of organisms that generate methane. From this work, Martin and Russell proposed that on the Early Earth, sites like Lost City produced methane geochemically and that primordial lifeforms may have simply co-opted each of the chemical steps themselves, leading to what might have been the origin of the biochemical pathway" (Bradley 2009: 66).
"In claiming that water means life, NASA scientists are not merely being upbeat about their project. They are making - tacitly - a huge and profound assumption about the nature of nature. They are saying, in effect, that the laws of the universe are cunningly contrived to coax life into being against the raw odds; that the mathematical principles of physics, in their elegant simplicity, somehow know in advance about life and its vast complexity. If life follows from soup with causal dependability, the laws of nature encode a hidden subtext, a cosmic imperative, which tells them: "Make life!" And, through life, its by-products: mind, knowledge, understanding. It means that the laws of the universe have engineered their own comprehension. This is a breathtaking vision of nature, magnificent and uplifting in its majestic sweep. I hope it is correct. It would be wonderful if it were correct. But if it is, it represents a shift in the scientific world-view as profound as that initiated by Copernicus and Darwin put together. It should not be glossed over with glib statements that water plus organics equals life, obviously, for it is far from obvious" (Davies 1999: 246).
Finding uncontaminated life of truly different origins would transform our basic metaphysical understanding of the universe. Given that matter was tossed from Mars to Earth, matter was also tossed from Earth to Mars... so life found on Mars could be contamination error. Totally foreign life on Mars, sharing no biological heritage with Earth, would be the most important discovery of our time. It would shake our metaphysical universe.
"Recent experiments suggest it would have been possible for genetic molecules similar to DNA or to its close relative RNA to form spontaneously. And because these molecules can curl up in different shapes and act as rudimentary catalysts, they may have become able to copy themselves - to reproduce - without the need for proteins. The earliest forms of life could have been simple membranes made of fatty acids - also structures known to form spontaneously - that enveloped water and these self-replicating genetic molecules. The genetic material would encode the traits that each generation handed down to the next, just as DNA does in all things that are alive today. Fortuitous mutations, appearing at random in the copying process, would then propel evolution, enabling these early cells to adapt to their environment, to compete with one another, and eventually to turn into the lifeforms we know" (Ricardo & Szostak 2009: 54-56).
"Chemists have long been unable to find a route by which nucleobases, phosphate and ribose (the sugar component of RNA) would naturally combine to generate quantities of RNA nucleotides" (Ricardo & Szostak 2009: 57).
"In 2005 Matthew Pasek and Dante Lauretta of the University of Arizona discovered that the corrosion of schreibersite in water releases its phosphorus component. This pathway seems promising because it releases phosphorus in a form that is both much more soluble in water than phosphate and much more reactive with organic (carbon-based) compounds" (Ricardo & Szostak 2009: 57).
"In the presence of phosphate, the raw materials for the nucleobases and ribose first form 2-aminooxazole, a molecule that contains part of a sugar and part of a C or U nucleobase. Further reactions yield a full ribose-base block and then a full nucleotide. The reactions also produce 'wrong' combinations of the original molecules, but after exposure to ultraviolet rays, only the 'right' versions - the nucleotides - survive" (Ricardo & Szostak 2009: 57).
"After chemical reactions created the first genetic building blocks and other organic molecules, geophysical processes brought them to new environments and concentrated them" (Ricardo & Szostak 2009: 58).
"In the water solutions in which they formed, nucleotides would have little chance of combining into long strands able to store genetic information. But under the right conditions - for example, if molecular adhesion forces brought them close together between microscopic layers of clay - nucleotides might link up into single strands similar to modern RNA" (Ricardo & Szostak 2009: 58).
"Once released from clay, the newly formed polymers might become engulfed in water-filled sacs as fatty acids spontaneously arranged themselves into membranes. These protocells probably required some external prodding to begin duplicating their genetic material and thus reproducing. In one possible scenario, the protocells circulated between cold and warm sides of a pond, which might have been partially frozen on one side (the early earth was mostly cold) and thawed on the other side by the heat of a volcano" (Ricardo & Szostak 2009: 59).
"On the cold side, single RNA strands acted as templates on which new nucleotides formed base pairs (with As pairing with Us and Cs with Gs), resulting in double strands. On the hot side, heat would break the double strands apart. Membranes could also slowly grow until the protocells divided into 'daughter' protocells, which could then start the cycle again" (Ricardo & Szostak 2009: 59).
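The cold-side copying step in this scenario is just complementary pairing; a minimal sketch (the example sequence is arbitrary):

```python
# Watson-Crick pairing for RNA: A pairs with U, G pairs with C.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def template_copy(strand: str) -> str:
    """Cold side: free nucleotides pair base-by-base against the
    template, yielding the complementary strand."""
    return "".join(PAIR[base] for base in strand)

template = "AUGGCUA"
first_copy = template_copy(template)    # complement of the template
# Heat separates the double strand; copying the copy regenerates
# the original sequence, closing the replication cycle.
assert template_copy(first_copy) == template
print(first_copy)                       # UACCGAU
```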
"Once reproduction cycles got going, evolution kicked in - driven by random mutations - and at some point the protocells gained the ability to reproduce on their own. Life was born" (Ricardo & Szostak 2009: 59).
Given the massive complexity of sub-system environmental niches on the surface and sub-surface of a planet like the earth, the probability of multiple eco-niches friendly towards assembly of complex organic chains must approach 100% as a limit. In my estimation, the probability of life forming on a planet within the free-water orbital zone is so high that its absence would be the more unlikely outcome.
From what I know about general systems theory and thermodynamics, life, by necessity, maps its environment in its thermodynamic requirements. Life must seek the raw materials (resources) required for self-maintenance, growth and replication. This mapping is the informational content of the organism... it is the "mind" (i.e. the physical context of the organism integrated into its structure and process). Thus mind (structure), knowledge (storage) and understanding (process) are an outgrowth of all life, no matter how simple.
That life is an informational machine appears to be a given. That the universe contains information is also a given. That only variety can modulate or constrain variety is a rule. That chaos creates information appears to be true... that, strangely enough... chaos is information (bad pun?). Oddly enough, therefore, if "god" is information then god is chaos... and "god" is the universe... if you believe in "god". That the universe exists is a given. Therefore, does "god" exist? If so, then not in the "form" most people think of a "god"... but simply as a process based in physics... impersonal... uncaring... a complex open-ended non-linear mutual-causal feedback system with unlimited possibilities within the contextual and historic constraints of our universe... therefore, you are "god" in the sense that everything you do, or do not do, changes the history and context of the universe.
The oldest known life on Earth has been dated to more than 3,465,000,000 years ago, from the Apex chert formation in Australia. This is a complex ecosystem consisting of bacteria colonies.
"If I am right about these relations, the presence of cyanobacteria in this early 3,500-Ma-old community tells us that early evolution proceeded very far very fast" (Schopf 1999: 97).
Schopf defined the universals of life: 1) water as the medium; and 2) CHON(SP) (Carbon, Hydrogen, Oxygen and Nitrogen, as well as Phosphorus and Sulphur). Water consists of H and O; protein consists of C, H, O, N and S; fat and carbohydrates consist of C, H and O; and DNA, RNA and ATP consist of C, H, O, N and P.
"Life is made of CHON because these elements are plentiful, four of the five most common in the Universe. (The fifth abundant element, helium, doesn't count. Helium is inert, nonreactive - a great gas for balloons, but an element unable to join with others to form robust chemical compounds.) There was plenty of CHON around when life got started. Moreover, all these elements are able to combine with one another to form small sturdy molecules such as methane (CH4), carbon dioxide (CO2), and ammonia (NH3), compounds that because they dissolve in water (H2O, another linked pair of the prime elements) can play an active role in the workings of life" (Schopf 1999: 108).
"All life is made from CHON(SP) and is composed of the same three dozen kinds of fundamental CHON(SP)-containing building block molecules. These small compounds, such as amino acids, sugars, and the purines and pyrimidines of DNA and RNA, are called monomers, and in all are linked together to form the same few kinds of large polymer molecules (such as proteins, carbohydrates, and nucleic acids) so important to life" (Schopf 1999: 108).
"We do know that organic compounds are widespread in the Cosmos. Diverse organic molecules in the gigantic dust clouds swirling through interstellar space have shown their telltale signatures in the microwave region of the electromagnetic spectrum. Discovered at a rate of about four compounds each year, more than eighty-five different kinds have been identified, the largest consisting of thirteen atoms (the hydrogen cyanidelike compound HC11N)" (Schopf 1999: 133).
"All constituents of Urey's primitive atmosphere (CH4, NH3, H2O, H2) have been identified in these enormous clouds, as have many of the organic compounds pivotal in Miller-type early-Earth experiments: hydrogen cyanide (HCN), key for synthesis of amino acids and nucleic acid purine bases (adenine and guanine); thioformaldehyde (CH2S), for sulfur-containing amino acids (cysteine and methionine); cyanoacetylene (HC3N), for nucleic acid pyrimidine bases (cytosine, uracil, and thymine); formaldehyde (H2CO), for monosaccharides, including the ribose sugar of RNA; acetaldehyde (CH3CHO), for the deoxyribose sugar of DNA; and cyanogen (C2N2) and cyanamide (H2NCN), for forming the monomer-to-monomer linkages of polypeptides, polysaccharides, and polynucleotides; even methanol (CH3OH, antifreeze) and ethanol (C2H5OH, the active agent of beer and liquor). All have been detected in prodigious quantities; in some interstellar clouds, they have a total mass greater than the Earth" (Schopf 1999: 133-134).
"Cells build polymers by adding monomers one at a time to the end of a growing polymeric chain. To make the strong chemical bridge (covalent bond) that attaches each monomer to the lengthening chain, organisms use a process known as dehydration condensation. Consider, for instance, how amino acids polymerize to form a protein. All amino acids-even the simplest, glycine (H2N-CH2-COOH)-are short linear molecules, at one end having a cluster of three atoms, the amino group H2N, and at the other a four-atom combination, COOH, the carboxylic acid group. Cells order amino acids head-to-tail in protein polymers by connecting the carbon of the COOH end of the growing polymer to the nitrogen of the H2N end of each new amino acid added. But before the connecting bridge can be fit in place, space has to be freed on the carbon and nitrogen atoms. With the help of a bridge-building enzyme and powered by cellular energy, this is done by cutting away OH (hydroxyl) from the COOH of the polymer and excising one of the hydrogens from the H2N of the amino acid, liberating chemical fragments that unite to form a molecule of water (OH + H → H2O). This water-forming bridge-building process is repeated over and over as amino acids are added until the protein is completed" (Schopf 1999: 147-138).
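The bookkeeping in Schopf's description (one water expelled per peptide bond) can be checked numerically. The molecular masses below are standard free-amino-acid values; the tripeptide is an arbitrary example:

```python
WATER = 18.015  # g/mol released per condensation reaction

# Free amino acid masses, g/mol (G = glycine, A = alanine, S = serine).
MASS = {"G": 75.07, "A": 89.09, "S": 105.09}

def peptide_mass(sequence: str) -> float:
    """Peptide mass = sum of free amino acid masses minus one water
    per bond: n residues require n-1 dehydration condensations."""
    bonds = len(sequence) - 1
    return sum(MASS[aa] for aa in sequence) - bonds * WATER

# Gly-Ala-Ser: two bonds forged, two waters forced into solution.
print(round(peptide_mass("GAS"), 2))   # 233.22
```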
Schopf notes that trying to do this in water is like trying to dry your hands with a wet cloth... but more on this later. What about the chicken-and-the-egg problem of nucleic acids that build enzymes but enzymes are needed to make nucleic acids? This may be solved by RNA.
"Like enzymatic proteins, ribozymes can cut molecules apart or paste them together, and a number of them can do both. Some are self-splicers, able to snip away one part of their own length and glue the leftovers back together. Others can cut out a section of themselves and move it to another spot in the molecule. Still others can engineer assembly of fresh RNA strands" (Schopf 1999: 141).
He feels that cells are like bubbles of soap. That because like dissolves like and because water is electrically polar, oil-like organic compounds will ball up. He believes the primordial soup was a bit soapy and soaps bridge the chemistry between water and oil.
"Cells originated by soaplike chemistry. The primordial soup was a dilute consommé in which hydrophobic organic compounds clumped together naturally because of their chemical makeup. Among these were chains of hydrogen and carbon, hydrocarbons like the tails of soap molecules, some of which had charged atoms at one end. Like soaps, these packed together to make up thin-skinned bubbles in which the charged hydrophilic atoms formed the outer surface and the hydrocarbon tails pointed inward, mixing with the organics clumped inside" (Schopf 1999: 142).
Schopf believes the bubbles contained organic chemicals such as primitive RNA. The process of snipping and adding in most bubbles led to nothing, but in at least one... it led to a double-film wall, later strengthened by proteins to become a primitive cell wall. He feels that the structure and replication process evolved together, but that metabolism evolved over time and that the heterotrophs evolved before the photoautotrophs for the following reasons.
"Among the earliest forms of life were some that lived by glycolysis, a form of fermentation (anaerobic metabolism) in which a molecule of the six-carbon sugar glucose (C6H12O6) is split in half to make two molecules of a three-carbon compound called pyruvate. This produces energy, given off when the chemical bonds of glucose are broken apart, some of which is stored for later use in a chemical known as ATP (adenosine triphosphate). Two units of energy (two energy-rich molecules of ATP) are made every time a molecule of glucose is broken down"
"Glycolysis dates from near life's beginnings. It is fundamental to life, present in all organisms, a package of ten enzyme-speeded steps too large to have been originated more than once. Moreover, it is chemically the simplest energy-making process in biology, takes place in the watery cytosol of cells (rather than needing membranes or organelles like later-evolved systems), yields much less energy than more advanced mechanisms, and is anaerobic like the early environment"
"Glycolysis requires glucose fuel. But Miller-type early-Earth experiments show that many other sugars were present also in the primordial soup. Why was glucose pegged as the universal fuel of life? Probably because it is especially sturdy, the least susceptible of all six-carbon sugars to break down by changes in temperature, acidity, and the like. In the harsh early environment, glucose was the sugar most likely to be available to life" (Schopf 1999: 150).
But such primitive life would quickly run out of sugar made by random chance. "Manufacture of glucose (technically, "glucose biosynthesis") involves eleven enzyme-aided steps. Seven of these use the same enzymes as glycolysis but operate in the opposite direction. Without changing glycolysis in any way, genes for seven of its enzymes were duplicated and their enzymes used along with four new ones to construct the glucose-making system. Rather than inventing a brand-new set of genes and enzymes, evolution was conservative and economical" (Schopf 1999: 151).
Ammonia, needed for its nitrogen, was also in short supply:
"The agent that evolved to harvest nitrogen (to combine it with hydrogen and "fix" it in the form of ammonia) is called the nitrogenase or Nif complex, and its driving force is a protein called ferredoxin. Because N2-fixation costs cells much energy, the Nif complex kicks in as a last resort, used only after supplies of ammonia and nitrate are exhausted. A system so costly would never have evolved had it not been crucial to life's survival.
The ferredoxin-driven Nif complex dates from early in Earth history when the environment was all but oxygen-free. Most early evolved bacteria and archaeans can fix atmospheric nitrogen, whereas eukaryotes - all later-evolved - cannot, and like other especially ancient enzyme systems, the Nif complex is brought to a standstill by trace amounts of molecular oxygen. N2-fixation happens only if O2 is shut out, even in oxygen-producing cyanobacteria, where special cells and chemical mechanisms have evolved to protect its workings"
"How did ferredoxin originate? The fifty-five amino acids that make up the ferredoxin of a typical bacterium (Clostridium) are ordered in a way that reveals the history of the molecule. The protein started out as a snippet only four amino acids long. The gene for this quartet was copied repeatedly to make up a longer gene for a proto-ferredoxin composed of twenty-eight amino acids, seven of the quartets linked in a chain. Mutations then added an amino acid and switched several, and the mutated gene for the twenty-nine amino acid-long protein was duplicated to make a new gene for a primitive ferredoxin fifty-eight amino acids in length. After a few more mutations, three amino acids were cut away at one end of the molecule to give the ferredoxin of modern Clostridium....
"Gene xeroxing seems to have been especially common during life's early development, when CHON and energy were in short supply. Laboratory studies on experimentally starved bacteria show that almost all survivors are mutants that have extra copies of metabolic enzymes" (Schopf 1999: 153-155).
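Schopf's reconstruction of ferredoxin's history is a sequence of copy, add, double, and trim operations; the amino-acid counts can be replayed directly:

```python
def ferredoxin_history():
    """Replay the amino-acid counts in Schopf's account of the
    Clostridium ferredoxin gene."""
    length = 4     # primordial four-amino-acid snippet
    length *= 7    # quartet gene copied into a 7-unit chain -> 28
    length += 1    # mutation adds one amino acid            -> 29
    length *= 2    # whole-gene duplication                  -> 58
    length -= 3    # three residues trimmed from one end     -> 55
    return length

print(ferredoxin_history())   # 55, the modern Clostridium ferredoxin
```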
The next step above anaerobic heterotrophs were autotrophs capable of photosynthesis, but these were anoxygenic, based on the H2S + CO2 → CH2O + S process. Hydrogen sulfide was probably locally abundant around hot springs and fumaroles. Then came aerobic systems:
"First glycolysis breaks down glucose to make pyruvate and two ATPs (and water) for every molecule of glucose used. Second, the pyruvate is split apart by a cyclic system (the citric acid cycle) to form two more ATPs, electrons, and carbon dioxide. Third, molecular oxygen is pumped in and electrons from the citric acid cycle are conveyed along a string of enzyme-driven electron carriers to produce thirty-two more ATPs. Overall, thirty-six ATPs are made from each molecule of glucose broken down" (Schopf 1999: 159).
"What are the evolutionary roots of this life-sustaining, notably cost-effective process? The first part, glycolysis, is already familiar. Inherited from primitive anaerobic heterotrophs, it long predates the appearance of oxygen-breathing forms of life. The second, the citric acid cycle, is also a hand-me-down, the borrowed but reversed version of the cyclic "dark reactions" of bacterial photosynthesis. And the third, the oxygen-consuming part of the system, is a revamped form of the chemistry that links two light-sensitive photosystems in oxygen-producing photosynthesis. By remodeling and reusing inventions perfected earlier, evolution once again was conservative and economical" (Schopf 1999: 159).
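Schopf's ATP tally can be written out explicitly. A trivial sketch, with the stage names and yields taken from the quoted passage:

```python
# ATP bookkeeping for aerobic respiration, per molecule of glucose,
# following Schopf's three-stage account.

glycolysis_atp = 2          # glucose broken down to pyruvate
citric_acid_atp = 2         # pyruvate split in the citric acid cycle
electron_chain_atp = 32     # oxygen-consuming electron-carrier chain

total = glycolysis_atp + citric_acid_atp + electron_chain_atp
print(total)                # → 36 ATPs, versus 2 from glycolysis alone
```

The arithmetic makes the "notably cost-effective" point concrete: the oxygen-consuming third stage supplies almost ninety percent of the yield.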
The process of creating free oxygen was slowed by the rusting of the earth, as reflected in the iron oxide layers abundant in rocks older than 2,000,000,000 years. Oxygen did not accumulate in significant quantities in the atmosphere until this rusting was completed. Single-celled algae are found around this time, and sexual reproduction not until later.
"Whether large or small, living or fossil, life comes in just two varieties: (1) prokaryotes, nonnucleated microbes of the Bacterial and Archaeal domains. The only life on earth for most of the planet's history; and (2) eukaryotes, members of the Eucaryal domain earmarked by cells like ours that have chromosomes packaged in a saclike nucleus" (Schopf 1999: 237).
This latter group includes the nucleus for DNA; ribosomes, where proteins and other compounds are made; chloroplasts, where sunlight is used to make food; and mitochondria, where food is broken down for energy. Chloroplasts are absent in protozoans, fungi and animals, and mitochondria are absent in diplomonads and microsporidia (both parasites). This suggests that the earliest eukaryotes could neither photosynthesize nor breathe oxygen (Schopf 1999: 237-238).
Schopf believes sexual reproduction did not evolve until about 1,000,000,000 years ago, as demonstrated by the explosion of stromatolites, cyanobacteria and acritarchs about this time. About 800,000,000 to 700,000,000 years ago, multi-celled life evolved.
Life History
Combining Davies and Schopf, one gets a basic understanding of the process and how simple yet complex it was. Depending on one's outlook, life is either rare or common.
In the 1960s astronomers found formaldehyde, hydrogen cyanide, and isocyanic acid in clouds of interstellar gas. This was followed by methanol, ammonia and water ice. A new experiment by Louis Allamandola used these simple compounds near absolute zero, in a vacuum, with ultraviolet radiation. "He has manufactured intricate molecular rings containing carbon, nitrogen, and hydrogen; fatty-acid like molecules that look like and behave like the membranes protecting living cells; and nucleic acids or nucleotides, the primary components of RNA and DNA." (Grant 2010: 43)
"Notable space chemicals found in just the past few years include the sweet (a sugar, glycolaldehyde), the fragrant (ethyl formate, which smells like rum), and the explosive (fulminic acid, used in detonators). Most exciting, some of the molecules resemble those associated with life. In 2007 an international team found amino acetonitrile (NH2CH2CN), a molecule structurally similar to glycine - one of the building blocks of proteins" (Grant 2010: 43).
"Life remained almost exclusively unicellular for the first five sixths of its history - from the first recorded fossils at 3.5 billion years to the first well-documented multicellular animals less than 600 million years ago.... This long period of unicellular life does include, to be sure, the vitally important transition from simple prokaryotic cells without organelles to eukaryotic cells with nuclei, mitochondria and other complexities of intracellular architecture - but no recorded attainment of multicellular organization for a full three billion years" (Gould 2000: 277).
"More curiously, all major stages in organizing animal life's multicellular architecture then occurred in a short period beginning less than 600 million years ago and ending by about 530 million years ago - and the steps within this sequence are also discontinuous and episodic, not gradually accumulative" (Gould 2000: 278).
"Three billion years of unicellularity, followed by five million years of intense creativity and then capped by more than 500 million years of variation on set anatomical themes can hardly be read as a predictable, inexorable or continuous trend towards progress or increasing complexity" (Gould 2000: 278).
But multicellular life is dependent on a basic change in the atmosphere, surviving on a poison: oxygen! And computer simulations suggest that basic anatomical themes, based on simple rules of competition and reproduction, lead to an explosion of variety that quickly oscillates and converges on set themes. There is nothing surprising in this process. In fact, it would be more surprising if this had not happened.
"We now know that the Ediacaran radiation was indeed abrupt and that the geologic floor to the animal fossil record is both real and sharp. More important, we have reason to believe that the emergence of animals was closely linked to unprecedented changes in the Earth's physical environment, including a significant increase in atmospheric oxygen that may have made the evolution of large animals possible" (Knoll 2000: 321).
Spitsbergen rocks contain multicellular seaweed at 800 million years. The sediments were so finely laminated that, if there had been any browsers (tracks, trails, burrows), they would have shown up. Other rocks at 1.4 billion years also show multicellular seaweeds. But oxygen levels never rose above 1 percent until the end of the Proterozoic era. The sudden rise was what powered multicellular animal life.
ECONOMIC REALITY 1
Culture, like life, is a thermodynamic machine.
The best way to create models for economics lies in an energy-information-economic paradigm based on general systems theory and an understanding of the "thermodynamics" of ecology: thermodynamics in terms of the demands on all living systems to exploit matter and energy (and to some extent, information) for self-maintenance and growth through replication. The environment consists of three more or less separate but interdependent sub-sets:
1) The physical environment consisting of geomorphic surfaces and their hydrologic structure within a climatic regime. It includes a mosaic of minerals valued by human groups. The physical environment tends to be the slowest to change and fluctuate, and tends to be the most predictable for modeling.
2) The biotic environment consisting of plants and animals and their ecological relationships to the physical and cultural environments. This sub-system is more likely to change and fluctuate, and is more difficult to predict and model.
3) The cultural environment consisting of cooperating and non-cooperating other human beings and their relationships to the physical and ecological environments. This sub-system is the most likely to change and fluctuate, and is the least predictable for modeling. Culture is an idiom that expresses in symbolic forms how human groups map access to, and control over, valued resources.
A thermodynamic characteristic (in fact, requirement) of all living things is their ability to map out matter and energy as information about resources, their relative distribution, relative abundance, relative activity, and relative predictability. There is an isomorphic correlation between the form (demands) of general living systems and economics as a model.
The following drawings illustrate the model of the interrelationship between the demands of systems frameworks and economic terms. The general systems terms are on the left and their economic parallels are on the right.
All systems consist of sub-systems that are isolated from other sub-systems as "firms" by a set of constraints (i.e., they have limited material, energy and information flows at some boundary) as well as a primary means of material, energy and information input (constraint). The sub-system is a "firm", an internally related group of variables more or less isolated (never completely) from external factors. The correlation between systems variables and economic variables is clear from the illustration: input = production; throughput = distribution; feedback = capital or investment; output = consumption; and storage (feedback kept on hand more or less in stasis until needed) = storage. The basic constraint is the set of variables that controls input = the economic and political strategy (adaptive framework) of the social group or "firm". Just as feedback (material, energy or information) is used to modify input, throughput or output, capital/investment (material, energy or information) is used to modify production, distribution and consumption.
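The firm-as-thermodynamic-sub-system model above can be sketched as a small program. This is a minimal illustration of the term-for-term mapping, not the author's drawings; the class, the field names, and all numeric parameters are assumptions added for the sketch.

```python
# Sketch of the "firm" model: each general-systems term paired with
# its economic parallel, and one flow cycle through the sub-system.
# Names and numbers are illustrative, not from the source.

SYSTEMS_TO_ECONOMICS = {
    "input": "production",
    "throughput": "distribution",
    "feedback": "capital/investment",
    "output": "consumption",
    "storage": "storage",            # feedback held in stasis until needed
    "constraint": "adaptive strategy",
}

class Firm:
    """A sub-system with bounded matter/energy/information flows."""
    def __init__(self, strategy):
        self.strategy = strategy     # constraint: the adaptive framework
        self.storage = 0.0           # feedback kept on hand

    def cycle(self, production):
        # the constraint limits input; throughput distributes it
        intake = min(production, self.strategy["input_limit"])
        distributed = intake * self.strategy["throughput_efficiency"]
        # part of the flow is fed back as capital/investment
        invested = distributed * self.strategy["investment_rate"]
        self.storage += invested
        return distributed - invested   # output = consumption

firm = Firm({"input_limit": 10.0,
             "throughput_efficiency": 0.9,
             "investment_rate": 0.2})
consumed = firm.cycle(12.0)
print(consumed, firm.storage)
```

The point of the sketch is structural: whatever is not consumed is either lost at the boundary or stored as feedback, which is exactly the capital/investment role the paragraph assigns it.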
"In short, technology is all about making life (and death) more comfortable, while science is simply an effort to understand the world" (Singh 2004: 19).
Social groups, or firms, use exploitive technology and strategies to maintain access to and control over valued environmental resources: physical, biotic and cultural. They gather resources directly or store information about resources and their relative abundance, predictability and activity. They also create a complex cultural idiom to store or invest matter, energy or information as information by manipulation of cooperating other human beings. If kinship, marriage, economics, politics, law, warfare and religion are examined within this paradigm, they show up as complex "maps" of control systems. They are a form of cultural contract between parties expressed in complex symbolism. Since circumstances change, since living systems are dynamic, and since individuals are born, grow through stages, and die, the contractual idiom is kept in symbols that can be interpreted in many ways (deliberate vagueness). In small scale societies, their re-interpretation is the subject of almost constant public consensus debate. In large scale societies, re-interpretation keeps lawyers employed.
Humans have a tendency to create meaning in random parts of the physical and biotic environments, and in their dealings with other cooperating and non-cooperating human beings. This tendency is called PATTERNICITY.
Michael Shermer argues that human brains are "belief engines: evolved pattern recognition machines that connect the dots and create meaning out of the patterns that we think we see in nature. Sometimes A really is connected to B; sometimes it is not. When it is, we have learned something valuable about the environment from which we can make predictions that aid in survival and reproduction. We are the ancestors of those most successful at finding patterns. This process is called associative learning, and it is fundamental to all animal behavior, from the humble worm C. elegans to H. sapiens" (Shermer 2008: 48).
"In a September (2008) paper in the Proceedings of the Royal Society B, "The Evolution of Superstitious and Superstition-like Behaviour," Harvard University biologist Kevin R. Foster and University of Helsinki biologist Hanna Kokko test my (Shermer's) theory through evolutionary modeling and demonstrate that whenever the cost of believing a false pattern is real is less than the cost of not believing a real pattern, natural selection will favor patternicity" (Shermer 2008: 48).
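The cost condition quoted above can be sketched as a toy expected-cost comparison. The function name, probabilities and costs are illustrative assumptions, not Foster and Kokko's actual model:

```python
# Toy version of the patternicity condition: selection favors believing
# a pattern whenever the expected cost of a false alarm is lower than
# the expected cost of missing a real pattern.

def favors_patternicity(p_real, cost_false_alarm, cost_missed_real):
    """True when believing the pattern is the cheaper bet on average."""
    expected_cost_believe = (1 - p_real) * cost_false_alarm
    expected_cost_ignore = p_real * cost_missed_real
    return expected_cost_believe < expected_cost_ignore

# A rustle in the grass: fleeing needlessly costs little, but ignoring
# a real predator is fatal, so belief pays even at 5% odds.
print(favors_patternicity(p_real=0.05,
                          cost_false_alarm=1,
                          cost_missed_real=100))   # → True
```

Flip the costs (expensive false alarms, cheap misses) and the same function returns False, which is why patternicity is favored only under the asymmetry the quote describes.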
Human culture is filled with a mix of elaborate patternicities. People know and name their landscapes, plants, animals and people, as well as their relationships to them. Their explanations for natural and cultural phenomena are varied and complex. Humans find ways to explain what they observe, sometimes useful, sometimes not. But the false patterns and names are better than no patterns at all.
Small scale societies are tied more directly to their biotic and physical environments than complex ones. The agents of production, distribution and consumption tend to be limited to family or other kin groups. Social groups tend to be homogenous, each pretty much a mirror image of all others. The more complex and larger the society, the more diverse social firms become. They can also become specialized sub-firms, with little to do with primary food, material or energy production.
Cultures change because of changes in the physical, biotic or cultural environments. Social groups also grow in population over time. Technologies for production, distribution, storage, and consumption can be changed to increase carrying capacity. In addition, cultural systems can change to increase access to, and control over, valued resources in all three environments. The adaptive/economic strategy can be modified to change carrying capacity. This results in changes called "evolution" in anthropology. There is nothing linear about the changes or about which element (variable) or combination of elements (variables) will change. Change in one variable does tend to create the possibility of more change in the same variable (within limits). There can be gradual changes in existing variables or a change in degree so great as to be a change in "kind" of the variable. For example, gathering can intensify within a natural biotic system. Groups can shift to fire to modify a biotic system (low level plant/animal management) to increase valued biota, but still gather. Groups can then begin to manipulate the plants and/or animals such that they are no longer natural species, and are now dependent on humans. Groups are no longer gatherers; they are plant/animal managers (i.e., agriculturalists/herders).
Politics is control over, and access to, valued resources of the society of cooperating human beings as a whole as opposed to non-cooperating other human beings. No social group operates in a vacuum. All resources are irregular in distribution and density. All groups impinge on the resource territories of other groups. This interleaving of resources and resource needs creates a demand for access and control systems expressed either as economics (mostly internal) or politics (mostly external). They blur together at many levels and within the idiom (symbolism) of the culture.
All human beings operate within a social framework defined in the cultural idiom. All human beings exploit matter, energy and information in a social matrix of cooperating and non-cooperating other human beings. Cooperative economic and political action reduces variation in access and control over valued resources and changing conditions. Cooperative sharing of information within a cultural framework increases the range of information about the three environments and increases alternatives for dealing with fluctuations in the distribution of valued resources. It also increases alternatives in the face of random chance disorder, as well as increasing access and control in the face of non-cooperating other human beings. There is always some point where human beings are in a position of competition, active or passive, for valued resources. This can arise from internal or external expansion in numbers. Cooperative (political) action reinforces mutual rights and smooths out differential distribution of resources within the environments. Group cooperation creates a tightrope of zones and relationships that express this active and passive cooperation and passive and active competition. Human beings manipulate each other through symbols (the cultural idiom).
The network of interrelated and intercommunicating human beings creates social groupings. Shared needs and values grade into competitive needs and values along a continuum. The fundamental social "firm" is found repeating throughout the social system with variations. Each person is born into an existing cultural milieu. Each person learns the network of demands and supports from nearby other human beings. Not all learn the same information, and the information is always "loose" to allow manipulation. This shared learned information is the cultural idiom. Some members are better at the process than others. The symbolic idiom sometimes seems superficially remote from human-land, human-biota, and human-human relations. The very active human side of the equation often masks the more passive biotic and land relationships. Symbols can represent an action, or class of actions, with its/their associated demands and supports.
Other human beings, whether cooperative or non-cooperative, are the patent manipulators and are the most active variables in changing relationships to resources. People manipulate people to reach a real energy, matter or informational end. Manipulation of people through complex symbols is the means to an end. There is never one solution, but a complex and very rich cultural idiom that defines access to, and control over, valued resources in all three environments.
Culture is thus a dependent phenomenon, related to the demands of living systems and thermodynamic process. It operates as a filter between human beings and human groups. It contains latent values associated with matter, energy and information related to access and control.
All resources are patchy to some extent. There are seasonal and long-term cyclical and linear patterns of change in the physical environment. The biotic sub-system fluctuates more strongly seasonally and also exhibits climax trends and long-term cyclical and linear changes. The cultural sub-system is the most active in change and fluctuation (fission-fusion-flux). All resources can be placed on a scale that defines them as relatively predictable, relatively abundant and relatively active or sessile. Physical resources are the most predictable and the most sessile, and their abundance is patchy but fairly universal regionally. For those resources that are valued and very patchy, trade networks arise for their distribution.
Although the model is matter/energy/information dependent, there is no one to one correlation between social firms and their complex of environments. All social firms have an exploitive strategy that is a mix of all three environments. Every strategy is a compromise reflecting all three.
WHAT DOES THIS HAVE TO DO WITH RELIGION, you must be screaming by now. Well, some people worship money. That is why the dollar is sometimes called The Almighty Dollar ... as in God Almighty (see omnipotence).
The key religious aspect to this cultural overview is its symbolic milieu. As cultural animals, humans create rich symbolic systems that attempt to understand "Life, the Universe, and Everything" (with apologies to Douglas Adams for stealing the title of one of his books.... but that may result in the sale of a few more books? See, religion and economics do have something to do with each other).
To un-digress, at one time there were many different religious theories about life, the universe and everything (which I will call LUE from now on). LUE's are as varied as the cultures of the past and present. Over time, larger and larger socio-economic groups formed into political entities and the religious variability was preempted by mega-religions. Here on earth, the biggies (2005 Wikipedia data) are Christians (33%), Muslims (20%), Hindus (13%), Buddhists (6%), Sikhs (0.39%), Jews (0.23%) and the rest (28%), a wide range including 12% non-religious and 2% atheist.
Human culture is a symbolic milieu for operation of a thermodynamic machine filled with non-linear mutual causal feedback loops. It is designed to be loose, to be subject to differential consensus or interpretation. But it is also inherently systemic and thermodynamic in its interaction with the physical, biotic and cultural sub-systems. Religion is the symbolizing of the symbols and processes. It validates access to, and control over, valued resources in the physical, biotic and cultural environments.
Anthropology has shown, for example, that burial of the dead is often used to validate ownership rights over resources. Burial mounds in the Midwest are political and economic statements. A mound starts with a burial. As more burials are added, the mound gets larger. The mound, and its contents, acts as a visual signal. Burial in the mound is proprietary, so the presence of ancestor burials validates claims of historical continuity. The bigger the mound, the longer and greater the claim. Polities can be defined by hierarchies of burial mound size. In the Mayan area, temple mounds served similar purposes. Temples were built over at regular calendar intervals. The bigger the temple, the older the temple. Each temple was a place for blood offering for a political group based on kinship. The relative measure of temple size was a measure of the political longevity of dynastic power. The large stone burial tombs in ancient Britain had the same purpose: visible statements of kin group history and control of a region: we are what our ancestors were, we are where our ancestors are buried ... your ancestors are not buried here ... you have no valid claim. The size of our burial complex relates to the size and longevity of our claim.
Human groups that lived on small islands have an ecological ethic learned the hard way. Small islands have limited land areas surrounded by oceans. Populations exploiting island resources grow, like any populations. They quickly fill up the available land and tend to over-exploit their resources until there is ecological failure. Like all people, successive generations learn. Island societies send out exploratory populations of their excess people, seeking other, uninhabited islands. Warfare becomes endemic on islands with too many people.
The following ethics generally apply to island populations (from Kalo Kanu O Ka'ina - A Cultural Landscape Study of Ke'Anae and Wailuanui, Island of Maui prepared by Davianna McGregor, 1995:103-104):
1) Take only what you absolutely need;
2) Don't waste resources;
3) Take according to the natural cycle of living things, both plant and animal. Allow resources to reproduce before harvesting. Do not take fish during spawning seasons.
4) Alternate places exploited for any type of resource. Do not keep going back to the same place, allow things to replenish themselves;
5) If a place declines, declare it off limits until it recovers and help it where possible;
6) Resources are abundant and accessible to those who possess the knowledge and skills so there is no need to overuse any place or resource;
7) Respect and protect knowledge passed down through the generations;
8) Respect the rights and resources of others as you respect your own;
9) Keep focused on what you are exploiting, do not change your plans without knowledge or understanding;
10) Share what you have with others;
11) Take care of the elderly, who hold the experience and knowledge; respect the resources and the people who use and know them.
Remember: islands are just very small continents. The time scales of impact on small islands are shorter; otherwise the same long-term issues apply. You have been informed (instructed, advised, alerted, and warned).
The EARTH is just one small speck in the observable universe and the observable universe is just one speck in the full universe and our universe may be just one speck in a fractal mass of false vacuum. While it is difficult for human beings, try to be humble. Love what you have, enjoy what you have, but guard carefully what you have! Love in moderation, enjoy in moderation and guard in moderation.
The universe is in a state of change. Its structure is changing. It had a beginning (the big bang), a childhood (the development of matter), an adulthood (the stelliferous era in which we now live), an old age (when the stars burn out), a senile era (when black holes evaporate), and a death (when particles evaporate). The Newtonian concept of an eternal universe is gone. The idea of absolute and unchanging physical law is also gone. Certainly laws were nullified by the big bang and will be nullified by the death of the universe. If this is so, and if the laws break down on both ends, how does one determine the difference between the beginning and the end? Something to think about on a rainy winter day.
"I believe that we are beginning to see evidence of an alternative view. In this view it becomes possible to imagine that a great deal of the order and regularity we find in the physical world might have arisen just as the beauty of the living world came to be: through a process of self-organization, by means of which the world has evolved over time to become intricately structured" (Smolin 1997: 15).
I do not have any answers, and neither does anyone else, but if you do not follow these latter guidelines, may "god" grant your (spirit, soul, essence, force, vitality) a universe created along your (lines, methods, policy, values) that you will be unable to escape from, and you and your posterity will have to (live, endure, persist, abide) with it forever! Unfortunately the rest of us have to live there too.
Sometimes I wonder if the universe is simply the tension between the intent (purpose, plan, function, role, goal, target, meaning, etc.) of its physical "laws, principles, or rules" that cannot be achieved (reached, attained, realized, accomplished) by its expression within that universe? That something is created out of nothing by the simple fact that the something is not fully expressed by its own paradigm? Makes my brain twist up in a knot. "Sweet dreams are made of this, who am I to disagree?" (Eurythmics).