The new iconography of life's tree shows that maximal diversity in anatomical forms (not in number of species) is reached very early in life's multicellular history. Later times feature extinction of most of these initial experiments and enormous success within surviving lines. This success is measured in the proliferation of species but not in the development of new anatomies. Today we have more species than ever before, although they are restricted to fewer basic anatomies.
For reasons related to the chemistry of life's origin and the physics of self-organization, the first living things arose at the lower limit of life's conceivable, preservable complexity. Call this lower limit the "left wall" for an architecture of complexity. Since so little space exists between the left wall and life's initial bacterial mode in the fossil record, only one direction for future increment exists - toward greater complexity at the right.
Thus, every once in a while, a more complex creature evolves and extends the range of life's diversity in the only available direction. In technical terms, the distribution of complexity becomes more strongly right skewed through these occasional additions. But the additions are rare and episodic. They do not even constitute an evolutionary series but form a motley sequence of distantly related taxa, usually depicted as eukaryotic cell, jellyfish, trilobite, nautiloid, eurypterid (a large relative of horseshoe crabs), fish, an amphibian such as Eryops, a dinosaur, a mammal and a human being.
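The statistical point here can be made concrete with a toy simulation (purely illustrative; the lineage count, step count, and unit steps are arbitrary assumptions, not anything from the text): lineages that wander at random in complexity, but cannot pass below a reflecting left wall, end up with a right-skewed distribution even though no individual step favors complexity.

```python
import random

def simulate(n_lineages=2000, steps=400, seed=42):
    """Unbiased random walk in 'complexity', reflected at a left wall."""
    random.seed(seed)
    finals = []
    for _ in range(n_lineages):
        x = 0                            # every lineage starts at the wall
        for _ in range(steps):
            x += random.choice((-1, 1))  # no step favors complexity
            if x < 0:
                x = 0                    # the left wall: can't get simpler
        finals.append(x)
    return finals

positions = simulate()
mean = sum(positions) / len(positions)
median = sorted(positions)[len(positions) // 2]
# The bulk of lineages stays near the wall while a thin tail stretches
# right; a mean exceeding the median is the signature of right skew.
print(mean, median)
```

Because every walk is unbiased, the skew comes entirely from the boundary, which is exactly the "pseudo-trend" argument: occasional excursions to the right happen because that is the only open direction.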
We cannot even imagine how anthropogenic intervention might threaten the extinction of bacteria, although we worry about our impact on nearly every other form of life. The number of Escherichia coli cells in the gut of each human being exceeds the number of humans who have ever lived on this planet.
One might grant that complexification for life as a whole represents a pseudo-trend based on constraint at the left wall but still hold that evolution within particular groups differentially favors complexity when the founding lineage begins far enough from the left wall to permit movement in both directions. Empirical tests of this interesting hypothesis are just beginning (as concern for the subject mounts among paleontologists), and we do not yet have enough cases to advance a generality. But the first two studies - by Daniel W. McShea of the University of Michigan on mammalian vertebrae and by George F. Boyajian of the University of Pennsylvania on ammonite suture lines - show no evolutionary tendencies to favor increased complexity.
Moreover, when we consider that for each mode of life involving greater complexity, there probably exists an equally advantageous style based on greater simplicity of form (as often found in parasites, for example), then preferential evolution toward complexity seems unlikely a priori. Our impression that life evolves toward greater complexity is probably only a bias inspired by parochial focus on ourselves, and consequent overattention to complexifying creatures, while we ignore just as many lineages adapting equally well by becoming simpler in form. The morphologically degenerate parasite, safe within its host, has just as much prospect for evolutionary success as its gorgeously elaborate relative coping with the slings and arrows of outrageous fortune in a tough external world.
This long period of unicellular life does include, to be sure, the vitally important transition from simple prokaryotic cells without organelles to eukaryotic cells with nuclei, mitochondria and other complexities of intracellular architecture - but no recorded attainment of multicellular animal organization for a full three billion years. If complexity is such a good thing, and multicellularity represents its initial phase in our usual view, then life certainly took its time in making this crucial step. Such delays speak strongly against general progress as the major theme of life's history, even if they can be plausibly explained by lack of sufficient atmospheric oxygen for most of Precambrian time or by failure of unicellular life to achieve some structural threshold acting as a prerequisite to multicellularity.
More curiously, all major stages in organizing animal life's multicellular architecture then occurred in a short period beginning less than 600 million years ago and ending by about 530 million years ago - and the steps within this sequence are also discontinuous and episodic, not gradually accumulative. The first fauna, called Ediacaran to honor the Australian locality of its initial discovery but now known from rocks on all continents, consists of highly flattened fronds, sheets and circlets composed of numerous slender segments quilted together. The nature of the Ediacaran fauna is now a subject of intense discussion. These creatures do not seem to be simple precursors of later forms. They may constitute a separate and failed experiment in animal life, or they may represent a full range of diploblastic (two-layered) organization, of which the modern phylum Cnidaria (corals, jellyfishes and their allies) remains as a small and much altered remnant. In any case, they apparently died out well before the Cambrian biota evolved. The Cambrian then began with an assemblage of bits and pieces, frustratingly difficult to interpret, called the "small shelly fauna."
The subsequent main pulse, starting about 530 million years ago, constitutes the famous Cambrian explosion, during which all but one modern phylum of animal life made a first appearance in the fossil record. (Geologists had previously allowed up to 40 million years for this event, but an elegant study, published in 1993, clearly restricts this period of phyletic flowering to a mere five million years.) The Bryozoa, a group of sessile and colonial marine organisms, do not arise until the beginning of the subsequent, Ordovician period, but this apparent delay may be an artifact of failure to discover Cambrian representatives.
The odd sedimentary structures found in Western Australia resemble upside-down ice cream cones or egg cartons, but a new analysis suggests they are among the earliest signs of life on the planet.
Turned to Stone
Abigail Allwood of Macquarie University in Sydney and colleagues analyzed a 6-mile stretch of the rock formations and identified seven different types of stromatolites. Aside from ice cream cones and egg cartons, the researchers also found stromatolites that look like fossilized sand dunes or choppy ocean waves that have been frozen and turned to stone.
The researchers believe the eclectic mix of stromatolites was formed not by one creature but by many. Allwood said her team was able to recover a few scraps of organic matter from the site, which they will begin analyzing soon. "The sample is so small, however, that it is difficult to say much more about the organisms that made the stromatolites than that they were microbial," said Allwood.
If the stromatolites do turn out to have a biological origin, it could change how scientists think about life on early Earth. Many current theories about early life hold that the first organisms arose around hydrothermal vents and other extreme environments.
But the Australian stromatolites are thought to have formed in relatively normal marine conditions. If the stromatolites were formed by microbes, then life must have adapted to normal, non-extreme environments even as early in the planet's history as 3.4 billion years ago. Earth is about 4.5 billion years old. Furthermore, life by that time would have already diversified enough to form complex ecosystems.
"We hope this will move us beyond the question of whether or not life simply 'existed' at the time to looking at the conditions that nurtured early ecosystems," Allwood said.
What were the first living cells like? Scientists don’t know for sure, for we lack good data from the first 2 billion years of Earth’s history—a period known as the Archaean. Most likely they were tentative microscopic entities—microbes fragile enough to be destroyed by strong bursts of energy, yet sturdy enough to reproduce, thereby giving rise to generations of descendants.
As Earth cooled, several of the energy sources capable of producing the organic acids and bases began to diminish. Geologic and atmospheric activity declined, and as gases thickened the air, less solar ultraviolet radiation reached Earth’s surface. Laboratory experiments show that these changing conditions are not conducive to the continued production of the heterotrophs’ food supply, which is why we don’t see a thick film of organic acids and bases floating on today’s oceans and rivers.
Whereas originally Earth’s waters had plenty of juicy organic molecules on which the heterotrophs could feed, the denser atmosphere and weakened tectonics meant fewer food sources over time. Consumed more rapidly than it was replenished, the organic soup gradually thinned, creating a crisis for the multiplying cells. Those primitive cells then had to compete with one another while scrounging for the decreasing supply of nourishing acids and bases. Eventually, the heterotrophs devoured every bit of organic matter floating in the ocean. The organic production of acids and bases via lightning, volcanoes, or solar radiation simply couldn’t satisfy the voracious appetite of the growing population of heterotrophs.
This scarcity of molecular food was a near-fatal flaw in life’s early development. Had nothing changed, Earth’s simplest life forms would have proceeded toward an evolutionary dead end—starvation. Earth would be a barren, lifeless rock, and our story aborted. Fortunately, something did change. It had to; nothing fails to change. And one change that did occur enabled the story to continue—not by some design and not solely by chance, rather more likely by the usual mixture of chance and necessity operating over long durations. At least partly, successful evolution is often a case of being at the right place at the right time.
Other cells—the forerunners of plants, called autotrophs (for they were self-nourishing)—invented a new way to get energy, thereby conceiving a unique opportunity for living. (Some researchers claim that the first cells were likely already autotrophic, acquiring energy directly from the environment and skipping altogether the heterotrophic stage.) This novel biological technique employed carbon dioxide (CO2), the major waste product of the fermentation process. While the earliest cells were busily eating organic molecules in the sea and thus polluting the atmosphere, more advanced cells were learning to use these pollutants to extract energy. In this case, the energy wasn’t derived from the consumed gas, but from another well-known source—the Sun. This newly invented process is photosynthesis, perhaps the greatest single metabolic invention in history.
The key here is the chlorophyll molecule, a green pigment having its atoms arranged so that light, when striking the surface of a plant, is captured within the molecule. Advanced cells containing chlorophyll thereby extract energy from ordinary, gentle sunlight (not harsh ultraviolet radiation) by means of a chemical reaction that exploits that sunlight to convert carbon dioxide and water into oxygen and carbohydrates; simplified, it’s symbolized by the formula:
Carbon dioxide (CO2) + water (H2O) + sunlight → oxygen (O2) + carbohydrate (CH2O).
The oxygen gas escapes into the atmosphere, while the synthesized carbohydrate (sugar) is used for food. This, then, is another way a cell can “eat,” or extract energy from its environment—hence its name: photo, meaning “light”; synthesis, “putting together.”
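As a sanity check (an illustrative snippet, not part of the original text), one can tally the atoms on each side of the simplified formula and confirm it balances:

```python
# Tally atoms on each side of the simplified photosynthesis formula
# CO2 + H2O + sunlight -> O2 + CH2O to confirm it is balanced.
from collections import Counter

def atoms(*formulas):
    """Sum atom counts over several molecules given as dicts."""
    total = Counter()
    for f in formulas:
        total.update(f)
    return total

CO2 = {"C": 1, "O": 2}
H2O = {"H": 2, "O": 1}
O2 = {"O": 2}
CH2O = {"C": 1, "H": 2, "O": 1}

left = atoms(CO2, H2O)    # reactants (sunlight supplies energy, not atoms)
right = atoms(O2, CH2O)   # products
print(left == right)      # one C, two H, three O on each side
```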
How did some protoplant microbial cells develop photosynthesis? To be sure, it was primitive bacteria that invented photosynthesis, not plants per se, which emerged much later. But how they actually did it, biologists are again uncertain, other than to presume that random events first altered the DNA molecules in some early cells, which then determinedly sucked up the needed solar energy to survive. These cells no longer had to compete for the organic acids and bases in the primal ocean. They were selected by Nature to endure because they adapted to the changing environment. And with photosynthesis came a big advantage, since the new cells could persist on merely inorganic matter. The autotrophs were clearly better fitted for survival during what was probably the first ecological crisis on our planet.
The photosynthetic process continues to this day as plants routinely use sunlight to produce carbohydrates as food (for both metabolic function and cellulose structure). The plants, in turn, release oxygen gas that animals, including ourselves, breathe. Photosynthesis is, in fact, the most frequent chemical reaction on Earth. In round numbers, each day ~400 million tons (~4 × 10¹¹ kilograms) of carbon dioxide mix with ~200 million tons of water to make ~300 million tons of organic matter and another ~300 million tons of oxygen gas. Yet despite these large numbers, it’s still the small but abundant stuff that does much of it: fully half of today’s global photosynthesis and oxygen production is accomplished by single-celled marine plankton living in the top oceanic layer, where enough light penetrates to support their growth.
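Those round numbers can be checked against the reaction's stoichiometry (a sketch, assuming the simplified formula CO2 + H2O → CH2O + O2, which runs 1:1:1:1 in moles, and standard molar masses; the ~200/~300/~300 tonnages in the text are roundings of the values computed here):

```python
# Check the quoted daily tonnages against the stoichiometry of
# CO2 + H2O -> CH2O + O2 (1:1:1:1 in moles), using molar masses in g/mol.
M = {"CO2": 44.01, "H2O": 18.02, "CH2O": 30.03, "O2": 32.00}

co2_mass = 400.0                 # million tons/day, the figure from the text
moles = co2_mass / M["CO2"]      # relative mole count set by the CO2 supply
h2o = moles * M["H2O"]           # water consumed
ch2o = moles * M["CH2O"]         # carbohydrate produced
o2 = moles * M["O2"]             # oxygen released
print(round(h2o), round(ch2o), round(o2))  # -> 164 273 291
```

Mass is conserved: the ~564 million tons of reactants equal the ~564 million tons of products, which is a quick way to see the quoted figures are mutually consistent.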
With the loss of their food source, the ancient and primitive heterotrophs were naturally selected to die. The better-adapted autotrophs were naturally favored to live. Life on Earth was on its way toward using a primary and plentiful source of energy—that of our parent star—in a reasonably efficient and direct manner. It all began not quite 3 billion years ago.
Photosynthesis over eons of time is, by the way, partly responsible for the fossil fuels. Dead, rotted plants, buried and squeezed below layers of dirt and rock, have chemically changed over megacenturies into oil, coal, and natural gas. Such fossil fuels, with their vast quantities of solar energy trapped in carbohydrates, have made industrial civilization possible. But those fuels are virtually nonrenewable, at least over time scales shorter than tens of millions of years. Billions of years of energy deposits in rotted organisms will be depleted shortly—oil and gas in the 21st century and coal not more than a few hundred years thereafter. Once again, things will have to change, just as they’ve changed in the past.
The use of sunlight by cells was a double achievement of great importance for life on Earth. Not only did the Sun provide an unlimited source of energy and assure a dependable supply of food, but it also drastically changed Earth’s atmosphere by helping to generate oxygen gas. Oxygen became another pollutant of the early air, an inevitable result of the autotrophs’ photosynthesizing, much as the heterotrophs had soiled the primordial air even earlier with carbon dioxide gas. No anaerobic organism could have escaped this “oxygen holocaust.”

Atmospheric change has had an enormous influence on the abundance and diversity of life on Earth. The photosynthetic release of oxygen into an atmosphere that previously had little or none of it ensured great changes not only in the environment but also among life forms dependent on that environment. Interacting with the Sun’s ultraviolet radiation, a diatomic oxygen molecule (O2) breaks down into two oxygen atoms (O). Each free atom can then combine with another O2 molecule high in the atmosphere, molding large quantities of triatomic oxygen (O3), or ozone. (Derived from the Greek and meaning “to smell,” that pungent ozone gas can often be sensed near thermal copying, or Xerox, machines that use ultraviolet radiation.) Ozone now surrounds our planet in a thin shell concentrated at altitudes of roughly 15 to 35 kilometers (~10 to 20 miles), effectively shielding the surface from further exposure to harmful high-energy radiation.
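A quick back-of-the-envelope calculation shows why only "harsh" ultraviolet photons can split O2 (illustrative; the O=O bond energy of ~498 kJ/mol is a standard tabulated value, not a figure from the text):

```python
# A photon can split O2 only if it carries at least the O=O bond energy.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro's number, 1/mol

bond_energy = 498e3 / N_A         # ~498 kJ/mol converted to J per molecule
wavelength = h * c / bond_energy  # longest wavelength that can do the job
print(round(wavelength * 1e9))    # -> 240 (nanometers)
```

Wavelengths of ~240 nanometers and shorter lie well inside the ultraviolet, far more energetic than the visible sunlight (roughly 400 to 700 nanometers) that chlorophyll harvests.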
None of this happened overnight. The ozone layer needed time to thicken enough to screen out most of the harmful ultraviolet radiation. The process was an accelerating one: oxygen-producing autotrophs had an increased chance for survival and therefore replication. The more offspring they produced, the more oxygen they dumped into the atmosphere. And more oxygen meant more ozone, more protection from solar radiation, and enhanced opportunities for survival. But it still took time for the protective ozone to accumulate. Perhaps as much as 2 billion years after the onset of photosynthesis were needed, since dissolved iron in the oceans would have combined with any free oxygen, removing it from the atmosphere until waters everywhere were saturated.
Models of Earth’s early atmosphere imply that the ozone layer started to form, or at least that oxygen gas had begun to rise, some 2 billion years ago. Deposits of oxidized iron (called “red-bed” sediments or banded-iron formations) of that age in the geological record, now mined for their metal, support the view that oxygen was then hardly 1% of Earth’s air, well below the ~20% we enjoy today. Some of the most ancient fossils, dating back earlier than this as noted shortly, do show evidence of chlorophyll products, suggesting that oxygen was then being released into the atmosphere—but to what extent is unknown. Other models imply that oxygen didn’t reach its current levels, nor did the ozone layer become a fully effective shield against solar radiation, until ~0.5 billion years ago. Fossil evidence also supports that argument: life rather suddenly became varied and widespread ~550 million years ago, before which only primitive life forms existed. Shortly thereafter came a rapid surge in the numbers and diversity of complex living organisms—a population explosion of the first magnitude.