Argument by Design ver. 4.0
Ver 1.0:
Medrash Temurah:
"G-d created" (Gen. 1:1): A hereic came to Rabbi Aqiva and asked, "Who made the universe?". Rabbi Aqiva answered, "Haqadosh barukh Hu". The heretic said, "Prove it to me." Rabbi Aqiva said, "Come to me tomorrow".One can argue that Rabbi Aqiva's students realized that his proof was far from rigorous. His reply revolves around giving a parable to make the conclusion self-evident. Not contructing a deductive argument.
When the heretic returned, Rabbi Aqiva asked, "What is it that you are wearing?"
"A garment", the unbeliever replied.
"Who made it?"
"A weaver."
"Prove it to me."
"What do you mean? How can I prove it to you? Here is the garment, how can you not know that a weaver made it?"
Rabbi Aqiva said, "And here is the world; how can you not know that Haqadosh barukh Hu made it?"
After the heretic left, Rabbi Aqiva's students asked him, "But what is the proof?" He said, "Even as a house proclaims its builder, a garment its weaver, or a door its carpenter, so does the world proclaim the Holy Blessed One Who created it."
One can argue that Rabbi Aqiva's students realized that his proof was far from rigorous. His reply revolves around giving a parable to make the conclusion self-evident, not constructing a deductive argument.
Ver 2.0:
The Rambam's version of the proof in Moreh Nevuchim II invokes the Aristotelian notions of form and substance. We find that without an intellect giving the process a desired end product, natural processes reduce forms from functional to non-functional. People make objects out of metal; nature takes the substance and eventually turns it into a useless lump of rust.
Therefore, the notion of an infinitely old universe is untenable. In an infinite amount of time, all functional forms would have disintegrated.
Ver 3.0:
This is roughly the same argument as the Rambam's, brought up to date with 19th-century thermodynamics. Rather than speaking of functional forms, we recast the question as one of low entropy.
All physical processes increase the total entropy of a closed system. Entropy is simply a fancy word for what boils down to randomness on the small scale. A macroscopic state has more entropy if its molecules are arranged more randomly. When you spill a drop of ink into water, the ink spreads until it's all a light blue liquid. Entropy increased. In microscopic terms, the molecules of ink and water started out nearly ordered, with all the ink in one spot at the surface of the water, and ended up as an even, random mixture of ink and water molecules.
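As a toy illustration of that microscopic picture (a sketch of my own, not part of the original argument), one can compute the Shannon entropy of where the ink sits; the statistical-mechanics entropy is the same kind of quantity up to a constant factor. Concentrated ink is the low-entropy state; evenly mixed ink is the high-entropy one:

    import math

    def entropy_bits(probabilities):
        # Shannon entropy, in bits, of a discrete probability distribution.
        return sum(-p * math.log2(p) for p in probabilities if p > 0)

    cells = 1000  # imagine the glass of water divided into 1000 small regions

    # All the ink in one region: the ordered, low-entropy starting state.
    concentrated = [1.0] + [0.0] * (cells - 1)

    # The ink spread evenly through the water: the random, high-entropy mixture.
    mixed = [1.0 / cells] * cells

    print(entropy_bits(concentrated))  # 0.0 bits
    print(entropy_bits(mixed))         # about 9.97 bits (log2 of 1000)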
Given an infinitely old universe, entropy would be at a maximum. All of existence would be a thin mixture of nuclear particles, or perhaps hydrogen atoms.
The requirement that entropy increase does not rule out evolution. Entropy could be decreased in the order and design of living beings at the expense of increased randomness elsewhere, say in the arrangement of molecules in the air, in the dissipation of energy, or even in a thin stream of atmosphere leaking off the earth. If the increase in entropy elsewhere offsets the decrease inherent in life, the ledger balances.
Ver 3.5:
In the 20th century, science accepted the notion of the Big Bang and finally acknowledged that the universe has a finite age. The challenge shifted from proving that the universe has a finite age to proving that its origin shows intent.
The entropy version of the argument can make the transition. By definition, low-entropy states are unlikely ones. In fact, Roger Penrose in The Emperor's New Mind computes just how unlikely. Given the current estimate of 10^60 nuclear particles in the universe, the probability of the universe beginning in a low-entropy state is 1 in 10^(10^123). That's a number so huge, it has 10^123 zeros in it!
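To make the arithmetic behind that last sentence concrete (a trivial check of my own, scaled down so it can actually run): a power of ten written in decimal is a 1 followed by as many zeros as its exponent, so 10^(10^123) would need 10^123 zeros.

    # 10**n written out in decimal is a 1 followed by n zeros, so 10**(10**123)
    # would take 10**123 zeros -- far more digits than there are particles to write them on.
    n = 10
    print(10 ** n)                 # 10000000000 -- a 1 followed by 10 zeros
    print(len(str(10 ** n)) - 1)   # 10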
To assume that the universe beat odds that long is irrational. Clearly the moment of origin wasn't random, and statistics isn't a meaningful way to model it.
Ver 4.0:
However, using information theory we can raise questions about the existence of ordered items, from atoms, to stars and solar systems, to the evolution of life.
Much has been made of the notion of "irreducible complexity", introduced by Michael Behe, a biochemist. If some living system requires multiple parts, each of which serves no purpose alone, how did the system evolve? How can the mutations that produce part A be coordinated with those that produce part B? Behe therefore argued that evolution demonstrates intelligent design, that there is a Designer who is loading the dice, doing that coordination based on His desired end goal.
However, there is also a standard reply. Perhaps the organism had an A' that was part of a different function, and a B' used either for that function on its own or in a third system. Then, as A' and B' shifted to make this new system, the new system made the old functions obsolete (e.g., there's a new means of locomotion, and now the fins are redundant), and A and B emerged to more simply address the new, more efficient method of solving the need.
Chaitin's definition of "information" (as opposed to Claude Shannon's earlier definition, still used in telecommunications) makes a distinction between two kinds of unpredictability: information and noise.
Take a stream of information. Fortunately, people today are pretty well exposed to the notion that any such stream can be transmitted as a sequence of ones and zeros. If there are patterns in that sequence, we can reduce them by simply describing the pattern rather than sending each bit. A message that is composed of 10101010... for 2 million bits (spots that could be either 1 or 0) can be sent quite concisely, as something representing ("10", repeated a million times). One needn't send 2 million bits to do it. Even if certain sequences of bits are merely more frequent (such as the one representing the word "However" in one of my postings), we can give them a shorthand and send the sequence in fewer bits. This is how information is compressed in zip files, or in the advertised 5x speed enhancement on dial-up connections.
Claude Shannon, the father of Information Theory, defined the information in a message as the minimum number of bits with which it could be represented. Therefore randomness, which cannot be reduced to the description of an algorithm, contains the most information.
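To make that concrete, here is a small sketch of my own (not part of the original essay) using Python's zlib compressor: the patterned message shrinks to a few kilobytes, while random bytes of the same length barely compress at all, which is Shannon's point that randomness carries the most information.

    import os
    import zlib

    # The patterned message: "10" repeated a million times (2 million characters).
    patterned = b"10" * 1_000_000

    # A stream of the same length with no pattern at all.
    random_data = os.urandom(len(patterned))

    print(len(zlib.compress(patterned)))    # a few kilobytes: the pattern has a short description
    print(len(zlib.compress(random_data)))  # about 2 million bytes: no shorter description exists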
John von Neumann, in his seminal lectures on Automata Theory (published as a book in the 1950s), spoke about the information content inherent in a machine. You can compare two machines by looking at the number of bits it would take to describe them. If the machine has fewer parts, it will require fewer bits. Similarly if the parts are simpler. Also, if the parts do not require as much precision in order for the machine to work, one can describe them in fewer bits. Von Neumann found that machines below a certain information threshold can only make machines simpler than themselves.
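As a toy version of that bit-counting (my own illustration, not von Neumann's formalism), one might describe a machine as a list of parts, where each part costs the bits needed to name its type plus the bits of precision its dimensions require; fewer or cruder parts then mean a shorter description:

    import math

    def description_bits(machine, type_catalog_size=256):
        # Bits to name each part's type out of a catalog, plus the bits of
        # precision to which that part must be specified for the machine to work.
        name_bits = math.log2(type_catalog_size)
        return sum(name_bits + part["precision_bits"] for part in machine)

    crude_machine = [
        {"type": "lever", "precision_bits": 4},
        {"type": "wheel", "precision_bits": 4},
    ]
    precise_machine = [
        {"type": "gear",       "precision_bits": 12},
        {"type": "spring",     "precision_bits": 12},
        {"type": "escapement", "precision_bits": 16},
    ]

    print(description_bits(crude_machine))    # 24.0 bits
    print(description_bits(precise_machine))  # 64.0 bits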
These automata, these interacting collections of parts, are Behe's irreducibly complex systems presented in other terms. And von Neumann usefully gives us a method for measuring them.
As opposed to Claude Shannon's definition of "information", G. J. Chaitin launched a field called "algorithmic information theory" that uses a generalized version of von Neumann's measure to define "information". Randomness comes in two sorts: information that is useful to the message, and noise, the static that garbles it. Information is only that which is necessary to describe the message to the precision needed to reproduce what it describes.
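Here is a rough sketch of my own of that distinction, using compressed size as a crude stand-in for description length (not Chaitin's formal definition): a message generated by a simple rule has a short description, while random static added on top makes the exact data harder to describe without adding anything to the message itself.

    import random
    import zlib

    random.seed(0)

    # A structured "message": generated by a simple rule, so its description is short.
    message = bytes(i % 16 for i in range(100_000))

    # The same message garbled by random single-bit errors, like static on a broadcast.
    noisy = bytearray(message)
    for _ in range(5_000):
        pos = random.randrange(len(noisy))
        noisy[pos] ^= 1 << random.randrange(8)

    print(len(zlib.compress(message)))       # small: the rule behind the message is simple
    print(len(zlib.compress(bytes(noisy))))  # much larger: the extra bits describe only the static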
So how did complex automata, such as life, emerge? Invoking the role of randomness and evolution, von Neumann argues that proto-life (or the proto-solar system) did not produce the information in the resulting system itself. The information came in from the outside.
That outside information is provided by evolution, which involves two basic steps: the introduction of mutations, and the filtering process that decides which mutations survive. Yes, mutations add randomness, and thus Shannon-information, to the system. But why would that randomness be Chaitin-information rather than noise? In fact, the leading cause of the static on your radio is the very source of many of the mutations that evolution requires -- cosmic radiation. It would be like static just happening to produce the recipe for an award-winning pie. (Actually, that's a huge understatement.) There is no need to do the math to show that even in 5 billion years, it just won't happen.
To improve the odds, one needs to invoke "survival of the fittest". It's not billions of years of independent rolls of the dice; rather, the successful rolls are linked and combine. The flaw here is a shift in the definition of "successful". Success at surviving is not correlated with being part of some automaton in the future. The evolution of "part A" in some irreducible system is not more likely because it can come from A', which is useful alone. One also needs to look at the likelihood of A' arising, the likelihood that it could be reused, that there is a path from one system to another, and so on. Since these are uncorrelated, once you multiply the probabilities together, you haven't improved the odds over simply tossing a coin for each bit.
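To see how fast multiplying uncorrelated probabilities collapses, here is a toy calculation with purely illustrative numbers of my own choosing (not drawn from any biological model): even granting an enormous number of independent tries, the expected number of hits on one specific 300-bit configuration stays vanishingly small.

    from math import log10

    # Purely illustrative numbers, not a biological model:
    bits_needed = 300      # bits of coordinated structure the finished system requires
    trials = 10 ** 40      # a very generous count of independent random tries

    log_p_single = bits_needed * log10(0.5)          # log10 chance one try hits: about -90.3
    log_expected_hits = log_p_single + log10(trials)

    print(log_expected_hits)  # about -50.3: an expected 1 hit per 10**50 such histories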
Which argument is most convincing? Version 4.0, very rigorous but built on math and on many models of the cosmology, geology, and biology of our origins, or Rabbi Aqiva's simple appeal, using a comparison to show how the point should be self-evident? Version 1.0, being closest to reducing the claim to a postulate, carries for me the most appeal.
Rabbi Aqiva gives us the tools for emunah. Building on that emunah, we can understand it in greater depth, subtlety and beauty using these more formal forms of the argument. But the formality hides the dependence on assumptions from which to reason; it does not replace them.