# Good Math/Bad Math

## Monday, March 20, 2006

### The conflict between IC and IT arguments

As I was responding to comments over the weekend, something interesting struck me about the irreducible complexity (Behe) and the information theory arguments against evolution that I discussed last week. The two arguments are actually contradictory. Anyone who accepts one really should not under any circumstances accept the other.

Behe's argument is strictly constructive: that is, it relies on monotonically increasing complexity (which means monotonically increasing information). It argues that in evolution, structures can only be created by adding elements. If structures can become less complex, then the irreducible complexity argument fails.

To see why, just think about a complexity landscape. (Now, for clarity, remember that the complexity landscape is different from the fitness landscape; they both cover the same graph of nodes, but the values at any point in the landscape are different for fitness and complexity.) The complexity landscape can be seen as a three-dimensional surface, with the elevation of any point being its complexity value.

Dembski basically argues that an evolutionary process can only climb hills: it can only increase the complexity, so it can't go downhill. So it can't get to an IC system, unless the IC system is on an uphill slope; and he argues that there's no way to get to that point - essentially that the surface is discontinuous under the point of the IC system, so you can't climb to it. But if you can go downhill, then you can climb over a peak, and descend to the IC system by throwing away unneeded parts. If you can ever decrease complexity, then there is no way to avoid the fact that you can reduce to a minimum point. (Per Chaitin, you can't be sure that that's the lowest point, just that it's a local minimum. There might be a lower point, but you'll need to climb up out of the local minimum "valley" to get to someplace where you go down to reach it.)

(As an aside: if anyone knows of good surface-drawing software for a Macintosh, please let me know; this would be a lot easier to explain with a picture, but I don't have any software handy where I can quickly and easily make a surface.)
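For what it's worth, the local-minimum point is easy to demonstrate in code. Here's a minimal sketch (the surface, starting point, and step size are all invented for illustration): a strictly downhill search on a surface with two basins settles into whichever basin it starts in, even though a deeper basin exists elsewhere.

```python
import random

def complexity(x, y):
    # Hypothetical complexity surface with two basins: a shallow local
    # minimum (value 1.0) at (0, 0) and a deeper one (value 0.2) at (3, 3).
    shallow = x ** 2 + y ** 2 + 1.0
    deep = (x - 3) ** 2 + (y - 3) ** 2 + 0.2
    return min(shallow, deep)

def strictly_downhill(x, y, step=0.1, iters=2000):
    # Accept a random move only if it lowers complexity: no uphill steps.
    for _ in range(iters):
        nx = x + random.uniform(-step, step)
        ny = y + random.uniform(-step, step)
        if complexity(nx, ny) < complexity(x, y):
            x, y = nx, ny
    return x, y

random.seed(1)
x, y = strictly_downhill(0.5, 0.5)
# Starting in the shallow basin, the search bottoms out near (0, 0) at
# complexity ~1.0. It can never reach the deeper basin at (3, 3), because
# getting there would require going uphill first.
```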

The "information theory" argument is that evolutionary systems cannot be constructive. Its proponents argue that an evolutionary process can never produce information - and thus cannot ever produce new structures. If this is the case, then you can't create an irreducibly complex structure by addition - if it happens naturally, it can only happen by destruction of information: reducing the complexity of structures until they have the minimum information they need to produce a required structure.

You can't have it both ways. If you want to make the Behe argument about irreducible complexity, then you must argue that evolution is monotonically increasing with respect to information/complexity. If it can ever decrease, then it becomes possible for a random process to reach an IC state. If you want to make the so-called information theory argument, then you're setting a basic premise that information/complexity in a random system is monotonically decreasing.

So anyone who tries to use both arguments without explicitly separating them is either thoroughly clueless or deliberately deceptive.

• "You can't have it both ways" seems to be a recurring phrase when dealing with this type of fanciful hypothesis which tries to contradict scientific thinking as applied to ethical issues and the like.

By  Thomas Winwood, at 10:42 AM

• I think the explanation is simpler: Behe just didn't consider the fact that an irreducibly complex (i.e., minimal) system can be built by building an *overly* complex system and then taking out the unnecessary parts.

An arch is irreducibly complex - take away one stone and it collapses. An arch plus scaffolding is redundant. Taking away the scaffolding and leaving the arch makes it seem like the arch had to have been built "all at once", not one stone at a time - but only because you don't understand that the stones you see aren't the whole story. (Arch analogy due to Cairns-Smith, IIRC.)

Dembski, on the other hand, either doesn't know what he's talking about or is deliberately obfuscating. "Information" doesn't mean what he wants it to mean, as you point out. A list of consecutive daily high temperatures contains information, but that doesn't mean that that information couldn't be generated by mindless processes.

By  Anonymous, at 11:28 AM

• Per Chaitin, you can't be sure that that's the lowest point, just that it's a local minimum. There might be a lower point, but you'll need to climb up out of the local minimum "valley" to get to someplace where you go down to reach it.

Do you really need to call upon Chaitin for this? It seems like a basic result of ordinary multi-variable calculus (on optimization over surfaces and finding global maxima).

Oh, and Grapher, the built-in graphing tool, does a pretty nice job painting surfaces.

By  Marcus, at 2:09 PM

• I'm definitely not the mathematician to get too involved with this discussion, but I don't understand the local/global optimization issues.

The example it brings to mind is the class of computer algorithms known as 'simulated annealing'.

A simplistic description: suppose you have a problem that doesn't lend itself to formal analysis (the classic example is the traveling salesman problem – what path between destinations on a list minimizes the salesman's travel distance?).

To solve the problem, you start with a non-optimized solution (maybe horribly inefficient, but all the stops are hit), then just make random changes and see what happens. If a change improves the solution, keep it; else reject it. This will quickly take you to a local optimum.

But… (the cool part) occasionally, even if a change makes things worse, keep it anyway. This gives you a mechanism to shake out of local optima. If your search space is complex enough, then you may move toward a different local optimum. If you have multiple searches active at once, you may even get different solutions competing against one another.
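That acceptance rule can be sketched in a few lines. This is a toy, hypothetical version (the cost landscape, temperature schedule, and step size are all made up for illustration): accept every improvement, and accept a worsening move with probability exp(-delta/T), where the temperature T shrinks over time.

```python
import math
import random

def f(x):
    # Toy cost landscape: a shallow local minimum near x = 1.35 (f ~ -2.6)
    # and a deeper global minimum near x = -1.47 (f ~ -5.4).
    return x ** 4 - 4 * x ** 2 + x

def anneal(x, temp=2.0, cooling=0.999, steps=5000):
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)
        delta = f(candidate) - f(x)
        # Keep any improvement; occasionally keep a worse solution too,
        # with a probability that shrinks as the temperature drops.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling
    return x

random.seed(0)
x = anneal(1.3)  # start right next to the shallow local minimum
# Early on, the high temperature allows occasional uphill moves, which is
# what makes escaping a shallow basin possible at all; as the temperature
# drops, the search behaves more and more greedily.
```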

This sort of search seems to be exactly the sort of thing that evolving species would use.
(Now we're back to my ignorance again.) Why do people argue about local minima and what is and is not possible when optimizing, when the real world is under no constraint to use the optimum? It just needs something that works.

By  Rich, at 2:26 PM

• rich:

Thanks for bringing up simulated annealing. I've been thinking about saying something about it.

And yes, it *is* obvious that you can work your way past a local minimum using something like simulated annealing. And in fact, it isn't even necessary to bring something like SA into the picture to get around that kind of problem. In reality, things like fitness and complexity aren't simple one-dimensional values. The way that you "move downhill" on the complexity surface is by moving *somewhere* on a corresponding fitness surface. So you have those other surfaces affecting the motion and the overall fitness (survival-and-reproduction) value.

But what's going on under the covers is that people like Behe want to convince folks to adopt their religion. In order to do that, they deliberately create false versions of evolution that they can punch convincing holes in.

Behe's argument fundamentally depends on the idea that evolution can *only* go uphill. If you can only go uphill, you don't need to worry about finding a local or global minimum - you can't hit either without going downhill.

By  MarkCC, at 2:41 PM

• If you're a creationist, I actually don't see a real contradiction between the two "criticisms".
I mean, Behe's argument implies that no biological organism can become "less complex", meaning that all organisms begin at some minimal complexity level, and the IT argument says that information cannot be created by random sources (i.e., genetic mutations), so biological organisms cannot increase in complexity. Put the two arguments together and you say that biological organisms can neither increase nor decrease in complexity - and so all organisms were created "as is". They've neither evolved nor devolved since creation. Now, of course, both of these arguments are wrong on their own, but being wrong has never stopped creationists before.

By  CBBB, at 5:34 PM

• I think you meant Behe, not Dembski, at the beginning of the 4th paragraph.

Certainly Behe is implicitly arguing that evolution can't go downhill in complexity, but does he ever explicitly argue for that? It's been a long time since I perused Behe's book, but I like the explanation proposed by anonymous: Behe failed to consider that complexity can decrease while fitness still increases.

By  ivan m., at 5:52 PM

• Ivan:

Maybe at first, Behe didn't consider the possibility of decreasing into IC. But he has repeatedly been informed about that possibility over the space of years, which leaves me with no choice but to conclude that he is deliberately omitting it because it doesn't fit the conclusion that he wants.

By  MarkCC, at 6:12 PM

• Dembski basically argues that an evolutionary process can only climb hills: it can only increase the complexity, so it can't go downhill.

Uhhh...are you sure about this? Dembski's "Law of Conservation of Information" is that no new information can arise via a natural process. Although, I bet one could find where Dembski does argue the above as well (i.e. when he tries to support Behe's work) so he is your classic case of wanting it both ways.

By  Steve, at 7:06 PM

• It's not just Dembski, garden variety YECs - like Ken Ham, constantly argue that mutations cannot create new information.

By  CBBB, at 8:43 PM

• You can find Grapher in the Utilities folder of the Applications folder.

By  Carl, at 8:50 AM

• For drawing surfaces (and a whole lot else), you might want to take a peek at Blender (http://blender.org). Creating a surface grid and mucking about with its vertices is pretty easy to do.

By  Anonymous, at 10:49 AM

• I also like Grapher very much. Simple and fairly powerful.

By  molybdenum1, at 12:12 PM

• "You can't have it both ways. If you want to make the Behe argument about irreducible complexity, then you must argue that evolution is monotonically increasing with respect to information/complexity."

I think that this misses the point about the evolution of bio-functions. They are meaningful in the sense that they target solutions to problems. Yes, surely Behe's view would indicate increased semantic content in the cellular (or other level) signals taking place in an IC function. But this can be true with lower K-C complexity and a shorter message in bits. An error-corrected message can be shorter and less complex, yet more targeted to effect a specified condition.

What needs to increase is the productivity of the message.

By  ncg, at 3:01 PM

• steve wrote - "Dembski's "Law of Conservation of Information" is that no new information can arise via a natural process."

Well, if mental computation as design is not natural - yes. I happen to think it is very natural. CSI, according to Dembski, occurs during intelligent efforts, evidenced by human artifacts.

I would be of the opinion that "human intelligence" as being separate from "natural" science is silly. I think living things designed functional improvements themselves, through mentally causative processes.

By  ncg, at 5:27 PM

• Rich, simulated annealing is a cool algorithm, but life follows genetic algorithms (an improvement on simulated annealing, based on, well, evolution). And while simulated annealing can go very far down the fitness surface, genetic algorithms go up most of the time, and rely on a population of different beings to reach the global maximum. That doesn't mean they can't go down, but they don't work well if you let them go very far down (that means: keep the mutations rare and small).
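That population-based idea can be sketched too. Below is a deliberately simplified, hypothetical GA (mutation plus truncation selection, no crossover; the fitness landscape and all parameters are invented for illustration): individuals scattered across the landscape let the population find the global peak even though most individuals start nowhere near it.

```python
import random

def fitness(x):
    # Hypothetical fitness landscape: a lesser peak (height 2) at x = -1
    # and the global peak (height 5) at x = 3.
    return max(2.0 - (x + 1) ** 2, 5.0 - (x - 3) ** 2)

def evolve(pop, generations=200, sigma=0.3):
    for _ in range(generations):
        # Mutation: each individual produces one slightly changed offspring.
        offspring = [x + random.gauss(0, sigma) for x in pop]
        # Selection: keep only the fittest half of parents plus offspring.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[: len(pop)]
    return pop

random.seed(2)
# A population spread across the landscape: most individuals are far from
# the global peak, but a few lucky ones land in its basin of attraction.
population = [random.uniform(-5, 5) for _ in range(50)]
best = max(evolve(population), key=fitness)
# With this setup the best survivor typically ends up near the global
# peak at x = 3, not stuck at the lesser peak at x = -1.
```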

Anyway, all that is irrelevant, because an uphill path on the fitness surface can have any shape on the complexity surface. And, while the complexity surface is static, the fitness surface changes with time, making any path possible.

• I'm a very pro-evolution atheist, but your argument is pointless. Anybody can make two arguments that are contradictory as hedges against one argument being false, and vice versa; in fact, reductio ad absurdum arguments are by their nature about assuming a falsity to prove its opposite.

Think of it this way: "I know whether or not evolution can add complexity, and that is that it can't, but if you're not convinced that it can't and still believe that it could, it can't reduce it."

The evidence for either argument is bad by itself, but your particular criticism here is vacuous, since one _can_ make both of these arguments consistently. And no, they don't need to explain why they can; it's only your misunderstanding of them as separate claims that causes you to confound the two.

By  Anonymous, at 9:19 PM

• anonymous:

Look at what I said again: anyone who tries to use both arguments without separating them.

The way that creationists commonly present them, they consider these two to be compatible arguments: that is, they assert both the IC *AND* IT arguments, not IC *OR* IT. They're not asserting that *if* one of the arguments fails, then you can apply the other one; they're asserting that both are valid and true reflections of reality at the same time.

I assert that combining the two that way is a contradiction.

By  MarkCC, at 9:50 AM

• The arguments are not contradictory - they simply use different measures of information. For example, in statistical mechanics, "information" and "entropy" are inversely correlated - when one increases, the other decreases, and vice versa. And yet they are both information measures - "information" measures the amount of information known to you, and "entropy" measures the amount of information unknown to you - i.e., statistical fuzz.

Clearly the two arguments are using similarly converse measures. Macroscopically unavailable information, i.e. randomness, always increases, while macroscopically available information always either decreases or stays the same.

This is not rocket science - it is a distinction described in any decent thermodynamics text.

By  Mark Butler, at 3:07 PM

• markb:

You can't just wave your hands and say "thermodynamic entropy is the same as algorithmic information theory entropy". They are two different things. A string of information has *no meaning* in thermodynamics. And "macroscopically unavailable information" is a meaningless term in information theory.

Kolmogorov/Chaitin entropy is *not* thermodynamic entropy. As any halfway decent textbook on algorithmic information theory will demonstrate.

But what Dembski and Behe are claiming has nothing to do with thermodynamics. They are talking about the same "information" - information in the sense of AIT.

By  MarkCC, at 4:03 PM