The Halting Problem
Sorry for the delay in posting this; Blogger's been very flaky since last night, and I wasn't even able to read this blog, much less post to it.
I promised I'd say a bit more about the halting problem. It's an interesting thing: one of the foundational theorems of what became known as computer science. As far as I know, computer science is just about the only field whose first and deepest result is called a problem.
As I said before, around the beginning of the 20th century, there was a major effort to try to systematize all of mathematics. The idea was that if you could create the right way of axiomatizing things, and the right set of logical rules, then you could create a perfect, complete, consistent version of mathematics.
There was one particular thing that this formalization was intended to avoid: paradox.
Anyone who's watched the old Star Trek is familiar with the classic paradox couplet: "Everything I say is a lie. I am lying." The second statement is neither true nor false. That's a classic example of the self-referential paradoxical statements that can occur in math, and the existence of those statements drove mathematicians absolutely crazy: because math is supposed to be a system in which every statement is either true or false - and paradoxical statements are neither true nor false. (Another nice example in more mathematical terms is from set theory. Think about sets which contain other sets as members. Now, if you're permitting self-referential sets, you can have the set of all sets that contain themselves as a member. Fine so far. But what about the set of all sets that do not contain themselves as a member? Is it a member of itself?)
The attempt to reformulate mathematics hinged on the idea of preventing a mathematical statement from being self-referential.
Gödel blew that out of the water, using pure mathematical logic. While I generally say that Gödel showed that you can't create a formal system that is both complete and consistent, that's not really a precise way of phrasing what he proved. What Gödel really proved is closer to the statement that any formal system which is complete cannot be prevented from including self-referential statements. The way that happens is through something called models - a formal system is only valid if there is a model which fits its system of rules. Well, no matter what you do, if a system is capable of being represented by a model which is complete, then it is also capable of being represented by a model which encodes self-referential statements, and thus paradox.
The problem with Gödel is that his proof is really tough to understand. It took me a couple of years, and several tries with several different textbooks, before I really, truly understood what a model was, and why it mattered. And that's just the first line of Gödel's proof. It's very abstract, very deep into a kind of mathematical logic that few people really get.
Enter Alan Turing and friends.
They were neck deep in the early attempt to formalize math. They were interested in what you could do with a mechanical computing device, which is a question that arises naturally when you consider the perfect, complete mathematical system. If that system exists, then you can use a computer program with the rules of that system to enumerate every true statement - or alternatively, you can write a program which can take any statement as input, and tell you whether or not that statement is true.
You can't do that - that's what Gödel proved. But Turing and gang found an easier way to prove it: the halting problem.
The halting problem considers a fairly simple question: can I write a program which looks at any arbitrary computer program, and tells you whether or not that other program will eventually stop when you run it?
Written pseudo-mathematically:
- Given a computing system S(p,i), described as a function from a program p and an input i;
- Can I write a program p which will take a pair (q,i) as input, and answer "true" if S(q,i) will eventually halt?
The answer is no. If it were yes, then you'd have the decision procedure that the mathematicians wanted. Because any formal system can be encoded into a computer program; and that program can simply run through the rules in every possible sequence until it discovers a proof of a given statement. If it never stops, that means the statement isn't true.
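To make that proof-enumeration loop concrete, here's a minimal sketch of the hoped-for decision procedure. The helpers `initial_axioms`, `apply_rules`, and `proves` are hypothetical stand-ins for a formal system's machinery, not any real API:

```python
# A minimal sketch of the hoped-for decision procedure. The helpers
# `initial_axioms`, `apply_rules`, and `proves` are hypothetical
# stand-ins for a formal system's machinery.

def initial_axioms(): ...
def apply_rules(derivation): ...
def proves(derivation, statement): ...

def is_true(statement):
    frontier = [initial_axioms()]
    while frontier:
        derivation = frontier.pop(0)          # breadth-first proof search
        if proves(derivation, statement):
            return True
        frontier.extend(apply_rules(derivation))
    # Notice there's no way to answer "false": if no proof exists, the
    # search just runs forever. A halting oracle is exactly what you'd
    # need to turn this enumeration into a real decision procedure.
```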
The real beauty of phrasing it as the halting problem is that you can prove it in two different ways, and they're both easy to understand. One of them attacks it by showing that the computing system can't be consistent, the other that the computing system can't be complete.
The first way is easier. If there is a program p which solves the halting problem, then I can write a program q which runs p(q,i); if p(q,i) says that q will halt, q goes into an endless loop and never halts, and if p(q,i) says it won't halt, then it halts immediately.
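Here's that construction as a Python sketch. The oracle `halts` is purely hypothetical - the point of the proof is that it can't exist:

```python
# A sketch of the contradiction. `halts` is the hypothetical oracle;
# that it can't be written is exactly what the proof shows.

def halts(program, i):
    """Hypothetically: return True iff program(i) eventually halts."""
    raise NotImplementedError("provably impossible in general")

def q(i):
    if halts(q, i):
        while True:      # the oracle said "q halts" -- so loop forever
            pass
    else:
        return           # the oracle said "q loops" -- so halt at once

# Whatever halts(q, i) answers, q(i) does the opposite, so no
# implementation of halts can be correct on the pair (q, i).
```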
It's a classic example of the self-reference problem - but it's one which is fundamentally unavoidable - because a computer program is just a number. So if you can create a computing system in which programs can process numbers, you cannot possibly prevent it from processing programs. And that's exactly what Gödel's theorem also says: that any complete model can encompass a model which encodes self-reference.
Some folks feel like that proof is cheating: you're using a trivial paradox to break the system. Can you write a program which does not just do that shallow paradox?
Yup, you sure can - you can show that it's incomplete.
Take your computing system, S. Now, in your system S, write a program which does the following:
- On input 1, run S(1,1). If S(1,1) eventually stops, return S(1,1)+1.
- On input 2, run S(2,2). If S(2,2) stops, return S(2,2)+1.
- ...
- On input x, run S(x,x). If S(x,x) stops, return S(x,x)+1.
What you've done is written a program which cannot be represented as a program for S - meaning S is not complete.
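As a rough sketch in code, with `S` as a hypothetical interpreter for the system's numbered programs:

```python
# A sketch of the diagonal program. `S` is a hypothetical interpreter
# for the system: S(p, x) runs program number p on input x.

def S(p, x):
    """Hypothetically: run the p-th program of the system on input x."""
    raise NotImplementedError

def diagonal(x):
    # Run the x-th program on itself, then answer something different.
    # (If S(x, x) never halts, neither does this call; handling that
    # properly is the "total functions" wrinkle mentioned in the
    # comments below.)
    return S(x, x) + 1

# For every program p there's an input -- namely p itself -- where
# diagonal(p) = S(p, p) + 1 != S(p, p). So diagonal disagrees with
# every program of S somewhere, and can't be represented in S.
```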
And thus, computer science can be used to squash all of the rest of mathematics. (My math can beat up your math!)
8 Comments:
Our Fearless Blogmath said,
"(Another nice example in more mathematical terms is from set theory. Think about sets which contain other sets as members. Now, if you're permitting self-referential sets, you can have the set of all sets that contain themselves as a member. Fine so far. But what about the set of all sets that do not contain themselves as a member? Is it a member of itself.)"
Ow! I think a neuron just fried. I shouldn't try to wrap my brain around this stuff right after standing and lecturing for five hours ...
Keep up the good work there. Looking forward to seeing you hack&slash through more Bad Math.
andrea
By Anonymous, at 5:46 PM
I'm having trouble with that last bit. Why can't the program be represented as a program in S?
(There's a little voice in my brain yelling "diagonal argument!" but I don't listen to him since that discussion about silicone)
By Lifewish, at 8:25 AM
Lifewish:
You're right, it's a diagonal argument. Basically, you're creating a program whose output differs from every other program in at least one value.
You're defining a non-computable function f. f is non-computable because, for any program p, there is at least one input i where f(i) is different from S(p,i).
It's a bit tricky - it's kind of thinking around corners a bit. But the real trick is just that, for any input value, you've guaranteed that you've created a function whose results are not exactly the same as any program's. (To be really complete, you need to play a few games with distinguishing total functions, but that's not really a big deal - it's just a slightly more complicated diagonalization. The gist is the same: if you can follow this argument, then you can follow the full-depth version; it's just a more elaborate setup to get exactly the same result.)
By MarkCC, at 12:34 PM
andrea:
Believe me, I understand your confusion - it's why I used the "Star Trek" form of the paradox first, and just did the set theory one as a parenthetical aside.
One way to make it easier to understand (at least to me :-) ) is to think of sets a little differently. The easiest way to understand sets is as a collection of objects - a set of coins, a set of numbers. You don't get weird paradoxical things very easily in that kind of setting.
But there's another way to think of a set: you can think of a set as a function which returns "true" for values that are members of the set, and false for values that aren't members of the set.
So instead of thinking of the set S as a collection of objects, just think of it as a function, f_s.
Now, you can define a function g which just asks: what does f_s return if you give it s as an input? If f_s(s) returns true, then g(s) returns true; if f_s(s) returns false, then g(s) also returns false. g is then the function for the set of all sets that do contain themselves as members.
Is g a member of itself? Well, that's actually your choice. There are two different perfectly valid functions that fit g - one which contains itself, one which doesn't. There's no paradox there - it just happens that the definition of g fits two different sets: one which contains itself, one which doesn't.
But flip g: call it g'. If g(i) returns true, then g'(i) returns false. g' is the function for the set of all sets that do not contain themselves as members. g' is inconsistent: if g'(g') returns true, then g' is a member of itself, which is wrong - g'(g')=true means that g' is not a member of itself, so g'(g') should have been false. And if g'(g') returns false, then that means that g' is not a member of itself, which means that g'(g') should have returned true.
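If it helps, here's the same idea as a small Python sketch, treating a "set" as nothing but its membership-test function (the names are mine, purely for illustration):

```python
# A sketch of sets-as-membership-functions. A "set" here is nothing
# but a function answering True/False for a candidate member.

def evens(x):
    """An ordinary, well-behaved set: the even integers."""
    return isinstance(x, int) and x % 2 == 0

def g(f_s):
    """Membership test for 'the set of sets that contain themselves'."""
    return f_s(f_s)

def g_prime(f_s):
    """Membership test for 'the set of sets that do NOT contain themselves'."""
    return not f_s(f_s)

print(evens(4), evens(7))   # True False -- no self-reference, no trouble

# g(g) unfolds to the equation g(g) = g(g): both True and False are
# consistent answers, matching the two valid sets described above.
# g_prime(g_prime) unfolds to g_prime(g_prime) = not g_prime(g_prime):
# no answer is consistent. Python can't tell the difference -- actually
# evaluating either expression just recurses forever:
# g_prime(g_prime)          # RecursionError if uncommented
```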
It's hard to wrap your head around - but it actually makes sense that it's hard to wrap your mind around :-) Because it's a paradox - something which is unavoidably inconsistent and nonsensical.
By MarkCC, at 12:46 PM
I think the "library catalogue" analogy is best.
If you catalogue all the books in the library, you're going to end up cataloguing the catalogues themselves. Now, most catalogues, for example the catalogue of books about cats, will not contain themselves. A few of them will, though, for example the catalogue of all catalogues (just like the "contents" page of a book often lists itself).
The question, however, arises when you consider the catalogue of all catalogues that don't contain themselves. If you decide that this catalogue should be listed in itself then poof! it no longer meets the criterion. However, if you decide that this catalogue shouldn't be listed in itself then poof! it does meet the criterion.
Now, in the real world this sort of thing isn't really a showstopper. In fact, the common-or-garden electric buzzer works off precisely the same principle - it ensures that wherever the diaphragm inside it comes to rest is in fact precisely the wrong place. However, in the world of logic, this sort of thing is to be avoided where at all possible, and is generally considered to be a sign of an argument gone astray.
By Lifewish, at 4:26 PM
Lifewish:
The library analogy is fantastic - definitely by far the best intuitive analogy that I've seen. Thanks!
By MarkCC, at 6:54 PM
I took Peter Denning's Operating Systems course at Purdue in something like 1983. Total waste of time. His claim to fame was a proof that a Least Recently Used (LRU) page replacement system was the best one could do for demand paging. Nothing incorrect with the proof, except that it was wrong. One of the assumptions was that the operating system could not predict the page replacement needs of a program. However, several operating systems were developed that could detect sequential page activity, and perform read ahead. Needless to say, this blows away the performance of LRU.
The reason I thought the course was useless was that I had written an operating system or two, and it was clear that Denning hadn't. He sounded convincing, but since it didn't match reality, there was much nonsense.
All this to say that while in theory, math can be of great benefit to practical computation, in practice, it often isn't. The canonical saying is this. The difference between theory and practice is that, in theory, they are the same.
By Stephen, at 1:17 PM
Here's a way of thinking about the Russell problem that might be less brain-bending.
Sets can have all kinds of things as members, such as objects, other sets, and even (in some cases) themselves. But can there be a set that contains everything? The answer is no, and the proof is by a kind of diagonal argument.
Let D be some set. Some of the things in D might be sets, some might not. Moreover, some of the sets in D might contain themselves, while others (and those things that aren't sets) do not. So let's gather together all the things in D that don't contain themselves, and form a new set R_D from it.
In set-theory notation:
R_D = {x in D | x not in x}
Now, have we created a new set by forming R_D, or just identified one of the sets that was already in D? Well, if R_D is in D, then we have two options:
i) R_D is not a member of itself. But then it would meet the defining criterion above - it's a member of D and not a member of itself - so it should have been a member of R_D, contradicting our assumption.
ii) R_D is a member of itself. But then R_D contains an element (namely, itself) that is a member of itself, in contradiction of its definition - R_D was built to contain only things that aren't members of themselves.
Thus if we assume that R_D is a member of D then we get a contradiction. So the natural conclusion is that R_D isn't a member of D. Thus, like Cantor's diagonal argument, this gives us a way of taking any set D and producing something that's guaranteed not to be a member of D. And that seems to prove that no set D could contain everything there is.
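For the finitely-minded, here's a small runnable sketch of that diagonal step; the labels and membership table are made up purely for illustration:

```python
# A finite sketch of the argument. Python sets can't contain
# themselves, so "sets" are labels here and membership is an explicit,
# made-up table.

members = {
    "a": {"a"},        # a contains itself
    "b": {"a"},        # b contains a, but not itself
    "c": set(),        # c is empty
}
D = set(members)       # D = {"a", "b", "c"}

# R_D = {x in D | x not in x}
R_D = {x for x in D if x not in members[x]}
print(R_D)             # {'b', 'c'}

# The diagonal step: R_D disagrees with every set in D at one element
# -- the element x itself -- so R_D can't already be one of them.
for x in D:
    assert (x in R_D) != (x in members[x])
```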
By logopetria, at 9:45 PM