tag:blogger.com,1999:blog-23588438Sun, 14 May 2017 19:58:39 +0000Good Math/Bad MathFinding the fun in good math; Shredding bad math and squashing the crackpots who espouse it.http://goodmath.blogspot.com/noreply@blogger.com (MarkCC)Blogger136125tag:blogger.com,1999:blog-23588438.post-115039620494693196Thu, 15 Jun 2006 18:26:00 +00002007-03-28T02:59:41.143-04:00Reminder: GM/BM doesn't live here anymoreJust a reminder for folks who haven't updated their links yet:<br /><br />Good Math, Bad Math no longer lives here at blogger. It's now part of the ScienceBlogs coterie, and can be found <a href="http://www.scienceblogs.com/goodmath">here</a>.<br /><br />There've been a lot of new posts over at SB since the move:<br />- Category theory.<br />- More bad gematria gibberish.<br />- More debunking of creationists.<br />- More programming language discussions.http://goodmath.blogspot.com/2006/06/reminder-gmdm-doesnt-live-here-anymore.htmlnoreply@blogger.com (MarkCC)3tag:blogger.com,1999:blog-23588438.post-114968924111394855Wed, 07 Jun 2006 14:06:00 +00002006-06-07T10:07:21.143-04:00Category Theory at GM/BM's new homeThe first post in my new series on category theory is up at <a href="http://www.scienceblogs.com/goodmath">the new home of GM/BM.</a>http://goodmath.blogspot.com/2006/06/category-theory-at-gmbms-new-home.htmlnoreply@blogger.com (MarkCC)2tag:blogger.com,1999:blog-23588438.post-114961495687246573Tue, 06 Jun 2006 17:27:00 +00002006-06-06T13:29:16.903-04:00New GoodMath post at scienceblogs: Election Fraud?I've gotten an absolutely unprecedented number of requests to write about <a href="http://www.rollingstone.com/news/story/10432334/was_the_2004_election_stolen/1">RFK Jr's Rolling Stone article</a> about the 2004 election.
I've written about it over at GM/BM's <a href="http://www.scienceblogs.com/goodmath">new home at scienceblogs</a>.http://goodmath.blogspot.com/2006/06/new-goodmath-post-at-scienceblogs.htmlnoreply@blogger.com (MarkCC)0tag:blogger.com,1999:blog-23588438.post-114951731704938655Mon, 05 Jun 2006 14:19:00 +00002007-03-28T03:01:06.400-04:00More AiG lying with statistics: John Woodmorappe and Noah's ArkToday's sample of bad math is something I talked about way back in my talk.origins days on usenet, and which has also been discussed by lots of other people. I thought it was too old and too well covered to be worth taking time and space here, but I keep having people forward it to me, suggesting I post something on it here. So who am I to stand in the way of the desires of the readers of this blog?<br /><br />Back in 1996, a guy publishing under the pseudonym of "John Woodmorappe" wrote a book called "<a href="http://shop5.gospelcom.net/epages/AIGUS.storefront/en/product/10-3-078">Noah's Ark: A Feasibility Study</a>." AiG has an article discussing the basic ideas of the book <a href="http://www.answersingenesis.org/creation/v19/i2/animals.asp">here</a>, and it is this article that I'll be using as the focus of my post.<br /><br />Woodmorappe's text purports to be a detailed study of whether or not it was technically feasible for the biblical story of the ark to be true. He tries to demonstrate that it could be by purportedly showing how all of the animals could fit on the ark, and by proposing mechanisms by which they could be fed, their wastes disposed of, etc.<br /><br />The AiG article focuses on the first issue: did the ark, as described in the bible, have enough room to fit all of the animals?
They admit that the answer to this comes down to two questions:<br /><ol><li> How many animals needed to be on the ark?<br /></li><li> How much space was needed for each animal?<br /></li></ol>They play a few games to try to reduce the number of animals that they need to make space for, ending up with 16000 individual animals. The reasoning behind that number is pretty questionable, but it's not worth quibbling over, because of what comes next.<br /><br />Woodmorappe wants to figure out how much space is needed by 16000 animals. How does he do that? Easy. He figures out an average amount of space needed by each animal, and multiplies it by the number of animals. Doesn't sound bad as a way of making an estimate, right?<br /><br />Except... How does he figure out how much space is needed by each animal?<br /><br />He figures out the <em>median</em> size of the animals on the ark, and determines that the median-sized animal would need a space of .5 by .5 by .3 meters - that is, less than 1/10th of a cubic meter per animal. Multiply that by the number of animals, and you come up with 1200 cubic meters. Not terribly much space at all - which would mean that there would be plenty of room for food for the animals.<br /><br />So what's wrong?<br /><br />There are two different figures in statistics that can be used as an 'average' value: the mean and the median. The mean is what we normally mean by average: take all of the values, add them up, and divide by the number of values. The median is a way of selecting a "typical" individual to represent the average: take all of the values, lay them out in order, and pick the value in the middle.<br /><br />Woodmorappe uses the <em>median</em>. The median represents a "typical" individual; it's the value in the center. The nice thing about the median is that it gives you an idea of what a typical individual is like, without having it shifted by outliers in your data.
But it's useless for trying to extrapolate to things about the population - because the outliers <em>are part of the population</em>; you can't eliminate them from your calculations about the population.<br /><br />For example, I've had 7 pets in my lifetime. Their weights were 18 lbs (a big fat cat), 15 lbs (a tiny poodle), 20 lbs (a smallish mutt), 20 lbs (another mutt), 10 lbs (a normal sized cat), 18 lbs (another poodle), and 78 lbs (a golden retriever).<br /><br />The median size of my pets? Well, in order they were (10, 15, 18, 18, 20, 20, 78); so the median was 18 lbs.<br /><br />The mean size of my pets? (10+15+18+18+20+20+78)/7 = roughly 25.6 lbs.<br /><br />If I use the median to reason about the group of pets, I'd conclude that all of my pets together would weigh 126 lbs. But that's not true: they would weigh 179 lbs! I'd be off by 53 lbs! Because by using the median instead of the mean, I've ignored the outlier.<br /><br />That's exactly what Woodmorappe does. He concludes that the median animal is the size of a rat, and for a rat, 1/10th of a cubic meter is more than enough space. But he's very deliberately eliminated the effects of elephants, dinosaurs (which Woodmorappe claims were on the ark!), horses, cows, and antelope. They're all outliers in his data. By using the median, he's been able to reduce the space requirement by a huge factor - by somewhere between one and two orders of magnitude!<br /><br />This isn't sloppiness. No one would reasonably even <em>compute</em> a median size for something like this.
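The pet-weight arithmetic above is easy to check mechanically. Here's a quick Python sketch (the weights are the ones from the example; nothing else is assumed):

```python
from statistics import mean, median

# Pet weights in lbs from the example above; the 78 lb retriever is the outlier.
weights = [10, 15, 18, 18, 20, 20, 78]

med = median(weights)   # 18 lbs: the "typical" pet
avg = mean(weights)     # about 25.6 lbs: the population average
total = sum(weights)    # 179 lbs: what the pets actually weigh

# Extrapolating a total from the median ignores the outlier;
# extrapolating from the mean recovers the true total.
print(med * len(weights))          # 126 -- understates the real total by 53 lbs
print(round(avg * len(weights)))   # 179
```

The same failure, scaled up to 16000 animals with elephants as the outliers, is how the space estimate gets shrunk by orders of magnitude.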
This is a deliberate attempt to lie with numbers: by taking advantage of the fact that most of his readers aren't going to notice that he uses the median rather than the mean for an average (or in fact by relying on the fact that most don't even know the difference), he's lying to them to help them reach the conclusion that they really want.<br /><br />Quite simply, Woodmorappe deliberately makes a mathematical error, and then uses that error to lie.http://goodmath.blogspot.com/2006/06/more-aig-lying-with-statistics-john.htmlnoreply@blogger.com (MarkCC)21tag:blogger.com,1999:blog-23588438.post-114936496478105043Sat, 03 Jun 2006 20:00:00 +00002007-01-25T21:23:00.346-05:00Site changes completeSorry for the burst of index posts for those of you reading via RSS aggregators. The site changes are done now; there's a menu in the sidebar linking to the topic indices, and I've added a couple of links to the (still too small) blogroll. I might still make a polish change or two to the site, but nothing that should trigger any new RSS updates. 
Please let me know if you happen to notice any problems with the indices or sidebar.http://goodmath.blogspot.com/2006/06/site-changes-complete.htmlnoreply@blogger.com (MarkCC)2tag:blogger.com,1999:blog-23588438.post-114936322547824106Sat, 03 Jun 2006 19:33:00 +00002006-06-03T15:37:06.046-04:00Information Theory (index)<ul><li><a href="http://goodmath.blogspot.com/2006/03/some-good-math-introduction-to.html"> Some good math: An Introduction to Information Theory, part 1</a></li><li><a href="http://goodmath.blogspot.com/2006/03/introduction-to-information-theory.html">An Introduction to Information Theory, Part 2: Entropy</a></li><li><a href="http://goodmath.blogspot.com/2006/03/really-bad-math-evolution-and.html">Really Bad Math: Evolution and Information</a></li><li><a href="http://goodmath.blogspot.com/2006/03/problem-with-irreducible-complexity.html">The Problem with Irreducible Complexity</a></li><li><a href="http://goodmath.blogspot.com/2006/03/another-take-on-information-theory.html">Another Take on Information Theory </a></li></ul>http://goodmath.blogspot.com/2006/06/information-theory-index.htmlnoreply@blogger.com (MarkCC)0tag:blogger.com,1999:blog-23588438.post-114936319728646623Sat, 03 Jun 2006 19:32:00 +00002006-06-03T15:34:38.843-04:00Group Theory (index)<ul><li><a href="http://goodmath.blogspot.com/2006/04/fun-stuff-group-theory-corrected.html"> Fun Stuff: Group Theory</a></li><li><a href="http://goodmath.blogspot.com/2006/04/some-applications-of-group-theory.html">Some Applications of Group Theory, promoted from comments</a></li><li><a href="http://goodmath.blogspot.com/2006/04/group-theory-what-is-symmetry-why-do-i.html">Group Theory: What is symmetry? 
Why do I care?</a></li><li><a href="http://goodmath.blogspot.com/2006/04/group-theory-3-expanding-on-symmetry.html">Group Theory 3: Expanding on Symmetry</a></li><li><a href="http://goodmath.blogspot.com/2006/04/group-isomorphism-defining-symmetry.html">Group Isomorphism: Defining Symmetry Transformations</a></li><li><a href="http://goodmath.blogspot.com/2006/04/permutations-and-symmetry-groups.html">Permutations and Symmetry Groups</a></li><li><a href="http://goodmath.blogspot.com/2006/04/groups-subgroups-and-group-actions-oh.html">Groups, Subgroups, and Group Actions, Oh my!</a></li><li><a href="http://goodmath.blogspot.com/2006/04/cyclic-groups.html">Cyclic Groups </a></li></ul>http://goodmath.blogspot.com/2006/06/group-theory-index.htmlnoreply@blogger.com (MarkCC)0tag:blogger.com,1999:blog-23588438.post-114936315364787365Sat, 03 Jun 2006 19:31:00 +00002006-06-03T15:32:33.650-04:00Automata Theory (index)<ul><li> <a href="http://goodmath.blogspot.com/2006/03/halting-problem.html"> The Halting Problem </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/03/playing-with-mathematical-machines.html"> Playing with mathematical machines </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/03/turing-machine-tricks.html"> Turing Machine Tricks </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/computer-science-math-and-languages.html"> Computer Science, Math, and Languages </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/regular-languages.html"> Regular Languages </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/nondeterminism-in-finite-state.html"> Nondeterminism in Finite State Machines </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/context-free-languages.html"> Context Free Languages </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/context-sensitive-languages.html"> Context Sensitive Languages </a><br /></li><li> <a 
href="http://goodmath.blogspot.com/2006/05/level-0-recursive-and-recursively.html"> Level 0, recursive and recursively enumerable languages </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/busy-beavers.html"> Busy Beavers </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/minsky-machine.html"> The Minsky Machine </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/minsky-machine-to-play-with.html"> A minsky machine to play with </a><br /></li></ul>http://goodmath.blogspot.com/2006/06/automata-theory-index.htmlnoreply@blogger.com (MarkCC)0tag:blogger.com,1999:blog-23588438.post-114936309988873624Sat, 03 Jun 2006 19:31:00 +00002006-06-03T15:31:39.893-04:00Rebutting Creationism (index)<ul><br /><li> <a href="http://goodmath.blogspot.com/2006/03/really-bad-math-evolution-and.html"> Really Bad Math: Evolution and Information </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/03/studying-bad-probability-creation_14.html"> Studying Bad Probability: the Creation Science Research Center </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/03/king-of-bad-math-dembskis-bad.html"> The King of Bad Math: Dembski's Bad Probability </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/03/problem-with-irreducible-complexity.html"> The Problem with Irreducible Complexity </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/03/conflict-between-ic-and-it-arguments.html"> The conflict between IC and IT arguments </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/03/more-bad-probability-cheating-with.html"> More bad probability: Cheating with Chance from AiG </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/03/berlinskis-bad-math.html"> Berlinski's Bad Math </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/04/one-last-stab-at-dembski-vacuousness.html"> One last stab at Dembski: the Vacuousness of Specified Complexity </a><br /></li><li> <a
href="http://goodmath.blogspot.com/2006/04/berlinski-responds_06.html"> Berlinski responds </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/04/qa-roundup-no-free-lunch.html"> Q&A roundup: No Free Lunch </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/04/dembski-and-displacement.html"> Dembski and Displacement </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/relativity-and-young-earth-bogosity.html"> Relativity and Young Earth Bogosity </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/creationists-and-extra-solar-planets.html"> Creationists and Extra-solar planets </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/probability-and-fine-tuning.html"> Probability and Fine Tuning </a><br /></li></ul>http://goodmath.blogspot.com/2006/06/rebutting-creationism-index.htmlnoreply@blogger.com (MarkCC)0tag:blogger.com,1999:blog-23588438.post-114936305894951046Sat, 03 Jun 2006 19:30:00 +00002006-06-03T15:30:58.950-04:00Logic (index)<ul><br /><li> <a href="http://goodmath.blogspot.com/2006/03/bit-of-logic.html"> A bit of logic </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/03/calculus-no-not-that-calculus_29.html"> Calculus - no, not that calculus! 
</a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/04/quick-logic-reasoning-and-semantics.html"> Quick Logic: Reasoning and Semantics </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/04/more-logic-models-and-why-they-matter.html"> More logic: models and why they matter </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/04/correcting-my-models-post-or-why.html"> Correcting my models post; or, why MarkCC is a dummy </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/logic-fun-intuitionistic-logic.html"> Logic Fun: Intuitionistic Logic </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/moving-towards-models-kripke-semantics.html"> Moving towards models: Kripke Semantics </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/finally-kripke-model-for.html"> Finally: the Kripke Model for Intuitionistic Logic </a><br /></li></ul>http://goodmath.blogspot.com/2006/06/logic-index.htmlnoreply@blogger.com (MarkCC)0tag:blogger.com,1999:blog-23588438.post-114936300019965748Sat, 03 Jun 2006 19:29:00 +00002006-06-03T15:30:00.200-04:00Rebutting PEAR (index)<ul><br /><li> <a href="http://goodmath.blogspot.com/2006/04/bad-math-of-paranormal-research-pear.html"> The Bad Math of Paranormal Research: PEAR </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/04/sunday-snack-finishing-up-pear.html"> A Sunday Snack: Finishing Up the PEAR </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/repearing-bad-math.html"> RePEARing Bad Math </a><br /></li><li> <a href="http://goodmath.blogspot.com/2006/05/pear-yet-again-theory-behind.html"> PEAR yet again: the theory behind paranormal gibberish </a><br /></li></ul>http://goodmath.blogspot.com/2006/06/rebutting-pear-index.htmlnoreply@blogger.com (MarkCC)5tag:blogger.com,1999:blog-23588438.post-114936272161046566Sat, 03 Jun 2006 19:23:00 +00002006-12-29T08:31:08.403-05:00Lambda Calculus (Index)<ul><li><a
href="http://goodmath.blogspot.com/2006/05/my-favorite-calculus-lambda-part-1.html"> My Favorite Calculus: Lambda (part 1)</a></li><li><a href="http://goodmath.blogspot.com/2006/05/genius-of-alonzo-church-numbers-in.html">The Genius of Alonzo Church: Numbers in Lambda Calculus</a></li><li><a href="http://goodmath.blogspot.com/2006/05/booleans-and-choice-in-lambda-calculus.html">Booleans and Choice in Lambda Calculus</a></li><li><a href="http://goodmath.blogspot.com/2006/05/why-oh-why-y.html">Why oh why Y?</a></li><li><a href="http://goodmath.blogspot.com/2006/05/from-lambda-calculus-to-combinator.html">From Lambda calculus to Combinator Calculus </a></li><li><a href="http://goodmath.blogspot.com/2006/05/types-in-lambda-calculus.html">Types in Lambda Calculus</a></li><li><a href="http://goodmath.blogspot.com/2006/06/finally-modeling-lambda-calculus.html">Finally, Modeling Lambda Calculus: Programs are Proofs!<br /></a></li></ul>http://goodmath.blogspot.com/2006/06/lamda-calculus-index.htmlnoreply@blogger.com (MarkCC)0tag:blogger.com,1999:blog-23588438.post-114935320572922807Sat, 03 Jun 2006 16:31:00 +00002006-06-03T12:46:45.760-04:00Reader poll about topics; and some site maintenance stuffI'm going to be doing a bit of work on the site this weekend to try to create a couple of indices for some of the series of posts that I've written. (After three months, I'm starting to realize that the site could use a bit of organization :-) ) So there's going to be a flurry of activity around here that's not terribly interesting, but it will make things easier to find as a result.<br /><br />I'm also trying to decide what subjects I should write about next. I've got a few possibilities in mind, and I thought I'd ask you folks what you're interested in. Let me know which of the following interest you most; or feel free to "write in" your own suggestions.<br /><br /><ol><li><span style="font-weight: bold;">Topology</span>.
I'm far from an expert on the subject, but it's an interesting one, and I'm willing to study up and write about it, so long as people are patient with me when I inevitably make mistakes.<br /></li><li><span style="font-weight: bold;">More group theory</span>. I've got enough time to be able to continue the series of posts on group theory.</li><li><span style="font-weight: bold;">Modal logics</span>. In the whole lambda calculus thing, I mentioned the idea of modal logics. There are some really fascinating modal logics (I particularly like temporal and linear logics) that allow you to reason in different ways.<br /></li><li><span style="font-weight: bold;">Graph theory.</span> Graph theory is a fascinating area of discrete math that I think is a lot of fun.</li><li><span style="font-weight: bold;">Category theory.</span> Category theory is similar to group theory; it's one of those fields based on abstracting something down to its bare essentials. In cat theory, it's the concept of a function as a mapping from one thing to another.</li></ol>http://goodmath.blogspot.com/2006/06/reader-poll-about-topics-and-some-site.htmlnoreply@blogger.com (MarkCC)22tag:blogger.com,1999:blog-23588438.post-114927214642421641Fri, 02 Jun 2006 17:33:00 +00002006-06-02T14:15:46.460-04:00Astoundingly Stupid Math: the Bullshit of Homeland Security Funds AllocationHat tip to <a href="http://nielsenhayden.com/makinglight/archives/007602.html#128642">Making Light</a> for the following.<br /><br />As you must have heard, this week, the US Department of Homeland Security released information about the allocation of anti-terrorism funds to US cities. New York City, which is about 15 miles south of where I'm sitting right now, had its funding cut by 40%. The mayor of NYC pointed out the stupidity of this, saying "Whenever we catch someone involved in a terrorist plot, they've got a map of NYC in their pocket".<br /><br />More information about the allocation has gradually been getting out.
It turns out that the allocation method was remarkably simple. In their applications for funding, cities listed assets that they needed to protect. What DHS did was take the number of listed assets from all of the cities that were going to be recipients of funds, and give each city an amount of funding proportional to the number of assets they listed.<br /><br />So, the Empire State Building is equal to the neighborhood bank in Omaha. The stock exchange on Wall Street is equal to the memorial park in Anchorage, Alaska. Mount Sinai hospital is equal to the county hospital in the suburbs of Toledo, Ohio. The New York subway system (18.5 billion passenger-miles per year) is equal to the Minneapolis transit system (283 million passenger-miles per year). The Brooklyn Bridge is equal to the George Street bridge in New Brunswick, NJ.<br /><br />We know, perfectly well, how to estimate relative values and risks - insurance companies do it every day. Did our government do that? No. They ignored the established math of how to do it right, in favor of a stupid, a mind-bogglingly stupid, a <em>deliberately</em> stupid formula that would screw over every single actual high-risk location in the country, so that they could shove the money at people who don't need it.http://goodmath.blogspot.com/2006/06/astoundingly-stupid-math-bullshit-of_02.htmlnoreply@blogger.com (MarkCC)7tag:blogger.com,1999:blog-23588438.post-114926258780051308Fri, 02 Jun 2006 15:31:00 +00002006-06-02T11:36:27.833-04:00Finally, Modeling Lambda Calculus: Programs are Proofs!Now that we've worked through intuitionistic logic and its model, and we've worked from the untyped lambda calculus to the simply typed lambda calculus, we're finally ready for the model of lambda calculus. And this is where it gets <span style="font-style: italic;">really</span> fun.<br /><br />Think about the types in the simply typed lambda calculus.
Anything which can be formed from the following grammar is a lambda calculus type:<br /><code><br />type ::= primitive | function | ( type )<br />primitive ::= A | B | C | D | ...<br />function ::= type -> type</code><br /><br />The catch is that the grammar lets you create type expressions which are valid type definitions, but for which you can't write a single, complete, closed expression that actually has the type. (A closed expression is one with no free variables.) When there is an expression that has a type, we say that the expression <em>inhabits</em> the type, and that the type is an inhabited type. If there is no expression that can inhabit a type, we say it's <em>uninhabitable</em>.<br /><br />So what's the difference between an inhabitable type and an uninhabitable type?<br /><br />The answer comes from something called the <em>Curry-Howard isomorphism</em>. For a typed lambda calculus, there is a corresponding intuitionistic logic; a type expression is inhabitable if and only if the type is a <em>theorem</em> in the corresponding logic.<br /><br />Look at the type "A -> A". Now, instead of seeing "->" as the function type constructor, try looking at it as logical implication. "A implies A" is clearly a theorem of intuitionistic logic. So the type "A -> A" is inhabitable.<br /><br />Now, look at "A -> B". That's <em>not</em> a theorem, unless there's some other context that proves it. As a function type, that's the type of a function which, without any context of any kind, can take a parameter of type A, and return a value of a <em>different</em> type B. You can't do that - there's got to be some context which provides a value of type B - and to access that context, the function needs a free variable.
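As a rough illustration (using Python's type hints as a stand-in for simply typed lambda calculus; the function names here are mine, purely for illustration): the identity function is a closed expression inhabiting A -> A, but anything that produces a B from an A needs its context made explicit.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# "A -> A" is inhabited: the identity function is a closed expression
# of that type, mirroring the theorem "A implies A".
def identity(x: A) -> A:
    return x

# "A -> B" has no closed inhabitant: to produce a B from an arbitrary A,
# the function needs context. Here the context is passed as an extra
# argument -- playing the role the free variable would play.
def with_context(x: A, context: Callable[[A], B]) -> B:
    return context(x)
```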
Same thing in the logic and the lambda calculus: you need some kind of context to establish "A->B" as a theorem (in the logic) or as an inhabitable type (in the lambda calculus).<br /><br />It gets better. If there is a closed LC expression whose type is a theorem in the corresponding intuitionistic logic, then the expression that has that type <em>is a proof</em> of the theorem. Each beta reduction is equivalent to an inference step in the logic. The logic corresponding to the lambda calculus <em>is</em> its model. In some sense, the lambda calculus and intuitionistic logic are just different reflections of the same thing.<br /><br />There are two ways of demonstrating the isomorphism: there's the original way that Curry did it, using the combinator calculus; and there's something called the <em>sequent calculus</em>. I learned it using the combinator version, so I'll run through that quickly. Another day, probably early next week, I'll do the sequent version.<br /><br />Let's recall what a model is. A model is a way of showing that <em>every</em> statement in the calculus is valid in some concrete universe of values - that there is a correspondence between some set of real entities and the entities in the calculus, where statements in the calculus correspond to valid statements about the real entities. So we don't actually need to do the full isomorphism; we just need to show a homomorphism from calculus to logic. (An isomorphism means you can go both ways: from calculus to logic and from logic to calculus. A homomorphism goes only from calculus to logic.)<br /><br />So what we need to do is explain exactly how to take any complete lambda calculus expression, and translate it into a series of valid intuitionistic logic statements.
Since the intuitionistic logic itself has been proven valid, if we can translate lambda calculus to IL, then we'll have proven the validity of lambda calculus - meaning that we'll have shown that computations in the lambda calculus are valid computations, and the lambda calculus is, indeed, a fully valid, effective computing system.<br /><br />How do we get from combinators (which are just a shorthand for lambda calculus without variables) to intuitionistic logic? It's actually amazingly easy.<br /><br />All proofs in an intuitionistic logic come down to a series of steps, each of which is an inference using one of two fundamental axioms:<br /><ol><li> <code>A implies B implies A</code><br /></li><li> <code>(A implies B implies C) implies ((A implies B) implies (A implies C))</code><br /></li></ol>Let's rewrite those using arrows, like a type: <code> A -> B -> A</code>; and <code>(A -> B -> C) -> (A -> B) -> A -> C</code>.<br /><br />Do those types look familiar to you? Take a look back at <a href="http://goodmath.blogspot.com/2006/05/types-in-lambda-calculus.html">the post on simple typed lambda calculus</a> if they don't. Those are the types of the K and S combinators.<br /><br />The steps to the model should be pretty obvious now. Types in the lambda calculus correspond to propositions in the intuitionistic logic. Functions are inference rules. Each function can be reduced to a combinator expression; and each combinator expression is an instantiation of one of the fundamental inference rules of the intuitionistic logic. The function is a <em>constructive proof</em> of the theorem in the corresponding logic.<br /><br />Now, is that cool, or what?<br /><br />(To which any normal person would respond "what".
But then, I'm obviously not a normal person, I'm a math geek.)http://goodmath.blogspot.com/2006/06/finally-modeling-lambda-calculus.htmlnoreply@blogger.com (MarkCC)6tag:blogger.com,1999:blog-23588438.post-114925728955717188Fri, 02 Jun 2006 13:39:00 +00002006-11-16T12:19:24.770-05:00Friday Random Ten, 6/2Yes indeed, it's Friday again. Where did my week go?<br /><ol><li> <span style="font-weight: bold;">Phish, "My Friend, My Friend"</span>. I actually really like certain jam bands. Phish is often quite good. This is one of my favorites of their studio tracks. (And, if you look at the cover of the album, and you draw a line through the arm of the guy on the bed, you can make a 23.5 degree angle! Phish is in on the secret!)</li><li><span style="font-weight: bold;">Deanta, "The Rocky Reels". </span>Deanta is a wonderful traditional Irish band, which prominently features one of my favorite Irish flutists, Deirdre Havlin. Beautiful music, arranged in a wonderfully sparse way. Bouzouki, flute, fiddle, piano, and harp. Just stunning.</li><li><span style="font-weight: bold;">Porcupine Tree, "Prodigal".</span> Pretty much what you expect from PT: somewhat odd dark chords, an unexpectedly smooth voice, kinda catchy melody. Cool stuff. Not their best, but still awfully good.</li><li><span style="font-weight: bold;">Sonic Youth, "Disappearer".</span> Sonic Youth often do things with interesting tonalities, microtones, and strange extended jams involving playing distorted guitars with all sorts of peculiar implements. This is a perfect example of that.</li><li><span style="font-weight: bold;">King Crimson, "The Power to Believe 1: a Capella".</span> I love the stuff that Fripp and company have been up to in the last couple of years. They've been getting even stranger with time. After the ProjeKCts work, they did this album, and it really shows the way that all that free-improv work affected them.</li><li><span style="font-weight: bold;">Miles Davis, "Deception".
</span>It's Miles, what more do you need to say?</li><li><span style="font-weight: bold;">Nightingale, "The Discharged Drummer"</span>. Nightingale is a wonderful French-Canadian acoustic folk trio, and one of the best contradance bands in the world. Beautiful little track, this one.</li><li><span style="font-weight: bold;">Pain of Salvation, "Rope Ends". </span>Neo-progressive spin-off of the Flower Kings. Very cool band.<br /></li><li><span style="font-weight: bold;">Broadside Electric, "With her head tucked underneath her arm".</span> Broadside is a Philadelphia-based electric folk band which plays a lot of stuff based on ballads from Child's compilation. They also mix in traditional Irish, klezmer, and progressive rock influences. The usual lineup is lead singer/guitarist/dulcimer, electric violin, Chapman stick, tinwhistle/oboe, and drum. This is one of their humorous songs, about the ghost of Anne Boleyn haunting the Tower of London. (Who doesn't love lyrics like "Along the draughty corridors for miles and miles she goes/She often catches cold, poor thing, it's cold there when it blows/And it's awfully awkward for the Queen to have to blow her nose/With her head tucked underneath her arm!") It's not one of their originals, but who else has recorded it with a klezmer violin solo?</li><li><span style="font-weight: bold;">Pat Donohue, "The Other End of the Mississippi River Blues". </span>Pat is the guitarist from the Guys All-Star Shoe Band on Prairie Home Companion. He's the best finger-style guitarist that I've ever seen in person. He's also quite a goof: Pat's own description of what he does is "go out and find perfectly good songs, and then ruin them". This one is one of his originals. Silly, but fun.
("They've got magnolia trees, sweet southern breeze; I've got ice and snow, 26 degrees below".)<br /></li></ol>http://goodmath.blogspot.com/2006/06/friday-random-ten-62.htmlnoreply@blogger.com (MarkCC)3tag:blogger.com,1999:blog-23588438.post-114912242449565112Thu, 01 Jun 2006 00:39:00 +00002007-02-08T17:08:48.956-05:00Improbable is not impossible, reduxI actually decided to say a bit more about the concept of improbable vs. impossible after my last post.<br /><br />As I've said before, the fundamental idea behind these arguments is what I call the "big numbers" principle: We (meaning human beings) are very bad at really grasping the meaning of very large numbers. It's hard to really grasp what a million is: if you counted one number a second, with no breaks of any kind, it would take you more than 11 days to count to one million. When we start getting to numbers that dwarf a million, our ability to really understand what they mean - to really grasp on a deep, intuitive level what they mean - just completely goes out the window.<br /><br />What does 10^20 mean? If it takes 11 days to count to one million, then it would take roughly 11 times 10^14 days - or 3 times 10^12 (aka 3 trillion) <em>years</em> to count to 10^20. That just doesn't mean <em>anything</em> on an intuitive level. So what does 10^100 mean?<br /><br />What happens when we start to look at probability? Well, it's really easy to get huge numbers in probability calculations - the kinds of numbers that we can't really grasp. And because those numbers are so meaningless to us, so utterly beyond the reach of what we really comprehend, the probabilities of astonishingly common events can <em>seem</em> like they're entirely beyond the realm of possibility: they <em>must</em> be impossible, because how could anything so easily comprehensible create numbers so incomprehensible?<br /><br />Who hasn't shuffled cards? Who hasn't played a game of klondike solitaire? 
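For scale: a shuffled deck is one of 52! (52 factorial) possible orderings, a number that's a one-liner to compute in Python:

```python
import math

# Number of distinct orderings of a standard 52-card deck.
orderings = math.factorial(52)

print(len(str(orderings)))  # 68 -- a 68-digit number, about 8.07 x 10^67
```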
And yet, every time we do that, we're witnessing an event with a probability of roughly 1 in 8 times 10^67!<br /><br />If something as common as shuffling a deck of cards can produce a result with odds of 1 in 10^67, then think of the probability of something uncommon: what's the probability of the particular arrangement of molecules in a particle of dust? Or the shape of a comet's tail? Those are all natural, possible events - and the probability of them happening the way that they do is pretty much beyond our capacity to imagine.<br /><br />When you grasp that, then you start to realize just how meaningless the big numbers argument is. No matter how <em>improbable</em> something is, that does not mean that it is <em>impossible</em>. Impossible is a probability of 0. And <em>any</em> probability, no matter how small, is almost certainly the probability of <em>something</em> that's occurring in the universe, somewhere, right now.http://goodmath.blogspot.com/2006/05/improbable-is-not-impossible-redux.htmlnoreply@blogger.com (MarkCC)21tag:blogger.com,1999:blog-23588438.post-114911964561401208Wed, 31 May 2006 23:53:00 +00002006-06-02T12:52:37.913-04:00Repeat after me: Improbable IS NOT ImpossibleI got a request to respond to <a href="http://standonbible.blogspot.com/2006/04/improbable-versus-impossible-is-id.html">this</a>. It's yet another one of those innumerate jackasses arguing that god must exist, because life is too unlikely otherwise.<br /><br />It's the same old crap, but with one interesting twist - this guy tries to provide an argument for why anything less probable than a specific level of improbability is absolutely physically impossible. In his own words:<br /><blockquote>In a previous post, I presented a logical, factual argument for the existence of God. 
But one of my readers, Seanny McShawn, took issue with my syllogisms, saying that I was simply making an argument “from incredulity”.<br /><br />When someone tries to win an argument based on simple probabilities, this is called an “argument from incredulity.” This is a logical fallacy. In other words, the sheer unlikeliness of a scenario does not preclude its possibility and cannot be proof against it.<br /><br />But was I arguing from “incredulity”?<br /></blockquote>The answer is, pretty much, "yes".<br /><blockquote>Physicists estimate that in our universe there are 10^80 particles. Mathematicians say that the mathematical level of absolute impossibility is 1 chance in 10^50.<br /></blockquote>Not any competent mathematician.<br /><blockquote> However, the physical level of absolute impossibility is 1 chance in 10^80, and here’s why:<br /><br />On the basic level, probability is defined by the ‘particle’ example: finding a specially marked particle among 500,000 particles is beating odds of 1 in 500,000. In a universe that has 10^80 individual particles, the most improbable scenario is finding a specially marked particle in the entire universe. Due to the size of our universe, it is impossible to have a more improbable set of odds than 1 chance in 10^80. Anything that is more improbable than the most improbable is by all standards absolutely impossible.<br /></blockquote>I have <em>never</em> seen this particle example before. But nonsense is what it is; and that is, in fact, astoundingly easy to demonstrate.<br /><br />Ready? Are you sure? Ok. 
Here goes:<br /><pre>adfjiopwqeruoipboisdupoqwerbjksggfiuqwerup9ujdknc<br />blkjdbfqlijwecna34kjvldjfghqiopewufnadsmvbviuhwro<br />wprotiudvncmbcxbnajkhgpqrjwopeporewgjvdlkfmfkpowe<br />ri3nwe9mfwkdgnguipewiofkxnvcvpuiweklhhjnjwrn9fpuo<br />isdhpvjpndjkgnewqhtrepiuwifjoxcjvkneqotiurhwepafk<br />makdlgnqwejrhpqfioudshnfeiqjhweiufh<br /></pre>That's 286 random characters, produced by pounding my hands on my keyboard, and then deleting any punctuation characters (because I'm too lazy to figure out how many unshifted characters there are on my keyboard, so I stuck to numbers and letters.) The odds of generating that particular string are 1 in 36^286. That's quite a bit less likely than 1 in 10^80. But guess what? It's possible!<br /><br />Or, just to annoy the fundies, go get yourself a deck of Tarot cards. (Hey, they're good to have. There are some damned neat poker variants that you play with a Tarot deck, not to mention some funky solitaires.) A Tarot deck has 78 cards. Shuffle the deck. Lay the cards out in order. The odds of that ordering? Less than 1 in 10^115. But entirely possible.<br /><br />What is the probability of a solar flare taking a specific shape? Think of the trillions of trillions of particles in a flare, and the unique shape of each one. The probability of those shapes? Quite a bit worse than 1 in 10^80.<br /><br />This one in 10^80 stuff is pure nonsense. It's just another attempt to create an arbitrary threshold beyond which we can say something is impossible.<br /><br />He then goes on to repeat yet another variant of the "odds of life based on the probability of a particular protein" gibberish. I've covered that plenty of times on this blog; for example, my analysis of Berlinski's version of this argument <a href="http://goodmath.blogspot.com/2006/03/berlinskis-bad-math.html">here</a>.<br /><br />So this guy concludes with:<br /><blockquote>This is not an argument from incredulity. This is an argument from facts: cold, hard facts. 
Since any set of odds above 1 in 10^80 is absolutely impossible, random chance could not and did not produce life.<br /></blockquote>Except that it <em>is</em> an argument from incredulity, utterly lacking in cold, hard facts. In fact, it's trivially refuted by cold, hard facts; and it's based on one of the most pathetic attempts I've seen so far to define "impossible" in terms of probability.http://goodmath.blogspot.com/2006/05/repeat-after-me-improbable-is-not.htmlnoreply@blogger.com (MarkCC)15tag:blogger.com,1999:blog-23588438.post-114910056475935786Wed, 31 May 2006 18:29:00 +00002006-05-31T14:36:04.790-04:00Magic 23.5 update: Osborne runs away.Just thought I'd put up a quick little note here. I found that the author of the "23.5" stuff, after giving up on responding here, has set up a nice little website - sans comments - in which he defends himself from the criticisms here.<br /><br />Since I went to the trouble of publicly presenting his defense on the front page of this blog, and engaging in a civil discussion with him in the comments, I'm more than a little bit ticked at this. He's obviously welcome to handle things as he wants, but I find it rather cowardly of Gary to silently drop out of a public, open discussion here, and then continue it on a private site, where no one who disagrees with him is permitted to respond.<br /><br />I'm not going to link to it, since he's got rather a lot of advertisements set up (you actually have to sit through an ad on the main link to the site before you're allowed to read any content), and I'd rather not send cash in his direction as a reward for this. 
If you really care, you can find it either through Google, or through the SiteMeter referrals link on the bottom of the front page of this blog.http://goodmath.blogspot.com/2006/05/magic-235-update-osborne-runs-away.htmlnoreply@blogger.com (MarkCC)19tag:blogger.com,1999:blog-23588438.post-114909770695822238Wed, 31 May 2006 17:46:00 +00002007-02-24T07:22:06.913-05:00Practical Applications of Good Math: Type Checking in Programming Languages<em>(Since I see that there are still links pointing at this post, I'll point out here that this blog has moved to <a href="http://www.scienceblogs.com/goodmath">scienceblogs</a>. This discussion was continued in a post there.)</em><br /><br />I thought today I'd go a bit off topic, and rant a little bit about stuff that's been bugging me lately in my real work. It does, in its bizarre way, relate to the lambda calculus series I've been writing lately.<br /><br />What I do professionally is research in a field called collaborative software development: how to build tools that help groups of people work together to build large systems. It's an interesting thing to do, with a huge range of things that can be done, ranging from better programming languages to better configuration management systems to better programming environments to entirely new kinds of collaborative tools. It involves practical programming, math, psychology and human factors, and a few other interesting things.<br /><br />My approach to this kind of work is to view the key problem as being one of communication. Most problems in large systems are ultimately not algorithmic errors - they're errors where some key piece of information was not correctly communicated between developers. For example, one of the most common bugs in large systems occurs when the implementor of a piece of code makes assumptions about how their code is going to be called; and the developer who uses that code makes <em>different</em> assumptions. 
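(A minimal illustration of that kind of mismatch, sketched in Python with invented names rather than anything from my actual project. A parameter that may legitimately be absent can say so right in its declaration, in the spirit of OCaml's Option types; a static checker like mypy will then reject callers that pass None where a plain sequence was declared, while without that annotation the mismatch only surfaces at runtime:)

```python
from typing import Optional, Sequence

def find_items(filters: Sequence[str],
               tags: Optional[Sequence[str]] = None) -> str:
    """`filters` must always be a sequence: pass [] when unused.
    `tags` is declared Optional, so None is an expected, checked-for value."""
    if filters is None:
        # The dynamic check: with only runtime checking, this is where the
        # null-vs-empty-array confusion finally blows up.
        raise TypeError("filters may not be None; pass [] for 'unused'")
    shown_tags = ",".join(tags) if tags is not None else "(none)"
    return f"filters={list(filters)!r} tags={shown_tags}"

print(find_items([], None))   # fine: [] for "unused", None only where Optional
# find_items(None)            # mypy rejects this call; at runtime, a TypeError
```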
To be concrete: I experienced this recently on my project at work. I'm working on a bit of code, and one function I call takes a lot of parameters, each of which is an array of objects. Most of the time, you only supply a real value for one or two out of a list of 8 parameters. I assumed that for unused parameters, it made sense to pass null - that's how I would have written that method. The person who wrote it expected callers to <em>never</em> pass null, but to use empty arrays for unused parameters. Program compiles, deploys, and boom: exceptions out the wazoo. (There was a dynamic check for null, but because of some proxy stuff we do, it was very hard to track it down.)<br /><br />This isn't a result of incompetence on anyone's part. This project has some of the finest developers that I've ever met working on it. The person who wrote that method with the awful parameter list is actually a pretty great programmer - one of the top people from one of the top groups of developers that I know of. I don't think I'm incompetent either :-). So why did the error happen?<br /><br />Because a key piece of information didn't get passed between us. Between the size of our system, the number of people involved, the way the code is partitioned into components, and where the documentation lives, the information about what to do with unused parameters got lost.<br /><br />What does any of this have to do with lambda calculus? Types.<br /><br />That problem wouldn't have happened if we were programming in my favorite programming language, OCaml. In OCaml, to be able to pass a "null" value, I actually need to specify that in the parameter declaration (as an Option type). There's no way I can pass a null to a function that wasn't expecting one.<br /><br />It's a common assertion among a lot of the elite hackers out there - think "real programmers don't use strongly typed languages", or the <a href="http://www.paulgraham.com">Paul Graham</a> line about strongly typed languages. 
For example, here's <a href="http://www.paulgraham.com/javacover.html">Paul on Java</a>:<br /><blockquote>It's designed for large organizations. Large organizations have different aims from hackers. They want languages that are (believed to be) suitable for use by large teams of mediocre programmers-- languages with features that, like the speed limiters in U-Haul trucks, prevent fools from doing too much damage.<br /></blockquote> This attitude really bugs me. The thing is, types are meta-information about a program - and <em>useful</em> meta-information. And the more people you have involved in a system, the more important it becomes to have strong, formal information about parts of the system that allow automatic checks.<br /><ul><li> The elitism issue: "Yeah, I'm a super-duper programmer who doesn't make mistakes; people who don't work the way I do are just inferior." Yeah, tell that to the guys who wrote OCaml.<br /></li><li> The cluelessness issue: "I write programs in Perl/Python/CommonLisp/... with three other guys who are all Lisp wizards, and if we don't have a problem, then no one should, so that typing stuff is just nonsense to cover up for incompetence." There are problems that emerge as the size of a system and the number of people working on it increase. Three people hacking on something is very different from 30 people hacking on it, which is very different from 300 people. As the number of people involved increases, you need to do more to ensure that information gets to where it's needed.<br /></li><li> The information loss issue. This is the one that actually bugs me the most. Programmers <em>know</em> what types they're using. They know what types they're passing. They know information about the parameters they're taking, and the parameters that they're passing. Good type systems give them a way to <em>write that down</em> - and not just write it, but make it <em>a part of the code</em>, so that it can be tested. 
Writing it down as a part of the code means that the information is recorded; that it's maintained; and that it's checked. Not declaring types saves you a bit of keyboard time - but it means that there's important information in your head which you're not writing down. (I have exactly the same rant about parallelizing compilers, which is that you have to structure your code very carefully and precisely to allow the compiler to figure out the information that you knew before you wrote the code; but instead of letting you write it down, you have to trick the compiler into figuring it out on its own.)<br /></li></ul>The most common response from people like Paul to rants like mine is "Strong testing, not strong typing." The idea being, type checking is really something like a very simple kind of compile time testing, which is inferior to a real, complete test suite.<br /><br />Sure, strong typing in a language is no replacement for testing. But why on earth should I write tests to verify correct typing, when I can just put the type information <em>in line in the code</em> and have it tested automatically? Typing and type checking is <em>one part</em> of the process; having a typed language doesn't mean you shouldn't write tests; writing tests doesn't mean that compile-time type checking is useless. Pulling the typing information out of line into a test is, frankly, just dumb: the more you spread the information around between multiple places, the less likely it is to be maintained, or even <em>found</em>.<br /><br />I'm a huge unit testing fan; but one of the major limitations of testing is information scattering. You're putting information about the invariants and requirements of your code in the logic of the tests - and because of the multidimensional nature of software systems, that inevitably means that some of the information is here, some there. 
Trying to assemble the information you need from a list of test failures can be incredibly frustrating; and those tests often don't get maintained, so things change, but the tests aren't changed to thoroughly respect the new semantics. Compile time checking can't get out of synch with the current state of the code.http://goodmath.blogspot.com/2006/05/practical-applications-of-good-math.htmlnoreply@blogger.com (MarkCC)31tag:blogger.com,1999:blog-23588438.post-114900952596799840Tue, 30 May 2006 17:01:00 +00002006-05-30T13:18:46.000-04:00Types in Lambda CalculusNow that we've got intuitionistic logic under our belts, we're ready to get back to lambda calculus: we've got the logical tools we need to define the model. Of course, nothing is ever quite that simple, is it?<br /><br />What we've talked about so far is the simple untyped lambda calculus. That's how LC started; it's the first version that Church invented. But it did end up with some problems, and to get around that, a concept of <em>types</em> was introduced, leading to the simple typed lambda calculus, and on from there to a world of variants - System T, System F, the Lambda Cube (not related to the time cube :-)), and so on. And eventually, people realized that the untyped lambda calculus was really just a pathologically simple case of a typed lambda calculus - it's a typed LC with only one type.<br /><br />The semantics of lambda calculus are easiest to talk about in a typed version. For now, I'll talk about the simplest typed LC, known as the <em>simply typed lambda calculus</em>; and how it's ultimately semantically equivalent to an intuitionistic logic. (In fact, every version of typed LC corresponds to an IL; and every beta reduction in an LC corresponds to an inference step in an IL proof; that's why we needed to sidetrack into intuitionistic logic to get here.)<br /><br />The main thing that typed lambda calculus adds to the mix is a concept called <em>base types</em>. 
In a typed lambda calculus, you have some universe of atomic values which you can manipulate; those values are partitioned into simple types. Base types are usually named by single lower-case Greek letters; since that's a pain on Blogger, I'll use upper-case English letters for type names. So, for example, we could have a type "N", which consists of the set of natural numbers; a type "B" which corresponds to boolean true/false values; and a type "S" which corresponds to strings.<br /><br />Once we have basic types, then we can talk about the type of a function. A function maps from values of one type (the type of the parameter) to values of a second type (the type of the return value). For a function that takes a parameter of type "<code>A</code>", and returns a value of type "<code>B</code>", we write its type as "<code>A -> B</code>". "->" is called the <em>function type constructor</em>; it associates to the right, so "<code>A -> B -> C</code>" means "<code>A -> (B -> C)</code>".<br /><br />To apply types to the lambda calculus, we do a couple of things. First, we need a syntax update so that we can include type information in lambda terms. And second, we need to add a set of rules to show what it means for a typed program to be valid.<br /><br />The syntax part is easy. We add a ":" to the notation; the colon has an expression or variable binding on its left, and a type specification on its right. It asserts that whatever is on the left side of the colon has the type specified on the right side. A few examples:<br /><ol><li> <code>lambda x : N . x + 3</code>. This asserts that the parameter, <code>x</code>, has type "<code>N</code>", which is the natural numbers. There is no assertion of the type of the result of the function; but since we know that "+" is a function with type "<code>N -> N -> N</code>", we can infer that the result type of this function will be "<code>N</code>".<br /></li><li> <code>(lambda x . x + 3) : N -> N</code>. 
This is the same as the previous, but with the type declaration moved out, so that it asserts the type for the lambda expression as a whole. This time we can infer that "<code>x : N</code>" because the function type is "<code>N -> N</code>", which means that the function parameter has type "<code>N</code>".<br /></li><li> <code>lambda x : N, y : B . if y then x * x else x</code>. A two-parameter function; the first parameter is type "<code>N</code>", the second type "<code>B</code>". We can infer the return type, which is "<code>N</code>". So the type of the full function is "<code>N -> B -> N</code>". This may seem surprising at first; but remember that lambda calculus really works in terms of single-parameter functions; multi-parameter functions are a shorthand for currying. So really, this function is: <code>lambda x : N . (lambda y : B . if y then x * x else x)</code>; the inner lambda is type "<code>B -> N</code>"; the outer lambda is type "<code>N -> (B -> N)</code>".<br /></li></ol>To talk about whether a program is <em>valid</em> with respect to types (aka <em>well-typed</em>), we need to introduce a set of rules for type inference. When the type of an expression is inferred using one of these rules, we call that a <em>type judgement</em>. Type inference and judgements allow us to reason about types in a lambda expression; and if any part of an expression winds up with an inconsistent type judgement, then the expression is invalid. (When Church started doing typed LC, one of the motivations was to distinguish between values representing "atoms", and values representing "predicates"; he was trying to avoid the Gödel-esque paradoxes, by using types to ensure that predicates couldn't operate on predicates.)<br /><br />I'm going to have to adopt a slightly unorthodox notation for type judgements; the standard notation is just too hard to render correctly with the software I'm using currently. 
The usual notation is something that looks like a fraction; the numerator consists of statements that we know to be true; the denominator is what we can infer from those. In the numerator, we normally have statements using a <em>context</em>, which is a set of type judgements that we already know; it's usually written as an uppercase Greek letter. For type contexts, I'll use the names of Greek letters written in uppercase letters. If a type context includes the statement that "<code>x : A</code>", I'll write that as "<code>CONTEXT |- x : A</code>". For the fraction inference notations, I'll use two lines labelled "Given:" and "Infer:". (You can see what the normal notation looks like by visiting <a href="http://en.wikipedia.org/wiki/Simply_typed_lambda_calculus">Wikipedia's STLC page</a>.)<br /><br /><span style="font-weight: bold;">Rule1: (Type Identity)</span><code><br />Given: nothing<br />Infer: x : A |- x : A</code><br />The simplest rule: if we have no other information except a declaration of the type of a variable, then we know that that variable has the type it was declared with.<br /><br /><span style="font-weight: bold;">Rule2: (Type Invariance)</span><code><br />Given: GAMMA |- x : A, x != y<br />Infer: (GAMMA + y : B) |- x : A</code><br />This is a statement of non-interference. If we know that "<code>x : A</code>", then inferring a type judgement about any other term cannot change our type judgement for "x".<br /><br /><span style="font-weight: bold;">Rule3: (Parameter to function inference)</span><code><span style="font-weight: bold;"> </span><br />Given: (GAMMA + x : A) |- y : B<br />Infer: GAMMA |- (lambda x : A . 
y) : A -> B</code><br />This statement allows us to infer function types: if we know the type of the parameter to a function is "<code>A</code>"; and we know that the type of the value returned by the function is "<code>B</code>", then we know that the type of the function is "<code>A -> B</code>".<br /><br />And finally, <span style="font-weight: bold;"> Rule4: (Function application inference)</span><code><br />Given: GAMMA |- x : A -> B, GAMMA |- y : A<br />Infer: GAMMA |- (x y) : B</code><br />If we know that a function has type "<code>A -> B</code>", and we apply it to a value of type "<code>A</code>", the result is an expression of type "<code>B</code>".<br /><br />These four rules are it. If we can take a lambda expression, and come up with a consistent set of type judgements for every term in the expression, then the expression is well-typed. If not, then the expression is invalid.<br /><br />Just for kicks, here's how we'd write types for the SKI combinators. These are incomplete types - I'm using type variables, rather than specific types. In a real program using the combinators, you could work out the necessary substitutions of concrete types for the type variables. Don't worry, I'll show you an example to clarify that.<br /><ul><li> <b>I combinator</b>: <code>(lambda x . x) : A -> A</code><br /></li><li> <b>K combinator</b>: <code> (lambda x : A . ((lambda y : B . x) : B -> A)): A -> B -> A<br /></code></li><li> <b>S combinator</b>: <code>(lambda x : A -> B -> C . (lambda y : A -> B . (lambda z : A . ((x z) : B -> C) ((y z) : B)))) : (A -> B -> C) -> (A -> B) -> A -> C</code><br /></li></ul>Now, let's look at a simple lambda calculus expression: <code>lambda x y . y x</code>. Without any type declarations or parameters, we don't know the exact type. But we do know that "x" has some type; we'll call that "A"; and we know that "y" is a function that will be applied with "x" as a parameter, so it's got parameter type A, but its result type is unknown. 
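(An aside: the four rules above are mechanical enough to implement directly. Here's a minimal type checker sketched in Python; the term and type representation is my own invention, not anything from the formal literature. It checks the earlier <code>lambda x : N . x + 3</code> example, treating "+" as a curried function of type <code>N -> N -> N</code>:)

```python
# Terms: ("var", name) | ("lam", name, param_type, body) | ("app", fn, arg)
# Types: "N", "B", ... for base types; ("->", a, b) for function types.

def typecheck(term, ctx):
    """Return the type of `term` in context `ctx` (a dict mapping names to
    types), or raise TypeError. Each branch is one of the four rules."""
    kind = term[0]
    if kind == "var":                 # Rule 1 (type identity)
        _, name = term
        return ctx[name]
    if kind == "lam":                 # Rule 3 (parameter-to-function)
        _, name, ptype, body = term
        # Rule 2 (invariance): extending the context with the parameter's
        # type leaves every other judgement alone.
        rtype = typecheck(body, {**ctx, name: ptype})
        return ("->", ptype, rtype)
    if kind == "app":                 # Rule 4 (function application)
        _, fn, arg = term
        ftype = typecheck(fn, ctx)
        atype = typecheck(arg, ctx)
        if ftype[0] != "->" or ftype[1] != atype:
            raise TypeError(f"cannot apply {ftype} to {atype}")
        return ftype[2]
    raise ValueError(f"unknown term: {term!r}")

# lambda x : N . x + 3, with "+" curried: (+ x) applied to 3
term = ("lam", "x", "N",
        ("app", ("app", ("var", "+"), ("var", "x")), ("var", "3")))
ctx = {"+": ("->", "N", ("->", "N", "N")), "3": "N"}
print(typecheck(term, ctx))   # ("->", "N", "N"), i.e. N -> N
```

With that, back to inferring the type of <code>lambda x y . y x</code>.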
So using type variables, we can say "<code>x : A, y : A -> B</code>". We can figure out what "<code>A</code>" and "<code>B</code>" are by looking at a complete expression. So, let's work out the typing of it with x="3", and y="<code>lambda a : N . a * a</code>". We'll assume that our type context already includes "*" as a function of type "<code>N -> N -> N</code>".<br /><ul><li> <code>(lambda x y . y x) 3 (lambda a : N . a * a)</code></li><li> Since 3 is a literal integer, we know its type: <code>3 : N</code>.<br /></li><li> By rule 4, we can infer that the type of the expression "<code>a * a</code>" where "<code>a : N</code>" is "<code>N</code>"; (* : N -> N -> N; it's applied to an N and an N) and therefore, by rule 3 the lambda expression has type "N -> N". So with type labelling, our expression is now: <code>(lambda x y . y x) (3 : N) (lambda a : N . (a * a):N) : N -> N</code><br /></li><li> So now, we know that the parameter "x" of the first lambda must be "N"; and "y" must be "<code>N -> N</code>"; so by rule 4, we know that the type of the application expression "<code>y x</code>" must be "N"; and then by rule 3, the lambda has type: <code>N -> (N -> N) -> N</code>.<br /></li><li> So, for this one, both A and B end up being "N".<br /></li></ul>So, now we have a simply typed lambda calculus. The reason that it's simply typed is that the type treatment here is minimal: the only way of building new types is through the unavoidable "->" constructor. Other typed lambda calculi include the ability to define <span style="font-style: italic;">parametric types</span>, which are types expressed as functions ranging over types.http://goodmath.blogspot.com/2006/05/types-in-lambda-calculus.htmlnoreply@blogger.com (MarkCC)0tag:blogger.com,1999:blog-23588438.post-114894733156753584Mon, 29 May 2006 23:49:00 +00002006-07-12T10:28:14.493-04:00PEAR yet again: the theory behind paranormal gibberishI've been looking at PEAR again. 
I know it may seem sort of like beating a dead horse, but PEAR is, I think, something special in its way: it's a group of people who pretend to use science and mathematics in order to support all sorts of altie-woo gibberish. This makes them, to me, particularly important targets for skeptics: if they were legit, and they were getting the kinds of results that they present, they'd be demonstrating something fascinating and important. But they're not: they're trying to use the appearance of science to undermine science. And they're incredibly popular among various kinds of crackpottery: what led me back to them this time is the fact that I found them cited as a supporting reference in numerous places:<br /><ol><li>Two different "UFOlogy" websites;</li><li>Eric Julien's dream-prophecy of a disastrous comet impact on earth (which was supposed to have happened last Thursday);</li><li>Three different websites where psychics take money in exchange for psychic predictions or psychic healing;</li><li>Two homeopathy information sites;</li><li>The House of Thoth, a general clearinghouse site for everything wacky.</li></ol>Anyway, while looking at the stuff that all of these wacko sites cited from PEAR, I came across some PEAR work which isn't just a rehash of the random number generator nonsense, but instead an attempt to define, in mathematical terms, what "paranormal" events are, and what they mean.<br /><br />It's quite different from their other junk; and it's a really great example of one of the common ways that pseudo-scientists misuse math. 
The paper is called "M* : Vector Representation of the Subliminal Seed Regime of M5", and you can find it <a href="http://www.book-of-thoth.com/portalsupport/coppermine/albums/userpics/m3_16%283%29.pdf">here</a>.<br /><br />The abstract gives you a pretty good idea of what's coming:<br /><blockquote>A supplement to the M^5 model of mind/matter interactions is proposed wherein the subliminal seed space that undergirds tangible reality and conscious experience is characterized by an array of complex vectors whose components embody the pre-objective and pre-subjective aspects of their interactions. Elementary algebraic arguments then predict that the degree of anomalous correlation between the emergent conscious experiences and the corresponding tangible events depends only on the alignment of these interacting vectors, i. e., on the correspondence of the ratios of their individual ‘‘hard’’ and ‘‘soft’’ coordinates. This in turn suggests a subconscious alignment strategy based on strong need, desire, or shared purpose that is consistent with empirical experience. More sophisticated versions of the model could readily be pursued, but the essence of the correlation process seems rudimentary.<br /></blockquote>So, strip out the obfuscatory babble, what does this actually say?<br /><br />Umm... "<span style="font-style: italic;">babble babble</span> complex vectors <span style="font-style: italic;">babble babble babble</span> algebra <span style="font-style: italic;">babble babble</span> ratios <span style="font-style: italic;">babble babble</span> correlation <span style="font-style: italic;">babble babble</span>." Seriously: that's a pretty good paraphrase. That entire paragraph is <em>meaningless</em>. It's a bunch of nonsense mixed in with a couple of pseudo-mathematical terms in order to make it sound scientific. There is <em>no</em> actual content to that abstract. 
It reads like a computer-generated paper from <a href="http://pdos.csail.mit.edu/scigen/">SCIgen</a>.<br /><br />(For contrast, here's a SCIgen-generated abstract: "The simulation of randomized algorithms has deployed model checking, and current trends suggest that the evaluation of SMPs will soon emerge. In fact, few statisticians would disagree with the refinement of Byzantine fault tolerance. We confirm that although multicast systems [16] can be made homogeneous, omniscient, and autonomous, the acclaimed low-energy algorithm for the improvement of DHCP [34] is recursively enumerable.")<br /><br />Ok, so the abstract is the pits. To be honest, a <em>lot</em> of decent technical papers have really lousy abstracts. So let's dive in, and look at the actual body of the paper, and see if it improves at all.<br /><br />They start by trying to explain just what their basic conceptual model is. According to the authors, the world is fundamentally built on consciousness: most events start in a pre-conscious realm of ideas called the "seed region"; and as they emerge from the seed region into experienced reality, they manifest in two different ways: as "events" in the material domain, and as "experiences" or "perceptions" in the mental domain. They then claim that in order for something from the "seed region" to manifest, it requires an interaction of at least two seeds.<br /><br />Now, they try to start using pseudo-math to justify their gibberish. I'm going to modify the notation slightly to make it readable without using any symbols that are problematic on Blogger; you can check the paper to see that I'm not changing anything but trivial notation.<br /><br />Suppose we have two of these seed beasties, S_1, and S_2. Now, suppose we have a mathematical representation of them as "vectors". 
We'll represent that as a function, rep(S).<br /><br />A "normal" event, according to them, is one where the events combine in what they call a "linear" way (scare-quotes theirs): <code>rep(S_1) + rep(S_2) = rep(S_1 + S_2)</code>. On the other hand, events that are perceived as anomalous are events for which that's not true: <code>rep(S_1) + rep(S_2) != rep(S_1 + S_2)</code>.<br /><br />We're already well into the land of pretend mathematics here. We have two non-quantifiable "seeds"; but we can add them together... We're pulling group-theory type concepts and notations, and applying them to things that absolutely do not have any of the prerequisites for those concepts to be meaningful.<br /><br />But let's skip past that for a moment, because it gets infinitely sillier shortly.<br /><br />They draw a cartesian graph with four quadrants, and label them (going clockwise from the first quadrant): T (for tangible), I (for intangible - aka, not observable in tangible reality), U (for unconscious), and C (conscious). So the upper-half is what they consider to be observable, and the bottom half is non-observable; and the left side is mind and the right side is matter. Further, they have a notion of "hard" and "soft"; objective is hard, and subjective is soft. 
They proceed to give a list of ridiculous pairs of words which they claim are different ways of expressing the fundamental "hard/soft" distinction, including "masculine/feminine", "particulate/wavelike", "words/music", and "yang/yin".<br /><br />Once they've gotten here, they get to my all-time favorite PEAR statement; one which is actually astonishingly obvious about what they're really up to:<br /><blockquote>It is then presumed that if we appropriate and pursue some established mathematical formalism for representing such components and their interactions, the analytical results may retain some metaphoric relevance for the emergence of anomalous mind/matter manifestations.<br /></blockquote>I love the amount of hedging involved in that sentence! And the admission that they're just "appropriating" a mathematical formalism for no other purpose than to "retain some metaphoric relevance". I think that an honest translation of that sentence into non-obfuscatory English is: "If we wrap this all up in mathematical symbols, we can make it look as if this might be real science".<br /><br />So, they then proceed to say that they can represent the seeds as complex numbers: <code>S = s + i(sigma)</code>. But "s" and "sigma" can't just be simply "pre-material" and "pre-mental", because that would be too simple. Instead, they're "hard" and "soft"; even though we've just gone through the definition which categorized hard/soft as a better characterization of material and mental. Oh, and they have to make sure that this looks sufficiently mathematical, so instead of just saying that it's a complex number, they present it in both rectangular and polar coordinates, with the equation for converting between the two notations written out inside the same definition area. 
No good reason for that, other than to have something more impressive-looking.<br /><br />Then they want to define how these "seeds" can propagate up from the very lowest reaches of their non-observable region into actual observable events, and for no particular reason, they decide to use the conjugate-product equation, randomly lifted from quantum physics. So they take a random pair of seeds (remember that they claim that events proceed from a combination of at least two seeds), and add them up. They claim that the combined seed is just the normal vector addition (which they proceed to expand in the most complex-looking way possible); they also take the "conjugate products" and add them up (again in the most verbose and obfuscatory way possible); and then they take the difference between the two sums. At this point, they reveal that for some reason, they think that the simple vector addition corresponds to "rep(S_1) + rep(S_2)" from earlier, and the conjugate product is "rep(S_1+S_2)". No reason for this correspondence is given; no reason why these should be equal for "non-anomalous" events; it's just obviously the right thing to do according to them. And then, of course, they repeat the whole thing in polar notation.<br /><br />It just keeps going like this: randomly pulling equations out of a hat for no particular reason, using them in bizarrely verbose and drawn-out forms, repeating things in different ways for no reason. After babbling onwards about these sums, they say that "Also to be questioned is whether other interaction recipes beyond the simple addition S_12 = S_1 + S_2 could profitably be explored."; they suggest multiplication, but decide against it just because it doesn't produce the results that they want. Seriously. 
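For what it's worth, the "discrepancy" they're chasing is nothing more exotic than the standard interference cross-term of complex arithmetic. Here's a short Python sketch (entirely my own illustration, with made-up seed values - nothing from the paper) of their two "addition" recipes:

```python
def intensity(z):
    # The "conjugate product" z * z-bar, which is just |z|^2.
    return (z * z.conjugate()).real

# Two made-up "seeds" as complex numbers, per their S = s + i(sigma).
S1 = complex(3, 1)
S2 = complex(2, 2)

# Sum of the individual conjugate products ("rep(S_1) + rep(S_2)"):
separate = intensity(S1) + intensity(S2)     # 10 + 8 = 18

# Conjugate product of the combined seed ("rep(S_1 + S_2)"):
combined = intensity(S1 + S2)                # |5 + 3i|^2 = 34

# The mismatch is the ordinary interference cross-term:
cross_term = 2 * (S1 * S2.conjugate()).real  # 2 * Re(8 - 4i) = 16

print(combined - separate)   # 16.0
print(cross_term)            # 16.0
```

The gap between the two sums is exactly 2·Re(S_1·conj(S_2)) - first-week complex arithmetic, and largest when the two vectors point the same way. There's nothing "anomalous" about it.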
In their words "but we show that this doesn't generate similar non-linearities": that is, they want to see "non-linearities" in the randomly assembled equations, and since multiplying doesn't have that, it's no good to them.<br /><br />Finally, we're winding down and getting to the end: the "summary". (I was taught that when you write a technical paper, the summary or conclusion section should be short and sweet. For them, it's two full pages of tight text.) They proceed to restate things, complete with repeating the gibberish equations in yet another, slightly different form. And then they really piss me off. Statement six of their summary says "Elementary complex algebra then predicts babble babble babble". Elementary complex algebra "predicts" no such thing. There is no real algebra here, and nothing about algebra would remotely suggest anything like what they're claiming. It's just that this is a key step in their reasoning chain, and they absolutely cannot support it in any meaningful way. So they mask it up in pseudo-mathematical babble, and claim that the mathematics provides the link that they want, even though it doesn't. They're trying to use the credibility and robustness of mathematics to keep their nonsense above water, even though there's nothing remotely mathematical about it.<br /><br />They keep going with the nonsense math: they claim that the key to larger anomalous effects resides in "better alignment" of the interacting seed vectors (because the closer the two vectors are, in their framework, the larger the discrepancy between their two ways of "adding" vectors); and that alignments are driven by "personal need or desire". And it goes downhill from there.<br /><br />This is really wretched stuff. To me, it's definitely the most offensive of the PEAR papers. The other PEAR stuff I've seen is abused statistics from experiments. 
This is much more fundamental - instead of just using sampling errors to support their outcome (which is, potentially, explainable as incompetence on the part of the researchers), this is clear, deliberate, and fundamental misuse of mathematics in order to lend credibility to nonsense.http://goodmath.blogspot.com/2006/05/pear-yet-again-theory-behind.htmlnoreply@blogger.com (MarkCC)7tag:blogger.com,1999:blog-23588438.post-114868889775122695Sat, 27 May 2006 00:13:00 +00002006-05-26T20:14:57.783-04:00Finally: the Kripke Model for Intuitionistic LogicAs promised, today I'm going to show the Kripke semantics model for intuitionistic logic.<br /><br />Remember from yesterday that a Kripke model has three parts: <code>(W, R, ||-)</code>, where W is a set of worlds; R is a transition relation between worlds; and "||-" is a forces relation that defines what's true in a particular world.<br /><br />To model intuitionistic logic, we need to add some required properties to the transition and forcing relations. With these additional properties, we call the transition relation "<="; "A <= B" means that B is accessible from A. For an intuitionistic Kripke model, the transition relation "<=" must be both reflexive (A <= A) and transitive (if A <= B and B <= C, then A <= C), making it a <em>pre-order</em> relation - that is, a relation that has the essential properties of "<=" that are necessary to define a partial order over its domain. The forces relation, "<code>||-</code>", also needs some additional conditions. (These conditions are based on the definitions from the <a href="http://en.wikipedia.org/wiki/Kripke_semantics">wikipedia article</a> on Kripke models; I can't remember them without help, so I used the article as a reference.)<br /> <ol><li> <code>for all w,u in W such that w <= u, for all statements p, if w ||- p, then u ||- p</code>. 
This condition basically says that if a statement is true/valid in a particular world w, then it must be true in all worlds accessible from w - meaning that in intuitionistic logic, once a statement is proven, it can't be taken back.<br /> </li><li> <code>for all w in W, w ||- A and B if/f w ||- A and w ||- B</code>. "A and B" is provable in w only if both A and B are provable.<br /> </li><li> <code>for all w in W, w ||- A or B if/f w ||- A or w ||- B</code>. "A or B" is provable in w only if either A or B is provable in w.<br /> </li><li> <code>w ||- A implies B if/f for all u such that w <= u, (u ||- A) implies (u ||- B)</code>. Being able to prove that "A implies B" in w means that in every world accessible from w, if you can prove A, then you can prove B.<br /> </li><li> There is no contradictory statement (written _|_) such that <code>w ||- _|_</code>.<br /></li></ol>The last line in there is actually a bit of a cheat: you can think of intuitionistic logic as a logic where, at a given point in time, a statement can have three truth values: T (top), which means provably true; <code>_|_</code> (bottom), which means provably false; and unproved. The last property of the forces relationship means that under no conditions can the "forces" relation say that a proposition is true if it is provably false.<br /><br />So: suppose we have an intuitionistic logic consisting of a set of statements, which we'll call X. The Kripke model for X is the triple <code>(W, <=, M)</code>, where W is the set of worlds related by a "<=" relation that has the intuitionistic properties that we just discussed. <code>M</code> is a set of sets: <code>for all w in W: exists one M_w in M</code>, where M_w is an <em>L-structure</em> for X. (An L-structure is a formal way of defining what statements are proved true - and for statements with variables, the <em>bindings</em> of those variables to values for which the statement has been proved true. 
For the purposes of understanding, you can think of M_w as a function that takes a logical statement, and a list of values for the variables in that statement, and returns "true" if the statement is proved true, and false if the statement is proved false; the statement <em>is not</em> in the domain of the function if its truth value hasn't been proved.) The M_w's for intuitionistic logic have a set of properties that come from the meaning of statements in the logic:<br /> <ol><li> if <code>u <= v</code>, then <code>domain(M_u) subset domain(M_v)</code><br /> </li><li> For any function f in the logic, if <code>u <= v</code>, then <code>for all a in domain(f) in M_u, f(a) in M_u = f(a) in M_v</code>: the value of f(a) doesn't change as we move to an accessible world.<br /> </li><li> For any predicate <code>P(a_1,a_2, ..., a_n)</code>, if <code>u <= v</code> and <code>P(a_1,a_2, ..., a_n)</code> is true in M_u, then it is true in M_v.<br /></li></ol>Now, finally, we can get to the last step in defining our model of intuitionistic logic: the forces relation based on M. What we need to do to make M work as a forces relation is to separate it into two parts: the truth/validity of <em>concrete</em> statements (that is, statements with all variables bound to specific values), which we write using the old forces notation, <code>||-</code>; and the mapping of variables and functions to values. 
We call that mapping an <em>evaluation function</em>; for a statement A in X, we write A[e] for A with variables bound by e.<br /><br />So: given an evaluation <code>e</code> of variables from M_w, <code>w ||- A[e]</code> is defined by:<br /> <ol><li> <code>w ||- P(a_1,...a_n)[e] if/f P(a_1[e],...,a_n[e])</code> is in <code>M_w</code>: Evaluating a predicate means the same thing as keeping the predicate unchanged, and evaluating each of its parameters separately.<br /> </li><li> <code>w ||- (A and B)[e] if/f w ||- A[e] and w ||- B[e]</code>: a typical and definition.<br /> </li><li> <code>w ||- (A or B)[e] if/f w ||- A[e] or w ||- B[e]</code>: a typical or definition.<br /> </li><li> <code> w ||- (A implies B)[e] if/f for all u such that w <= u: u ||- A[e] implies u ||- B[e]</code>: an implication is forced if, in every accessible world, being able to prove A means being able to prove B. (This is the opposite of its use in the logic: in logic, we'd say that given A implies B and A, you can conclude B. In the model, we go the other way, and say that you can conclude that A implies B if every accessible world that proves A also proves B.)<br /> </li><li> <code>w NOT ||- _|_</code>: no w can force a contradiction.<br /> </li><li> <code> w ||- (exists x : A)[e] if/f exists a in M_w such that w ||- A[e], and e maps x to a</code>. 
An exists statement is provable only if we can show a concrete, specific value for which the statement is true.<br /> </li><li> <code>w ||- (forall x : A)[e] if/f forall u such that w <= u: forall a in M_u: u ||- A[e] where e maps x to a</code>: a universal statement is provable if every accessible world can prove the statement under every possible binding of the variable.<br /></li></ol>What we've done now is to define a way of describing the meaning of any statement in intuitionistic logic, and any reasoning step in intuitionistic logic, in terms of sets - in a way where, if we translate a logical statement down to the sets, the statements about the sets will always be true and correct; and if we reason from the sets that correspond to the logic, anything we can do will produce a well-defined, meaningful, and valid statement in the logic.<br /><br />The particularly nice thing about an intuitionistic logic with a Kripke model is that for any statement which is provably true, we can use the model to find specific values for the variables in the statement for which it's true; and if a statement is false, we can find specific values for the variables in the statement that produce a concrete counterexample. We <em>can't</em> always do that in regular predicate logic: regular predicate logic allows us to do inferences from things like "exists" statements if we know that some value must exist for which the statement is true, even if there's no way for us to figure out what that value <em>is</em>. 
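The propositional part of this is small enough to sketch in code. Here's a tiny forcing-relation checker in Python (my own illustration - the tuple encoding of formulas is invented for this example), using the standard two-world model to show that the law of the excluded middle fails intuitionistically:

```python
# Tiny Kripke-model checker for propositional intuitionistic logic.
# Worlds are strings. "accessible" maps each world w to the set of worlds
# u with w <= u (reflexive and transitive, i.e. a preorder). "valuation"
# maps each world to the set of atoms forced there; it must be monotone:
# if w <= u, then valuation[w] is a subset of valuation[u].

def forces(w, formula, accessible, valuation):
    kind = formula[0]
    if kind == 'atom':
        return formula[1] in valuation[w]
    if kind == 'bot':      # _|_ is forced nowhere
        return False
    if kind == 'and':
        return (forces(w, formula[1], accessible, valuation) and
                forces(w, formula[2], accessible, valuation))
    if kind == 'or':
        return (forces(w, formula[1], accessible, valuation) or
                forces(w, formula[2], accessible, valuation))
    if kind == 'implies':  # must hold in every accessible world
        return all((not forces(u, formula[1], accessible, valuation)) or
                   forces(u, formula[2], accessible, valuation)
                   for u in accessible[w])
    raise ValueError(kind)

# Two-world model: w0 <= w1, and the atom p is proved only at w1.
accessible = {'w0': {'w0', 'w1'}, 'w1': {'w1'}}
valuation = {'w0': set(), 'w1': {'p'}}

p = ('atom', 'p')
not_p = ('implies', p, ('bot',))    # intuitionistic negation: p implies _|_
excluded_middle = ('or', p, not_p)

print(forces('w0', excluded_middle, accessible, valuation))  # False
print(forces('w1', excluded_middle, accessible, valuation))  # True
```

At w0, p isn't yet proved, but it becomes proved at the accessible world w1 - so neither p nor "not p" is forced at w0, and "p or not p" fails there, exactly the behavior the clauses above describe.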
Likewise, in regular predicate logic, we can infer that a forall statement is false if we know that there must be some value for which it's false; in intuitionistic logic, we need to be able to show a <em>specific example</em> for which it's false.http://goodmath.blogspot.com/2006/05/finally-kripke-model-for.htmlnoreply@blogger.com (MarkCC)2tag:blogger.com,1999:blog-23588438.post-114865084876804537Fri, 26 May 2006 13:18:00 +00002006-05-26T11:52:11.096-04:00Friday Random Ten, 5/26<ol><li><span style="font-weight: bold;">Lunasa, "Punch".</span> Trad Irish, by the best in the business.<br /></li><li><span style="font-weight: bold;">Dirty Three, "In fall". </span>The Dirty Three is one of those "post-rock ensembles" that have me seriously hooked. Very, very cool stuff, which kind of defies description. Just get some and listen to it!</li><li><span style="font-weight: bold;">The Flower Kings, "Monsters and Men"</span>. A 20-minute-long track off the newest FK release. As usual for the Flower Kings, it's bloody brilliant.</li><li><span style="font-weight: bold;">Peter Hammill, "The Wave". </span>Peter Hammill is one of the original progressive rock guys, the founder of "Van der Graaf Generator". This is off of a live solo CD with Peter doing vocals and keyboards, accompanied by an electric violin. That's it - no drums, no guitars, just voice, keyboard, and violin.<br /></li><li><span style="font-weight: bold;">Glass Hammer, "Run Lissette".</span> Glass Hammer is a recent discovery of mine; they're an American neo-progressive band. They sound something like a cross between Yes and old Genesis, with a bit of Flower Kings for good measure. They're a really great band.</li><li><span style="font-weight: bold;">Porcupine Tree, "Sleep of No Dreaming". 
</span></li><li><span style="font-weight: bold;">John Corigliano, "Etude Fantasy 5: Melody" from "Phantasmagoria".</span> Corigliano is one of the finest composers working today, and this is a great piece of his work. Not quite up there with his clarinet concerto, which is one of my very favorite modern compositions, but with stuff this good, "not as good as his best" is still "100 times better than anything else".</li><li><span style="font-weight: bold;">ProjeKct Two: "Sector Drift [The Planet Zarg Quartet Pt. 6]".</span> Goofy free improv from one of the King Crimson projects - Fripp, Belew, and Gunn, with Belew playing a percussion synthesizer instead of a guitar. Surprisingly good considering that Belew had no experience with the instrument before recording this, live.</li><li><span style="font-weight: bold;">Tempest, "The Barrow Man".</span> A neoprogressive electric folk band. These guys are interesting. The leader of the band is Swedish; they play a blend of Swedish- and Celtic-based melodies, in a progressive rock style. Sort of Fairport Convention on steroids, led by an electric mandolin.</li><li><span style="font-weight: bold;">Whirligig, "Through The Bitter Frost And Snow".</span> A really beautiful ballad from a NYC-based trad Irish band. There's also a trad band from Ireland called Whirligig, which has nothing to do with the NYC band - one of those unfortunate collisions that tends to happen with very small-label bands.</li></ol>http://goodmath.blogspot.com/2006/05/friday-random-ten-526.htmlnoreply@blogger.com (MarkCC)6tag:blogger.com,1999:blog-23588438.post-114859640780310529Thu, 25 May 2006 22:32:00 +00002006-05-25T18:33:27.836-04:00Magic-23 update: the author respondsI received a response to my post <a href="http://goodmath.blogspot.com/2006/05/magic-23.html">Magic-23</a> from the author of the article I was mocking. 
As I did earlier with Berlinski, I think that if the author is willing to take the time to respond to me, it's only fair to let their response be seen by readers. Mr. Osborn took the time to write a very civil response, and so I'm going to post it here, with my own comments at the end. His message is unedited, and quoted verbatim:<br /><blockquote>Hi Mark,<br /><br />I read your essay 'Good Math, Bad Math', which criticizes my work on the 23.5-degree references and makes it all look rather silly, which is disappointing really.<br /><br />I'm not really that interested in the '23' number phenomenon, neither am I "in on a joke", as one of your correspondents states, and neither am I out to exploit anything and make a quick buck.<br /><br />This is a real phenomenon. I have spent years researching this, and many others have verified my findings.<br />I think my presentation is well balanced and is merely based on what I have discovered despite my own beliefs, if any.<br />Someone has to discover and bring attention to this. Surely it was intended that someone should.<br /><br />Attached is a chapter from my book. This is what all these references are leading to and more.<br /><br />Also, viewing the images would help. See here, another version.<br /><br />http://www.freewebs.com/garyosborn/235degrees.htm<br /><br /><br />Also take a look at this version of the 23.5 Degree presentation on my website.<br /><br />http://garyosborn.moonfruit.com/revelations<br /><br />Also I don't have a book entitled 'Revelations'.<br /><br />So at the end of the day, I am merely bringing attention to the things I have discovered and I don't see that as wrong, and I don't think my findings are "chock full of insanity" either.<br /><br />Don't want to lecture you Mark, but first and foremost. 
I'm always careful not to believe in something that will stop me finding other alternatives, and most of the time, and if I search long enough within my own mind, I find that I really don't believe in anything 100%.<br />I always try to remain neutral and balanced.<br />We are all prone to experiencing, seeing and believing whatever it is we are focusing on at the time and really because we are creating everything ourselves, and many of us don't realise it.<br /><br />Read my article The Synchroholoform on my website. There's a "bigger picture" to all this and it appears to be related to the creative nature of our own consciousness and that's something I always consider.<br /><br />Oh and I too like The Flower Kings and go to see them whenever the opportunity arises - so that's something we have in common.<br /><br />Kind Regards,<br />Gary Osborn<br /></blockquote>My response is going to be brief.<br /><br />Every crackpot in the world, upon being confronted by a debunker, invariably comes back with some kind of comment about having an open mind. Having an open mind is important - but so is being skeptical. There's a lot of gibberish out there, and there are a lot of people who are either wrong, crazy, or deliberately deceptive: you should always be open to new ideas, but at the same time, you have to look at them very carefully. 
The reason that I think the 23.5 degree stuff is so goofy isn't because I don't have an open mind - it's because it doesn't meet the test of credibility.<br /><br />For it to be believable, I would need to accept:<br /><ul> <li> That you can, looking at a painting, distinguish between the "perspective angle" of 22.5 degrees, and the axis angle of 23-23.5 degrees.<br /> </li><li> That painters throughout history could both distinguish and paint angles so accurately that they could deliberately make a line between 1/2 and 1 degree different from the standard perspective angle.<br /> </li><li> That there was such a threat from the church that this knowledge had to be kept secret, <em>but</em> at the same time, it was so widespread that there are dozens of different painters and architects who all not only knew about it, but visibly inserted it in their work.<br /></li></ul> I don't buy that, at all.<br /><br />But it gets worse: to get to your conclusion about a catastrophe that pushed the earth onto its tilted axis, I would have to accept:<br /> <ul><li> Our understanding of gravity - which tests out accurately to an incredible degree - is entirely wrong. We can plan space probe trajectories that do double gravitational slingshots and wind up in exactly the right position - travelling millions of miles, and yet reaching a very precise destination with near-perfect accuracy.<br /> </li><li> The stability of the solar system is an illusion. 
The fact that we have a uniform ecliptic plane, with a regular pattern of near-circular orbits is something close to impossible to explain mathematically if we assume that there was a catastrophic change in the solar system during the period of human history.<br /> </li><li> Something drastic enough to tilt the earth on its axis and reconfigure the entire solar system occurred, and yet life on earth was pretty much unaffected; and the event left no geological evidence at all.<br /> </li><li> A catastrophic change occurred recently enough that people like the Egyptians were able to record it; and yet there are <em>no</em> explicit records of the event - not from the Egyptians, the Babylonians, the Chinese - none of them recorded it in anything like explicit terms - all just used highly ambiguous metaphorical allegories.<br /></li></ul>Nope. No way this is credible - physics, mathematics, geology, and history all argue against it.http://goodmath.blogspot.com/2006/05/magic-23-update-author-responds.htmlnoreply@blogger.com (MarkCC)36