Good Math/Bad Math

Monday, May 22, 2006

RePEARing Bad Math

I was looking at the SiteMeter statistics for this site, to see who was referring to it, and I came across a discussion linking to the Global Consciousness Project. The GCP is a very flaky train wreck that is trying to measure the purported cumulative effects of human consciousness on, well, any old random thing they can think of. It turns out that the GCP is another face of PEAR at Princeton. Man, what is it with Princeton?

What the GCP does is run the PEAR random number generators non-stop, all the time, logging the results. Then, when there's an "interesting" event in the world, they go back to the logs and see if they can find any "anomalous patterns" in the data, where an anomalous pattern is a short period of time during which the mean of the random numbers differs from the normal expected mean.

Here's an example of what they do, in their own words, from here:
The first formal prediction for the EGG project was made by RDN, in 1998-08-08, while traveling in Halifax. It concerned the Embassy bombings in Nairobi and Tanzania, on 1998-08-07 at 07:35 UTC. Upon my return home, prior to examining data, I made the specific predictions described below. These terrorist attacks exemplify a tearing of the social fabric that would shock a global consciousness temporarily. For this particular event, we had only the general outline for predictions: a period of time, say 10 minutes, surrounding the point event, and a period, say a few hours, following the event during which the world becomes conscious of what has happened.

An exploratory analysis based on 15-minute data segments from three eggs looked at the event-period from 07:15 to 07:45, and a three-hour consciousness-spreading period from 07:15 to 10:00. The associated probabilities indicate significant deviations for both time-periods. At that time we did not have sophisticated processing capabilities, but a hand calculation could be made using the automatically generated Z-scores for 15-minute blocks. It indicated significant deviations in both the short period defined for the event: Chi-square 18.039, 9 df, p=0.035, and for the aftermath, inclusive of the event: Chi-square 69.536, 36 df, p=0.00066.
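The hand calculation they describe amounts to summing squared Z-scores, with one degree of freedom per 15-minute block. A minimal sketch, with invented z-scores standing in for the "automatically generated" ones they mention:

```python
# Hypothetical 15-minute z-scores for nine blocks (matching their 9
# degrees of freedom); the values are made up for illustration, not
# taken from GCP data.
zscores = [1.2, -0.4, 2.1, 0.3, -1.7, 0.9, 1.5, -0.2, 0.8]

# The square of a standard normal variate is chi-square with 1 df, so
# the sum of squared z-scores is a chi-square statistic with one
# degree of freedom per block.
chi2 = sum(z * z for z in zscores)
df = len(zscores)

print(round(chi2, 2), df)  # → 12.73 9
```

The resulting statistic is then compared against the chi-square distribution with that many degrees of freedom to get the p-values they report.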
So: they pick an event that they believe should affect the "global consciousness". Then they take short time periods associated with the event, and search for a time period where the mean value from their random number generator is not exactly what it should be.

They have an extensive list of events on their site, which they claim are used "rigorously" to test whether anomalous patterns are associated with events. For most of them, they were able to discover one-second intervals that were "anomalous".

What's wrong with this?

Given a huge database consisting of sequences of random numbers, you expect to see small deviations. If it never deviated from the mean, you would actually conclude that the data was fake; you expect to see some fuzz. Now, take that huge quantity of data (they're generating 200 bits per second at each of 98 different random number generators); and take a swath of time (minutes to hours) associated with an "event", and see if you can find a one-second period of time for which the random numbers deviate from the expected mean. For any event, at any point in time, you can probably find a "significant" deviation for a couple of seconds; and they consider one second enough to be meaningful.
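This is easy to check with a simulation. The per-second z-score of pooled fair bits is approximately standard normal, so each second of a three-hour window can be modeled as a single normal draw; then just count how many seconds look "significant". (A sketch under that approximation; none of this uses actual GCP data.)

```python
import random

random.seed(42)

WINDOW_SECS = 3 * 3600  # a three-hour "event" window, 10800 seconds

# By the central limit theorem, each second's z-score against the
# expected mean is approximately a standard normal draw.
zs = [random.gauss(0.0, 1.0) for _ in range(WINDOW_SECS)]

# "Anomalous" seconds: |z| > 1.96, i.e. p < 0.05 two-sided.
anomalous = sum(1 for z in zs if abs(z) > 1.96)

print(anomalous)  # about 5% of 10800 seconds look "significant"
print(any(abs(z) > 3 for z in zs))  # even |z| > 3 turns up in pure noise
```

Roughly 540 "significant" seconds per three-hour window, in data that is random by construction. If you're allowed to pick any one of them after the fact, finding an "anomaly" is guaranteed.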

They actually make data from their generators available through their website; when I get a chance, I'm going to try to prove my point by trying a few dates that have no particular significance to anyone but me: the times my children were born (7/30/2000, 9:35am, and 4/10/2003, 8:40pm); the tenth anniversary of my wedding, specifically the 20 minutes of the ceremony (6/5/2004, 2-2:20pm); and the only time my dog ever bit anyone (that being me, for freaking out while he vomited all over my house on, I think, 7/10/1998, around 5pm).

I'll let you know the results.


  • It would be interesting to see them make some general prediction, something like "there is a great disturbance in the force," meaning that somewhere something bad (or good) has happened, rather than all the post hoc stuff.

    It shouldn't be too hard for them to automate it so that any time periods with a large deviation from the mean were identified; then they would have a heads-up that something had happened. Of course, then they might find themselves with lots of false positives.

    If their hypothesis were correct, I'm surprised they only have a one-second block for 9/11. If anything should resonate through the global consciousness, it should be that. I can remember spending a morning just watching the news about the event, and spending most of my time at uni later that day just following the online news and talking about it with other students. Hell, this wasn't in the US, nor am I American.

    9/11 was an event that at the very least did shake a large portion of the western world. If they can only pick up a single second from that day as being significant, then that pretty much highlights the inanity of the project. Even if logic and common sense haven't already done that.

    By Blogger Darkling, at 9:58 PM  

  • Oh, but Mark, if (I mean when) you do find those anomalies next to your personally important times, it will only go to show just how powerful an influence you have on the global consciousness!

    But seriously, their a priori specification of what they are looking for seems as vague as were most of the "Bible codes," leaving easily enough wiggle room to be able to find these patterns when they went looking for them. The point is that, in order to show that this is a real phenomenon, they also need to show that looking for the same thing in the same way using the same parameters does not succeed when there is not an "interesting" event at that time. And I can just imagine that if an anomaly were found, one could always subsequently find an "interesting" event which could be held to have caused the [global change in consciousness which caused the] anomaly. Post hoc reasoning is so much fun, and so much easier than doing science.

    By Blogger A little night musing, at 10:47 PM  

  • I believe this is an example of what's referred to as a Texas Sharpshooter Fallacy.

    By Anonymous Stormy Dragon, at 11:47 PM  

  • Given a huge database consisting of sequences of random numbers, you expect to see small deviations. If it never deviated from the mean, you would actually conclude that the data was fake; you expect to see some fuzz.

    Ha ha ha.

    This should be obvious to anybody who actually paid attention in their introductory statistics class. If you have a 5% significance level and you run 100 regressions on randomly generated data, how many of these specious regressions are going to return "significant" results? About 5%? Gee, what a shock.
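That back-of-the-envelope claim is easy to verify: test the mean of pure noise a thousand times at the 5% level, and about 5% of the tests come back "significant". (A minimal sketch; the sample sizes are arbitrary.)

```python
import random

random.seed(1)

N_TESTS, N_SAMPLES = 1000, 100
false_positives = 0
for _ in range(N_TESTS):
    # Pure noise: standard normal samples with true mean zero.
    sample = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]
    mean = sum(sample) / N_SAMPLES
    z = mean * N_SAMPLES ** 0.5  # z-test with known sigma = 1
    if abs(z) > 1.96:            # "significant" at the 5% level
        false_positives += 1

print(false_positives / N_TESTS)  # hovers around 0.05
```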

    By Blogger Steve, at 1:35 PM  

  • And if after a shocking event nothing turned up in their time series they can always argue "hey, apparently the global consciousness did not resonate very strongly, let's try to understand why not".

    By Anonymous The Reverent Bayes, at 4:40 PM  

  • Better yet, have a random number generator pick some dates and see if there's significant deviation...

    By Anonymous TW Andrews, at 2:34 PM  

  • It would be simple to disprove this approach using a Monte Carlo simulation. Just take randomly selected intervals of duration t and compute the mean of the numbers generated during those intervals. For arbitrary values of t it would be possible to construct a probability distribution. Then you can derive the expected probability of a "global consciousness event" occurring completely at random (a placebo event). If the mean of your "real" event exceeds the mean for the placebo events, then you're onto something. Of course, we'd expect at least a few of the GCEs to exceed the critical value with some frequency, so to prove the detectors are working you'd need to specify some relevant metrics for defining a GCE, and so forth.

    Pointless, really, except as an exercise in educating statistics students. That would be a fun exercise. Analyze known random data for patterns (assuming they use real random generators). What is the expected frequency of a given pattern?
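The placebo procedure described above can be sketched in a few lines. The interval length and counts here are arbitrary choices, and the "event" interval is itself picked at random, since this simulated data contains no events:

```python
import random

random.seed(7)

# A long run of fair random bits, standing in for the generator logs.
data = [random.getrandbits(1) for _ in range(200_000)]

T = 600  # interval length in samples (an arbitrary choice of t)

def interval_mean(start):
    """Mean of the T samples beginning at `start`."""
    return sum(data[start:start + T]) / T

# Placebo distribution: means of randomly chosen intervals.
placebo = sorted(interval_mean(random.randrange(len(data) - T))
                 for _ in range(2000))
cutoff = placebo[int(0.95 * len(placebo))]  # 95th-percentile threshold

# Compare one "event" interval against the placebo threshold; with no
# real events, it beats the cutoff about 5% of the time by construction.
event_mean = interval_mean(random.randrange(len(data) - T))
print(round(cutoff, 3), event_mean > cutoff)
```

Any claimed "global consciousness event" would have to clear a threshold like this more often than the placebo intervals do, which is exactly the control the GCP analysis skips.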

    By Anonymous Anonymous, at 3:42 PM  

  • First, I apologise for my rash post earlier. It was early in the morning where I live, and I noticed that many on your site were being quite abusive about an organisation that I hold in high regard.

    You claim in one of your arguments that PEAR's statistics showed no statistical differences. But that was the whole point of the study: they were finding statistical differences, not just patterns. The differences were a mystery; even other skeptics confirm this [see Susan Blackmore], but the patterns hinted that it was in fact the consciousness of the participants making the change. Second, if the results were just random fuzz, they would not keep getting the same results throughout over 25 years of tests. They trimmed their statistics to account for the 5% fuzz and still got an effect, if you look.

    And if there is no effect and they are just massaging the data to fit their needs, then how come in 25 years no one has shut them down, if the reason their research is bunk is as simple as you state? Surely all the people of high education who are against this field [and there are many] would have pointed it out. Another skeptic, S. Jeffers, had the same complaints as you. He did three tests; the first two showed no effect, so he assumed there was nothing to find, but the third test showed the effect [see Freedman, M., Jeffers, S., Sager, K., Binns, M. and Black, S., 2003]. Yet the faculty stays open and continues to gain funding. If it were as simple bunk as you state, it would have been easily spotted and eliminated 24 years ago.

    I put the question to Dr. Dean Radin, and he replied:
    "People who accuse others of massaging data to fit their preconceived notions fail to appreciate that such accusations cut both ways. I.e., biased assessments can just as easily confirm or deny the true situation. So when it comes to assessing the validity of one person's opinion vs. 25 years of laboratory data, the data is going to win that argument"

    Also, people on your site claim PEAR is a "black eye" at Princeton; then how come it was so popular that it got its own course at the university, with many distinguished people backing it up? [I am not appealing to authority, just mentioning it.] Look at the further activities on the site.

    It seems that you just do not like the conclusions of the PEAR group, so you are chipping away at it; but others more qualified, better informed, and angrier have tried and failed.

    By Anonymous Anonymous, at 11:13 AM  

  • Oh, also, I just thought I would let you guys know that PEAR is still being taken seriously; see the latest AAAS proceedings in the retrocausation section for proof of this. Here's a link to a newspaper article on it.

    By Anonymous Anonymous, at 11:17 AM  

  • And finally: if PEAR is just looking for non-existent patterns in data, then surely other investigators would not find the exact same patterns? Yet they have. This includes skeptics such as Stanley Jeffers. They are anomalies; the skeptics claim they are not psi, but they cannot claim there are no anomalies. Ask any parapsychologist, even the skeptical ones. [P.S. James Randi is not a parapsychologist; he's a debunker.]

    By Anonymous Anonymous, at 4:31 PM  

  • anon:

    First, why do you insist on coming back to two-month-old posts that no one is looking at anymore? I deliberately moved this post over to ScienceBlogs, and left a note about it here, so that I don't need to follow discussions in both places.

    Second, GCP "anomalies" are not what *anyone* with a decent training in statistics would call anomalies - as I explained in the post. The method of finding the "anomalies" is far too subjective; the magnitudes of the "anomalies" are *expected* if you're allowed to pick your sample from a wide range of possibilities.

    And "ask any parapsychologist" is, frankly, stupid. Parapsychologists are, *by definition*, people who believe in "paranormal" activity, which is exactly what PEAR is trying to prove. With respect to things like PEAR, a "skeptical parapsychologist" is a non sequitur.

    By Blogger MarkCC, at 5:45 PM  

  • Sorry, I did not know about this link and did not find it; please post it again. And according to the United States board of statistics, the anomalies are in fact the kind of anomalies recognized by someone with decent training in statistics; Jessica Utts, who is a statistician by trade, confirms this, and yet you seem to be privy to some secret statistical knowledge we are not. Also, you did not explain how skeptics who do the same tests sometimes get the same patterns [look at the 2003 test done by Stanley Jeffers, an adamant skeptic, for proof of this]. If they are just making up patterns, how can others who are not in the least connected with them find the same patterns?

    Also, they did not pick and choose from the data; they looked at it as a whole, and even trimmed it for special performers and the expected 5% fuzz, and still got the effect. That is why the investigation has been going on so long.
    But you have made it quite clear that anyone who questions you here will be met with sarcasm, so I will not post again if I make you uncomfortable. I just hope my posts have made you consider it a bit more.

    By Anonymous Anonymous, at 6:02 PM  

  • Oh, and I forgot: if you want a skeptical parapsychologist, see Richard Wiseman. He doesn't believe in psi but studies the psychology of people who do; he works at the University of Hertfordshire. Your knowledge of parapsychology seems limited. Are you sure that concerning yourself with their methods is the best use of your time?

    By Anonymous Anonymous, at 6:05 PM  

  • anon:

    Look in the "scienceblogs" link in the header to the blog.

    By Blogger MarkCC, at 6:09 PM  
