!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

CHANCE News 10.01

November 28, 2000 to January 10, 2001

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Prepared by J. Laurie Snell, Bill Peterson, Jeanne Albert, and Charles Grinstead, with help from Fuxing Hou and Joan Snell.

Please send comments and suggestions for articles to
jlsnell@dartmouth.edu

Back issues of Chance News and other materials for teaching a Chance course are available from the Chance web site:

Chance News is distributed under the GNU General Public License (so-called 'copyleft'). See the end of the newsletter for details.

Chance News is best read with Courier 12pt font and 6.5" margin.

===========================================================

You prepare yourself to win. You prepare yourself for the possibility that you won't win. You don't really prepare yourself for the possibility that you flip the coin in the air and it lands on its edge and you get neither outcome.

Al Gore

===========================================================

Contents of Chance News 10.01

Note: If you would like to have a CD-ROM of the Chance Lectures that are available on the Chance web site, send a request to jlsnell@dartmouth.edu with the address where it should be sent. There is no charge.
                                        <<<========<<



>>>>>==============>
Forsooth! RSS News, Nov. 2000

Single men are about 1.8 times as likely to die as comparable married men; single women are about 1.5 times as likely to die as comparable married women.

The Guardian
7 September 2000

A survey of 7,600 American adults...did not walk frequently enough or fast enough to gain real health benefits. Only a third walked at least four times per week and only a quarter walked at a brisk enough pace, which translated as just six per cent of walkers met the basic requirements for both speed and frequency.

The Times
26 August 2000

Forsooth! RSS News Dec. 2000

People who enjoy themselves without feeling guilty are less likely to become ill, according to researchers at Hull University. Eating chocolate, drinking, shopping and sex were all given the seal of approval. Thirty healthy students took part in the study.

Times Higher Ed. Suppl.
13 October 2000

There is no cure for Ebolla. It kills everyone in its path. Only 3 out of 10 will survive.

The Independent
17 Oct 2000

Extreme events will be the norm.

John Prescott (The Guardian)
1 November 2000

Finally, Douglas Rogers sent us the following candidate for a Forsooth!

Rich celebrities can afford to go on shopping sprees at a whim; but one in ten of the population --and one in five women--are shopaholics and their bank accounts do not always stand the strain of that personality disorder.

Dr Thomas Stuffaford (The Times)
27 November 2000

                                        <<<========<<



>>>>>==============>
Dan Rockmore suggested the next article.

Public lives.
New York Times, 4 Jan. 2001, B2
James Barron

The cover of the pre-Oscar issues of Entertainment Weekly for the past two years featured movies and actors who won Oscars. This year's cover includes photographs of Michelle Yeoh and Chow Yun Fat, the stars of "Crouching Tiger, Hidden Dragon."

Commenting on the possibility of continuing this "streak", Mark Harris, assistant managing editor of Entertainment Weekly, stated:

There's no surer way to snap a streak than to identify it as a streak.

DISCUSSION QUESTION:

This reminds Laurie of his favorite use of probability theory. When faced with a potential disaster -- for example being in an airplane in a major storm -- Laurie worries that the disaster will happen, thereby decreasing the probability of it happening. This is based on the theory that the disaster that does Laurie in will, with high probability, be one that he is not anticipating. Does this sound reasonable?
                                        <<<========<<



>>>>>==============>
ESPN to call next Florida election.
Slate, Sports Nut, Dec. 27, 2000
Gregg Easterbrook

Easterbrook writes:
Shortly before the season began, ESPN ran no fewer than 15 sets of predictions of winners for the six NFL divisions and six wild-card slots. This was an astonishing exercise in covering every base, as ESPN NFL regulars made complete forecasts, seeming to guarantee somebody would get it right if only by blind chance. Instead all 15 sets of predictions were wrong. Fifteen people each picking 12 positions offers 180 permutations, and even with this incredible wiggle room, the ESPN meta-forecast whiffed (that is, missed completely).

The NFL consists of the American Football Conference (AFC) and the National Football Conference (NFC). Each conference has three divisions: East, Central, and West. Each division has five teams, with the exception of the Central division in the AFC, which has six teams.

The playoffs involve the six division champions (the team with the best record in each division) plus six "wild cards," three chosen from each of the two conferences.

Reader Milton Eisner observed that the list leads to considerably more than 180 possibilities. His football-savvy son Jason explained to him that his first calculation was wrong because he had assumed that the three wild cards within a conference had to come from different divisions. In fact they can come from any division and, for example, could all come from the same division.

Having gotten all this straight, Milton observed that the number of different ways to pick six division champions and six wild cards (noting that one division has six teams and the others have five each) is 6 * 5^5 * C(12,3) * C(13,3) = 1,179,750,000, significantly greater than the 180 stated in Slate.
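
For readers who want to check this arithmetic, here is a short Python sketch (assuming, as described above, an AFC of 16 teams with one six-team division and an NFC of 15 teams in three five-team divisions):

from math import comb

# Check of Milton Eisner's count: choose six division champions, then
# three wild cards from the non-champions of each conference.
champions = 6 * 5**5          # one six-team division, five five-team divisions
afc_wild = comb(16 - 3, 3)    # AFC: 16 teams minus its 3 champions
nfc_wild = comb(15 - 3, 3)    # NFC: 15 teams minus its 3 champions
print(champions * afc_wild * nfc_wild)   # 1,179,750,000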

In his "Tuesday Morning Quarterback" column (Tuesday, Jan 9) Easterbrook comments on Eisner's calculation:

Reader Milt Eisner proffered this calculation,
6 * 5^5 * C(12,3) * C(13,3), yielding precisely
1,179,750,000 permutations. TMQ takes your word
for it. ESPN had 1,179,750,000 chances and still
whiffed.

DISCUSSION QUESTIONS:

(1) The 15 ESPN experts' predictions can be found at: ESPN experts' picks for 2000

What would be a reasonable way to evaluate the predictions of the experts? Look at their predictions and see who did the best according to your evaluation.

(2) Milton writes about his own attempt to answer question (1):

If you just count how many of the 12 playoff teams were correctly predicted (disregarding who was division champ and who was a wild card), only 2 of the 15 predictors were able to get as many as 8 teams correct! Nine got 7 correct and the remaining 4 got only 6 correct.

This leads to the following problem: How many ways can you pick 12 NFL playoff teams, disregarding who is a champion and who is a wild card? You must pick 6 teams from each conference and there must be at least 1 from each division.

Milton gets 30,078,125 for an answer. Is he right? Considering only the problem of predicting the 12 playoff teams, did the experts do significantly better than guessing?
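
One way to check Milton's figure is a brute-force count; the Python sketch below enumerates the six-team subsets of each conference and keeps those that touch all three divisions (division sizes as described above, with the AFC Central having six teams):

from itertools import combinations

def picks(division_sizes, k=6):
    # label each team by its division, then count the k-team subsets
    # that include at least one team from every division
    teams = [(d, i) for d, size in enumerate(division_sizes)
                    for i in range(size)]
    return sum(1 for c in combinations(teams, k)
               if {d for d, _ in c} == set(range(len(division_sizes))))

afc = picks([6, 5, 5])
nfc = picks([5, 5, 5])
print(afc * nfc)   # compare with Milton's 30,078,125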
                                        <<<========<<



>>>>>==============>
Craig Fox (Duke U.) and Yoval Rotenstrich (U. of Chicago) sent us the following question. You can find the answer and a discussion of why you might have found this problem puzzling at the end of this newsletter.

QUESTION: You will receive a prize if both a fair coin lands "heads" AND a fair die lands "6". After the coin is flipped and the die is rolled you ask if AT LEAST ONE of these events has occurred and you are told "yes." Given this information, would you rather get the prize if both events have occurred or start from scratch with a new bet that will pay the prize only if a new die will land "6"?
                                        <<<========<<



>>>>>==============>
Reader Doug Rodgers suggested our next article.

CERN's Gamble shows perils, rewards of playing the odds.
Science Magazine, Vol.289, 2260-2262
Charles Seife

In the domains of physics and astronomy, researchers look for objects that are at the edge of observability. This pushing of the observable envelope means that there is always some doubt about whether the observations are real or imagined. Currently, a team of physicists in Geneva is trying to gather evidence for the existence of the Higgs boson, an elementary particle that is predicted by the Standard Model in physics.

The question in this case, as in other similar cases, is what constitutes enough evidence to claim that the existence of this particle has been shown? The current prevailing sentiment in physics and astronomy is that the five-sigma rule should be applied to evidence. This rule essentially means that one should have enough evidence so that the probability that the evidence is due to chance, assuming that the particle does not exist, is less than one in 3 million.

It is not clear to this reviewer how one can determine the above probability. One has to make assumptions about the way that the world works, and then, under those assumptions, work out the probability that the observations would come out the way that they do. If these calculations were in the right ballpark, then one would almost never see a five-sigma result that was later shown (by more observations, presumably) to be incorrect. Even a three-sigma result should only be overturned one time in a thousand.
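
For reference, the nominal tail probabilities behind these sigma rules are easy to compute; here is a short Python sketch (one-sided tails of a standard normal distribution; two-sided figures would be twice as large):

from math import erf, sqrt

def upper_tail(sigma):
    # P(Z > sigma) for a standard normal Z
    return 0.5 * (1 - erf(sigma / sqrt(2)))

for s in (3, 5):
    p = upper_tail(s)
    print(f"{s}-sigma: {p:.2e} (about 1 in {round(1/p):,})")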

However, according to John Bahcall, a particle physicist and astrophysicist at Princeton University, "half of all three-sigma results are wrong." In fact, this article gives examples of five- and six-sigma results that have been subsequently overturned. At least twice, astronomers have claimed to have found evidence that show the existence of planets around other stars, and this evidence was in the five-to-six-sigma range. The Higgs boson was claimed to have been observed, at the five-sigma level, in the early 1980's.

One of the reasons that people worry about vanishing five-sigma results is that Nobel prizes sometimes hinge on such results. It is generally thought, for example, that the first research team to establish the existence of the Higgs boson will be awarded a Nobel.

Val Fitch, who won the 1980 Nobel prize for discovering charge-parity violation in K mesons, says that the statistical analysis "is based upon the assumption that you know everything and that everything is behaving as it should. But after everything you think of, there can be things you don't think of." In general, physicists and astronomers are more likely to accept an observation, even at the 3-sigma level, if it fits in with current theory. They are likely to be skeptical of a five-sigma observation if it runs counter to what they think should be true.

DISCUSSION QUESTIONS:

(1) Do you think that the 5-sigma claims make sense in the context of the examples discussed in this article?

(2) William Jefferys (Figuring the Odds, Science, 2000, 290: 1503-1504, Letters) writes that Bayesians are not surprised that results at several standard deviations are often spurious, as Charles Seife points out. Why does Jefferys say this? (See his letter for more details.)
                                        <<<========<<



>>>>>==============>
Under suspicion: the fugitive science of criminal justice.
The New Yorker, January 8, 2001, pp. 50-53
Atul Gawande

This article is an introduction to some of the procedural issues involved in conducting and interpreting the outcomes of police lineups. The main focus is on the ability of eyewitnesses to correctly identify a suspect, and, conversely, how such factors as the make-up of the lineup, the method of viewing the lineup, and the information provided to the witness affect the chances that the witness will pick an innocent person. The importance of the issue is borne out by the fact that, as the article states, "in a study of sixty-three DNA exonerations of wrongfully convicted people, fifty-three involved witnesses making a mistaken identification, and almost invariably they had viewed a lineup in which the actual perpetrator was not present."

Gawande describes some of the findings of researchers in this area, including Gary Wells, a psychologist at Iowa State University. Although the New Yorker article contains only a modest amount of data, Wells's web page contains more information on the topic, including a video lineup where you can test your skill at identifying a "rooftop bomber". Especially useful is the paper "The Informational Value of Eyewitness Responses to Lineups: Exonerating versus Incriminating Evidence", by Wells and Olson, also available at the website. The paper includes a very clear description of the relationship between the (prior) probability that a suspect in a lineup is the actual culprit, and the (posterior) probability that the suspect is the culprit, given certain outcomes following a lineup.

Using experimental data obtained from "staged crime" experiments, Wells and Olson compare, for a given lineup procedure and makeup, rates of identification of the suspect given that the suspect is or is not the culprit. They also determine similar rates for identifying "fillers" (the other members of the lineup, assumed innocent); for rejecting the entire lineup ("not there" responses); and for "don't know" responses. In addition, Wells and Olson compare these rates across different lineup designs, such as sequential lineups, in which witnesses are shown potential suspects one at a time, and lineups in which fillers are chosen according to varying criteria.

The preceding information is then used to derive what the authors call "information gain curves" for several outcomes. For example, for different lineup designs, they graph the quantity |p(SC|IDS) - p(SC)|, where SC = "suspect is culprit" and IDS = "witness identifies suspect", as a function of p(SC). These curves are interpreted as a graphical representation of the new information, expressed as a probability, that the outcome of a lineup provides. Using the information provided, it is not difficult to derive some intriguing results. For example, the probability that a suspect in a lineup is the culprit, given that two witnesses each identify a filler, is surprisingly low: even if p(SC) = .9, the posterior probability drops to .30; if p(SC) = .5, it drops to .05!
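
The Bayes' rule calculation underlying these curves is easy to set up. In the Python sketch below, the identification rates are hypothetical placeholders, not Wells and Olson's measured figures; substituting the rates from their staged-crime experiments gives the kind of numbers quoted above:

def posterior(prior, rate_if_culprit, rate_if_innocent):
    # P(suspect is culprit | observed lineup outcome), by Bayes' rule
    num = rate_if_culprit * prior
    return num / (num + rate_if_innocent * (1 - prior))

# Hypothetical example: suppose a witness picks a filler 20% of the time
# when the suspect is the culprit and 55% of the time when he is not.
for prior in (0.9, 0.5):
    print(prior, round(posterior(prior, 0.20, 0.55), 2))

# Two witnesses who each pick a filler, treated as independent, would use
# the products of the rates: posterior(prior, 0.20**2, 0.55**2).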

DISCUSSION QUESTIONS:

(1) The paper gives information gain curves for the outcomes IDfiller = "witness identifies filler" and "not there" responses.

(a) In both cases, the curves indicate that these responses decrease the probability that suspect is the culprit (i.e., they tend to be exonerating.) Is this what you'd expect? Why or why not?

(b) Which of the above responses do you think yields the biggest information gain?

(c) If witnesses are not explicitly advised that a lineup might not contain the culprit, it turns out that identifying a filler is more exonerating of the suspect than a "not there" response. Is this what you'd expect? Why or why not? How does this compare to your answer in part (b)?

(2) Information gain is larger for sequential lineups than for simultaneous lineups (the usual kind). What might account for this?

(3) Wells and Olson attempt to combine the effects of two witnesses if one identifies the suspect while the other identifies a filler. Since the two information-gain curves corresponding to these outcomes cross for p(SC) = .53, the authors conclude that "if we assumed a .53 prior probability that the suspect is the culprit, the two eyewitnesses would tend to cancel each other out." How are they combining probabilities here? What is wrong with this approach?
                                     <<<========<<



>>>>>==============>
Ask Marilyn.
Parade Magazine, 10 December 2000, 11
Marilyn vos Savant

Here is a lottery question posed by a reader.

A dozen glazed doughnuts are riding on the answer to this question: Are the odds of winning in a lotto drawing higher when picking six numbers out of 49 or when picking 5 numbers out of 52?

Marilyn says that it is "vastly easier" to win in the 5 out of 52 case. She adds that this holds whether or not the selections are made with replacement.
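
The relevant counts are easy to compute; here is a short Python check of the number of equally likely tickets in each game (numbers drawn without replacement, order ignored):

from math import comb

print(comb(49, 6))   # 13,983,816 possible 6-of-49 tickets
print(comb(52, 5))   #  2,598,960 possible 5-of-52 tickets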

DISCUSSION QUESTIONS:

(1) What are the relevant probabilities?

(2) Do you agree that the second game is "vastly easier" to win?
                                     <<<========<<



>>>>>==============>
Fingerprinting fingerprints.
The Economist, 16 December 2000, pp. 89-90

We have presented many stories about "DNA fingerprinting" in Chance News. The present article deals with fingerprinting in its original sense. According to the article, the widespread acceptance of forensic fingerprinting resulted in its name being attached to the DNA procedures. Ironically, the attention now being paid to scientific evidence in the courts has led fingerprinting itself to be called into question. Speaking at a recent meeting on Science and the Law sponsored by the US Justice Department, Dr. Simon Cole of Cornell University argued that the procedure has never been placed on rigorous scientific footing.

The article sketches the history of fingerprinting, starting with the work of Francis Galton. In 1892, Galton estimated the probability of two random prints matching was one in 64 billion. His idea was to compare the "points of similarity" (also known as "Galton details"), which are places at which ridges in the print either terminate or split. Typical prints have from 35 to 50 such points. Modern forensic practice varies but tends to consider the evidence convincing when 8 to 16 points of similarity are identified between a suspect's prints and those found at a crime scene. However, the methodology is based on expert practice, not on any statistical analysis that would quantify the degree of certainty.

There are two further complications. First, evidence from crime scenes often consists of partial prints, and there is no theoretical basis for estimating the chance of a match in such cases. Second, the prints being compared are not obtained under the same conditions. The arrest record consists of carefully inked prints of suspects, whereas "latent" prints at the crime scene may only be visible with special chemicals or lighting.

The article cites a pivotal 1991 case involving a robbery in Pennsylvania. Byron Mitchell was implicated by two latent prints in the getaway car. His lawyer is still appealing the case, on the basis of an informal experiment that was actually done by the FBI in the course of the trial. The Bureau sent the latent prints together with those from Mitchell's arrest record to 53 state law enforcement agencies. Of the 35 that responded, eight failed to match one of the latent prints, and six failed to match the other. The article calls this an average failure rate of 20%. Later, however, when the Bureau followed up with enlarged photographs of the latent prints indicating which fingers they allegedly matched, most of the labs agreed with the identification.

It remains to be seen whether this case will inspire future challenges to forensic fingerprinting.

For more about the history of fingerprinting, you can read the Lingua Franca feature article "The Myth of Fingerprints" by Simon Cole.

DISCUSSION QUESTIONS:

(1) Do you have any idea how Galton might have estimated his one in 64 billion probability? Does Galton explain this in his book Finger Prints (London: Macmillan and Co., 1892)?

(2) How was the 20% "average failure rate" calculated? Does it mean that 20% of the labs would have cleared Mr. Mitchell?

(3) Are you concerned about the response rate?
                                     <<<========<<



>>>>>==============>
After standing up to be counted, Americans number 281,421,906.
The New York Times, 29 December 2000, A1
Steven A. Holmes

Figures from the 2000 Census have been released, which put the US population at 281,421,906. This is nearly 6 million more than the estimate of 275,843,000 that the Census Bureau made on October 1.

Debate continues as to whether statistical adjustment would yield more accurate figures. Republicans argue that the higher than expected total shows that increased efforts to improve traditional counting, which included an advertising campaign to encourage compliance, have paid off. Kenneth Blackwell of Ohio, who co-chairs a board that monitors the Census, said: "We may have a situation where the differential undercount is wiped out." But Census Director Kenneth Prewitt was more cautious, commenting that "There is no way I can tell you today that these numbers are accurate. We are going to work these data backwards and forwards to find out how accurate we are, and then we're going to tell you."

A Supreme Court decision last year ruled that statistically adjusted data could not be used for apportioning seats in the House of Representatives. Thus the overall impact on the 2003 Congress is already known. A total of 12 seats will shift in the reapportionment, with ten states losing seats and eight states gaining (for example, New York will lose two and California will gain two). On the other hand, the Court did not rule on whether states could use statistical data when redrawing their own congressional districts. Census officials are expected to announce in February whether they believe that sample data from 314,000 households should be used for this purpose. This is sure to provoke additional political debate.

DISCUSSION QUESTION:

Tim Erickson suggested the following discussion questions to the Fathom listserve:

The Census Bureau (www.census.gov) has posted the populations of the states as they will be used for reapportionment, as well as the "resident population," which is what we usually see, I think.

So: how many people does it take to get a representative? Which state's representatives give their constituents the most electoral "leverage", i.e., the fewest people per rep? Which state grew the most in the decade? Does it matter whether that's percentage or absolute growth? If two states switch rank in population, what does that look like on the scatter plot (see NY and TX)?

Tim suggested that we could easily add our own questions to this.
                                     <<<========<<



>>>>>==============>
Studies find no short-term links of cell phones to tumors.
Los Angeles Times, 20 December, 2000, A1
Thomas H. Maugh II

Cellular-telephone use and brain tumors.
New England Journal of Medicine, Jan. 11, 2001
Posted early at New England Journal of Medicine On-line
Peter D. Inskip and others

Handheld cellular telephone use and risk of brain cancer.
JAMA, 20 Dec 2000, Vol 284, No. 23, pp 3001-3007
Joshua E. Muscat and others

There has been some concern that cell phones could increase the chance of brain tumors because they transmit radio frequency energy from an antenna held next to the head.

The brain-cancer question became an issue of public concern in 1993, when David Reynard of St. Petersburg, Florida, appeared on the CNN program "Larry King Live" maintaining that his wife had contracted brain cancer from a cell phone he had bought for her.

This article reports on two very similar studies to estimate the risk, if any, of using a cell phone. One of these studies was published in the December 20 issue of JAMA and the second was posted early on the website of the New England Journal of Medicine. Both studies found no increased risk of cancers among those who used cell phones over a period of two or three years. A large European study now underway will help settle the question of whether those who use cell-phones over longer periods of time are also at no increased risk for brain cancer. Results from this study are not expected until 2003 at the earliest.

Both studies are case-control studies. (See item 7 in Chance News 9.07 for a discussion of how case-control studies work.) The study reported in JAMA had 469 men and women, ages 18 to 80, who were diagnosed with brain cancer between 1994 and 1998. These were compared with 422 people, matched by age, sex, race, years of education and occupation, who did not have brain cancer. The study reported in the NEJM had 782 patients who were diagnosed with brain tumors between 1994 and 1998 and compared them with 799 people who were admitted to the same hospitals for conditions other than cancer.

DISCUSSION QUESTIONS:

(1) What are some of the biases that can occur in case control studies like these?

(2) In a related editorial in the NEJM, Dr. Dimitrios Trichopoulos makes the following concluding comment:

There is another lesson to be learned about the alarms that have been sounded about public health during the past five years. When the real or presumed risk involves communicable agents such as prions that cause mad cow disease, no precaution, however extreme, can be considered excessive. By contrast, for non-communicable agents such as radio-frequency energy, the lack of a theoretical foundation and the absence of empirical evidence of a substantial increase in risk legitimize cautious inaction, unless and until a small excess risk is firmly documented.

Do you agree with this?
                                     <<<========<<



>>>>>==============>
Errors plagued election night polling service.
The Washington Post, 22 December 2000
Howard Kurtz

This article is based on a confidential report by the Voter News Service prepared by VNS editorial director Murray Edelman and obtained by the Washington Post. This report was the result of an internal investigation by VNS on what went wrong on election night.

The article reports the following rather bewildering series of errors. It is hard to tell from the article whether these statements are in the report or are just the reporter's version of what was in the report.

The first comment is that VNS wildly underestimated the size of Florida's absentee vote. The group thought absentee ballots would make up 7.2 percent of the overall vote, instead of the actual figure of 12 percent.

The article then states that at 7:50 p.m. on election night VNS projected a 7.3% lead for Gore, which led the networks to call Florida for Gore. The article accounts for this 7.3% error as follows:

1.3 percentage points came from the fact that VNS projected that absentees would vote 22.4 percent more for Bush than Election Day voters, when the actual figure was 23.7 percent. (This seems to be comparing apples and oranges)

Another 2.8% was due to problems with the exit polls themselves, but the article states that this was "within the normal range" for exit polls. (If it was within the normal range, how do they know it was an error?)

The article then reports that the remaining 3.2% was the result of two kinds of errors in the exit poll model itself.

First, the model used results of past elections to correct any exit-poll errors. For this VNS used Florida Governor Jeb Bush's 1998 victory as the best predictor of how his brother would fare this year, but the results of Robert J. Dole's 1996 bid would have produced a better estimate. (Hindsight is great!)

Second, VNS uses raw vote counts to help correct any exit-poll errors. At 7:50 the exit poll in Tampa was off by 16 percentage points, inflating Gore's estimated lead; at that time Tampa had not yet reported any actual votes, leaving VNS unable to correct the error.

Having thus accounted for the 7.3% error that made Gore the favorite at 7:50, the article turns to the projection at 2:10 in the morning that Bush would win. The article states that this projection was also based on bad VNS data.

At 2:10 with 97 percent of the state's precincts reporting, VNS estimated that there were 179,713 votes outstanding when in fact more than 359,000 votes came in after 2:10. In Palm Beach County alone VNS projected 41,000 votes still to come when in fact there were 129,000 still to come.

These errors were compounded by local problems reporting the vote. At 2:08, Gore's total in Volusia County dropped mysteriously by more than 10,000 votes, while nearly 10,000 votes were added to Bush's total. This error increased Bush's lead by 20,348 votes, giving him a 51,433-vote lead over Gore as estimated by VNS.

Other errors are also mentioned: Brevard County later increased Gore's total by 4,000 votes, with no increase for Bush; this appeared to be a correction of an earlier error. Finally, the article reports that VNS's quality-control system was so inadequate that it failed to reject an early report that 95% of Duval County had voted for Gore.

For a somewhat more coherent account of the VNS errors and the problems that the networks had, see the "CBS Election Night Report".

DISCUSSION QUESTIONS:

(1) Well, what do you make of all this? Which of these errors do you think could really have been avoided?

(2) Do you think the networks would do better by having more than one source for their data?
                              <<<========<<



>>>>>==============>
Communicating statistical information.
Science, 22 Dec 2000, pp. 2261-2262
Ulrich Hoffrage, Samuel Lindsey, Ralph Hertwig, Gerd Gigerenzer

It is well known that people often understand statistical information better when it is presented in terms of frequencies rather than probabilities. This is particularly true in problems involving false positive rates. Here is an example of such a problem with two different formulations, which the authors call a "probability formulation" and a "natural frequency formulation."

Probability formulation:

The probability of colorectal cancer can be given as .3%. If a person has colorectal cancer, the probability that a hemoccult test is positive is 50%. If a person does not have colorectal cancer, the probability that that person still tests positive is 3%. What is the probability that a person who tests positive actually has colorectal cancer?

Natural frequency formulation:

Out of every 10,000 people 30 have colorectal cancer. Of these, 15 will have a positive hemoccult test. Out of the remaining 9970 people without colorectal cancer, 300 will still test positive. How many of those who test positive actually have colorectal cancer?

(A hemoccult test is the familiar fecal occult blood test that you are often invited to take when you have a physical).
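
Both formulations lead to the same answer, which can be checked with a few lines of Python:

# Probability form (Bayes' rule) and natural-frequency form of the
# hemoccult problem; both give roughly 4.8%.
p_cancer = 0.003
p_pos_given_cancer = 0.50
p_pos_given_no_cancer = 0.03

bayes = (p_pos_given_cancer * p_cancer) / (
    p_pos_given_cancer * p_cancer
    + p_pos_given_no_cancer * (1 - p_cancer))

frequency = 15 / (15 + 300)   # out of every 10,000 people

print(round(bayes, 4), round(frequency, 4))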

The authors state that in their studies only 1 out of 24 physicians gave the correct answer when given the probability version, while 16 out of 24 got it correct when given the frequency version.

The authors report similar differences both in other medical examples and in studies with law students who were asked what conclusions they would draw from a match between the DNA profile of a suspect and evidence left at the scene of a crime, given the relevant statistical information.

We have discussed this research before (see Chance News 6.10). We mention it here again because this Science article has more up-to-date references to the authors' work.

DISCUSSION QUESTIONS:

(1) Why do you think the authors call the frequency approach the natural frequency approach?

(2) Do you see any problem with equating a .3 percent chance of colorectal cancer with the statement: Out of every 10,000 people 30 have colorectal cancer?
                                     <<<========<<



>>>>>==============>
Joan Garfield (poker enthusiast?) suggested the next article.

Fortune's smile.
Harper's Magazine, December 2000
James McManus

31st Annual World Series of Poker.
Discovery Channel Video
VHS#759308, $29.95

James McManus is a poet and free-lance writer who teaches writing and literature at the School of the Art Institute of Chicago. He was assigned by Harper's Magazine to write an article about the 31st annual championship "World Series of Poker," held at Binion's Casino in Las Vegas May 15-18, 2000.

In this game 512 players each contribute $10,000 to a pot and play a form of poker called Texas Hold'em until they run out of money. They are allowed to use only money accumulated from their play. When all but one player has been driven out of the game, the roughly 5-million-dollar pot is divided among the 45 players who survive the longest. This year the lone survivor -- the champion -- received $1,500,000, and the next four finishers received $896,500, $570,500, $326,000, and $247,760 respectively. Others in the 45 who survived the longest received smaller prizes.

Jim was a recreational poker player and had never played in a poker tournament. He decided to try to enter this tournament to give him a better feeling for his subject. In the year before the tournament, Jim read numerous poker books and played computer poker games to improve his poker skills.

Harper's paid Jim $5,000 for the article and provided, in advance, $4,000 for food, travel, and lodging. Jim skimped on the expenses and budgeted part of this money to play in the World Series of Poker satellite games. These games have lower stakes and are designed to give players an opportunity to raise the required $10,000 to play in the championship game. After failing to win in two of these satellites and now down to $1,000, Jim entered a satellite game in which ten players each contributed $1,000 to make a $10,000 pot. The players play until one person has the whole pot, and this person turned out to be Jim.

This allowed Jim to enter the championship game, and his Harper's article gives a blow-by-blow account of the game. We will give a brief description of the tournament, but we cannot do justice to his story. We also found it helpful to view the Discovery Channel video of the tournament.

The play extended over four days. On the fourth day there were only 6 players left. They were:

Seat  Player          Amount
  1   Chris Ferguson  $2,853,000
  3   Jim McManus       $554,000
  5   Roman Abinsay     $521,000
  6   Steve Kaufman     $511,000
  2   Hasan Habib       $464,000
  4   T.J. Cloutier     $216,000

All but McManus are seasoned tournament players. It is generally thought that if the players are comparable, each player's chance of winning is proportional to the amount of money he has. This would give Jim about a 10% chance of winning and Cloutier only about a 4% chance. However, as the following descriptions of the players suggest, Jim and Cloutier are hardly comparable players.
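
The proportional-to-chip-count rule of thumb is easy to apply to the stacks in the table above; a short Python sketch:

stacks = {"Ferguson": 2_853_000, "McManus": 554_000, "Abinsay": 521_000,
          "Kaufman": 511_000, "Habib": 464_000, "Cloutier": 216_000}
total = sum(stacks.values())
for name, chips in stacks.items():
    # each player's share of the chips = his nominal chance of winning
    print(f"{name:9} {chips / total:5.1%}")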

Cloutier played pro football for the Montreal Alouettes before taking up poker. He is now considered by many to be the best living poker player. He has won 51 tournaments and more money than any other player. He was twice a finalist but never a winner in the World Series of Poker. Cloutier is the co-author of the book "Championship No-Limit and Pot-Limit Hold'em," which Jim studied to prepare for the game.

Hasan Habib grew up in Pakistan and has been playing poker since 1985. He started with minor tournaments and worked his way up to the major tournaments where he is now considered a top player.

Steve Kaufman is a rabbi and Professor of Bible and Cognate Literature at Hebrew Union College. He has a Ph.D. from Yale University. While poker is a hobby for him, it has also nicely complemented his academic salary.

Roman Abinsay has been described as "the quiet man." On day three, without anyone noticing, he increased his stack from $18,000 to $521,000.

Chris 'Jesus' Ferguson was described by Will Buckley (Observer, 4 June 2000) as: "Long-hair, beard and moustache, wears stetson and shades. Looks like Our Lord and Saviour might have done if he'd concentrated on poker." Chris recently received a PhD in Computer Studies from UCLA.

Texas Hold'em is a seven-card game. Each player is dealt two cards face down and these are the only cards unique to a player's hand. After an initial round of betting, three shared cards are turned over. This is called the "flop." After another round of betting a fourth shared card, called the "turn," is exposed. Still more betting and then the final shared card, called the "river," is revealed and the best five-card hand that can be made from a player's seven cards wins.

You have to read the article to understand the emotional roller coaster that the players go through in playing in this series. Here is a sample: the hand that did Jim in.

On the ninth hand of the final day Jim is dealt the AC, QC, which is considered a good hand. Habib bets all his money (about $400,000). Jim, thinking that Habib is trying to "steal" the pot, calls his bet. The other players drop out. If Jim wins he will have about $900,000, enough to give him a serious chance of winning the tournament. Since there can be no more betting, Jim exposes his two cards, AC, QC, and Habib shows his two cards, AH, 4H. The flop is 9S, 6S, KS. Only a 4 or an ace can save Habib. Habib stands up, getting ready to shake hands with Jim. Jim writes: "my heart pounds spasmodically, but I'm still feeling thoroughly confident." Alas, chance rules and the river is the 4H. Rather than having the $900,000 he anticipated, Jim is reduced to $105,000, and this lasts him only two more hands. However, he leaves the tournament with his fifth-place $247,760 winnings and enough material for his article and a book that he plans to write.

The final play came down to a battle between TJ and Chris. TJ had managed to win enough so that they each had about the same amount of money. The tournament ended with a final hand which was remarkably similar to the one that put Jim out of the game. Here is the description as given by Mike Paulle at the Binion World Series of Poker homepage:

On that last hand, T. J. had gone all-in (bet all his money) for almost 2 1/2 million dollars over an initial raise by Chris. After several minutes of thought, Ferguson called the biggest bet of his life with an A 9.

When Ferguson called Cloutier's bet, the hands were turned over. The crowd gasped as it looked like T. J. Cloutier, after years of frustration, would finally get what he so richly deserved--a World Championship gold bracelet to go with the four event bracelets he already had.

Cloutier had an A Q to Ferguson's A 9. One hand could barely be more of a favorite over another. The flop came 2 K 4. Another King came on the turn. Ferguson could have been saved, on the river, if a deuce or a four came. The crowd was all ready to roar for T.J.'s triumph. But Chris knew exactly what he needed for a last card. Before the audience could even comprehend that Cloutier had lost, Ferguson's arms shot up with clenched fists when the nine came down.
                                     <<<========<<



>>>>>==============>
Answer to the Fox-Rotenstrich question: You should start from scratch with the new bet.

Analysis: The flip of a coin and the roll of a die yield twelve equiprobable events: {T1, T2, T3, T4, T5, T6, H1, H2, H3, H4, H5, H6}. Given the information that either the coin has landed heads or the die has landed six or both, seven events remain possible: {T6, H1, H2, H3, H4, H5, H6}. The winning event, H6, therefore has a probability of only 1/7, which is obviously less than the probability of a new die landing "6", which is 1/6.

Why is the answer counterintuitive? It appears that most people erroneously consider the flip of the coin and the roll of the die "separately." That is, people reason incorrectly that if either the coin flip or die roll is already good, and one wins the prize if both are good, then the probability of winning the prize must therefore be somewhere between the probability of a good die roll (1/6) and the probability of a good coin flip (1/2). Explicitly listing the relevant sample space makes the correct answer clear, because doing so properly "combines" the flip of the coin and the roll of the die.
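
A simple simulation also makes the answer convincing; here is a Python sketch comparing the conditional bet with the fresh roll of a die:

import random

trials, at_least_one, both = 1_000_000, 0, 0
for _ in range(trials):
    heads = random.random() < 0.5
    six = random.randint(1, 6) == 6
    if heads or six:
        at_least_one += 1
        if heads and six:
            both += 1

print(both / at_least_one)   # close to 1/7 = 0.142857...
print(1 / 6)                 # the fresh bet: 0.166666...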

In fact, this problem can be generalized: for any two independent events A and B with probabilities p and q in (0,1), P(A and B | A or B) = pq/(p + q - pq), which is easily shown to be less than min(p,q).

Thanks to Maya Bar-Hillel for comments and suggestions.
                                     <<<========<<



>>>>>==============>
Chance News
Copyright (c) 2001 Laurie Snell

This work is freely redistributable under the terms of the GNU General Public License as published by the Free Software Foundation. This work comes with ABSOLUTELY NO WARRANTY.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

CHANCE News 10.01

November 28, 2000 to January 10, 2001

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!