!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

CHANCE News 8.01

(10 December 1998 to 20 January 1999)

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Prepared by J. Laurie Snell, Bill Peterson and Charles Grinstead, with help from Fuxing Hou and Joan Snell.

Please send comments and suggestions for articles to jlsnell@dartmouth.edu

Back issues of Chance News and other materials for teaching a Chance course are available from the Chance web site:

Chance News is distributed under the GNU General Public License (so-called 'copyleft'). See the end of the newsletter for details.

Chance News is best read using Courier 12pt font.

===========================================================

I don't mean to sound crass about this but it is going to sound crass -- we don't care what they die of as long as they die on time.

Cindi McCary
Approaches and Social Aspects of Viaticals
Chance Lectures Dartmouth College 1998

===========================================================

Contents of Chance News 8.01

<<<========<<




>>>>>==============>
We have put the videos of the Last Chance Lectures, held at Dartmouth Dec. 11-12, 1998, on the Chance web site. Viewing them requires at least a 58kbs connection and a computer with a speed of at least 150 megahertz. We hope you will try one or more of the videos and let us know whether they work or whether you have trouble with them. In this Chance News, we have included reviews of Hal Stern's 1997 Chance Lecture and Clark Chapman's 1998 Chance Lecture. The experience of writing these reviews convinced us that your students will learn a lot if you ask them to write about what they found interesting in "what the experts say."
<<<========<<




>>>>>==============>
This issue is late because we went to the math meetings in Texas. It was fun to read some new newspapers. Here is a letter to the editor we noticed in the Dallas Morning News, Jan. 12, 1999, 8A:

I was really angry when I read the front page article Jan. 9 stating that a poll finds 54% of Texans back a quick vote on Bill Clinton. The Dallas Morning News Poll is not accurate of what people think. A random poll of 514 people does not accurately portray the majority of people in the state who were not asked.

Our independent poll, of people we meet in the metroplex during everyday sales is very different. The random people we ask are in favor of impeachment and feel that Bill Clinton should be forced from office. This includes people who voted for him one or more times.

I have been thoroughly disgusted with the "random" polls conducted in New York implying that Americans favor Bill Clinton and think he is doing a good job --- usually polling 500 people. This is propaganda and manipulation of public opinion through public news sources and the information being put out is a lie! Most people do not feel the way polls imply. It is not appreciated by most Americans.

Carol Yocum
Copper Canyon

<<<========<<




>>>>>==============>
Shortly after Tom Gilovich's great talk on streaks in sports at the Chance Lectures, our photographer Bob Drake showed us the following article from our local paper. Here are some excerpts from this article:

North Carolina Women have the touch
Valley News, Dec. 21, 1998 B8
The Associated Press

This article describes the North Carolina-Alabama basketball game won by North Carolina 90-75.

North Carolina coach Sylvia Hatchell just waited to see who had the right touch.

Juana Brown emerged as the sharpshooter, and Hatchell made sure she got the ball. Brown scored 26 points. "We've had four or five players in double figures all year, and you never know who's going to be hot," Hatchell said. "Today it was Juana, and I was telling our players to get her the ball. That's not good coaching -- that's just common sense."

"It was a team win, but even in a team concept you want to get the ball to the person with the hot hand," said Brown, whose six 3-pointers were one off the school record. "I had the hot hand and I was calling for it." Alabama's Dominique Canty led Alabama with 26 points. "We knew Brown was a good shooter, but not like that. She was in a zone," Canty said.

DISCUSSION QUESTION:

Why can't statistics capture these moments?
<<<========<<




>>>>>==============>
Norton Starr sent us the latest howlers from the Royal Statistical Society News, January, 1999 issue, p. 13:

So confident is he of Oracle's superiority over Microsoft SQL Server 7.0 (launched last week) that Oracle CEO Larry Ellison is offering $1 million to anyone who can get SQL 7.0 to run less than 100% slower than Oracle 8.

PC Week    24 November 1998

------------------------------------------------

The Centre for the Study of Higher Education (University of Melbourne) has sought Macquarie's support to survey a representative sample of their academic staff. Interested members of staff should contact Professor John Loxton (x7442) about obtaining copies of the survey questionnaire.

Staff News (Macquarie University)    13 November 1998

---------------------------------------------------

The meeting is open to all fellows and non-fellows of the RSS.

Quality Improvement Committee notice RSS Meetings December 1998 [No, we don't have a third category!]

---------------------------------------------------

<<<========<<




>>>>>==============>
The following suggestion of Beth Chance might also be considered a howler.

Microsoft witness attacked for contradictory opinions
The New York Times, 15 Jan. 1999, C2
Joel Brinkley

David Boies, the Government's lead trial lawyer, challenged the testimony of Richard Schmalensee, an economist, Dean of the Sloan School of Management at MIT, and a witness for Bill Gates. Schmalensee wrote that, in a survey of software developers, "85% predicted that Microsoft's integration of Internet functions into Windows would help their company, and 83% predicted it would help consumers."

Boies produced an e-mail message written last February by Bill Gates, saying: "It would HELP ME IMMENSELY to have a survey showing that 90% of developers believe that putting the browser into the operating system makes sense. Ideally we would have a survey before I appear at the Senate on March 3rd."

In a later series of e-mail messages, Microsoft employees discuss how they would pose the questions to get the responses that Bill Gates wanted. Schmalensee said that, had he known the origin of the polling information, he would still have quoted the percentages but might have added "an explanatory phrase."

DISCUSSION QUESTION:

In another article on this topic (U.S.A. Today, Jan 15, 1999, 2B by Paul Davidson) we read:

Microsoft executive Nathan Myhrvold responded to Gates's e-mail message that it was crucial to phrase the survey question properly. He strongly advised against using the word "browser" because it suggests a separate thing. The survey instead referred to the "integration of browser technologies into the operating system."

Do you think this biased the results?
<<<========<<




>>>>>==============>
Founding father
Nature 396, 13-14 (5 Nov. 1998)
Eric S. Lander and Joseph J. Ellis

Jefferson fathered slave's last child
Nature 396, 27-28 (5 Nov. 1998)
Eugene A. Foster, M. A. Jobling, P. G. Taylor, P. Donnelly,
P. de Knijff, Rene Mieremet, T. Zerjal, C. Tyler-Smith

Through the miracles of DNA testing, we can now be an audience to not just one, but two sex scandals involving Presidents. The fact that one of the Presidents (Thomas Jefferson) has been dead for over 150 years does not seem to significantly decrease the interest being paid to the question of whether or not he was the father of any of the offspring of one of his slaves, Sally Hemings.

The first article cited above is a summary of the second article, in which the results are given. Sally Hemings went to France in 1787 to accompany the youngest daughter of the recently widowed Thomas Jefferson, who was serving there as American minister. They returned to the United States in 1789, and she bore five children between 1790 and 1808. It has been established that Jefferson and Hemings were both at Monticello when each of these children was conceived. (This reminds this reviewer of the following howler: In a paternity suit, the defendant is able to establish that he was out of the country from 290 to 240 days before the birth of the child. Assuming that the gestation period is normally distributed with mean 270 days and standard deviation 10 days, what is the probability that the defendant is the father of the child?)
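As an aside, the tail calculation the howler invites is easy to carry out. Here is a minimal sketch in Python (the howler, of course, lies in treating this tail area as a paternity probability):

from math import erf, sqrt

def norm_cdf(x, mu=270.0, sigma=10.0):
    # CDF of a normal(mu, sigma) distribution at x
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# The defendant was away from 290 to 240 days before the birth, so he
# could be the father only if the gestation period fell outside [240, 290].
p_father = norm_cdf(240) + (1.0 - norm_cdf(290))
print(round(p_father, 4))   # about 0.0241, i.e. roughly 2.4%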

It turns out that Y chromosomes are passed largely intact from father to son, so they can be used to trace paternal lineages. To make such tracings accurate, many different regions of the Y chromosome are used. In each of these regions, the variants of the genes that are present are recorded. The set of such variants on a Y chromosome is called a haplotype. In the present case, a haplotype using 19 regions was employed. Since Jefferson had no verified male offspring, his haplotype was inferred from male-line descendants of his paternal grandfather. The particular haplotype inferred for Jefferson is claimed to be quite rare, since it was not seen in a sample of 1200 people worldwide.

Descendants of Thomas Woodson, Hemings's first offspring, do not have the Jefferson haplotype, but descendants of Eston Hemings Jefferson, her last son, do have the Jefferson haplotype. The authors of the second article conclude that, indeed, Jefferson was Eston's father.

In several letters that appeared subsequent to the above articles, it was pointed out that Jefferson had a brother Randolph, who had five sons. All six of these males had the same haplotype as Jefferson, since this haplotype was inferred from descendants of Jefferson's paternal grandfather. It is also known that at least one of these sons was raised by Thomas Jefferson. In their reply to these letters, the authors back off quite a bit from the statement in the title of their original article. They state that "[w]e know from the historical and the DNA data that Thomas Jefferson can neither be definitely excluded nor solely implicated in the paternity of illegitimate children with his slave Sally Hemings. When we embarked on this study, we knew that the results could not be conclusive, but we hoped to obtain some objective data that would tilt the weight of evidence in one direction or another. We think we have provided such data and that the modest, probabilistic interpretations we have made are tenable at present."

DISCUSSION QUESTION:

Could a Bayesian approach shed some light on this controversy?
<<<========<<




>>>>>==============>
Watching those who watch public opinion
The Christian Science Monitor, 19 November 1998, p.3
Daniel B. Wood

This article is a bit dated but still relevant in light of the impeachment trial now underway. On the eve of Ken Starr's appearance before the House Judiciary Committee, this article questioned the role of polls in the impeachment debate, asking "Do polls tell the truth, the whole truth, and nothing but the truth?"

Polls have drawn criticism both for what they haven't predicted and for what they have. In the former category we have the unexpectedly strong Democratic result in the midterm elections. In the latter, we have former VP Dan Quayle's complaint that Americans "are far more turned off with Bill Clinton...than all of these public opinion polls are expressing."

Clinton's job approval rating has been withstanding the scandal, and polls consistently show most voters not in favor of removing him from office. Says Susan Pinkus, of the Los Angeles Times poll: "From all parts of the country and all kinds of media, we have found the same result. In a way, that validates us all."

Nevertheless, experts recommend caution in interpreting poll results as the impeachment process progresses. Polls are good at measuring basic attitudes at a fixed point in time, but less useful in assessing either the reasons for those attitudes or hypothetical scenarios that would cause those attitudes to change.

The article gives the following checklist to help readers evaluate polls:

• Disregard any poll in which respondents are self-selected. These include call-in polls, mail-in polls and 900-number polls.

• Stick with reputable polls that have gained widespread acceptance. Some national polls are Gallup, Roper, Harris, CNN, CBS/New York Times, and ABC.

• Bring a healthy skepticism to polls that don't divulge the questions asked, statistical margins of error, how respondents were chosen, how they were conducted (phone or in person), and when.

• Watch out for who is paying for or sponsoring a poll.

• Be alert to the bias of journalists or others who attempt to interpret partial poll results. Stick to media that have poll editors who can compare different polls on the same subject.

• Pay attention to respondents' ages and where in the US they live.

DISCUSSION QUESTIONS:

(1) Do the polls showing that people don't want Clinton removed from office necessarily mean that they are not turned off by his behavior?

(2) What do you think is meant by the term "partial poll results" on the checklist?
<<<========<<




>>>>>==============>
Madison Avenue and violence don't mix
The Wall Street Journal, 1 December 1998, p. B9
Sally Beatty

A study in the December issue of The Journal of Experimental Psychology: Applied reports that viewers who watched a violent film clip had poorer recall of advertising messages than viewers who watched non-violent material. This suggests that violent TV shows may not be desirable for advertisers.

The study involved 720 undergraduate students at Iowa State University. Scenes from 10 different movies were used. "The Karate Kid III," "Die Hard," "Cobra," "Single White Female," and "The Hand that Rocks the Cradle" provided violent scenes; "Gorillas in the Mist," "Awakenings," "Chariots of Fire," "Field of Dreams," and "Never Cry Wolf" were sources of non-violent scenes. Included with each clip was one of three possible commercial messages, chosen at random. One was for Wisk laundry detergent, one for Plax mouth rinse, and one for Krazy Glue. When the students were later tested on brand-name recognition, brand-name recall, and advertising message, those who had watched the violent scenes scored lower.

Brad Bushman, the Iowa State professor who directed the study, explained that the students who saw the violent scenes reported feeling angrier. He reported that such anger appears to be the major reason people had trouble remembering the ads. Critics of the study expressed concern about generalizing from college students to the population at large. Furthermore, Jim Spaeth, president of the Advertising Research Foundation, expressed concern about the use of movie clips. The research might say something about the reaction to feature films shown on TV, but such films do not represent the majority of programming.

Bushman countered by saying that all the movies he used had in fact appeared on television. As to the first criticism, he says: "My question is, why would we expect the memory of college students to differ from the memory of others?"

DISCUSSION QUESTIONS:

(1) Can you suggest any possible explanations besides anger for the effects observed in the study?

(2) Do you find Bushman's rebuttals of his critics to be convincing?
<<<========<<




>>>>>==============>
Ask Marilyn
Parade Magazine, 3 January 1999, p. 16
Marilyn vos Savant

In an earlier column (Parade, 29 November 1998, p. 26) Marilyn responded to the following question:

You're at a party with 199 other guests when robbers break in and announce they're going to rob one of you. They put 199 blank pieces of paper in a hat, plus one marked "you lose." Each guest must draw a piece, and the person who draws "you lose" gets robbed. The robbers think you're cute, so they offer you the option of drawing first, last or any time in between. When would you take your turn?

Marilyn said she would choose to draw first, explaining that "It would make no difference to my chances of losing--any turn is the same--but at least I'd get to leave this party as soon as possible." Not all of her readers agreed, and the present column contains responses from some of them.

One letter argues for drawing first: "You said any turn is the same, but I believe that would be true only if the partygoers all had to replace the papers they drew before another selection was made. But if they keep the papers (the scenario intended by the question), wouldn't the odds of losing increase as more blanks were drawn? If so drawing first is best."

Another reader argued for drawing last: "Though you have a 1-in-200 chance of getting a blank paper and not being robbed if you go first, the odds are 199 in 200 that the drawing will end with a loser (other than you) before you draw if you go last. You should go last."

Marilyn restates her original position that it makes no difference where in the process you draw. She argues that the answer would be the same as if everyone drew simultaneously, in which case it would be more intuitive that everyone has the same 1-in-200 chance. She offers another argument based on people buying tickets for a church raffle, explaining that it makes no difference whether you buy your ticket immediately when you arrive or wait until just before the drawing.
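Marilyn's claim is easy to check by simulation. Here is a minimal sketch in Python (the trial count is our choice): shuffle the 200 slips and record which draw position gets the marked one; every position should lose about 1/200 of the time.

import random

def losing_position_freqs(n_guests=200, trials=100_000):
    # Count how often each draw position gets the "you lose" slip.
    counts = [0] * n_guests
    for _ in range(trials):
        slips = ["blank"] * (n_guests - 1) + ["you lose"]
        random.shuffle(slips)                  # drawing without replacement
        counts[slips.index("you lose")] += 1
    return [c / trials for c in counts]

freqs = losing_position_freqs()
print(freqs[0], freqs[99], freqs[199])   # each should be near 1/200 = .005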

DISCUSSION QUESTIONS:

(1) The second letter actually starts on a more caustic note: "You are wrong so often, it's frightening." Is Marilyn wrong this time? Does the argument in the second letter really support a conclusion that the last position is best?

(2) If 100 blanks have been drawn, the chance that the next slip says "you lose" is indeed 1 in 100. Why doesn't this mean you should draw early if you have the choice?

(3) The first letter touches on the distinction between sampling with replacement and without replacement, which Marilyn does not directly address. Is the answer really the same in both scenarios?

(4) When would you choose to make your draw?
<<<========<<




>>>>>==============>
Rain on the way?
Sunday Telegraph (London), 3 January 1999, p. 6
Robert Matthews

The Meteorological Office has plans to introduce new weather maps that are color-coded to indicate the probabilities associated with forecasts. This enhancement has been made possible by the development of "ensemble forecasting," in which high-speed computers are used to simulate multiple scenarios, using slight perturbations of weather data to investigate forecast sensitivity. John Kettley of the Meteorological Office is quoted as saying: "We now have more confidence in the ensemble model that we are using, so that we could put some probability figures on, say, the chances of winds reaching over 80 mph in a storm." During the calm summer months, researchers hope that it will be possible to give forecasts up to 10 days ahead.
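The idea behind ensemble forecasting can be illustrated with a toy chaotic system (this sketch is ours, not the Met Office's model): perturb the initial state slightly, run the model forward many times, and report the fraction of ensemble members in which the event of interest occurs.

import random

def ensemble_forecast(x0=0.400, members=50, steps=30, noise=0.001):
    # Run a chaotic toy model (the logistic map) from slightly perturbed
    # initial states; report the fraction of runs ending above a threshold.
    above = 0
    for _ in range(members):
        x = min(max(x0 + random.uniform(-noise, noise), 0.0), 1.0)
        for _ in range(steps):
            x = 4.0 * x * (1.0 - x)
        if x > 0.5:          # stand-in for "rain occurs"
            above += 1
    return above / members

print(ensemble_forecast())   # e.g. 0.46, read as "a 46% chance of rain"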

The article states that "some people may find the idea of a '70 per cent chance of rain' hard to understand--as either rain will fall or won't--but scientists are confident that audiences will rapidly take to the concept of what is, in effect, forecasters giving the odds on rain or shine."

For more discussion of ensemble forecasting, see the Video Lecture by Daniel Wilks of Cornell, in the Chance Database.

DISCUSSION QUESTIONS:

(1) What do you think a 70% chance of rain means?

(2) What are the odds associated with a 70% chance?

(3) Do you think the public would prefer probabilities or odds?
<<<========<<




>>>>>==============>
Outer limits
The New York Times, 13 December 1998, Section 7, p. 4sd
Book Review Desk

There are two letters to the editor here relating to an earlier review in the Times of Amir Aczel's book "Probability 1: Why There Must Be Intelligent Life in the Universe" (We also discussed this book in Chance News 7.09 and 7.10).

The first is from Aczel himself.

To the Editor:

In his review of my book "Probability 1: Why There Must Be Intelligent Life in the Universe" (Nov. 15), John Durant criticized my use of a standard mathematical formula, the union rule for independent events, in computing the probability of life outside the Earth. He wrote, "Instead of the probability approaching 1, it can just as easily approach 0."

As every student of elementary statistics knows, the probability of a union of independent events with non-zero probabilities cannot converge to zero. Mathematically, the probability of a union of events is an increasing function of the number of events. The question, therefore, is not whether the probabilities will increase -- they always do -- but rather how far they will go: will they reach 1? And here I found that even a tiny initial probability (many orders of magnitude smaller than probabilities used in science) still leads to virtual certainty once it is compounded over the tremendous number of stars in the universe. I stand behind my book's conclusion.

Amir D. Aczel
Waltham, Mass.

In the review in question, Durant had stated that most people would probably not dispute Aczel's claim that the universe probably contains an enormous number of planets that could potentially support life. His concern was with Aczel's probability assignments:

The trouble starts when we come to estimate the probability that life will emerge on any of these planets. The problem is that we don't have a good theory of the origin of life on Earth. Without one, it is anybody's guess how likely this event actually was. Out of thin air Aczel conjures the figure of 1 in a trillion for this likelihood and concludes that the probability of life existing on at least one other planet is virtually 1.

This is statistical sleight of hand. What if the real value for this crucial probability is far, far smaller? What if, in fact, the probability of life appearing in any one suitable home is tiny in relation to the number of suitable homes available? Now the answer to our question changes. Instead of the probability approaching 1, it can just as easily approach 0.

DISCUSSION QUESTION:

There seem to be two arguments for convergence going on. Aczel is looking at a fixed probability of life p, and letting the number of planets get large. Durant seems to be worried about what happens as p gets small. How would you sort this out?
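One way to sort it out numerically: the probability of life on at least one of N planets is 1 - (1-p)^N, and everything hinges on how p compares with 1/N. A small sketch in Python (the planet count is a made-up stand-in):

from math import expm1, log1p

def p_at_least_one(p, N):
    # 1 - (1-p)^N, computed stably for tiny p via log1p/expm1
    return -expm1(N * log1p(-p))

N = 3e22                           # hypothetical number of candidate planets
print(p_at_least_one(1e-12, N))    # p fixed as N grows huge: 1.0 (Aczel)
print(p_at_least_one(1e-30, N))    # p much smaller than 1/N: about 3e-8 (Durant)
print(p_at_least_one(1 / N, N))    # p of order 1/N: 1 - 1/e = .632...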

The second letter to the editor took a different point of view.

To the Editor:

John Durant's discussion of extraterrestrial life misses the main point, I think. Statistically considered, there may well be a certainty of life elsewhere in the universe, but as any unmarried New Yorker can tell you, the issue is not whether there are potential mates out there (probability 1), it's whether you can find someone suitable for a meaningful relationship.

Assume that next week we receive unambiguously clear and intelligible radio signals from a planet circling Albireo, a gorgeous double star in the constellation Cygnus. Even with this amazingly good luck, considering the size of the universe, these signals would have been sent to us in about 1608, when Shakespeare was finishing "Antony and Cleopatra" and the first colonists were dying in Virginia. If we answered the signals quickly, the Albireans would -- if they were listening -- discover us in 2388, and we would be able to receive their response in 2778. However devoutly we may wish this consummation, I think that our needs, desires and interests in communicating with Albireo will most likely be unimaginably different that year than in 1998: probability 1.

The search for extraterrestrial life in the universe is a fine endeavor, and I have no wish to trivialize it; but as a practical matter the search is restricted to our immediate celestial neighborhood, with probability issues very different from those raised in Durant's review. There are so many beautiful stars, but so little time.

Vincent A. Wald
Brooklyn

<<<========<<




>>>>>==============>
The next two articles and possible discussion questions were suggested by Norton Starr. For these studies it is particularly interesting to compare the newspaper articles with the sources, since the sources are freely available on the web.

Heaven! Longer life and chocolate, too
The New York Times, 22 Dec. 1998 F7
Associated Press

Life is sweet: candy consumption and longevity
British Medical Journal, 19 Dec. 1998; 317:1683-1684
I-Min Lee, Ralph S. Paffenbarger Jr.

This article was widely reported in the press, especially since it appeared right before Christmas.

Here is the information about the study as given in the article in the British Medical Journal.

The Harvard alumni health study is an ongoing study of men who entered Harvard as undergraduates between 1916 and 1950. The study involved 7841 men, free of cardiovascular disease and cancer, who responded to a health survey in 1988, providing information on their consumption of candy as well as other health habits. The authors obtained death certificates for men who died up to the end of 1993.

In the BMJ article you will find a table that compares the consumers of candy with the non-consumers. From the information in this table the authors conclude:

Consumers and non-consumers of candy differed in several ways. Those who did not indulge were older, leaner, and more likely to smoke. They drank more, ate less red meat and vegetable or green salad and were more likely to take vitamin or mineral supplements.

The study found that between 1988 and 1993, 514 men died: 7.5% of non-consumers, but only 5.9% of consumers. This gave an age-adjusted relative risk of .83 for the consumers, taking the non-consumers as the reference group (relative risk 1). When they used a multivariate model to adjust for other factors, including cigarette smoking, they found a relative risk for the consumers of .73.

Among the consumers, mortality was lowest for those who consumed candy 1-3 times a month (relative risk .64) and highest among those who consumed candy three or more times a week (relative risk .84). The authors conclude that, as with most things in life, moderation seems to be paramount.

The questions asked did not permit the researchers to differentiate between sugar candy and chocolate. Chocolate, like red wine, contains an antioxidant which is believed to protect against heart disease. The authors suggest this might explain their results.

DISCUSSION QUESTIONS:

(1) The article, like many other medical studies, used the Cox proportional hazards regression to estimate relative risks. See if you can find how this method works from the web or a medical statistics book. Could you explain it to your class?

(2) The authors comment that, in testing for differences between the consumer and non-consumer groups on continuous variables such as age, they used non-parametric Wilcoxon rank sum tests, since the distributions were not normal. Find out how this test works and see if you think the assumptions for the test are reasonably satisfied.

(3) The article reports: "using life table analysis ... we estimated that (after adjustment for age and cigarette smoking) candy consumers enjoyed, on average .92 added years of life ... compared with non-consumers." How do you think they did this?

(4) The authors confess to an average consumption of one bar a day each. Does their study suggest that they should cut down their consumption of candy? Do you think they will?

(5) Is a study like this completely convincing that eating candy can lead to a longer life?

(6) Do you think it was a coincidence that this article appeared just before Christmas?
<<<========<<




>>>>>==============>
Getting it right on the facts of death
The New York Times, 22 Dec. 1998, F7
Lawrence Altman

Accuracy of death certificates for coding coronary heart disease as the cause of death
Annals of Internal Medicine, 15 Dec. 1998
Donald M. Lloyd-Jones et al.

Editorial: Fifty years of death certificates
Annals of Internal Medicine, 15 Dec. 1998
Claude Lenfant et al.

When a patient dies, the doctor is required to fill out a death certificate specifying the cause of death. These death certificates have many important uses. They provide information to families that can affect their own health, especially in the case of hereditary diseases. They can also affect insurance payments. They are used by policy makers to allocate money for future research. Finally, they are used to assess the results of medical studies.

There have been previous studies showing that a significant proportion of death certificates do not give the correct cause of death. The death may be sudden, leaving the doctor little information to go on. Also, doctors may be influenced by family wishes not to have certain kinds of death recorded.

This study concentrates on estimating the accuracy of death certificates with respect to heart disease. To do this, the researchers considered 2,683 participants in the Framingham Heart Study who died between 1948 and 1988. They asked a panel of three experts to examine all the information available at the time of each death and classify the cause of death as: (1) coronary heart disease, (2) stroke, (3) other cardiovascular disease, (4) cancer, (5) other, or (6) unknown.

The death certificates gave coronary heart disease as the cause of death for 942, or 35%, of the deceased, compared to 758, or 28.3%, for the expert panel. For those 85 or older, the death certificates assigned coronary heart disease twice as often as the expert panel did. The authors found indications that, when doctors were uncertain of the cause of death, they listed the death as coronary heart disease. The difference between the cause given on the death certificate and by the panel was significantly less for cancer, where there was a longer history to go on.

The authors claim that this level of error has serious implications for studies which are carried out to test drugs for treatment of heart disease. They argue that it produces a bias towards the hypothesis that the drug is not effective.

The authors admit that there are problems in generalizing their results to other communities. In particular, there were very few minorities in the Framingham study. Also, the doctors may have been especially careful because they realized that the results were going to be scrutinized for the Framingham study. Another potential problem with the study is that the experts could also have made errors. However, the authors state that their results are consistent with previous studies and conclude that it is important to improve the process of filling out death certificates.

DISCUSSION QUESTIONS:

(1) What causes for death would you expect to be under-reported?

(2) For a given death, let E = 1 if the expert panel assigned coronary heart disease as the cause of death and E = 0 otherwise. Let D = 1 if the doctor assigned coronary heart disease as the cause of death on the death certificate and D = 0 otherwise. The authors call P(D = 1|E = 1) the sensitivity, and P(D = 0|E = 0) the specificity. From their data they find that the sensitivity = 83.8% and specificity = 84.1%. To illustrate the bias that errors in classification can produce in a medical trial, the authors consider a study for which the true risk for heart disease is 10% among the exposed group and 5% among the unexposed group and the sensitivity and specificity are the same as they found in their study. Then they claim that the relative risk would be calculated as 1.18, considerably smaller than the true relative risk of 2.0. How did they get the relative risk of 1.18? (One possible reconstruction is sketched after these questions.)

(3) Using the notation of the last question, the positive predictive value is P(E = 1|D = 1) and the negative predictive value is P(E = 0|D = 0). These were found to be 67.4% and 92.9% respectively. How do you interpret these values? Why are they so different? The authors found that the positive predictive value for men was bigger than that for women for all ages. Why do you think this was the case?

(4) In the absence of a high level of accuracy in death certificates, how might you design a procedure for correcting or partially offsetting the various biases?
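For question (2), here is one plausible reconstruction of the 1.18 figure, sketched in Python; we are assuming the authors applied the sensitivity and specificity directly to the true risks of coronary death in each group:

sens, spec = 0.838, 0.841
true_exposed, true_unexposed = 0.10, 0.05   # true risks of CHD death

def observed(true_risk):
    # true cases correctly coded, plus non-CHD deaths miscoded as CHD
    return true_risk * sens + (1.0 - true_risk) * (1.0 - spec)

rr = observed(true_exposed) / observed(true_unexposed)
print(round(rr, 2))   # 1.18, versus the true relative risk of .10/.05 = 2.0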
<<<========<<




>>>>>==============>
Statistics: Myths, monsters and maths
The Economist, 28 November 1998, p. 87

This article describes research by Charles Paxton of Oxford University, who is using statistics to estimate the number of as yet undiscovered saltwater species whose length exceeds 2 meters. By 1995, the number of such species identified had reached 217, but the rate of new finds has been decreasing. By examining the pattern of discoveries over the years 1830-1995, Paxton estimates that 47 such species remain to be found. His findings are being published in the latest issue of the Journal of the Marine Biological Association.

The article reports that Paxton is using a technique invented in 1943 by the famous statistician Ronald Fisher. The procedure is explained using the analogy of a large tin of assorted chocolates (suggesting a possible classroom activity for those who like generating edible data!). By repeatedly shaking the tin, removing a single chocolate, and noting its type, a pattern will begin to develop. In the early stages, many new types will be "discovered," but gradually the periods between new finds will begin to increase. The article describes the time series plot of the number of known types as approximately following a hyperbola. "Because there are only so many kinds of chocolate available for discovery, the hyperbola rises rapidly, but tends towards a fixed upper limit." Paxton's estimate comes from fitting such a curve to his data.
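The chocolate-tin analogy makes a nice classroom simulation. Here is a minimal sketch in Python (the number of types and of draws are our own, and we treat the tin as large enough that draws are effectively with replacement): new types are "discovered" rapidly at first, and the curve then flattens toward its upper limit, which is what the fitted curve estimates.

import random

def discovery_curve(n_types=40, draws=500):
    # Draw one chocolate at a time from a tin with n_types equally common
    # types; track the number of distinct types seen so far.
    seen, curve = set(), []
    for _ in range(draws):
        seen.add(random.randrange(n_types))
        curve.append(len(seen))
    return curve

curve = discovery_curve()
print(curve[9], curve[99], curve[499])   # rises quickly, then levels off near 40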

Underlying the procedure is the assumption that the sampling rate is constant over time. The article speculates that the decline of whaling might have decreased the number of opportunities to observe large species, in which case the figure of 47 would be an underestimate. On the other hand, this may be more than offset by the sophistication of modern capture techniques, so the sampling rate might actually be rising.

The article playfully refers to the large species Paxton investigated as "sea monsters" (hence its title). Indeed, the author speculates that Paxton's current research on fresh water species could lead to predictions of unknown creatures like the Loch Ness monster!

See the article "Hidden Truths" in Chance News 7.06 for further discussion of the problem of estimating the number of species.

DISCUSSION QUESTIONS:

(1) Does Paxton believe that exactly 47 remain? What is missing from the description in the article?

(2) Even without the details of the estimation procedure, does it make sense that changes in the sampling rate would bias the estimate in the direction indicated in the article?
<<<========<<




>>>>>==============>
Norton Starr also suggested the following series of articles on the relation between mathematics and statistics. We found it hard to believe that workers in the same field, namely statistics, can have such wildly different ideas about their subject. However, as Norton remarks, if you get as far as the discussion you will find some light at the end of the tunnel.

Statistics and mathematics - trouble at the interface?
P. Sprent, The Statistician (Journal of the Royal Statistical Society, Series D) 47:2 (1998) 239-244;

Breaking misconceptions - statistics and its relationship to mathematics, D. J. Hand. ibid., 245-250;

Mathematics: governess or handmaiden?, S. Senn. ibid., 251-260;

Statistics and mathematics: the appropriate use of mathematics within statistics, R. A. Bailey. ibid., 261-272;

Discussion on the papers on 'Statistics and mathematics', ibid., 273-290.

Is statistics a branch of mathematics? Do mathematical definitions, theorems, and proofs belong in statistics courses and journals? These issues are addressed in the four articles cited, whose texts were lectures at a meeting of the Royal Statistical Society in June, 1997.

Numerous examples are given both of mindlessly abstract rigor thrown at statistics students/practitioners and of the "unreasonable effectiveness of mathematics" in grounding and clarifying a statistical environment. An instance of the latter is Sir Ronald Fisher's use of finite abelian group theory in his work on factorial designs. It's further noted that the theory of irreducible characters allows the extension of his method of confounding to factorial designs of arbitrary sizes.

Although one has the impression that the authors may be speaking past one another without perceivable effect, their remarks crystallize complaints and views that must surely permeate both academic and industrial statistics. The extensive discussions that follow the four papers are, if anything, more intriguing than the papers themselves.

DISCUSSION QUESTION:

Reading these papers makes one think that many of the problems the authors discuss would go away if colleges and universities typically considered statistics a separate discipline like math or physics, with its own program or department. Do you think statistics is a separate discipline and should have its own home?
<<<========<<




>>>>>==============>
The risk to civilization from extraterrestrial objects
Chance Lecture, Dartmouth College, Dec. 11, 1998
Clark R. Chapman
Southwest Research Institute Boulder, Colorado

Clark Chapman received an undergraduate degree in Astronomy from Harvard, a Master's degree in Meteorology from M.I.T., and a PhD in Planetary Science from M.I.T. (1972). He is a leading researcher in planetary cratering and in the physical properties of the smaller bodies of the solar system (asteroids, comets, planetary satellites, the planet Mercury).

Chapman begins with the story of how the planets were formed and how this resulted in rocky and metallic debris being left over, moving around the sun in orbits of its own. The objects in orbit beyond the planets are mixtures of ice and dust and are called comets. Most of the remaining debris is rocky or metallic and orbits between Mars and Jupiter. This development is beautifully illustrated by drawings and photographs.

Concern about the hazards of objects from outer space hitting the Earth is quite recent. Chapman remarks that before 1800 peasants and others reported seeing objects fall from the sky, but these claims were not taken seriously (like flying saucers today). A shower of rocks that fell on the day of a scientific meeting in Italy in 1790 made believers out of the scientists. On the first day of the 19th century the first asteroid was discovered. Space exploration in the 60's showed that virtually every planet and moon in our solar system is covered with craters believed to be the results of impacts from extraterrestrial objects. In 1994 we were able to watch the spectacular Shoemaker-Levy comet crash into Jupiter, leaving a dark area in its atmosphere the size of the Earth.

Methods have been developed to determine if craters on the Earth and other planets and moons were the result of impacts by objects from space. It is also possible to determine when such impacts occurred. Studying those on the Earth has led to the widely held theory that the disappearance of the dinosaurs and indeed 2/3 of the species on the Earth 65 million years ago was the result of the impact of an asteroid or comet about 6 miles in diameter. About 200 impacts from extraterrestrial objects have been identified on the Earth's surface. The well-known Meteor Crater in Arizona is nearly a mile in diameter and believed to be the result of an iron meteorite weighing more than a million tons hitting the Earth about 50,000 years ago. It is believed that a 1908 explosion over the Tunguska River region of Siberia, resulting in the destruction of over 600 square miles of forest, was caused by the atmospheric impact of an asteroid.

The hazard to the Earth from extraterrestrial objects comes from the fact that objects in space can have their orbits perturbed, by chaotic dynamics or collisions, so that they cross the orbit of the Earth. Researchers estimate that they have identified 10% of the existing objects whose orbits will cross that of the Earth. The largest such object has a diameter of about 5 miles. None of these is expected to hit the Earth but, of course, nothing is known about the other 90%. The damage can be caused by an object exploding when it enters the Earth's atmosphere, as in the 1908 Tunguska explosion, or by its crashing into the Earth itself, and obviously depends on the size and energy of the object. There have been no documented deaths caused by such events, though the Tunguska explosion clearly would have killed many people had it occurred over a city rather than a forest. A dog was reportedly killed, and numerous roofs and cars have been damaged, by meteorites.

Chapman classifies the possible outcomes of an extraterrestrial object hitting the Earth into four categories:

(1) High altitude disintegration: negligible surface damage.

(2) Local effects: projectile explodes in atmosphere near Earth or hits the Earth causing localized damage such as the Tunguska explosion.

(3) Global effects: materials thrown into the atmosphere by the impact cause short-term climatic changes, resulting in global loss of food crops and leading to large-scale famine, disease, and possible breakdown of civilization.

(4) Mass extinction: many species are lost forever, and nearly all humans die.

The threat to an individual depends on the frequency of the event and how many people are expected to be killed by it, that is, on the expected number of people killed. In their article in Nature, Chapman and Morrison provide estimates and detailed calculations of the expected number killed for an even more detailed classification of objects. Based on this analysis they find that, in assessing an individual's risk, we get a good approximation by considering only events in the third category. A typical event in this category involves an object about a mile in diameter, releases about 2 million megatons of energy, occurs about once every 300,000 years, and can be expected to kill about 25% of the world's population, or about 1.5 billion people.

We view these events as a Poisson process with rate 1 per 300,000 years. Then the expected number of people killed in one year is 1.5 billion/300,000 = 5000. Thus the probability that a randomly chosen person is killed in a particular year is about 5000/6 billion, or 1 in 1.2 million. This translates into a 1 in 18,500 chance of being killed in a 65-year lifetime. Chapman and Morrison are responsible for the well-known statistic that the chance of being killed by an asteroid in a lifetime is about the same as that of being killed in an airplane accident.

What is the probability that you are killed in an airplane accident? Arnold Barnett addressed this question in his 1997 Chance Lecture, also available on the Chance web site. Barnett pointed out that there are many ways to estimate this probability, but the way he has found most meaningful is the following: choose a particular class of flights, for example, domestic jet flights in the United States, and assume that we choose a flight at random from all the flights in a given year. What is the probability that we will be killed on this flight? Data from recent years put this probability at about 1 in 7 million. Thus, if we make 6 such flights a year, we have 1 chance in 1.17 million of being killed in an airplane crash in a given year. If we do this for 65 years we have a 1 in 18,000 chance of being killed in an airplane crash --- close to the 1 in 18,500 chance of being killed by an asteroid. Chapman describes this by saying that our epitaph is about as likely to say that we died in an airplane accident as that we died by an asteroid.
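The arithmetic behind these two "about 1 in 18,000" figures is easy to check; a short sketch in Python:

# Asteroid: one category-3 impact per 300,000 years, killing 1.5 billion
# of the world's 6 billion people.
annual_asteroid = (1.5e9 / 300_000) / 6e9
print(1 / annual_asteroid)          # about 1.2 million: the annual odds
print(1 / (65 * annual_asteroid))   # about 18,500: odds over a 65-year life

# Flying: six flights a year, each with a 1 in 7 million chance of death.
annual_flying = 6 / 7e6
print(1 / (65 * annual_flying))     # about 18,000: odds over a 65-year life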

In his talk, Barnett discussed how he explains to those afraid of flying that the 1 in 7 million risk should not be taken seriously. He remarks that his first idea was to tell them that winning the Massachusetts state lottery is about three times as likely as being killed in a plane crash. This did not work, since they fully expected to win the lottery. He finally found that telling them how long they would have to live to expect to die in an airplane accident was more convincing. In our case this would be about 300,000 years.

People find this hard to accept. They might say: airplane accidents happen regularly, and people are killed in these accidents; there has never been a person killed by an asteroid hitting the Earth. In trying to make people take the asteroid threat seriously, the fact that you would have to wait about 300,000 years to have a reasonable chance of the event happening, which was so effective in Barnett's case, goes in the wrong direction in this case. On the other hand, comparing the odds of being killed by an asteroid in one year with those of winning the lottery (which did not work for Barnett in calming those afraid of flying, because they expected to win the lottery) should work just fine to convince people that the risk of being killed by an asteroid is real.

Chapman discussed his ideas on how people perceive such risks, as well as a variety of ways to explain why the risk is real and should be taken seriously. He remarked that, with proper lead-time, it might be possible to destroy or deflect an asteroid with a nuclear device. Congress has recommended that the feasibility of this be studied, but Chapman remarked that not much has come of this government initiative.

This is only a sketch of Chapman's fascinating story of hazards from outer space and you will surely enjoy watching the entire lecture.

DISCUSSION QUESTIONS:

(1) Are you convinced that your risk of dying from the impact of an asteroid is about the same as that of dying in an airplane accident? Why not?

(2) In discussing the feasibility of developing nuclear devices to deflect asteroids or comets, Carl Sagan remarked that such devices might pose an even bigger risk to mankind if used for other forms of destruction. What do you think about this argument?

(3) Do you think the government should aggressively support methods to discover and prevent impacts on Earth by extraterrestrial objects, even if it costs several billion dollars?

(4) How would you compare the reliability of the estimates of the risk of being killed in an airplane accident with those of being killed by an asteroid or comet?
<<<========<<




>>>>>==============>
Statistics in Sports
Chance Lecture, Dartmouth College, Dec 13, 1997
Hal Stern
Iowa State University

Hal Stern is Professor of Statistics in the Department of Statistics at Iowa State University. He received his PhD from Stanford University in 1987. Hal has research interests in theoretical and applied statistics, including Bayesian data analysis, paired comparisons and ranking, and applications to biology, the social sciences, and sports. He is the editor of Chance Magazine.

Hal begins his talk by explaining why sports is a natural topic to include in a statistics course. Sports are an important part of students' lives. Those on a team know the importance of studying records of their opponents' previous games to predict how their opponent will perform in different situations. Listening to a professional baseball game is a statistical lab in itself. Today's sports news provides tomorrow's statistical problem. See, for example, the article in the current issue of The Journal of Statistics Education (Vol. 6, No. 3) by Jeffrey S. Simonoff. Jeff provides the day-by-day data for the home-run race between Mark McGwire and Sammy Sosa and suggests interesting statistical problems relating to this data. Hal himself writes a column for Chance Magazine entitled "A Statistician Reads the Sports Pages" in which he discusses interesting statistical problems related to his own reading of current sports news.

In this talk Hal considers three quite different examples to show how probability and statistical theory, applied to real sports data, can enhance the understanding of a sport and help determine optimal game strategies. These examples are: the use of Markov chain theory to determine baseball strategies, the use of regression to rate college football teams, and estimating, at a particular point during the game, the probability that your team wins. We provide a brief description of his examples but you will get the whole story only by watching the video.

It's the World Series and the Yankees are playing the Dodgers. The Yankees are ahead 4 to 3 in the bottom of the ninth with the Dodgers at bat. The Dodgers have a man on first and no one out. The manager has to decide whether the next batter should make a sacrifice bunt. A successful bunt will put the runner on second with one out. Will this increase the chances of that runner scoring? How will it affect the chance of getting more than one run and winning the game?

To shed light on this question, Hal forms a Markov chain with states indicating the bases occupied and the number of outs. For example, state (13,1) means there are runners on 1st and 3rd and one out. There are 24 such states, with a 25th state corresponding to the end of the inning (3 outs). The transition probabilities are determined from the probabilities that the batter makes an out, single, double, etc., estimated from data.

Once we have the transition matrix, standard Markov chain techniques can be used to find the probability that at least one run will score, as well as the expected number of runs, starting from any state. In our Yankee-Dodger game we start in state (1,0), a man on first and no outs. If the manager calls for a sacrifice and it is successful, we move from state (1,0) to state (2,1). Starting in state (2,1), the Markov chain computations give us:

After a successful sacrifice the probability of scoring is .42 and the expected number of runs is .69.

With no sacrifice we are in state (1,0), and starting in this state the Markov chain computations give us:

With no sacrifice, the probability of scoring is .39 and the expected number of runs scored is .85.

Thus, if the manager's primary goal is to stay in the game, he should call for a sacrifice bunt. If he wants to go for a win in this inning he should ask the batter to go for a hit. Hal remarks that a manager's job is to make decisions on the basis of his (intuitive) estimates of such probabilities and expectations.
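A first-step analysis of this kind is easy to program. Below is a minimal sketch in Python with a drastically simplified model of our own (batters only make outs or hit singles, with made-up probabilities), just to show the mechanics of computing the probability of scoring from a given base-out state; Hal's computations use transition probabilities estimated from real data and all batting outcomes.

from functools import lru_cache

P_OUT, P_SINGLE = 0.7, 0.3   # toy probabilities, not estimated from data

@lru_cache(maxsize=None)
def p_score(bases, outs):
    # Probability that at least one run scores before the third out.
    # 'bases' is a frozenset of occupied bases; on a single, the batter
    # takes first and every runner advances two bases (so runners on
    # second or third score).
    if outs == 3:
        return 0.0
    p = P_OUT * p_score(bases, outs + 1)      # out: runners hold
    if 2 in bases or 3 in bases:
        p += P_SINGLE                          # a run scores on the single
    else:
        new_bases = frozenset({1} | ({3} if 1 in bases else set()))
        p += P_SINGLE * p_score(new_bases, outs)
    return p

print(p_score(frozenset({1}), 0))   # man on first, no outs: about .35
print(p_score(frozenset({2}), 1))   # man on second, one out: about .51

Even in this toy model the successful sacrifice raises the chance of scoring at least once, in line with Hal's conclusion, though of course the numbers differ from his.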

For his second example, Hal turns to Division I-A college football. This includes 112 teams, each playing 11 or 12 games. At the time of Hal's talk, Dec. 1997, the season had ended with two teams, Michigan and Nebraska, undefeated. Unlike other sports, these teams do not meet in a final series to determine the winner. However, teams are rated by polls of sports writers and by newspapers including the New York Times and U.S.A. Today. Hal shows us a simple way that statistics can contribute to determining a rating.

The idea is to try to find a rating of the teams that could be used to predict outcomes of games in their division. Let R(i) be the rating for team i and H be a constant representing the home-team advantage. We use our rating R to predict the final point-spread (home team - visitor) of a game between teams h (home team) and v (visiting team) by R(h) - R(v) + H. We choose the R(i)'s to, in hindsight, make these predictions look as good as possible over the set of all games played in the season just finished. We do this by minimizing the sum of the squares of the differences between the predicted point spread (R(h) - R(v) + H) and the actual point spread (score(h) - score(v)), summed over all games played in the season. That is, we determine the rating R by a regression model. Hal carried out these computations and found that Nebraska ranked in first place while Michigan ended up in 7th place, suggesting that Michigan won by smaller margins or played weaker opponents, or both.
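In matrix form this is an ordinary least-squares problem with one row per game. Here is a minimal sketch in Python with made-up teams and scores (not Hal's data); note that the ratings are only determined up to an additive constant, and lstsq simply picks one solution.

import numpy as np

teams = ["A", "B", "C"]
games = [("A", "B", 10), ("B", "C", 3), ("C", "A", -7)]  # (home, visitor, spread)

X = np.zeros((len(games), len(teams) + 1))
y = np.zeros(len(games))
for g, (home, visitor, spread) in enumerate(games):
    X[g, teams.index(home)] = 1.0      # +1 for the home team
    X[g, teams.index(visitor)] = -1.0  # -1 for the visitor
    X[g, -1] = 1.0                     # constant column for home advantage H
    y[g] = spread

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
ratings, H = coef[:-1], coef[-1]
print(dict(zip(teams, ratings.round(2))), "H =", round(H, 2))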

Having used the idea of hindsight prediction to determine ratings, Hal shows us that the ratings can, in fact, be used to predict the outcome of future games. He determines the ratings using results of the first half of the season and then uses these ratings to predict the winner for games in the second half of the season, updating the ratings as he goes along. Trying this out on professional football games, Hal was able to predict the winner in the second half of the season 63.6% of the time, compared to about a 66.3% record for the Las Vegas experts -- you can never quite beat the pros!

A friend once suggested that we might as well simply start professional basketball games in the fourth quarter with the teams tied 100 to 100, since the first three quarters never seem to have much effect on who wins. On the other hand, our own feeling is that the situation in professional baseball is just the opposite: instead of stretching after the seventh inning we might as well just go home. We know who's going to win.

For his final application, Hal checks out such claims by considering the probability of winning computed at different times during the game. He first looks at the data. He finds that the team ahead at the end of the third quarter (seventh inning) won the game 82% of the time for basketball, 89% of the time for baseball and 85% of the time for football. Thus we see that there is a germ of truth to our folklore, in that the first 3/4 of the game has the least effect in basketball and the most effect in baseball; but it would hardly support playing only the fourth quarter of basketball or leaving a baseball game after the seventh inning!

Of course we should be able to make a better prediction about the winner by using the score at a particular time in the game. Hal shows how this could be done for professional basketball. He finds, in this case, that the distribution of point-spread (home team - visitor) at the end of each quarter is approximately normal with mean value about .5 in the first three quarters and .2 in the final quarter and, in each case, standard deviation around 7.5. Hal uses this to develop a model which gives the probability the home team wins given the point-spread at any time during the game. Using this model we can see graphically the probability that the home-team wins given the score at any particular time.
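A sketch of this kind of model is below, in Python, using Hal's approximate figures (per-quarter changes in the spread roughly normal with standard deviation 7.5; for simplicity we use a mean of .5 for every quarter). Treating the remaining quarters as independent normal increments gives the home team's chance of winning from any score and time.

from math import erf, sqrt

def p_home_win(lead, quarters_left, mu=0.5, sigma=7.5):
    # Each remaining quarter adds an independent normal(mu, sigma)
    # increment to the (home - visitor) point spread.
    if quarters_left == 0:
        return 1.0 if lead > 0 else 0.0
    m = lead + quarters_left * mu
    s = sigma * sqrt(quarters_left)
    return 0.5 * (1.0 + erf(m / (s * sqrt(2.0))))

print(round(p_home_win(0, 4), 2))    # tied at tip-off: .55, the home edge
print(round(p_home_win(-5, 1), 2))   # home team down 5 entering the fourth: .27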

In this video Hal will show your students that statistical theory has immediate application to one of their favorite subjects: sports.

DISCUSSION QUESTIONS:

(1) Do you think managers and coaches would be willing to use mathematical models to improve their understanding? Do you think they should?

(2) Hal mentioned that understanding the difficulty of showing that there really are streaks in sports, say in basketball, increases our understanding of the sport. If this is true, should it affect the way the game is played? How?

(3) On the homepage of the American Mathematical Society you will find a link to Ivars Peterson's column, where you will find an article "Who's really no. 1?". In this article Peterson discusses a new attempt by officials of the Bowl Championship Series (BCS) to pick two contenders for number one. Read his article and some of the references which discuss the results for this year. Give your opinion of the success of the BCS formula.
<<<========<<




>>>>>==============>

Chance News
Copyright © 1998, 1999 Laurie Snell

This work is freely redistributable under the terms of the GNU General Public License as published by the Free Software Foundation. This work comes with ABSOLUTELY NO WARRANTY.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

CHANCE News 8.01

(10 December 1998 to 20 January 1999)

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!