Prepared by J. Laurie Snell, Bill Peterson, and Charles Grinstead, with help from Fuxing Hou and Joan Snell.
Please send comments and suggestions for articles to
jlsnell@dartmouth.edu.
Back issues of Chance News and other materials for teaching a Chance course are available from the Chance web site:
Chance News is distributed under the GNU General Public License (so-called 'copyleft'). See the end of the newsletter for details.
Chance News is best read using Courier 12pt font.
===========================================================
Take a Chance on Christmas
Laurie Snell
===========================================================
Contents of Chance News 7.11
<<<========<<
>>>>>==============>
Recent gems from "Forsooth!", a column in RSS News, a publication
of the Royal Statistical Society. These were taken from the
November, 1998 issue, page 19.
<<<========<<

It's official: the age of a car does affect the driver's chance of surviving a serious accident...

Ages of vehicles involved in fatal and serious accidents in the Newcastle area since 1992:

cars built pre-1970       1.9%
           1971-75        8.6%
           1976-80       16.3%
           1981-85       30.3%
           1986-90       25.4%
           1991-95       15.4%
           1996 to date   2.1%

[Isn't it amazing that cars built since 1996 have been in relatively few accidents since 1992?]
The Newcastle Herald (Australia) 11 April 1998
---------------------

Kellogg's Nutri-Grain: 92% Fat Free
Claim on wrapper
---------------------

Of those aged more than 60 living alone, 34% are women and only 15% are men.
Canberra Times 6 September 1998
CHICAGO (Reuters) - Republican George Ryan held on to a double-digit lead over Democratic Rep. Glenn Poshard in the Illinois governor's race, but his support had dwindled over the past two weeks, a poll published Wednesday said. Ryan, Illinois' secretary of state, led Poshard 48% to 37% ahead of the Nov. 3 election, based on the Mason-Dixon Political/Media Research poll of 823 likely voters conducted Oct. 24-25 and sponsored by Copley Newspapers. Two weeks ago, Copley's poll showed Ryan leading Poshard 51% to 36%. The polls had a margin of error of 3.5 points.

DISCUSSION QUESTION:
Uncle John replies to Barry that the difference between the candidates' percentages went from 15% to 11%, and that this change of 4% is more than the margin of error, 3.5%, and so is significant. Should Barry again say: "Er, come again?"
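For readers who want to explore Uncle John's reasoning numerically, here is a minimal sketch of our own (not part of the Reuters story) that uses the reported sample size of 823 to estimate the usual 95% margin of error for a single candidate's percentage, for the lead within one poll, and for the change in the lead between two independent polls.

# Rough 95% margins of error for the Illinois poll figures quoted above.
# This is our own back-of-the-envelope sketch, not from the article.
from math import sqrt

n = 823          # likely voters in each Mason-Dixon poll
z = 1.96         # 95% confidence multiplier

def moe_single(p, n):
    "Margin of error for one candidate's proportion."
    return z * sqrt(p * (1 - p) / n)

def moe_lead(p1, p2, n):
    "Margin of error for the lead p1 - p2 within a single poll."
    # Var(p1_hat - p2_hat) = [p1 + p2 - (p1 - p2)**2] / n for multinomial counts
    return z * sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

print(round(100 * moe_single(0.48, n), 1))            # about 3.4 points: the quoted 3.5
print(round(100 * moe_lead(0.48, 0.37, n), 1))        # about 6.3 points for the lead itself
# Change in the lead between two independent polls of about the same size:
print(round(100 * sqrt(2) * moe_lead(0.48, 0.37, n), 1))   # roughly 9 points

Whether these figures settle the matter is, of course, part of the discussion question.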
<<<========<<

DISCUSSION QUESTION:
Microsoft announced that "ClearType improves display resolution by as much as 300 percent." Do you think they mean "improves resolution by a factor of three"? If so, what percentage improvement should they claim?
<<<========<<

If one asks a statistician or probabilist what the phrase "six sigma" makes them think of, the answer that most would give involves the tail(s) of a normal distribution, or perhaps more generally, a very small probability. This article describes how "six sigma" has been appropriated by the business community to refer to a method and a mindset in the area of quality control.
In the Six Sigma model, if one wants to increase the probability that a product is defect-free, one first determines all of the steps that are currently used to produce that product. The product that is used as an example in the article is something called a diagnostic scanner, which is a device used in medicine to produce images of the inside of a human body. This scanner, as might be imagined, is very complicated to design and build. In an effort to increase the quality and longevity of the product, engineers at General Electric Medical Systems (Gems), broke down their existing scanner into its component parts, and considered how to design each part so that it was more reliable.
The next step in the Six Sigma model consists of considering how much the improvement of each part's design helps the overall quality of the finished product. Using computers, one can make trial runs in which various possibilities for improving certain parts are assumed, and then the change in overall quality is computed. By making many such computer runs, one can determine on which parts it is worth concentrating one's attention.
In the case of the scanner, these computer runs showed that there were several different ways to obtain the same change in overall quality of the product. Not surprisingly, some of these ways were more expensive and/or more time-consuming than others. Given these comparisons, the engineers were able to make more informed choices about which parts to concentrate on, thus cutting both production time and costs.
In all of the descriptions that we could find of this model, the "six sigma" phrase was said to correspond to a production process that produces no more than 3.4 defects per million. If one calculates the probability that a normal random variable takes on a value at least 6 standard deviations from the mean, one obtains the value 2 per billion. For a one-tailed test this would be 1 per billion. Both of these answers are much smaller than 3.4 out of one million. We were worried, as you might also be, by this discrepancy. Various explanations, none very enlightening, were found in the literature on this subject. The following explanation seemed to us to be somewhat believable.
The six sigma approach deals with the production process, and does not take into account the fact that raw materials used in this process are subject to variations, which in many cases affect the degree of variability of the finished product. It is claimed that a value of 1.5 sigma is a good one to assume for the degree of variability of many types of inputs. (Don't worry: if the last sentence is murky to you, it's no murkier to you than it is to us!) In any event, this means that one is really striving for 4.5 sigma quality (4.5 = 6 - 1.5) when one says six sigma. When we calculate the probability that a normal random variable exceeds its mean by more than 4.5 standard deviations, we obtain the figure of 3.4 out of one million.
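The tail probabilities quoted above are easy to check. Here is a small sketch of our own that uses the standard normal upper tail P(Z > z) = erfc(z/sqrt(2))/2:

# Check of the normal tail probabilities used in the six sigma discussion.
from math import erfc, sqrt

def upper_tail(z):
    "P(Z > z) for a standard normal Z."
    return 0.5 * erfc(z / sqrt(2))

print(2 * upper_tail(6.0))   # about 2.0e-9: 2 per billion, at least 6 sigma from the mean
print(upper_tail(6.0))       # about 1.0e-9: 1 per billion, one tail only
print(upper_tail(4.5))       # about 3.4e-6: the advertised 3.4 defects per million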
DISCUSSION QUESTION:
(1) In the article, the following statement is made: "Sigma is the Greek letter that statisticians use to define a standard deviation from a bell curve. The higher the sigma, the fewer the deviations from the norm -- or, in industry parlance, the fewer the defects." Comment on the error(s), if any, in this statement.
<<<========<<

Wiseman describes two experiments in the UK conducted through the mass media by an initiative called "MegaLab UK." The first, called "The Truth Test," tried to determine through which medium -- television, print, or radio -- people can detect lies best. The second, called "The Justice Experiment," examined whether a jury's decision is affected by a defendant's appearance. The larger aim of the article is to applaud examples of "helping to communicate scientific methods to the public."
For The Truth Test, a well-known British political commentator was interviewed twice on the same topic. In one interview he consistently lied; in the other he consistently told the truth. Both versions were broadcast on British television and radio, and transcripts of both were printed in the Daily Telegraph. The public was then asked to identify the lies for each medium, and 41,000 people responded (by phone). The results: lies were detected by 73.4% of the radio audience, 64.2% of the newspaper readers, and 51.8% of the television viewers.
In the Justice Experiment, two versions of a film summing up a court case were broadcast on television: one in which the defendant was played by an actor who was deemed (by an independent panel) to resemble the stereotype of a criminal, and one in which the defendant was portrayed by an actor not resembling the stereotype. The two versions were identical in other respects. Of 64,000 people phoning in, 41% found the criminal stereotype guilty, versus 31% for the non-criminal type. A variant of this experiment was conducted in the Daily Telegraph using different photos of the "defendant," and here the criminal type was chosen 33% of the time versus 27% for the non-criminal type.
DISCUSSION QUESTIONS:
(1) The article states that in the "Truth Test," "all three groups could detect lies at above chance levels..." Is this true? In addition, "there were hugely significant differences between the detection rates of the three groups." What does this mean?
(2) In the "Justice Experiment", the article states that "there was a difference of a highly statistically significant 10% simply because of the defendants appearance." Do you agree?
(3) Why do you think the percentages for the print version of the "Justice Experiment" differed from the television version?
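For question (2), here is a rough sketch of the usual two-proportion z test. The article does not say how the 64,000 callers split between the two television versions, so the 50/50 split below is our assumption, and of course the callers were self-selected rather than randomly sampled.

# Two-proportion z test for the Justice Experiment television figures.
# The even split of the 64,000 callers is our assumption, not from the article.
from math import sqrt, erfc

n1 = n2 = 32000          # assumed callers per version
p1, p2 = 0.41, 0.31      # guilty verdicts: criminal-looking vs. other defendant

se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p1 - p2) / se
p_value = erfc(z / sqrt(2))   # two-sided

print(round(z, 1), p_value)   # z around 26, p-value essentially zero

Whether a p-value means much for a self-selected phone-in audience is the real question.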
<<<========<<

The argument over the methods to be used in the 2000 Census reached the Supreme Court recently. The Court agreed to hear arguments in two challenges to the Administration's plan to use statistical sampling in an attempt to make the Census more accurate. One of these challenges has been brought by the House of Representatives, and the other by a group of individuals and local governments.
The arguments made by both sides were greeted with some skepticism by the justices. It was suggested that the House had not shown that it would be harmed by the Administration's proposal and therefore lacked the legal standing to bring the suit. Justices Antonin Scalia and David Souter wondered whether a resolution of this suit by the Supreme Court would invite other squabbles between the executive and legislative branches to spill over to the Supreme Court.
Other justices challenged the assertion of the plaintiffs' lawyer that the Constitution does not allow the Census Bureau to determine the number of people living in an apartment building, for example, without counting them. "If people do not respond, must census-takers write down 'zero'?" Justice John Stevens asked. The answer given by the plaintiffs' lawyer was "Yes." Justice Stephen Breyer added "Even if the lights go on and off in the evening?"
DISCUSSION QUESTIONS
(1) Suppose that the only purpose of the census was to determine the number of people in the United States. How would you propose this estimate be made?
(2) What answer do you think Justice Breyer got to his last question?
<<<========<<

The College Board has completed a study of whether coaching improves one's SAT scores. There has been a long-running debate over whether students can improve their SAT scores by taking courses, such as those offered by Kaplan Educational Centers or Princeton Review. Kaplan has stated that the average increase in one's SAT scores after taking their course is 120 points (out of 1600 possible points), while Princeton claims an average increase of 140 points.
The College Board has long maintained that their tests are objective measures of a student's academic skills (whatever that means), and that preparation courses, such as those offered by the companies mentioned above, do not improve a student's score. It should be noted here that the College Board itself publishes preparatory material for the tests, maintaining that familiarity with the test styles improves scores.
This debate is of some importance in relation to minority college admissions. If, in fact, one can significantly improve one's scores through coaching, then people who can afford to pay for coaching would have an unfair advantage over people who are less well off.
Attempts to determine who is right using statistics are faced with several complications. First, the set of people who choose to take preparation courses is self-selected. Second, those who choose to enroll in such courses seem to be more likely to employ other strategies, such as studying on their own (wow! what a concept!) to help them get a better grade. Third, it is likely that if one takes the SAT test several times, one's scores will vary to a certain extent.
The results of the College Board study, which was undertaken by Donald E. Powers and Donald A. Rock, are that students using one of the two major coaching programs were likely to experience a gain of 19 to 39 points more than those who were uncoached. We note that this is much less than was claimed by these coaching services (see above). The study concludes that there was no significant improvement in scores due to the coaching.
We will now attempt an explanation of why the difference in the gains mentioned above is statistically insignificant. In fact, the College Board claims that the test has a standard error of 30 points. To understand what this means, suppose we compute, for each student who takes the SAT more than once, the difference between his or her first and second SAT scores. Then the data set of all such differences has a sample standard deviation of 30 points. This means that the difference in the average gains for coached and uncoached students is about the same as the standard error of the test.
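To see what a 19-to-39 point gain looks like next to a 30-point standard error, here is a small simulation of our own. The normal model and the particular numbers are illustrative assumptions, and we follow the article's interpretation of the standard error as the typical spread of retest differences.

# How a coaching gain of roughly 30 points compares with retest noise of 30 points.
# The normal model and the numbers below are our illustrative assumptions.
import random

random.seed(1)
sd = 30          # standard error of the test, as quoted by the College Board
gain = 29        # midpoint of the 19-39 point coaching effect in the study

uncoached = [random.gauss(0, sd) for _ in range(100000)]
coached = [random.gauss(gain, sd) for _ in range(100000)]

# Fraction of uncoached retakers whose score change already exceeds the coaching gain:
print(sum(d > gain for d in uncoached) / len(uncoached))   # roughly 0.17
# Fraction of coached students who nevertheless score no higher than before:
print(sum(d <= 0 for d in coached) / len(coached))          # roughly 0.17

For an individual student, then, a gain of this size is hard to distinguish from ordinary retest variation, which is the point being made above.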
DISCUSSION QUESTIONS:
(1) How do you think they actually carried out this study?
(2) How big a problem do you think the self selection is? Could it be avoided?
<<<========<<

This is John Paulos' latest book. It aims to bridge the gap between the process of story telling and the process of formal logic and mathematics. He does this using examples from probability and statistics. Paulos spoke about his book at the Second Annual Chance Lectures this week and his talk will be available to you soon from our web site. You can also read an interesting article about Paulos and his new book at abcnews.
Paulos approaches the relation between stories and mathematics at several different levels and points of view. The most familiar to us is that the solution to a problem in probability or statistics often depends on the context or the story that gave rise to the problem. This is an important idea in the statistics reform movement and is stressed in most modern statistics textbooks. The Chance course is an extreme example where the whole course is based on studying concepts from probability and statistics in the context of current news.
As Paulos remarks, most of the great discoveries of mathematics appear first in story form. One of the charming aspects of older accounts of mathematics was that these stories were often included in the publication of the mathematical results. For example, James Bernoulli writes of how he thought about his celebrated "law of large numbers" before he proved it. Here are two quotations from his treatise Ars Conjectandi, in which his proof of the law of large numbers appears.
In speaking of the number of trials necessary to make a judgment about the probability of an event, Bernoulli writes:
Further, it cannot escape anyone that for judging in this way about any event at all, it is not enough to use one or two trials, but rather a great number of trials is required. And sometimes the stupidest man--by some instinct of nature per se and by no previous instruction (this is truly amazing)--knows for sure that the more observations of this sort that are taken, the less the danger will be of straying from the mark.
But he goes on to say that he must contemplate another possibility.
Something further must be contemplated here which perhaps no one has thought about till now. It certainly remains to be inquired whether, after the number of observations has been increased, the probability is increased of attaining the true ratio between the number of cases in which some event can happen and in which it cannot happen, so that this probability finally exceeds any given degree of certainty; or whether the problem has, so to speak, its own asymptote--that is, whether some degree of certainty is given which one can never exceed.
Census 2000 is a current example of a statistical problem that has to be understood in the context of current affairs. The Census has become a political story, as the Republicans demand that sampling not be used and the Democrats that it be permitted. This debate has led to delicate arguments about the meaning of the word "enumerate" as it appears in the Constitution and has led Congress to ask the Supreme Court to rule on the legitimacy of the Census Bureau using sampling in counting the population in the year 2000. This issue would likely not even be raised by a group of statisticians simply given the problem of counting how many people there are in the United States. In this example, we can say very little about how this gap will be bridged, except that it will have to be bridged by April Fool's Day 2000.
The relation between the story and the mathematical problem that it suggests can be very subtle. Consider the following variation of the two-girl problem, all too familiar to Chance News readers and considered by Paulos in his book "Innumeracy." You are told that a family has two children, one named Myrtle. What is the probability that Myrtle's sibling is a girl? Suppose we formalize this into the probability problem: given that a family is chosen at random from the set of all families with two children and there is a girl in the family, what is the probability that the other child is also a girl? This is now a well-defined elementary probability problem, with answer 1/3. Is this a reasonable answer for the story question? We have assumed that the family is randomly chosen from the families with two children, which is not part of the story. We have also ignored the knowledge that the family has a child named Myrtle, which is part of the story. If we try to take Myrtle into account in our mathematical version of the story, we find that we need to know the probability that a family would name a girl Myrtle, and then we find that the probability that Myrtle's sibling is a girl depends on this probability, so our 1/3 answer is no longer correct.
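A simulation makes the dependence on the naming probability concrete. In the sketch below (our own, not Paulos's), each girl is independently named Myrtle with a small assumed probability m. Conditioning only on "there is a girl" gives about 1/3, while conditioning on "there is a girl named Myrtle" works out, under these assumptions, to (2 - m)/(4 - m), which is close to 1/2 for a rare name.

# Two-girl problem with and without the name Myrtle taken into account.
# The naming probability m is an illustrative assumption.
import random

random.seed(0)
m = 0.01                 # chance a given girl is named Myrtle (assumed)
trials = 1_000_000

some_girl = both_girls_given_girl = 0
myrtle = both_girls_given_myrtle = 0

for _ in range(trials):
    kids = [random.random() < 0.5 for _ in range(2)]      # True = girl
    names = [g and random.random() < m for g in kids]     # True = girl named Myrtle
    if any(kids):
        some_girl += 1
        both_girls_given_girl += all(kids)
    if any(names):
        myrtle += 1
        both_girls_given_myrtle += all(kids)

print(both_girls_given_girl / some_girl)     # about 1/3
print(both_girls_given_myrtle / myrtle)      # about (2 - m)/(4 - m), close to 1/2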
In the last example we went from the story to a mathematical problem. Paulos gives examples going in the other direction. He suggests that understanding the solution to a mathematical problem can give insight into how life stories might go. As an example, he discusses Scarne's argument that it is impossible to order poker hands consistently when you introduce wild cards (see Chance News 6.01). Scarne argued that, if you are playing with deuces wild and keep the traditional order of the hands, players holding a pair will use a wild card to make three of a kind rather than two pair, because three of a kind is a more valuable hand than two pair. But then three of a kind will be more likely than two pair, suggesting that two pair should beat three of a kind. But if you make this rule change, players will use a wild card to form two pair, and now two pair will be more likely than three of a kind. Paulos relates this to human behavior by suggesting that we are often drawn to a goal because it is considered unlikely to be achieved. Greater striving increases the probability of achieving it, which makes it less valuable and removes its appeal, so that fewer people attempt to achieve the goal.
Paulos also emphasizes that the way we think about a story is often very different from the way we think about mathematics. We illustrate this again by an example familiar to Chance News readers but not actually considered in this book. This is one of Kahneman and Tversky's most famous examples, the Linda problem. Subjects are told the story:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student she was deeply concerned with issues of discrimination and social justice and also participated in anti-nuclear demonstrations.
They are asked to rank the following statements by their probabilities.
Linda is a bank teller.
Linda is a bank teller who is active in the feminist movement.
Kahneman and Tversky find that about 85% of the subjects rank "Linda is a bank teller and is active in the feminist movement" as more probable than "Linda is a bank teller." (Judgment under Uncertainty: Heuristics and Biases, Kahneman, Slovic, and Tversky, Cambridge University Press, 1982, p. 496.) They called this the "conjunction effect" but later it became known as the "conjunction fallacy," as if there were something wrong with the subjects' responses. But notice that the subjects were given a story, not a mathematical problem. There was no particular incentive for them to change it into a mathematical problem. They might rather view it as a detective story where, as you get more information about the suspect, the likelihood that the suspect is guilty increases. They might also follow Peter Doyle's advice and ask: why am I being told this story? Surely not to give the answer "Linda is a bank teller." Kahneman and Tversky explain these subjects' behavior in terms of their representativeness heuristic, which says that subjects make the choice that best represents the actual situation. Again, this applies to the story and might be reasonable there, even though it obviously leads to problems in a mathematical formulation of the story.
As in his previous books, Paulos gives a huge number of examples, always described in a lively manner and often with a good deal of humor thrown in to emphasize a point. For example, we liked the following:
Applications of mathematics are subject to critique and dispute but not its theorems. Our failure to appreciate this fact may lead us to a fate similar to that of the now extinct tribe of hunters who, being experts on the theoretical properties of arrows (vectors), would simultaneously aim arrows northward and westward whenever they spotted a bear to the northwest.
One of the goals of artificial intelligence is to have computers interact directly with people and their stories rather than with other machines and mathematical models. This has led logicians, such as Jon Barwise (Paulos' thesis advisor) and Keith Devlin, to develop a logical structure to deal directly with stories.
Paulos devotes a chapter to these efforts. He describes the difference between extensional logic, used in science and mathematics, and intensional logic, used in everyday life. Paulos points out that in extensional logic, sets are determined by their members, while in intensional logic this is not true. In mathematical arguments we can freely substitute the square root of 9 for the number 3. In real life, even though Lois knows that Superman can fly, Lois does not know that Clark Kent can fly.
When we talk about streaks in sports, our statistical results are possible because we can substitute the outcomes of tosses of a coin, heads and tails, for the hits and misses when a basketball player takes a shot. However, students find this hard to understand because they know when they are "in a zone" and cannot imagine a coin "in a zone."
In his last chapter Paulos returns to the idea that big ideas in mathematics could play an important role in attempting to formalize a mathematics of stories. For example, since stories convey information it would be expected that Shannon's information theory could be adapted to a more general theory of stories.
In the same way, the concept of complexity developed by Chaitin and Kolmogorov to provide a definition for a random sequence (see Chance News 6.07) should help in formalizing the complexity of stories. A sequence of 0s and 1s can be completely ordered, in the sense that it has a very succinct description. It can also look completely random, in the sense that it has no discernible patterns. Finally, there are some sequences that seem to have some regularities but are nonetheless somewhat complex. Paulos claims that stories might be thought of as corresponding to sequences of the third type, in that they contain "order and redundancy, as well as complexity and disorder."
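As a crude stand-in for Chaitin-Kolmogorov complexity, one can look at how well a sequence compresses. The little sketch below is our own illustration, not Paulos's: it compares a perfectly periodic 0-1 sequence, a "mostly periodic" sequence with occasional random flips, and a completely random one, using compressed length as a rough proxy for complexity.

# Compressed length as a rough, practical proxy for the complexity of a 0-1 sequence.
# zlib is only an illustration; Kolmogorov complexity itself is not computable.
import random, zlib

random.seed(0)
n = 10000

ordered = "01" * (n // 2)
noisy = "".join("01"[i % 2] if random.random() < 0.9 else random.choice("01")
                for i in range(n))
rand = "".join(random.choice("01") for _ in range(n))

for label, s in [("ordered", ordered), ("ordered with noise", noisy), ("random", rand)]:
    print(label, len(zlib.compress(s.encode(), 9)))
# The ordered string compresses to almost nothing, the random string compresses
# least, and the noisy one lands in between -- the third type of sequence above.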
We hope our brief remarks have convinced you that there is much to think about in this elegant book.
<<<========<<

This article was written before the election, but we will discuss it with the election results now in hand.
The Harris Poll tried an experiment in the November 3, 1998 elections. They used the Internet to poll prospective voters in elections. Of course this goes against everything we have told our students about the need for random samples. However, George Terhanian, director of Internet research at Harris, believed that he could adjust for the non-randomness of the samples.
Terhanian has built a database of 1.3 million Internet users, mostly from on-line sweepstakes sponsored by the on-line advertising firm Matchlogic, recently bought by Excite.
These people have agreed to answer political surveys by e-mail and have provided information about their age, income, and state of residence.
According to the article: Terhanian believes that Internet users are more like the U.S. populace than generally is understood. Since 1995, he's tracked Internet use through the Harris Poll. Each month, Harris phones a random sample of Americans to ask questions about everyday life. These polls show that women are on-line almost as much as men and that Blacks constitute 11 percent of the Internet users, close to their 13 percent share of the U.S. population.
For the California election, 80,000 Net users agreed to participate in the Harris Poll. Terhanian selected 5,375 of these, chosen to reflect the state's population.
Mark DiCamillo, who runs the California Field Poll, said that he has been skeptical because of the concern that the Internet users would not be representative of the population. Presumably he had this concern even when comparing users and non-users with the same demographic characteristics. However, he remarked that, if their methods are successful, it would open up a new frontier. He said that cost and speed favor the Net. It takes six days and $50,000 for Field to do a telephone poll, while e-mail surveys can be done overnight for a fraction of the cost.
We made a comparison between the outcomes of the Harris Internet polls and the more traditional polls. You will see that the Harris poll did about as well as the standard polls. We asked David Moore from Gallup about this, and he remarked that the Harris electronic poll is very similar to the quota sampling method that was used quite successfully in the 30's and 40's in this country. It was used in Great Britain before the last election and is still used in other countries. He also made the following interesting historical comment:
Although the 1948 election fiasco is often mentioned in the same breath as "the next election Gallup switched to probability sampling," in fact the 1948 error was due to last-minute changes (actually, in the last 10 days!!) in voter preference (i.e., polling stopped too early) rather than to a faulty sample.
Here are the comparisons of the Harris electronic polls and the standard polls. "Other" means undecided or another candidate. We were helped in this by poll results forwarded by the Council for a Livable World and information available from the CNN web site.
Arizona Governor                 Hull(R)    Johnson(D)   Other
Harris(10/19-21)(732)              55          35          10
Harris(10/29-31)(850)              55          38           7
Beh. Res.(10/12-15)(519)           53          31          16
Final                              61          36           3

Arizona Senator                  McCain(R)  Ranger(D)    Other
Harris(10/19-21)(732)              66          18          16
Harris(10/29-31)(850)              64          27           9
Beh. Res.(10/12-15)(519)           60          18          22
Final                              69          28           3

California Governor              Lungren(R) Davis(D)     Other
Harris(10/19-21)(4,531)            39          50          11
Harris(10/29-31)(5,735)            42          53           5
Field(10/22-27)(678)               39          53           8
Field(10/26-28)(809)               43          50           7
Final                              38          58           4

California Senator               Fong(R)    Boxer(D)     Other
Harris(10/19-21)(4,531)            45          46           9
Harris(10/29-31)(5,735)            46          48           6
Field(10/22-27)(678)               42          51           7
Field(10/26-28)(809)               45          43          12
Final                              43          53           4

Florida Governor                 Bush(R)    McKay(D)     Other
Harris(10/19-21)(2,295)            56          40           4
Harris(10/29-31)(2,819)            58          41           1
IMR of Tampa(10/22-27)(670)        51          40           9
Mason-Dixon(10/26-28)(822)         51          43           6
Final                              55          45           0

Florida Senator                  Crist(R)   Graham(D)    Other
Harris(10/19-21)(2,295)            31          60           9
Harris(10/29-31)(2,819)            39          61           0
IMR of Tampa(10/22-27)(670)        25          61          14
Mason-Dixon(10/26-28)(822)         31          59          10
Final                              37          63           0

Georgia Governor                 Millner(R) Barnes(D)    Other
Harris(10/19-21)(1,174)            47          42          11
Harris(10/29-31)(1,408)            50          43           7
Mason-Dixon(10/26-28)(809)         46          43          11
Final                              44          53           3

Georgia Senator                  Coverdell(R) Coles(D)   Other
Harris(10/19-21)(1,174)            53          37          10
Harris(10/29-31)(1,408)            56          38           6
Mason-Dixon(10/26-28)(809)         51          38          11
Final                              52          45           3

Illinois Governor                Ryan(R)    Poshard(D)   Other
Harris(10/19-21)(1,209)            55          37           8
Harris(10/29-31)(847)              55          41           4
Market Shares(10/17-20)(1,099)     49          34          17
Mason-Dixon(10/24-26)(813)         48          37          15
Final                              51          47           2

Illinois Senator                 Fitzgerald(R) Mosley-Braun(D) Other
Harris(10/19-21)(1,209)            52          38          10
Harris(10/29-31)(847)              58          39           3
Market Shares(10/17-20)(1,099)     48          38          14
Mason-Dixon(10/24-26)(813)         49          41          10
Final                              51          47           2

Indiana Senator                  Helmke(R)  Bayh(D)      Other
Harris(10/19-21)(831)              27          63          10
Harris(10/29-31)(977)              33          64           3
Mason-Dixon(10/11-13)(804)         22          67          11
Mason-Dixon(10/25-27)(819)         33          58           9
Final                              35          64           1

Massachusetts Governor           Cellucci(R) Harshbarger(D) Other
Harris(10/19-21)(907)              56          36           8
Harris(10/29-31)(1,083)            54          43           3
RKM(10/19-20)(402)                 46          41          13
Globe(10/27-28)(400)               46          41          13
Final                              51          47           2

Michigan Governor                Engler(R)  Fieger(D)    Other
Harris(10/19-21)(1,535)            61          36           3
Harris(10/29-31)(1,855)            62          38           0
Mitchel Res.(10/20-23)(600)        62          30           8
Final                              62          38           0

New York Governor                Pataki(R)  Vallone(D)   Other
Harris(10/19-21)(1,792)            48          25          27
Harris(10/29-31)(2,243)            49          27          24
Quinnipiac Col.(10/20-25)(547)     57          24          19
Mason-Dixon(10/26-28)(808)         54          29          17
Final                              54          33          13

New York Senator                 D'Amato(R) Schumer(D)   Other
Harris(10/19-21)(1,792)            46          44          10
Harris(10/29-31)(2,243)            46          50           4
Quinnipiac Col.(10/20-25)(547)     44          48           8
Mason-Dixon(10/26-28)(808)         43          46          11
Final                              45          54           1

North Carolina Senator           Faircloth(R) Edwards(D) Other
Harris(10/19-21)(1,163)            38          47          15
Harris(10/11-138)(1,418)           39          54           7
Mason-Dixon(10/11-13)(836)         45          43          12
Mason-Dixon(10/26-28)(827)         44          43          13
Final                              47          51           2

Ohio Governor                    Taft(R)    Fisher(D)    Other
Harris(10/19-21)(1,793)            50          39          11
Harris(10/29-31)(2,149)            52          42           6
U. of Cinn.(10/22-26)(754)         51          35          14
Mason-Dixon(10/25-27)(815)         49          42           9
Final                              50          45           5

Ohio Senator                     Voinovich(R) Boyle(D)   Other
Harris(10/19-21)(1,783)            55          37           8
Harris(10/29-31)(2,149)            58          39           3
U. of Cinn.(10/22-26)(754)         62          34           4
Mason-Dixon(10/25-27)(815)         55          37           8
Final                              56          44           0

Pennsylvania Governor            Ridge(R)   Itkin(D)     Other
Harris(10/19-21)(1,611)            64          16          20
Harris(10/29-31)(1,812)            62          21          17
Millersville U.(10/23-27)(481)     66          15          19
Mason-Dixon(10/26-27)(807)         59          23          18
Final                              57          31          12

Pennsylvania Senator             Specter(R) Lloyd(D)     Other
Harris(10/19-21)(1,611)            57          25          18
Harris(10/29-31)(1,812)            63          32           5
Millersville U.(10/23-27)(481)     61          17          22
Mason-Dixon(10/26-27)(807)         57          34           9
Final                              61          35           4

Texas Governor                   Bush(R)    Mauro(D)     Other
Harris(10/19-21)(3,248)            73          22           5
Harris(10/29-31)(1,523)            73          21          13
Mason-Dixon(10/29)(504)            69          25           6
Final                              69          31           0

Washington Senator               Smith(R)   Murray(D)    Other
Harris(10/19-21)(831)              41          54           5
Harris(10/29-31)(1,523)            45          53           2
Mason-Dixon(10/26-27)(801)         41          49          10
Final                              42          58           0
DISCUSSION QUESTIONS:
(1) What differences do you see in the outcomes of the Harris Internet polls and the conventional polls?
(2) Do you think the Internet polls have a rosy future?
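As a concrete starting point for question (1), here is a small sketch of our own that measures each poll's error by the average absolute difference between its percentages and the final result, for three races transcribed from the table above; extending it to the whole table is straightforward.

# Average absolute error (in percentage points) of the last pre-election polls,
# for three races transcribed from the table above.  Entries are (R, D, Other).
races = {
    "California Governor": {
        "Harris(10/29-31)":      (42, 53, 5),
        "Field(10/26-28)":       (43, 50, 7),
        "Final":                 (38, 58, 4),
    },
    "Illinois Governor": {
        "Harris(10/29-31)":      (55, 41, 4),
        "Mason-Dixon(10/24-26)": (48, 37, 15),
        "Final":                 (51, 47, 2),
    },
    "New York Senator": {
        "Harris(10/29-31)":      (46, 50, 4),
        "Mason-Dixon(10/26-28)": (43, 46, 11),
        "Final":                 (45, 54, 1),
    },
}

for race, polls in races.items():
    final = polls["Final"]
    for name, result in polls.items():
        if name == "Final":
            continue
        err = sum(abs(a - b) for a, b in zip(result, final)) / len(final)
        print(race, name, round(err, 1))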
<<<========<<

This article describes an effort, now in its beginning stages, being undertaken by the author, together with other members of the Royal Statistical Society, to create a scale that will allow all risks to be quantified and compared. The reason for this effort is that the public, in general, does not have the knowledge to compute the relative risks of various activities. This lack of knowledge leads to many irrational decisions on the part of the affected parties. A similar proposal was made by John Paulos in his book "Innumeracy." Paulos suggested a logarithmic scale which he described as a "kind of Richter scale which the media could use as a shorthand for indicating degrees of risk."
In general, there are two types of consequence when an undesirable event occurs. The first type is the loss of quality of life and the second type is financial loss. In the second case, one might attempt to quantify the risk by using the expected value of the financial loss suffered. If this loss would occur over a long time interval, should the undesirable event occur, one could discount the loss using, for example, the real rate of return of riskless investments. This is a fairly standard idea.
To quantify the loss of quality of life, however, is more difficult. First of all, there are certain types of events which have, as possible outcomes, instant death, while others have consequences which, if felt at all, occur many years after the event has occurred. Other effects, called chronic effects, may be felt continuously. Secondly, if one uses the expected loss of full-quality life as a measure of the risk involved with some activity, one finds that this expected loss tends to decrease as the individual's age increases. This leads to the questionable conclusion that for certain activities, older people should assign a lower risk value to the activity than should younger people.
To get around this conclusion, the author suggests that one measure the risk, to a given individual, of an activity by considering two factors: 1) the expected percentage decrease D in the person's life expectancy that will occur should the person undertake the risky activity, and 2) the expected quality of life q (where q is measured on a scale from 0 to 1, with 0 representing death or coma, and 1 representing good health) if the person undertakes the activity.
Let E be the expected lifetime when the risk is not taken and S the expected shortening of life that is a consequence of the risk. The author defines the loss function L = S x (1-q)/E; with D written as a fraction, this is just D x (1-q). L, S, and E are all allowed to depend on age, sex, and degree of exposure. For example, for smoking, the degree of exposure would be the number of cigarettes smoked per day. If the consequence of the risk is instant death, with probability p, such as would be the case in rock climbing, then S = pE and q = 0, so the loss function L is just the probability p of death. Finally, if the risk takes place over a period of time, the loss is discounted by a discount factor exp(-rt). Combining the loss function and the discount factor, the author defines the discounted risk R = L exp(-rt). When the risk is constant over a period of time, such as the risk of being killed by an asteroid, the risk magnitude RM is obtained by integrating L exp(-rt) over the period of time involved.
To handle monetary risks the author proposes using the expected loss to measure the risk. Of course to make it comparable to R it would be necessary to normalize this. One way he suggests is to take the ratio of the expected loss to the value of a life, though he realizes this might not be a very practical solution.
If the risk magnitude RM is taken as the measure of the risk, then for any risk, RM is between 0 and 1. No matter how complicated the risk, its risk magnitude RM is the same as that of the simpler risk of playing Russian roulette with the numbers of fake and real bullets chosen to give probability RM of getting a real bullet.
The author chooses to make the final risk measure more like the Richter scale by transforming RM to obtain the risk factor RF = 8 + log10(RM). This makes his risk factors fall in the interval [0, 8]. Our own preference would be to stick with RM itself.
All this will become clearer if you work your way through the following examples provided by the author.
The following examples cover all the various classes of risk discussed in the text. The data used are generally from the Health & Safety Executive. For smoking, data have been interpreted from the information given by Doll, Peto et al (BMJ no 6959, vol 309, 1994).
Asteroid impact (destroying civilization): Involuntary chronic exposure to a natural risk; consequence instant death; lifetime exposed bodily risk for a new-born UK male.
Life expectation is 80 years. The probability of impact is 10^(-8) per year. Choose the discount rate to be 2%. The risk magnitude over the lifetime is obtained by integrating 10^(-8) x exp(-.02t) over the interval from 0 to 80. Carrying this out gives the risk magnitude RM = .0000004 with corresponding risk factor of RF = 8 + log10(.0000004) = 1.6.
Note: risks due to extra-terrestrial impacts of a lesser magnitude should be added in as these may dominate the overall risk of merely living on the planet Earth.
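Since the remaining examples all follow the same recipe, here is a short sketch of our own of the author's arithmetic: for a constant yearly probability p of instant death and discount rate r, the risk magnitude is the integral of p x exp(-rt) over the exposure period, and the risk factor is 8 + log10(RM).

# The author's risk magnitude and risk factor for a constant yearly risk of
# instant death, discounted at rate r over T years of exposure.
from math import exp, log10

def risk_magnitude(p_per_year, T, r=0.02):
    "Integral of p * exp(-r t) from 0 to T."
    return p_per_year * (1 - exp(-r * T)) / r

def risk_factor(rm):
    return 8 + log10(rm)

asteroid = risk_magnitude(1e-8, 80)          # new-born UK male, lifetime exposure
print(asteroid, risk_factor(asteroid))       # about .0000004 and 1.6, as above

car = risk_magnitude(0.00008, 80)            # lifetime car travel at 10,000 miles/year
print(car, risk_factor(car))                 # about .0032 and 5.5 (see below)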
Homicide: Chronic exposure to a risk from society; consequence instant death. Lifetime exposed bodily risk for a new-born UK male.
Life expectation is 80 years. Choose the discount rate to be 2%. Probability per year (average for the whole population) = 1 in 145,000. Then the risk magnitude over a lifetime is obtained by integrating (1/145,000) x exp(-.02t) over the interval from 0 to 80. This gives RM = .00038, with corresponding risk factor RF = 8 + log10(.00038) = 4.6.
100-mile rail journey: Acute exposure to bodily risk from a necessary activity; consequence instant death (note: greater probability of lesser consequence should really be included, as this may dominate).
Probability = .00022 per million miles. Risk magnitude for a 100-mile journey RM = .00022 x 100/1,000,000 = .000000022, which is a risk factor of RF = 8 + log10(.000000022) = 0.34.
1000-mile flight: Acute exposure to bodily risk from an activity; consequence instant death. Probability = .0005 per million miles. Risk magnitude RM = .0005 x 1000/1,000,000 = .0000005 which is a risk factor of RF = 1.7.
100-mile car journey: Acute exposure to bodily risk from a necessary activity; consequence instant death.
Probability = .008 per million miles. Risk magnitude RM = .008 x 100/1,000,000 = .0000008, which is a risk factor of RF = 1.9.
Lifetime car travel: Habitual exposure to bodily risk from a necessary activity; consequence instant death.
Assume 10,000 miles/year and a discount rate of 2%. Probability = .008 per million miles, or .00008 per year. To get the risk magnitude we integrate .00008 x exp(-.02t) from 0 to 80. This gives a risk magnitude RM = .0032 and risk factor RF = 5.5.
Deep sea fishing: Habitual exposure to a bodily risk from an occupational hazard.
Rock climbing (one session): Acute exposure to bodily risk from a voluntary activity; consequence instant death.
Assume average ability and a session of 4 hours. Probability = 0.04 per 1000 hours. This gives a risk magnitude for 4 hours of .04 x 4/1000 = .00016, which is a risk factor of 4.2.
Rock climbing (20 year participation): Habitual exposure to bodily risk from a voluntary activity; consequence instant death.
Man aged 20 when starting. Assume 30 hours per year. Then probability per year = .04 x 30/1000 = .0012. Thus the risk magnitude is obtained by integrating .0012 x exp(-.02t) from 0 to 20, which gives RM = .02 and risk factor RF = 6.3.
Smoking cigarettes: Habitual exposure to bodily risk from a voluntary activity; consequence premature death.
Man in UK aged 35 assumed to smoke cigarettes for the rest of his life. Lifetime expectation without smoking = 46 years.
Expectation smoking 10/day = 40 yr; expected shortening = 6 yr;
discount factor = exp(-(.02 x 40)) = 0.45.
Expectation smoking 20/day = 38 yr; expected shortening = 8 yr;
discount factor = exp(-(.02 x 38)) = 0.47.
Expectation smoking 40/day = 34 yr; expected shortening = 12 yr;
discount factor = exp(-(.02 x 34)) = 0.51.
Risk magnitude for 10/day = 6/46 x 0.45, or RM = 0.059 and RF = 6.8.
Risk magnitude for 20/day = 8/46 x 0.47 or RM = 0.082 and RF = 6.9.
Risk magnitude for 40/day = 12/46 x 0.51 or RM = 0.119 and RF = 7.1.
If you play traditional Russian roulette with 1 live bullet and 5 duds, your RM = 1/6 = .167 and RF = 7.2, which is approximately what we obtained for smoking 40 cigarettes a day.
In his paper, the author gives a more complete list of risk factors in order of increasing risk. We have added the corresponding risk magnitudes.
Activity                                          Risk magnitude  Risk factor
100-mile rail journey                               .000000022        .3
Destructive asteroid impact (new-born male)         .0000004         1.6
1000-mile flight                                    .0000005         1.7
100-mile car journey (sober middle-aged driver)     .0000008         1.9
Rock-climbing (one session)                         .00016           4.2
Homicide (new-born male)                            .00038           4.6
Lifetime car travel (new-born male)                 .0032            5.5
Accidental falls (new-born male)                    .003             5.5
Rock climbing over 20 years                         .02              6.3
Deep sea fishing (40-year career)                   .027             6.4
Continuing smoking cigarettes (man aged 35: 10/day) .059             6.7
Continuing smoking cigarettes (man aged 35: 20/day) .082             6.9
Continuing smoking cigarettes (man aged 35: 40/day) .119             7.1
Russian roulette (one game)                         .167             7.2
Suicide                                            1.0               8
DISCUSSION QUESTIONS:
(1) It was asserted above that when considering a certain risky activity, the age of the individual should not affect the evaluation of the activity. Consider this proposition with respect to certain activities, such as scuba diving, foreign travel, and smoking. Is this proposition reasonable, in your opinion?
(2) Which would you prefer to have as a measure of risk: the risk magnitude or the risk factor?
(3) Do you think a single scale is sufficient to compare all risks?
<<<========<<

Jerry Grossman wrote to us that the Kevin Drake Game in the small-world network article of the last Chance News (Chance News 7.09) should have been called the Kevin Bacon Game. Kevin Bacon is a movie actor.
<<<========<<

This work is freely redistributable under the terms of the GNU General Public License as published by the Free Software Foundation. This work comes with ABSOLUTELY NO WARRANTY.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!