CHANCE News 7.02

(27 January 1998 to 20 February 1998)


Prepared by J. Laurie Snell, Bill Peterson and Charles Grinstead, with help from Fuxing Hou, Ma. Katrina Munoz Dy, Pamela J. Lombardi, Meghana Reddy and Joan Snell.

Please send comments and suggestions for articles to jlsnell@dartmouth.edu.

Back issues of Chance News and other materials for teaching a Chance course are available from the Chance web site:


It is a part of probability that many improbable things will happen.
Agathon (445 - 400 BC)

Contents of Chance News 7.02


We asked Bible Code expert Brendan McKay to see if the Bible could be of any help to Mr. Starr in his investigation. Here is his answer:

Dear Laurie,

As you requested, I consulted the Bible Code for hints as to the outcome of The Bill and Monica Show. First I tried the Hebrew text of Genesis, but without much success. "Clinton" does not appear there as an Equidistant Letter Sequence (ELS) and "Bill" is so pervasive that it appears in close conjunction to practically everything...

Then I thought, "Americans speak a rough approximation of English, so clearly I should consult an English text". So I tried the King James Authorized translation of Genesis instead. Sure enough, there were "Clinton", "Monica", and all the rest of them. Still, things didn't seem quite right. For one thing, "Clinton" was closer to "Hillary" than he was to "Monica"! Then I noticed something curious indeed. Time after time, whenever "White" was close to "House", nearby was "Snell". I'm sorry, Laurie, but you're going to have to explain these:

             o a b r a M 
             e s f r O m
             t h i N e t
             n d I t c a
             e C a n a a
             A r e u n t
             a n d w h E    LAURIE and MONICA
             e t h e f I
             e r s f o R
             d v e n t U
             m t h e f A
             f h i s o L
Do you know how fantastically unlikely it is to find "Laurie" and "Monica" in the same 12x6 rectangle?


How likely is it?

Brendan and others who are studying recent research on Bible codes would like to counter the impression, often given by the press, that the mathematical community agrees that the existence of Bible codes has been scientifically verified. They are asking those who agree that there are serious problems with the research purporting to show the validity of Bible codes to sign a petition to that effect. The petition, and instructions for signing it, are available online.

The Dilbert comic strip from January 13 to January 17 featured Dilbert and Ratbert discussing coin tossing and ESP. You can view these amusing strips if you act fast. Today (Friday, Feb. 20) is the last day that the January 13 strip will be on Dilbert's website.

Peter Ayton suggested the first article and some of the points for discussion.

Such a perfect deal.
Mirror, 27 Jan. 1998, p.11
Andrew Young

This article reports on a Whist game that was dealt in England. The reader does not need to know the rules of Whist. The only relevant rule here is that all of the cards in a standard 52-card deck are dealt out, with 13 going to each of 4 players. It is claimed that in one deal, each of the hands contained a full suit of 13 cards. This, of course, was considered newsworthy.

One can compute that, if the deal is random, then the probability of this event is 1 out of 2235197406895366368301560000. (The article's headline almost got this right; there was a 9 at one point where there should have been a 5.)

It is possible that something even more amazing occurred. One of the participants in the game (in fact it was the dealer; hmm...) stated: "Astonishingly, I got them all in the right order." If we instead insist that each player be dealt a hand in increasing order, then the probability of this event is 1 out of 3360757298789328273819193202350156957303729393370136576000000000000. (Aren't computers wonderful?) Let us be generous and discuss the original event, namely that each hand contains a complete suit, dealt in any order.
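Both odds can be checked with exact integer arithmetic (a quick sketch using Python's arbitrary-precision integers):

```python
from math import factorial

total_ordered = factorial(52)              # all ordered deals of 52 cards

# Each of the 4 players receives a complete suit, in any order:
# 4! ways to assign suits to players, times (13!)^4 orderings within hands.
favorable = factorial(4) * factorial(13) ** 4
odds_any_order = total_ordered // favorable
print(odds_any_order)      # 2235197406895366368301560000

# Each player also receives their suit in increasing order:
# only the 4! suit assignments remain favorable.
odds_increasing = total_ordered // factorial(4)
print(odds_increasing)     # the 67-digit number quoted above
```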

How are we to come to terms with this claim? Here is an argument which is sometimes used to support such claims. In a random deal, each possible arrangement of cards is equally likely to occur, so the above deal is no more unlikely to occur than any other single deal. Therefore, we should not be surprised if it occurs once. (Before we discuss this argument, does this argument imply that we should not be surprised if it occurs twice?)

One might attack the above argument as follows. Suppose we let Deal A denote one prescribed deal, say one in which each hand receives between 2 and 5 cards of each suit, and let Deal B denote the deal described in the article. While it is true that Deal A and Deal B are equally likely to occur in a random deal, it is not true that they have the same surprise value. In other words, if Deal A were to occur, it would not make the news; but if Deal B were to occur, it would be newsworthy.

How might one quantify surprise value? One way is to describe certain types of patterns which, if they occur in a deal, would occasion some remarks. For example, if any one player receives all of the aces and kings, that would certainly be noticed. Once these patterns have been described, one could define a hand to be surprising if it had at least one of these patterns (and in fact, perhaps one could quantify, if one wanted, how surprising a hand is by how many patterns it contained).

It is difficult to list all of the pattern types that might be called surprising, but suppose we allow that there might be 1,000,000 such patterns, with an average of 1,000,000,000 deals exhibiting each pattern. Then the number of surprising deals is around 10^15, which, out of the more than 10^28 possible deals, means only about 1 deal in 10^13 is surprising. The point is that surprising hands do not occur very often, so if one is claimed to have occurred, perhaps one should look for other explanations for its occurrence.

There are at least two other explanations for the occurrence of the reported deal. One is that someone set up the deck in advance. The other is that someone (or some group of people) is not being truthful. Regarding the first alternative, someone might say "But in the article, the women said that the cards were shuffled and dealt." Nevertheless, it would not take a great deal of effort for the perpetrator of the switch to create a diversion, such as a terrific crash of dishes in another room, or a screaming child upstairs. It takes only about 3 seconds to switch decks. What is the probability that this occurred in the reported instance? This is an ill-formed question, but certainly it would not be terribly surprising to us if we were subsequently told that this in fact happened, so this means that we do not attribute an extremely low probability to this alternative.

The second alternative, that some person is being less than truthful, is difficult to discuss quantitatively. It would be interesting to attempt to quantify the probability that a statement made by an individual is true or false, and if it is false, that the individual knows it is false. This probability would of course depend upon the type of statement being made, the situation in which it is being made, and the person who is making it. Nevertheless, it is safe to say that at least 1 in 1,000,000 statements made by people are lies.

We now have three possible explanations for the reported occurrence. Let us call these three explanations events R (for random), S (for switch), and T (for lack of truth). Now we can ask the question: Given that either R or S or T occurred, what is the probability that R occurred? It is easy to see that the answer is given by the expression P(R)/(P(R) + P(S) + P(T)). In this case, if we use 1/1000 for P(S) and 1/1,000,000 for P(T), we obtain the answer 4.47 x 10^(-25). Under the same conditions, the probability of a switch is about .999 and the probability of a lie is about .000999.
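The arithmetic can be checked directly (a sketch using the rough guesses for P(S) and P(T) given above):

```python
p_r = 1 / 2235197406895366368301560000   # honest random perfect deal
p_s = 1 / 1000                            # deck switch (rough guess from the text)
p_t = 1 / 1000000                         # someone lying (rough guess from the text)

total = p_r + p_s + p_t
print(p_r / total)    # ~4.47e-25: probability the deal was honest and random
print(p_s / total)    # ~0.999:    probability of a switch
print(p_t / total)    # ~0.000999: probability of a lie
```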


(1) Can the reader give a derivation of the probability, stated above, that a random deal will result in all 4 players receiving exactly one suit? Can you give a derivation of the probability that, in addition, each suit will be dealt in increasing order?

(2) Suppose that 5 billion people play Whist continuously, with one hand being dealt every minute in each group of 4 players. Given that there are 525600 minutes in a year (neglecting leap years), how long would these people have to play before the expected number of deals of the type reported on in this article exceeds 1? (Hint: If your answer is not much, much longer than the present-day estimate for the age of the Universe, which is 1.5 x 10^10 years, then you have made a mistake in your calculations.)

(3) How can we quantify the types of probabilities mentioned above (such as the probability that there is a deck-switch, or the probability that someone is lying)?

(4) The event in question was a charitable event. Does this suggest that someone wanted some publicity?

John A. Perazzo suggested that the statistical community might be able to shed some light on the recent controversy over the judging of the Olympic ice dancing contest.

It truly was a waltz in the park.
Los Angeles Times, 17 Feb, 1998, N3
Mike Penner

This article discusses the controversy over the judging of the Olympic Ice Dancing contest. Claims have been made that the decision had been determined by the judges before the games even started.

In this article, we read that evidence of the rankings being predetermined is that the order of the top nine teams remained unchanged throughout four rounds of dances, with the lone exception of Bourne and Kraatz whose ranking went from fifth on the first day to fourth on the last day.

In addition, Grishuk and Platov appeared not to be penalized for serious mistakes during their performance, and many thought that U.S. champions Punsalan and Swallow gave an outstanding performance but ended up in 7th place.

Another article reported that Tracy Wilson, former Olympic medalist and now commentator for CBS, said that she was told, before the competition started, what the order at the finish would be: Grishuk and Platov would win the gold, Krylova and Ovsiannikov would be second, and Canadians Bourne and Kraatz would be dropped to fourth place to allow the French pair, Anissina and Peizerat, to win the bronze.

The most popular explanation for all this is that the five judges on the panel from former Eastern bloc countries -- Russia, Ukraine, Lithuania, Poland, and the Czech Republic -- agreed to form a bloc that would support each other's candidates. Some suggested, based upon the Canadian experience, that France was also part of the conspiracy.

To test these claims statistically, you will need the data and to know how the scoring is done. You will find the data here.

As to the scoring, there are four parts to the contest: compulsory dance 1, compulsory dance 2, original dance, and free dance. In each part, the 9 judges assign numbers between 0 and 6 to two aspects of the performance, for example technique and presentation. These are the two sets of 9 numbers you see displayed after the performance. In each part, after each contestant has performed, the contestants who have performed so far are ranked by each judge using the total of the points that judge assigned. Then, when the part is finished, final rankings for that part are determined by assigning first place to the contestant ranked first by the most judges, second place to the contestant ranked second or better by the most judges among the remaining contestants, and so on. There is a convention for ties. After all four of the performances have been completed, a final ranking is determined by a weighted average of the rankings in the individual parts: 20% for the compulsory dances, 30% for the original dance, and 50% for the free dance.
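The place-by-place rule described above can be sketched as follows (a simplified illustration with a hypothetical panel; the actual ISU rules include further tie-breaking conventions not shown here):

```python
# Simplified sketch of the majority-placement rule described above.

def final_placements(rankings):
    """rankings[j][c] = place judge j gave contestant c (1 = best).
    Assign place 1 to whoever the most judges ranked first, place 2 to
    whoever the most judges ranked second or better among those left, etc."""
    n = len(rankings[0])
    remaining = set(range(n))
    result = {}
    for place in range(1, n + 1):
        def votes(c):
            # how many judges ranked contestant c at this place or better
            return sum(1 for judge in rankings if judge[c] <= place)
        winner = max(remaining, key=votes)
        result[winner] = place
        remaining.remove(winner)
    return result

# Hypothetical example: 3 judges, 3 couples
judges = [[1, 2, 3], [1, 3, 2], [2, 1, 3]]
print(final_placements(judges))   # couple 0 wins: two judges ranked it first
```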


Which of the claims made about irregularity of the judges do you think could be tested statistically? How would you go about doing this testing?

After the deaths of Sonny Bono and Michael Kennedy, several newspapers evaluated safety on ski slopes. They tried in different ways to indicate that skiing was not as dangerous a sport as these deaths suggested. We show here how several of these newspapers tried to convince their readers of this.

Thinking twice on the slopes.
Boston Globe, 7 Jan. 1998, A1
Tony Chamberlain

This article quotes Steven Over, head of the National Ski Patrol in Denver, as saying:

Records kept by his organization and the National Safety Council show that, since 1984, skiers have made 52.25 million visits to the slopes annually, and an average of 34 of them have died each year.
The article states that those figures are confirmed by the National Ski Areas Association, a trade group representing 330 resorts that account for 90 percent of America's skier visits. Over goes on to say:
In an ironic way, this kind of publicity sheds light on the figures, and the figures show that, compared to other outdoor sports, skiing and boarding are safer than many other activities - like boating or biking, for instance.

What additional information does the reader need to decide if skiing is less dangerous than boating or biking?

The next article tries to give some figures to bolster the argument that these other activities are more dangerous.

Bono, Kennedy fit ski accident profile.
News and Observer (Raleigh N.C.), 7 Jan. 1998, A10
Brigid Schulte

Here we read:

With the publicity of these two high-profile skiing deaths turning a spotlight on ski injuries, the ski industry has been quick to point out that in 1995, 716 people died in boating accidents, 800 bicyclists died and 89 people were hit by lightning.


Do we now have enough information?

Finally, in the next article we find some more meaningful numbers for comparison of skiing and bicycle riding.

Ski safety becomes hot topic.
Denver Post, 7 Jan. 1998
Penny Parker

Rose Abello, spokeswoman for Colorado Ski Country USA, said:

Nationally, skiing fatalities are less than one per one million skier days according to the National Ski Areas Association. A skier day is one lift ticket sold or given to one skier for all or part of a day.

The article goes on to say:

In 1995, there were 17 drowning deaths per 1 million water-sport participants, and 7.1 deaths per million bicyclists, according to the National Safety Council.

Surely, at last, we can say that skiing is less dangerous than biking or water-sports. Do you agree?
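Note that the two rates above use different exposure units: deaths per million skier-days versus deaths per million bicyclists per year. Any direct comparison needs an assumption about how often a cyclist actually rides; the 50 riding days a year below is a pure assumption, for illustration only:

```python
# Skiing: < 1 death per million skier-DAYS.
# Biking: 7.1 deaths per million bicyclists per YEAR.
deaths_per_million_cyclists_per_year = 7.1
assumed_riding_days_per_year = 50        # hypothetical figure, not from the article
per_million_rider_days = (deaths_per_million_cyclists_per_year
                          / assumed_riding_days_per_year)
print(per_million_rider_days)   # ~0.14 -- per day of activity, biking now looks safer
```

The direction of the comparison flips depending on the exposure measure, which is exactly why the question above is worth asking.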

In the next article we find that men are the real problem.

Two deaths put focus on skiing safety
Hartford Courant, 7 Jan. 1998, A1
Jim Shea

Jasper Shealy, the country's leading expert on skiing injuries and an engineering professor at the Rochester Institute of Technology, stated:

If you want to eliminate deaths in skiing, eliminate male participants. Shealy said males account for 85 percent of all skiing deaths. The National Ski Areas Association reported that of the 36 skiers who died in 1996, 32 were male.

What additional information do we need to be really sure that men are the problem?

In the next article we discover that the 34 deaths per year is not total deaths but rather deaths from head injuries.

Deaths throw spotlight on skiing safety.
Charleston Gazette, 7 Jan, 1998, 1A
Chris Newton, The Associated Press

According to this article, an estimated 34 skiers per year die of head injuries. This is, of course, different from the previous articles, which referred to an average of 34 deaths a year caused by skiing. The article gives as the source of its information a study by the American Medical Association (AMA): Report of the Council on Scientific Affairs, CSA Report 1-I-97. The aim of this study was to estimate the cost effectiveness of requiring helmets for skiers.

The study used data on injuries collected at the Sugarbush ski resort in Vermont between 1981 and 1997. During this period there was an average of 3 deaths per year from head injuries. Using an estimate of 8.74% for the share of skiers who ski at Sugarbush gives the estimate of 34 deaths per year nationally from head injuries. The report concluded that, as of September 1997, there is insufficient scientific evidence to support a policy of mandatory helmet use.
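The national figure is a simple proportional extrapolation. Treating the Sugarbush deaths as a Poisson count gives one rough way to attach an uncertainty to it (a sketch only -- this is not the AMA report's method, and the 17-season span is our reading of the 1981-1997 period):

```python
import math

# Proportional extrapolation, as described above.
sugarbush_deaths_per_year = 3
sugarbush_share = 0.0874                  # estimated share of U.S. skiers
national_estimate = sugarbush_deaths_per_year / sugarbush_share
print(round(national_estimate))           # about 34 deaths per year

# Rough uncertainty: over an assumed 17 seasons the total count is about 51,
# so the standard error of the yearly rate is sqrt(51)/17, scaled up by the
# same national share.
years = 17
total_deaths = sugarbush_deaths_per_year * years
se_national = math.sqrt(total_deaths) / years / sugarbush_share
print(round(se_national, 1))              # roughly +/- 5 deaths per year
```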


(1) Is it possible that almost all deaths due to skiing are caused by head injuries?

(2) Would it be possible to establish a margin of error for the estimate of 34 deaths a year nationally?

In our final article we find the risk of skiing compared to the risk of being killed by lightning.

Sonny Bono dies in ski accident.
Los Angeles Times, 7 Jan. 1998, A1
David Ferrell

Rick Kahl, editor in chief of Skiing magazine, quoting figures from the National Severe Storms Laboratory, says:

Fewer people die skiing than get killed by lightning every year. Lightning takes the lives of 89 people per year in the United States. It's an incredible fluke that anyone famous gets killed skiing. It's a fluke beyond flukes that two famous people get killed within a week of each other.

Can you name a single famous person who was killed by being struck by lightning?

Dispute over census sampling leads to suit, oversight panel.
The Philadelphia Inquirer, 13 February, 1998, page 29
Randolph E. Schmid

A suit was filed in federal court challenging the plan to use statistical sampling in Census 2000. A House committee has also formed a new subcommittee to oversee the Census. The chair of the subcommittee stated that 'Common sense says we simply need to count everybody.'

In the article, it is claimed that 26 million errors were made in the 1990 Census. These errors included not counting 10 million people, counting 6 million people twice, and counting 10 million people in the wrong place.

The main reason for the suit and the subcommittee formation appears to be that the Republican Party is fearful that statistical sampling would lead to higher counts in areas that are normally Democratic. Since the Census is used in redistricting, these higher counts could translate into more House seats held by Democrats. The suit claims that the U. S. Constitution makes it clear that the Census must be 'an actual enumeration of the people.' We quote here Clause 3 from Article 1, Section 2 of the Constitution:

Clause 3: Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons. The actual Enumeration shall be made within three Years after the first Meeting of the Congress of the United States, and within every subsequent Term of ten Years, in such Manner as they shall by Law direct. The Number of Representatives shall not exceed one for every thirty Thousand, but each State shall have at Least one Representative; and until such enumeration shall be made, the State of New Hampshire shall be entitled to chuse three, Massachusetts eight, Rhode-Island and Providence Plantations one, Connecticut five, New-York six, New Jersey four, Pennsylvania eight, Delaware one, Maryland six, Virginia ten, North Carolina five, South Carolina five, and Georgia three.
The Commerce Department has responded to the suit by noting that sampling has been upheld by numerous courts and by the Justice Department in the Bush, Carter, and Clinton administrations.


(1) Leaving aside for the moment that the Census does much more than simply enumerate the people in the United States, suppose that all we wanted was an accurate estimate of the number of people living in this country. Can you think of a way to achieve this estimate, using public records?

(2) Comment on the quote 'Common sense says we simply need to count everybody.'

(3) Do you think that using statistical sampling instead of straight enumeration would affect the composition of the Senate (as opposed to the House of Representatives)?

(4) Is it possible to imagine being able to educate the general public about statistical sampling?

Grade Inflation lowers value of Princeton degrees, faculty study says.
The Philadelphia Inquirer, 13 February, 1998, R6
Associated Press

A study by the faculty at Princeton University concluded that too many students are getting A's and B's. The 1973-77 and 1992-97 periods were compared. In the first of these periods, 69% of the grades given were between B- and A+, while 17% were between D- and C+. In the second period, the corresponding percentages were 83 and 10.

The reason for the concern is that the lack of spread in the grade distribution is thought to make it more difficult for graduate schools and employers to evaluate student applicants. The study reported that 'Graduate and professional schools and prospective employers ought to have better information than we now give them when they decide on admitting or hiring our graduates.'


(1) In statistics, one tests hypotheses and attempts to accept or reject them according to various mathematical criteria. In the present situation, one could state the hypothesis that there was no grade inflation from the first time period to the second. Of course, one would probably end up rejecting this hypothesis. How might one test this hypothesis? Can you imagine a situation (say at a smaller college) where the same data would not lead to a rejection of this hypothesis?

(2) One possible explanation for the increase in the average grade is that students are 'getting smarter,' whatever that means. Is there a way to determine whether this is in fact the case?

(3) Do you think that, if you gave the current Princeton calculus students the exam given to the students in 1973, they would do significantly better than the 1973 students did?
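The test asked for in question (1) can be made concrete with a chi-square test on a 2x2 table of grades at B- or above versus below. The class sizes below are hypothetical; only the 69% and 83% proportions come from the article:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Large university: 10,000 grades per period at the reported 69% / 83% rates.
print(chi2_2x2(6900, 3100, 8300, 1700))   # ~537 -- far above the 3.84 cutoff

# Small college: 50 grades per period with nearly the same proportions.
print(chi2_2x2(34, 16, 41, 9))            # ~2.6 -- not significant at the 5% level
```

With thousands of grades the statistic is enormous and the no-inflation hypothesis is soundly rejected; the same proportions from two classes of 50, however, fall short of the conventional 3.84 cutoff (chi-square, 1 degree of freedom, 5% level), which is the small-college situation the question asks about.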

Cheerios lower cholesterol a little in study.
Minneapolis Star Tribune, 14 January 1998, 3E.
Gordon Slovut

General Mills, the maker of Cheerios, has mounted a new advertising campaign, based on the results of a study conducted at the University of Minnesota Heart Disease Prevention Clinic. The following is from the back panel of a new Cheerios box:

.. participants who added two 1-1/2 ounce servings (3 cups daily) of Cheerios to a diet low in saturated fat and cholesterol dropped their cholesterol levels more than those who followed the low saturated fat and cholesterol diet alone. Researchers concluded that soluble fiber from whole-grain oats in Cheerios may give an additional cholesterol lowering boost when added to a heart healthy diet!
The article complains that General Mills initially overstated its case by suggesting that the reduction may be up to 18%. Only one of the 62 people in the study who ate Cheerios experienced that decrease. At the beginning of the six-week study period, the average cholesterol level of the 62 men and women assigned to eat Cheerios was 245.5 milligrams per deciliter. At the end of the study, their average was 239.96, which represents a 2.4% decrease. During the same period, the cholesterol levels for 62 subjects assigned to eat corn flakes increased from an average of 243.7 to an average of 247.3, an increase of 1.4%. Both groups adhered to an overall low-fat, low-cholesterol diet for the six weeks. In addition, they ate 1.5 ounces of their assigned cereal in the morning and evening.

Dietitian Helenbeth Reiss Reynolds, a co-author on the study who is also a paid consultant to General Mills(!), expressed regret about the emphasis on the 18% figure. But she added that five people in the Cheerios group saw their cholesterol drop by more than 10%, while only one person in the corn flakes group experienced that level of improvement.

The University of Minnesota researchers noted in their research report that a number of studies have now shown that foods which are high in soluble fiber (such as the whole-grain oats found in Cheerios) can reduce cholesterol beyond what would be achieved by a low-fat, low-cholesterol diet alone.


(1) The article interprets the data as saying that Cheerios "appeared to do about 3.8% better than corn flakes as a cholesterol lowering agent." What does this mean? A TV ad for Cheerios cites the study as finding that eating Cheerios produced an average 4% reduction in cholesterol. What do you think of this interpretation?

(2) What do you make of the fact that average cholesterol increased for the corn flakes group? (According to package labeling, one serving of Cheerios has 2 grams of fat and no cholesterol, while one serving of Kellogg's corn flakes has no fat and no cholesterol.)

(3) Knowing that five people in the Cheerios group saw their cholesterol drop by more than 10%, while the group averaged only a 2.4% drop, what can you say about the others in the group?

(4) The article notes that overall, body weight did not change in either group during the study. Does this seem at all surprising?

The Fortune Sellers
John Wiley 1998
ISBN: 0471181781 $20.97 at Amazon.com
William A. Sherden

This is a lively and informative account of the present state of the art of forecasting. Sherden discusses seven fields, including meteorology, economics, investments, technology assessment, demography, and futurology.

In only two of these fields, weather forecasting and demography, does Sherden feel that the ability of experts to predict the future is worth paying anything for.

Sherden's story about the predictive ability of farmers' almanacs is typical of his analysis of other forecasting schemes. He starts with an amusing story. He tells us that in 1994, both The Old Farmer's Almanac and the Farmers' Almanac predicted a stormy 1995 winter with record snowfall. In the months that followed, snow blower sales increased 120%, there was a 42% increase in the sale of rock salt for the highways, snowmobile sales increased 46%, the sale of four-wheel-drive vehicles increased 60%, and snow shovels sold out early in the season. In fact, that winter turned out to be only average when it came to temperature.

Sherden then remarks that the editors of The Old Farmer's Almanac have generally stated that its forecasts are 80 to 85 percent accurate. To check this claim, Sherden imagined that he was a farmer in Omaha, Nebraska, who had used its average monthly temperature predictions for the past 30 years. This farmer would have found that the almanac had a 48.99 percent success rate in predicting whether average monthly temperatures were above or below seasonal norms. Its accuracy in predicting temperatures was 73%, not too far from the 85% claimed. However, simply using seasonal average temperatures as a naive forecast yielded 90 percent accuracy.

Sherden provides examples from almost every field showing that naive forecasts do as well as or better than sophisticated forecasting methods. In predicting population growth, using a ruler to extend past growth works about as well as official forecasts for a couple of decades, and after that all forecasts are bad. Neither fancy forecasting methods nor naive ones detect sudden changes, such as the baby boom in birth rates or crashes in the stock market.

As in the case of The Old Farmer's Almanac, the money that is made from the science of forecasting is usually made by the forecaster and not by the users of the forecasts. Of course, this is particularly easy to show in the case of those who give advice about the stock market.

Sherden is pretty hard on the forecasters but we have to admit that he has done his homework.


(1) The latest results in the Wall Street Journal contest of the darts against the pros show that the pros are ahead of the darts by a score of 54 to 37 in the 91 6-month contests since July 1990. When pitted against the Dow Jones Industrial Average the pros are ahead by a much smaller margin of 47 to 44. In those 91 contests, the pros have posted an average six-month investment gain of 11.2%, compared with 5.4% for the darts and 6.9% for the Dow industrials. Does this convince you that experts can predict the stock market?

(2) It has been claimed that much of the better performance of the pros can be attributed to the fact that their choices themselves cause the values of those stocks to rise, and also that, since they are not spending their own money, they tend to choose more volatile stocks than the darts do. What do you think of these arguments?

(3) Sherden says that meteorologists use two naive forecasts to judge forecasting skill: persistence (the weather tomorrow will be the same as it is today) and climatology (tomorrow's weather will be the same as the seasonal average). How would you implement the climatology forecast, for example, for the probability of rain? Which do you think would be the better of these two naive methods of forecasting?

(4) Economists claim that they learn a great deal from studying economic models even though the models may not lead to realistic forecasts. Similarly, geologists have learned a lot about what causes earthquakes even though they cannot predict when they will occur. Does it seem strange that we can learn so much about a subject and still not be able to use this knowledge to make predictions?

(5) Sherden (and the philosopher Karl Popper) argue that there can be no "laws of human behavior" as there are "laws of physics," and hence no real ability to predict human behavior. Do you believe this? Don't theories such as those of Tversky and his colleagues allow you to predict human behavior?
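The two naive forecasts in question (3) can be sketched from a record of past rain observations (an illustrative sketch with made-up data):

```python
def persistence_forecast(history):
    """Tomorrow will be like today: P(rain) = 1 if it rained today, else 0."""
    return float(history[-1])

def climatology_forecast(history):
    """P(rain) = the long-run fraction of rainy days in the record."""
    return sum(history) / len(history)

rain = [0, 1, 1, 0, 0, 0, 1, 0, 0, 0]   # made-up record: 1 = rain, 0 = dry
print(persistence_forecast(rain))        # 0.0
print(climatology_forecast(rain))        # 0.3
```

In practice the climatology estimate would be computed from many years of data for the relevant season or calendar date, rather than from a short run like this.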

Post-disaster suicide: A study in despair.
The Boston Globe, 5 February 1998 A3.
Dolores Kong

A study published in the New England Journal of Medicine investigated the incidence of suicide in the wake of natural disasters. Data were collected from 377 counties that had been declared federal disaster areas between 1982 and 1989.

In areas struck by earthquakes, the annual suicide rate jumped from 19.2 per 100,000 to 31.1 per 100,000 during the year following the quake. In areas hit by hurricanes, the rate increased from 12.0 to 15.7 per 100,000 in the two years following the storm. In areas hit by floods, the rate increased from 12.1 to 13.8 per 100,000 people in the four years following the flood.
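The three increases are easier to compare as percentages (a quick computation from the figures above; note that the follow-up windows differ, so the rates are not strictly comparable):

```python
# (before, after) annual suicide rates per 100,000, from the article
rates = {"earthquake": (19.2, 31.1), "hurricane": (12.0, 15.7), "flood": (12.1, 13.8)}
increases = {d: round(100 * (after - before) / before)
             for d, (before, after) in rates.items()}
print(increases)   # earthquakes show by far the largest relative jump
```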

Speculating on why rates remained elevated for longer after floods, the researchers noted that floods happen repeatedly and cause more damage, leading to greater debt.


(1) Why do you think different time intervals following the disaster were used in the reporting? Is it clear that suicide rates remain elevated longer after floods?

(2) Comparing the three areas in non-disaster times, we see that the annual suicide rate of 19.2 per 100,000 in earthquake areas is about 60% higher than the 12.0 and 12.1 per 100,000 in hurricane or flood areas. In fact, this difference is even larger than the increase produced by floods or hurricanes. What do you make of this?

Jobless surveys -- with grain of salt; fuzziness of regional statistics makes trend-spotting risky.
The Washington Post, 26 January 1998, F31
Peter Behr

Unemployment rates in the three districts that compose the Washington area have dropped within the past year. Those numbers, however, come with margins of error that compromise their credibility as future indicators of the area's economic strength.

Michael Funk, of the Regional Economic Studies Institute at Towson State University in Maryland, argues that the effort to measure economic improvement through monthly unemployment rates is dangerous. Monthly unemployment rates are based on telephone surveys of residents, and accuracy depends on the number of residents surveyed. With budget constraints on the Labor Department, fewer residents are surveyed, and as a result the data have become less reliable.

According to the Bureau of Labor Statistics, the error rate for monthly surveys is about twice that of the annual jobless average. In Maryland the margin of error for monthly surveys is plus/minus 0.6%; in Virginia it is plus/minus 0.5%; and in the District, it is plus/minus 0.8%.
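Under a simple random-sample model, the margin of error for an estimated proportion shrinks like one over the square root of the sample size. The sketch below uses that approximation (the actual BLS survey design is a more complex stratified sample, and the 5% unemployment rate is an assumed illustrative value):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an estimated proportion,
    assuming a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Under this simple model, how many responses does Maryland's +/-0.6
# percentage-point monthly margin imply, if unemployment is near 5%?
p, target = 0.05, 0.006
n = (1.96 ** 2) * p * (1 - p) / target ** 2
print(round(n))                            # about 5,000 responses

# Pooling four months of data (as the economists suggest) halves the margin.
print(margin_of_error(p, 4 * round(n)))    # about 0.003, i.e. +/-0.3 points
```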

The key to determining trends is to look at several economic factors instead of relying on only one set of data, such as monthly unemployment figures. To evaluate unemployment, economists suggest looking at three or four months together, instead of one, in order to discern a pattern of economic improvement or decline. They also suggest comparing the unemployment figures with other economic benchmarks to see if they are headed in the same direction. For instance, the low jobless rate in Northern Virginia coincides with the surge in jobs there and with employers' complaints that they cannot find enough workers.
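As a rough illustration of why single-month comparisons are risky (only the margins of error below come from the article; the reported drop is a hypothetical number, and comparing two monthly estimates actually involves even more uncertainty than one margin of error suggests):

```python
# Margins of error (percentage points) for monthly unemployment surveys,
# as reported by the Bureau of Labor Statistics in the article
margin = {"Maryland": 0.6, "Virginia": 0.5, "District": 0.8}

def detectable(change_pct, moe):
    """A reported change smaller than the margin of error
    may be nothing more than survey noise."""
    return abs(change_pct) > moe

# Hypothetical example: a reported one-month drop of 0.4 points in Maryland
print(detectable(0.4, margin["Maryland"]))  # within the +/-0.6 margin
```

A month-to-month drop of 0.4 points in Maryland falls inside the plus/minus 0.6 margin, so by itself it tells us little; averaging several months, as the economists suggest, shrinks this noise.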


(1) Do you think the stock market takes into account the margin of error when it responds to a change in unemployment record?

(2) Does the margin of error tell the whole story about possible errors in a telephone survey? What are other sources of error?

Many don't tell about HIV, study finds.
The Boston Globe, 9 February 1998, A3
Associated Press

Surveys conducted at Boston City Hospital and Rhode Island Hospital between 1994 and 1996 found that 4 out of 10 HIV-infected people failed to inform sexual partners of their condition. Investigators questioned 203 HIV-positive patients, 129 of whom reported sexual activity within the previous six months.

Patients with only one sexual partner were three times more likely to tell their partners than subjects with multiple sexual partners.

The article notes that 41% of the subjects were infected through intravenous drug use, 20% were men infected through homosexual contact, and 39% were infected through heterosexual contact.


(1) What--if anything--can you tell about the rate of informing partners among subjects who had only one partner?

(2) If a patient has had three different partners, and informed one of them, how should this person count in the data? What effects would different ways of counting have on the findings?

(3) Would you expect rates of informing partners to be different among the three infection groups (IV drug use, homosexual contact, and heterosexual contact)? Is this sample large enough to make meaningful comparisons?

Study says truck safer than car in crash.
The Boston Globe, 10 February 1998, A3.
Associated Press

The Insurance Institute for Highway Safety reports that in collisions between automobiles and light trucks (a category that includes pickups and sport utility vehicles), people in automobiles faced four times the risk of death. The worst scenario involved automobiles being struck in the side by light trucks, where there were 27 car deaths for every truck death. This dramatic disparity is attributable to the fact that truck frames ride higher off the ground and strike cars in the middle of the doors.

These findings are troubling in light of the increasing popularity of sport utility vehicles. Still, most automobile fatalities occur in collisions with other cars or large trucks or in single vehicle accidents. The article reports that only 10% occur in crashes with pickups and 4% in crashes with sport utilities.

Side-impact crashes account for only 30% of the 32,000 annual deaths in cars and trucks. Head-on or nearly-frontal crashes are still the most deadly, accounting for half the deaths.


(1) From the data given, can you determine the number of annual deaths from crashes between cars and light trucks? If not, what else do you need to know?

(2) Do car drivers have a higher risk of dying at the wheel than (light) truck drivers?

Death penalty expansion bill opposed.
Union Leader, Manchester NH
John Toole

Does capital punishment deter murder?
John Lamperti (Professor of Mathematics, Dartmouth College)
Available from the Chance Database under Teaching Aids.

The Union Leader article describes a hearing held in connection with a House bill designed to expand the use of the death penalty in New Hampshire to include additional "heinous" crimes. The bill is supported by Governor Jeanne Shaheen, Attorney General Philip McLaughlin, and House Republican leaders.

Those opposed to the extension outnumbered those in favor, 5 to 1. Most of those opposed spoke, in fact, in favor of eliminating the death penalty altogether.

The mother of an only child who was killed said: "I don't believe this state or any government is competent to kill on my behalf."

John Lamperti testified against the bill arguing that the death penalty is not a deterrent to murder. He is quoted as saying:

For decades, murder has, on the average, been more common in states with capital punishment than in those where it is not used.

John Lamperti's article provides background material for his testimony. It is written to be read by the general public. It begins with two examples to show how statistics has been used in public policy: the polio vaccine clinical trial and the statistical evidence that smoking causes lung cancer. He then gives an analysis of the statistical studies that have been carried out to help determine if the data support the conclusion that capital punishment is a deterrent to murder. He concludes that they do not.


(1) Arguments have been given that some murders can be attributed to capital punishment laws. How could this be?

(2) As Lamperti points out in his article, care must be taken in concluding that capital punishment is not a deterrent to murder solely from the fact that states without capital punishment have, on average, fewer murders. Why is this? What kind of study could be done to give more convincing evidence that the death penalty is not a deterrent?



Send comments and suggestions to:


