CHANCE News 8.09

(October 7, 1999 to November 16, 1999)


Prepared by J. Laurie Snell, Bill Peterson and Charles Grinstead, with help from Fuxing Hou and Joan Snell.

Please send comments and suggestions for articles to

Back issues of Chance News and other materials for teaching a Chance course are available from the Chance web site:

Chance News is distributed under the GNU General Public License (so-called 'copyleft'). See the end of the newsletter for details.

Chance News is best read using Courier 12pt font.


His Sacred Majesty, Chance, decides everything.



Contents of Chance News


Note: If you requested a CD-ROM of the Chance Lectures and have not received it please send another request to jlsnell@dartmouth.edu with the address where it should be sent. Others who would like this CD-ROM are also invited to send such a request. There is no charge.

Norton Starr has passed on the latest Forsooth items from the RSS news:

The met Office give a 40 per cent chance of clear skies for the eclipse, but they admit to a 30 percent chance that they may be wrong.

Daily Telegraph
5 August 1999

In Britain you have a slightly better chance of adopting a baby less than a year old than you do of winning the lottery; but only slightly better (175 lottery millionaires a year, 300 babies adopted)

The Guardian
(article by Matthew Engel)
29 May 1999

The late Cardinal Hume had suggested that the second millenium should be seen in with a prayer, led by the Archbishop of Canterbury, in the Milllennium Dome.

Daily Telegraph (editorial)
22 June 1999

As usual, it is a good exercise to be sure you understand for each of these why the RSS can say Forsooth!

The United Nations estimated that on October 12 the population of the earth would reach 6 billion. This event was widely covered in the media, and there were numerous discussions of the predictions of Malthus with commentaries on how right or wrong he was. Anne Schwartz pointed out to us that there is a very complete discussion of this topic, including links to UN population graphics, a current article from USA Today, and an extensive discussion of Malthus and his famous prediction, at Jim Morrow's web site

Jim is currently teaching the Mount Holyoke quantitative literacy course. This is a case-study course; this year's case studies are: I. Salem Village Witchcraft, II. Earnings and Discrimination, and III. Population and Resources. Click here for more information about this course.

Professor Jenny George who teaches at the Melbourne Business School sent us the next article. She provided the abstract and the discussion questions.

Shipley's hopes dashed by All Blacks' defeat.
The Telegram, 2 Nov. 1999
Paul Chapman in Wellington

This article is available from Electronic Telegraph (search on title of the article)

The Rugby World Cup has recently been held in Wales, and the loss of the favourites, New Zealand, in the semi-finals to unfancied and under-performing France was completely devastating for most New Zealanders. Many political commentators believe it will hurt the chances of the National Party, the current incumbent political party, of being re-elected on November 27. According to the article:

A 1996 study found that the All Blacks' performance was a reliable indicator in determining election results. It showed that in 17 elections since 1949 the higher the All Blacks' win rate the more likely the incumbent government was to be re-elected.


The following data for 1949-1996 shows whether the incumbent political party remained in power in New Zealand (1=incumbent won election, 0=incumbent lost) and gives the All Blacks' proportion of wins (wins divided by total games played) during the election year and the year before the election.

Year  Incumbent  Rugby      Rugby Prop. wins
       wins?     Prop.wins   (previous year)
1949      0           0           1
1951      1           1           0.75
1954      1           0.75        0
1957      0           1           0.75
1960      0           0.25        0.75
1963      1           1           0.8
1966      1           1           0.75
1969      1           1           1
1972      0           1           0.25
1975      0           1           0.75
1978      1           0.86        0.67
1981      1           0.88        0.5
1984      0           0.75        0.71
1987      1           1           0.5
1990      0           0.86        1
1993      1           0.71        0.67
1996      1           0.9         0.83

(1) From this data, do you believe that a relationship between rugby results and election results exists? What evidence do you have?

(2) If you wanted to do a regression analysis to examine the relationship between the rugby results and the election results, what additional data would you need?

(3) Create a new binary variable that will indicate whether the rugby team was improving. Use conditional probabilities to decide whether this variable is related to election results.

(4) Can you suggest any other explanations for a correlation between rugby success and election success, apart from the newspaper article's implication that rugby success/failure is causing the election results?

Note: More details about the All Black games can be found here.
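Question (3) can be explored directly from the table above. Here is a minimal sketch, under the assumption (ours, not the article's) that "improving" means the win proportion in the election year was strictly higher than in the previous year:

```python
# (incumbent won?, prop. wins in election year, prop. wins previous year), 1949-1996
data = [
    (0, 0.00, 1.00), (1, 1.00, 0.75), (1, 0.75, 0.00), (0, 1.00, 0.75),
    (0, 0.25, 0.75), (1, 1.00, 0.80), (1, 1.00, 0.75), (1, 1.00, 1.00),
    (0, 1.00, 0.25), (0, 1.00, 0.75), (1, 0.86, 0.67), (1, 0.88, 0.50),
    (0, 0.75, 0.71), (1, 1.00, 0.50), (0, 0.86, 1.00), (1, 0.71, 0.67),
    (1, 0.90, 0.83),
]

# New binary variable: was the team improving (strictly better than last year)?
improving = [(won, cur > prev) for won, cur, prev in data]

def cond_prob(events, condition):
    """Estimate P(incumbent won | improving == condition) from the data."""
    relevant = [won for won, imp in events if imp == condition]
    return sum(relevant) / len(relevant)

p_win_improving = cond_prob(improving, True)       # 9/13, about 0.69
p_win_not_improving = cond_prob(improving, False)  # 1/4 = 0.25
```

With this (admittedly crude) definition the incumbent fared notably better in years when the All Blacks were improving, though with only 17 elections the difference could easily be chance.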

We were puzzled by the irregularity in the number of games played each year. Prof. George explained this:

The All Blacks are a national team and so they generally only play other countries. Also, until 1996 or so, rugby was an amateur sport (and quite rigidly enforced as such). Back in the 1950s, when travel was more difficult, there was usually only one series of test matches per year, either played while touring Australia, South Africa or Great Britain (in particular) or against a team that was touring New Zealand. Now, of course, the teams play much more often, hence the rise in the number of tests played each year. The All Blacks do play a few provincial/regional teams while on tour in another country, but these are generally not considered very important, certainly not in any way comparable to a test match (i.e., a match against another national team).

This is one of the reasons that test match outcomes have traditionally been so important in New Zealand. If the team loses the series against South Africa (a common occurrence in the 1950s - 1960s) then there may not be a chance for revenge for several years.

This article led us to ask the age-old question: Is the annual contribution of the Dartmouth alumni correlated with the success of the football team? We will see if we can answer this question by the time of the next Chance News. If others want to look at this question for their school we will include their results also.

Harold Brooks sent us a note about a recent NBC Evening News program with Tom Brokaw in which the incidence of breast cancer and the death rate from breast cancer on Long Island were compared to national averages, with the implication that they were significantly higher. Harold wondered how they determined that the differences are significant.

The statistics given by Brokaw were also given in the following article:

Trying to map elusive N.Y. cancer source. Los Angeles Times, 18 October, 1999, A1 Marlene Cimons

The National Cancer Institute is sponsoring a series of town meetings in the Long Island area in which researchers from the NCI will try to learn about environmental hazards that might have existed before records were kept. They will do this by listening to what people remember, or might have heard from a parent or grandparent.

Researchers then plan to incorporate this information into their program to use Geographic Information Systems (GIS) to develop a computerized map that would indicate "hot spots" that might lead to a link between chemicals and breast cancer.

This article states:

All over the country's northeast corridor, from the mid-Atlantic to New England, researchers have noted a higher-than-expected incidence of breast cancer and deaths resulting from the disease. Nationally, the cancer institute reports, there were 110.6 cases of breast cancer for every 100,000 women, with a death rate of 25.4%. But in Nassau County on Long Island, it is 117.8 cases per 100,000 women with a 30.5% death rate. And in Suffolk County it is 113.6 cases per 100,000 women, with a death rate of 31.1%.


As Harold said: how do they determine if these differences are significant?
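One back-of-the-envelope check treats the county's case count as a Poisson count and compares it with the count expected under the national rate. The population figure below is a made-up round number for illustration, not the actual number of women at risk in Nassau County:

```python
import math

national_rate = 110.6   # cases per 100,000 women (from the article)
county_rate = 117.8     # Nassau County rate (from the article)
n_women = 700_000       # hypothetical number of women at risk

observed = county_rate * n_women / 100_000
expected = national_rate * n_women / 100_000

# A Poisson count has variance equal to its mean, so a rough z-score is:
z = (observed - expected) / math.sqrt(expected)
# here z is about 1.8: suggestive, but short of the usual 1.96 cutoff
```

The answer clearly depends on the population at risk, on how many counties were scanned for elevated rates (a multiple-comparisons issue), and on whether age distributions were adjusted for; none of this is reported in the news accounts.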

Note: Harold Brooks does research on weather forecasting and you can find lots of interesting things about this topic at his web site.

Of particular interest to us is his evaluation of five different weather prediction sources in Oklahoma City: evening forecasts from the three network TV stations, a daily newspaper and the Oklahoma City National Weather Service. This study is discussed in the paper Verification of Public Weather Forecasts and the data is available.

Harold recently taught a course on weather verification and you can find notes and lots of useful references on this topic here.

After a crash, fear overtakes logic.
New York Times, 2 November 1999
John Paulos

Paulos remarks that the extensive news coverage of the four airplane crashes off the Eastern Seaboard (TWA Flight 800 off Long Island in 1996, Swissair Flight 111 off Nova Scotia last year, John F. Kennedy Jr.'s plane off Martha's Vineyard last summer and now Egyptair Flight 990 off Nantucket) will tempt people to look for strange links between them. He says:

It's important to understand that this recent spate of crashes is most likely a series of random events.

He goes on to say:

Humans have an innate tendency to attribute significance to anomalies and coincidences. An event with a one-in-a-million chance of happening to any American on a given day will, in fact, occur 260 times each day in this country.

Paulos suggests that the media examine a routine flight, say from New York to Madrid, in the same detail as is being done for the Egyptair flight. He expects that they would find about the same number of disquieting stories about the plane's history, last minute changes in the passenger list etc., as are being found relating to the Egyptair flight.

Remarking on the safety of air travel Paulos quotes Arnold Barnett's assertion that a passenger who daily and randomly takes a jet flight between American cities would, on average, go 19,000 years before dying in a crash. We will discuss this way of measuring risk later.


How did Paulos arrive at the number 260? Can you think of any event which has a one-in-a-million chance of happening to any American today?
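Paulos's 260 is just linearity of expectation applied to a population of roughly 260 million Americans (the approximate U.S. population at the time):

```python
us_population = 260_000_000        # rough 1999 figure
p_per_person_per_day = 1 / 1_000_000

# Expected number of occurrences in one day, by linearity of expectation;
# this holds whether or not the individual events are independent.
expected_per_day = us_population * p_per_person_per_day   # 260
```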

Just reading the news is dangerous.
New York Times, 4 November, 1999, B1
John Tierney

A major air disaster like that of EgyptAir Flight 990 is the center of news broadcasts for at least a week after the crash. It is natural to have stories of the grieving relatives, the many theories about the cause of the crash, and a review of other similar disasters. In this article, John Tierney remarks that social scientists know from studies of attitudes about risks that some people will be scared away from airplanes on their next trip. He tells us that if just 1 percent of domestic travelers were to drive for their next trip instead of flying, this would lead to 5 million extra passenger miles on the road, causing an estimated 40 additional deaths and 3,500 additional injuries.

Barry Glassner, a professor of sociology at the University of Southern California, has written a new book called "The Culture of Fear: Why Americans Are Afraid of the Wrong Things." In this book he points out that before the ValuJet crash in 1996, cut-rate airlines had been flourishing, and the low rates were saving about 200 lives a year by inducing people to fly instead of drive.

Tierney wonders if we will reach the day that the communication industry could be sued by someone who is so moved by a story of a grieving relative, or a lucky escape, that he tears up his Thanksgiving plane ticket and has a horrific car crash.


How do you think they estimate, after a major airplane accident, the number of people who will drive for their next trip instead of fly?

Risks in everyday life.
Chance Lecture by Arnold Barnett
This lecture can be viewed here.

After a major airplane crash, Arnold Barnett, at the M.I.T. Sloan School of Management, is the one the media turns to for expert advice on the risk of flying. In his Chance Lecture at Dartmouth, Professor Barnett discussed several topics related to risks in everyday life. One of these is the risk of flying.

The risks of flying, driving, and train travel are often expressed as the number of deaths per distance traveled. Recall that this is the proposal of the committee of the Royal Statistical Society working on a "Richter" scale for risk. In Chance News 8.04, we estimated these risks in the U.S. to be:

Risk by car    1.12 deaths per billion miles
Risk by train   .88 deaths per billion miles
Risk by air     .87 deaths per billion miles

While it might be reasonable to think of the risk of driving or traveling by train as proportional to the distance traveled, Arnold argues that this is not appropriate for travel by airplane. His reason is the estimate that 90 percent of all fatal accidents occur in the takeoff or landing phase. Arnold argues that this makes it reasonable to assume that the risk of flying does not depend on the distance traveled.

Barnett describes the way he would define the risk of flying as follows: On New Year's Eve all the flights for the next five years are put in a large bowl; you reach in, pull out a flight, and take this flight. The risk is then the probability that you lose the lottery and are killed on that flight. By flight Barnett means a non-stop trip from one city to another.

Of course, this risk has to be estimated from previous years' experience. Barnett does this by determining the number of flights N in the period considered and, for each accident i, finding the proportion x(i) of people killed in this accident. The desired risk is then the sum of x(i)/N over all accidents i that occur in the period of time considered. Using data for five-year periods Arnold estimates the risk of death by flying to be about 1 in 7 million.
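Barnett's flight-lottery estimate is a two-line computation. The accident list below is invented so that the arithmetic comes out to his 1-in-7-million figure; it is not the actual NTSB record:

```python
def flight_lottery_risk(kill_proportions, n_flights):
    """Sum of x(i)/N, where x(i) is the proportion of those aboard killed
    in accident i and N is the total number of flights in the period."""
    return sum(kill_proportions) / n_flights

# Hypothetical five-year period: six fatal accidents, 35 million flights
x = [1.0, 1.0, 1.0, 1.0, 0.5, 0.5]   # proportion killed, one entry per accident
risk = flight_lottery_risk(x, 35_000_000)   # 5/35,000,000 = 1 in 7 million
```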

Arnold remarked that his first idea of how to explain how small this risk is was to say that it is comparable to winning the Massachusetts lottery. However, he then realized that most people expect to win the lottery and remarked that "this juxtaposition of their worst fears and greatest hopes did not work." Instead he found it more useful to tell them that they would have to take a flight every day for 19,000 years to have a reasonable chance of dying in an airplane crash. They would then say "hey, I'm not going to live that long!" and be at peace with flying.

We carried out this method of computing the risk of flying using data available on the National Transportation Safety Board for U.S. domestic and international flights between the years 1993 and 1998 and got a risk of about 1 in 6 million. Arnold considered only U.S. domestic flights so this may be why he got a lower risk.

Another such lottery that could be considered to define the risk of flying would be to put in the bowl all possible tickets for the period of time being considered. To be consistent with our previous assumption, we assume that separate tickets are given for each segment of a flight from city A to city B via city C. Now for this "ticket lottery", the probability that you lose the lottery can be estimated from past data by the ratio of the number of fatalities to the number of plane departures during that period.

To estimate this ticket lottery risk we again looked at the data for flights between the years 1993 and 1998. There were 7,789,367,000 enplanements (revenue-paying passengers boarding a plane) and a total of 2,207 fatalities during this period. Thus if we were to use the ticket lottery the risk would be 2207/7789367000, which means a risk of about 1 in 3.5 million as compared to the flight lottery risk of 1 in 6 million.

Peter Doyle pointed out that the choice of flight lottery or ticket lottery is the same as that in the "registrar problem": how should the average class size be determined? As a simple example of this problem assume that a college has 120 students and each student takes one of three classes. One class ends up with 100 students and the other two with 10 students each. Averaging these three classes gives an average class size of 40. However, if for each student we write down the number of people in the class chosen by the student and average these, we get an average class size of 85. Of course, the first method would give the best impression in a College Guide, but perhaps the second would be the most useful from the student point of view.
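The registrar example is easy to check in code:

```python
class_sizes = [100, 10, 10]          # 120 students, one class each

# Average over classes (the College Guide number)
course_average = sum(class_sizes) / len(class_sizes)            # 40.0

# Average over students: each student reports his or her own class size,
# so each class of size s contributes s copies of the number s.
student_average = sum(s * s for s in class_sizes) / sum(class_sizes)  # 85.0
```

The student average weights each class by its size, which is exactly how the ticket lottery weights each flight by its number of passengers.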

In his lecture Barnett also discusses the well-known claim: the drive to the airport is the most dangerous part of your airplane trip. Here he makes use of the research of three General Motors statisticians: Is it safer to fly or drive?, Evans, Frick, and Schwing, Risk Analysis, Vol. 10, No. 2, 1990.

These authors point out that, unlike the risk of an airplane trip, the risk of driving depends heavily on characteristics of the person at risk, i.e., the driver. To account for this they multiply the average overall risk for the driver by factors to adjust for age, alcohol, seat belt use, car mass, and type of road. From this they estimate the risk of a "low-risk" driver (a 40-year-old, alcohol-free, belted driver traveling on rural interstate roads in a car 700 pounds heavier than average) to be about 1 fatality per billion miles of travel.

Barnett remarks that the travel to and from the airport is typically about 15 miles each way. Thus the risk from driving to and from the airport is about 1 in 30 million, which is only about 1/4 the risk of the flight. Thus for a safe driver it is not true that the most dangerous part of the trip is the drive to and from the airport.

The risk of driving increases linearly with distance, while the risk of flying is independent of the distance traveled. Thus there is a point at which the risk of driving is equal to the risk of flying the same distance. This is at about 144 miles. Beyond this distance the safety of flying increases compared to driving the same distance. Barnett tells us that "for every hour you save by traveling by jet you add 67 seconds to your expected lifetime."
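The 144-mile break-even point follows from the two risk models: flying carries a fixed per-flight risk of about 1 in 7 million, while driving (for the low-risk driver above) accumulates about 1 death per billion miles:

```python
flight_risk = 1 / 7_000_000        # per flight, independent of distance
drive_risk_per_mile = 1 / 1e9      # low-risk driver, per mile driven

# Distance at which driving matches the risk of a single flight
break_even_miles = flight_risk / drive_risk_per_mile   # about 143 miles
```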


(1) In a paper, "Airline Safety", Management Science, Vol. 33, No. 1, January 1989, Barnett and Higgins explain why they use the flight lottery rather than the ticket lottery:

The use of deaths in the numerator is problematic for two reasons. When a Boeing 727 hits a mountain killing all passengers, the implications about safety are not three times as grave if there were 150 on board rather than 50. And a crash that kills 18 passengers out of 18 should be distinguished from another that kills 18 out of 104.

What do you think about this argument? Can you think of arguments in favor of the ticket lottery? Does the registrar example suggest that the airline companies would prefer the flight lottery and passengers the ticket lottery? Which lottery would you favor using as a measure for risk of flying?

(2) Bill Peterson suggests making the registrar example even more like the airplane example by considering the chance of a student failing a course. Let p(k) be the probability that a student fails course k. This can be estimated by f(k)/s(k), where f(k) is the number of students who fail course k and s(k) is the number of students in course k. Would you compute the failure rate by the average of the p(k)'s over all courses or by the average over student course experiences: sum(p(k)*s(k))/sum(s(k)), where the sums are over all courses k?

(3) Show that in the registrar problem the student average is always at least as great as the course-size average. Note that if our estimates are correct it is not true that the ticket average is always at least as big as the flight average.

(4) Here are two scenarios provided by Bill Peterson to show that the flight lottery can make the risk of flying seem both more and less risky than the ticket (passenger) lottery:

Flight   Passengers   Deaths   Rate
A           10             9         0.9
B           100            10        0.1

Flight average:  (0.9 + 0.1)/2 = 0.5
Passenger avg:          19/110 = 0.17

Flight   Passengers   Deaths   Rate
A           10             1         0.1
B           100            90        0.9

Flight average:  (0.1 + 0.9)/2 = 0.5
Passenger avg:          91/110 = 0.83

Bill remarks: this makes the passenger average look more reasonable as a summary. Why do you think he says this? Do you agree?
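Bill's two scenarios can be verified in a few lines:

```python
def flight_average(flights):
    """Average of per-flight death rates (each flight weighted equally)."""
    return sum(deaths / passengers for passengers, deaths in flights) / len(flights)

def passenger_average(flights):
    """Total deaths over total passengers (each passenger weighted equally)."""
    return sum(d for _, d in flights) / sum(p for p, _ in flights)

scenario_1 = [(10, 9), (100, 10)]   # (passengers, deaths) per flight
scenario_2 = [(10, 1), (100, 90)]

# Scenario 1: flight avg 0.5, passenger avg 19/110 (about 0.17)
# Scenario 2: flight avg 0.5, passenger avg 91/110 (about 0.83)
```

The flight average is blind to how many people were actually exposed on each flight, which is one way to read Bill's remark.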

(5) In an article, "The statistics of safe travel", in the book Teaching Statistics At Its Best, published by the Teaching Statistics Trust in 1994, Francis Lopez-Real argues that, when expressing the risk of methods of travel, there is no God-given reason for choosing deaths per passenger mile rather than, for example, deaths per hour of travel. To show us that any method might have its difficulties, Lopez-Real asks us to consider the following hypothetical situation:

Suppose we could travel to Jupiter and back in a space ship of the future. The distance from earth to Jupiter is approximately 390 million miles. Imagine that the ship carries four passengers on each trip. Assume that the first two trips are a complete success but on the third trip, on re-entering the Earth's atmosphere, the ship explodes and everyone is killed.

In the Richter scale proposed by RSS how would the risk of space travel compare with the risk of travel by train, car, and plane? Would you feel safe going on the next trip to Jupiter?

(6) Lopez-Real assumes that cars and trains travel about 50 miles an hour and an airplane about 500 miles an hour. Using this, he transforms deaths per passenger mile into deaths per hour. Using the data we have provided at the beginning of this review, show that using deaths per passenger hour of travel makes traveling by air more dangerous than traveling by train or car. How would you decide which measure of risk to recommend to your Uncle George if he asks you "what is the safest method of travel?"
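Questions (5) and (6) both reduce to unit conversions on the figures given at the beginning of this review:

```python
# Deaths per billion passenger miles (from the table earlier in this review)
per_billion_miles = {"car": 1.12, "train": 0.88, "air": 0.87}
speeds_mph = {"car": 50, "train": 50, "air": 500}   # the assumed speeds

# (6) Deaths per billion passenger hours: rate per mile x miles per hour
per_billion_hours = {m: per_billion_miles[m] * speeds_mph[m] for m in speeds_mph}
# car 56, train 44, air 435: by this measure air travel is the most dangerous

# (5) The Jupiter trips: 3 round trips of 780 million miles with 4 passengers
# each, and 4 deaths in all.  Deaths per billion passenger miles:
passenger_miles = 3 * 4 * 780e6
jupiter_rate = 4 / (passenger_miles / 1e9)   # about 0.43 -- "safer" than a car!
```

By the per-mile measure, a spaceship that explodes on one trip in three comes out safer than driving, which is a vivid warning about reading too much into any single risk scale.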

Playing it safe can sure be risky.
David Leonhardt
The New York Times
October 24, 1999, pg. 3

This article illustrates, using examples that occurred during the last baseball playoff season, a common error that people make when comparing the risks of various consequences of a decision. This error is called the principle of availability, a name given to it by Amos Tversky and Daniel Kahneman in 1982. It refers to a situation in which the consequences of a decision in one direction are much easier to imagine (i.e., more 'available') than the consequences of making the opposite decision.

The two baseball incidents that illustrate this principle both involve an intentional walk. In the first case, the Mets and the Braves were tied at 9-9 in the 11th inning of a playoff game, and the Braves had runners at first and third base. A single run for the Braves would win the game and the playoff series for them. Rather than pitch to Brian Jordan, the Mets' manager elected to intentionally walk him and have his pitcher pitch to Andruw Jones, who is considered less of a threat at the plate than Jordan. The manager found it easier to visualize Jordan making a base hit (which would happen less than one-third of the time) than the alternatives, which were that the pitcher might make a mistake when pitching to the next batter, or that the next batter might get a base hit. The fact that the bases were loaded after the intentional walk probably increased the probability of these last two possibilities, since the pitcher no longer had any 'wiggle room.' Indeed Jones got a walk and the game was over.

The second incident involved the Indians and the Red Sox. In this case, the Indians issued an intentional walk to Nomar Garciaparra to load the bases. The next batter hit a grand slam to give the Red Sox a 7-5 lead. This reviewer thinks that this decision was not nearly as questionable as the one made by the Mets, because Garciaparra had the highest batting average in the American League this season (and his average was more than 50 points higher than the hitter who followed him) and it was only the third inning of the game, so the pitcher still had plenty of wiggle room.

Examples of this principle in real life abound. As remarked earlier some people, after hearing of a plane crash, will opt to drive to their destination rather than take a plane, even though travel by plane is safer. (In this case, plane crashes make the news, while car crashes are less newsworthy, leading to a situation in which it is easier to imagine one than the other.) People buy burglar alarms rather than getting rid of the gun that they keep in the house, because it is easier to imagine a burglary than an accident involving the gun.

Baseball's all-time best hitters.
Michael J. Schell
Princeton University Press, 1999

Michael Schell is Associate Professor of Biostatistics at the University of South Carolina and a long-time baseball fan. In this book he combines these two interests in an attempt to determine who the 100 best hitters in major league baseball have been. While there are obviously many ways to define "best hitter", Schell takes as a starting point the 100 best lifetime batting averages as listed in Total Baseball. The top ten are:

1.  Ty Cobb           .366    1917
2.  Rogers Hornsby    .358    1926
3.  Joe Jackson       .356    1914
4.  Ed Delahanty      .346    1896
5.  Ted Williams      .344    1950
6.  Billy Hamilton    .344    1895
7.  Tris Speaker      .344    1918
8.  Dan Brouthers     .342    1892
9.  Babe Ruth         .342    1925
10. Harry Heilmann    .342    1923

The date is the mid-career point for the player.

He then develops an adjusted batting average by adjusting these raw averages to take into account

(1) Late career declines
(2) Changes in the rules
(3) Changes in league batting talent
(4) Differences in ball parks.

The top ten by his adjusted batting averages are:

1.  Tony Gwynn       .342 
2.  Ty Cobb          .340   1916 
3.  Rod Carew        .332   1976 
4.  Joe Jackson      .331   1915 
5.  Rogers Hornsby   .330   1923 
6.  Ted Williams     .327   1949 
7.  Honus Wagner     .326   1907 
8.  Stan Musial      .325   1952 
9.  Wade Boggs       .324 
10. Nap Lajoie       .322   1906 

The effect of the adjustments is to give current players a position in the top 10 and, more generally, to make the top 100 spread more or less uniformly over the entire period of major league baseball (rather than being concentrated in the early days of baseball, as is the case when using raw averages).

The process of making these adjustments illustrates many basic statistical concepts. Histograms and time plots are used liberally in the process of determining which changes in the rules made it significantly easier or harder for the players to get hits. As Stephen Jay Gould observed in his explanation for the disappearance of .400 hitters, the league batting averages have not changed very much through the years but the standard deviation has significantly decreased. Gould's arguments lead Schell to consider this decrease in standard deviation as evidence for an overall increase in the quality of the hitters. He then uses the league standard deviation to adjust batting averages to take this improvement into account.
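One simple version of a standard-deviation adjustment (our sketch, not Schell's exact procedure, and with invented league figures) rescales each player's distance from his league's mean onto a common reference scale:

```python
def sd_adjusted_average(avg, league_mean, league_sd,
                        ref_mean=0.270, ref_sd=0.030):
    """Express a raw average as the same number of league standard
    deviations above (or below) a fixed reference league."""
    z = (avg - league_mean) / league_sd
    return ref_mean + z * ref_sd

# Invented illustration: a .366 hitter in a high-spread early league
# (mean .270, SD .040) counts for less than the raw number suggests.
adjusted = sd_adjusted_average(0.366, league_mean=0.270, league_sd=0.040)
# adjusted is about .342
```

The idea is that being 2.4 standard deviations above one's league means the same thing in any era, so eras with large spreads (where stragglers inflated the spread) no longer dominate the rankings.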

Of course, throughout, the author needs to determine whether the observed changes in league batting averages are due to changes in the rules, differences in parks, etc., or just chance fluctuations. For example, regression analysis is used to show that there are significant differences in hitting performance between different ball parks.

While the principal aim of the author is clearly to give a more rational method for comparing the great hitters of baseball, at the same time this beautifully written book shows statistics in action in the context of the great American pastime.


(1) Can you live with Ty Cobb being number 2 and Babe Ruth not even in the top ten hitters?

(2) For players with more than 8000 at-bats, Schell uses the batting average for the first 8000 at-bats instead of the lifetime batting average, so as not to penalize the player who continues to play beyond the age where he could be expected to hit up to his natural ability. Do you agree with this adjustment?

(3) Do you think it would be better to try to develop a ranking that takes into account other aspects of hitting, such as home runs? If so, how would you do this?

Obesity linked to short life span.
The Boston Globe, 7 October 1999, pA3
Katherine Webster (Associated Press)

Even with all the conflicting health claims found in the news, you might have thought the experts at least agreed that being overweight was bad for you. Still, according to Dr. JoAnn Manson of Harvard, some have had "lingering questions about whether weight alone increases the risk for disease." But now the case may be closed, thanks to a recent study in the New England Journal of Medicine (Body-Mass Index and Mortality in a Prospective Cohort of U.S. Adults, Calle et al., 7 October 1999, Vol. 341, No. 15; see also the accompanying editorial.)

There was one notable exception to the findings: among black women, the most obese did not have significantly higher risk than those who were not overweight. According to June Stevens, a nutritionist at the University of North Carolina, this trend has shown up in other studies as well. However, Stevens and Manson both feel that the present study understates the risks to obese black women. As explained in the article, "slender, nonsmoking black women have a higher risk of death to begin with than their white counterparts, probably because they have less access to health care."


(1) How do you think the study isolated the risk attributable to "weight alone?"

(2) The article notes that, in contrast to some previous research indicating that weight is less of a problem as we age, the new study found increased death rates among overweight people in all age groups. How do you think there could have been confusion on this point?

(3) Wouldn't problems of health care access apply to all black women? Wouldn't you expect obesity to create additional effects?

Are fund managers irrelevant? An 18th-Century theory suggests not.
New York Times, 10 Oct. 1999, sec.3 p.7
Mark Hulbert

Should investors avoid all actively managed mutual funds? A study in Bayesian performance evaluation. Klaas Baks, Andrew Metrick, and Jessica Wachter. See Papers on the Web.

A long-running debate in financial circles concerns the question of whether there exist any mutual fund managers who can consistently outperform market index funds. Actively managed funds consist of a set of stocks that are chosen by the funds' managers. This set changes over the years, as the managers react to perceived changes in the market. An index fund is a set of stocks that is chosen so that the fund's return matches the return of a certain stock market index, such as the S&P 500 or the NASDAQ average.

Index mutual funds have a 'head start' relative to actively managed mutual funds in two respects. Generally speaking, the transaction costs, i.e. the costs associated with buying and selling the stocks in the funds, are lower for index funds than for actively managed funds, since there is very little turnover in the first type of fund. In addition, the management fees, i.e. the annual amount that is needed to pay the salaries of the fund managers, are smaller for index funds, since very little management of such funds is needed.

Several studies have been conducted to determine whether any managers consistently outperform the stock market indices. These studies have generally shown that the actively managed mutual funds do not outperform the indices, after expenses and fees. In any study of mutual fund returns, one will always find some that outperform the average fund. If the average fund performance is close to the performance of the indices, one will find some mutual funds that outperform these indices. One usually even finds a few mutual funds that outperform the indices over long stretches of time. The central question is whether these 'over-performing' funds are the result of randomness or manager skill. The conclusions of these studies have tended to be that one cannot reject the hypothesis that the over-performance of these funds is due to chance.

The present article adopts a different approach to the problem. One starts with a function f that represents the 'prior beliefs' that a given investor has concerning the skills of mutual fund managers. This investor may believe that, for any real number r, a certain percentage of mutual fund managers will achieve a return that differs from the market by r (this number r may be positive or negative, and it takes into account the expenses and fees mentioned above). The function f encodes these beliefs in the form of a distribution function. This distribution may have a negative mean, which corresponds to the investor believing that, on average, he or she would be better off investing in index funds. Nevertheless, the distribution may still assign positive probability to positive values of r, corresponding to managers with genuine skill (although the investor may not know in advance which managers these are). Other investors may hold priors that assign no probability at all to positive values of r. Of course, different prior belief functions will yield different conclusions about which sorts of funds one should invest in.

The authors then use a Bayesian analysis to compute posterior estimates of the expected rates of return for the actively managed funds. If some funds have expected rates of return that exceed the index returns, then one should invest in these funds. The point the authors are making is that, even if the prior belief function posits only a very small but positive probability that there exist fund managers who can outperform the market, the posterior analysis will identify some funds with a positive expected excess return.
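The flavor of such a posterior computation can be sketched with a simple normal-normal update. This is a toy stand-in for illustration, not the authors' actual model, and all the numbers below are assumptions:

```python
import math

# A minimal sketch (not the paper's model): a conjugate normal-normal
# Bayesian update for a single fund's annual excess return ("alpha").
def posterior_alpha(prior_mean, prior_sd, sample_mean, sample_sd, n_years):
    """Posterior mean and sd of alpha after observing n_years of excess returns."""
    prior_prec = 1.0 / prior_sd**2          # precision of the prior
    data_prec = n_years / sample_sd**2      # precision of the sample mean
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * sample_mean)
    return post_mean, math.sqrt(post_var)

# Prior: managers underperform by 1% a year on average (reflecting fees),
# but a prior sd of 2% leaves some probability that a given manager has skill.
post_mean, post_sd = posterior_alpha(
    prior_mean=-0.01, prior_sd=0.02,
    sample_mean=0.03, sample_sd=0.10, n_years=10)
```

With ten years of returns averaging 3% above the index, the posterior mean of alpha comes out slightly positive even though the prior mean was negative, so this investor would allocate something to the fund.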

We now quote from the conclusion of the paper:

Should investors avoid all actively managed mutual funds? The average active fund underperforms index funds on a risk-adjusted basis. Skilled management, if it exists at all, is difficult to detect. When we analyze a sample of 1437 managers extant at the end of 1996, we cannot reject the null hypothesis that the best performance is due to chance. These facts by themselves might lead investors to shun actively managed funds. Our analysis shows that this conclusion is premature. Given our current methods of performance evaluation, the prior beliefs necessary to support some investment in active managers could not possibly be distinguished from 'zero skill among managers' unless we could observe hundreds of thousands of managers over many decades. Thus, we conclude that the case against actively managed funds cannot rely solely on the statistical evidence.

This reviewer is left with the following questions about this article. First, even if we have prior beliefs that include the possibility of managers who can outperform the market, is it clear that we can find these managers by observing the data? Second, is it reasonable that someone's prior beliefs about how the world works affect the way it actually works? These questions should not be taken as criticism of the article under review; rather, they should serve as points for discussion.


Are you convinced by the authors' conclusion that "the case against actively managed funds cannot rely solely on the statistical evidence"? What do they mean by this?

Number crunching column.
The Toronto Sun, 10 October 1999, p55
Maryanna Lewyckyj

This piece is presented as "a roundup of numbers in the news recently."

Some seem like straightforward tabulations. For example:

US$ 2.22 million: Total amount of damages awarded to 13 passengers traumatized by severe turbulence on a 1995 American Airlines flight.

US$ 215,000: Largest amount awarded to an individual passenger.

28: Seconds of turbulence on the flight.

Others seem to involve estimates, though the methodology is not described. For example:

302: Hours per year the average person spends listening to voice mail and responding to messages.

10.7%: Percentage of work week the average white-collar employee spends looking for misfiled or lost information.

5.1: Average hours per week employees spend "futzing" with their computers (attending courses, waiting for their computer to process a command, solving computer problems.)


(1) Do you think there were only 13 people on the flight? Why do you think the damage awards differed?

(2) What population is being referred to in the "averages" from the workplace examples? What problems do you see with the estimates being made?

A disputed study suggests possible harm from genetically altered food.
The New York Times, 15 October 1999, pA29
Andrew Pollack

The British medical journal The Lancet has decided to publish a controversial study purporting to show that genetically altered potatoes are unsafe. Dr. Richard Horton, editor of the journal, concedes that the findings are "preliminary and non-generalisable, but at least they are now out in the open for debate." Also appearing in the journal is a critique of the paper by a research group from the Netherlands. Some of The Lancet's own referees had expressed concerns about the paper during the peer review process.

The research, conducted by researchers Arpad Pusztai and S.W.B. Ewen of Scotland, investigated potatoes implanted with a gene from the snowdrop plant. The gene allows the potatoes to produce a protein known as a lectin, which makes the plants more resistant to insects. Rats that were fed these genetically altered potatoes showed a thickening of their stomach walls and parts of their small intestines. In the control groups, rats were fed either normal potatoes or potatoes "spiked" with lectin. Some rats in the "spiked" group experienced the stomach effects, but none of the controls exhibited the intestinal changes. The authors inferred that these latter effects could be attributed to the process of genetic engineering itself, rather than simply to the lectin. The Netherlands team, in its critique, argued that the effects reported were inconsistent. Furthermore, they cited unexplained differences between the normal potatoes and genetically altered potatoes that could not be accounted for by the genetic engineering process.

Editors from other journals criticized the decision to publish, and questioned the rationale of going ahead just to make the data available to the public. Floyd Bloom, the editor of Science had this comment: "If you're going to take it just because it's controversial, well, there are a whole lot of controversial things."


Why do you think The Lancet decided to publish the study?

FBI's report of falling crime greeted by applause, debate.
The Boston Globe, 18 October 1999, pA1
Lorraine Adams and David A. Vise (Washington Post)

The US crime rate has been dropping for most of this decade. The 1998 data just released by the FBI document a seventh straight year of decline. Attorney General Janet Reno cited a number of factors contributing to the trend, including more police officers, increased cooperation among law enforcement groups, and continuing gun control efforts.

Among criminologists, however, there is no consensus about the cause of the trend. One theory has held that the decline in the number of 15-24 year olds in the population is largely responsible, since this age group has historically had the highest crime rate. But Steven D. Levitt of the University of Chicago has shown that even after adjusting for demographic changes, the crime rate has still declined. Murkier still are the effects of our increasing incarceration rate, which continue to be widely debated.

Moreover, as James Alan Fox of Northeastern University explains, local trends can get masked in the national average. Although the national homicide rate is at a thirty-year low, the same is not true for all population groups in all parts of the country. Furthermore, we have not experienced thirty years of decline. According to the article, while youth homicide rates have dropped by half over the last five years, they are still twice as high as fifteen years ago. Fox also points out that the biggest declines have been among black males, a group that saw large increases during the crack epidemic in the 1980s. White teens in suburban areas have not experienced comparable decreases.

An August article in the Journal of Research in Crime and Delinquency noted:

The rise and now the sudden drop in crime rates offers a kind of natural experiment for investigating the macro forces that shape crime trends...Because it ranks as one of the most pressing theoretical and policy issues...today, the lack of systematic research on crime trends of the past decade or so is both surprising and disappointing.


What is meant by the phrase "natural experiment" in the last paragraph? What kind of analysis is being called for?

What is your real age?
The Boston Globe, 18 October 1999, pD4
Matt Villano

Dr. Michael Roizen of the University of Chicago has written a best-selling book based on his formula for computing people's RealAge, which he describes as their biological--rather than chronological--age. The formula quantifies how lifestyle choices will ultimately affect length of life. Roizen's idea is that directly presenting the impact on longevity will give people a tangible incentive to adopt healthy behaviors.

As described in the article,

...the RealAge formula is based on the same equation economists use to calculate the net-present value of investments. Take your age. Subtract a year and a half if you ride a motorcycle. Add eight years if you smoke. Subtract half a year for a sense of humor. Then get ready for some serious math--based on their research of more than 25,000 clinical trials, Roizen and a team of scientific investigators have uncovered more than 120 behaviors that affect biological age.
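Using only the three adjustments quoted above, the additive idea can be sketched as follows (the actual formula weighs more than 120 behaviors and is not published in this form):

```python
# Sketch of the additive RealAge idea using only the adjustments quoted
# in the article; the real formula involves 120+ behaviors.
ADJUSTMENTS = {
    "rides_motorcycle": -1.5,  # as quoted; see discussion question (1)
    "smokes": +8.0,
    "sense_of_humor": -0.5,
}

def real_age(chronological_age, habits):
    """RealAge = chronological age plus the sum of applicable adjustments."""
    return chronological_age + sum(ADJUSTMENTS[h] for h in habits)

# A 40-year-old smoker with a sense of humor:
print(real_age(40, ["smokes", "sense_of_humor"]))  # 47.5
```

Note that simply summing adjustments assumes the behaviors act independently on longevity, which is exactly what discussion question (2) below asks you to evaluate.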

Roizen has set up a web site where you can learn your own RealAge. This requires that you complete an on-line questionnaire about your general health and behavior. It contains 10-25 questions in each of seven categories, and, according to the article, it can take up to two hours to complete (Your reviewer confesses that he was not excited about this prospect--or about sending out medical information over the internet!). The estimate of your RealAge comes with individualized recommendations about what you can do to reduce it. This amounts to adding years to your life, if you believe Roizen's principle. While use of the web site is free, the article cautions that you will receive follow-up advertisements from the RealAge Drugstore.


(1) The description of the formula implies that riding a motorcycle increases your life span. What do you think about this?

(2) Does it make sense that the effects described (motorcycle, smoking, humor) would be additive?


Chance News
Copyright © 1998 Laurie Snell

This work is freely redistributable under the terms of the GNU General Public License as published by the Free Software Foundation. This work comes with ABSOLUTELY NO WARRANTY.

