CHANCE News 7.05

(27 April 1998 to 26 May 1998)


Prepared by J. Laurie Snell, Bill Peterson and Charles Grinstead, with help from Fuxing Hou, Ma. Katrina Munoz Dy, Pamela J. Lombardi, Meghana Reddy and Joan Snell.

Please send comments and suggestions for articles to jlsnell@dartmouth.edu.

Back issues of Chance News and other materials for teaching a Chance course are available from the Chance web site:

Chance News is distributed under the GNU General Public License (so-called 'copyleft'). See the end of the newsletter for details.


I guess I think of lotteries as a tax on the mathematically challenged.
Roger Jones

Contents of Chance News 7.05


On May 20 the Powerball lottery established a new jackpot record: the jackpot was $104.3 million in cash, or $7.7 million a year for 25 years, depending on which choice the buyer made. There was one winner.


In the Powerball lottery you choose 5 distinct numbers from 1 to 49 for the white balls and one number from 1 to 42 for the red Powerball. To win the jackpot you must get them all correct. 138.5 million tickets were sold. Assuming that all tickets were machine picked (over two thirds are), what was the chance, before the drawing, that there would be a winner? That there would be more than one winner?
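Here is a sketch of one way to answer these questions, treating every ticket as an independent random pick (a simplification, since about a third were actually self-picked), so that the number of jackpot winners is approximately Poisson:

```python
import math

# Number of equally likely Powerball plays: choose 5 white balls
# from 49, times 42 choices for the red Powerball.
combos = math.comb(49, 5) * 42          # 80,089,128

tickets = 138.5e6                       # tickets sold
lam = tickets / combos                  # expected number of winners

# With millions of independent random picks, the number of jackpot
# winners is very nearly Poisson with mean lam.
p_none = math.exp(-lam)
p_at_least_one = 1 - p_none
p_more_than_one = 1 - p_none - lam * p_none

print(f"expected winners:        {lam:.2f}")
print(f"P(at least one winner):  {p_at_least_one:.2f}")
print(f"P(more than one winner): {p_more_than_one:.2f}")
```

With these assumptions there was about an 82% chance of at least one winner, and better-than-even odds of more than one.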

Hope in the lab: A special report
A cautious awe greets drugs that eradicate tumors in mice
The New York Times, 3 May 1998, Section 1, Page 1
Gina Kolata

Tests on mice block a defense in cancer
The New York Times, 27 November 1997, Section 1, Page 28
Nicholas Wade

These two articles provide a case study of how the way a science writer describes a scientific discovery can affect public reaction to the news.

Both articles discuss the research of Judah Folkman and his colleagues on the use of two drugs, angiostatin and endostatin, to cure cancer in mice. The drugs work by interfering with the blood supply of the tumors.

The Wade article was based on research reported in Nature (Vol. 390, p. 404, November 1997). This article reported that angiostatin and endostatin working together could cause tumors in mice to shrink to microscopic size.

Both articles speak of the amazement of fellow scientists at these discoveries. What are the differences?

The opening paragraph in the Wade article remarks that this research on mice "may well prove relevant to the treatment of human cancer."

The opening paragraph of the Kolata article is:

Within a year, if all goes well, the first cancer patient will be injected with two new drugs that can eradicate any type of cancer, with no obvious side effects and no drug resistance --- in mice.
In the Wade article, Dr. Folkman states that "we are several years away from clinical trials."

In the Kolata article, Nobel Prize winner James Watson, director of the Cold Spring Harbor Laboratory, is quoted as saying that "Judah is going to cure cancer in two years."

Both articles describe the same drugs and experiments. The Kolata article does a better job of describing how the drugs work, and her style is much more dramatic. This is particularly true in her description of an experiment with 20 mice. The mice had large tumors, which were removed. Then 10 of the mice were injected with angiostatin and 10 with water. After 15 days all ten mice given the drug had no tumors, and all ten given water had huge new tumors. The Kolata article also included numerous references to the fact that the drugs needed to be tested on humans and that results for humans are often quite different from those for mice.

The Wade article had no great effect on the public. The Kolata article raised the hopes of cancer patients, who called their doctors and researchers and pleaded to be given these new drugs. It caused the stock price of the company that owns the rights to the drugs to rise in one day from $12 to $51.80. The cancer story was the cover story of Time, Newsweek and USA Today.


(1) What do you think caused the difference in the response to the two articles describing the same research? Was it the result of the dramatic statement by Watson? The vivid descriptions of the cures in mice? The fact that the Kolata article was on the front page and the Wade article on page 28?

(2) Time magazine gave a reasonably balanced discussion of the drugs and the danger of all the hype about them. However, the cover had the word "Cancer" in huge letters with a big X written over it. At the bottom, in smaller letters, we see: How to tell the hype from the hope. Newsweek and USA Today had similar covers suggesting a cancer cure. Why do you think the covers were not consistent with the cautionary approach of the articles?

(3) In a letter to the editor, Watson stated that he had talked to Gina Kolata about Folkman's work at a dinner they both attended, but that he had not made the statement quoted in the article. The Times stated that it was standing by its story. Who do you think is right? Even if he did say "Judah is going to cure cancer in two years," should Kolata have confirmed it in a more formal setting?

(4) After her article appeared, a book agent sent an e-mail message to Kolata suggesting that she submit a proposal for a book on the drugs. The agent suggested that he could get her a $2 million contract. Kolata provided the proposal but, after consulting with the Times editors, withdrew it. Kolata wrote a very successful book on cloning after breaking the cloning story. What is the difference?

Hans van Maanen is science editor for Het Parool, an evening newspaper in Amsterdam, The Netherlands. He sent us an article of his that appeared in Het Parool on 2 April 1998, pp. PS 2 and 3. Fortunately we were able to have it translated by one of our students. We found it an interesting article in that the writer himself makes a statistical argument rather than reporting the arguments of others. We received permission to post the translation on our Chance web site; you can find it under Teaching Aids. Here is a commentary on the article from Hans.

In Holland, as I'm sure in other countries as well, there has been a furor over organ transplants and homosexuality. The fear of AIDS strikes again: according to the Council of Europe, homosexuality is a ground for exclusion from organ and tissue donation.

In the article, I try to show that admitting homosexual men to the donor pool would lead to one extra case of HIV contamination for every 30,000 transplants. With 1,200 organ transplants and 5,000 tissue transplants per year, this leads to one extra HIV contamination in eight years. There are about 200 new cases of HIV in Holland each year. This contrasts sharply with the risk of getting cancer through a donated organ - about 400 cases in eight years - and with the risk of a number of more or less serious complications from other bacteria and viruses (mainly CMV) that went unnoticed.

Finally, by excluding homosexuals, one does not catch the men one wants to catch. In Holland, it turns out that bisexual men do not practice safe sex the way homosexual men do; bisexual men pose a greater risk than homosexual men. But even so, one should not ask about an abstract entity like sexual preference but about the actual sexual practices of the deceased. It seems no one in Holland has done these calculations before - not the Council of Blood Transfusions, not Eurotransplant, not the Department of Health. Have they been done in other countries? I wonder.


(1) Do you think an article like this includes too many numbers for a newspaper in the U.S.?

(2) An article in the Chicago Tribune on 17 Sept. 1997 reports:

In a move that other medical centers challenge on medical and moral grounds, University of California-San Francisco Medical Center is offering organ transplants to people with the AIDS virus.
The article goes on to say that HIV patients would receive organs from donors who are HIV-negative but considered "high risk" for infection - people who are gay, have multiple sex partners, or have a history of intravenous drug use - organs that would otherwise have been thrown away.

What do you think the medical and moral arguments are?

The Spring issue of Chance Magazine arrived and, as usual, has a number of interesting articles. Maya Bar-Hillel, Dror Bar-Natan, and Brendan McKay write on their research on the Bible codes, which we have reviewed in previous issues of Chance News. Another article we particularly enjoyed was the following:

A selection of selection anomalies
Chance Magazine, Vol. 11, No. 2, Spring 1998, pp. 3-8
Howard Wainer, Samuel Palmer, and Eric T. Bradlow

If you are tired of using the Literary Digest story to show the dangers of using a non-random sample, this article will give you a number of interesting alternatives.

The authors discuss four examples which they describe as: (1) The Most Dangerous Profession, (2) Scientific Publication Is Getting Easier, (3) What do Changing SAT Scores Mean, and (4) Bullet Holes and a Model for Missing Data.

Example (4) is a classic. It came from the work of Abraham Wald in the second world war. Wald was asked to help decide where to add extra armor to airplanes on the basis of patterns of bullet holes on returning aircraft. Wald observed that the hits seemed to occur randomly over the plane except for certain regions where there were no hits. He reasoned that the planes that had been hit in these regions were the ones that had not returned and recommended further protection for these regions.

Example (3) is the kind of non-random sample that appears regularly in the news. From 1962 to 1980 the average total SAT score steadily declined from 1080 to about 900. In 1972 the College Board declared that this was primarily because students were taking watered-down courses. A similar explanation was given in a front-page story in the New York Times in 1982. Another news story blamed the fall-out from above-ground nuclear tests. Time magazine blamed it on television, divorces, and the softening of standards. Newsweek gave the explanation that most of us would give: the decline was the result of a larger proportion of high school students taking the SAT, and, within this group, more minorities who had not traditionally done as well on standardized tests. However, the authors remark that "this interpretation falls flat on its face when you look at the data since 1985".
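Newsweek's compositional explanation is a shift-in-the-mix effect, and a toy calculation shows how it can work. All numbers below are hypothetical, chosen only to reproduce the reported decline from 1080 to about 900:

```python
# All numbers here are hypothetical, chosen only to reproduce the
# reported decline from 1080 to about 900.  Suppose there are two
# groups of test takers whose average scores never change, while the
# pool shifts toward the group with the lower average.
traditional_mean = 1080   # assumed average for the traditional pool
newer_mean = 880          # assumed average for the newer test takers

def overall_mean(frac_traditional):
    """Overall SAT average for a given mix of the two groups."""
    return (frac_traditional * traditional_mean
            + (1 - frac_traditional) * newer_mean)

print(overall_mean(1.00))   # 1962-style pool: 1080.0
print(overall_mean(0.10))   # 1980-style pool: about 900
```

The overall average can fall substantially even though neither group's average moves at all; only the mix changed.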

We leave examples (1) and (2) for you to enjoy when you read the article.


(1) To justify their remark that the minority explanation "falls flat when you look at the data since 1985", the authors provide a graph showing that, from 1985 to the present, SAT scores rose from an average of 900 to an average of about 1015, while the proportion of non-Hispanic whites in the U.S. population dropped steadily from about 83% to 68%. Would it have been better to show that the proportion of non-Hispanic whites among those who took the SAT steadily declined during this period? Do you think it did?

(2) As reported in Chance News 4.05, Abelson, in his book "Statistics as Principled Argument," discussed a study that found that the average life expectancy of famous male orchestral conductors was 73.4 years, significantly higher than the life expectancy for males, 68.5. Jane Brody reported in her New York Times health column that this was thought to be due to the arm exercise a conductor gets. A reader wrote a letter to the editor pointing out that this difference could be explained simply in terms of a non-random sample. What was his explanation?

Best-of seven playoff series
Chance Magazine, Vol. 11, No. 2, Spring 1998
Hal S. Stern (Column editor: A statistician reads the sports page)

Hal Stern devotes this column to a discussion of the binomial model for sports competitions that have a best-of-seven playoff series.

Stern remarks that the 1952 article by Fred Mosteller (The World Series Competition, Journal of the American Statistical Association, Vol. 47, pp. 355-380) was one of the first research articles applying probability and statistics to sports. We remark that another of the first such studies was made by Jacob Bernoulli himself who, in his famous Ars Conjectandi, applied Bernoulli trials to tennis strategy. See discussion question (4).

The Bernoulli model assumes independence between games and that each team has a constant probability of winning a game throughout the series. Mosteller found both of these assumptions reasonable based on the data in the first half of the century.

Hal examines these same questions using the more complete data that we now have. He concludes that the independence assumption is still reasonable but the constant-probability assumption is not. One reason for this is that the home team advantage in baseball seems real.

Best-of-seven playoffs also occur in professional basketball and hockey. Unlike baseball, in these sports the teams competing in the playoff have also played against each other a number of times in the regular season. For these sports, Hal applies a logistic regression model to consider how playing at home and the winning ratio for the two teams during the regular season affect the outcome of the playoff. He finds that, for basketball, both the home team variable and the regular-season win ratio are predictors for the outcome of the playoff series while, for professional hockey, the win ratio is highly significant but the home team advantage is not.


(1) Hal points out that "one interesting finding in World series data is that there have been many more series lasting the full seven games than six games (33 vs. 21)". He asserts that the best we could expect with a binomial model would be to have an equal number of 6-game and 7-game series. Why is this?

(2) If you assume a Bernoulli model for a World Series between the Yankees and the Dodgers, with the Yankees having probability .55 of winning each game, how likely do you think it is that the Yankees will win the series? (See Hal's article to check your answer.)

(3) Consider again a Yankee-Dodger series where the Yankees win each game with probability .55. How would you find the number of games required to give the Yankees a 95% chance of winning the series? (Hint: As Stern remarks, in a best out of 2n+1 games series, you could require that teams play all 2n+1 games and the winner would not be changed.)

(4) Evidently, in Bernoulli's time it was the custom in tennis to equalize a game by allowing the weaker player a single free point at any time during the game. If you are the weaker player, and assuming a Bernoulli model for the outcomes of the points, when should you choose to take your free point?
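For questions (1)-(3), the binomial-model calculations can be sketched as follows. Note that the ratio P(6-game series)/P(7-game series) works out to (p^2 + q^2)/(2pq), which is at least 1 and equals 1 only when p = 1/2 - which is why equal counts of 6- and 7-game series are the best the model can do:

```python
from math import comb

def series_win_prob(p, n_games=7):
    """P(the p-team wins a best-of-n_games series).  As Stern notes,
    this equals P(it wins a majority when all games are played)."""
    need = n_games // 2 + 1
    return sum(comb(n_games, k) * p**k * (1 - p)**(n_games - k)
               for k in range(need, n_games + 1))

def length_probs(p):
    """P(a best-of-seven series lasts exactly 6 or exactly 7 games)."""
    q = 1 - p
    # 6 games: the winner is up 3-2 after five games, then wins game 6.
    p6 = comb(5, 3) * (p**4 * q**2 + q**4 * p**2)
    # 7 games: the teams are tied 3-3 after six games.
    p7 = comb(6, 3) * (p * q)**3
    return p6, p7

print(round(series_win_prob(0.55), 3))   # 0.608
p6, p7 = length_probs(0.5)
print(p6, p7)                            # 0.3125 0.3125
```

So a .55 team wins a best-of-seven series only about 61% of the time, and for any p other than 1/2 the model makes 6-game series more likely than 7-game series - the opposite of what the World Series data show.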

Ask Marilyn
Parade Magazine, 3 May 1998
Marilyn vos Savant

A reader writes:

When I was a boy, my Grandfather took me to baseball games and paid our way with side bets. First, he'd bet that the winning team would score more runs in one inning than the losers scored in the whole game. He told me this is true over 75% of the time, but I have no idea where he got his information. Can you support this with any solid research?

Marilyn reports that David W. Smith at the University of Delaware told her that this phenomenon is referred to as the "big bang theory" and that it occurs in nearly 50% of the games. Harold Brooks told us that the term "big bang" is due to Bill James who verified the 50% figure by looking at World Series games between 1903 and 1983.

We also consulted Hal Stern who wrote:

I have one year's worth of data on line which led to two approaches:

1) directly from the data (1986 National League)

Pr(winning team's best inning > losing team's total) = 419/968 = .43

Pr(winning team's best inning >= losing team's total) = 651/968 = .67

2) The 1986 NL data (968 games, first 8.5 innings, 16456 total half-innings) give the following distribution for runs scored per half-inning:

runs    freq     prob  p(n)/p(n-1)

0      12087    .7345    ----- 
1       2451    .1489    .2028 
2       1075    .0653    .4386 
3        504    .0306    .4688 
4        225    .0137    .4464 
5         66    .0040    .2933 
6         29    .0018    .4394 
7         12    .0007    .4138 
8          5    .0003    .4167 
9          2    .0001    .4000
If we use this to simulate baseball games (assuming independent, identically distributed (iid) half-innings), we get:

Pr(winning team's best inning > losing team's total) = .46

Pr(winning team's best inning >= losing team's total) = .68

From this we learn the iid model is not too bad. This also supports Marilyn's answer and maybe suggests where the .75 came from, namely from Grandfather really betting that the winning team would score as many runs in one inning as the losing team did in the whole game.
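Stern's simulation is easy to reproduce. Here is a minimal sketch using his half-inning frequencies; since his note does not say how ties or the home team's possibly unplayed ninth half-inning were handled, this version simply scores nine half-innings per team and discards tied games:

```python
import random

random.seed(1998)

# Runs-per-half-inning frequencies from Stern's 1986 NL data above.
runs = list(range(10))
freq = [12087, 2451, 1075, 504, 225, 66, 29, 12, 5, 2]

def simulate_game():
    """Score nine iid half-innings per team; return the winner's best
    inning and the loser's total, or None for a tie."""
    a = random.choices(runs, weights=freq, k=9)
    b = random.choices(runs, weights=freq, k=9)
    if sum(a) == sum(b):
        return None
    winner, loser = (a, b) if sum(a) > sum(b) else (b, a)
    return max(winner), sum(loser)

n_games = big_bang = big_bang_or_tie = 0
while n_games < 20000:
    result = simulate_game()
    if result is None:       # discard ties; a real game would go to
        continue             # extra innings
    best, total = result
    n_games += 1
    big_bang += best > total
    big_bang_or_tie += best >= total

print(f"P(best inning >  losing total) ~ {big_bang / n_games:.2f}")
print(f"P(best inning >= losing total) ~ {big_bang_or_tie / n_games:.2f}")
```

With 20,000 simulated games the estimates land close to Stern's .46 and .68.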


(1) Why do you think Hal was interested in looking at p(n)/p(n-1)?

(2) What aspects of the game would suggest that the runs scored in innings might not be identically distributed random variables?

Global-scale temperature patterns and climate forcing over the past six centuries
Nature, 23 April 1998, 779-787
Mann, Bradley, and Hughes

Large-scale temperature measurements only go back about 100 years. This relatively short history of temperature records makes it hard to assert unequivocally that the 0.3-0.6 degree centigrade global warming during this century is due to human-induced changes in the chemical composition of the atmosphere rather than to natural variation in temperature through time.

To help settle this question, the authors of this article have attempted to reconstruct temperature variations in the Northern Hemisphere for the last 600 years, using proxy data for temperature. These include tree rings, ice cores, coral records, and historical temperature information. You can learn how temperature information is deduced from such data at the NOAA Paleoclimatology home page.

The authors begin by looking at proxy data and temperature data for the years from 1905 to 1995, for which temperatures are known. They do a multivariable regression with the proxy data as independent variables and the annual temperature average as the dependent variable. They then assume the same dependence between proxy data and temperature in previous centuries to deduce the yearly temperature averages from 1400 to 1995.
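The calibrate-then-reconstruct idea can be illustrated with a toy example. This is not the authors' actual procedure (they work with principal components of a large multiproxy network), and all the data below are fabricated for illustration:

```python
import random

random.seed(42)

# Toy illustration of the calibrate-then-reconstruct idea.  All data
# here are fabricated: a slowly warming "true" temperature series,
# and one hypothetical proxy (say, tree-ring width) assumed to be a
# noisy linear function of temperature.
years = list(range(1400, 1996))
true_temp = [0.0005 * (y - 1400) + random.gauss(0, 0.1) for y in years]
proxy = [2.0 * t + 1.0 + random.gauss(0, 0.05) for t in true_temp]

# Calibrate on the instrumental period (1905-1995): least-squares
# regression of temperature on the proxy ...
calib = [(p, t) for y, p, t in zip(years, proxy, true_temp) if y >= 1905]
n = len(calib)
mean_p = sum(p for p, _ in calib) / n
mean_t = sum(t for _, t in calib) / n
slope = (sum((p - mean_p) * (t - mean_t) for p, t in calib)
         / sum((p - mean_p) ** 2 for p, _ in calib))
intercept = mean_t - slope * mean_p

# ... then apply the fitted relation over the whole 600-year record.
recon = [slope * p + intercept for p in proxy]
err = sum(abs(r - t) for r, t in zip(recon, true_temp)) / len(years)
print(f"mean absolute reconstruction error: {err:.3f} degrees")
```

The fitted relation, estimated only from the recent calibration window, recovers the full series well here because the proxy really is linear in temperature throughout; question (1) below asks whether that assumption is reasonable for real proxies.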

The statistical techniques used allow the authors to put confidence limits on their temperature estimates. The final graphs of annual average temperatures lead to the conclusion that 1990, 1995, and 1997 were warmer, by more than three standard deviations, than any previous year back to 1400.

The authors then consider the correlation between their temperature time series from 1610 to 1995 and time series representing phenomena believed to have a major effect on the temperature. For the latter they use three time series: CO2 measurements as a proxy for total greenhouse-gas changes, reconstructed solar irradiance variations, and the historical "dust veil index" of explosive volcanism.

These correlations suggested that the unusually cold period of the mid-seventeenth century was due to a decrease in solar irradiance during that period, and that the subsequent warming from the early nineteenth century through the mid-twentieth century was due to an increase in solar irradiance. Explosive volcanism was negatively correlated with temperature throughout the period studied, most strongly in a 200-year window centered at 1830 which included the most explosive volcanic events. The greenhouse gases showed no sign of significance until a large positive correlation emerged coming into the twentieth century. Since solar irradiance had leveled off by the mid-twentieth century while temperatures continued to rise, the authors assert that the recent increase in temperature is indeed due to the greenhouse effect.


(1) The authors are assuming a linear relation between the proxy variables and temperature. Does this seem reasonable?

(2) What kind of arguments do you think those who do not believe in the greenhouse effect would make against taking studies like this seriously?

Adolescent sexual behavior, drug use, and violence:
Increased reporting with computer survey technology
Science, 8 May 1998, pp. 867-873
C. F. Turner, et al.

A common challenge in statistical sampling is to figure out how to collect data in a way that does not skew the results. This problem is especially prevalent when the survey concerns behavior that is risky, illegal, or stigmatized. It is thought, not surprisingly, that in face-to-face interviews some people will not honestly admit to participating in activities of this nature.

An alternative way to gather data is to give self-administered paper questionnaires. These are filled out by the subjects, without an interviewer present, and are then sealed in an envelope and turned in. However, it is frequently the case that an identification number appears on the questionnaires, thereby potentially raising suspicions among the subjects about the privacy of their responses. In addition, collecting data in this way requires that the subjects be literate.

In this article, the authors report on a new survey technique that they claim largely bypasses the problems mentioned above. This technique consists of a self-interview, in which the subjects answer questions given to them in an audio format. A computer controls the questions, and the answers are stored on a laptop computer. The questions also appear on the computer screen, so that the respondents can read them if they wish.

Aside from the claimed increase in privacy that this survey method affords, there are several other benefits of this technique. First, each question is given in precisely the same way to all subjects. Second, the computer controls branching through the questionnaire. (For example, if the respondent answers no to a certain question, the computer may automatically skip the next three questions.) Third, the questions can be asked in several languages.

In the survey reported on in this article, 1672 males between the ages of 15 and 19 were asked about their behaviors in three areas: sex, drugs, and violence. In areas such as male-male sex, drug use, and knife or gun use, the prevalence estimates obtained with this survey technique were often markedly higher than those obtained using paper questionnaires. For example, the percentages who reported carrying a gun within the last 30 days were 12.4 for the computer survey and 7.9 for the paper survey. The percentages who reported ever having taken street drugs using a needle were 5.2 and 1.4.

It is interesting to note that in those areas to which there is less societal stigma attached, such as heterosexual intercourse (not involving a prostitute) and use of marijuana and alcohol, the two survey techniques obtained roughly the same results. In fact, higher percentages for making a woman pregnant, fathering a child, and having intercourse with a female were obtained in the paper survey than in the audio-computer survey.

The authors point out that they have not proved that this new technique results in more accurate collection of data. However, they also say that the results obtained are more in line with those obtained from retrospective reports from adults concerning their adolescent behaviors.


(1) How valid do you think the authors' claim is concerning the greater protection of the subjects' privacy when computers are used rather than paper?

(2) One of the most important things to consider when evaluating this new survey technique is whether the collected data are closer to the truth than data collected by other techniques. We noted above that, in the cases involving behavior that might be considered risky, illegal, or stigmatized, the audio-computer technique yielded higher prevalence estimates. Do you think this implies that these data are, in fact, closer to the truth? Can you give reasons why respondents might be less truthful when using this technique than when, say, they are filling out a paper questionnaire?

(3) Can you give reasons why the paper survey technique obtained higher percentages than the audio-computer technique in the measurements concerning making a woman pregnant, fathering a child, and having intercourse with a female?

In census debate, politics count
The Plain Dealer, 11 May 1998, B7
Tom Brazaitis

During a recent congressional hearing on the upcoming Census, Representative John Shadegg (R-Arizona) told the expert witnesses on the panel that he was having trouble explaining the proposed statistical plan to constituents back home: the plan to reach 90% of Americans by mail-in surveys and door-to-door interviews, and the remaining 10% by sampling. What people want to know is "how anyone can say they are going to count 90% of the population if they don't know what the total population is until they've counted." When one of the witnesses began explaining estimation based on reports of letter carriers, Shadegg and his colleagues began to laugh. The postal service is so frequently the object of jokes that the congressmen doubted they could sell such a plan! Nevertheless, vacancies reported by letter carriers do represent one tool for updating the previous census; others include birth and death records and immigration data.

The article points out that there has never been a perfectly accurate census. The 1990 census, which is estimated to have missed 1.6 to 1.8 percent of the population, was the second most accurate effort ever. Unfortunately, it was the first census that was less accurate than its predecessor. Only 67% of the population mailed in census forms, and efforts to locate the rest of the population had varying success rates among different socioeconomic groups. Proponents of sampling have argued that the old head counting methods will become increasingly inadequate as our society grows more complex.


(1) In the sampling plan, if we reach only 88% of the population by mail-in rather than the proposed 90%, will the Census be off by 2%? Do you think your congressional representative could answer that question?

(2) Representative Tom Sawyer of Ohio favors the statistical plan. He observes that the mail-in surveys, which were introduced in 1960, can sometimes entail more guesswork than "enumeration." Indeed, mail carriers are sometimes interviewed to find out how many people live in a dwelling. Thinking back to Shadegg's comments, who do you think fares better in public opinion, mail carriers or statisticians?

Smoking tied to less breast cancer: But risks said to outweigh benefits
The Boston Globe, 20 May 1998, A12.
Associated Press

A study in the Journal of the National Cancer Institute looked at histories of breast cancer in 372 women who have BRCA gene mutations, a known risk factor for breast cancer. Half of the women in the study were smokers and half were not. Incidence of breast cancer was 54% lower in heavy smokers than in nonsmokers. There was also a dose-response effect: the more a woman with the mutation smoked, the lower the risk of breast cancer. Some breast cancers are linked to estrogen, and women with BRCA mutations are known to be susceptible to estrogen-induced cell growth. Since smoking is known to lower the body's production of estrogen, this may in part explain the link.

Researchers were quick to point out that, among its many negative effects on health, smoking increases the risk of certain other cancers--including lung cancer, which is the leading cause of cancer death in women. Thus the present study should not be taken as encouraging women to smoke. However, the results do suggest that some compound in tobacco smoke has preventative effects, and research to isolate that compound may lead to treatments down the road.

About one woman in 250 carries mutations of the BRCA1 or BRCA2 gene, but about 10% of all women with breast cancer have a BRCA mutation. According to some estimates, up to 80% of women with the mutation will develop breast cancer at some point during their lifetimes.

Politics of youth smoking fueled by unproven data
The New York Times, 20 May 1998, A1.
Barry Meier

Tobacco companies have been harshly criticized for advertising that targets underage smokers. Senator John McCain of Arizona has called for action to save "3000 kids a day from starting this life-threatening addiction." President Clinton has stated that one million people will die prematurely if Congress fails to pass tobacco legislation this year. This apparently echoes statements from the American Cancer Society that a 60% reduction in youth smoking would decrease premature deaths by one million.

Since the vast majority of long-term smokers begin as teenagers, there is general agreement that reductions in youth smoking would have important health benefits. But the article notes that there is little hard data to support claims that youth smoking behavior can be affected by increasing cigarette prices or curbing advertisements.

For example, in testimony before Congress, Deputy Treasury Secretary Lawrence Summers cited studies showing that every 10% increase in the price of a pack of cigarettes would produce up to a 7% reduction in the number of children who smoke. But Donald Kenkel of Cornell argues that these studies were flawed since their conclusions were based on correlations between youth smoking and cigarette prices in different states at a given point in time. He cites a Cornell study which, instead, tracked youth smoking rates and cigarette prices over a period of years. There it was found that states that increased cigarette taxes did not have significantly fewer children starting smoking, as compared with states that increased taxes more slowly or not at all.

Kenkel adds that, since the $1.10 price increase (over the current $2.00 a pack) being considered by Congress is so much larger than any historical price increase, he doubts that any reasonable estimate can be made for its effect. But this statement was challenged by Jonathan Gruber of the Treasury Department, who noted that Canada doubled cigarette prices from 1981 to 1991 and experienced a 50% reduction in youth smoking.
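To see why Kenkel worries about extrapolation, here is the back-of-the-envelope arithmetic under two equally naive readings of the "every 10% price increase produces up to a 7% reduction" estimate:

```python
# Naive extrapolation of "every 10% price increase -> up to a 7%
# reduction in youth smoking" to the proposed $1.10 increase on a
# $2.00 pack.  This is exactly the kind of far-out-of-range
# extrapolation Kenkel warns against.
price_rise = 1.10 / 2.00           # a 55% price increase
steps = price_rise / 0.10          # 5.5 ten-percent steps

linear = 0.07 * steps              # treat the 7% drops as additive
compounded = 1 - (1 - 0.07) ** steps   # or as compounding

print(f"linear extrapolation:     {linear:.1%} reduction")
print(f"compounded extrapolation: {compounded:.1%} reduction")
```

The two readings already disagree (38.5% vs. about 33%), and both assume the historical elasticity holds for a price jump far larger than any in the data used to estimate it.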

The frequently cited figure that the tobacco legislation would lead to a 60% decrease in youth smoking actually originated in a settlement plan reached last June with tobacco companies. It was not based on any scientific evidence--it was set as a target. Further penalties were to be imposed on the industry if this target was not met.

The article points out that the tobacco industry is also happy to "play both sides of the statistical fence". Last year, the industry estimated that the price increases imposed by the settlement would lead sales to drop by 43% over a decade. But now they argue that the price increases being considered by Congress would undermine efforts to reduce youth smoking by encouraging a black market in cigarettes that could easily target teenagers.


(1) Explain more fully what is wrong with the reasoning based on correlations between youth smoking and cigarette prices in different states at one point in time.

(2) The article cites Gruber as saying that the Cornell study had its own methodological flaws, but does not elaborate. Can you imagine what these criticisms might have been?

Painters centre one eye in portraits
Nature, Vol. 392, 30 April 1998, pp. 877-878
Christopher W. Tyler

The author describes a study he carried out to verify that portrait painters consistently center one eye horizontally in the canvas.

The study involved portraits painted over the last 600 years by 265 portrait painters. Tyler chose the first portrait occurring in each source that showed only one person above the waist with both eyes visible. Many of the portraits had the subject's head turned as opposed to a more frontal pose.

The author called the eye nearest the vertical center line of the canvas the "most-centered" eye. The distribution of the distance of this eye from the vertical center line appears to be normal with a relatively small standard deviation. As this would suggest, the distance of the midpoint of the eyes from the vertical center line has a bimodal distribution. Other features of the head do not seem to share this centered property. The horizontal mouth position has a distribution much more spread out than that for the most-centered eye.
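A small simulation shows why centering one eye forces the midpoint of the eyes to have a bimodal distribution. The eye separation and standard deviation used here are hypothetical, not taken from Tyler's data:

```python
import random

random.seed(7)

# Illustrative simulation with hypothetical numbers.  If one eye is
# placed near the vertical center line, the midpoint of the two eyes
# sits about half an eye separation to the left or right of center,
# so its distribution has two humps.
eye_sep = 0.10      # eye separation, as a fraction of canvas width
sigma = 0.03        # s.d. of the most-centered eye about the midline

midpoints = []
for _ in range(10000):
    centered_eye = random.gauss(0, sigma)
    side = random.choice([-1, 1])       # which eye got centered
    midpoints.append(centered_eye + side * eye_sep / 2)

# The dip at the canvas center is the signature of bimodality.
near_center = sum(abs(m) < 0.01 for m in midpoints)
near_hump = sum(abs(m - eye_sep / 2) < 0.01 for m in midpoints)
print(f"within 0.01 of center: {near_center}, of one hump: {near_hump}")
```

Far fewer midpoints land near the center line than near either hump, even though the most-centered eye itself clusters tightly around the midline.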


(1) Do you think the author would have gotten the same results if he had looked at the last portrait in each of his sources?

(2) It is supposed to be very hard to get your research published in journals like Nature and Science. Yet they seem to be quite happy to print articles like this based on a very small amount of data and not replicated. Are these two facts consistent?

First gene to be linked with high intelligence is reported found
The New York Times, 14 May 1998, A16
Nicholas Wade

A quantitative trait locus associated with cognitive ability in children
Psychological Science, Vol. 9, No. 3, May 1998, pp. 159-166
R. Plomin et al.

Psychologists who study intelligence as measured by IQ tests have concluded that genetic factors contribute about 50% to the variation of intelligence in a population.

Researchers have begun to look for genes that affect intelligence. In the area of medicine a single gene can be responsible for a disorder. In the case of intelligence it is believed that there will be a number of genes, each of which has a small positive or negative effect on intelligence. The total genetic influence on intelligence would then be probabilistic in nature, depending on the number of the "intelligence" genes which happen to have alleles associated with a positive effect on intelligence.

The study reported here was led by the American behavioral geneticist Robert Plomin, who works at the Institute of Psychiatry in London. Plomin and his colleagues claim in this article in Psychological Science to have found one such intelligence gene.

These researchers carried out a three-stage experiment. The first stage was exploratory in nature, aimed at identifying a gene that might be associated with intelligence. The other two stages were planned to replicate findings made in this first exploratory stage.

For the exploratory stage the researchers used 51 children identified with a high IQ (mean IQ 163, s.d. 9.3) and a control group of 51 children with average IQ (mean IQ 103, s.d. 5.6). They looked at what is called the long arm of Chromosome 6 and considered 37 markers for different genes. For each marker they looked at the most common allele to see if the two groups differed with respect to this allele. They found a significant difference for a marker in the IGF2R gene. There were 7 possible alleles for this marker, and the most common allele (no. 4) was present in 66% of the high IQ group and 81% of the control group.

For the first replication, high IQ children were chosen from a study of mathematically precocious youth (SMPY). 52 of the highest scoring children in this study were selected. These students either had both verbal and math SAT scores of at least 630 or a verbal score of at least 550 and a math score of at least 700. A control group of 50 children with average SAT scores was chosen. The results were similar to the first experiment. Allele 4 of IGF2R was present in 63% of the high IQ group and 78% of the control group. A second replication was carried out by combining the groups used in the first two stages.
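As a rough check on whether differences like 66% versus 81% in groups of about 50 could plausibly arise by chance, here is a standard two-proportion z-test sketch. This is an illustration, not the analysis the authors used, and it treats the group sizes as the sample sizes; if alleles were counted per child, each child would contribute two, doubling the n's.

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z statistic for comparing two sample proportions (pooled variance)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Exploratory stage: allele 4 in 66% of the 51 high-IQ children
# and 81% of the 51 controls.
z = two_prop_z(0.66, 51, 0.81, 51)
print(round(z, 2))
```

With the group sizes as n, |z| is about 1.7, just short of the usual two-sided 5% cutoff of 1.96; counting two alleles per child would double the sample sizes and push it past that cutoff. Either way, this is why the replication stages matter.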

The marker in the IGF2R gene that the researchers used lies in a region that gets trimmed away before the message is translated into a working protein. So it is probably not itself responsible for the effect on intelligence but is near some translated region that is.

As remarked earlier, any one gene such as IGF2R that is associated with high IQ will be expected to have a statistical effect on a population but may have no detectable effect on an individual. The authors estimate that a single gene such as IGF2R will account for only about 2% of the variance of IQ in a population, or about 4 IQ points. Thus they do not encourage a flurry of testing to see who has the favorable version of this gene. But they predict that this testing will be offered within the next two months.


In another article (Genetics and Intelligence: What's New?, Vol. 24, No. 1, 1997, 53-57) Plomin and Petrill discuss why they believe that about 50% of the variance of intelligence is inherited. They give the following argument:

The correlation of IQ between adopted non-twin siblings living apart is about .24. This correlation for "genetic" relatives adopted apart means that 24% of the IQ variance in the population covaries for relatives who are 50% similar genetically but who are not similar for environmental reasons. Because these first-degree relatives are only 50% similar genetically, the correlation of .24 is doubled to provide an estimate of heritability, suggesting that about half (48%) of the IQ variance among individuals in the population can be accounted for by genetic differences among them.

Does this make sense to you?

As elderly get active, injuries up
The Boston Globe, 29 April 1998, A3.

According to the Consumer Product Safety Commission, the number of elderly Americans who suffer injuries during exercise rose 54% between 1990 and 1996, with the number of injuries rising from 34,400 to 53,000 a year. Since the number of Americans aged 65 and older grew by only 8% over this period, the trend cannot be explained by the aging of the population.

For strenuous activities such as weight training and aerobics, the increase was even greater: 173%. The commission notes that this is the first time snowboarding and rollerblading accidents have been reported among the elderly.


(1) Does this mean that weight-training and aerobics represent the most dangerous form of exercise?

(2) Apart from the snowboarding comment, how would you determine whether the figures simply reflect greater numbers of elderly exercising or a trend towards riskier exercise regimes?

Bearing false witness to the pollster (Unconventional wisdom: New facts and hot stats from the social sciences)
Washington Post, 17 May 1998, C5
Richard Morin

Sociologist Stanley Presser of the University of Maryland and his research partner Linda Stinson of the U.S. Bureau of Labor Statistics have demonstrated that people exaggerate when pollsters ask them about their church-going habits.

The two researchers analyzed time-use diaries from the 1960s, 1970s, and 1990s and compared them to results obtained by the Gallup Organization and the National Opinion Research Center (NORC). The diaries revealed that the percentage of Americans who attend church was 42% in 1965 and 26% in 1994. By contrast, the Gallup and NORC polls reveal that the proportion has not changed over the past thirty years: a 1993-4 NORC survey reported 38% of its respondents attended mass, a number that has varied little since the 1970s.

Presser and Stinson speculate that the discrepancy arises from the way the data were collected. The Gallup/NORC polls specifically ask respondents if they had attended religious services in the past seven days whereas the time-use diaries are based on random samples of Americans who were repeatedly interviewed and asked how they had spent the previous day. In addition, the researchers suspect that the Gallup/NORC respondents felt the need to impress their interviewers, whereas the diary participants were only asked to account for how they spent their time and were unlikely to have felt the pressure to exaggerate their church-going habits.

Safety efforts tied to drop in child deaths
The Boston Globe, 5 May 1998, A1.
Glen Johnson, The Associated Press

The National Safe Kids campaign, chaired by former Surgeon General C. Everett Koop, reported a 26% drop in child deaths due to accidents over the last decade. Increased use of seatbelts and bike helmets was cited as an important factor in the improvement. Nevertheless, accidents remain the leading cause of death for children, with a death rate about four times that for birth defects, cancer, or homicide.

A number of more detailed figures are supplied in the article. Here are some examples. In 1987, the death rate from accidental injuries was 15.56 per 100,000 children aged 14 or under. By 1995, the rate was only 11.45 per 100,000. Of the 6600 accidental deaths in 1995, the 1800 motor vehicle deaths represented the largest contributor, for a death rate of 3.06 per 100,000.


(1) Does the 26% drop square with the accidental death rates reported later? What do you make of this?

(2) Can you reconcile the motor vehicle death counts with the rates reported?
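A quick sketch of the arithmetic behind these questions, using only the figures quoted above:

```python
# Question 1: does the 26% drop match the reported rates?
rate_1987, rate_1995 = 15.56, 11.45           # deaths per 100,000 children
drop = (rate_1987 - rate_1995) / rate_1987
print(round(100 * drop, 1))                    # percent drop implied by the rates

# Question 2: the implied child populations (deaths / rate * 100,000)
# should agree if the counts and rates are consistent.
pop_all = 6600 / rate_1995 * 100000            # from total deaths and overall rate
pop_mv = 1800 / 3.06 * 100000                  # from motor-vehicle deaths and rate
print(round(pop_all / 1e6, 1), round(pop_mv / 1e6, 1))
```

The rate-based drop comes out near 26%, but the two implied populations differ by about a million children, which is the sort of discrepancy question 2 invites you to explain (rounding in the published rates is one candidate).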


Chance News
Copyright (C) 1998 Laurie Snell

This work is freely redistributable under the terms of the GNU General Public License as published by the Free Software Foundation. This work comes with ABSOLUTELY NO WARRANTY.

