!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

               CHANCE News 4.02
              (11 Jan 1995 to 1 Feb 1995)


!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Prepared by J. Laurie Snell, with help from 
William Peterson, Fuxing Hou, and Ma. Katrina 
Munoz Dy, as part of the CHANCE Course Project 
supported by the National Science Foundation.

Please send comments and suggestions for articles to
jlsnell@dartmouth.edu

Back issues of Chance News and other materials for
teaching a CHANCE course are available from the
Chance Web Data Base
http://www.geom.umn.edu/docs/snell/chance/welcome.html

      =============================================
        Out of the air a voice without a face
        Proved by statistics that some cause was just
        In tones as dry and level as the place.       
                                  W. H. Auden
      ============================================

FROM OUR READERS

Gerry Grossman asked: Is there any evidence that the 
probability of a boy following a boy is greater than a 
boy following a girl?  This referred to a discussion 
question in the last Chance News relating to a
Marilyn vos Savant column. 

We put this question to the Princeton demographer 
Ansley J. Coale.  He said the answer was no and he 
referred us, for references, to a recent paper he wrote 
with Judith Banister, "Five decades of missing females 
in China" (Demography,  August 1994). We could not find 
the answer to Gerry's question there, but the paper was 
interesting in its own right. It attempts to explain 
why, in China, the sex ratio (the ratio of males to 
females) has, for five decades, been higher than 
would be expected from the typical sex ratio 
at birth (between 1.05 and 1.07, favoring males) and 
typical mortality rates. The authors analyze a huge 
amount of data and suggest that the differences can be 
explained by high rates of female infanticide 
in the 1930's and 40's, the famine of 1959-61, the 
selective termination of childbearing following a male 
birth beginning in the 70's, and the emerging impact of 
sex-selective abortion on the sex ratio at birth 
beginning in the 80's. 

We did find Gerry's question discussed in the book: 
"The Genetics of Human Populations" by L. L. Cavalli-
Sforza and W. F. Bodmer.  They describe a long history 
of trying to determine if the sex distribution with a 
fixed family size reasonably fits a binomial 
distribution with a fixed probability p for a boy. The 
data most analyzed was collected by A. Geissler, a 
registrar in Saxony, Germany. He collected information 
on more than 4 million births between the years 1876-
1885.  Analysis of this data suggests that the simple 
binomial model is spoiled by variation in the p values 
between families and a small but significant negative 
correlation for the sexes between successive births. 
Later two large studies showed a positive correlation 
between successive births, and a third study showed 
good agreement with the simple binomial model. In other 
words, there is not a clear answer to the question 
raised by Grossman.

DISCUSSION QUESTIONS:

(1) Assume that each child born in a family is a boy 
with a probability p, the same for all families, and 
independent of the sex of any other children in the 
family. Assume further that families have children only 
until they have a boy.  Show that you can expect a 
fraction p of the children to be boys.  

Assume now that the families are divided into three 
equal groups: in the first group the probability of a 
child being a boy is 1/4, in the second this probability 
is 1/2 and in the third it is 3/4.  Without a stopping 
rule we would expect the proportion of boys in the 
population to be the average of these p values, or 1/2.  
Show that with the stopping rule of stopping the first 
time a family gets a boy, the proportion of boys 
will be less than 1/2. (In his book "Applied 
Mathematical Demography," p. 335, Nathan Keyfitz shows 
that when the p values differ between families, the 
proportion of boys will be given by the harmonic mean 
of the p values, which is always less than the average 
of the p values. See the simulation sketch following 
these questions.)

(2) How can large studies give statistically significant
results that are so inconsistent?

(3) How do you think Coale and Banister determined that
there was "selective termination of childbearing 
following a male birth beginning in the 70s" given
that this does not affect the proportion of male 
children that are born?
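
As a quick illustration of question (1), here is a
minimal simulation sketch in Python (the code and the
function name are ours, added for readers who want to
experiment; they are not part of the original column).
With the stop-at-first-boy rule and the three groups
described above, the simulated proportion of boys comes
out near the harmonic mean of the p values, about 0.41,
rather than 1/2.

import random

# Families keep having children until the first boy arrives.
# One third of families have P(boy) = 1/4, one third 1/2, and
# one third 3/4.  The claim is that the overall proportion of
# boys approaches the harmonic mean of the p values.

def simulate(num_families=300_000, ps=(0.25, 0.5, 0.75), seed=0):
    rng = random.Random(seed)
    boys = girls = 0
    for i in range(num_families):
        p = ps[i % len(ps)]      # equal numbers of families per group
        while True:              # have children until a boy arrives
            if rng.random() < p:
                boys += 1
                break
            girls += 1
    return boys / (boys + girls)

if __name__ == "__main__":
    harmonic_mean = 3 / (1/0.25 + 1/0.5 + 1/0.75)   # 9/22, about 0.409
    print("simulated proportion of boys:", round(simulate(), 3))
    print("harmonic mean of the p values:", round(harmonic_mean, 3))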
                  <<<========<<

>>>>>==========>>
Bill Peterson received the following note from Franci
Farnsworth in his (Middlebury) Grants Office:

"I heard one for you on NPR this morning -- the advice
was not to bother buying flight insurance when traveling
because 'you have a better chance of getting hit by
lightning than dying in a plane crash.'

"That got me to thinking -- are the chances of getting
hit by lightning based on the total population or just
on people out during electric storms? Are the chances of
dying in a plane crash figured on the basis of the total
population, or just those who fly during a given period
of time, or passenger miles, or just those who are in
the air at any given point in time? And what do we
really know about these odds anyway?

"Is that what your CHANCE classes are all about?"

DISCUSSION QUESTION:

How do you think the chance of getting hit by lightning
and of dying in a plane crash are estimated?
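
The discussion question invites a back-of-the-envelope
comparison. The sketch below (in Python; every number in
it is a hypothetical placeholder, not a statistic from
the note or from NPR) shows how much the answer depends
on whether the denominator is the whole population or
only the exposed group.

# Hypothetical illustration of how the choice of denominator
# changes a risk estimate.  None of these numbers come from the
# item above; they are placeholders chosen only to show the
# arithmetic.

lightning_deaths_per_year = 100          # hypothetical count
total_population = 250_000_000           # hypothetical population
people_out_in_storms = 10_000_000        # hypothetical exposed group

per_capita = lightning_deaths_per_year / total_population
per_exposed = lightning_deaths_per_year / people_out_in_storms

print(f"risk per resident per year: 1 in {1 / per_capita:,.0f}")
print(f"risk per exposed person per year: 1 in {1 / per_exposed:,.0f}")

# The same issue arises for flying: deaths per resident, per
# passenger, per flight, and per passenger-mile can differ by
# orders of magnitude.

                  <<<========<<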

>>>>>==========>>
FROM THE INTERNET

The UCLA statistics program homepage offers a number of
interesting statistical resources. One that would be
particularly useful for a CHANCE course is a series of
case studies. These case studies arise from consulting,
journal articles, etc. They are designed to have
students look at data and run programs to plot
histograms, do a regression analysis, etc., in an
interactive way. The programs are run on your own
machine, so you need to have the statistical package
Lisp-Stat installed. This package is available from
their server for Unix workstations, MS-Windows, and the
Macintosh and, in the spirit of the Web, it is free!

While their experience so far has been with teaching
upper-level and graduate courses, Jan de Leeuw and his
colleagues at UCLA plan to develop a set of
user-friendly tools, including an on-line textbook, to
assist in teaching an elementary statistics course.
These will all be easily obtained from their Web server.

Other work along this same line is being done at Penn
State. Information about that program can be found at
the Penn State program's Web site.
                  <<<========<<

>>>>>==========>>
Erich Friedmann has proposed the following interesting
game.

1. Every player secretly writes down a positive integer
   (1, 2, 3, ...) and their name on a slip of paper.
2. The slips are collected, and the winner is the player
   who picked the lowest number that no one else picked.
3. If, by some coincidence, no such person exists, the
   game is a tie.

A dozen or so friends of Erich plan to play this game by
e-mail, and they have until midnight February 14 to
figure out a good method for picking their number. A
colleague of mine, Tom Sundquist, is one of the players
and wants to make his choice by a probability
distribution consistent with a Nash equilibrium point.
Such a point can be described as follows. Assume there
are three players and these players choose probability
distributions p, q, and r to pick their numbers. Then
the triple {p,q,r} is a Nash equilibrium point if no
player can increase his probability of winning by
changing his distribution while the choices of the other
two players remain as specified by the equilibrium
point.

If the possible numbers to submit are limited to
(1,2,...,m), Tom has shown that there is a Nash
equilibrium point in which each player uses the same
probability distribution p to pick a number. This
distribution p is characterized by the property: if all
three players use the distribution p, then the
probability that a player wins, given that he chose
number x, is independent of x. Using this property, Tom
computed the distribution p for 3 players as m varies.
His results give:

   m = 1   p = {1}
   m = 2   p = {.5, .5}
   m = 3   p = {.464, .268, .268}
   m = 4   p = {.4578, .2517, .1453, .1453}

Further computations suggest these distributions
converge rather rapidly to a limiting value as m tends
to infinity. Knowing this limiting value for k players
would help Tom make his choice and would be a fitting
way to celebrate John Nash getting a Nobel Prize for his
work on game theory. The first person to solve this
problem will get special mention in the next Chance
News.

DISCUSSION QUESTION:

Consider the three strategies: player 1 always chooses 2
and the other two players always choose 1. Show that
this is a Nash equilibrium.
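
As a check on the property Tom used, the following
sketch (in Python; the code and function name are ours,
not Tom Sundquist's) enumerates the other two players'
choices exactly and confirms that, under the quoted
m = 4 distribution, the chance of winning is roughly the
same whichever number you pick (about 0.29 in each
case).

from itertools import product

# Exact check of the equilibrium property: if all three players
# draw from the same distribution p, then P(win | you chose x)
# should not depend on x.  The distribution below is the m = 4
# distribution quoted above.

def win_prob_given_choice(x, p):
    """P(player 1 wins | player 1 chose x), players 2 and 3 draw from p."""
    m = len(p)
    total = 0.0
    for y, z in product(range(1, m + 1), repeat=2):
        choices = [x, y, z]
        unique = [c for c in choices if choices.count(c) == 1]
        if unique and min(unique) == x:   # x is the lowest uniquely chosen number
            total += p[y - 1] * p[z - 1]
    return total

p4 = [0.4578, 0.2517, 0.1453, 0.1453]
for x in range(1, 5):
    print(x, round(win_prob_given_choice(x, p4), 4))

                  <<<========<<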

>>>>>==========>>
ARTICLES ABSTRACTED

<<<========<<

>>>>>==========>>
Jim Baumgartner suggested our first two articles:

Comment: The intellectual free lunch.
The New Yorker, 6 Feb 1995, p. 4
Michael Kinsley

A poll released by the Program for International Policy
Attitudes showed that 75% of Americans believe the
United States spends too much on foreign aid, and 64%
want foreign-aid spending cut. Respondents were also
asked what share of the federal budget they thought
currently goes to foreign aid. The median answer was 15%
and the average 18%. The correct answer is less than 1%.
The answers to the question of how much would be "too
little" had a median of 3%.

Kinsley suggests that more interesting than what we
learn about people's actual opinions from the poll is
the observation that Americans are willing to have
strong opinions about things they know very little
about. He chides the pollsters for not asking whether
the respondent knows anything about the topic of the
poll, and for assuming that people should have opinions
on all subjects at all times and that all their opinions
should be weighted equally.
                  <<<========<<

>>>>>==========>>
The megalab truth test.
Nature, 2 Feb 1995, Scientific Correspondence, p. 391
Richard Wiseman

Wiseman writes a letter on an experiment that he did to
test people's ability to detect lying. A well-known
British commentator was interviewed twice about his
favorite films. In one interview he consistently told
the truth, and in the other he consistently lied.
Transcripts of the interviews were printed in "The Daily
Telegraph", and the interviews were broadcast on BBC
Radio and shown on BBC television. People were asked to
decide which interview had the lies and to respond by
telephone.

A large number of people (41,471) responded. The
percentages in each of the three groups that correctly
identified the untruthful version were: 73.4% for the
radio listeners, 64.2% for the newspaper readers, and
51.8% for the television viewers. These differences were
significant and were taken to support the hypothesis
that visual cues reduce a person's ability to detect
lies. Wiseman feels that his results, and others
consistent with his, suggest there should be less
reliance on visual cues such as eye contact in trying to
detect lying. Wiseman mentions some of the defects in
the design of this study, but remarks that it does
illustrate how one can do simple tests on important
issues. He comments that this study received wide media
coverage.
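
With samples this large, even modest differences in
percentages are highly significant. A rough sketch (in
Python; the per-group sample sizes below are
hypothetical, since the letter reports only the overall
total of 41,471 responses):

from math import sqrt, erf

# Two-proportion z-test comparing, say, radio listeners (73.4%
# correct) with television viewers (51.8% correct).  The group
# sizes are hypothetical; only the overall total is reported.

def two_prop_z(p1, n1, p2, n2):
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

z, p = two_prop_z(0.734, 10_000, 0.518, 10_000)   # hypothetical n's
print(f"z = {z:.1f}, two-sided p = {p:.2g}")      # p is essentially zero

                  <<<========<<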

>>>>>==========>>
The writer of this letter to Nature, Richard Wiseman,
seems to be an interesting fellow. He is a senior
lecturer at the University of Hertfordshire and also a
prize-winning magician. With Robert Morris, Koestler
Professor of Parapsychology at Edinburgh University, he
just published "Guidelines for Testing Psychic
Claimants", designed to help police faced with psychics
offering to solve murders, etc. Wiseman is also doing an
interesting study about luck. Here is an account of this
study.
                  <<<========<<

>>>>>==========>>
Some people have all the luck; or do they?
The Independent, 20 Nov 1994, p. 70
Simon Beckett

Richard Wiseman is heading a two-year research program
at the University of Hertfordshire to study "luck". The
OED describes luck as "chance regarded as the bringer of
good or bad fortune". If the outcome is favorable, it is
called good luck and, if unfavorable, bad luck. People
who claim they are lucky will claim that they have more
favorable than unfavorable chance events. The
Hertfordshire team administered a survey that indicated
most people feel they are lucky rather than unlucky.
Wiseman says they can't all be right because,
rationally, luck has to be evenly distributed on some
level.

Wiseman says that people regard luck in one of three
ways: (1) they are born lucky or unlucky and there is
nothing they can do about it; (2) it is possible to
influence your luck for better or worse; (3) someone or
something is controlling your luck, giving it or taking
it away at certain times.

In a pilot study, volunteers are asked to relate their
experiences and are given a questionnaire to establish
their views and beliefs on luck. They are given a brief
coin-flipping exercise to see how they will react to
such a test. From the pilot study, the researchers will
get a pool of about 200 subjects, including subjects who
consider themselves lucky, unlucky, or neither. These
subjects will be given two basic tests: one, to see if
they can guess whether heads or tails will come up in a
coin-tossing sequence; and the second, to see if they
can tell which of four possible pictures is in a sealed
envelope.

In the coin-tossing experiment in the pilot study, those
who thought they were lucky tended to overestimate their
performance but did do better than those who thought
they were unlucky. In future tests, researchers will
test whether those who consider themselves lucky are
better able to pick up cues than those who do not. They
will also explore superstitious behavior.

DISCUSSION QUESTIONS:

(1) Can you imagine a kind of "law of large numbers of
luck"?

(2) Do you believe that there should be about the same
number of lucky and unlucky events in people's lives?
Why?

(3) What experiment would you recommend if you were part
of this research team?
                  <<<========<<

>>>>>==========>>
An elusive picture of violent men who kill mates.
The New York Times, 15 Jan 1995, Section 1, p. 22
Daniel Goleman

Is it possible to foresee which violent husbands will
kill their wives? Social scientists say that statistical
data can provide estimates for large pools, but
predicting a specific case cannot be done with accuracy.

The most definitive study on marital violence is a 1985
random national survey of 6,002 U.S. households. The
researchers believe the figures are underestimates,
because some respondents will not admit to violence.
From the study it is estimated that, of the
approximately 60 million couples in the United States,
about 12% of the wives will be subjected to an incident
of at least mild violence, with 3.4% being subjected to
severely abusive violence. At least 90% of the men who
kill their wives are among this most abusive group.

The problem with prediction is that while a history of
wife-battering typifies men who kill their wives, there
are few clues to identify who will actually do so. Dr.
Richard Gelles, a sociologist at the University of Rhode
Island who conducted the 1985 study, has said that there
are 10 common factors among the abusive men surveyed. He
is quoted as saying that for men with 7 or more of these
10 factors, the rate of extreme violence is 17 times
higher than average. Even so, among such men 4 in 10
will not be abusive. Dr. Gelles said that, even if
considerations such as psychological profiles and
emotional elements are factored into the model, there is
only a 70% prediction rate for identifying the abusive
men who will go on to murder. He added that the chance
of error must also be factored in: "This means your
prediction will be wrong 30% of the time. If I were able
to screen all the two million or so men in this country
who are severely abusive, with a 30% error rate, and
with such a rare event as murder, I could falsely
identify as prospective murderers 600,000 men who are
innocent."

Julie Blackmun, a psychologist, sums up the conundrum:
"We only have broad statistical correlations. You can't
do experiments on this where you systematically vary the
elements."

DISCUSSION QUESTION:

What do you think the screening process that Gelles
refers to would be? Why would it lead to falsely
identifying 600,000 innocent men?
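
Gelles's point can be made concrete with a little
arithmetic. In the sketch below (Python), the 30%/70%
rates and the pool of two million severely abusive men
come from the article; the number of actual murderers is
a hypothetical round figure used only to illustrate the
base-rate problem with screening for a rare event.

# Arithmetic behind the warning about screening for a rare event.

pool = 2_000_000            # severely abusive men (from the article)
false_positive_rate = 0.30  # "wrong 30% of the time"
true_positive_rate = 0.70
actual_murderers = 1_000    # hypothetical; spousal murder is rare

innocent = pool - actual_murderers
falsely_flagged = false_positive_rate * innocent
correctly_flagged = true_positive_rate * actual_murderers

print(f"falsely identified: {falsely_flagged:,.0f}")
print(f"correctly identified: {correctly_flagged:,.0f}")
print(f"fraction of flagged men who are innocent: "
      f"{falsely_flagged / (falsely_flagged + correctly_flagged):.1%}")

                  <<<========<<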

>>>>>==========>>
Inevitable illusions: how mistakes of reason rule our minds.
Wiley, 1994, ISBN 0-471-58126-7
Massimo Piattelli-Palmarini

This is a well-written popular book describing the work
of Kahneman and Tversky on heuristics and biases. The
author includes a discussion of the usual probability
paradoxes and Bayes probability problems. While it is
nice to have popular books on such a subject, I always
wonder why exactly the same examples are used -- the
Linda problem, which of the following coin-tossing
sequences is most likely, the Monty Hall problem, etc.
There must, by now, be lots of other new examples! The
author describes the work of some recent critics of this
kind of research, which was new to me at least. I
recommend this book for anyone who wants a gentle
introduction to the interesting work of Kahneman and
Tversky.
                  <<<========<<

>>>>>==========>>
Olive oil linked to low rates of breast cancer.
The Boston Globe, 18 January 1995, p. 1
Richard A. Knox

Research published today in the "Journal of the National
Cancer Institute" found that women who consume olive oil
at more than one meal a day have a 25% lower rate of
breast cancer than those who ingest it less frequently
than once a day. The study involved 2400 Greek women,
including 820 with newly diagnosed breast cancer (94% of
all cases in greater Athens during a recent three-year
period). High consumption of vegetables and fruit also
appears to have protective value.

The research also suggests a possible reason why
research in the 1980s failed to establish an expected
association between overall fat consumption and breast
cancer. It appears that the type of fat consumed is more
important than the total amount. Even though Greek women
derive 42% of their daily calories from fat -- more than
the 35% figure for American women -- the Greek breast
cancer death rate is 42% lower than in the U.S. However,
olive oil represents half the Greek fat intake, compared
to only 5% for Americans.
                  <<<========<<

>>>>>==========>>
U.S. is Considering a Large Overhaul of Economic Data.
The New York Times, 16 Jan 1995, A1
Robert D. Hershey Jr.

Federal statisticians are fixing deficiencies in the
systems used to gauge the nation's economic performance.
The current system fails to capture the array of new
technologies and structural changes in today's $7
trillion economy. Alan Greenspan, chairman of the
Federal Reserve, has said that the Consumer Price Index
overstates inflation by a maximum of 1.5 percentage
points. With Social Security and other entitlements (as
well as adjustments in tax brackets) pegged to the
inflation rate, Republicans in Congress were quick to
seize on the corollary that, by changing the index's
formula, they could free $150 billion in Federal funds
over the next five years without a single specific
budget cut.

"Our statistical system can only meet the expectations
of its users by changing as rapidly as the economy it
measures," said Everett M. Ehrlich, Under Secretary for
Economic Affairs at the Commerce Department, in
announcing, in late December, the review of its national
income and product accounts, which make up the gross
domestic product. The GDP is the set of figures, created
in the late 20s when goods were more important than
services, that provides quarterly totals of economic
activity and the degree to which it is expanding or
contracting. So far, the official figures still reliably
tell in which direction the economy is moving, whether
the pace is accelerating or decelerating, and whether it
is high or low relative to the trend.

The traditional metaphor of the GDP as the bottom line
on a cash register tape tallying a cart full of
toasters, haircuts, Cheerios, and gallstone surgery "is
becoming ever more inappropriate," Mr. Ehrlich said. He
pointed, for example, to Wall Street investment bankers
who are working on deals abroad. "They are in effect
exporting U.S. know-how, but that doesn't get measured,"
Mr. Duncan said.
                  <<<========<<

>>>>>==========>>
Sorting out fears over Tylenol.
The Boston Globe, 16 January 1995, p. 41
Richard A. Knox

This article discusses fears over emerging evidence
linking acetaminophen, the pain-killing ingredient in
Tylenol, to liver and kidney failure. It is estimated
that 55 million Americans a year take Tylenol, and twice
that number take an acetaminophen-containing pill or
liquid.

It has been known for decades that acetaminophen is
potentially toxic to the liver. In particular, alcohol
consumption is known to increase the proportion of
acetaminophen converted to a toxic metabolite. In a
widely reported story last fall, a Virginia man required
a liver transplant after consuming the recommended dose
of Extra-Strength Tylenol for three days, along with a
few glasses of wine at night. He sued and was awarded an
$8.2 million settlement (the case is now under appeal).
Specialists now recommend that anyone who consumes more
than three alcoholic drinks a day should restrict
acetaminophen use to half the recommended maximum dose.

The link between acetaminophen and kidney problems is
not as clear. A team based at Johns Hopkins University
and the University of Geneva studied acetaminophen use
among 716 people with kidney failure and 361 similar
people without kidney disease. Long-term use of
acetaminophen (an average of 1 pill a day over an
extended period) was associated with a doubled risk of
kidney failure; lower risk was reported for moderate
users. In a controversial extrapolation from these
figures, the researchers estimated that up to 10% of the
190,000 American cases of kidney failure may be
attributable to acetaminophen use.

DISCUSSION QUESTION:

Critics cited in the article claim that these last
estimates are flawed, because patients with kidney
failure are often urged to take acetaminophen rather
than other pain killers that are known to be toxic to
the kidneys. How would this bias the estimate?
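
The researchers' "up to 10%" figure is an example of a
population attributable fraction. A rough sketch of how
such a number arises (in Python; the exposure prevalence
used here is a hypothetical value chosen for
illustration, not one reported in the article):

# Population attributable fraction: the share of cases that
# would not occur if the exposure were removed, assuming the
# association is causal:
#     PAF = p * (RR - 1) / (1 + p * (RR - 1))

def attributable_fraction(prevalence, relative_risk):
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

relative_risk = 2.0   # doubled risk for long-term users (from the study)
prevalence = 0.11     # hypothetical share of long-term users
cases = 190_000       # annual U.S. cases of kidney failure (from the article)

paf = attributable_fraction(prevalence, relative_risk)
print(f"attributable fraction: {paf:.1%}")
print(f"implied attributable cases: {paf * cases:,.0f}")

                  <<<========<<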

>>>>>==========>>
Chance of a major quake within 30 years put at 86%.
Los Angeles Times, 21 Jan 1995, A1
Robert Lee Hotz

This article states that a team of researchers has
released new estimates for the probabilities of
earthquakes in Southern California, updating their
predictions made in 1988. They feel that they have much
more information now, which suggests a slightly larger
risk than they had believed earlier. They now estimate
that there is an 86% chance of a very large earthquake
(magnitude 7 or greater), caused by the San Andreas and
other large faults in Southern California, in the next
30 years. There have been three earthquakes of this size
since 1989.

The article gives a chart that provides more detailed
information about their predictions for the probability
of a quake on three major faults in the next 30 years.
The chart shows the largest possible quake each segment
could generate (Mag), the chance, in percent, that the
segment could generate a quake during the next 30 years
(Prob), the prediction's margin of error (Error), and
the original probability calculated in 1988 (1988).

Fault/Segment                    Mag   Prob  Error  1988

San Andreas-Carrizo              7.51   18     9     10
San Andreas-Mojave               7.56   26    11     30
San Andreas-San Bern. Mtns.      7.28   28    13     20
San Andreas-Coachella Valley     7.48   22    12     40
San Jacinto-San Bern. Valley     6.87   37    17     20
San Jacinto-San Jacinto Valley   6.96   43    18     10
San Jacinto-Anza                 7.36   17    12     30
San Jacinto-Coyote Creek         6.94   18    13     NA
San Jacinto-Borrego Mountain     6.77    6     8     10
San Jacinto-Superstition Mtns.   6.65    9     6     NA
San Jacinto-Superstition Hills   6.63    2     6     NA
Whittier-Whittier                6.91    5     3     NA
Whittier-Glen Ivy                6.87   12    15     NA
Whittier-Temecula                6.96   16    10     NA
Whittier-Julian                  7.26    5     5     NA
Whittier-Coyote Mountain         6.91    1     4     NA

DISCUSSION QUESTIONS:

(1) Is the information in the table sufficient to obtain
the 86% probability quoted in the article? If not, what
other information would you need?

(2) Does it make sense to talk about the margin of
error? If so, what does it mean?
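
Discussion question (1) can be explored numerically. If
the segment probabilities in the table were independent,
the chance of at least one of them producing a quake
would be one minus the product of the individual "no
quake" probabilities. The sketch below (in Python; the
independence assumption is ours, not the researchers')
gives roughly 0.95, which is one hint that the quoted
86% is not a simple combination of these rows: several
segments cannot produce a magnitude-7 quake, and segment
ruptures are not independent.

# Chance of at least one segment rupturing in 30 years, under
# the (unrealistic) assumption that the segments are independent.

probs = [0.18, 0.26, 0.28, 0.22, 0.37, 0.43, 0.17, 0.18,
         0.06, 0.09, 0.02, 0.05, 0.12, 0.16, 0.05, 0.01]

p_none = 1.0
for p in probs:
    p_none *= 1 - p

print(f"P(at least one segment ruptures) = {1 - p_none:.2f}")

                  <<<========<<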

>>>>>==========>>
Secondhand smoke: is it a hazard?
Consumer Reports, January 1995, pp. 27-33

This article describes the mounting evidence about
health risks from exposure to secondhand smoke, as
described in recent EPA reports, and the tobacco
industry's campaign to discredit the scientific basis
for those findings. The conclusion: "The tobacco
merchants claim there's still a controversy. We don't
buy it."

There is some nice non-technical discussion here of the
methodology of epidemiological studies and the issues
involved in applying meta-analysis to combine the
results. The article responds directly to some of the
industry's criticisms of the EPA report on secondhand
smoke, some of which were articulated in Jacob Sullum's
article in the National Review (see Chance News, June
10, 1994). For example, the alleged changing of the
threshold for significance from 5% to 10% is described
here as the result of using one-tailed rather than
two-tailed tests. The article argues that the use of a
one-sided test is entirely appropriate when there is
already independent evidence that a substance is
harmful. Also discussed are controversies about possible
confounding variables and criticisms about studies
allegedly excluded by the EPA.

DISCUSSION QUESTION:

Do you think that using a one-tailed test justifies
setting the threshold of significance at the 10% level?
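
The arithmetic behind that dispute is simple: for a
normally distributed test statistic, the one-tailed
p-value is half the two-tailed one, so testing at the 5%
level one-tailed rejects exactly when a two-tailed test
at the 10% level does, provided the effect is in the
expected direction. A small sketch (in Python; the z
value is hypothetical):

from math import sqrt, erf

def normal_cdf(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1 + erf(x / sqrt(2)))

z = 1.75                                   # hypothetical test statistic
p_one_tailed = 1 - normal_cdf(z)
p_two_tailed = 2 * p_one_tailed

print(f"one-tailed p = {p_one_tailed:.3f}")   # about 0.040
print(f"two-tailed p = {p_two_tailed:.3f}")   # about 0.080

                  <<<========<<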

>>>>>==========>>
Study finds traumatic memories can be recovered.
The Boston Globe, 26 January 1995, p. 1
Alison Bass

A study by researchers at the University of New
Hampshire found evidence that women who were sexually
abused as children can forget the abuse and then, in
some cases, remember it later. In a survey of 129 women
who had been taken to hospital emergency rooms as
children after being molested, 49 (38% of the group) did
not remember the abuse. And some of the 80 who did
remember reported that there was a period of time during
which they had forgotten it. These findings have
implications for the debate over so-called "recovered"
memories of abuse, which critics have dismissed as
products of suggestion by therapists.

Psychiatrist Harrison Pope of the Harvard Medical School
still expresses concern about the study, noting that
there may be other reasons why some of the women in the
sample did not acknowledge earlier abuse. For example,
about one-third of the women were under the age of six
at the time of the abuse and may have been too young to
remember it. Still, of the 87 women who were over six,
28% did not remember it.

DISCUSSION QUESTION:

A data graphic shows the percentages of respondents not
recalling abuse for different age (at time of abuse)
groupings: 55% for age 3 and under, 62% for age 4-6, 31%
for age 7-10, and 26% for age 11-12. A caption says that
the fact that the youngest group remembered more than
the 4-6 age group suggests that factors other than
cognitive development influence memory of trauma. Do you
agree with this interpretation of the data?
                  <<<========<<

>>>>>==========>>
Awareness of AIDS low, expert warns.
The Boston Globe, 31 January 1995, p. 1
Richard A. Knox

The latest statistics from the Centers for Disease
Control show AIDS as the leading cause of death among
Americans in the 25-44 age group. Low awareness is cited
as a key factor in the ongoing spread of the disease.
For example, a recent national study of 2500 newly
diagnosed people found that almost 60% did not get
tested for HIV until they were already suffering from an
AIDS-related disease.

DISCUSSION QUESTION:

A data display accompanying the article says that in the
last year "the proportion of AIDS cases increased: from
16.2% to 18.1% among women, from 36.1% to 39% among
African-Americans, and from 17.7% to 18.7% among
Hispanics." What does this mean? Can you suggest a
clearer way to express it?
                  <<<========<<

>>>>>==========>>
Cancer deaths to taper off next century, report says.
The Boston Globe, 1 February 1995, p. 1
Judy Foreman

A study and editorial published today in the "Journal of
the National Cancer Institute" predict that, although
cancer will surpass heart disease as the leading cause
of death in the U.S. within a few years, by early in the
next century cancer rates will have declined to such a
point that cancer will no longer be a major problem.

The study says that the increased incidence of
newly-diagnosed cancer over the last two decades (an
18.6% increase among white men, 12.4% among white women)
is primarily attributable to known factors. For example,
the increase in lung cancer is caused by smoking, and
the increase in breast and prostate cancers is due to
early detection programs. This explanation runs counter
to the belief of many environmentalists, who have felt
that much of the increased incidence is due to
environmental hazards.

The research team looked at new cases of cancer reported
through the SEER (Surveillance, Epidemiology, and End
Results) program and at data from the National Center
for Health Statistics. The national mortality statistics
for 1992 mark the beginning of a decline in the overall
cancer mortality rate, a trend which the researchers see
continuing into the next century. Cancer surpasses heart
disease mortality in the near term only because rates
for the latter are decreasing even more steeply.

DISCUSSION QUESTION:

The article quotes Tufts University cell biologist Ana
Soto as saying: "You cannot discard the environmental
link just because some other study can account for those
cases because of early detection. You have to
investigate environmental causation on its own merit
because there is enough evidence that it is plausible."
To what evidence do you think Dr. Soto is referring?
What further investigation might be proposed?
                  <<<========<<

>>>>>==========>>
The Bell Curve.
Herrnstein and Murray, The Free Press, 1994
Part IV: Living Together

Part IV of "The Bell Curve" begins with Chapter 17,
which discusses attempts to improve IQ scores. Some
early studies suggested that better nutrition did not
improve IQ. However, while these were large studies,
they were not controlled studies. Two more recent
controlled studies, one in Great Britain and another in
California, showed a significant difference for the
group given vitamin and mineral supplements compared to
the group given a placebo. In the California study, the
average benefit of providing the recommended daily
allowances was about four points in nonverbal
intelligence. The authors feel that improved nutrition
is effective but suggest that there are still questions
about how effective.

Next, the role of improved education in raising IQ
scores is considered. The authors cite studies
suggesting that the worldwide increase in average IQ can
be attributed to increased schooling. They conclude that
variation in the amount of schooling accounts for part
of the observed variation of IQ scores between groups.
The authors discuss the various studies to see how
improving educational opportunities might increase IQ.
They start with the negative results obtained by the
Coleman study, a large national survey of 645,000
students. This survey did not find any significant
benefit to IQ scores that could be credited to better
school quality. (A discussion of this report appears in
the video series "Against All Odds".) Studies on the
Head Start Program showed that this program increased IQ
significantly during the period of the program, but that
these differences disappeared over time. The authors
mention some other more positive results but conclude
that, overall, what we know about this approach is not
terribly encouraging.

More positive results are cited for the hypothesis that
IQ scores can be significantly improved by adoption from
a poor environment into a good environment. One
meta-study concluded that the increase in IQ would be
about 6 points. Two small studies in France suggested
that a change in environment from low socio-economic
status to high socio-economic status could result in as
much as a 12-point increase in IQ.

Chapter 18 is titled "The Leveling of American
Education". The authors begin with a look at what test
scores say about the changes in students' abilities from
the 50's to the present. They present a graph of the
composite score of Iowa 9th-graders on the Iowa Test of
Basic Skills. The graph shows a steep improvement from
the 50's to the 60's, followed by a significant decline
until the 70's, followed by steady improvement to a new
high by the 90's. Graphs of national SAT scores show
that these scores remained about the same from the 50's
to the 60's, declined significantly (about 1/2 standard
deviation on verbal and 1/3 standard deviation on math)
from the 60's to the 80's, and then remained about the
same from the 80's to the 90's.

The authors argue that the familiar explanation, which
claims that the great decline in SAT scores was caused
by the "democratization" during the 60's and 70's, is
not correct. They point out that the SAT pool expanded
dramatically during the 50's and 60's while average
scores remained constant. In addition, throughout most
of the white SAT score decline the white SAT pool was
shrinking, not expanding.

They next look at what has happened to the most gifted
students. They provide a graph showing the percentage of
17-year-olds who scored 700 or higher on the SAT. The
percentage for math scores decreased from 1970 to 1983
and then increased to its highest level ever in 1990.
Verbal scores decreased during this first period and
remained steady after that.

They give the following explanation for the changes
illustrated by these graphs. The decline in both the
Iowa scores and the SAT scores in the 60's is attributed
to what they call the "dumbing down" of education. This
period was characterized by simplifying the textbooks --
fewer difficult words, easier exercises, fewer core
requirements, grade inflation, etc. They suggest that
the "dumbed down" books would actually help the lower
end of the spectrum of students and so would account for
the increase in overall preparation indicated by the
Iowa scores from the 80's to the 90's. The verbal SAT
scores did not increase because of the use of the dumbed
down books, the increased use of television, and the
decrease in writing generally, including letter writing.
The math SAT scores did not decrease during this period
because algebra and calculus are more constant subjects
and harder to dumb down.

In their discussion of policy implications, they are not
very optimistic about new government policies being able
to solve general education problems. They point out that
surveys have shown most American parents do not support
drastic increases in their children's work load and, in
fact, that the average American has little incentive to
work harder. They argue that educators should return to
the idea that one of the chief purposes of education is
to educate the gifted and "foster wisdom and virtue
through the ideal of the educated man".

Chapter 19 is on affirmative action in higher education.
The authors present statistics on the differences in SAT
scores between various groups. Evidently these
statistics are more easily obtained from private schools
than from public schools. Their first graph shows how
the average SAT scores of blacks and Asians differ from
those of whites for entering students at a group of
selective schools. The median total SAT score for blacks
was 180 points less than for whites; the median for
Asians was 30 points higher than for whites. The range
of the difference for blacks went from 95 (Harvard) to
288 (Berkeley). Data for students admitted to medical
schools and law schools also show significant
differences. In all cases, average test scores for those
admitted tend to follow the differences in the scores
nationally.

The authors give three reasons for academic institutions
to give an edge to black students: institutional
benefit, social utility, and just deserts. Accepting
these, they propose a way to determine a reasonable
advantage by trying to decide between two students
differing only as to minority or white and privileged or
underprivileged. They show that black enrollments in
college increased dramatically after the 60s, when
affirmative action was introduced. Enrollment dropped
off in the late 70's and has pretty well leveled off,
with a slight increase since then. Thus, they say,
affirmative action has been successful in getting more
minority students into colleges. However, they feel that
the differences in performance, the drop-out rates, and
the way that students, black and white, view these
differences are harmful. The authors feel this would not
be the case if admission policies were changed: continue
to make a serious effort to attract minority
applications, but adopt an admission policy that does
not produce such large differences between the SAT
distributions. The result, they argue, would be a more
consistent performance among the various groups and more
harmony among the student body.

Chapter 20 considers, in a similar way, affirmative
action in the workplace. As in the case of education,
the authors argue that affirmative action has had the
desired effect of removing disparities in job
opportunities and wages that were obviously due to
discrimination. However, they look at data suggesting
that the results of affirmative action have gone beyond
that to give a significant advantage to blacks in
clerical jobs, and even more so in professional or
technical jobs, at least in terms of groups with
comparable IQ scores. Their previous research on the
relation of IQ to job performance leads them to conclude
that this has serious economic implications. They feel
it leads to increased racial tensions. They conclude
that anti-discrimination laws should be replaced by
vigorous enforcement of equal treatment of all under the
law.

Chapter 21 is entitled "The Way We Are Headed". The
authors return to their earlier concerns that we are
moving in the direction of (a) an increasingly isolated
cognitive elite, (b) a merging of the cognitive elite
with the affluent, and (c) a deteriorating quality of
life for people at the bottom end of the cognitive
ability distribution. This leads them to some pretty
gloomy predictions. In the final chapter, called "A
Place for Everyone", they give their ideas on how to
prevent this. A somewhat simplified version of the
authors' views is: we should accept that there are
differences, cognitive and otherwise, between people,
and figure out ways to make life interesting and valued
for all, in terms of the abilities that they do have.

DISCUSSION QUESTIONS:

(1) In the California nutrition study, some of those in
the treated group had a large increase, about 15 points,
in their verbal scores and some had no increase at all.
Why might some not have had any increase?

(2) The authors review the evidence that coaching
increases SAT scores. They cite a recent survey of the
studies suggesting that about 60 hours of studying and
coaching will increase combined math and verbal scores
by an average of about 40 points. Does this seem
consistent with what you have experienced or know about
coaching for SAT scores?
                  <<<========<<

>>>>>==========>>
This ends my attempt to say what is in the book "The
Bell Curve". There have still been very few reviews that
seriously discuss the statistical problems involved in
this book. Three reviews that I am aware of are:

Curveball.
The New Yorker, 21 Nov 1994, pp. 139-149
Stephen Jay Gould

Gould asserts that the central argument of "The Bell
Curve" centers on a largely biological explanation of
intelligence, which requires "the validity of four shaky
premises": (1) intelligence, in their interpretation,
must be represented as a single number; (2) it must be
capable of ranking people in a linear order; (3) it must
be genetically based; and (4) it must be effectively
immutable. Gould states that "The Bell Curve fails
because most of its premises are false."

Gould agrees that the book uses serious statistics but
feels that the authors select studies that fit their
agenda and do not give a balanced treatment. He argues
that they often do not answer the right question. For
example, he agrees with their analysis that the tests do
not have statistical bias for different groups. But he
says the real question is whether the 15-point
difference between whites and blacks is due to bias in a
social sense, which he claims is a very different
question from asking whether it is due to statistical
bias. Regarding the authors' use of correlation, he
claims they do not sufficiently emphasize how small many
of the correlations are and how much variation is
possible in their regression lines. Gould feels that the
authors should have explained factor analysis as he did
in his book "The Mismeasure of Man". If they had, the
reader would realize that the infamous g could not be a
single factor and that the concept of g is nothing but a
mathematical construct.

(Charles Murray replied to this review, and Gould
responded, in the mail section, page 10, of the 6
December issue of "The New Yorker".)
                  <<<========<<

>>>>>==========>>
Is the Bell Curve statistically sound?
SIAM News, Jan 1995, pp. 6-8
James Case

Case tries, as I have tried, to give an idea of what is
actually in the book. He succeeds, as I did not, in
doing so in a review of reasonable length. Most of what
he says you already know if you have followed my
summaries. He concludes by remarking:

   At a guess, the book's most lasting contribution will
   be in documentation of the nation's increasing
   stratification according to IQ intelligence and/or
   wealth, and its shrill warning against any alliance
   of the cognitive and affluent elites. Each group
   reinforces the other's instinctive arrogance by
   lending its prestige to the never-ending quest of
   every elite to "restructure the role of society" to
   its own advantage.

DISCUSSION QUESTION:

Do Case's concluding remarks apply to college
professors?
                  <<<========<<

>>>>>==========>>
I am told that there was an excellent discussion of "The
Bell Curve" on CNN at 10 a.m. on Dec 9. Here is a
description of this program.

Ladner of Howard University hosted a symposium on the
recently published book "The Bell Curve: Intelligence
and Class Structure in American Life." Participants
included: Stephen Jay Gould of Harvard University;
Donald Stewart of the College Board; Nancy Cole of the
Educational Testing Service; Janet Norwood of the Urban
Institute; Edmund Gordon of the City College of New
York; and Curtis Banks of Howard University.

I did not see it, but I have ordered a tape and will
comment on it when I have seen it.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

               CHANCE News 4.02
              (11 Jan 1995 to 1 Feb 1995)

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Please send suggestions to:
jlsnell@dartmouth.edu