!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

CHANCE News 10.02

January 11, 2001 to February 12, 2001

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Prepared by J. Laurie Snell, Bill Peterson, Jeanne Albert, and Charles Grinstead, with help from Fuxing Hou and Joan Snell.

Please send comments and suggestions for articles to
jlsnell@dartmouth.edu

Back issues of Chance News and other materials for teaching a Chance course are available from the Chance web site:

Chance News is distributed under the GNU General Public License (so-called 'copyleft'). See the end of the newsletter for details.

Chance News is best read with Courier 12pt font and 6.5" margin.

===========================================================

We are usually convinced more easily by reasons we have found ourselves than by those which have occurred to others.

Blaise Pascal

===========================================================

Contents of Chance News 10.02

Note: If you would like to have a CD-ROM of the 1997 and 1998 Chance Lectures (also available on the Chance web site), send a request to
jlsnell@dartmouth.edu
with your address. There is no charge.
                                        <<<========<<



>>>>>==============>
Here are two Forsooth! items from RSS News, January 2001, Volume 28, Number 5:

A sample is said to be unbiased if the mean and variance are the same for the sample as for parent population

AS Guru
(BBC educational web site)

Since 1997 Freightliner's failure rate has fallen by 325%

Freightliner Southampton
26 November 2000

Tom Bickel sent us the following Forsooth! item from our student newspaper:

Levin said that in the 1960s, the clearance rate for homicides approached 90 percent. But, since then, that figure has dropped to about 60 percent. In other words, 30 percent more murder cases go unsolved now than they did 40 years ago.

The Dartmouth
Feb. 2, 2001

                                        <<<========<<



>>>>>==============>
Here are some additional web sites that might interest our readers.

Chance at Middlebury

Bill Peterson teaches a Chance course at Middlebury College, and this is the web page for this year's course. You can see there the topics he covered as well as links to materials he used in the course. For example, class 6 has a link to USA Today's archive of their News snapshots.

Bill provides some of his favorite snapshots illustrating the various sins of poor graphics.
                                        <<<========<<



>>>>>==============>
Planet: probabilistic learning activities

David Harris at the International School of Toulouse is developing a web site in the spirit of his Pascal quotation (see above). We recommend looking at his "Activities" to see what David has in mind.
                                        <<<========<<



>>>>>==============>
Speaking of activities, we received the following notice about Deb Rumsey's web site, which will provide activities freely available to teachers of statistics.

The new on-line Statistics Teaching and Resource (STAR) Library is now ready to accept submissions! The objectives of the STAR Library are to provide a library of teaching activities for introductory statistics courses that is 1) peer reviewed; 2) free and easily accessible to the public; and 3) easily customizable.

We would like to invite you to submit one or two of your teaching activities for publication in our first issue. To take a look at the organizational structure of the STAR Library, go to STAR Library and take the tour.

                                        <<<========<<



>>>>>==============>

Robin Lock does an excellent job of identifying and providing links to web resources for use in teaching an introductory statistics course. However, there does not seem to be such a site for resources for teaching an introductory probability course. We recently had occasion to make some remarks about such web resources and you can find these remarks together with the relevant links here.

In particular, you will find references to sites with sets of probability problems. For example, there is an interesting set of probability problems and their solutions taken from the rec.puzzles archive.

We enjoyed the following variation on the famous birthday problem:

At a movie theater, the manager announces that they will give a free ticket to the first person in line whose birthday is the same as someone who has already bought a ticket. What position in line gives you the greatest chance of having the first duplicate birthday?
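The answer can be found by direct computation. Here is a minimal sketch (in Python, assuming 365 equally likely birthdays and ignoring leap years); the function name is ours.

# Probability that the person in position n is the first whose birthday
# duplicates an earlier one: the first n-1 birthdays are all distinct,
# and the nth matches one of them.
def prob_first_match(n, days=365):
    p_distinct = 1.0
    for k in range(n - 1):            # first n-1 people all different
        p_distinct *= (days - k) / days
    return p_distinct * (n - 1) / days

best = max(range(2, 100), key=prob_first_match)
print(best, round(prob_first_match(best), 4))   # position 20 is best, at about .032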

Another interesting problem is:

Are the digits of pi random (i.e., can you make money betting on them)?

Here the most interesting question is to ask if the notion of randomness used in the solution of the problem makes sense.
                                        <<<========<<



>>>>>==============>
The Fathom software group at Key Curriculum Press has redone its web site, the Fathom Resource Center, and you will find a number of interesting resources there.

We enjoyed browsing through the census resources (go to links, then to curriculum resources). In particular, they have a link there to an intriguing NASA activity, "Star Census" (how to count stars using a paper towel tube).
                                        <<<========<<



>>>>>==============>
And speaking of the census we received the following note from reader Milton Eisner:

You were wrong in the last Chance News about California gaining 2 seats. Here are the changes for the 108th Congress:

       -2: NY, PA
       -1: CT, IL, IN, MI, MS, OH, OK, WI
       +1: CA, CO, NC, NV
       +2: AZ, FL, GA, TX
        0: all others
There is a map at www.census.gov.

I ran Webster and Hamilton apportionments on the state populations. Webster gives the same result as the official method, Equal Proportions. Under Hamilton, there is one change: UT +1, CA 0.

This note made us wonder what the illustrious Dartmouth graduate Daniel Webster had, in fact, contributed. We found the history of apportionment fascinating and have included it at the end of this Chance News.
                                     <<<========<<



>>>>>==============>
Some top-scoring schools faulted.
Many question MCAS assessment
The Boston Globe, January 10, 2001
Anand Vaishnav

The flawed logic of annual tests to assess schools
The New York Times, January 24, 2001
Richard Rothstein

The Boston Globe article describes a recent state report of test results of 1,539 schools in Massachusetts. For each school, the state compares average (scaled) scores from 1999 and 2000 on the Massachusetts Comprehensive Assessment System (MCAS) exam to a baseline average from 1998. Based on its 1998 performance on the MCAS, each school was targeted to show an increase in the average of its 1999 and 2000 average scores, from 1 to 2 points for schools with higher baselines, to 4 to 7 points for those with lower baselines. According to the article, 56 percent of the 1,539 schools failed to meet their targets.

Controversy has arisen over the fact that several schools with the highest average scores in 1998 did not meet their targets, despite other traditional indicators of student success (e.g., Merit Scholarship finalists and high SAT scores.) Meanwhile, several schools with lower 1998 average scores have met or exceeded their targets.

These events have prompted a lively discussion at the newsgroup sci.stat.edu on the degree to which the reported results are an example of regression to the mean. Questions are also raised over whether the 1 to 2 point targeted gain for the higher performing schools is too small a change to be statistically significant (the exam is scaled to a range 200 to 280).

We found this discussion by going to www.deja.com. Since then, Deja has been taken over by Google, and if you go to Deja you end up at www.groups.google.com, where you are invited to search all newsgroups. If you search for "Gene Gallagher" you can read the first message in the thread, and if you then choose "view thread" you will get the remaining messages.

The New York Times article discusses related problems in President Bush's proposed annual school testing, and is generally critical of the program.

Rothstein cites research by Thomas J. Kane, an economist at the Hoover Institution, and Douglas O. Staiger, an economics professor at Dartmouth College, on the accuracy of testing in North Carolina (one of several states where teachers receive bonuses if there are sufficient gains in test scores.)

According to Kane, nearly one third of the variance in a school's reading scores from year to year is attributable to differences in the overall make-up of each year's crop of students. Nearly another third arises from random fluctuations of day-to-day test averages.

Rothstein also briefly notes several other potential difficulties with making funding and promotion decisions based on test-score results. For example, from size alone one would expect bigger year-to-year variation in average scores at smaller schools than at larger ones.
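As a rough illustration of this last point, here is a small simulation sketch (Python). The score scale and school sizes are invented for illustration and are not taken from the MCAS report.

import random

def sd_of_school_average(n_students, n_years=1000, mu=240, sigma=15):
    # Standard deviation of a school's yearly average score when each
    # student's score is an independent draw from N(mu, sigma).
    averages = []
    for _ in range(n_years):
        scores = [random.gauss(mu, sigma) for _ in range(n_students)]
        averages.append(sum(scores) / n_students)
    m = sum(averages) / n_years
    return (sum((a - m) ** 2 for a in averages) / n_years) ** 0.5

for n in (25, 100, 400):
    print(n, round(sd_of_school_average(n), 2))   # roughly sigma/sqrt(n)

On this made-up scale, a 1 to 2 point change in a small school's average is well within ordinary year-to-year fluctuation.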

DISCUSSION QUESTIONS:

(1) Apparently the targeted average test score gains are the same for schools with similar baseline averages, regardless of the size of the school. Does this make sense?

(2) Since the students, the teachers (perhaps), and even the tests themselves vary from year to year, how should one interpret a 1 to 2 point average gain (or loss)? How would knowing the correlation between tests help determine the extent to which regression to the mean is at work?

(3) Rothstein mentions that in California, schools are expected to show average gains among low-income and minority groups, as well as school-wide. Under this arrangement, (as opposed to school-wide gains only), he says that economically and culturally diverse schools are less likely to be rewarded for "false" gains, but more likely to be penalized for "false" declines (those that merely reflect random fluctuations.)
Explain.
                                     <<<========<<



>>>>>==============>
Ask Marilyn.
Parade Magazine, 14 January 2001, p. 14
Marilyn vos Savant

A reader asks:

A man lives near a pair of railroad tracks: Northbound trains run one way; southbound on the other. He makes a habit of walking to the tracks, watching for the next train, and then going home. The man does this randomly day and night, seven days a week. Over the years, he sees two trains going north for every one going south. One day, he comments to a conductor that there are two northbound trains for every southbound one. But the conductor insists that there are exactly the same number going each direction. How can this be true?

Marilyn gives a good explanation. This puzzle has been around for some time. This reviewer first encountered it in Fred Mosteller's classic "Fifty Challenging Problems in Probability With Solutions." (The version there is called The Unfair Subway; it's problem number 24).
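For readers who have not seen the explanation, here is a simulation sketch (Python) of one standard resolution, using a hypothetical schedule on which trains in each direction run equally often but each southbound arrives ten minutes after a northbound.

import random

def next_train_direction(minute):
    # Hypothetical schedule: northbound trains pass at minutes 0 and 30
    # of each hour, southbound trains at minutes 10 and 40.  Within each
    # 30-minute cycle the next train is southbound only during the first
    # 10 minutes.
    return "south" if minute % 30 < 10 else "north"

trials = 100_000
north = sum(next_train_direction(random.uniform(0, 60)) == "north"
            for _ in range(trials))
print(north / trials)   # close to 2/3, even though trains run equally often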

DISCUSSION QUESTION:

This can be viewed as a problem of sampling bias. Can you explain how?
                                     <<<========<<



>>>>>==============>
Coins and confused eyewitnesses:
Calculating the probability of picking the wrong guy.
Who's Counting, ABCNEWS.com, 1 February 2001.
John Allen Paulos

This piece was inspired by the New Yorker article "Under Suspicion: The Fugitive Science of Criminal Justice" which considered the difficulties with eyewitness identifications from police lineups (see Chance News 10.01).

Paulos illustrates the difficulties by using a coin tossing example. He asks us to consider a "lineup" of three pennies, where we know that two are fair and one (the "culprit") has a 75% chance of heads. If we were to just guess, the chance of a correct identification is of course 1/3. But suppose that we had previously observed three tosses with one of these pennies, and it had come up heads each time. If we now identify this one as the "culprit," what is the chance we would be right? Paulos invokes Bayes theorem to show that the chance is now 63%.
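Here is a short sketch (Python) of the Bayes theorem calculation as we understand it, starting from a uniform prior of 1/3 on each coin.

# Prior: each of the three coins is equally likely to be the "culprit".
# Likelihood of three heads in three tosses of the observed coin:
#   0.75**3 if it is the biased coin, 0.5**3 if it is fair.
prior_culprit = 1 / 3
like_culprit = 0.75 ** 3
like_fair = 0.5 ** 3

posterior = (prior_culprit * like_culprit) / (
    prior_culprit * like_culprit + (1 - prior_culprit) * like_fair)
print(round(posterior, 3))   # about 0.628, i.e. roughly 63%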

Paulos says that the figures in this simple example are not out of line with actual experience in police lineups, where "the probability of a correct identification...is frequently as low as 60 percent, and, what's worse, innocents in the lineup are picked up to 20 percent or more of the time..."

DISCUSSION QUESTION:

(1) Can you reproduce the Bayes theorem calculation leading to the 63% figure?

(2) Suppose there are five coins in the lineup, four of which are fair. Again you have seen one come up heads three times in a row. Is the chance that this is the culprit better or worse?

(3) If in real life the chance of a correct identification can be as low as 60% and the chance of picking an innocent person as high as 20%, what is the third option?
                                     <<<========<<



>>>>>==============>
SUVs fare poorly in rollover ratings; critics slam methodology.
The Burlington (VT) Free Press, 10 January 2001, 1A
John Heilprin

For the first time, the National Highway Traffic Safety Administration has rated passenger vehicles for risk of rollover accidents. Such accidents are estimated to kill 10,000 people a year. The ratings can be found on the web at Rollover Resistance Ratings Information.

There are also graphics depicting rollover accidents, and statistical classifications of rollovers by vehicle type (passenger car, pickup truck, SUV and van).

The ratings were not favorable for sport utility vehicles (SUVs), which are hot sellers for auto makers. Two General Motors SUVs received one-star ratings, the lowest score, while the Honda Accord Sedan received five stars. According to the NHTSA web site, a five-star rating means the vehicle has a less than 10% chance of rolling over in a single vehicle crash. The chance for a one-star vehicle is over 40%.

Auto industry spokespersons were critical of the new ratings because they were not based on actual driving tests. Instead the government computed a "rollover resistance rating", based on such parameters as the vehicle's center of gravity and the distance between its rear tires, to gauge how top-heavy the vehicle was. According to the web site: "The Rollover Resistance Ratings of vehicles were compared to 220,000 actual single vehicle crashes, and the ratings were found to relate very closely to the real-world rollover experience of vehicles."

The study did not assess the actual causes of the accidents.

DISCUSSION QUESTIONS:

(1) The article reports that "More than 60 percent of SUV occupants killed in 1999 died in rollover crashes...Twenty three percent of car occupants who were killed died in rollovers." Does this mean that the risk of dying in a rollover is greater in an SUV? If not, what does it mean?

(2) Can you suggest any lurking variables that might affect the observed relationship between rollover ratings and vehicle experience?
                                     <<<========<<



>>>>>==============>
Vermont homicides hit 11 last year.
The Burlington (VT) Free Press, 11 January 2001, 1A
Emily Stone

There were a total of 11 homicides in Vermont last year. The article notes that "while the last four months of the year were particularly bloody--8 homicides occurred since late September--the yearly total is still relatively low for the state." A display accompanying the article, entitled "Year-end rash of killings," gives the dates of last year's homicides: 1 Jan, 16 Feb, 1 May, 26 Sept, 27 Sept, 25 Oct, 27 Nov, 29 Nov, 3 Dec and 21 Dec. A table includes the homicide totals for the last decade:

    Year    Number of victims

    1990      14
    1991      24
    1992      21
    1993      15
    1994       5
    1995      13
    1996      11
    1997       9
    1998      12
    1999      17
    2000      11
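A quick sketch (Python) for summarizing the table:

counts = [14, 24, 21, 15, 5, 13, 11, 9, 12, 17, 11]    # 1990 through 2000
mean = sum(counts) / len(counts)
median = sorted(counts)[len(counts) // 2]
print(round(mean, 1), median)   # mean about 13.8, median 13; the 2000 total of 11 is below both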

DISCUSSION QUESTIONS:

(1) How would you describe the distribution of the number of killings for the years shown? In what sense is this year's total "relatively low for the state"?

(2) In what sense do this year's data represent a "year-end rash of killings"?
                              <<<========<<



>>>>>==============>
Franci Farnsworth suggested the following article.

Sky is falling... but experts insist that humans are safe as rain of space junk begins.
The Rutland (VT) Herald, 28 January 2001, C1
Seth Borenstein

The article reports that in the next several weeks, scientists will engineer the re-entries of two space vehicles, while another nine "pieces of space junk" will return to Earth haphazardly. Since 1957, a total of 26,643 objects have been launched into orbit, and 17,681 have returned. To date, no person has been hit by falling debris. The next big event scheduled is the return of the Russian space station Mir, which should crash in the ocean east of Australia on March 6, after being deliberately nudged out of orbit by an unmanned cargo ship. Because Mir's descent is being aimed at an uninhabited area, scientists believe it will pose no risk to humans.

Still, many people remember when Skylab crashed in Australia in 1979. And only last April, debris from a 1996 Delta rocket launch landed near workers in South Africa. Nicholas Johnson, who manages NASA's orbital debris program, says that "for every inch of space occupied by a human at any given moment there is many thousands of times more open space." Nevertheless, he adds that "Eventually, statistics say someone's going to get hit."

The article cites figures from the National Safety Council showing that a person has a 1 in 4,762 chance of dying from something falling on him or her (e.g., a falling tree or collapsing building). Johnson says that the chance of being hit by space debris is much less. The article points out that the National Safety Council computes risk based on ratios of number of events to numbers of people, whereas NASA looks at a specific object and assesses the chance it will hit an uninhabited area. NASA's goal is to keep the chance of hitting a person below 1 in 10,000.

DISCUSSION QUESTIONS:

(1) In a 1994 article in Technology Review, "How Numbers are Tricking You", Arnold Barnett pointed out a problem with expressing risk as the chance that "someone" would be hit by debris from Skylab. His point was that it was unclear whether "someone" means "at least one" person or "exactly one." NASA sources had stated that the chance someone on earth would be hit was 1 in 150, and since there were 4 billion people on earth, any given person had a 1 in 600 billion chance. What assumptions was NASA making?

(2) Suppose each re-entering object independently has a 1 in 10,000 chance of hitting at least one person. How many such objects would be required to give a better than even chance of such an accident?
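For question (2), here is a one-line check of the arithmetic (Python). This treats the 1 in 10,000 figure as an independent per-object probability, which is of course a simplification.

from math import ceil, log

p = 1 / 10_000
n = ceil(log(0.5) / log(1 - p))      # smallest n with 1 - (1-p)**n > 0.5
print(n, 1 - (1 - p) ** n)           # 6932 objects give slightly better than an even chance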
                                     <<<========<<



>>>>>==============>
Jumping Champions.
Scientific American, Dec. 2000, Mathematical recreations, p. 106
Ian Stewart

Jumping Champions.
Experimental Mathematics, Vol. 8, No. 2, pp. 107-118
Andrew Odlyzko, Michael Rubinstein, and Marek Wolf

The set of prime numbers has fascinated mathematicians for thousands of years. Although the primes are seemingly chaotic, there are many patterns that one can find if one peruses the set. Many of these patterns are so simple that a child can understand them. However, in general, it is very difficult to prove that such patterns persist. In particular, if the pattern involves an additive property, then it is unlikely that much can be rigorously proved about the pattern.

Perhaps two of the best-known patterns are described in the conjectures known as Goldbach's Conjecture and the Twin Prime Conjecture. The first of these states that every even number greater than four is the sum of two odd primes. The second of these states that there are infinitely many pairs of primes that differ by two.

In 1791, Gauss conjectured that the number of primes not exceeding a real number x is asymptotic to the integral from 2 to x of 1/log(t). This can be interpreted probabilistically as follows. If one chooses a random integer near x, the probability that it is prime is roughly 1/log(x). Gauss was finally shown to be correct in 1896, by Hadamard and de la Vallee Poussin. This result is known as the Prime Number Theorem.
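Readers can check the quality of Gauss's estimate with a short sketch (Python), approximating the integral by the sum of 1/log(n) for n from 2 to x.

from math import log

def prime_count(x):
    # Count primes <= x with a simple sieve of Eratosthenes.
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            sieve[n*n::n] = [False] * len(sieve[n*n::n])
    return sum(sieve)

def gauss_estimate(x):
    return sum(1 / log(n) for n in range(2, x + 1))

for x in (10**3, 10**4, 10**5):
    print(x, prime_count(x), round(gauss_estimate(x)))   # the estimate tracks the count closely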

One can use this probabilistic interpretation to give an estimate on the number of twin primes n, n+2 not exceeding x. If one compares this estimate with actual counts, one finds that the agreement is quite striking. Unfortunately, this estimate has not yet been rigorously proved to be correct. In fact, it is not even known whether there are infinitely many twin primes.

The present paper considers differences between consecutive primes. In particular, the authors ask which numbers appear most often as differences between consecutive primes. Once again, using heuristic reasoning, it is possible to get some feel for what is probably true. However, even the simplest guesses cannot be proved at present. In fact, even when certain strong hypotheses about the regularity of the primes are assumed (one of these is given below), some of these guesses cannot be rigorously proved.

Given a positive integer n, a "jumping champion" for the interval [1, n] is a number m that occurs at least as often as any other number as a difference between consecutive primes up to n. For example, up to n = 127, the number k = 2 occurs at least as often as a difference between consecutive primes as any other number, so 2 is a jumping champion for this value of n. In fact, 2 is a jumping champion for all n between 5 and 127. For various values of n between 131 and 941, the values k = 2, 4, and 6 are jumping champions. However, for all values of n above 941 for which jumping champions have been calculated, k = 6 is the unique jumping champion.
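For modest n this is easy to verify directly; here is a sketch (Python).

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for k in range(2, int(n ** 0.5) + 1):
        if sieve[k]:
            sieve[k*k::k] = [False] * len(sieve[k*k::k])
    return [i for i, is_p in enumerate(sieve) if is_p]

def jumping_champions(n):
    # All gap sizes that occur most often among consecutive primes <= n.
    ps = primes_up_to(n)
    counts = {}
    for a, b in zip(ps, ps[1:]):
        counts[b - a] = counts.get(b - a, 0) + 1
    top = max(counts.values())
    return sorted(g for g, c in counts.items() if c == top)

for n in (100, 127, 500, 1000, 10000):
    print(n, jumping_champions(n))   # e.g. jumping_champions(100) == [2]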

Despite this evidence, this paper presents heuristic arguments to show that it is reasonable to conjecture that for n around 1.74 x 10^35, the number 30 replaces 6 as the jumping champion. Furthermore, for n around 10^425, the number 210 should replace 30 as the jumping champion.

One notes that the numbers 2, 6, 30, and 210 are products of initial sets of primes. The authors conjecture that the only jumping champions are these numbers and the number 4. Another related conjecture, made by others, is that the jumping champions tend to infinity with n. This conjecture was proved, under a strong assumption, by Ernst Straus and Paul Erdos in 1980. The assumption, called the Hardy-Littlewood k-tuple conjecture, is very easy to state. Suppose that a(1), a(2), ..., a(k-1) are positive integers. Then, unless there is a trivial divisibility condition that stops the set {p, p+a(1), p+a(2), ..., p+a(k-1)} from consisting entirely of primes infinitely often (a prime k-tuple), infinitely many such prime k-tuples will exist. (A stronger form of this conjecture asserts that the asymptotic density of such prime k-tuples can be computed in terms of the a(i)'s.) For example, suppose that a(1) = 2 and a(2) = 4. Then it is easy to see that the set {p, p+2, p+4} cannot be a prime 3-tuple infinitely often, since exactly one of these three numbers is divisible by 3. However, no such condition prevents the set {p, p+2, p+6} from being a prime 3-tuple, so the conjecture asserts that there are infinitely many such prime 3-tuples. (Note that the twin prime conjecture is a special case of this conjecture, with a(1) = 2.)
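A quick sketch (Python) illustrating the divisibility obstruction for {p, p+2, p+4} and its absence for {p, p+2, p+6}:

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

limit = 10_000
for offsets in ((2, 4), (2, 6)):
    count = sum(1 for p in range(2, limit)
                if is_prime(p) and all(is_prime(p + a) for a in offsets))
    print(offsets, count)
# (2, 4): only p = 3 works, since one of p, p+2, p+4 is always divisible by 3
# (2, 6): many triples below 10,000, e.g. 5, 7, 11 and 11, 13, 17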

DISCUSSION QUESTIONS:

(1) If pi(x), the number of primes not exceeding x, is asymptotic to x/log(x) (this is implied by the Prime Number Theorem), what is the average size of the differences between consecutive primes not exceeding x?

(2) Suppose for each integer n > 1, we let T(n) be a Bernoulli random variable that takes on the value 1 with probability 1/log(n). Then the Law of Large Numbers implies that the sum T(2) + T(3) + ... + T(x) has mean equal to 1/log(2) + 1/log(3) + ... + 1/log(x). This sum can be shown to be asymptotic to x/log(x). (If you want to try to show this, note that there are (x-1) summands, and 'most' of them are about 1/log(x) in size, since the function log(x) grows so slowly.) How big is the variance? The amount that pi(x) differs from x/log(x) is connected with the Riemann zeta function and the Riemann Hypothesis, arguably the most famous unsolved problem in mathematics.
                                     <<<========<<



>>>>>==============>
Here is our story of apportionment. We follow the treatment in S. J. Brams, Paradoxes in Politics, The Free Press, 1976.

The constitution requires that each state have a number of members in the House of Representatives in proportion to its population. Thus after each Census the number of representatives from each state needs to be adjusted using the new population figures. This process is called apportionment.

Assume that there are N members in the House of Representatives. If the census reports that the total US population is P and the population of the ith state is p(i), then ideally the ith state would have q(i) = (N*p(i))/P representatives. This number q(i) is called the "quota" for state i. The lower quota is the integer obtained by rounding the quota down, and the upper quota is the integer obtained by rounding it up. We cannot simply round the quotas off to the nearest integer, since we may not get a sum of N when we do this. So we must find a more complicated way to turn the quotas into integers adding to N. It turns out that, as usual, the devil is in the details. Up to the present time four different methods have been used to determine the apportionment numbers, and they all seem to have serious problems.

In 1792 Congress passed an apportionment act that allotted 120 representatives to the 15 states in the Union at that time. The act proposed that the apportionment be made by a method devised by Alexander Hamilton.

Here is Hamilton's method:

Initially assign each state its lower quota. If there are surplus seats, give them out, one at a time, to the states in descending order of the fractional parts of their quotas.

We illustrate the various methods in terms of an example used by Brams. In this example there are five states with total population 26,000. The House has 26 members. Using Hamilton's method we obtain:

                    Hamilton's method  
                          
                                Quota      Lower
State i         p(i)            q(i)       quota   a(i) 

A              9,061            9.061       9       9      
B              7,179            7.179       7       7      
C              5,259            5.259       5       5                   
D              3,319            3.319       3       4
E              1,182            1.182       1       1      

Total         26,000           26.000      25      26

We give each state its lower quota, which accounts for 25 seats, and then state D gets the additional seat because it has the largest fractional remainder, .319.
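Here is a sketch (Python) of Hamilton's method as just described:

def hamilton(populations, seats):
    # Largest-remainder (Hamilton) apportionment.
    total = sum(populations)
    quotas = [seats * p / total for p in populations]
    alloc = [int(q) for q in quotas]                  # lower quotas
    leftovers = seats - sum(alloc)
    # Hand out the surplus seats in descending order of fractional part.
    order = sorted(range(len(quotas)),
                   key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in order[:leftovers]:
        alloc[i] += 1
    return alloc

print(hamilton([9061, 7179, 5259, 3319, 1182], 26))   # [9, 7, 5, 4, 1]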

Washington asked Jefferson and others for advice before signing the bill, and Jefferson convinced Washington that a method he had developed was better than Hamilton's. Washington cast the first presidential veto, and Jefferson's method was used instead of Hamilton's. Here is Jefferson's method.

To describe Jefferson's method it is useful to let SD = P/N. This number, called the "standard divisor", represents the number of people per representative and could be considered the district size if it were an integer. Note that q(i) = p(i)/SD.

Jefferson would again temporarily assign the lower quota to each state. If this gives the correct number of representatives, then stop. If not, find, by trial and error, a number MD (the modified divisor) so that when the modified quotas ma(i) = p(i)/MD are rounded down you do get a total of N. These modified quotas are used for the apportionment. Using this method on our example we obtain:

                     Jefferson's method.

                                  SD = 1000       MD = 906.1

State i     Population p(i)    p(i)/SD   a(i)   p(i)/MD   ma(i)

A              9,061            9.061     9     10.000    10
B              7,179            7.179     7      7.923     7
C              5,259            5.259     5      5.804     5
D              3,319            3.319     3      3.663     3
E              1,182            1.182     1      1.304     1

Total         26,000           26.000    25     28.694    26
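Here is a sketch (Python) of Jefferson's method; instead of hunting for a modified divisor by hand, it searches for one by bisection.

def jefferson(populations, seats):
    # Divisor method with rounding down (Jefferson).
    lo, hi = 1.0, float(sum(populations))    # bracket the modified divisor
    while True:
        divisor = (lo + hi) / 2
        alloc = [int(p / divisor) for p in populations]
        total = sum(alloc)
        if total == seats:
            return alloc
        if total > seats:
            lo = divisor      # divisor too small: too many seats
        else:
            hi = divisor      # divisor too large: too few seats

print(jefferson([9061, 7179, 5259, 3319, 1182], 26))   # [10, 7, 5, 3, 1]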

In 1832 Daniel Webster argued that the Jefferson method's neglect of fractional remainders deprived the New England states of representatives, since together they got two fewer seats than New York, which had 40,000 fewer inhabitants. Webster proposed a method very similar to Jefferson's, except that instead of rounding the quotas down he rounded them to the nearest integer.

                    Webster's method 

                              SD = 1000        MD = 957.2

State i         p(i)           p(i)/SD   a(i)   p(i)/MD   ma(i)

A              9,061            9.061     9      9.466     9
B              7,179            7.179     7      7.500     8
C              5,259            5.259     5      5.494     5
D              3,319            3.319     3      3.467     3
E              1,182            1.182     1      1.235     1

Total         26,000           26.000    25     27.162    26

Note that using Webster's method rather than Jefferson's causes A to lose a seat and B to gain a seat.

Despite the failure of Hamilton's method to gain acceptance when he proposed it, it was incorporated into the apportionment act of 1850. At the time of the 1880 census, the chairman of the Committee on the Census asked the Census Office to compute apportionments based on the 1880 census for House sizes between 275 and 300 members. When the size was set at 299, Alabama was entitled to 8 seats, but when the size was set at 300 it was entitled to only 7! Thus increasing the number of representatives would decrease the number of seats Alabama would have. This bizarre behavior caused a storm of protest in Congress and was named the "Alabama paradox." And, of course, it can even happen in our example. If we apply Hamilton's method to the three values N = 25, N = 26, and N = 27 we obtain:

                    Hamilton's method

                      N = 25        N = 26        N = 27

State i    p(i)     q(i)  a(i)    q(i)  a(i)    q(i)  a(i)

A         9,061     8.713   9     9.061   9     9.410  9
B         7,179     6.903   7     7.179   7     7.455  8
C         5,259     5.057   5     5.259   5     5.461  6
D         3,319     3.191   3     3.319   4     3.447  3
E         1,182     1.137   1     1.182   1     1.227  1

Total    26,000    25.000  25    26.000  26    27.000 27

We see that when N = 25 State D has 3 representatives; when this increases to N = 26 it has 4 representatives; but then when N = 27 State D loses a representative.

The Alabama paradox led mathematicians and others to look more carefully at the mathematics of apportionment schemes.

It seemed reasonable to require that an apportionment method have the property that the number of representatives for a state could not decrease when the number of members in the House is increased. This is now called the "monotone property".

The Jefferson and Webster methods have the monotone property, and so the government went back to Webster's method for the 1910 and 1930 censuses.

The mathematician Edward Huntington and the Census statistician Joseph Hill made a study of apportionment methods that have the monotone property and recommended a method called "equal proportions," which was put into use starting with the 1940 census and is still in use.

The Huntington-Hill method is the same as the Webster method except for one change. Webster's method rounds numbers in the usual way: down if the fractional part is less than .5 and up if it is greater than .5. Using .5 means that we are rounding at the arithmetic mean of the upper and lower quotas. Huntington and Hill replaced this by rounding at the geometric mean of the upper and lower quotas. Recall that the geometric mean of two numbers a and b is sqrt(a*b).
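The same divisor search used for Jefferson's method handles Webster and Huntington-Hill; only the rounding rule changes. Here is a sketch (Python): a modified quota is rounded up exactly when it exceeds the chosen mean of its lower and upper quotas.

from math import floor, sqrt

def divisor_method(populations, seats, threshold):
    # Generic divisor method: a modified quota q is rounded up exactly
    # when q exceeds threshold(lower, upper).
    def allocate(divisor):
        alloc = []
        for p in populations:
            q = p / divisor
            lower = floor(q)
            alloc.append(lower + 1 if q > threshold(lower, lower + 1) else lower)
        return alloc
    lo, hi = 1.0, float(sum(populations))
    while True:
        d = (lo + hi) / 2
        alloc = allocate(d)
        total = sum(alloc)
        if total == seats:
            return alloc
        lo, hi = (d, hi) if total > seats else (lo, d)

webster = lambda a, b: (a + b) / 2        # arithmetic mean
hill = lambda a, b: sqrt(a * b)           # geometric mean

pops = [9061, 7179, 5259, 3319, 1182]
print(divisor_method(pops, 26, webster))  # [9, 8, 5, 3, 1]
print(divisor_method(pops, 26, hill))     # [9, 7, 6, 3, 1]

Run on our example, these reproduce the Webster and Huntington-Hill columns of the summary below.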

Carrying this out for our example and combining it with our previous results leads to the following summary of the apportionments produced by the four methods that have been used for the census:

       Hamilton   Jefferson  Webster   Huntington-Hill


A         9          10         9          9     
B         7           7         8          7
C         5           5         5          6  
D         4           3         3          3
E         1           1         1          1

Total    26          26        26         26

Note that no two methods give the same apportionment!

To see that significant differences can happen in real situations you can find the four methods carried out for the 1990 census at Larry Bowen's web site.

You will see that Webster's method gives very similar results to the Huntington-Hill method, but the results using Jefferson's method are quite different. The same is true for the 2000 census.

Alas, the story does not end here. Recall that the constitution requires that the states be apportioned seats in proportion to their populations. While it is usually not possible to make this exactly true, it would seem we should come as close as possible: each state's apportionment should be either its upper or its lower quota. An apportionment scheme with this property is said to satisfy quota. While the Huntington-Hill method has the monotone property, it does not necessarily satisfy quota. It is assumed that if this ever shows up in practice it will lead to the requirement that the apportionment satisfy quota. If so, the mathematicians Balinski and Young have such a method ready for immediate use. But will it ever end?

For more about this topic see:

Apportionment Schemes and the Quota Method.
American Mathematical Monthly, Vol. 84, No. 6 (Jun.-Jul., 1977), pp. 450-455
M. L. Balinski, H. P. Young

Available from JSTOR.

The constitutionality of the current method of apportionment was challenged in the courts in the early 1990s. The issue reached the Supreme Court in 1992, which ruled that the method was constitutional. For an interesting discussion of these legal battles see here.
                                     <<<========<<



>>>>>==============>
Chance News
Copyright (c) 2001 Laurie Snell

This work is freely redistributable under the terms of the GNU General Public License as published by the Free Software Foundation. This work comes with ABSOLUTELY NO WARRANTY.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

CHANCE News 10.02

January 11, 2001 to February 12, 2001

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!