Natalie Woolman’s article in Varsity (Friday 28th September) made an impassioned rebuttal of a myth I had not previously encountered: that pooled students are less successful than direct-entry students. This is a laudable sentiment but, unfortunately, I believe the author’s mathematical analysis does not support it.


Let me assure you that despite my hailing from the current top-ranked college, Emmanuel, I do not condone the “snobbery” of inter-collegiate rivalry (not in public, anyway). Neither do I espouse the view that pooled students are in some way inferior to direct-entry students. But statistically significant results demand explanation.


The article began by citing Dr. Bateman’s discovery that “average Baxter scores for Part II exam results [for 2000-2006] show no significant difference between [direct-entry students] and those admitted via the January pool”. The Baxter Tables are the university’s own statistical analysis of college performance, privately distributed to Senior Tutors; they also ignore the ‘fudge factor’ that Tompkins uses to compensate for the different subjects read by a college’s students. They are kept so secret that even Wikipedia, the fount of all knowledge, lacks an entry on them, and my own internet search yielded little more than a few numbers out of context. Since we as students are clearly not privy to the Baxter scores, our acceptance of this evidence must be a little cautious.


However, I am sure Dr. Parks’ “statistical study of engineering results” is good evidence in favour of the author’s position.


The article continues: “By ranking the colleges … based on the percentage of offers made through the winter pool, and comparing the resulting list with the Tompkins Table valid for that intake, I explored the hypothesis that colleges with a greater number of pooled candidates would flounder with Tompkins … no such relationship was evident”. Allow me to present my findings. Following the author’s method, I ranked each of the 29 colleges in the Tompkins Table by the proportion of offers made to winter-pool students out of the total number of winter admissions for a given year. I then compared this ranking with the Tompkins Table position of each college two years later, when the new admissions would have sat their first exams and their results would be reflected in the Tompkins Table. Finally, I applied a statistical test, Spearman’s rank correlation test, to obtain a test statistic representing the strength of the correlation between the two ranked lists of colleges.
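
For the curious, the standard form of Spearman’s statistic (the textbook version, which assumes no tied ranks) is

    r_s = 1 - \frac{6 \sum_i d_i^2}{n(n^2 - 1)}

where d_i is the difference between college i’s positions in the two rankings and n is the number of colleges being compared. A value near +1 or −1 indicates strong agreement or disagreement between the two lists; the significance test then asks whether a value that large could plausibly have arisen by chance.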


So: considering the 29 colleges in the Tompkins Table, the test values for the winter admissions of 2005, 2004, and 2003 all indicated a statistically significant association between the proportion of students admitted from the winter pool and the Tompkins rank two years later, at a 95% confidence level, rising to 99% confidence for the 2003 winter admissions: a direct contradiction of the author’s assertion above.
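
Anyone wishing to check the arithmetic can reproduce the test in a few lines of Python. The sketch below uses scipy’s spearmanr; since I cannot reprint the admissions figures here, the two rankings are mere placeholder permutations, not the real data.

    # A minimal sketch of the test described above. The two rankings are
    # placeholder permutations of 1..29, not the real admissions data.
    import numpy as np
    from scipy.stats import spearmanr

    n = 29                                  # colleges in the Tompkins Table
    rng = np.random.default_rng(2007)       # any seed will do for placeholders
    pool_rank = rng.permutation(n) + 1      # rank by proportion of winter-pool offers
    tompkins_rank = rng.permutation(n) + 1  # Tompkins rank two years later

    rho, p = spearmanr(pool_rank, tompkins_rank)
    print(f"rho = {rho:.3f}, p = {p:.3f}")
    # p < 0.05 corresponds to significance at the 95% confidence level,
    # and p < 0.01 to the 99% level quoted for 2003.

Restricting the same call to the 24 undergraduate colleges reproduces the variant described next.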


A colleague of mine suggested that graduate colleges might be one of the mysterious “confounding circumstances” that the author mentions. Indeed, the Baxter Tables are compiled only for the 24 traditional colleges with a large proportion of undergraduates, so I duly repeated the tests for these 24 colleges alone. With this limited set, the test value for the winter admissions of 2005 indicated no significant correlation between the proportion of pooled students and Tompkins rank. However, the test values for 2004 and 2003 still showed a significant correlation, at 99% confidence!


What should we make of this persistent association? Are pooled students really inferior? Is there some obscure connection between a college’s applications policy and its academic performance? Perhaps the top colleges receive so many applications that they never need to take pooled students? I am unable to judge which of these possibilities is most convincing; nor do I wish to, because I believe there is a more reasonable explanation.


First, let us remember that an association between two variables does not establish a causal connection; other criteria must be met. Secondly, I have not considered possible confounding factors such as the wealth of a college, its institutions and practices, and differences in the number of applications each college receives. Thirdly, I would question the reliability of the Tompkins Table as a measure of college success. As Mary Beard asks in The Times Online: “Will all these clever firsts seem like the college’s greatest successes after a decade or so?” There are many university alumni who have become hugely successful despite a relatively ‘poor’ showing in exams. In any case, there is hardly any difference between the Tompkins percentages of the top 50% of colleges, which makes the use of statistical tests on a ranked list difficult to justify. Finally, the many limitations of both my investigation and Ms. Woolman’s outweigh our seemingly authoritative results: without access to the raw data on which the Tompkins Table is based, for example, I am unable to look in isolation at a single year’s intake of students and that year’s exam success.
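
To see why near-tied percentages undermine a rank-based test, consider a toy illustration (hypothetical numbers, not real Tompkins scores): when the underlying values are almost identical, a perturbation far smaller than the values themselves is enough to scramble the ranking completely.

    # Illustrative only: near-tied scores plus a little noise produce a
    # very different rank order, so a rank-based test can find or lose
    # "structure" that the raw percentages barely support.
    import numpy as np
    from scipy.stats import rankdata

    rng = np.random.default_rng(1)
    scores = np.linspace(60.0, 60.5, 15)     # hypothetical near-tied percentages
    noisy = scores + rng.normal(0, 0.2, 15)  # noise of 0.2 percentage points
    print(rankdata(scores).astype(int))      # 1 2 3 ... 15
    print(rankdata(noisy).astype(int))       # typically a heavily scrambled order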


Pooled students should not have to rely on a “barrage of statistics” to prevail against the myth of their inferiority. As Ms. Woolman’s article unintentionally shows: with a little luck, and the general apathy of the student body, you can even become the president of CUSU.