4

Most colleges and universities in the US base their admissions on high school grades plus a standardized test (SAT or ACT). To decide whether a particular piece of data X is a good admissions criterion, people seem to focus on how well X correlates with college grades. (This may be freshman grades or total college GPA. Some people think success in getting a degree would be more appropriate. For the purposes of this question, I don't think these distinctions are relevant.)

Isn't this correlation a completely incorrect way of evaluating X as a tool for admissions?

It seems like an apples-to-oranges comparison. A student with low X is admitted to a nonselective school, where standards are low and they compete with other low-X students. Grades at that school measure how well they did relative to the low standards prevailing there. A student with high X is admitted to a selective school, where the opposite holds. A "B" grade at Berkeley is not the same as a "B" grade at Cal State Dominguez Hills. A high-X student goes to Berkeley and gets a "B" in calculus; a low-X student goes to CSDH and gets a "B" in calculus. This shows up in the statistics as a lack of correlation between X and grades, since the students differed in X but got the same grade. But isn't that misleading, since the grades mean different things at the two schools?
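
As a rough illustration of this point (everything below is hypothetical: the population, the noise levels, and the within-school curving are all invented for the sketch), here is a small simulation in which X is in fact a good predictor of underlying ability, yet the correlation between X and grades is substantially weakened once students are sorted into schools by X and graded on each school's own curve:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical applicant pool: "ability" is what we ultimately care about,
    # and the admissions criterion X is a fairly accurate measurement of it.
    n = 10_000
    ability = rng.normal(0.0, 1.0, n)
    x = ability + rng.normal(0.0, 0.5, n)

    # Students are sorted by X: above the median go to the selective school,
    # the rest to the nonselective one.
    selective = x > np.median(x)

    # Each school grades on its own curve: a grade reflects standing relative
    # to classmates at that school, not absolute performance.
    grades = np.empty(n)
    for group in (selective, ~selective):
        performance = ability[group] + rng.normal(0.0, 0.3, group.sum())
        grades[group] = (performance - performance.mean()) / performance.std()

    print("corr(X, ability):", round(np.corrcoef(x, ability)[0, 1], 2))
    print("corr(X, grades): ", round(np.corrcoef(x, grades)[0, 1], 2))

With numbers like these, corr(X, ability) comes out high while corr(X, curved grades) comes out much lower, even though X did exactly what an admissions office would want it to do.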

Is there some more logically justifiable statistic to use in evaluating whether X is a good admissions criterion?

  • Bingo. This certainly identifies a nonsensical aspect of "higher education". I have no simple solution... but can confirm that the nonsense is real. So whatever decisions we/anyone makes, if we want them to be genuinely sensible, should take this nonsense into account. (Usually this is "impossible", a.k.a. impossible to document un-prejudicially...) – paul garrett Mar 31 '14 at 02:20
  • Since most schools are ranked, couldn't one create: score = f(grade, school rank) and see how X affects score? – Akavall Mar 31 '14 at 03:38 (a toy sketch of such an f appears just after these comments)
  • Basic rules of statistics are violated: there is no control group. You'd need a school that manages to admit everybody and provide everybody with high-quality teaching (as opposed to, let's do one year of admitting everybody with our current staff situation so we can figure out good criteria), then correlations would be meaningful. – Raphael Mar 31 '14 at 12:10
  • @Raphael: "You'd need a school that manages to admit everybody and provide everybody with high-quality teaching." This sounds like a description of a community college. But regardless of the quality of the teaching, community colleges have lower standards. –  Apr 01 '14 at 05:34
  • @BenCrowell: They don't get a representative sample for other reasons, but maybe they have better chances of figuring out some correlations than other schools. – Raphael Apr 01 '14 at 07:24
  • My university also uses the rank of the applicant's high school, at least for in-state students, presumably (hopefully!) as measured by the performance of past admitted students from that school. So the valedictorian from a low-ranked high school may be rejected, even though middle-of-the-road students from higher-ranked schools are accepted. – JeffE Jul 11 '14 at 14:04
  • I cannot see the relevance of the tag "graduate admissions" to this discussion. As to whether X is a "good" admissions criterion for undergraduate study, it depends mainly on how well the performance of the student meets or exceeds the expectations (or hopes) of the university for its students. Since these expectations are not necessarily quantifiable, statistical analysis is dubious (I believe this was also paul garrett's point). – MikeV Aug 07 '15 at 17:50
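
Purely as an illustration of the comment above about score = f(grade, school rank): the functional form and the weight below are invented, and choosing them defensibly is exactly the hard part, but the idea would look something like this:

    # Hypothetical adjustment: fold a school-quality measure into the outcome
    # before correlating it with X.  The linear form and the weight of 2.0 are
    # arbitrary choices made up for this sketch.
    def adjusted_score(gpa, school_rank_percentile, weight=2.0):
        """Combine a student's GPA (0-4 scale) with the school's rank percentile (0-1)."""
        return gpa + weight * school_rank_percentile

    print(adjusted_score(3.0, 0.95))  # a "B" average at a highly ranked school -> 4.9
    print(adjusted_score(3.0, 0.10))  # the same average at a low-ranked school -> 3.2

One would then correlate X with adjusted_score rather than with raw GPA.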

2 Answers

1

Define "effectiveness of criteria for admission". Collect data on the potentially relevant independent variables. Do statistical analysis on the resulting data set.

Be careful: you'll probably be considering only the people who were admitted, so the results will be skewed.
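
As a minimal sketch of that selection effect (all numbers here are made up): if later performance is only ever observed for students above some X cutoff, the correlation computed on the admitted group understates the correlation in the full applicant pool:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical applicant pool in which X genuinely predicts later performance.
    n = 10_000
    x = rng.normal(0.0, 1.0, n)
    performance = 0.6 * x + rng.normal(0.0, 0.8, n)

    # Only applicants above an X cutoff are admitted, so only their outcomes
    # are ever observed.
    admitted = x > 1.0

    print("corr in the full pool:   ", round(np.corrcoef(x, performance)[0, 1], 2))
    print("corr among the admitted: ", round(np.corrcoef(x[admitted], performance[admitted])[0, 1], 2))

The second number comes out well below the first even though the underlying relationship is the same; this is the classic range-restriction problem with validating admissions criteria only on admitted students.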

vonbrand
-2

A more meaningful study might ignore college grades, which as you point out depend on the type of institution, and focus on downstream accomplishments. It isn't just local and community colleges that distort the picture; several highly ranked universities have A/no-credit policies, so that everyone graduates with a 4.0. Many education schools are effectively A/no-credit, with overall GPAs of ~3.9. I'd like to know how many university professors had low grades or SAT scores, how many NAS members had low grades or SAT scores, and what the profiles of CEOs, successful writers, lawyers, engineers, etc. were. The purpose of college is not the production of an impressive transcript.

  • The question asks for a statistic to use in evaluating admissions criteria, and as far as I can tell you do not propose one. This does not seem to answer the question. – ff524 Jul 11 '14 at 12:10
  • Your comment seems quite authoritative, but on examining the question it doesn't just ask for a more appropriate statistical test, but for criteria other than grades or graduation rates. It does not seem that you have read the question. – John salerno Jul 11 '14 at 13:21
  • The actual question, as expressed in the title and the last line of the post, asks for statistics by which to evaluate the effectiveness of criteria for admission (grades are an example of a criterion for admission). The rest of the post is supposed to give context. – ff524 Jul 11 '14 at 13:25
  • Saying the wrong thing repeatedly does not make it so. The title was 'What is a correct way of evaluating the effectiveness of criteria for admissions?' It does not mention statistics, which is your own narrow interpretation. Try thinking outside the box, and improving your reading comprehension. – John salerno Jul 12 '14 at 01:22