Better at Sums Than at Summarizing: The SAT Gap
These very small changes are not a fluke; they are mirrored elsewhere. On the other major college entrance exam, the ACT, math scores grew at about the same rate as on the SAT, while English and reading scores have been nearly unchanged since 1992.
On the National Assessment of Educational Progress, given to a sample of 12th graders, scores also improved in the 1990's in math but not in reading. The pattern also holds on standardized tests like the Iowa Test of Educational Development, which many high schools use to measure achievement.
Nobody really knows why we seem to make more progress in math than in reading. But one likely cause is that students learn math mostly in school, while literacy also comes from habits at home. Even if reading instruction improves, scores would suffer if students did less out-of-school reading or grew up in less literate home environments.
In a 1999 survey, the National Assessment asked 17-year-olds whether they saw adults reading at home daily. Thirty-four percent said yes, down from 42 percent in the mid-80's. Fifty-three percent said they read for fun, not for school, at least every week, down from 64 percent in the mid-80's. In 1999, students also reported spending more of their hours watching television.
It is also quite likely that high school instruction has improved more in math than in English. The National Science Foundation has spent more than $1 billion since 1990 to train math and science teachers. The National Council of Teachers of Mathematics has pressed for more mathematical reasoning and less memorizing of facts and formulas. The SAT, and to a lesser extent the ACT, reflect this new emphasis, as does the main National Assessment.
Although many high school students take more and better math courses today, the same cannot be said of English. The emphasis in reading has been on teaching phonics in elementary school, not on improving how high school students interpret literature, a skill that counts heavily on tests like the SAT. Courses like film criticism can satisfy English requirements in some high schools, but do little to prepare students for college entrance exams.
Sometimes there seems to be an explanation for the math-verbal gap, but on closer examination it works for one test and not for others. For example, the growth of private coaching may seem to explain the gap. Officials of the leading test preparation companies, Kaplan and the Princeton Review, say their courses are more effective in math than in reading, in part because students do not trust shortcuts in reading the way they do in math.
Thus, students willingly estimate math answers using techniques like rounding. They consider estimation legitimate, because they were taught it in school. But when coached to "estimate" a written passage by finding and reading just the topic sentence and jumping to the multiple-choice questions, students resist this effective timesaver.
Although that may partly explain the greater growth in math than in reading scores on the SAT, it cannot account as well for the gaps on the ACT or Iowa tests, for which far fewer students receive coaching. Test preparation does not exist at all for the National Assessment.
Another factor, also more applicable to the SAT than the ACT, is immigration. The share of SAT test takers whose first language is not English has gone up, and those students face a greater disadvantage in reading than in math. On the SAT this year, students whose home language is not exclusively English scored close to native English speakers on the math test, 509 vs. 517. But they scored far below native speakers on the verbal test, 468 vs. 515. Indeed, for native English speakers the SAT math-verbal gap virtually disappears: 517 in math against 515 on the verbal test, a difference of just two points.
Unfamiliarity with English is just one characteristic that depresses those students' SAT verbal scores. Immigrant parents are also increasingly less likely to be college graduates, and so less likely to provide literate homes in any language. But the ACT is given mostly in Midwestern states with fewer non-English speakers, so immigration cannot explain the math-verbal gap on that test.
If nothing else, these complexities should keep us from relying on test scores alone to evaluate education. Some causes of lower verbal scores, like watching too much television, will not be touched by policies to improve schools. Others, like more immigrant students taking competitive college entrance tests, should be a source of pride, even if they depress average scores.