Taking a test online rather than on paper might hurt students’ test scores — at least at first — according to a study from the American Institutes for Research (AIR).
In a study published in the February issue of the Economics of Education Review (presented in an April 2018 working paper), AIR researchers James Cowan and Ben Backes looked at the scores of Massachusetts students during the first two years of online testing. In 2015 and 2016, about half of Massachusetts students took PARCC, the state-administered end-of-year test, online, while half took a paper-and-pencil version.
According to the AIR researchers, the students who took the test online performed as if they’d had five fewer months of academic preparation in math and 11 fewer months of preparation in English than their peers who took the test on paper. There was no reason to believe that they were actually less prepared; instead, the mode of testing appeared to be dampening their results. On the English test, the scores of students from low-income families, English Language Learners, and students with disabilities seemed to be especially affected by the switch online. But the discrepancies in scores shrank in the second year of testing, suggesting that, once students get used to the new format, it is not as much of a hindrance.
While Cowan and Backes aren’t sure why the Massachusetts students initially performed worse on computer-based testing, they have some hypotheses: students might not have been used to the types of questions asked, the format of the test, or the technology being used.
“It could be the case that there are some other technical skills being tested [by computer testing], or it could be the preparation of the school to administer the test,” Backes says.
Nevertheless, since end-of-year state tests continue to inform decisions ranging from which schools to close to which teachers to reward with bonuses, policymakers and educators who rely on standardized-testing data should consider whether the mode of testing might be affecting the results they see.
“The state should think about how schools might be penalized for something out of their control,” Backes says.
A 2016 paper by Wendy Gelbart of the University of Nevada, Las Vegas, offers tips for preparing students with learning disabilities for the transition to computer-based testing — advice that can be applied more generally.