Technology offers a huge opportunity for schools to personalize learning for each student, but there is limited high-quality evidence to show which products are effective. And, even when product developers provide evidence, school leaders often lack the time or research expertise to analyze study results and determine whether the evidence is strong enough to support a purchasing decision.
To help district leaders quickly evaluate the evidence developers provide, we teamed up with colleagues at the Johns Hopkins University Center for Research and Reform in Education to create a tool to analyze product evaluation studies.
The tool includes 12 multiple-choice questions on the relevance, source, and design of a product evaluation study. The final score can help district leaders decide whether to run a pilot in their own schools or move on to consider other aspects of a technology purchase, such as cost, fit with the existing IT system, and required professional development. If a pilot is needed, leaders can check out Digital Promise's tips to guide the pilot process, as well as a report on how districts across the country run technology pilots.
We hope this tool will help leaders be more confident that products they select will meet district goals. It complements efforts to support developers in building stronger evidence of their products’ effectiveness, such as the U.S. Department of Education’s Ed Tech Developers’ Guide, and Digital Promise’s Using Research in Ed Tech guide.
By strengthening both sides of the market, we can do a better job of getting high-quality products into the hands of the students and teachers who need them.