AI and emerging technologies are evolving at an unprecedented rate, and the research needed to drive the design, evaluation, and understanding of these tools must keep pace. While these novel technologies offer opportunities to apply research insights in new and exciting ways, the speed of change also creates barriers around our conception of “evidence,” as what is tested may be an outdated version of the tool by the time the results are published. How are education leaders, researchers, and providers grappling with this tension?
In our recent webinar, we explored strategies to answer this question with Mario Andrade, superintendent of Nashua School District, Natalia Kucirkova, co-founder and director of the International Centre for EdTech Impact, Ran Liu, VP of AI and chief AI scientist at Amira, and Julia Wilkowski, the pedagogy team lead at Google.
Even while edtech incorporates new technologies, it should remain rooted in what we know about learning. Leveraging learning theories and research-based instructional practices when designing, testing, and improving a product enables developers and education leaders to invest in tools best positioned to positively impact learning.
Kucirkova shared that some of the most valuable professional development opportunities often come from participating in established edtech and education conferences. These annual gatherings create dedicated spaces where educators, educational researchers, and edtech developers can learn from one another. The Centre actively supports events like the annual meeting of the UNESCO Global Alliance on Science of Learning for Education and Digital Learning Week, which serve as essential spaces for cross-sector, international exchange related to research-based edtech.
Making informed, responsible decisions around AI and emerging technologies requires continuous evaluation of how these tools affect learners’ outcomes and overall well-being. These evaluations must be grounded in meaningful measures of learning. Clearly defining learning outcomes enables education leaders, educators, researchers, and developers to collaboratively evaluate the quality and safety of these tools. Paired with a commitment to continuous improvement, ongoing evaluation of products helps drive iterative improvements to both individual products as well as the industry at large.
Liu shared that the most important thing is to measure whether a product is working, and to ensure that what you’re measuring is aligned with your goals. At the short-term and simplest level, you can leverage data collected within the product. A concrete example is developing new micro-interventions that kids using the product receive when they struggle with a particular reading skill. Liu and team consider students’ future opportunities to demonstrate growth on that skill, and whether the learning the intervention is expected to drive actually plays out across those opportunities.
From a superintendent’s perspective, Andrade discussed the importance of knowing from the start the problem they’re solving and the desired outcomes. When districts select a technology-enabled product, they should be clear on their needs and on what would make the investment in a new product worthwhile. When reflecting on a partnership with an edtech provider, districts need to know whether the metrics they wanted to improve actually improved.
Understanding the breadth of benefits and risks that comes with new technology requires developers to collaborate with experts across a range of disciplines—including subject matter experts, human development experts, learning scientists, and researchers—as well as the educators and learners intended to use the tool. Deeper multidisciplinary collaboration can drive the development of products designed to positively impact learning in ways that have been historically unimaginable.
A simple way to begin a collaboration is to reach out. Kucirkova shared that many researchers are eager to share their findings with practitioners and the wider public, and they welcome direct engagement from edtech providers too. Reaching out to individual researchers or connecting through existing formal and informal research networks can open valuable pathways for collaboration. The International Centre for EdTech Impact curates a network of more than 1,200 researchers working on topics directly relevant to edtech. These experts are available to advise on product–research fit, collaborate on studies, or discuss the latest evidence in their domains.
From Wilkowski’s perspective, some of the most interesting projects Google has embarked on recently involve partnering with major school districts to assess, at a broad scale, key pain points and what is and isn’t working when it comes to AI tools. These collaborations aim to identify not only student outcomes, but also teacher efficiency, and even applications of AI and emerging technology to enhance creativity or reach more students in the classroom.
“R&D must be an expectation of leaders, school boards, and systems, not to be an option or a choice. R&D needs to be a fundamental way embedded into our daily practice.” – Mario Andrade