The rapid emergence of generative AI has left many educators with fundamental questions about when and how to safely adopt this technology in K-12 contexts. AI is being developed at a lightning pace with little consideration for children's safety or for how these systems and tools can better serve historically and systematically excluded learners. Meanwhile, many people already use algorithms pervasively in and out of school.
In recent years, a wave of resources has emerged to help educators understand what AI is and how to integrate it into teaching and learning (see resources below). AI literacy has taken shape as a skill set that teachers and students need in order to use emerging technologies safely in teaching and learning. Nonprofit organizations, thought leaders, and school districts have begun to make progress in defining AI literacy, and examples of such frameworks are listed at the end of this blog post. We build on these definitions to present a comprehensive AI literacy framework that seeks to empower district leaders, teachers, and learners to make informed decisions about how to integrate AI in the world, and specifically in education. The framework emphasizes that understanding and evaluating AI are critical to making informed decisions about if and how to use AI in learning environments.
AI literacy builds on years of work in digital readiness, media literacy, and computational thinking. Just like these more familiar domains, AI literacy applies common 21st century skills such as communication, collaboration, critical thinking, and creativity. Building on definitions of AI and generative AI from the U.S. government, Digital Promise defines AI literacy as follows:
AI literacy includes the knowledge and skills that enable humans to critically understand, use, and evaluate AI systems and tools to safely and ethically participate in an increasingly digital world.
Our AI Literacy framework, pictured below, includes three components: Understand, Use, and Evaluate.
Understanding AI is a technical knowledge set. It applies and extends computer science and computational thinking practices, such as using data and creating automations, along with underlying skills such as algorithmic thinking, pattern recognition, abstraction, and decomposition.
Understanding AI is an essential component of AI literacy because in order to make informed decisions about using and evaluating AI, users should have a technical understanding of how artificial intelligence uses large datasets to develop associations and automate predictions.
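To make that idea concrete, here is a minimal, hypothetical sketch in Python. It is not part of the framework and not any specific product; the features, labels, and the simple nearest-neighbor rule are illustrative assumptions. It shows the basic pattern described above: an algorithm finds associations in a small labeled dataset, then automates a prediction for a new, unseen case.

```python
# Illustrative sketch only: a toy 1-nearest-neighbor "classifier".
# Each training example pairs two made-up features
# (minutes on problem sets, minutes drafting essays) with a label.
training_data = [
    ((90, 10), "math"),
    ((75, 20), "math"),
    ((15, 80), "writing"),
    ((5, 95), "writing"),
]

def predict(new_point):
    """Return the label of the closest training example (squared distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda example: distance(example[0], new_point))
    return closest[1]

# The prediction is driven entirely by patterns in the data the system was given.
print(predict((80, 15)))  # -> "math"
```

Real AI systems use far larger datasets and far more complex models, but the principle is the same: the data that goes in shapes the associations and the automated predictions that come out, which is why evaluating that data matters.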
Building on the research-based 4 As framework for families' AI literacy dimensions, we distinguish between three ways to engage with AI in educational contexts: Interact, Create, and Apply.
Evaluating AI is the most critical element of AI literacy. All too often, people use AI passively without considering the privacy, safety, or societal implications of doing so. To be truly AI literate, users must take a more active approach, with awareness of the data the algorithm is using and how it is being applied and shared. Building from the SAFE Benchmarks framework, we have identified four components of evaluation, summarized in the table below:
Transparency: Supporting users to understand what data and methods were used to train this AI system or tool.
Guiding questions: What AI model and methods were used to develop this tool? What datasets were used to train this AI model?

Safety: Understanding data privacy, security, and ownership.
Guiding questions: How is information being collected, used, and shared? How do we prevent tools from collecting data and/or delete data that was collected?

Ethics: Considering how datasets, including their accessibility and representation, reproduce bias in our society.
Guiding questions: How is AI perpetuating issues of access and equity? Who is harmed, who benefits, and how?

Impact: Examining the credibility of outputs as well as the efficacy of algorithms, and questioning the biases inherent in the use of AI systems and tools.
Guiding questions: Is this AI algorithm the right tool for impact? Is this AI output credible? How do we center human judgment in decision making?
Table 1. Four components of Evaluating AI, a critical component of AI Literacy
Digital Promise is expanding and applying this framework to support learners, teachers, education leaders, and caregivers with the knowledge and resources they need to understand, use, and evaluate AI. AI literacy is a critical skill set for educators and learners to decide if and how to use AI systems for teaching and learning. In the coming months, we will share research-based guidance and practical tools to promote AI literacy as it becomes more pervasive in and out of the classroom.
Learn more about AI in K-12 Education
Emerging Frameworks for AI Literacy