Guest Post: 3 Guiding Principles for Responsible AI in EdTech – Digital Promise

July 3, 2025

In April, Digital Promise launched its newest product certification, Responsibly Designed AI, which helps districts make more informed procurement decisions. At a time when many edtech solutions are rapidly integrating artificial intelligence (AI) capabilities, it’s important for developers to think critically about how they are doing so responsibly. This blog is the third in a series of four posts exploring how edtech can be powered by AI in ways that best support educators’ and learners’ pedagogical needs, agency, and safety. Each blog post is written by an edtech developer whose product was among the first cohort to earn the Responsibly Designed AI certification. Read the second post here.

Our journey began in 2021, in the midst of a world turned upside down. As longtime colleagues in the professional services world, we—as co-founders and now vice president and chief technology officer of CheckIT Learning, a subsidiary of CheckIT Labs—felt a growing desire to contribute to something bigger than ourselves. We believed that technology, when paired with the right intention and scientific insight, could transform education—not just to help students succeed academically, but to empower them to become agents of change in a complex, fast-moving world.

That vision lives on through our continued commitment to building tools that respect learners, empower educators, and prepare this generation for the challenges ahead.

Our learning management system (LMS), CheckIT Learning, powered by Cleo—a neuroscience-informed AI mentor—supports both teachers and students to understand how learning works and is designed to help students develop effective study habits, strengthen executive functioning, and build metacognitive skills that support long-term learning.

But with AI’s growing role in the classroom, the stakes are high. Educational tools are shaping identity, confidence, and opportunity. That’s why we didn’t just focus on what Cleo could do; we focused on how it does it, and who it does it for.

Designing AI Responsibly

When we learned about Digital Promise’s Responsibly Designed AI product certification, it felt like a natural fit. Their framework gave us a clear way to assess how we were doing and where we could do better. The process pushed us to formalize what we’d already built: a culture of care, evidence, and user inclusion.

Achieving certification sends a clear message to schools, educators, and partners: We take our responsibilities seriously and are committed to building lasting trust, not solely through what Cleo can do as an AI mentor, but through how it has thoughtfully and intentionally been designed to support instruction and learning.

We centered the development of Cleo on three core design principles that guide how AI can—and should—be built to best serve educators and learners.

Principle 1: The Right Tools for the Right Purpose

Not all AI models are created equal, and the best tools require thoughtful implementation. That’s why every function Cleo performs—whether it’s generating study strategies, supporting executive functioning, or helping teachers design accommodations—is grounded in purposeful, evidence-based design.

Cleo is powered by a tailored stack of AI models that were intentionally selected based on each model’s comparative strengths:

  • OpenAI’s o3-mini for neuroscience-based insights
  • Wolfram Alpha for math-related tasks
  • Microsoft’s Phi-4 for natural conversation and feedback
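A stack like the one above implies a routing layer that sends each request to the model best suited for it. The sketch below is a hypothetical illustration of that idea; the task categories, dispatch logic, and fallback behavior are our assumptions, not CheckIT Learning's actual implementation:

```python
# Hypothetical sketch of task-based model routing. The model names
# mirror the stack described above, but the category labels and
# fallback choice are illustrative assumptions.

MODEL_ROUTES = {
    "learning_science": "o3-mini",    # neuroscience-based insights
    "math": "wolfram-alpha",          # math-related tasks
    "conversation": "phi-4",          # natural conversation and feedback
}

def route(task_category: str) -> str:
    """Return the model identifier to use for a given task category."""
    try:
        return MODEL_ROUTES[task_category]
    except KeyError:
        # Unknown categories fall back to the conversational model.
        return MODEL_ROUTES["conversation"]
```

The benefit of this kind of dispatch is that each model only ever sees the work it is strongest at, rather than one general-purpose model handling everything.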

Principle 2: Bias Isn’t Just a Technical Issue, It’s a Human One

We’ve seen what happens when bias goes unchecked. It erodes trust, reinforces inequality, and quietly excludes voices that need to be heard. That’s why we built multiple layers of bias detection and prevention into Cleo’s development:

  • Scenario testing that reflects diverse classrooms and learner profiles
  • Filters before and after every large language model (LLM) interaction to catch harmful or inappropriate outputs
  • Ongoing input from educators trained in identifying subtle forms of bias
  • Real-time user feedback that flags unintended impacts or misalignments

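The pre- and post-interaction filtering described above can be sketched as a simple wrapper around the model call. Everything here is an illustrative stand-in: the keyword blocklist, the stubbed `call_llm`, and the refusal messages are hypothetical, and a production system would use trained safety classifiers rather than keyword matching:

```python
# Illustrative sketch of filters before and after an LLM interaction.
# The blocklist, refusal text, and call_llm stub are all hypothetical.

BLOCKED_TERMS = {"blocked_example_a", "blocked_example_b"}

def violates_policy(text: str) -> bool:
    """Crude keyword check standing in for a real safety classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def call_llm(prompt: str) -> str:
    """Placeholder for the actual model call."""
    return f"response to: {prompt}"

def safe_interaction(prompt: str) -> str:
    # Filter BEFORE the LLM sees the prompt.
    if violates_policy(prompt):
        return "Sorry, I can't help with that request."
    output = call_llm(prompt)
    # Filter AFTER the LLM responds, before the user sees it.
    if violates_policy(output):
        return "Sorry, I can't share that response."
    return output
```

Filtering on both sides matters: the pre-filter stops harmful requests from ever reaching the model, while the post-filter catches harmful or inappropriate outputs the model produces on its own.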
And when bias is identified, we act. Our AI ethics and bias team collaborates with our developers to refine prompts, adjust filters, and test updates before they’re deployed. While no system is flawless, ours is designed to learn and adapt continuously.

Principle 3: Transparency Is a Requirement, Not a Bonus

We don’t believe in black box AI, especially in education. Students and teachers deserve to know how Cleo works and how it is evolving.

That’s why we offer:

  • Release notes that explain what’s changed
  • Before-and-after examples of updates
  • Interaction logs that support user understanding
  • Documentation aligned with Microsoft’s Responsible AI Standard

And most importantly, we invite users and Science of Learning experts into the process at every level, from concept to implementation. Every piece of feedback we receive is an opportunity to grow both the product and the trust behind it.

Designed to Empower, Built to Transform

We built our LMS to help students understand themselves as learners, giving them agency, a growth mindset, and a sense of empowerment. We built it to give teachers the tools to support students with science and empathy.

Responsible AI isn’t optional. It’s the future edtech developers need to build, one student, one insight, and one ethical decision at a time.

Learn more about ways to evaluate the tools you consider through Digital Promise product certifications. Sign up for our newsletter and follow Digital Promise on Instagram, Facebook and LinkedIn to stay updated on edtech and emerging technologies.