In April, Digital Promise launched its newest product certification, Responsibly Designed AI, which helps districts make more informed procurement decisions. At a time when many edtech solutions are rapidly integrating artificial intelligence (AI) capabilities, it’s important for developers to think critically about how to do so responsibly. This blog is the third in a series of four posts exploring how edtech can be powered by AI in ways that best support educators’ and learners’ pedagogical needs, agency, and safety. Each blog post is written by an edtech developer whose product was among the first cohort to earn the Responsibly Designed AI certification. Read the second post here.
That vision lives on through our continued commitment to building tools that respect learners, empower educators, and prepare this generation for the challenges ahead.
Our learning management system (LMS), CheckIT Learning, is powered by Cleo, a neuroscience-informed AI mentor. It helps both teachers and students understand how learning works, and it is designed to help students develop effective study habits, strengthen executive functioning, and build metacognitive skills that support long-term learning.
But with AI’s growing role in the classroom, the stakes are high. Educational tools are shaping identity, confidence, and opportunity. That’s why we didn’t just focus on what Cleo could do; we focused on how it does it and who it does it for.
When we learned about Digital Promise’s Responsibly Designed AI product certification, it felt like a natural fit. Their framework gave us a clear way to assess how we were doing and where we could do better. The process pushed us to formalize what we’d already built: a culture of care, evidence, and user inclusion.
Achieving certification sends a clear message to schools, educators, and partners: We take our responsibilities seriously and are committed to building lasting trust, not solely through what Cleo can do as an AI mentor, but through how it has thoughtfully and intentionally been designed to support instruction and learning.
We centered the development of Cleo on three core design principles that guide how AI can—and should—be built to best serve educators and learners.
Not all AI models are created equal, and even the best tools require thoughtful implementation. That’s why every function Cleo performs, whether it’s generating study strategies, supporting executive functioning, or helping teachers design accommodations, is grounded in purposeful, evidence-based design.
Cleo is powered by a tailored stack of AI models, each intentionally selected for its comparative strengths.
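To make that idea concrete, here is a minimal, hypothetical sketch of what task-to-model routing can look like. The task names, model identifiers, and rationales below are illustrative assumptions, not Cleo’s actual stack or configuration.

```python
# Hypothetical sketch of task-to-model routing.
# Model names, tasks, and rationales are illustrative placeholders.
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelChoice:
    name: str        # placeholder model identifier
    rationale: str   # why this model suits the task

# Each product function maps to the model whose strengths fit it best.
ROUTING_TABLE = {
    "study_strategies": ModelChoice(
        "model-a", "strong long-form pedagogical reasoning"),
    "executive_function": ModelChoice(
        "model-b", "reliable step-by-step planning output"),
    "accommodations": ModelChoice(
        "model-c", "conservative, well-filtered generations"),
}


def route(task: str) -> ModelChoice:
    """Return the model selected for a task, failing loudly on unknown tasks."""
    try:
        return ROUTING_TABLE[task]
    except KeyError:
        raise ValueError(f"No model configured for task: {task!r}")


if __name__ == "__main__":
    choice = route("study_strategies")
    print(f"{choice.name}: {choice.rationale}")
```

The point of a table like this is that the routing decision is explicit and reviewable, rather than buried in prompt logic, so each choice can be justified against evidence.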
We’ve seen what happens when bias goes unchecked. It erodes trust, reinforces inequality, and quietly excludes voices that need to be heard. That’s why we built multiple layers of bias detection and prevention into Cleo’s development.
And when bias is identified, we act. Our AI ethics and bias team collaborates with our developers to refine prompts, adjust filters, and test updates before they’re deployed. While no system is flawless, ours is designed to learn and adapt continuously.
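As one illustration of what a detection layer and review loop can look like in code, the sketch below screens generated text against flagged phrasings and holds matches for human review before anything reaches a student. The patterns and the workflow are simplified assumptions for illustration, not Cleo’s implementation.

```python
# Hypothetical sketch of one bias-screening layer.
# The flagged patterns and review workflow are illustrative assumptions.
import re

# In practice, flagged phrasings would come from an ethics team's ongoing
# review, not a hard-coded list like this one.
FLAGGED_PATTERNS = [
    re.compile(r"\bboys are better at\b", re.IGNORECASE),
    re.compile(r"\bgirls are better at\b", re.IGNORECASE),
]


def screen_response(text: str) -> tuple[bool, list[str]]:
    """Return (passed, matches); matches lists any flagged phrasings found."""
    matches = [p.pattern for p in FLAGGED_PATTERNS if p.search(text)]
    return (len(matches) == 0, matches)


# Outputs that fail the screen are held for human review instead of being
# shown to students; updates to the patterns are tested before deployment.
passed, matches = screen_response("Some students find spaced practice helpful.")
assert passed and not matches
```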
We don’t believe in black-box AI, especially in education. Students and teachers deserve to know how Cleo works and how it is evolving.
That’s why we invite users and Science of Learning experts into the process at every level, from concept to implementation. Every piece of feedback we receive is an opportunity to grow both the product and the trust behind it.
We built our LMS to help students understand themselves as learners, gain agency, and develop a growth mindset and a sense of empowerment. We built it to give teachers the tools to support them with science and empathy.
Responsible AI isn’t optional. It’s the future edtech developers need to build, one student, one insight, and one ethical decision at a time.