Designing With, Not For: How We Co-Designed a New Product Certification for Ethically Designed AI Tools

November 13, 2024

Artificial intelligence (AI) has so quickly become a pervasive part of the edtech space that there has been little time to pause and reflect on what role we want AI to play and how we can ensure these tools are created ethically. This uncertainty has brought a mix of excitement and fear, as well as a lack of focus on ethics in AI for education.

Our team engages with the complexity of this rapidly emerging technology in education. We believe that the best way to define expectations for AI is to listen to, learn from, and develop standards with those most affected by its use in edtech. This approach has been central to our work in developing an AI certification that supports informed procurement. It involved a deeply collaborative and iterative process to understand how AI is perceived, understood, and integrated into teaching and learning.

Below, we outline the steps of this co-design process, which aims to explore this complexity and gather insights across multiple perspectives to collaboratively create criteria that lay the groundwork for ethically designed AI tools. The result is a new product certification designed to support districts in making empowered edtech evaluation and procurement decisions while encouraging developers to design with these principles in mind.

Co-Design Process

1. Existing frameworks

The rapid arrival of AI in the education space has flooded the market with innovative tools and concepts, while the ethical considerations of AI in education are only beginning to come into focus. Many thoughtful leaders have begun to create frameworks, toolkits, and resources to support thinking around ethical and equitable AI in teaching and learning. Our team has engaged deeply in learning about these resources, often in collaboration with their creators. Check out our blog post, Can AI Be Ethical? Understanding What This Means for the Education Sector, to learn more about the established frameworks that have informed our thinking.

2. AI Pilot cohort

Our approach centers around an understanding of what AI in education looks like today and what role the field wants AI to take in classrooms and schools. In addition to leading the AI-Powered Innovations in Mathematics Teaching & Learning work to understand projections for AI in education, Digital Promise worked as a learning partner to study the biggest successes and challenges with developing and implementing AI tools that support teaching and learning across a portfolio of 28 exploratory projects. Members from this pilot cohort engaged in a variety of activities that allowed space for reflection and iteration. Throughout these projects, many teams surfaced concerns around the ethical development and use of AI. They noted that very few districts had policies around AI and found it challenging to create rubrics due to the rapid pace of AI development. Through these pilots, we gained insights into how we could best support districts and edtech developers through the development of standards that identify essential components of an ethical AI product. A few takeaways include:

  • A need to develop awareness of bias that can come from AI, and how to mitigate those risks.
  • A need for data transparency, including what is used in training data, what data is being collected (and not collected), and what it is used for.
  • The importance of professional learning that spans beyond an individual tool and enables practitioners to understand how AI works.

3. Co-design sessions

Districts’ priorities are the driving force behind this certification. We collaborated with more than 20 school districts across the country to learn together how AI can be integrated into schools in powerful and safe ways. Many of these districts had consulted with teachers, students, and community members, sharing insights from across these groups. Similar to our discussions with developers, district leaders expressed a mix of caution and excitement, as well as varying stages of AI adoption and integration, including the creation of acceptable use policies. Through these co-design sessions around ethical AI, we identified several commonalities, including:

  • Safety (e.g., what harms could occur, cybersecurity, etc.)
  • Data privacy (student data, data sharing, IP, etc.)
  • Transparency (vendor policies, user awareness, data visibility, etc.)
  • Data management (who owns the data, how it is stored, what it is being used for, etc.)
  • Output accuracy (users need a way to verify the accuracy of outputs, products need a way to handle inaccurate outputs, etc.)
  • Bias (gender, racial, historical content, etc.)
  • Training data (knowing what data is used for training, whether there was consent, who is included in the training set, etc.)

“For any digital tool that we are putting in front of our educators and students, it is essential that we start with policies and a co-constructed vision to find tools with shared values that are student and equity-centered. In the context of AI, this is especially true as end users – our students and teachers – are trusting that resources have mitigated bias around gender, race, and more. This responsible approach for the developers and end users is an important priority. In addition to the development of content, comes the guidance for practical use at the ground level. When adults and students implement AI with this shared guidance to deepen learning, creation and development of 21st century skills, it removes the concern of academic integrity and empowers users all around.”

-Rebekah Kim, associate superintendent of teaching and learning at Kent School District

4. Survey

Based on our discussions across these projects, we compiled a list of criteria related to ethical AI that had surfaced. We then shared a survey requesting feedback on these criteria and asked respondents to designate each as high, medium, or low priority. Seventeen respondents, most of whom were district leaders, ranked the following as the top high-priority criteria:

  • Access to information about how learners’ data is used (e.g., to train the AI, sold to third parties, etc.)
  • Data from diverse populations trains the AI
  • Clear measures to identify and address bias in the AI
  • Accessible and readable measures in the case of a data breach (e.g., an incident response plan)
  • Accessible information about how the company ensures data security

5. Feedback from co-designers

We listened and learned across the research and co-design discussions, then drafted a preliminary list of criteria that reflected our findings. We shared this list with district co-designers to integrate their feedback, and conducted interviews with a handful of edtech developers to further clarify the application. Together, we finalized our initial list of requirements to pilot in the field this fall with the Ethically Designed AI product certification.

A product earning this certification demonstrates the following:

  • The applicant has a public, free-to-access privacy policy, or similar document, to help current and prospective users understand:
    • What nonpublic personal information (NPI), including personally identifiable information (PII), is collected, how it is stored, and for how long it is stored;
    • How this data is or may be used (e.g., to train the AI model); and
    • Whether this data is shared with or sold to third parties.
  • The applicant has a public, free-to-access policy or other document describing how the product ensures data security, including the data breach or other incident response plans.
  • The applicant has documentation about the datasets used to train the AI model(s) used by the product, including at least two demographic or distinguishing components such as age, race/ethnicity, IEP/disability status, socioeconomic status, or other components relevant to the diversity of learning experiences.
  • There are processes in place for the product team to monitor bias in the AI model. The applicant provides two examples of instances where these processes identified bias and describes the changes made to reflect the bias that was identified. Additionally, users can report instances of bias through the tool directly (for example, through a ticketing system).
  • There are labels within the product to indicate when AI is being used to generate content (images, word problems, literature, narrative feedback, chat communication, etc.) that a user might mistake as being created by a human.
  • Educators or individual users can override at least one AI decision or recommendation to support learning.
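To make the labeling, override, and bias-reporting requirements above concrete, here is a minimal, hypothetical sketch of how a product team might structure these behaviors. All names (`AIRecommendation`, `BiasReportQueue`, etc.) are illustrative assumptions, not part of the certification requirements, and no certified product is required to implement them this way.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIRecommendation:
    """A piece of AI-generated content, labeled as such for transparency."""
    content: str
    ai_generated: bool = True  # surfaced in the UI as an "AI-generated" label
    educator_override: Optional[str] = None

    @property
    def displayed(self) -> str:
        # An educator's override always takes precedence over the AI output,
        # satisfying the "users can override at least one AI decision" criterion.
        return self.educator_override if self.educator_override else self.content

@dataclass
class BiasReportQueue:
    """An in-product channel for users to flag suspected bias
    (e.g., a ticketing system, per the bias-monitoring criterion)."""
    tickets: list = field(default_factory=list)

    def report(self, content: str, concern: str) -> int:
        self.tickets.append({"content": content, "concern": concern})
        return len(self.tickets)  # ticket id for follow-up

# Usage: an educator overrides an AI suggestion and a user files a bias report.
rec = AIRecommendation(content="Suggested next lesson: fractions review")
rec.educator_override = "Suggested next lesson: decimals review"
queue = BiasReportQueue()
ticket_id = queue.report(rec.content, "Example may assume a specific cultural context")
```

The key design point is that the "AI-generated" flag and the override live on the content object itself, so any part of the product that renders the recommendation can show the label and the educator's decision consistently.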

Getting Involved

We’ve invited a select number of providers to participate in a closed pilot to test and refine the application materials. Eight district and school leaders will join Digital Promise staff as assessors for the pilot applications, providing final feedback on the certification requirements to ensure a meaningful tool for the field. We are excited to launch this certification in 2025!

In the meantime, there are several ways to get involved:

  • Start asking your community about what their vision and concerns are with AI. Create spaces for students to share how they want to use AI, hear families’ considerations and concerns, and work with teachers to understand what types of professional learning and engagement they would like for support.
  • Work across district and school leadership teams to determine how your team can leverage established standards in procurement and contract renewal processes.
  • Ask all the questions. When you talk with a provider, every question you have is a good one to ask. If you don’t feel confident with the terminology, check out this helpful glossary to feel empowered to ask for transparency about how the model was created and how it is continuously improved to meet your learners’ unique needs.