Can AI Be Ethical? Understanding What This Means for the Education Sector

July 9, 2024

Can artificial intelligence (AI) be ethical?

While this is a complex question, a short answer that can open a deeper conversation is that AI itself cannot be ethical or equitable. At its core, AI is a computer responding to human-designed prompts. The Center for Integrative Research in Computing and Learning Sciences (CIRCLS) defines AI as “systems [that] use hardware, algorithms, and data to create ‘intelligence’ to do things like make decisions, discover patterns, and perform some sort of action.” A conversation about ethics in AI, then, should center human judgment and justice: it should focus on the processes and data used to train and improve, or “fine-tune,” these technologies, as well as on what the technologies are asked to do. That human work is what we have the opportunity to do in an ethical and equity-centered way.

“AI development has moved so quickly that policies necessary to safeguard students and their data have not kept pace. Evaluating AI through an ethics lens is a critical step in providing necessary student protections.” – Milton Rodriguez, Senior VP of Innovation and Development at KIPP Chicago

What Does Ethically Designed AI Look Like?

Over the last few years, several organizations have explored the question: Can AI be ethical?

  • The White House developed the Blueprint for an AI Bill of Rights to explore safe and effective systems; algorithmic discrimination protections; data privacy, notice, and explanation; and human alternatives, consideration, and fallback.
  • The Friday Institute developed a set of key questions for schools and districts to ask based on the AI Bill of Rights, along with a MAZE framework to identify safe and effective AI tools.
  • The Edtech Equity Project developed a toolkit to support education leaders in determining if AI-powered tools are racially equitable.
  • A working group out of CIRCLS developed the Edtech Vendor Pledge, which describes three unique tenets:
    1. Including underserved and underrepresented groups in the design, development, deployment, and continuous monitoring of AI-powered technologies.
    2. Informing students, teachers, and administrators how and when AI decisions are being made.
    3. Promoting student, teacher, and parent control and agency when using AI-powered tools.
  • Encode Justice, a youth-driven initiative, published a global pledge that outlines a call to action for companies to:
    1. Label AI outputs with a well-established warning symbol or label and explicitly disclose the model of origin if the content has been created, altered, or manipulated in a significant way.
    2. Ensure that AI systems present clear and continuous indicators that users are interacting with a machine, not a human.
    3. Allow users to opt out of being subjected to an embedded AI system.
    4. Offer users agency and ownership over their personal data.

“As educational leaders, we have a responsibility to evaluate AI tools before integrating them into our classrooms and schools. Having product certification or requiring vendors to complete an AI fact sheet ensures we assess each solution’s purpose, training data, potential biases, and accessibility. By prioritizing transparency, fairness, and student privacy, we’re setting a standard and baseline for ethical AI use in education that aligns with our district’s values and prepares our students for an AI-driven future.” – Patrick Gittisriboongul, Asst. Superintendent, Technology & Innovation, Lynwood Unified School District.

The call for the ethical development of AI is growing. Through the development of an Ethically Designed AI Product Certification, Digital Promise aims to put this question on firm ground and unify the message. We are collaborating with more than 20 school districts across the country, including KIPP Chicago and districts participating in the Responsible, Ethical, and Effective Acceptable Use Policies for the Integration of Generative AI in US School Districts and Beyond project (NSF 2334525): El Segundo Unified School District, Fox Chapel Area School District, Gwinnett County Schools, Iowa City Schools, Lynwood Unified School District, Mineola Public Schools, Nashua School District, Oak Ridge School District, Roselle Public Schools, and San Ramon Valley Unified School District.

“Since AI tools use processes that are often hidden from users, it is incumbent on users to consider if the AI tools are generating ethical and equitable products. Developing an Ethically Designed AI Product Certification would communicate standards and expectations for edtech companies and provide guidance for educators when selecting tools.” – Mary Catherine Reljac, Superintendent, Fox Chapel Area School District

Join the effort to define our expectations for ethically designed AI

We invite education leaders, practitioners, students, families, and nonprofit thought partners to help us design the requirements for the certification. Share your thoughts with us in this brief survey!

AI edtech developers are invited to join the pilot of the certification and apply this summer. If you’re interested in participating, share your information here.
