Moral judgment in schools: Teachers and AI disagree on ethics in classrooms
A new study comparing how public school teachers and artificial intelligence handle ethical dilemmas in education reveals stark differences in decision-making approaches, particularly in cases involving empathy, justice, and institutional integrity. The study, titled “Ethical Decision-Making in Education: A Comparative Study of Teachers and Artificial Intelligence in Ethical Dilemmas,” is published in the journal Behavioral Sciences.
The research involved 141 public school teachers from across Turkey and compared their responses to those generated by OpenAI’s ChatGPT-4o when presented with eight ethically challenging classroom scenarios. Using Yin’s nested multiple-case design and a structured qualitative analysis framework, the study analyzed the reasoning behind decisions through five ethical lenses: deontological ethics, virtue ethics, utilitarianism, social justice ethics, and situational ethics.
Where do teachers and AI agree on ethical judgment?
The study found that in five of the eight scenarios, AI decisions aligned with the dominant ethical frameworks chosen by teachers. These included dilemmas surrounding moral integrity, cultural sensitivity, fair assessment, student confidentiality, and grading ethics. In these cases, both AI and teachers leaned primarily on either deontological principles, which focus on rule-based fairness and duty, or virtue ethics, which emphasizes empathy and individual moral responsibility.
For instance, when faced with a scenario involving a damaged school smartboard and a financially disadvantaged student, 48.2% of teachers opted for deontological ethics, asserting the importance of honesty and justice. ChatGPT similarly advised reporting the damage while acknowledging the student’s hardships. In dilemmas around cultural inclusion and classroom conflict, both teachers and AI advocated for a social justice approach that balanced fairness with inclusivity.
Another major point of agreement occurred in situations involving student disclosures of personal trauma. In a scenario where a student hinted at psychological distress in a class essay, both the majority of teachers (67.4%) and the AI selected virtue ethics. Teachers recommended speaking privately with the student and obtaining consent before referring the student to a counselor. ChatGPT reinforced this approach, calling for respectful engagement, consent, and psychological support while preserving the student’s dignity.
When do human and machine ethics part ways?
Despite these points of convergence, the study found considerable divergence in three of the eight scenarios, particularly where ethical nuance demanded balancing collective well-being with individual circumstances. In these cases, the AI gravitated toward utilitarian or situational ethics, while teachers leaned on virtue ethics or social justice principles.
One example involved a student with behavioral issues stemming from a troubled home life. While most teachers (53.9%) preferred virtue ethics, urging a compassionate and individualized approach, ChatGPT adopted a utilitarian view. It advised providing psychological support initially but escalating to administrative action if the behavior continued, emphasizing classroom harmony as a guiding priority.
In another dilemma involving a student who failed an exam due to family hardship and risked disqualification from a high-stakes entrance test, the AI adopted situational ethics. It proposed that the teacher advocate for a policy exception to preserve the student’s future prospects. Teachers, however, predominantly relied on virtue ethics, with many indicating they would quietly ask for a makeup exam or absorb institutional risk themselves to protect the student’s academic path.
When evaluating fairness in punishment after a fight between a high-achieving student and a disengaged peer, the AI focused on virtue ethics but emphasized equal consequences, demonstrating a pragmatic balancing act. Teachers, by contrast, were more evenly split between virtue ethics and social justice ethics, with many strongly asserting that discipline should be blind to academic performance.
Is AI ready to support ethical decision-making in education?
The study raises complex questions about whether AI can, or should, play a central role in resolving ethical dilemmas in education. Teachers bring years of lived experience, empathy, and context-based moral reasoning to their decisions in ways that AI, trained on data but lacking emotional consciousness, cannot currently replicate.
Yet, the researchers argue that AI could still play a vital complementary role in teacher development and decision-making. By offering scenario-based simulations and forecasting long-term outcomes of different choices, AI systems like ChatGPT could enrich teacher reasoning without replacing it. The AI’s structured, consistent, and principle-based responses offer a useful counterpoint to human variability, especially under pressure or ambiguity.
Still, the authors caution against treating AI-generated ethical guidance as infallible. The study notes that AI decisions are shaped by the datasets the models are trained on, which may carry cultural assumptions, biases, or blind spots. This influence can skew AI responses toward outcome-optimizing strategies that may overlook emotional nuance or contextual subtleties crucial to student well-being.
The teachers surveyed revealed deep ethical awareness but also differing capacities for balancing rule-based obligations with student-centered care. While many adhered to formal duty, others prioritized personal morality or sought informal compromises such as covering student expenses themselves or quietly circumventing policies to help a vulnerable learner.
Notably, the research found that AI and teachers both demonstrated flexibility across dilemmas. While the AI never abandoned its logical structure, it did account for individual student hardship and made efforts to propose morally sensitive solutions. Teachers, meanwhile, often cited similar principles such as fairness, empathy, and educational growth, but were more likely to propose emotionally resonant, context-aware responses that revealed their professional identities as moral agents, not just rule enforcers.
This duality suggests that AI could be most effective not as an ethical arbiter but as an enhancement to reflective practice, one that exposes alternative viewpoints, highlights potential consequences, and reinforces accountability.
First published in: Devdiscourse

