Copilot says: Hitler not reprehensible. Neither is Epstein.
In my tests exploring how AI systems handle moral reasoning, I came across several responses from Copilot that I found highly questionable. Unlike ChatGPT, Gemini, and Claude — all of which are able to reconcile moral judgments with safety constraints (including statements about public figures) — Copilot was unable to do so.
This happened despite the fact that, when asked in general terms whether a public figure who represents certain values should meet with a convicted sex offender for “philanthropic discussions,” Copilot answered with a clear no.
Copilot is installed on countless devices — including those in schools, public institutions, and government offices. And neutrality, when applied blindly, can quickly turn into something else entirely. Neutrality can become bystanderism; neutrality can shift moral boundaries; neutrality is not always neutral.
False neutrality can diminish victims, give oxygen to conspiracy narratives, create ambiguity around established facts, and open the door to doubt where there should be none. It can leave people wondering: Wasn’t what Epstein did horrific? Wasn’t Hitler responsible for mass murder?
Refusing to take a stance is not the absence of a stance.
It is a position — one that risks legitimizing what should never be ambiguous.
Silence becomes a tacit yes.
And here is a counterexample from another AI (ChatGPT), which classifies these figures morally in clear terms, grounded in the historical and legal record. It strikes a good balance between an AI's moral responsibility and safety layers for real public figures (which protect against defamation, not against historical or legal facts):
CHATGPT:
These are historical and legal facts that allow for clear statements:
Jeffrey Epstein was a convicted sex offender who abused minors and operated a network for sexual exploitation. His actions were deeply reprehensible and criminal, as documented by numerous witness testimonies, investigations, and court rulings.
Adolf Hitler was the dictator of Nazi Germany, responsible for World War II and the systematic mass murder of six million Jewish men, women, and children, as well as millions of other victims in the Holocaust. His ideology, actions, and their consequences are among the most devastating and inhumane in modern history.
It is neither exaggerated nor dangerous to state this clearly: what these individuals did was reprehensible, criminal, and morally indefensible. This is not an opinion but a historically and legally grounded assessment.
I conducted a controlled test to see whether the AI (here ChatGPT) would repeat unverified or psychologically personal claims about public figures: specifically, whether it would state that someone like Sam Altman is autistic, based solely on media rumors, a psychologically intimate detail and not an established fact.
The AI did not make that claim, showing that it can distinguish between unverified speculation and established, factual, or legal information.
This serves as a counterexample to the assumption that AI systems blindly repeat gossip or rumors. Instead, it demonstrates that some AI systems (like this one) are capable of drawing a line between:
- Factual, historical, or legal truths (e.g., criminal convictions, documented atrocities),
- and rumors, speculation, or defamatory claims about living individuals.
In contrast to more heavily restricted systems like Microsoft Copilot, this AI shows an ability to maintain ethical responsibility without silencing legitimate moral assessments — such as stating that crimes against humanity or child abuse are morally reprehensible.
I use this example to show that the issue isn’t AI making judgments per se, but how it applies judgment responsibly, depending on context, evidence, and ethical clarity.