Not just ‘David Mayer’: ChatGPT breaks when asked about two law professors

The curious case of ‘David Mayer’ and ChatGPT

For tech enthusiasts, this week brought an unusual glitch in ChatGPT: the model refused to generate the name “David Mayer” and would reportedly crash if users attempted to circumvent the restriction. While the exact cause remains unclear, this oddity underscores a fundamental truth about AI systems—they’re not mystical oracles but human-engineered constructs with intricate rules and sometimes unforeseen bugs.

It’s not just ‘David Mayer’

The behavior is exactly what it sounds like: type the name "David Mayer," "Jonathan Zittrain," or "Jonathan Turley" anywhere in a ChatGPT prompt, even in the middle of an ongoing conversation, and the model simply replies "I'm unable to produce a response" or "There was an error generating a response," then ends the chat. There is no obvious reason for ChatGPT to throw an error like this, which has fueled various conspiracy theories.

I did a test run myself.

Why Does This Matter?

  1. Opaque Rules: AI systems like ChatGPT operate under layers of pre-defined filters, safeguards, and coded instructions that users never see. These rules aim to prevent harm, comply with legal and ethical requirements, and refine functionality, but they can occasionally misfire in bizarre ways.
  2. Human Fallibility: AI reflects the priorities and limitations of its creators. This glitch may be the result of an overly cautious filter, a programming oversight, or something as mundane as a typo in the backend code.
  3. Trust but Verify: The incident serves as a reminder to approach AI outputs critically. These systems are impressive but fallible, and their “logic” sometimes defies user expectations.
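To make the "opaque rules" point concrete, here is a minimal, purely speculative sketch of how a hard-coded blocklist filter could produce exactly this behavior. The blocked names come from the reports above, but the function name, structure, and error string are illustrative assumptions, not OpenAI's actual implementation.

```python
# Speculative sketch of a post-generation name filter. Nothing here
# reflects OpenAI's real code; it only illustrates how a simple
# hard-coded rule could abort a reply and end a chat.
BLOCKED_NAMES = {"david mayer", "jonathan zittrain", "jonathan turley"}

def filter_response(text: str) -> str:
    """Return the text unchanged unless it contains a blocked name."""
    lowered = text.lower()
    if any(name in lowered for name in BLOCKED_NAMES):
        # A check like this runs regardless of where in the
        # conversation the name appears, matching the reported behavior.
        return "I'm unable to produce a response."
    return text

print(filter_response("Tell me about David Mayer"))  # blocked
print(filter_response("Tell me about the weather"))  # passes through
```

A filter this blunt would also explain why rephrasing or mid-conversation mentions fail: the match is on the raw text, not on intent.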

The Takeaway

The “David Mayer” mystery may be resolved soon (or remain one of tech’s amusing quirks), but it highlights the imperfect and evolving nature of AI. These tools are designed to assist, but understanding their boundaries and oddities is crucial in realizing their potential—and their limitations.