ChatGPT’s Mysterious Inability to Discuss “David Mayer”

When you ask ChatGPT who David Mayer is, the AI chatbot says it is unable to answer. It also cannot write the name, and users' attempts to trick it into doing so have failed. ChatGPT seems to have a "Voldemort" – a reference to the villain in Harry Potter whose name no one dares to speak.

This peculiarity was noticed by a Reddit user. Many others tested to see if they received the same response, and indeed, ChatGPT has the same difficulty with David Mayer for everyone. This raises the question of why this is happening.

When you search for David Mayer, the first result is David Mayer de Rothschild. He is noted as an adventurer and environmentalist, known for raising awareness about climate change. ChatGPT has not been known to deny climate change or engage in antisemitic conspiracy theories about the Rothschild family, a banking dynasty to which this David Mayer de Rothschild belongs. ChatGPT has no issues with the name Rothschild itself.

David Mayer is not a particularly rare name. ChatGPT can spell David and Mayer separately without issue. So why does the chatbot warn some users that further questions about the name could violate usage policies? One user reportedly received a response from ChatGPT stating that the name matches a “sensitive or flagged entity,” and that it must not violate personal rights or the rights of public figures or brands.

This has fueled speculation centered on David Mayer de Rothschild. However, there is another David Mayer who might be the cause: a wanted criminal who once used the name David Mayer in the USA, which led to a man with the same name living in the UK being mistakenly blacklisted. Because of this error, the real and innocent David Mayer could not travel to the USA or receive mail from there.

Some speculate there might be a David Mayer involved in a legal dispute with OpenAI over copyright or personal rights, leading the company to block the name. The GDPR requires that false information about individuals be deleted on request, and it also grants a right to be forgotten, which allows people to demand removal from Google search results. How such requests will be handled with AI models in the future is unclear: information cannot be deleted from a model after training; at best, specific answers can be suppressed, for example through fine-tuning.
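One way to suppress specific answers without retraining is a hard-coded filter layered on top of the model's output. The sketch below is purely illustrative – the blocklist, the refusal text, and the function names are assumptions for this example, not OpenAI's actual implementation – but a wrapper like this would reproduce the observed behavior: the model itself "knows" the name, yet no reply containing it ever reaches the user.

```python
import re

# Hypothetical blocklist of flagged names (illustrative assumption).
BLOCKED_NAMES = [r"david\s+mayer"]
BLOCKED_PATTERNS = [re.compile(p, re.IGNORECASE) for p in BLOCKED_NAMES]

REFUSAL = "I'm unable to produce a response."

def filter_output(text: str) -> str:
    """Replace the entire reply with a refusal if a flagged name appears."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return REFUSAL
    return text

# The filter trips on the full name but not on its parts,
# matching what users observed with "David" and "Mayer" separately.
print(filter_output("David Mayer de Rothschild is an environmentalist."))
print(filter_output("Mayer is a common surname."))
```

Such a filter would also explain why users saw replies cut off mid-generation: the check can run on the streamed output rather than inside the model.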

A user also found other names that cause ChatGPT similar difficulties. One belongs to an Italian lawyer who says he invoked the right to be forgotten and demanded that OpenAI ensure ChatGPT does not discuss him.

All these theories share the assumption that OpenAI actively imposed the ban. However, this might not be the case. Part of how AI models work remains a black box. They are trained on massive data sets, which shape the structure of their artificial neural networks. This can produce spurious associations, which surface as hallucinations. Training data can also be deliberately poisoned, meaning false information is fed to the AI. Something like this could have led ChatGPT to treat David Mayer as an unspeakable name.
