Mark Zuckerberg, CEO of Meta, is once again at the center of controversy. In an ongoing US lawsuit, prominent authors accuse the company of illegally using their copyrighted works to train Meta’s AI models. Reports suggest Zuckerberg personally approved the use of data from Libgen, a well-known collection of pirated content.
The lawsuit, filed by US authors Sarah Silverman, Richard Kadrey, and Christopher Golden, claims Meta used protected content to train its Llama models without the rights holders’ consent. Court documents reveal that Meta’s developers initially had concerns about using Libgen’s content because of its pirated nature. After internal discussions, however, they allegedly received approval from “MZ,” an apparent reference to CEO Mark Zuckerberg.
Additionally, Meta reportedly stripped copyright notices from the training data so that the AI models would not reproduce such notices in their responses. According to the authors’ lawyers, this step may have been taken to minimize legal risks. A particularly serious allegation is that, in order to download more training data over file-sharing networks, Meta’s developers also uploaded copyrighted material themselves, effectively redistributing it.
The legal battle Meta is currently facing is not an isolated case. OpenAI is dealing with similar accusations: numerous authors and media companies have sued the company for allegedly using their content without permission to train its AI models. At the heart of the debate is an unresolved question: may AI companies use publicly available content, including copyrighted works, to train their models?
In the US, there is no clear legal regulation on this matter yet. Many companies rely on the doctrine of fair use, which permits the use of protected content under certain circumstances. Whether this argument extends to the large-scale training of AI models remains unclear. The cases currently being litigated could prove decisive for the future of AI development.
Mark Zuckerberg is no stranger to controversial strategies. In the past, Meta, the parent company of Facebook, has weathered numerous scandals, from privacy violations to the deliberate spread of misinformation to the platform’s impact on the mental health of young users. Recently, Meta announced two major changes: after deciding to end fact-checking on Facebook in the US, the company now plans to scale back its diversity and inclusion efforts.
If the allegations against Meta are confirmed, the company could lose one of the biggest lawsuits in its history. Given the billions Meta has invested in AI technologies, the consequences of a defeat would be significant, both financially and for the company’s reputation. At the same time, the case underscores the urgent need for clear legal rules on handling copyrighted content in AI training. Without such guidelines, the legal situation remains murky, and companies like Meta and OpenAI can continue to exploit gray areas.