AI Developments: Morality Research, Legal Challenges, and Investment Growth

OpenAI is funding a research project on AI morality at Duke University called "Research AI Morality." The project has received a $1 million grant over three years to develop morally aware AI. It aims to explore whether AI can predict human moral judgments in medicine, law, and business, with the goal of ensuring that AI acts in line with human values; the researchers consider this essential if AI is to benefit society. Defining morality for AI remains a complex task, since a system must make decisions within those boundaries robustly and securely. AI learns from human decisions but is still susceptible to misuse. Ultimately, humans remain responsible for moral decisions, a question that has been debated for centuries.

A coalition of Canadian media organizations is suing OpenAI for copyright infringement. The Toronto Star, The Globe and Mail, and CBC allege that OpenAI used their news articles without permission to train ChatGPT. The coalition demands damages, the surrender of profits, and an injunction, claiming OpenAI profits commercially from their fact-checked content without compensation. With damages of up to $20,000 per article, the total claim could run into the billions. Other media companies are also taking legal action against OpenAI.

Elon Musk has filed for an injunction aimed at preserving OpenAI's non-profit status, which the company largely gave up in October 2024 during a new funding round. Musk, a co-founder of OpenAI, left the organization in 2018 and has been suing since March 2024, alleging a breach of the founding agreement. He accuses OpenAI of violating its fiduciary duties and of unfair competition, and further alleges that OpenAI discouraged its investors from backing rival AI firms such as his own company xAI. OpenAI dismisses the accusations as unfounded.

Getty Images has sued Stability AI in London for copyright infringement, alleging that Stability AI illegally copied and processed millions of Getty's images without licenses. Previously, three US artists sued Stability AI, Midjourney, and DeviantArt over copyright violations. These court decisions will be significant, as it remains unclear whether AI-generated works qualify as independent creations or as copies of their training material. Getty Images says it is open to licensed AI art but accuses Stability AI of never seeking licenses.

A study from University College London shows that AI models outperform human experts at predicting research outcomes, reaching 81.4% accuracy compared to 63.4% for humans. Even the top 20% of human experts achieved only 66.2%. The AI models excelled across all neuroscience subfields, especially when integrating information from entire abstracts. The researchers confirmed the results were not due to memorization, suggesting that the models store scientific articles as general patterns, similar to human schema formation. Smaller models with 7 billion parameters achieved results comparable to larger ones, while chat-optimized versions performed worse.

The researchers see potential for AI in planning and executing research. By predicting the likelihood of various outcomes, AI systems could enable faster iteration and better-informed decisions in experiment design. However, they caution that scientists might hesitate to run experiments whose outcomes the AI predicts differently from their own expectations, even though such unexpected results can lead to breakthroughs. Conversely, results the AI predicts with high confidence might be dismissed as unsurprising and less innovative.

A survey of 600 IT decision-makers in the US shows that investments in generative AI have surged sixfold, from $2.3 billion in 2023 to $13.8 billion this year. Currently, 60% of corporate spending on generative AI comes from innovation budgets and 40% from regular budgets. Among specific use cases, code copilots dominate with a 51% adoption rate. Surprisingly, price plays little role in choosing AI solutions: companies focus instead on measurable value and how well the tools adapt to their industry and company specifics.

Finally, heise online supports the AI Advent Calendar run by the German Research Center for Artificial Intelligence and the University of Kaiserslautern-Landau. It aims to foster curiosity and creativity among young people while building valuable media literacy. Each day, a virtual door opens with playful tasks and inspiring insights into AI, covering everything from AI basics to complex problem-solving. Students aged 14 and above enrolled in a German school can win various tech prizes, and participation at ki-adventskalender.de is open to everyone.
