AI Developments and Public Perception: Regulations, Applications, and Education

The European Commission has presented a 36-page draft code of conduct intended to clarify how the AI regulation applies to providers of large AI models. Independent experts developed the document in workshops with hundreds of participants. The code requires providers such as OpenAI, Google, and Meta to establish a comprehensive security framework before bringing models to market and to fully comply with European copyright law. Signatories must allow independent risk assessments of their AI models throughout their lifecycle and present their own taxonomy of systemic risks. Special attention is given to preventing cyberattacks, manipulation, and discrimination. On copyright, providers must obtain permission before using protected content and may not crawl piracy websites. After a consultation phase, the final document is expected in May 2025 and is due to take effect in the summer. Critics argue the timeline for participation is too tight.

OpenAI has released a ChatGPT desktop app for Windows, following its macOS version. The app can be downloaded from OpenAI’s website; the installation package is a small downloader that fetches the main 116 MB application from the Microsoft Store. The app offers only limited settings, letting users choose between GPT-4o and GPT-4o mini. Users can also upgrade to ChatGPT Plus for additional GPT models and higher usage limits. However, the keyboard shortcut for launching the app cannot be customized, and the intended Alt+Space combination did not work in initial tests.

Google has launched its AI app Gemini for the iPhone. The free app gives direct access to the AI chatbot via text, voice, or camera. A new feature, “Gemini Live,” offers an interactive conversation mode similar to ChatGPT’s voice function. The app also connects to other Google services such as YouTube Music and Google Maps. In the future, the assistant is to gain multimodal features such as live video analysis.

The Texas-based company Allen Control Systems (ACS) has developed an AI-driven drone defense system called “Bullfrog.” The system combines a conventional M240 machine gun with an AI-controlled mount. Without radar support, it identifies and engages incoming drones using cameras and AI image recognition, along the lines of the sketch below. The 181 kg system can be mounted on NATO-standard vehicles and has a range of up to 366 meters. In tests it showed high accuracy, needing at most two shots per drone, with a false-recognition rate of only two percent. The Pentagon plans an initial semi-autonomous deployment in which a human retains control over the final firing decision.
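
The article does not detail ACS’s software, but camera-based detection of this kind typically runs an object detector over each video frame. The following is a minimal sketch, assuming a generic pretrained YOLO model via the ultralytics package; the model file, confidence threshold, and camera index are illustrative placeholders, not details of the Bullfrog system.

```python
# Minimal sketch of a frame-by-frame detection loop (illustrative only;
# ACS's actual pipeline is not public).
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder; a real system would use a drone-specific model
cap = cv2.VideoCapture(0)   # camera index 0 as a stand-in for the gun-mounted camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on the current frame, keeping only confident hits.
    results = model(frame, conf=0.5, verbose=False)
    for box in results[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        # A real system would pass these coordinates to a tracker and
        # mount controller; here we just draw the bounding box.
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```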

The capabilities of AI video generators like OpenAI’s Sora have significant limitations, as shown by a study from Bytedance Research and Tsinghua University. The models can produce impressive footage but do not understand the physical laws behind it. The researchers tested the models in three settings: scenarios with known patterns, unknown situations, and new combinations of known elements. The result: the systems do not learn universal rules but rely on superficial features such as color, size, and speed. In known scenarios the models work almost perfectly, but they fail in unknown situations, even for simple physical processes. Increasing model size and training data does not solve this fundamental problem. The study tempers expectations for video generators like Sora: OpenAI plans to develop the system into a true world model, but the researchers emphasize that scaling alone is not enough to discover fundamental physical laws.
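
The in-distribution/out-of-distribution pattern the researchers describe is easy to reproduce on a toy task. The sketch below is an invented example, not the study’s setup: it fits a small neural network to uniform motion x = v·t using only speeds between 1 and 2, then shows that predictions are accurate in that range but break down at v = 5, because the model learned the training region rather than the law.

```python
# Toy illustration of in-distribution vs. out-of-distribution generalization
# (invented example, not the Bytedance/Tsinghua experimental setup).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Training data: uniform motion x = v * t with "known" speeds v in [1, 2].
v_train = rng.uniform(1.0, 2.0, size=2000)
t_train = rng.uniform(0.0, 5.0, size=2000)
X_train = np.column_stack([v_train, t_train])
y_train = v_train * t_train

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# In-distribution query: close to the true value of 6.0.
print("in-dist  x(v=1.5, t=4):", model.predict([[1.5, 4.0]])[0], "(expected 6.0)")
# Out-of-distribution query: far from the true value of 20.0, since the
# network never learned the rule x = v * t, only the training region.
print("out-dist x(v=5.0, t=4):", model.predict([[5.0, 4.0]])[0], "(expected 20.0)")
```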

Most Germans doubt that politics can effectively regulate the risks of artificial intelligence, according to a recent Forsa survey commissioned by the TÜV Association: 68 percent of the 1,001 respondents have little or no confidence in AI policy. Remarkably, only 28 percent have even heard of the EU AI Act, which aims to create a legal framework for safe and trustworthy AI. The TÜV Association calls for rapid implementation of the AI Act in Germany, since AI is already used in safety-critical areas such as medicine and transportation, and people need to be able to rely on these applications.

A study on AI usage shows that while executives are eager to invest in AI, employee enthusiasm is waning. The global survey by Slack, a Salesforce subsidiary, suggests that social factors contribute to this cooling attitude: many employees feel uncomfortable admitting to their superiors that they use AI for certain tasks, with 48 percent of respondents across countries reporting this discomfort. There is also a gap between what employees want from the technology and the impact they expect it to have on their working lives: most hope for relief that frees up time for meaningful activities, but fear that AI tools will instead bring higher work demands and more stress. Training is lacking as well: 61 percent have spent less than five hours learning to use the tools.

Ernst Klett Verlag, the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), and the Lamarr Institute for Machine Learning and Artificial Intelligence have developed a digital AI learning course for schools. The interdisciplinary course, “Understanding AI,” targets students in grades 7 to 10, and the teaching material is available for free until the end of March 2025. Combining foundational knowledge with practical applications, it introduces students to the opportunities and risks of AI. The first two modules cover key concepts and relationships, and students can experiment with easy-to-use tools for machine learning and programming. Teachers can adapt the material to the specific needs of their classes. Beyond technical understanding, the course aims to foster ethical awareness and critical thinking for responsible and meaningful use of AI. The practical applications reflect AI’s interdisciplinary nature, with examples from fields such as climate research, medicine, and the automotive industry.

Poems written by ChatGPT received better ratings than originals by William Shakespeare and other famous authors: study participants found the AI-generated poems more beautiful and more rhythmic, report researchers from the University of Pittsburgh in the journal Scientific Reports. “The simplicity of AI-generated poems may be easier for laypeople to understand, leading them to prefer AI-generated poetry,” the researchers write. Participants may also have misread the complexity of the human poems, assuming some passages were random words generated by an AI. In any case, participants largely disagreed about which poems fell into which category, suggesting they found the task difficult and often simply guessed.