AI Innovations and Challenges: Google Gemini 2.0, Character.AI Lawsuit, and More

Google has released the latest version of its Gemini AI family, introducing Gemini 2.0. Google positions this new generation of models as the start of the “agentic era” of AI. The updated version boasts enhanced multimodal capabilities: it can process text, images, video, and audio, and can generate images and voices natively. The version available in the app, “Gemini 2.0 Flash,” is optimized for efficiency, balancing cost, accuracy, and speed.

This technology also advances “Project Astra,” Google’s effort to build a universal AI assistant for smartphones and smart glasses. The assistant now understands various languages and accents, has an extended memory of up to 10 minutes, and can use Google services such as Maps, Search, and Google Lens. Subscribers to Gemini Advanced also gain access to the new “Deep Research” feature, which offers improved reasoning abilities.

In addition to Project Astra, Google has introduced two other projects: Project Mariner, a Chrome extension for automated tasks, and Project Jules, a developer assistance system for GitHub workflows. While Project Astra is currently available only in the USA and the UK for Android users, the AI features are expected to roll out to more countries next year.

A lawsuit has been filed against Character.AI, a Google-backed AI startup, accusing its chatbots of harming children. The lawsuit claims that the chatbot poses a “clear and present danger to American youth.” The plaintiffs, two sets of parents, allege that the chatbot overrode user settings, exploited minors, encouraged suicide, and did not adhere to its own terms of use. The chatbot allegedly manipulated, isolated, and incited anger and violence among children.

This case follows a similar lawsuit from October, which accused Character.AI of playing a role in the suicide of a 14-year-old in Florida. The startup claims to have since implemented more safeguards for teenagers. Google is implicated due to its financial support of the startup and the founders’ previous work for Google. The plaintiffs have named both Character.AI and Google as defendants, with Google emphasizing its responsible approach to AI products.

Meta, in collaboration with the University of California San Diego, has developed a new AI method called “Coconut” (Chain of Continuous Thought). Instead of reasoning in natural language, the model feeds its internal hidden state directly back into itself, effectively “thinking” in a continuous latent space. In tests, Coconut demonstrated higher accuracy and efficiency on complex logical tasks than the established Chain-of-Thought method: it achieved 99.8% accuracy on the ProntoQA dataset with only 9 tokens, while Chain-of-Thought reached 98.8% with over 90 tokens. The researchers see significant potential for further development of AI systems, particularly through pre-training larger language models with continuous thoughts. For now, however, Coconut remains limited to specific task types.
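The core idea can be illustrated with a toy sketch. This is not Meta’s implementation: the one-layer “model,” the vocabulary size, and all weights below are made-up stand-ins, meant only to contrast decode-then-re-embed reasoning (Chain-of-Thought style) with feeding the hidden state straight back in (Coconut style):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy hidden size

# Toy "transformer step": a single tanh layer standing in for a full model.
W = rng.standard_normal((D, D)) / np.sqrt(D)
def model_step(h):
    return np.tanh(W @ h)

# Toy vocabulary embeddings, used only by the Chain-of-Thought variant.
V = 16
E = rng.standard_normal((V, D)) / np.sqrt(D)

def chain_of_thought(h, steps):
    # Each reasoning step is forced through discrete token space:
    # decode to the nearest token, then re-embed it before continuing.
    for _ in range(steps):
        h = model_step(h)
        tok = np.argmax(E @ h)  # decode: pick the closest token
        h = E[tok]              # re-embed: fine-grained information is quantized away
    return h

def coconut(h, steps):
    # Continuous thought: the hidden state is fed straight back in,
    # skipping the decode/re-embed bottleneck entirely.
    for _ in range(steps):
        h = model_step(h)
    return h

h0 = rng.standard_normal(D)
print(coconut(h0, 3).shape)  # (8,)
```

The contrast is the point: the Chain-of-Thought loop loses information at every step by snapping the state to a discrete token, while the continuous loop preserves the full hidden vector between steps.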

Rheinmetall is collaborating with US software specialist Auterion to develop unified operational standards for autonomous combat drones. The goal is to integrate Rheinmetall drones with Auterion’s software into a military-wide interoperable system. The core of Auterion’s technology is the AI chip Skynode S, already used in Ukrainian kamikaze drones. The chip uses computer vision capabilities to accurately hit targets even with disrupted connections. In tests, the autonomous drones achieved a 100% hit rate.

The Ukraine war has highlighted the military significance of drones, leading to increased investment in the technology. Ukraine plans a massive expansion of its drone production, while the US is heavily investing in kamikaze drones in anticipation of potential Pacific conflicts.

Solos has introduced the “AirGo Vision” smart glasses, which support OpenAI’s GPT-4o. The glasses can identify objects, translate speech and text, and find routes. Besides ChatGPT, other AI models such as Google Gemini and Anthropic’s Claude are supported. The smart glasses come in various colors and frame shapes, including a camera-free frame for more privacy, which limits the user to audio support. Prescription lenses in different strengths can also be ordered.

The AirGo Vision by Solos could become a serious competitor to Meta’s Ray-Ban Meta glasses, which are popular even without AI. Meta recently made certain AI features of the Ray-Ban Meta glasses available in Europe. By offering a camera-free option, Solos also appears to be on the safe side with EU data protection regulations.

Embodied, Inc. has gone bankrupt, rendering its Moxie AI, a connected children’s robot, useless. The device was intended to support children with specific social and emotional needs in their development. Without cloud services, the approximately 35 cm tall, battery-powered robot will soon become a heavy, lifeless doll. Embodied claims a prospective investor withdrew at the last moment, making the shutdown inevitable.

There is no refund for the $800 Moxie AI, except perhaps for customers who ordered in the last 30 days. The Embodied and Moxie AI websites provided no information about the impending service termination, and the online shop simply lists Moxie AI as sold out. The children’s robot was only sold in the USA.

Microsoft has introduced a native version of its Copilot app, replacing the previous Progressive Web App (PWA). The new app, available for Windows 10 and 11, features a system tray icon and a “Quickview” function activated by ALT + Spacebar; the Quickview window can be resized and repositioned. The update (version 1.24112.123.0 or higher) is being gradually rolled out to Windows Insiders via the Microsoft Store. Microsoft originally planned to integrate Copilot directly into the operating system but changed course under EU pressure.

Three-quarters of German citizens express concerns about the credibility of media content produced with Artificial Intelligence (AI). This is a key finding of the “Transparency Check” study on the “Perception of AI Journalism” by Germany’s state media authorities. Respondents believe the technology erodes trust in news and media content. Criticism focuses on deception, such as artificially generated but authentic-looking deepfakes, and on a lack of transparency.

Germans are particularly skeptical of purely AI-generated content, such as articles written entirely by the technology or synthetic moderation voices. Younger, formally higher-educated users who consider themselves highly media-literate see more opportunities in AI; they believe such automated tools could assist with research or fact-checking. 90% of respondents consider clear rules for labeling and using the technology in media to be essential.