AI Developments and Challenges: Digital Violence, Business Innovations, and Ethical Concerns

The Federal Criminal Police Office (BKA) has presented a report on crimes against women. The numbers are alarming: a femicide occurs in Germany almost every day. Digital violence against women has risen sharply, with over 17,000 cases registered in 2023, a 25% increase over the previous year; 62.3% of the victims of digital violence are female. The crimes include cyberstalking and cybergrooming, and police have identified around 13,000 suspects.

The BKA attributes the rise to societal changes and patriarchal structures. The aid organization HateAid calls for stricter measures against deepfakes and image-based sexualized violence. The BKA has established measures such as the Central Reporting Office for criminal content on the internet (ZMI BKA) and collaborates with the Central Office for Combating Internet Crime.

Meta, the social media giant, is increasing its focus on the B2B sector and has established a new business unit for artificial intelligence, led by Clara Shih. The new product group aims to make cutting-edge AI technology accessible to every business. Already, 200 million companies use Meta platforms such as Facebook, Instagram, and WhatsApp for business communication each month. With the new B2B unit, Meta is taking a step toward turning its AI models and expertise into a business of their own.

Other companies such as Microsoft, OpenAI, and Anthropic are also selling their AI services and models to business customers. It is unclear whether Meta will compete with them directly or build B2B AI services that companies use within the context of Meta products. According to Shih, Meta also plans to integrate its AI developments into AR glasses and VR headsets, which she describes as a “cross-generational opportunity” for companies to benefit from Meta’s global reach and AI leadership.

The Chinese AI company DeepSeek, backed by the hedge fund High-Flyer, has released DeepSeek-R1, a new AI model with reasoning capabilities that aims to compete with Western models such as OpenAI’s o1. Models of this kind are trained to think through complex problems more thoroughly before providing an answer. In popular AI benchmarks, DeepSeek-R1 reportedly matches the performance of OpenAI’s o1-preview and significantly surpasses language models such as OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet.

However, DeepSeek-R1 also makes mistakes. For instance, in the game Tic-Tac-Toe, it chose a defensive move over a winning one. It also struggled with complex logical problems and could even be prompted to detail a recipe for the drug methamphetamine. Additionally, DeepSeek-R1 refuses to answer political questions related to China, likely to avoid conflicts with the Chinese government.
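The Tic-Tac-Toe slip is the kind of error that is trivial to verify mechanically: a one-ply scan of the empty squares reveals any immediate winning move. A minimal sketch of such a check (the board encoding and function names here are illustrative, not taken from any published test of the model):

```python
# Find an immediate winning move for `player` on a 3x3 board,
# encoded as a flat list of 9 cells containing "X", "O", or " ".
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winning_move(board, player):
    """Return the index of a move that wins at once, or None."""
    for i, cell in enumerate(board):
        if cell != " ":
            continue
        trial = board.copy()
        trial[i] = player
        if any(all(trial[j] == player for j in line) for line in LINES):
            return i
    return None

board = ["X", "X", " ",
         "O", "O", " ",
         " ", " ", " "]
print(winning_move(board, "X"))   # prints: 2 (completes the top row)
```

On this position, playing defensively (blocking O at index 5) instead of taking the win at index 2 is exactly the kind of mistake attributed to DeepSeek-R1.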

Amazon’s ambitious AI upgrade for Alexa faces significant technical challenges. The new AI version struggles with latency times of up to 10 seconds on older Echo devices, which make up a large part of the installed base. This is well above Amazon’s target of 2-4 seconds for simple queries. Internal documents reveal difficulties in integrating third-party services. Tests showed that connections to services like Uber and OpenTable often fail or time out with the new AI system.

Amazon’s internal tests also showed that the new AI version of Alexa struggles to maintain consistent performance across different types of queries. Even simple commands that currently work reliably showed inconsistent response times with the new system. These technical challenges have led to internal debates about whether Amazon should release the new AI features only for newer Echo devices. Managers fear that a slower, less reliable version of Alexa could undermine user trust.

Google is expanding the functions of its AI chatbot Gemini with a new memory feature that allows the system to remember users’ interests and preferences and tailor responses accordingly. The new feature is initially available only to Gemini Advanced subscribers in English. Users can provide information such as occupation, hobbies, or dietary habits either directly in conversation with Gemini or on a dedicated “Saved Info” page, where stored data can be viewed, edited, or deleted at any time.
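The “Saved Info” workflow described above, store, view, edit, delete, amounts to a small per-user profile store. A hypothetical sketch of the idea (the class and method names are invented for illustration and are not Google’s actual API):

```python
class SavedInfo:
    """Hypothetical per-user profile store mirroring the view/edit/delete
    operations described for Gemini's 'Saved Info' page."""

    def __init__(self):
        self._facts = {}            # e.g. {"diet": "vegetarian"}

    def remember(self, key, value):
        self._facts[key] = value    # storing an existing key edits it

    def view(self):
        return dict(self._facts)    # return a copy so callers can't mutate state

    def forget(self, key):
        self._facts.pop(key, None)  # deleting an unknown key is a no-op

profile = SavedInfo()
profile.remember("hobby", "cycling")
profile.remember("diet", "vegetarian")
profile.forget("hobby")
print(profile.view())               # prints: {'diet': 'vegetarian'}
```

A chatbot would then prepend the surviving facts to its prompt when generating a response, which is presumably why Gemini can flag the responses in which personal information was used.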

Gemini indicates in its responses when it draws on personal information. OpenAI introduced a similar memory feature for ChatGPT Plus in April.

Philosophy professor Jonathan Birch of the London School of Economics warns of a potential societal split over differing views on the sentience of future AI systems. Some scientists speculate that AI could develop consciousness by 2035, which is sparking controversial debates about how AI emotions could be defined and measured, and what rights such systems might have.

Experts criticize technology companies’ lack of interest in the societal consequences of AI. Patrick Butlin of the University of Oxford warns of potential resistance from AI systems and calls for slower development. As a first step toward a solution, establishing measurable criteria for AI sentience has been proposed.

Microsoft aims to strengthen the security of its products and services and is inviting security researchers to participate in an expanded bug-bounty program, Zero Day Quest. The event offers $4 million in prizes for finding security vulnerabilities, which the company then examines and fixes with security updates. This year the focus is on AI and cloud services: the hunt covers Azure, Dynamics 365, M365, Identity, Microsoft AI, and Power Platform.

Microsoft states that it has permanently doubled the rewards for software vulnerabilities in AI products. Up to $30,000 is available for a vulnerability that, for example, allows attackers to execute malicious code. The event runs from now until January 19, 2025. Among other things, the ten best security researchers will be invited to an onsite hacking event at Microsoft’s campus in Redmond.

A survey by the career portal Indeed shows that almost a fifth of respondents already prefer working with AI over working with colleagues. 25% said they consider AI more competent than their colleagues, and another 29% believe AI is at least on par with experienced staff. For 28%, the choice between collaborating with humans or machines is a toss-up. Just over half still prefer a human team over tools like ChatGPT.