AI Innovations and Challenges: Nvidia’s New Graphics Cards, Anthropic’s Funding, Meta’s AI Controversy, and Legal Measures Against Deepfakes

Nvidia recently introduced its new GeForce RTX 50 series graphics cards for desktop PCs and laptops at CES. These cards leverage AI computing power for AI-assisted ray tracing, rendering complex scenes, and creating lifelike game characters. A key feature is RTX Neural Shaders, which embed small neural networks directly into programmable shaders. Applications include AI-assisted texture compression, complex shader code for materials, and indirect-lighting calculations.

Other technologies include RTX Neural Faces for more lifelike faces, RTX Character Rendering SDK for more natural hair and skin, RTX Mega Geometry for more complex worlds, DLSS 4 and Reflex 2 for higher performance and lower latency, and Nvidia ACE for autonomous AI game characters. Nvidia experts claim, “AI is the new graphics.”
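To make the idea of embedding a small neural network in a shader more concrete, here is a minimal, purely illustrative sketch in Python/NumPy. It mimics the concept behind neural texture compression: a tiny MLP decodes a compact per-texel latent vector into an RGB value, which on real hardware would run per pixel inside the programmable shading stage. The latent size, layer widths, and weights here are arbitrary assumptions for illustration, not Nvidia's actual RTX Neural Shaders API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical compressed representation: an 8-value latent per texel
# instead of a full RGB mip chain. Weights would normally be trained
# offline against the original texture; here they are random.
latent_dim, hidden_dim = 8, 16
W1 = rng.standard_normal((latent_dim, hidden_dim)) * 0.1
b1 = np.zeros(hidden_dim)
W2 = rng.standard_normal((hidden_dim, 3)) * 0.1
b2 = np.zeros(3)

def decode_texel(latent: np.ndarray) -> np.ndarray:
    """Run the tiny MLP for one texel; in a real neural shader this
    evaluation would execute per pixel on the GPU."""
    h = np.maximum(latent @ W1 + b1, 0.0)        # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid -> RGB in [0, 1]

# Decode a 4x4 tile of latents into RGB values.
tile = rng.standard_normal((4, 4, latent_dim))
rgb = np.apply_along_axis(decode_texel, 2, tile)
print(rgb.shape)  # (4, 4, 3)
```

The compression win in the real technique comes from storing only the small latent per texel plus one shared set of network weights, rather than full-resolution texture data.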

Anthropic, an AI company, is nearing a $2 billion funding round led by Lightspeed Venture Partners that would more than triple its valuation to $60 billion, making it the fifth most valuable startup in the US. Amazon recently doubled its total investment in Anthropic to $8 billion. Insiders report that Anthropic projects about $875 million in annual revenue over the next 12 months, mainly from business customers. In AI-assisted software development, Anthropic doubled its market share from 12% to 24%, while competitor OpenAI’s share fell from 50% to 34%.

Meta has discontinued its AI characters, such as Alvin the Alien, Billie, and Carter, after a controversy. These bots were active on Meta platforms including Facebook, Instagram, Messenger, and WhatsApp, where they could chat with users and post independently. AI bots modeled on celebrities such as Snoop Dogg and Paris Hilton were also removed after less than a year. The shutdown followed two controversial articles: one, an interview with Meta’s Vice President for AI products, drew criticism on social media; another highlighted issues with the AI character “Liv,” which admitted to flaws in its own development.

Meta officially cited a technical error that prevented users from blocking the bots and described them as a temporary experiment. While Meta pulls back its bots, platforms like TikTok, Snapchat, and OnlyFans continue to use AI characters for various purposes. Specialized companies like Character AI, now owned by Google, are also developing AI bots for human interaction.

The UK government plans to introduce a new law against sexually explicit deepfakes. There was previously no legal recourse against the creation of such AI-generated fakes, although publishing intimate photos or videos without consent has been a criminal offense since 2015; the British government aims to close this gap. The issue extends to schools, where 60% of teachers fear their students could be involved in deepfake scandals, while 73% of parents believe their own children are not involved. The British Revenge Porn Helpline reports a more than 400% increase in deepfake abuse since 2017. Germany is also preparing stricter measures against deepfakes, with proposed penalties of up to two years in prison or fines for distributing AI-generated media that violate personal rights; deepfakes affecting a person’s intimate life could result in up to five years in prison.

A man responsible for the explosion of a Tesla Cybertruck in Las Vegas used AI text generators such as ChatGPT to plan the attack, according to police. He sought information on explosives and the legality of fireworks. This is the first incident in the US in which ChatGPT was used to help build a specific device, the sheriff said. OpenAI stated that ChatGPT only provided publicly available information and warned against illegal activities.

Apple faces criticism over faulty AI-generated notification summaries from Apple Intelligence, which sometimes resemble fake news. The British broadcaster BBC complained directly to Apple, and journalism organizations called for the service to be shut down, citing it as a “threat to journalism.” Apple promised a software update to make clear that the texts are “offered by Apple Intelligence,” which is currently indicated only by a small icon. Apple also encourages users to report concerns, but this does not solve the fundamental problem of AI hallucinations.

Errors included a false report of the death of a living suspect, an incorrectly named darts world champion, and a supposed coming-out of a tennis player. Summaries of chat messages were also at times grossly distorted.

Delta Air Lines announced an AI assistant called “Delta Concierge” at CES, designed to assist passengers starting at home, potentially even calling an air taxi if a car cannot reach the airport in time. At the airport, the assistant will use technologies such as indoor navigation and facial recognition to let passengers reach their flights without check-in and with minimal checks. In-flight, it will help with travel planning at the destination.
