Potential Risks of AI: Cybercrime and Market Instability

AI : Potential Risks of AI: Cybercrime and Market Instability

Black swans are very rare, but they do exist. A “Black Swan” event refers to an unexpected event with significant impact. For example, the CrowdStrike fiasco from last summer can be considered such an event. A faulty update disrupted airports, companies, and hospitals. The magazine Politico asked some experts about unpredictable moments that might occur in 2025. Artificial intelligence is seen as a significant risk by two authors.

Gary Marcus is concerned about generative AI. He is an AI expert and an emeritus professor of neuroscience at New York University. Marcus is a major critic of the current hype around generative AI. Although he believes the potential of AI is fundamentally limited, he writes that cybercriminals could use the technology to cause significant harm. It is the “perfect tool for cyberattacks.” AI-generated texts can be used for phishing attacks, and deepfakes can mislead people. An employee at a Hong Kong bank reportedly fell for such a video and sent $25 million to fraudsters.

Large language models are also vulnerable to attacks like jailbreaking and prompt injections. These involve tricking AI models into performing actions not intended by the provider. In a harmless scenario, chatbots might answer questions they shouldn’t, but worse would be the theft of account data. Even more severe scenarios are conceivable.

Marcus also warns that developers use generative AI tools for coding. They sometimes don’t understand what the AI has done, and they might not fully control all the code. This could lead to security vulnerabilities. Marcus is also concerned that US authorities are acting in a deregulated manner.

Amy Webb fears a stock market crash. As the CEO of the Future Today Institute and a professor at New York University’s Stern School of Business, Amy Webb is often called a futurist. She also sees a problem in the planned deregulation and dismantling of regulatory structures by the re-elected President Donald Trump. Botnets have already proven how easy and effective it is to use AI for spreading misinformation. “After the elections, malicious actors and national discontented parties in 2025 focus on a new target: the financial markets,” she writes.

AI holds masses of real-time data, financial reports, and economic indicators, and it can also summarize public sentiment from social networks. Deliberately spread misinformation and rumors could destabilize the markets. AI could also be used for dissemination, making it sound credible and identifying the best times for release.

In conclusion, both Gary Marcus and Amy Webb highlight the potential risks associated with AI, particularly the misuse by cybercriminals and the impact on financial markets. The combination of powerful AI tools and a lack of regulation could lead to unexpected and significant challenges in the future. The need for careful oversight and understanding of AI technologies is crucial to mitigate these risks.