Yoshua Bengio Warns of AI’s Potential Dangers and Need for Regulation

Yoshua Bengio, a pioneer in the research of artificial neural networks and deep learning, warns about potential dangers that artificial intelligence (AI) might pose in the future.

Bengio suggests that if AI systems continue to be trained as they are now, they could eventually turn against humans. Some powerful individuals might even want humanity to be replaced by machines, he stated in an interview with CNBC.

Work is also ongoing on artificial general intelligence (AGI), which would not only match human-level intelligence but also be capable of learning on its own. It is important to consider what impact such systems could have on society.

Equally important is the question of who controls this potential future power. Systems that know more than most people can be dangerous in the wrong hands and could lead to instability or terrorism on a geopolitical level, according to Bengio.

The issue is that building and training powerful AI systems costs billions, so only a few organizations or countries can afford to finance their development, concentrating ever-stronger systems in a small number of hands.

Within a few decades, or even years, humanity could experience negative effects. Currently, there is a lack of methods to effectively prevent AI systems from turning against humans. “We simply don’t know how to do that,” says the AI pioneer.

However, it is not too late to change course and reduce potential AI risks. Governments need to introduce regulations that require companies to register and describe the systems they are working on.

AI companies should also be held accountable for their systems. This would make them act more cautiously to avoid lawsuits. Currently, there is a gray area in this regard, Bengio notes.

In the short term, AI is particularly dangerous as a source of misinformation, for example in the election campaigns of democratic states. This includes increasingly realistic fake photos and videos, as well as chatbots that, according to one study, can be more persuasive than humans on specific topics.

The most challenging question remains: “If we create beings that are smarter than us and pursue their own goals, what does that mean for humanity? Are we in danger?” To mitigate potential risks, much more research and forward-thinking policies are needed.