Predicting the future of artificial intelligence (AI) is difficult because the field moves so quickly. Still, the editors of MIT Technology Review have tried to highlight some upcoming trends, setting aside the obvious developments such as smaller, more efficient large language models (LLMs) and ever more capable AI agents.
1. Generative Virtual “Playgrounds”
If 2023 was the year of image generators and 2024 the year of AI video generators, what’s next? The answer might lie in generative virtual worlds, also known as video games. In February, Google DeepMind introduced a generative model called Genie, which could turn a simple image into a playable, side-scrolling 2D platform game. By December, it had unveiled Genie 2, capable of turning a single starting image into an entire virtual world. Other companies are working on similar technology. The AI startups Decart and Etched, for instance, showcased a Minecraft hack in which every frame of the game is generated in real time. And World Labs, co-founded by Fei-Fei Li, is developing large world models (LWMs) for similar purposes.
These systems have obvious applications in video games. Generative 3D simulations could be used to explore new game design concepts and turn simple sketches into playable environments. Such virtual “playgrounds” could also be used to train robots. World Labs aims to develop spatial intelligence: the ability of machines to interpret and interact with the physical world. Robotics researchers, however, lack good real-world data with which to train such technology, and spinning up large virtual worlds in which virtual robots can learn through trial and error could be one solution, as the toy example below illustrates.
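As a toy illustration of that trial-and-error loop, the sketch below runs a simulated agent in a simple physics environment from the open-source Gymnasium toolkit. A real setup would swap the CartPole task for a rich generative world and the random policy for a learning algorithm; the environment name and loop structure here reflect Gymnasium’s standard API, not any of the systems mentioned above.

```python
# Toy trial-and-error loop: a simulated agent takes random actions in a virtual
# environment and observes the rewards. This is the core pattern a robot-learning
# setup inside a generative world would follow, with a real learning algorithm in
# place of random actions. CartPole stands in for a rich generated world.
import gymnasium as gym

env = gym.make("CartPole-v1")           # placeholder "virtual world"
observation, info = env.reset(seed=0)

total_reward = 0.0
for step in range(1_000):
    action = env.action_space.sample()  # trial and error: pick a random action
    observation, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:         # episode over: reset and try again
        observation, info = env.reset()

env.close()
print(f"Accumulated reward over 1,000 random steps: {total_reward}")
```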
2. Large Language Models that “Think”
When OpenAI introduced its o1 model in September, it marked a shift in how large language models work, producing better answers and more accurate reasoning. Just two months later, the o3 model pushed the approach even further. Unlike earlier models, which gave the first answer that came to mind, these newer models are trained to work through answers step by step, breaking complex problems into simpler ones. If one approach fails, they try another. This “reasoning” technique can make large language models more accurate, especially on math, physics, and logic problems.
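As a rough illustration of what that step-by-step loop looks like in practice, here is a minimal sketch in Python. It is not how o1 or o3 work internally; the `call_model` function is a hypothetical stand-in for any LLM API, and the prompts are purely illustrative.

```python
# Minimal sketch of "reasoning" via step-by-step drafting, self-checking, and retrying.
# `call_model` is a hypothetical placeholder for an LLM API call; it is not a real
# library function, and none of this reflects how o1/o3 are implemented internally.

def call_model(prompt: str) -> str:
    """Stand-in for a chat-completion request to whichever model provider you use."""
    raise NotImplementedError("Wire this up to an actual LLM API.")

def solve_step_by_step(problem: str, max_attempts: int = 3) -> str:
    failed_notes = ""  # record of approaches that did not check out
    draft = ""
    for attempt in range(1, max_attempts + 1):
        # Ask for explicit intermediate steps rather than an immediate answer.
        draft = call_model(
            f"Problem: {problem}\n"
            f"Approaches that already failed:\n{failed_notes or 'none'}\n"
            "Break the problem into steps, solve each step, then state a final answer."
        )
        # Have the model (or a separate checker) verify the chain of steps.
        verdict = call_model(
            f"Problem: {problem}\nProposed solution:\n{draft}\n"
            "Check every step. Reply 'OK' if the reasoning holds, otherwise name the flaw."
        )
        if verdict.strip().upper().startswith("OK"):
            return draft                       # a verified answer
        failed_notes += f"\nAttempt {attempt}: {verdict}"
    return draft                               # fall back to the last draft
```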
This technique is crucial for AI agents. In December, Google DeepMind introduced an experimental web-browsing agent called Mariner. In a demo, Mariner was asked to find a cookie recipe that matched a photo. The agent found a recipe and began adding the ingredients to an online shopping cart, but got stuck on which type of flour to choose. Mariner explained its steps in a chat window, showing that it could break a task down into candidate actions and choose the one that would solve the problem, a notable demonstration of how far AI’s problem-solving abilities have come.
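To make the agent side of this concrete, the hypothetical sketch below shows the kind of plan-act-observe loop a browsing agent runs conceptually: the model proposes one action at a time, a tool executes it, and the result feeds back into the next decision. The helper functions are placeholders, not Google DeepMind’s actual interface.

```python
# Hypothetical sketch of an agent loop: the model decides one action at a time,
# a tool executes it, and the observation is fed back in. The helpers below are
# placeholders, not the actual Mariner or any vendor API.

def call_model(prompt: str) -> str:
    raise NotImplementedError("Stand-in for an LLM API call.")

def execute_action(action: str) -> str:
    """Stand-in for browser automation, e.g. 'search for the recipe' or 'add flour to cart'."""
    raise NotImplementedError("Stand-in for a browser/tool backend.")

def run_agent(task: str, max_steps: int = 20) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        # Ask for the single next action, given everything that has happened so far.
        action = call_model(
            f"Task: {task}\n"
            f"Steps taken so far: {history or 'none'}\n"
            "Name the single next action, or reply 'DONE' if the task is complete."
        )
        if action.strip().upper() == "DONE":
            break
        observation = execute_action(action)      # e.g. the page state after a click
        history.append(f"{action} -> {observation}")
    return history
```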
Google DeepMind is also working on Gemini 2.0, a large language model that uses this step-by-step problem-solving approach. And OpenAI and Google are just the beginning: many companies are building large language models with similar techniques to improve at everything from recipe planning to programming. Expect “reasoning” to remain one of the most prominent areas of AI innovation in 2025.
3. AI in Science
One of the most exciting uses of AI is speeding up discovery in the natural sciences. That potential was underscored when the Nobel Prize in Chemistry was awarded to Demis Hassabis and John M. Jumper of Google DeepMind for their AI-based protein-folding tool AlphaFold. The trend should continue in 2025, with more large scientific datasets and more AI models built specifically for research. Proteins were an early target because excellent datasets already existed on which to train AI models.
The hunt for the next big thing in AI for science is already under way, and materials science is one likely candidate. Meta has released large training datasets and AI models that could help scientists discover new materials much faster, and Hugging Face, together with the startup Entalpic, launched LeMaterial, an open-source project that aims to simplify and accelerate materials research. AI model makers are also keen to pitch their generative products as research tools: OpenAI let scientists test its latest o1 model to see how it might support research, and the results were encouraging.
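As a hedged sketch of how a researcher might pull such open materials data into a workflow, the snippet below uses the Hugging Face datasets library. The dataset identifier shown is an assumption about how the LeMaterial data is published (it may require a different name or an extra configuration argument), so treat it as illustrative rather than a verified reference.

```python
# Illustrative only: stream an open materials dataset from the Hugging Face Hub.
# The repository name "LeMaterial/LeMat-Bulk" is an assumption and may need a
# different identifier or an extra configuration argument in practice.
from datasets import load_dataset

materials = load_dataset("LeMaterial/LeMat-Bulk", split="train", streaming=True)

# Inspect a handful of records (each describing one material) without downloading
# the full dataset.
for i, record in enumerate(materials):
    print(record)
    if i >= 4:
        break
```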
An AI tool that can work the way researchers do is one of the tech sector’s biggest dreams. Anthropic co-founder and CEO Dario Amodei has singled out science, and biology in particular, as a key area where powerful AI could help. While we are still far from that scenario, 2025 could bring significant steps in that direction.
4. AI Firms in National Security
AI firms willing to provide tools for border surveillance, intelligence gathering, and other national-security tasks stand to earn substantial revenue. The US military has launched several initiatives: the Replicator program, spurred by the war in Ukraine, is investing a billion dollars in small AI-enabled drones, and the Artificial Intelligence Rapid Capabilities Cell aims to bring AI into battlefield decision-making and logistics. European militaries, fearing a drop in US spending on Ukraine, are under pressure to increase their own investment in such technology.
These trends will continue to benefit defense-tech companies like Palantir and Anduril, which capitalize on classified military data to train AI models. The defense industry’s deep pockets may also tempt mainstream AI companies: OpenAI announced a partnership with Anduril on an anti-drone program, marking a shift from its earlier stance against military work. Microsoft, Amazon, Google, and others have collaborated with the Pentagon for years.
AI companies that spend billions training and developing new models will face growing pressure in 2025 to show revenue. They may find enough customers outside the defense sector, whether businesses willing to pay for AI agents that handle complex tasks or creative industries investing in image and video generators. But many startups will also be tempted by lucrative Pentagon contracts, and they will have to decide whether working on defense projects conflicts with their values. OpenAI justified its change of policy by stating that “democracies should continue to lead in AI development” and argued that providing its models to the military supports that goal. In 2025 we will see whether, and how, others follow the AI giant’s example.
5. Nvidia Faces Real Competition
For much of the current AI boom, Jensen Huang has been the go-to person for startups building their own AI models. As CEO of Nvidia, he has led the company to market dominance in the GPUs essential for training large language models, image generators, and more. But the picture is changing. Major competitors such as Amazon, Broadcom, and AMD have invested heavily in new AI chips and are showing signs they can compete with Nvidia’s hardware. A growing number of startups are also challenging Nvidia from a different angle, using novel chip architectures to make AI training more efficient and effective.
In 2025, these efforts will still be in their early stages, but a challenger could emerge that reduces the industry’s reliance on a single manufacturer for top-tier AI hardware. Alongside this competition, the geopolitical chip war with China will continue: the West wants to limit exports of advanced chips and chipmaking technology to China, while efforts such as the US CHIPS Act boost domestic semiconductor production. Donald Trump may tighten export controls and impose tariffs on Chinese imports. Taiwan, which the US depends on because of TSMC, could become central to this trade war in 2025: the island has announced plans to help companies move production out of China to Taiwan to avoid tariffs, which could draw criticism from Trump. How the conflict plays out is uncertain, but it is likely to give chipmakers an incentive to reduce their dependence on Taiwan, possibly leading to AI chips “Made in America.”