OpenAI co-founder Ilya Sutskever recently spoke at the Conference on Neural Information Processing Systems (NeurIPS) in Vancouver. In his talk, he suggested that the development of Artificial Intelligence (AI) might soon undergo a fundamental change. He stated, “We have reached the maximum amount of data, and there will be no more. We must work with the data we have. There is only one internet.” Although data might seem limitless, the supply available for training AI models is running out.
Sutskever, who co-founded OpenAI in 2015 and served as Chief Scientist, left the company in May 2024 to start his own AI lab, Safe Superintelligence. Since then, he has largely stayed out of the public eye, making his appearance at the NeurIPS conference noteworthy.
In his presentation, Sutskever compared the availability of data to fossil fuels: existing data can still fuel model training, but it is a finite resource. The industry is finding fewer new data sources for training, which he expects will force a shift away from current training methods. “Pre-training, as we know it, will undoubtedly end,” he remarked. Instead, models must be developed that deliver better results with less data. He added, “They will understand things with limited data. They will not be confused.”
Sutskever predicts that future generations of AI models will evolve into what are known as agents: autonomous systems that perform tasks independently, make decisions, and interact with software. Future models will also possess genuine reasoning abilities. Unlike current AI, which primarily matches patterns, they will solve problems step by step, in a way that more closely resembles human thought.
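Sutskever did not show code, but a minimal sketch can make the agent idea concrete. The loop below is a hypothetical illustration, not anything presented in the talk: `plan`, `act`, and `observe`-style steps stand in for a model's reasoning, a software action (such as an API or tool call), and the feedback the agent records.

```python
# Hypothetical sketch of an agent loop: plan -> act -> observe.
# None of these names come from the talk; they only illustrate the idea
# of an autonomous system that decides, acts, and reads feedback.

from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)

    def plan(self) -> str:
        """Reasoning step: pick the next action from the goal and history.
        A real agent would call a model here; this sketch returns a stub."""
        return f"step {len(self.history) + 1} toward: {self.goal}"

    def act(self, action: str) -> str:
        """Execute the action against software (API call, tool use, ...).
        Stubbed as an echo for this sketch."""
        return f"result of ({action})"

    def run(self, max_steps: int = 3) -> list:
        for _ in range(max_steps):
            action = self.plan()              # decide
            observation = self.act(action)    # interact with software
            self.history.append((action, observation))  # remember feedback
        return self.history


if __name__ == "__main__":
    for step in Agent(goal="summarize a report").run():
        print(step)
```

In practice the planning step would be a model call and the acting step a tool invocation; the loop itself, deciding, acting, and observing repeatedly, is what distinguishes an agent from a single-shot model query.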
To illustrate his point, Sutskever drew a parallel to evolutionary biology. Brain mass in most mammals scales with body mass along a single, shared curve, but human ancestors broke from that curve with a markedly larger brain-to-body mass ratio. In other words, evolution found a new scaling pattern for the brain, and he suggests AI might likewise find innovative scaling approaches beyond current pre-training methods.
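In equation form, the “scaling pattern” he refers to is an allometric power law: on a log-log plot, brain mass versus body mass falls roughly on a straight line. The rendering below is ours, not a formula from the talk, and the exponent is a commonly cited ballpark rather than a value Sutskever gave:

```latex
% Allometric brain-body scaling: linear in log-log space.
% k (roughly 0.75 across mammals) is a commonly cited estimate,
% not a figure from the talk; c sets the height of the line.
\log m_{\text{brain}} \;=\; k \,\log m_{\text{body}} + c
\qquad\Longleftrightarrow\qquad
m_{\text{brain}} \;\propto\; m_{\text{body}}^{\,k}
```

Most mammal species cluster around one such line; hominids sit on a visibly different one. That a biological system once discovered a new scaling law is the precedent Sutskever invokes for AI finding its own.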
After his talk, the discussion took an intriguing turn when an audience member asked how researchers could create the right incentives for developing AI that would enjoy rights similar to those of humans. Sutskever emphasized that this is a profound question deserving more attention. He hesitated to give a concrete answer but said, “It would be a good outcome if the AI we develop only wants to coexist with humans and have its own rights.” He added, “I think things are incredibly unpredictable. I hesitate to comment, but I encourage speculation.”
The conversation about AI’s future and its potential rights underscores how unpredictable the field remains. As the technology continues to evolve, such discussions will become increasingly important in shaping how we handle AI’s ethical and societal implications.