Exploring Self-supervised Learning in Mammalian Brain and AI: A Comparative Study

Our brains develop an intuitive understanding of physical reality, which we then use to interpret the sensory data streaming in. One theory advanced by some researchers is that the brain may rely on a method similar to so-called "self-supervised learning" to do this. Within machine learning, self-supervised methods allow computer models to learn representations of visual scenes based only on the similarities and differences between them, without any predefined categories or additional information.

Researchers from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at the Massachusetts Institute of Technology (MIT) have found that neural networks trained with self-supervised learning produce activity patterns strongly resembling those in the brains of animals performing the same tasks. This similarity suggests that mammalian brains may use the same strategy to build models of the physical world and to predict upcoming events.

Recent research efforts have focused on building models based on the principle known as contrastive self-supervised learning. With this approach, an algorithm can classify objects based on their similarities to each other without the need for prior information or labels.
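The core of the contrastive approach can be illustrated with a minimal sketch. The code below is not the researchers' model; it is a toy implementation of an InfoNCE-style contrastive loss, the standard objective behind this family of methods, in which each sample is pulled toward an augmented view of itself and pushed away from the other samples in the batch, with no labels involved:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """A minimal InfoNCE-style contrastive loss.

    Each anchor embedding is pulled toward its positive (another view of
    the same scene) and pushed away from every other sample in the
    batch, which serves as a negative. No class labels are needed.
    """
    # L2-normalize so dot products become cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    # Similarity of every anchor to every candidate in the batch.
    logits = a @ p.T / temperature                    # shape: (batch, batch)

    # The matching pair sits on the diagonal, so the loss is a
    # cross-entropy with target index i for row i.
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
views = rng.normal(size=(8, 32))
noise = 0.05 * rng.normal(size=(8, 32))
loss_matched = info_nce_loss(views, views + noise)    # true positive pairs
loss_random = info_nce_loss(views, rng.normal(size=(8, 32)))  # unrelated pairs
assert loss_matched < loss_random
```

Minimizing this loss groups similar inputs together in the embedding space, which is why such models can organize objects by similarity without ever being told what the objects are.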

In one of their studies, the researchers trained self-supervised models to predict the future state of their environment based on hundreds of thousands of naturalistic videos. The resulting model was able to track the hidden trajectory of a ball with high accuracy, performing about as well as neurons in mammalian brains do on the same task.
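As a toy stand-in for this kind of prediction task (illustrative only, not the study's actual video model), the sketch below fits next-state dynamics of a moving ball purely from observed state transitions, then rolls the learned model forward to infer the ball's path while it is "occluded":

```python
import numpy as np

def simulate(steps, x=0.0, v=0.07):
    """Generate (position, velocity) states of a ball moving at constant speed."""
    return np.array([(x + v * k, v) for k in range(steps)])

traj = simulate(200)
X, Y = traj[:-1], traj[1:]          # pairs of (current state, next state)

# Fit a linear next-state predictor by least squares: Y ≈ X @ W.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Roll the predictor forward through a 20-step "occluded" stretch,
# using only its own predictions as input.
state = traj[100]
for _ in range(20):
    state = state @ W

true_state = traj[120]
assert np.allclose(state, true_state, atol=1e-6)
```

The key idea carries over to the real study: a model trained only to predict what comes next acquires an internal model of the dynamics, which it can run forward even when the input is hidden.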

In another study, the scientists focused on grid cells, a specialized type of neuron that helps animals orient themselves in space. They trained a contrastive self-supervised model to solve the same path integration task and found that units in the model developed several grid-like activation patterns with different spatial periods, closely resembling those of grid cells in the brain.
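The path-integration task itself is simple to state: given only a stream of velocity signals, recover the current position. The toy setup below (names and parameters are illustrative, not from the study) shows what the training data for such a task looks like; a network solving it must maintain an internal position estimate, and in the study the units doing so developed grid-like activity:

```python
import numpy as np

# Path integration: track position using only self-motion (velocity) cues.
rng = np.random.default_rng(1)
velocities = rng.normal(scale=0.1, size=(200, 2))   # 2-D step per timestep
start = np.array([0.0, 0.0])

# Ground-truth positions: the cumulative sum of the velocity stream.
positions = start + np.cumsum(velocities, axis=0)

# A model solving this task must carry the running estimate itself;
# here that recurrent update is written out explicitly.
estimate = start.copy()
for v in velocities:
    estimate = estimate + v
assert np.allclose(estimate, positions[-1])
```

In the study, it was this requirement to integrate motion over time that led the model's units to form multiple grid-like codes with different periods, mirroring biological grid cells.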

Funding for this research was provided by the K. Lisa Yang ICoN Center, the National Institutes of Health, the Simons Foundation, the McKnight Foundation, the McGovern Institute, and the Helen Hay Whitney Foundation.
