Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to novel situations, but we can also use these as the foundation for later learning. One of the grand goals of AI is to build artificial “continual learning” agents that construct a sophisticated understanding of the world from their own experience through the incremental development of increasingly complex knowledge and skills.
ContinualAI is an official non-profit research organization and the largest open community on Continual Learning for AI. Our core mission is to fuel continual learning research by connecting researchers in the field and offering a platform to share, discuss, and produce original research on a topic we consider fundamental for the future of AI.
AI Weekly: Continual learning offers a path toward more human-like AI
State-of-the-art AI systems are remarkably capable, but they suffer from a key limitation: they are static. Algorithms are trained once on a dataset and rarely again, so they cannot absorb new information without being retrained. This stands in contrast to the human brain, which learns constantly, using knowledge gained over time and building on it as it encounters new information. While there has been progress toward bridging the gap, solving the problem of “continual learning” remains a grand challenge in AI.
This challenge motivated a team of AI and neuroscience researchers to found ContinualAI, a nonprofit organization and open community of continual and lifelong learning enthusiasts. ContinualAI recently announced Avalanche, a library of tools compiled over the course of a year by more than 40 contributors to make continual learning research easier and more reproducible. The group also hosts conference-style presentations, sponsors workshops and AI competitions, and maintains a repository of tutorials, code, and guides.
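To give a feel for the kind of workflow Avalanche supports, here is a minimal sketch of an incremental training run. The module paths and class names (SplitMNIST, SimpleMLP, Naive) follow recent avalanche-lib releases and may differ across versions, so treat this as an illustration rather than a canonical recipe.

```python
# Minimal continual-learning sketch with Avalanche (avalanche-lib).
# Module paths follow recent releases; older versions expose strategies
# under avalanche.training.strategies instead of avalanche.training.supervised.
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.benchmarks.classic import SplitMNIST   # MNIST split into a stream of tasks
from avalanche.models import SimpleMLP                 # small reference model
from avalanche.training.supervised import Naive        # plain fine-tuning baseline

benchmark = SplitMNIST(n_experiences=5)                # 5 sequential "experiences"
model = SimpleMLP(num_classes=10)

strategy = Naive(
    model,
    SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(),
    train_mb_size=128,
    train_epochs=1,
    eval_mb_size=128,
)

# Train on each experience in order, then evaluate on the full test stream
# to see how much earlier knowledge is retained or forgotten.
for experience in benchmark.train_stream:
    strategy.train(experience)
    results = strategy.eval(benchmark.test_stream)
```

The Naive strategy used here simply fine-tunes on each new experience, which is exactly the setting where catastrophic forgetting shows up; Avalanche ships other strategies (e.g. replay- or regularization-based) that can be swapped in with the same loop.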
The Autodidactic Universe
We present an approach to cosmology in which the Universe learns its own physical laws. It does so by exploring a landscape of possible laws, which we express as a certain class of matrix models. We discover maps that put each of these matrix models in correspondence with both a gauge/gravity theory and a mathematical model of a learning machine, such as a deep recurrent, cyclic neural network. This establishes a correspondence between each solution of the physical theory and a run of a neural network.
This correspondence is not an equivalence, partly because gauge theories emerge from N → ∞ limits of the matrix models, whereas the same limits of the neural networks used here are not well-defined.
We discuss in detail what it means to say that learning takes place in autodidactic systems, where there is no supervision. We propose that if the neural network model can be said to learn without supervision, the same can be said for the corresponding physical theory.
We consider other protocols for autodidactic physical systems, such as optimization of graph variety, subset-replication using self-attention and look-ahead, geometrogenesis guided by reinforcement learning, structural learning using renormalization group techniques, and extensions. These protocols together provide a number of directions in which to explore the origin of physical laws based on putting machine learning architectures in correspondence with physical theories.