
Opinion: Breakthrough algorithm equips AI systems to adapt

University of Alberta researchers unlock key to sustaining learning in AI systems.

University of Alberta researchers have found that deep learning systems – advanced artificial intelligence models designed to process information through multiple layers, similar to how the human brain organizes information – lose their learning capacity over time. This loss of “plasticity,” or adaptability, poses a significant limitation for the future of AI.

But the researchers have now developed a solution they say can keep these systems flexible and ready to learn new information continuously.

“It’s this understudied phenomenon observed in artificial neural networks that when you train them for a really long time, they start losing their ability to learn,” explains J. Fernando Hernandez-Garcia, a PhD student in computing science and co-author of the study published in Nature. Neural networks, built from layers of interconnected “neurons,” usually learn by adjusting the “weights,” or connection strengths, between these neurons. These connection weights function much like synapses in the brain, growing stronger or weaker as learning occurs. However, Hernandez-Garcia explains that the longer the system trains, the more it begins to lose plasticity, which limits its ability to learn and adapt to new information.
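
To make the weight-adjustment idea concrete, here is a minimal sketch in Python using NumPy. The array sizes, learning rate, and step count are illustrative assumptions, not details from the study; it shows a single layer of connection weights being strengthened or weakened by gradient descent:

```python
import numpy as np

# A single layer of connection weights W; predictions are y_hat = x @ W.
# Learning adjusts W to reduce error, loosely analogous to synapses in
# the brain strengthening or weakening.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 1))       # connection weights

x = rng.normal(size=(8, 3))                  # a small batch of inputs
y = x @ np.array([[0.5], [-1.0], [2.0]])     # targets from a "true" rule

learning_rate = 0.1
for step in range(500):
    error = x @ W - y                        # forward pass and error
    grad = x.T @ error / len(x)              # gradient of mean squared error
    W -= learning_rate * grad                # strengthen/weaken connections

print(W.ravel())                             # approaches [0.5, -1.0, 2.0]
```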

To address this issue, the U of A team, led by Shibhansh Dohare, the study’s first author and a fellow PhD student, developed an approach called “continual backpropagation.” Backpropagation, or “backprop,” is a widely used learning method that allows AI systems to improve accuracy by adjusting neuron connections based on feedback from previous tasks. Continual backpropagation goes a step further by evaluating each neuron’s usefulness and resetting those that aren’t contributing significantly. This reset process restores plasticity in the system, keeping it capable of learning new information without losing adaptability.
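
The published algorithm specifies a particular utility measure, maturity threshold, and replacement rate; the Python sketch below is a simplified paraphrase of just the reset step, with `reinitialize_low_utility_units`, the utility proxy, and the numbers all illustrative assumptions rather than the authors’ exact method:

```python
import numpy as np

rng = np.random.default_rng(1)

def reinitialize_low_utility_units(W_in, W_out, mean_abs_activation,
                                   replacement_rate=0.25):
    """Sketch of the reset step in continual backpropagation.

    Utility here is a crude proxy (activation magnitude times
    outgoing-weight magnitude); the paper's algorithm also tracks
    running averages and only resets sufficiently mature units.
    """
    utility = mean_abs_activation * np.abs(W_out).sum(axis=1)
    n_reset = max(1, int(replacement_rate * len(utility)))
    worst = np.argsort(utility)[:n_reset]    # least useful hidden units

    # Give low-utility units fresh random input weights and zeroed
    # outgoing weights, so the reset doesn't disturb the output.
    W_in[:, worst] = rng.normal(scale=0.1, size=(W_in.shape[0], len(worst)))
    W_out[worst, :] = 0.0
    return W_in, W_out

# Example: a 4-unit hidden layer between 3 inputs and 2 outputs.
W_in = rng.normal(scale=0.1, size=(3, 4))
W_out = rng.normal(scale=0.1, size=(4, 2))
mean_abs_activation = np.array([0.9, 0.01, 0.7, 0.5])  # unit 1 is nearly dead
W_in, W_out = reinitialize_low_utility_units(W_in, W_out, mean_abs_activation)
```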

“The basic idea is that in the beginning, when it was learning one or two tasks, the network had plasticity. But then it’s lost over time,” Dohare said. “Reinitializing (resetting certain neurons in the AI network to their original state) brings it back.”

A comparable process, known as neurogenesis in biological systems, also occurs in human and animal brains, says Rupam Mahmood, assistant professor in computing science and Canada CIFAR AI Chair at the Alberta Machine Intelligence Institute (Amii). Deep learning models, which simulate how neurons form connections in the brain, rely on these “connection weights” to store information. Mahmood explains that, similar to the human brain, “the algorithms change the strength of those connections, and that’s how learning happens in these networks.”

Proving and measuring plasticity loss in these systems has traditionally been difficult due to the extensive computing power needed to run experiments over long periods, says Dohare. “You have to run these experiments for a long, long time, and it requires a lot of computational power,” he said. The study confirmed that, when using traditional backpropagation, a system’s ability to learn fades as tasks progress, underscoring the need for a solution like continual backpropagation.
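
To illustrate what such a long-horizon experiment looks like, here is a toy protocol in the spirit of the study’s setup. The network size, the “permuted inputs” task construction, and all hyperparameters are assumptions for illustration, not the paper’s configuration: one network is trained with plain backprop on a long sequence of tasks while the final loss on each task is recorded.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hidden = 16, 32

# One hidden-layer network trained with plain backprop across all tasks.
W1 = rng.normal(scale=0.3, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.3, size=(n_hidden, 1))

def train_on_task(perm, steps=300, lr=0.01):
    """Train on one 'permuted inputs' regression task; return final loss."""
    global W1, W2
    w_true = rng.normal(size=(n_in, 1))          # this task's target rule
    for _ in range(steps):
        x = rng.normal(size=(32, n_in))
        y = x @ w_true
        h = np.maximum(x[:, perm] @ W1, 0.0)     # ReLU hidden layer
        err = h @ W2 - y
        gW2 = h.T @ err / len(x)
        gW1 = x[:, perm].T @ ((err @ W2.T) * (h > 0)) / len(x)
        W1 -= lr * gW1
        W2 -= lr * gW2
    return float((err ** 2).mean())

losses = [train_on_task(rng.permutation(n_in)) for _ in range(50)]
# Rising final losses on later tasks would indicate plasticity loss;
# the study's experiments run over far longer task sequences.
print(losses[0], losses[-1])
```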

Mahmood views this development as a crucial step for the future of deep learning in real-world applications. “It’s naturally expected of automated systems that they’re able to learn continuously,” he said. Current systems like ChatGPT, which only learn from specific sets of data, lack the capacity to pick up new information after they are initially trained. “Current deep learning methods are not actually learning when we are interacting with the system – they’re frozen in time,” Mahmood added.

As the cost of retraining deep learning models on new data remains high, continual backpropagation is seen as a practical solution for AI systems to remain adaptable and learn continuously without needing costly, resource-intensive updates. Hernandez-Garcia said the team’s research will help guide future systems to learn and adapt in real-time in ways that current models cannot.

The U of A study provides a glimpse into how AI systems might stay adaptable in a fast-paced digital world. “Our work is sort of a testing ground for what we are expecting in the future, which is deep learning systems being employed in the real world and learning continually,” Mahmood said.

The commentaries offered here are intended to provide thought-provoking material for our readers. The opinions expressed are those of the authors. Contributors' articles or letters do not necessarily reflect the opinion of the publication's staff.
