
Selective retraining helps AI learn new skills without forgetting, study finds

Wednesday October 15, 2025. 12:52 PM , from InfoWorld
A new study from the University of Illinois Urbana-Champaign suggests that the loss of skills often seen when fine-tuning large AI models may not be true forgetting but a temporary bias in their output.

By retraining only specific layers, such as the self-attention projection and upper MLP components, researchers found that models could acquire new abilities while preserving older ones, which reduces retraining costs and improves stability.

The researchers tested their approach on multimodal models such as LLaVA and Qwen2.5-VL, fine-tuning only select layers to measure learning gains, stability, and the extent of knowledge retention across multiple tasks.

The findings highlight a potentially more efficient approach for enterprises and developers seeking to update large language and multimodal models without compromising existing performance. That distinction could matter a lot for enterprise AI teams, which routinely need to add capabilities to deployed models without degrading the ones already in production.

Overcoming retraining challenges

Training a new large multimodal model can cost millions of dollars and take several weeks. As models and datasets scale, retraining them from scratch becomes increasingly difficult.

“One option is to simply fine-tune the model on the new task,” the researchers said. “However, at least for simpler models, fine-tuning is known to cause catastrophic forgetting, such that a model previously proficient on many tasks becomes a narrow expert on the new one.”

To test whether this problem holds for today’s large multimodal models, the team conducted a controlled evaluation. They trained the selected models on five target tasks, including fine-grained bird classification, counting, medical visual question answering, OCR reading, and time reading. They then measured how much performance dropped across eight standard benchmarks that were not part of the fine-tuning set.

These experiments led to two key discoveries, according to the paper. Tuning only the self-attention projection layers (SA Proj), the part of the model that helps it decide which input elements to focus on, allowed the models to learn new tasks with little or no measurable forgetting. Also, what initially appeared as forgotten knowledge often resurfaced when the model was later trained on another specialized task.
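In practice, tuning only the self-attention projection layers amounts to freezing every other parameter before training. A minimal sketch of that selection step, using illustrative parameter names modeled on common Hugging Face conventions (q_proj, k_proj, v_proj, o_proj); the paper's exact layer choices may differ:

```python
import re

# Matches self-attention projection weights under common Hugging Face
# naming (q_proj, k_proj, v_proj, o_proj). Illustrative only: real
# checkpoints may use different naming conventions.
SA_PROJ_PATTERN = re.compile(r"self_attn\.(q|k|v|o)_proj\.")

def trainable_sa_proj(param_names):
    """Return the parameter names that would stay trainable when
    fine-tuning only the self-attention projection layers."""
    return [n for n in param_names if SA_PROJ_PATTERN.search(n)]

# Hypothetical parameter names for one decoder block.
names = [
    "model.layers.0.self_attn.q_proj.weight",
    "model.layers.0.self_attn.k_proj.weight",
    "model.layers.0.self_attn.v_proj.weight",
    "model.layers.0.self_attn.o_proj.weight",
    "model.layers.0.mlp.gate_proj.weight",
    "model.layers.0.mlp.down_proj.weight",
    "model.layers.0.input_layernorm.weight",
]

tuned = trainable_sa_proj(names)
# In a real training loop, every parameter outside `tuned` would get
# requires_grad=False, and only `tuned` would be passed to the optimizer.
```

The MLP and layer-norm weights are excluded, which is exactly the configuration the researchers found avoids the numeric-token bias described below.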

“We thus hypothesize that perhaps what looks like forgetting or interference after fine-tuning on a narrow target task is actually bias in the output distribution due to the task distribution shift,” the researchers added. “Through in-depth analysis when tuning the counting task, we confirm this hypothesis: tuning the MLP increases target accuracy but also increases the likelihood of outputting numeric tokens and a highly correlated drop in held-out task accuracy, while tuning the self-attention achieves the target learning without much bias toward numeric tokens and without losing held-out accuracy.”

The results show that the apparent loss on held-out tasks after narrow fine-tuning is often temporary: performance that drops at one stage can recover later, the researchers said in the paper. “We trace this behavior to a measurable shift in the next-token distribution rather than the loss of concepts. A simple counting-bias probe makes this drift visible, and a layer-wise residual-to-logit analysis shows that most of the shift is written by late MLP blocks, not by self-attention.”
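The paper's counting-bias probe is not published as code here, but the underlying idea, measuring how often the model emits numeric tokens after narrow fine-tuning, can be sketched with a crude stand-in (the function name and token representation are assumptions for illustration):

```python
def numeric_token_rate(tokens):
    """Fraction of tokens that are purely numeric -- a crude stand-in
    for a probe of drift toward numeric outputs in the next-token
    distribution after fine-tuning on a counting task."""
    if not tokens:
        return 0.0
    numeric = sum(1 for t in tokens if t.strip().isdigit())
    return numeric / len(tokens)

# A model biased by counting-task fine-tuning would show this rate
# climbing on held-out, non-counting prompts.
rate = numeric_token_rate(["There", "are", "12", "birds"])  # -> 0.25
```

Tracking such a rate before and after fine-tuning would make the distribution shift the researchers describe directly visible, without any claim that knowledge has been erased.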

Enterprise implications and readiness

Industry analysts say the findings could influence how enterprises approach AI model maintenance and optimization.

“The research claims an innovative approach that could redefine enterprise developer practices, which can save cost and time as it introduces layer-specific retraining,” said Faisal Kawoosa, founder and lead analyst at Techarc. “It also addresses a very common issue of ‘catastrophic forgetting’. The tuning of self-attention projection layers (SA Proj) has resulted in learning outcomes without any drop in performance.”

Kawoosa noted that while the findings are promising, further validation will be essential. More testing across multiple scenarios and environments will be needed to confirm the approach’s effectiveness and robustness in enterprise settings.

Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research, said the approach mentioned by the researchers could make AI maintenance less disruptive for technology teams.

“Instead of giant retraining projects that eat up quarters and capital, updates can now happen quietly and often, more like servicing a car than rebuilding the engine,” Gogia said. However, adopting partial retraining at scale will require stronger development processes and governance. “Partial retraining only works when process catches up with promise,” Gogia added. “Enterprises will need proper scaffolding around this workflow, including version control, monitoring, and reproducibility, to make it sustainable at scale.”
https://www.infoworld.com/article/4072766/selective-retraining-helps-ai-learn-new-skills-without-for...
