Transfer Learning vs Incremental Learning for training neural nets

Author: EIS
Release Date: Sep 1, 2021


BrainChip, the AI processor specialist, has looked at whether transfer learning is more efficient than incremental learning in training neural nets to perform AI/ML tasks.
 
 
In transfer learning, applicable knowledge established in a previously trained AI model is “imported” and used as the basis of a new model. With this shortcut of starting from a pretrained model, such as one trained on an open-source image or NLP dataset, new objects can be added to customize the result for the particular scenario.
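As a rough sketch of this pattern, the PyTorch snippet below loads a pretrained image model, freezes its learned backbone, and swaps in a new classifier head for the scenario-specific objects. The five-class setup and the choice of torchvision's ResNet-18 are illustrative assumptions, not anything prescribed by BrainChip.

```python
# Minimal transfer-learning sketch (assumptions: torchvision's
# ImageNet-pretrained ResNet-18 stands in for "a previously trained
# AI model"; five hypothetical new object classes).
import torch
import torch.nn as nn
from torchvision import models

NUM_NEW_CLASSES = 5  # assumption: five scenario-specific classes

# Import knowledge from a model pretrained on an open-source image dataset
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained backbone so its learned features are reused as-is
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head so new objects can be added for this scenario
model.fc = nn.Linear(model.fc.in_features, NUM_NEW_CLASSES)

# Only the new head's weights are fine-tuned on task-specific data
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One fine-tuning step on a batch of task-specific samples."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Only the small replacement head is trained here, which is the shortcut; it is also why, as noted below, substantial task-specific data is still needed to fine-tune it well.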
 
The primary drawback of this approach is accuracy. Fine-tuning the pretrained model requires large amounts of task-specific data to learn the new weights. Because it involves working with the layers of the pretrained model until they add value to the new model, it may also demand more specialized machine-learning skills, tools, and service vendors.
 
When used for edge AI applications, transfer learning involves sending data to the cloud for retraining, incurring privacy and security risks. Once a new model is trained, any time there is new information to learn, the entire training process needs to be repeated. This is a frequent challenge in edge AI, where devices must constantly adapt to changes in the field.
 
“First and foremost is the issue of there being an available model that you can make work for your application, which is not likely for anything but very basic AI, and then you need enough samples to retrain it properly,” says BrainChip co-founder Anil Mankar. “Since this requires going to the cloud for retraining and then back down to the device, transfer learning is still a very complex and costly process, though it’s a nice option when and where it’s possible to use it.”
 
Incremental learning is another approach, often used to reduce the resources needed to train models because of its efficiency and its ability to accommodate new and changed data inputs. An edge device that can perform incremental learning within the device itself, rather than send data to the cloud, can learn continuously.
 
Incremental or “one-shot” learning can begin with a very small set of samples and grow its knowledge as more data is absorbed. This ability to evolve with more data also yields higher accuracy. When retraining is done on the device’s hardware rather than in the cloud, the data and the application remain private and secure.
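One way to picture on-device incremental learning, purely as an illustration rather than BrainChip's actual algorithm, is a nearest-prototype classifier that keeps a running mean feature vector per class: a single sample is enough to add a new class (“one-shot”), and each further sample refines the prototype with no stored dataset and no cloud round trip.

```python
# Generic incremental-learning sketch (not BrainChip's implementation):
# each class keeps a prototype feature vector that can start from one
# sample and is updated in place as more data is absorbed on-device.
import numpy as np

class IncrementalPrototypeClassifier:
    def __init__(self):
        self.prototypes = {}  # label -> running-mean feature vector
        self.counts = {}      # label -> number of samples absorbed

    def learn(self, features, label):
        """Absorb one sample; one call is enough to add a new class."""
        x = np.asarray(features, dtype=float)
        if label not in self.prototypes:
            self.prototypes[label] = x.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            # Incremental mean update: no dataset kept, no cloud retraining
            self.prototypes[label] += (x - self.prototypes[label]) / self.counts[label]

    def predict(self, features):
        """Return the label whose prototype is nearest to the input."""
        x = np.asarray(features, dtype=float)
        return min(
            self.prototypes,
            key=lambda label: np.linalg.norm(x - self.prototypes[label]),
        )
```

In this sketch, `learn()` can be called once to make a new class recognizable and repeatedly to improve accuracy as more data arrives, mirroring the grow-as-you-go behavior described above.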
 
“Most of the time, AI projects don’t have large enough data sets in the beginning, and don’t have access to cloud computing for retraining, so they keep paying their vendor whenever anything changes,” says Mankar. “We generally recommend incremental learning because it addresses most of the shortcomings of transfer learning and requires dramatically lower computing costs.”