Transfer Learning: a solution to your problems
Neural networks are a different breed of model compared to classical supervised machine learning algorithms, and their most prominent problem is the hardware cost of training them. You may need hundreds of GBs of RAM to train a sufficiently complex model, and access to GPUs is not cheap. Transfer learning is a solution to these problems.
“Transfer learning” enables us to reuse models pre-trained by other people, with only small changes. My talk will cover how we can use pre-trained models to accelerate our solutions, including:
- What is transfer learning?
- What is a Pre-trained Model?
- Why would we use pre-trained models? – A real-life example
- How can I use pre-trained models with Keras?
- Ways to fine-tune your model:
  - Retraining only the output dense layers
  - Freezing the weights of the first few layers
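The workflow the outline describes can be sketched in a few lines of Keras. This is a minimal, illustrative example (the choice of MobileNetV2, the input shape, and the 5-class head are assumptions, not part of the talk itself): load a model pre-trained on ImageNet, freeze its weights, and retrain only a new dense output layer.

```python
from tensorflow import keras

# Load a model pre-trained on ImageNet, dropping its original
# classification head (include_top=False).
base = keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")

# Freeze the pre-trained weights so only the new head is trained.
base.trainable = False

# Attach a small dense head for our own task (5 classes here is
# an arbitrary, illustrative choice).
model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

To fine-tune deeper layers later, you would set `base.trainable = True`, freeze only the first few layers individually, and re-compile with a low learning rate.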
Prerequisites: basics of neural networks, deep learning and Python
Divyam Madaan is an open source enthusiast contributing as a core developer at GCompris, a high-quality educational software suite comprising numerous activities for children aged 2 to 10. He has been a Season of KDE developer and a Google Summer of Code student in 2017.
He is a Pythonista who works on projects that make life easier for people using AI and the web. He has been working as a software developer at KDE for more than a year and has taken part in various deep learning projects and contests. He has conducted workshops and talks for students at his college. He has also spoken at the KDE India Conference, and his talks were selected at Akademy in Almería, Spain, and FOSSCON in the US.