
What Does Freezing A Layer Mean And How Does It Help In Fine Tuning Neural Networks

Freezing a layer in the context of neural networks is about controlling the way weights are updated. When a layer is frozen, its weights are no longer modified during training.

This technique, as obvious as it may sound, cuts down on training time while giving up little in accuracy.

Techniques like dropout and stochastic depth have already demonstrated how to train networks efficiently without updating every layer at every step.

Freezing a layer, too, is a technique to accelerate neural network training by progressively freezing hidden layers.

For instance, during transfer learning, the first layers of the network are frozen while the final layers are left open to modification.

This means that if a machine learning model is tasked with object detection, putting an image through the network during the first epoch and putting the same image through it again during the second epoch produces the same output from the frozen layer.

In other words, consider a network with two layers, where the first layer is frozen and the second is not. If we train for 100 epochs, we perform an identical computation through the first layer in each of those 100 epochs.

The same images are run through the same layer without its weights being updated: for every epoch the inputs to the first layer are the same (the images), the weights of the first layer are the same, and so the outputs of the first layer are the same (images * weights + bias).
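A minimal sketch of this with Keras (the layer sizes, toy data, and training settings are made up for illustration): freeze the first of two Dense layers, train for a few epochs, and check that the frozen layer's outputs for the same inputs never change.

import numpy as np
from keras.models import Sequential, Model
from keras.layers import Dense

# Toy data: 100 samples with 8 features each, binary labels
x = np.random.rand(100, 8)
y = np.random.randint(0, 2, size=(100, 1))

model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(8,), trainable=False))  # frozen first layer
model.add(Dense(1, activation='sigmoid'))                                   # trainable second layer
model.compile(optimizer='sgd', loss='binary_crossentropy')

# Auxiliary model that exposes the frozen layer's output
first_layer = Model(inputs=model.inputs, outputs=model.layers[0].output)

before = first_layer.predict(x)
model.fit(x, y, epochs=5, verbose=0)
after = first_layer.predict(x)

print(np.allclose(before, after))  # True: the frozen layer computes the same outputs every epoch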

In transfer learning, by default the pre-trained part is frozen and only the last layers are trained; how large a change is made to the weights of a trainable layer is governed by the learning rate.
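A typical transfer-learning sketch along these lines, using VGG16 as an example backbone (the head sizes and the 10-class output are arbitrary choices for illustration):

from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D

# Load a pre-trained backbone without its classification head
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze every pre-trained layer so its weights are not updated
for layer in base.layers:
    layer.trainable = False

# Add a new head; only these layers will be trained
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(10, activation='softmax')(x)

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')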

Accelerated Training By Freezing

In the paper titled FreezeOut, the authors use learning rate annealing layer by layer, i.e., they change the learning rate per layer instead of for the whole model.

Once a layer’s learning rate reaches zero, it gets set to inference mode and excluded from all future backward passes, resulting in an immediate per-iteration speedup proportional to the computational cost of the layer.
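The authors' implementation is not reproduced here; the sketch below captures only the progressive-freezing part of the idea as a Keras callback (the per-layer learning-rate annealing is omitted, the freeze fractions are supplied by hand, and the class name is made up):

from keras.callbacks import Callback

class SimpleFreezeOut(Callback):
    # Rough sketch of the FreezeOut idea, not the authors' code:
    # layer i is frozen once training progress passes its fraction t_i.
    def __init__(self, freeze_fractions, total_epochs, optimizer, loss):
        super(SimpleFreezeOut, self).__init__()
        self.freeze_fractions = freeze_fractions  # one fraction per layer, e.g. [0.5, 0.7, 0.9, 1.0]
        self.total_epochs = total_epochs
        self.optimizer = optimizer
        self.loss = loss

    def on_epoch_begin(self, epoch, logs=None):
        progress = epoch / float(self.total_epochs)
        changed = False
        for layer, t_i in zip(self.model.layers, self.freeze_fractions):
            if layer.trainable and progress >= t_i:
                layer.trainable = False  # exclude this layer from further weight updates
                changed = True
        if changed:
            # Keras applies trainable changes only after recompiling
            self.model.compile(optimizer=self.optimizer, loss=self.loss)

# Usage: model.fit(x, y, epochs=20, callbacks=[SimpleFreezeOut([0.5, 0.7, 0.9, 1.0], 20, 'sgd', 'categorical_crossentropy')])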

The results from the experiments made on popular models show a promising speedup versus accuracy tradeoff.

“For every strategy, there was a speedup of up to 20%, with a maximum relative 3% increase in test error. Lower speedup levels perform better and occasionally outperform the baseline, though given the inherent level of non-determinism in training a network, we consider this margin insignificant,” say the authors in their paper titled FreezeOut.

Whether this tradeoff is acceptable is up to the user. If one is prototyping many different designs and simply wants to observe how they rank relative to one another, then employing higher levels of FreezeOut may be tenable.

If, however, one has set one’s network design and hyperparameters and simply wants to maximize performance on a test set, then a reduction in training time is likely of no value, and FreezeOut is not a desirable technique to use.

Based on these experiments, the authors recommend a default strategy of cubic scheduling with learning rate scaling, using a t_0 value of 0.8 before cubing (so t_0 = 0.512) for maximizing speed while remaining within an envelope of 3% relative error.
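As a rough illustration of what such a schedule looks like: only the t_0 = 0.8 (0.512 after cubing) comes from the text above, and the linear spacing of the remaining per-layer values up to 1.0 is an assumption about the schedule's shape.

import numpy as np

def cubic_freeze_schedule(num_layers, t_0=0.8):
    # Hypothetical helper: space per-layer values linearly from t_0 to 1, then cube them,
    # so the earliest layer freezes after 0.8 ** 3 = 0.512 of training and the last never freezes early
    t = np.linspace(t_0, 1.0, num_layers)
    return t ** 3

print(cubic_freeze_schedule(5))  # [0.512, 0.614125, 0.729, 0.857375, 1.0]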

Key Takeaways

  • Freezing reduces training time as the number of backward-pass computations goes down.
  • Freezing a layer too early in training is not advisable.
  • If you freeze all the layers but the last five, you only need to backpropagate the gradient and update the weights of the last five layers, which results in a large decrease in computation time (see the sketch after this list).
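For the last point, a minimal sketch, assuming model is an already-built Keras model with more than five layers:

for layer in model.layers[:-5]:
    layer.trainable = False   # freeze everything except the last five layers
for layer in model.layers[-5:]:
    layer.trainable = True    # keep the last five layers trainable
model.compile(optimizer='adam', loss='categorical_crossentropy')  # recompile so the flags take effect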

Here is a sample code snippet showing how freezing is done with Keras:

from keras.layers import Dense, Dropout, Activation, Flatten
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D, GlobalAveragePooling2D

model = Sequential()

# Setting trainable=False freezes the layer: its weights will not be updated during training
# (the input shape below is an illustrative choice)
model.add(Conv2D(64, (3, 3), trainable=False, input_shape=(224, 224, 3)))

Check the full code here


