
Regularizers

Regularization is a family of techniques used to prevent overfitting in deep learning models. A common approach adds a penalty term to the loss function that discourages model parameters from growing too large. Widely used regularizers include L1 and L2 regularization (also known as Lasso and Ridge regularization, respectively) and Dropout.
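As a rough sketch (not the project's actual code), the penalized objective is just the data loss plus a weighted penalty on the weights; the function and the strength `lam` below are illustrative:

```python
import numpy as np

def regularized_loss(data_loss, weights, lam=1e-4, kind="l2"):
    """Return data_loss plus an L1 or L2 penalty on the weights.

    lam (the regularization strength) is a hypothetical value;
    in practice it is tuned on a validation set.
    """
    if kind == "l1":
        penalty = sum(np.abs(w).sum() for w in weights)   # sum of |w|
    else:
        penalty = sum((w ** 2).sum() for w in weights)    # sum of w^2
    return data_loss + lam * penalty
```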

In this work, regularizers were applied to a specific CNN, LeNet-5, to see whether they improve performance on pedestrian classification compared to the baseline obtained with the same network in the annexes of the CNN (LeNet-5) & MNIST project.
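To illustrate how such a setup might look, here is a minimal Keras sketch of LeNet-5 with the regularizers attached. The 32x32 grayscale input, the binary pedestrian/non-pedestrian output, and all strengths and rates are assumptions for the sketch, not values from the report.

```python
from tensorflow.keras import layers, models, regularizers

def build_lenet5(l2_strength=1e-4, dropout_rate=0.5):
    """LeNet-5 sketch with L2 weight penalties, BatchNorm and Dropout."""
    model = models.Sequential([
        layers.Input(shape=(32, 32, 1)),          # assumed input size
        layers.Conv2D(6, 5, activation="tanh",
                      kernel_regularizer=regularizers.l2(l2_strength)),
        layers.AveragePooling2D(2),
        layers.Conv2D(16, 5, activation="tanh",
                      kernel_regularizer=regularizers.l2(l2_strength)),
        layers.AveragePooling2D(2),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.BatchNormalization(),              # normalize per mini-batch
        layers.Dropout(dropout_rate),             # randomly zero activations
        layers.Dense(84, activation="tanh"),
        layers.Dense(1, activation="sigmoid"),    # pedestrian vs. not
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```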

[Figure: dataset.png, sample images from the dataset]

Dataset

The dataset was built manually from the JAAD database: 200 images were extracted from its videos.

Selected Regularizers

Dropout, L1, L2, and Batch normalization

L1 and L2 regularization add a penalty term to the loss function that is proportional to the absolute value or the square of the model parameters, respectively. Dropout randomly sets a fraction of the activations to zero during training, which reduces overfitting by preventing units from co-adapting. Batch normalization normalizes layer activations over each mini-batch; it mainly stabilizes and speeds up training, and often has a mild regularizing side effect.
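In Keras, each of these can be attached with a single line; the strengths and rates below are illustrative, not the values used in this work:

```python
from tensorflow.keras import layers, regularizers

# L1: penalty proportional to sum(|w|); the strength 1e-5 is illustrative.
l1_layer = layers.Dense(120, kernel_regularizer=regularizers.l1(1e-5))

# L2: penalty proportional to sum(w^2); the strength 1e-4 is illustrative.
l2_layer = layers.Dense(120, kernel_regularizer=regularizers.l2(1e-4))

# Dropout: zeroes 50% of the activations at random, during training only.
dropout_layer = layers.Dropout(0.5)

# Batch normalization: normalizes activations over each mini-batch.
bn_layer = layers.BatchNormalization()
```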

[Figure: Confusion matrix using Batch normalization]

[Figure: Confusion matrix of a mix of the regularizers]

[Figure: Confusion matrix using L1 and L2]

[Figure: Confusion matrix using Dropout]

Results

After training a series of combinations of the regularizers, the results were evaluated using several strategies, one of which is the confusion matrix.
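For reference, a confusion matrix for a binary classifier like this one can be computed along these lines (a sketch; `model`, `x_test`, and `y_test` are placeholders for the trained network and held-out data):

```python
from sklearn.metrics import confusion_matrix

y_prob = model.predict(x_test)                 # predicted probabilities
y_pred = (y_prob.ravel() >= 0.5).astype(int)   # threshold into class labels
cm = confusion_matrix(y_test, y_pred)          # rows: true, columns: predicted
print(cm)  # [[TN, FP], [FN, TP]] for a binary pedestrian classifier
```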

The two confusion matrices highlighted in red gave the best results. The mix of regularizers provides stable results, while the Batch normalization configuration is less stable and yields more false positives; even so, it is still better than the two unhighlighted configurations.

For more details, you can see the report here.
