Huber and Hinge loss

Introduction

Loss functions are an important part of Machine Learning. In this article at OpenGenus, we will focus on two common loss functions: Huber loss and Hinge loss. Both functions handle outliers in a way that leads to better estimates. Huber loss is used mostly for regression, while Hinge loss is used mostly for binary classification. The Huber loss function combines characteristics of both Mean Squared Error and Mean Absolute Error; in particular, it is less sensitive to outliers than Mean Squared Error, which can give a more accurate prediction or estimation. Hinge loss is used for maximum-margin classification, most notably in Support Vector Machines.

Huber Loss
As mentioned, Huber Loss functions have the characteristics of both the Mean Squared Error and Mean Absolute Error. Here is how the Huber Loss function is defined:

HuberLoss(y_true, y_pred, delta) =
    0.5 * (y_true - y_pred)^2                    if |y_true - y_pred| <= delta
    delta * |y_true - y_pred| - 0.5 * delta^2    if |y_true - y_pred| > delta

What it means:
y_true is the vector of true values and y_pred is the vector of predicted values. delta is the parameter that determines whether the quadratic or linear component is used. If the absolute difference between y_true and y_pred is less than or equal to delta, the quadratic term is used. On the other hand, when it is greater than delta, the linear term is used.
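
For example, with delta = 1.5: a residual of 0.5 is at most delta, so the quadratic term gives 0.5 * 0.5^2 = 0.125, while a residual of 2 exceeds delta, so the linear term gives 1.5 * 2 - 0.5 * 1.5^2 = 1.875. The linear term grows more slowly than the quadratic term (0.5 * 2^2 = 2) would, which is what reduces the influence of outliers.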

Implementation
Now we will dive into the implementation of Huber Loss in Python. Given below is code that shows the implementation of this function:

import numpy as np

def huber_loss(y_true, y_pred, delta):
    # Absolute difference between true and predicted values
    residual = np.abs(y_true - y_pred)
    # Quadratic term, applied where the residual is small (<= delta)
    quadratic_term = 0.5 * np.square(residual)
    # Linear term, applied where the residual is large (> delta)
    linear_term = delta * (residual - 0.5 * delta)
    # Select the appropriate term element-wise
    loss = np.where(residual <= delta, quadratic_term, linear_term)
    # Average over all samples
    return np.mean(loss)

# Sample Input and Output
y_true = np.array([1, 2, 3, 4, 5])
y_pred = np.array([1, 3, 3.5, 6, 7])
delta = 1.5

huber_loss_value = huber_loss(y_true, y_pred, delta)
print("Huber Loss:", huber_loss_value)

This code computes the residuals between the true and predicted values, applies the quadratic term where the residual is at most delta and the linear term where it exceeds delta, and returns the mean loss. Given below is sample input and output using this implementation:

Sample input:
y_true = [1, 2, 3, 4, 5]
y_pred = [1, 3, 3.5, 6, 7]
delta = 1.5
Sample output:
Huber Loss: 0.875
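
To illustrate the claim that Huber loss is less sensitive to outliers than Mean Squared Error, here is a minimal self-contained sketch (the data values and delta are made up for illustration) that compares the two losses on predictions containing one large outlier:

import numpy as np

def mse_loss(y_true, y_pred):
    return np.mean(np.square(y_true - y_pred))

def huber(y_true, y_pred, delta):
    residual = np.abs(y_true - y_pred)
    return np.mean(np.where(residual <= delta,
                            0.5 * np.square(residual),
                            delta * (residual - 0.5 * delta)))

y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = np.array([1.1, 2.1, 2.9, 4.2, 15.0])  # last prediction is a large outlier

print("MSE:  ", mse_loss(y_true, y_pred))    # about 20.01, dominated by the outlier
print("Huber:", huber(y_true, y_pred, 1.5))  # about 2.78, outlier only counts linearly

The outlier's residual of 10 contributes 100 to the squared error but only 1.5 * (10 - 0.75) = 13.875 to the Huber loss, since beyond delta the penalty grows linearly instead of quadratically.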

Hinge Loss
As mentioned, Hinge Loss functions are used in binary classification tasks. This function is defined below:

HingeLoss(y_true, y_pred) = max(0, 1 - y_true * y_pred)

What it means:
y_true is the true class label (-1 or 1) while y_pred is the predicted class score. The loss is 0 when the prediction is correct with a margin of at least 1 (that is, when y_true * y_pred >= 1). Otherwise, it increases linearly as the margin shrinks or as the prediction moves further onto the wrong side.
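
For example, with y_true = 1: a confidently correct score of y_pred = 1.2 gives max(0, 1 - 1.2) = 0; a correct but low-margin score of y_pred = 0.8 gives max(0, 1 - 0.8) = 0.2; and a wrong-side score of y_pred = -0.5 gives max(0, 1 + 0.5) = 1.5.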

Implementation
Now we will look at the implementation of the Hinge Loss function in Python. Below is the code that shows this implementation:

import numpy as np

def hinge_loss(y_true, y_pred):
    # Margin term 1 - y_true * y_pred, clipped at zero: samples classified
    # correctly with a margin of at least 1 contribute no loss
    loss = np.maximum(0, 1 - y_true * y_pred)
    # Average over all samples
    return np.mean(loss)

# Sample Input and Output
y_true = np.array([-1, 1, 1, -1, -1])
y_pred = np.array([-0.5, 0.8, 1.2, -2.2, -1.5])

hinge_loss_value = hinge_loss(y_true, y_pred)
print("Hinge Loss:", hinge_loss_value)

This code computes the hinge loss using NumPy's maximum function, which clips the margin term 1 - y_true * y_pred at zero, and then averages the per-sample losses to give the mean hinge loss. Given below is the sample input and output using this implementation:

Sample input:
y_true = [-1, 1, 1, -1, -1]
y_pred = [-0.5, 0.8, 1.2, -2.2, -1.5]
Sample output:
Hinge Loss: 0.14
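
Because hinge loss underlies maximum-margin classification, it can also drive the training of a simple linear classifier. Below is a minimal sketch of sub-gradient descent on the mean hinge loss, reusing the hinge_loss function defined above (the toy data, learning rate, and epoch count are made up for illustration):

import numpy as np

# Toy linearly separable data with labels -1 or 1
X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 1.0],
              [-1.0, -2.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, 1, -1, -1, -1])

w = np.zeros(2)  # weight vector
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(100):
    margins = y * (X @ w + b)
    # Only samples violating the margin (margin < 1) have a nonzero sub-gradient
    active = margins < 1
    grad_w = -(y[active, None] * X[active]).sum(axis=0) / len(y)
    grad_b = -y[active].sum() / len(y)
    w -= lr * grad_w
    b -= lr * grad_b

print("Hinge Loss after training:", hinge_loss(y, X @ w + b))

If training succeeds, the final hinge loss approaches 0, meaning every sample is classified correctly with a margin of at least 1.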

Conclusion

Both Huber and Hinge loss functions are great tools in Machine Learning: Hinge loss specializes in binary classification with Support Vector Machines, while Huber loss specializes in regression in the presence of outliers. One must choose the appropriate loss function for each task, given the conditions and parameters the model must handle. When properly chosen and implemented, a loss function can greatly improve a model's accuracy, performance, and precision.
