
Cost Function

Jul 18, 2024

machine-learning

A cost function (also known as a loss function) measures how wrong a machine learning model is.

It’s different from the error of a neural network: the error simply measures how far off a single guess was from the real answer, while the cost function scores how bad each miss is and averages the result over the entire data set (or batch).
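For instance (the numbers here are made up), the error is one value per sample, while the cost collapses those values into a single number for the whole batch:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0])   # expected outputs
y_pred = np.array([2.5, 0.0, 2.0])    # model outputs

errors = y_pred - y_true              # one error per guess
cost = np.mean(errors ** 2)           # one number for the whole batch (here: MSE)
```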

This is an important distinction because in some contexts you might want to punish certain results more heavily, e.g. getting a false positive might be worse than getting a false negative. In other cases a simple squared error will do just fine.
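As a sketch of punishing one kind of result more than another, here is a weighted variant of binary cross-entropy (the plain version appears in the examples below) where a false positive costs five times as much as a false negative; the weights and function name are illustrative choices, not anything standard:

```python
import numpy as np

def weighted_bce(y_true, y_pred, fp_weight=5.0, fn_weight=1.0):
    # Clip predictions away from 0 and 1 so the logarithms stay finite.
    eps = 1e-12
    y_pred = np.clip(y_pred, eps, 1 - eps)
    # y_true = 1 but y_pred low  -> missed positive (false negative)
    pos_term = fn_weight * y_true * np.log(y_pred)
    # y_true = 0 but y_pred high -> confident wrong positive (false positive)
    neg_term = fp_weight * (1 - y_true) * np.log(1 - y_pred)
    return -np.mean(pos_term + neg_term)
```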

The cost function is used for optimising the model: its gradient with respect to the weights and biases is computed, and the parameters are adjusted in the direction that reduces the cost.
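A minimal sketch of one such update step, for a linear model under mean squared error; the model, learning rate, and names are illustrative:

```python
import numpy as np

def gradient_step(X, y, w, b, lr=0.01):
    # One gradient-descent step: compute d(MSE)/dw and d(MSE)/db,
    # then move the weights and bias against the gradient.
    y_pred = X @ w + b
    err = y_pred - y
    grad_w = 2 * X.T @ err / len(y)
    grad_b = 2 * np.mean(err)
    return w - lr * grad_w, b - lr * grad_b
```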

Examples

  • Mean Squared Error - $\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y_i})^2$, where $y_i$ and $\hat{y_i}$ respectively mean the expected output and the output of the model, and $n$ is the total number of samples.
  • Binary Cross-Entropy - $-\frac{1}{n}\sum_{i=1}^{n} [y_i\log(\hat{y_i}) + (1-y_i)\log(1-\hat{y_i})]$
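Both formulas translate directly into NumPy; this is a minimal sketch where the function names and the clipping epsilon are my own choices:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average squared distance between
    # expected outputs and model outputs.
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, y_pred):
    # Binary Cross-Entropy: clip predictions away from 0 and 1
    # so the logarithms stay finite.
    eps = 1e-12
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```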
