F1 Score

F1 Score is a popular metric used to evaluate the balance between precision and recall in classification models. Learn how to calculate and interpret F1 Score.

The F1 Score is a metric used to evaluate the performance of a classification model. It is the harmonic mean of precision and recall, giving a single score that represents both metrics. The F1 Score is particularly useful when you have imbalanced classes or when you want to balance precision and recall.

Formula

The F1 Score is calculated using the following formula:

F1 Score = 2 * (precision * recall) / (precision + recall)
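
As a quick illustration, here is a minimal Python sketch of this formula. The function name and the guard against a zero denominator are my own choices rather than part of any particular library.

    def f1_score(precision: float, recall: float) -> float:
        """Harmonic mean of precision and recall."""
        if precision + recall == 0:
            # If both precision and recall are zero, the F1 Score is conventionally 0.
            return 0.0
        return 2 * (precision * recall) / (precision + recall)

    print(f1_score(0.75, 0.6))  # 0.666..., pulled toward the smaller of the two values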

Precision and Recall

Precision and recall are two important metrics in classification evaluation:

  • Precision: Precision is the ratio of correctly predicted positive observations to the total number of predicted positive observations. It shows how many of the predicted positive instances are actually positive.
  • Recall: Recall is the ratio of correctly predicted positive observations to all observations in the actual positive class. It shows how many of the actual positive instances were predicted correctly. Both quantities are sketched in code just after this list.
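
To make these two definitions concrete, here is a small Python sketch that computes them from the counts of true positives (TP), false positives (FP), and false negatives (FN). The function names are illustrative, not taken from any specific library.

    def precision(tp: int, fp: int) -> float:
        """Fraction of predicted positives that are actually positive."""
        return tp / (tp + fp) if (tp + fp) > 0 else 0.0

    def recall(tp: int, fn: int) -> float:
        """Fraction of actual positives that were predicted correctly."""
        return tp / (tp + fn) if (tp + fn) > 0 else 0.0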

Example

Let's say we have a binary classification problem with the following confusion matrix:

                    Predicted Negative    Predicted Positive
  Actual Negative   800 (TN)              100 (FP)
  Actual Positive   50 (FN)               50 (TP)

Using the confusion matrix, we can calculate the precision, recall, and F1 Score:

  • Precision: TP / (TP + FP) = 50 / (50 + 100) ≈ 0.3333
  • Recall: TP / (TP + FN) = 50 / (50 + 50) = 0.5
  • F1 Score: 2 * (precision * recall) / (precision + recall) = 2 * (0.3333 * 0.5) / (0.3333 + 0.5) ≈ 0.4

Therefore, the F1 Score for this classification model is 0.4.
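
To reproduce these numbers from raw labels rather than from the confusion matrix, you can use scikit-learn's precision_score, recall_score, and f1_score, assuming scikit-learn is installed. The label arrays below are constructed purely to match the confusion matrix above.

    from sklearn.metrics import precision_score, recall_score, f1_score

    # Labels arranged to reproduce the confusion matrix above:
    # 800 true negatives, 100 false positives, 50 false negatives, 50 true positives.
    y_true = [0] * 800 + [0] * 100 + [1] * 50 + [1] * 50
    y_pred = [0] * 800 + [1] * 100 + [0] * 50 + [1] * 50

    print(precision_score(y_true, y_pred))  # ~0.3333
    print(recall_score(y_true, y_pred))     # 0.5
    print(f1_score(y_true, y_pred))         # ~0.4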

Conclusion

The F1 Score is a useful metric for evaluating the performance of classification models, especially in scenarios where precision and recall are both important. By considering both precision and recall, the F1 Score provides a balanced assessment of a model's effectiveness.
