Accuracy is a basic metric used to evaluate classification models. It tells us how often the model is correct overall.
$$\text{Accuracy} = \frac{\text{Number of Correct Predictions}}{\text{Total Number of Predictions}}$$
Or, in confusion-matrix terms:
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
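As a quick sanity check, here is a minimal sketch of the formula using hypothetical labels. Counting the matches by hand gives the same result as scikit-learn's `accuracy_score`:

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical model predictions

# By hand: count correct predictions, divide by the total
correct = sum(t == p for t, p in zip(y_true, y_pred))
manual_accuracy = correct / len(y_true)

print(manual_accuracy)                 # 0.75
print(accuracy_score(y_true, y_pred))  # 0.75
```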
BUT: accuracy can be misleading, especially when the dataset is imbalanced (e.g., 95% of one class and only 5% of the other).
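To see why, consider a made-up dataset with that exact 95/5 split. A "model" that always predicts the majority class scores 95% accuracy while never detecting a single positive case:

```python
from sklearn.metrics import accuracy_score, recall_score

y_true = [0] * 95 + [1] * 5  # hypothetical labels: 95% negative, 5% positive
y_pred = [0] * 100           # always predict the majority class

print(accuracy_score(y_true, y_pred))  # 0.95 -- looks impressive
print(recall_score(y_true, y_pred))    # 0.0  -- misses every positive case
```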
A confusion matrix is a table that helps visualize the performance of a classification model, showing what it gets right and where it gets confused.
It breaks predictions down into four categories:
| | Predicted: Positive | Predicted: Negative |
|---|---|---|
| Actual: Positive | TP (True Positive) | FN (False Negative) |
| Actual: Negative | FP (False Positive) | TN (True Negative) |
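For reference, here is a small sketch of building the same table with scikit-learn's `confusion_matrix`, reusing the hypothetical labels from earlier. Passing `labels=[1, 0]` orders the rows and columns to match the layout above (rows = actual, columns = predicted):

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical model predictions

# With labels=[1, 0], the output is arranged as:
# [[TP, FN],
#  [FP, TN]]
cm = confusion_matrix(y_true, y_pred, labels=[1, 0])
print(cm)
# [[3 1]
#  [1 3]]
```

Note that the four cells sum to the total number of predictions, so accuracy falls out directly: (TP + TN) / (TP + TN + FP + FN) = (3 + 3) / 8 = 0.75, matching the earlier example.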