Python’s classification report explained

Crystal X
3 min read · Sep 15, 2022

The confusion matrix and classification report are sklearn’s two core tools for assessing how well a classification model fits the data.

To explain the metrics in the classification report, I have used the confusion matrix and classification report generated from predictions I made on the Titanic dataset. The confusion matrix needs to be read alongside the classification report to identify the true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN).
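As a minimal, self-contained sketch of how the two reports are produced in sklearn: the snippet below trains a simple classifier and prints both outputs. It uses a synthetic binary dataset from make_classification as a stand-in for the Titanic data (my own assumption, not the author's actual pipeline), so its numbers will differ from those discussed below.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

# Stand-in binary data: imagine 0 = perished, 1 = survived.
X, y = make_classification(n_samples=891, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Rows are actual classes, columns are predicted classes.
print(confusion_matrix(y_test, y_pred))

# Precision, recall and F1 score for each class, plus support.
print(classification_report(y_test, y_pred))
```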

The classification report uses three common metrics to reflect how well the model was able to make predictions:

  1. Precision,
  2. Recall, and
  3. F1 score.

Precision is the share of correct positive predictions among all positive predictions for a class, i.e. TP / (TP + FP). When I made predictions on the Titanic dataset, the confusion matrix revealed a total of 64 correct predictions for people who perished (0) and 25 correct predictions for people who survived (1); the precision scores were 84% for the 0’s and 80% for the 1’s.
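To make the arithmetic concrete, precision can be read straight off the confusion matrix: each class’s diagonal entry (its true positives) divided by the sum of its predicted column. The matrix below is hypothetical, chosen only so the results land near the percentages above; it is not the actual matrix from the Titanic run.

```python
import numpy as np

# Hypothetical confusion matrix (rows = actual, columns = predicted).
cm = np.array([[64, 6],    # actual 0: 64 predicted 0 (TP for class 0)
               [12, 25]])  # actual 1: 25 predicted 1 (TP for class 1)

# Precision per class: TP / (TP + FP) = diagonal / column sum.
precision = cm.diagonal() / cm.sum(axis=0)
print(precision)  # [0.842 0.806] -> roughly the 84% and 80% quoted above
```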
