What does each metric mean?
The ROC curve plots True Positive Rate (Recall) vs False Positive Rate across all possible thresholds. AUC summarises this into one number: 1.0 = perfect, 0.5 = random coin flip. The red dot shows where your current threshold sits on the curve.
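The sweep-and-integrate idea can be sketched in a few lines of plain Python. The labels and scores below are made-up toy data; the mechanics (count TP/FP at every threshold, then take the trapezoidal area under the resulting curve) are the point.

```python
# Toy labels/scores (hypothetical data, just to illustrate the mechanics).
labels = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.3, 0.7, 0.5]

def roc_points(labels, scores):
    """Sweep every observed score as a threshold; collect (FPR, TPR) pairs."""
    P = sum(labels)            # number of actual positives
    N = len(labels) - P        # number of actual negatives
    points = []
    # The extra +inf threshold pins the curve at (0, 0); the lowest
    # score pins it at (1, 1), so the curve spans the full range.
    for t in sorted(set(scores)) + [float("inf")]:
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        points.append((fp / N, tp / P))
    return sorted(points)

def auc(points):
    """Trapezoidal area under the (FPR, TPR) curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

pts = roc_points(labels, scores)
print(f"AUC = {auc(pts):.2f}")  # → AUC = 0.92
```

With distinct scores, each threshold step moves the curve either horizontally or vertically, so the trapezoid rule recovers the exact AUC (equivalently, the fraction of positive/negative pairs the scores rank correctly).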
Different thresholds optimise different metrics. This chart sweeps all thresholds and shows how each metric changes. Use this to find the best threshold for your specific problem.
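A minimal sketch of that threshold sweep, again on hypothetical toy data: compute precision, recall, and F1 at each candidate threshold, then keep whichever threshold maximises the metric you care about (F1 here, but the same loop works for any of them).

```python
def metrics_at(labels, scores, t):
    """Precision, recall, and F1 at a single decision threshold t."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < t)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

labels = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.3, 0.7, 0.5]

# Sweep every observed score as a candidate threshold; keep the best F1.
best_t, best_f1 = max(
    ((t, metrics_at(labels, scores, t)[2]) for t in sorted(set(scores))),
    key=lambda pair: pair[1],
)
print(f"best threshold = {best_t}, F1 = {best_f1:.3f}")
```

Only observed scores need to be tried: moving the threshold between two adjacent scores changes no predictions, so no metric changes there either.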
K-Fold Cross-Validation splits data into K parts, tests on each fold in turn, and averages the results. This gives a more reliable performance estimate. Stratified K-Fold ensures each fold has the same class distribution as the full dataset.
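A rough sketch of how stratified fold assignment can work (this is one simple scheme, not the only one): group indices by class, then deal each class round-robin across the K folds so every fold inherits the dataset's class proportions.

```python
from collections import defaultdict

def stratified_kfold_indices(labels, k):
    """Yield (train_idx, test_idx) pairs; each test fold keeps roughly
    the full dataset's class proportions."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    # Deal each class's indices round-robin across the k folds.
    for idxs in by_class.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test

labels = [0] * 6 + [1] * 4   # toy dataset with a 60/40 class split
for train, test in stratified_kfold_indices(labels, 2):
    pos_share = sum(labels[i] for i in test) / len(test)
    print(f"test fold size={len(test)}, positive share={pos_share:.0%}")
```

Each of the two test folds here gets 3 negatives and 2 positives, matching the 60/40 split of the whole dataset; a plain (unstratified) K-fold split could easily produce a fold with no positives at all.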
Practice Problem (try it yourself)
Given this confusion matrix, calculate all metrics:
                Predicted
                Neg   Pos
Actual Neg   [   90    10 ]
Actual Pos   [    5    95 ]
Answer
- TP = 95, TN = 90, FP = 10, FN = 5
- Accuracy = (95+90)/200 = 92.5%
- Precision = 95/(95+10) = 90.5%
- Recall = 95/(95+5) = 95.0%
- F1 = 2(0.905 × 0.950)/(0.905+0.950) = 92.7%
- Specificity = 90/(90+10) = 90.0%
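The worked answer above can be checked with a few lines of Python, starting from the four confusion-matrix counts:

```python
# Counts read off the confusion matrix above.
tp, tn, fp, fn = 95, 90, 10, 5

accuracy = (tp + tn) / (tp + tn + fp + fn)        # correct / total
precision = tp / (tp + fp)                        # of predicted positives, how many were right
recall = tp / (tp + fn)                           # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)
specificity = tn / (tn + fp)                      # of actual negatives, how many were found

for name, v in [("Accuracy", accuracy), ("Precision", precision),
                ("Recall", recall), ("F1", f1), ("Specificity", specificity)]:
    print(f"{name:11s} = {v:.1%}")
```

Printing with `:.1%` reproduces the rounded figures above: 92.5%, 90.5%, 95.0%, 92.7%, and 90.0%.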