Brain Teaser: The Trust Calibration Challenge
An AI system's confidence correlates with accuracy as follows:
90-100% confidence: 95% accurate
80-89% confidence: 85% accurate
70-79% confidence: 75% accurate
Below 70% confidence: 60% accurate
If the system processes 1000 decisions with the confidence distribution: 200 high (90-100%), 300 medium-high (80-89%), 300 medium (70-79%), and 200 low (below 70%), how many decisions should you expect to be correct?
Answer:
Total decisions: 1000
Calculate the number of correct decisions for each confidence level:
High confidence:
Number of decisions: 200
Accuracy: 95%
Expected correct = 200 * 0.95 = 190
Medium-high confidence:
Number of decisions: 300
Accuracy: 85%
Expected correct = 300 * 0.85 = 255
Medium confidence:
Number of decisions: 300
Accuracy: 75%
Expected correct = 300 * 0.75 = 225
Low confidence:
Number of decisions: 200
Accuracy: 60%
Expected correct = 200 * 0.60 = 120
Now sum the expected correct decisions across all categories:
Total correct = 190 + 255 + 225 + 120
Let's add them:
190 + 255 = 445
445 + 225 = 670
670 + 120 = 790
Therefore, you should expect 790 decisions to be correct.
Final Answer:
790
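The bucket-by-bucket expected-value calculation above can be sketched in a few lines of Python. The bucket labels and the `buckets` dictionary are illustrative names, not part of the original teaser:

```python
# Each bucket maps a confidence band to (number of decisions, accuracy),
# using the figures given in the teaser.
buckets = {
    "high (90-100%)": (200, 0.95),
    "medium-high (80-89%)": (300, 0.85),
    "medium (70-79%)": (300, 0.75),
    "low (below 70%)": (200, 0.60),
}

# Expected correct decisions = sum of count * accuracy over all buckets.
expected_correct = sum(count * accuracy for count, accuracy in buckets.values())
print(expected_correct)  # 790.0
```

This is just the linearity of expectation: each bucket contributes its count times its accuracy, independently of the others.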
