Calculator Inputs
Example Data Table
This example shows sample scores, probabilities, predicted classes, and outcomes for a binary classification workflow.
| Case | Raw Logit | Calibrated Probability | Threshold | Predicted Class | Actual Label |
|---|---|---|---|---|---|
| A101 | 1.35 | 0.7941 | 0.50 | 1 | 1 |
| A102 | -0.42 | 0.3965 | 0.50 | 0 | 0 |
| A103 | 0.18 | 0.5449 | 0.60 | 0 | 1 |
| A104 | 2.05 | 0.8859 | 0.70 | 1 | 1 |
| A105 | -1.10 | 0.2497 | 0.30 | 0 | 0 |
Formulas Used
Probability = 1 / (1 + e^(-z))
z = intercept + (x1 × w1) + (x2 × w2) + (x3 × w3)
Calibrated logit = (slope × raw logit) + intercept
Odds = p / (1 - p)
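The formulas above can be checked in a few lines of Python; the function names here are illustrative and not part of the calculator itself.

```python
import math

def sigmoid(z):
    """Logistic transform: probability = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def calibrate(raw_logit, slope=1.0, intercept=0.0):
    """Calibrated logit = (slope * raw logit) + intercept."""
    return slope * raw_logit + intercept

def odds(p):
    """Odds = p / (1 - p)."""
    return p / (1.0 - p)

# Reproduce case A101 from the table: raw logit 1.35 -> probability ~0.7941.
p = sigmoid(calibrate(1.35))
print(round(p, 4))  # 0.7941
```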
The calculator accepts a raw logit, a direct probability, odds, or a weighted feature score. It then calibrates the score, applies the logistic transform, compares the result with your threshold, and estimates related decision metrics.
How to Use This Calculator
- Select the score mode that matches your model output.
- Enter a logit, a probability, odds, or the feature-based score inputs, depending on the selected mode.
- Add calibration slope and intercept if you have validation adjustments.
- Choose a decision threshold for positive-class assignment.
- Set prevalence, costs, and sample size for business-oriented estimates.
- Submit the form; the results appear above it.
- Download the result summary as CSV or PDF if needed.
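The scoring steps above can be sketched as a single function; the defaults and the name `score_case` are assumptions for illustration, not the calculator's actual internals.

```python
import math

def score_case(raw_logit, threshold=0.50, slope=1.0, intercept=0.0):
    """Calibrate the logit, apply the logistic transform,
    then compare the probability with the decision threshold."""
    calibrated = slope * raw_logit + intercept
    probability = 1.0 / (1.0 + math.exp(-calibrated))
    predicted_class = 1 if probability >= threshold else 0
    return probability, predicted_class

# Case A102 from the example table: logit -0.42, threshold 0.50 -> class 0.
prob, label = score_case(-0.42)
print(round(prob, 4), label)  # 0.3965 0
```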
Frequently Asked Questions
1. What does this calculator estimate?
It estimates the positive-class probability for a binary classifier. It also reports threshold-based class assignment, calibration output, confidence, risk band, and related decision metrics.
2. When should I use logit input?
Use logit mode when your model already returns a linear score before logistic transformation. This is common in logistic regression and many calibrated binary classification pipelines.
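As a sketch of the relationship, the logit is simply the log-odds of the probability, so it can be recovered with a one-line inverse (the function name is illustrative):

```python
import math

def logit(p):
    """Inverse of the logistic transform: the log-odds of probability p."""
    return math.log(p / (1.0 - p))

# A probability of 0.7941 corresponds to a raw logit of about 1.35,
# matching case A101 in the example table.
print(round(logit(0.7941), 2))  # 1.35
```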
3. Why would I enter calibration values?
Calibration adjusts raw model scores to better match observed outcomes. Use it when validation data shows your original probabilities are systematically too high or too low.
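As an illustration with made-up slope and intercept values, a calibration slope below 1 pulls an overconfident logit back toward 0.5 before the logistic transform:

```python
import math

def calibrated_probability(raw_logit, slope, intercept):
    """Apply slope/intercept calibration, then the logistic transform."""
    return 1.0 / (1.0 + math.exp(-(slope * raw_logit + intercept)))

# A raw logit of 2.05 gives ~0.886 uncalibrated; a validation-fitted
# slope of 0.8 with intercept -0.1 (hypothetical values) pulls it down.
uncalibrated = calibrated_probability(2.05, 1.0, 0.0)
adjusted = calibrated_probability(2.05, 0.8, -0.1)
print(round(uncalibrated, 4), round(adjusted, 4))
```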
4. What is the threshold used for?
The threshold converts a probability into a class label. Higher thresholds reduce positive predictions, while lower thresholds catch more positives at the cost of more false alarms.
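The trade-off can be seen by sweeping thresholds over the example table's cases (recomputing probabilities from the raw logits):

```python
import math

# Raw logits from the example table, cases A101-A105.
logits = [1.35, -0.42, 0.18, 2.05, -1.10]
probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]

def count_positives(probs, threshold):
    """Class 1 is assigned when the probability meets the threshold."""
    return sum(1 for p in probs if p >= threshold)

for t in (0.30, 0.50, 0.70):
    print(t, count_positives(probs, t))  # 4, then 3, then 2 positives
```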
5. Are the portfolio counts exact?
No. They are expected counts based on prevalence, probability, and sample size assumptions. They help planning, but they do not replace evaluation on real labeled test data.
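A minimal sketch of how such expected counts are formed; the prevalence, flagged volume, and mean probability below are assumed values, not calculator defaults.

```python
# Illustrative planning estimate with assumed inputs: 5,000 scored cases,
# 12% base prevalence, 400 cases flagged positive, and a mean predicted
# probability of 0.65 among the flagged cases.
sample_size = 5000
prevalence = 0.12
flagged = 400
mean_flagged_prob = 0.65

expected_events_overall = prevalence * sample_size
expected_true_positives = mean_flagged_prob * flagged
expected_false_positives = (1 - mean_flagged_prob) * flagged
print(expected_events_overall, expected_true_positives, expected_false_positives)
```

These remain expectations under the stated assumptions; real labeled test data is still required for evaluation.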
6. What does lift versus prevalence mean?
Lift compares the estimated probability against the base event rate. A lift above 1 means the case is more likely to be positive than an average observation.
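For example, with an assumed base rate of 30%:

```python
def lift(probability, prevalence):
    """Lift = estimated probability / base event rate."""
    return probability / prevalence

# Case A101's probability of 0.7941 against an assumed 30% base rate.
print(round(lift(0.7941, 0.30), 2))  # 2.65
```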
7. Can I use weighted features instead of a model score?
Yes. The feature mode lets you combine an intercept with three weighted inputs. It is useful for quick scoring prototypes and transparent rule-based estimators.
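The feature mode follows the z formula above; the intercept, feature values, and weights below are made up for illustration.

```python
import math

def feature_score(intercept, features, weights):
    """z = intercept + (x1 * w1) + (x2 * w2) + (x3 * w3)."""
    return intercept + sum(x * w for x, w in zip(features, weights))

# Hypothetical quick-scoring prototype with three weighted inputs.
z = feature_score(-0.5, features=[1.2, 0.4, 2.0], weights=[0.8, -0.3, 0.25])
p = 1.0 / (1.0 + math.exp(-z))
print(round(z, 2), round(p, 4))  # 0.84 0.6985
```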