Calculator Inputs
Example Data Table
| Amount | Velocity (1h/24h) | Distance (km) | Checks | Context | Estimated Probability | Band |
|---|---|---|---|---|---|---|
| 45.00 | 1 / 3 | 12 | AVS/CVV: Both, 2FA: Yes | Match, Country: Low, VPN: No | 6.20% | Low |
| 320.00 | 3 / 12 | 210 | AVS/CVV: One, 2FA: No | Partial, Country: Medium, VPN: Yes | 34.80% | High |
| 980.00 | 7 / 28 | 1850 | AVS/CVV: None, 2FA: No | Mismatch, Country: High, VPN: Yes | 78.10% | Critical |
Numbers are illustrative to show how inputs influence probability and band.
Formula Used
The calculator uses a weighted logistic model to convert signals into a probability: p = 1 / (1 + e^(−z)). The score z is a weighted sum of normalized features: z = b + Σ(wᵢ · xᵢ).
- Normalization: numeric inputs are scaled into comparable ranges (for example, amount uses log scaling).
- Weights: each signal weight (w) increases or decreases risk based on its sign.
- Interpretation: positive contributions raise risk; negative contributions reduce it.
- Calibration: update weights using your approved/chargeback outcomes to reduce bias and drift.
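The formula above can be sketched in a few lines of Python. The bias, weights, and feature names here are placeholders for illustration; the calculator's actual coefficients are not published, so substitute your own calibrated values.

```python
import math

# Hypothetical bias and weights; replace with values calibrated on your
# own approved/chargeback outcomes.
BIAS = -3.0
WEIGHTS = {
    "log_amount": 0.6,     # log-scaled transaction amount
    "velocity_24h": 0.15,  # transactions in the last 24 hours
    "distance_km": 0.001,  # billing-to-shipping distance
    "avs_cvv_fail": 1.2,   # 1.0 = both checks failed, 0.5 = one, 0.0 = none
    "no_2fa": 0.8,         # 1.0 when 2FA was not completed
}

def fraud_probability(features: dict) -> float:
    """Weighted logistic model: z = b + sum(w_i * x_i), p = 1 / (1 + e^(-z))."""
    z = BIAS + sum(WEIGHTS[name] * x for name, x in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# First example row from the table above (45.00, 3 tx/24h, 12 km, all checks pass).
example = {
    "log_amount": math.log1p(45.00),  # log scaling keeps large amounts comparable
    "velocity_24h": 3,
    "distance_km": 12,
    "avs_cvv_fail": 0.0,
    "no_2fa": 0.0,
}
p = fraud_probability(example)
```

With these placeholder weights the output will not match the table's 6.20%; the point is the mechanics, and that raising any positively weighted input raises the probability monotonically.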
How to Use This Calculator
- Enter transaction amount and recent activity counts.
- Provide security and verification outcomes (AVS/CVV, 2FA).
- Select context signals like country risk and category risk.
- Submit to view probability, band, and top drivers above.
- Export the session log to CSV, or print to PDF.
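The CSV export step can be sketched as a plain dump of the session log. The field names below are illustrative, not the calculator's actual export schema.

```python
import csv
import io

# Hypothetical session-log rows mirroring the example table; only entered
# fields and derived results, no personal identifiers.
rows = [
    {"amount": 45.00, "velocity_24h": 3, "distance_km": 12,
     "probability": 0.062, "band": "Low"},
    {"amount": 320.00, "velocity_24h": 12, "distance_km": 210,
     "probability": 0.348, "band": "High"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
writer.writeheader()   # one header row, then one row per scored transaction
writer.writerows(rows)
csv_text = buf.getvalue()
```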
Why probabilistic scoring improves fraud operations
Static rules often miss new attacker patterns and overload review teams. A probability score turns weak signals into one comparable measure. Because the output is continuous, you can set different decision thresholds for digital goods, high-value baskets, or first-time buyers. Tracking daily average probability alongside chargebacks helps confirm whether changes in marketing, routing, or authentication increase exposure. Over time, calibrated probabilities support better staffing forecasts and clearer service-level commitments.
Key signals that typically carry predictive value
Transaction amount, velocity, and distance shifts are strong indicators because criminals test cards quickly and ship away from the legitimate address. Verification outcomes, such as AVS and CVV matches, reduce uncertainty when they are reliable for your market. Device reputation summarizes cookie stability, prior successful payments, and anomaly rates. Country and category risk capture macro effects like dispute prevalence and resale demand. VPN detection is useful when combined with other deviations, not as a standalone blocker.
Interpreting the logit score and top drivers
The model computes a logit value z and converts it using the logistic function. Positive contributions raise risk; negative contributions lower it. Reviewing the top drivers helps analysts decide whether risk is explainable or suspicious. For example, a high probability caused by extreme distance and address mismatch is different from one caused by high category risk alone. Keeping contribution tables in case notes supports consistent manual decisions and faster audit responses.
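A contribution table of this kind can be sketched as the per-feature products wᵢ · xᵢ, sorted by absolute impact on the logit z. The weights below are placeholders, and the feature values correspond to the third example row in the table above.

```python
import math

# Placeholder weights; substitute your calibrated values.
WEIGHTS = {"log_amount": 0.6, "velocity_24h": 0.15,
           "distance_km": 0.001, "no_2fa": 0.8}

def top_drivers(features: dict, weights: dict, n: int = 3):
    """Return the n features with the largest absolute contribution w_i * x_i to z."""
    contributions = {name: weights[name] * x for name, x in features.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:n]

# Third example row from the table (980.00, 28 tx/24h, 1850 km, no 2FA).
features = {
    "log_amount": math.log1p(980.00),
    "velocity_24h": 28,
    "distance_km": 1850,
    "no_2fa": 1.0,
}
drivers = top_drivers(features, WEIGHTS)
```

Attaching the resulting list to a case note gives an analyst the "explainable vs. suspicious" view described above without exposing the full model.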
Threshold design, step-up actions, and loss control
A practical policy uses three bands: approve, step up, and hold. Step-up actions include one-time passwords, 3DS challenges, or identity checks for shipping changes. Holds trigger manual review with evidence, such as device history and prior successful verifications. Measure policy quality with approval rate, review rate, false positives, and fraud capture. Small threshold shifts can change margin, so run controlled tests and compare against a stable baseline period.
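A banding policy like this reduces to a simple threshold ladder. The cut-offs below use the starter thresholds from the FAQ; they are a starting point to tune against your measured false positives and losses, not recommended production values.

```python
def decide(p: float) -> str:
    """Map a fraud probability to a policy action using illustrative thresholds."""
    if p < 0.15:
        return "approve"
    if p < 0.35:
        return "step_up"   # OTP, 3DS challenge, or identity check
    if p < 0.60:
        return "hold"      # manual review with evidence
    return "escalate"
```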
Calibration and governance for reliable outcomes
Weights should reflect your own outcomes, because fraud mix varies by region, channel, and season. Start with conservative settings, then recalibrate monthly using confirmed chargebacks and representment results. Monitor drift by checking whether observed fraud rates match predicted bands. Document feature definitions, caps, and data sources so that silent changes in upstream feeds are caught quickly. Finally, treat the score as decision support: maintain human override rules for edge cases, VIP customers, and regulatory constraints.
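The drift check described above can be sketched by bucketing past decisions by predicted probability and comparing each bucket's mean prediction to its observed chargeback rate. The bucket edges below reuse the illustrative FAQ thresholds.

```python
def calibration_table(records, edges=(0.0, 0.15, 0.35, 0.60, 1.01)):
    """records: iterable of (predicted_probability, was_fraud: bool).

    Returns rows of (lower_edge, upper_edge, mean_predicted, observed_rate, count)
    for each non-empty probability bucket. A well-calibrated model keeps
    mean_predicted close to observed_rate in every bucket.
    """
    buckets = {i: [] for i in range(len(edges) - 1)}
    for p, fraud in records:
        for i in range(len(edges) - 1):
            if edges[i] <= p < edges[i + 1]:
                buckets[i].append((p, fraud))
                break
    table = []
    for i, items in buckets.items():
        if not items:
            continue
        mean_pred = sum(p for p, _ in items) / len(items)
        observed = sum(1 for _, f in items if f) / len(items)
        table.append((edges[i], edges[i + 1], mean_pred, observed, len(items)))
    return table

# Hypothetical outcome history: (predicted probability, confirmed fraud?).
history = [(0.05, False), (0.10, False), (0.50, True), (0.55, False)]
table = calibration_table(history)
```

Large, persistent gaps between predicted and observed rates in a bucket signal that the weights need recalibration.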
FAQs
How accurate is the probability output?
It is a directional estimate based on assumed weights and normalized inputs. Accuracy improves when you recalibrate weights using your confirmed fraud and chargeback outcomes.
What probability thresholds should I use?
Start with conservative bands, such as under 15% approve, 15–35% step-up, 35–60% hold, and above 60% escalate. Then tune using measured false positives and losses.
Why do I see a high score with a low amount?
Fraud often shows through velocity, location distance, mismatches, or weak verification. The model can surface multi-signal risk even when the basket value is small.
Can I change the model weights?
Yes. Edit the weight values in the code to reflect your risk appetite, data quality, and regional behavior. Keep a change log and validate against recent outcomes.
What does the contribution table mean?
It lists the strongest weighted impacts on the logit score. Positive contributions increase risk, while negative contributions reduce it. Use it to justify manual decisions.
Does the CSV or PDF include personal data?
Exports contain only the fields entered in the form and the derived results. Avoid entering sensitive identifiers if you plan to share files externally.