How to use this calculator
- Enter optional user details for reporting.
- Select the closest option for each signal.
- Open Advanced settings to adjust weights and appetite.
- Press Calculate Risk to view the score.
- Use the download buttons to export your report.
Formula used
Each signal is mapped to a 0–100 risk value. Category scores are the average of their signals; the final score is the weighted average of the category scores, scaled by the risk-appetite multiplier, minus any compensating-control reductions.
Ratings are assigned from the final score: Low < 30, Medium 30–59, High 60–79, Critical ≥ 80.
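The rating thresholds above can be sketched as a small lookup. This is a minimal illustration of the listed bands, not the calculator's internal code:

```python
def rating(score: float) -> str:
    """Map a 0-100 final score to its tier: Low < 30, Medium 30-59,
    High 60-79, Critical >= 80."""
    if score >= 80:
        return "Critical"
    if score >= 60:
        return "High"
    if score >= 30:
        return "Medium"
    return "Low"

# Two rows from the example table land in the expected tiers.
print(rating(29.4), rating(61.2))  # Low High
```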
Example data table
| User | Department | Role | Score | Rating | Top driver |
|---|---|---|---|---|---|
| Ayesha K. | Finance | Accounts Lead | 29.4 | Low | Data sensitivity (90) |
| Bilal S. | IT | Domain Admin | 61.2 | High | Role criticality (90) |
| Carla M. | Marketing | Content Editor | 0 | Low | Remote work (35) |
| Daniyal R. | Sales | Regional Rep | 28 | Low | Data sensitivity (60) |
| Ema V. | HR | Recruiter | 10.7 | Low | Data sensitivity (60) |
These rows are generated using the default weights and options shown above.
Turning identity context into measurable risk
A user risk rating turns identity context into a comparable number. The calculator scores each user on a 0–100 scale, combining role criticality, privilege, and exposure into a baseline. Higher scores signal greater likelihood or impact from misuse, compromise, or policy violations. Use results to prioritize reviews, reduce standing privileges, and focus monitoring across the organization. Trend scores monthly to validate joiner, mover, and leaver processes.
Normalizing signals for fair scoring
Inputs are normalized into values between 0 and 100, so different signals combine fairly. Strong authentication maps near 10, while missing multi-factor maps near 80. Access, Behavior, Device, and Human categories average their signals to create category scores. Averaging highlights persistent conditions and reduces volatility from one-off events. This structure supports comparisons across teams, time periods, and tools. Use consistent definitions to keep scores stable and explainable.
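A sketch of this normalization, assuming a simple option-to-value lookup per signal. The option names and the middle value are hypothetical; only the two anchors stated above (strong authentication near 10, missing multi-factor near 80) come from the text:

```python
from statistics import mean

# Hypothetical option-to-value map for one signal (MFA posture).
MFA_VALUES = {"enforced": 10, "optional": 45, "absent": 80}

def category_score(signal_values):
    """A category score is the plain average of its normalized 0-100 signals."""
    return mean(signal_values)

# Access category example: MFA absent (80), data sensitivity (60),
# entitlement breadth (40) -- the latter two values are illustrative.
print(category_score([MFA_VALUES["absent"], 60, 40]))  # 60
```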
Weights, appetite, and control offsets
Category weights reflect policy priorities. Defaults emphasize Access and Behavior (weight 4 each), then Device (3) and Human (2). A risk-appetite multiplier between 0.8 and 1.2 relaxes or tightens scoring. Selected compensating controls subtract fixed reductions: enforced authentication (6 points), encryption (3), healthy endpoint protection (4), current training (3), and compliant devices (3). Reductions can be scaled from 0% to 150% with the control slider.
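Putting the pieces together, a minimal sketch of the scoring pipeline, assuming a weighted average of category scores, an appetite multiplier, and scaled control reductions applied last. The exact order of operations inside the calculator is an assumption:

```python
def final_score(categories, weights, appetite=1.0, reductions=(), control_scale=1.0):
    """Weighted average of category scores, scaled by risk appetite,
    minus scaled compensating-control reductions, clamped to 0-100.
    Illustrative sketch; the calculator's internals may differ."""
    weighted = sum(categories[k] * weights[k] for k in weights) / sum(weights.values())
    score = weighted * appetite - control_scale * sum(reductions)
    return max(0.0, min(100.0, score))

cats = {"Access": 70, "Behavior": 50, "Device": 40, "Human": 30}  # example values
wts = {"Access": 4, "Behavior": 4, "Device": 3, "Human": 2}       # default weights
# Enforced authentication (6) + encryption (3) + compliant device (3), slider at 100%.
print(round(final_score(cats, wts, appetite=1.0, reductions=(6, 3, 3)), 1))  # 38.8
```

Because reductions are subtracted after weighting, heavy controls can drive a low-risk user's score to the floor of 0, as in the Carla M. row above.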
Score tiers and response actions
Scores map to tiers: Low under 30, Medium 30–59, High 60–79, and Critical 80 or above. For High and Critical users, require strong authentication, validate device compliance, and re-check third-party access within seven days. Medium users should have quarterly access recertification and anomaly monitoring. Low users still need baseline hygiene and reassessment after role, location, or device changes. Escalate repeated anomalies with evidence to analysts.
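The tier-to-action routing above can be expressed as a lookup. The playbook entries mirror the actions listed in this section; adapt them to your own runbooks:

```python
# Action playbook keyed by tier, taken from the guidance above.
ACTIONS = {
    "Critical": ["require strong authentication", "validate device compliance",
                 "re-check third-party access within 7 days"],
    "High":     ["require strong authentication", "validate device compliance",
                 "re-check third-party access within 7 days"],
    "Medium":   ["quarterly access recertification", "anomaly monitoring"],
    "Low":      ["baseline hygiene", "reassess after role, location, or device changes"],
}

def actions_for(score: float):
    """Route a final score to its tier and the matching response actions."""
    tier = ("Critical" if score >= 80 else
            "High" if score >= 60 else
            "Medium" if score >= 30 else "Low")
    return tier, ACTIONS[tier]

tier, steps = actions_for(61.2)
print(tier, "->", steps[0])  # High -> require strong authentication
```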
Audit readiness and continuous review
Exports support audit trails and repeatable governance. Capture date, assessor, and key drivers, then store results with identity reviews and incident tickets. Track average scores by department and watch trend shifts after control rollouts. Load the CSV into dashboards to correlate score changes with incidents, access grants, and exceptions. Recalibrate mappings when threats or integrations change, and refresh after policy updates. Consistent definitions reduce debate during escalations.
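A sketch of an audit-ready CSV export capturing the fields named above (date, assessor, key drivers). The column names and the assessor `jdoe` are placeholders, not the calculator's actual export schema:

```python
import csv
import io
from datetime import date

# Two rows from the example table; "jdoe" is a placeholder assessor.
rows = [
    {"user": "Ayesha K.", "department": "Finance", "score": 29.4,
     "rating": "Low", "top_driver": "Data sensitivity (90)"},
    {"user": "Bilal S.", "department": "IT", "score": 61.2,
     "rating": "High", "top_driver": "Role criticality (90)"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=[
    "assessed_on", "assessor", "user", "department", "score", "rating", "top_driver"])
writer.writeheader()
for row in rows:
    writer.writerow({"assessed_on": date.today().isoformat(), "assessor": "jdoe", **row})

print(buf.getvalue())
```

A file like this loads directly into dashboards for the trend and correlation analysis described above.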
FAQs
What does the score range mean?
Scores run from 0 to 100. Higher scores indicate higher expected impact or likelihood from compromise or misuse. Use the tier labels to route reviews, approvals, and monitoring intensity.
How often should we recalculate ratings?
Recalculate after role, privilege, or device changes, and on a regular cadence such as monthly. High-risk groups may benefit from weekly refreshes during active incidents or major control rollouts.
Can we adjust weights for different teams?
Yes. Increase Access weight for privileged engineering groups, or increase Human weight for teams with higher phishing exposure. Keep a documented baseline so scores remain comparable when you report trends.
How are compensating controls applied?
When stronger controls are selected, the calculator subtracts fixed reduction points, then scales them using the control slider. This keeps improvements visible while preventing controls from masking inherently risky access patterns.
Does one bad event dominate the result?
Category scores use averages across signals, which dampens one-off spikes. However, selecting frequent anomalies or high privilege still raises the overall score materially, signaling the need for investigation.
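A quick check, assuming equal-weight averaging within a category, shows how one spiked signal moves the category score noticeably without dominating it:

```python
from statistics import mean

baseline = [20, 25, 30, 15]   # four quiet signals (illustrative values)
spiked   = [20, 25, 90, 15]   # one anomalous signal jumps to 90
# The average rises by 15 points, not by the full size of the spike.
print(mean(baseline), mean(spiked))  # 22.5 37.5
```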
What should we store for audits?
Store the inputs, final score, rating, assessor, date, and the top driver. Keep exports with access reviews or ticket records so decisions are traceable and consistent across quarters.