Example data table
| Scenario | Length (chars) | Charset size | Attempt rate | Mode | Estimated time |
|---|---|---|---|---|---|
| Policy baseline | 10 | 62 | 1 GH/s | Expected | ≈ 13 years (order of magnitude) |
| Digits only | 8 | 10 | 100 MH/s | Worst case | ≈ 1 second |
| Stronger policy | 14 | 95 | 10 MH/s | Expected | Very large; prioritize slow hashes |
Formula used
- Keyspace N = (character_set_size)^(password_length)
- Effective_rate R = base_rate × parallel_devices × availability
- Attempts_needed A = N × fraction (fraction = 0.5 expected, 1.0 worst case, or p for probability)
- Time_seconds T = A / R
This is a simplified model. If passwords are not uniformly random (for example, human-chosen patterns), real outcomes can differ significantly. Use this to drive better policies: longer secrets, bigger alphabets, and slow password-hashing functions with appropriate settings.
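The four formulas above can be sketched as a single function. This is a minimal illustration, not the calculator's actual implementation; the function and parameter names are chosen here for clarity.

```python
# Sketch of the time-to-crack model described above.
def estimate_seconds(charset_size, length, base_rate,
                     devices=1, availability=1.0, fraction=0.5):
    """Return estimated seconds using T = A / R."""
    keyspace = charset_size ** length          # N = charset_size ^ length
    rate = base_rate * devices * availability  # R = effective attempts/sec
    attempts = keyspace * fraction             # A = N * fraction
    return attempts / rate

# "Policy baseline" row: length 10, charset 62, 1 GH/s, expected case (0.5)
seconds = estimate_seconds(62, 10, 1e9)
print(seconds / (365.25 * 24 * 3600))  # roughly 13 years
```

Running the worst case instead (`fraction=1.0`) simply doubles the expected-case result, which is why the table's mode column matters.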
How to use this calculator
- Select a hash family label to document your scenario.
- Enter password length and choose a character set size.
- Provide a realistic attempt rate from your test environment.
- Set parallel devices and availability to match constraints.
- Pick an assumption: expected, worst case, or probability target.
- Click Calculate to view results above the form.
Assumptions behind the estimate
This calculator models a brute-force search in which every candidate is equally likely. That assumption fits randomly generated secrets and lets you compare policy options consistently. Human-chosen passwords often follow patterns, so real compromise times can be shorter than the uniform model predicts. Treat the output as a planning baseline for audits, control design, and stakeholder communication, not a guarantee, and add margin for salts, throttling, and monitoring. Policy improvements such as longer secrets and slow hashing usually outperform hardware scaling. Document assumptions and keep outputs traceable.
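The three assumption modes differ only in the fraction of the keyspace searched. A small illustration, using the "Digits only" scenario from the table (an 8-digit secret at 100 MH/s):

```python
# Comparing assumption modes for an 8-digit, digits-only secret at 100 MH/s.
N = 10 ** 8   # keyspace for 8 decimal digits
R = 1e8       # 100 MH/s effective rate

for label, fraction in [("expected", 0.5), ("worst case", 1.0), ("p = 0.01", 0.01)]:
    print(label, (N * fraction) / R, "seconds")
```

The probability mode answers a different question: how long until an attacker has a given chance of success, which is often the more useful planning number.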
Keyspace growth and policy levers
Keyspace grows exponentially with length and character-set size. Adding two characters multiplies the search space by the charset size squared, a gain that can dwarf hardware upgrades. Policies that raise minimum length, allow passphrases, and reduce predictable constraints typically deliver larger gains than complex composition rules that users work around. Pair this with uniqueness requirements to prevent reuse.
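The length-versus-hardware comparison is easy to verify directly. For a 62-character alphabet:

```python
# Adding two characters multiplies the keyspace by charset_size squared.
charset = 62
n10 = charset ** 10
n12 = charset ** 12

print(n12 // n10)  # 3844, i.e. 62**2
# A 10x hardware upgrade only divides the time by 10; two extra
# characters multiply it by 3844.
```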
Attempt rates must match the hash settings
Attempt rate is the most sensitive operational input. Fast hashes can be evaluated quickly, while password-hashing functions intentionally slow evaluation using cost factors or memory hardness. Your measured rate should reflect the specific parameters in production, including iterations, memory, and parallelism limits. When rates are unknown, test in a controlled, authorized environment and document the measurement method for repeatability. For online guessing, lockouts and latency dominate the effective rate. For offline checks, benchmark the full verification configuration.
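One way to measure a rate for the exact production parameters is to time the verification function itself. A minimal sketch using the standard library's PBKDF2; the iteration count shown is an assumed production setting, so substitute your real configuration:

```python
# Authorized-environment sketch: measure attempts/sec for a specific
# hash configuration. hashlib.pbkdf2_hmac is standard library; the
# iteration count below is an assumption, not a recommendation.
import hashlib
import os
import time

salt = os.urandom(16)
iterations = 600_000   # assumed production PBKDF2-SHA256 setting
trials = 3

start = time.perf_counter()
for _ in range(trials):
    hashlib.pbkdf2_hmac("sha256", b"candidate-password", salt, iterations)
elapsed = time.perf_counter() - start

print(f"measured rate: {trials / elapsed:.1f} attempts/sec on this machine")
```

Record the hardware, parameters, and measurement method alongside the number so the benchmark is repeatable.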
Parallelism, availability, and real constraints
Parallel devices scale throughput, but only if workloads are independent and not limited by shared bottlenecks. Availability accounts for throttling, scheduled windows, and competing workload demands. In real systems, additional controls—MFA, lockouts, rate limiting, and monitoring—reduce attacker opportunities. Treat availability as a guardrail that keeps estimates grounded in operational realities rather than ideal lab conditions.
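In the model, availability is simply a multiplier on the effective rate; the values below are illustrative, not benchmarks:

```python
# Effective rate under real constraints: R = base * devices * availability.
base_rate = 1e9      # 1 GH/s per device (assumed benchmark)
devices = 8
availability = 0.6   # throttling, maintenance windows, shared workloads

effective = base_rate * devices * availability
print(effective)     # 4.8e9 attempts/sec, not the ideal 8e9
```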
Turning estimates into defensive actions
Use results to justify slow hashes, higher minimum lengths, and password-manager adoption. If the modeled time is uncomfortably low, prioritize stronger hashing configurations, deploy MFA for privileged and external access, and rotate exposed credentials. Re-run scenarios after control changes to demonstrate measurable risk reduction. Store CSV or PDF outputs as evidence in governance and compliance workflows.
FAQs
1) Does this calculator perform cracking?
No. It only estimates time based on keyspace and attempt rate for authorized assessments and awareness.
2) Why does “expected” use 50% of the keyspace?
If the secret is uniformly random, the average position in the search is halfway through the space.
3) What should I enter for attempt rate?
Use a measured attempts-per-second value for your exact hash parameters and hardware, including throttling or workload limits.
4) How do slow hashes change the results?
Slow hashes reduce attempt rate by design, often by orders of magnitude, increasing required time and improving resilience.
5) Why might real compromise be faster than the estimate?
Human passwords are not uniformly random; targeted guesses and leaked patterns can shrink the effective search space.
6) Which controls reduce risk most effectively?
Combine strong, unique secrets with MFA, rate limiting, monitoring, and modern hashing settings. Defense-in-depth matters.