Inputs
Example Data
| Scenario | Hash + Salt | Password Profile | Expected Risk |
|---|---|---|---|
| Legacy directory | NTLM, no salt | 8 chars, lowercase, reuse 60% | High |
| Modern web app | bcrypt cost 12, per-user salt | 14 chars, alnum, reuse 20% | Low–Medium |
| Best practice | Argon2id work 8, per-user salt | 16 chars, full set, reuse 10% | Low |
| Fast hash upgrade needed | SHA-256, shared salt | 12 chars, alnum, reuse 30% | Medium–High |
Formula Used
Rainbow tables are most effective when password hashes are fast to compute and lack unique salts. This calculator uses a heuristic model that combines technical controls and user password strength.
- H: hash type factor (fast hashes raise risk)
- S: salt factor (unique salts lower risk)
- C: cost/iterations factor (more work lowers risk)
- E: entropy factor (based on length and charset)
- R: reuse factor (higher reuse raises impact)
- B: exposure likelihood factor
- A: attacker capability factor
- L: rate limiting factor (small offline effect)
Scores are directional for prioritization, not a cryptographic guarantee.
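The calculator's exact formula is not published here, but the factor list above can be sketched as a heuristic score. In this illustrative sketch, each factor is normalized to 0..1 (1 = worst case), the function name `risk_score` and all weights are hypothetical, and the multiplicative core reflects that a single strong control (a slow hash, unique salts, or high entropy) sharply reduces offline feasibility:

```python
# Illustrative sketch only; factor names follow the document (H, S, C, E, R, B, A, L),
# but the weights and combining rule are assumptions, not the calculator's real formula.

def risk_score(H, S, C, E, R, B, A, L):
    """Combine normalized factors (each 0..1, where 1 = worst case) into a
    0-100 prioritization score. Feasibility terms multiply, so any one strong
    control drives the score down; impact terms add, weighted by importance."""
    feasibility = H * S * C * E                 # how practical offline recovery is
    impact = 0.5 * R + 0.3 * B + 0.2 * A       # how costly a recovery would be
    rate_limit_adj = 1 - 0.05 * (1 - L)        # small offline effect, per the factor list
    score = 100 * feasibility * impact * rate_limit_adj
    return round(min(max(score, 0.0), 100.0), 1)
```

With these weights, a legacy NTLM setup (all feasibility factors near 1) scores far above a salted, slow-hash configuration, matching the expected-risk column in the example table.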
How to Use This Calculator
- Select the password hash type used in your system.
- Choose whether salts are unique per user, shared, or absent.
- Enter the configured cost or iterations for the chosen hash.
- Estimate typical password length and allowed character set.
- Set the approximate password reuse rate in your user base.
- Pick exposure likelihood and attacker capability assumptions.
- Click Calculate Risk to see score and actions.
- Export results using CSV or PDF for reporting.
Threat Model and Scope
Rainbow tables target offline password hashes, not live logins. They become relevant after a database, backup, or log export is copied. Attackers precompute chains for a specific hash function and password space, then look up captured hashes to recover plaintexts quickly. Risk rises with the number of stored hashes, the proportion of human-chosen passwords, and the time an attacker can spend offline.
Why Rainbow Tables Still Matter
GPU cracking is flexible, but tables are economical for repeated use against legacy environments. A well‑built table can recover common patterns in minutes once generated, and the cost can be shared across multiple incidents. Organizations still running NTLM, unsalted SHA‑1, or MD5 in any component face higher exposure because popular tables already exist for weak spaces and dictionaries.
Entropy and Search Space
This calculator estimates entropy in bits using: entropy ≈ length × log2(character set size). Eight lowercase characters are about 38 bits (26^8 possibilities). Twelve alphanumeric characters are about 71 bits (62^12). Fourteen alphanumeric characters are about 83 bits. Each additional bit doubles the search space, so a 20‑bit increase can be roughly a million‑fold harder.
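The entropy approximation above is a one-line calculation; a minimal sketch (the helper name `entropy_bits` is ours) reproduces the figures in the text:

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Approximate password entropy: length x log2(character set size)."""
    return length * math.log2(charset_size)

print(round(entropy_bits(8, 26)))   # 8 lowercase chars  -> 38 bits
print(round(entropy_bits(12, 62)))  # 12 alphanumeric    -> 71 bits
print(round(entropy_bits(14, 62)))  # 14 alphanumeric    -> 83 bits
```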
Salt and Work Factors
Unique per‑user salts break precomputation by forcing tables to be rebuilt per salt, which is usually impractical. Shared salts help only slightly because tables can still be built once and reused. Work factors slow every guess: PBKDF2 iterations, bcrypt cost, or Argon2id parameters. Good practice is to target a server‑side verification time around 100–500 ms per password while keeping login throughput acceptable.
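The 100-500 ms target can be found empirically by timing verifications on production-like hardware. A rough calibration loop, sketched here with stdlib PBKDF2 (the function name, starting count, and doubling strategy are assumptions, not a recommendation for any specific KDF):

```python
import hashlib
import os
import time

def calibrate_pbkdf2(target_ms: float = 250.0, start_iterations: int = 100_000) -> int:
    """Double the PBKDF2 iteration count until one verification takes at
    least target_ms on this machine. Calibration sketch, not production code."""
    salt = os.urandom(16)
    iterations = start_iterations
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"benchmark-password", salt, iterations)
        elapsed_ms = (time.perf_counter() - t0) * 1000
        if elapsed_ms >= target_ms:
            return iterations
        iterations *= 2
```

The same approach applies to bcrypt cost or Argon2id parameters: raise the work until measured latency lands in the target band, then load test before rollout.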
Operational Hardening Metrics
Use the score to prioritize remediation, then measure outcomes. Track the percentage of accounts rehashed to the current standard, the share of users with MFA enabled, and the rate of password reuse detected in resets. For high exposure likelihood, assume compromise: rotate secrets, invalidate sessions, monitor for credential stuffing, and segment identity stores so one leak cannot cascade. Also review pepper usage, backup encryption, and access logging. A sudden drop in average entropy or a spike in reuse should trigger policy updates. Reassess parameters yearly, because hardware speed and attacker resources increase steadily across consumer and cloud platforms.
FAQs
What does the risk score represent?
It is a prioritization score from 0 to 100 based on hash speed, salting, work factor, estimated entropy, reuse, exposure, and attacker strength. It does not guarantee crack time, but it highlights where rainbow tables and offline guessing are most practical.
Why is a unique salt so important?
Rainbow tables rely on reuse of precomputed work. A unique per‑user salt forces attackers to recompute tables for every account, removing the main advantage of rainbow tables and pushing attackers toward slower, per‑hash guessing.
How should I choose a work factor?
Increase parameters until verification typically takes about 100–500 ms on your production hardware, then load test. Revisit annually. The goal is to slow offline guesses while keeping authentication latency and CPU usage within your service limits.
Does rate limiting reduce rainbow table attacks?
Rainbow tables are offline, so rate limiting does not stop hash recovery. However, it can reduce the impact after recovery by slowing credential stuffing and automated login attempts, especially when combined with bot detection and MFA.
How do I estimate password entropy realistically?
Use typical user behavior, not policy text. Consider length, allowed characters, and common patterns. If you see many short passwords, dictionary terms, or predictable substitutions, model a smaller effective character set and lower length.
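The gap between policy entropy and behavioral entropy can be large. A hypothetical worked example, assuming users mostly pick a dictionary word plus one digit (the 50,000-word dictionary size is an assumption for illustration):

```python
import math

# What a 12-char alphanumeric policy allows on paper:
policy_bits = 12 * math.log2(62)            # roughly 71 bits

# What users actually choose under the assumed pattern
# (one of ~50,000 dictionary words followed by a single digit):
effective_bits = math.log2(50_000 * 10)     # under 19 bits

print(round(policy_bits), round(effective_bits))
```

Modeling the smaller effective space, rather than the nominal policy space, is what keeps the calculator's entropy factor honest.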
What should I do after a suspected hash leak?
Treat hashes as compromised. Rotate affected credentials, invalidate sessions and tokens, enforce MFA, and monitor for stuffing. Upgrade hashing immediately, then rehash on next login. Also review backups, access paths, and logging for the original exposure.