Test your passphrase
Example data table
| Sample input | Mode | Typical outcome |
|---|---|---|
| ocean tulip metro canyon | Words | Moderate to Strong, depending on wordlist size. |
| Orbit-Lantern Drift Canyon 7! | Words | Strong, with small bonuses for variety. |
| P@ssw0rd1234 | Characters | Weak, common pattern penalties apply. |
| fR9!xQ2#kL7@pT6 | Characters | Strong to Very Strong, long and diverse. |
Examples are placeholders for learning. Use unique secrets for real accounts.
Formula used
- Character mode entropy: H ≈ L × log2(N), where L is length and N is the assumed character pool size.
- Word mode entropy: H ≈ W × log2(S) + bonus, where W is word count and S is the assumed wordlist size.
- Score mapping: entropy is scaled to 0–100 with penalties for repeats, sequences, and common patterns.
- Crack-time estimate: worst-case time ≈ 2^H / rate (on average an attacker succeeds after half the keyspace, ≈ 2^(H−1) / rate), evaluated at offline fast, offline slow, and online rates.
These estimates are educational and vary by hashing method, throttling, and attacker capability.
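The formulas above can be sketched in a few lines of Python. This is a minimal illustration, not the calculator's exact implementation: the bonus default, penalty handling, and the 0–100 clamp are assumptions.

```python
import math

def char_entropy(length: int, pool: int) -> float:
    # Character mode: H ≈ L × log2(N), each symbol drawn uniformly from a pool of N
    return length * math.log2(pool)

def word_entropy(words: int, wordlist: int, bonus: float = 0.0) -> float:
    # Word mode: H ≈ W × log2(S) + bonus (e.g. for separators or capitalisation)
    return words * math.log2(wordlist) + bonus

def score(entropy_bits: float, penalty: float = 0.0) -> float:
    # Map entropy onto 0-100 and subtract pattern penalties (scaling assumed)
    return max(0.0, min(100.0, entropy_bits - penalty))
```

For example, `char_entropy(16, 36)` gives about 82.7 bits, and `word_entropy(4, 7776)` gives about 51.7 bits.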
How to use this calculator
- Enter a sample passphrase similar in structure to your real one.
- Select the scoring mode that best matches how it was created.
- Optionally set wordlist size or word count for accuracy.
- Enable checks to flag common patterns and repeated sequences.
- Submit to view score, entropy, and crack-time estimates.
- Download CSV or PDF to attach to assessments and reports.
For best practice, use a password manager and unique secrets per service.
Operational goals for strength testing
Security teams often need a repeatable way to compare secrets across systems. This calculator turns a passphrase into an entropy score, then maps it to a 0–100 rating for quick triage. Using the default 7,776-word list assumption, each random word contributes about 12.9 bits of entropy. Four truly random words yield roughly 51.7 bits before bonuses, while six words reach about 77.5 bits, which commonly lands in the Strong range.
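The per-word arithmetic is easy to verify directly (a quick check, assuming the 7,776-entry Diceware-style list):

```python
import math

bits_per_word = math.log2(7776)          # 6**5 = 7,776 words on a Diceware-style list
print(round(bits_per_word, 1))           # 12.9
print(round(4 * bits_per_word, 1))       # 51.7
print(round(6 * bits_per_word, 1))       # 77.5
```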
Entropy inputs and realistic assumptions
Entropy is only as good as the assumptions behind it. In word mode, the model assumes words are selected uniformly from a wordlist; user overrides let you set the list size when your policy mandates a specific standard. In character mode, the pool is based on selected character sets. For example, lowercase plus digits approximates 36 symbols, giving H ≈ L × log2(36). At 16 characters, that is about 82.7 bits.
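Character-mode pools can be modelled by summing the selected character classes. The class sizes below are assumptions (symbol counts in particular vary between calculators):

```python
import math

# Assumed per-class pool sizes; the "symbols" count varies by implementation
POOLS = {"lower": 26, "upper": 26, "digits": 10, "symbols": 33}

def pool_size(classes):
    # Total pool N is the sum of the enabled character classes
    return sum(POOLS[c] for c in classes)

n = pool_size(["lower", "digits"])       # 26 + 10 = 36 symbols
h = 16 * math.log2(n)                    # ≈ 82.7 bits at 16 characters
```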
Pattern penalties and policy alignment
Attackers rarely brute-force purely at random; they exploit patterns. The pattern checks reduce the score for repeated runs, sequential chunks like “abcd” or “1234”, and common keyboard walks such as “qwer”. A common-pattern flag also lowers the score, because dictionary hybrids (e.g., predictable words plus years) are widely targeted. Use penalties to align with policy language like “avoid dates and known phrases”.
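A simple penalty model covering those three checks might look like the sketch below. The flat 10-point weights are illustrative assumptions; real checkers weight and combine patterns differently.

```python
import re

def pattern_penalty(secret: str) -> int:
    """Illustrative penalty model: repeats, ascending sequences, keyboard walks."""
    penalty = 0
    low = secret.lower()
    # Repeated runs: three or more of the same character ("aaa", "111")
    if re.search(r"(.)\1\1", low):
        penalty += 10
    # Ascending sequences of four, e.g. "abcd" or "1234"
    for i in range(len(low) - 3):
        chunk = low[i:i + 4]
        if all(ord(chunk[j + 1]) - ord(chunk[j]) == 1 for j in range(3)):
            penalty += 10
            break
    # Common keyboard walks
    if any(walk in low for walk in ("qwer", "asdf", "zxcv")):
        penalty += 10
    return penalty
```

For instance, `pattern_penalty("P@ssw0rd1234")` returns 10 (the trailing "1234" sequence), while a random four-word passphrase typically returns 0.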
Crack-rate scenarios for reporting
The crack-time panels use three reference rates to support different narratives. Offline fast represents high-throughput attacks (about 10 billion guesses per second) against weak hashes. Offline slow represents hardened storage or memory-hard settings (about 1 million guesses per second). Online rates are far lower (about 100 guesses per hour) when throttling and lockouts exist. These are educational anchors, not guarantees.
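Those three anchors translate directly into worst-case time estimates. The rates below mirror the figures stated above and are assumptions, not measurements of any specific attacker:

```python
# Reference rates (assumed anchors matching the three panels)
RATES = {
    "offline_fast": 1e10,       # ~10 billion guesses/second vs weak hashes
    "offline_slow": 1e6,        # ~1 million guesses/second vs hardened storage
    "online": 100 / 3600,       # ~100 guesses/hour under throttling and lockouts
}

def crack_time_seconds(entropy_bits: float, rate: float) -> float:
    # Worst case: the attacker exhausts the full 2^H keyspace
    return 2 ** entropy_bits / rate

for name, rate in RATES.items():
    days = crack_time_seconds(51.7, rate) / 86_400
    print(f"{name}: {days:,.0f} days")
```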
Exportable evidence for audits
Security reviews often need artifacts. The CSV export captures key metrics—score, grade, entropy, length, penalty, and crack-time estimates—for spreadsheet analysis. The PDF export produces a simple one-page summary suitable for attachment to risk registers or access reviews. Use consistent inputs, document the assumed wordlist size, and compare results over time to show improvement. Many teams set targets such as 60+ for shared accounts and 80+ for privileged access, then verify uniqueness per service periodically.
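The CSV layout can be reproduced for spreadsheet analysis along these lines. The field names and values here are hypothetical, chosen only to mirror the metrics listed above:

```python
import csv
import io

# Hypothetical row mirroring the exported metrics named above
row = {
    "score": 82, "grade": "Strong", "entropy_bits": 82.7,
    "length": 16, "penalty": 0, "offline_fast_seconds": 7.6e14,
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=row.keys())
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```

Keeping the same column order across exports makes period-over-period comparison in a spreadsheet straightforward.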
FAQs
1) Is a longer passphrase always stronger?
Length usually increases entropy, but only if the content is not predictable. Repeated phrases, dates, and common patterns can remain weak despite length. Combine length with randomness and uniqueness.
2) What wordlist size should I use?
Use the size that matches your policy or generation method. Diceware-style lists are often 7,776 words. If your organization uses a different approved list, enter that size for more relevant estimates.
3) Why does the score drop when pattern checks are enabled?
Because attackers prioritize guesses that follow human habits. Sequences, keyboard runs, and repeated characters are searched early. Penalties model that advantage so the score reflects real-world risk more closely.
4) Are the crack-time values accurate?
They are illustrative. Real outcomes depend on the hash type, GPU/ASIC capability, salting, key-stretching, and online rate limits. Use the times to compare options, not as a guarantee.
5) Should I test my real account password here?
No. Use a structurally similar sample instead. For real secrets, test locally, use a trusted password manager, and rely on policy-based generation rather than pasting production credentials into tools.
6) What score should we require for privileged access?
A common practice is to set higher targets for sensitive roles, then validate uniqueness and MFA. Treat 80+ as a strong baseline for privileged accounts, and adjust based on your threat model.