See what your DNS policy blocks. Adjust categories, weights, and confidence thresholds, then export results, share insights, and harden defenses.
Enter measured counts for the selected period. Fields marked * are required.
Sample scenarios to sanity-check your inputs and outputs.
| Scenario | Total queries | Threats encountered | Threats blocked | False positives | Precision | Recall |
|---|---|---|---|---|---|---|
| Balanced | 250,000 | 860 | 763 | 120 | 86.40% | 88.72% |
| High block, noisy | 400,000 | 1,200 | 1,050 | 600 | 63.64% | 87.50% |
| Low false positives | 180,000 | 540 | 470 | 40 | 92.16% | 87.04% |
| Needs tuning | 300,000 | 900 | 540 | 90 | 85.71% | 60.00% |
Total threats: T = Σ encountered
Blocked threats: B = Σ blocked
Missed threats: M = max(0, T − B)
Total blocks: TB = B + FP
Precision: P = B / (B + FP)
Recall: R = B / T
F1 score: F1 = 2PR / (P + R)
False positive rate: FPR = FP / (TotalQueries − T)
Effectiveness score (0–100):
Score = 100 × (0.45R + 0.35P + 0.20(1 − clamp(FPR))), where clamp bounds the value to [0, 1]
Incident estimate:
IncidentsAvoided = B / QueriesPerIncident, and AvoidedCost = IncidentsAvoided × CostPerIncident.
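The formulas above can be sketched in Python as follows; the function and dictionary key names are illustrative, not taken from the calculator itself:

```python
# Sketch of the metric formulas above. Guards against division by zero
# are added for robustness; names are illustrative.

def dns_metrics(total_queries, encountered, blocked, false_positives):
    """Compute precision, recall, F1, FPR, and the 0-100 effectiveness score."""
    T, B, FP = encountered, blocked, false_positives
    precision = B / (B + FP) if (B + FP) else 0.0
    recall = B / T if T else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    # False positive rate over benign queries (TotalQueries - T).
    benign = total_queries - T
    fpr = FP / benign if benign > 0 else 0.0
    clamped_fpr = min(max(fpr, 0.0), 1.0)  # assumed clamp to [0, 1]
    score = 100 * (0.45 * recall + 0.35 * precision + 0.20 * (1 - clamped_fpr))
    return {"precision": precision, "recall": recall, "f1": f1,
            "fpr": fpr, "score": score}

# "Balanced" scenario from the table: precision ≈ 86.4%, recall ≈ 88.7%
m = dns_metrics(250_000, 860, 763, 120)
print(f"P={m['precision']:.2%} R={m['recall']:.2%} score={m['score']:.1f}")
```

Running the "Balanced" row through this sketch reproduces the precision and recall shown in the scenario table.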
DNS filtering sits at the earliest network decision point: the name lookup. Because every web session, update, and API call starts with resolution, small improvements in blocking accuracy can reduce exposure at scale. Track total queries, threat encounters, and blocks per period to normalize results across growth and seasonal traffic changes. Include internal and external resolvers, and separate recursive resolver logs from forwarder logs to avoid double counting.
Precision tells you how trustworthy a block is. If precision is 90%, one in ten blocked lookups is likely legitimate. Recall shows coverage: a recall of 85% means 15% of threat queries still resolved. Use F1 to balance both when comparing policies, feeds, or rule changes. For accuracy, label “encountered” using consistent threat intel tags and avoid mixing policy blocks with user-driven sinkhole responses.
False positives create downtime and help-desk load. Start with allow-list workflows for business-critical domains, then review top blocked domains by user group and application. A false positive rate below 0.05% is often a practical target in large enterprises, but the acceptable level depends on tolerance for disruption and attacker pressure. Add staged rollouts, monitor ticket spikes, and audit exceptions weekly to prevent allow-lists from becoming permanent blind spots.
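A weekly false-positive check against the 0.05% figure mentioned above might look like the sketch below; the target constant and function name are assumptions, not part of the calculator:

```python
# Hypothetical weekly check of false positive rate against an assumed
# 0.05% enterprise target; calibrate the target to your own tolerance.
FPR_TARGET = 0.0005  # 0.05%

def fpr_exceeds_target(false_positives, total_queries, threats_encountered,
                       target=FPR_TARGET):
    """Return True when the FP rate over benign queries exceeds the target."""
    benign = total_queries - threats_encountered
    return benign > 0 and false_positives / benign > target

# "High block, noisy" scenario: 600 FPs over 398,800 benign queries (~0.15%)
print(fpr_exceeds_target(600, 400_000, 1_200))  # → True, needs tuning
```

The "Low false positives" scenario (40 FPs over roughly 179,000 benign queries, about 0.02%) would pass the same check.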
The calculator’s score combines recall, precision, and a penalty for false positives. Use it as a trend indicator, not a compliance badge. When the score drops, inspect category recall: phishing may remain high while command-and-control falls after a new malware family appears. Update threat intel sources and validate resolver policy propagation. Also watch encrypted DNS adoption; unmanaged DoH clients can bypass controls and lower observed block rates in edge logs.
Translate blocked threats into avoided incidents using an assumption like 500 threat queries per incident. Pair that with a cost per incident to estimate avoided cost for executives. For operations, chart weekly score, recall, and false positive rate alongside change events such as new feeds, policy tuning, or user migrations to new resolvers. Include a short narrative on top categories blocked, top missed categories, and remediation actions taken in production environments.
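The incident translation above can be sketched as follows; the cost figure is a hypothetical placeholder, and only the 500-queries-per-incident assumption comes from the text:

```python
# Incident-estimate sketch using the assumption stated above:
# 500 threat queries per avoided incident. The cost figure is
# hypothetical; calibrate both from your own incident reports.
QUERIES_PER_INCIDENT = 500
COST_PER_INCIDENT = 25_000  # illustrative dollars per incident

def avoided_cost(blocked, queries_per_incident=QUERIES_PER_INCIDENT,
                 cost_per_incident=COST_PER_INCIDENT):
    """Return (incidents avoided, estimated avoided cost)."""
    incidents_avoided = blocked / queries_per_incident
    return incidents_avoided, incidents_avoided * cost_per_incident

incidents, cost = avoided_cost(763)  # "Balanced" scenario's blocked count
print(f"{incidents:.2f} incidents avoided, ${cost:,.0f} avoided cost")
```

Because these are directional estimates, report them as trends over time rather than single-point values.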
Use resolver logs labeled by threat intelligence, policy tags, or security analytics. Count queries that match known malicious indicators, even if they were allowed, so recall reflects coverage rather than only blocks.
It shouldn’t in this model. If blocked is higher, your data sources are inconsistent, or you are counting different periods. Align time windows and ensure each category uses the same definition of encountered.
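One way to catch such inconsistencies before computing metrics is a simple validation pass; the function name and messages below are illustrative:

```python
# Hypothetical input validation: flags data-source inconsistencies
# (e.g. blocked > encountered) before any metrics are computed.
def validate_counts(total_queries, encountered, blocked, false_positives):
    """Return a list of problems; an empty list means inputs are consistent."""
    problems = []
    if any(v < 0 for v in (total_queries, encountered, blocked, false_positives)):
        problems.append("counts must be non-negative")
    if blocked > encountered:
        problems.append("blocked exceeds encountered: check sources and periods")
    if encountered > total_queries:
        problems.append("encountered exceeds total queries")
    return problems

print(validate_counts(250_000, 860, 900, 120))  # blocked > encountered → flagged
```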
Review top blocked domains, validate business use, and create time-limited exceptions. Prefer category tuning, reputation thresholds, and user-group policies over broad allow-lists, then re-measure precision after changes.
Low recall means many threat queries still resolve. Improve feed diversity, add newly observed indicators, tighten categories like C&C, and confirm enforcement across all resolvers, including roaming clients and remote offices.
They are directional. Queries-per-incident and cost-per-incident vary by environment and attacker behavior. Calibrate them using your incident reports, then track avoided and residual cost trends rather than single-point values.
Weekly is common for operations and monthly for leadership reporting. Measure after any feed or policy change, and keep a baseline period. Consistent cadence makes score, precision, and recall comparable over time.
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.