Quantify wildcard reach, risky hosts, and mitigation strength, then export the results and align remediation with real operational evidence.
The calculator produces an Exposure Score (0–100) using weighted factors and control modifiers:
DNSSEC and edge filtering reduce risk; high criticality and slower monitoring increase it.
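The calculator's internal weighting is not published here, so the following Python sketch shows one plausible shape of the model. The factor names mirror the terms used in this article, but the weights and the 15%/20% control reductions are assumptions for illustration only.

```python
def exposure_score(coverage, sensitive_ratio, surface_ratio,
                   criticality, ttl_factor,
                   dnssec=False, edge_filtering=False):
    """Combine weighted risk factors (each 0..1), then apply control
    reductions. Weights and modifier values are assumed, not official."""
    base = (40 * coverage          # how often random labels resolve
            + 25 * sensitive_ratio # sensitive names among wildcard hits
            + 20 * surface_ratio   # reachable services behind wildcard hosts
            + 10 * criticality     # business impact of the domain
            + 5 * ttl_factor)      # persistence of misrouting after a fix
    if dnssec:
        base *= 0.85   # assumed 15% reduction for a signed zone
    if edge_filtering:
        base *= 0.80   # assumed 20% reduction for filtered edges
    return round(min(100.0, max(0.0, base)), 1)
```

With every factor at its maximum and both controls enabled, the score drops from 100 to 68, which matches the intent that controls reduce but never erase exposure.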
| Domain | Tested | Hits | Sensitive | Services | TTL (s) | Controls | Risk |
|---|---|---|---|---|---|---|---|
| example.com | 50 | 35 | 4 | 12 | 3600 | DNSSEC: No, Edge: Yes | High |
| shop.example.com | 60 | 10 | 0 | 2 | 300 | DNSSEC: Yes, Edge: Yes | Low |
| corp.example.net | 40 | 18 | 2 | 6 | 7200 | DNSSEC: No, Edge: No | Medium |
Use the table as a reporting baseline. Replace values with your scan results and export your latest score for audit trails.
Wildcard records can cause unexpected hostnames to resolve, expanding discovery for attackers and complicating asset inventory. In many environments, a 30–70% random-resolution rate signals broad wildcard reach and should trigger a review of routing rules, certificate coverage, and logging. Tracking “tested versus hits” builds repeatable evidence for change management decisions.
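The tested-versus-hits measurement can be sketched as a small script. The resolver is injected as a callable so the counting logic runs without network access; in practice you would pass a wrapper around `socket.gethostbyname` that treats `socket.gaierror` as a miss. Function names here are illustrative, not the calculator's API.

```python
import random
import string

def random_label(length=8):
    # A random lowercase label that is very unlikely to exist as a real record.
    return "".join(random.choice(string.ascii_lowercase + string.digits)
                   for _ in range(length))

def resolution_rate(domain, resolves, tested=50):
    """Query `tested` random labels under `domain` and count resolutions.

    `resolves` is a callable taking a hostname and returning True when it
    resolves. Keeping it injectable makes trend runs reproducible and the
    logic testable offline.
    """
    hits = sum(1 for _ in range(tested)
               if resolves(f"{random_label()}.{domain}"))
    return tested, hits, hits / tested
```

A 35-of-50 result, like the example.com row above, is a 70% rate, at the top of the 30–70% review band.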
The calculator converts raw counts into ratios to compare domains fairly. Coverage reflects how consistently random labels resolve. SensitiveRatio highlights impact by measuring sensitive hostnames among wildcard hits. SurfaceRatio measures how many reachable services sit behind wildcard-resolving hosts. When coverage is moderate but service exposure is high, remediation should prioritize port reduction and segmentation over record deletion alone.
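A minimal sketch of those conversions, assuming the denominators described above (the calculator's exact internal definitions may differ):

```python
def ratios(tested, hits, sensitive_hits, wildcard_services, total_services):
    """Turn raw scan counts into the three comparison ratios."""
    coverage = hits / tested if tested else 0.0
    sensitive_ratio = sensitive_hits / hits if hits else 0.0
    surface_ratio = (wildcard_services / total_services
                     if total_services else 0.0)
    return coverage, sensitive_ratio, surface_ratio
```

For the example.com row (50 tested, 35 hits, 4 sensitive), Coverage is 0.70 and SensitiveRatio is roughly 0.11.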
DNSSEC strengthens integrity by protecting signed records from certain spoofing scenarios, while edge filtering reduces direct reachability of risky endpoints. The model applies control reductions so teams can quantify improvement after enabling defenses. If controls are present but the score remains high, that usually indicates excessive coverage, sensitive routing, or dangling dependencies rather than missing security tooling.
TTL influences how quickly clients and resolvers adopt a fix. Long TTL values can delay remediation and prolong misrouting after record changes. The TTLFactor is scaled so very short TTLs provide limited benefit, while multi-hour or multi-day TTLs increase persistence meaningfully. For volatile environments, lowering TTL can reduce outage risk during containment and rollback.
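The exact TTLFactor curve is not specified here; a logarithmic scale is one plausible choice, sketched below with assumed anchor points of one minute (no added risk) and one day (full persistence penalty):

```python
import math

def ttl_factor(ttl_seconds):
    """Scale TTL so short TTLs add little risk and multi-hour or multi-day
    TTLs saturate toward 1.0. The log curve and anchors are assumptions."""
    if ttl_seconds <= 0:
        return 0.0
    lo, hi = math.log(60), math.log(86400)  # 1 minute .. 1 day
    return min(1.0, max(0.0, (math.log(ttl_seconds) - lo) / (hi - lo)))
```

Under these anchors, a 3600-second TTL lands a little past the midpoint, reflecting the multi-hour persistence the text describes.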
Scores are grouped into Low, Medium, and High bands to support triage. High scores typically justify narrowing wildcard scope, isolating sensitive hosts, and removing dangling CNAME targets. The built-in exports help teams capture inputs, results, and actions in a consistent format for auditors, runbooks, and quarterly security reviews. Recalculate after every DNS change to validate that exposure decreases as expected.
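The band boundaries are not stated in this article, so the cutoffs in this sketch (70 and 40) are assumptions chosen only to illustrate the triage mapping:

```python
def risk_band(score):
    """Map a 0-100 Exposure Score to a triage band; thresholds assumed."""
    if score >= 70:
        return "High"
    if score >= 40:
        return "Medium"
    return "Low"
```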
**What does a wildcard DNS record do?**
It matches many subdomains with one record, so unknown hostnames can resolve to the same target. This simplifies routing, but can obscure asset ownership and increase exposure if not tightly scoped.
**How do I measure wildcard coverage?**
Generate random labels like x7k3.example.com and query DNS. Count how many resolve to an address or CNAME target. Use the same test size each run to compare trends reliably.
**Why do sensitive hostnames matter?**
If admin, SSO, VPN, or billing names can be influenced by wildcard routing, phishing, misrouting, and access-control mistakes become more likely. The model treats impact as a separate multiplier.
**Does DNSSEC eliminate wildcard risk?**
No. DNSSEC improves the authenticity of DNS responses, but it does not reduce wildcard coverage or exposed services. It helps defensively, yet you still need scoping, segmentation, and monitoring.
**What is a dangling CNAME?**
It suggests a CNAME points to a target you no longer control, such as a deleted cloud service. That can enable subdomain takeover scenarios. Fix it by removing the record or re-pointing it to resources you own.
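One way to flag candidates is to check whether each CNAME target still resolves. This sketch injects the lookup as a callable so the logic stays testable offline; in practice you would wrap your DNS client there. The function name and record shapes are illustrative, not the calculator's API.

```python
def find_dangling(cname_map, target_resolves):
    """cname_map: {hostname: cname_target}. target_resolves: callable that
    returns True when a target still resolves. Hosts whose targets no
    longer resolve are subdomain-takeover candidates."""
    return sorted(host for host, target in cname_map.items()
                  if not target_resolves(target))
```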
**How often should I recalculate?**
Run it after DNS changes, infrastructure migrations, and certificate updates. Weekly is ideal for high-criticality domains. Monthly is reasonable for stable properties with strong monitoring.