Wildcard DNS Exposure Calculator

Quantify wildcard reach, risky hosts, and mitigation strength. Export results and quickly align remediation with real operational evidence.

Inputs:

  Domain: enter a domain like example.com.
  Wildcard hits: count of random labels that resolved.
  Sensitive hostnames: admin, SSO, VPN, billing, CI, or internal portals.
  Exposed services: HTTP/HTTPS, SSH, RDP, SMTP, APIs, dashboards.
  TTL: higher TTL increases persistence of risky records.

Formula Used

The calculator produces an Exposure Score (0–100) using weighted factors and control modifiers:

Coverage = wildcard_hits / subdomains_tested
SensitiveRatio = sensitive_hosts / max(1, wildcard_hits)
SurfaceRatio = open_services / max(1, wildcard_hits)
TTLFactor = scaled log(TTL) from 60s→0.15 to 86400s→1.00

Base = (wildcard?30:5) + 28·Coverage + 18·SensitiveRatio + 16·SurfaceRatio + 8·TTLFactor + (CNAME?10:0)
Score = clamp(0..100, Base · (1−ControlReduction) · CriticalityMultiplier · ScanMultiplier)

DNSSEC and edge filtering reduce risk; high criticality and slower monitoring increase it.
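The formula above can be sketched directly in code. This is a minimal interpretation, not the calculator's actual implementation: the exact control-reduction values and criticality/scan multipliers are not specified on this page, so they are exposed here as assumed parameters that default to "no adjustment". The TTL scaling assumes a logarithmic interpolation between the two anchor points given (60s maps to 0.15, 86400s maps to 1.00).

```python
import math

def ttl_factor(ttl: int) -> float:
    """Log-scale TTL into [0.15, 1.00], anchored at 60s and 86400s.

    The page states the anchor points; the log interpolation between
    them is an assumption.
    """
    lo, hi = math.log(60), math.log(86400)
    x = (math.log(max(60, min(ttl, 86400))) - lo) / (hi - lo)
    return 0.15 + 0.85 * x

def exposure_score(tested: int, hits: int, sensitive: int, services: int,
                   ttl: int, has_wildcard: bool = True,
                   dangling_cname: bool = False,
                   control_reduction: float = 0.0,   # assumed parameter
                   criticality: float = 1.0,         # assumed parameter
                   scan: float = 1.0) -> float:      # assumed parameter
    coverage = hits / max(1, tested)
    sensitive_ratio = sensitive / max(1, hits)
    surface_ratio = services / max(1, hits)
    base = ((30 if has_wildcard else 5)
            + 28 * coverage
            + 18 * sensitive_ratio
            + 16 * surface_ratio
            + 8 * ttl_factor(ttl)
            + (10 if dangling_cname else 0))
    score = base * (1 - control_reduction) * criticality * scan
    return max(0.0, min(100.0, score))  # clamp to 0..100
```

With the example.com row from the table below (50 tested, 35 hits, 4 sensitive, 12 services, TTL 3600) and no control adjustments, this sketch lands in the low 60s, consistent with a High band.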

How to Use

  1. Generate random labels under your domain and query DNS resolution.
  2. Enter how many labels you tested and how many resolved (hits).
  3. Count sensitive hostnames that should never be wildcard-routable.
  4. Record exposed services reachable behind wildcard-resolving hosts.
  5. Mark DNSSEC, edge filtering, TTL, and monitoring cadence accurately.
  6. Submit to receive a score, risk band, and prioritized actions.

Example Data Table

Domain           | Tested | Hits | Sensitive | Services | TTL (s) | Controls                | Risk
example.com      | 50     | 35   | 4         | 12       | 3600    | DNSSEC: No, Edge: Yes   | High
shop.example.com | 60     | 10   | 0         | 2        | 300     | DNSSEC: Yes, Edge: Yes  | Low
corp.example.net | 40     | 18   | 2         | 6        | 7200    | DNSSEC: No, Edge: No    | Medium

Use the table as a reporting baseline. Replace values with your scan results and export your latest score for audit trails.

Wildcard Resolution Footprint

Wildcard records can cause unexpected hostnames to resolve, expanding discovery for attackers and complicating asset inventory. In many environments, a 30–70% random-resolution rate signals broad wildcard reach and should trigger a review of routing rules, certificate coverage, and logging. Tracking “tested versus hits” builds repeatable evidence for change management decisions.

Exposure Drivers and Measured Ratios

The calculator converts raw counts into ratios to compare domains fairly. Coverage reflects how consistently random labels resolve. SensitiveRatio highlights impact by measuring sensitive hostnames among wildcard hits. SurfaceRatio measures how many reachable services sit behind wildcard-resolving hosts. When coverage is moderate but service exposure is high, remediation should prioritize port reduction and segmentation over record deletion alone.

Control Signals That Reduce Risk

DNSSEC strengthens integrity by protecting signed records from certain spoofing scenarios, while edge filtering reduces direct reachability of risky endpoints. The model applies control reductions so teams can quantify improvement after enabling defenses. If controls are present but the score remains high, that usually indicates excessive coverage, sensitive routing, or dangling dependencies rather than missing security tooling.

TTL Persistence and Operational Reality

TTL influences how quickly clients and resolvers adopt a fix. Long TTL values can delay remediation and prolong misrouting after record changes. The TTLFactor is scaled so very short TTLs provide limited benefit, while multi-hour or multi-day TTLs increase persistence meaningfully. For volatile environments, lowering TTL can reduce outage risk during containment and rollback.

Prioritization and Reporting Outputs

Scores are grouped into Low, Medium, and High bands to support triage. High scores typically justify narrowing wildcard scope, isolating sensitive hosts, and removing dangling CNAME targets. The built-in exports help teams capture inputs, results, and actions in a consistent format for auditors, runbooks, and quarterly security reviews. Recalculate after every DNS change to validate that exposure decreases as expected.
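Banding can be expressed as a simple threshold function. The page names the Low/Medium/High bands but not the cutoffs, so the 40 and 70 thresholds below are illustrative assumptions only.

```python
def risk_band(score: float) -> str:
    """Map a 0-100 exposure score to a triage band.

    Thresholds (40, 70) are assumed for illustration; the page
    does not publish the actual band boundaries.
    """
    if score < 40:
        return "Low"
    if score < 70:
        return "Medium"
    return "High"
```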

FAQs

1) What is a wildcard DNS record in practical terms?

It matches many subdomains with one record, so unknown hostnames can resolve to the same target. This simplifies routing, but can obscure asset ownership and increase exposure if not tightly scoped.

2) How do I measure “wildcard hits” accurately?

Generate random labels like x7k3.example.com and query DNS. Count how many resolve to an address or CNAME target. Use the same test size each run to compare trends reliably.

3) Why do sensitive hostnames raise the score so much?

If admin, SSO, VPN, or billing names can be influenced by wildcard routing, phishing, misrouting, and access-control mistakes become more likely. The model treats impact as a separate multiplier.

4) Does DNSSEC eliminate wildcard risk?

No. DNSSEC improves authenticity of DNS responses, but it does not reduce wildcard coverage or exposed services. It helps defensively, yet you still need scoping, segmentation, and monitoring.

5) What does a dangling CNAME indicate?

It suggests a CNAME points to a target you no longer control, such as a deleted cloud service. That can enable subdomain takeover scenarios. Fix by removing or re-pointing to owned resources.
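The check described in this answer can be sketched as a small helper: given a map of hostnames to their CNAME targets, flag every record whose target no longer resolves. The `resolves` callable is injected so it can wrap any real DNS lookup (the standard library has no CNAME query, so a resolver library or external tool would supply it in practice); the hostnames in the usage test are made up for illustration.

```python
def find_dangling_cnames(cname_map: dict[str, str], resolves) -> list[str]:
    """Return hostnames whose CNAME target fails to resolve.

    cname_map: {hostname: cname_target}, e.g. from a zone export
    resolves:  callable(target) -> bool, wrapping a real DNS lookup
    """
    return [host for host, target in cname_map.items() if not resolves(target)]
```

Any hostname this returns is a takeover candidate: remove the record or re-point it at a resource you control.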

6) How often should I run this assessment?

Run it after DNS changes, infrastructure migrations, and certificate updates. Weekly is ideal for high-criticality domains. Monthly is reasonable for stable properties with strong monitoring.