## Audit Inputs
Use this form to score tag quality, governance, consent readiness, coverage depth, and performance impact in one container review.
## Example Data Table
| Site | Total Tags | Broken Tags | Consent Coverage | Speed Impact | Audit Score |
|---|---|---|---|---|---|
| Storefront A | 42 | 2 | 78% | 260 ms | 72.40 |
| Lead Gen B | 31 | 1 | 91% | 150 ms | 84.85 |
| Publisher C | 58 | 6 | 62% | 610 ms | 54.60 |
## Formulas Used
- Tag Coverage Score = (Active Tags ÷ Total Tags) × 100
- Duplicate Score = 100 − ((Duplicate Tags ÷ Total Tags) × 120)
- Broken Score = 100 − ((Broken Tags ÷ Total Tags) × 180)
- Redundant Score = 100 − ((Redundant Tags ÷ Total Tags) × 100)
- Tag Hygiene Score = Average of tag coverage, duplicate, broken, and redundant scores
- Trigger Health Score = 100 − ((Unused Triggers ÷ Total Triggers) × 100)
- Variable Health Score = 100 − ((Unused Variables ÷ Total Variables) × 100)
- Environment Score = (Configured Environments ÷ 3) × 100
- Performance Score uses a threshold model based on page speed impact in milliseconds
- Overall Audit Score = weighted sum of hygiene, governance, consent, coverage, performance, and server-side maturity
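The explicit formulas above can be sketched in code. This is a minimal illustration, not the calculator's actual implementation: the formulas for hygiene, trigger, variable, and environment scores follow the list above, while the performance threshold bands and the overall weights are placeholder assumptions, since the form does not publish them.

```python
def pct(part: float, total: float) -> float:
    """Ratio as a percentage; returns 0 when the denominator is 0."""
    return (part / total) * 100 if total else 0.0

def tag_hygiene_score(total, active, duplicates, broken, redundant):
    """Average of the four tag-level scores defined under 'Formulas Used'."""
    coverage = pct(active, total)                    # (Active / Total) x 100
    duplicate_s = 100 - pct(duplicates, total) * 1.2 # x120 penalty weight
    broken_s = 100 - pct(broken, total) * 1.8        # x180 penalty weight
    redundant_s = 100 - pct(redundant, total)        # x100 penalty weight
    return (coverage + duplicate_s + broken_s + redundant_s) / 4

def trigger_health_score(unused, total):
    return 100 - pct(unused, total)

def variable_health_score(unused, total):
    return 100 - pct(unused, total)

def environment_score(configured):
    return (configured / 3) * 100

def performance_score(impact_ms):
    # Hypothetical threshold bands: the real model is not published.
    for limit, score in [(100, 100), (250, 85), (500, 65), (1000, 40)]:
        if impact_ms <= limit:
            return score
    return 20

def overall_audit_score(components, weights):
    """Weighted sum of component scores; weights should sum to 1."""
    return sum(components[name] * w for name, w in weights.items())
```

Note how the 120 and 180 multipliers penalize duplicates and broken tags more steeply than redundant tags, matching the rationale in the FAQ below.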
## How to Use This Calculator
- Enter the container or website name and define the review period.
- Count total tags, active tags, duplicates, broken tags, and redundant assets from your container audit.
- Enter trigger and variable totals, then record how many are unused or obsolete.
- Estimate naming, consent, data layer, documentation, and server-side coverage as percentages.
- Add measured page speed impact and the number of third-party tags currently deployed.
- Choose how many environments are configured, then submit the form.
- Review the scorecards, graph, and priority fixes to plan cleanup work.
- Use the CSV or PDF buttons to save a report for stakeholders.
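For the export step, the CSV report is just the table rows serialized with a header. A minimal sketch, assuming the column layout of the example table above (the helper name is illustrative, not part of the tool):

```python
import csv
import io

def rows_to_csv(header: list[str], rows: list[tuple]) -> str:
    """Serialize audit rows to CSV text for a stakeholder report."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

report = rows_to_csv(
    ["Site", "Total Tags", "Broken Tags",
     "Consent Coverage", "Speed Impact (ms)", "Audit Score"],
    [("Storefront A", 42, 2, "78%", 260, 72.40),
     ("Lead Gen B", 31, 1, "91%", 150, 84.85),
     ("Publisher C", 58, 6, "62%", 610, 54.60)],
)
```

Writing to a string first makes the helper easy to test and reuse, whether the output goes to a file download or an email attachment.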
## Frequently Asked Questions
1. What does this calculator measure?
It measures implementation quality across tag hygiene, governance, consent setup, data layer depth, performance impact, and server-side maturity. The result helps prioritize cleanup before poor tracking affects reporting and optimization.
2. Why are broken tags penalized more than duplicates?
Broken tags can block core measurement, corrupt conversions, or silently fail consent logic. Duplicates are harmful too, but broken deployment usually creates more severe data loss risk.
3. Should active tags always equal total tags?
Not always. Some containers keep paused items for testing or staged releases. Still, a wide gap often suggests clutter, weak housekeeping, or outdated deployment practices that need review.
4. How is page speed impact used?
The calculator converts added milliseconds into a performance score. Lower impact earns a better score because efficient tag delivery supports user experience, crawl quality, and conversion rates.
5. What is considered a good audit score?
A score above 85 is excellent. Scores from 70 to 84 are strong. Scores from 55 to 69 need work, while anything below 55 usually needs a focused cleanup plan.
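The bands above translate directly into a small classifier. One assumption is noted in the code: the FAQ leaves a score of exactly 85 ambiguous ("above 85" vs. "70 to 84"), and this sketch treats it as excellent.

```python
def audit_band(score: float) -> str:
    """Map an overall audit score to the bands in the FAQ above."""
    if score >= 85:  # exactly 85 treated as excellent (an assumption)
        return "excellent"
    if score >= 70:
        return "strong"
    if score >= 55:
        return "needs work"
    return "focused cleanup needed"
```

Applied to the example table, Storefront A (72.40) and Lead Gen B (84.85) land in "strong", while Publisher C (54.60) falls into "focused cleanup needed".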
6. Can this audit help with GA4 and advertising pixels?
Yes. It applies to analytics tags, ad platform pixels, consent tools, and custom events. The model focuses on deployment quality rather than any one vendor.
7. Why track documentation coverage?
Documentation reduces key-person risk and speeds reviews, QA, and migrations. Well-documented tags are easier to validate, update, and retire without damaging measurement consistency.
8. Does server-side coverage guarantee compliance?
No. Server-side collection can improve resilience and control, but compliance still depends on lawful consent handling, data minimization, retention policies, and correct vendor configuration.