Enter weighted fit, engagement, intent, recency, and risk values to estimate a normalized lead score out of 100.
The Plotly chart displays each component's contribution to the final result. Penalty values are subtracted, so they reduce the overall score.
| Lead | Fit Avg | Intent Inputs | Days Since Activity | Penalty Input | Score | Band |
|---|---|---|---|---|---|---|
| Enterprise SaaS Demo | 8.6 | 7 pricing visits / Demo | 2 | 1 | 83.36 | Hot |
| Mid-Market Webinar | 6.4 | 3 pricing visits / No demo | 7 | 2 | 48.82 | Nurture |
| Low-Fit Content Lead | 4.0 | 1 pricing visit / No demo | 21 | 6 | 26.6 | Cold |
| Partner Referral | 7.8 | 5 pricing visits / Demo | 3 | 0 | 72.23 | Warm |
Total Lead Score = Fit Score + Engagement Score + Intent Score + Source Score + Recency Score - Penalty Score
- Fit Score = Average of five fit inputs, scaled to 40 points.
- Engagement Score = Email opens, page views, and content downloads, scaled to 22 points.
- Intent Score = Pricing page visits plus demo request strength, scaled to 23 points.
- Source Score = Lead source quality, scaled to 8 points.
- Recency Score = Freshness value that decays from 7 points to 0 as inactivity increases.
- Penalty Score = Negative signal deduction scaled to 15 points.
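The component breakdown above can be sketched in Python. Only the per-component point caps (40, 22, 23, 8, 7, and 15) come from the breakdown; the 0-10 fit scale, the fractional inputs, the 30-day linear recency decay, and the function name are illustrative assumptions, not the calculator's exact internals.

```python
def lead_score(fit_avg, engagement_pct, intent_pct, source_pct,
               days_since_activity, penalty_pct):
    """Assumed sketch: fit_avg is on a 0-10 scale; the other inputs
    are 0-1 fractions of their component caps."""
    fit = (fit_avg / 10) * 40          # average of five fit inputs, 40-point cap
    engagement = engagement_pct * 22   # opens, page views, downloads
    intent = intent_pct * 23           # pricing visits plus demo strength
    source = source_pct * 8            # lead source quality
    # Recency: assumed linear decay from 7 points at day 0 to 0 at day 30.
    recency = max(0.0, 7 * (1 - days_since_activity / 30))
    penalty = penalty_pct * 15         # negative signal deduction
    raw = fit + engagement + intent + source + recency - penalty
    return round(max(0.0, min(100.0, raw)), 2)
```

A perfect lead (maximum fit, engagement, intent, and source, active today, no penalties) lands exactly at 100 because the positive caps sum to 100.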
- Score the lead against company, role, industry, size, and budget fit.
- Enter engagement data such as email opens, page views, and downloads.
- Add buying intent using pricing page visits and the demo request field.
- Enter recency and any penalty signals that reduce sales readiness.
- Submit the form to view the lead score, band, recommendation, and graph above the form.
- Use the CSV and PDF buttons to export the current scored result.
Data foundations for better prioritization
Predictive lead scoring helps revenue teams rank accounts by likely conversion instead of relying on intuition. A useful model combines firmographic fit, engagement depth, buying intent, source credibility, recency, and disqualifying risk. In many pipelines, top-quartile leads produce far more meetings than low-score records, making prioritization measurable, repeatable, auditable, and easier to coach across teams.
Scoring architecture behind the calculator
This calculator converts each input into a weighted contribution and then normalizes the result to a score out of 100. Fit variables measure alignment with the ideal customer profile. Engagement variables capture opens, visits, and content interaction. Intent inputs represent stronger signals such as pricing requests or demo interest. Recency prevents old actions from looking artificially strong in weekly reporting cycles.
Operational bands for sales action
Raw scores are easier to use when grouped into operating bands. Scores from 80 to 100 can support immediate sales outreach and tighter follow-up windows. Scores from 60 to 79 typically justify active nurture plus rep review. Scores from 40 to 59 often fit automated sequences. Scores below 40 usually reflect weak fit, limited intent, or stale activity that requires renewed qualification work.
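The band thresholds above translate directly into a small lookup function; the band labels match the sample table earlier on this page.

```python
def score_band(score):
    """Map a 0-100 lead score to the operating bands described above."""
    if score >= 80:
        return "Hot"       # immediate outreach, tight follow-up window
    if score >= 60:
        return "Warm"      # active nurture plus rep review
    if score >= 40:
        return "Nurture"   # automated sequences
    return "Cold"          # renewed qualification work
```

Applied to the sample table, 83.36 maps to Hot, 72.23 to Warm, 48.82 to Nurture, and 26.6 to Cold.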
Penalties and recency controls
Penalties improve accuracy because not every interaction deserves positive credit. A prospect may open emails repeatedly but still lack budget, authority, geography fit, or valid contact data. This model subtracts points for those negative factors. The result is a more realistic ranking, especially for teams that handle large inbound volumes where false positives can waste calling time, marketing spend, and forecast confidence.
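One way to express this penalty logic is a flat deduction per disqualifying flag, capped at the model's 15-point penalty scale and clamped so the score never goes negative. The flag names and per-flag weights below are assumptions for illustration, not the calculator's actual values.

```python
# Hypothetical per-flag deductions; the model caps total penalties at 15 points.
PENALTY_WEIGHTS = {
    "no_budget": 6,
    "invalid_contact": 5,
    "excluded_geography": 4,
    "student_inquiry": 5,
}

def apply_penalties(score, flags):
    """Subtract capped penalty points from a score, floored at zero."""
    deduction = min(15, sum(PENALTY_WEIGHTS.get(f, 0) for f in flags))
    return max(0.0, score - deduction)
```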
Forecasting value across channels
The score becomes more valuable when reviewed by campaign, territory, and source. If partner referrals average 76 while paid social averages 48, managers can rebalance spend and staffing with stronger evidence. Reps can also compare response times against score bands. High-scoring leads contacted within one hour often convert materially better than those with delayed follow-up, especially in competitive categories with multiple vendors.
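The channel comparison above is a simple aggregation; a minimal sketch, assuming scored leads arrive as (source, score) pairs, with source names chosen only to mirror the 76-versus-48 example:

```python
from collections import defaultdict

def average_score_by_source(leads):
    """leads: iterable of (source, score) pairs -> mean score per source."""
    totals = defaultdict(lambda: [0.0, 0])
    for source, score in leads:
        totals[source][0] += score
        totals[source][1] += 1
    return {s: round(t / n, 1) for s, (t, n) in totals.items()}
```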
Calibration and governance
To keep the model credible, compare scores with actual downstream outcomes every month. Review meeting creation, opportunity rate, average deal size, and win rate by score band. If lower-score leads close unexpectedly, an important variable may be missing. If high-score leads stall, a current weight may be too generous. Regular calibration protects trust and keeps prioritization useful during product, pricing, and market shifts.
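The monthly review described above boils down to computing win rate per band. A minimal sketch, assuming outcomes are recorded as (band, won) pairs:

```python
def win_rate_by_band(outcomes):
    """outcomes: iterable of (band, won_bool) -> win rate per band."""
    counts = {}
    for band, won in outcomes:
        total, wins = counts.get(band, (0, 0))
        counts[band] = (total + 1, wins + int(won))
    return {b: round(w / t, 2) for b, (t, w) in counts.items()}
```

If the resulting rates do not rise monotonically from Cold to Hot, that is the signal to revisit weights or add a missing variable.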
How should teams choose weights?
Start with historical wins and meetings. Give more weight to factors that consistently appear before opportunity creation, then review conversion by score band monthly.
Is this score a replacement for CRM stages?
No. It complements stages by ranking priority inside each stage. A qualified lead can still carry lower urgency if intent and recency are weak.
What is a good threshold for SDR follow-up?
Many teams begin with 70 or 80 as a fast-response threshold, then adjust after measuring meeting rates, contact rates, and deal progression.
Why does recency matter so much?
Older engagement decays in value because buying momentum changes quickly. Recency stops dormant leads from outranking newer, high-intent prospects.
Should negative signals reduce the score?
Yes. Missing budget, invalid contact data, excluded geographies, or student inquiries can create false positives if they are not penalized.
How often should the model be recalibrated?
Review it monthly and recalibrate quarterly. Faster reviews are useful after pricing changes, new channels, product launches, or major market shifts.