The problem with rules-based scoring
Most marketing platforms still use rules-based lead scoring. The concept is simple: assign points for each action a lead takes. Opens an email? +5. Visits the pricing page? +20. Works at a company with more than 500 employees? +15. Reaches 80 points? Route to sales.
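The mechanics above fit in a few lines. This is a minimal sketch; the point values and the 80-point threshold are the illustrative ones from the examples, not a recommendation:

```python
# Illustrative rules-based scoring: fixed points per action, route at a threshold.
RULES = {
    "opened_email": 5,
    "visited_pricing_page": 20,
    "company_over_500_employees": 15,
}
ROUTE_THRESHOLD = 80

def rules_based_score(events):
    """Sum the fixed point values for each action the lead has taken."""
    return sum(RULES.get(event, 0) for event in events)

lead_events = ["opened_email", "visited_pricing_page", "opened_email"]
score = rules_based_score(lead_events)
print(score)                     # 30
print(score >= ROUTE_THRESHOLD)  # False: not routed to sales
```

Note that every weight in `RULES` is a hand-picked constant, which is exactly the weakness discussed below.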
The problem is that these rules are configured by humans with limited data and a lot of assumptions. The weights are usually arbitrary ("pricing page feels worth 20 points") and never updated. They're the same in month twelve as in month one, regardless of what actually converted.
In practice, a rules-based score is an approximation of what the marketing team thinks matters — not a measurement of what actually predicts conversion. The two are often very different.
What predictive scoring does differently
A predictive lead scoring model learns from historical outcomes. It looks at every contact in your CRM, identifies which ones became customers, and finds the combination of attributes and behaviours that distinguished them from contacts who didn't.
The model doesn't start with assumptions about what matters. It asks: given everything I know about this contact — their firmographic data, which pages they visited, how they engaged with emails, how quickly they progressed through the funnel — what is the probability that they convert within the next 90 days?
That probability is the score. It's expressed as a percentage (or mapped to a 1–100 scale) and updated in near-real-time as new data comes in.
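To make the idea concrete, here is a hypothetical sketch of what a trained model reduces to at prediction time: learned weights over contact features, passed through a sigmoid to produce a conversion probability. The feature names and weights are invented for illustration; a real model would learn them from your CRM history:

```python
import math

# Invented weights standing in for coefficients a model would learn from data.
WEIGHTS = {
    "visited_pricing_page": 1.4,
    "opened_email": 0.2,
    "company_size_fit": 0.9,
    "days_since_last_visit": -0.05,
}
BIAS = -3.0

def conversion_probability(features):
    """Weighted sum of features, squashed to a 0-1 probability."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

contact = {
    "visited_pricing_page": 1,
    "opened_email": 3,
    "company_size_fit": 1,
    "days_since_last_visit": 2,
}
p = conversion_probability(contact)
print(round(p * 100))  # probability mapped onto a 1-100 style score: 45
```

Re-scoring is just re-running this function whenever a feature changes, which is why the score can update in near-real-time.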
Because the model is trained on your data rather than a generic template, it captures the patterns that are specific to your business. A high-intent signal for a B2B SaaS business might be visiting the integration documentation. For a B2C subscription company, it might be returning to the site three days after the first visit. Rules-based scoring treats these the same — predictive scoring learns the difference.
The features that drive prediction accuracy
Predictive models draw on a combination of data types:
Firmographic data — company size, industry, geography, funding status, technology stack. This determines fit: is this the kind of company that typically buys from you?
Behavioural data — pages visited, emails opened and clicked, content downloaded, webinars attended, pricing pages viewed, trials started. This determines intent: is this person actively evaluating you?
Velocity data — how quickly is this contact progressing? Someone who visits the site twice in a week after downloading a whitepaper is exhibiting different velocity than someone who downloaded the same whitepaper six months ago and hasn't returned.
Negative signals — contacts who unsubscribed, bounced, or explicitly disqualified themselves should immediately reduce the score. Predictive models handle this naturally because unsubscribe and churn events appear in the training data.
Recency weighting — a click from yesterday is worth more than a click from three months ago. Predictive models apply recency weighting automatically based on how conversion patterns decay over time in your data.
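Recency weighting can be sketched as exponential decay over event age. The 30-day half-life here is an assumed value for illustration; in practice the model infers the decay rate from how conversion patterns fade in your own data:

```python
# Recency-weighted engagement: each click decays with a 30-day half-life
# (an assumed constant, not a universal one).
HALF_LIFE_DAYS = 30.0

def recency_weight(days_ago):
    """Weight of an event that happened `days_ago` days in the past."""
    return 0.5 ** (days_ago / HALF_LIFE_DAYS)

def engagement_feature(click_ages_in_days):
    """Recency-weighted click count for one contact."""
    return sum(recency_weight(d) for d in click_ages_in_days)

print(round(recency_weight(1), 3))   # 0.977: yesterday's click counts almost fully
print(round(recency_weight(90), 3))  # 0.125: a click from three months ago barely counts
print(round(engagement_feature([1, 7, 90]), 2))
```

The same weighted-sum shape works for any engagement event, not just clicks.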
What a production rollout usually needs
A production predictive scoring rollout usually starts only after a team has enough history to train on — often 100+ closed deals and several months of engagement data.
The implementation checklist itself is straightforward:
1. Define what counts as a conversion event.
2. Validate the historical data quality feeding the model.
3. Hold out a validation set so the team can review precision, recall, and false positives before using the score operationally.
4. Decide where the score will actually be used: routing, segmentation, prioritisation, or suppression.
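The holdout review in step 3 amounts to comparing thresholded scores against actual conversions. A minimal sketch, with an illustrative threshold and made-up holdout data:

```python
# Evaluate scores on a held-out set before using them operationally.
def precision_recall(y_true, y_pred):
    """Precision and recall for binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

THRESHOLD = 0.5
holdout_scores = [0.9, 0.7, 0.4, 0.2, 0.8, 0.3]  # model probabilities
holdout_labels = [1, 0, 1, 0, 1, 0]              # actual conversions

predicted = [s >= THRESHOLD for s in holdout_scores]
precision, recall = precision_recall(holdout_labels, predicted)
print(f"precision={precision:.2f} recall={recall:.2f}")  # 0.67 / 0.67
```

Low precision here means sales will be routed leads that don't close; low recall means real buyers are being missed. Both are worth checking before the score drives routing.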
If you are evaluating a platform, the critical questions are whether scores are explainable, refreshed regularly, and wired into operational workflows rather than sitting as a static dashboard metric.
Using scores in your automation
The score is most powerful when it's wired into your automation rather than just displayed in a list.
Routing: When a score crosses a threshold, automatically assign the contact to a sales rep, send a personalised follow-up, or move them into a higher-touch nurture sequence.
Content personalisation: High-scoring contacts see case studies from similar companies; low-scoring contacts receive educational content to build awareness. The content path adapts to the likelihood of purchase.
Sales prioritisation: Instead of sales working a flat list of leads, they see a ranked queue sorted by conversion probability. Time spent on leads that are most likely to close is the clearest driver of sales efficiency.
Suppression: Contacts with rapidly declining scores (multiple non-engages, unsubscribes on other channels) are deprioritised automatically, reducing wasted outreach and protecting sender reputation.
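The four use cases above can be sketched as one decision function driven by the score. The thresholds and action names are hypothetical; the point is that routing, personalisation, and suppression all key off the same number:

```python
# Hypothetical automation wiring: thresholds and score trend choose the action.
ROUTE_AT = 0.70     # assumed routing threshold
SUPPRESS_AT = 0.10  # assumed suppression floor

def next_action(score, previous_score, unsubscribed):
    if unsubscribed or score < SUPPRESS_AT:
        return "suppress"             # stop outreach, protect sender reputation
    if score >= ROUTE_AT:
        return "route_to_sales"       # assign a rep, high-touch follow-up
    if score < previous_score:
        return "educational_nurture"  # declining intent: rebuild awareness
    return "case_study_nurture"       # rising intent: show proof points

print(next_action(0.82, 0.60, False))  # route_to_sales
print(next_action(0.05, 0.20, False))  # suppress
print(next_action(0.40, 0.55, False))  # educational_nurture
```

A ranked sales queue is then just the contacts sorted by `score` descending, which is the prioritisation case.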
The combination of these use cases — routing, personalisation, prioritisation, suppression — is where predictive scoring pays back many times over its implementation cost.