MQL counts and campaign dashboards don’t convince CFOs. The real challenge is building revenue evidence that finance trusts — agreed in advance, consistently applied, and honest about what the data can and cannot show. This Field Note gives you the framework.
Field Note 005 · Measurement
How do we prove marketing's impact on revenue — not just leads?

The problem is not that B2B companies lack data. It is that the standard attribution models were built for a different kind of buying journey, and applying them to B2B produces numbers that are precise but wrong.
Most B2B attribution models are built for B2C buying patterns — single buyer, short cycle, digital touchpoints. B2B deals involve multiple stakeholders, months of invisible influence, and touchpoints that never enter a CRM. Applying B2C attribution to B2B produces numbers that feel precise but measure the wrong things.
Last-click attribution assigns 100% of credit to the final touchpoint before conversion. In B2B, the final touchpoint is almost always a sales rep's email or a direct visit. This makes marketing look irrelevant while hiding the six months of content, events, and brand work that created the conditions for the deal.
Attribution is not a measurement problem. It is a trust problem between marketing and finance. The CFO doesn't trust marketing's numbers because marketing picks the attribution model that makes marketing look best. The fix is not better analytics — it is a shared evidence standard that both functions agree on before the measurement begins.
Most Indian B2B companies don't have the martech infrastructure for multi-touch attribution. They default to last-click or "source" fields in CRM that are filled in inconsistently by sales. The result: brand, events, and content appear to produce zero pipeline — which leads to those budgets being cut, which eventually damages pipeline quality in ways that take 12 months to show up.
Level 1 questions ask how to measure better. Level 2 questions ask what evidence would actually change a decision.
These steps build a revenue evidence framework that finance will trust — not because it is technically sophisticated, but because it is honest, consistently applied, and agreed in advance.
Accept that perfect attribution is not the goal
The goal is not perfect attribution. It is sufficient evidence to make better budget decisions and have credible conversations with finance. Chasing perfect attribution in B2B is expensive, technically complex, and produces numbers that still don't capture offline influence.
| Attribution model | What it measures well | What it distorts | B2B fit |
|---|---|---|---|
| Last click | Final conversion trigger | Everything that created the conditions for the deal | Poor |
| First touch | Original awareness source | All influence between awareness and close | Poor |
| Linear (equal weight) | Broad channel participation | Channels that do heavy lifting vs light touchpoints | Partial |
| Time decay | Recent influence on close | Long-cycle top-of-funnel work | Partial |
| Pipeline influence | Marketing touchpoints in active deals | Deals with no tracked digital touchpoints | Good |
| Sales-confirmed source | Actual deal origin (human-verified) | Requires sales discipline to maintain accuracy | Best for India B2B |
Companies that invest most heavily in attribution infrastructure often have the least credible numbers — because they over-engineer the measurement and under-engineer the human discipline required to keep data clean. Simple, consistently applied frameworks beat sophisticated models with inconsistent inputs.
Build a three-tier evidence framework
Not all evidence is equal. Build a framework with three tiers: sales-confirmed origin at the top, pipeline influence in the middle, and leading indicators at the bottom. Present all three to finance, always in that order.
Tier 1: sales-confirmed origin. Ask sales: "How did this deal start?" Document the answer in the CRM at opportunity creation, not retrospectively. This is imperfect but honest. When sales and marketing agree on the source, the number is defensible to finance.
Tier 2: pipeline influence. Track which marketing touchpoints appeared in the history of deals that closed. This doesn't assign causation, but it shows correlation, which is useful for defending channel investment when direct attribution is impossible.
Tier 3: leading indicators. Useful for understanding direction of travel (is awareness growing, is content resonating) but not for proving revenue impact. Present these alongside Tier 1 and Tier 2 evidence, never instead of them.
Marketing teams that lead with traffic and engagement numbers in board presentations lose credibility with finance quickly. Finance thinks in revenue terms. Lead with the revenue-connected evidence, then use leading indicators to explain the investment thesis for channels with long feedback loops.
For SaaS with a product trial motion, track which marketing channels appear in the history of deals that converted from trial to paid. This gives you influence data that is more credible than last-click attribution and more honest than vanity metrics. Cohort analysis — grouping trial users by acquisition channel and tracking their 90-day conversion rate — is particularly powerful for defending content and SEO investment.
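The cohort analysis described above can be sketched in a few lines of Python. Everything here is hypothetical — the channel names, dates, and outcomes are illustrative, and a real version would read from your CRM or product analytics export rather than a hard-coded list:

```python
from datetime import date, timedelta

# Hypothetical trial records: (acquisition_channel, trial_start, paid_date or None).
trials = [
    ("content",     date(2024, 1, 10), date(2024, 2, 20)),  # converted within 90 days
    ("content",     date(2024, 1, 15), None),               # never converted
    ("paid_search", date(2024, 1, 12), date(2024, 6, 1)),   # converted, but after 90 days
    ("paid_search", date(2024, 1, 20), date(2024, 2, 1)),   # converted within 90 days
]

def cohort_conversion(trials, window_days=90):
    """90-day trial-to-paid conversion rate per acquisition channel."""
    stats = {}
    for channel, start, paid in trials:
        total, converted = stats.get(channel, (0, 0))
        within = paid is not None and (paid - start) <= timedelta(days=window_days)
        stats[channel] = (total + 1, converted + (1 if within else 0))
    return {ch: converted / total for ch, (total, converted) in stats.items()}

print(cohort_conversion(trials))
```

Note that the late converter counts against its channel in the 90-day view; that is deliberate, and it is why the window length should be agreed with finance up front rather than chosen after the fact.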
IT services deals that begin at Nasscom or Gartner events close 6-18 months later. Measuring event ROI at 30 or 90 days will always show zero. Build a 12-month tracking window for conference-sourced pipeline. Track every conversation at each event, create an opportunity in CRM the week after, and measure close rate and deal value 12 months later. That is the only honest way to measure this channel.
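A minimal sketch of that 12-month window logic, assuming hypothetical opportunity records (values in ₹; a real version would query the CRM for event-sourced opportunities):

```python
from datetime import date

# Hypothetical conference-sourced opportunities:
# (created_date, closed_won_date or None, deal_value_inr).
event_opps = [
    (date(2023, 3, 1), date(2023, 11, 15), 40_00_000),  # ₹40L, closed in month 9
    (date(2023, 3, 1), None, 0),                         # still open or lost
    (date(2023, 3, 2), date(2024, 6, 10), 25_00_000),   # closed outside the window
]

def twelve_month_outcome(opps, window_days=365):
    """Close rate and closed value for opportunities won within the window."""
    won_values = [value for created, won, value in opps
                  if won is not None and (won - created).days <= window_days]
    return {"close_rate": len(won_values) / len(opps),
            "closed_value": sum(won_values)}

print(twelve_month_outcome(event_opps))
```

The same function run at 30 or 90 days would report a close rate of zero for this data, which is exactly the distortion the section describes.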
For manufacturing companies, the equivalent of pipeline is RFQ volume from qualified buyers. Track the source of every RFQ with the same discipline as a SaaS company tracks leads. Trade shows, distributor introductions, technical specification downloads, and directory listings all produce RFQs — but only consistent source tracking will tell you which ones produce the highest-value orders.
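The RFQ source tally needs nothing more sophisticated than a count per source. A sketch with hypothetical sources and a ₹50L high-value threshold (both assumptions, not prescriptions):

```python
# Hypothetical RFQ log: (source, expected_order_value_inr).
rfqs = [
    ("trade_show",  75_00_000),
    ("trade_show",  12_00_000),
    ("digital_ads",  8_00_000),
    ("distributor", 60_00_000),
    ("digital_ads",  5_00_000),
]

HIGH_VALUE = 50_00_000  # ₹50L threshold; pick whatever "high-value" means for you

def rfq_by_source(rfqs, threshold=HIGH_VALUE):
    """Per source: (total RFQ count, count above the high-value threshold)."""
    out = {}
    for source, value in rfqs:
        total, high = out.get(source, (0, 0))
        out[source] = (total + 1, high + (1 if value >= threshold else 0))
    return out

print(rfq_by_source(rfqs))
```

The insight comes from comparing the two counts per source: a channel can dominate RFQ volume while contributing almost nothing above the threshold.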
Pharma partnerships that originate from KOL introductions at CPhI or DIA typically take 18-36 months to produce signed contracts. Marketing attribution frameworks built for quarterly reporting will never capture this. Build a relationship-source tracking system that documents the origin of every significant business conversation — and measure the revenue impact over a 24-month rolling window.
For BFSI, logistics, and relationship-driven B2B categories, referral pipeline is typically the highest-converting source and the most poorly tracked. Build a deliberate referral source field in your CRM, train sales to fill it at opportunity creation, and report referral-sourced pipeline as a primary marketing metric. The number will surprise most leadership teams.
Align on the evidence standard with finance before you measure
The most common reason marketing's revenue proof doesn't land with CFOs is that marketing chose the measurement methodology unilaterally. Finance distrusts numbers that marketing produces about marketing's own performance. The fix is to agree on the methodology together before the measurement begins.
Finance leaders who have been burned by inflated MQL numbers are skeptical of any marketing metric. Starting with a methodology conversation — rather than a results presentation — changes the dynamic. You are not defending past numbers. You are designing a shared measurement system.
Replace MQL reporting with pipeline quality metrics
MQLs measure marketing activity, not marketing impact. A team producing 500 MQLs per month at a 2% SQL conversion rate generates 10 SQLs; a team producing 50 MQLs at a 40% rate generates 20. The second team has twice the revenue impact with a tenth of the volume. Report quality metrics, not volume metrics.
When marketing reports pipeline quality rather than lead volume, the conversation with sales changes from 'your leads are poor quality' to a shared data set where both functions can see which channels are producing qualified pipeline and which are producing noise. That shared visibility is the foundation of real alignment.
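A minimal sketch of a channel-level quality report. The funnel numbers are hypothetical (they mirror the 500-at-2% versus 50-at-40% comparison above), and a real version would also carry win rate, deal size, and cycle length per channel:

```python
# Hypothetical monthly funnel data: channel -> (mqls, sqls, pipeline_value_inr).
funnel = {
    "content":     (50,  20, 3_00_00_000),   # ₹3Cr pipeline from 50 MQLs
    "paid_social": (500, 10,   80_00_000),   # ₹80L pipeline from 500 MQLs
}

def quality_report(funnel):
    """SQL conversion rate and pipeline value per MQL, by channel."""
    return {
        channel: {
            "sql_rate": sqls / mqls,
            "pipeline_per_mql": pipeline / mqls,
        }
        for channel, (mqls, sqls, pipeline) in funnel.items()
    }

for channel, m in quality_report(funnel).items():
    print(f"{channel}: SQL rate {m['sql_rate']:.0%}, ₹{m['pipeline_per_mql']:,.0f} per MQL")
```

Pipeline per MQL is the number that reframes the budget conversation: it makes the low-volume, high-quality channel legible to finance in revenue terms.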
Build the long-cycle measurement model for brand and events
Brand investment and event presence produce revenue impact over 12-24 month cycles. A measurement framework built for 90-day reporting will always show these channels as performing poorly. You need a separate model for long-cycle channels.
Tell the CFO: brand investment produces revenue impact that our current measurement infrastructure cannot fully capture in a 90-day window. Here is the methodology we use to build a 12-month evidence base, and here is what it showed last year. Honesty about the measurement limitation is more credible than false precision.
Report revenue evidence, not marketing activity
The final step is changing what goes into board and finance presentations. Marketing activity reports — campaigns run, content produced, events attended — are not evidence of revenue impact. They are a description of work. Finance needs revenue evidence.
Finance leaders who have seen inflated, inconsistent marketing metrics for years respond well to honest, methodology-first reporting. You are not trying to impress them with big numbers. You are trying to build a shared model of marketing's contribution to revenue — one that both functions can trust and act on.
How companies across SaaS, IT services, manufacturing, and pharma have built credible revenue evidence — and what changed when they did.
Gong's marketing team became known for reporting pipeline quality metrics rather than MQL volume — tracking win rate, deal size, and sales cycle length by acquisition channel, and presenting this data jointly with sales leadership in every board meeting. The result was a shared language between marketing and sales about what "good pipeline" looked like, and a budget allocation that followed quality signals rather than volume signals. Their content-driven pipeline consistently showed higher win rates and larger deal sizes than paid-acquired pipeline — a finding that justified significant investment in original data-based content at a time when most B2B SaaS companies were spending the same budget on paid LinkedIn.
A mid-sized Indian IT services company faced a CFO who wanted to cut the Nasscom event budget because "we can't prove it generates revenue." The marketing team built a 12-month tracking model: every conversation at Nasscom was logged as a CRM contact, a follow-up sequence was triggered, and opportunity creation was tracked over the following year. The analysis showed that Nasscom-sourced conversations had a 28% higher opportunity creation rate and a 15% shorter sales cycle than cold outbound. That evidence — specific, methodology-transparent, jointly verified with sales — saved the event budget and doubled it the following year.
Freshworks went through a period where their MQL volume was growing but sales conversion was declining. The disconnect — high MQL volume, poor SQL rate — was eroding trust between marketing and sales. The fix was a complete overhaul of how marketing was measured: MQL reporting was deprioritised, and the primary metric became SQL conversion rate by channel and pipeline value by acquisition source. This forced a channel quality conversation that the MQL metric had been obscuring. Several high-volume but low-quality channels were cut. Lower-volume channels with strong SQL rates received more investment. The quality of sales-marketing conversation improved because both functions were looking at the same revenue-connected numbers.
An Indian industrial components manufacturer had been allocating the majority of their marketing budget to digital advertising because it was "measurable." A simple RFQ source tracking exercise — adding a mandatory source field to every RFQ logged in their system — revealed that trade shows produced 62% of their highest-value RFQs (above ₹50L order value) while generating only 15% of total RFQ volume. Digital advertising produced 40% of RFQ volume but less than 8% of high-value RFQs. The channel allocation shifted significantly toward trade shows the following year. The measurement was not sophisticated — it was a dropdown field in a spreadsheet filled in by the sales team. The discipline of consistent data entry produced a revenue insight that no analytics platform had surfaced.
A B2B SaaS company selling globally had significant content investment that showed no attribution in their CRM. The last-click data showed paid search and direct traffic as the top sources. A structured win/loss interview programme — 20 interviews with recently closed accounts — revealed that 14 of the 20 had read at least three pieces of the company's content before the sales conversation began, and 9 cited specific articles as the reason they trusted the company enough to take the first meeting. None of this showed up in last-click attribution. The interview evidence was presented to the CFO alongside the CRM data, with an explicit explanation of why the two numbers differed. The content budget was protected based on the qualitative evidence rather than the attribution data.
Pull your last 20 closed deals and ask one person from sales to confirm the actual origin of each one — where did the relationship start, not what the CRM says. Compare that against your current attribution data. The gap between those two numbers is the size of your under-attribution problem. That gap is what is getting your budget cut.
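The audit is a simple mismatch count. The deal records and source labels below are hypothetical; the real input is your last 20 closed deals with both the CRM source field and the sales-confirmed origin filled in:

```python
# Hypothetical audit rows: CRM-recorded source vs sales-confirmed origin.
deals = [
    {"crm_source": "direct",      "sales_confirmed": "trade_show"},
    {"crm_source": "paid_search", "sales_confirmed": "paid_search"},
    {"crm_source": "direct",      "sales_confirmed": "referral"},
    {"crm_source": "direct",      "sales_confirmed": "content"},
]

def attribution_gap(deals):
    """Share of deals where the CRM source disagrees with sales-confirmed origin."""
    mismatches = sum(1 for d in deals if d["crm_source"] != d["sales_confirmed"])
    return mismatches / len(deals)

print(f"Under-attribution gap: {attribution_gap(deals):.0%}")
```

In this toy data the gap is 75%, with "direct" absorbing credit that belongs to trade shows, referrals, and content — the pattern the audit is designed to surface.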
Then schedule a 60-minute working session with your CFO or finance lead. The agenda is not to present results — it is to agree on what evidence would convince them that marketing is contributing to revenue. That conversation, done before the measurement, is worth more than any analytics tool.
Every month, one hard B2B marketing problem.
First principles thinking. Real India context.