Field Note 005  ·  Measurement  ·  India B2B

How do we prove marketing's impact on revenue — not just leads?

MQL counts and campaign dashboards don’t convince CFOs. The real challenge is building revenue evidence that finance trusts — agreed in advance, consistently applied, and honest about what the data can and cannot show. This Field Note gives you the framework.

Reading time: 12 min
Category: Measurement
Industries: SaaS · IT/ITES · Manufacturing · Pharma · Other B2B

Why attribution in B2B is structurally broken

The problem is not that B2B companies lack data. It is that the standard attribution models were built for a different kind of buying journey — and applying them to B2B produces numbers that are precise but wrong.

The attribution illusion

Most B2B attribution models are built for B2C buying patterns — single buyer, short cycle, digital touchpoints. B2B deals involve multiple stakeholders, months of invisible influence, and touchpoints that never enter a CRM. Applying B2C attribution to B2B produces numbers that feel precise but measure the wrong things.

Why last-click is actively misleading

Last-click attribution assigns 100% of credit to the final touchpoint before conversion. In B2B, the final touchpoint is almost always a sales rep's email or a direct visit. This makes marketing look irrelevant while hiding the six months of content, events, and brand work that created the conditions for the deal.

The real problem is trust, not data

Attribution is not a measurement problem. It is a trust problem between marketing and finance. The CFO doesn't trust marketing's numbers because marketing picks the attribution model that makes marketing look best. The fix is not better analytics — it is a shared evidence standard that both functions agree on before the measurement begins.

The India B2B dimension

Most Indian B2B companies don't have the martech infrastructure for multi-touch attribution. They default to last-click or "source" fields in CRM that are filled in inconsistently by sales. The result: brand, events, and content appear to produce zero pipeline — which leads to those budgets being cut, which eventually damages pipeline quality in ways that take 12 months to show up.

The question map: L1 vs L2

L1 questions ask how to measure better. L2 questions ask what evidence would actually change a decision.

L1 — Questions asked out loud
  • How do we show the CFO that our marketing is working?
  • Which campaigns are actually driving revenue?
  • Our MQL numbers look good but sales says the leads are poor quality. Who is right?
  • We run a lot of content and events. How do we prove they are generating pipeline?
  • How do we measure brand if it doesn't show up in last-click attribution?

L2 — Questions that unlock the real answer
  • What evidence would actually convince our CFO — and have we agreed on that standard before we start measuring?
  • Are we measuring what marketing does, or what marketing causes? Those are different questions with different answers.
  • What is the cost of under-attributing marketing's contribution — what gets cut when the numbers look bad?
  • Can we agree with sales on a shared definition of pipeline quality before we argue about lead volume?
  • Which decisions would change if we had better attribution — and are those decisions worth the investment to measure?

The decision logic: six steps to credible revenue evidence

These steps build a revenue evidence framework that finance will trust — not because it is technically sophisticated, but because it is honest, consistently applied, and agreed in advance.

Step 1 — Accept that perfect attribution is not the goal

The goal is not perfect attribution. It is sufficient evidence to make better budget decisions and have credible conversations with finance. Chasing perfect attribution in B2B is expensive, technically complex, and produces numbers that still don't capture offline influence.

Logic
  • The real goal: Enough evidence to defend investment decisions and identify which channels are producing qualified pipeline
  • The cost test: Would better attribution change a material budget decision? If not, the measurement investment is not worth making
  • The trust test: Would the CFO accept the measurement methodology if you explained it? If not, fix the methodology before building the infrastructure
  • The India constraint: Most Indian B2B companies don't need sophisticated attribution tools. They need consistent CRM hygiene and honest sales-confirmed source tracking
Attribution model · What it measures well · What it distorts · B2B fit
  • Last click · Final conversion trigger · Everything that created the conditions for the deal · Poor
  • First touch · Original awareness source · All influence between awareness and close · Poor
  • Linear (equal weight) · Broad channel participation · Channels that do heavy lifting vs light touchpoints · Partial
  • Time decay · Recent influence on close · Long-cycle top-of-funnel work · Partial
  • Pipeline influence · Marketing touchpoints in active deals · Deals with no tracked digital touchpoints · Good
  • Sales-confirmed source · Actual deal origin (human-verified) · Requires sales discipline to maintain accuracy · Best for India B2B
The paradox

Companies that invest most heavily in attribution infrastructure often have the least credible numbers — because they over-engineer the measurement and under-engineer the human discipline required to keep data clean. Simple, consistently applied frameworks beat sophisticated models with inconsistent inputs.

Step 2 — Build a three-tier evidence framework

Not all evidence is equal. Build a framework with three tiers: sales-confirmed origin at the top, pipeline influence in the middle, and leading indicators at the bottom. Present all three to finance, always in that order.

Logic
  • Tier 1: Sales-confirmed pipeline origin — where did this deal actually start, verified by the rep who worked it
  • Tier 2: Pipeline influence — which marketing touchpoints appeared in the history of deals that closed, regardless of whether they caused the deal
  • Tier 3: Leading indicators — brand search volume, content engagement, event attendance, inbound inquiry trends
  • The rule: Never present Tier 3 evidence as proof of revenue impact. It is context. Tier 1 is proof.
Tier 1 — Highest credibility
Sales-confirmed pipeline origin

Ask sales: "How did this deal start?" Document the answer in CRM at opportunity creation — not retrospectively. This is imperfect but honest. When sales and marketing agree on the source, the number is defensible to finance.

Tier 2 — Strong supporting evidence
Pipeline influence reporting

Track which marketing touchpoints appeared in the history of deals that closed. This doesn't assign causation but shows correlation — useful for defending channel investment when direct attribution is impossible.

Tier 3 — Context, not proof
Leading indicators (traffic, engagement, brand search)

Useful for understanding direction of travel — is awareness growing, is content resonating — but not for proving revenue impact. Present these alongside Tier 1 and 2 evidence, never instead of them.

Why this ordering matters

Marketing teams that lead with traffic and engagement numbers in board presentations lose credibility with finance quickly. Finance thinks in revenue terms. Lead with the revenue-connected evidence, then use leading indicators to explain the investment thesis for channels with long feedback loops.

SaaS — Pipeline influence is your strongest tool

For SaaS with a product trial motion, track which marketing channels appear in the history of deals that converted from trial to paid. This gives you influence data that is more credible than last-click attribution and more honest than vanity metrics. Cohort analysis — grouping trial users by acquisition channel and tracking their 90-day conversion rate — is particularly powerful for defending content and SEO investment.
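The cohort analysis described above reduces to a small grouping computation. A minimal sketch, assuming trial records are available as tuples — the channel names, dates, and record shape are all hypothetical:

```python
from datetime import date

# Hypothetical trial records: (acquisition_channel, trial_start, paid_date or None).
trials = [
    ("content",     date(2024, 1, 5),  date(2024, 2, 20)),  # converted in 46 days
    ("content",     date(2024, 1, 9),  None),               # never converted
    ("paid_search", date(2024, 1, 3),  date(2024, 5, 1)),   # converted after 119 days
    ("paid_search", date(2024, 1, 7),  date(2024, 1, 30)),  # converted in 23 days
    ("events",      date(2024, 1, 12), None),
]

def cohort_conversion(trials, window_days=90):
    """Trial-to-paid conversion rate within a window, grouped by acquisition channel."""
    stats = {}  # channel -> [total_trials, conversions_within_window]
    for channel, start, paid in trials:
        entry = stats.setdefault(channel, [0, 0])
        entry[0] += 1
        if paid is not None and (paid - start).days <= window_days:
            entry[1] += 1
    return {ch: conv / total for ch, (total, conv) in stats.items()}

print(cohort_conversion(trials))
# → {'content': 0.5, 'paid_search': 0.5, 'events': 0.0}
```

Note that the late paid_search conversion is excluded by the 90-day window — which is exactly the argument for choosing the window length deliberately per channel.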

IT / ITES — Conference and analyst ROI requires a 12-month window

IT services deals that begin at Nasscom or Gartner events close 6-18 months later. Measuring event ROI at 30 or 90 days will always show zero. Build a 12-month tracking window for conference-sourced pipeline: log every conversation from each event in CRM within the week after, then measure opportunity creation, close rate, and deal value over the following 12 months. That is the only honest way to measure this channel.

Manufacturing — RFQ source is your pipeline metric

For manufacturing companies, the equivalent of pipeline is RFQ volume from qualified buyers. Track the source of every RFQ with the same discipline as a SaaS company tracks leads. Trade shows, distributor introductions, technical specification downloads, and directory listings all produce RFQs — but only consistent source tracking will tell you which ones produce the highest-value orders.

Pharma — KOL programme ROI requires a 24-month window

Pharma partnerships that originate from KOL introductions at CPhI or DIA typically take 18-36 months to produce signed contracts. Marketing attribution frameworks built for quarterly reporting will never capture this. Build a relationship-source tracking system that documents the origin of every significant business conversation — and measure the revenue impact over a 24-month rolling window.

Other B2B — Referral tracking is chronically underinvested

For BFSI, logistics, and relationship-driven B2B categories, referral pipeline is typically the highest-converting source and the most poorly tracked. Build a deliberate referral source field in your CRM, train sales to fill it at opportunity creation, and report referral-sourced pipeline as a primary marketing metric. The number will surprise most leadership teams.

Step 3 — Align on the evidence standard with finance before you measure

The most common reason marketing's revenue proof doesn't land with CFOs is that marketing chose the measurement methodology unilaterally. Finance distrusts numbers that marketing produces about marketing's own performance. The fix is to agree on the methodology together before the measurement begins.

Logic
  • The conversation: Schedule a working session with the CFO or finance lead before the next planning cycle. The agenda: what evidence would convince you that marketing is working?
  • The output: A written definition of what constitutes 'marketing-sourced pipeline', 'marketing-influenced pipeline', and 'marketing-unattributed pipeline' — agreed by both functions
  • The commitment: Finance commits to accepting the methodology for 12 months. Marketing commits to reporting honestly even when the numbers are unflattering
  • The review: Quarterly review of the methodology. Not the numbers — the methodology. Are we measuring what we agreed to measure?
Why this conversation is hard

Finance leaders who have been burned by inflated MQL numbers are skeptical of any marketing metric. Starting with a methodology conversation — rather than a results presentation — changes the dynamic. You are not defending past numbers. You are designing a shared measurement system.

Step 4 — Replace MQL reporting with pipeline quality metrics

MQLs measure marketing activity, not marketing impact. A team that produces 500 MQLs per month with a 2% SQL conversion rate is generating less revenue impact than one producing 50 MQLs with a 40% SQL conversion rate. Report quality metrics, not volume metrics.

Logic
  • Stop reporting: MQL volume, website traffic, social impressions, and email open rates as primary metrics
  • Start reporting: SQL conversion rate by channel, pipeline value by source, win rate by acquisition channel, time-to-close by source
  • The pipeline quality report: A monthly table showing channel, pipeline generated, SQLs, win rate, average deal size, and CAC payback — for every channel simultaneously
  • The discipline: Share this report with sales leadership every month. If they disagree with the numbers, fix the data. Don't adjust the methodology to make marketing look better
  • SQL (Sales Qualified Lead) — an agreed definition between sales and marketing of what qualifies as a real opportunity
  • Win rate, by channel source — track close rate by pipeline source, not just volume. A channel with low volume and a high win rate is more valuable than the reverse
  • Time to close, by acquisition channel — channels that produce faster-closing pipeline are worth more per lead than channels with long nurture cycles
  • CAC payback, by segment and channel — how long it takes a deal to recover the cost of acquiring it. Segment and channel combinations that pay back fastest deserve the most investment
  • Deal size, by source — marketing-sourced deals may be smaller than enterprise referrals; track average deal size by source to understand the revenue mix implications
  • Expansion rate, by acquisition channel — customers acquired through content and brand often expand at higher rates than paid-acquired customers. Track this to defend brand investment
The shift this produces

When marketing reports pipeline quality rather than lead volume, the conversation with sales changes from 'your leads are poor quality' to a shared data set where both functions can see which channels are producing qualified pipeline and which are producing noise. That shared visibility is the foundation of real alignment.
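The monthly pipeline quality report described in this step can be approximated directly from closed-deal records. A minimal sketch — the record fields, channel names, and figures are invented for illustration, not a prescribed schema:

```python
# Hypothetical deal records: (channel, became_sql, won, deal_value_lakh, days_to_close).
deals = [
    ("content",     True,  True,  12.0,  60),
    ("content",     True,  False,  0.0,  90),
    ("paid_search", True,  True,   4.0,  30),
    ("paid_search", False, False,  0.0,   0),
    ("events",      True,  True,  25.0, 180),
]

def quality_report(deals):
    """Per-channel quality: lead count, SQLs, win rate among SQLs, average won deal size."""
    report = {}
    for channel, sql, won, value, _days in deals:
        r = report.setdefault(channel, {"leads": 0, "sqls": 0, "wins": 0, "won_value": 0.0})
        r["leads"] += 1
        r["sqls"] += int(sql)
        r["wins"] += int(won)
        if won:
            r["won_value"] += value
    for r in report.values():
        r["win_rate"] = r["wins"] / r["sqls"] if r["sqls"] else 0.0
        r["avg_deal"] = r["won_value"] / r["wins"] if r["wins"] else 0.0
    return report

for channel, r in quality_report(deals).items():
    print(channel, r["sqls"], f'{r["win_rate"]:.0%}', r["avg_deal"])
```

Even on these toy numbers the point of the report shows: the low-volume events channel carries the largest average deal size, which a volume-only MQL report would hide.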

Step 5 — Build the long-cycle measurement model for brand and events

Brand investment and event presence produce revenue impact over 12-24 month cycles. A measurement framework built for 90-day reporting will always show these channels as performing poorly. You need a separate model for long-cycle channels.

Logic
  • Cohort tracking: Track every significant touchpoint — conference attendance, content download, brand search — against deal outcomes over 12-24 months
  • Brand search trend: Monitor branded search volume monthly. A rising trend correlates with pipeline quality improvement 6-12 months later — not proof, but a leading indicator worth tracking
  • Win/loss interviews: Conduct structured win/loss interviews with every significant deal. Ask specifically: where did you first hear about us, and what gave you confidence in us before the sales conversation?
  • Incremental test: Run a geographic or segment-based test — invest in brand in one market and not another, then measure the pipeline quality difference over 12 months. Imperfect, but more credible than correlation
The honest framing

Tell the CFO: brand investment produces revenue impact that our current measurement infrastructure cannot fully capture in a 90-day window. Here is the methodology we use to build a 12-month evidence base, and here is what it showed last year. Honesty about the measurement limitation is more credible than false precision.
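The geographic incremental test mentioned in this step reduces to comparing pipeline quality between the invested market and the control market. A sketch with all numbers invented — real tests would need comparable markets and a larger sample before the difference means anything:

```python
# Hypothetical 12-month outcomes: the "test" market received brand investment,
# the "control" market did not.
outcomes = {
    "test":    {"sqls": 40, "wins": 14},  # invested market
    "control": {"sqls": 38, "wins": 8},   # comparison market
}

def win_rate_lift(outcomes):
    """Percentage-point win-rate difference between test and control markets."""
    rate = lambda market: outcomes[market]["wins"] / outcomes[market]["sqls"]
    return rate("test") - rate("control")

print(f"win-rate lift: {win_rate_lift(outcomes):+.1%}")
```

A positive lift is still correlation with a design behind it, not proof — which is exactly the honest framing this step recommends presenting to finance.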

Step 6 — Report revenue evidence, not marketing activity

The final step is changing what goes into board and finance presentations. Marketing activity reports — campaigns run, content produced, events attended — are not evidence of revenue impact. They are a description of work. Finance needs revenue evidence.

Logic
  • Revenue evidence: Pipeline sourced, pipeline influenced, deals closed from marketing-sourced pipeline, CAC by channel, expansion rate by acquisition source
  • Investment thesis: For every channel in the budget, one sentence — we invest in this channel because it produces this outcome, as evidenced by this data over this time period
  • Honest gaps: Where the evidence is incomplete, say so explicitly. 'We don't have clean attribution for conference-sourced pipeline, but here is our 12-month tracking methodology' is more credible than a number you made up
  • The ask: Frame budget requests as investment theses with evidence, not as cost line items: 'We need ₹X for this channel because it produced ₹Y in pipeline last year with a Z% close rate'
What changes

Finance leaders who have seen inflated, inconsistent marketing metrics for years respond well to honest, methodology-first reporting. You are not trying to impress them with big numbers. You are trying to build a shared model of marketing's contribution to revenue — one that both functions can trust and act on.

Real-world examples

How companies across SaaS, IT services, manufacturing, and pharma have built credible revenue evidence — and what changed when they did.

Gong — B2B SaaS
Built their entire marketing attribution model around pipeline quality, not lead volume

Gong's marketing team became known for reporting pipeline quality metrics rather than MQL volume — tracking win rate, deal size, and sales cycle length by acquisition channel, and presenting this data jointly with sales leadership in every board meeting. The result was a shared language between marketing and sales about what "good pipeline" looked like, and a budget allocation that followed quality signals rather than volume signals. Their content-driven pipeline consistently showed higher win rates and larger deal sizes than paid-acquired pipeline — a finding that justified significant investment in original data-based content at a time when most B2B SaaS companies were spending the same budget on paid LinkedIn.

Indian IT Services — Evidence-based budget defence
Used 12-month conference tracking to prove Nasscom ROI to a skeptical CFO

A mid-sized Indian IT services company faced a CFO who wanted to cut the Nasscom event budget because "we can't prove it generates revenue." The marketing team built a 12-month tracking model: every conversation at Nasscom was logged as a CRM contact, a follow-up sequence was triggered, and opportunity creation was tracked over the following year. The analysis showed that Nasscom-sourced conversations had a 28% higher opportunity creation rate and a 15% shorter sales cycle than cold outbound. That evidence — specific, methodology-transparent, jointly verified with sales — saved the event budget and doubled it the following year.

Freshworks — Moving from MQL to revenue metrics
Rebuilt marketing reporting around revenue impact after MQL numbers lost credibility with sales

Freshworks went through a period where their MQL volume was growing but sales conversion was declining. The disconnect — high MQL volume, poor SQL rate — was eroding trust between marketing and sales. The fix was a complete overhaul of how marketing was measured: MQL reporting was deprioritised, and the primary metric became SQL conversion rate by channel and pipeline value by acquisition source. This forced a channel quality conversation that the MQL metric had been obscuring. Several high-volume but low-quality channels were cut. Lower-volume channels with strong SQL rates received more investment. The quality of sales-marketing conversation improved because both functions were looking at the same revenue-connected numbers.

Manufacturing B2B — RFQ Source Tracking
Built a simple RFQ source tracking system that revealed trade shows as highest-value channel

An Indian industrial components manufacturer had been allocating the majority of their marketing budget to digital advertising because it was "measurable." A simple RFQ source tracking exercise — adding a mandatory source field to every RFQ logged in their system — revealed that trade shows produced 62% of their highest-value RFQs (above ₹50L order value) while generating only 15% of total RFQ volume. Digital advertising produced 40% of RFQ volume but less than 8% of high-value RFQs. The channel allocation shifted significantly toward trade shows the following year. The measurement was not sophisticated — it was a dropdown field in a spreadsheet filled in by the sales team. The discipline of consistent data entry produced a revenue insight that no analytics platform had surfaced.

Indian B2B SaaS — Win/loss interviews as attribution evidence
Used structured win/loss interviews to prove content's role in the buying journey

A B2B SaaS company selling globally had significant content investment that showed no attribution in their CRM. The last-click data showed paid search and direct traffic as the top sources. A structured win/loss interview programme — 20 interviews with recently closed accounts — revealed that 14 of the 20 had read at least three pieces of the company's content before the sales conversation began, and 9 cited specific articles as the reason they trusted the company enough to take the first meeting. None of this showed up in last-click attribution. The interview evidence was presented to the CFO alongside the CRM data, with an explicit explanation of why the two numbers differed. The content budget was protected based on the qualitative evidence rather than the attribution data.

When the logic works — and when it breaks

Works when
  • The evidence standard is agreed with finance before measurement begins
  • Sales and marketing use the same CRM source fields consistently
  • Pipeline quality metrics replace MQL volume as the primary report
  • Long-cycle channels have a separate 12-month measurement model
  • Win/loss interviews are done regularly and findings are shared with finance
Breaks when
  • Marketing chooses the attribution model unilaterally to maximise credit
  • CRM source fields are filled in inconsistently or retrospectively
  • Leading indicators (traffic, social) are presented as revenue proof
  • Brand and event channels are measured on a 30-day window
  • The methodology is changed when the numbers are unflattering

Your move

One thing to do this week

Pull your last 20 closed deals and ask one person from sales to confirm the actual origin of each one — where did the relationship start, not what the CRM says. Compare that against your current attribution data. The gap between those two numbers is the size of your under-attribution problem. That gap is what is getting your budget cut.
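The audit described above is a row-by-row comparison of two source fields. A minimal sketch — the deal IDs, source labels, and mismatch pattern are all hypothetical:

```python
# Hypothetical audit rows: (deal_id, crm_source, sales_confirmed_source).
audit = [
    ("D-001", "direct",      "conference"),
    ("D-002", "paid_search", "paid_search"),
    ("D-003", "direct",      "referral"),
    ("D-004", "email",       "content"),
    ("D-005", "direct",      "direct"),
]

def attribution_gap(audit):
    """Share of deals where the CRM source disagrees with the sales-confirmed
    origin, plus the list of mismatching deal IDs."""
    mismatches = [deal for deal, crm, confirmed in audit if crm != confirmed]
    return len(mismatches) / len(audit), mismatches

gap, mismatched = attribution_gap(audit)
print(f"{gap:.0%} of audited deals are mis-attributed: {mismatched}")
```

In this toy sample, "direct" in the CRM repeatedly masks a conference or referral origin — the pattern the audit is designed to surface.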

Then schedule a 60-minute working session with your CFO or finance lead. The agenda is not to present results — it is to agree on what evidence would convince them that marketing is contributing to revenue. That conversation, done before the measurement, is worth more than any analytics tool.

More Field Notes from Digital Uncovered

Every month, one hard B2B marketing problem.
First principles thinking. Real India context.

Browse all Field Notes