003 · Selected work · CPG · multi-brand media · 2021

Three brands. One model.
Survived two CMOs.

We wrote the measurement plan first — seven decisions the number had to defend — then built a weighted multi-touch model with per-channel time-decay calibrated to a Nielsen panel. The argument moved from "which weights" to "which decisions."

Duration: 9 weeks · Team: 3 · Programs: Diagnose · Advance · Year: 2021

Brief

The ask

A multi-brand CPG holding company asked us to rebuild the cross-channel attribution model three of its brands shared. The previous model had been rebuilt three times in eighteen months and abandoned each time. The constraint was nine weeks. Whatever we shipped had to be defensible enough to survive the next CMO change.

Starting state: three brands under one holding company sharing a single paid media buy across Meta, Google, an emerging TikTok line, and a programmatic DSP. Each brand's marketing team had its own weighting opinion and its own preferred channel. Last-click numbers from the platforms disagreed with the agency's MTA model, which disagreed with the quarterly revenue finance was actually booking. Nobody trusted the report. Nobody could prove which version was right.

Problem

The real diagnosis

The reported symptom was clean: "We can't agree on what's driving revenue." Three brands, three opinions, three competing reports. The marketing leadership wanted a fourth attribution methodology — a better weighting scheme, a cleaner multi-touch model, anything that would end the argument.

The real diagnosis was different. This wasn't a model problem. It was a decisions problem. The team kept rebuilding the model because it had never agreed on what the model needed to defend. Each rebuild started with methodology and ended with a CMO overruling it on a campaign she'd championed. The right starting move was a measurement plan, not more SQL. The argument had to be moved upstream of the math.

Quantify: three brands, eighteen months, three rebuilds. The agency's MTA model and the Nielsen weekly panel were out of agreement by 22–41% depending on the brand × channel cell — and nobody had documented why the gap was acceptable in some cells and not in others. The calibration step was the work nobody had done.

Approach

Five moves, in order

Move 01

Write the measurement plan first. Methodology second.

Three weeks of work before we touched a model. We documented seven decisions the attribution number had to defend each quarter: brand-vs-performance split, in-quarter creative reallocation, halo accounting on shared SKUs, regional buys, retailer co-op, channel kills, annual budget reset. For each decision, we wrote the smallest signal that would justify changing course. Each brand's CMO signed the page. That page — not the model — was the contract the work had to defend.
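
A sketch of how a plan like that can sit next to the code. This is a hypothetical reference table (names are illustrative, not the client's; BigQuery dialect to match the mart below), so that when a model comment cites decision 3, the citation resolves to a signed row:

-- Hypothetical: the signed measurement plan as a small reference
-- table. decision_id is what the mart's comments cite.

create table if not exists `dim.measurement_plan` (
    decision_id     int64   not null,  -- 1..7, cited in model comments
    decision        string  not null,  -- e.g. 'halo accounting on shared SKUs'
    smallest_signal string  not null,  -- what justifies changing course
    signed_by       string  not null,  -- one signature per brand CMO
    signed_on       date    not null
);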

Move 02

Land spend, touches, and sales into one warehouse.

Meta, Google, TikTok, and the DSP for spend and clickstream. First-party sales by brand from the e-commerce stack and a weekly retail sell-through feed. Everything keyed to a shared dim_campaign with brand, channel, sub-channel, and creative-family spelled the same way across every source. Bronze was load-as-is. Silver was where identity and brand attribution got resolved. The grain was brand × channel × week — not user — because most of the spend hit channels with no deduplicated user ID.
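
A sketch of what that unification step can look like on the spend side. Source and column names here are illustrative, not the client's; the dialect is BigQuery to match the mart below:

-- Hypothetical silver model: conform bronze spend, loaded as-is
-- from Meta, Google, TikTok, and the DSP, to the shared
-- dim_campaign keys at the brand x channel x week grain.

create or replace view `silver.spend_unified` as
select
    c.brand_id,
    c.channel,
    c.sub_channel,
    c.creative_family,
    date_trunc(s.spend_date, week)             as fiscal_week,
    sum(s.spend_cad)                           as spend_cad
from `bronze.platform_spend` s
join `dim.dim_campaign` c
  on c.campaign_id = s.campaign_id             -- one spelling per source
group by 1, 2, 3, 4, 5;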

Move 03

Build the weighted multi-touch model the measurement plan demanded.

One dbt mart. Reconstructed touch path per pseudo-converter, position-weighted time-decay with a per-channel half-life, halo exclusions across the three shared SKUs so spend on Brand A could not claim a conversion on Brand B's overlapping line, calibration factors joined per brand × channel off a versioned dim. The comments in the model tie each step back to a numbered decision in the measurement plan — so a reviewer can read the SQL and a CMO can read the contract, and the two reconcile:

-- models/marts/mart_attributed_revenue.sql
-- Weighted multi-touch attribution at brand x channel x fiscal_week.
-- Implements decisions 1, 3, and 5 of the signed measurement plan:
--   1. brand-vs-performance split is preserved per brand
--   3. halo on shared SKUs excluded before crediting
--   5. per-channel time-decay calibrated to Nielsen panel
-- BigQuery dialect.

with touchpoints as (

    -- Reconstruct each pseudo-converter's path. Trailing 90-day
    -- scan, and only touches inside the 30-day pre-conversion
    -- window get credit, per the measurement plan. conversion_ts
    -- is the conversion's timestamp, carried on each unified
    -- touch row.

    select
        customer_pseudo_id,
        brand_id,
        channel,
        touch_ts,
        conversion_ts,
        sku_id,
        conversion_id,
        conversion_revenue_cad,
        row_number() over (
            partition by conversion_id
            order by touch_ts
        )                                          as touch_position,
        count(*) over (
            partition by conversion_id
        )                                          as journey_length
    from `silver.touchpoints_unified`
    where touch_ts >= timestamp_sub(current_timestamp(), interval 90 day)
      and touch_ts between
              timestamp_sub(conversion_ts, interval 30 day)
          and conversion_ts

),

-- Halo exclusion: if a touch is on Brand A but the conversion SKU
-- is one of the three lines shared with Brand B or C, we drop the
-- touch from credit. Decision 3 of the measurement plan.

halo_filtered as (

    select t.*
    from touchpoints t
    left join `dim.shared_sku_halo` h
      on  h.sku_id   = t.sku_id
      and h.brand_id = t.brand_id
    where h.sku_id is null

),

-- Position-weighted time-decay. half_life_days comes from the
-- versioned channel-calibration dim, not from a global var. Credit
-- shares are normalized per conversion so raw attributed revenue
-- sums back to the revenue finance books.

touch_weights as (

    select
        h.brand_id,
        h.channel,
        h.touch_ts,
        h.conversion_id,
        h.conversion_revenue_cad,
        -- recency decay at the channel's half-life, scaled by
        -- position in the journey
        pow(
            0.5,
            timestamp_diff(h.conversion_ts, h.touch_ts, hour)
              / (c.half_life_days * 24.0)
        )
          * safe_divide(h.touch_position, h.journey_length)
                                                   as raw_weight
    from halo_filtered h
    join `dim.channel_calibration` c
      on  c.brand_id   = h.brand_id
      and c.channel    = h.channel
      and date(h.touch_ts) between c.valid_from and c.valid_to

),

credit_shares as (

    select
        brand_id,
        channel,
        touch_ts,
        conversion_revenue_cad
          * safe_divide(
                raw_weight,
                sum(raw_weight) over (partition by conversion_id)
            )                                      as touch_revenue_cad
    from touch_weights

),

weighted as (

    select
        brand_id,
        channel,
        date_trunc(date(touch_ts), week)           as fiscal_week,
        sum(touch_revenue_cad)                     as attributed_revenue_raw
    from credit_shares
    group by 1, 2, 3

),

-- Panel calibration. Multiplier is per brand x channel, versioned.
-- This is the table everyone argues about on Mondays. Decision 5.

calibrated as (

    select
        w.brand_id,
        w.channel,
        w.fiscal_week,
        w.attributed_revenue_raw,
        w.attributed_revenue_raw
          * c.panel_calibration_factor             as attributed_revenue,
        c.calibration_version
    from weighted w
    join `dim.channel_calibration` c
      on  c.brand_id   = w.brand_id
      and c.channel    = w.channel
      and w.fiscal_week between c.valid_from and c.valid_to

)

select * from calibrated
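
One check worth pairing with a mart like this (a sketch, not the engagement's shipped test): because credit shares are normalized per conversion, raw attributed revenue should reconcile to booked conversion revenue, short of conversions whose every touch was halo-excluded. The pre-calibration mart name below is illustrative.

-- Hypothetical reconciliation check, per brand. A recovery_ratio
-- well below 1.0 means halo exclusions or missing touches are
-- dropping whole conversions; look here before arguing calibration.

select
    booked.brand_id,
    booked.booked_cad,
    attributed.attributed_cad,
    safe_divide(attributed.attributed_cad, booked.booked_cad)
                                               as recovery_ratio
from (
    select brand_id, sum(conversion_revenue_cad) as booked_cad
    from (
        select distinct brand_id, conversion_id, conversion_revenue_cad
        from `silver.touchpoints_unified`
    )
    group by 1
) booked
left join (
    select brand_id, sum(attributed_revenue_raw) as attributed_cad
    from `marts.attributed_revenue_raw`          -- the weighted CTE, materialized
    group by 1
) attributed
  on attributed.brand_id = booked.brand_id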

Move 04

Calibrate to the panel. Land the calibration table as a versioned dim.

Where the raw model disagreed with the Nielsen weekly panel beyond tolerance, we applied a per-brand × per-channel calibration factor and a per-channel half-life. The factors live in dim_channel_calibration — versioned with valid_from / valid_to rows so any past report rebuilds bit-exact. Arguments about the model became arguments about that table. Channels were retired and added by editing one row, reviewed by performance media and brand each quarter. That is how the model stayed alive under new leadership.
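
The shape that dim can take. Column names match the mart above; the DDL itself is a sketch:

-- dim.channel_calibration: one row per brand x channel x validity
-- window. Editing this table, not the model, is how channels get
-- added, retired, or recalibrated.

create table if not exists `dim.channel_calibration` (
    brand_id                 string  not null,
    channel                  string  not null,
    half_life_days           float64 not null,  -- per-channel time-decay
    panel_calibration_factor float64 not null,  -- brand x channel vs the Nielsen panel
    calibration_version      string  not null,  -- printed on every report
    valid_from               date    not null,
    valid_to                 date    not null   -- closed interval: past reports rebuild bit-exact
);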

Move 05

Put the quarterly review on the calendar. Step out.

The measurement plan is the contract. The model is the implementation. The quarterly review is where the two get reconciled in front of finance — agenda fixed to the seven signed decisions, calibration versions on the page, channel changes requiring a dim edit and a sign-off. We were out by week nine. The model has since survived two further CMO changes. Nobody has rebuilt it.

Outcome

2 CMOs.
1 model.

What got proven was not a set of weights. What got proven was a set of decisions. The methodology held because the decisions held — and the decisions held because each brand's CMO had signed them before anyone wrote a window function.

7 decisions

Signed by each brand CMO. The measurement plan the number had to defend — brand-vs-performance split, in-quarter creative reallocation, halo on shared SKUs, four more.

Per-channel decay

Time-decay weights calibrated against the Nielsen weekly panel — one half-life per brand × channel, stored as a versioned dim.

3 shared SKUs

Halo accounting written into the model. Spend on Brand A no longer claimed conversions on Brand B's overlapping line — the dispute that had broken every prior rebuild.

2 CMOs

The model has not been rebuilt. The measurement plan is the contract — successor CMOs argue the calibration table, not the methodology.

What this enabled, beyond the model surviving: the argument moved from "which weights" to "which decisions." A halo dispute that had broken three prior rebuilds — Brand A's spend claiming credit on a shared SKU — was settled in week five by a one-row edit to the halo dim, not a model rebuild. Two successor CMOs inherited a measurement plan they could read on day one and a calibration table they could argue with on day two. That is the engagement we're proudest of.

Credits

Who shipped this

Lead Analyst · Haris Naeem
Marketing Analytics · Contractor — anonymized
Data Engineering · Contractor — anonymized
Client Stakeholder · CMO — anonymized
Client Stakeholder · Director, Brand Marketing — anonymized
Client Stakeholder · Director, Performance Media — anonymized
Client Sponsor · VP Growth — anonymized

Client-side names withheld for confidentiality. Named references available on request after a discovery call.

Next

Running a multi-brand media buy that can't agree on what works?

Send a note describing the brands sharing the buy, the decisions the attribution number has to defend, and where your model and your panel currently disagree. If it's a fit, a named reference from this engagement is available after a discovery call.

© 2026 Coded Level — Toronto analytics studio. Toronto, ON · Canada