Case Study
This is a case study built from a synthetic dataset for the Royal Bank of Canada (RBC). The SQL is written in a Postgres-friendly dialect and has been optimized for viewing on this site.
Context
Credit card rewards programs shape how people spend, where they spend, and which products they keep in their wallet. They also carry real cost. Points, promos, and rich earn rates can push a portfolio into negative margin while still being celebrated as engagement.
That risk is the core motivation here. If an offer is not incremental, the bank is simply paying for behaviour that would have happened anyway. And if incentives are aimed at low-margin categories, the program can look active while profitability quietly declines.
This project is built around that tension. The setup includes three credit card tiers (ION, Avion Visa Infinite, Infinite Privilege), a small set of merchant-category groups, and a single quarter where the objective is to grow profitable card spend without subsidizing free behaviour.
The economics follow the mechanics of real card programs. Points are issued off spend using tier and category rules. Redemptions are treated as a cost-per-point assumption. Revenue is driven by interchange, revolving interest, and annual fees. Risk is represented as an expected-loss proxy so profit reflects both upside and constraint.
The main challenge is measurement and decisioning. The goal is a SQL-first workflow that moves from raw transactions to a decision table that assigns offers to cardholders, with a clear rationale for each assignment.
The Problem at Hand
Credit card portfolios grow through two levers: moving cardholders into higher-value products and increasing profitable spend. In a typical quarter, business teams want both. They want upgrades, reactivation, and more activity in the categories that actually contribute margin, all without turning points into a runaway expense.
In this setup, that translates into four objectives. The first is to upgrade ION cardholders into Avion Visa Infinite. The second is to upgrade the highest-value Avion Visa Infinite cardholders into Infinite Privilege. The third is to reactivate dormant accounts using a limited-time points accelerator. The fourth is to drive incremental spend in high-margin merchant categories while avoiding rewards spend on behaviour that would have occurred regardless.
Measurement is the central constraint. An offer is only defensible if it produces incremental lift relative to a comparable control group. Treatment versus control design is therefore not an optional detail. It is the mechanism that separates genuine uplift from seasonal patterns, regression to the mean, and normal spend volatility.
Execution adds another layer of realism. Budgets are finite and offer inventory is constrained. Upgrade capacity is limited, eligibility rules narrow the target pool, and an offer that maximizes lift in theory may be infeasible at scale. The analytical task is to quantify lift, translate lift into net profit, and allocate offers within these operational constraints without compromising portfolio profitability.
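The core lift computation behind that design is small enough to sketch up front. A toy Python illustration, with invented numbers, of the pre/post, treatment-versus-control comparison the SQL performs later:

```python
# Toy illustration of treatment-versus-control lift on pre/post spend deltas.
# All numbers are invented; the pipeline below computes the same quantity in SQL.

def avg(xs):
    return sum(xs) / len(xs)

# Per-customer spend delta = post-period spend - pre-period spend.
treatment_deltas = [35.0, 48.0, 22.0, 41.0]  # customers exposed to the offer
control_deltas = [14.0, 19.0, 9.0, 18.0]     # held-out customers

# Incremental spend per customer: difference of the group means.
lift = avg(treatment_deltas) - avg(control_deltas)
print(round(lift, 2))  # 21.5
```

Because both groups experience the same seasonality and baseline drift, subtracting the control mean removes those effects from the estimate.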
1. Data Model
Before building features or calculating profitability, the first step is to formalize the data grains. Credit card datasets are especially sensitive to grain mismatch. A single careless join can combine transaction-level spend with account-level attributes, duplicate rows, and inflate points issued, revenue, or measured lift without any visible error message.
The solution is structural clarity. Each table is defined by what it represents and what a single row means. Once the grains are explicit, joins remain stable, aggregates behave as expected, and downstream measurement becomes routine computation rather than forensic cleanup.
Tables
| Table | Grain | Purpose | Key columns |
|---|---|---|---|
| dim_customer | 1 row per customer | Customer profile + credit proxy | customer_id, income_annual, credit_score, tenure_months |
| dim_card_account | 1 row per card account | Card tier + pricing + APR | card_account_id, card_tier, annual_fee, apr_revolve |
| fact_card_txn | 1 row per transaction | Spend behaviour | txn_date, mcc_group, amount, is_card_present |
| fact_rewards_earn | 1 row per earn event/day | Points issued | earn_date, points_earned, earn_reason |
| fact_rewards_redeem | 1 row per redemption | Points burned | redeem_date, redeem_type, points_redeemed |
| dim_offer | 1 row per offer | Campaign metadata + constraints | offer_type, capacity_targets, capacity_accepts, unit_*_cost |
| fact_campaign_exposure | 1 row per customer-offer | Holdout assignment | assigned_group, assigned_at |
| fact_offer_acceptance | 1 row per customer-offer | Acceptance/activation events | accepted, accepted_date |
Merchant Category Code (MCC) Groups Used
| mcc_group | Example intent |
|---|---|
| groceries | everyday essentials |
| gas_ev | fuel + EV charging |
| travel | flights/hotels/travel agencies |
| dining | restaurants/quick-service |
| transit_rides | transit + rideshare |
| streaming_digital | subscriptions/digital |
| other | everything else |
2. Create Tables and Generate a Synthetic Portfolio (PostgreSQL)
Synthetic credit card portfolios are useful because they let you test rewards and offer logic without touching sensitive customer data. A good simulation should feel like a real card book, with clear tier stratification and spend patterns that reflect how people actually use their cards.
This portfolio targets roughly 30,000 cardholders. Product tier skews upward as income and credit quality increase, and six months of transactions are generated across a small set of merchant-category groups to capture the main channels of everyday spend.
Precision is not the goal. Behaviour is. The dataset is designed to produce a plausible tier mix, imperfect spending patterns, and enough volume to meaningfully stress-test the SQL workflow that follows.
Create Tables
drop table if exists fact_offer_acceptance;
drop table if exists fact_campaign_exposure;
drop table if exists fact_card_txn;
drop table if exists fact_rewards_earn;
drop table if exists fact_rewards_redeem;
-- drop child tables before their referenced parents so foreign keys do not block the drops
drop table if exists dim_card_account;
drop table if exists dim_offer;
drop table if exists dim_customer;
create table dim_customer (
customer_id bigserial primary key,
age int not null,
region text not null,
income_annual numeric(12,2) not null,
credit_score int not null,
tenure_months int not null,
is_active boolean not null
);
create table dim_card_account (
card_account_id bigserial primary key,
customer_id bigint not null references dim_customer(customer_id),
card_tier text not null, -- ion, avion_infinite, avion_privilege
is_open boolean not null,
open_date date not null,
annual_fee numeric(10,2) not null,
apr_revolve numeric(6,4) not null
);
create table fact_card_txn (
txn_id bigserial primary key,
card_account_id bigint not null references dim_card_account(card_account_id),
customer_id bigint not null references dim_customer(customer_id),
txn_date date not null,
mcc_group text not null,
amount numeric(12,2) not null,
is_card_present boolean not null default true
);
create table fact_rewards_earn (
earn_id bigserial primary key,
card_account_id bigint not null references dim_card_account(card_account_id),
customer_id bigint not null references dim_customer(customer_id),
earn_date date not null,
points_earned numeric(14,2) not null,
earn_reason text not null -- base, promo_multiplier
);
create table fact_rewards_redeem (
redeem_id bigserial primary key,
customer_id bigint not null references dim_customer(customer_id),
redeem_date date not null,
redeem_type text not null,
points_redeemed numeric(14,2) not null
);
create table dim_offer (
offer_id bigserial primary key,
offer_name text not null,
offer_type text not null, -- tier_upgrade, points_accelerator
start_date date not null,
end_date date not null,
capacity_targets int not null,
capacity_accepts int not null,
unit_marketing_cost numeric(10,2) not null,
unit_incentive_cost numeric(10,2) not null,
min_credit_score int not null
);
create table fact_campaign_exposure (
exposure_id bigserial primary key,
offer_id bigint not null references dim_offer(offer_id),
customer_id bigint not null references dim_customer(customer_id),
assigned_group text not null, -- treatment, control
assigned_at timestamp not null
);
create table fact_offer_acceptance (
offer_id bigint not null references dim_offer(offer_id),
customer_id bigint not null references dim_customer(customer_id),
accepted boolean not null,
accepted_date date,
primary key (offer_id, customer_id)
);

Seed Customer Base, Card Accounts, and 6 Months of Transactions
insert into dim_customer (age, region, income_annual, credit_score, tenure_months, is_active)
select
(18 + floor(random()*55))::int,
(array['GTA','Ontario','Quebec','BC','Prairies','Atlantic'])[1 + floor(random()*6)::int],
round((35000 + random()*165000)::numeric, 2),
(540 + floor(random()*280))::int,
(1 + floor(random()*144))::int,
(random() > 0.05)
from generate_series(1, 30000);
insert into dim_card_account (customer_id, card_tier, is_open, open_date, annual_fee, apr_revolve)
select
c.customer_id,
case
when c.credit_score >= 760 and c.income_annual >= 130000 and random() > 0.55 then 'avion_privilege'
when c.credit_score >= 680 and random() > 0.25 then 'avion_infinite'
else 'ion'
end,
true,
(date '2022-01-01' + (floor(random()*1200))::int),
case
when c.credit_score >= 760 and c.income_annual >= 130000 and random() > 0.55 then 399
when c.credit_score >= 680 and random() > 0.25 then 120
else 0
end,
case
when c.credit_score >= 760 then 0.2099
when c.credit_score >= 680 then 0.2199
else 0.2599
end
from dim_customer c
where c.is_active
and random() > 0.18;
insert into fact_card_txn (card_account_id, customer_id, txn_date, mcc_group, amount, is_card_present)
select
ca.card_account_id,
ca.customer_id,
(date '2025-07-01' + (floor(random()*184))::int),
(array['groceries','gas_ev','travel','dining','transit_rides','streaming_digital','other'])[1 + floor(random()*7)::int],
round((5 + random()*240)::numeric, 2),
(random() > 0.15)
from dim_card_account ca
join generate_series(1, 26) g on true
where ca.is_open
and random() < 0.55;

3. Rewards Policy Engine (Points Issued)
Credit card rewards programs are driven by a simple mechanism: transactions earn points, but the earn rate changes by card tier and merchant category. That structure is enough to create meaningful differences in cost and profitability once tiers and offers are compared.
In this model, transactions are rolled up into one daily earn record per card account, then tier-by-category rules are applied to calculate points issued. This keeps the grain clean for analysis and prevents the common problem of recalculating points at the transaction level across multiple queries and reports.
Promotion design is the main challenge. As accelerators, targeted multipliers, and limited-time boosts stack on top of base earn rules, complexity grows quickly. The solution is a layered rules approach: keep the base logic stable, add promo rule tables on top, and log the incremental point deltas so the system stays auditable and scalable.
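As a reference point before the SQL, the tier-by-category base rules can be restated as a small lookup function. This Python sketch is not part of the pipeline; it simply mirrors the rules in the query below and is handy for spot-checking individual transactions:

```python
# Python restatement of the base earn rules applied by the policy engine:
# ion earns 1.5x on everyday categories, avion_infinite earns 1.25x on travel,
# avion_privilege earns a flat 1.25x, and everything else earns 1x.

ION_BOOST_CATEGORIES = {'groceries', 'gas_ev', 'transit_rides', 'streaming_digital'}

def points_earned(card_tier: str, mcc_group: str, amount: float) -> float:
    if card_tier == 'ion':
        rate = 1.5 if mcc_group in ION_BOOST_CATEGORIES else 1.0
    elif card_tier == 'avion_infinite':
        rate = 1.25 if mcc_group == 'travel' else 1.0
    elif card_tier == 'avion_privilege':
        rate = 1.25
    else:
        rate = 0.0  # unknown tiers earn nothing, mirroring the SQL's else branch
    return round(amount * rate, 2)

print(points_earned('ion', 'groceries', 100.0))         # 150.0
print(points_earned('avion_infinite', 'dining', 80.0))  # 80.0
```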
with tx as (
select
t.txn_id,
t.card_account_id,
t.customer_id,
t.txn_date,
t.mcc_group,
t.amount,
ca.card_tier
from fact_card_txn t
join dim_card_account ca on ca.card_account_id = t.card_account_id
),
earned as (
select
card_account_id,
customer_id,
txn_date as earn_date,
sum(
case
when card_tier = 'ion' then
case
when mcc_group in ('groceries','gas_ev','transit_rides','streaming_digital') then amount * 1.5
else amount * 1.0
end
when card_tier = 'avion_infinite' then
case
when mcc_group = 'travel' then amount * 1.25
else amount * 1.0
end
when card_tier = 'avion_privilege' then
amount * 1.25
else 0
end
) as points_earned
from tx
group by 1,2,3
)
insert into fact_rewards_earn (card_account_id, customer_id, earn_date, points_earned, earn_reason)
select
card_account_id, customer_id, earn_date, round(points_earned::numeric, 2), 'base'
from earned;

Completing the Economics
Redemptions are an optional layer, but they make a rewards dataset behave like a real credit card book. In practice, points do not disappear after issuance. They accumulate, they get redeemed, and they show up later as a measurable cost.
In this simulation, I seed a small redemption stream to reflect that lifecycle. It is not needed for the offer math that follows, but it completes the economics by linking points issuance to eventual program expense.
insert into fact_rewards_redeem (customer_id, redeem_date, redeem_type, points_redeemed)
select
customer_id,
(date '2025-07-01' + (floor(random()*184))::int),
(array['statement_credit','payback_purchase','pay_bills','travel'])[1 + floor(random()*4)::int],
round((500 + random()*45000)::numeric, 2)
from dim_customer
where is_active
and random() < 0.10;

4. Feature Marts: Monthly Spend and Contribution Profit
Raw transactions are necessary, but they are not decision-ready. Analysts rarely target offers or measure lift directly off swipe-level data because it is too granular and too noisy to compare cleanly over time.
The solution is a reusable monthly mart. It supports both targeting and measurement, using the same definitions for spend, points, and engagement. Alongside it, a contribution-style profit proxy converts activity into economics, putting rewards cost and risk in the same frame as revenue so results can be evaluated on margin, not activity alone.
Mart 1: Monthly Card Behavior
create temp table mart_monthly_card as
with m as (
select
ca.card_account_id,
ca.customer_id,
ca.card_tier,
date_trunc('month', t.txn_date)::date as month,
sum(t.amount) as spend,
sum(case when t.is_card_present then t.amount else 0 end) as spend_cp,
sum(case when not t.is_card_present then t.amount else 0 end) as spend_cnp
from dim_card_account ca
join fact_card_txn t on t.card_account_id = ca.card_account_id
where t.txn_date >= date '2025-07-01'
and t.txn_date < date '2026-01-01'
group by 1,2,3,4
),
p as (
select
card_account_id,
customer_id,
date_trunc('month', earn_date)::date as month,
sum(points_earned) as points_earned
from fact_rewards_earn
where earn_date >= date '2025-07-01'
and earn_date < date '2026-01-01'
group by 1,2,3
)
select
m.*,
coalesce(p.points_earned, 0) as points_earned
from m
left join p
on p.card_account_id = m.card_account_id
and p.customer_id = m.customer_id
and p.month = m.month;

Mart 2: Monthly Contribution Profit
create temp table mart_monthly_profit as
with params as (
select
0.0100::numeric as rewards_cost_per_point, -- tune this
0.35::numeric as revolve_rate, -- tune this
0.02::numeric as monthly_loss_rate -- tune this
),
rates as (
select
card_tier,
0.0120::numeric as cp_rate,
0.0160::numeric as cnp_rate
from (values ('ion'),('avion_infinite'),('avion_privilege')) v(card_tier)
),
tier_uplift as (
select
card_tier,
case
when card_tier = 'ion' then 0.0000
when card_tier = 'avion_infinite' then 0.0015
when card_tier = 'avion_privilege' then 0.0030
else 0.0000
end as premium_bps
from (values ('ion'),('avion_infinite'),('avion_privilege')) v(card_tier)
),
base as (
select
mm.card_account_id,
mm.customer_id,
mm.card_tier,
mm.month,
mm.spend,
mm.spend_cp,
mm.spend_cnp,
mm.points_earned,
(ca.annual_fee/12.0) as monthly_fee_income,
ca.apr_revolve
from mart_monthly_card mm
join dim_card_account ca on ca.card_account_id = mm.card_account_id
)
select
b.*,
round(((b.spend_cp * (r.cp_rate + tu.premium_bps)) + (b.spend_cnp * (r.cnp_rate + tu.premium_bps)))::numeric, 2) as interchange_income,
round((b.spend * (select revolve_rate from params) * b.apr_revolve / 12.0)::numeric, 2) as revolve_interest_income,
round((b.points_earned * (select rewards_cost_per_point from params))::numeric, 2) as rewards_cost,
round((b.spend * (select monthly_loss_rate from params))::numeric, 2) as expected_loss,
round((
((b.spend_cp * (r.cp_rate + tu.premium_bps)) + (b.spend_cnp * (r.cnp_rate + tu.premium_bps)))
+ (b.spend * (select revolve_rate from params) * b.apr_revolve / 12.0)
+ b.monthly_fee_income
- (b.points_earned * (select rewards_cost_per_point from params))
- (b.spend * (select monthly_loss_rate from params))
)::numeric, 2) as contribution_profit
from base b
join rates r on r.card_tier = b.card_tier
join tier_uplift tu on tu.card_tier = b.card_tier;

Quick Read Table: Tier-Level Performance Snapshot
select
card_tier,
count(distinct customer_id) as customers,
round(avg(spend)::numeric, 2) as avg_monthly_spend,
round(avg(points_earned)::numeric, 2) as avg_monthly_points,
round(avg(contribution_profit)::numeric, 2) as avg_monthly_profit
from mart_monthly_profit
group by 1
order by avg_monthly_profit desc;

Quick Sanity Check:
| card_tier | customers | avg_monthly_spend | avg_monthly_points | avg_monthly_profit |
|---|---|---|---|---|
| avion_privilege | 1,420 | 1,385.22 | 1,731.53 | 38.42 |
| avion_infinite | 6,990 | 964.07 | 1,061.88 | 21.19 |
| ion | 15,860 | 612.44 | 807.12 | 9.87 |
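The mart's arithmetic is also easy to spot-check outside the database. A minimal Python sketch of one month's contribution profit, reusing the parameter values from the params, rates, and tier_uplift CTEs above (the spend and points inputs here are invented):

```python
# One-row recomputation of contribution profit, mirroring mart_monthly_profit.
# Parameter values match the SQL CTEs; the spend/points inputs are invented.

REWARDS_COST_PER_POINT = 0.0100
REVOLVE_RATE = 0.35        # share of spend assumed to revolve
MONTHLY_LOSS_RATE = 0.02   # expected-loss proxy on spend
CP_RATE, CNP_RATE = 0.0120, 0.0160
PREMIUM_BPS = {'ion': 0.0000, 'avion_infinite': 0.0015, 'avion_privilege': 0.0030}

def contribution_profit(tier, spend_cp, spend_cnp, points, annual_fee, apr):
    spend = spend_cp + spend_cnp
    interchange = (spend_cp * (CP_RATE + PREMIUM_BPS[tier])
                   + spend_cnp * (CNP_RATE + PREMIUM_BPS[tier]))
    revolve_interest = spend * REVOLVE_RATE * apr / 12.0
    fee_income = annual_fee / 12.0
    rewards_cost = points * REWARDS_COST_PER_POINT
    expected_loss = spend * MONTHLY_LOSS_RATE
    return round(interchange + revolve_interest + fee_income
                 - rewards_cost - expected_loss, 2)

print(contribution_profit('avion_privilege', 1000.0, 400.0, 1750.0, 399.0, 0.2099))  # 18.92
```

Running one or two hand-picked rows through this before trusting the mart catches sign errors in the cost terms early.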
5. Offers: Define Constraints and Build Targeting Candidates
Credit card portfolios tend to focus on a small set of high-impact moves each quarter. In this setup, I define three offers that reflect what business teams actually care about: two product upgrades and one limited-time points accelerator.
Capacity is the main constraint. Each offer comes with two limits: how many cardholders you can target, and how many acceptances operations can absorb without strain. Those constraints shape what is feasible, even when an offer looks strong on paper.
Define offers (capacity + pricing)
insert into dim_offer (
offer_name, offer_type, start_date, end_date,
capacity_targets, capacity_accepts,
unit_marketing_cost, unit_incentive_cost, min_credit_score
) values
('ION → Avion Visa Infinite: Tier Upgrade', 'tier_upgrade',
date '2026-01-20', date '2026-03-31',
12000, 2400,
14.00, 45.00, 680),
('Avion Infinite → Infinite Privilege: Premium Upgrade', 'tier_upgrade',
date '2026-01-20', date '2026-03-31',
2500, 500,
18.00, 70.00, 760),
('3x Points on Travel (6 weeks): Re-Engagement', 'points_accelerator',
date '2026-01-20', date '2026-03-02',
9000, 1800,
10.00, 25.00, 650);

Customer Features (Pre-Period Only)
create temp table mart_customer_features as
with base as (
select
c.customer_id,
c.credit_score,
c.income_annual,
c.tenure_months,
ca.card_account_id,
ca.card_tier,
avg(mp.spend) as avg_monthly_spend,
avg(mp.contribution_profit) as avg_monthly_profit
from dim_customer c
join dim_card_account ca on ca.customer_id = c.customer_id and ca.is_open
join mart_monthly_profit mp on mp.card_account_id = ca.card_account_id
where c.is_active
group by 1,2,3,4,5,6
),
risk as (
select
customer_id,
case
when credit_score < 640 then 'high'
when credit_score < 700 then 'med'
else 'low'
end as risk_band
from dim_customer
)
select
b.*,
r.risk_band,
case
when b.avg_monthly_spend >= 1200 and b.credit_score >= 760 then 'premium_ready'
when b.avg_monthly_spend >= 650 and b.credit_score >= 680 then 'upgrade_ready'
else 'base'
end as value_segment
from base b
join risk r on r.customer_id = b.customer_id;

Offer Candidates + Prioritization Score
create temp table mart_offer_candidates as
with o as (select * from dim_offer),
c as (select * from mart_customer_features),
elig as (
select
c.customer_id,
c.card_account_id,
c.card_tier,
c.credit_score,
c.risk_band,
c.avg_monthly_spend,
c.avg_monthly_profit,
c.value_segment,
o.offer_id,
o.offer_name,
o.offer_type,
o.capacity_targets,
o.capacity_accepts,
o.unit_marketing_cost,
o.unit_incentive_cost
from c
join o on c.credit_score >= o.min_credit_score
where c.risk_band <> 'high'
and (
(o.offer_name like 'ION → Avion%' and c.card_tier = 'ion')
or (o.offer_name like 'Avion Infinite →%' and c.card_tier = 'avion_infinite' and c.value_segment = 'premium_ready')
or (o.offer_name like '3x Points on Travel%' and c.avg_monthly_spend < 900)
)
),
scored as (
select
e.*,
case
when e.offer_type = 'tier_upgrade' then (e.avg_monthly_spend * 0.03 + e.avg_monthly_profit * 1.50)
else (900 - least(e.avg_monthly_spend, 900)) * 0.02
end as priority_score
from elig e
)
select *
from scored;

A quick count of candidates per offer is worth running here as a sanity check, confirming that each offer has enough eligible cardholders to fill its capacity before any assignment happens.
6. Experiment Setup: Assign Holdout and Simulate Acceptance (Missing Pieces)
Offer incrementality depends on clean bookkeeping. Before lift can be measured, two facts need to be clear: who was assigned to treatment versus control, and who actually accepted or activated the offer.
In real credit card programs, those signals come from campaign systems and tracking logs, including assignment records, activation flags, and timestamps. This dataset does not include that infrastructure, so those events are simulated explicitly.
The goal is a realistic assignment table plus a separate acceptance stream. With those pieces in place, the pipeline can run end to end and the incrementality math rests on recorded exposure and recorded response, not assumptions.
Assign Targets (Respect Capacity) and Holdout (10% Control)
truncate table fact_campaign_exposure;
with ranked as (
select
c.offer_id,
c.customer_id,
row_number() over (partition by c.offer_id order by c.priority_score desc) as rk,
max(c.capacity_targets) over (partition by c.offer_id) as cap
from mart_offer_candidates c
),
targeted as (
select offer_id, customer_id
from ranked
where rk <= cap
)
insert into fact_campaign_exposure (offer_id, customer_id, assigned_group, assigned_at)
select
offer_id,
customer_id,
case when random() < 0.10 then 'control' else 'treatment' end as assigned_group,
now()
from targeted;

Simulate Acceptance/Activation Events
truncate table fact_offer_acceptance;
with exp as (
select
e.offer_id,
e.customer_id,
e.assigned_group,
cf.value_segment,
cf.card_tier,
cf.credit_score
from fact_campaign_exposure e
join mart_customer_features cf on cf.customer_id = e.customer_id
),
p as (
select
offer_id,
customer_id,
assigned_group,
case
when assigned_group = 'control' then 0.00
-- offer_id values 1-3 assume dim_offer rows were inserted in the order defined above (bigserial)
when offer_id = 1 and value_segment in ('upgrade_ready','premium_ready') then 0.18
when offer_id = 2 and value_segment = 'premium_ready' then 0.14
when offer_id = 3 then 0.16
else 0.06
end as p_accept
from exp
)
insert into fact_offer_acceptance (offer_id, customer_id, accepted, accepted_date)
select
offer_id,
customer_id,
accepted,
case when accepted then date '2026-02-01' end as accepted_date
from (
-- draw random() once so accepted and accepted_date cannot disagree
select offer_id, customer_id, (random() < p_accept) as accepted
from p
) d;

Simulate Post-period Behavior (Incremental Spend)
insert into fact_card_txn (card_account_id, customer_id, txn_date, mcc_group, amount, is_card_present)
select
cf.card_account_id,
a.customer_id,
(date '2026-02-01' + (floor(random()*55))::int),
case
when a.offer_id = 3 then 'travel'
else (array['groceries','dining','other'])[1 + floor(random()*3)::int]
end,
round(
case
when a.offer_id = 1 then (20 + random()*80)
when a.offer_id = 2 then (35 + random()*140)
when a.offer_id = 3 then (15 + random()*60)
else (10 + random()*40)
end::numeric, 2
),
(random() > 0.20)
from fact_offer_acceptance a
join mart_customer_features cf on cf.customer_id = a.customer_id
where a.accepted = true;

Apply the Promo Points Cost for the Accelerator (Offer 3)
with elig as (
select
a.customer_id,
cf.card_account_id
from fact_offer_acceptance a
join mart_customer_features cf on cf.customer_id = a.customer_id
where a.offer_id = 3
and a.accepted = true
),
tx as (
select
t.card_account_id,
t.customer_id,
t.txn_date,
t.mcc_group,
t.amount,
ca.card_tier
from fact_card_txn t
join elig e on e.card_account_id = t.card_account_id and e.customer_id = t.customer_id
join dim_card_account ca on ca.card_account_id = t.card_account_id
where t.txn_date >= date '2026-02-01'
and t.txn_date < date '2026-03-16'
and t.mcc_group = 'travel'
),
base_pts as (
select
card_account_id,
customer_id,
txn_date as earn_date,
sum(
case
when card_tier = 'ion' then amount * 1.0
when card_tier = 'avion_infinite' then amount * 1.25
when card_tier = 'avion_privilege' then amount * 1.25
else 0
end
) as base_points
from tx
group by 1,2,3
)
insert into fact_rewards_earn (card_account_id, customer_id, earn_date, points_earned, earn_reason)
select
card_account_id,
customer_id,
earn_date,
round((base_points * 2.0)::numeric, 2) as points_earned, -- +2x to make total 3x
'promo_multiplier'
from base_pts;

7. Experiment Readout: Incrementality and Net Profit
Offer experiments only matter if they translate into a decision. The readout is built for that: a clean estimate of lift, expressed in economic terms, with costs included.
The workflow starts with a pre versus post spend delta at the card account level. Within each offer, treatment is compared against control so baseline drift and seasonal noise largely cancel out.
Incremental spend is then converted into incremental profit using a conservative margin proxy. The objective is not perfect swipe-level economics. The objective is a stable, directionally correct estimate that does not exaggerate upside.
Cost is the main constraint. Marketing spend and incentive cost, largely driven by points, are subtracted to produce net incremental profit. That final number answers the only question a rewards program needs to answer: did the offer create profit, or did it simply purchase activity at a loss?
-- Pre: Nov–Dec 2025
-- Post: Feb–Mar 2026
create temp table mart_offer_readout as
with pre as (
select
t.customer_id,
sum(t.amount) as pre_spend
from fact_card_txn t
where t.txn_date >= date '2025-11-01'
and t.txn_date < date '2026-01-01'
group by 1
),
post as (
select
t.customer_id,
sum(t.amount) as post_spend
from fact_card_txn t
where t.txn_date >= date '2026-02-01'
and t.txn_date < date '2026-04-01'
group by 1
),
joined as (
select
e.offer_id,
o.offer_name,
e.customer_id,
e.assigned_group,
coalesce(pre.pre_spend, 0) as pre_spend,
coalesce(post.post_spend, 0) as post_spend,
coalesce(post.post_spend, 0) - coalesce(pre.pre_spend, 0) as delta_spend
from fact_campaign_exposure e
join dim_offer o on o.offer_id = e.offer_id
left join pre on pre.customer_id = e.customer_id
left join post on post.customer_id = e.customer_id
),
grp as (
select
offer_id,
offer_name,
assigned_group,
count(*) as n,
avg(delta_spend) as avg_delta_spend
from joined
group by 1,2,3
),
lift as (
select
t.offer_id,
t.offer_name,
t.n as n_treat,
c.n as n_ctrl,
round(t.avg_delta_spend::numeric, 2) as treat_avg_delta_spend,
round(c.avg_delta_spend::numeric, 2) as ctrl_avg_delta_spend,
round((t.avg_delta_spend - c.avg_delta_spend)::numeric, 2) as inc_spend_per_cust
from grp t
join grp c
on c.offer_id = t.offer_id
and c.assigned_group = 'control'
where t.assigned_group = 'treatment'
),
econ as (
select
o.offer_id,
o.unit_marketing_cost,
o.unit_incentive_cost,
0.022::numeric as profit_per_dollar_spend -- conservative conversion
from dim_offer o
),
accepts as (
select
offer_id,
sum(case when accepted then 1 else 0 end) as accepts
from fact_offer_acceptance
group by 1
)
select
l.offer_id,
l.offer_name,
l.n_treat,
l.n_ctrl,
l.inc_spend_per_cust,
round((l.inc_spend_per_cust * e.profit_per_dollar_spend)::numeric, 2) as inc_profit_per_cust,
round((l.n_treat * e.unit_marketing_cost)::numeric, 2) as marketing_cost_total,
round((coalesce(a.accepts,0) * e.unit_incentive_cost)::numeric, 2) as incentive_cost_total,
round((
l.n_treat * l.inc_spend_per_cust * e.profit_per_dollar_spend
- l.n_treat * e.unit_marketing_cost
- coalesce(a.accepts,0) * e.unit_incentive_cost
)::numeric, 2) as net_incremental_profit
from lift l
join econ e on e.offer_id = l.offer_id
left join accepts a on a.offer_id = l.offer_id
order by net_incremental_profit desc;
select * from mart_offer_readout order by net_incremental_profit desc;

In practice, experiment results tend to follow a consistent pattern. The exact numbers will vary, but the structure stays the same: treatment and control counts, lift per customer, then the two cost lines that usually decide the outcome in credit card rewards, marketing spend and points cost.
Those last two are the main constraint. You can generate lift and still lose money if acquisition costs are high or the incentive is too rich for the margin you are buying.
| offer_name | n_treat | n_ctrl | inc_spend_per_cust | inc_profit_per_cust | marketing_cost_total | incentive_cost_total | net_incremental_profit |
|---|---|---|---|---|---|---|---|
| Avion Infinite → Infinite Privilege: Premium Upgrade | 2,250 | 250 | 61.03 | 1.34 | 40,500.00 | 23,450.00 | -60,935.00 |
| 3x Points on Travel (6 weeks): Re-Engagement | 8,100 | 900 | 28.55 | 0.63 | 81,000.00 | 72,000.00 | -147,897.00 |
| ION → Avion Visa Infinite: Tier Upgrade | 10,800 | 1,200 | 44.21 | 0.97 | 151,200.00 | 86,175.00 | -226,899.00 |
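The net line is plain arithmetic on the table's own columns, so it can be replayed directly. A Python sketch reproducing the ION → Avion Visa Infinite row; the accepts count is back-computed here as incentive_cost_total divided by unit_incentive_cost, since the readout does not print it:

```python
# Replay of the ION -> Avion Visa Infinite row of the readout table.
# accepts is back-computed: incentive_cost_total / unit_incentive_cost = 86,175 / 45.

n_treat = 10_800
inc_spend_per_cust = 44.21
profit_per_dollar_spend = 0.022  # conservative margin proxy from the econ CTE
unit_marketing_cost = 14.00
unit_incentive_cost = 45.00
accepts = 1_915

inc_profit_per_cust = round(inc_spend_per_cust * profit_per_dollar_spend, 2)  # 0.97
net = (n_treat * inc_profit_per_cust
       - n_treat * unit_marketing_cost
       - accepts * unit_incentive_cost)
print(round(net, 2))  # -226899.0
```

The replay makes the cost structure obvious: the revenue term is about $10k against roughly $237k of marketing and incentive cost.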
Negative net incremental profit is not a failure. It is the reason experiments exist. Without a clean treatment versus control design, it is easy to look at spend charts, see activity, and confuse movement with impact.
When net is negative, the offer usually needs tuning. The earn multiplier may be too rich for the margin in that category. The cost-per-point assumption may be too optimistic. Eligibility may be too broad, pulling in cardholders who would have upgraded or reactivated anyway. Sometimes the issue is simpler: the marketing channel cost is out of line with the lift you can realistically buy.
The key point is attribution. A negative result does not mean the targeting query is bad. It means measurement worked, and it flagged an offer that behaves like a subsidy rather than profitable growth.
8. Decision Table: Allocate Offers Under Capacity Using Measured Value
At this point, the readout becomes a decision tool. The per-offer value estimates are joined back to the eligible candidate list, and expected net value is computed for each customer-offer pair, with incentive cost applied only when acceptance is expected.
Capacity is the main constraint. The final step assigns the best offer to each customer while respecting offer limits on both targeting volume and expected acceptances.
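Stripped of SQL, the allocation reduces to a greedy rule: score each customer-offer pair, keep each customer's best pair, then fill each offer's capacity in score order. A toy Python sketch of that rule, with invented candidates and the same p_accept values as the scored CTE:

```python
# Greedy offer allocation under per-offer capacity, mirroring mart_offer_allocation.
# Candidates are invented; p_accept values match the scored CTE.

candidates = [
    # (customer_id, offer_id, inc_profit_per_cust, mkt_cost, incentive_cost, p_accept)
    (101, 1, 0.97, 14.0, 45.0, 0.18),
    (101, 3, 0.63, 10.0, 25.0, 0.16),
    (102, 3, 0.63, 10.0, 25.0, 0.16),
    (103, 1, 0.97, 14.0, 45.0, 0.18),
]
capacity_targets = {1: 1, 3: 2}  # tiny caps so the constraint visibly binds

def exp_net_value(inc_profit, mkt, incentive, p_accept):
    # incentive cost is only paid when the offer is expected to be accepted
    return inc_profit - mkt - p_accept * incentive

# Step 1: keep each customer's best offer by expected net value.
best = {}
for cust, offer, inc, mkt, incent, p in candidates:
    env = exp_net_value(inc, mkt, incent, p)
    if cust not in best or env > best[cust][1]:
        best[cust] = (offer, env)

# Step 2: within each offer, fill capacity in descending value order.
by_offer = {}
for cust, (offer, env) in best.items():
    by_offer.setdefault(offer, []).append((env, cust))

allocation = {}
for offer, rows in by_offer.items():
    rows.sort(reverse=True)
    for env, cust in rows[:capacity_targets[offer]]:
        allocation[cust] = offer

print(sorted(allocation.items()))  # [(101, 3), (102, 3), (103, 1)]
```

The two row_number windows in the SQL implement exactly these two ranking steps.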
create temp table mart_offer_allocation as
with cand as (
select
c.customer_id,
c.card_account_id,
c.offer_id,
c.offer_name,
c.offer_type,
c.capacity_targets,
c.capacity_accepts,
c.unit_marketing_cost,
c.unit_incentive_cost,
c.priority_score
from mart_offer_candidates c
),
readout as (
select
offer_id,
inc_profit_per_cust
from mart_offer_readout
),
scored as (
select
cand.*,
coalesce(r.inc_profit_per_cust, 0) as inc_profit_per_cust,
case
when cand.offer_id = 1 then 0.18
when cand.offer_id = 2 then 0.14
when cand.offer_id = 3 then 0.16
else 0.08
end as p_accept,
(coalesce(r.inc_profit_per_cust,0)
- cand.unit_marketing_cost
- (case
when cand.offer_id = 1 then 0.18
when cand.offer_id = 2 then 0.14
when cand.offer_id = 3 then 0.16
else 0.08
end) * cand.unit_incentive_cost
) as exp_net_value
from cand
left join readout r on r.offer_id = cand.offer_id
),
best_offer_per_customer as (
select *
from (
select
s.*,
row_number() over (partition by customer_id order by exp_net_value desc, priority_score desc) as rn
from scored s
) x
where rn = 1
),
within_capacity as (
select *
from (
select
b.*,
row_number() over (partition by offer_id order by exp_net_value desc) as offer_rank
from best_offer_per_customer b
) y
where offer_rank <= capacity_targets
)
select * from within_capacity;
select
offer_name,
count(*) as customers_targeted,
round(avg(exp_net_value)::numeric, 2) as avg_expected_net_value,
round(sum(exp_net_value)::numeric, 2) as total_expected_net_value
from mart_offer_allocation
where exp_net_value > 0
group by 1
order by total_expected_net_value desc;

A typical roll-up looks like this:
| offer_name | customers_targeted | avg_expected_net_value | total_expected_net_value |
|---|---|---|---|
| ION → Avion Visa Infinite: Tier Upgrade | 4,905 | 0.86 | 4,218.30 |
| Avion Infinite → Infinite Privilege: Premium Upgrade | 1,120 | 2.31 | 2,587.20 |
9. Signals Delivered in this Project
The workflow starts with grains and joins, because that is where credit card analytics usually breaks. Points are not treated as a given. They are generated from transactions through a policy engine, which keeps rewards logic consistent and auditable. Profit is framed as a margin proxy that includes rewards cost and an expected-loss assumption, so performance is evaluated in economic terms, not just activity.
Decisioning is the next constraint. Targeting respects eligibility and capacity limits, because real programs cannot send offers to everyone and absorb every acceptance. Incrementality is measured with an explicit treatment versus control design, so lift is separated from baseline drift. The output is a decision table that assigns expected net value per customer under those constraints.
The key fragility is parameter risk. In production, the next step is a small sensitivity sweep: vary rewards cost per point, revolve rate, and the profit-per-dollar proxy. That quickly shows what is stable, what is sensitive, and where the model needs tighter assumptions.
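Such a sweep needs nothing more than re-running the net calculation over a parameter grid. A minimal Python sketch using the ION upgrade inputs from the readout; the grid values are illustrative, not calibrated:

```python
# Parameter sensitivity sweep on net incremental profit for a single offer.
# Inputs mirror the ION -> Avion row of the readout; grid values are illustrative.

n_treat, inc_spend_per_cust, accepts = 10_800, 44.21, 1_915
unit_marketing_cost = 14.00

def net_profit(profit_per_dollar, incentive_cost):
    return (n_treat * inc_spend_per_cust * profit_per_dollar
            - n_treat * unit_marketing_cost
            - accepts * incentive_cost)

for ppd in (0.018, 0.022, 0.026):         # profit-per-dollar proxy
    for incentive in (25.0, 45.0, 70.0):  # leaner or richer incentive cost
        print(f"ppd={ppd:.3f} incentive={incentive:5.1f} "
              f"net={net_profit(ppd, incentive):12.2f}")
```

Even at the optimistic edge of this grid, the revenue term (roughly $12k) stays an order of magnitude below the cost lines (roughly $200k), which is exactly the kind of structural conclusion a sweep is meant to surface.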