Most agency reports include too many metrics. Not because the data is wrong, but because leaving a number out feels risky. What if the client asks about it? What if they think you are hiding something?
So you include everything. Impressions, clicks, CTR, bounce rate, engagement rate, sessions, pages per session, average time on page, scroll depth. The report balloons to 30 metrics across 12 pages, and the client reads none of it. They skim the first chart, glance at the spend number, and reply with the same question they ask every month: "So is this good or bad?"
A report with 30 metrics is a spreadsheet, not a communication. The metrics that matter are the ones that answer three questions: what happened, is it good or bad, and what should we do about it. Everything else is noise dressed up as thoroughness.
Here is how to decide which metrics earn a place in your client reports, which ones to keep in reserve, and which ones to stop including entirely.
Tier 1: The metrics every client report needs
These are the metrics clients will ask about if you leave them out. They represent the business outcome, the investment, and the efficiency of the connection between the two. Every report should lead with these, regardless of channel or campaign type.
Revenue, leads, or conversions
This is the metric your client actually cares about. Everything else in the report exists to explain this number.
What it tells the client: Whether the marketing investment produced the business result they are paying for. For e-commerce clients, this is revenue. For lead generation, it is qualified leads or form submissions. For SaaS, it might be trial signups or demo requests.
How to present it: Always with a comparison. "$84,200 in revenue" means nothing on its own. "$84,200 in revenue, up 14% from last month" tells a story. Show the trend over at least three months so the client can see direction, not just a snapshot.
When it misleads: Conversions without quality context can paint a false picture. A 40% jump in leads sounds great until you learn that half of them were unqualified. If your client's sales team is complaining about lead quality, the raw conversion number is hiding the problem, not revealing it. Pair it with a quality indicator when you have one.
Spend
What was invested this period. Simple, but essential.
What it tells the client: Whether you stayed within budget and how the investment was allocated across channels or campaigns.
How to present it: Show total spend and the split by channel or campaign. If spend changed significantly from the prior period, explain why before the client has to ask. "We increased Meta spend by 22% to capture the seasonal demand window" is better than letting the client discover the number and wonder.
When it misleads: Spend in isolation tells you nothing about efficiency. A client who sees "$12,000 spent" without seeing what it produced will always feel nervous. Never present spend without the outcome it generated.
CPA or cost per lead
The efficiency metric. How much did each result cost?
What it tells the client: Whether the money spent is producing results at an acceptable rate. CPA is the bridge between spend and conversions. It answers the question every client is thinking but sometimes does not ask: "Am I getting my money's worth?"
How to present it: With comparison to the prior period and, if possible, a benchmark. "$42 CPA, down 18% from last month" is clear and directional. Note that CPA is an inverse metric: down is good, up is bad. Your report should reflect this with semantic colors — green when CPA decreases, red when it increases.
When it misleads: CPA without volume context is dangerous. A $12 CPA sounds excellent until you learn it came from 3 conversions. Low CPA at low volume often means the campaign is not scaling, not that it is efficient. Always pair CPA with conversion volume so the client sees both sides.
ROAS
Return on ad spend. The ratio that tells a client whether their paid media investment is profitable.
What it tells the client: For every dollar spent, how many dollars came back. A ROAS of 4.2x means $4.20 in revenue for every $1 spent. It is the single clearest measure of paid media effectiveness.
How to present it: As a ratio with trend. "ROAS of 4.2x, up from 3.8x last month" gives the client both the current performance and the direction. Segment by campaign or channel when the differences are meaningful — a blended ROAS can hide a strong campaign subsidizing a weak one.
When it misleads: ROAS depends heavily on attribution models — and on which platform is reporting it. Last-click attribution will undercount campaigns that assist conversions but do not close them. If your client runs both brand and prospecting campaigns, blended ROAS will almost certainly undervalue the prospecting work. Be explicit about which attribution model is in use and what it favors.
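The tier 1 efficiency math above is simple division, but it is worth seeing in one place. A minimal sketch, using illustrative figures rather than numbers from any real campaign:

```python
# Sketch of the tier 1 efficiency metrics described above.
# All figures are illustrative, not from a real account.

def cpa(spend, conversions):
    """Cost per acquisition: what each result cost."""
    return spend / conversions

def roas(revenue, spend):
    """Return on ad spend: revenue returned per dollar spent."""
    return revenue / spend

spend, conversions, revenue = 12_000, 286, 50_400
print(f"CPA:  ${cpa(spend, conversions):.2f}")  # prints "CPA:  $41.96"
print(f"ROAS: {roas(revenue, spend):.1f}x")     # prints "ROAS: 4.2x"
```

Note that both numbers come from the same spend figure, which is why the article insists on never presenting spend, CPA, or ROAS in isolation: each is one side of the same ratio.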
Tier 2: The metrics that explain tier 1
These metrics belong in the report when a tier 1 number moves unexpectedly. They are diagnostic tools, not headline numbers. Include them when they explain something. Leave them out when tier 1 is tracking steadily and the story is straightforward.
Conversion rate
Conversion rate explains CPA changes. If CPA increased, the cause is either higher costs per click or lower conversion rates. Conversion rate tells you which one.
When to include it: When CPA shifted and you need to show the client whether the problem is traffic quality (conversion rate dropped) or market pricing (CPCs went up). If both tier 1 metrics are stable, conversion rate is background noise.
How to present it: As a percentage with comparison. "Landing page conversion rate dropped from 4.2% to 3.1%" is specific enough to act on. Segment by device or landing page when you can — aggregate conversion rate often hides a mobile-specific problem.
CTR
Click-through rate explains traffic quality. A declining CTR means your ads or listings are less compelling to the audience seeing them. It is an early warning signal — CTR drops often precede CPA increases by a week or two.
When to include it: When traffic volume or quality changed and you need to explain why. A CTR drop on search ads might mean increased competition or ad fatigue. A CTR drop on social might mean creative needs refreshing.
How to present it: By campaign or ad group, not as a blended average. Blended CTR across all campaigns is almost meaningless because brand and non-brand search will always have wildly different rates.
Impressions and clicks
Volume metrics. They explain whether changes in conversions came from reaching more or fewer people, or from converting them at different rates.
When to include them: When conversion volume changed and you need to show whether the cause was traffic volume or conversion efficiency. If conversions dropped 20% and impressions dropped 22%, the problem is reach, not the funnel.
How to present them: Together with conversion rate, never alone. "Impressions increased 35%" without context is a vanity stat. "Impressions increased 35% but conversions were flat, suggesting the additional reach did not find qualified buyers" is a diagnostic insight.
Average position and impression share
Visibility metrics for search campaigns. They explain whether your client is showing up where they need to.
When to include them: When search performance shifted and you need to explain competitive dynamics. A drop in average position often explains a CPC increase — you are paying more to maintain the same visibility because a competitor entered the auction.
How to present them: With the competitive context that makes them actionable. "Average position dropped from 2.1 to 3.4" is a fact. "Average position dropped from 2.1 to 3.4, and impression share lost to rank increased by 12 points, suggesting a competitor increased their bids" is an explanation.
Tier 3: The metrics to stop including
These metrics appear in reports out of habit, not because they help the client make decisions. Each one causes a specific kind of confusion.
Impressions alone
Impressions without clicks, CTR, or conversions are meaningless. A client who sees "1.2M impressions" has no idea whether that is good. It sounds big, which is why agencies include it — it feels like proof of activity. But it tells the client nothing about whether those impressions reached the right people or produced any result.
Replace with: Impressions as a supporting data point alongside CTR and conversions. The number matters only in context.
Bounce rate
Bounce rate is the most misunderstood metric in client reporting. Most clients interpret "68% bounce rate" as "68% of visitors hated the site," which is not what it means. A single-page visit where someone reads the entire article and leaves is a bounce. A user who finds the phone number on the contact page and calls is a bounce. The metric conflates genuine disengagement with perfectly successful visits.
Replace with: Engagement rate (GA4's inversion of bounce rate) is marginally better, but the real fix is to stop reporting session-level engagement metrics to clients who do not have the context to interpret them. If you need to show on-site behavior, use conversion rate by landing page — it measures what the client actually cares about.
Raw click counts
"12,400 clicks" tells a client nothing without knowing the denominator. Was that from 50,000 impressions (a strong 24.8% CTR) or from 2,000,000 impressions (a weak 0.62% CTR)? Raw counts without rates invite misinterpretation.
Replace with: CTR by campaign or channel. Rates are comparable across time periods; raw counts are not, because they scale with budget.
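The denominator arithmetic above is the whole argument for reporting rates instead of raw counts. A minimal sketch, reusing the example numbers:

```python
# The same 12,400 clicks look strong or weak depending on the
# denominator. Numbers are from the illustrative example above.

def ctr(clicks, impressions):
    """Click-through rate as a percentage."""
    return clicks / impressions * 100

print(f"{ctr(12_400, 50_000):.1f}%")     # prints "24.8%" — strong
print(f"{ctr(12_400, 2_000_000):.2f}%")  # prints "0.62%" — weak
```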
"Engagement"
Engagement means something different on every platform. On Meta, it includes reactions, comments, shares, and link clicks. On Google Ads, it does not exist as a standard metric. On GA4, an engaged session is one that lasts longer than 10 seconds, fires a conversion event, or has at least two page views. Including "engagement" in a report without defining it creates the illusion of measurement without any actual clarity.
Replace with: The specific action you are measuring. "Link clicks from Meta ads" or "video views to 50%" are concrete. "Engagement" is not.
How to present metrics so clients actually read them
The difference between a report that gets read and one that gets filed is not the metrics you choose — it is how you present them. Four principles make the difference.
Always show change, not just value. "$42 CPA" is a data point. "$42 CPA, down 18% from last month" is information. The comparison is what makes a metric meaningful. Every number in the report should have a reference point — prior period, prior year, target, or benchmark.
Use semantic colors correctly. Green means good. Red means bad. But "good" is directional, and the direction depends on the metric. Revenue up is green. CPA up is red. CPC down is green. This sounds obvious, but most reporting tools get it wrong by defaulting to green-for-up on every metric. If your report shows CPA increasing in green, you are actively misleading the client.
Add one sentence of context per metric. The number says what happened. The sentence says why and whether it matters. "ROAS dropped from 4.2x to 3.6x. This was driven by increased CPCs on non-brand campaigns as a new competitor entered the auction." That sentence is the difference between a report and a data dump.
Front-load the answer. Lead with the headline metric and the verdict. "Performance improved this month" or "Paid search efficiency declined." Then support it with the numbers. Clients read top-down. If the conclusion is on page 8, they will never reach it.
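The first two principles, always show change and color by direction, can be sketched as one small formatting rule. A minimal sketch: the set of inverse metrics and the figures are illustrative, and a real report template would read them from configuration rather than hard-coding them.

```python
# Sketch of principles 1 and 2: every metric line carries a comparison,
# and green/red depends on the metric's direction, not just the sign of
# the change. Metric names and figures are illustrative.

INVERSE_METRICS = {"cpa", "cpc", "cpl"}  # metrics where down is good

def metric_line(name, current, prior):
    change = (current - prior) / prior
    improved = change < 0 if name.lower() in INVERSE_METRICS else change > 0
    color = "green" if improved else "red"
    arrow = "down" if change < 0 else "up"
    return f"{name}: {current} ({arrow} {abs(change):.0%} vs prior, {color})"

print(metric_line("CPA", 42, 51.2))            # CPA down 18%: green
print(metric_line("Revenue", 84_200, 73_900))  # revenue up 14%: green
```

The point of the direction lookup is exactly the failure mode named above: a tool that defaults to green-for-up would color a rising CPA green.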
For a deeper guide on writing the narrative layer of your reports, see how to explain marketing data to clients.
The metric that matters most is different for every client
Everything above is a framework, not a formula. The specific metrics that belong in a report depend on what the client is trying to achieve.
E-commerce clients care about revenue and ROAS above all else. Their reports should lead with revenue by channel, ROAS by campaign, and the spend required to produce those numbers. Everything else is supporting detail.
Lead generation clients care about cost per lead and lead quality. If your client's sales team closes 1 in 10 leads, the total lead count matters less than whether the leads are qualified. Report CPL alongside any quality signal you have — SQL conversion rate, close rate, or even subjective feedback from the sales team.
Brand awareness clients care about reach and frequency. These are the rare cases where impressions are a valid headline metric, because the objective is visibility, not direct response. Even here, pair reach with frequency to show whether you are reaching new people or saturating the same audience.
The best time to decide which metrics matter is the first client meeting, not the first report. Ask the client what success looks like. Ask what number their CEO asks about. Ask what would make them feel confident that the investment is working. Then build the report around those answers. A report that reflects the client's definition of success will always be more useful than a report that reflects every metric your platforms can produce.
The agencies that retain clients are the ones whose reports answer questions before the client has to ask them. That starts with choosing the right metrics and presenting them clearly enough that the client never has to wonder what the numbers mean.