Your client has a Google Ads dashboard. They have access to GA4. They receive a monthly PDF with charts and tables. They have more data than they have ever had. And they still email you every month asking: "So... is this good?"
The problem is not access to data. The problem is that data without explanation creates confusion, not confidence. When a client sees "CPA: $42.30" on a report, they do not know if that is good or bad. When they see "CPA dropped 18% to $42.30 because the new landing page improved conversion rate — this is the best CPA this account has had since January" — now they understand what happened and what it means.
The gap between showing data and explaining data is where most agency-client communication breaks down. Closing that gap is not about better charts or more frequent reports. It is about how you frame, compare, and narrate the numbers you already have. The same principles apply whether you run an agency or an in-house marketing team.
The three things every client wants to know
Every client question, whether the client articulates it or not, is really three questions stacked together.
What happened? The factual change. Revenue went up. CPA went down. Impressions dropped. This is the data layer — what the dashboards already show.
Why did it happen? The causal explanation. Revenue went up because the new Meta prospecting audience drove a 34% lift in first-time purchasers. CPA went down because the landing page redesign improved conversion rate from 2.1% to 3.4%. This is the interpretation layer — the part that requires cross-referencing multiple data points.
What should we do next? The recommendation. Increase budget on the Meta prospecting audience. Roll the landing page change out to the remaining campaigns. Pause the underperforming creative variants. This is the action layer — the reason the client hired an agency in the first place.
If your communication answers all three with specific numbers, you are doing it right. If it answers only one — which is what most dashboards and automated reports do — the client will email you asking for the other two. Every time.
Lead with the change, not the number
There is a meaningful difference between reporting data and communicating it. Consider these two statements about the same metric.
"Revenue was $87,400 last month."
"Revenue increased 14% to $87,400, driven by a 22% lift in Google Ads conversion rate following the landing page redesign on March 3rd."
The first is data. The second is communication. The difference is that the second statement leads with the change (14% increase), provides the absolute number as context ($87,400), and names the cause (conversion rate lift from the landing page redesign).
Clients do not wake up thinking about absolute numbers. They think in terms of direction: are things getting better or getting worse? Leading with the change — and then grounding it in a cause — mirrors how they actually process information. The absolute number becomes the anchor, not the headline.
This applies at every level of granularity. "Brand Search spent $12,400" is data. "Brand Search spend increased 8% to $12,400 but CPA remained flat at $31, so the incremental spend is converting efficiently" is communication. The second version tells the client something they can act on.
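If you assemble these sentences programmatically, the change-first framing can be baked into the formatter itself. Here is a minimal Python sketch of that idea — the function name is made up, and the prior-month figure (~$76,667) is back-calculated from the post's example, not taken from real data.

```python
def change_led_summary(metric: str, current: float, previous: float, cause: str) -> str:
    """Lead with the change, anchor on the absolute number, then name the cause."""
    pct = (current - previous) / previous * 100
    direction = "increased" if pct >= 0 else "decreased"
    return f"{metric} {direction} {abs(pct):.0f}% to ${current:,.0f}, driven by {cause}."

# Prior-month revenue is inferred from the 14% example above (illustrative only).
print(change_led_summary(
    "Revenue", 87_400, 76_667,
    "a 22% lift in Google Ads conversion rate following the landing page redesign on March 3rd",
))
# Revenue increased 14% to $87,400, driven by a 22% lift in Google Ads
# conversion rate following the landing page redesign on March 3rd.
```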
Use comparison anchors
A number without context is meaningless. $42 CPA. Is that good? Compared to what?
Every metric you present to a client needs at least one comparison anchor. There are four that work reliably.
Last period. "$42 CPA, down from $51 last month." This is the simplest and most common. It answers the direction question immediately.
Same period last year. "$42 CPA versus $48 in March last year." This controls for seasonality, which month-over-month comparisons miss. If CPA always rises in Q4 and your client does not know that, they will think something is wrong.
The target or KPI. "$42 CPA against a $45 target." This is the most powerful anchor when the client has set explicit goals. It reframes the conversation from "what happened" to "are we on track." For guidance on which metrics to prioritize in the first place, see marketing metrics that actually matter.
The account average. "$42 CPA versus a 90-day average of $47." This smooths out week-to-week noise and shows whether the current performance is an outlier or a trend.
You do not need all four in every communication. But you need at least one. A metric presented without any comparison forces the client to supply their own context — and they will usually supply the wrong one. When a client sees "$42 CPA" with no anchor, they compare it to whatever number they last remember, which may be from a different account, a different channel, or a different year.
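One way to make anchors impossible to forget is to refuse to render a metric without at least one. A short sketch of that idea, assuming you already have the reference values on hand; the function name and anchor labels are hypothetical.

```python
def anchored_metric(name: str, value: float, anchors: dict[str, float]) -> str:
    """Render a metric only alongside its comparison anchors."""
    if not anchors:
        raise ValueError(f"{name} needs at least one comparison anchor")
    comparisons = ", ".join(f"${ref:,.0f} {label}" for label, ref in anchors.items())
    return f"${value:,.0f} {name} (vs {comparisons})"

print(anchored_metric("CPA", 42, {
    "last month": 51,
    "same month last year": 48,
    "target": 45,
    "90-day average": 47,
}))
# $42 CPA (vs $51 last month, $48 same month last year, $45 target, $47 90-day average)
```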
Name the cause directly
This is where most agency communication falls apart. The data is there. The comparison is there. But the explanation hedges.
"Performance may have been impacted by several factors including changes to audience targeting, seasonal trends, and broader market conditions."
That sentence communicates nothing. It is the reporting equivalent of saying "things happened because of stuff." The client reads it and learns nothing they did not already know.
Compare it to this: "CPA increased 22% because we expanded the Brand Search audience on March 5th to include broader match types. The new traffic converted at 1.8% versus 3.4% for the original audience. We are reverting the change this week."
That tells the client exactly what happened, why, and what you are doing about it. It is specific. It is falsifiable. It demonstrates that you understand the account and are actively managing it.
If you genuinely do not know the cause, say so — but then explain what you are investigating. "CPA increased 22% and we have not yet isolated the cause. We are reviewing audience segments, creative performance, and landing page conversion rates this week and will update you by Friday." That is honest, specific, and shows initiative. The client can respect that. They cannot respect "several factors may have contributed."
One recommendation per insight
A common mistake in client reporting is listing every recommendation at the end of the report in a long bullet list. By the time the client reaches recommendation number seven, they have forgotten what data point it relates to and why it matters.
Instead, attach one clear recommendation to each significant finding, right where the finding appears.
"CPA spiked 22% on Brand Search → reverting the audience expansion this week."
"Meta prospecting ROAS improved from 2.1x to 2.8x → recommending a 15% budget increase to capture more volume while the creative is performing."
"Organic traffic from Search Console dropped 11% on product pages → reviewing the title tag changes made on March 8th against current ranking positions."
Each recommendation is specific, grounded in the data that preceded it, and actionable. The client does not need to connect dots between the analysis section and the recommendations section because there is no gap to bridge. Understanding how attribution models affect these recommendations also helps — if the client knows how conversions are counted, the recommendations make more sense.
This structure also makes it easier for the client to respond. Instead of reacting to a wall of text, they can approve, question, or reject each recommendation individually.
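If report generation is scripted, this pairing can be enforced structurally rather than by discipline: a finding simply cannot exist without its recommendation. A sketch of that constraint, with made-up class and field names.

```python
from dataclasses import dataclass

@dataclass
class Insight:
    finding: str         # the significant change, with numbers
    recommendation: str  # exactly one action attached to that finding

insights = [
    Insight("CPA spiked 22% on Brand Search",
            "reverting the audience expansion this week"),
    Insight("Meta prospecting ROAS improved from 2.1x to 2.8x",
            "recommending a 15% budget increase while the creative is performing"),
]

# Each finding travels with its recommendation, so there is no gap to bridge.
for item in insights:
    print(f"{item.finding} -> {item.recommendation}")
```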
Match the format to the audience
Not every client stakeholder needs the same depth. Sending a 12-page report to a founder who wants a yes-or-no answer wastes their time and your credibility. Sending three bullet points to a Head of Marketing who needs channel-level detail leaves them unsatisfied.
The founder or CEO. They want the answer to one question: are we on track? Give them three to five sentences. Total spend, total return, whether performance is above or below target, and the single most important change since last month. If they want more, they will ask.
The Head of Marketing or CMO. They want the executive summary plus channel-level highlights. One paragraph on overall performance, one paragraph per major channel, and a short list of next steps. This is the format that most monthly reports should follow by default.
The performance lead or in-house specialist. They want everything. Campaign-level tables, keyword data, audience breakdowns, test results. Give them the full report with the detailed appendix. But even here, lead with the narrative — do not make them hunt for the explanation in a table of numbers.
The common thread is that every audience gets the explanation first and the data second. The depth varies. The structure does not.
What bad client communication looks like — and how to fix it
Reading examples makes this concrete. Here are three patterns that show up in client reports and emails constantly, with rewrites.
The vague summary.
Bad: "Performance was mixed this month with some campaigns performing well and others underperforming."
Good: "Revenue grew 8% but CPA increased 22%. The growth came from Meta prospecting, which delivered a 3.1x ROAS. The CPA increase is isolated to Google Ads Brand Search — the negative keyword list change on March 12th removed high-intent terms. We have reverted the change and expect CPA to normalize within two weeks."
The bad version tells the client nothing they could not have guessed. The good version names the channel, the cause, the date it happened, and what is being done about it.
The data dump.
Bad: "Impressions: 1,240,000. Clicks: 34,200. CTR: 2.76%. CPC: $1.42. Conversions: 820. CPA: $48.60. Spend: $48,564. Revenue: $187,300. ROAS: 3.86x."
Good: "ROAS improved from 3.2x to 3.86x on flat spend of $48,564, driven by an 18% increase in conversions. The conversion lift came primarily from the new landing page on Google Ads Search campaigns, which improved on-page conversion rate from 2.4% to 3.1%."
The bad version is a spreadsheet formatted as a sentence. The good version uses the same data but structures it around a narrative: what changed, why, and what it means. The raw numbers belong in a table below the narrative, not as the narrative itself.
The hedge pile.
Bad: "The decrease in performance could potentially be attributed to a number of factors, including possible changes in competitive landscape, audience fatigue, or seasonal fluctuations. We recommend monitoring the situation closely over the coming weeks."
Good: "Conversions dropped 15% because creative fatigue set in on the two top-performing Meta ad sets — both have been running unchanged for 47 days and frequency has risen from 1.8 to 3.4. We are launching three new creative variants on Monday and will report back on performance by end of next week."
The bad version piles up hedges ("could," "potentially," "possible," "a number of factors") and recommends monitoring — which is not an action, it is the absence of one. The good version names the cause, quantifies it, and states the fix.
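A crude pre-send check can flag drafts like the bad version automatically. A sketch; the hedge list is illustrative and deliberately incomplete.

```python
import re

# Phrases that signal a hedge pile (illustrative, not exhaustive).
HEDGES = ["could", "potentially", "possible", "possibly",
          "may have", "a number of factors", "monitoring the situation"]

def hedge_count(text: str) -> int:
    """Count hedge phrases so a draft can be flagged for a rewrite before it ships."""
    lowered = text.lower()
    return sum(len(re.findall(re.escape(h), lowered)) for h in HEDGES)

draft = ("The decrease in performance could potentially be attributed to a number "
         "of factors, including possible changes in competitive landscape.")
print(hedge_count(draft))  # 4, which is enough to send it back for a rewrite
```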
Structuring a monthly report that clients actually read
Given everything above, a monthly client report should follow this structure.
Executive summary (3-5 sentences). What happened, why, and what is next. This is the section the client reads. Every other section is supporting evidence. If the client reads only this paragraph and walks away with an accurate understanding of the month, the report has succeeded.
KPI overview. Key metrics with period-over-period comparison and target context. Use comparison anchors. Flag anything that deviated significantly from the target or trend, with a one-line explanation attached to each.
Channel performance. One paragraph per major channel. Lead with the change and the cause, not the absolute numbers. Include a recommendation where relevant. If you are generating reports from conversational analytics, this narrative writes itself — each channel gets its own explanation grounded in the actual data.
Campaign detail. A table for the performance-oriented stakeholder. But even here, add a one-sentence annotation to any campaign that had a notable change. Do not force the reader to infer meaning from a table.
Recommendations. A short list of specific next steps, each tied to a finding from earlier in the report. No generic advice. No "continue to optimize." Every recommendation should name the action, the expected impact, and the timeline.
This structure mirrors the three-question framework: what happened (executive summary and KPIs), why (channel performance and campaign detail), and what to do (recommendations). It works because it matches how the client processes information, not how the data is organized in the platform.
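The same structure can live as a template in whatever generates the report, so every month starts from the three questions instead of a blank page. A sketch; the section descriptions paraphrase this post.

```python
# The monthly report as an ordered template, each section mapped to the
# question it answers.
REPORT_TEMPLATE = [
    ("Executive summary",   "what happened", "3-5 sentences: change, cause, next step"),
    ("KPI overview",        "what happened", "key metrics with comparison anchors"),
    ("Channel performance", "why",           "one paragraph per channel, change and cause first"),
    ("Campaign detail",     "why",           "tables, annotated wherever something notable changed"),
    ("Recommendations",     "what next",     "one action per finding: action, impact, timeline"),
]

for section, question, contents in REPORT_TEMPLATE:
    print(f"{section:<19} | {question:<13} | {contents}")
```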
The tool that writes the explanation for you
The reason explaining data to clients is hard is not that account managers lack the skill. It is that the task requires simultaneously cross-referencing data across platforms, identifying the most significant changes, determining probable causes, and writing a clear narrative — all under time pressure, for every client, every month.
Conversational analytics automates this specific step. Ask a platform like LDOO "How did Greenfield Digital perform last month?" or "Why did CPA increase last week?" and it returns the explanation: specific numbers, period-over-period comparisons, named causes, and a suggested next step. That is the same structure this post recommends, produced in seconds instead of the 45 to 90 minutes of manual cross-referencing and writing it usually takes. The output is specific enough to paste into a client email or generate a branded report without editing.
This does not replace the account manager's judgement; it replaces the first draft. The account manager reviews the explanation, adds client-specific context ("they changed their landing page on Tuesday"), adjusts the recommendation if needed, and sends. The thinking is still theirs. The assembly is not.