
Ask your data. Get an answer worth sending.

LDOO is the analyst who lives on your data. It maintains 90-day baselines, remembers your conversations, and knows which source to trust for which metric. Every answer is checked against five quality criteria before you see it. Behind it is a six-step pipeline that plans, fetches, investigates, interprets, and verifies — all in seconds.

01

Connecting your data

Before LDOO can answer anything, it needs access to your marketing platforms. You connect each source through a standard OAuth flow — the same secure handshake that powers “Sign in with Google” across the web.

Each connection takes about 30 seconds. You select the property or account, authorize read-only access, and LDOO begins syncing in the background. See all supported integrations and features. For reporting and analytics walkthroughs, see the blog. For details on how credentials are stored and protected, see trust and data handling.

What happens during a sync

When you connect a data source, LDOO runs a background job that pulls your marketing data and normalizes it into a unified format. Every platform reports data differently — Google Ads measures clicks and conversions by campaign, GA4 tracks sessions and pageviews by page, Search Console counts impressions and clicks by search query. LDOO maps all of this into a single, consistent structure so every data point carries the same core fields: which platform it came from, what kind of entity it describes, the date, and a standard set of metrics — impressions, clicks, conversions, spend, revenue, CTR, CPC, CPA, and ROAS.

This normalization is what makes cross-platform questions possible. When you ask “How does paid traffic compare to organic this month?”, LDOO can answer it cleanly because Google Ads and Search Console data already speak the same internal language. Platform-specific fields that fall outside the standard structure are preserved in a flexible metadata column — nothing is discarded.
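As a rough sketch, a normalized data point could be modeled like the record below. The field names and types are illustrative, not LDOO's actual schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Any

@dataclass
class MetricRecord:
    """One normalized data point in the unified format (illustrative)."""
    platform: str       # e.g. "google_ads", "ga4", "search_console"
    entity_type: str    # e.g. "campaign", "page", "query"
    entity_name: str
    day: date
    impressions: int = 0
    clicks: int = 0
    conversions: float = 0.0
    spend: float = 0.0
    revenue: float = 0.0
    # Platform-specific fields that fall outside the standard structure
    # are preserved here rather than discarded.
    metadata: dict[str, Any] = field(default_factory=dict)

    # Ratio metrics derive from the base fields, guarded against
    # division by zero.
    @property
    def ctr(self) -> float:
        return self.clicks / self.impressions if self.impressions else 0.0

    @property
    def cpc(self) -> float:
        return self.spend / self.clicks if self.clicks else 0.0

    @property
    def cpa(self) -> float:
        return self.spend / self.conversions if self.conversions else 0.0

    @property
    def roas(self) -> float:
        return self.revenue / self.spend if self.spend else 0.0
```

Because every record shares these core fields, a cross-platform comparison is just a filter on `platform` over the same columns.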

Once a sync completes, LDOO clears stale cached answers and runs an anomaly scan across your data to flag anything unusual before you even ask. The whole process runs in the background and never interrupts what you are doing in the app.

02

The six-step pipeline

When you ask LDOO a question, it does not go to a single AI model that produces an answer in one pass. It moves through a six-step pipeline — plan, fetch, investigate, interpret, verify, enrich — where each station does one thing well and the output of each becomes the input for the next.

Answer pipeline - full sequence
01
Plan
Reads the question and builds a formal data retrieval strategy - what data is needed, from which source, over which date range. Always includes a prior-period comparison so no number lacks context.
Deterministic · cached
02
Fetch
Queries live platform APIs or the synced database. Multi-source requests run in parallel with automatic fallback to cached data if a source is slow.
Live + synced · parallel
03
Investigate
If any metric changed significantly, LDOO automatically digs deeper - running parallel breakdowns by channel, device, traffic source, country, and landing page to identify what caused the change. Not just when but why.
Data-triggered · multi-query
04
Interpret
Turns raw data and investigation evidence into a specific, causal, client-ready explanation. Uses client baselines, goals, conversation memory, alerts, and feedback patterns to contextualize every number.
Streaming · baseline-aware
05
Verify
Scores the explanation against a quality checklist. Retries interpretation with feedback if the answer does not meet the standard.
Quality gate · retry on fail
06
Enrich
Adds metric tiles, charts, confidence indicators, data tables, data-grounded follow-up questions, and freshness metadata.
Structured output

Each step is a separate, independent module. If any step encounters an error, the pipeline has fallback paths so you always get an answer. It might note that a data source was temporarily unavailable or that the answer reflects synced rather than live data — but it never leaves you empty-handed.
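The step-by-step flow with fallbacks can be sketched in a few lines. The step and fallback functions here are placeholders, not LDOO's internals:

```python
def run_pipeline(question, steps, fallbacks):
    """Run each named step in order. If a step fails and a fallback is
    registered, use it and record a note so the final answer can
    disclose the degradation instead of failing outright."""
    state = {"question": question, "notes": []}
    for name, step in steps:
        try:
            state = step(state)
        except Exception:
            state = fallbacks[name](state)  # assumes a fallback exists for this step
            state["notes"].append(f"{name}: used fallback")
    return state
```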

03

How LDOO reads your question

The first AI model reads your question and produces a structured query plan — a formal description of exactly what data is needed to answer it. Not SQL. Closer to a brief handed to the data layer.

If you ask “Which campaigns had the highest CPA last month?”, the plan specifies CPA and spend metrics, grouped by campaign name, filtered to last month's date range, sorted by CPA descending, limited to the top ten results. That plan is handed to the fetch step.
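For that question, the plan might look something like the structure below. The field names are hypothetical, chosen only to illustrate the idea of a formal retrieval brief:

```python
# Hypothetical query plan for "Which campaigns had the highest CPA last month?"
query_plan = {
    "metrics": ["cpa", "spend"],
    "group_by": "campaign_name",
    "date_range": {"start": "2024-04-01", "end": "2024-04-30"},
    # A prior-period comparison is always included, so no number lacks context.
    "compare_to": {"start": "2024-03-01", "end": "2024-03-31"},
    "sort": {"by": "cpa", "direction": "desc"},
    "limit": 10,
    "sources": ["google_ads"],
}
```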

The planning step uses Anthropic's Claude in a deterministic mode — the same question asked twice will always produce the same plan. It receives the full schema of available data, the list of connected integrations, the date range covered by your data, and any context from earlier questions in the current conversation thread.

The planner does not guess. If your question is ambiguous — “How are things going?” — it maps it to a sensible default and flags the ambiguity so the interpretation step can acknowledge it. If you ask about a platform that is not connected, the plan notes the gap and identifies the closest available alternative. Query plans are cached, so repeat questions skip the AI model entirely and go straight to fetch.

04

Where the data actually comes from

With a plan in hand, LDOO fetches the data. It draws from two sources depending on availability — live platform APIs or its own synced database — and prefers live data whenever possible.

Live queries go directly to the platform — the GA4 Data API for Google Analytics, the Google Ads Query Language for Google Ads, the Search Analytics API for Search Console, the Marketing API for Meta. Each platform has its own authentication, rate limits, query format, and response structure. LDOO handles all of this internally. You interact with a single interface regardless of which platform the data is coming from.

If a platform is slow or temporarily rate-limited, LDOO does not wait. It falls back to the synced database and notes in the answer that the data may be slightly older. When a question requires multiple sources — Google Ads and GA4 for a cross-platform comparison, for example — those requests run in parallel rather than in sequence.

Database queries use the normalized data from your most recent sync. These are fast and reliable, but only as fresh as the last sync. LDOO tracks the age of every source and shows a freshness indicator on every answer.
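The fetch behavior described above (parallel requests, a timeout, fallback to synced data) can be sketched with `asyncio`. The function names and timeout value are illustrative:

```python
import asyncio

async def fetch_with_fallback(source, live_fetch, cached_fetch, timeout=3.0):
    """Try the live platform API; fall back to the synced database
    if the source is slow, and label the result accordingly."""
    try:
        data = await asyncio.wait_for(live_fetch(source), timeout)
        return (source, data, "live")
    except (asyncio.TimeoutError, ConnectionError):
        return (source, await cached_fetch(source), "cached")

async def fetch_all(sources, live_fetch, cached_fetch, timeout=3.0):
    """Multi-source requests run in parallel rather than in sequence."""
    return await asyncio.gather(
        *(fetch_with_fallback(s, live_fetch, cached_fetch, timeout) for s in sources)
    )
```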

Data freshness - per source
Google Ads · Live
Meta Ads · Live
Google Analytics (GA4) · Cached · 3h ago
Search Console · Cached · 3h ago

There is also a smart caching layer. If you ask a question that was recently answered, LDOO serves the cached answer immediately while re-running the full pipeline with live data in the background. If the refreshed data differs, the answer updates in place. If nothing has changed, only the freshness timestamp updates. Repeat questions feel instant without sacrificing accuracy.
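This is essentially a stale-while-revalidate pattern. A minimal sketch, with a plain dict standing in for whatever store LDOO actually uses:

```python
import time

class AnswerCache:
    """Serve cached answers instantly; refresh in the background."""

    def __init__(self):
        self._store = {}  # question -> (answer, fetched_at)

    def get(self, question):
        """Return (answer, fetched_at) immediately, or None on a miss."""
        return self._store.get(question)

    def put(self, question, answer):
        self._store[question] = (answer, time.time())

    def refresh(self, question, recompute):
        """Re-run the pipeline. If the refreshed data differs, the answer
        updates in place; if nothing changed, only the timestamp moves."""
        old = self._store.get(question)
        new_answer = recompute(question)
        if old is None or old[0] != new_answer:
            self.put(question, new_answer)
        else:
            self._store[question] = (old[0], time.time())
```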

05

Investigating what changed

This is what separates LDOO from a query tool. Before interpreting the data, LDOO checks whether anything warrants deeper investigation — and if it does, it digs in automatically.

Every query compares the current period to the previous one by default. If any metric has moved more than 20%, the pipeline upgrades to investigation mode — even if you just asked a simple question like “How are sessions?”

Investigation runs up to five parallel breakdowns: by channel, by device, by traffic source, by country, and by landing page. Each one asks the same question — “which segment drove the change?” — against a different dimension of the data. The result is not a guess. It is evidence: “Organic traffic dropped 45% while paid held steady. The drop is concentrated on mobile devices and coming from the US market.”

For ratio metrics like CPA and ROAS, the investigation also decomposes the ratio into its components — was it spend that changed, or conversions? — so the explanation identifies the actual lever, not just the symptom.
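The trigger and the ratio decomposition are simple arithmetic. A sketch, using the 20% threshold from above and illustrative function names:

```python
def needs_investigation(current, previous, threshold=0.20):
    """A metric that moved more than 20% period-over-period upgrades
    the pipeline to investigation mode."""
    if previous == 0:
        return current != 0
    return abs(current - previous) / abs(previous) > threshold

def decompose_cpa_change(spend_now, conv_now, spend_prev, conv_prev):
    """For a ratio metric like CPA (= spend / conversions), report which
    component actually moved: the lever, not the symptom."""
    return {
        "cpa_change": (spend_now / conv_now) - (spend_prev / conv_prev),
        "spend_change_pct": (spend_now - spend_prev) / spend_prev,
        "conversions_change_pct": (conv_now - conv_prev) / conv_prev,
    }
```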

Investigation triggers on the data, not the phrasing. You do not need to ask “why” to get the cause. If LDOO sees something unusual, it investigates before answering — the same way an analyst would.

06

Turning raw data into a plain-English explanation

This is the most consequential step in the pipeline. The interpretation step is where rows of metrics and investigation evidence become a specific, causal, client-ready explanation — the thing your client actually reads.

The model receives the raw data, investigation evidence, and a carefully assembled context window. That context includes: the current conversation thread, key findings from previous conversations going back 30 days, the client's 90-day baselines (what “normal” looks like for this specific client), any KPI targets the agency has set, industry and seasonality context, active anomaly alerts, recent report narratives, and negative feedback patterns to avoid.

This is why LDOO answers read differently from generic AI: when it says “CPA is $65 — above your normal $40–$55 range and the highest in three months”, that baseline context comes from real statistical computation, not a guess. When it says “this is unusual for this client”, it means it, because it knows what usual looks like.
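One plausible way to turn 90 days of history into a "normal" range is a simple percentile band around the median. This is an assumption for illustration; LDOO's actual statistical method is not described here:

```python
def baseline(history):
    """Summarize ~90 days of daily values into a normal range
    (roughly the 25th to 75th percentile) around the median."""
    s = sorted(history)
    n = len(s)
    return {"low": s[n // 4], "median": s[n // 2], "high": s[(3 * n) // 4]}

def contextualize(value, bl):
    """Place today's number against the client's own history."""
    if value > bl["high"]:
        return "above normal range"
    if value < bl["low"]:
        return "below normal range"
    return "within normal range"
```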

The interpretation step uses Anthropic's most capable Claude model. The response streams back to your screen in real time.

Example answer output
What happened to our Google Ads CPA last week?
CPA · $38.20 · ↓ 14.2% vs prior week
Conversions · 184 · ↑ 18.7%
Spend · $7,029 · → Flat

CPA dropped 14.2% to $38.20 — the best result in six weeks. The improvement was driven almost entirely by the Brand Search campaign, where conversion rate climbed from 4.1% to 5.8%. Spend held steady at $7k, which means the efficiency gain is genuine — more conversions for the same budget, not the result of pulling back.

Why did Brand Search conversion rate jump from 4.1% to 5.8%?
What does the CPA trend look like over the past 3 months?
Doo a report on this for your client.
Live · High confidence · Google Ads
07

The quality gate

Before the answer reaches you, a separate AI model scores it against a quality checklist. If the answer does not meet the standard, the pipeline retries interpretation with explicit feedback about what was missing. The goal is unique depth in every answer, not padding that sounds confident but says nothing.

Verification pass - 5/5
Quality criteria
Contains specific numbers, not vague claims
States a clear cause when relevant
Contextualizes against client baselines and goals
Compares against a prior period
Suggests a specific next step

This is a structural quality gate, not a stylistic preference — it is the reason LDOO answers read like analyst briefings rather than chatbot responses. For simpler requests where the numbers are self-evident, the gate is skipped. Adding a verification step to an answer that does not need one only adds latency without improving output.
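Structurally, the gate is a score-and-retry loop. In LDOO the scoring is done by an AI model; in this sketch, trivial string checks stand in for the real criteria:

```python
import re

CHECKS = {
    # Illustrative stand-ins for the real criteria, which a model scores.
    "specific_numbers": lambda a: bool(re.search(r"\d", a)),
    "prior_period": lambda a: "vs" in a.lower(),
    "next_step": lambda a: "recommend" in a.lower() or "next" in a.lower(),
}

def verify(answer, checks=CHECKS):
    """Return (passed, list of failed criteria)."""
    failed = [name for name, check in checks.items() if not check(answer)]
    return not failed, failed

def answer_with_gate(interpret, question, max_retries=2):
    """Retry interpretation with explicit feedback about what was missing."""
    feedback = []
    for _ in range(max_retries + 1):
        answer = interpret(question, feedback)
        ok, failed = verify(answer)
        if ok:
            return answer
        feedback = failed
    return answer  # best effort after retries
```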

08

Assembling the final answer

The last step adds the visual and structural elements that make the answer useful beyond the written explanation. Then it caches everything and closes the pipeline.

Metric tiles
Headline numbers with period-over-period changes and trend arrows. Color-coded semantically - with smart inversion for metrics like CPA where down is good.
Auto-selected charts
Chart type is inferred from data shape. Time series gets a line chart. Comparisons get a bar chart. Rankings get a horizontal bar. You never choose.
Confidence indicators
Flags when underlying data is too thin to support a reliable conclusion. An answer based on eight clicks over two days is different from one based on eight thousand.
Sparklines and tables
Daily trend sparklines for key metrics, plus full data tables for campaign-level, keyword, and page-level breakdowns wherever the full picture matters.

Once enriched, the answer is cached and the pipeline closes. From question to answer: seconds. From that answer, you can generate a branded report, create a live client portal, or ask a follow-up — all without leaving the conversation. Every answer is a launchpad.

09

Purpose-built AI models

Your answers move through a fixed pipeline of Anthropic Claude models: a fast planner turns the question into a safe query, the strongest model writes the interpretation your clients read, and a fast verifier checks quality before anything ships to the UI.

Planner
Query planning
Translates plain-English questions into structured data retrieval strategies.
Fully deterministic. Same question, same plan, every time. Optimized for speed and structured output. Plans are cached so repeat questions skip the model entirely.
Interpreter
Interpretation
Turns raw data into specific, causal, client-ready explanations.
The most capable model in the pipeline. The interpretation step is what clients read. Quality here is everything. Streams responses in real time.
Verifier
Verification and repair
Scores answers against a quality standard. Catches and corrects issues before they reach you.
The fastest model in the pipeline. Verification is a speed-critical task - it adds a quality check without adding noticeable latency.
10

Why the answers can be trusted

If an LDOO answer contains a wrong number or an invented cause, that answer goes to a client. The accuracy system is not a single clever mechanism — it is a set of independent layers, each enforcing the same constraints separately. A failure in one does not expose the others. See how this plays out in practice in the agency guide.

Layer 01
Data scoping - enforced twice
Every query is scoped to your account and your client's data. This is enforced by the application layer before a query runs, and again at the database level using Row Level Security policies that make it structurally impossible to access another account's data - even if the application had a bug. Two independent systems enforcing the same constraint.
Layer 02
Source integrity
When a question names a specific platform - "What is our Google Ads CPA?" - the answer must come from Google Ads data. Not GA4. Not Meta. A server-side check detects platform references and overrides the query plan if the AI planner did not correctly identify the required source.
Layer 03
Client baselines
LDOO computes 90-day statistical baselines for every metric, for every client. It knows the normal range, the median, and the recent trend. When CPA rises to $65, LDOO doesn't just report the number - it tells you that $65 is above the client's normal $40-$55 range and represents the highest value in three months. If you've set KPI targets, it compares against those too.
Layer 04
Fuzzy entity matching
When you refer to a specific campaign - "How is the Brand Search NZ campaign doing?" - there is often a gap between how you name things conversationally and how your ad platform labels them internally. LDOO fuzzy-matches your phrasing against actual entity names in your data. If a close match exists, it uses it and tells you explicitly that it made the correction.
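A minimal version of this matching can be built on the standard library's `difflib`. The normalization and cutoff here are assumptions, not LDOO's actual matcher:

```python
from difflib import get_close_matches

def _norm(name):
    """Normalize conversational and platform naming conventions."""
    return name.lower().replace("_", " ").replace("-", " ")

def resolve_entity(user_phrase, known_names, cutoff=0.6):
    """Map a conversational name onto the platform's internal label.
    Returns (matched_name, was_corrected), or (None, False)."""
    lookup = {_norm(n): n for n in known_names}
    key = _norm(user_phrase)
    if key in lookup:
        return lookup[key], lookup[key] != user_phrase
    matches = get_close_matches(key, list(lookup), n=1, cutoff=cutoff)
    if matches:
        return lookup[matches[0]], True  # close match: use it, but say so
    return None, False
```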
Layer 05
Confidence assessment
Not all data supports the same level of confidence. A trend based on five clicks over two days is not in the same category as one based on five thousand clicks over thirty days. LDOO evaluates sample size, date range coverage, and data freshness behind every answer. When the data is thin, the answer says so and explains why.
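Graded roughly, that assessment could look like the function below. The thresholds are invented for illustration:

```python
def confidence(total_clicks, days_covered, hours_since_sync):
    """Grade the evidence behind an answer."""
    if total_clicks < 100 or days_covered < 7:
        return "low"     # too thin to support a reliable conclusion
    if hours_since_sync > 24:
        return "medium"  # enough data, but it may be stale
    return "high"
```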
Layer 06
Feedback loop and memory
When you give an answer a thumbs down, LDOO stores the feedback and learns from it. When it discovers patterns in your data - like a metric trending down for three weeks - it stores that as a client memory that persists across conversations. The system gets smarter over time, not just at the model level, but at the level of understanding each individual client.
11

The evaluation system

No change to the pipeline — prompt update, model swap, new data source, structural adjustment — ships without being tested against a standardized set of questions first.

LDOO maintains a growing library of golden query fixtures: real questions with known-correct answers, covering every question type across every connected platform. When the pipeline is updated, every fixture runs through it. If any fixture produces a regression, the change does not ship until the regression is resolved.
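A fixture runner of this kind is compact. The fixture shape below is illustrative; the real fixtures and checks are LDOO's own:

```python
def run_fixtures(pipeline, fixtures):
    """Run every golden fixture through the pipeline and collect
    regressions; any regression blocks the change from shipping."""
    regressions = []
    for f in fixtures:
        answer = pipeline(f["question"])
        for name, check in f["expected"].items():
            if not check(answer):
                regressions.append((f["id"], name))
    return regressions
```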

Fixture #14 · Google Ads · Diagnostic
Input and expected
“Why did our Google Ads CPA spike last week?”
Trend dir. · CPA ↑ = negative
Color · CPA up = red tile
Cause req. · Must name primary driver
Source · Google Ads only
Anchor · Prior week comparison
Scores
Accuracy · 3/3
Format · 3/3
Client-ready without editing

Dozens · Test fixtures
6 · Question types
All · Platforms tested

A score of 3/3 on both accuracy and format means the answer is client-ready without editing. That is the bar. Every question type. Every platform. Before anything ships.

Your data has answers. LDOO finds them.

Connect a source, ask a question, and see how an analyst who knows your data answers it. Setup takes five minutes.