Conversational analytics · LDOO

Ask your data.
Get an answer worth sending.

A client-ready explanation - specific numbers, a clear cause, a suggested next step - in seconds. Behind it is a five-step pipeline that pulls from all your data sources, interprets what it finds, and checks the result against a quality standard before it reaches you.

01

Connecting your data

Before LDOO can answer anything, it needs access to your marketing platforms. You connect each source through a standard OAuth flow - the same secure handshake that powers “Sign in with Google” across the web.

LDOO never sees or stores your platform passwords. Instead, it receives a token: a limited-permission key that allows it to read your data and nothing else. Each connection takes about 30 seconds. You select the property or account, authorize access, and LDOO begins syncing in the background. See all supported integrations and features.

What happens during a sync

When you connect a data source, LDOO runs a background job that pulls your marketing data and normalizes it into a unified format. Every platform reports data differently - Google Ads measures clicks and conversions by campaign, GA4 tracks sessions and pageviews by page, Search Console counts impressions and clicks by search query. LDOO maps all of this into a single, consistent structure so every data point carries the same core fields: which platform it came from, what kind of entity it describes, the date, and a standard set of metrics - impressions, clicks, conversions, spend, revenue, CTR, CPC, CPA, and ROAS.

This normalization is what makes cross-platform questions possible. When you ask “How does paid traffic compare to organic this month?”, LDOO can answer it cleanly because Google Ads and Search Console data already speak the same internal language. Platform-specific fields that fall outside the standard structure are preserved in a flexible metadata column - nothing is discarded.
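
As an illustration of the idea, here is a minimal sketch of such a normalizer in Python. The field names (`platform`, `entity_type`, `metadata`) and mapping rules are assumptions chosen to mirror the description above, not LDOO's actual schema:

```python
# Illustrative sketch of cross-platform normalization. Field names and
# rules are assumptions, not LDOO's real internals.
STANDARD_METRICS = ["impressions", "clicks", "conversions", "spend", "revenue"]

def normalize(platform: str, entity_type: str, date: str, raw: dict) -> dict:
    """Map one raw platform row onto a single consistent record.

    Known metrics land in standard fields; everything else is kept in a
    flexible `metadata` dict so nothing is discarded.
    """
    record = {
        "platform": platform,
        "entity_type": entity_type,
        "date": date,
        "metadata": {},
    }
    for metric in STANDARD_METRICS:
        record[metric] = float(raw.get(metric, 0) or 0)
    # Derived ratios are computed uniformly, regardless of source platform.
    record["ctr"] = record["clicks"] / record["impressions"] if record["impressions"] else 0.0
    record["cpc"] = record["spend"] / record["clicks"] if record["clicks"] else 0.0
    record["cpa"] = record["spend"] / record["conversions"] if record["conversions"] else 0.0
    record["roas"] = record["revenue"] / record["spend"] if record["spend"] else 0.0
    # Preserve platform-specific extras instead of dropping them.
    for key, value in raw.items():
        if key not in STANDARD_METRICS:
            record["metadata"][key] = value
    return record
```

Because the derived ratios are computed once, in one place, a cross-platform comparison can treat every row identically no matter which API it came from.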

Once a sync completes, LDOO clears stale cached answers and runs an anomaly scan across your data to flag anything unusual before you even ask. The whole process runs in the background and never interrupts what you are doing in the app.

Keeping tokens fresh

OAuth tokens expire - Google tokens last about an hour. LDOO handles this transparently. When it needs to query a platform and the token is close to expiring, it automatically refreshes it before making the request. If a refresh fails because access was revoked or the token was invalidated, the connection is marked as expired and LDOO prompts you to reconnect. All tokens are encrypted at rest using AES-256 - the same encryption standard used by banks and financial institutions. For the full details on how LDOO handles your data, see our trust and data handling page.
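
The refresh-before-expiry behavior can be sketched as follows. The class shape, the five-minute margin, and the `refresh_fn` callback are illustrative assumptions, not LDOO's internals:

```python
# Sketch of proactive token refresh. Names and the margin are assumptions.
import time

EXPIRY_MARGIN = 300  # refresh when fewer than five minutes remain (assumed)

class Connection:
    """Holds one platform connection's token state."""

    def __init__(self, token, expires_at, refresh_fn):
        self.token = token
        self.expires_at = expires_at  # unix timestamp
        self.refresh_fn = refresh_fn  # calls the provider's token endpoint
        self.status = "active"

    def get_token(self, now=None):
        """Return a usable token, refreshing proactively near expiry."""
        now = time.time() if now is None else now
        if self.expires_at - now < EXPIRY_MARGIN:
            try:
                self.token, self.expires_at = self.refresh_fn()
            except Exception:
                # Access revoked or token invalidated: mark the connection
                # expired so the app can prompt a reconnect.
                self.status = "expired"
                raise
        return self.token
```

Refreshing before the request, rather than retrying after a 401, means the platform never sees an expired credential in the first place.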

02

The five-step pipeline

When you ask LDOO a question, it does not go to a single AI model that produces an answer in one pass. It moves through a five-step pipeline - an assembly line where each station does one thing well, and the output of each becomes the input for the next.

Answer pipeline - full sequence
01
Plan
Reads the question and builds a formal data retrieval strategy - what data is needed, from which source, over which date range, in which order.
Deterministic · cached
02
Fetch
Queries live platform APIs or the synced database. Multi-source requests run in parallel with automatic fallback to cached data if a source is slow.
Live + synced · parallel
03
Interpret
Turns raw data into a specific, causal, client-ready explanation. Uses full conversation context, prior insights, alerts, and feedback patterns.
Streaming · context-aware
04
Verify
Scores the explanation against a quality checklist. Retries interpretation with feedback if the answer does not meet the standard.
Quality gate · retry on fail
05
Enrich
Adds metric tiles, charts, confidence indicators, data tables, follow-up questions, and freshness metadata.
Structured output

Each step is a separate, independent module. If any step encounters an error, the pipeline has fallback paths so you always get an answer. It might note that a data source was temporarily unavailable or that the answer reflects synced rather than live data - but it never leaves you empty-handed.

03

How LDOO reads your question

The first AI model reads your question and produces a structured query plan - a formal description of exactly what data is needed to answer it. Not SQL. Closer to a brief handed to the data layer.

If you ask “Which campaigns had the highest CPA last month?”, the plan specifies CPA and spend metrics, grouped by campaign name, filtered to last month's date range, sorted by CPA descending, limited to the top ten results. That plan is handed to the fetch step.

The planning step uses Anthropic's Claude in a deterministic mode - the same question asked twice will always produce the same plan. It receives the full schema of available data, the list of connected integrations, the date range covered by your data, and any context from earlier questions in the current conversation thread.

The planner does not guess. If your question is ambiguous - “How are things going?” - it maps it to a sensible default and flags the ambiguity so the interpretation step can acknowledge it. If you ask about a platform that is not connected, the plan notes the gap and identifies the closest available alternative. Query plans are cached, so repeat questions skip the AI model entirely and go straight to fetch.
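
A query plan of the kind described above could be represented like this. The field names are hypothetical, chosen to mirror the CPA example, and are not LDOO's real plan format:

```python
# Illustrative shape of a structured query plan. Field names are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class QueryPlan:
    metrics: tuple           # e.g. ("cpa", "spend")
    group_by: str            # e.g. "campaign_name"
    source: str              # e.g. "google_ads"
    date_range: tuple        # (start, end) as ISO dates
    order_by: str = ""
    descending: bool = True
    limit: int = 10
    ambiguous: bool = False  # flagged so interpretation can acknowledge it

def plan_for_highest_cpa_last_month() -> QueryPlan:
    """The plan described in the text: top campaigns by CPA last month."""
    return QueryPlan(
        metrics=("cpa", "spend"),
        group_by="campaign_name",
        source="google_ads",
        date_range=("2024-04-01", "2024-04-30"),
        order_by="cpa",
        descending=True,
        limit=10,
    )
```

Making the plan immutable (and therefore hashable) is one way to get plan caching almost for free: identical questions produce identical plans, and the plan itself can serve as the cache key.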

04

Where the data actually comes from

With a plan in hand, LDOO fetches the data. It draws from two sources depending on availability - live platform APIs or its own synced database - and prefers live data whenever possible.

Live queries go directly to the platform - the GA4 Data API for Google Analytics, the Google Ads Query Language for Google Ads, the Search Analytics API for Search Console, the Marketing API for Meta. Each platform has its own authentication, rate limits, query format, and response structure. LDOO handles all of this internally. You interact with a single interface regardless of which platform the data is coming from.

If a platform is slow or temporarily rate-limited, LDOO does not wait. It falls back to the synced database and notes in the answer that the data may be slightly older. When a question requires multiple sources - Google Ads and GA4 for a cross-platform comparison, for example - those requests run in parallel rather than in sequence.
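
In Python terms, parallel fetching with a cache fallback might be sketched like this. The function names and the thread-pool choice are illustrative, not a description of LDOO's implementation:

```python
# Sketch of parallel multi-source fetch with synced-data fallback.
from concurrent.futures import ThreadPoolExecutor

def fetch_all(sources, live_fetchers, cache):
    """Fetch every source in parallel; fall back to synced data on failure."""
    results = {}

    def fetch_one(source):
        try:
            return source, live_fetchers[source](), "live"
        except Exception:
            # Slow or rate-limited platform: serve synced data and flag
            # the answer so freshness can be shown to the user.
            return source, cache[source], "cached"

    with ThreadPoolExecutor(max_workers=max(1, len(sources))) as pool:
        for source, data, freshness in pool.map(fetch_one, sources):
            results[source] = {"data": data, "freshness": freshness}
    return results
```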

Database queries use the normalized data from your most recent sync. These are fast and reliable, but only as fresh as the last sync. LDOO tracks the age of every source and shows a freshness indicator on every answer.

Data freshness - per source
Google Ads · Live
Meta Ads · Live
Google Analytics (GA4) · Cached · 3h ago
Search Console · Cached · 3h ago

There is also a smart caching layer. If you ask a question that was recently answered, LDOO serves the cached answer immediately while re-running the full pipeline with live data in the background. If the refreshed data differs, the answer updates in place. If nothing has changed, only the freshness timestamp updates. Repeat questions feel instant without sacrificing accuracy.
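
This serve-then-refresh pattern is often called stale-while-revalidate, and its core can be sketched in a few lines. Here `run_pipeline` and `schedule` stand in for LDOO's real pipeline and background job queue - they are assumptions for illustration:

```python
# Minimal stale-while-revalidate sketch. Names are illustrative.
def answer(question, cache, run_pipeline, schedule):
    """Serve a cached answer instantly and refresh it in the background.

    `run_pipeline` recomputes the answer from live data; `schedule` queues
    a background task (a thread pool or job queue in a real system).
    """
    if question in cache:
        def refresh():
            fresh = run_pipeline(question)
            if fresh != cache[question]:
                cache[question] = fresh  # answer updates in place
        schedule(refresh)
        return cache[question]           # instant response
    result = run_pipeline(question)
    cache[question] = result
    return result
```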

05

Turning raw data into a plain-English explanation

This is the most consequential step in the pipeline. The interpretation step is where rows of metrics become a specific, causal, client-ready explanation - the thing your client actually reads.

The model receives the raw data alongside the original question, the query plan, and a carefully assembled context window. That context includes the current conversation thread so follow-up questions make sense, key findings from previous conversations about this client going back 30 days, any active anomaly alerts, recent report narratives so the AI can reference prior analysis, and negative feedback you have given on recent answers so the model avoids repeating patterns you have flagged.

All of this context is budget-controlled so the most relevant information always fits. An AI model given too much context becomes unfocused - it draws on signals that are present but not relevant. By controlling what goes in, LDOO ensures every answer is shaped by the right signals rather than diluted by noise.
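
One common way to implement such a budget - a reasonable guess at the general shape, though not LDOO's actual logic - is to rank candidate context items by relevance and pack them greedily until the token budget is spent:

```python
# Sketch of budget-controlled context assembly. The priority ordering and
# the rough four-characters-per-token estimate are assumptions.
def assemble_context(candidates, budget_tokens):
    """Pack context items into a fixed token budget, most relevant first.

    `candidates` is a list of (priority, text) pairs; a lower priority
    number means more relevant. Items that would overflow the budget are
    dropped, so the prompt stays focused rather than diluted by noise.
    """
    used = 0
    kept = []
    for priority, text in sorted(candidates, key=lambda item: item[0]):
        cost = max(1, len(text) // 4)  # crude token estimate
        if used + cost <= budget_tokens:
            kept.append(text)
            used += cost
    return "\n\n".join(kept)
```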

The interpretation step uses Anthropic's most capable Claude model. Phrasing varies naturally so answers do not read identically every time, while the substance remains grounded in your data. The response streams back to your screen in real time.

Example answer output
What happened to our Google Ads CPA last week?
↳ Google Ads · 7-day window · live data
CPA · $38.20 · ↓ 14.2% vs prior week
Conversions · 184 · ↑ 18.7%
Spend · $7,029 · → Flat

CPA dropped 14.2% to $38.20 - the best result in six weeks. The improvement was driven almost entirely by the Brand Search campaign, where conversion rate climbed from 4.1% to 5.8%. Spend held steady at $7k, which means the efficiency gain is genuine - more conversions for the same budget, not the result of pulling back.

Which ad groups drove the conversion rate improvement?
How does this compare to the same period last month?
Live · High confidence · Google Ads
06

The quality gate

Before the answer reaches you, a separate AI model scores it against a quality checklist. If the answer does not meet the standard, the pipeline retries interpretation with explicit feedback about what was missing.

Verification pass - 5/5
Quality criteria
Contains specific numbers, not vague claims
States a clear cause when relevant
Includes supporting context
Compares against a prior period or benchmark
Suggests a next step

This is a structural quality gate, not a stylistic preference - it is the reason LDOO answers read like analyst briefings rather than chatbot responses. For simpler requests where the numbers are self-evident, the gate is skipped. Adding a verification step to an answer that does not need one only adds latency without improving output.
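
The gate-and-retry loop can be sketched as follows. In the real pipeline the criteria are scored by an AI model; the regex checks here are simplified stand-ins so the control flow is runnable:

```python
# Sketch of the verify-and-retry loop. The criteria checks are simplified
# stand-ins for model-based scoring.
import re

CRITERIA = {
    "specific_numbers": lambda text: bool(re.search(r"\d", text)),
    "states_cause": lambda text: "driven by" in text or "because" in text,
    "prior_comparison": lambda text: "vs" in text or "compared" in text,
}

def verify(answer_text):
    """Score an answer; return (passed, list of failed criteria)."""
    failed = [name for name, check in CRITERIA.items() if not check(answer_text)]
    return len(failed) == 0, failed

def answer_with_gate(interpret, max_retries=2):
    """Retry interpretation with explicit feedback until the gate passes."""
    feedback = []
    for _ in range(max_retries + 1):
        draft = interpret(feedback)
        passed, failed = verify(draft)
        if passed:
            return draft
        feedback = failed  # tell the interpreter exactly what was missing
    return draft           # return the best attempt rather than nothing
```

The key design choice is that a failed verification does not just reject the answer - it feeds the specific missing criteria back into the next interpretation attempt.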

07

Assembling the final answer

The last step adds the visual and structural elements that make the answer useful beyond the written explanation. Then it caches everything and closes the pipeline.

Metric tiles
Headline numbers with period-over-period changes and trend arrows. Color-coded semantically - with smart inversion for metrics like CPA where down is good.
Auto-selected charts
Chart type is inferred from data shape. Time series gets a line chart. Comparisons get a bar chart. Rankings get a horizontal bar. You never choose.
Confidence indicators
Flags when underlying data is too thin to support a reliable conclusion. An answer based on eight clicks over two days is different from one based on eight thousand.
Sparklines and tables
Daily trend sparklines for key metrics, plus full data tables for campaign-level, keyword, and page-level breakdowns wherever the full picture matters.
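
Chart auto-selection is essentially a decision rule over the shape of the result set. A plausible sketch - the exact rules and the six-row threshold are assumptions, not LDOO's actual heuristics:

```python
# Sketch of chart-type inference from data shape. Rules are illustrative.
def infer_chart(rows, group_by, date_keyed):
    """Pick a chart type from the shape of the result set."""
    if date_keyed:
        return "line"            # time series -> line chart
    if group_by and len(rows) > 6:
        return "horizontal_bar"  # long rankings read better horizontally
    if group_by:
        return "bar"             # small comparisons -> vertical bars
    return "metric_tile"         # a single number -> headline tile
```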

Once enriched, the answer is cached and the pipeline closes. From question to answer: seconds. From that answer, you can generate a branded report, create a live client portal, or ask a follow-up - all without leaving the conversation. Every answer is a launchpad.

08

Purpose-built AI models

LDOO uses multiple Anthropic Claude models, each selected for a specific role. The tiered architecture ensures the most capable model handles interpretation - the step your clients actually read - while faster models handle planning and verification.

Planner
Query planning
Translates plain-English questions into structured data retrieval strategies.
Fully deterministic. Same question, same plan, every time. Optimized for speed and structured output. Plans are cached so repeat questions skip the model entirely.
Interpreter
Interpretation
Turns raw data into specific, causal, client-ready explanations.
The most capable model in the pipeline. The interpretation step is what clients read. Quality here is everything. Streams responses in real time.
Verifier
Verification and repair
Scores answers against a quality standard. Catches and corrects issues before they reach you.
The fastest model in the pipeline. Verification is a speed-critical task - it adds a quality check without adding noticeable latency.
09

Why the answers can be trusted

If an LDOO answer contains a wrong number or an invented cause, that answer goes to a client. The accuracy system is not a single clever mechanism - it is a set of independent layers, each enforcing the same constraints separately. A failure in one does not expose the others.

Layer 01
Data scoping - enforced twice
Every query is scoped to your account and your client's data. This is enforced by the application layer before a query runs, and again at the database level using Row Level Security policies that make it structurally impossible to access another account's data - even if the application had a bug. Two independent systems enforcing the same constraint.
Layer 02
Source integrity
When a question names a specific platform - "What is our Google Ads CPA?" - the answer must come from Google Ads data. Not GA4. Not Meta. A server-side check detects platform references and overrides the query plan if the AI planner did not correctly identify the required source.
Layer 03
Fuzzy entity matching
When you refer to a specific campaign - "How is the Brand Search NZ campaign doing?" - there is often a gap between how you name things conversationally and how your ad platform labels them internally. LDOO fuzzy-matches your phrasing against actual entity names in your data. If a close match exists, it uses it and tells you explicitly that it made the correction.
Layer 04
Confidence assessment
Not all data supports the same level of confidence. A trend based on five clicks over two days is not in the same category as one based on five thousand clicks over thirty days. LDOO evaluates sample size, date range coverage, and data freshness behind every answer. When the data is thin, the answer says so and explains why.
Layer 05
Feedback loop
When you give an answer a thumbs down, LDOO stores the feedback alongside the question type that triggered it. That feedback is injected into the interpretation prompt for subsequent questions of the same kind, so the model actively avoids repeating patterns you have flagged. The guardrails are not static - they improve with use.
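
The fuzzy matching in Layer 03 can be approximated with Python's standard difflib. The similarity cutoff and the return shape are assumptions for illustration:

```python
# Sketch of fuzzy entity matching via difflib. The cutoff is an assumption.
import difflib

def match_entity(user_phrase, known_names, cutoff=0.6):
    """Match conversational phrasing to an actual entity name.

    Returns (matched_name, corrected), where `corrected` is True when the
    match was not exact - the signal used to tell the user explicitly
    that a correction was made.
    """
    lowered = {name.lower(): name for name in known_names}
    phrase = user_phrase.lower()
    if phrase in lowered:
        return lowered[phrase], False
    candidates = difflib.get_close_matches(phrase, list(lowered), n=1, cutoff=cutoff)
    if candidates:
        return lowered[candidates[0]], True
    return None, False
```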
10

The evaluation system

No change to the pipeline - prompt update, model swap, new data source, structural adjustment - ships without being tested against a standardized set of questions first.

LDOO maintains a growing library of golden query fixtures: real questions with known-correct answers, covering every question type across every connected platform. When the pipeline is updated, every fixture runs through it. If any fixture produces a regression, the change does not ship until the regression is resolved.
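
A fixture runner of this kind takes only a few lines. The fixture shape and field names here are illustrative, not the actual test harness:

```python
# Sketch of a golden-fixture regression check. Field names are assumptions.
def run_fixtures(fixtures, pipeline):
    """Run every fixture; return the list of regressions (empty = ship)."""
    regressions = []
    for fixture in fixtures:
        answer = pipeline(fixture["question"])
        checks = fixture["expected"]  # name -> predicate over the answer
        if not all(check(answer) for check in checks.values()):
            regressions.append(fixture["id"])
    return regressions
```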

Fixture #14 · Google Ads · Diagnostic
Input and expected
“Why did our Google Ads CPA spike last week?”
Trend direction · CPA ↑ = negative
Color · CPA up = red tile
Cause required · Must name primary driver
Source · Google Ads only
Anchor · Prior week comparison
Scores
Accuracy · 3/3
Format · 3/3
Client-ready without editing
Test fixtures · dozens
Question types · 6
Platforms tested · all

A score of 3/3 on both accuracy and format means the answer is client-ready without editing. That is the bar. Every question type. Every platform. Before anything ships.

Stop digging through dashboards. Ask your data.

Setup takes five minutes. The first answer will tell you everything you need to know.
