Connecting your data
Before LDOO can answer anything, it needs access to your marketing platforms. You connect each source through a standard OAuth flow — the same secure handshake that powers “Sign in with Google” across the web.
Each connection takes about 30 seconds. You select the property or account, authorize read-only access, and LDOO begins syncing in the background. See all supported integrations and features. For reporting and analytics walkthroughs, see the blog. For details on how credentials are stored and protected, see trust and data handling.
What happens during a sync
When you connect a data source, LDOO runs a background job that pulls your marketing data and normalizes it into a unified format. Every platform reports data differently — Google Ads measures clicks and conversions by campaign, GA4 tracks sessions and pageviews by page, Search Console counts impressions and clicks by search query. LDOO maps all of this into a single, consistent structure so every data point carries the same core fields: which platform it came from, what kind of entity it describes, the date, and a standard set of metrics — impressions, clicks, conversions, spend, revenue, CTR, CPC, CPA, and ROAS.
This normalization is what makes cross-platform questions possible. When you ask “How does paid traffic compare to organic this month?”, LDOO can answer it cleanly because Google Ads and Search Console data already speak the same internal language. Platform-specific fields that fall outside the standard structure are preserved in a flexible metadata column — nothing is discarded.
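As a rough sketch of what this normalization step looks like, assuming hypothetical field and function names rather than LDOO's actual schema:

```python
# Illustrative normalization sketch (hypothetical field names, not LDOO's
# actual schema). Every platform row maps into the same core fields;
# anything platform-specific is preserved in `metadata` rather than dropped.
STANDARD_METRICS = ["impressions", "clicks", "conversions", "spend", "revenue"]

def normalize(platform: str, entity_type: str, date: str, raw: dict) -> dict:
    record = {
        "platform": platform,        # which platform it came from
        "entity_type": entity_type,  # campaign, page, search query, ...
        "date": date,
        "metrics": {m: raw.pop(m, 0) for m in STANDARD_METRICS},
        "metadata": raw,             # platform-specific leftovers, preserved
    }
    m = record["metrics"]
    # Derived ratio metrics, guarded against division by zero.
    m["ctr"] = m["clicks"] / m["impressions"] if m["impressions"] else 0.0
    m["cpc"] = m["spend"] / m["clicks"] if m["clicks"] else 0.0
    m["cpa"] = m["spend"] / m["conversions"] if m["conversions"] else 0.0
    m["roas"] = m["revenue"] / m["spend"] if m["spend"] else 0.0
    return record

row = normalize("google_ads", "campaign", "2024-05-01",
                {"impressions": 1000, "clicks": 50, "conversions": 5,
                 "spend": 200.0, "revenue": 800.0, "campaign_id": "abc"})
```

Because the standard metrics are popped out first, whatever remains in the raw row (here, `campaign_id`) lands in the metadata column untouched.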
Once a sync completes, LDOO clears stale cached answers and runs an anomaly scan across your data to flag anything unusual before you even ask. The whole process runs in the background and never interrupts what you are doing in the app.
The six-step pipeline
When you ask LDOO a question, it does not go to a single AI model that produces an answer in one pass. It moves through a six-step pipeline — plan, fetch, investigate, interpret, verify, enrich — where each station does one thing well and the output of each becomes the input for the next.
Each step is a separate, independent module. If any step encounters an error, the pipeline has fallback paths so you always get an answer. It might note that a data source was temporarily unavailable or that the answer reflects synced rather than live data — but it never leaves you empty-handed.
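The step-by-step handoff with graceful fallbacks can be sketched roughly like this; the step functions below are stand-ins, not LDOO's actual code:

```python
# Minimal sketch of a pipeline where each step's output feeds the next,
# and a failing step degrades gracefully instead of aborting the run.
def run_pipeline(question: str, steps: list) -> dict:
    state = {"question": question, "notes": []}
    for name, step, fallback in steps:
        try:
            state = step(state)
        except Exception as exc:
            # Record the degradation so the final answer can mention it.
            state["notes"].append(f"{name} unavailable: {exc}")
            state = fallback(state)
    return state

# Hypothetical stand-in steps: the live fetch fails, the synced copy answers.
def plan(state):
    state["plan"] = {"metric": "sessions"}
    return state

def fetch_live(state):
    raise TimeoutError("GA4 rate limited")

def fetch_synced(state):
    state["data"] = {"sessions": 1200}
    return state

steps = [("plan", plan, lambda s: s),
         ("fetch", fetch_live, fetch_synced)]
result = run_pipeline("How are sessions?", steps)
```

The caller still gets an answer; the note about the unavailable source travels with it.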
How LDOO reads your question
The first AI model reads your question and produces a structured query plan — a formal description of exactly what data is needed to answer it. Not SQL. Closer to a brief handed to the data layer.
If you ask “Which campaigns had the highest CPA last month?”, the plan specifies CPA and spend metrics, grouped by campaign name, filtered to last month's date range, sorted by CPA descending, limited to the top ten results. That plan is handed to the fetch step.
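A plan of that shape might look something like the following, with purely illustrative field names:

```python
# Hypothetical shape of a structured query plan for "Which campaigns had
# the highest CPA last month?" -- field names are illustrative, not LDOO's.
plan = {
    "metrics": ["cpa", "spend"],
    "group_by": ["campaign_name"],
    "date_range": {"start": "2024-04-01", "end": "2024-04-30"},
    "sort": {"field": "cpa", "direction": "desc"},
    "limit": 10,
    "sources": ["google_ads"],
}
```

Not SQL, but unambiguous: the fetch step can execute it against any backend without re-reading the original question.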
The planning step uses Anthropic's Claude in a deterministic mode — the same question asked twice will always produce the same plan. It receives the full schema of available data, the list of connected integrations, the date range covered by your data, and any context from earlier questions in the current conversation thread.
The planner does not guess. If your question is ambiguous — “How are things going?” — it maps it to a sensible default and flags the ambiguity so the interpretation step can acknowledge it. If you ask about a platform that is not connected, the plan notes the gap and identifies the closest available alternative. Query plans are cached, so repeat questions skip the AI model entirely and go straight to fetch.
Where the data actually comes from
With a plan in hand, LDOO fetches the data. It draws from two sources depending on availability — live platform APIs or its own synced database — and prefers live data whenever possible.
Live queries go directly to the platform — the GA4 Data API for Google Analytics, the Google Ads Query Language for Google Ads, the Search Analytics API for Search Console, the Marketing API for Meta. Each platform has its own authentication, rate limits, query format, and response structure. LDOO handles all of this internally. You interact with a single interface regardless of which platform the data is coming from.
If a platform is slow or temporarily rate-limited, LDOO does not wait. It falls back to the synced database and notes in the answer that the data may be slightly older. When a question requires multiple sources — Google Ads and GA4 for a cross-platform comparison, for example — those requests run in parallel rather than in sequence.
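The parallel multi-source fetch can be sketched with Python's asyncio, with the real API calls replaced by stand-in coroutines:

```python
import asyncio

# Sketch of fetching two sources concurrently rather than in sequence
# (function names and payloads are illustrative stand-ins for API calls).
async def fetch_google_ads():
    await asyncio.sleep(0.01)   # stands in for a real API round-trip
    return {"source": "google_ads", "clicks": 500}

async def fetch_ga4():
    await asyncio.sleep(0.01)
    return {"source": "ga4", "sessions": 4200}

async def fetch_all():
    # Both requests run concurrently; total wait is roughly the slowest one.
    return await asyncio.gather(fetch_google_ads(), fetch_ga4())

results = asyncio.run(fetch_all())
```

`asyncio.gather` preserves argument order, so each result can be matched back to its source deterministically.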
Database queries use the normalized data from your most recent sync. These are fast and reliable, but only as fresh as the last sync. LDOO tracks the age of every source and shows a freshness indicator on every answer.
There is also a smart caching layer. If you ask a question that was recently answered, LDOO serves the cached answer immediately while re-running the full pipeline with live data in the background. If the refreshed data differs, the answer updates in place. If nothing has changed, only the freshness timestamp updates. Repeat questions feel instant without sacrificing accuracy.
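One way to sketch that cache behavior, with the background refresh simplified to an inline call (in LDOO the refresh runs as a background job):

```python
import time

# Synchronous sketch of "serve cached, refresh behind the scenes".
# Class and method names are illustrative, not LDOO's actual code.
class AnswerCache:
    def __init__(self):
        self._store = {}  # question -> {"answer": ..., "fetched_at": ...}

    def ask(self, question, run_pipeline):
        entry = self._store.get(question)
        if entry is None:
            # Cold cache: run the full pipeline and store the result.
            entry = {"answer": run_pipeline(), "fetched_at": time.time()}
            self._store[question] = entry
            return entry["answer"]
        # Warm cache: the cached answer would be returned immediately;
        # the refresh below is what would happen in the background.
        fresh = run_pipeline()
        if fresh != entry["answer"]:
            entry["answer"] = fresh       # data changed: update in place
        entry["fetched_at"] = time.time() # either way, bump freshness
        return entry["answer"]
```

If nothing changed, only the timestamp moves; if the data moved, the stored answer is replaced.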
Investigating what changed
This is what separates LDOO from a query tool. Before interpreting the data, LDOO checks whether anything warrants deeper investigation — and if it does, it digs in automatically.
Every query compares the current period to the previous one by default. If any metric has moved more than 20%, the pipeline upgrades to investigation mode — even if you just asked a simple question like “How are sessions?”
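The trigger condition reduces to a simple threshold check; the 20% figure comes from the text, everything else below is illustrative:

```python
# Sketch of the investigation trigger: compare current period vs. previous
# and escalate when any metric has moved more than 20%.
THRESHOLD = 0.20

def needs_investigation(current: dict, previous: dict) -> bool:
    for metric, prev in previous.items():
        if prev == 0:
            continue  # avoid division by zero; brand-new metrics aside
        change = abs(current.get(metric, 0) - prev) / prev
        if change > THRESHOLD:
            return True
    return False

needs_investigation({"sessions": 550}, {"sessions": 1000})  # 45% drop
```

A 45% drop in sessions clears the bar even though the question itself never asked "why".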
Investigation runs up to five parallel breakdowns: by channel, by device, by traffic source, by country, and by landing page. Each one asks the same question — “which segment drove the change?” — against a different dimension of the data. The result is not a guess. It is evidence: “Organic traffic dropped 45% while paid held steady. The drop is concentrated on mobile devices and coming from the US market.”
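Each breakdown reduces to the same question in code: across one dimension, which segment moved the most? A rough sketch, with illustrative data shapes:

```python
# One breakdown: given per-segment totals for the current and previous
# periods, find the segment that drove the change (illustrative sketch).
def segment_deltas(current: dict, previous: dict):
    segments = set(current) | set(previous)
    deltas = {s: current.get(s, 0) - previous.get(s, 0) for s in segments}
    # The segment with the largest absolute move "drove" the change.
    driver = max(deltas, key=lambda s: abs(deltas[s]))
    return driver, deltas[driver]

current_by_channel  = {"organic": 550,  "paid": 1000}
previous_by_channel = {"organic": 1000, "paid": 990}
segment_deltas(current_by_channel, previous_by_channel)
```

Run once per dimension (channel, device, source, country, landing page), the five results together form the evidence the answer cites.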
For ratio metrics like CPA and ROAS, the investigation also decomposes the ratio into its components — was it spend that changed, or conversions? — so the explanation identifies the actual lever, not just the symptom.
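The ratio decomposition can be sketched as comparing the relative movement of the numerator and denominator; this is an illustrative stand-in, not LDOO's actual logic:

```python
# Decompose a CPA change: did spend move, or did conversions?
def decompose_cpa(spend_now, conv_now, spend_prev, conv_prev):
    spend_change = (spend_now - spend_prev) / spend_prev
    conv_change = (conv_now - conv_prev) / conv_prev
    # Whichever component moved more (relatively) is the actual lever.
    lever = "spend" if abs(spend_change) > abs(conv_change) else "conversions"
    return {
        "cpa_now": spend_now / conv_now,
        "cpa_prev": spend_prev / conv_prev,
        "spend_change": spend_change,
        "conversions_change": conv_change,
        "primary_lever": lever,
    }

# Spend held steady while conversions climbed: the lever is conversions.
decompose_cpa(spend_now=7000, conv_now=183, spend_prev=7000, conv_prev=157)
```

That is the difference between "CPA improved" (the symptom) and "conversions rose on flat spend" (the lever).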
Investigation triggers on the data, not the phrasing. You do not need to ask “why” to get the cause. If LDOO sees something unusual, it investigates before answering — the same way an analyst would.
Turning raw data into a plain-English explanation
This is the most consequential step in the pipeline. The interpretation step is where rows of metrics and investigation evidence become a specific, causal, client-ready explanation — the thing your client actually reads.
The model receives the raw data, investigation evidence, and a carefully assembled context window. That context includes: the current conversation thread, key findings from previous conversations going back 30 days, the client's 90-day baselines (what “normal” looks like for this specific client), any KPI targets the agency has set, industry and seasonality context, active anomaly alerts, recent report narratives, and negative feedback patterns to avoid.
This is why LDOO answers read differently from generic AI: when it says “CPA is $65 — above your normal $40–$55 range and the highest in three months”, that baseline context comes from real statistical computation, not a guess. When it says “this is unusual for this client”, it means it, because it knows what usual looks like.
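One plausible way to compute such a normal band, assuming a mean-plus-or-minus-one-standard-deviation rule that the text does not actually specify:

```python
import statistics

# Illustrative baseline band over trailing daily values. LDOO's real
# statistic is not documented here; mean +/- one stdev is a stand-in.
def baseline_range(daily_values):
    mean = statistics.fmean(daily_values)
    sd = statistics.stdev(daily_values)
    return (mean - sd, mean + sd)

def is_unusual(value, daily_values):
    low, high = baseline_range(daily_values)
    return value < low or value > high

# Hypothetical 10 days of CPA history hovering in the mid-$40s.
history = [42, 48, 51, 44, 47, 53, 40, 55, 49, 46]
is_unusual(65, history)  # far above the normal band
```

The point is not the particular statistic but that "unusual" is a computation over this client's own history, not a model's guess.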
The interpretation step uses Anthropic's most capable Claude model. The response streams back to your screen in real time.
CPA dropped 14.2% to $38.20 — the best result in six weeks. The improvement was driven almost entirely by the Brand Search campaign, where conversion rate climbed from 4.1% to 5.8%. Spend held steady at $7k, which means the efficiency gain is genuine — more conversions for the same budget, not the result of pulling back.
The quality gate
Before the answer reaches you, a separate AI model scores it against a quality checklist. If the answer does not meet the standard, the pipeline retries interpretation with explicit feedback about what was missing. The goal is unique depth in every answer, not padding that sounds confident but says nothing.
This is a structural quality gate, not a stylistic preference — it is the reason LDOO answers read like analyst briefings rather than chatbot responses. For simpler requests where the numbers are self-evident, the gate is skipped. Adding a verification step to an answer that does not need one only adds latency without improving output.
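The gate-and-retry loop can be sketched like this, with `interpret` and `score_answer` as hypothetical stand-ins for the model calls:

```python
# Sketch of the quality gate: score the draft answer, retry interpretation
# with explicit feedback if it falls short, and skip the gate entirely for
# simple requests where verification would only add latency.
def quality_gate(question, interpret, score_answer, is_simple, max_retries=2):
    draft = interpret(question, feedback=None)
    if is_simple(question):
        return draft  # gate skipped: the numbers are self-evident
    for _ in range(max_retries):
        score, feedback = score_answer(draft)
        if score >= 3:  # meets the bar, ship it
            return draft
        # Retry with explicit feedback about what was missing.
        draft = interpret(question, feedback=feedback)
    return draft  # best effort after exhausting retries
```

The feedback loop is what pushes retries toward depth rather than toward longer padding.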
Assembling the final answer
The last step adds the visual and structural elements that make the answer useful beyond the written explanation.
Once enriched, the answer is cached and the pipeline closes. From question to answer: seconds. From that answer, you can generate a branded report, create a live client portal, or ask a follow-up — all without leaving the conversation. Every answer is a launchpad.

Purpose-built AI models
Your answers move through a fixed pipeline of Anthropic Claude models: a fast planner turns the question into a safe query plan, the strongest model writes the interpretation your clients read, and a fast verifier checks quality before anything ships to the UI.
Why the answers can be trusted
If an LDOO answer contains a wrong number or an invented cause, that answer goes to a client. The accuracy system is not a single clever mechanism — it is a set of independent layers, each enforcing the same constraints separately. A failure in one does not expose the others. See how this plays out in practice in the agency guide.
The evaluation system
No change to the pipeline — prompt update, model swap, new data source, structural adjustment — ships without being tested against a standardized set of questions first.
LDOO maintains a growing library of golden query fixtures: real questions with known-correct answers, covering every question type across every connected platform. When the pipeline is updated, every fixture runs through it. If any fixture produces a regression, the change does not ship until the regression is resolved.
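A minimal fixture-runner sketch, where the fixture contents and the matching rule are purely illustrative:

```python
# Sketch of a golden-fixture regression check: every fixture runs through
# the updated pipeline; any mismatch blocks the change from shipping.
# Questions and expected values here are invented examples.
FIXTURES = [
    {"question": "Which campaign had the highest CPA last month?",
     "expected": "Brand Search"},
    {"question": "How did sessions trend this week?",
     "expected": "up 12%"},
]

def run_fixtures(pipeline):
    regressions = []
    for fx in FIXTURES:
        answer = pipeline(fx["question"])
        if fx["expected"] not in answer:
            regressions.append(fx["question"])
    return regressions  # empty list means the change is safe to ship
```

In practice the comparison would score full answers rather than substring-match, but the gating logic is the same: a non-empty regression list blocks the release.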
Every fixture answer is scored out of three on accuracy and out of three on format. A 3/3 on both means the answer is client-ready without editing. That is the bar. Every question type. Every platform. Before anything ships.