Connecting your data
Before LDOO can answer anything, it needs access to your marketing platforms. You connect each source through a standard OAuth flow - the same secure handshake that powers “Sign in with Google” across the web.
LDOO never sees or stores your platform passwords. Instead, it receives a token: a limited-permission key that allows it to read your data and nothing else. Each connection takes about 30 seconds. You select the property or account, authorize access, and LDOO begins syncing in the background. See all supported integrations and features.
What happens during a sync
When you connect a data source, LDOO runs a background job that pulls your marketing data and normalizes it into a unified format. Every platform reports data differently - Google Ads measures clicks and conversions by campaign, GA4 tracks sessions and pageviews by page, Search Console counts impressions and clicks by search query. LDOO maps all of this into a single, consistent structure so every data point carries the same core fields: which platform it came from, what kind of entity it describes, the date, and a standard set of metrics - impressions, clicks, conversions, spend, revenue, CTR, CPC, CPA, and ROAS.
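As a concrete illustration of what such a unified structure could look like, here is a minimal sketch of a normalized data point. The class and field names are assumptions for illustration, not LDOO's actual schema; derived metrics like CTR and CPA fall out of the core fields.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative normalized record; names are assumptions, not LDOO's schema.
@dataclass
class MetricRecord:
    platform: str          # e.g. "google_ads", "ga4", "search_console"
    entity_type: str       # e.g. "campaign", "page", "query"
    entity_name: str
    day: date
    impressions: int = 0
    clicks: int = 0
    conversions: float = 0.0
    spend: float = 0.0
    revenue: float = 0.0
    metadata: dict = field(default_factory=dict)  # platform-specific extras

    @property
    def ctr(self) -> float:
        return self.clicks / self.impressions if self.impressions else 0.0

    @property
    def cpa(self) -> float:
        return self.spend / self.conversions if self.conversions else 0.0

row = MetricRecord("google_ads", "campaign", "Brand Search",
                   date(2024, 5, 1), impressions=1000, clicks=50,
                   conversions=5, spend=200.0)
# ctr = 0.05, cpa = 40.0
```

Because every record carries the same core fields, a cross-platform comparison reduces to filtering and grouping one list of records.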
This normalization is what makes cross-platform questions possible. When you ask “How does paid traffic compare to organic this month?”, LDOO can answer it cleanly because Google Ads and Search Console data already speak the same internal language. Platform-specific fields that fall outside the standard structure are preserved in a flexible metadata column - nothing is discarded.
Once a sync completes, LDOO clears stale cached answers and runs an anomaly scan across your data to flag anything unusual before you even ask. The whole process runs in the background and never interrupts what you are doing in the app.
Keeping tokens fresh
OAuth tokens expire - Google tokens last about an hour. LDOO handles this transparently. When it needs to query a platform and the token is close to expiring, it automatically refreshes it before making the request. If a refresh fails because access was revoked or the token was invalidated, the connection is marked as expired and LDOO prompts you to reconnect. All tokens are encrypted at rest using AES-256 - the same encryption standard used by banks and financial institutions. For the full details on how LDOO handles your data, see our trust and data handling page.
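The refresh-before-expiry pattern can be sketched as follows. This is a simplified stand-in, not LDOO's implementation: the five-minute margin, the `Connection` shape, and the `PermissionError` signal for revoked access are all illustrative assumptions.

```python
import time

REFRESH_MARGIN_SECONDS = 300  # refresh when under five minutes remain (assumed margin)

class Connection:
    def __init__(self, token, expires_at, refresh_token):
        self.token = token
        self.expires_at = expires_at
        self.refresh_token = refresh_token
        self.status = "active"

def ensure_fresh(conn, refresh_fn, now=None):
    """Refresh the access token if it is close to expiring."""
    now = now if now is not None else time.time()
    if conn.expires_at - now > REFRESH_MARGIN_SECONDS:
        return conn  # still comfortably valid, no network round trip
    try:
        conn.token, conn.expires_at = refresh_fn(conn.refresh_token)
    except PermissionError:
        # Access was revoked upstream: mark expired, prompt reconnection.
        conn.status = "expired"
    return conn
```

The key property is that refresh happens before a query, never in response to a failed one, so the user-facing request path stays fast.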
The five-step pipeline
When you ask LDOO a question, the question does not go to a single AI model that produces an answer in one pass. It moves through a five-step pipeline - an assembly line where each station does one thing well, and the output of each step becomes the input for the next.
Each step is a separate, independent module. If any step encounters an error, the pipeline has fallback paths so you always get an answer. It might note that a data source was temporarily unavailable or that the answer reflects synced rather than live data - but it never leaves you empty-handed.
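The assembly-line-with-fallbacks idea can be sketched in a few lines. The interfaces here are assumptions made for illustration: each step is a function, and each step carries its own fallback so an error anywhere still produces an answer plus a note about what degraded.

```python
# Sketch of a staged pipeline with per-step fallbacks; step names and
# interfaces are illustrative assumptions, not LDOO's internals.
def run_pipeline(question, steps):
    """steps: list of (step_fn, fallback_fn) pairs, run in order."""
    result = question
    notes = []
    for step, fallback in steps:
        try:
            result = step(result)
        except Exception as exc:
            # The fallback receives the same input the failed step got.
            result = fallback(result)
            notes.append(f"{step.__name__}: fell back ({exc})")
    return result, notes
```

A failed live fetch, for example, would route through a synced-database fallback and surface as a note on the answer rather than as an error.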
How LDOO reads your question
The first AI model reads your question and produces a structured query plan - a formal description of exactly what data is needed to answer it. Not SQL. Closer to a brief handed to the data layer.
If you ask “Which campaigns had the highest CPA last month?”, the plan specifies CPA and spend metrics, grouped by campaign name, filtered to last month's date range, sorted by CPA descending, limited to the top ten results. That plan is handed to the fetch step.
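A plan like that might be represented as a small structured object. The keys below are illustrative assumptions, not LDOO's actual plan format; the point is that every element of the question becomes an explicit, machine-checkable field.

```python
# Hypothetical query plan for the CPA question above; keys are
# illustrative, not LDOO's actual plan schema.
plan = {
    "metrics": ["cpa", "spend"],
    "group_by": "campaign_name",
    "date_range": {"preset": "last_month"},
    "sort": {"by": "cpa", "order": "desc"},
    "limit": 10,
    "sources": ["google_ads"],
}
```

Because the plan is plain data rather than free text, it can be validated, cached, and replayed deterministically by the fetch step.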
The planning step uses Anthropic's Claude in a deterministic mode - the same question asked twice will always produce the same plan. It receives the full schema of available data, the list of connected integrations, the date range covered by your data, and any context from earlier questions in the current conversation thread.
The planner does not guess. If your question is ambiguous - “How are things going?” - it maps it to a sensible default and flags the ambiguity so the interpretation step can acknowledge it. If you ask about a platform that is not connected, the plan notes the gap and identifies the closest available alternative. Query plans are cached, so repeat questions skip the AI model entirely and go straight to fetch.
Where the data actually comes from
With a plan in hand, LDOO fetches the data. It draws from two sources depending on availability - live platform APIs or its own synced database - and prefers live data whenever possible.
Live queries go directly to the platform - the GA4 Data API for Google Analytics, the Google Ads Query Language for Google Ads, the Search Analytics API for Search Console, the Marketing API for Meta. Each platform has its own authentication, rate limits, query format, and response structure. LDOO handles all of this internally. You interact with a single interface regardless of which platform the data is coming from.
If a platform is slow or temporarily rate-limited, LDOO does not wait. It falls back to the synced database and notes in the answer that the data may be slightly older. When a question requires multiple sources - Google Ads and GA4 for a cross-platform comparison, for example - those requests run in parallel rather than in sequence.
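Parallel fetching with a per-source fallback can be sketched with `asyncio`. Everything here is an assumption for illustration - the function names, the five-second timeout, and the synchronous synced-database lookup - but it shows the shape: one task per source, awaited together, each degrading independently.

```python
import asyncio

# Sketch: query sources in parallel, falling back to synced data when a
# live API times out. Names, timeout, and interfaces are assumptions.
async def fetch_source(name, live_fn, synced_fn, timeout=5.0):
    try:
        data = await asyncio.wait_for(live_fn(), timeout)
        return name, data, "live"
    except (asyncio.TimeoutError, ConnectionError):
        # Slow or rate-limited platform: use the synced database instead.
        return name, synced_fn(), "synced"

async def fetch_all(sources, timeout=5.0):
    # One task per source, awaited together rather than in sequence.
    return await asyncio.gather(
        *(fetch_source(n, live, synced, timeout) for n, live, synced in sources)
    )
```

The third element of each result tuple is what would drive the freshness indicator described above.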
Database queries use the normalized data from your most recent sync. These are fast and reliable, but only as fresh as the last sync. LDOO tracks the age of every source and shows a freshness indicator on every answer.
There is also a smart caching layer. If you ask a question that was recently answered, LDOO serves the cached answer immediately while re-running the full pipeline with live data in the background. If the refreshed data differs, the answer updates in place. If nothing has changed, only the freshness timestamp updates. Repeat questions feel instant without sacrificing accuracy.
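This serve-then-refresh behavior is a stale-while-revalidate pattern. The sketch below is a simplified, synchronous stand-in (the real refresh would run in the background): serve the cached answer immediately, recompute, and update in place only if the data actually changed - otherwise bump just the freshness timestamp.

```python
import time

# Simplified stale-while-revalidate cache; a synchronous stand-in for
# the background refresh described above. Names are assumptions.
class AnswerCache:
    def __init__(self, recompute):
        self._store = {}          # question -> (answer, fetched_at)
        self._recompute = recompute

    def get(self, question):
        if question in self._store:
            answer, fetched_at = self._store[question]
            self._refresh(question)       # would run in the background
            return answer, fetched_at     # served immediately
        return self._refresh(question)

    def _refresh(self, question):
        answer = self._recompute(question)
        now = time.time()
        cached = self._store.get(question)
        if cached is None or cached[0] != answer:
            self._store[question] = (answer, now)          # data changed
        else:
            self._store[question] = (cached[0], now)       # freshness only
        return answer, now
```

The same idea could cover query plans as well as finished answers, which is why repeat questions can skip the planner entirely.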
Turning raw data into a plain-English explanation
This is the most consequential step in the pipeline. The interpretation step is where rows of metrics become a specific, causal, client-ready explanation - the thing your client actually reads.
The model receives the raw data alongside the original question, the query plan, and a carefully assembled context window. That context includes the current conversation thread so follow-up questions make sense, key findings from previous conversations about this client going back 30 days, any active anomaly alerts, recent report narratives so the AI can reference prior analysis, and negative feedback you have given on recent answers so the model avoids repeating patterns you have flagged.
All of this context is budget-controlled so the most relevant information always fits. An AI model given too much context becomes unfocused - it draws on signals that are present but not relevant. By controlling what goes in, LDOO ensures every answer is shaped by the right signals rather than diluted by noise.
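Budget-controlled assembly can be sketched as a greedy selection: rank the candidate context pieces by relevance and add them until the token budget is exhausted. The scoring, the whitespace token estimate, and the function shape are all illustrative assumptions.

```python
# Sketch of budget-controlled context assembly; scoring and the crude
# token estimate are assumptions for illustration.
def assemble_context(pieces, budget_tokens):
    """pieces: list of (relevance_score, text); returns the fitted context."""
    selected, used = [], 0
    for score, text in sorted(pieces, key=lambda p: p[0], reverse=True):
        cost = len(text.split())  # crude token estimate
        if used + cost > budget_tokens:
            continue  # skip pieces that would blow the budget
        selected.append(text)
        used += cost
    return "\n\n".join(selected)
```

A highly relevant anomaly alert always makes the cut; a long, marginally relevant report narrative gets dropped rather than diluting the prompt.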
The interpretation step uses Anthropic's most capable Claude model. Phrasing varies naturally so answers do not read identically every time, while the substance remains grounded in your data. The response streams back to your screen in real time. A finished interpretation might read:
CPA dropped 14.2% to $38.20 - the best result in six weeks. The improvement was driven almost entirely by the Brand Search campaign, where conversion rate climbed from 4.1% to 5.8%. Spend held steady at $7k, which means the efficiency gain is genuine - more conversions for the same budget, not the result of pulling back.
The quality gate
Before the answer reaches you, a separate AI model scores it against a quality checklist. If the answer does not meet the standard, the pipeline retries interpretation with explicit feedback about what was missing.
This is a structural quality gate, not a stylistic preference - it is the reason LDOO answers read like analyst briefings rather than chatbot responses. For simpler requests where the numbers are self-evident, the gate is skipped. Adding a verification step to an answer that does not need one only adds latency without improving output.
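The gate-with-retry logic amounts to a small loop. This sketch assumes a boolean verdict plus feedback text from the scoring model and a bounded retry count - all illustrative, not LDOO's actual interfaces.

```python
# Sketch of a retry-with-feedback quality gate; the scoring shape,
# retry limit, and skip condition are illustrative assumptions.
def answer_with_gate(interpret, score, question, data,
                     needs_gate=True, max_retries=2):
    feedback = None
    answer = interpret(question, data, feedback)
    if not needs_gate:
        return answer  # simple request: skip verification, save latency
    for _ in range(max_retries):
        result = score(answer)         # e.g. {"ok": bool, "feedback": str}
        if result["ok"]:
            return answer
        feedback = result["feedback"]  # tell the model what was missing
        answer = interpret(question, data, feedback)
    return answer
```

The retry is targeted: the interpreter sees exactly what the gate found lacking, rather than simply rolling the dice again.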
Assembling the final answer
The last step adds the visual and structural elements that make the answer useful beyond the written explanation. Once enriched, the answer is cached and the pipeline closes. From question to answer: seconds. From that answer, you can generate a branded report, create a live client portal, or ask a follow-up - all without leaving the conversation. Every answer is a launchpad.
Purpose-built AI models
LDOO uses multiple Anthropic Claude models, each selected for a specific role. The tiered architecture ensures the most capable model handles interpretation - the step your clients actually read - while faster models handle planning and verification.
Why the answers can be trusted
If an LDOO answer contains a wrong number or an invented cause, that answer goes to a client. The accuracy system is not a single clever mechanism - it is a set of independent layers, each enforcing the same constraints separately. A failure in one does not expose the others.
The evaluation system
No change to the pipeline - prompt update, model swap, new data source, structural adjustment - ships without being tested against a standardized set of questions first.
LDOO maintains a growing library of golden query fixtures: real questions with known-correct answers, covering every question type across every connected platform. When the pipeline is updated, every fixture runs through it. If any fixture produces a regression, the change does not ship until the regression is resolved.
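A fixture run like that can be sketched as a simple gate: every fixture goes through the updated pipeline, and any score below the bar blocks the ship. The fixture shape and the 0-3 scoring scale follow the description here, but the API is an assumption.

```python
# Sketch of a golden-fixture regression check; fixture shape and the
# scoring interface are illustrative assumptions.
def run_fixtures(pipeline, fixtures, bar=3):
    """Return the questions whose accuracy or format score fell below bar."""
    regressions = []
    for fx in fixtures:
        answer = pipeline(fx["question"])
        scores = fx["score_fn"](answer)   # {"accuracy": int, "format": int}
        if min(scores["accuracy"], scores["format"]) < bar:
            regressions.append(fx["question"])
    return regressions  # non-empty means the change does not ship
```

In a CI setting, a non-empty return value would fail the build, which is exactly the "regression blocks the ship" rule described above.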
A score of 3/3 on both accuracy and format means the answer is client-ready without editing. That is the bar. Every question type. Every platform. Before anything ships.