Explainer · 4 min read

Agent-led product analytics: when 'ask the data' replaces the dashboard

PMs are increasingly skipping dashboards and asking an agent. Here is the architecture: warehouse access, semantic layer, query agent, visualisation, and the trust controls that prevent the agent from confidently lying about your funnel.

Static dashboards are losing ground to "ask the data": a PM types a question, an agent queries the warehouse, and the answer arrives with a chart. The pattern works when the architecture is right. It produces confidently wrong numbers when it is not.

What this replaces

Three classic product-analytics workflows:

  • Dashboard authoring — the PM who knew what to ask but had to wait for an analyst.
  • Self-serve BI — the tool that could answer if the PM mastered it.
  • Slack-the-data-team — the analyst who could answer if they had time.

Agents collapse these into one interface. Done well, they free the data team for harder work. Done poorly, they multiply incorrect numbers.

The five-layer architecture

PM question
   ↓
intent classifier (clarify vs answer vs escalate)
   ↓
semantic layer (definitions, metrics, dimensions)
   ↓
SQL generator (LLM, constrained to semantic layer)
   ↓
warehouse (read-only role, query budget)
   ↓
visualiser + answer generator

The layer that separates good from bad agent analytics is the semantic layer.
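
Wired together, the layers look roughly like the sketch below. Every function name here is hypothetical and stands in for one layer, not a specific library:

def answer_question(question: str):
    intent = classify_intent(question)     # clarify vs answer vs escalate
    if intent == "clarify":
        return ask_followup(question)
    if intent == "escalate":
        return route_to_analyst(question)
    plan = resolve_metrics(question)       # semantic layer: metrics + dimensions
    sql = compile_to_sql(plan)             # constrained to the semantic layer
    rows = run_readonly(sql)               # read-only role, query budget
    return render_answer(plan, rows)       # chart + narrative, with citations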

The semantic layer is the moat

Without it, the agent generates SQL that is sometimes right and sometimes wrong, with no way to tell which. With it, the agent picks from a curated set of metrics and dimensions defined by the data team.

A working semantic-layer entry:

metric: weekly_active_users
description: "Distinct users who logged any event in the last 7 days"
sql: "SELECT count(distinct user_id) FROM events WHERE ts >= now() - interval '7 days'"
filters_allowed:
  - country
  - product_tier
  - signup_cohort

The agent constructs queries by composing metrics and filters, not by writing SQL from scratch.
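
A sketch of what that composition can look like in Python. The METRICS dict mirrors the YAML entry above; compile_metric is a hypothetical helper, and filter values bind as parameters at execution time, never interpolated:

METRICS = {
    "weekly_active_users": {
        "sql": ("SELECT count(distinct user_id) FROM events "
                "WHERE ts >= now() - interval '7 days'"),
        "filters_allowed": {"country", "product_tier", "signup_cohort"},
    },
}

def compile_metric(name: str, filters: dict[str, str]) -> str:
    metric = METRICS[name]
    illegal = set(filters) - metric["filters_allowed"]
    if illegal:
        # Anything outside the curated set is rejected, not improvised.
        raise ValueError(f"filters not in semantic layer: {illegal}")
    clauses = " ".join(f"AND {col} = %({col})s" for col in sorted(filters))
    return f"{metric['sql']} {clauses}".strip()

print(compile_metric("weekly_active_users", {"country": "DE"}))
# SELECT count(distinct user_id) FROM events
#   WHERE ts >= now() - interval '7 days' AND country = %(country)s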

What the agent is good at

Five tasks where it shines:

Single-metric questions

"How many active users last week?" — direct lookup, fast and reliable.

Filtered queries

"Active users last week by country" — composing the dimension is trivial.

Time comparisons

"How does this week compare to last week?" — repeating the query with a window shift.

Anomaly explanation

"Why did sign-ups drop on Tuesday?" — the agent can pull related metrics, look at deploys, scan recent incidents.

Funnel summaries

"Walk me through the onboarding funnel" — the agent can run the metric chain.

What it is bad at

Four classes of question that produce wrong answers:

Novel metric definitions

"What is our 'engaged user' definition?" If it is not in the semantic layer, the agent makes one up.

Cohort gymnastics

Complex cohort retention is fragile; the agent often misaligns time windows.

Statistical significance

"Is this drop real?" requires statistical machinery the agent does not natively have.

Causal questions

"Did the new pricing cause churn?" — correlation is easy, causation is research.

For these, escalate to a human analyst. Build the escalation path explicitly.

Trust controls

Without these, agent analytics is a PR liability.

Citation

Every answer cites the metric definitions and filters used, so the PM can trace the number back to the exact SQL that produced it.
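
A sketch of the shape an answer can carry (a hypothetical dataclass, not a specific library):

from dataclasses import dataclass, field

@dataclass
class Answer:
    value: float
    metric: str                            # e.g. "weekly_active_users"
    filters: dict[str, str] = field(default_factory=dict)
    sql: str = ""                          # exact query sent to the warehouse
    definition: str = ""                   # description from the semantic layer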

Sanity ranges

The agent flags numbers outside historical norms. Catches both data bugs and agent hallucinations.
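
A minimal version, assuming a history of recent daily values for the metric:

def sanity_flag(value: float, history: list[float], k: float = 3.0) -> str | None:
    # Flag anything more than k standard deviations from the recent mean.
    mean = sum(history) / len(history)
    std = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5
    if std > 0 and abs(value - mean) > k * std:
        return f"{value:,.0f} is {abs(value - mean) / std:.1f} std devs from recent history"
    return None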

Read-only enforcement

The agent's database role is read-only. Enforced at the database level, not by prompt instructions.

Query budget

Per-question token and compute budget; cap heavy queries.
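
A sketch of both controls with psycopg2 against Postgres; the DSN and role name are placeholders:

import psycopg2

conn = psycopg2.connect(
    "postgresql://agent_readonly@warehouse/analytics",   # hypothetical read-only role
    options="-c statement_timeout=15000",                # hard 15 s cap per query
)
conn.set_session(readonly=True)   # belt and braces on top of the role's grants

def run_readonly(sql: str, params: dict | None = None) -> list[tuple]:
    with conn.cursor() as cur:
        cur.execute(sql, params)
        return cur.fetchall()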

Audit

Every question and answer is logged, SQL included, so disputed numbers can be audited after the fact.
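
Append-only JSONL is enough to start; the field names here are illustrative:

import json, time

def log_interaction(user: str, question: str, sql: str, answer: str) -> None:
    record = {"ts": time.time(), "user": user, "question": question,
              "sql": sql, "answer": answer}
    with open("agent_audit.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")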

Visualisation

The agent picks the right chart from a small palette:

  • Single number for one metric, point in time.
  • Time series for trends.
  • Bar chart for breakdowns.
  • Funnel diagram for sequential conversion.
  • Table when the answer needs precise numbers.

A picker with five options is reliable. A general-purpose chart generator is not.
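
A sketch of the picker, keyed on the shape of the result rather than on model judgment:

def pick_chart(is_funnel: bool, has_time_axis: bool,
               n_metrics: int, n_rows: int) -> str:
    if is_funnel:
        return "funnel"
    if has_time_axis:
        return "time_series"
    if n_metrics == 1 and n_rows == 1:
        return "single_number"
    if n_rows <= 12:
        return "bar"
    return "table"   # many rows: precise numbers beat a cluttered chart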

Performance matters more than usual

PMs ask many small questions. Latency budget:

  • Simple lookups: under 2 s.
  • Compose-and-query: under 5 s.
  • Complex composites: under 15 s with progress narration.

Cache aggressively (every PM asks the same DAU question) and route by complexity.
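
A sketch of the cache, keyed on the compiled SQL so identical questions from different PMs share one warehouse hit (reusing the run_readonly helper from above):

import time

_cache: dict[str, tuple[float, object]] = {}

def cached_query(sql: str, ttl_s: int = 300):
    hit = _cache.get(sql)
    if hit and time.time() - hit[0] < ttl_s:
        return hit[1]                      # every PM asks the same DAU question
    result = run_readonly(sql)
    _cache[sql] = (time.time(), result)
    return result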

What good looks like

Three signals of a working agent analytics deployment:

  • PMs use it daily without escalating routine questions to the data team.
  • Analyst time shifts to deeper work, not query-answering.
  • Data quality bugs surface faster because more people are asking.

Common mistakes

  • Skipping the semantic layer — guarantees wrong answers.
  • Letting the agent write any SQL — cost overruns, security risk.
  • No human escalation — agent confidently wrong, nobody catches it.
  • No metric definitions — every team gets a different "active user".

Where this is heading

Three trends by 2027: BI vendors (Looker, Mode, Hex) shipping agent layers natively, agent-aware metric stores becoming standard, and product analytics shifting from dashboard-first to question-first across most SaaS companies. Build the semantic layer regardless of which agent you pick.
