We couldn’t find the right tool for AI agents to query databases reliably. So we built one.

March 27, 2026

At Motley, we build AI-powered reporting automation: agents that pull data, crunch numbers, and generate business documents without humans doing the copy-paste work in between. That means our agents need to query databases reliably, with the right business logic baked in. When an agent answers “what was revenue last month?”, it needs to know your definition of revenue: which transactions count, which don’t, whether it’s gross or net.
Getting this right turned out to be harder than expected.
What we tried
The obvious first move was raw SQL. Give the agent the schema, let it write queries. It works in demos. In production it falls apart: hallucinated column names, subtly wrong join logic, metric definitions that drift from what the finance team uses. There’s no single source of truth.
So we looked at semantic layers.
Cube is the incumbent and genuinely good, but it’s built around a different use case. Cube is optimised for running the same query many times, as fast as possible: pre-aggregation pipelines, a dedicated caching layer (Cube Store), and distributed infrastructure that requires multiple nodes just to run in production. That architecture makes sense if you’re serving the same dashboard to a hundred users. It’s too much if you’re serving one-off, exploratory queries from an agent. Beyond the infrastructure, Cube’s data model refresh takes seconds: every time an agent wants to update or create a model on the fly, it waits. We needed near-instantaneous refresh.
Also, Cube’s expressiveness for complex queries left much to be desired. All too often, when trying to combine its features, we ran into errors (for example, grouping by a dimension from a joined cube worked fine, as did time-shifting a measure, but trying to do both in the same query raised an error).
A similar pattern held for other options we looked at: these tools are optimised for predictability and repetition. Define your queries upfront in config files, run them reliably at scale. That’s the dashboard use case.
It’s not the agent use case, where queries are dynamic, one-off, and need to change as the agent learns more about what it’s looking for.
What we built
SLayer is a lightweight, open-source semantic layer optimised for flexibility rather than repetition. You define models in YAML or via its CLI, REST, MCP, or Python interface; queries come in as structured JSON; SLayer generates and runs the SQL.
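To make that shape concrete, here is a hypothetical structured query body an agent might send. The key names (`model`, `dimensions`, `measures`, `filters`) are illustrative assumptions, not necessarily SLayer’s actual request schema:

```python
import json

# Hypothetical query body an agent might POST to SLayer's REST endpoint.
# All key names here are illustrative assumptions, not SLayer's documented schema.
query = {
    "model": "orders",                  # which semantic model to query
    "dimensions": ["customer_region"],  # group-by columns
    "measures": ["total_revenue"],      # pre-defined aggregations
    "filters": [
        {"dimension": "order_date", "op": ">=", "value": "2026-01-01"}
    ],
}

body = json.dumps(query)
print(body)
```

The point of the design is that SLayer, not the agent, owns the translation from a body like this into SQL, so the model’s joins and metric definitions stay the single source of truth.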
The key design choices that make it different:
Auto-ingestion with fact table awareness. Connect a database and SLayer generates models from the schema automatically, pulling in columns from linked tables as dimensions (measures from linked tables coming soon): you get a usable set of models without manually wiring up every join.
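As a sketch of what auto-ingestion might produce for an `orders` fact table joined to `customers` (the exact YAML layout is an assumption here; consult SLayer’s docs for the real model format):

```yaml
# Hypothetical auto-generated model; field names are illustrative,
# not SLayer's documented YAML schema.
model: orders
table: public.orders
dimensions:
  - name: order_date
    column: order_date
  - name: customer_region        # pulled in from the joined customers table
    column: customers.region
measures:
  - name: total_revenue
    sql: sum(amount)
joins:
  - model: customers
    on: orders.customer_id = customers.id
```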
Dynamic model manipulation. Agents can create and modify models at runtime through the MCP tools or API, no file editing, no restart, near-instantaneous refresh. The model is live the moment it’s created or modified.
Query-time expressions. In most semantic layers, every measure must be pre-defined in the model definition. In SLayer, agents can define expressions at query time. Need month-on-month change in revenue divided by order count? You don’t need a data engineer to add that measure to the model first. The agent composes it on the fly.
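For instance, the month-on-month example above might be expressed at query time roughly like this; again, the key names and expression syntax are assumptions for illustration, not SLayer’s documented schema:

```python
import json

# Hypothetical query with a measure expression defined at query time rather
# than in the model. Key names and the expression syntax are illustrative
# assumptions, not SLayer's documented schema.
query = {
    "model": "orders",
    "dimensions": ["order_month"],
    "expressions": [
        {
            "name": "rev_change_per_order",
            # month-on-month change in revenue, divided by order count
            "sql": "change(sum(revenue)) / count(order_id)",
        }
    ],
}
body = json.dumps(query)
```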
First-class time operations. Time comparisons are among the most common things agents need to do, and among the worst-supported in many semantic layers. SLayer has composable time transforms built in, such as time shift, change, and last. Compare this period to last period without constructing the subqueries yourself.
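Independent of SLayer’s exact syntax, the semantics of a shift/change pair are easy to pin down. A plain-Python sketch on toy monthly revenue data (the transform names and behaviour here are my assumptions about what such operations compute, not SLayer’s implementation):

```python
# Plain-Python illustration of what "shift" and "change" time transforms
# compute, on toy monthly revenue data. SLayer's actual transform names
# and semantics may differ; this only shows the general idea.
revenue = {"2026-01": 100.0, "2026-02": 120.0, "2026-03": 90.0}

months = sorted(revenue)
# shift: each month paired with the previous month's value (None for the first)
shifted = {
    m: revenue[months[i - 1]] if i > 0 else None
    for i, m in enumerate(months)
}
# change: current value minus previous value
change = {
    m: revenue[m] - shifted[m] if shifted[m] is not None else None
    for m in months
}

print(change)  # {'2026-01': None, '2026-02': 20.0, '2026-03': -30.0}
```

In a semantic layer, the same arithmetic would be pushed down into generated SQL (typically a window function or self-join) rather than computed in Python.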
What’s next
SLayer is deliberately minimal right now: no auth, no caching, no UI.
The near-term roadmap:
Including measures from joined models in a query
Structuring SLayer as a Claude Code plugin, with skills for specific query patterns
Flexible multi-stage query support: for example, showing improvement in performance over time, grouped by recent usage buckets
Easy unpivoting: for example, one column per funnel stage → one row per funnel stage
Smart formatting of output (currency, percentages, etc.)
Auto-propagated filters: define a filter on a column once and have it automatically apply to every model that exposes that column as a dimension
ASOF joins: for point-in-time correctness in historical queries
Chart generation using eCharts
If any of these are blockers for what you’re building, please open an issue; priorities are shaped by what people are actually running into.
Get started
```bash
pip install agentic-slayer[all]
slayer ingest --datasource my_postgres --schema public
slayer serve --models-dir ./my_models
```
⭐ Star on GitHub · Quick start · Open an issue
MIT licensed. Built by Motley.