Teams adding a copilot to their app generally start with a clear goal: let users ask the product to do things in natural language. But many end up operating a second infrastructure stack alongside their actual product. This page is for teams deciding whether to keep building or switch to something that handles the infrastructure so they can focus on what their copilot actually does.
If you decide to build, you need three things
Ask any AI “how do I add a copilot to my SaaS?” and you'll get the same architecture. Try it yourself. Three layers, each its own project.
Knowledge layer
A vector database, ingestion pipeline, embedding generation, retrieval API with re-ranking, and sync jobs to keep the index fresh.
Backend agent
A server-side service for planning, a tool registry, a streaming bridge to the browser, and auth forwarding so the agent acts as the user.
Frontend chat UI
Chat panel, streaming display, tool call rendering, confirmation flows, and client state serialization for the backend.
Before you've written a single tool, you're operating a second system.
The real cost is the glue between them
Those three systems sound like distinct, manageable projects. In practice, the integration work between them is where teams spend most of their time.
You define tools twice — once on the backend where the agent can plan around them, and again on the frontend where they actually execute. You serialize client state so the backend can “see” what the user sees. You build a streaming transport so the backend can push tool calls back into the browser. You forward auth tokens so the agent service can act with the user's permissions. And you coordinate releases so the backend planner and frontend executor don't drift out of sync.
When something breaks, you debug across two runtimes, often in two languages. The failure could be in the planning step, the streaming bridge, the tool execution, or the state sync. None of this is product work. It's plumbing.
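The double definition problem is easiest to see in code. In a DIY stack, the same tool exists twice: a schema-only entry the backend planner can reason about, and a separate frontend executor that does the work. A sketch, with invented names (this is not any particular framework's API):

```typescript
// Backend side: the planner only knows the tool's schema, not its
// implementation. It streams { name, args } down to the browser.
const backendToolRegistry = {
  create_dashboard: {
    description: "Create a new dashboard for the current user",
    parameters: {
      type: "object",
      properties: { title: { type: "string" } },
      required: ["title"],
    },
  },
};

// Frontend side: the same tool, defined a second time, now with the code
// that actually runs. If the two definitions drift, calls break.
const frontendToolExecutors: Record<string, (args: any) => any> = {
  create_dashboard: ({ title }: { title: string }) => {
    // ...would call the app's existing API here...
    return { id: "dash-1", title };
  },
};

// The glue: the frontend receives a planned call and dispatches it.
function executePlannedCall(call: { name: string; args: unknown }) {
  const exec = frontendToolExecutors[call.name];
  if (!exec) throw new Error(`No frontend executor for ${call.name}`);
  return exec(call.args);
}
```

Every tool you add means touching both registries, and every schema change means coordinating two deploys so they don't disagree.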
What Pillar replaces
Pillar is one SDK. You install it, register tools in your frontend code, and the copilot works. Each of the three systems you were going to build maps to something Pillar already provides.
Instead of a vector database and ingestion pipeline, Pillar gives you a managed knowledge base. Upload your docs or connect your help center. Pillar handles embedding, chunking, retrieval, and freshness. You don't run infrastructure for it.
Instead of a backend agent service, Pillar hosts the reasoning server. Your tools are registered client-side with JSON schemas and descriptions. The model picks which to call and in what order. When it needs to chain steps — create a dashboard, then add panels, then set alerts — it calls tools sequentially and uses each result as input to the next. That's native LLM tool-calling. You don't need a state machine.
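The dashboard-then-panels-then-alerts chain can be sketched as plain sequential calls, each feeding the next. This is a simulation of the loop a tool-calling model drives, with hypothetical tool names; it is not Pillar's actual API:

```typescript
type Tool = { description: string; run: (args: any) => any };

// Hypothetical tools; each returns a result the next step can use.
const tools: Record<string, Tool> = {
  create_dashboard: {
    description: "Create a dashboard and return its id",
    run: ({ title }: { title: string }) => ({
      dashboardId: `dash-${title.toLowerCase()}`,
    }),
  },
  add_panel: {
    description: "Add a panel to a dashboard",
    run: ({ dashboardId, metric }: { dashboardId: string; metric: string }) => ({
      panelId: `${dashboardId}/${metric}`,
    }),
  },
  set_alert: {
    description: "Set an alert on a panel",
    run: ({ panelId, threshold }: { panelId: string; threshold: number }) => ({
      alert: `${panelId} > ${threshold}`,
    }),
  },
};

// The model emits one call at a time, using each result as input to the
// next call -- no application-side state machine required.
function runChain(calls: Array<{ name: string; args: (prev: any) => any }>) {
  let prev: any = {};
  for (const call of calls) {
    prev = tools[call.name].run(call.args(prev));
  }
  return prev;
}

// Example chain: each step reads the previous result.
const chained = runChain([
  { name: "create_dashboard", args: () => ({ title: "Revenue" }) },
  { name: "add_panel", args: (p) => ({ dashboardId: p.dashboardId, metric: "mrr" }) },
  { name: "set_alert", args: (p) => ({ panelId: p.panelId, threshold: 100 }) },
]);
```

The sequencing logic lives in the model, not in your code; your side only supplies the tool implementations.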
Instead of building a chat UI with streaming, tool cards, and confirmation flows, the SDK ships them. npm install @pillar-ai/react, wrap your app in the provider, and register your tools in a hook. Three files.
The glue work — duplicate tool definitions, state serialization, streaming bridge, auth forwarding, coordinated releases — disappears because there's no split architecture. Tools live in your frontend code, next to the components that already know how to do the work. Planning happens on Pillar's servers. Execution happens in the browser with the user's existing session.
If the user can't do something, the copilot can't either. No proxy servers. No token forwarding. No permission mapping layer.
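Because tools execute in the browser, a tool handler is just a call to the APIs the user is already authenticated against. A sketch under that assumption (the endpoint and field names are invented for illustration):

```typescript
// A tool handler that reuses the browser session: the request carries the
// user's existing cookies, so the server enforces exactly the user's
// permissions. No separate token is minted for the agent.
async function inviteUser(
  args: { email: string },
  // Injectable for testing; defaults to the browser's fetch.
  fetchFn: typeof fetch = fetch,
): Promise<{ ok: boolean; status: number }> {
  const res = await fetchFn("/api/invites", {
    method: "POST",
    credentials: "include", // send the user's session cookie
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email: args.email }),
  });
  // A 403 here means the user couldn't invite either: the copilot
  // inherits the restriction instead of bypassing it.
  return { ok: res.ok, status: res.status };
}
```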
Teams are getting set up in days, not months
The DIY copilot stack is a multi-month project. Teams tell us they spent four to eight weeks on infrastructure before their copilot could do anything useful — and they still had a backlog of integration work when they launched.
With Pillar, the pattern we see is different. A team installs the SDK, registers a handful of tools that call their existing APIs, and has a working copilot in a few days. The first week is usually spent on tool quality — writing better descriptions, adding confirmation flows for sensitive actions, tuning what the copilot can read vs. write. That work matters regardless of which architecture you choose, but with Pillar you start there instead of spending weeks on plumbing first.
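That read-vs-write tuning can be modeled as a small policy gate in front of tool execution. A sketch with illustrative flag names, not Pillar's API:

```typescript
type GuardedTool = {
  run: (args: any) => any;
  // Write tools pause for an explicit user confirmation before running.
  requiresConfirmation: boolean;
};

async function invokeTool(
  tool: GuardedTool,
  args: unknown,
  confirm: () => Promise<boolean>, // e.g. resolves when the user clicks "Confirm"
) {
  if (tool.requiresConfirmation && !(await confirm())) {
    return { cancelled: true };
  }
  return tool.run(args);
}
```

Read-only tools skip the prompt and run immediately; destructive ones wait for the user. Deciding which tools fall on which side of that line is product work you own under either architecture.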
We've seen teams go from “npm install” to a copilot that creates dashboards, invites users, navigates the app, and answers product questions — in under a week. The integration with Grafana and Apache Superset on our demos page shows what that looks like in a real product.
The difference isn't that Pillar is a simpler product. It's that the infrastructure — the vector database, the reasoning server, the streaming transport, the chat UI — is already built. You spend your time on the part that's unique to your product: the tools.
When building your own makes sense
Pillar handles user-initiated, permission-sensitive copilots — the kind where a user types a request, the AI calls a few functions inside the app, and the user stays in control.
There are real cases where you need a custom orchestration layer. Background automation that runs without the user present. Durable workflows that need to survive server restarts and resume hours later. Multi-agent architectures where specialized agents hand off work to each other. Complex branching logic driven by business rules rather than tool output.
If your copilot needs those things, build the backend. Use LangGraph, Temporal, or Hatchet. But most product copilots don't need them. They need the AI to take a user request, call a few tools in sequence, and show the result. That's what Pillar does.
What buying gets you that building doesn't
When you build your own stack, you freeze the architecture at the moment you ship it. Every new standard, protocol, or model capability is another integration project on your backlog.
With Pillar, that work is on us. When new models ship with better tool-calling, Pillar adopts them. When the reasoning layer improves, your copilot gets better without a deploy. Your team stays focused on the product, not on keeping agent infrastructure current.
A concrete example: WebMCP is a W3C proposal that adds navigator.modelContext to the browser, letting external AI agents discover and call tools your app registers. When you define tools with Pillar, they automatically become your WebMCP surface — available to Claude, ChatGPT, Cursor, and any agent your users bring. If you built the three-system stack yourself, you'd need to build that integration separately. With Pillar, it ships when the spec ships.
The tools you define for your copilot become your product's agent API. You write them once. Pillar keeps them current.
Try it
If you want a quick readiness check, run the agent tool score. If you want to see what a single-SDK copilot looks like in a real app, check the live demos on Grafana and Apache Superset.
Questions? Email founders@trypillar.com.
Get Started