- /research — three tools at three depths for any AI Employee that needs real facts
- web_search — Tavily / Brave-backed real-time search with structured results
- read_url — fetch any URL, extract main content, optional AI-powered focus extraction
- research — multi-step deep research that searches, fetches, synthesizes, and returns cited paragraphs
- Citation-grounded — every output paragraph maps back to a source URL the agent can cite
- Composes with /email, /video, /social — research-grounded outbound and content production
Today we're launching /research — three tools at three depths for any AI Employee that needs real, verifiable facts. Real-time search, URL fetching with content extraction, and multi-step deep research with citations. The primitive that turns prompt-only agents into research-grounded agents.
The problem: agent outputs without citations are guesses
LLMs hallucinate when they answer from training data alone — and worse, they hallucinate confidently. For an autonomous business making real decisions (which prospect to email, what claim to make in a video, how much to charge), unsourced output is a liability. Every meaningful Employee call has to be grounded in something verifiable.
The status quo:
- The model's pre-training cutoff — outdated for anything time-sensitive.
- Naked search APIs (Tavily, Brave, Bing) — work, but you build the URL fetcher, the citation tracker, the synthesis layer, and the hallucination detection yourself.
- Consumer research tools (Perplexity, You.com) — strong, but human-facing UIs with no agent surface.
- DIY RAG — six months of pipeline work and the citations are still wrong half the time.
Until now. With /research, the three depths an agent actually needs — quick search, focused extraction, deep synthesis — are runtime calls with citations the runtime verifies before returning.
The three tools at three depths
| Tool | Latency | Output | Use it when |
|---|---|---|---|
| web_search | ~1s | List of results with snippets | Agent needs current information across many sources |
| read_url | ~2-5s | Extracted main content of one URL | Agent has a specific URL; wants only the relevant content |
| research | ~30-60s | Synthesized answer with citations | Agent needs analysis and proof, not just retrieval |
You don't pick the deepest tool by default. Most Employee runs want web_search or read_url. research is for the moments where the agent is about to make a decision and the cost of being wrong outweighs the cost of waiting.
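That decision rule can be captured in a few lines. This is a sketch of client-side logic, not part of the /research API: the `Need` shape and its fields are our own illustration.

```typescript
// Illustrative depth-selection heuristic: pick the cheapest tool
// that satisfies what the agent actually needs this run.
type Need = {
  hasSpecificUrl: boolean; // agent already knows which page to read
  needsSynthesis: boolean; // decision-grade answer with citations
};

function pickTool(need: Need): "web_search" | "read_url" | "research" {
  if (need.needsSynthesis) return "research"; // ~30-60s, cited answer
  if (need.hasSpecificUrl) return "read_url"; // ~2-5s, one page
  return "web_search"; // ~1s, broad and current
}
```

In this sketch, only an imminent decision escalates to research; everything else stays at the cheap depths.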
Two ways to research: prompt or code
1. Natural language
naive search research \
"compare Stripe Issuing vs Lithic for AI agent payment cards in 2026"
The CLI runs multi-step deep research and prints a synthesized answer with inline citations.
2. Code
const res = await fetch("https://api.usenaive.ai/v1/search/research", {
method: "POST",
headers: {
"Authorization": `Bearer ${process.env.NAIVE_API_KEY}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
query: "compare Stripe Issuing vs Lithic for AI agent payment cards in 2026",
}),
});
const result = await res.json();
// result.answer, result.citations
Each citation maps to the URL the synthesis step fetched and the verbatim quote it grounded that paragraph on. Hallucinated citations are filtered before return.
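A consumer can sanity-check that mapping locally. The sketch below flags any inline [n] marker in the answer that has no matching citation entry; the `Citation` shape mirrors the example response shown later in this post and is assumed, not guaranteed.

```typescript
// Sketch: find inline [n] markers in the answer that lack a
// matching entry in result.citations.
interface Citation {
  ref: number;
  url: string;
  quote: string;
}

function unmatchedRefs(answer: string, citations: Citation[]): number[] {
  const have = new Set(citations.map((c) => c.ref));
  // Collect every "[n]" marker and strip the brackets.
  const markers = answer.match(/\[(\d+)\]/g) || [];
  const refs = markers.map((m) => Number(m.slice(1, -1)));
  return refs.filter((r) => !have.has(r));
}
```

An empty return means every claim marker in the answer is backed by a citation object.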
Quick search, when that's all you need
const res = await fetch("https://api.usenaive.ai/v1/search", {
method: "POST",
headers: {
"Authorization": `Bearer ${process.env.NAIVE_API_KEY}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
query: "10DLC campaign registration cost 2026",
}),
});
const { results } = await res.json();
// results[i].url, results[i].title, results[i].snippet
Tavily-backed by default; Brave Search as fallback when Tavily is unavailable. Results are scored for relevance and de-duplicated by domain so the agent doesn't get 10 versions of the same article.
Focused URL extraction
const res = await fetch("https://api.usenaive.ai/v1/search/url", {
method: "POST",
headers: {
"Authorization": `Bearer ${process.env.NAIVE_API_KEY}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
url: "https://www.fcc.gov/consumers/guides/stop-unwanted-robocalls-and-texts",
extract: "how does TCPA define automatic outbound SMS",
}),
});
const content = await res.json();
// content.text, content.extract
The extract parameter is optional — when present, the runtime runs a small extraction pass to pull only the relevant paragraphs. For long compliance documents, government pages, or RFC-style specs, this saves the Employee tokens and time.
Citation grounding
Every paragraph of a research output maps back to a source the synthesis step actually fetched. The runtime enforces this:
{
answer: "Stripe Issuing charges $0.10 per authorization plus a percentage [1]; Lithic charges based on a tiered monthly volume model [2]. For agent-issued cards under 10,000/month, Lithic's pricing is roughly 30% lower in our analysis [3].",
citations: [
{ ref: 1, url: "https://stripe.com/issuing/pricing", quote: "..." },
{ ref: 2, url: "https://lithic.com/pricing", quote: "..." },
{ ref: 3, url: "https://...", quote: "..." }
]
}
Hallucinated citations — claims that the synthesis layer references but the verifier can't ground in the fetched content — are explicitly removed before return. This is a runtime guarantee, not a model property.
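In the spirit of that guarantee, here is an illustrative grounding check: a citation survives only if its verbatim quote actually appears in the content fetched from its URL. The whitespace/case normalization is our own simplification; the real verifier is internal to the runtime.

```typescript
// Sketch of the grounding rule: drop any citation whose quote
// cannot be found in the page text fetched from its URL.
interface Citation {
  ref: number;
  url: string;
  quote: string;
}

function groundedCitations(
  citations: Citation[],
  fetched: Map<string, string>, // url -> extracted page text
): Citation[] {
  const norm = (s: string) => s.toLowerCase().replace(/\s+/g, " ").trim();
  return citations.filter((c) => {
    const page = fetched.get(c.url);
    return page !== undefined && norm(page).includes(norm(c.quote));
  });
}
```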
What you can build with /research
- Send research-grounded outbound email — Compose /research with /email so every outbound email cites recent, real signals about the prospect. Outbound that demonstrates research lands in inboxes; outbound that hallucinates lands in spam.
- Produce sourced explainer videos — Pair /research with /video so the script for every avatar video carries inline citations. Pin the source list in the YouTube description.
- Run autonomous SEO content pipelines — Research-driven outlines, citation-grounded body, on-brand visuals via /image, distribution via /social. The full content stack from one Employee.
- Compare vendors before purchasing — A finance Employee evaluating SaaS pricing, regulatory regimes, or partner options uses research to surface the comparison, then /cards to commit.
- Stay current on regulatory changes — Pair /research with /email to feed weekly digests of changes in jurisdictions the Company cares about.
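The outbound-email composition reduces to local formatting once the research result is in hand. This sketch builds an email body from an { answer, citations } result (the shape from the example earlier in this post); the prospect name is hypothetical, and the actual send via /email is omitted since that API's surface isn't shown here.

```typescript
// Sketch: format a /research result into a cited outbound email body.
// Greeting and sign-off are placeholders; sending is left to /email.
interface Citation {
  ref: number;
  url: string;
  quote: string;
}

function emailBody(
  prospect: string,
  answer: string,
  citations: Citation[],
): string {
  const sources = citations.map((c) => `  [${c.ref}] ${c.url}`).join("\n");
  return [`Hi ${prospect},`, "", answer, "", "Sources:", sources].join("\n");
}
```

Keeping the numbered sources inline is what makes the outbound demonstrably researched rather than hallucinated.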
Get started
- Read the docs: usenaive.ai/docs/guides/research
- Quickstart: usenaive.ai/docs/getting-started/quickstart
- Background reading: Tavily, Brave Search API, and Mozilla Readability.
- Join the community on Discord