
Introducing /research: Multi-step web research with citations for AI agents

Three depths of web research — single-page extraction, multi-source web search, and full multi-step deep research with cited synthesis. Tavily-backed and ready for agent loops that need real, verifiable facts.

Dennis Zax · Published April 12, 2026 · 4 min read · /research
TL;DR
  • /research: three tools at three depths for any AI Employee that needs real facts
  • web_search: Tavily / Brave-backed real-time search with structured results
  • read_url: fetch any URL, extract the main content, with optional AI-powered focus extraction
  • research: multi-step deep research that searches, fetches, synthesizes, and returns cited paragraphs
  • Citation-grounded: every output paragraph maps back to a source URL the agent can cite
  • Composes with /email, /video, /social: research-grounded outbound and content production

Today we're launching /research — three tools at three depths for any AI Employee that needs real, verifiable facts. Real-time search, URL fetching with content extraction, and multi-step deep research with citations. The primitive that turns prompt-only agents into research-grounded agents.

The problem: agent outputs without citations are guesses

LLMs hallucinate when they answer from training data alone — and worse, they hallucinate confidently. For an autonomous business making real decisions (which prospect to email, what claim to make in a video, how much to charge), unsourced output is a liability. Every meaningful Employee call has to be grounded in something verifiable.

The status quo:

  • The model's pre-training cutoff — outdated for anything time-sensitive.
  • Naked search APIs (Tavily, Brave, Bing) — work, but you build the URL fetcher, the citation tracker, the synthesis layer, and the hallucination detection yourself.
  • Per-creator tools (Perplexity, You.com) — strong, but human-facing UIs with no agent surface.
  • DIY RAG — six months of pipeline work and the citations are still wrong half the time.

Until now. With /research, the three depths an agent actually needs — quick search, focused extraction, deep synthesis — are runtime calls with citations the runtime verifies before returning.

The three tools at three depths

| Tool | Latency | Output | Use it when |
|---|---|---|---|
| web_search | ~1s | List of results with snippets | Agent needs current information across many sources |
| read_url | ~2-5s | Extracted main content of one URL | Agent has a specific URL; wants only the relevant content |
| research | ~30-60s | Synthesized answer with citations | Agent needs analysis and proof, not just retrieval |

You don't pick the deepest tool by default. Most Employee runs want web_search or read_url. research is for the moments where the agent's about to make a decision and the cost of being wrong outweighs the cost of waiting.
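For reference, the three tool names map to the API paths used in the code examples below. A minimal sketch (the selection helper is illustrative, not part of the API):

// Illustrative helper: map the depth an agent needs to the endpoint it calls.
// The three paths are the ones used in the examples below; the labels are ours.
function endpointFor(need) {
  switch (need) {
    case "broad-current-info": return "/v1/search";           // web_search, ~1s
    case "one-known-url":      return "/v1/search/url";       // read_url, ~2-5s
    case "cited-synthesis":    return "/v1/search/research";  // research, ~30-60s
    default:                   return "/v1/search";           // cheapest depth by default
  }
}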

Two ways to research: prompt or code

1. Natural language

naive search research \
  "compare Stripe Issuing vs Lithic for AI agent payment cards in 2026"

The CLI runs multi-step deep research and prints a synthesized answer with inline citations.

2. Code

const res = await fetch("https://api.usenaive.ai/v1/search/research", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.NAIVE_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    query: "compare Stripe Issuing vs Lithic for AI agent payment cards in 2026",
  }),
});
const result = await res.json();
// result.answer, result.citations

Each citation maps to the URL the synthesis step fetched and the verbatim quote it grounded that paragraph on. Hallucinated citations are filtered before return.
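A minimal sketch of consuming that shape (citation fields as shown in the Citation grounding section below; the rendering itself is just an example):

// result.answer carries inline markers like [1]; result.citations maps them to sources.
const lines = [result.answer, "", "Sources:"];
for (const c of result.citations) {
  lines.push(`[${c.ref}] ${c.url}`); // c.quote holds the verbatim grounding passage
}
console.log(lines.join("\n"));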

Quick search, when that's all you need

const res = await fetch("https://api.usenaive.ai/v1/search", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.NAIVE_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    query: "10DLC campaign registration cost 2026",
  }),
});
const { results } = await res.json();
// results[i].url, results[i].title, results[i].snippet

Tavily-backed by default; Brave Search as fallback when Tavily is unavailable. Returns are scored for relevance and de-duplicated by domain so the agent doesn't get 10 versions of the same article.
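A small usage sketch building on the call above: because the list is already scored and de-duplicated, a short top-N slice is usually enough context for the agent, and the best hit's URL can be handed straight to read_url (next section).

// results are relevance-scored and de-duplicated by domain, so a top-3 slice is usually enough.
const topThree = results.slice(0, 3).map((r) => `${r.title} (${r.url}): ${r.snippet}`);
// results[0].url is the strongest candidate to pass to read_url for full content.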

Focused URL extraction

const res = await fetch("https://api.usenaive.ai/v1/search/url", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.NAIVE_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    url: "https://www.fcc.gov/consumers/guides/stop-unwanted-robocalls-and-texts",
    extract: "how does TCPA define automatic outbound SMS",
  }),
});
const content = await res.json();
// content.text, content.extract

The extract parameter is optional — when present, the runtime runs a small extraction pass to pull only the relevant paragraphs. For long compliance documents, government pages, or RFC-style specs, this saves the Employee tokens and time.
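One small usage note, assuming the focused field is simply absent when no extract prompt was sent: read it with a fallback to the full text.

// content.extract holds the focused paragraphs when an extract prompt was sent;
// content.text is always the full extracted content.
const passage = content.extract ?? content.text;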

Citation grounding

Every paragraph of a research output maps back to a source the synthesis step actually fetched. The runtime enforces this:

{
  answer: "Stripe Issuing charges $0.10 per authorization plus a percentage [1]; Lithic charges based on a tiered monthly volume model [2]. For agent-issued cards under 10,000/month, Lithic's pricing is roughly 30% lower in our analysis [3].",
  citations: [
    { ref: 1, url: "https://stripe.com/issuing/pricing", quote: "..." },
    { ref: 2, url: "https://lithic.com/pricing", quote: "..." },
    { ref: 3, url: "https://...", quote: "..." }
  ]
}

Hallucinated citations — claims that the synthesis layer references but the verifier can't ground in the fetched content — are explicitly removed before return. This is a runtime guarantee, not a model property.
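To picture the check, here is an illustration of the idea only, not the runtime's actual implementation: a citation survives if its URL was really fetched and its quote appears in that page's content.

// Illustration of the grounding idea (not the runtime's code).
// `fetchedPages` maps each URL the synthesis step visited to its extracted text.
function groundedCitations(citations, fetchedPages) {
  return citations.filter((c) => {
    const page = fetchedPages.get(c.url);
    return page !== undefined && page.includes(c.quote);
  });
}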

What you can build with /research

Send research-grounded outbound email — Compose /research with /email so every outbound email cites recent, real signals about the prospect. Outbound that demonstrates research lands in inboxes; outbound that hallucinates lands in spam.

Produce sourced explainer videos — Pair /research with /video so the script for every avatar video carries inline citations. Pin the source list in the YouTube description.

Run autonomous SEO content pipelines — Research-driven outlines, citation-grounded body, on-brand visuals via /image, distribution via /social. The full content stack from one Employee.

Compare vendors before purchasing — A finance Employee evaluating SaaS pricing, regulatory regimes, or partner options uses research to surface the comparison, then /cards to commit.

Stay current on regulatory changes — Pair /research with /email to feed weekly digests of changes in jurisdictions the Company cares about.
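As a sketch of the first pattern above: the research call uses the documented endpoint, while the hand-off to /email is left as a placeholder, since its API surface isn't covered in this post.

// Step 1: cited research on the prospect (documented /v1/search/research endpoint).
const prospectResearch = await fetch("https://api.usenaive.ai/v1/search/research", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.NAIVE_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ query: "recent funding, hiring, and product news for the prospect's company" }),
}).then((r) => r.json());

// Step 2: pass prospectResearch.answer and prospectResearch.citations to the /email
// primitive so the outbound draft cites real, recent signals. (Placeholder: see the
// /email primitive for its actual API surface.)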

Get started

Run naive search "best agent infrastructure 2026" for a quick search, or naive search research "compare Stripe Issuing and Lithic" for a full deep-research pass. The quickstart is at usenaive.ai/docs/getting-started/quickstart.

Frequently Asked Questions
What is /research?
/research is the Naïve primitive that gives an AI Employee three tools for real-time web research: web_search (real-time SERP-style results), read_url (URL fetch with content extraction), and research (multi-step deep research with synthesis and citations). Backed by Tavily for search and an in-house extraction stack for URL reads.
When should I use web_search vs read_url vs research?
Use web_search when the agent needs broad current information (latest news, product comparisons, recent prices). Use read_url when the agent has a specific URL and needs the main content extracted. Use research when the agent needs synthesized analysis across multiple sources with citations — slower and more expensive, but the answer carries proof.
How does Naïve avoid hallucinated citations?
Every citation in a research output is the URL the synthesis step actually fetched. The runtime verifies the citation URL was visited; the snippet quoted appears in the fetched content. Hallucinated citations are explicitly detected and removed before the response returns.
Which underlying providers does Naïve use?
Search uses Tavily (default) with Brave Search as a fallback. URL fetching is Naïve's in-house extractor with Mozilla Readability as a baseline. The deep-research synthesis step runs through whatever LLM the operator has configured for the Employee's runtime.
How much does /research cost?
web_search is billed per-query. read_url is billed per-fetch. research is billed per-step (typically 5-10 steps per call). See the pricing page for current rates.
What's the difference between /research and Perplexity, You.com, or Tavily direct?
Perplexity and You.com are end-user products. Tavily is a raw search API. /research wraps Tavily and an extractor in the Naïve runtime: outputs are queryable runtime objects, citations bind to URLs the agent actually visited, and the synthesis composes with /email, /video, and /social so the research and the artifact (email, blog post, video script) share a single Employee context.
How do I get started with /research?
Run naive search "best agent infrastructure 2026" or naive search research "compare Stripe Issuing and Lithic". The full quickstart is at usenaive.ai/docs/getting-started/quickstart.
Dennis Zax · CTO

CTO of Naïve. Building the open-source agent runtime.

@denniszax