This guide walks you through creating your first deep agent with planning, file system tools, and subagent capabilities. You’ll build a research agent that can investigate a topic and write a report.
Prerequisites
Before you begin, make sure you have an API key from a model provider (e.g., Gemini, Anthropic, OpenAI). Deep Agents require a model that supports tool calling. See customization for how to configure your model.
Step 1: Install dependencies
This guide uses Tavily as an example search provider, but you can substitute any search API (e.g., DuckDuckGo, SerpAPI, Brave Search).
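For example, with pip (deepagents and tavily-python are the PyPI package names for the Deep Agents library and the Tavily client; swap the search client for your provider's):

```shell
pip install deepagents tavily-python
```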
Step 2: Set up your API keys
- Google
- OpenAI
- Anthropic
- OpenRouter
- Fireworks
- Baseten
- Ollama
- Other
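However you manage secrets, the agent reads provider keys from environment variables. A minimal sketch (the variable names TAVILY_API_KEY and GOOGLE_API_KEY are the conventional ones for Tavily and Gemini; substitute your provider's):

```python
import os
import sys
import getpass

def set_if_missing(var: str) -> None:
    """Prompt for a secret only when it isn't already in the environment.

    The isatty guard keeps this a no-op in non-interactive runs (CI, notebooks
    piping stdin), so an unset key never hangs on a prompt.
    """
    if os.environ.get(var):
        return
    if sys.stdin.isatty():
        os.environ[var] = getpass.getpass(f"Enter {var}: ")

set_if_missing("TAVILY_API_KEY")   # search provider
set_if_missing("GOOGLE_API_KEY")   # or ANTHROPIC_API_KEY / OPENAI_API_KEY, etc.
```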
Step 3: Create a search tool
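With Tavily, the tool is a thin wrapper around the client's search call; a sketch using tavily-python's TavilyClient (the import is deferred inside the function only so the module loads even before the dependency is installed):

```python
import os

def internet_search(query: str, max_results: int = 5) -> dict:
    """Run a web search via Tavily and return the raw result dict for the agent."""
    from tavily import TavilyClient  # requires `pip install tavily-python`
    client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
    return client.search(query, max_results=max_results)
```

Returning the raw dict (rather than a formatted string) lets the agent decide what to keep and what to offload to its file system tools.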
Step 4: Create a deep agent
Pass a model string in provider:model format, or an initialized model instance. See supported models for all providers and suggested models for tested recommendations.
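Putting the pieces together, a sketch built around the deepagents package's create_deep_agent entry point (the model string and the tools/system_prompt parameter names are assumptions; check them against your installed version):

```python
import os

def build_research_agent():
    """Wire the search tool into a deep agent.

    Sketch only: `create_deep_agent` and its parameter names are assumed
    from the deepagents package.
    """
    from deepagents import create_deep_agent  # requires `pip install deepagents`

    def internet_search(query: str, max_results: int = 5) -> dict:
        """Run a web search via Tavily and return the raw result dict."""
        from tavily import TavilyClient
        client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
        return client.search(query, max_results=max_results)

    return create_deep_agent(
        model="google_genai:gemini-2.5-flash",  # any provider:model string
        tools=[internet_search],
        system_prompt="You are an expert researcher. Write a polished, well-sourced report.",
    )
```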
Step 5: Run the agent
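A minimal invocation sketch, assuming the LangGraph convention that invoke() accepts and returns a dict with a messages list (the run_research helper name is hypothetical):

```python
def run_research(agent, question: str) -> str:
    """Invoke the agent once and return the text of its final message."""
    result = agent.invoke(
        {"messages": [{"role": "user", "content": question}]}
    )
    return result["messages"][-1].content

# Usage, with an agent from the previous step:
# agent = build_research_agent()
# print(run_research(agent, "What is LangGraph?"))
```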
How does it work?
Your deep agent automatically:
- Plans its approach using the built-in write_todos tool to break down the research task.
- Conducts research by calling the internet_search tool to gather information.
- Manages context by using file system tools (write_file, read_file) to offload large search results.
- Spawns subagents as needed to delegate complex subtasks.
- Synthesizes a report to compile findings into a coherent response.
Examples
For agents, patterns, and applications you can build with Deep Agents, see Examples.
Streaming
Deep Agents have built-in streaming for real-time updates from agent execution using LangGraph. This lets you observe output progressively and review and debug agent and subagent work, such as tool calls, tool results, and LLM responses.
Next steps
Now that you’ve built your first deep agent:
- Customize your agent: Learn about customization options, including custom system prompts, tools, and subagents.
- Add long-term memory: Enable persistent memory across conversations.
- Deploy to production: Use Managed Deep Agents to create, run, and operate deep agents in LangSmith.

