Build a local deep research agent with DeerFlow
DeerFlow is an open-source framework that transforms a simple research topic into a comprehensive, detailed report.
This guide will walk you through setting up DeerFlow on your Olares device and integrating it with a local Ollama model and the Tavily search engine for web-enabled research.
Learning objectives
In this guide, you will learn how to:
- Configure DeerFlow to communicate with a local LLM via Ollama.
- Configure the Tavily search API for web access.
- Execute deep research tasks and manage reports.
Prerequisites
Before you begin, make sure:
- Ollama is installed and running in your Olares environment.
- At least one model is installed via Ollama. See Ollama for details.
- You have a Tavily account (a free account is sufficient).
Install DeerFlow
Open Market, and search for "DeerFlow".

Click Get, then Install, and wait for installation to complete.
Configure DeerFlow
DeerFlow requires connection details for the LLM. You configure these by editing the conf.yaml file, using either the graphical interface or the command line.
Configure DeerFlow to use Ollama
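DeerFlow reads its model connection settings from conf.yaml. The sketch below shows what the model entry can look like when pointing at a local Ollama instance; the key names follow DeerFlow's conf.yaml.example (BASIC_MODEL), and the host, port, and model name (qwen2.5:14b) are assumptions. Replace them with the address where Ollama is reachable in your Olares environment and a model you have already pulled, and check the conf.yaml shipped with your DeerFlow version in case the exact keys differ.

```yaml
# Sketch of the model section in conf.yaml (placeholder values; adjust to your setup).
BASIC_MODEL:
  base_url: "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint; keep the /v1 suffix
  model: "qwen2.5:14b"                   # any standard chat model pulled with Ollama
  api_key: "ollama"                      # Ollama ignores the key, but the field should not be left empty
```

After saving conf.yaml, restart DeerFlow (see below) so the new model settings take effect.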
Configure DeerFlow to use Tavily
To enable web search, add your Tavily API key to the application configuration.
In Control Hub, select the DeerFlow project.
Click Configmaps in the resource list and select deerflow-config.

Click edit_square in the top-right to open the editor.
Add the following key-value pairs under the data section:

```yaml
SEARCH_API: tavily
TAVILY_API_KEY: tvly-xxx # Your Tavily API Key
```
Click Confirm to save the changes.
Restart DeerFlow
Restart the service to apply the new model and search configurations.
In Control Hub, select the DeerFlow project.
Under Deployments, locate deerflow and click Restart.

In the confirmation dialog, type deerflow and click Confirm.
Wait for the status icon to turn green, which indicates the service has successfully restarted.
Run DeerFlow
Run a deep research task
Open DeerFlow from the Olares Launchpad.
Click Get Started and enter your research topic in the prompt box.

Click the wand icon to have DeerFlow refine your prompt for better results.
Enable Investigation.
Select your preferred writing style (e.g., Popular Science).
Click arrow_upward to send the request.
DeerFlow will generate a preliminary research plan. Review and edit this plan if necessary, or allow it to proceed. 
Once the process is complete, a detailed analysis report will be displayed. 
To audit the sources and steps taken, click the Activities tab. 
Edit and save the report
Verify citations
AI models may occasionally generate inaccurate citations or "hallucinated" links. Be sure to manually verify important sources in the citations section.
- Click edit in the top-right corner to enter editing mode.
- You can adjust formatting using Markdown or select a section and ask the AI to improve or expand it.

- Click undo in the top-right corner to exit editing mode.
- Click download to save the report to your local machine as a Markdown file.
Add an MCP server
The Model Context Protocol (MCP) extends DeerFlow's capabilities by integrating external tools. For example, adding the Fetch server allows the agent to scrape and convert web content into Markdown for analysis.
- Open your DeerFlow app, and click settings to open the Settings dialog.
- Select the MCP tab and click Add Servers.
- Paste the JSON configuration for the server. The following example adds the fetch server:

  ```json
  {
    "mcpServers": {
      "fetch": {
        "command": "uvx",
        "args": ["mcp-server-fetch"]
      }
    }
  }
  ```

- Click Add. The server is automatically enabled and available for research agents.

Turn a research report into a podcast (TTS)
DeerFlow can convert reports into MP3 audio using a Text-to-Speech (TTS) service, such as Volcengine TTS. This requires adding API credentials to the application environment.
- Obtain your Access Token and App ID from the Volcengine console.
- In Control Hub, select the DeerFlow project and go to Configmaps > deerflow-config.
- Click the Edit icon in the top-right corner.
- Add the following keys under the data section:

  ```yaml
  VOLCENGINE_TTS_ACCESS_TOKEN: # Your Access Token
  VOLCENGINE_TTS_APPID: # Your App ID
  ```

- Click Confirm to save the changes.
- Navigate to Deployments > deerflow and click Restart.
Once restarted, DeerFlow should detect these keys and the podcast/TTS feature will be available.
FAQ
DeerFlow does not generate a response
If the agent fails to start or hangs:
- Check model compatibility: DeerFlow does not support reasoning models (e.g., DeepSeek R1). Switch to a standard chat model and try again.
- Check endpoint configuration: Ensure the Ollama API endpoint in conf.yaml includes the /v1 suffix (see the example below).
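For reference, a correctly formed endpoint entry looks like the following sketch; the host, port, and model name are placeholders, so keep only the structure and the trailing /v1.

```yaml
# Example model entry in conf.yaml (placeholder values).
BASIC_MODEL:
  base_url: "http://localhost:11434/v1"  # the /v1 suffix is required
  model: "qwen2.5:14b"
  api_key: "ollama"
```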
No web search results during the research
If the report is generic and lacks external data:
- Check model capabilities: The selected LLM may lack strong tool-calling capabilities. Switch to a model known for effective tool use, such as Qwen 2.5 or Llama 3.1.
- Verify API key: Ensure the TAVILY_API_KEY in the ConfigMap is correct and the account has remaining quota.