[Deep Agents: LangChain's Agent Harness - 03] FilesystemMiddleware: Enabling Agents to Read/Write Files and Manage Long Contexts
From "Building an Abstract Filesystem" we know that the Deep Agents filesystem is built on top of a filesystem abstracted through the BackendProtocol protocol, which lets the Agent perform file operations in a uniform way regardless of whether the underlying storage is a local disk, cloud S3, a database, or memory. This design not only provides great flexibility but also lets the Agent adapt to different application scenarios, enabling more sophisticated data management and interaction. This filesystem is granted to the Agent through FilesystemMiddleware.
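The core idea behind the BackendProtocol abstraction, that callers program against a narrow protocol while concrete backends vary freely, can be sketched with the standard library alone. The names `StorageBackend`, `InMemoryBackend`, and `save_note` below are illustrative inventions, not the actual deepagents API, and the real protocol defines many more operations (ls, edit, glob, grep, ...):

```python
from typing import Protocol


class StorageBackend(Protocol):
    """A deliberately minimal stand-in for a BackendProtocol-style interface."""

    def write(self, path: str, content: str) -> None: ...
    def read(self, path: str) -> str: ...


class InMemoryBackend:
    """Keeps 'files' in a plain dict, mimicking a state-backed store."""

    def __init__(self) -> None:
        self._files: dict[str, str] = {}

    def write(self, path: str, content: str) -> None:
        self._files[path] = content

    def read(self, path: str) -> str:
        return self._files[path]


def save_note(backend: StorageBackend, path: str, text: str) -> None:
    # The caller only sees the protocol, never the concrete storage,
    # so InMemoryBackend could be swapped for a disk- or S3-backed class.
    backend.write(path, text)


backend = InMemoryBackend()
save_note(backend, "/notes/todo.txt", "refactor the parser")
print(backend.read("/notes/todo.txt"))  # -> refactor the parser
```

Because `StorageBackend` is a structural `Protocol`, `InMemoryBackend` satisfies it without inheriting from it, which is exactly what makes swapping backends painless.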
1. Providing the File-Operation Toolset
Within the Deep Agents framework, FilesystemMiddleware does far more than provide simple file read/write tools; it is the core infrastructure for long-horizon tasks and context engineering. By abstracting the filesystem into external working memory for the AI, it addresses the "forgetfulness" that large models suffer when complex tasks overflow the context window. It gives the Agent a separation of storage and computation, automatically injecting a standardized set of file-operation tools:
- ls: list a directory;
- read_file: read file contents;
- write_file: write to a file;
- edit_file: make precise edits;
- glob: find files by pattern matching;
- grep: search text;
- execute: run commands (only on backends that can serve as a sandbox).
FilesystemMiddleware also provides automatic context offloading. To keep the Agent from being overwhelmed by massive tool outputs (such as very long search results or logs), FilesystemMiddleware automates the following:
- Threshold trigger: by default, when a tool-call result exceeds 20,000 tokens (configurable), the result is automatically saved to the filesystem;
- Reference substitution: the raw bulk data is not kept in the Agent's conversation history; it is replaced by a pointer to the file path;
- On-demand loading: the Agent can page through the file with read_file's offset and limit parameters, fetching only the fragment needed for the current reasoning step and thereby saving a great deal of context.
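The offset/limit pagination can be illustrated with a small stdlib-only sketch. `read_file_page` is a hypothetical helper, not the real tool, but it mirrors the documented semantics: line-based paging with `cat -n`-style numbering that starts at 1:

```python
def read_file_page(lines: list[str], offset: int = 0, limit: int = 100) -> str:
    """Return one 'page' of a file, numbered like `cat -n`.

    offset is a 0-based line index; the returned numbers are 1-based,
    matching the format the read_file tool description promises.
    """
    page = lines[offset:offset + limit]
    return "\n".join(f"{offset + i + 1}\t{line}" for i, line in enumerate(page))


lines = [f"line {n}" for n in range(1, 251)]  # a 250-line "file"

# First scan: peek at the top of the file.
print(read_file_page(lines, offset=0, limit=2))    # 1	line 1 / 2	line 2
# Read the next section without re-reading what came before.
print(read_file_page(lines, offset=100, limit=2))  # 101	line 101 / 102	line 102
```

Only the requested slice ever enters the context, which is why the tool description urges pagination for large files.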
Rather than saying FilesystemMiddleware builds a filesystem for the Agent, it is more accurate to say it provides a toolset for operating on various storage backends in a filesystem-like way. FilesystemMiddleware provides seven tools by default: ls, read, write, edit, glob, grep, and execute. They call the corresponding backend's interface to list, read, write, edit, pattern-match, and execute commands. Through these tools, the Agent can access and manipulate different kinds of storage as if they were a filesystem, enabling more sophisticated data management and interaction.
As the code snippet below shows, when we call the __init__ method to create a FilesystemMiddleware instance, the backend parameter can be given a backend instance, or a backend factory function (BackendFactory) that creates the backend dynamically. If the default StateBackend is used, the FileData objects holding file contents and metadata are ultimately written into a state field named files. The state_schema field, which declares the state schema type, is FilesystemState, whose only state member, files, is a dictionary that simulates the stored files.
BackendFactory: TypeAlias = Callable[[ToolRuntime], BackendProtocol]
BACKEND_TYPES = BackendProtocol | BackendFactory
class FilesystemMiddleware(AgentMiddleware[FilesystemState, ContextT, ResponseT]):
    state_schema = FilesystemState

    def __init__(
        self,
        *,
        backend: BACKEND_TYPES | None = None,
        system_prompt: str | None = None,
        custom_tool_descriptions: dict[str, str] | None = None,
        tool_token_limit_before_evict: int | None = 20000,
        max_execute_timeout: int = 3600,
    ) -> None:
        self.backend = backend if backend is not None else StateBackend
        self._custom_system_prompt = system_prompt
        self._custom_tool_descriptions = custom_tool_descriptions or {}
        self._tool_token_limit_before_evict = tool_token_limit_before_evict
        self._max_execute_timeout = max_execute_timeout
        self.tools = [
            self._create_ls_tool(),
            self._create_read_file_tool(),
            self._create_write_file_tool(),
            self._create_edit_file_tool(),
            self._create_glob_tool(),
            self._create_grep_tool(),
            self._create_execute_tool(),
        ]

    def _create_ls_tool(self) -> BaseTool: ...
    def _create_read_file_tool(self) -> BaseTool: ...
    def _create_write_file_tool(self) -> BaseTool: ...
    def _create_edit_file_tool(self) -> BaseTool: ...
    def _create_glob_tool(self) -> BaseTool: ...
    def _create_grep_tool(self) -> BaseTool: ...
    def _create_execute_tool(self) -> BaseTool: ...
Besides backend, the __init__ method offers several optional parameters:
- system_prompt: customizes the system prompt for the filesystem tools, helping the Agent better understand what the tools are for and how to use them;
- custom_tool_descriptions: a dictionary that customizes each tool's description; keys are tool names, values are the description texts;
- tool_token_limit_before_evict: an integer; when a single tool-call result exceeds this many (estimated) tokens, the result is evicted from the conversation history into the filesystem and replaced by a file reference, saving context space;
- max_execute_timeout: an integer specifying the maximum execution time of the execute tool (in seconds), preventing the Agent from tying up resources with long-running commands.
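The "instance or factory" flexibility of the backend parameter can be sketched in plain Python. `FakeBackend` and `resolve_backend` below are hypothetical names that mirror the resolution logic described above, not the deepagents implementation itself:

```python
from typing import Callable, Union


class FakeBackend:
    """Stand-in for a BackendProtocol implementation."""

    def __init__(self, name: str = "state") -> None:
        self.name = name


# In deepagents the factory takes a ToolRuntime; simplified here to `object`.
BackendFactory = Callable[[object], FakeBackend]


def resolve_backend(
    backend: Union[FakeBackend, BackendFactory, None], runtime: object
) -> FakeBackend:
    """Mirror the instance-or-factory resolution: None falls back to a
    default backend, a callable is treated as a factory and invoked with
    the current runtime, and anything else is used as-is."""
    if backend is None:
        return FakeBackend("default-state")
    if callable(backend) and not isinstance(backend, FakeBackend):
        return backend(runtime)
    return backend


print(resolve_backend(None, None).name)                           # default-state
print(resolve_backend(FakeBackend("s3"), None).name)              # s3
print(resolve_backend(lambda rt: FakeBackend("per-run"), None).name)  # per-run
```

The factory form matters when the backend must be created per invocation, for example to scope storage to the current thread or sandbox session.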
2. A Look at the Tool Descriptions
AI-assisted programming places higher demands on a programmer's command of natural language, which is precisely the skill many programmers lack most, and one many of them dismiss. For an LLM, the prompt is the only input, and tool descriptions are an important part of that prompt. No matter how perfectly you write the function that defines a tool, without a clear, accurate, and detailed tool description the LLM cannot correctly understand what the tool does or how to use it, let alone pass parameters and handle return values correctly in actual calls. A tool description is the tool's user manual; only when the LLM can correctly understand and apply that manual can the tool deliver its full value.
Just as I have always advocated learning architecture design by reading the source code of well-known frameworks rather than piles of books, I advocate learning to write prompts by reading the prompts in well-known frameworks rather than one prompt-engineering book after another. A framework is highly reused code; if the framework has been proven successful, that is evidence its prompts are of high quality. Let's look at how FilesystemMiddleware's tool descriptions are designed; perhaps we can pick up some techniques for writing tool descriptions ourselves.
ls
Lists all files in a directory.
This is useful for exploring the filesystem and finding the right file to read or edit.
You should almost ALWAYS use this tool before using the read_file or edit_file tools.
read_file
Reads a file from the filesystem.
Assume this tool is able to read all files. If the User provides a path to a file assume that path is valid. It is okay to read a file that does not exist; an error will be returned.
Usage:
- By default, it reads up to 100 lines starting from the beginning of the file
- **IMPORTANT for large files and codebase exploration**: Use pagination with offset and limit parameters to avoid context overflow
- First scan: read_file(path, limit=100) to see file structure
- Read more sections: read_file(path, offset=100, limit=200) for next 200 lines
- Only omit limit (read full file) when necessary for editing
- Specify offset and limit: read_file(path, offset=0, limit=100) reads first 100 lines
- Results are returned using cat -n format, with line numbers starting at 1
- Lines longer than 5,000 characters will be split into multiple lines with continuation markers (e.g., 5.1, 5.2, etc.). When you specify a limit, these continuation lines count towards the limit.
- You have the capability to call multiple tools in a single response. It is always better to speculatively read multiple files as a batch that are potentially useful.
- If you read a file that exists but has empty contents you will receive a system reminder warning in place of file contents.
- Image files (`.png`, `.jpg`, `.jpeg`, `.gif`, `.webp`) are returned as multimodal image content blocks (see https://docs.langchain.com/oss/python/langchain/messages#multimodal).
For image tasks:
- Use `read_file(file_path=...)` for `.png/.jpg/.jpeg/.gif/.webp`
- Do NOT use `offset`/`limit` for images (pagination is text-only)
- If image details were compacted from history, call `read_file` again on the same path
- You should ALWAYS make sure a file has been read before editing it.
write_file
Writes to a new file in the filesystem.
Usage:
- The write_file tool will create a new file.
- Prefer to edit existing files (with the edit_file tool) over creating new ones when possible.
edit_file
Performs exact string replacements in files.
Usage:
- You must read the file before editing. This tool will error if you attempt an edit without reading the file first.
- When editing, preserve the exact indentation (tabs/spaces) from the read output. Never include line number prefixes in old_string or new_string.
- ALWAYS prefer editing existing files over creating new ones.
- Only use emojis if the user explicitly requests it.
glob
Find files matching a glob pattern.
Supports standard glob patterns: `*` (any characters), `**` (any directories), `?` (single character).
Returns a list of absolute file paths that match the pattern.
Examples:
- `**/*.py` - Find all Python files
- `*.txt` - Find all text files in root
- `/subdir/**/*.md` - Find all markdown files under /subdir
- `data_??.csv` - Find files like data_01.csv, data_A1.csv, etc.
grep
Search for a text pattern across files.
Searches for literal text (not regex) and returns matching files or content based on output_mode.
Special characters like parentheses, brackets, pipes, etc. are treated as literal characters, not regex operators.
Examples:
- Search all files: `grep(pattern="TODO")`
- Search Python files only: `grep(pattern="import", glob="*.py")`
- Show matching lines: `grep(pattern="error", output_mode="content")`
- Search for code with special chars: `grep(pattern="def __init__(self):")`
- Search for a pattern that includes glob special chars: `grep(pattern="data_*.csv", glob="data_*.csv")`
execute
Executes a shell command in an isolated sandbox environment.
Usage:
Executes a given command in the sandbox environment with proper handling and security measures.
Before executing the command, please follow these steps:
1. Directory Verification:
- If the command will create new directories or files, first use the ls tool to verify the parent directory exists and is the correct location
- For example, before running "mkdir foo/bar", first use ls to check that "foo" exists and is the intended parent directory
2. Command Execution:
- Always quote file paths that contain spaces with double quotes (e.g., cd "path with spaces/file.txt")
- Examples of proper quoting:
- cd "/Users/name/My Documents" (correct)
- cd /Users/name/My Documents (incorrect - will fail)
- python "/path/with spaces/script.py" (correct)
- python /path/with spaces/script.py (incorrect - will fail)
- After ensuring proper quoting, execute the command
- Capture the output of the command
Usage notes:
- Commands run in an isolated sandbox environment
- Returns combined stdout/stderr output with exit code
- If the output is very large, it may be truncated
- For long-running commands, use the optional timeout parameter to override the default timeout (e.g., execute(command="make build", timeout=300))
- A timeout of 0 may disable timeouts on backends that support no-timeout execution
- VERY IMPORTANT: You MUST avoid using search commands like find and grep. Instead use the grep, glob tools to search. You MUST avoid read tools like cat, head, tail, and use read_file to read files.
- When issuing multiple commands, use the ';' or '&&' operator to separate them. DO NOT use newlines (newlines are ok in quoted strings)
- Use '&&' when commands depend on each other (e.g., "mkdir dir && cd dir")
- Use ';' only when you need to run commands sequentially but don't care if earlier commands fail
- Try to maintain your current working directory throughout the session by using absolute paths and avoiding usage of cd
Examples:
Good examples:
- execute(command="pytest /foo/bar/tests")
- execute(command="python /path/to/script.py")
- execute(command="npm install && npm test")
- execute(command="make build", timeout=300)
Bad examples (avoid these):
- execute(command="cd /foo/bar && pytest tests") # Use absolute path instead
- execute(command="cat file.txt") # Use read_file tool instead
- execute(command="find . -name '*.py'") # Use glob tool instead
- execute(command="grep -r 'pattern' .") # Use grep tool instead
Note: This tool is only available if the backend supports execution (SandboxBackendProtocol).
If execution is not supported, the tool will return an error message.
3. The System Prompt That Guides the LLM to Call the File Tools
FilesystemMiddleware's __init__ method supplies the system prompt that serves as the manual for using the tools; the prompt is applied through the overridden wrap_model_call/awrap_model_call methods. If tool_token_limit_before_evict is not None, then whenever a tool's returned content exceeds this threshold, FilesystemMiddleware stores the content as a file in the backend to avoid context overload, and returns a new tool-call result containing the file's path. Since the threshold set by tool_token_limit_before_evict is measured in tokens, the middleware extracts the text from the returned ToolMessage and estimates the total token count using the rule of thumb of roughly four characters per token. This work is done in the overridden wrap_tool_call/awrap_tool_call methods.
class FilesystemMiddleware(AgentMiddleware[FilesystemState, ContextT, ResponseT]):
    def wrap_model_call(
        self,
        request: ModelRequest[ContextT],
        handler: Callable[[ModelRequest[ContextT]], ModelResponse[ResponseT]],
    ) -> ModelResponse[ResponseT]: ...

    async def awrap_model_call(
        self,
        request: ModelRequest[ContextT],
        handler: Callable[[ModelRequest[ContextT]], Awaitable[ModelResponse[ResponseT]]],
    ) -> ModelResponse[ResponseT]: ...

    def wrap_tool_call(
        self,
        request: ToolCallRequest,
        handler: Callable[[ToolCallRequest], ToolMessage | Command],
    ) -> ToolMessage | Command: ...

    async def awrap_tool_call(
        self,
        request: ToolCallRequest,
        handler: Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]],
    ) -> ToolMessage | Command: ...
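The eviction path just described, estimate tokens, offload oversized results, and hand the model a pointer, can be sketched with a dict standing in for the backend. `maybe_offload` is a hypothetical helper that mirrors the mechanism, not the actual wrap_tool_call implementation:

```python
def maybe_offload(
    tool_result: str,
    files: dict[str, str],
    tool_call_id: str,
    token_limit: int = 20_000,
) -> str:
    """Estimate tokens as len(text) / 4 (about four characters per token).

    If the result exceeds the limit, store it under
    /large_tool_results/<tool_call_id> and return a short pointer message
    instead of the raw payload, keeping the conversation history small.
    """
    estimated_tokens = len(tool_result) // 4
    if estimated_tokens <= token_limit:
        return tool_result  # small enough: keep it inline
    path = f"/large_tool_results/{tool_call_id}"
    files[path] = tool_result
    return f"Result too large; saved to {path}. Use read_file with offset/limit."


files: dict[str, str] = {}
print(maybe_offload("ok", files, "call_1"))            # ok (kept inline)
print(maybe_offload("x" * 100_000, files, "call_2"))   # pointer message
print(list(files))  # ['/large_tool_results/call_2']
```

The `/large_tool_results/<tool_call_id>` path matches the convention the system prompt later tells the model about, so the model knows where to look for offloaded results.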
Below is the system prompt FilesystemMiddleware provides; ask yourself whether you could write a prompt of the same quality to guide the LLM in calling the tools correctly:
## Following Conventions
- Read files before editing — understand existing content before making changes
- Mimic existing style, naming conventions, and patterns
## Filesystem Tools `ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`
You have access to a filesystem which you can interact with using these tools.
All file paths must start with a /. Follow the tool docs for the available tools, and use pagination (offset/limit) when reading large files.
- ls: list files in a directory (requires absolute path)
- read_file: read a file from the filesystem
- write_file: write to a file in the filesystem
- edit_file: edit a file in the filesystem
- glob: find files matching a pattern (e.g., "**/*.py")
- grep: search for text within files
## Large Tool Results
When a tool result is too large, it may be offloaded into the filesystem instead of being returned inline. In those cases, use `read_file` to inspect the saved result in chunks, or use `grep` within `/large_tool_results/` if you need to search across offloaded tool results and do not know the exact file path. Offloaded tool results are stored under `/large_tool_results/<tool_call_id>`.
If the execute tool is involved, the following section is appended as well:
## Execute Tool `execute`
You have access to an `execute` tool for running shell commands in a sandboxed environment.
Use this tool to run commands, scripts, tests, builds, and other shell operations.
- execute: run a shell command in the sandbox (returns output and exit code)
4. Letting the Agent Operate on Your Local Files
The demo program below shows FilesystemMiddleware at work. We create an Agent and register FilesystemMiddleware on it, then invoke the Agent to perform a series of filesystem operations: creating a directory, writing files, listing directory contents, and reading a file. Through the tools provided by FilesystemMiddleware, the Agent interacts with the backend storage and completes an end-to-end filesystem workflow.
from langchain.agents import create_agent
from deepagents.middleware.filesystem import FilesystemMiddleware
from deepagents.backends.local_shell import LocalShellBackend
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
import asyncio
load_dotenv()
agent = create_agent(
    model=ChatOpenAI(model="gpt-5.2-chat"),
    middleware=[FilesystemMiddleware(backend=LocalShellBackend(virtual_mode=True))],
)
prompt = """\
Execute following operations:
- Remove the `test_dir` directory if it already exists, and then create a new empty one;
- Change the current working directory to such a newly created directory;
- Create a file called `hello.py` inside it with the content `print("Hello World")`;
- Create three files in the newly created directory: `file1.txt`, `file2.txt`, and `file3.txt` with any content of your choice;
- List the files of the directory;
- Read the content of `hello.py`.
"""
async def main():
    result = await agent.ainvoke(input={"messages": [{"role": "user", "content": prompt}]})
    result["messages"][-1].pretty_print()

asyncio.run(main())
Output:
================================== Ai Message ==================================
✅ All requested operations have been completed. Here are the results step by step:
### 📁 Directory Setup
- The directory **`test_dir`** was removed (if it existed) and recreated as a new empty directory.
### 📝 Files Created Inside `test_dir`
- **`hello.py`**
```python
print("Hello World")
```
- **`file1.txt`** – contains: `content 1`
- **`file2.txt`** – contains: `content 2`
- **`file3.txt`** – contains: `content 3`
### 📂 Directory Listing
The contents of `test_dir` are:
```
file1.txt
file2.txt
file3.txt
hello.py
```
### 📖 Contents of `hello.py`
```
print("Hello World")
```
If you’d like to run `hello.py`, modify any files, or perform more filesystem operations, just let me know!