Data can only be processed effectively when it has the desired structure. When creating an agent with the create_agent function, we can use the response_format parameter to control the output format. We can define a Pydantic model class describing the expected output structure; if we denote this type as ResponseT, we can pass the type itself as the value of response_format, or wrap it as ResponseFormat[ResponseT]. Alternatively, we can pass a dictionary representing a JSON Schema. ResponseFormat is in fact a union of three types: ToolStrategy, ProviderStrategy, and AutoStrategy. ToolStrategy and ProviderStrategy represent two different technical approaches to structured output.

def create_agent(
    ...
    response_format: ResponseFormat[ResponseT] | type[ResponseT] | dict[str, Any] | None = None,
    ...
) 

ResponseFormat = ToolStrategy[SchemaT] | ProviderStrategy[SchemaT] | AutoStrategy[SchemaT]
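To illustrate the dictionary form, a JSON Schema equivalent to a Pydantic class can be derived from the class itself and passed to response_format instead. A minimal sketch, assuming Pydantic v2; the Movie model here is purely illustrative:

```python
from pydantic import BaseModel, Field


class Movie(BaseModel):
    """A structured movie record."""
    title: str = Field(description="Movie title")
    year: int = Field(description="Release year")


# The JSON Schema derived from the model; a dict with this shape could be
# passed as response_format instead of the Movie class itself.
schema_dict = Movie.model_json_schema()
print(schema_dict["properties"]["title"]["type"])  # → string
```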

1. ToolStrategy

ToolStrategy is currently the most stable and universal way to implement structured output. It relies on the LLM's tool-calling ability, a capability that virtually all LLMs share. LangChain registers an extra tool, built from the Pydantic type or JSON Schema you specify, to perform the formatting. The LLM then emits a call to this formatting tool in the tool_calls of its reply. The agent intercepts this call, parses it into structured data, and automatically terminates the execution flow (no further model calls are made).

The following code demonstrates the use of ToolStrategy. We define a Pydantic model class named WeatherResponse describing the response to a weather query. The tool function get_weather returns the weather for a given city as plain text. When calling create_agent, we set the model to a ChatOpenAI object (using the gpt-5.2-chat model) and set the response_format parameter to a ToolStrategy object created for the WeatherResponse type.

from typing import Any
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy
from dotenv import load_dotenv

load_dotenv()

class WeatherResponse(BaseModel):
    """A structured response format for weather information."""
    city: str = Field(description="City for which the weather is being reported")
    temperature: float = Field(description="Current temperature in Celsius")
    summary: str = Field(description="Brief summary of the weather conditions")
    suggestion: str = Field(description="Clothing suggestion based on the weather")

def get_weather(city: str):
    """Get the current weather for a given city."""
    return "It is sunny today, and the temperature is about 25.0 outside"

agent = create_agent(
    model=ChatOpenAI(model="gpt-5.2-chat"),
    tools=[get_weather],
    response_format=ToolStrategy(WeatherResponse),
)

inputs = {"messages": [("user", "What is the weather like in Suzhou, and what kind of clothing is suggested?")]}
result: dict[str, Any] = agent.invoke(inputs)  # type: ignore

response: WeatherResponse = result.get("structured_response")  # type: ignore
print(f"City: {response.city}")
print(f"Temperature: {response.temperature}°C")
print(f"Summary: {response.summary}")
print(f"Clothing Suggestion: {response.suggestion}")

From the earlier content we know that an agent, as a Pregel object, has three default output channels, one of which is structured_response, carrying the structured output. After invoking the agent, we extract this member from the result and obtain a WeatherResponse object, whose contents are printed in the following form:

City: Suzhou
Temperature: 25.0°C
Summary: Sunny
Clothing Suggestion: Light, breathable clothing such as a T-shirt or blouse with jeans or light trousers. Bring a light jacket if you stay out in the evening. 

If we intercept the calls to the OpenAI API, we observe two request/response rounds. The first request, shown below, contains descriptions of two tools: one is the get_weather tool we registered, and the other, named WeatherResponse, is a tool LangChain created on its own from the registered structured-output type. The second request (which carries the result of executing get_weather) contains the same list of available tools.

{
  "messages": [
    {
      "content": "What is the weather like in Suzhou, and what kind of clothing is suggested?",
      "role": "user"
    }
  ],
  "model": "gpt-5.2-chat",
  "stream": false,
  "tool_choice": "required",
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "parameters": {
          "properties": {
            "city": {
              "type": "string"
            }
          },
          "required": [
            "city"
          ],
          "type": "object"
        }
      }
    },
    {
      "type": "function",
      "function": {
        "name": "WeatherResponse",
        "description": "A structured response format for weather information.",
        "parameters": {
          "properties": {
            "city": {
              "description": "City for which the weather is being reported",
              "type": "string"
            },
            "temperature": {
              "description": "Current temperature in Celsius",
              "type": "number"
            },
            "summary": {
              "description": "Brief summary of the weather conditions",
              "type": "string"
            },
            "suggestion": {
              "description": "Clothing suggestion based on the weather",
              "type": "string"
            }
          },
          "required": [
            "city",
            "temperature",
            "summary",
            "suggestion"
          ],
          "type": "object"
        }
      }
    }
  ]
}

Shown below is the response to the second OpenAI API call. By this point the LLM has received the result of get_weather (as plain text), and it uses its own reasoning ability to generate a call matching the description of the formatting tool WeatherResponse. The call appears under the "choices" -> "message" -> "tool_calls" node.

{
  "choices": [
    {
      "content_filter_results": {},
      "finish_reason": "tool_calls",
      "index": 0,
      "logprobs": null,
      "message": {
        "annotations": [],
        "content": null,
        "refusal": null,
        "role": "assistant",
        "tool_calls": [
          {
            "function": {
              "arguments": "{\"city\":\"Suzhou\",\"temperature\":25,\"summary\":\"Sunny\",\"suggestion\":\"Light, breathable clothing such as a T-shirt or blouse with jeans or light trousers. Bring a light jacket if you stay out in the evening.\"}",
              "name": "WeatherResponse"
            },
            "id": "call_z9H4ZDeVJx9FwGgJQEkMncea",
            "type": "function"
          }
        ]
      }
    }
  ],
  "created": 1772802482,
  "id": "chatcmpl-DGPCMdPFQc9YwdYo5UmSXoteV1wyQ",
  "model": "gpt-5.2-chat-2025-12-11",
  "object": "chat.completion",
  "prompt_filter_results": [...],
  "system_fingerprint": null,
  "usage": {...}
}
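The parsing step the agent performs on this intercepted tool call can be approximated with plain Pydantic: the JSON string in the arguments field is validated against the schema. A sketch only, not LangChain's actual code (the shortened arguments string is illustrative):

```python
from pydantic import BaseModel, Field


class WeatherResponse(BaseModel):
    """A structured response format for weather information."""
    city: str = Field(description="City for which the weather is being reported")
    temperature: float = Field(description="Current temperature in Celsius")
    summary: str = Field(description="Brief summary of the weather conditions")
    suggestion: str = Field(description="Clothing suggestion based on the weather")


# The arguments string captured from the tool_calls node above (abridged).
arguments = '{"city":"Suzhou","temperature":25,"summary":"Sunny","suggestion":"Light, breathable clothing."}'

# Validate the raw JSON against the schema; Pydantic coerces 25 to 25.0.
response = WeatherResponse.model_validate_json(arguments)
print(response.city, response.temperature)  # → Suzhou 25.0
```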

The ToolStrategy type is defined as follows. From its constructor we can see that, besides the schema type, we can specify other parameters. Although the formatting tool is generated by LangChain itself, the call to it should still produce a ToolMessage, and the tool_message_content parameter specifies the content of that message. If the JSON produced by the model does not match the schema, we can use the handle_errors parameter to set an error-handling policy. This parameter accepts several forms: a boolean, a string, an exception type (or tuple of types), or a custom function. If set to True, the error message is sent back to the model so it can retry and repair the JSON. If set to a string, a ToolMessage with that string as its content is appended to the message history.

@dataclass(init=False)
class ToolStrategy(Generic[SchemaT]):
    schema: type[SchemaT] | UnionType | dict[str, Any]
    schema_specs: list[_SchemaSpec[Any]]
    tool_message_content: str | None
    handle_errors: (
        bool | str | type[Exception] | tuple[type[Exception], ...] | Callable[[Exception], str]
    )

    def __init__(
        self,
        schema: type[SchemaT] | UnionType | dict[str, Any],
        *,
        tool_message_content: str | None = None,
        handle_errors: bool
        | str
        | type[Exception]
        | tuple[type[Exception], ...]
        | Callable[[Exception], str] = True,
    ) -> None
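As a sketch of the custom-function form of handle_errors: the handler receives the validation exception and returns the string that becomes the ToolMessage content sent back to the model. The handler name and message below are hypothetical, not part of the LangChain API:

```python
def repair_hint(exc: Exception) -> str:
    """Turn a schema-validation failure into a retry instruction for the model."""
    return (
        f"Output did not match WeatherResponse: {exc}. "
        "Please call the tool again with corrected JSON."
    )


# Hypothetical usage with the strategy from the example above:
# ToolStrategy(WeatherResponse, handle_errors=repair_hint)
print(repair_hint(ValueError("temperature must be a number")))
```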

2. ProviderStrategy

We call ToolStrategy the most stable and universal approach because it implements formatted output through tool calling, a capability all mainstream LLMs possess. However, many LLMs (such as those from OpenAI and Anthropic) support structured output natively. If we are sure the model in use has this capability, we can use ProviderStrategy to let the LLM format the output directly, which is undoubtedly the more efficient approach.

agent = create_agent(
    model=llm,
    tools=[get_weather],
    response_format=ProviderStrategy(WeatherResponse),
)

For the example demonstrated above, we only need to set the response_format parameter to a ProviderStrategy object when calling create_agent, and we obtain almost identical structured output. The requests and responses to the OpenAI API, however, differ. Shown below is the first request: the desired output format, expressed as a JSON Schema, is placed in the request's "response_format" node, and this node is included in all subsequent requests. The list of available tools no longer contains a formatting tool.

{
  "messages": [
    {
      "content": "What is the weather like in Suzhou, and what kind of clothing is suggested?",
      "role": "user"
    }
  ],
  "model": "gpt-5.2-chat",
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "WeatherResponse",
      "description": "A structured response format for weather information.",
      "strict": false,
      "schema": {
        "properties": {
          "city": {
            "description": "City for which the weather is being reported",
            "title": "City",
            "type": "string"
          },
          "temperature": {
            "description": "Current temperature in Celsius",
            "title": "Temperature",
            "type": "number"
          },
          "summary": {
            "description": "Brief summary of the weather conditions",
            "title": "Summary",
            "type": "string"
          },
          "suggestion": {
            "description": "Clothing suggestion based on the weather",
            "title": "Suggestion",
            "type": "string"
          }
        },
        "required": [
          "city",
          "temperature",
          "summary",
          "suggestion"
        ],
        "type": "object"
      }
    }
  },
  "stream": false,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "parameters": {
          "properties": {
            "city": {
              "type": "string"
            }
          },
          "required": [
            "city"
          ],
          "type": "object",
          "additionalProperties": false
        },
        "strict": true
      }
    }
  ]
}

Shown below is the response to the second OpenAI API call. This time it already provides structured JSON content matching the schema.

{
  "choices": [
    {
      "content_filter_results": {...},
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "message": {
        "annotations": [],
        "content": "{\"city\":\"Suzhou\",\"temperature\":25,\"summary\":\"Sunny and pleasant\",\"suggestion\":\"Light clothing such as a T-shirt or blouse with thin pants or a skirt is suitable. You may also want a light jacket for the morning or evening.\"}",
        "refusal": null,
        "role": "assistant"
      }
    }
  ],
  "created": 1772805154,
  "id": "chatcmpl-DGPtSmO45cggzuhJvfd9id0MZNNyQ",
  "model": "gpt-5.2-chat-2025-12-11",
  "object": "chat.completion",
  "prompt_filter_results": [...],
  "system_fingerprint": null,
  "usage": {...}
}

ProviderStrategy is defined as follows. Besides the parameter representing the schema type, its constructor has a boolean parameter named strict, which currently targets OpenAI's Strict Mode. If set to True, OpenAI guarantees that the output matches the specified schema exactly. The price is a noticeable delay before the first token is generated (the schema must be preprocessed), and some advanced validations (such as custom regular expressions) cannot be used in the schema.

@dataclass(init=False)
class ProviderStrategy(Generic[SchemaT]):
    schema: type[SchemaT] | dict[str, Any]
    schema_spec: _SchemaSpec[SchemaT]

    def __init__(
        self,
        schema: type[SchemaT] | dict[str, Any],
        *,
        strict: bool | None = None,
    ) -> None

    def to_model_kwargs(self) -> dict[str, Any]
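To illustrate, the payload that ends up in the request's "response_format" node can be approximated from the Pydantic model alone. This sketch mirrors the captured request above; it is not LangChain's actual to_model_kwargs implementation:

```python
from pydantic import BaseModel, Field


class WeatherResponse(BaseModel):
    """A structured response format for weather information."""
    city: str = Field(description="City for which the weather is being reported")
    temperature: float = Field(description="Current temperature in Celsius")
    summary: str = Field(description="Brief summary of the weather conditions")
    suggestion: str = Field(description="Clothing suggestion based on the weather")


# Approximation of the "response_format" node seen in the captured request.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": WeatherResponse.__name__,
        "description": WeatherResponse.__doc__,
        "strict": False,
        "schema": WeatherResponse.model_json_schema(),
    },
}
print(response_format["json_schema"]["name"])  # → WeatherResponse
```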

3. AutoStrategy

If the LLM itself supports structured output, ProviderStrategy is generally the better choice. If we are unsure whether the LLM in use supports structured output, or we may switch LLMs at any time, we can use AutoStrategy. It automatically detects the current LLM's capabilities and chooses the optimal structured-output path. Setting the response_format parameter of create_agent directly to a Pydantic type or a JSON-Schema dictionary is equivalent to choosing AutoStrategy.

class AutoStrategy(Generic[SchemaT]):
    schema: type[SchemaT] | dict[str, Any]
    def __init__(
        self,
        schema: type[SchemaT] | dict[str, Any],
    ) -> None

The logic behind AutoStrategy is:

  • Prefer ProviderStrategy: if the current model (such as a recent OpenAI or Anthropic model) supports native structured output, AutoStrategy uses the LLM directly to produce the desired structure. This path is usually the most reliable, with the highest parse success rate.
  • Fall back to ToolStrategy: if the model does not support native structured output but does support tool calling, the schema is converted into an internal "invisible tool", and the model is guided to output conforming structured data by "calling" that tool.
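The selection logic above can be sketched as a plain function. This is hypothetical pseudologic with illustrative capability flags, not LangChain's actual implementation:

```python
def choose_strategy(supports_native_structured_output: bool,
                    supports_tool_calling: bool) -> str:
    """Pick a structured-output path the way AutoStrategy is described above."""
    if supports_native_structured_output:
        return "ProviderStrategy"  # native structured output: most direct path
    if supports_tool_calling:
        return "ToolStrategy"      # fall back to an internal formatting tool
    raise ValueError("Model supports neither structured output nor tool calling")


print(choose_strategy(True, True))   # → ProviderStrategy
print(choose_strategy(False, True))  # → ToolStrategy
```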

Using AutoStrategy brings the following benefits:

  • Cross-model compatibility: developers do not need one code path for OpenAI and another for a local model (such as Llama running under Ollama). AutoStrategy hides the differences between model providers' structured-output implementations.
  • Simpler code: just define the Pydantic data structure and pass it to response_format; the system handles the complex binding work automatically.
  • Dynamic support: in middleware, we can use AutoStrategy to adjust the output schema dynamically based on runtime context.