Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the v0 LangChain Python or LangChain JavaScript docs.
1.0 alpha releases are available for most packages, but only a subset of packages currently supports the new content blocks. Broader support for content blocks will be rolled out during the alpha period and following the stable release.
LangChain v1 is a focused, production-ready foundation for building agentic applications. We’ve streamlined the framework around three core improvements:

create_agent

create_agent is the standard way to build agents in LangChain 1.0. It provides a simpler interface than langgraph.prebuilt.create_react_agent while offering greater customization potential through middleware.
from langchain.agents import create_agent

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[search_web, analyze_data, send_email],
    system_prompt="You are a helpful research assistant."
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "Research AI safety trends"}]
})
For more information, see Agents.

Middleware

Middleware is the defining feature of create_agent. It makes create_agent highly customizable, raising the ceiling for what you can build. Great agents require context engineering: getting the right information to the model at the right time. Middleware helps you control dynamic prompts, conversation summarization, selective tool access, state management, and guardrails through a composable abstraction.

Prebuilt middleware

LangChain provides a few prebuilt middlewares for common patterns, including:
  • PIIRedactionMiddleware: Redact sensitive information before sending to the model
  • SummarizationMiddleware: Condense conversation history when it gets too long
  • HumanInTheLoopMiddleware: Require approval for sensitive tool calls
from langchain.agents import create_agent
from langchain.agents.middleware import (
    PIIRedactionMiddleware,
    SummarizationMiddleware,
    HumanInTheLoopMiddleware
)

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[read_email, send_email],
    middleware=[
        PIIRedactionMiddleware(patterns=["email", "phone", "ssn"]),
        SummarizationMiddleware(
            model="anthropic:claude-sonnet-4-5-20250929",
            max_tokens_before_summary=500
        ),
        HumanInTheLoopMiddleware(
            interrupt_on={"send_email": {"allow_accept": True, "allow_edit": True}}
        ),
    ]
)

Custom middleware

You can also build custom middleware to fit your specific needs by implementing any of these hooks on a subclass of the AgentMiddleware class:
Hook             When it runs               Use cases
before_agent     Before calling the agent   Load memory, validate input
before_model     Before each LLM call       Update prompts, trim messages
wrap_model_call  Around each LLM call       Intercept and modify requests/responses
wrap_tool_call   Around each tool call      Intercept and modify tool execution
after_model      After each LLM response    Validate output, apply guardrails
after_agent      After agent completes      Save results, cleanup
[Middleware flow diagram]
Example custom middleware:
from dataclasses import dataclass

from langchain.agents import create_agent
from langchain.agents.middleware import (
    AgentMiddleware,
    ModelRequest,
    ModelRequestHandler
)
from langchain_core.messages import AIMessage

@dataclass
class Context:
    user_expertise: str = "beginner"

class ExpertiseBasedToolMiddleware(AgentMiddleware):
    def wrap_model_call(
        self, request: ModelRequest, handler: ModelRequestHandler
    ) -> AIMessage:
        user_level = request.runtime.context.user_expertise

        if user_level == "expert":
            # More powerful model
            model = "openai:gpt-5"
            tools = [advanced_search, data_analysis]
        else:
            # Less powerful model
            model = "openai:gpt-5-nano"
            tools = [simple_search, basic_calculator]

        return handler(
            request.replace(model=model, tools=tools)
        )

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[
        simple_search, advanced_search, basic_calculator, data_analysis
    ],
    middleware=[ExpertiseBasedToolMiddleware()],
    context_schema=Context
)
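The other hooks follow the same subclass pattern. As a rough sketch of an after_model guardrail (the hook signature and state-update semantics here are approximations of the alpha API, and BLOCKED_PHRASE is a hypothetical rule):
from langchain.agents.middleware import AgentMiddleware
from langchain_core.messages import AIMessage

BLOCKED_PHRASE = "internal use only"  # hypothetical guardrail rule

class OutputGuardrailMiddleware(AgentMiddleware):
    def after_model(self, state, runtime):
        # Inspect the latest model response before the agent proceeds.
        last_message = state["messages"][-1]
        if isinstance(last_message, AIMessage) and BLOCKED_PHRASE in str(last_message.content):
            # Return a state update; how it merges depends on the agent's
            # message reducer.
            return {"messages": [AIMessage("I can't share that information.")]}
        # Returning None leaves the state unchanged.
        return None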
For more information, see the complete middleware guide.

Built on LangGraph

Because create_agent is built on LangGraph, you automatically get built-in support for long-running, reliable agents via:

Persistence

Conversations automatically persist across sessions with built-in checkpointing
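For example, attaching a LangGraph checkpointer and reusing a thread_id carries a conversation across invocations. A minimal sketch: InMemorySaver is a demo backend, and passing the checkpointer directly to create_agent is assumed here.
from langchain.agents import create_agent
from langgraph.checkpoint.memory import InMemorySaver

# In-memory checkpointer for demos; use a durable backend in production.
checkpointer = InMemorySaver()

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[search_web],
    checkpointer=checkpointer
)

config = {"configurable": {"thread_id": "session-1"}}
agent.invoke({"messages": [{"role": "user", "content": "Hi, I'm Alice."}]}, config)

# Same thread_id, so the earlier turn is restored before this call.
result = agent.invoke({"messages": [{"role": "user", "content": "What's my name?"}]}, config)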

Streaming

Stream tokens, tool calls, and reasoning traces in real-time
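For example, given an agent created with create_agent, a sketch of streaming full state snapshots as it runs (stream_mode="values" is one of several LangGraph stream modes; token-level streaming uses stream_mode="messages"):
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Research AI safety trends"}]},
    stream_mode="values"
):
    # Each chunk is the full agent state; print the newest message.
    chunk["messages"][-1].pretty_print()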

Human-in-the-loop

Pause agent execution for human approval before sensitive actions

Time travel

Rewind conversations to any point and explore alternate paths and prompts
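For example, with a checkpointer attached (as in the persistence sketch above), you can replay from an earlier checkpoint and continue down a different path. A rough sketch using LangGraph's state-history API; the index picked here is illustrative:
config = {"configurable": {"thread_id": "session-1"}}

# Checkpoints for this thread, newest first.
history = list(agent.get_state_history(config))
earlier = history[2]  # pick an earlier point in the conversation

# Resume from that checkpoint with a different user message.
agent.invoke(
    {"messages": [{"role": "user", "content": "Actually, focus on policy instead."}]},
    {
        "configurable": {
            "thread_id": "session-1",
            "checkpoint_id": earlier.config["configurable"]["checkpoint_id"]
        }
    }
)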
You don’t need to learn LangGraph to use these features—they work out of the box.

Structured output

create_agent has improved structured output generation:
  • Main loop integration: Structured output is now generated in the main loop instead of requiring an additional LLM call
  • Structured output strategy: Models can choose between calling tools or using provider-side structured output generation
  • Cost reduction: Eliminates extra expense from additional LLM calls
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy
from pydantic import BaseModel

class Weather(BaseModel):
    temperature: float
    condition: str

def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"

agent = create_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ToolStrategy(Weather)
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "What's the weather in SF?"}]
})

print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
Error handling: Control error handling via the handle_errors parameter to ToolStrategy:
  • Parsing errors: Model generates data that doesn’t match desired structure
  • Multiple tool calls: Model generates 2+ tool calls for structured output schemas
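For example, a sketch that retries with a custom feedback message when parsing fails (the full set of handle_errors options is covered in the structured output guide; the message string here is illustrative):
agent = create_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ToolStrategy(
        Weather,
        # On a structured output error, feed this message back to the model
        # and retry instead of raising.
        handle_errors="Your response didn't match the Weather schema. Please try again."
    )
)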

Standard content blocks

The new .content_blocks property provides unified access to modern LLM features across all providers:
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
response = model.invoke("What's the capital of France?")

# Unified access to content blocks
for block in response.content_blocks:
    if block["type"] == "reasoning":
        print(f"Model reasoning: {block['reasoning']}")
    elif block["type"] == "text":
        print(f"Response: {block['text']}")
    elif block["type"] == "tool_call":
        print(f"Tool call: {block['name']}({block['args']})")

Benefits

  • Provider agnostic: Access reasoning traces, citations, built-in tools (web search, code interpreters, etc.), and other features using the same API regardless of provider (see the sketch after this list)
  • Future proof: New LLM capabilities are automatically available through content blocks
  • Type safe: Full type hints for all content block types
  • Backward compatible: Standard content can be loaded lazily, so there are no associated breaking changes
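For instance, the same content_blocks loop works unchanged with a different provider. A sketch using langchain-openai; the model name is illustrative:
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")
response = model.invoke("What's the capital of France?")

# Same loop as the Anthropic example above, with no provider-specific handling.
for block in response.content_blocks:
    if block["type"] == "text":
        print(f"Response: {block['text']}")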
For more information, see our guide on content blocks.

langchain-classic

LangChain v1 focuses on standard interfaces and production-ready agents. Legacy functionality has moved to langchain-classic to keep the core package lean.

What’s in langchain-classic

  • Legacy chains and chain implementations
  • The indexing API
  • langchain-community exports
  • Other deprecated functionality
If you use any of this functionality, install langchain-classic:
pip install langchain-classic
Then update your imports:
# Before
from langchain import ...
from langchain.chains import ...

# After
from langchain_classic import ...
from langchain_classic.chains import ...

Reporting issues

Please report any issues discovered with 1.0 on GitHub using the 'v1' label.
