An application must be configured with a configuration file in order to be deployed to LangSmith (or to be self-hosted). This how-to guide discusses the basic steps to set up an application for deployment using requirements.txt to specify project dependencies. This example is based on this repository, which uses the LangGraph framework. The final repository structure will look something like this:
my-app/
├── my_agent # all project code lies within here
│   ├── utils # utilities for your graph
│   │   ├── __init__.py
│   │   ├── tools.py # tools for your graph
│   │   ├── nodes.py # node functions for your graph
│   │   └── state.py # state definition of your graph
│   ├── requirements.txt # package dependencies
│   ├── __init__.py
│   └── agent.py # code for constructing your graph
├── .env # environment variables
└── langgraph.json # configuration file for LangGraph
You don’t need to use LangGraph to build your app to deploy it on LangSmith. LangSmith’s deployable unit is a LangGraph graph, but your graph can be a thin adapter around any codebase or framework. This lets you keep your core application logic outside LangGraph while still using LangSmith for deployment, scaling, and observability.
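For illustration only (this adapter is not part of the example repository, and the call_existing_app function and AdapterState schema below are hypothetical stand-ins for your own code), a thin adapter can be a single-node graph that delegates to existing application logic:
# adapter_example.py — illustrative sketch; names are hypothetical
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END

class AdapterState(TypedDict):
    question: str
    answer: str

def call_existing_app(state: AdapterState) -> dict:
    # Delegate to your existing, non-LangGraph application code here.
    answer = f"echo: {state['question']}"  # replace with a call into your own codebase
    return {"answer": answer}

workflow = StateGraph(AdapterState)
workflow.add_node("app", call_existing_app)
workflow.add_edge(START, "app")
workflow.add_edge("app", END)

graph = workflow.compile()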
You can also set up your project with:
  • pyproject.toml: If you prefer using poetry for dependency management, check out this how-to guide on using pyproject.toml for LangSmith.
  • a monorepo: If you are interested in deploying a graph located inside a monorepo, take a look at this repository for an example of how to do so.
After each step, an example file directory is provided to demonstrate how code can be organized.

Specify Dependencies

Dependencies can optionally be specified in one of the following files: pyproject.toml, setup.py, or requirements.txt. If none of these files is created, then dependencies can be specified later in the configuration file. The dependencies below will be included in the image; you can also use them in your code, as long as you specify a compatible version range:
langgraph>=0.6.0
langgraph-sdk>=0.1.66
langgraph-checkpoint>=2.0.23
langchain-core>=0.2.38
langsmith>=0.1.63
orjson>=3.9.7,<3.10.17
httpx>=0.25.0
tenacity>=8.0.0
uvicorn>=0.26.0
sse-starlette>=2.1.0,<2.2.0
uvloop>=0.18.0
httptools>=0.5.0
jsonschema-rs>=0.20.0
structlog>=24.1.0
cloudpickle>=3.0.0
Example requirements.txt file:
langgraph
langchain_anthropic
tavily-python
langchain_community
langchain_openai
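If your own code imports any of the packages listed above, you can also pin them in requirements.txt, provided your constraints stay within the compatible ranges shown earlier. For example (the version ranges shown are illustrative, not prescriptive):
langgraph>=0.6.0,<0.7.0
langchain-core>=0.2.38
langchain_anthropic
langchain_openai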

Example file directory:
my-app/
├── my_agent # all project code lies within here
│   └── requirements.txt # package dependencies

Specify Environment Variables

Environment variables can optionally be specified in a file (e.g. .env). See the Environment Variables reference to configure additional variables for a deployment. Example .env file:
MY_ENV_VAR_1=foo
MY_ENV_VAR_2=bar
OPENAI_API_KEY=key
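A deployment loads these variables for you. For local development you can load the same file yourself, for example with the python-dotenv package (an assumption here; it is an extra dependency and is not required for deployment):
# local_dev.py — optional, for local development only
from dotenv import load_dotenv

load_dotenv()  # loads variables from a nearby .env file into os.environ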
Example file directory:
my-app/
├── my_agent # all project code lies within here
│   └── requirements.txt # package dependencies
└── .env # environment variables

Define Graphs

Implement your graphs. Graphs can be defined in a single file or multiple files. Make note of the variable name of each CompiledStateGraph to be included in the application; the variable names will be used later when creating the LangGraph configuration file. Below is an example agent.py file, which shows how to import from other modules you define (code for the modules is not shown here; see this repository for their implementation):
# my_agent/agent.py
from typing import Literal
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, END, START
from my_agent.utils.nodes import call_model, should_continue, tool_node # import nodes
from my_agent.utils.state import AgentState # import state

# Define the runtime context
class GraphContext(TypedDict):
    model_name: Literal["anthropic", "openai"]

workflow = StateGraph(AgentState, context_schema=GraphContext)
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
workflow.add_edge(START, "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "action",
        "end": END,
    },
)
workflow.add_edge("action", "agent")

graph = workflow.compile()
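Optionally, you can sanity-check the compiled graph locally before writing the configuration file. The sketch below assumes langgraph >= 0.6 (where invoke accepts a context argument matching the context_schema), a state with a messages key as in the example repository, and the relevant model API keys in your environment:
# quick local check — run from the repository root
from my_agent.agent import graph

result = graph.invoke(
    {"messages": [{"role": "user", "content": "hello"}]},
    context={"model_name": "openai"},
)
print(result["messages"][-1])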
Example file directory:
my-app/
├── my_agent # all project code lies within here
│   ├── utils # utilities for your graph
│   │   ├── __init__.py
│   │   ├── tools.py # tools for your graph
│   │   ├── nodes.py # node functions for your graph
│   │   └── state.py # state definition of your graph
│   ├── requirements.txt # package dependencies
│   ├── __init__.py
│   └── agent.py # code for constructing your graph
└── .env # environment variables

Create the configuration file

Create a configuration file called langgraph.json. See the configuration file reference for detailed explanations of each key in the JSON object of the configuration file. Example langgraph.json file:
{
  "dependencies": ["./my_agent"],
  "graphs": {
    "agent": "./my_agent/agent.py:graph"
  },
  "env": ".env"
}
Note that the variable name of the CompiledStateGraph appears at the end of the value of each subkey in the top-level graphs key (i.e. :<variable_name>).
Configuration File Location: The configuration file must be placed in a directory that is at the same level as or higher than the Python files that contain compiled graphs and associated dependencies.
Example file directory:
my-app/
├── my_agent # all project code lies within here
│   ├── utils # utilities for your graph
│   │   ├── __init__.py
│   │   ├── tools.py # tools for your graph
│   │   ├── nodes.py # node functions for your graph
│   │   └── state.py # state definition of your graph
│   ├── requirements.txt # package dependencies
│   ├── __init__.py
│   └── agent.py # code for constructing your graph
├── .env # environment variables
└── langgraph.json # configuration file for LangGraph

Next

After you set up your project and place it in a GitHub repository, it’s time to deploy your app.