Important
Self-hosted LangSmith is an add-on to the Enterprise plan designed for our largest, most security-conscious customers. For more details, refer to Pricing. Contact our sales team if you want to get a license key to trial LangSmith in your environment.
LangSmith supports different self-hosted configurations depending on your scale, security, and infrastructure needs. This page provides an overview of the supported self-hosted models. Self-hosting allows you to run all components entirely within your own cloud environment. You can choose between the following self-hosting models:
  1. LangSmith: Deploy an instance of the LangSmith application that includes observability, tracing, and evaluations in the UI and API. Best for teams who want self-hosted monitoring and evaluation without deploying agents.
  2. LangSmith with agent deployment: Deploy a graph (workflow or agentic) to LangGraph Server via the control plane. Together, the control plane and data plane form the full LangSmith platform, providing UI and API management for running and monitoring agents. This includes observability, evaluation, and deployment management.
  3. Standalone server: Deploy a LangGraph Server directly without the control plane UI. Ideal for lightweight setups running one or a few agents as independent services, with full control over scaling and integration.
LangSmith
  • Includes:
    • LangSmith app (UI + API)
    • Backend services (queue, playground, ACE)
    • Datastores: PostgreSQL, Redis, ClickHouse, optional blob storage
  • Best for:
    • Teams who need self-hosted observability, tracing, and evaluation
    • Running the LangSmith app without deploying agents/graphs
  • Methods:
    • Docker Compose (dev/test)
    • Kubernetes + Helm (production)

LangSmith with agent deployment
  • Includes:
    • Everything from LangSmith
    • Control plane (deployments UI, revision management, Studio)
    • Data plane (LangGraph Server pods)
    • Kubernetes operator for orchestration
  • Best for:
    • Enterprise teams needing a private LangChain Cloud
    • Centralized UI/API for managing multiple agents/graphs
    • Integrated observability and orchestration
  • Methods:
    • Kubernetes with Helm (required)
    • Runs on EKS, GKE, AKS, or self-managed clusters

Standalone server
  • Includes:
    • LangGraph Server container(s)
    • Required PostgreSQL + Redis (shared or dedicated)
    • Optional LangSmith integration for tracing
  • Best for:
    • Lightweight deployments of one or a few agents
    • Integrating LangGraph Servers as microservices
    • Teams preferring to manage scaling & CI/CD themselves
  • Methods:
    • Docker / Docker Compose (dev/test)
    • Kubernetes + Helm (production)
    • Any container runtime or VM (ECS, EC2, ACI, etc.)
Supported compute platforms: Kubernetes (required for the control plane); any compute platform (standalone server only). For deployment guides, refer to the sections below.
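For Kubernetes installs, the overall shape of a Helm-based deployment is sketched below. The chart repository URL, release name, namespace, and values-file name are illustrative assumptions; confirm them against the official deployment guide before running anything.

```shell
# Illustrative only: repo URL, chart name, and values file are assumptions;
# the authoritative steps live in the self-hosted deployment guide.
helm repo add langchain https://langchain-ai.github.io/helm/
helm repo update
helm install langsmith langchain/langsmith \
  --values langsmith_config.yaml \
  --namespace langsmith --create-namespace
```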

LangSmith

You can run LangSmith in Kubernetes (recommended) or Docker in a cloud environment that you control. The LangSmith application consists of several components including LangSmith servers and stateful services:
  • Services
    • LangSmith frontend
    • LangSmith backend
    • LangSmith platform backend
    • LangSmith Playground
    • LangSmith queue
    • LangSmith ACE (Arbitrary Code Execution) backend
  • Storage services
    • ClickHouse
    • PostgreSQL
    • Redis
    • Blob storage (Optional, but recommended)
To access the LangSmith UI and send API requests, you will need to expose the LangSmith frontend service. Depending on your installation method, this can be a load balancer or a port exposed on the host machine.
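Because the frontend routes API traffic to the other services, SDK clients only need that single exposed endpoint. A minimal sketch of pointing the Python SDK at a self-hosted instance via environment variables (the hostname, path, and key below are placeholders):

```python
import os

# Placeholder endpoint for a self-hosted instance: the frontend routes
# /api requests on to the backend services, so one URL suffices.
os.environ["LANGSMITH_ENDPOINT"] = "https://langsmith.example.com/api/v1"
os.environ["LANGSMITH_API_KEY"] = "lsv2_pt_placeholder"  # placeholder key
os.environ["LANGSMITH_TRACING"] = "true"

# A client created after this point (e.g. `from langsmith import Client;
# Client()`) picks up the self-hosted endpoint automatically.
print(os.environ["LANGSMITH_ENDPOINT"])
```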

Services

  • LangSmith frontend: The frontend uses Nginx to serve the LangSmith UI and route API requests to the other services. It is the entrypoint for the application and the only component that must be exposed to users.
  • LangSmith backend: The backend is the main entrypoint for CRUD API requests and handles the majority of the business logic for the application, including requests from the frontend and SDK, preparing traces for ingestion, and supporting the hub API.
  • LangSmith queue: The queue asynchronously ingests incoming traces and feedback, persists them to the trace and feedback datastore, performs data-integrity checks, and handles retries in situations such as database errors or a temporary inability to connect to the database.
  • LangSmith platform backend: The platform backend is another critical service that primarily handles authentication, run ingestion, and other high-volume tasks.
  • LangSmith playground: The playground forwards requests to various LLM APIs to support the LangSmith Playground feature. It can also connect to your own custom model servers.
  • LangSmith ACE (Arbitrary Code Execution) backend: The ACE backend executes arbitrary code in a secure environment, supporting the ability to run custom code within LangSmith.

Storage services

LangSmith bundles all storage services by default, but each can be configured to use an external instance. In a production setting, we strongly recommend using external storage services.
  • ClickHouse: A high-performance, column-oriented SQL database management system (DBMS) for online analytical processing (OLAP). LangSmith uses ClickHouse as the primary data store for traces and feedback (high-volume data).
  • PostgreSQL: A powerful, open-source object-relational database system that extends the SQL language with features to safely store and scale complicated data workloads. LangSmith uses PostgreSQL as the primary data store for transactional workloads and operational data (almost everything besides traces and feedback).
  • Redis: An in-memory key-value database that persists on disk. By holding data in memory, Redis offers high performance for operations like caching. LangSmith uses Redis to back queuing and caching operations.
  • Blob storage: LangSmith supports several blob storage providers, including AWS S3, Azure Blob Storage, and Google Cloud Storage. Blob storage holds large files such as trace artifacts, feedback attachments, and other large data objects. It is optional, but highly recommended for production deployments.
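In Helm-based installs, external datastores are wired up through the chart's values file. The fragment below is a sketch only: the key names are assumptions about the chart's schema and the connection strings are placeholders, so check the chart's documented values before use.

```yaml
# Hypothetical values.yaml fragment. Key names and structure are
# assumptions; consult the LangSmith Helm chart reference for the
# actual schema. All connection strings are placeholders.
postgres:
  external:
    enabled: true
    connectionUrl: "postgres://user:pass@my-postgres:5432/langsmith"
redis:
  external:
    enabled: true
    connectionUrl: "redis://my-redis:6379"
clickhouse:
  external:
    enabled: true
    host: "my-clickhouse"
    port: "8123"
```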

LangSmith with agent deployment

LangSmith with agent deployment builds on top of the LangSmith option. Enabling deployment is ideal for enterprise teams who want a centralized, UI-driven platform to deploy and manage multiple agents and graphs, with all infrastructure, data, and orchestration fully under their control. You must already have a self-hosted LangSmith instance installed in your cloud. Once you have a LangSmith instance, you can enable deployments, which provides the control plane and data plane for running and managing graphs (workflow and agentic). You run both the control plane and the data plane entirely within your own infrastructure. You are responsible for provisioning and managing all components.
Control plane
  • Responsibilities: UI for creating deployments & revisions; APIs for deployment management
  • Where it runs: your cloud
  • Who manages it: you

Data plane
  • Responsibilities: operator/listener to reconcile deployments; LangGraph Servers (agents/graphs); backing services (Postgres, Redis, etc.)
  • Where it runs: your cloud
  • Who manages it: you

Workflow

  1. Use the langgraph-cli or Studio to test your graph locally.
  2. Build a Docker image with langgraph build.
  3. Deploy your LangGraph Server via the LangSmith control plane UI or through your container tooling of choice.
  4. All agents are deployed as Kubernetes services behind the ingress configured for your LangSmith instance.
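The first two steps above map onto the langgraph-cli roughly as follows (the image tag is a placeholder):

```shell
# Run the graph locally with an in-memory dev server for testing.
langgraph dev

# Build a Docker image for the LangGraph Server (tag is a placeholder).
langgraph build -t my-agent:0.1
```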

Architecture

Self-Hosted Full Platform Architecture

Supported compute platforms

  • Kubernetes: LangSmith with agent deployment supports running control plane and data plane infrastructure on any Kubernetes cluster.
If you would like to enable this on your LangSmith instance, please follow the Self-Hosted Full Platform deployment guide.

Standalone server

The Standalone server option is the most lightweight and flexible way to run LangSmith. Unlike the other models, you only manage a simplified data plane made up of LangGraph Servers and their required backing services (PostgreSQL, Redis, etc.). This option is best for teams who want to run one or a few agents as independent services, or integrate LangGraph Servers as microservices into their own systems. It gives you full control over scaling, deployment, and CI/CD pipelines, while still allowing optional integration with LangSmith for tracing and evaluation.
Do not run standalone servers in serverless environments. Scale-to-zero may cause task loss, and scaling up will not work reliably.
Control plane
  • Not included in this model

Data plane
  • Responsibilities: LangGraph Servers; backing services (Postgres, Redis, etc.)
  • Where it runs: your cloud
  • Who manages it: you

Workflow

  1. Define and test your graph locally using the langgraph-cli or Studio.
  2. Package your agent as a Docker image.
  3. Deploy the LangGraph Server to your compute platform of choice (Kubernetes, Docker, VM).
  4. Optionally, configure LangSmith API keys and endpoints so the server reports traces and evaluations back to LangSmith (self-hosted or SaaS).
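As a sketch, steps 2–4 with Docker Compose might look like the following. The service layout and environment variable names follow the standalone-server documentation, but treat the exact image names, tags, and values as assumptions and placeholders:

```yaml
# Illustrative docker-compose for a standalone LangGraph Server.
# `my-agent:0.1` is the image built with `langgraph build`; env var
# names (REDIS_URI, DATABASE_URI, LANGSMITH_API_KEY) should be
# verified against the current standalone-server docs.
services:
  langgraph-redis:
    image: redis:6
  langgraph-postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  langgraph-api:
    image: my-agent:0.1
    ports:
      - "8123:8000"
    depends_on:
      - langgraph-redis
      - langgraph-postgres
    environment:
      REDIS_URI: redis://langgraph-redis:6379
      DATABASE_URI: postgres://postgres:postgres@langgraph-postgres:5432/postgres?sslmode=disable
      LANGSMITH_API_KEY: lsv2_pt_placeholder  # optional: reports traces to LangSmith
```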

Architecture

Standalone Container

Supported compute platforms

  • Kubernetes: Use the LangSmith Helm chart to run LangGraph Servers in a Kubernetes cluster. This is the recommended option for production-grade deployments.
  • Docker: Run in any Docker-supported compute platform (local dev machine, VM, ECS, etc.). This is best suited for development or small-scale workloads.
To set up a LangGraph Server, see the how-to guide.
