Important: The hybrid option requires an Enterprise plan.
The hybrid model lets you run the data plane in your own cloud while LangChain hosts and manages the control plane. This option combines the convenience of a managed control plane with the flexibility of self-hosting your own LangGraph Servers and backing stores. When using hybrid, you authenticate with a LangSmith API key.
Control plane (runs in LangChain’s cloud; managed by LangChain)
  • UI for creating deployments and revisions
  • APIs for creating deployments and revisions

Data plane (runs in your cloud; managed by you)
  • Listener to reconcile deployments with control plane state
  • LangGraph Servers
  • Backing services (Postgres, Redis, etc.)

Workflow

  1. Use the langgraph-cli or Studio to test your graph locally.
  2. Build a Docker image using the langgraph build command.
  3. Deploy your LangGraph Server from the control plane UI.
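The build step above reads a `langgraph.json` configuration file at the project root. A minimal sketch is shown below; the graph module path and graph name are placeholders for your own project layout:

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./my_agent/graph.py:graph"
  },
  "env": ".env"
}
```

With this file in place, `langgraph dev` runs the server locally for testing (step 1), and `langgraph build -t my-image` produces the Docker image (step 2) that you then deploy from the control plane UI (step 3).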
Supported Compute Platforms: Kubernetes.
For setup, refer to the Hybrid setup guide.

Architecture

Hybrid deployment: LangChain-hosted control plane (LangSmith UI/APIs) manages deployments. Your cloud runs a listener, LangGraph Server instances, and backing stores (Postgres/Redis) on Kubernetes.

Compute Platforms

  • Kubernetes: Hybrid supports running the data plane on any Kubernetes cluster.
For setup in Kubernetes, refer to the Hybrid setup guide.

Egress to LangSmith and the control plane

In the hybrid deployment model, your self-hosted data plane polls the control plane for changes that need to be reconciled in the data plane. Traces from data plane deployments are also sent to the LangSmith instance integrated with the control plane. This traffic is encrypted over HTTPS, and the data plane authenticates to the control plane with a LangSmith API key. To enable this egress, you may need to update internal firewall rules or cloud resources (such as Security Groups) to allow traffic to certain IP addresses.
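To make the authentication model concrete, the sketch below shows how a poll request from the listener to the control plane might be assembled. The endpoint host is a placeholder (not a documented LangChain URL); the `x-api-key` header is the usual way a LangSmith API key is sent:

```python
# Sketch: how a data-plane listener might build its polling request to the
# control plane. The host below is a placeholder, not a documented endpoint.

def build_poll_request(api_key: str, deployment_id: str) -> tuple[str, dict]:
    """Return the URL and headers for a control-plane poll over HTTPS."""
    # All control-plane traffic goes over HTTPS (encrypted in transit).
    url = f"https://control-plane.example.invalid/v1/deployments/{deployment_id}"
    # The LangSmith API key is what authenticates the data plane.
    headers = {"x-api-key": api_key}
    return url, headers

url, headers = build_poll_request("lsv2_example_key", "dep-123")
```

Because PrivateLink and Private Service Connect are not supported, a request built this way traverses the public internet, which is why the firewall and Security Group changes above may be needed.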
AWS/Azure PrivateLink or GCP Private Service Connect is currently not supported. This traffic will go over the internet.

Listeners

In the hybrid option, you run one or more “listener” applications, depending on how your LangSmith workspaces and Kubernetes clusters are organized.

Kubernetes cluster organization

  • One or more listeners can run in a Kubernetes cluster.
  • A listener can deploy into one or more namespaces in that cluster.
  • Cluster owners are responsible for planning listener layout and LangGraph Server deployments.

LangSmith workspace organization

  • A workspace can be associated with one or more listeners.
  • A workspace can only deploy to Kubernetes clusters where all of its listeners are deployed.
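The two rules above can be modeled in a short illustrative sketch (the helper and data are hypothetical, not part of any LangChain API): each listener runs in one Kubernetes cluster, and a workspace can deploy only to the clusters its listeners cover.

```python
# Illustrative model only (not a LangChain API): each listener is deployed
# into one Kubernetes cluster, and a workspace can deploy only to clusters
# that host one of its listeners.

def deployable_clusters(workspace_listeners: dict[str, str]) -> set[str]:
    """Map {listener_name: cluster_name} to the clusters a workspace can use."""
    return set(workspace_listeners.values())

# A workspace with two listeners, one in cluster "alpha" and one in a shared
# "dev" cluster, can deploy to both clusters and nowhere else.
clusters = deployable_clusters({"listener-prod": "alpha", "listener-dev": "dev"})
```

This is why, in the shared-dev-cluster use case below, each workspace needs two listener deployments: one per cluster it deploys to.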

Use Cases

Here are some common listener configurations (not strict requirements):

Each LangSmith workspace → separate Kubernetes cluster

  • Cluster alpha runs workspace A
  • Cluster beta runs workspace B

Separate clusters, with shared “dev” cluster

  • Cluster alpha runs workspace A
  • Cluster beta runs workspace B
  • Cluster dev runs workspaces A and B
  • Both workspaces have two listeners; cluster dev has two listener deployments

One cluster, one namespace per workspace

  • Cluster alpha, namespace 1 runs workspace A
  • Cluster alpha, namespace 2 runs workspace B

One cluster, single namespace for multiple workspaces

  • Cluster alpha runs workspace A
  • Cluster alpha runs workspace B
