In this tutorial, you will learn how to build an agent that can answer questions about a SQL database using LangChain agents. At a high level, the agent will:
1. Fetch the available tables and schemas from the database
2. Decide which tables are relevant to the question
3. Fetch the schemas for the relevant tables
4. Generate a query based on the question and information from the schemas
5. Double-check the query for common mistakes using an LLM
6. Execute the query and return the results
7. Correct mistakes surfaced by the database engine until the query is successful
8. Formulate a response based on the results
Building Q&A systems over SQL databases requires executing model-generated SQL queries, which carries inherent risks. Make sure that your database connection permissions are always scoped as narrowly as possible for your agent’s needs. This mitigates, though does not eliminate, the risks of building a model-driven system.
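As a defense-in-depth complement to narrow database permissions, you can also screen queries in application code before they reach the database. The guard below is an illustrative assumption, not part of the tutorial's tooling, and a keyword check like this is not a substitute for properly scoped permissions:

```typescript
// Reject anything that is not a single SELECT statement.
// NOTE: illustrative only; narrow DB permissions remain the primary safeguard.
const FORBIDDEN =
  /\b(insert|update|delete|alter|drop|create|replace|truncate|attach|pragma)\b/i;

function isSafeSelect(sql: string): boolean {
  const trimmed = sql.trim().replace(/;\s*$/, ""); // allow one trailing semicolon
  if (trimmed.includes(";")) return false; // no multi-statement queries
  if (!/^select\b/i.test(trimmed)) return false; // must start with SELECT
  return !FORBIDDEN.test(trimmed); // no write/DDL keywords
}
```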
You will be creating a SQLite database for this tutorial. SQLite is a lightweight database that is easy to set up and use. We will be loading the Chinook database, a sample database that represents a digital media store. For convenience, we have hosted the database (`Chinook.db`) on a public GCS bucket.
```typescript
import fs from "node:fs/promises";
import path from "node:path";

const url = "https://storage.googleapis.com/benchmarks-artifacts/chinook/Chinook.db";
const localPath = path.resolve("Chinook.db");

async function resolveDbPath() {
  // Reuse the local copy if it has already been downloaded.
  // (fs/promises has no `exists`; probe with `access` instead.)
  try {
    await fs.access(localPath);
    return localPath;
  } catch {
    // Not present locally; fall through to download.
  }
  const resp = await fetch(url);
  if (!resp.ok) {
    throw new Error(`Failed to download DB. Status code: ${resp.status}`);
  }
  const buf = Buffer.from(await resp.arrayBuffer());
  await fs.writeFile(localPath, buf);
  return localPath;
}
```
Use the `SqlDatabase` wrapper available in `langchain/sql_db` to interact with the database. The wrapper provides a simple interface to execute SQL queries and fetch results:
```typescript
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

let db: SqlDatabase | undefined;

async function getDb() {
  if (!db) {
    const dbPath = await resolveDbPath();
    const datasource = new DataSource({ type: "sqlite", database: dbPath });
    db = await SqlDatabase.fromDataSourceParams({ appDataSource: datasource });
  }
  return db;
}

async function getSchema() {
  const db = await getDb();
  return await db.getTableInfo();
}
```
It can be prudent to check the agent’s SQL queries before they are executed for any unintended actions or inefficiencies. LangChain agents support built-in human-in-the-loop middleware to add oversight to agent tool calls. Let’s configure the agent to pause for human review when calling the `sql_db_query` tool:
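A sketch of that wiring is below. It assumes the `humanInTheLoopMiddleware` export from `langchain` and `MemorySaver` from `@langchain/langgraph`, a placeholder model name, and a `tools` array already built for your database; exact option names may differ across versions, so treat this as an outline rather than a definitive configuration.

```typescript
import { createAgent, humanInTheLoopMiddleware } from "langchain";
import { MemorySaver } from "@langchain/langgraph";

// Pause for human review whenever the agent wants to call `sql_db_query`.
// `allowedDecisions` lists the responses a reviewer may return.
const reviewSqlMiddleware = humanInTheLoopMiddleware({
  interruptOn: {
    sql_db_query: { allowedDecisions: ["approve", "edit", "reject"] },
  },
});

const agent = createAgent({
  model: "openai:gpt-4o", // example model name; substitute your own
  tools, // assumed: the SQL tools defined for your database
  middleware: [reviewSqlMiddleware],
  // A checkpointer lets execution pause at the interrupt and resume later.
  checkpointer: new MemorySaver(),
});
```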
We’ve added a checkpointer to our agent to allow execution to be paused and resumed. See the human-in-the-loop guide for details on this as well as available middleware configurations.
When you run the agent, it will now pause for review before executing the `sql_db_query` tool:
```python
question = "Which genre on average has the longest tracks?"
config = {"configurable": {"thread_id": "1"}}

for step in agent.stream(
    {"messages": [{"role": "user", "content": question}]},
    config,
    stream_mode="values",
):
    if "messages" in step:
        step["messages"][-1].pretty_print()
    elif "__interrupt__" in step:
        print("INTERRUPTED:")
        interrupt = step["__interrupt__"][0]
        for request in interrupt.value:
            print(request["description"])
```
```
...
INTERRUPTED:
Tool execution pending approval

Tool: sql_db_query
Args: {'query': 'SELECT g.Name AS Genre, AVG(t.Milliseconds) AS AvgTrackLength FROM Track t JOIN Genre g ON t.GenreId = g.GenreId GROUP BY g.Name ORDER BY AvgTrackLength DESC LIMIT 1;'}
```
We can resume execution, in this case accepting the query, using `Command`:
```python
from langgraph.types import Command

for step in agent.stream(
    Command(resume=[{"type": "accept"}]),
    config,
    stream_mode="values",
):
    if "messages" in step:
        step["messages"][-1].pretty_print()
    elif "__interrupt__" in step:
        print("INTERRUPTED:")
        interrupt = step["__interrupt__"][0]
        for request in interrupt.value:
            print(request["description"])
```
```
================================== Ai Message ==================================
Tool Calls:
  sql_db_query (call_7oz86Epg7lYRqi9rQHbZPS1U)
 Call ID: call_7oz86Epg7lYRqi9rQHbZPS1U
  Args:
    query: SELECT Genre.Name, AVG(Track.Milliseconds) AS AvgDuration FROM Track JOIN Genre ON Track.GenreId = Genre.GenreId GROUP BY Genre.Name ORDER BY AvgDuration DESC LIMIT 5;
================================= Tool Message =================================
Name: sql_db_query

[('Sci Fi & Fantasy', 2911783.0384615385), ('Science Fiction', 2625549.076923077), ('Drama', 2575283.78125), ('TV Shows', 2145041.0215053763), ('Comedy', 1585263.705882353)]
================================== Ai Message ==================================
The genre with the longest average track length is "Sci Fi & Fantasy" with an average duration of about 2,911,783 milliseconds, followed by "Science Fiction" and "Drama."
```
Use `createAgent` to build a ReAct agent with minimal code. The agent will interpret the request and generate a SQL command. The tools will check the command for safety and then try to execute it. If the command produces an error, the error message is returned to the model, which can then examine the original request and the error message and generate a new command. This can continue until the model generates a successful command or reaches the attempt limit. This pattern of providing a model with feedback, error messages in this case, is very powerful.

Initialize the agent with a descriptive system prompt to customize its behavior:
```typescript
import { SystemMessage } from "langchain";

const getSystemPrompt = async () =>
  new SystemMessage(`You are a careful SQLite analyst.

Authoritative schema (do not invent columns/tables):
${await getSchema()}

Rules:
- Think step-by-step.
- When you need data, call the tool \`execute_sql\` with ONE SELECT query.
- Read-only only; no INSERT/UPDATE/DELETE/ALTER/DROP/CREATE/REPLACE/TRUNCATE.
- Limit to 5 rows unless user explicitly asks otherwise.
- If the tool returns 'Error:', revise the SQL and try again.
- Limit the number of attempts to 5.
- If you are not successful after 5 attempts, return a note to the user.
- Prefer explicit column lists; avoid SELECT *.`);
```
Now, create an agent with the model, tools, and prompt:
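One way to do that is sketched below, reusing the `getDb` and `getSystemPrompt` helpers defined above. The `execute_sql` tool name matches the system prompt; the model name is a placeholder, and `createAgent` option names may vary between versions, so treat this as an outline rather than a definitive implementation.

```typescript
import { createAgent } from "langchain";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Tool the model calls to run a query. Errors are returned as strings
// prefixed with "Error:" so the model can revise and retry, per the prompt.
const executeSql = tool(
  async ({ query }) => {
    const db = await getDb();
    try {
      return await db.run(query);
    } catch (e) {
      return `Error: ${(e as Error).message}`;
    }
  },
  {
    name: "execute_sql",
    description: "Run ONE read-only SQLite SELECT statement and return the results.",
    schema: z.object({ query: z.string().describe("A single SQL SELECT statement") }),
  }
);

const agent = createAgent({
  model: "openai:gpt-4o", // example model name; substitute your own
  tools: [executeSql],
  systemPrompt: await getSystemPrompt(),
});
```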
```
human: Which genre, on average, has the longest tracks?
ai:
tool: [{"Genre":"Sci Fi & Fantasy","AvgMilliseconds":2911783.0384615385}]
ai: Sci Fi & Fantasy — average track length ≈ 48.5 minutes (about 2,911,783 ms).
```
The agent correctly wrote a query, checked the query, and ran it to inform its final response.
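As a quick sanity check on the unit conversion in that final response, 2,911,783 milliseconds does come out to roughly 48.5 minutes:

```typescript
// Verify the agent's milliseconds-to-minutes conversion.
const avgMs = 2911783.0384615385; // average track length reported by the query
const avgMinutes = avgMs / 60000; // 60,000 ms per minute
console.log(avgMinutes.toFixed(1)); // → 48.5
```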
You can inspect all aspects of the above run, including steps taken, tools invoked, what prompts were seen by the LLM, and more in the LangSmith trace.
Studio provides a “client side” loop as well as memory, so you can run the agent as a chat interface and query the database. You can ask questions like “Tell me the schema of the database” or “Show me the invoices for the 5 top customers”. You will see the generated SQL command and the resulting output. The details of how to get started are below.
Run your agent in Studio
In addition to the previously mentioned packages, you will need to install the LangGraph CLI:
npm i -g langgraph-cli@latest
In the directory you will run from, you will need a `langgraph.json` file with the following contents:
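A minimal example is shown below. This is an assumption about a typical JavaScript project layout; adjust the graph path and exported variable name to match your own project.

```json
{
  "node_version": "20",
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/agent.ts:agent"
  },
  "env": ".env"
}
```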