
LangGraph Tutorial 2026: Build Stateful AI Agents with Python

Why LangGraph?

LangChain Agents were simple but hit a wall for complex use cases — they couldn't handle loops, multiple decision branches, or state that persisted across steps.

LangGraph solves this with a graph-based model: your agent is a directed graph where nodes are functions (LLM calls, tool calls, human input) and edges control flow.

LangGraph vs LangChain Agents:

  • LangGraph: explicit control flow, cycles (shown in action below), state persistence, human-in-the-loop
  • LangChain Agents: simpler, but less control, no cycles, no real state

Installation

pip install langgraph langchain-anthropic langchain-openai

Core Concepts

  • StateGraph: the graph that defines your agent's behavior
  • State: a TypedDict that flows through all nodes
  • Nodes: Python functions that transform the state
  • Edges: connections between nodes (can be conditional)
  • Checkpointer: saves state so you can resume a conversation

Your First Agent: Simple Chat Loop

from typing import Annotated
from typing_extensions import TypedDict
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]  # add_messages handles appending

model = ChatAnthropic(model="claude-haiku-4-5")

def call_model(state: State) -> State:
    """Call the LLM with current messages."""
    response = model.invoke(state["messages"])
    return {"messages": [response]}

# Build the graph
builder = StateGraph(State)
builder.add_node("model", call_model)
builder.add_edge(START, "model")
builder.add_edge("model", END)

graph = builder.compile()

# Run it
result = graph.invoke({
    "messages": [{"role": "user", "content": "What is the capital of France?"}]
})
print(result["messages"][-1].content)
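A key difference from LangChain Agents is that edges can also point backwards, which gives you cycles. Here is a deliberately contrived sketch that keeps looping back to the model node until the history holds four messages; should_continue is an illustrative helper, not a LangGraph built-in:

def should_continue(state: State) -> str:
    # Route back to the model until the history reaches 4 messages
    return "model" if len(state["messages"]) < 4 else END

builder = StateGraph(State)
builder.add_node("model", call_model)
builder.add_edge(START, "model")
builder.add_conditional_edges("model", should_continue)  # "model" loops, END stops

graph = builder.compile()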

ReAct Agent: LLM + Tools

from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    # In production, use a real search API
    return f"Search results for '{query}': [simulated results]"

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        # eval is fine for a demo, but never call it on untrusted input
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

model = ChatAnthropic(model="claude-opus-4-5")
tools = [search_web, calculate]

agent = create_react_agent(model, tools)

result = agent.invoke({
    "messages": [{"role": "user", "content": "What is 2345 * 6789? Also search for Python asyncio."}]
})

for message in result["messages"]:
    # content can be a list of blocks for tool-calling models, so stringify first
    print(f"{message.type}: {str(message.content)[:200]}")

Conditional Routing

from langgraph.graph import StateGraph, START, END

def route_question(state: State) -> str:
    """Decide which node to call next based on the last message."""
    last_message = state["messages"][-1]
    content = last_message.content.lower()

    if "code" in content or "python" in content:
        return "code_expert"
    elif "math" in content:
        return "math_expert"
    else:
        return "general"

# Minimal expert nodes so the example runs end to end; in practice each
# expert would have its own prompt, tools, or model.
def make_expert(system_prompt: str):
    def node(state: State) -> dict:
        messages = [{"role": "system", "content": system_prompt}] + state["messages"]
        return {"messages": [model.invoke(messages)]}
    return node

def route_question_node(state: State) -> dict:
    return {}  # no-op node; the conditional edge below does the actual routing

code_expert_node = make_expert("You are a Python coding expert.")
math_expert_node = make_expert("You are a math expert.")
general_node = make_expert("You are a helpful general assistant.")

builder = StateGraph(State)
builder.add_node("router", route_question_node)
builder.add_node("code_expert", code_expert_node)
builder.add_node("math_expert", math_expert_node)
builder.add_node("general", general_node)

builder.add_edge(START, "router")
builder.add_conditional_edges(
    "router",
    route_question,
    {"code_expert": "code_expert", "math_expert": "math_expert", "general": "general"}
)
builder.add_edge("code_expert", END)
builder.add_edge("math_expert", END)
builder.add_edge("general", END)

State Persistence with Checkpointers

from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
agent = create_react_agent(model, tools, checkpointer=checkpointer)

# Thread ID keeps conversations separate
config = {"configurable": {"thread_id": "user-123"}}

# First message
result = agent.invoke(
    {"messages": [{"role": "user", "content": "My name is Alice"}]},
    config=config
)

# Second message — agent remembers context
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is my name?"}]},
    config=config
)
print(result["messages"][-1].content)  # "Your name is Alice"

For production, use a persistent checkpointer such as PostgreSQL or Redis instead of MemorySaver, which loses all state when the process exits.
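A sketch using the Postgres checkpointer, assuming the langgraph-checkpoint-postgres package is installed and the connection string (a placeholder here) points at a reachable database:

# pip install langgraph-checkpoint-postgres psycopg
from langgraph.checkpoint.postgres import PostgresSaver

DB_URI = "postgresql://user:password@localhost:5432/langgraph"  # placeholder

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # creates the checkpoint tables on first run
    agent = create_react_agent(model, tools, checkpointer=checkpointer)
    config = {"configurable": {"thread_id": "user-123"}}
    result = agent.invoke(
        {"messages": [{"role": "user", "content": "My name is Alice"}]},
        config=config,
    )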

Human-in-the-Loop

from langgraph.checkpoint.memory import MemorySaver

agent = create_react_agent(
    model, tools,
    checkpointer=MemorySaver(),
    interrupt_before=["tools"]  # pause before running any tool
)

config = {"configurable": {"thread_id": "1"}}

# Start the agent
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Search for recent Python news"}]},
    config=config
)

# Agent is paused before the tools node; the last message carries the pending tool calls
print("Agent wants to use:", result["messages"][-1].tool_calls)

# Human approves or modifies
user_input = input("Approve tool use? (y/n): ")
if user_input == "y":
    # Resume from where it stopped
    result = agent.invoke(None, config=config)
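You can confirm the pause by reading the saved checkpoint; get_state returns a snapshot whose next field names the node that will run on resume:

snapshot = agent.get_state(config)
print(snapshot.next)  # ('tools',) while paused; empty once the run completes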

Streaming Intermediate Steps

for step in agent.stream(
    {"messages": [{"role": "user", "content": "What is 42 * 100?"}]},
    config=config,
    stream_mode="values"
):
    step["messages"][-1].pretty_print()

Leonardo Lazzaro

Software engineer and technical writer with 10+ years of experience in DevOps, Python, and Linux systems.
