Unlocking AI Workflows with LangGraph

What is LangGraph?

LangGraph is a framework built on top of LangChain that lets you build stateful, multi-step AI agent workflows using a graph-based structure — where nodes are actions and edges are the flow between them.

It was created to solve a key limitation of basic LangChain chains: they only go in one direction. LangGraph adds loops, branches, and state — making it possible to build complex, real-world AI agents.


The Core Idea

Instead of a linear chain:

A → B → C → Done

LangGraph lets you build a graph:

    A
   / \
  B   C
   \ /
    D
    ↓
  (loop back to A if needed)
    ↓
   End

This means agents can make decisions, retry, branch, and loop — just like real workflows.


Key Concepts

1. Nodes

Each node is a function or action — it does one job.

def call_llm(state):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def call_tool(state):
    result = tool.run(state["tool_input"])
    return {"tool_result": result}

2. Edges

Edges define how nodes connect — what runs after what.

  • Normal Edge → always goes A → B
  • Conditional Edge → branches based on logic (if/else)
# Normal edge
graph.add_edge("node_a", "node_b")

# Conditional edge — branches based on state
graph.add_conditional_edges(
    "agent",
    should_continue,              # decision function
    {
        "use_tool": "tool_node",  # if tool needed
        "end": END                # if done
    }
)
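The decision function itself isn't shown above. A minimal sketch of what `should_continue` could look like, assuming a hypothetical state shape where the last message is a dict carrying a "tool_call" key whenever the LLM requested a tool:

```python
def should_continue(state: dict) -> str:
    """Return the branch key that the conditional edge mapping will look up."""
    last_message = state["messages"][-1]
    if last_message.get("tool_call"):
        return "use_tool"   # mapped to "tool_node" above
    return "end"            # mapped to END above

should_continue({"messages": [{"tool_call": "search"}]})  # → "use_tool"
should_continue({"messages": [{"content": "all done"}]})  # → "end"
```

The string it returns is never a node name by itself; it is only a key into the mapping you pass to add_conditional_edges.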

3. State

A shared dictionary that flows through every node — each node can read and update it.

from typing import TypedDict, List

class AgentState(TypedDict):
    messages: List[str]   # conversation history
    tool_result: str      # result from tool calls
    step_count: int       # how many steps taken
    is_done: bool         # completion flag
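A node never mutates this state in place: it returns a partial update, and the graph merges that update into the shared state before the next node runs. A hand-rolled sketch of that merge, using a hypothetical `increment_step` node:

```python
from typing import TypedDict, List

class AgentState(TypedDict):
    messages: List[str]
    tool_result: str
    step_count: int
    is_done: bool

def increment_step(state: AgentState) -> dict:
    # A node returns only the keys it changes, not the whole state
    return {"step_count": state["step_count"] + 1}

state: AgentState = {"messages": [], "tool_result": "", "step_count": 0, "is_done": False}
state = {**state, **increment_step(state)}  # roughly what the graph does between nodes
# state["step_count"] is now 1; all other keys are untouched
```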

4. Cycles / Loops

Unlike chains, LangGraph supports loops — the agent can keep running until a condition is met.

Agent → decides to use tool
Tool runs → result added to state
Agent re-evaluates → needs another tool?
↓ (yes)
Tool runs again
Agent re-evaluates → done?
↓ (yes)
END
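Stripped of the framework, the control flow above is just a loop with an exit condition. A plain-Python sketch with stubbed-out agent and tool steps (the stubs are illustrative, not LangGraph API):

```python
def agent_step(state: dict) -> dict:
    # Stand-in for the LLM: request one tool call, then declare done
    return {**state, "needs_tool": state["tool_calls"] < 1}

def tool_step(state: dict) -> dict:
    # Stand-in for a tool run: record that a tool was used
    return {**state, "tool_calls": state["tool_calls"] + 1}

def run_until_done(state: dict) -> dict:
    # The cycle from the diagram: agent → (tool → agent)* → END
    while True:
        state = agent_step(state)
        if not state["needs_tool"]:
            return state
        state = tool_step(state)

final = run_until_done({"tool_calls": 0})
# final == {"tool_calls": 1, "needs_tool": False}
```

LangGraph's contribution is making this loop declarative: you describe the nodes and the exit condition, and the runtime drives the iteration (with state saved at each step).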

How It Works — Step by Step

┌─────────────────────────────────────────────────┐
│ USER INPUT                                      │
│ "Research AI trends and write a summary report" │
└────────────────────────┬────────────────────────┘
                         ↓
┌─────────────────────────────────────────────────┐
│ STATE INITIALIZED                               │
│ { messages: [...], results: [], done: false }   │
└────────────────────────┬────────────────────────┘
                         ↓
┌─────────────────────────────────────────────────┐
│ NODE: Agent (LLM)                               │
│ Thinks: "I should search the web first"         │
│ Decision: → go to "web_search" node             │
└────────────────────────┬────────────────────────┘
                         ↓
┌─────────────────────────────────────────────────┐
│ NODE: Web Search Tool                           │
│ Searches → returns top articles                 │
│ Updates state: results = [article1, article2]   │
└────────────────────────┬────────────────────────┘
                         ↓
┌─────────────────────────────────────────────────┐
│ CONDITIONAL EDGE CHECK                          │
│ Agent re-evaluates: "Do I need more info?"      │
│ → YES: loop back to search                      │
│ → NO: go to "write_report" node                 │
└────────────────────────┬────────────────────────┘
                         ↓ (NO — has enough info)
┌─────────────────────────────────────────────────┐
│ NODE: Write Report                              │
│ LLM writes summary using state.results          │
└────────────────────────┬────────────────────────┘
                         ↓
┌─────────────────────────────────────────────────┐
│ END                                             │
│ Final report returned to user                   │
└─────────────────────────────────────────────────┘

Code Example — Simple Agent

from langgraph.graph import StateGraph, END
from typing import TypedDict

# 1. Define state
class AgentState(TypedDict):
    messages: list
    next_step: str

# 2. Define nodes
def agent_node(state: AgentState):
    response = llm.invoke(state["messages"])
    # Decide next step
    if "SEARCH:" in response.content:
        return {"next_step": "search", "messages": state["messages"] + [response]}
    else:
        return {"next_step": "end", "messages": state["messages"] + [response]}

def search_node(state: AgentState):
    result = search_tool.run(state["messages"][-1].content)
    return {"messages": state["messages"] + [result]}

# 3. Build the graph
graph = StateGraph(AgentState)
graph.add_node("agent", agent_node)
graph.add_node("search", search_node)

# 4. Add edges
graph.set_entry_point("agent")
graph.add_conditional_edges(
    "agent",
    lambda s: s["next_step"],  # routing function
    {
        "search": "search",    # → go to search node
        "end": END             # → finish
    }
)
graph.add_edge("search", "agent")  # loop back after search

# 5. Compile & run
app = graph.compile()
result = app.invoke({"messages": ["Research quantum computing"], "next_step": ""})
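The conditional edge in step 4 is just a dispatch: the routing function returns a key, and the mapping picks the destination node. That dispatch can be checked without running the graph (using a stand-in string for the END sentinel, purely for illustration):

```python
END = "__end__"  # stand-in for langgraph's END, for illustration only

routes = {"search": "search", "end": END}

def route(state: dict) -> str:
    # Same logic as the lambda passed to add_conditional_edges above
    return routes[state["next_step"]]

route({"next_step": "search"})  # → "search"
route({"next_step": "end"})     # → "__end__"
```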

LangChain vs LangGraph

| Feature           | LangChain        | LangGraph                 |
|-------------------|------------------|---------------------------|
| Structure         | Linear chain     | Graph (nodes + edges)     |
| Loops             | ❌ Not supported | ✅ Built-in               |
| Branching         | Limited          | ✅ Full conditional logic |
| State management  | Basic            | ✅ Rich shared state      |
| Multi-agent       | Difficult        | ✅ Native support         |
| Human-in-the-loop | ❌ Not supported | ✅ Pause & resume         |
| Best for          | Simple pipelines | Complex agent workflows   |

Advanced Features

Human-in-the-Loop

Pause the graph and wait for human approval before continuing:

# Interrupts are set at compile time, not per node — the graph pauses
# before "execute_action" and waits (requires a checkpointer to save state)
app = graph.compile(checkpointer=memory, interrupt_before=["execute_action"])

# Human approves → invoking again with the same thread_id resumes from the pause
app.invoke(None, config={"configurable": {"thread_id": "abc123"}})

Multi-Agent

Multiple agents working together, each as a node:

graph.add_node("researcher_agent", researcher)
graph.add_node("writer_agent", writer)
graph.add_node("critic_agent", critic)
# Researcher → Writer → Critic → (loop if needed) → Done
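One common wiring (a sketch, not the only option) loops the draft between writer and critic until it passes review. Reduced to plain Python with stub agents, so the control flow is visible:

```python
def researcher(state: dict) -> dict:
    # Stub: gather source material into state
    return {**state, "notes": ["fact A", "fact B"]}

def writer(state: dict) -> dict:
    # Stub: produce a new draft from the notes
    draft = " / ".join(state["notes"])
    return {**state, "draft": draft, "revisions": state["revisions"] + 1}

def critic(state: dict) -> dict:
    # Stub: approve after the second revision, otherwise loop back to writer
    return {**state, "approved": state["revisions"] >= 2}

state = {"notes": [], "draft": "", "revisions": 0, "approved": False}
state = researcher(state)
while not state["approved"]:
    state = writer(state)
    state = critic(state)
# Exits with state["revisions"] == 2 and state["approved"] == True
```

In LangGraph each stub becomes a node and the while-loop becomes a conditional edge from the critic back to the writer.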

Persistence & Checkpointing

Save graph state to a database — resume interrupted workflows:

from langgraph.checkpoint.sqlite import SqliteSaver
memory = SqliteSaver.from_conn_string(":memory:")
app = graph.compile(checkpointer=memory)

Real-World Use Cases

| Use Case             | Why LangGraph fits                                   |
|----------------------|------------------------------------------------------|
| AI coding assistant  | Loop: write → test → fix → retest                    |
| Research agent       | Branch: search → evaluate → search more or summarize |
| Customer support bot | Branch by issue type, escalate to human if needed    |
| Data pipeline agent  | Multi-step: fetch → clean → analyze → report         |
| Multi-agent team     | Researcher + Writer + Reviewer agents collaborating  |

The Ecosystem

  • LangChain: the foundation LangGraph builds on
  • LangSmith: observability & debugging for runs
  • LangServe: deploy graphs and chains as APIs

Key Takeaway

LangGraph is LangChain with superpowers: it transforms simple linear AI pipelines into dynamic, stateful workflows that can loop, branch, pause, and coordinate multiple agents, making it the right tool for building production-grade AI systems.
