Thus, in this article, I’ll dive deeper into LangGraph, one of the available agentic AI frameworks. I’ll use it to develop a simple agentic application, step by step, highlighting the benefits of agentic AI packages. I’ll also cover the pros and cons of using LangGraph and similar agentic frameworks.
I’m not sponsored by LangGraph in any way to create this article. I simply chose the framework because it is one of the most prevalent ones out there. There are many other options, such as:
- LangChain
- LlamaIndex
- CrewAI

Why do you need an agentic framework?
There are numerous packages out there that are supposed to make programming applications easier. In a lot of cases, these packages have the exact opposite effect: they obscure the code, don’t work well in production, and sometimes make debugging harder.
However, you need to find the packages that simplify your application by abstracting away boilerplate code. This principle is often highlighted in the startup world with a quote like the one below:
Focus on solving the exact problem you’re trying to solve. All other (previously solved) problems should be outsourced to other applications.
An agentic framework is required because it abstracts away a lot of complications you do not want to deal with:
- Maintaining state. Not just message history, but all other information you gather, for example, when performing RAG
- Tool usage. You don’t want to set up your own logic for executing tools. Rather, you should simply define them and let the agentic framework handle how to invoke the tools. (This is especially relevant for parallel and async tool calling)
Thus, using an agentic framework abstracts away a lot of complications, so you can focus on the core part of your product.
Basics of LangGraph
To get started implementing LangGraph, I begin by reading the docs, covering:
- Basic chatbot implementation
- Tool usage
- Maintaining and updating the state
LangGraph is, as its name suggests, based on building graphs and executing this graph per request. In a graph, you can define:
- The state (the current information kept in memory)
- Nodes. Typically an LLM call or a tool call, for example, classifying user intent or answering the user’s question
- Edges. Conditional logic that determines which node to visit next
All of which stems from basic graph theory.
Implementing a workflow

I believe one of the best ways of learning is to simply try things out for yourself. Thus, I’ll implement a simple workflow in LangGraph. You can learn about building these workflows in the workflow docs, which are based on Anthropic’s Building effective agents blog post (one of my favorite blog posts about agents, which I’ve covered in several of my earlier articles). I highly recommend reading it.
I’ll make a simple workflow to define an application where a user can:
- Create documents with text
- Delete documents
- Search in documents
To do this, I’ll create the following workflow:
- Detect user intent. Do they want to create a document, delete a document, or search in a document?
- Given the outcome of step 1, I’ll have different flows to handle each of them.
You could also do this by simply defining all the tools and giving the agent access to create/delete/search a document. However, if you want to do more actions depending on intent, doing an intent classification routing step first is the way to go.
Loading imports and LLM
First, I’ll load the required imports and the LLM I’m using. I’ll be using AWS Bedrock, though you can use other providers, as you can see from step 3 in this tutorial.
"""
Make a document handler workflow where a user can
create a new document to the database (currently just a dictionary)
delete a document from the database
ask a question about a document
"""
from typing_extensions import TypedDict, Literal
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.types import Command, interrupt
from langchain_aws import ChatBedrockConverse
from langchain_core.messages import HumanMessage, SystemMessage
from pydantic import BaseModel, Field
from IPython.display import display, Image
from dotenv import load_dotenv
import os
load_dotenv()
aws_access_key_id = os.getenv("AWS_ACCESS_KEY_ID") or ""
aws_secret_access_key = os.getenv("AWS_SECRET_ACCESS_KEY") or ""
os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id
os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key
llm = ChatBedrockConverse(
model_id="us.anthropic.claude-3-5-haiku-20241022-v1:0", # this is the model id (added us. before id in platform)
region_name="us-east-1",
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
)
document_database: dict[str, str] = {} # a dictionary with key: filename, value: text in document
I also defined the database as a dictionary of files. In production, you would naturally use a proper database; however, I’ve simplified it for this tutorial.
Defining the graph
Next, it’s time to define the graph. I first create the Router object, which will classify the user’s prompt into one of three intents:
- add_document
- delete_document
- ask_document
```python
# Define the state
class State(TypedDict):
    input: str
    decision: str | None
    output: str | None

# Schema for structured output to use as routing logic
class Route(BaseModel):
    step: Literal["add_document", "delete_document", "ask_document"] = Field(
        description="The next step in the routing process"
    )

# Augment the LLM with the schema for structured output
router = llm.with_structured_output(Route)

def llm_call_router(state: State):
    """Route the user input to the appropriate node"""
    # Run the augmented LLM with structured output to serve as routing logic
    decision = router.invoke(
        [
            SystemMessage(
                content="""Route the user input to one of the following 3 intents:
- 'add_document'
- 'delete_document'
- 'ask_document'
You only need to return the intent, not any other text.
"""
            ),
            HumanMessage(content=state["input"]),
        ]
    )
    return {"decision": decision.step}

# Conditional edge function to route to the appropriate node
def route_decision(state: State):
    # Return the name of the node to visit next
    if state["decision"] == "add_document":
        return "add_document_to_database_tool"
    elif state["decision"] == "delete_document":
        return "delete_document_from_database_tool"
    elif state["decision"] == "ask_document":
        return "ask_document_tool"
```
I define the state, which stores the user input, the router’s decision (one of the three intents), and the output. The structured output ensures the model responds with exactly one of the three intents.
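A nice property of conditional-edge functions is that they are plain Python, so you can sanity-check the routing without any LLM call. Below is a stand-alone copy of the same logic, purely for illustration:

```python
# Stand-alone copy of the conditional-edge logic for unit testing;
# no LLM or graph is needed to exercise it.
def route_decision(state: dict) -> str:
    if state["decision"] == "add_document":
        return "add_document_to_database_tool"
    elif state["decision"] == "delete_document":
        return "delete_document_from_database_tool"
    return "ask_document_tool"

print(route_decision({"decision": "delete_document"}))
# -> delete_document_from_database_tool
```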
Continuing, I’ll define the tools we are using in this article, one for each of the intents.
```python
# Nodes
def add_document_to_database_tool(state: State):
    """Add a document to the database. Given the user query, extract the filename and content for the document. If not provided, the document will not be added to the database."""
    user_query = state["input"]
    # Extract filename and content from the user query
    filename_prompt = f"Given the following user query, extract the filename for the document: {user_query}. Only return the filename, not any other text."
    output = llm.invoke(filename_prompt)
    filename = output.content
    content_prompt = f"Given the following user query, extract the content for the document: {user_query}. Only return the content, not any other text."
    output = llm.invoke(content_prompt)
    content = output.content
    # Add the document to the database
    document_database[filename] = content
    return {"output": f"Document {filename} added to database"}

def delete_document_from_database_tool(state: State):
    """Delete a document from the database. Given the user query, extract the filename of the document to delete. If not provided, the document will not be deleted from the database."""
    user_query = state["input"]
    # Extract the filename from the user query
    filename_prompt = f"Given the following user query, extract the filename of the document to delete: {user_query}. Only return the filename, not any other text."
    output = llm.invoke(filename_prompt)
    filename = output.content
    # Delete the document from the database if it exists; if not, return info about the failure
    if filename not in document_database:
        return {"output": f"Document {filename} not found in database"}
    document_database.pop(filename)
    return {"output": f"Document {filename} deleted from database"}

def ask_document_tool(state: State):
    """Ask a question about a document. Given the user query, extract the filename and the question for the document. If not provided, the question will not be asked."""
    user_query = state["input"]
    # Extract the filename and question from the user query
    filename_prompt = f"Given the following user query, extract the filename of the document to ask a question about: {user_query}. Only return the filename, not any other text."
    output = llm.invoke(filename_prompt)
    filename = output.content
    question_prompt = f"Given the following user query, extract the question to ask about the document: {user_query}. Only return the question, not any other text."
    output = llm.invoke(question_prompt)
    question = output.content
    # Ask the question about the document
    if filename not in document_database:
        return {"output": f"Document {filename} not found in database"}
    result = llm.invoke(f"Document: {document_database[filename]}\n\nQuestion: {question}")
    return {"output": f"Document query result: {result.content}"}
```
And finally, we build the graph with nodes and edges:
```python
# Build the workflow
router_builder = StateGraph(State)

# Add nodes
router_builder.add_node("add_document_to_database_tool", add_document_to_database_tool)
router_builder.add_node("delete_document_from_database_tool", delete_document_from_database_tool)
router_builder.add_node("ask_document_tool", ask_document_tool)
router_builder.add_node("llm_call_router", llm_call_router)

# Add edges to connect nodes
router_builder.add_edge(START, "llm_call_router")
router_builder.add_conditional_edges(
    "llm_call_router",
    route_decision,
    {  # Name returned by route_decision : name of the next node to visit
        "add_document_to_database_tool": "add_document_to_database_tool",
        "delete_document_from_database_tool": "delete_document_from_database_tool",
        "ask_document_tool": "ask_document_tool",
    },
)
router_builder.add_edge("add_document_to_database_tool", END)
router_builder.add_edge("delete_document_from_database_tool", END)
router_builder.add_edge("ask_document_tool", END)

# Compile the workflow with an in-memory checkpointer
memory = InMemorySaver()
router_workflow = router_builder.compile(checkpointer=memory)
config = {"configurable": {"thread_id": "1"}}

# Show the workflow
display(Image(router_workflow.get_graph().draw_mermaid_png()))
```
The last display function should show the graph as you see below:

Now you can try out the workflow by asking a question per intent.
Add a document:
user_input = "Add the document 'test.txt' with content 'This is a test document' to the database"
state = router_workflow.invoke({"input": user_input}, config)
print(state["output"]
# -> Document test.txt added to database
Search a document:
user_input = "Give me a summary of the document 'test.txt'"
state = router_workflow.invoke({"input": user_input}, config)
print(state["output"])
# -> A brief, generic test document with a simple descriptive sentence.
Delete a document:
user_input = "Delete the document 'test.txt' from the database"
state = router_workflow.invoke({"input": user_input}, config)
print(state["output"])
# -> Document test.txt deleted from database
Great! You can see the workflow is working with the different routing options. Feel free to add more intents or more nodes per intent to create a more complex workflow.
Stronger agentic use cases
The difference between agentic workflows and fully agentic applications is sometimes confusing. However, to separate the two terms, I’ll use the quote below from Anthropic’s Building effective agents:
Workflows are systems where LLMs and tools are orchestrated through predefined code paths. Agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.
Most challenges you solve with LLMs will use the workflow pattern, because most problems (from my experience) are pre-defined, and should have a pre-determined set of guardrails to follow. In the example above, when adding/deleting/searching documents, you should absolutely set up a pre-determined workflow by defining the intent classifier and what to do given each intent.
However, sometimes, you also want more autonomous agentic use cases. Imagine, for example, Cursor, where they want a coding agent that can search through your code, check the newest documentation online, and modify your code. In these instances, it’s difficult to create pre-determined workflows because there are so many different scenarios that can occur.
If you want to create more autonomous agentic systems, you can read more about that here.
LangGraph pros and cons
Pros
My three main positives about LangGraph are:
- Easy to set up
- Open-source
- Simplifies your code
It was simple to set up LangGraph and get it working quickly, especially when following their documentation, or feeding the documentation to Cursor and prompting it to implement specific workflows.
Furthermore, the code for LangGraph is open-source, meaning you can keep running the code, no matter what happens to the company behind it or changes they decide to make. I think this is crucial if you want to deploy it to production. Lastly, LangGraph also simplifies a lot of the code and abstracts away a lot of logic you would’ve had to write in Python yourself.
Cons
However, there are also some downsides to LangGraph that I’ve noticed during implementation.
- Still a surprising amount of boilerplate code
- You will encounter LangGraph-specific errors
When implementing my own custom workflow, I felt I still had to add a lot of boilerplate code. Though the amount of code was definitely less than if I’d implemented everything from scratch, I found myself surprised by how much code I had to add to create a relatively simple workflow. However, I think part of this is that LangGraph attempts to position itself as a lower-level tool than, for example, a lot of the functionality you find in LangChain (which I think is good, because LangChain, in my opinion, abstracts away too much, making it harder to debug your code).
Furthermore, as with many externally installed packages, you will encounter LangGraph-specific issues when implementing the package. For example, when I wanted to preview the graph of the workflow I created, I got an issue relating to the draw_mermaid_png function. Encountering such errors is inevitable when using external packages, and it will always be a trade-off between the helpful code abstractions a package gives you, versus the different kinds of bugs you may face using such packages.
Summary
All in all, I find LangGraph a helpful package when dealing with agentic systems. Setting up my desired workflow by first doing intent classification and proceeding with different flows depending on intent was relatively simple. Furthermore, I think LangGraph found a good middle ground between not abstracting away all logic (obscuring the code, making it harder to debug) and actually abstracting away challenges I don’t want to deal with when developing my agentic system. There are both positives and negatives to implementing such agentic frameworks, and I think the best way to make this trade-off is by implementing simple workflows yourself.
👉 My free eBook and Webinar:
📚 Get my free Vision Language Models ebook
💻 My webinar on Vision Language Models
👉 Find me on socials:
🧑💻 Get in touch
✍️ Medium