
NeoGraph vs LangGraph

NeoGraph compiles down to LangGraph. The runtime is identical — same StateGraph, same Send, same interrupt, same checkpointer. The difference is how much wiring you write to get there.

These comparisons use the same task in both approaches. Focus on the contrast between declaring the logic (NeoGraph) and wiring the plumbing (LangGraph).

| You don’t write | Because NeoGraph |
| --- | --- |
| TypedDict state schema | Infers the schema from your node output types |
| `add_node` / `add_edge` calls | Reads dependencies from parameter names |
| State dict reads and writes | Passes typed values directly to your functions |
| Router functions for branches | Compiles Python `if` to conditional edges |
| Fan-out scaffolding with `Send` | Compiles `@node(map_over=...)` to the same IR |
| Oracle ensemble plumbing | Compiles `@node(ensemble_n=3, merge_fn=...)` to fan-out + barrier + merge |
| Condition registries for interrupts | Accepts inline lambdas via `interrupt_when=` |
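The ensemble row is the least self-explanatory, so here is a hypothetical sketch of what `ensemble_n` plus `merge_fn` desugar to conceptually. The names (`run_ensemble`, `classify_stub`, `majority`) are illustrative, not NeoGraph API:

```python
# Hypothetical sketch: run the node N times (fan-out), collect the results
# at a barrier, then merge them with merge_fn.
def run_ensemble(node_fn, n, merge_fn, *args):
    results = [node_fn(*args) for _ in range(n)]  # fan-out: N independent runs
    return merge_fn(results)                      # barrier + merge

def classify_stub(text):   # deterministic stand-in for an LLM node
    return "positive"

def majority(votes):       # a typical merge_fn: majority vote
    return max(set(votes), key=votes.count)

label = run_ensemble(classify_stub, 3, majority, "great product")
# label == "positive"
```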
```python
# LangGraph
from typing import TypedDict

from langgraph.graph import END, START, StateGraph

class PipelineState(TypedDict):
    topic: str
    claims: Claims | None
    classified: ClassifiedClaims | None

def decompose(state: PipelineState):
    result = llm.with_structured_output(Claims).invoke(
        f"Break this into claims: {state['topic']}")
    return {"claims": result}

def classify(state: PipelineState):
    result = llm.with_structured_output(ClassifiedClaims).invoke(
        f"Classify: {state['claims']}")
    return {"classified": result}

graph = StateGraph(PipelineState)
graph.add_node("decompose", decompose)
graph.add_node("classify", classify)
graph.add_edge(START, "decompose")
graph.add_edge("decompose", "classify")
graph.add_edge("classify", END)
app = graph.compile()
```
```python
# NeoGraph
import sys
from typing import Annotated

from neograph import FromInput, compile, construct_from_module, node

@node(outputs=Claims, prompt='decompose', model='fast')
def decompose(topic: Annotated[str, FromInput]) -> Claims: ...

@node(outputs=ClassifiedClaims, prompt='classify', model='fast')
def classify(decompose: Claims) -> ClassifiedClaims: ...

graph = compile(construct_from_module(sys.modules[__name__]))
```

The parameter `decompose` in `classify(decompose: Claims)` IS the edge. No state dict, no `add_node`, no `add_edge`.
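To make the idea concrete, here is a minimal sketch (not NeoGraph's implementation) of how edges can be inferred from parameter names alone: any parameter whose name matches another node becomes an incoming edge.

```python
import inspect

# A parameter named after another node is an incoming edge.
def infer_edges(nodes):
    names = {fn.__name__ for fn in nodes}
    edges = []
    for fn in nodes:
        for param in inspect.signature(fn).parameters:
            if param in names:
                edges.append((param, fn.__name__))  # upstream -> downstream
    return edges

def decompose(topic): ...
def classify(decompose): ...

edges = infer_edges([decompose, classify])
# edges == [("decompose", "classify")]
```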

```python
# LangGraph
from langgraph.graph import END, START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode

def agent(state: MessagesState):
    return {"messages": [llm.bind_tools(tools).invoke(state["messages"])]}

def should_continue(state: MessagesState):
    last = state["messages"][-1]
    return "tools" if last.tool_calls else END

graph = StateGraph(MessagesState)
graph.add_node("agent", agent)
graph.add_node("tools", ToolNode(tools))
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_continue)
graph.add_edge("tools", "agent")
app = graph.compile()
```
```python
# NeoGraph
@node(outputs=Answer, mode='agent', prompt='research', model='reason',
      tools=[Tool('search_web', budget=5)])
def researcher() -> Answer: ...

graph = compile(construct_from_module(sys.modules[__name__]))
```

`mode='agent'` is the ReAct tool loop. Budgets are enforced per tool. The LLM config, tool wiring, and termination logic are handled by the framework.
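For intuition, here is a hedged sketch of per-tool budget enforcement in a ReAct-style loop. It is illustrative only: NeoGraph compiles `mode='agent'` to LangGraph's tool loop, and the names here (`run_agent`, `plan_step`) are hypothetical.

```python
# ReAct-style loop with a per-tool call budget.
def run_agent(plan_step, tools, budgets, max_turns=20):
    remaining = dict(budgets)
    history = []
    for _ in range(max_turns):
        action = plan_step(history)  # the LLM decides: call a tool or finish
        if action["type"] == "final":
            return action["answer"]
        name = action["tool"]
        if remaining.get(name, 0) <= 0:
            history.append((name, "budget exhausted"))
            continue
        remaining[name] -= 1
        history.append((name, tools[name](action["input"])))
    raise RuntimeError("agent did not terminate within max_turns")

# Deterministic stand-in for the LLM: one search, then a final answer.
calls = iter([
    {"type": "tool", "tool": "search_web", "input": "q"},
    {"type": "final", "answer": "done"},
])
answer = run_agent(lambda history: next(calls),
                   tools={"search_web": lambda q: "hit"},
                   budgets={"search_web": 5})
# answer == "done"
```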

```python
# LangGraph
from langgraph.constants import Send
from langgraph.graph import END, START, StateGraph

def fan_out(state):
    return [Send("process", {"item": item}) for item in state["items"]]

def process(state):
    result = llm.invoke(f"Process {state['item']}")
    return {"results": [result]}

def reduce(state):
    return {"summary": summarize(state["results"])}

graph = StateGraph(State)
graph.add_node("process", process)
graph.add_node("reduce", reduce)
graph.add_conditional_edges(START, fan_out, ["process"])
graph.add_edge("process", "reduce")
graph.add_edge("reduce", END)
app = graph.compile()
```
```python
# NeoGraph
@node(outputs=Items)
def source() -> Items: ...

@node(outputs=ProcessResult, map_over='source.items', map_key='id')
def process(item: Item) -> ProcessResult: ...

@node(outputs=Summary)
def reduce(process: dict[str, ProcessResult]) -> Summary: ...

graph = compile(construct_from_module(sys.modules[__name__]))
```

`map_over='source.items'` compiles to `Send` fan-out. Results collect as `dict[str, ProcessResult]`, keyed by `map_key`. The validator checks that `process: dict[str, ProcessResult]` is the correct downstream shape.
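The collection shape is easy to picture as plain Python (field names here are illustrative): fan-out results land in a dict keyed by the `map_key` field of each item.

```python
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    text: str

def process_item(item):  # stands in for the @node-decorated process()
    return item.text.upper()

items = [Item("a", "one"), Item("b", "two")]
# One result per fanned-out item, keyed by the map_key field ("id").
results = {item.id: process_item(item) for item in items}
# results == {"a": "ONE", "b": "TWO"}
```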

```python
# LangGraph
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph
from langgraph.types import interrupt

def validate(state):
    result = check(state["claims"])
    if not result.passed:
        interrupt({"issues": result.issues})
    return {"validation": result}

graph = StateGraph(State)
graph.add_node("validate", validate)
graph.add_edge(START, "validate")
graph.add_edge("validate", END)
app = graph.compile(checkpointer=MemorySaver())
```
```python
# NeoGraph
@node(outputs=ValidationResult,
      interrupt_when=lambda s: {'issues': s.validate.issues}
                     if not s.validate.passed else None)
def validate(claims: Claims) -> ValidationResult: ...

graph = compile(pipeline, checkpointer=MemorySaver())
```

The interrupt condition is co-located with the node it guards. No manual `interrupt()` call inside the node body, no condition registry.
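The contract of the predicate itself is simple. Here is a hypothetical sketch (not NeoGraph internals) of how an `interrupt_when` lambda is evaluated: it returns a payload to pause with, or `None` to let execution continue.

```python
from types import SimpleNamespace

def maybe_interrupt(interrupt_when, state):
    payload = interrupt_when(state)
    if payload is not None:
        return "interrupted", payload  # the runtime would call interrupt(payload)
    return "continue", None

# The same lambda as in the example above, against a stubbed state.
cond = lambda s: {"issues": s.validate.issues} if not s.validate.passed else None
state = SimpleNamespace(validate=SimpleNamespace(passed=False, issues=["bad claim"]))
status, payload = maybe_interrupt(cond, state)
# status == "interrupted"; payload == {"issues": ["bad claim"]}
```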

```python
# LangGraph

# Build the inner graph
inner = StateGraph(InnerState)
inner.add_node("lookup", lookup_fn)
inner.add_node("score", score_fn)
inner.add_edge(START, "lookup")
inner.add_edge("lookup", "score")
inner.add_edge("score", END)
inner_compiled = inner.compile()

# Wrap it in the outer graph
def call_inner(state):
    result = inner_compiled.invoke({"claims": state["claims"]})
    return {"scored": result["score"]}

outer = StateGraph(OuterState)
outer.add_node("decompose", decompose)
outer.add_node("enrich", call_inner)
outer.add_edge(START, "decompose")
outer.add_edge("decompose", "enrich")
outer.add_edge("enrich", END)
app = outer.compile()
```
```python
# NeoGraph

# The sub-construct gets its own typed boundary
enrich = Construct(
    "enrich",
    input=Claims,
    output=ScoredClaims,
    nodes=[lookup, score],
)

@node(outputs=Claims, prompt='decompose', model='reason')
def decompose() -> Claims: ...

# Top-level @node + sub-construct compose into one pipeline
pipeline = Construct("main", nodes=[decompose, enrich])
graph = compile(pipeline)
```

Sub-constructs are the one place NeoGraph keeps the declarative form — the typed I/O boundary (`input=Claims`, `output=ScoredClaims`) is meaningful metadata that doesn’t map cleanly to function signatures.
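What a typed boundary buys you can be sketched in a few lines. This is a hypothetical illustration, not NeoGraph's validator: the compiler can check that the producer's output type satisfies the sub-construct's declared input type before anything runs.

```python
class Claims: ...
class ScoredClaims: ...

def validate_boundary(producer_type, consumer_type):
    # Reject the wiring at compile time if the types don't line up.
    if not issubclass(producer_type, consumer_type):
        raise TypeError(
            f"expected {consumer_type.__name__}, got {producer_type.__name__}")

validate_boundary(Claims, Claims)  # ok: decompose produces what enrich expects
```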

NeoGraph adds value when your pipeline is typed, structured, and modular. It adds less when:

  • You need LangGraph features NeoGraph hasn’t wrapped yet (custom reducers beyond what Each/Oracle cover, advanced streaming modes)
  • Your pipeline is one-off glue code where the plumbing IS the logic
  • You’re building a library that runs arbitrary user-provided graphs

In those cases, use LangGraph directly. NeoGraph is a standard LangGraph graph underneath — you can always drop down to the raw API. The `Node(mode='raw', ...)` escape hatch exists exactly for this: a function with the standard `(state, config) -> dict` signature runs inside an otherwise `@node`-decorated pipeline.

NeoGraph compiles to LangGraph, so you never lose access to the LangGraph ecosystem. What you gain is a shorter path from “I know what I want the pipeline to do” to “the graph exists and runs correctly”. The compiler handles the mechanical wiring; you write the logic.


Documentation © 2025-2026 Constantine Mirin, mirin.pro. Licensed under CC BY-ND 4.0.