# NeoGraph vs LangGraph
NeoGraph compiles down to LangGraph. The runtime is identical — same StateGraph, same Send, same interrupt, same checkpointer. The difference is how much wiring you write to get there.
These comparisons use the same task in both approaches. Focus on the contrast between declaring the logic (NeoGraph) and wiring the plumbing (LangGraph).
## What NeoGraph eliminates

| You don’t write | Because NeoGraph |
|---|---|
| TypedDict state schema | Infers the schema from your node output types |
| `add_node` / `add_edge` calls | Reads dependencies from parameter names |
| State dict reads and writes | Passes typed values directly to your functions |
| Router functions for branches | Compiles Python `if` to conditional edges |
| Fan-out scaffolding with `Send` | Compiles `@node(map_over=...)` to the same IR |
| Oracle ensemble plumbing | Compiles `@node(ensemble_n=3, merge_fn=...)` to fan-out + barrier + merge |
| Condition registries for interrupts | Accepts inline lambdas via `interrupt_when=` |
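The branch row deserves a concrete look, since no pattern below demonstrates it. Here is a minimal sketch, in plain Python, of the router boilerplate a conditional edge needs; the `route_after_score` name and the `score` key are hypothetical, not NeoGraph or LangGraph API:

```python
# The hand-written router a LangGraph conditional edge requires -- the kind
# of function NeoGraph compiles away from an inline `if` in the node body.
# `state` is a hypothetical dict with a "score" key.

def route_after_score(state: dict) -> str:
    # In NeoGraph this decision would be a plain `if` inside the node.
    return "publish" if state["score"] > 0.5 else "revise"

print(route_after_score({"score": 0.9}))  # publish
print(route_after_score({"score": 0.2}))  # revise
```

In LangGraph you would register this function with `add_conditional_edges`; NeoGraph generates the equivalent routing from the node's own control flow.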
## Pattern 1: Sequential pipeline

### LangGraph

```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph

class PipelineState(TypedDict):
    topic: str
    claims: Claims | None
    classified: ClassifiedClaims | None

def decompose(state: PipelineState):
    result = llm.with_structured_output(Claims).invoke(
        f"Break this into claims: {state['topic']}")
    return {"claims": result}

def classify(state: PipelineState):
    result = llm.with_structured_output(ClassifiedClaims).invoke(
        f"Classify: {state['claims']}")
    return {"classified": result}

graph = StateGraph(PipelineState)
graph.add_node("decompose", decompose)
graph.add_node("classify", classify)
graph.add_edge(START, "decompose")
graph.add_edge("decompose", "classify")
graph.add_edge("classify", END)
app = graph.compile()
```

### NeoGraph

```python
import sys
from typing import Annotated

from neograph import FromInput, compile, construct_from_module, node

@node(outputs=Claims, prompt='decompose', model='fast')
def decompose(topic: Annotated[str, FromInput]) -> Claims: ...

@node(outputs=ClassifiedClaims, prompt='classify', model='fast')
def classify(decompose: Claims) -> ClassifiedClaims: ...

graph = compile(construct_from_module(sys.modules[__name__]))
```

The parameter `decompose` in `classify(decompose: Claims)` IS the edge. No state dict, no `add_node`, no `add_edge`.
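The parameter-name convention is simple enough to sketch. The following is illustrative plain Python, not NeoGraph's actual implementation; `infer_edges` is a hypothetical helper:

```python
import inspect

def decompose(topic: str): ...
def classify(decompose): ...  # parameter named "decompose" declares the edge

def infer_edges(nodes):
    # A parameter whose name matches another node's name is a dependency edge.
    node_names = {fn.__name__ for fn in nodes}
    edges = []
    for fn in nodes:
        for param in inspect.signature(fn).parameters:
            if param in node_names:
                edges.append((param, fn.__name__))  # (upstream, downstream)
    return edges

print(infer_edges([decompose, classify]))  # [('decompose', 'classify')]
```

This is why renaming a parameter rewires the graph: the signature is the wiring.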
## Pattern 2: Tool-calling agent

### LangGraph

```python
from langgraph.graph import END, START, StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

def agent(state: MessagesState):
    return {"messages": [llm.bind_tools(tools).invoke(state["messages"])]}

def should_continue(state: MessagesState):
    last = state["messages"][-1]
    return "tools" if last.tool_calls else END

graph = StateGraph(MessagesState)
graph.add_node("agent", agent)
graph.add_node("tools", ToolNode(tools))
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_continue)
graph.add_edge("tools", "agent")
app = graph.compile()
```

### NeoGraph

```python
@node(outputs=Answer, mode='agent', prompt='research', model='reason',
      tools=[Tool('search_web', budget=5)])
def researcher() -> Answer: ...

graph = compile(construct_from_module(sys.modules[__name__]))
```

`mode='agent'` is the ReAct tool loop. Budgets are enforced per tool. The LLM config, tool wiring, and termination logic are handled by the framework.
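Per-tool budgets amount to a counter that gates each call. A minimal sketch in plain Python, not NeoGraph's actual runtime; `ToolBudget` is a hypothetical name:

```python
# Illustrative per-tool budget enforcement: each call decrements the tool's
# budget, and a tool with nothing left raises instead of running.

class ToolBudget:
    def __init__(self, budgets: dict[str, int]):
        self.remaining = dict(budgets)

    def call(self, name: str, tool_fn, *args):
        if self.remaining.get(name, 0) <= 0:
            raise RuntimeError(f"budget exhausted for tool {name!r}")
        self.remaining[name] -= 1
        return tool_fn(*args)

budget = ToolBudget({"search_web": 2})
search = lambda q: f"results for {q}"

budget.call("search_web", search, "query one")  # ok
budget.call("search_web", search, "query two")  # ok, budget now 0
# A third call would raise RuntimeError: budget exhausted for tool 'search_web'
```

Exhausting a budget is one of the loop's termination signals, alongside the model returning a final structured answer.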
## Pattern 3: Map-reduce fan-out

### LangGraph

```python
from langgraph.constants import Send

def fan_out(state):
    return [Send("process", {"item": item}) for item in state["items"]]

def process(state):
    result = llm.invoke(f"Process {state['item']}")
    return {"results": [result]}

def reduce(state):
    return {"summary": summarize(state["results"])}

graph = StateGraph(State)
graph.add_node("process", process)
graph.add_node("reduce", reduce)
graph.add_conditional_edges(START, fan_out, ["process"])
graph.add_edge("process", "reduce")
graph.add_edge("reduce", END)
app = graph.compile()
```

### NeoGraph

```python
@node(outputs=Items)
def source() -> Items: ...

@node(outputs=ProcessResult, map_over='source.items', map_key='id')
def process(item: Item) -> ProcessResult: ...

@node(outputs=Summary)
def reduce(process: dict[str, ProcessResult]) -> Summary: ...

graph = compile(construct_from_module(sys.modules[__name__]))
```

`map_over='source.items'` compiles to `Send` fan-out. Results collect as `dict[str, ProcessResult]` keyed by `map_key`. The validator checks that `process: dict[str, ProcessResult]` is the correct downstream shape.
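The fan-out/collect contract can be stated in a few lines of plain Python (illustrative only, not the compiled `Send` machinery; `fan_out_collect` is a hypothetical helper):

```python
from dataclasses import dataclass

# Each item is processed independently; results are collected into a dict
# keyed by the item's `id` field -- the role map_key plays above.

@dataclass
class Item:
    id: str
    text: str

def process_one(item: Item) -> str:
    return item.text.upper()  # stand-in for the per-item LLM call

def fan_out_collect(items: list[Item], fn) -> dict[str, str]:
    return {item.id: fn(item) for item in items}

results = fan_out_collect([Item("a", "alpha"), Item("b", "beta")], process_one)
print(results)  # {'a': 'ALPHA', 'b': 'BETA'}
```

The dict keying is what makes the downstream `reduce` deterministic even when branches finish out of order.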
## Pattern 4: Human-in-the-loop

### LangGraph

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt

def validate(state):
    result = check(state["claims"])
    if not result.passed:
        interrupt({"issues": result.issues})
    return {"validation": result}

graph = StateGraph(State)
graph.add_node("validate", validate)
graph.add_edge(START, "validate")
graph.add_edge("validate", END)
app = graph.compile(checkpointer=MemorySaver())
```

### NeoGraph

```python
@node(outputs=ValidationResult,
      interrupt_when=lambda s: {'issues': s.validate.issues} if not s.validate.passed else None)
def validate(claims: Claims) -> ValidationResult: ...

graph = compile(pipeline, checkpointer=MemorySaver())
```

The interrupt condition is co-located with the node it guards. No manual `interrupt()` call inside the node body, no condition registry.
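The predicate's contract is easy to see in isolation: a dict return pauses the run with that payload, `None` lets it continue. A sketch with stand-in state objects (illustrative, not NeoGraph's runtime):

```python
from types import SimpleNamespace

# The same inline predicate as above, evaluated against two fake states.
interrupt_when = (
    lambda s: {"issues": s.validate.issues} if not s.validate.passed else None
)

failed = SimpleNamespace(
    validate=SimpleNamespace(passed=False, issues=["claim 2 unsupported"]))
passed = SimpleNamespace(
    validate=SimpleNamespace(passed=True, issues=[]))

print(interrupt_when(failed))  # {'issues': ['claim 2 unsupported']}
print(interrupt_when(passed))  # None
```

Because the predicate is a plain function of the state, it can be unit-tested without running the graph at all.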
## Pattern 5: Subgraph composition

### LangGraph

```python
# Build the inner graph
inner = StateGraph(InnerState)
inner.add_node("lookup", lookup_fn)
inner.add_node("score", score_fn)
inner.add_edge(START, "lookup")
inner.add_edge("lookup", "score")
inner.add_edge("score", END)
inner_compiled = inner.compile()

# Wrap it in the outer graph
def call_inner(state):
    result = inner_compiled.invoke({"claims": state["claims"]})
    return {"scored": result["score"]}

outer = StateGraph(OuterState)
outer.add_node("decompose", decompose)
outer.add_node("enrich", call_inner)
outer.add_edge(START, "decompose")
outer.add_edge("decompose", "enrich")
outer.add_edge("enrich", END)
app = outer.compile()
```

### NeoGraph

```python
# The sub-construct gets its own typed boundary
enrich = Construct(
    "enrich",
    input=Claims,
    output=ScoredClaims,
    nodes=[lookup, score],
)

@node(outputs=Claims, prompt='decompose', model='reason')
def decompose() -> Claims: ...

# Top-level @node + sub-construct compose into one pipeline
pipeline = Construct("main", nodes=[decompose, enrich])
graph = compile(pipeline)
```

Sub-constructs are the one place NeoGraph keeps the declarative form: the typed I/O boundary (`input=Claims`, `output=ScoredClaims`) is meaningful metadata that doesn’t map cleanly to function signatures.
## When NeoGraph doesn’t help

NeoGraph adds value when your pipeline is typed, structured, and modular. It adds less when:

- You need LangGraph features NeoGraph hasn’t wrapped yet (custom reducers beyond what `Each`/`Oracle` cover, advanced streaming modes)
- Your pipeline is one-off glue code where the plumbing IS the logic
- You’re building a library that runs arbitrary user-provided graphs
In those cases, use LangGraph directly. NeoGraph is a standard LangGraph graph underneath; you can always drop down to the raw API. The `Node(mode='raw', ...)` escape hatch exists exactly for this: a function with the standard `(state, config) -> dict` signature runs inside an otherwise `@node`-decorated pipeline.
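The `(state, config) -> dict` shape is the ordinary LangGraph node callable. A trivial sketch, with a hypothetical function name and state key:

```python
# A raw-mode node body: reads the raw state dict directly, returns a partial
# state update -- no typed wrapper, no schema inference.

def word_count(state: dict, config: dict) -> dict:
    return {"word_count": len(state["draft"].split())}

print(word_count({"draft": "three short words"}, {}))  # {'word_count': 3}
```

Anything that works as a LangGraph node works here unchanged.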
## Summary

NeoGraph compiles to LangGraph, so you never lose access to the LangGraph ecosystem. What you gain is a shorter path from “I know what I want the pipeline to do” to “the graph exists and runs correctly”. The compiler handles the mechanical wiring; you write the logic.
Documentation © 2025-2026 Constantine Mirin, mirin.pro. Licensed under CC BY-ND 4.0.