What is NeoGraph?
NeoGraph is a compiler for LLM pipelines. You write Python functions. NeoGraph infers the graph topology, validates types at assembly time, and compiles to an executable LangGraph StateGraph.
NeoGraph is not a replacement for LangGraph. It is a layer on top of it. The compiled output is a standard LangGraph graph with full access to checkpointing, streaming, and the LangGraph ecosystem.
Philosophy
A function is a node. A parameter name is an edge. An if is a branch.
You define logic, not wiring. The compiler handles TypedDict schemas, add_node, add_edge, START, END, state mapping, and all the plumbing LangGraph requires. You write the parts that are unique to your pipeline.
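The wiring rule is simple enough to sketch in plain Python. The following is a hypothetical illustration of inferring edges from parameter names with `inspect.signature` — not NeoGraph's actual implementation, and the stub functions are stand-ins:

```python
import inspect

def infer_edges(funcs):
    # A parameter whose name matches another function's name is treated
    # as a dependency on that function's output.
    names = {f.__name__ for f in funcs}
    edges = []
    for f in funcs:
        for param in inspect.signature(f).parameters:
            if param in names:
                edges.append((param, f.__name__))
    return edges

# Illustrative stand-ins for the pipeline's nodes.
def decompose(topic): ...
def classify(decompose): ...
def report(classify): ...

print(infer_edges([decompose, classify, report]))
# [('decompose', 'classify'), ('classify', 'report')]
```

Parameters that match no node name (like topic here) would become pipeline inputs rather than edges.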
Three Surfaces, One Compiler
NeoGraph offers three ways to define pipelines. All three compile to the same LangGraph output.
@node — The Default
Decorate Python functions. Parameter names are dependencies. The compiler infers the full topology from function signatures.
```python
from neograph import node, construct_from_module, compile, run
import sys

@node(outputs=Claims, prompt='decompose', model='reason')
def decompose(topic: RawText) -> Claims: ...

@node(outputs=Classified, prompt='classify', model='fast')
def classify(decompose: Claims) -> Classified: ...

@node(outputs=Report)
def report(classify: Classified) -> Report:
    return Report(summary=f"{len(classify.items)} claims processed")

pipeline = construct_from_module(sys.modules[__name__])
graph = compile(pipeline)
result = run(graph, input={'node_id': 'doc-001'})
```

In classify(decompose: Claims), the parameter name decompose matches the upstream node. Rename a function and its downstream dependents break at import time. Fan-in is just more parameters: def report(claims, scores, verified).
Mode is inferred: prompt= + model= means LLM call. Neither means the function body runs as scripted Python.
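That inference rule can be stated as a tiny function. This is a sketch of only the two cases named above; NeoGraph's full rule also covers the agent and act modes, which this illustration omits:

```python
def infer_mode(kwargs):
    # prompt= together with model= implies an LLM-backed node;
    # neither implies the function body runs as scripted Python.
    has_prompt, has_model = "prompt" in kwargs, "model" in kwargs
    if has_prompt and has_model:
        return "llm"
    if not has_prompt and not has_model:
        return "scripted"
    raise ValueError("prompt= and model= must be supplied together")

print(infer_mode({"prompt": "classify", "model": "fast"}))  # llm
print(infer_mode({}))                                       # scripted
```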
ForwardConstruct — Branching
A class-based surface for pipelines with if/else branches. Python control flow compiles to conditional edges.
```python
from neograph import ForwardConstruct, Node, compile

class Analysis(ForwardConstruct):
    check = Node(outputs=CheckResult, prompt='check', model='fast')
    deep = Node(outputs=Result, prompt='deep-analysis', model='reason')
    shallow = Node(outputs=Result, prompt='quick-scan', model='fast')

    def forward(self, topic):
        checked = self.check(topic)
        if checked.confidence > 0.8:
            return self.shallow(checked)
        else:
            return self.deep(checked)

graph = compile(Analysis())
```

The if compiles to a conditional edge. A for loop compiles to fan-out. Your type checker and debugger work. The tracer records which nodes each branch reaches, and the compiler generates the corresponding LangGraph topology.
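In raw LangGraph terms, a conditional edge is a router function that inspects state and returns the next node's name. A hand-written sketch of roughly what the compiled if becomes (the dict-shaped state and the 0.8 threshold mirror the example above; the function name is invented):

```python
def route_after_check(state):
    # What the compiled conditional edge does: look at the upstream
    # node's result in state and name the next node to run.
    return "shallow" if state["check"]["confidence"] > 0.8 else "deep"

print(route_after_check({"check": {"confidence": 0.95}}))  # shallow
print(route_after_check({"check": {"confidence": 0.4}}))   # deep
```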
Node | Modifier — Runtime / LLM-Driven
For runtime construction — an LLM emitting a pipeline via tool calls, a config system defining workflows — use the programmatic API with the | pipe syntax.
```python
from neograph import Node, Construct, Oracle, Each, compile, run

decompose = Node("decompose", mode="think", outputs=Claims,
                 prompt="rw/decompose", model="reason") | Oracle(n=3, merge_fn="merge")
verify = Node("verify", mode="agent", outputs=MatchResult,
              prompt="verify", model="fast") | Each(over="decompose.items", key="label")

pipeline = Construct("dynamic", nodes=[decompose, verify])
graph = compile(pipeline)
```

This surface is fully programmatic. Nodes are data objects, modifiers compose via |, and the whole thing can be built from JSON, YAML, or LLM tool-call output.
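For example, a JSON spec emitted by an LLM tool call could be walked into Node | Each compositions. A self-contained sketch using stand-in dataclasses so it runs without NeoGraph installed — the spec keys and class bodies are invented for illustration:

```python
import json
from dataclasses import dataclass, field

# Minimal stand-ins for NeoGraph's Node and Each, just enough to
# demonstrate building a pipeline from data.
@dataclass
class Each:
    over: str
    key: str

@dataclass
class Node:
    name: str
    mode: str
    modifiers: list = field(default_factory=list)

    def __or__(self, modifier):
        self.modifiers.append(modifier)
        return self

def build_from_spec(raw: str):
    # Turn a JSON spec (e.g. from an LLM tool call) into node objects.
    nodes = []
    for item in json.loads(raw):
        node = Node(item["name"], item["mode"])
        if "each" in item:
            node = node | Each(**item["each"])
        nodes.append(node)
    return nodes

spec = '[{"name": "verify", "mode": "agent", "each": {"over": "decompose.items", "key": "label"}}]'
nodes = build_from_spec(spec)
print(nodes[0].name, nodes[0].modifiers)
```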
Architecture
```
Your code            NeoGraph       LangGraph
-----------          --------       ---------
@node functions   →             →   StateGraph
ForwardConstruct  →  compile()  →   add_node()
Node | Modifier   →             →   add_edge()
                                        ↓
                                 Executable graph
```

All three surfaces produce the same internal representation. compile() infers the state schema from type annotations, wires edges from the topology, expands modifiers (Oracle, Each, Operator), and outputs a compiled LangGraph graph.
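The edge-wiring step is mechanical once dependencies are known. A sketch of how a compiler like this might turn a dependency map into a LangGraph-style edge list, with roots attached to START and leaves to END — an assumption about the internals, not NeoGraph's actual code (the sentinel strings mirror LangGraph's START/END constants):

```python
START, END = "__start__", "__end__"

def wire(deps):
    # deps maps each node to its upstream nodes. Nodes with no upstream
    # attach to START; nodes nothing depends on attach to END.
    edges = []
    has_downstream = set()
    for node, ups in deps.items():
        if not ups:
            edges.append((START, node))
        for up in ups:
            edges.append((up, node))
            has_downstream.add(up)
    for node in deps:
        if node not in has_downstream:
            edges.append((node, END))
    return edges

print(wire({"decompose": [], "classify": ["decompose"], "report": ["classify"]}))
# [('__start__', 'decompose'), ('decompose', 'classify'),
#  ('classify', 'report'), ('report', '__end__')]
```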
The 6 Vocabulary Terms
Node
A typed processing block. Declared with @node (decorator style) or Node(...) (programmatic style).
```python
# Decorator style — mode inferred from kwargs
@node(outputs=Claims, prompt='decompose', model='fast')
def decompose(topic: RawText) -> Claims: ...

# Programmatic style — mode explicit
Node(name="decompose", mode="think", outputs=Claims, model="fast", prompt="decompose")
```

Modes:
- think — single LLM call with structured JSON output, no tools
- agent — ReAct tool loop with tools (read-only exploration)
- act — ReAct tool loop with tools (mutations, side effects)
- scripted — deterministic Python, no LLM (inferred when @node has no prompt=/model=)
Tool
An LLM-callable tool with a per-tool call budget. Used in agent and act mode Nodes.
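Budget enforcement is not detailed here, but one plausible shape is a call-counting wrapper. A sketch of the idea, not NeoGraph's actual mechanism; all names are invented:

```python
class BudgetExceeded(Exception):
    pass

def with_budget(fn, budget):
    # Wrap a tool function so it raises once its call budget is spent.
    calls = 0
    def wrapped(*args, **kwargs):
        nonlocal calls
        if calls >= budget:
            raise BudgetExceeded(f"{fn.__name__}: budget of {budget} calls spent")
        calls += 1
        return fn(*args, **kwargs)
    return wrapped

def search_web(query):
    return f"results for {query}"

search = with_budget(search_web, budget=2)
search("a")
search("b")
# a third call would raise BudgetExceeded
```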
```python
search = Tool("search_web", budget=5)
read = Tool("read_artifact", budget=10, config={"max_chars": 6000})
```

Construct
An ordered composition of Nodes. The pipeline blueprint.
```python
pipeline = Construct("analysis", nodes=[decompose, classify, summarize])
```

With @node, use construct_from_module instead of listing nodes manually:

```python
pipeline = construct_from_module(sys.modules[__name__])
```

Oracle
A modifier that runs a Node N times in parallel and merges the results. Ensemble pattern.
```python
# @node style — keyword arguments
@node(outputs=Claims, prompt='decompose', model='reason', ensemble_n=3, merge_fn='merge_claims')
def decompose() -> Claims: ...

# Programmatic style — pipe operator
Node(...) | Oracle(n=3, merge_prompt="pick-best")
```

Each
A modifier that fans out a Node over a collection. Map-reduce pattern.
```python
# @node style — keyword arguments
@node(outputs=MatchResult, map_over='clusters.groups', map_key='label')
def verify(cluster: ClusterGroup) -> MatchResult: ...

# Programmatic style — pipe operator
Node(...) | Each(over="discover.groups", key="label")
```

Operator
A modifier that pauses the graph for human review. Human-in-the-loop.
```python
# @node style — keyword argument
@node(outputs=ValidationResult,
      interrupt_when=lambda s: {'issues': s.validate.issues} if not s.validate.passed else None)
def validate(claims: Claims) -> ValidationResult: ...

# Programmatic style — pipe operator
Node(...) | Operator(when="low_confidence")
```

Simplest Possible Example
A 3-node pipeline with no LLM — pure Python functions using @node:
```python
import sys
from pydantic import BaseModel
from neograph import node, construct_from_module, compile, run

class Claims(BaseModel):
    items: list[str]

class Classified(BaseModel):
    classified: list[dict[str, str]]

@node(outputs=Claims)
def extract() -> Claims:
    return Claims(items=["The system shall log access", "The system shall validate input"])

@node(outputs=Classified)
def classify(extract: Claims) -> Classified:
    return Classified(classified=[
        {"claim": c, "category": "security"} for c in extract.items
    ])

pipeline = construct_from_module(sys.modules[__name__])
graph = compile(pipeline)
result = run(graph, input={"node_id": "doc-001"})

print(result["classify"].classified)
# [{'claim': 'The system shall log access', 'category': 'security'}, ...]
```

No LLM, no API keys, no wiring. No register_scripted, no Node.scripted(). The @node decorator with no prompt=/model= infers scripted mode. The parameter name extract wires classify after extract automatically.
Documentation © 2025-2026 Constantine Mirin, mirin.pro. Licensed under CC BY-ND 4.0.