# API Reference
NeoGraph exposes three API surfaces that compile to the same IR:
- `@node` API — the primary surface for human-written pipelines
- `ForwardConstruct` — class-based with Python control flow
- Programmatic / IR — `Node` + `Construct` + `|` pipe for runtime construction
All three are importable from `neograph`:
```python
from neograph import (
    # Primary @node API
    node, construct_from_module, construct_from_functions,
    FromInput, FromConfig, merge_fn,
    # ForwardConstruct
    ForwardConstruct,
    # Programmatic IR
    Node, Construct, Oracle, Each, Operator,
    # Tools
    Tool, tool, ToolInteraction,
    # BAML rendering
    describe_type, describe_value,
    # Prompt inspection
    render_prompt,
    # Renderers (opt-in alternatives to BAML)
    Renderer, XmlRenderer, DelimitedRenderer, JsonRenderer,
    # Shared
    compile, run, configure_llm,
    # Errors
    NeographError, ConstructError, CompileError,
    ConfigurationError, ExecutionError,
    # Registries (for runtime construction)
    register_scripted, register_condition, register_tool_factory,
    # Spec loader
    load_spec, register_type, lookup_type,
)
```

## @node API

### @node(...)
Section titled “@node(...)”Decorator that turns a function into a Node with signature-inferred dependencies.
```python
@node(
    outputs: type[BaseModel] | dict[str, type],
    *,
    mode: Literal["scripted", "think", "agent", "act", "raw"] | None = None,
    prompt: str | None = None,
    model: str | None = None,
    tools: list[Tool] | None = None,
    llm_config: dict | None = None,
    name: str | None = None,
    # Modifier kwargs
    map_over: str | None = None,
    map_key: str | None = None,
    ensemble_n: int | None = None,
    models: list[str] | None = None,
    merge_fn: str | None = None,
    merge_prompt: str | None = None,
    interrupt_when: str | Callable | None = None,
    loop_when: str | Callable | None = None,
    max_iterations: int | None = None,
    on_exhaust: Literal["error", "last"] | None = None,
)
```

- `outputs` — Pydantic model or `dict[str, type]` the node produces. Required for all modes except `raw`. Dict form enables multi-output (e.g. `{"result": Claims, "tool_log": list[ToolInteraction]}`).
- `mode` — Execution mode. Inferred from kwargs if not set: `prompt=` + `model=` → `think`; neither → `scripted`.
- `prompt`, `model` — Required for LLM modes (`think`, `agent`, `act`). Validated at decoration time.
- `tools` — List of `Tool` instances for `agent`/`act` modes.
- `llm_config` — Per-node LLM settings (temperature, max_tokens, output_strategy, etc.).
- `name` — Override the node name. Default: function name with `_` → `-`.
- `inputs` — Explicit `dict[str, type]` input spec. Usually inferred from param annotations.
- `map_over` / `map_key` — Fan-out: run this node once per item in the referenced collection.
- `ensemble_n` / `merge_fn` / `merge_prompt` — Oracle ensemble: N parallel generators plus a scripted or LLM merge.
- `models` — Multi-model ensemble: list of model tiers. Each generator gets a different model round-robin. Infers `ensemble_n` from `len(models)`. When `models=` is set without `merge_fn`/`merge_prompt`, the function body IS the merge.
- `interrupt_when` — Human-in-the-loop: inline callable or registered condition name.
- `loop_when` — Loop: callable or registered condition name. Return `True` to continue looping. The value may be `None` on the first iteration, so the callable must be None-safe.
- `max_iterations` — Loop iteration cap. Default 10. Used with `loop_when=`.
- `on_exhaust` — `"error"` (default) raises `ExecutionError` when the max is reached; `"last"` exits with the last result.
- `renderer` — Pluggable prompt-input renderer (`XmlRenderer`, `DelimitedRenderer`, `JsonRenderer`, or custom).
- `context` — `list[str]` of state field names injected verbatim into the prompt (not BAML-rendered). For pre-formatted context like graph catalogs.
- `skip_when` — Callable predicate. When it returns `True`, the LLM call is skipped entirely.
- `skip_value` — Callable that produces the output when `skip_when` fires.
The function body runs for `scripted` and `raw` modes. For LLM modes, the body is dead code — use `...` as the body.
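The None-safety requirement on `loop_when` is worth internalizing; it can be shown in plain Python (the `Draft` type here is a hypothetical stand-in, not part of NeoGraph):

```python
from dataclasses import dataclass

# Hypothetical output type, used only to illustrate the predicate contract.
@dataclass
class Draft:
    score: float

# None-safe predicate: handles the first iteration, before any output exists.
keep_looping = lambda d: d is None or d.score < 0.8

print(keep_looping(None))              # True — first iteration, keep going
print(keep_looping(Draft(score=0.5)))  # True — below threshold
print(keep_looping(Draft(score=0.9)))  # False — threshold met, exit
```

A predicate like `lambda d: d.score < 0.8` would raise `AttributeError` on the first iteration.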
### construct_from_module(mod, name=None, *, llm_config=None, input=None, output=None)

Walks a module's `@node`-decorated functions, infers dependencies from parameter names, topologically sorts, and returns a `Construct`.
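The inference step can be sketched in plain Python — an illustrative approximation, not NeoGraph's implementation: a parameter whose name matches another function's name becomes a dependency edge, and the graph is then topologically sorted.

```python
import inspect
from graphlib import TopologicalSorter

# Three stand-in "nodes": parameter names refer to upstream function names.
def extract(raw): ...
def classify(extract): ...
def report(extract, classify): ...

def infer_order(fns):
    """Sketch of name-based dependency inference plus topological sort."""
    names = {f.__name__ for f in fns}
    deps = {
        f.__name__: {p for p in inspect.signature(f).parameters if p in names}
        for f in fns
    }
    return list(TopologicalSorter(deps).static_order())

print(infer_order([report, classify, extract]))  # ['extract', 'classify', 'report']
```

Parameters that match no node name (like `raw` above) are left for DI or state resolution.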
```python
pipeline = construct_from_module(sys.modules[__name__], name="my-pipeline")
```

When building a sub-construct, pass `input=` / `output=` to define the state boundary:
```python
sub = construct_from_module(mod, input=VerifyClaim, output=ClaimResult)
```

### construct_from_functions(name, functions, *, llm_config=None, input=None, output=None)
Builds a Construct from an explicit list of `@node`-decorated functions and/or `Construct` instances. Use when multiple pipelines share a file or when mixing `@node` functions with sub-constructs:
```python
pipeline = construct_from_functions("rw-ingestion", [
    flatten_claims,
    verify_claim.map("flatten_claims.claims", key="claim_id"),
    deterministic_merge,
])
```

Accepts both `@node` items and `Construct` items in the same list. Construct items must declare `output=` so downstream `@node` params can resolve against them.
### @merge_fn

Decorator for Oracle merge functions with `FromInput`/`FromConfig` DI support:
```python
@merge_fn
def combine(variants: list[Claims], shared: Annotated[Ctx, FromConfig]) -> Claims:
    return Claims(items=[i for v in variants for i in v.items])

@node(outputs=Claims, ensemble_n=3, merge_fn="combine")
def generate() -> Claims: ...
```

The first parameter receives the list of Oracle generator results. Subsequent parameters use standard DI annotations. The merge_fn's first parameter type (`list[T]`) tells the framework what type each generator should produce — the generators use `T` as the LLM schema, not `node.outputs`.
### FromInput

Dependency-injection marker used inside `typing.Annotated`. Parameters annotated with `Annotated[T, FromInput]` are resolved from `run(input={...})` at runtime (runtime-variable data — user queries, document IDs, per-call configuration).
```python
from typing import Annotated
from neograph import node, FromInput

@node(outputs=Report)
def summarize(claims: Claims, topic: Annotated[str, FromInput]) -> Report: ...

run(graph, input={'topic': 'security', 'node_id': 'demo'})
```

If the inner type `T` is a Pydantic `BaseModel` subclass, `FromInput` bundles: it constructs an instance by pulling each of the model's declared fields from `config['configurable']`. Use this to inject a typed context object into many nodes without repeating per-field parameters.
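The bundling behavior can be sketched with plain Python (illustrative only; `Ctx` and its fields are hypothetical, and a dataclass stands in for a Pydantic model):

```python
from dataclasses import dataclass, fields

# Hypothetical context model.
@dataclass
class Ctx:
    topic: str
    node_id: str

def bundle(model_cls, configurable: dict):
    """Sketch of bundling: build the model by pulling each declared
    field from the configurable mapping; unrelated keys are ignored."""
    return model_cls(**{f.name: configurable[f.name] for f in fields(model_cls)})

ctx = bundle(Ctx, {"topic": "security", "node_id": "demo", "unrelated": 42})
print(ctx)  # Ctx(topic='security', node_id='demo')
```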
### FromConfig

Dependency-injection marker used inside `typing.Annotated`. Parameters annotated with `Annotated[T, FromConfig]` are resolved from `config['configurable']` at runtime (shared infrastructure — rate limiters, database connections, trace providers).
```python
from typing import Annotated
from neograph import node, FromConfig

@node(outputs=Result)
def process(data: Data, tracer: Annotated[Tracer, FromConfig]) -> Result: ...

run(graph, input={...}, config={'configurable': {'tracer': my_tracer}})
```

`FromConfig` also bundles for Pydantic models — `Annotated[Shared, FromConfig]` constructs a `Shared` instance from per-field configurable entries.
## ForwardConstruct

### class ForwardConstruct(Construct)

Base class for pipelines with Python control flow. Subclass, declare `Node` attributes, override `forward()`:
```python
class Analysis(ForwardConstruct):
    check = Node(outputs=CheckResult, prompt='check', model='fast')
    deep = Node(outputs=Result, prompt='deep', model='reason')
    shallow = Node(outputs=Result, prompt='shallow', model='fast')

    def forward(self, topic):
        checked = self.check(topic)
        if checked.confidence > 0.8:
            return self.shallow(checked)
        else:
            return self.deep(checked)

graph = compile(Analysis())
```

At compile time, NeoGraph traces `forward()` with symbolic proxies. Python `if` compiles to a conditional edge. `for` over a proxy compiles to `Each` fan-out. `self.loop()` compiles to a graph cycle. `try`/`except` around node calls is allowed but doesn't create a fallback edge in v1 — see the ForwardConstruct guide for the full semantics.
Python `for`/`while` loops don't produce graph cycles (same limitation as `torch.jit.trace`). Use `self.loop()` for iterative patterns.

For testing, call `forward()` directly — it runs real Python with real values.
### self.loop(body, when, max_iterations=10, on_exhaust="error")

Explicit loop primitive that compiles to a sub-construct with a `Loop` modifier. Returns a callable; call it with a proxy to wire the loop into the graph.
```python
class Writer(ForwardConstruct):
    draft = Node(outputs=Draft, prompt='draft', model='fast')
    review = Node(outputs=ReviewResult, prompt='review', model='reason')
    revise = Node(outputs=Draft, prompt='revise', model='reason')

    def forward(self, topic):
        d = self.draft(topic)
        d = self.loop(
            body=[self.review, self.revise],
            when=lambda r: r.score < 0.8,
            max_iterations=5,
        )(d)
        return d
```

- `body` — List of node references (`self.review`, `self.revise`, etc.) that form the loop body. At least one node required.
- `when` — Callable that receives the loop body's latest output. Return `True` to continue looping, `False` to exit.
- `max_iterations` — Iteration cap. Default 10.
- `on_exhaust` — `"error"` (default) raises `ExecutionError`; `"last"` exits with the last result.
Maps 1:1 to `construct | Loop(when=..., max_iterations=...)` in the programmatic API.
## Programmatic IR

### Node(name, *, mode, outputs, inputs=None, prompt=None, model=None, tools=None, llm_config=None, context=None, skip_when=None, skip_value=None, renderer=None)

The `Node` IR class. Used directly for runtime construction, programmatic pipeline generation, and sub-constructs.
```python
decompose = Node("decompose", mode="think", outputs=Claims, prompt="rw/decompose", model="reason")
```

### Node.scripted(name, fn, inputs=None, outputs=None)
Convenience constructor for scripted nodes registered via `register_scripted`:
```python
register_scripted("extract_fn", lambda input_data, config: ...)
extract = Node.scripted("extract", fn="extract_fn", outputs=RawText)
```

### Construct(name, *, nodes, input=None, output=None, description=None)
Ordered composition of Nodes. Validates input/output compatibility across the node chain at assembly time.
```python
pipeline = Construct("my-pipeline", nodes=[extract, classify])
```

Sub-constructs need `input=` and `output=` to declare their typed I/O boundary:
```python
enrich = Construct(
    "enrich",
    input=Claims,
    output=ScoredClaims,
    nodes=[lookup, score],
)
```

### Modifiers (pipe syntax)
Apply to a `Node` or `Construct` via `|`:
```python
node | Oracle(n=3, merge_fn="combine")
node | Oracle(models=["reason", "fast"], merge_fn="pick_best")
node | Each(over="upstream.field", key="id")
node | Operator(when="condition_name")
node | Loop(when=lambda d: d is None or d.score < 0.8, max_iterations=5)
construct | Loop(when=lambda d: d is None or d.score < 0.8, max_iterations=10)
```

- `Oracle(n=3, models=None, merge_fn=None, merge_prompt=None)` — N-way ensemble. Exactly one of `merge_fn` / `merge_prompt` required. `models=` assigns model tiers round-robin; infers `n` from `len(models)`.
- `Each(over, key)` — Fan-out over a dotted-path collection. Results keyed by `getattr(item, key)`.
- `Operator(when)` — Human-in-the-loop interrupt. `when` is a registered condition name.
- `Loop(when, max_iterations=10, on_exhaust="error", history=False)` — Cycle modifier. On a Node: self-loop (output feeds back as input). On a Construct: the sub-construct re-runs with its output as the next input. `when` receives the latest output; return `True` to continue. The value may be `None` on the first iteration, so the callable must be None-safe (e.g. `lambda d: d is None or d.score < 0.8`). When `history=True`, each iteration's output is collected in a state list field (`{node_name}_history`), useful for debugging or post-loop analysis.
The `|` syntax returns a new Node/Construct with the modifier appended. You can chain: `node | Oracle(...) | Operator(...)`.
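The `Loop` contract (None-safe predicate, iteration cap, `on_exhaust` behavior) can be sketched in plain Python. This is one plausible reading of the semantics described above, not the framework's code; `LoopExhausted` stands in for `ExecutionError`:

```python
class LoopExhausted(RuntimeError):
    """Stand-in for NeoGraph's ExecutionError."""

def run_loop(body, when, max_iterations=10, on_exhaust="error"):
    """The predicate sees the latest output (None before the first pass)
    and returns True to continue looping."""
    result = None
    for _ in range(max_iterations):
        if not when(result):
            return result
        result = body(result)
    if on_exhaust == "error":
        raise LoopExhausted("max_iterations reached")
    return result  # on_exhaust == "last": exit with the last result

# A toy body that bumps a counter each pass.
step = lambda d: 1 if d is None else d + 1
print(run_loop(step, when=lambda d: d is None or d < 3))  # 3
```

With `when=lambda d: True` and `on_exhaust="last"`, the loop runs the full `max_iterations` passes and returns the final value instead of raising.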
### Node.map(source, *, key) -> Node

Sugar over `| Each(over=..., key=...)` with IDE-friendly lambda introspection. Returns a new `Node` with an `Each` modifier appended.
```python
# Lambda form (refactor-safe — Pyright/Pylance catch typos):
verify.map(lambda s: s.make_clusters.groups, key="label")

# String form (escape hatch, equivalent to | Each(...)):
verify.map("make_clusters.groups", key="label")
```

- `source` — Either a dotted-path string (`"upstream.field"`) or a lambda that accesses attributes on a state proxy (`lambda s: s.upstream.field`). The lambda must be a pure attribute-access chain; indexing, arithmetic, or underscore-prefixed attributes raise `TypeError`.
- `key` — Field on each iterated item used as the dispatch key (same semantics as `Each.key`).
Both forms compile to the same Each modifier. The lambda form gives static analysis coverage: renaming an upstream node surfaces as a type-checker error instead of a silent runtime failure.
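How a lambda can be turned into a dotted path is sketched below. This is an illustrative mechanism, not NeoGraph's implementation: the lambda is called with a recording proxy, and any operation other than plain attribute access fails naturally because the proxy defines nothing else:

```python
class _PathProxy:
    """Records a pure attribute-access chain."""
    def __init__(self, parts=()):
        self._parts = parts

    def __getattr__(self, name):
        if name.startswith("_"):
            raise TypeError("underscore-prefixed attributes are not allowed")
        return _PathProxy(self._parts + (name,))

def path_of(fn) -> str:
    out = fn(_PathProxy())
    if not isinstance(out, _PathProxy):
        raise TypeError("lambda must be a pure attribute-access chain")
    return ".".join(out._parts)

print(path_of(lambda s: s.make_clusters.groups))  # make_clusters.groups
```

Arithmetic such as `lambda s: s.x + 1` raises `TypeError` because the proxy supports no operators, which matches the "pure attribute-access chain" restriction.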
### Construct.renderer

Default renderer propagated to all child nodes that don't have their own renderer set. Dispatch hierarchy: `model.render_for_prompt()` > `node.renderer` > `construct.renderer` > global default > `None`.
```python
pipeline = Construct(
    "my-pipeline",
    nodes=[extract, classify],
    renderer=JsonRenderer(),
)
```

### Construct.llm_config
Default LLM configuration propagated to all child nodes. Per-node `llm_config` merges over the Construct default (node wins on conflicts). Common use: setting `output_strategy="json_mode"` once for a whole pipeline instead of on every node.
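The merge rule reads as plain dict layering (an illustrative sketch of the described semantics, not framework code):

```python
construct_default = {"output_strategy": "json_mode", "temperature": 0.2}
node_override = {"temperature": 0.7}

# Node-level config is layered over the Construct default; node wins on conflicts.
effective = {**construct_default, **node_override}
print(effective)  # {'output_strategy': 'json_mode', 'temperature': 0.7}
```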
```python
pipeline = Construct(
    "my-pipeline",
    nodes=[generate, classify, summarize],
    llm_config={"output_strategy": "json_mode", "temperature": 0.2},
)
```

## Shared
### compile(construct, *, checkpointer=None, retry_policy=None) -> CompiledGraph

Compiles a Construct (or ForwardConstruct) into an executable LangGraph StateGraph.
- `checkpointer` — Required when the pipeline uses `Operator` or `interrupt_when=`.
- `retry_policy` — LangGraph `RetryPolicy` applied to all LLM-calling nodes (`think`/`agent`/`act`). Handles malformed JSON, validation errors, and transient API failures. Scripted nodes are not retried. Sub-constructs inherit the policy.
```python
from langgraph.types import RetryPolicy

graph = compile(pipeline, retry_policy=RetryPolicy(max_attempts=3))
```

### run(graph, *, input=None, resume=None, config=None) -> dict
Executes a compiled graph. Use `input=` for the first run, `resume=` to continue from an interrupt. Returns the final state dict with framework internals stripped.
### describe_graph(compiled) -> str

Returns a Mermaid diagram string for a compiled graph. Paste the output into any Mermaid renderer (GitHub, docs, mermaid.live).
```python
graph = compile(pipeline)
print(describe_graph(graph))
```

### lint(construct, *, config=None) -> list[LintIssue]
Validates DI bindings in a Construct against a sample config dict. Walks every node (recursing into sub-constructs) and checks that every `FromInput`/`FromConfig` parameter has a matching key in the provided config. Returns a list of `LintIssue` instances; an empty list means all bindings are satisfied. Never raises.
When `config` is `None`, only structural checks are performed: required params are flagged as missing since no config is available.
```python
from neograph import lint

issues = lint(pipeline, config={"topic": "auth", "tracer": my_tracer})
for issue in issues:
    print(f"{issue.node_name}.{issue.param}: {issue.message}")
```

### LintIssue
Dataclass representing a single DI binding problem found by `lint()`:
```python
@dataclass
class LintIssue:
    node_name: str   # which node has the problem
    param: str       # which parameter is unbound
    kind: str        # "from_input", "from_config", "from_input_model", "from_config_model"
    message: str     # human-readable description
    required: bool = False
```

### parse_condition(expr) -> Callable[[Any], bool]
Safe expression evaluator for spec-driven conditions. Parses `field op literal` expressions where `op` is one of `< > <= >= == !=` and `literal` is a number, boolean (`true`/`false`), or a quoted string. Dotted field access is supported.
```python
from neograph import parse_condition

check = parse_condition("result.score < 0.8")
check(my_output)  # True if my_output.result.score < 0.8
```

Raises `ValueError` for any expression that does not match the grammar. Used internally by `load_spec` for string-form conditions in pipeline specs; available for custom runtime routing.
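An evaluator of this shape can be sketched in a few lines. This is illustrative only — NeoGraph's grammar and error handling may differ in details:

```python
import re
from types import SimpleNamespace

_OPS = {
    "<":  lambda a, b: a < b,   "<=": lambda a, b: a <= b,
    ">":  lambda a, b: a > b,   ">=": lambda a, b: a >= b,
    "==": lambda a, b: a == b,  "!=": lambda a, b: a != b,
}

def parse_condition_sketch(expr):
    """Toy `field op literal` parser: dotted field, comparison op,
    then a number, true/false, or a quoted string."""
    m = re.fullmatch(r"\s*([\w.]+)\s*(<=|>=|==|!=|<|>)\s*(.+?)\s*", expr)
    if not m:
        raise ValueError(f"bad condition: {expr!r}")
    field, op, lit = m.groups()
    if lit in ("true", "false"):
        literal = lit == "true"
    elif lit[0] in "'\"" and lit[-1] == lit[0]:
        literal = lit[1:-1]
    else:
        literal = float(lit)  # raises ValueError on garbage, matching the contract

    def check(obj):
        value = obj
        for part in field.split("."):   # dotted field access
            value = getattr(value, part)
        return _OPS[op](value, literal)
    return check

check = parse_condition_sketch("result.score < 0.8")
print(check(SimpleNamespace(result=SimpleNamespace(score=0.5))))  # True
```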
### configure_llm(llm_factory, prompt_compiler)

Configures the LLM provider once per process. `llm_factory(tier, ...)` returns a `BaseChatModel`. `prompt_compiler(template, data, **kwargs)` returns a list of messages.
The prompt compiler may accept additional kwargs: `output_model`, `output_schema`, `llm_config`, `context`, `node_name`, `config`. NeoGraph inspects the compiler's signature and passes only kwargs it declares.
### Tool(name, budget=0, config=None)

LLM-callable tool with an optional per-call budget. `budget=0` means unlimited.
### ToolInteraction

Record of a single tool call during a ReAct loop:
```python
class ToolInteraction(BaseModel, frozen=True):
    tool_name: str
    args: dict[str, Any]
    result: str        # BAML-rendered string (for prompts)
    typed_result: Any  # original Pydantic model (for downstream nodes)
    duration_ms: int
```

`typed_result` preserves the original tool return value. `result` is the BAML-rendered form (via `describe_value`) sent to the LLM in the ReAct loop.
### @tool(budget=None, config=None)

Decorator that wraps a function as a `Tool`, auto-registers it, and returns the `Tool` instance directly:
```python
@tool(budget=5)
def search_codebase(query: str) -> SearchResult:
    """Search the codebase for a query."""
    return SearchResult(file="auth.py", line=42, score=0.95)
```

Tool results can be typed Pydantic models. The framework preserves them in `ToolInteraction.typed_result` and renders them as BAML for the LLM.
### describe_type(model, *, prefix=...) -> str

Renders a Pydantic model class as a TypeScript-style (BAML) schema. Used internally for `json_mode` output schemas:
```python
describe_type(SearchResult)
# {
#   file: string
#   line: int
#   score: float  // Relevance score 0-1
# }
```

### describe_value(instance, *, prefix=...) -> str
Renders a Pydantic model instance in the same BAML notation with actual values:
```python
describe_value(SearchResult(file="auth.py", line=42, score=0.95))
# {
#   file: "auth.py"
#   line: 42
#   score: 0.95  // Relevance score 0-1
# }
```

Used for tool result rendering in `ToolMessage` content.
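The class-to-schema rendering can be approximated in a few lines of plain Python. This toy version (a dataclass stands in for a Pydantic model, and field descriptions are omitted) is not `describe_type`'s actual output:

```python
import typing
from dataclasses import dataclass

# Hypothetical model, mirroring the SearchResult example above.
@dataclass
class SearchResult:
    file: str
    line: int
    score: float

_TS_NAMES = {str: "string", int: "int", float: "float", bool: "bool"}

def sketch_describe_type(cls) -> str:
    """Render a class's annotated fields as a TypeScript-style schema."""
    hints = typing.get_type_hints(cls)
    body = "\n".join(f"  {name}: {_TS_NAMES[tp]}" for name, tp in hints.items())
    return "{\n" + body + "\n}"

print(sketch_describe_type(SearchResult))
# {
#   file: string
#   line: int
#   score: float
# }
```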
### render_prompt(node, input_data, *, config=None) -> str

Returns the exact prompt that would be sent to the LLM, without calling the LLM. For debugging and testing prompt templates.
## Error Hierarchy

```text
NeographError (base)
├── ConstructError (ValueError) — assembly-time validation failures
├── CompileError — compilation failures
├── ConfigurationError — missing LLM config, unregistered tools
└── ExecutionError — runtime failures (duplicate keys, reducer errors)
```

All errors are importable: `from neograph import ConstructError, CompileError, ConfigurationError, ExecutionError`.
## Registries

- `register_scripted(name, fn)` — Register a function for `Node.scripted(fn='name')`
- `register_condition(name, fn)` — Register a condition for `Operator(when='name')` and string-form `interrupt_when='name'`
- `register_tool_factory(name, factory)` — Register a tool factory for declarative tool lookup
These are only needed for the programmatic API. `@node`, `@tool`, and `@merge_fn` handle registration automatically.
## Spec Loader

### load_spec(spec, project=None)

Load a YAML/JSON pipeline spec and return a compilable Construct.
```python
from neograph import load_spec, compile, run

construct = load_spec("pipeline.yaml")
graph = compile(construct)
result = run(graph, input={"node_id": "demo"})
```

Parameters:
- `spec` (str | dict) — Pipeline spec as a YAML/JSON string, a file path, or a pre-parsed dict.
- `project` (str | dict | None) — Project surface (types/tools/models) as a YAML/JSON string, file path, or pre-parsed dict. Optional — types can also be pre-registered via `register_type`.
Returns: a `Construct` ready for `compile()`.
The spec is validated against the bundled JSON Schema (`neograph/schemas/neograph-pipeline.schema.json`) when `jsonschema` is installed. After parsing, the standard Construct validator runs — same checks as `@node` and programmatic pipelines.
See Pipeline Spec Format for the full spec schema and examples.
### register_type(name, cls)

Register a Pydantic model under a string name for spec-based lookup.
```python
from neograph import register_type
from myapp.schemas import Draft

register_type("Draft", Draft)
```

Types referenced in specs (e.g., `outputs: Draft`) are resolved from this registry. Types can also be auto-generated from a project surface passed to `load_spec(project=...)`.
### lookup_type(name)

Return the model registered under the given name. Raises `ConfigurationError` if not found.
Documentation © 2025-2026 Constantine Mirin, mirin.pro. Licensed under CC BY-ND 4.0.