Node Modes
Every node has a mode that determines how it runs. The mode controls whether an LLM is called, whether tools are available, and what side effects are permitted.
Overview
| Mode | LLM? | Tools? | Side effects? | Use case |
|---|---|---|---|---|
| produce | Yes | No | No | Single structured LLM call. Classification, extraction, generation. |
| gather | Yes | Yes (read-only) | No | ReAct tool loop. Research, lookup, exploration. |
| execute | Yes | Yes (mutations) | Yes | ReAct tool loop. Write files, call APIs, apply changes. |
| scripted | No | No | Up to you | Pure Python. Deterministic transforms, filtering, formatting. |
| raw | No | No | Up to you | LangGraph escape hatch. Full access to (state, config). |
Mode inference
You rarely need to set mode explicitly. The @node decorator infers it from the kwargs you pass:
| You pass… | Inferred mode |
|---|---|
| prompt= and/or model= | produce |
| Neither prompt= nor model= | scripted |
| mode='raw' | raw (must be explicit) |
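The inference rule in the table above can be sketched as plain Python. This is an illustration of the documented behavior, not the framework's actual implementation:

```python
def infer_mode(prompt=None, model=None, mode=None, tools=None):
    """Sketch of @node mode inference as documented above."""
    # An explicit mode= always wins; 'raw' can only be selected this way.
    if mode is not None:
        return mode
    # prompt= and/or model= imply a single structured LLM call.
    if prompt is not None or model is not None:
        return 'produce'
    # Note: tools= alone does not trigger inference of gather/execute.
    return 'scripted'
```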
```python
from neograph import node

# Inferred as produce — prompt= and model= are present
@node(output=Claims, prompt='rw/decompose', model='reason')
def decompose(topic: RawText) -> Claims: ...

# Inferred as scripted — no prompt or model
@node(output=Report)
def report(classify: Classified) -> Report:
    return Report(summary=f"{len(classify.items)} claims processed")
```

If you need gather or execute, set mode explicitly — the presence of tools= alone does not trigger inference:
```python
@node(mode='gather', output=ResearchFindings, prompt='rw/explore', model='reason',
      tools=[Tool("search_nodes", budget=5)])
def explore(query: SearchQuery) -> ResearchFindings: ...
```

produce
A single LLM call with structured output. The LLM receives a compiled prompt and must return a response that validates against the node’s output Pydantic model. No tools are involved.
When to use: Classification, extraction, summarization, generation — any task where the LLM has all the context it needs in the prompt and should produce a single structured answer.
```python
@node(output=ClassifiedClaims, prompt='rw/classify', model='reason')
def classify(raw_claims: RawClaims) -> ClassifiedClaims: ...
```

Under the hood, the framework calls llm.with_structured_output(output_model) and logs the token usage, duration, and output type via structlog.
The function body is not executed for produce nodes — the LLM call provides the output. If you write a non-trivial body, the framework emits a warning:
```python
# WARNING: body is dead code for produce nodes
@node(output=Claims, prompt='rw/decompose', model='reason')
def decompose(topic: RawText) -> Claims:
    print("This never runs")  # UserWarning raised at decoration time
```

Use ... as a placeholder body, or omit the body entirely.
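One way such a dead-code check could work is to inspect the function body with the ast module, treating `...`, `pass`, and a docstring as trivial. This is a sketch under assumed rules, not the framework's actual implementation:

```python
import ast

def body_is_trivial(source: str) -> bool:
    """Return True if a function body is only '...', 'pass', or a docstring.

    Illustrative sketch of a dead-code check; the framework's real
    check may differ.
    """
    fn = ast.parse(source).body[0]
    stmts = fn.body
    # A leading docstring does not count as dead code.
    if stmts and isinstance(stmts[0], ast.Expr) \
            and isinstance(stmts[0].value, ast.Constant) \
            and isinstance(stmts[0].value.value, str):
        stmts = stmts[1:]
    return all(
        isinstance(s, ast.Pass)
        or (isinstance(s, ast.Expr)
            and isinstance(s.value, ast.Constant)
            and s.value.value is Ellipsis)
        for s in stmts
    )
```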
gather
A ReAct tool loop with read-only tools. The LLM can call tools repeatedly (subject to per-tool budgets), then produces a structured final answer.
When to use: Research tasks where the LLM needs to explore — searching a codebase, reading documents, querying databases — before synthesizing a result. The tools should be read-only; gather nodes do not modify external state.
```python
from neograph import node, Tool

@node(mode='gather', output=ResearchFindings, prompt='rw/explore', model='reason',
      tools=[
          Tool("search_nodes", budget=5),
          Tool("read_artifact", budget=10, config={"max_chars": 6000}),
      ])
def explore(query: SearchQuery) -> ResearchFindings: ...
```

Tool budgets are enforced per tool. When search_nodes hits 5 calls, the LLM is told that tool is exhausted and must use remaining tools or respond. When all budgeted tools are spent, tools are unbound entirely and the LLM is forced to produce a final structured answer.
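The budget mechanics can be sketched as a small loop. The helper names here are hypothetical and the real ReAct loop is more involved, but the accounting follows the behavior described above:

```python
from collections import Counter

def run_budgeted_loop(llm_step, budgets):
    """Sketch of per-tool budget enforcement.

    llm_step(available_tools) stands in for one LLM turn and returns
    either ('call', tool_name) or ('final', structured_answer).
    """
    calls = Counter()
    while True:
        # Tools whose budgets are spent are withheld from the LLM.
        available = [t for t, cap in budgets.items() if calls[t] < cap]
        action, value = llm_step(available)
        if action == 'final':
            return value
        if value not in available:
            raise RuntimeError(f"tool {value!r} is exhausted or unknown")
        calls[value] += 1

def demo_llm_step(available):
    # A scripted stand-in: keep searching while allowed, then answer.
    if 'search_nodes' in available:
        return ('call', 'search_nodes')
    return ('final', 'findings synthesized')
```

With `run_budgeted_loop(demo_llm_step, {'search_nodes': 5})`, the stand-in makes exactly five search_nodes calls before the tool is withheld and a final answer is forced.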
execute
A ReAct tool loop with mutation tools. Mechanically identical to gather, but the tools are permitted to have side effects: writing files, calling external APIs, modifying databases.
When to use: Tasks that change the world. Applying code fixes, updating records, sending notifications. Separating execute from gather makes it clear during review which nodes are write-capable.
```python
@node(mode='execute', output=ApplyResult, prompt='rw/apply-fix', model='reason',
      tools=[
          Tool("write_file", budget=3),
          Tool("run_tests", budget=1),
      ])
def apply_fix(proposed: ProposedFix) -> ApplyResult: ...
```

scripted
Pure Python. No LLM, no tools. The function body is executed directly. Parameters are resolved from upstream nodes, runtime input, config, or default values.
When to use: Deterministic transforms that do not need an LLM. Parsing, filtering, formatting, aggregation, or any logic where the output is a known function of the input.
```python
from neograph import node

@node(output=Catalog)
def build_catalog(classify: ClassifiedClaims) -> Catalog:
    return Catalog(entries=[
        CatalogEntry(claim=c.claim, category=c.category)
        for c in classify.items
    ])
```

Fan-in
Scripted nodes support fan-in naturally — just add more parameters. Each parameter name must match an upstream @node function name:
```python
@node(output=Report)
def report(claims: Claims, scores: ScoredClaims, verified: VerifiedClaims) -> Report:
    return Report(
        total=len(claims.items),
        avg_score=sum(s.value for s in scores.items) / len(scores.items),
        passed=verified.passed,
    )
```

Non-node parameters
Not every parameter needs to be an upstream node. Use type annotations to declare where each value comes from:
```python
from neograph import node, FromInput, FromConfig

@node(output=Report)
def summarize(
    claims: Claims,                          # upstream node
    topic: FromInput[str],                   # from run(input={...})
    rate_limiter: FromConfig[RateLimiter],   # from config['configurable']
    max_items: int = 10,                     # constant (default value)
) -> Report:
    rate_limiter.acquire()
    items = claims.items[:max_items]
    return Report(topic=topic, count=len(items))
```

raw

The LangGraph escape hatch. Use it when you need direct access to the full LangGraph state and config — custom wiring, conditional state updates, or something the other four modes do not cover.
When to use: Sparingly. The declarative modes exist to eliminate exactly this kind of boilerplate. Use raw when you genuinely need access to the full state dict or config that the framework’s extraction layer cannot provide.
```python
from neograph import node

@node(mode='raw', output=Disposition)
def custom_resolution(state, config):
    """Full access to the LangGraph state and config."""
    disp = state.disposition
    node_id = config["configurable"]["node_id"]

    if disp.needs_manual_review:
        return {"disposition": disp.model_copy(update={"status": "pending"})}

    return {"disposition": disp.model_copy(update={"resolved": True})}
```

Raw mode enforces a strict signature: exactly two parameters named state and config. Any other signature raises ConstructError at decoration time.
The function must return a dict of state field updates. The framework handles wiring, edges, and observability — you handle extraction and state writing.
A raw node is still a Node. It goes into a Construct or construct_from_module like any other, and modifiers work on it:
```python
pipeline = construct_from_module(sys.modules[__name__])
# custom_resolution is discovered alongside other @node functions
```

Documentation © 2025-2026 Constantine Mirin, mirin.pro. Licensed under CC BY-ND 4.0.