# Quick Start
This guide gets you from zero to a running LLM pipeline in under 5 minutes.
## 1. Install

```bash
pip install neograph
```

NeoGraph depends on langgraph, langchain-core, pydantic, and structlog. These are installed automatically.
You will also need an LLM provider. This guide uses OpenAI, but any LangChain-compatible chat model works (Anthropic, Google, OpenRouter, local models via Ollama, etc.):
```bash
pip install langchain-openai
```

## 2. Configure the LLM

NeoGraph does not own LLM configuration. You provide two functions: an LLM factory that creates chat model instances, and a prompt compiler that builds the message list for each node.

```python
import os

from langchain_openai import ChatOpenAI

from neograph import configure_llm

# Map model tiers to actual models
MODELS = {
    "fast": "gpt-4o-mini",
    "reason": "gpt-4o",
}

configure_llm(
    llm_factory=lambda tier: ChatOpenAI(
        model=MODELS.get(tier, "gpt-4o-mini"),
        api_key=os.environ["OPENAI_API_KEY"],
    ),
    prompt_compiler=lambda template, data, **kwargs: [
        {"role": "user", "content": f"[{template}] {data}"}
    ],
)
```

The `llm_factory` receives a tier string (like `"fast"` or `"reason"`) and returns a `BaseChatModel`. The `prompt_compiler` receives a template name and the input data from the previous node, and returns a list of messages.
In production, you would replace the prompt compiler with something that loads real prompt templates and injects context. For now, this inline version works.
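As a sketch of that production setup, a prompt compiler might load one template file per node and fill in the input data. Everything here (the directory layout, the `{data}` placeholder, the `make_prompt_compiler` name) is a hypothetical illustration, not NeoGraph API:

```python
from pathlib import Path


def make_prompt_compiler(template_dir: str):
    """Hypothetical sketch: build a prompt compiler that reads one
    <template>.txt file per node and fills a {data} placeholder."""
    def compile_prompt(template, data, **kwargs):
        # Load the template file named after the node's prompt= argument
        text = Path(template_dir, f"{template}.txt").read_text()
        # Return a LangChain-style message list
        return [{"role": "user", "content": text.format(data=data)}]
    return compile_prompt
```

You would then hand the result to `configure_llm(prompt_compiler=make_prompt_compiler("prompts"))` in place of the inline lambda.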
## 3. Define Your Schemas

Every node declares its output type as a Pydantic model. This is how NeoGraph knows what data flows between nodes.

```python
from pydantic import BaseModel

class Claims(BaseModel):
    items: list[str]

class ClassifiedClaims(BaseModel):
    classified: list[dict[str, str]]

class Summary(BaseModel):
    text: str
```

## 4. Build the Pipeline
Decorate functions with `@node`. The parameter name IS the dependency — `classify(decompose: Claims)` wires `classify` after the `decompose` node automatically.

```python
import sys

from neograph import node, construct_from_module

@node(output=Claims, prompt='decompose', model='fast')
def decompose() -> Claims: ...

@node(output=ClassifiedClaims, prompt='classify', model='fast')
def classify(decompose: Claims) -> ClassifiedClaims: ...

@node(output=Summary, prompt='summarize', model='fast')
def summarize(classify: ClassifiedClaims) -> Summary: ...

pipeline = construct_from_module(sys.modules[__name__])
```

Each `@node` specifies:

- `output` — the Pydantic model defining the typed output contract
- `prompt` — the template name passed to your `prompt_compiler`
- `model` — the tier string passed to your `llm_factory`

Mode is inferred: `prompt=` plus `model=` means an LLM call (produce mode). Neither means the function body runs as scripted Python.
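That inference rule can be pictured as a small decision function. This is an illustration of the rule as stated, not NeoGraph internals:

```python
def infer_mode(prompt=None, model=None):
    """Illustrative sketch: prompt= plus model= means an LLM call
    (produce mode); neither means the body runs as scripted Python."""
    return "produce" if prompt and model else "scripted"
```

So `infer_mode(prompt="decompose", model="fast")` yields `"produce"`, while a bare `@node(output=Claims)` function would run as scripted Python.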
`construct_from_module` scans the module for all `@node`-decorated functions, infers the dependency graph from parameter names, and builds a Construct.
## 5. Compile and Run

```python
from neograph import compile, run

graph = compile(pipeline)
result = run(graph, input={"topic": "microservice authentication"})

print(result["summarize"].text)
```

`compile()` converts the Construct into a LangGraph `StateGraph`, wires all edges, infers the state schema, and compiles it. `run()` executes the graph and returns the final state with framework internals stripped out.
Every field you pass in `input=` also flows into `config["configurable"]`, so your prompt compiler can read it via `config["configurable"]["topic"]`. This is how pipeline-level metadata (topic, project_root, whatever you need) reaches every node without threading it through state manually.
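A minimal sketch of a prompt compiler that reads that metadata. Only the `config["configurable"]` access path comes from the guide; the message wording is made up:

```python
def prompt_compiler(template, data, **kwargs):
    """Pull pipeline-level metadata out of config["configurable"]."""
    configurable = kwargs.get("config", {}).get("configurable", {})
    topic = configurable.get("topic", "(no topic)")
    # Build a single-message prompt that combines the template name,
    # the pipeline-level topic, and the previous node's output
    return [{
        "role": "user",
        "content": f"[{template}] topic: {topic}\ninput: {data}",
    }]
```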
The result is a dict keyed by node name. Each value is the typed Pydantic model that node produced.
## Complete Example

Here is the full script, copy-paste runnable:

```python
import os
import sys

from langchain_openai import ChatOpenAI
from pydantic import BaseModel

from neograph import compile, configure_llm, construct_from_module, node, run

# --- Schemas ---
class Claims(BaseModel):
    items: list[str]

class ClassifiedClaims(BaseModel):
    classified: list[dict[str, str]]

class Summary(BaseModel):
    text: str

# --- LLM Configuration ---
configure_llm(
    llm_factory=lambda tier: ChatOpenAI(
        model="gpt-4o-mini",
        api_key=os.environ["OPENAI_API_KEY"],
    ),
    prompt_compiler=lambda template, data, **kwargs: [
        {"role": "user", "content": (
            f"Break this topic into 3-5 factual claims: "
            f"{kwargs.get('config', {}).get('configurable', {}).get('topic', 'AI')}"
            if template == "decompose"
            else f"Classify each claim as security/reliability/performance: {data}"
            if template == "classify"
            else f"Summarize these classified claims in one paragraph: {data}"
        )}
    ],
)

# --- Pipeline ---
@node(output=Claims, prompt='decompose', model='fast')
def decompose() -> Claims: ...

@node(output=ClassifiedClaims, prompt='classify', model='fast')
def classify(decompose: Claims) -> ClassifiedClaims: ...

@node(output=Summary, prompt='summarize', model='fast')
def summarize(classify: ClassifiedClaims) -> Summary: ...

pipeline = construct_from_module(sys.modules[__name__])

# --- Run ---
graph = compile(pipeline)
result = run(graph, input={"topic": "microservice authentication"})
print(result["summarize"].text)
```

## What Just Happened
Behind the scenes, NeoGraph did the following:

- `@node` decorated each function and created a `Node` spec with the inferred mode, output type, and upstream dependencies
- `construct_from_module` scanned the module for all `@node` functions, built the dependency graph from parameter names (`classify` depends on `decompose` because it has a parameter named `decompose`), topologically sorted the nodes, and created a `Construct`
- `compile()` inferred a Pydantic state model with fields for each node's output type, created LangGraph node functions that call your `llm_factory` and `prompt_compiler`, wired `START -> decompose -> classify -> summarize -> END`, and compiled the LangGraph `StateGraph`
You got a standard LangGraph graph. You can stream it, checkpoint it, visualize it, or deploy it with LangGraph Platform — NeoGraph does not change the runtime, only the authoring.
## Next Steps

- Add tools to a node with `mode="gather"` and `Tool("name", budget=5)` — see the Walkthrough
- Fan out with `ensemble_n=3` or `map_over='field.path'` in `@node` kwargs — see Core Concepts
- Add human-in-the-loop with `interrupt_when=` — see the Walkthrough
- Use `ForwardConstruct` for branching pipelines — see What is NeoGraph?
Documentation © 2025-2026 Constantine Mirin, mirin.pro. Licensed under CC BY-ND 4.0.