
Quick Start

This guide gets you from zero to a running LLM pipeline in under 5 minutes.

pip install neograph

NeoGraph depends on langgraph, langchain-core, pydantic, and structlog. These are installed automatically.

You will also need an LLM provider. This guide uses OpenAI, but any LangChain-compatible chat model works (Anthropic, Google, OpenRouter, local models via Ollama, etc.):

pip install langchain-openai

NeoGraph does not own LLM configuration. You provide two functions: an LLM factory that creates chat model instances, and a prompt compiler that builds message lists for each node.

import os
from langchain_openai import ChatOpenAI
from neograph import configure_llm

# Map model tiers to actual models
MODELS = {
    "fast": "gpt-4o-mini",
    "reason": "gpt-4o",
}

configure_llm(
    llm_factory=lambda tier: ChatOpenAI(
        model=MODELS.get(tier, "gpt-4o-mini"),
        api_key=os.environ["OPENAI_API_KEY"],
    ),
    prompt_compiler=lambda template, data, **kwargs: [
        {"role": "user", "content": f"[{template}] {data}"}
    ],
)

The llm_factory receives a tier string (like "fast" or "reason") and returns a BaseChatModel. The prompt_compiler receives a template name and the input data from the previous node, and returns a list of messages.

In production, you would replace the prompt compiler with something that loads real prompt templates and injects context. For now, this inline version works.
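As an illustration only, a production-leaning compiler might look up named templates and substitute the upstream data. The template registry and its contents below are hypothetical, not part of NeoGraph:

```python
# Hypothetical template registry; in a real setup these might be loaded from files.
TEMPLATES = {
    "decompose": "Break the topic into 3-5 factual claims.\n\nInput: {data}",
    "classify": "Classify each claim as security/reliability/performance.\n\nClaims: {data}",
    "summarize": "Summarize the classified claims in one paragraph.\n\nClaims: {data}",
}

def prompt_compiler(template: str, data, **kwargs) -> list[dict]:
    # Look up the named template and inject the upstream node's output.
    body = TEMPLATES[template].format(data=data)
    return [{"role": "user", "content": body}]
```

The signature matches the one configure_llm expects: a template name, the previous node's data, and extra keyword arguments.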

Every node declares its output type as a Pydantic model. This is how NeoGraph knows what data flows between nodes.

from pydantic import BaseModel

class Claims(BaseModel):
    items: list[str]

class ClassifiedClaims(BaseModel):
    classified: list[dict[str, str]]

class Summary(BaseModel):
    text: str

Decorate functions with @node. The parameter name IS the dependency — classify(decompose: Claims) wires classify after the decompose node automatically.

import sys
from neograph import node, construct_from_module

@node(output=Claims, prompt='decompose', model='fast')
def decompose() -> Claims: ...

@node(output=ClassifiedClaims, prompt='classify', model='fast')
def classify(decompose: Claims) -> ClassifiedClaims: ...

@node(output=Summary, prompt='summarize', model='fast')
def summarize(classify: ClassifiedClaims) -> Summary: ...

pipeline = construct_from_module(sys.modules[__name__])

Each @node specifies:

  • output — Pydantic model defining the typed output contract
  • prompt — template name passed to your prompt_compiler
  • model — tier string passed to your llm_factory

Mode is inferred from the arguments: passing both prompt= and model= makes the node an LLM call (produce mode); passing neither means the function body runs as scripted Python.

construct_from_module scans the module for all @node-decorated functions, infers the dependency graph from parameter names, and builds a Construct.
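To make the wiring concrete, here is one way parameter-name dependency inference can work in plain Python, a sketch of the idea rather than NeoGraph's implementation:

```python
import inspect

def upstream_deps(fn) -> list[str]:
    # Each parameter name is read as the name of an upstream node.
    return list(inspect.signature(fn).parameters)

def decompose(): ...
def classify(decompose): ...
def summarize(classify): ...

deps = {fn.__name__: upstream_deps(fn) for fn in (decompose, classify, summarize)}
# deps maps each node to its upstream nodes:
# {"decompose": [], "classify": ["decompose"], "summarize": ["classify"]}
```

A topological sort of this mapping gives the execution order decompose, classify, summarize.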

from neograph import compile, run

graph = compile(pipeline)
result = run(graph, input={"topic": "microservice authentication"})
print(result["summarize"].text)

compile() converts the Construct into a LangGraph StateGraph, wires all edges, infers the state schema, and compiles. run() executes the graph and returns the final state with framework internals stripped out.

Every field you pass in input= also flows into config["configurable"], so your prompt compiler can read them via config["configurable"]["topic"]. This is how pipeline-level metadata (topic, project_root, whatever you need) reaches every node without threading it through state manually.
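For example, a compiler can personalize its prompts from pipeline input by reading the configurable dict. This sketch assumes the config arrives as a config keyword argument, as in the full script below:

```python
def prompt_compiler(template: str, data, **kwargs) -> list[dict]:
    # Pipeline-level input fields arrive under config["configurable"].
    configurable = kwargs.get("config", {}).get("configurable", {})
    topic = configurable.get("topic", "an unspecified topic")
    return [{"role": "user", "content": f"[{template}] Topic: {topic}. Data: {data}"}]
```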

The result is a dict keyed by node name. Each value is the typed Pydantic model that node produced.

Here is the full script, ready to copy and run:

import os
import sys
from langchain_openai import ChatOpenAI
from pydantic import BaseModel
from neograph import node, construct_from_module, compile, configure_llm, run

# --- Schemas ---
class Claims(BaseModel):
    items: list[str]

class ClassifiedClaims(BaseModel):
    classified: list[dict[str, str]]

class Summary(BaseModel):
    text: str

# --- LLM Configuration ---
configure_llm(
    llm_factory=lambda tier: ChatOpenAI(
        model="gpt-4o-mini",
        api_key=os.environ["OPENAI_API_KEY"],
    ),
    prompt_compiler=lambda template, data, **kwargs: [
        {"role": "user", "content": (
            f"Break this topic into 3-5 factual claims: "
            f"{kwargs.get('config', {}).get('configurable', {}).get('topic', 'AI')}"
            if template == "decompose"
            else f"Classify each claim as security/reliability/performance: {data}"
            if template == "classify"
            else f"Summarize these classified claims in one paragraph: {data}"
        )}
    ],
)

# --- Pipeline ---
@node(output=Claims, prompt='decompose', model='fast')
def decompose() -> Claims: ...

@node(output=ClassifiedClaims, prompt='classify', model='fast')
def classify(decompose: Claims) -> ClassifiedClaims: ...

@node(output=Summary, prompt='summarize', model='fast')
def summarize(classify: ClassifiedClaims) -> Summary: ...

pipeline = construct_from_module(sys.modules[__name__])

# --- Run ---
graph = compile(pipeline)
result = run(graph, input={"topic": "microservice authentication"})
print(result["summarize"].text)

Behind the scenes, NeoGraph did the following:

  1. @node decorated each function and created a Node spec with the inferred mode, output type, and upstream dependencies
  2. construct_from_module scanned the module for all @node functions, built the dependency graph from parameter names (classify depends on decompose because it has a parameter named decompose), topologically sorted the nodes, and created a Construct
  3. compile() inferred a Pydantic state model with fields for each node’s output type, created LangGraph node functions that call your llm_factory and prompt_compiler, wired START -> decompose -> classify -> summarize -> END, and compiled the LangGraph StateGraph

You got a standard LangGraph graph. You can stream it, checkpoint it, visualize it, or deploy it with LangGraph Platform — NeoGraph does not change the runtime, only the authoring.

  • Add tools to a node with mode="gather" and Tool("name", budget=5) — see the Walkthrough
  • Fan out with ensemble_n=3 or map_over='field.path' in @node kwargs — see Core Concepts
  • Add human-in-the-loop with interrupt_when= — see the Walkthrough
  • Use ForwardConstruct for branching pipelines — see What is NeoGraph?

Documentation © 2025-2026 Constantine Mirin, mirin.pro. Licensed under CC BY-ND 4.0.