System Overview
ForgeLLM is an enterprise orchestration layer for specialized AI assistants. It provides the runtime, policy engine, and tooling needed to deploy reliable, deterministic AI workers.
Beta Release Note: v0.9.1
Documentation is currently synchronized with the private beta release. Core orchestration syntax (forge.yml) is subject to minor stylistic changes before RC1.
DAG Architecture
Unlike generic chat interfaces, a Forge worker is composed as a directed acyclic graph (DAG) of distinct functional nodes:
- The Controller: Evaluates incoming objective intent against rigid policy layers before execution begins.
- The Planner: Computes the required sequence of deterministic tool calls, returning a structured JSON graph.
- The Executor: Securely dispatches external API requests within isolated memory spaces.
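The three-node pipeline above can be sketched as a linear composition. This is an illustrative model only: ForgeLLM's actual SDK is private, so every class and function name here (ToolCall, WorkerState, controller, planner, executor) is an assumption, not the real API.

```python
# Hypothetical sketch of the Controller -> Planner -> Executor DAG.
# All names are illustrative; the real ForgeLLM runtime may differ.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str
    args: dict

@dataclass
class WorkerState:
    objective: str
    approved: bool = False
    plan: list = field(default_factory=list)
    results: list = field(default_factory=list)

def controller(state: WorkerState, allowed_objectives: set) -> WorkerState:
    # Evaluate incoming objective intent against policy before execution begins.
    state.approved = state.objective in allowed_objectives
    return state

def planner(state: WorkerState) -> WorkerState:
    # Compute the required sequence of deterministic tool calls.
    if state.approved:
        state.plan = [ToolCall("ledger_db_read", {"query": state.objective}),
                      ToolCall("math_verifier_v2", {"check": "totals"})]
    return state

def executor(state: WorkerState, dispatch) -> WorkerState:
    # Dispatch each planned call; memory isolation is the runtime's job,
    # which this sketch does not attempt to model.
    state.results = [dispatch(call) for call in state.plan]
    return state

# Wiring the DAG as a simple linear composition with a stub dispatcher:
state = WorkerState(objective="reconcile_q3_ledger")
state = controller(state, allowed_objectives={"reconcile_q3_ledger"})
state = planner(state)
state = executor(state, dispatch=lambda call: f"ok:{call.tool}")
```

Because each node is a pure function of the worker state, the same objective always yields the same plan, which is the property the policy engine relies on.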
An example forge.yml declaring such a worker:

worker: "financial_reconciliation"
version: "1.0.4"
runtime: "vpc-isolated-us-east-1"
policy_engine:
  strict_determinism: true
  allow_external_http: false
allowed_tools:
  - ledger_db_read
  - math_verifier_v2
guardrails:
  - no_hallucination_override: enforce
  - human_approval_on_discrepancy: enforce
entrypoint:
  max_recursion_depth: 3
  timeout_ms: 15000

Boundary Rules
Understanding action boundaries is critical to building effective workers on the ForgeLLM architecture. Action boundaries define exactly which state changes a worker is permitted to induce.
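A boundary check can be modeled as a permit-list over state-changing actions. The actual policy engine is not public, so the permit list and the within_boundary helper below are invented for illustration; they mirror the allowed_tools idea from the configuration above.

```python
# Illustrative action-boundary check; the real ForgeLLM policy engine API
# is not public, so this permit-list model is an assumption.
PERMITTED_ACTIONS = {"ledger_db_read", "math_verifier_v2"}  # read/verify only

def within_boundary(tool_call: str, mutates_state: bool) -> bool:
    # A call is allowed only if the tool is permit-listed; any call that
    # would mutate external state is rejected outright for this worker.
    return tool_call in PERMITTED_ACTIONS and not mutates_state

print(within_boundary("ledger_db_read", mutates_state=False))   # True
print(within_boundary("ledger_db_write", mutates_state=True))   # False
```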
Deterministic Guards ensure that given the exact same input state and contextual prompt, the worker will produce structurally identical tool-call graphs. This eliminates the "retry lottery" associated with standard LLM interfaces.
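One way to make "structurally identical tool-call graphs" checkable is to fingerprint the planner's output over a canonical serialization. This is a sketch of the idea, not the guard's actual implementation: the plan and plan_fingerprint functions are hypothetical.

```python
# Sketch of a determinism check: plan twice from identical input state
# and compare fingerprints of a canonical (sorted-keys) serialization.
import hashlib
import json

def plan(input_state: dict) -> list:
    # A deterministic planner: tool calls derived only from the input,
    # with sorted iteration so the output graph has a stable order.
    return [{"tool": "ledger_db_read", "args": {"account": acct}}
            for acct in sorted(input_state["accounts"])]

def plan_fingerprint(input_state: dict) -> str:
    # Canonical JSON hashed, so structural identity reduces to string equality.
    blob = json.dumps(plan(input_state), sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

state = {"accounts": ["A-102", "A-007"]}
assert plan_fingerprint(state) == plan_fingerprint(dict(state))  # identical graphs
```

Retrying with the same input state then yields the same fingerprint every time, which is exactly what removes the "retry lottery".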