What Is FLAME GPU?
FLAME GPU 2 is a GPU-accelerated framework for running agent-based models at scale. It takes the same conceptual model as NetLogo or Mesa — agents with state, a step function, and local interactions — and compiles the per-agent rules to CUDA kernels so thousands of agents update in parallel on GPU hardware. The result is two to three orders of magnitude more agents per simulation than CPU-based tools can handle.
This is a stub pillar page. The full guide will cover FLAME GPU internals, agent-function programming model, message lists, spatial partitioning, and end-to-end examples. Use the browser demos below to see what GPU-scale ABM can do.
Why GPUs for ABM
Agent-based models are embarrassingly parallel at the per-step level. Each agent reads its own state plus information about a bounded neighborhood, computes its next state, and writes the result. That pattern is a near-perfect fit for GPU SIMT execution: running 1M agents in parallel on an A100 can beat running them sequentially in Python by roughly three orders of magnitude.
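The read-neighborhood / compute / write pattern can be sketched in a few lines. This is a toy NumPy illustration of the per-step update, not FLAME GPU code: vectorization stands in for the GPU's per-agent parallelism, and the quadratic all-pairs scan is exactly what FLAME GPU's spatially indexed message lists avoid.

```python
import numpy as np

def step(positions, radius=1.0, rate=0.1):
    # Each agent reads a bounded neighborhood (all agents within `radius`)...
    diff = positions[None, :] - positions[:, None]   # diff[i, j] = pos_j - pos_i
    neighbors = (np.abs(diff) < radius) & (np.abs(diff) > 0)
    counts = neighbors.sum(axis=1)
    # ...computes its next state (drift toward the local mean)...
    pull = np.where(counts > 0,
                    (diff * neighbors).sum(axis=1) / np.maximum(counts, 1),
                    0.0)
    # ...and writes the result. Every agent's update is independent.
    return positions + rate * pull

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=100)
spread_before = pos.std()
for _ in range(50):
    pos = step(pos)
```

Because no agent's output depends on another agent's output within the same step, each row of this update can run on its own GPU thread.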
The Core Abstractions
- Agents. Structs of state arrays laid out contiguously for coalesced GPU memory access.
- Agent functions. Small CUDA kernels that run once per agent per step. They read messages, update local state, and emit new messages.
- Messages. Broadcast payloads with spatial or network indices, used for neighbor-based interactions without quadratic pair loops.
- Environments. Global state shared across all agents — parameters, simulation time, tunable constants.
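The message abstraction is the one that usually needs a concrete picture. A minimal sketch of the general technique, assuming a uniform-grid spatial index (this illustrates the idea, not FLAME GPU's internal layout): agents emit messages tagged with a grid cell, and readers scan only their own cell and its immediate neighbors instead of every other agent.

```python
import numpy as np

def bin_messages(positions, cell_size):
    # Tag each agent's message with a grid cell (the spatial index).
    cells = np.floor(positions / cell_size).astype(int)
    bins = {}
    for i, c in enumerate(cells):
        bins.setdefault(c, []).append(i)
    return cells, bins

def neighbor_ids(i, cells, bins):
    # Read only the adjacent cells: a bounded neighborhood,
    # not a quadratic all-pairs loop.
    c = cells[i]
    out = []
    for cc in (c - 1, c, c + 1):
        out.extend(j for j in bins.get(cc, []) if j != i)
    return out

cells, bins = bin_messages(np.array([0.2, 0.4, 3.1]), cell_size=1.0)
# Agent 0 sees agent 1 (same cell); agent 2 is three cells away and invisible.
```

With near-uniform agent density, each reader touches a constant number of cells, so a step over N agents costs O(N) message reads instead of O(N²).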
When FLAME GPU Wins
- Large populations. 100K+ agents is where CPU starts to hurt; 1M+ is FLAME GPU's natural territory.
- Parameter sweeps. Running an ensemble of 100 configurations in parallel on one GPU instead of 100 sequential CPU runs.
- Real-time dashboards. Interactive sliders that re-run a simulation every few hundred ms only feel responsive at GPU speed.
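The ensemble case amounts to expanding a parameter grid into independent run descriptions. A hypothetical sketch (the parameter names are illustrative, not from any real model): every (parameter combination, seed) pair becomes one run that a GPU batch runner could execute concurrently.

```python
import itertools

# Illustrative parameter grid; names are made up for the example.
grid = {"growth_rate": [0.1, 0.2, 0.3], "density": [0.5, 0.7]}
seeds = range(4)

runs = [
    {"params": dict(zip(grid, values)), "seed": s}
    for values in itertools.product(*grid.values())
    for s in seeds
]
print(len(runs))  # 3 * 2 * 4 = 24 independent run configurations
```

Because the runs share no state, all 24 can be dispatched to one GPU as a batch rather than queued as sequential CPU jobs.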
Where CPU Frameworks Still Win
For teaching, prototyping, or small-population research (under ~10K agents), the CPU frameworks — NetLogo, Mesa, AgentPy — remain the right choice. The GPU programming model carries fixed overhead that is only worth paying when you need the scale.
SimLab: FLAME GPU as an API
SciRouter's SimLab exposes FLAME GPU 2 as a hosted service. You call simulation.run() with a model name, parameters, and a seed, and we return metrics and trajectory data. No GPU setup, no CUDA version mismatches, no infrastructure to maintain. The browser simulators below are JavaScript reimplementations of the same canonical models you can run at ten million agents through the SimLab API.
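For a concrete picture, here is a hedged sketch of what a simulation.run() request payload might contain. Only the three inputs named above (model name, parameters, seed) come from this page; the field names, the "boids" model name, and the parameter keys are illustrative assumptions, not SimLab's documented schema.

```python
import json

payload = {
    "model": "boids",  # assumed canonical model name, for illustration
    "parameters": {"agents": 10_000_000, "steps": 500},  # assumed keys
    "seed": 42,
}
body = json.dumps(payload)  # what the client would send over the wire
print(json.loads(body)["seed"])  # 42
```

The seed travels with the request so that a run at ten million agents is reproducible on the service side.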