
AI Materials Discovery API: MACE and CHGNet Property Prediction

Universal neural network potentials (MACE, CHGNet, M3GNet) predict material properties 1000x faster than DFT. All three via API.

SciRouter Team
April 11, 2026
13 min read

Materials discovery has historically been bottlenecked by the cost of quantum mechanics. To know whether a new oxide is stable, metallic, or piezoelectric, you had to run a density functional theory calculation that might take hours on a compute cluster. Scanning a million candidate compositions was unthinkable.

That changed with universal neural network potentials. Models like MACE-MP-0 and CHGNet are trained on the entire Materials Project and predict formation energies, forces, and stresses for any composition in the periodic table at roughly 1000 times the speed of DFT. Paired with graph neural network property predictors, they make it possible to screen candidate materials faster than a lab can synthesize them.

SciRouter's materials API exposes the whole stack through a single endpoint: send a crystal structure, get back energy, band gap, elastic moduli, and stability metrics in one response.

Note
The neural potentials described here are surrogates for DFT, not replacements for experiment. Always validate top candidates with real synthesis and characterization. The API is a screening tool, not a verdict.

From DFT to neural potentials

The Materials Project, run from Lawrence Berkeley National Lab, contains over 150,000 DFT-computed crystal structures with energies, forces, and properties. That dataset was the seed corpus for a new generation of universal machine learning potentials. The idea is simple: if you have enough high-quality energies and forces, you can train a neural network to predict them directly from atomic positions, bypassing the expensive self-consistent electronic calculation.

The payoff is dramatic. A DFT calculation on a 50-atom supercell might take 30 minutes on 64 CPU cores. The same prediction through MACE or CHGNet takes milliseconds on a single GPU. That gap is what makes high-throughput screening newly possible.

MACE: equivariant and universal

MACE stands for Multi Atomic Cluster Expansion. Developed at Cambridge, it builds on the e3nn equivariant neural network framework to respect the rotational and translational symmetries of physical space. That is not a cosmetic detail: it means predictions are guaranteed to transform correctly under rotation, which is critical for forces.

The universal variant, MACE-MP-0, was trained on the Materials Project and covers every element in the periodic table. You can throw a novel composition at it and get a sensible prediction even if that exact combination never appeared in training. That generalization is what makes it practical for screening campaigns where most candidates are new compositions.

  • Strengths. State-of-the-art energy accuracy, excellent force predictions, good generalization to unseen compositions.
  • Weaknesses. Does not natively predict magnetic structure or charge transfer. For battery materials and transition metal oxides, CHGNet may be a better fit.

CHGNet: charge-aware and magnetic-aware

CHGNet is a graph neural network potential from Berkeley that additionally predicts atomic charges and magnetic moments. For most applications these are incidental, but for lithium-ion battery cathodes, transition metal oxides, and any system where oxidation state matters, having them exposed lets you reason about the chemistry in a way MACE cannot.

CHGNet is slightly behind MACE-MP-0 on pure energy benchmarks but the gap is small. The recommended pattern is to run both on your candidate list and check agreement. Structures where the two models disagree are worth flagging for DFT confirmation.
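That cross-check is easy to automate. The sketch below assumes the API's `/materials/property` endpoint with model identifiers `mace-mp-0` and `chgnet`, and uses a 50 meV/atom tolerance (roughly the models' own error bar) as the disagreement threshold; tune it to your chemistry.

```python
API_KEY = "sk-sci-..."
BASE = "https://scirouter.ai/v1"

def disagrees(e_a: float, e_b: float, tol: float = 0.05) -> bool:
    """True when two predicted formation energies (eV/atom) differ by
    more than `tol` -- a signal to confirm the structure with DFT."""
    return abs(e_a - e_b) > tol

def formation_energy(structure: dict, model: str) -> float:
    """Fetch one model's formation-energy prediction for one structure."""
    import httpx  # imported lazily so the agreement logic is usable offline
    response = httpx.post(
        f"{BASE}/materials/property",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "structure": structure,
            "model": model,
            "properties": ["formation_energy"],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["formation_energy"]

def flag_for_dft(candidates: list[dict]) -> list[dict]:
    """Run both universal potentials; keep the structures they argue about."""
    return [
        s for s in candidates
        if disagrees(formation_energy(s, "mace-mp-0"),
                     formation_energy(s, "chgnet"))
    ]
```

The structures that survive `flag_for_dft` are exactly the ones worth spending DFT hours on.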

A one-call property prediction

Here is what a full property prediction looks like in Python. You pass a crystal structure (as CIF, POSCAR, or a dict of lattice and positions), and you get back everything you need to triage the candidate.

mace-property-predict.py
import httpx

API_KEY = "sk-sci-..."
BASE = "https://scirouter.ai/v1"

# A simple rocksalt MgO structure
structure = {
    "lattice": [[4.21, 0, 0], [0, 4.21, 0], [0, 0, 4.21]],
    "species": ["Mg", "O", "Mg", "O"],
    "coords": [
        [0.0, 0.0, 0.0],
        [0.5, 0.5, 0.5],
        [0.5, 0.5, 0.0],
        [0.0, 0.0, 0.5],
    ],
    "coords_are_cartesian": False,
}

response = httpx.post(
    f"{BASE}/materials/property",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "structure": structure,
        "model": "mace-mp-0",
        "properties": [
            "formation_energy",
            "band_gap",
            "bulk_modulus",
            "shear_modulus",
            "stability",
        ],
    },
    timeout=60,
)

response.raise_for_status()  # surface HTTP errors before parsing
result = response.json()
print(f"Formation energy: {result['formation_energy']:.3f} eV/atom")
print(f"Band gap: {result['band_gap']:.2f} eV")
print(f"Bulk modulus: {result['bulk_modulus']:.1f} GPa")
print(f"Above hull: {result['energy_above_hull']:.3f} eV/atom")

The call returns in under a second per structure. Batch mode accepts up to 1000 structures per request and uses a GPU worker behind the scenes, so you can screen a modest candidate library in a single API call.
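For larger libraries, a small chunking helper keeps each request under the 1000-structure cap. The batch endpoint shown here is a hypothetical sketch, assumed to mirror the single-structure call with a `structures` list; check the API reference for the exact path and payload.

```python
def chunk(structures: list, size: int = 1000) -> list[list]:
    """Split a candidate library into batches no larger than `size`,
    the per-request cap described above."""
    return [structures[i:i + size] for i in range(0, len(structures), size)]

def predict_batch(structures: list[dict], model: str = "mace-mp-0") -> list[dict]:
    """Submit one batch; assumes a batch endpoint mirroring the single call."""
    import httpx  # imported lazily so chunking stays usable offline
    response = httpx.post(
        "https://scirouter.ai/v1/materials/property/batch",
        headers={"Authorization": "Bearer sk-sci-..."},
        json={
            "structures": structures,
            "model": model,
            "properties": ["formation_energy", "stability"],
        },
        timeout=300,  # batches take longer than single structures
    )
    response.raise_for_status()
    return response.json()["results"]
```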

Stability and the convex hull

Predicting energy is only the first step. What you usually care about is stability: will this composition actually form, or will it decompose into some combination of known phases? That question is answered by checking where the candidate sits relative to the convex hull of the phase diagram. A structure with an energy above hull of zero lies on the hull and is thermodynamically stable. A structure up to roughly 100 meV per atom above the hull is usually metastable and may still be synthesizable. Anything higher is unlikely to form.

The API computes the energy above hull automatically by pulling reference phases from the Materials Project and running a convex hull construction. You get one number that tells you whether the candidate is worth pursuing.
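That triage reduces to a simple classification once the API has returned `energy_above_hull`. The thresholds below follow the rules of thumb in this section; adjust them to your risk tolerance.

```python
def triage(energy_above_hull: float) -> str:
    """Classify a candidate by its energy above the convex hull (eV/atom):
    0 -> on the hull (stable); <= 0.1 (100 meV/atom) -> likely metastable;
    higher -> unlikely to form."""
    if energy_above_hull <= 0.0:
        return "stable"
    if energy_above_hull <= 0.100:
        return "metastable"
    return "unlikely"

def rank_candidates(results: list[dict]) -> list[dict]:
    """Sort a screened library so on-hull candidates surface first."""
    return sorted(results, key=lambda r: r["energy_above_hull"])
```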

Band gap and elastic properties

For functional materials screening, energy alone is not enough. A semiconductor candidate needs a target band gap. A structural material needs high bulk and shear moduli. The API exposes specialized models trained on the MatBench benchmark for each of these:

  • Band gap. GNN trained on PBE band gaps from the Materials Project. Accurate for ranking, less so for absolute values (the PBE functional systematically underestimates gaps, often by a third to a half).
  • Bulk and shear moduli. Trained on elasticity tensor data. Useful for screening candidates for high-stiffness applications.
  • Piezoelectric and dielectric properties. Available as optional predictions for compositions where the training data covers the relevant chemistry.

Tip
For piezoelectric and nonlinear optical screening, always run a symmetry check first. Many predicted phases with promising properties are in centrosymmetric space groups where those effects are forbidden by symmetry. The API returns space group automatically.
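That pre-filter is one list comprehension. The sketch assumes each result carries a hypothetical `centrosymmetric` boolean alongside the space group the API returns; if only the space group number is available, a lookup table of the 92 centrosymmetric space groups would stand in for the flag.

```python
def piezo_candidates(results: list[dict]) -> list[dict]:
    """Keep only non-centrosymmetric structures: piezoelectric and
    second-order nonlinear optical responses vanish by symmetry in
    centrosymmetric space groups. Missing flags default to True,
    i.e. the candidate is conservatively dropped."""
    return [r for r in results if not r.get("centrosymmetric", True)]
```

For example, rocksalt MgO (space group 225, Fm-3m) is centrosymmetric and would be dropped, while wurtzite structures (space group 186, P6₃mc) would survive the filter.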

How this fits into a self-driving lab

Self-driving labs combine automated synthesis, characterization, and active learning to run hundreds of experiments per day with minimal human intervention. The loop goes: propose candidates, synthesize, characterize, update model, propose next batch. The rate-limiting step used to be either synthesis or DFT; with a fast neural potential API, it is increasingly synthesis alone.

Teams building self-driving labs typically call the materials API tens of thousands of times per day from their active learning orchestrators. Because each call is sub-second and pay-per-use, the economics work out: you pay for the screening only when the lab is actively exploring, not for idle GPU time.
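One common acquisition step in such a loop, sketched here with hypothetical field names: score unexplored candidates by the MACE-CHGNet disagreement as a cheap uncertainty proxy, and send the most uncertain ones to the robot, since those are where a real measurement teaches the model the most.

```python
def acquisition_batch(candidates: list[dict], k: int = 8) -> list[dict]:
    """Pick the k candidates where the two potentials disagree most.
    Each dict is assumed to carry 'e_mace' and 'e_chgnet' predictions
    (eV/atom) from a prior screening call."""
    by_uncertainty = sorted(
        candidates,
        key=lambda c: abs(c["e_mace"] - c["e_chgnet"]),
        reverse=True,
    )
    return by_uncertainty[:k]
```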

When to trust the surrogate and when to confirm with DFT

The honest answer is: trust the surrogate for ranking and rough screening, confirm with DFT before believing any specific number. MACE-MP-0 gets formation energies within about 20 to 30 meV per atom on average, which is excellent for deciding which of a million candidates deserves a closer look. But for publication-quality numbers or for closely competing polymorphs, running a real DFT calculation on the top 50 candidates is still the right move.

The good news is that the surrogate collapses a million-candidate problem to a 50-candidate problem, and the 50-candidate problem is tractable even on modest compute. That is the real workflow enabled by a fast materials API.

Getting started

The easiest way to try the API is through Materials Lab, the web interface that lets you paste a CIF file or draw a structure and see predictions in real time. Once you have a candidate library, the Python SDK handles batching and retries.
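If you call the HTTP endpoint directly rather than through the SDK, a minimal retry wrapper with exponential backoff covers transient failures. The schedule helper is the testable core; the wrapper assumes the endpoint shown earlier and retries only on connection errors and 5xx responses, not on client errors.

```python
import time

def backoff_schedule(retries: int = 4, base: float = 0.5) -> list[float]:
    """Delays in seconds between attempts: 0.5, 1.0, 2.0, 4.0 by default."""
    return [base * (2 ** i) for i in range(retries)]

def post_with_retries(url: str, payload: dict, headers: dict) -> dict:
    """POST with retries on transient errors; 4xx errors raise immediately."""
    import httpx  # imported lazily so backoff_schedule stays usable offline
    last_error = None
    for delay in backoff_schedule():
        try:
            response = httpx.post(url, json=payload, headers=headers, timeout=60)
            if response.status_code < 500:
                response.raise_for_status()  # raises on 4xx, passes on 2xx/3xx
                return response.json()
        except httpx.TransportError as exc:  # connection/read failures
            last_error = exc
        time.sleep(delay)  # 5xx or transport error: back off and retry
    raise RuntimeError("all retries exhausted") from last_error
```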

Materials scientists, battery researchers, and photovoltaic teams are among the earliest users of the materials API. If you are running high-throughput screening campaigns, the combination of MACE, CHGNet, and property-specific GNNs behind one endpoint removes the infrastructure overhead that has historically slowed the field.

Open Materials Lab →

Frequently Asked Questions

What is a neural network potential and how does it differ from DFT?

Density functional theory (DFT) computes a material's energy from first principles by solving the Kohn-Sham equations. It is accurate but slow: a single structure can take minutes to hours on a compute cluster. A neural network potential (NNP) is a machine learning model trained on millions of DFT calculations to predict the same energy directly from atomic positions. Once trained, inference is about 1000 times faster than DFT, with accuracy close enough for screening applications.

What is MACE and why is it important?

MACE (Multi-ACE) is a state-of-the-art equivariant graph neural network potential developed at Cambridge. It uses higher-order tensor representations of atomic environments to capture many-body interactions that simpler potentials miss. MACE-MP-0 is the universal variant, trained on the Materials Project, that can predict energies and forces for any composition in the periodic table without retraining.

How does CHGNet compare to MACE?

CHGNet is a universal graph neural network potential from Berkeley that additionally predicts magnetic moments and atomic charges, which lets it handle transition metal oxides and battery materials where charge transfer matters. It is slightly less accurate than MACE-MP-0 on pure energy benchmarks but more physically expressive. In practice, running both and checking agreement is a robust sanity check for new compositions.

Can I predict band gap and elastic properties, not just energy?

Yes. The materials API wraps several models: MACE and CHGNet for energy and forces, MatBench-trained GNNs for band gap and formation energy, and elasticity-specific models for bulk and shear moduli. A single call returns one payload with all the properties you request, so you do not have to orchestrate multiple services.

How accurate are these predictions for real materials screening?

On the Materials Project test set, MACE-MP-0 achieves roughly 20 to 30 meV per atom mean absolute error on formation energy, which is competitive with DFT-level accuracy for ranking compositions. Band gap predictions are less accurate because DFT itself underestimates gaps, but the ranking of candidates is usually reliable. For quantitative property values you still want to confirm top candidates with real DFT, but screening millions of compositions is now tractable.

What is a self-driving lab and how does this API fit?

A self-driving lab combines automated synthesis robots with active learning loops that decide what to make next based on model predictions. These loops live or die by inference latency: if each prediction takes an hour, the robot sits idle. An API that returns neural potential predictions in under a second per structure keeps the loop tight and lets the lab run dozens of synthesis-characterize-learn cycles per day.

Run this yourself — no GPU, no install

Free for researchers. Pick a tool, paste your input, see results in seconds.