For 30 years, density functional theory (DFT) was the workhorse of computational materials science. You wanted a new cathode? DFT. You wanted a formation energy? DFT. You wanted to understand catalysis? DFT. The method had a Nobel Prize behind it and predictable accuracy, and most of the field's muscle memory was built around its quirks.
In 2026 that default is changing. Machine learning interatomic potentials (MLIPs) are fast enough, accurate enough, and universal enough that most production work is moving to them. DFT is still essential, but its role is narrowing. This post walks through what changed, where MLIPs now win, and where DFT is still the right tool.
The speed gap
A single-point DFT calculation on a 100-atom system with a plane-wave code and a modern functional takes minutes to hours on a CPU node. The same calculation with a modern MLIP like MACE-MP-0 takes on the order of a millisecond on the same hardware. Molecular dynamics with DFT forces is limited to thousands of steps; molecular dynamics with MLIP forces can run billions.
This is not a small improvement. It is roughly six orders of magnitude, and it changes what is practical.
- Before: Nanosecond ab-initio MD trajectories were major publications.
- Now: Microsecond trajectories run overnight on a laptop.
- Before: Screening 100 compositions required a cluster allocation.
- Now: Screening 100,000 compositions fits in an afternoon.
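The scale of the gap is easy to check with back-of-envelope arithmetic. The per-step timings below are illustrative round numbers, not benchmarks:

```python
# Back-of-envelope comparison of DFT vs MLIP throughput.
# Per-step timings are illustrative round numbers, not measured benchmarks.
import math

DFT_STEP_S = 600.0   # ~10 minutes per DFT force evaluation on a CPU node
MLIP_STEP_S = 1e-3   # ~1 ms per MLIP force evaluation on the same node

speedup = DFT_STEP_S / MLIP_STEP_S
print(f"speedup: {speedup:.0e}x ({math.log10(speedup):.0f} orders of magnitude)")

# What fits in one day of wall time at a 1 fs MD timestep?
DAY_S = 86_400
print(f"DFT MD in a day:  {DAY_S / DFT_STEP_S:,.0f} steps")
print(f"MLIP MD in a day: {DAY_S / MLIP_STEP_S:,.0f} steps "
      f"(~{DAY_S / MLIP_STEP_S * 1e-15 * 1e9:.0f} ns of trajectory)")  # 1 fs/step
```

Even with these conservative per-step numbers, a day of wall time buys tens of nanoseconds of MLIP trajectory versus a few hundred DFT force calls.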
Accuracy caught up
A decade ago, machine learning potentials were interesting curiosities. Their accuracy was acceptable within the narrow chemistries they were trained on, but they were notorious for failing on out-of-distribution configurations. Universal foundation potentials changed that.
A model like MACE-MP-0, trained on millions of DFT calculations across the Materials Project, now achieves mean absolute errors on formation energies and forces that are within the inherent error of the underlying DFT itself. Equivariant architectures bake in the rotational and translational symmetries of physics, so they learn more from less data. And foundation checkpoints mean you do not have to train from scratch for every new system.
For a hands-on introduction, see our MACE-MP-0 guide.
Where MLIPs now win
There is a fairly well-defined set of workflows where MLIPs are simply the better tool in 2026:
- Molecular dynamics. Long trajectories, large systems, realistic temperatures. MLIPs run on the order of 100,000 times faster than DFT MD at similar accuracy.
- Structure optimization. Geometry relaxation of large systems, surface reconstructions, defect formation. A few hundred iterations finish in seconds instead of hours.
- Property screening. Elastic constants, phonons, formation energies across a chemical space. MLIPs make sweep-based discovery practical.
- Diffusion and rare events. Microsecond trajectories are now accessible, which means you can actually sample activation barriers.
- Exploratory work on a laptop. You can open ASE, load a MACE checkpoint, and run a simulation on your MacBook at a coffee shop. See our laptop MD tutorial.
Where DFT is still essential
DFT is not going away. It is still the best tool for:
- Reaction transition states. If you are computing activation energies for bond-breaking chemistry, DFT gives you a trustworthy answer, while an MLIP may fail silently if the transition state lies outside its training distribution.
- Exotic electronic structure. Unusual spin states, strongly correlated systems, transition metal complexes with multiple low-lying configurations. Most MLIPs are not trained for these, and some DFT functionals themselves struggle.
- Generating training data. The labels that MLIPs learn from come from DFT. For as long as the field wants better MLIPs, DFT remains the reference standard that feeds them.
- Novel chemistry. Work on elements, compositions, or geometries that are not represented in public training data. DFT is the safe default while you assemble a small reference set for later MLIP fine-tuning.
The hybrid workflow
The new best practice is hybrid. You use DFT to generate a small reference dataset for your chemistry, use that to fine-tune a universal MLIP, and then run production with the MLIP while periodically validating against DFT on novel configurations. This gives you DFT-quality answers at MLIP speed with controlled uncertainty.
A typical hybrid campaign looks like:
- Run 200 DFT single-point calculations on diverse configurations of your system.
- Fine-tune a MACE or NequIP foundation checkpoint on that dataset.
- Run production MD or optimization with the fine-tuned model.
- Spot-check 10 percent of accepted frames with DFT and confirm the residual error stays within tolerance.
- If the error grows, add the offending configurations to the training set and retrain.
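The spot-check step in the loop above can be sketched in plain Python. Here `dft_energy` and `mlip_energy` are hypothetical stand-ins for real drivers (say, a DFT runner and a fine-tuned MACE calculator), replaced by stubs so the logic is self-contained:

```python
# Sketch of the DFT spot-check step in a hybrid campaign.
# `dft_energy` and `mlip_energy` are hypothetical stand-ins for real drivers;
# here they are stubs so the validation logic runs on its own.
import random

def dft_energy(frame):   # stand-in: would run a DFT single-point calculation
    return frame["e_ref"]

def mlip_energy(frame):  # stand-in: would evaluate the fine-tuned MLIP
    return frame["e_ref"] + frame["err"]

def spot_check(frames, fraction=0.10, tol_ev_per_atom=0.01, seed=0):
    """Validate a random fraction of accepted frames against DFT.

    Returns frames whose residual |E_mlip - E_dft| per atom exceeds the
    tolerance -- these go back into the training set for retraining.
    """
    rng = random.Random(seed)
    sample = rng.sample(frames, max(1, int(fraction * len(frames))))
    flagged = []
    for frame in sample:
        residual = abs(mlip_energy(frame) - dft_energy(frame)) / frame["n_atoms"]
        if residual > tol_ev_per_atom:
            flagged.append(frame)
    return flagged

# Toy data: 100 frames, two of which have a large MLIP error baked in.
frames = [{"n_atoms": 64, "e_ref": -300.0, "err": 0.05} for _ in range(98)]
frames += [{"n_atoms": 64, "e_ref": -300.0, "err": 5.0} for _ in range(2)]
bad = spot_check(frames)
print(f"{len(bad)} of {len(frames) // 10} sampled frames exceeded tolerance")
```

In a real campaign the flagged frames would be appended to the training set before the retraining step, closing the loop.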
What this means for your group
If you are running a computational materials group in 2026 and every project still uses pure DFT for MD, you are leaving significant productivity on the table. Even a few weeks of effort to set up an MLIP-based workflow will typically pay for itself within one project by reducing turnaround time and increasing statistical sampling.
That does not mean everyone needs to become an ML expert. Hosted tools like the Materials studio on SciRouter let you call MACE-MP-0 and related universal potentials through a single API, without installing training frameworks or maintaining your own GPU infrastructure.
Bottom line
Machine learning potentials have crossed the threshold from interesting research tool to production default for most materials MD and optimization workflows. DFT is still essential as a reference standard and for the chemistry where MLIPs cannot be trusted, but the division of labor has fundamentally shifted. If you are planning a new project in 2026, the right question is not “which DFT functional should I use,” it is “which MLIP should I train or fine-tune.”