Medical Imaging

Medical Image Segmentation API: MONAI and TotalSegmentator

870+ FDA-cleared AI radiology algorithms. MONAI + TotalSegmentator + Pillar-0 — research-grade medical imaging via API.

SciRouter Team
April 11, 2026
14 min read

Medical imaging AI is one of the most mature application areas in clinical machine learning. Over 870 radiology AI algorithms have been cleared by the FDA, spanning lung nodule detection, stroke triage, mammography, and many other narrow tasks. Behind that clinical layer is a larger research ecosystem built on open source tools: MONAI for the framework, TotalSegmentator for anatomical segmentation, and a new generation of foundation models like Pillar-0 that aim to cover hundreds of findings at once.

SciRouter's medical imaging API exposes these open source research tools through a single endpoint. You upload a CT or MRI volume, pick a task, and get back segmentations or predictions. It is for research and education, not clinical use.

Warning
This API is for research, education, and non-clinical analytics only. It is not FDA-cleared, not CE-marked, and not a medical device. It must not be used for diagnosis, treatment planning, or any decision that affects patient care. For clinical applications use an FDA-cleared product from a validated vendor with proper deployment and monitoring infrastructure.

The open source medical imaging stack

Medical imaging has a rich open source ecosystem. Three layers matter:

  • Frameworks. MONAI (Medical Open Network for AI) is the NVIDIA and King's College London project that standardized PyTorch-based medical imaging. It provides transforms, losses, metrics, and reference implementations for volumetric data.
  • Task-specific models. TotalSegmentator (CT), TotalSegmentator-MRI, nnU-Net (the baseline that keeps winning challenges), and hundreds of specialized tumor and lesion segmenters.
  • Foundation models. Pillar-0 (Stanford), RAD-DINO (Microsoft Research), and others that pretrain on large unlabeled corpora and fine-tune for downstream tasks.

Each layer has its own install story. MONAI itself is a pip install away, but the full stack, with GPU-accelerated preprocessing, DICOM tooling, and model weights, is non-trivial to get running. That is where a hosted API helps most.

TotalSegmentator: 100 structures in one call

TotalSegmentator is the workhorse of anatomical segmentation. Trained on thousands of manually labeled whole-body CT scans, it identifies over 100 structures in a single inference pass: bones, organs, muscles, blood vessels, and more. The model is built on nnU-Net, the reference architecture that has won most medical segmentation challenges since 2018.

What makes it valuable is coverage plus reliability. You get every major anatomical structure at once, with accuracy comparable to specialized per-organ models, and you get it from a single pretrained checkpoint. The API wraps the hosted version so you can process a CT volume without installing the full stack.

MONAI: the framework layer

MONAI is the PyTorch ecosystem for medical imaging. It standardizes the parts of a segmentation pipeline that used to be reinvented in every paper: spatial transforms (resize, rotate, flip in 3D), intensity normalization, sliding window inference, and volume-aware loss functions. It also ships with reference implementations of common architectures (UNet, UNETR, SegResNet, Swin UNETR).

The API uses MONAI under the hood for all preprocessing. When you upload a DICOM series, the service reorients it to a canonical frame, normalizes intensity, resamples to the model's native spacing, and runs sliding window inference for volumes that do not fit in GPU memory. All of that happens automatically.
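Sliding window inference is the standard trick for volumes that exceed GPU memory: the model runs on overlapping patches whose predictions are stitched back together. The window placement logic can be sketched in plain Python (this is an illustrative reimplementation of the tiling arithmetic, not MONAI's actual code):

```python
def sliding_window_starts(vol_size: int, patch: int, overlap: float = 0.5) -> list[int]:
    """Start offsets for sliding-window inference along one axis.

    Windows of length `patch` are tiled with the given fractional overlap;
    the last window is shifted back so it ends exactly at the volume edge.
    """
    if vol_size <= patch:
        return [0]
    step = max(1, int(patch * (1 - overlap)))
    starts = list(range(0, vol_size - patch, step))
    if starts[-1] != vol_size - patch:
        starts.append(vol_size - patch)
    return starts

# A 512-voxel axis covered by 128-voxel patches with 50% overlap:
print(sliding_window_starts(512, 128))  # → [0, 64, 128, 192, 256, 320, 384]
```

Overlapping windows matter because predictions near patch borders are less reliable; averaging the overlap regions smooths out seam artifacts in the stitched mask.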

Pillar-0 and foundation models for radiology

Pillar-0 is a foundation model for radiology developed at Stanford. Rather than training a separate model per task, it pretrains on a large unlabeled corpus and fine-tunes for a wide panel of findings: more than 350, according to published benchmarks. The result is a single backbone that handles diverse tasks with fewer labels than would be needed to train from scratch.

This matters for research because labeled medical data is expensive to collect. A foundation model can give you a useful baseline on a new task from a few hundred examples instead of tens of thousands. The API exposes Pillar-0 for research benchmarking, not as a diagnostic tool.

A segmentation call

Here is how you would run TotalSegmentator on a chest CT. You upload the DICOM series (the API also accepts NIfTI if you already have it in that format) and specify the task.

monai-segment.py
import httpx

API_KEY = "sk-sci-..."
BASE = "https://scirouter.ai/v1"

# Upload a zipped DICOM series
with open("chest_ct.zip", "rb") as f:
    upload = httpx.post(
        f"{BASE}/medimg/upload",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": ("chest_ct.zip", f, "application/zip")},
        timeout=120,  # large uploads can exceed httpx's 5 s default
    )
upload.raise_for_status()
dataset_id = upload.json()["dataset_id"]

# Run segmentation (long-running: allow up to ten minutes)
response = httpx.post(
    f"{BASE}/medimg/segment",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "dataset_id": dataset_id,
        "model": "totalsegmentator-v2",
        "task": "total",
        "return_measurements": True,
    },
    timeout=600,
)
response.raise_for_status()

result = response.json()
print(f"Segmented {result['n_structures']} structures")
print(f"Lung volume: {result['measurements']['lung_total_volume_ml']:.0f} mL")
print(f"Segmentation mask URL: {result['mask_url']}")

The call returns a NIfTI segmentation mask with per-voxel labels plus structure-level measurements (volumes, center-of-mass coordinates, intensity statistics). Processing a chest CT takes roughly one to two minutes end to end.
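Once you have the measurements dictionary, a small helper makes it easy to pull out just the per-structure volumes. The field names below follow the `_volume_ml` pattern from the example response above but are otherwise illustrative; check the API reference for the exact schema:

```python
def summarize_volumes(measurements: dict, suffix: str = "_volume_ml") -> dict[str, float]:
    """Pick out per-structure volume entries and convert mL to litres."""
    return {
        key.removesuffix(suffix): value / 1000.0
        for key, value in measurements.items()
        if key.endswith(suffix)
    }

# Illustrative measurement payload mixing volumes and intensity statistics:
measurements = {
    "lung_total_volume_ml": 4820.0,
    "liver_volume_ml": 1510.0,
    "liver_mean_hu": 58.2,  # intensity statistic, not a volume; filtered out
}
print(summarize_volumes(measurements))  # → {'lung_total': 4.82, 'liver': 1.51}
```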

Common research use cases

  • Body composition analysis. Use TotalSegmentator to segment muscle, visceral fat, and subcutaneous fat, then derive sarcopenia and body composition metrics from the masks.
  • Lesion tracking. Segment organs across a longitudinal series of scans to quantify growth or response over time.
  • Dataset curation. Run segmentation across a large public dataset like TCIA to enrich it with anatomical labels for downstream research.
  • Benchmarking. Compare a new model against TotalSegmentator or nnU-Net on a standard test set with one API call per model.
  • Education. Give medical students a reproducible way to explore segmentation without installing the full stack on their laptops.
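As a concrete instance of the body composition use case: once you have a muscle cross-sectional area from the segmentation mask, the standard sarcopenia metric is the skeletal muscle index (SMI), muscle area at the L3 vertebral level divided by height squared. A minimal sketch, with illustrative input values:

```python
def skeletal_muscle_index(muscle_area_cm2: float, height_m: float) -> float:
    """Skeletal muscle index: L3 muscle cross-sectional area / height^2 (cm^2/m^2)."""
    return muscle_area_cm2 / (height_m ** 2)

# Illustrative numbers: 140 cm^2 of muscle at the L3 level, 1.75 m patient height.
smi = skeletal_muscle_index(140.0, 1.75)
print(f"SMI: {smi:.1f} cm^2/m^2")  # → SMI: 45.7 cm^2/m^2
```

The area itself would come from the mask: count the muscle-labeled voxels in the L3 slice and multiply by the in-plane voxel area.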

DICOM, PHI, and data handling

Medical imaging data comes with privacy obligations. The API does not accept DICOM files containing patient identifiers as a matter of operational hygiene. Before upload, you should de-identify your data using a tool like pydicom's anonymizer or dcmtk's dcmodify. The API documentation includes a de-identification checklist.
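The core of de-identification is blanking a known list of identifier attributes. The sketch below illustrates the idea on a plain dict with hypothetical tag names; a real pipeline should operate on actual DICOM attributes via pydicom or dcmtk and follow your institution's approved tag list:

```python
# Hypothetical identifier tags for illustration; real DICOM de-identification
# follows the DICOM standard's confidentiality profiles.
PHI_TAGS = {
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName", "InstitutionName",
}

def strip_phi(header: dict) -> dict:
    """Return a copy of a header dict with known identifier tags blanked."""
    return {tag: ("" if tag in PHI_TAGS else value) for tag, value in header.items()}

header = {"PatientName": "DOE^JANE", "Modality": "CT", "SliceThickness": "1.0"}
print(strip_phi(header))  # → {'PatientName': '', 'Modality': 'CT', 'SliceThickness': '1.0'}
```

Blanking rather than deleting keeps the header structure intact, which some downstream tools expect. Note that tag-level scrubbing does not catch PHI burned into pixel data; that requires a separate check.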

Uploaded datasets are encrypted at rest, stored in a per-account isolated bucket, and deleted after a configurable retention window. For teams with stricter compliance requirements (HIPAA BAAs, SOC 2 reports), the Enterprise tier is the right fit.

Warning
De-identification is the uploader's responsibility. The API does not scan uploads for PHI. Always run your institution's de-identification pipeline before sending any real clinical data, and follow your IRB and data use agreement requirements.

Comparing models on the same volume

One of the benefits of a unified API is that you can run multiple segmentation models on the same volume and compare them. The API supports this as a single call: you specify a list of models and get back parallel results. This is useful for method papers, benchmarking studies, and sanity checks when adopting a new model.

Why compare?

No segmentation model is perfect on every case. A robust research pipeline runs two or three models in parallel and flags disagreements for human review. The overhead of running multiple models through the API is small because preprocessing happens once and the inference calls run in parallel.
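The usual way to quantify agreement between two models' masks is the Dice coefficient: twice the overlap divided by the total labeled volume. A minimal sketch on flattened binary masks (a real pipeline would compute this per structure on the NIfTI label volumes):

```python
def dice(mask_a: list[int], mask_b: list[int]) -> float:
    """Dice coefficient between two flattened binary masks (1 = structure voxel)."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 1.0 if total == 0 else 2.0 * intersection / total

# Two models agree on 3 of the voxels each labels as the structure:
a = [1, 1, 1, 0, 1, 0]
b = [1, 1, 1, 1, 0, 0]
print(f"Dice: {dice(a, b):.3f}")  # → Dice: 0.750
```

A simple flagging rule follows directly: route any structure whose cross-model Dice falls below a chosen threshold (say, 0.9 for large organs) to human review.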

Getting started

The easiest way to explore is through Medical Imaging Lab, which lets you upload a DICOM series and visualize the results in a 3D slice viewer. You can pick a model, see the output overlaid on the original volume, and export the mask as NIfTI.

For researchers building datasets or running benchmarks, the Python SDK handles upload batching, progress reporting, and result aggregation. The ability to process hundreds of volumes through a single script, without configuring MONAI locally, is the main draw.
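The batching pattern is straightforward even without the SDK: fan the per-volume calls out over a worker pool. The sketch below uses a stub in place of the real upload-and-segment call (`process_volume` is hypothetical; substitute the actual API calls shown earlier):

```python
from concurrent.futures import ThreadPoolExecutor

def process_volume(path: str) -> dict:
    """Stand-in for a call that uploads one volume and runs segmentation."""
    # A real implementation would call the upload + segment endpoints shown earlier.
    return {"path": path, "status": "done"}

paths = [f"scan_{i:03d}.zip" for i in range(5)]

# Fan out over a worker pool; pool.map returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_volume, paths))

print(sum(r["status"] == "done" for r in results), "volumes processed")
```

A thread pool is the right choice here because each call is I/O-bound (network upload plus a long server-side inference), so a handful of workers can keep many volumes in flight from one script.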

Warning
One more reminder: the medical imaging API is a research tool. It is not a diagnostic device. It must not be used in any workflow that affects patient care. If you are building a clinical product, you need the regulatory pathway and the validated infrastructure that comes with a real medical device.

Open Medical Imaging Lab →

Frequently Asked Questions

Is this API cleared for clinical or diagnostic use?

No. SciRouter's medical imaging API is explicitly for research, education, and non-clinical analytics. It is not FDA-cleared, not CE-marked, and not a medical device. Do not use it for diagnosis, treatment planning, or any decision that affects patient care. For clinical workflows you need an FDA-cleared product from a medical device vendor and a properly validated deployment pipeline.

What is MONAI?

MONAI (Medical Open Network for AI) is an open source PyTorch-based framework for medical imaging AI, originally developed by NVIDIA and King's College London. It provides standardized transforms, dataset loaders, models, and training loops for volumetric medical imaging tasks. It has become the de facto foundation for most academic research in medical image segmentation and classification.

What is TotalSegmentator?

TotalSegmentator is an open source model that segments over 100 anatomical structures in whole-body CT scans (bones, organs, muscles, vessels) with a single inference call. Built on top of nnU-Net and trained on thousands of labeled CTs, it has become a standard tool for anatomical segmentation in research. A similar tool exists for MRI. The API wraps the hosted version so you do not have to install it locally.

What is Pillar-0?

Pillar-0 is a foundation model for radiology that can predict over 350 findings from CT and MRI scans. It represents the next generation of medical imaging AI: rather than training a separate model per task, a single backbone predicts a broad panel of findings. In SciRouter it is exposed alongside classical task-specific models so researchers can compare approaches.

How does the API handle DICOM files?

You upload a zipped DICOM series (or a NIfTI file if you prefer). The service parses the headers, extracts the volume, and runs the requested model. Results come back as NIfTI segmentation masks plus optional measurements (volumes, HU distributions, center-of-mass coordinates). PHI handling is your responsibility as the uploader; the API recommends de-identifying data before upload.

How many FDA-cleared radiology AI algorithms are there?

As of late 2025, over 870 radiology AI algorithms have received FDA clearance, most for narrow tasks like lung nodule detection, intracranial hemorrhage flagging, or mammography triage. Those are regulated medical devices. The research API discussed here is a separate category entirely: it wraps open source models for exploration, benchmarking, and research, not for clinical deployment.

Run this yourself — no GPU, no install

Free for researchers. Pick a tool, paste your input, see results in seconds.