Medical imaging AI is one of the most mature application areas in clinical machine learning. Over 870 radiology AI algorithms have been cleared by the FDA, spanning lung nodule detection, stroke triage, mammography, and many other narrow tasks. Behind that clinical layer is a larger research ecosystem built on open source tools: MONAI for the framework, TotalSegmentator for anatomical segmentation, and a new generation of foundation models like Pillar-0 that aim to cover hundreds of findings at once.
SciRouter's medical imaging API exposes these open source research tools through a single endpoint. You upload a CT or MRI volume, pick a task, and get back segmentations or predictions. It is for research and education, not clinical use.
The open source medical imaging stack
Medical imaging has a rich open source ecosystem. Three layers matter:
- Frameworks. MONAI (Medical Open Network for AI) is the NVIDIA and King's College London project that standardized PyTorch-based medical imaging. It provides transforms, losses, metrics, and reference implementations for volumetric data.
- Task-specific models. TotalSegmentator (CT), TotalSegmentator-MRI, nnU-Net (the baseline that keeps winning challenges), and hundreds of specialized tumor and lesion segmenters.
- Foundation models. Pillar-0 (Stanford), RAD-DINO (Microsoft Research), and others that pretrain on large unlabeled corpora and fine-tune for downstream tasks.
Each layer has its own install story. MONAI itself is a pip install away, but the full stack, with GPU-accelerated preprocessing, DICOM tooling, and model weights, is non-trivial to get running. That is where a hosted API helps most.
TotalSegmentator: 100 structures in one call
TotalSegmentator is the workhorse of anatomical segmentation. Trained on thousands of manually labeled whole-body CT scans, it identifies over 100 structures in a single inference pass: bones, organs, muscles, blood vessels, and more. The model is built on nnU-Net, the self-configuring segmentation framework that has won most medical segmentation challenges since 2018.
What makes it valuable is coverage plus reliability. You get every major anatomical structure at once, with accuracy comparable to specialized per-organ models, and you get it from a single pretrained checkpoint. The API wraps the hosted version so you can process a CT volume without installing the full stack.
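The structure-level measurements the API returns are derived from the mask itself. A minimal sketch of that derivation, assuming a 3D integer label mask and voxel spacing in millimeters (the toy mask and label value here are illustrative, not the API's actual label map):

```python
from math import prod

def structure_volume_ml(mask, label, spacing_mm):
    """Volume of one labeled structure, given a 3D mask of integer
    labels (nested lists) and per-axis voxel spacing in mm."""
    voxel_mm3 = prod(spacing_mm)  # volume of a single voxel in mm^3
    n_voxels = sum(
        1
        for slab in mask for row in slab for v in row
        if v == label
    )
    return n_voxels * voxel_mm3 / 1000.0  # mm^3 -> mL

# Toy 2x2x2 mask where label 1 fills half the voxels
mask = [[[1, 0], [1, 0]], [[1, 0], [1, 0]]]
print(structure_volume_ml(mask, label=1, spacing_mm=(1.5, 1.5, 3.0)))  # 0.027
```

Four voxels of 6.75 mm³ each gives 27 mm³, or 0.027 mL; the real masks simply have millions of voxels.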
MONAI: the framework layer
MONAI is the PyTorch ecosystem for medical imaging. It standardizes the parts of a segmentation pipeline that used to be reinvented in every paper: spatial transforms (resize, rotate, flip in 3D), intensity normalization, sliding window inference, and volume-aware loss functions. It also ships with reference implementations of common architectures (UNet, UNETR, SegResNet, Swin UNETR).
The API uses MONAI under the hood for all preprocessing. When you upload a DICOM series, the service reorients it to a canonical frame, normalizes intensity, resamples to the model's native spacing, and runs sliding window inference for volumes that do not fit in GPU memory. All of that happens automatically.
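Sliding window inference tiles the volume into overlapping patches and stitches the predictions back together. A sketch of the tiling step along one axis (window size and overlap are illustrative; MONAI's own implementation also handles Gaussian blending of the overlapping predictions):

```python
def window_starts(length, window, overlap=0.25):
    """Start offsets of overlapping windows covering [0, length)."""
    if length <= window:
        return [0]
    step = max(1, int(window * (1 - overlap)))
    starts = list(range(0, length - window + 1, step))
    if starts[-1] != length - window:  # make sure the tail is covered
        starts.append(length - window)
    return starts

# A 96-voxel window sliding over a 220-voxel axis with 25% overlap
print(window_starts(220, 96))  # [0, 72, 124]
```

The same offsets are computed per axis, so a full CT volume becomes a 3D grid of patches that each fit in GPU memory.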
Pillar-0 and foundation models for radiology
Pillar-0 is a foundation model for radiology developed at Stanford. Rather than training a separate model per task, it pretrains on a large unlabeled corpus and fine-tunes for a wide panel of findings (more than 350, according to its published benchmarks). The result is a single backbone that handles diverse tasks with far fewer labels than training from scratch would require.
This matters for research because labeled medical data is expensive to collect. A foundation model can give you a useful baseline on a new task from a few hundred examples instead of tens of thousands. The API exposes Pillar-0 for research benchmarking, not as a diagnostic tool.
A segmentation call
Here is how you would run TotalSegmentator on a chest CT. You upload the DICOM series (the API also accepts NIfTI if you already have it in that format) and specify the task.
```python
import httpx

API_KEY = "sk-sci-..."
BASE = "https://scirouter.ai/v1"

# Upload a zipped DICOM series
with open("chest_ct.zip", "rb") as f:
    upload = httpx.post(
        f"{BASE}/medimg/upload",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": ("chest_ct.zip", f, "application/zip")},
    )
dataset_id = upload.json()["dataset_id"]

# Run segmentation
response = httpx.post(
    f"{BASE}/medimg/segment",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "dataset_id": dataset_id,
        "model": "totalsegmentator-v2",
        "task": "total",
        "return_measurements": True,
    },
    timeout=600,
)
result = response.json()
print(f"Segmented {result['n_structures']} structures")
print(f"Lung volume: {result['measurements']['lung_total_volume_ml']:.0f} mL")
print(f"Segmentation mask URL: {result['mask_url']}")
```

The call returns a NIfTI segmentation mask with per-voxel labels plus structure-level measurements (volumes, center-of-mass coordinates, intensity statistics). Processing a chest CT takes roughly one to two minutes end to end.
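For downstream analysis it is convenient to flatten the measurements payload into a tidy CSV, one row per scan and metric, so results from many scans can be concatenated. A minimal sketch (the key shown follows the example response above; other measurements use the same pattern):

```python
import csv

def write_measurements_csv(measurements, path, scan_id):
    """Write one row per measurement: (scan_id, measurement, value)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["scan_id", "measurement", "value"])
        for name, value in sorted(measurements.items()):
            writer.writerow([scan_id, name, value])

write_measurements_csv(
    {"lung_total_volume_ml": 4210.0}, "measurements.csv", "chest_ct_001"
)
```

Appending to the same file across a batch of scans gives you a single long-format table ready for pandas or R.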
Common research use cases
- Body composition analysis. Use TotalSegmentator to segment muscle, visceral fat, and subcutaneous fat, then derive sarcopenia and body composition metrics from the masks.
- Lesion tracking. Segment organs across a longitudinal series of scans to quantify growth or response over time.
- Dataset curation. Run segmentation across a large public dataset like TCIA to enrich it with anatomical labels for downstream research.
- Benchmarking. Compare a new model against TotalSegmentator or nnU-Net on a standard test set with one API call per model.
- Education. Give medical students a reproducible way to explore segmentation without installing the full stack on their laptops.
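As a concrete example of the body composition use case, a standard derived metric is the skeletal muscle index: muscle cross-sectional area at the L3 vertebral level normalized by height squared. A sketch, assuming you have already extracted the muscle area from the segmentation mask:

```python
def skeletal_muscle_index(muscle_area_cm2, height_m):
    """SMI in cm^2/m^2: L3 skeletal muscle area divided by height
    squared, a standard sarcopenia screening metric."""
    return muscle_area_cm2 / height_m ** 2

print(f"{skeletal_muscle_index(150.0, 1.75):.1f} cm^2/m^2")  # 49.0 cm^2/m^2
```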
DICOM, PHI, and data handling
Medical imaging data comes with privacy obligations. The API does not accept DICOM files containing patient identifiers as a matter of operational hygiene. Before upload, you should de-identify your data using a tool like pydicom's anonymizer or dcmtk's dcmodify. The API documentation includes a de-identification checklist.
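The de-identification step amounts to blanking a known set of identifying tags before upload. A simplified stdlib illustration of the idea, operating on a plain dict of tag names (a real pipeline should use pydicom or DCMTK against the full DICOM tag set, including dates, private tags, and burned-in annotations, rather than this toy):

```python
# A few tags that commonly carry PHI; real checklists are much longer.
PHI_TAGS = {"PatientName", "PatientID", "PatientBirthDate", "PatientAddress"}

def deidentify(header):
    """Return a copy of a header dict with PHI tag values blanked."""
    return {tag: ("" if tag in PHI_TAGS else value) for tag, value in header.items()}

header = {"PatientName": "DOE^JANE", "Modality": "CT", "PatientID": "12345"}
print(deidentify(header))  # {'PatientName': '', 'Modality': 'CT', 'PatientID': ''}
```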
Uploaded datasets are encrypted at rest, stored in a per-account isolated bucket, and deleted after a configurable retention window. For teams with stricter compliance requirements (HIPAA BAAs, SOC 2 reports), the Enterprise tier is the right fit.
Comparing models on the same volume
One of the benefits of a unified API is that you can run multiple segmentation models on the same volume and compare them. The API supports this as a single call: you specify a list of models and get back parallel results. This is useful for method papers, benchmarking studies, and sanity checks when adopting a new model.
Why compare?
No segmentation model is perfect on every case. A robust research pipeline runs two or three models in parallel and flags disagreements for human review. The overhead of running multiple models through the API is small because preprocessing happens once and the inference calls run in parallel.
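Disagreement between two models can be quantified with the Dice coefficient on their output masks. A sketch using sets of foreground voxel indices (the 0.9 review threshold is a hypothetical choice; pick one appropriate to the structure being segmented):

```python
def dice(a, b):
    """Dice overlap between two sets of foreground voxel indices."""
    if not a and not b:
        return 1.0  # both empty: perfect agreement
    return 2 * len(a & b) / (len(a) + len(b))

mask_a = {(0, 0, 0), (0, 0, 1), (0, 1, 1)}
mask_b = {(0, 0, 1), (0, 1, 1), (1, 1, 1)}
score = dice(mask_a, mask_b)
print(f"Dice = {score:.2f}")  # Dice = 0.67
if score < 0.9:
    print("models disagree; flag for human review")
```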
Getting started
The easiest way to explore is through Medical Imaging Lab, which lets you upload a DICOM series and visualize the results in a 3D slice viewer. You can pick a model, see the output overlaid on the original volume, and export the mask as NIfTI.
For researchers building datasets or running benchmarks, the Python SDK handles upload batching, progress reporting, and result aggregation. The ability to process hundreds of volumes through a single script, without configuring MONAI locally, is the main draw.
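The batch pattern is a thread pool over per-volume jobs, collecting failures separately so one bad scan does not abort the run. A sketch with the standard library; `process_volume` here is a stand-in for the upload and segment calls shown earlier, not the SDK's actual interface:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_batch(paths, process_volume, max_workers=4):
    """Run a per-volume function across many scans, returning
    (results, failures) keyed by input path."""
    results, failures = {}, {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(process_volume, p): p for p in paths}
        for fut in as_completed(futures):
            path = futures[fut]
            try:
                results[path] = fut.result()
            except Exception as exc:
                failures[path] = exc
    return results, failures

# Demo with a dummy per-volume function
results, failures = run_batch(["a.zip", "b.zip"], lambda p: f"mask_for_{p}")
print(sorted(results))  # ['a.zip', 'b.zip']
```

Because segmentation is I/O- and server-bound from the client's perspective, a thread pool is enough; no multiprocessing is needed.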