Cell Imaging

From Microscope to Data in 30 Seconds: Cellpose-SAM via API

A walkthrough of a 30-second path from wet-lab microscopy capture to quantitative cell metrics using Cellpose-SAM via SciRouter's API.

SciRouter Team
April 11, 2026
10 min read

The bottleneck in quantitative microscopy is almost never the microscope. It is the gap between capturing an image and turning it into numbers. Segmentation, counting, measuring, exporting — each step traditionally involves opening ImageJ, running a plugin, fiddling with macros, and copying values into a spreadsheet. This guide shows how to collapse that entire chain into a 30-second workflow using Cellpose-SAM through a hosted API.

The target is simple: capture an image, save it to disk, run one script, get a per-cell CSV. No local GPU, no ImageJ, no macro editing. Just a microscope, a laptop, and the SciRouter imaging API.

Note
This is a workflow recipe, not a complete codebase. The focus is on the end-to-end sequence from wet-lab to data and the decisions that affect how fast and how accurate the pipeline is.

The 30-second workflow

Here is the target flow:

  • Second 0: press the capture button on the microscope.
  • Second 2: the software saves a TIFF to a watched folder.
  • Second 3: a file-watcher script picks up the new file.
  • Seconds 5-25: the script uploads the image to Cellpose-SAM and receives masks and per-cell stats.
  • Second 28: the script writes a CSV alongside the original image.
  • Second 30: you open the CSV in your spreadsheet or pipe it to a notebook.

None of these steps are novel on their own. The trick is wiring them together so you do not have to think about any of it.

Step 1 — configure the microscope software

Virtually every acquisition package has a setting that saves captured images to a user-defined folder. Turn this on and point it at something easy to watch:

  • Create a folder like ~/microscope/raw.
  • Configure the acquisition software to save images there with a descriptive file name template — well, field of view, channel, timestamp.
  • Turn off any automatic contrast enhancement or rescaling. You want the raw linear intensities.

Step 2 — the file watcher

A small Python script using watchdog monitors the raw folder and processes each new TIFF as it appears. This is the glue that keeps the pipeline hands-off.

The watcher does three things: detect a new file, call the segmentation API, and save a CSV next to the image. That is the entire job description.
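That job description fits in a few dozen lines. A minimal sketch follows, using a stdlib polling loop rather than watchdog's event API to stay dependency-free (the logic is the same; `WATCH_DIR` and the `segment_fn` callable are placeholders for your own folder and the Step 3 API call):

```python
import csv
import time
from pathlib import Path

WATCH_DIR = Path("~/microscope/raw").expanduser()  # the folder from Step 1
POLL_SECONDS = 1.0

def process_image(tiff_path, segment_fn):
    """Segment one image and write a CSV next to it.

    segment_fn is whatever callable posts the image to the hosted
    endpoint and returns a list of per-cell dicts (see Step 3).
    """
    cells = segment_fn(tiff_path)
    csv_path = tiff_path.with_suffix(".csv")
    if cells:
        with open(csv_path, "w", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=sorted(cells[0]))
            writer.writeheader()
            writer.writerows(cells)
    return csv_path

def watch(directory, segment_fn, stop_after=None):
    """Poll the folder and process each new TIFF exactly once."""
    seen = set(directory.glob("*.tif*"))  # ignore files already on disk
    processed = 0
    while stop_after is None or processed < stop_after:
        for tiff in sorted(directory.glob("*.tif*")):
            if tiff not in seen:
                seen.add(tiff)
                process_image(tiff, segment_fn)
                processed += 1
        time.sleep(POLL_SECONDS)
```

A one-second poll is plenty: acquisition software writes files far less often than that, and the latency disappears into the upload time.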

Step 3 — the segmentation call

Each image is posted to the Cellpose-SAM endpoint with the model variant and diameter you picked for this experiment. The endpoint returns instance masks plus per-cell statistics in one response.

Because the endpoint is stateless, you can fire parallel requests for multiple fields of view without worrying about server-side state. For a 96-well plate, a handful of parallel workers can tear through the whole plate in under two minutes.
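A sketch of the call and the parallel fan-out. The endpoint URL, the `image_b64`/`cells` field names, and the bearer-token auth are illustrative assumptions — substitute the request shape from the SciRouter API docs for your account:

```python
import base64
import concurrent.futures
import json
import urllib.request
from pathlib import Path

# Hypothetical endpoint and key -- replace with your real values.
ENDPOINT = "https://api.scirouter.example/v1/cellpose-sam/segment"
API_KEY = "YOUR_API_KEY"

def build_payload(tiff_path, model="cyto3", diameter=30):
    """Assemble the JSON body for one segmentation request."""
    return {
        "model": model,
        "diameter": diameter,
        "filename": Path(tiff_path).name,
        "image_b64": base64.b64encode(Path(tiff_path).read_bytes()).decode(),
    }

def segment(tiff_path, model="cyto3", diameter=30):
    """POST one image and return the parsed per-cell stats."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_payload(tiff_path, model, diameter)).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["cells"]

def segment_plate(tiff_paths, workers=8):
    """Fire parallel requests -- the endpoint is stateless, so this is safe."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(tiff_paths, pool.map(segment, tiff_paths)))
```

Eight workers is a reasonable default; past that, the upload bandwidth of your connection becomes the limit before the endpoint does.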

Step 4 — per-cell measurements

The response includes whatever measurements the endpoint is configured to return — typically centroid, area, perimeter, mean intensity, and a few shape descriptors. If you need additional measurements (ratio of channel A to channel B, texture features, colocalization), you can compute them locally from the mask array and the raw image.
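The local computation is a few lines of NumPy once you have the label image. A sketch for the channel-ratio case, assuming the endpoint returns masks as an integer label array (0 = background, 1..N = cells):

```python
import numpy as np

def per_mask_ratio(masks, channel_a, channel_b, eps=1e-9):
    """Mean(channel A) / mean(channel B) inside each mask.

    masks: integer label image (0 = background, 1..N = cells).
    channel_a, channel_b: raw intensity images of the same shape.
    Returns {cell_label: ratio}.
    """
    ratios = {}
    for label in np.unique(masks):
        if label == 0:
            continue  # skip background
        inside = masks == label
        ratios[int(label)] = float(
            channel_a[inside].mean() / (channel_b[inside].mean() + eps)
        )
    return ratios
```

The same boolean-indexing pattern covers most "measure X inside each cell" requests; texture and colocalization features just swap in a different function of the masked pixels.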

Step 5 — the spreadsheet

The final step is just df.to_csv(). A CSV per image, plus an optional summary CSV that concatenates everything for the whole experiment. You now have structured data ready for plotting, statistical tests, or further analysis.
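The summary step is a one-liner per file plus a concatenation. A sketch, assuming one per-image CSV per field of view in the raw folder:

```python
from pathlib import Path

import pandas as pd

def write_summary(raw_dir):
    """Concatenate every per-image CSV into one experiment-level table."""
    frames = []
    for csv_path in sorted(Path(raw_dir).glob("*.csv")):
        if csv_path.name == "summary.csv":
            continue  # don't fold a previous summary back into itself
        df = pd.read_csv(csv_path)
        df["image"] = csv_path.stem  # keep per-row provenance
        frames.append(df)
    summary = pd.concat(frames, ignore_index=True)
    summary.to_csv(Path(raw_dir) / "summary.csv", index=False)
    return summary
```

Tagging each row with its source image is the detail that matters: it is what lets you group by well or field of view later without re-parsing file names.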

Where the time goes

On a warm endpoint with a 1024x1024 image, the typical breakdown looks like:

  • Upload the image: 1-3 seconds on a broadband connection.
  • Model inference: 3-8 seconds depending on cell count.
  • Response download: 1-2 seconds.
  • Local CSV write: fractions of a second.

Total: comfortably under 15 seconds for the API half of the workflow. Add capture time and file watcher overhead, and you are in the 30-second neighborhood for the first image of a session. Subsequent images are faster because the endpoint stays warm.

Warning
Cold starts on the hosted endpoint can take longer. If you have a time-critical workflow, fire a warm-up request before you start imaging — a small dummy image takes a few seconds and keeps the endpoint hot.
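The warm-up request can reuse whatever function you wrote for Step 3. A sketch that feeds it a tiny zero-filled image (the raw-bytes write is a placeholder; any small valid TIFF works):

```python
import tempfile
from pathlib import Path

import numpy as np

def warmup(segment_fn, size=64):
    """Send a tiny dummy image so the first real capture hits a warm endpoint.

    segment_fn is whatever callable posts an image to the endpoint
    (see Step 3); the content of the image is irrelevant.
    """
    dummy = np.zeros((size, size), dtype=np.uint8)
    path = Path(tempfile.mkdtemp()) / "warmup.tif"
    path.write_bytes(dummy.tobytes())  # placeholder; a real TIFF writer is fine too
    return segment_fn(path)
```

Call it once at the top of the watcher script, before you walk over to the microscope.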

Picking the right parameters once per experiment

The one thing you still need to do manually is pick the right model variant and diameter for your experiment. Do this once on a representative image using the imaging workspace, then hard-code those values in the watcher script.

  • Whole-cell fluorescence: cyto3 with a diameter matched to your objective and cell type.
  • DAPI or Hoechst nuclei: nuclei with a smaller diameter.
  • Tissue with membrane staining: tissuenet with both nuclear and membrane channels.
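"Hard-code those values" can be as simple as a preset dictionary at the top of the watcher. The diameters below are placeholders, not recommendations — use the values you tuned in the imaging workspace:

```python
# Per-experiment presets; the diameters here are illustrative placeholders.
# Tune once in the imaging workspace, then record the values here.
PRESETS = {
    "fluorescence": {"model": "cyto3", "diameter": 30},
    "nuclei":       {"model": "nuclei", "diameter": 17},
    "tissue":       {"model": "tissuenet", "diameter": 25,
                     "channels": ["membrane", "nuclear"]},
}

def params_for(experiment):
    """Look up the hard-coded segmentation parameters for an experiment."""
    return PRESETS[experiment]
```

Keeping the presets in one named dictionary also means the CSVs and the parameters that produced them live in the same repository, which pays off when you revisit an experiment months later.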

Quality control you cannot skip

Even with a fully automated pipeline, three QC steps are worth keeping in the loop:

  • Overlay spot check. For the first image of each session, have the script also save a mask overlay PNG. Glance at it to confirm the masks are sensible before trusting the batch.
  • Count sanity. If a replicate well returns zero cells, raise an alert. Zero is almost always a bug, not a biological result.
  • Area distribution. A histogram of mask areas should be roughly log-normal. Heavy tails suggest bad segmentation.
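The two numeric checks are easy to automate inside the watcher; a sketch that flags empty results and heavy tails in the log-area distribution (the 3-sigma cutoff is a starting point, not a standard):

```python
import numpy as np

def qc_checks(areas, min_cells=1, tail_z=3.0):
    """Run the numeric QC checks on one image's mask areas.

    areas: per-cell mask areas in pixels.
    Returns a list of human-readable warnings (empty list = pass).
    """
    warnings = []
    if len(areas) < min_cells:
        warnings.append("zero cells detected -- check segmentation, not biology")
        return warnings
    # roughly log-normal areas should have no extreme z-scores in log space
    log_areas = np.log(np.asarray(areas, dtype=float))
    z = (log_areas - log_areas.mean()) / (log_areas.std() + 1e-9)
    n_outliers = int((np.abs(z) > tail_z).sum())
    if n_outliers:
        warnings.append(f"{n_outliers} area outliers beyond {tail_z} sigma "
                        "in log space -- possible bad masks")
    return warnings
```

The overlay spot check stays a human job; the point of the other two is that they run on every image for free.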

Scaling up

Once the single-image workflow is solid, scaling to a whole plate or a time-lapse movie is mostly bookkeeping:

  • 96-well plates: loop over wells, one subfolder per well, one CSV per well, and a summary CSV at the top level.
  • Time-lapse: segment each frame independently, then link detections across frames with a simple nearest-neighbor tracker.
  • Multi-channel: segment on the primary channel, then compute per-mask intensity on the other channels locally.
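The nearest-neighbor linking step for time-lapse is short enough to sketch in full. This is a greedy single-pass linker under a distance cutoff — fine for sparse, slow-moving cells; use btrack or similar when cells divide or cross:

```python
import math

def link_frames(frames, max_dist=20.0):
    """Greedy nearest-neighbor linking of centroids across frames.

    frames: list of per-frame centroid lists, each centroid an (x, y) tuple.
    Returns tracks: list of [(frame_index, centroid), ...] lists.
    """
    tracks = [[(0, c)] for c in frames[0]] if frames else []
    for t, centroids in enumerate(frames[1:], start=1):
        unclaimed = list(centroids)
        for track in tracks:
            last_frame, (lx, ly) = track[-1]
            if last_frame != t - 1 or not unclaimed:
                continue  # track already ended, or nothing left to claim
            # nearest unclaimed detection in this frame
            best = min(unclaimed, key=lambda c: (c[0] - lx) ** 2 + (c[1] - ly) ** 2)
            if math.dist(best, (lx, ly)) <= max_dist:
                track.append((t, best))
                unclaimed.remove(best)
        # leftover detections start new tracks
        tracks.extend([(t, c)] for c in unclaimed)
    return tracks
```

Because each frame is segmented independently through the same stateless endpoint, the tracking pass is purely local and runs in milliseconds per movie.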

Bottom line

Thirty seconds from microscope to data is not a stunt. It is what happens when you replace a chain of local tools with one hosted API call. The savings compound across an experiment: fewer manual steps, fewer places to make a mistake, more time for actual biology. Set up the watcher once, tune the parameters once, and every imaging session after that becomes a much faster loop.

Try the imaging workspace →

Frequently Asked Questions

Is 30 seconds realistic or marketing?

It is realistic for a single field of view from capture to per-cell CSV on a warm endpoint, assuming the image is already saved to disk. Batch runs with cold starts can take longer for the first image.

What file format should I save from the microscope?

TIFF is the most reliable. Most microscope software exports TIFF natively, and the imaging endpoint accepts it directly. PNG works too for screenshots or snapshots.

Do I need to uncheck anything in my microscope software?

Turn off automatic contrast enhancement and any on-the-fly rescaling. You want the raw linear intensities saved to disk so downstream quantification is accurate.

Can this workflow handle time-lapse movies?

Yes, but segment each frame independently and then link detections across time in a second pass. Btrack or a simple nearest-neighbor linker works for the linking step.

What about multi-channel acquisitions?

Send the segmentation channel (DAPI, Hoechst, or a cytoplasmic marker) as the primary input. The other channels are for downstream quantification and are applied to the masks after segmentation.

How do I scale this to a whole plate?

Loop the script over all field-of-view files, save one CSV per image, and concatenate at the end. A 96-well plate with one image per well typically finishes in a few minutes on the hosted endpoint.

Try this yourself

500 free credits. No credit card required.