What Is the Segment Anything Model?
In April 2023, Meta AI released the Segment Anything Model (SAM), a foundation model trained to segment any object in any image. The model was trained on SA-1B, a dataset of over 1 billion segmentation masks across 11 million diverse images. The result is a model that understands object boundaries at a fundamental level: give it a point or a bounding box (the SAM paper also explores text prompts, though the publicly released model does not accept them), and it draws precise segmentation masks around the objects you care about.
What makes SAM remarkable for the remote sensing community is its zero-shot generalization. Despite being trained primarily on natural images (photos of everyday objects, animals, and scenes), SAM transfers surprisingly well to satellite imagery. Buildings, roads, agricultural fields, water bodies, and forest patches all have clear visual boundaries that SAM can detect without ever having seen a satellite image during training.
Why SAM Works for Satellite Imagery
Satellite images share several properties with natural images that make SAM effective. Objects in satellite scenes have well-defined edges: buildings have sharp boundaries against roads, water bodies contrast strongly with land, and agricultural fields form geometric patterns against surrounding vegetation. These are exactly the types of boundaries SAM was trained to detect.
Three factors make SAM particularly useful for remote sensing workflows:
- No labeled data required: Traditional remote sensing classification needs hundreds or thousands of labeled training pixels for each land cover class in each new study area. SAM requires zero training labels. You point, it segments.
- Instance-level segmentation: SAM does not just classify pixels – it identifies individual objects. You can count buildings, measure individual field areas, and track specific water bodies over time.
- Promptable interface: You can guide SAM with point prompts (click on an object), box prompts (draw a bounding box), or generate all masks automatically. This flexibility fits diverse remote sensing workflows from manual inspection to fully automated pipelines.
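The three prompting modes map naturally onto structured requests. The sketch below shows what point, box, and automatic prompts might look like as payloads, with a small sanity check before sending. The field names ("mode", "points", "box") are illustrative assumptions for this sketch, not a documented API schema:

```python
# Illustrative payloads for SAM's three prompting modes.
# Field names here are assumptions, not a documented schema.

point_prompt = {
    "mode": "point",
    "points": [{"x": 512, "y": 384, "label": 1}],  # label 1 = foreground click
}

box_prompt = {
    "mode": "box",
    "box": {"x_min": 100, "y_min": 150, "x_max": 420, "y_max": 390},
}

automatic_prompt = {
    "mode": "automatic",  # segment everything in the scene, no guidance
}

def validate(prompt):
    """Minimal sanity check on a prompt payload before sending it."""
    if prompt["mode"] == "point":
        return all(p["label"] in (0, 1) for p in prompt["points"])
    if prompt["mode"] == "box":
        b = prompt["box"]
        return b["x_min"] < b["x_max"] and b["y_min"] < b["y_max"]
    return prompt["mode"] == "automatic"

print(validate(point_prompt), validate(box_prompt), validate(automatic_prompt))
```

Point prompts suit interactive inspection, box prompts suit semi-automated workflows where a detector supplies candidate regions, and automatic mode suits batch pipelines.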
Where SAM Struggles with Satellite Data
SAM is not perfect for remote sensing. It was trained on RGB images, so it cannot natively handle multispectral bands (near-infrared, shortwave infrared, thermal) that carry critical information for vegetation health, soil moisture, and mineral identification. It also struggles with medium-to-low resolution imagery where individual objects span only a few pixels. And it has no concept of geographic context – it does not know that a dark patch near a river is probably water, while a dark patch in an urban area might be a parking lot.
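A common workaround for the RGB-only limitation is to map three of the available spectral bands into an 8-bit composite before handing the image to SAM. A minimal NumPy sketch, where the band choice (e.g. a false-color NIR/red/green combination) and the percentile stretch are illustrative assumptions:

```python
import numpy as np

def to_rgb_composite(bands, percentiles=(2, 98)):
    """Stack three spectral bands into an 8-bit image SAM can ingest.

    bands: three 2-D float arrays (e.g. a false-color choice like
    NIR/red/green). Each band is contrast-stretched between the given
    percentiles, then scaled to the 0-255 range.
    """
    out = []
    for band in bands:
        lo, hi = np.percentile(band, percentiles)
        stretched = np.clip((band - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
        out.append((stretched * 255).astype(np.uint8))
    return np.stack(out, axis=-1)  # shape (H, W, 3)

# Synthetic reflectance bands stand in for real multispectral data
rng = np.random.default_rng(0)
nir, red, green = (rng.random((64, 64)) for _ in range(3))
rgb = to_rgb_composite([nir, red, green])
print(rgb.shape, rgb.dtype)  # (64, 64, 3) uint8
```

This recovers some multispectral information (a NIR composite makes vegetation boundaries far more distinct than true color), but bands beyond the chosen three are still discarded.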
These limitations have spawned a growing ecosystem of SAM variants fine-tuned for remote sensing. Models like SAM-Geo, RSPrompter, and Geo-SAM extend SAM with multispectral support, geographic priors, and resolution-adapted architectures. The core insight remains: SAM provides a powerful foundation that can be adapted to satellite-specific tasks with relatively little additional training.
Using SAM for Satellite Analysis via API
Running SAM locally requires a GPU and careful setup of the model weights, image preprocessing, and post-processing pipelines. SciRouter's satellite analysis API abstracts this complexity into a single endpoint. Upload an image (or reference a scene by coordinates and date), and get back segmentation masks with land cover classifications.
import requests

API_KEY = "sk-sci-your-api-key"
BASE = "https://api.scirouter.ai/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Segment a satellite image by geographic coordinates
response = requests.post(
    f"{BASE}/geospatial/segment",
    headers=HEADERS,
    json={
        "latitude": 34.0522,
        "longitude": -118.2437,
        "source": "sentinel-2",
        "date_range": ["2026-01-01", "2026-03-01"],
        "model": "sam-geo",
        "classes": ["urban", "vegetation", "water", "bare_soil"]
    },
)
result = response.json()
print(f"Scene ID: {result['scene_id']}")
print(f"Segments found: {result['segment_count']}")
for seg in result["segments"]:
    print(f"  {seg['class']}: {seg['area_pct']:.1f}%")

Output:
Scene ID: S2A_20260215_T11SLT
Segments found: 4
  urban: 62.3%
  vegetation: 18.1%
  bare_soil: 13.8%
  water: 5.8%

Interpreting Segmentation Results
SAM returns segmentation masks – arrays where each pixel is assigned to a segment ID. When combined with a classification head (as in SAM-Geo), each segment also gets a land cover label and a confidence score. Here is how to interpret the key outputs:
- Segment masks: Binary arrays showing which pixels belong to each detected object. Overlapping masks indicate hierarchical segmentation (a building within an urban block within a city district).
- Confidence scores: Per-segment confidence values between 0 and 1. Scores above 0.85 are generally reliable. Scores between 0.5 and 0.85 should be reviewed manually. Below 0.5, the model is uncertain and the segment may be spurious.
- Area statistics: The percentage of the scene covered by each land cover class. These are computed from pixel counts and the known ground sampling distance of the sensor.
- Boundary polygons: Vector outlines of each segment, suitable for import into GIS software like QGIS or ArcGIS. Returned in GeoJSON format with geographic coordinates.
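Putting the confidence thresholds and area arithmetic above into practice, here is a small post-processing sketch. The segment dictionaries mirror the shape of the API response shown earlier; the field names `confidence` and `pixels` are assumptions for this sketch:

```python
def triage(segments, accept=0.85, review=0.5):
    """Bucket segments by confidence: reliable / manual review / spurious.

    Thresholds follow the guidance above: >= 0.85 reliable,
    0.5-0.85 review manually, < 0.5 likely spurious.
    """
    buckets = {"reliable": [], "review": [], "spurious": []}
    for seg in segments:
        if seg["confidence"] >= accept:
            buckets["reliable"].append(seg)
        elif seg["confidence"] >= review:
            buckets["review"].append(seg)
        else:
            buckets["spurious"].append(seg)
    return buckets

def area_m2(pixel_count, gsd_m):
    """Segment area from pixel count and ground sampling distance.

    Each pixel covers gsd_m x gsd_m meters on the ground; for
    Sentinel-2 at 10 m GSD, one pixel is 100 m^2.
    """
    return pixel_count * gsd_m ** 2

# Hypothetical segments for illustration
segments = [
    {"class": "urban", "confidence": 0.93, "pixels": 68_000},
    {"class": "water", "confidence": 0.62, "pixels": 6_400},
    {"class": "bare_soil", "confidence": 0.41, "pixels": 1_500},
]
buckets = triage(segments)
print([s["class"] for s in buckets["reliable"]])  # ['urban']
print(area_m2(6_400, 10))                         # 640000 (m^2 at 10 m GSD)
```

The same triage logic works whether segments come from the API or from a locally run model; only the dictionary keys need adjusting.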
Use Cases
Urban Planning and Growth Monitoring
City planners use satellite segmentation to track urban expansion, identify informal settlements, and measure green space coverage. SAM's ability to delineate individual buildings makes it particularly useful for building footprint extraction, which feeds into population estimates, infrastructure planning, and property tax assessment. Running segmentation quarterly provides a time series of urban growth that would be prohibitively expensive to survey on the ground.
Precision Agriculture
In agriculture, SAM segments individual crop fields, identifies irrigation infrastructure, and detects crop stress patterns. When combined with multispectral data (NDVI from near-infrared bands), segmentation boundaries help farmers understand exactly where within a field crops are thriving or struggling. This enables variable-rate application of fertilizer and irrigation, reducing waste and increasing yields.
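The field-level analysis described above comes down to averaging NDVI inside each segmentation mask. A NumPy sketch, with synthetic arrays standing in for real near-infrared and red bands:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)

def mean_ndvi_per_segment(nir, red, mask):
    """Average NDVI inside each segment of an integer label mask."""
    index = ndvi(nir, red)
    return {int(seg_id): float(index[mask == seg_id].mean())
            for seg_id in np.unique(mask)}

# Synthetic 4x4 scene: segment 1 = healthy crop (high NIR reflectance),
# segment 2 = stressed or bare area (NIR close to red)
nir = np.array([[0.8, 0.8, 0.3, 0.3]] * 4)
red = np.array([[0.1, 0.1, 0.25, 0.25]] * 4)
mask = np.array([[1, 1, 2, 2]] * 4)
print(mean_ndvi_per_segment(nir, red, mask))
```

Segment 1 comes out near 0.78 (vigorous vegetation) while segment 2 sits near 0.09 (bare soil or stressed crop), which is exactly the within-field contrast that drives variable-rate decisions.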
Disaster Response
After floods, hurricanes, or wildfires, rapid damage assessment is critical for directing emergency resources. SAM can segment satellite images of affected areas to identify flooded zones, damaged buildings, and blocked roads within hours of image acquisition. Organizations like the International Charter for Space and Major Disasters use similar segmentation workflows to produce damage maps that guide search-and-rescue operations.
Environmental Monitoring
Tracking deforestation, wetland loss, coral reef bleaching, and glacier retreat all benefit from automated segmentation. SAM provides a consistent, reproducible method for delineating these features across large areas and long time periods. For more on deforestation specifically, see our guide on detecting deforestation with AI.
Getting Started
The fastest path from zero to satellite segmentation:
- Try the Satellite Analyzer Studio – upload an image or use our sample scenes to see segmentation results instantly
- How to Analyze Satellite Images with AI – broader guide covering classification, detection, and segmentation
- Remote Sensing for Beginners – understand satellites, sensors, and free data sources before diving into analysis
- Satellite Analyzer – free online tool for quick satellite image segmentation
Ready to segment your own satellite imagery? Get a free API key and start analyzing the Earth from space.