Spatial-SAM: Spatially Consistent 3D Electron Microscopy Segmentation with SDF Memory and Semi-Supervised Learning

Yikai Huang1, Renmin Han2, Yuxuan Wang1, Youcheng Cai1, Ligang Liu1
1University of Science and Technology of China   2Shandong University
CVPR 2026
[Teaser figure: Spatial-SAM workflow and qualitative comparison]

Left: The complete workflow of Spatial-SAM, transitioning from interactive annotation to fully automatic segmentation. Right: Visual comparison of Spatial-SAM and other semi-supervised methods on mitochondria segmentation.

Abstract

Segment Anything Model (SAM)-based approaches have demonstrated remarkable potential for biomedical image segmentation. However, these methods often struggle to maintain spatial consistency on 3D electron microscopy (3D-EM) data and require extensive manual annotation. To address these limitations, we propose Spatial-SAM, a spatially consistent and annotation-efficient framework that achieves high precision on 3D-EM data. Our method introduces two key innovations. First, a 3D Signed Distance Field (SDF) memory mechanism replaces the original memory in SAM2 with SDF representations precomputed by a 3D U-Net, providing richer geometric information and improving spatial consistency. Second, by combining the few-shot capability of SAM2 with a dual-track pseudo-label iterative optimization strategy, Spatial-SAM learns to segment large-scale 3D-EM datasets from minimal annotations. Experiments show that Spatial-SAM significantly outperforms existing semi-supervised methods and achieves performance comparable to state-of-the-art fully supervised approaches on multiple 3D-EM benchmarks, reducing annotation cost while preserving spatial consistency. The code will be publicly released upon acceptance.
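To make the SDF memory idea concrete, the sketch below computes a 3D signed distance field from a binary segmentation mask: positive inside the object, negative outside, with magnitude equal to the Euclidean distance to the object boundary. This is only an illustration of the SDF representation itself, not the paper's pipeline (Spatial-SAM predicts SDFs with a 3D U-Net; the sign convention and any normalization here are assumptions). The brute-force pairwise computation is chosen for clarity on toy volumes; practical code would use a fast distance transform.

```python
import numpy as np

def signed_distance_field(mask: np.ndarray) -> np.ndarray:
    """Signed distance field of a binary 3D mask.

    Positive inside the foreground, negative outside (sign convention
    is an assumption; Spatial-SAM's exact convention may differ).
    Brute force O(N*M) for illustration only.
    """
    mask = mask.astype(bool)
    coords = np.argwhere(np.ones_like(mask))  # all voxel coordinates, C order
    fg = np.argwhere(mask)                    # foreground voxel coordinates
    bg = np.argwhere(~mask)                   # background voxel coordinates
    # Distance from every voxel to its nearest foreground / background voxel.
    d_fg = np.sqrt(((coords[:, None, :] - fg[None, :, :]) ** 2).sum(-1)).min(1)
    d_bg = np.sqrt(((coords[:, None, :] - bg[None, :, :]) ** 2).sum(-1)).min(1)
    # Inside: distance to background; outside: negated distance to foreground.
    sdf = np.where(mask.ravel(), d_bg, -d_fg)
    return sdf.reshape(mask.shape)

# Toy example: a 4x4x4 cube of foreground voxels in an 8x8x8 volume.
vol = np.zeros((8, 8, 8), dtype=bool)
vol[2:6, 2:6, 2:6] = True
sdf = signed_distance_field(vol)
```

Unlike a binary mask, such an SDF encodes how far each voxel lies from the object surface, which is the richer geometric signal the memory mechanism exploits.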