# GLNet

**Repository Path**: mirrors_VITA-Group/GLNet

## Basic Information

- **Project Name**: GLNet
- **Description**: [CVPR 2019, Oral] "Collaborative Global-Local Networks for Memory-Efficient Segmentation of Ultra-High Resolution Images" by Wuyang Chen*, Ziyu Jiang*, Zhangyang Wang, Kexin Cui, and Xiaoning Qian
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2022-01-11
- **Last Updated**: 2026-03-30

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# GLNet for Memory-Efficient Segmentation of Ultra-High Resolution Images

[LGTM](https://lgtm.com/projects/g/chenwydj/ultra_high_resolution_segmentation/context:python)
[License: MIT](https://opensource.org/licenses/MIT)

Collaborative Global-Local Networks for Memory-Efficient Segmentation of Ultra-High Resolution Images

Wuyang Chen*, Ziyu Jiang*, Zhangyang Wang, Kexin Cui, and Xiaoning Qian

In CVPR 2019 (Oral). [[YouTube](https://www.youtube.com/watch?v=am1GiItQI88)]

## Overview

Segmentation of ultra-high resolution images is increasingly demanded in a wide range of applications (e.g. urban planning), yet poses significant challenges to algorithm efficiency, particularly with respect to (GPU) memory limits. We propose collaborative **Global-Local Networks (GLNet)** to effectively preserve both global and local information in a highly memory-efficient manner.

* **Memory-efficient**: **training with only one 1080Ti** and **inference with less than 2 GB of GPU memory**, for ultra-high resolution images of up to 30M pixels.
* **High-quality**: GLNet outperforms existing segmentation models on ultra-high resolution images.
*Figure: inference memory vs. mIoU on the DeepGlobe dataset. GLNet (red dots) integrates both global and local information in a compact way, striking a well-balanced trade-off between accuracy and memory usage.*
*Figure: ultra-high resolution datasets: DeepGlobe, ISIC, Inria Aerial.*
*Figure: GLNet architecture. The global and local branches take downsampled and cropped images, respectively. Deep feature map sharing and feature map regularization enforce our global-local collaboration. The final segmentation is generated by aggregating high-level feature maps from the two branches.*
*Figure: deep feature map sharing. At each layer, feature maps carrying global context and feature maps carrying local fine structures are brought together bidirectionally, forming a complete patch-based deep global-local collaboration.*
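The two-branch input pipeline and the global-to-local direction of feature map sharing can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the identity "backbone", numpy-based stride downsampling, and nearest-neighbour upsampling are all stand-in assumptions (GLNet itself uses a CNN backbone and bilinear interpolation, and shares maps in both directions at every layer).

```python
import numpy as np

def downsample(img, factor):
    """Global branch input: stride-subsample the full image
    (an assumed stand-in for bilinear resizing)."""
    return img[::factor, ::factor, :]

def crop_patches(img, patch):
    """Local branch input: tile the full-resolution image into
    non-overlapping patches, keyed by their top-left corner."""
    h, w, _ = img.shape
    return {(i, j): img[i:i + patch, j:j + patch, :]
            for i in range(0, h, patch)
            for j in range(0, w, patch)}

def share_global_to_local(global_map, top, left, patch, factor):
    """Deep feature map sharing, global -> local direction (sketch):
    crop the region of the downsampled global map that corresponds to a
    local patch, then upsample it back to patch resolution by
    nearest-neighbour repetition."""
    g = global_map[top // factor:(top + patch) // factor,
                   left // factor:(left + patch) // factor, :]
    return np.repeat(np.repeat(g, factor, axis=0), factor, axis=1)

# Toy stand-in for an ultra-high-resolution image.
factor, patch = 4, 8
image = np.random.rand(32, 32, 3)

global_in = downsample(image, factor)      # one small global view
local_ins = crop_patches(image, patch)     # many full-resolution patches

# Treat the inputs themselves as feature maps (identity backbone) and
# fuse the shared global context into one local patch channel-wise.
top, left = 8, 16
shared = share_global_to_local(global_in, top, left, patch, factor)
fused = np.concatenate([local_ins[(top, left)], shared], axis=-1)
```

The memory saving comes from the fact that the network only ever processes the small downsampled image plus one patch at a time, never the full-resolution image as a whole.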