Issue URL:
Thirty Years of the U.S. National Land Cover Database: Impacts and Future Direction
Editor’s Choice
Affiliations: 1: U.S. Geological Survey, Earth Resources Observation and Science Center, Sioux Falls, SD 2: Retired, Environmental Protection Agency, National Exposure Research Laboratory, Durham, NC 3: State University of New York, College of Environmental Science and Forestry, Syracuse, NY 4: National Oceanic and Atmospheric Administration, Office for Coastal Management, Charleston, SC 5: U.S. Forest Service, Rocky Mountain Research Station, Forest Inventory and Monitoring, Riverdale, UT 6: KBR, Inc.
The National Land Cover Database (NLCD), developed through the Multi-Resolution Land Characteristics Consortium, was initiated 30 years ago and has continually provided critical, Landsat-based land-cover and land-change information for the United States. Originally launched to address the lack of national-scale, moderate-resolution land-cover data, NLCD has evolved from the pioneering 1992 dataset into a comprehensive, annually updated product suite. Key innovations include the introduction of impervious surface mapping, forest canopy mapping, standardized Landsat mosaics, national-scale accuracy assessments, the continual evolution of deep learning and artificial intelligence methodologies, and a transition toward operational, change-focused monitoring. The NLCD has become an essential resource for scientific research, land management, and policy development, with extensive adoption across federal, state, and local agencies; academia; and the private sector. The NLCD data underpin a wide array of applications, including biodiversity conservation, urban planning, hydrology, human health studies, and natural hazard assessment. As new global and high-resolution commercial land-cover products emerge, the NLCD continues to distinguish itself through its temporal depth, federal backing, and thematic consistency. Moving forward, the NLCD will maintain its niche as the leading, moderate-resolution, long-term land-cover and land-change dataset for the United States, ensuring continued support for broad national applications while complementing higher-resolution and global-mapping efforts.
Paper URL:
Historical maps that describe past land use and land cover (LULC) can be a valuable source of information in many scientific fields studying long-term spatial and temporal changes in the landscape. In the past, such repositories were created manually for small areas, a time-consuming and labor-intensive task. Recently, there has been a growing tendency to use machine learning models for this purpose, including deep learning methods. However, these methods require a massive amount of labeled data to train the networks. Training data are often manually labeled, posing a significant challenge and limiting the automation of these methods. This article presents a method that uses topographic databases to extract complex multi-class maps representing LULC from historical aerial photographs, eliminating the time-consuming data-labeling step. The method uses transfer learning with a model pretrained on 2020 and 2014 data and attempts to reconstruct LULC types with the same convolutional neural network (CNN) on archived images from 2006. The experiment covered 488 km² and included seven LULC classes. The method was tested using different CNN architectures (U-Net, Pyramid Scene Parsing Network [PSPNet], and LinkNet) with backbones (ResNeXt+SE, EfficientNet, and Inception). The PSPNet‐EfficientNet‐b7 network model achieved the best results, with 90% overall accuracy for predicting LULC classes based on the 2006 archived aerial images.
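The 90% overall accuracy reported here is the standard pixel-wise agreement measure between a predicted label map and a reference map. A minimal sketch of that computation (the class labels and arrays below are illustrative examples, not the paper's data):

```python
def overall_accuracy(reference, predicted):
    """Fraction of pixels whose predicted LULC class matches the reference."""
    if len(reference) != len(predicted):
        raise ValueError("label maps must have the same number of pixels")
    matches = sum(r == p for r, p in zip(reference, predicted))
    return matches / len(reference)

# Illustrative seven-class example (classes 0..6); 8 of 10 pixels agree.
ref  = [0, 1, 2, 3, 4, 5, 6, 1, 2, 0]
pred = [0, 1, 2, 3, 4, 5, 6, 1, 3, 1]
print(overall_accuracy(ref, pred))  # → 0.8
```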
Paper URL:
https://www.ingentaconnect.com/contentone/asprs/pers/2025/00000091/00000010/art00011
When an airborne infrared imaging system detects ships, significant variations in environmental temperature are often encountered. Existing calculation models for the operating range do not take environmental temperature into account. However, changes in environmental temperature alter the system's minimum resolvable temperature difference (MRTD), producing relatively large deviations in predictions of the operating range of the airborne infrared imaging system. To address this technical challenge, this study systematically established the relationship formula of the MRTD under different temperatures. By integrating it with an improved theoretical model of the MRTD, a calculation method for the operating range that takes environmental temperature into consideration was developed to accurately determine the operating range of the airborne infrared imaging system. Comparative experiments on ships show that, compared with traditional methods, the prediction deviation of the proposed method is significantly reduced, with an average reduction of 10.1%.
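The paper's MRTD-versus-temperature formula is not given in the abstract. As a generic illustration of the operating-range idea it describes, the sketch below finds the range at which a target's apparent temperature difference, attenuated by the atmosphere, falls to the system's MRTD. The Beer-Lambert attenuation model, the bisection solver, and every parameter value here are illustrative assumptions, not the paper's model:

```python
import math

def apparent_delta_t(delta_t0, extinction, range_km):
    """Target-background temperature contrast after atmospheric attenuation
    (simple Beer-Lambert model: delta_t0 * exp(-extinction * range))."""
    return delta_t0 * math.exp(-extinction * range_km)

def operating_range(delta_t0, extinction, mrtd, hi=100.0, iters=60):
    """Bisect for the range (km) where the apparent contrast drops to the MRTD."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if apparent_delta_t(delta_t0, extinction, mid) > mrtd:
            lo = mid  # still resolvable: the target remains visible farther out
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative numbers: 4 K ship-sea contrast, 0.2 /km extinction, 0.1 K MRTD.
r = operating_range(delta_t0=4.0, extinction=0.2, mrtd=0.1)
```

Analytically, this solves 4·exp(-0.2R) = 0.1, i.e. R = ln(40)/0.2 ≈ 18.4 km, so the bisection result can be checked against the closed form.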
https://www.ingentaconnect.com/contentone/asprs/pers/2025/00000091/00000010/art00010
Authors: Chen, Yan¹; Shi, Xinlu¹; Wang, Xiaofeng¹; Gu, Qi¹; Zhang, Chen¹; Xu, Lixiang¹; Zhan, Shian²; Yu, Wenle²
Applications of remote sensing images in both defense and civilian sectors have spurred substantial research interest. In the field of remote sensing, object detection confronts challenges such as complex backgrounds, scale diversity, and the presence of dense small objects. To address these issues, we propose an improved deep learning-based model, the Global Multi-scale Fusion Self-calibration Network. It consists of three main components. The hierarchical feature aggregation backbone uses improved modules (the receptive field context-aware feature extraction module, the global information acquisition module, and a simple parameter-free attention module) to extract key features and minimize background interference. To couple multi-scale features, we enhanced the fusion component and designed a multi-scale enhanced pyramid structure integrating the proposed new modules. For the detection phase, especially small-object detection, we designed a novel convolutional attention feature fusion head; it integrates local and global feature-extraction branches, leveraging channel shuffling and multi-head attention mechanisms for efficient and accurate detection. Experiments on the Detection in Optical Remote Sensing Images (DIOR), Northwestern Polytechnical University Very High-Resolution‐10 (NWPU VHR‐10), Remote Sensing Object Detection (RSOD), and DOTA-v1.0 data sets show that our method achieves mAP50 (mean average precision at 50% intersection over union) of 69.7%, 91.3%, 94.2%, and 70.0%, respectively, outperforming existing comparative methods. The proposed network is expected to provide new perspectives for remote sensing tasks and possible solutions for relevant applications in the image domain.
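The mAP50 metric counts a detection as correct when its intersection over union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of that matching criterion (the `(x1, y1, x2, y2)` boxes are illustrative, not the paper's data):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred_box, gt_box, thresh=0.5):
    """mAP50 matching rule: a prediction counts if IoU >= 0.5."""
    return iou(pred_box, gt_box) >= thresh

# A prediction shifted by 1 px against a 10x10 ground-truth box still matches
# (IoU = 81/119 ≈ 0.68), while a badly displaced one does not.
print(is_true_positive((1, 1, 11, 11), (0, 0, 10, 10)))  # → True
```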
Paper URL:
https://www.ingentaconnect.com/contentone/asprs/pers/2025/00000091/00000010/art00007
Stripe Noise Removal of ZY1-02D Hyperspectral Images Using an Improved Three-Dimensional U-Net Network
https://www.ingentaconnect.com/content/asprs/pers/pre-prints/content-25-00095
Scale-adaptive Knowledge Distillation with Superpixel for Hyperspectral Image Classification
Hyperspectral image (HSI) classification is a critical area in remote sensing with broad applications in geoscience. While deep learning methods have gained popularity for HSI classification, their potential remains underexplored due to limited labeled data. To address this, we propose a scale-adaptive knowledge distillation with superpixel framework that trains deep neural networks using unlabeled samples. The proposed framework incorporates three core components: (1) scale-adaptive superpixel knowledge distillation, (2) bilateral spatial–spectral attention mechanisms, and (3) three-dimensional (3D) hyperspectral data transformation. The distillation module implements self-supervised learning through dynamically generated soft labels based on cross-dimensional similarity metrics. The workflow proceeds through three stages: Initially, spatial–spectral joint distance metrics evaluate the affinity between unlabeled superpixels and target classes. Subsequently, these measurements inform probabilistic soft label assignments for each superpixel cluster. Finally, an end-to-end trainable dense convolutional network with dual attention pathways is refined by optimizing the divergence between the adaptive label distributions and network predictions. Additionally, 3D transformations, including spectral and spatial rotations of the HSI cube, are applied to maximize the utility of labeled data. Experiments on three public HSI data sets demonstrate that the proposed method achieves competitive accuracy and efficiency compared to existing approaches. The implementation code is available at https://github.com/San-dow/Awnsome-SAKDS_HSI.
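The distillation module assigns each unlabeled superpixel a probabilistic soft label derived from its spatial-spectral distances to the target classes. A generic sketch of that idea, using a softmax over negative distances so that closer classes receive higher probability (the distance values and the temperature parameter are illustrative assumptions; the paper's exact similarity metric is not given in the abstract):

```python
import math

def soft_labels(distances, temperature=1.0):
    """Convert one superpixel's distances to each class into a soft label:
    a softmax over negative distances, so the nearest class dominates."""
    scores = [math.exp(-d / temperature) for d in distances]
    total = sum(scores)
    return [s / total for s in scores]

# Illustrative distances from one superpixel to three candidate classes.
probs = soft_labels([0.2, 1.5, 3.0])
# probs sums to 1, and the nearest class (distance 0.2) gets the highest weight
```

A lower temperature sharpens the distribution toward a hard label; a higher one keeps the assignment soft, which is what makes divergence-based distillation training possible.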
Paper URL:
https://www.ingentaconnect.com/content/asprs/pers/pre-prints/content-25-00079r2
A Novel Multi-level Feature Collaborative Matching Network for Optical and Synthetic Aperture Radar Image Registration
Due to the complementary characteristics of synthetic aperture radar (SAR) and optical images, image registration as a prerequisite for their information fusion has received increasing attention. Currently, learning-based methods can better handle the significant radiometric and geometric differences between optical and SAR images compared to traditional registration approaches, but they still have limitations in distinguishing difficult samples, making high-precision registration a remaining challenge. To address these challenges, this paper proposes a multi-level feature collaborative matching network (MFC-Net) that effectively integrates high-level abstract features and low-level spatial features for precise registration. Furthermore, a novel dual-dimension joint attention module (DDJA) is designed to dynamically capture feature dependencies across both channel and spatial dimensions, enhancing cross-modal feature consistency and improving matching performance. Additionally, to address the problem of similarity between hard positive and negative samples caused by high-precision registration requirements, a dynamic differentiation factor is introduced at the loss function level, enabling the model to better distinguish between these similar samples in training. Extensive experiments conducted on the WHU-OPT-SAR data set and WHU-SEN-City data set demonstrate that the proposed MFC-Net outperforms state-of-the-art methods in both matching accuracy and precision, validating its superiority in cross-modal image registration tasks.
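The abstract does not specify the form of the dynamic differentiation factor introduced at the loss level. As one generic illustration of the idea, the sketch below up-weights the margin penalty for hard negatives (negatives whose descriptor distance is close to that of the positive), so training focuses on exactly the samples the abstract says are difficult to distinguish. The margin-based loss and all parameters here are assumptions, not MFC-Net's actual formulation:

```python
def weighted_triplet_loss(d_pos, d_neg, margin=0.5, gamma=2.0):
    """Margin loss where harder pairs (d_neg close to or below d_pos) are
    up-weighted by a differentiation factor (1 + hardness) ** gamma."""
    violation = max(0.0, d_pos - d_neg + margin)
    if violation == 0.0:
        return 0.0                        # easy pair: margin already satisfied
    hardness = min(1.0, violation / margin)  # 1.0 for the hardest pairs
    factor = (1.0 + hardness) ** gamma       # dynamic differentiation factor
    return factor * violation

# Easy negative: no loss. Hard negative: the penalty is amplified.
easy = weighted_triplet_loss(d_pos=0.2, d_neg=1.0)   # → 0.0
hard = weighted_triplet_loss(d_pos=0.2, d_neg=0.3)
```

Setting `gamma=0` recovers a plain triplet margin loss, which makes the factor's effect easy to ablate.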
Paper URL:
https://www.ingentaconnect.com/content/asprs/pers/pre-prints/content-25-00052r3
An Efficient Irregular Texture Nesting Method via Hybrid NFP-SADE with Adaptive Container Resizing
Efficient irregular texture nesting, which is necessary for improving the efficiency of texture mapping and 3D model rendering, especially for large-scale 3D reconstruction tasks, has emerged as a critical research topic in the fields of photogrammetry, computer graphics, and computer vision. However, persistent inefficiencies and high computational costs in existing texture nesting algorithms pose significant challenges when dealing with vast quantities of irregularly shaped texture patches. To solve this problem, this work presents an efficient, well-structured texture nesting method for reorganizing irregular textures in a space-efficient and time-efficient way. More specifically, a hybrid optimization approach is proposed that integrates an enhanced no-fit polygon (NFP) method with an improved simplified atavistic differential evolution (SADE) algorithm. The canonical SADE is reformulated and tailored for texture nesting optimization, and a novel self-adaptive container resizing strategy is used to surpass traditional NFP approaches in polygon processing efficiency. The experimental results demonstrate that the proposed method significantly improves irregular texture nesting efficiency, achieving speed improvements of up to 5.44 times compared with the common genetic algorithm–based method and 5.21 times over the simulated annealing–based method. Furthermore, it consistently improves space use by approximately 6.56%, indicating a more effective layout strategy and optimized resource use. Code is available at https://github.com/louliyuan/NFP-SADE-With-Adaptive-Container-Resizing.
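The adaptive container-resizing idea can be illustrated independently of the NFP machinery: attempt to pack the pieces, and whenever one no longer fits, grow the container and retry. The sketch below does this with axis-aligned rectangles and a simple shelf packer standing in for the paper's irregular polygons; all of it is an illustrative simplification, not the NFP-SADE algorithm:

```python
def shelf_pack(pieces, width, height):
    """Place (w, h) rectangles on horizontal shelves inside a width x height
    container; return the placements, or None if some piece will not fit."""
    placements, x, y, shelf_h = [], 0.0, 0.0, 0.0
    for w, h in pieces:
        if x + w > width:                  # current shelf full: start a new one
            x, y, shelf_h = 0.0, y + shelf_h, 0.0
        if y + h > height or w > width:
            return None                    # piece does not fit in this container
        placements.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return placements

def pack_with_adaptive_resize(pieces, width, height, grow=1.2):
    """Adaptive container resizing: enlarge the container until packing succeeds."""
    while True:
        result = shelf_pack(pieces, width, height)
        if result is not None:
            return result, (width, height)
        width, height = width * grow, height * grow

# Three 4x2 pieces do not fit in a 5x3 container, so it is grown until they do.
layout, container = pack_with_adaptive_resize([(4, 2)] * 3, 5.0, 3.0)
```

The full method searches over piece orderings and rotations with SADE and tests placements with the NFP; this sketch only shows how the container dimensions adapt when a candidate layout fails.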
Paper URL:
https://www.ingentaconnect.com/content/asprs/pers/pre-prints/content-25-00038r3

