Daily Papers

by AK and the research community

Jan 6

HyperspectralViTs: General Hyperspectral Models for On-board Remote Sensing

On-board processing of hyperspectral data with machine learning models would enable an unprecedented amount of autonomy for a wide range of tasks, for example methane detection or mineral identification. This could enable early warning systems and allow new capabilities such as automated scheduling across constellations of satellites. Classical methods suffer from high false positive rates, and previous deep learning models exhibit prohibitive computational requirements. We propose fast and accurate machine learning architectures that support end-to-end training on data of high spectral dimension without relying on hand-crafted products or spectral band compression preprocessing. We evaluate our models on two tasks related to hyperspectral data processing. With our proposed general architectures, we improve the F1 score of the previous state-of-the-art methane detection models by 27% on a newly created synthetic dataset and by 13% on the previously released large benchmark dataset. We also demonstrate that training models on the synthetic dataset improves the performance of models finetuned on the dataset of real events by 6.9% in F1 score, in contrast with training from scratch. On a newly created dataset for mineral identification, our models provide a 3.5% improvement in F1 score over the default versions of the models. With our proposed models, we improve inference speed by 85% relative to previous classical and deep learning approaches by removing the dependency on classically computed features. With our architecture, one capture from the EMIT sensor can be processed within 30 seconds on a realistic proxy of the ION-SCV 004 satellite.

  • 2 authors
·
Oct 22, 2024
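The abstract above describes transformer-style models that ingest the full spectral dimension directly, with no hand-crafted products or band compression. As a rough illustration only, here is a minimal PyTorch sketch of such a setup, not the authors' architecture: the band count, patch size, and per-pixel segmentation head are placeholder assumptions, and positional embeddings are omitted for brevity.

```python
# Minimal sketch (not the authors' architecture): a ViT-style encoder that takes
# hyperspectral cubes with the full spectral dimension as input channels and
# predicts a per-pixel map (e.g., a methane plume or mineral mask).
import torch
import torch.nn as nn

class SpectralViTSegmenter(nn.Module):
    def __init__(self, bands=285, patch=8, dim=256, depth=6, heads=8, classes=2):
        super().__init__()
        self.patch = patch
        # Patch embedding: all spectral bands feed straight into the projection,
        # so no band compression or hand-crafted products are needed up front.
        self.embed = nn.Conv2d(bands, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Simple per-patch classifier, upsampled back to pixel resolution.
        self.head = nn.Conv2d(dim, classes, kernel_size=1)

    def forward(self, x):                          # x: (B, bands, H, W)
        tokens = self.embed(x)                     # (B, dim, H/p, W/p)
        b, d, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)    # (B, h*w, dim)
        seq = self.encoder(seq)                    # global attention over patches
        feat = seq.transpose(1, 2).reshape(b, d, h, w)
        logits = self.head(feat)
        return nn.functional.interpolate(logits, scale_factor=self.patch,
                                         mode="bilinear", align_corners=False)

model = SpectralViTSegmenter()
out = model(torch.randn(1, 285, 64, 64))           # -> (1, 2, 64, 64)
```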

Total Nitrogen Estimation in Agricultural Soils via Aerial Multispectral Imaging and LIBS

Measuring soil health indicators (SHIs) is an important and challenging task that affects farmers' decisions on the timing, placement, and quantity of fertilizers applied on their farms. Most existing methods to measure SHIs are in-lab wet chemistry or spectroscopy-based methods, which require significant human input and effort, are time-consuming and costly, and are low-throughput in nature. To address this challenge, we develop an artificial intelligence (AI)-driven, near real-time, unmanned aerial vehicle (UAV)-based multispectral sensing (UMS) solution to estimate total nitrogen (TN) of the soil, an important macro-nutrient and SHI that directly affects crop health. Accurate prediction of soil TN can significantly increase crop yield through informed decision making on the timing of seed planting and on fertilizer quantity and timing. We train two machine learning models, a multi-layer perceptron and a support vector machine, to predict soil nitrogen from a suite of data classes including multispectral characteristics of the soil and crops in the red, near-infrared, and green spectral bands, computed vegetation indices, and environmental variables including air temperature and relative humidity. To generate the ground-truth training data for the machine learning models, we measure the total nitrogen of soil samples (collected from a farm) using laser-induced breakdown spectroscopy (LIBS).

  • 3 authors
·
Jul 5, 2021
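As a minimal sketch of the workflow described above, the snippet below fits a multi-layer perceptron and a support vector machine on band reflectances, a vegetation index, and weather variables, with LIBS-derived total nitrogen as the target. All feature names and values are illustrative placeholders; this is not the authors' pipeline.

```python
# Minimal sketch of the described setup (features and targets are placeholders):
# fit an MLP and an SVM regressor on multispectral / vegetation-index / weather
# features, using LIBS-measured total nitrogen (TN) as the ground truth.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Columns: red, NIR, green reflectance, NDVI, air temperature, relative humidity
X = rng.random((200, 6))
y = rng.random(200)                  # LIBS-derived TN values (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32),
                                      max_iter=2000, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R^2:", r2_score(y_te, model.predict(X_te)))
```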

JWST observations of photodissociation regions III. Dust modelling at the illuminated edge of the Horsehead PDR

Carbonaceous nano-grains are a significant component of interstellar dust and dominate the mid-infrared emission of photodissociation regions (PDRs). We study the evolution of nano-grains across the illuminated edge of the Horsehead PDR, especially their abundance and size properties. This work is part of the Physics and Chemistry of PDR Fronts program studying dust and gas in PDRs with JWST. We use NIRCam+MIRI photometric bands and NIRSpec+MRS spectroscopy to map the illuminated edge. We model dust emission using the THEMIS dust model with the SOC radiative transfer code. Detailed modeling of high angular resolution JWST data allows us to obtain constraints on nano-grain properties. We find that diffuse ISM dust cannot account for the observed data, requiring evolved grains. A sharp density increase is observed at the illuminated edge, consistent with ALMA observations revealing a sharp transition between molecular and ionized gas. Although the PDR length could not be directly determined, we estimate an upper limit of approximately 0.015 pc. This implies a lower limit on small grain abundance (greater than 0.003), showing small grains are not depleted at the Horsehead edge, unlike in the Orion Bar. Our findings indicate a high-density environment and less steep size distribution for nano-grains at the illuminated edge versus the diffuse ISM. This implies nano-grain destruction mechanisms might be less efficient in the Horsehead's moderate-UV field than in more intense PDRs. These results support a model where nano-grain population recovery is slower in moderate-UV environments, leading to a unique dust size distribution at the edge of the Horsehead Nebula.

  • 22 authors
·
Oct 28, 2025

Optimizing Methane Detection On Board Satellites: Speed, Accuracy, and Low-Power Solutions for Resource-Constrained Hardware

Methane is a potent greenhouse gas, and detecting its leaks early via hyperspectral satellite imagery can help mitigate climate change. Meanwhile, many existing missions operate in manual tasking regimes only, thus missing potential events of interest. To overcome slow downlink rates cost-effectively, onboard detection is a viable solution. However, traditional methane enhancement methods are too computationally demanding for resource-limited onboard hardware. This work accelerates methane detection by focusing on efficient, low-power algorithms. We test fast target detection methods (ACE, CEM) that have not previously been used for methane detection and propose Mag1c-SAS, a significantly faster variant of Mag1c, the current state-of-the-art algorithm for methane detection. To explore their true detection potential, we integrate them with machine learning models (U-Net, LinkNet). Our results identify two promising candidates (Mag1c-SAS and CEM), both acceptably accurate for the detection of strong plumes and computationally efficient enough for onboard deployment: one optimized more for accuracy, the other more for speed, achieving up to ~100x and ~230x faster computation than the original Mag1c on resource-limited hardware. Additionally, we propose and evaluate three band selection strategies. One of them can outperform the method traditionally used in the field while using fewer channels, leading to even faster processing without compromising accuracy. This research lays the foundation for future advancements in onboard methane detection with minimal hardware requirements, improving timely data delivery. The produced code, data, and models are open-sourced and can be accessed from https://github.com/zaitra/methane-filters-benchmark.

  • 3 authors
·
Jul 2, 2025
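For context on the fast target detectors mentioned above, the following NumPy sketch implements a basic constrained energy minimization (CEM) filter, one of the methods the paper evaluates. The scene and methane signature are random placeholders; this is a textbook CEM formulation, not the authors' optimized onboard implementation.

```python
# Minimal NumPy sketch of a CEM (constrained energy minimization) detector,
# one of the fast target-detection filters evaluated in the paper. The target
# signature and scene below are random placeholders, not real methane data.
import numpy as np

def cem_scores(pixels, target, eps=1e-6):
    """pixels: (N, B) radiance spectra; target: (B,) target signature."""
    # Sample correlation matrix of the scene (regularized for stability).
    R = pixels.T @ pixels / pixels.shape[0] + eps * np.eye(pixels.shape[1])
    Rinv_t = np.linalg.solve(R, target)
    w = Rinv_t / (target @ Rinv_t)      # CEM filter: w = R^-1 t / (t^T R^-1 t)
    return pixels @ w                   # per-pixel detection score

rng = np.random.default_rng(0)
scene = rng.normal(size=(10_000, 50))   # 10k pixels, 50 bands (placeholder)
signature = rng.normal(size=50)         # placeholder methane signature
scores = cem_scores(scene, signature)
print(scores.shape)                     # (10000,)
```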

METER-ML: A Multi-Sensor Earth Observation Benchmark for Automated Methane Source Mapping

Reducing methane emissions is essential for mitigating global warming. To attribute methane emissions to their sources, a comprehensive dataset of methane source infrastructure is necessary. Recent advancements with deep learning on remotely sensed imagery have the potential to identify the locations and characteristics of methane sources, but there is a substantial lack of publicly available data to enable machine learning researchers and practitioners to build automated mapping approaches. To help fill this gap, we construct a multi-sensor dataset called METER-ML containing 86,599 georeferenced NAIP, Sentinel-1, and Sentinel-2 images in the U.S. labeled for the presence or absence of methane source facilities including concentrated animal feeding operations, coal mines, landfills, natural gas processing plants, oil refineries and petroleum terminals, and wastewater treatment plants. We experiment with a variety of models that leverage different spatial resolutions, spatial footprints, image products, and spectral bands. We find that our best model achieves an area under the precision recall curve of 0.915 for identifying concentrated animal feeding operations and 0.821 for oil refineries and petroleum terminals on an expert-labeled test set, suggesting the potential for large-scale mapping. We make METER-ML freely available at https://stanfordmlgroup.github.io/projects/meter-ml/ to support future work on automated methane source mapping.

  • 10 authors
·
Jul 22, 2022
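A brief sketch of the reported evaluation metric: per-class area under the precision-recall curve, here approximated with scikit-learn's average precision on synthetic multi-label predictions. The class names follow the abstract; the labels and scores are made up.

```python
# Sketch of the reported metric: per-class area under the precision-recall
# curve (average precision), computed with scikit-learn on synthetic labels.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
classes = ["CAFOs", "coal mines", "landfills", "processing plants",
           "refineries & terminals", "wastewater treatment"]
y_true = rng.integers(0, 2, size=(500, len(classes)))   # multi-label ground truth
y_score = rng.random((500, len(classes)))               # model probabilities

for i, name in enumerate(classes):
    auprc = average_precision_score(y_true[:, i], y_score[:, i])
    print(f"{name}: AUPRC = {auprc:.3f}")
```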

Biomolecular Analysis of Soil Samples and Rock Imagery for Tracing Evidence of Life Using a Mobile Robot

The search for evidence of past life on Mars presents a tremendous challenge that requires very advanced robotic technologies to overcome. Current digital microscopic imagers and spectrometers used for astrobiological examination suffer from limitations such as insufficient resolution, narrow detection range, and lack of portability. To overcome these challenges, this study presents modifications to the Phoenix rover that expand its capability to detect a broader spectrum of biosignatures on Mars. One of the notable improvements comprises the integration of advanced digital microscopic imagers and spectrometers, enabling high-resolution examination of soil samples. Additionally, the mechanical components of the device have been reinforced to enhance maneuverability and optimize subsurface sampling capabilities. Empirical investigations have demonstrated that Phoenix can navigate diverse geological environments and procure samples for biomolecular analysis. The biomolecular instrumentation and hybrid analytical methods showcased in this study demonstrate considerable potential for future astrobiology missions on Mars. The potential for enhancing the system lies in the possibility of broadening the range of detectable biomarkers and biosignatures.

  • 5 authors
·
Nov 27, 2024

Advancing global aerosol forecasting with artificial intelligence

Aerosol forecasting is essential for air quality warnings, health risk assessment, and climate change mitigation. However, it is more complex than weather forecasting due to the intricate interactions between aerosol physicochemical processes and atmospheric dynamics, resulting in significant uncertainty and high computational costs. Here, we develop an artificial intelligence-driven global aerosol-meteorology forecasting system (AI-GAMFS), which provides reliable 5-day, 3-hourly forecasts of aerosol optical components and surface concentrations at a 0.5° x 0.625° resolution. AI-GAMFS combines Vision Transformer and U-Net in a backbone network, robustly capturing the complex aerosol-meteorology interactions via global attention and spatiotemporal encoding. Trained on 42 years of advanced aerosol reanalysis data and initialized with GEOS Forward Processing (GEOS-FP) analyses, AI-GAMFS delivers operational 5-day forecasts in one minute. It outperforms the Copernicus Atmosphere Monitoring Service (CAMS) global forecasting system, GEOS-FP forecasts, and several regional dust forecasting systems in forecasting most aerosol variables including aerosol optical depth and dust components. Our results mark a significant step forward in leveraging AI to refine physics-based aerosol forecasting, facilitating more accurate global warnings for aerosol pollution events, such as dust storms and wildfires.

  • 22 authors
·
Dec 3, 2024
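The abstract describes a backbone combining a Vision Transformer and a U-Net. Purely as an illustration of one way such a hybrid can be wired, the sketch below places a Transformer bottleneck inside a small U-Net-style encoder/decoder; the channel counts, depths, and input shape are assumptions, and this is not the AI-GAMFS architecture.

```python
# Illustrative sketch (not the AI-GAMFS architecture): a U-Net-style
# encoder/decoder with a Transformer bottleneck acting on the coarsest
# feature map, combining local convolutions with global attention.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.GELU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.GELU())

class HybridUNet(nn.Module):
    def __init__(self, in_ch=16, out_ch=16, base=32, depth=4, heads=4):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        layer = nn.TransformerEncoderLayer(d_model=base * 2, nhead=heads,
                                           batch_first=True)
        self.bottleneck = nn.TransformerEncoder(layer, num_layers=depth)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):                        # x: (B, in_ch, H, W)
        s1 = self.enc1(x)                        # (B, base, H, W)
        s2 = self.enc2(self.pool(s1))            # (B, 2*base, H/2, W/2)
        b, c, h, w = s2.shape
        seq = s2.flatten(2).transpose(1, 2)      # global attention over coarse grid
        s2 = self.bottleneck(seq).transpose(1, 2).reshape(b, c, h, w)
        d = self.dec(torch.cat([self.up(s2), s1], dim=1))   # skip connection
        return self.head(d)

pred = HybridUNet()(torch.randn(1, 16, 64, 64))  # -> (1, 16, 64, 64)
```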

The interstellar flux gap: From dust to kilometer-scale objects

Context. Three kilometer-sized interstellar objects (ISOs) have been detected transiting the Solar System, and spacecraft have directly measured micrometer-scale interstellar dust (ISD). Yet no intermediate-size interstellar meteoroids have been identified in current meteor surveys. Aims. We test whether a power-law flux extrapolation connecting spacecraft ISD and kilometer-scale ISOs is consistent with meteor surveys, and we quantify the expected interstellar impacting flux based on various observational reports. Methods. We compiled differential fluxes and limits from spacecraft ISD, radar and optical meteor surveys, and theoretical estimates. We evaluated the power-law size-frequency fits, computed the 3I-like flux, and compared measured fluxes to predictions. Results. The spacecraft-measured dust flux exceeds extrapolations constrained by meteor surveys and kilometer-scale ISOs by ~2-7 orders of magnitude. An r^{-3.0} fit combining spacecraft ISD detections with kilometer-scale ISOs overpredicts the number of meteors with hyperbolic orbits, whereas slopes of r^{-2.7}-r^{-2.3} (derived from radar and optical meteor upper limits, respectively) instead yield interplanetary-to-interstellar flux ratios of 10^{3}-10^{6}. Conclusions. A simple power-law from ISD to ISOs is inconsistent with meteor survey constraints and yields unrealistic predictions for interstellar meteoroids. The data reveal a gap between submicron dust entrained in the Local Interstellar Cloud (LIC) and macroscopic bodies ejected from planetary systems. This gap may reflect distinct origins and destruction-transport processes rather than a continuous size-frequency distribution. This would imply either the dominance of a small-particle LIC component or the need to reassess spacecraft dust fluxes.

  • 2 authors
·
Nov 3, 2025
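To make the extrapolation being tested concrete, here is a small NumPy sketch: assuming a differential size-frequency distribution dN/dr ∝ r^-q, a flux measured at one size is scaled to another size for the slopes quoted in the abstract. The reference sizes and flux are placeholders, not the paper's compiled values.

```python
# Sketch of the power-law extrapolation the paper tests: under a differential
# size-frequency distribution dN/dr ~ r^(-q), a flux measured at a reference
# size is rescaled to another size. All reference values are placeholders.
import numpy as np

def extrapolate_flux(flux_ref, r_ref, r, q):
    """Differential flux at size r, scaled from reference size r_ref."""
    return flux_ref * (r / r_ref) ** (-q)

r_dust = 1e-6        # ~1 micrometer, roughly spacecraft-detected ISD sizes
r_iso = 1e3          # ~1 km, roughly the detected interstellar objects
flux_dust = 1.0      # placeholder reference flux (arbitrary units)

for q in (3.0, 2.7, 2.3):        # slopes discussed in the abstract
    ratio = extrapolate_flux(flux_dust, r_dust, r_iso, q)
    print(f"q = {q}: predicted km-scale flux (relative) = {ratio:.3e}")
```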

EarthDial: Turning Multi-sensory Earth Observations to Interactive Dialogues

Automated analysis of vast Earth observation data via interactive Vision-Language Models (VLMs) can unlock new opportunities for environmental monitoring, disaster response, and resource management. Existing generic VLMs do not perform well on Remote Sensing data, while the recent Geo-spatial VLMs remain restricted to a fixed resolution and few sensor modalities. In this paper, we introduce EarthDial, a conversational assistant specifically designed for Earth Observation (EO) data, transforming complex, multi-sensory Earth observations into interactive, natural language dialogues. EarthDial supports multi-spectral, multi-temporal, and multi-resolution imagery, enabling a wide range of remote sensing tasks, including classification, detection, captioning, question answering, visual reasoning, and visual grounding. To achieve this, we introduce an extensive instruction tuning dataset comprising over 11.11M instruction pairs covering RGB, Synthetic Aperture Radar (SAR), and multispectral modalities such as Near-Infrared (NIR) and infrared. Furthermore, EarthDial handles bi-temporal and multi-temporal sequence analysis for applications like change detection. Our extensive experimental results on 44 downstream datasets demonstrate that EarthDial outperforms existing generic and domain-specific models, achieving better generalization across various EO tasks. Our source codes and pre-trained models are at https://github.com/hiyamdebary/EarthDial.

  • 11 authors
·
Dec 19, 2024

DynST: Dynamic Sparse Training for Resource-Constrained Spatio-Temporal Forecasting

The ever-increasing deployment of sensor services, though opening a precious path and providing a deluge of Earth system data for deep-learning-oriented Earth science, sadly introduces a daunting obstacle to industrial-level deployment. Concretely, Earth science systems rely heavily on the extensive deployment of sensors; however, data collection from sensors is constrained by complex geographical and social factors, making it challenging to achieve comprehensive coverage and uniform deployment. To alleviate this obstacle, traditional approaches to sensor deployment utilize specific algorithms to design and deploy sensors. These methods dynamically adjust the activation times of sensors to optimize the detection process across each sub-region. Regrettably, such activation strategies are generally formulated from historical observations and geographic characteristics, which makes the methods and resulting models neither simple nor practical. Worse still, the complex technical design may ultimately lead to a model with weak generalizability. In this paper, we introduce for the first time the concept of dynamic sparse training for spatio-temporal data and adaptively, dynamically filter for important sensor distributions. To our knowledge, this is the first proposal (termed DynST) of an industry-level deployment optimization concept at the data level. However, due to the temporal dimension, pruning of spatio-temporal data may lead to conflicts at different timestamps. To address this, we employ a dynamic merge technique, along with careful dimensional mapping, to mitigate potential impacts caused by the temporal aspect. During training, DynST utilizes iterative pruning and sparse training, repeatedly identifying and dynamically removing the sensor perception areas that contribute the least to future predictions.

  • 8 authors
·
Mar 5, 2024
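As a toy illustration of the iterative pruning idea described above, the sketch below maintains a binary mask over sensor locations and tightens it over several rounds by dropping the lowest-scoring sensors. The importance score (mean absolute observation) is a stand-in, not the paper's contribution criterion, and the sparse-retraining step is only indicated by a comment.

```python
# Toy sketch of iterative sensor pruning in the spirit of DynST: a binary mask
# over sensor locations is tightened each round by dropping the locations with
# the lowest importance score. The score used here is a simple stand-in.
import numpy as np

def iterative_prune(data, keep_frac=0.5, rounds=3):
    """data: (T, S) spatio-temporal observations for S sensor locations."""
    mask = np.ones(data.shape[1], dtype=bool)
    per_round = (1.0 - keep_frac) / rounds
    for _ in range(rounds):
        scores = np.abs(data * mask).mean(axis=0)   # importance of each sensor
        scores[~mask] = np.inf                      # already-pruned stay pruned
        n_drop = int(per_round * data.shape[1])
        drop = np.argsort(scores)[:n_drop]          # least important sensors
        mask[drop] = False
        # ... retrain/finetune the forecaster on masked data here (sparse training)
    return mask

rng = np.random.default_rng(0)
obs = rng.normal(size=(240, 100))    # 240 timestamps, 100 sensor locations
mask = iterative_prune(obs)
print("sensors kept:", mask.sum(), "of", mask.size)
```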