
Towards Unified Vision Language Models for Forest Ecological Analysis in Earth Observation

MCML Authors

Abstract

Recent progress in vision language models (VLMs) has enabled remarkable perception and reasoning capabilities, yet their potential for scientific regression in Earth Observation (EO) remains largely unexplored. Existing EO datasets mainly emphasize semantic understanding tasks such as captioning or classification, lacking benchmarks that align multimodal perception with measurable biophysical variables. To fill this gap, we present REO-Instruct, the first unified benchmark designed for both descriptive and regression tasks in EO. REO-Instruct establishes a cognitively interpretable logic chain in a forest ecological scenario (human activity, land-cover classification, ecological patch counting, above-ground biomass (AGB) regression), bridging qualitative understanding and quantitative prediction. The dataset integrates co-registered Sentinel-2 and ALOS-2 imagery with structured textual annotations generated and validated through a hybrid human-AI pipeline. Comprehensive evaluation protocols and baseline results across generic VLMs reveal that current models struggle with numeric reasoning, highlighting an essential challenge for scientific VLMs. REO-Instruct offers a standardized foundation for developing and assessing next-generation geospatial models capable of both description and scientific inference.

AI4ES @AAAI 2026

Workshop on AI for Environmental Science at the 40th Conference on Artificial Intelligence. Singapore, Jan 20-27, 2026. To be published. Preprint available.

Authors

X. Xue, X. Zhu

Links

arXiv GitHub

Research Area

C3 | Physics and Geo Sciences

BibTeX Key: XZ26
