
Dream-to-Recon: Monocular 3D Reconstruction With Diffusion-Depth Distillation From Single Images


Abstract

Volumetric scene reconstruction from a single image is crucial for a broad range of applications like autonomous driving and robotics. Recent volumetric reconstruction methods achieve impressive results, but generally require expensive 3D ground truth or multi-view supervision. We propose to leverage pre-trained 2D diffusion models and depth prediction models to generate synthetic scene geometry from a single image. This can then be used to distill a feed-forward scene reconstruction model. Our experiments on the challenging KITTI-360 and Waymo datasets demonstrate that our method matches or outperforms state-of-the-art baselines that use multi-view supervision, and offers unique advantages, for example regarding dynamic scenes.
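The abstract outlines a two-stage pipeline: frozen, pre-trained 2D models produce pseudo-ground-truth geometry from a single image, which is then used to distill (supervise) a feed-forward volumetric reconstruction network. The sketch below is a minimal, hypothetical illustration of such a distillation loop in PyTorch. `DepthPredictor`, `FeedForwardRecon`, and `depth_to_occupancy` are placeholder stand-ins, not the paper's actual architecture or losses, and the diffusion-based completion of occluded regions mentioned in the abstract is omitted for brevity.

```python
# Hypothetical sketch of depth-based distillation for single-image
# volumetric reconstruction. All module names and losses are placeholders,
# not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthPredictor(nn.Module):
    """Stand-in for a frozen, pre-trained monocular depth network (the teacher)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    @torch.no_grad()
    def forward(self, img):                      # img: (B, 3, H, W)
        return F.softplus(self.net(img))         # positive depth map (B, 1, H, W)


class FeedForwardRecon(nn.Module):
    """Stand-in for the feed-forward scene reconstruction model (the student)."""
    def __init__(self, depth_bins=64):
        super().__init__()
        self.net = nn.Conv2d(3, depth_bins, kernel_size=3, padding=1)

    def forward(self, img):                      # predicts a frustum occupancy volume
        return torch.sigmoid(self.net(img))      # (B, D, H, W) in [0, 1]


def depth_to_occupancy(depth, depth_bins=64, max_depth=50.0):
    """Convert a depth map into a binary frustum occupancy volume:
    voxels in front of the observed surface are free, those behind it occupied."""
    bins = torch.linspace(0.0, max_depth, depth_bins, device=depth.device)
    bins = bins.view(1, depth_bins, 1, 1)
    return (bins >= depth).float()               # (B, D, H, W)


def distillation_step(student, teacher, images, optimizer):
    """One training step: pseudo-geometry from the frozen teacher
    supervises the feed-forward student."""
    with torch.no_grad():
        pseudo_depth = teacher(images)                    # teacher depth
        target_occ = depth_to_occupancy(pseudo_depth)     # synthetic geometry
    pred_occ = student(images)
    loss = F.binary_cross_entropy(pred_occ, target_occ)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    teacher = DepthPredictor().eval()            # frozen pre-trained model
    student = FeedForwardRecon()                 # model being distilled
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    images = torch.rand(2, 3, 64, 128)           # dummy batch of single images
    print("loss:", distillation_step(student, teacher, images, opt))
```

The point of the sketch is only the supervision structure: no 3D ground truth or multi-view data enters the loop; the only training signal comes from geometry synthesized by frozen 2D models.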



Preprint

Aug. 2025

Authors

P. Wulff • F. Wimbauer • D. Muhle • D. Cremers

Research Area

 B1 | Computer Vision

BibTeX Key: WWM+25
