Street-to-satellite image synthesis aims to generate realistic satellite images from corresponding ground street-view images while maintaining a consistent content layout, as if looking down from the sky. The significant difference in perspective creates a substantial domain gap between the views, making this cross-view generation task particularly challenging. In this paper, we introduce SkyDiffusion, a novel cross-view generation method for synthesizing satellite images from street-view images, leveraging diffusion models and the Bird's Eye View (BEV) paradigm. First, we design a Curved-BEV method that transforms street-view images to the satellite view, reformulating the challenging cross-domain image synthesis task as a conditional generation problem. Curved-BEV also includes a "Multi-to-One" mapping strategy that leverages multiple street-view images within the same satellite coverage area, effectively addressing occlusion issues in dense urban scenes. Next, we design a BEV-controlled diffusion model to generate satellite images consistent with the street-view content, which also incorporates a light manipulation module to make the lighting conditions of the synthesized satellite images more flexible. Experimental results demonstrate that SkyDiffusion outperforms state-of-the-art methods on both suburban (CVUSA & CVACT) and urban (VIGOR-Chicago) cross-view datasets, with an average SSIM increase of 13.96% and an FID reduction of 20.54%, achieving realistic and content-consistent satellite image generation. The code and models of this work will be released at https://opendatalab.github.io/skydiffusion/
Overview of SkyDiffusion. This paper introduces a novel method for synthesizing satellite images from corresponding street-view images. Our cross-view synthesis network, SkyDiffusion, first applies a Curved-BEV transformation to the input street-view images, converting the perspective to a top-down view using either a one-to-one or a multi-to-one mapping. The BEV-controlled diffusion model with light manipulation is then employed for the controlled synthesis of satellite images.
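To make the Curved-BEV step concrete, below is a minimal NumPy/OpenCV sketch that maps an equirectangular street-view panorama to a top-down grid by back-projecting each BEV pixel onto a curved sampling surface. The quadratic curve, camera height, and ground range are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np
import cv2

def curved_bev(pano, bev_size=512, ground_range=50.0, cam_height=2.0, curve_k=0.02):
    """Map an equirectangular street-view panorama to a top-down (BEV) image.

    Every BEV pixel is back-projected onto a curved sampling surface
    z(r) = curve_k * r**2 (a flat ground plane when curve_k == 0); the ray
    from the camera to that surface point gives the panorama location to
    sample. curve_k, cam_height, and ground_range are illustrative values.
    """
    H, W = pano.shape[:2]
    # Metric (x, y) coordinates of every BEV pixel, camera at the origin.
    span = np.linspace(-ground_range, ground_range, bev_size)
    x, y = np.meshgrid(span, -span)            # y increases toward the top row
    r = np.hypot(x, y) + 1e-8                  # horizontal distance to camera
    z = curve_k * r ** 2 - cam_height          # surface height relative to camera
    theta = np.arctan2(x, y)                   # azimuth in [-pi, pi)
    phi = np.arctan2(z, r)                     # elevation (negative below horizon)
    # Equirectangular lookup: columns span azimuth, rows span elevation.
    u = (theta / (2 * np.pi) + 0.5) * (W - 1)
    v = (0.5 - phi / np.pi) * (H - 1)
    return cv2.remap(pano, u.astype(np.float32), v.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR)
```

A flat plane (curve_k = 0) only recovers the ground close to the camera; the curved surface also brings distant facades and vegetation into the BEV, which is the intuition behind the Curved-BEV design. For the multi-to-one setting, one simple (hypothetical) fusion is to run this transform once per panorama, shift each BEV output by its camera's offset within the satellite tile, and keep, for every satellite pixel, the sample from the nearest camera.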
On the suburban CVUSA and CVACT datasets, SkyDiffusion achieves outstanding results: compared to state-of-the-art methods, it reduces FID by 25.83% and increases SSIM by 13.89%, demonstrating its superiority in synthesizing realistic and content-consistent satellite images. On the urban VIGOR-Chicago dataset, SkyDiffusion reduces FID by 9.96% and improves SSIM by 14.11% compared to the state-of-the-art method.
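For reference, per-image SSIM and PSNR of the kind reported above can be computed with scikit-image as sketched below; the exact evaluation protocol (resolution, data range, FID implementation) is an assumption here, not taken from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

# Stand-ins for a generated and a ground-truth satellite image (uint8, H x W x 3).
pred = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

ssim = structural_similarity(pred, target, channel_axis=-1, data_range=255)
psnr = peak_signal_noise_ratio(target, pred, data_range=255)
print(f"SSIM={ssim:.4f}  PSNR={psnr:.2f} dB")
# Unlike SSIM/PSNR, FID is computed over the whole test set with a
# separate tool (e.g., pytorch-fid), not per image pair.
```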
"Baseline" represents directly using street-view image,"C-BEV" denotes using Curved-BEV transformation, and "Multi" stands for Multi-to-One strategy.
The ablation experiments in the table below indicate that the Light Manipulation module aligns the lighting conditions of the synthesized images with those of the target-domain images, improving the SSIM and PSNR metrics.
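The light manipulation module itself is learned inside the diffusion model; as a loose, hypothetical illustration of what "aligning lighting conditions" means, the sketch below matches the per-channel mean and standard deviation of a generated image to a reference image. It is a stand-in for intuition only, not the paper's module.

```python
import numpy as np

def match_lighting(generated, reference):
    """Shift each color channel of `generated` to the mean/std of `reference`.

    A hypothetical stand-in for lighting alignment; the paper's light
    manipulation module operates inside the diffusion model instead.
    """
    g = generated.astype(np.float32)
    ref = reference.astype(np.float32)
    out = (g - g.mean(axis=(0, 1))) / (g.std(axis=(0, 1)) + 1e-6)
    out = out * ref.std(axis=(0, 1)) + ref.mean(axis=(0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)
```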
To evaluate cross-dataset generation capability, we trained the model on CVACT and tested it on VIGOR-Chicago, and vice versa. Compared to InstructPix2Pix, our method (w/o light manipulation) demonstrates superior performance across metrics and effectively preserves scene content such as road directions and intersections.
@misc{ye2024skydiffusionstreettosatelliteimagesynthesis,
  title         = {SkyDiffusion: Street-to-Satellite Image Synthesis with Diffusion Models and BEV Paradigm},
  author        = {Junyan Ye and Jun He and Weijia Li and Zhutao Lv and Jinhua Yu and Haote Yang and Conghui He},
  year          = {2024},
  eprint        = {2408.01812},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2408.01812},
}