UrbanGIRAFFE: Representing Urban Scenes as Compositional
Generative Neural Feature Fields

ICCV 2023

Controllable 3D-aware Image Synthesis (top) & Panoptic Prior (bottom)

Abstract

Overview

Generating photorealistic images with controllable camera pose and scene content is essential for many applications, including AR/VR and simulation. Although rapid progress has been made in 3D-aware generative models, most existing methods focus on object-centric images and cannot generate urban scenes with free camera viewpoint control and scene editing. To address this challenging task, we propose UrbanGIRAFFE, which uses a coarse 3D panoptic prior, including the layout distribution of uncountable stuff and countable objects, to guide a 3D-aware generative model. Our model is compositional and controllable as it breaks down the scene into stuff, objects, and sky. Using the stuff prior in the form of semantic voxel grids, we build a conditioned stuff generator that effectively incorporates the coarse semantic and geometric information. The object layout prior further allows us to learn an object generator from cluttered scenes. With proper loss functions, our approach facilitates photorealistic 3D-aware image synthesis with diverse controllability, including large camera movement, stuff editing, and object manipulation. We validate the effectiveness of our model on both synthetic and real-world datasets, including the challenging KITTI-360 dataset.
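To make the compositional idea concrete, below is a minimal NumPy sketch (not the authors' code) of how a stuff branch conditioned on a semantic voxel grid, an object branch conditioned on a 3D layout, and a sky code can be composited along a camera ray into a feature that a 2D renderer would later decode. All function names, shapes, and latent codes (query_stuff, query_object, z_stuff, z_obj, z_sky) are illustrative assumptions, not the actual UrbanGIRAFFE implementation.

```python
# Illustrative sketch of compositional feature-field rendering:
# stuff + object densities/features are blended per sample, then
# alpha-composited front to back; leftover transmittance goes to the sky.

import numpy as np

def query_stuff(points, semantic_voxels, z_stuff):
    """Toy stand-in for the conditioned stuff generator: (density, feature)."""
    # Nearest-voxel lookup of coarse semantics (points assumed in grid coords).
    idx = np.clip(points.astype(int), 0, np.array(semantic_voxels.shape) - 1)
    sem = semantic_voxels[idx[:, 0], idx[:, 1], idx[:, 2]]
    density = (sem > 0).astype(float)       # occupied voxels contribute density
    feature = np.outer(sem, z_stuff)        # feature modulated by a scene latent
    return density, feature

def query_object(points, obj_center, obj_size, z_obj):
    """Toy object branch: density only inside the object's 3D bounding box."""
    inside = np.all(np.abs(points - obj_center) < obj_size / 2, axis=1)
    density = inside.astype(float)
    feature = np.outer(density, z_obj)
    return density, feature

def composite_ray(points, voxels, obj_center, obj_size, z_stuff, z_obj, z_sky):
    """Compose stuff + object fields along one ray and blend with the sky."""
    d_s, f_s = query_stuff(points, voxels, z_stuff)
    d_o, f_o = query_object(points, obj_center, obj_size, z_obj)
    density = d_s + d_o
    # Density-weighted mixture of the two branches at each sample.
    feature = np.where(density[:, None] > 0,
                       (d_s[:, None] * f_s + d_o[:, None] * f_o)
                       / np.maximum(density[:, None], 1e-8),
                       0.0)
    alpha = 1.0 - np.exp(-density)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    # Remaining transmittance after the last sample falls back to the sky code.
    return (weights[:, None] * feature).sum(axis=0) + trans[-1] * (1.0 - alpha[-1]) * z_sky

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    voxels = rng.integers(0, 3, size=(16, 16, 16))     # coarse semantic voxel grid
    samples = np.linspace([0, 8, 8], [15, 8, 8], 32)   # 32 samples along one ray
    feat = composite_ray(samples, voxels,
                         obj_center=np.array([8.0, 8.0, 8.0]),
                         obj_size=np.array([3.0, 3.0, 3.0]),
                         z_stuff=rng.normal(size=16),
                         z_obj=rng.normal(size=16),
                         z_sky=rng.normal(size=16))
    print(feat.shape)   # (16,) per-ray feature, later decoded to RGB in 2D
```

In the sketch, editing the semantic voxel grid (e.g., relabeling road voxels as grass) or moving the object bounding box changes the composited feature directly, which is the mechanism behind the stuff and object editing results shown below.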

Viewpoint Control on KITTI-360

Move Forward

Camera Pose Interpolation

Stuff Editing on KITTI-360

Road to Grass

Building to Tree

Building Lower

Object Editing on KITTI-360

Controllable Image Synthesis on CLEVR-W

Citation

@inproceedings{Yang2023,
  author    = {Yuanbo Yang and Yifei Yang and Hanlei Guo and Rong Xiong and Yue Wang and Yiyi Liao},
  title     = {UrbanGIRAFFE: Representing Urban Scenes as Compositional Generative Neural Feature Fields},
  booktitle = {IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2023}
}

Acknowledgements


The website template was borrowed from Jon Barron.