OMG3D: 3D Object Manipulation in a Single Image Using Generative Models

Ruisi Zhao1, Zechuan Zhang1, Zongxin Yang1, Yi Yang1
1ReLER, CCAI, Zhejiang University

Applications of OMG3D.

Abstract

Object manipulation in images aims not only to edit an object's presentation but also to endow objects with motion. Previous methods struggle to handle static editing and dynamic motion applications concurrently, and also fall short of achieving realism in object appearance and scene lighting. In this work, we introduce OMG3D, a novel framework that integrates precise geometric control with the generative power of diffusion models, achieving significant improvements in visual performance. Our framework converts 2D objects into 3D, enabling user-directed modifications and lifelike motions at the geometric level. To address texture realism, we propose CustomRefiner, a texture refinement module that pretrains a customized diffusion model to align the style and perspective of coarse renderings with the original image. Additionally, we introduce IllumiCombiner, a lighting processing module that estimates and adjusts background lighting to match human visual perception, resulting in more realistic illumination. Extensive experiments demonstrate the outstanding visual performance of our approach in both static and dynamic scenarios. Remarkably, all of these steps can be performed on a single NVIDIA 3090 GPU. The code and project page will be released upon acceptance of the paper.
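The abstract describes a three-stage pipeline: lifting the 2D object to 3D, refining textures with CustomRefiner, and harmonizing lighting with IllumiCombiner. The sketch below illustrates how these stages might compose; every class and function name here is an illustrative placeholder, not the authors' actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the OMG3D pipeline described in the abstract.
# All names (Mesh, lift_to_3d, custom_refiner, illumi_combiner) are
# assumptions for illustration, not the released implementation.

@dataclass
class Mesh:
    texture: str
    pose: str = "original"

def lift_to_3d(image: str) -> Mesh:
    """Stage 1: convert the 2D object in the image into an editable 3D mesh."""
    return Mesh(texture="coarse")

def custom_refiner(mesh: Mesh, reference_image: str) -> Mesh:
    """Stage 2 (CustomRefiner): refine coarse renderings with a customized
    diffusion model aligned to the style of the reference image."""
    mesh.texture = "refined"
    return mesh

def illumi_combiner(mesh: Mesh, background: str) -> str:
    """Stage 3 (IllumiCombiner): estimate the background lighting and
    composite the relit object back into the scene."""
    return f"composite({mesh.texture}, lit_by={background})"

def omg3d(image: str, background: str, edit: str) -> str:
    mesh = lift_to_3d(image)
    mesh.pose = edit                      # user-directed geometric edit
    mesh = custom_refiner(mesh, image)
    return illumi_combiner(mesh, background)

print(omg3d("pumpkin.png", "stump_scene.png", "rotated"))
```

In this reading, geometric edits (pose changes, animation keyframes) happen on the lifted mesh before texture refinement, so the diffusion model only needs to restore appearance, not geometry.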


Model architecture of OMG3D.



Comparison with other image editing methods.


Text Description: A rotated pumpkin jumping on a stump.

By DynamiCrafter
By Image Sculpting
Ours
By Pika Video Model
By SVD

Text Description: An elephant is walking on the ground.

By DynamiCrafter
By Image Sculpting
Ours
By Pika Video Model
By SVD

Text Description: A box toy greeting on the keyboard.

By DynamiCrafter
By Image Sculpting
Ours
By Pika Video Model
By SVD

Comparison with other image animation methods.


Video Demo