CoDE-GAN: Content Decoupled and Enhanced GAN for Sketch-guided Flexible Fashion Editing

The Hong Kong Polytechnic University
*Indicates Corresponding Author

The interactive online demo is available here.

Abstract

Rapid advancements in generative models, including generative adversarial networks (GANs) and diffusion models, have enabled automated and efficient image editing guided by text descriptions, semantic segmentation, and/or reference style images. Nevertheless, in the fashion industry, image editing requires more flexible, and typically iterative, modifications to the image content, which existing methods struggle to achieve. This paper proposes a new model called Content Decoupled and Enhanced GAN (CoDE-GAN), which is formulated and trained for the proxy task of image reconstruction, more specifically, sketch-guided image inpainting. Through this proxy task, the trained model can be used for flexible image editing, generating new images with consistent colours and the required textures based on sketch inputs. In this new model, a content decoupling block with specially designed dual encoders is introduced, which pre-processes the inputs and transforms them into separate structure and texture representations. Moreover, a content enhancing module is designed and applied to the decoder, improving the colour consistency and refining the texture of the generated images. The proposed CoDE-GAN achieves coarse-to-fine results in a single stage. Extensive experiments on three datasets, covering human, garment-only and scene images, show that CoDE-GAN outperforms other state-of-the-art methods in terms of both generated image quality and editing flexibility.

Methods

An overview of our proposed CoDE-GAN. It incorporates a Content Decoupling Module to obtain the latent representation fl of the input set x. In the subsequent generation process, a Content Enhancement Module is applied to further improve the consistency between the synthesized textures and the unedited textures.
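To make the decoupling idea concrete, the following is a minimal PyTorch sketch of a dual-encoder block that maps a masked image and a user sketch into separate structure and texture features before fusing them into a latent representation fl. All layer sizes, channel counts, and the fusion scheme here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ContentDecouplingBlock(nn.Module):
    """Hypothetical sketch of the content decoupling idea: two encoders
    produce separate structure and texture representations, which are
    fused into one latent feature map fl (details are assumptions)."""
    def __init__(self, feat=64):
        super().__init__()
        # Structure encoder: consumes the masked RGB image plus the
        # 1-channel sketch, so it can follow user-drawn edges.
        self.structure_enc = nn.Sequential(
            nn.Conv2d(4, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Texture encoder: consumes the masked image alone, so it captures
        # colour/texture statistics independent of the sketch.
        self.texture_enc = nn.Sequential(
            nn.Conv2d(3, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 convolution fuses the two branches into the latent fl.
        self.fuse = nn.Conv2d(2 * feat, feat, 1)

    def forward(self, masked_img, sketch):
        f_struct = self.structure_enc(torch.cat([masked_img, sketch], dim=1))
        f_tex = self.texture_enc(masked_img)
        return self.fuse(torch.cat([f_struct, f_tex], dim=1))

block = ContentDecouplingBlock()
img = torch.randn(1, 3, 256, 256)     # masked RGB image
sketch = torch.randn(1, 1, 256, 256)  # user sketch, single channel
fl = block(img, sketch)
print(fl.shape)  # two stride-2 convs: 256 -> 64 spatially
```

In an inpainting-style editing pipeline, a decoder (where the Content Enhancement Module would sit) would then map fl back to a full-resolution image whose edited region follows the sketch while the rest keeps its original colours.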

BibTeX


@article{zhengwt2023codegan,
  title={{CoDE-GAN: Content Decouple and Enhancement GAN for Flexible Fashion Editing}},
  author={Sun, Zhengwentai and Zhou, Yanghong and He, Honghong and Mok, P. Y.},
  journal={sun-zhengwt.com},
  url={https://taited.github.io/codegan-project/},
  year={2023}
}