HiDiffusion: Unlocking Higher-Resolution Creativity and Efficiency in Pretrained Diffusion Models

(ECCV 2024)

Shen Zhang, Zhaowei Chen, Zhenyu Zhao, Yuhao Chen, Yao Tang, Jiajun Liang*
MEGVII Technology
*Indicates corresponding author

Increases the resolution and speed of your diffusion models by adding only a single line of code (see the usage sketch below).

Higher resolution, better visual enjoyment.

Faster speed, better practicality.
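
A minimal usage sketch of the one-line integration, assuming the hidiffusion package released with the project and a Hugging Face diffusers SDXL pipeline (exact names and arguments may differ from the released code):

import torch
from diffusers import StableDiffusionXLPipeline, DDIMScheduler
from hidiffusion import apply_hidiffusion  # assumed entry point of the HiDiffusion package

# Load a pretrained SDXL pipeline as usual.
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionXLPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# The single line: patch the pipeline's U-Net with RAU-Net and MSW-MSA.
apply_hidiffusion(pipe)

# Generate directly at a resolution higher than the model was trained on.
prompt = "An adorable happy brown border collie sitting on a bed, high detail."
image = pipe(prompt, height=2048, width=2048, guidance_scale=7.5).images[0]
image.save("collie_2048.png")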

SDXL, 2048x3072. In the depths of a mystical forest, a robotic owl with night vision lenses for eyes watches over the nocturnal creatures.
SDXL, 2048x4096. Autumn season, a serene mountain lake lies beside a mountain. The leaves are yellow, the blue sky with fluffy clouds adds to the tranquility of the landscape.

Abstract

Diffusion models have become a mainstream approach for high-resolution image synthesis. However, directly generating higher-resolution images from pretrained diffusion models encounters unreasonable object duplication and exponentially increased generation time. In this paper, we discover that object duplication arises from feature duplication in the deep blocks of the U-Net. Concurrently, we pinpoint the extended generation time to self-attention redundancy in the top blocks of the U-Net. To address these issues, we propose a tuning-free higher-resolution framework named HiDiffusion. Specifically, HiDiffusion contains Resolution-Aware U-Net (RAU-Net), which dynamically adjusts the feature map size to resolve object duplication, and Modified Shifted Window Multi-head Self-Attention (MSW-MSA), which utilizes optimized window attention to reduce computation. HiDiffusion can be integrated into various pretrained diffusion models to scale image generation resolution up to 4096×4096 at 1.5-6× the inference speed of previous methods. Extensive experiments demonstrate that our approach resolves object duplication and heavy computation issues, achieving state-of-the-art performance on higher-resolution image synthesis tasks.
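
To make the computation argument concrete, the sketch below illustrates (shifted) window self-attention in isolation: attention is computed inside local windows of the U-Net feature map instead of globally, so the quadratic cost scales with the window size rather than the full token count. This is a simplified, single-head illustration of the general idea, not the paper's exact MSW-MSA implementation.

import torch
import torch.nn.functional as F

def window_self_attention(x, window_size=64, shift=0):
    """Illustrative window attention on a feature map x of shape (B, H, W, C).

    Global self-attention costs O((H*W)^2 * C); restricting attention to
    window_size x window_size windows reduces it to O(H*W * window_size^2 * C).
    A nonzero `shift` cyclically rolls the feature map so that successive
    layers mix information across window boundaries (the "shifted" part).
    """
    B, H, W, C = x.shape
    if shift:
        x = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))

    # Partition into non-overlapping windows: (B * num_windows, window_size**2, C).
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size * window_size, C)

    # Single-head attention within each window (projections and heads omitted).
    attn = F.scaled_dot_product_attention(windows, windows, windows)

    # Reverse the window partition (and the shift) to restore the feature map.
    attn = attn.view(B, H // window_size, W // window_size, window_size, window_size, C)
    out = attn.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
    if shift:
        out = torch.roll(out, shifts=(shift, shift), dims=(1, 2))
    return out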

Method


HiDiffusion framework.

Text-to-Image Task

Image quality comparison with other high-resolution image generation methods

2048x2048. An Astronaut in space playing an electric guitar, stylistic, cinematic, earth visible in the background.
2048x2048. Girl with pink hair, vaporwave style, retro aesthetic, cyberpunk, vibrant, neon colors, vintage 80s and 90s style, highly detailed.
2048x3072. Roger rabbit as a real person, photorealistic, cinematic.
2048x4096. An otherworldly forest with bioluminescent trees, their neon blue leaves casting an ethereal glow on the path below, and curious creatures with gentle eyes peering from behind the glowing trunks.
4096x4096. An adorable happy brown border collie sitting on a bed, high detail.
4096x4096. Standing tall amidst the ruins, a stone golem awakens, vines and flowers sprouting from the crevices in its body.

Efficiency comparison with other acceleration methods


ControlNet Task

2048x2048 image generation with ControlNet. We can generate better images at a faster speed.
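
A hedged sketch of the same one-line integration applied to a diffusers ControlNet pipeline; the checkpoint names and conditioning image below are illustrative assumptions, not the project's exact script:

import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image
from hidiffusion import apply_hidiffusion  # assumed entry point, as above

# Illustrative checkpoints; the project may use different ones.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Same single-line integration as in the text-to-image case.
apply_hidiffusion(pipe)

canny_image = load_image("condition_canny.png")  # hypothetical Canny edge map
image = pipe(
    "a stone golem standing in ancient ruins, photorealistic",
    image=canny_image,
    height=2048,
    width=2048,
).images[0]
image.save("controlnet_2048.png")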

Inpainting Task

2048x2048 image generation on the inpainting task. We can generate better images at a faster speed.

BibTeX

@article{zhang2023hidiffusion,
  title={HiDiffusion: Unlocking Higher-Resolution Creativity and Efficiency in Pretrained Diffusion Models},
  author={Zhang, Shen and Chen, Zhaowei and Zhao, Zhenyu and Chen, Yuhao and Tang, Yao and Liang, Jiajun},
  journal={arXiv preprint arXiv:2311.17528},
  year={2023}
}