September 27, 2023
Training text-to-image models on web-scale image-text pairs enables the generation of a wide range of visual concepts from text. However, these pre-trained models often struggle to generate highly aesthetic images, which creates the need for aesthetic alignment after pre-training. In this paper, we propose quality-tuning to effectively guide a pre-trained model to exclusively generate highly visually appealing images, while maintaining generality across visual concepts. Our key insight is that supervised fine-tuning on a surprisingly small set of exceptionally visually appealing images can significantly improve generation quality. We pre-train a latent diffusion model on 1.1 billion image-text pairs and fine-tune it with only a few thousand carefully selected high-quality images. The resulting model, Emu, achieves a win rate of 82.9% against its pre-trained-only counterpart. Compared to the state-of-the-art SDXL v1.0, Emu is preferred on visual appeal 68.4% and 71.3% of the time on the standard PartiPrompts benchmark and on our Open User Input benchmark, which is based on real-world usage of text-to-image models. In addition, we show that quality-tuning is a generic approach that is also effective for other architectures, including pixel diffusion and masked generative transformer models.
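The recipe described above — take a pre-trained diffusion model and continue supervised training at a low learning rate on a small, hand-curated set of high-quality examples — can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the paper's implementation: `TinyDenoiser` is a stand-in for a real latent diffusion U-Net, the forward process is a simplified linear interpolation between data and noise, and all hyperparameters (batch size, steps, learning rate) are illustrative assumptions.

```python
import torch
from torch import nn

torch.manual_seed(0)

class TinyDenoiser(nn.Module):
    """Stand-in for a pre-trained denoiser (Emu uses a latent diffusion U-Net)."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, z_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on the diffusion time by appending it as an extra feature.
        return self.net(torch.cat([z_t, t[:, None]], dim=-1))

def quality_tune(model: nn.Module, curated_latents: torch.Tensor,
                 steps: int = 200, lr: float = 1e-5) -> float:
    """Fine-tune with a standard noise-prediction objective on a small,
    hand-curated set of high-quality examples (the quality-tuning idea).
    A low learning rate helps preserve the generality learned in pre-training."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss = torch.tensor(0.0)
    for _ in range(steps):
        # Sample a mini-batch from the few thousand curated examples.
        z0 = curated_latents[torch.randint(len(curated_latents), (8,))]
        t = torch.rand(len(z0))                      # diffusion time in [0, 1]
        eps = torch.randn_like(z0)
        # Simplified forward process: linear interpolation between data and noise.
        z_t = (1 - t[:, None]) * z0 + t[:, None] * eps
        # Train the model to predict the noise that was added.
        loss = ((model(z_t, t) - eps) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()
```

In the paper's setting the curated set contains only a few thousand images, orders of magnitude smaller than the 1.1 billion-pair pre-training corpus; the point the sketch illustrates is that the fine-tuning loop is entirely standard — the leverage comes from the extreme quality of the small dataset, not from a new objective.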
Written by
Xiaoliang Dai
Kevin Chih-Yao Ma
Sam Tsai
Peizhao Zhang
Simon Vandenhende
Xiaofang Wang
Matthew Yu
Abhishek Kadian
Kunpeng Li
Yue (R) Zhao
Vladan Petrovic
Simran Motwani
Yiwen Song
Yi Wen
Zijian He
Peter Vajda
Publisher
Meta