Information Theoretic Text-to-Image Alignment
&
RFMI: Estimating Mutual Information on Rectified Flow for Text-to-Image Alignment

¹EURECOM, ²Huawei Technologies SASU, France

We propose MI-TUNE and RFMI-FT, a novel family of self-supervised fine-tuning methods for text-to-image diffusion models and rectified flow models that uses mutual information to align generated images with user intentions expressed through natural-language prompts.

Abstract

Diffusion models for Text-to-Image (T2I) conditional generation have recently achieved tremendous success. Yet, aligning these models with users' intentions still involves a laborious trial-and-error process, and this challenging alignment problem has attracted considerable attention from the research community. In this work, instead of relying on fine-grained linguistic analyses of prompts, human annotation, or auxiliary vision-language models, we use Mutual Information (MI) to guide model alignment. In brief, our method uses self-supervised fine-tuning and relies on a point-wise MI estimate between prompts and images to build a synthetic fine-tuning set that improves model alignment. Our analysis indicates that our method is superior to the state-of-the-art, yet it only requires the pre-trained denoising network of the T2I model itself to estimate MI, together with a simple fine-tuning strategy that improves alignment while maintaining image quality.
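To make the self-supervised loop described above concrete, here is a minimal sketch of it in Python. All helper names (`generate`, `pointwise_mi`, the candidate counts) are hypothetical placeholders, not the authors' actual API: for each prompt, candidate images are sampled from the pre-trained model, each (prompt, image) pair is scored by the point-wise MI estimator, and the top-ranked pairs form the synthetic fine-tuning set.

```python
# Minimal sketch of the self-supervised fine-tuning set construction.
# All names (generate, pointwise_mi, n_candidates, top_k) are illustrative
# assumptions, not the authors' actual interface.
from typing import Callable, List, Tuple

def build_finetuning_set(
    prompts: List[str],
    generate: Callable[[str, int], list],          # T2I model: (prompt, n) -> n images
    pointwise_mi: Callable[[str, object], float],  # point-wise MI estimate for one pair
    n_candidates: int = 16,
    top_k: int = 2,
) -> List[Tuple[str, object]]:
    """For each prompt, keep the candidate images with the highest
    estimated point-wise MI between prompt and image."""
    dataset = []
    for prompt in prompts:
        images = generate(prompt, n_candidates)
        ranked = sorted(images, key=lambda im: pointwise_mi(prompt, im), reverse=True)
        dataset.extend((prompt, im) for im in ranked[:top_k])
    return dataset

# The selected (prompt, image) pairs are then used to fine-tune the same
# pre-trained model with its standard training objective.
```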

Method

The self-supervised fine-tuning algorithm and the point-wise MI estimation algorithm:

[Figure: MI-TUNE self-supervised fine-tuning algorithm]
[Figure: RFMI point-wise MI estimation algorithm for rectified flow]
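As a rough illustration of how the pre-trained denoising network alone can yield a point-wise MI estimate, the sketch below scores one (image, prompt) pair as the gap between unconditional and conditional denoising errors, averaged over random noise levels; this follows the diffusion-ELBO view of log p(x|c) − log p(x). The `model` and `scheduler` interfaces and the Monte-Carlo setup are assumptions for illustration, not the exact RFMI estimator.

```python
import torch

@torch.no_grad()
def pointwise_mi(model, x0, cond, null_cond, scheduler, n_samples=32):
    """Monte-Carlo point-wise MI estimate for one (image, prompt) pair.

    Hypothetical interfaces: `model(x_t, t, c)` is an epsilon-prediction
    denoiser, `scheduler.add_noise` forward-diffuses x0 to timestep t, and
    `null_cond` is the unconditional (empty-prompt) embedding.
    """
    total = 0.0
    for _ in range(n_samples):
        t = torch.randint(0, scheduler.num_steps, (1,), device=x0.device)
        noise = torch.randn_like(x0)
        xt = scheduler.add_noise(x0, noise, t)
        err_cond = (model(xt, t, cond) - noise).pow(2).sum()
        err_null = (model(xt, t, null_cond) - noise).pow(2).sum()
        # A larger error reduction from conditioning means the prompt
        # carries more information about this particular image.
        total += (err_null - err_cond).item()
    return total / n_samples
```

For rectified flow models, the same idea applies with a velocity-prediction network in place of the epsilon-prediction denoiser.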

Experimental Results

Quantitative alignment results on T2I-CompBench:

[Table: quantitative alignment results on T2I-CompBench]

Qualitative visualization:

[Figure: qualitative comparison of generated images]