Davidsonian Scene Graph: Improving Reliability in Fine-Grained Evaluation for Text-to-Image Generation
Abstract
Evaluating text-to-image models is notoriously difficult. A strong recent approach for
assessing text-image faithfulness is QG/A (question generation and answering), which uses
pre-trained foundation models to automatically generate a set of questions and answers from
the prompt; output images are then scored by checking whether the answers that a visual
question answering (VQA) model extracts from the image are consistent with the prompt-based
answers. This kind of evaluation naturally depends on the quality of the underlying QG
and QA models. We identify and address several reliability challenges in existing QG/A work:
(a) QG questions should respect the prompt (avoiding hallucinations, duplications, and
omissions) and (b) VQA answers should be consistent (not asserting that there is no
motorcycle in an image while also claiming the motorcycle is blue). We address these issues
with Davidsonian Scene Graph (DSG), an empirically grounded evaluation framework inspired by
formal semantics. DSG is an automatic, graph-based QG/A framework, implemented modularly so
that any choice of QG and QA modules can be plugged in. DSG produces atomic, unique
questions organized in dependency graphs, which (i) ensure appropriate semantic coverage and (ii) sidestep
inconsistent answers. With extensive experimentation and human evaluation on a range of
model configurations (LLM, VQA, and T2I), we empirically demonstrate that DSG addresses the
challenges noted above. Finally, we present DSG-1k, an open-sourced evaluation benchmark
that includes 1,060 prompts, covering a wide range of fine-grained semantic categories with
a balanced distribution. We will release the DSG-1k prompts and the corresponding DSG
questions.
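The dependency-graph idea can be illustrated with a minimal sketch. The data structures and the `answer_fn` stand-in below are hypothetical (the real DSG generates questions with an LLM and answers them with a VQA model); the sketch only shows how child questions are skipped when a parent fails, which is how inconsistent answers are sidestepped:

```python
# Minimal sketch of DSG-style dependency-aware QG/A scoring.
# `questions`, `parents`, and `answer_fn` are illustrative stand-ins,
# not the actual DSG interfaces.

def score_image(questions, parents, answer_fn):
    """Score one image against atomic questions with dependencies.

    questions: dict of question id -> question text (ids topologically ordered)
    parents:   dict of question id -> list of prerequisite question ids
    answer_fn: callable(question_text) -> bool (stand-in for a VQA model)

    A question is asked only if all its parents were answered True;
    otherwise it is marked False without querying the VQA model, so the
    evaluator never asserts e.g. "no motorcycle" and "the motorcycle is blue"
    about the same image.
    """
    answers = {}
    for qid in sorted(questions):
        if all(answers.get(p, False) for p in parents.get(qid, [])):
            answers[qid] = answer_fn(questions[qid])
        else:
            answers[qid] = False  # parent failed: skip, count as unfaithful
    return sum(answers.values()) / len(answers), answers


# Example for the prompt "a blue motorcycle":
questions = {1: "Is there a motorcycle?", 2: "Is the motorcycle blue?"}
parents = {2: [1]}

# If the VQA model finds no motorcycle, question 2 is never asked:
score, answers = score_image(questions, parents, lambda q: False)
```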
QG/A: New Paradigm in T2I Alignment Eval
Reliability Issues in Existing QG/A Methods
DSG Solution to the Reliability Issues
Publication
Davidsonian Scene Graph: Improving Reliability in Fine-Grained Evaluation for Text-to-Image Generation
Jaemin Cho, Yushi Hu, Roopal Garg, Peter Anderson, Ranjay Krishna, Jason Baldridge, Mohit Bansal, Jordi Pont-Tuset, Su Wang
ICLR 2024
[PDF] [BibTeX]