Despite significant advancements in Vision-Language Models (VLMs), the performance of existing VLMs remains hindered by object hallucination, a critical challenge to achieving accurate visual understanding. To address this issue, we propose SECOND: Selective and Contrastive Decoding, a novel approach that enables VLMs to effectively leverage multi-scale visual information in an object-centric manner, closely aligning with human visual perception.
SECOND progressively selects and integrates multi-scale visual information, facilitating a more precise interpretation of images. By iteratively contrasting this multi-scale information, SECOND significantly reduces perceptual hallucinations and outperforms prior methods across a wide range of benchmarks. Our theoretical analysis and experiments highlight the largely unexplored potential of multi-scale processing in VLMs, showing that prioritizing and contrasting across scales outperforms existing approaches.
VLMs often suffer from perceptual hallucination, mainly because they uniformly integrate multi-scale patches, mixing object signals with background noise. Inspired by human coarse-to-fine perception, SECOND tackles this by selectively keeping salient patches and enforcing contrastive consistency between coarse and fine stages.
SECOND (Selective and Contrastive Decoding) is a training-free multi-stage framework designed to mitigate perceptual hallucinations in Vision-Language Models (VLMs). SECOND combines selective multi-scale feature integration with multi-stage contrastive decoding to progressively refine object-centric representations and suppress hallucinated outputs.
SECOND constructs a multi-stage visual hierarchy by progressively expanding resolution from coarse to fine. At stage \(s\), the set of patches \( \mathcal{P}^{(s)} \) is selected based on an entropy-guided rule:
\( p_{\text{select}} = \frac{\exp(\lambda \cdot H(V)) - 1}{\exp(\lambda) - 1}, \)
where \(H(V)\) is the entropy of the visual attention distribution, and \(\lambda\) is a scaling hyperparameter. Patches with the top \(p_{\text{select}}\%\) attention scores are retained for the next stage, ensuring that object-relevant regions are progressively emphasized while background noise is suppressed.
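Below is a minimal sketch of this selection step, assuming the attention distribution is given as a 1-D tensor over patches and that \(H(V)\) is normalized by the log of the patch count so that \(p_{\text{select}}\) lies in \([0, 1]\); the normalization and tensor shapes are illustrative assumptions, not the paper's exact implementation.

import torch

def select_patches(attn: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """Entropy-guided patch selection (illustrative sketch).

    attn: 1-D tensor of attention weights over patches (sums to 1).
    Returns indices of the patches retained for the next stage.
    """
    eps = 1e-12
    # Normalized entropy H(V) in [0, 1]; the normalization is an assumption here.
    entropy = -(attn * (attn + eps).log()).sum()
    entropy = entropy / torch.log(torch.tensor(float(attn.numel())))

    # p_select = (exp(lambda * H(V)) - 1) / (exp(lambda) - 1)
    p_select = (torch.exp(lam * entropy) - 1) / (torch.exp(torch.tensor(lam)) - 1)

    # Retain the top p_select fraction of patches by attention score.
    k = max(1, int(torch.ceil(p_select * attn.numel()).item()))
    return attn.topk(k).indices

# Example: 64 coarse-stage patches with a peaked attention distribution.
attn = torch.softmax(torch.randn(64), dim=0)
kept = select_patches(attn, lam=1.0)
print(f"retained {kept.numel()} / 64 patches")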
Building on the hierarchy of outputs, SECOND introduces multi-stage contrastive decoding. Standard Contrastive Decoding contrasts an expert output with a single amateur:
\( \text{logit}_{\text{single}} = \text{logit}_{\text{expert}} + \alpha(\text{logit}_{\text{expert}} - \text{logit}_{\text{amateur}}). \)
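As a minimal, self-contained sketch, the single-amateur contrast applied to next-token logits looks as follows; the toy vocabulary and greedy token selection are assumptions for illustration only.

import torch

def contrastive_logits(expert: torch.Tensor, amateur: torch.Tensor,
                       alpha: float = 1.0) -> torch.Tensor:
    """Standard contrastive decoding: amplify what the expert predicts
    beyond the amateur."""
    # logit_single = logit_expert + alpha * (logit_expert - logit_amateur)
    return expert + alpha * (expert - amateur)

# Toy example with a 5-token vocabulary.
expert = torch.tensor([2.0, 0.5, 0.1, -1.0, 0.0])
amateur = torch.tensor([1.0, 0.6, 0.4, -0.5, 0.2])
adjusted = contrastive_logits(expert, amateur, alpha=1.0)
print(adjusted.argmax().item())  # greedy next token under the contrasted logits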
SECOND generalizes this into a multi-stage setting, leveraging all intermediate “amateur” outputs from the coarser stages; in a 4-stage setup, the final-stage output serves as the expert and is contrasted against the three earlier stages, as sketched below.
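One plausible formulation, writing \(\text{logit}^{(s)}\) for the output logits of stage \(s\) and summing the per-stage contrasts with a shared weight \(\alpha\) (the stage weighting here is an assumption; the paper's exact form may differ):
\( \text{logit}_{\text{multi}} = \text{logit}^{(4)} + \alpha \sum_{s=1}^{3} \bigl( \text{logit}^{(4)} - \text{logit}^{(s)} \bigr). \)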
This hierarchical contrast exploits the progressive refinement of patch selection, amplifying consistent object evidence while canceling out hallucinated signals from earlier stages.
We report main results on the POPE hallucination benchmark, comparing SECOND with baselines and VCD across multiple VLM backbones (LLaVA-Next, LLaVA-OneVision, Yi-VL) and LLMs (Vicuna-7B, Mistral-7B, Qwen2-0.5B, Yi-6B). SECOND consistently outperforms prior methods, achieving 11 out of 12 wins, with substantial gains in recall, accuracy, and F1. These improvements demonstrate SECOND’s effectiveness in mitigating perceptual hallucination while preserving reasoning ability.
Table 1. Results of POPE benchmark. SECOND consistently outperforms baselines and VCD across multiple backbones.
Beyond POPE. On general VQA benchmarks including VQAv2(lite), MMStar, and MMBench(lite), SECOND(+CD) consistently achieves strong performance across diverse backbones and LLMs, further demonstrating its effectiveness beyond hallucination-specific evaluation.
Table 2. Results on VQAv2(lite), MMStar, and MMBench(lite).
@inproceedings{park2025second,
title = {SECOND: Mitigating Perceptual Hallucination in Vision-Language Models via Selective and Contrastive Decoding},
author = {Park, Woohyeon and Kim, Woojin and Kim, Jaeik and Do, Jaeyoung},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning (ICML)},
year = {2025},
series = {Proceedings of Machine Learning Research},
publisher = {PMLR}
}