How Good (Or Bad) Are LLMs at Detecting Misleading Visualizations?
Leo Yu-Ho Lo - The Hong Kong University of Science and Technology, Hong Kong, China
Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China
Room: Bayshore I + II + III
Session time: 2024-10-18T13:30:00Z
Keywords
Deceptive Visualization, Large Language Models, Prompt Engineering
Abstract
In this study, we address the growing issue of misleading charts, a prevalent problem that undermines the integrity of information dissemination. Misleading charts can distort viewers' perception of data, leading to misinterpretations and decisions based on false information. Developing effective automatic detection methods for misleading charts is therefore an urgent research direction. The recent advancement of multimodal Large Language Models (LLMs) offers a promising way to address this challenge. We explored the capabilities of these models in analyzing complex charts and assessed the impact of different prompting strategies on the models' analyses. Using a dataset of misleading charts collected from the internet by prior research, we crafted nine distinct prompts, ranging from simple to complex, to test the ability of four multimodal LLMs to detect over 21 different chart issues. Through three experiments, from initial exploration to detailed analysis, we progressively gained insights into how to effectively prompt LLMs to identify misleading charts, and we developed strategies to address the scalability challenges encountered as we expanded the detection range from the initial five issues to 21 issues in the final experiment. Our findings reveal that multimodal LLMs possess a strong capability for chart comprehension and critical thinking in data interpretation. There is significant potential in employing multimodal LLMs to counter misleading information by supporting critical thinking and enhancing visualization literacy. This study demonstrates the applicability of LLMs in addressing the pressing concern of misleading charts.
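The paper's nine prompts are not reproduced here. As an illustration only, a minimal sketch of how a chart-issue screening prompt could be packaged for a multimodal LLM is shown below, assuming an OpenAI-style chat payload with an inline base64 image; the prompt wording, issue names, and the `build_chart_check_messages` helper are hypothetical, not the authors' actual prompts.

```python
import base64


def build_chart_check_messages(image_bytes: bytes, issues: list[str]) -> list[dict]:
    """Assemble an OpenAI-style multimodal chat payload asking a model to
    screen a chart image for a given list of potential misleading-design issues.

    This is an illustrative sketch: the prompt text is invented, not taken
    from the paper's nine prompts.
    """
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    issue_list = "\n".join(f"- {name}" for name in issues)
    prompt = (
        "You are a visualization reviewer. Examine the attached chart and "
        "report whether it exhibits any of the following issues:\n"
        f"{issue_list}\n"
        "For each issue, answer yes or no with a one-sentence justification."
    )
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                # Inline the chart image as a base64 data URL, the format
                # accepted by OpenAI-compatible vision endpoints.
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ]
```

The resulting message list could be passed to any chat-completions-compatible endpoint that accepts image content parts; scaling from a handful of issues to the paper's 21 would amount to growing the `issues` list or splitting it across multiple calls.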