Do You See What I See? Eliciting High-Level Visualization Comprehension
Abstract
Designers often create visualizations to achieve specific high-level analytical or communication goals. These goals require people to naturally extract complex, contextualized, and interconnected patterns in data. While limited prior work has studied general high-level interpretation, prevailing perceptual studies of visualization effectiveness primarily focus on isolated, predefined, low-level tasks, such as estimating statistical quantities. This study more holistically explores visualization interpretation to examine the alignment between designers’ communicative goals and what their audience sees in a visualization, which we refer to as their comprehension. We found that statistics people effectively estimate from visualizations in classical graphical perception studies may differ from the patterns people intuitively comprehend in a visualization. We conducted a qualitative study on three types of visualizations—line graphs, bar graphs, and scatterplots—to investigate the high-level patterns people naturally draw from a visualization. Participants described a series of graphs using natural language and think-aloud protocols. We found that comprehension varies with a range of factors, including graph complexity and data distribution. Specifically, 1) a visualization’s stated objective often does not align with people’s comprehension, 2) results from traditional experiments may not predict the knowledge people build with a graph, and 3) chart type alone is insufficient to predict the information people extract from a graph. Our study confirms the importance of defining visualization effectiveness from multiple perspectives to assess and inform visualization practices.
Citation
Ghulam Jilani Quadri, Zeyu Wang, Zhehao Wang, Jennifer Adorno Nieves, Paul Rosen, and Danielle Albers Szafir. Do You See What I See? Eliciting High-Level Visualization Comprehension. ACM SIGCHI Conference on Human Factors in Computing Systems, 2024.
Bibtex
@inproceedings{quadri2024hlc,
  title     = {Do You See What I See? Eliciting High-Level Visualization Comprehension},
  author    = {Quadri, Ghulam Jilani and Wang, Zeyu and Wang, Zhehao and Nieves, Jennifer Adorno and Rosen, Paul and Szafir, Danielle Albers},
  booktitle = {ACM SIGCHI Conference on Human Factors in Computing Systems},
  year      = {2024},
  abstract  = {Designers often create visualizations to achieve specific high-level analytical or communication goals. These goals require people to naturally extract complex, contextualized, and interconnected patterns in data. While limited prior work has studied general high-level interpretation, prevailing perceptual studies of visualization effectiveness primarily focus on isolated, predefined, low-level tasks, such as estimating statistical quantities. This study more holistically explores visualization interpretation to examine the alignment between designers' communicative goals and what their audience sees in a visualization, which we refer to as their \textit{comprehension}. We found that statistics people effectively estimate from visualizations in classical graphical perception studies may differ from the patterns people intuitively comprehend in a visualization. We conducted a qualitative study on three types of visualizations---line graphs, bar graphs, and scatterplots---to investigate the high-level patterns people naturally draw from a visualization. Participants described a series of graphs using natural language and think-aloud protocols. We found that comprehension varies with a range of factors, including graph complexity and data distribution. Specifically, 1) a visualization's stated objective often does not align with people's comprehension, 2) results from traditional experiments may not predict the knowledge people build with a graph, and 3) chart type alone is insufficient to predict the information people extract from a graph. Our study confirms the importance of defining visualization effectiveness from multiple perspectives to assess and inform visualization practices.}
}