Capturing User Intent when Brushing in Data Visualizations

[Figure: Intent-Inference system screenshot]

Abstract

Capturing provenance information of interactive visualization systems is an important step toward the reproducibility of findings. However, provenance data based on logged interactions does not capture the higher-level intent behind those actions: it captures the "what" but not the "why". For recall, reproducibility, and even re-use, however, understanding the why and the reasoning behind an action is critical. Capturing the intent of a user action, however, can currently only be achieved by manually specifying intent. In this paper we introduce a set of methods to infer intent for selections and brushes in scatterplots. We first introduce a taxonomy of the types of patterns that users might specify, which we base on a formative study conducted with professional data analysts and scientists from a variety of fields. Based on this taxonomy, we identify algorithms that can classify a selection into a semantically meaningful pattern. We then introduce a system that implements these methods and scores competing classifications against each other. Analysts can then use these predictions to conveniently capture their intent, while at the same time making a concise representation of that intent available to the system. Beyond capturing intent, we demonstrate that our methods can be used to speed up or correct selections, for example, when analysts missed a small set of points matching their intent. We evaluate our approach in usage scenarios conducted with domain experts.
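The abstract describes the pipeline at a high level: enumerate candidate patterns in the data, score each candidate against the user's brush, and offer the best match as the inferred intent. The sketch below is a hypothetical illustration of that idea, not the authors' implementation: the candidate patterns (k-means clusters and distance-based outliers), the scoring measure (Jaccard similarity), and the function names jaccard, candidate_patterns, and rank_intents are all assumptions made for this example.

# Minimal sketch of intent ranking for a scatterplot brush.
# Candidates are derived from the data; each is scored against the
# user's selection, and the top-ranked pattern is the inferred intent.
import numpy as np
from sklearn.cluster import KMeans

def jaccard(a: set, b: set) -> float:
    """Similarity between two sets of point indices."""
    return len(a & b) / len(a | b) if a | b else 0.0

def candidate_patterns(points: np.ndarray) -> dict:
    """Enumerate simple candidate patterns as sets of point indices."""
    candidates = {}
    # Cluster candidates: each k-means cluster is one candidate intent.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(points)
    for c in range(3):
        candidates[f"cluster {c}"] = set(np.flatnonzero(labels == c))
    # Outlier candidate: points far from the centroid of all data.
    dists = np.linalg.norm(points - points.mean(axis=0), axis=1)
    candidates["outliers"] = set(np.flatnonzero(dists > np.percentile(dists, 90)))
    return candidates

def rank_intents(points: np.ndarray, selection: set) -> list:
    """Score every candidate pattern against the brushed selection."""
    scores = [(name, jaccard(selection, members))
              for name, members in candidate_patterns(points).items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Usage: brushing most of one cluster ranks that cluster first; its full
# member set could then auto-complete the points the analyst missed.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(loc, 0.3, size=(40, 2)) for loc in (0, 3, 6)])
brush = set(range(0, 35))  # most, but not all, of the first cluster
for name, score in rank_intents(data, brush):
    print(f"{name}: {score:.2f}")

In this toy setup the cluster containing the brushed points scores about 0.88 (35 of its 40 members selected), which also illustrates the selection-correction use case mentioned in the abstract: the system can surface the five missed points as likely members of the intended pattern.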

Citation

BibTeX

@article{2020_intent,
  title = {Capturing User Intent when Brushing in Data Visualizations},
  author = {Kiran Gadhave and Jochen Görtler and Oliver Deussen and Miriah Meyer and Jeff Phillips and Alexander Lex},
  journal = {Preprint},
  doi = {10.31219/osf.io/mq2rk},
  year = {2020}
}

Acknowledgements

We thank the domain experts we interviewed for their time and their willingness to provide datasets. We also thank Carolina Nobre for help with our qualitative data analysis. We gratefully acknowledge funding by the National Science Foundation (IIS 1751238).