Troubling Collaboration: Matters of Care for Visualization Design Study

Abstract

A common research process in visualization is for visualization researchers to collaborate with domain experts to solve particular applied data problems. While there is existing guidance and expertise on how to structure collaborations to strengthen research contributions, there is comparatively little guidance on how to navigate the implications of, and the power produced through, the socio-technical entanglements of collaborations. In this paper, we qualitatively analyze reflective interviews with past participants of collaborations from multiple perspectives: visualization graduate students, visualization professors, and domain collaborators. We juxtapose the perspectives of these individuals, revealing tensions about the tools that are built and the relationships that are formed: a complex web of competing motivations. Through the lens of matters of care, we interpret this web, concluding with considerations that both trouble and necessitate reformation of current patterns around collaborative work in visualization design studies to promote more equitable, useful, and care-ful outcomes.

Citation

Derya Akbaba, Devin Lange, Michael Correll, Alexander Lex, Miriah Meyer
Troubling Collaboration: Matters of Care for Visualization Design Study
SIGCHI Conference on Human Factors in Computing Systems (CHI), 1–15, doi:10.1145/3544548.3581168, 2023.

BibTeX

@inproceedings{2023_chi_troubling,
  title = {Troubling Collaboration: Matters of Care for Visualization Design Study},
  author = {Derya Akbaba and Devin Lange and Michael Correll and Alexander Lex and Miriah Meyer},
  booktitle = {SIGCHI Conference on Human Factors in Computing Systems (CHI)},
  publisher = {ACM},
  doi = {10.1145/3544548.3581168},
  pages = {1--15},
  year = {2023}
}

Acknowledgements

Thank you to our interview participants for their trust, candor, and time. We wish to acknowledge Ericka Johnson, Kathryn Norlock, Sheldene Simola, the Vis Collective at LiU, the Vis Design Lab at the U, and our anonymous reviewers for fruitful discussions and feedback. This work is partially funded by the National Science Foundation (OAC 1835904) and by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation.