SCIENTIFIC COMPUTING AND IMAGING INSTITUTE
at the University of Utah

An internationally recognized leader in visualization, scientific computing, and image analysis

[Banner image: abstract graphic with black lines and red, yellow, and blue shapes]
The University of Utah Scientific Computing and Imaging (SCI) Institute will have one of its biggest years yet at the flagship conference for human-computer interaction, or CHI, which will be held later this month in Yokohama, Japan. SCI contributions include four papers, one of which won an honorable mention award, along with a software course and a paper in “alt.chi,” a forum for boundary-pushing research.

CHI brings together an interdisciplinary group of over 3,000 people—from computer scientists to psychologists to ergonomists—who “investigate and design new and creative ways for people to interact using technology,” according to conference organizers. The event, which dates to 1983, has served as a hub for the latest research and products in interactive systems including social networking, wearable devices, smart homes, and more.

A CHI paper is an accomplishment in itself: this year’s conference accepted only 25% of its 5,014 submissions. Andrew McNutt, a SCI faculty member and assistant professor in the Kahlert School of Computing (KSoC), served as lead author on a paper that won an honorable mention, a top-5% honor. KSoC associate professor Jason Wiese also secured an honorable mention, further boosting the U’s presence at CHI.

Below, McNutt and other SCI members discuss their CHI papers and why they matter.

“Slowness, Politics, and Joy: Values That Guide Technology Choices in Creative Coding Classrooms”

This paper, one of 201 to secure an honorable mention, builds on McNutt’s previous research on creative coding—a cornerstone of computer science education that often involves using programming to make graphical art. In a 2023 CHI paper, McNutt analyzed student interactions with creative coding tools and identified ways to improve those tools. That work left him wondering about the other side of the classroom: which creative coding tools do teachers choose, and why?

McNutt and his co-authors interviewed 12 people who have built creative coding tools or used such tools to teach middle schoolers, graduate students, people with disabilities, and more. The team identified three themes that influence educators when they choose or build these tools:
  • The first is slowness, or whether the tool encourages students to “sweat the details to learn something,” McNutt said.
  • The second is politics—for instance, whether a creative coding tool is free or requires a paid subscription.
  • The third is joy. Creative coding should be fun, but it's not only about play. “There's joy in being part of a community,” McNutt said. Tool documentation that is welcoming, as opposed to condescending, is one way to build that joy, he said.

The paper encourages people, especially human-computer interaction researchers and professionals, to consider the broader implications of creative coding tools. “The decisions we make affect us as people, not merely as technologists,” he said. “We should design and value technologies with that in mind.”

Banner graphic: McNutt often turns his papers into zines, or self-published mini-magazines. This image is part of the visual identity that McNutt designed for this paper and its zine.

“Visualization Guardrails: Designing Interventions Against Cherry-Picking in Interactive Data Explorers”

According to lead author Maxim Lisnic, a Ph.D. student working with SCI faculty member Alexander Lex and KSoC assistant professor Marina Kogan, this paper builds on the team’s previous research on COVID-skeptic posts on Twitter. “We saw a common trick: people would share a screenshot from a trusted COVID dashboard, like Our World In Data, but only include one or two countries to make their point,” he said. So his team looked at how data explorers might stop people from creating misleading visualizations in the first place. “In this study, we came up with different ways to automatically add back the missing data to the chart. We tested these ideas to see if they made it harder to cherry-pick and less convincing when people tried. They worked—but only when the added information looked simple and matched the original chart.”
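To make the intervention concrete, here is a minimal sketch of one guardrail in that spirit. This is not the paper’s implementation; the function and the data are hypothetical. When a viewer filters a line chart down to a single series, the filtered-out series are drawn back in as de-emphasized gray context lines, so a screenshot of the view still shows the fuller picture:

```python
# A hypothetical "guardrail": when the user filters down to a few series,
# draw the filtered-out series back in as de-emphasized context lines.
import matplotlib.pyplot as plt

def plot_with_context(data, selected):
    """data: dict of series name -> (x values, y values).
    selected: set of names the user chose to display."""
    fig, ax = plt.subplots()
    for name, (x, y) in data.items():
        if name in selected:
            ax.plot(x, y, linewidth=2.5, label=name)             # highlighted
        else:
            ax.plot(x, y, linewidth=0.8, color="0.8", zorder=0)  # gray context
    ax.legend(title="Selected")
    return fig

# Even with only "Utopia" selected, the other countries stay visible,
# so a screenshot cannot silently drop them.
data = {
    "Utopia":    ([2020, 2021, 2022], [5, 9, 12]),
    "Freedonia": ([2020, 2021, 2022], [4, 11, 18]),
    "Oceania":   ([2020, 2021, 2022], [6, 7, 15]),
}
plot_with_context(data, {"Utopia"})
plt.show()
```

The study’s finding suggests why the styling choices here matter: re-added context persuaded viewers only when it stayed visually simple and consistent with the original chart.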

“Plume: Scaffolding Text Composition in Dashboards”

In this paper, Lisnic and collaborators at Tableau Research explain how they used past work and new advances in generative AI to build a system that helps creators of visualization dashboards write better text. “It suggests what kind of text to add, where to put it, and how to format it to keep things clear and easy to follow,” Lisnic said. It can even generate text based on the data and charts in the dashboard, he added. “What’s especially exciting about our system design is that the structure we provide to maintain the hierarchy and proper reading order of the text helps not only write better text but also (I think) make better dashboard layouts in general.”
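As a rough illustration of the scaffolding idea, consider the hypothetical sketch below. It is not Plume’s actual system, which pairs a structure like this with generative AI; here a plain template stands in for the model. The point is the shape of the output: a proposal for what text the dashboard needs, where each piece goes, and in what reading order:

```python
# Hypothetical sketch of text scaffolding: propose what text a dashboard
# needs, where it goes, and in what reading order. A template stands in
# for the generative model here.

def suggest_text_scaffold(title, charts):
    """charts: list of {'name': ..., 'takeaway': ...} dicts, in layout order."""
    scaffold = [
        {"role": "title",   "where": "top of dashboard", "text": title},
        {"role": "summary", "where": "below the title",
         "text": "This dashboard has %d views: %s."
                 % (len(charts), ", ".join(c["name"] for c in charts))},
    ]
    for c in charts:  # one caption slot per chart, preserving reading order
        scaffold.append({"role": "caption",
                         "where": "under " + c["name"],
                         "text": c["takeaway"]})
    return scaffold

for slot in suggest_text_scaffold(
    "Q3 Retail Dashboard",
    [{"name": "Sales by region", "takeaway": "The West leads every quarter."},
     {"name": "Returns over time", "takeaway": "Returns drop after Q2."}],
):
    print(slot["role"].upper(), "@", slot["where"], "->", slot["text"])
```

Keeping the title, summary, and captions in an explicit hierarchy is what Lisnic credits with improving not just the text but, in his view, the dashboard layouts themselves.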

“Crowdsourced Think-Aloud Studies” and a reVISit course

Lead author Zachary Cutler and his team built a tool to allow anyone online to participate in a think-aloud study, a feedback-gathering method typically done in person. The most exciting result covered in the paper? Online study participants talked—a lot. “We actually found that they talk more online,” said Cutler, a Ph.D. student working with Lex. “It was very useful feedback as well. The quality of what they said was very similar to in person.” The work is important because it makes think-aloud studies more accessible, particularly for other researchers. “Crowdsourcing has a lot of benefits,” Cutler said. “We can access people around the world and it's much more time efficient.”

The tool is part of reVISit, open-source software that empowers visualization researchers to create and maintain ownership of sophisticated user studies. Lex and his reVISit co-leads launched the software last June and continue to add features. ReVISit team members, including Cutler and SCI software developer Jack Wilburn, will offer a course at CHI on using reVISit for online user studies.

“Linting is People! Exploring the Potential of Human Computation as a Sociotechnical Linter of Data Visualizations”

McNutt co-authored this paper, which was accepted to alt.chi—“a forum for controversial, risk-taking, and boundary-pushing research at CHI.” In programming, linting refers to the automatic detection of coding errors, akin to spell check in word-processing software. The paper explores whether linters can be viewed as tools for communities rather than individuals. Instead of “code cops,” linters could be “policy enforcers” that reflect a community standard. “It provokes readers to see day-to-day entities in a new light: What is a linter and what is not? Is the stock market a linter? What about social media downvotes?” McNutt asked. “We argue that taking this socio-technical view of the world reveals new opportunities to design technologies that better meet the needs of groups that use them.”
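For readers unfamiliar with the concept, here is a minimal sketch of the kind of conventional, single-author lint rule the paper contrasts with community-driven linting. The chart-spec format is invented for illustration. The rule flags a bar chart whose y-axis does not start at zero, a classic way to exaggerate differences:

```python
# Hypothetical sketch of a conventional lint rule for charts.
# The chart-spec format is invented for illustration.

def lint_truncated_axis(spec):
    """Flag bar charts whose y-axis does not start at zero."""
    warnings = []
    domain = (spec.get("encoding", {})
                  .get("y", {})
                  .get("scale", {})
                  .get("domain"))
    if spec.get("mark") == "bar" and domain and domain[0] != 0:
        warnings.append(
            "y-axis starts at %s, not 0; truncated axes on bar charts "
            "exaggerate differences" % domain[0])
    return warnings

spec = {"mark": "bar", "encoding": {"y": {"scale": {"domain": [90, 100]}}}}
print(lint_truncated_axis(spec))
# -> ['y-axis starts at 90, not 0; truncated axes on bar charts exaggerate differences']
```

A rule like this encodes one author’s judgment; the paper asks what changes when such rules are instead written, debated, and enforced by the communities they govern.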