Congratulations to Zhuohao (Jerry) Zhang – the most recent CREATE Ph.D. student to receive an Apple Scholars in AIML PhD fellowship. The prestigious award supports students through funding, internship opportunities, and mentorship with an Apple researcher.
Zhang is a third-year iSchool Ph.D. student advised by Prof. Jacob O. Wobbrock. His research focuses on using human-AI interaction to address real-world accessibility problems. He is particularly interested in designing and evaluating intelligent assistive technologies that make creative tasks accessible.
Zhang joins previous CREATE-advised Apple AIML fellows:
Venkatesh Potluri (Apple AIML Ph.D. fellow, 2022) is advised by CREATE Director Jennifer Mankoff in the Allen School. His research makes overlooked software engineering spaces, such as IoT and user interface development, accessible to developers who are blind or visually impaired. His work systematically identifies the accessibility gaps in these spaces and addresses them by enhancing widely used programming tools.
Rachel Franz (Apple AIML Ph.D. fellow, 2021) is also advised by Wobbrock in the iSchool. Her research focuses on the design and evaluation of accessible technology for users with functional impairments and low digital literacy, particularly on using AI to make virtual reality more accessible to people with mobility limitations.
A team led by CREATE researchers has created A11yBoard for Google Slides, a browser extension and phone or tablet app that allows blind users to navigate through complex slide layouts, objects, images, and text. Here, a user demonstrates the touchscreen interface. Team members Zhuohao (Jerry) Zhang, Jacob O. Wobbrock, and Gene S-H Kim presented the research at ASSETS 2023.
Screen readers, which convert digital text to audio, can make computers more accessible to many disabled users, including those who are blind, have low vision, or are dyslexic. Yet slideshow software, such as Microsoft PowerPoint and Google Slides, isn’t designed to make screen reader output coherent. When a screen reader navigates a slide’s contents, such programs typically rely on the Z-order, which follows the way objects are layered on the slide. Since the Z-order doesn’t adequately convey how a slide is laid out in two-dimensional space, slideshow software can be inaccessible to people with disabilities.
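To make the problem concrete, here is a minimal sketch, with hypothetical slide data, of how a Z-order traversal can diverge from a spatial reading order. The object names and coordinates are invented for illustration.

```typescript
// Minimal sketch (hypothetical data): why Z-order navigation can mislead.
interface SlideObject {
  name: string;
  x: number; // left edge, in pixels
  y: number; // top edge, in pixels
}

// Objects listed in Z-order, i.e., the order they were layered on the slide.
const zOrder: SlideObject[] = [
  { name: "footer note", x: 40, y: 500 }, // added first, bottom layer
  { name: "title", x: 40, y: 20 },
  { name: "body text", x: 40, y: 120 },
  { name: "photo caption", x: 400, y: 300 }, // added last, top layer
];

// A screen reader following Z-order reads the footer before the title.
console.log("Z-order:", zOrder.map(o => o.name).join(" -> "));

// A spatial reading order (top-to-bottom, then left-to-right) better
// matches how the slide is actually laid out in two dimensions.
const readingOrder = [...zOrder].sort((a, b) => a.y - b.y || a.x - b.x);
console.log("Spatial:", readingOrder.map(o => o.name).join(" -> "));
```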
Combining a desktop computer with a mobile device, A11yBoard lets users work with audio, touch, gesture, speech recognition and search to understand where different objects are located on a slide and move these objects around to create rich layouts. For instance, a user can touch a textbox on the screen, and the screen reader will describe its color and position. Then, using a voice command, the user can shrink that textbox and left-align it with the slide’s title.
“For a long time and even now, accessibility has often been thought of as, ‘We’re doing a good job if we enable blind folks to use modern products.’ Absolutely, that’s a priority,” said senior author Jacob O. Wobbrock, a UW professor in the Information School. “But that is only half of our aim, because that’s only letting blind folks use what others create. We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”
A11yBoard for Google Slides builds on a line of research in Wobbrock’s lab exploring how blind users interact with “artboards” — digital canvases on which users work with objects such as textboxes, shapes, images and diagrams. Slideshow software relies on a series of these artboards. When lead author Zhuohao (Jerry) Zhang, a UW doctoral student in the iSchool, joined Wobbrock’s lab, the two sought a solution to the accessibility flaws in creativity tools, like slideshow software. Drawing on earlier research from Wobbrock’s lab on the problems blind people have using artboards, Wobbrock and Zhang presented a prototype of A11yBoard in April. They then worked to create a solution that’s deployable through existing software, settling on a Google Slides extension.
For the current paper, the researchers worked with co-author Gene S-H Kim, an undergraduate at Stanford University, who is blind, to improve the interface. The team tested it with two other blind users, having them recreate slides. The testers both noted that A11yBoard greatly improved their ability to understand visual content and to create slides themselves without constant back-and-forth iterations with collaborators; they needed to involve a sighted assistant only at the end of the process.
The testers also highlighted spots for improvement: Remaining continuously aware of objects’ positions while editing them was still a challenge, and users had to perform each action individually, such as aligning several visual groups from left to right, instead of completing these repeated actions in batches. Because of how Google Slides functions, the app’s current version also does not allow users to undo or redo edits across different devices.
Ultimately, the researchers plan to release the app to the public. But first they plan to integrate a large language model, such as GPT, into the program.
“That will potentially help blind people author slides more efficiently, using natural language commands like, ‘Align these five boxes using their left edge,’” Zhang said. “Even as an accessibility researcher, I’m always amazed at how inaccessible these commonplace tools can be. So with A11yBoard we’ve set out to change that.”
CREATE researchers shone this spring at the Web4All 2023 conference, which, in part, seeks to “make the internet more accessible to the more than one billion people who struggle to interact with digital content each day due to neurodivergence, disability or other impairments.” Two CREATE-funded open source projects won accolades.
Building on the team’s prior research developing taxonomies of the information that screen-reader users seek when interacting with online data visualizations, the researchers used those taxonomies to extend the functionality of VoxLens, an open-source multimodal system that improves the accessibility of data visualizations, by supporting drilled-down information extraction. They assessed the performance of their VoxLens enhancements through task-based user studies with 10 screen-reader and 10 non-screen-reader users. Their enhancements “closed the gap” between the two groups by enabling screen-reader users to extract information with approximately the same accuracy as non-screen-reader users, reducing interaction time by 22% in the process.
Authors: Ather Sharif, Aneesha Ramesh, Qianqian Yu, Trung-Anh H. Nguyen, and Xuhai Xu
Ather Sharif’s work on another project, UnlockedMaps, was honored with the Accessibility Challenge Delegates’ Award. The paper details a web-based map that lets users see, in real time, how accessible rail transit stations are in six North American cities, including Seattle, Toronto, New York and the Bay Area. UnlockedMaps shows whether stations are accessible and whether they are currently experiencing elevator outages. The work includes a public website that helps users make informed decisions about their commutes, as well as an open source API for developers, disability advocates, and policy makers. Among other uses, the API can shed light on the frequency of elevator outages and their repair times, exposing disparities between neighborhoods in a given city.
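For developers curious what consuming such an API might look like, here is a hedged sketch; the endpoint URL and field names below are hypothetical placeholders, not UnlockedMaps’ actual API.

```typescript
// Hypothetical sketch of consuming an UnlockedMaps-style API.
// The endpoint and field names are illustrative only.
interface StationStatus {
  name: string;
  accessible: boolean;     // does the station have step-free access?
  elevatorOutage: boolean; // is an elevator outage currently reported?
}

async function usableStations(city: string): Promise<StationStatus[]> {
  const res = await fetch(
    `https://example.org/api/stations?city=${encodeURIComponent(city)}`
  );
  const stations: StationStatus[] = await res.json();
  // Keep stations that are accessible and free of active outages.
  return stations.filter(s => s.accessible && !s.elevatorOutage);
}
```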
The CREATE community thanks three of our founding leaders for their energy and service in launching the center as we embark upon some transitions. “CREATE would not be where it is today without the vision, passion, and commitment that Jake, Richard, and Anat brought to their work leading the center,” says CREATE Director Jennifer Mankoff.
Co-Director Jacob O. Wobbrock: From vision, to launch, to sustainable leadership
It was back in June 2019 that Jacob O. Wobbrock, CREATE’s founding Co-Director, took part in a panel discussion at Microsoft’s IdeaGen 2030 event, where he talked about ability-based design. Also on that panel was future CREATE Associate Director Kat Steele. After the event, the two talked with Microsoft Research colleagues, particularly Dr. Meredith Ringel Morris, about the possibility of founding an accessible technology research center at the University of Washington.
Wobbrock and Steele thought that a center could bring faculty together and make them more than the sum of their parts. Within a few months, Wobbrock returned to Microsoft with Jennifer Mankoff, Richard Ladner, and Anat Caspi to pitch Microsoft’s Chief Accessibility Officer, Jenny Lay-Flurrie, on the idea of supporting the new Center for Research and Education on Accessible Technology and Experiences (CREATE). With additional support from Microsoft President Brad Smith, and input from Morris, the center was launched by Smith and UW President Ana Marie Cauce at Microsoft’s Ability Summit in Spring 2020.
Wobbrock, along with Mankoff, served as CREATE’s inaugural co-directors until June 2023, when Wobbrock stepped down into an associate director role, with Mankoff leading CREATE as sole Director. “I’m a founder by nature,” Wobbrock said. “I helped start DUB, the MHCI+D degree, a startup called AnswerDash, and then CREATE. I really enjoy establishing new organizations and seeing them take flight. Now that CREATE is soaring, it’s time for more capable hands than mine to pilot the plane. Jennifer Mankoff is one of the best, most capable, energetic, and visionary leaders I know. She will take CREATE into its next chapter and I can’t wait to see what she does.” Wobbrock will still be very active with the center.
Professor Emeritus Richard Ladner, one of CREATE’s founders and our inaugural Education Director
We thank Professor Emeritus Richard Ladner for three years of leadership as one of our founders and CREATE’s inaugural Education Director. Ladner initiated the CREATE Student Minigrant Program, which funds grants of up to $2,000 in support of student-initiated research projects.
Ladner has shepherded 10 minigrants and worked directly with eight Teach Access Study Away students. Through his AccessComputing program, he helped fund several summer research internships for undergraduate students working with CREATE faculty. He has also encouraged all CREATE faculty to bring accessibility-related education into their courses.
Anat Caspi defined and elevated CREATE’s translation efforts, leveraging the center’s relationships with partners in industry, disability communities, and academia. Her leadership created sustainable models for translation that built on our prior successes, and collaborations with the Taskar Center, HuskyADAPT, and the UW Disability Studies Program have ensured that diverse voices inform innovation.
Director of Translation duties will be distributed across Mankoff, CREATE’s Community Engagement and Partnerships Manager Kathleen Quin Voss, and the Taskar Center for Accessible Technology, which Caspi directs.
The Association for Computing Machinery (ACM) has honored CREATE Co-Director Jacob O. Wobbrock and colleagues with a 10-year lasting impact award for their groundbreaking work improving how computers recognize stroke gestures.
Wobbrock, a professor in the Information School, and co-authors Radu-Daniel Vatavu and Lisa Anthony were presented with the 2022 Ten Year Technical Impact Award in November at the ACM International Conference on Multimodal Interaction (ICMI). The award honors their 2012 paper, “Gestures as point clouds: A $P recognizer for user interface prototypes,” which also won ICMI’s Outstanding Paper Award when it was published.
The $P point-cloud gesture recognizer was a key advancement in the way computers recognize stroke gestures, such as swipes, shapes, or drawings on a touchscreen. It provided a new way to quickly and accurately recognize what users’ fingers or styluses were telling their devices to do, and could even be used with whole-hand gestures to accomplish more complex tasks, such as typing in the air or controlling a drone with finger movements.
The research built on Wobbrock’s 2007 invention of the $1 unistroke recognizer, which made it much easier for devices to recognize single-stroke gestures, such as a circle or a triangle. Wobbrock called it “$1” — 100 pennies — because it required only 100 lines of code, making it easy for user interface developers to incorporate gestures in their prototypes.
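For readers curious about the mechanics, here is a simplified sketch of $P’s central idea: treat a gesture not as an ordered stroke but as an unordered cloud of points, and score candidates by greedily matching points against a template’s cloud. The published algorithm also resamples, scales, and translates the clouds and searches multiple start points with confidence weights; this sketch omits those steps and assumes both clouds already contain the same number of normalized points.

```typescript
// Simplified sketch of $P-style point-cloud matching (not the full
// published algorithm). Assumes candidate and template clouds were
// already resampled to the same number of normalized points.
type Point = { x: number; y: number };

function dist(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Greedy cloud distance: each candidate point consumes the nearest
// still-unmatched template point; smaller totals mean better matches.
function cloudDistance(candidate: Point[], template: Point[]): number {
  const matched = new Array(template.length).fill(false);
  let total = 0;
  for (const p of candidate) {
    let bestIndex = 0;
    let bestD = Infinity;
    template.forEach((q, j) => {
      if (!matched[j]) {
        const d = dist(p, q);
        if (d < bestD) { bestD = d; bestIndex = j; }
      }
    });
    matched[bestIndex] = true;
    total += bestD;
  }
  return total;
}

// Recognition: the template whose cloud is cheapest to match wins.
function recognize(candidate: Point[], templates: Map<string, Point[]>): string {
  let bestName = "";
  let bestD = Infinity;
  for (const [name, t] of templates) {
    const d = cloudDistance(candidate, t);
    if (d < bestD) { bestD = d; bestName = name; }
  }
  return bestName;
}
```

Because the cloud is unordered, the same matcher handles unistrokes and multistrokes alike, which is part of what let $P stay small while covering cases the earlier $1 recognizer could not.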
Nearly 500 people traveled to beautiful Bend, Oregon, to share their latest innovations in user interface software and technology, from fabrication and materials, to VR and AR, to interactive tools and interaction techniques. UIST showcased the very best inventive research in the field of human-computer interaction. “Attending UIST is like attending an exclusive preview of possible tomorrows, where one gazes into the future and imagines living there, if only for a moment,” said Wobbrock.
Bringing accessibility into the conversation, Wobbrock’s opening keynote questioned the assumptions embedded in statements we often see: “Just touch the screen,” for example, assumes the ability to see the screen, to move the hand, and so on.
For the closing keynote, available on YouTube, Wobbrock interviewed Marissa Mayer, former CEO of Yahoo and an early employee at Google. She studied symbolic systems and computer science, with a focus on artificial intelligence, at Stanford, as did Wobbrock. Mayer answered audience questions, including one about making design choices through a combination of crowdsourcing, an abundance of data, and strong opinions.
Mobile apps have become a key feature of everyday life, with apps for banking, work, entertainment, communication, transportation, and education, to name a few. But many apps remain inaccessible to people with disabilities who use screen readers or other assistive technologies.
Any person who uses an assistive technology can describe negative experiences with apps that do not provide proper support. For example, screen readers unhelpfully announce “unlabeled button” when they encounter a screen widget without proper information provided by the developer.
We know that apps often lack adequate accessibility, but until now, it has been difficult to get a big picture of mobile app accessibility overall.
How good or bad is the state of mobile app accessibility? What are the common problems? What can be done?
Research led by Ph.D. student Anne Spencer Ross and co-advised by James Fogarty (CREATE Associate Director) and Jacob O. Wobbrock (CREATE Co-Director) has been examining these questions in first-of-their-kind large-scale analyses of mobile app accessibility. Their latest research automatically examined data from approximately 10,000 apps to identify seven common types of accessibility failures. Unfortunately, this analysis found that many apps are highly inaccessible. For example, 23% of the analyzed apps failed to provide accessibility metadata, known as a “content description,” for more than 90% of their image-based buttons. The functionality of those buttons will therefore be inaccessible when using a screen reader.
Bar chart showing that 23 percent of apps were missing labels on all of their elements, another 23 percent were missing labels on none, and the rest were missing labels on 6 to 7 percent of their elements.
Clearly, we need better approaches to ensuring all apps are accessible. This research has also shown that large-scale data can help identify why such labeling failures occur. For example, “floating action buttons” are a relatively new Android element that typically presents a commonly used command as an image button floating atop other elements. Our analyses found that 93% of such buttons lacked a content description, making them even more likely than other buttons to be inaccessible. By examining this issue closely, Ross and her advisors found that commonly used software development tools do not detect this error. In addition to highlighting accessibility failures in individual apps, results like these suggest that identifying and addressing underlying failures in common developer tools can improve the accessibility of many apps.
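To illustrate the kind of check such an analysis performs, here is a hedged sketch that counts unlabeled image-based buttons in a dumped view hierarchy. The JSON shape is a hypothetical stand-in, not the researchers’ actual data format.

```typescript
// Hedged sketch: find image-based buttons missing a content description
// in a (hypothetically shaped) dump of an app screen's view hierarchy.
interface ViewNode {
  className: string;            // e.g., "android.widget.ImageButton"
  contentDescription?: string;  // the label a screen reader would announce
  children?: ViewNode[];
}

function collectImageButtons(node: ViewNode, out: ViewNode[] = []): ViewNode[] {
  if (node.className.endsWith("ImageButton") ||
      node.className.endsWith("FloatingActionButton")) {
    out.push(node);
  }
  node.children?.forEach(child => collectImageButtons(child, out));
  return out;
}

// Fraction of a screen's image-based buttons that a screen reader would
// announce as "unlabeled button."
function missingLabelRate(root: ViewNode): number {
  const buttons = collectImageButtons(root);
  if (buttons.length === 0) return 0;
  const missing = buttons.filter(
    b => !b.contentDescription || b.contentDescription.trim() === ""
  ).length;
  return missing / buttons.length;
}
```

Run at scale over thousands of app snapshots, a rate like this is what surfaces findings such as the 23% of apps missing labels on most of their image-based buttons.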
Next, the researchers aim to detect a greater variety of accessibility failures and to include longitudinal analyses over time. Eventually, they hope to paint a complete picture of mobile app accessibility at scale.
Animated GIFs, prevalent in social media, texting platforms and websites, often lack adequate alt-text descriptions, resulting in inaccessible GIFs for blind or low-vision (BLV) users and the loss of meaning, context, and nuance in what they read. In an article published in the Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI ’22), a research team led by CREATE Co-director Jacob O. Wobbrock has demonstrated a system called Ga11y (pronounced “galley”) for creating GIF annotations and improving the accessibility of animated GIFs.
Video describing Ga11y, an Automated GIF Annotation System for Visually Impaired Users. The video frame shows an obscure image and the question, How would you describe this GIF to someone so they can understand it without seeing it?
Ga11y combines the power of machine intelligence and crowdsourcing and has three components: an Android client for submitting annotation requests, a backend server and database, and a web interface where volunteers can respond to annotation requests.
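As a rough illustration of how those three components might hand work to one another, here is a hypothetical sketch; the type and field names are invented for this article and are not Ga11y’s actual API.

```typescript
// Hypothetical sketch of Ga11y's annotation-request flow.
// Names and shapes are illustrative, not the system's real interfaces.
interface AnnotationRequest {
  id: string;
  gifUrl: string;          // submitted from the Android client
  machineCaption?: string; // first-pass description from machine intelligence
  status: "pending" | "needs-human" | "done";
}

interface HumanAnnotation {
  requestId: string;
  description: string;     // written by a volunteer via the web interface
}

// Backend decision point: use the machine caption when one is available,
// otherwise queue the request for human volunteers.
function route(req: AnnotationRequest): AnnotationRequest["status"] {
  return req.machineCaption ? "done" : "needs-human";
}
```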
Wobbrock’s co-authors are Mingrui “Ray” Zhang, a Ph.D. candidate in the UW iSchool, and Mingyuan Zhong, a Ph.D. student in the Paul G. Allen School of Computer Science & Engineering.
Working with screen-reader users, CREATE graduate student Ather Sharif and Co-Director Jacob O. Wobbrock, along with other UW researchers, have designed VoxLens, a JavaScript plugin that allows people to interact with visualizations. To implement VoxLens, visualization designers add just one line of code.
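The project is open source, and its documentation shows integrations with common charting libraries. The call below is a sketch of that one-line integration as used with D3; the exact option names should be treated as assumptions and checked against the voxlens package.

```typescript
// Sketch of a VoxLens integration (option names are assumptions based on
// the project's documentation; verify against the published package).
import voxlens from "voxlens";

declare const chartElement: HTMLElement; // the already-rendered D3 chart
const data = [
  { country: "USA", population: 331 },
  { country: "Canada", population: 38 },
];

// The "one line of code": attach VoxLens to an existing visualization.
voxlens("d3", chartElement, data, { x: "country", y: "population" });
```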
Millions of Americans use screen readers for a variety of reasons, including complete or partial blindness, learning disabilities or motion sensitivity. But visually oriented graphics often are not accessible to people who use screen readers. VoxLens lead author Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering, noted: “Right now, screen-reader users either get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen-reader users a platform where they can extract as much or as little information as they want.”
With written content there is a beginning, a middle and an end of a sentence, explained co-senior author Wobbrock. “But as soon as you move things into two-dimensional spaces, such as visualizations, there’s no clear start and finish. It’s just not structured in the same way, which means there’s no obvious entry point or sequencing for screen readers.”
Participants learned how to use VoxLens and then completed nine tasks, each of which involved answering questions about a visualization. Compared with participants who did not have access to the tool, VoxLens users completed the tasks with 122% greater accuracy and 36% less interaction time.
We congratulate CREATE Co-Director Jacob O. Wobbrock on being named an ACM Fellow by the Association for Computing Machinery for his contributions to human-computer interaction and accessible computing!
Wobbrock’s research seeks to understand and improve people’s interactions with computers and information, especially for people with disabilities. He is the primary creator of ability-based design, which scrutinizes the ability assumptions embedded in technologies in an effort to create systems better matched to what people can do.
For this and his other contributions to accessible computing, he received the 2017 ACM SIGCHI Social Impact Award and the 2019 SIGACCESS ASSETS Paper Impact Award. He was also inducted to the ACM CHI Academy in 2019. In addition to being a CREATE founding co-director, Professor Wobbrock directs the ACE Lab and is a founding member of UW’s cross-campus DUB Group.
The ACM is the world’s largest educational and scientific computing society. Its Fellows program recognizes the top 1% of members for their outstanding accomplishments in computing and information technology and/or outstanding service to the ACM and the larger computing community. ACM Fellows are nominated by their peers, with nominations reviewed by a distinguished selection committee.
Wobbrock and the 70 other Fellows named in 2021 will be formally recognized at the ACM Awards Banquet in San Francisco in June.
UW CREATE has a large, high-quality presence at ASSETS 2020, the premier annual conference for accessible computing research. Drawing from three departments, University of Washington authors contributed six papers and two posters to be presented at this year’s online conference. Three of our papers were nominated for best paper! Seven members also served in conference roles: two on the organizing committee and five on the program committee.
The papers and posters span a variety of topics including input performance evaluation of people with limited mobility, media usage patterns of autistic adults, sound awareness for d/Deaf and hard of hearing people, and autoethnography reports of multiple people with disabilities. Congratulations to the authors and their collaborators!
We look forward to seeing you virtually at ASSETS 2020, which runs October 26 to 28.
An autoethnographer’s daughter’s handcrafted cane, as presented in the paper “Living disability theory: Reflections on access, research, and design.”
The SoundWatch, as described in the paper “SoundWatch: Exploring smartwatch-based deep learning approaches to support sound awareness for deaf and hard of hearing users.”
Accepted papers
Input accessibility: A large dataset and summary analysis of age, motor ability and input performance
Leah Findlater, University of Washington; Lotus Zhang, University of Washington
The reliability of Fitts’s law as a movement model for people with and without limited fine motor function
Ather Sharif, University of Washington; Victoria Pao, University of Washington; Katharina Reinecke, University of Washington; Jacob O. Wobbrock, University of Washington
Lessons learned in designing AI for autistic adults: Designing the video calling for autism prototype
Andrew Begel, Microsoft Research; John Tang, Microsoft Research; Sean Andrist, Microsoft Research; Michael Barnett, Microsoft Research; Tony Carbary, Microsoft Research; Piali Choudhury, Microsoft; Edward Cutrell, Microsoft Research; Alberto Fung, University of Houston; Sasa Junuzovic, Microsoft Research; Daniel McDuff, Microsoft Research; Kael Rowan, Microsoft; Shibashankar Sahoo, Umeå Institute of Design; Jennifer Frances Waldern, Microsoft; Jessica Wolk, Microsoft Research; Hui Zheng, George Mason University; Annuska Zolyomi, University of Washington
SoundWatch: Exploring smartwatch-based deep learning approaches to support sound awareness for deaf and hard of hearing users
Dhruv Jain, University of Washington; Hung Ngo, University of Washington; Pratyush Patel, University of Washington; Steven Goodman, University of Washington; Leah Findlater, University of Washington; Jon E. Froehlich, University of Washington
Living disability theory: Reflections on access, research, and design
Megan Hofmann, Carnegie Mellon University; Devva Kasnitz, Society for Disability Studies; Jennifer Mankoff, University of Washington; Cynthia L. Bennett, Carnegie Mellon University
Navigating graduate school with a disability
Dhruv Jain, University of Washington; Venkatesh Potluri, University of Washington; Ather Sharif, University of Washington
Accepted posters
HoloSound: Combining speech and sound identification for Deaf or hard of hearing users on a head-mounted display
Ru Guo, University of Washington; Yiru Yang, University of Washington; Johnson Kuang, University of Washington; Xue Bin, University of Washington; Dhruv Jain, University of Washington; Steven Goodman, University of Washington; Leah Findlater, University of Washington; Jon E. Froehlich, University of Washington
#ActuallyAutistic Sense-making on Twitter
Annuska Zolyomi, University of Washington; Ridley Jones, University of Washington; Tomer Kaftan, University of Washington
Organizing committee roles
Dhruv Jain, Posters & Demonstrations Co-Chair; Cynthia Bennett, Accessibility Co-Chair
Program committee roles
Cynthia Bennett (recent alumna, now at Apple/CMU); Leah Findlater; Jon Froehlich; Richard Ladner; Anne Ross
CREATE faculty are already internationally recognized for their contributions to assistive technology and accessible computing; by bringing them together under one organizational roof, CREATE will enable synergies and foster collaborations that enable faculty and students to become more than the sum of their parts.
iSchool news, University of Washington | May 28, 2020
Jacob O. Wobbrock, CREATE Co-Director and a professor in the UW Information School, has become one of the world’s foremost experts on accessible computing and human-computer interaction. His approach is to create interactive systems that can capitalize on the situated abilities of users, whatever they are, rather than make users contort themselves to become amenable to the ability-assumptions of rigid technologies. He calls this perspective Ability-Based Design.
In the iSchool article, Wobbrock answers questions about what CREATE will mean for his research, starting with ‘why do we need CREATE?’ and a very compelling answer: “We’re getting older and living longer. If we live long enough, we will all have disabilities. So the need for technology to be accessible, and for technology-mediated services to be accessible, is clearer than ever.”
University of Washington professor Jacob Wobbrock figures the best way to make technology more accessible to disabled people is to anticipate their needs from the very beginning. “The world we live in is built on certain assumptions,’’ Wobbrock said. “If we question those assumptions right from the start when we design things, then suddenly things are accessible.’’
The Center for Research and Education on Accessible Technology and Experience (CREATE) is launching with a nine-member, interdisciplinary faculty led by Wobbrock and co-director Jennifer Mankoff.
Jacob Wobbrock honored for improving touch-screen accessibility
Congratulations to Jacob O. Wobbrock, a founding co-director of CREATE, for his work with Shaun Kane (Ph.D. ’11) and Jeffrey Bigham (Ph.D. ’09) improving the accessibility of mobile technology.
Slide Rule addressed the challenge of navigating within a screen when mobile phones transitioned from physical buttons to touch screens. Its techniques constituted the first screen reader for touch screens, using simple gestures for navigating and tapping targets. These features have since become mainstream in commercial products.
Given the prevalence of touch screens in our society, the need to make them accessible to all people is still great, and we will continue to pursue that goal, along with the many other projects we are doing.
Jacob O. Wobbrock
As technology continues to advance, Wobbrock’s team continues to identify innovative methods for interaction that improve accessibility. Read more about the award and his recent research.