Congratulations to Zhuohao (Jerry) Zhang – the most recent CREATE Ph.D. student to receive an Apple Scholars in AIML PhD fellowship. The prestigious award supports students through funding, internship opportunities, and mentorship with an Apple researcher.
Zhang is a 3rd-year iSchool Ph.D. student advised by Prof. Jacob O. Wobbrock. His research focuses on using human-AI interaction to address real-world accessibility problems. He is particularly interested in designing and evaluating intelligent assistive technologies that make creative tasks accessible.
Zhang joins previous CREATE-advised Apple AIML fellows:
Venkatesh Potluri (Apple AIML Ph.D. fellow 2022) is advised by CREATE Director Jennifer Mankoff in the Allen School. His research makes overlooked software engineering spaces, such as IoT and user interface development, accessible to developers who are blind or visually impaired. His work systematically examines the accessibility gaps in these spaces and addresses them by enhancing widely used programming tools.
Rachel Franz (Apple AIML Ph.D. fellow 2021) is also advised by Wobbrock in the iSchool. Her research focuses on accessible technology design and evaluation for users with functional impairments and low digital literacy. Specifically, she is focused on using AI to make virtual reality more accessible to individuals with mobility limitations.
Being able to easily get from the house to the playground affects how long and how often children use an adapted ride-on car, according to a study, Off to the park: a geospatial investigation of adapted ride-on car usage, published by CREATE Ph.D. student Mia Hoffman with CREATE associate directors Heather A. Feldner (lead researcher on the project), Katherine M. Steele, and Jon Froehlich. Their research demonstrates the importance of accessibility in the built environment and that advocating for environmental accessibility should include both the indoors and outdoors.
For a recent study, adapted ride-on cars were provided to 14 families with young children in locations across Western Washington. Photo courtesy of Heather Feldner.
Ride-on cars are miniature toy cars for children, with a steering wheel and a battery-powered pedal. Adapted ride-on cars are an easy-to-use, temporary solution for children with mobility issues. Although wheelchairs offer finer control, insurance typically covers a new wheelchair only every five years. Children under age 5 can use adapted ride-on cars to explore their surroundings if they outgrow their wheelchair, or if they aren’t yet able to use a wheelchair.
Exploration is critical to language, social and physical development. There are big benefits when a child starts moving.
Mia Hoffman, CREATE Ph.D. student
“Adapted ride-on cars allow children to explore by themselves,” says Mia Hoffman, the Ph.D. candidate in mechanical engineering who co-authored the paper published in fall 2023. “Exploration is critical to language, social and physical development. There are big benefits when a child starts moving.”
The researchers adapted the ride-on cars to make them more accessible. Instead of a foot pedal, children might start the car with a different option that’s accessible to them, such as a large button or a sip-and-puff, a pneumatic device that responds to air being blown into it. Researchers also added structural supports to the cars, such as a backrest made out of kickboards or PVC side supports.
Adapted ride-on cars were provided to 14 families with young children in locations across Western Washington. Heather Feldner, an assistant professor in the Department of Rehabilitation Medicine and adjunct assistant professor in mechanical engineering, trained families on how to use the cars. The families then spent a year playing with the cars. Each car had an integrated data logger that tracked how often the child pressed the switch to move the car, and GPS data indicated how far they traveled.
The study found that most play sessions occurred indoors, underscoring the importance of indoor accessibility for children’s mobility technology. However, children used the car longer outdoors, and identifying an accessible route increased the frequency and duration of outside play sessions. Study participants drove outdoors more often in pedestrian-friendly neighborhoods, measured by researchers with the Walk Score, and when close to accessible paths, measured by Project Sidewalk’s AccessScore.
“Families can sometimes be uncertain about introducing powered mobility for their children in these early stages of development,” says Feldner. “But ride-on cars and other small devices designed for kids open up so many opportunities — from experiencing the joy of mobility, learning more about the world around them, enjoying social time with family and friends in new environments, and working on developmental skills. We want to work with kids and families to show them what is possible with these devices, listen to their needs and ideas, and continue working to ensure that both our technology designs and our community environments are accessible and available for all.”
Exploring different mobility devices
As a graduate student, Hoffman conducts research on children ages 3 and under who might crawl, roll, sit up, or cruise in a power mobility device. Besides processing sensor data and other data analysis, Hoffman’s work also involves getting to know families, “playing with a lot of toys, singing, and entertaining kids,” she jokes.
Research involving pediatrics and accessibility like the adapted ride-on cars study is why Hoffman joined the Steele Lab. She became interested in biomechanics in sixth grade, when she learned that working on engineering and medical design was possible. As an undergraduate at the University of Notre Dame, Hoffman studied brain biomechanics, computational design and assistive technology. She worked on projects such as analyzing the morphology of monkey brains and creating 3D-printed prosthetic hands for children.
After connecting with Feldner and Kat Steele, Albert Kobayashi Professor in Mechanical Engineering and CREATE associate director, Hoffman realized that the Steele Lab, which often collaborates with UW Medicine, was the perfect fit.
Hoffman is currently working on research with Feldner and Steele that compares children’s usage of a commercial pediatric powered mobility device to their usage of adapted ride-on cars in the community environment. Next, Hoffman will conduct one of the first comparative studies of how using supported mobility, in the form of a partial body-weight support system, versus using a powered wheelchair affects children’s exploration patterns. The study involves children with Down syndrome, who often have delayed motor development and who are underrepresented in mobility research.
There can be stigma associated with using a wheelchair instead of a walker or another mobility device that may help with motor development, but Hoffman says the study could demonstrate that both are important.
“The goal is to show that children can simultaneously work on motor gains while using powered wheelchairs or other mobility devices to explore their environment,” she says.
“Our hope is for kids to just be kids,” says Hoffman. “We want them to be mobile and experience life at the same time as their peers. It’s about meeting a kid where they’re at and supporting them so that they can move around and play with their friends and family.”
People with low vision (LV) have had fewer options for physical activity, particularly in competitive sports such as tennis and soccer that involve fast, continuously moving elements such as balls and players. A group of researchers from CREATE associate director Jon E. Froehlich‘s Makeability Lab hopes to overcome this challenge by enabling LV individuals to participate in ball-based sports using real-time computer vision (CV) and wearable augmented reality (AR) headsets. Their initial focus has been on tennis.
ARTennis is their prototype system capable of tracking and enhancing the visual saliency of tennis balls from a first-person point of view (POV). Recent advancements in deep learning have led to models like TrackNet, a neural network that tracks tennis balls in third-person recordings of tennis games and has been used to improve sports viewing for LV people. To enhance playability, the team first built a dataset of first-person POV images by having the authors wear an AR headset and play tennis. They then streamed video from a pair of AR glasses to a back-end server, analyzed the frames using a custom-trained deep learning model, and sent back the results for real-time overlaid visualization.
After a brainstorming session with an LV research team member, the team added visualization improvements to enhance the ball’s color contrast and add a crosshair in real-time.
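To make the pipeline concrete, here is a minimal sketch rather than the ARTennis implementation: it assumes a hypothetical detect_ball function standing in for the team’s custom-trained, server-side model, and uses OpenCV to draw the high-contrast marker and crosshair on each first-person frame.

```python
# Illustrative sketch only; not the ARTennis code. A hypothetical detect_ball()
# stands in for the custom-trained model that runs on the back-end server.
import cv2
import numpy as np

def detect_ball(frame: np.ndarray):
    """Placeholder for a server-side tennis-ball detector (e.g., a TrackNet-style
    model). Returns (x, y) pixel coordinates, or None if no ball is found."""
    raise NotImplementedError

def enhance_ball_visibility(frame: np.ndarray, center, radius: int = 18) -> np.ndarray:
    """Overlay a high-contrast disc and a crosshair at the ball's location,
    mirroring the two visual cues described above."""
    if center is None:
        return frame
    x, y = center
    cv2.circle(frame, (x, y), radius, (0, 255, 0), thickness=-1)                   # solid, high-contrast disc
    cv2.line(frame, (x - 2 * radius, y), (x + 2 * radius, y), (255, 255, 255), 2)  # horizontal crosshair arm
    cv2.line(frame, (x, y - 2 * radius), (x, y + 2 * radius), (255, 255, 255), 2)  # vertical crosshair arm
    return frame

# Typical loop: read a first-person frame, detect the ball, and visualize it in
# (near) real time. In the actual system, detection happens on a remote server.
# cap = cv2.VideoCapture(0)
# while cap.isOpened():
#     ok, frame = cap.read()
#     if not ok:
#         break
#     cv2.imshow("overlay", enhance_ball_visibility(frame, detect_ball(frame)))
```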
Early evaluations have provided feedback that the prototype could help LV people enjoy ball-based sports, but there’s plenty of further work to be done. A larger field of view (FOV) and audio cues would improve a player’s ability to track the ball. The weight, bulk, and expense of the headset are also factors the team expects to improve with time, as CREATE Ph.D. student Jaewook Lee noted in an interview on Oregon Public Broadcasting.
“Wearable AR devices such as the Microsoft HoloLens 2 hold immense potential in non-intrusively improving accessibility of everyday tasks. I view AR glasses as a technology that can enable continuous computer vision, which can empower BLV individuals to participate in day-to-day tasks, from sports to cooking. The Makeability Lab team and I hope to continue exploring this space to improve the accessibility of popular sports, such as tennis and basketball.”
Training a robot to feed people presents an array of challenges for researchers. Foods come in a nearly endless variety of shapes and states (liquid, solid, gelatinous), and each person has a unique set of needs and preferences. A team led by CREATE Ph.D. students Ethan K. Gordon and Amal Nanavati created a set of 11 actions a robotic arm can make to pick up nearly any food attainable by fork.
In tests with this set of actions, the robot picked up the foods more than 80% of the time, the user-specified benchmark for in-home use. The small set of actions allows the system to learn to pick up new foods during one meal.
UW News talked with co-lead authors Gordon and Nanavati, both CREATE members and doctoral students in the Paul G. Allen School of Computer Science & Engineering, and with co-author Taylor Kessler Faulkner, a UW postdoctoral scholar in the Allen School, about the successes and challenges of robot-assisted feeding for the 1.8 million people in the U.S. (according to data from 2010) who can’t eat on their own.
The Personal Robotics Lab has been working on robot-assisted feeding for several years. What is the advance of this paper?
Ethan K. Gordon: I joined the Personal Robotics Lab at the end of 2018 when Siddhartha Srinivasa, a professor in the Allen School and senior author of our new study, and his team had created the first iteration of its robot system for assistive applications. The system was mounted on a wheelchair and could pick up a variety of fruits and vegetables on a plate. It was designed to identify how a person was sitting and take the food straight to their mouth. Since then, there have been quite a few iterations, mostly involving identifying a wide variety of food items on the plate. Now, the user with their assistive device can click on an image in the app, a grape for example, and the system can identify and pick that up.
Taylor Kessler Faulkner: Also, we’ve expanded the interface. Whatever accessibility systems people use to interact with their phones — mostly voice or mouth control navigation — they can use to control the app.
EKG: In this paper we just presented, we’ve gotten to the point where we can pick up nearly everything a fork can handle. So we can’t pick up soup, for example. But the robot can handle everything from mashed potatoes or noodles to a fruit salad to an actual vegetable salad, as well as pre-cut pizza or a sandwich or pieces of meat.
In previous work with the fruit salad, we looked at which trajectory the robot should take if it’s given an image of the food, but the set of trajectories we gave it was pretty limited. We were just changing the pitch of the fork. If you want to pick up a grape, for example, the fork’s tines need to go straight down, but for a banana they need to be at an angle, otherwise it will slide off. Then we worked on how much force we needed to apply for different foods.
In this new paper, we looked at how people pick up food, and used that data to generate a set of trajectories. We found a small number of motions that people actually use to eat and settled on 11 trajectories. So rather than just the simple up-down or coming in at an angle, it’s using scooping motions, or it’s wiggling inside of the food item to increase the strength of the contact. This small number still had the coverage to pick up a much greater array of foods.
We think the system is now at a point where it can be deployed for testing on people outside the research group. We can invite a user to the UW, put the robot either on their wheelchair, if they have the mounting apparatus ready, or on a tripod next to their wheelchair, and run through an entire meal.
For you as researchers, what are the vital challenges ahead to make this something people could use in their homes every day?
EKG: We’ve so far been talking about the problem of picking up the food, and there are more improvements that can be made here. Then there’s the whole other problem of getting the food to a person’s mouth, as well as how the person interfaces with the robot, and how much control the person has over this at least partially autonomous system.
TKF: Over the next couple of years, we’re hoping to personalize the robot to different people. Everyone eats a little bit differently. Amal did some really cool work on social dining that highlighted how people’s preferences are based on many factors, such as their social and physical situations. So we’re asking: How can we get input from the people who are eating? And how can the robot use that input to better adapt to the way each person wants to eat?
Amal Nanavati: There are several different dimensions that we might want to personalize. One is the user’s needs: How far the user can move their neck impacts how close the fork has to get to them. Some people have differential strength on different sides of their mouth, so the robot might need to feed them from a particular side of their mouth. There’s also an aspect of the physical environment. Users already have a bunch of assistive technologies, often mounted around their face if that’s the main part of their body that’s mobile. These technologies might be used to control their wheelchair, to interact with their phone, etc. Of course, we don’t want the robot interfering with any of those assistive technologies as it approaches their mouth.
There are also social considerations. For example, if I’m having a conversation with someone or at home watching TV, I don’t want the robot arm to come right in front of my face. Finally, there are personal preferences. For example, among users who can turn their head a little bit, some prefer to have the robot come from the front so they can keep an eye on the robot as it’s coming in. Others feel like that’s scary or distracting and prefer to have the bite come at them from the side.
A key research direction is understanding how we can create intuitive and transparent ways for the user to customize the robot to their own needs. We’re considering trade-offs between customization methods where the user is doing the customization, versus more robot-centered forms where, for example, the robot tries something and says, “Did you like it? Yes or no.” The goal is to understand how users feel about these different customization methods and which ones result in more customized trajectories.
What should the public understand about robot-assisted feeding, both in general and specifically the work your lab is doing?
EKG: It’s important to look not just at the technical challenges, but at the emotional scale of the problem. It’s not a small number of people who need help eating. There are various figures out there, but it’s over a million people in the U.S. Eating has to happen every single day. And to require someone else every single time you need to do that intimate and very necessary act can make people feel like a burden or self-conscious. So the whole community working towards assistive devices is really trying to help foster a sense of independence for people who have these kinds of physical mobility limitations.
AN: Even these seven-digit numbers don’t capture everyone. There are permanent disabilities, such as a spinal cord injury, but there are also temporary disabilities such as breaking your arm. All of us might face disability at some time as we age and we want to make sure that we have the tools necessary to ensure that we can all live dignified lives and independent lives. Also, unfortunately, even though technologies like this greatly improve people’s quality of life, it’s incredibly difficult to get them covered by U.S. insurance companies. I think more people knowing about the potential quality of life improvement will hopefully open up greater access.
Additional co-authors on the paper were Ramya Challa, who completed this research as an undergraduate student in the Allen School and is now at Oregon State University, and Bernie Zhu, a UW doctoral student in the Allen School. This research was partially funded by the National Science Foundation, the Office of Naval Research and Amazon.
Adapted ride-on cars (ROCs) are an affordable power mobility training tool for young children with disabilities. But weather and a lack of adequate drive space create barriers to families’ adoption of their ROC.
CREATE Ph.D. student Mia E. Hoffman is the lead author on a paper that investigates the relationship between the built environment and ROC usage.
With co-authors Kat Steele and Heather A. Feldner (her co-advisors), Jon E. Froehlich (all three CREATE associate directors), and Kyle N. Winfree, Hoffman found that play sessions took place more often within the participants’ homes. But when the ROC was used outside, children engaged in longer play sessions, actively drove for a larger portion of the session, and covered greater distances.
Most notably, they found that children drove more in pedestrian-friendly neighborhoods and when in proximity to accessible paths, demonstrating that providing an accessible place for a child to move, play, and explore is critical in helping a child and family adopt the mobility device into their daily life.
Generative artificial intelligence tools like ChatGPT, an AI-powered language tool, and Midjourney, an AI-powered image generator, can potentially assist people with various disabilities. They could summarize content, compose messages, or describe images. Yet they also regularly spout inaccuracies and fail at basic reasoning, perpetuating ableist biases.
This year, seven CREATE researchers conducted a three-month autoethnographic study — drawing on their own experiences as people with and without disabilities — to test AI tools’ utility for accessibility. Though researchers found cases in which the tools were helpful, they also found significant problems with AI tools in most use cases, whether they were generating images, writing Slack messages, summarizing writing or trying to improve the accessibility of documents.
“When technology changes rapidly, there’s always a risk that disabled people get left behind,” said senior author Jennifer Mankoff, CREATE’s director and a professor in the Paul G. Allen School of Computer Science & Engineering. “I’m a really strong believer in the value of first-person accounts to help us understand things. Because our group had a large number of folks who could experience AI as disabled people and see what worked and what didn’t, we thought we had a unique opportunity to tell a story and learn about this.”
The group presented its research in seven vignettes, often amalgamating experiences into single accounts to preserve anonymity. For instance, in the first account, “Mia,” who has intermittent brain fog, deployed ChatPDF.com, which summarizes PDFs, to help with work. While the tool was occasionally accurate, it often gave “completely incorrect answers.” In one case, the tool was both inaccurate and ableist, changing a paper’s argument to sound like researchers should talk to caregivers instead of to chronically ill people. “Mia” was able to catch this, since the researcher knew the paper well, but Mankoff said such subtle errors are some of the “most insidious” problems with using AI, since they can easily go unnoticed.
Yet in the same vignette, “Mia” used chatbots to create and format references for a paper they were working on while experiencing brain fog. The AI models still made mistakes, but the technology proved useful in this case.
“When technology changes rapidly, there’s always a risk that disabled people get left behind.”
Jennifer Mankoff, CREATE Director, professor in the Allen School
Mankoff, who’s spoken publicly about having Lyme disease, contributed to this account. “Using AI for this task still required work, but it lessened the cognitive load. By switching from a ‘generation’ task to a ‘verification’ task, I was able to avoid some of the accessibility issues I was facing,” Mankoff said.
The results of the other tests researchers selected were equally mixed:
One author, who is autistic, found AI helped to write Slack messages at work without spending too much time troubling over the wording. Peers found the messages “robotic,” yet the tool still made the author feel more confident in these interactions.
Three authors tried using AI tools to increase the accessibility of content such as tables for a research paper or a slideshow for a class. The AI programs were able to state accessibility rules but couldn’t apply them consistently when creating content.
Image-generating AI tools helped an author with aphantasia (an inability to visualize) interpret imagery from books. Yet when they used the AI tool to create an illustration of “people with a variety of disabilities looking happy but not at a party,” the program could conjure only fraught images of people at a party that included ableist incongruities, such as a disembodied hand resting on a disembodied prosthetic leg.
“I was surprised at just how dramatically the results and outcomes varied, depending on the task,” said lead author Kate Glazko, a UW doctoral student in the Allen School. “In some cases, such as creating a picture of people with disabilities looking happy, even with specific prompting — can you make it this way? — the results didn’t achieve what the authors wanted.”
The researchers note that more work is needed to develop solutions to problems the study revealed. One particularly complex problem involves developing new ways for people with disabilities to validate the products of AI tools, because in many cases when AI is used for accessibility, either the source document or the AI-generated result is inaccessible. This happened in the ableist summary ChatPDF gave “Mia” and when “Jay,” who is legally blind, used an AI tool to generate code for a data visualization. He could not verify the result himself, but a colleague said it “didn’t make any sense at all.” The frequency of AI-caused errors, Mankoff said, “makes research into accessible validation especially important.”
Mankoff also plans to research ways to document the kinds of ableism and inaccessibility present in AI-generated content, as well as investigate problems in other areas, such as AI-written code.
“Whenever software engineering practices change, there is a risk that apps and websites become less accessible if good defaults are not in place,” Glazko said. “For example, if AI-generated code were accessible by default, this could help developers to learn about and improve the accessibility of their apps and websites.”
Co-authors on this paper are Momona Yamagami, who completed this research as a UW postdoctoral scholar in the Allen School and is now at Rice University; Aashaka Desai, Kelly Avery Mack and Venkatesh Potluri, all UW doctoral students in the Allen School; and Xuhai Xu, who completed this work as a UW doctoral student in the Information School and is now at the Massachusetts Institute of Technology. This research was funded by Meta, Center for Research and Education on Accessible Technology and Experiences (CREATE), Google, an NIDILRR ARRT grant and the National Science Foundation.
A team led by CREATE researchers has created A11yBoard for Google Slides, a browser extension and phone or tablet app that allows blind users to navigate through complex slide layouts, objects, images, and text. Here, a user demonstrates the touchscreen interface. Team members Zhuohao (Jerry) Zhang, Jacob O. Wobbrock, and Gene S-H Kim presented the research at ASSETS 2023.
Screen readers, which convert digital text to audio, can make computers more accessible to many disabled users — including those who are blind, low vision or dyslexic. Yet slideshow software, such as Microsoft PowerPoint and Google Slides, isn’t designed to make screen reader output coherent. Such programs typically rely on Z-order — which follows the way objects are layered on a slide — when a screen reader navigates through the contents. Since the Z-order doesn’t adequately convey how a slide is laid out in two-dimensional space, slideshow software can be inaccessible to people with disabilities.
Combining a desktop computer with a mobile device, A11yBoard lets users work with audio, touch, gesture, speech recognition and search to understand where different objects are located on a slide and move these objects around to create rich layouts. For instance, a user can touch a textbox on the screen, and the screen reader will describe its color and position. Then, using a voice command, the user can shrink that textbox and left-align it with the slide’s title.
“We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”
Jacob O. Wobbrock, CREATE associate director and professor in the UW Information School
“For a long time and even now, accessibility has often been thought of as, ‘We’re doing a good job if we enable blind folks to use modern products.’ Absolutely, that’s a priority,” said senior author Jacob O. Wobbrock, a UW professor in the Information School. “But that is only half of our aim, because that’s only letting blind folks use what others create. We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”
A11yBoard for Google Slides builds on a line of research in Wobbrock’s lab exploring how blind users interact with “artboards” — digital canvases on which users work with objects such as textboxes, shapes, images and diagrams. Slideshow software relies on a series of these artboards. When lead author Zhuohao (Jerry) Zhang, a UW doctoral student in the iSchool, joined Wobbrock’s lab, the two sought a solution to the accessibility flaws in creativity tools, like slideshow software. Drawing on earlier research from Wobbrock’s lab on the problems blind people have using artboards, Wobbrock and Zhang presented a prototype of A11yBoard in April. They then worked to create a solution that’s deployable through existing software, settling on a Google Slides extension.
For the current paper, the researchers worked with co-author Gene S-H Kim, an undergraduate at Stanford University, who is blind, to improve the interface. The team tested it with two other blind users, having them recreate slides. The testers both noted that A11yBoard greatly improved their ability to understand visual content and to create slides themselves without constant back-and-forth iterations with collaborators; they needed to involve a sighted assistant only at the end of the process.
The testers also highlighted spots for improvement: Remaining continuously aware of objects’ positions while trying to edit them still presented a challenge, and users were forced to do each action individually, such as aligning several visual groups from left to right, instead of completing these repeated actions in batches. Because of how Google Slides functions, the app’s current version also does not allow users to undo or redo edits across different devices.
Ultimately, the researchers plan to release the app to the public. But first they plan to integrate a large language model, such as GPT, into the program.
“That will potentially help blind people author slides more efficiently, using natural language commands like, ‘Align these five boxes using their left edge,’” Zhang said. “Even as an accessibility researcher, I’m always amazed at how inaccessible these commonplace tools can be. So with A11yBoard we’ve set out to change that.”
RASSAR – Room Accessibility and Safety Scan in Augmented Reality – is a novel smartphone-based prototype for semi-automatically identifying, categorizing, and localizing indoor accessibility and safety issues. With RASSAR, the user holds out their phone and scans a space. The tool uses LiDAR and camera data, real-time machine learning, and AR to construct a real-time model of the 3D scene, attempts to identify and classify known accessibility and safety issues, and visualizes potential problems overlaid in AR.
RASSAR researchers envision the tool as an aid in building and validating new construction, planning renovations, updating homes for health concerns, or conducting telehealth home visits with occupational therapists. UW News interviewed two CREATE Ph.D. students about their work on the project:
Augmented Reality to Support Accessibility
CREATE students Xia Su and Jae Lee, advised by CREATE Associate Director Jon Froehlich in the Makeability Lab, discuss their work using augmented reality to support accessibility. The Allen School Ph.D. students are presenting their work at ASSETS and UIST this year.
As has become customary, CREATE faculty, students and alumni will have a large presence at the 2023 ASSETS Conference. It’ll be quiet on campus October 23-25 with these folks in New York.
Understanding Digital Content Creation Needs of Blind and Low Vision People
Monday, Oct 23 at 1:40 p.m. Eastern time. Lotus Zhang, Simon Sun, Leah Findlater
Notably Inaccessible — Data Driven Understanding of Data Science Notebook (In)Accessibility
Monday, Oct 23 at 4 p.m. Eastern time. Venkatesh Potluri, Sudheesh Singanamalla, Nussara Tieanklin, Jennifer Mankoff
A Large-Scale Mixed-Methods Analysis of Blind and Low-vision Research in ACM and IEEE
Tuesday, Oct 24 at 11:10 a.m. Eastern time. Yong-Joon Thoo, Maximiliano Jeanneret Medina, Jon E. Froehlich, Nicolas Ruffieux, Denis Lalanne
Working at the Intersection of Race, Disability and Accessibility
Tuesday, Oct 24 at 1:40 p.m. Eastern time. Christina Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, Jennifer Mankoff
Comparing Locomotion Techniques in Virtual Reality for People with Upper-Body Motor Impairments
Wednesday, Oct 25 at 8:45 a.m. Eastern time. Rachel L. Franz, Jinghan Yu, Jacob O. Wobbrock
Jod: Examining the Design and Implementation of a Videoconferencing Platform for Mixed Hearing Groups
Wednesday, Oct 25 at 11:10 a.m. Eastern time. Anant Mittal, Meghna Gupta, Roshni Poddar, Tarini Naik, SeethaLakshmi Kuppuraj, James Fogarty, Pratyush Kumar, Mohit Jain
Azimuth: Designing Accessible Dashboards for Screen Reader Users
Wednesday, Oct 25 at 1:25 p.m. Eastern time. Arjun Srinivasan, Tim Harshbarger, Darrell Hilliker, Jennifer Mankoff
Developing and Deploying a Real-World Solution for Accessible Slide Reading and Authoring for Blind Users
Wednesday, Oct 25 at 1:25 p.m. Eastern time. Zhuohao Zhang, Gene S-H Kim, Jacob O. Wobbrock
Experience Reports
An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility
Kate S Glazko, Momona Yamagami, Aashaka Desai, Kelly Avery Mack, Venkatesh Potluri, Xuhai Xu, Jennifer Mankoff
Maintaining the Accessibility Ecosystem: a Multi-Stakeholder Analysis of Accessibility in Higher Education
Kelly Avery Mack, Natasha A Sidik, Aashaka Desai, Emma J McDonnell, Kunal Mehta, Christina Zhang, Jennifer Mankoff
TACCESS Papers
“I’m Just Overwhelmed”: Investigating Physical Therapy Accessibility and Technology Interventions for People with Disabilities and/or Chronic Conditions
Momona Yamagami, Kelly Mack, Jennifer Mankoff, Katherine M. Steele
The Global Care Ecosystems of 3D Printed Assistive Devices
Saiph Savage, Claudia Flores-Saviaga, Rachel Rodney, Liliana Savage, Jon Schull, Jennifer Mankoff
Posters
Conveying Uncertainty in Data Visualizations to Screen-Reader Users Through Non-Visual Means
Ather Sharif, Ruican Zhong, Yadi Wang
U.S. Deaf Community Perspectives on Automatic Sign Language Translation
Nina Tran, Richard E. Ladner, Danielle Bragg (Microsoft Research)
Workshops
Bridging the Gap: Towards Advancing Privacy and Accessibility
Rahaf Alharbi, Robin Brewer, Gesu India, Lotus Zhang, Leah Findlater, and Abigale Stangl
Tackling the Lack of a Practical Guide in Disability-Centered Research
Emma McDonnell, Kelly Avery Mack, Kathrin Gerling, Katta Spiel, Cynthia Bennett, Robin N. Brewer, Rua M. Williams, and Garreth W. Tigwell
A11yFutures: Envisioning the Future of Accessibility Research
Foad Hamidi, Kirk Crawford, Jason Wiese, Kelly Avery Mack, Jennifer Mankoff
Demos
A Demonstration of RASSAR: Room Accessibility and Safety Scanning in Augmented Reality
Xia Su, Kaiming Cheng, Han Zhang, Jaewook Lee, Wyatt Olson, Jon E. Froehlich
BusStopCV: A Real-time AI Assistant for Labeling Bus Stop Accessibility Features in Streetscape Imagery
Chaitanyashareef Kulkarni, Chu Li, Jaye Ahn, Katrina Oi Yau Ma, Zhihan Zhang, Michael Saugstad, Kevin Wu, Jon E. Froehlich; with Valerie Novack and Brent Chamberlain (Utah State University)
Papers and presentations by CREATE associates and alumni
Understanding Challenges and Opportunities in Body Movement Education of People who are Blind or have Low Vision
Monday, Oct 23 at 4:00 p.m. Eastern time. Madhuka Thisuri De Silva, Leona M Holloway, Sarah Goodwin, Matthew Butler
AdaptiveSound: An Interactive Feedback-Loop System to Improve Sound Recognition for Deaf and Hard of Hearing Users
Tuesday, Oct 24 at 8:45 a.m. Eastern time. Hang Do, Quan Dang, Jeremy Zhengqi Huang, Dhruv Jain
“Not There Yet”: Feasibility and Challenges of Mobile Sound Recognition to Support Deaf and Hard-of-Hearing People
Tuesday, Oct 24 at 8:45 a.m. Eastern time. Jeremy Zhengqi Huang, Hriday Chhabria, Dhruv Jain
The Potential of a Visual Dialogue Agent In a Tandem Automated Audio Description System for Videos
Tuesday, Oct 24 at 4:00 p.m. Eastern time. Abigale Stangl, Shasta Ihorn, Yue-Ting Siu, Aditya Bodi, Mar Castanon, Lothar D Narins, Ilmi Yoon
What are the opportunities for research to engage the intersection of race and disability?
What is the value of considering how constructs of race and disability work alongside each other within accessibility research studies?
Two CREATE Ph.D. students have explored these questions and found little focus on this intersection within accessibility research. In their paper, Working at the Intersection of Race, Disability and Accessibility (PDF), they observe that we’re missing out on the full nuance of marginalized and “otherized” groups.
The Allen School Ph.D. students, Aashaka Desai and Aaleyah Lewis, and collaborators will present their findings at the ASSETS 2023 conference on Tuesday, October 24.
Spurred by the conversation at the Race, Disability & Technology research seminar earlier in the year, members of the team realized they lacked a framework for thinking about work at this intersection. In response, they assembled a larger team to analyze existing work at this intersection within accessibility research.
The resulting paper presents a review of considerations for engaging with race and disability in the research and education process. It offers analyses of exemplary papers, highlights opportunities for intersectional engagement, and presents a framework to explore race and disability research. Case studies exemplify engagement at this intersection throughout the course of research, in designs of socio-technical systems, and in education.
Case studies
Representation in image descriptions: How to describe appearance, factoring preferences for self-descriptions of identity, concerns around misrepresentation by others, interest in knowing others’ appearance, and guidance for AI-generated image descriptions.
Experiences of immigrants with disabilities: Cultural barriers, including cultural disconnects and differing levels of stigma about disability between refugees and host countries, compound language barriers.
Designing for intersectional, interdependent accessibility: How access practices as well as cultural and racial practices influence every stage of research design, method, and dissemination, in the context of work with communities of translators.
Authors, left to right: Christina Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, and Jennifer Mankoff
Authors
Christina N. Harrington, Assistant Professor in the Human-Computer Interaction Institute at Carnegie Mellon
CREATE researchers shone this spring at the 2023 Web4All conference, which, in part, seeks to “make the internet more accessible to the more than one billion people who struggle to interact with digital content each day due to neurodivergence, disability or other impairments.” Two CREATE-funded open source projects won accolades.
Building on prior research that developed taxonomies of the information screen-reader users seek when interacting with online data visualizations, the team used these taxonomies to extend the functionality of VoxLens—an open-source multi-modal system that improves the accessibility of data visualizations—by supporting drilled-down information extraction. They assessed the performance of their VoxLens enhancements through task-based user studies with 10 screen-reader and 10 non-screen-reader users. Their enhancements “closed the gap” between the two groups by enabling screen-reader users to extract information with approximately the same accuracy as non-screen-reader users, reducing interaction time by 22% in the process.
Authors: Ather Sharif, Aneesha Ramesh, Qianqian Yu, Trung-Anh H. Nguyen, and Xuhai Xu
Ather Sharif’s work on another project, UnlockedMaps, was honored with the Accessibility Challenge Delegates’ Award. The paper details a web-based map that allows users to see in real time how accessible rail transit stations are in six North American cities, including Seattle, Toronto, New York and the Bay Area. UnlockedMaps shows whether stations are accessible and whether they are currently experiencing elevator outages. The work includes a public website that helps users make informed decisions about their commutes, and an open source API that developers, disability advocates, and policy makers can use for a variety of purposes, including shedding light on the frequency of elevator outages and their repair times to identify disparities between neighborhoods in a given city.
Led by Human Centered Design and Engineering (HCDE) Ph.D. candidate Emma McDonnell and supported by CREATE, this work investigates how groups with both hearing and d/Deaf and hard of hearing (DHH) members could be better supported when using captions during videoconferences.
Researchers recruited four groups to participate in a series of codesign sessions, which de-centers researchers’ priorities and seeks to empower participants to lead the development of new design ideas. In the study, participants reflected on their experiences using captioning, sketched and discussed their ideas for technology that could help build accessible group norms, and then critiqued video prototypes researchers created of their ideas.
One major finding from this research is that participants’ relationships with each other shape what kinds of accessibility support the group would benefit from.
For example, one group that participated in the study were cousins who had been close since childhood. Now in their mid-twenties, they found they did not have to actively plan for accessibility; they had their ways of communicating and would stop and clarify if things broke down. On the other hand, a group of colleagues who work on technology for DHH people had many explicit norms they used to ensure communication accessibility. One participant, Blake, noted, “I was pretty emotional after the first meeting because it was just so inclusive.” These different approaches demonstrate that there is no one-size-fits-all approach to communication accessibility – people work together as a group to develop an approach that works for them.
This paper also contributes new priorities for the design of videoconferencing software. Participants focused on designing add-ons to videoconferencing systems that would better support their group in communicating accessibly. Their designs fell into four categories:
Speaker Identity and Overlap: Having video conferencing tools identify speakers and warn groups when multiple people speak at once, since overlapping speech can’t be captioned accurately. Participants found this to be critical, and often missing, information.
Support for Behavioral Feedback: Building in ways for people to subtly notify conversation partners if they need to adjust their behavior. Participants wanted tools to flag when someone needs to adjust their camera, when captions contain critical errors, and when speech is getting too fast. They considered, but decided against, a general-purpose conversation-breakdown warning.
Videoconferencing Infrastructure for Accessibility: Adding more features and configurable settings around conversational accessibility to videoconferencing platforms. Participants desired basic controls, such as color and font size, as well as the ability to preset and share group accessibility norms and customize behavior feedback tools.
Sound Information: Providing more information about the sound happening during a conversation. Participants were excited about building sound recognition into captioning tools, and considered conveying speech volume via font weight, but decided it would be overwhelming and ambiguous.
This research also has implications for broader captioning and videoconferencing design. While often captioning tools are designed for individual d/Deaf and hard of hearing people, researchers argue that we should design for the entire group having a conversation. This shift in focus revealed many ways that, on top of transcribing a conversation, technology could help groups communicate in ways that can be more effectively captioned. Many of these tools are easy to build with current technology, such as being able to click on a confusing caption to request clarification. The research team hopes that their work can illuminate the need to pay attention to groups’ social context when studying captioning and can provide videoconferencing platform designers a design approach to better support groups with mixed hearing abilities.
McDonnell is advised by CREATE Associate Directors Leah Findlater, HCDE, and Jon Froehlich, Paul G. Allen School of Computer Science & Engineering.
A team of Allen School robotics researchers has published a paper on the finer aspects of robot-assisted dining with friends. “A meal should be memorable, and not for a potential faux pas from the machine,” notes co-author Patrícia Alves-Oliveira. Supported by a CREATE Student minigrant and in the spirit of “nothing about us without us,” they are working with the Tyler Schrenk Foundation to address the design of robot-assisted feeding systems that facilitate meaningful social dining experiences.
UW CREATE collaborates toward a world with fewer problems and more solutions for people of all abilities.
The UW College of Engineering showcased CREATE’s mission, moonshots, and collaborative successes in a feature article, Rethinking disability and advancing access, written by Alice Skipton. The article is reproduced and reformatted here.
CREATE researchers and partners work on high-impact projects — such as those focused on mobility and on mobile device accessibility — advancing inclusion and participation for people with disabilities.
According to the Centers for Disease Control and Prevention (CDC), one in four people in the United States lives with a disability.
“The presence of disability is everywhere. But how disability has been constructed, as an individual problem that needs to be fixed, leads to exclusion and discrimination.”
Heather Feldner, UW Medicine assistant professor in Rehabilitation Medicine and a CREATE associate director
The construct also ignores the reality that people’s physical and mental abilities continually change. Examples include pregnancy, childbirth, illness, injuries, accidents and aging. Additionally, assuming that people all move, think or communicate in a certain way fails to recognize diverse bodies and minds. By ignoring this reality, technology and access solutions have traditionally been limited and limiting.
UW CREATE, a practical, applied research center, exists to counter this problem by making technology accessible and the world accessible through technology. Launched in early 2020 with support from Microsoft, the Center connects research to industry and the community.
On campus, it brings together accessibility experts and work-in-progress from across engineering, medicine, disability studies, computer science, information science and more, with the model always open to new collaborators.
“Anyone interested in working in the area of accessible technology is invited to become part of CREATE,” says Jacob O. Wobbrock, a professor in the UW Information School and one of the founders and co-director of the Center.
Shooting for the moon
CREATE is partnering with UW I-LABS to explore how accessibility impacts young children’s development, identity and agency. Their study uses the only powered mobility device available in the U.S. designed for children one to three years old. Photo courtesy of UW CREATE.
“We have an amazing critical mass at UW of faculty doing accessibility research,” says Jennifer Mankoff, a professor in the Paul G. Allen School of Computer Science & Engineering and another founder and co-director of CREATE. “There’s also a lot of cross-talk with Microsoft, other technology leaders, and local and national community groups. CREATE wants to ensure people joining the workforce know about accessibility and technology and that the work they do while they are at UW directly and positively impacts the disability community.” The Center’s community and corporate partnerships approach increases creativity and real-world impact.
The concept of moonshots — technology breakthroughs resulting from advances in space exploration — offers a captivating way of thinking about the potential of CREATE’s research. The Center currently has four research moonshots for addressing technological accessibility problems. One focuses on how accessibility impacts young children’s development, identity and agency and includes a mobility and learning study with the UW Institute for Learning & Brain Sciences (I-LABS) that employs the only powered mobility device available in the U.S. market specifically designed for children one to three years old. Another looks more broadly at mobility indoors and outdoors, such as sidewalk and transit accessibility. A third seeks ways to make mobile and wearable devices more accessible along with the apps people use every day to access such essentials as banking, gaming, transportation and more. A fourth works toward addressing access, equity and inclusion for multiply marginalized people.
“CREATE wants to ensure people joining the workforce know about accessibility and technology and that the work they do while they are at UW directly and positively impacts the disability community.”
— Jennifer Mankoff, founder and co-director of CREATE
For CREATE, advancing these moonshots isn’t just about areas where technologies already exist, like improving an interface to meet more people’s needs. It’s about asking questions and pushing research to address larger issues and inequities. “In certain spaces, disabled people are overrepresented, like in the unhoused or prison populations, or in health-care settings,” Mankoff says. “In others, they are underrepresented, such as in higher education, or simply overlooked. For example, disabled people are more likely to die in disaster situations because disaster response plans often don’t include them. We need to ask how technology contributes to these problems and how it can be part of the solution.”
Broader problem-solving abilities
For even greater impact, CREATE has situated these research moonshots within a practical framework for change that involves education initiatives, translation work and research funding. Seminars, conversations, courses, clubs and internship opportunities all advance the knowledge and expertise of the next generation of accessibility leaders. Translation work ensures that ideas get shaped and brought to life by community stakeholders and through collaborations with UW entities like the Taskar Center for Accessible Technology, HuskyADAPT and the UW Disability Studies Program, as well as through collaborations with industry leaders like Microsoft, Google and Meta. CREATE’s research funding adds momentum by supporting education, translation and direct involvement of people with disabilities.
Engineering and computer science researchers seek to make digital wayfinding more equitable and accessible to more people.
Nicole Zaino, a mechanical engineering Ph.D. student participating in CREATE’s early childhood mobility technology research, describes the immense benefits of having her education situated in the context of CREATE. “It’s broadened my research and made me a better engineer,” she says. She talks about the critical importance of end-user expertise, like the families participating in the mobility and learning study. Doing collaborative research and taking classes in other disciplines gives her more insights into intersecting issues. That knowledge and new vocabulary inform her work because she can search out research from different fields she otherwise wouldn’t have known about.
More equity advocates
At the same time, Zaino’s lived experience with her disability also broadens her perspective and enhances her research. She became interested in her current field when testing out new leg braces and seeing other assistive technology on the shelves at the clinic. For Mankoff, it was the reverse. She worked in the field and then experienced disability when diagnosed with Lyme disease, something she’s incorporated into her research. Wobbrock got a front-row seat to mobility and accessibility challenges when he severely herniated his L5-S1 disc and couldn’t sit down for two years. For Feldner, although she studied disability academically as a physical therapist and in disability studies, first-hand experience came later in her career, when she became a parent and a disability advocate for one of her children. At CREATE, more than 50% of those involved have some lived experience with disability. This strengthens the Center by bringing a diversity of perspectives and first-hand knowledge about how assumptions often get in the way of progress.
Seeking to push progress further on campus, CREATE has an initiative on research at the intersection of race, disability and technology with the Allen School, the Simpson Center for the Humanities, the Population Health Initiative, the Office of Minority Affairs and Diversity, the Buerk Center for Entrepreneurship, and the Office of the ADA Coordinator.
CDC statistics show that the number of people experiencing a disability is higher when examined through the lens of race and ethnicity. With events and an open call for proposals, the initiative seeks increased research and institutional action in higher education, health care, artificial intelligence, biased institutions and more.
“If we anticipate that people don’t conform to certain ability assumptions, we can think ahead,” says Wobbrock. “What would that mean for a particular technology design? It’s a longstanding tenet of accessibility research that better access for some people results in better access for all people.”
Make a gift
By supporting UW CREATE, you can help make technology accessible and make the world accessible through technology.
Just about everybody in business, education, and artistic settings needs to use presentation software like Microsoft PowerPoint, Google Slides, and Adobe Illustrator. These tools use artboards to hold objects such as text, shapes, images, and diagrams. But for blind and low vision (BLV) people, using such software adds a new level of challenge beyond keeping our bullet points short and images meaningful. They experience:
High added cognitive load
Difficulty determining relationships between objects
Uncertainty if an operation has been successful
Screen readers, which were built for 1-D text information, don’t handle 2-D information spaces like artboards well.
For example, NVDA and Windows Narrator report artboard objects only in their Z-order – regardless of where those objects are located or whether they are visually overlapping – and announce only each object’s shape name, without any other useful information.
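A small example makes the mismatch concrete. The sketch below is illustrative only (it is not code from A11yBoard, NVDA, or Narrator); it contrasts announcing objects in stacking (Z) order with announcing them in a simple top-to-bottom, left-to-right spatial order, using made-up object names and coordinates.

```python
# Illustrative only: contrasts Z-order traversal with a spatial reading order.
# Object names, z indices, and coordinates are made up for the example.
from dataclasses import dataclass

@dataclass
class ArtboardObject:
    name: str
    z: int      # stacking order: higher values are drawn on top
    x: float    # left edge on the slide, in points
    y: float    # top edge on the slide, in points

slide = [
    ArtboardObject("Footer text", z=0, x=40, y=500),  # added first, bottom of the stack
    ArtboardObject("Body text",   z=1, x=40, y=120),
    ArtboardObject("Title",       z=2, x=40, y=20),   # added last, top of the stack
]

# What a Z-order traversal announces: the stacking order, not the layout.
z_order = [o.name for o in sorted(slide, key=lambda o: o.z)]

# A spatial traversal (top-to-bottom, then left-to-right) reflects where objects
# actually sit on the 2-D canvas, which is closer to what a blind reader needs.
reading_order = [o.name for o in sorted(slide, key=lambda o: (o.y, o.x))]

print(z_order)        # ['Footer text', 'Body text', 'Title']
print(reading_order)  # ['Title', 'Body text', 'Footer text']
```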
Can digital artboards in presentation software be made accessible for blind and low-vision users to read and edit on their own?
Can we design interaction techniques to deliver rich 2-D information to screen reader users?
The answer is yes!
We developed a multidevice, multimodal interaction system – A11yBoard – that mirrors the desktop’s canvas on a mobile touchscreen device and enables rapid finger-driven screen reading via touch, gesture, and speech.
Blind and low-vision users can explore the artboard by using a “reading finger” to move across objects and receive audio tone feedback. They can also use a second finger to “split-tap” on the screen to receive detailed information and select this object for further interactions.
“Walkie-talkie mode,” turned on by dwelling a finger on the screen like holding down a switch, lets users “talk” to the application.
Users can therefore access a wealth of detail about objects’ properties and relationships. For example, they can ask for the closest objects to understand what is nearby to explore. For operations that are not easily performed with touch, gesture, and speech, we also designed an intelligent keyboard search interface that lets blind and low-vision users carry out every object-related task.
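As a rough illustration of the kind of spatial query this enables, the sketch below (simplified assumptions, not A11yBoard’s actual code) ranks objects by center-to-center distance from the selected object and phrases each result the way a screen reader might speak it.

```python
# Illustrative sketch of a "closest objects" query; not the A11yBoard source.
# Given a selected object, return the k nearest objects by center-to-center
# distance, each with a coarse direction a screen reader could speak.
import math
from typing import List, Tuple

Obj = Tuple[str, float, float]   # (name, center_x, center_y) in points

def direction(dx: float, dy: float) -> str:
    """Convert an offset into a coarse spoken direction (screen y grows downward)."""
    horizontal = "right" if dx > 0 else "left"
    vertical = "below" if dy > 0 else "above"
    return vertical if abs(dy) > abs(dx) else horizontal

def closest_objects(selected: Obj, others: List[Obj], k: int = 3) -> List[str]:
    name, sx, sy = selected
    candidates = [o for o in others if o[0] != name]
    ranked = sorted(candidates, key=lambda o: math.hypot(o[1] - sx, o[2] - sy))
    return [
        f"{o[0]}: {math.hypot(o[1] - sx, o[2] - sy):.0f} points {direction(o[1] - sx, o[2] - sy)}"
        for o in ranked[:k]
    ]

# Example: what might be spoken after split-tapping the title textbox.
objects = [("Title", 200, 40), ("Subtitle", 200, 110), ("Logo", 620, 40), ("Chart", 360, 300)]
print(closest_objects(("Title", 200, 40), objects, k=2))
# ['Subtitle: 70 points below', 'Chart: 305 points below']
```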
Through a series of evaluations with blind users, A11yBoard was shown to provide intuitive spatial reasoning, multimodal access to objects’ properties and relationships, and an eyes-free reading and editing experience for 2-D objects.
Currently, much digital content has been made accessible for blind and low-vision people to read and “digest.” But few technologies have been introduced to make the creation process accessible so that blind and low-vision users can create visual content on their own. With A11yBoard, we have taken a step toward a bigger goal – making heavily visual content creation accessible to blind and low-vision people.
Paper author Zhuohao (Jerry) Zhang is a second-year Ph.D. student at the UW iSchool. His work in HCI and accessibility focuses on designing assistive technologies for blind and low-vision people. Zhang has published and presented at the CHI, UIST, and ASSETS conferences, receiving a CHI best paper honorable mention award and a UIST best poster honorable mention award, winning the CHI Student Research Competition, and being featured in Microsoft’s New Future of Work Report 2022. He is advised by CREATE Co-Director Jacob O. Wobbrock.
The machines and devices we use every day – for example, touchscreens, gas pedals, and computer trackpads – interpret our actions and intentions via sensors. But these sensors are designed based on assumptions about our height, strength, dexterity, and abilities. When they are built for the average person (who does not actually exist), they end up unusable or inaccessible for many people.
CREATE postdoctoral scholar Momona Yamagami seeks to integrate personalization and customization into sensor design and the resulting algorithms baked into the products we use. Her research has shown that biosignal interfaces that use electromyography sensors, accelerometers, and other biosignals as inputs show promise for improving accessibility for people with disabilities.
In a recent presentation of her research as a CREATE postdoctoral scholar, she emphasized that generalized models that are not personalized to an individual’s abilities, body size, and skin tone may not perform well.
Momona Yamagami presenting her biosignal research, with a slide noting that biosignals fluctuate and sit higher up the neural circuitry, and showing a smartwatch as an “always on” sensor for continuous health monitoring.
Individualized interfaces that are personalized to the individual and their abilities could significantly enhance accessibility. Continuous (i.e., 2-dimensional trajectory-tracking) and discrete (i.e., gesture) electromyography (EMG) interfaces can be personalized to the individual:
For the continuous task, the team used methods from game theory to iteratively optimize a linear model that mapped EMG input to cursor position (a simplified sketch of a personalized linear decoder follows this list).
For the discrete task, the team developed a dataset of participants with and without disabilities performing gestures that are accessible to them.
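The game-theoretic co-adaptation procedure itself is beyond a short example, but the underlying idea of a personalized linear decoder can be sketched. The Python snippet below is not the study’s method: it is a generic least-squares fit of a linear map from EMG features to 2-D cursor position, with all array names, shapes, and the synthetic data assumed purely for illustration.

```python
import numpy as np

# Hypothetical shapes: each row is one time step.
# emg_features: (n_samples, n_channels) rectified, low-pass-filtered EMG
# cursor_targets: (n_samples, 2) x/y positions the user was asked to track
rng = np.random.default_rng(0)
emg_features = rng.random((1000, 8))
cursor_targets = rng.random((1000, 2))

# Fit a per-user linear decoder W (with a bias column) by ordinary least squares:
# cursor ≈ [emg_features, 1] @ W
X = np.hstack([emg_features, np.ones((emg_features.shape[0], 1))])
W, *_ = np.linalg.lstsq(X, cursor_targets, rcond=None)

def decode(emg_sample: np.ndarray) -> np.ndarray:
    """Map one EMG feature vector to a predicted 2-D cursor position."""
    return np.append(emg_sample, 1.0) @ W
```

In practice, such a decoder would be refit or updated over repeated trials as the user and the model co-adapt, which is where the game-theoretic framing enters.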
As biosignal interfaces become more commonly available, it is important to ensure that such interfaces have high performance across a wide spectrum of users.
Momona Yamagami is completing her time as a CREATE postdoctoral scholar, advised by CREATE Co-Director Jennifer Mankoff. Starting in summer 2023, Yamagami will be an assistant professor in electrical and computer engineering at Rice University as part of the Digital Health Initiative.
The Association for Computing Machinery (ACM) has honored CREATE Co-Director Jacob O. Wobbrock and colleagues with a 10-year lasting impact award for their groundbreaking work improving how computers recognize stroke gestures.
Wobbrock, a professor in the Information School, and co-authors Radu-Daniel Vatavu and Lisa Anthony were presented with the 2022 Ten Year Technical Impact Award in November at the ACM International Conference on Multimodal Interaction (ICMI). The award honors their 2012 paper titled Gestures as point clouds: A $P recognizer for user interface prototypes, which also won ICMI’s Outstanding Paper Award when it was published.
The $P point-cloud gesture recognizer was a key advance in the way computers recognize stroke gestures, such as swipes, shapes, or drawings on a touchscreen. It provided a new way to quickly and accurately recognize what users’ fingers or styluses were telling their devices to do, and it could even be used with whole-hand gestures to accomplish more complex tasks such as typing in the air or controlling a drone with finger movements.
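To give a flavor of how a point-cloud recognizer works, here is a heavily simplified Python sketch of the core idea: treat the candidate gesture and each stored template as unordered point clouds (normalized to a common scale and position) and pick the template with the lowest greedy point-matching cost. This is an illustrative reduction, not the published $P recognizer, which also resamples strokes to a fixed number of points, weights matches, and tries multiple starting points.

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def normalize(points: List[Point]) -> List[Point]:
    """Scale the cloud to a unit bounding box and center it at the origin.
    (The real recognizer also resamples every gesture to the same number of
    points first; here we assume that has already been done.)"""
    xs, ys = zip(*points)
    size = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(points), sum(ys) / len(points)
    return [((x - cx) / size, (y - cy) / size) for x, y in points]

def cloud_distance(a: List[Point], b: List[Point]) -> float:
    """Greedily pair each point in cloud a with its nearest unmatched point
    in cloud b and sum the distances (the core of $P, minus its weighting
    and multiple starting points)."""
    unmatched = set(range(len(b)))
    cost = 0.0
    for p in a:
        j = min(unmatched, key=lambda k: math.dist(p, b[k]))
        cost += math.dist(p, b[j])
        unmatched.remove(j)
    return cost

def recognize(candidate: List[Point], templates: Dict[str, List[Point]]) -> str:
    """Return the name of the template whose point cloud best matches the candidate."""
    c = normalize(candidate)
    return min(templates, key=lambda name: cloud_distance(c, normalize(templates[name])))
```

Because the clouds are unordered, the same matching handles multistroke gestures without caring about stroke order or direction – the property that distinguished $P from its $1 predecessor.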
The research built on Wobbrock’s 2007 invention of the $1 unistroke recognizer, which made it much easier for devices to recognize single-stroke gestures, such as a circle or a triangle. Wobbrock called it “$1” – 100 pennies – because it required only 100 lines of code, making it easy for user interface developers to incorporate gestures in their prototypes.
Congratulations to CREATE Ph.D. student Ather Sharif, Orson (Xuhai) Xu, and team for this great project on transit access! Together they developed UnlockedMaps, a web-based map that lets users see in real time how accessible rail transit stations are in six metro areas: Seattle, Philadelphia (where the project was first conceived by Sharif and a friend at a hackathon), Chicago, Toronto, New York, and the California Bay Area.
Shown here is a screenshot of UnlockedMaps in New York. Stations labeled green are accessible, while stations labeled orange are not; stations labeled yellow have reported elevator outages.
Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering advised by CREATE Co-Director Jacob O. Wobbrock, said the team also included nearby and accessible restaurant and bathroom data. “I think restaurants and restrooms are two of the most common things that people look for when they plan their commute. But no other maps really let you filter those out by accessibility. You have to individually click on each restaurant and check if it’s accessible or not, using Google Maps. With UnlockedMaps, all that information is right there!”
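The station labeling described above amounts to a simple classification, sketched below in Python. This is not UnlockedMaps’ code; the Station fields and the color names are assumed for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Station:
    """Hypothetical station record; field names are assumed for illustration."""
    name: str
    is_accessible: bool
    has_elevator_outage: bool

def map_label(station: Station) -> str:
    """Mirror the color scheme described above: green = accessible,
    orange = not accessible, yellow = elevator outage reported."""
    if station.has_elevator_outage:
        return "yellow"
    return "green" if station.is_accessible else "orange"

# Example: filter a station list down to stops that are usable right now.
stations = [
    Station("Example St", True, False),
    Station("Sample Ave", True, True),
    Station("Test Blvd", False, False),
]
usable_now = [s.name for s in stations if map_label(s) == "green"]
```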
Whether she’s researching how biofeedback systems can guide gait training in children with cerebral palsy or leading toy adaptation events, Alyssa Spomer is committed to advancing accessible technology.
A Ph.D. student in UW Mechanical Engineering (ME) advised by CREATE Associate Director Kat Steele, Spomer is the student chair of CREATE-sponsored HuskyADAPT. Her studies have been multidisciplinary, spanning ME and rehabilitation medicine. She uses her engineering skills to understand how effectively robotic devices can target and improve neuromuscular control during walking.
“Delving into how the central nervous system controls movement and how these systems are impacted by brain injury has been such an interesting aspect of my work,” Spomer says. “My research is a mix of characterizing the capacity for individuals to adapt their motor control and movement patterns, and evaluating the efficacy of devices that may help advance gait rehabilitation.”
In her dissertation work, Spomer is primarily evaluating how individuals adapt their movement patterns while using a pediatric robotic exoskeleton paired with an audiovisual biofeedback system that she helped design. The Biomotum SPARK exoskeleton senses and supports motion at the ankle, using motors worn on a hip belt to provide either resistance or assistance during walking. The audiovisual system is integrated into the device’s app and provides the user with real-time information on their ankle motion alongside a desired target to help guide movement correction.
The audiovisual system that Spomer helped design (shown on a screen in the right photo) provides the user with real-time information on their ankle motion alongside a desired target to help guide movement correction.
Inspired by CREATE’s Kat Steele and the Steele Lab
Spomer was drawn to ME by the Steele Lab’s focus on enhancing human mobility through engineering and design. Working with Kat Steele has been a highlight of her time at the UW.
“I really resonated with Kat’s approach to research,” Spomer says. “The body is the ultimate machine, meaning that we as engineers can apply much of our foundational curriculum in dynamics and control to characterize its function. The beauty of ME is that you are able to develop such a rich knowledge base with numerous applications which really prepares you to create and work in these multidisciplinary spaces.”
This winter, Spomer will begin a new job at Gillette Children’s Specialty Healthcare. She’s excited to pursue research that aligns with her Ph.D. work. Her goal remains the same: “How can we advance and improve the accessibility of healthcare strategies to help promote independent and long-term mobility?”
Mobile apps have become a key feature of everyday life, with apps for banking, work, entertainment, communication, transportation, and education, to name a few. But many apps remain inaccessible to people with disabilities who use screen readers or other assistive technologies.
Any person who uses an assistive technology can describe negative experiences with apps that do not provide proper support. For example, screen readers unhelpfully announce “unlabeled button” when they encounter a screen widget without proper information provided by the developer.
We know that apps often lack adequate accessibility, but until now, it has been difficult to get a big picture of mobile app accessibility overall.
How good or bad is the state of mobile app accessibility? What are the common problems? What can be done?
Research led by Ph.D. student Anne Spencer Ross and co-advised by James Fogarty (CREATE Associate Director) and Jacob O. Wobbrock (CREATE Co-Director) has been examining these questions in first-of-their-kind large-scale analyses of mobile app accessibility. Their latest research automatically examined data from approximately 10,000 apps to identify seven common types of accessibility failures. Unfortunately, this analysis found that many apps are highly inaccessible. For example, 23% of the analyzed apps failed to provide accessibility metadata, known as a “content description,” for more than 90% of their image-based buttons. The functionality of those buttons will therefore be inaccessible when using a screen reader.
A bar chart shows that 23 percent of apps were missing labels on all of their elements, another 23 percent were missing labels on none of their elements, and the rest were missing labels on 6 to 7 percent of their elements.
Clearly, we need better approaches to ensuring all apps are accessible. This research has also shown that large-scale data can help identify why such labeling failures occur. For example, “floating action buttons” are a relatively new Android element that typically presents a commonly used command as an image button floating atop other elements. The analyses found that 93% of such buttons lacked a content description, making them even more likely than other buttons to be inaccessible. By examining this issue closely, Ross and her advisors found that commonly used software development tools do not detect this error. In addition to highlighting accessibility failures in individual apps, results like these suggest that identifying and addressing underlying failures in common developer tools can improve the accessibility of many apps.
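To make this kind of check concrete, the sketch below scans an Android UI hierarchy dump (such as the XML produced by the uiautomator tool) for image buttons that lack a content description. The attribute and class names reflect common uiautomator output, but the snippet illustrates the general approach rather than the instrumentation used in this research.

```python
import xml.etree.ElementTree as ET

def missing_label_rate(ui_dump_path: str) -> float:
    """Fraction of image-based buttons in a uiautomator XML dump that lack a
    content description (and are therefore announced as 'unlabeled button')."""
    root = ET.parse(ui_dump_path).getroot()
    image_buttons = [
        node for node in root.iter("node")
        if node.get("class") == "android.widget.ImageButton"
    ]
    if not image_buttons:
        return 0.0
    unlabeled = [n for n in image_buttons if not n.get("content-desc", "").strip()]
    return len(unlabeled) / len(image_buttons)

# Example: flag a screen in which more than 90% of image buttons are unlabeled.
# rate = missing_label_rate("window_dump.xml")
# if rate > 0.9:
#     print(f"{rate:.0%} of image buttons have no content description")
```

Run across thousands of such dumps, a per-app rate like this is what supports claims such as “23% of apps were missing labels on more than 90% of their image-based buttons.”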
Next, the researchers aim to detect a greater variety of accessibility failures and to include longitudinal analyses over time. Eventually, they hope to paint a complete picture of mobile app accessibility at scale.