
Amy Ko named ACM Distinguished Member

March 18, 2024

Congratulations to CREATE faculty Amy J. Ko, who has been recognized as a Distinguished Member of the Association for Computing Machinery (ACM) for her work on human-centered theories of program understanding and the development of tools and learning technologies. 

Amy J. Ko, a 40-something white/Asian woman with brown hair and black rimmed eyeglasses.

“I’m honored to be recognized by my nominators, all of whom have been role models and mentors in my career,” said Ko, a professor in the iSchool. “It makes me want to pay their giving and caring work forward to more junior scholars across my community.” 

Ko has made substantial contributions to research in computing education, human-computer interaction, and humanity’s struggle to understand computing and harness it for creativity, equity and justice. She is one of the editors of the newly released, open-source book Teaching Accessible Computing and has released a beta version of Wordplay, an educational programming language created particularly for adolescents with disabilities and those who are not fluent in English, groups that have so often been left behind in learning about computing. (She invites undergraduates interested in making programming languages more playful, global, and accessible to join Wordplaypen, a community that helps design, build, and maintain Wordplay.)

The ACM is the world’s largest computing society. It recognizes up to 10 percent of its worldwide membership as distinguished members based on their professional experience, groundbreaking achievements, and longstanding participation in computing. The ACM has three tiers of recognition: fellows, distinguished members and senior members.


This article has been excerpted from an iSchool article.

Zhang is CREATE’s Newest Apple AIML fellow

March 18, 2024

Congratulations to Zhuohao (Jerry) Zhang – the most recent CREATE Ph.D. student to receive an Apple Scholars in AIML PhD fellowship. The prestigious award supports students through funding, internship opportunities, and mentorship with an Apple researcher. 

Zhang is a third-year iSchool Ph.D. student advised by Prof. Jacob O. Wobbrock. His research focuses on using human-AI interaction to address real-world accessibility problems. He is particularly interested in designing and evaluating intelligent assistive technologies to make creativity tasks accessible.

Zhuohao (Jerry) Zhang standing in front of a poster, wearing a black sweater and a pair of black glasses, smiling.

Zhang joins previous CREATE-advised Apple AIML fellows:

Venkatesh Potluri (Apple AIML Ph.D. fellow 2022), advised by CREATE Director Jennifer Mankoff in the Allen School. His research makes overlooked software engineering spaces such as IoT and user interface development accessible to developers who are blind or visually impaired. His work systematically examines the accessibility gaps in these spaces and addresses them by enhancing widely used programming tools.

Venkatesh Potluri leans toward the camera smiling with eyes cast downward

Rachel Franz (Apple AIML Ph.D. fellow 2021) is also advised by Wobbrock in the iSchool. Her research focuses on accessible technology design and evaluation for users with functional impairments and low digital literacy. Specifically, she is focused on using AI to make virtual reality more accessible to individuals with mobility limitations.

Rachel Franz, a young woman with long blond hair and light skin photographed in front of a rock wall.

New Book: Teaching Accessible Computing

March 14, 2024

A new, free, and community-sourced online book helps Computer Science educators integrate accessibility topics into their classes. Teaching Accessible Computing provides the foundations of accessibility relevant to computer science teaching and then presents teaching methods for integrating those topics into course designs.

From the first page of the book, a line drawing of a person hunched over a laptop with their face close to the screen which is populated by large, unreadable characters.

The editors are Alannah Oleson, a postdoctoral scholar and co-founder of the UW Center for Learning, Computing, and Imagination (LCI); CREATE and iSchool faculty member Amy Ko; and Richard Ladner, CREATE Director of Education Emeritus. You may recognize many CREATE faculty members’ research referenced throughout the guide. CREATE Director Jennifer Mankoff and CREATE Ph.D. student Kelly Avery Mack contributed a foundational chapter that advocates for teaching inclusively in addition to teaching about accessibility.

Letting the book speak for itself

“… we’ve designed this book as a free, open, living, web-first document. It’s free thanks to a National Science Foundation grant (NSF No. 2137312) that has funded our time to edit and publish the book. It’s open in that you can see and comment on the book at any time, creating community around its content. It’s living in that we expect it to regularly change and evolve as the community of people integrating accessibility into their CS courses grows and evolves. And it’s web-first in that the book is designed first and foremost as an accessible website to be read on desktops, laptops, and mobile devices, rather than as a print book or PDF. This ensures that everyone can read it, but also that it can be easily changed and updated as our understandings of how to teach accessibility in CS evolve.”

Introduction by Alannah Oleson, Amy J. Ko, Richard Ladner

“To write these chapters, we recruited some of the world’s experts on accessible computing and teaching accessible computing, giving them a platform to share both their content knowledge about how accessibility intersects with specific CS topics, but also their pedagogical content knowledge about how to teach those intersections in CS courses.”

Introduction by Alannah Oleson, Amy J. Ko, Richard Ladner

DUB hosts para.chi event

March 1, 2024

Para.chi is a worldwide parallel event to CHI ’24 for those unable or unwilling to join CHI ‘24. UW Design. Use. Build. (DUB) is hosting para.chi.dub with members of the DUB team–and maybe you.

  • Live session for accepted virtual papers
  • Networking opportunities
  • Accessibility for students and early career researchers locally and online

Wednesday, May 8, 2024 
Hybrid event: Seattle location to be announced and virtual info shared upon registration
Presenter applications due March 15 
Register to attend by Monday, April 1.

Do you have a virtual paper and wish to get feedback from a live audience? Perhaps you have a journal paper accepted to an HCI venue and wish to present it live? Then consider joining us!

Note that presenter space is somewhat limited. Decisions about how to distribute poster, presenter, and hybrid opportunities will be made after March 15.

Seattle and beyond

Each regional team is offering a different event, from mini-conferences to virtual paper sessions to mentoring and networking events. 


Three Myths and Three Actions: “Accommodating” Disabled Students

February 29, 2024

Excerpted from the Winter 2024 Allen School DEIA newsletter article contributed by CREATE Ph.D. students Kelly Avery Mack and Ather Sharif, with Lucille Njoo.

Completing graduate school is difficult for any student, but it’s especially difficult when you’re trying to learn at an institution that isn’t built for you. Students with disabilities at UW face extra challenges every day because our university doesn’t support equitable participation in educational activities like research and mentorship – those of us who don’t fit the mold face an uphill struggle to make ourselves heard in an academic culture that values maximum efficiency over unique perspectives. In this article, we share three common myths about students with disabilities, reveal the reality of our inequitable experience as grad students at UW, and propose a few potential solutions to begin ameliorating this reality, both at our university and beyond.

Myth 1: DRS (Disabilities Resources for Students) handles all accessibility accommodations.

This is an incorrect expectation of the role DRS serves in a campus ecosystem. The term “accommodations,” in the first place, frames us as outcasts, implying that someone needs to “review” and “approve” of our “requests” to simply exist equitably; but given that this is the term folks are most familiar with, we’ll continue referring to them as “accommodations” for ease of communication. While DRS can provide some assistance, they are outrageously under-staffed, and UW research has demonstrated that they are only part of the ecosystem. Instructors need to consider accessibility when building their courses and when teaching their classes. Accessibility, like computer security, works best when it is considered from the beginning, but it’s not too late to start repairing inaccessible PDFs or lecture slides for a future quarter. UW DO-IT has a great resource for accessible teaching.

Myth 2: Making my materials accessible is all I have to do for disabled students, right?

Disability is highly individual, and no matter how much an instructor prepares, a student might need further accommodations than what was prepared ahead of time. Listen to and believe disabled students when they discuss the accessibility barriers they face. Questioning their disability or using language that makes them doubt their self-worth is a hard no. Then, work with the student to decide on a solution moving forward, and remember that students are the number-one experts on their own accessibility needs.

Myth 3: Advising a student with a disability is the same as advising a student without a disability.

Disabled students have very different experiences of grad school, and they need advisors who are informed, aware, and proactive about those differences. If you are taking on a disabled student, the best ways to prepare yourself are:

Educate yourself about disability.

Disabled students are tired of explaining the same basic accessibility practices over and over again. Be willing to listen if your student wants to educate you more about their experience with disability, and recognize action items from the conversation that you can incorporate to improve your methods.

Expect that timelines might look different.

Disabled students deal with all kinds of barriers, from inaccessible technology to multiple-week hospital stays, so they may do things faster or slower than other students (as is true for any student). This does not mean they are not as productive or deserving of research positions. Disabled students produce high-quality research and award-winning papers, and their unique perspectives have the potential to strengthen every field, not just those related to disability studies. And they are able to do their best work when they have an advisor who recognizes their intellectual merit and right to be a part of the program.

Be prepared to be your student’s number-one ally.

Since DRS cannot fulfill all accessibility needs, you might need to figure out how to meet them yourself. Can you find $200 in a grant to purchase an OCR tool to help make PDFs accessible for a blind student? (Yes, you can.) Can you advocate for them if their instructor isn’t meeting accessibility requests? (Yes, you can.) Not only will this help them do their best work, but it also sets an example for the other students in your lab and establishes an academic culture that values students of all abilities.

Wheels in motion: Improving mobility technologies for children

February 28, 2024

Being able to easily get from the house to the playground affects how long and how often children use an adapted ride-on car, according to a study, Off to the park: a geospatial investigation of adapted ride-on car usage, published by CREATE Ph.D. student Mia Hoffman with CREATE associate directors Heather A. Feldner (lead researcher on the project), Katherine M. Steele, and Jon Froehlich. Their research demonstrates the importance of accessibility in the built environment and that advocating for environmental accessibility should include both the indoors and outdoors.

Two children ride in small toy cars, one of which has an adapted steering wheel to make it accessible for the child to use.

For a recent study, adapted ride-on cars were provided to 14 families with young children in locations across Western Washington. Photo courtesy of Heather Feldner.

Ride-on cars are miniature, battery-powered toy cars for children, driven with a steering wheel and a foot pedal. Adapted ride-on cars are an easy-to-use temporary solution for children with mobility issues. Although wheelchairs offer finer control, insurance typically covers a new wheelchair only every five years. Children under age 5 can use adapted ride-on cars to explore their surroundings if they have outgrown their wheelchair or aren’t yet able to use one.

Exploration is critical to language, social and physical development. There are big benefits when a child starts moving.

Mia Hoffman, CREATE Ph.D. student

“Adapted ride-on cars allow children to explore by themselves,” says Mia Hoffman, the Ph.D. candidate in mechanical engineering who co-authored the paper published in fall 2023. “Exploration is critical to language, social and physical development. There are big benefits when a child starts moving.”

The researchers adapted the ride-on cars to make them more accessible. Instead of a foot pedal, children might start the car with an option that’s accessible to them, such as a large button or a sip-and-puff, a pneumatic device that responds to air being blown into it. Researchers also added structural supports, such as a backrest made out of kickboards or PVC side supports.

Adapted ride-on cars were provided to 14 families with young children in locations across Western Washington. Heather Feldner, an assistant professor in the Department of Rehabilitation Medicine and adjunct assistant professor in ME, trained families on how to use the cars. The families then spent a year playing with the cars. Each car had an integrated data logger that tracked how often the child pressed the switch to move the car, and GPS data indicated how far they traveled.
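The published analysis isn’t reproduced here, but the kind of summary those logs enable is straightforward to sketch. The snippet below is a minimal, hypothetical example of turning timestamped GPS fixes from one play session into active-time and distance estimates; the log format and field order are assumptions for illustration, not the study’s actual schema.

```python
import math
from datetime import datetime

# Hypothetical log format: each row is (ISO timestamp, latitude, longitude),
# recorded while the child activates the switch. Illustrative only.
def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def summarize_session(rows):
    """Return (active_minutes, distance_m) for one play session."""
    times = [datetime.fromisoformat(t) for t, _, _ in rows]
    active_minutes = (times[-1] - times[0]).total_seconds() / 60
    distance_m = sum(
        haversine_m(rows[i - 1][1], rows[i - 1][2], rows[i][1], rows[i][2])
        for i in range(1, len(rows))
    )
    return active_minutes, distance_m

session = [
    ("2023-05-01T10:00:00", 47.6062, -122.3321),
    ("2023-05-01T10:05:00", 47.6070, -122.3300),
    ("2023-05-01T10:12:00", 47.6081, -122.3285),
]
print(summarize_session(session))  # prints active minutes and meters traveled
```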

The study found that most play sessions occurred indoors, underscoring the importance of indoor accessibility for children’s mobility technology. However, children used the car longer outdoors, and identifying an accessible route increased the frequency and duration of outside play sessions. Study participants drove outdoors more often in pedestrian-friendly neighborhoods, measured by researchers with the Walk Score, and when close to accessible paths, measured by Project Sidewalk’s AccessScore.

“Families can sometimes be uncertain about introducing powered mobility for their children in these early stages of development,” says Feldner. “But ride-on cars and other small devices designed for kids open up so many opportunities — from experiencing the joy of mobility, learning more about the world around them, enjoying social time with family and friends in new environments, and working on developmental skills. We want to work with kids and families to show them what is possible with these devices, listen to their needs and ideas, and continue working to ensure that both our technology designs and our community environments are accessible and available for all.”

Exploring different mobility devices

Heather Feldner and Mia Hoffman stand next to their poster board about adapted ride-on cars research at a conference.

As a graduate student, Hoffman conducts research on children ages 3 and under who might crawl, roll, sit up, or cruise in a power mobility device. Besides processing sensor data and performing other data analysis, Hoffman’s work involves getting to know families, “playing with a lot of toys, singing, and entertaining kids,” she jokes.

Research involving pediatrics and accessibility like the adapted ride-on cars study is why Hoffman joined the Steele Lab. She became interested in biomechanics in sixth grade, when she learned that working on engineering and medical design was possible. As an undergraduate at the University of Notre Dame, Hoffman studied brain biomechanics, computational design and assistive technology. She worked on projects such as analyzing the morphology of monkey brains and creating 3D-printed prosthetic hands for children.

After connecting with Feldner and Kat Steele, Albert Kobayashi Professor in Mechanical Engineering and CREATE associate director, Hoffman realized that the Steele Lab, which often collaborates with UW Medicine, was the perfect fit.

Hoffman is currently working on research with Feldner and Steele that compares children’s usage of a commercial pediatric powered mobility device to their usage of adapted ride-on cars in the community environment. Next, Hoffman will conduct one of the first comparative studies about how using supported mobility in the form of a partial body weight support system or using a powered wheelchair affects children’s exploration patterns. The study involves children with Down Syndrome, who often have delayed motor development and who are underrepresented in mobility research.

There can be stigma associated with using a wheelchair instead of a walker or another mobility device that may help with motor development, but Hoffman says the study could demonstrate that both are important.

“The goal is to show that children can simultaneously work on motor gains while using powered wheelchairs or other mobility devices to explore their environment,” she says.

“Our hope is for kids to just be kids,” says Hoffman. “We want them to be mobile and experience life at the same time as their peers. It’s about meeting a kid where they’re at and supporting them so that they can move around and play with their friends and family.”


This article was excerpted from an article written by Lyra Fontaine for Mechanical Engineering.

Joshua Miele: Driving Accessibility through Open Source

February 15, 2024

Formally, Dr. Joshua Miele describes himself as a blind scientist, designer, performance artist and disability activist who is focused on the overlap of technology, disability, and equity. But in his personable and humorous lecture, he listed a few more identities: Interrupter. Pain in the ass. “CAOS” promoter.

The Allen School Distinguished Lecture took place earlier this month and is a worthwhile listen on YouTube.

Miele’s passions are right in line with CREATE’s work and he started his lecture, after being introduced by CREATE Director Jennifer Mankoff, with a compliment we heartily accept: “This community at the University of Washington is one of the largest, one of the most vibrant communities of people thinking and working around disability, accessibility, and technology.”

Dr. Joshua Miele, a 50-something white male with curly brown hair. He has facial scarring from an attack when he was a child and a single prosthetic blue eye.

Miele shared his enthusiasm for disability-inclusive design and its impact on global disability equity and inclusion. Drawing on examples and counterexamples from his own life and career, Dr. Miele described some of the friction the accessibility field has faced and speculated about what challenges may lie ahead, with particular emphasis on the centrality of user-centered practices, and the exhilarating potential of open source solutions and communities.

When he received the MacArthur grant, Miele had to decide what to do with the spotlight on his work. He shared his hopes for a Center for Accessibility and Open Source (CAOS, pronounced “chaos”) to promote global digital equity for people with disabilities through making low-cost accessible tools available to everyone, whether they have financial resources or not. He invited anyone interested in global equity, disability, direct action, performance art, and CAOS/chaos to reach out to work together on this incredibly important work.


Alice Wong and Patty Berne: Two UW lectures moderated by CREATE researchers

Winter 2024 quarter kicked off with two outstanding conversations with women of color who are leaders in disability justice.

Alice Wong: Raising the visibility of disabled people

First, Alice Wong discussed topics important to her work in raising the visibility of disabled people. Wong’s book Year of the Tiger: An Activist’s Life was the topic of the Autumn 2023 CREATE Accessibility Seminar.

CREATE Director Jennifer Mankoff started the conversation by asking Wong about her experience as a disabled person in academia and what needs to change. Wong said her work in disability justice was inspired in part by the “incredible amount of emotion and physical labor to ask for equal access” in academic settings. She had to spend precious time, money and energy to gain the accommodations and access she needed to succeed. But she realized that as soon as she transitioned out, her efforts would be lost and the next student would have to start over to prove their need and request a new set of accommodations. Wong was doubtful that large academic institutions can support the goal of collective liberation. It’s the “dog-eat-dog world [of] academia where the competition is stiff and everyone is pushed to their limits to produce and be valuable.” She encouraged instructors to incorporate books about disability justice in their syllabi (see the reading list below).

Wong, who spoke with a text-to-voice tool and added emphasis with her facial expressions on the screen, also addressed the value and the limitations of assistive technology. She noted that the text-to-speech app she uses does not convey her personality. She also discussed how ableism appears in activist discourse.

One of her examples was a debate over gig economy delivery services, which are enormously important for many people with disabilities but also under-compensate delivery workers. She noted that blaming disabled people for undermining efforts for better wages was not the solution; collective efforts to make corporations compensate workers fairly are the solution. She also explained that hashtag activism, which has been disparaged in popular discourse, is a crucial method for disabled people to participate in social justice activism. And she discussed her outrage when, as she prepared to give a talk to a public health school, her own access needs were used to censor her. Throughout her talk, Wong returned again and again to the principles of disability justice and encouraged attendees to engage in collective forms of change.

Wong’s responses embodied a key component of disability justice principles: citational practices that name fellow contributors to collective disability justice wisdom. Her long list of recommended reading for the audience inspired us to build our new RDT reading list. Wong referenced Patty Berne several times, calling Berne her introduction to disability justice.

Patty Berne on disability justice: Centering intersectionality and liberation

A week later, two CREATE Ph.D. students, Aashaka Desai and Aaleyah Lewis, moderated a conversation with Patty Berne. Berne, who identifies as a Japanese-Haitian queer disabled woman, co-founded Sins Invalid, a disability justice-based arts project focusing on disabled artists of color and queer and gender non-conforming artists with disabilities. Berne defined disability justice as advocating for each other, understanding access needs, and normalizing those needs. On the topic of climate justice, she noted that state-sponsored disaster planning often overlooks the needs of people with motor impairments or life-sustaining medical equipment. This is where intersectional communities do, and should, take care of each other when disaster strikes.

Berne addressed language justice within the disability community, noting that “we don’t ‘language’ like able-bodied people.” For example, the use of ventilators and augmented speech technology change the cadence of speech. Berne wants to normalize access needs for a more inclusive experience of everyday life. Watch the full conversation on YouTube.

Anat Caspi receives Human Rights Educator Award

Congratulations to Anat Caspi on receiving the 2023 Human Rights Education Award from the Seattle Human Rights Commission!

Caspi is a CREATE associate director and the founder and director of the Taskar Center for Accessible Technology. “I have always thought making an impact on local communities is where it starts,” Caspi said of the award. “The Seattle commission’s focus on disability rights as civil rights serves as a reminder of the importance of collaborative efforts in creating a more inclusive and just society.” She emphasized that the award also celebrates the collective efforts of the Taskar Center community.

Those efforts range from the expansive — mapping the accessibility of miles of urban infrastructure — to those of a more human scale. For example, the Taskar Center partnered with not-for-profit Provail Therapy Center to create a library of adapted technology for people with different abilities to borrow for free. The 800 artifacts in the Pacific Northwest Adaptive Technology Library were either donated or created through community education events, where volunteers don protective goggles and get a crash-course in safe use of a soldering iron before adapting battery-operated toys to be switch-accessible for players of different abilities. 

CREATE Welcomes Dr. Olivia Banner!

January 2, 2024

Olivia Banner, a white woman with a warm smile and smiling eyes

In her role as CREATE’s Director of Strategy and Operations, Olivia Banner, Ph.D., will help develop and oversee organizational strategy, design and implement new programs, manage center operations, and help ensure a sustainable trajectory of high-quality work in service of CREATE’s core mission.

Banner is a disabled author and educator who has taught courses on disability, technology, and media. She comes to Seattle and the UW from the University of Texas at Dallas, where she was an associate professor of Critical Media Studies. She is the author of Communicative Biocapitalism: The Voice of the Patient in Digital Health and the Health Humanities. Her new book about technology, psychiatry, and practices of mutual care is forthcoming with Duke University Press. Her research has been published in Catalyst: Feminism, Theory, Technoscience and Literature and Medicine, and is forthcoming in Disability Studies Quarterly.

“Her principles and commitment to intersectional work caught our attention in our conversations about the new role. We are so lucky to have her joining CREATE!”

Jennifer Mankoff, CREATE Director

Banner says she looks forward to integrating disability principles into projects with tangible effects on disabled people’s lives, including AI + Accessibility; integrating disabled perspectives into projects; and race, technology, and disability, areas that align with her previous academic work. She is personally invested in fostering just technological futures through collaborative work and is very excited about the Center’s aim of expanding access through community partnerships.

CREATE Director Jennifer Mankoff is equally excited about the vision Banner brings for CREATE’s future, her policy experience, her administrative skills, and her commitment to amplifying the voices of those she serves. “Her principles and commitment to intersectional work caught our attention in our conversations about the new role. We are so lucky to have her joining CREATE!” says Mankoff.

In her research, scholarship, and teaching, Banner has centered disability knowledge as a method for envisioning technological futures. Her work extends to multiple collaborative projects, including co-teaching a seminar on surveillance with a computer science professor, serving on a Lancet-sponsored commission developing policies for global digital health development, and co-directing Social Practice & Community Engagement Media, a lab that used low-tech methods to reimagine campus practices of care. Toward the goal of improving access on the UT Dallas campus, Banner conducted critical access mapping projects, led Teach-Ins and workshops on disability and equity and on accessible course design, and served on the University Accessibility Committee.

Having served as managing editor of an academic journal and as Associate Dean of Graduate Studies for her School, she also brings professional experience working with faculty, students, staff, and community members from varied disciplines and professions, and anticipates generative conversations on the horizon. She joins CREATE eager to support and enhance its visions of accessible and equitable technology.

ARTennis attempts to help low vision players

December 16, 2023

People with low vision (LV) have had fewer options for physical activity, particularly in competitive sports such as tennis and soccer that involve fast, continuously moving elements such as balls and players. A group of researchers from CREATE associate director Jon E. Froehlich‘s Makeability Lab hopes to overcome this challenge by enabling LV individuals to participate in ball-based sports using real-time computer vision (CV) and wearable augmented reality (AR) headsets. Their initial focus has been on tennis.

The team includes Jaewook Lee (Ph.D. student, UW CSE), Devesh P. Sarda (MS/Ph.D. student, University of Wisconsin), Eujean Lee (Research Assistant, UW Makeability Lab), Amy Seunghyun Lee (BS student, UC Davis), Jun Wang (BS student, UW CSE), Adrian Rodriguez (Ph.D. student, UW HCDE), and Jon Froehlich.

Their paper, Towards Real-time Computer Vision and Augmented Reality to Support Low Vision Sports: A Demonstration of ARTennis was published in the 2023 ACM Symposium on User Interface Software and Technology (UIST).

ARTennis is their prototype system capable of tracking and enhancing the visual saliency of tennis balls from a first-person point of view (POV). Recent advances in deep learning have led to models like TrackNet, a neural network capable of tracking tennis balls in third-person recordings of tennis games, which has been used to improve sports viewing for LV people. To enhance playability, the team first built a dataset of first-person POV images by having the authors wear an AR headset while playing tennis. They then streamed video from a pair of AR glasses to a back-end server, analyzed the frames with a custom-trained deep learning model, and sent the results back for real-time overlaid visualization.
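For readers curious about the plumbing, the loop below is a minimal sketch of that stream-analyze-overlay cycle in Python with OpenCV. A webcam stands in for the headset camera, and the server URL, response fields, and detector are hypothetical placeholders, not the Makeability Lab’s actual implementation.

```python
import cv2
import requests

SERVER_URL = "http://example.org/detect"  # hypothetical back end running a ball detector

def annotate_stream(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)  # stand-in for the AR headset's camera feed
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Send the JPEG-encoded frame to the back end for inference.
        _, jpeg = cv2.imencode(".jpg", frame)
        resp = requests.post(SERVER_URL, files={"frame": jpeg.tobytes()}).json()
        ball = resp.get("ball")  # assumed response shape: {"ball": {"x": int, "y": int}}
        if ball:
            x, y = ball["x"], ball["y"]
            # Boost saliency: filled circle plus a crosshair centered on the ball.
            cv2.circle(frame, (x, y), 18, (0, 255, 0), -1)
            cv2.drawMarker(frame, (x, y), (0, 0, 255), cv2.MARKER_CROSS, 60, 4)
        cv2.imshow("ARTennis sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    annotate_stream()
```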

After a brainstorming session with an LV research team member, the team added visualization improvements to enhance the ball’s color contrast and add a crosshair in real-time.

Early evaluations have provided feedback that the prototype could help LV people enjoy ball-based sports, but there’s plenty of further work to be done. A larger field of view (FOV) and audio cues would improve a player’s ability to track the ball. The weight and bulk of the headset, in addition to its expense, are also factors the team expects to improve with time, as Lee noted in an interview on Oregon Public Broadcasting.

“Wearable AR devices such as the Microsoft HoloLens 2 hold immense potential in non-intrusively improving accessibility of everyday tasks. I view AR glasses as a technology that can enable continuous computer vision, which can empower BLV individuals to participate in day-to-day tasks, from sports to cooking. The Makeability Lab team and I hope to continue exploring this space to improve the accessibility of popular sports, such as tennis and basketball.”

Jaewook Lee, Ph.D. student and lead author

Ph.D. student Jaewook Lee presents a research poster, Makeability Lab Demos - GazePointAR & ARTennis.

UW News: How an assistive-feeding robot went from picking up fruit salads to whole meals

November 2023

In tests with this set of actions, the robot picked up the foods more than 80% of the time, which is the user-specified benchmark for in-home use. The small set of actions allows the system to learn to pick up new foods during one meal.

An assistive-feeding robotic arm attached to a wheelchair uses a fork to stab a piece of fruit on a plate among other fruits.

The team presented its findings Nov. 7 at the 2023 Conference on Robot Learning in Atlanta.

UW News talked with co-lead authors Ethan K. Gordon and Amal Nanavati, both CREATE members and doctoral students in the Paul G. Allen School of Computer Science & Engineering, and with co-author Taylor Kessler Faulkner, a UW postdoctoral scholar in the Allen School, about the successes and challenges of robot-assisted feeding for the 1.8 million people in the U.S. (according to data from 2010) who can’t eat on their own.

The Personal Robotics Lab has been working on robot-assisted feeding for several years. What is the advance of this paper?

Ethan K. Gordon: I joined the Personal Robotics Lab at the end of 2018 when Siddhartha Srinivasa, a professor in the Allen School and senior author of our new study, and his team had created the first iteration of its robot system for assistive applications. The system was mounted on a wheelchair and could pick up a variety of fruits and vegetables on a plate. It was designed to identify how a person was sitting and take the food straight to their mouth. Since then, there have been quite a few iterations, mostly involving identifying a wide variety of food items on the plate. Now, the user with their assistive device can click on an image in the app, a grape for example, and the system can identify and pick that up.

Taylor Kessler Faulkner: Also, we’ve expanded the interface. Whatever accessibility systems people use to interact with their phones — mostly voice or mouth control navigation — they can use to control the app.

EKG: In this paper we just presented, we’ve gotten to the point where we can pick up nearly everything a fork can handle. So we can’t pick up soup, for example. But the robot can handle everything from mashed potatoes or noodles to a fruit salad to an actual vegetable salad, as well as pre-cut pizza or a sandwich or pieces of meat.

In previous work with the fruit salad, we looked at which trajectory the robot should take if it’s given an image of the food, but the set of trajectories we gave it was pretty limited. We were just changing the pitch of the fork. If you want to pick up a grape, for example, the fork’s tines need to go straight down, but for a banana they need to be at an angle, otherwise it will slide off. Then we worked on how much force we needed to apply for different foods.

In this new paper, we looked at how people pick up food, and used that data to generate a set of trajectories. We found a small number of motions that people actually use to eat and settled on 11 trajectories. So rather than just the simple up-down or coming in at an angle, it’s using scooping motions, or it’s wiggling inside of the food item to increase the strength of the contact. This small number still had the coverage to pick up a much greater array of foods.

We think the system is now at a point where it can be deployed for testing on people outside the research group. We can invite a user to the UW, and put the robot either on a wheelchair, if they have the mounting apparatus ready, or a tripod next to their wheelchair, and run through an entire meal.
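The compact action library Gordon describes is easy to picture in code. The sketch below is illustrative only: the food classes, primitive names, and parameters are invented for this example and are not the lab’s actual taxonomy or the paper’s 11 trajectories.

```python
from dataclasses import dataclass

@dataclass
class Primitive:
    name: str
    fork_pitch_deg: float  # tine angle relative to vertical
    motion: str            # e.g. "skewer", "angled_skewer", "scoop", "wiggle"

# Hypothetical mapping from a detected food class to an acquisition primitive.
ACTION_LIBRARY = {
    "grape":           Primitive("vertical_skewer", 0, "skewer"),
    "banana_slice":    Primitive("angled_skewer", 35, "angled_skewer"),
    "mashed_potatoes": Primitive("scoop", 60, "scoop"),
    "noodles":         Primitive("twirl_wiggle", 20, "wiggle"),
}

def choose_action(food_class: str) -> Primitive:
    """Pick an acquisition primitive for the detected food item."""
    return ACTION_LIBRARY.get(food_class, Primitive("vertical_skewer", 0, "skewer"))

print(choose_action("noodles"))
```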

For you as researchers, what are the vital challenges ahead to make this something people could use in their homes every day?

EKG: We’ve so far been talking about the problem of picking up the food, and there are more improvements that can be made here. Then there’s the whole other problem of getting the food to a person’s mouth, as well as how the person interfaces with the robot, and how much control the person has over this at least partially autonomous system.

TKF: Over the next couple of years, we’re hoping to personalize the robot to different people. Everyone eats a little bit differently. Amal did some really cool work on social dining that highlighted how people’s preferences are based on many factors, such as their social and physical situations. So we’re asking: How can we get input from the people who are eating? And how can the robot use that input to better adapt to the way each person wants to eat?

Amal Nanavati: There are several different dimensions that we might want to personalize. One is the user’s needs: How far the user can move their neck impacts how close the fork has to get to them. Some people have differential strength on different sides of their mouth, so the robot might need to feed them from a particular side of their mouth. There’s also an aspect of the physical environment. Users already have a bunch of assistive technologies, often mounted around their face if that’s the main part of their body that’s mobile. These technologies might be used to control their wheelchair, to interact with their phone, etc. Of course, we don’t want the robot interfering with any of those assistive technologies as it approaches their mouth.

There are also social considerations. For example, if I’m having a conversation with someone or at home watching TV, I don’t want the robot arm to come right in front of my face. Finally, there are personal preferences. For example, among users who can turn their head a little bit, some prefer to have the robot come from the front so they can keep an eye on the robot as it’s coming in. Others feel like that’s scary or distracting and prefer to have the bite come at them from the side.

A key research direction is understanding how we can create intuitive and transparent ways for the user to customize the robot to their own needs. We’re considering trade-offs between customization methods where the user is doing the customization, versus more robot-centered forms where, for example, the robot tries something and says, “Did you like it? Yes or no.” The goal is to understand how users feel about these different customization methods and which ones result in more customized trajectories.
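The personalization dimensions Nanavati lists also suggest what a per-user profile might hold. The dataclass below is a hedged sketch of such a configuration; every field name and default is an assumption for illustration, not the lab’s design.

```python
from dataclasses import dataclass, field

@dataclass
class FeedingPreferences:
    approach_side: str = "front"          # "front", "left", or "right"
    stop_distance_cm: float = 5.0         # how close the fork comes to the mouth
    keep_out_zones: list = field(default_factory=list)  # regions occupied by other assistive tech
    pause_during_conversation: bool = True  # social consideration: wait while the user is talking

user = FeedingPreferences(
    approach_side="left",
    stop_distance_cm=3.0,
    keep_out_zones=["chin-mounted joystick"],
)
print(user)
```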

What should the public understand about robot-assisted feeding, both in general and specifically the work your lab is doing?

EKG: It’s important to look not just at the technical challenges, but at the emotional scale of the problem. It’s not a small number of people who need help eating. There are various figures out there, but it’s over a million people in the U.S. Eating has to happen every single day. And to require someone else every single time you need to do that intimate and very necessary act can make people feel like a burden or self-conscious. So the whole community working towards assistive devices is really trying to help foster a sense of independence for people who have these kinds of physical mobility limitations.

AN: Even these seven-digit numbers don’t capture everyone. There are permanent disabilities, such as a spinal cord injury, but there are also temporary disabilities such as breaking your arm. All of us might face disability at some time as we age and we want to make sure that we have the tools necessary to ensure that we can all live dignified lives and independent lives. Also, unfortunately, even though technologies like this greatly improve people’s quality of life, it’s incredibly difficult to get them covered by U.S. insurance companies. I think more people knowing about the potential quality of life improvement will hopefully open up greater access.

Additional co-authors on the paper were Ramya Challa, who completed this research as an undergraduate student in the Allen School and is now at Oregon State University, and Bernie Zhu, a UW doctoral student in the Allen School. This research was partially funded by the National Science Foundation, the Office of Naval Research and Amazon.

For more information, contact Gordon at ekgordon@cs.uw.edu, Nanavati at amaln@cs.uw.edu and Faulkner at taylorkf@cs.washington.edu.


Excerpted and adapted from the UW News story by Stefan Milne.

Community Partner Spotlight: PAVE

November 8, 2023

CREATE is pleased to work with PAVE (Partnerships for Action | Voices for Empowerment) to help guide our efforts and shape solutions around the needs and limitations of accessible technology. They’ve supported our grant applications, shared opportunities for participation in CREATE research projects with their community, and published CREATE research on the importance of self-initiated mobility for children, particularly children with disabilities. 


PAVE logo, with the V in a light green color and stylized to look like a flower.

PAVE’s mission is to provide support, training, information, and resources to empower and give voice to individuals, youth, and families living with disabilities throughout Washington State.


“Without technology—accessible technology—PAVE would never be able to support those who rely on us for accurate information and resources,” says Barb Koumjian, Project Coordinator for Lifespan Respite WA at PAVE. This includes the highly accessible PAVE website, with links to parent training programs, family health resources, and support systems.

“All of us at PAVE are deeply committed to addressing the concerns of parents worried about their loved one in school, navigating medical supports, or caregiving for a family member. PAVE’s goal is to provide a seamless online experience, allowing everyone to find information quickly, get support, and hopefully get some peace of mind,” adds Communications Specialist Nicol Walsh.

PAVE supports accessibility via adaptive technology: “For the families I support at PAVE, there is an uprising of parents advocating for AAC, in any capacity, at an early age with an autism diagnosis,” says Shawnda Hicks, PAVE Coordinator. “Giving children communication in early learning stages reduces frustration and high behaviors.”

Connecting with PAVE

Cute mixed race child during hearing exam wears special headphones.

Proud to be a UW CREATE Community Partner

“As a statewide organization, we’re deeply committed to accessibility and equity for everyone, and we value our collaborations with UW CREATE for all we serve in Washington,” says Tracy Kahlo, PAVE Executive Director. 


Thanks to these PAVE staff members for contributing words, data, and perspective: Barb Koumjian, Nicol Walsh, Shawnda Hicks, and Tracy Kahlo.

Off to the Park: A Geospatial Investigation of Adapted Ride-on Car Usage

November 7, 2023

Adapted ride-on cars (ROCs) are an affordable power mobility training tool for young children with disabilities. But weather and a lack of adequate drive space can create barriers to families’ adoption of their ROC.

CREATE Ph.D. student Mia E. Hoffman is the lead author on a paper that investigates the relationship between the built environment and ROC usage.

Mia Hoffman smiling into the sun. She has long, blonde hair. Behind her is part of the UW campus with trees and brick buildings.

With her co-advisors Kat Steele and Heather A. Feldner, Jon E. Froehlich (all three CREATE associate directors), and Kyle N. Winfree as co-authors, Hoffman found that play sessions took place more often within the participants’ homes. But when the ROC was used outside, children engaged in longer play sessions, actively drove for a larger portion of the session, and covered greater distances.

Accessibility scores for the sidewalks near a participant’s home (left) and the participant’s drive path (right). The participant generally avoided streets that were not accessible.

Most notably, they found that children drove more in pedestrian-friendly neighborhoods and when in proximity to accessible paths, demonstrating that providing an accessible place for a child to move, play, and explore is critical in helping a child and family adopt the mobility device into their daily life.

UW News: Can AI help boost accessibility? CREATE researchers tested it for themselves

November 2, 2023 | UW News

Generative artificial intelligence tools like ChatGPT, an AI-powered language tool, and Midjourney, an AI-powered image generator, can potentially assist people with various disabilities. They could summarize content, compose messages, or describe images. Yet they also regularly spout inaccuracies and fail at basic reasoning, perpetuating ableist biases.

This year, seven CREATE researchers conducted a three-month autoethnographic study — drawing on their own experiences as people with and without disabilities — to test AI tools’ utility for accessibility. Though researchers found cases in which the tools were helpful, they also found significant problems with AI tools in most use cases, whether they were generating images, writing Slack messages, summarizing writing or trying to improve the accessibility of documents.

Four AI-generated images show different interpretations of a doll-sized “crocheted lavender husky wearing ski goggles,” including two pictured outdoors and one against a white background.

The team presented its findings Oct. 22 at the ASSETS 2023 conference in New York.

“When technology changes rapidly, there’s always a risk that disabled people get left behind,” said senior author Jennifer Mankoff, CREATE’s director and a professor in the Paul G. Allen School of Computer Science & Engineering. “I’m a really strong believer in the value of first-person accounts to help us understand things. Because our group had a large number of folks who could experience AI as disabled people and see what worked and what didn’t, we thought we had a unique opportunity to tell a story and learn about this.”

The group presented its research in seven vignettes, often amalgamating experiences into single accounts to preserve anonymity. For instance, in the first account, “Mia,” who has intermittent brain fog, deployed ChatPDF.com, which summarizes PDFs, to help with work. While the tool was occasionally accurate, it often gave “completely incorrect answers.” In one case, the tool was both inaccurate and ableist, changing a paper’s argument to sound like researchers should talk to caregivers instead of to chronically ill people. “Mia” was able to catch this, since the researcher knew the paper well, but Mankoff said such subtle errors are some of the “most insidious” problems with using AI, since they can easily go unnoticed.

Yet in the same vignette, “Mia” used chatbots to create and format references for a paper they were working on while experiencing brain fog. The AI models still made mistakes, but the technology proved useful in this case.

“When technology changes rapidly, there’s always a risk that disabled people get left behind.”

Jennifer Mankoff, CREATE Director, professor in the Allen School

Mankoff, who’s spoken publicly about having Lyme disease, contributed to this account. “Using AI for this task still required work, but it lessened the cognitive load. By switching from a ‘generation’ task to a ‘verification’ task, I was able to avoid some of the accessibility issues I was facing,” Mankoff said.

The results of the other tests researchers selected were equally mixed:

  • One author, who is autistic, found AI helped to write Slack messages at work without spending too much time troubling over the wording. Peers found the messages “robotic,” yet the tool still made the author feel more confident in these interactions.
  • Three authors tried using AI tools to increase the accessibility of content such as tables for a research paper or a slideshow for a class. The AI programs were able to state accessibility rules but couldn’t apply them consistently when creating content.
  • Image-generating AI tools helped an author with aphantasia (an inability to visualize) interpret imagery from books. Yet when they used the AI tool to create an illustration of “people with a variety of disabilities looking happy but not at a party,” the program could conjure only fraught images of people at a party that included ableist incongruities, such as a disembodied hand resting on a disembodied prosthetic leg.

“I was surprised at just how dramatically the results and outcomes varied, depending on the task,” said lead author Kate Glazko, a UW doctoral student in the Allen School. “In some cases, such as creating a picture of people with disabilities looking happy, even with specific prompting — can you make it this way? — the results didn’t achieve what the authors wanted.”

The researchers note that more work is needed to develop solutions to problems the study revealed. One particularly complex problem involves developing new ways for people with disabilities to validate the products of AI tools, because in many cases when AI is used for accessibility, either the source document or the AI-generated result is inaccessible. This happened in the ableist summary ChatPDF gave “Mia” and when “Jay,” who is legally blind, used an AI tool to generate code for a data visualization. He could not verify the result himself, but a colleague said it “didn’t make any sense at all.”  The frequency of AI-caused errors, Mankoff said, “makes research into accessible validation especially important.”

Mankoff also plans to research ways to document the kinds of ableism and inaccessibility present in AI-generated content, as well as investigate problems in other areas, such as AI-written code.

“Whenever software engineering practices change, there is a risk that apps and websites become less accessible if good defaults are not in place,” Glazko said. “For example, if AI-generated code were accessible by default, this could help developers to learn about and improve the accessibility of their apps and websites.”

Co-authors on this paper are Momona Yamagami, who completed this research as a UW postdoctoral scholar in the Allen School and is now at Rice University; Aashaka Desai, Kelly Avery Mack and Venkatesh Potluri, all UW doctoral students in the Allen School; and Xuhai Xu, who completed this work as a UW doctoral student in the Information School and is now at the Massachusetts Institute of Technology. This research was funded by Meta, Center for Research and Education on Accessible Technology and Experiences (CREATE), Google, an NIDILRR ARRT grant and the National Science Foundation.


For more information, contact Glazko at glazko@cs.washington.edu and Mankoff at jmankoff@cs.washington.edu.


This article was adapted from the UW News article by Stefan Milne.

UW News: A11yBoard accessible presentation software

October 30, 2023 | UW News

A team led by CREATE researchers has created A11yBoard for Google Slides, a browser extension and phone or tablet app that allows blind users to navigate through complex slide layouts, objects, images, and text. Here, a user demonstrates the touchscreen interface. Team members Zhuohao (Jerry) Zhang, Jacob O. Wobbrock, and Gene S-H Kim presented the research at ASSETS 2023.

A user demonstrates creating a presentation slide with A11yBoard on a touchscreen tablet and computer screen.

Screen readers, which convert digital text to audio, can make computers more accessible to many disabled users — including those who are blind, low vision or dyslexic. Yet slideshow software, such as Microsoft PowerPoint and Google Slides, isn’t designed to make screen reader output coherent. Such programs typically rely on Z-order — which follows the way objects are layered on a slide — when a screen reader navigates through the contents. Since the Z-order doesn’t adequately convey how a slide is laid out in two-dimensional space, slideshow software can be inaccessible to people with disabilities.
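The Z-order problem is easy to see in a few lines of code. The sketch below contrasts narration order by stacking order with a simple spatial sort (top to bottom, then left to right); the object fields are hypothetical stand-ins, not Google Slides’ actual API, and A11yBoard’s real interaction model is far richer.

```python
from dataclasses import dataclass

@dataclass
class SlideObject:
    label: str
    x: float  # left edge, in points
    y: float  # top edge, in points
    z: int    # stacking order (higher = drawn later)

slide = [
    SlideObject("decorative shape", 300, 200, 1),
    SlideObject("title text box", 40, 20, 3),
    SlideObject("body text box", 40, 120, 2),
]

def z_order(objects):
    # How a screen reader typically walks the slide: by layering, not layout.
    return sorted(objects, key=lambda o: o.z)

def spatial_order(objects, row_tolerance: float = 30.0):
    # Group objects into rough rows, then read each row left to right.
    return sorted(objects, key=lambda o: (round(o.y / row_tolerance), o.x))

print([o.label for o in z_order(slide)])       # ['decorative shape', 'body text box', 'title text box']
print([o.label for o in spatial_order(slide)]) # ['title text box', 'body text box', 'decorative shape']
```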

Combining a desktop computer with a mobile device, A11yBoard lets users work with audio, touch, gesture, speech recognition and search to understand where different objects are located on a slide and move these objects around to create rich layouts. For instance, a user can touch a textbox on the screen, and the screen reader will describe its color and position. Then, using a voice command, the user can shrink that textbox and left-align it with the slide’s title.

“We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

Jacob O. Wobbrock, CREATE associate director and professor in the UW Information School

“For a long time and even now, accessibility has often been thought of as, ‘We’re doing a good job if we enable blind folks to use modern products.’ Absolutely, that’s a priority,” said senior author Jacob O. Wobbrock, a UW professor in the Information School. “But that is only half of our aim, because that’s only letting blind folks use what others create. We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

A11yBoard for Google Slides builds on a line of research in Wobbrock’s lab exploring how blind users interact with “artboards” — digital canvases on which users work with objects such as textboxes, shapes, images and diagrams. Slideshow software relies on a series of these artboards. When lead author Zhuohao (Jerry) Zhang, a UW doctoral student in the iSchool, joined Wobbrock’s lab, the two sought a solution to the accessibility flaws in creativity tools, like slideshow software. Drawing on earlier research from Wobbrock’s lab on the problems blind people have using artboards, Wobbrock and Zhang presented a prototype of A11yBoard in April. They then worked to create a solution that’s deployable through existing software, settling on a Google Slides extension.

For the current paper, the researchers worked with co-author Gene S-H Kim, an undergraduate at Stanford University, who is blind, to improve the interface. The team tested it with two other blind users, having them recreate slides. The testers both noted that A11yBoard greatly improved their ability to understand visual content and to create slides themselves without constant back-and-forth iterations with collaborators; they needed to involve a sighted assistant only at the end of the process.

The testers also highlighted spots for improvement: Remaining continuously aware of objects’ positions while trying to edit them still presented a challenge, and users were forced to do each action individually, such as aligning several visual groups from left to right, instead of completing these repeated actions in batches. Because of how Google Slides functions, the app’s current version also does not allow users to undo or redo edits across different devices.

Ultimately, the researchers plan to release the app to the public. But first they plan to integrate a large language model, such as GPT, into the program.

“That will potentially help blind people author slides more efficiently, using natural language commands like, ‘Align these five boxes using their left edge,’” Zhang said. “Even as an accessibility researcher, I’m always amazed at how inaccessible these commonplace tools can be. So with A11yBoard we’ve set out to change that.”

This research was funded in part by the University of Washington’s Center for Research and Education on Accessible Technology and Experiences (UW CREATE). For more information, contact Zhang at zhuohao@uw.edu and Wobbrock at wobbrock@uw.edu.


This article was adapted from the UW News article by Stefan Milne.

Augmented Reality to Support Accessibility

October 25, 2023

RASSAR – Room Accessibility and Safety Scan in Augmented Reality – is a novel smartphone-based prototype for semi-automatically identifying, categorizing, and localizing indoor accessibility and safety issues. With RASSAR, the user holds out their phone and scans a space. The tool uses LiDAR and camera data, real-time machine learning, and AR to construct a real-time model of the 3D scene, attempts to identify and classify known accessibility and safety issues, and visualizes potential problems overlaid in AR. 
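As a rough illustration of the rule-checking step, the sketch below compares detected objects against a few simple accessibility guidelines. The categories, thresholds, and messages are illustrative assumptions, not RASSAR’s actual rule set or output.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    category: str
    height_m: float       # height of the relevant feature above the floor
    clear_width_m: float  # clear passage width, where applicable

# Each rule returns True (no issue) or a human-readable message to overlay in AR.
RULES = {
    "light_switch": lambda d: d.height_m <= 1.2 or "Switch may be too high to reach from a wheelchair",
    "doorway":      lambda d: d.clear_width_m >= 0.81 or "Doorway may be too narrow for a wheelchair",
    "rug_edge":     lambda d: "Loose rug edge is a potential trip hazard",
}

def check(detections):
    """Return (category, message) pairs for issues found in one scan."""
    issues = []
    for d in detections:
        rule = RULES.get(d.category)
        if rule is None:
            continue
        result = rule(d)
        if result is not True:
            issues.append((d.category, result))
    return issues

scan = [Detection("light_switch", 1.4, 0.0), Detection("doorway", 0.0, 0.75)]
print(check(scan))
```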

RASSAR researchers envision the tool as an aid in the building and validation of new construction, planning renovations, or updating homes for health concerns, or for telehealth home visits with occupational therapists. UW News interviewed two CREATE Ph.D. students about their work on the project:


Augmented Reality to Support Accessibility

CREATE students Xia Su and Jae Lee, advised by CREATE Associate Director Jon Froehlich in the Makeability Lab, discuss their work using augmented reality to support accessibility. The Allen School Ph.D. students are presenting their work at ASSETS and UIST this year.

Illustration of a user holding a smartphone using the RASSAR prototype app to scan the room for accessibility issues.

ASSETS 2023 Papers and Posters

October 4, 2023

As has become customary, CREATE faculty, students and alumni will have a large presence at the 2023 ASSETS Conference. It’ll be quiet on campus October 23-25 with these folks in New York.

Papers and presentations

How Do People with Limited Movement Personalize Upper-Body Gestures? Considerations for the Design of Personalized and Accessible Gesture Interfaces
Monday, Oct 23 at 11:10 a.m. Eastern time
Momona Yamagami, Alexandra A Portnova-Fahreeva, Junhan Kong, Jacob O. Wobbrock, Jennifer Mankoff

Understanding Digital Content Creation Needs of Blind and Low Vision People
Monday, Oct 23 at 1:40 p.m. Eastern time
Lotus Zhang, Simon Sun, Leah Findlater

Notably Inaccessible — Data Driven Understanding of Data Science Notebook (In)Accessibility
Monday, Oct 23 at 4 p.m. Eastern time
Venkatesh Potluri, Sudheesh Singanamalla, Nussara Tieanklin, Jennifer Mankoff

A Large-Scale Mixed-Methods Analysis of Blind and Low-vision Research in ACM and IEEE
Tuesday, Oct 24 at 11:10 a.m. Eastern time
Yong-Joon Thoo, Maximiliano Jeanneret Medina, Jon E. Froehlich, Nicolas Ruffieux, Denis Lalanne

Working at the Intersection of Race, Disability and Accessibility
Tuesday, Oct 24 at 1:40 p.m. Eastern time
Christina Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, Jennifer Mankoff

Comparing Locomotion Techniques in Virtual Reality for People with Upper-Body Motor Impairments
Wednesday, Oct 25 at 8:45 a.m. Eastern time
Rachel L. Franz, Jinghan Yu, Jacob O. Wobbrock

Jod: Examining the Design and Implementation of a Videoconferencing Platform for Mixed Hearing Groups
Wednesday, Oct 25 at 11:10 a.m. Eastern time
Anant Mittal, Meghna Gupta, Roshni Poddar, Tarini Naik, SeethaLakshmi Kuppuraj, James Fogarty, Pratyush Kumar, Mohit Jain

Azimuth: Designing Accessible Dashboards for Screen Reader Users
Wednesday, Oct 25 at 1:25 p.m. Eastern time
Arjun Srinivasan, Tim Harshbarger, Darrell Hilliker, Jennifer Mankoff

Developing and Deploying a Real-World Solution for Accessible Slide Reading and Authoring for Blind Users
Wednesday, Oct 25 at 1:25 p.m. Eastern time
Zhuohao Zhang, Gene S-H Kim, Jacob O. Wobbrock

Experience Reports

An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility
Kate S Glazko, Momona Yamagami, Aashaka Desai, Kelly Avery Mack, Venkatesh Potluri, Xuhai Xu, Jennifer Mankoff

Maintaining the Accessibility Ecosystem: a Multi-Stakeholder Analysis of Accessibility in Higher Education
Kelly Avery Mack, Natasha A Sidik, Aashaka Desai, Emma J McDonnell, Kunal Mehta, Christina Zhang, Jennifer Mankoff

TACCESS Papers

“I’m Just Overwhelmed”: Investigating Physical Therapy Accessibility and Technology Interventions for People with Disabilities and/or Chronic Conditions
Momona Yamagami, Kelly Mack, Jennifer Mankoff, Katherine M. Steele

The Global Care Ecosystems of 3D Printed Assistive Devices
Saiph Savage, Claudia Flores-Saviaga, Rachel Rodney, Liliana Savage, Jon Schull, Jennifer Mankoff

Posters

Conveying Uncertainty in Data Visualizations to Screen-Reader Users Through Non-Visual Means
Ather Sharif, Ruican Zhong, Yadi Wang

U.S. Deaf Community Perspectives on Automatic Sign Language Translation
Nina Tran, Richard E. Ladner, Danielle Bragg (Microsoft Research)

Workshops

Bridging the Gap: Towards Advancing Privacy and Accessibility
Rahaf Alharbi, Robin Brewer, Gesu India, Lotus Zhang, Leah Findlater, and Abigale Stangl

Tackling the Lack of a Practical Guide in Disability-Centered Research
Emma McDonnell, Kelly Avery Mack, Kathrin Gerling, Katta Spiel, Cynthia Bennett, Robin N. Brewer, Rua M. Williams, and Garreth W. Tigwell

A11yFutures: Envisioning the Future of Accessibility Research
Foad Hamidi, Kirk Crawford, Jason Wiese, Kelly Avery Mack, Jennifer Mankoff

Demos

A Demonstration of RASSAR: Room Accessibility and Safety Scanning in Augmented Reality
Xia Su, Kaiming Cheng, Han Zhang, Jaewook Lee, Wyatt Olson, Jon E. Froehlich

BusStopCV: A Real-time AI Assistant for Labeling Bus Stop Accessibility Features in Streetscape Imagery
Chaitanyashareef Kulkarni, Chu Li, Jaye Ahn, Katrina Oi Yau Ma, Zhihan Zhang, Michael Saugstad, Kevin Wu, Jon E. Froehlich; with Valerie Novack and Brent Chamberlain (Utah State University)

Papers and presentations by CREATE associates and alumni

Understanding Challenges and Opportunities in Body Movement Education of People who are Blind or have Low Vision
Monday, Oct 23 at 4:00 p.m. Eastern time
Madhuka Thisuri De Silva, Leona M Holloway, Sarah Goodwin, Matthew Butler

AdaptiveSound: An Interactive Feedback-Loop System to Improve Sound Recognition for Deaf and Hard of Hearing Users
Tuesday, Oct 24 at 8:45 a.m. Eastern time
Hang Do, Quan Dang, Jeremy Zhengqi Huang, Dhruv Jain

“Not There Yet”: Feasibility and Challenges of Mobile Sound Recognition to Support Deaf and Hard-of-Hearing People
Tuesday, Oct 24 at 8:45 a.m. Eastern time
Jeremy Zhengqi Huang, Hriday Chhabria, Dhruv Jain

The Potential of a Visual Dialogue Agent In a Tandem Automated Audio Description System for Videos
Tuesday, Oct 24 at 4:00 p.m. Eastern time
Abigale Stangl, Shasta Ihorn, Yue-Ting Siu, Aditya Bodi, Mar Castanon, Lothar D Narins, Ilmi Yoon

Recommended Reading: Parenting with a Disability

October 16, 2023

Two recent publications examine the unnecessary challenges faced by parents with disabilities, and how a legal system that fails to protect parents or their children makes those challenges extraordinary.

Rocking the Cradle: Ensuring the Rights of Parents with Disabilities and Their Children

The National Council on Disability report finds that roughly 4 million parents in the U.S. (about 6% of parents) are disabled.

While we have moved (somewhat) beyond the blatant eugenics of the 20th century, some of those tactics persist. Further, “parents with disabilities are the only distinct community of Americans who must struggle to retain custody of their children.” This is also connected to other intersectional factors. For example, “Because children from African American and Native American families are more likely to be poor, they are more likely to be exposed to mandated reporters as they turn to the public social service system for support in times of need…”

Research has shown that exposure bias is evident at each decision point in the child welfare system.

Under the Watchful Eye of All: Disabled Parents and the Family Policing System’s Web of Surveillance

Author Robyn Powell notes that centers for independent living and other existing programs have the potential to support these parents. Instead, “The child welfare system, more accurately referred to as the family policing system, employs extensive surveillance that disproportionately targets marginalized families, subjecting them to relentless oversight.”

One particular story in that article highlights the role of technology in this ‘policing’: “…just as the Hackneys were preparing to bring [their 8 month old] home, the Allegheny County DHS [alleged] negligence due to [the parents’] disabilities… More than a year later, their toddler remains in the foster care system, an excruciating separation for the Hackneys. The couple is left questioning whether DHS’ use of a predictive artificial intelligence (“AI”) tool unfairly targeted them based on their disabilities.”

As technologists, we wonder whether this AI tool was tested for racial or disability bias. It is essential that the technologies we create are equitable before they are deployed.