Technology has solved some problems for disabled people, but it’s also created plenty of problems that remain to be solved. At one of the year’s biggest events on human-computer interaction, we got a glimpse of how researchers are putting new technology to work at the cutting edge of inclusive design.

At the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems last week, a host of researchers showed off experimental technology designed to make the digital world more accessible to people with disabilities, including blindness and deafness. It’s not a new phenomenon; CHI has featured papers on inclusive emerging technology for years. But this year’s work reflects an emphasis on inclusive design that is increasingly mainstream.

Here’s the most exciting new research that reimagines inclusive ways for people to interact with computers.

Making Group Conversations Easier For Deaf People, Using HoloLens

Group conversations are an integral part of professional work and of just having fun with friends. But their many-voiced nature makes them difficult for people who have a hard time hearing.

A group of researchers from National Taiwan University, Texas A&M University, and the National Taiwan University of Science and Technology worked with eight individuals who are deaf or hard of hearing to create a new AR-based speech recognition system that places speech bubble animations over speakers’ heads. The user wears a Microsoft HoloLens AR headset, which superimposes the bubbles over a real-time conversation. A study with 12 people who are deaf or hard of hearing revealed that they preferred these speech bubble visualizations over more traditional captions.

Making Images, Interactives, And Maps Accessible To Vision-Impaired People

Blind internet users often rely on screen readers, which use text descriptions embedded in webpages to describe the images on a page. But website authors often don’t include these descriptions at all, leaving blind users in the dark.

Researchers from Microsoft Research created a browser plug-in that connects to a typical screen reader. The plug-in uses reverse image search to find other webpages that contain the same image, then pulls in any captions those pages already provide. Instead of relying on computer vision to describe images, Caption Crawler, as it’s named, takes advantage of what’s already on the internet.
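The core idea is simple enough to sketch in a few lines of code. The snippet below is an illustrative toy, not the researchers’ actual implementation: it assumes we already have the HTML of other pages found via reverse image search, and it simply harvests any `alt` text those pages attach to the same image URL.

```python
from html.parser import HTMLParser


class AltTextCollector(HTMLParser):
    """Collects alt text attached to a target image URL in one page's HTML."""

    def __init__(self, target_src):
        super().__init__()
        self.target_src = target_src
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attributes = dict(attrs)
        # Keep only human-written descriptions of the image we care about.
        if attributes.get("src") == self.target_src and attributes.get("alt"):
            self.alts.append(attributes["alt"])


def crawl_captions(target_src, pages_html):
    """Scan other pages that embed the same image and reuse their alt text."""
    captions = []
    for html in pages_html:
        parser = AltTextCollector(target_src)
        parser.feed(html)
        captions.extend(parser.alts)
    return captions


# Hypothetical pages that a reverse image search might have turned up.
pages = [
    '<img src="cat.jpg" alt="A tabby cat asleep on a windowsill">',
    '<img src="cat.jpg">',  # no description here, so nothing to reuse
    '<img src="dog.jpg" alt="A golden retriever">',  # different image
]
print(crawl_captions("cat.jpg", pages))
# prints ['A tabby cat asleep on a windowsill']
```

The real system would also have to rank competing captions and hand the winner to the screen reader, but the borrow-from-elsewhere strategy is the same.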