My 21-year-old brother Giovanni loves listening to music and watching movies. But because he was born with congenital cataracts, Down syndrome and West syndrome, he is non-verbal. This means he relies on our parents and friends to start or stop music or a movie for him.

Over the years, Giovanni has used everything from DVDs to tablets to YouTube to Chromecast for his entertainment. But as new voice-driven technologies emerged, they brought a different set of challenges: they required him to use his voice or a touchscreen. That's when I decided to find a way to let my brother control his music and movies on voice-driven devices without any help. It was a way for me to give him some independence and autonomy.

Working alongside my colleagues in the Milan Google office, I set up Project DIVA, which stands for DIVersely Assisted. The goal was to create a way for people like Giovanni to trigger commands to the Google Assistant without using their voice. We looked at many scenarios and methods people could use to trigger commands, like pressing a big button with their chin or foot, or biting one. For several months we brainstormed approaches and presented them at accessibility and tech events to get feedback.

We had a bunch of ideas on paper that looked promising. But to turn those ideas into something real, we took part in an Alphabet-wide accessibility innovation challenge and built a prototype, which went on to win the competition. We noticed that many assistive buttons available on the market come with a 3.5mm jack, the same kind of connector used on wired headphones. For our prototype, we created a box that connects those buttons and converts the signal coming from the button into a command sent to the Google Assistant.
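At its core, a box like this has to do two things: clean up the raw contact signal from the 3.5mm button (debouncing) and map a press to a command for the Assistant. The sketch below illustrates that idea in Python. Everything here is an illustrative assumption, not Project DIVA's actual implementation: the sample-based debouncing scheme, the threshold value, and the "play music" command text are all invented for the example; in a real device the resulting text query would be forwarded through something like the Google Assistant SDK.

```python
# Hypothetical sketch of the button-to-command logic. The names, the
# debounce threshold, and the "play music" command are illustrative
# assumptions, not the actual Project DIVA implementation.

DEBOUNCE_SAMPLES = 3  # consecutive closed-circuit samples to count as a press


def presses_from_samples(samples):
    """Turn raw contact samples from a 3.5mm assistive button into
    debounced press events. True = circuit closed (button held down)."""
    presses = 0
    run = 0            # length of the current run of closed samples
    pressed = False    # whether the current run was already counted
    for closed in samples:
        if closed:
            run += 1
            if run >= DEBOUNCE_SAMPLES and not pressed:
                pressed = True
                presses += 1
        else:
            run = 0
            pressed = False
    return presses


def press_to_command(press_count):
    """Map detected presses to the text command the box would send on
    to the Assistant (assumed here to be a fixed media command)."""
    return "play music" if press_count > 0 else None
```

A noisy trace with two real presses, for example, yields two events: `presses_from_samples([False, True, True, True, False, True, False, True, True, True])` returns `2`, and `press_to_command(2)` returns `"play music"`. Debouncing in software keeps the hardware side of the box as simple as possible: just a jack and a contact sensor.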