To judge by the headlines, artificial intelligence is either the greatest of threats to humanity or its savior. AI (the term for machines performing tasks that generally require human intelligence) is, we are told, about to profoundly reshape the way the world works. On one hand, we see that machine learning tools generate increasingly convincing video and audio facsimiles of real people and can deceive people into thinking they are speaking with another human, threatening to exacerbate the challenges of a “post-truth” public sphere. We hear that machines may displace as many as 30% of workers by 2030 (and not just in blue-collar jobs). Elon Musk goes so far as to warn us that AI represents a ‘fundamental risk to human civilization’.

These are bleak prospects, and they stand in stark contrast to Ray Kurzweil’s and other optimists’ vision that by 2029 machines will achieve human-level intelligence and begin to merge with humans for the betterment of the species. On a smaller scale, there are articles about machines generating poetry, reducing stress in the workplace, or creating visual art. But hardly a day goes by without hearing that artificial intelligence will accomplish something gargantuan, like predicting violence before it happens or modeling the spread of infectious diseases so they can be intercepted.

The average person seeking social change is left to regard AI as a cataclysmic force: a phenomenon that will supposedly leave no part of our lives unchanged, yet one that remains inaccessible, a tool for superhuman coders and big businesses rather than something they can practically apply.

This doesn’t have to be the case. While wielding AI still requires coding skills, the barriers for civil society actors to access it are beginning to fall as development tools become more democratized and less expensive. Forward-thinking activists, campaigners, and nonprofits should be aware of the utility of AI and the resources that exist to enable us to wield it.

As an Innovation Specialist at Counterpart International, part of my work is dedicated to exploring new technologies and determining how they can be used by nonprofits and activists to achieve positive change. For the past six months I’ve been experimenting with and learning about voice-controlled digital assistants. Voice, we are told, is the next frontier for our interaction with machines. We are awash in voice assistant services developed by for-profit companies: Siri, Alexa, Google Assistant, Bixby, and so on. These services allow someone with a crowded visual environment (such as a browser with too many tabs open) or simply with full hands to issue commands and retrieve information by speaking. But nonprofits have needs that extend beyond the convenience of ordering products or looking up trivia. Social impact workers need tools designed to meet their particular needs in protecting civic space, building peace, or developing a community.

I have been experimenting with the open-source digital assistant Mycroft. “Open-source” software provides the source code for free and is often the collaborative work of many community members. One of the primary benefits of using an open-source tool is that it lowers the barrier to creating something by allowing users to benefit from a community’s previous work. With Mycroft, users are encouraged to submit their ideas to a free shared repository of skills, so all Mycroft users benefit from the developer community’s code. Another benefit is data privacy: unlike the big tech companies, Mycroft employs an opt-in data collection policy and refrains from using user data to train its models unless explicitly authorized to do so. No data is sold to third parties. Mycroft’s policy is a great model for how other technology companies should operate.

The Mycroft AI Logo

“Open-source” is the crucial component that makes AI accessible to the social impact sector. When thousands of people working in a similar space make their code available for free, individuals can leverage those pre-built resources and wield them for their own purposes. Nonprofits that are not in a position to build AI from scratch can draw on the work of thousands of others to acquire a tool at almost no cost. This gets easier when the developers of open-source tools place real focus on accessibility and usability; what good is a tool, after all, if nobody can figure out how to use it?

Last month I developed a skill that lets Mycroft report on the status of political freedoms in any country using the open-source civic space dataset, the CIVICUS Monitor. Building on Mycroft’s ability to process spoken words into text, the script determines which country is referenced in a query (“check civic space in [country]”), looks up the appropriate page on the CIVICUS Monitor website, and reviews the page to find a summary of civic space in that country. This was a simple skill (the core script is less than 160 lines of code), but it was custom-made to fit a specific need relevant to social impact. I had created a new way to access an important dataset about democratic freedoms.
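The flow described above can be sketched roughly as follows. This is a minimal, framework-free sketch, not the skill’s actual code: the function names, the regular expression, and the URL slug format are illustrative assumptions, and a real Mycroft skill would wrap this logic in the Mycroft skill framework and actually fetch and parse the CIVICUS Monitor page.

```python
import re
from typing import Optional

# Base address of the CIVICUS Monitor; the per-country page path
# below is an assumption for illustration, not the site's real layout.
MONITOR_BASE = "https://monitor.civicus.org"

def extract_country(utterance: str) -> Optional[str]:
    """Pull the country name from a query like 'check civic space in Kenya'."""
    match = re.search(r"civic space in (.+)", utterance, re.IGNORECASE)
    return match.group(1).strip() if match else None

def country_page_url(country: str) -> str:
    """Build a country-page URL (slug format is a hypothetical example)."""
    slug = country.lower().replace(" ", "-")
    return f"{MONITOR_BASE}/country/{slug}/"

def handle_query(utterance: str) -> Optional[str]:
    """Map a spoken query to the page the skill would summarize."""
    country = extract_country(utterance)
    if country is None:
        return None  # utterance didn't match the expected phrasing
    # In the real skill, this step would download the page and extract
    # the civic-space summary to read back aloud; here we stop at the URL.
    return country_page_url(country)
```

For example, `handle_query("check civic space in South Africa")` would resolve the utterance to a country and return the (assumed) page URL for South Africa, while an unrelated utterance returns `None` so the skill can fall through to other intent handlers.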