Blind spots can occur at any point in the development of a model, from when it is first conceptualized to when it is built, and even after it is deployed. No human is immune to blind spots, and while we can roughly point to their location, they are normally hard to perceive. The same can be said of the way algorithmic technologies are developed: even with the best of intentions, the things we never anticipated can end up causing great harm.

The consequences of blind spots are challenging to foresee, but they tend to have adverse effects on historically marginalized communities. These harms can be mitigated if we all intentionally take action to guard against them.

With recent calls for the tech industry to take greater responsibility for its dual-use inventions, we need “systematic consideration of the harms that might occur” before products are designed and launched. We offer AI Blindspot as a provocation to help us build better systems.

Help Us Improve The “AI Blindspot” Project at MozFest 2019

We are engaging with stakeholders in the AI community, such as policymakers, activists, journalists, product managers, and academic researchers, to conduct user tests, surveys, and workshops. Their feedback helps us iterate on the content, format, and presentation of this tool — and we need your help!

Two of our AI Blindspot team members, Ania Calderon and Hong Qu, will be conducting a workshop at MozFest on October 26. If you are attending MozFest, we urge you to join us. We will be handing out sets of AI Blindspot cards to participants.

You can also find us on Twitter at @aiblindspot or email us at info@aiblindspot.com.