If you’re using machine learning in your organization, you should probably be thinking about how to manage the ethical, legal, and business risks involved if something goes wrong.

But according to a new paper from the Future of Privacy Forum and Immuta, a data management platform startup based in College Park, Maryland, there simply isn’t an industry-standard framework for thinking about these kinds of issues.

“We see a really deep and pressing need for guidelines and for an actual framework to measure risk for machine learning,” says Andrew Burt, chief privacy officer at Immuta and one of the paper’s authors.

In the paper, released Tuesday, the two organizations offer guidance to companies grappling with these issues. Among their suggestions, inspired in part by a 2011 Federal Reserve document on managing financial model risk, is that companies set up three “lines of defense” for handling artificial intelligence risk.

The first line consists of data scientists and other experts who define the exact assumptions and goals of a project. The second is a team of data and legal experts who act as “validators,” reviewing the assumptions, methods, documentation, and information on underlying data quality. The third is a regular review of the overall assumptions behind the model and how they’re working out in practice.

Video: FPF & Immuta – “How can we govern a technology its creators can’t fully explain?” (Immuta, via Vimeo)