If you subscribe to the train of thought so far, we have a conundrum: we, as humans, want fair systems but can’t define what is fair and what isn’t. And even if we could, doing so would assume we are infallible, which we clearly are not. Why, then, are we holding software systems to conceptual standards we fail to achieve ourselves?

Another lens through which to view AI ethics is that of explainability. Perhaps as a consequence of being unable to define fairness in different situations, we demand that AI systems be able to explain how and why they arrive at a certain prediction. This type of framework gives us a way to identify discrimination that we “feel” is unacceptable, even though we can’t explicitly define the set of unacceptable cases.

The main problem with this approach is that most machine learning models in use today are inherently unexplainable, chiefly because of their distributed nonlinearities. We use such models because of their performance: they tend to work much better than simpler, more explainable models like decision trees. This raises an interesting hypothetical: if the black-box nature of modern machine learning worries us so much, why not sacrifice accuracy and use more explainable models across the board? It seems we value accuracy over explainability, but my personal experience suggests that even when given an explainable model like a decision tree, we still don’t find the derived rulesets useful. No matter how explainable the system, for any non-trivial problem the resulting logical path from input to output is too complex to satisfy most people. It seems, again, that we want something (explainability) without being able to define what exactly it entails. Double standards apply here, too: ask a human to fully and fundamentally explain why they made a decision and you will likely not get a proper answer.
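To make the point concrete, here is a minimal sketch of how quickly even a “glass-box” model outgrows human comprehension. It uses scikit-learn and an off-the-shelf tabular dataset; both are illustrative assumptions, not a reference to any specific system discussed here.

```python
# A minimal sketch of the argument above: train a fully "explainable"
# decision tree, then dump the ruleset that supposedly explains it.
# The dataset and the unconstrained depth are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# An unpruned tree fits the training data in full.
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# Every prediction is traceable as a chain of threshold tests...
rules = export_text(tree, feature_names=list(data.feature_names))

# ...but even on this small, clean dataset the "explanation" is a wall
# of nested if/else branches.
print(f"depth: {tree.get_depth()}, leaves: {tree.get_n_leaves()}")
print(rules)
```

Even here the printed ruleset runs to dozens of nested threshold tests; scale the problem up and the “explanation” becomes a graph nobody can hold in their head, which is exactly the complaint above.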

In addition to the glaring issues surrounding discrimination, explainability, and the concept of “fairness”, it seems that when we talk about AI ethics, we can “break” the field without discussing AI at all, or indeed even traditional software development. The next time you read about AI ethics, replace “AI” with any other disruptive technology and see if the argument still reads the same. The fundamental problems lie with us, not with the systems we develop. Combine that with the fact that models and datasets are not in and of themselves biased, but merely give rise to bias through the human-designed processes that generate their data, and we find ourselves between a rock and a hard place. As far as I can see, the only avenue for progress is to do what we should already be doing: employing common sense, accepting the possibility of making mistakes, and, most importantly, fixing problems as they arise.

Unfortunately, such notions call the vast majority of ongoing AI ethics work into question.