We recently came across this article, Show your work: The new terms for trust in journalism, which describes techniques that journalists are using to regain reader trust and distance themselves from the label of “fake news”. We wanted to do the same for machine learning and discuss how to bring sufficient clarity and transparency to enable client and stakeholder trust in it.

For those who are unfamiliar with 1plusX, we are a Predictive Data Management Platform. The word predictive is intended to convey just how central machine learning is to our offering. Needless to say, we believe that machine learning can truly enable our clients to make more revenue, save costs and make their jobs easier.


With that being said, it is clear that machine learning has hit the peak of the hype cycle. Nearly every company seems to have launched, or attempted, its own machine learning offering, with varying degrees of substance and success.

Gartner shows Machine Learning past peak on the hype cycle.

On top of that, machine learning can be a black box: even when it produces the desired results, it can be difficult to convince stakeholders, bosses, or executives that they can continue to trust it. We have seen this day in and day out, since so many stakeholders care about and are affected by a DMP.

With these points in mind, we wanted to start a conversation and point out the methods we use to ensure machine learning not only provides sufficient value to our clients, but also is something they feel that they can trust.

Monetary Customer / Stakeholder Value

One of our top goals when working with a client is to mutually assess the monetary value that our offering brings to them. At the end of the day, enabling our clients to clearly see that they boosted revenues by X dollars or reduced cost by Y percent is a major first step in gaining their trust.


This is not an easy task. It generally requires nurturing a strong enough relationship with our clients so that we can deeply understand what they are struggling with and discuss with them how it affects them financially. Questions such as “What alternative would you choose if there was no machine learning solution?” and “How much does this alternative cost in dollars, labor and complexity?” are good starting points for driving these discussions.

The hard work is well worth the trouble. Framing things in financial terms helps our clients prioritize what to focus on, which allows us to tune our algorithms accordingly, maximizing our chances for a successful result. If you are a machine learning vendor like us, this also has the additional benefit of setting things up nicely for case studies.

External Quality Assessments

Every vendor claims to have the best quality machine learning offerings, but we think it is best when the results are validated and can speak for themselves.

Of course, external validation data is not always readily available, so we love it when our clients want to do an external quality assessment of our machine learning predictions. Positive results from unbiased third parties can convince skeptical stakeholders that our claims are not just empty marketing.

For us, there are fortunately a number of providers that can validate our work. For example, we supply predictions for digital users’ gender and age, which can be assessed by international panel providers, such as Nielsen, comScore, and GfK, and regional ones, such as Germany’s Arbeitsgemeinschaft Online Forschung (AGOF) or Switzerland’s Link Institute. Recently, one of our clients, Admeira, ran an assessment using Facebook Atlas Solutions and was able to directly compare our predictions to those of their existing third-party data source. The results showed our quality to be 32% better than that provider’s, giving them a definitive reason to use our offering exclusively.
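To make concrete what such an assessment measures, here is a minimal sketch of comparing two providers’ demographic predictions against panel-reported ground truth. All of the data and numbers below are invented for illustration; real assessments use panels such as the ones named above.

```python
# Hypothetical sketch: comparing two providers' gender predictions
# against a third-party panel's ground truth. Data is invented.

def accuracy(predictions, ground_truth):
    """Fraction of users whose predicted label matches the panel label."""
    matches = sum(p == t for p, t in zip(predictions, ground_truth))
    return matches / len(ground_truth)

panel = ["m", "f", "f", "m", "f", "m", "f", "m"]  # panel-reported gender
ours  = ["m", "f", "f", "m", "f", "f", "f", "m"]  # one provider's predictions
other = ["m", "f", "m", "m", "m", "f", "f", "f"]  # incumbent provider

# Relative quality lift of one provider over the other, in percent
lift = (accuracy(ours, panel) / accuracy(other, panel) - 1) * 100
print(f"relative quality lift: {lift:.0f}%")
```

A headline figure like “32% better” is typically a relative lift of this kind, computed on a panel-validated sample.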

Predictability Trumps Quality

Several months ago, we deployed an algorithm update that improved age predictions for a client by 10%. After it had run for a month, however, we saw the size of a particular audience fluctuate unexpectedly over the course of two days before it stabilized at a more expected level. A few days later, a similar blip occurred.

Unpredictability in machine learning results shakes client and stakeholder confidence.

The issue turned out to be a very rare combination of conditions in the data, which caused things to go haywire. We believed the chance of it recurring was very slim. Unfortunately, if it did recur, we would not be able to stop the size jump from happening. We decided to shelve our quality improvement in favor of predictability, in order to maintain client confidence in our predictions.
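The kind of audience-size blip described above can be caught with a simple day-over-day sanity check. The sketch below is illustrative only, with an invented threshold, and is not our production system:

```python
# Illustrative sketch: flag suspicious day-over-day jumps in audience
# size before predictions are shipped. The 25% threshold is invented.

def flag_jumps(daily_sizes, max_rel_change=0.25):
    """Return indices of days whose audience size changed by more than
    max_rel_change relative to the previous day."""
    flagged = []
    for i in range(1, len(daily_sizes)):
        prev, curr = daily_sizes[i - 1], daily_sizes[i]
        if prev and abs(curr - prev) / prev > max_rel_change:
            flagged.append(i)
    return flagged

sizes = [100_000, 102_000, 101_500, 163_000, 99_800, 100_700]
print(flag_jumps(sizes))  # days 3 and 4 jump relative to their predecessor
```

A check like this turns an unsettling surprise into an internal alert that can be investigated before clients ever see it.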


Responsiveness to Problems

Having a machine learning offering requires one to work with a lot of data. We process terabytes of data daily, and together with rolling out new features at a furious pace, well, let’s just say the best-laid plans of mice and men sometimes go awry.

We have previously talked about our monitoring and alerting system. Our main driver for setting it up was to ensure that we would be the first to know if something went wrong. This gives our engineers the maximum amount of time to fix issues and also allows us to give heads-up messages to our clients, so that they can coordinate contingency plans on their end.
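At its core, such a system compares observed pipeline metrics against thresholds and raises alerts on breaches. The following is a minimal sketch with hypothetical metric names and limits; a real setup would feed these from pipeline telemetry and page an engineer rather than print:

```python
# Minimal sketch of threshold-based alerting. Metric names and limits
# are hypothetical examples, not our actual configuration.

THRESHOLDS = {
    "ingest_lag_minutes": 30,     # data freshness
    "failed_jobs": 0,             # batch job failures
    "null_feature_ratio": 0.05,   # share of records missing features
}

def check_metrics(metrics):
    """Compare observed metrics to thresholds; return alert messages."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"ALERT {name}={value} exceeds {limit}")
    return alerts

observed = {"ingest_lag_minutes": 45, "failed_jobs": 0,
            "null_feature_ratio": 0.02}
for alert in check_metrics(observed):
    print(alert)
```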

Visualization and Simplification

There are a number of fantastic resources, such as edX, Coursera and Udacity, for delving into the details of machine learning, and we certainly encourage our clients to take these courses.

Having said that, we do not expect managers and executives at our clients to have the time to do so. Instead, we have focused our account management efforts on designing visualizations and explanations that allow our clients to more quickly understand our approach.

For example, we frequently show the following visualization. Each dot represents a web article or a user. The closer a dot is to another dot, the more similar they are and conversely, the farther away two dots are, the less similar they are. As a result, you will begin to see clusters of topics. For example, the group of yellow dots represents sports articles; grey dots near the yellow dots represent users who are most interested in sports articles.

A visualization we use to explain the partial results of our machine learning algorithm.

Our algorithms do not actually compute similarity distances between articles or between users. However, for many people, thinking about physical distances and similarity is a much easier concept to grasp than if we were to explain in detail what is going on behind the scenes.
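To show how the “closer means more similar” intuition can be grounded, here is a toy example using cosine similarity between bag-of-words-style article vectors. The articles and term counts are invented, and, as noted above, this is not how our algorithms actually work; it just mirrors the intuition behind the plot.

```python
# Toy illustration of similarity between articles, using invented
# term-count vectors over (sports, finance, food) vocabulary buckets.
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, 0.0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

articles = {
    "football_recap": [5, 1, 0],
    "transfer_news":  [4, 0, 1],
    "stock_report":   [0, 6, 0],
}

sim_sports = cosine(articles["football_recap"], articles["transfer_news"])
sim_mixed  = cosine(articles["football_recap"], articles["stock_report"])
print(f"sports vs sports: {sim_sports:.2f}, sports vs finance: {sim_mixed:.2f}")
```

The two sports articles score far higher than the sports–finance pair, which is exactly the clustering effect the dots in the visualization convey.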

Transparent Algorithms

Lastly, while we have not yet rolled out anything using these algorithms, we would be remiss in not mentioning that there is a push for more explainable artificial intelligence. For example, if an algorithm diagnosed someone with asthma, the goal would be for the algorithm to also be able to list which symptoms led it to that conclusion.

The US DARPA is one large organization actively pushing for Explainable Artificial Intelligence. There is also a group at the University of Washington that released LIME, which helps to open up the black box of image and text classification.
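To give a flavor of the idea, below is a heavily simplified illustration of the kind of probing that explainers like LIME perform; it is not the LIME library itself, and the stand-in classifier and its keyword weights are invented. The probe removes each word in turn and records how much the black box’s score drops:

```python
# Simplified, invented illustration of perturbation-based explanation
# (the idea behind tools like LIME, not the LIME library itself).

def sports_score(text):
    """Stand-in black-box classifier: probability-like score for 'sports'."""
    keywords = {"goal": 0.4, "match": 0.3, "striker": 0.3}
    return min(1.0, sum(w for k, w in keywords.items() if k in text.split()))

def explain(text, predict):
    """Attribute the prediction to words by removing each one in turn."""
    words = text.split()
    base = predict(text)
    contributions = {}
    for i, word in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        contributions[word] = base - predict(reduced)  # score drop on removal
    return contributions

print(explain("the striker scored a late goal", sports_score))
```

An explanation like this lets a stakeholder see that “striker” and “goal” drove the sports prediction, rather than having to take the score on faith.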

An example of the University of Washington’s LIME algorithm for explaining image classification elements.

Conclusion

We have an active interest in setting up other machine learning projects for success, so that collectively more and more people can understand the value of machine learning and put their trust in its effectiveness. With some luck, we can outright skip the trough of disillusionment that affects so many promising emerging technologies.

We hope that sharing these anecdotes helps spark a conversation on how best to cultivate trust in machine learning. If you have other methods that we missed or just have any questions about the topic, feel free to comment or get in touch with us!