Artificial intelligence (AI) has the potential to help us tackle the pressing issues raised by the COVID-19 pandemic. It is not the technology itself, though, that will make the difference but rather the knowledge and creativity of the humans who use it.

Indeed, the COVID-19 crisis will likely expose some of the key shortfalls of AI. Machine learning, the current form of AI, works by identifying patterns in historical training data. When used wisely, AI has the potential to exceed humans not only through speed but also by detecting patterns in that training data that humans have overlooked.

However, AI systems need a lot of data, with relevant examples in that data, in order to find these patterns. Machine learning also implicitly assumes that conditions today are the same as the conditions represented in the training data. In other words, AI systems implicitly assume that what has worked in the past will still work in the future.

What does this have to do with the current crisis? We are facing unprecedented times. Our situation is jarringly different from that of just a few weeks ago. Some of what we need to try today will have never been tried before. Similarly, what has worked in the past may very well not work today.

Humans are not that different from AI in these limitations, which partly explains why our current situation is so daunting. Without previous examples to draw on, we cannot know for sure the best course of action. Our traditional assumptions about cause and effect may no longer hold true.

The human touch

Humans have an advantage over AI, though. We are able to learn lessons from one setting and apply them to novel situations, drawing on our abstract knowledge to make best guesses on what might work or what might happen. AI systems, in contrast, have to learn from scratch whenever the setting or task changes even slightly.

The COVID-19 crisis, therefore, will highlight something that has always been true about AI: it is a tool, and the value of its use in any situation is determined by the humans who design it and use it. In the current crisis, human action and innovation will be particularly critical in leveraging the power of what AI can do.

One approach to the novel situation problem is to gather new training data under current conditions. For both human decision-makers and AI systems alike, each new piece of information about our current situation is particularly valuable in informing our decisions going forward. The more effective we are at sharing information, the more quickly our situation ceases to be novel and the sooner we can begin to see a path forward.

Projects such as the COVID-19 Open Research Dataset, which provides the text of over 24,000 research papers; the COVID-Net open-access neural network, a collaborative effort to develop a system that identifies COVID-19 in lung scans; and an initiative asking individuals to donate their anonymized data all represent important efforts by humans to pool data so that AI systems can then sift through this information to identify patterns.

A second approach is to use human knowledge and creativity to undertake the abstraction that AI systems cannot do. Humans can distinguish between situations where algorithms are likely to fail and situations in which historical training data is likely still relevant to address critical and timely issues, at least until more current data becomes available.

Such systems might include algorithms that predict the spread of the virus using data from previous pandemics, or tools that help job seekers identify opportunities that match their skillsets. Even though the particular nature of COVID-19 is unique and many of the fundamental rules of the labour market are not operating as usual, it is still possible to identify valuable, although perhaps carefully circumscribed, avenues for applying AI tools.

Collaboration is key

Efforts to leverage AI tools in the time of COVID-19 will be most effective when they involve the input and collaboration of humans in several different roles. The data scientists who code AI systems play an important role because they know what AI can do and, just as importantly, what it can’t. We also need domain experts who understand the nature of the problem and can identify where past training data might still be relevant today. Finally, we need out-of-the-box thinkers who push us to move beyond our assumptions and can see surprising connections.

Toronto-based startup BlueDot is an example of such a collaboration. In December 2019 it was one of the first to identify the emergence of a new outbreak in China. Its system relies on the vision of its founder, who believed that predicting outbreaks was possible, and combines the power of several different AI tools with the knowledge of epidemiologists, who identified where and how to look for evidence of emerging diseases. These epidemiologists also verify the results at the end.

Reinventing the rules is different from breaking the rules, though. As we work to address our current needs, we must also keep our eye on the long-term consequences. All of the humans involved in developing AI systems need to maintain ethical standards and consider possible unintended consequences of the technologies they create. While our current crisis is very pressing, we cannot sacrifice our fundamental principles to address it.

The key takeaway is this: Despite the hype, there are many ways in which humans still surpass the capabilities of AI. The stunning advances that AI has made in recent years are not an inherent quality of the technology, but rather a testament to the humans who have been incredibly creative in how they use a tool that is mathematically and computationally complex, yet at its foundation still quite simple and limited.

As we seek to move rapidly to address our current problems, therefore, we need to continue to draw on this human creativity from all corners, not just the technology experts but also those with knowledge of the settings, as well as those who challenge our assumptions and see new connections. It is this human collaboration that will enable AI to be the powerful tool for good that it has the potential to be.

Matissa Hollister, Assistant Professor of Organizational Behaviour, McGill University

Republished from the World Economic Forum.