

Reining in artificial intelligence might be the sanest thing we do this century.

Diane E. Bailey is the co-author with Paul M. Leonardi of Technology Choices: Why Occupations Differ in Their Embrace of New Technology, and is an associate professor in the School of Information at the University of Texas–Austin.

Computer scientists have invested six decades and hundreds of millions of dollars in an effort to create artificial intelligence, or machines that think like humans.

No doubt AI offers many worthy benefits. But artificial brains are also putting human brains out of work across a wide swath of occupations. First came factory workers, then travel agents, booksellers, and accountants; today, writers are on the chopping block, while doctors, lawyers, and professors are on notice.

The pace of AI research is now so fast that state legislatures, which once thought driverless vehicles belonged to a distant future, are scrambling to write laws for what happens when nobody is behind the wheel and somebody gets hurt. That’s a reactive, not a proactive, approach. Moreover, it is an approach that ignores the occupational implications of AI, in this case the future of taxi drivers, bus drivers, truck drivers, and others who operate vehicles for a living.

We can do better.

One way is to require workforce impact studies, similar to environmental impact studies, before new AI can enter the market. We know from studies of workplace technology that designers cannot fully predict how people will use new technologies. But that doesn’t mean we can’t make fairly educated guesses about how organizations might implement new AI and what might happen when they do. If history is any guide, their plans are unlikely to involve humans and machines working harmoniously, warm hand in happy manipulator. Rather, because we are the higher-priced option, we can expect our metal mates to replace us at every juncture possible.

Models already exist for constructing such workforce impact studies. Economist Alan Blinder has done exacting analyses of Department of Labor data to predict which occupations may be prone to offshoring; similar analyses might foreshadow the kinds of changes that AI will bring. By looking closely at the demands and skill requirements of each potentially affected occupation, such studies would tell us in advance, for example, how many professional drivers driverless vehicles might put out of work, or how many doctors might lose their jobs to AI that draws upon troves of patient and clinical-trial data to form diagnoses and make treatment recommendations.

In my own work with Paul Leonardi, we’ve shown that occupations can and do make choices about technology use. Observing engineers at work across three occupations, we learned that some chose to automate their work and others did not. One factor that shaped engineers’ decisions was whether legal codes and professional licensing (read: rules) governed their products. Structural engineers, who were legally liable for the buildings they designed, resisted efforts to automate their design and analysis technologies. With human lives at stake in the event of building failure, they recognized the importance of forcing themselves to re-think their assumptions by re-visiting the building model at every step of the design process. By contrast, hardware engineers, who were not legally liable for the computer chips they designed, had no such qualms; they sought to fully automate the suite of technologies they employed, thereby seeing only the final model, and not the intermediate ones, in each test they ran. In this manner, external laws and regulations (or the lack thereof) helped to shape which tasks computers did and which tasks humans did in these two occupations. We might make similar choices with AI not just to save lives, but to save jobs.

In short, we as a society don’t have to sit here while computer scientists who create AI make de facto decisions about the future of work. We have technology choices, and we ought to explore them. We have scholars and practitioners in ethics, society, culture, history, work, government, public policy, and more whose expertise will help us make sense of these issues; it is time they join computer scientists around the AI table.

Technology does not have a greater right to employment than we do. If the oil-spill deaths of mammals and birds along the California coastline could spawn an environmental movement in the 1970s that forced corporations to reveal the impacts of planned development, then surely we should not witness the demise of one more occupation before showing the same concern for every human brain on this planet.

For the Future of Work, a special project from the Center for Advanced Study in the Behavioral Sciences at Stanford University, business and labor leaders, social scientists, technology visionaries, activists, and journalists weigh in on the most consequential changes in the workplace, and what anxieties and possibilities they might produce.