Employers may increasingly automate their workplaces, requiring a new approach to workplace regulation.

Why and when do firms hire employees? This question has grown urgent in light of two trends that are undermining the standard employment relationship: “fissuring,” or firms’ growing tendency to secure inputs from outside suppliers, and automation, or the replacement of human labor with machines. The latter has been going on for centuries. But many informed observers argue that “this time is different,” and that innovations in robotics, artificial intelligence, and machine learning are likely in the foreseeable future to destroy more jobs than they create—and will depress wages for much of the rest of the working population.

The rise of both fissuring and automation imperils the stability of employment as a platform for delivering a range of basic social entitlements, and points toward shifting the locus of some of those entitlements, as well as their costs, off employment and onto a broader and more redistributive revenue base.

The venerable “theory of the firm” tells us that firms hire employees, versus contracting with an outside supplier, when it is too costly to specify and enforce terms of performance through explicit contracting, such that direct supervision of the work within the firm is more cost-effective. Whether firms “make” what they need internally or “buy” it from outside contractors, another calculus governs the choice between deploying labor or investing in capital, such as machines or software. Simply put, firms hire employees when they can secure the inputs they need more cheaply, reliably, or effectively through the direct supervision of workers than through either outside suppliers or technological substitutes for labor.

In recent decades, firms have found more ways and more reasons to contract out their labor needs rather than hiring employees directly. A raft of technologies has enhanced firms’ ability to specify terms and monitor suppliers’ performance. As a result, firms are able to reduce production costs while concentrating their internal resources on higher-profit-margin “core competencies.” Hence the proliferation of global supply chains, domestic outsourcing, franchising arrangements, and other organizational innovations that David Weil, in his influential work, has called “fissuring.”

Fissuring tends to undermine workers’ welfare because wages, working conditions, security, and opportunities for advancement are generally worse within supplier firms—and the competitive low-wage labor markets within which many operate—than they are within the internal labor markets of large integrated “lead firms.” Indeed, that is one key to the cost advantages that firms achieve through outsourcing.

The rise of the “platform economy” has taken these trends seemingly to the limit: Uber, for example, contracts out to outside suppliers—individuals whom it considers “independent contractors”—the entirety of what it sells to consumers. The rise of platform work, the misclassification of workers, and fissuring more broadly have captivated labor and employment law scholars and advocates in recent years.

The field has devoted less attention to the growing capacity of firms to replace human labor, whether in-house or outsourced, with technology. The rising capabilities and falling cost of robots and algorithms and the corresponding potential for job destruction are plain to all. But there has been little systematic study of how the growing array of automation technologies and fissuring techniques together should affect the future shape of labor and employment law.

For example, the rise of fissuring has sensibly led labor scholars and advocates to seek to extend firms’ legal responsibility for wages and working conditions to the workers who supply their labor inputs but are not legally their employees. But automation throws a wrench into the works: To the extent those reforms succeed, and firms cannot escape the legal costs associated with employment through fissuring, they are more likely to turn to robots and algorithms instead of human labor. A constructive response to fissuring must take into account the exit option of automation.

Clearly, humans will still be needed for some tasks; some skilled workers will be in high demand going forward, and some jobs will improve as machines take over the most repetitive or grueling work. Indeed, most economists still argue that, in the future as in the past, automation will promote prosperity through “creative destruction,” replacing such jobs with better ones.

As the capabilities of robots and algorithms rise and their costs fall, however, some leading economists—both modelers and measurers—have found that net job losses and economic polarization from automation are not just possible in theory but are occurring on the ground. Much remains uncertain. The future, it turns out, is still hard to predict.

If the future of work involves a lot less of it for humans, the implications will extend far beyond the regulation of work itself. Collectively and individually, society will have to find other sources for the income and other rewards now derived from work, as well as other sites of social integration and purposeful cooperative activity.

The ancient but resurgent idea of a “universal basic income,” or the old notion of the government as employer of last resort, newly repackaged as a federal job guarantee, might gain traction. A push for “job sharing” through shorter working hours might reemerge after decades of dormancy. There are both welcome and worrisome possibilities in a future of much less work.

But there is little doubt that a wave of job destruction, if and when it becomes manifest, will require a profound rethinking of the existing constellation of laws, norms, and institutions that regulate work and of the assumptions underlying those regulatory structures.

In my own first cut at that daunting project, I begin with the simple proposition that much of the law of work imposes costs on employers. In some cases—as with health and safety laws, regulation of hours and scheduling, and rights against discrimination—those costs are necessary to secure the underlying worker entitlements. The costs help to deter harmful conduct or induce desired precautions. But in other cases—as with health coverage or paid leave mandates—that logic does not apply; all an employer can do to avoid those costs is employ fewer workers. For that latter set of entitlements, society should aim to shift costs onto a broader and more redistributive funding base instead of inefficiently taxing the use of human labor.

In short, employment law should “first, do no harm.” It should avoid unnecessarily and inefficiently accelerating fissuring and automation. Speeding up automation, in particular, is counterproductive if society is facing not historical patterns of creative destruction, but shrinking demand for the skills that most humans can muster.

The American model of social provision that developed over the 20th century consists largely of voluntary and mandatory employment-based rights and entitlements. The model worked fairly well for several decades, at least for the full-time employees of major firms and their families, if not for those at the margins of the economy. Cracks in the model began emerging in the 1970s, and they have grown under the pressure of changes in the global and domestic economy too numerous and complex to recount here.

The rise of ever-smarter machines threatens to ratchet up that pressure to the breaking point. Those machines will yield plenty of goods, services, and wealth; the risk, however, is that the gains will be monopolized by the few rather than cascading down to the many through decent-paying jobs. Society should strive to avert that risk, in part through the thoughtful redesign and reform of the laws and structures through which the government regulates work.

Cynthia Estlund is the Catherine A. Rein Professor of Law at the New York University School of Law.