For centuries, the law has defined corporate misconduct in terms of employee misconduct. The legal doctrine for attributing mental states to corporations – respondeat superior – defines corporate mental states in terms of employee mental states.

But algorithms may soon replace employees as the leading cause of corporate harm.

When corporations use algorithms to break the law, current liability doctrines do not apply.

Unless the law adapts, corporations will become increasingly immune to civil and criminal liability as they transfer responsibility from employees to algorithms.

That’s according to a new paper, “The Extended Corporate Mind: When Corporations Use AI to Break the Law,” by Mihailis E. Diamantis, an associate professor at the University of Iowa College of Law.

The paper examines the growing doctrinal gap left by what Diamantis calls “algorithmic corporate misconduct.”

“To hold corporations accountable, the law must sometimes treat them as if they ‘know’ information stored on their servers and ‘intend’ decisions reached by their automated systems. Cognitive science and the philosophy of mind offer a path forward,” Diamantis writes. “The ‘extended mind thesis’ complicates traditional views about the physical boundaries of the mind. The thesis states that the mind encompasses any system that sufficiently assists thought – by facilitating recall or enhancing decision-making. For natural people, the thesis implies that minds can extend beyond the brain to include external cognitive aids, like rolodexes and calculators.”

Diamantis adapts the extended mind thesis to corporate law. He puts forward a doctrinal framework for extending the corporate mind to the algorithms that are increasingly integral to corporate thought.

“The law needs such an innovation if it is to hold future corporations to account for their most serious harms,” Diamantis writes.