In a recent podcast series called Instaserfs, a former Uber driver named Mansour gave a chilling description of the new, computer-mediated workplace. First, the company tried to persuade him to take out a predatory loan to buy a new car; apparently a number-cruncher had deemed him at high risk of defaulting. Second, Uber would never respond to him in person – it just sent text messages and emails. This style of supervision was a series of take-it-or-leave-it ultimatums – a digital boss coded in advance.

Then the company suddenly took a larger cut of revenues from him and other drivers. And finally, what seemed most outrageous to Mansour: his job could be terminated without notice if a few passengers gave him one-star reviews, since that could drag his average below 4.7. According to him, Uber offers no real appeals process or other due process for a rating system that can instantly put a driver out of work – it simply crunches the numbers.
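The arithmetic behind such a cutoff is worth seeing in miniature. Here is a sketch in Python: the 4.7 threshold comes from Mansour's account, but the trip and review counts are hypothetical, invented purely for illustration.

```python
# How a handful of one-star reviews can drag a driver's average below
# a 4.7 cutoff. The cutoff is from the story; all counts are invented.

def average_rating(ratings):
    """Mean of all ratings the driver has received."""
    return sum(ratings) / len(ratings)

CUTOFF = 4.7

ratings = [5] * 95 + [4] * 5          # 100 trips: average 4.95, safely above 4.7
assert average_rating(ratings) >= CUTOFF

ratings += [1] * 7                    # seven one-star reviews arrive
print(round(average_rating(ratings), 3))   # 4.692 - below the cutoff
print(average_rating(ratings) < CUTOFF)    # True: the algorithm can now drop him
```

Seven bad reviews out of 107 trips are enough; the driver has no way to see the calculation, only its consequence.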

Mansour’s story compresses long-standing trends in credit and employment – and it’s by no means unique. Online retailers live in fear of a ‘Google Death Penalty’ – a sudden, mysterious drop in search-engine rankings if they do something judged fraudulent by Google’s spam detection algorithms. Job applicants at Walmart in the US and other large companies take mysterious ‘personality tests’, which process their responses in undisclosed ways. And white-collar workers face CV-sorting software that may understate, or entirely ignore, their qualifications. One algorithmic CV analyser found all 29,000 people who applied for a ‘reasonably standard engineering position’ unqualified.

The infancy of the internet is over. As online spaces mature, Facebook, Google, Apple, Amazon, and other powerful corporations are setting the rules that govern competition among journalists, writers, coders, and e-commerce firms. Uber, Postmates, and other platforms are adding a code layer to occupations such as driving and service work. Cyberspace is no longer an escape from the ‘real world’. It is now a force governing it via algorithms: recipe-like sets of instructions to solve problems. From Google search to OkCupid matchmaking, software orders and weights hundreds of variables into clean, simple interfaces, taking us from query to solution. Complex mathematics governs such answers, but it is hidden from view, thanks either to secrecy imposed by law, or to a complexity that outsiders cannot unravel.
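At its core, such an ordering-and-weighting algorithm can be nothing more mysterious than a weighted sum: score each item on a few variables, multiply by weights, sort, and present a clean list. A toy sketch, with every variable name and weight invented for illustration (no real search engine's formula):

```python
# Toy illustration of algorithmic ranking: weight a few variables,
# sum them, sort, and display an ordered list. Real systems weight
# hundreds of variables; the names and weights here are hypothetical.

WEIGHTS = {"relevance": 0.6, "freshness": 0.25, "popularity": 0.15}

def score(item):
    """Weighted sum of the item's features."""
    return sum(WEIGHTS[k] * item[k] for k in WEIGHTS)

items = [
    {"name": "A", "relevance": 0.9, "freshness": 0.2, "popularity": 0.5},
    {"name": "B", "relevance": 0.7, "freshness": 0.9, "popularity": 0.9},
]

ranked = sorted(items, key=score, reverse=True)
print([i["name"] for i in ranked])   # ['B', 'A']
```

The clean output conceals the choices that produced it: who picked the weights, and why, never appears in the interface.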

Algorithms are increasingly important because businesses rarely thought of as high tech have learned the lessons of the internet giants’ successes. Following the advice of Jeff Jarvis’s What Would Google Do?, they are collecting data from both workers and customers, and using algorithmic tools to sort the desirable from the disposable. Companies may be parsing your voice and credit record when you call them, to determine whether you match up to ‘ideal customer’ status, or are simply ‘waste’ who can be treated with disdain. Epagogix advises movie studios on which scripts to buy, based on how closely they match past, successful scripts. Even winemakers make algorithmic judgments, based on statistical analyses of the weather and other characteristics of good and bad vintage years.

For wines or films, the stakes are not terribly high. But when algorithms start affecting critical opportunities for employment, career advancement, health, credit and education, they deserve more scrutiny. US hospitals are using big data-driven systems to determine which patients are high-risk – and data far outside traditional health records is informing those determinations. IBM now uses algorithmic assessment tools to sort employees worldwide on criteria of cost-effectiveness, but spares top managers the same invasive surveillance and ranking. In government, too, algorithmic assessments of dangerousness can lead to longer sentences for convicts, or no-fly lists for travellers. Credit-scoring drives billions of dollars in lending, but the scorers’ methods remain opaque. The average borrower could lose tens of thousands of dollars over a lifetime, thanks to wrong or unfairly processed data.

This trend toward using more data, in more obscure ways, to rank and rate us may seem inevitable. Yet the exact development of such computerised sorting methods is anything but automatic. Search engines, for instance, are paradigmatic algorithmic technologies, but their present look and feel owe a great deal to legal interventions. Thanks to Federal Trade Commission action in 2002, United States consumer-protection law requires the separation of advertisements from unpaid, ‘organic’ content. In a world where media firms are constantly trying to blur the distinction between content and ‘native advertising’, that law matters. European Union regulators are now trying to ensure that irrelevant, outdated, or prejudicial material does not haunt individuals’ ‘name search’ results – a critical task in an era when so many prospective employers google those whom they are considering for a job. The EU has also spurred search engines to take human dignity into account – by, for example, approving the request of a ‘victim of physical assault [who] asked for results describing the assault to be removed for queries against her name’.

Such controversies have given rise to a movement for algorithmic accountability. At ‘Governing Algorithms’, a 2013 conference at New York University, a community of scholars and activists coalesced to analyse the outputs of algorithmic processes critically. Today these scholars and activists are pushing a robust dialogue on algorithmic accountability, or #algacc for short. Like the ‘access to knowledge’ (A2K) mobilisation did in the 2000s, #algacc turns a spotlight on a key social justice issue of the 2010s.

Some in the business world would prefer to see the work of this community end before it has even started. Spokesmen and lobbyists for insurers, banks, and big business generally believe that key algorithms deserve the iron-clad protections of trade secrecy, so they can never be examined (let alone critiqued) by outsiders. But lawyers have faced down such stonewalling before, and will do so again.

Regulators can make data-centric firms more accountable. But first, they need to be aware of the many ways that business computation can go wrong. The data used may be inaccurate or inappropriate. Algorithmic modelling or analysis may be biased or incompetent. And the uses of algorithms remain opaque in many critical sectors – for example, we may not even know whether our employers are judging us according to secret formulae. Yet at each stage of algorithmic decision-making, simple legal reforms can bring basic protections (such as due process and anti-discrimination law) into the computational age.

Everyone knows how inaccurate credit reports can be – and how hard they are to correct. But credit histories are actually one of the most regulated areas of the data economy, with plenty of protection available for savvy consumers. Far more worrying is the shady world of thousands of largely unregulated data brokers who compile profiles of people without their knowledge or consent, and often without any right to review or correct the results. One casual slur against you could enter a random database without your knowledge – and then go on to populate hundreds of other digital dossiers purporting to report on your health status, finances, competence, or criminal record.

This new digital underworld can ruin reputations. One woman was falsely accused of being a meth dealer by a private data broker, and it took years for her to set the record straight – years during which landlords and banks denied her housing and credit. Government databases can be even worse – in the US, for example, tarring innocents with ‘Suspicious Activity Reports’ (SARs), or harbouring inaccurate arrest records. Both problems have beset unlucky citizens for years. The data gluttony of both state and market actors means that ersatz reports can spread quickly.

When false, damaging information can spread instantly between databases yet take months or years of legwork and advocacy to correct, the data architecture is defective by design. Future reputation systems must enable the reversal of stigma as quickly as they promote its spread. This is not an insoluble problem: the US Congress passed the Fair Credit Reporting Act in 1970 to govern the data-gathering practices of the credit bureaux. Extending and modernising its protections would build accountability and mechanisms of fairness and redress into data systems currently slapped together with only quick profits, not people or citizens, in mind.

Data-collection problems go beyond inaccuracy. Some data methods are simply too invasive to be permitted in a civilised society. Even if applicants are so desperate for a job that they would allow themselves to be videotaped in the bathroom as a condition of employment, privacy law ought to stop such bargains. Digital data collection can also cross a line. For example, a former worker at the international wire-transfer service Intermex claims that she was fired after she disabled an app that enabled the firm to track her location constantly.

Note that the employer might have business reasons beyond voyeurism for such tracking; it may find out that employees who are always home by 8pm tend to perform better the next day at work, and then gradually introduce incentives for, or even require, that behaviour among its entire workforce. However much knowledge of every moment of a worker’s life may add to the bottom line, a democratic society should resist it. There needs to be some division between work and non-work life.

Limits on data collection will frustrate big-data mavens. The CEO of ZestFinance has proudly stated that ‘all data is credit data’ – that is, predictive analytics can take virtually any scrap of information about a person, analyse whether it corresponds to a characteristic of known-to-be-creditworthy people, and extrapolate accordingly. Such data might include sexual orientation or political views. But even if we knew that supporters of George W Bush were more likely to be behind on their bills than John Kerry voters, is that really something we should trust our banks or credit scorers to know? Is it knowledge they should have? Marriage counselling may be treated as a signal of impending instability and lead to higher interest rates or lower credit limits – one US company, CompuCredit, has already settled (without admitting wrongdoing) a lawsuit for doing precisely that. But such intimate information shouldn’t be monetised. Too many big-data mavens aspire to analyse all capturable information – but when their fever dreams of a perfectly known world clash with basic values, they must yield.

While most privacy activists focus on the collection issue, the threat posed by reckless, bad, or discriminatory analysis may well be more potent. Consider a ‘likely employment success score’ that heavily weights an applicant’s race, zip code, or lack of present employment. Each of these pieces of data may be innocent, or even appropriate, in the right context. (For example, the firm Entelo tries to match minority applicants to firms that want more diversity.) But they should also bear scrutiny.

Consider racism first. There is a long and troubling history of discrimination against minorities. Extant employment discrimination laws already ban bias, and can result in hefty penalties. So, many advocates of algorithmic decision-making say, why worry about our new technology? Discrimination in any form – personal, technological, what have you – is already banned. This is naïve at best. Algorithmic decision-making processes collect personal and social data from a society with a discrimination problem. Society abounds with data that are often simple proxies for discrimination – zip or postal codes, for example.

Consider a variable that seems, on its face, less charged: months since last job. Such data could aid employers who favour workers quickly moving from job to job – or discriminate against those who needed time off to recover from an illness. Worried about the potentially unfair impact of such considerations, some jurisdictions have forbidden employers from posting ‘help wanted’ ads telling the unemployed not to apply. That is a commendable policy step – but whatever its merits, what teeth will it have if employers never see CVs excluded by an algorithm that blackballs those whose latest entry is more than a few months old? Big data can easily turn into a sophisticated tool for deepening already prevalent forms of unfair disadvantage.
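The mechanism is easy to sketch. A hypothetical screening rule of this kind – the six-month threshold, field names, and applicants are all invented for illustration – silently excludes anyone whose gap had a benign cause:

```python
# Sketch of a seemingly neutral screening rule: reject any CV whose
# last job ended more than N months ago. The threshold and all data
# here are hypothetical. Note the rule cannot distinguish job-hopping
# from recovery from illness or caregiving - it just measures the gap.

from datetime import date

MAX_GAP_MONTHS = 6

def months_since(last_job_end, today):
    """Whole months between the end of the last job and today."""
    return (today.year - last_job_end.year) * 12 + (today.month - last_job_end.month)

def passes_screen(cv, today=date(2016, 1, 1)):
    return months_since(cv["last_job_end"], today) <= MAX_GAP_MONTHS

applicants = [
    {"name": "steady", "last_job_end": date(2015, 12, 1)},
    {"name": "recovered_from_illness", "last_job_end": date(2015, 1, 1)},
]

shortlist = [a["name"] for a in applicants if passes_screen(a)]
print(shortlist)   # ['steady'] - the second CV never reaches a human
```

Because the excluded CV is discarded before any person sees it, the policy forbidding ‘unemployed need not apply’ ads has nothing visible to bite on.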

Law enforcers of the future could find it difficult to learn all the variables that go into credit and employment decisions. Protected by trade secrecy, many algorithms remain impenetrable to outside observers. When they try to unveil them, litigants can face a Catch-22. Legitimately concerned to stop ‘fishing expeditions’, courts are likely to grant discovery requests only if a plaintiff has accumulated some quantity of evidence of discrimination. But if the key entity making a decision was a faceless ‘black boxed’ algorithm, what’s the basis for an initial suspicion of discrimination?

Indeed, beginning with the Equal Credit Opportunity Act (1974), US regulators have often encouraged businesses to use algorithms to make decisions. Regulators want to avoid the irrational or subconscious biases of human decision-makers – but of course human decision-makers devised the algorithms, inflected the data, and influenced their analysis. No ‘code layer’ can create a ‘plug and play’ level playing field. Policy, human judgment, and law will always be needed. Algorithms will never offer an escape from society.

Governments should ensure that the algorithms they promote serve, rather than defeat, their stated purposes. The subprime crisis offers a good example of past legal failure, and an innovative solution to it. Rating agencies – Moody’s and S&P, for example – used algorithmic assessments of creditworthiness to rubber-stamp dubious mortgage-backed securities (MBSs) as AAA, the highest rating. Those ersatz imprimaturs in turn drew a flood of money into subprime loans. Critics allege that the agencies changed their rating methods in order to attract more business from those selling MBSs. Triple-A ratings after the method change may have meant something very different from prior ones – but many investors did not know about the switch.

To address that issue, the Dodd-Frank Act requires rating agencies to disclose material changes in their methods. Such openness helps those involved in the markets understand the ‘guts’ of a AAA rating, rather than mindlessly presume that it always did and always will assure a certain benchmark of reliability. As any investor will tell you, information is power, and credit ratings are not necessarily information – just a shorthand.

While credit ratings assess the value of securities, algorithmic scores assess people themselves, along any number of dimensions, including (but by no means limited to) creditworthiness. As the 2014 World Privacy Forum report ‘The Scoring of America’ revealed, there are thousands of such scores. When an important decision-maker decides to use one, he or she owes it to the people ranked and rated to explain exactly what data were used, how they were analysed, and how potential mistakes, biases, or violations of law can be identified, corrected, or challenged. In areas ranging from banking and employment to housing and insurance, algorithms may well be kingmakers, deciding who gets hired or fired, who gets a raise and who is demoted, who gets a 5 per cent or a 15 per cent interest rate. People need to be able to understand how they work – or don’t work.

The growing industry of ‘predictive analytics’ will object to this proposal, claiming that its ways of ranking and rating persons deserve absolute trade secrecy protection. Such intellectual property is well-protected under current law. However, the government can condition funding on the use or disclosure of the data and methods used by its contractors. The government’s power to use its leverage as a purchaser is enormous, and it could deny contracts to companies that, say, used secret algorithms to make employment decisions, or based credit decisions on objectionable data.

In the US, it is time for the federal budget to reward the creation of accountable algorithmic decision-making – rather than simply paying for whatever tools its contractors come up with. We wouldn’t tolerate parks studded with listening equipment that recorded every stroller’s conversation, or that refused bathroom entry to anyone designated a ‘vandalism risk’ by secret software. We should have similar expectations of privacy and fair treatment in the thousands of algorithmic systems that government directly or indirectly funds each year.

Some clinical trial recruiters have discovered that people who own minivans, have no young children, and subscribe to many cable TV channels are more likely to be obese. At least in their databases, and perhaps in others, minivan-driving, childless, cable-lovers are suddenly transmuted into a new group – the ‘likelier obese’ – and that inference is a new piece of data created about them.

An inference like this may not be worth much on its own. But once people are so identified, it could easily be combined and recombined with other lists – say, of plus-sized shoppers, or frequent buyers of fast food – that solidify the inference. A new algorithm from Facebook instantly classifies individuals in photographs based on body type or posture. The holy grail of algorithmic reputation is the most complete possible database of each individual, unifying credit, telecom, location, retail and dozens of other data streams into a digital doppelganger.

However certain they may be about our height, or weight, or health status, it suits data gatherers to keep the classifications murky. A person could, in principle, launch a defamation lawsuit against a data broker that falsely asserted the individual concerned was diabetic. But if the broker instead chooses a fuzzier classification, such as ‘member of a diabetic-concerned household’, it looks a lot more like an opinion than a fact to courts. Opinions are much harder to prove defamatory – how might you demonstrate beyond a doubt that your household is not in some way ‘diabetic-concerned’? But the softer classification may lead to exactly the same disadvantageous outcomes as the harder, more factual one.

Similar arbitrage strategies may attract other businesses. For instance, if an employer tells you he is not hiring you because you’re a diabetic, that’s clearly illegal. But what if there is some euphemistic terminology that scores your ‘robustness’ as an employee? Even if the score is based in part on health-related information, that may be near-impossible to prove because candidates almost never know what goes into an employer’s decision not to interview them or not to give them a job. An employer may even claim not to know what’s going into the score. Indeed, at some point in the hiring or evaluation process, applicants are likely to encounter managers or human resources staff who in fact do not know what constituted the ‘robustness’ rating. When so much anti-discrimination law requires plaintiffs to prove an intent to use forbidden classifications, ignorance may be bliss.

It will be much easier to regulate these troubling possibilities before they become widespread, endemic business practices. The Equal Employment Opportunity Commission (EEOC) is considering disputes stemming from employer personality tests featuring questions that seem to be looking for patterns of thought connected to mental illnesses, but unrelated to bona fide occupational qualifications or performance. Those investigations should continue, and extend to a growing class of algorithmic assessments of past or likely performance. In some cases, mere disclosure and analysis of algorithmic assessments is not enough to make them fair. Rather, their use may need to be forbidden in important contexts, ranging from employment to housing to credit to education.

When the problems with algorithmic decision-making come to light, big firms tend to play a game of musical expertise. Lawyers say, and are told, they don’t understand the code. Coders say, and are told, they don’t understand the law. Economists, sociologists, and ethicists hear variations on both stonewalling stances.

In truth, it took a combination of computational, legal, and social scientific skills to unearth each of the examples discussed above – troubling collection, bad or biased analysis, and discriminatory use. Collaboration among experts in different fields is likely to yield even more important work. For example, the law academics Ryan Calo, of the University of Washington, and James Grimmelmann, of the University of Maryland, along with other ethicists, have offered frameworks for assessing algorithmic manipulation of content and persons. Grounded in well-established empirical social science methods, their models can and should inform the regulation of firms and governments using algorithms.

Empiricists may be frustrated by the ‘black box’ nature of algorithmic decision-making; they can work with legal scholars and activists to open up certain aspects of it (via freedom of information and fair data practices). Journalists, too, have been teaming up with computer programmers and social scientists to expose new privacy-violating technologies of data collection, analysis, and use – and to push regulators to crack down on the worst offenders.

Researchers are going beyond the analysis of extant data, joining coalitions of watchdogs, archivists, open-data activists, and public-interest attorneys to assure a more balanced set of ‘raw materials’ for analysis, synthesis, and critique. Social scientists and others must commit to the vital, long-term project of ensuring that algorithms produce fair and relevant documentation; otherwise, states, banks, insurance companies, and other big, powerful actors will make and own ever more inaccessible data about society and people. Algorithmic accountability is a big-tent project, requiring the skills of theorists and practitioners, lawyers, social scientists, journalists, and others. It’s an urgent, global cause with committed and mobilised experts looking for support.

The world is full of algorithmically driven decisions. One errant or discriminatory piece of information can wreck someone’s employment or credit prospects. It is vital that citizens be empowered to see and regulate the digital dossiers of business giants and government agencies. Even if one believes that no information should be ‘deleted’ – that every slip and mistake anyone makes should be on a permanent record for ever – that still leaves important decisions to be made about the processing of the data. Algorithms can be made more accountable, respecting rights of fairness and dignity for which generations have fought. The challenge is not technical, but political, and the first step is law that empowers people to see and challenge what the algorithms are saying about us.