People in the tech world love to say that everybody should learn to code — and there are a lot of good reasons to pick up the trade: earn a higher salary, understand the implications of A.I. and cybersecurity, protect yourself from automation. While wages elsewhere have stagnated, high-paying jobs in software have grown consistently (for now). Despite all the incentives drawing people into programming and other technology fields, we actually need more people willing to break out of them.

Don’t get me wrong: I teach computer science and computer programming. I believe wholeheartedly in the power of technology to improve lives. My personal experience with computer programming has served me well with job security and financial stability, which I’m grateful for. I think the increase in people learning to code (and other STEM fields) is good; I just don’t think it should be a one-way street. With computers as our tool — and perhaps even guided by good intentions — we have done remarkable damage to our communities. The technology industry has driven advancements faster than society was prepared for, and soon, the bill will be due.

The ideas of “creative destruction” and “disruption” are enshrined as de facto good among the investor and CEO class of Silicon Valley — never mind the consequences. Change is inevitable, they say. Adapt or die. The gig economy is a prime example, as former Uber engineer Susan Fowler put it in a piece for Vanity Fair:

A lunchtime conversation with several friends turned to the subject of the gig economy. We began to enumerate the potential causes of worker displacement — things like artificial intelligence and robots, which are fast becoming a reality, expanding the purview of companies such as Google and Amazon. “The displacement is happening right under our noses,” said a woman sitting next to me, another former engineer. “Not in the future — it’s happening now.” […] The risk, we agreed, is that the gig economy will become the only economy, swallowing up entire groups of employees who hold full-time jobs, and that it will, eventually, displace us all. The bigger risk, however, is that the only people who understand the looming threat are the ones enabling it.

Driving for Uber is about as lucrative and fulfilling as working in fast food; researchers have estimated that around 41–54 percent of Uber drivers make less than minimum wage, and 65 percent of drivers quit within six months of starting. If drivers were paid as employees rather than independent contractors, their wage would be illegal. As independent contractors, drivers sacrifice other legal protections as well. They don’t qualify for paid vacation, they shoulder a higher tax burden than employees do, they lose the right to join a union, they are typically ineligible for unemployment insurance and workers’ compensation for injury or disability, and they are not offered benefits such as health insurance.

And Uber is hardly the only gig-economy platform profiteering from the limited legal protections of 1099 contractors. In her book Gigged, Sarah Kessler unearthed rampant abuse of independent contractors across gig platforms. Even big names you might not expect, such as Google and Facebook, employ many less-technical workers as independent contractors, creating a two-tiered workforce. The world-famous perks of working for a world-class technology firm — free lunch, daycare, office happy hours — are available only to full-time employees.

Skirting the lines of legality is hardly news in the history of labor, but Silicon Valley is imbued with a unique kind of techno-libertarian hubris: a sense that the law cannot possibly keep up with technology, and that legal efforts to rein technology in only stand in the way of the inevitability of progress. When asked about the legality of their businesses (“Is it legal for regular people to basically operate an unregulated hotel out of their home?”), I’ve heard more than one would-be world changer reply that the company is “pre-legal.” That’s code for “we can make a lot of money before the law catches up.” For some of the more arrogant, there is the additional implication: “and when the law finally catches up, we’ll be ‘too big to fail.’”

This disdain for the law is coupled with the deluded notion that every technology company is “changing the world,” regardless of the banality of its product. This is the perfect storm that gave us the likes of Travis Kalanick and Elizabeth Holmes, both celebrated as tech heroes before it all came crashing down. The sum total of Kalanick’s positive contribution to the world is that it’s easier to hail a cab. Holmes, we now know, contributed literally nothing to society in the process of becoming a billionaire.

Paradoxically, the same people who say change is inevitable cast themselves as geniuses building things only they could imagine. Perhaps change is inevitable, but the specific changes we get are the direct consequence of human initiative. When pressed, they offer a weak dodge: their responsibility is “to the shareholders,” not to society.

To address the human cost and correct course, we need people who understand the technology world to do more than just build more technology. The ability of people like Holmes to pull the wool over so many people’s eyes speaks to the problem. So does our Congress’s embarrassing questioning of Mark Zuckerberg earlier this year.

As technology plays a more integral role in all our lives, we desperately need the technologically savvy to bring their understanding to the institutions and communities that rely on their creations. From the pernicious exploitation of gig-economy workers to high-brow fields like journalism, politics, law, and medicine — technology is reshaping our world with critically faulty oversight. For example, bias in machine learning has impacted thousands of lives via risk assessment tools that inform legal procedures, such as sentencing and bail setting. For the people using this software, the bias is hidden behind a veneer of “algorithmic objectivity,” but machine learning experts know better.

Many in the technology world view themselves as bestowing advanced solutions upon the plebeians of these “lesser” fields — casting the plebs as mere uninformed consumers of technology. Such technologists paternalistically believe they can “revolutionize” these fields by bringing the silly Luddites new software “solutions” without stopping to ask whether they’re actually solving a problem or just creating more headaches, integration problems, and confusion. We are drowning in technology that was created for its own sake, then aggressively marketed, lobbied for, and otherwise pushed onto reluctant consumers.

Most of our politicians and legal professionals understand critical topics like encryption and machine learning the same way a cat understands its owner’s midlife crisis: They don’t know, they don’t care, and they wish we’d just shut up and feed them already. Many fields suffer from similar problems. Doctors complain constantly about frustrating electronic medical records, which have become a legal requirement for many hospitals despite horrible design that costs doctors time while contributing little to quality of care.

In my view, these problems can be alleviated by technologists willing to abandon the safety of their craft to become congresspeople, journalists, lawyers, doctors, and more. Experienced technologists need to rethink their next venture. Don’t start a law-tech company; start a technology-focused law firm. Don’t run the technology department for a political campaign; run a political campaign centered on technology. Don’t create technology for educators; educate people about technology.

Silicon Valley and the technology industry at large have languished under a technology-first worldview for too long. Those of us with a deeper understanding of technology need to step outside our bubbles, figure out what people really need, and only then think about how technology can be deployed to those ends.