2018 was a year of reckoning for tech companies, their employees, and consumers. Both Facebook and Google were caught misusing people’s personal information–landing their leaders in front of the Senate. There was public outcry over Amazon licensing biased facial recognition software to ICE and police departments. Tech employees got fed up with their companies, sparking protests across Silicon Valley. With so little oversight from regulators and continued poor judgment on the part of big companies, both consumers and makers of tech were asking: What does it mean to develop technology in an ethical way?

As for folks who are out of school: Take an online class. Or just read the news. As the North Carolina State University study on codes of ethics pointed out, developers who were more informed about current events were more likely to make responsible decisions about how to develop technology than those without that knowledge.

Don't blame users

One of the biggest questions about ethical technology has to do with who really has control over the ways users behave, especially when it's to their detriment. Case in point: People spend too much time on their phones, but their phones were designed to be addictive. Same goes for platforms like YouTube, whose autoplay feature is designed to pull you down a rabbit hole of stupid videos, and before you know it your afternoon is gone. Is that your fault or YouTube's?

After years of blaming users for their weak wills, tech companies are finally taking some responsibility for their designs, but only with baby steps. Apple and Google released features that claim to be in the service of users' digital well-being, such as screen time monitors that show users how much time they're spending on their phones, more ways to limit notifications, like turning them off completely around bedtime, and app timers that let users cap how much time they spend in certain apps.

Unfortunately, these moves have done little to address the underlying problems that cause digital addiction in the first place. For instance, YouTube released a new tool to let you know how much time you're spending watching YouTube–but the company hasn't made it any easier to turn off the autoplay feature in your account settings. That's the kind of real design change that would be in users' best interests, and companies can't afford to keep avoiding it. When building a product, designers should make the default settings the ones that are best for users. Firefox is a good example.
After completing more usability tests, the browser will start blocking third-party trackers, which collect your data as you surf the internet, by default.

Do ask questions, and a lot of them

It's obviously difficult to know when a product will have unintended consequences. But a handy tool created by the Seattle-based design firm Artefact can help you make sure you're at least asking the right questions when you design a new product.

Do turn users into owners

Perhaps the most extreme way to ensure that you're developing technology ethically is to make your users the owners of your platform–not venture capitalists, not shareholders. That's exactly what the subscription-based social media startup Are.na did in 2018. While the platform has been around for many years, its subscriber numbers took off in 2018, with all of its revenue coming from users' subscriptions–not from advertisers. Earlier this year, the company also launched a crowdfunding campaign that allowed anyone to invest in Are.na. Ultimately, Are.na raised $270,000 from nearly 900 investors, the majority of whom were people already using the platform. It's a radical form of ownership in our capitalist age, but one that other companies could consider in different capacities. What would it mean if your users were part-owners of your company? How would that change your decision-making?