Many of us are grateful to Silicon Valley for the convenience it's brought to our lives, whether shopping, looking up information or communicating with other human beings. But as tech companies become corporate behemoths influencing every aspect of modern life, many realize it's time to take action. Ramesh Srinivasan, UCLA professor and author of Beyond the Valley, recommends books for a more rounded understanding of Silicon Valley.

You’ve called your book Beyond the Valley: how much is Silicon Valley in need of reform?

Silicon Valley is in significant need of reform, partly because what Silicon Valley is is far more than meets the eye. In general, Silicon Valley has been seen as acceptable—no matter what its actions are—because it’s simply the place from which technologies, which are largely seen as neutral, emerge. I’m not even necessarily trying to blame Silicon Valley, but more generally an approach that builds technologies for “everyone” without bringing them to the table or really even deeply considering the values, realities, and experiences of users. Beyond the Valley is not merely about Silicon Valley, but inclusive of it as part of a larger problem: the asymmetric relationships between places where technologies are produced (Silicon Valley, Seattle, China), and the billions of users impacted by their hidden, largely unaccountable decisions.

Today, the internet represents the mechanism by which every aspect of human experience is being communicated, defined, expressed and mediated. Within these technologies, there are hidden forces at work that end up opening up opportunities for the few and closing opportunities for everybody else. They have become the mechanisms by which society is being engineered. Silicon Valley—and also Seattle, Shanghai and Beijing, these very concentrated places in the world—has become problematic because its technologies have intruded onto all sorts of other domains that people thought of traditionally as non-digital or non-technological: work, politics, journalism, insurance, banking loans, the criminal justice system or lack thereof, to name just a few. These are all technological.

So Beyond the Valley, in one sense, is a reference to Silicon Valley, but more generally it’s a metaphor that’s attempting to open up our understanding and imagination of what technology actually can be—outside of the idea of a valley as some kind of concentrated zone, where the few build technologies that define the lives of the many.

My goal is not to find fault with the engineers and executives within Silicon Valley. I’m a former engineer myself and as engineers we are not trained, either in our educational system or our professional roles, to deeply understand economics, politics, cultures and the world. We’re trained to optimize systems to be as efficient and fast as possible and to “personalize” information for people, but we’re not informed by a more environmental, people-centric understanding of users. So that’s why, at the minimum, the reforms Silicon Valley needs to take are ones of true collaboration and giving up some power.

How likely is that to happen? I do feel there’s been a bit of a sea change recently, with more and more books coming out saying, ‘Wait a second, what’s going on?’ There is a bit more awareness that Google (say) isn’t necessarily a force for good.

A lot of books start and end with critique of Silicon Valley, and I appreciate those books. They don’t necessarily propose a comprehensive set of possibilities and actions that we can all take together and move forward. Beyond the Valley is focused on doing so, by being informed by stakeholders in the political realm, the economic realm and, very importantly, in the global realm. That’s why I have put out several op-eds, from Wired to the LA Times to a forthcoming Guardian piece, that share practical proposals regarding how we can arrive at a more balanced digital world.

There are many steps we can take. First of all, we have to carefully investigate monopolistic forms of behaviour and examine whether they are occurring and, if so, in which cases and at which companies. That’s something we can do right away. Senator Bernie Sanders, whose campaign I represent as a national surrogate, as well as Senator Warren, have both given voice to that issue in the United States. In Europe, there’s been the passage of the GDPR, which starts to deal with some of these issues.

In terms of monopolistic practices, isn’t part of the problem that we all want to be on the same platform in order to be able to communicate with each other?

Right, but that platform should not be taking advantage of one set of activities that we do on it to gain an unfair advantage relative to another set of activities. For example, Amazon is not merely the platform for e-commerce; it’s also the platform for cloud-based computing. So if Amazon’s deep knowledge and control—and this also applies to Alibaba, actually—over the cloud-based computing environment allows them to have an unfair advantage in the e-commerce marketplace, where they suggest certain products that benefit their own hidden agendas, then that’s monopolistic, horizontal integration. We can also see aspects of vertical integration from a monopolistic perspective, because what Amazon has attempted to do is control every aspect of the supply chain: logistics, content/supply warehousing, the delivery platform, efficiency, the marketplace itself. So there are aspects of horizontal and vertical integration that would need to be explored.

“As engineers we are not trained . . . to deeply understand economics, politics, cultures and the world. We’re trained to optimize systems”

Part two—and this is very important—technologies that impact particular stakeholders need to be designed collaboratively with those stakeholders. If we’re building a predictive policing system that ends up being racist and takes those who are most criminalized and criminalizes them further, there are only two alternatives that are appropriate, to my mind. One is banning or suspending that system because we should not be building systems based on existing inequalities. Or—and this is my main point—these systems should be designed collaboratively with those communities. That doesn’t mean those people are engineers, but they can provide very valuable input and must have power over how that technology is designed and how those technologies are audited. So that’s a second key point and Cathy O’Neil makes that point very persuasively in her book, which we’ll discuss in a minute.

Another example that I bring up in my book is that we’ve seen these spectacular failures with Facebook in relation to the genocide in Myanmar. Facebook mischaracterized the Burmese military, which was committing the genocide, as the legitimate source, and the Rohingya, who were having the genocide committed against them, as the terrorists. As a result, the experiences and stories of the Rohingya were invisible both within the country and around the world. We were spun a narrative that was false.

What Facebook could have done is partner with people who actually understand Myanmar much more deeply and figure out the right way to balance the situation. Maybe they could even have considered working side-by-side with Southeast Asian human rights organizations. So that’s a global example. A company that’s located in Silicon Valley—whose engineers, executives and investors are mostly white dudes and maybe a few Asian dudes, but not too many women and not too many women of color—is not going to understand the specifics of Myanmar’s history and current politics.

Thirdly, the modalities by which our worlds are being technologized—it’s not just about Facebook or Amazon or Google, but everything is being computed and expressed through digital technology in the form of ‘data’—are responsible for greater and greater economic inequalities. The gig economy is an example I provide in the book. Our personal data being worth potentially trillions of dollars in valuation across some of the big companies is another example. The automation of work and labour is another example. All of these are based on a model that private corporations are beholden to, which is to maximize profits or valuation (a company like Uber doesn’t actually get a lot of profits but their valuation is very high) no matter what the cost is to working people. They do provide value and efficiency to consumers in certain cases, but the overall macroeconomic effects are profoundly negative.

So my third set of proposals are about figuring out ways to balance the economic situation moving forward. There are a few things we can do. People who are losing their jobs in relation to automation or the gig economy should be reskilled or provided with funds through this transition and trained up for new dignified, living wage, unionized, and meaningful jobs. That’s a point that Bernie Sanders has made.


A second set of ideas that are popular right now look at paying people for their data, so data micropayments. Jaron Lanier most publicly makes that point. I have some issues with that: how can one, in a snapshot, quantify the value of one’s data? Is my data worth more now that I have written this book than before? Should we really be invading people’s privacy based on ad hoc assessments of what they are ‘worth’? Nonetheless, it’s a well-intentioned idea. The other big idea that is hot these days is universal basic income, though the question is, at what cost?

Most importantly, as all this money is made off of our data, we need to develop economic systems that combat rather than perpetuate economic inequalities. As an example, in 2012, Instagram was sold for $1 billion with 12 employees, within three months of Kodak going bankrupt with 35,000 employees, even though Kodak had way more patents in digital imaging than Instagram did. In the United States, the youngest generation is the first in the history of this country to make less money, when you account for inflation, than their parents did. Also, our life expectancy is decreasing in this country. That’s not the fault of technology per se, but technologies are catalysts of creating and amplifying that inequality when they’re written by and for private, corporate, self-serving interests. That’s why we need a mechanism to balance all these things.

Really what you’re saying is that capitalism, broadly defined, isn’t working very well for society at the moment.

This is not free-market capitalism that we are seeing now; it’s oligarchic capitalism. This current stage of what some call ‘late capitalism’ or ‘neoliberal capitalism’ is harming our planet, it’s harming working people who represent 99 plus per cent of the population (and the vast majority of internet users), and is based on extraction rather than providing value to everyone involved. And when I say extraction, I mean the extraction of data, but I also mean things like the extraction of minerals—like coltan from mines in the Congo or lithium from Bolivia—or the extraction of labour in assembly lines or Amazon fulfilment centers or even our data as our labour.

But should the impulse to deal with a lot of these things be coming from the government? Is that where the solution is going to come from, or is it up to us?

I think it’s all hands on deck. We need to transform our regulatory system and governments to get going on these issues. However, I should note that right now, in the United States, 48 out of 50 state attorneys general are investigating Facebook for antitrust violations, including many Republican attorneys general. Vox did a survey a few months ago that showed that Americans across the board—politically, generationally, racially, by gender etc—all support significant regulation. So, yes, it’s partly a political/regulatory issue.

“This is not free-market capitalism that we are seeing now; it’s oligarchic capitalism”

But it’s also about incentivizing community-based, small business-based alternatives. This is something that I’ve actually helped in proposing policy for, for Senator Sanders’s campaign. We need to incentivize other forms of entrepreneurship and also nonprofits, cooperatives and so on.

It’s also about writers and journalists calling attention to these themes and not being simply negative but pragmatic about strategies and movements.

We also need to pay attention to all the stuff that’s out there that actually represents viable alternatives. They’re small scale, like community networks, and I write about them a lot in the book.

In your book, you speak to a lot of people: was that part of the project, to get a global sense of critiques of big tech, as well as possible solutions?

I really enjoyed writing this book. It’s readable for anyone and I used stories and examples to show how incredibly important it is to ensure that our technology serves all our interests. I wanted to reach a wide, global audience. I spoke with very prominent public figures as well as people who are just very interesting characters shaping technology innovation for tomorrow.

The book was a combination of three things. One, my conversations with people from Eric Holder, the former United States Attorney General, to Elizabeth Warren to Noam Chomsky to African technology innovators to Latin American indigenous technology activists. It was so much fun to get those voices into conversation with one another. Second was fieldwork that I myself did, going to different parts of the world doing interviews, doing ethnographies, uncovering examples and stories and insights myself. Third, and this is so important, there is such good journalism out there now on these themes: the Guardian, the Intercept, ProPublica etc. I really try to bring those different journalists and their important work and their investigative work into conversation in the book. So I tried to bridge these three layers.

I also wanted to write a broad book. People are interested in economic issues around the gig economy or automation: there’s a few chapters in the book focused on that. People are interested in African tech and AI issues: there’s a few chapters based on that. People are interested in antitrust and monopoly issues: there’s material about that. People are interested in understanding blockchain in a way that is more people-based, organization-based and community-based, this whole other model, so I cover that. So it’s a really expansive book. I wrote the book in a broad way, with depth in each chapter, to really show how there’s a multi-dimensional impact on every aspect of our lives. Right now it’s encoded into dominant, overpowering technologies, but I wanted to give examples of how we can move past some of these conundrums.

1. Team Human by Douglas Rushkoff

Let’s talk about some of the other Silicon Valley books that you’ve recommended. The first on your list is Team Human by Doug Rushkoff. This is a real manifesto/rallying cry, I gather from reviews of the book. Could you explain what he’s arguing and why it’s important?

Doug Rushkoff has been a friend and wrote the foreword to my book. Team Human is a pretty well known—and growing—podcast where he has interviews with people. The book and his podcasts are focused on one major theme: reminding us—and this is his motto, so I’m just going to quote it—that “being human is a team sport.” So let’s now apply that logic to thinking about technology, which is what his book does.

What we find is that technologies are designed in an individuated manner. So say you and I were very similar demographically, politically, etc., and we both logged onto YouTube or we went on Twitter or Facebook or Instagram. We could be presented with completely different worlds, which may have very little intersection with one another. Why is that? It’s because the algorithmic systems, the computational systems, are built on the logic of feeding, controlling (or at least shaping and influencing) the individual user. The effect of that at scale is massive amounts of polarization and actually us forgetting that our lives are deeply interconnected, on the level of family, of community, of neighbourhood, of nation and society and of the world, which faces common challenges like climate change.

“We have to carefully investigate monopolistic forms of behaviour”

Doug Rushkoff makes the point that this idea of collectivity and communality was an early part of the internet itself. The early internet was made up of a lot of hippies and get-back-to-the-earth, community-oriented people in the Usenet newsgroup world, but what the internet has turned into is a set of fragmented, individualized, at times hyper-inflammatory spaces. Rushkoff is saying, ‘Let’s get back to the logic of thinking of ourselves as interconnected and as a team. Only as a team can we create technology that helps us come together and only as a team can we solve the major challenges that we face and become more humane as human beings.’

That’s what that book is about, and he also makes the point that communality is a fundamental aspect of nature itself. Trees communicate through their own underground internet systems, mycelial and rhizomatic networks. I love that he makes those kinds of points. So I appreciate his work and this book. I’ve been a guest on his podcast as well.

2. Sorting Things Out: Classification and Its Consequences by Geoffrey Bowker & Susan Leigh Star

The next book you’ve recommended is from a while back, it’s called Sorting Things Out (1999). Why is this one of the books you’ve chosen as helpful for our understanding of tech and Silicon Valley?

Sorting Things Out by Geoff Bowker and Susan Leigh Star was an extremely influential book on me and influences my work to this day, including my work on technology. Technologies almost always rely on systems of classification and organization. One might think of Apartheid as a political system, but it was also in some sense a technological system, in that people were classified based on who they were racially within that society. They were provided with opportunities—or not—based on that. You can actually experience that if you go into the Apartheid Museum in Johannesburg. It’s incredible.

What Bowker and Star argue is that how knowledge and information is organized and classified actually ends up defining what counts as knowledge and what we understand to be true or not true. They make the point with relation to library classification systems. The Dewey Decimal System, for example, was largely composed of Christian subject headings, because it’s a technology that was created by a Christian man within a Christian society. It therefore reflected the biases of that man and his knowledge.

So that’s very important, because a lot of technological systems rely on databases, and how databases classify information has everything to do with how that information is treated and what’s considered relevant and what isn’t. We build technological systems based on who we are. Feminists have also pointed out that women’s issues have been left out of a lot of database systems.

This is an important book for forming an intellectual and critical blueprint for the organization of knowledge itself. That’s a theme that thinkers as renowned as Michel Foucault have written about for decades. It’s about a philosophy of knowledge and what counts as knowledge. A lot of diverse cultures and non-western communities don’t define knowledge in a static manner the way that the western Enlightenment did and the way that currently informs our tech world. My first book, Whose Global Village?, was a full-on analysis of some of these themes and that book, Sorting Things Out, continues to inform me. It’s a really good book. I really recommend it.

3. To Save Everything, Click Here: The Folly of Technological Solutionism by Evgeny Morozov

Evgeny Morozov is the author of your next book. The Guardian describes him as the “most penetrating and brilliantly sardonic critic of techno-utopianism”. What does his book, To Save Everything, Click Here, bring to the picture?

Evgeny Morozov is an important critical, at times cynical, figure. He identifies the problems of naive and delusional social engineering that come out of tech cultures and maybe even science cultures. He asks us to remember, much like Rushkoff, who we are as a society and what questions we should delegate to technologies which take complex human issues and problems and model them into quantifiable, comparable and deterministic queries and questions. This book is important because even though it’s several years old, it identifies a key intellectual flaw that exists in tech bubbles, which is the idea that everything can be solved by a few dudes in Silicon Valley, because they’re just going to build better tech. That attitude in Silicon Valley reflects a deep cynicism about the public sphere and about how democracy really functions, which is messy. And it really ignores, reduces and devalues the voices of people who are outside the tech world.

It’s really a trailblazing book in pointing out the hubris or at least ignorance and very flawed way of designing, dominating, commanding and engineering our world that comes out of ideological circles in tech elite bubbles. I really appreciate his analysis of that issue.

You’re based in California. Is that part of your world on a day-to-day basis, that tech bubble?

I wouldn’t say that in my day-to-day job now I have much interaction, but I am from Silicon Valley. I went to Stanford in the late 90s and people who lived in my dorms founded PayPal with Elon Musk and Peter Thiel. I have undergrad and graduate degrees in engineering, the latter from the MIT Media Lab where I was side-by-side with a number of major figures in tech today. But for the last 20 years or so, I’ve observed something very insular in that world, something very culturally and globally ignorant. It falsely sees itself as agnostic when it’s deeply influential on people’s lives.

“Right now, in the United States, 48 out of 50 state attorneys general are investigating Facebook for antitrust violations”

I’m often speaking at big conferences with vice presidents of Facebook on the panel with me. I am pretty conciliatory and I try to be charming and nice to everybody. I don’t single people out as individuals; I try to find spaces where we can change and learn from one another. As you can probably tell, I’m extremely personally passionate about this. I didn’t write this book for my career. I wrote it because I believe these things. What I’m most passionate about is the objectification and the erasure of the brilliance and beauty of billions of people across the world and especially the Global South. That’s a big theme that concerns me a lot. They are being completely ignored and seen merely as passive users of our stuff, or even that our technology will liberate them. It’s patronizing at best and colonial at worst. So I’ve always been a political activist. I’ve also travelled all over the world, long before I started writing about this in relation to tech. I’ve been in probably about 80 countries, so I’ve seen these absurdities first hand.

Yes, the first time I became aware of the power of Facebook, early on, was when I noticed how much my Ecuadorean babysitter, who didn’t speak any English, was using it. She’d play games, she’d communicate with family, I just couldn’t believe that Facebook was so big in Ecuador.

The vast majority of Google and Facebook users are not in Europe, they’re not in North America. They’re across the continents of the Global South. They’re in Asia, Africa and South America. So it’s quite telling that these companies are monetizing the lives and data of billions of people and trying to lock them into keeping their attention without any responsibility or collaboration on any significant level. That’s why some people refer to this as ‘digital colonialism.’ I try not to use that language because it’s not quite so simple as mapping colonialism onto the digital world, and it doesn’t give us a pathway out. Nonetheless, it’s a common term. Shoshana Zuboff, in her book The Age of Surveillance Capitalism, uses some of this kind of language as well. It’s an appropriate term. It’s an older term in a way, she’s been writing about this for 20 years or so.

4. Digital Cosmopolitans: Why We Think the Internet Connects Us, Why It Doesn't, and How to Rewire It by Ethan Zuckerman

We’ve got two more books to go. Firstly, tell me about Rewire by Ethan Zuckerman and why it’s significant.

This is a manifesto from Ethan Zuckerman, a great colleague of mine, calling for a global communication system, where people really have power to tell their own stories and do their own journalism based on who and where they are. That was really his and Rebecca MacKinnon’s experiment with Global Voices, which they created. It was an attempt to bridge divides not in a top-down way, but with a collection of bottom-up voices put in communication with one another. Ethan Zuckerman has worked with bloggers and journalists and people interested in civic life all over the world. As he explains in this book, he’s interested in bringing people together in a global conversation that isn’t erasing and flattening people, but where each can stand on their own.

Can you give me an example?

Global Voices is the main example. It’s a network of different bloggers from all over the world, communicating with one another via a website where all their stories can be shared. People also work to translate content with one another. That’s a very important point for Ethan because he thinks that linguistic differences can create gaps in communication, which is true. We should be supporting one another’s languages. Languages are so important and they speak to our diversity. A lot of what Global Voices attempts to do is support local journalists, but also create bridges between them.

Isn’t one of the challenges, in general, that the tech giants are so dominant that they have the resources to do everything better? No one else can really compete.

You’re absolutely right. That’s why we have to intervene. We need regulatory measures supporting a much more competitive, open marketplace, and we need to support non-profit initiatives via incentives or subsidies. Otherwise, to the winners go the spoils. You might know that book, Winners Take All by Anand Giridharadas. He’s critiquing philanthropy, but I completely agree: Global Voices, there’s no way it could compete with a Facebook unless it was provided an opportunity to do so. That can’t happen at the moment because the economies of scale are so massive. That’s why regulatory measures and incentivization need to be part of the solution. Or Facebook should be subsidizing them. We have to figure out ways to do this.

“In the United States, the youngest generation is the first in the history of this country to make less money, when you account for inflation, than their parents did.”

The tax issue is another huge problem. Amazon did not pay any taxes last year. They received a rebate. The wealthiest man in the history of the world, his company received a rebate. Examples like this represent the norm rather than the exception in the current, corrupt system. They provide us with opportunities to push back in new, progressive directions.

Another point I often make is that the internet was publicly funded. This is a common theme we see in the US: we socialize the costs and privatize the profits. We all pay for our highways but then Amazon runs its trucks on them and doesn’t pay anything. It’s so bizarre. It’s the same with pharmaceuticals. The National Institutes of Health and the National Science Foundation fund all this drug research, but then it gets patented and privatized by a big pharmaceutical company. We can’t let that happen with the internet. It’s already happening and we’ve got to stop it.

5. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O'Neil

Finally on your list of Silicon Valley books is Weapons of Math Destruction by Cathy O’Neil. The subtitle is, ‘How Big Data Increases Inequality and Threatens Democracy.’ How does it do that?

This book is a really fantastic analysis of how quantification, the collection of data, the modelling around data, the predictions made by using data, the algorithmic and quantifiable ways of predicting behaviour based on data, are all built by elites for elites and end up, quite frankly, screwing over everybody else.

Cathy gives great examples, from predictive policing systems to insurance systems, of how biases are baked into the technologies that are supposedly serving us. For example, an insurance algorithm—and this is actually illegal—will take poorer people and consider them a greater risk and therefore make it impossible for them to afford health or life insurance. She shows how bank loans are contingent on credit, but these algorithms are taking poor people and seeing them as less creditworthy. I understand that, but the question is, ‘What kind of world should we be living in? What do we think about credit or insurance based on those principles?’

So this is a very important book that addresses the dangers of hidden, opaque, biased quantification. Another key point is that it’s not simply the biases of the people who build those technologies in the absence of auditing and transparency and regulation and collaboration, it’s also the datasets that those systems learn from. We know our world doesn’t treat women as well as it treats men. We know that black people are treated much worse in many countries, including the United States. These are statements of fact. So if we’re going to build technologies that are learning from the world, they’ll end up implementing activities based on a racist or sexist world.

“This is a common theme we see in the US: we socialize the costs and privatize the profits.”

Because people treat technologies as neutral—especially because they don’t know how the technology works; people don’t even know when they’re interacting with an AI system because there’s no disclosure—we’re going to end up building a world that engineers the worst forms of inequality. Another example that Cathy O’Neil touches on briefly is facial recognition systems. From Michelle Obama to Serena Williams to Oprah Winfrey, every major facial recognition system, whether Chinese or American, thinks these people are men. Google’s image recognition algorithm also mistook various pictures of black people for gorillas. How did Google deal with the situation? They removed gorillas from their image system. Instead of dealing with the underlying architecture that is computationally producing these inequalities and injustices, we’re scratching at little scabs on the surface. We need to do better than that.

What can we do individually? Should we be boycotting Facebook if we feel it’s not conducting itself ethically?

I use Facebook. I use Twitter. I use Amazon. I’m trying to be mindful of how much reliance and dependency I have on them, but I believe that is a question that can only be answered by each of us individually, unless we’re talking about doing something on a massive scale that would actually have an impact. The interventions would need to be systemic, if they occurred through a ‘delete Facebook’ movement, for example.

But I don’t feel the burden of this should be on individual users. I would like the burden to be on activist regulators, journalists and, most importantly, our tech gazillionaires.

Overall, though, I get the feeling you’re quite optimistic?

It’s partly my personality. I’m critical, but I’m hopeful and believe in movements. I’m not a cynical person. And I see opportunities. My work has been getting a lot of attention and there’s a reason people are interested in these themes. Also, I don’t want to be so naive as to think that the nice people I’ve been on panels with from Facebook are going to change things, but they’re not sociopaths. They just don’t know what’s going on. Some of them do know and are doing terrible things—and we’ve seen some evidence that Zuckerberg has trodden on that ground a little bit—but let’s hope they’re mostly just ignorant and that by bringing these issues to light we can encourage them to experiment with other models of being profitable and worth a lot of money.

I actually met Mark 10 years ago. I sat with him and talked to him for over an hour. He offered me Facebook data at that time. So what we saw happening with Cambridge Analytica and Russia was a tacit understanding and assumption at Facebook. I don’t think they thought of what they were doing as necessarily wrong. Part of the problem is they’re a bit clueless about these things. Maybe they are a bit more clued in now, but that’s why these books we’re all writing are important.