During the 2016 State of the Union address, President Obama proposed that one of the four “big questions” that the United States needs to answer is: “How do we make technology work for us, and not against us–especially when it comes to solving urgent challenges?” For our inaugural blog post, the SKAT Publications Committee invited contributors to respond to Obama’s statement. Responses consider questions such as: what is a “technology that works for us” versus a “technology that works against us,” and who is included (or not) in the “us” of Obama’s statement?

Many thanks to our contributors:

David Banks, Rensselaer Polytechnic Institute

Gwen Ottinger, Drexel University

Joseph Waggle, University of Maryland, College Park

Working for All of Us: We Need Democratic Control over Technology

David A. Banks, Rensselaer Polytechnic Institute

It is laudable that in 2016 our leaders are thinking about technology as something that could work against some humans’ interests. When the President of the United States asks how to “make technology work for us, and not against us—especially when it comes to solving urgent challenges,” our first inclination as social scientists should be to define the “us” under investigation. Presumably, the president would reply with “every American,” but even the most cursory reading of science and technology studies literature would suggest that this is impossible. Technologies are neither inherently good nor bad for all humans; rather, they are the artifacts of social action and often participate in political controversy and social stratification before, during, and after their initial creation. The levees in New Orleans, the drones that fly over the U.S. southern border and throughout the Middle East, and the corroded pipes of Flint, Michigan stand in testament to the ways in which technology participates in politics.

A far more realistic question might be: “How do we redistribute control over and allocation of technologies?” Such a reframing decenters the artifact itself and brings analytic focus to the methods we employ to make technology in the first place. If decades of research into innovation and regulation by social scientists have taught us anything, it is that our current paradigm of “design first and regulate later” makes for bad products and even worse policy. The former sets loose unintended consequences that cause real harm to people, and the latter is ham-fisted, too late, or most commonly, both. Worse yet, as social scientists we may be over-prescribing DIY, citizen-led science and technology at the expense of social movements that would otherwise be changing the landscape of scientific research and technological innovation.

Consider, for example, the issue of fracking. Energy companies have tried to convince the public that they are merely scaling up old and trusted technology when in fact we are entering into a categorically new era in humans’ relationship to terra firma. The deleterious consequences of fracking range from contamination of communities’ water supplies to regular earthquakes where there once were none in living human memory. The U.S. government’s inability and unwillingness to maintain safe drilling practices have been met with a peculiar popular reaction: despite immediate and direct harms, Americans have chosen to carefully but steadfastly appeal to broken regulatory frameworks while also spending considerable time and effort cataloging and monitoring environmental impacts. This reaction seems to have no historical analogue. I feel confident in saying that historically, if a large outside party arrived and inflicted massive damage on a basic resource, that party would be met with pitchforks and bricks, not test tubes and spreadsheets.

While I am not prepared to unconditionally advocate for pitchforks and bricks, I think we as social scientists should think twice about our role in advocating for scientific inquiry and technological development by non-experts. The hidden danger in citizen science is that it lends credence to the notion that certainty is needed for collective action. Instead of collecting data, we should be deconstructing the kinds of evidentiary demands needed to take action on disasters that stem from environmental racism, organized ignorance around issues important to minority groups, and unchecked innovation more broadly. Instead of replacing necessary environmental monitoring with crowdsourced DIY methods, we must absolutely demand that those institutions charged with protecting the common good do their jobs. At some point, we need to know when communities should stop collecting data and start demanding the resources that they are owed.

None of this is meant to disparage the excellent work done by many of the scholars that make up SKAT and Science and Technology Studies. Rather, I am pointing toward the simple fact that our institutions of science and governance are increasingly found to be criminally negligent in their duties.

The pipes in Flint are a particularly maddening example of both the government’s and the academy’s indifference toward human life. In February, the Chronicle published an interview with Virginia Tech scientist Marc Edwards, who stated unequivocally that environmental crises are the direct result of the “perverse incentives that are given to young faculty,” and that scientists are so reliant on fragile funding networks that calling out wrongdoing in the private and public sectors is tantamount to a career killer. I identified a similar dynamic in a 2014 Tikkun Magazine essay in which I suggested that a way around this dynamic would be a no-strings-attached block grant program for communities paired with “a clearinghouse of sociologists, water chemists, lawyers, economists, and geologists all fully paid by the federal government and willing [to] work with a community to solve problems identified by its residents.”

It is no longer enough to work on methods and tools that let communities fix part time what well-funded governments and universities are ruining full time. We need programs that aid in producing swift but certain knowledge claims that activate reformed or radically altered governance mechanisms. Making technology work for more people more often starts with building better organizations, not filling in holes in (sometimes intentionally) tattered governance regimes. If, as Bruno Latour says, technology truly is “society made durable,” then creating better technology means creating a better society. This begins not at the design table, but in communities and the governing institutions that purport to serve them.

David A. Banks is an editor for Cyborgology, a member of the Theorizing the Web organizing committee, and a PhD candidate at Rensselaer’s Science and Technology Studies Department.

Gwen Ottinger, Drexel University

“How do we make technology work for us, and not against us?” President Obama’s question acknowledges a fundamental dilemma: not all technological innovation is necessarily progress. We can all think of technologies that have effects that work against ecosystem integrity, human welfare, and/or social justice. Sociologists might add that the less desirable effects of technology tend to fall most heavily on those who are already least well off. Just think of where hazardous waste goes, where power plants are sited, where the rare earth elements that make our smart phones work are mined. All the more reason, the President might agree, to make sure that technology is working for all of us.

But how? That the President’s question needs to be asked at all underscores a second problem: our current approach to developing, deploying, and adopting technology fixates on technology’s promise to benefit society—and remains oblivious to the harms it might cause. With the exception of drugs and other therapeutic technologies, innovations are seldom scrutinized to determine whether they provide promised benefits and still less often to find out what their “side effects” are.

In fact, too often innovators actively avoid knowledge that might suggest that benefits are smaller or harms greater than originally projected. And when those who suffer the side effects step up with their evidence of those harms, they are routinely disbelieved, discredited, and disrespected. When, for example, African-American residents of communities next to oil refineries point to their children’s asthma as indicative of the hazards of petrochemical operations, and the transportation technologies they support, they are blamed for their “unhealthy” lifestyles and possible industrial culprits are never investigated. In our current system, the technologist’s role is to promote and defend innovations, not to learn about the full range of their implications.

We need to change how we treat new innovations and, ultimately, how we innovate if we want to ensure that technology works for us—all of us. The deployment of a new technology should be treated as an experiment, not just on physical systems or the environment, but on social structure and people’s ways of life. The experiment could begin with the hypothesis that the technology will work for us, but evidence would have to be collected to confirm, or disprove, the hypothesis.

Innovators should have neither sole control over nor sole responsibility for designing these experiments. Stakeholders of all sorts, especially the marginalized groups that stand to be affected, should be involved in order to ensure that the experimental design reflects diverse ways of thinking and multiple kinds of expertise. Participants should define together what it means for the technology to “work for us.” They should collectively determine what would constitute evidence that it does. They should brainstorm the kinds of unintended harms that might arise as a result of the technology—then decide which are worth monitoring, and how. And they should discuss how they will use the collected evidence to conclude whether the technology earns a place in our society, whether it needs to be modified, or whether it ought to be abandoned.

An experimental approach to new technology would be an innovation itself: we would need to invent institutions to enable experiments to be designed and conducted in a broadly participatory way. And we would need to re-engineer our political systems to ensure that the results of the experiments lead to appropriate action by lawmakers. Institutions for participatory knowledge production and deliberative decision-making are not without precedent, of course, but existing versions would require creative adaptation for the purpose of tracking the effects of new technologies.

Engineers and industrialists can be expected to object: such a scheme would halt innovation in its tracks! That seems doubtful. Creative, clever people will continue to see opportunities to make money and/or make the world a better place, and a clearly laid-out, consistently applied requirement that their innovations be rolled out in a more thoughtful, deliberative manner is unlikely to be enough to deter them from pursuing their ideas.

Instead, what we could expect is that savvy innovators will add new criteria to their design specifications. Knowing that they will be held accountable for even unintended harms caused by their technologies, technologists should begin to favor innovations that can be deployed a little at a time, support monitoring, be modifiable at minimal expense, and, in the worst case, be rolled back entirely. Incrementalism, transparency, flexibility, and reversibility would become additional technological challenges, complements to efficiency, speed, size, and power—and surely no more insurmountable.

Admittedly, an experimental approach would almost certainly slow down the pace of innovation. But that, too, seems a crucial aspect of ensuring that technology works for all of us. Instead of barreling forward into a technological landscape where benefits to some groups are achieved at the cost of harms to others, we could lay, brick by brick, a technological foundation for raising everyone up.

Gwen Ottinger is an Assistant Professor in the Center for Science, Technology, and Society and Department of History and Politics at Drexel University.

Joseph Waggle, University of Maryland, College Park

Pathologizing technology shifts blame away from unequal social arrangements and onto the technologies that we use to maintain them. It’s a stirring speech to go out on, but as a platform for change, it falls flat.

It was likely his last visit to the floor of the House of Representatives, ever, and definitely his last as President of the United States. So when President Barack Obama gave his final State of the Union address earlier this year, he probably wanted it to leave an impression. But it wasn’t a goodbye.

Presaging criticism from several Republican candidates and others in the wake of Justice Antonin Scalia’s death last month, President Obama confirmed in clear language that he had no intention of coasting to the end of his presidency.

In an atmosphere where the Majority party in Congress, many of the men vying to replace him, and the citizens who support them expect the president to sit quietly and run out the clock on his final term, it is no wonder that President Obama framed his speech not as a to-do list for his final twelve months as president, but rather as an optimistic look into the far future, to the clearer horizons that might yet exist beyond the haze of the current political situation.

And that political situation is a very, very hazy one.

The next nine months will likely be defined by the race for the next presidency, a race that, still in the primary stages, already reads more like postmodern performance art than a real-world bid for leadership. In the Republican-dominated Congress, empty seats are projected to go to even more conservative Republicans, pushing the needle farther and farther right, to the shock and awe of the Grand Old Party. And globally, international summits like the United Nations Framework Convention on Climate Change (UNFCCC) 21st Conference of Parties (COP21) are drawing greater and greater attention to climate change, a policy issue on which the United States still lags behind other developed nations, both socially and politically.

So when the President asserted that the fight against climate change meant making “technology work for us, and not against us,” he was talking about more than coal-fired power plants and solar panels. He was talking about a utopian future in which technological cures can heal social wounds.

Consider the following statement from President Obama’s State of the Union Address:

“We live in a time of extraordinary change — change that’s reshaping the way we live, the way we work, our planet and our place in the world. It’s change that promises amazing medical breakthroughs, but also economic disruptions that strain working families. It promises education for girls in the most remote villages, but also connects terrorists plotting an ocean away. It’s change that can broaden opportunity, or widen inequality. And whether we like it or not, the pace of this change will only accelerate.”

Replace “change” with “technology” in that statement and you have the cautious dream of an ecological modernist, a man who sees our society at a tipping point of great social progress but only at the risk of using untested, unwieldy, mystifying technology to achieve it.

The problem is that you can’t replace “change” with “technology.” You can foster change with technology, grow it, spread it around. You can also slow its progress, reverse it, or suffocate it altogether. Social change, the change President Obama is talking about above, requires agency, and technology has none.

What I want to focus on here is not the “technology” in President Obama’s statement, but rather the “us.” Technologies, including new kinds of knowledge and new ways of knowing the world, are only as powerful or as dangerous as the people who use them. Placing the blame on technology for greenhouse gas emissions, or the connectivity of terrorists, or widening income inequality shifts the focus away from people. It erases the people who have no access to these technologies, lionizes the people who do, and renders invisible the power relations that keep them so far apart.

President Obama’s final State of the Union was, as he said, not “just about the next year. I want to focus on the next five years, ten years, and beyond.” For big issues like climate change, even that stretch of time may be short-sighted. But looking to technologies to save us–and blaming technologies for putting us in harm’s way to begin with–is a kind of willful blindness that only serves to reinforce the patterns that create the harm from which we so desperately want technology to deliver us.

Joseph Waggle is a doctoral candidate in the Department of Sociology at the University of Maryland, College Park and a research fellow in the Program for Society and the Environment (PSE). He is also a member of the SKAT Publications Committee.