The Pew Research Center reported this week that 62 percent of U.S. adults get news on social media, including 18 percent who do so often – up sharply from 49 percent just four years ago. Fully two-thirds of U.S. Facebook users get news from the site (up from 47 percent just three years ago); that's roughly 44 percent of the population, according to Pew – more than the cumulative news reach of YouTube (10 percent), Twitter (9 percent), Instagram (4 percent), LinkedIn (4 percent), Reddit (2 percent), Snapchat (2 percent) and Tumblr (1 percent), the next seven biggest social media news sources.

And only last year, Facebook passed Google as the biggest referrer of Internet traffic to news sites.

This is in part, as I've noted previously, because in an age of widespread perceived media bias and mistrust of journalists, search engines and social media platforms are viewed as dispassionate, unbiased sorters of the news – a 2012 Pew study, for example, found that roughly two-thirds of Americans believe search results are unbiased. And a study released by Edelman earlier this year found that 60 percent of those surveyed trust Google for news more than they trust actual news outlets.

But while the big social media platforms – be it Facebook or Google News – have news-purveying components, they're not news organizations as such and don't have news missions. They're parts of larger companies whose agendas don't necessarily include fairly informing the citizenry. And they have real power, whether or not they're using it.

The role of big social media in news distribution has been top of mind with the recent controversy surrounding Facebook reportedly suppressing conservative content. Facebook denied any wrongdoing, and then this week announced that an internal investigation had confirmed its innocence but that it was tweaking its procedures to ensure that nothing like what didn't happen would ever, umm, not happen again.

Briefly a topic of partisan interest, the story has mostly faded. And that's too bad, because there's a larger issue at hand. If Facebook (or Google) really wanted to interfere in politics, it arguably has much more effective ways of doing so – scarily effective ways. And the fact that there's no evidence that big social media is using the tools available to it to manipulate elections is cold comfort; the potential requires, at minimum, proactive consideration if not actual preventive action. Hell, Republicans insist that voter ID laws are vitally important to combat a fraud for which there is similarly little proof – you'd think they'd be interested in confronting something with the potential to operate on a vastly larger scale.

On Election Day in 2010, Facebook and a handful of political scientists conducted an experiment, showing 61 million users messages intended to nudge them to vote. Their conclusion was that they had succeeded, causing an additional 340,000 people to cast votes that day. That's great, right? More civic engagement is a good thing. But what if Facebook decided to put its finger on the scale and mobilize only Hillary Clinton voters? Or something less explicitly partisan: Suppose Facebook decided that it would be a good idea to help demographic groups that turn out in numbers lower than their share of the population – like, say, Latinos. Such targeted voter motivation, dubbed "digital gerrymandering" by Harvard Law School's Jonathan Zittrain, would have the salutary effect – if you're a Clinton supporter (and in case you're wondering, Facebook officials have given far more money to Clinton than to anyone else this cycle) – of turning out more strongly Democratic voters.

And then there's Google, which has its own ways of pushing voters toward favored candidates. Robert Epstein and Ronald Robertson of the American Institute for Behavioral Research and Technology have conducted more extensive research measuring what they call the "Search Engine Manipulation Effect" – looking at whether Google, say, could shift votes by tweaking its search engine to favor one candidate. Their conclusion is that doing so could "easily shift the voting preferences of undecided voters by 20 percent or more – up to 80 percent in some demographic groups," as Epstein wrote in Politico last summer – with virtually no one knowing they are being manipulated. Around the world, Epstein and Robertson calculate, Google could flip upwards of 25 percent of national elections if it wanted to wield that power.

This wouldn't even necessarily require deliberate malfeasance on the part of Google's executives (or a rogue programmer or programmers). Arizona State University, New America and Slate concluded last December that – owing to organic factors and not because of deliberate manipulation – Google's search engine "is not fair; it favors some candidates, and it opposes others. And so far, it seems to prefer Democrats." Hey, maybe this is just a result of the relative quality of Democratic and Republican candidates – reality, after all, has a well-known liberal bias – but maybe it's worth digging a bit deeper to see whether it's a glitch, a manipulation or the algorithm working properly.

And here's perhaps the most important thing: Whether we're imagining Facebook engaging in digital gerrymandering or news feed nudges, or Google tweaking its search results, "they can do a lot of things which would be completely undetectable," Epstein says. "They're not simply undetectable but ephemeral. … They're also legal – they're completely legal." The power of social media and search engines is such a recent development that there are no laws on the books to handle it. (Though one could argue that if Facebook or Google tipped the scales for a candidate or party, it would qualify as an in-kind campaign donation.)

But Google's corporate motto was until recently "Don't be evil." (Make of it what you will: That phrase was dropped from the company's mission statement last October.) And Mark Zuckerberg seems like an upstanding fellow. So is there reason to fear that either company would use its influence in nefarious ways? I guess it depends on your definition of evil. Spokespeople for the two companies have repeatedly said that they don't and wouldn't do such things. On the other hand, we do know that the Federal Trade Commission investigated Google and found that the company has already tweaked its search results for its own benefit. We also know that a Google employee arranged for the company's "Street View" mapping cars to, as Wired later put it, "secretly intercept Americans' data sent on unencrypted Wi-Fi routers." And we know that this spring Facebook employees were wondering out loud about "what responsibility" Facebook has "to help prevent President Trump in 2017." That's in addition to the company – perhaps civic-mindedly – messing with 2010 turnout, and to its playing with users' news feeds in an effort to manipulate their emotions. This was explained away as innocent overreach, part of the company's ever-experimenting culture. Maybe it was; but given the power and the stakes, it has the potential to be innocence along the lines of children playing with loaded weapons. If someone gets shot, it doesn't matter so much whether the little tyke meant to do it.

In a very real way, companies like Facebook and Google redefine the idea of too big to fail: They have become so big and powerful that we cannot afford to let them fail at their stated missions of political noninterference.