As CEO Mark Zuckerberg laid out in a sizable manifesto earlier this year, Facebook has grand ambitions to create a platform that goes beyond just connecting family and friends. More than social media, it wants to create the social infrastructure that powers communities from niche interest groups to nations. But can tools to improve civic engagement from a clickbait-driven company really be good for democracy?

As part of its new Town Hall project, Facebook recently unveiled three new features aimed at linking government representatives with the citizens they’re elected to serve. The first, constituent badges, enables users to voluntarily identify themselves (via address verification) as residents of a particular district. Politicians will then be able to see these badges, allowing them to determine whether, say, a commenter on their page resides in the area they represent. The second, district targeting, will allow elected officials to create posts and polls soliciting feedback that will only be visible to their constituents. Finally, there’s constituent insights, which will permit politicians to view and comment on news stories that are popular in their political districts.

This all sounds nice on the surface. After all, it seems that Facebook is finally beginning to acknowledge its political influence, a role it initially rejected in the aftermath of the 2016 election. And it’s seemingly channeling that power for public benefit by making it easier for constituents to engage with elusive politicians.

Yet, there’s reason to stop short of lauding the social media network’s attempts to better our representative government. Not only is it unclear whether these tools will actually foster meaningful engagement, it’s also questionable whether private companies like Facebook—even if they’re well-meaning—should be trusted to keep democracies’ best interests at heart.

For starters, whether by intention or accident, Facebook seems to be designing democracy in its own clickbait-driven image. The Town Hall tools all revolve around attention: They encourage both politicians and users to find ways to better grab one another’s eyeballs (to use the social media term of art). But the kind of attention Facebook demands from us—incessantly chasing what’s trending, viral, getting clicks—rarely works in favor of the long-term public interest. Think of Donald Trump, obsessively monitoring his ratings and flitting to whatever message gets a rise from his base. Such messaging may get attention, but it’s not what’s best for the American people. Yet this kind of frenetic attention baiting is essential to Facebook’s business model. It’s not hard to imagine a scenario in which these new tools end up pushing politicians to focus primarily on the whims or dank memes of the day, instead of heeding more substantial issues.

I’m also concerned about the assumptions about human behavior and demographics that Facebook appears to be making with these features. For one, the Town Hall tools assume that what people read and share on the social media platform is a reasonably good representation of what they want their politicians to be concerned about. But there’s little reason to believe this is true. Users often post articles that they dislike, disagree with, find hilariously stupid—I certainly do. But it’s unlikely that Facebook will successfully categorize these posts as such before sharing them with politicians. Does what you read when you’re slacking off at work actually represent the big issues you’re concerned about, or is it just what happens to catch your eye at 4:30 p.m. on a Friday? Is a news story important just because it happens to be trending?

What’s more, we don’t actually control what we see in our Facebook feed; Facebook does. Unlike a newspaper or magazine that shows readers stories the editorial board deems significant, Facebook’s algorithm is designed to show stories it thinks an individual user wants to view. The website simply isn’t that motivated to get you to eat your vegetables (if vegetables are “dry foreign affairs stories with large words”). Its interests lie in keeping you scrolling endlessly through your feed. Research has shown that this filtering distorts the content that people see, possibly creating damaging filter bubbles and further polarizing politics. It also gives Facebook outsize control over what media it chooses to promote—a state of affairs that benefits the company but may not always benefit you.

This distortion isn’t just coming from the company, either. As Facebook itself has admitted, governments and other actors have exploited the platform to spread misinformation. The company’s efforts to combat this problem are still at an early stage, and from the limited information provided, it’s not hard to imagine propaganda bots and digital information warriors gaming the address-verification system and pretending to be constituents.

The second big assumption that the Town Hall tools idea makes is that Facebook users are reasonably representative of the actual opinions of the population. Except this isn’t a sure thing: There’s ample evidence that privileged groups are more “visible” in big data sets (for example, people with regular internet access are more likely to be engaged in social media than those without). While 68 percent of U.S. adults use Facebook—making it the most widely used social media platform in the country—these users are more likely to be young, low-income, and female.

Bias may also impact who actually uses and engages with Facebook’s Town Hall tools. Just 9 percent of social media users say that they regularly discuss or comment on politics, and only 25 percent of social media users follow political figures. Users concerned with privacy may not feel comfortable handing over sensitive information like their home address to the company (with good reason), even if it means they can’t be verified as a constituent. Others may feel pressured to disclose their private information as a prerequisite for interacting with their representatives. While Facebook likely could correct for these biases, it might not want to; after all, the company directly benefits from more users and more user data.

Finally, Facebook makes a third problematic assumption: that elected officials and their teams are capable of sifting out important, policy-actionable items from a constant, minimally curated stream of what their constituents are reading and posting. That’s an exceptionally optimistic view of the filtering abilities of most people—let alone politicians. It’s not just the barrage of data that’s the issue; the data is also essentially meaningless if it can’t be properly analyzed. Considering there are entire industries of highly trained professionals who spend their careers trying to do this—and still regularly get things wrong—what makes Facebook think these tools will be anything but a distraction for elected officials?

Beyond thinking about how these tools might influence the responsiveness of politicians to their constituents, it’s also worth considering the motivations of the politicians themselves. During the 2016 election, for example, the Trump campaign worked to dissuade potential Clinton supporters from turning out through something called “dark posts”—in this case, targeted negative ads that were invisible to anyone but the recipient, obscuring them from public scrutiny. It remains unclear how much of a difference, if any, these posts made on the outcome. But the tactic is unlikely to go away—and the Town Hall tools may make it more powerful by enabling politicians to better tailor their messaging to constituents’ particular pain points. Elected officials might also be able to gain an advantage over potential electoral challengers, as it doesn’t appear that Facebook offers similar tools to candidates yet. Has the company gamed out these unpleasant but realistic possibilities?

We don’t know because Facebook isn’t transparent about its processes or about its overarching goals with these recent forays into democracy building. And it’s important to remember that it doesn’t have to be. It’s a publicly held company whose primary responsibility is to its shareholders. It’s not a representative government. Nor is it a public utility. Though it may be using lofty press releases and Zuckerberg’s not-so-subtle appearances at Iowa truck stops to promote a new, civic-minded image, it has no obligation to be inclusive or representative, or to do what’s best for citizens.

This does not mean Facebook is evil or ill-intentioned; it simply means that Facebook can only be what it is. As Donald Trump is proving, you can’t run government like a real estate business. We shouldn’t run government like a social media company, either.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.