As someone who’s spent the past eight years working as a social media consultant, I’m a bit worried about Facebook’s new Reactions emoji–not from a business standpoint, but as a member of the human race.

For me, the most troubling aspect of the new emoji options is that since 2006, Facebook has relied heavily on its news feed algorithm to determine what users do and don’t see. This algorithm incorporates data about everything you do on Facebook, from sending private messages to checking your ex’s new profile picture to the amount of time you spend engaging with a post or watching videos. The idea that Facebook’s new emotional Reactions could eventually be incorporated into this algorithm worries me deeply–largely because it could shape the information we consume.

Facebook has said that it will wait a while before deciding how to incorporate the new Reactions into its news feeds:

“In the beginning, it won’t matter if someone likes, ‘wows’ or ‘sads’ a post — we will initially use any Reaction similar to a Like to infer that you want to see more of that type of content.”

But it seems only a matter of time before that changes. Facebook also notes:

“Over time we hope to learn how the different Reactions should be weighted differently by News Feed to do a better job of showing everyone the stories they most want to see.”

To understand why this possibility is so troubling, it’s useful to look at the example below:

According to the betting odds, Donald Trump is likely to sweep the Republican primaries on Super Tuesday. I personally don’t “like” this. This news makes me feel both angry and sad. But how do I want to present myself? Do I want to show that sadness or anger? And more importantly: Am I increasing the number of people who will see this update if I say that it makes me angry? (Anger is one of the primary reasons why people share posts on social media.)

When Facebook relied simply on “Likes,” comments and shares, there was some level of transparency as to why a given post might appear in my feed. But unless Facebook opens up the conversation about how it will weight different Reactions, we have no way of knowing how the new options will be used to filter information–including news stories.
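To make the stakes concrete, here is a purely hypothetical sketch of what reaction-weighted ranking could look like. Every name and number below is invented for illustration–this is not Facebook’s actual algorithm, whose weights are precisely what we can’t see:

```python
# Purely hypothetical sketch: these reaction names and weights are invented
# for illustration and are NOT Facebook's actual News Feed algorithm.
REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 1.5,
    "haha": 1.2,
    "wow": 1.3,
    "sad": 0.8,
    "angry": 2.0,  # if anger keeps users engaged, it might be weighted highest
}

def rank_score(reaction_counts):
    """Score a post by summing each reaction count times its hidden weight."""
    return sum(REACTION_WEIGHTS.get(name, 1.0) * count
               for name, count in reaction_counts.items())

# Two posts with identical engagement volume rank very differently:
print(rank_score({"angry": 100}))  # 200.0
print(rank_score({"sad": 100}))    # 80.0
```

Under weights like these, a post that angers 100 people would be shown far more widely than one that saddens 100 people–and the reader would have no way of knowing why.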

Furthermore, I would bet that even if Reactions don’t affect our news feeds right now, Facebook is still monitoring that data in the interim. I wouldn’t be surprised if our emotions during this grace period affect what we see in the future.

Will we be served only positive content because Facebook has determined that we prefer to be happy? Will I be more likely to see “Angry” content because I personally stay on Facebook longer when I’m reading stories that make me mad? Or if I indicate that a post makes me angry, will I stop seeing posts about that topic altogether–and thus make my way through life in blissful ignorance? Will I get an honest picture of what’s happening in my social network–or will the new data be tweaked to tap into my personal emotional preferences and get me to stay logged on as long as possible?

No matter how Facebook ends up handling these questions, it scares me that the amount of anger, sadness, surprise, laughter or happiness that I experience in my news feed will be determined by this algorithm. It scares me that Facebook may be able to regulate our emotions without giving us any insight into how that regulation works.

Given Facebook’s scale, and the number of people who use it as their primary access point to the news, this prospect is deeply unsettling. So I’m asking Facebook’s product team to open up the conversation and internal thinking about how Reactions will affect the algorithm moving forward. If the goal is to encourage people to share more about their lives online than just the highlight reels, the weighting of emotions should be structured accordingly. The social media network may have started as a dorm-room project, but it now has a great deal of power over our perceptions of the world. Most of us would rather not live in the Matrix.