With Europe’s Parliamentary elections approaching later this month, Facebook has set up an operations room to monitor for misinformation, fake accounts, and election interference that violates the site’s rules, The New York Times and The Guardian report. The effort is designed to prevent the kinds of wide-scale campaigns that could influence elections.

The Times says that the room is similar to the one Facebook set up in October 2018 ahead of the US midterm elections and elections in Brazil, which the company closed at the end of November. Facebook also set up a similar center in Delhi ahead of this year’s elections in India. The reports say that this new room, located at Facebook’s European headquarters in Ireland, will remain open through the duration of the upcoming elections, which will be held between May 23rd and May 26th.

In January, Facebook announced a series of new tools that it would later launch in March, designed to “help prevent foreign interference in the upcoming elections and make political advertising on Facebook more transparent.”

The Guardian notes that the room is staffed with “around 40 people,” including native speakers of “all 24 official EU languages.” The Times notes that Facebook wouldn’t say what actions the center has taken since it opened, but it did outline that the assembled team reviews material flagged by its automated systems or by users. The various team members take a look at the material and make a recommendation as to whether it should be removed. “In some instances, what’s flagged will lead to a bulk takedown of posts and accounts.”

Both reports note that the company still has trouble locating and removing bad actors, pointing to a campaign that Facebook recently removed in Spain ahead of its election. The Guardian notes that Facebook’s systems didn’t spot the campaign, and Facebook’s head of cybersecurity policy, Nathaniel Gleicher, acknowledged that Facebook can’t handle the problem on its own: “the reality of security is you need as many people focused on the problem as possible.”

He outlined that the company is approaching abuse in two ways: using artificial intelligence to make it difficult for bad actors to manipulate its systems, and taking down those accounts quickly. Facebook, he says, is trying to get “bad actors to spend their time trying to defeat the filter, rather than trying to drive their messages.” The Times also notes that Facebook is playing a sort of cat-and-mouse game, reacting as groups change their methods to get around the safeguards it has put in place. Long-term, Gleicher tells the Times, the company is working to harden Facebook against manipulation, making it more difficult for bad actors to spread misinformation across its platform.