Facebook unveiled a series of changes on Tuesday to limit hate speech and extremism on its site, amid rising scrutiny of how the social network may be radicalizing people.

The company began its announcements early on Tuesday by saying it would expand its definition of terrorist organizations, adding that it planned to deploy artificial intelligence to better spot and block live videos of shootings. Hours later, in a letter to the chairman of a House panel, Facebook said it would prevent links from the fringe sites 8chan and 4chan from being posted on its platform. And late in the day, it detailed how it would develop an oversight board of at least 11 members to review its content decisions.

Facebook, based in Silicon Valley, revealed the changes a day before the Senate Commerce Committee was set to question the company, Google and Twitter on Capitol Hill about how they handle violent content. The issue of online extremism has increasingly flared up among lawmakers, with the House Judiciary Committee holding a hearing in April on the rise of white nationalism and the role that tech platforms have played in spreading hate speech. On Tuesday, a bipartisan group of lawmakers also sent a letter to Twitter, Facebook and YouTube about the presence of international terrorist organizations on the sites and how those groups foment hate.

Facebook in particular has been under intense pressure to limit the spread of hateful messages, pictures and videos through its site and apps. As the world’s largest social network, with more than two billion users, and the owner of the photo-sharing site Instagram and the messaging service WhatsApp, Facebook has the scale and audience for violent content to proliferate quickly and globally.