[Senate Hearing 115-232]
[From the U.S. Government Publishing Office]

                                                        S. Hrg. 115-232

    OPEN HEARING: SOCIAL MEDIA INFLUENCE IN THE 2016 U.S. ELECTION

=======================================================================

                                HEARING

                               BEFORE THE

                    SELECT COMMITTEE ON INTELLIGENCE

                                 OF THE

                          UNITED STATES SENATE

                     ONE HUNDRED FIFTEENTH CONGRESS

                             FIRST SESSION

                               __________

                      WEDNESDAY, NOVEMBER 1, 2017

                               __________

                 [GRAPHIC NOT AVAILABLE IN TIFF FORMAT]

      Printed for the use of the Select Committee on Intelligence

       Available via the World Wide Web: http://www.govinfo.gov

                    U.S. GOVERNMENT PUBLISHING OFFICE
27-398 PDF                  WASHINGTON : 2018

----------------------------------------------------------------------
For sale by the Superintendent of Documents, U.S. Government Publishing
Office, http://bookstore.gpo.gov. For more information, contact the GPO
Customer Contact Center, U.S. Government Publishing Office. Phone
202-512-1800, or 866-512-1800 (toll-free). E-mail, gpo@custhelp.com.

                    SELECT COMMITTEE ON INTELLIGENCE
           [Established by S. Res. 400, 94th Cong., 2d Sess.]

                 RICHARD BURR, North Carolina, Chairman
               MARK R. WARNER, Virginia, Vice Chairman

JAMES E. RISCH, Idaho                DIANNE FEINSTEIN, California
MARCO RUBIO, Florida                 RON WYDEN, Oregon
SUSAN COLLINS, Maine                 MARTIN HEINRICH, New Mexico
ROY BLUNT, Missouri                  ANGUS KING, Maine
JAMES LANKFORD, Oklahoma             JOE MANCHIN III, West Virginia
TOM COTTON, Arkansas                 KAMALA HARRIS, California
JOHN CORNYN, Texas

                MITCH McCONNELL, Kentucky, Ex Officio
                  CHUCK SCHUMER, New York, Ex Officio
                   JOHN McCAIN, Arizona, Ex Officio
                 JACK REED, Rhode Island, Ex Officio

                              ----------

                      Chris Joyner, Staff Director
                 Michael Casey, Minority Staff Director
                  Kelsey Stroud Bailey, Chief Clerk

                                CONTENTS

                              ----------

                            NOVEMBER 1, 2017

                           OPENING STATEMENTS

Burr, Hon. Richard, Chairman, a U.S. Senator from North Carolina    1
Warner, Hon. Mark R., Vice Chairman, a U.S. Senator from Virginia   4

                               WITNESSES

Stretch, Colin, Vice President and General Counsel, Facebook....    7
    Prepared statement..........................................    9
Edgett, Sean, General Counsel, Twitter..........................   16
    Prepared statement..........................................   18
Walker, Kent, Senior Vice President and General Counsel, Google.   38
    Prepared statement..........................................   41

                         SUPPLEMENTAL MATERIAL

Exhibits used by Chairman Burr..................................   47
Exhibits used by Vice Chairman Warner...........................   54
Exhibits used by Senator Collins................................   69
Exhibits used by Senator King...................................   79
Answers to questions for the record from Colin Stretch..........  100
Answers to questions for the record from Sean Edgett............  130
Answers to questions for the record from Kent Walker............  173
Submission from Senator Harris..................................  199

    OPEN HEARING: SOCIAL MEDIA INFLUENCE IN THE 2016 U.S. ELECTION

                              ----------

                      WEDNESDAY, NOVEMBER 1, 2017

                                           U.S. Senate,
                        Select Committee on Intelligence,
                                                  Washington, DC.

    The Committee met, pursuant to notice, at 9:34 a.m. in Room SH-216, Hart Senate Office Building, Hon. Richard Burr (Chairman of the Committee) presiding.
    Committee Members Present: Senators Burr, Warner, Risch, Rubio, Collins, Blunt, Lankford, Cotton, Cornyn, Feinstein, Wyden, Heinrich, King, Manchin, Harris, and Reed.

 OPENING STATEMENT OF HON. RICHARD BURR, CHAIRMAN, A U.S. SENATOR FROM NORTH CAROLINA

    Chairman Burr. I'd like to call the hearing to order. Good morning.
    I'd like to welcome our witnesses today. Before I introduce them, I want to say, on behalf of the full Committee, that our hearts and our prayers go out to the individuals in New York, the families and the friends of those who were affected by a senseless terror act. To most on this Committee, we've come to expect this.
    We spend countless hours working through the threats that exist to this country and around the world, and it's sad that we've come to the point where, really, nothing can happen that surprises us. But it's the responsibility of this Committee to work hand-in-hand with our intelligence community to help to keep America safe by providing the tools that they need to accomplish their mission. We will continue to do that.
    That is why we're here today, and I welcome our witnesses: Colin Stretch, Vice President and General Counsel at Facebook; Sean Edgett, General Counsel at Twitter; and Kent Walker, Senior Vice President and General Counsel at Google.
    For several months now, the media has been fixated on the role that social media platforms played in spreading disinformation and discord during the 2016 elections. This is an opportunity for each of you to tell your respective stories and, if necessary, correct the record. My sense is that not all aspects of those stories have been told accurately.
    I'll note for the record that this Committee is now having its seventeenth open hearing this year, and the twelfth at which we'll be discussing Russia and Russia's activities.
    Today, I'm hopeful we can provide the American people with an informed and credible assessment of how foreign actors used your platforms to circulate lies and to agitate unrest during last year's elections. I'm also hopeful you'll share with us what your companies are doing to make it harder for foreign actors to use your platforms--automated accounts and falsified stories--to influence sentiment in the United States.
    Very clearly, this kind of national security vulnerability represents an unacceptable risk, and your companies have a responsibility to reduce that vulnerability.
    While we're on the topic of responsibility, I want to use this forum to push back on some narratives that have sprung up around the subject.
    A lot of folks, including many in the media, have tried to reduce this entire conversation to one premise: foreign actors conducted a surgically executed covert operation to help elect a United States president. I'm here to tell you this story does not simplify that easily. It is shortsighted and dangerous to selectively focus on one piece of information and think that that somehow tells the whole story.
    We've heard from the media how a series of, I quote, ``Russian-linked Facebook ads were specifically aimed at Michigan and Wisconsin during the lead-up to last year's presidential election,'' unquote, and that, quote, ``some of those ads targeted specific demographic groups in two states,'' unquote. The narrative here is that ads linked to Russia were targeted at pivotal states and directly influenced the election's outcome.
    What you haven't heard is that almost five times more ads were targeted at the State of Maryland than at Wisconsin--Maryland was targeted by 262 ads in comparison to Wisconsin's 55 ads, and Maryland was not up for grabs. It was a State the Democrat candidate carried by 26 percent. Or that 35 of the 55 ads targeted at Wisconsin ran prior to the Wisconsin primary, before there was an identified Republican candidate--and moreover, that not one of those 55 ads mentioned President Donald Trump by name. Or that the key election State of Pennsylvania had fewer ads targeted at it than Washington, D.C., where 87 percent of the electorate voted for Hillary Clinton. Or that the three most heavily targeted states in America--Maryland, Missouri, and New York--were all decided by at least an 18-point margin, two of them won by Hillary Clinton.
    One point the media has gotten correct is that more of these geographically targeted ads ran in 2015 than 2016--again, before President Trump was identified as the Republican candidate for president.
    But some of the context surrounding the more than $100,000 worth of divisive ads on hot-button issues purchased by Russian actors is missing. To add some detail here where the media has failed to do it, and to put the $100,000 into a frame of reference: the total ad spend for the State of Wisconsin was $1,979, with all but $54 being spent before the primary--again, before the emergence of a Republican candidate. The ad spend in the State of Michigan was $823; Pennsylvania, $300.
    To believe the narrative, you have to accept that these sophisticated, well-resourced Russian actors studied our process, assessed what states would be critical to the election result, then snuck in and invested all of $300 to execute their plan in Pennsylvania--$300. More than five times as much money was spent on advertising in California, a State that hasn't voted Republican in presidential elections since 1988.
    Even with the benefit of numbers and what can be calculated and measured, this is an incredibly complex story. We can look at the amount of money spent, the number of ads purchased, and draw conclusions about priorities. We can look at the divisive content of the ads and the pages that they directed people towards, the number of tweets and retweets, and the manipulated search results, and draw inferences about the intent of the information operation.
    What we cannot do, however, is calculate the impact that foreign meddling in social media had on this election, nor can we assume that it must be the explanation for an election outcome that many didn't expect.
    I understand the urge to make this story simple. It's human nature to make the complex manageable, find explanations, and interpret things in ways that conform to your conclusions. But that's biased. Pointing to a State and saying that no ads ran there after the election doesn't prove intent, or even motive. It just shows that no ads ran there after the election.
    This subject is complicated.
    There's a whole new vocabulary that comes with this stuff. Impressions are different than views. Views are different than clicks. But there's one thing that I'm certain of, and it's this: given the complexity of what we've seen, if anyone tells you they've got this all figured out, they're kidding themselves. And we can't afford to kid ourselves about what happened last year and continues to happen today.
    That complexity, I'll note, is exactly why we depend on you for expert insight and reliable information. Sixty percent of the U.S. population uses Facebook. A foreign power using that platform to influence how Americans see and think about one another is as much a public policy issue as it is a national security concern. Crafting an elegant policy solution that is effective, but not overly burdensome, demands good faith and partnership between companies and this Committee.
    Just recently, on the basis of a more complete and sophisticated analysis, the original estimate that 10 million Americans were exposed to Russian-origin content on Facebook was increased to 126 million. That tells me that your companies are just beginning to come to grips with the scale and the depth of the problem. That's encouraging, but know this: we do better when you do better. I'd urge you to keep that in mind and to work with us proactively to find the right solution to a very constant and complex challenge.
    I'll take a moment here to stress what this hearing is and is not about. This isn't about relitigating the 2016 U.S. presidential election. This isn't about who won or who lost. This is about national security. This is about corporate responsibility, and this is about the deliberate and multifaceted manipulation of the American people by agents of a hostile foreign power.
    I'll say it again: agents of a hostile foreign power reached into the United States, using our own social media platforms, and conducted an information operation intended to divide our society along issues like race, immigration, and Second Amendment rights. What's even more galling is that, to tear us apart, they're using social media platforms Americans invented, in conjunction with the First Amendment freedoms that define an open and democratic society.
    While it's shocking to think that foreign actors used the social networking and communications mediums that are so central to our lives today in an effort to interfere with the core of our democracy, what is even more troubling is the likelihood that these platforms are still being used today to spread lies, provoke conflict, and drive Americans apart.
    Your three companies have developed platforms that have tremendous reach and, therefore, tremendous influence. That reach and influence is enabled by the enormous amount of data you collect on your users and their activities. The American people now need to understand how Russia used that information and what you're doing to protect them. Your actions need to catch up to your responsibilities.
    We have a lot to get to this morning, so I'm going to stop here. Again, I want to thank each of our witnesses today, and I turn to the Vice Chairman for any comments he might have.

 OPENING STATEMENT OF HON. MARK R. WARNER, A U.S. SENATOR FROM VIRGINIA

    Vice Chairman Warner. Well, thank you, Mr. Chairman, and let me also express our concerned thoughts about the tragedy yesterday in New York.
    Let me get right at it. In the age of social media, you can't afford to waste too much time or, for that matter, too many characters, in getting the point across.
    So I'll get straight to the bottom line: Russian operatives are attempting to infiltrate and manipulate American social media to hijack the national conversation and to make Americans angry; to set us against ourselves; and, at their most basic, to undermine our democracy. They did it during the 2016 U.S. presidential campaign. They are still doing it now. And not one of us is doing enough to stop it. That's why we're here today.
    In many ways, the threat is not new. Russians have been conducting information warfare for decades. But what is new is the advent of social media tools with the power to magnify propaganda and fake news on a scale that was unimaginable back in the days of the Berlin Wall. Today's tools in many ways seem almost purpose-built for Russian disinformation techniques.
    Russia's playbook is simple, but formidable. It works like this. First, disinformation agents set up thousands of fake accounts, groups, and pages across a wide array of platforms. These fake accounts populate content on Facebook, Instagram, Twitter, YouTube, Reddit, LinkedIn, and many other platforms. Each of these fake accounts spends literally months developing networks of real people to follow and like their content, boosted by tools like paid ads and automated bots. Most of the real-life followers have no idea that they are caught up in these webs.
    These networks are later utilized to push an array of disinformation, including stolen e-mails, state-led propaganda like RT News and Sputnik, fake news, and divisive content. The goal is pretty simple. It's to get this so-called news into the news feeds of many potentially receptive Americans and to covertly and subtly push those Americans in the directions the Kremlin wants to go.
    As someone who deeply respects the tech industry and who was involved in that industry for more than 20 years, it's taken me quite a bit of time--and I'm still learning--to truly understand the nature of this threat.
    Even I struggle to keep up with the language and the mechanics: the difference between bots, trolls, and fake accounts; how they generate likes, tweets, and shares; and how all these players and actions are combined into an online ecosystem.
    What is clear, however, is that this playbook offers a tremendous bang for the disinformation buck. With just a small amount of money, adversaries use hackers to steal and weaponize data, trolls to craft disinformation, fake accounts to build networks, bots to drive traffic, and ads to target new audiences. They can force propaganda into the mainstream and wreak havoc on our online discourse. And if you look back at the results, it's a pretty good return on investment.
    So where do we go from here? I believe it will take all of us--you, the platform companies, the United States government, and the American people--to deal with this new and evolving threat.
    The social media and innovative tools each of you have developed have changed our world for the better. You've transformed the way we do everything from shopping for groceries to growing small businesses. But Russia's actions are further exposing the dark underbelly of the ecosystem you have created, and there is no doubt that their successful campaign will be replicated by other adversaries--both nation-states and terrorists--that wish to do harm to democracies around the globe. This is not a unique American phenomenon.
    As such, each of you here today needs to commit more resources to identifying bad actors and, when possible, preventing them from abusing our social media ecosystem. Thanks in part to pressure from this Committee, each company has uncovered, I believe, only some of the evidence of the ways Russians exploited their platforms during the 2016 election.
    For Facebook, much of the attention has been focused on the paid ads that Russian trolls targeted to Americans. However, these ads are just the tip of a very large iceberg.
    The real story is the amount of misinformation and divisive content that was pushed for free on Russian-backed pages, which was then spread widely on the news feeds of tens of millions of Americans. According to the data Facebook has provided, 120 Russian-backed pages built a network of over 3.3 million people. From these now-suspended pages, 80,000 organic unpaid posts reached an estimated 126 million real people--more than a third of the population. This is an astonishing reach from just one group in St. Petersburg. And I doubt that the so-called Internet Research Agency in St. Petersburg represents the only Russian trolls out there.
    Facebook has more work to do to see how deep this goes, including into the reach of the IRA-backed Instagram posts we've just found in the last 48 hours of information you've provided. To take just the numbers we have: 80,000 posts from IRA-based trolls on Facebook, 120,000 pieces of content on Instagram--and we don't even have the data on how many people that Instagram content reached.
    The anonymity provided by Twitter and the speed by which it shares news makes it an ideal tool to spread disinformation. According to one study, during the 2016 campaign, junk news actually outperformed real news in some battleground states leading up to Election Day. Another study found that bots generated one out of every five political messages posted on Twitter over the entire presidential campaign.
    I'm concerned, sir, that Twitter seems to be vastly underestimating the number of fake accounts and bots pushing disinformation. Independent researchers, people who've testified before this Committee, have estimated that up to 15 percent of active Twitter accounts, or potentially 45 million-plus accounts, are fake or automated.
    Despite evidence of significant incursion and outreach from researchers, Twitter has to date only uncovered a small piece of that activity, although I will acknowledge that in the last few days your numbers have gone from about 200 accounts to over 2,700 accounts. And again, I believe there's more to be done.
    Google search algorithms continue to have problems in surfacing fake news or propaganda. Though we can't necessarily attribute them to Russian efforts, false stories and unsubstantiated rumors were elevated on Google Search during the recent mass shootings in Las Vegas. Meanwhile, YouTube has become RT's go-to platform. Google has now uncovered 1,100 videos associated with this Russian campaign. Much more of your content was likely spread through other platforms.
    But it's not just the platforms that need to do more. The United States government has thus far proven incapable of meeting this 21st-century challenge. Unfortunately, I believe this effort is suffering in part because of a lack of leadership at the top. We have a President who remains unwilling to acknowledge the threat that Russia poses to our democracy. President Trump should stop actively delegitimizing American journalism and acknowledge and address this very real threat posed by Russian propaganda.
    I believe that Congress, too, must do more. We need to recognize that current law was not built to address these threats. I partnered with Senators Klobuchar and McCain on what I believe is the most light-touch legislative approach, which I hope all my colleagues on this panel will review. The Honest Ads Act is a national security bill intended to protect our elections from the foreign interference we all want to avoid.
    Finally, but perhaps most importantly, the American people also need to be aware of what is happening to our news feeds. We all need to take a more discerning approach to what we are reading and sharing and who we're connecting with online.
    We need to recognize that the person at the other end of that Facebook or Twitter argument may not be a real person at all.
    The fact is that this Russian weapon has already proved its success and cost-effectiveness. We can be assured that other adversaries, including foreign intelligence operatives and potentially terrorist organizations, have read this playbook and are already taking action. It's why we, collectively, must act.
    To our witnesses today: I hope you will detail what we saw in the last election and, most importantly, tell us what steps you will undertake so that we are ready for the next one. We welcome your participation and encourage your commitment to addressing this shared responsibility.
    Thank you, Mr. Chairman.
    Chairman Burr. Thank you, Senator Warner.
    I'd like to notify Members we will have seven-minute rounds today, by seniority.
    Gentlemen, if I could ask you to please stand and raise your right hand. Do you solemnly swear to tell the truth, the whole truth, and nothing but the truth?
    Mr. Stretch. Yes.
    Mr. Walker. I do.
    Mr. Edgett. Yes.
    Chairman Burr. Please be seated.
    Mr. Stretch, we're going to recognize you, then Mr. Edgett, then Mr. Walker. Mr. Stretch, the floor is yours.

 STATEMENT OF COLIN STRETCH, VICE PRESIDENT AND GENERAL COUNSEL, FACEBOOK

    Mr. Stretch. Chairman Burr, Vice Chairman Warner, and distinguished Members of the Committee, thank you for this opportunity to appear before you today. My name is Colin Stretch, and since July 2013 I've served as the General Counsel of Facebook. We appreciate this Committee's hard work to investigate Russian interference in the 2016 election.
    At Facebook, our mission is to create technology that gives people the power to build community and bring the world closer together. We are proud that each of you uses Facebook to connect with your constituents, and we understand that the people you represent expect authentic experiences when they come to our platform to share and connect.
    We also believe that we have an important role to play in the democratic process and a responsibility to protect it on our platform. That's why we take what's happened on Facebook so seriously.
    The foreign interference we saw during the 2016 election is reprehensible. That foreign actors hiding behind fake accounts abused our platform and other internet services to try to sow division and discord, and to try to undermine our election process, is directly contrary to our values and everything we stand for. Our goal at Facebook is to bring people closer together. These foreign actors sought to drive people apart.
    In our investigation, which continues to this day, we have found that these actors used fake accounts to place ads on Facebook and Instagram that reached millions of Americans over a two-year period, and that those ads were used to promote pages, which in turn posted more content. People shared these posts, spreading them still further. Many of these ads and posts are inflammatory. Some are downright offensive.
    We know that much of this content is particularly hurtful to members of the Facebook community who engaged with this content believing it was authentic. People should believe content on Facebook is authentic, and should not have to worry that they are being exploited in a cynical effort to prey on painful fault lines in our society in order to inflame discourse in this country.
    In aggregate, the ads and posts we are here today to discuss were a very small fraction of the overall content on Facebook, but any amount is too much. All of these accounts and pages violated our policies, and we removed them.
    Going forward, we are making significant investments. We're hiring more ad reviewers, doubling or more our security engineering efforts, putting in place tighter ad content restrictions, launching new tools to improve ad transparency, and requiring documentation from political ad buyers.
    We're building artificial intelligence to help locate more banned content and bad actors. We're working more closely with industry to share information on how to identify and prevent threats, so that we can all respond faster and more effectively. And we're expanding our efforts to work more closely with law enforcement.
    We know bad actors aren't going to stop their efforts. We know we'll all have to keep learning and improving to stay ahead of them. We also know we can't do this alone. That's why I want to thank you for this investigation. We look forward to the conclusions you will ultimately share with the American public. And I look forward to your questions.
    [The prepared statement of Mr. Stretch follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    Chairman Burr. Mr. Edgett, the floor is yours.

 STATEMENT OF SEAN EDGETT, GENERAL COUNSEL, TWITTER

    Mr. Edgett. Chairman Burr, Vice Chairman Warner, and Members of this Committee, Twitter understands the importance of the Committee's inquiry into Russia's interference in the 2016 election, and we appreciate the opportunity to appear here today.
    The events underlying this hearing have been deeply concerning to our company and the broader Twitter community. We are committed to providing a service that fosters and facilitates free and open democratic debate and that promotes positive change in the world. We are troubled by reports that the power of Twitter was misused by a foreign actor for the purpose of influencing the U.S. presidential election and undermining public faith in the democratic process. The abuse of our platform to attempt state-sponsored manipulation of elections is a new challenge for us and one we are determined to meet.
    Today we intend to show the Committee how serious we are about addressing this new threat by describing the work we are doing to understand what happened and to ensure that it does not happen again.
    At the time of the 2016 election, we observed and acted on instances of automated and malicious activity. As we learned more about the scope of the broader problem, we resolved to strengthen our systems going forward.
    Elections continue all the time, so our first priority was to do all we could to block and remove malicious activity from interfering with our users' experience. We created dedicated teams within Twitter to enhance the quality of information our users see and to block malicious activity whenever and wherever we find it. Those teams continue to work every day to ensure Twitter remains a safe, open, transparent, and positive platform.
    We have also launched a retrospective review to find Russian efforts to influence the 2016 election through automation, coordinated activity, and advertising. While that review is still underway, we have made the decision to share what we know today, in the interest of transparency and out of appreciation for the urgency of this matter. We do so recognizing that our findings may be supplemented as we continue to work with the Committee staff and other companies, discover more facts, and gain a greater understanding of these events.
    My written testimony details the methodology and current findings of our retrospective review. We studied tweets from the period September 1st to November 15th, 2016. During that time, we did find automated and coordinated activity of interest. We determined that the number of accounts we could link to Russia and that were tweeting election-related content was comparatively small: about one one-hundredth of a percent of total Twitter accounts at the time we studied. One-third of one percent of the election-related tweets people saw came from Russian-linked automated accounts.
    We did, however, observe instances where Russian-linked activity was more pronounced, and we have uncovered more accounts linked to the Russian-based Internet Research Agency as a result of our review.
    We have also determined that advertising by Russia Today and seven small accounts was related to the election and violated either the policies in effect at the time or those that have since been implemented. We have banned all of those users as advertisers, and we will donate that revenue to academic research into the use of Twitter during elections and for civic engagement.
    We are making meaningful improvements based on our findings. Last week, we announced industry-leading changes to our advertising policies that will help protect our platform from unwanted content. We are also enhancing our safety systems, sharpening our tools for stopping malicious activity, and increasing transparency to promote public understanding of all of these areas. Our work on these challenges will continue for as long as malicious actors seek to abuse our system, and it will need to evolve to stay ahead of new tactics.
    We have heard concerns about Russian actors' use of Twitter to disrupt the 2016 election and about our commitment to addressing that issue. Twitter believes that any activity of that kind, regardless of magnitude, is unacceptable, and we agree that we must do better to prevent it. We hope that our appearance today and the description of the work we have undertaken demonstrate our commitment to working with you, our industry partners, and other stakeholders to ensure that the experience of 2016 never happens again.
    Cooperation to combat this challenge is essential. We cannot defeat this evolving, shared threat alone. As with most technology-based threats, the best approach is to combine information and ideas to increase our collective knowledge. Working with the broader community, we will continue to test, to learn, to share, and to improve so that our product remains effective and safe.
    I look forward to answering your questions.
    [The prepared statement of Mr. Edgett follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    Chairman Burr. Thank you, Mr. Edgett. Mr. Walker, the floor is yours.

 STATEMENT OF KENT WALKER, SENIOR VICE PRESIDENT AND GENERAL COUNSEL, GOOGLE

    Mr. Walker. Thank you very much, Chairman Burr, Vice Chairman Warner, and Members of the Committee, for the opportunity to speak with you today. My name is Kent Walker. I'm Senior Vice President and General Counsel at Google, and I oversee our legal, our policy, our trust and safety, and our Google.org teams. I've worked at the intersection of technology, security, and the law for over 25 years, starting my career as an Assistant U.S. Attorney for the U.S. Department of Justice, focused on technology crimes.
    Let me start my conversation with you today by joining your earlier comments and acknowledging the victims and families of the awful attack in New York yesterday. As a New York employer, we know how strong and tough New Yorkers are, and we look forward to doing anything we can to help.
    Turning to the issues before the Committee today: Google believes that we have a responsibility to prevent the misuse of our platforms, and we take that very seriously. Google was founded with the mission of organizing the world's information and making it universally accessible and useful. The abuse of the tools and platforms we build is antithetical to that mission.
    Google is deeply concerned about attempts to undermine democratic elections. We are committed to working with Congress, law enforcement, others in our industry, and the NGO community to strengthen protections around elections, to ensure the security of users, and to help combat disinformation. We recognize the importance of this Committee's mandate, and we appreciate the opportunity to share information and talk about solutions.
    Of course, disinformation and propaganda campaigns aren't new and have involved many types of media and publications over the years. And for many years, we've seen attempts to interfere with our online platforms. We take these threats very seriously.
We've built industry-leading security systems, and we've put those tools into our consumer products as well. Back in 2007, we launched the first version of our Safe Browsing tool, which helps protect users from phishing, malware, and other attacks. Today, Safe Browsing is used on more than 3 billion devices worldwide. If we suspect that users are subject to government-sponsored attacks, we warn them about that. And last month, we launched our Advanced Protection Program, which helps protect those at greatest risk of attack, like journalists, business leaders, and politicians. We face motivated and resourceful attackers, and we are continually evolving our tools to stay ahead of ever-changing threats. Our tools don't just protect our physical and network security, but they also detect and prevent attempts to manipulate our systems. On Google News, for example, we use fact-check labels to help users spot fake news. For Google search, we have updated our quality guidelines and evaluations to help surface more authoritative content from the web. We've updated our advertising guidelines as well to prohibit ads on sites that misrepresent themselves. And on YouTube, we employ a sophisticated spam and security breach detection system, designed to detect anomalous behavior and catch people trying to inflate view counts of videos or numbers of subscribers. And as threats evolve, we will continually adapt in order to understand and prevent new attempts to misuse our platforms. With respect to the Committee's work on the 2016 election, we've looked across our products to understand whether government-backed entities were using our products to disseminate information in order to interfere with the U.S. election. While we did find some deceptive activities on our platform associated with suspected government-backed accounts, that activity appears to have been relatively limited. Of course, any activity like this is more than we would like to see.
We've provided the relevant information to the Committee, have issued a public summary of the results of our review, and we will continue to cooperate with the Committee's investigation. Going forward, we will continue to expand our use of cutting-edge technology to protect our users and will continue working with governments to ensure that our platforms aren't abused. We will also be making political advertising more transparent, easier for users to understand, and even more secure. In 2018, we'll release a transparency report showing data about who is buying election ads on our platform and how much money is being spent. We'll pair that transparency report with a database, available for public research, of election and ad content across our ads products. We're also going to make it easier for users to understand who bought the election ads they see on our networks. Going forward, users will be able to easily find the name of any advertiser running an election ad on Search, YouTube, or the Google Display Network through an icon on the ad. We'll continue enhancing our existing safeguards to ensure that we permit only U.S. nationals to buy U.S. election ads. We already tightly restrict which advertisers can serve ads to audiences based on political leanings. Moving forward, we'll go further by verifying the identity of anyone who wants to run an election ad or use our political interest-based tools, and confirming that that person is permitted to run that ad. We certainly can't do this alone. We'll continue to work with other companies to better protect the collective digital ecosystem. And even as we take our own steps, we remain open to working on legislation that promotes electoral transparency. Moreover, our commitment to addressing these issues extends beyond our services. 
We've offered in-person briefings and introduced a suite of digital tools designed to help election websites and political campaigns protect themselves from phishing, unauthorized account access, and other digital attacks. We're also increasing our long-standing support for the bipartisan Defending Digital Democracy Project. Let me conclude by recognizing the importance of the work of this Committee. Our users, advertisers, and creators must be able to trust in their security and safety. We share the goal of identifying bad actors who attempted to interfere with our systems and abuse the electoral process. We look forward to continued cooperation both with the Members of this Committee and with our fellow companies to provide access to tools that help citizens express themselves, while avoiding abuses that undercut the integrity of elections. Thank you again for the opportunity to tell you about our ongoing efforts. We look forward to our continuing work with Congress on these important issues, and we are happy to answer any questions you might have. [The prepared statement of Mr. Walker follows:] [GRAPHICS NOT AVAILABLE IN TIFF FORMAT] Chairman Burr. Mr. Walker, thank you for your testimony. The Chair would recognize himself and share with Members that I'm going to talk about one specific ad that--it's not going to count to my seven minutes, and the Vice Chairman is going to do the same at the beginning of his, to sort of set the stage for much of what we'll talk about today. As an example, I'd like to highlight one specific case with real-world implications involving two different Facebook groups, both of which are associated with the Russian Internet Research Agency. You'll see the first board that is up. [The material referred to follows:] [GRAPHICS NOT AVAILABLE IN TIFF FORMAT] The first group's called ``The Heart of Texas,'' with over 250,000 followers. 
This account promoted pro-Texas causes and included posts many would characterize as anti-immigration or anti-Muslim. The tagline for this group, as referenced in the top left-hand corner of the first chart, is ``Texas: homeland of guns, barbecue, and your heart,'' with the words ``time to secede'' emblazoned on the Texas flag. Turning to the second group, which is in the bottom right-hand corner, it's called ``The United Muslims of America,'' with over 328,000 followers. This account promoted pro-Islamic themes. The tagline for this group, as referenced in the bottom-right corner of the first chart, is ``I'm a Muslim and I'm proud.'' So, if I could have the second board up. The Heart of Texas group created a public event on Facebook, to occur at noon, May 21st, 2016, at the Islamic Center in Houston, Texas, to, quote, ``stop the Islamization of Texas,'' unquote. The same group then placed an advertisement on Facebook to promote their event, which over 12,000 people viewed. The United Muslims of America subsequently created an event on Facebook to occur at noon, May 21st, 2016, at the Islamic Center in Houston, Texas, to, I quote, ``save Islamic knowledge''--same time, same place as the Heart of Texas event. The group then placed an advertisement, targeting people in the Houston, Texas, area to promote their event to support the Islamic Center. More than 2,700 people viewed this ad. If I could have the third board. On May 21st, 2016, local news captured the events as they unfolded, reporting on the protest staged by the Heart of Texas group and the resulting counter-protest. The pictures you see on the third board are from the streets in front of the Islamic Center in Houston, Texas. What neither side could have known is that Russian trolls were encouraging both sides to battle in the streets and create division between real Americans. Ironically, one person who attended stated, ``The Heart of Texas promoted this event, but we didn't see one of them.'' We now know why.
It's hard to attend an event in Houston, Texas, when you're trolling from a site in St. Petersburg, Russia. Establishing these two competing groups, paying for the ads, and causing this disruptive event in Houston cost Russia about $200. Mr. Stretch, you commented yesterday that your company's goal is bringing people together. In this case, people were brought together to foment conflict, and Facebook enabled that event to happen. I would say that Facebook has failed their goal. From a computer in St. Petersburg, Russia, these operators can create and promote events anywhere in the United States and attempt to tear apart our society. I'm certain that our adversaries are learning from the Russian activities and even watching us today. Simply put, you must do better to protect the American people and, frankly, all of your users from this kind of manipulation. My time can start now. I have one simple question, yes or no from each of you. I'll start with Mr. Stretch and work my way to your left. The Federal Election Campaign Act prohibits any foreign national from spending funds in connection with any Federal, State, or local elections in the United States. Doesn't this law prohibit your publication of this content? Mr. Stretch. Mr. Stretch. Prohibit publication of the content we've seen? Chairman Burr. Does FEC law apply to Facebook? Mr. Stretch. Certainly, FEC law, yes, applies to---- Chairman Burr. Prohibits foreign dollars influencing an election? Mr. Stretch. It prohibits foreign actors from using really any medium, including Facebook, to influence a foreign--a U.S. election. Chairman Burr. So FEC law applies to Facebook? Mr. Stretch. Yes, it does. Chairman Burr. Mr. Edgett. Mr. Edgett. It applies to Twitter as well. Chairman Burr. It applies to Twitter. Mr. Walker. Mr. Walker. Yes, sir. Chairman Burr. Great.
The prevalence of social media use among military members, who spend so much time outside the country, deployed away from friends, away from family, seems a likely target for foreign intelligence agencies who want to collect details on U.S. force movements, deployments, and other sensitive insight. Do you monitor your platforms for indications that your users in the U.S. military are targeted in any way? Mr. Stretch. Mr. Stretch. Senator, yes, and I would say that that sort of--that sort of security work really falls into the traditional cybersecurity work that we've long been focused on. We've had a threat intelligence team for years now that has been focused on tracking foreign actors, and it's exactly that sort of threat that we believe has historically been an area of focus for our adversaries, and likewise an area of focus for us on the defensive side. Chairman Burr. Mr. Edgett. Mr. Edgett. Similar to Mr. Stretch, we've been focused on that type of threat for years. We're also focused on education on the other side and helping law enforcement and military personnel understand how to use Twitter and both its benefits and its risks. Chairman Burr. Mr. Walker. Mr. Walker. We've been looking at cyber espionage for some years, and so this is all in focus. Because we're not a social network, we may not have as much visibility as to whether individual users of our service are veterans or not, but that would certainly be an area of concern. Chairman Burr. These questions are for Facebook, Mr. Stretch. In a blog published September 6th, 2017, Alex Stamos, Facebook's Chief Security Officer, wrote that the company had discovered about 3,000 political ads that were paid for through 470 fake accounts and pages that likely operated out of Russia. Facebook shut down these accounts on the grounds that they were inauthentic. Had these accounts not violated Facebook's prohibition against fake accounts, would they have been shut down? Mr. Stretch. 
Senator, many of them would have, because many of them violated other policies related to the type of content that's permitted on the platform. The authenticity issue is the key. Referring to the content you surfaced earlier, it pains us as a company, it pains me personally, to see that we were--that our platform was abused in this way. People in this country care deeply in--about issues of public concern, and it's one of the strengths of our country that people are so willing to speak freely about them. The fact that foreign actors were able to use our platform to exploit that openness is a deeply painful lesson for us, and one we're focused on learning from going forward. Chairman Burr. Does it trouble you that it took this Committee to get you to look at the authentic nature of the users and the content? Mr. Stretch. Senator, we are certainly troubled--I'd say more than troubled--by the evidence of abuse of our platform during 2016, and we're certainly grateful for the Committee's investigation and the attention you're bringing to this issue. We think it's very important. We do believe that it's a larger issue than any one company, and we believe that, going forward, there are opportunities, not just for us to do better, but for us to work together to make sure we're all addressing this threat appropriately. Chairman Burr. What characteristics would indicate that an account or a page is likely operated out of Russia? Mr. Stretch. There are a number of characteristics that can signal potential location. The most obvious one that is typically the most reliable is location information that's transmitted by the user's browser when they access Facebook. It's also the most easily manipulable. There are many other signals that similarly will suggest location, but, because of the way the internet is architected, can also be faked. 
Our job is to look not just for the signals that are in plain sight, but understand how they can be manipulated, and look for patterns of activity that reveal efforts to abuse our platform that are shrouded, both geographically and in other ways. Chairman Burr. Mr. Edgett, your vice president at Twitter stated that Twitter's expanding its team and resources and building new tools and processes to combat automated Twitter accounts, or bots. What is Twitter's process for identifying a bot? Mr. Edgett. We have a lot of data behind sort of the things you see on Twitter that looks at the activity of an account--and remember, there are hundreds of millions of accounts--the activity of an account as it relates to other accounts. So as you or I, Senator, tweet, our activity looks pretty normal. As an automated account tweets thousands of times an hour, or logs in thousands of times a day, that looks pretty suspicious. So our technology is looking for that anomaly that differentiates sort of normal accounts from automated accounts. But spammers and bad actors are getting better at making themselves look more real. Chairman Burr. So what percentage of accounts on Twitter are actually bots and not real people? Mr. Edgett. So we do a monthly audit and investigation of this, and have determined that, for years, less than 5 percent of our accounts are false accounts or spam. Chairman Burr. What happens to accounts on Twitter that are suspended by Twitter? Is there an indefinite status? Mr. Edgett. Once we suspend an account, they're--especially an automated account--they're typically permanently banned from the platform. And we also do work to link those accounts with new accounts that may pop up. So the more we investigate and look into this and build the web of information around the signals we're seeing from these accounts, the better we get at linking those accounts and stopping them before they get on the platform. Chairman Burr.
My time has expired, but I'm going to ask you to submit in writing for the record Twitter's assessment of why independent assessments of the number of bots on Twitter constantly, consistently, are higher than the 5 percent that you've stated today, if you would provide that for the record. Mr. Edgett. Happy to provide that for the record and address it today. Chairman Burr. Thank you. Vice Chairman. Vice Chairman Warner. Thank you, Mr. Chairman. I also want to demonstrate, but I'd also--as we're getting ready--we have had testimony before this Committee from a representative of NATO that fake and bot accounts on Twitter are more in the 12 percent to 15 percent range. A vast number of research studies--you know, 320 million active Twitter accounts--even if you assume 10 percent, you're still talking 30-plus million potential accounts that could be misused and abused. If we could put up the chart here, this is another example of how people are kind of lured in. [The material referred to follows:] [GRAPHICS NOT AVAILABLE IN TIFF FORMAT] Vice Chairman Warner. The first ad is an ad that is pretty benign. It's obviously targeted towards Christians. It's a--it's an ``Army of Jesus'' Facebook ad, 217,000 followers. You like that page, and here's what happens: you would get a series of, for the most part, relatively benign Bible quotes or other items. This ad appeared in October of 2016. Late October, early November, suddenly, this benign site, in addition to your Bible quotes, suddenly we're getting these other posts, not paid ads, but posts from this organization. This message, obviously the bottom one would've gone to the 217,000 followers. We have no idea how many times it was liked or shared with other individuals.
Again, we've got two different examples of the type of tools, how people are lured in, and then once they're lured into what they think is a pro-Texas or pro-Muslim or here pro-Jesus account--and then they are manipulated by foreign actors and foreign agents. Go ahead and start my time. First of all, I hear all your words, but I have more than a little bit of frustration that many of us on this Committee have been raising this issue since the beginning of this year, and our claims were frankly blown off by the leaderships of your companies, dismissed, said, there's no possibility, nothing like this happening, nothing to see here. It bothers me, if you're really committed to trying to work with us to resolve this, that it took until this Committee continually went at you, and it was July and early August when you made your first presentations. And, candidly, your first presentations were less than sufficient and showed in my mind a lack of resources, a lack of commitment, and a lack of genuine effort. Candidly, your companies know more about Americans in many ways than the United States government does. And the idea that you had no idea any of this was happening strains your credibility. So my first question is this, and I want a yes or no answer, not a filibuster. Will you commit to continue to work with this Committee to provide additional information and additional documents as needed as we continue to explore this challenge and threat on a going-forward basis? We go right down the line. Mr. Stretch. Mr. Stretch. Yes. Vice Chairman Warner. Mr. Edgett. Mr. Edgett. Absolutely. Vice Chairman Warner. Mr. Walker. Mr. Walker. Absolutely. Vice Chairman Warner. Next, one of the things that I continue--again, and I will commend you here--that from the first our friends at Facebook, you identified 470 accounts, 3,000 ads. And most of the work, at least it appears to me, from at least Twitter and Facebook, has all been derivative of that initial data dump.
And again, this is a yes or no question: Do you believe that any of your companies have identified the full scope of Russian active measures on your platform? Yes or no? Mr. Stretch. Senator, our investigation continues. So I would have to say no, certainly not with certainty. Vice Chairman Warner. Mr. Edgett. Mr. Edgett. No, and we're still working on this. Vice Chairman Warner. Mr. Walker. Mr. Walker. I believe we haven't done a comprehensive investigation, but, as Mr. Stretch says, these are ongoing issues, and we continue to investigate. Vice Chairman Warner. Let me start again with Facebook here. You've identified 470 accounts from one troll farm in St. Petersburg. There have been plenty of press reports of other troll farms in Russia. There have been reports of other activities that were Russian-controlled in Central Europe and Eastern Europe. In meetings with your leadership, as you became more aware of this problem, you aggressively promoted the fact, for example, that you took down 30,000 accounts around the French elections. Now, you say not all of those were Russian- related. Have you gone back and cross-checked those Russian-related accounts that you took down in France to see if any of those accounts were active in the American election? Mr. Stretch. Senator, the 30,000 accounts that we took down in---- Vice Chairman Warner. The accounts that were related to Russian accounts that you took down. Your leadership in fact bragged about how proactive you were in the French election process. Did you check those accounts to see if any of them were active in the American elections? Mr. Stretch. Senator, the system that ran to take down those accounts, which were fake accounts of, really, all type and for any purpose, has--is now active worldwide---- Vice Chairman Warner. Have you---- Mr. Stretch [continuing]. And has been operated---- Vice Chairman Warner. Just please answer my question. 
Have you reviewed the accounts you took down in France that were Russian-related to see if they had played any role in the American election? Mr. Stretch. Senator, I apologize. I'm trying to answer the question. Vice Chairman Warner. Well, the answer is yes or no. I don't want a long explanation. I want to know if you have done this. I've been signaling this to you for some time. We wanted to make sure that you would review those accounts. We wanted to make sure--the 470 accounts that paid for the 3,000 ads, you said these were all accounts, except for one, that were paid for in rubles. Did you even run those accounts to see if any of those accounts were paid for with dollars or euros or other currencies? Mr. Stretch. Senator, those--let me try to state it---- Vice Chairman Warner. Yes or no? Mr. Stretch. So we have, and we continue---- Vice Chairman Warner. Mr. Stretch, yes or no? Mr. Stretch. Yes, we are looking and have looked at every possible indication of Russian activity in the 2016 election, and the investigation continues. Vice Chairman Warner. Sir---- Mr. Stretch. That includes any evidence we've identified from those 30,000 accounts, as well as a number of---- Vice Chairman Warner. All those accounts have been run, that database has been run, to see if any of those accounts were active in the United States? Mr. Stretch. I will have to come back to you on that, Senator. Vice Chairman Warner. Sir, we've had this hearing scheduled for months. I find your answer very, very disappointing. On the question of--we just discovered, and I appreciate this, you had 80,000 posts in terms of Russian--posts on Facebook. We now discovered in the last 48 hours 120,000 Russian-based posts on Instagram. Have you done any of the similar analysis on those 120,000 posts? We know the 80,000 ended up reaching 126 million Americans. Have you done that same analysis on the 120,000 posts on Instagram? Mr. Stretch. Yes, Senator, we have. Vice Chairman Warner.
And how many Americans did those touch? Mr. Stretch. The data on Instagram is not as complete, but the data that we do have indicates that, beginning in October of 2016, those Instagram posts reached an additional 16 million people, in addition to the 126 million people that we identify---- Vice Chairman Warner. So now we're seeing the Russian activities roughly at 150 million-plus Americans, without knowing how many times they were re-shared. Mr. Stretch. If I can add that the time period prior to October of 2016, where our data is less reliable, would yield an incremental 4 million. So all told, that gets you to approximately a little less than 150 million. That's correct, Senator. Vice Chairman Warner. Mr. Edgett, on the Twitter account, you--there was one activity--and this was not something that happened during 2016. Again, I agree with the Chairman. We're not here to relitigate 2016. But there was a fake Tennessee Republican account, TEN-GOP. The irony was this account had 154,000 followers; the real Tennessee GOP party had 13,000--13,400 followers, I believe, based on your numbers. I find very interesting, there have been some people who have said, ``Well, people should be able to spot these fake accounts.'' Well, if they're able to spot these fake accounts, you had the President's communications director retweeting this account, Kellyanne Conway. You had the President's son, Donald Trump, Jr., retweeting this account. My question is, why did it take so long to take this down when the Tennessee Republican party was asking you repeatedly? Mr. Edgett. Yeah, that--and that was an absolute miss, and we've gotten better since. We've refined our policies around impersonation and parody accounts---- Vice Chairman Warner. Let me just close with this. My time's about up. Mr. Edgett. Sure. Vice Chairman Warner. We've looked on this subject, on political information and disinformation.
But the same way that these bots and trolls and click farms and fake pages, groups, algorithm gaming can be used in politics, these same tools can and have been used, I believe, to assist financial frauds around stock schemes. I think there is a lot of this activity in broad-based digital advertising. I think we've seen some of this in schemes to sell counterfeit prescription drugs, as well as the ability to encourage folks to download malware. I believe this is a real challenge, and to get this right we're going to need your ongoing cooperation. Thank you, Mr. Chairman. Chairman Burr. Senator Risch. Senator Risch. Thank you. Gentlemen, thank you for coming today. By now it's probably pretty obvious to everyone that this Committee has spent lots and lots and lots of time on this, both as it relates to the election and on these kinds of things not related to the political process here in the country. But we have spent a lot of time, and I think have been able to reach some conclusions on this. No one's exempt. I come from a State that's a lot smaller, Idaho. They tried to do in Idaho exactly what was done in Texas, where they tried to promote a meeting where they had two conflicting sides, and no one showed up. So there was no success. In Idaho, just like Texas, it had absolutely nothing to do with the 2016 presidential election. It was simply a cultural type of acrimony that they were attempting to promote. The Chairman talked about the news reports that allege that the Russians used social media to promote a particular candidate, and may even, some of those, suggest that it changed the result of the election. But this whole thing goes a lot deeper than that. One of the things we've discovered--and I think you probably are aware of this--that you can't look at those ads and say, ``Okay, they were all promoting one particular candidate.'' There were ads going both ways, for and against both candidates, by the Russians.
I'm going to get back to that in just a second. But the other thing that I think that we've come to a conclusion on, and very early, is that the U.S. isn't the only country suffering from this. The Europeans, France, Austria, Germany, just among others, have suffered from the exact same thing, and that is Russian attempted interference with their domestic affairs. I put a section in the sanctions bill, the Iran, Russia, North Korea sanctions bill, that requires the Executive Branch to do a study on this, on the effect in Europe, because they were much more overt in Europe than they were here--most of the work they did here was covert--and probably because in European countries there is actually a fair amount of Russian sympathy where they can mobilize these people, not so much here in the United States. Obviously some, but not nearly what is there. So I want to come at this from a different perspective. The 2016 elections got a lot of the politicians riled up because it went after the political process. But my conclusion is, and I think most people here would agree with me, that--and indeed, Senator Warner referred to this--that this is a lot deeper than just the elections. There are a lot of things that the Russians are trying to do, and not just inject themselves into the electoral process. It seems to me that, after you step back and look at this and say, ``What's going on here? What is the motivation? What are they doing?'' I always look at something from an objective standpoint--``What is their objective? What are they trying to accomplish?'' And you walk away from it just shaking your head, because we Americans don't think the same way they think about promoting our country. So the conclusion I've reached is that the Russians are doing what they've done all along, long before your technology even existed, and that is trying to sow discord, simply trying to sow discord.
My question to each of you is: have you tried to analyze what the Russians were trying to accomplish here, not only in the 2016 elections, but in these other kinds of ads, with the discord? What are your personal views on that, whether they've--what they're trying to accomplish? Mr. Stretch. Mr. Stretch. Senator, it's very difficult for us to ascribe motive. It's I think why this Committee's work is so important. We've tried to provide you as much information as we can, and we hope that, with your visibility into other sources of information, you will be able to help the American people have a better assessment of what the motive is. We think that'll help all of us do better to prevent this sort of activity in the future. Senator Risch. Would you agree with me that the motive isn't obvious here, given the difference in the way they handle these things? Mr. Stretch. Yes, I would agree with that. Senator Risch. Mr. Edgett. Mr. Edgett. I would agree with that, as well. I mean, based on what we've seen, the advertisements from Russia Today, the types of content that was being put out by the IRA, also the automated account content, looks as if it's merely focused on divisiveness. But we're still investigating this issue, and look forward to working with this Committee to help put the whole picture together. Senator Risch. Mr. Walker. Mr. Walker. The large majority of the material we saw was on the socially divisive side, rather than direct electoral advocacy, yes. Senator Risch. And that has really been the focus of the media, that, oh, this was all about the 2016 election. You agree with me that this is much broader than that and is, as you say, divisive--aimed at divisiveness, or aimed at discord? Would you all agree with that? Mr. Edgett. Yes, and that's a problem we're trying to tackle every day. Senator Risch. Mr. Stretch, do you agree with that? Mr. Stretch.
Yes, I would agree and note that the time period in question and the activity we saw even continued after the election. Senator Risch. Mr. Walker. Mr. Walker. That seems reasonable, hard for us to know and, again, ultimately for the Committee to decide. Senator Risch. I appreciate that. And as I said, my view of this is that this is a whole lot broader than simply the 2016 election. Mr. Walker, I have a specific question for you. I think I heard you say that you're enacting a policy where only a U.S. national can buy an election ad. Is that correct? Mr. Walker. That's correct. Senator Risch. Okay. What about other countries? Obviously, you operate in places other than the United States. Can a U.S. national buy an ad, for instance, for a French or German or Austrian campaign? Mr. Walker. I haven't studied the laws of individual countries, but we are not confining our work to the U.S. We are looking at other elections around the world to make sure that we do whatever we can to minimize electoral interference. Senator Risch. So what you're going to do is try to confine people to their own elections in their own countries? Is that pretty much your objective? Mr. Walker. Certainly that's the case for the United States, and any other country around the world where that's the law that's true, yes. Senator Risch. I think that's going to be a big challenge for you, but good luck, and I wish you well in that endeavor. Thank you, Mr. Chairman. Chairman Burr. Senator Feinstein. Senator Feinstein. Thanks, Mr. Chairman. I sat in the Judiciary hearing yesterday. It was a subcommittee hearing, but was able to ask some questions. And I want to just make a personal comment, because I've been very proud, and I know Senator Harris is as well, to represent this tech community from California. But I must say, I don't think you get it--I think the fact that you're general counsels, you defend your company--that what we're talking about is a cataclysmic change.
What we're talking about is the beginning of cyber warfare. What we're talking about is a major foreign power with the sophistication and ability to involve themselves in a presidential election and sow conflict and discontent all over this country. We are not going to go away, gentlemen, and this is a very big deal. I went home last night with profound disappointment. I asked specific questions. I got vague answers, and that just won't do. You have a huge problem on your hands, and the United States is going to be the first of the countries to bring it to your attention, and others are going to follow, I'm sure, because you bear this responsibility. You've created these platforms, and now they are being misused, and you have to be the ones to do something about it, or we will. And this Committee is Intelligence. It's different from yesterday, so they're privy to different facts, and they're very potent facts.
    Let me go back to a couple of questions that I asked yesterday. Mr. Edgett, yesterday, you testified that Twitter only began to remove voter suppression posts that told people they could vote by texting or tweeting after you found out about them from other Twitter users. These were illegal tweets. Waiting for users to alert Twitter isn't sufficient. I'll give you another chance. What is Twitter doing to proactively identify illegal voter suppression tweets?
    Mr. Edgett. Thank you for letting me address that. We're constantly improving, not only on our technology around automated accounts that are trying to amplify these types of messages----
    Senator Feinstein. That's not enough.
    Mr. Edgett [continuing]. But also on putting people and technology on the content and the behavior and trying to make our workflows, our reporting flows, more efficient and using artificial intelligence to prioritize things like the illegal voter suppression ads and other things that we see on the platform, and taking those down faster.
We are getting better, but this is a problem that we are focused on getting better at every day. Senator Feinstein. Well, you have to find a way to prevent them from going up. Mr. Edgett. That's right. And that's why we tend to---- Senator Feinstein. That's the problem. Mr. Edgett. Right. That's why we tend to focus on behavior behind the accounts, to know before the content goes up. We've seen--we've seen great strides in other areas not related to that, and so we're trying to take that same solution to this problem set. Senator Feinstein. Mr. Walker, I asked your colleague yesterday why Google didn't immediately revoke Russia Today's preferred status after the intelligence community determined and publicly stated that RT was a part of the Russian government's efforts to interfere in our election. Mr. Salgado told me that RT only lost its preferred status because of a, quote, ``drop in viewership,'' end quote, not because it was part of the Kremlin's propaganda machine. This response was deeply troubling, and frankly, did not answer my question. So here it is again. Why didn't Google take any action regarding RT after the intelligence community assessment came out in January of 2017? Mr. Walker. Let me start by---- Senator Feinstein. I'm sorry, your mic isn't on. Mr. Walker. Senator, let me start by responding to your initial comments to assure you we take this and have taken this issue very seriously. The question of cyber espionage is one that we have been working on for some years, publicly and privately, working with other companies and working on our own to identify some of these threats. This is one manifestation of that, but not the only one. With regard to RT, we recognize the concerns that have been expressed about RT and concerns about its slanted coverage. This is of course a question that goes beyond the internet. RT is covered--its channel is on major cable television stations, on satellite television stations. 
Its advertising appears in newspapers, magazines, airports. It's run in hotels in pretty much every city in the United States. We have carefully reviewed the content of RT to see that it complies with the policies that we have against hate speech, incitement to violence, et cetera. So far, we have not found violations, but we continue to look. Beyond that, we think that the key to this area is transparency--that Americans should have access to information from a wide variety of perspectives, but they should know what they're getting. And so we already on Google provide information about the government-funded nature of RT. We're looking at ways to expand that to YouTube and potentially other platforms. Senator Feinstein. Well, as you might guess, I'm really not satisfied with that. That's been the trend of the testimony all along. I think we're in a different day now. We're at the beginning of what could be cyber war. And you all, as a policy matter, have to really take a look at that and what role you play. I think my time is almost up. Let me try one more. A British report recently concluded that social media platforms such as Facebook, Twitter, and YouTube failed to remove extremist material posted by banned jihadist and neo-Nazi groups, even when that material was reported. The source for this is the British Parliament Home Affairs Select Committee. Last night, we saw a horrific attack on innocent people in New York by an individual who may have been radicalized online. We know one person, who is Anwar al-Awlaki, with 75,000 hits, the major radicalizer in the United States on the internet. I'm working on legislation to require tech companies to report known terrorist activity on their platforms to law enforcement and to provide law enforcement with civil injunction authority. So thank you, Mr. Chairman. Chairman Burr. Thank you, Senator Feinstein. Senator Rubio. Senator Rubio. Thank you. Thank you all for being here. Mr. 
Stretch, I want to ask you--and it relates to all this in the following way, but let me work there. Guo Wengui is a whistleblower and a critic of the Chinese government, and his Facebook account was blocked, and Facebook has informed us and has said publicly that he violated terms of service. I think he published personal identifying information about individuals, and that violated the terms of service, so--and I understand that argument. My question--so what I want to be clear about is, was there any pressure from the Chinese government to block his account?
    Mr. Stretch. No, Senator. We reviewed a report on that account and analyzed it through regular channels using our regular procedures. The blocking was not of the account in its entirety, but I believe was of specific posts that violated our policy.
    Senator Rubio. But you can testify today that you did not come under pressure from the Chinese government or any of its representatives, or people working for them, to block his account or to block whatever it is you blocked?
    Mr. Stretch. I want to make sure I'm being precise and clear. We did receive a report from representatives of the Chinese government about the account. We analyzed that report as we would any other and took action solely based on our policies.
    Senator Rubio. Facebook is not allowed to operate in China. Is that correct?
    Mr. Stretch. Yes, that's correct. Our consumer services are blocked in China, that's correct.
    Senator Rubio. Okay. There have been press reports that Facebook may have potentially developed software to suppress posts from appearing in people's news feeds in specific geographic areas. And the speculation is it's being done for the purposes of getting into the Chinese market. Is that accurate? Has Facebook developed software to suppress posts from appearing in people's news feeds in specific geographic areas?
    Mr. Stretch. Senator, as you know, we are blocked in China, so any software we have is certainly not operative there.
We do have many instances where we have content reported to us from foreign governments that is illegal under the laws of those governments. So a great example of this is Holocaust denial in Germany, for example. And our position with respect to reports like that is, if there is content that's visible in a country that violates local law and we're on specific notice of that content, we deploy what we call geoblocking, or I.P. blocking, so that the content will not be visible in that country, but remains available on the service. Senator Rubio. So, for example, if criticizing a government is illegal in that country, you have the capability to block them from criticizing the government and thereby gaining entry into that country and being allowed to operate? Mr. Stretch. We have the capability to ensure that our service complies with local law, that's accurate. We take a very nuanced approach to reports of illegal content. We believe our mission is to enable people to share and connect, and we believe that political expression is at the core of what we provide. And so---- Senator Rubio. What if that political expression is illegal in the country? Mr. Stretch. So, in the vast majority of cases where we are on notice of locally illegal content, it has nothing to do with political expression. It's things like blasphemy in parts of the world that are--that prohibit blasphemy. Senator Rubio. We could probably do a whole hearing on that topic. But here's why that's related to what we're talking about today: terms of service is the reason why he was knocked off. All of your companies have terms of service. Is a foreign influence campaign a violation of the terms of service of any of the three companies represented here today? If you can prove that someone is doing it on behalf of a foreign government, seeking to interfere in an election, does that violate your terms of service? [Pause.] Any of you? 
Any of the three companies, in terms of being able to operate or post things, and particularly Twitter and Facebook? Mr. Edgett. Generally, it would violate a number--we don't have state-sponsored manipulation of elections as one of our rules. But generally the other types of rules, like inflammatory ads content, would take down most of these, these posts. So we don't outright ban it, but---- Senator Rubio. Well, let me ask you this. I've read that you can buy a bot army from between $45 to $100. Is buying and--if you can prove that someone's bought up and put together a bot army, would that be a violation of terms of service? Mr. Edgett. Those would violate our terms of service around the use of automated accounts, and those are the things that we're catching every day. We're blocking 450,000 suspicious logins a day. We're challenging 4 million accounts every week, to make sure that they're actually real people. But we have--we have terms of service around---- Senator Rubio. I didn't get an answer on the face--is that a violation of terms of service, to buy for a foreign influence campaign or to put together a bunch of fake ads and put them together? Mr. Stretch. That campaign violates our terms and our policies in a number of ways. And we do not permit automated means for accessing the site, so using the bots likewise would be a violation. Senator Rubio. Okay. If someone goes on and posts the Social Security number and date of birth of an individual, that's a violation of terms of service, correct? Mr. Stretch. For Facebook it is, yes. Senator Rubio. I would imagine for all the platforms. Mr. Edgett. It is. Senator Rubio. What about if someone goes online and posts classified information, illegally obtained, that threatens the lives of individuals or methods, or potentially disrupts the ability to disrupt a plot that can endanger the lives of people? Is posting that online or posting that in one of your platforms a violation of terms of service? 
It happens sometimes. I don't know if you're aware of it. Mr. Edgett. We work with law enforcement all the time on matters like that, and balance free speech rights, obviously, with those of--obviously, an imminent threat, we would--we would take very seriously and act on right away. Senator Rubio. I guess my point is, personal identifying information is illegal to post it, right? It threatens someone's identity. It's also illegal to steal and reveal classified information. And I'm just curious if that's also a violation of terms of service, since in fact it could have real-life implications on individuals who could be compromised because of that release. Do we have any evidence that Russian accounts uploaded U.S. voter registration data and used it in conjunction with custom audiences to target specific voters by name? Do any of you have any information that registered voter data was uploaded and used to customize advertising or messaging to individual voters? Mr. Edgett. We haven't seen evidence of that. Mr. Stretch. The same is true for Facebook. Senator Rubio. And my last question is--the scope of this was not limited to 2016 or even the presidential race. As an example, I think, with the help of some of the companies here, we've identified Being Patriotic, LGBT United, United Muslims of America, Stop A.I., Heart of Texas; all were used to attack my campaign during the primary. What's interesting, though, is on the 3rd of July and on the 8th of August, after the primary but when I chose to run for reelection, one of those, LGBT United, attacked again. So my point being these operations, while we're talking about the 2016 presidential race, they're not limited to 2016, and they were not limited to the presidential race, and they continue to this day. They are much more widespread than one election. It is about our general political climate. Is that correct? Mr. Stretch. I would certainly agree with that statement, Senator. Senator Rubio. Okay. Thank you. 
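The geoblocking, or I.P. blocking, that Mr. Stretch described in this exchange--hiding reported content only in the country where it violates local law, while leaving it visible elsewhere--can be sketched in simplified form. This is an illustrative sketch with invented names and a toy lookup table, not Facebook's actual implementation; real systems resolve IP addresses through a geolocation database and apply far more nuanced policy review.

```python
# Simplified sketch of country-level content restriction ("geoblocking").
# All names and tables here are hypothetical, for illustration only.

# Content IDs mapped to the set of country codes where a legal takedown
# notice applies (e.g. content reported as locally illegal by a government).
LOCAL_RESTRICTIONS = {
    "post-123": {"DE"},        # e.g. content illegal under German law
    "post-456": {"PK", "SA"},
}

def country_of(ip_address: str) -> str:
    """Placeholder for an IP-to-country lookup (e.g. a GeoIP database)."""
    demo_table = {"203.0.113.7": "DE", "198.51.100.9": "US"}
    return demo_table.get(ip_address, "US")

def is_visible(content_id: str, viewer_ip: str) -> bool:
    """Content stays on the service; it is hidden only where restricted."""
    restricted_in = LOCAL_RESTRICTIONS.get(content_id, set())
    return country_of(viewer_ip) not in restricted_in

# The same post is hidden for a viewer resolved to Germany but remains
# visible for a viewer resolved to the United States.
print(is_visible("post-123", "203.0.113.7"))
print(is_visible("post-123", "198.51.100.9"))
```

The key property testified to is that the content "remains available on the service": restriction is per-viewer-country, not a global takedown.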
Chairman Burr. Senator Wyden.
    Senator Wyden. Thank you, Mr. Chairman. With the current fascist leadership of Russia enthusiastically undermining our democracy, America must defend the values that made us great and aggressively confront this espionage and the enemies that sponsor it. The tools of this espionage range from political ads to issue ads, from sock puppets to fictional news stories, and from rallies to protests to marches, all presented under false pretenses. While the Supreme Court has ruled that Congress may place some limits on strictly political advertising, the other activities I just mentioned are beyond the reach of government and government regulation in a free society. To fight back against this espionage, Americans have to rely on our marketplace of ideas and the institutions that support it.
    Gentlemen, today you three represent those institutions. Now, you've discussed your response to these attacks, but it is self-evident, in relation to the power your platforms now have, that in the past election you failed. And this is especially troubling because the same Federal law that allowed your companies to grow and thrive, the Section 230 law, gives you absolute legal protection to take action against those who abuse your platforms to damage our democracy. The same algorithms that power your companies can be used to identify the behavior indicative of these attacks, including fake accounts and fake news stories, and to identify the source of money purchasing your ads.
    Now, I'm of the view that ads are a small part of a much bigger problem. Fake users posting stories on Facebook, videos on YouTube, links on Twitter can be used by foreign and domestic enemies to undermine our society. You need to stop paying lip service to shutting down bad actors using these accounts. You've got the power, and Congress has given you the legal protection, to actually act and deal with this.
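The behavioral signals Senator Wyden alludes to--patterns that distinguish automated accounts from real users--can be illustrated with a toy scoring heuristic. The signal names and thresholds below are invented for illustration; production systems use many more signals and machine-learned models rather than a fixed additive score.

```python
# Illustrative sketch of behavior-based (rather than content-based)
# detection of automated amplification. All thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    tweets_per_hour: float       # sustained posting rate
    account_age_days: int
    duplicate_text_ratio: float  # share of posts identical to other accounts'
    failed_challenges: int       # unanswered "are you human?" challenges

def automation_score(a: AccountActivity) -> float:
    """Crude additive score; higher means more bot-like behavior."""
    score = 0.0
    if a.tweets_per_hour > 30:
        score += 0.4
    if a.account_age_days < 7:
        score += 0.2
    if a.duplicate_text_ratio > 0.5:
        score += 0.3
    score += min(a.failed_challenges, 3) * 0.1
    return score

def should_challenge(a: AccountActivity, threshold: float = 0.5) -> bool:
    """Flag the account for a verification challenge, not outright removal."""
    return automation_score(a) >= threshold

suspect = AccountActivity(tweets_per_hour=120, account_age_days=2,
                          duplicate_text_ratio=0.8, failed_challenges=1)
print(should_challenge(suspect))
```

Flagging triggers a challenge rather than removal, mirroring the testimony later in the hearing that suspicious accounts are "challenged" to confirm a real person is behind them.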
So I want to start with a couple of quick yes or no questions and just go right down the row for the three of you. Mr. Walker, are you satisfied with your platform's response to foreign interference in the 2016 election? Yes or no? Just yes or no. Mr. Walker. We--we are constantly doing better. Senator Wyden. Is the answer no? Mr. Walker. We could have done more, but I think we are doing more today and have done more since the election. Senator Wyden. I'll take that as a no. Mr. Edgett. Mr. Edgett. No, we need to do more. Senator Wyden. Mr. Stretch. Mr. Stretch. The same is true. Senator Wyden. Okay. Do you all have--and we'll start with you, Mr. Walker--the technical ability and resources to better respond to future misinformation campaigns? Yes or no? Mr. Walker. Yes. The safe harbors and the Good Samaritan laws are important underpinnings for all of this. And we are doing more, we have done more to combat fake news---- Senator Wyden. Mr. Edgett, yes or no? Mr. Walker. Yes. Mr. Edgett. Yes. Senator Wyden. Mr. Stretch. Mr. Stretch. Yes, and I would add, though, that I do believe we need information-sharing among industry, as well as working with the government, to enable us to do this effectively. Senator Wyden. All right. Gentlemen, specifically now describe the changes you're going to pursue that respond to not just the ads, but the sock puppets, the hoaxes, and the confidence operations? We'd like to walk out of here knowing the changes you're going to support going forward. Mr. Walker. Mr. Walker. Sure. Let me give you four on the ad side and three on the non-ad side. Senator Wyden. Quickly. Mr. Walker. Absolutely. The transparency report that we talked about for ads; an archive of content that--of all ads' content that's available; icons that make information on the site available to users as to who sponsors an ad; and enhanced verification techniques. 
When it comes to non-ads material, fake news, we're improving our algorithms, our rater guidelines, and the signals we use. We're using fake news fact-check labels to improve users' ability to evaluate fake news, and we're looking at our ads policies to improve and toughen rules against sites that misrepresent their nature.
    Senator Wyden. Mr. Edgett.
    Mr. Edgett. Coming out of the 2016 election and early this year, our CEO asked our entire engineering, product, and design teams, which make up a large majority of the company, to tackle the problem of safety, abuse, and misinformation on our platform, and to drop everything else that we're doing and to figure this out. We formed what we call an information quality team.
    Senator Wyden. Those are three sentences. What are the changes?
    Mr. Edgett. Yes. We formed an information quality team focused on looking at both behavior and content and seeing how we could stop bad actors from using automated activity to amplify their message. We have just announced new transparency rules around not just political ads, but all advertisements, to educate not just American citizens, but our worldwide users. We are also continuing to collaborate with law enforcement and committees like this to make sure we're putting the right----
    Senator Wyden. I heard very few specifics in that answer. Mr. Stretch.
    Mr. Stretch. Senator, let me try four things. First, today there are 10,000 people working at Facebook on safety and security across our security product and community operations teams. By the end of 2018, there will be more than 20,000. Second, we announced last week a series of ad transparency steps, drawing on the ideas in the Honest Ads Act that Senator Warner talked about earlier, that will bring much greater visibility to advertising generally and particularly to political advertising. Third, we are tightening our ad policies to limit divisiveness and to limit violence in the use of our ad tools.
And fourth, we're standing up an organization to enable better industry sharing of threat information and also to help us work better with law enforcement so that we can share information and expertise in order to address this threat going forward.
    Senator Wyden. My last question is, it's not clear that you all or the public understand the degree of this sophisticated and manipulative intelligence operation. The Russians created Facebook pages, posted YouTube videos, all trying to appeal to specific audiences. Some of the content wasn't fake. It was intended to gather an audience and gain trust. It told people things that they were already receptive to so that, after gaining that trust, you could execute the espionage--for example, by gathering liberals and then discouraging them from voting. Mr. Stretch, I'd like you to confirm that this technique was used in the election.
    Mr. Stretch. Senator, we've provided all the information we can about the content that we've identified on the system. I think to make the sort of assessment you're describing really requires this Committee's work to look at all of the online and offline activity that would be necessary to effectuate a campaign like that.
    Senator Wyden. My time has expired. We have specific cases where that was used. I would like in writing, within a week, what you're doing about it. Thank you, Mr. Chairman.
    Chairman Burr. Senator Collins.
    Senator Collins. Thank you, Mr. Chairman. It is very clear that Russian activities on your social media platforms go far beyond the paid political ads that appeared last year. The primary purpose of Russia's active measures is to exploit and to aggravate the divisions in American society and to undermine public confidence in our democratic institutions. And those efforts have not stopped. They continue to this very day. As Senator Risch has pointed out, no area of the country is immune.
So let me give you an example, and we've passed it out to you, by describing three unpaid posts from Facebook pages created by the Russians that refer to the governor of Maine, Paul LePage.
    [The material referred to follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    Senator Collins. There are two negative posts related to the governor on one Russian Facebook page, called Williams & Calvin, that appeared in August of 2016. There's a video of comments made by Maine's governor from that same month. And the post in part says the following: ``LePage called up white people to kill blacks. After this statement, we can clearly see what kind of people serve in American government: white racist supremacy--that's for sure. The only way to avoid mass killings of black people is to fire LePage and all who have the same racist beliefs from American government.''
    There was a second post on the same website about 10 days later. Let me just read part of that: ``It is not a secret that America is the country of white supremacy, and people like LePage must be replaced from their positions in the government. America doesn't need racist politicians. Black people are tired of white supremacy.''
    Then, this year--this year, in August of 2017--Maine's governor was the subject of a positive post on a different Russian-backed Facebook page, called Being Patriotic. In this case, the post defended comments that the governor made at the time about Confederate monuments. The post ends with its own incendiary conclusion. It says: ``When even the governor is not safe from leftist haters, then what can we say about ordinary citizens? Liberals are now acting like terrorists. They try to intimidate everyone who disagrees with them. Hope our police will take appropriate measures against these cowards.''
    Now, let me point out something. Our governor was not up for reelection last year; he is term-limited. He cannot run for reelection as governor.
And yet these comments were made both last year and just a few months ago. And the posts are just three among 80,000 that reveal the Russian playbook of playing both sides off against each other and of sowing discord and division with inflammatory rhetoric. And there were other posts that involved lower-level officials in the State of Maine that we found as well. And the Russians continue to push this kind of divisive rhetoric to this very day. So my question to you is: what are you, as American companies, doing to effectively counter unpaid content posted by the Russians that is clearly designed to specifically polarize and anger the American people? And I would argue that you have a special obligation here, given your reach in American society and the fact that you are patriotic American companies. Mr. Stretch. Mr. Stretch. Senator, we agree that we have a special responsibility here. We value the trust that users place in our services. And when they show up to connect with friends and family and to discuss issues, they need to know that the discourse they see is authentic. What is so painful about this type of content is that it exploits truly and passionately held views and then inflames them to create more discord and more distrust. To prevent this, we are investing much more heavily in authenticity. We believe that one of the cornerstones of Facebook is that users are known by their real names and so that creates a level of authenticity in the discourse that users can trust when they come to the platform. This sort of content erodes that trust and it's contrary to everything we stand for as a company. As Americans, it's particularly painful because it is so exploitative of the openness of our society. And so the investment we are making and the commitment we are making is to ensure that our authenticity policy is more effectively policed and monitored to prevent exactly this sort of behavior. Senator Collins. Mr. Edgett, what is Twitter doing? Mr. Edgett. 
We're focusing on a number of things. The one where we see the greatest strides, and where we see the greatest effect and protections for our users, is on the amplification side, in the use of automated accounts. These bad actors need an audience for their voice, and generally they don't have a followership. So they are trying to use activity on the platform to automate and amplify their voices. So we're looking behind the message and behind the content at the behavior of doing that, and have been successful in doubling our effectiveness at doing that, year over year--looking at the behavior, taking down millions of accounts every single week because they're not actually humans, they're actually----
    Senator Collins. Well, this just happened in August of this year. This isn't something old.
    Mr. Edgett. Right. We continue to try to stay ahead of their activities. We're also looking at things like coordinated human activity, where real people are coming together, like the IRA, and actually putting out divisive content like this. We are able to link those accounts and take action on them as we learn not just what they're saying, but what's behind it. What's behind it is something only we can see on the Twitter side. We've made great strides on the terrorism front in that regard, and we believe we can apply the same techniques and methodologies to this problem.
    Senator Collins. Mr. Walker.
    Mr. Walker. Thank you. We're also very concerned about this kind of deceptive and divisive content. We remove it immediately from our services, and we have removed these. Going forward--and actually already--we have engaged in a number of efforts to avoid the problem of fake news: changes to our algorithms, improving the training that our raters get in evaluating quality, labeling fake news where we can find it, working with third parties, et cetera.
    Senator Collins. Thank you.
    Chairman Burr. Senator Heinrich.
    Senator Heinrich. Thank you, Chairman. Mr. Stretch, I want to start with you.
Last month, President Trump called Russian-purchased Facebook ads a hoax. I've looked at those Russian-sponsored Facebook ads. I certainly hope you've had a chance to review them. Are they in fact a hoax? Mr. Stretch. All the information we've provided to the Committee did run on Facebook, so---- Senator Heinrich. It's a yes or no answer. I know you're a lawyer; it's hard. But---- [Laughter.] Mr. Stretch. No. The existence of--those ads were on Facebook and were not a hoax. Senator Heinrich. So in the interest of just clearing this up and giving the American people some transparency into this, so that they can see the nature of what typically gets used to divide the American populace, why not simply release those Russian-financed Facebook ads to the public? Redact the pictures, but release the contents, so that people can understand how this works? Mr. Stretch. Senator, we believe this Committee is really best placed to determine what information to release. We stand ready to assist in that, in that effort. We agree that the more people can see the type of content that ran and the divisions that were sought to be exploited, the better. Senator Heinrich. Well, I think we have a disagreement on this Committee as to whether or not to release those. I would urge all of you as platforms to consider that kind of activity as well. I want to move on to Russia's RBC magazine, which recently revealed that St. Petersburg's troll factory employed hundreds of trolls, including 90 at the, quote-unquote, ``U.S. desk'' alone, and spent about $2.3 million in 2016 to meddle in U.S. politics, actually contacted U.S. activists directly and offered them thousands of dollars to organize protests. Your platform--your platforms are all global. They're not just U.S. platforms. And there is substantial open-source reporting right now suggesting that similar divisive activity may be occurring, for example, in the Catalonian region of Spain right now. 
What are each of you doing right now to make sure that your platforms aren't being used in similarly divisive ways across the globe, to sow discord in Western democracies? And in particular with the Catalonian example, are you familiar with what you're doing there? Mr. Stretch. Senator, we are focused on preventing this form of abuse globally. So when we say we have an obligation to protect the platform from being used for abuse, that's a global obligation. So we are focused on elections as they appear on the calendar, including the Catalonian election that occurred recently, as well as the other elections that are on the calendar going forward. We're focused on ensuring that all actors on the platform comply with local law, as Mr. Walker suggested earlier, and we are focused on making sure that any foreign threat actors that are seeking to undermine democracy anywhere are removed from the platform. Senator Heinrich. Have each of you taken, had to take corrective action against actors in that debate who are not who they purported to be? Mr. Stretch. Mr. Stretch. Senator, the key I'd say progress we've made is---- Senator Heinrich. That's a yes or no, once again. Mr. Edgett. Mr. Edgett. I believe so, but I'll need to follow up with your staff. Senator Heinrich. Thank you. Mr. Walker. Mr. Walker. We're constantly removing fraudulent and deceptive accounts from our services. I'm not familiar with the specifics there. Senator Heinrich. You can get back to us. Mr. Edgett, given the discussion we've had about automated Twitter accounts and bots--and the range is obviously very wide, but we know that's a problem. And you made an assertion earlier that I want to come back to and just make sure it's accurate. Do you require at Twitter, by service agreement, that profiles are linked to real names, real people, or some other way to make sure that those go back to real human beings, from Social Security numbers to other unique identifiers? Mr. Edgett. We do not. 
We require some information at sign-up, but we don't require you to verify your identity. We have services that verify identities on the platform.
    Senator Heinrich. Why on Earth not?
    Mr. Edgett. Because we see the power of Twitter being used by folks like political dissidents, embedded journalists in difficult countries who use the ability to not have to identify themselves by name, like on other platforms, to speak their truth to power. We see that----
    Senator Heinrich. So the reason is for social dissidents and people in third world countries or where there is a hostile government regime? It is not your business model? You're not reliant, for example, on those automated accounts to generate revenue?
    Mr. Edgett. We don't rely on--there's some good automation on the platform, and I'm happy to talk about that. But we do not rely on this, the bad, malicious automation that we're talking about here.
    Senator Heinrich. If I were running a political campaign today and I were to advertise on local television, on cable television, in print or on the radio, or even through the mail, I would have to have a ``paid for by'' disclaimer on those ads. Now, Mr. Walker I believe has already addressed this issue. But is there any policy reason that online social media ads, given how effective and influential they have clearly become, shouldn't meet that same level of transparency?
    Mr. Edgett. We agree with the transparency efforts and last week announced that we're creating a transparency center, not just for political ads, which will have even more information than all ads, but a transparency center for all ads, so that you can see not just the ad that you've seen and why it's been targeted to you, but all of the other ads created from that same advertiser.
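The disclosure elements Mr. Edgett describes for the transparency center--advertiser identity, a ``paid for by'' label, spend, and targeting criteria--could be modeled as a simple record. The field names below are assumptions for illustration, not Twitter's actual transparency-center schema:

```python
# Hypothetical sketch of the kind of record an ad "transparency center"
# might expose. All field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AdDisclosure:
    advertiser: str
    paid_for_by: str             # the "paid for by" disclaimer text
    is_political: bool
    spend_usd_campaign: float    # spend on this ad campaign
    spend_usd_total: float       # advertiser's spend across all campaigns
    targeting_criteria: list = field(default_factory=list)

    def disclaimer(self) -> str:
        """The short label shown on the ad itself; details live behind it."""
        tag = "Political ad" if self.is_political else "Ad"
        return f"{tag} -- paid for by {self.paid_for_by}"

ad = AdDisclosure(
    advertiser="Example PAC",
    paid_for_by="Example PAC",
    is_political=True,
    spend_usd_campaign=12_500.0,
    spend_usd_total=48_000.0,
    targeting_criteria=["age 18+", "US", "interest: politics"],
)
print(ad.disclaimer())
```

The design point in the testimony is that a character-constrained ad shows only the short disclaimer, with the full record reachable by hovering or clicking through.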
On the election front, you'll also be able to see who's paying for the ad, how much they've spent on this ad campaign and all ad campaigns, and you'll be able to see what the targeting criteria are, so as to better educate around why these ads are on the platform.
    Senator Heinrich. I appreciate that, Mr. Edgett. Mr. Stretch.
    Mr. Stretch. The same is true for Facebook. We are working both on political ad transparency, enabling more visibility into campaign ads by third parties, and also enabling campaigns to meet their disclosure obligations in their online communications.
    Senator Heinrich. Thank you, Mr. Chairman.
    Chairman Burr. Senator Blunt.
    Senator Blunt. Thank you, Mr. Chairman. So, Mr. Edgett, in response to Senator Heinrich's question, there was a lot of information that you could get based on that policy, if you pursue it, like all the other ads they ran, how much. Would you get that by going to another spot? Surely that's not all right there on the ad.
    Mr. Edgett. Obviously we're a character-constrained platform, so we will be identifying very clearly whether or not something is a political ad, so that you can see it right away. And then, depending on whether you're on a web browser or on a mobile phone, you'll have to hover over or click on a spot to then see a sort of full transparency center that gives you all that information right away.
    Senator Blunt. So would you be able, on the ad itself, to go ahead and put enough of a disclosure there that it's clear, when you're looking at the ad, who paid for it and how to find more information out about who paid for it?
    Mr. Edgett. We're still working through the technical details, but believe we'll be able to get that in front of----
    Senator Blunt. Mr. Walker, are you trying to do anything similar to that?
    Mr. Walker. We are. Our idea is to have an icon that a user can click