Twitter has not provided the House and Senate Russia investigations with any additional Kremlin-backed imposter accounts and bots since at least Nov. 1, The Daily Beast has confirmed.

The lack of new disclosure comes as evidence continues to mount that inauthentic Russian activity continues apace on the microblogging platform.

Twitter first identified 201 non-bot accounts tied to the St. Petersburg-based troll farm known as the Internet Research Agency on Sept. 28. Barely a month later, for a Nov. 1 congressional hearing, the company increased that figure more than tenfold, to 2,752, in addition to disclosing 36,746 Russia-linked bot accounts involved in election-related tweets. Twenty days after that, however, Twitter has yet to provide an updated count, let alone specific propaganda accounts, to legislators, three sources familiar with the inquiries tell The Daily Beast.

While not disputing The Daily Beast’s story, a Twitter spokesperson said in a statement that the company is “continuing to work closely with congressional investigators to provide information relevant to their inquiries, consistent with our policies and federal privacy rules. We aggressively enforce our policies and, as appropriate, take action on content that violates our terms of service.”

The spokesperson continued: “As we noted in our testimony before Congress, we are deeply concerned by Russian state-sponsored misuse of social media to influence the 2016 U.S. election and believe that activity of this kind is unacceptable.”

The ability of congressional investigators to fully understand the extent of Russian interference in the 2016 election relies substantially on obtaining a comprehensive picture of the Kremlin’s manipulation of social media. Math alone suggests the accounts Twitter has identified are not the sum total of Kremlin propaganda on its platform.

In Senate testimony on Nov. 1, Twitter general counsel Sean Edgett estimated that “less than five percent” of Twitter’s users are bots. With a monthly user base estimated at 330 million people, that works out to approximately 16.5 million bot accounts, which are only one form of inauthentic or propagandistic account.

Not all the bots are linked to Russia. But thus far Twitter has revealed only about 36,000 of those bots, nearly three orders of magnitude below Edgett’s estimate. And fewer still are the manned imposter accounts that continue to insert Kremlin propaganda into American political discourse.

In the November hearing, Sen. Mark Warner (D-VA) noted that researchers not employed by Twitter estimate that the company’s automated inauthentic accounts are closer to 15 percent of its user base, or nearly 50 million accounts. Warner made clear he wanted Twitter to dive deeper and report back.
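For readers who want to check the arithmetic, the estimates above follow directly from the article’s own figures. A quick sketch (the 5 percent and 15 percent bounds are treated here as point values, which is an approximation):

```python
# Sanity-check of the bot estimates cited in the article.
# All inputs are the article's own figures; nothing here is Twitter data.
monthly_users = 330_000_000          # Twitter's estimated monthly user base
disclosed_russia_bots = 36_746       # Russia-linked bots Twitter reported

edgett_bots = monthly_users * 5 // 100       # upper bound of "less than 5%"
researcher_bots = monthly_users * 15 // 100  # outside researchers' ~15%

print(f"Edgett estimate:     {edgett_bots:,}")      # 16,500,000
print(f"Researcher estimate: {researcher_bots:,}")  # 49,500,000
print(f"Disclosed bots are {edgett_bots / disclosed_russia_bots:.0f}x "
      f"below even Edgett's own estimate")          # ~449x
```

Even taking Twitter’s lower bound at face value, the accounts disclosed to Congress are a few hundredths of a percent of the bots the company itself believes exist.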

“As I emphasized during our hearing, I remain concerned that the companies still have not discovered, much less countered, the full extent of Russian disinformation and broader exploitation of their platforms,” Warner, the senior Democrat on the Senate intelligence committee, told The Daily Beast on Monday. “Academics and independent researchers continue to, retrospectively, uncover evidence that the reach of this disinformation exceeded the estimates provided by Facebook and Twitter. And these researchers continue to identify bad actors exploiting the same, established playbook to distort public discourse and elevate fake news.”

Even Kremlin-tied accounts disclosed by Twitter do not necessarily stay banned from the platform when they reappear under new names. @Jenn_abrams, or “Jenna Abrams,” was among the imposter accounts Twitter disclosed to Congress. “Abrams” sprang back up under a new name earlier this month, its identity confirmed by the same WordPress account tied to previous troll farm activity. And Twitter only shuttered the new account after CNN pushed the company on the Russian troll account’s weeks-long renaissance under the new handle: @RealJennaAbrams.

Reporters at CNN let the company know of the Russian troll farm-linked account’s revival and showed proof it was run by the same users as before. Twitter did not comment to CNN and took no action on the account until after CNN’s article was published.

“Twitter told CNN when we first contacted them about the account that they were looking into it,” a CNN source told The Daily Beast. “They removed the account a few minutes after we published the story and contacted CNN to confirm they had done so.”

In the November congressional testimony (PDF), Edgett said that Twitter was conducting an ongoing “retrospective analysis of activity on our system that indicates Russian efforts to influence the 2016 election through automation, coordinated activity, and advertising.”

Edgett indicated that algorithms detecting patterns associated with automated tweets—not all of which are necessarily malicious or a violation of the company’s terms of service—play an outsized role.

Twitter, Edgett told Congress, is built to detect such activity “at the account creation and login phase” and is “bolstered by internal, manual reviews conducted by Twitter employees.” The company further supplements its detection efforts with “user reports” that inform algorithmic tweaks. In other words, Twitter relies to some unspecified extent on user and journalistic identification of Kremlin propaganda.

In its statement to The Daily Beast, the Twitter spokesperson indicated that the company faces challenges in making its efforts to remove Kremlin propaganda both scalable and consistent.

Twitter software “identified and challenged” 4 million suspicious accounts worldwide on average per week in October 2017. Three million of those suspicious accounts faced a roadblock “upon signup”—more than double what it was able to identify and prevent last year.

A similar challenge, especially when dealing with millions of accounts, is to balance the reasonable privacy interests of Twitter users with the company’s interest in analyzing and disclosing material to Congress.

“Twitter is a global company, and disinformation does not stop at the U.S. border. The tools we use to fight malicious automation and disinformation have to be both globally applicable and scalable for the Twitter platform and the nearly half a billion Tweets served every day,” the spokesperson said.

“We are constantly driving innovation and machine learning to improve our techniques in fighting these challenges across time zones and national borders. We also have to do all of this in a way that protects user privacy and security.”