Diane Abbott has said she experiences more racism now than at any time in her 35 years in politics, as research has found black female politicians and journalists are almost twice as likely as their white peers to be abused on Twitter.

The shadow home secretary urged Twitter to end anonymity for its users, and said the culture of abuse was being reflected in mainstream media and political discourse, with an increase in “dogwhistle” politics.

Her comments came as Amnesty International released the results of crowdsourced research that found black women in journalism and politics were 84% more likely than white women to be mentioned in abusive tweets.

The report warned that such abuse had a “chilling effect” on freedom of expression by women online and “undermines women’s mobilisation for equality and justice – particularly groups of women who already face discrimination and marginalisation”.

Abbott said: “I never had this scale of abuse when I first came into politics and racism was an issue then as now, but it’s the anonymity and the ease of Twitter which has put racists into overdrive.

“The first thing my staff have to do in the morning is go online and delete and block all the stuff … And it feeds on itself: people see other people peddling racist abuse, so they think they can as well and they feel almost strengthened in their wish to do so.”

Previous research by Amnesty found that Abbott received almost half (45.1%) of all the abusive tweets sent to female MPs ahead of last year’s general election. As well as general derogatory and racist abuse, she regularly received death threats and rape threats, she said.

And she warned that racism online was being reflected in media and politics, “not in the same visceral way, but I think that it finds an echo in mainstream coverage and that’s very troubling”.

She pointed to Conservative attack adverts posted on billboards in the north of England during the last election that targeted her and Jeremy Corbyn. “Not Jeremy and John McDonnell, which would have been the obvious thing to do, but me and Jeremy,” she said. “It’s dogwhistle politics.”

Volunteers for Amnesty’s “Troll Patrol” project analysed 228,000 tweets sent during 2017 to 778 female politicians and journalists from across the political spectrum in the UK and US. They found that about one in every 14 of them contained abusive or problematic language – content that either promoted violence against specific groups or was hurtful or hostile.

Disaggregating the data showed that “left-leaning” politicians – such as Democrats in the US and Labour MPs in the UK – were 23% more likely to be targeted for abuse than those from the right. That trend was reversed for journalists, with those working for rightwing publications – such as the Daily Mail, the Sun or Breitbart – receiving 64% more abusive tweets.

The research was carried out after Twitter refused to share data on reports of abuse against women and other groups. Kate Allen, Amnesty UK’s director, said: “Twitter is failing to be transparent about the extent of the problem … The company must take concrete steps to properly protect women’s rights on the platform.”

The findings come ahead of a government white paper – planned for early next year – that is expected to propose statutory regulation for social media. Jim Killock, executive director of the Open Rights Group, which campaigns for rights online, said all social network platforms had difficulties in distinguishing between bullying and robust comment.

“Twitter does need to make sure the extremes of abusive behaviour are tackled,” he said. “If these are allowed to continue, it undermines the arguments for protecting free expression and keeping state regulation away from speech.”

Vijaya Gadde, legal, policy, and trust and safety global lead at Twitter, said the company was committed to “improving the collective health, openness, and civility of public conversation” on the platform.

“Our abusive behaviour policy strictly prohibits behaviour that harasses, intimidates or silences another user’s voice,” she said. “We are also transparently investing in better technology and tools to enable us to more proactively identify abusive, violative material, to limit its spread and reach on the platform and to encourage healthier conversations.”