EVEN your selfies and brunch photos aren’t safe from the long arm of Facebook, it was revealed overnight, as the tech giant admitted it had taken billions of photos from Instagram accounts around the world to boost its own research.

More than 3.5 billion photographs were harvested from the photo-sharing platform without users’ knowledge, chief technology officer Mike Schroepfer told the audience at the company's annual developers' conference F8, revealing they had been used to enhance the company’s artificial intelligence technology.

Mr Schroepfer said Facebook was overwhelmed with so much dangerous material, “like offensive content, spam, hate speech, fake accounts, fake news, clickbait and more”, that it had become too much for human moderators to regulate.

Instead, the company was creating an artificially intelligent moderation system, he said, to detect inappropriate images on its website.

To speed up its development — by “100 times”, he said — Facebook harvested images shared on Instagram with hashtags, and fed the photographs into its own system over 22 days.

“We built some breakthrough technology that takes publicly available, hashtagged images at an unprecedented scale,” Mr Schroepfer told the crowd.

“We require new breakthroughs, and we require new technologies to solve problems all of us want to solve.”

The images taken from Instagram, which Facebook bought for $US1 billion in 2012, included not only photographs of food, as shown to the crowd, but also more personal images such as family portraits.

Neither Instagram nor Facebook users were warned of the practice before the company mined their photos.

Facebook artificial intelligence and machine-learning director Srinivas Narayanan called its new approach “incredibly cool”, and revealed that using people’s personal images gave Facebook an advantage over fierce rival Google, delivering a 13.6 per cent improvement in recognising images.

Mr Narayanan said Facebook technology could now recognise the content of images with 85.4 per cent accuracy, compared to Google’s 79.2 per cent.

But he said it was still not good enough to recognise all “clickbait, engagement-bait, pornography, violence, and other inappropriate content” that flooded the social network.

“Some content is still more difficult for AI to understand,” Mr Narayanan said.

“For example, it’s helping to detect hate speech but humans need to review it to understand the intent, the subtleties of language, and context.”

Facebook’s unexpected Instagram photo raid comes just weeks after the biggest data scandal in the company’s history, when it was revealed the social network shared the private details of 87 million users with a researcher, who sold them to political consultancy Cambridge Analytica.

The firm, which allegedly used the personal information to influence the 2016 US election, revealed it was shutting down today after it “determined that it is no longer viable to continue operating as a business” in the wake of the scandal.

In a statement, the company said it had “been vilified for activities that are not only legal but also widely accepted as a standard component of online advertising”, but would continue to co-operate with investigators looking into its operations.