Facebook, Instagram, and Twitter have limited the amount of data that’s accessible to Predictim, a California-based startup that uses machine learning to vet potential babysitters. The social networks took action against the company after a report by The Washington Post last week detailed its methods, attracting widespread criticism.

Predictim claims to use “advanced artificial intelligence” to judge a babysitter’s suitability. This includes combing through an individual’s Facebook, Instagram, and Twitter histories before offering an automated assessment of their character. The company claims it can predict whether the individual is a drug user, if they might bully or harass others, and even if they have a “bad attitude.”

Experts have criticized Predictim’s service as unscientific, noting that machine learning is notoriously unreliable when it comes to parsing complex data like human speech. AI might be good at recognizing objects in photos or digitizing handwriting, but it can’t reliably interpret nuances in tone and speech like sarcasm or jokes. Experts in data policy also noted that Predictim’s software doesn’t explain how it comes to its decisions, which means a potential babysitter could lose a job without ever knowing why and without any way to contest the assessment.

Facebook, Instagram, and Twitter have all taken action against Predictim over the last few weeks, report The Washington Post and BBC News. Facebook says it dramatically limited the company’s access to user data on Facebook and Instagram after it violated a ban on developers using this information to vet job candidates. Twitter revoked Predictim’s access to its API (which is used to access data at scale, rather than reading individual profiles) earlier this week. A spokesperson told the Post, “We strictly prohibit the use of Twitter data and APIs for surveillance purposes, including performing background checks.”

“Everyone looks people up on social media, they look people up on Google.”

Predictim’s chief executive and co-founder, Sal Parsa, said the company had done nothing wrong. “Everyone looks people up on social media, they look people up on Google,” Parsa told BBC News. “We’re just automating this process.”

Parsa also suggested that the social networks, which scan users’ data in order to target ads to them, have their own motivations for blocking his software. “Twitter and Facebook are already mining our data,” Parsa told the Post. “It’s right there, user-generated data. Now there’s another start-up that’s trying to take advantage of that data to help parents pick a better babysitter, and make a little money in the process.”

Regardless of these actions, wider questions regarding the legality of automated background checks are still up in the air. Some experts have suggested that Predictim’s software might violate state-level bans on employers demanding access to the social media accounts of job applicants. Other social networks want to block this sort of data scraping altogether. LinkedIn is currently in the middle of a legal battle against third-party site HiQ, which scraped its data to predict when employees might leave their jobs.

Meanwhile, companies continue to create more AI-powered screening services, looking not just at social media activity but also at other metrics, including individuals’ interview performance. Experts worry that handing over crucial decisions to algorithms that cannot explain their reasoning or understand human behavior is unwise, while the firms selling these services say a machine-driven approach is more thorough and impartial.