While the Serengeti National Park is home to some of the world’s most breathtaking wildlife, it’s also home to 225 hidden cameras — known as camera traps — that unobtrusively document the prides of lions and packs of hyenas traversing the Tanzanian savannah. Documenting these animals and their whereabouts is essential for monitoring endangered populations, preserving biodiversity, and seeking out new phenomena, or even species, that have yet to be discovered. And until now, it’s also been a huge pain.

As you might imagine, this necessary work is incredibly laborious. Combined, these cameras have yielded three million images, all of which had to be manually labeled by 30,000 citizen scientists as part of the Snapshot Serengeti project, an effort to arm ecologists with the information they need to do their research. But thanks to a recent breakthrough, much of that work may soon be outsourced to animal-identifying A.I.

That’s according to an international team of researchers who recently devised a way to have artificial intelligence solve a major ecological challenge: How do we turn the millions of Serengeti photos into usable data in a timely fashion? They detailed their work in a June 5 paper published in the journal Proceedings of the National Academy of Sciences. First author Mohammad Sadegh Norouzzadeh tells Inverse this research could spare ecologists mundane labeling tasks, giving them more time to focus on conservation efforts.

“We can save them time and provide them with information quickly and accurately,” he explains. “The current process they’re using is very slow, so it can give them outdated information. Machine learning can supply up-to-date information so they can plan for conservation efforts. That’s why we think this is such a critical advancement for ecology.”

It would normally take volunteers from Snapshot Serengeti roughly two to three months to label six months’ worth of images. Norouzzadeh’s system can go through the same batch of pictures in under an hour with human-like accuracy. But don’t think this will eliminate the need for citizen scientists entirely, at least for now.

For starters, the A.I. mastered the art of labeling by analyzing the three million pictures the volunteers had pre-labeled, meaning we’re not yet at a point where we can outsource the work completely. Norouzzadeh says the system is capable of processing 99.3 percent of camera trap images, but it still needs a human’s touch for the remaining 0.7 percent.
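The split Norouzzadeh describes — let the model label the images it’s confident about and route the rest to humans — can be sketched as a simple confidence threshold. This is an illustrative sketch, not the team’s actual pipeline; the function names, threshold value, and data shapes are all assumptions:

```python
# Illustrative confidence-threshold triage for camera trap predictions.
# A real system would feed in a neural network's outputs; here the
# (image_id, label, confidence) tuples are hypothetical stand-ins.

def triage(predictions, threshold=0.99):
    """Split predictions into auto-labeled images and a human-review queue."""
    auto_labeled, needs_human = [], []
    for image_id, label, confidence in predictions:
        if confidence >= threshold:
            auto_labeled.append((image_id, label))  # trust the model's label
        else:
            needs_human.append(image_id)            # route to citizen scientists
    return auto_labeled, needs_human

preds = [
    ("img_001", "lion", 0.998),
    ("img_002", "hyena", 0.72),   # low confidence -> human review
    ("img_003", "empty", 0.995),
]
auto, human = triage(preds)
```

With a high threshold, the vast majority of images get labeled automatically while the ambiguous minority — the equivalent of the 0.7 percent above — still lands in front of a person.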

“What we’ve actually done is make the task more challenging for citizen scientists,” he says. “Our system can automatically label the easy images, but we still need humans to identify more complex information, like what the animal is doing in the image or its age, something our system isn’t currently capable of.”

Still, Norouzzadeh believes this is a critical first step. In the future, he hopes to combine the information extracted from these photos with the precise date and time they were taken. This could enable the A.I. to predict migration patterns of certain animals in order to give ecologists a much deeper understanding of what they’re looking at on a screen.

Providing ecologists with data less than an hour after a photo is taken would let them take action faster than ever before. That speed could be used to protect animals from poachers or to spot invasive species before they have a chance to change an ecosystem. Furthermore, accelerating nature documentation could enable trailblazing evolutionary discoveries whose data would otherwise have taken months, if not years, to simply gather.

Turns out that A.I. and nature aren’t so far apart after all.