How do we make the most of the thousands of wildlife images generated by camera traps every day? A new service aims to provide insight and help conservation efforts.

I remember setting our first camera traps in August 1998 for what would become an 8-year tiger-monitoring program in the Bukit Barisan Selatan National Park in southern Sumatra. Ullas Karanth, a Wildlife Conservation Society tiger biologist in India, had recently pioneered the use of camera traps to study the big cats, and we were hoping to introduce these methods to Indonesia.

The plan worked, and we were able to produce some of the first rigorous density estimates of tigers for a Sumatran protected area. Other studies followed, and today conservationists use camera trapping to monitor tigers in almost all of the major protected areas on Sumatra.

In addition to tigers, the traps captured images of Sumatran rhinos, elephants, clouded leopards and Malayan tapirs, as well as 45 other mammal and bird species.

With so much information on the non-target species, or bycatch, of our tiger study, I began to wonder how other programs were using camera traps and how they used the bycatch data. I soon realized that more than 70% of camera trap studies before 2008 were single-species studies, mostly targeting spotted and striped cats, and that very few researchers were using the data related to the other species accidentally caught on camera.

Around the same time, Conservation International was initiating the Tropical Ecology Assessment and Monitoring (TEAM) initiative, under the leadership of Sandy Andelman. A key protocol of that program was camera trapping for terrestrial mammals and birds. Sandy asked me to evaluate the program and make recommendations for its long-term sustainability.

I recommended it narrow its focus and open up to partners. Sandy transformed TEAM into the TEAM Network, inviting WCS and Smithsonian to join the efforts. Soon standardized sampling methods were instituted at all TEAM sites around the world, along with open sharing of the data.

Meanwhile researchers were finding new uses for the bycatch in their own camera trap data. Conservationists increasingly focused on wildlife communities, not just single species. Traps proved a great tool to document species richness, community structure, and the dynamics of local extinction and colonization. Bycatch data that had sat hidden away in filing cabinets and on hard drives, in many cases never examined, suddenly became far more interesting.

As interest in the biodiversity value of camera trap data increased, the technology improved rapidly. Within a decade we went from using an instamatic film camera attached to a motion sensor and housed in a leaky box to sleek, waterproof digital cameras capable of running for months and taking tens of thousands of photos, at half the price.

These parallel developments created a problem in urgent need of a solution. Biologists were drowning in data: camera trap studies that had previously produced hundreds or thousands of photographs now generated hundreds of thousands of images, each of which needed to be identified and catalogued, often along with metadata such as the camera's location.

Two parallel efforts arose to address that problem: one under the guidance of the TEAM Network, and another, called eMammal, from a collaboration between the Smithsonian and the North Carolina Museum of Natural Sciences. These soon gave rise to a new consortium, which we've called Wildlife Insights.

An online service, Wildlife Insights allows anyone, especially researchers and citizen scientists, to upload camera-trap photos, which are then analyzed by the program's artificial intelligence engines to identify the wildlife in the images. So far the AI can identify 614 vertebrate species, a number we expect to grow as experts upload more images and provide their insight into what the photos contain. Images and data are securely archived, and summary information is available to anyone visiting the platform. Meanwhile, anyone signing in to help identify images gains access to even more of the service's tools, while their contributions further improve the AI's abilities and accuracy.

Our goals are twofold. First, we hope this will encourage data sharing and collaboration among researchers and conservationists. Second, we want the public to participate through citizen science projects built on their interest in camera trap photos.

To this end, the Wildlife Insights family includes Google, providing expertise in cloud technology and machine learning for image recognition; Conservation International; Yale University's Map of Life; the North Carolina Museum of Natural Sciences; the Smithsonian's National Zoo and Conservation Biology Institute; the Wildlife Conservation Society; the World Wide Fund for Nature; and the Zoological Society of London.

Wildlife Insights represents the world’s largest effort to organize camera trapping into a “big data” framework — applying cutting-edge tools to streamline data entry, management and analysis and engaging the public to care more deeply about wildlife and act for its conservation.

And it’s already working. Since Wildlife Insights launched in December, almost 4.5 million images have been uploaded, representing 21 countries and 23 organizations.

That’s just the start. Through data sharing on the Wildlife Insights platform, we expect to improve monitoring of exploited wildlife populations, help evaluate progress toward the goals of the Convention on Biological Diversity and the Sustainable Development Goals, and assist communities in managing indigenous territories.

But most fundamentally, we hope to provide a space for thousands, perhaps millions, of people to learn about and participate in science projects on biodiversity across the globe and in our own backyards.

The opinions expressed above are those of the author and do not necessarily reflect those of The Revelator, the Center for Biological Diversity or their employees.