The method would assist governments and charities that are trying to fight poverty but lack precise, reliable information on where poor people live and what they need, said researchers based at Stanford University in California.

Eradicating extreme poverty, defined as living on less than US$1.25 a day, by 2030 is among the Sustainable Development Goals adopted by United Nations member states last year.

A team of computer scientists and satellite experts created a self-updating world map to locate poverty, said Marshall Burke, assistant professor in Stanford’s Department of Earth System Science.

It uses a computer algorithm that recognizes signs of poverty through a process called machine learning, a type of artificial intelligence, he said. Results of the two-year research effort have been published in the journal Science.

The system shows an image to a computer, “and the computer’s job is to figure out what the image is,” Burke said.

The computer was initially fed household survey data from five African nations - Uganda, Tanzania, Nigeria, Malawi and Rwanda - along with nighttime satellite imagery of the same countries.

Nighttime images are a basic tool for predicting poverty because brighter nighttime lights are associated with higher levels of development.
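To illustrate the nightlights-as-proxy idea, here is a minimal sketch, not the Stanford team's actual model: fit a simple linear regression of a survey-based consumption measure on average nighttime light intensity, then use the fitted line to estimate consumption for places without survey coverage. All numbers below are synthetic and purely illustrative.

```python
# Minimal sketch of the nightlights-as-proxy idea. This is NOT the
# Stanford team's pipeline; all data below are synthetic.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training data: average nighttime light intensity per
# village (from satellite imagery) paired with daily consumption
# reported in household surveys (US$ per day, synthetic values).
light = [0.1, 0.5, 1.0, 2.0, 4.0, 8.0]
consumption = [0.9, 1.1, 1.4, 2.0, 3.1, 5.6]

slope, intercept = fit_line(light, consumption)

def predict(intensity):
    """Estimate consumption for a location with no survey coverage."""
    return slope * intercept if False else slope * intensity + intercept

# Brighter places should be estimated as less poor.
assert slope > 0
assert predict(6.0) > predict(0.2)
```

The limitation that motivates the researchers' more elaborate approach is visible even here: nightlights alone say little about the very poorest areas, which can all be equally dark.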