Algorithms, software, and smart technologies have a growing presence in cities around the world. Artificial intelligence (AI), agent-based modelling, the internet of things and machine learning can be found practically everywhere now—from lampposts to garbage bins, traffic lights, and cars. These technologies are also influencing how cities are planned, guiding big decisions about new buildings, transportation, and infrastructure projects.

City-dwellers tend to accept the presence of these technologies passively—if they notice it at all. Yet this acceptance is punctuated by intermittent panic over privacy—take, for example, Transport for London’s latest plans to track passenger journeys across the transport network using Wi-Fi, which drew criticism from privacy experts. If there were more widespread understanding of how these technologies work, citizens would be in a better position to judge what data they’re comfortable with sharing, and how to better safeguard their privacy as they navigate the city.

That’s why, in a recent study, I set out to unpack how some of the algorithms behind AI and machine learning operate, and the impact they have on familiar urban contexts such as streets, squares, and cafes. But instead of trying to explain the mystifying mathematics behind how algorithms work, I started looking at how they actually “see” the world we live in.

How algorithms “see”

If we really want to see what machines see, we need to force ourselves to think like computers. This means discounting everything we usually perceive with our senses and rationalize through our brains, and instead going through a step-by-step process of data acquisition. This is exactly what we tried to demonstrate with The Machine’s Eye: a simulation that shows the steps through which a hypothetical AI system “reads” a physical environment and is able to profile the people in it.

The simulation starts from a pitch-black situation—with no information—and gradually gathers data from a number of interconnected devices: smartphones, microphones, CCTV, and other sensors. It begins by detecting and organizing information directly from the physical environment: the dimensions of the room, the type of establishment, the number of people inside, their languages, accents, genders, and types of conversation. It then cross-references these data with what can be found about each individual online, mining social media, online posts, databases, and personal pages.

Our AI machine is finally able to bring all these data together into an accurate profile of a targeted individual, inferring the likelihood of personal relationships, family prospects, life expectancy, productivity, or “social worth”—that is, their contribution to society in financial and social terms, within the context of this fiction.
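The three stages described above—sensing the physical environment, mining online traces, and merging both into a profile—can be sketched in code. This is a purely illustrative sketch: every class, field, value, and the "social worth" formula below is hypothetical, invented here to mirror the fictitious data of the simulation, not anything the real system computes.

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentReading:
    """Stage 1: data sensed directly from the physical space (hypothetical)."""
    room_area_m2: float
    establishment: str
    occupants: int
    languages: list[str] = field(default_factory=list)

@dataclass
class OnlineTrace:
    """Stage 2: data mined from social media, posts, and databases (hypothetical)."""
    name: str
    posts: int
    connections: int

def build_profile(env: EnvironmentReading, trace: OnlineTrace) -> dict:
    """Stage 3: merge sensed and mined data into a single profile.

    The 'social worth' score is an invented placeholder, not a real metric.
    """
    return {
        "name": trace.name,
        "seen_in": env.establishment,
        "likely_social": env.occupants > 1 and trace.connections > 100,
        "social_worth": round(trace.posts * 0.1 + trace.connections * 0.01, 2),
    }

# Fictitious example, in the spirit of the simulation's made-up data
env = EnvironmentReading(room_area_m2=45.0, establishment="cafe",
                         occupants=12, languages=["en", "it"])
trace = OnlineTrace(name="Person A", posts=320, connections=850)
print(build_profile(env, trace))
```

The point of the sketch is structural: each stage only adds data, and the "inferences" at the end are just arithmetic over whatever was collected—which is why the quality (and ethics) of a profile depend entirely on what the sensors and data miners feed in.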

In this simulation, all data are fictitious—the main purpose of the video is to raise awareness about what a truly connected internet of things, operated by an advanced AI system, could hypothetically do.