Self-driving cars have sparked a “billion dollar war over maps,” but the cars are the most boring thing about it. How do machine intelligences read and write the world? And what Other intelligences deserve our attention?

We now take it for granted that our machines can sense almost any space in the world, from deep sea trenches to the chambers of the human heart. Building on thousands of years of research in physics, war, and natural history, doctors in the 1940s began using ultrasound to scan human and animal bodies. Taking cues from dolphins and bats and Leonardo da Vinci’s early echolocation experiments, naval scientists in the early 20th century learned how to detect mines and submarines with sonar. Early cathode ray studies by Wilhelm Röntgen, Nikola Tesla, and Thomas Edison led to the development of x-ray photography, which enabled radiologists to see broken bones, art historians to read the layers of an oil painting, and physicists to study crystalline structures.

Revolutions in machine sensing have transformed fields like medicine and engineering and creative production, several times over. Now, finally, these technologies are reaching their apotheosis, converging in — sound of balloon deflating — the self-driving car!

We need to ask critical questions about how machines conceptualize and operationalize space. How do they render our world measurable, navigable, usable, conservable?

Sorry if I sound disappointed. According to hype, autonomous vehicles will ease congestion, shorten commutes, reduce fuel consumption, slow global warming, enhance accessibility, liberate parking spaces for better uses, and improve public health and social equity. All well and good. Analysts predict that by 2050 self-driving cars will save 59,000 lives and 250 million commuting hours annually and support a new “passenger economy” worth $7 trillion. Google’s parent company Alphabet is positioning itself to lead that economy, with synergies among Waymo (self-driving cars), Waze (navigation), Sidewalk Labs (urban tech), and Google Maps, plus search and advertising (and maybe law enforcement and private security, too!). And there are hundreds of players, large and small. The industry has swept up cartographers, GIS specialists, roboticists, and engineers and technicians of all kinds, entangling them in what one observer calls “a billion dollar war over maps.”

That’s probably an understatement, because the applications go well beyond self-driving cars. Everything from autonomous warfare to logistics to geo-targeted advertising depends on map superiority. On the friendlier end of the spectrum, maps drawn by AI have potential to transform myriad areas of research and design, and to influence policy and governance, starting with environmental protection and public health.

With the stakes so high, we need to keep asking critical questions about how machines conceptualize and operationalize space. How do they render our world measurable, navigable, usable, conservable? We must also ask how those artificial intelligences, with their digital sensors and deep learning models, intersect with cartographic intelligences and subjectivities beyond the computational “Other.” I’m using “intelligence” broadly here, to encompass the various ways that knowing has been conceived across disciplines and cultures. There are a lot of other Others — including marginalized and indigenous populations and non-human environmental actors — who belong on the map, too, and not merely as cartographic subjects. They are active mapping agents with distinct spatial intelligences, and they have stakes in the environments we all share.

Maps Made for and by Machines

Utopia machine or not, the self-driving car has captured the public imagination like few devices since the smartphone. I suspect that’s because we marvel at, or even envy, its powers of perception, which are largely derived from long-established sensing technologies that are familiar enough to be relatable. The most basic of these is an omnidirectional array of cameras. Teslas, for example, have redundant front cameras that cover different visual depths and angles, so that they can simultaneously detect nearby lane markers, construction signs on the side of the road, and streetlights in the distance. Radar sensors, unimpeded by weather, track the distance, size, speed, and trajectory of objects that may intersect the vehicle’s path, and ultrasonic sensors offer close-range detection, which is particularly useful when parking.

The car sees the world as an organized 3D code-space, and as an assembly of reflective objects that may interrupt the order of that code-space.

Beyond those tools of looking and listening, most self-driving cars also generate a real-time map of the world. Light detection and ranging (Lidar) sensors bounce super-fast laser pulses off surrounding objects and measure reflection times to create a high-resolution 3D model of the immediate environment. Unfortunately, these contraptions are bulky and expensive and can be stymied by bad weather and reflective surfaces. Engineers are rushing to develop smaller, solid-state designs, which would simplify manufacturing and cut costs, even if they won’t solve the problems with environmental sensitivity. Intellectual property related to Lidar was at the heart of the recent scandal that led to the firing of Anthony Levandowski, an Uber engineer who was formerly the technical lead at Waymo.
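The arithmetic behind those reflection times is simple, even if the engineering is not. As an illustrative sketch (not any vendor’s actual pipeline), each returning pulse can be converted into a 3D point from its round-trip time and the beam’s pointing angles; millions of such points per second accumulate into the point cloud:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_to_point(t_round_trip_s, azimuth_rad, elevation_rad):
    """Convert one Lidar pulse's round-trip time and beam angles
    into a 3D point relative to the sensor (spherical -> Cartesian)."""
    r = C * t_round_trip_s / 2.0  # the light travels out and back
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A pulse returning after ~66.7 nanoseconds reflects off something ~10 meters away.
x, y, z = pulse_to_point(66.7e-9, azimuth_rad=0.0, elevation_rad=0.0)
```

Real sensors must also correct for the vehicle’s own motion, filter spurious returns from rain or mirrors, and register each sweep against the last, which is where the bulk and expense come in.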

So the car “sees” and “hears” the world as an organized three-dimensional code-space, with signs and lines directing its operation; and, simultaneously, as an assembly of reflective objects — pedestrians, bicycles, other cars, medians, children playing, fallen rocks and trees — that may interrupt the order of that code-space, and with which each car must negotiate its spatial relationship. Driving is often challenging for humans because we must code-switch, as the car does, between different ways of reading and understanding the world, while also distributing our vision among different widths and depths of field and lines of sight (not to mention dashboards and text messages and unruly passengers). Human ears are multitasking, too.

All this sensory processing, ontological translation, and methodological triangulation can be quite taxing. Tesla (which, for now, insists that its cars can function without Lidar) has built a “deep” neural network to process visual, sonar, and radar data, which, together, “provide a view of the world that a driver alone cannot access, seeing in every direction simultaneously, and on wavelengths that go far beyond the human senses.” Waymo catalogs the mistakes its cars make on public roads, then recreates the trickiest situations at Castle, its secret “structured testing” facility in California’s Central Valley. The company also has a virtual driving environment, Carcraft, in which engineers can run through thousands of scenarios to generate improvements in their driving software.

And machine pilots (again, like humans) do not operate on real-time sensory input alone. Just as we have Siri and Google and mental maps, driverless cars tap into external sources of geospatial data. Standard GPS is accurate within several feet, but that’s not good enough for autonomous navigation. Industry players are developing dynamic HD maps, accurate within inches, that would afford the car’s sensors some geographic foresight, allowing it to calculate its precise position relative to fixed landmarks. Layering redundant forms of place-awareness could help overcome ambiguity or error in locally sensed data. Meanwhile, that sensor data would feed into and improve the master map, which could send real-time updates to all vehicles on the Cloud network. In other words, autonomous vehicles will rely on an epistemological dialectic, balancing empiricism with carto-rationalism, and chorography with geography.
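That layering of redundant place-awareness can be pictured as a small sensor-fusion problem. The sketch below is a deliberately simplified, one-dimensional inverse-variance weighted average over invented numbers, not any manufacturer’s localization stack: a coarse GPS fix is combined with a precise landmark match against a hypothetical HD map, and the more trusted source dominates the result:

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of redundant position estimates.
    Each estimate is (position_m, variance_m2); lower variance means a
    more trusted source, so it pulls the fused value harder."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * p for (p, _), w in zip(estimates, weights)) / sum(weights)
    return fused

# Hypothetical readings: coarse GPS (±3 m) vs. a landmark match
# against an HD map (±0.1 m), in one dimension for clarity.
gps = (12.0, 9.0)      # (position in meters, variance in m^2)
hd_map = (10.0, 0.01)
print(fuse_estimates([gps, hd_map]))  # lands very close to the HD-map fix
```

The same logic, generalized to full poses and run continuously, is one way “geographic foresight” from a master map can correct ambiguity in locally sensed data.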

“Time to Reflect Reality”: the metric of lag time between the world as it is and the world as it is known to machines.

Lots of companies are building maps like this, including Alphabet’s Waymo, German automakers’ HERE, Intel’s Mobileye, and the Ford-funded startup Civil Maps. They send their own Lidar-topped cars out into the streets, harvest “probe data” from partner trucking companies, and solicit crowdsourced information from specially equipped private vehicles; and they use artificial intelligence, human engineers, and consumer “ground-truthing” services to annotate and refine meaningful information within the captured images. Even Sanborn, the company whose incredibly detailed fire insurance maps anchor many cities’ historic map collections, now offers geospatial datasets that promise “true-ground-absolute accuracy.” Uber’s corporate-facing master map, which tracks drivers and customers, is called “Heaven” or “God View”; the parallel software that reportedly tracked Lyft competitors was called “Hell.” That’s quite some epistemological (and ecclesiastical) chutzpah.

Yet achieving real-time “truth” throughout the network requires overcoming limitations in data infrastructure. The rate of data collection, processing, transmission, and actuation is limited by cellular bandwidth as well as on-board computing power. Mobileye is attempting to speed things up by compressing new map information into a “Road Segment Data” capsule, which can be pushed between the master map in the Cloud and cars in the field. If nothing else, the system has given us a memorable new term, “Time to Reflect Reality,” which is the metric of lag time between the world as it is and the world as it is known to machines.

Machine Mapping for the Rest of Us

Honestly, I don’t give a leaping Lidar about self-driving cars. Or any cars, for that matter. I just can’t get excited about innovations that fetishize personal mobility and the alienating, landscape-destroying, maintenance-intensive infrastructure that sustains it.

What I really want to discuss is this: How can we use all these new and old technologies to improve the physical world that we humans (and our non-human companions) read and inhabit? Some urban designers imagine that the containment of vehicular traffic will allow more street space for walkways and bike lanes and parks. Others note that we need to be intentional about how we redesign cities to accommodate the semantic preferences of our robot companions. Geoff Manaugh mischievously suggests that the machines’ sensory quirks, like Lidar’s vulnerability to mirrored surfaces, might prompt us to consider “how we could design spatial environments deliberately to deceive, misdirect, or otherwise baffle” autonomous agents. In a coming age of robot warfare and policing, we could see designers specializing in the creation of robot-illegible worlds rather than machine-readable ones.

How can we use these technologies to improve the physical world that we humans (and our non-human companions) read and inhabit?

What further impact might these “other mappings” have on spatial design practices? Lidar is used by urban and transportation planners to create highly detailed digital surface models, by preservationists to create point cloud surveys, by architects to model complex building sites, and by astronauts to study the surface of the moon. Aerial drone imagery is also emerging as a new way of seeing. Relief workers, including those laboring in the wake of hurricanes Harvey, Irma, and Maria, use drones to aid in disaster recovery. Archaeologists use drones to create site models, assess land uses and vegetation, and study earthworks and buried structures. Karl Kullmann has identified similar advantages for landscape architects. Designers long accustomed to the overhead view of GIS and satellite imagery, which emphasizes “large-scale associations, systems, and infrastructures,” can now operate from a “lower and more individualized oblique” vantage, sensitive to the “near-scale” qualities of place. Kullmann proposes that these two machine views — representing two different altitudes, scopes, and perspectives — are complementary ways of sensing and knowing a site. They embody distinct, though reciprocal, politics of vision and ethics of engagement. The operational premise here is similar to that underlying the self-driving car. We take it on faith that redundant data and multiple perspectives will yield greater precision, a better outcome, a higher truth.

Even techie jargon like “Time to Reflect Reality” points to looming social and philosophical questions about how machine vision will change our conception of the physical world. Artist James Bridle, who has made several works exploring the politics of automation, observes,

Self-driving cars bring together a bunch of really interesting technologies — such as machine vision and intelligence — with crucial social issues such as the atomization and changing nature of labor, the shift of power to corporate elites and Silicon Valley, and the quasi-religious faith in computation as the only framework for the production of truth — and hence, ethics and social justice.

We need to consider how humans and machines experience space and time differently, through various senses. We need to examine how the components of spatial intelligence are operationalized differently by (and for) humans and machines. And we need to grapple with what it means to create an unprecedentedly robust map of the world meant mostly for non-human agents. The car itself is less interesting to me than the critical terrain it’s mapping.

We need to grapple with what it means to create a robust map of the world meant for non-human agents. The car itself is less interesting than the critical terrain it’s mapping.

Some social researchers, activists, and humanitarian organizations have reframed their research questions — and even re-envisioned entire fields of practice — to take advantage of the investigative power of machine mapping. The same tools and methods used in business planning to evaluate prospective sites for Marriott hotels or Amazon distribution centers can be used in hazard forecast mapping, to determine where landslides or floods might occur. Some clever cartographic triangulation is also happening in the real Amazon, where environmental advocates have used satellite and drone imagery, supply-chain maps, and interviews with local farmers to identify newly cleared lands in Bolivia and Brazil. They have attributed nearly 2 million acres of deforestation to soybean farms that are allegedly supplying the huge American agribusinesses Cargill and Bunge. Neural network learning algorithms are commonly used in geospatial analysis to identify changes happening on the Earth’s surface — from deforestation to coastal erosion — and to predict future changes. The Disease Surveillance and Risk Monitoring project plots malaria cases alongside weather and topographic data to predict outbreaks, creating high-resolution risk maps that can be used to prioritize mosquito control efforts across Africa. This is a variation on the “traveling salesman” problem: how to efficiently deploy resources, whether Uber cars or aid workers with vaccines. Development organizations are taking some of the same technologies that are fueling our driverless dreams and extending them into realms where ethics, environment, public health, and social justice are the primary concerns.
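The “traveling salesman” framing can be made concrete in a few lines. This is a deliberately naive greedy heuristic over made-up coordinates — nothing like the routing engines Uber or aid agencies actually deploy — but it shows the shape of the problem: given scattered sites, in what order should a single worker visit them?

```python
import math

def nearest_neighbor_route(stops, start=0):
    """Greedy nearest-neighbor heuristic for a traveling-salesman-style
    deployment problem: visit every stop, always moving to the closest
    unvisited one. Fast to compute, but not guaranteed optimal."""
    unvisited = set(range(len(stops))) - {start}
    route = [start]
    while unvisited:
        here = stops[route[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, stops[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

# Hypothetical clinic locations, as km offsets on a local grid.
clinics = [(0, 0), (2, 1), (5, 5), (1, 4)]
print(nearest_neighbor_route(clinics))  # a visiting order, starting at index 0
```

The gap between this toy and real deployment — time windows, vehicle capacities, road networks, shifting risk maps — is exactly where the geospatial data described above earns its keep.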

Yet it’s difficult to use maps to address structural inequality when geospatial data aren’t equitably distributed. This is often the case in poorer countries, where ground-level data is scarce, or not available in a form that is accessible to those who are making decisions about aid and development. A team from Stanford has attempted to identify impoverished and at-risk communities in Central and East Africa from 35,000 feet up, using only open data. Satellite images of nighttime illumination are often used as an index for economic development, but the Stanford researchers trained a neural net to read nighttime and daytime images of the same regions, and to pick out variables, like roofing material or distance from urban areas, that contextualize clusters of nighttime light. Their model, trained also on World Bank statistics, learned to identify visible features — like roads, farmlands, and access to water — that correlated with nighttime illumination and economic well-being.

Urban planning should not be mistaken for an algorithmic pattern language; a city plan is more than the aggregation of spatial features an AI has correlated with wealth.

Such triangulation is meant to fill in the gaps in mapping but is not a real substitute for the kind of ground-level engagement that should precede policymaking or aid work. Meanwhile, DigitalGlobe and Stamen Design have built Penny, a mapping tool that uses machine learning to analyze satellite image indices of wealth. Eventually, planners could use the tool (which doesn’t account for income inequality) to select urban features that promote wealthier neighborhoods. And researchers at MIT and Harvard have created Streetchange, which uses AI to correlate changes in cities’ physical appearance with economic and demographic shifts and neighborhood “improvement.” While these are useful avenues of research, it’s important to remember that “poverty” is far more overdetermined and sensitive than any satellite image can capture. Poverty-as-lived is much more nuanced than its operationalization for any neural net. And urban planning should not be mistaken for an algorithmic “pattern language”; a city plan is more than a mere aggregation of spatial features an AI has correlated with “wealth.”

Conor O’Shea makes a similar observation about the use of drones in landscape architecture. While drones and satellites, used in tandem, might help to critically reframe designers’ perspectives, “human-to-human interviews, community outreach, political engagement, and research-based design strategies matter more than ever.” We need multiple eyes, ears, hands, sensors, and brains — automated and manual, digital and analog, machinic and human — on the case.

Cartography’s Intelligent Others

So now we’ve examined the well-funded and widely publicized attempts to map the world as a code-space legible to machines. And we’ve considered efforts to use those machinic sensibilities and intelligences to solve perennial human and environmental challenges. Social researchers and aid workers are also using “other,” artificial intelligences to map human Others — the silenced, the vulnerable, the marginalized. It’s a noble gesture, to recognize the subjectivity of a historically invisible population by rendering it visible on the map. But that visibility can also mean vulnerability to harm or exploitation. Since those human Others have long been absent in the datasets we use to make our maps, we now rely on methodological maneuvers and machine-readable proxies to render them mappable. We must take care not to equate the impoverished with the thatched roofs over their heads.

The politics of the overhead satellite view and the universal GPS grid do not always map onto the way traditional cultures relate to their own environments.

Of course, the history of cartography is deeply entangled with statecraft and colonialism, with the claiming of Other lands and the erasure of Other people. Yet indigenous groups have also embraced mapping as a means of “reclaiming their sovereignty over the lands, negotiating aboriginal rights, and regaining dignity during conflicts with governments and institutions,” according to cartographer Sébastien Caquard. Often they have adopted the geospatial practices of their colonizers to concretize their land claims and try to shield themselves from land grabs and resource-extraction schemes. The politics of the overhead satellite view and the universal GPS grid — even the Euclidean demand for points, lines, and areas — do not always “map onto” the way traditional cultures relate to or conceive of their own environments. “The process of mapping,” Nancy Lee Peluso argued in an influential 1995 article, “almost forces the interpretation of customary rights to resources territorially, thereby changing both the claim and the representation of it.”

Rather than merely incorporating the Other as a map subject, we should think more deeply about Othering cartographic subjectivity, or acknowledging that Others have developed their own map-making practices that diverge from Western convention. Consider the wooden coastline maps made by Inuit communities of Greenland, the Marshall Islanders’ stick charts, the Native North American petroglyphs, and Aboriginal Australian songlines. They all reflect unique ways of sensing, navigating, inhabiting, relating to, using, and finding meaning in an environment. Songlines, for instance, are oral maps that correlate star constellations and other astronomical phenomena with points on the ground. They embody an Aboriginal worldview, chronicling the nation’s coming-into-being, while also serving a pragmatic economic purpose as a map of trade routes.

Those Marshallese stick charts, meanwhile, recorded navigators’ ability to identify nearby islands by sensing the disruption in ocean swell patterns. Since the island navigators were charting reflections, we might regard their methods as analog antecedents to contemporary sensing machines like sonar and Lidar. Yet the meaning of reflected waves is entirely different for these human “sensors.” As Karin Amimoto Ingersoll explains in Waves of Knowing: A Seascape Epistemology, many surfers and traditional navigators in the Pacific cultivate “oceanic knowledge” by irrational, intuitive means, by fusing mathematics and physics — which are among our computational sensors’ strengths — with dreams, vibrations, and oral histories.

It is an approach to knowing through a visual, spiritual, intellectual, and embodied literacy of the ‘āina (land) and kai (sea): birds, the colors of the clouds, the flows of the currents, fish and seaweed, the timing of ocean swells, depths, tides, and celestial bodies all circulating and flowing with rhythms and pulsations. …

That knowledge simply couldn’t be captured on the maps created by voyagers like Captain James Cook, who sought to locate all points of significance on a “static grid of coordinates,” relying on stable coastlines for cartographic reference. Ingersoll argues that the European ideology of acquisitive exploration is reflected in a cartography based on categorization and control. In contrast, Pacific Islanders’ maps are “inherently mobile.” They make possible a type of movement that involves dynamic interaction between the islands, sea, stars, and those bodies floating between them. “In Pacific navigation, the map moves with the voyager.” (See also the Italian Limes project, which aims to map “movable borders” in the Alps, where political boundaries shift in response to ecological processes.)

Anthropologist Stefan Helmreich studies how waves carry different meanings for cosmologists, cardiologists, artists, oceanographers, surfers, economists, and social theorists. Ocean waves are formal, mappable objects, but they are also geographically specific, culturally shaped, and politically charged. Just as automated cars are trained to operationalize “safety,” and machine learning models are trained to operationalize “poverty,” the swells we call “waves” are measured, parametricized, and modeled through buoy sensors and computer simulations. And those wave models and maps are trained primarily on northern oceans, which are much more heavily instrumented. Our southern oceans, much like our land-based cultures of the Global South, tend to be “data poor” — despite the fact that they represent a tremendously diverse and expansive ecology. The southern hemisphere has more uninterrupted ocean, is hit by much more solar radiation, has extensive coral and mangrove depletion, and has more ice, which is breaking up as the climate changes. We need more “thinking from southern oceans,” Helmreich argues — and, by extension, more modeling and mapping “from below,” too.

The swells we call ‘waves’ are measured, parametricized, and modeled through buoy sensors and computer simulations. And those wave models and maps are trained primarily on northern oceans, which are much more heavily instrumented.

Even as we turn to sophisticated computer models to understand global climate change, we should remember that many human communities have already vividly imagined their own climate futures (as well as climate pasts and presents). As environmental geographer Annette Watson and traditional hunter Orville Huntington explain, “indigenous peoples have been enrolled in climate change research for decades, participating in data-gathering, as writing collaborators, and serving as the symbolic ‘canary in the coal mine’ for public outreach and policy-making.” Yet their intimate local knowledge, their distinctive “seascape” or “icescape” epistemologies, are often filtered through Western Enlightenment methods and conventions. Much is lost in that filtering. Oral traditions chronicle environmental shifts that unfold over many generations, much longer than Western science has been collecting environmental data. And there’s often a sacred or spiritual dimension to their cosmological and geological knowledge, which rarely fits into geospatial data models. Watson and Huntington note that in the Koyukon communities of Alaska, ecological “respect” is a primary virtue: “Showing respect means that an individual understands ‘their proper place’ in relationship to Other beings, both animate and inanimate.” “Respect” implies a critique of the Western fetishization of technology and the desire to witness, map, and catalog everything.

Watson and Huntington call upon environmental researchers and mappers to recognize that indigenous communities can offer much more than “local color” to substantiate researchers’ theories — that, instead, indigenous knowledge furnishes valid and valuable Other epistemologies and ontologies. Margaret Wickens Pearce, in her work with the Penobscot Nation in Maine, has been exploring new cartographic designs that embody the community’s sense of place. She features descriptive place names, highlights particular vantage points that demonstrate how those descriptive names come alive on-site, embeds stories, and traces systems that organize various locations into networks of relations, like hunter’s networks and ancestral stories, rather than grids of latitude and longitude.

Yet efforts to listen and learn from those Other cartographies can easily go awry. According to a Google blog post, the Aṉangu Aboriginal nation recognizes “no distinction between the physical and metaphysical, or the animate and inanimate. People, earth, plants and animals are inextricably connected.” Not easily deterred in its quest to “organize the world’s information,” Google sought to “bring these cultural and spiritual dimensions to the Street View Experience” by supplementing the map with oral histories and songs in its Story Spheres platform. The company partnered with the “traditional owners” of Uluṟu-Kata Tjuṯa National Park and the regional government to “celebrate and preserve Aṉangu culture through technology.” Yet when I “visited” the park through street view imagery captured (in accordance with Tjukurpa law) by a regional tourism agent, the dimension I sensed most clearly was my own Otherness. I felt like an intruder. This act of “rendering visible” only reminded me of the embodied experience and cultural context I was missing — and that, perhaps, I shouldn’t be privy to.

When I ‘visited’ the park through street view imagery captured (in accordance with Tjukurpa law), the dimension I sensed most clearly was my own Otherness.

Hoping to bridge such gaps, and striving not for global reach but for local resonance, the Center of Creative Arts in St. Louis partnered with the Office for Creative Research in New York to create a pop-up St. Louis Map Room in a shuttered middle school. In a city with a deep history of racial divides and cultural Othering, the Map Room brought people together to think about the geography of their city, and to learn more about one another’s urban patterns, city memories, spatial affinities, territorial aversions, and senses of place. The organizers stocked the room with historical maps and civic data on everything from land use to racial diversity to tree counts, and with abundant art materials and even some map-making robots. Then they invited community groups to draw on these cartographic resources and integrate their own place-based intelligences to construct 10×10-foot thematic maps. Museum curators, students, activists, health care workers, and others mapped multiple St. Louises: redlined St. Louis, homeless St. Louis, St. Louis schools, St. Louis transit, and so forth. Subsequent visitors could then take those maps, and project geo-rectified historic maps or city data on top, exploring new correlations.

As Jer Thorp of the Office for Creative Research explained, the maps were intentionally big, to allow various physical modes of interaction: “Groups of people can gather around a map to look at it from different vantage points. People can walk across the map, experiencing distance in a meaningful way.” Shifts in scale and perspective abet a new spatial awareness. There’s a material politics at play here, too. Those large sheets structure collaboration and conversation, and they represent civic intelligence — a mix of official data and “indigenous” knowledge — writ large. They will become part of the official archive of St. Louis; and the Map Room hardware, plans, and curricula will be open-sourced so that other cities and towns can repeat the experiment. Thorp told me a Map Room New Orleans is in the works, as well as a few projects in Canada.

Non-Human Mapping Agents

Many of those who participated in the program had never before been granted authority to reimagine their city’s geography. While community mapping projects are not uncommon, the Map Room was notable for its use of projected overlays and robot assistants, which plotted fixed cartographic objects like roads and landmarks, leaving the more interpretive, aesthetic, and connotative map activity to people. As Thorp explained,

I was specifically interested in the authority that machine-made marks seemed to hold over ones that are made by hand. While it seems like a commonly held belief that machines (robots, computers) make better, more accurate maps, it became clear to the mapmakers in the Map Room that the robots, while certainly precise, had nothing to say about the city. Indeed their ultimate purpose was to do the boring labor of laying the scaffolding for geographic narratives, marking the cross streets and walking paths and public parks so that the real stories of the city could emerge through the much more meaningful marks made by the human hands.

The robots symbolized the complementary relationship between mechanical and organic means of experiencing and representing place, between quantitative and qualitative methods, satellite views and fieldwork, rationalism and empiricism.

Researchers are recognizing the existence of spatial intelligences that exceed human capacity. How do other species perceive their own environments?

Non-human mapping agents — from Lidar sensors to drones to plotting robots — can be critical agents in the cartographic enterprise. Yet the non-human Other is also an agent in those landscapes we seek to map. As we’ve already seen, fish and waves dominate Pacific Islanders’ seascape epistemologies; ice and hunting paths shape Inuit maps; and flora and fauna are central to the landscape imaginaries of environmentalists and extraction companies alike. In most Western frameworks, non-human Others, from animals to plants to minerals, are represented as resources to be exploited: things to eat or mine or look at in a national park. But now the convergence of certain new ways of thinking — Anthropocene studies, new relational ontologies, Latourian actor-networks, feminist ethics, and what Donna Haraway describes as “SF” thinking, science fiction, speculative fabulation, string figures, speculative feminism, science fact, so far — has inspired many contemporary mapmakers to think about how to represent these Others not merely as human resources, but as entities with equal claim to the landscapes they inhabit, and with their own spatial intelligences.

Over the past 15 years or so, we’ve seen rising interest in animal geographies. Henry Buller, who wrote a three-part literature review for Progress in Human Geography, argues that this work advocates for “an interspecies contact … based upon a more convivial, less fixedly human and more risky approach to boundaries, to political actors and to political outcomes that inherently challenges what it means to ‘belong’ or to ‘pertain’ to a particular terrain.” Other species have preceded humans, and they co-exist with humans, in pretty much every terrain — even those where they are now considered “invasive” or “pests.” Researchers are recognizing, through these other species, the existence of multiple intelligences and spatial ontologies that exceed human capacity, and they are devising methodologies to “reveal what matters, or what might matter, to animals as subjective selves.” How do other species perceive their own environments?

Animals are, in a sense, cartographers, too; they use cognitive maps for migration, foraging, nectaring, defining territory, ranging, predator avoidance, and so forth. And they’ve evolved sensory techniques for spatial perception and memory: bats rely on echolocation, bees on scent, and fish on electroreception. Some researchers argue that chimpanzees are capable of mapping forest terrain in both Euclidean and topological modes. Slime molds use their slime trails to remember where they’ve been. Even the “crown-shyness” of some tree species, i.e., the maintenance of channels between their crowns, “could be seen as a map-like striation of space, a sort of territorialization.” Animal space is “a lived space, multisensorial,” argues Dennis Skocz, and GIS and other conventional mapping techniques are ill equipped to represent it.

Of course we humans can never know “what it is like to be a bat,” as philosopher Thomas Nagel reminds us. Any attempt to represent, witness, or embody non-human subjectivity involves translating that experience through our own human senses and minds. Yet we cannot deny that these Others “have experiences fully comparable in richness of detail to our own.” Acknowledging that they share our cartographic terrains, and that their spatial experiences are rich and relevant, maybe even as critical foils to our own, can help us appreciate the immeasurable diversity of our environments. For example, the Lower Elwha Klallam Tribe in Washington is documenting the restoration of the Elwha River after dam removal through “Fishview” maps, one of many mapping projects that center animal migrations or responses to changing ecological conditions. Catherine D’Ignazio argues that “what we gain from these [Other] mappings is situated, rich knowledge about the spatial experiences of beings other than ourselves (knowledge that could be incredibly useful for scientific research) as well as an awareness of how things could be different.” Environmental artist Ellie Irons corroborates the importance of “seeing from other species’ points of view” — especially from the view of uncharismatic species like weeds and fungi — “even if we know that, in the end, it’s somewhat impossible, and even hubristic” to imagine that we understand their experience.

Plenty of artists and designers have wondered how other species perceive, inhabit, and construct their habitats, and how the artists’ own creative work can represent these non-human subjects’ “lived multisensoriality.” That work, whether conventionally cartographic or not, can be instructive for those seeking to map other spatial sensibilities. Sam Easterson’s Animal-Cams, Diana Thater’s immersive video installations evoking different animal subjectivities, and Lars Chellberg’s and Tomás Saraceno’s work with spider webs all explore how animals build and perceive their environments. The spurse collective uses “non-metric” mapping and other field methods to explore human and non-human ecologies — from the waterfronts of Maine to urban sidewalks — and how those assemblages impact forces as big as climate change and as small as local food systems. The Environmental Performance Agency, created by Irons and collaborators as a rogue EPA in the early days of the Trump administration, conducts similar work.

Many of these artists conceive a non-human that extends well beyond the animal. Rachel Strickland’s video work explores “the social lives of urban trees.” Nina Katchadourian finds cartographic resonance in patches of moss, while Heather Barnett investigates slime mold navigation. Irons not only studies the geography of weeds, but also uses their pigments to map them. D’Ignazio gives flowers a voice to address the water quality in their habitat, and Karolina Sobecka maps clouds, and the microbes that live in the troposphere, as environmental agents. Lauren Rosenthal has created an atlas where political boundaries are replaced by watershed divides, imagining a world in which ecological markers are as politically powerful as culturally-defined states.

In their review of conservation maps, Leila Harris and Helen Hazen interrogate how we use such maps to “cite, reconsider, challenge, or reify particular power relations between humans and non-human ‘others,’ solidify certain spaces as appropriate for particular species, [and] generate notions of ‘desirable’ species that we seek to conserve.” They propose that conservationists avoid maps that (1) present humans and non-humans as separate; (2) privilege Western cartographic practices; (3) favor spaces and subjects that are deemed more mappable; (4) frame all issues as territorial; and (5) privilege the static and localizable over the fluid, seasonal, and shifting. The ways that maps are created, funded, studied, and mobilized have profound implications for policymaking, governance, and the deployment of conservation resources, which may determine “the very survival” of the Other, or even ourselves.

So we know what not to do. But who is doing it right? Who else is effectively incorporating other subjectivities and intelligences in cartography? Consider Bear 71, an interactive web documentary produced by the National Film Board of Canada. The project was launched to much acclaim in 2012, then relaunched in a WebVR version in 2017. It compiles geographic data and trail and traffic camera footage chronicling the movements and life events of a single female grizzly bear in Banff National Park, thus documenting “the intersection between humans, animals and technology.” The creators write that the bear was “tracked and logged as data, reflecting the way we have to see the world around us through Tron and Matrix-like filters, qualifying and quantifying everything, rather than experiencing and interacting.” We see how human settlements, roads, fences, food smells, other species, and even 71’s GPS tracking collar have changed the course of this creature’s life; and, further, how maps and screens and sensors have profoundly changed the ways humans relate to the natural world. This gorgeous project, while not easily replicable or scalable, offers an exemplary map of multiple intelligences, while also embedding a critique of its own technologies of cartographic representation.

In 2016, the Office for Creative Research partnered with the Great Elephant Census and various NGOs to document the decline in Africa’s elephant population, which dropped 30 percent from 2007 to 2014. Wildlife and park staff and pilots crossed the continent in low-flying airplanes, conducting a survey of live and dead elephants, as well as other wildlife, livestock, humans, houses, and environmental features, so that researchers could explore relationships between these variables. All that data was mapped, contextualized, and made query-able on The Elephant Atlas. The website describes the census methodology and explores the impact on elephant populations of global forces like poaching, habitat loss, human conflict, and climate change, as well as political, cultural, and economic changes at the country level. Each of these compounding factors has its own geography, but not all those geographies lend themselves to representation in cartographic form. When I took my mapping class on a visit to the OCR studio, we learned that the Atlas’s balance of specificity, abstraction, and context reflected the designers’ sense of responsibility toward their subjects. They sought to “make visible” improving or declining wildlife conditions in different contexts, yet aimed not to provide too much granular specificity, since such visibility could render these threatened creatures and habitats even more vulnerable.

Mapping Intelligently

Elephants are renowned for their spatial memory. Behavioral observations have long suggested as much, but scientists recently corroborated the observational evidence by outfitting elephants with tracking collars that monitored their movement toward widely distributed watering holes across expansive, featureless terrain. As advanced sensing machines and spatial technologies become cheaper and more powerful, we will see many more studies like this. Aided by aerial imagery and GPS, binoculars and audio recorders, we can now map everything from elephants and refugees to icebergs and Ubers. We should do so critically and intentionally, bearing in mind that those subjects and agents have their own geographies and spatial sensibilities, and so do the instruments we use to map them.

Increasingly, we turn to artificially-intelligent sensing machines — with their purportedly more objective, efficient, exhaustive, and reliable means of observation and orientation — to shape the protocols and politics of interaction among the various beings who share our cartographic terrain. Yet we must never forget that those computational instruments operationalize space differently — differently from one another and from other “species” of intelligent agents, including us. Drones and dragonflies sense and navigate the world in unique ways. Sonar and Lidar construct distinct empirical terrains, hearing and flashing their environments into existence. Satellites abstract terrestrial realities into macro patterns: from 500 miles up, the geography of hardship is a patch of thatched roofs.

These new, artificially intelligent agents may well generate efficiencies in transit and logistics. They might offer insight into how certain groups of people messed up the world, and how we can fix it. Yet these computational intelligences, and ours, aren’t the only ones that have a stake in that world’s evolution. We need to recognize the world’s myriad intelligent agents not only on our maps, but also in our cartographic methods. We must choose our tools and methods wisely, fully aware of their affordances and limitations, sensitive to how they render the world knowable — and how they register and reflect the world as it is known to its many intelligent, invested inhabitants. Ideally, we should balance or juxtapose different modes of knowledge and production: Western scientific and indigenous epistemologies, human and other-species ontologies, mechanical and organic means of experiencing and representing place, cartographic rationalism and empiricism, projection and retrospection. No single über-map can encompass all such subjectivities and sensibilities. Instead, we can aim for an atlas, a prismatic collection of mappings, that invites comparison and appreciation of the ways in which our world is both known and unknown.