A couple of weeks ago, shortly after the Amazon C.E.O. Jeff Bezos announced, on “60 Minutes,” that his company plans to deliver packages to customers with a swarm of autonomous, flying drones, Google made an announcement that seemed far less explosive: Andy Rubin, the former head of Android, would lead an “effort to create a new generation of robots.” Over the weekend, Google revealed how sweeping its ambitions truly are: the company purchased Boston Dynamics, a robotics firm best known for products like Big Dog, a four-legged device that carries cargo across rough terrain, and the Cheetah, which can run faster than Usain Bolt. If Amazon’s and Google’s plans succeed over the next few years, they could usher in a new era of human-robot interaction, one in which we regularly find ourselves face to face with robots in both public and private spaces.

While some analysts initially suggested that Google’s goal was to more thoroughly automate factories—highly controlled environments that are well-suited for a fleet of semi-independent robots—it’s now clear that the company’s team of engineers and scientists has a vision of truly dexterous, autonomous robots that can walk on sidewalks, carry packages, and push strollers. (A disclosure: Carnegie Mellon’s Community Robotics, Education, and Technology Empowerment Lab, which I direct, has partnered with Google on a number of mapping projects that utilize our GigaPan technology.) Before its acquisition of Boston Dynamics, Google acquired seven start-ups in just six months—companies that have created some of the best-engineered arms, hands, motion systems, and vision processors in the robotics industry. The companies Meka and Schaft, two of Google’s recent acquisitions, designed robot torsos that can interact with humans at work and at home. Another, Redwood Robotics, created a lightweight, strong arm that uses a nimble control system. Boston Dynamics, Google’s eighth acquisition, is one of the most accomplished robotics companies on the planet, having pioneered machines that can run, jump, and lift, often better than humans. Its latest projects have focussed on developing full-body humanoid robots that can wear hazmat clothing and operate tools and machinery designed for humans, from jackhammers to front loaders. The potential impact of Google’s robot arsenal, already hinted at by its self-driving car effort, is stunning: Google could deploy human-scale robots throughout society. And, while Amazon is busy optimizing delivery logistics, Google bots could roboticize every Amazon competitor, from Target to Safeway.

If robots pervade society, how will our daily experiences change? I recently heard the C.E.O. of a domestic drone company describe the vision that inspires her: she wants a drone to deliver bottles of water as she completes her morning runs. Imagine the scene as joggers scatter through the park with their drones humming overhead, jostling for position, sounding like a dozen leaf blowers. This mélange of visual clutter and noise pollution, what I’ve dubbed “robot smog,” will transform the worst effects of digital devices into real-world annoyances that cannot be silenced or hidden in a coat pocket. Today, interactions with machines generally occur on our own terms—toasters, microwaves, and even smartphones do what we tell them—but soon, we could be looking up at a quadrotor drone hovering in the park, wondering whether to walk underneath it or cut into the grass to avoid its downdraft. Autonomous robots will displace our sense of control precisely because they are out of our control, yet occupy the physical world and demand our attention.

Even though robots will require us to interact with them constantly, the gestures and idioms that facilitate human-to-human interaction will be of limited use. The knowledge, sensors, and capabilities of robots will be unknown to us, since there is no federal standard for how robots must act. The incidental interactions we’ll have will be awkward: Can the robot understand my speech? Is it making eye contact? If I curse at it, will it embarrass me, or get out of my way? Is the robot staring at me because it wants to interact, or is it just waiting for me to move? Because robots won’t be fully independent beings but machines connected to massive databases in the cloud, there will be a lack of information parity between us and them. I know little about the robot, but the robot knows everything about me. If it’s an Amazon drone, it knows my name, my address, and my reading and shopping habits. If it’s a Google bot and I use an Android phone, it knows where I’ve been driving, where I had dinner, and my appointments for tomorrow. Navigating these lopsided encounters will require thoughtful problem-solving on our part.

The fact that robots will be hyper-connected to the cloud means that they will not only benefit from vast stores of information about us but that they could also act as highly distributed sensors, constantly feeding information back to corporate and government databases. An everyday smartphone already transmits how its owner moves through space; robots with computer vision could track where we look, discern our emotions through facial analysis, and read our body language through gesture recognition. Just as marketing analysts study and capitalize on our Internet-browsing behavior, real-world data analytics could, for instance, allow Amazon to deploy highly targeted advertising for pool cleaners, car-parking aids, and fertilizer based on what its drones see as they fly over back-yard swimming pools, home garages, and unhealthy vegetable gardens.

Thanks to the maker movement, low-cost robots with previously unimaginable applications are already populating our sidewalks, and we’re beginning to see some of the consequences of a burgeoning robot population: in Los Angeles, sign-spinning robot mannequins attract the attention of shoppers at a fraction of the cost of human hawkers, and in Atlanta the BumBot stalks the sidewalk in front of O’Terrill’s Pub, threatening vagrants with a high-pressure water cannon in an effort to keep the entrance inviting for patrons. There is no moral compass governing the arc of robot innovation, protecting our sidewalks from tasteless and offensive bots. These are the first signs of robot smog.

Some people warn that the alliance of Google and Boston Dynamics—and, more generally, the most information-rich companies in the world breaking out of their digital boundaries—heralds something like the birth of Skynet, the malevolent artificial-intelligence system in the “Terminator” series. But the truth may be more prosaic: you are hurrying down the sidewalk, and you spot a shiny robot looking straight at you. You cross the street—no time to deal with a sales pitch—only to see two more robots closing in on you. Your phone rings: “Relax. We’re just delivering your new Android smartphone. Answering this call is your acceptance of our new terms of service.”

Illah Nourbakhsh is a professor of robotics at Carnegie Mellon’s Robotics Institute, the director of the university’s Community Robotics, Education, and Technology Empowerment Lab, a former robotics group lead at NASA’s Ames Research Center, and the author of “Robot Futures.”

Photograph: Ana Nance/Redux