NOBODY likes it when a taxi takes longer than expected to arrive. But that is what is happening with self-driving cars. Building a vehicle that can handle a busy street, with cyclists, pedestrians, roadworks and emergency vehicles, is a tall order. In March a pedestrian was killed in Tempe, Arizona, when a self-driving Uber vehicle failed to spot her as she wheeled her bicycle across an empty road at night, and the vehicle’s safety driver failed to hit the brakes. There is a growing sense that the technology has, so far, overpromised and underdelivered. So a trial of self-driving vans that began in Frisco, Texas, on July 30th is notable for its realistic approach to what the technology can do today.

Drive.ai, a startup, has deployed seven minivans to transport people within a limited area of the city that includes an office park and a retail area. “We are identifying a valuable use case that we can deploy with today’s technology,” says Andrew Ng, a board member and a pioneer of “deep learning”, the technique that underpins the current boom in artificial intelligence. As the technology evolves, he says, so will autonomous-vehicle services. For now, though, Drive.ai is keeping things simple.

All pick-ups and drop-offs happen at designated stops, to minimise disruption as passengers get on and off. Riders hail the vans using an app and go to the nearest stop; a vehicle is then dispatched to pick them up. (The vehicles do not circulate continuously like shuttle buses, but wait to be called, and plan their routes dynamically.) Use of the service is free of charge for now.
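The hailing flow described above — fixed stops, app-based requests, vans that wait to be called — amounts to a simple nearest-stop dispatcher. A minimal sketch, assuming hypothetical stop names and coordinates (Drive.ai's actual dispatch logic is not public):

```python
import math

# Hypothetical designated stops in the service area (name: lat, lon).
STOPS = {
    "office_park": (33.0908, -96.8236),
    "retail_area": (33.0971, -96.8152),
    "town_hall":   (33.1507, -96.8236),
}

def nearest_stop(rider_lat, rider_lon):
    """Return the designated stop closest to the rider's location."""
    def dist(stop):
        lat, lon = STOPS[stop]
        return math.hypot(lat - rider_lat, lon - rider_lon)
    return min(STOPS, key=dist)

def dispatch(rider_lat, rider_lon, idle_vans):
    """Match a ride request to a pickup stop and an idle van.

    Vans do not circulate like shuttle buses; they wait to be called.
    Here we simply send the first idle van to the rider's nearest stop.
    """
    if not idle_vans:
        return None  # no van available; rider keeps waiting
    stop = nearest_stop(rider_lat, rider_lon)
    return idle_vans[0], stop
```

Restricting pick-ups to fixed stops turns an open-ended routing problem into a lookup over a handful of known points, which is much of what makes the service deployable with today's technology.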

The vans are painted a garish orange and clearly labelled as self-driving vehicles. “We weren’t going for pretty, we were going for distinctive,” says Mr Ng, who draws an analogy with yellow school buses: people understand that some kinds of vehicles behave in particular ways, and accommodate them accordingly. Screens mounted on the vans’ exteriors let them communicate with pedestrians and other road users, for example to tell a pedestrian that it is safe to cross a road. Rather than trying to build a vehicle that mimics a human-piloted one, Drive.ai is making the self-driving nature of its vehicles explicit.

Similarly, rather than trying to build a vehicle that can navigate roadworks (a notoriously difficult problem, given inconsistent signage), Drive.ai has arranged for the city authorities to tell it where any roadworks are each day, so that its vehicles can avoid them. The company has also liaised with emergency services (another potential source of confusion for autonomous vehicles) and held a series of town-hall meetings to answer questions from locals.

Drive.ai will limit the service to daylight hours, which makes things simpler and safer. Each vehicle will initially have a safety driver, who will shift to a passenger seat if all goes well. If a van gets confused it can stop and call for help: a remote supervisor then advises it how to proceed (rather than driving the vehicle remotely, which would not be safe, says Mr Ng).
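The fallback behaviour Mr Ng describes — halt, ask a remote supervisor for advice, then resume under the vehicle's own control rather than being driven remotely — can be sketched as a small state machine. All names here are hypothetical illustrations, not Drive.ai's software:

```python
from enum import Enum, auto

class VanState(Enum):
    DRIVING = auto()
    WAITING_FOR_ADVICE = auto()

class Van:
    """Toy model of the fallback loop: the van never hands over direct
    control; a remote supervisor only suggests a manoeuvre, which the
    van's own software then executes."""

    def __init__(self):
        self.state = VanState.DRIVING
        self.plan = None

    def on_confusion(self):
        # The van stops safely and calls for help.
        self.state = VanState.WAITING_FOR_ADVICE
        self.plan = None

    def on_advice(self, suggested_manoeuvre):
        # Advice is a suggestion, not remote driving: the van resumes
        # under its own control with the supervisor's plan.
        self.plan = suggested_manoeuvre
        self.state = VanState.DRIVING
```

The design choice matters: advising avoids the latency and dropout risks of steering a vehicle over a network connection, which is why Mr Ng argues remote driving would not be safe.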

It might sound as though Drive.ai is cheating, by simplifying the problem in so many ways. But the end result is still a useful service: letting workers visit the retail park for lunch without having to worry about driving or parking. And it provides a foundation from which to expand the service in future, in Frisco and elsewhere.

Drive.ai plans to license its technology to others, and has struck a deal with Lyft, a ride-hailing firm, to operate vehicles in and around San Francisco. “I think the autonomous-vehicle industry should be upfront about recognising the limitations of today’s technology,” says Mr Ng. It is surely better to find pragmatic ways to work around those limitations than pretend they do not exist or promise that solving them will be easy.