Earlier this month, the National Highway Traffic Safety Administration (NHTSA) made perhaps its most significant and authoritative announcement yet pertaining to autonomous vehicles – what many refer to as driverless or self-driving cars. In a letter to Google, the agency sent a strong signal to Detroit, Silicon Valley, and Washington by informing the company that the automated system that pilots Google’s self-driving vehicle could be considered the “driver” under federal law.

In explaining the decision, the agency’s chief counsel concluded, “It is more reasonable to identify the ‘driver’ as whatever (as opposed to whoever) is doing the driving.”

It’s a moment that many have since hailed as a major victory for the automakers and software makers looking to unleash the power and safety innovations of autonomous vehicles in the U.S. and around the world. However, what may prove equally (or even more) important were questions that regulators left unanswered in the letter – as well as some foreboding hints buried between the lines.

Let’s start with one of the company’s questions about braking systems. If the self-driving computer system can be considered the vehicle’s driver, it stands to reason that the computer would be in charge of engaging and disengaging systems like the vehicle’s service brakes. However, the agency warned that such a design may run afoul of current federal safety standards, which require service brakes to be activated “by means of a foot control.” Unfortunately, the agency did not address whether allowing a passenger to brake the car at any time (when the car is supposed to be the driver) would actually create a less safe situation or undermine the safety intent of the original standard.

But that’s not the only area where the agency responded, to paraphrase: “We’re not sure.”

Google also asked for guidance related to steering wheels and turn signals. While NHTSA’s letter acknowledged that motor vehicle safety standards do not specifically require the presence of a steering wheel, it warned that plans to let the vehicle manage the turn signals conflict with requirements that those signals be operable “manually.”

“We cannot verify Google’s compliance with these express requirements,” NHTSA’s letter reads.

It’s one of seven instances in the letter in which the agency states that it cannot verify, interpret, certify, or conclude that Google’s technology (or any other company’s technology envisioning a similar self-driving system) would comply with existing regulations. That creates uncertainty for any company building autonomous vehicle technology, and it exemplifies just how many complicated questions federal regulators still have to answer.

Suffice it to say, clarity and consistency will be important for rulemakers going forward.

However, it’s not merely the answers the agency eventually arrives at that will determine how soon we see self-driving cars on the road. It’s also how the agency sets out to provide those answers.

In a sense, NHTSA has two roads it can take as it establishes safety standards for self-driving cars: It can start from scratch and create a separate set of standards for autonomous cars, piling yet more rules on top of the existing requirements for traditional human-controlled vehicles. Or it can simply interpret or update existing standards in ways that facilitate this new innovation without sacrificing human safety.

A new “rulemaking” (start from scratch) approach would take many years to complete, experts say, which would slow the introduction and evolution of this technology and would likely hinder innovations that could render additional regulatory conclusions unnecessary. Conversely, the “reinterpreting” approach (which has been stressed by Transportation Secretary Anthony Foxx) would be much quicker and much more effective in the long run.

Unfortunately, parts of NHTSA’s response suggest the agency may be leaning toward delays as it starts to tackle some of the more complex questions surrounding driverless cars.

“In some instances, the issues presented simply are not susceptible to interpretation and must be resolved through rulemaking or other regulatory means,” the letter said. It’s those last three words – “other regulatory means” – that have the technology and auto communities particularly worried.

It’s worth noting, too, that taking such an approach may make it impossible for President Obama and the Department of Transportation to hit their goal of developing guidance for the deployment and operation of autonomous vehicles within the next six months. And if federal regulators decide to create a brand-new set of standards, that timetable becomes wholly unrealistic. It could also deter lawmakers from funding the $4 billion autonomous vehicle pilot program requested in the president’s final budget.

So while the agency’s interpretation that a computer can indeed be a vehicle’s driver is encouraging, one must hope that it represents a broader shift in the way federal regulators try to tackle the important questions surrounding driverless cars.

Vehicle connectivity (including autonomous technology) has the potential to improve safety on the roads to unprecedented, even once-unimaginable, levels. In fact, the potential to eliminate the most common cause of traffic accidents – driver error – is the greatest motivating force behind these innovations. With that in mind, we must continue to encourage a more thoughtful and less rigid approach to regulation, so that we don’t pump the brakes on this potentially powerful new technology.