Pay-as-you-drive insurance plans, where premiums are based on an individual's actual driving habits, pose a potential privacy risk for motorists, a recent study has found.

The study, conducted by researchers at the University of Denver in Colorado, found that driving habits data such as speed, time of travel, miles driven, and braking and acceleration patterns could paint a surprisingly detailed picture of an individual's movements over a specific time period.

Insurance companies often tout the fact that no location data is collected under usage-based insurance plans. But that claim creates a false sense of privacy among users of such plans, the researchers noted in their study, titled "Inferring Trip Destinations From Driving Habits Data."

"Customer privacy expectations in non-tracking telematics applications need to be reset, and new policies need to be implemented to inform customers of possible risk," the research paper said.

With pay-as-you-drive plans, insurance companies typically require drivers to plug a small telematics device into the vehicle's on-board diagnostic port. The device monitors the vehicle operator's driving behavior and records data like speed, cornering and braking patterns over a specified time period.

The information is used to adjust insurance rates and to offer more customized plans for individual drivers. Insurance companies claim that such plans can help substantially lower auto insurance rates, especially for safe and low-mileage drivers.

Several major insurance companies, including Progressive, State Farm, National General and Esurance, currently offer such plans. The National Association of Insurance Commissioners predicts that 20% of all vehicle insurance in the U.S. will incorporate some form of usage-based insurance within five years.

Vehicle telematics-based insurance programs offer many advantages for consumers and insurance companies. But they come with hidden risks, said Rinku Dewri, one of the authors of the study and an assistant professor in the department of computer science at the University of Denver.

While insurance companies may not collect any actual tracking data, a lot can be inferred from the data that is collected, Dewri said. "Our work started with the hypothesis that non-tracking driving habits data can potentially be used for tracking," Dewri said.

Using just speed and distance data, the researchers attempted to find out if they could correctly identify the destinations of the trips during which data was collected. As part of the effort, the researchers extracted "quasi-identifying" information such as traffic stops, driving speed and the number of turns made by the driver during the trip. They then matched that data with publicly available map information to see if they could identify the destination.
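To make the idea of "quasi-identifying" features concrete, here is a minimal illustrative sketch (not the study's actual code) of how stop locations could be recovered from nothing but a speed trace: integrating speed over time gives distance traveled, and the cumulative distance at each halt becomes a trip fingerprint. The function name, thresholds and toy data are our own assumptions.

```python
# Illustrative sketch (assumed, not the researchers' implementation):
# recover the distances from the trip start at which the vehicle stopped,
# using only a time series of speed samples from a telematics device.

def stop_distances(speeds_mps, dt_s=1.0, stop_threshold=0.5):
    """Integrate speed to distance and record where the vehicle halts.

    speeds_mps: speed samples in meters/second, one per dt_s seconds.
    Returns cumulative distances (meters) at which each stop began.
    """
    stops = []
    distance = 0.0
    was_stopped = True  # trip starts from rest; don't count the start
    for v in speeds_mps:
        distance += v * dt_s  # simple rectangle-rule integration
        stopped = v < stop_threshold
        if stopped and not was_stopped:
            stops.append(round(distance, 1))
        was_stopped = stopped
    return stops

# A toy trip: cruise at ~10 m/s, stop, cruise again, stop.
trace = [10.0] * 30 + [0.0] * 5 + [10.0] * 50 + [0.0] * 5
print(stop_distances(trace))  # -> [300.0, 800.0]
```

Even this crude reconstruction shows why "no GPS" does not mean "no location signal": the pattern of stop distances is exactly the kind of feature that can be matched against a road map.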

"Assuming that we know where the trip started, our algorithm consults a road map to identify all those routes that have intersections at least at those distances from the start point of the trip where the driver made a stop or a turn," Dewri said. "In some cases, we found 10 candidate routes; in others, we found more than 150 candidates."
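The consistency check Dewri describes can be sketched as a simple filter: a candidate route survives only if it has an intersection near every distance at which the driver stopped or turned. The sketch below is a simplified, hypothetical version of that idea; the route data, tolerance and function name are our assumptions, and the paper's actual map-matching is more involved.

```python
# Hypothetical road-map data: each candidate route is a list of cumulative
# distances (meters from the trip start) at which an intersection occurs.

def consistent_routes(routes, event_distances, tol_m=25.0):
    """Keep routes that have an intersection near every observed event."""
    survivors = []
    for name, intersections in routes.items():
        if all(any(abs(d - i) <= tol_m for i in intersections)
               for d in event_distances):
            survivors.append(name)
    return survivors

routes = {
    "A": [290.0, 510.0, 805.0],   # intersections along candidate route A
    "B": [150.0, 400.0, 650.0],   # B has no intersection near 300 m
    "C": [310.0, 790.0],
}
events = [300.0, 800.0]  # driver stopped or turned at these distances
print(consistent_routes(routes, events))  # -> ['A', 'C']
```

Each observed stop or turn prunes the candidate set, which is why even "non-tracking" data can narrow hundreds of possible routes down to a handful.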

The researchers applied a ranking method to the routes to predict the top destinations for the trip. "We observed that in 60% of the cases, our algorithm placed the true destination in the top three possibilities," Dewri said. Even when the number of potential routes was large, the destinations often tended to cluster within a small geographic area.
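One way such a ranking could work is to score each surviving route by how closely its intersections line up with the observed stop and turn distances. This is our own simple heuristic for illustration, not necessarily the scoring the paper uses:

```python
# Assumed ranking heuristic: score each candidate route by the total
# mismatch between observed event distances and the nearest intersection,
# then sort so the best-fitting route comes first.

def rank_routes(routes, event_distances):
    """Return route names ordered from best to worst total mismatch."""
    def mismatch(intersections):
        # Sum, over observed events, the gap to the nearest intersection.
        return sum(min(abs(d - i) for i in intersections)
                   for d in event_distances)
    return sorted(routes, key=lambda name: mismatch(routes[name]))

routes = {
    "A": [290.0, 510.0, 805.0],
    "B": [150.0, 400.0, 650.0],
    "C": [310.0, 790.0],
}
print(rank_routes(routes, [300.0, 800.0]))  # -> ['A', 'C', 'B']
```

A ranking like this explains the 60% top-three result: the attacker never needs a unique answer, only a short list of likely destinations.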

The study highlights the issue of unwanted disclosures, where consumers unknowingly reveal information they would rather keep private through data they are willing to share, Dewri said. "Unfortunately, there is no theory that will immediately tell what may get disclosed, or inferred, from the data we share."

The best way that consumers can protect themselves against privacy risks associated with usage-based insurance is to demand more transparency from their insurance companies, he noted.

"Programs using these devices should make the consumer aware of the potential risks, even if these programs are themselves not involved in making secondary inferences," Dewri said. "The clearer we are on how the data is used, the better methods we can design that will retain the utility of the data, without making it prone to unwanted inferences."

Jaikumar Vijayan covers data security and privacy issues, financial services security and e-voting for Computerworld. Follow Jaikumar on Twitter at @jaivijayan or subscribe to Jaikumar's RSS feed. His e-mail address is jvijayan@computerworld.com.
