
While we fret about losing privacy and other dangers of the digital revolution, one sad change is happening with little notice: Our technology is stealing the romance of old conversations, that quaint notion that some things are best forgotten.

Remember the get-to-know-me chat of a first date or that final (good or bad) conversation with someone you knew for years? Chances are, as time has passed, your memory of those moments has changed. Did you nervously twitch and inarticulately explain your love when you asked your spouse to marry you? Or, as you recall it, did you gracefully ask for her hand, as charming as Cary Grant?

Thanks to our near-endless access to digital recording devices, the less-than-Hollywood version of you will be immortalized on the home computer, or stored for generations in some digital computing cloud.

Wearable devices like Google Glass are only a hint of what is to come — ever smaller and cheaper, and tied to inexpensive digital storage. Records of voices and events will be a permanent part of the Internet the way text is already, held forever and searched, mined and inspected.

Casual conversations and off-the-cuff quips are about to be put through the data blender, scrutinized and organized and pumped through algorithms in search of deeper meaning.

Computer analysis of talk will yield new insights by closely analyzing the so-called metadata of speech — its intonations, pauses and interjections. At what point in the conversation did a joke help close a sale? Who ultimately prevails: the person who talks loudly, or the person who repeats a point most effectively? What the National Security Agency learns about people by studying metadata may be only the start of how much we can tell, apart from the mere meaning of words.

In short, speaking from the heart could become speaking from the talking points of a computerized recommendation engine.

“There are lots of ways that information is coded” in speech, said Ron Kaplan, a scientist at Nuance Communications, which makes voice-recognition and analysis software. “Phoneticians and phonologists have all kinds of theories about how these work, but they’ve been hard to automate. With lots and lots of data, you’ll be able to see all these patterns.”

Mr. Kaplan is interested in making it easier for your thermostat to understand you when you say you’ll be back next Tuesday, or for Netflix to offer the right choices when you say, “A Bond movie, but not with Roger Moore.” But he also thinks examining conversation will lead to a new understanding of how we interact, and how the individual interacts with the crowd. “It’s hard to say what the societal effect will be” from that, he said.

Some worry about what this will mean not just for the future, but for how we treat the past.

“It could almost be a comedy routine: before they tell the mother the sex of her baby, they play it a recording that says, ‘In the interest of better customer service, portions of your life may be recorded or monitored,’” said Brad Templeton, a director of the Electronic Frontier Foundation. “I’ve been telling people to change their behavior in the present, because they don’t know how that recording will be analyzed in the future.”

There is much to be gained from storage, of course. Who would not thrill to hear Lincoln at Gettysburg, or Shakespeare playing even a lesser role at the Globe? But Shakespeare’s plays were also reconstructions from the memories of diverse actors, some years after a performance. Our greatest literature was generated by an imperfect collective recollection, as much as it was written by one person.

While it might be interesting at this distance to know if Neville Chamberlain believed it when he said his deal with Hitler had brought “peace in our time,” will we want a real-time analysis of pauses and tones after the president speaks on a national emergency? Which is preferable for our hearts and minds, the theater of politics or deference to the algorithm?

There are also certain difficulties associated with a world of perfect recollection. A recent discussion on the Web site Quora about what it is like to have a photographic memory gives a taste: it’s nice for passing college courses, but it also makes everything seem the same.

Like most of history’s technology-forced changes, this enters our lives as a convenience. A Nuance product is already used to identify high net-worth customers of Barclays Wealth Management after 15 seconds of deliberately casual speech, instead of using irksome PIN codes and interrogations about their mothers’ maiden names. A new version of the software was released last week, with the added benefit of identifying the voice prints of known fraudsters, and putting callers who exhibit suspicious behavior onto a “gray list.”

Already, stockbrokers are issued cellphones that record all their interactions wherever they are, instead of just calls made from their desks. Conference calls and video chats can be transcribed as text, and later annotated, or tagged, with keywords, so that someone can find where a speaker mentioned an important meeting or product.

“Voice is going to places it hasn’t gone before, with people developing applications to post conversations to Facebook, or games that prove you are a good son because you call Mom every week,” said Jason Goecke, the chief executive of Tropo, which makes voice-tagging software. “There are no social constructs for pervasive recording yet, just a hodgepodge of state regulations based on the phone system.”

Mr. Goecke and others call this transformation Hypervoice, a play on the links put on the written word to create hypertext in the early days of the Web. This month the Hypervoice Consortium opened an online forum to create standards and practices for how we’ll manage the switch from the transient to the permanent.

That quintessential American trait, self-reinvention, may well be threatened in the hard world of video and audio documentation and the pursuit of objective truth.