Last week, the U.S. and EU announced a tentative agreement to allow U.S. companies to continue sending and receiving personal information about EU residents across EU borders — everything from an online employee directory for a multinational company to a Facebook profile stored in the cloud.

An earlier agreement, the Safe Harbor Privacy Principles, dated back 15 years and was relied on by some 4,000 companies. It was declared invalid last year by the European Court of Justice over concerns, highlighted by the Edward Snowden disclosures, that compliance with surveillance requests from U.S. government agencies, notably the NSA, may have put U.S. companies in conflict with the EU’s broadly written privacy directives.

It’s entirely unclear, however, whether the so-called “EU-U.S. Privacy Shield” will pass muster with EU authorities. Most of the changes impose new but largely toothless restrictions, including expedited dispute resolution requirements, solely on private-sector organizations. The agreement also adds layers of annual reviews and expands the privacy bureaucracies at both the Department of Commerce and the Federal Trade Commission.

Yet the Court of Justice’s rejection of the old Safe Harbor was based entirely on potential U.S. government practices, for which there is little indication of changed policy or procedure.

Final approval for the Privacy Shield deal, reflecting the complex bureaucracy that plagues the EU, will include review by a dizzying array of governmental and quasi-governmental privacy bodies, the Commission itself, and its member states. Further legal challenges are all but guaranteed. The resulting uncertainty will hang over internet-based companies doing business abroad for months, if not years, to come.

Meanwhile, the Privacy Shield, even if it survives its trial by fire, will likely do nothing to add even a modicum of new protection to the personal information of European citizens.

That may never have been the real goal. Many of the EU’s member states perform the same kinds of surveillance on their own citizens that the U.S. does (often working together with the NSA). And the privacy practices of other nations doing business with the EU are even worse, yet they haven’t been subjected to the same kinds of finger-wagging rhetoric as the U.S.

There’s more than a whiff of hypocrisy here, suggesting once again that the privacy red flag is being waved more to hamstring U.S. tech giants than to protect EU citizens. It’s all part of last year’s Digital Single Market initiative in the EU, which, despite its name, has so far been more about erecting protectionist trade barriers than solving Europe’s innovation deficit. (The EU is also ramping up wide-ranging antitrust actions against leading U.S. internet companies, for example.)

To the extent that the privacy concerns in Europe are genuine, they are a reflection of a profoundly different approach to privacy in two giant economies. U.S. privacy law, inspired by our revolutionary founding, focuses more on restrictions, such as the Fourth Amendment, that protect citizens from information collection and use by government rather than private actors. In fact, private actors are often protected from such restrictions by the First Amendment.

But in Europe, scarred by catastrophic abuses of personal information that include the Inquisition, centuries of religious wars, the Holocaust, and the surveillance states of the former Soviet bloc countries, citizens enjoy broad privacy protections from companies and each other. In Europe, the government is seen as the principal protector of personal information from abuse by non-governmental institutions — the opposite of the U.S. model.

Which is not to say that U.S. law doesn’t protect personal information. In an interview last week, FTC Commissioner Julie Brill, who participated in the Privacy Shield negotiations, noted that a wide range of specific U.S. laws strongly protect particularly sensitive data, including financial, employment, and health data, as well as personal information about children, from private misuse.

And it’s hardly clear that the EU’s broad privacy directives translate to stronger protections. The rhetoric may be strong, but the EU’s central government is weak, leaving enforcement to member states, whose implementations and enthusiasm vary wildly. As a result, privacy law in the EU is even more disjointed than in the U.S.

The last two decades of interactions between the internet and its would-be regulators should make one thing clear: regardless of where they live, digital consumers can’t hope to secure real protections for their personal information from traditional governments, domestic or foreign.

In large part, that’s because the architecture of the internet and the unique economic properties of information make it effectively impossible to control digital conduct across borders drawn during the Industrial Age. The internet was born global.

At the same time, information misuse causes real damage to the information economy. If not minimized, it can severely undermine the essential trust that is the principal fuel of the digital age. So what can business leaders do to solve the problem better than policymakers?

First, they can recognize and support the efforts of NGOs working to set standards, ensure transparency, and enforce reasonable security practices for information collection and use. Organizations such as TRUSTe and the Better Business Bureau, for example, offer “trust seals” to services that promise to abide by specific information practices. These relatively low-cost self-regulatory bodies have been gaining momentum and effectiveness.

Second, all businesses must recognize the growing power of consumers to vote with their clicks, rejecting products and services whose information usage profiles violate their preferences. Such behavior may sound unlikely, especially when consumers don’t always know how and by whom their personal information is being used. But even internet leaders, especially social network providers, have learned the hard way that failing to collaborate with users on privacy design can quickly sink promising new products or require frequent and hasty revisions that offer more granular information-sharing choices.

To the list of privacy-related misfires that includes Google Buzz, Facebook Beacon, and LinkedIn’s “social ads,” we can now add the failed launch (at least for now) of Google Glass. Though the product was little more than a head-mounted smartphone, its Orwellian aesthetic generated a visceral discomfort, even before anyone had seen it, that the company was unable to overcome despite deploying an army of Glass-wearing goodwill ambassadors, known as “Explorers.”

That visceral response, which I refer to as the “creepy factor,” suggests the third and perhaps most important principle in avoiding future privacy crises, at least the kind not generated by governments themselves. And that is simply to ride out the storm.

While survey after survey suggests consumers have growing concerns about the private use of personal information, their behavior regularly betrays their stated preferences. We say we’re uncomfortable sharing information with third parties, but when a specific choice is presented, we prove adept at weighing the costs against the benefits.

For example, most of the web’s free content requires the user to accept tracking cookies that customize advertisements. In the EU, consumers must explicitly accept the cookies, but so what? Nearly everyone does. The consent is not a protection; it’s an annoyance.

What is true is that novel uses of information — think of the Internet of Things, wearable health and fitness trackers, autonomous vehicles and drones, big data and artificial intelligence — often generate the creepy response, but only at the beginning. If a product can survive the initial period of discomfort, and if the information exchange it offers proves a fair one, most privacy crises resolve themselves.

To minimize the damage, however, successful companies have learned to include users in information design, educate the market before a product launch, and practice rigorous transparency and self-enforcement of basic privacy principles. These practices — not more inter-governmental agreements, frameworks, empty laws, and self-interested threats — are the essential tools to solving future privacy problems.

If you don’t believe me, just watch what happens next.