In the years since that New Year’s Eve, Y2K has become an enduring punchline. The whole incident is now remembered mostly as a non-issue whose overblown media hype was matched only by the massive amounts of money governments around the world deployed to solve it. That image of Y2K as a non-event persists in the cultural memory, still used to dismiss supposedly looming catastrophes in politics or technology.

But Y2K wasn’t just an over-exaggerated media-fueled mass panic. Behind the scenes, as people hoarded food and water, or joined doomsday cults, programmers worked tirelessly to prevent anything from going wrong. In the months leading up to 2000, there were genuine concerns within the IT world about the Y2K bug, and a subsequent concerted effort to avoid widespread problems when ’99 switched over to ’00.
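The bug itself was mundane: to save what was once scarce storage, many legacy systems recorded years as two digits, so any date arithmetic that crossed the century boundary broke. A minimal sketch of the failure mode (the function here is illustrative, not drawn from any real system):

```python
def years_elapsed_two_digit(start_yy: int, end_yy: int) -> int:
    """Naive elapsed-years calculation using two-digit years,
    as many pre-2000 systems did to save storage."""
    return end_yy - start_yy

# Within the 1900s, the shortcut works:
print(years_elapsed_two_digit(85, 99))  # 14 years, as expected

# But once '99 rolls over to '00, the result goes negative:
print(years_elapsed_two_digit(85, 0))   # -85, instead of 15
```

Remediation typically meant either widening stored years to four digits or "windowing" — treating two-digit years below some pivot as 20xx rather than 19xx — across billions of lines of code.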

To believe that Y2K amounted to nothing by chance alone — to believe that it was media hype and nothing more — is to “engage in a destructive, disparaging revisionism that mindlessly casts aside the foresight and dedication of an IT community that worked tirelessly for years to fix the problem,” Don Tennant, editor in chief of Computerworld, wrote in 2007.


But the fact that so many people feel this way amounts to a kind of weird triumph: the evidence for all the work is the absence of disaster. In his 2009 retrospective on Y2K, Farhad Manjoo concluded that the success of Y2K preparations “has bred apathy” — that the lack of Y2K armageddon has made it more difficult for people to heed warnings “about global warming or other threats…the fact that we fixed it may make it harder to fix anything else in the future.”

There might be yet another way to look at it.

The two narratives of what transpired on Y2K, taken together — that it was strictly a non-event, or that it was a non-event because programmers were skilled enough to predict and avert it — actually bred something else: confidence.

Whether you believe Y2K was much ado about nothing from the start, or whether you understand that it was only so because of human intervention, the lasting legacy might not be one of apathy, but trust — both in the machines we created, and in our ability to understand and control them. Either the networks and systems we had created to that point were inherently designed to be strong and secure (or even indestructible), or we were readily able to predict and avoid areas of weakness.

Armed with this confidence, in the years since Y2K, we have created more and more complex networks and systems to enhance, guide, or even take over many facets of our daily lives. Whereas in 1999, many aspects of our day-to-day living remained offline, today little is left untouched by computer systems, networks, and code: Talking to friends and family, reading a book, listening to music, buying clothes or food, driving a car, flying from place to place — all of these activities depend on the network. Increasingly, the network extends to devices that, in 1999, were not considered to have much technological potential: household appliances like refrigerators or thermostats.

Now, we’re discovering what a false sense of security we’ve created. Along with it should come the realization of just how little we understand about the programs that permeate our lives and the networks that link them. Unlike 20 years ago, we appear less and less capable of predicting what will go wrong, or of stopping it before it does.