Today in Tedium: Every time I use pieces of the early internet, I get this warm feeling in my chest. It’s hard to describe, but I imagine it’s not unlike the feeling that went through the crowd during the Sex Pistols concert at Manchester’s Lesser Free Trade Hall on June 4, 1976. There’s a sense of purity and simplicity there that is hard to recapture through other means—the sense that I’m witnessing something culturally important that, in its own way, could change the world. It feels unadulterated, without the frayed ends and sense of familiarity that come with years or even weeks of constant use. And it’s one of those things where, if you feel it once, it’s kind of like a drug. I had that feeling recently when I was reading up on Gopher, a part of the internet that got overshadowed by the World Wide Web but, in its own quiet way, still lingers on. I wanted to check its pulse—and while it’s not hustle-and-bustle in the way that, say, Twitter is, it carries on. Tonight’s Tedium talks about the Gopher scene in 2017. Yes, there still is one. — Ernie @ Tedium

70
The standard port number that Gopher uses for online connections, a standard set in stone in 1993 by the Internet Assigned Numbers Authority. (The web generally uses port 80, while Telnet uses port 23 and FTP uses port 21.) Despite being in heavy use throughout the early ’90s, the technology faded from use as the web became more common, and as a result, it’s difficult to find a modern tool that allows you to connect to Gopher sites. (One exception is Matt Owen’s Gopher Browser, a client for Windows that came to life relatively recently.) The Overbite Project, located at Floodgap Systems, has a list of preferred clients, if you’re interested in hopping on board.
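Part of the protocol’s appeal is how little there is to it: per RFC 1436, a client simply opens a TCP connection to port 70, sends the selector string followed by CRLF, and reads until the server closes the connection. Here’s a minimal sketch in Python of what that looks like (the hostname in the comment is just an illustrative example):

```python
import socket

def build_request(selector: str) -> bytes:
    """A complete Gopher request is just the selector plus CRLF (RFC 1436)."""
    return selector.encode("ascii") + b"\r\n"

def gopher_fetch(host: str, selector: str = "", port: int = 70) -> bytes:
    """Open a TCP connection, send the one-line request, and read
    until the server closes the connection."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(build_request(selector))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# e.g., gopher_fetch("gopher.floodgap.com") would return the root menu
```

That’s the entire transaction model: no headers, no content negotiation, no persistent connections.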

An ASCII-art photo of Floodgap founder Cameron Kaiser. (per Kaiser’s homepage on Gopher)

This guy might be the most influential figure in the Gophersphere in 2017

Cameron Kaiser has a lot of time, heart, and soul invested in Gopher. But don’t mistake his passion for the protocol and its many servers for mere nostalgia. He sees Gopher as structurally better than the Web in a number of hugely important ways.

“I like a lot of things about Gopher—its easy parsing, the simple protocol, low bandwidth and computing requirements, and relatively few moving parts,” he explained to me in an interview. “I think the Web has gone the wrong direction on all of these attributes, and I didn’t want to see Gopher go away in its shadow.”

The operator of Floodgap Systems, who has been active on Gopher since 1993 and has operated his own servers since 1999, has found himself in the position of being the Gopher protocol’s most important steward. Among the things Floodgap does that are valuable for Gopher: It watches over a sizable repository of unique content on its own Gopher server; it maintains a list of active and recently updated Gopher servers, so they can be easily found and used; it hosts the only active Veronica-2 search engine in the entire Gophersphere; it keeps a list of clients for each platform; and, most importantly for people who don’t have access to such clients, it offers a web-based proxy for accessing Gopher sites.

While he points out that there are some weaknesses in the technology he offers, it’s hard to ignore the impressiveness of what’s mostly a one-man shop. He points in particular to the strides of his Veronica-2 system.

“Even though Veronica-2 is hardly Google-class, I’m proud of how much it has indexed, that the system is also aggressive about expiring servers that are gone, and the fact that it gives people a reliable foothold into Gopherspace to look at what’s there,” he noted by way of example. “Floodgap is also one of the few sites providing automatically maintained news and weather; there is a battery of systems on the backend that find, convert, and index content for use, and it all runs generally without intervention.”

Why put in all this work? In large part, it’s because he sees Gopher as an extremely important platform, one that is both structurally consistent and designed to put the power of the interface in the hands of the user—unlike a website, where the visual look and functionality are driven by the developer. This, notes Kaiser, holds benefits specifically for machines of an older vintage.

“The retro community is discovering the ugly truth: If it can’t browse the Web, people think it’s not useful as a computer,” he explained. “And a 1MHz 6502 or an old 68K Mac can’t browse the modern web. But they can browse Gopher, because the protocol and interface make little demand on the client, which happily by simple convergence is also Web-like, and there are many resources out there that are still hosted on Gopher.”
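The light demand on clients that Kaiser describes comes down to the menu format: each line of a Gopher menu is a single item-type character, a display string, and tab-separated selector, host, and port fields. Even a very modest machine can parse that with almost no code. A sketch in Python, following the RFC 1436 menu layout (the Floodgap entry in the test is illustrative):

```python
from typing import List, NamedTuple

class MenuItem(NamedTuple):
    item_type: str   # '0' = text file, '1' = submenu, '7' = search, 'i' = info
    display: str
    selector: str
    host: str
    port: int

def parse_menu_line(line: str) -> MenuItem:
    """Parse one Gopher menu line: the first character is the item type,
    the rest is tab-separated fields (RFC 1436)."""
    display, selector, host, port = line[1:].split("\t")[:4]
    return MenuItem(line[0], display, selector, host, int(port))

def parse_menu(payload: str) -> List[MenuItem]:
    """A menu is CRLF-separated lines, terminated by a lone '.' line."""
    items = []
    for line in payload.split("\r\n"):
        if line == "." or not line:
            break
        items.append(parse_menu_line(line))
    return items
```

A client renders the display strings, and follows a link by sending the item’s selector to the item’s host and port—no markup engine required.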

“Gopher is the information without the flair, the HTML without the JavaScript. Gopher gives me what I want when what I want is to read stuff, not like/comment/interact/favorite/share, etc. I’m a big fan of all of those things, but sometimes I just want to read a thing on an old computer and follow a few links. Gopher lets me do that. It’s ultimate Old Web, and I am one of those ultimate Old Web ladies who still uses Lynx occasionally just so some BOFH will see it in their web logfiles and, hopefully, smile.”

— Jessamyn C. West, a Vermont librarian and onetime MetaFilter employee, discussing why she worked to convince the community site to bring back its long-dormant Gopher server, which it relaunched last year after a 15-year hiatus. (BOFH, in case you’re wondering, is the “Bastard Operator From Hell,” a fictional sysadmin character that dates back to the Gopher era.)

So how much use is the Gopher version of MetaFilter getting? According to site operator Josh Millard, the read-only server is generally pretty quiet and allowed to live on its lonesome, but it does have a certain appeal for some types of users, especially on long comment threads, where CSS and JavaScript can slow down the page. “It’s definitely got some appeal as a lightweight option for the nuclear bunker,” Millard said.

MetaFilter is by far the best-known mainstream site in the modern-day Gophersphere, but it’s far from the only one.

GopherVista.

People are still doing innovative things with Gopher, even now

Last month, the long-dormant search engine AltaVista made a surprising comeback onto the internet, in all its late-’90s glory. No, Verizon didn’t get any weird ideas about reviving the name after completing its recent acquisition of Yahoo. Instead, a young hacker type who works for CloudFlare launched a brand-new version of AltaVista, based on a 20-year-old server app called AltaVista Personal, for the simple purpose of creating a Gopher search engine.

“The idea was originally a concept I had to prove to a friend you can still run 1996 software in a modern system,” Ben Cox explained. “Gopher is a conveniently retro data source!”

Cox, who is 22 and was, as a result, a toddler when AltaVista’s server software was first released, noted that much of his work is based around the intricacies of the HTTP and HTTP/2 protocols, making working in Gopher a comparative cakewalk.

“Unlike HTTP and HTTP/2, where there are lots of odd rules you may have to follow, Gopher has very few rules you have to follow, and most of them involve the logic behind serving the directory pages, not content itself,” he explained. “This makes it a great hobby project, since it’s entertaining to use and not likely to be frustrating to deal with edge cases.”

(In case you’re in the mood to try to build your own Gopher AltaVista server, he helpfully put the code up on GitHub.)

Gopherpedia

He’s not the only hobbyist cracking Gopher’s bones. A slightly older project that added a lot of value to Gopher as a whole is Gopherpedia, which (as you might guess) is a Gopher version of Wikipedia. In a text-only interface like Lynx, it feels utterly natural, like Wikipedia was made for this format. I know I was smitten. But creator Colin Mitchell says that he sees the tool as being better for some use cases than others, due in no small part to its lack of hyperlinks.
“I hear from a lot of people that they use Gopherpedia because it works really well on low-bandwidth connections. If you know exactly what you want to read about, you can look it up and start reading without loading all the extra chrome that comes with Wikipedia,” Mitchell told me in an interview. “On the other hand, I think Gopherpedia really suffers from the lack of hyperlinks, because one of the great things about Wikipedia is the serendipity of finding really interesting links in an article you’re reading.”

So why Wikipedia? It turns out Mitchell had spent some time working on a Ruby-based Gopher server named Gopher 2000, and wanted a project that would put the server through its paces. He picked the largest thing possible, of course. “I like to joke that it’s probably the biggest site in Gopherspace in terms of content, but I think that must actually be true,” he added.

While not officially sanctioned by the Wikimedia Foundation, it’s polished enough that it seems like it should be. (While the server runs into the occasional hiccup, it’s quite slick for a service that 50 to 100 users rely on daily.) And he’s still making improvements. At first, the platform imported Wikipedia articles en masse, but eventually he moved to an API-based interface, “so in theory it’s always up to date.”

So what drives projects like these, anyway? Clearly, the public benefit of these ideas is relatively small. A big part of it might simply be that it’s good practice. Mitchell cited his work on Gopherpedia as a boost to his skills with the Ruby programming language, for example. “I’ve gained a lot of respect for early internet technologies, and an interest in keeping them alive as much as possible,” Mitchell noted.
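The “logic behind serving the directory pages” that Cox mentions is about as small as server logic gets: a menu is just tab-separated lines with CRLF endings, closed by a lone “.” line. Here’s a sketch of that serialization step in Python—an illustration of the RFC 1436 wire format, not how Gopher 2000 or Cox’s server actually implements it (the example entries are hypothetical):

```python
def render_menu(items):
    """Serialize (type, display, selector, host, port) tuples into
    Gopher menu wire format: one line per item, fields separated by
    tabs, CRLF line endings, closed by a lone '.' line (RFC 1436)."""
    lines = [f"{t}{d}\t{s}\t{h}\t{p}" for t, d, s, h, p in items]
    lines.append(".")  # end-of-menu marker
    return "\r\n".join(lines) + "\r\n"

menu = render_menu([
    # 'i' info lines conventionally carry dummy selector/host/port values
    ("i", "Welcome to an example server", "", "error.host", 1),
    ("0", "About this server", "/about.txt", "example.org", 70),
])
```

That single function is essentially the whole “rendering” layer of a Gopher server—the rest is reading files and listing directories.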