An Elegiac Introduction

David Shields’ 2010 (anti)novel Reality Hunger is a kind of 205-page manifesto composed of 618 brief numbered sections, the longest no more than a few pages, though most are limited to a few sentences. These passages are, by and large, uncited quotations from other authors (an appendix at the end that the fearful publisher forced Shields to insert reveals the sources, though Shields advises the reader to tear those pages right out). The quotes — and Shields’ original material, which, ironically, is probably the most interesting part of the book — all more or less work to advance Shields’ dispiriting thesis: fiction is over and done with; we cannot keep suspending disbelief and sustaining the illusion demanded by narrative, which we now too clearly experience as no more than a contrived, unwieldy framework of artifice cobbled together to lubricate the delivery of the one or two significant insights that the author hopes to convey. “I’ve become an impatient writer and reader,” Shields proclaims in his own voice in Reality Hunger. “[S]omething has happened to my imagination, which can no longer yield to the earnest embrace of novelistic form…. Forms serve the culture; when they die, they die for a good reason: they’re no longer embodying what it’s like to be alive…. The novel is dead. Long live the antinovel, built from scraps.” Why not, Shields reasons, drop the pretense and get right to the punchlines: the insights themselves? Why not create a work composed of insights, aphorisms and apothegms, à la Nietzsche, Emerson or Montaigne? Moreover, since writers like these have already done this better than we ever could, why not simply compile a compendious chrestomathy of such authors at their most quotable, sprinkle in some of one’s own bits of wisdom relevant to the theme to add more of a personal touch and let the reader sort it out and enjoy? Why not?
Well, truth be told, because the results, despite the punchy appeal and the breezy brevity of the quoted passages and of the book as a whole, are not actually all that conducive to enjoyment; all Shields’ efforts to organize the passages under thematically indicative chapter titles notwithstanding, they are slow going without the novelistic framework that entails conventions such as plot and character to keep the reader’s attention tethered to the text. With neither such narrative glue holding the pieces in place nor the guiding light of genius, the mind of a single great essayist or aphorist, to advance the argument, we are left with something that is accretive but not cumulative. Without an overarching structure, the parts do not sum. There is a lot of scratching the surface, but our itch finds no relief. The “antinovel built from scraps,” in other words, is, unfortunately, just scraps. This is no accident. The “something” that “has happened to [Shields’] imagination” and turned him into “an impatient writer and reader” is the same something that has happened to our culture and wreaked havoc on our ability to immerse ourselves, contrary to Shields’ thesis, in fiction and non-fiction alike. The renowned Russian writer Lyudmila Ulitskaya has similarly commented that neither we nor she is any longer capable of sustaining interest in long books (nor, alas, even in long articles like this one), so she consciously endeavors to keep her books brief. Nicholas Carr, in his 2008 piece “Is Google Making Us Stupid?” in The Atlantic, quotes, as representative, a blogger on the use of computers in medicine saying, “I can’t read War and Peace anymore. I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.” And there’s, no doubt, a lot more where that came from.
I would be shocked if a systematic study of word counts of articles appearing in periodicals over the course of the past 30 years did not show a downward-sloping tendency. As Shields writes (this time, quoting), “We all need to begin figuring out how to tell a story for the cell phone.” He adds, “The short-short … feels particularly relevant to contemporary life. Delivering only highlights and no downtime, the short-short seems to me to gain access to contemporary feeling states more effectively than the conventional story does. As rap, movie trailers, stand-up comedy, fast food, commercials, sound bites, phone sex, bumper stickers, email, voice mail, and Headline News all do, short-shorts cut to the chase.” Yeah … fantastic. Something is afoot, and it is closely connected to the technological revolution through which we are now living. Aspects of it have been observed and described by scientists, pundits and philosophers alike. Marshall McLuhan, Nicholas Carr, Mark Bauerlein, Alan Kirby and Gilles Lipovetsky, among many others, have contributed to what must needs be a long conversation extended across history (precisely the sort of conversation that is no longer likely to take place in our culture, as I discuss below). But I have not seen anywhere, at least not in terms I find satisfactory, a succinct account of our cultural predicament, our era, that puts all the big puzzle pieces in place. This is what I hope to do. By the end, you will understand why, despite my scintillating prose and cogent argumentation, your mind will probably wander … unless, of course, it wanders so much that you stop reading midway through, in which case you will not even have the distinct pleasure of coming to learn the etiology of your failure. 
Nor, then, will you gain an understanding of the manner in which our pandemic cognitive uncertainty coupled with our decreasing capacity to process such uncertainty results in the elevation of quantity over quality in forming our consensus views of matters large and small. But you likely won’t care; you’ll already have moved on to something else … even if I promise to follow David Shields’ example and furnish plenty of quotes and insights along the way. As you might expect, I am only partially joking. Living in a technologically advanced society, you have probably already experienced some degeneration in your capacities to focus, to evaluate information and to be engaged fully in your own experiences. If you somehow haven’t, then those around you and especially those younger than you most certainly have. The generation now coming of age hardly had the chance to develop such capacities in the first place. And the problem will get worse before it gets better. But get better it must. We cannot keep going like this. Perhaps through the universalization of advanced neurofeedback training that is already beginning to emerge or through some other medium we have yet to imagine, our abilities to focus, engage and evaluate will come back to us. Whatever comes, however, will surely be very different from the culture we once had; and so this — whatever else it is — must necessarily be a lament for the past and for the present alike.

The Four Ages of (Media) Man

Any division between eras is necessarily artificial. There are few sharp breaks and many atavistic remnants of earlier eras and proleptic anticipations of later ones, which is why we can find the features typically associated with postmodernism in many modernist classics and vice versa. This is also why it is possible for some among us to offer prophetic diagnoses of what is to come.
Marshall McLuhan was such a one, and his anticipation of many of the features of our present age in his focus on interactivity and his notion of the “global village” compels attention. But I want to start with another one of his ideas, one that was not wholly original with him but that he helped popularize. As far as breaking down eras goes, this breakdown is about as solid as it gets. History (to date) consists of four major periods: the period before writing was invented, when communication was all oral; the period after the introduction of writing; the period after the invention of the printing press, when mass propagation of the written word became possible; and the period when new electronic media — telephones, radios, televisions and computers — revolutionized our lives, increased our connectedness and turned our aesthetic and communicative experiences multisensory. Each of these periods has various characteristic features closely linked to and driven by the dominant media of the era. The age of orality, for example, requires the cultivation of memory, and the verbal art of the time is of necessity mnemonic — poetry rather than prose, and poetry that is highly metric, with plenty of repeated formulae like “the wine-dark sea,” “the rosy-fingered dawn” or “fleet-footed Achilles” ready to be deployed as needed to fit the meter. Without written chronicles of successions of events, history, as Oswald Spengler observed, is not conceived of as a linear or progressive unfolding of events, but rather, as mythological, cyclical or, per Parmenides, an unchanging eternity. In the next age, when writing comes to the fore, our individual memories decline while our collective memory gains immeasurably. Prose is born, and with it the serious work of sustained, ruminative nonfiction, philosophical works, for instance, which would have been inconceivable before writing’s advent.
The creation of a tradition consisting of authors and thinkers engaging one another across the centuries becomes possible for the first time. A sense of history is born. With texts being laborious to produce and, therefore, uncommon artifacts, literacy is equally rare, with the literate often concentrated together in centralized repositories of texts, such as monasteries. The invention of the printing press gradually but dramatically transforms our economy and society. Ordinary people can have their very own copy of the Bible, and so individual revelation and Protestantism become possible. Literacy spreads. Languages become standardized, as spelling, grammar and punctuation conventions slow the rate of linguistic evolution. The written word hits the market. The nation-state emerges as a community linked by a common language and literature (or, as Benedict Anderson has argued, by national newspapers). The dissemination of ideas and of culture, humanitarian and scientific alike, intensifies in pace, as a greater number of literate people consume and produce culture. Deep engagement with works of imagination becomes a means of diversion and spiritual fulfillment, and thus, the Artist is born; we come to see creators as secular saints. This development combines with the popularization of a mechanistic, scientific worldview (yielded by scientific progress) among the literate to elbow theology gently or not-so-gently out of the room, and the role of religion in our lives is diminished. Toward the middle and end of the 19th century, mass literacy finally arrives and with it Ruskin’s dream of a society where workers are thinkers and thinkers workers, Pater’s exaltation of aesthetic experience as our path to salvation and Arnold’s vision of a new mass Hellenism (coupled with his lament for the reality of growing mass Philistinism), in which Culture — “the best that has been thought and said in the world” — can be our collective legacy. 
After a turbulent period when great wars are fought over great (and not-so-great) ideas, even as the edge of the cultural horizon, literary and scientific alike, has dipped far beyond the depth of the common man, great books curricula spread as a means of educating a populace returning from war and in need of learning to keep up in a world where literacy — now entailing much more than the mere capacity to decipher words — is the threshold ability necessary to sustain a high-tech economy that is key to protecting the state from dangerous enemies within and without. In the 1960s, intellectuals and their ideas reach the absolute apogee of their influence. But by then, the page is already turning … or rather, we are already tuned in to a different frequency, switched over to a different channel. Electronic media are swiftly ushering in the fourth major era of human civilization, which is still with us today. Art and entertainment begin to speak to us, literally. The transmission of information turns instantaneous. The slow, meditative act of reading in solitude, of being spoken to in well-chosen words that emanate from far-off times and demand to be interpreted is replaced by quick and ready multisensory input. As Kennedy’s t.v. victory over Nixon in the same debate Nixon won among radio listeners definitively establishes, appealing images trump well-chosen words. High-stimulation entertainment and infotainment — rollicking rollercoaster rides, as the two-bit critic’s cliché goes — replaces low-stimulation activities like tackling the great books, which required too much of the reader. Instead, new opportunities for participation arise, ones that, through the mass dissemination of technological capacities once prohibitively expensive for all but a few, enable us to become creators, artists, publishers, directors, critics and scientists. 
In this sense, mere universal literacy in the processing of the written word was a modest goal; now, it would appear, we can strive for universal competence in every conceivable avenue of cultural production.

From the Postmodern to the Virtually Real

But I seem to have let our fitful and spirited stallions gallop too fast for our clunky chariot, and so I feel it my duty as the driver to shout a sobering Ho! Ho! (even while wishing I could do the same for that much larger, clunkier runaway chariot of our civilization that I’ve undertaken to describe). Let us retrace our steps a bit and take a more precise measure of the ground we are presently traversing. The era of electronic communication is, after all, not an indistinguishable whole but a steady progression, and the radio is one thing, television another and computers and the internet yet a third or fourth. Let’s approach the issue from a slightly different angle, however. Though some would maintain that we are still in the midst of postmodernism or in the death throes of postmodernism or that postmodernism is, by its nature, an extended death throes (modernism’s? ours? both?), many would argue that since approximately the early or mid-1990s, we have been living in a new epoch. Since, per Hegel, the owl of Minerva flies only at the coming of the dusk, the recognition of the period change comes somewhat later than its purported inception, but in an influential 2006 essay, “The Death of Postmodernism and Beyond,” Alan Kirby named the period “pseudomodernism,” later rechristening it “digimodernism.” Gilles Lipovetsky has spoken of “hypermodernity,” Marc Augé of “supermodernity,” the unilluminating stop-gap “post-postmodernism” has made appearances, and other names I need not catalogue here are also floating around.
We can leave the final choice of nomenclature to whoever will be doing the digging through our real or virtual bones; however, on the premise that it is rather silly for every period from here on out to be obliged to include some variant of the root “modern” in its appellation, and on the further premise that the cultural productions most characteristic of our present period bear little resemblance to the difficult, allusive, fragmented works characteristic of high modernism (only the fragmentation remains), I prefer an entirely different name — “virtual realism” — because (i) it obviously plays off the notion of “virtual reality,” a term that has the advantage of actually being in colloquial use in our present day and referring to a central technology or technological aspiration of our time; (ii) it accounts for our newfound thirst for and fetishization of the experience of the “real” in an increasingly ersatz milieu while simultaneously capturing the inherent “virtuality” of that sought-for “realness”; (iii) in adverting to a much earlier period — realism — it recognizes the extent to which we are taking a giant cultural step backwards, even as the “virtual” component keeps us rooted firmly in the forward-looking techno-utopianism of the present moment; and (iv) it invokes the important fact that communication and other cultural production increasingly take place in real time, or, at least, virtually in real time, thereby pointing to, as it were, the sped-up character of our cultural chariot. My pitch for the name thus concluded, let me turn to more important concerns: what are the characteristics of virtual realism? I want to start by ticking off in short order some purely factual developments all relating to technological changes in our midst and then proceed to discuss in more detail some of their more intriguing consequences.
Here, then, are the basic facts, which will, hopefully, feel obvious and uncontroversial: The decrease in technological and, therefore, economic cost associated with computers, the internet, smartphones and other relevant gadgetry has permitted more and more of the world’s population to go “online,” resulting in the formation of what McLuhan first termed the “global village,” a worldwide community of potential viewers, listeners, readers, creators and consumers.

The same kinds of technological advances have given that global community relatively cheap and easy access to the avenues of content creation formerly reserved for certain elites, with the result that there is now a blending of the line between the sources of culture and its audience and a far greater level of interactivity between these two formerly distinct categories of persons.

On account of the above factors, viz., the vast increase in the number of people communicating worldwide, there is a concomitant vast increase in the number of interpersonal communications, coupled with a marked decrease in the size, i.e., information content, of such communications due, in part, to the very fact that the communications are so great in number, so that the time spent on any single one cannot but be limited.

The flowering of the internet along the lines here described has given rise to an enormous upswing in the amount of information and pure unprocessed data practically and theoretically available to us.

Accompanying these developments is the growth of social media, making possible the close observation, i.e., “following,” of the daily lives (including images, events, opinions and recommendations) of large numbers of people, including celebrities of all stripes, leading to a significant diminution of the private sphere and altered expectations of what kinds of events can, should and will be kept private.

Spurred on, again, by these same sorts of technological advances, including portable and wireless electronics possessing, by our earlier standards, incredible computational power, there has been a proliferation of widely available, highly addictive, high-stimulation entertainments, e.g., video games, YouTube clips, Facebook, etc.

The globalization of culture and communication has also made possible an increasingly globalized market, greatly facilitating the replacement of the distinctive and the local by globe-spanning chain stores and e-businesses, e.g., amazon.com, drugstore.com, etc., capable of engaging in e-commerce and distributing products and services throughout large geographic territories; the result is that many formerly unique and particular parts of the country and the world look, sound and feel more and more uniform.

And So…

Having established the groundwork, let’s now turn to the more interesting part: the sociocultural consequences and implications of the aforementioned developments. Swimming, once again, against the very tide I’m describing, I will address them not in the biggest-headline-first order that our increasingly limited attention span is now best adapted to process, but rather, in the least-remarkable-first order that makes, here as elsewhere, the most rational sense because the phenomena I will describe earlier are logically prior to the ones to which I am building up. But lest you cannot wait — and if you’re a full-fledged member of our punch-drunk culture, you likely cannot — let me anticipate a few of my punchlines.
The phenomena upon which I will be expounding are the manner in which our accelerated cultural clip — i.e., the syntagmatic instantaneanism of our contemporary culture that has transitioned from the “Literary Work” to the “Text” to the “msg” as its principal unit of semantic meaning — and the mass democratization inherent in our newfound interactivity to the nth degree, despite all their substantial benefits, (i) markedly impoverish our cultural production, (ii) corrupt our communication and (iii) dramatically reduce the role of publishers, editors and other cultural gatekeepers, leading to the publication of nearly everything and anything; these developments, when combined with our (iv) obsessive focus upon the pedestrian groundedness of the “real,” as distinguished from the high-minded fancy of the imaginary and the visionary, result in: (v) the fragmentation of our consciousness in a sea of distraction, bringing about a deplorable decline in traditional literary and philosophical culture and the possibilities such culture offered for the experience of deep, purposive immersion so essential to human thriving; and (vi) an en masse loss of trust in all our erstwhile sources of ideational authority, bringing about a vast upsurge in cognitive uncertainty, which, in turn, spawns a further decline in the quality of our cultural, political and scientific dialogues and a replacement of the trained judgment and discernment formerly offered by authorities armed with human intellect and expertise by blind machine categorization and preferences driven and dictated by sheer numbers. It seems I’ve sabotaged myself again, haven’t I? Apparently incapable of reducing my many-headed hydra of a thesis to the size of a single venomous bite, I’ve offered you instead an unwieldy, unquotable, untweetable structural mess of a sentence that lacks venom but can’t stop fussing and hissing. Well, so you’ve persuaded me to try once more.
I’ll give you a tiny tidbit to tweet out to all your frenetically texting “friends”: when, in resolving matters of art, science, history and philosophy, we cease to defer to experts and let the masses have their say, we lose these critical disciplines both in themselves and as our bulwarks against the otherwise unprincipled power of the market and the State. No, still, alas, far over the 140-character limit. Shall I give it another go? When it comes to truth and beauty, democracy brings nihilism, which means our most critical questions will be resolved without truth and beauty counting in the calculus. 143 characters, and that’s not counting the spaces (Twitter counts them). Now, I give up. And when it comes to tweets, texts, tidbits, sound bites and all our on-the-go, off-the-cuff, compressed, character-counted categories of communication, so, I suggest, should we all. David Shields is wrong: if truth and beauty are our ultimate goals (shouldn’t they be?), we had better all figure out how, as a culture, to take the long-long over the short-short every day of the week. But, now then, let us return to the process of spelling out the consequences of the fact that we seem to be moving in the exact opposite direction.

Mass Participation Means Massive Precipitation, a/k/a More Means Dumber

Increasing the absolute size of the audience for cultural productions inherently implies a less elite, less specialized, less literate audience, which, in turn, implies a dumbing down of cultural content. While the internet makes niche and affinity marketing easier than it ever was, most disseminators of culture, and especially commercial disseminators, measure success in terms of page views, Facebook “likes,” advertising revenue and/or actual sales. There is, thus, a clear incentive to broaden one’s scope to cater to the greatest number of possible users.
The interactivity of many modern-day websites as well as television and radio shows (with all sorts of programming driven by real-time or offline polls, votes, posts, call-ins, etc.) likewise means that the old distinction between a talented creator and a passive, receptive audience has been significantly blurred in our times, a change greatly facilitated, as well, by the cheapening of the technology necessary to engage in content creation and by the fact that — as discussed further below — we are no longer in the habit of listening passively, but rather, are constantly communicating one thing or another. The end-result is, again, that content driven by the tastes of a broader number of people is, all else being equal, likely to aim lower and, hence, to be inferior in its aesthetic aspirations. Aldous Huxley, writing in 1934, already recognized this problem at a much earlier point in its development: For every page of print and pictures published a century ago, twenty or perhaps even a hundred pages are published today. But for every man of talent then living, there are now only two men of talent.... It still remains true to say that the consumption of reading — and seeing — matter has far outstripped the natural production of gifted writers and draughtsmen. Walter Benjamin quotes this very passage from Huxley in his famed 1936 essay, The Work of Art in the Age of Mechanical Reproduction, and he follows this quotation with a terse pronouncement informed by his Marxist viewpoint: “This mode of observation is obviously not progressive.” But, progressive or not, Huxley is surely correct: talent is scarce; those eager to speak are many … and Benjamin does not dispute the underlying fact that the line between creator and spectator is blurring. He expands upon the notion: For centuries a small number of writers were confronted by many thousands of readers. This changed toward the end of the last century. 
With the increasing extension of the press, which kept placing new political, religious, scientific, professional, and local organs before the readers, an increasing number of readers became writers — at first, occasional ones. It began with the daily press opening to its readers space for “letters to the editor.” And today there is hardly a gainfully employed European who could not, in principle, find an opportunity to publish somewhere or other comments on his work, grievances, documentary reports, or that sort of thing. Thus, the distinction between author and public is about to lose its basic character. The difference becomes merely functional; it may vary from case to case. At any moment the reader is ready to turn into a writer. That such a diagnosis was possible in the 1930s suffices to demonstrate that some measure of interactivity has been with us for many decades. The emancipatory potential inherent in free public education, technological progress, urbanization and higher standards of living has been driving us hitherward for quite some time, and authors from the lowbrow (e.g., choose-your-own-adventure books) to the highbrow (e.g., Milorad Pavic in his brilliant masterpiece, The Dictionary of the Khazars, and in his other works as well) have been paying homage to the growing role of the audience in the production of the text since before the internet made its appearance and well before its full ambit came into view (the choose-your-own-adventure genre apparently hasn’t made much progress since, as the recent “interactivization” of Hamlet makes clear, with its author coming forward with this deep insight: “It occurred to me that [Hamlet’s] favorite speech, ‘To Be Or Not To Be’, is structured like a choice, almost like those old Choose Your Own Adventure books, and I thought, ‘Oh my God, I have to write this’”).
While such interactivity had and still has the potential to be progressively revolutionary in the way Benjamin had hoped, the fact is plain for anyone to see that, at least for now, it has made for a rather sad and stupid spectacle, unleashing all our pent-up passionate intensity without bringing much light or truth along with it.

Logorrhea and Graphomania - Speaking More and Saying Less

Isaiah Berlin, a compulsive letter writer, used letters as a means of procrastination: “All I produce is little fragments,” he wrote in one letter. “I really must try and achieve one solid work … and not scatter myself in all these directions all over the place.” Letters, for him, were “a kind of drug — great relief from work.” For us, the fortunate result is that we now have published volumes of his letters, informing us of his views and thoughts on a wide variety of matters. If letters can be a drug and a distraction from work, what should we think of e-mail, texts and social media? Moreover, while long, handwritten correspondence, delicately folded, sealed in addressed envelopes, paid for and placed in the mail to spend many days or weeks reaching its destination, encourages care, completeness, formality and attention to detail, what exactly is encouraged by endless streams of two-line or two-word texts typed while walking or driving or by 140-character-max tweets? Snapchat, which a New York Times writer recently pronounced “[t]he most widely used new app these days, at least among my 20-something cohort,” allows users to send out photos that then quickly disappear forever after they are viewed by the recipient, so that the sender cannot later be humiliated when (s)he has to see them turn up when (s)he is clothed and sober once again. We are now constantly communicating. The threshold for what is worth communicating has dropped through the floor. Our communication is dashed off; it is quick and ephemeral.
It is full of typos, light on grammar and virtually bereft of ideas or other value worth preserving. It is sloppy and silly and inculcates bad habits of mind. If a major component of the legacy of Isaiah Berlin and other intellectuals of earlier ages was preserved in their letters, what will we and our would-be intellectuals leave behind?

The Elimination of Intervention – Approaching the Brink of Publishing Everything

Between our collective cup and lip, there used to be many a slip, a failsafe slippery slope, in fact, a stumbling block in the way, both of necessity and by design, that served to ensure that if one of us poured in an insipid brew or an outright dose of poison, the rest wouldn’t be obliged to drink. The stumbling block was an imperfect solution, far from a panacea. It was both over- and under-inclusive: sometimes disguised drops of soporific or toxic concoctions unfit to drink did trickle through, and at other times, wholly healthful tonics tumbled untouched right into history’s dustbin. But, all things considered, we could have done worse; the clumsy contrivance discharged an essential function. That function, in case my equally clumsy metaphors are insufficiently transparent, is filtering what gets published. We used to have professional editors, publishers and agents around to serve that role. This could not have been otherwise: publishing used to be an expensive endeavor, so we had no choice but to limit what got published and make sure it got done right. To an extent, these intermediaries still exist today, of course. But more and more, because of the technological changes I have been describing, it is impossible for anyone to intervene to keep all our self-published, real-time nonsense from seeing the light of day, or even to take promising but obviously not-yet-fully-realized work and steer it wisely toward its ideal fruition.
The internet hands to each and all our very own personal printing press, which the smartphone then puts in our pockets. It may not ensure widespread exposure, but even those reins are handed over to mechanisms very different from those that once prevailed. The qualities necessary to become an internet sensation, to collect clicks and “likes” or other measures of popularity, are, needless to say, quite different from those required to garner the attention of the professional’s critical eye, and while the latter can be bought (but so can the former: http://www.theguardian.com), captured by connections or charmed by aesthetic affections-of-the-moment, a certain professionalism at least served to defend the fort against the incursions of the more untutored barbarians whose rampages would’ve trampled our last vestiges of civilization long ago. Without an intermediary to police the border, under virtual realism there is hardly a distinction to be made between communication and publication, so that the selfsame constant, sloppy, thoughtless acts of communication considered in the previous section now often constitute publication as well. The right Facebook update, YouTube upload or message board post, after all, might get more page views than a more formally “published” piece like this one. The consequence of this process is a marked lowering of the bar for what is seen as comprising content fit for public consumption, and this, in turn, has dramatic consequences, both in terms of blurring the already sketchy line between public and private by making people feel it is perfectly appropriate (or even expected) to expose a good deal of themselves (literally and figuratively) and in terms of significantly undermining our trust in and respect for published content, a matter I will discuss in far greater detail further below.

The Fetishization of the “Real” - Craving Reality, Finding Banality
The fact that in the epoch of virtual realism we spend so much time online, engaged in virtual communication, virtual commerce and virtual life, even as the physical environment around us grows increasingly globalized and homogenized, births in us a nostalgia for the “real.” Postmodernism’s problematization of the category of the “real” has likewise done us no favors in this respect. As David Shields writes in Reality Hunger, “Living as we perforce do in a manufactured and artificial world, we yearn for the ‘real,’ semblances of the real.” And again (quoting), “Our culture is obsessed with real events because we experience hardly any.” So, we get such phenomena as the fetishization of the “real” in ghetto culture and hip-hop culture, where “keeping it real” has become a mantra that invokes something like a need to stay true to concerns about the material and emotional “realities” of life in the ghetto. We, thus, have the benefit of such tone-deaf unintentional comedy as Jennifer Lopez flaunting her money and her body in the 2002 hit single and video for “Jenny from the Block,” even as she tells us, “Don’t be fooled by the rocks [i.e., gemstones] that I got / I’m still, I’m still Jenny from the block / Used to have a little, now I have a lot / No matter where I go, I know where I came from (South-Side Bronx!),” and then this priceless bit: “I stay grounded as the amounts roll in / I’m real, I thought I told you / I’m real, even on Oprah.” And speaking of the Queen of All Media, the domination of daytime t.v. by talk shows is another nod to the hegemony of “the real.” So, of course, is reality t.v.
Ever since MTV’s The Real World (premise: a bunch of twenty-somethings of below-average intelligence but far-above-average comeliness share a house) debuted in 1992, we have been treated to one reality show after another, whether it be attractive people trying to brave the “wild” (Survivor), fashion designers making clothes for models (Project Runway), pretty faces with pretty voices competing to see who can do the best job belting out bland pop songs (American Idol, X-Factor, The Voice), sexpot wrestlerettes trying to make it big in the WWE’s wild world of imitation wrestling (Total Divas) or collectors and profiteers trying to maximize value as they bid on abandoned storage lockers in which manipulative producers have (allegedly) planted exotic items among the actual ones left behind (Storage Wars), to name just a small sampling. We have, as well, the recent spate of tattoos (and other body modification), through which people often try to connect with or invoke some sort of “real” culture — ethnic, racial, religious, national or tribal identity, etc. — by having a permanent and painful, i.e., “real,” inscription carved into their flesh. Inevitably, none of these gestures in the general direction of the “real” to counter the virtual within which we live manages to attain the real “real,” if such a thing there be, but rather, achieves no more than acts of contrived showmanship. Shields, at least, understands this much, as he quotes for us Emerson’s admonition that “[t]here are no facts, only art.” The sad thing, culturally speaking, is that what is sacrificed in the process of reaching for the “real” is artistry and imagination. With very few exceptions, the “real” that we get substitutes for depth, for art and culture that would have required far more careful and conscious exertions of artistic imagination or even, God willing, artistic genius.
If art, as the cliché goes, holds up a mirror to reality, then while art’s mirror ideally captures reality in its most revealing and illuminating moments, the mirror the “real” holds up to reality succeeds in being no more than an excuse for intellectual laziness and other failures of imagination, showing us all those ordinary moments art would have wisely filtered out. But to keep us from getting bored with such ordinariness, we have to substitute for the aesthetic appeal that art would have conferred all kinds of more immediate appeals: sex, violence, toilet humor, dumb people doing dumb things, etc. In other words, appeals to our most base cravings and depictions of our most vulgar moments are chosen as the most easily digestible fodder for the lowest common denominator that drives ratings and sales. Portrayed in this fashion, we see ourselves reflected back upon ourselves again and again, so that while the “real” imitates the worst in us, we take this depiction for what we really are or for what we might aspire to become and consciously or unconsciously begin to imitate the “real.” The end result is the redoubling and amplification of our own most deplorable tendencies. Once upon a time, we, in our hubris, imitated or sought to imitate the gods, then saints or chivalric knights, then political leaders and cultural heroes, then those lesser cultural heroes we call celebrities. Now, we imitate each other in our worst moments. “[I]n tomorrow’s world, men will be gods for each other,” the literary theorist René Girard wrote in his 1966 classic of cultural criticism, Deceit, Desire and the Novel. Tomorrow is now fully upon us, and thus it is that our quixotic quest for the “real” winds up inundating us in the base and the banal.

Cracks in Consciousness, Deep and Shallow

I have already described our civilization’s progress. Once upon a time, we were all illiterate.
Then, with, first, the invention of writing and, later, the invention of the printing press, literacy entered the picture and then broadened from the province of an elite few to the birthright of the Many. Though such literacy had and continues to have many practical and economic uses, it may have found its ultimate calling in the Great Book, the transcendent, transformative work of fiction or non-fiction that engages us deeply and brings with it the power to rouse our souls. But, its hard-won lures powerless in the face of the technological revolution that is rapidly taking place and reformatting our cognitive capacities in its wake, the Great Book will soon recede into obscurity. Readers of deep literature will continue to exist, no doubt. But these will be few and far between, as the masses sink back into a state that might be thought of as literacy, and even as functional literacy, but that will never again aspire to the challenge of being the kind of high literacy capable of engagement with the best that has been thought and said. This much goes without saying: when the transformation we are undergoing reaches its full fruition, our loss as a civilization will be immeasurable. The undeniable fact is that highly stimulating, rapid-fire and incessant contemporary communication and entertainment have made us more or less incapable of deep or sustained immersion in long or difficult texts. Nicholas Carr has elaborated upon this concept admirably in his 2008 essay on the theme (read the essay here) and in his 2010 book-length follow-up, The Shallows, so I do not want to belabor the point, but let me make a few observations pertinent to my theme. First, as with the advent of interactivity, the trend toward a culture rife with distractions and interruptions has been with us for some time, and Benjamin saw this one coming as well.
He framed his approach to the issue in the context of the then-burgeoning art of film, which he naively saw as revolutionary in its potential to democratize the aesthetic experience. He speaks of the film, “the distracting element of which is also primarily tactile, being based on changes of place and focus which periodically assail the spectator”: “Let us compare the screen on which a film unfolds with the canvas of a painting. The painting invites the spectator to contemplation; before it the spectator can abandon himself to his associations. Before the movie frame he cannot do so. No sooner has his eye grasped a scene than it is already changed. It cannot be arrested. Duhamel, who detests the film and knows nothing of its significance, though something of its structure, notes this circumstance as follows: ‘I can no longer think what I want to think. My thoughts have been replaced by moving images.’ The spectator’s process of association in view of these images is indeed interrupted by their constant, sudden change.” Benjamin then again cites, with disapproval, the same Duhamel’s complaint that “the masses seek distraction whereas art demands concentration from the spectator,” and then concludes that “[r]eception in a state of distraction, which is increasing noticeably in all fields of art and is symptomatic of profound changes in apperception, finds in the film its true means of exercise.” T.V. ups the ante, since with t.v. we get not only the constant succession of changing images but also commercials interrupting the programming and then the impatient viewer repeatedly having the option to interrupt his own viewing experience by changing the channel. With cheaper, better technology and lowered attention spans, the pace of both t.v. and movies speeds up, with the cuts coming more and more frequently, moving to the pace of music videos.
Watch a commercially successful mainstream action film from the 1980s, and see if it doesn’t feel slow by comparison to today’s summer blockbusters. Compare a cartoon from your childhood to the ones on t.v. today. And, for that matter, try to get a kid who has just spent some time watching t.v. to do something that requires patience and concentration. Good luck. But the internet and smartphone technology seal the deal, even as they seal the fate of traditional literary texts. As Benjamin explains, anticipating McLuhan, “the manner in which human sense perception is organized, the medium in which it is accomplished, is determined not only by nature but by historical circumstances as well.” Or, as Carr puts it, citing a University College London study, “It is clear users are not reading online in the traditional sense; indeed there are signs that new forms of ‘reading’ are emerging as users ‘power browse’ horizontally through titles, contents pages and abstracts going for quick wins. It almost seems they go online to avoid reading in the traditional sense.” Harvard University’s recent report on the dire state of humanities education features a similar argument as one big reason such education might be in decline: “Human societies, both literate and non-literate, have universally understood themselves through works of art that require deep immersion. In the twenty-first century, however, deep immersion is no longer the order of the technological day. New technologies disfavor the long march of narrative, just as they militate against sustained imaginative engagement.” See “The Teaching of the Arts and Humanities at Harvard College: Mapping the Future” from the Faculty of the Arts and Sciences (May 31, 2013), available here.
Recent research suggests that we get, on average, 70-75 e-mails per day interrupting our other activities, that we are primed to open them when we receive them on account of our built-in impulse to be aroused by and react to novelty in our environment, that we experience something like a fight-or-flight response when we do receive and open them (elevated heart rate, shallow breathing, etc.) and that it accordingly takes us some time to re-orient ourselves and re-engage with whatever task we’d dropped in medias res. See here. And this accounts for the disruptive effect of e-mail alone. When texts, tweets and social media status update notifications enter the picture, we are truly lost in a sea of distraction. As it is, 30% of us check our phones while at dinner, and 54% check them in bed. See id. I am decently confident that among people who own smartphones and are between, roughly, the ages of 15 and 45, both those numbers are closer to 90% or more. More recently, research has shown that even listening to a lecture within view of someone else multitasking on their laptop leads, on average, to a 17% drop in retention, enough to get you from an A to a B- on an exam. See here. Books, meanwhile, are disappearing from our midst. They are no longer a prominent aspect of our physical environment, which inherently makes us less likely to imagine them as a universal leisure activity in which all of us can take part. The less ubiquitous they already are, in other words, the still less ubiquitous they will become. Bookstores are closing up shop, making us less likely to see one and stop in to browse or to check out an item in the window display that caught our eye.
The physical book itself is being displaced by e-books and e-readers, so that we are less likely to see people reading around us or see what it is they’re reading when they do read (and I must say I used to enjoy discriminating in my snap judgments of people based on what they were reading, whereas now I have to condemn virtually one and all for reading absolutely nothing). Nor is it the case that e-books have simply replaced physical books. If my own informal longitudinal observations of commuter habits in public transportation or the many recent laments for the loss of interest in the humanities are any guide, people are less likely than ever to be reading books of any sort. One can see their irises dancing erratically, their fingers spastically skipping, swiping or typing away on their phones (and I am certain we would find elevated heart rates and shallow breathing if we were to check), all telltale signs that they are playing video games or absorbed in some sort of electronic communication. If one of the principal allures of books, moreover, was their capacity to allow us to escape out of the prison of our own consciousness and imagine what it is like to be other people in other places and times, then something like that need is now being much more directly fulfilled by our capacity to “follow” other people’s, including celebrities’, daily lives on social media. We know what they’re doing, we get their spur-of-the-moment observations and opinions and we even get photos and videos (take that, literature!). Who can endure the difficult, demanding interaction encompassed in that sublime, imagined meeting of the minds that takes place when the work of a great creator meets its ideal match in the consciousness of an engaged, creative reader when we now have constant real and real-time interaction between reader-authors and author-readers unmediated by a difficult, demanding text hovering like a veil between their puckered lips? 
Who needs intricate, challenging stream-of-consciousness writing of The-Sound-and-the-Fury variety when we have all the sound and fury we need in following the unedited, spontaneous multimedia stream of consciousness of Facebook “friends,” frenemies and celebrities? The problem (or one among many) is that while experiencing The Sound and the Fury can transform us, experiencing all our multimedia sound and fury signifying nothing simply infuriates us. A bevy of recent research suggests that, owing to a phenomenon known as “FOMO” (“fear of missing out”), the more time we spend on Facebook, Instagram or other social media, watching people we know or people we want to know post updates and especially photos of themselves doing things that look great by comparison to the solitude in which we sit checking those updates, the more generally miserable we feel about ourselves. See, e.g., http://www.bbc.co.uk and http://www.slate.com/articles/technology/. This doesn’t happen when we spend solitary moments deeply engaged in great books or other great art since great art immerses us in worlds the sublime otherness of which (even when those worlds seem superficially similar enough to our own in terms of time and place) does not call forth FOMO. René Girard’s notion of mediated desire packages this insight in a more high-minded — and, I think, ultimately more psychologically accurate — way than does the rather shallow notion of FOMO. The things we want are things we see other people enjoying, but, as I noted above, Girard’s point is that with time, those “other people” whose desires we routinely borrow have come down from the heavens and become increasingly like ourselves. This gives rise to the phenomenon of “double mediation” — we imitate them, while they imitate us, with neither participant fully aware of what is going on — a phenomenon that was impossible when we borrowed our desires from the gods, saints, chivalric knights or even celebrities.
Here is Girard’s apt diagnosis of the societal malady that then begins to afflict us all when double mediation becomes commonplace: “In the world of [double] mediation, the contagion is so widespread that everyone can become his neighbor’s mediator without ever understanding the role he is playing. This person who is a mediator without realizing it may himself be incapable of spontaneous desire. Thus he will be tempted to copy the copy of his own desire. What was for him in the beginning only a whim is now transformed into a violent passion. We all know that every desire redoubles when it is seen to be shared. Two identical but opposite triangles are thus superimposed on each other. Desire circulates between the two rivals more and more quickly, and with every cycle it increases in intensity like the electric current in a battery which is being charged. We now have a subject-mediator and a mediator-subject, a model-disciple and a disciple-model. Each imitates the other while claiming his desire is prior and previous. Each looks on the other as an atrociously cruel persecutor. All the relationships are symmetrical; the two partners believe themselves separated by a bottomless abyss but there is nothing we can say of one which is not equally true of the other. There is a sterile opposition of contraries, which becomes more and more atrocious and empty as the two subjects approach each other and as their desire intensifies.” Constant communication and social media, having come well after the 1966 publication of this passage, metastasize the problem to which Girard was prophetically pointing, as we find ourselves incessantly inundated with an unsettling stream of information about the lives of other people much like ourselves and can hardly so much as entertain a thought or desire for anything before it is snatched away from us and made into someone else’s thought or desire that simultaneously empties out and quickens our own.
If, per Emerson, “[i]n every work of genius we recognize our own rejected thoughts,” which “come back to us with a certain alienated majesty,” then what we recognize in every banal social media status update we immediately check on our smartphones are our own inchoate, mundane thoughts and desires coming back to us sullied, amplified, banalized and pulverized, until we feel alienated from ourselves. “Double mediation gradually devours and digests ideas, beliefs, and values,” Girard fittingly concludes. Another way of approaching the same issue: many of our high-stimulation, modern-day distractions are geared toward immediate satisfaction, toward satisfying a craving or a quick need. Even if such distractions succeeded in making us feel momentarily happy, however (which, per the discussion above, they do not), this would still not suffice. In addition to momentary hedonic happiness, we need a sense of eudaimonia, a sense that our lives have a larger meaning or purpose, a connection to something larger than ourselves. When we do not have that sense — and we increasingly do not — intriguing recent research suggests that we remain physiologically agitated, with our bodies manifesting “the same gene expression patterns as people who are responding to and enduring chronic adversity.” See www.theatlantic.com. “Empty positive emotions … are about as good for you as adversity,” the researchers concluded. Great art requiring deep focus and immersion, of course, is one of the main ways in which that sense of meaning and purpose stemming from a connection to something larger than oneself comes our way. If we have arrived at a cultural moment bereft of such art and, instead, are subjected to a constant barrage of meaningless nonsense, we are in trouble.
Robbed by our non-stop streams of communication, entertainment, media and social media of our calm and focus, of our thoughts, desires, ideas and values and of our very sense of purpose, we are ever more persistently present but absent at the same time. As when we are distracted and texting away at dinner with family and friends or at a club or party where many of those present are busy checking their phones instead of communicating with one another, dancing, or otherwise enjoying each other’s company, we are now hardly ever wholly present in our own thoughts and activities. We are always preoccupied, always focused on something else and focused on nothing in particular as a result. If, as we are often told, we should aspire toward mindfulness as the ideal conscious state, what we are now routinely experiencing on every front is the absolute and utter mindlessness that is its polar opposite.

The (Cognitive) Uncertainty Principle: the Elevation of Factual Quantity over Quality

Josef Stalin, who often encountered facts about the world as obstacles to be overcome in re-educating the populace to believe in the truth and beauty of the alternate reality engineered by the Soviet state, once quipped that “facts are obstinate things.” If he were alive today, Stalin might well never have said or thought anything of the sort. It is simply no longer true. As steady streams of Wikipedia updates by ordinary people like you and me attest, facts now change rather easily all the time. We have never been more confused about what to believe, which one might expect to yield circumspection and skepticism, and it has … but only to an extent; what has happened, instead, is a marked increase in vituperation, polarization and incivility. And while we are busy shouting away ever more rapidly, loudly and incoherently, matters of import are getting resolved by State actors, automatic processes and market forces acting in our stead. Allow me to explain.
“The life span of a fact is shrinking,” writes David Shields in Reality Hunger (quoting). Indeed, it is. The general acceleration in the pace of communication and information flow has predictably brought about a state of affairs where the amount of data at our disposal far exceeds our ability to process it intelligently and draw justified conclusions based upon it. This, however, doesn’t stop us from trying. It used to be the case, when the sources of authoritative claims about any given matter were limited to a known few, that such factual claims, when made, would enter into a single-threaded conversation that could be extended over years, decades or even centuries. Claims could be unpacked and analyzed and, with sufficient time, perhaps verified or falsified. The process may not ever have embodied the heights of human rationality, but, one way or another, the test of time worked its rough magic. Even through most of the latter half of the 20th century, a few big media outlets served to structure and direct our national dialogue upon any issue that managed to percolate through the then-thick-skinned filter to the cultural surface. For better or for worse (and there is certainly some measure of each), that centralized, monolithic culture is gone. It has fragmented into tiny pieces. We are now all potentially sources of information; all of us are speaking, and hardly anyone is listening. Without the guidance of a responsible editorial hand, we have flooded the internet with our viewpoints, facts and beliefs. Corporations and political interest groups fund studies to promote their competing versions of the truth and then go out on the stump to promote their alleged discoveries. Legions of seemingly factual claims pop up daily and disappear just as quickly into the void. 
Others catch on and linger, but without any opportunity for systematic scientific scrutiny, the ones that stick around might do so only because they are better funded or more amusing or outlandish or make for better internet memes. The “truth” might surface somewhere in cyberspace at some later point, but it will often be drowned in the noise. Because of the present pace of our communication, last month is old news, and by the time anyone gets around to verifying, falsifying or even intelligently discussing what we have heard, the conversation has moved on to something else. There is, rather, no conversation, but instead, the simultaneous public chatter of thousands of voices clamoring to be heard apropos of everything and anything under the sun. No resolution is possible, for as between ships passing in the night, there is no confrontation to resolve. Perhaps it is often the case that the most widely held theoretical viewpoints of the intellectuals of one epoch become the practical standpoints of the masses in the next. The history of the fact has, in any event, followed this trajectory. If, in the postmodernist period, the fact died a theoretical death, with intellectuals coming from a variety of positions having denounced the pretense to factuality, objectivity and essentialism, howsoever named, then in our own period of virtual realism, their theories have become our practical realities. Assailed both intellectually and technologically, the fact has become remarkably fragile, even as various interest groups and political constituencies — feeling simultaneously unsettled and liberated by the end of the imposition of consensus factual narratives emanating from above — have adopted and clung to their own favored factual narratives with all the desperate ardor of creation myths that furnish their raison d’être.
Thus, with all talk of scientific “consensus” now instantly and reflexively provoking mass skepticism, it is easier than ever for religious fundamentalists to believe, the solid weight of evidence and legitimate scientists’ unanimous consensus and protestations notwithstanding, that the earth is less than 6,000 years old or that manmade climate change is a fiction perpetrated by researchers in the pay of conniving political elites. Along similar lines, as commercial interests avail themselves of the 24/7 news cycle and innumerable publishing channels to report on or advertise (the line between reports and ads having become increasingly finely drawn) each new “miracle food” or study pertaining to food science as if it were time-tested medical dogma, our collective stupefaction in the face of a barrage of contradictory claims emerging daily only grows. And yet we exacerbate the problem by rebroadcasting these dietary dispatches to one another and championing our nostrums, thereby amplifying the noise. Experts in relevant fields — scientists, doctors, chess masters, professors of literature and other humanities, professional critics, authors and journalists, etc. — are variously trained to deal with the kinds of cognitive uncertainty that tend to arise in their respective realms of endeavor. They do not invariably arrive at right or even good decisions and resolutions — perhaps, because we are limited and fallible, while many uncertain questions are hard to resolve, they are even wrong more often than they are right — but we may have some assurance, at least, that scientists performing studies have some understanding of the basic tenets of the scientific method, of the distinction between causation and correlation and that sort of thing, or that professional film critics have seen a broad range of films, have some elementary knowledge of film history and are decently skilled in the art of textual interpretation. 
But when the gates are thrown open and we en masse jerk loose the reins (without really grabbing firm hold of them ourselves), all bets are off. Anyone who has had the experience of browsing a community forum devoted to medical or other scientific issues will know what I mean: confronted with a dizzying array of viewpoints, shreds of reasoning, citations to studies seemingly lending support to every side of every question, links, blanket claims and cross-claims, often couched in unstructured sentence fragments, we have a difficult time navigating our way through the jumble. We do not know whom to trust, what to believe. Our newly accustomed state of cognitive exasperation then becomes pandemic, so that, having grown used to feeling benumbed by it all, we no longer trust even those to whom, in former ages, we would have deferred as experts in the field. As in the realm of literature, those who once passed for authorities in every cultural domain have lost the ability to inspire our confidence; as they became numerous and then ubiquitous, they and their texts have been stripped of their “aura.” Benjamin observed that, as far as literature was concerned, the quick and easy proliferation of copies substantially identical to an original work — such that it no longer makes much sense even to speak of an “original” as distinguishable from its copies — and the interactivity of our new cultural productions had led to the loss of “aura” once possessed by these older works of art emanating from a single authoritative and original creator. While great authors are, by and large, geniuses, and while even most run-of-the-mill authors are, by and large, at least competent experts and professionals or competently and professionally edited, we are, by and large, unqualified idiots.
If high modernism prized the talented author, and postmodernism brought with it a great leveling of high and low culture — the great book as a manufactured commodity among many others — and the consequent proliferation of very ordinary authors oriented to consumer culture, then virtual realism brings us the real-time “author,” who is not actually a professional author in any traditional sense and who constantly broadcasts through non-traditional channels his bits and pieces of meaningless multimedia “work” unchecked and unedited. The transition to postmodernism had already introduced a great increase in our cognitive uncertainty. As Roland Barthes describes in his essay “From Work to Text” (1971), which popularized the transition from the modern and pre-modern Literary Work to the postmodern Text, while traditional literary science “teaches respect for the manuscript and the author’s declared intentions,” the Text “reads without the inscription of the Father.” In the Text, the author’s “life is no longer the origin of his fictions but a fiction contributing to his work.” “[T]he Text,” Barthes continues, “requires that one try to abolish (or at the very least to diminish) the distance between writing and reading”; “it asks of the reader a practical collaboration.” Writing at a time when we had not yet reached the absurd point where we are turning Hamlet into a choose-your-own-adventure story, and breaking Kickstarter records in the process, Barthes explains that while “[c]ertainly there exists a pleasure of the work (of certain works); I can delight in reading and re-reading Proust, Flaubert, Balzac, even — why not?
— Alexandre Dumas …, this pleasure, no matter how keen and even when free from all prejudice, remains in part (unless by some exceptional critical effort) a pleasure of consumption; for if I can read these authors, I also know that I cannot re-write them (that it is impossible today to write ‘like that’) and this knowledge, depressing enough, suffices to cut me off from the production of these works.” His essay, “The Death of the Author” (1968), drives the point home still more plainly: While “[t]he explanation of a work is always sought in the man or woman who produced it, as if it were always in the end, through the more or less transparent allegory of the fiction, the voice of a single person, the author ‘confiding’ in us,” “the text is henceforth made and read in such a way that at all its levels the author is absent.” He continues: “We know now that a text is not a line of words releasing a single ‘theological’ meaning (the ‘message’ of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from the innumerable centres of culture.” (David Shields appears to have taken this literally and prescriptively.) And the death of the author is the death of the professional critic as well, Barthes (correctly) reasons: “Once the Author is removed, the claim to decipher a text becomes quite futile.” “[W]hen the Author has been found, the text is ‘explained’ — victory to the critic. Hence there is no surprise in the fact that, historically, the reign of the Author has also been that of the Critic, nor again in the fact that criticism (be it new) is today undermined along with the Author.” While “[c]lassic criticism has never paid any attention to the reader; for it, the writer is the only person in literature,” “the birth of the reader must be at the cost of the death of the Author,” he concludes. 
But we have now taken a still further step beyond the transition described by Barthes, even if we can see many of the phenomena he was describing continuing to play out in our midst. Before the advent of postmodernism, we had the “Literary Work” emanating from the mind of an authoritative author, the definitive meaning of which was to be found in understanding the author’s intent (much like the quest for the true meaning of Scripture was to be understood as an effort to apprehend the mind of God), which could be best done by professional scholars and critics knowledgeable about the subject of their study. With postmodernism and the transition to the metaphor of the “Text”, we saw a democratization of the field; we no longer had a single authority figure — whether author or critic — who could claim superior access to the Text’s unitary meaning. Instead, meaning became cognized as an ever-changing construction composed of the interaction of reader and Text, so that authority was dissipated, while the cognitive ambiguity of the Text was, for that very reason, necessarily increased. There was no longer a single interpretation that was better than any other. And yet, even with the Text, we still had the sense that meaning exists (or that multiple meanings exist), so that it remained a worthwhile endeavor for us as readers to undertake the search, to interpret, to read deeply and seek coherence in that with which we had been presented. With the coming of virtual realism, however, even this much is no longer true. Now, the “Text” no longer serves as apt metaphor. 
What we have, instead, is what we might fittingly call the “msg,” often brief, hurriedly dashed-off, unedited, ungrammatical, unpunctuated fragments, just pieces of the “real,” random outtakes from celebrity or perfectly ordinary lives in progress — status updates, tweets, photos, videos, text messages and the like (some, though not all, of these subsumed under Gérard Genette’s notion of the “paratext”) — being broadcast out into the black hole of cyberspace, sometimes even without any actual human agency interceding to direct the process, as pre-programmed functionalities of various social media, apps and other technologies, the operation of which we do not bother to control even when we can do so, send out, i.e., publish, to us and to a potentially large group of “connected” others, automatic updates, reminders and invitations that we sometimes ignore but too often lack the discipline to ignore. The msg, thus, frequently confronts us as an unwelcome intruder into our lives, and we are often at a loss as to what to do with it precisely because it lacks even the aspiration to being meaningful that the Text still possesses. It is a fragment without any sense of coherent embeddedness in a larger whole. With it, the cognitive uncertainty that had already increased when the advent of the postmodern Text brought with it the loss of a single source of authoritative meaning reaches a breaking point. Confronted with the msg, we have no sense of what to do or what to think, or whether we should even bother to do or think anything at all.
With all our talk of the “interactivity” of our modern media, it is quite ironic that while the monolithic and overtly non-interactive Literary Work actually often inspired in us readers the absolute heights of engagement because — just like readers trying to discern God’s intent in the authoritative texts that purportedly convey His Word — we cared enough about the meaning instilled in the Work by the mind of its great Author to rouse all the best in ourselves in trying to divine that meaning, now that we have reached a cultural moment when overt interactivity is rampant and when each of us has access to faster, more effective avenues of publication than all of those formerly guarded by zealous gatekeepers, our attitude toward the persistent stream of real-time fragments that find their way into the public eye is very often one of total numbness, of apathy. We are indeed interacting, constantly, in fact, but we are bringing only the most superficial parts of ourselves to bear upon any such interaction. We are no longer deeply engaged. We are not reading and writing, but rather, browsing and chatting. If, in Barthes’ time, he could announce that “the birth of the reader must be at the cost of the death of the Author,” then in our own, we should issue this amendment to his proclamation: the ultimate evisceration of the distinction between the author and the reader is tantamount to the death of both. The further irony of this state of affairs is that the very kinds of technological changes that have vastly increased our sense of cognitive uncertainty have likewise put the nail in the coffin of high literacy, the principal skill we need in order to process uncertainty and ambiguity across domains. 
The 2013 report of the American Academy of Arts & Sciences on the state of humanities education in America, prepared in response to a bipartisan request from members of Congress (available here), has several apt passages:

“We live in a world characterized by change — and therefore a world dependent on the humanities and social sciences. The humanities and social sciences teach us to question, analyze, debate, evaluate, interpret, synthesize, compare evidence, and communicate — skills that are critically important in shaping adults who can become independent thinkers.”

“Today, our need for a broadly literate population is more urgent than ever. As citizens, we need to absorb an ever-growing body of information and to assess the sources of that information.”

“[T]he liberal arts train people to adapt and change over a lifetime.”

“[T]he cultivation of the imagination through the study of literature, film, and the other arts is essential to fostering creativity and innovation.”

“The Humanities promote that kind of tolerance, that degree of healthy self-doubt, which Learned Hand used to remind us of by quoting Oliver Cromwell in his statement to the Scots: [c]onsider that ‘you may be mistaken.’”

But there is no reason to stop at such fine-sounding bromides; actual research likewise supports the notion that reading — and, specifically, reading literary fiction — generally makes people more comfortable with ambiguity, less bothered by a lack of closure, more sophisticated and creative in their thinking and less likely to think in black and white. See, e.g., http://www.psmag.com. If, however, with the coming of virtual realism, we are now awash in fragmented bits and pieces of the “real” instead of immersing ourselves in literary fiction, even as we are, as a society, simultaneously publishing more and more information at a faster and faster rate and, therefore, encountering uncertainty and ambiguity far more than we used to, the results are certain to be unpleasant.
Those results are already here. People no longer know how to disagree. Reader forums quite often degenerate into shouting matches. And the decline of our single-threaded national conversation is evident in the balkanization of our political arena, with black-and-white thinking going on aplenty, the political parties and their constituencies ever more at each other’s throats, gravitating ever more toward the extremes, so that, increasingly, we are left with unsatisfying, expedient political compromises like Obamacare instead of real consensus solutions to the most pressing quandaries of the day. We are, in the meantime, increasingly reliant on machines to sift through the mess of information we are making all around us. While machines can make some headway in our stead, see, e.g., http://blogs.hbr.org, they also have inherent and likely insuperable limitations, see, e.g., http://nplusonemag.com: the correlations they draw for us, the categorizations they devise and any ultimate answers they furnish to most of the kinds of queries we find interesting will be questionable; we rely on them at our own risk. If, for instance, we let them sort posts based on how often they have been viewed, how many responses they have received or how many posts a given user has previously made, we are elevating quantity over quality in making our ultimate choices. The dangers of doing so are not just theoretical: as studies have found, “[w]hen someone ‘likes’ an online comment, other people become much more likely to give the comment a thumbs up, too, suggesting that our opinions and decisions may be at the mercy of what others seem to think.” (read here). When, for instance, experimenters randomly vote comments on websites up or down or evaluate songs positively or negatively, a phenomenon known as “social influence bias” (a close relative of “groupthink”) often leads us to follow suit. See here.
The elevation of quantity over quality is a general problem that comes to the fore whenever we no longer have the means to make searching, effective and intelligent distinctions. If we lack the discriminatory faculties to choose X over Y because X is better and can no longer even rely on professional critics we trust to make that choice for us, then Y might be selected for us by default not because it is better but simply because it is better known, better funded or more popular.

And Thus…

Many of us tend to think that our present time is a golden age as far as the unleashing of the creative potential of the masses is concerned. The internet, the story goes, has allowed anyone who can afford to get online an unprecedented amount of access to and power over information that was formerly confined to the very few. Now, the story continues, just as Benjamin had hoped, so many of us who could never have commanded the resources to do so before have become filmmakers, writers, musicians, entrepreneurs or other creators of one sort or another. And to an extent, this story is true. The forbidding aura that once separated us from the means of becoming our own creators may well be gone for good. The internet has, indeed, conferred countless benefits upon us all and is, in respects named and unnamed, great and small, daily empowering, informing, enlightening and entertaining ordinary people throughout the world. But there is also a more sinister side to the change. The growing power and influence of the masses is the very same phenomenon as the effective diminution and eventual downfall of the old creative class, the class of highly qualified thinkers, intellectuals, critics, artists and scientists. This old intelligentsia served as a bulwark against the incursions of the market. It kept government honest — or, at least, more honest than it otherwise would be — by vituperating against it intelligently enough and often enough to make us care.
It kept our minds elevated above the gutter; it kept our focus firmly aloft, above the stench of our own sewage, the noxious miasma of the “real.” It enabled us to imagine something better than ourselves: the kind of people we wanted to become. Bereft of the lofty class of those who could lift free of our gravity, who had the vision to soar over our individual habitations and enter the rarefied space of the Platonic ideal, we are, for all our newfound speed, merely crawling this way and that, blindly following in each other’s tracks along the earth’s hard crust. I am reminded, in this connection, of the views of the sophist Protagoras, who held that we ourselves are the measure of all things, that nothing is false and nothing true. If this is so, he reasoned, if we have no superior access to truth, no means of questioning the “wisdom” we imbibe from the powers-that-be, we might as well simply obey the laws of the State and believe in its gods. Perhaps this was just misdirection, an overt teaching conveyed for the safety of teacher and students alike that concealed a deeper secret teaching. We may never know. What we do know, however, is that our civilization in this era of virtual realism has lost a tenuous balance. Without a degree of cognitive uncertainty, civilization ossifies, as it did in the former Soviet Union or in our modern-day theocracies: where government purports to offer a definitive answer to nearly every significant question we might think to pose, the creative potential of the masses is stymied and progress grinds to a halt. But when cognitive uncertainty spins out of control, when we empower the masses, when we defer to no authority and hold no experts exalted above ourselves, when we no longer know what to think about anything, we court another kind of danger.
Because truth and beauty are the only principled vantage points from which we can criticize the exercise of power on the part of the market and the State, in the face of our tech-revved total democracy — and, thus, total uncertainty — as to what is beautiful and what is true, we can hardly help but adopt a version of Protagoras’ advice: unbeknownst to us, we abdicate decision-making entirely, letting the State, the market and their machines furnish us with ready-made answers to questions ranging from what restaurant or travel spot to choose to what treatment to adopt for our gravest diseases. Is this part of why, though our technology has made our daily menu of viable options seemingly inexhaustible, we seem no closer to feeling satisfied with our choices, as though our choices were, after all, not our own? And maybe we are, in fact, less satisfied. Maybe, despite all the glowing gadgetry potentially illuminating our outbound way, we have, instead, used it to project a bigger, brighter, more mind-numbing agglomeration of banal images driving us deeper into the dark interior of the Cave. Why else, having gotten to our destinations, do we promptly recede into our accustomed trance, as we unfurl our smartphones and start swiping, clicking and texting away, checking to see what our friends are up to, always laboring under the vague impression that regardless of where we are, the party, the real party, is always elsewhere?