Ryan Heuser

About me

I am a Junior Research Fellow in 'Immateriality' at King's College, Cambridge, where I study English literature and the digital humanities. My work brings computational methods of text analysis to the study of literature and its history. I work on changes in literary language across the modern period, historical semantics, computational poetics, literary geography, and the theory and methods of the digital humanities more broadly. I recently received my PhD in English from Stanford University, where I was a founding member of the Stanford Literary Lab and its Associate Research Director from 2011 to 2015. You can also find me on Twitter, GitHub, Google Scholar, and Academia.edu, or by email at rj416@cam.ac.uk.

Book projects

Abstraction: A Literary History

My first book project, Abstraction: A Literary History, traces a slow-moving rise and fall in abstract language across centuries of literary history. Mixing close and distant reading, the book uncovers how these changes in literary semantics mediate changes in social organization. I focus on three literary forms of abstract language: 'abstract style', in the syntactic symmetries and semantic formulae of the periodical essay; 'abstract persons', in the personified abstractions of the mid-century ode; and 'abstract realism', in the "tell, don't show" narration of the early realist novel. Through this history and framework, the book also aims to recuperate abstraction as both a method and an object of literary study.

2020-02-18: "Abstract Realism" (visual summary of the chapter on fiction)

Computational capitalism

I am also beginning a second book project, which examines the role of computation in contemporary culture and labor. Computation – from its theoretical roots in the 1940s, to its military-academic development in the 1950s and '60s, to its invasion of industry in the 1970s, personal computing in the '80s, and internetworked computing in the '90s – has revolutionized nearly every aspect of modern life. And yet analyses of computation's cultural role often remain caught between techno-libertarian boosterism and reactionary critique. Imprecisely hailed as a 'fourth industrial revolution', computation in fact bears a longer and more complex relation to the contemporary history of capitalism. After all, computation is inseparable from the global economy's most significant transformations since the 1970s: its so-called 'deindustrialization' became possible only through computerized models of global supply chain management, and its financialization only through computerized financial modeling and the near-instantaneous, distance-indifferent transaction speeds of its internetworks. Although Marxian accounts of these techno-materialist changes exist – most notably post-operaismo accounts of immaterial (Hardt & Negri, Lazzarato) or cognitive labor (Vercellone, Boutang), as well as more recent 'post-post-operaismo' accounts of digital labor (Huws, Fuchs) or cybernetic capitalism (Dyer-Witheford) – such frameworks never quite escape a frequent criticism: that they over-emphasize the cognitive dimensions of labor and under-appreciate its continued basis in material work, especially the domestic and socially-reproductive work often performed by women.

Though abstract, such debates about 'immaterial' and 'material' labor have become both important and tangible within the ongoing Covid-19 employment crisis, which has made manifest an existing and growing division within the composition of labor: between those who can and those who cannot work from home, a distinction which often turns on the role of computation in that labor. This book argues that leftists should take more seriously the structural role of computation in labor.

Collaborations

Cambridge Keydata Project

I am directing a new collaborative digital humanities project working at the intersection of computational semantics and conceptual history – its title alludes to Raymond Williams's classic 1976 text, Keywords: A Vocabulary of Culture and Society. Co-directed with Pete de Bolla, Professor of English at Cambridge, the project brings together linguists, computer scientists, literary scholars, historians of political thought, and others in a shared project of digital knowledge. Based in Cambridge, the project is also open to virtual collaboration (as, indeed, everything now is) – do get in touch if you would like to be involved.

Antimetricality

I am working with Arto Anttila and Paul Kiparsky, metrical phonologists at Stanford, to design tools to evaluate the 'antimetricality' of a text: the degree to which its stress patterns depart from any known metrical pattern. Such measurements of metrical 'tension' or 'ambiguity' also have a history: prose distances itself rhythmically from verse most sharply at the height of the eighteenth century. We have a pre-print of a paper available here.
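The core idea – scoring how far a line's stress pattern departs from any regular metrical template – can be illustrated with a toy sketch. This is not the actual tool or its method; the template set, phase-offset search, and mismatch scoring below are my own illustrative assumptions:

```python
# Toy 'antimetricality' score: the minimum fraction of syllables whose
# stress (1) or lack of stress (0) disagrees with the best-fitting
# repetition of a small set of metrical feet. Illustrative only.

METRICAL_FEET = {
    "iambic": "01",      # unstressed-stressed
    "trochaic": "10",    # stressed-unstressed
    "anapestic": "001",
    "dactylic": "100",
}

def antimetricality(stresses: str) -> float:
    """Fraction of syllables mismatching the closest metrical template."""
    best = 1.0
    for foot in METRICAL_FEET.values():
        # Try the repeating foot at every phase offset along the line.
        for offset in range(len(foot)):
            repeats = foot * (len(stresses) // len(foot) + 2)
            template = repeats[offset:offset + len(stresses)]
            mismatches = sum(a != b for a, b in zip(stresses, template))
            best = min(best, mismatches / len(stresses))
    return best

# A perfectly iambic line scores 0.0; irregular prose rhythm scores higher.
print(antimetricality("0101010101"))  # 0.0
print(antimetricality("1101001010"))  # > 0: no template fits exactly
```

A real implementation would of course work from phonologically parsed stress profiles and a richer inventory of metrical grammars, but the intuition – distance from the nearest regular pattern – is the same.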

Blog

Word Vectors in the Eighteenth Century

This page is meant as a set of links and resources related to my work using word vectors to study eighteenth-century literature. This work asks the question: how can new vector-based models of semantics reveal the historicity of specific configurations of meaning in eighteenth-century literature? Most of this work is published serially as blog posts, linked below. The more recent of these are "slideshow essays" – experiments with the forms of visual rhetoric that work so well in the digital humanities – rather than traditional blog posts. There is also a video of a talk I've given about this work. Lastly, I've uploaded several word2vec models I'm using, trained on a corpus of eighteenth-century literature, and linked to some relevant code (more code will be coming soon).
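The basic operation behind word-vector semantics – measuring the proximity of two words' meanings as the cosine of the angle between their vectors – can be sketched in a few lines. The three-dimensional vectors below are invented for illustration (real word2vec models use hundreds of dimensions learned from the corpus):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors: 1.0 = identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented toy vectors, standing in for a trained word2vec model.
vectors = {
    "virtue":   [0.9, 0.1, 0.2],
    "honour":   [0.8, 0.2, 0.3],
    "commerce": [0.1, 0.9, 0.4],
}

print(cosine_similarity(vectors["virtue"], vectors["honour"]))    # high: near-synonyms
print(cosine_similarity(vectors["virtue"], vectors["commerce"]))  # lower: distant senses
```

In practice such similarities, computed across a whole vocabulary, let one map how configurations of meaning – say, the semantic neighborhood of 'virtue' – shift across the eighteenth century.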

Miscellaneous writing

Graph Blog

Occasionally I post to Twitter graphs and brief summaries of my DH work as it proceeds.

Other

Prosodic: A metrical-phonological parser, written in Python. For English and Finnish, with flexible language support.

Poesy: Tools for poetic analysis (stanzaic, metrical, and rhyme forms), written in Python.

LLP: Literary Language Processing – corpora, models, and tools for the digital humanities, in Python.

Slingshot: Python wrapper for MPI to "slingshot" a small Python or R function against the Goliath of Big Data.

Teaching

CV