One of the recent areas of apologetic debate concerns the text of the New Testament. Can we be confident that the texts that we have, which are then translated into our own native language, are a reliable record of the texts that were first written by the authors? Are the processes of copying and transmission trustworthy? The study of this question, and the task of discerning the original text from the multitude of copies that we have, is known as textual criticism.

Elijah Hixson and Peter Gurry have recently edited a volume looking at the key questions, Myths and Mistakes in New Testament Textual Criticism, and I interviewed Peter about the issues that they raise.

IP: Textual criticism is usually viewed as being at the most specialist or ‘geeky’ end of biblical studies! Why do you think it is of particular importance just now?

PJG: This is true. It’s probably the geekiest of the subdisciplines because it’s so technical and requires such a degree of precision. It’s also ‘geeky’ in that it’s often seen as dry, tedious, and sometimes unnecessary. It certainly can be tedious, though I think it’s anything but dry. What could be more fascinating than tracing the history, reception, and form of this text we Christians revere? As for being unnecessary, I think textual criticism’s importance is becoming more apparent to both lay believers and New Testament scholars. On the lay front, the influence of Bart Ehrman on popular conceptions of the Bible has meant that pastors often have to address the subject more than before. When textual criticism hits the cover of Newsweek or National Geographic, you can be sure something has shifted at the popular level.

Add to that things like the Gospel of Jesus’ Wife and the Mark fragment formerly known as First-Century Mark and you can see why people in the pews are taking notice. On the academic side, major research projects (like the Editio Critica Maior) and new hand editions (like NA28/UBS5 and THGNT) have meant that scholars need to pay more attention to textual criticism because their critical editions are changing right under their noses.

IP: You mention in your book the importance of Bart Ehrman’s work. Why do you think his contribution is so significant? Have there been good responses to it?

PJG: I think Ehrman hits the sweet spot in his popular work. He brings a combination of things that scratch the itch of a secularizing society that, especially in America, is still hugely influenced by conservative Christianity. The first is his academic credentials. He’s worked and written in academia for decades and he brings a wealth of knowledge to questions that even Christians often don’t know much about. Second, his deconversion narrative is one that appeals to a swath of American society (and its media) that finds the beliefs and motives of Christians—especially evangelical ones—incredible. Since he has “been there and done that,” he provides reassurance to atheists and skeptics that thinking people don’t need to take Christian beliefs too seriously (see his interview with Sam Harris as a good example). Third, he’s always been a great communicator. He has a way of wearing his knowledge lightly so that listeners learn a lot from him without feeling dumb in the process. That’s very disarming for many people, I think. Finally, I think conspiracy narratives aimed at power and influence appeal to people who distrust institutions. This is one reason why it’s easier to write a bestseller about the story behind who changed the Bible and why than it is to write one on why those same changes don’t undermine the Christian faith.

In terms of Ehrman’s argument that textual variants disprove the Bible’s inspiration, two good responses that come to mind are Reinventing Jesus and The Heresy of Orthodoxy. (I also like Peter Head’s Grove booklet How the New Testament Came Together as a first port of call, but it’s hard to get in the States. [Ed: it is available as a PDF online]) But maybe the best response isn’t really a response at all. It’s the work of Ehrman’s own supervisor, Bruce Metzger, whose scholarship mostly predates Ehrman’s popular work and who was convinced that textual criticism posed no real threat to historic Christian beliefs.

IP: Your book is entitled Myths and Mistakes in New Testament Textual Criticism. Amongst ordinary believers, what do you think is the most surprising thing they might find in this book?

PJG: Pictures! Besides that, they may be surprised at how much there is to learn about how we got the New Testament text. Whether it’s exploring autographs in their first-century context, understanding why some of our most important manuscripts have “extra” books in them, or learning how Bible translators in the field deal with textual variants, there is a lot here for the non-expert. I learned quite a bit myself! What I most hope believers come away with is a conviction that we shouldn’t appeal to bad arguments to defend the Bible—and that we don’t need to.

IP: You are not afraid to take on the views of other evangelicals—and there are some things (for example, the fact that autographs probably did not last for long) that some will find disappointing. What do you think are the most common missteps that evangelicals make—sometimes in a desire to foster confidence in the reliability of the New Testament?

PJG: The most common misstep is using outdated arguments. For example, it’s extremely common to compare the large number of New Testament manuscripts to those of famous classical authors. In many cases, the classical data is still taken from F.F. Bruce’s book The New Testament Documents. That’s a great book in many ways, but it also wasn’t updated much from its original edition in the 1940s. This means his stats for classical works are almost a century out of date now. If that weren’t bad enough, it gets worse because when authors and apologists turn from Bruce’s classical stats to the New Testament, they invariably use the most up-to-date (and often biggest) number of manuscripts they can find. The result is a very lopsided and outdated comparison that makes the point but does so unfairly.

Other missteps involve giving overly precise dating to our earliest manuscripts of the New Testament, giving exaggerated manuscript counts (I’ve been guilty myself), and failing to think carefully about how manuscripts work when using stats about the number of variants. There are others, of course, and each chapter of the book takes on one myth or mistake and tries to offer a helpful corrective.

IP: At several points, the chapters in the book compare approaches of the Christian community (including scribes) with broader ancient practice. Is there evidence that the Christian community took a distinctive approach to the copying and distribution of manuscripts?

PJG: In some cases, yes. We know that Christian scribes were unique in using abbreviations for certain words we call nomina sacra or ‘sacred names’. This initially included words like “Father,” “Jesus,” “Christ,” “God,” and then expanded to include others like “David” or “Jerusalem.” There is also the well-known Christian preference for the codex format (as opposed to the scroll). Scholars continue to offer explanations for both these early Christian phenomena, but I don’t think we can say for sure in either case how or why the preference arose. On the other side, there is plenty of evidence that Christian scribes were much like their non-Christian counterparts: they made mistakes, they tried to correct them, they cared about accuracy, and they copied the documents they did precisely because they valued them.

IP: Ehrman has made big claims about the number of variants there are in the NT manuscripts, and about the evidence that theological concerns affected the faithful transmission of manuscripts—issues tackled by your chapter and the one that follows by Robert D Marcello. How important are these variants for the reliability of the NT—and how many have significant implications for doctrine?

PJG: Yes, the irony is that his number of variants (he usually says 200,000–400,000) is probably too conservative. My estimate, which I based on three robust datasets, is closer to half a million non-spelling differences among Greek manuscripts. That’s quite a lot, and proper context is crucial to appreciating it. In John 18, for example, I counted over 3,000 variants among over 1,600 collated Greek manuscripts. Again, that’s a lot, especially given that John 18 has only about 800 words in our printed Greek New Testaments. But when we realize that every word copied by a scribe comes with the potential for error and we further remember that each of those 1,600 manuscripts required the scribe to copy about 800 words in this chapter, the result is about 3,000 variants for almost 1.3 million words copied. That’s around one distinct variant for every 400 words copied. Not so bad.
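The arithmetic behind that rate can be sketched quickly. This is only a back-of-the-envelope check using the round figures quoted above (1,600 manuscripts, roughly 800 words in John 18, about 3,000 variants), not exact counts:

```python
# Rough check of the variant rate for John 18,
# using the approximate figures given in the interview.
manuscripts = 1_600      # collated Greek manuscripts of John 18
words_per_copy = 800     # approximate word count of John 18 in printed Greek NTs
variants = 3_000         # distinct variants counted across those manuscripts

# Total words copied across all manuscripts of the chapter.
total_words_copied = manuscripts * words_per_copy

# How many words were copied, on average, per distinct variant.
words_per_variant = total_words_copied / variants

print(f"{total_words_copied:,} words copied in total")
print(f"about 1 distinct variant per {words_per_variant:.0f} words copied")
```

On those figures the total comes to 1.28 million words copied and roughly one distinct variant for every 427 words, which rounds to the “one per 400” quoted above.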

More importantly, most of these variants are either insignificant to the meaning, are easily recognized as scribal mistakes, or both. The scholar’s edition of the Greek New Testament (NA28) lists 154 variants in John 18 while the translator’s preferred edition (UBS4) lists only 10. Most commentators discuss a handful of these. Of the major English translations I checked, not one lists a single variant in the footnotes of John 18. And this is probably right since they don’t warrant the attention of English Bible readers. Of course, not all variants are insignificant. There are even some that, in my view, are both difficult to resolve and occur in passages of theological or practical importance (e.g., Luke 23.34; 1 Cor 14.34–35; Jude 5). But, since Christian doctrine at its best is based on “the whole counsel of God” (Acts 20.27) and not on isolated verses, I conclude that, while some textually debated texts do, in fact, bear on doctrine and practice, no Christian doctrine or practice is in jeopardy because of textual criticism. In this, I think all can agree, even Ehrman.

When it comes to variants that were created for theological reasons, it’s beyond doubt that scribes did sometimes change the text to avoid apparent problems or to “improve” their copies theologically. The way to detect this is to study a scribe’s entire work across a manuscript rather than to look at certain variants in isolation. When this is done, I find that many supposed theologically motivated variants have more mundane explanations.

To give an example, it’s been claimed that the scribe of Codex Bezae left out the phrase “and whoever marries a divorced woman commits adultery” in Matt 5.32 because he wanted to protect men from bearing the Scarlet Letter. Now, that’s a very spicy claim. There are two problems with this. The first is that, if this was his goal, he failed us men since he left the offending phrase in the text at Matt 19.9. Second, this explanation ignores the observable tendency in Codex Bezae of leaving out words and phrases by accident when similar word endings are involved. That’s exactly what we have in Matt 5.32 (μοιχευθηναι … μοιχαται) and so this is almost surely an accidental omission in Bezae. From this and similar cases, I’m convinced that theological motive should be a last resort to explain variants. Whenever a mechanical explanation presents itself, we should prefer that as more likely than theological motive.

IP: Textual criticism feels to many like a quite technical and specialist concern—and the ordinary reader might feel anxious about the lack of early complete manuscripts of parts of the NT. Are there good grounds to be confident—and is this something worth engaging in an apologetic context?

PJG: Yes! There is no doubt that Christians (and all readers) can trust the text that we have as a reliable record of what the authors wrote! But I think the issues in textual criticism are not only worth engaging but, at times, necessary to engage. Where the authority of the Bible is challenged—whether by scholars or by popular media—Christians, and especially church leaders, will need a response. This response, to the best of our abilities, needs to be a responsible one, and that’s exactly why Elijah and I put this book together.

We hope it offers a resource for pastors, apologists, and laypeople to better appreciate the robust evidence we do have for the text of the New Testament. In some cases, this means doing the uncomfortable work of rejecting bad arguments for the Bible. But this is done in the service of truth and with confidence that the good arguments we have left are more than adequate to the task. For those who do have doubts and for those who minister to them, we hope the book is a welcome resource.

IP: Thanks very much for your time Peter.

Peter Gurry (PhD, University of Cambridge) teaches New Testament and co-directs the Text & Canon Institute at Phoenix Seminary. He is the author of A Critical Examination of the Coherence-Based Genealogical Method in New Testament Textual Criticism and an editor of Myths and Mistakes in New Testament Textual Criticism. He is also a Board Member for the Institute for Biblical Research and a sub-editor in text and canon for Religious Studies Review. He lives in north Phoenix with his wife, five overactive children, and their cat.
