Tuesday, December 29, 2015

Coherence-Based Genealogical Method

For those interested in keeping up with methodological discussions in New Testament textual criticism, TC: A Journal of Biblical Textual Criticism has released the results of a panel discussion on the Coherence-Based Genealogical Method that might be of interest. I have personally benefitted much over the years from methodological discussions with NT colleagues and am a strong proponent of interdisciplinary dialogue. The supposed gulf between OT and NT textual criticism is often grossly exaggerated to justify isolationism and methodological weaknesses, and the two fields are, in my opinion, more similar than different. The CBGM is the primary attempt to handle the mass of textual evidence statistically and undergirds ongoing revisions to the Nestle-Aland text of the NT via the Editio Critica Maior. It is quite complicated and remains controversial, but it is well worth trying to get at least a big-picture view of the methodological discussion.

For those who don't have time to work through the details or need help keeping the big picture in mind, I will briefly summarize/simplify the steps that go into this process at present.

1. Transcribe all extant Greek witnesses into an electronic format (XML). When I worked for the IGNTP project on John, two transcribers transcribed the text of each manuscript independently by changing a base text (the textus receptus) to match the manuscript being transcribed. They recorded information about text, layout, lacunae, and corrections in a Word document according to a set pattern. A senior scholar then ran an automatic comparison of the two transcriptions and personally reconciled any conflicts, yielding extremely accurate final transcriptions. A technical officer then ran a script to convert the transcriptions into XML format. I think they are now starting to use an online transcription editor that creates the XML directly. In my opinion, NT scholars do a much better job than OT scholars at exhausting the direct manuscript tradition and storing the raw data in electronic format, though they probably have much to learn from OT scholars on the use of versions, which unfortunately feature minimally in the CBGM.
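The comparison step of this double-transcription workflow can be sketched in a few lines. This is only a hypothetical illustration (the function and sample tokens are my own, not the IGNTP tooling): two independent transcriptions are compared token by token, and any disagreement is flagged for the reconciler to adjudicate.

```python
# Toy sketch of double-transcription checking: compare two independent
# transcriptions of the same page and flag every disagreement for a
# human reconciler. Names and data are illustrative only.

def flag_conflicts(transcription_a, transcription_b):
    """Return (position, token_a, token_b) for every disagreement."""
    conflicts = []
    for i, (a, b) in enumerate(zip(transcription_a, transcription_b)):
        if a != b:
            conflicts.append((i, a, b))
    return conflicts

page_a = ["εν", "αρχη", "ην", "ο", "λογος"]
page_b = ["εν", "αρχηι", "ην", "ο", "λογος"]

print(flag_conflicts(page_a, page_b))  # [(1, 'αρχη', 'αρχηι')]
```

Because both transcribers are unlikely to make the same error in the same place, reconciling only the flagged positions yields very accurate final transcriptions at relatively low cost.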

2. Automatically collate the electronic texts, then have a human editor correct the automatic collations to isolate appropriate variation units (strings of text where variation occurs) and the variant readings. In this process, the editor "regularizes" the spelling to eliminate minor orthographic differences and to identify meaningful agreement and disagreement in the collated variant readings.
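As a toy illustration of the regularization step (the rule table and tokens are invented; real projects use far richer normalization), spelling variants can be mapped to a single normalized form before collation, so that purely orthographic differences do not register as disagreements:

```python
# Hypothetical regularization table mapping orthographic variants to a
# normalized form; in practice such mappings are built and reviewed by
# the editor during collation.
RULES = {"αρχηι": "αρχη", "ουτω": "ουτως"}

def regularize(tokens):
    """Replace each token with its normalized form, if one is defined."""
    return [RULES.get(t, t) for t in tokens]

w1 = ["εν", "αρχηι", "ην"]
w2 = ["εν", "αρχη", "ην"]
print(regularize(w1) == regularize(w2))  # True -- spelling difference only
```

After regularization, the remaining differences between collated witnesses are the meaningful ones that feed into the statistics of the next step.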

3. Automatically calculate the statistical proximity of each text to each other text based on the percentage agreement in variation units. They call this "pre-genealogical coherence," because it reflects their absolute statistical level of agreement without respect to the nature of the agreements or actual genealogical relationships. In other words, variants are counted first, and only later weighed and recounted.
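Pre-genealogical coherence is, at heart, a table of pairwise agreement percentages. A minimal sketch under invented data (the sigla, readings, and function are illustrative only, not the actual CBGM software) might look like this:

```python
# Sketch of "pre-genealogical coherence": for each pair of witnesses,
# the percentage of variation units in which both are extant and agree.
# Witness sigla and reading labels are invented.

def agreement(w1, w2, units):
    """Percentage agreement of two witnesses across shared variation units."""
    shared = [u for u in units if w1 in u and w2 in u]
    if not shared:
        return None  # no overlapping text, no comparison possible
    agree = sum(1 for u in shared if u[w1] == u[w2])
    return 100.0 * agree / len(shared)

# Each variation unit maps witness -> reading label ('a', 'b', ...).
units = [
    {"01": "a", "03": "a", "05": "b"},
    {"01": "a", "03": "b", "05": "b"},
    {"01": "a", "03": "a"},  # witness 05 is lacunose here
]

print(round(agreement("01", "03", units), 1))  # 66.7
print(agreement("01", "05", units))            # 0.0
```

Note that the percentages are computed without any judgment about which readings are prior: as the post says, variants are counted first and only later weighed.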

4. Attempt to relate the variant readings in each variation unit to each other by a local stemma (i.e., a stemma of variant readings) that explains the direction of development of particular readings from other readings. At this point, textual critics use the internal and external evidence (including ancient versions) in basically the same way as TC has traditionally been practiced. Pre-genealogical coherence serves as an important criterion in evaluating external evidence at this stage.

5. By evaluating and recording each variation unit on its own (though, if you want, you can include at the initial stage only those variation units that can be confidently adjudicated), you create a database of all your decisions about the initial text and the local stemmata of readings.

6. Since this database records both your proposals for the initial text and the directions of development of readings from others, the computer can work out the ramifications of your decisions for the textual flow of the tradition, called the "genealogical coherence." It can also flag up decisions which create incoherent pictures of textual transmission, which can then be reconsidered in closer detail. For instance, if you said that one reading developed from another, but that decision implies textual relationships that do not fit with your other decisions, you can reconsider the decision in light of a clearer picture of the relationships between textual states. It is important to note that the computer does not make the textual decisions for you, but only keeps track of all your previous decisions simultaneously and points out inconsistencies.
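The bookkeeping described here can be imagined as tallying the direction of textual flow implied by each local-stemma decision. The following toy sketch (all witnesses, readings, and stemmata are invented; the real software is far more sophisticated) shows how a decision that runs against the predominant flow becomes visible:

```python
# Toy sketch of "genealogical coherence" bookkeeping: each local stemma
# maps a reading to the reading it derives from; projecting those
# decisions onto the witnesses tallies the implied flow between each
# pair. Decisions against the predominant flow can then be reexamined.
from collections import Counter

def flow(units, stemmata):
    """Count units implying w1 -> w2 (w1's reading prior to w2's)."""
    tally = Counter()
    for unit, prior_of in zip(units, stemmata):
        for w1, r1 in unit.items():
            for w2, r2 in unit.items():
                if prior_of.get(r2) == r1:  # r1 is the source of r2
                    tally[(w1, w2)] += 1
    return tally

units = [{"01": "a", "03": "b"}, {"01": "a", "03": "b"}, {"01": "b", "03": "a"}]
# One local stemma per unit: each reading mapped to its source reading.
stemmata = [{"b": "a"}, {"b": "a"}, {"b": "a"}]

t = flow(units, stemmata)
print(t[("01", "03")], t[("03", "01")])  # 2 1 -- unit 3 runs against the flow
```

In this miniature example the third decision implies flow in the opposite direction from the other two; it is exactly this kind of tension that the method surfaces for reconsideration, without deciding the variant for you.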

7. Once problematic variation units are flagged up, you can reexamine them in light of the picture of the general textual flow resulting from your decisions. If you did not come to a conclusion about a particular issue in your initial evaluation, you can now reexamine it with the additional information as well. In other words, you can use the overall consistency or "coherence" of the textual relationships resulting from your text-critical decisions as a criterion for reevaluating and refining your prior conclusions. This is where the complicated parts of the CBGM really come in, and I haven't used it enough to give a thorough treatment, but I'll try to highlight a few important points.
  • Texts are abstracted from the material contexts in which they are found (a major point of controversy), so a historically later "text" can theoretically be hypothesized as a source for a historically earlier "text." The implication of this is that the CBGM does not produce a stemma of manuscripts intended to reflect the historical relationships between manuscripts, but rather the general flow of the text between known states of the text. In this regard, the CBGM is better suited for reconstructing the initial text and isolating important texts than for tracing the history of the text in real time.
  • Search for an optimal number of source texts to explain each known state of the text. On the assumption of a generally conservative transmissional tendency, try to identify which texts would be required to provide sufficient source material to explain the origins of the readings without needlessly multiplying sources. Texts can have multiple ancestors, but the number of ancestors hypothesized should be kept to the minimum required to explain the evidence sufficiently. Potential ancestors that explain many readings in the text are to be preferred to those with only occasional helpful source material. Potential ancestors should be sought in documented states of the text, not reconstructed hyparchetypes. Again, this process is not making a claim that the manuscript itself was copied from these sources, but only that its text somehow inherited material from the textual states now documented in the other manuscripts.
  • Local stemmata that fit coherently within the general textual flow are typically to be preferred to those that do not.
  • Readings emerging in distant texts may be explained as contamination/mixture or as having been created independently multiple times in the tradition. In other words, the overall coherence of the tradition can be used to clarify just how "indicative" a variant reading is for genealogical relationships. Some shared secondary readings may be simple coincidence or the result of occasional mixture.
  • On the supposition that the impact of contamination/mixture between different texts was typically less extensive than that of normal copying processes, the coherence of the general textual flow can be used to minimize or bypass the occasional effects of contamination and accurately trace the directions of the flow of the text, even in an open (i.e., contaminated) system.
8. After several iterations of reevaluating variation units in light of an increasingly clear picture of the textual history (illustrated by provisional textual flow diagrams), construct a global stemma to illustrate the directions of the development of the text through documented states of the text. I have never actually seen a completed global stemma according to the CBGM, and as far as I know no one has yet achieved this aspirational stage of the process. It is not entirely clear to me what exactly this end result would look like, but I found it very encouraging that Klaus Wachtel claimed on p. 6 of his paper, "The relative chronology of the development of text as shown by the global stemma can and will be put into relation with known historical data like the dates of the manuscripts carrying the textual witnesses, the dates of authors citing from the respective writings, and the dates of translations."
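The search for an optimal set of ancestors described in step 7 resembles a set-cover problem: find the fewest potential ancestors whose readings jointly account for a text's readings, preferring those that explain many. The greedy sketch below is only a stand-in for the CBGM's actual (more sophisticated) substemma optimization; all sigla and data are invented.

```python
# Greedy sketch of the "optimal substemma" idea: choose the smallest set
# of candidate ancestors that jointly explain a descendant's readings,
# always taking the candidate that explains the most still-unexplained
# readings. Sigla and reading IDs are invented.

def greedy_ancestors(target_readings, candidates):
    """candidates: {siglum: set of target readings that witness explains}."""
    unexplained = set(target_readings)
    chosen = []
    while unexplained:
        # Prefer the candidate explaining the most still-unexplained readings.
        best = max(candidates, key=lambda w: len(candidates[w] & unexplained))
        gained = candidates[best] & unexplained
        if not gained:
            break  # remaining readings are unexplained by any candidate
        chosen.append(best)
        unexplained -= gained
    return chosen, unexplained

target = {1, 2, 3, 4, 5}
candidates = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {2}}
print(greedy_ancestors(target, candidates))  # (['A', 'C'], set())
```

Here "A" and "C" together explain all five readings, so "B" and "D" are not hypothesized as ancestors; this mirrors the principle of not needlessly multiplying sources while preferring ancestors with broad explanatory power.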

I hope that this simplified summary of a very complex (but not impenetrable) process will be helpful, and of course NT colleagues are welcome to offer corrections to any misrepresentations! I applied some basic principles of this approach (though not the full method) to great effect in my dissertation on the Dead Sea Scrolls containing Exodus, and I would encourage other textual scholars looking to expand their methodological repertoire to consider it.

Wednesday, December 16, 2015

Glassman Holland Research Fellowship

The Albright Institute of Archaeological Research has posted a call for applications for a three-month postdoctoral fellowship available to European researchers. You can find more information and apply here. The AIAR also has a number of other good fellowships to consider. I spent six months there this spring and enjoyed my time immensely. They have a great team and research environment, and I highly recommend it to those who may be interested.

Monday, November 30, 2015

SBL 2015 - Part 2

In part 2 we will survey some of the text-critically relevant papers from SBL 2015, though there were many others I was not able to attend.

The combined Textual Criticism of the Hebrew Bible/Philology in Hebrew Studies session on the theme "Theory and Practice in Textual Criticism: The HBCE Project" was a stimulating one. Sidnie White Crawford discussed the use of the Temple Scroll for her edition of Deuteronomy, suggesting that it has affinities with the Septuagint, but that its unique readings are always secondary. Interestingly, the TS reads the imperfect יבחר "will choose" against the perfect בחר "has chosen" in Deuteronomy, further demonstrating that both variant readings are old. Ronald Troxel gave a heavily theoretical paper on what exactly the nature of a "text" is, insisting that "text" is a socially constructed, unifying concept that supersedes its multiple instantiations in manuscripts. Ingrid Lilly pushed back on how overly rigid generic categories can lead scholars to make textual decisions based on literary expectations that may be foreign to the works they are examining.

Brandon Bruning suggested that the phrase מראת הצבאת in Exodus 38:8[Heb] should be taken as "visions concerning the troops," explaining it in the context of the construction of the tabernacle according to the pattern "shown" to Moses throughout Exodus. Jason Bembry suggested that the LXX-B reading "his concubine went away from him" in Judges 19:2 came first (there is also no hint of sexual immorality in Josephus), and that later interpreters instead took it as the woman "committing harlotry" or "getting angry at the man." Julio Trebolle Barrera presented a very detailed paper demonstrating that documented redactional seams often occur at points marked in manuscripts by vacats to indicate text segmentation. Urmas Nõmmik suggested some specific, undocumented literary-critical developments in the Hebrew tradition of Job based on comparison with the Old Greek text. And Seth Adcock suggested that the shorter text of Jeremiah 10 was abbreviated from the longer text to accommodate an apotropaic usage in light of an interpretation of the Aramaic verse 10:11.

In a session on recognizing the Kaige recension in the historical books, Andrés Piquer Otero examined a number of cases where good old Georgian readings permit the identification of Kaige readings in the Lucianic text. Tuukka Kauhanen proposed a diagnostic model for identifying Kaige based on observable symptoms in the text, in much the same way doctors diagnose illnesses from symptoms. Pablo Torijano Morales argued that Ra 460 should be considered an Antiochean or Lucianic manuscript especially closely related to 700, yielding now seven Antiochean manuscripts in Kings (19-108 82-93-127-460-700). Julio Trebolle Barrera showed that 158 and 56-246 have numerous Antiochean readings inserted into their generally Kaige texts, often in the form of doublets.

In a session on textual criticism of the Pentateuch and Daniel, I argued that preserved manuscript remains and reconstructions suggest that approximately half of the copies of the book of Exodus attested at Qumran were in fact situated in large pentateuchal collections, in most cases probably complete Torah scrolls. I illustrated the process of reconstructing 4QExod-c as a complete Torah scroll by showing that Exodus began in the middle of a column (suggesting it was preceded by Genesis) and that the circumference of the scroll was so large that it must have contained a text approximately the same length as the rest of the Pentateuch. David Rothstein showed how a variant reading in 4QPhyl-k and several Kennicott manuscripts finds reflexes in later rabbinic interpretations of Deut 11:4, with the waters pursuing the Egyptians. Dan McClellan supported the interpolation theory to explain the occurrence of the "angel" of the Lord and suggested cognitive-scientific parallels to his proposed development of the concept. Amanda McGuire noted and evaluated the many differences between the Old Greek and MT/Theodotion in Daniel 9:27.

In a joint Aramaic Studies/Qumran session in honor of Moshe Bernstein, Edward Cook addressed the complications of distinguishing between ambiguity, polysemy, and contextual variation in lexicography. I was unfortunately unable to attend a talk by Loren Stuckenbruck on the translation of Aramaic forms into Greek in several works composed in Aramaic, as well as one by Jan Joosten on the need to look broadly at the history of Aramaic to read texts like the Genesis Apocryphon. Daniel Machiela explored the use of wisdom motifs in unexpected places in various Aramaic texts. And Michael Segal suggested that Daniel 6 (particularly in the MT tradition) was assimilated to parallels in Daniel 3 and in Esther.

An entire IOSCS session was devoted to reviewing Frank Shaw's The Earliest Non-Mystical Jewish Use of IAO, with responses by Ronald Troxel, Kristin De Troyer, Robert Kraft, and Martin Rösel. In this book Shaw discusses the earliest evidence for the use of ιαω for the tetragrammaton, though he never comes down conclusively on the question of whether or not this transliteration was originally used by the Septuagint translators. The respondents were generally appreciative--though with critical feedback--but Martin Rösel disagreed sharply at points.

A good Qumran session rounded off the conference, with Matthew Goff suggesting that rabbinic sources can shed light on the fragmentary Qumran material from the Book of Giants. Seth Adcock reiterated his defense of the longer text of Jeremiah 10. Moshe Bernstein gave a review of early generic classifications of the Genesis Apocryphon, as well as noting their weaknesses and reflections in contemporary discussions. I then suggested a number of textual groups and statistical clusters that can be identified from within the Qumran corpus of Exodus materials, perhaps most importantly a newly-recognized tight group consisting of 4QpaleoGen-Exod-l and 4QExod-c. Ira Rabin then examined the results of her chemical analysis of several scrolls, suggesting that 1QIsa-a, 1QS, and 1QSb were prepared according to the same process. As usual, she included a number of little gems, such as explaining how--before the use of lime treatments for parchment, which dissolves the fat layer between the layers of skin and fuses them into a single layer--ancient parchment preparers could split the skins into two separate layers, producing very fine writing supports.

All in all, it was a great conference with many challenging topics. I got to meet many new people and catch up with old friends, and I consider the conference a great success.

Saturday, November 21, 2015

SBL 2015 - Part 1

The first SBL session I went to today was a very helpful session on critical editions from the German Bible Society. Richard Weis reviewed in detail the historical development of editorial principles that set the stage for BHQ, which he illustrated from examples in the new Genesis volume by Abraham Tal. Kay Joe Petzold discussed the editorial concepts behind the Masorah in BHS and BHQ, the latter of which is a more strictly diplomatic presentation of the Masorah of the Leningrad Codex, in contrast to Gérard Weil's attempts in BHS to reconstruct a single Masorah against L.

On the New Testament side, Holger Strutwolf announced the new NA/UBS editorial committee: Christos Karakolis, David Parker, Stephen Pisano, David Trobisch, and Klaus Wachtel. David Trobisch then gave a preliminary look at key issues discussed by the committee, stressing his desire to rearrange the books of the NT according to the order of ancient manuscripts.

Joseph Sanzo and Ra'anan Boustan stressed the difficulties of identifying Christian or Jewish socio-religious backgrounds to ancient magical texts and artifacts, given shared cultural elements. This, of course, is a complex problem in trying to understand the background of Septuagint manuscripts as well.

Friday, November 20, 2015

Evangelical Theological Society 2015

The Evangelical Theological Society held its annual meeting in Atlanta from 17-19 November, and I thought I would summarize some of the text-critically relevant papers for those who did not attend.

Russell Fuller and Richard McDonald argued that the study and teaching of Hebrew should be based on the use of Arabic grammatical categories, since it is the closest living language with a long history of grammatical analysis. They suggested that modern linguistic approaches have led to more confusion than insight, and that the use of native Semitic grammatical analyses better explains many phenomena. I must admit that the idea that we even need a paradigm language seems to me unnecessarily limiting. Neither does native Arabic grammar seem to me necessarily to be the best tool for studying Hebrew grammar. Nevertheless, it was a good reminder that we stand in a long line of grammatical tradition, and we would do well not to neglect the study of earlier grammarians and cognate languages.

Benjamin Giffone presented a theological paper on the problem for Evangelical bibliologies of defining a single, definitive text in a tradition that was repeatedly edited. He suggested that it is all but inevitable to have to appeal to "Catholic" arguments from community determination. He raised many insightful, probing theological questions, but unfortunately had no particular answer to give.

Eric Tully presented a helpful model for distinguishing between textual variants in a source language text and translation shifts in a target language text. He suggested gradually accumulating a database of tentative conclusions on individual readings, which can then inform later decisions or be corrected by later decisions. This iterative approach is not particularly new, but it was nice to see it clearly laid out.

Chris Stevens compared Titus in P32 and Sinaiticus, showing that the two are almost completely identical. He also used Sinaiticus to reconstruct the lacunae in P32 (rather than the NA text), further showing how close they are based on the near-perfect fit. I personally am a big fan of comparing the early witnesses to each other directly, rather than through intermediary textual witnesses or editions, so I appreciated that part of his paper.

Michael Kruger reevaluated P.Antinoopolis 12 (0232), a miniature codex containing 2 John. He suggested a 5th century date based on the physical features of the codex and the hand. He noted an error in the editio princeps, which led to a major error in the reconstruction of the codex. While admittedly speculative, he suggested that Hebrews may have been included in the codex along with the Catholic epistles, which would fill up the requisite amount of text indicated by the page numbers on the fragment.

Tomas Bokedal reviewed the history of the study of the nomina sacra, suggesting that Jesus was the primary member of the group of five reflecting an early core. He suggests the other names were chosen to line up with Christological creeds, indicating titles attributed to Jesus. This, of course, would imply a Christian origin for the nomina sacra.

Eric Mitchell presented on an unpublished fragment of Deuteronomy located at the Southwestern Baptist Theological Seminary. I missed the first part of the presentation, but if I understand correctly, it is a late 1st century BCE fragment from Qumran with regular morphologically long 2mp suffixes, but only one (semi-)meaningful difference from the MT. Unfortunately, the fragment reads the broken word י]בחר, so I don't know if it is possible to tell whether it read the perfect (SP) or imperfect (MT) in that important ideological difference, though Eric reconstructed with the MT.

I caught the last part of Peter Gurry's paper on the textual variants in the divorce passages in the Gospels in light of the Coherence-Based Genealogical Method. Among other things, he suggested that genealogical coherence suggests a preference for a longer reading in Matt. 19:9.

Nicholas Perrin argued against Watson that P. Egerton 2 does not witness to a pre-Johannine source, but is rather secondary. Among other arguments, he suggests that key stylistic features of the common text are part of broader themes in John that cannot be explained on the basis of P. Egerton 2 alone.

David Yoon looked at the use of ekthesis (putting the first letter of a line in the margin for visual prominence) in Galatians in Sinaiticus, suggesting that the text segments divided by ekthesis cannot be identified as paragraphs according to modern understandings, since they occur too frequently and sometimes even mid-sentence. He did not come to a definitive conclusion as to what exactly was the function of the scribal practice.

Saturday, May 23, 2015

Textual Communities Workshop, KU Leuven 11 and 12 June 2015

I received the following announcement from Peter Robinson, which may be of interest to some. For Old Testament scholars who may not know, Peter Robinson has a long-standing project editing the Canterbury Tales and is one of the leaders of the use of the digital humanities for the creation of critical editions. For those interested in learning the platform he has created for making critical editions, this would be a great opportunity.

Textual Communities Workshop, KU Leuven 11 and 12 June 2015 

Museumzaal (MSI 02.08, Erasmusplein 2, 3000 Leuven)
This workshop will serve three overlapping purposes. 
First, it will introduce the Textual Communities system for creating scholarly editions in digital form. Textual Communities allows scholars and scholarly groups to make highest-quality editions in digital form, with minimal specialist computing knowledge and support.  It is particularly suited to the making of editions which do not fit the pattern of “digital documentary editions”: that is, editions of works in many manuscripts or versions, or editions of non-authorial manuscripts. Accordingly, Textual Communities includes tools for handling images, page-by-page transcription, collation of multiple versions, project management, and more. See the draft article describing Textual Communities at https://www.academia.edu/12297061/Some_principles_for_the_making_of_collaborative_scholarly_editions_in_digital_form.
Second, it will offer training to transcribers joining the Canterbury Tales project, and to scholars leading transcription teams within the project.  The project is undertaking the transcription of all 30,000 pages of the 88 pre-1500 witnesses of the Tales (18000 pages already transcribed but requiring checking; 12000 needing new transcription). Participants will be given accounts within the Textual Communities implementation of the Canterbury Tales project, introduced to the transcription system, and undertake their first transcriptions of pages from the Tales.  See http://www.textualcommunities.usask.ca/web/canterbury-tales/wiki/-/wiki/Main/Becoming+a+transcriber.
Third, it will offer an introduction to the principles of manuscript transcription for digital editions to any scholars or students considering undertaking a digital edition project based on a manuscript. The materials of the Canterbury Tales project will be used as a starting point for discussion of transcription, supplemented by reference to other textual traditions on which the workshop leaders have worked (including Dante, medieval Spanish and New Testament Greek).
This workshop will be useful to scholars undertaking a wide range of digital edition projects, especially of works existing in multiple witnesses.  Because both the architect of Textual Communities (Robinson) and its chief programmer (Xiaohan Zhang) will be present, it will be useful also for technical consultants who plan to work with the Textual Communities API. And, of course, it will be useful for transcribers joining the Canterbury Tales project.
There is no charge for this workshop, but places will be limited.  Please contact Barbara Bordalejo barbara.bordalejo@kuleuven.be or Peter Robinson peter.robinson@usask.ca to confirm attendance. For accommodation, see http://www.leuven.be/en/tourism/staying/index.jsp.

Tuesday, March 17, 2015

Doctoral Thesis Posted Online

I have posted online the final version of my doctoral thesis A Contextualized Approach to the Hebrew Dead Sea Scrolls Containing Exodus for the University of Birmingham under Charlotte Hempel, which I successfully defended on 25 February 2015 without corrections. I welcome corrections and comments from interested readers.

Update: The University of Birmingham has hosted a permanent URL http://etheses.bham.ac.uk/5780/ for the thesis. The full text will be available at this link from 1 July 2015.

Sunday, March 1, 2015

Greek Fragments in European Libraries and Museums: A Whirlwind Tour

I just posted a blog post on the CSTT blog about my recent examinations of manuscripts in Berlin, Vienna, Paris, and Birmingham.

Friday, January 23, 2015

An Author's Text in an Editor's Hands

Update: Just so everyone is aware, the editorial team at Logos has expressed apologies for the mishap and is currently revising the published text discussed below to account for changes I suggested. This is one of the many great advantages of electronic resources! I consider myself a satisfied Logos user and would gladly publish with them again. I cite this only as an interesting example of something that temporarily slipped through the cracks in the complicated modern publication process.

I recently had the opportunity to write a series of short dictionary entries for the Logos e-publication The Lexham Bible Dictionary, including "Ancient Libraries," "Masorah," and "Masoretes." Overall it was a fairly pleasant experience. On reading the final versions in the Logos resource, however, I immediately noticed differences between my submitted version and the published version. There were clear stylistic differences from my own work and a whole host of other problems that clearly betrayed to me an editor's hands. Sometimes the editor improved the text by making it more concise or simpler for a lay audience, which was not particularly objectionable. But at other times, the editor altered the text in significant (and to my mind detrimental) ways. I would like to explore briefly the interaction between author and editor in this publication process. To do this, I will list some of the more interesting/disturbing examples of the published version (in italic font) and the submitted version (in bold italics).

Perhaps of interest for all my Qumran friends, with the omission of a single word, it seems I am now committed in (electronic) print to the Essene hypothesis (!).
  • Khirbet Qumran, a communal residence associated with the Jewish Essene sect.
  • Khirbet Qumran, a community residence in the Judean Desert near the Dead Sea commonly associated with the Jewish sect called the Essenes

Some changes rendered the text garbled nonsense.
  • Many scholars suggest the term comes for the root often considered to mean “to transmit,” thus concluding that the Masorah tradition is transmitted from generation to generation preserve the text
  • Most scholars relate the word to the root מסר, whose precise nuance is contested. Many suppose the common root meaning “to transmit,” thus concluding that the Masorah is the body of traditions transmitted from generation to generation for preserving the text.

One particularly egregious structural rearrangement makes the Masorah magna a subset of the Masorah parva.
  • Many Masorah parva notes indicate the number of times a particular word or group of words occurs in a certain form within a portion of Scripture, sometimes listing alternative forms found elsewhere. These notes are designed as an external control to ensure its precise preservation. Many types of Masorah parva notes exist, but the two most frequent include:
    • 1.   Kethiv/Qere. [only section headings listed]
    • 2.   Masorah Magna.
  • There are many types of Masorah parva notes, of which two of the most frequent and important are discussed here.
    • 1. Usage Statistics -- Many Masorah parva notes indicate the number of times a particular word or group of words occurs in a certain form within a portion of Scripture, sometimes listing alternative forms from elsewhere. These notes are designed as an external control on the copying of the text to ensure its precise preservation.
    • 2. Kethiv versus Qere
  • Masorah Magna [i.e., new section header]

There were also a number of problematic transliterations.
  • Other scholars derive it from the root ‘sr [i.e., ayin for aleph], “to bind”
  • Other scholars derive it from the root אסר “to bind”
  • ben Asher reads יִשָּׂשכָר (yissoshkhar)
  • ben Asher reads יִשָּׂשכָר

All this just goes to show the complexity of the entire concept of an "authorial text," no less in the present age than in antiquity. Once I emailed the file to submit it to Logos, the text was truly out of my hands and at the mercy of others. There are probably some devious literary critics smiling right now...


Thursday, January 8, 2015

Greek Exodus Fragments at the Staatliche Museen zu Berlin

My family and I have been travelling throughout Europe for the past few weeks on a grand Christmas market tour, and I decided to stop by some major European libraries and museums along the way. On the 23rd of December I was able to visit the Ägyptisches Museum und Papyrussammlung of the Staatliche Museen zu Berlin to examine several Greek Exodus fragments held in the collection. Marius Gerhardt was kind enough to welcome me and assist me in the study room, even during the holiday season.

I was able to examine parts of three manuscripts:
  • Ra 835 (P. 11766 + 14046) - The large fragment of this manuscript was on display in the museum (and so unavailable for detailed examination), but I was able to examine a fragment containing parts of Exodus 5.
  • Ra 960 (P. 13994) - This fragment, once thought to be missing, has thankfully been found again, and I was able to examine its contents including parts of Exodus 23 and 31.
  • Ra 978 (P. 16990) - This fragment contains parts of Exodus 34.
  • Unfortunately, Ra 836 (P. 14039) was being restored, so I was not able to examine it.
These fragments have all been published, but a few notes are in order from my visit. One of the most interesting things for me was that these fragments were all parchment fragments, rather than papyri. Before looking into them more closely, I had assumed they were papyri; they were included in Wevers' "Papyri and Fragments" category and housed in the papyrus collection in Berlin. But as it turns out, they were not. In fact, many of the surviving Greek Exodus fragments were written on parchment, and references to these manuscripts are frequently unclear or inaccurate with regard to the material medium. This is a good example of the need to double-check original materials, rather than simply relying on secondary literature.

Perhaps most importantly, Marius also pointed me in the direction of online digital images of each of these manuscripts. The museum has been very good about digitizing their collection in Berlin, and many high-quality digital images are available online. During our time in the study room, in fact, Marius uploaded several new fragments from the collection! He noted that they have had problems with scholars publishing the images and stressed to me that they were for research purposes, not for publication. The downloadable images have a resolution of 600 dpi on a white background, which is sufficient for most purposes. I was able to access digital images of three out of the four manuscripts I was looking for in the collection.
Textually, the most interesting phenomenon I looked into was the transition from Exodus 23:13 to 31:12 in Ra 960. On the verso side, 23:13 ends near the end of a line in the middle of the column and is followed by a vacat of 1-3 letter spaces before the right margin. 31:12 then begins at the left margin of the following line without any obvious indication preserved of the massive jump in the text. Looking at this fragment was very helpful for me in understanding the nature of this intriguing manuscript.

All in all, it was a very fruitful and enjoyable experience at the Staatliche Museen zu Berlin, and I would like to thank Marius and the rest of the team again for allowing me the opportunity. There really is nothing like first-hand familiarity with the manuscripts you are working on, and I recommend that all textual scholars get to know their sources well.