I want to highlight an important article entitled "The Length of a Scroll: Quantitative Evaluation of Material Reconstructions," recently published by Eshbal Ratzon and Nachum Dershowitz in PLOS ONE. I have had many opportunities to discuss the methods of reconstruction with the authors over the past few years, and I greatly appreciate their rigorous work on the topic. For those who work on material reconstructions, this article provides an important counterbalance to the traditional Göttingen approach, which in my experience often expects unrealistic precision. While I am not quite as pessimistic about the practical application of the method as Ratzon and Dershowitz are, there can be little doubt that large margins of error must often be factored in. Difficulties in observing and measuring patterns of damage, together with many unknown variables, necessitate a very careful and cautious approach.
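To make the discussion concrete, the core of such estimation methods is a simple geometric observation: damage that penetrates multiple layers of a rolled scroll recurs on the unrolled sheet at intervals approximating the circumference of each successive turn, and those intervals shrink by roughly 2π times the material thickness per turn toward the center. Below is a minimal sketch of that extrapolation under idealized assumptions; the function name, the cutoff for the innermost rollable turn, and the interval values are my own illustrations, not the authors' code or data.

```python
# A minimal sketch of the geometric model behind scroll-length estimates
# (illustrative only; not Ratzon and Dershowitz's actual implementation).
import math

def estimate_missing_length(intervals_cm, min_circumference_cm=2.0):
    """Extrapolate the arithmetic decrease of damage intervals inward.

    intervals_cm: distances between recurring damage points, ordered
                  from the outermost (largest) to innermost (smallest).
    min_circumference_cm: assumed circumference of the innermost turn
                          around which the scroll could still be rolled.
    """
    # The average decrease per turn approximates 2*pi*thickness.
    decreases = [a - b for a, b in zip(intervals_cm, intervals_cm[1:])]
    step = sum(decreases) / len(decreases)

    # Continue the arithmetic sequence inward until the turns become
    # too tight to roll, summing the lost circumferences.
    missing = 0.0
    c = intervals_cm[-1] - step
    while c > min_circumference_cm:
        missing += c
        c -= step
    return missing, step / (2 * math.pi)  # missing length, implied thickness

# Hypothetical intervals (cm) measured on a surviving fragment:
missing, thickness = estimate_missing_length([8.0, 7.8, 7.6, 7.4])
print(f"Estimated missing inner length: {missing:.1f} cm "
      f"(implied parchment thickness: {thickness * 10:.2f} mm)")
```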
One cautionary example from my own experience illustrates this. I once examined the reconstruction of T-S NS 3.21, and, using typical values from the Dead Sea Scrolls corpus, the numbers seemed to suggest that the scroll could not possibly have contained the entire Torah. I suggested to Ben Outhwaite that the scroll may have contained only Genesis, and then he went and discovered a further fragment of the scroll from Exodus! Only after the fact did I learn that the parchment was extremely thin, much thinner than the average values I had assumed from the Dead Sea Scrolls corpus. One unknown variable made the huge difference between a Genesis scroll and a complete Torah scroll. This is especially worrisome for a corpus like the Dead Sea Scrolls, where measurements of parchment thickness are rarely available as a control. The method works in theory, but the results are only as good as the data and assumptions that go into it.
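For intuition about why thickness matters so much: the capacity of a roll of a given diameter varies inversely with the thickness of the material, since the cross-sectional area of the annulus between the core and the outer edge, divided by the sheet thickness, approximates the total rolled-up length. Here is a back-of-the-envelope sketch; the diameters and thicknesses below are purely illustrative, not measurements of T-S NS 3.21.

```python
# A rough sensitivity check on the thickness assumption
# (toy values only; no relation to any actual scroll).
import math

def roll_capacity_cm(outer_diameter_cm, core_diameter_cm, thickness_cm):
    """Approximate length of material in a roll: the annulus
    cross-section area divided by the sheet thickness."""
    area = math.pi * (outer_diameter_cm**2 - core_diameter_cm**2) / 4
    return area / thickness_cm

# Compare an "average" parchment thickness with a very thin one.
for t_cm in (0.05, 0.025):
    length_m = roll_capacity_cm(8.0, 1.5, t_cm) / 100
    print(f"thickness {t_cm * 10:.2f} mm -> ~{length_m:.1f} m of scroll")
```

On these toy numbers, halving the assumed thickness doubles the estimated capacity of the same roll, which is exactly the kind of swing that separated my Genesis-only estimate from a complete Torah scroll.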
For those interested in further discussion about the method, my own approach to reconstruction should be published next year:
Drew Longacre. “Methods for the Reconstruction of Large Literary (Sc)rolls from Fragmentary Remains.” In Research Approaches in Hebrew Bible Manuscript Studies. Edited by Élodie Attia-Kay and Antony Perrot. Textual History of the Bible Supplement. Leiden: Brill, forthcoming.
Here is the abstract from Ratzon and Dershowitz's article:
Abstract
Scholars have used mathematical models to estimate the missing length of deteriorated scrolls from ancient Egypt, Qumran, Herculaneum, and elsewhere. Based on such estimations, the content of ancient literature as well as the process of its composition is deduced. Though theoretically reasonable, many practical problems interfere with the method. In the current study, the empirical validity of these mathematical models is examined, showing that highly significant errors are quite frequent. When applied to comparatively intact scrolls, the largest contribution to errors is the subjectivity inherent in measuring patterns of damaged areas. In less well preserved scrolls, deterioration and deformation are more central causes of errors. Another factor is the quality of imaging. Hence, even after maximal reduction of interfering factors, one should only use these estimation methods in conjunction with other supporting considerations. Accordingly, past uses of this approach should be reevaluated, which may have substantial implications for the study of antiquity.
Are there any particularly salient examples of DSS that call for reevaluation under their (and/or your) methodology?
Thanks,
Stephen
Yes, lots. :) In my own work I have focused on Torah scrolls and Psalm scrolls recently. These have particular potential, since large scrolls (Torah and psalter) are quantitatively very different from small scrolls (individual books or small psalm collections). Thus, the method can be very useful here, despite the large margins of error.