Databases (Archaeology)

The use of structured information lends itself extremely well to database techniques, and these have been an enduring methodology for archaeological projects since the days of mainframe computing. At a recent Methods Network Expert Seminar on virtual representations in historical and archaeological research (http://www.methodsnetwork.ac.uk/activities/es04mainpage.html), Vince Gaffney argued that archaeologists had in fact been using computational-type methods even before computers became widely available, and that this formal approach to data recording, a legacy of the work of figures such as Pitt Rivers and Petrie, was easily transposed onto database systems when they became more widely available in the 1980s. The successful integration of the sites and monuments record (SMR) databases, and the more recent OASIS initiative that enables searching across considerable quantities of 'grey' literature (unpublished routine documentation) relating to archaeological fieldwork, were cited as proof of the strength of the methodologies at work within the discipline.

A quick review of fifteen recent and ongoing projects listed in the ICT Guides database that relate to archaeology reveals that at least eleven of them explicitly list the use of database software as integral to the project methodology. The packages, listed in order of popularity, are:

  • Microsoft Access (6)
  • AdLib (2)
  • MySQL (2)
  • FileMaker (1)

The use of database software is in itself unremarkable, but a closer look at a specific project that has recently rationalised its approach to data management usefully illustrates how significant productivity gains can be achieved by mapping data systems more accurately onto the actual working practices of a project team.

The IT report from the Catalhoyuk excavation in 2005 (http://www.catalhoyuk.com/archive_reports/2005/ar05_37.html) gives a very clear overview of how demanding it can be to accommodate the varied needs of the different sections working on a large excavation, many of which had developed discrete databases to suit their own requirements, e.g. human remains, faunal remains, archaeobotany, and so forth. The approach of the Catalhoyuk IT team was to consolidate the data (much of which was in Microsoft Access format) onto a back-end Microsoft SQL Server whilst enabling users to continue to use the forms and queries they had formulated for their own specific purposes over the duration of the project. By redefining the data model and dividing information into 'core' and 'specialist' categories, it was possible to create a multi-level database structure that in some cases rescued undocumented specialist information from previously stand-alone databases.
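As a minimal sketch of this core/extension pattern (not the project's actual schema: SQLite stands in here for the Microsoft SQL Server back end, and all table and field names are hypothetical), a 'core' table can carry the fields every team shares, while a specialist table hangs its own fields off the same key:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # 'Core' table: fields every specialist team shares, one row per excavated unit.
    cur.execute("""
        CREATE TABLE core_unit (
            unit_id   INTEGER PRIMARY KEY,
            area      TEXT NOT NULL,
            level     TEXT,
            excavated TEXT            -- ISO date string
        )""")

    # 'Specialist' extension: faunal-remains fields hang off the core record
    # via a foreign key, so the team extends the model without touching core data.
    cur.execute("""
        CREATE TABLE faunal_extension (
            unit_id        INTEGER PRIMARY KEY REFERENCES core_unit(unit_id),
            taxon          TEXT,
            element        TEXT,
            fragment_count INTEGER
        )""")

    cur.execute("INSERT INTO core_unit VALUES (1, 'North', 'VI', '2005-07-14')")
    cur.execute("INSERT INTO faunal_extension VALUES (1, 'Ovis aries', 'femur', 3)")

    # A view can present the join in the shape a specialist's existing forms
    # expect, so front-end queries keep working after consolidation.
    cur.execute("""
        CREATE VIEW faunal_view AS
        SELECT c.unit_id, c.area, c.level, f.taxon, f.element, f.fragment_count
        FROM core_unit AS c JOIN faunal_extension AS f USING (unit_id)""")

    for row in cur.execute("SELECT * FROM faunal_view"):
        print(row)   # (1, 'North', 'VI', 'Ovis aries', 'femur', 3)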

The database consolidation process gave specialist teams access to the entire range of core records for the first time, in some cases enabling the identification of duplicate records, whilst the core/extension model means that fields for specialist data can still be created when the existing data model proves inadequate for newly defined material. Extension tables in one database may appear as core tables in another, which increases the potential for specialisation across areas of expertise. Perhaps more significantly, removing the requirement to assign an object to a particular type of database had a marked effect on the neutrality with which items were initially recorded.
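By way of a hedged sketch of the duplicate check this made possible (again using SQLite and invented table and field names rather than anything from the project itself): once every team's records sit in one core table, simple grouping on the fields that ought to identify a unit uniquely exposes double entries:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE core_unit (unit_id INTEGER, area TEXT, level TEXT)")
    conn.executemany(
        "INSERT INTO core_unit VALUES (?, ?, ?)",
        [(1, "North", "VI"), (2, "North", "VI"), (3, "South", "IV")],
    )

    # Rows 1 and 2 describe the same area and level: likely the same unit
    # entered twice in formerly separate specialist databases.
    query = """
        SELECT area, level, COUNT(*)
        FROM core_unit
        GROUP BY area, level
        HAVING COUNT(*) > 1
    """
    for area, level, n in conn.execute(query):
        print(f"possible duplicates: area={area}, level={level}, records={n}")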

As Mia Ridge states in her report:

    Previously, artefacts were assigned to specialists and were subtly but implicitly labelled by the specialism and specialist application within which they were recorded. [...] The use of isolated specialist databases may bind the artefact to an implicit interpretation limited by the method or location used to record its materiality.
    (http://www.catalhoyuk.com/archive_reports/2005/ar05_37.html)
