The Stoa Consortium

Serving news, projects, and links for digital classicists everywhere.

OEDUc: Exist-db mashup application

Wed, 02/08/2017 - 12:52
Exist-db mashup application working group

This working group developed a demo app built with exist-db, a native XML database which uses XQuery.

The app is ugly, but it was built in a bit less than two days (the day of the unconference and a bit of the following day) by reusing various existing bits and pieces. It uses different data sources, with different methods, to bring together useful resources for an epigraphic corpus, and it works in most of the cases for the examples we wanted to support. This was possible because exist-db makes it possible and because all the bits were already available (exist-db, the XSLT, the data, etc.).

Code, without data, has been copied to https://github.com/EpiDoc/OEDUc .

The app is accessible, with data from the July EDH data dumps, at http://betamasaheft.aai.uni-hamburg.de:8080/exist/apps/OEDUc/

Preliminary tweaks to the data included:
  • Adding an @xml:id to the text element to speed up retrieval of items in exist. (The XQuery doing this is in the AddIdToTextElement.xql file.)
  • Note that there is no Pleiades ID in the EDH XML (or in any EAGLE dataset), but there are Trismegistos Geo IDs! This is because it was planned during the EAGLE project to get all places of provenance into Trismegistos GEO and map them later to Pleiades. This mapping was started using Wikidata mix’n’match but is far from complete and is currently in need of an update.
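The actual tweak lives in AddIdToTextElement.xql; purely as an illustration, a rough Python equivalent of the same step (assuming the TEI namespace and that the caller supplies the desired id, e.g. the HD number) might look like:

```python
import xml.etree.ElementTree as ET

XML_NS = "http://www.w3.org/XML/1998/namespace"
TEI_NS = "http://www.tei-c.org/ns/1.0"

def add_id_to_text(xml_string, new_id):
    """Set @xml:id on the TEI <text> element so items can be retrieved by id."""
    ET.register_namespace("", TEI_NS)
    root = ET.fromstring(xml_string)
    text_el = root.find(f"{{{TEI_NS}}}text")
    if text_el is not None:
        text_el.set(f"{{{XML_NS}}}id", new_id)
    return ET.tostring(root, encoding="unicode")
```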
The features
  • In the list view you can select an item. Each item can be edited normally (create, update, delete)
  • The editor that updates files reproduces, in simple XSLT, part of the Leiden+ logic and conventions for you to enter data or update existing data. It validates the data against the tei-epidoc.rng schema after performing the changes; the plan is to have it validate before the real changes are made.
  • The search simply searches a number of indexed elements. It is not a full-text index. There are also range indexes set to speed up the queries, besides the other indexes shipped with exist.
  • You can create a new entry with the Leiden+-like editor and save it; it will first be validated, and if it is not valid you are pointed to the problems. There was not enough time to add the vocabularies and update the editor.
  • Once you view an item you will find, in admittedly ugly tables, a first section with metadata, the text, some additional information on persons and a map:
  • The text exploits some of the parameters of the EpiDoc Stylesheets. You can
    change the desired value, hit change and see the different output.
  • The IDs of corresponding inscriptions are pulled from the EAGLE IDs API here in Hamburg, using Trismegistos data. This app will hopefully soon be moved to Trismegistos itself.
  • The EDH id is instead used to query the EDH data API and get the information about persons, which is printed below the text.
  • For each element with a @ref in the XML files you will find the name of the element and a link to the value, e.g. a link to the EAGLE vocabularies.
  • In case this is a TM Geo ID, the ID is used to query the Wikidata SPARQL endpoint and retrieve the coordinates and the corresponding Pleiades ID (provided they are there). The same logic could be used for VIAF, GeoNames, etc. This task is done via an HTTP request directly in the XQuery powering the app.
  • The Pleiades ID thus retrieved (which could certainly be obtained in other ways) is then used in JavaScript to query Pelagios and print the map below (taken from the hello-world example in the Pelagios repository).
  • At http://betamasaheft.aai.uni-hamburg.de/api/OEDUc/places/all and http://betamasaheft.aai.uni-hamburg.de/api/OEDUc/places/all/void, two RESTXQ functions provide the TTL files for Pelagios (but not a dump as required, although this can be done). The place annotations cover, at the moment, only the first 20 entries. See rest.xql.
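The TM Geo ID lookup described above is done in XQuery in the app; as a hedged Python sketch of the same idea, the Wikidata properties involved would be P1958 (Trismegistos Geo ID), P625 (coordinate location) and P1584 (Pleiades ID) — double-check these against the live data:

```python
import json
import urllib.parse
import urllib.request

WDQS = "https://query.wikidata.org/sparql"

def build_query(tm_geo_id):
    """Find the Wikidata item carrying this Trismegistos Geo ID (P1958)
    and pull its coordinates (P625) and Pleiades ID (P1584), when present."""
    return f'''
        SELECT ?place ?coords ?pleiades WHERE {{
          ?place wdt:P1958 "{tm_geo_id}" .
          OPTIONAL {{ ?place wdt:P625 ?coords . }}
          OPTIONAL {{ ?place wdt:P1584 ?pleiades . }}
        }}'''

def lookup(tm_geo_id):
    # Network call: WDQS returns SPARQL JSON results.
    url = WDQS + "?" + urllib.parse.urlencode(
        {"query": build_query(tm_geo_id), "format": "json"})
    req = urllib.request.Request(url, headers={"User-Agent": "OEDUc-demo"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]
```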
Future tasks

For the purpose of having a sample app to help people get started with their projects and see some of the possibilities at work, besides making it a bit nicer it would be useful if it could also have the following:

  • Add more data from the EDH-API, especially from edh_geography_uri, which Frank has added and which holds the URI of the geo data; appending .json to this URI returns the JSON data for the place of finding, which has an “edh_province_uri” with the data about the province.
  • Validate before submitting
  • Add more support for parameters in the EpiDoc example XSLT (e.g. for a Zotero bibliography contained in div[@type='bibliography'])
  • Improve the upconversion and the editor with more, and more precise, matchings
  • Provide functionality to use xpath to search the data
  • Add advanced search capabilities to filter results by id, content provider, etc.
  • Add images support
  • Include all EAGLE data (currently only the EDH dump data is in, but the system scales nicely)
  • Include query to the EAGLE media wiki of translations (api currently unavailable)
  • Show related items based on any of the values
  • Include in the editor the possibility to tag named entities
  • Sync the Epidoc XSLT repository and the eagle vocabularies with a webhook
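For the first of these tasks, the EDH-API pattern described above (append .json to edh_geography_uri, then read edh_province_uri) could be sketched as below; the field names come from the description above, but the exact response shape is an assumption:

```python
import json
import urllib.request

def json_url(uri):
    """The JSON view of an EDH geography record is the URI plus '.json'."""
    return uri + ".json"

def place_and_province(edh_geography_uri):
    """Fetch the place-of-finding record, then follow its edh_province_uri
    the same way (full response shape assumed, not documented here)."""
    with urllib.request.urlopen(json_url(edh_geography_uri)) as resp:
        place = json.load(resp)
    province_uri = place.get("edh_province_uri")
    if not province_uri:
        return place, None
    with urllib.request.urlopen(json_url(province_uri)) as resp:
        return place, json.load(resp)
```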

OEDUc: Disambiguating EDH person RDF working group

Tue, 25/07/2017 - 17:15

One of the working groups at the Open Epigraphic Data Unconference (OEDUc) meeting in London (May 15, 2017) focussed on disambiguating EDH person RDF. Since the Epigraphic Database Heidelberg (EDH) has made all of its data available to download in various formats in an Open Data Repository, it is possible to extract the person data from the EDH Linked Data RDF.

A first step in enriching this prosopographic data might be to link the EDH person names with PIR and Trismegistos (TM) references. At this moment the EDH person RDF only contains links to attestations of persons, rather than unique individuals (although it attaches only one REF entry to persons who have multiple occurrences in the same text), so we cannot use the EDH person URI to disambiguate persons from different texts.

Given that EDH already contains links to PIR in its bibliography, we could start by extracting these (this should be possible using a simple Python script) and linking them to the EDH person REF. In the case where only one person is attested in a text, the PIR reference can be linked directly to the RDF of that EDH person attestation. If, however (and probably in most cases), there are multiple person references in a text, we should try another procedure (possibly by looking at the first letter of the EDH name and matching it to the alphabetical PIR volume).
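The two cases above could be sketched as follows; the PIR citation pattern and the volume-letter heuristic are assumptions for illustration, not tested against real EDH bibliography strings:

```python
import re

# A PIR citation is assumed to look like "PIR² N 73" (volume letter, entry number).
PIR_RE = re.compile(r"PIR\S*\s+([A-Z])\s+(\d+)")

def extract_pir(bibliography):
    """Pull (volume letter, entry number) PIR references from a bibliography string."""
    return PIR_RE.findall(bibliography)

def link_pir(pir_refs, person_names):
    """One person attested: link the PIR reference directly. Otherwise fall
    back on matching the PIR volume letter against each EDH name's initial."""
    if len(person_names) == 1 and len(pir_refs) == 1:
        return {person_names[0]: pir_refs[0]}
    return {name: ref for ref in pir_refs
            for name in person_names if name.startswith(ref[0])}
```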

A second way of enriching the EDH person RDF could be done by using the Trismegistos People portal. At the moment this database of persons and attestations of persons in texts consists mostly of names from papyri (from Ptolemaic Egypt), but TM is in the process of adding all names from inscriptions (using an automated NER script on the textual data from EDCS via the EAGLE project). Once this is completed, it will be possible to use the stable TM PER ID (for persons) and TM person REF ID (for attestations of persons) identifiers (and URIs) to link up with EDH.

The recommended procedure to follow would be similar to the one for PIR. Whenever there is a one-to-one relationship with a single EDH person reference, the TM person REF ID could be directly linked to it. In the case of multiple attestations of different names in an inscription, we could modify the TM REF dataset by first removing all double attestations, and secondly matching the remaining ones to the EDH RDF by making use of the order of appearance: in EDH the person that occurs first in an inscription receives a URI (?) that consists of the EDH text ID and an integer representing the place of the name in the text (e.g., http://edh-www.adw.uni-heidelberg.de/edh/person/HD000001/1 is the first appearing person name in text HD000001). Finally, we could check for mistakes by matching the first character(s) of the EDH name with the first character(s) of the TM REF name. Ultimately, by using the links from the TM REF IDs to the TM PER IDs, we could send back to EDH which REF names are to be considered the same person, thus further disambiguating their person RDF data.
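The order-of-appearance matching and the first-character sanity check could be sketched like this; the URI pattern follows the description above, and the assumption is that the TM REF names arrive in text order:

```python
def match_tm_to_edh(edh_text_id, tm_ref_names):
    """Pair TM attestations with EDH person URIs by order of appearance:
    the n-th name in the text gets .../person/<HD id>/<n>."""
    base = "http://edh-www.adw.uni-heidelberg.de/edh/person"
    return {f"{base}/{edh_text_id}/{n}": name
            for n, name in enumerate(tm_ref_names, start=1)}

def sanity_check(pairs, edh_names_by_uri):
    """Flag pairs whose TM and EDH names start with different characters."""
    return [uri for uri, tm_name in pairs.items()
            if uri in edh_names_by_uri
            and tm_name[:1].lower() != edh_names_by_uri[uri][:1].lower()]
```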

This process would be a good step in enhancing the SNAP:DRGN-compliant RDF produced by EDH, which was also addressed in another working group: recommendations for EDH person-records in SNAP RDF.


OEDUc: EDH and Pelagios location disambiguation Working Group

Wed, 05/07/2017 - 16:56

From the beginning of the unconference, an interest in linked open geodata was shared by a number of participants. Moreover, attention to gazetteers and alignment appeared among the desiderata for the event expressed by the EDH developers. So, in the second part of the unconference, we had a look at what sort of geographic information can be found in the EDH and what could be added.

The discussion, of course, involved Pelagios and Pleiades and their different but related roles in establishing links between sources of geographical information. EDH is already one of the contributors to the Pelagios LOD ecosystem. Using the Pleiades IDs to identify places, it was relatively easy for the EDH to make its database compatible with Pelagios and discoverable on Peripleo, Pelagios’s search and visualisation engine.

However, looking into the data available for download, we focused on a couple of things. One is that each of the epigraphic texts in the EDH has, of course, a unique identifier (EDH text ID). The other is that each of the places mentioned also has a unique identifier (EDH geo ID), besides the Pleiades ID. As one can imagine, the relationships between texts and places can be one-to-one or one-to-many (a place can be related to more than one text, and a text to more than one place). All places mentioned in the EDH database have an EDH geo ID, and this information becomes especially relevant for those places that do not already have an ID in Pleiades or GeoNames. In this perspective, EDH geo IDs fill the gaps left by the other two gazetteers and meet the specific needs of the EDH.

Exploring Peripleo to see what information from the EDH can be found in it and how it gets visualised, we noticed that only the information about the texts appears as resources (identified by the diamond icon), while the EDH geo IDs do not show up as a gazetteer-like reference, as happens for other databases, such as Trismegistos or Vici.

So we decided to do a little work on the EDH geo IDs, more specifically:

  1. To extract them and treat them as a small, internal gazetteer that could be contributed to Pelagios. Such a feature wouldn’t represent a substantial change in the way EDH is used, or in how the data are found in Peripleo, but we thought it could improve the visibility of the EDH in the Pelagios panorama and, possibly, act as an intermediate step for the matching of different gazetteers that focus on the ancient world.
  2. The idea of using the EDH geo IDs as bridges sounded especially interesting when thinking of the possible interaction with the Trismegistos database, so we wondered whether a closer collaboration between the two projects could benefit both. Trismegistos, in fact, is another project with substantial geographic information: about 50,000 place-names mapped against Pleiades, Wikipedia and GeoNames. Since the last Linked Pasts conference, they have tried to align their place-names with Pelagios, but the operation was successful for only 10,000 of them. We believe that enhancing the links between Trismegistos and EDH could make them better connected to each other and both more effectively present in the LOD ecosystem around the ancient world.

With these two objectives in mind, we downloaded the geoJSON dump from the EDH website and extracted the text IDs, the geo IDs, and their relationships. Once the lists (which can be found in the GitHub repository) had been created, it became relatively straightforward to try and match the EDH geo IDs with the Trismegistos Geo IDs. In this way, through the intermediate step of the geographical relationships between text IDs and geo IDs in EDH, Trismegistos also gains a better and more informative connection with the EDH texts.
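The extraction step can be sketched as below; the property keys used here ('id' and 'edh_ids') are hypothetical placeholders — check the actual EDH geoJSON dump for the real key names:

```python
from collections import defaultdict

def text_geo_relations(dump):
    """Build a geo ID -> text IDs mapping from a parsed EDH geoJSON dump.
    'id' and 'edh_ids' are assumed key names, not verified against the dump."""
    by_geo = defaultdict(set)
    for feature in dump.get("features", []):
        props = feature.get("properties", {})
        for text_id in props.get("edh_ids", []):
            by_geo[props.get("id")].add(text_id)
    return by_geo
```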

This first, quick attempt at aligning geodata using their common references might help test how good the automatic matches are, and start the thinking about how to troubleshoot mismatches and other errors. This closer look at geographical information also brought up a small bug in the EDH interface: in the internal EDH search, when there is a connection to a place that does not have a Pleiades ID, the website treats it as an error, instead of, for example, referring to the internal EDH geo IDs. This may be worth flagging to the EDH developers, and it underlines, in a way, another benefit of treating the EDH geo IDs as a small gazetteer of their own.

In the end, we used the common IDs (either in Pleiades or GeoNames) to do a first alignment between the Trismegistos and EDH place IDs. We didn’t have time to check the accuracy (but you are welcome to take this experiment one step further!), but we fully expect to get quite a few positive results. And we have the list of EDH geo IDs ready to be reused for other purposes, and maybe to make its debut on the Peripleo scene.


OEDUc: recommendations for EDH person-records in SNAP RDF

Mon, 03/07/2017 - 14:17

At the first meeting of the Open Epigraphic Data Unconference (OEDUc) in London in May 2017, one of the working groups that met in the afternoon (and claim to have completed our brief, so do not propose to meet again) examined the person-data offered for download on the EDH open data repository, and made some recommendations for making this data more compatible with the SNAP:DRGN guidelines.

Currently, the RDF of a person-record in the EDH data (in TTL format) looks like:

<http://edh-www.adw.uni-heidelberg.de/edh/person/HD000001/1> a lawd:Person ;
    lawd:PersonalName "Nonia Optata"@lat ;
    gndo:gender <http://d-nb.info/standards/vocab/gnd/gender#female> ;
    nmo:hasStartDate "0071" ;
    nmo:hasEndDate "0130" ;
    snap:associatedPlace <http://edh-www.adw.uni-heidelberg.de/edh/geographie/11843> ,
        <http://pleiades.stoa.org/places/432808#this> ;
    lawd:hasAttestation <http://edh-www.adw.uni-heidelberg.de/edh/inschrift/HD000001> .

We identified a few problems with this data structure, and made recommendations as follows.

  1. We propose that EDH split the current person references in edh_people.ttl into: (a) one lawd:Person, which has the properties for name, gender, status, membership, and hasAttestation, and (b) one lawd:PersonAttestation, which has properties dct:Source (which points to the URI for the inscription itself) and lawd:Citation. Date and location etc. can then be derived from the inscription (which is where they belong).
  2. A few observations:
    1. Lawd:PersonalName is a class, not a property. The recommended property for a personal name as a string is foaf:name
    2. the language tag for Latin should be @la (not lat)
    3. there are currently thousands of empty strings tagged as Greek
    4. Nomisma date properties cannot be used on person, because the definition is inappropriate (and unclear)
    5. As documented, Nomisma date properties refer only to numismatic dates, not epigraphic (I would request a modification to their documentation for this)
    6. the D-N.B ontology for gender is inadequate (which is partly why SNAP has avoided tagging gender so far); a better ontology may be found, but I would suggest plain text values for now
    7. to the person record, above, we could then add dct:identifier with the PIR number (and compare discussion of plans for disambiguation of PIR persons in another working group)
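Putting the recommendations together, a provisional sketch of the split record might look like the following; the exact term choices (e.g. whether the attestation is a blank node, and the citation property) are left open and should be treated as assumptions:

```turtle
<http://edh-www.adw.uni-heidelberg.de/edh/person/HD000001/1> a lawd:Person ;
    foaf:name "Nonia Optata"@la ;
    lawd:hasAttestation _:att1 .

_:att1 a lawd:PersonAttestation ;
    dct:source <http://edh-www.adw.uni-heidelberg.de/edh/inschrift/HD000001> .
```

Note that the date and place properties are gone from the person: per recommendation 1, they would be derived from the inscription itself.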

Global Philology Workshop Week in Leipzig

Thu, 29/06/2017 - 07:11

Within the framework of the BMBF-funded Global Philology Planning Project, we would like to announce three workshops that will take place at the University of Leipzig in the next two weeks:


Head of Digital Research at the National Archives

Wed, 21/06/2017 - 11:17

Posted for Olivia Pinto (National Archives, Kew, UK):

Job Opportunity at The National Archives

Head of Digital Research

About the role

The National Archives has set itself the ambition of becoming a digital archive by instinct and design. The digital strategy takes this forward through the notion of a disruptive archive which positively reimagines established archival practice, and develops new ways of solving core digital challenges. You will develop a research programme to progress this vision, to answer key questions for TNA and the Archives Sector around digital archival practice and delivery. You will understand and navigate through the funding landscape, identifying key funders (RCUK and others) to build relations at a senior level to articulate priorities around digital archiving, whilst taking a key role in coordinating digitally focused research bids. You will also build key collaborative relationships with academic partners and undertake horizon scanning of the research landscape, tracking and engaging with relevant research projects nationally and internationally. You will also recognise the importance of developing an evidence base for our research into digital archiving and will lead on the development of methods for measuring impact.

About you

As someone who will be mentoring and managing a team of researchers, as well as leading on digital programming across the organisation, you’ll need to be a natural at inspiring and engaging the people you work with. You will also have the confidence to engage broadly with external stakeholders and partners. Your background and knowledge of digital research, relevant in the context of a memory institution such as The National Archives, will gain you the respect you need to deliver an inspiring digital research programme. You combine strategic leadership with a solid understanding of the digital research landscape as well as the tools and technologies that will underpin the development of a digital research programme. You will come with a strong track record in digital research, a doctorate in a discipline relevant to our digital research agenda, and demonstrable experience of relationship development at a senior level with the academic and research sectors.

Join us here in beautiful Kew, just 10 minutes’ walk from the Overground and Underground stations, and you can expect an excellent range of benefits. They include a pension, flexible working and childcare vouchers, as well as discounts with local businesses. We also offer well-being resources (e.g. onsite therapists) and have an on-site gym, restaurant, shop and staff bar.

To apply please follow the link: https://www.civilservicejobs.service.gov.uk/csr/jobs.cgi?jcode=1543657

Salary: £41,970

Closing date: Wednesday 28th June 2017


Pleiades sprint on Pompeian buildings

Tue, 20/06/2017 - 17:07

Casa della Statuetta Indiana, Pompei.

On Monday the 26th of June, from 15:00 to 17:00 BST, Pleiades is organising an editing sprint to create additional URIs for Pompeian buildings, preferably looking at those located in Regio I, Insula 8.

Participants will meet remotely on the Pleiades IRC chat. Providing monument-specific IDs will enable a more efficient and granular use and organisation of Linked Open Data related to Pompeii, and will support the work of digital projects such as the Ancient Graffiti.

Everyone is welcome to join, but a Pleiades account is required to edit the online gazetteer.


OEDUc: EDH and Pelagios NER working group

Mon, 19/06/2017 - 12:21

Participants:  Orla Murphy, Sarah Middle, Simona Stoyanova, Núria Garcia Casacuberta

Report: https://github.com/EpiDoc/OEDUc/wiki/EDH-and-Pelagios-NER

The EDH and Pelagios NER working group was part of the Open Epigraphic Data Unconference held on 15 May 2017. Our aim was to use Named Entity Recognition (NER) on the text of inscriptions from the Epigraphic Database Heidelberg (EDH) to identify placenames, which could then be linked to their equivalent terms in the Pleiades gazetteer and thereby integrated with Pelagios Commons.

Data about each inscription, along with the inscription text itself, is stored in one XML file per inscription. In order to perform NER, we therefore first had to extract the inscription text from each XML file (contained within <ab></ab> tags), then strip out any markup from the inscription to leave plain text. There are various Python libraries for processing XML, but most of these turned out to be a bit too complex for what we were trying to do, or simply returned the identifier of the <ab> element rather than the text it contained.

Eventually, we found the Python library Beautiful Soup, which converts an XML document to structured text, from which you can identify your desired element, then strip out the markup to convert the contents of this element to plain text. It is a very simple and elegant solution with only eight lines of code to extract and convert the inscription text from one specific file. The next step is to create a script that will automatically iterate through all files in a particular folder, producing a directory of new files that contain only the plain text of the inscriptions.
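The group’s eight-line Beautiful Soup script is the one referenced in the report; as a rough standard-library equivalent of the same extraction (assuming the inscription sits in a TEI-namespaced `<ab>`, with a fallback for un-namespaced files), the step can be sketched as:

```python
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

def inscription_text(xml_string):
    """Find the <ab> element, strip all markup, and return plain text."""
    root = ET.fromstring(xml_string)
    ab = root.find(f".//{TEI_NS}ab")
    if ab is None:              # fall back to an un-namespaced <ab>
        ab = root.find(".//ab")
    if ab is None:
        return ""
    # itertext() yields the text content with child markup removed;
    # join/split normalises internal whitespace.
    return " ".join("".join(ab.itertext()).split())
```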

Once we have a plain text file for each inscription, we can begin the process of named entity extraction. We decided to follow the methods and instructions shown in the two Sunoikisis DC classes on Named Entity Extraction:

https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-I

https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-II

Here is a short outline of the steps this might involve when it is done in the future.

  1. Extraction
    1. Split text into tokens, make a python list
    2. Create a baseline
      1. cycle through each token of the text
      2. if the token starts with a capital letter it’s a named entity (only one type, i.e. Entity)
    3. Classical Language Toolkit (CLTK)
      1. for each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities
      2. Compare to baseline
    4. Natural Language Toolkit (NLTK)
      1. Stanford NER Tagger for Italian works well with Latin
      2. Differentiates between different kinds of entities: place, person, organization or none of the above, more granular than CLTK
      3. Compare to both baseline and CLTK lists
  2. Classification
    1. Part-Of-Speech (POS) tagging – precondition before you can perform any other advanced operation on a text, information on the word class (noun, verb etc.); TreeTagger
    2. Chunking – sub-dividing a section of text into phrases and/or meaningful constituents (which may include 1 or more text tokens); export to IOB notation
    3. Computing entity frequency
  3. Disambiguation

Although we didn’t make as much progress as we would have liked, we have achieved our aim of creating a script to prepare individual files for NER processing, and have therefore laid the groundwork for future developments in this area. We hope to build on this work to successfully apply NER to the inscription texts in the EDH in order to make them more widely accessible to researchers and to facilitate their connection to other, similar resources, like Pelagios.


OEDUc: Images and Image metadata working group

Tue, 13/06/2017 - 13:54

Participants: Sarah Middle, Angie Lumezeanu, Simona Stoyanova
Report: https://github.com/EpiDoc/OEDUc/wiki/Images-and-image-metadata

 

The Images and Image Metadata working group met at the London meeting of the Open Epigraphic Data Unconference on May 15, 2017, and discussed the issues of copyright, metadata formats, image extraction and licence transparency in the Epigraphik Fotothek Heidelberg, the database which contains images and metadata relating to nearly forty thousand Roman inscriptions from collections around the world. Were the EDH to lose its funding and the website its support, one of the biggest and most useful digital epigraphy projects would start disintegrating. While its data is available for download, its usability would be greatly compromised. Thus, this working group focused on issues pertaining to the EDH image collection. The materials we worked with are the JPG images as seen on the website and the image metadata files, which are available as XML and JSON data dumps on the EDH data download page.

The EDH Photographic Database index page states: “The digital image material of the Photographic Database is with a few exceptions directly accessible. Hitherto it had been the policy that pictures with unclear utilization rights were presented only as thumbnail images. In 2012 as a result of ever increasing requests from the scientific community and with the support of the Heidelberg Academy of the Sciences this policy has been changed. The approval of the institutions which house the monuments and their inscriptions is assumed for the non commercial use for research purposes (otherwise permission should be sought). Rights beyond those just mentioned may not be assumed and require special permission of the photographer and the museum.”

During a discussion with Frank Grieshaber we found out that the information in this paragraph is available only on this webpage, with no individual licence details in the metadata records of the images, either in the XML or in the JSON data dumps. It would be useful for this information to be included in the records, though it is not clear how to accomplish this efficiently for each photograph, since all the photographers would need to be contacted first. Currently, the rights information in the XML records says “Rights Reserved – Free Access on Epigraphischen Fotothek Heidelberg”, which presumably points to the “research purposes” part of the statement on the EDH website.

All other components of EDH – inscriptions, bibliography, geography and people RDF – have been released under Creative Commons Attribution-ShareAlike 3.0 Unported license, which allows for their reuse and repurposing, thus ensuring their sustainability. The images, however, will be the first thing to disappear once the project ends. With unclear licensing and the impossibility of contacting every single photographer, some of whom are not alive anymore and others who might not wish to waive their rights, data reuse becomes particularly problematic.

One possible way of figuring out the copyright of individual images is to check the reciprocal links to the photographic archive of the partner institutions who provided the images, and then read through their own licence information. However, these links are only visible from the HTML and not present in the XML records.

Given that the image metadata in the XML files is relatively detailed and already in place, we decided to focus on the task of image extraction for research purposes, which is covered by the general licensing of the EDH image databank. We prepared a Python script for batch download of the entire image databank, available on the OEDUc GitHub repo. Each image has a unique identifier which is the same as its filename and the final string of its URL. This means that when an inscription has more than one photograph, each one has its individual record and URI, which allows for complete coverage and efficient harvesting. The images are numbered sequentially, and in the case of a missing image, the process skips that entry and continues on to the next one. Since the databank includes some 37,530+ images, the script pauses for 30 seconds after every 200 files to avoid a timeout. We don’t have access to the high-resolution TIFF images, so this script downloads the JPGs from the HTML records.
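The core loop of such a script might look like the sketch below; the base URL pattern here is a placeholder (the real one is in the script on the OEDUc GitHub repo), but the skip-on-404 behaviour and the 30-second pause after every 200 files follow the description above:

```python
import time
import urllib.error
import urllib.request

# Placeholder URL pattern: the identifier equals the filename and the
# final string of the image URL, as described above.
BASE = "https://edh-www.adw.uni-heidelberg.de/fotos/{}.jpg"

def image_url(image_id):
    return BASE.format(image_id)

def download_range(start, stop, pause_every=200, pause_secs=30):
    """Walk sequential image ids, skip missing entries (HTTP errors),
    and pause after every batch to avoid a timeout."""
    for n, image_id in enumerate(f"F{i:06d}" for i in range(start, stop)):
        try:
            with urllib.request.urlopen(image_url(image_id)) as resp:
                with open(f"{image_id}.jpg", "wb") as out:
                    out.write(resp.read())
        except urllib.error.HTTPError:
            continue                  # missing image: skip and carry on
        if (n + 1) % pause_every == 0:
            time.sleep(pause_secs)
```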

The EDH images included in the EAGLE MediaWiki are all under an open licence and link back to the EDH databank. A task for the future will be to compare the two lists to get a sense of the EAGLE coverage of EDH images and feed back their licensing information to the EDH image records. One issue is the lack of file-naming conventions in EAGLE, where some photographs carry a publication citation (CIL_III_14216,_8.JPG, AE_1957,_266_1.JPG), a random name (DR_11.jpg) and even a descriptive filename which may contain an EDH reference (Roman_Inscription_in_Aleppo,_Museum,_Syria_(EDH_-_F009848).jpeg). Matching these to the EDH databank will have to be done by cross-referencing the publication citations either in the filename or in the image record.

A further future task could be to embed the image metadata into the image itself. The EAGLE MediaWiki images already have Exif data (added automatically by the camera), but it might be useful to add descriptive and copyright information internally, following the IPTC data set standard (e.g. title, subject, photographer, rights, etc.). This will help bring the inscription file, image record and image itself back together in the event of data scattering after the end of the project. Currently, linkage exists between the inscription files and the image records. Embedding at least the HD number of the inscription directly into the image metadata will allow us to gradually bring the resources back together, following changes in copyright and licensing.

Out of the three tasks we set out to discuss, one turned out to be impractical and unfeasible, one we accomplished (and published the code), and one remains to be worked on in the future. Ascertaining the copyright status of all the images is physically impossible, so all future experiments will be done on the EDH images in the EAGLE MediaWiki. The script for extracting JPGs from the HTML is available on the OEDUc GitHub repo. We have drafted a plan for embedding metadata into the images, following the IPTC standard.


Open Epigraphic Data Unconference report

Wed, 07/06/2017 - 12:12

Last month, a dozen or so scholars met in London (and were joined by a similar number via remote video-conference) to discuss and work on the open data produced by the Epigraphic Database Heidelberg. (See call and description.)

Over the course of the day seven working groups were formed, two of which completed their briefs within the day, but the other five will lead to ongoing work and discussion. Fuller reports from the individual groups will follow here shortly, but here is a short summary of the activities, along with links to the pages in the Wiki of the OEDUc Github repository.

Useful links:

  1. All interested colleagues are welcome to join the discussion group: https://groups.google.com/forum/#!forum/oeduc
  2. Code, documentation, and other notes are collected in the Github repository: https://github.com/EpiDoc/OEDUc

1. Disambiguating EDH person RDF
(Gabriel Bodard, Núria García Casacuberta, Tom Gheldof, Rada Varga)
We discussed and broadly specced out a couple of steps in the process for disambiguating PIR references for inscriptions in EDH that contain multiple personal names, for linking together person references that cite the same PIR entry, and for using Trismegistos data to further disambiguate EDH persons. We haven’t written any actual code to implement this yet, but we expect a few Python scripts would do the trick.

2. Epigraphic ontology
(Hugh Cayless, Paula Granados, Tim Hill, Thomas Kollatz, Franco Luciani, Emilia Mataix, Orla Murphy, Charlotte Tupman, Valeria Vitale, Franziska Weise)
This group discussed the various ontologies available for encoding epigraphic information (LAWD, Nomisma, the EAGLE Vocabularies) and ideas for filling the gaps between them. This is a long-standing desideratum of the EpiDoc community, and will be an ongoing discussion (perhaps the most important of the workshop).

3. Images and image metadata
(Angie Lumezeanu, Sarah Middle, Simona Stoyanova)
This group attempted to write scripts to track down copyright information on images in EDH (too complicated, but EAGLE may have more of this) and to download images and metadata (scripts in Github), and explored the possibility of embedding metadata in the images in IPTC format (in progress).
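The group's actual download scripts are in the Github repo; the URL-harvesting stage they imply can be illustrated with a stdlib-only sketch (the sample HTML here is invented for the example):

```python
from html.parser import HTMLParser

class ImageLinkExtractor(HTMLParser):
    # Collect the src of every <img> tag that points at a JPG.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src", "")
            if src.lower().endswith((".jpg", ".jpeg")):
                self.links.append(src)

page = '<html><body><img src="/photos/F000001.jpg"><img src="logo.png"></body></html>'
parser = ImageLinkExtractor()
parser.feed(page)
# parser.links → ["/photos/F000001.jpg"]
```

The harvested links could then be fetched and passed to the IPTC-embedding step described in the fuller report above.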

4. EDH and SNAP:DRGN mapping
(Rada Varga, Scott Vanderbilt, Gabriel Bodard, Tim Hill, Hugh Cayless, Elli Mylonas, Franziska Weise, Frank Grieshaber)
In this group we reviewed the status of the SNAP:DRGN recommendations for person-data in RDF, and then looked in detail at the person list exported from the EDH data. A list of suggestions for improving this data was produced for EDH to consider. This task was considered complete. (Although Frank may have feedback or questions for us later.)

5. EDH and Pelagios NER
(Orla Murphy, Sarah Middle, Simona Stoyanova, Núria Garcia Casacuberta, Thomas Kollatz)
This group explored the possibility of running automated named entity extraction on the Latin texts of the EDH inscriptions, in two stages: extracting plain text from the XML (code in Github), then applying CLTK/NLTK scripts to identify entities (in progress).
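The group's extraction code is in Github; as a minimal stand-in, the first stage might look like the sketch below. It assumes the inscription text lives in a TEI `<div type="edition">`, as is conventional in EpiDoc, and the sample document is a fabricated fragment.

```python
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

def extract_text(epidoc_xml):
    # Return the plain text of every <div type="edition">, stripping all
    # markup and normalising whitespace, as input for downstream NER.
    root = ET.fromstring(epidoc_xml)
    texts = []
    for div in root.iter(f"{TEI_NS}div"):
        if div.get("type") == "edition":
            texts.append(" ".join("".join(div.itertext()).split()))
    return " ".join(texts)

sample = (
    '<TEI xmlns="http://www.tei-c.org/ns/1.0"><text><body>'
    '<div type="edition"><ab>D(is) M(anibus) <name>Iuliae</name></ab></div>'
    '</body></text></TEI>'
)
# extract_text(sample) → "D(is) M(anibus) Iuliae"
```

The resulting plain text would then be tokenised and tagged by the CLTK/NLTK stage.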

6. EDH and Pelagios location disambiguation
(Paula Granados, Valeria Vitale, Franco Luciani, Angie Lumezeanu, Thomas Kollatz, Hugh Cayless, Tim Hill)
This group aimed to disambiguate location information in the EDH data export, for example by making links between Geonames, Trismegistos Geo, Wikidata and Pleiades place identifiers, via the Pelagios gazetteer or other linking mechanisms. A pathway for resolution was identified, but work is still ongoing.
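A minimal sketch of the alignment step, assuming a concordance from Trismegistos Geo IDs to Pleiades URIs is available (the IDs and the pairing in the example are illustrative, not verified data):

```python
def align_places(edh_places, tm_to_pleiades):
    # Attach a Pleiades URI to each place via its Trismegistos Geo ID;
    # places without a match are returned separately for manual review.
    aligned, unresolved = [], []
    for place in edh_places:
        pleiades = tm_to_pleiades.get(place.get("tm_geo_id"))
        if pleiades:
            aligned.append({**place, "pleiades": pleiades})
        else:
            unresolved.append(place)
    return aligned, unresolved

# Illustrative concordance only; in practice it might be built from
# Wikidata mix'n'match or the Pelagios gazetteer.
concordance = {"1438": "https://pleiades.stoa.org/places/423025"}
places = [{"name": "Roma", "tm_geo_id": "1438"},
          {"name": "Ignotum", "tm_geo_id": None}]
aligned, unresolved = align_places(places, concordance)
```

Keeping the unresolved places separate matches the workflow described above, where gaps in the Trismegistos-to-Pleiades mapping still need manual attention.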

7. Exist-db mashup application
(Pietro Liuzzo)
This task, which Dr Liuzzo carried out alone, since his network connection didn’t allow him to join any of the discussion groups on the day, was to create an implementation of existing code for displaying and editing epigraphic editions (using Exist-db, Leiden+, etc.) and offer a demonstration interface by which the EDH data could be served up to the public and contributions and improvements invited. (A preview “epigraphy.info” perhaps?)

Digital Classicist London seminar 2017 programme

Tue, 23/05/2017 - 11:26

Institute of Classical Studies

Senate House, Malet Street, London WC1E 7HU

Fridays at 16:30 in room 234*

  • Jun 2: Sarah Middle (Open University), Linked Data and Ancient World Research: studying past projects from a user perspective
  • Jun 9: Donald Sturgeon (Harvard University), Crowdsourcing a digital library of pre-modern Chinese
  • Jun 16*: Valeria Vitale et al. (Institute of Classical Studies), Recogito 2: linked data without the pointy brackets
  • Jun 23*: Dimitar Iliev et al. (University of Sofia “St. Kliment Ohridski”), Historical GIS of South-Eastern Europe
  • Jun 30: Lucia Vannini (Institute of Classical Studies), The role of Digital Humanities in Papyrology: Practices and user needs in papyrological research & Paula Granados García (Open University), Cultural Contact in Early Roman Spain through Linked Open Data resources
  • Jul 7: Elisa Nury (King’s College London), Collation Visualization: Helping Users to Explore Collated Manuscripts
  • Jul 14: Sarah Ketchley (University of Washington), Re-Imagining Nineteenth Century Nile Travel & Excavation for a Digital Age: The Emma B. Andrews Diary Project
  • Jul 21: Dorothea Reule & Pietro Liuzzo (Hamburg University), Issues in the development of digital projects based on user requirements. The case of Beta maṣāḥǝft
  • Jul 28: Rada Varga (Babeș-Bolyai University, Cluj-Napoca), Romans 1by1: Transferring information from ancient people to modern users

*Except Jun 16 & 23, room G34

digitalclassicist.org/wip/wip2017.html

This series is focussed on user and reader needs of digital projects and resources, and assumes a wide definition of classics, embracing the whole ancient world rather than only the Greco-Roman Mediterranean. The seminars will be pitched at a level suitable for postgraduate students and interested colleagues in Archaeology, Classics, Digital Humanities and related fields.

Digital Classicist London seminar is organized by Gabriel Bodard, Simona Stoyanova and Valeria Vitale (ICS) and Simon Mahony and Eleanor Robson (UCL).

ALL WELCOME

Unlocking Sacred Landscapes, by Giorgos Papantoniou

Fri, 05/05/2017 - 16:07

Project report by Dr Giorgos Papantoniou, papantog@uni-bonn.de

Previous multi-dimensional approaches to the study of ancient Mediterranean societies have shown that social, economic and religious lives were closely entwined. In attempting to engage with Cyprus’s multiple identities – and the ways in which islanders may have negotiated, performed or represented their identities – several material vectors related to ritual and sacred space must be taken into consideration. The sharp modern distinction between sacred and profane is not applicable to antiquity, and the terms ritual, cult and sacred space in Unlocking Sacred Landscapes: Cypriot Sanctuaries as Economic Units are used broadly to include the domestic and funerary spheres of life as well as formally constituted sanctuaries. Perceiving ritual space as instrumental in forming both power relations and the worldview of ancient people, and taking ancient Cyprus as a case study, the Project aims at elucidating how meanings and identities were diachronically expressed in, or created by, the topographical and economic setting of ritual and its material depositions and dedications.

The evidence of cult or sacred space is very limited and ambiguous before the Late Bronze Age. During the Late Bronze Age (ca. 1700-1125/1100 BC), however, ritual spaces were closely linked to industrial activities; the appropriation, distribution, and consumption of various resources (especially copper), labour and land were achieved by the elite through exploitation of supernatural knowledge. The Early Iron Age (ca. 1125/1100-750 BC) landscapes are very difficult to approach. We can, however, identify sanctuary sites in the countryside towards the end of this period. This phenomenon might well relate to the consolidation of the Iron Age Cypriot polities (known in the archaeological literature as Cypriot city-kingdoms) and their territories. While urban sanctuaries become religious communal centres, where social, cultural and political identities are affirmed, an indication of the probable use of extra-urban sanctuaries in the political establishment of the various polities of the Cypro-Archaic (ca. 750-480 BC) and Cypro-Classical (ca. 480-310 BC) periods has recently been put forward.

During the Hellenistic period (ca. 310-30 BC), a process of official neglect of the extra-urban sanctuaries signals a fundamental transformation in the social perception of the land. After the end of the city-kingdoms, and the movement from many political identities to a single identity, extra-urban sanctuaries were important mainly to the local extra-urban population. By the Roman period (ca. 30 BC-330 AD), the great majority of Hellenistic extra-urban sanctuaries are ‘dead’. When the social memory, elite or non-elite, that kept them alive ‘dies’, they ‘die’ with it; what usually distinguishes the surviving sites is what the defunct sites lacked: political scale and significance. As the topography of Roman sanctuary sites reveals, this is not to say that extra-urban sanctuaries did not exist anymore. Over time, however, they started to become primarily the concern of local audiences. The annexation and ‘provincialisation’ of Cyprus, with all the consequent developments, were accompanied by changes in memorial patterns, with less focus on regional or local structures, and more intense emphasis on stressing an ideology which created a more widely recognisable ‘pan-Cypriot’ myth-history, which was eventually related to Ptolemaic, and later to Roman imperial power and ideology.

This Project develops a holistic, inter-disciplinary approach to the diachronic study of ancient Cypriot ritual and cult. While it aims at bringing together textual, archaeological, epigraphic, art-historical, and sociological/anthropological evidence, for the first time it incorporates ‘scientific’ spatial analysis and more agent-centred computational models into the study of ancient Cypriot sanctuaries and religion. By inserting the Cypriot sanctuary sites into a GIS environment, the relation of sacred landscapes to the politico-economic geography put forward above is tested at both regional and island-wide levels.

The Project falls under the umbrella of a larger Research Network entitled Unlocking Sacred Landscapes.

For further information: http://www.ucy.ac.cy/unsala/

Dr Giorgos Papantoniou
Research Training Group 1878: Archaeology of Pre-Modern Economies
Rheinische Friedrich-Wilhelms-Universität Bonn
Institut für Archäologie und Kulturanthropologie
Abteilung für Klassische Archäologie
Lennéstr. 1
D-53113, Bonn
Germany

Job advertisement: Postdoctoral Research Associate (KCL)

Wed, 03/05/2017 - 12:23

Posted on behalf of Will Wootton (to whom enquiries should be addressed):

Training in Action: From Documentation to Protection of Cultural Heritage in Libya and Tunisia

As part of this new project funded by the British Council’s Cultural Heritage Protection Fund, a Post-Doctoral Research Associate will be employed at King’s College London. The Research Associate will work on the project initially for 10 months, with the contract likely to be renewed for a further 12 months.

The deadline for applications is 10th May. For further information, see here:
https://www.hirewire.co.uk/HE/1061247/MS_JobDetails.aspx?JobID=76726
And here:
http://www.jobs.ac.uk/job/BAY043/post-doctoral-research-associate-on-training-in-action-from-documentation-to-protection-of-cultural-heritage-in-libya-and-tunisia/

We would be most grateful if you could circulate this email to interested parties as the deadline is imminent.

Dr Will Wootton
King’s College London, London WC2R 2LS.
Tel. +44 (0)207 848 1015
Fax +44 (0)207 848 2545

Open Epigraphic Data Unconference, London, May 15, 2017

Tue, 02/05/2017 - 17:26

Open Epigraphic Data Unconference
10:00–17:00, May 15, 2017, Institute of Classical Studies

This one-day workshop, or “unconference,” brings together scholars, historians and data scientists with a shared interest in classical epigraphic data. The event involves no speakers or set programme of presentations, but rather a loose agenda, to be further refined in advance or on the day: to use, exploit, transform and “mash up” with other sources the Open Data recently made available by the Epigraphic Database Heidelberg under a Creative Commons license. Both participants with programming and data-processing experience (whether present or remote) and those with an interest in discussing and planning data manipulation and aggregation at a higher level are welcome.

Places at the event in London are limited; please contact <gabriel.bodard@sas.ac.uk> if you would like to register to attend.

There will also be a Google Hangout opened on the day, for participants who are not able to attend in person. We hope this event will only be the beginning of a longer conversation and project to exploit and disseminate this invaluable epigraphic dataset.

Historia Ludens: Conference on History and Gaming, 19 May 2017

Fri, 28/04/2017 - 15:52

Posted on behalf of Alexander von Lünen (to whom queries should be addressed):

University of Huddersfield
19 May 2017

This conference follows up on the workshop “Playing with History”, held in November 2015 in Huddersfield. Gaming and history is gaining more and more traction, whether as a means to “gamify” history education or museum experiences, or through computer games as a prism onto history, as in the popular History Respawned podcast series (http://www.historyrespawned.com/).

Besides discussing gamification or the use of (computer) games, we also want to explore gaming and playing in a broader historical-cultural sense. Can “playing” be used as a category for historical scholarship, perhaps alongside other categories such as gender, space or class? Historian Johan Huizinga’s Homo Ludens from 1938 looked at play and its importance for human culture. Can historians make similar cases for more specific histories? In recent publications historians have pointed to the connection between cities and play. Simon Sleight, for example, has worked on the history of childhood and urban history, examining young people’s appropriation of public urban spaces for their ludic activities and their struggles with authorities over this. Archaeologists, as another example, have shown that much of the urban infrastructure of Ancient Rome was dedicated to games, playing and gambling, given the large role these had in Roman life.

The conference will thus discuss terms like “gaming”, “playing” and “history” in broad terms. There are academic papers in the morning and round-table sessions in the afternoon for networking and demos.

Tickets (£10) are available via the University of Huddersfield web shop. Please note: there are travel/conference bursaries for postgraduate students available on request; please contact Dr Alexander von Lünen (a.f.vonlunen@hud.ac.uk) for details.

Full details and programme at https://hudddighum.wordpress.com/2017/03/06/historia-ludens-conference-on-history-and-gaming-19-may-2017/

CFP: Cyborg Classics: An Interdisciplinary Symposium

Tue, 25/04/2017 - 11:41

Forwarded on behalf of Silvie Kilgallon (to whom enquiries should be addressed):

We are pleased to announce a one-day symposium, sponsored by BIRTHA (The Bristol Institute for Research in the Humanities and Arts) to be held at the University of Bristol, on Friday July 7th 2017.

Keynote speakers:

  • Dr Kate Devlin (Goldsmiths)
  • Dr Genevieve Liveley (Bristol)
  • Dr Rae Muhlstock (NYU)

The aim of the day is to bring together researchers from different disciplines – scholars in Archaeology & Anthropology, Classics, English, History, and Theology as well as in AI, Robotics, Ethics, and Medicine – to share their work on automata, robots, and cyborgs. Ultimately, the aim is an edited volume and the development of further collaborative research projects.

Indicative key provocations include:

  • To what extent do myths and narratives about automata, robots, and cyborgs raise questions that are relevant to contemporary debates concerning robot, cyborg, and AI product innovation?
  • To what extent, and how, can contemporary debate concerning robot, cyborg, and AI product innovation rescript ancient myths and narratives about automata, robots, and cyborgs?
  • Can interdisciplinary dialogues between the ‘soft’ humanities and the ‘hard’ sciences of robotics and AI be developed? And to what benefit?
  • How might figures such as Pandora, Pygmalion’s statue, and Talos help inform current polarized debates concerning robot, cyborg, and AI ethics?
  • What are the predominant narrative scripts and frames that shape the public understanding of robotics and AI? How could these be re-coded?

We invite scholars working across the range of Classics and Ancient History (including Classical Reception) and across the Humanities more widely to submit expressions of interest and/or a title and abstract (of no more than 250 words) to the symposium coordinator, Silvie Kilgallon (silvie.kilgallon@bristol.ac.uk). PhD students are warmly encouraged to contribute. The deadline for receipt of abstracts is May 31st, 2017.
