Space/time: Live discussion from the workshop

We are using this thread to keep note of ideas and questions coming up during the two-day workshop (Space/Time: Methods in geospatial computing for mapping the past) here in Edinburgh - you are welcome to join us, ask questions and participate!

SOSORNET

Visualisation of space reference

[quote]Richard Talbert, Kai Brodersen, Space in the Roman World. Its Perception and Presentation. Antike Kultur und Geschichte, Bd. 5. Münster: LIT Verlag, 2004. Pp. 160; b/w figs. 19. ISBN 3-8258-7419-2. €14.90.[/quote]

(See e.g. review at http://ccat.sas.upenn.edu/bmcr/2005/2005-09-41.html )

Another reference on visualizing space and time

Donna J. Peuquet: Representations of Space and Time; The Guilford Press (June 20, 2002). Reviews: http://www.amazon.com/Representations-Space-Time-Donna-Peuqu...

Yet another URL

http://www.history.ac.uk/digit/peer/ - 'Peer review and evaluation of digital resources for the arts and humanities', funded by the AHRC last year.

GRADE project URL

AHRC GIS e-Science report

Of possible relevance: 'Geographical Information System (GIS) e-Science: developing a roadmap' (see http://www.ahessc.ac.uk/files/active/0/GIS-report.pdf), the report of a workshop funded by the AHRC last year in Belfast, which refers to the 'Vision of Britain' website that Guy just mentioned (http://www.visionofbritain.org.uk/index.jsp). The report argues that humanities data can and should be organized more or less wholly through georeferencing, because every humanities data object exists in time and space.

Standards

William has just underscored Vince's earlier point that Standards Must Not Constrain, with which I wholeheartedly agree. Since we have been discussing terminology, I wonder whether the very word 'standards' actually expresses what we're talking about here. Is the word itself too prescriptive?

Standards

"Standards" originally--whether technological, academic, or other--should mean minimum standards, no? I don't see that as a problem with terminology...

Standards

I wonder if the 'minimum' bit needs emphasizing somehow, especially in the light of William's highlighting of the fact that standards are by definition external, and therefore - at least potentially - 'imposable'. Just a thought.

Re: Standards and metadata in geospatial data

Some more questions on standards raised by William Kilbride's paper this afternoon:

[quote=William Kilbride]Do we know the standards well enough ...
... to know when we can break them?
Who is making our standards ...
... and why should we trust them?
Is standards compliance a measure of quality ...
... or a professional expectation?
What advantage comes from engaging with standards?
What credit comes from engaging with standards?
Do we need standards at all?[/quote]

Mashup or messup: it's (not) up to us

One tension arising from this paper is that between our need to create, maintain, and adhere to appropriate, reliable, stable, and well-supported data schemas, and the fact that standards already exist out there, created and supported by disciplines and institutions far better funded than ours.

This is a real problem: do we use an existing and implemented standard such as the Alexandria Digital Library schema, which necessarily involves the loss of some of our detailed data? Or do we simply "surface" our data in a lossy form via a stylesheet, while maintaining our core data in our own, more powerful format?

Are we in control of the data standards for this discipline? Are we in a position to dictate (or provoke) the schemas that Google and the like will become aware of, or do we only have the opportunity to make ourselves heard by using the standards that exist already, even if imperfect (for our purposes)?

(Does it matter? With the power of folksonomies and free-text searching, won't all our data become visible anyway?)

Re: Mashup or messup: it's (not) up to us

I believe we cannot do without standards. In fact, we should not deviate from these standards, but rather try and get them out to Google and into the communities - by giving a good example of how it can be useful for all to adhere to standards!

Re: Mashup or messup: it's (not) up to us

[quote=JJ]I believe we cannot do without standards. In fact, we should not deviate from these standards, but rather try and get them out to Google and into the communities - by giving a good example of how it can be useful for all to adhere to standards![/quote]

I absolutely agree with this assertion of the importance of common standards. There is no excuse for inventing yet another schema and adding to the informati-chaos that is out there (although the Google-ettes would argue that with the now almost infinite power of data-mining the folksonomized chaos is a good thing). What happens, however, when your data model is more complex than the most appropriate standard in the field? Do we dumb down our data? Do we create a dumbed-down serialisation of our data in the standard, while also using this as a pathway into our more complex and useful data? Do we try and modify the existing standards or invent new ones that are better and more appropriate?
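To illustrate the 'dumbed-down serialisation as a pathway' option in a rough, hedged way: the rich record, the field names and the target shape below are entirely invented, standing in for whatever the chosen standard (an ADL-style gazetteer entry, say) would actually require, with the identifier acting as the route back into the fuller native data.

[code]
# Hypothetical sketch: lossy projection of a rich record onto a flat, standard-shaped one.
rich_record = {
    "id": "http://example.org/place/42",
    "names": [{"value": "Eboracum", "lang": "la", "period": "roman"},
              {"value": "York", "lang": "en", "period": "modern"}],
    "geometry": {"type": "Point", "coordinates": [-1.0803, 53.9590]},
    "attestation": {"source": "Itinerarium Antonini", "confidence": "high"},
}

def to_simple_gazetteer(record):
    """Keep only what the simpler standard can hold, plus a pointer back to the full record."""
    return {
        "identifier": record["id"],           # the pathway back into the richer data
        "name": record["names"][0]["value"],  # the detail on names and attestation is lost
        "lon": record["geometry"]["coordinates"][0],
        "lat": record["geometry"]["coordinates"][1],
    }

print(to_simple_gazetteer(rich_record))
[/code]

The point of the sketch is only that the simple record stays standards-compliant while the identifier keeps the richer data reachable.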

Talking points

- Tom brought up the issue of how large non-digital geographical enterprises, such as the Barrington Atlas, evolve into community-based projects such as Pleiades. How does this relate to yesterday's discussion of 'Web 2.0/community' and top-down standards?

- What is the role of folksonomies and community tagging in such environments?

- How can metadata be generated in a way that allows ontological cross-searching of multiple datasets?

- What are the implications of georeferencing data for multiple sources in the manner of points/lines/polygons?

- Service-oriented architectures (SOAs) and the delivery of 'mashed-up' bundles of mapping services? What, where, how?

Academic credit?

Looking at this blog, it seems to me that you are discussing community involvement in Web 2.0 map-making. I am not sure how academics can be credited for such work - after all, this is proper research (or is it not?) and should count towards your research output evaluation. I wonder...

new maps, new ways of thinking

A very interesting aspect of yesterday's discussion was the relation between map-making and understanding space (outside of maps). GIS, as was mentioned several times, should not only be seen as a way of enhancing existing 2D maps, but rather as a chance to create new 3D maps that can be interactive, content-rich and perhaps open to a Web 2.0 community process. We assumed that this will eventually lead to new ways of understanding space - a new perception of space - since the media and ideas around us change the way we perceive (or construct) reality. This reminded me of the 18th century, when romantic nature freaks in Britain walked around the countryside with large frames, looking at the newly discovered 'beautiful' nature through these frames as if the 'scenes' were pictures - admittedly not a very good example of how maps influence perception (the changes in map-making in the 16th century, something Leif might touch upon in his talk today, would be more suitable, I guess).

Time and space

Yesterday we also discussed the relationship between time, space and action. Vince Gaffney stressed that time does not exist or is meaningless without space and action. Action in space can be understood as a way to model time.

Agent Based Modelling (ABM)

ABM has been mentioned in most of today's presentations, for instance in the context of programming agents and setting them free in a digital environment to find out where they would settle, and then looking for patterns in their movements. The idea behind this is that even a very simple agent (basically a small computer programme that follows simple rules such as 'settle near water', 'don't settle in marshland'), or, to be more precise, a group of these agents, can be used to understand very complicated processes.
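To make the idea concrete, here is a minimal sketch of such a rule-following settlement agent. The grid, the attribute names and the two rules are invented for illustration, standing in for the much richer environmental data an actual archaeological model would use.

[code]
# A toy agent-based settlement model: agents wander a random landscape and
# settle only where simple rules are satisfied. Purely illustrative.
import random

GRID = 20
landscape = {
    (x, y): {"near_water": random.random() < 0.3,
             "marsh": random.random() < 0.2}
    for x in range(GRID) for y in range(GRID)
}

def acceptable(cell):
    """One agent's rule set: settle near water, never in marshland."""
    return cell["near_water"] and not cell["marsh"]

def settle(n_agents=100, max_moves=500):
    """Drop agents at random and let each random-walk until it finds an acceptable cell."""
    settlements = []
    for _ in range(n_agents):
        pos = random.choice(list(landscape))
        for _ in range(max_moves):
            if acceptable(landscape[pos]):
                settlements.append(pos)
                break
            dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            pos = ((pos[0] + dx) % GRID, (pos[1] + dy) % GRID)  # wrap at the edges
    return settlements

print(len(settle()), "agents settled; the interesting part is where they cluster")
[/code]

The individual rules are trivial; the patterns only emerge when many such agents run together, which is exactly the point made above.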

I now wonder whether archaeologists look to computer games for suggestions about programming agents - some games use quite sophisticated programming, with many hundreds or even thousands of agents performing tasks such as settling, fighting, collecting food and so on. It seems to me that the datasets used by archaeologists are more sophisticated than those of the average computer game, but maybe there is something to be learned about programming agents.

Computer games are certainly of use to archaeologists: Vince Gaffney mentioned earlier that some of his projects use the 3D engines of computer games to allow users to move through the environment (re)created by the project.

Agent Based Modelling (ABM)

We are currently discussing the question of to what extent it is possible to build individual behaviour and cultural issues into the model (for instance, Icelandic settlers refusing to eat seals after their cows die in a famine - food is still available, but is not thought of as edible, for cultural reasons). Tony Wilkinson stressed that it is indeed possible to build such subroutines into ABM, and that this has been done in the context of one of his projects.
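As a purely hypothetical illustration (not Wilkinson's actual model), such a cultural constraint could be as simple as a filter subroutine in the agent's decision step; the culture name and the taboo set below are invented.

[code]
# Hypothetical sketch: a cultural taboo coded as a subroutine in an agent's decision step.
CULTURAL_TABOOS = {"icelandic_settler": {"seal"}}  # invented for illustration

def edible_for(culture, available_food):
    """Filter what is physically available through what is culturally acceptable."""
    taboo = CULTURAL_TABOOS.get(culture, set())
    return [food for food in available_food if food not in taboo]

# Even when seal is available, the agent will not eat it and may starve:
print(edible_for("icelandic_settler", ["seal", "fish"]))  # -> ['fish']
print(edible_for("icelandic_settler", ["seal"]))          # -> []
[/code]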

Agent Based Modelling (ABM) and gaming

ABM, as was discussed yesterday, is not something we engage with in order to recreate the past, but rather to learn what processes can lead to a certain result, how likely an outcome is, and what sort of things could happen. It was also mentioned that ABM can be a chance to learn something about the cultural logic of a society (how to model culture into ABM was discussed earlier): ABM can give you a range of possible outcomes of a process; looking at the one that actually happened, and trying to understand why it happened, may then tell you something about the cultural logic that caused it.

In the discussion at the end of the day it was also stressed that archaeologists do indeed engage with the games industry and try to make its products (especially 3D gaming engines) useful for their research - where possible. This was partly discussed in the context of war gaming and the influence of military thinking and of technology inspired by war (serious war gaming and gaming in general were named as external ICT drivers).

a question from the lecture

There is a question from the lecture which we did not really follow up on:

[quote=paregorios]To what degree does the proliferation of both free-commercial and open source map/visualization tools and formats (and their integration with web-wide search) affect how we should be thinking about:

* the invention of new map/visualization interfaces for our repositories and projects (should we bother?)
* the surfacing of feeds and formats recognized by these 3rd party services as pointers to or serializations of our data (shouldn't we?)[/quote]

Re: a question from the lecture

In reply to paregorios:

Although we should avoid reinventing the wheel as much as possible, I think it's important not to rely on proprietary tools and formats (even if they are currently free), especially if there is any danger that they can be taken away from us. (Formats maybe less so--KML is a standard, and a form of it will continue to be open and available even if Google decide to lock down and "close" a later version of it--but relying on the mapping drivers of Google Maps/Earth is not so future-proof.)

On the other hand, if the tools we are talking about are Open Source, then so long as they have some contribution to make to our work it behooves us--does it not?--to use these tools, develop them where necessary, and then give our improvements back to the community that will use them, helping to ensure that the standards we are using remain current.

Once we have a robust, open data standard or schema in place, then of course the best way to surface our data in ways that are as useful as possible to the rest of the world is to create stylesheets that generate representations of this data in Atom or other serialised formats.
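As a rough sketch of what such surfacing might look like (done here in code rather than with an actual stylesheet, and with invented feed metadata and record fields), a simple internal record can be reduced to an Atom entry carrying a GeoRSS point:

[code]
# Hypothetical sketch: surfacing simple records as an Atom feed with GeoRSS points.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
GEORSS = "http://www.georss.org/georss"
ET.register_namespace("", ATOM)
ET.register_namespace("georss", GEORSS)

def records_to_atom(records):
    feed = ET.Element("{%s}feed" % ATOM)
    ET.SubElement(feed, "{%s}title" % ATOM).text = "Workshop gazetteer (sample)"
    for rec in records:
        entry = ET.SubElement(feed, "{%s}entry" % ATOM)
        ET.SubElement(entry, "{%s}title" % ATOM).text = rec["name"]
        ET.SubElement(entry, "{%s}id" % ATOM).text = rec["uri"]
        # the GeoRSS 'simple' convention is "latitude longitude"
        ET.SubElement(entry, "{%s}point" % GEORSS).text = "%s %s" % (rec["lat"], rec["lon"])
    return ET.tostring(feed, encoding="unicode")

print(records_to_atom([{"name": "Eboracum", "uri": "http://example.org/place/42",
                        "lat": 53.9590, "lon": -1.0803}]))
[/code]

A stylesheet-based approach would produce the same sort of output directly from the richer XML source, leaving the native data untouched.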
