Electric Archaeology: Digital Media for Learning and Research

it's not just digital, it's electric!

3d models from archival film/video footage

Sat, 20/01/2018 - 23:06

Yesterday, I helped Andrew troubleshoot some workflow regarding VR-to-real-world photogrammetry. You should go read his post. As I was doing that, I was thinking that the same flow would work for archival video (which I’ve done with VisualSFM, but not Regard3d, so challenge accepted! By the way, the VSFM workflow was Ryan’s, for making models from drone footage). So I grabbed some aerial photography of Pompeii from the WWII era (ish) and gave it a spin. It worked, but it was an ugly ‘beta’ sort of worked, so I left my machine running over the weekend and I’ll know by Monday whether or not the result is any better. I wrote up the workflow, thinking it’d be useful for my class, and deposited it with Humanities Commons. I’ve pasted it below as well. Lemme know if it works for you, or if I’ve missed something.

~o0o~

It is possible to make 3d models from archival film/video footage, although the quality of the resulting model may require a significant amount of sculpting work afterwards to achieve a desirable effect. It depends, really, on why one wants to build a 3d model in the first place. Archaeologists, for instance, might want to work with a 3d rendering of a building or site now lost.

The workflow
The workflow has a number of steps:

1. obtaining the video (if it is on, e.g., YouTube)
2. slicing the video into still images
3. adding camera metadata to the images
4. computing matched points across the images
5. triangulation from the matched points
6. surface reconstruction

Necessary software
nb these are all open-source or free-to-use programs

1. Youtube-dl https://rg3.github.io/youtube-dl/
2. ffmpeg https://www.ffmpeg.org/
3. exiftool https://www.sno.phy.queensu.ca/~phil/exiftool/
4. regard3d http://www.regard3d.org/
5. meshlab (for post-processing) http://www.meshlab.net/

Step One Downloading from Youtube

Archival or interesting footage of all kinds may be found on YouTube and other video streaming services. Youtube-dl is a sophisticated program for downloading this footage (and other associated metadata) from YouTube and some other sites. Find a video of interest. Note the URL. Then:

youtube-dl https://www.youtube.com/watch?v=nSB2VeTeXXg

Try to find video that does not have watermarks (the example above has a watermark and probably is not the best source video one could use). Look for videos composed of long cuts that sweep smoothly around the site/object/target of interest. You may wish to note the timing of interesting shots, as you can download or clip the video to those passages (see the youtube-dl documentation).
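One approach, sketched here with placeholder filenames and timings, is to download the whole video and then trim it to the interesting passage with ffmpeg (which we need for the next step anyway). This keeps thirty seconds starting at the two-minute mark; note that stream-copying cuts at the nearest keyframe, which is fine for our purposes:

ffmpeg -ss 00:02:00 -i "downloaded-film.mp4" -t 30 -c copy "clip.mp4"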

Step Two Slicing the Video into Stills

ffmpeg is a powerful package for manipulating video and audio. We use it to cut the video into slices. Consult the full documentation to work out how to slice at, say, every 5 or 10 seconds (whatever is appropriate to your video). Make a new directory in the folder where you’ve downloaded the video with mkdir frames. Then the command below slices at every second, numbers the slices, and puts them into the frames subdirectory:

ffmpeg -i "downloaded-film.mp4" -r 1 frames/images-%04d.jpeg
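If you want a frame every five seconds instead, ffmpeg’s fps filter should do it (1/5 meaning one frame per five seconds):

ffmpeg -i "downloaded-film.mp4" -vf fps=1/5 frames/images-%04d.jpeg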

Windows users would call ffmpeg with ffmpeg.exe (if they haven’t put it into their system’s path variable).

Step Three Adding Camera Metadata

We will be using Regard3d to stitch the images together. Regard3d needs to know the camera make, model, focal length (mm), and sensor width (mm). We are going to fudge this information with our best approximation. ‘Sensor width’ is the width of the actual piece of hardware in a digital camera upon which light falls. You’ll have to do some searching to work out the best approximation for this measurement for the likely camera used to make the video you’re interested in.

Find the camera database that Regard3d uses (see the documentation for Regard3d for the location on your system). It is a csv file. Open it with a text editor (e.g. Sublime Text or Atom, not Excel, because Excel will introduce errors). Add the make, model, and sensor width information following this pattern:

make;model;width-in-mm
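For example, if your footage came from an iPhone 6, whose sensor is roughly 4.8 mm wide (my approximation; check the specs for the camera you are guessing at), the entry would be:

Apple;iPhone 6;4.8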

Regard3d reads the exif image metadata to work out which camera settings to use. Focal length is read from the exif metadata as well. We assign these like so, from the command line in your frames folder:

exiftool -FocalLength="3.97" *.jpeg
exiftool -Make="CameraMake" *.jpeg
exiftool -Model="CameraModel" *.jpeg

Note that the make and model must absolutely match what you put into the camera database csv file – uppercase, lowercase, etc. matters. Also, Windows users might have to rename the downloaded exiftool file to exiftool.exe and put it into their path variable (alternatively, rename it and then put it in the frames folder so that when you type the command, your system can find it easily).
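To double-check that the tags were written, read them back from any one frame (the filename here is whatever ffmpeg produced in step two):

exiftool -Make -Model -FocalLength images-0001.jpeg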

Step Four Computing Matches

Open Regard3d and start a new project. Add a photoset by selecting your frames directory. Note that when you used exiftool, the original images were kept as copies in the folder with a new name; don’t select those originals. As the images load up, you will see whether or not your metadata is being read correctly. If you get NaN under make, model, focal length, or sensor width, revisit step three carefully. Click OK to use the images.

Click on compute matches. Slide the keypoint density sliders (two sliders) all the way to ‘ultra’. You can try with just the default values at first, which is faster, but using ‘ultra’ means we get as many data points as possible, which can be necessary given our source images.

This might take some time. When it is finished, proceed through the next steps as Regard3d presents them to you (the options in the bottom-left panel of the program are context-specific; if you want to revisit a previous step and try different settings, select the results from that step in the inspector panel at top left to redo it).

The final procedure in model generation is to compute the surfaces. When you click on the ‘surface’ button (having just completed the ‘densification’ step), make sure to tick the ‘texture’ radio button. When this step is complete, you can hit the ‘export’ button. The model will be in your project folder – .obj, .stl, and .png. To share the model on something like Sketchfab.com, zip these three files into a single zip archive; on Sketchfab, you upload that zip.
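On macOS or Linux, that zip can be made from the command line inside the project folder (the filenames are placeholders for whatever Regard3d exported):

zip model-for-sketchfab.zip model.obj model.stl texture.png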

Step Five Clean Up

Double-click on the .obj file in your project folder. Meshlab will open and display your model. The exact tools you might wish to use to enhance or clean up your model depend very much on how your model turned out. At the very least, you’ll use the vertex-select tool (which allows you to draw a box over the offending part) and the vertex-delete tool. Search the web for help and examples for the effective use of Meshlab.

Regard3d

Wed, 17/01/2018 - 00:33

I’m trying out Regard3d, an open-source photogrammetry tool. A couple of items, memo-to-self style of thing:

    • its database does not have cellphone cameras in it. Had to google around to find the details on my particular phone
    • its database is this: https://github.com/openMVG/CameraSensorSizeDatabase 
    • just had to find where it was on my machine, and then make an entry for my phone. I’m still not sure whether I got the correct ‘width’ dimension – running with this. 
    • nb don’t do this with excel – excel does weird things to csv files, including hidden characters and so on which will cause Regard to not recognize your new database entry. Use Sublime Text or another text editor to make any changes. You can double click on an image in the imageset list inside Regard and add the relevant info one pic at a time, but this didn’t work for me.
    • I took the images with Scann3d, which made a great model out of them. But its pricing model doesn’t let me get the model out. So, found the folder on the phone with the images, uploaded to google drive, then downloaded. (Another nice thing about Scann3d is when you’re taking pictures, it has an on-screen red-dot/green-dot thingy that lets you know when you’re getting good overlap.)
    • Once I had the images on my machine, I needed to add exif metadata re focal length. Downloaded and installed exiftool. Command: exiftool -FocalLength="3.97" *.jpg
    • In Regard3d, loaded the picture set in.
    • The next stages were a bit finicky (tutorial) – just clicking the obvious button would give an error, but if I had one of the image files selected in the dialogue box, all would work.
    • here’s a shot of the process in…erm… process…

    • Console would shout ‘error! error!’ from time to time, yet all continued to work…

I’m pretty sure I saw an ‘export to meshlab’ button go by at some point… but at any rate, at the end of the process I have a model in .ply and .obj!  (ah, found it: it’s one of the options when you’re ready to create the surface). All in all, a nice piece of software.

Medical Teaching Model (Syphilis) by electricarchaeo on Sketchfab

 

Markov Music; or, the Botnik Autogenerator Reel

Fri, 22/12/2017 - 02:18

You must’ve seen the Harry Potter chapter written with markov chains / predictive text (not AI, I should point out). I went to the site and thought, I wonder what this could do with music written in the text-based ABC notation format. So, grabbing the same source files that gave us Mancis the Poet (where I used an RNN to generate the complete files), I loaded Botnik with Cape Breton fiddle tunes. Then I generated a text, clicking madly in the middle of the interface. The result:

A ab|ca fe|dfba f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a|
Da|bf{g}fe b a2 f2|d a ab|ca fe|dfba f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a| f2 ed|ceac b2a|

Which, when you add some made-up metadata like so to the front:

X:1
T:Botnik Autogenerator Reel
R:reel
C:algorithmic
O:21st century Canadian
B:
N:
Z:
M:C|
L:1/8
Q:108
K:D

…becomes a file that can be turned into MIDI, thence mp3. You can generate your own fiddle line with this corpus: http://botnik.org/apps/writer/?source=bbbedbbada64da161a2055387eb50dae.
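For the curious, here is one way that conversion might go, assuming the abcMIDI and TiMidity++ packages are installed (the filenames are mine):

abc2midi reel.abc -o reel.mid
timidity -Ow -o reel.wav reel.mid
ffmpeg -i reel.wav reel.mp3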

And here you go: the music notated above as MIDI piano:

Let’s Imagine

Tue, 12/12/2017 - 18:33

There’s a game I play, sometimes, when I wander around campus. You’ve probably played it too. My version of ‘let’s imagine’ always seems to end up focussed on the physical fabric of the university. How I’d make it look. What buildings I’d build. Articulations with the larger city. So let’s imagine.

Carleton has an enormous hole in its campus, overlooking one of the prettiest locations in the city, Dow’s Lake. Here’s the map:

See all that empty grey area above the words ‘Carleton University’? Gravelled parking lot, snow dump in the winter. And that’s it. Here’s an aerial photo:

I’d love to fill that with a ginormous building that fronts onto Dow’s Lake. This building would be a kind of studio space for every national, regional, and local museum in the Ottawa area. Every single one of these institutions can show but a fraction of their collection. So, I’d love to build a space where materials can be rotated in, so that they are available for research, teaching, and the public. A giant living lab for our national collections. Since the collections span every department and faculty we have, I see no reason why this couldn’t break down silos and barriers across disciplines, in our teaching. I’d have a big ol’ international design competition, make the thing a jewel, too.

Apparently, our campus master plan imagines this as a ‘North Campus’, filled with lots of different buildings. Sounds good. Can we make one of them be the Carleton Musea?

…while I’m at it, I’d like a pony, too.

(featured image, Michelle Chiu, Disney Concert Hall, unsplash.com)

Procedural History

Fri, 08/12/2017 - 19:09

I see this is my third post with this title. Ah well. Just playing with a script I found here.

shawngraham$ python2 historygen.py
In the beginning there was “Baubrugrend Free State”, “Dominion of Clioriwen”
In that era, the people could bear it no longer, and so these ones rebelled from ‘Baubrugrend Free State’ ==> ‘Province of Vrevrela’
In that era, the people could bear it no longer, and so these ones rebelled from ‘Province of Vrevrela’ ==> ‘Free People’s Republic of Craepai’
It is a terrible thing when brothers fight. Thus ‘Free People’s Republic of Craepai’ became “Eiwerela”, “Broteuvallia”
It is a terrible thing when brothers fight. Thus ‘Dominion of Clioriwen’ became “Duchy of Corica”, “Orican Republic”
The thirst for new lands, new glory, and the desire to distract the people, led to new conquests ‘Duchy of Corica’ conquered ‘Eiwerela’
The thirst for new lands, new glory, and the desire to distract the people, led to new conquests ‘Duchy of Corica’ conquered ‘Broteuvallia’
The thirst for new lands, new glory, and the desire to distract the people, led to new conquests ‘Duchy of Corica’ conquered ‘Orican Republic’
In that era, the people could bear it no longer, and so these ones rebelled from ‘Duchy of Corica’ ==> ‘United States of Heukan’
In that era, the people could bear it no longer, and so these ones rebelled from ‘United States of Heukan’ ==> ‘Kingdom of Amoth’
END “Kingdom of Amoth”

The script can also make a nice diagram; now to get it to write the history AND the diagram at the same time.

The directionality of the arrows is a bit confusing; you almost have to read it backwards. However, since it is just a .dot file, I think I can probably load it into something like yEd and make a prettier timeline.
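In the meantime, Graphviz can render the .dot file directly for a quick look (the filename is a guess at what the script writes out):

dot -Tpng history.dot -o history.png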

Update: I’ve added Tracery to the script and made the output a bit more lyrical:

> shawngraham$ python2 historygen.py

Gather by, young ones, and let me tell you of our nations and peoples.

In the beginning there was “Duchy of Corica”

These people shared a single peninsula, shielded from the rest of the world by tall mountains.

Flooding ruined the crops; the famine weakened them all and so, ‘Duchy of Corica’ dissolved in fragments, eventually becoming “Province of Eabloris” and “Voches” and “Uamafai “

A few years later, the strength of the people could bear it no longer, and they rose up in violent revolution. The old ‘Province of Eabloris’ was no more; a new dawn broke on ‘Heawoth’.

As it came to pass, the Queen gave up power and fled into exile. The old ‘Heawoth’ was no more; a new dawn broke on ‘Iroa’

Flooding ruined the crops; the famine weakened them all and so,“Uamafai ” and “Voches” became ‘Eiwerela’.

As it came to pass, the Satrap gave up power and fled into exile. The old ‘Eiwerela’ was no more; a new dawn broke on ‘Oyune’

Low cunning and high treachery divided them and so, ‘Oyune’ dissolved in fragments, eventually becoming “Broteuvallia” and “Islands of Hekla” and “Kingdom of Abroth”.

Low cunning and high treachery divided them and so, ‘Islands of Hekla’ dissolved in fragments, eventually becoming “Satrapy of Yaislaxuin” and “Dominion of Clioriwen”.

The clouds grew dark, and hunger stalked the land, so sickness weakened them all and so, “Dominion of Clioriwen” and “Satrapy of Yaislaxuin” became ‘Kingdom of Amoth’.

The thirst for new lands, new glory, and the desire to distract the people, led to new conquests. ‘Broteuvallia’ conquered ‘Kingdom of Amoth’

A few years later, the Queen gave up power and fled into exile. The old ‘Iroa’ was no more; a new dawn broke on ‘Province of Vrevrela’

Standing proud upon the ruins there are only now “Broteuvallia”and “Kingdom of Abroth”and “Province of Vrevrela”.

(featured image: Chester Alvarez, Unsplash)

Winter 2018 Teaching Preview

Mon, 06/11/2017 - 21:45

HIST5702

HIST3812

Procedural History

Sun, 05/11/2017 - 23:09

Procjam 2017 – ‘make something that makes something!’ is on right now.  My interest in procedural generation at the moment concerns the way we think about something when we’re making something else that makes that something. (Still with me?)

I’ve dabbled in sound, and in text, and in bots; now I’m thinking about procedural history. In a way, that was what I was doing when I got into this whole DH scene way back when – agent-based modeling. But there’s something about the way something like Dwarf Fortress writes history – and people in turn flesh those histories out (this was the subject of an undergraduate honours thesis done for me by a student who, to my frustration, has never posted the work online).

So I’m coming back to it. This is also partly because I’m interested in how ideas about how cities work are codified in city sim games that then get draped in the trappings of Antiquity (I am writing a piece on this at the moment). Today I came across a very cool project from the procjam community, by David Masad (who, I note, is using ABM for his dissertation work), called ‘WorldBuilding’. It takes the algorithm for fantasy maps that you may have seen at play in the twitter account ‘uncharted atlas’, generates a world by simulating landscape and erosion, introduces an ABM of nomads who settle down into cities, interact via trade routes, and pay tribute or go to war with one another.

All within one python notebook.

Here is the dusty plateau, fringed by a verdant coast, where one or two valleys give access to the interior. Nomads arrive, and in the course of time, settlements and routes emerge:

And we can begin to model their interactions.  Masad is using Axelrod’s Tribute Model. He then dips into the logs and is able to generate the annals of this world:

“From 0 to 49, Ceotpe saw slow growth. In this period it received tributes from Oqlou.
From 49 to 50, Ceotpe saw slow decline. In 50 it, and joined its allies in one battle.
From 50 to 87, Ceotpe saw slow growth. In this period it received tributes from Oqlou.
From 87 to 88, Ceotpe saw slow decline. In 88 it, and joined its allies in one battle.
From 88 to 90, Ceotpe saw slow decline. In this period it, and joined its allies in one battle.
From 90 to 92, Ceotpe saw rapid decline. In this period it fought a war against Oqlou.
From 92 to 99, Ceotpe saw slow growth.”

The code also produces a line chart of a city’s fortunes – for instance, the city of Tigei had a much different history:

The logs spell out what was happening:

” 43, ‘Receive tribute’, ‘Oqlou’
50, ‘Joined war against’, ‘Itykca’
62, ‘Receive tribute’, ‘Oqlou’
73, ‘Receive tribute’, ‘Oqlou’
88, ‘Joined war against’, ‘Itzyos’
90, ‘Joined war against’, ‘Itzyos’
92, ‘Led war against’, ‘Oqlou'”

I could imagine, say, in a senior undergrad seminar where I had the time and commitment from students, using this code as the kernel for a deep exploration of how history intersects with games and simulation. How do you get from an annal to a history? The students would work at the creative tension between what a game shows, what a game merely suggests, and what the player brings to that gap.

Something like that. It’s dark, it’s November, the time changed, I’m tired, not exactly coherent. But there’s something extremely cool here (not least the use of a Jupyter notebook to illuminate the code). My gut, which I consult on such things, thinks this stuff is important.

featured image Peter Lewicki, Unsplash

Old Bones Daily – HeritageJam 2017 Entry

Tue, 24/10/2017 - 15:08

Update: We won!

I’m pleased to take part in the 2017 HeritageJam; Kate and I present Old Bones Daily:

  • Visit online at http://shawngraham.github.io/hj2017
  • Make your browser full-screen for best effect
  • Fully responsive, so it can be read equally well on mobile
  • Reload the page for the latest headlines and stories and photos

Paradata

The paradata for the project is printed as the fifth column on the newspaper’s front page.

This ‘newspaper’ that you are looking at pays homage to that older newspaper culture of reprinting, while at the same time commenting on modern ‘news media’, by procedurally generating texts from the ‘bones’ of a generative grammar. Each time this page is reloaded, the news is generated anew from a Tracery grammar. This grammar is derived from the study of 19th century newspapers in Western Quebec (The Equity) and northern New York State. It selects passages from these papers where the word ‘bone’ is present or implied, and recombines them in sometimes surprising or revealing ways. There are stories of injury, and columns with helpful advice. Sometimes a humorous anecdote is recounted; sometimes popular accounts of the latest academic research.

And sometimes, out and out fraud.

Some of the passages are presented verbatim, using the language and reflecting the mores of the age (with no generative-grammar intervention), thus demonstrating another perspective on ‘bones’ and whose bones are accorded human dignity. The passages were collected in the first place by ingesting OCR’d papers into AntConc and generating a concordance and keywords-in-context file.

Tracery was designed and built by Kate Compton @galaxykate. This newspaper layout was designed and shared by user silkine on codepen.io. Images are drawn from the British Library’s Flickr stream, where they have been tagged with the word ‘bone’ – thus, another algorithmic expression of the bones underlying the web.

Source Files

Please see the source repository at http://github.com/shawngraham/hj2017

To replicate this newspaper for your own amusement, consult the file newsgrammar.js and insert new values in the keys. To alter the layout and placement of these elements on the newspaper, create a new var and associated div in the js/app.js file. Place the div in the appropriate location in the main index.html file. Extra css or html for a particular chunk of text should be wrapped inside the values in the newsgrammar.js file. For a tutorial on how Tracery functions, see Shawn Graham’s tutorial at The Programming Historian. Tracery can power webpages, games, and twitterbots. What would be the effect of ‘Old Bones Daily’ if it were translated into the new news medium of Twitter?
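As a sketch of the sort of structure involved (the rule names and values here are invented for illustration; the real keys live in newsgrammar.js), a Tracery grammar is just a set of keys mapped to arrays of possible expansions, where #symbol# marks a spot to be filled in recursively:

"origin": ["#headline#. #story#"],
"headline": ["DREADFUL ACCIDENT AT #place#", "OLD BONES UNEARTHED AT #place#"],
"place": ["THE MILL", "THE QUARRY", "SHAWVILLE"]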

Sources for Newspaper articles:

The Shawville Equity (consult this digital finding aid project by Carleton undergraduate student Jeff Blackadar)

Syracuse NY Daily Journal via ‘Old Fulton NY Post Cards’ collection of digitized newspapers at http://www.fultonhistory.com/

Licensing

The original CSS for the newspaper layout is MIT Licensed by user Silkine on codepen.io

The Tracery generative grammar is released by Kate Compton under Apache License Version 2.0, January 2004

Images obtained via the British Library’s Flickr stream are all public domain works

We assert that our use of the original source newspapers is fair use

We release the code in the source repository, such parts of it as are uniquely ours, into the wild under CC BY.

A Twitter Hiatus

Fri, 20/10/2017 - 16:02

I'm taking a break from this place. It sucks. If you want me, email.

— Shawn Graham (@electricarchaeo) October 13, 2017

For some time now, Twitter just hasn’t been fun. Don’t get me wrong: if you want to know what’s going on in digital archaeology (or whatever field) and you want to know what your peers are up to, it’s hard to beat.

But it’s been making me sick. It’s been making me anxious. It’s been eating far too much of my time. The world’s shitty enough as it is, without giving the shittiness a direct pipeline into my brain.

There’s no need to enumerate all the bad ways Twitter serves to make the world a worse place, including its use of addictive design principles. It’s baked in, as they say: the trolls, the nazis, and Trump will never be booted off, because they generate clicks, they generate attention. Women and people of colour, on the other hand…

So I’m taking a break. I’m trying. I’ve installed a site blocker on all my devices. Recognizing that I’ve been such a heavy Twitter user for so long, and that it’s the principal vector by which some people collaborate with me, I have things set up so that I can check my DMs periodically. I did retweet some things related to singer-poet Gord Downie’s death, which, as a Canadian of a certain age, well, you just have to do.

Other than that, I’ve been off Twitter for a week now. And dammit if I don’t feel more productive already…

If you need me, email me.

Featured image: Vicko Mozara, Unsplash

 

Call for Collaborators: The Open Digital Archaeology Textbook Environment (ODATE)

Tue, 17/10/2017 - 15:06

The Open Digital Archaeology Textbook Environment is a collaborative writing project led by myself, Neha Gupta, Michael Carter, and Beth Compton. (See earlier posts on this project here).  We recognize that this is a pretty big topic to tackle. We would like to invite friends and allies to become co-authors with us. Contact us by Jan 31st; see below.

Here is the current live draft of the textbook. It is, like all live-written openly accessible texts, a thing in the process of becoming, replete with warts, errors, clunky phrasing, and odd memos-to-self. I’m always quietly terrified to share work in progress, but I firmly believe in both the pedagogical and collegial value of such endeavours. While our progress has been a bit slower than one might’ve liked, here is where we currently stand:

  1. We’ve got the framework set up to allow open review and collaboration via the Hypothes.is web annotation framework and the use of Github and gh-pages to serve up the book
  2. The book is written in the bookdown framework with R Markdown and so can have actionable code within it, should the need arise
  3. This also has the happy effect of making collaboration open and transparent (although not necessarily easy)
  4. The DHBox computational environment has been set up and is running on Carleton’s servers. It’s currently behind a firewall, but that’ll be changing at some point during this term (you can road-test things on DHBox)
  5. We are customizing it to add QGIS and VSFM and some other bits and bobs that’d be useful for archaeologists. Suggestions welcome
  6. We ran a test of the DHBox this past summer with 60 students. My gut feeling is that not only did this make teaching easier and keep all the students on the same page, but the students also came away with a better ability to roll with whatever their own computers threw at them.
  7. Of six projected chapters, chapter one is in pretty good – though rough – shape

So, while the majority of this book is being written by Graham, Gupta, Carter and Compton, we know that we are leaving a great deal of material undiscussed. We would be delighted to consider additions to ODATE if you have particular expertise that you would like to share. As you can see, many sections in this work have yet to be written, and so we would be happy to consider contributions aimed there as well. Keep in mind that we are writing for an introductory audience (who may or may not have foundational digital literacy skills) and that we are writing for a Linux-based environment. Whether you are an academic, a professional archaeologist, a graduate student, or a friend of archaeology more generally, we’d be delighted to hear from you.

Please write to Shawn at shawn dot graham at carleton dot ca by January 31st 2018 to discuss your idea and how it might fit into the overall arc of ODATE. The primary authors will discuss whether or not to invite a full draft. A full draft will need to be submitted by March 15th 2018. We will then offer feedback. The piece will go up on this draft site by the end of that month, whereupon it will enjoy the same open review as the other parts. Accepted contributors will be listed as full authors, e.g. ‘Graham, Gupta, Carter, Compton, YOUR NAME, 2018 The Open Digital Archaeology Textbook Environment, eCampusOntario…..’

For help on how to fork, edit, make pull requests and so on, please see this repo.
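A minimal sketch of that workflow from the command line, assuming you have already forked the repo on GitHub (the URL and branch name below are placeholders for your own):

git clone https://github.com/YOUR-USERNAME/YOUR-FORK.git
cd YOUR-FORK
git checkout -b my-contribution
git add .
git commit -m "add draft section"
git push origin my-contribution

Then open a pull request from that branch via the GitHub web interface.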

 

Featured Image: “My Life Through a Lens”, bamagal, Unsplash