Scholars' Lab

Works in Progress

Crafting Our Charter – Praxis Program 2017-2018

Tue, 19/09/2017 - 16:28

As a historian, when I think of charters, the first things I think of are royal charters.

The first result when you Google charter, on the other hand, is Charter Telecommunications Company because of course.

But as members of the new Praxis Fellowship cohort, my fellow fellows and I tried to chart (I’m sorry) a very different path. The result of our work, The Praxis Charter, 2017-2018, is the first thing we ever created together.

http://praxis.scholarslab.org/charter/charter-2017-2018/

Transparency is one of our core values, so I am going to use this post to reveal the process by which we made this document.

Our charter’s first draft was written in a jam session in a Scholars’ Lab meeting room, and the fact that we are all teachers was readily evident. We privately brainstormed, we paired and shared those ideas, and then we had a class discussion with Christian at the technological helm. I often think of grad school as a lesson in liminality.

“Piled Higher and Deeper” by Jorge Cham www.phdcomics.com

That was on full display as we drew on the techniques we use to facilitate classroom discussions to jumpstart our own collaborative work. The liminality of grad school isn’t always to its credit, but in this case, the results were lovely. As a teacher and a historian of education, I spend a lot of time thinking about pedagogy. The pedagogy modeled here made my heart happy! We melded the skill sets of both teachers and students pretty seamlessly to create a productive partnership.

Our conversation always seemed to come back to values. Values are, I think, the core of this document. Of course, for every positive value, there is an equal and opposite disvalue. The opposite of humility is egotism. The opposite of flexibility is rigidity. The opposite of transparency is obfuscation. I think this connects to a comment my fellow Torie made, that writing this charter was almost cathartic, because we could list every problem we had encountered with group work and essentially say: not that.

This, of course, points to the idea of conflict. As our joyful leader Brandon Walsh noted, past Praxis cohorts have tended to avoid naming conflict in their charters in the hopes that their silence would prevent it from ever rearing its ugly head. Think of conflict as the he-who-must-not-be-named of group work, if you will.

Ignoring conflict didn’t really work out for the Ministry of Magic though, and I doubt that the academy fares much better. My hope is that by setting out clear goals, values, and strategies for coping with conflict, we will enable our future selves to handle disagreements with aplomb and grow from them, rather than shrink from them.

Perhaps the most radical value embodied in our charter is our commitment to “the creation of a participatory democracy.” Participatory democracy is an idea coined by one of my favorite historical figures, civil rights and feminist icon Ella Baker. It embraces two ideas, “a decentralization of authoritative decision-making and a direct involvement of amateurs or non-elites in the political decision-making process.” Participatory democracy seems like the perfect fit for the Praxis Program, as we are all relative amateurs in the digital humanities, and we have been given the task of working and learning together. It also just seems to fit our collective personality. When we talked about past Praxis strategies, we decided we didn’t want to divide and conquer the tasks ahead as many previous cohorts had. We wanted to work on individual elements of our project together so that we could get the most out of our training. This would also allow us to commit to a truly shared vision.

In so many ways, a charter is a reminder of our deeply held values. We all carry around ideals of honesty and creativity, kindness and diversity, but writing out a charter makes you actually reflect on those values and why you hold them dear. Writing a charter allows you to reflect on what it is you like about collaborative work – and what it is you don’t – and then make a promise to yourself and to others to try to embody the best of what collaboration has to offer.

As for our radical experiment in participatory democracy, I can already hear people asking, is that practical? The true answer is: I don’t know. But Praxis seemed like just the place to try it out.

 

Welcome new DH Developer Zoe LeBlanc!

Mon, 18/09/2017 - 13:51

We are delighted to announce that Zoe LeBlanc has accepted our DH Developer position!

Zoe rose to the top of an extremely strong pool of over 60 applicants. A History ABD at Vanderbilt University, she focuses on post-colonialist movements and media in Cairo and other capitals. She brings solid technical experience in the areas of front-end web design, text and image analysis, and mapping and data visualization, with skills including React, Redux, Elixir, and Postgres, and fluency in French and Arabic.

Zoe is a rising junior DH scholar: she presented on network analysis at a well-attended panel at DH2017 in Montreal, and contributed a DH2017 poster on an archival research app she learned to build in response to challenges she encountered in her own archival work.

Her particular expertise in, and passion for, making technically difficult DH methods accessible and enjoyable to all complement the SLab’s emphasis on pedagogy and mentorship. She balances the SLab’s literature scholars and complements our history scholars, both diversifying our areas of work to include the Middle East and adding new expertise in archival research in countries whose archival practices and challenges differ from those of the U.S.

Come by the Lab once Zoe joins us in mid-October to say hi!

/etc/rc.local

Tue, 12/09/2017 - 18:32

Hello again, my fine digital-humanist friends! It’s a delight to be back in the Scholars’ Lab this year!

For those who don’t know me, my name is Christian Howard, and I am a PhD Candidate in English literature at UVA and one of the 2017-2018 Praxis Fellows. If you do happen to know me, you might also know that I was fortunate to work in the Makerspace of the Scholars’ Lab last year. In any case, I’m excited to combine the knowledge that I gained there working on hands-on, material projects with more refined computer skills and the even broader conceptualizations into which I expect our Praxis team will delve.

I’ve recently been rereading Johanna Drucker’s Graphesis: Visual Forms of Knowledge Production, and I want to reflect briefly on one of Drucker’s points, which I think is especially central to our Praxis team this year. Drucker brilliantly exposes “data” as constructs, constructs that cannot “pre-exist their parameterization.” As such, Drucker opts for the alternative term, “capta,” stating: “Data are capta, taken not given, constructed as an interpretation of the phenomenal world, not inherent in it” (128). Capta comes from the Latin verb capio, capere, which, translated literally, means “to capture, take, seize.” Yet in a more figurative sense, capere could also mean “to take in, understand.” It is partly because of this pun that I find Drucker’s redefinition particularly apt, for it is precisely the act of “capturing” information that facilitates our understanding of that information. In other words, every decision to define the parameters under which “data” will be taken is itself an interpretive strategy.

So what does this mean for humanists, and digital humanists in particular? I’ll quote Drucker again, this time at length:

“To expose the constructedness of data as capta a number of systematic changes have to be applied to the creation of graphical displays. That is the foundation and purpose of a humanistic approach to the qualitative display of graphical information. That last formulation should be read carefully, humanistic approach means that the premises are rooted in the recognition of the interpretive nature of knowledge, that this display itself is conceived to embody qualitative expressions, and that the information is understood as graphically constituted” (128-129).

It is this recognition – namely, of the fundamentally interpretive nature of data-as-capta – that distinguishes the humanities as a discipline.

As a Praxis cohort, we are still working to define the shape that our project will take; nonetheless, in developing our charter or mission statement, we have unanimously agreed that transparency is of the utmost importance to us. As such, we are committed not only to sharing the result of our collaboration with the public, but also to showing the processes through which our project develops, thereby enabling anyone to trace the interpretations and assumptions underlying our own work.

Well, that’s all the heavy lifting for today. For those of you who found this introductory post too lengthy, I’ve provided a handy summary below:

TL;DR: Born at a young age, I have pursued my education in order to justify my caffeine-dependency. Most recent greatest achievement? I’ve just beaten my all-time personal record of most consecutive days lived! Time to celebrate with some coffee and chocolate.

 

Drucker, Johanna. Graphesis: Visual Forms of Knowledge Production. Cambridge: Harvard University Press, 2014.

About my research, computers and Digital Humanities

Mon, 11/09/2017 - 02:24

In my inaugural post a few days ago, I introduced myself to the world in kind of an oblique way. Some people may wonder what I am studying or what my research interests are. This post is here to mend that omission. In broad brush strokes, I will talk about my dissertation and then about some general research interests that connect me to digital humanities. Coincidentally, a brief mention of a computer prototype from the late ’60s will echo, for the Praxis folks, our last meeting (Sept. 5, 2017) and its lesson on the history of computers.

My current project focuses on three contemporary French authors who are using new technologies to create and disseminate their work, as well as to connect with their audience. More specifically, I am looking at the ways in which new technologies expand the boundaries of literature to include practices often reserved for other artistic disciplines. I am also interested in the new online literary communities clustering around the websites of my corpus, on the margins of print-based, prize-driven French literature.

Having escaped the pages of the book, literature meets visual arts, sound, and performance in new poetic hybrids. The book remains a place to which textual content can return, but it is not the only option. Moreover, various acts of transcoding, made possible by digital technologies, have liberated writing from its exclusive attachment to text. Our contemporary “associated technical milieu” has made the creative gesture a practice available to anyone with a computer connected to the Internet.

“Rather than dissociating consumption from production, as did broadcast mass media (from phonography to global real-time television), today’s microtechnologies and the social networking practices they facilitate connect them: if you can use these technologies to consume, you can also use them to produce.”1

Interestingly enough, the gap between amateurs and professionals is narrowing, which revives Jean Dubuffet’s concept of “art brut” (i.e., art made by people without formal training). Under these circumstances, where everything is created by everybody, how does a contemporary author find her place? How does she define her space and the value of her work?

Kenneth Goldsmith dubbed these practices “uncreative writing” and traced their origin to some French avant-garde techniques such as those invented by the Situationists (détournement, psychogeographical drifts) and Oulipo.

“Oulipo, short for Ouvroir de littérature potentielle, or ‘Workshop for Potential Literature’ was founded fifty years ago, in 1960, by the writer Raymond Queneau and the mathematician François Le Lionnais with the purpose of exploring the possible uses of mathematics and formal modes of thought in the production of new literature. Oulipo sought to invent new kinds of rules for literary composition, and also to explore the use of now-forgotten forms in the literatures of the past. ”2

Georges Perec, one of the most popular authors of the Oulipo group (the star!), experimented with algorithmic writing, imitating the inner workings of a computer program, in The art and craft of approaching your head of department to submit a request for a raise,3 and with extreme self-imposed lipogrammatic constraints in A Void4 (composed exclusively of words that do not contain the letter “e”). He also wrote a very brief, enthusiastic text about computers. Published at a time when computers were still the size of a room, the text anticipated their everyday personal and social use. “Why not us?” Perec asks, claiming a programmable machine for creative purposes at home, a place already targeted by a horde of appliances: washing machines and toasters, coffee makers and vacuum cleaners, TV sets and food processors.

A dynamic medium for creative thought: the Dynabook

Around the same time, at Xerox PARC in Palo Alto, Alan Kay and Adele Goldberg were working on a prototype computer strikingly similar to today’s tablets. They called it the Dynabook (a portmanteau of “dynamic book”), and they imagined it as

“a self-contained knowledge manipulator in a portable package the size and shape of an ordinary notebook. Suppose it had enough power to outrace your senses of sight and hearing, enough capacity to store for later retrieval thousands of page-equivalents of reference materials, poems, letters, recipes, records, drawings, animations, musical scores, waveforms, dynamic simulations and anything else you would like to remember and change.” 5

The Dynabook, unlike any other computer of its generation, was not targeting the military or corporate business. It was designed “for kids of all ages,” people who would use it to enhance their learning and creativity. I want to emphasize the last words here: “to remember and change.” If the computer was to become personal, it was not only because of its capacity to store information, archiving one’s files and consequently exteriorizing and extending one’s memory, but also because it offered new techniques to process the stored information and, eventually, to create new information. Technology has always been about extending human capabilities.

“The human evolves by exteriorizing itself in tools, artifacts, language, and technical memory banks. Technology on this account is not something external and contingent, but rather an essential—indeed, the essential—dimension of the human.” 6

As a matter of fact, the idea of mechanical memory storage was not new. Vannevar Bush, in his well-known article “As We May Think,” published in 1945, had already introduced a mechanical memory (the memex) for individual use.7 Going beyond the scope of the Universal Turing Machine – a machine that could simulate other machines – Alan Kay and Adele Goldberg’s ambition was to create a Universal Media Machine, a machine that could simulate all other media forms, from books to images to films.

“For educators, the Dynabook could be a new world limited only by their imagination and ingenuity. They could use it to show complex historical inter-relationships in ways not possible with static linear books. Mathematics could become a living language in which children could cause exciting things to happen. Laboratory experiments and simulations too expensive or difficult to prepare could easily be demonstrated. The production of stylish prose and poetry could be greatly aided by being able to easily edit and file one’s own compositions.” 8

But in order to achieve this goal of becoming a “platform for all existing expressive artistic media,” the Dynabook had to exceed its function as a storage machine by adding a new structural level on top of the hardware, allowing easy interaction with the machine. Hence the GUI was born, with tools and icons that could help the user perform the same actions across applications without needing to know the underlying programmatic commands.

“Putting all mediums within a single computer environment does not necessarily erase all differences in what various mediums can represent and how they are perceived—but it does bring them closer to each other in a number of ways. Some of these new connections were already apparent to Kay and his colleagues; others became visible only decades later when the new logic of media set in place at PARC unfolded more fully; some may still not be visible to us today because they have not been given practical realization. One obvious example of such connections is the emergence of multimedia as a standard form of communication: web pages, PowerPoint presentations, multimedia artwork, mobile multimedia messages, media blogs, and other communication forms which combine multiple mediums. Another is the adoption of common interface conventions and tools which we use in working with different types of media regardless of their origin: for instance, a virtual camera, a magnifying lens, and of course the omnipresent copy, cut and paste commands. Yet another is the ability to map one media into another using appropriate software—images into sound, sound into images, quantitative data into a 3D shape or sound, etc.—used widely today in such areas as DJ/VJ/live cinema performances and information visualization. All in all, it is as though different media are actively trying to reach towards each other, exchanging properties and letting each other borrow their unique features. ” 9

The success of the personal computer was therefore due to its structural coupling with software, which has led – so far – to major shifts in the way we interact with media. From word processors to movie editors, software allowed the user to mix, juxtapose, cut and paste, alter, and eventually produce new media. Using the same machine to perform changes in the stored contents was an empowering new form of grammatization.

Return to kindergarten

I borrow the concept of grammatization from Bernard Stiegler. A former student of Derrida, Stiegler calls grammatization the process by which a flow is rendered as a series of discrete marks, grammés, that can form a code (a grammar) and can be endlessly reproduced in all sorts of combinations. Writing, for example, is the grammatization of speech, made possible by the invention of the letters (grammata) of the alphabet. Alphanumeric linear writing, up until personal computers came along, was the dominant form of recording, from facts (history) to thoughts and ideas (literature). So much so that learning to read and write was the main literacy focus of a certain humanistic tradition, from grade school to the academy.

In his seminal book Does Writing Have a Future?, Vilém Flusser speculates on the disruption of this tradition brought forth by computers and their new ways of writing through digital recording and digitization. Without discarding the value of alphanumeric writing, he embraces the possibility of new forms of writing that could lead to a progressive replacement of “the alphabet or Arabic numerals.”

What was once written can now be conveyed more effectively on tapes, records, films, videotapes, videodisks, or computer disks, and a great deal that could not be written until now can be noted down in these new codes. … Many people deny this … They have already learned to write, and they are too old to learn the new codes. We surround this … with an aura of grandeur and nobility.

Flusser foresees with great clarity what is yet to come when he publishes his book in 1987. What may seem a radical stance results from his position not to resist or reject the new technologies, but to discover their creative and pedagogical potential, altering and adding new avenues to the millennia-old practices of reading and writing. But the newness of these tools and their sometimes complex inner workings call for a return to kindergarten.

We have to go back to kindergarten. We have to get back to the level of those who have not yet learned to read and write. In this kindergarten, we will have to play infantile games with computers, plotters, and similar gadgets. We must use complex and refined apparatuses, the fruit of a thousand years of intellectual development, for childish purposes. It is a degradation to which we must submit. Young children who share the nursery with us will surpass us in the ease with which they handle the dumb and refined stuff. We try to conceal this reversal of the generation hierarchy [with] terminological gymnastics. While we’re about this boorish non-sense, we don’t call ourselves Luddite idiots but rather progressive computer artists.10

Isn’t this the “digital turn” that Flusser anticipated with his “infantile games with computers”? And isn’t it Flusser’s kindergarten spirit that lives in labs and DH centers across the academy? Similarly, the more recent “making turn” is happening in those same centers and labs.

As the historian David Staley explains, the “maker turn” introduces “an approach to the humanities that moves our performances off the page and the screen and onto the material world, a hermeneutic performance whereby humanists create non-textual physical objects.”11

Inspired by Patrick Jagoda’s recent article on “Critique and Critical Making,” this year’s Praxis cohort is set to explore the intersection of DH and the bricolage of physical computing. Taking a cue from Pierre Bayard’s How to talk about books you haven’t read,12 we have been wondering “how to make books you haven’t read talk!” But more about that in the next post. Stay tuned!

  1. Excerpt from Mark B. N. Hansen’s introduction to Bernard Stiegler’s chapter on memory, published in W. J. T. Mitchell and Mark B. N. Hansen, Critical Terms for Media Studies (Chicago; London: The University of Chicago Press, 2010).
  2. David Bellos, in his introduction to Georges Perec, The art and craft of approaching your head of department to submit a request for a raise (London; New York: Verso Books, 2011).
  3. Georges Perec, The art and craft of approaching your head of department to submit a request for a raise, trans. David Bellos (London; New York: Verso Books, 2011).
  4. Georges Perec, A Void (London: Harvill, 1994).
  5. Lev Manovich, Software Takes Command: Extending the Language of New Media, International Texts in Critical Media Aesthetics 5 (New York, NY: Bloomsbury, 2013).
  6. Mark Hansen’s introduction to Bernard Stiegler’s article on memory, in W. J. T. Mitchell and Mark B. N. Hansen, Critical Terms for Media Studies (Chicago; London: The University of Chicago Press, 2010).
  7. “Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, ‘memex’ will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.” Vannevar Bush, “As We May Think” (1945).
  8. Alan Kay and Adele Goldberg, “Personal Dynamic Media,” http://www.vpri.org/pdf/m1977001_dynamedia.pdf
  9. Lev Manovich, Software Takes Command: Extending the Language of New Media, International Texts in Critical Media Aesthetics 5 (New York, NY: Bloomsbury, 2013). Emphasis mine.
  10. Vilém Flusser and Mark Poster, Does Writing Have a Future?, Electronic Mediations 33 (Minneapolis: University of Minnesota Press, 2011).
  11. Patrick Jagoda, “Critique and Critical Making,” PMLA 132, no. 2 (March 2017): 356–63, doi:10.1632/pmla.2017.132.2.356.
  12. Pierre Bayard, How to talk about books you haven’t read (New York, NY: Bloomsbury USA; distributed by Holtzbrinck Publishers, 2007).

Hello World!

Wed, 06/09/2017 - 17:47

My name is Spyros Simotas, and I am a PhD candidate in the French Department at UVa. This year, I am also a Praxis fellow at the Scholars’ Lab. In this first blog post, I would like to briefly introduce myself, in honor of Brandon’s ice-breakers.

Brandon always comes to our meetings with an ice-breaker. Here are the three we have had so far:

  1. What is your favorite animal?
  2. What is your favorite plant?
  3. Who would you like to have dinner with, dead or alive?

My favorite animals are elephants, my favorite plants are palm trees and if I could have a meal with anyone dead or alive, I would like to have coffee with David Lynch.

I like elephants because they are big, they make the sound of a trumpet, and they care about each other. Despite their size, elephants do not pose a threat to other beings. They are also smart, and they can paint. Has anyone ever calculated the size ratio between an elephant and an average-sized bug? Bugs are the most common wild life-form we are stuck with in the industrialized and post-industrialized world. Domesticated farm animals that we use for food, or keep as pets, don’t count. We are stuck with bugs both literally and metaphorically. Unfortunately, I have never seen an elephant hanging from the wall or lurking inside a piece of software.

I have seen palm trees! The reason I like them is their simple shape. Their trunk doesn’t branch out; it just ends with a crown of leaves, like a messy toupee. Palm trees are easy to draw. When I lived in California, I remember that sometimes their tops would disappear in the early morning mist. Also, three cut-out palm trees figure on the cover of The Cure’s Boys Don’t Cry, a fine representation of their iconic hairstyle. Which brings me to David Lynch and his own impeccably messy hairdo.

He began his career with Eraserhead, and it is hard to tell whose hair, his character’s or his own, is the source of inspiration for that electrified, spiky style. Since then, he has created a lot of strange and heartbreaking characters: Joseph Merrick, whose story is better known as The Elephant Man; The Straight Story; not to mention all the characters from his early-’90s TV series Twin Peaks, recently revived, 25 years later, for a third and final season. Thanks to his book on meditation, consciousness, and creativity, I was also introduced to TM. It is a small book, called Catching the Big Fish, very easy to read and highly recommended.

As an ending to this post, I chose the following excerpt from the chapter “The Circle” where Lynch refers to the feedback loop between an art work and its audience.

“I like the saying: “The world is as you are.” And I think films are as you are. That’s why, although the frames of a film are always the same—the same number, in the same sequence, with the same sounds—every screening is different. The difference is sometimes subtle but it’s there. It depends on the audience. There is a circle that goes from the audience to the film and back. … So you don’t know how it’s going to hit people. But if you thought about how it’s going to hit people, or if it’s going to hurt someone, or if it’s going to do this or do that, then you would have to stop making films.”1

I think the same can be said about digital humanities. Our public scholarship, experiments, code, teaching, and service also reflect who we are and reverberate with our audience. In our first Praxis meeting, we talked about impact, trying to pinpoint the idea of success. But ultimately, we don’t know “how it’s going to hit people.” In which case, it is always useful to remember the well-known Marshall McLuhan scheme of technology as an extension of certain urges or desires. It is important to understand what urge we are trying to extend, because technology, according to Jonathan Harris (who also came up in our first discussion), can have “dramatic effects” on people. That’s why he calls for “a self-regulated ethics that comes from the mind and the heart of the creator.” Finding our own common interests and desires as a team will help us define the direction we want our project to take. At this early stage, we only know that we want to work with data from the Library, using technology to create new interactions with the archive. But it is with the principles of love, care, and good intentions that we embark on this year’s Praxis adventure.

  1. David Lynch, Catching the Big Fish: Meditation, Consciousness, and Creativity, 2016.

Digital Humanities Fellows Applications – 2018-2019

Fri, 01/09/2017 - 19:36

[Read closely: our menu options have changed. Note especially the changes to the application timeline, eligibility, and funding structure of the fellowship. Questions should be directed to Brandon Walsh, Head of Graduate Programs for the Scholars’ Lab.]

We are now accepting applications for the 2018-2019 DH Fellows Cohort!

Applications are due Wednesday, November 1st.

The Digital Humanities Fellowship supports advanced doctoral students doing innovative work in the digital humanities at the University of Virginia. The Scholars’ Lab offers Grad Fellows advice and assistance with the creation and analysis of digital content, as well as consultation on intellectual property issues and best practices in digital scholarship and DH software development.

Fellows join our vibrant community, have a voice in intellectual programming for the Scholars’ Lab, make use of our dedicated grad office, and participate in one formal colloquium at the Library per fellowship year. As such, students are expected to be in residence on Grounds for the duration of the fellowship.

Supported by the Jeffrey C. Walker Library Fund for Technology in the Humanities, the Matthew & Nancy Walker Library Fund, and a challenge grant from the National Endowment for the Humanities, the highly competitive Graduate Fellowship in Digital Humanities is designed to advance the humanities and provide emerging digital scholars with an opportunity for growth.

The award provides living support in the amount of $20,000 for the academic year, as well as full remission of tuition and University fees and the student health insurance premium for single-person coverage. Living support includes wages for a half-time graduate teaching assistantship in each semester. A graduate instructorship, particularly one with a digital humanities inflection, may be substituted for the GTA appointment based on availability within the fellow’s department. Applicants interested in such an option should say so in their application and discuss the possibility in advance with Brandon Walsh.

See past fellowship winners on our People page. The call for applicants is issued annually in August.

Eligibility, Conditions, and Requirements

  • Applicants must be ABD, having completed all course requirements and been admitted to candidacy for the doctorate in the humanities, social sciences or the arts at the University of Virginia.
  • The fellowship is provided to students who have exhausted the financial support offered to them upon admission. As such, students will typically apply during their fifth year of study or beyond for a sixth year of support.*
  • Applicants are expected to have digital humanities experience, though this background could take a variety of forms. Experience can include formal fellowships like the Praxis Program, but it could also include work on a collaborative digital project, comfort with programming and code management, public scholarship, or critical engagement with digital tools.
  • Applicants must be enrolled full time in the year for which they are applying.
  • A faculty advisor must review and approve the scholarly content of the proposal.

How to Apply

A complete application package will include the following materials:

  • a cover letter, addressed to the selection committee, containing:
    • a summary of the applicant’s plan for use of digital technologies in his or her dissertation research;
    • a summary of the applicant’s experience with digital projects;
    • and a description of UVa library digital resources (content or expertise) that are relevant to the proposed project;
  • Graduate Fellowship Application Form;
  • a dissertation abstract;
  • and 2-3 letters of nomination and support, at least one being from the applicant’s dissertation director who can attest to the project’s scholarly rigor and integration within the dissertation.

Questions about Grad Fellowships and the application process should be directed to Brandon Walsh. Applicants concerned about their eligibility, for whatever reason, are strongly encouraged to write as well.

* Please note that, per University policy, a student who has undertaken affiliate status and ceased to enroll full time is not eligible to resume full-time enrollment or hold a graduate teaching assistantship.  Because GTA appointments are a component of the DH Fellowship, students who have already undertaken affiliate status are not eligible to be considered for this award.

2017 Virginia Higher Ed GIS Meeting

Wed, 30/08/2017 - 17:13

2017 Virginia Higher Ed GIS Meeting

November 2, 2017 – 10am to 3pm (check in begins at 9:30am)

Scholars’ Lab, Alderman Library – University of Virginia – Charlottesville, VA

A meeting of all Virginia higher education Esri/GIS representatives and other GIS support people

This meeting is for Esri designates and other GIS support staff to come together to discuss common needs and solutions.  We will kick off with a plenary talk from an Esri representative.  Then, in an “unconference” format, the group will decide the topics for the remainder of the day.  Depending on interest and need, we will break into groups for further discussions.

Registration (required): https://tinyurl.com/VAgis2107

Schedule

9:30am – Check in Begins

10am – Plenary Session w/ Esri Education Account Manager – Ridge Waddell (tentative)

11am – Group Topic Discussion Decision Making

11:30am – Lunch

Noon – Topic Discussions – break-outs if necessary

2:45pm – Group Next Steps

3pm – Adjourn

NOTE:  Lunch is being provided by the UVa Library’s Scholars’ Lab.  Because of this, we ask that everyone register in advance.  We assume everyone will drive in for the day rather than stay in Charlottesville; however, we are happy to provide hotel information.  More details on parking, etc., will follow for registered participants.  If you have questions about anything, please feel free to contact Chris Gist at cgist@virginia.edu.

CFP: PMLA Special Issue, Varieties of Digital Humanities

Thu, 24/08/2017 - 18:08

I want to call attention to the opportunity to publish your work in the leading journal in literary studies.  Miriam Posner and I will be co-editing a special issue on digital humanities, and we very much welcome varieties of approaches as well as topics.  PMLA has a rigorous, blind peer review process that gives ample feedback–usually the feedback alone is worth it, even if excellent work, in the end, doesn’t make the difficult cut.  But that also means, in other words, that it’s not just up to the two of us to decide what will actually appear in the journal.  We would be happy to advise on the kinds of submissions you might send in.  Feel free to reach out at booth@virginia.edu.  Here is the CFP wording that appears at the site linked at the end of this post, where you may find instructions on how to prepare and submit the 9000-word-maximum document file.

Deadline for submissions: 12 March 2018

Coordinators: Alison Booth (Univ. of Virginia) and Miriam Posner (Univ. of California, Los Angeles)

Digital humanities (DH) may not be a full-fledged discipline, but it has advanced beyond “the next big thing” to become a reality on many campuses. Like many fields that have received a great deal of attention, DH derives energy from internal combustion and external friction—dissenters, supporters, and detractors see different sides of what may after all be too large a variety of practice to cohere as a field in the future. This moment, then, seems a good time to ask, What is next for DH? And what can we learn from what has come before?

PMLA invites essays that will help assess the past of DH, outline its current state, and point to its future directions among diverse participants, allies, and critics. The special issue welcomes well-informed critical essays that articulate varieties of digital experience with DH as it is commonly understood and as it is practiced in a more expansive, even contested, way, including but not limited to the following topics: game studies; digital narrative and poetry; social media and blogging; digital arts, including music and theater; digital pedagogy in languages, literatures, and writing (teaching with technology, e-portfolios, immersive technology, mapping assignments); textual editing; edited digital archives of manuscript or print materials; natural language processing and textual analysis of large corpora such as historical newspapers or a genre or a literary era; prosopographies, from ancient to modern; 3-D printing or modeling; virtual reality and photogrammetry documenting cultural heritage sites or artifacts; mapping and time lines to visualize trends in cultural or literary history; issues of copyright and commercial databases; theories and histories of digital technologies and their industrial and cultural impact; the growing field of criticism on digital scholarship and institutional change; advocacy or cultural criticism oriented toward new media and transformative practice.

The PMLA Editorial Board welcomes collaborative or single-author essays that take note of digital humanities of these or other varieties, whether centered on education or other spheres, whether ephemeral or long-standing. Submissions that consider a specific project should go beyond reporting on its methods and findings and emphasize its implications for digital literature and language scholarship. Of particular interest are reflections on DH as practiced beyond North America and Europe. Issues and themes might include accessibility, sustainability, standards of evidence, transforming the academic career, changing or pursuing further the abiding questions in the discipline. Histories, predictions, and manifestos may be welcome, but all essays should be accessible and of interest to the broad PMLA readership.

 

https://www.mla.org/Publications/Journals/PMLA/Submitting-Manuscripts-to-PMLA

Walt Whitman’s Jack Engle and Lola Montez: New from Collective Biographies of Women

Tue, 15/08/2017 - 21:39

 

[This is the first part of a short essay I posted on the blog of Collective Biographies of Women and elsewhere on August 15, 2017.  See https://pages.shanti.virginia.edu/CBW_Blog/?p=441&preview=true#_ftn6 for the entirety, with additional notes and references.]

In February, 2017, there was some exciting news of the kind that gratifies literary scholars everywhere.  Graduate student Zachary Turpin had discovered a lost short novel that Walt Whitman serialized anonymously in New York’s Sunday Dispatch in 1852.  The Life and Adventures of Jack Engle, as narrated by a young clerk of that name, gives impressions of New York life as Whitman experienced it before he became revered as the Good Gray Poet.  I am no Whitman scholar and have little to add to the discussion of US periodicals in the 1850s.  But as I quickly devoured the news and the novel itself, I was taken with a minor character closely related to my own research: the Spanish dancer, Inez.  Could this be a version of Lola Montez?

Photo by Robbie Hott, July 3, 2017, Lola Montez by Joseph Stieler, 1847, for Ludwig I’s Gallery of Beauties at Schloss Nymphenburg, Munich

The improbable “auto-biography” of Jack Engle now attributed to Whitman claims in the preface to be a “true story” about “familiar” people; “the main incidents were of actual occurrence,” giving “the performers in this real drama, unreal names” (Whitman, Engle 3). Clearly, the “life and adventures” of the quasi-Dickensian hero differ from Whitman’s (Walt was no orphan, for example). But Whitman might have given an unreal name to the real Lola Montez, Spanish dancer, whom I have long featured in my digital project on women’s biographies, Collective Biographies of Women or CBW (Booth).  The Irish-born adventuress who became the Countess of Landsfeld, who was buried in New York as Eliza Gilbert in 1861, has received many full-length and brief biographies. Whitman’s connection to this celebrity is not unknown, though little remarked.  She was in New York during the production of Whitman’s novella, Jack Engle.  On January 5, 1852, weeks into her first star turn in New York, she danced in Un Jour de Carneval à Seville in the role of Donna Inez (Morton 205).[1] Then, after controversial appearances in Boston, Hartford, and elsewhere, she appeared at the Broadway Theatre in Lola Montez in Bavaria, a play in five acts recounting her famous alliance with King Ludwig I and the rebellions and backlash that led to the king’s abdication (“The Danseuse, the Politician, The Countess, the Revolutionist and finally the Fugitive”; Morton 218). Whitman could easily have seen her reprise of this play at the Bowery Theatre on 28 June, or could have attended one of her benefit performances that spring, as Jack attends Inez’s benefit performance in the novella. Certainly Whitman and Montez coincided when she was back in New York six years later and they frequented Pfaff’s, Whitman’s bohemian hangout after first publication of Leaves of Grass (Lehigh University).

These enterprising mid-century figures have more interesting qualities in common than coinciding in New York in certain years.  Her defiant self-making is not out of keeping with his celebration of the body.  Notably, during their shared New York-bohemian years, both published highly gendered self-help.  Manly Health and Training, an advice book by “Mose Velsor,” was serialized in the New York Atlas in 1858, and Zachary Turpin recently discovered Whitman’s authorship (Velsor).  The Arts of Beauty: or Secrets of a Lady’s Toilet (New York: Dick & Fitzgerald, 1858) capitalized on Lola Montez as the famous author, drawing upon her series of popular lectures in New York, London, and elsewhere (Montez).

Whitman left unacknowledged his authorship of the episodic entertainment, Jack Engle.  We might then allow a canonical poet to steer clear of a notorious entertainer whose vocational tag in the Oxford Dictionary of National Biography is “adventuress.”  To follow through on my first impulse to post that “Whitman’s Inez is Lola Montez,” it would take more than the known connections in 1858; the novel, again, was churned out topically and serially in 1852.  The Whitman scholars I contacted were less than convinced that Inez resembles Montez.  I share their opinion that Inez can be a composite of Spanish dancers Whitman might have known in New Orleans (she was there in 1853, he in 1848) or New York, as well as some features of George Sand and others whom Whitman admired.  The fictional Spanish dancer has no exalted political past and, like other characters in the novel, she derives a great deal from the conventions of romance and melodrama. But it is certain that Lola Montez was big news in New York in the early months of 1852, and there are interesting connections with Whitman’s plotline of the hero’s growing intimacy with a belle of the town.[2] Though “Spanish” connotes hot-blooded, it also connotes veiled and hard to get. The portrayal of the novel’s Spanish dancer points to significant features of the well-educated, entrepreneurial celebrity. Whitman’s version also renders the performer more bourgeois and less interesting than the real thing, downplaying Montez’s kinky suggestiveness. The differences are a measure of the fictional purpose of this minor character.  The hero rises from street life to office work and a brief escapade outside the law that ends happily, all the more because he was never in real danger of falling in love with Inez.

Lola Montez in a daguerreotype (color added), 1851, by Southworth & Hawes

Inez and Lola: Not Cheap

You know the type: “Spanish,” “dancer”; theaters would be places to find all sorts of accessible women.  But Whitman’s Inez and the real Lola Montez might be called, in hard-boiled speak, classy dames. I intentionally hit on the sore point of typecasting, because it is almost inescapable, even in fact-based historical biography.  The surprise is not the higher quality of love object implicit in the reputations of Inez and Lola, and not even that they evince manners and education, but that they are businesswomen, capitalists.  In Jack Engle, the narrator is a reluctant young apprentice in Covert’s law office, where he notices a young lady client.  Covert is advising her on a doubtful purchase of shares (happily, it turns out she never buys into the fake scheme).  “She had the stylish, self-possessed look, which sometimes marks those who follow a theatrical life. Her face, though not beautiful, was open and pleasing, with bright black eyes, and a brown complexion. Her figure, of good height and graceful movement, was dressed in a costly pale colored silk” (27).  She calls out to the pet dog, also named Jack, who jumps up and muddies her dress.  Inez is annoyed, and then laughs it off—a preview of her responses to drooling men and to Jack himself.  In chapter six of Jack Engle, Inez appears “really fascinating” on stage in the “short gauzy costume of a dancing girl. Her legs and feet were beautiful, and her gestures and attitudes easy and graceful” (29). These characterizing details correspond somewhat with the historical Montez.  Montez was fair, with striking blue eyes, unlike Inez.  She was frequently depicted in association with animals.  Contemporaries range between calling Montez altogether beautiful, or merely fascinating with a face that was not beautiful.  But then of course there was her figure.  Accounts usually disparage Montez’s performing ability, but those who were not too scandalized avidly praised the legs and the costume.  Images in newspapers always emphasize the tiny waist, ballooning bosom, and short skirts….

Notes

[1] Kirsten Gruesz suggests Inez was a common name for the Spanish-beauty type, as in the antebellum novels “Inez the Beautiful, or, Love on the Rio Grande” (Harry Hazel, 1846) or Augusta Evans Wilson’s “Inez, A Tale of the Alamo” (1850).  I also consulted with Ed Whitley, Ken Price, and Ed Folsom.

[2] Ed Folsom and Ken Price, in their article on Whitman for The Walt Whitman Archive, indicate Whitman’s affiliations with women activists Abby Price, Paulina Wright Davis, Sarah Tyndale, and Sara Payson Willis (Fanny Fern), as well as the “queen of Bohemia” Ada Clare.  CBW includes only Fanny Fern of these women, though abolitionists and activists for women’s rights do appear in some collections listed in our bibliography.

Read more at https://pages.shanti.virginia.edu/CBW_Blog/?p=441&preview=true#_ftn6

Welcome Senior Developer Shane Lin!

Tue, 15/08/2017 - 18:32

The Scholars’ Lab team is thrilled to welcome Shane Lin as our new Senior Developer!

Shane first joined the Scholars’ Lab as a Praxis Program graduate fellow in 2012. Since then, he’s served as a Technologist in our Makerspace, where he’s provided invaluable guidance on research and pedagogy related to desktop fabrication and physical computing. This past academic year, Shane was a Digital Humanities graduate fellow and worked on software to study networks of information exchange related to cryptography on Usenet lists. That fellowship work contributes to his doctoral work in History at UVA and to his dissertation on the history of cryptography and evolving notions of privacy since 1975.

In addition to being an incredible developer and scholar, Shane is a talented photographer, and has taken nearly all the photos of our staff and students. Come by the Lab to say hi to Shane, or welcome him via email at ssl2ab at virginia.edu.

Fall 2017 UVa Library GIS Workshop Series

Thu, 03/08/2017 - 19:10

All sessions are one hour and assume participants have no previous experience using GIS.  Sessions will be hands-on with step-by-step tutorials and expert assistance.  All sessions will be held on Tuesdays from 3PM to 4PM in the Alderman Electronic Classroom, ALD 421 (adjacent to the Scholars’ Lab) and are free and open to the UVa and larger Charlottesville community.  No registration, just show up!

September 12th

Making Your First Map with ArcGIS

Here’s your chance to get started with geographic information systems software in a friendly, jargon-free environment.  This workshop introduces the skills you need to make your own maps.  Along the way you’ll get a taste of Earth’s most popular GIS software (ArcGIS) and a gentle introduction to cartography. You’ll leave with your own cartographic masterpieces and tips for learning more in your pursuit of mappiness at UVa.

September 19th

Georeferencing a Map – Putting Old Maps and Aerial Photos on Your Map

Would you like to see a historic map overlaid on modern aerial photography?  Do you need to extract features from a map for use in GIS?  Georeferencing is the first step.  We will show you how to take a scan of a paper map and align it in ArcGIS.

September 26th

Getting Your Data on a Map

Do you have a list of Lat/Lon coordinates or addresses you would like to see on a map?  We will show you how to do just that.  Through ArcGIS’s Add XY data tool and Geocoding (address matching), it is easy to take your tabular lists and generate points on a map.

October 10th

Points on Your Map: Street Addresses and More Spatial Things

Do you have a list of street addresses crying out to be mapped?  Have a list of zip codes or census tracts you wish to associate with other data?  We’ll start with addresses and other things spatial and end with points on a map, ready for visualization and analysis.

October 17th

Taking Control of Your Spatial Data: Editing in ArcGIS

Until we perfect that magic “extract all those lines from this paper map” button, we’re stuck using editor tools to get that job done.  If you’re lucky, someone else has done the work to create your points, lines, and polygons, but maybe they need your magic touch to make them better.  This session shows you how to create and modify vector features in ArcMap, the world’s most popular geographic information systems software.  We’ll explore tools to create new points, lines, and polygons and to edit existing datasets.  At version 10, ArcMap’s editor was revamped, introducing new templates, but we’ll keep calm and carry on.

October 24th

Easy Demographics

Need to make a quick map of demographics or religious adherence?  This workshop will show you how to easily navigate Social Explorer.  This powerful online application makes it easy to create maps with contemporary and historic census data and religious information.

October 31st

Introduction to ArcGIS Online

With ArcGIS Online, you can use and create maps and scenes, access ready-to-use maps, layers and analytics, publish data as web layers, collaborate and share, access maps from any device, make maps with your Microsoft Excel data, customize the ArcGIS Online website, and view status reports.

November 7th

Expanding Content in ArcGIS Online

You can also use ArcGIS Online as a platform to build custom location-based apps.  You can create stories and context around online maps for things like storytelling, tours, or map comparisons.  Many of these applications have templates that make for easy viewing on mobile devices.

What can digital humanities tell us about Character?

Tue, 25/07/2017 - 21:31

My part of the collaboration with James has been to think through what this text has to tell us about “Character” as a literary category and to consider how digital tools can help modern users interact with eighteenth-century characters.

There’s been a learning curve for me as I find out more and more about what digital formats can and can’t do. I think my biggest challenge has been learning to think about digital material spatially—in order for something to exist in our final product, we have to think about where it goes and how to attach it. Our original plan was to preserve every page in three separate files—one with the image of the text, one with a transcription of the text, and a third that contained commentary for that page. The hope was that we could sync every file by line and thus create a no-frills edition that could be accessible and transparent for all users.
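
To make that plan concrete, here is a minimal sketch of the per-page layout we had in mind (the file names here are hypothetical, purely for illustration):

    page-042.jpg         the image of the page
    page-042.txt         the transcription, one text line per line on the page
    page-042-notes.txt   the commentary, keyed to the same line numbers

Syncing by line means that line 12 of the transcription and line 12 of the commentary would both point back to the twelfth line on the page image.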

We’ve been forging full steam ahead with the transcriptions, and I’ve learned a great deal about how to preserve physical features of the page in a digital translation. I began to realize that I think of character conceptually, not spatially, and thus finding a way to break down what this text can tell us about Character by page began to seem less and less feasible—let alone breaking it down by line! A line-by-line commentary is useful for explicating specific things in the text—allusions that would escape a twenty-first-century reader, say, or for translating Latin phrases into English. Each of these things occurs at a specific place in the text and is thus well suited to line-by-line annotation. We’ve shied away from doing that kind of annotation—not because it’s not useful, but because it’s already been done, and done well, first by Robert Thyer for the 1759 edition and, for modern audiences, by Charles Daves in 1970.

Butler’s work is a collection of Theophrastan Characters—a genre of writing that enjoyed a revival when Butler was writing in the late seventeenth century, but which had fallen out of fashion by the time the collection was published posthumously in 1759. Theophrastan Characters are an odd genre. They break characters down into general “types” and give a description that ostensibly describes every person who falls under that category. For instance, when Butler writes of “An Amorist” that “His Passion is as easily set on Fire as a Fart, and as soon out again,” we are meant to assume that 1) this is true of all Amorists, and 2) if we ever meet somebody whose passion is, err, easily stirred and just as quickly extinguished, that person is an Amorist.

We’re used to breaking down literary characters into round and flat characters, or individuals and types. Theophrastan Characters dwell completely on the side of types, which, when you think about it, is kind of nuts. We tend to think of people specifically, not generally. If I were to ask you to imagine a lawyer, you would probably think of a lawyer you know, or a famous lawyer you’ve seen in the news or in pop culture—Elle Woods, say, or Johnnie Cochran. But Butler asks us to imagine a generic lawyer, someone whose “Opinion is one thing while it’s his own, and another when it is paid for,” a figure who represents all lawyers everywhere. This is familiar to us when we think about type—who doesn’t love a good lawyer joke? But it’s strange when we consider this figure as a “Character.” In literature, even type characters require a modicum of specificity, which is dictated by their literary surroundings. When a lawyer appears in Bleak House, even though that lawyer is just a flat, type character, we still imagine a single figure in Chancery in 1852 litigating Jarndyce vs Jarndyce; it could not be Elle Woods, or Johnnie Cochran, or your college friend who went to law school. But Butler’s characters are devoid of context—his lawyer is at once every lawyer and no lawyer at all.

I’m hoping this project will be able to tell us two things. First, what tools do you use to create a general character? Just a surface read-through shows that Butler seldom uses traits or characteristics to describe his characters—they’re too individualizing. Instead he writes largely in metaphors. An Amorist is “like an Officer in a corporation” and a Lawyer is “like a French duelist”—which of course raises the question: what are officers of corporations and French duelists like? Are there other devices that Butler uses? Does he use the same devices for every character? My plan is to run the text through Stylo to see if we can learn anything about how Butler creates his types.
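
Concretely, a first pass might look something like the sketch below. Stylo itself is an R package, so this is only a rough Python approximation of the same idea (comparing most-frequent-word profiles across Characters); the file names and the choice of fifty feature words are hypothetical.

    # Rough sketch: compare most-frequent-word profiles between Characters.
    # (Stylo proper is an R package; this only approximates its approach.)
    import re
    from collections import Counter

    def profile(text, features):
        """Relative frequency of each feature word in one text."""
        tokens = re.findall(r"[a-z']+", text.lower())
        counts, total = Counter(tokens), max(len(tokens), 1)
        return [counts[w] / total for w in features]

    # Hypothetical transcription files, one per Character.
    names = ["amorist", "lawyer", "henpect-man"]
    texts = {n: open(f"transcriptions/{n}.txt").read() for n in names}

    # Shared list of the 50 most frequent words across the whole corpus.
    corpus = re.findall(r"[a-z']+", " ".join(texts.values()).lower())
    features = [w for w, _ in Counter(corpus).most_common(50)]
    profiles = {n: profile(t, features) for n, t in texts.items()}

    # Mean absolute difference between profiles (Burrows's Delta proper
    # would z-score the frequencies first).
    for a in names:
        for b in names:
            if a < b:
                d = sum(abs(x - y) for x, y in zip(profiles[a], profiles[b]))
                print(f"{a} vs {b}: {d / len(features):.5f}")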

Second, what will it take to find examples of Butler’s characters? What does it take to fit a specific person into a general description? Could we argue that perhaps Butler is describing Johnnie Cochran, even if he is not describing Elle Woods? How would we show that Cochran fits into Butler’s category? By looking at what he’s done? How he acts? Who he is? Leaving aside lawyers, would we be able to find examples of Henpect Men or Fifth Monarchy Men in today’s world—or are types too dependent on their political and cultural context to translate?

Now that we have a good number of transcriptions we can begin to create a corpus, which I hope will be able to answer some of these questions.
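Stylo itself is an R package; as a rough stand-in, here is the kind of most-frequent-word counting that such stylometric analysis starts from, sketched in Python (the file name and the tokenizing pattern are our assumptions, not project code):

import re
from collections import Counter

def mfw_profile(text, n=50):
    # Lowercase the text, split it into words (keeping the long s),
    # and return relative frequencies of the n most frequent words.
    words = re.findall(r"[a-zſ']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.most_common(n)}

with open('characters.txt', encoding='utf-8') as f:
    profile = mfw_profile(f.read())

Comparing such profiles across the Characters, or against other seventeenth-century prose, is the sort of evidence a tool like Stylo builds its comparisons on.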

Transcribing Typography with Markdown

Wed, 05/07/2017 - 23:28

Digital technologies are not new solutions to our old problems, but new problems asking us to return to old solutions. People have been transcribing texts for as long as there have been texts, so it is no surprise that some of the earliest applications for computers were concerned with transcribing texts. These applications built on previous ideas, which were themselves based on yet earlier ones—the genealogical chains of some going back centuries. These genealogies have their own fascinating histories, but the problem that concerns this post is finding ways to use those centuries of accumulated ideas to reproduce texts on a computerized platform that has developed its own sense of how texts should function. That is, how do we digitally transcribe using the techniques that already exist? Techniques developed in prior decades can be difficult to use with computer-based methods because those methods have only recently developed and have not yet been applied to as wide a range of sources as traditional transcription methods. The solutions that computer applications first stumbled across for one problem initially seem to be right for every problem, but on careful study they often turn out to obscure important details. We do not yet have an extensive repertoire of computer transcription to consider, so it naturally seems as though the solution that exists, however idiosyncratic, is the solution for all situations. For example, <em> is widely used to indicate both italics and emphasis, which seems fine, but conflates two different sorts of text forms: the former could indicate poetry, summaries, Latin, or citations, while the latter suggests something exceptional—a key glossary term or foreign phrase. The conventional computerized scheme is to treat the stylistic choice of italic as separate from the semantic choice of emphasis. For a non-emphasis use of italic, a style could be applied to distinguish that particular semantic meaning, but these semantic meanings would have to be carefully integrated into the existing scheme without conflicting with any other components. This integration becomes increasingly complicated as the schemes become more complete, because there are more opportunities for conflict. New standards for HTML and TEI make the situation more flexible, but fundamentally transcriptions on computers must either force their intellectual goals into an existing framework—i.e., marking a distinct convention of italics as though it were the same as emphasis—or extend the framework itself—i.e., inventing a tag that distinguishes all the semantic types of italic used in the work.

When I consider what digital tools could bring to the humanities, I envision new forms of writing which challenge us to reimagine traditional questions in systems with new capabilities, but which accept their own limited application and eventual obsolescence in a long history of writing technologies. If a writer recognizes that eventually all technologies become obsolete, they cannot reasonably ignore the wisdom recorded by the technologies of the past. Furthermore, writing serves a wider range of uses when it collaborates with as many conventions as possible, encoding the wisdom of old and new systems, treating the goals of past writers as well-worn traditional tools, and not creating new tools when a better old one remains to be understood. Timothy Morton’s idea of “ecology” serves as well as any other metaphor here: he invites a move from placing nature on a pedestal to thinking of ecology as a sort of ambient poetics. Rather than asking how nature writing is done, we ask about the contexts of place, space, and material that surround both nature and the writing about nature. I propose that his idea of ecology can be extended to the way we think of computer tools and their relationship to the wider range of writing tools—the more widely we share tools and link to other ideas, the more comprehensible our approach will be, and the better set up we are for the next innovation. We ought not put a particular scheme on a pedestal but instead recognize how it interacts with all other schemes.1

Translating the physical and intellectual features of a text into a system of transcription requires judgment, and I think adapting the most established technologies to the task does more good than taking on the unproven baggage of complicated new systems. A note on the scope of technology I’m talking about here: by established, I don’t mean HTML or even ASCII; I mean conventions that span hundreds or even thousands of years: punctuation, glossing, words in sequence, and that sort of thing. The stability of the technology of punctuation means that it has been adopted in more recent technologies and can be used without much deliberation, but I think that people concerned with the past need to ask what questions the past has already answered well enough. Questions of how to transcribe have two components: the intellectual work of selection and description that needs to be done, and the conventions available for encoding. The former requires developing judgment; the latter, only familiarity with a particular system.

Adapting the ethos of the 1999 essay by David L. Vander Meulen and G. Thomas Tanselle, “A System of Manuscript Transcription,” which broke new ground precisely by adhering as closely as possible to common sense rather than devising a byzantine new system, we can write to foreground the historical text and our judgments about it.2 Their approach admits that a transcription re-forms a text, requiring decisions, and it highlights where those decisions are made. Consider a hypothetical example in Markdown using something like the Vander Meulen-Tanselle approach:

The [?funny *canceled*] cat ran after the [*next two words inserted*] big brown dog.

Which renders as: “The [?funny canceled] cat ran after the [next two words inserted] big brown dog.”

The convention of using square brackets to mark later insertions is so well established that it requires little explanation to understand that these are editorial comments and that the text the editor believes is correct stands outside the brackets.3 Enclosing italic text in asterisks is a Markdown convention, but one relatively familiar to most readers of digital texts. In contrast, one way to mark the same text in TEI would be:

<p>The <del> <unclear> funny </unclear> </del> cat ran after the <add> big brown </add> dog </p>

But those more familiar with TEI might be tempted to write:

<p>The <del rend="strikethrough" cert="medium"> <unclear reason="smudged"> funny </unclear> </del> cat ran after the <add place="above"> big brown </add> dog </p>

These tags instantiate collaboratively delineated concepts intended to be broadly applicable to a wide range of texts, and they aspire to encode the document as it is. Various style sheets can cause these semantic meanings to render as floating notes, in brackets, as plain text, or in nearly any other way imagined by the designer of the particular style sheet. However, there is no default rendering and no rule about which details to record. TEI users are tempted to imagine that they merely record what they see rather than applying their judgment. Does knowing that the certainty of the word “funny” is medium, that it was canceled with a strike, and that “big brown” was added above the line add to the meaning of this text? I think, typically, it would not. But if, in a given project, the method of cancellation mattered, it could easily be included in the editorial comments of the Vander Meulen-Tanselle system in plain English. Indeed, if the method of cancellation mattered, writing it in English would emphasize it rather than hide it in the appearance of the display. Another difference between a TEI-like approach and an approach like Vander Meulen and Tanselle’s is whether the system is expanded using the resources of plain English or the coding expertise of a specific community, and, by implication, whether the system encourages developing judgment appropriate to the task at hand and demonstrating where that judgment is applied, or encourages outsourcing judgment to experts unfamiliar with your particular problem and hiding that application of judgment under the aegis of an official standard. In one case, capable readers of English can understand the markup; in the other, you would need access to the TEI standards and discussions to determine how to express and understand a feature correctly according to people probably not involved with your particular project. The plain English system can be read by competent and informed readers; the TEI system requires specialist expertise.

Now, I do not mean to pick on the members of the TEI Consortium, who do good work, but to outline the sort of thinking that influences the use of digital technology in transcription. If we care to write about a text in history, we must think about which of the innumerable facts about that text apply to the project at hand. This work can be shortcut by adhering to any present or future standard that displaces judgment onto someone else, but at the expense of the system working for your project. A system needs to grow from the problems in the project.

For this project, we aim to recreate, digitally, the experience of reading eighteenth-century texts for commonplace entries. Having looked at several surviving eighteenth-century books and thought about manuals on preparing commonplace books (Locke has a particularly good one), we realized that the physical layout of the text affected reading. Since texts occur in some layout which presents their meaning, it is obvious that layout links to meaning, but you would not think so based on text-encoding standards that ignore white space and line breaks. Furthermore, commonplacing readers may read with a pen in hand to mark the text itself in the margin. Conventional computer document formats are not well suited to approaches that remain attentive to the layout of the page they comment on. Recently, Ted Nelson began a new lecture series describing how current computer systems reflect arbitrary compromises that stuck. One example he mentions is that most modern document formats include no way to indicate marginal notes. For centuries, people commonly commented on texts with marginal notes, but because the early designers did not use those sorts of notes in their own day-to-day work, they were not included in most modern standards for texts.

We aim to capture features like white space and line breaks that are not currently recorded in transcription systems, but which would have been part of the eighteenth-century text’s features. But how can we wrestle the strangeness of old texts into the modern systems we know? One approach draws on improv theater—just start doing it and then look over the work to see which parts are useful. For this portion, three of us—Sarah Berkowitz, Elizabeth Ferguson, and James P. Ascher—each transcribed a few pages of the printed text we have been studying—Samuel Butler’s Characters—after talking about the aims of the project. We compared our approaches to find what did and did not work. These solutions, like other good transcription solutions, were developed collaboratively but are quite specific to this project. Our hope is that the reasoning here, while it will not answer your specific questions, demonstrates an approach to thinking about your own project.

Headings

Within Butler’s Characters, each character has its own distinct heading; these are important to record because they structure the text and name the theme of each part. On the printed page, the headings have a different appearance from the main text: they are centered, in a larger face, use capitals and small caps, and have rules above and below them. One approach might be to transcribe thus:

{Double Rule} A {Small Caps} HUFFING COURTIER

Or, with kerning between the letters:

{Double Rule} A S M A L L P O E T

Both of these approaches recreate the layout of the text in the document and preserve the capitalization. Another approach could ignore the layout and kerning:

[*two rules*] PREFACE.

Or, following the Markdown convention for signaling a heading, we could write:

[*two rules*] # AN # ALDERMAN

All four approaches, though they may seem somewhat different, are shaped by two consistent categories of decision making: first, the selection of the elements worthy of notice and, second, the system for recording the elements that have been noticed. In each example, the heading is marked as different: in two cases by giving the typography, and in two other cases by interpreting the typography as carrying the semantic meaning of a heading. Going back to the aim of the project, recreating the reading practices of the past, since the typography of headings is fairly consistent, it seems right to simply apply semantic judgment to the text and save the reader of the transcription time, as long as we’re confident of our interpretation. Sure, readers could figure out for themselves that centered small-caps words are headlines, but we can instead tell them in an introduction that every headline takes this form and then note any variation. All four transcriptions, however, regard the presence of the rules as significant. The rule seems like an element to notice since a reader of the text would come to expect the pattern, and the rules interrupt the purely alphabetic text before and after. The matter of exactly how to transcribe the rules is more about convenience for converting the file and will be treated later.

Initial Letters

Each section also begins with what is called a “drop cap,” a capital letter that is bigger than the adjacent letters, and which drops down several lines. We have a few examples of how to record this as well (unrelated portions of the transcription are omitted):

[*two line initial*]HAS taken his Degree in Cheating [...] [*two line initial*] the higheſt of his Faculty [...]

Another:

[I]s one, that would fain make [...] [ ]which Nature never meant him; [...]

Another:

[Two Line initial*]Is a Cypher, that has no Value himsself, but from the Place he stands in. All his Hap-

Another:

T^2^HE writing of Characters [...] much in Faſhion [...]

The first three mark the lines that the drop-cap initial spreads to. This spread may be significant, as drop caps in many French books of the same period extend upwards rather than downwards, rising above the line. Furthermore, the word on the next line might be confused as relating to the drop capital; resolving this confusion could be part of the reading practice. Note also that the first and third examples preserve the space between the capital and the text of the second line, while the second does not (which seems fine, because the extension of the capital is already blank). The last transcription, based on the suggestion of Fredson Bowers in his Principles, uses a superscript in Markdown to signal the initial, but does not record whether it extends up or down.4 This choice may be fine if we are restricted to English books of a certain type, where these always seem to go down, but might be a problem if developing a transcription standard that covers all types of books. Since in our case the practice is uniform across the text and merely signals to readers that they have arrived at the beginning of a new section, Bowers’s approach seems to best preserve the readability of the text. It would render “T2HE writing of Characters.” While the superscript 2 is somewhat confusing, a note could explain it, and the superscript follows a convention, developed from early twentieth-century work on incunabula, that has been used in serious scholarship for about a century.

Long ‘s’ Characters

Eighteenth-century printers used both the long s—i.e. ‘ſ’—and the short s—i.e. ‘s.’ Typically, the long s is used in the middle of a word and the short one at the beginnings and endings, but the practice is hardly consistent. It is not until 1785 that the short s really dominates mainstream printing as it does now. For this reason, most transcriptions note the presence of a long s in some way. We have three examples in our transcriptions: as “ſ,” as “ss” or as “s*.” Each transcriber seems to want to include the letter as distinct from ‘s,’ which seems like the right choice since long s at the very least could potentially be confused with an ‘f’ by current, or past, readers. However, each mode of transcription emphasizes the long s in different ways. On the one hand, one marks it with an asterisk, suggesting that something unusual is going on; the asterisk is the convention for notae and footnotes, so this could work to mark and make visible all the places that this unusual letter occurs. Consider this passage,

He has found out a Way to s*ave the Expence of much Wit and Sens*e: For he will make les*s than s*ome have prodigally laid out upon five or s*ix Words s*erve forty or fifty Lines. This is a thrifty Invention, and very eas*y; and, if it were commonly known, would much in- creas*e the Trade of Wit, and maintain a Mul-

It makes it extremely clear that the long s occurs throughout at the beginnings and in the middles of words, pointing out that it deviates from the convention I proposed above. On the other hand, using “ss” blends in and looks like just another spelling of a word. Since spelling was mostly normalized in this period, a deliberate misspelling can signal special typography, but we can never be totally certain that something might not be an error or a word we do not know. Consider “paſs” and “pasſ,” which would both be transcribed in this system as “passs,” conflating the common situation with the odder one. Using the Unicode character, “ſ,” seems both to mark the letter as exceptional and to leave a readable text. Those familiar with the conventions of eighteenth-century printing will not be surprised by it, and for those who are not familiar with the conventions, it recommends itself for further study.
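A side benefit of the Unicode choice is that the transcription stays machine-friendly: the letterform is preserved in the file, and a one-line normalization produces a searchable copy. A minimal sketch in Python (the function name is ours):

def searchable(text):
    # Fold the long s into the short s for searching and analysis;
    # the stored transcription keeps the original letterform.
    return text.replace('ſ', 's')

assert searchable('much in Faſhion') == 'much in Fashion'

The “ss” convention, by contrast, cannot be undone unambiguously, as the “passs” example shows.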

Footnotes, Quotations and Certainty

Brief, conventional quotations provide no special problems since the quote mark in ASCII and Unicode serves well enough, but two kinds of typographic style associated with references require special attention: footnotes in smaller type at the foot of the page and running quotes. An example that combines both (slightly altered so that only those two conventions are apparent),

if it were commonly known, would much in- crease the Trade of Wit, and maintain a Mul- We read that Virgil used to make, &c ] This alludes to a Passage in the Life of Virgil ascribed to Donatus. " Cum Georgica scrie- " traditur quotidio meditatos mane plurianos versus dic_tare so- " litus, [---Illegible need to check original copy (sarah)]"

While the footnote text in the original is smaller, this transcription does not document that fact, reasonably enough, since the bracket and the verbal text itself alert us that we are in a footnote. Additionally, skipping one line and continuing on the same page preserves the ability to make line references to both the main text and the footnotes in the same system. Yet, in comparison to our treatment of headings, it seems we could provide some signal to the reader that the text before them has special semantic value. Following the convention of square brackets, we can adopt the Markdown notation, “[^1]” and “[^1]:,” to signal, respectively, the location in the source text that is marked and the lines of the text that comment on it. The problem we run into using this notation with this particular footnote is that it comments not on the page it occurs on, but on the facing page. If several footnotes occur, the number can be incremented, but—as far as I know—footnotes commenting on a text in another file are uncommon enough not to have been dealt with. Such a linkage needs two pieces of information: a linking symbol (i.e., the footnote mark or the passage footnoted) and the file that contains the proper footnote. Markdown provides such a mechanism in the “reference link”; i.e., our source passage could be [words][^1] and the note could read [^1]: otherfile.md. The problem is that, to Markdown, this reference notation means the word “words” becomes a hyperlink to the file “otherfile.md,” which isn’t quite the right linkage. A simple extension of this scheme would be to include a statement of the location at both ends of the footnote, viz., for page 22:

[...] The words were found in the notes. \*[^1]

[^1]: [*the footnote occurs on page 23*]

Page 23 has an additional footnote but still refers back:

[...] Some unrelated text with its own note, \*[^1] that doesn't relate to the wrong note.

[^1]: This is a note for this page, so no comment.
[^2]: [*referring to the note on page 22*] I here note.

Notice a few aspects of this approach. Since the numbering for footnotes in the brackets is not part of the transcription, but merely an aid to the abstract structure, it only matters that the numbers are consistent within one document. Footnote “1” on one page could very well be footnote “2” on another page while preserving the enumeration or symbols provided by the printer. In this imaginary example, we prefixed the asterisk with a backslash so that any computerized parser would see that it is a symbol, not a special character indicating an italic font.

In the original example, note also that the running quotation marks are simply transcribed in the margin with their apparent spacing. This seems right, as the aim is to preserve the reading experience of the page, which would have these passages marked out by quotes. Lastly, note the final editorial comment that expresses the uncertainty of the transcription. While a uniform language for expressing uncertainty is desirable, the nature of uncertainty is so various that giving the editor free rein to explain what’s going on seems the most prudent course.
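To see how little machinery the two-ended linkage needs, here is a rough Python sketch that gathers note definitions from a folder of page transcriptions so that a mark on one page can be matched with its note in another file. The page-per-file layout, the page*.md names, and the pattern are assumptions for illustration, not project conventions:

import re
from pathlib import Path

NOTE_DEF = re.compile(r'^\[\^(\d+)\]:\s*(.*)')

def index_notes(folder):
    # Map (file name, note number) to the text of each note definition.
    notes = {}
    for path in sorted(Path(folder).glob('page*.md')):
        for line in path.read_text(encoding='utf-8').splitlines():
            match = NOTE_DEF.match(line)
            if match:
                notes[(path.name, match.group(1))] = match.group(2)
    return notes

Because the index is keyed by file as well as by number, footnote “1” on one page never collides with footnote “1” on another.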

Italics, Small Caps and Other Type Changes

Transcribing italics is both conventional and part of the procedure used by all three of the transcribers in this project. Options include tagging the text with [i] or {i} or <i>, as well as using * to enclose the passage. In each case, the transcription identifies the moment in setting type when the compositor would have switched from pulling letters out of the physical case of roman type to pulling letters out of the physical case of italic type, and vice versa. Whichever sign is used, the aim is to note the presence of another style of typeface with the same body size. As Markdown understands asterisks to mean italic, something like *this* seems to be the right approach, if only because it makes the text easier to parse with conventional tools and does not misstate the situation.

Following the same technique, small capitals often signal different sorts of information in the text. Markdown does not provide a solution that is quite as elegant as the one for italic, but these type changes are a bit less common. The convention is to write <span style="font-variant:small-caps;">In Small Caps</span> for a passage with I, S and C in full caps with the remaining letters in small caps. Aside from being unwieldy, this captures exactly what is happening and provides a pattern for other changes between type cases that warrant note in the transcribed text: Simply alter “small-caps” to the appropriate font-variant of interest for a particular project. It is worth remembering, however, that the goal is to digitally recreate a reading experience, so for cases of different type sizes indicating a heading, it seems sufficient to use the mark for heading. For cases of different type sizes indicating footnotes, the semantic marking for footnotes seems sufficient.

Lastly, we may come across a broken or damaged letter. One example transcribes a t that is damaged as <t>, which follows the old tradition of using angle-brackets to indicate portions of the text that have been mutilated but which the editor can recover. However, given the inconsistent use of different kinds of brackets in different kinds of editions, this might be confusing. Another option is to use editorial notes, i.e. t [*previous letter shifted*] which interrupts the text to announce the damage. A further option would be to note the t plainly and include a note at the bottom of the page, such as this one:

[*letter t on line 8 shifted*]

This last approach emphasizes that the situation with the letter t is totally comprehensible—it is just shifting type—and would not interrupt the reading experience. It seems that the choice between these last two approaches has to do with how prominent the mistake seems to be.

Material in the Skeleton: Headlines, Page Numbers and Signatures

A page of text includes not just the text of the work, but also what Genette would call paratexts which indicate the subject of chapters, location in the book or instructions to a binder. These occur in type imposed into a forme, but have a different sort of relationship to the text set in a continuous operation by the compositors. Only in unusual circumstances would an author expect a printer to follow their page numbers, so compositors normally just provide the body text and footnotes; when these are imposed, a skeleton forme of page numbers, signatures and headlines from the previous imposition can be moved over and the details corrected for the new set of pages. A full description of a book should account for all the textual elements of a page, but it makes perfect sense to segregate the information that was inserted as a guide surrounding the main text from the text itself. Since this project includes a conventional bibliographical description, this information can be put there while the text transcriptions can focus on the text that forms the work of the compositors before the material was imposed in a skeleton forme.

Spacing Between Letters and Around Punctuation

Eighteenth-century printers often used spacing around punctuation differently than we do now. A semicolon might have a thin space before it and a thick space after it. To fit some punctuation into a line, there might be no space after a period before the next sentence begins. In another line, the space after a period might be exceptionally large, or the spaces between words exceptionally large. A compositor might also put extra spaces between the letters of a word to give it emphasis as a heading, which seems like a semantic choice rather than one that aims to preserve the justification of the line. That is, there seem to be two possible reasons for a noticeable variation in space: the need to produce a line of type of a certain width, or the desire to indicate a type of semantic information. Experience must be the guide in distinguishing between these two, but the situation should generally be clear after some study. The problem then becomes how to represent variations in spacing width when they are judged to carry semantic meaning.

Since each line is justified separately, the unit of analysis must be the line. Different widths of spaces between two different lines almost certainly represent the need to justify that particular line, but different widths within a line may—if the editor judges it to have meaning—be transcribed within that line. To encode this, Unicode provides a range of spaces of different widths. The characters “HAIR SPACE,” “PUNCTUATION SPACE,” “EM SPACE,” “THREE-PER-EM SPACE,” “FOUR-PER-EM SPACE,” “SIX-PER-EM SPACE” cover a wide variety of types of spaces and widths of spaces (the Unicode Standard itself covers these in far more detail). The most sensible treatment of space, since a compositor would not really be distinguishing a space used for punctuation from a similarly narrow space, would be to follow Peter Blayney’s approach in the Appendix to his Texts of King Lear.5 When different sizes of space appear in the same line, simply use different sizes of space in the transcription to indicate that. The only modification for our project is that those spacing elements ought—in the judgment of the editor—to bear some sort of semantic meaning.
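Because these characters are invisible on the page, it can help to resolve them by name when preparing or checking a transcription. A small Python sketch using only the standard library (the sample line is invented):

import unicodedata

# The space characters named above, resolved from the Unicode Character Database.
SPACES = {name: unicodedata.lookup(name) for name in (
    'HAIR SPACE',          # U+200A
    'PUNCTUATION SPACE',   # U+2008
    'EM SPACE',            # U+2003
    'THREE-PER-EM SPACE',  # U+2004
    'FOUR-PER-EM SPACE',   # U+2005
    'SIX-PER-EM SPACE',    # U+2006
)}

# A line with a punctuation space before the semicolon and an em space after it.
line = 'He replied' + SPACES['PUNCTUATION SPACE'] + ';' + SPACES['EM SPACE'] + 'and paused.'

A transcription stored this way can later be audited with unicodedata.name() to list exactly which spaces appear in each line.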

An Example Page

This post has discussed a wide range of choices in transcribing a text to preserve the reading experience of a printed book. The approach can be summarized simply: use Markdown and Unicode, and make your judgments clear in square brackets wherever alterations are needed to fit the text into Markdown and Unicode. Yet it can be useful to have an example—however fabricated—that brings these elements together:

[*two rules*]
#AN\
EXAMPLE PAGE

T^2^he *text* on this page isn't in any book, but <span style="font-variant:small-caps;">Demonstrates</span> some techniques you might use to tranſ-\
scribe texts as you see them. Note that each line breaks with a\
backſlash before the newline. [^1] This signals the difference between\
a newline needed to fit the text on one screen and one which rep-\
resents an actual line break. "We find that the quotes run\
" along the side for extended quotes. Just as they do in\
" eighteenth century texts." And , that punctuation spaces can be\
coded as such . What a text! [*last word poorly inked, could be "hex"*]

[^1]: before the newline ] a newline is a special sort of chara-\
cter that means you begin a new line of text.

[*the letter t in "techniques" on the first line of the text is shifted upward*]

One way this would render by default would be:

[two rules]

#E X A M P L E P A G E

T2he text on this page isn’t in any book, but Demonstrates some techniques you might use to tranſ-
scribe texts as you see them. Note that each line breaks with a
backſlash before the newline.6 This signals the difference between
a newline needed to fit the text on one screen and one which rep-
resents an actual line break. “We find that the quotes run
” along the side for extended quotes. Just as they do in
” eighteenth century texts.” And , that punctuation spaces can be
coded as such . What a text! [last word poorly inked, could be “hex”]

[the letter t in “techniques” on the first line of the text is shifted upward]

  1. Timothy Morton, Ecology Without Nature: Rethinking Environmental Aesthetics (Cambridge, Mass.: Harvard University Press, 2009).
  2. David L. Vander Meulen and G. Thomas Tanselle, “A System of Manuscript Transcription,” Studies in Bibliography 52 (1999): 201–12.
  3. For those unfamiliar: brackets are a standard symbol used to indicate editorial additions, and italics distinguish descriptions and explanations from the roman text. Since “big brown” is conjectured to belong to the final text, the words are placed outside the brackets; since “funny” is conjectured to belong to an earlier version, the word is placed inside the brackets.
  4. Fredson Bowers, Principles of Bibliographical Description (New York: Russell & Russell, 1962).
  5. Peter W.M. Blayney, The Texts of King Lear and Their Origins (Cambridge: Cambridge University Press, 1982); Blayney is studying the recurrence of types and so chooses to transcribe both semantic meaning and the evidence he finds of the typographical habits of the compositors.
  6. before the newline ] a newline is a special sort of chara-
    cter that means you begin a new line of text.

Neatline 2.5.2

Wed, 05/07/2017 - 18:15

New release!

First, a huge thank you to Jamie Folsom and Andy Stuhl from Performant Software Solutions LLC, who did the heavy lifting on the coding for this release. We couldn’t have done it without them. We’re grateful, as well, to Neatline community member Adam Doan (@doana on GitHub) from the University of Guelph, whose code contributions made Neatline’s first accessibility functionality possible.

What’s Fixed:

Google Maps API issues. We originally embedded the API key for Google Maps directly in the Neatline code, but Google changes the way apps should connect to its codebase fairly regularly, and with little or no warning. It’s just easier for everyone if you can directly configure an API key for your specific installation of Neatline, so that’s what we’ve done. Updated installation and configuration instructions (with screencaps!) are available on our documentation site.

WMS map layer issues. We thought we had this one squished, but it came back again because of issues with our implementation of OpenLayers 2.0 and conflicts with the way that MapWarper passes data via URL. MapWarper WMS layers will now render properly as exhibit items and as base layers for an exhibit.

What’s New:

Accessibility. Thanks to Neatline community member @doana, you can now specify a URL to an accessible version of your Neatline exhibit in the exhibit’s settings. If the accessible URL exists, a hidden link will be rendered at the top of the public exhibit page directing users of assistive technology to the alternative page so that their screen reader can render the page for them. This feature relates specifically to Guideline 1.1 of WCAG 2.0. Our documentation of this new feature will be available on docs.neatline.org by July 10, 2017.

For more detail on this update, check out the Changelog.

Ready to download? Get the latest release from the Omeka Add-Ons Repository.

Encounter an issue? Ask a question on the Omeka Forums, or submit an issue or feature request directly to us on our issue tracker.

Job Opening: Curious about focusing on DH development?

Tue, 27/06/2017 - 19:24

You might have seen our opening for a Senior Developer—we’re now seeking an additional colleague for our R&D team: DH Developer! Apply here (posting number #0621212), or read on for more information.

We welcome applications from women, people of color, LGBTQ people, and others who are traditionally underrepresented among software developers. In particular, we invite you to contact us even if you do not currently consider yourself to be a software developer. We seek someone with the ability to collaborate and to expand their technical skill set in creative ways.

This is a full-time position, with flexibility to help you achieve a healthy work-life balance. Like all our team members, you will devote 20% of your time to your own research initiatives and professional development. This “personal R&D” time includes access to our experimental humanities makerspace, other high-end facilities, and expert collaborators and mentors.

About us

The University of Virginia Library is the hub of a lively and growing community of practice in technology and the humanities, arts, and social sciences. As part of that community, the Scholars’ Lab has risen to international pre-eminence as a library-based center for digital humanities. The Scholars’ Lab collaborates with faculty, librarians, and students on a range of projects and tools, including spatial humanities, interface design, innovative pedagogy, data visualization, text analysis, digital archiving, 3D modeling, virtual reality and gaming, and other experimental humanities approaches.

The Library and the Scholars’ Lab are committed to diversity, inclusion, and safe spaces, and we have focused recent speaker series and practice on accessibility and social justice (check out our team-authored charter for more on our values). We welcome curious, critical, and compassionate professionals who are keenly interested in the overlaps between technology and the humanities (literature, history, art, cultural heritage, and related fields).

The Scholars’ Lab currently consists of 11 staffers (plus a senior developer role), as well as an amazing cohort of graduate fellows and makerspace technicians.

Anticipated salary range: $65,000-75,000, plus benefits such as professional development/conference travel funding, health insurance, and retirement savings plan

Responsibilities

Under the direction of the Head of R&D for the Scholars’ Lab in the UVA Library, the DH Developer

  • works with scholars from the humanities and social sciences to understand their needs and define project goals
  • provides professional opinions on appropriate project deliverables and reasonable schedules for completion of projects
  • collaborates on building applications that enable scholars and library users to collect, manage, produce, manipulate, or analyze digital information resources in interesting ways
  • writes original code, and tests and improves on existing code
  • learns about and engages with new technologies toward widening and deepening the Scholars’ Lab’s pool of staff expertise
  • creates documentation for both internal Lab and external non-technical audience use

Qualifications 

Minimum requirements:

  1. Experience equivalent to one full-time year with either a programming language (such as—but not limited to—PHP, Ruby, Python, Java), or HTML, CSS, and JavaScript
  2. 2 years of web development experience, with tech skills demonstrated in an accessible portfolio of work.
  3. Familiarity with a code version control system such as Git.
  4. Ability to work and communicate with technical and non-technical collaborators.
  5. Either education through the master’s level or equivalent experience through your job, hobbies, or other activities, preferably in the humanities or library/information science.
  6. Interest in the humanities (literature, history, art, cultural heritage, etc.)

Preferred:

  1. Graduate degree or equivalent professional or other experience in the humanities or social sciences.
  2. Knowledge of and interest in the digital humanities.
  3. Experience with collaborative project work.
  4. Experience with any of the following: data collections, analysis, visualization, and interpretation; front-end web development and design; back-end web development; systems and database management; text analysis or image analysis methods and tools; frameworks such as Rails, Django, and Zend; TEI, XML, Solr, Cocoon, Tomcat.
  5. Experience taking initiative to suggest or begin new projects, and to carry out projects with little supervision.

Interested?

You can apply here (posting number #0621212), but please feel free to reach out with any questions—for yourself or a friend—by emailing visconti@virginia.edu or tweeting @scholarslab or @literature_geek. In particular, we’re very happy to talk with anyone who’s interested, but not sure whether they have the required technical background. All job discussions will be treated as confidential.

LAMI Summer Fellows 2017

Wed, 21/06/2017 - 16:30

For the third year in a row, the Scholars’ Lab and the University of Virginia Library are helping host summer fellows from the Leadership Alliance Mellon Initiative (LAMI) at UVA. The students will pursue original research this summer at UVA in consultation with a faculty mentor. For our part, the Scholars’ Lab and the Library have worked with Keisha John, Director of Diversity Programs in the Office of Graduate and Postdoctoral Affairs, to organize a weekly series of workshops introducing the students to digital humanities and library research methods. They’ll be getting a broad introduction to digital research and the resources of the library as they think towards graduate school, and we’ve also coordinated weekly board game sessions over lunch (for SLab-style bonding).

In addition to introducing these students to the resources available at UVA and in the library system, the program aims to increase the number of demographically underrepresented students pursuing graduate work and careers in the academy. You can find more information about the program in a 2015 press release put out by UVA Today when our first cohort was in residence. Two of our own graduate fellows, Jordan Buysse and Sarah McEleney, are serving as DH mentors.

These are the students that you might meet if you happen to be around the Scholars’ Lab this summer. Look for more information about them and their projects by clicking through to their bios!

They’re a fantastic group, and we’re excited to work with them this summer. Thanks to all of our colleagues at UVA, the Library, and the Scholars’ Lab for their participation in the program.


What Should You Do in a Week?

Mon, 05/06/2017 - 16:00

[Cross-posted to my personal blog.]

For the past several years, I’ve taught a Humanities Programming course at HILT. The course was piloted by Wayne Graham and Jeremy Boggs, but, these days, I co-teach the course with Ethan Reed, one of our DH fellows in the Scholars’ Lab. The course is a soup-to-nuts introduction to the kinds of methods and technologies that are useful for humanities programming. We’re changing the course a fair amount this year, so I thought I’d offer a few notes on what we’re doing and the pedagogical motivations for doing so. You can find our syllabus, slides, resources, and more on the site.

We broke the course down into two halves:

  • Basics: command line, Git, GitHub, HTML/CSS
    • Project: personal website
  • Programming concepts: Ruby
    • Project: Rails application deployed through Heroku and up on GitHub

In the first half, people learned the basic stack necessary to work towards a personal website, then deployed that site through GitHub Pages. In the second half, students took in a series of lessons about Ruby syntax, but the underlying goal was to teach them the programming concepts common to a number of programming languages. Then we shifted gears and had them work through a series of Rails tutorials that pushed them towards a real-life situation where they were working through and on a thing (in this case, a sort of platform for crowdsourcing transcriptions of images).

I really enjoyed teaching the Rails course, and I think there was a lot of good in it. But over the past few years it has raised a number of pedagogical questions for me:

  • What can you reasonably hope to teach in a week-long workshop?
  • Is it better to do more with less or less with more?
  • What is the upper-limit on the amount of new information students can take in during the week?
  • What will students actually use/remember from the course once the week is over?

To be fair, week-long workshops like this one often raise similar concerns for me. I had two main concerns about our course in particular.

The first was a question of audience. We got people of all different skill levels in the course. Some people were there to get going with programming for the first time. These newcomers often seemed really comfortable with the course during the first half, while the second half of the course could result in a lot of frustration when the difficulty of the material suddenly seemed to skyrocket. Other students were experienced developers with several languages under their belt who were there specifically to learn Rails. The first half of the course seemed to be largely review for this experienced group, while the second half was really what they were there to take on.  It’s great that we were able to pull in students with such diverse experiences, but I was especially concerned for the people new to programming who felt lost during the second half of the course. Those experienced folks looking to learn Rails? I think they can probably find their way into the framework some other way. But I didn’t want our course to turn people off from programming because the presentation of the material felt frustrating. We can fix that. I always feel as though we should be able to explain these methods to anyone, and I wanted our alumni to feel that they were empowered by their new experiences, not frustrated. I wanted our course to reflect that principle by focusing on this audience of people looking for an introduction, not an advanced tutorial.

I also wondered a lot about the outcomes of the course. I wondered how many of the students really did anything with web applications after the course was over. Those advanced students there specifically for Rails probably did, and I’m glad that they had tangible skills to walk away with. But, for the average person just getting into digital humanities programming, I imagine that Rails wasn’t something they were going to use right away. After all, you use what you need to do what you need. And, while Rails gives you a lot of options, it’s not necessarily the tool you need for the thing in front of you – especially when you’re starting out.

So we set about redesigning the course with some of these thoughts in mind and with a few principles:

  • Less is more.
  • A single audience is better than many.
  • If you won’t use it, you’ll lose it.

I wondered how we might redesign the course to better reflect the kinds of work that are most common to humanists using programming for their work. I sat down and thought about common tasks that I use programming for beyond building apps/web services. I made a list of some common tasks that, when they confront me, I go, “I can write a script for that!” The resulting syllabus is on the site, but I’ll reiterate it here. The main changes took place in the second half of the course:

  • Basics: command line, git, GitHub, HTML/CSS
    • Project: personal website
  • Programming concepts: Python
    • Project(s): Applied Python for acquiring, processing, and analyzing humanities data

The switch from Ruby to Python reflects, in part, my own changing practices, but I also find that Pythonic syntax enforces good stylistic practices in learners. In place of working on a large Rails app, we keep the second half of the course focused on the daily tasks that programming is good for. After learning the basic concepts of Python, students work through a few case studies of applied Python. Like all our materials, these are available on our site, and I’d encourage interested folks to check out the Jupyter notebooks for those units. The new units focus on applications of Python to typical situations.

In the process of working through these materials, the students work with real, live humanities data drawn from Project Gutenberg, the DPLA, and the Jack the Ripper Casebook. We walk the students through a few different options for building a corpus of data and working with it. After gathering data, we talk about problems with it and how to use it. Of course, you could run an entire course on such things. Our goal here is not to cover everything. In fact, I erred on the side of keeping the lessons relatively lightweight, with the assumption that the jump in difficulty level would require us to move pretty slowly. The main goal is to show how situations that appear to be much more complicated still boil down to the same basic concepts the students have just learned. We want to shrink the perceived gap between those beginning exercises and the kinds of scripts that are actually useful for your own day-to-day work. We introduce some slightly more advanced concepts along the way, but hopefully enough of the material will remain familiar that the students can excel. Ideally, the concepts we work through in these case studies will be more immediately useful to someone trying to introduce programming into their workflow for the first time. And, in being more immediately useful, the exercises might be more likely to give a lasting foundation for them to keep building on into the future.
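The notebooks themselves are on the course site, but the flavor of these units is easy to suggest: a handful of lines that turn a text into a first rough analysis. A minimal sketch, assuming a plain-text ebook already saved locally from Project Gutenberg (the file name is hypothetical):

import re
from collections import Counter

# Read a plain-text ebook saved locally from Project Gutenberg.
with open('gutenberg_book.txt', encoding='utf-8') as f:
    text = f.read()

# Split into lowercase words and print the twenty most frequent.
words = re.findall(r"[a-z']+", text.lower())
print(Counter(words).most_common(20))

Nothing here goes beyond the concepts covered in the first Python lessons, which is exactly the point.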

We’ve also rebranded the course slightly. The course description has changed, as we’ve attempted to soften jargon and make it clear that students are meant to come to the course not knowing the terms or technologies in the description (they’re going to learn them with us!). The course name has changed as well, first as a joke but then in a serious way. Instead of simply being called “Humanities Programming,” the course is now “Help! I’m a Humanist! – Programming for Humanists with Python.” The goal there is to expose the human aspect of the course – no one is born knowing this stuff, and learning it means dealing with a load of tough feelings: anxiety, frustration, imposter syndrome, etc. I wanted to foreground all of this right away by making my own internal monologue part of the course title. The course can’t alleviate all those feelings, but I hoped to make it clear that we’re taking them into account and thinking about the human side of what it means to teach and learn this material. We’re in it together.

So. What can you do in a week? Quite a lot. What should you do – that’s a much tougher question. I’ve timed this post to go out right around when HILT starts. If I figure it out in the next week I’ll let you know.