Geoffrey Rockwell

Research notes taken on subjects around multimedia, electronic texts, and computer games.

Canadian Social Knowledge Institute

Wed, 20/09/2017 - 23:02

I just got an email announcing the soft launch of the Canadian Social Knowledge Institute (C-SKI). This institute grew out of the Electronic Textual Cultures Lab and the INKE project. Part of C-SKI is an Open Scholarship Policy Observatory, which has a number of partners through INKE.

The Canadian Social Knowledge Institute (C-SKI) actively engages issues related to networked open social scholarship: creating and disseminating research and research technologies in ways that are accessible and significant to a broad audience that includes specialists and active non-specialists. Representing, coordinating, and supporting the work of the Implementing New Knowledge Environments (INKE) Partnership, C-SKI activities include awareness raising, knowledge mobilization, training, public engagement, scholarly communication, and pertinent research and development on local, national, and international levels. Originated in 2015, C-SKI is located in the Electronic Textual Cultures Lab in the Digital Scholarship Centre at UVic.


Skinner on his Teaching Machine and programmed learning

Sun, 03/09/2017 - 00:52

Via Teaching in a Digital Age by A. W. Bates, I came across this 1954 video of Skinner explaining his Teaching Machine, inspired by behaviourism. The machine runs a paper script, but it isn’t that different from the computer-based drill training of today. You get a question, you write your answer, and you get feedback.

Later we got machines that projected slides and hypertext systems. See Programmed Instruction and Teaching Machines.
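The drill pattern the Teaching Machine implements — present an item, accept a response, give immediate feedback, advance — is simple enough to sketch in a few lines of Python. The questions here are made up for illustration:

```python
# A minimal drill loop in the spirit of Skinner's Teaching Machine:
# present an item, check the learner's response, give immediate feedback.

DRILL = [
    ("7 x 8", "56"),
    ("9 x 6", "54"),
]

def check(answer, expected):
    """Return the feedback string the machine would display."""
    if answer.strip() == expected:
        return "Correct -- advance to the next frame."
    return f"Incorrect -- the answer is {expected}. Try the next frame."

def run_drill(responses):
    """Run the drill against a list of canned responses; return feedback."""
    return [check(r, expected) for r, (_, expected) in zip(responses, DRILL)]
```

The point of the sketch is how little has changed: swap the paper script for a list of items and the behaviourist loop is the same.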


Hey, Computer Scientists! Stop Hating on the Humanities

Fri, 01/09/2017 - 21:40

Wired Magazine has a nice essay on Hey, Computer Scientists! Stop Hating on the Humanities. The essay by a computer scientist argues that CS students need to study the ethical and social implications of what they build. It can’t be left to others because then it will be too late. Further, CS students should be scared a little:

Professors need to scare their students, to make them feel they’ve been given the skills not just to get rich but to wreck lives; they need to humble them, to make them realize that however good they might be at math, there’s still so much they don’t know.


Replaying Japan 2017

Sun, 27/08/2017 - 18:52
Playing Missile Command at the Strong (Photo by Okabe)

Last week I was at the 5th International Conference on Japan Game Studies, Replaying Japan 2017. You can see my conference notes here. The conference was held in the Strong Museum of Play which has a terrific video game collection and exhibit. (I vote for holding all conferences in museums!)

Some of the highlights included:

  • A keynote by Tom Kalinske on how “The Experts are Always Wrong.” Kalinske was brand manager for Barbie in the early days and headed up SEGA America when it went up against Nintendo.
  • A keynote by Rachael Hutchinson on “Refracted Visions: Transmedia Storytelling in Japanese Games.” Hutchinson did a great job at discussing all the different forms of “trans”-media in Japanese game culture.
  • A tour of the Strong archives, which contain everything from Ralph H. Baer’s papers to a large number of working arcade game cabinets. (See my Flickr album on the Strong.)

Keiji Amano gave a paper I co-authored, “On the Infrastructure of Gaming: The Case of Pachinko,” in which we dealt with infrastructure like the ball-feeding machines, the “Hall Computers” that manage the machines, and content.

I also spoke about digital archiving at the University of Alberta and the infrastructure needed.


Conference Report: DH 2017

Fri, 25/08/2017 - 23:11

This year I kept notes about the Digital Humanities 2017 conference at McGill. See DH 2017 Conference Report. My conference report also covers the New Scholars Symposium that took place beforehand.

The NSS is supported by CHCI and centerNet. KIAS provided administrative support and the ACH provided coffee and snacks on the day. We were lucky to have so many groups supporting the NSS, which in turn supports new scholars in coming to the conference and articulating their issues in an unconference format.

DH 2017 itself was a rich feast of ideas. There was too much going on to summarize in a paragraph, but here are two highlights:

  • We had an opening keynote in French from Marin Dacos. He talked about the “Unexpected Reader” that one gets when publications are open.
  • We had a great closing keynote by Elizabeth Guffey on “The Upside-Down Politics of Access in the Digital Age” that asked about access for disabled people in the digital realm.

The participants of the New Scholars Symposium identified the following as topics to watch and think about:

  • AI and Machine Learning
  • Crowdsourcing
  • Building Twitterbots
  • Training Opportunities
  • Pedagogy
  • Digital Collections and Copyright
  • Diverse Voices

Alice and Bob: the World’s Most Famous Cryptocouple

Tue, 01/08/2017 - 16:52

Alice and Bob is a web site and paper by Quinn DuPont and Alana Cattapan that nicely tells the history of the famous virtual couple used to explain cryptology.

While Alice, Bob, and their extended family were originally used to explain how public key cryptography works, they have since become widely used across other science and engineering domains. Their influence continues to grow outside of academia as well: Alice and Bob are now a part of geek lore, and subject to narratives and visual depictions that combine pedagogy with in-jokes, often reflecting the sexist and heteronormative environments in which they were born and continue to be used. More than just the world’s most famous cryptographic couple, Alice and Bob have become an archetype of digital exchange, and a lens through which to view broader digital culture.

The web site provides a timeline going back to 1978. The history is then explained more fully in the full paper (PDF). They end by talking about the gendered history of cryptography. They mention other examples where images of women serve as standard test images like the image of Lena from Playboy.
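Since the whole point of Alice and Bob is explaining public-key cryptography, a toy version of their exchange may help. This is textbook RSA with tiny primes — completely insecure, purely to illustrate the scenario where Bob publishes a public key and Alice encrypts to him:

```python
# Toy RSA with tiny textbook primes -- insecure, for illustration only.
# Bob publishes (n, e); Alice encrypts with it; only Bob can decrypt.

p, q = 61, 53            # Bob's secret primes
n = p * q                # 3233, the public modulus
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # 3120
d = pow(e, -1, phi)      # private exponent; modular inverse needs Python 3.8+

def encrypt(m, n=n, e=e):
    return pow(m, e, n)  # Alice uses Bob's public key

def decrypt(c, n=n, d=d):
    return pow(c, d, n)  # only Bob, who knows d, can undo it
```

Eve, watching the wire, sees only the ciphertext and the public key — which is exactly the story the couple was invented to tell.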

The design of the site nicely shows how a paper can be remediated as an interactive web site. It isn’t that fancy, but you can navigate the timeline and follow links to get a sense of this “couple”.


Secretary Clinton’s Email (Source: Wikileaks)

Wed, 19/07/2017 - 22:30

Thanks to Sarah I was led to a nice custom set of visualizations by Salahub and Oldford of Secretary Clinton’s Email (Source: Wikileaks). The visualizations are discussed in a paper titled Interactive Filter and Display of Hillary Clinton’s Emails: A Cautionary Tale of Metadata. Here is how the article concludes:

Finally, this is a cautionary tale. The collection and storage of metadata from any individual in our society should be of concern to all of us. While it is possible to discern patterns from several sources, it is also far too easy to construct a false narrative, particularly one that fits an already held point of view. As analysts, we fall prey to our cognitive biases. Interactive filter and display of metadata from a large corpus of communications add another tool to an already powerful analytic arsenal. As with any other powerful tool it needs to be used with caution.

Their cautionary tale touches on the value of metadata. After the Snowden revelations government officials like Dianne Feinstein have tried to reassure us that mining metadata shouldn’t be a concern because it isn’t content. Research like this shows what can be inferred from metadata.
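To make that concrete, consider how much falls out of metadata alone — sender, recipient, timestamp, no message content. A small sketch with invented records:

```python
# What metadata alone reveals: no message bodies, just sender, recipient,
# and hour of day -- yet communication patterns fall out immediately.
# The records below are invented for illustration.
from collections import Counter

metadata = [
    {"from": "alice", "to": "bob",   "hour": 23},
    {"from": "alice", "to": "bob",   "hour": 23},
    {"from": "alice", "to": "carol", "hour": 9},
    {"from": "bob",   "to": "alice", "hour": 23},
]

# Who talks to whom, and how often?
pairs = Counter((m["from"], m["to"]) for m in metadata)

# When do they talk? Late-night traffic alone suggests a relationship.
late_night = sum(1 for m in metadata if m["hour"] >= 22)

top_pair, count = pairs.most_common(1)[0]
```

A few lines of counting already yields a social graph and a schedule — which is why "it isn't content" is such thin reassurance, and also why, as the authors warn, such patterns invite false narratives.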


DataCamp

Mon, 17/07/2017 - 22:09

I’ve been playing with DataCamp’s Python lessons and they are quite good. Python is taught in the context of data analysis rather than the turtle drawing of How to Think Like a Computer Scientist. They have a nice mix of video tutorials and exercises where you get a tripartite screen: an explanation and instructions on the left, a short script to fill in on the upper right, and an interactive Python shell below where you can try things out.

I’m working through it as a potential programming text for my upcoming Big Data and Text Analysis class. In the past I have used How to Think Like a Computer Scientist, which is well done, but not all of its exercises are relevant. There is a book version of this introduction titled Think Python.

Stéfan Sinclair, with some help from me, has created a nice set of materials on The Art of Literary Text Analysis. These are Jupyter notebooks that walk students through everything from setting up Jupyter to topic modelling. Other good Python tutorials from the digital humanities can be found at the Programming Historian, which has a series of modular lessons that cover the basics and have been peer reviewed.
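The first steps such notebooks walk students through — loading a text, tokenizing it, filtering stop words, counting — look something like this in plain Python (the text here is a tiny stand-in for a full novel you would load from a file):

```python
# First steps of a literary text analysis: tokenize, drop stop words, count.
# The text is a placeholder for a full work loaded from disk.
import re
from collections import Counter

text = "It was the best of times, it was the worst of times."
STOP = {"it", "was", "the", "of"}  # a toy stop-word list

tokens = re.findall(r"[a-z]+", text.lower())        # lowercase word tokens
freqs = Counter(t for t in tokens if t not in STOP)  # content-word frequencies
```

From a frequency table like this the notebooks move on to concordances, distribution plots, and eventually topic modelling.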

The advantage of DataCamp is the interactive exercises, though I imagine at a certain point it is better if students work in their own programming environment rather than the constrained one provided. I should add that DataCamp has a free DataCamp for the Classroom: if you sign up, you can create a class, invite students to use the lessons, and track how they are completing the materials.


What It’s Like to Use an Original Macintosh in 2017 – The Atlantic

Wed, 12/07/2017 - 03:35

The Internet Archive’s new software emulator will take you back to 1984.

From Twitter again (channelled from Justin Trudeau) is a story in The Atlantic about the Internet Archive’s early Macintosh emulator: What It’s Like to Use an Original Macintosh in 2017. The emulator comes with a curated set of apps and games, including Dark Castle, which I remember my mother liking. (I was more fond of Déjà Vu.) It even lets you see what MacPaint 2.0 looked like back then.

I’m amazed they can emulate the Mac OS in JavaScript. I’m also amazed at the community of people coming together to share old Mac software, manuals, and books with the IA.


Calling Bullshit: Syllabus

Wed, 12/07/2017 - 02:04

Each of our lectures will explore one specific facet of bullshit. For each week, a set of required readings are assigned. For some weeks, supplementary readings are also provided for those who wish to delve deeper.

On Twitter I came across this terrific syllabus: Calling Bullshit: Syllabus. The syllabus is learned, full of useful links, clear and funny. I wish I could write a syllabus like this. For example, here are some of the learning objectives:

  • Recognize said bullshit whenever and wherever you encounter it.
  • Figure out for yourself precisely why a particular bit of bullshit is bullshit.

What objective could be more important in the humanities?



The Real Threat of Artificial Intelligence – The New York Times

Sun, 25/06/2017 - 21:00

It’s not robot overlords. It’s economic inequality and a new global order.

Kai-Fu Lee has written a short and smart speculation on the effects of AI, The Real Threat of Artificial Intelligence. To summarize his argument:

  • AI is not going to take over the world the way the sci-fi stories have it.
  • The effect will be on tasks, as AI takes over work that people are paid to do, putting them out of work.
  • How then will we deal with the unemployed? (This is a question people asked in the 1960s, when the first wave of computerization threatened massive unemployment.)
  • One solution is “Keynesian policies of increased government spending,” paid for by taxing the companies made wealthy by AI. This spending would pay for “service jobs of love” where people act as the “human interface” to all sorts of services.
  • Those in jobs that can’t be automated and that make lots of money might also scale back their time at work so as to provide more jobs of this sort.


So far the essay follows a fairly well worn path, one followed by speculation in the 1960s about how we had to get better at leisure. Where Lee gets interesting is in his reflections on the globalization of this trend.

  • AI businesses tend to concentrate, especially big-data driven AI, so there will be fewer and fewer that are more and more globally dominant.
  • The dominant AI businesses will be in the US and China. The US will specialize in the developed world and China in the developing world. What happens to other countries with no rich AI businesses to tax in order to employ the unemployed?
  • Countries that have decreasing populations will have an advantage as they will have fewer unemployed to deal with. Large and growing populations will become an economic disadvantage.
  • Countries without profitable AI businesses to tax and with growing populations will end up dependent on the few wealthy countries (the US and China), leading to a new global order.

So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.


Teaching machines to understand text

Sun, 25/06/2017 - 20:21

Teaching machines to understand – and summarize – text is an article from The Conversation about the use of machine learning in text summarization. The example they give is how machines could summarize software licenses in ways that would make them more meaningful to us. While this seems a potentially useful application, I can’t help wondering why we don’t expect the licensors to summarize their licenses in ways that we can read. Or, barring that, why not make cartoon versions of the agreements like Terms and Conditions.

The issues raised by the use of computers in summarizing texts are many:

  • What is proposed would only work in a constrained situation like licenses, where the machine can be trained to classify text using some sort of training set. It is unlikely to surprise you with poetry (not that it is meant to).
  • The idea is introduced with the ultimate goal of reducing all the exabytes of data that we have to deal with. This is the “too much information” trope again. The proposed solution doesn’t really deal with the problems that have bedevilled us since we started complaining, since part of the problem is too much information of unknown types. That is not to say that machine learning doesn’t have a place, but it won’t solve the underlying problem (again).
  • How would the licensors react if we had tools to digest the text we have to deal with? The licensors will have to think about the legal liability (or advantage) of presenting text we won’t read but which will be summarized for us. They might choose to be opaque to analytics in order to force us to read for ourselves.
  • Which raises the question of just what the problem with too much information is. Is it the expectation that we will consume it in some useful way? Is it that we have no time left for just thinking? Is it that we are constantly afraid that someone will have said something important already and we missed it?
  • A wise colleague asked what it would take for something to change us. Are we open to change when we think of too-much-information as something to be handled? Could machine learning become another wall in the interpretative ghetto we build around us?
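To give a flavour of what extractive summarization involves, here is a minimal frequency-based sketch. The systems the article describes use trained machine-learning models; this toy simply scores each sentence by the average document frequency of its words and keeps the top scorers:

```python
# A minimal extractive summarizer: score each sentence by the frequency of
# its words across the document, keep the top-scoring ones in original order.
# Real systems use trained models; this only shows the basic shape.
import re
from collections import Counter

def summarize(text, n_sentences=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(s):
        toks = re.findall(r"[a-z]+", s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    ranked = sorted(sentences, key=score, reverse=True)
    chosen = set(ranked[:n_sentences])
    return " ".join(s for s in sentences if s in chosen)
```

Run on a license-like text, it surfaces the repeated obligations — the constrained, formulaic situation such methods need — and would indeed never surprise you with poetry.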

CSDH 2017 conference

Tue, 06/06/2017 - 14:42

Last week I was at the Congress of the Humanities and Social Sciences attending the Canadian Society for Digital Humanities 2017 conference. (See the program here.) It was a brilliant conference organized by the folk at Ryerson. I loved being back in downtown Toronto. The closing keynote by Tracy Fullerton on Finer Fruits: A game as participatory text was fascinating. You can see my conference notes here.

Stéfan Sinclair and I were awarded the Outstanding Achievement Award for our work on Voyant and Hermeneutica. I was also involved in some of the presentations:

  • Todd Suomela presented a paper I contributed to on “GamerGate and Digital Humanities: Applying an Ethics of Care to Internet Research.”
  • I presented a paper with Stéfan Sinclair on “The Beginnings of Content Analysis: From the General Inquirer to Sally Sedelow.”
  • Greg Whistance-Smith presented a demo/poster on “Methodi.ca: A Commons for Text Analysis Methods.”
  • Jinman Zhang presented a demo/poster on our work on “Commenting, Gamification and Analytics in an Online Writing Environment: GWrit (Game of Writing).”