In this video, I describe a [relatively] easy method for adding subtitles to videos. Why would you want to do this? Well, beyond the obvious reasons, like providing accessibility for viewers with hearing disabilities, we can use subtitles to create a moment-by-moment account of a video's contents.
For the Last House on the Hill project, we have been working to develop a methodology for exploring primary data and associated multimedia [movies, videos, texts, data sets] at a very fine-grained level, looking for the intrinsic relationships that are often buried within these digital objects. Anyone who has attempted to annotate or add subtitles to videos knows what a painstaking job it can be. In our case, with over 75 hours of field recordings on videotape, manually annotating the videos is simply beyond the time and energy we have.
Following the method described in this video blog post, it is possible to produce high-quality subtitles in relatively short order. There is an added benefit: YouTube does a good job of producing useful timecode that we can associate with other "entities" in our project, the people, places, things, and other media that are related to a particular moment in a particular video. This is kind of awesome.
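To give a sense of what that timecode buys us, here is a minimal sketch of converting an SRT-style subtitle timecode into plain seconds, the form we would store alongside entity links. The helper name and the SRT format assumption are ours, not part of the method described above:

```python
def srt_time_to_seconds(t: str) -> float:
    """Convert an SRT timecode like '00:01:23,500' to seconds.

    SRT timecodes are HH:MM:SS,mmm (comma before milliseconds).
    """
    hh, mm, rest = t.split(":")
    ss, ms = rest.split(",")
    return int(hh) * 3600 + int(mm) * 60 + int(ss) + int(ms) / 1000

# One minute, 23.5 seconds into a clip:
print(srt_time_to_seconds("00:01:23,500"))  # 83.5
```

Stored as seconds, these values are trivial to compare, sort, and hand to a video player's seek function.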
Here’s an example from https://vimeo.com/5750065, a field video diary about the progress being made on a particular burial feature in Çatalhöyük. We have broken down the caption into distinct and related entities: the people, places, dates, and related media it mentions.
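As a rough sketch, one caption's entity breakdown might be represented as structured data like the following. The field names are hypothetical; the values come from the example discussed in this post:

```python
# One caption from the field video diary, broken into the entity
# types we extract. Field names are illustrative, not a fixed schema.
caption = {
    "video": "https://vimeo.com/5750065",
    "people": ["Lori Hager"],
    "places": ["Feature 617"],
    "dates": ["2000-07-31"],
    "media": ["video clip 11", "video clip 12"],
}

print(caption["places"])  # ['Feature 617']
```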
Once a selection of videos that will help us tell the excavation story has been chosen, it will be worth going through and adding a time-coded subtitle track, and then using that track, along with a well-written caption, to make it possible for users of our digital publication to randomly access these videos in order to focus on a particular person (Lori Hager), place (Feature 617), or date (July 31, 2000). This "faceted" experience, we believe, will lead to a much deeper engagement with the archaeological process as we experienced it.

This particular video is only 2 minutes long, but in order to create the rich caption, video clips 11 and 12 needed to be watched, notes needed to be made, and the various entities within the caption needed to be extracted. We estimate that there are over 900 of these clips. No matter how efficient we make the process, it will still take hundreds or even thousands of hours to go through them all.
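The faceted access described above can be sketched as an inverted index from entities to timecoded moments. This is only an illustration of the idea, assuming cues carry entity tags; the cue times and tag format are hypothetical, while the entity values come from the example in this post:

```python
# A minimal sketch of a "faceted" index: each subtitle cue carries
# start/end times (in seconds) plus the entities tagged in it, so a
# user can jump straight to the moments mentioning a person, place,
# or date. Cue times and the tag format are hypothetical.
from collections import defaultdict

cues = [
    {"video": "vimeo.com/5750065", "start": 12.0, "end": 27.5,
     "entities": {"person:Lori Hager", "place:Feature 617",
                  "date:2000-07-31"}},
    {"video": "vimeo.com/5750065", "start": 58.0, "end": 74.0,
     "entities": {"place:Feature 617"}},
]

# Invert the cues into an entity -> [(video, start)] index.
index = defaultdict(list)
for cue in cues:
    for entity in cue["entities"]:
        index[entity].append((cue["video"], cue["start"]))

# Every moment in which Feature 617 appears:
print(index["place:Feature 617"])
# [('vimeo.com/5750065', 12.0), ('vimeo.com/5750065', 58.0)]
```

Given such an index, focusing on a person, place, or date is just a lookup followed by a seek to each listed start time.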
In later posts, we’ll go through the process of codifying the videos and linking them to the rich world of data held within the Last House on the Hill.