This was my first time attending the Digital Humanities Summer Institute (DHSI) at the University of Victoria, and I hope it will be the first of many. DHSI is now in its 14th year, and this year’s directorial group included Ray Siemens (University of Victoria), Constance Crompton (University of British Columbia, Okanagan), Jentery Sayers (University of Victoria), Diane Jakacki (Bucknell University), and Jason Boyd (Ryerson University). [Full bios of the directors, along with the other people who helped make DHSI happen, can be found here.] The purpose of the institute is to introduce scholars, students, librarians, and other professionals in the humanities and beyond to new computing tools and methodologies, and to train them in their use, through an intensive week-long program. You can view the full course listings offered this year, as well as past DHSI course offerings.
I enrolled in the Understanding Topic Modeling course, led by Neal Audenaert, a Senior Software Engineer at Texas A&M University’s Texas Center for Applied Technology. The course introduced participants to the algorithms, models, and theories used in topic modeling, specifically latent Dirichlet allocation (LDA), along with a variety of related models that can provide different views of your data, such as modeling topics over time (dynamic topic modeling). I’ll discuss my class experience in greater detail in a future post, with examples of the material we covered and some of the data I worked with. In this post, I will provide a brief overview of my experience and discuss some of the projects, tools, and discussions that interested me while at DHSI.
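To give a concrete sense of what LDA does: it treats each document as a mixture of topics and each topic as a distribution over words, and one common way to fit the model is collapsed Gibbs sampling. The sketch below is a toy, standard-library-only illustration of that sampler; it is not the implementation we used in class (courses like this typically work with existing toolkits such as MALLET), and the function name, parameters, and tiny corpus are all my own invention.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, n_iter=200, alpha=0.1, beta=0.01, seed=42):
    """Toy collapsed Gibbs sampler for LDA over tokenized documents."""
    rng = random.Random(seed)
    vocab = {w for doc in docs for w in doc}
    V = len(vocab)
    # Count tables: document-topic, topic-word, and total tokens per topic.
    ndk = [[0] * n_topics for _ in docs]
    nkw = [defaultdict(int) for _ in range(n_topics)]
    nk = [0] * n_topics
    # Randomly initialize a topic assignment for every token.
    z = []
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    # Resample each token's topic from its conditional distribution.
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw  # document-topic counts, topic-word counts
```

From the returned counts you can read off each document’s topic mixture and each topic’s most frequent words, which is essentially what tools like MALLET report at much larger scale.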
What I really liked about DHSI is that, unlike other institutes, discussion and learning occur through a community-based approach. Archivists, programmers, librarians, software engineers, faculty, and students all work and learn together. It is a week-long exchange of knowledge and ideas in which we can ask questions and rethink how we do research in our own disciplines through the computational tools and methods being applied in the digital humanities. Many of these tools and methods are borrowed from, or built upon, work in areas outside the humanities, such as social science, computer science, and mathematics. We then consider their application in our own fields: applying topic modeling to textual data drawn from nineteenth-century music periodicals, for example, can reveal trends in music reception, performance, trade, or influence.
During the week at DHSI, participants spent the largest portion of each day in their courses; however, each day opened with a morning colloquium, in which participants presented their current projects and research, or asked for feedback on projects still in the pre-development stage. These five-minute lightning talks demonstrated the diversity of approaches, tools, and methods, as well as the intersections between disciplines and fields. Following the daily classes were Birds of a Feather discussions (#DHSIbof), in which two speakers reflected on the same topic from different perspectives before opening the conversation to the audience for discussion and reflection.
The morning colloquia represented a variety of disciplinary areas, including literary studies, history, archaeology, information science, social science, feminist studies, cultural studies, medieval studies, and sound studies. Tools and methods applied or explored for possible application included geo-spatial and temporal analyses, TEI (text encoding using XML and XSLT), database frameworks, web design, game design and theory, and critical editing. A number of projects focused on text analysis and textual encoding. For example, Douglas Duhaime (University of Notre Dame) presented “New Approaches to Digital Text Analysis: Introducing the Literature Online API,” in which he discussed why he built an API to query the Literature Online (LiON) subscription database from ProQuest: chiefly, the limited search functionality available through LiON’s user interface alone. Duhaime was interested in examining the full text of Shakespearean works in order to study authorship attribution. By querying the API he can pull out n-grams, such as bigrams and trigrams, from Shakespeare’s works, which can then be analyzed and compared with other corpora in the LiON database. Using this approach, he can demonstrate how Shakespeare and other authors quoted or recycled passages from other writers. Duhaime has posted his Python script for the Literature Online API on GitHub; however, he will be revising it, because ProQuest recently updated the LiON database in ways that affect the API.
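To illustrate the kind of n-gram comparison Duhaime described (his actual script queries the LiON API; the snippet below is a self-contained sketch over plain strings, with an invented example and my own function names):

```python
def ngrams(text, n):
    """Tokenize crudely on whitespace and return the list of n-grams as tuples."""
    tokens = [t.strip(".,;:!?\"'").lower() for t in text.split()]
    return list(zip(*(tokens[i:] for i in range(n))))

def shared_ngrams(a, b, n=3):
    """N-grams appearing in both texts: candidate quoted or recycled phrases."""
    return set(ngrams(a, n)) & set(ngrams(b, n))

# Overlapping trigrams between two passages flag possible borrowing:
shared = shared_ngrams("The quality of mercy is not strained",
                       "he said the quality of mercy endures")
# → {('the', 'quality', 'of'), ('quality', 'of', 'mercy')}
```

At scale, the same set-intersection idea, applied across whole corpora, is what lets a researcher surface passages that one author may have quoted or recycled from another.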
Another interesting project was “On the Page, On the Screen: Uncovering the Digital Lives of Readers Using Linguistics, Temporal, and Geospatial Analysis,” presented by Anouk Lang (University of Strathclyde), who is studying the reading patterns of contemporary readers by examining their literary activity in online reviews and social media comments. She is applying topic modeling to the data she has been able to pull from various sites, along with temporal and geo-spatial analysis tools, so that she can see changes in readership over time.
A project that will interest scholars who wish to examine the evolution or genesis of an author’s text was presented by Zailig Pollock (Trent University) and Josh Pollock (Microsoft), who also taught the XSLT course. In their presentation, “A Dynamic Text/Image Interface for Representing the Genesis of a Text,” they demonstrated a self-built prototype that positions a text editor side by side with a digital text in a single viewer, allowing the user to closely examine the original document while encoding and tagging its important changes. The father-and-son team have developed a coding system that allows the user to encode the temporality of the layers within a single text. This looks like an extremely promising tool for accurately encoding texts toward a better understanding of the way an author wrote and revised her text. The prototype is discussed in further detail in “The Digital Page: Brazilian Journal,” Editing Modernism in Canada (March 4, 2014).
One of the many commonalities across these projects was that the source materials were unique primary sources held by cultural heritage collections, archives, and libraries: Gibson’s hand-bound science-fiction anthologies from the late 19th to mid 20th centuries (discussed by Stefania Forlini and Bridget Moynihan, University of Calgary), periodicals from the 1890s, epigraphs, and pre-20th-century sources related to the textual, material, and cultural histories of the Caribbean (discussed by Liz Hopwood, Northeastern University). Another, unsurprising, commonality was the projects’ partnerships with institutional libraries and archives, with librarians and archivists, and with other cultural heritage and memory institutions. So why is this important? It demonstrates a continuous and growing need for access to and discoverability of primary source materials, which are generally acquired, cataloged, and preserved by libraries, archives, and museums (LAMs). The ongoing need for access to these materials, so they can be re-examined, mined, analyzed, and presented in new ways by digital humanists, makes it imperative for scholars and students to work and collaborate directly with LAM professionals. It is also crucial that the professionals who work with these materials understand the needs of the digital humanities community; many already do, because they too are part of that community. Understanding how scholars and students want to use primary source texts and data, now and in the future, will inform the ways in which LAM professionals ingest, describe, curate, and present materials, making it possible for digital humanists to feed texts into tools such as MALLET, encode them with TEI, and output them as graphs, charts, timelines, spatial representations, or the many other (and growing) possibilities.
Basically, in order to support the work being done in digital humanities, LAMs need to collaborate and partner with scholars (by which I mean anyone who does research or scholarship in a particular field or fields) and students who will want to use their collections and apply computational methods for research purposes.
The Birds of a Feather discussions covered areas such as open access, peer review, career opportunities in and out of the academy, graduate student training, and what it means to be a digital humanist. These sessions allowed participants to reflect on and contribute to an open discussion about issues and concerns present in the digital humanities. One of the main areas of concern continues to be how academic institutions support graduate training in DH, and whether they have a responsibility to prepare graduate students for careers both in and out of the academy. You can view the discussion that ensued on Twitter by searching for #dhsibof.
Attending DHSI afforded me the opportunity to reconnect with several colleagues and to meet, for the first time, others who will now become colleagues. The week-long course, colloquia, and Birds of a Feather discussions were wonderful in and of themselves, because these interactions expanded my approaches to and thinking about existing projects, such as Documenting Teresa Carreño, forthcoming projects, and possibilities for application in the library. Outside of these planned events were opportunities to interact on a non-hierarchical level with graduate students, librarians, programmers, academic administrators, and scholars, which created a non-threatening environment in which everyone was encouraged to learn from one another. DHSI was a truly energizing experience, opening up new paths of inquiry for many, and reinforcing an intersecting, cross-disciplinary network of people we can stay connected with and, I hope, collaborate with on future projects. Next year’s DHSI is already being planned for June 8–12, 2015. DHSI conversations were archived by Ernesto Priego (City University, London) and can be found here: Digital Humanities Summer Institute 2014: A #dhsi2014 Archive, figshare.
If you’re interested in reading posts from other DHSI participants, here is a selection that I found online:
- Alana Fletcher (Queen’s University), “Tool Tutorial: Out-of-the-box Text Analysis,” Editing Modernism in Canada, June 8, 2014.
- Juliette Levy (University of California, Riverside), “Day 2 of DHSI 2014: the intersection of theory, games, and pedagogy.” June 3, 2014.
- Paige Morgan (University of Washington), “How to Get a Digital Humanities Project off the Ground.” June 5, 2014.
- Emily Robins Sharpe (Keene State College), “What Does Collaboration Mean?,” Editing Modernism in Canada, June 7, 2014.
- Erin E. Templeton (Converse College), “DHSI 2014: Birds of a Feather ‘The Next Generation.’” June 6, 2014.
- Roger T. Whitson (Washington State University), “DHSI 2014 Day 1, or Why We Need the MLA Report.” June 3, 2014.
[I am grateful to DHSI for a tuition scholarship and for professional development time from the University of Connecticut Libraries to attend DHSI 2014.]