Week 2: Reflective Blog Post

[Image: the Robots Reading Vogue project home page]

This past week, the digital humanities felt like a far less daunting discipline than it had during the first week. This is due largely to the work we did in class dissecting digital humanities projects. Beyond our in-class work, I also had the opportunity to explore the project Robots Reading Vogue.

The Robots Reading Vogue project encapsulates the collaborative work of students, librarians, and many others as they draw meaning from the Vogue Archive dataset. The scale of this collaboration is evident from the extensive bibliography found on the site. The Vogue Archive contains a multitude of images and textual data from which valuable insights can be extracted via techniques from natural language processing. What makes this dataset especially interesting is that its data spans several decades.

Using Miriam Posner's categories for analyzing a digital humanities project, I identified the Vogue Archive as the Source. From an exploration of the project, some of the techniques used include n-grams, topic modeling, and word2vec. These computational tools have primarily been used to conduct research in gender studies; the tools and techniques here fall under the umbrella of Processes. Far more difficult to pinpoint was which specific web technologies were used in the Presentation of the project. As a computer science major, I found the black box of the project's presentation especially intriguing. Specifically, I wondered whether Yale has its own content management system, designed in-house for projects like this.
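To make the idea of n-grams concrete, here is a minimal, generic sketch in Python of counting n-grams in a token list. This is my own illustration, not code from the Robots Reading Vogue project, and the sample text is invented.

```python
from collections import Counter

def ngrams(tokens, n):
    """Count every contiguous n-token sequence (n-gram) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Toy example: bigrams over a made-up phrase, not actual Vogue data.
tokens = "the new look the new silhouette".split()
bigrams = ngrams(tokens, 2)
print(bigrams.most_common(1))  # [(('the', 'new'), 2)]
```

Scaled up to decades of magazine text, counts like these are what let researchers trace how language in the archive changes over time.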

Any questions I have, I'm sure I can direct to the creators of the project: Lindsay King (Haas Arts Library, Yale) and Peter Leonard (Digital Humanities Lab, University of Richmond).
