Reading hip-hop: quantitative and qualitative methods for literary research into hip-hop
While the world around us seems ever more anglicized, young people in both the Netherlands and Belgium are turning en masse to a youth culture in their own language: hip-hop. From Frenna to Zwangere Guy and from Ronnie Flex to Shay, Blu Samu or Coely: hip-hop is the dominant youth culture of the moment, both worldwide and in the Netherlands. The unprecedented popularity of hip-hop, a music genre and youth culture marked by questions of identity, raises the question of how young Dutch people (artists and active audiences) (re)define their cultural identity through hip-hop. Scholar of Dutch language and literature Aafje de Roest (1993) is writing her doctoral dissertation on this question at Universiteit Leiden (Modern Dutch Literature section). Her NWO-funded research combines qualitative and quantitative methods to arrive at an answer. But how does one study a fast-changing youth culture that perhaps, by definition, must remain ‘elusive’? In this lecture, De Roest explores an answer to that question and, drawing on recent case studies from the Dutch and Flemish scenes, takes you into the play of hip-hop youth who shape their cultural identity against a local background but within a global perspective.
The challenges of investigating loosely structured genres and of operationalizing semantic content
Literary studies often deal with genres that are well established in literary discourse but that cannot, on closer inspection, be identified at the level of textual features. In other words, there are loosely structured genres that are not instantiated as clear-cut text types. The German novella, which is split into two genres, the ‘Erzählung’ and the ‘Novelle’, is such a disordered genre. Research on literary genres, however, usually presumes the existence of a common text type at the level of textual features, one that can be revealed, for example, with stylometric analysis or through classification tasks. The aim of a larger project is to reveal the latent structures of German novellas. This presentation gives a systematic outline of the challenge of analyzing the historical change of the novella as a loosely structured genre.
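To make the idea of a stylometric test concrete: one common measure of stylistic distance is Burrows' Delta, the mean absolute difference of z-scored relative frequencies of the most frequent words. The sketch below is a minimal, self-contained illustration of that measure on toy texts; it is not the project's actual method, and the sample sentences are invented.

```python
import statistics
from collections import Counter

def rel_freqs(text, vocab):
    """Relative frequency of each vocabulary word in one text."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    return {w: counts[w] / len(tokens) for w in vocab}

def burrows_delta(text_a, text_b, corpus, n_mfw=3):
    """Burrows' Delta between two texts, normalized against a corpus.

    Uses the n_mfw most frequent words (MFW) of the whole corpus,
    z-scores their relative frequencies, and averages the absolute
    differences of the z-scores."""
    all_counts = Counter(w for t in corpus for w in t.lower().split())
    vocab = [w for w, _ in all_counts.most_common(n_mfw)]
    profiles = [rel_freqs(t, vocab) for t in corpus]
    means = {w: statistics.mean(p[w] for p in profiles) for w in vocab}
    stdevs = {w: statistics.pstdev(p[w] for p in profiles) or 1.0 for w in vocab}
    fa, fb = rel_freqs(text_a, vocab), rel_freqs(text_b, vocab)
    z = lambda f, w: (f[w] - means[w]) / stdevs[w]
    return sum(abs(z(fa, w) - z(fb, w)) for w in vocab) / len(vocab)
```

A low Delta between two texts suggests stylistic kinship; a classification experiment on 'Erzählung' versus 'Novelle' would ask whether such distances separate the two labels at all.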
Towards a Collection of Digital Literature from Flanders and the Netherlands (1971–2022)
Digital literature is an umbrella term that encompasses different types of multimodal works of literature that are all reliant on the digital environment for their production, dissemination and/or consumption (Rettberg 2018). Digital literature can refer to hypertext fictions, algorithm-generated poetry, works created in virtual reality, online fan fiction, and various other permutations. Digital literature emerged as a concept and a field of study in the 1980s and 1990s. The rapidly changing nature and function of digital media since then have prompted new definitions of and approaches to this art form.
Stop tracking science. The aggregation and selling of users’ data by science publishers
The business model of science publishers has changed in recent years. Not only content but also data analytics now forms the core of the science publishing industry. This has detrimental effects on universities. My talk reconstructs the history of science publishing and analyses the current techniques for collecting the data traces of scientists who use university libraries and science publishing platforms. Finally, the talk discusses a way out.
Machine Learning for Digital Scholarly Editions: The Case of eScriptorium
Digital and computational tools and methods are increasingly becoming part of scholarly activity, including in Digital Scholarly Editing. One example of this is transcribing texts from manuscripts, where machine learning is becoming more and more effective. In this context, eScriptorium is being developed to leverage machine learning to support transcription, whether automatic, semi-automatic or manual. In principle the software should be useful for any type of edition, in any language and script and from any date. In practice, however, this raises many questions, including to what extent AI can or should be employed in preparing editions, how much the expert should remain ‘in the loop’, but also to what extent it is even possible to develop a single tool that can work for everything from Greek papyri to 20th-century notebooks to Old Vietnamese inscriptions and beyond. This talk will therefore present the current state of the art while also addressing some practical and theoretical questions that remain for the future.
Lost and often found works
William Marx’s 2021-2022 course at the Collège de France centered on “lost works,” as “there are more lost works than there are existing ones.” Coincidentally, an international group of researchers in medieval studies published an important article in Science on the “forgotten books” of the Middle Ages. Here again, the literary scholar is invited to look at what is not or no longer present, but what might be recreated with the help of digital humanities tools. On October 14, we will organize a debate on this question at the University of Antwerp between William Marx, our Antwerp digital humanists, and other guests.
Failure to connect: exploring the human relationships at the heart of digital humanities
Digital humanities means many things to many people – we talk of DH as a range of methods, technologies, and theoretical approaches used to ask and answer research questions. But unlike traditional forms of humanities research, a DH research project is rarely one that can be tackled alone. DH nearly always requires collaboration with people from different subject domains, with technical experts, and often with non-academic staff such as librarians, museum staff or administrative support.
This paper explores the impact of this growth in collaboration through the lens of failure and what happens when collaborations and partnerships don’t go as planned. We have all experienced failure in our professional lives, but it is rarely acknowledged due to risks to reputation or to future funding. But by exploring what can go wrong, we can identify some of the key collaborative skills needed by today’s digital humanists, and begin to understand how to equip the researchers of the future to thrive.
Rediscovering the performance practice of musicians in the long 19th century through handwritten annotations on music scores
FAAM, Flemish Archive for Annotated Music, is a database and research platform aiming to revive the performances of musicians from the 19th and early 20th century through the study of their annotations on music scores. The Heritage Library of the Royal Conservatoire Antwerp provides a substantial collection of historical annotated scores made by Flemish amateur musicians, performers, conductors, and composers of the long 19th century.
Historical Language Models and their Application to Word Sense Disambiguation
Large Language Models (LLMs) have become the cornerstone of current methods in Computational Linguistics. As the Humanities turn to computational methods to analyse large quantities of text, the question arises how these models are best developed and applied to the specificities of their domains. In this talk, I will address the application of LLMs to historical languages, following up on the MacBERTh project. I will then address how such models can be fine-tuned efficiently to tackle the problem of Word Sense Disambiguation. In a series of experiments relying on data from the Oxford English Dictionary, I will highlight how non-parametric and metric learning approaches can be an interesting alternative to traditional fine-tuning methods that rely on classifiers learning to disambiguate specific lemmas.
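The core of a non-parametric, metric-based approach to Word Sense Disambiguation can be sketched in a few lines: build one prototype embedding per sense from labelled examples, then assign a new occurrence to the sense whose prototype is nearest in the embedding space. The toy 2-dimensional vectors below stand in for real contextual embeddings (which a model such as MacBERTh would produce); all names and numbers here are illustrative assumptions, not the talk's actual setup.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def mean_vec(vectors):
    """Component-wise mean: the prototype of a set of example embeddings."""
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def disambiguate(query_vec, sense_prototypes):
    """Assign the sense whose prototype is nearest to the query (cosine)."""
    return max(sense_prototypes, key=lambda s: cosine(query_vec, sense_prototypes[s]))
```

Because no per-lemma classifier is trained, the same procedure works for any lemma for which a handful of labelled examples exist, which is what makes the approach attractive for sparse historical data.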
The wandering verse: the computational detection of micro-intertexts in medieval literature
Intertextuality is a ubiquitous concept in literary studies, which – because of its notoriously open-ended nature – covers a variety of correspondences between texts. Signaling intertexts is an important editorial responsibility, because it can deepen one's reading experience of a literary work. Text reuse detection has become a popular task in the computational humanities too, although its evaluation is complicated by the lack of exhaustively annotated datasets of intertexts. Historical scholarship on medieval epics provides us with a rich inventory of micro-intertexts between medieval works, although their status is still hotly debated. Some philological communities have been keen to identify intertexts as authorial features, whereas others have stressed their conventional status, especially in the wake of the oral-formulaic theory. In this talk, I will present a study on Middle Dutch epic literature, as well as an extension of this work to Middle English literature of the same period, in particular the bookshop theory surrounding the famous Auchinleck manuscript. I will argue that the intricate web woven by computationally detected intertexts can invite radically innovative readings of medieval literature.
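One simple family of text reuse detection methods works by shingling verses into overlapping character n-grams and scoring their set overlap with Jaccard similarity. The sketch below illustrates that family only; it is not the method presented in the talk, and the sample verses are invented strings in the style of Middle Dutch.

```python
def shingles(text, n=4):
    """Set of overlapping character n-grams, after light normalization."""
    t = "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ")
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of the two verses' shingle sets."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def detect_reuse(verse, corpus, threshold=0.35):
    """Return (candidate, score) pairs whose similarity clears the threshold."""
    return [(other, round(jaccard(verse, other), 2))
            for other in corpus if jaccard(verse, other) >= threshold]
```

Character-level shingles are forgiving of the spelling variation typical of medieval transmission ('groot' vs. 'groet'), which is why they are often preferred over exact word matching; the threshold, however, has to be tuned against whatever annotated intertexts are available.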