#1 In 2011, Stefan Winkler Nees (from the Deutsche Forschungsgemeinschaft, DFG) estimated that 90% of all digital research data is lost.[1] We don’t know how much of this data belonged to the Humanities, and hopefully these numbers are better today, but we can assume that a lot of Humanities data (and other data) is still being lost, because of missing infrastructures or because no one has taken care of the long-term availability of this data in time.[2]
But even if data is not lost: does the available data spark joy, to borrow a term from the ubiquitous Marie Kondo? Are these datasets accessible for research, well documented using standards, available in interoperable formats, etc.? In short: is it FAIR[3] data, too?
Tide prediction machines, prosopography and digital humanities are the three main axes upon which my first-year Master’s degree research project is constructed. I’m a student of the “Cultural History of Science and Technology, Digital Humanities and Mediation” course at the University of Western Brittany’s Centre François Viète research laboratory in Brest. In this blogpost I’m going to explain what these three most important components of my project are, how they link together and how you could integrate certain elements into your own work, with a focus on digital humanities tools.
Tide Prediction Machines
Tide prediction machines are analogue computers that were used to predict the times of high and low tides worldwide from the end of the 1800s up to the digital age. The first tide prediction machine was designed by William Thomson (later Lord Kelvin) and built in London in 1873. It was developed in response to increasing pressure, from the middle of the 19th century onwards, from commercial shipping lines that wanted a greater number of more accurate tidal predictions, produced more quickly than they could be calculated by hand. As well as helping shipping lines and navies to navigate the seas and shores safely, tide prediction machines became crucial to the building of ports and effective flood defences. A total of 25 of the 33 tide prediction machines ever built were constructed in the UK but were then shipped to other countries that wanted to do their own calculations. Although analogue tide prediction machines are no longer in use, nearly all of them still exist today as museum pieces across the world. The photo below shows the Bidston Kelvin tide prediction machine, which was built by Kelvin Bottomley and Baird Ltd. in Glasgow (Scotland) between 1924 and 1925. It was first used in Bidston (England) by the Liverpool Observatory and Tidal Institute before being shipped to Paris for use by the French navy’s hydrographic service (today the Shom) and then to their site in Brest (France), where it can still be found today.
Prosopography
Prosopography is a research method usually used by historians to study the lives of groups of people. It involves the creation of a collective biography or the gathering of data relating to the common aspects of the lives of individuals who are part of a particular population. It can take the form of a database of all of the people within the chosen population along with information about the biographical phenomena that transcend their individual lives. Prosopography overcomes the problem of the scarcity of historical data by collecting together all available fragments, which can then be compared, synthesised and analysed, thus compensating for any gaps in the data. Instead of looking at the exceptional and unique, prosopography focuses on the general and average. It is in this way that prosopography makes visible the particular characteristics representative of the chosen population.
Digital Humanities
Digital humanities is an interdisciplinary area of study that lies at the intersection of the humanities and digital technology. Research in digital humanities goes in two directions: it uses digital technologies to ask questions about and to create new knowledge in the humanities, and uses the humanities to ask questions of and to reflect upon digital technology. The digital technologies in digital humanities come in the form of tools, applications and software (purpose-built for digital humanities or not) that can be used for the effective production and dissemination of research in the humanities.
That’s great, but… how do they all feature in one project?
Tide prediction machines
Tide prediction machines have been studied very little, despite having been used to perform life-saving calculations and having made a crucial contribution to maritime history. I want to find and analyse the common features in the lives of tide prediction machines, to give an appreciation of their importance, the extent of their use and usefulness, and an impression of the shape of their lives. I therefore decided to study the “life cycle” of analogue tide prediction machines: the methods used to predict tides before the construction of the first machine, the motivation for creating this machine, the manufacturers, users and uses of the machines, maintenance and repair of the machines, the decommissioning of the machines, subsequent methods of predicting tides, and finally the state of the machines today in terms of conservation and scientific mediation.
Prosopography
My project supervisor then introduced me to the practice of prosopography, but, as you might have realised above, it is a research method used almost exclusively to study groups of people. The 33 tide prediction machines that I’m studying form a coherent group in which each artefact can be described by a time evolution cycle. Given the analogies with human groups, my methodological research hypothesis is that it is possible to apply prosopographic methods to successfully analyse the life cycle of tide prediction machines.
Every member of the “population” of tide prediction machines must be subjected to the same questionnaire. I do not expect to find an answer to each question for each machine but that will not pose a problem in this prosopographical study; the aim is to find the common features in the lives of analogue tide prediction machines.
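To make this more concrete, here is a minimal sketch of what one entry of such a questionnaire could look like as structured data. The field names are my own illustrative choices rather than the actual schema of the project; the key design point is that every field may legitimately be left empty.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TidePredictionMachine:
    """One hypothetical record of the prosopographical questionnaire.

    Missing answers are expected in a prosopographical study, so every
    field except the name is optional and simply left empty when unknown.
    """
    name: str
    manufacturer: Optional[str] = None
    city_of_construction: Optional[str] = None
    year_built: Optional[int] = None
    users: List[str] = field(default_factory=list)
    year_decommissioned: Optional[int] = None
    current_location: Optional[str] = None

# Example entry, using details mentioned in this post.
bidston_kelvin = TidePredictionMachine(
    name="Bidston Kelvin tide prediction machine",
    manufacturer="Kelvin Bottomley and Baird Ltd.",
    city_of_construction="Glasgow",
    year_built=1925,
    users=["Liverpool Observatory and Tidal Institute", "Shom"],
    current_location="Brest",
)
```

Answering the same set of fields for all 33 machines then turns the questionnaire into a single table that can be compared, filtered and visualised.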
Digital humanities
Given the data-centric nature of prosopography, I see the use of digital humanities tools as essential to my project. Much of my data is temporal and geospatial: dates of “life events” such as construction, displacement, renovation and decommissioning, and locations of construction, use and conservation. Each machine has its own unique life and has its own story to tell, but there are nevertheless many features that group them together and events that they have in common. I’m using diverse digital humanities tools to associate, visualise and interpret this data, thereby distinguishing particular characteristics representative of tide prediction machines.
Which digital humanities tools?
I’m now going to introduce you to some of my favourite digital humanities tools and I’ll explain how I’m using them in my current work, how they’ve helped me and how they could be useful to you.
Palladio
Designed and built by Stanford University’s Humanities + Design Lab, Palladio is a digital humanities tool for the visualisation of complex historical data. This web-based open source application focuses on data-driven historical research. Palladio is ideal for those who want to visualise networks, temporal data or geospatial data all from the same database using an intuitive graphical interface. From a single database, researchers can create various visual elements that can be customised according to the data available. I have used Palladio to create a few visuals of my data, all of which have helped me by highlighting patterns in the data and making me ask new questions of it. I will now show you examples of how I’ve used three different features in Palladio to visualise my data.
Graphs in Palladio
The graph function offered by Palladio can be used to visualise a network or networks and the relationships within and between them. The points can be filtered and sized according to other parameters in the database, which is why the initial database needs to be well thought out and constructed with Palladio in mind. The graph below shows the relationship between the city of construction (dark grey) and the current location (light grey) of tide prediction machines; the points are sized according to the number of machines in each case.
Maps in Palladio
To make use of the map function, Palladio requires data in the “latitude, longitude” format. These points can be plotted on a map of the world and can be coloured, sized, linked and filtered. The base map can also be changed to show different geographical features, or a custom map can be uploaded. I created the map below to show the number of tide prediction machines used in any given place (red), and how many can be found in each different place today (green).
The points displayed on a map or in a graph can be filtered by time, if the database contains data in a date format.
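As an illustration of how such a database could be prepared, here is a minimal sketch that writes a small CSV file ready to be uploaded to Palladio. The column names are hypothetical; the only Palladio-specific convention shown is storing coordinates as a single “latitude, longitude” value, and the same table can feed the graph view (city of construction versus current location) and the time filter (dates).

```python
import csv

# Minimal, hypothetical example of a table for Palladio: one row per machine,
# coordinates stored as a single "latitude, longitude" string (the format
# Palladio's map view expects), and a date column for time-based filtering.
rows = [
    {
        "machine": "Bidston Kelvin tide prediction machine",
        "city_of_construction": "Glasgow",
        "construction_coordinates": "55.8642, -4.2518",
        "current_location": "Brest",
        "current_coordinates": "48.3904, -4.4861",
        "year_built": "1925",
    },
    # ... one row per machine in the population
]

with open("tide_prediction_machines.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```

Uploading a file like this to Palladio then lets you switch between the graph, map and timeline views without reshaping the data.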
Timespans and timelines in Palladio
Any date data can also be visualised with a timespan, which can have various different layouts, or with a timeline, which takes the form of a bar graph. The timespan below shows the lifetime of tide prediction machines, and the timeline shows the number of machines that were built each year.
Palladio is an excellent example of a digital humanities tool that can be used by historians to visualise data. Data visualisations can help researchers to make the most of the available data at any point during the research process. They also give meaning to data and facilitate its comprehension by the audience.
Time.Graphics
Time.Graphics is a web-based tool for the creation of timelines. It offers only this one function, but it does it well. Single events or timespans can be added to the timeline with a title, description and colour. I made the timeline below to show all the significant dates of tide prediction machines built in the UK: date of manufacture, displacements, renovations, prizes awarded and decommissioning. I assigned each individual machine a different colour to help identify them on the timeline. Interpreting and comparing data from a colourful and visual representation can be much easier than reading a database, for me and for those not familiar with my research.
Hypotheses
As part of the OpenEdition infrastructure, Hypotheses is a platform for blogs in humanities and social sciences research. My classmates and I have a joint class blog hosted by Hypotheses for our Master’s course, and we regularly write about our classes, texts we’re reading and how our research is advancing. Blogging is new to me but I’m finding it to be an excellent way of organising my thoughts by forcing me to write about what I’m doing in a way that’s easily understood by others. It’s also proving to be an excellent way for us to learn more about how our classmates are conducting their research and what they’re discovering.
Twitter
Where would I be without Twitter? Not writing this blogpost, that’s for sure. I started using Twitter when I discovered that researchers and academics use it a lot, and I wanted to be part of it! Without Twitter I wouldn’t have found out about this competition, nor about a conference I’ll be talking at later this month, nor about many other events, interesting people or research. There’s a real sense of community amongst us #twitterstorians, who I’ve found to be encouraging, supportive and sympathetic (because things don’t always go well!).
Other tools
There are a few other digital humanities tools that I like to use such as CmapTools for making concept maps with which I can keep track of my thoughts in a visual way, Zotero for creating a bibliography and storing information about what I’m reading, and Overleaf: the best LaTeX editor ever (in my opinion).
It may seem that the use of digital humanities tools is labour-intensive, but the right ones for your project, used in the right way, can be immensely useful for your audience and, more importantly, for yourself and your work. They can help you to see your research in a new light, give you alternative ways of doing things, and provide you with more opportunities for sharing your work and collaborating with others.
Why should you care about …?
Tide prediction machines
The accuracy of the calculations made by tide prediction machines was of global importance, as the lives of people at sea and the efficient running of worldwide maritime transport depended upon it. These marvellous machines deserve to be better known; I’m trying to better understand their lives so that their stories can be told and shared.
Prosopography
The prosopographical method can be very useful for researchers whose data is incomplete; it avoids generalising from select examples and favours collectively studying all available cases, regardless of their completeness, thereby ensuring the data is correctly represented.
Prosopography is a well-established research method in history that can be used for a wide range of studies in various historical periods. These studies have, until now, all been based upon populations of people, but what I’m doing by using prosopography to study artefacts is relatively new. By studying historical data in this way we can analyse relationships across time and space and attempt to better understand the past.
Digital humanities
Digital humanities tools can be useful in the three main stages of a project: they can help you to advance in your research, to disseminate your results more effectively, and they can help your audience to understand what you’re trying to tell them.
Different tools can facilitate the advancement of your research by giving you an environment in which your data and research can be stored, managed and analysed. Data visualisation tools can highlight relationships within and between networks, help you to spot patterns, uncover and draw your attention to trends previously hidden in the data, encourage you to view information in other ways and cause you to make new enquiries. Many digital humanities tools also offer a collaboration function, meaning that you can work together with your colleagues on the same file.
As for disseminating your findings, digital humanities can offer you a wide range of platforms for presenting and sharing your data, in many different ways.
Data presented in innovative formats will make your research more accessible and understandable by different types of learners. Visual representations could be better for younger audiences or those unfamiliar with your area of study because reading and interpreting visualisations of data can be easier and quicker than databases or text. Data in visual forms are also more attractive and more likely to be remembered, thereby facilitating their retelling.
What’s next?
I’ve already gathered a lot of data for my prosopography of tide prediction machines, but I’m still searching for answers to certain questions. I need to continue analysing the data using various digital humanities tools and then I will be able to write up my results about the life cycles of tide prediction machines and evaluate the success of this prosopographical study of artefacts.
You can read more about the story of tide prediction machines, prosopography, Digital Humanities and other interesting topics on Helen’s blog.
To spread the word on emerging good practices around data management in Digital Humanities and to give visibility to young scholars’ critical reflections on how they adapt digital methods and tools to their research questions, DARIAH-EU offered two travel scholarships, open to early career researchers, to attend the Annual Event and showcase their topic as a poster presentation at the conference.
The two winning posts are:
Tide Prediction Machines, Prosopography and Digital Humanities: what are they and how do they fit together? – Helen Mair Rawsthorne
“Here be dragons”: Open Access to Research Data in the Humanities – Ulrike Wuttke
A few words about the authors:
Helen Mair Rawsthorne is studying for a Master’s degree in Epistemology and the History of Science and Technology: the Cultural History of Science and Technology, Digital Humanities and Mediation as an online student at the University of Western Brittany in Brest, France. Her Master’s thesis will be a prosopographical study of analogue tide prediction machines. She graduated with a BSc in Physics from the University of Bristol in 2016 and wrote her Bachelor’s dissertation on the evolution of the relationship between science and technology throughout history.
Ulrike Wuttke is a medievalist and textual scholar by training with a specialisation in Medieval Dutch Literature (Doctor of Literature, Universiteit Gent). Since her PhD, she has contributed to digital humanities projects and networks such as the Working Group Data Centres of the Verband Digital Humanities im deutschsprachigen Raum, the metablog OpenMethods and the DARIAH Working Group DiMPO (Digital Methods and Practices Observatory). Since 2017, she has worked and taught at the University of Applied Sciences Potsdam, Department of Information Science, where she recently joined the team of the DFG project RDMO (Research Data Management Organizer). Her special interests are Open Science, Research Data Management and Digital Methods and Tools.
The winning entries will be published on the blog in the coming days – stay tuned!
Congratulations to both our winners and thanks again to everyone who participated in the blog competition!
Have you ever considered blogging about your research tools and methods? Come and share the methodological advancements or challenges of your research and win a scholarship to attend the DARIAH Annual Event 2019 on “Humanities Data”, to be held in Warsaw, on May 15-17.
DARIAH-EU is offering two travel scholarships, open to early career researchers, to attend our Annual Event. On top of the travel bursary, successful applicants will automatically win the opportunity to showcase their topic as a poster presentation at the conference.
To participate, all you need to do is send us a blog post focusing on one or more of the topics below:
Descriptions of or critical reflections on methods and tools from your own research projects
Case studies about the optimal reuse of data or tools
Practical or theoretical reflections about how and why humanities research is conducted digitally
How the increasing influence of digital methods and tools changes scholarly attitudes and scientific practices of humanities research
How to apply
You don’t need to have your own blog to apply. Please send us your blog post together with a little introduction of yourself to the following e-mail addresses: erzsebet.toth-czifra@dariah.eu and eliza.papaki@dariah.eu
Scholarship details:
DARIAH-EU will reimburse your basic travel costs to Warsaw (flight/train/bus ticket, accommodation and public transport transfer) up to 500 EUR.
Ready, set, go!
The deadline for submissions is the 9th of April. We are looking forward to receiving your posts and will announce the winners on the 12th of April 2019.
Helping Digital Humanities scholars to find the tools and methods that are the most relevant to their work is a core mission of OpenMethods. By shifting perspectives from research outputs to the underlying workflow, procedures and tools, we aim to strengthen the culture of reuse of already existing resources in DH. But we do not only propagate the culture of reuse. Putting our money where our mouth is, we are also seeking ways to put DH tools in service of more effective content discovery and enrichment.
As a reusability exercise, we have recently created plugins to achieve interoperability with the NERD entity recognition service and the research discovery platform ISIDORE, in order to increase the visibility and discoverability of our content.
The NERD plugin
NERD is a service that recognizes and disambiguates named entities.
This plugin allows integration of the NERD service with WordPress. As a form of content enrichment, the plugin automatically creates tags from the named entities returned by NERD when provided with the full text of the original article that has been republished on OpenMethods. The tags, in turn, are used to propose extra information coming from Wikipedia and Wikidata. These tags contribute to the better discoverability and searchability of content on OpenMethods and add extra context layers to our content.
It is reusable and only needs a NERD server to work with; one instance of this server is freely available at Huma-Num, so please contact them for more information.
The Rich meta in RDFa plugin
The second plugin is for increasing the findability of our content on other discovery platforms via metadata enrichment. We wanted Isidore, an indexing and search service for the humanities and social sciences, to harvest our content and make it searchable and findable on their platform, which serves as a single point of entry for a wide range of open SSH resources, such as data, publications and other materials. To enable indexing by Isidore, OpenMethods has to be able to present information (i.e. a set of Dublin Core metadata elements such as title, description, date, etc.) in a format understood by Isidore. For this purpose we created this WordPress plugin, which allows users to add Dublin Core metadata enrichments in RDFa within the HTML header of each post.
Currently it is only used for harvesting by ISIDORE, but it can easily be changed to accommodate other applications. In the future, it will be possible to change the namespace and element names of the RDFa data for a given input such as title or excerpt.
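To give an idea of what this looks like in practice, here is a sketch of the kind of Dublin Core statements expressed in RDFa that could appear in a post’s HTML header. The exact element names, namespace prefix and values produced by the plugin may differ; this only illustrates the principle of exposing title, description, date and language in a machine-readable form that a harvester such as Isidore can read.

```python
# Hypothetical sketch (not the plugin's actual output) of Dublin Core
# metadata expressed in RDFa inside the HTML header of a post.
def dublin_core_rdfa(title: str, description: str, date: str, language: str) -> str:
    return f"""<head prefix="dc: http://purl.org/dc/elements/1.1/">
  <meta property="dc:title" content="{title}" />
  <meta property="dc:description" content="{description}" />
  <meta property="dc:date" content="{date}" />
  <meta property="dc:language" content="{language}" />
</head>"""

# Illustrative values only.
print(dublin_core_rdfa(
    title="Tide Prediction Machines, Prosopography and Digital Humanities",
    description="A prosopographical study of analogue tide prediction machines.",
    date="2019-04-12",
    language="en",
))
```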
As a result, content from OpenMethods is now connected with and embedded in a large corpus of electronic publications, corpora, databases and scientific news, and thus Isidore users can find OpenMethods posts when looking for information about Digital Humanities. Another benefit of the plugin is that it indirectly, through OpenMethods, enables the integration of content that could not be indexed otherwise on research discovery platforms because of their language (other than English) or content type properties (e.g. blogs, videos or podcasts scattered across the web with insufficient metadata).
The figure below allows for a sneak peek behind the scenes and gives an at-a-glance summary on how the plugins build on DH resources.
As OpenMethods only offers users a very short introduction to, and a quotation from, the republished source content, we first have to fetch the full text of the original article from its source site through an HTTP(S) request and keep the relevant textual elements, excluding irrelevant parts of the page such as advertisements and menus.
Once the text has been retrieved, we call NERD (run on Huma-Num servers), which returns the named entities found in the text.
These named entities are then ingested by OpenMethods to create tags and to point the user to the corresponding entities on Wikipedia and Wikidata for more information.
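For readers who would like to picture these steps as code, here is a compact sketch of the workflow in Python. It is not the actual WordPress plugin (which runs inside WordPress itself), and the NERD endpoint URL and the shape of its response are placeholders to be checked against the NERD/Huma-Num documentation; only the overall sequence of steps follows the description above.

```python
# Illustrative sketch of the workflow: fetch the original article, keep its
# main text, send it to a NERD server and turn the returned named entities
# into tags. Endpoint and response fields are placeholders, not the real API.
import json
import requests
from bs4 import BeautifulSoup

NERD_ENDPOINT = "https://nerd.example.org/service/disambiguate"  # placeholder URL

def fetch_main_text(url: str) -> str:
    """Retrieve the page and keep only the relevant textual elements."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop navigation, menus, scripts etc. and keep the paragraph text.
    for tag in soup(["nav", "aside", "script", "style", "footer"]):
        tag.decompose()
    return " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))

def extract_entities(text: str) -> list[str]:
    """Send the text to the NERD service and collect entity names."""
    response = requests.post(
        NERD_ENDPOINT,
        files={"query": json.dumps({"text": text})},
        timeout=60,
    )
    response.raise_for_status()
    # "entities" / "rawName" are assumed field names for illustration only.
    return [entity["rawName"] for entity in response.json().get("entities", [])]

# The resulting list of entity names can then be registered as tags on the
# OpenMethods post, each linking to Wikipedia/Wikidata for more context.
tags = extract_entities(fetch_main_text("https://example.org/original-article"))
print(tags)
```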
To add an extra discovery layer to the OpenMethods platform, we also integrated a third plugin developed by the Isidore team: it enables listing of similar content recommendations on OpenMethods posts.
You are more than welcome to explore how the two plugins jointly contribute to the enrichment and greater visibility of the OpenMethods posts. Besides, as you would expect, both of them are free to download and are available for reuse. Take a look at their documentation and let us know if they inspire you to think about ways to put them to good (re)use in your own projects.
We owe special thanks to Pierre Mounier and Nicolas Larrousse for their idea to develop the two plugins, to Yoann Moranville who created them and to the Huma-Num team, especially to Laurent Capelli who helped us with the design and the Isidore indexing.
Have you ever considered blogging about your research tools? Is it the hassle of maintaining your own blog or finding your audience that holds you back? Would you only write occasionally about what helps you to do effective research? Then this blog is for you. It collects guest posts about Digital Humanities tools and methods. Digital Humanities scholars as well as their collaborators (data scientists, computer scientists, developers, librarians, archivists etc.) write for each other about know-how, best practices, limitations and benefits, and the reuse potential of research tools and methods.
This blog is an extension of the OpenMethods metablog. OpenMethods is a platform aimed at republishing and bringing together all formats of Open Access publications (e.g. research articles, preprints, blog posts, videos, or podcasts) in different languages about Digital Humanities methods and tools, to spread knowledge of them and raise peer recognition for them. The platform has been developed in close partnership with, and under the supervision of, the DARIAH community, as it is an offspring of the DARIAH “Humanities at Scale” project. Relevant content is selected and curated by an international group of Digital Humanities experts to be republished on OpenMethods. The Digital Humanities methods and tools blog will serve as a pool for selection: the most interesting posts will be republished on OpenMethods.
Our goal is to reach and engage the widest possible array of Digital Humanities communities ranging from scholars taking the first steps towards going digital in their research to Digital Humanities experts who are shaping specific research areas as representatives for particular methods.
On the OpenMethods platform we do not publish original contributions, but if you have an interesting blog post or video to share with the broader community around Digital Humanities, you are very welcome to share it with us as a post on this blog.