By Emily Ward @1066unicorn and Carys Brown @HistoryCarys
If there was one thing that the Making Big Data Human conference made clear, it was that 'Big Data', and indeed digital methodologies in general, provides some very exciting opportunities to advance historical research. From The National Archives' ambitious and wide-ranging Traces Through Time project, which aims to create a generic method for tracing historical individuals across enormous datasets, through to the more specific but equally exciting Casebooks Project, conference participants were treated to a feast of ideas about how historical methods are adapting to the changing nature of data in a digital age.
But what exactly is 'big data', and what did the Doing History in Public team have in mind when we decided to explore how we could make it 'human'? The basic definition of 'big data' is 'extremely large data sets that may be analysed computationally'.[1] For historians this might, as Jane Winters demonstrated in her keynote lecture, mean using the archived web as an historical source, or exploring parliamentary proceedings from three different countries over a period of more than 200 years.