Searching for Authenticity: Parallels between Early Modern England and the Age of AI


By Zoë Jackson (Twitter: @ZoeMJackson1, Bluesky: @zoejackson.bsky.social)

If you teach, or work in a public-facing role, or indeed have any online or social media presence at all, you will probably be aware of a chatbot called ChatGPT. Since its launch by OpenAI in late 2022, individuals and companies have found a range of uses for the tool. AI options, including large language models (LLMs), are now available (or forced upon us, depending on your perspective) for almost any task we perform at a computer, whether searching the web or reading a PDF.

Unfortunately, whatever the potential promise of generative AI like ChatGPT, these technologies can also be problematic. Being well informed now means being able to distinguish accurate news, actual images, and even, apparently, legitimate airline refund policies from those fabricated in whole or in part by artificial intelligence.

When ChatGPT first appeared on my radar, I immediately thought of my own doctoral research. I am an early modern historian studying seventeenth-century memory, truth, and the law, so the parallels between AI-generated content and my area of study may not be obvious. (As far as I know, the people I study did not have to interact with anything like generative AI, or its outputs.)

But the problems that such technologies produce are not entirely new. In fact, in some ways, they’re extremely similar to much older ones. One of the biggest problems I see today (in the news, in social media, and beyond) with ChatGPT and other forms of rapidly advancing AI is the challenge they pose to authenticity and legitimacy. How do we know this image/text/voice is real? How can we know which information or images to trust, and which are the inventions of artificial intelligence?

AI itself may not have been an issue of concern to early modern people, but questions of authenticity and legitimacy certainly were. Although any period and place can lay claim to its own problems with truth, scholars have argued that early modern England, particularly in the seventeenth and eighteenth centuries, was an especially uncertain period for the concept of ‘truth’.[1] These issues came up frequently in my doctoral research, as I examined how early modern people approached and understood the concept of perjury (lying under oath) in court.

My research in part studied how questions of perjury and memory related to broader developments in early modern England. Scholars have traced how memory’s standing as a reliable source of information shifted alongside the growing importance of, and authority given to, writing and print.[2] It may be difficult to imagine from a modern perspective, but for at least part of the early modern period, there was no general assumption that writing was a more reliable and accurate record of the past than remembering. Written and printed documents, it was feared, could be forged.[3]

This perception was, however, changing. In his treatise on legal evidence, first circulated in the early part of the eighteenth century and published posthumously, the judge and writer Sir Jeffrey Gilbert asserted that written records were more reliable than remembered evidence, ‘for the Testimony of an honest Man, however fortified with the Solemnities of an Oath, is yet liable to the Imperfections of Memory, and as the Remembrance of things fail and go off, Men are apt to entertain Opinions in their Stead’. For these reasons, ‘the Contracts reduced to Writing, are the most sedate and deliberate Acts of the Mind and are more advantageously secured from all Corruption, by the Forms and Solemnities of the Law, than they possibly could have been, if they were retain’d in Memory only’.[4]

Early modern people needed to assess the information with which they were provided, and the source of that information, for credibility and accuracy. Such critical assessment was particularly relevant to legal procedure, where determining the reliability of evidence could influence a case’s outcome, and it is these assessments with which I am most familiar through my research. But, as for their modern counterparts, such assessments of information would also have taken place outside the courtroom, in any situation in which individuals encountered reported information.[5]

Part of what makes AI-generated products so problematic, potentially dangerous, and difficult to assess is that there isn’t a person behind the machine. Even when we know something has been produced by ChatGPT, we don’t necessarily know why it produces what it does. Neither, one could say, does the chatbot. It seems worthwhile to reiterate that artificial intelligence is artificial intelligence – it does not think. So when these chatbots produce answers to questions, sources for research, or anything else, they do not know or care whether these things are true. The process by which AI produces plausible-sounding but completely made-up outputs has generously been called ‘hallucinating’ – though some scholars have characterised it more critically.[6]
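For readers curious what ‘not knowing or caring’ looks like mechanically, here is a minimal sketch in Python of how a language model chooses its next word: by sampling from a probability distribution over candidate tokens. The vocabulary, the scores, and the ‘citations’ below are all invented for illustration, and real systems are vastly more complex, but the essential point holds: nowhere in the loop is there a step that checks whether the output is true.

```python
import math
import random

def sample_next_token(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Choose the next token by probability alone.

    Note what is absent: there is no lookup against facts, no notion
    of truth -- only a preference for statistically plausible text.
    """
    # Softmax: turn raw scores into a probability distribution.
    weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(weights.values())
    probs = {tok: w / total for tok, w in weights.items()}
    # Sample one token; a fluent-but-false continuation can easily win.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical scores a model might assign to continuations of a prompt
# asking for a source. All three 'citations' here are made up.
scores = {
    '"Smith (1998)"': 2.0,
    '"Jones (2003)"': 1.8,
    '"Brown (1675)"': 1.1,
}
print(sample_next_token(scores))  # prints one plausible-looking citation
```

On this view, hallucinations emerge from the design itself rather than from a malfunction – which is part of why some scholars resist the term ‘hallucinating’ in the first place.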

The problem of AI and the sometimes eerily realistic fictions it creates might be specific to our twenty-first-century world. But the problem of distinguishing truth from fiction, in different media and from different sources, has existed for centuries. As historians, we are taught to approach all sources with their context in mind: why were they created, and what do we know about who created them? Early modern people had their own hierarchies for determining the reliability of particular sources and information. We now must fit generative AI, and its creations, into our own modern-day spectrum of credibility.

Cover Image: Anonymous, woodcut of an early modern courtroom, 17th century. © The Trustees of the British Museum. Shared under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) licence.


[1] Andrew Hadfield, Lying in Early Modern English Culture: From the Oath of Supremacy to the Oath of Allegiance (Oxford: Oxford University Press, 2017); Mark Knights, Representation and Misrepresentation in Later Stuart Britain: Partisanship and Political Culture (Oxford: Oxford University Press, 2005); Frances E. Dolan, True Relations: Reading, Literature, and Evidence in Seventeenth-Century England (Philadelphia: University of Pennsylvania Press, 2013).

[2] Daniel J. Woolf, The Social Circulation of the Past: English Historical Culture 1500–1750 (Oxford: Oxford University Press, 2003); Adam Fox, Oral and Literate Culture in England, 1500–1700, Oxford Studies in Social History (Oxford: Oxford University Press, 2000).

[3] Woolf, The Social Circulation of the Past, 287–9.

[4] Geoffrey Gilbert, The Law of Evidence (printed by Henry Lintot, Law-Printer to the King’s most Excellent Majesty; for W. Owen, at Homer’s Head, near Temple Bar, 1756), p. 6, Eighteenth Century Collections Online, accessed 16 September 2024; M. Macnair, ‘Gilbert, Sir Jeffray (1674–1726), judge and legal writer’, Oxford Dictionary of National Biography, 23 September 2004, accessed 16 September 2024, https://www.oxforddnb.com/view/10.1093/ref:odnb/9780198614128.001.0001/odnb-9780198614128-e-10688; see also Woolf, The Social Circulation of the Past, 297–8.

[5] See Dolan, True Relations.

[6] See, for example, Joe Slater, James Humphries, and Michael Townsen Hicks, ‘ChatGPT Isn’t “Hallucinating”—It’s Bullshitting!’, Scientific American, 17 July 2024, https://www.scientificamerican.com/article/chatgpt-isnt-hallucinating-its-bullshitting/.

