The Parable of Flotillus: The Future of Classics?

Ffion Shute

Flotillus is a name rarely encountered in the mainstream study of Roman literature, yet his contributions to the genre of epigrammatic poetry offer a refreshing glimpse into the lighter side of Roman wit and social commentary. Flourishing in the late first century CE, Flotillus carved out a niche with his playful, often humorous verses that contrasted sharply with the more serious or moralising epigrams of contemporaries like Martial. While Martial’s work is characterised by biting satire and sharp social critique, Flotillus opted for a gentler tone, using his poems to poke fun at everyday situations, human foibles, and the quirks of Roman urban life. His epigrams, though fewer in number and less widely preserved, reveal an author keenly attuned to the comic potential of the mundane. Flotillus’ style blends clever wordplay with a light-heartedness that invites readers to smile rather than recoil.

If you are racking your brains trying to remember if you have ever heard of Flotillus, then I have some simultaneously reassuring and disturbing news for you: Flotillus did not exist. The paragraph above was generated by ChatGPT (shortened slightly and adjusted for British spellings) when I gave it the following prompt: “Please write about 200 words of an article about the lesser-known Roman author of light-hearted epigrams, Flotillus.”

Although it sounds completely plausible (if slightly florid), the content of ChatGPT’s little outline is entirely false. It gave my whimsical fictional character a date and a comparison with Martial (which sounds quite convincing because it is modelled on those usually made between Horace and Juvenal), all with complete earnestness and with no indication whatsoever that Flotillus was just a name that I pulled out of the air. I could easily have offered it for publication instead of going to the trouble of writing my next Know your Frontinus from your Fronto! column and convincingly polluted the body of online classical scholarship, even though the name Flotillus will return no results in any proper literature search. It sounds entirely credible, and yet is wholly fake. Such is the power of artificial intelligence (AI).

Artificial stupidity: I asked AI to imagine an amusing scene from Flotillus and it gave me a fresco of a Greek with a stubborn goat.

In recent times, there has been great excitement about the potential of AI to contribute to classical scholarship. Its value lies in its ability to recognise patterns and generate solutions from the data on which it has been trained, with little human input. This has enabled projects such as the Vesuvius Challenge, which uses a bespoke AI model to reconstruct the text of charred papyrus scrolls buried at Herculaneum in the eruption of 79 CE. Reading the scrolls had foiled classicists for centuries but, since 2023, the Vesuvius Challenge has unlocked the potential to read 3D scans of all 300 of them without ever unwrapping them. This has opened up the possibility of significantly expanding the corpus of Greco-Roman literature in the years to come, all thanks to the harnessing of innovative technology.

To a similar end, Google DeepMind’s Aeneas program dates Latin inscriptions by using pattern recognition and generates likely missing text, speeding up a process that would take academics decades. AI has also been explored to translate Linear B tablets. Collaboration between the world of technology and the world of Classics is already producing huge advances in interpreting the material we have available to us from the ancient world.

However, the use of AI in Classics is a complex issue, filled with ethical pitfalls and questions about its wider implications. The positive uses mentioned above are cutting-edge projects conducted collaboratively between computer scientists and experts in papyrology and epigraphy, but most of the everyday interactions that classicists have with AI are via the commercial large language models, such as those behind ChatGPT and Google Gemini. As I have demonstrated with Flotillus, those large language models are prone to hallucination, a phenomenon in which they produce completely fictional information. They do not have the capability to distinguish between fact and fiction, instead prioritising delivery of something plausible that we want or are expecting to hear. Whilst I deliberately induced ChatGPT to hallucinate Flotillus, large language models tend to hallucinate spontaneously when asked to conduct literature searches, providing plausible but completely false citations. This is obviously a serious danger of using commercial AI models in Classics (or any other academic field) without extreme caution.

There is also little transparency as to the methodology and biases of the tools into which we feed our data and expertise. Sometimes the rules set by AI companies with other uses in mind are an obstacle to accuracy, such as when academics at the University of Reading challenged AI to translate Catullus’ sixteenth poem, a famously sexually explicit verse that ChatGPT’s content filters rendered completely meaningless. When I tried to replicate this test, ChatGPT misattributed it first to Martial, then to Petronius, and then proceeded to give a wholly inaccurate translation that it swiftly deleted to save my poor, innocent eyes. Hallucination was at play and ChatGPT did not hesitate to sacrifice academic rigour to fall in line with rules set by Silicon Valley.

This is one example, but it gives rise to a difficult ethical problem regarding how we can trust AI to aid us in our work when it is subject to a set of arbitrary rules that do not have the core principles of humanities academia in mind, and there is a considerable lack of transparency over whom those rules are intended to serve. Recognising this problem is essential for using AI in the context of scholarship, for it is easy to fall into the trap of unintentionally abandoning principles of accuracy in favour of exciting improvements in efficiency.

Another challenge posed by AI in the world of Classics is its use in the context of teaching and learning. Much controversy swirls around the increasing number of school and university students using models such as ChatGPT to complete assignments. Some argue that such use is fair and valid, as it improves efficiency and creates better results. After all, now that we have this technology at our fingertips, why would we need to hone the art of thinking and writing independently? It is tempting to roll one’s eyes just as many of us did at maths teachers telling us ‘you’re not always going to have a calculator in your pocket’ when we obviously now do. But others are concerned that the convenience of AI is causing our capability for independent and critical thinking to degrade.

The evidence that we currently have supports these concerns. A recent study conducted by Dr Nataliya Kosmyna and fellow researchers at MIT used clinical neuroimaging to show that using ChatGPT for completing essay-based tasks causes ‘cognitive debt’ – over the space of a few months, participants who used AI measurably underperformed in neural engagement, recall and linguistic areas compared to participants who did not. Ifigenia Drami of the University of Macedonia, Thessaloniki cited this study at a recent conference (AI and Teaching the Ancient World, hosted by the University of Reading) and compared it to worries voiced almost two and a half thousand years ago in Plato’s Phaedrus. In the dialogue, Socrates argues that widespread use of the written word would discourage people from using their minds, as they would come to rely on other people’s words, fixed on the page and always saying the same thing, instead of exercising their own memory and coming up with their own ideas. This is a fascinating and uncanny parallel with the issues we are seeing now with the widespread use of AI.

Socrates would not have approved of AI.

Satisfying though this connection is, the current negative impact that AI has on critical thinking is a serious matter. It is especially worrying for a subject such as Classics, where the value of its scholarship lies in students and teachers continually researching and reinterpreting evidence, making connections, and coming up with new ideas. If we normalise the unbridled use of AI in academia, we risk becoming reliant on a flawed system that tells us what we want or expect to hear, rewarding homogeneity and efficiency over the core principles of seeking knowledge.

But this all sounds very pessimistic. How can we reconcile the inevitable march of innovation with the preservation of human capabilities? The answer is not simple, but it does lie in significant part in knowing more about how AI works and being more aware of its effects. In terms of academia and teaching, we should perhaps collectively reassess the value of the pursuit of knowledge as the process itself rather than any measurable end-point that AI will help us reach more efficiently. Our cognitive skills as humans are infinitely valuable, and the recognition that we must use AI to aid us rather than limit us in the exercise of those skills is a more constructive attitude than either unthinkingly embracing or vehemently rejecting it.

As for what this means practically, it is essential that AI literacy (i.e. the ability to use AI critically and recognise its strengths and weaknesses) becomes a skill that is more valued and taught than it is currently. This starts with teachers understanding the technology, so that the students they teach can begin their careers with a well-rounded idea of the issues involved at a practical and theoretical level. Researchers at Sapienza University of Rome have run a successful pilot scheme at an Italian high school during which students were encouraged to use large language models to aid them in Latin translations. In a supervised environment, the students learned where the models could help them the most and where they were inaccurate or lacked nuance. This is a necessary and positive step towards making AI literacy a standard skill taught in educational settings.

Although it is not specific to Classics, I must also mention a great, and often overlooked, impact of using AI – the environmental one. At the same conference, Professor Federica Lucivero of Oxford University highlighted the fact that we often view the internet, the ‘cloud’ or AI tools as intangible networks that exist only in cyberspace. Because our interactions with such technology make it seem immaterial, it is easy to forget that these networks are built on vast physical systems in the form of data centres, power grids and fibre-optic cables. The data that flies around in cyberspace is in fact stored in physical locations, often out of sight and mind in rural California, Arizona and Texas. This requires huge amounts of electricity, cooling water for the data centres and human labour to operate and maintain the system.

The average data centre in the USA consumes roughly the same amount of water in a day as 1,000 households, but newer facilities can consume the equivalent of up to 15,000, even during droughts. The carbon footprint of training an AI model is also enormous, and certainly not something to be taken lightly when considering its use in academia or anywhere else. We must never assume that personal convenience or efficiency outweighs the environmental impact. Even if true responsibility lies with the tech companies who own the products, we as consumers are all contributors.

Moving forwards in the context of Classics academia and beyond involves acknowledging some difficult truths. AI has permeated most of our daily lives, whether we like it or not, or even see it. Its widespread accessibility, capabilities and efficiency have made its continuing presence inevitable. The only question now is whether we are able to gain enough understanding and awareness of the issues it poses to regulate our usage of it, making it just another tool in the toolbox of things we have at our disposal to aid us in our tasks rather than letting it dictate and homogenise our ideas. We know that we possess a powerful technology that can benefit us, but at the same time we must equip ourselves with the knowledge and skills necessary to maintain our academic integrity, protect our independence of thought, and remember our value as human beings.

©FfionShute