In I Poli Zei, one of Greece’s major print and online magazines:
The occasion for this discussion was the highly timely International Conference “Writing, Translating, Thinking in the Age of Artificial Intelligence,” which will take place in Athens on February 27th and 28th. The conference attempts to map, among other things, the new realities brought about by the sweeping arrival of Large Language Models (LLMs) in our intellectual life.
– In the context of the International Conference “Writing, Translating, Thinking in the Age of Artificial Intelligence,” what do you consider to be the greatest challenge that writers, translators, and intellectuals are called to face today?
The speed at which LLMs are being deployed doesn’t give humans, or our societies, politics, law, and education, enough time to adjust. It’s hard to believe that ChatGPT was released only three years and a few months ago.
Compare this with an earlier game-changing technology: writing. Writing developed over thousands of years, and humans have had thousands of years to develop reasonable ways to use it. Many people don’t even realize anymore how profoundly disruptive the technology of writing actually is.
LLMs are even more disruptive, not only because of the speed of their deployment, but also because of what they do. They transform meaning without understanding meaning. They change how we think, interact, and act. We don’t really understand how much of our language use and interactions can be automated in this way. So far, we’ve had only the first glimpses into the impact of automated generation of meaningful text and other artifacts.
Here in Athens, a lively debate about how writing affects knowledge and understanding was already underway 2,500 years ago. People felt the disruptions of the technology of writing, much as people today feel that LLMs disrupt our mental life and society.
Back then, philosophers opposed sophistry and rhetoric practiced without regard for truth. In 2005, a contemporary version of this led the philosopher Harry Frankfurt to ask: “Why is there so much bullshit?” Today, we are flooded with “AI slop.” And there’s more: machines replace human mental work, and they are used to manipulate our minds and behavior.
LLMs force us to rethink old questions about the human mind, our lifeworld, and technology. But we need to put real effort into reaching a better understanding quickly—not only to catch up, but also to anticipate what’s coming. This requires fundamental philosophical reflection on LLMs and their impact on writing, translation, thinking, knowledge, understanding, communication, society, and so on.
In all of these dimensions, we can build on ancient Greek discussions. Traditional LLMs are trained on written text, and even when they process video, they do so by means of written tokens.
Yet we also need to go much further. Writing can change meaning to some extent, but LLMs are made to generate meaningful artifacts. They target human sense-making. We need to understand how LLMs can be used to manipulate people’s thinking and behavior on a large scale.
Today, a few billionaires have the tools to influence the information that billions of people receive and how it’s presented to them. That handful of people have their own interests, purposes, and fears, which often don’t align with what’s good for most of us. When they can manipulate what and how we think on a large scale, that’s dangerous.
– You argue that LLMs are often mistaken for authors. In your view, what fundamentally grounds the concept of “authorship”?
In my view, authorship is fundamentally grounded in sense-making. I’m not saying authors are the ultimate authority on a text’s meaning, or that we need to understand their psyche to understand their work. But to be an author, you must have at least some understanding of the meaning of what you’ve created. That understanding is necessarily incomplete. Writing always remains open to interpretation, contestation, and reinterpretation.
Authorship is more than just stating information; it opens a horizon of sense-making for others to explore. That horizon lies in the in-between, between authors and interpreters. LLMs can have a part in this. They are built from human-authored text and process human-authored prompts. But by themselves, they’re not authors.
Collecting, computing, or generating can be parts of authorship, and authors engage in these activities, too. But authorship is much more. While LLMs do more than just parrot human language use, their inner workings are limited to computations on texts written by others. They’re often built and framed in ways that produce the illusion of an author.
But we shouldn’t be fooled by that illusion. A human who produces text by calculating numbers wouldn’t be an author either.
We have different words for this. The word “producer” or “generator” better captures the role of such a human. Calling an LLM a text-producing machine, or a text-generator, will lead to less confusion than attributing authorship to it.
Attributing authorship to LLMs ignores how they’re built and how they function. It overlooks all the people involved in designing, training, deploying, and prompting an LLM. And it hides the fact that there are people responsible for an LLM’s output.
Sometimes that’s done on purpose. Grand claims about Artificial General Intelligence—which is supposedly able to do everything a human can do—distract from the real problems with actually existing LLMs.
– Your lecture at the Conference focuses on the relationship between authorship, interpretation, and meaning in the context of texts generated by LLMs. In dialogue with the “death of the author,” as formulated by Roland Barthes, would you say that the rise of LLMs calls into question the role of the author itself, or rather a specific conception of what authorship means?
Barthes rejects the idea that language simply transmits an author’s internal psychological states to another consciousness. When he says that “language speaks,” he tries to explain the phenomenon that we can make sense of language use without knowing its author.
I interpret this to mean that meaning is embedded in language use within a context, including time and culture. Writing strips away much context, but readers reconstruct meaning by placing the text in their own context.
Ultimately, meaning is embedded in a more or less shared world. It doesn’t reside solely in the listener or reader, but both play important roles. Neither the reader nor the author can be eliminated.
LLMs transform the interplay between author, language, and reader. Reflecting on LLMs can challenge specific conceptions of authorship. Philosophers like Coeckelbergh and Gunkel use Barthes to argue that one particular concept should be given up, that of the author as the sole authority of the meaning of a text. I agree that this specific notion is a fabrication that can be dismissed. But that’s not true for the concept of the author altogether.
Barthes, Foucault, Derrida, Wittgenstein, and others didn’t merely dismiss a simplistic concept of meaning and authorship. They showed that writing is embedded in societal, cultural, historical, philosophical, political, and economic contexts. Their profound insights lie in this broader understanding, not just in rejecting one narrow view.
Thinking about LLMs gives us a fresh perspective on authorship. This perspective vindicates some claims by these philosophers, who are often treated as unquestionable authorities. More, it also equips us with new tools to critically reflect on their ideas and illuminate the ongoing discussion of authorship. LLMs profoundly reshape the author’s role, and we need renewed philosophical reflection to understand why and how.
– In your work, you explore the relationship between the human mind and digital technologies. How does sustained interaction with LLMs affect the way we think, write, and interpret?
Even before AI, some writers were already avoiding the effort involved in serious reading and writing. They’d do nothing but collect bits of information and paste them into pre-given schemes. Now, AI-generated text floods everything. Authors are being replaced with LLMs, and the Internet is becoming swamped with AI slop. This flood of mindless text has serious consequences for how we make sense of the world and undermines the foundations of free speech, society, and democracy.
LLMs repeat patterns they extract from their text corpus and which are reinforced during training. They tend to produce schematic, clichéd, and often biased text. They can help authors smooth their writing and make it more understandable to others. On the other hand, they can flatten content and style, and inhibit original writing.
LLMs have already changed how people write and talk. For instance, there’s been an explosion in the use of the word “delve” since ChatGPT launched. The word had been rare in most English-speaking regions, but not in Nigeria—where many of the click workers reside who trained the models. You could see this as a form of revenge by the colonized against those who imposed their language on them… I’m not completely serious here. But it’s a fact that people are talking and writing more and more like an LLM.
I already mentioned that LLMs can be misused to manipulate how people think and behave. That’s dangerous. Especially when you consider that LLMs also promote deskilling. Many people use them to do mental work for them rather than to improve their own mental work.
When people stop using their mental capabilities, they lose them. People are tempted to neglect not only simple mental tasks, but also critical reflection. All of this is happening at a time when we desperately need the ability to understand complex relationships and interactions.
At the same time, I see great potential in LLMs. They’ve enabled me to do things that were once far more difficult or simply not feasible. I can use them to perform complex searches, uncover hidden information, conduct research, analyze texts, interact with texts, make complex changes to text, challenge my own thinking, smooth my writing, learn languages, and so much more. LLMs can be useful for everyone who seriously works with text. And thinking critically about them sheds new light on old problems—like the discussions about writing in Ancient Athens I mentioned.
– Do you see Artificial Intelligence primarily as a tool that supports creativity, as a driver of broader cultural transformation, or as a development whose long-term consequences remain difficult to foresee?
AI is a tool, but it’s also much more than that. It’s one expression of digitization, a process that’s been unfolding for centuries. Digitization isn’t merely a consequence of electronic computers—computers are themselves a result of digitization, and they’ve accelerated the process in turn. The long-term consequences are hard to predict, but they clearly affect every aspect of human life: our thinking, experiences, emotions, and even human nature itself.
Since these are all topics philosophy deals with, fundamental philosophical reflection is useful for understanding how digitization changes cultures and minds. I wrote several articles on this, which are freely available on my website [www.durt.de]. I also made a short video explaining why philosophy is essential for understanding AI. Interested readers can easily find it on YouTube.
– What advice would you offer to young writers and translators who experience uncertainty in the face of rapid technological change?
Don’t just take advice from others—think for yourself. AI is already taking over thoughtless work. The days of making a living translating manuals from one major language to another are over. Low-level writing tasks are being automated. But high-quality work remains. Thoughtful translators still have opportunities. Original writers have even more. Keep your focus on quality.
I’d also recommend learning how technology can support great work. Not only complex text analysis with LLMs, but also simple things. Like touch typing, note-taking and bibliography software, and dictation and text-to-speech engines.
At the same time, enjoy lifelong learning. Reflect on what you genuinely want. Instead of chasing a seemingly well-defined career that may vanish, use this moment to engage with what truly matters to you. Have a well-grounded motivation, connect with your heart. Enjoy working out your own view, and seriously challenge yourself with different perspectives. Experiment. Think authentically and continuously learn. That’s what enables you to author great work.
– If the “author” is undergoing transformation in our current context, how would you describe this shift in authorial identity? What kinds of skills (philosophical, linguistic or digital) do you think the author of the future will need?
LLMs only work because they are trained on the incredible work of writers throughout history, or at least of those writers whose texts have survived, been digitized, and been selected by tech companies. There are real authors behind LLM-generated text. And there is responsibility. It’s just harder to see.
I’m deeply worried that many people are using LLMs the wrong way. They use them to summarize texts rather than to actually engage with the material. They use them to do mental work for them—things like brainstorming, or even writing text. When writers replace their own thinking with LLMs, they’re not developing their own skills. They never learn, or they fail to practice, essential abilities like clear thinking and authentic writing. The result is superficial text and self-inflicted deskilling. They also risk publishing clichéd text they wouldn’t truly stand behind.
Since LLMs can churn out endless amounts of superficial content, the ability to recognize and avoid superficiality will become even more critical. We’ll need to sharpen our own thinking, use LLMs thoughtfully without becoming dependent on them, and understand how they are transforming language itself.
– If, fifty years from now, we were to discover that an iconic literary work of our time had been written largely with the assistance of an LLM, would that change the way we read and evaluate it?
Fifty years from now, public attitudes will have shifted dramatically. We can’t really predict how people will think and feel then. I mean, even today, some people don’t care much about authenticity. There are those who don’t even care whether their “lover” has feelings for them, as long as they just do what they want. But for those who do value authenticity, I imagine attitudes about AI-generated text will evolve quite a bit.
I think it matters who’s speaking or writing. Not necessarily to understand the psyche of the author, though texts often do reveal a lot about it. But to understand the content itself—to make better sense of it, and to interpret it more fully.
If an LLM helps create an iconic literary work, that would be fascinating. By their nature, though, LLMs generate average text. They are unlikely to produce iconic literature. I already said that Frankfurt’s problem of “bullshit” has only been amplified by LLMs. LLMs mass-produce mindless content, exponentially more than before. We’re already drowning in clichéd and superficial text, and LLMs generate even more of it.
Writers who lean too heavily on LLMs generally won’t produce great literature. But if such a work did succeed, it would tell us something interesting about the power of computational processing over the massive text corpus the LLM has been trained on. And it would show us new ways in which these tools might be used.
– If you were asked to design an LLM specifically for writers and translators, what is the one feature you would definitely want it to have and what is the one feature you would consciously choose to exclude?
I’d want it to challenge what authors have written and to find counterarguments. And I’d want it to refuse to generate a first draft. That’s where AI takes over and replaces human authorship. Authors get degraded to prompters and editors. It’s not that LLMs want to enslave humanity—but if we don’t use them properly, we risk enslaving ourselves with algorithms.

Άντα Κουγιά
A child of the city center, a child of the city that still lives! She studied journalism and theater, loses herself in bookshops and in the music of Greek cinema, and thinks independently about what else she could become…
