LLMs and the Patterns of Human Language Use

Abstract

Large Language Models (LLMs) such as ChatGPT and other generative AI systems are the subject of widespread discussions. They are often used to produce output that ‘makes sense’ in the context of a prompt, such as completing or modifying a text, answering questions, or generating an image or video from a description. However, little is yet known about the possibilities and implications of human-sounding machines entering human communication. The seemingly human-like output of LLMs masks a fundamental difference: LLMs model statistical patterns in huge text corpora, patterns that humans are not normally aware of. Humans do perceive patterns at various levels, but when we produce ordinary language, we do not explicitly compute statistical frequency distributions.
The workshop aims at an interdisciplinary and philosophical understanding of LLMs’ processing of statistical patterns and of its possible function in communicative exchange. In the controversy about the communicative potential of LLMs, we start from the thesis that LLMs do not understand meaning and investigate the extent to which they can nevertheless play a role in communication when people interact with them. To this end, concrete examples of LLM applications, such as the use of LLMs in software engineering and for whistleblower protection, will be explained by and discussed with their developers. This is important not only for a better understanding of the kinds of exchanges that are possible with LLMs, but also for the question of how far we can trust them and which uses are ethically acceptable.

List of speakers and lecture titles:
*If you want to find out more about the speakers, click on their names – abstracts are listed below.

  • SYBILLE KRÄMER (Leuphana University of Lüneburg): How should the generative power of LLMs be interpreted?
  • CHRISTOPH DURT (TU München) & TOBIAS HEY (Karlsruhe Institute of Technology): Philosophy and Software Engineering: A Transdisciplinary Perspective on LLMs and the Patterns of Human Language Use
  • XYH TAMURA (University of the Philippines): How does Esposito’s “Artificial Communication” compare with Gygi’s Japanese “Emergent Personhood”?
  • ELENA ESPOSITO (University of Bologna / Bielefeld University): Communication with nonunderstandable machines
  • GEOFFREY ROCKWELL (University of Alberta): ChatGPT: Chatbots can help us rediscover the rich history of dialogue
  • IWAN WILLIAMS & TIM BAYNE (Monash University): The NLP Trilemma: Language, Thought, and the Nature of Communication
  • ANNA STRASSER (DenkWerkstatt Berlin): What we can learn from developmental psychology for dealing with non-understanding LLMs
  • MIA BRANDTNER (LMU Munich): Interpretative Gaps in LLMs – The Trial to Bridge the Limits of Language with Multi-Modal LLMs
  • DAVID GUNKEL (Northern Illinois University): Does Writing Have a Future? – Literary Theory for LLMs
  • BETTINA BERENDT & DIMITRI STAUFER (TU Berlin / Weizenbaum Institute): Anonymizing without losing communicative intent? LLMs, whistleblowing, and risk-utility tradeoffs

SCHEDULE

DAY 1 (THURSDAY 29 AUGUST)

10:15  Introduction of all speakers
10:45  SYBILLE KRÄMER (Leuphana University of Lüneburg): How should the generative power of LLMs be interpreted?
11:15  Discussion
11:45  CHRISTOPH DURT (TU München) & TOBIAS HEY (Karlsruhe Institute of Technology): Philosophy and Software Engineering: A Transdisciplinary Perspective on LLMs and the Patterns of Human Language Use
12:15  Discussion
14:00  XYH TAMURA (University of the Philippines): How does Esposito’s “Artificial Communication” compare with Gygi’s Japanese “Emergent Personhood”?
14:30  Discussion
15:00  ELENA ESPOSITO (University of Bologna / Bielefeld University): Communication with nonunderstandable machines
15:30  Discussion
16:30  GEOFFREY ROCKWELL (University of Alberta): ChatGPT: Chatbots can help us rediscover the rich history of dialogue
17:00  Discussion

DAY 2 (FRIDAY 30 AUGUST)

9:15  IWAN WILLIAMS & TIM BAYNE (Monash University): The NLP Trilemma: Language, Thought, and the Nature of Communication
9:45  Discussion
10:45  ANNA STRASSER (DenkWerkstatt Berlin): What we can learn from developmental psychology for dealing with non-understanding LLMs
11:15  Discussion
11:45  MIA BRANDTNER (LMU Munich): Interpretative Gaps in LLMs – The Trial to Bridge the Limits of Language with Multi-Modal LLMs
12:15  Discussion
14:00  DAVID GUNKEL (Northern Illinois University): Does Writing Have a Future? – Literary Theory for LLMs
14:30  Discussion
15:00  BETTINA BERENDT & DIMITRI STAUFER (TU Berlin / Weizenbaum Institute): Anonymizing without losing communicative intent? LLMs, whistleblowing, and risk-utility tradeoffs
15:30  Discussion
16:30  PANEL DISCUSSION with Anita Keshmirian, Christiane Schöttler, Sabine Thürmel, and Hadi Asghari

ABSTRACTS

Sybille Krämer: How should the generative power of LLMs be interpreted?
The debate about contemporary generative media goes hand in hand with a subtle anthropomorphism: software, especially text-producing chatbots, is understood in terms of the model of human cognition and communication, which the machines either surpass or fail to reach. The lecture attempts to avoid this category mistake through (i) a cultural-technical and (ii) a language-philosophical argument:
(i) In terms of cultural techniques, the written character of chatbot interaction must be emphasized. The digitized written material of a society embodies a cultural unconscious, which generative media forensically uncover as patterns and recombine into new patterns. The token-statistical approach of LLM-based algorithms thus forms a machine counterpart and an alternative to interpretation and hermeneutics, one that (with the exception of complex cryptological practices) is not accessible to humans.
(ii) In terms of the philosophy of language, interaction with chatbots does not have the character of speech acts or communication, which in social life are characterized by the fact that representing content is at the same time establishing a social relationship in the act of speaking together. Rather, what takes place is a co-performance of humans and technology, whose efficiency rests on the constitutive otherness, the alterity, of technology.

Christoph Durt & Tobias Hey: Philosophy and Software Engineering: A Transdisciplinary Perspective on LLMs and the Patterns of Human Language Use
Human language use is permeated by patterns that LLMs use to generate text and other artifacts that are meaningful to humans. The patterns of human language use also play a core role in human communication and sense-making, which, however, cannot be reduced to stochastic computation. This raises the question of the extent to which LLMs can replace the shared understanding humans may have of written or oral expressions. Beyond this problem of “explicit shared understanding” (ESU), there is the problem of “implicit shared understanding” (ISU): the shared understanding of groups of people that is not explicitly written down but tacitly understood, and that may therefore be even harder to replace with stochastic computation.
Software engineering is the discipline of computer science concerned with the objectification of ambiguous, contradictory, and incomplete natural-language requirements into highly objectified source code. LLMs have great potential in the software development process and are on the verge of replacing humans in more and more tasks, which gives the problems of ESU and ISU practical relevance. We explore the extent to which LLMs can replace human ESU and ISU with respect to concrete software engineering tasks.
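As a minimal illustration of how tacit understanding gets objectified in code (our example, not part of the talk), consider the requirement “users should be logged out after a period of inactivity”: what counts as “a period” is left to the ISU of the team, while the source code has to fix it explicitly.

```python
from datetime import timedelta

# Natural-language requirement: "Users should be logged out after a
# period of inactivity." What "a period" means is left to implicit
# shared understanding (ISU); source code must objectify that choice.
INACTIVITY_TIMEOUT = timedelta(minutes=15)  # illustrative value, an assumption

def should_log_out(idle_time: timedelta) -> bool:
    """Makes explicit a decision the requirement left implicit."""
    return idle_time >= INACTIVITY_TIMEOUT
```

An LLM asked to implement such a requirement must likewise commit to a concrete value, with or without access to the tacit understanding that would justify it.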

Xyh Tamura: How does Esposito’s “Artificial Communication” compare with Gygi’s Japanese “Emergent Personhood”?
In Esposito’s work “Artificial Communication” and in Gygi’s work on Japanese relationships with technology, “Robot Companions”, communication can occur between people and things; for the Japanese, not only communication but also personhood can emerge. Both works present ways to navigate and contextualize AI, robots, and technology in today’s world, and it can be helpful to see how the two perspectives, one of them non-Western, can inform each other.
Esposito says that it is better to think of AI as “artificial communicators” instead of “artificial intelligence”. On her view, these systems are not intelligent, but a type of “artificial communication” can “emerge” from the interaction between the user and the LLM, even if the LLM cannot “think”. Esposito adopts Luhmann’s concept of communication, on which communication takes place whenever a receiver perceives something as communication. This is somewhat functionalist: if something functions as communication, then it is communication.
Gygi uses Bird-David’s concept of personification to help frame the Japanese mode of personification. For Bird-David, entities are not personified first then socialized with later, but they are personified “as, when, and because” they are socialized with. Recognizing that a conversation is taking place with another being means being in fellowship with it, and does not require recognizing a common essence. It makes the being “a self, in relation with ourselves”. While the Japanese tend not to attribute inochi (life) to robots, the kokoro (heart/mind/psyche) of a robot can be something that emerges and is embodied in the way people interact with it. In some way, for Gygi, the Japanese have a functionalist and fluid view of selfhood/the psyche, or kokoro.
The two concepts resemble each other in that properties emerge from interactions and relations between beings and things, regardless of interiority. In both perspectives, interaction and relation shape communication and personhood, including how and when they appear. In a sense, Gygi’s use of Bird-David’s concept of personification extends Luhmann’s concept of communication: if there is a receiver to perceive something as a self, then in the ongoing interaction that self exists. Both concepts offer frameworks that allow us to use and interact with technology in meaningful ways, even while questions about robot and AI interiority remain open.

Elena Esposito: Communication with nonunderstandable machines
LLM-based chatbots’ ability to generate contextually appropriate and informative texts can be taken as an indication that they are also able to understand text. This presentation argues instead that the separation of the two competences – generating text and understanding text – is the key to their performance in dialogue with human users. This argument requires a shift in perspective from a concern with machine intelligence to a concern with communicative competence. The approach will be tested on the current discourse on prompt engineering, which involves formulating the requests made to chatbots in such a way as to help them give the right answers. Since it is explicitly about intervening in communication and not in the mechanisms of the machine, the term prompt engineering is actually not appropriate. Rather, it is about developing a kind of prompt rhetoric that, in the tradition of the classical “art of speaking”, teaches how to structure communication in such a way that it achieves maximum effectiveness.

Geoffrey Rockwell: ChatGPT: Chatbots can help us rediscover the rich history of dialogue
The public release of ChatGPT in November 2022 provoked renewed interest in artificial intelligence (AI) and the possibility that machines could become interlocutors in a meaningful dialogue. ChatGPT reached 1 million users in record time as people found questioning the chatbot compelling even when it hallucinated. In this talk, Geoffrey Rockwell will discuss how dialogue has been important to our imagination of AI since at least Alan Turing’s essay “Computing Machinery and Intelligence” (Mind, 1950). He will then turn to the long tradition of philosophical dialogue to understand what dialogue with a machine could be.


Iwan Williams & Tim Bayne: The NLP Trilemma: Language, Thought, and the Nature of Communication
Although naïve interaction with LLMs generates a robust impression that one is communicating with a minded agent, critics have alleged that this perspective involves excessive anthropomorphising of LLMs. One important point of disagreement concerns the kind of linguistic agency that LLMs exhibit, if any: do they engage in speech acts such as assertion? Although LLMs certainly seem to make assertions, there are good reasons for caution here, for LLMs appear to lack the kinds of mental states (such as communicative intentions) which are plausibly regarded as required for assertion. This challenge can be captured in terms of the following trilemma:
• Necessary condition for assertion: Assertion requires communicative intent.
• Lack of communicative intent in LLMs: LLMs do not have communicative intentions.
• Power of assertion: LLMs make assertions.
In this talk we evaluate various strategies for resolving this trilemma.
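One way to see the force of the trilemma is to note that the three claims are jointly inconsistent, so at least one must be given up. A minimal formalization (our notation, not the speakers’):

```latex
\begin{align*}
&(1)\quad \forall x\,\big(\mathrm{Asserts}(x)\rightarrow\mathrm{CommIntent}(x)\big)\\
&(2)\quad \neg\,\mathrm{CommIntent}(\mathit{llm})\\
&(3)\quad \mathrm{Asserts}(\mathit{llm})
\end{align*}
% From (1) and (3), modus ponens yields CommIntent(llm),
% which contradicts (2); hence (1)-(3) cannot all be true.
```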

Anna Strasser: What we can learn from developmental psychology for dealing with non-understanding LLMs
With the hype around LLMs, everyone seems to have a strong opinion about the capacities of LLMs – what they can do, cannot do, may one day do, and will never do. And many terms that philosophers previously reserved for describing the distinguishing features of humans as rational agents are now being applied to machines, leading to intense debates over such notions as comprehension, knowledge, reasoning, and phenomenological consciousness.
There is no question that LLMs have an amazing capacity to generate linguistic output that makes sense to humans, though it is questionable whether they themselves can be said to understand what their outputs mean. Nevertheless, humans do interact with these machines in ways that strongly resemble genuine conversation and we need to find illuminating ways of describing this activity.
To make progress here, we need to take a closer look at the difference between competence with comprehension and competence without comprehension and ask if there are forms of communication for which a level of competence without comprehension is sufficient. To this end, I shall look at the linguistic development of children and at other communicative situations where it is not obvious that both partners possess comprehension as well as competence.
I shall argue that even in interaction between humans, certain communicative activities, or language games, are asymmetric in the distribution of abilities. I shall then ask to what extent such language games offer a helpful template for describing human interactions with LLMs.

Mia Brandtner: Interpretative Gaps in LLMs – The Trial to Bridge the Limits of Language with Multi-Modal LLMs
The popularity of LLMs has shown that we can make sense of most speech LLMs produce. There remain, though, some sentences that only convey meaning when they are spoken by humans, for example, “I love you”. I hold that we can interpret those human spoken sentences meaningfully because those sentences express more than what can be expressed in language – they encompass a human form of life. Hence, the interpretation of those sentences does not come naturally to us when they are the product of LLMs, leaving an “interpretative gap” between human speech acts and machine speech products. Those interpretative gaps arise, I believe, at least within four categories of sentences: sentences containing 1) indexicals, 2) non-purely-linguistic speech acts, 3) speech about the meaning of culture and religion, and 4) art, such as poetry.
I argue that these interpretative gaps point us to what distinguishes us as human and thus to where LLMs and AI in general must necessarily fail to replicate us. Consequently, I contend, these interpretative gaps cannot be filled with more or other kinds of input data and training. Data does not compensate for the human form of life that machines, qua machines, lack. Datafication of the world, in fact, takes away the aliveness of our experience and our cultural and religious embeddedness. This is why multi-modal LLMs with visual and auditory inputs face the same fundamental boundaries as basic LLMs: what they gain with those additions is simply more “dead” data. I conclude, therefore, that such machines will never bridge the limits of language as we humans, in reference to our lives, can.

David Gunkel: Does Writing Have a Future?—Literary Theory for LLMs
This paper argues that large language models (LLMs) and generative AI signify not the end of writing but the terminal limits of a particular conceptualization of writing that has been called logocentrism. Toward this end, the analysis will 1) review three fundamental elements of logocentric metaphysics and the long shadow that this way of thinking has cast over the conceptualization and critique of LLMs and generative AI; 2) release a deconstruction of this standard operating procedure that interrupts influential and often-unquestioned assumptions about authorship, truth, and semiology; and 3) formulate the terms and conditions of an alternative way to think and write about LLMs and generative AI that escapes the conceptual grasp of logocentrism and its hegemony. In doing so, the paper will argue that writing indeed has a future, but only if we reconceptualize how we think about writing and write about thinking.

Bettina Berendt & Dimitri Staufer: Anonymizing without losing communicative intent? LLMs, whistleblowing, and risk-utility tradeoffs
Whistleblowing is essential for ensuring transparency and accountability in both the public and private sectors. However, (potential) whistleblowers often fear or face retaliation, even when reporting anonymously: the specific content of their disclosures and their distinct writing style may re-identify them as the source. Legal measures, such as the EU Whistleblower Directive (WBD), are limited in their scope and effectiveness, so computational methods to prevent re-identification are important complementary tools for encouraging whistleblowers to come forward. Current text sanitization tools, however, follow a one-size-fits-all approach and take an overly limited view of anonymity. They aim to mitigate identification risk by replacing typical high-risk words (such as person names and other named entities) and combinations thereof with placeholders. Such an approach is inadequate for the whistleblowing scenario, since it neglects further re-identification potential in textual features, including writing style. We therefore propose, implement, and evaluate a novel classification and mitigation strategy for rewriting texts that involves the whistleblower in the assessment of risk and utility. Our prototypical tool semi-automatically evaluates risk at the word/term level and applies risk-adapted anonymization techniques to produce a grammatically disjointed yet appropriately sanitized text. We then use an LLM that we fine-tuned for paraphrasing to render this text coherent and style-neutral. We evaluate the tool’s effectiveness using court cases from the ECHR and excerpts from a real-world whistleblower testimony, and we measure the protection against authorship attribution (AA) attacks and the utility loss statistically using the popular IMDb62 movie reviews dataset. Our method can significantly reduce AA accuracy from 98.81% to 31.22%, while preserving up to 73.1% of the original content’s semantics.
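The following is a simplified sketch of the two-stage idea described in the abstract, not the authors’ implementation: word-level risk scoring with placeholder substitution, followed by an LLM-based paraphrasing pass. The risk lexicon, threshold, and `paraphrase` stub are illustrative assumptions.

```python
import re

# Illustrative sketch (NOT the authors' code): (1) score re-identification
# risk at the word/term level and replace high-risk terms with placeholders,
# (2) hand the disjointed result to a paraphrasing model to restore
# coherence and neutralize writing style.

# Hypothetical risk lexicon; a real tool would use NER, context, and
# feedback from the whistleblower to assign these scores.
RISK_SCORES = {
    "acme": 0.9,      # employer name
    "smith": 0.95,    # person name
    "hamburg": 0.6,   # location
    "q3": 0.4,        # internal reporting period
}

def sanitize(text: str, threshold: float = 0.5) -> str:
    """Replace terms whose estimated risk exceeds `threshold`."""
    out = []
    for token in text.split():
        key = re.sub(r"\W", "", token).lower()
        out.append("[REDACTED]" if RISK_SCORES.get(key, 0.0) > threshold else token)
    return " ".join(out)

def paraphrase(text: str) -> str:
    """Stub for the second stage: a fine-tuned LLM would rewrite the
    sanitized text to be coherent and style-neutral."""
    return text  # placeholder; imagine an LLM call here

report = "Smith approved the faulty Q3 filings at Acme in Hamburg."
print(paraphrase(sanitize(report)))
# -> "[REDACTED] approved the faulty Q3 filings at [REDACTED] in [REDACTED]"
# (punctuation attached to replaced tokens is dropped in this naive sketch)
```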


CALL FOR REGISTRATION

Limited number of in-person places: first come, first served (ONLY 4 places left)
Online participation via Zoom after registration 

Please fill out the registration form and indicate whether you plan to attend in person or virtually.

REGISTRATION FORM

If the registration button does not work, simply copy the link to the form here: https://forms.gle/FgUXChDj1EsvEM8x7

At the end of July, we will send notifications of successful registration. All registered participants will then have the opportunity to take part in a CALL FOR QUESTIONS (CFQ).

CALL FOR QUESTIONS

Why?

  • Especially in interdisciplinary events, it is often the case that the representatives of the different disciplines are not really aware of the questions that concern the other disciplines.
  • A CFQ allows the speakers to get an idea of the audience’s interests in advance.
  • It is also likely that many of the questions submitted will be ideal for starting the discussion.
  • The CFQ gives everyone the opportunity to develop questions at their own pace. Of course, this is by no means intended to call into question the option of asking questions spontaneously.

How does the CFQ work?

  • Anyone planning to participate in the workshop (online or in person) can use the materials provided on the website to think about which questions are important to them.

 THE ORGANIZERS

ANNA STRASSER (DenkWerkstatt Berlin / LMU Munich)
BETTINA BERENDT (TU Berlin / Weizenbaum Institute)
CHRISTOPH DURT (TU Munich)
SYBILLE KRÄMER (Leuphana University of Lüneburg)

Longer description

LLMs based on generative AI are often used to produce output that makes sense in relation to a prompt, such as completing or modifying a text, or producing an image or video from a description. But the apparently human-like output masks a fundamental difference: LLMs model statistical patterns in huge corpora of text, patterns that humans are usually either unaware of or only tacitly aware of. Humans do experience patterns at various levels, often quite vividly, but when we produce ordinary language, we do not explicitly compute statistical patterns.

Rather, people make sense of language, although even within a discourse on the same topic, the degree and manner of understanding can vary widely between people. However, meaningful exchange is still possible to some extent, even if the participants have very different understandings of the topic, and some may have no understanding at all. By exploiting statistical patterns in large corpora of text, LLMs produce text that is – to an astonishing degree – grammatical and meaningful to us, and one can expect further surprises. The relationship between meaningful language use and statistical patterns is an open question, and considering it in the context of LLMs promises new insights. This will be important not only for a better understanding of the kinds of exchanges possible with LLMs, but also for questions about how much we can trust them and what uses are ethical.
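To make the notion of “statistical patterns” concrete, a toy bigram model shows what computing explicit frequency distributions over text looks like in the simplest case. This is a minimal sketch of our own, not a description of any particular system; real LLMs learn far richer regularities in neural network weights rather than explicit counts.

```python
from collections import Counter, defaultdict

# Toy bigram model: explicit frequency distributions over word pairs,
# the simplest instance of the statistical patterns discussed above.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_distribution(word):
    """Relative frequencies of the words following `word` in the corpus."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_distribution("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_distribution("sat"))  # {'on': 1.0}
```

A human speaker produces “the cat sat on the mat” without ever representing such a table; the model, by contrast, has nothing but such regularities.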

In this international and interdisciplinary workshop, we will discuss the ways in which, despite the fundamental difference in text production, LLMs can still participate in human language games. Are there certain types of language games that can be modeled almost perfectly by LLMs, and are there others that resist computational modeling? It is widely recognized that patterns are an important feature of human cultural development. What kinds of patterns in human language use can be modeled by computations on statistical patterns? What is the relationship between patterns and rules? What is the role of patterns for LLMs, and what is their role in experience and language? Since LLM text production is so radically different from that of humans, can communicative principles such as trust and reliability apply to human-machine interaction? We will discuss these and other questions, both in terms of the fundamental philosophical issues involved and in terms of concrete new and future applications of LLMs. Philosophical insights on the topic will be brought into discourse with the experience of computer scientists developing new applications.

Funded and supported by: