The Digital Transformation of Human Orientation: An Inquiry into the Dawn of a New Era

Winner of the $10,000 essay prize of the Hodges Foundation for Philosophical Orientation (HFPO)

Reference

Durt, Christoph. 2022. “The Digital Transformation of Human Orientation: An Inquiry into the Dawn of a New Era.” In How Does the Digitization of Our World Change Our Orientation? Five Award-Winning Essays of the Prize Competition 2019-21 Held by the Hodges Foundation for Philosophical Orientation, edited by Reinhard G. Mueller and Werner Stegmaier. Orientation Press.

Abstract

The digital transformation of the world began long before the invention of electronic devices, and it has always been intertwined with scientific, economic, ethical, and metaphysical orientations. Yet, the essay argues, future changes to human orientation will be even more pervasive. We can already witness how the digital transformation is taking off in a new direction, one in which the transformation of human orientation is the means or even the goal. Digital technology as basic as a search engine not only helps us to find our way through the internet but also nudges and reorients us. The new possibilities for surveilling humans and using them as a data resource provide the foundation for the targeted and intelligent transformation of human orientation through emerging forms of artificial intelligence. The possibilities for altering orientation and consequent behavior multiply with each part of the user’s experiential environment that can be digitally controlled, which makes augmented reality and the metaverse extremely attractive to some of the world’s richest corporations. Gaining orientation about these developments is key—not to turn back the clock, but to find ways to use the enormous potential to improve rather than to disturb orientation.

Introduction

It is almost impossible to overlook the fact that digital technology already has an enormous impact on human life today, and there is little doubt that its impact will increase dramatically, with no end in sight. Much less obvious, however, is what exactly the digitization, digitalization, or digital transformation[1] consists of. These concepts refer to changes not only of our surrounding world but also of our experience and understanding of our world, ourselves, and our relation to the world. But how can we understand the nature of these changes? We are in the midst of a development we do not yet understand and whose future is unknown. Not only as individuals but also as members of humankind, we find ourselves in a new situation in which we need to find our way. This is a paradigm case for orientation since “[i]n orientation, one is at first dealing with something one does not yet know about: a new situation.”[2] Considering the widespread confusion concerning our new situation and the pathways open to us, it is clear that orientation is sorely lacking (see section 2).

The digitization is often reduced to the development of digital devices and the changes their use brings to human life and the world we live in. But focusing only on the devices and the consequences of their use misses the chance to gain philosophical orientation in a more fundamental sense. To gain philosophical orientation about the multifaceted development these concepts refer to, the concept of orientation is crucial, in a sense that is often overlooked. This essay shows that the digitization of our world not only fundamentally changes human orientation but does so in an essentially novel way. It changes human orientation not only as an accidental consequence or because it is embedded in an attempt at metaphysical orientation (see section 3.2). Rather, something radically new is on its way, and orientation lies at its heart. Digital technology is increasingly built for the very purpose of changing human orientation, and it does so in increasingly intelligent ways.[3]

The core role of human orientation is overlooked in the main discussions of digital technology, which tend to be based on the misconception that intelligent technology must replicate, emulate, or simulate human intelligence (see sections 2 and 7). This essay discusses not only the changes digitization brings to metaphysical orientation and to orientation as a consequence of the new possibilities for collecting and processing big data (see section 4), but also how digital technology aims to change human orientation (see sections 5, 6, and 7). All of this shows that orientation is a core concept for understanding both digital technology and its impact on human life.

‘Digitization’ has become a buzzword that is much used but not well understood. The concept is commonly used with two distinct yet interrelated meanings. In a narrow sense, it refers to the digitization of analog qualities, for instance when a printed text is scanned and either saved in a graphical format or further processed by means of optical character recognition (OCR) to make the text electronically searchable. Digitization in the narrow sense means the transformation of analog qualities into digital symbols. The wider sense consists in the much larger process comprising centuries-long theoretical and practical developments pursued in philosophy, science, technology, and society. Section 3 explains that digitization in the narrow sense was already fundamental to digitization in the wider sense centuries before the advent of electronic computers. Sometimes ‘digitization’ refers to the narrow sense and ‘digitalization’ to the wider sense,[4] but in this essay both terms refer to the wider sense unless further qualified. It will be shown that the digitization of the world is a long and complex process that involves digitization in the narrow sense but cannot be reduced to individual aspects.

Some technology, both analog and digital, is used for orientation purposes and therefore obviously changes orientation. Both analog and digital clocks orient us in time, and both a compass and a GPS system can be used to find the right direction. Technology that is used as a tool for improved orientation in a particular environment will here be called orientation technology. The environment may be the spatio-temporal world, some virtual environment, or just information. Much digital technology, for instance search engines, is orientation technology, not least because digital technology has caused a flood of information in which we need to orient ourselves.

Digital technology tends to be more precise and advanced than analog devices, but that alone does not make it fundamentally different from analog orientation technology. Like analog orientation technology, even the most advanced GPS location systems only contribute initial means of orientation, such as the determination of one’s location and the directions to one’s destination: “all they really allow one to do is determine locations, and they do so only if these places are already defined as targets. The standardized orientation technology simplifies only the beginning of orientation.”[5] Orientation technology only contributes one part to orientation as a whole.

Improvements in initial orientation alone represent big changes and can entail even bigger consequences. Even so, we must recognize that this is only a small part of how the digitization changes orientation. Section 4 will explain how digital orientation technology is beginning to do more than just provide initial orientation. Furthermore, digital technology that is not orientation technology can also change orientation and is increasingly created for this purpose. A major means of altering human orientation is to change the situation in which we need to orient ourselves. The situation can be changed in a variety of ways that do not necessarily force users into a behavior but may nudge and persuade them (see section 5), and may involve the creation of an artificial environment (see section 6).

Digital technology that is not used for orientation enables its users to do things that may be more cumbersome or even impossible without it, and thereby alters and often fundamentally changes the situation. The large extent to which technology changes situations often does not become clear because the technology is embedded so profoundly in our behavior. When the technology breaks down and stops working in the expected way, however, we realize how much we have become dependent on it. In such a situation the conventional use of the technology is disrupted, and the technological device is no longer a tool that is, in Heidegger’s expression, ready-to-hand (zuhanden), but rather demands attention.[6] The broken technology confronts us with a new situation in which we either need to fix the technology or find other means to deal with the situation. Technology also comes to the fore when one learns to use it. In such cases, technology is not only a means to be considered in a situation but largely defines the situation in which we need to orient ourselves. The more the use of a technology becomes habitual, in contrast, the less it is noticed.

A rather obvious example of how radically technology can change a situation is weapons, which are tools built to decisively alter the power structure of a situation. A robber with a gun makes for a radically different situation from one in which the robber has no weapon. The changes technology brings to a situation are an obvious topic of political, sociological, psychological, and ethical concern. While weapons can serve basic desires for power or self-defense, the mere existence of a functioning gun in a shared space may add to a situation worries about the potential for misuse and the risk of accidents. In general, there are numerous reasons why technology is frequently something we need to worry about. It may not work as intended, it may work but be used in unintended ways, its use may have side effects or negative consequences, it may enable undesired actions, the technology itself may develop in undesired ways in the future, and so on. Such risks grow the more powerful the technology becomes—and the more we use it and become dependent on it.

Our dependency on technology is complex and can partly be compared to psychological addiction. Usually no one forces us to use digital devices; people buy their devices of their own free will and could, in theory, simply refrain from using them. In reality, however, temptations that are just a click away are no easier to resist than lighting a cigarette is for a chain smoker with a lighter in one hand and a cigarette in the other. We all know how hard it is to avoid being distracted by a nearby device that is ringing, chirping, sounding, vibrating, flashing, or blinking. Many of the digital devices around us and the applications they contain are designed to grab our attention. Attention is taken away from other tasks, and the frequent interruptions diminish the attention span and can impair cognitive performance.[7] Since attention and cognitive performance are important for orientation, frequent distractions and diminishing attention spans can distort orientation. Digital technology competes for attention, and content is designed or selected to keep users engaged as long as possible, creating and reinforcing habitual behavior. There are countless ways in which digital technology can become addictive, all of which, of course, skew orientation in the direction of the addictive behavior.

The above and countless other concerns about the impact of digitization on human orientation can lead to all kinds of interesting investigations. Philosophy must be careful, however, not to get lost in the details and not to lose focus on the bigger question regarding the structures and conditions of human orientation. This essay argues that digitization is not sufficiently understood if it is conceived in terms of the use of digital technology and the consequences of its use. Digitization comprises much more. In particular, the digitization of our world is intertwined with human orientation in a very intimate sense that will be explained below. A whole further dimension of the changes information technology brings to orientation will be explored. The digitization entails changes for, on the one hand, the use of information for orientation (section 4: orientation with information) and, on the other, the need to orient in the vast amount of digital information that is becoming part of our lives (section 5: orientation in information).

The changes the digitization brings to orientation are so profound that it makes perfect sense to say that they constitute a new situation for humankind. Digital technology confronts us not only with numerous altered and new situations in which we need to orient ourselves, but also with a new situation of humankind in which we need to find our way. Our situation requires orientation about the changes in orientation due to the digitization of our world.

2. Orientation about Digitization

Orientation about digitization is sorely lacking. Assessments of digital technology, its future development, and its impact on our world differ widely and often contradict one another. There is no lack of vocal “experts” who, sometimes with a quasi-religious eschatological zeal, either promote salvation fantasies or warn that the development of “full artificial intelligence could spell the end of the human race.”[8] In particular, “Artificial Intelligence” (AI) has become a buzzword. Some, such as a famous author and director of engineering at Google, claim that the human mind can be saved on hard drives and artificially reawakened, resulting in immortality.[9] Calculative inferences are taken to suffice to predict the future development of digitization. Others, such as a vocal professor at Oxford University, sound the alarm that AI is an “existential risk”[10] to humanity due to its alleged future ability to develop superintelligence.[11]

The vast difference between these assessments of the impact of AI on our future should make us pause for a moment and question the underlying assumptions. Both sides, the alarmists and the enthusiasts, share a common assumption: that digital technology is on the way to developing a general mind akin to the human mind. In fact, the very concept of Artificial Intelligence suggests that intelligence can be either natural or artificial, and the above authors jump to the conclusion that both the intelligence of humans and that of machines involve minds that can truly understand and will, and hence can replace the body and may want to destroy humans. Considering this assumption makes it clear why AI is thought to have the desired or feared consequences. From the beginning of AI as a field of study, which is usually traced back to the Dartmouth Summer Research Project on Artificial Intelligence in 1956[12] and which received much inspiration from Alan Turing’s prior writings on the conditions of intelligent machines,[13] AI has frequently been presented as having the potential to attain and supersede human intelligence. Claims such as that “[i]n from three to eight years we will have a machine with the general intelligence of an average human being”[14] have not proven accurate, however, and the resulting drying-up of funding has repeatedly issued in an “AI winter,”[15] only for new assertions of the same kind to arise. Yet, the failure of these claims has not led their proponents to abandon them. Instead, they double down on their assertions and simply project them to a later point in time. The apparent preoccupation with exact dates for events such as the “singularity” distracts from the fact that at the heart of the assertions lies not a scientific study but the belief in the possibility that AI can develop general intelligence, together with the belief in the necessity that this must happen one day. Beneath this belief lies once again the assumption that some digital technology is developing a general mind that replicates, emulates, or simulates the human mind as a whole, and not just certain limited capacities.

Yet, despite amazing progress in particular areas such as deep learning, digital technology seems to “hit a wall”[16] when it comes to general capabilities, even those that come easily to humans and often seem self-evident, such as common-sense knowledge. The fact that digital technology has trouble with easy common-sense tasks while excelling at narrow tasks that humans have trouble with, such as complicated calculations involving large numbers, calls into question the assumption that it will develop a mind akin to the human mind. Animistic interpretations that consider digital technology able to develop a general understanding or will of its own do not withstand scientific scrutiny. The speculations they provoke, such as about the moment of “singularity” when machines will develop artificial general intelligence, are not science but science fiction. Such speculations may be fun, but they also have the potential to distort our view of digital technology and the consequences of its use, and hence to disorient humans about digital technology. They furthermore disorient humans about themselves when they seem to lend plausibility to the idea that humans themselves are really digital machines. They are also apt to distract us from the actual existential risks and possibilities tied up with the digitization of our world. Actually existing digital technology has already changed our world, and it is sure to continue to do so. Considering digital technology as a sort of being with a mind cannot account for these changes, and it is too simplistic to provide a basis for a sober assessment of the future of digitization.

Narrow-minded as the view that artificial intelligence must mirror human intelligence is, it is driven by a correct intuition, namely that some forms of digital technology are more than mere tools. Of course, many digital devices are tools, but the concept of the tool is insufficient to account for core characteristics of digital technology. The reason is not that tools are necessarily simple, nor that they are neutral objects. The nature of tools is not exhausted by their material constitution but depends on their use: a stone may be just a stone, or it may be a tool if it is used for a particular purpose. That tools can be used in different ways, good or evil, does not mean that tools are neutral. Rather, tools suggest certain uses and inhibit others. The use of tools has many aspects; they do not just have a function but also stand in the context of human practice and experience. For these reasons, the philosophy of technology in the tradition of “postphenomenology”[17] is right to “approach technologies [not] as merely functional and instrumental objects, but as mediators of human experiences and practices.”[18]

Even when tools are considered as mediators of human experiences and practices, however, this is still insufficient to account for many of the technological modifications of human orientation investigated in this essay. To see why, the next section will apply fundamental insights of the founder of phenomenology, Edmund Husserl, and his student Martin Heidegger to digital technology. In its attempt to go beyond (“post”) classical phenomenology, postphenomenology seems to have overlooked many of those insights and their value for the study of technology, in particular digital technology. While the founder of the tradition of postphenomenology, Don Ihde, frequently references Husserl,[19] he tends to merely outline or dismiss Husserl’s extensive investigations[20] in ways that have been criticized as inaccurate by Husserl scholars.[21] Subsequent postphenomenologists have not attempted more thorough investigations. And even if we look beyond postphenomenology, we can find only a few authors who have thoroughly investigated Husserl’s contributions to the philosophy of technology in connection with the other philosophers of the phenomenological tradition.[22] The emphasis is often on how technology changes the relation to one’s living body and to others.[23] Husserl has been studied in connection with aspects of AI, but very eclectically and in what is overall a dismissive manner.[24] More thorough studies are rare, especially in relation to digital technology and digitization.[25] This essay takes up some of the widely overlooked insights of the phenomenological tradition, in the next section with regard to the digitization of the world and later with regard to the fundamental changes it brings to our orientation (section 6).

3. The Digitization of Our World

While digital technology and its use obviously change our world, the usual conception of the digitization of our world as the result of the use of computers and other digital devices is too narrow. Considering only the devices and the consequences of their use overlooks that these are relatively late developments, part of a much longer process that started centuries before electronic devices were invented. Electronic digital devices not only contribute to the digitization; they are also the result of a prior digitization of the world, together with scientific, philosophical, sociological, economic, and political developments that evolved together with that prior digitization.

Phenomenological investigation takes as its starting point not only physical aspects of the world but the world as a whole, which we inhabit, which we experience in everyday life, and which is meaningful to us, whether we engage in scientific activity or not. Reductionistic philosophers and laypersons accustomed to a naturalistic view of the world, in contrast, often think only of the physical world as investigated by natural science. Reductionistic concepts of the world, however, are insufficient to account for the impact of digitization on orientation. The reason is not only that the digitization of our world involves sociological and other implications, but also that orientation often concerns the world in a comprehensive sense. It is only sometimes a matter of orienting oneself in physical descriptions of the world, such as when one reads an article on physical science or uses a GPS system. But, as pointed out in the introduction, this is only one foothold of orientation that requires further orientation. To get a clearer view of how the digitization of our world changes orientation, we need to consider in more detail the digitization of the world as it is experienced and understood by humans.

3.1. The Digitization of the Lifeworld

Orientation always presupposes a given world: “When we orient ourselves, we always presuppose a pregiven world, in which we orient ourselves, in large and in small, spatially and mentally.”[26] This is also true when we orient ourselves about the world, “for orientation the world is at the same time a boundary condition and object.”[27] The presupposed world that serves as the background of meaning for science and all theoretical activity is what Husserl, in his last work, calls the “lifeworld” (Lebenswelt).[28] The lifeworld is the world of everyday experience, which is structured not by precise laws but by vague regularities that can be grasped through common-sense knowledge and common-sense reasoning.

Husserl holds that modern science, which developed around the time of Galileo Galilei, has “mathematized” nature. Husserl’s concept of the “mathematization of nature,”[29] which he extensively develops in his posthumously published The Crisis of European Sciences and Transcendental Phenomenology, spells out how the seemingly purely objective world of modern science is founded in the world of intuitive experience. The mathematization of nature consists of several consecutive yet interwoven steps that start with measurements. The measurements assign ideal numbers to empirical objects, which are thereby transformed into ideal and ultimately formal objects that can be operated on with the formal methods of mathematical natural science.[30] Experimental science has in turn contributed to the technical generation of mathematical-scientific reality.[31] Due to the mathematization of nature, reality is conceived in a new way, as consisting of numerically described entities. Since Galileo, the concept of the world as fundamentally mathematical has been presupposed by modern physical science throughout its development into Newtonian physics, relativity theory, and quantum physics. That this works for normal practice in the natural sciences is attested by its successes.

It is an additional step, however, to hold that reality is fundamentally physical and that everything else is reducible to a physical description. Scientists who make this claim exceed the scope of their science and engage in a philosophical-ontological discussion. They adopt a position of reductive physicalism, or possibly some lighter form of naturalism. For Husserl, this turns things upside down. The “‘objectively true’ world,”[32] which naturalism takes to be the real world, is in fact induced from the lifeworld. For the scientist who engages in scientific activity, the lifeworld is functioning not as “something irrelevant that must be passed through but as that which ultimately grounds the theoretical-logical ontic validity for all objective verification, i.e., as the source of self-evidence, the source of verification.”[33] Scientific theories must be able to explain the phenomena we experience, and, as already pointed out, scientific measurements ultimately go back to the lifeworld. Even when science explains that some phenomenon is an illusion, such as a stick half immersed in a glass of water that looks bent, it must explain why it appears bent (and physical optics is well able to do so). The mathematized world of natural science only seemingly replaces the lifeworld; in reality, it builds upon the lifeworld.

Physical descriptions seem to question the impression that objects and their properties and relations in the world exist in the way they appear in ordinary experience. But while this is true with regard to some appearances, such as in the case of the stick that looks bent, the mathematization of nature does not directly affect ordinary experience. Neither does it eliminate the most fundamental assumption given in ordinary experience, which Husserl calls the “general thesis of the natural attitude.”[34] The general thesis takes for granted that the world exists and that the objects in it, with their properties and relations, exist in the way they appear in ordinary experience. Such assumptions do not have to be made an explicit topic of thought or predication; they can also be a non-reflective part of experience.[35] Either way, the general thesis gives orientation by allowing us to see things as persistent parts of the world.

The mathematization of nature radicalizes the general thesis by giving a radically objective description that is further removed from subjective experience. The resulting description is of objects that are purely formal and in principle not experienceable. Paradoxically, reality appears to be at once disconnected from original experience and the cause of all experience. The interrelationship between the lifeworld and the mathematized world usually remains unclear because the mathematized physical description fits the lifeworld like a tailor-made “garb of ideas.”[36] The precise fit of the ideal objectivities of science makes it seem as if nature conceived in ideal mathematical terms were merely a more precise description of the same world. The radical difference between the two descriptions is covered up by seemingly frictionless methods of measurement and by the possibility of calculating new data that can be used to predict future measurements.

In an expression that better befits today’s use of language, the ‘mathematization of nature’ can also be called the ‘digitization of nature,’[37] or the ‘digitization of our world.’ It is a digitization in the wide sense that comprises wide-ranging developments over the course of centuries. At its core is digitization in the narrow sense, although it does not necessarily involve electronic computers. The mathematization or digitization accomplished with analog apparatuses is in principle the same as that resulting from digitizing with a scanner or other digital hardware. The difference is that with digital technology the process of digitization is automated and performed by the hardware itself. Analog technology, in contrast, only provides the means for digitization. For instance, the temperature on an analog mercury thermometer needs to be “read” by a human, and deciding which number corresponds best to the height of the liquid is the most important part of the digitization. In this case, the scientist is the analog-digital converter, a function that in an electronic scanner is fulfilled by its sensor together with other hardware. In either case, the analog world is measured, and values are assigned to points in a correlated ideal space that is used for scientific predictions, which again can be correlated to the world. Analog and digital technology can both be used for the mathematization or digitization of our world. The difference is that digital technology is so much faster and thus accelerates the digitization of the world.
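The conversion from an analog quantity to a digital symbol can be made tangible with a minimal sketch in Python. It is a toy illustration of quantization in general, not the procedure of any particular instrument; the scale and resolution are hypothetical:

def analog_to_digital(height_mm: float, mm_per_degree: float = 2.0,
                      resolution_deg: float = 0.5) -> float:
    """Map a mercury column height onto the nearest readable temperature."""
    temperature = height_mm / mm_per_degree      # continuous quantity
    steps = round(temperature / resolution_deg)  # choose one of finitely many levels
    return steps * resolution_deg                # the resulting digital symbol

print(analog_to_digital(36.7))  # -> 18.5

Whether the rounding is done by a scientist’s eye or by a sensor’s circuitry, the structure is the same: a continuum of possible analog states is replaced by one discrete symbol, and everything between two levels is lost.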

3.2. Digitization of the World as Metaphysical Orientation

Although the concept of the world as digital seems to be purely scientific, and many naturalists explicitly turn against metaphysics, the claim that the world in itself is numerical is not a scientific claim. It is a metaphysical claim that attempts to achieve ontological clarity about what is real (primary qualities) and what is reducible to primary qualities (secondary qualities). It responds to a desire for metaphysical orientation: “[b]oth classical metaphysics as well as later concepts of metaphysics and ontology indeed correspond to needs of orientation or respond to problems of orientation.”[38]

The digitization of the world provides orientation with regard to the world as a whole and the role we have in it as humans by drawing a unified picture. When the world is conceived as fundamentally digital, it seems self-evident that digital technology can accurately replicate or at least simulate everything in the world. It seems a safe bet to claim that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”[39] The possibility of the singularity seems evident, and, if humanity survives until then and continues developing ever more powerful computers, the only question seems to be when, not if, the singularity will happen. The modern picture of the world as digital, which underlies both the enthusiasm and the alarmism about the development of AI (see section 2), was already drawn centuries ago. Today, digital technology seemingly proves the applicability of naturalistic explanations of the human mind, too. That the development of digital technology seems to confirm the naturalistic orientation with regard to the world as a whole and the humans in it may help explain the quasi-religious zeal with which the possibility of Artificial General Intelligence is promoted.

To those who take our world to be reducible to the world of science, the enormous success of natural science seems to confirm not only that science is on the right track but also that reductionistic naturalism is true. While natural science has been very successful in explaining nature, however, the lack of success in explaining the human mind has been grist to the mill of the anti-naturalists, who claim that the mind is not reducible to digital calculation. The Decade of the Brain is long past, billions of dollars have been poured into brain research, but brain science is still light-years away from a coherent understanding of how the human mind works. If, however, it is possible to replicate or simulate the mind and all of human behavior, then this seems to vindicate the naturalistic picture. Since digital systems can already do astonishing things, such as beating the world’s best chess and Go players, it seems as if eventually they will be able to do all the things they cannot yet do. Every ‘not’ is understood as a ‘not yet,’ and the obvious failures of digital systems are not understood as contradicting the naturalist assumption (such as with regard to general capabilities, see section 2). Digital technology seems to finally vindicate what modern physicalism had argued long ago: that the world is digital in its real nature, including us as humans and our “Cartesian Theatre” of “inner” experience that seems to be caused by physical “machines” such as our brains.

Besides data, the core concept used for digital representation today is ‘information.’ The world is taken to consist of information and “reports itself in some way or other that is identifiable through calculation and […] remains orderable as a system of information.”[40] Humans, too, are part of the world and are conceived and treated as systems of information. Not only is nature measured with digital devices, but increasingly also the human body, together with its vital signs, location, and activities. Information is at the core of the digitized world, with not only nature conceived in terms of physics but also humans as part of nature, as well as the social, political, and economic environment created by humans. It is as if humans were just “machine parts” (Maschinenteile),[41] parts of a digital system in which everything is ordered by information. This machine is not a computer alongside other things in the world but the whole world conceived as a digital system, regardless of whether there are computers in it or not. A classical image of this system is Galileo’s picture of the “book of nature,” which is written in geometrical terms. A recent depiction is the matrix in the film “The Matrix,” which consists of information that appears to humans as the world they live in. Information processing seems to be much more than what is done in computing; it seems to unlock the workings of the whole world.

Section 6 will come back to the ways in which digitization is changing our orientation toward the world and ourselves. There, the above thoughts will be continued in an investigation into how the digital technology of today and of the future changes our orientation toward the world. Before that, however, it is important to get clearer about the connections between information and orientation. Until now, digitization has been focused on a rather narrow kind of information, and section 7 claims that this is beginning to change. Digital technology will be able to operate with information in a wider sense, and that will fundamentally change our orientation toward the world.

In the narrow sense, information is digital and consists of digital representations. This is also the meaning of ‘data’ as it is understood here. Digital representations are the correlates of the world. Information is put in symbolic terms that can be processed by means of logical or mathematical operations. Logical operations on information are at the core of computation, and the construction and maintenance of digital technology require much orientation in digital information. Digitization hence increases the need to orient in digital information. Orientation in digital information is only one, very particular, form of orientation, however.
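How logical operations on symbolic information lie at the core of computation can be illustrated with a textbook toy example, a one-bit half adder, sketched here in Python (a standard construction, not tied to any particular machine):

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two bits using only logical operations; return (sum bit, carry bit)."""
    return a ^ b, a & b  # XOR yields the sum, AND the carry

print(half_adder(1, 1))  # -> (0, 1): one plus one is binary 10

From such elementary logical operations, arbitrarily complex arithmetic can be composed, without anything in the machine needing to “understand” the symbols it operates on.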

There is also the older sense of semantic information, i.e., information that is meaningful to humans due to the relevance it has in the context of their lives. While orientation in information has always been important for humans, the next two sections consider the novel impact of digital technology on orientation with regard to information, distinguishing two ways in which digitization changes it. Section 4 is concerned with orientation with information; section 5, in contrast, studies orientation in information. Orientation with information concerns the use of information for orientation; orientation in information concerns the orientation one needs to gain in order to select the information relevant for one’s purposes.

4. Orientation with Information

The fast progress of the internet and mobile technology in roughly the first decade of the new millennium was for many users a honeymoon period in which information technology appeared to lead to ever freer speech and almost perfect anonymity. A few years later, however, many rather abruptly found that digital technology had eliminated much of the privacy they previously enjoyed, with little hope of regaining it. Spectacular leaks and journalistic research have shown the large extent of state surveillance, only some of which is directed against terrorists. The difficulty of keeping private the vast amounts of data stored in digital systems has highlighted two different things. Firstly, there are great new possibilities for investigative journalism. Witness the astonishingly quick sequence of major revelations, many of which involved the leaking of vast troves of data, such as the WikiLeaks publications concerning abuses by the American military, the leaks of Edward Snowden and others concerning NSA surveillance capabilities, the publication of the Panama and Pandora Papers, and the leak that revealed the misuse of the Pegasus surveillance system. Such data enables journalism to pursue international investigations of previously unheard-of magnitude. It also enables law enforcement agencies to tackle previously hidden tax evasion. Secondly, the extent of surveillance by state and private actors already taking place is enormous, and its future potential is truly alarming. The influence of surveillance on orientation merits a differentiated consideration.

4.1. Surveillance and Orientation

Revelations of the surveillance capacities of state actors such as the NSA[42] and private companies such as NSO[43] have shown the wider public that the exploitation of weaknesses in operating systems such as Android or iOS has become a huge industry. For instance, the Pegasus surveillance software has been used to access the audio and video of smartphones, together with emails, messages, and any other data exchanged or saved on the devices. It has been found on the smartphones of human rights activists, political opponents, competitors, and heads of state. It has been used to extensively spy on critics, estranged daughters, and ex-wives of authoritarian rulers and is believed to be used by Mexican drug cartels.[44]

In view of these revelations, Apple’s advertising slogan “what happens on your iPhone stays on your iPhone” sounds hollow. Of course, much of what “happens” on a smartphone, such as calls and messages, is not meant to stay there in the first place, and since the interception and decryption of digital content is always possible (and often easier than ordinary users think), the privacy of calls and messages is always in question. But all other content on the iPhone can be accessed by spyware, too, through no fault of the user. Pegasus makes use of several zero-click exploits, which infect cell phones without any action on the part of the user, are sold on the black market, and are used by numerous actors.

Apple’s remark that this affects only a small percentage of its users[45] may be factually correct, but it disrespects the large number of customers who have been affected—one leaked list for NSO’s Pegasus alone contains around 50,000 phone numbers of potential targets. It also suggests that other priorities, such as saving money and resources for new developments or keeping open the option to constantly add new features to Apple’s iMessage application, take precedence over the desire to close exploitable weaknesses in its operating systems.[46] The relatively low priority accorded to security may make sense from a business perspective, as there is little competitive pressure in this respect. Some customers may look for alternatives, but since competing operating systems such as Android and HarmonyOS are notorious for their security weaknesses, changing to a different system would be futile. For non-experts, there is no easy way to make sure that their digital technology has not been hacked.

Targeted surveillance searches for specific information, which is not readily available. Even when an attempt is made to collect all available digital information, such as in the NSA’s declared ambition to “Sniff It All, Know It All, Collect It All, Process It All, Exploit It All, Partner It All,”[47] the real purpose is not necessarily to extensively exploit all the collected data and share it with partners. Extensive use of the data of large parts of the population to control the population goes beyond targeted surveillance. For targeted surveillance, the whole dataset is only important because the data it is interested in is hidden in it, possibly like a needle in a haystack. Targeted surveillance may target specific people or small groups of people, such as alleged terrorists, and these need to be laboriously filtered out from the whole dataset. A dragnet may vet vast numbers of people just to find one terrorist, and the data gathered from all the other subjects needs to be systematically disregarded if the purpose is “just” targeted surveillance. One of the main problems of targeted surveillance is the elimination of superfluous information. Otherwise, targeted surveillance turns into mass surveillance, which is easily misused to control and subdue humans. The excessive collection, preservation, and use of data, whether on purpose or by accident, is hence a constant bone of contention in democratic societies.
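The needle-in-a-haystack structure of targeted surveillance can be sketched in a few lines of Python; all records and selectors below are hypothetical:

# Hypothetical mass-collected records.
records = [
    {"id": 1, "phone": "+1-555-0100"},
    {"id": 2, "phone": "+1-555-0123"},
    {"id": 3, "phone": "+1-555-0177"},
]
selectors = {"+1-555-0123"}  # the "needle" the surveillance is after

hits = [r for r in records if r["phone"] in selectors]
rest = [r for r in records if r["phone"] not in selectors]
# 'hits' is the targeted information; 'rest' is the haystack that must be
# systematically disregarded if the surveillance is to remain targeted.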

Mass surveillance can be an effective means to control and subdue whole populations and is extensively used for precisely this purpose. The new surveillance opportunities bolster the possibilities for what could be called surveillance authoritarianism. Surveillance authoritarianism controls its citizens by means of comprehensive surveillance. It collects and processes data to profile all citizens who use digital technology or are recorded by digital technology such as surveillance cameras. When deviating behavior is detected, citizens are imprisoned, stripped of their rights, or otherwise sanctioned. Conversely, wanted behavior may be rewarded with privileges. That such “Big Brother” dystopias are not far off is shown by the quite successful surveillance efforts of the Chinese government and its simultaneous implementation of a social credit system. The tight control of minority groups such as the Uighur population in China[48] has been compared to a prison.[49]

Targeted and mass surveillance decisively change orientation with information through orientation in information. The very purpose of targeted surveillance is to decisively improve the orientation of those who order the surveillance. If the surveillance succeeds, it provides them with information they can use to orient themselves with regard to others and to influence the orientation of others. The persons targeted by the surveillance, on the other hand, may not even suspect that the surveillance is taking place and may not change their orientation at all. Those who are aware of the surveillance or merely suspect it, in contrast, frequently report that it has fundamentally changed their behavior in countless situations. Among other things, being observed means they no longer freely voice their opinion in the observed channels or near potential surveillance devices such as smartwatches or cell phones. Just the possibility of being observed is enough to change a situation and an individual’s orientation in it. The reorientation will depend not only on the magnitude of the risk and possible negative consequences but also on other factors such as the psychological challenges constituted by potential intrusions of privacy. The privacy or non-privacy of an exchange can fundamentally change the conditions of orientation. It alters the way the situation is viewed and how the different options for action appear. This can cause considerable disorientation and necessitate reorientation.

Due to the new data collection and processing possibilities, and because there is so much more data to operate with in the first place, the potential for surveillance has vastly increased. More people than ever before can be surveilled at the same time, individuals can be targeted much faster, the information gained can be transmitted and processed at speeds that were previously impossible, and it can be evaluated much more quickly and efficiently. Moreover, digital surveillance is often much harder to detect than previous surveillance methods. Actors can operate remotely from anywhere in the world, and they can make use of devices that are already in the target’s possession. Furthermore, the data can be easily passed on or sold to others. All this is true for surveillance for political, economic, and other uses.

Rather than constituting a shadowy business model, today the collection and processing of data is the main staple of many of the biggest corporations in the world. Data has become a lucrative resource that is collected, exchanged, and processed. Everything that happens digitally can be accessed and processed by digital means, including but not limited to communication, purchases, and searches. A large part of online behavior is registered, processed, and used to profile individuals and to feed them advertisements and other information or misinformation. This is done with or without the consent or knowledge of the user, or with a form of consent whose consequences are not completely understood by the user. Since more and more of our lives take place online, each day also creates new possibilities for surveillance. There is a vast number of other data sources waiting to be tapped, and there is nearly unlimited potential for improvement in collection and processing. In retrospect, today’s methods will surely look primitive in comparison to what is to come. In short, the golden age of surveillance has arrived.
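A toy sketch of such profiling, with hypothetical behavioral categories, might look as follows in Python; real systems aggregate vastly more signals, but the principle is the same:

from collections import Counter

# Hypothetical record of one user's observed online behavior.
observed_events = ["sports", "sports", "travel", "sports", "electronics"]

profile = Counter(observed_events)  # aggregate behavior into a profile
top_interest, _count = profile.most_common(1)[0]
print(top_interest)  # -> "sports": the category used to select advertisements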

Yet, the surveillance used in today’s digital economy is of a type very different from targeted and mass surveillance, and its influence on orientation is very different from that described above. The next section is hence dedicated to this new form of surveillance, which has developed for economic purposes.

4.2. Surveillance Consumerism

Big data has already changed much of the economy into what is often called “surveillance capitalism.”[50] The concept of “surveillance capitalism” is modeled on a relatively old concept, that of early industrial capitalism, and despite its possible merits this essay instead uses the more neutral and general term ‘surveillance economy.’ The focus will be on one aspect, that of surveillance consumerism.

Even more than mass surveillance, the surveillance economy aims not only to collect all data but also to exploit it as thoroughly as possible. The aim is not to identify the needle in the haystack and use the result in a concerted action, but to gather as much data as possible and use it for minor behavioral changes, each of which may only be of small value. A surveillance economy is not primarily interested in hidden or secret information, but in all kinds of information, much of which may be rather superficial and easily available.

While it is true that data of and about users has value and is being sold, this does not confirm the widespread belief that it has intrinsic value. In fact, data lacks intrinsic value even in the case of the most secret information gathered during surveillance. Like printed money, information derives its value from the fact that it can be used for valuable purposes. The information gathered in targeted surveillance may be valuable for identifying unwanted people and behavior, it may be crucial for doing or constructing something, it may provide a better estimate of the resources of an opponent, it may be used to blackmail opponents, and so on. Although these are very different uses of information gained from surveillance, they all have in common that the information constitutes extraordinary and possibly secret knowledge. In a surveillance economy, one of the main uses is to change the orientation and consequently the behavior of people, e.g., to sell more. Advertisements and other forms of promotion in the physical world and in classical media are already meant to change the orientation of the humans they target toward certain products, services, or belief systems.

Changes in the orientation of individuals toward an accelerated consumerism are nothing new. In consumeristic societies, consumption satisfies needs that have been artificially created and ultimately becomes an end in itself. Consumerism not only changes the orientation of individuals but also that of culture, society, economy, and politics. Since it is believed that consumption keeps the economy expanding, whenever consumption is flagging, stimulus money is used to bolster consumption. It would be more apparent how much digital technology directs the orientation of modern humankind toward consumerism if most of today’s societies had not already been extremely consumeristic before digital technology became an integral part of our lives. Of course, consumerism is not per se new. The history of humanity has been driven by ever-increasing production, trade, and consumption, and every century has accelerated this trend. But in the 20th century, consumption made a huge leap forward, and especially after the Second World War, technological progress, resource exploitation, mass production, cultural change, political will, globalization, and analog media such as radio and television, which not only facilitate advertising but also themselves constitute a product for mass consumption, prepared the ground for almost absolute consumerism. The digitization started as an orientation about the world (see section 3); we can now say that “world orientation has become a world market orientation.”[51]

Digital technology has further accelerated the orientation toward consumerism, in obvious ways. It facilitates consumption by increasing productivity and improving logistics, and it contributes to basically everything that enables and fosters consumption. Digital devices themselves are consumer products that are quite frequently replaced with newer devices. Their production, maintenance, and use consume considerable resources. They are furthermore mostly used to consume digital content in all its forms. The heavy consumption of media content is promoted by corporations that offer all-you-can-binge streaming services such as Amazon Prime, Apple TV, Netflix, and YouTube, all of which also create new media content. Completing the so-called FAANG corporations, Facebook’s main function is to provide another endless stream for consumption, as is that of Twitter, albeit in a somewhat more active form than the mere streaming of content. In countries outside of the US that want to control the data of their citizens, Alibaba, Baidu, Tencent, and plenty of other companies are no less about consumption. Gaming is a whole other field that constitutes a form of (somewhat more active) consumption and reinforces consumption in other areas. Last but not least, online shopping not only replaces shopping in physical shops but also enables more and different consumption.

As described in section 3.2, digitization has had a leading role in the creation of a global system that treats humans as either a resource or a consumer; it is as if humans were just “machine parts.” Even before personal computers and cell phones were invented, the characteristics of such a global system were already emerging. Humans were both consumers and a resource, although their use as a physical resource, which Günther Anders describes with respect to Auschwitz, was relatively rare. This changes, however, in the digital world. Humans are now not only consumers of analog items as well as of a never-ending stream of digital content; in addition, they are an important resource, namely of data and of behavior as expressed in data. In surveillance consumerism, surveillance meets consumerism to form a symbiotic union. Both as consumers and as resources, humans are controlled and managed through surveillance. The surveillance is now a form of mass surveillance that collects the data of as many consumers as possible, and it is also a form of targeted surveillance that aims at changing the orientation of individuals according to their specific character.

The digital “machine” orders everything by computations on information. Information makes the digital world go round. Even money is data stored in accounts at electronic banks or in blockchains. As noted in section 3.2, this machine is not a computer alongside other things in the world but the whole world conceived as a digital system, regardless of whether there are computers in it or not. The world can be a “system of information”[52] without personal computers and mobile phones; its possibility does not depend on what is often thought to constitute the origin of digitization. Computers and mobile devices are not the origin but the result of a long development, which in turn they vastly accelerate.

The flood of advertisements in analog and digital media as we know it is just a watery foretaste of what digitization is bringing us. An analog comparison might be driving a car while constantly seeing advertisements, most of which do not look like advertisements. They are not restricted to billboards or the radio; even the street signs are paid for by businesses. Furthermore, the street itself is built differently for each individual driver. In an analog world, this would clearly disturb one’s orientation. Yet, by analogy, this is the situation we are in when navigating the internet – except that the internet is not navigated with a car but with tools that are able to guide the user. Tools such as browsers and search engines have restrictions, of course, and most importantly do not neutrally obey the actions of the driver but treat each driver individually and attempt to guide them to where their provider wants them to go. The tools themselves both provide orientation to their users and change the orientation of their users. This is a novel situation that will be considered in more detail in the next section.

5. Orientation in Information

In analog times, the information relevant for orientation in a situation had to be found, and the problem often was that the information had not been collected or was not easily available. Information had to be laboriously collected, gathered from experts, or researched in libraries. Digitization fundamentally changed the availability of information. Nothing is easier than to get lost in the flood of online information, with more information at most one click away. To orient oneself in the flood of available information, it is today more crucial than ever to find the information most relevant to the respective situation. There is a new abundance of information, and it is easily accessible, but orientation in the flood of information is lacking.

One of the most fundamental needs in the digital age is orientation in the available information by searching according to semantically differentiated criteria to find the exact information looked for. This would be the job of orientation technology in the digital age, in particular search engines (see introduction). As it stands today, despite the powerful capabilities of digital technology, including internet search engines, this fundamental need is not only unfulfilled but systematically undermined. Besides a lack of sufficient data and the technical difficulties of correctly interpreting a search request, there are obvious economic reasons that work against delivering the most relevant information. While the biggest search engines are all free of charge to the end user, they make enormous profits in other ways, many of which involve manipulating the very capacity they are used for. Consequently, a large number of search results do not correspond well to the search request because they consist of advertisements, sponsored links, links to other branches of the company providing the search engine (such as YouTube), or other links that may only somewhat correspond to the search request but are selected because they make money for the search engine when they are presented, clicked on, or lead to purchases. These are usually the first results that pop up in an internet search. The search engine manipulates search results in accordance with the interests programmed into it. Moreover, many sites are tweaked to be ranked higher in the results of searches they do not genuinely correspond to. The obvious aim is to orient the end user toward profit-generating sites.
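How commercial interests can skew a ranking is easy to sketch. The following toy scoring function in Python, with hypothetical weights and fields, is purely illustrative and not the algorithm of any actual search engine:

def score(result, w_relevance=0.4, w_revenue=0.6):
    # Rank by a blend of relevance and expected revenue for the provider,
    # rather than by relevance alone.
    return w_relevance * result["relevance"] + w_revenue * result["revenue"]

results = [
    {"url": "https://example.org/best-answer", "relevance": 0.9, "revenue": 0.0},
    {"url": "https://ads.example.net/sponsored", "relevance": 0.5, "revenue": 0.9},
]
for r in sorted(results, key=score, reverse=True):
    print(r["url"])  # the sponsored link outranks the more relevant page

As soon as the revenue weight dominates, the ordering serves the provider’s interests rather than the user’s search request.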

In everyday use, that obvious aim is often not apparent. The search results are, despite the manipulation, still somewhat useful, and users may feel they know how to evaluate them. User behavior may already have changed to the extent that users find it helpful when YouTube links always appear among the first results, regardless of whether they searched for videos or not. To some it may seem as if internet searches are free (rather than paid for with data and behavior), and they may not feel entitled to demand much from free services, or they may believe that they could easily opt out of a service and choose a different one. Others may gleefully believe the glowing marketing promises that come with shiny devices, neat services, and new features and do not wish to question the dream of effortless freedom. The fact that technological progress does indeed bring improved features and devices also hides the manipulations. Users do not even realize how much their orientation in information is hindered by the distortion of search results. Today’s digital world is not only far from realizing its already existing potential; it also disorients its users.

Even selling portals that make money from every item sold on their platform manipulate search results to make additional money from sponsored links. The items they present are selected by criteria that are not made clear to the end user. Search capabilities are intentionally limited; for instance, Amazon does not allow words to be excluded from a search. The algorithms and data used are left opaque, and users are manipulated in such a way that their orientation is guided toward the behavior that generates the most profit. The same holds for many more sites, some of which sell directly, link to sales on other sites, provide reviews that favor certain items, or attempt to otherwise change the orientation of their users. Reviews by other users cannot generally be trusted either, because many of them are in one way or another paid for by the sellers of products and services. The manipulation of searches and of the information used to find one’s way through the internet has made in-depth orientation in information difficult and time-consuming.

In total, there are probably more sites that attempt to direct the orientation of their users than sites that try to convey information, which, of course, also changes the orientation of users. The new possibilities of guiding or manipulating customers, in addition to digital advertising, have given an enormous boost to persuasive technology, which attempts to change the orientation and behavior of people by persuasion instead of coercion.[53] Statistically speaking, small incentives called nudges are sufficient to guide user behavior, and nudging has become a major topic of research.[54]

Originally, the concept of nudges was introduced in behavioral economics to refer to gentle reinforcements of positive behavior that are in a person’s own interest, such as eating something healthy rather than sweets for dessert. The idea is that, although a person would agree that it is in their own interest, they still need a gentle nudge to do it, such as having the healthy food within easy reach. Nudges may provide clues, leads, and footholds that make sense for a given orientation[55] and hence contribute to one’s orientation. Nudges can serve as reminders of intentions a person has, is believed to have, or, at least, should have, which are, of course, very different purposes.[56] Beyond the original idea, ‘nudge’ has become just another word for the influencing or manipulation of behavior by means of small and often unrecognized incentives, whether the resulting behavior advances the general interests of the person targeted or the interests of the actors who stand behind the nudge. Since these interests usually do not completely coincide, nudges easily become a method of manipulating behavior in favor of the provider of the nudge. In either case, nudges orient the user toward the behavior desired by the provider.

Nudges are supposed to reorient people, ideally in ways that do not require effort on the side of the user. Since the least effort is effort the user does not even notice, nudges that work on an unconscious level conform best to the idea of a nudge. The reorientation attained with nudges does not need to reach the level of consciousness. Nudges orient the user toward certain behavior. One primitive example, sketched below, is the suggestions presented on websites that take into account what other users who searched for or bought something also searched for or bought. The results or items presented have a higher likelihood of relevance. When they lead the user to buy something they truly need and can afford, a win-win situation is created: the underlying algorithm has facilitated a sale, and the customer has bought something they truly need but may not even have known existed. Even when a provider is paid only tiny amounts per mediated click, the large number of visits makes for good profits. Of course, the better the suggestions are adjusted to the user, the more profit can be made. The power of algorithmic nudging lies in its combination of large numbers with personalized persuasion strategies. Furthermore, the strategy can be adjusted not only by using existing data but also in real-time interaction with the users.
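As a concrete illustration, here is a minimal sketch of “users who bought this also bought” suggestions based purely on co-occurrence counts over past purchase baskets; the baskets and item names are invented:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories; each inner list is one user's basket.
baskets = [
    ["tent", "sleeping bag", "headlamp"],
    ["tent", "sleeping bag"],
    ["tent", "camping stove"],
]

# Count how often each pair of items was bought together.
co_occurrence: Counter = Counter()
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        co_occurrence[(a, b)] += 1

def suggest(item: str, top_n: int = 2) -> list[str]:
    """Return the items most often bought together with `item`."""
    scores: Counter = Counter()
    for (a, b), count in co_occurrence.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(suggest("tent"))  # ['sleeping bag', 'headlamp']
```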

The quality of nudges and other persuasion techniques can be much improved when algorithms also take into account the preferences of the user, their ways of reasoning and deciding, the selling strategies they are susceptible to, and so on. To provide more “intelligent” suggestions, it also helps to take into account the logical and semantic relations between searched or bought items and possible suggestions. The more useful the data and the more intelligently it is used, the better this strategy works. This is one of the reasons why the intelligent use of data is worthy of more detailed consideration (section 7). Furthermore, the better the situation can be controlled, the better the strategy works. Controlling a situation is an excellent means of controlling the external conditions of orientation; this will be studied in the next section.

6. Orientation in Extended Realities

As pointed out in the introduction, orientation first of all deals with a new situation. Controlling the situation is hence one of the best ways to hold sway over the orientation of humans and to change it in ways that guide them into desired behavior. One of the most comprehensive means to technologically define a situation is to create an artificial environment that limits the possible actions of a user. The technology is then not merely an item in the environment but itself provides an environment for the user. This environment defines the role of the users and their possible actions. For instance, the cockpit of a plane or the driver’s seat of a car are surroundings that radically change the actions open to the pilot or driver. The artificial environment of this technology consists in clearly defined surroundings that enable a set of possible actions by their users and inhibit or prevent others. To use Anders’s expression again, humans become “machine parts,” in this case integrated into a machine in the literal sense. In the pilot’s or driver’s seat, however, the human is in control, at least if things go as planned; this includes controlling where the vehicle is going in its environment. The situation changes substantially when the technology intentionally alters or even determines the environment. In an artificially created environment, the human becomes part not only of an immediate environment but of a wider, artificially created system. This wider system constitutes a form of virtual environment that increasingly replaces the world. In such an environment, it is even possible to completely dominate the orientation of the user. When the user is not able to differentiate and distance herself from the situation, she cannot orient herself.[57] In a Virtual Reality, a user easily becomes overwhelmed by the strange environment and loses orientation. In less extreme situations, the user or viewer may be flooded with ever new impressions and information they can only react to, and loses the ability to act.

The concept of “Extended Reality” (XR) is used here to refer to the whole spectrum from Augmented Reality (AR) through to complete Virtual Reality (VR). Augmented Reality superimposes perceptual digital entities on ordinarily perceived reality. There are many possible uses of Extended Reality, but the focus here is on the extended control it gives the creators over user experience, data, and behavior. It enables providers to place products in the most effective positions, although advertising is a relatively crude means of controlling the orientation of users, even when it is targeted or takes place in a 3D environment. While the purpose of advertising is to change the orientation of humans and guide them toward desired behaviors, this does not mean that advertising is the best means of guiding user behavior. Nudges are a different way of changing the orientation of users, and controlling the environment of users in Extended Reality vastly increases the options for algorithmic nudges. Controlling the Extended Reality environment enables the provider to tightly control the space of possible user experience and actions, and it vastly increases the amount of data that can be collected. One of the reasons for the huge commercial interest in Extended Reality is surely the enormous increase in collectable data it enables, alongside the complete control of virtually everything in the artificially created environment.

It should hence not come as a surprise that forward-looking corporations are pouring billions of dollars into the development of hardware and software for Extended Reality (Alphabet, Amazon, Apple, Epic Games, HP, HTC, Huawei, Facebook, Microsoft, Netflix, Samsung, and Sony, to name just a few). The chairman, chief executive officer, and controlling shareholder of Facebook recently declared to his employees that the corporation’s future is contingent on its development of an Extended Reality called the ‘Metaverse,’[58] and there are rumors that Facebook may even change its name to express the shift in its business model. The Metaverse is frequently thought of as a further development of the internet that incorporates Extended Reality. What exactly it will look like is as yet unclear, as is whether it will ultimately be called the Metaverse, the Pluriverse, or go by some other name(s). But it is already clear from the huge new opportunities for surveilling users and controlling their behavior that the Metaverse is an attractive long-term investment.

The internet as we know it can be compared to other orientation technologies, such as maps, books, tables, and databases, in that it provides information that can be looked up. As discussed above, however, the internet already goes beyond the explicit retrieval of static information in that the information presented is not static but changed by user behavior and more or less intelligent algorithms. The Metaverse goes much further than the internet as we currently know it by providing an increasingly augmented and possibly purely virtual reality. What is new about Extended Reality is not just its 3D character and the added possibilities for product placement and advertising it offers. These are minor in comparison with the new possibilities for user interaction. Extended Reality provides an environment that can be experienced and interacted with in more immediate ways than looking up information, reading and typing text, or any other form of explicit representation of information. Far beyond multimedia, it constitutes a dynamic environment in which users can orient themselves by interacting with the environment. This is revolutionary and inaugurates a new phase in the digitalization of the world.

Traditionally, digital computers were operated by means of inputs that had to be entered in a well-defined format. This has been true for a large part of the evolution of digital computing, regardless of whether the software, data, and commands were typed by hand or inserted via punch cards or disks. It has also been true for user-friendly computing, despite many of the technical details of computer operation being hidden for the sake of increased accessibility. While the enrichment of input and output with color, graphics, and video can make computer operation more intuitive, it does not by itself fundamentally change the mode of operation by means of explicit orders. While the internet provides its own universal environment that can be navigated with little knowledge of the underlying operating system and basic processes, it too, like all the above cases, is navigated by means of explicit inputs in well-defined formats. Classical computers provide their users with a clearly circumscribed situation. There is a clearly defined and limited set of possible inputs and outputs, involving either sets of symbols and commands or graphical user interfaces. The orders can set in motion complex processes, and due to their ever-increasing complexity, the exact workings of digital technology are becoming less and less transparent.

Paradigmatic actions in traditional computing are the pushing of levers or buttons, the insertion of punch cards or disks, the execution of software by double-clicking, and the browsing of the internet by clicking on links. We have become so used to operating computers by means of sets of possible commands that it may easily seem as if this is just how computers are operated. This has begun to change radically, however. Technology and the human body are moving closer together, and users are increasingly becoming wearers of technology. In the future, there will be a huge increase in the number of cyborgs, i.e., hybrids of technology and humans that incorporate technical objects into their physical bodies. Incorporated interfaces and computers will enable new forms of computer interaction. For instance, brain-computer interfaces are being developed that bypass the sensory and efferent organs and directly connect computers with neurons. Brain-computer interfaces, too, however, are still often operated in classical ways, such as when they enable patients with locked-in syndrome to control a computer typing program.[59] Eventually, brain-computer interfaces will undoubtedly contribute to radically different ways of operating machines.

We do not need to speculate about the future, however. There are many other ways of replacing explicit commands with something radically new, and these are already in use. Possibilities include (1) measurements of bodily states or behavior the user may not be aware of. Already today, wearables such as smartwatches measure and process far more input than what users purposefully enter. The traditional operation of computers by clicking links or buttons, too, makes it possible to measure reaction times and to infer preferences, desires, and mindsets that may, for instance, be used for advertisements. This points to a second possibility for replacing explicit commands, namely (2) using interactions other than commands to start routines. Any input, and even non-input, can be used to trigger sets of actions. The inferences usually work best with intimate knowledge of the user, which is one reason why profiling is so vital. The most elegant way of replacing explicit commands is (3) not to require any new input at all. So-called predictive technology can infer the next best action from previous actions, thereby eliminating the need to purposefully trigger the action.
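To give a rough sense of possibility (3), the following sketch predicts a user’s next action from simple transition statistics over a log of past actions; the action names and data are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical log of one user's past actions, in chronological order.
history = ["open_app", "check_news", "open_app", "check_news",
           "open_app", "check_mail"]

# Count transitions: which action tends to follow which.
transitions = defaultdict(Counter)
for current, following in zip(history, history[1:]):
    transitions[current][following] += 1

def predict_next(action: str):
    """Predict the most likely next action so that it can be prepared or
    triggered without any explicit command from the user."""
    followers = transitions.get(action)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("open_app"))  # 'check_news' (followed 'open_app' 2 of 3 times)
```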

While these three ways of replacing explicit commands are already pursued by currently existing technology, they can be much more successful in combination with Extended Reality. Extended Reality enables (1) many more, and more detailed, measurements that are processed and used to modify the operation of the computing system. It enables (2) a manifold of interactions that resemble real-world interactions more than they do the input-output operation of classical computers. And it enables (3) the more complete prediction, and verification of predictions, of many different kinds of embodied user behavior, rather than just the measurement of clicks and other partial actions that misses the action in between.

To effectively change human orientation, however, new forms of human-computer interaction require another ingredient. The Extended Realities they present have to be coherent and make sense to the humans in them, and they have to be constantly adjusted in real time according to the actions of the users and all other available information. Extended Realities need to simulate a novel kind of lifeworld, and since the lifeworld is the world of common sense, they need to appropriately model the things and relations in the world that correspond to common sense. Furthermore, to change human orientation in desired ways, the technology should be intelligent enough to appropriately process the relevant information and to initiate the actions and interactions that persuade the users to carry out the desired behaviors. The technology has to intelligently modify the simulated reality, intelligently adjust itself in real time, and intelligently act on and interact with the user, all for the purpose of intelligently modifying human orientation and, most likely, for the further purpose of intelligently changing human action.

7. Intelligent Modification of Human Orientation

It is often thought that the novel character of intelligent technology, often referred to as AI, lies in the fact that it replicates, emulates, or simulates humans (see section 3.2). But the simulation of human intelligence is only one of the things done by intelligent technology. “Intelligent” technology does not actually possess true intelligence, which would require understanding;[60] rather, the term refers to the capability to do things that make sense to humans. The vivacity with which animistic concepts of AI stimulate our imagination hides the fact that the purpose of most existing intelligent technology is not to create more humanoid beings but to change the orientation and consequent behavior of already existing humans. Rather than truly understanding users, intelligent technology in the context of persuasive technology modifies the orientation of its users by more or less intelligently controlling the environment and situation in ways that orient the user toward a set of desired actions. It may take into account the orientation of its users as inferred from previous (inter)actions and calculate persuasion strategies that have proven successful individually or statistically, or that promise for other reasons to be most apt at guiding the user’s orientation.

There is much room for improving the computational modification of human orientation, and much pressure to do so. On the one hand, as argued in the last section, Extended Realities need to constitute a reality that adjusts itself in real time in accordance with user interaction and in a way that corresponds to at least some common-sense expectations of users. On the other hand, to better modify the orientation of users, digital technology needs to deal with very intricate interaction possibilities. The consideration of the playful possibilities of written text exchanges in the context of the Turing Test already shows that this is much harder than usually thought.[61] Wittgenstein was right to abandon his too narrow concept of language as a calculus and to speak of language-games instead.[62] The concept of a language game clears the path for the conceptualization of a much more dynamic kind of rule-following, which has been called “creative rule-following.”[63] Current technology has only very limited success in maintaining meaningful conversations that require creative rule-following, even when large language models produce seemingly meaningful text by exploiting statistical correlations in enormous amounts of data.[64]
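To illustrate, in the crudest possible form, what producing text from statistical correlations means, here is a toy bigram model. Actual large language models are vastly more sophisticated, but the principle of continuing text from learned co-occurrence statistics, without any understanding, is analogous:

```python
import random
from collections import Counter, defaultdict

# A tiny invented corpus; a real model is trained on billions of words.
corpus = ("orientation deals with a new situation . "
          "orientation deals with information . ").split()

# Learn which word tends to follow which (bigram statistics).
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def generate(start: str, length: int = 6, seed: int = 0) -> str:
    """Continue a text purely by sampling statistically likely next words,
    without any understanding of what the words mean."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("orientation"))  # e.g. 'orientation deals with a new situation .'
```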

Because today’s algorithms are good at detecting statistical correlations between presented items and clicks, they can easily be optimized to order the content they present according to what causes the most clicks and thereby keeps users longest on the platform (see the sketch below). Such optimizations can have unanticipated side effects, such as messages from extremist political groups being amplified more than moderate views.[65] To counter biased amplification and the flood of undesired content, large digital corporations are forced to hire many thousands of human content moderators. Not least for this reason, the intelligent processing of semantic content is a strong desideratum and the main challenge for digitization today. As already pointed out, the intelligent processing of semantic content does not require true understanding of the content and will involve a plurality of methods that are still to be developed.
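A minimal sketch of such engagement optimization: items are ranked purely by their observed click-through rate (all numbers invented). Since nothing in the objective distinguishes moderate from extremist content, whatever provokes the most clicks rises to the top:

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    impressions: int  # how often the item was shown
    clicks: int       # how often it was clicked

    @property
    def ctr(self) -> float:
        # Observed click-through rate: the only signal this ranking uses.
        return self.clicks / self.impressions if self.impressions else 0.0

feed = [
    Item("Measured policy analysis", impressions=1000, clicks=20),
    Item("Outrage-provoking claim", impressions=1000, clicks=90),
]

# Order the feed by engagement alone; accuracy and moderation play no role,
# so whatever provokes the most clicks is amplified.
for item in sorted(feed, key=lambda i: i.ctr, reverse=True):
    print(f"{item.ctr:.2%}  {item.title}")
```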

To get a sense of the new dimension of digital technology that is designed to change human orientation, it is useful to consider the distinction between traditional and modern technology drawn by Heidegger in his essay The Question Concerning Technology.[66] Traditional technology, such as a windmill, uses wind to move the millstone that grinds the corn. Modern technology is fundamentally different. Heidegger gives the example of a hydroelectric power station in contrast to a sawmill,[67] but a wind turbine is also a good example. Superficially seen, a wind turbine very much resembles a windmill. It has blades that are moved by the wind and whose movement depends on the wind’s strength, an axle connected to its body, and so on. Like a traditional windmill, a wind turbine uses wind to move its mechanics, and like a traditional windmill it can be used to grind corn, although that requires an additional motor that uses the generated electricity to move the millstone. It is precisely here that the two fundamentally differ. Wind turbines constitute modern technology because they tap and transform natural forces and resources in ways guided by modern science. They transform the kinetic energy of the wind into electric energy ready to be transmitted, distributed, and stored. A windmill, in contrast, leaves the forces of nature intact, and its invention does not require any understanding of electricity or other concepts of modern science.

Digital technology is not essentially about the transformation of natural forces and resources. In this respect it is closer to traditional technology, which makes use of natural materials but does not transform them into the constituents described by modern science. Digital technology makes use of modern technology ranging from electric energy to high-tech materials, and like modern technology it is intertwined with modern science. But while digital technology is always realized in physical hardware, it can be realized in multiple setups, such as electrical or optical wires, vacuum tubes, transistors, processors, and quantum mechanical systems. In this sense, it transcends its material basis. It is essentially about the transformation of information units rather than something material. Although it builds on previous technology and natural science, and although it very much supports and is supported by modern technology, digital technology is very different.

Today, digital technology is once again transforming itself into a new kind of technology, one that reaches beyond the processing of merely syntactic information. It is increasingly also about the intelligent processing of semantic information in ways that can change the orientation and behavior of humans. As noted above, the possibility of intelligently processing semantic information does not imply that the computer understands any of it. Rather, its hardware and software are designed to transform meaningful information as experienced and understood by humans. This is usually sufficient to modify human orientation and to guide humans into performing the desired behavior.

As described in section 3, the mathematization or digitization of the lifeworld involves the transformation of the experiential world into information in a formal-symbolic sense. Husserl’s description of the process of mathematization can be inverted and used to describe how digital information is used to alter the experience of the lifeworld in Extended Reality.[68] In the digitization of the world, worldly things are dressed in a tailor-made “garb of ideas.” In Extended Reality, digital interfaces wrap the user either to convert analog measurements into digital information, in effect digitizing the user in the narrow sense discussed, or to produce experience from digital information. In the latter case, information in the formal-symbolic sense is transformed into user experience. This closes the circle from user experience to the ideal description of the world and back again. It enables digital technology not only to digitize the human body and all measurable actions but also to use data to produce experiences of artificially extended realities.

8. Summary and Conclusion

This essay has shown that digitization is not new but has been developing for a long time. An essential step is the mathematization or digitization of nature, which has greatly accelerated over the course of the last five centuries (see section 3). The transformation of the economy and society into a system of information, and the incorporation of humans into this system, prepared our world for digital devices. While digital devices are rightly regarded as the cause of major changes in the world we live in, they are also a result of digitization. Considering only the devices and the consequences of their use in isolation misses the wider context. It furthermore fails to grasp the developing nature of the digital devices and systems themselves, which is intrinsically intertwined with human orientation. The digitization of the world goes back to a conceptual undertaking that aims at metaphysical orientation. Even prior to the advent of personal computing and smartphones, the digitization of our world changed human orientation toward the world.

The new possibilities for collecting and processing enormous amounts of data have inaugurated a golden age of surveillance (section 4). Analog and digital surveillance alike can decisively change the orientation of those who order the surveillance as well as of those who suffer it. Moreover, the union of individualized mass surveillance with digitally accelerated consumerism is giving rise to a new economic system: surveillance consumerism. Surveillance consumerism is not merely about collecting data about consumers but also about using the data to change the orientation and behavior of users (section 5). Existing means of influencing human orientation and consequent behavior through nudging and persuasive technology are still rather primitive in comparison to the possibilities of future technology aimed at changing human orientation.

One of the most effective means of changing human orientation is to change the situation of the users by changing the specific environment of each user (section 6). It should hence not come as a surprise that many of the world’s biggest corporations are pouring billions into developments with names such as Augmented Reality, Virtual Reality, or the Metaverse. Transforming the environment of users into a digitally orchestrated scenery enables the tight control of orientation and ultimately behavior.

Since digitally created realities need to make sense to the humans in them, they need to be adjusted in intelligent ways (section 7). The intelligent processing of information is furthermore key to changing the orientation of the users and guiding them toward the desired behavior. Computing can adjust to human experience and understanding, but it is a frequent misunderstanding that “intelligent” technology must do so by replicating, emulating, or simulating human intelligence. AI enthusiasts and alarmists alike rightly claim that intelligent technology has the potential to fundamentally change human existence, but the result will look very different from their anthropomorphic fantasies (section 2). Orientation is here a central concept, since digital technology is increasingly able to intelligently modify human orientation.

Taken together, the new possibilities of changing human orientation by collecting and processing vast amounts of data, creating digital environments, adjusting the situation the users find themselves in, and intelligently modifying human experience have led to an enormous potential for the guidance and control of human orientation for political, economic, and other purposes.

On the one hand, this potential bears a risk that may be called existential for humanity as we know it. We may all end up living in a made-up world, not a “matrix” invented by machines, but a new digital world designed to serve the particular interests of a few humans by distracting, disorienting, confusing, manipulating, misguiding, deceiving, controlling, and coercing the rest of us.

On the other hand, the digital transformation of our minds also has an enormous potential to better orient us, help us find our way, augment experience, enhance thinking, and improve individual and collective thought and decision-making processes. While the digital transformation is increasingly not only a transformation of our world but also of our minds, it is up to us to take control of where it leads. We are in an unfamiliar situation in which we need to find our way. The first, and key, accomplishment in dealing with the new situation is to gain orientation. This essay is meant to contribute to this first step.

Acknowledgements

I would like to thank David Winkens for comments on a draft of this essay and everyone who participated in the inspiring discussions of its contents. Work on this essay was supported by funding from the Freiburg Institute for Advanced Studies (FRIAS) and the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 754340.

References

Alloa, Emmanuel, “Produktiver Schein: Phänomenotechnik zwischen Ästhetik und Wissenschaft” in: Zeitschrift für Ästhetik und Allgemeine Kunstwissenschaft 60, no. 2 (2015), pp. 11–24. https://doi.org/10.28937/1000106263.

Anders, Günther, Über die Zerstörung des Lebens im Zeitalter der dritten industriellen Revolution. Die Antiquiertheit des Menschen 2 (München: Beck, 1995).

Austin, Patrick / Perrigo, Billy, “What to Know About the Pegasus iPhone Spyware Hack,” in: Time. https://time.com/6081622/pegasus-iphone-spyware-hack/ accessed 14 October 2021.

Bachelard, Gaston, The New Scientific Spirit (Boston: Beacon Press, 1984 [1934]).

Benartzi, Shlomo / Beshears, John / Milkman, Katherine L. / Sunstein, Cass R. / Thaler, Richard H. / Shankar, Maya / Tucker-Ray, Will / Congdon, William J. / Galing, Steven, “Should Governments Invest More in Nudging?” in: Psychological Science 28, no. 8 (August 2017), pp. 1041–55. https://doi.org/10.1177/0956797617702501.

Blumenberg, Hans, Lebenswelt und Technisierung unter Aspekten der Phänomenologie, in: Hans Blumenberg (ed.): Wirklichkeiten, in denen wir leben (Ditzingen: Reclam, 2020), pp. 7-54.

Bohle, Hannah / Rimpel, Jérôme / Schauenburg, Gesche / Gebel, Arnd / Stelzel, Christine / Heinzel, Stephan / Rapp, Michael / Granacher, Urs, “Behavioral and Neural Correlates of Cognitive-Motor Interference during Multitasking in Young and Old Adults,” in: Neural Plasticity 2019 (July 1, 2019), pp. 1–19. https://doi.org/10.1155/2019/9478656.

Bostrom, Nick, “Existential Risk Prevention as Global Priority,” in: Global Policy 4, no. 1 (February 2013), pp. 15–31. https://doi.org/10.1111/1758-5899.12002.

———. Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).

Brown, Tom B. / Mann, Benjamin / Ryder, Nick / Subbiah, Melanie / Kaplan, Jared / Dhariwal, Prafulla / Neelakantan, Arvind et al., “Language Models Are Few-Shot Learners,” in: ArXiv:2005.14165 [Cs], July 22, 2020. http://arxiv.org/abs/2005.14165.

Buckley, Chris / Mozur, Paul,  “How China Uses High-Tech Surveillance to Subdue Minorities,” in: The New York Times, May 22, 2019, sec. World. https://www.nytimes.com/2019/05/22/world/asia/china-surveillance-xinjiang.html.

Buckley, Chris / Mozur, Paul / Ramzy, Austin, “How China Turned a City into a Prison,” in: The New York Times, April 4, 2019, sec. World. https://www.nytimes.com/interactive/2019/04/04/world/asia/xinjiang-china-surveillance-prison.html.

Byler, Darren, “China’s Hi-Tech War on Its Muslim Minority,” in: The Guardian, April 11, 2019, sec. News. https://www.theguardian.com/news/2019/apr/11/china-hi-tech-war-on-muslim-minority-xinjiang-uighurs-surveillance-face-recognition.

———, “Ghost World,” in: Logic Magazine. https://logicmag.io/china/ghost-world/ accessed May 7, 2019.

Crevier, Daniel, AI: The Tumultuous History of the Search for Artificial Intelligence (New York, NY: Basic Books, 1993).

Darrach, Brad, “Meet Shaky, the First Electronic Person: The Fascinating and Fearsome Reality of a Machine with a Mind of Its Own,” in: Life Magazine, November 20, 1970.

Dreyfus, Hubert L., What Computers Can’t Do: The Limits of Artificial Intelligence (New York: Harper & Row, 1979).

———, What Computers Still Can’t Do: A Critique of Artificial Reason (Cambridge, Mass.: MIT Press, 1992).

Durt, Christoph, “From Calculus to Language Game: The Challenge of Cognitive Technology,” in: Techné: Research in Philosophy and Technology 22, no. 3 (2018), pp. 425–46. https://doi.org/10.5840/techne2018122091.

———, “‘The Computation of Bodily, Embodied, and Virtual Reality’: Winner of the Essay Prize ‘What Can Corporality as a Constitutive Condition of Experience (Still) Mean in the Digital Age?’” in: Phänomenologische Forschungen, no. 2 (2020), pp. 25–39.

———, “The Paradox of the Primary-Secondary Quality Distinction and Husserl’s Genealogy of the Mathematization of Nature. Dissertation.” eScholarship University of California, 2012. http://www.durt.de/publications/dissertation/.

Fogg, B. J., Persuasive Technology: Using Computers to Change What We Think and Do (Amsterdam/Boston: Morgan Kaufmann Publishers, 2003).

Fuchs, Thomas, Verteidigung des Menschen. Grundfragen einer verkörperten Anthropologie (Berlin: Suhrkamp, 2020).

Gramelsberger, Gabriele, “Figurationen des Phänomenotechnischen,” in: Gamm, Gerhard / Gehring, Petra / Hubig, Christoph / Kaminski, Andreas / Alfred Nordmann (eds.), List und Tod, in: Jahrbuch Technikphilosophie (Zürich/Berlin: Diaphanes, 2016), pp. 157-168.

Heidegger, Martin, Sein und Zeit (Tübingen: Max Niemeyer, 1967).

———, The Question Concerning Technology, and Other Essays (New York: Garland Pub, 1977).

Husserl, Edmund, Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie; Buch 1, Band 1: Allgemeine Einführung in die reine Phänomenologie, ed. Karl Schuhmann, Husserliana, III/1 (Den Haag: Nijhoff, 1976).

———, The Crisis of European Sciences and Transcendental Phenomenology: An Introduction to Phenomenological Philosophy, transl. David Carr (Evanston: Northwestern University Press, 1970).

Ihde, Don, “Husserl’s Galileo Needed a Telescope!” in: Philosophy & Technology 24 (2011), pp. 69-82.

———, Postphenomenology and Technoscience: The Peking University Lectures (Albany: SUNY Press, 2009).

———, Postphenomenology: Essays in the Postmodern Context. Northwestern University Studies in Phenomenology and Existential Philosophy (Evanston, Ill: Northwestern University Press, 1993).

———, Technology and the Lifeworld: From Garden to Earth (Bloomington: Indiana University Press, 1990).

Kurzweil, Ray, The Singularity Is Near: When Humans Transcend Biology (New York: Viking, 2005).

Lakhani, Nina, “‘It’s a Free-for-All’: How Hi-Tech Spyware Ends up in the Hands of Mexico’s Cartels,” in: The Guardian, December 7, 2020, sec. World news. https://www.theguardian.com/world/2020/dec/07/mexico-cartels-drugs-spying-corruption.

Lohr, Steve, “Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So,” in: The New York Times, June 21, 2018, sec. Technology. https://www.nytimes.com/2018/06/20/technology/deep-learning-artificial-intelligence.html.

McCarthy, John / Minsky, M.L. / Rochester, N. / Shannon, C. E., “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” 1955. https://rockfound.rockarch.org/digital-library-listing/-/asset_publisher/yYxpQfeI4W8N/content/proposal-for-the-dartmouth-summer-research-project-on-artificial-intelligence.

Milmo, Dan, “Twitter Admits Bias in Algorithm for Rightwing Politicians and News Outlets,” in: The Guardian, October 22, 2021, sec. Technology. https://www.theguardian.com/technology/2021/oct/22/twitter-admits-bias-in-algorithm-for-rightwing-politicians-and-news-outlets.

Müller, Oliver, Selbst, Welt und Technik: eine anthropologische, geistesgeschichtliche und ethische Untersuchung. Humanprojekt 11 (Berlin: De Gruyter, 2014).

Newton, Casey, “Mark Zuckerberg Is Betting Facebook’s Future on the Metaverse,” in: The Verge, July 22, 2021. https://www.theverge.com/22588022/mark-zuckerberg-facebook-ceo-metaverse-interview.

Rosenberger, Robert / Verbeek, Peter-Paul, “A Field Guide to Postphenomenology,” in: Postphenomenological Investigations: Essays on Human-Technology Relations, ed. Robert Rosenberger and Peter-Paul Verbeek (Lanham, Md.: Lexington Books, 2015), pp. 9-41.

———, eds. Postphenomenological Investigations: Essays on Human-Technology Relations. Postphenomenology and the Philosophy of Technology (Lanham, Md.: Lexington Books, 2015).

Selinger, Evan, Postphenomenology: A Critical Companion to Ihde (Albany, NY: SUNY Press, 2006).

Snowden, Edward, Permanent Record (London: Macmillan, 2019).

Stegmaier, Werner / Mueller, Reinhard G., Fearless Findings: 25 Footholds for the Philosophy of Orientation (Nashville: Hodges Foundation for Philosophical Orientation, 2019).

Stegmaier, Werner, “Einstellung auf Neue Realitäten. Orientierung als philosophischer Begriff,” in: Neue Realitäten – Herausforderung der Philosophie, 20.-24. Sept. 1993 TU Berlin, Berlin 1993.

———, Philosophie der Orientierung (Berlin / New York: De Gruyter, 2008).

———, What is Orientation? A Philosophical Investigation, transl. Reinhard G. Mueller (Berlin/Boston: De Gruyter, 2019).

“Stephen Hawking Warns Artificial Intelligence Could End Mankind,” in: BBC News, December 2, 2014, sec. Technology. https://www.bbc.com/news/technology-30290540.

Sugden, Robert, “Do People Really Want to Be Nudged towards Healthy Lifestyles?” in: International Review of Economics 64, no. 2 (June 2017), pp. 113-123. https://doi.org/10.1007/s12232-016-0264-1.

Sunstein, Cass R. / Thaler, Richard H., “Libertarian Paternalism Is not an Oxymoron,” in: The University of Chicago Law Review 70, no. 4 (2003), p. 1159. https://doi.org/10.2307/1600573.

“Takeaways from the Pegasus Project,” in: Washington Post, July 18, 2021. https://www.washingtonpost.com/investigations/2021/07/18/takeaways-nso-pegasus-project/.

Thaler, Richard H. / Sunstein, Cass R., Nudge: Improving Decisions about Health, Wealth, and Happiness (New Haven: Yale University Press, 2008).

Turing, Alan Mathison, “Intelligent Machinery,” in: The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life, plus the Secrets of Enigma, ed. B. Jack Copeland (Oxford: Clarendon Press, 2004 [1948]), pp. 410-432.

———, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in: B. Jack Copeland (ed.), The Essential Turing (Oxford: Oxford University Press, 2004 [1936]), pp. 58-90.

Vansteensel, Mariska J. / Pels, Elmar G.M. / Bleichner, Martin G. / Branco, Mariana P. / Denison, Timothy / Freudenburg, Zachary V. / Gosselaar, Peter, et al., “Fully Implanted Brain–Computer Interface in a Locked-In Patient with ALS,” in: New England Journal of Medicine 375, no. 21 (November 24, 2016), pp. 2060-2066. https://doi.org/10.1056/NEJMoa1608085.

Waldenfels, Bernhard, Bruchlinien der Erfahrung: Phänomenologie, Psychoanalyse, Phänomenotechnik (Frankfurt am Main: Suhrkamp, 2002).

———, “Phänomenologie und Phänomenotechnik,” in: Julia Jonas / Karl-Heinz Lembeck (eds.), Mensch – Leben – Technik: Aktuelle Beiträge zur phänomenologischen Anthropologie (Würzburg: Königshausen & Neumann, 2006). https://ubdata.univie.ac.at/AC05496274.

Wendland, Aaron James / Merwin, Christopher / Hadjioannou, Christos (eds.), Heidegger on Technology (New York: Taylor & Francis, 2019).

Wiesend, Stephan, “Is Apple to Blame for Failing to Stop Pegasus? – Macworld UK.” https://www.macworld.co.uk/news/apple-blame-pegasus-3806896/ accessed 14 October 2021.

Wiltsche, Harald A., “Mechanics Lost: Husserl’s Galileo and Ihde’s Telescope,” in: Husserl Studies 33, no. 2 (July 2017), pp. 149-73. https://doi.org/10.1007/s10743-016-9204-x.

Wittgenstein, Ludwig, Preliminary Studies for the “Philosophical Investigations”: Generally Known as The Blue and Brown Books (Oxford: Basil Blackwell, 1958).

Zuboff, Shoshana, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, 2018).


Footnotes

[1] Different aspects of these concepts can be distinguished (see 3 paragraphs down), but here they will be treated as roughly synonymous.

[2] Werner Stegmaier, What is Orientation? A Philosophical Investigation, transl. Reinhard G. Mueller (Berlin/Boston: De Gruyter, 2019), p. 1.

[3] By using the concept ‘intelligent,’ I do not mean to imply that the technology itself becomes intelligent in the way humans are intelligent (see section 2). Rather, ‘intelligent’ refers to solutions that can be intelligent without necessarily being designed by an intelligent being.

[4] Christoph Durt, “’The Computation of Bodily, Embodied, and Virtual Reality:’ Winner of the Essay Prize ‘What Can Corporality as a Constitutive Condition of Experience (Still) Mean in the Digital Age?’” in: Phänomenologische Forschungen, no. 2 (2020), p. 29, fn. 11.

[5] Stegmaier, What is Orientation?, p. 253.

[6] Martin Heidegger, Sein und Zeit (Tübingen: Max Niemeyer, 1967), p. 73.

[7] Hannah Bohle et al., “Behavioral and Neural Correlates of Cognitive-Motor Interference during Multitasking in Young and Old Adults,” in: Neural Plasticity 2019 (July 1, 2019), pp. 1–19, https://doi.org/10.1155/2019/9478656.

[8] “Stephen Hawking Warns Artificial Intelligence Could End Mankind,” in: BBC News, December 2, 2014, sec. Technology, https://www.bbc.com/news/technology-30290540.

[9] Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking, 2005).

[10] Nick Bostrom, “Existential Risk Prevention as Global Priority,” in: Global Policy 4, no. 1 (February 2013): pp. 15–31, https://doi.org/10.1111/1758-5899.12002.

[11] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, First edition (Oxford: Oxford University Press, 2014).

[12] John McCarthy et al., “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” 1955, https://rockfound.rockarch.org/digital-library-listing/-/asset_publisher/yYxpQfeI4W8N/content/proposal-for-the-dartmouth-summer-research-project-on-artificial-intelligence.

[13] Alan Mathison Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” in: The Essential Turing, ed. B. Jack Copeland (1936; repr., Oxford: Oxford University Press, 2004), pp. 58-90; Alan Mathison Turing, “Intelligent Machinery,” in: The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life, plus the Secrets of Enigma, ed. B. Jack Copeland (1948; repr., Oxford / New York: Clarendon Press; Oxford University Press, 2004), pp. 410-432.

[14] Marvin Minsky according to Brad Darrach, “Meet Shaky, the First Electronic Person: The Fascinating and Fearsome Reality of a Machine with a Mind of Its Own,” in: Life Magazine, November 20, 1970, p. 58D.

[15] Daniel Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence (New York, NY: Basic Books, 1993), p. 203.

[16] Steve Lohr, “Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So,” in: The New York Times, June 21, 2018, sec. Technology, https://www.nytimes.com/2018/06/20/technology/deep-learning-artificial-intelligence.html.

[17] Don Ihde, Postphenomenology: Essays in the Postmodern Context, Northwestern University Studies in Phenomenology and Existential Philosophy (Evanston, Ill: Northwestern University Press, 1993); Don Ihde, Postphenomenology and Technoscience: The Peking University Lectures (Albany: SUNY Press, 2009); Evan Selinger, Postphenomenology: A Critical Companion to Ihde (Albany, NY: SUNY Press, 2006); Robert Rosenberger and Peter-Paul Verbeek, eds., Postphenomenological Investigations: Essays on Human-Technology Relations, Postphenomenology and the Philosophy of Technology (Lanham, Md.: Lexington Books, 2015).

[18] Robert Rosenberger and Peter-Paul Verbeek, “A Field Guide to Postphenomenology,” in: Postphenomenological Investigations: Essays on Human-Technology Relations, ed. Robert Rosenberger and Peter-Paul Verbeek (Lanham, Md.: Lexington Books, 2015), p. 9.

[19] Don Ihde, Technology and the Lifeworld: From Garden to Earth (Indiana University Press, 1990).

[20] Don Ihde, “Husserl’s Galileo Needed a Telescope!” in: Philosophy & Technology 24 (2011), pp. 69–82.

[21] Harald A. Wiltsche, “Mechanics Lost: Husserl’s Galileo and Ihde’s Telescope,” in: Husserl Studies 33, no. 2 (July 2017): pp. 149-73, https://doi.org/10.1007/s10743-016-9204-x.

[22] Hans Blumenberg, Lebenswelt und Technisierung unter Aspekten der Phänomenologie, in: Hans Blumenberg (ed.): Wirklichkeiten, in denen wir leben: Aufsätze und eine Rede (Ditzingen: Reclam, 2020), pp. 7-54; Bernhard Waldenfels, Bruchlinien der Erfahrung: Phänomenologie, Psychoanalyse, Phänomenotechnik (Frankfurt am Main: Suhrkamp, 2002); Oliver Müller, Selbst, Welt und Technik: eine anthropologische, geistesgeschichtliche und ethische Untersuchung (Berlin: De Gruyter, 2014).

[23] Waldenfels, Bruchlinien der Erfahrung; Bernhard Waldenfels, “Phänomenologie und Phänomenotechnik,” in: Julia Jonas / Karl-Heinz Lembeck (eds.), Mensch – Leben – Technik: Aktuelle Beiträge zur phänomenologischen Anthropologie (Würzburg: Königshausen & Neumann, 2006), https://ubdata.univie.ac.at/AC05496274; Emmanuel Alloa, “Produktiver Schein: Phänomenotechnik zwischen Ästhetik und Wissenschaft,” in: Zeitschrift für Ästhetik und Allgemeine Kunstwissenschaft 60, no. 2 (2015): pp. 11–24, https://doi.org/10.28937/1000106263.

[24] Hubert L. Dreyfus, What Computers Can’t Do: The Limits of Artificial Intelligence, Rev. ed. (New York: Harper & Row, 1979); Hubert L. Dreyfus, What Computers Still Can’t Do: A Critique of Artificial Reason (Cambridge, Mass.: MIT Press, 1992).

[25] Durt, “‘The Computation of Bodily, Embodied, and Virtual Reality’ ”.

[26] Werner Stegmaier, “Einstellung auf Neue Realitäten. Orientierung als Philosophischer Begriff,” in: Neue Realitäten – Herausforderung Der Philosophie, 20.-24. Sept. 1993 TU Berlin (XVI. Deutscher Kongreß für Philosophie, Berlin, 1993), p. 282, my translation.

[27] Stegmaier, “Einstellung auf Neue Realitäten,” p. 282.

[28] Edmund Husserl, The Crisis of European Sciences and Transcendental Phenomenology: An Introduction to Phenomenological Philosophy, transl. David Carr (Evanston: Northwestern University Press, 1970).

[29] Husserl, The Crisis of European Sciences, p. 23.

[30] For a detailed analysis of the steps involved in this mathematization and their philosophical implications, see Christoph Durt, “The Paradox of the Primary-Secondary Quality Distinction and Husserl’s Genealogy of the Mathematization of Nature. Dissertation.” (eScholarship University of California, 2012), http://www.durt.de/publications/dissertation/.

[31] Gaston Bachelard, The New Scientific Spirit (1934; repr., Boston: Beacon Press, 1984); Gabriele Gramelsberger, “Figurationen des Phänomenotechnischen,” in: List und Tod, ed. Gerhard Gamm et al. (Zürich/Berlin: Diaphanes, 2016), pp. 157–68.

[32] Husserl, The Crisis of European Sciences, p. 131.

[33] Husserl, The Crisis of European Sciences, p. 129.

[34] Edmund Husserl, Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie; Buch 1, Band 1: Allgemeine Einführung in die reine Phänomenologie, ed. Karl Schuhmann, Husserliana, III/1 (Den Haag: Nijhoff, 1976), p. 60.

[35] Husserl, The Crisis of European Sciences, p. 62.

[36] Husserl, The Crisis of European Sciences, p. 51.

[37] Durt, “’The Computation of Bodily, Embodied, and Virtual Reality’: Winner of the Essay Prize ‘What Can Corporality as a Constitutive Condition of Experience (Still) Mean in the Digital Age?’”

[38] Stegmaier, What Is Orientation?, p. 270.

[39] McCarthy et al., “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” p. 1.

[40] Martin Heidegger, The Question Concerning Technology, and Other Essays (New York: Garland Pub, 1977), p. 23.

[41] Günther Anders, Über die Zerstörung des Lebens im Zeitalter der dritten industriellen Revolution, Die Antiquiertheit des Menschen 2 (München: Beck, 1995), p. 112.

[42] Edward Snowden, Permanent Record (London: Macmillan, 2019).

[43] “Takeaways from the Pegasus Project,” in: Washington Post, July 18, 2021, https://www.washingtonpost.com/investigations/2021/07/18/takeaways-nso-pegasus-project/.

[44] Nina Lakhani, “‘It’s a Free-for-All’: How Hi-Tech Spyware Ends up in the Hands of Mexico’s Cartels,” in: The Guardian, December 7, 2020, sec. World news, https://www.theguardian.com/world/2020/dec/07/mexico-cartels-drugs-spying-corruption.

[45] Patrick Austin / Billy Perrigo, “What to Know About the Pegasus iPhone Spyware Hack,” in: Time, accessed October 14, 2021, https://time.com/6081622/pegasus-iphone-spyware-hack/.

[46] Stephan Wiesend, “Is Apple To Blame For Failing To Stop Pegasus? – Macworld UK,” accessed October 14, 2021, https://www.macworld.co.uk/news/apple-blame-pegasus-3806896/.

[47] Snowden, Permanent Record, p. 222. Snowden states that he retrieved the statement from a PowerPoint presentation that was intended to impress foreign allies of the National Security Agency (NSA) but takes it to be an “accurate measure of the scale of the agency’s ambition and the degree of its collusion with foreign governments” (p. 223).

[48] Chris Buckley / Paul Mozur, “How China Uses High-Tech Surveillance to Subdue Minorities,” in: The New York Times, May 22, 2019, sec. World, https://www.nytimes.com/2019/05/22/world/asia/china-surveillance-xinjiang.html; Darren Byler, “Ghost World,” Logic Magazine, accessed May 7, 2019, https://logicmag.io/china/ghost-world/; Darren Byler, “China’s Hi-Tech War on Its Muslim Minority,” in: The Guardian, April 11, 2019, sec. News, https://www.theguardian.com/news/2019/apr/11/china-hi-tech-war-on-muslim-minority-xinjiang-uighurs-surveillance-face-recognition.

[49] Chris Buckley / Paul Mozur / Austin Ramzy, “How China Turned a City into a Prison,” in: The New York Times, April 4, 2019, sec. World, https://www.nytimes.com/interactive/2019/04/04/world/asia/xinjiang-china-surveillance-prison.html.

[50] Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, 2018).

[51] Stegmaier, What is Orientation?, p. 248.

[52] Heidegger, The Question Concerning Technology, p. 23.

[53] B.J. Fogg, an experimental psychologist who introduced the term “Captology” (derived from the acronym of Computers As Persuasive Technology), defines persuasion as “an attempt to change attitudes or behaviors or both (without using coercion or deception)” (B. J. Fogg, Persuasive Technology: Using Computers to Change What We Think and Do [Amsterdam/Boston: Morgan Kaufmann Publishers, 2003]). The concept of persuasive technology is in this essay used in a similar way, but the main focus is on how it changes not only attitudes and behavior but orientation in the wider philosophical sense.

[54] Cass R. Sunstein / Richard H. Thaler, “Libertarian Paternalism Is Not an Oxymoron,” in: The University of Chicago Law Review 70, no. 4 (2003): 1159, https://doi.org/10.2307/1600573; Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness (New Haven: Yale University Press, 2008); Shlomo Benartzi et al., “Should Governments Invest More in Nudging?” in: Psychological Science 28, no. 8 (August 2017): pp. 1041–55, https://doi.org/10.1177/0956797617702501.

[55] Werner Stegmaier / Reinhard G. Mueller, Fearless Findings: 25 Footholds for the Philosophy of Orientation (Hodges Foundation for Philosophical Orientation, 2019), p. 7.

[56] Robert Sugden, “Do People Really Want to Be Nudged towards Healthy Lifestyles?” in: International Review of Economics 64, no. 2 (June 2017): pp. 113–23, https://doi.org/10.1007/s12232-016-0264-1.

[57] See Werner Stegmaier, Philosophie der Orientierung (Berlin / New York: De Gruyter, 2008), p. 152.

[58] Casey Newton, “Mark Zuckerberg Is Betting Facebook’s Future on the Metaverse,” in: The Verge, July 22, 2021, https://www.theverge.com/22588022/mark-zuckerberg-facebook-ceo-metaverse-interview.

[59] Mariska J. Vansteensel et al., “Fully Implanted Brain–Computer Interface in a Locked-In Patient with ALS,” in: New England Journal of Medicine 375, no. 21 (November 24, 2016), pp. 2060–2066, https://doi.org/10.1056/NEJMoa1608085.

[60] Thomas Fuchs, Verteidigung des Menschen. Grundfragen einer verkörperten Anthropologie (Frankfurt am Main: Suhrkamp, 2020), pp. 43–44.

[61] Christoph Durt, “From Calculus to Language Game: The Challenge of Cognitive Technology,” in: Techné: Research in Philosophy and Technology 22, no. 3 (2018): pp. 425–46, https://doi.org/10.5840/techne2018122091.

[62] Ludwig Wittgenstein, Preliminary Studies for the “Philosophical Investigations”: Generally Known as The Blue and Brown Books (Oxford: Basil Blackwell, 1958), p. 25.

[63] Durt, “From Calculus to Language Game.”

[64] Tom B. Brown et al., “Language Models Are Few-Shot Learners,” ArXiv:2005.14165 [Cs], July 22, 2020, http://arxiv.org/abs/2005.14165.

[65] Dan Milmo, “Twitter Admits Bias in Algorithm for Rightwing Politicians and News Outlets,” in: The Guardian, October 22, 2021, sec. Technology, https://www.theguardian.com/technology/2021/oct/22/twitter-admits-bias-in-algorithm-for-rightwing-politicians-and-news-outlets.

[66] Heidegger, The Question Concerning Technology; see also Aaron James Wendland / Christopher Merwin / Christos Hadjioannou (eds.), Heidegger on Technology (New York: Taylor & Francis, 2019).

[67] Heidegger, The Question Concerning Technology, and Other Essays; see also Wendland / Merwin / Hadjioannou, Heidegger on Technology.

[68] Durt, “‘The Computation of Bodily, Embodied, and Virtual Reality.’”