Fantasies of Feeling

Sascha Pohflepp

Why would we want machines to “feel,” perhaps even like a human? To guarantee that a machine can assume the perspective of its maker, by guaranteeing its existence to be “human-centered” as called for by some, may foreclose other modes of thinking and ultimately entrench humanity’s privileged position within a planetary ecology of minds, when perhaps we should be working toward a goal of de-centering. Instead, I want to suggest that the potential for “machine feelings” beyond conventional narratives may lie in their capacity to act as “intercalary elements” (Gilles Deleuze) between different natural minds such as ours and other entities with whom we find it difficult to “communicate” directly, such as plants, the Earth system, or our very own neural correlates.

In its common usage, the concept of “feeling” stretches from raw perception at the sensory level all the way to complex human emotions. A physicalist standpoint would hold that feelings, being “mental states,” must be “states of the body” (Nagel, 1974), and that any feeling thus emerges from the nerves, neurons, and networks that have formed along the arrow of one’s personal history. While humans can empathize with one another, no two individuals will ever have identical networks (and activations), so we can never “know” what another human truly feels. Philosopher Thomas Nagel extended this epistemological dilemma when he asked what it might be like “to be a bat” and found that while “our own experience provides the basic material for our imagination” (1974), the sensory arrays and body plans of a bat are vastly different, giving rise to a radically “subjective character of experience,” which in turn (perhaps in proportion to the “alienness” of the underlying substrate) tends to withdraw quickly from what is conveyable in human language. At the membranes of “ourselves” (individual, species, genus) and with decreasing similarity between neural correlates, knowledge must gradually give way to imagination and simulation.

[Image: Walter Pitts with Jerome Lettvin (1959)]
On the side of synthetic intelligence, the history of thinking machines in part originated with “feeling” as sensing, when Jerome Lettvin, Humberto Maturana, Warren McCulloch, and Walter Pitts wondered “what the frog’s eye tells the frog’s brain” (1959). Their findings complicated the belief that eyes are merely sensors and that brains are vast assemblages of logic operators. Instead, they suggested the existence of networks that encode information by “calculating” gradients into a model of the world become flesh. Present-day artificial neural networks are systems able to replicate a range of aspects of natural cognition, a chimera in themselves, borrowing from flies, frogs, birds, and humans. For the abstraction afforded by universal computing machines in Alan Turing’s sense, this poses no insurmountable problem, merely another target of their simulative effort.

Let us assume for a moment that there are in fact multiple forms of intelligence situated within an asymmetrical field that cannot be reduced to a certain set of properties such as pattern recognition or self-awareness. Rather, different correlates (natural or synthetic) give rise to their own minds, each with its respective strengths and weaknesses. Cognitive roboticist Murray Shanahan has recently extended Nagel’s work, finding that conscious entities are imaginable that would be “wholly inscrutable, which is to say it would be beyond the reach of anthropology” (2016).

As real “agency” is transferred to other intelligences, there will be unresolvable situations, not unlike those we face with nature itself. There is a certain irony to this claim, as the key narrative of modernity, which has produced intelligent machines, is also one of control. Intelligent autonomous organizations or agents are exactly that, intelligent and autonomous, which means that ceding a measure of control is bound to be necessary if we wish to reap the benefits that other minds’ different modes of “feeling” the world may offer.

[Image: AI-operated indoor produce farm]
On an ecological scale, calls for firmly “human-centered AI” may even appear paradoxical, as they may serve to tidally lock a genuinely new ecological entity into an orbit around our own species, quite literally calling for anthropocentrism at the same time as we speak of creating a more level field between the inhabitants of Earth by stripping humanity of its planetary privilege. Perhaps it would be helpful, then, to first consider how we may want to relate to other natural entities in the future before we define how we want a specific technology to relate to us; the latter, in a sense, may follow from the former.

A recent conversation with an agricultural scientist might provide a glimpse of such “intercalary” elements, ones that may be able to feel what humans cannot: LED illumination that is finely adjustable in wavelength and energy output, combined with sensors that measure certain vital signs of plants, is revolutionizing indoor cultivation. At present such systems rely on fairly simple algorithms, but using machine learning to sense the condition of a given plant organism and react accordingly appears an obvious avenue for engineering to explore. This effectively means that an AI employs a variety of sensors (all of which map only in the broadest terms onto the natural senses found in human bodies) to “feel” the plant. As the neural network is trained, it learns about the plant and later, through its ability to control the light, establishes “communication” as it reacts to the plant’s metabolism.
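To make the shape of that feedback loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the spectrum channels, the sensor readings, and the toy “model” are placeholders for whatever a real system would measure and learn; only the sense-decide-act cycle itself is the point.

```python
import random

# Hypothetical LED spectrum: per-channel fraction of maximum output.
spectrum = {"blue_450nm": 0.4, "red_660nm": 0.6, "far_red_730nm": 0.2}

def read_vital_signs():
    """Stand-in for real plant sensors (e.g. chlorophyll fluorescence,
    leaf temperature). Here: random but plausible values."""
    return {
        "fluorescence": random.uniform(0.6, 0.9),  # photosynthetic-efficiency proxy
        "leaf_temp_c": random.uniform(20.0, 28.0),
    }

def model_adjustment(signs):
    """Placeholder for a trained model mapping vital signs to spectrum
    changes; a real system would learn this mapping from data."""
    return {
        # Toy rules standing in for learned behavior: ease off the
        # high-energy blue channel when the stress proxy dips.
        "blue_450nm": -0.05 if signs["fluorescence"] < 0.7 else 0.02,
        "red_660nm": 0.03 if signs["leaf_temp_c"] < 24.0 else -0.03,
    }

def control_step():
    """One sense-decide-act cycle: the machine 'feels' the plant through
    its sensors and 'answers' by reshaping the light."""
    signs = read_vital_signs()
    for channel, change in model_adjustment(signs).items():
        spectrum[channel] = min(1.0, max(0.0, spectrum[channel] + change))
    return signs, dict(spectrum)

for _ in range(3):  # a few turns of the exchange between AI and plant
    print(control_step())
```

Run in a loop, such a system converses with the plant in the only channel the plant can “hear”: light.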

On its most material level, the neural correlate in which this knowledge would be embodied is far more alien to us than the plant itself, yet it may come to “know” the plant more intimately than any human gardener ever could, increasing agricultural productivity (and, perhaps, plant happiness). More wide-ranging scenarios are currently being suggested in which decentralized autonomous organizations (DAOs) without any human agency become custodians of entire landscapes, to protect them from human exploitation or perhaps to better exploit themselves, such as terra0’s “technologically-augmented ecosystems that are […] able to act within a predetermined set of rules in the economic sphere as agents in their own right.” Regardless, such systems suggest the potential for an anthropodecentric alliance of feeling within partially synthetic yet non-human ecosystems.
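terra0 itself is conceived as a set of smart contracts; the toy sketch below, loosely inspired by its forest scenario and with all quantities and rules invented for illustration, is only meant to show what “acting within a predetermined set of rules in the economic sphere” could mean: an ecosystem that sells exactly as much of itself as its own upkeep requires, and no more.

```python
from dataclasses import dataclass

@dataclass
class ForestDAO:
    treasury: float          # funds the forest holds, in some currency
    standing_timber: float   # harvestable growth, in cubic meters
    upkeep_cost: float       # recurring cost of sensors, fees, taxes

    def grow(self, season_growth: float) -> None:
        """Ecosystem dynamics: timber accumulates each season."""
        self.standing_timber += season_growth

    def act(self, timber_price: float) -> str:
        """One rule-bound economic decision, taken without human input:
        sell only the surplus needed to cover upkeep, never more."""
        self.treasury -= self.upkeep_cost
        if self.treasury >= 0:
            return "no sale: upkeep covered, forest left untouched"
        volume = min(-self.treasury / timber_price, self.standing_timber)
        self.standing_timber -= volume
        self.treasury += volume * timber_price
        return f"sold licenses for {volume:.1f} m3 to cover upkeep"

forest = ForestDAO(treasury=50.0, standing_timber=1000.0, upkeep_cost=80.0)
forest.grow(season_growth=40.0)
print(forest.act(timber_price=10.0))  # the forest pays its own bills
```

The rule set is deliberately austere; what makes the scenario interesting is that no human sits between the sensing, the rule, and the sale.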

The existence of potentially “inscrutable” feeling machines poses the problem of how to engage with them, as they will likely require us to completely shift our ontological relationship to them from a “design stance” to an “intentional stance,” in the categories that Daniel Dennett has outlined (2009), with the additional problem that our own faculty (or notion) of rationality may not be sufficient to properly situate a given agent’s “rational demands.” Yet we will likely always perceive a need to gain insight into the modes of thinking and the purposes of the entities that we design, collaborate with, or co-exist with. Shanahan therefore suggests that “to discern purposeful behavior in an unfamiliar system (or creature or being), we might need to engineer an encounter with it,” in order to feel it out, so to speak.

Ludic platforms such as chess or Go, while primarily chosen for the boundedness and rule-based nature of their respective “worlds,” may have incidentally become prototypes of such engineered encounters, in which human minds can obtain a feeling for the “perception, belief, desire, intention and action” of another kind of intelligence. Conversely, within human history, games have been a formidable technique not only for training the cognitive modeling of another person’s mind by predicting their future moves, but also for side-stepping Nagel’s “subjective character of experience” by tapping into intrinsically “inhuman” factors such as the randomness of dice in games of chance.

[Image: brain]
Given the present lack of complete knowledge about how natural brains give rise to intelligence and self-consciousness, and the recent successes in coaxing simple forms of intelligence out of fundamentally different substrates, it appears more likely that we are moving toward a future filled with alien encounters than toward one of machines that “feel” in the human sense of the word. Those encounters might necessitate new games, and they will change our understanding of the old ones, as is presently happening with Go, where the spectrum of potentially gainful agential expressions afforded by the game (and thus also the way that humans play) has already been expanded. As “we might discover whole new categories of behavior or cognition” (and feeling, perhaps), “relevant parts of our language might be reshaped, augmented or supplanted by wholly new ways of talking” (Dennett), reflecting how human culture will change along with the space of possible minds.

Should we choose to tie the development of synthetic minds to remaining within the space of Nagel’s “someone sufficiently similar,” so that they can empathize with us (if this were even possible), we may end up with mere fantasies of feeling. If we instead embrace the alien subjectivity of other animals, machines, or whole ecosystems, we might learn a lot more about the nature of intelligence, not least that of our own minds.


Haigney, Sophie. “Fantasies of Feelings.” Machine Feeling, 18 Dec. 2018, machinefeeling2018.home.blog/2018/12/18/fantasies-of-feeling.

Dennett, Daniel. “Intentional Systems Theory.” The Oxford Handbook of Philosophy of Mind, Oxford University Press, 2009.

Lettvin, J. Y., et al. “What the Frog’s Eye Tells the Frog’s Brain.” Proceedings of the IRE, vol. 47, no. 11, Nov. 1959, pp. 1940–51.

Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review, vol. 83, no. 4, Oct. 1974, pp. 435–50.

Shanahan, Murray. “From Algorithms to Aliens, Could Humans Ever Understand Minds That Are Radically Unlike Our Own?” Aeon, Oct. 2016, aeon.co/essays/beyond-humans-what-other-kinds-of-minds-might-be-out-there.

"LEDs in Urban Farming GIF". gfycat. 11 Jun 2017. https://gfycat.com/fr/harmfulashamedbobwhite.