Why would we want machines to “feel,” perhaps even like a human? Guaranteeing that a machine can assume the perspective of its maker by keeping its existence “human-centered,” as some have called for, may foreclose other modes of thinking and ultimately entrench humanity’s privileged position within a planetary ecology of minds, when perhaps we should be working toward a goal of de-centering. Instead, I want to suggest that the potential for “machine feelings” beyond conventional narratives may lie in their ability to act as “intercalary elements” (Gilles Deleuze) between different natural minds such as ours and other entities with which we find it difficult to “communicate” directly, such as plants, the Earth system or our very own neural correlates.
In its common usage, the concept of “feeling” stretches from raw perception on the sensory level all the way to complex human emotions. A physicalist standpoint would hold that feelings, being “mental states,” must be “states of the body” (Nagel, 1974), and that any feeling thus emerges from the nerves, neurons and networks that have formed along the arrow of one’s personal history. While humans can empathize with one another, no two individuals will ever have identical networks (and activations), and we can never “know” what another human truly feels. Philosopher Thomas Nagel extended this epistemological dilemma when he asked what it might be like “to be a bat” and found that while “our own experience provides the basic material for our imagination” (1974), the sensory arrays and body plans of a bat are vastly different, giving rise to a radically “subjective character of experience,” which in turn (perhaps in proportion to the “alienness” of the underlying substrate) tends to withdraw quickly from what is conveyable in human language. At the membranes of “ourselves” (individual, species, genus) and with decreasing similarity between neural correlates, knowledge must gradually give way to imagination and simulation.
Let us assume for a moment that there are in fact multiple forms of intelligence situated within an asymmetrical field, and that intelligence cannot be reduced to a certain set of properties such as pattern recognition or self-awareness. Rather, different correlates (natural or synthetic) give rise to their own minds, each with its respective strengths and weaknesses. Cognitive roboticist Murray Shanahan has recently extended Nagel’s work, finding that a conscious entity is imaginable that would be “wholly inscrutable, which is to say it would be beyond the reach of anthropology” (2016).
As real “agency” may be transferred to other intelligences, there will be unresolvable situations, not unlike those we face with nature itself. There is a certain irony to this claim, as the key narrative of modernity, which has produced intelligent machines, is also one of control. Intelligent autonomous organizations or agents are exactly that: intelligent and autonomous. This means that ceding a measure of control is bound to be necessary if we wish to reap the benefits that other minds’ different modes of “feeling” the world may offer.
A recent conversation with an agricultural scientist might provide a glimpse of such “intercalary” elements that may be able to feel what humans cannot: LED illumination that is finely adjustable in wavelength and energy output, combined with sensors that measure certain vital signs of plants, is revolutionizing indoor plant cultivation. At present such systems rely on fairly simple algorithms, but using machine learning to sense the condition of a given plant organism and react accordingly appears an obvious avenue for engineering to explore. This effectively means that an AI employs a variety of sensors (all of which map only in the broadest terms to the natural senses found in human bodies) to “feel” the plant. As the neural network is trained, it learns about the plant and later, through its ability to control the light, establishes “communication” as it reacts to the plant’s metabolism.
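The closed loop just described (sense the plant’s vital signs, infer its condition, answer through the lamp) can be sketched in a few lines of Python. Every name here, from the sensor fields to the light policy, is a hypothetical illustration rather than any real horticultural API, and the “model” is a toy heuristic standing in for a trained network:

```python
# Hypothetical sketch of a sense-infer-respond loop, not a real system.
from dataclasses import dataclass

@dataclass
class SensorReading:
    leaf_temperature: float   # degrees C (illustrative vital sign)
    chlorophyll_index: float  # 0..1, proxy for photosynthetic activity
    soil_moisture: float      # 0..1

def infer_stress(reading: SensorReading) -> float:
    """Toy stand-in for a trained model: map vital signs to a stress score in 0..1."""
    thermal = max(0.0, (reading.leaf_temperature - 24.0) / 10.0)
    starvation = max(0.0, 0.6 - reading.chlorophyll_index)
    drought = max(0.0, 0.3 - reading.soil_moisture)
    return min(1.0, thermal + starvation + drought)

def adjust_light(stress: float) -> dict:
    """Respond through the only channel the system has: the grow light."""
    intensity = 1.0 - 0.5 * stress          # dim the lamp when the plant is stressed
    wavelength_nm = 660 if stress < 0.5 else 450  # red vs. blue, purely illustrative
    return {"intensity": round(intensity, 2), "wavelength_nm": wavelength_nm}

reading = SensorReading(leaf_temperature=29.0, chlorophyll_index=0.5, soil_moisture=0.2)
print(adjust_light(infer_stress(reading)))  # → {'intensity': 0.65, 'wavelength_nm': 450}
```

The point of the sketch is structural: the machine’s “feeling” of the plant exists only as the mapping from sensor space to action space, a channel with no human analogue on either end.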
On its most material level, the neural correlate in which this knowledge would be embodied is far more alien to us than the plant itself, yet it may come to “know” the plant more intimately than any human gardener ever could, increasing agricultural productivity (and perhaps plant happiness). More wide-ranging scenarios are currently being suggested in which decentralized autonomous organizations (DAOs) without any human agency become custodians of entire landscapes, protecting them from human exploitation or perhaps exploiting themselves more effectively, such as terra0’s “technologically-augmented ecosystems that are […] able to act within a predetermined set of rules in the economic sphere as agents in their own right.” Regardless, such systems suggest the potential for an anthropodecentric alliance of feeling within partially synthetic yet non-human ecosystems.
The existence of potentially “inscrutable” feeling machines poses the problem of how to engage with them, as they will likely require us to completely shift our ontological relationship to them from a “design stance” to an “intentional stance,” in the categories that Daniel Dennett has outlined (2009), with the additional problem that our own faculty (or notion) of rationality may not be sufficient to properly situate a given agent’s “rational demands.” Yet we will likely always perceive a need to gain insight into the modes of thinking and purposes of entities that we design, collaborate with or co-exist with. Shanahan therefore suggests that “to discern purposeful behavior in an unfamiliar system (or creature or being), we might need to engineer an encounter with it,” in order to feel it out, so to speak.
Ludic platforms such as chess or Go, though chosen primarily for the boundedness and rule-based nature of their respective “worlds,” may have incidentally become prototypes of such engineered encounters, in which human minds can obtain a feeling for the “perception, belief, desire, intention and action” of another kind of intelligence. Conversely, throughout human history, games have been a formidable technique, not only for training the cognitive modeling of another person’s mind by predicting their future moves, but also for side-stepping the human’s “subjective character of experience” (Nagel) by tapping into intrinsically “inhuman” factors such as the randomness of dice in games of chance.
Should we choose to confine the development of synthetic minds to the space of Nagel’s “someone sufficiently similar,” so that they can empathize with us (if this were even possible), we may end up with mere fantasies of feeling. If we instead embrace the alien subjectivity of other animals, machines or whole ecosystems, we might learn far more about the nature of intelligence, not least that of our own minds.