A Computationalist Reply to Constitutive Panpsychism

Introduction

For thinkers who want to avoid dualist models of consciousness, panpsychism is an increasingly attractive view, one which claims that phenomenal consciousness originates in microscopic physical constituents such as electrons and protons. While these particles are not held to possess the same phenomenal experience that humans do, panpsychism supposes that they have their own “micro-phenomenal” qualities which, when combined in a sufficiently complex way, give rise to the “macro-phenomenal” qualities of complex organisms. One powerful historical critique of the panpsychist view has been the combination problem, most famously articulated by William James. James found it inconceivable that individually conscious agents could simply combine to form a new kind of “group” consciousness.

Many attempts have been made to get around this problem. In the first section I shall analyze one panpsychist attempt put forward by Berit Brogaard, a type of panpsychism called “constitutive” panpsychism (CP). While I ultimately remain agnostic as to the success of her arguments, I shall maintain, conservatively, that instead of seeking to establish a new theory of consciousness at the subatomic level, we should continue to develop theories that study the consciousness possessed by relatively large-scale physical systems. To this end, I will also examine two other workarounds to the combination problem, cosmopsychism and computationalism, concluding that the latter seems to me the most promising candidate for understanding the mind.

Part 1: How Brogaard’s CP Avoids the Combination Problem

Brogaard does not lay out her arguments for CP with the express purpose of avoiding the combination problem; nevertheless, I find it useful to sketch her argument this way in order to set up its strengths. Brogaard says the most compelling form of constitutive panpsychism, the claim that “at least some of the phenomenal properties of macrosubjects are metaphysically determined by phenomenal or protophenomenal properties instantiated by microscopic particles” (131), is best understood through a comparison between gravity and consciousness. Her argument can be displayed syllogistically as follows:

  1. While there is a general macroscopic theory explaining gravity, there is no explanation of how gravity arises at the small scale. Therefore, “physicists posit the existence of gravitons: hypothetical tiny, massless elementary particles that emanate gravitational fields.” (138)
  2. A general macroscopic theory for consciousness already exists or is in development (within neuroscience/psychology/philosophy). There is still no explanation of consciousness at the small scale.
  3. Primordial consciousness, like gravity, is a ubiquitous phenomenon that has field-like properties. Gravitons are microscopic particles which emanate macroscopic gravitational fields.
  4. Therefore, we are equally justified in proposing hypothetical “consciousness particles”, or “mentons”, whose qualitative nature at the microscopic scale describes qualitative phenomena at the macro-scale.

I will show that if the above argument is sound, Brogaard has a good case to make against one type of combination problem, and successfully avoids it. Since I am not fully convinced of its success, however, in subsequent sections I outline its weaknesses, focusing mainly on premises (2) and (3).

Brogaard sets out multiple versions of the combination problem. While she claims that her CP argument is successful against them all, I think her version of CP is strongest against what another philosopher, David Chalmers, calls the “Palette” argument (146). This argument specifically targets “pro-menton” theories, those which posit hypothetical consciousness particles. The Palette argument is based upon the observation that there is a wide variety (a palette) of macrophenomenal states, whereas the CP argument concludes that a single kind of microphenomenal particle instantiates this myriad of possible macrophenomenal states. The Palette argument calls into question whether mentons can account for this variety; if they cannot, then CP must be false. It is, in effect, a combination problem in reverse, moving backwards from a single kind of microphenomenal particle to particular macrophenomenal states.

This variety of macrophenomenal experiences is exhibited most obviously through different sense modalities that organisms possess—seeing light is qualitatively distinct from hearing sound, yet both are examples of macrophenomenal qualities. How are both supposed to be instantiated by the menton? For one thing, consciousness in general is not what mentons give rise to; mentons compose what Brogaard calls “primordial consciousness.” Primordial consciousness is simply the barest essence of what it is like to be conscious, in her words, “the difference between a subject’s experience of red and her zombie twin’s analogous physical state” (146). This requires some confidence in the metaphysical possibility of p-zombies, entities which are perfect physical copies of a conscious subject but do not possess phenomenal consciousness themselves[1].

Since mentons are the fundamental constituents of primordial consciousness, it is conceivable that complex structures like particular brain areas could account for the richness of macrophenomenal states. Different animals are known to have different sensory apparatus and different degrees of neural sophistication, so they must also have various kinds of sense-experience. However, if one is devoted to CP and this idea of primordial consciousness, one could still hold that all these creatures share the same property of primordial consciousness while lying on a gradient of kinds of macrophenomenal states. In this way, Brogaard’s formulation of CP gets around the Palette argument by rejecting the claim that mentons, microphenomenal particles, even give rise to the variety of macrophenomenal states. A more exhaustive account of primordial consciousness is given below, where I discuss my objections to her argument.

Part 2: The Third Premise

The third premise of Brogaard’s argument, where she sets up an analogy between quantum gravity and consciousness, is one I believe to be extremely tenuous. How can we point to a subject the field’s experts have “no clue about” (138) in order to elucidate another mysterious phenomenon like consciousness? Furthermore, quantum gravity, or gravity at the micro-scale, is an area of active research today; to claim it is an area in which physicists are totally ignorant seems quite pessimistic. Nonetheless, since knee-jerk reactions can often be incorrect, let us examine premise (3) in detail rather than dismissing it out of hand.

The analogy between gravity and consciousness is based upon certain striking similarities between the two. One resemblance is that both appear to be ubiquitous features of the universe. Gravity holds as true for faraway galaxies as it does for the objects on our planet that are in our immediate experience. The large-scale explanation of gravity, general relativity, describes gravity as a consequence of the curvature of spacetime around massive objects. To make the case that consciousness is also a field-like phenomenon, Brogaard appeals to Searle’s model of consciousness as a unified field. If consciousness is a field, the “total conscious state” is what is fundamental, and informational fluctuations within the consciousness-field are what represent individual conscious experiences (138). The field on its own is quite boring; it is only because nervous systems perform the work of representing that there is informational content at all.

Another reason to take consciousness to be a field-like phenomenon, Brogaard suggests, is neurological evidence that narrow conscious states, such as vision, rely upon broader conscious states to function properly (139). That is, the individual networks that process vision are distinct from deeper brain networks (e.g., the brain stem) that are necessary for maintaining consciousness. Since the vision circuits cannot be called conscious on their own and must get feedback from the more fundamental circuits of the brain stem, consciousness must be instantiated on a broader “primordial” level in those fundamental regions which are necessary for consciousness[2]. In other words, this primordial consciousness instantiated by the brain stem could offer the broadest sort of conscious state, to which the more complex information of narrower conscious states (sight, smell, etc.) of the cortical areas is relayed. This is meant to provide further evidence that the broad state of primordial consciousness that mentons are hypothesized to emanate could also be responsible for creature consciousness.

Two difficulties I have with the above analogies are as follows. Gravity is ubiquitous because it describes the motions of stars at the furthest reaches of the galaxy as well as it explains the movements of objects on our planet. But its existence does not seem at all reliant on our observation of it; it simply is. Gravitation is a property of how objects relate and interact with one another in space, and to the extent that objects have mass and are in space, they must also be under gravitational influence. However, I worry that the perceived ubiquity of consciousness in the universe is not a result of its external reality, but of the fact that we carry it with us wherever we go. Humans take their conscious experiences everywhere; it stands to reason that when we look out through a particular lens, it colors everything we see.

The second difficulty is with the level on which primordial consciousness can be understood. It is unclear how microscopic mentons, if they emanate a field of consciousness, could interact with physical brains. All the matter we can observe in the universe interacts through four fundamental forces: the strong and weak nuclear forces, the electromagnetic force, and gravity. The first three are described by the Standard Model, and each has particles which mediate its interactions; gravity’s proposed mediator, the graviton, remains hypothetical[3]. If the menton exists, in what way could it interact with these forces? Should we not already see some indication of its existence if it has causal interactions with matter (i.e. nervous systems) through a consciousness field? Moreover, Brogaard’s discussion of the brain stem shows, if anything, that primordial consciousness must be a function of the brain itself, not of something outside it. In fact, it is far more likely that this primordial consciousness is itself a “narrow” state that is integrated with other narrow, but more informationally complex, states[4]. Since primordial consciousness evolved earlier in time, it is probable that all the states that evolved afterwards were able to establish pathways linking themselves to this more fundamental system.

Brogaard’s theory is commendable because it is physicalist, empirical in principle, and so experimentally testable. However, since it focuses on a level of explanation (fundamental particles rather than neurons) below what is necessary to explain consciousness, I think it is unlikely to be correct. It is entirely possible for particles such as mentons to exist; what is lacking is a mechanism of action that explains how they behave and interact with the universe as we know it. So instead, what if consciousness could still be explained with reference to larger-scale systems?

Part 3: A Conservative Argument against Premise (2)

“No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either.” – Marvin Minsky[5]

The following section will address a systems approach to consciousness, where I focus on premise (2) of Brogaard’s central argument. My argument is a somewhat conservative one, as I argue that certain assumptions and goals within the cognitive sciences are already correct (or mostly so), and that there are good reasons to take them seriously. In her second premise, Brogaard suggests that consciousness needs a theory of the small scale. It is not clear why this is so, as many researchers in the cognitive sciences—artificial intelligence, systems neuroscience, computational linguistics, etc.—will be the first to tell you that whatever consciousness is, it should be described as a function that is algorithmically[6] implemented by nervous systems. To many, this “computationalist” view seems hopelessly mathematical—dead, in other words—and in no way able to come close to replicating conscious systems. However, it is not susceptible to the objections made against panpsychism, while it maintains panpsychism’s strengths (and has more of its own). Perhaps putting our intuitions aside may serve us better when discussing matters such as consciousness.

Computationalism sidesteps the dichotomy between large- and small-scale features of consciousness by instead asserting that consciousness, as we know it, is a principle or rule by which certain interactions between brain regions are governed. If we take a systems approach, then by emulating the rule by which certain NCCs (neural correlates of consciousness) interact, we can also emulate the function that they perform. We can better understand the kind of systems approach I mean by looking again at James’s formulation of the combination problem:

“Take a hundred of them, shuffle them and pack them as close together as you can (whatever that might mean); still each remains the same feeling it always was, shut in its own skin, windowless, ignorant of what the other feelings are and mean. There would be a hundred-and-first feeling there, if, when a group or series of such feelings were set up, a consciousness belonging to the group as such should emerge.”[7]

James makes the case that when any hundred “elemental units” are packed tightly together, they will never transfer any sort of feeling to one another. He was making a case against panpsychism, of course, against the idea that microscopic “conscious” particles could ever combine to form a new whole that also possesses a new sort of consciousness. According to computationalism, this is misguided, since the elements of the systems we know with certainty to be conscious, namely brains, are, under this view, unconscious neurons. How is it, then, that we get conscious brains?

One basis for a compelling explanation is that these brains are not static or lazy—their neurons are continually firing, synchronously and asynchronously. One neuron speaks to hundreds of others, and a gang of neurons can “speak” to dozens of other gangs. The brain is sessile, yes, but that does not mean it is inactive. Somewhere in that activity must be the thing we term “consciousness”, although the word is likely, in Marvin Minsky’s words, to be a “suitcase[8]” term for a bundle of separate functions that get integrated into a false unity.

To return to the combination problem, then, the way the elemental units can transfer feeling to one another is for them to be able to communicate, that is, to transfer information amongst themselves. If 12 individuals are each told one word of a sentence, and the group is supposed to come up with the entire sentence, the group can indeed arrive at the whole sentence, because each individual can communicate his or her specific word to the others. Lone neurons do not do anything interesting on their own, but within networks, individual neurons can speak to one another. There is simply no need for the entire group to have a unified experience if a single part has a particular experience that it can adequately communicate to the other parts. Therefore, communication between parts is a way of generating group-level behavior even if there is no group-level experience.
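To make this concrete, here is a toy sketch of the twelve-word example (my own illustration, not Brogaard’s or James’s; the sentence and all names are hypothetical): no single part holds the whole sentence, yet simple message passing lets the system as a whole produce it.

```python
# Toy illustration (hypothetical): twelve "individuals" each know only
# their own word, but by broadcasting (position, word) messages, any
# listener can reconstruct a sentence that no single part possesses.

words = "the group of parts can still come up with the whole sentence".split()
assert len(words) == 12

# Each individual's sole piece of knowledge:
messages = [(position, word) for position, word in enumerate(words)]

def assemble(received):
    """A listener sorts the messages by position to recover the sentence."""
    return " ".join(word for _, word in sorted(received))

print(assemble(messages))
# -> the group of parts can still come up with the whole sentence
```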

A system is composed of interacting parts, and if those parts can transfer information to one another, information that is given to one element can manifest a response at the level of the system. There are examples of this phenomenon all throughout nature, in flocks of birds and schools of fish, where group-level behavior emerges from simple local interactions. If each bird or fish is receptive to the behavior of its neighbor, then when its neighbor swims left, it swims left, and when its neighbor flies up, it does so too. If a predator approaches from the right, for instance, one fish moving to the left is enough to make its neighbor do so, an action which then carries to all the others. The simple rule “do whatever your neighbor is doing” can therefore result in group-level decisions. This goes to show that information transfer between the components of a system can quite effectively result in group-level behavior, although not necessarily group-level experience.
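The following minimal sketch (again my own, with made-up headings and school size) shows how the “do whatever your neighbor is doing” rule turns a single fish’s evasive turn into a school-wide decision.

```python
# Minimal sketch (illustrative values): each fish copies the heading of
# the fish ahead of it, so one evasive turn propagates through the school.

def step(headings):
    # Fish 0 keeps its heading; every other fish copies its predecessor.
    return [headings[0]] + [headings[i - 1] for i in range(1, len(headings))]

school = ["left"] + ["right"] * 9   # fish 0 has just dodged a predator
for t in range(10):
    print(f"t={t}: {school}")
    school = step(school)

# By the last step every fish is heading "left": a group-level decision
# produced entirely by local information transfer, with no group-level mind.
```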

A more suitable illustration for experience could be the aptly named phenomenon of emotional contagion, the tendency of emotions to spread among socially sophisticated creatures sharing a bounded space, such as a traffic intersection or an elephant herd. This relies on the fact that most of our emotions are not under conscious control yet can be consciously felt—a property that, according to neuroscience, is shared by the majority of mental phenomena. Certain forms of communication besides language, such as visual signals like offensive body posture or auditory ones like loud yelling, bypass the higher cortical areas that would make you aware of them. Instead, these stimuli are routed to deeper (i.e. evolutionarily early) areas that quickly assess threats and, if need be, make fight-or-flight decisions to protect you, all of which is set up to be done unconsciously. Since social mammals have more advanced brain areas that serve to emulate the mental states of their companions, they tend to have a more sophisticated response to group emotions than simpler organisms like fish.

While humans are not fish or birds, this does not mean we are totally different. Much human mob behavior still exhibits many of the same properties as the aforementioned flocking and schooling behaviors[9]. Emotional contagion within mobs is an oft-cited cause of crowds “taking on a mind of their own”, making decisions that seem difficult to fathom from the perspective of an individual. That is to say, if we imagine that people still act like individuals when in a mob, we cannot fully explain the behavior that emerges at the level of the mob. That stimuli processed unconsciously can manifest themselves quite consciously may be a disturbing realization, but it seems to be an inherent feature of evolved minds. These examples of within-system communication, both within and between minds, should serve to broaden our intuitions about what systems can do when organized a certain way, and they are powerful examples of startling emergent behaviors that we do not expect from the low-level interactions of simple parts.

An objection could be made that all the parts I mentioned—birds, fish, social mammals—are themselves quite complex, and so complex group behavior is not surprising but expected. One oft-forgotten fact, however, is that cells are not simple either, and neither are their interactions. Each neuron on its own is quite complex and interesting (hence the existence of molecular and cellular neuroscientists). We only forget because, for the purposes of discussing their larger-scale interactions, the actions of individual neurons are abstracted away; attention returns to these simpler units of information processing only when something goes wrong at a higher, systems level of analysis. Regardless, studies have confirmed that, in neuronal circuits, it is the function of the circuit that has been selected for by evolution, not particular circuit parameters[10]. Hence, analyzing larger-scale circuits is an entirely consistent course of action to take when considering the general functions of brains.

A close look at the brain reveals particular neuronal circuits that perform specialized functions. These specialized circuits implement unique principles that are not implemented elsewhere in the brain, combining their outputs with the outputs of other specialized circuits to perform unified functions, which can either manifest in outward behavior or remain “under the hood”. As a particular illustration, dopamine neurons in the substantia nigra[11] play a central role in reinforcement learning, a principle by which an organism is informed to do either more or less of what it is currently doing. This circuit is implemented all across the animal kingdom, being foundational to learning (specifically, Pavlovian conditioning) in creatures from birds to mammals. Since this circuit can be defined mathematically (math is just a very specific kind of language), it is currently being implemented in computers as well, as the sketch below illustrates. There are other circuits throughout the brain, such as the supervised learning circuits of the cerebellum[12], which perform similarly algorithmic functions that are continually being discovered.
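Here is a minimal sketch (my own, with an assumed learning rate and reward schedule) of the reward-prediction-error rule commonly used to model this dopamine signal, in the style of Rescorla-Wagner or temporal-difference learning:

```python
# Minimal sketch of a reward-prediction-error rule (Rescorla-Wagner style);
# alpha and the reward schedule are assumed values chosen for illustration.

alpha = 0.2   # learning rate (assumed)
value = 0.0   # the organism's current prediction of reward following a cue

for trial in range(1, 16):
    reward = 1.0               # the cue is reliably followed by reward
    delta = reward - value     # prediction error: the putative dopamine signal
    value += alpha * delta     # a positive error says "do more of this"
    print(f"trial {trial:2d}: prediction={value:.3f}, error={delta:.3f}")

# The error shrinks toward zero as the prediction converges on the reward,
# mirroring how dopamine responses to fully predicted rewards fade away.
```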

One major hope is to find the circuit, or set of circuits, that performs the function of the self, acting as a CEO or “conductor[13]” of the brain, integrating disparate information from all across the brain and giving rise to a sense of being a subject which has experiences. Phenomenal consciousness, according to one strain of computationalism, could simply be the output of a specialized CEO center of the brain which gives rise to awareness of phenomenal content, since it receives input from most or all sensory cortical areas. Much current research focuses on the attentional circuits of the brain, based on the observation that whatever one is not attending to, one is not conscious of, and so cannot remember. Hence, attention seems linked with both consciousness and memory, which aligns with introspection[14]. There are several candidate networks implicated in such an attentional function, and they are worth researching further.


Part 4: Objections

I have so far addressed constitutive panpsychism, in addition to a computationalist account that seeks to avoid the problems I raised against Brogaard’s variety of CP. Another kind of panpsychism, “cosmopsychism”, could still be a potential replacement for CP; it sidesteps James’s combination problem by claiming that the entire universe instantiates a form of “cosmic” consciousness that individual creatures partake of. This closely parallels Brogaard’s field theory of consciousness but does not rely on particles with microphenomenal qualities. Instead, there is only one “macro”-phenomenal consciousness and many micro-phenomenal conscious agents which rely on the larger consciousness. This escapes the combination problem by first assuming that consciousness is a unity, then breaking it up into component parts.

Many have viewed pictures of galactic clusters[15] and, struck by their resemblance to images of neurons, imagine the universe at some level to be similar to a brain. Of course, this is really not the case. One argument against cosmopsychism could utilize the principles of computationalism. If cosmopsychism is to be true, there must be structures at large scales that implement functions that organisms like us also compute within our brains. Images of neurons leave out many things, isolating only the relevant structures that researchers want to observe, and likewise for those pictures of galaxies. To look at those pictures and infer deep similarity, while tempting, would be a mistake.

This analogy between brains and galaxies also fails because neurons send signals to one another quite quickly, so for galactic information transfer to occur at a comparable relative speed, signals would have to travel faster than light in order to make up for the vast distances. Brogaard’s CP, more than anywhere, would come in handy here, as it could potentially work around this barrier: if consciousness is a unified field like gravity, it could conceivably act across galactic distances, the same way the gravity of one galaxy influences its neighbor. But if we are committed computationalists, implementing the function of consciousness is the sufficient condition for being conscious, so whatever structure implements it, no matter its form or size, must be conscious.

Conclusion

If there are circuits in the brain stem whose function is to give rise to primordial consciousness, they will be discovered and mapped, their precise algorithms defined and implemented in machines. Perhaps primordial consciousness is fundamental, arising evolutionarily early in the brain stem, or perhaps it is a late phenomenon, a consequence of having a cortex. One may think the computationalist perspective misses the forest of consciousness for algorithmic trees…but after all, forests are simply collections of trees! Differences between forests are usually explained by differences in tree species, soil nutrients, temperature, and climate. So, if you want to make another forest, you must study the trees. Similarly, many systems in the brain certainly seem to be understood best with recourse to algorithms. If we want to make another system like the brain, it only stands to reason that we must study what it is made of and seek to understand its unique structure and organization.

An oft-cited analogy for cognitive science research comes from aviation. Flight was achieved not by exactly emulating birds, but by extracting the principles of bird flight. For something to fly, it did not have to look like a bird—it had to fly as a bird flies, maximizing lift and minimizing drag. Understanding deeper principles in this way, abstracted away from intuition, has a good historical record behind it. It could be the case that there is some deeply misguided approach within the cognitive sciences, but it is not at all true that we have run out of things to try. For consciousness to be an intangible principle behind brain activity would not show it to be boring; it would mean it is the ultimate magic trick of the universe, more magical than we could have ever dreamed. And the fact that we could not have dreamed it, just like gravity, implies that it is not just a figment of our imagination.

[1] I am not at all committed to this belief, but because avoiding the combination problem is at stake, let us run to the end of the argument.

[2] Although Brogaard does not mention this, the brain stem is considered one of the earliest brain regions to evolve, and it is found (in various forms) across vertebrate species.

[3] https://home.cern/science/physics/standard-model

[4] Denton et al. (2008), The role of primordial emotions in the evolutionary origin of consciousness

[5] As cited in Bach (2016), The Cortical Conductor Theory: Towards Addressing Consciousness in AI Models

[6] “Algorithmic” simply means rule-governed, where the rule is usually mathematically or programmatically defined.

[7] James (1890), Principles of Psychology

[8] https://web.media.mit.edu/~minsky/eb4.html

[9] Canetti (1960), Crowds and Power

[10] Prinz et al. (2004), Similar network activity from disparate circuit parameters

[11] Gadagkar et al. (2016), Dopamine neurons encode performance error in singing birds

[12] Raymond et al. (2018), Computational Principles of Supervised Learning in the Cerebellum

[13] Bach (2016), The Cortical Conductor Theory: Towards Addressing Consciousness in AI Models

[14] http://www.scholarpedia.org/article/Attention_and_consciousness

[15] https://wwwmpa.mpa-garching.mpg.de/~swhite/talk/Ouagadougou2010.pdf

Amanuel Sahilu
