Originally, in the time of René Descartes, all that was needed to


escape the Church was to separate the mental from the made-up non-

mental—labeled the ‘material’—and claim that science busied itself


solely with the latter. Descartes’ own substance dualism did not try to

eliminate either side of this pair or make one more fundamental than the

other. They were, instead, supposed to be complementary. This

intellectual ethos prevailed among learned elites all the way into the

early 19th century, as one can see, for instance, in these words of the

great Goethe:

Whoever can’t get it into his head that mind and matter, soul and

body ... were, are, and will be the necessary double ingredients of the

universe, ... whoever cannot rise to the level of this idea ought to

have given up thinking long ago.

(As quoted in Rüdiger Safranski’s Goethe: Life as a Work of Art,

W. W. Norton, 2018, chapter 29.)

Notice that, for Goethe, some form of substance dualism was not at all a

matter of faith, but one of reason, for failing to acknowledge it

represented an abandonment of thinking itself. Goethe was an ennobled

—bourgeois by birth, being the son of a financially-independent lawyer

—member of the intellectual elite of his time; perhaps the most

prominent one. His perspective is thus quite relevant and representative.

By the second half of the 19th century, however, when the game was

no longer the mere survival of bourgeois intellectual elites, but their

cultural hegemony over the clergy, an extra claim became mainstream

among those elites: quantitative descriptions precede the qualities

described, somehow giving rise to, and constituting the essence of, the latter. This meant that

what science studies—i.e., matter—is deeper and more fundamental than

the Church’s domain of the psyche. The equality between mind and

matter was abandoned in favor of the latter. Indeed, intellectual

bourgeois hero Charles Darwin—son of a successful doctor and financier

—had already dealt a blow to the clergy by taking away from them the


power to explain life itself. This emboldened the ambitions of the

bourgeoise, so that claiming that mind must be reducible to matter was

the psychologically predictable next attempt at a metaphysical coup de

grâce against the Church. To this day, almost two centuries later, the

claim is still in force, for modern Physicalism maintains precisely that


the qualitative-mental can be reduced—in principle—to the quantitative-

physical, even though nobody can even begin to explicate how that


might work.

The point I am trying to make is that mainstream Physicalism is not a

hypothesis motivated by evidence and clear thinking, but a philosophical

side-effect of a psycho-socio-political power game. What passes for

empirical evidence in favor of Physicalism is often evidence merely for

the existence of a world outside our individual minds, not for a world

metaphysically different from mind in general, as an ontological

category or kind of existent. But because we are conditioned to think

of everything outside the minds of living beings as non-mental, we

naively misconstrue the undeniable and overwhelming evidence for a

world outside living beings—a world that living beings inhabit—as

evidence for the non-mental.

What leads to such interpretational bias? The answer is Physicalism

itself, for it is only under its premises that mind, being supposedly a

product of metabolism, must always be confined to living beings (the

fact that mental states correlate well with metabolic brain states is

acknowledged, but also does not imply Physicalism, as I shall discuss in

detail later). Therefore—or so the thought goes—the environment

inhabited by living beings cannot itself be mental; ergo, it is material, for

what else is there? This is an example of the circularity that underpins

mainstream Physicalism, as mentioned earlier. If you come from a

physicalist background yourself, you may need to reread this and

the previous paragraph, perhaps a couple of times, to even see the

circularity in question.

Here is another example: because everything happens as if what

appears on the screen of perception were the real world out there, we

conclude that the real world must be physical, in the sense of having the

structure of the contents of perception. If one sees a train coming and

then steps in front of it, one dies; it all works as though a real physical

train were coming. Yet, the same applies to an airplane without windows:

everything happens as if the dashboard were the sky outside; so much so

that the airplane will crash if the pilot ignores the dashboard or acts


against its indications. Why is that? Because the behavior of the

dashboard is correlated, by construction, with the salient parameters of

the sky outside, insofar as it represents—in an encoded form—the states

of the sky. In other words, the dashboard conveys accurate and

important information about the sky outside, without being the sky. It

was built to do precisely this. But since we’ve become blind to the rather

trivial fact that accurate information can be conveyed through

representational mediation, we fail to see that perception is more akin to

a dashboard than to reality. And what inculcates this bias in our minds?

The answer is, again, unexamined physicalist premises, according to

which the structure of the real world is ‘obviously’ the structure of what

is displayed on the screen of perception. Physicalism is thus largely a

self-perpetuating delusion. That physicalists think of themselves as being

guided by evidence merely betrays the circular character of the delusion,

insofar as their evidence is trivially misconstrued to imply what it

doesn’t.

There is more to be said about how we misconstrue evidence to

acquiesce to our physicalist bias, and I shall come back to it later. For

now, though, the salient point is this: while there was initial clarity

among the people involved in the social power game that Physicalism

was a political move, today this clarity has been lost. We now actually

believe in Physicalism, for the cultural momentum it has amassed for

being repeatedly pronounced as fact over generations—as well as its

now-conditioned association with ‘educated’ and ‘elite’ perspectives—

has become formidable. Just like political propagandists who eventually

start gulping down their own snake oil, we now believe Physicalism

wholesale, because it has been repeated ad nauseam by otherwise

credible and educated people who were supposedly thinking about these

things before we were born. This cultural momentum gives us license to

not think critically about it ourselves, for others have done the hard

thinking for us already, right? And if all those educated people believed

in Physicalism, then it must be true, and we don’t need to spend our

energy reinventing the wheel ... right? All we need to do, when it comes

to our own credibility, career, and social standing, is to repeat the

physicalist story ourselves, so we also come across to others as educated

elite thinkers, who courageously—even heroically—face the tough fact


that nature is dead and meaningless. And thus, this self-perpetuating self-

deception endures robustly, for your children watch you—or the evening


news anchor, the family doctor, the teacher at school, etc.—gulp down

the snake oil. They then repeat it to your grandchildren, and these in turn


to your great-grandchildren, etc. After generations of this pernicious


psychological cycle, a culture can sincerely become quite certain that in-

your-face balderdash—such as the map preceding the territory—holds


water, for a manufactured sense of plausibility eventually saturates it.

Welcome to the 20th century, a time when the Church was already largely

defeated, but bourgeois intellectual elites stepped on their own mines

while returning from the battlefield.

Fortunately, sociopsychological dynamics cannot make something

incoherent and empirically inadequate magically become true. There is

an indelibility to reason and evidence that, like water splashing against

rock, eventually cracks the strongest psychological walls. And so this,

too, began to happen in the late 20th century.

Under mainstream Physicalism, physical entities are defined in terms

of their measurable, quantitative physical properties. In other words, an

electron is its measurable mass, charge, momentum, etc.; there is

supposedly nothing to an electron but its quantitative properties. Still

under Physicalism, these physical entities have standalone existence:

they supposedly exist in and of themselves, independently of observation

or measurement. Observation and measurement merely disclose or unveil

their properties, which already existed—or so the story goes—

immediately prior to the observation or measurement. This notion is

called ‘physical realism.’ And sure enough, the practice of empirical

science did not contradict it up until the late 1970s.

But then, while looking closer and closer into the primary building

blocks of matter, scientists noticed something that defied physicalist

expectations: as it turns out, laboratory results began to show that

physical entities in fact cannot be said to exist prior to measurement.

Instead, physicality is a product of measurement.

This remarkable series of experiments—refined by different research

groups over the span of more than four decades—moved the Nobel Prize

committee to award the lead investigators the Nobel Prize in physics in

2022, the highest honor in science. I shall now briefly describe the

general form of the experiments and discuss why their results refute

physical realism.

The experimental procedure goes as follows: two subatomic particles

—say, A and B—are prepared together, so that they are entangled

(‘entanglement’ is physics jargon for saying that the particles cannot be

described independently of one another). They are then shot in opposite

directions at (near) the speed of light. After a certain distance is covered,


a first scientist—let’s say, Alice—measures particle A, while another

scientist—say, Bob—simultaneously measures particle B at a different,

far away location. What then transpires is that Alice’s choice of what to

measure about particle A determines what Bob sees when he observes

particle B. Let me repeat this, so it sinks in: what one scientist chooses to

measure about one particle determines what the other scientist sees

when he looks at the other particle.

How can this be? How can the choice of what to measure about one

particle determine what the other particle is? Shouldn’t observation

merely reveal what the particle already was, in and of itself, regardless of

what is measured about another? And how can two distant but

concurrent measurements be entirely correlated with one another, despite

the speed-of-light limit that should preclude any information transfer

required for such correlation?

This result is not reconcilable with physicalist premises (unless some

grotesque science fiction fantasies are taken seriously, which I shall

discuss shortly). If the two particles were physically real, in the sense of

having standalone existence, then their measurable properties—which, as

we’ve seen, define their existence—would be whatever they are

regardless of what one chooses to measure about them. Take a piece of

luggage, for instance: it seems to have a certain mass, height, and length

regardless of what is being measured about it. If it weighs 50 kilograms

sitting on a weighing scale, then it will still weigh 50 kilograms even

when it’s not sitting on a weighing scale—or so Physicalism stipulates.

Measurements supposedly reveal something that was already the case

about the piece of luggage immediately before the measurement was

done, not determine it. If mainstream Physicalism were true, the same

should apply to the subatomic particles in our experiment, for a piece of

luggage is simply a compound aggregation of subatomic particles:

measurement should simply reveal the properties the particles already

had, in and of themselves, immediately prior to the measurement.

Experimentally, however, what we see is that the properties of one

particle depend on what we choose to observe about the other. The

particles’ properties don’t have standalone existence but are, instead,

created by the very act of measurement. And since there is nothing about

a physical particle but its measurable physical properties, the particles

themselves cannot be said to exist unless and until a measurement is

done. This, of course, is incompatible with physical realism and,

therefore, mainstream Physicalism itself.


If one still insists on holding on to physical realism, one has to part with

explicit, level-headed, empirically based science; one has, instead, to

entertain one of two highly inflationary and entirely speculative

fantasies. The first is the so-called ‘Everettian Many-Worlds’ hypothesis:

every time an observation is made, all possible outcomes are supposed to


be produced, but each in a separate, parallel universe. The paradigm-

defying outcome we happen to see is the one that happens to be


produced in the parallel universe we happen to inhabit. Copies of us in

other parallel universes observe all the other outcomes, so there is

nothing to fret about if we see stuff that contradicts our expectations and

prejudices. One can almost feel the warm, fuzzy metaphysical

reassurance this provides: whatever you choose to believe is sort of fair

game, for everything that could be observed is observed, just in some

other inaccessible universe, in some other inaccessible dimension, by

some other inaccessible copy of you; ‘inaccessible’ being the operative

word here. This undisguised but admittedly very imaginative subterfuge,

if taken seriously, could be used to justify just about anything that

doesn’t outright contradict the laws of large numbers.

Some claim that the idea of parallel universes ‘flows naturally’ from

quantum theory, a notion grounded in a combination of grotesque

epistemic arrogance and a complete abandonment of one’s natural sense

of plausibility. The idea is that, because the equations of quantum

mechanics—which we have come up with to try and get a handle on

nature’s behavior—cannot predict the outcome of any specific event, but

only statistical averages, then of course nature must produce all possible

outcomes; otherwise we, godly intellects that we are, certainly would

have been able to do better by now, wouldn’t we? In other words,

because we, bipedal apes, haven’t managed to predict nature’s behavior

at its finest-grained level, then ... invisible parallel universes!

This idea would be a little more apt if we had some, any, direct

evidence for all this parallel stuff. Alas, we don’t. We must simply

believe in countless parallel universes popping into existence every

infinitesimal fraction of a second—every time there is a microscopic

interaction anywhere in the universe—which is arguably the most

inflationary notion that human thought can coherently produce. It is so

inflationary, in fact, that it is literally impossible to explicitly visualize

how much stuff popping into existence is entailed by it; it’s just too

much to wrap one’s head around; it’s a dizzying, exponential,

thermonuclear explosion of empirically unverifiable stuff that makes the

Big Bang look like a bang snap. That such fantasy is not only taken


seriously, but even publicly promoted by professors from some respected

universities, illustrates how far belief in Physicalism can take otherwise

reasonable, intelligent people down extraordinarily implausible avenues

of pure speculation. From the perspective of psychology, this is

deserving of in-depth study, and I don’t say this sarcastically at all.

The other entirely speculative and supremely vague fantasy is called

‘superdeterminism’: there supposedly are mysterious ‘hidden variables’

in nature—emphasis on ‘mysterious’ and ‘hidden’—that do exactly

whatever needs to be done for the experimental results obtained to

remain consistent with physical realism. What are these hidden

variables? No one has ever specified them explicitly and coherently, so

we can’t even start looking for them through experiments that could

falsify the hypothesis. How, precisely, do the hidden variables do what

they are presumed to do? No one has ever specified that either; they just

somehow do it. But do what, exactly? Whatever must happen in nature

so we can continue to believe in physical realism, despite experimental

results telling us otherwise. If there is any exaggeration in this colloquial

characterization of superdeterminism, it is only mild.

Superdeterminism is akin to saying, if you believe in Creationism,

that nature has a mysterious, hidden agent who does exactly whatever

needs to be done to create the illusion of a fossil record, even though

natural selection is false and the world was created within the past ten

thousand years. How? We have no idea. What is this mysterious agent?

We have no idea; we just call it ‘hidden god.’ And we define this agent

in terms of whatever needs to be true so as to enable us to continue to

believe in Creationism, despite overwhelming empirical evidence for an

alternative hypothesis—namely, that the fossil record points to evolution

by natural selection. Underneath its highly technical language, the spirit

of superdeterminism is surprisingly akin to this.

You see, what the laboratory results have been consistently telling us

for over 40 years—so consistently, in fact, that a Nobel Prize has been

awarded to the investigators—is that physical entities have no standalone

existence. They are, instead, products of measurement. But since this

result is metaphysically unacceptable to some, they conjure up undefined

hidden variables and inaccessible parallel universes to rescue our

metaphysical prejudices from the cold clutches of hard experimental

evidence.

To impress upon you the fact that I am not exaggerating, we can look

a little deeper into superdeterminism. According to it, the settings of the

measurement devices used by Alice and Bob somehow change what the


particles A and B are—as opposed to simply, well, measuring them,

which is what measurement devices are made to do—thereby creating

the correlations between what Alice and Bob see. This is akin to saying

that, when you photograph the moon up in the night sky, the aperture and

exposure settings of your camera change what the moon is. This way,

regardless of what you see in the resulting photograph, you don’t have to

part with your favorite theoretical prejudice about the nature of the

moon, for the moon that was there just before you photographed it was

different from the moon in the photograph. Moreover, Alice’s and Bob’s

measurement devices have to somehow conspire with each other,

instantaneously and at a distance, so as to ensure that the physical

properties of particles A and B are correlated, despite speed-of-light

constraints. But this, too, is miraculously taken care of by the unspecified

hidden variables, whatever they may be.

By elevating Physicalism to the status of necessary, a priori truth,

despite evidence to the contrary, our culture has lent legitimacy to

fantasies that are beyond implausible. After all, since Physicalism must

be true, any way to reconcile evidence with it, no matter how desperate

and implausible, must be a legitimate part of the debate, right? And so

our rational intuitions of plausibility are thrown unceremoniously out the

window. This is how cultures lose themselves to their own nonsense.

Short of theoretical fantasies, we must thus accept, on hard empirical

grounds, that the physical world is created upon observation or

measurement. In other words, physics is telling us experimentally that,

just as we’ve concluded before on entirely different grounds, the

physical world is but a dashboard representation created by

measurement, not the real world out there. The only physical world there

is, is the ‘physical’ world on the screen of perception; there is no

underlying, purely quantitative, abstract physical world with standalone

existence.

None of these experimental results is actually surprising or

discombobulating when regarded without metaphysical prejudice: the

dials on an airplane’s dashboard only show something when a

measurement is made, for what they show is precisely the outcome of the

measurement. Without a measurement, the needles in the dials don’t

move and nothing is shown, for there is nothing to be shown. Is this

difficult to understand? Now, in precisely the same way, as experiments

have repeatedly indicated, physical entities are dashboard representations

of measurement outcomes, so that without measurement no physical


entities can exist. Is that difficult to understand? If no measurement is

performed, the dials have nothing to show and, therefore, there is no

physical world; for the physical world is constituted by the dial

indications. In other words, all physical entities are merely ‘physical’

entities.

None of this implies that there is no reality prior to measurement,

otherwise we would have an even bigger problem, as there would be

nothing to be measured in the first place. But there is still a sky when no

airplanes are flying around making measurements. Without airplane

sensors, there just aren’t any dashboard representations of the sky. But

the sky itself—unmeasured—is still there. In exactly the same way, when

we don’t measure the real, external world, there is no ‘physical’ world;

for the ‘physical’ world—as displayed on the screen of perception—is

but an internal representation of our measurements of the real world.

Nonetheless, the real, nonphysical world is still there, regardless.

Things couldn’t be simpler if we just accepted what nature is telling

us, as opposed to forcing our metaphysical prejudices upon nature:

physicality is not the real world, but an internal cognitive representation

thereof; that’s why it only appears upon observation and can’t be said to

exist prior to observation. The world that is measured, in turn, is real, but

not physical, in the sense of not being describable through physical

quantities. That’s all there is to it, and it isn’t difficult to understand.

The dashboard metaphor can even make straightforward sense of the

instantaneous correlations between what Alice and Bob see upon

measuring particles A and B, respectively. These correlations are only

puzzling if we assume that the particles have standalone existence, but

not if they are mere representations. To see this, consider the following

analogy.

Imagine that you are watching a football match at home. Because you

are such a great fan of football, you bought two large TVs to follow the

same match, simultaneously, on two different channels. Imagine also that

the two different broadcasters have their own cameras in the stadium, so

each channel shows different images of the same match. And you watch

the two different images side by side.

Now, obviously, the two images will be entirely correlated with one

another, for they are representations of the same match, the same

underlying reality. The images have no standalone existence, only the

football match in the stadium—the thing in itself—has. Nonetheless, the

images will also be different, for they are produced by different cameras


and camera angles. None of this is counterintuitive or difficult to

understand.
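
The two-TVs analogy can be put in the form of a small sketch (everything here is a made-up illustration, not a physical model): two differently encoded ‘images’ derived from the same underlying sequence of events end up perfectly correlated without ever exchanging a signal.

```python
import random

random.seed(7)

# The "match" itself: the underlying sequence of events (the thing in itself).
match_events = [random.choice(["goal", "pass", "foul", "save"]) for _ in range(10)]

# Two broadcasters "film" the same events with different cameras: each applies
# its own encoding, so the two images differ, yet both track the same reality.
camera_1 = [f"wide-angle:{event}" for event in match_events]
camera_2 = [f"close-up:{event}" for event in match_events]

# The two screens never exchange any signal, yet they are perfectly
# correlated, because both are representations of the same underlying events.
correlated = all(a.split(":")[1] == b.split(":")[1]
                 for a, b in zip(camera_1, camera_2))
print(correlated)  # True
```

The correlation requires no communication between the two ‘screens’; it is inherited entirely from their common source, which is the point of the analogy.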

However, if you were a time traveler from the 18th century and didn’t

understand how TVs work, you would be flabbergasted by the

correlations between the two images: how can the little men running

inside the box to the left move in perfect synchronization with the other

little men running inside the box to the right? How can that happen even

when the boxes are totally isolated from one another, so that the little

men can’t talk to each other across the boxes? Incomprehensible!

Of course, the source of this puzzlement is the unexamined

assumption, by our time traveler, that the images aren’t mere

representations, but the things in themselves. If you think that there are

real little men, with standalone existence, running inside the two TV

sets, the correlation of their behavior across the sets would seem magical

indeed. And this is precisely the mistake we make when it comes to the

laboratory experiments being discussed here: we think of the entangled

particles A and B as real things in themselves, not mere representations

of an underlying nonphysical reality. If we understood and accepted the

latter, the experiments wouldn’t seem magical at all: the entangled

particles are two different representations—two different images, two

different camera angles—of the same underlying reality; that’s why they

are correlated instantaneously and at a distance, just like the images on

the two TV sets are instantaneously correlated at a distance. But instead

of acknowledging what nature is telling us, we insist on thinking like

18th-century people in the face of 21st-century experimental evidence.

Quantum physics experiments are not the only instance in which

laboratory results directly contradict physicalist premises and

expectations. Since 2012, results in the field of neuroscience of

consciousness have been doing the same, with overwhelming

consistency. For instance, before 2012 the generally accepted wisdom

was that psychedelic substances, which lead to unfathomably rich

experiential states, did so by stimulating neuronal activity and lighting

up the brain like a Christmas tree. Modern neuroimaging, however, now

shows that they do precisely the opposite: the foremost physiological

effect of psychedelics in the brain is to significantly reduce activity in

multiple brain areas, while increasing it nowhere in the brain beyond

measurement error. This has been consistently demonstrated for multiple

psychedelic substances (psilocybin, LSD, DMT), with the use of

multiple neuroimaging technologies (EEG, MEG, fMRI), and by a


variety of different research groups (in Switzerland, Brazil, the United

Kingdom, etc.). Neuroscientist Prof. Edward F. Kelly and I published an

essay on Scientific American’s website (titled “Misreporting and

Confirmation Bias in Psychedelic Research,” on 3 September 2018)

providing an overview of, and references to, many of these studies. As

Prof. Kelly put it, “impressive and direct measurements of decreased

brain activity” are by far the most robust effect that psychedelics have on

the brain.

This result contradicts mainstream Physicalism for obvious reasons:

experience is supposed to be generated by metabolic neuronal activity. A

dead person with no metabolism experiences nothing because their brain

has no activity. A living person does because their brain does have

metabolic activity—or so the story goes. And since neuronal activity

supposedly causes experiences, there can be nothing to experience but

what can be traced back to patterns of neuronal activity (otherwise, one

would have to speak of disembodied experience). Ergo, richer, more

intense experience—such as the psychedelic state—should be

accompanied by increased activity somewhere in the brain; for it is this

increase that supposedly causes the increased richness and intensity of

the experience (this rationale applies even under the understanding that

experience correlates with intrinsic information, provided that more than

half of the associated neurons remain inactive in the psychedelic state,

which is the case).

Notice that Physicalism would remain consistent with an overall

decrease of brain activity in the psychedelic state, provided that one

could still find localized increases in parts of the brain consistent with

the experience. The reason for this is that, under Physicalism, not all

neuronal processes lead to experience; only the so-called ‘Neural

Correlates of Consciousness’ (NCCs) supposedly do. It is thus

conceivable that psychedelics could reduce activity in processes not

related to conscious experience, while leading to localized increases in

the NCCs. In particular, it is conceivable that psychedelics could impair

inhibitory processes that, once impaired, disinhibit the NCCs. The

problem is that all this relies on there being plausibly sufficient increases


of activity somewhere in the brain—corresponding to the now-

disinhibited NCCs—compared to the baseline, so as to account for the


increase in the richness and intensity of experience. But no such thing

has been seen.

Since brain activity doesn’t increase in the psychedelic state,

physicalist neuroscientists then conclude that something else in the brain


must. And so the hunt is on for something in the brain that increases

under the effect of psychedelics. Many possibilities have been proposed

and somewhat fallen by the wayside, such as brain activity variability

and functional connectivity. But one remains and is significantly hyped

as the best physicalist hypothesis for accounting for the psychedelic

experience. It goes by various names, such as ‘brain entropy,’

‘complexity,’ ‘diversity,’ and so on (see “The entropic brain – revisited,”

by Robin Carhart-Harris, published in Neuropharmacology, 2018). But

what it means is very straightforward: brain noise—i.e., residual brain

activity that unfolds according to no discernible pattern; brain ‘TV

static,’ if you like.

The idea here is that, although brain activity decreases with

psychedelics, the residual activity that remains is desynchronized by the

drug, thereby becoming relatively more random than in the baseline. And

this relative increase in randomness or entropy—the latter meaning the

degree of disorder of the remaining brain activity—is supposed to

account for the unfathomable experiential immensity of the psychedelic

state. The logic is that more random activity contains more Information

than synchronized activity with discernible patterns. Under a certain

definition of ‘Information,’ which I shall elucidate below, this is indeed

true. And thus, the extra Information physiologically imparted by

psychedelics supposedly accounts for the extra richness and intensity of

the psychedelic experience.

There are many reasons why this ‘entropic brain hypothesis’ is

implausible to the point of being ludicrous, so let’s tackle them

systematically, starting with the underlying logic discussed above. The

fallacy of trying to account for richer, more intense experience in terms

of higher Information content is that it relies on conflating two

completely different definitions of the word ‘information.’

The first definition was that coined by Claude Shannon, father of

information theory, in his seminal 1948 paper, “A Mathematical Theory

of Communication.” The idea there is that Information is a measure of

the level of ‘surprise’ embedded in a message or signal. More

specifically, the more alternative possibilities are eliminated by a

message or signal, the more ‘surprise value’—and, therefore,

Information—it contains. For example, if a message stated simply that a

certain person is married, then only one other possibility would be

eliminated: namely, that the person is single. The level of ‘surprise’ here

is minimal, a single bit, since only one out of two possibilities is eliminated by

the message. But if a message were to contain, say, a picture of the cloud


cover over your city, countless other possible patterns of cloud cover

would be eliminated by it, and the level of ‘surprise’ would be much

greater. That picture would thus contain a lot more Information.
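
In technical terms, this ‘surprise’ is the self-information of an outcome: minus the base-2 logarithm of its probability. A minimal sketch (the one-in-a-million figure for a specific cloud-cover pattern is an arbitrary number chosen for illustration):

```python
import math

def information_bits(p: float) -> float:
    """Shannon self-information, in bits, of an outcome with probability p."""
    return -math.log2(p)

# "The person is married": one of two equally likely possibilities is
# eliminated, yielding a single bit of Information.
print(information_bits(1 / 2))  # 1.0

# A specific cloud-cover pattern out of, say, a million equally likely ones:
# vastly more possibilities eliminated, hence far more Information.
print(information_bits(1 / 1_000_000))  # roughly 19.93 bits
```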

One way to operationalize this particular definition of Information is

to think in terms of compression. A photograph—playing the role of

message, or signal—with clear and repeated visual patterns is

compressible and can, therefore, be stored in a smaller computer file.

The discernible patterns allow the compression algorithm to avoid storing many pixels explicitly, since the algorithm can later reconstruct those pixels from knowledge of the patterns according to

which they appeared in the first place. For example, a photograph of an

empty chessboard is highly compressible, because the black and white

pixels appear on it according to a very regular pattern, so there is no need

to store each and every pixel; all we need is to know the pattern of a

chessboard. But a photograph of TV static is much less compressible, for

the black and white pixels do not follow any recognizable pattern. In this

latter case, nearly all pixels need to be stored.
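This contrast between the chessboard and TV static can be verified directly with a general-purpose compressor; a minimal sketch using Python’s zlib, where the two byte sequences stand in for the two kinds of image:

```python
import os
import zlib

# A chessboard-like signal: a short pattern repeated over and over.
chessboard = bytes([0x00, 0xFF]) * 4096   # 8,192 bytes

# A TV-static-like signal: the same length, but random bytes.
static = os.urandom(8192)

# The regular pattern collapses to a tiny file; the noise barely shrinks.
print(len(zlib.compress(chessboard, 9)))  # a few dozen bytes
print(len(zlib.compress(static, 9)))      # roughly 8,200 bytes

The regular signal compresses by orders of magnitude; the random one actually grows slightly, because the compressor’s bookkeeping overhead is added to data it cannot shrink.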

Shannon’s definition of Information means that, the more

compressible a signal is, the less Information it has, for knowledge of the

associated patterns reduces the degree of ‘surprise’ we have when we

analyze the signal. By the same token, the less compressible a signal is,

the more Information it contains, for our inability to recognize

underlying patterns renders many ‘pixels’ in it unexpected and,

therefore, ‘surprising.’ When I use the word ‘Information’ in Shannon’s

sense, I capitalize it, as I have already been doing.
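Incidentally, this compressibility-based notion is how the ‘signal diversity’ scores in the psychedelic-neuroimaging literature are operationalized: they are built on Lempel-Ziv complexity, which counts the distinct phrases needed to parse a binarized signal. A minimal sketch of the classic LZ76 phrase count (illustrative only, not any study’s exact pipeline):

```python
import random

def lz76_phrase_count(s):
    """Count the phrases in a Lempel-Ziv (1976) parsing of string s.
    Regular signals parse into few phrases; random ones into many."""
    i, count, n = 0, 0, len(s)
    while i < n:
        length = 1
        # Extend the current phrase while it already occurs earlier in s.
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

regular = "01" * 64                # a chessboard-like binary signal
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(128))  # TV-static-like

print(lz76_phrase_count(regular))  # 3: the repetition collapses
print(lz76_phrase_count(noisy))    # many more phrases: less compressible
```

A high score on such a measure thus signals high Shannon Information in the sense defined above, i.e. low compressibility; it says nothing about meaning.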

Now, Shannon’s definition of ‘Information’ is a very technical one,

invented for very specific purposes in communications engineering:

namely, to calculate the minimal bandwidth of the communication

channel required to transmit the message after compression. It doesn’t—

and was never meant to—replace the colloquial use of the word. In the

colloquial sense, the word ‘information’ (this time not capitalized) means

the amount of semantic content of a message or signal. This way, a

message or signal has a lot of information if it means a lot. On the other

hand, a message that means nothing has no information.

The crucial thing to notice here is that, in a very important sense,

Information and information are opposites. A completely random and

incompressible signal has maximum Information, but no information;

for a random signal means nothing: it has no discernible structures or

patterns that could be recognized and therefore unlock cognitive

associations. TV static has near-maximum Shannon Information, but it


means nothing. Therefore, it has no information in the colloquial sense,

this being the reason why we don’t sit in the living room to watch TV

static; instead, we watch TV programs, which have a lot of recognizable

—and, therefore, compressible—patterns in the form of objects, people,

and events. As such, a signal with a lot of information has, by definition,

lots of recognizable patterns, therein residing its meaning. Yet—and

precisely for this reason—it has relatively little Information in Shannon’s

sense.

When claiming that psychedelics increase the amount of Information

in the brain, the proponents of the ‘entropic brain hypothesis’ are using

Shannon’s technical definition of Information. But when claiming that an

increase in the information content of the brain accounts for the richness

and intensity of the psychedelic experience, they can only be appealing

to the colloquial definition of information. Alas, these two denotations

not only aren’t the same, they effectively are opposites. The proponents’

conflation of the different meanings of the word ‘information’ renders

their entire logic nonsensical. They seem to stick to the mere word

without understanding what it means in different contexts. The intuitive

appeal of their hypothesis is thus no more than a linguistic phantasm.

Indeed, Shannon’s Information was defined for the purpose of

communications, as made clear in the very title of his seminal paper. It is

only when we are dealing with communications that we want to know

how compressible a signal is—i.e., how much Information it has—so as to

evaluate the minimal bandwidth of the communication channel required

to transmit said signal. But when it comes to brain activity, nothing is

being communicated; nothing is being transmitted through a channel; the

activity already arises where it needs to be. So to apply Shannon’s

definition of Information here is clearly inappropriate, at best naïve, and

surely misleading.

Moreover, when a subject describes a psychedelic experience as rich

and intense, what the subject means is that the experience has a lot of

semantic content; i.e., it means a lot to the subject, unlocking many

associative links in a cognitive chain reaction. This richness of meaning

is evoked by recognizable cognitive structures and patterns, which is the

opposite of entropy. After all, a psychedelic experience isn’t random or

unstructured; it isn’t akin to TV static. If it were, it precisely wouldn’t be

described as rich or intense, but mind-numbingly boring instead; for

there is nothing more devoid of evocative semantic content than TV

static. A psychedelic ‘trip’ is so unfathomably rich and intense precisely

because it has relatively little Shannon Information, and a whole lot of


information in the colloquial sense. Random, entropic brain activity is

thus precisely the opposite of what one would expect under physicalist

premises; provided, of course, that one actually understands information

theory. Just about any other physicalist account of the psychedelic experience would be less implausible.

I published this criticism of the Entropic Brain Hypothesis (EBH) on

the website of the Institute of Art and Ideas (IAI), on the 21st of June

2023, under the title “Brain noise doesn’t explain consciousness: A

psychedelic experience isn’t akin to TV static.” On the 30th of June

2023, Prof. David Nutt—the most senior member of the team that

originally proposed the EBH—replied in the same venue under the title

“David Nutt: entropy explains consciousness: We don’t need mysticism

to explain psychedelic experience.” The most conspicuous fact about his

answer is that, despite the title chosen by the IAI, Prof. Nutt didn’t seem

to even try to defend the EBH from my criticism, opting, instead, to point

to other fuzzier and even less empirically substantiated physicalist

accounts of the psychedelic experience (allow me to ignore his allusion

to mysticism, for it is not deserving of commentary). As I shall discuss in

the next chapter, this constant switching to other vague accounts, every

time one particular account is substantially criticized, renders

Physicalism impossible to pin down and, therefore, meaningless. Be that

as it may, it appears that even its very creators aren’t prepared to

explicitly defend the EBH from the criticism above, which I suppose is

telling.

But even if we ignore this entire point and pretend, for the sake of

argument, that Information and information are the same thing, the EBH

still has no legs for other obvious reasons. I’ve discussed this ad

nauseam in previous writings, so I shall limit myself to a mere summary

here.

Decades of research in the neuroscience of consciousness have

demonstrated consistent correlations between patterns of brain activity

and reported inner experience. Under Physicalism, this suggests that the

only plausible account of experience is brain activity. But if the EBH

were correct, it would imply that, in the case of psychedelics alone,

something else entirely must account for experience. What is the

likelihood that there are two completely different brain mechanisms that

generate experience under physicalist premises? One cannot defend

Physicalism by proposing a completely different theory of consciousness

for each different set of data, as this would be grotesquely inflationary


and render the scientific implications of Physicalism unfalsifiable to the

point of being meaningless.

Moreover, the increase in brain noise measured during the psychedelic state—pompously called ‘complexity’ and ‘diversity’ by the proponents, labels that mislead casual readers into concluding that psychedelics induce more ‘complex’ or ‘diverse’ brain activity in the colloquial sense, when the very opposite is the case—is ludicrously minute: it averages 0.005 on a scale that runs from 0 to 100! (See the paper

“Increased spontaneous MEG signal diversity for psychoactive doses of

ketamine, LSD and psilocybin,” by Michael M. Schartner et al.,

published on 19 April 2017 in Scientific Reports.) The proponents’

defense here is that, minute as it is, the effect is still statistically

significant. But this misses the point entirely: statistical significance—an

arbitrary threshold as it is—only means that the effect probably isn’t a chance fluctuation in the data; it says precisely nothing about

the strength of the effect. And the strength of the effect is key, for the

proponents are trying to account for the mind-boggling richness and

intensity of the psychedelic experience—a very, very large subjective

effect—in terms of a ludicrously minute physiological effect. This

stretches plausibility.
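The gap between significance and effect strength is easy to see numerically; a toy sketch in Python, with illustrative numbers that are not taken from the study:

```python
import math

# A minute effect can still be 'statistically significant' with enough
# data: the z-statistic grows with the square root of the sample size,
# regardless of how weak the effect itself is.
effect = 0.005          # tiny mean difference, in standard-deviation units
n = 1_000_000           # a very large number of samples
z = effect * math.sqrt(n)

print(z)  # 5.0: far beyond the conventional 1.96 threshold
```

With enough samples, an arbitrarily weak effect clears any significance threshold; significance certifies detectability, not magnitude.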
