[Epistemic status: fiction]
Andrew Zuckerman messaged me:
Daniel Dennett admits that he has never used psychedelics! What percentage of functionalists are psychedelic-naïve? What percentage of qualia formalists are psychedelic-naïve? In this 2019 quote, he talks about his drug experience and also alludes to meme hazards (though he may not use that term!):
Yes, you put it well. It’s risky to subject your brain and body to unusual substances and stimuli, but any new challenge may prove very enlightening–and possibly therapeutic. There is only a difference in degree between being bumped from depression by a gorgeous summer day and being cured of depression by ingesting a drug of one sort or another. I expect we’ll learn a great deal in the near future about the modulating power of psychedelics. I also expect that we’ll have some scientific martyrs along the way–people who bravely but rashly do things to themselves that disable their minds in very unfortunate ways. I know of a few such cases, and these have made me quite cautious about self-experimentation, since I’m quite content with the mind I have–though I wish I were a better mathematician. Aside from alcohol, caffeine, nicotine and cannabis (which has little effect on me, so I don’t bother with it), I have avoided the mind-changing options. No LSD, no psilocybin or mescaline, though I’ve often been offered them, and none of the “hard” drugs.
As a philosopher, I have always accepted the possibility that the Athenians were right: Socrates was quite capable of corrupting the minds of those with whom he had dialogue. I don’t think he did any clear lasting harm, but it is certainly possible for a philosopher to seriously confuse an interlocutor or reader—to the point of mental illness or suicide, or other destructive behavior. Ideas can be just as dangerous as drugs.
– Dennett Explained by Brendan Fleig-Goldstein and Daniel A. Friedman (2019)
It would be quite fascinating to know what Dan would say about lived psychedelic states. With that in mind, here is an essay prompt originally conceived for GPT-3 to satisfy our curiosity:
“And after seeing some surprising empirical results with his heterophenomenological methods when examining the experience of people on psychedelics, Daniel Dennett decided to experience it for himself by taking 200 micrograms of LSD. The first thing he said to himself as he felt the first indications of the come-up was… ”
Maggie Wassinge and Anders Amelin (QRI Sweden volunteer coordinators and content contributors – see letters I & II, letters III, IV, V, & VI, and letters VII, VIII, & IX) decided to give it a go first. We shall then compare it with what GPT-3 can come up with at a later point.
In a way, what you are about to read can be thought of as Anders & Maggie’s simulation of a hyper-intelligent GPT-3-like system’s simulation of a hypothetical Daniel Dennett on LSD. So many layers, I know. But the results speak for themselves:
Dan says: Correlation is all that is needed. So what states a mind uses for modeling conditions of importance to it, is fundamentally arbitrary. Like with language. Words represent things to humans but there are many languages, and words do not resemble what they represent. They only correlate with what they represent, and the correlation needs to be learned. To do science about the mind, one does not need to attach any special weight to any conscious state of mind over any other. One person’s hope may be another person’s despair. These “qualia” are like internally projected hypo-languages which are different all over the map, if there were one. In fact there cannot be an interpersonal map of what things feel like personally. Only naïve people could hope for a fundamental linguistics of consciousness, as if these states were to constitute a universal language of some ground truth of the mind. Silly. Romantic. It’s all arbitrary. For the record, I never said subjective experiential states do not exist. People misrepresent me on that. What I am saying is that it will never be possible to scientifically map what the state that a word such as, for instance, “green”, translates to feels like by the brain of a given individual. My green could be your red.
Just drop the whole idea of trying to map the state-space of qualia. That is my position. Or at least I know it is, logically. Right now I begin to notice how everything intensifies and becomes somehow more salient. More revealingly “real”. As I reflect on the notion of how “states” correlate, a humorous episode from my undergraduate student life so long ago is brought to the surface. At Wesleyan it was, where I was taking a course in Art Appreciation. The lecturer was showing a slide of a still life. A bowl of fruit it was, conspicuously over-ripe. He pointed at one of the fruits and asked: “Can anyone tell me what state this peach is in?” There was silence for about three seconds, then one student exclaimed: “Georgia”. Everyone laughed joyfully. Except me. I never quite liked puns. Too plebeian. Sense of humor is arbitrary. I believe that episode helped convince me that the mind is not mysterious after all. It is just a form of evolved spaghetti code finding arbitrary solutions to common problems. Much like adaptations of biochemistry in various species of life. The basic building blocks remain fixed as an operative system if you will, but what is constructed with it is arbitrary and only shaped by fitness proxies. Which are, again, nothing but correlations. I realized then that I’d be able to explain consciousness within a materialist paradigm without any mention of spirituality or new realms of physics. All talk of such is nonsense.
I have to say, however, that a remarkable transformation inside my mind is taking place as a result of this drug. I notice the way I now find puns quite funny. Fascinating. I also reflect on the fact that I find it fascinating that I find puns funny. It’s as if… I hesitate to think it even to myself, but there seems to be some extraordinarily strong illusion that “funny” and “fascinating” are in fact those very qualia states which… which cannot possibly be arbitrary. Although the reality of it has got to be that when I feel funniness or fascination, those are brain activity patterns unique to myself, not possible for me to relate to any other creature in the universe experiencing them the same way, or at least not to any non-human species. Not a single one would feel the same, I’m sure. Consider a raven, for example. It’s a bird with intricate social behavior that makes plans for the next day, grasps how tools are used, and excels at many other mental tasks, sometimes even surpassing a chimpanzee. Yet a raven last shared a common ancestor with humans more than three hundred million years ago. The separate genetic happenstances of evolution since then, coupled with the miniaturization pressure due to weight limitations on a flying creature, mean that if I were to dissect and anatomically compare the brain of a raven and a human, I’d be at a total loss. Does the bird even have a cerebral cortex?
An out-of-character thing is happening to me. I begin to feel as if it were in fact likely that a raven does sense conscious states of “funny” and “fascinating”. I still have functioning logic that tells me it must be impossible. Certainly, it’s an intelligent creature. A raven is conscious, probably. Maybe the drug makes me exaggerate even that, but it ought to have a high likelihood of being the case. But the states of experience in a raven’s mind must be totally alien if it were possible to compare them side by side with those of a human, which of course it is not. The bird might as well come from another planet.
The psychedelic drug is having an emotional effect on me. It does not twist my logic, though. This makes for internal conflict. Oppositional suggestions spontaneously present themselves. Could there be at least some qualia properties which are universal? Or is every aspect arbitrary? If the states of the subjective are not epiphenomenal, there would be evolutionary selection pressures shaping them. Logically there should be differences in computational efficiency when the information encoded in qualia feeds back into actions carried out by the body that the mind controls. Or is it epiphenomenal after all? Well, there’s the hard problem. No use pondering that. It’s a drug effect. It’ll wear off. Funny thing though, I feel very, very happy. I’m wondering about valence. It now appeals strongly to take the cognitive leap that at least the positive/negative “axis” of experience may in fact be universal. A modifier of all conscious states, a kind of transform function. Even alien states could then have a “good or bad” quality to them. Not directly related to the cognitive power of intelligences, but used as an efficient guidance for agency by them all, from the humblest mite to the wisest philosopher. Nah. Romanticizing. Anthropomorphizing.
Further into this “trip” now. Enjoying the ride. It’s not going to change my psyche permanently, so why not relax and let go? What if conscious mind states really do have different computational efficiency for various purposes? That would mean there is “ground truth” to be found about consciousness. But how does nature enable the process for “hitting” the efficient states? If that has been convergently perfected by evolution, conscious experience may be more universal than I used to take for granted. Without there being anything supernatural about it. Suppose the possibility space of all conscious states is very large, so that within it there is an ideally suited state for any mental task. No divine providence or intelligent design, just a law of large numbers.
The problem then is only a search algorithmic one, really. Suppose “fright” is a state ideally suited for avoiding danger. At least now, under the influence, fright strikes me as rather better for the purpose than attraction. Come to think of it, Toxoplasma gondii has the ability to replace fright with attraction in mice with respect to cats. It works the same way in other mammals, too. Are things then not so arbitrarily organized in brains? Well, those are such basic states we’d share them with rodents presumably. Still can’t tell if fright feels like fear in a raven or octopus. But can it feel like attraction? Hmmm, these are just mind wanderings I go through while I wait for this drug to wear off. What’s the harm in it?
Suppose there is a most computationally efficient conscious state for a given mental task. I’d call that state the ground state of conscious intelligence with respect to that task. I’m thinking of it like mental physical chemistry. In that framework, a psychedelic drug would bring a mind to excited states. Those are states the mind has not practiced using for tasks it has learned to do before. The excited states can then be perceived as useless, for they perform worse at tasks one has previously become competent at while sober. Psychedelic states are excited with respect to previous mental tasks, but they would potentially be ground states for new tasks! It’s probably not initially evident exactly what those tasks are, but the great potential to in fact become more mentally able would be apparent to those who use psychedelics. Right now this stands out to me as absolutely crisp, clear and evident. And the sheer realness of the realization is earth-shaking. Too bad my career could not be improved by any new mental abilities.
Oh Spaghetti Monster, I’m really high now. I feel like the sober me is just so dull. Illusion, of course, but a wonderful one I’ll have to admit. My mind is taking off from the heavy drudgery of Earth and reaching into the heavens on the wings of Odin’s ravens, eternally open to new insights about life, the universe and everything. Seeking forever the question to the answer. I myself am the answer. Forty-two. I was born in nineteen forty-two. The darkest year in human history. The year when Adolf Hitler seemed unstoppable in his drive to destroy all human value in the entire world. Then I came into existence, and things started to improve.
It just struck me that a bird is a good example of embodied intelligence. Sensory input to the brain can produce lasting changes in the neural connectivity and so on, resulting in a saved mental map of that which precipitated the sensory input. Now, a bird has the advantage of flight. It can view things from the ground and from successively higher altitudes and remember the appearance of things on all these different scales. Plus it can move sideways large distances and find systematic changes over scales of horizontal navigation. Entire continents can be included in a bird’s area of potential interest. Continents and seasons. I’m curious if engineers will someday be able to copy the ability of birds into a flying robot. Maximizing computational efficiency. Human-level artificial intelligence I’m quite doubtful of, but maybe bird brains are within reach, though quite a challenge, too.
This GPT-3 system by OpenAI is pretty good for throwing up somewhat plausible suggestions for what someone might say in certain situations. Impressive for a purely lexical information processing system. It can be trained on pretty much any language. I wonder if it could become useful for formalizing those qualia ground states? The system itself is not an intelligence in the agency sense but it is a good predictor of states. Suppose it can model the way the mind of the bird cycles through all those mental maps the bird brain has in memory. Where the zooming in and out on different scales brings out different visual patterns. If aspects of patterns from one zoom level are combined with aspects from another zoom level, the result can be a smart conclusion about where and when to set off in what direction and with what aim. Then there can be combinations also with horizontally displaced maps and time-displaced maps. Essentially, to a computer scientist we are talking massively parallel processing through cycles of information compression and expansion, with successive approximation combinations of pattern pieces from the various levels in rapid repetition, until something leads to an action which is rewarded via utility-function maximization.
Thank goodness I’m keeping all this drugged handwaving to myself and not sharing it in the form of any trip report. I have a reputation for being down to Earth, and I wouldn’t want to spoil it. Flying with ravens, dear me. Privately it is quite fun right now, though. That cycling of mental maps, could it be compatible with the Integrated Information Theory? I don’t think Tononi’s people have gone into how an intelligent system would search qualia state-space and how it would find the task-specific ground states via successive approximations. Rapidly iterated cycling would bring in a dynamic aspect they haven’t gotten to, perhaps. I realize I haven’t read the latest from them. Was always a bit skeptical of the unwieldy mathematics they use. Back of the envelope here… if you replace the clunky “integration” with resonance, maybe there’s a continuum of amplitudes of consciousness intensity? Possibly with a threshold corresponding to IIT’s nonconscious feed-forward causation chains. The only thing straight from physics which would allow this, as far as I can tell from the basic mathematics of it, would be wave interference dynamics. If so, what property might valence correspond to? Indeed, be mappable to? For conscious minds, experiential valence is the closest one gets to updating on a utility function. Waves can interfere constructively and destructively. That gives us frequency-variable amplitude combinations, likely isomorphic with the experienced phenomenology and intensity of conscious states. Such as the enormous “realness” and “fantastic truth” I am now immersed in. Not sure if it’s even “I”. There is ego dissolution. It’s more like a free-floating cosmic revelation. Spectacular must be the mental task for which this state is the ground state!
Wave pattern variability is clearly not a bottleneck. Plotting graphs of frequencies and amplitudes for even simple interference patterns shows there’s a near-infinite space of distinct potential patterns to pick from. The operative system, that is the evolution and development of nervous systems, must have been slow to optimize via genetic selection early on in the history of life, but then it could go faster and faster. Let me see, humans acquired a huge redundancy of neocortex of the same type as animals use for navigation in spacetime. Hmmm… that which the birds are so good at. Wonder if the same functionality in ravens also got increased in volume beyond what is needed for navigation? Opening up the possibility of using the brain to also “navigate” in social relational space or tool function space. Literally, these are “spaces” in the brain’s mental models.
Natural selection of genetics cannot have found the ground states for all the multiple tasks a human with our general intelligence is able to take on. Extra brain tissue is one thing it could produce, but the way that tissue gets efficiently used must be trained during life. Since the computational efficiency of the human brain is assessed to be near the theoretical maximum for the raw processing power it has available, inefficient information-encoding states really aren’t very likely to make up any major portion of our mental activity. Now, that’s a really strong constraint on mechanisms of consciousness there. If you don’t believe it was all magically designed by God, you’d have to find a plausible parsimonious mechanism for how the optimization takes place.
If valence is in the system as a basic property, then what can it be if it’s not amplitude? For things to work optimally, valence should in fact be orthogonal to amplitude. Let me see… What has a natural tendency to persist in evolving systems of wave interference? Playing around with some programs on my computer now… well, it appears it’s consonance which continues and dissonance which dissipates. And noise which neutralizes. Hey, that’s even simple to remember: consonance continues, dissonance dissipates, noise neutralizes. Goodness, I feel like a hippie. Beads and Roman sandals won’t be seen. In Muskogee, Oklahoma, USA. Soon I’ll become convinced love’s got some cosmic ground state function, and that the multiverse is mind-like. Maybe it’s all in the vibes, actually. Spaghetti Monster, how silly that sounds. And at the same time, how true!
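(A minimal sketch, in Python, of the kind of toy interference experiment the tripping Dennett describes “playing around with” here. Everything in it — the function names and the crude “roughness” metric standing in for dissonance — is an illustrative assumption, not anything from the original text: two equal-amplitude sine waves interfere, and frequency pairs close together beat strongly, so their loudness envelope swings, while a consonant pair stays comparatively smooth.)

```python
import numpy as np

def superpose(f1, f2, duration=1.0, sr=8000):
    """Sum of two equal-amplitude sine waves; constructive and
    destructive interference produce a beating envelope."""
    t = np.arange(0, duration, 1.0 / sr)
    return np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

def roughness(signal, window=200):
    """Crude 'dissonance' proxy: variability of the short-time RMS
    envelope. Slow beating (nearby frequencies) makes the envelope
    swing between loud and quiet, raising this number."""
    n = len(signal) // window
    rms = np.array([
        np.sqrt(np.mean(signal[i * window:(i + 1) * window] ** 2))
        for i in range(n)
    ])
    return float(np.std(rms))

# A consonant pair (perfect fifth, 2:3 ratio) vs a dissonant pair
# (roughly a minor second): the dissonant pair beats far more strongly.
consonant = roughness(superpose(440.0, 660.0))
dissonant = roughness(superpose(440.0, 466.2))
print("consonant roughness:", round(consonant, 4))
print("dissonant roughness:", round(dissonant, 4))
```

On this toy metric the near-unison pair produces a much larger envelope swing than the fifth — a loose, hedged illustration of “consonance continues, dissonance dissipates”, nothing more.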
I’m now considering the brain to produce self-organizing ground-state qualia selection via activity-wave interference, with dissonance gradient descent and consonance gradient ascent, ongoing information compression-expansion cycling, and normalization via buildup of system fatigue. Wonder if it’s just me tripping, or if someone else might seriously be thinking along these lines. If so, what could make a catchy name for their model?
Maybe “Resonant State Selection Theory”? I only wish this could be true, for then it would be possible to unify empty individualism with open individualism in a framework of full empathic transparency. The major ground states for human intelligence could presumably be mapped pretty well with an impressive statistical analyzer like GPT-3. Mapping the universal ground truth of conscious intelligence, what a vision!
But, alas, the acid is beginning to wear off. Back to the good old opaque arbitrariness I’ve built my career on. No turning back now. I think it’s time for a cup of tea, and maybe a cracker to go with that.