State-space of drug effects

A project I have always been interested in doing is creating a data-driven representation of the state-space of subjective effects caused by drugs of all types.

Now I have the know-how for doing that. I just need to gather the data. Help me by answering this 90-second anonymous survey, in which you are asked to rate a drug you’ve taken. If you are feeling generous, answer it multiple times, choosing a different drug each time.

The survey is here, and it will be open until May 31st.

Psychophysics for Psychedelic Research: Textures

In this post I will provide an account of my personal research project to understand the algorithms that underlie human visual pattern-recognition. This project is multidisciplinary in nature, combining paradigms from three fields: (1) the analysis and synthesis of textures, (2) psychophysics, and (3) psychedelic research. I will explain in detail how these areas can synergistically help us understand the computational properties of consciousness. In the process of doing so I will describe some of the work I have done in this direction.

tl;dr: With texture synthesis algorithms we can control the statistical features present in textures. By using an odd-one-out paradigm where participants have to find the “different texture” we can identify the statistical signatures of the visual patterns people can perceive. Collecting these signatures under various states of consciousness will reveal the information processing limitations of visual experience. It may turn out that some patterns can only be seen on LSD (psilocybin, mescaline, etc.), and this information will inform a general theory of vision’s algorithms, expanding the scope of what we have studied so far and suggesting relevant applications of psychedelic consciousness.

Introduction to Spatial Patterns

The world is patterned. In fact, it is so patterned that it is difficult to identify natural surfaces with no perceptible regularities. The grass, the trunks of trees, the surface of rocky mountains, the dancing and dissolving of the clouds: all of these natural scenes are full of regularities. For hundreds of millions of years animals on this planet have existed in an environment where regularities are not inconsequential: being able to use or detect camouflage is a matter of life or death for some species. The insects that hide between the leaves pretending to be part of the scene are in an evolutionary arms race against predators and their sensory apparatus. (Here is a neat collection of insect camouflage.) Arguably, there are strong evolutionary selection pressures that push predators’ visual systems to adapt to recognize the differences between the scene’s visual statistics and the prey’s body appearance.

Other widespread examples of the evolutionary relevance of pattern recognition abound: birds may take advantage of the look of cloud formations to determine whether they should fly or find refuge, herbivores may seek out only plants with specific visual properties to avoid poisonous lookalikes and parasites, and the health of potential mates can be assessed by the uniformity of their fur patterns. You get the idea.

Not surprisingly, today you can look at a plain rock and notice many visual properties through the patterns you perceive. Unfortunately, something has kept us quiet about this aspect of our perception: most of these properties are hard to verbalize. Often, you will be able to tell two kinds of rocks apart by grasping the subtle visual differences between them, yet still be unable to explain what makes them different.

What exactly is going on in your mind/brain when you are recognizing characteristic features in textures? We don’t know how the information is processed, why we perceive the features we perceive, or even how the various features are put together in a unified (or semi-unified) conscious representation. The hints we do have, however, are precious.

Receptive Fields

A big hint we can build on is that many neurons in the primary visual cortex (of cats, monkeys, and probably all mammals) respond to visual stimuli in specific areas of the visual field. For a given neuron, the shape of this region is an instance of a well-studied canonical function, as shown in the images below. The area of the visual field a neuron best responds to (by becoming excited or inhibited) is called its receptive field, and the canonical function is the Gabor filter.

As far back as the early 60s, research has been conducted to map the receptive fields of neurons by inserting electrodes into the brains of animals and presenting them with visual stimuli of lines and shapes. In 1961, D. H. Hubel and T. N. Wiesel showed for the first time that neurons in a cat’s cortex and lateral geniculate have receptive fields that look like this:

[Figure: receptive field maps from Hubel and Wiesel (1961)]

The crosses indicate regions in which a stimulus excites the neuron’s activity, while the triangles represent regions in which a stimulus inhibits it. Due to the arrangement of these excitatory and inhibitory regions, these neurons functionally work as edge-detectors. A computer rendering of these receptive fields looks like this:

[Figure: computer rendering of Gabor-like receptive fields]

Since then a tremendous amount of research has repeatedly confirmed the existence of such neurons, and also uncovered a large number of more complex receptive fields. Some neurons even respond specifically to abstract concepts and high-level constructs. More recently, the simple receptive fields shown above have been modeled as Gabor filters in quantitative simulations. This, in turn, has been successfully used to build brain activity decoders that reconstruct the image that a person is seeing by assuming that the activity of fMRI’s voxels approximately matches the added activity of neurons with Gabor receptive fields (see: Identifying natural images from human brain activity by Kay, Naselaris, Prenger and Gallant).
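
For concreteness, here is a minimal sketch of how a Gabor filter of this kind can be rendered numerically. The parameter values are illustrative assumptions, not fits to any recorded neuron:

```python
import numpy as np

def gabor(size=64, wavelength=16.0, theta=0.0, sigma=8.0, phase=0.0):
    """Render a 2D Gabor filter: a sinusoidal grating under a Gaussian
    envelope. Positive regions are excitatory and negative regions
    inhibitory, mirroring the crosses and triangles in the maps above."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    # Rotate the coordinate frame so the grating is oriented at angle theta
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    grating = np.cos(2.0 * np.pi * x_theta / wavelength + phase)
    return envelope * grating

kernel = gabor(theta=np.pi / 4)  # an oblique edge-detector
```

Convolving an image with a bank of these kernels at several orientations and scales gives a crude first approximation of the population response described above.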

Thus we know that a large number of neurons in our visual cortex have Gabor receptive fields, and that the collective activity of these neurons contains enough information to at least approximately reconstruct the image a person is seeing (well enough to identify it from a pool of candidates).

We can’t jump from these findings alone to a global theory of visual processing. That is, without also considering what people actually experience. It may turn out, for example, that the activity of the visual cortex contains a lot of information that can be decoded using machine learning techniques applied to fMRI voxel brightness. And yet, simultaneously, it could be that people do not consciously represent all of the decodable information.

Likewise, a priori we cannot rule out the possibility that some of the information we consciously experience is not actually decodable using brain activity alone. (A quick remark: this may be the case even if one assumes physicalism. Why this is the case will be explained in a future article).

To illustrate this point we can consider the information available in the retina and before it. The light that reaches the outer surface of our eyes contains all of the information available to our mind/brain to instantiate our visual experience. Yet there is a lot of information there that is ultimately irrelevant to our conscious experience. For instance, there is infrared and ultraviolet light, as well as light that does not make it to our retina, light that fails to elicit an action potential, and so on. If you can discriminate between a really hot piece of metal and a cold one using the infrared signature of the light that reaches the eye, you have certainly not shown that we perceive infrared light or that we use it to make distinctions. It merely means that such information is sufficient. We wouldn’t yet know that we actually use it or that it shapes our experience.

But how exactly do we develop an experiment to infer the statistical properties represented by our experience? Here is where the analysis and synthesis of textures becomes relevant.

Analysis and Synthesis of Textures

The idea of using oriented Gabor-like filters (also known as “steerable pyramids”) to analyze and synthesize visual textures was, as far as I know, first proposed by David J. Heeger and James R. Bergen in “Pyramid-based texture analysis/synthesis.” Texture analysis in this context means the use of algorithms to characterize the properties of textures, capturing what makes them unique. Texture synthesis, in turn, refers to the application of texture analysis to produce an arbitrarily large patch of a synthesized (synthetic) texture, such that the synthetic and original textures are as indistinguishable as possible. Of course, whether something is “indistinguishable” or not is to a certain degree subjective. Here the criterion for indistinguishability between the original and the synthetic textures is whether a person comparing them side by side could confuse them. In the following section I address how to operationalize and formalize the indistinguishability between patterns using psychophysics.

This particular texture synthesis algorithm works by forcing a white noise image (of any size) to conform to the statistics obtained in the texture analysis step. This is done iteratively, matching the histograms of the synthesized canvas to the various statistics computed from the original texture. I recommend reading the paper to gain a better grasp of what the algorithm does, and to see some stunning examples of the output of this algorithm.
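
To make the iterative matching concrete, here is a toy single-band sketch. It only matches the raw pixel histogram; the actual Heeger–Bergen algorithm also matches the histogram of every subband of a steerable pyramid, and it is the interaction between those constraints that makes iteration necessary:

```python
import numpy as np

def match_histogram(source, target):
    """Remap the values of `source` so their histogram matches `target`'s.
    Assumes both arrays have the same number of pixels."""
    matched = np.empty(source.size)
    # Assign the sorted target values to the rank positions of the source
    matched[np.argsort(source, axis=None)] = np.sort(target, axis=None)
    return matched.reshape(source.shape)

def synthesize(texture, n_iter=5, seed=0):
    """Toy Heeger/Bergen-style loop: start from white noise and repeatedly
    force its statistics to conform to those of the analyzed texture."""
    rng = np.random.default_rng(seed)
    canvas = rng.standard_normal(texture.shape)
    for _ in range(n_iter):
        canvas = match_histogram(canvas, texture)
    return canvas
```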

The use of steerable pyramids was later refined by Portilla & Simoncelli‘s texture synthesis algorithm, which currently plays an important role in my research. This algorithm extends Heeger and Bergen’s by including additional statistics to enforce, which are (roughly) computed by measuring the autocorrelation between the various components of the steerable pyramid texture representation. Below you can see two pairs of pictures (two originals and their corresponding same-sized synthetic versions) that I recreated using Portilla & Simoncelli’s matlab code:

As you can see, the original and synthetic images are fairly similar to each other. Close inspection is sufficient to notice telling differences, but if you only use your peripheral vision it is very challenging to see major differences.

Psychophysics

Psychophysics is the study of the relationship between physical stimuli and experience (and often behavior). Thanks to psychophysics we now have:

  1. A good approximate map of the phenomenal space of color (CIELAB)
  2. A strong grasp of the nature of color metamers, which in turn underlies all of our color display technologies.
  3. The ability to predict the subjective intensity of the experience elicited by stimuli as a function of the energy of the stimulus (see the Weber–Fechner law, written out below).
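
As a refresher on item 3, the Weber–Fechner law states that perceived intensity grows logarithmically with stimulus magnitude:

$$ p = k \ln\left(\frac{S}{S_0}\right) $$

where $S$ is the stimulus magnitude, $S_0$ is the detection threshold, and $k$ is a modality-specific constant. Equivalently (Weber’s law), the just-noticeable increment $\Delta S$ is a constant fraction of $S$.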

A very powerful idea in psychophysics is the use of just-noticeable differences (JND): Carry a bucket of water. How much water should I add to it so that you are capable of perceiving a difference in the weight of the bucket? Look at a pair of identical light sources. How much blue (in this case, the specific and pure frequency of light that elicits the blue qualia) can I add to one of the lights before you perceive them as being differently colored? Pinch your skin with two needles. How far apart can their points be before you perceive two needles rather than one?
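
A standard way to estimate a JND in practice is an adaptive staircase: make the difference smaller after each detection, larger after each miss, and average the levels at which the direction reverses. Below is a minimal sketch; the respond() callback is hypothetical and stands in for the participant’s answer on each trial:

```python
def staircase_jnd(respond, start=1.0, step=0.1, n_trials=60):
    """1-up/1-down staircase: raise the difference after a miss, lower it
    after a hit; it converges near the 50% detection point. `respond(delta)`
    is a hypothetical callback returning True when the participant notices
    a difference of size `delta`."""
    delta = start
    going_down = True
    reversals = []
    for _ in range(n_trials):
        detected = respond(delta)
        if detected != going_down:      # direction flipped: record a reversal
            reversals.append(delta)
            going_down = detected
        delta = max(delta - step, step) if detected else delta + step
    # Average the last few reversal levels as the JND estimate
    tail = reversals[-6:] or [delta]
    return sum(tail) / len(tail)
```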

In these cases, though, there is a natural (and sometimes unique) dimension along which the stimuli can be varied in order to compute the JND. What about visual textures? Here the problem becomes non-trivial. In what way should we change a texture? And how should we implement the change? There is no clear and obvious scale for describing textures. So what can we do?

Psychophysics of Textures (Take 1):

Before learning about steerable pyramid representations of textures, I developed a variety of psychophysical tests to identify the JND between textures. I did this by varying the parameters of images with ground-truth statistical properties. If you are curious, feel free to try the first iteration of my experimental paradigm. (Note: I am not collecting data from that experiment, so you should not expect to contribute to science by finishing the task. That said, you can have fun, and a pop-up window with your raw score will appear when you finish both tasks.) The average accuracy in the texture discrimination part is about 13/21 (with a chance performance of 3/21), while the performance in the numerical pattern completion task is about 3/5 (with a near 0/5 expected performance when answering randomly).

One of the statistical properties I varied in the stimuli was the variance of a 2D Gaussian process I implemented with a Python script and the magic of Gibbs sampling. Thus, a subset of the images I tested were complete noise except for the local autocorrelation created by the differently parametrized Gaussian process. Here are a couple of examples of such patterns:

These pictures, it turns out, are relatively easy to tell apart when the parameters are sufficiently different. There is a threshold of similarity in the variance parameter beyond which people cannot distinguish between them. A nice property of this particular method is that you can be sure that if people do recognize the differences, it is because they are somehow extracting features that are a direct consequence of a different value of the variance parameter.
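
I won’t reproduce the original script here, but a minimal sketch of the kind of Gibbs sampler involved might look as follows; the neighborhood structure and parameter names are illustrative assumptions:

```python
import numpy as np

def gibbs_texture(size=64, variance=0.5, sweeps=30, seed=0):
    """Sample a 2D Gaussian Markov random field by Gibbs sampling.
    Each pixel is redrawn from a normal distribution centered on the
    mean of its 4 neighbors (with toroidal wrap-around), so `variance`
    controls how tightly pixels track their neighborhood, i.e. the
    local autocorrelation of the resulting texture."""
    rng = np.random.default_rng(seed)
    img = rng.standard_normal((size, size))
    for _ in range(sweeps):
        for i in range(size):
            for j in range(size):
                neighborhood_mean = (
                    img[(i - 1) % size, j] + img[(i + 1) % size, j] +
                    img[i, (j - 1) % size] + img[i, (j + 1) % size]
                ) / 4.0
                img[i, j] = rng.normal(neighborhood_mean, np.sqrt(variance))
    return img
```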

These Gaussian-process textures instantiate the simplest statistical differences that can be created beyond the mean and standard deviation of the pixel values. To explore a wider variety of patterns, I also created textures with fairly complex parametrizable Turing machines (TMs). This approach, unfortunately, does not lend itself to a clear analysis. In brief, this is because changing a single parameter of such TMs can produce profoundly different results with unclear ground-truth properties:

What information could I gain from the fact that changing x, y or z parameter in a Turing machine by a, b, or c amounts enables accurate discrimination between textures? In principle one could try to explain the performance obtained by making an a posteriori analysis of the correlation between a variety of statistics measured in the textures. But since the textures were not created to have specific statistical properties, the distribution of such properties will not be ideal to find JNDs or discriminability thresholds.

Even if you do this, you will still have problems. The images differ in more ways than your texture analysis algorithm is capable of detecting, so the reason people can tell these textures apart cannot be recovered from the statistical differences you measure between them. At least not without being extremely lucky and hitting exactly the visual features that matter.

And here I was stuck for several months.

Psychophysics of Textures (Take 2):

After learning about the steerable pyramid model, the work of Eero Simoncelli and the state of the art in fMRI decoding of visual experience, I decided to shift gears and approach the problem with some of the best of the tools created so far.

It turns out that the concept of texture metamers had already been developed to describe the perceptual indistinguishability between textures in peripheral vision. Taken from here, the following two images are texture metamers. The images look the same when you center your vision on either of the central red dots. Close foveal inspection of the images, however, will reveal that they are very different pictures!

A specific study caught my attention, and I decided to replicate it as a final class project: Texture synthesis and perception: Using computational models to study texture representations in the human visual system by Benjamin J. Balas. I attempted to replicate some of the main results of that study using Mechanical Turk. A full account of the experimental procedure, results and discussion can be found in this wiki (written for Stanford’s Psych 221 – Applied Vision Systems). To see the actual experiment performed, try it out here: Replication (and the github repository).

The main idea goes as follows: If textural metamerism can be verified using a given texture synthesis algorithm, then we can be reasonably certain that the algorithm is capturing a large component of what makes textures different from a human point of view. In particular, Benjamin Balas noted that Portilla & Simoncelli’s algorithm could be “lesioned” by failing to enforce specific statistical features obtained in the texture analysis step. In this way, one can purposefully fail to match specific statistical features between the original and synthesized texture, and then measure how this partial texture synthesis algorithm affects the performance of texture discrimination in humans.
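
In code, a “lesion” is simply the full set of constraints minus one family of statistics. A minimal sketch; the constraint names below are illustrative placeholders rather than the exact statistics of the Portilla & Simoncelli model:

```python
# Illustrative constraint families; the real Portilla & Simoncelli set
# is larger and more finely subdivided.
ALL_CONSTRAINTS = {
    "marginals",               # pixel histogram statistics
    "autocorrelation",         # spatial correlations of the image
    "magnitude_correlation",   # correlations between subband magnitudes
    "phase_statistics",        # cross-scale phase relations
}

def lesioned_constraints(lesion):
    """Return the constraint set with one family removed; a synthesis run
    using this set will fail to enforce the lesioned statistics."""
    return ALL_CONSTRAINTS - {lesion}
```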

For illustration, here is a set of possible texture synthesis “lesions:”

The operationalization of the experiment used an “odd one out” paradigm: in each trial, three images are presented for 250 milliseconds at 3.5 degrees of visual angle from the fovea (each image being 2 degrees in diameter). Two of the three images come from the same group (e.g., two original textures, two with marginals removed, etc.). The remaining image is the odd one out. The study measures how often participants detect the odd one out depending on the statistical feature that is not enforced in the synthesized textures (for a more in-depth description, see the wiki).
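
As an aside on the geometry: translating the visual angles above into on-screen sizes depends on viewing distance and display density. A minimal sketch, with illustrative values for both (57 cm is a common convention because at that distance 1 degree subtends roughly 1 cm):

```python
import math

def deg_to_px(degrees, viewing_distance_cm=57.0, px_per_cm=38.0):
    """Convert degrees of visual angle into on-screen pixels."""
    size_cm = 2.0 * viewing_distance_cm * math.tan(math.radians(degrees) / 2.0)
    return size_cm * px_per_cm

stimulus_px = deg_to_px(2.0)      # each texture patch: 2 degrees in diameter
eccentricity_px = deg_to_px(3.5)  # patch centers: 3.5 degrees from fixation
```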

Overall, the performance of the participants in my replication was much closer to chance than Balas’ results. That said, qualitatively the replication was successful. Removing the marginals is the lesion that increases performance the most, with magnitude correlation in second place. All of the other conditions are at chance level (as far as my sample of n=60 participants with 60 trials each can discern).
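
For what it’s worth, here is a minimal way to check whether a condition’s accuracy is reliably above the 1-in-3 chance rate of the odd-one-out task. This is a sanity check, not necessarily the analysis used in the replication:

```python
from scipy.stats import binomtest

def p_above_chance(n_correct, n_trials, chance=1/3):
    """One-sided binomial test: how surprising is this many correct
    answers if the participant were guessing at the chance rate?"""
    return binomtest(n_correct, n_trials, p=chance, alternative="greater").pvalue

# e.g., 28 correct answers out of 60 trials against 1-in-3 guessing
print(p_above_chance(28, 60))
```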

As you can see, this specific paradigm is now much more robust than before. And the paradigm is not hard to translate for application in psychedelic research. Unfortunately, Portilla & Simoncelli’s algorithm only creates texture metamers for the peripheral vision in pre-attentive conditions. Upon central inspection, nearly every synthesized texture is at least somewhat distinguishable from the original texture.

The more interesting problem, as I see it, is found in our capacity to see differences between textures upon close and careful examination. This, I think, addresses more directly the subject of this blog. Namely, what is the information that we can represent and distinguish at the (resolution) peak of human consciousness? I imagine that there must be statistical features that we simply do not perceive even when we are looking at them directly and using all of our attention. What is the set of properties that our everyday consciousness can represent centrally?

Psychedelic Research

To understand how a machine works, it helps to know what happens when you break it. You can’t ignore the extra settings and claim you’ve got a theory of everything. Would we be satisfied with the work of neuroscientists and psychologists if they only studied people who are colorblind? They might claim that they have “the essentials” of vision, and that color is just a perturbation on the “optimal basics.” And yet, today we know that color plays a relevant computational role in visual discrimination. It suffices to mention that people with grapheme-color synesthesia show improved performance in “odd one out” identification tests like this one (find all the 2’s as fast as possible):

[Figure: grapheme-color synesthesia “find the 2’s” test]

This is because fast, low-level associations between graphemes and colors help them see quickly and clearly which graphemes are different. Likewise, every variety of conscious experience may potentially play a computationally relevant role in specific situations. A priori, no variety of consciousness can be dismissed as irrelevant.

Thus, to understand, model and engineer consciousness, we should not prematurely close off varieties of consciousness from study. Not only would that prevent us from learning about consciousness more generally; it may also mean that we never understand even what we do decide to study, simply because key pieces of the puzzle are found elsewhere.

Another point is that even though sober human vision is a special case of general vision (and general vision-like qualia), the space of general vision may still be relatively small. The general principles and conditions for vision to work in the first place may be somewhat restricted: unless the elements are placed just right, the visual system breaks down, at least when it comes to fulfilling a computationally meaningful purpose. Thus, psychedelic research on visual experience may help us quickly distinguish what is essential from what is incidental in vision.

By introducing a psychedelic substance into the nervous system of a person, a remarkable set of visual effects is produced. To anyone interested in seeing a good representation of the way in which various psychedelics affect a person’s conscious experience, I highly recommend Disregard Everything I Say’s entry on the visual components of a psychedelic experience (while you are at it, I recommend also checking out his/her post on the corresponding cognitive components of a psychedelic experience). To anyone who is psychedelic-drug-naïve, these images will provide a great intuition pump about the kind of beast we are facing (and hopefully inspire a sense of “WOW, you can represent some of what you experience after all!” in those who are less psychedelic-drug-naïve).

Given the same visual input presented to a particular person, a psychedelic substance will in many cases drastically shift the interpretation of such stimuli. Both the interpretation (specifically, what an image is about) and how the image looks and feels tend to vary in synchrony. Likewise, people feel more able to see personal issues in a new light by interpreting them with new schemas and from a different level of awareness. Personally, I suspect that there is a strong and measurable connection between the fluidity of interpretation of personal issues, and the fluidity of interpretation of visual experience. Perhaps all phenomenal constructs are affected in a similar way: By breaking down the previously enforced patterns of thought and perception and opening the way to seeing things differently.

All of the above, while probably true, is still far too vague for a scientific theory. Given our set of tools, and experimental paradigms, to me it makes a lot of sense to start studying the effects of psychedelics in terms of texture perception. As we develop more and better texture analysis/synthesis algorithms, we will acquire a larger repertoire of mathematical properties that describe what is seen during a psychedelic experience. My hope is that we will someday know exactly how to simulate a visual trip.

Why care about psychedelics? Evolution already created the optimal vision system!

Evolutionary selection pressures on perceptual systems do not guarantee that information processing tasks will be solved optimally. In fact, “optimal” only really makes sense in relation to some metric you choose. In a sense, everything created by the evolutionary process is optimal in that it locally maximizes inclusive fitness (admittedly it’s more complicated than that). But this is just a tautological notion: sure, evolution is optimal at doing what evolution does. Likewise, rocks have found the optimal way to be themselves. Our visual system works optimally, if you define optimal in terms of 20/20 every-day visual experience.

Instead, it makes more sense if we focus on the specific computational trade-offs that various resource allocation methods and designs provide. We can certainly predict that the particular set of algorithms that our visual system employs to detect visual patterns will satisfy some properties. For instance, they will be good as survival tools in the African Savanna. But just as for our hardwired tastes in food (and our default emotional palette), survival value in the ancestral environment is not necessarily what we currently need or want. And just as it would make sense to modify what we enjoy eating the most (moving away from sugar) to adapt to the current post-industrial environment, it may also turn out to be the case that our visual system is miscalibrated for the tasks we want to solve, and the joys and meaning we would like to experience today.

Psychedelics change our visual system in many ways, some of them more predictable than others. Some people report that small doses of psychedelics increase one’s overall visual acuity (this has yet to be verified empirically). Although counter-intuitive at first, this cannot be ruled out simply by appeal to evolution. After all, one of the main constraints placed on animals in natural environments is caloric consumption. Even if higher visual acuity is possible with your current brain, the marginal benefit of that acuity may still not surpass the marginal cost in calories of the extra brain activity.

Additionally, while higher doses of psychedelics tend to impair many aspects of visual perception (with people on Erowid explaining how extreme tracers can make it hard to walk), low and moderate doses do not have simple one-sided effects even when it comes to accurately representing the world around us. Rather than simply breaking up the pattern recognition capabilities of the visual system, small and moderate doses seem to also open up additional kinds of patterns for inspection. Pareidolia, for example, is greatly enhanced. Thus, “connecting dots and edges” to match the outline of higher-level phenomenologies (like faces in the mud) happens with ease. Whether this is good or bad depends on the context, and the specific task one is trying to solve.

Perceptual Enhancement via Psychedelics

We have theoretical and anecdotal reasons to suspect that psychedelics may turn out to be performance-enhancing in certain visual tasks. We also already have quantitative evidence that this is the case. In the 60s, Harman and Fadiman conducted a study on the creativity and problem-solving properties of psychedelics. The study included a paper-and-pencil component in which the following tests were administered: Purdue Creativity, the Miller Object Visualization, and the Witkin Embedded Figures. The authors conclude that “[m]ost apparent were enhanced abilities to recognize patterns, to minimize and isolate visual distractions, and to maintain visual memory in spite of confusing changes of form and color.” Specifically, the Witkin test is singled out as one in which the ingestion of mescaline produced a consistent performance improvement (p<0.01).

Example of a Witkin Embedded Figure. You have to determine if the left shape is in the right figure within 30 seconds.

In personal communication, Fadiman has said that these tests were mostly a waste of time: they used up one valuable hour of the problem-solving session. I disagree. After 50 years, those results are incredibly valuable to me and the next wave of computational researchers of the mind. I think that the recorded performance enhancement is a key piece of information. We certainly do not expect an enhancement across all areas of cognition and perception (we know, for example, that reaction time and verbal fluency are impaired). Identifying the kinds of tasks that do receive a boost can inform future research. In future posts I will explain my theory of why the Witkin Embedded Figures test in particular benefited from a psychedelic, and how we can create a new test that takes this into account.

New Possible Protocols for Psychedelic Research

Nowadays it is very hard to obtain the required permits and affiliations to conduct academic research on psychedelic consciousness. The key rate-limiting factor is the set of restrictions that apply to research with controlled substances. Thankfully, we do happen to be living at the beginning of a psychedelic renaissance. It is not hard to imagine that as psychedelics start to (re-)enter the mainstream in psychiatry, a large number of clinical trials will be conducted. In principle, a collaboration could be arranged between a psychophysics lab and a clinical research group to conduct psychophysical assessments during the course of treatment.

Clinical trials for medical applications of psychedelics, though, are still limited in scope and focus. Eventually the medical applications of psychedelics will be exhausted, and it will no longer be possible to enroll patients in psychophysical studies. Thus, either more permissive rules and regulations will emerge, or we will endure decades or centuries of unnecessary barriers to scientific discoveries about consciousness.

[Image: autocomplete suggestions for “psychedelic research”]

As it turns out, tens (if not hundreds) of millions of people have tried psychedelic substances. Interest is not slowing down, and no propaganda effort save for literal brain-washing will prevent people from developing a genuine interest in the field. There will be millions of volunteers and hundreds of thousands of willing researchers. As I envision in The Psychedelic Future of Consciousness, the future of consciousness studies will be nothing like what it is today. Once those who are interested are given the opportunity to study psychedelics seriously, the research panorama will look very different.

But What to Do in the Meantime?

Investigating the psychophysics of texture perception more thoroughly may require at least mild to moderate doses for noticeable effects, and a period of examination while acute effects are present. How can we overcome the hurdles facing any current academic investigation of this nature? This ultimately depends on the specific constraints required for the study. It is true that people would have to be high on LSD (or mescaline, mushrooms, 2C-B, etc.) while they perform the experiment. But who says that they have to conduct the experiment in your presence? That you have to give them the substance yourself? Without the typical background conditions assumed in psychophysics research labs, can we do anything else?

I suspect that we can do much better. We can come up with protocols that side-step current obstacles. We need to be creative. And I do have some ideas, with varying levels of plausibility, for how to implement psychedelic research in legal, viable and immediate ways. Consider, for example, how James Fadiman has been collecting hundreds of micro-dose reports via email without any difficulty for many years. He is taking advantage of the fact that the optimal format of micro-dose reports tends to be a “summary and retrospective” narrative. A single dose on a single day for a single person is not likely to be life-changing. So his particular line of research is well suited to the means available to him. Likewise, as I requested in the psychedelic experience of visual textures post, people’s subjective judgements of visual textures while high on LSD can be recorded online. These are just two examples of how non-mainstream research approaches can be taken to study psychedelics. Unfortunately, both of these protocols lack proper controls and standardized settings. But this is all I will say for now.

Stay tuned: In a future post I will propose a set of protocols for independent studies on psychedelics, including additional methods for studying psychedelic visuals.

Thanks for reading! 

If you would like to be a collaborator with me, please email me by finding my contact info in the contact section of this blog. We’ll take it from there.

Note: If any link is broken, please leave a comment and I’ll provide an updated version. Thanks!

Why not computing qualia?

Qualia is the given. In our minds, qualia comparisons and interactions are an essential component of our information processing pipeline. However, this is a particular property of the medium: Consciousness.

David Marr, a cognitive scientist and vision researcher, developed an interesting conceptual framework to analyze information processing systems. This is Marr’s three levels of analysis:

The computational level describes the system in terms of the problems that it can and does solve. What does it do? How fast can it do it? What kind of environment does it need to perform the task?

The algorithmic level describes the system in terms of the specific methods it uses to solve the problems specified on the computational level. In many cases, two information processing systems do the exact same thing from an input-output point of view, and yet they are algorithmically very different. Even when both systems have near identical time and memory demands, you cannot rule out the possibility that they use very different algorithms. A thorough analysis of the state-space of possible algorithms and their relative implementation demands could rule out the use of different algorithms, but this is hard to do.

The implementation level describes the system in terms of the very substrate it uses. Two systems that perform the same exact algorithms can still differ in the substrate used to implement them.

An abacus is a simple information processing system that is easy to describe in terms of Marr’s three levels of analysis. First, computationally: the abacus performs addition, subtraction and various other arithmetic computations. Then algorithms: those used to process information involve moving {beads} along {sticks} (I use ‘{}’ to denote that the sense of these words is about their algorithmic-level abstractions rather than their physical instantiation). And the implementation: not only can you choose between a metallic and a wooden abacus, you can also get your abacus implemented using people’s minds!

What about the mind itself? The mind is an information processing system. At the computational level, the mind has very general power: it can solve problems never before presented to it, and it can also design and implement computers to handle narrow problems more efficiently. At the algorithmic level, we know very little about the human mind, though various fields center on this level. Computational psychology models the algorithmic and computational levels of the mind, and psychophysics attempts to reveal the parameters of the algorithmic component of our minds and their relationship to parameters at the implementation level. When we reason about logical problems, we do so using specific algorithms; even counting is something that kids do with algorithms they learn.

The implementation level of the mind is a very tricky subject. There is a lot of research on the possible implementation of algorithms that a neural network abstraction of the brain can instantiate. This is an incredibly important part of the puzzle, but it cannot fully describe the implementation of the human mind. This is because some of the algorithms performed by the mind seem to be implemented with phenomenal binding: The simultaneous presence of diverse phenomenologies in a unified experience. When we make decisions we compare how pleasant each option would be. And to reason, we bundle up sensory-symbolic representations within the scope of the attention of our conscious mind. In general, all algorithmic applications of sensations require the use of phenomenal binding in specific ways. The mind is implemented by a network of binding tendencies between sensations.

A full theory of the mind will require a complete account of the computational properties of qualia. To obtain those we will have to bridge the computational and the implementation level descriptions. We can do this from the bottom up:

  1. An account of the dynamics of qualia (to be able to predict the interactions between sensations just as we can currently predict how silicon transistors will behave) is needed to describe the implementation level of the mind.
  2. An account of the algorithms of the mind (how the dynamics of qualia are harnessed to implement algorithms) is needed to describe the algorithmic level of the mind.
  3. And an account of the capabilities of the mind (the computational limits of qualia algorithms) is needed to describe the computational level of the mind.

Why not call it computing qualia? Computational theories of mind suggest that your conscious experience is the result of information processing per se. But information processing cannot account for the substrate that implements it; otherwise you are confusing the first and second levels with the third. Even a computer cannot exist without first making sure that there is a substrate that can implement it. In the case of the mind, the fact that you experience multiple pieces of information at once is information concerning the implementation level of your mind, not the algorithmic or computational level.

Isn’t information processing substrate-independent?

Not when the substrate has properties needed by the specific information processing system at hand. If you were to go to your brain and replace one neuron at a time with a functionally identical silicon neuron, at what point would your consciousness disappear? Would it fade gradually? Or would nothing happen? Functionalists would say that nothing should happen, since all the functions, locally and globally, are maintained throughout this process. But what if this is not possible?

If you start with a quantum computer, for example, then you have the problem that part of the computational horsepower is being produced by the very quantum substrate of the universe, which the computer harnesses. Replacing every component, one at a time, with a classical counterpart (a non-quantum chip with similar non-quantum properties) would actually lead to a crash. At some point the quantum part of the computer will break down and will no longer be useful.

Likewise with the mind. If the substrate of the mind is actually relevant from a computational point of view, then replacing a brain by seemingly functionally identical components could also lead to an inevitable crash. Functionalists would suggest that there is no reason to think that there is anything functionally special about the mind. But phenomenal binding seems to be it. Uniting pieces of information in a unitary experience is an integral part of our information processing pipeline, and precisely that functionality is the one we do not know how to conceivably implement without consciousness.


[Figure: textures. Caption: Implementing something with phenomenal binding, rather than implementing phenomenal binding (which is not possible).]


On a related note: if you build a conscious robot and you neglect phenomenal binding, your robot will have a jumbled-up mind. You need to incorporate phenomenal binding into the training pipeline. If you want your conscious robot to have a semantically meaningful interpretation of the sentence “the cat eats,” you need to be able to bind its sensory-symbolic representations of cats and of eating to each other.

David Pearce’s daily morning cocktail (2015)

“Amineptine c. 200mg, selegiline 2 x 5mg, resveratrol 2 x 250mg, turmeric, blueberry, green tea extract, flaxseed oil, soya protein isolate, creatine, l-carnosine, optimised folate, LEF “Life-Extension” mix, 200mg ibuprofen; and a selection of various Linwoods products added to my black coffee and sugar-free Red Bull. 

“This cocktail would not suit everyone.”

In case you were wondering. Obtained from here.

Psychedelic Perception of Visual Textures

On March 24th, 2015, Team Qualia Reverse Engineering (TQRE) went for a long walk within the Stanford campus and around Palo Alto. The purpose of this walk (the Pattern Walk) was to snap a picture of every interesting pattern (or texture) that crossed our path. The following gallery contains 74 of these patterns. They display a wide range of texture properties: natural/synthetic, regular/irregular, 2D/2.5D/3D, symmetric/asymmetric, structured/unstructured, etc.

Here are a few observations: 

  1. Human languages do not have the necessary vocabulary (and conceptual primitives) to talk about visual textures adequately. When two images belong to the same category (say, “plants” vs. “rock tilings”), and have roughly similar first-order statistics (mean, standard deviation, kurtosis, etc. of the RGB values; see the sketch after this list), there is relatively little else one can say about a texture in a way that a person would understand.
  2. Our visual system can recognize extraordinarily subtle properties that distinguish textures from one another. For instance, I bet you can recognize at an immediate experiential level the differences between pictures 61 and 62. But can you verbalize that difference?
  3. Mathematics, and statistics in particular, may provide helpful semantic seeds for describing patterns. Indeed, having a basic handle on a few mathematical concepts can leverage one’s ability to talk about the differences between textures. For example, compare images 43 and 44. They are perceptually very different. But how long would it take you to convey the difference to a random person? If there were a person who could only hear you, how would you signal that you are talking about 44 and not 43? If both of you know the concept of concavity, you might only need a few words! Without it, you’d be fairly lost.
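
For reference, here is a minimal sketch of the first-order statistics mentioned in point 1, computed over the pixels of a grayscale image array (the function is mine, not from any particular library):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def first_order_stats(img):
    """First-order (pointwise) statistics of pixel values. These ignore
    all spatial structure, which is exactly why they say so little about
    what makes two textures look different."""
    pixels = np.asarray(img, dtype=float).ravel()
    return {
        "mean": pixels.mean(),
        "std": pixels.std(),
        "skewness": skew(pixels),
        "kurtosis": kurtosis(pixels),
    }
```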

Fancifully, we may someday produce a good vocabulary that effectively allows us to talk about visual textures without having to share the same (or a similar) visual experience at the time.

In practice, we already have some vocabulary that accomplishes this, but it is very obscure and sufficiently technical that its widespread adoption is unrealistic. In particular, I encourage anyone interested in the topic to read “A Parametric Texture Model Based on Joint Statistics of Complex Wavelet Coefficients” by Javier Portilla and Eero P. Simoncelli. They analyze (and synthesize) visual textures by computing a set of highly descriptive statistical properties characteristic of the pattern in question.

As we will see in future posts, their model can be used to point out perceptible statistical features that are perceived as regularities by the human visual system. It may not be sexy to say “Hey ma’am, I really dig the cross-scale phase statistics of the pattern in your dress.” But for now, that’s what we have.

If you want to help me figure out how psychedelics affect your visual experience:

Please browse through these images by clicking on the first one and exploring the slideshow. See which images you like, which produce “odd or interesting visual effects,” and which “stand out,” however you want to define that. Feel free to comment right below any of the images (there is a comment section beneath each image when you click through them as a slideshow) to point out the peculiarities that you notice.

Critically, also include your state of consciousness in the comment. If you took LSD (or any visually-affecting substance) two hours ago (or you are still high), it would be great if you could point that out. Please explain how you think that your visuals are affecting your experience of the various patterns. Everyone loves to talk about their LSD visuals. Now you can do it all you want! And your efforts may actually enable us to understand the way psychedelics affect the algorithms of human vision 🙂

The best case scenario:

You would make comments on these images while sober, and then add comments while high on a psychedelic (it doesn’t have to be a psychedelic; it could be a dissociative, though typing might be particularly hard in that condition). Point out the main differences between the textures as perceived in each of the states of consciousness you happen to be in. If you do decide to follow the above protocol, please provide information about the specific substance(s) you consumed and how long ago you did so.

That is, do this if you were planning on taking a hallucinogen to begin with. Independently of that, baseline data is still very valuable, so do add comments about these patterns even if you are sober and plan on staying sober 🙂

In the following post I will explain how this Pattern Walk, the statistical analysis of visual textures, psychophysics and psychedelics can ultimately fit into the larger project of reverse-engineering the computational properties of consciousness.

Should humans wipe out all carnivorous animals so the succeeding generations of herbivores can live in peace?

That was the Quora question.

David Pearce’s answer:

“Sentient beings shouldn’t hurt, harm, and kill each other. This isn’t an argument for mass genocide against cannibals or carnivores, but for dietary reform. Humans are prone to status quo bias. So let’s do a thought-experiment. Imagine we stumble across an advanced civilisation that has abolished predation, disease, famine, and all the horrors of primitive Darwinian life. The descendants of archaic lifeforms flourish unmolested in their wildlife parks – free living but not “wild”. Should we urge scrapping their regime of compassionate stewardship of the living world – and a return to asphyxiation, disembowelling and being eaten alive? Or is a happy biosphere best conserved intact?

“Back here on Earth, the exponential growth of computer power entails that every cubic metre of the planet will shortly be accessible to surveillance and micro-management. In consequence, which life-forms and states of consciousness exist in tomorrow’s wildlife parks will be up to us. Mass-produced in vitro meat, the CRISPR revolution in biotechnology, and fertility regulation via cross-species immunocontraception mean there is no need to re-enact the traditional Darwinian horror story indefinitely. On some fairly modest assumptions, fertility regulation is ethically preferable to Malthusian methods of population control in humans and nonhuman alike.

“Critics might claim that a genetically tweaked vegetarian lion isn’t “truly” a lion. But this is like saying non-Caucasians who lack the 1% to 3% Neanderthal DNA typical of Caucasians aren’t “truly” human. Or vice versa. In short, beware naive species essentialism.

“For now this debate is fanciful. Before humans can start systematically helping sentient beings, we must stop systematically harming them. Thankfully, the in vitro meat revolution promises a world where factory-farms and slaughterhouses have been outlawed.  Before seriously contemplating high-tech Jainism, let’s shut the death factories.”