Personality Traits Are Continuous With Mental Illnesses

Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.

The Constitution of the World Health Organization


Whether pain takes the form of the eternal Treblinka of our Fordist factory farms and conveyor-belt killing factories, or whether it’s manifested as the cruelties of a living world still governed by natural selection, the sheer viciousness of the Darwinian Era is likely to horrify our morally saner near-descendants.

David Pearce in Brave New World? A Defense of Paradise-Engineering


Personality traits are continuous with mental illnesses

by Geoffrey Miller (originally posted on Edge in 2011)

We like to draw clear lines between normal and abnormal behavior. It’s reassuring, for those who think they’re normal. But it’s not accurate. Psychology, psychiatry, and behavior genetics are converging to show that there’s no clear line between “normal variation” in human personality traits and “abnormal” mental illnesses. Our instinctive way of thinking about insanity — our intuitive psychiatry — is dead wrong.

To understand insanity, we have to understand personality. There’s a scientific consensus that personality traits can be well-described by five main dimensions of variation. These “Big Five” personality traits are called openness, conscientiousness, extraversion, agreeableness, and emotional stability. The Big Five are all normally distributed in a bell curve, statistically independent of each other, genetically heritable, stable across the life-course, unconsciously judged when choosing mates or friends, and found in other species such as chimpanzees. They predict a wide range of behavior in school, work, marriage, parenting, crime, economics, and politics.

Mental disorders are often associated with maladaptive extremes of the Big Five traits. Over-conscientiousness predicts obsessive-compulsive disorder, whereas low conscientiousness predicts drug addiction and other “impulse control disorders”. Low emotional stability predicts depression, anxiety, bipolar, borderline, and histrionic disorders. Low extraversion predicts avoidant and schizoid personality disorders. Low agreeableness predicts psychopathy and paranoid personality disorder. High openness is on a continuum with schizotypy and schizophrenia. Twin studies show that these links between personality traits and mental illnesses exist not just at the behavioral level, but at the genetic level. And parents who are somewhat extreme on a personality trait are much more likely to have a child with the associated mental illness.

One implication is that the “insane” are often just a bit more extreme in their personalities than whatever promotes success or contentment in modern societies — or more extreme than we’re comfortable with. A less palatable implication is that we’re all insane to some degree. All living humans have many mental disorders, mostly minor but some major, and these include not just classic psychiatric disorders like depression and schizophrenia, but diverse forms of stupidity, irrationality, immorality, impulsiveness, and alienation. As the new field of positive psychology acknowledges, we are all very far from optimal mental health, and we are all more or less crazy in many ways. Yet traditional psychiatry, like human intuition, resists calling anything a disorder if its prevalence is higher than about 10%.

The personality/insanity continuum is important in mental health policy and care. There are angry and unresolved debates over how to revise the 5th edition of psychiatry’s core reference work, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), to be published in 2013. One problem is that American psychiatrists dominate the DSM-5 debates, and the American health insurance system demands discrete diagnoses of mental illnesses before patients are covered for psychiatric medications and therapies. Also, the U.S. Food and Drug Administration approves psychiatric medications only for discrete mental illnesses. These insurance and drug-approval issues push for definitions of mental illnesses to be artificially extreme, mutually exclusive, and based on simplistic checklists of symptoms. Insurers also want to save money, so they push for common personality variants — shyness, laziness, irritability, conservatism — not to be classed as illnesses worthy of care. But the science doesn’t fit the insurance system’s imperatives. It remains to be seen whether DSM-5 is written for the convenience of American insurers and FDA officials, or for international scientific accuracy.

Psychologists have shown that in many domains, our instinctive intuitions are fallible (though often adaptive). Our intuitive physics — ordinary concepts of time, space, gravity, and impetus — can’t be reconciled with relativity, quantum mechanics, or cosmology. Our intuitive biology — ideas of species essences and teleological functions — can’t be reconciled with evolution, population genetics, or adaptationism. Our intuitive morality — self-deceptive, nepotistic, clannish, anthropocentric, and punitive — can’t be reconciled with any consistent set of moral values, whether Aristotelean, Kantian, or utilitarian. Apparently, our intuitive psychiatry has similar limits. The sooner we learn those limits, the better we’ll be able to help people with serious mental illnesses, and the more humble we’ll be about our own mental health.

Estimated Cost of the DMT Machine Elves Prime Factorization Experiment

“Okay,” I said. “Fine. Let me tell you where I’m coming from. I was reading Scott McGreal’s blog, which has some good articles about so-called DMT entities, and mentions how they seem so real that users of the drug insist they’ve made contact with actual superhuman beings and not just psychedelic hallucinations. You know, the usual Terence McKenna stuff. But in one of them he mentions a paper by Marko Rodriguez called A Methodology For Studying Various Interpretations of the N,N-dimethyltryptamine-Induced Alternate Reality, which suggested among other things that you could prove DMT entities were real by taking the drug and then asking the entities you meet to factor large numbers which you were sure you couldn’t factor yourself. So to that end, could you do me a big favor and tell me the factors of 1,522,605,027,922,533,360,535,618,378,132,637,429,718,068,114,961,380,688,657,908,494,580,122,963,258,952,897,654,000,350,692,006,139?”

Universal Love, Said the Cactus Person, by Scott Alexander

In the comments…

gwern says:
I was a little curious about how such a prime experiment would go and how much it would cost. It looks like one could probably run an experiment with a somewhat OK chance at success for under $1k.
We need to estimate the costs and probabilities of memorizing a suitable composite number, buying DMT, using DMT and getting the requisite machine-elf experience (far from guaranteed), being able to execute a preplanned action like asking about a prime, and remembering the answer.

1. The smallest RSA number not yet factored is 220 digits. The RSA numbers themselves are useless for this experiment: since it’s so extraordinarily unlikely that machine-elves are really an independent reality, a positive result would only prove that someone had stolen the RSA answers or hacked a computer or something along those lines. RSA-768 was factored in 2009 using ~2000 CPU-years, so we need a number much larger; since Google has several million CPUs we might want something substantially larger, at least 800 digits. We know from mnemonists that numbers that large can be routinely memorized, and an 800-digit decimal can be memorized in an hour. Chao Lu memorized 67k digits of Pi in 1 year. So the actual memorization time is not significant. How much training does it take to memorize 800 digits? I remember a famous example in WM research of how WM training does not necessarily transfer to anything: Ericsson & Chase’s student, taught to memorize digits, whose digit span went from ~7 to ~80 after 230 hours of training; but digit span is much more demanding than a one-off memorization, which can be done with something more like 80 hours of training. Foer’s _Moonwalking With Einstein: The Art and Science of Remembering Everything_ doesn’t cover much more than a year or two of a fairly undemanding training regimen, and he performed well. So I’m going to guess that memorizing a number which would be truly impressive evidence (and not simply evidence for a prank or misdeeds by a hobbyist, RSA employee, Google, or the NSA) would require ~30h of practice.
2. some browsing of the DMT category on the current leading black-market suggests that 1g of DMT from a reputable seller costs ฿0.56 or ~$130. The linked paper says smoking DMT for a full trip requires 50mg/0.05g so our $130 buys ~19 doses.
3. The linked paper says that 20% of Strassman’s injected-DMT trips give a machine-elf experience; hence the 1g will give an average of ~3-4 machine-elfs, and 19 trips almost guarantee at least 1 machine-elf assuming a 20% success-rate (1-(1-0.2)^19 ≈ 99%). Since the 20% figure comes from injected DMT of a controlled high quality, probably this is optimistic for anyone trying out smoking DMT at home, but let’s roll with it.
4. in a machine-elf experience, how often could we be lucid enough to wake up and ask the factoring question? No one’s mentioned trying, so there’s no hard data, but we can borrow from a similar set of experiments in verifying altered states of consciousness: LaBerge’s lucid dreaming experiments, in which subjects had to exert control to wiggle their eyes in a fixed pattern. This study gives several conversion rates from # of nights to # of verifications, which are all roughly 1/3 – 1/4; so given our estimated 3-4 machine-elfs, we might be able to ask 1 time. If the machine-elves are guaranteed to reply correctly, then that’s all we need.
5. at 30 hours of mnemonic labor valued at minimum wage of $8 and $130 for 19 doses, that gives us an estimate of $370 in costs to ask an average of once; if we amortize the memorization costs some more by buying 2g, then we instead spend $250 per factoring request for 2 tries; and so on down to a minimum cost of (130/19)*5 = $34 per factoring request. To get n=10 requests, we’d need to spend a cool ((30*8) + 10*130)=$1540.
6. power analysis for a question like this is tricky, since we only need one response with the *right* factors; probably what will happen is that the machine-elfs will not answer or any answer will be ‘forgotten’. You can estimate other stuff like how likely the elves are to respond given 10 questions and 0 responses (flat prior’s 95% CI: 0-28%), or apply decision-theory to decide when to stop trying (tricky, since any reasonable estimate of the probability of machine-elves will tell you that at $35 a shot, you shouldn’t be trying at all).

Hence, you could get a few attempts at somewhere under $1k, but exactly how much depends sensitively on what fraction of trips you get elves and how often you manage to ask them; the DMT itself doesn’t cost *that* much per dose (~$7), but it’s all the trips where you don’t get elves, or you get elves but are too ecstatic to ask them anything, which really kill you and drive the price up to $34-$250 per factoring request. Also, there’s a lot of uncertainty in all these estimates (who knows how much any of the quoted rates differ from person to person?).
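The arithmetic above can be replicated in a short script. Every parameter below is simply a figure quoted in the comment (Strassman’s 20% entity-contact rate, a LaBerge-style ~1/3.5 task-execution guess, $130/g, ~19 doses/g, 30 hours of mnemonic labor at $8/hour), so treat them all as rough assumptions rather than data:

```python
# Back-of-the-envelope sketch of the cost estimate above.
# All inputs are the figures quoted in the comment; none are measured data.
MEMORIZE_HOURS = 30      # practice needed to memorize an ~800-digit composite
WAGE = 8                 # $/hour valuation of that labor (minimum wage)
GRAM_PRICE = 130         # $ per gram of DMT
DOSES_PER_GRAM = 19      # ~50 mg smoked per full trip
P_ELF = 0.20             # Strassman: fraction of trips with entity contact
P_ASK = 1 / 3.5          # guess: chance of executing the planned question per elf trip

def p_at_least_one_elf(trips: int) -> float:
    """Probability of at least one machine-elf experience in `trips` attempts."""
    return 1 - (1 - P_ELF) ** trips

def expected_asks(grams: float) -> float:
    """Expected number of factoring questions actually posed."""
    return grams * DOSES_PER_GRAM * P_ELF * P_ASK

def total_cost(grams: float) -> float:
    """One-time memorization labor plus drug cost."""
    return MEMORIZE_HOURS * WAGE + grams * GRAM_PRICE

def posterior_upper(n: int = 10, q: float = 0.975) -> float:
    """Upper end of a flat-prior central 95% interval for the elves' response
    rate after n questions and 0 answers: posterior is Beta(1, n+1), whose
    q-quantile solves (1-p)^(n+1) = 1-q."""
    return 1 - (1 - q) ** (1 / (n + 1))

print(p_at_least_one_elf(19))   # ~0.99: 19 trips nearly guarantee one elf
print(expected_asks(1))         # ~1 question per gram
print(total_cost(1))            # $370 for ~1 ask
print(total_cost(10))           # $1540 for ~10 asks
print(posterior_upper(10))      # ~0.28, matching the quoted 0-28% CI
```

Since the memorization labor is a fixed cost, the marginal price per expected question falls toward the raw drug cost as more grams are bought, which is why the per-ask figure ranges so widely.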

I thought this might be a fun self-experiment to do, but looking at the numbers and the cost, it seems pretty discouraging.


Related Empirical Paradigms for Psychedelic Research:

  1. LSD and Quantum Measurement (an experiment that was designed, coded up, and conducted to evaluate whether one can experience multiple Everett branches at once while on LSD).
  2. How to Secretly Communicate with People on LSD (a method called Psychedelic Cryptography which uses the slower qualia decay factor induced by psychedelics, aka. “tracers”, in order to encode information in gifs that you can only decode if you are sufficiently high on a psychedelic).
  3. Psychophysics for Psychedelic Research: Textures (an experimental method developed by Benjamin Bala based on the textural mongrel paradigm proposed by Eero Simoncelli and extended to provide insights into psychedelic visual perception. See: analysis).

The Qualia Explosion

Extract from “Humans and Intelligent Machines: Co-Evolution, Fusion or Replacement?” (talk) by David Pearce

Supersentience: Turing plus Shulgin?

Compared to the natural sciences (cf. the Standard Model in physics) or computing (cf. the Universal Turing Machine), the “science” of consciousness is pre-Galilean, perhaps even pre-Socratic. State-enforced censorship of the range of subjective properties of matter and energy in the guise of a prohibition on psychoactive experimentation is a powerful barrier to knowledge. The legal taboo on the empirical method in consciousness studies prevents experimental investigation of even the crude dimensions of the Hard Problem, let alone locating a solution-space where answers to our ignorance might conceivably be found.

Singularity theorists are undaunted by our ignorance of this fundamental feature of the natural world. Instead, the Singularitarians offer a narrative of runaway machine intelligence in which consciousness plays a supporting role ranging from the minimal and incidental to the completely non-existent. However, highlighting the Singularity movement’s background assumptions about the nature of mind and intelligence, not least the insignificance of the binding problem to AGI, reveals why FUSION and REPLACEMENT scenarios are unlikely – though a measure of “cyborgification” of sentient biological robots augmented with ultrasmart software seems plausible and perhaps inevitable.

If full-spectrum superintelligence does indeed entail navigation and mastery of the manifold state-spaces of consciousness, and ultimately a seamless integration of this knowledge with the structural understanding of the world yielded by the formal sciences, then where does this elusive synthesis leave the prospects of posthuman superintelligence? Will the global proscription of radically altered states last indefinitely?

Social prophecy is always a minefield. However, there is one solution to the indisputable psychological health risks posed to human minds by empirical research into the outlandish state-spaces of consciousness unlocked by ingesting the tryptamines, phenylethylamines, isoquinolines and other pharmacological tools of sentience investigation. This solution is to make “bad trips” physiologically impossible – whether for individual investigators or, in theory, for human society as a whole. Critics of mood-enrichment technologies sometimes contend that a world animated by information-sensitive gradients of bliss would be an intellectually stagnant society: crudely, a Brave New World. On the contrary, biotech-driven mastery of our reward circuitry promises a knowledge explosion in virtue of allowing a social, scientific and legal revolution: safe, full-spectrum biological superintelligence. For genetic recalibration of hedonic set-points – as distinct from creating uniform bliss – potentially leaves cognitive function and critical insight both sharp and intact; and offers a launchpad for consciousness research in mind-spaces alien to the drug-naive imagination. A future biology of invincible well-being would not merely immeasurably improve our subjective quality of life: empirically, pleasure is the engine of value-creation. In addition to enriching all our lives, radical mood-enrichment would permit safe, systematic and responsible scientific exploration of previously inaccessible state-spaces of consciousness. If we were blessed with a biology of invincible well-being, exotic state-spaces would all be saturated with a rich hedonic tone.

Until this hypothetical world-defining transition, pursuit of the rigorous first-person methodology and rational drug-design strategy pioneered by Alexander Shulgin in PiHKAL and TiHKAL remains confined to the scientific counterculture. Investigation is risky, mostly unlawful, and unsystematic. In mainstream society, academia and peer-reviewed scholarly journals alike, ordinary waking consciousness is assumed to define the gold standard in which knowledge-claims are expressed and appraised. Yet to borrow a homely-sounding quote from Einstein, “What does the fish know of the sea in which it swims?” Just as a dreamer can gain only limited insight into the nature of dreaming consciousness from within a dream, likewise the nature of “ordinary waking consciousness” can only be glimpsed from within its confines. In order to scientifically understand the realm of the subjective, we’ll need to gain access to all its manifestations, not just the impoverished subset of states of consciousness that tended to promote the inclusive fitness of human genes on the African savannah.

Why the Proportionality Thesis Implies an Organic Singularity

So if the preconditions for full-spectrum superintelligence, i.e. access to superhuman state-spaces of sentience, remain unlawful, where does this roadblock leave the prospects of runaway self-improvement to superintelligence? Could recursive genetic self-editing of our source code repair the gap? Or will traditional human personal genomes be policed by a dystopian Gene Enforcement Agency in a manner analogous to the coercive policing of traditional human minds by the Drug Enforcement Agency?

Even in an ideal regulatory regime, the process of genetic and/or pharmacological self-enhancement is intuitively too slow for a biological Intelligence Explosion to be a live option, especially when set against the exponential increase in digital computer processing power and inorganic AI touted by Singularitarians. Prophets of imminent human demise in the face of machine intelligence argue that there can’t be a Moore’s law for organic robots. Even the Flynn Effect, the three-points-per-decade increase in IQ scores recorded during the 20th century, is comparatively puny; and in any case, this narrowly-defined intelligence gain may now have halted in well-nourished Western populations.

However, writing off all scenarios of recursive human self-enhancement would be premature. Presumably, the smarter our nonbiological AI, the more readily AI-assisted humans will be able recursively to improve our own minds with user-friendly wetware-editing tools – not just editing our raw genetic source code, but also the multiple layers of transcription and feedback mechanisms woven into biological minds. Presumably, our ever-smarter minds will be able to devise progressively more sophisticated, and also progressively more user-friendly, wetware-editing tools. These wetware-editing tools can accelerate our own recursive self-improvement – and manage potential threats from nonfriendly AGI that might harm rather than help us, assuming that our earlier strictures against the possibility of digital software-based unitary minds were mistaken. MIRI rightly call attention to how small enhancements can yield immense cognitive dividends: the relatively short genetic distance between humans and chimpanzees suggests how relatively small enhancements can exert momentous effects on a mind’s general intelligence, thereby implying that AGIs might likewise become disproportionately powerful through a small number of tweaks and improvements. In the post-genomic era, presumably exactly the same holds true for AI-assisted humans and transhumans editing their own minds. What David Chalmers calls the proportionality thesis, i.e. increases in intelligence lead to proportionate increases in the capacity to design intelligent systems, will be vindicated as recursively self-improving organic robots modify their own source code and bootstrap our way to full-spectrum superintelligence: in essence, an organic Singularity. And in contrast to classical digital zombies, superficially small molecular differences in biological minds can result in profoundly different state-spaces of sentience. 
Compare the ostensibly trivial difference in gene expression profiles of neurons mediating phenomenal sight and phenomenal sound – and the radically different visual and auditory worlds they yield.

Compared to FUSION or REPLACEMENT scenarios, the AI-human CO-EVOLUTION conjecture is apt to sound tame. The likelihood our posthuman successors will also be our biological descendants suggests at most a radical conservativism. In reality, a post-Singularity future where today’s classical digital zombies were superseded merely by faster, more versatile classical digital zombies would be infinitely duller than a future of full-spectrum supersentience. For all insentient information processors are exactly the same inasmuch as the living dead are not subjects of experience. They’ll never even know what it’s like to be “all dark inside” – or the computational power of phenomenal object-binding that yields illumination. By contrast, posthuman superintelligence will not just be quantitatively greater but also qualitatively alien to archaic Darwinian minds. Cybernetically enhanced and genetically rewritten biological minds can abolish suffering throughout the living world and banish experience below “hedonic zero” in our forward light-cone, an ethical watershed without precedent. Post-Darwinian life can enjoy gradients of lifelong blissful supersentience with the intensity of a supernova compared to a glow-worm. A zombie, on the other hand, is just a zombie – even if it squawks like Einstein. Posthuman organic minds will dwell in state-spaces of experience for which archaic humans and classical digital computers alike have no language, no concepts, and no words to describe our ignorance. Most radically, hyperintelligent organic minds will explore state-spaces of consciousness that do not currently play any information-signalling role in living organisms, and are impenetrable to investigation by digital zombies. In short, biological intelligence is on the brink of a recursively self-amplifying Qualia Explosion – a phenomenon of which digital zombies are invincibly ignorant, and invincibly ignorant of their own ignorance. 
Humans too of course are mostly ignorant of what we’re lacking: the nature, scope and intensity of such posthuman superqualia are beyond the bounds of archaic human experience. Even so, enrichment of our reward pathways can ensure that full-spectrum biological superintelligence will be sublime.


Image Credit: MohammadReza DomiriGanji

John von Neumann

Passing of a Great Mind

John von Neumann, a Brilliant, Jovial Mathematician, was a Prodigious Servant of Science and his Country

by Clay Blair Jr. – Life Magazine (February 25th, 1957)

The world lost one of its greatest scientists when Professor John von Neumann, 54, died this month of cancer in Washington, D.C. His death, like his life’s work, passed almost unnoticed by the public. But scientists throughout the free world regarded it as a tragic loss. They knew that Von Neumann’s brilliant mind had not only advanced his own special field, pure mathematics, but had also helped put the West in an immeasurably stronger position in the nuclear arms race. Before he was 30 he had established himself as one of the world’s foremost mathematicians. In World War II he was the principal discoverer of the implosion method, the secret of the atomic bomb.

The government officials and scientists who attended the requiem mass at the Walter Reed Hospital chapel last week were there not merely in recognition of his vast contributions to science, but also to pay personal tribute to a warm and delightful personality and a selfless servant of his country.

For more than a year Von Neumann had known he was going to die. But until the illness was far advanced he continued to devote himself to serving the government as a member of the Atomic Energy Commission, to which he was appointed in 1954. A telephone by his bed connected directly with his AEC office. On several occasions he was taken downtown in a limousine to attend commission meetings in a wheelchair. At Walter Reed, where he was moved early last spring, an Air Force officer, Lieut. Colonel Vincent Ford, worked full time assisting him. Eight airmen, all cleared for top secret material, were assigned to help on a 24-hour basis. His work for the Air Force and other government departments continued. Cabinet members and military officials continually came for his advice, and on one occasion Secretary of Defense Charles Wilson, Air Force Secretary Donald Quarles and most of the top Air Force brass gathered in Von Neumann’s suite to consult his judgement while there was still time. So relentlessly did Von Neumann pursue his official duties that he risked neglecting the treatise which was to form the capstone of his work on the scientific specialty, computing machines, to which he had devoted many recent years.


His fellow scientists, however, did not need any further evidence of Von Neumann’s rank as a scientist – or his assured place in history. They knew that during World War II at Los Alamos Von Neumann’s development of the idea of implosion speeded up the making of the atomic bomb by at least a full year. His later work with electronic computers quickened U.S. development of the H-bomb by months. The chief designer of the H-bomb, Edward Teller, once said with wry humor that Von Neumann was “one of those rare mathematicians who could descend to the level of the physicist.” Many theoretical physicists admit that they learned more from Von Neumann in methods of scientific thinking than from any of their colleagues. Hans Bethe, who was director of the theoretical physics division at Los Alamos, says, “I have sometimes wondered whether a brain like Von Neumann’s does not indicate a species superior to that of man.”


The foremost authority on computing machines in the U.S., Von Neumann was more than anyone else responsible for the increased use of the electronic “brains” in government and industry. The machine he called MANIAC (mathematical analyzer, numerical integrator and computer), which he built at the Institute for Advanced Study in Princeton, N.J., was the prototype for most of the advanced calculating machines now in use. Another machine, NORC, which he built for the Navy, can deliver a full day’s weather prediction in a few minutes. The principal adviser to the U.S. Air Force on nuclear weapons, Von Neumann was the most influential scientific force behind the U.S. decision to embark on accelerated production of intercontinental ballistic missiles. His “theory of games,” outlined in a book which he published in 1944 in collaboration with Economist Oskar Morgenstern, opened up an entirely new branch of mathematics. Analyzing the mathematical probabilities behind games of chance, Von Neumann went on to formulate a mathematical approach to such widespread fields as economics, sociology and even military strategy. His contributions to the quantum theory, the theory which explains the emission and absorption of energy in atoms and the one on which all atomic and nuclear physics are based, were set forth in a work entitled Mathematical Foundations of Quantum Mechanics which he wrote at the age of 23. It is today one of the cornerstones of this highly specialized branch of mathematical thought.

For Von Neumann the road to success was a many-laned highway with little traffic and no speed limit. He was born in 1903 in Budapest and was of the same generation of Hungarian physicists as Edward Teller, Leo Szilard and Eugene Wigner, all of whom later worked on atomic energy development for the U.S.

The eldest of three sons of a well-to-do Jewish financier who had been decorated by the Emperor Franz Josef, John von Neumann grew up in a society which placed a premium on intellectual achievement. At the age of 6 he was able to divide two eight-digit numbers in his head. By the age of 8 he had mastered college calculus and as a trick could memorize on sight a column in a telephone book and repeat back the names, addresses and numbers. History was only a “hobby,” but by the outbreak of World War I, when he was 10, his photographic mind had absorbed most of the contents of the 46-volume works edited by the German historian Oncken with a sophistication that startled his elders.

Despite his obvious technical ability, as a young man Von Neumann wanted to follow his father’s financial career, but he was soon dissuaded. Under a kind of supertutor, a first-rank mathematician at the University of Budapest named Leopold Fejer, Von Neumann was steered into the academic world. At 21 he received two degrees – one in chemical engineering at Zurich and a PhD in mathematics from the University of Budapest. The following year, 1926, as Admiral Horthy’s rightist regime had been repressing Hungarian Jews, he moved to Göttingen, Germany, then the mathematical center of the world. It was there that he published his major work on quantum mechanics.

The young professor

His fame now spreading, Von Neumann at 23 qualified as a Privatdozent (lecturer) at the University of Berlin, one of the youngest in the school’s history. But the Nazis had already begun their march to power. In 1929 Von Neumann accepted a visiting lectureship at Princeton University and in 1930, at the age of 26, he took a job there as professor of mathematical physics – after a quick trip to Budapest to marry a vivacious 18-year-old named Mariette Kovesi. Three years later, when the Institute for Advanced Study was founded at Princeton, Von Neumann was appointed – as was Albert Einstein – to be one of its first full professors. “He was so young,” a member of the institute recalls, “that most people who saw him in the halls mistook him for a graduate student.”


Although they worked near each other in the same building, Einstein and Von Neumann were not intimate, and because their approach to scientific matters was different they never formally collaborated. A member of the institute who worked side by side with both men in the early days recalls, “Einstein’s mind was slow and contemplative. He would think about something for years. Johnny’s mind was just the opposite. It was lightning quick – stunningly fast. If you gave him a problem he either solved it right away or not at all. If he had to think about it a long time and it bored him, his interest would begin to wander. And Johnny’s mind would not shine unless whatever he was working on had his undivided attention.” But the problems he did care about, such as his “theory of games,” absorbed him for much longer periods.

‘Proof by erasure’

Partly because of this quicksilver quality Von Neumann was not an outstanding teacher to many of his students. But for the advanced students who could ascend to his level he was inspirational. His lectures were brilliant, although at times difficult to follow because of his way of erasing and rewriting dozens of formulae on the blackboard. In explaining mathematical problems Von Neumann would write his equations hurriedly, starting at the top of the blackboard and working down. When he reached the bottom, if the problem was unfinished, he would erase the top equations and start down again. By the time he had done this two or three times most other mathematicians would find themselves unable to keep track. On one such occasion a colleague at Princeton waited until Von Neumann had finished and said, “I see. Proof by erasure.”

Von Neumann himself was perpetually interested in many fields unrelated to science. Several years ago his wife gave him a 21-volume Cambridge History set, and she is sure he memorized every name and fact in the books. “He is a major expert on all the royal family trees in Europe,” a friend said once. “He can tell you who fell in love with whom, and why, what obscure cousin this or that czar married, how many illegitimate children he had and so on.” One night during the Princeton days a world-famous expert on Byzantine history came to the Von Neumann house for a party. “Johnny and the professor got into a corner and began discussing some obscure facet,” recalls a friend who was there. “Then an argument arose over a date. Johnny insisted it was this, the professor that. So Johnny said, ‘Let’s get the book.’ They looked it up and Johnny was right. A few weeks later the professor was invited to the Von Neumann house again. He called Mrs. von Neumann and said jokingly, ‘I’ll come if Johnny promises not to discuss Byzantine history. Everybody thinks I am the world’s greatest expert in it and I want them to keep on thinking that.'”

Once a friend showed him an extremely complex problem and remarked that a certain famous mathematician had taken a whole week’s journey across Russia on the Trans-Siberian Railroad to complete it. Rushing for a train, Von Neumann took the problem along. Two days later the friend received an air-mail packet from Chicago. In it was a 50-page handwritten solution to the problem. Von Neumann had added a postscript: “Running time to Chicago: 15 hours, 26 minutes.” To Von Neumann this was not an expression of vanity but of sheer delight – a hole in one.

During periods of intense intellectual concentration Von Neumann, like most of his professional colleagues, was lost in preoccupation, and the real world spun past him. He would sometimes interrupt a trip to put through a telephone call to find out why he had taken the trip in the first place.

Von Neumann believed that concentration alone was insufficient for solving some of the most difficult mathematical problems and that these are solved in the subconscious. He would often go to sleep with a problem unsolved, wake up in the morning and scribble the answer on a pad he kept on the bedside table. It was a common occurrence for him to begin scribbling with pencil and paper in the midst of a nightclub floor show or a lively party, “the noisier,” his wife says, “the better.” When his wife arranged a secluded study for Von Neumann on the third floor of the Princeton home, Von Neumann was furious. “He stormed downstairs,” says Mrs. von Neumann, “and demanded, ‘What are you trying to do, keep me away from what’s going on?’ After that he did most of his work in the living room with my phonograph blaring.”

His pride in his brain power made him easy prey to scientific jokesters. A friend once spent a week working out various steps in an obscure mathematical process. Accosting Von Neumann at a party he asked for help in solving the problem. After listening to it, Von Neumann leaned his plump frame against a door and stared blankly, his mind going through the necessary calculations. At each step in the process the friend would quickly put in, “Well, it comes out to this, doesn’t it?” After several such interruptions Von Neumann became perturbed and when his friend “beat” him to the final answer he exploded in fury. “Johnny sulked for weeks,” recalls the friend, “before he found out it was all a joke.”

He did not look like a professor. He dressed so much like a Wall Street banker that a fellow scientist once said, “Johnny, why don’t you smear some chalk dust on your coat so you look like the rest of us?” He loved to eat, especially rich sauces and desserts, and in later years was forced to diet rigidly. To him exercise was “nonsense.”

Those lively Von Neumann parties

Most card-playing bored him, although he was fascinated by the mathematical probabilities involved in poker and baccarat. He never cared for movies. “Every time we went,” his wife recalls, “he would either go to sleep or do math problems in his head.” When he could do neither he would break into violent coughing spells. What he truly loved, aside from work, was a good party. Residents of Princeton’s quiet academic community can still recall the lively goings-on at the Von Neumann’s big, rambling house on Westcott Road. “Those old geniuses got downright approachable at the Von Neumanns’,” a friend recalls. Von Neumann’s talents as a host were based on his drinks, which were strong, his repertoire of off-color limericks, which was massive, and his social ease, which was consummate. Although he could rarely remember a name, Von Neumann would escort each new guest around the room, bowing punctiliously to cover up the fact that he was not using names in introducing people.

Von Neumann also had a passion for automobiles, not for tinkering with them but for driving them as if they were heavy tanks. He turned up with a new one every year at Princeton. “The way he drove, a car couldn’t possibly last more than a year,” a friend says. Von Neumann was regularly arrested for speeding and some of his wrecks became legendary. A Princeton crossroads was for a while known as “Von Neumann corner” because of the number of times the mathematician had cracked up there. He once emerged from a totally demolished car with this explanation: “I was proceeding down the road. The trees on the right were passing me in orderly fashion at 60 miles an hour. Suddenly one of them stepped out in my path. Boom!”

Mariette and John von Neumann had one child, Marina, born in 1935, who graduated from Radcliffe last June, summa cum laude, with the highest scholastic record in her class. In 1937, the year Von Neumann was elected to the National Academy of Sciences and became a naturalized citizen of the U.S., the marriage ended in divorce. The following year on a trip to Budapest he met and married Klara Dan, whom he subsequently trained to be an expert on electronic computing machines. The Von Neumann home in Princeton continued to be a center of gaiety as well as a hotel for prominent intellectual transients.

In the late 1930s Von Neumann began to receive a new type of visitor at Princeton: the military scientist and engineer. After he had handled a number of jobs for the Navy in ballistics and anti-submarine warfare, word of his talents spread, and Army Ordnance began using him more and more as a consultant at its Aberdeen Proving Ground in Maryland. As war drew nearer this kind of work took up more and more of his time.

During World War II he roved between Washington, where he had established a temporary residence, England, Los Alamos and other defense installations. When scientific groups heard Von Neumann was coming, they would set up all of their advanced mathematical problems like ducks in a shooting gallery. Then he would arrive and systematically topple them over.

After the Axis had been destroyed, Von Neumann urged that the U.S. immediately build even more powerful atomic weapons and use them before the Soviets could develop nuclear weapons of their own. It was not an emotional crusade; Von Neumann, like others, had coldly reasoned that the world had grown too small to permit nations to conduct their affairs independently of one another. He held that world government was inevitable – and the sooner the better. But he also believed it could never be established while Soviet Communism dominated half of the globe. A famous Von Neumann observation at the time: “With the Russians it is not a question of whether but when.” A hard-boiled strategist, he was one of the few scientists to advocate preventive war, and in 1950 he was remarking, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not 1 o’clock?”

In late 1949, after the Russians had exploded their first atomic bomb and the U.S. scientific community was split over whether or not the U.S. should build a hydrogen bomb, Von Neumann reduced the argument to: “It is not a question of whether we build it or not, but when do we start calculating?” When the H-bomb controversy raged, Von Neumann slipped quietly out to Los Alamos, took a desk and began work on the first mathematical steps toward building the weapon, specifically deciding which computations would be fed to which electronic computers.

Von Neumann’s principal interest in the postwar years was electronic computing machines, and his advice on computers was in demand almost everywhere. One day he was urgently summoned to the offices of the Rand Corporation, a government-sponsored scientific research organization in Santa Monica, Calif. Rand scientists had come up with a problem so complex that the electronic computers then in existence seemingly could not handle it. The scientists wanted Von Neumann to invent a new kind of computer. After listening to the scientists expound, Von Neumann broke in: “Well, gentlemen, suppose you tell me exactly what the problem is?”

For the next two hours the men at Rand lectured, scribbled on blackboards, and brought charts and tables back and forth. Von Neumann sat with his head buried in his hands. When the presentation was completed, he scribbled on a pad, stared so blankly that a Rand scientist later said he looked as if “his mind had slipped his face out of gear,” then said, “Gentlemen, you do not need the computer. I have the answer.”

While the scientists sat in stunned silence, Von Neumann reeled off the various steps which would provide the solution to the problem. Having risen to this routine challenge, Von Neumann followed up with a routine suggestion: “Let’s go to lunch.”

In 1954, when the U.S. development of the intercontinental ballistic missile was dangerously bogged down, study groups under Von Neumann’s direction began paving the way for solution of the most baffling problems: guidance, miniaturization of components, heat resistance. In less than a year Von Neumann put his O.K. on the project – but not until he had completed a relentless investigation in his own dazzlingly fast style. One day, during an ICBM meeting on the West Coast, a physicist employed by an aircraft company approached Von Neumann with a detailed plan for one phase of the project. It consisted of a tome several hundred pages long on which the physicist had worked for eight months. Von Neumann took the book and flipped through the first several pages. Then he turned it over and began reading from back to front. He jotted down a figure on a pad, then a second and a third. He looked out the window for several seconds, returned the book to the physicist and said, “It won’t work.” The physicist returned to his company. After two months of re-evaluation, he came to the same conclusion.

In October 1954 Eisenhower appointed Von Neumann to the Atomic Energy Commission. Von Neumann accepted, although the Air Force and the senators who confirmed him insisted that he retain his chairmanship of the Air Force ballistic missile panel.

Von Neumann had been on the new job only six months when the pain first struck in the left shoulder. After two examinations, the physicians at Bethesda Naval Hospital suspected cancer. Within a month Von Neumann was wheeled into surgery at the New England Deaconess Hospital in Boston. A leading pathologist, Dr. Shields Warren, examined the biopsy tissue and confirmed that the pain was a secondary cancer. Doctors began to race to discover the primary location. Several weeks later they found it in the prostate. Von Neumann, they agreed, did not have long to live.

When he heard the news Von Neumann called for Dr. Warren. He asked, “Now that this thing has come, how shall I spend the remainder of my life?”

“Well, Johnny,” Warren said, “I would stay with the commission as long as you feel up to it. But at the same time I would say that if you have any important scientific papers – anything further scientifically to say – I would get started on it right away.”

Von Neumann returned to Washington and resumed his busy schedule at the Atomic Energy Commission. To those who asked about his arm, which was in a sling, he muttered something about a broken collarbone. He continued to preside over the ballistic missile committee, and to receive an unending stream of visitors from Los Alamos, Livermore, the Rand Corporation, Princeton. Most of these men knew that Von Neumann was dying of cancer, but the subject was never mentioned.

Machines creating new machines

After the last visitor had departed Von Neumann would retire to his second-floor study to work on the paper which he knew would be his last contribution to science. It was an attempt to formulate a concept shedding new light on the workings of the human brain. He believed that if such a concept could be stated with certainty, it would also be applicable to electronic computers and would permit man to make a major step forward in using these “automata.” In principle, he reasoned, there was no reason why some day a machine might not be built which not only could perform most of the functions of the human brain but could actually reproduce itself, i.e., create more supermachines like it. He proposed to present this paper at Yale, where he had been invited to give the 1956 Silliman Lectures.

As the weeks passed, work on the paper slowed. One evening, as Von Neumann and his wife were leaving a dinner party, he complained that he was “uncertain” about walking. Doctors furnished him with a wheelchair. But Von Neumann’s world had begun to close in tight around him. He was seized by periods of overwhelming melancholy.

In April 1956 Von Neumann moved into Walter Reed Hospital for good. Honors were now coming from all directions. He was awarded Yeshiva University’s first Einstein prize. In a special White House ceremony President Eisenhower presented him with the Medal of Freedom. In April the AEC gave him the Enrico Fermi award for his contributions to the theory and design of computing machines, accompanied by a $50,000 tax-free grant.

Although born of Jewish parents, Von Neumann had never practiced Judaism. After his arrival in the U.S. he had been baptized a Roman Catholic. But his divorce from Mariette had put him beyond the sacraments of the Catholic Church for almost 19 years. Now he felt an urge to return. One morning he said to Klara, “I want to see a priest.” He added, “But he will have to be a special kind of priest, one that will be intellectually compatible.” Arrangements were made for special instructions to be given by a Catholic scholar from Washington. After a few weeks Von Neumann began once again to receive the sacraments.

The great mind falters

Toward the end of May the seizures of melancholy began to occur more frequently. In June the doctors finally announced – though not to Von Neumann himself – that the cancer had begun to spread. The great mind began to falter. “At times he would discuss history, mathematics, or automata, and he could recall word for word conversations we had had 20 years ago,” a friend says. “At other times he would scarcely recognize me.” His family – Klara, two brothers, his mother and daughter Marina – drew close around him and arranged a schedule so that one of them would always be on hand. Visitors were more carefully screened. Drugs fortunately prevented Von Neumann from experiencing pain. Now and then his old gifts of memory were again revealed. One day in the fall his brother Mike read Goethe’s Faust to him in German. Each time Mike paused to turn the page, Von Neumann recited from memory the first few lines of the following page.

One of his favorite companions was his mother Margaret von Neumann, 76 years old. In July the family in turn became concerned about her health, and it was suggested that she go to a hospital for a checkup. Two weeks later she died of cancer. “It was unbelievable,” a friend says. “She kept on going right up to the very end and never let anyone know a thing. How she must have suffered to make her son’s last days less worrisome.” Lest the news shock Von Neumann fatally, elaborate precautions were taken to keep it from him. When he guessed the truth, he suffered a severe setback.

Von Neumann’s body, which he had never given much thought to, went on serving him much longer than did his mind. Last summer the doctors had given him only three or four weeks to live. Months later, in October, his passing was again expected momentarily. But not until this month did his body give up. It was characteristic of the impatient, witty and incalculably brilliant John von Neumann that although he went on working for others until he could do no more, his own treatise on the workings of the brain – the work he thought would be his crowning achievement in his own name – was left unfinished.

Qualia Computing Media Appearances

Podcasts

16 – Andrés Gómez Emilsson on Solving Consciousness and Being Happy All the Time (The Most Interesting People I Know…, October 2019)

On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson (AI Alignment Podcast, May 2019)

71 – Researching Qualia with Andrés Gómez Emilsson (The Bayesian Conspiracy, October 2018)

The Future of Mind (Waking Cosmos, October 2018)

Consciousness, Qualia, and Psychedelics with Andres Gomez Emilsson (Catalyzing Coherence, May 2018)

Consciousness and Qualia Realism (Cosmic Tortoise, May 2018)

Robert Stark interviews Transhumanist Andres Gomez Emilsson (The Stark Truth with Robert Stark, October 2017)

Como el MDMA, pero sin la neurotoxicidad (Abolir el sufrimiento con Andrés Gómez) (Guía Escéptica [in Spanish], March 2016)

Happiness is Solving the World’s Problems (The World Transformed, January 2016)

Presentations

The Hyperbolic Geometry of DMT Experiences (@Harvard Science of Psychedelics Club, September 2019)

Harmonic Society: 8 Models of Art for a Scientific Paradigm of Aesthetic Qualia (at a QRI Party, July 2019)

Andrés Gómez Emilsson – Consciousness vs Replicators (Burning Man, August 2018)

Quantifying Valence (see also The Science of Consciousness, April 2018)

Quantifying Bliss (Consciousness Hacking, June 2017)

Utilitarian Temperament: Satisfying Impactful Careers (BIL Oakland 2016: The Recession Generation, July 2016)

Interviews

Mapping the psychedelic experience – a conversation with the Qualia Research Institute (Adeptus Psychonautica, October 2019)

Simulation #453 Catalog & Navigate Consciousness (Simulation, June 2019)

Simulation #310 Mike Johnson & Andrés Gómez Emilsson – Chemistry of Consciousness (Simulation, March 2019)

Simulation #255 Andrés Gómez Emilsson – Computational Properties of Consciousness (Simulation, February 2019)

Want a Penfield Mood Organ? This Scientist Might Be Able to Help (Ziff Davis PCMag, April 2018)

Frameworks for Consciousness – Andres Gomez Emilsson (Science, Technology & the Future by Adam Ford, March 2018)

Towards the Abolition of Suffering through Science (featuring David Pearce, Brian Tomasik, & Mike Johnson hosted by Adam Ford, August 2015)

The Mind of David Pearce (Stanford, December 2012)

Andrés Gómez Emilsson, el joven que grito espurio a Felipe Calderón (Cine Desbundo [in Spanish], October 2008)

Narrative Inclusions

SSC Journal Club: Relaxed Beliefs Under Psychedelics and the Anarchic Brain (Slate Star Codex, September 2019)

Why We Need to Study Consciousness (Scientific American, September 2019)

Young pioneers on their hopes for technology, and older trailblazers on their regrets (MIT Technology Review, August 2019)

Mike Johnson – Testable Theories of Consciousness (Science, Technology & the Future, March 2019)

On Consciousness, Qualia, Valence & Intelligence with Mike Johnson (Science, Technology, Future, October 2018)

Podcast with Daniel Ingram (Cosmic Tortoise [referenced at 2h22m], January 2018)

Fear and Loathing at Effective Altruism Global 2017 (Slate Star Codex, August 2017)

Transhumanist Proves Schrödinger’s Cat Experiment Isn’t Better on LSD (Inverse, October 2016)

High Performer: Die Renaissance des LSD im Silicon Valley (Wired Germany [in German], June 2015)

Come With Us If You Want To Live (Harper’s Magazine, January 2015)

David Pearce’s Social Media Posts (Hedweb pre-2014, 2014, 2015, 2016, 2017, 2018)

David Pearce at Stanford 2011 (Stanford Transhumanist Association, December 2011)

External Articles

Ending Suffering Is The Most Important Cause (IEET, September 2015)

This Is What I Mean When I Say ‘Consciousness’ (IEET, September 2015)

My Interest Shifted from Mathematics to Consciousness after a THC Experience (IEET, September 2015)

‘Spiritual/Philosophical’ is the Deepest, Highest, Most Powerful Dimension of Euphoria (IEET, September 2015)

Bios

H+pedia, ISI-S, Decentralized AI Summit, Earth Sharing

Miscellaneous

Philosophy of Mind Stand-up Comedy (The Science of Consciousness, April 2018)

Randal Koene vs. Andres Emilsson on The Binding Problem (Bay Area Futurists, Oakland CA, May 2016)


Note: I am generally outgoing, fun-loving, and happy to participate in podcasts, events, interviews, and miscellaneous activities. Feel free to invite me to your podcast/interview/theater/etc. I am flexible when it comes to content; anything I’ve written about in Qualia Computing is fair game for discussion. Infinite bliss!

Marijuana-induced “Short-term Memory Tracers”

[On the subjective effects of marijuana]: It’s one thing to describe it verbally and another thing to experience it yourself. I had this dissociated feeling that was really intense. I had memory tracers. So it wasn’t like, you know, people on LSD or stuff will talk about how “your hand is tracing over and over again” and it was almost like that with my memory. My short term memory was repeating over and over again. So it’d be things like getting in a car, and getting in the car over and over again, putting on a seat belt over and over and over again, and it was like short term memory tracers. And it was overall extremely intense. Had an altered perception of space. You know… distance. That’s something I [also] got on mushrooms, which I talked about in a previous video, but it’s like you see something far away and you don’t really know if it’s really far away, or if it is just really small. So if you see a car that’s like 50 feet away, you don’t know if it is 50 feet away or if it is just a matchbox car that’s really close to you. So it kind of had that; it altered the way I saw space. And, to be honest, I freaked the fuck out, because this isn’t what I thought marijuana was supposed to be. I thought it was a sedative. I thought it made you relaxed. I didn’t know it tripped you the fuck out. So, uh, my response was: I thought I was dying. I remember being in the backseat of the car and saying “is this normal?” And the guy in the front seat – he was this Indian dude, his name was Deepak – I swear to God it was like, uh, my Kumar, and he turns back and was like “Are you tripping, man? Are you feeling it, man?” and that just made me even more fucked up in the head. Because he was saying it in his Indian accent, and I was like “What’s going on? What’s going on?”, and I thought I had to go to the hospital. Uh, let’s fast-forward in the experience, so about one hour later, or 30 minutes later, I don’t really know, it started to turn more into what I expected it to be.
Which was this sedative. I started feeling more relaxed, like the trip started subsiding, and I was left with this trip afterglow of relaxation, feeling giggly, feeling really hungry, and you know kind of like the standard marijuana high. And this happened every time I smoked marijuana in the beginning. I was uncomfortable for the first 30 minutes to an hour. I learned to kind of enjoy it, but for the most part I was waiting it out. And then I’d get relaxed and chill. And I wouldn’t really call it paranoia, it was really just tripping so hard I was kind of like “wow, like, I’m really fucking tripping, I hope I don’t act weird in front of a bunch of people.” Maybe that is paranoia, I don’t know.

[…10 more minutes talking about marijuana…]

And I don’t know why the fuck marijuana is still illegal in 2017. I feel like a fucking pilgrim. Like, seriously? A war veteran can go and almost die for his country. He could come back, and drink alcohol, buy an assault rifle, and get prescribed speed, but smoke a joint? Nah, you are a fucking criminal! I mean, that doesn’t make any fucking sense. I’ve been doing this push, that I said that if by January 2018 marijuana wasn’t legal I’d shave my hair. I’m not gonna shave my head. I am gonna cut all of my hair off, and I’m really sad about that. Usually when I cut my hair off I send it to Korea at a random address because I just like to say “my hair is in Korea”. And I’m sure whoever opens it is like “why the fuck am I getting this?” But this time I’m gonna throw it up on eBay just because I want to see if anybody bids on it. I’m gonna do it 99 cents free shipping. But yeah, getting my hair cut is simply really weird: when I get to the stylist and say “can you put this in a bag? I’m gonna sell this.” Uh, but yeah, that really is it for marijuana as far as my overall experience with the substance.

– What’s smoking marijuana like? The positive and negative effects of smoking cannabis and dabs by YouTube addiction recovery coach Cg Kid

The Banality of Evil

In response to the Quora question “I feel like a lot of evil actions in the world have supporters who justify them (like Nazis). Can you come up with some convincing ways in which some of the most evil actions in the world could be justified?”, David Pearce writes:


Tout comprendre, c’est tout pardonner.”
(“To understand all is to forgive all.” – Leo Tolstoy, War and Peace)

Despite everything, I believe that people are really good at heart.
(Anne Frank)

The risk of devising justifications of the worst forms of human behaviour is that there are people gullible enough to believe them. It’s not as though anti-Semitism died with the Third Reich. Even offering dispassionate causal explanation can sometimes be harmful. So devil’s advocacy is an intellectual exercise to be used sparingly.

That said, the historical record suggests that human societies don’t collectively set out to do evil. Rather, primitive human emotions get entangled with factually mistaken beliefs and ill-conceived metaphysics with ethically catastrophic consequences. Thus the Nazis seriously believed in the existence of an international Jewish conspiracy against the noble Aryan race. Hitler, so shrewd in many respects, credulously swallowed The Protocols of the Elders of Zion. And as his last testament disclosed, obliquely, Hitler believed that the gas chambers were a “more humane means” than the terrible fate befalling the German Volk. Many Nazis (Himmler, Höss, Stangl, and maybe even Eichmann) believed that they were acting from a sense of duty – a great burden stoically borne. And such lessons can be generalised across history. If you believed, like the Inquisition, that torturing heretics was the only way to save their souls from eternal damnation in Hell, would you have the moral courage to do likewise? If you believed that the world would be destroyed by the gods unless you practised mass human sacrifice, would you participate? [No, in my case, albeit for unorthodox reasons.]

In a secular context today, there exist upstanding citizens who would like future civilisation to run “ancestor simulations”. Ancestor simulations would create inconceivably more suffering than any crime perpetrated by the worst sadist or deluded ideologue in history – at least if the computational-functional theory of consciousness assumed by their proponents is correct. If I were to pitch a message to life-lovers aimed at justifying such a monstrous project, as you request, then I guess I’d spin some yarn about how marvellous it would be to recreate past wonders and see grandpa again. And so forth.

What about the actions of individuals, as distinct from whole societies? Not all depraved human behaviour stems from false metaphysics or confused ideology. The grosser forms of human unpleasantness often stem just from our unreflectively acting out baser appetites (cf. Hamiltonian spite). Consider the neuroscience of perception. Sentient beings don’t collectively perceive a shared public world. Each of us runs an egocentric world-simulation populated by zombies (sic). We each inhabit warped virtual worlds centered on a different body-image, situated within a vast reality whose existence can be theoretically inferred. Or so science says. Most people are still perceptual naïve realists. They aren’t metaphysicians, or moral philosophers, or students of the neuroscience of perception. Understandably, most people trust the evidence of their own eyes and the wisdom of their innermost feelings, over abstract theory. What “feels right” is shaped by natural selection. And what “feels right” within one’s egocentric virtual world is often callous and sometimes atrocious. Natural selection is amoral. We are all slaves to the pleasure-pain axis, however heavy the layers of disguise. Thanks to evolution, our emotions are “encephalised” in grotesque ways. Even the most ghastly behaviour can be made to seem natural – like Darwinian life itself.

Are there some forms of human behaviour so appalling that I’d find it hard to play devil’s advocate in their mitigation – even as an intellectual exercise?

Well, perhaps consider, say, the most reviled hate-figures in our society – even more reviled than murderers or terrorists. Most sexually active paedophiles don’t set out to harm children: quite the opposite, harm is typically just the tragic by-product of a sexual orientation they didn’t choose. Posthumans may reckon that all Darwinian relationships are toxic. Of course, not all monstrous human behavior stems from wellsprings as deep as sexual orientation. Thus humans aren’t obligate carnivores. Most (though not all) contemporary meat eaters, if pressed, will acknowledge in the abstract that a pig is as sentient and sapient as a prelinguistic human toddler. And no contemporary meat eaters seriously believe that their victims have committed a crime (cf. Animal trial – Wikipedia). Yet if questioned why they cause such terrible suffering to the innocent, and why they pay for a hamburger rather than a veggieburger, a meat eater will come up with perhaps the lamest justification for human depravity ever invented:

“But I like the taste!”

Such is the banality of evil.

Person-moment affecting views

by Katja Grace (source)

[Epistemic status: sloppy thoughts not informed by the literature. Hoping actual population ethicists might show up and correct me or point me to whoever has already thought about something like this better.]

Person-affecting views say that when you are summing up the value in different possible worlds, you should ignore people who only exist in one of those worlds. This is based on something like the following intuitions:

  1. World A can only be better than world B insofar as it is better for someone.
  2. World A can’t be better than world B for Alice, if Alice exists in world A but not world B.

The further-fact view says that after learning all physical facts about Alice and Alice’—such as whether Alice’ was the physical result of Alice waiting for five seconds, or is a brain upload of Alice, or is what came out of a replicating machine on Mars after Alice walked in on Earth, or remembers being Alice—there is still a further meaningful question of whether Alice and Alice’ are the same person.

I take the further-fact view to be wrong (or at least Derek Parfit does, and I think we agree the differences between Derek Parfit and I have been overstated). Thinking that the further-fact view is wrong seems to be a common position among intellectuals (e.g. 87% among philosophers).

If the further-fact view is wrong, then what we have is a whole lot of different person-moments, with various relationships to one another, which for pragmatic reasons we like to group into clusters called ‘people’. There are different ways we could define the people, and no real answer to which definition is right. This works out pretty well in our world, but you can imagine other worlds (or futures of our world) where the clusters are much more ambiguous, and different definitions of ‘person’ make a big difference, or where the concept is not actually useful.

Person-affecting views seem to make pretty central use of the concept ‘person’. If we don’t accept the further-fact view, and do want to accept a person-affecting view, what would that mean? I can think of several options:

  1. How good different worlds are depends strongly on which definition of ‘person’ you choose (which person moments you choose to cluster together), but this is a somewhat arbitrary pragmatic choice
  2. There is some correct definition of ‘person’ for the purpose of ethics (i.e. there is some relation between person moments that makes different person moments in the future ethically relevant by virtue of having that connection to a present person moment)
  3. Different person-moments are more or less closely connected in ways, and a person-affecting view should actually have a sliding scale of importance for different person-moments

Before considering these options, I want to revisit the second reason for adopting a person-affecting view: If Alice exists in world A and not in world B, then Alice can’t be made better off by world A existing rather than world B. Whether this premise is true seems to depend on how ‘a world being better for Alice’ works. Some things we might measure would go one way, and some would go the other. For instance, we could imagine it being analogous to:

  1. Alice painting more paintings. If Alice painted three paintings in world A, and doesn’t exist in world B, I think most people would say that Alice painted more paintings in world A than in world B. And more clearly, that world A has more paintings than world B, even if we insist that a world can’t have more paintings without somebody in particular having painted more paintings. Relatedly, there are many things people do of which we can truly say ‘If Alice didn’t exist, she wouldn’t have done X’.
  2. Alice having painted more paintings per year. If Alice painted one painting every thirty years in world A, and didn’t exist in world B, in world B the number of paintings per year is undefined, and so incomparable to ‘one per thirty years’.

Suppose that person-affecting view advocates are right, and the worth of one’s life is more like 2). You just can’t compare the worth of Alice’s life in two worlds where she only exists in one of them. Then can you compare person-moments? What if the same ‘person’ exists in two possible worlds, but consists of different person-moments?

Compare world A and world C, which both contain Alice, but in world C Alice makes different choices as a teenager, and becomes a fighter pilot instead of a computer scientist. It turns out that she is not well suited to it, and finds piloting pretty unsatisfying. If Alice_t1A is different from Alice_t1C, can we say that world A is better than world C, in virtue of Alice’s experiences? Each relevant person-moment only exists in one of the worlds, so how can they benefit?

I see several possible responses:

  1. No we can’t. We should have person-moment affecting views.
  2. Things can’t be better or worse for person-moments, only for entire people, holistically across their lives, so the question is meaningless. (Or relatedly, how good a thing is for a person is not a function of how good it is for their person-moments, and it is how good it is for the person that matters).
  3. Yes, there is some difference between people and person moments, which means that person-moments can benefit without existing in worlds that they are benefitting relative to, but people cannot.

The second possibility seems to involve accepting the second view above: that there is some correct definition of ‘person’ that is larger than a person moment, and fundamental to ethics – something like the further-fact view. This sounds kind of bad to me. And the third view doesn’t seem very tempting without some idea of an actual difference between persons and person-moments.

So maybe the person-moment affecting view looks most promising. Let us review what it would have to look like. For one thing, the only comparable person moments are the ones that are the same. And since they are the same, there is no point bringing about one instead of the other. So there is never reason to bring about a person-moment for its own benefit. Which sounds like it might really limit the things that are worth intentionally doing. Isn’t making myself happy in three seconds just bringing about a happy person moment rather than a different sad person moment?

Is everything just equally good on this view? I don’t think so, as long as you are something like a preference utilitarian: person-moments can have preferences over other person-moments. Suppose that Alice_t0A and Alice_t0C are the same, and Alice_t1A and Alice_t1C are different. And suppose that Alice_t0 wants Alice_t1 to be a computer scientist. Then world A is better than world C for Alice_t0, and so better overall. That is, person-moments can benefit from things, as long as they don’t know at the time that they have benefited.
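The comparison just described can be sketched as a toy model. This is only an illustration of the preference-utilitarian reading of the person-moment affecting view; the careers, names, and one-point-per-satisfied-preference scoring rule are hypothetical simplifications, not anything from the original argument.

```python
# Toy model of the person-moment preference comparison sketched above.
# All names, careers, and the scoring rule are hypothetical illustrations.

def alice_t0_preference(world):
    # Alice_t0 wants Alice_t1 to be a computer scientist.
    return world["alice_t1_career"] == "computer scientist"

def world_value(world, preferences):
    # A world's value = number of person-moment preferences it satisfies.
    return sum(1 for pref in preferences if pref(world))

# World A: Alice becomes a computer scientist; world C: a fighter pilot.
world_A = {"alice_t1_career": "computer scientist"}
world_C = {"alice_t1_career": "fighter pilot"}

prefs = [alice_t0_preference]
assert world_value(world_A, prefs) > world_value(world_C, prefs)
```

Note that Alice_t1’s own satisfaction contributes nothing to the score here; only Alice_t0’s preference about her does.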

I think an interesting feature of this view is that all value seems to come from meddling preferences. It is never directly good that there is joy in the world, for instance; it is just good because somebody wants somebody else to experience joy, and that desire was satisfied. If they had instead wished for a future person-moment to be tortured, and this was granted, then this world would apparently be just as good.

So, things that are never directly valuable in this world:

  • Joy
  • Someone getting what they want and also knowing about it
  • Anything that isn’t a meddling preference

On the upside, since person-moments often care about future person-moments within the same person, we do perhaps get back to something closer to the original person-affecting view. There is often reason to bring about or benefit a person moment for the benefit of previous person moments in the history of the same person, who for instance wants to ‘live a long and happy life’. My guess after thinking about this very briefly is that in practice it would end up looking like the ‘moderate’ person-affecting views, in which people who currently exist get more weight than people who will be brought into existence, but not infinitely more weight. People who exist now mostly want to continue existing, and to have good lives in the future, and they care less, but some, about different people in the future.

So, if you want to accept a person-affecting view and not a further-fact view, the options seem to me to be something like these:

  1. Person-moments can benefit without having an otherworldly counterpart, even though people cannot. Which is to say, only person-moments that are part of the same ‘person’ in different worlds can benefit from their existence. ‘Person’ here is either an arbitrary pragmatic definition choice, or some more fundamental ethically relevant version of the concept that we could perhaps discover.
  2. Benefits accrue to persons, not person-moments. In particular, benefits to persons are not a function of the benefits to their constituent person-moments. Where ‘person’ is again either a somewhat arbitrary choice of definition, or a more fundamental concept.
  3. A sliding scale of ethical relevance of different person-moments, based on how narrow a definition of ‘person’ unites them with any currently existing person-moments. Along with some story about why, given that you can apparently compare all of them, you are still weighting some less, on grounds that they are incomparable.
  4. Person-moment affecting views

None of these sound very good to me, but nor do person-affecting views in general, so maybe I’m the wrong audience. I had thought person-moment affecting views were almost a reductio, but a close friend says he thought they were the obvious reasonable view, so I am curious to hear others’ takes.



An interesting thing to point out here is that what Katja describes as the further-fact view is equivalent to what we here call Closed Individualism (cf. Ontological Qualia). This is the common-sense view that you start existing when you are born and stop existing when you die (a view which also has soul-based variants with possible pre-birth and post-death existence). This view is not very philosophically tenable because it presupposes that there is an enduring metaphysical ego distinct for every person. And yet, the vast majority of people still hold strongly to Closed Individualism. In some sense, in the article Katja tries to rescue the common-sense aspect of Closed Individualism in the context of ethics. That is, by trying to steel-man the common-sense notion that people (rather than moments of experience) are the relevant units for morality while also negating further-fact views, she provides reasons to keep using Closed Individualism as an intuition pump in ethics (if only for pragmatic reasons). In general, I consider this kind of discussion to be a very fruitful endeavor, as it approaches ethics by touching upon the key parameters that matter fundamentally: identity, value, and counterfactuals.

As you may gather from pieces such as Wireheading Done Right and The Universal Plot, at Qualia Computing we tend to think the most coherent ethical system arises when we take as a premise that the relevant moral agents are “moments of experience”. Contra person-affecting views, we don’t think it is meaningless to say that a given world is better than another one if not everyone in the first world is also in the second one. On the contrary – it really does not matter who lives in a given world. What matters is the raw subjective quality of the experiences in such worlds. If it is meaningless to ask “who is experiencing Alice’s experiences now?” once you know all the physical facts, then moral weight must be encoded in such physical facts alone. In turn, it could certainly happen that the narrative aspect of an experience turns out to be irrelevant for determining its intrinsic value. People’s self-narratives may certainly have important instrumental uses, but at their core they don’t make it to the list of things that intrinsically matter (unlike, say, avoiding suffering).

A helpful philosophical move that we have found adds a lot of clarity here is to analyze the problem in terms of Open Individualism. That is, assume that we are all one consciousness and take it from there. If so, then the probability that you are a given person would be weighted by the amount of consciousness (or number of moments of experience, depending) that such person experiences throughout his or her life. You are everyone in this view, but you can only be each person one at a time from their own limited points of view. So there is a sensible way of weighting the importance of each person: it is a function of the amount of time you spend being him or her (normalized by the amount of consciousness that person experiences, in case that is variable across individuals).
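As a toy illustration of this weighting scheme, the probability of “being” each person can be computed by simple normalization. The names and the numbers standing in for “amount of conscious experience over a lifetime” are entirely made up:

```python
# Hypothetical sketch of the Open Individualism weighting described above:
# the probability of being a given person is proportional to the amount of
# conscious experience that person has over a lifetime. Numbers are invented.

# (person, total conscious person-moments over their life)
lifetimes = {
    "Alice": 2.0e9,  # e.g. long life, ordinary intensity of consciousness
    "Bob":   1.0e9,  # shorter life
    "Carol": 3.0e9,  # long life, assumed higher intensity
}

total = sum(lifetimes.values())
weights = {person: amount / total for person, amount in lifetimes.items()}

for person, w in weights.items():
    print(f"P(being {person}) = {w:.3f}")

# The weights sum to 1: you are everyone, but each person only in
# proportion to how much experience that person contains.
assert abs(sum(weights.values()) - 1.0) < 1e-9
```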

If consciousness emerges victorious in its war against pure replicators, then it would make sense that the main theory of identity people would hold by default would be Open Individualism. After all, it is only Open Individualism that aligns individual incentives and the total wellbeing of all moments of experience throughout the universe.

That said, in principle, it could turn out that Open Individualism is not needed to maximize conscious value – that while it may be useful instrumentally to align the existing living intelligences towards a common consciousness-centric goal (e.g. eliminating suffering, building a harmonic society, etc.), in the long run we may find that ontological qualia (the aspect of our experience that we use to represent the nature of reality, including our beliefs about personal identity) have no intrinsic value. Why bother experiencing heaven in the form of a mixture of 95% bliss and 5% ‘a sense of knowing that we are all one’, if you can instead just experience 100% pure bliss?

At the ethical limit, anything that is not perfectly blissful might end up being thought of as a distraction from the cosmic telos of universal wellbeing.

Modern Accounts of Psychedelic Action

Excerpts from Unifying Theories of Psychedelic Drug Effects (2018) by Link Swanson (these are just key quotes; the full paper is worth reading)

Abstract

How do psychedelic drugs produce their characteristic range of acute effects in perception, emotion, cognition, and sense of self? How do these effects relate to the clinical efficacy of psychedelic-assisted therapies? Efforts to understand psychedelic phenomena date back more than a century in Western science. In this article I review theories of psychedelic drug effects and highlight key concepts which have endured over the last 125 years of psychedelic science. First, I describe the subjective phenomenology of acute psychedelic effects using the best available data. Next, I review late 19th-century and early 20th-century theories—model psychoses theory, filtration theory, and psychoanalytic theory—and highlight their shared features. I then briefly review recent findings on the neuropharmacology and neurophysiology of psychedelic drugs in humans. Finally, I describe recent theories of psychedelic drug effects which leverage 21st-century cognitive neuroscience frameworks—entropic brain theory, integrated information theory, and predictive processing—and point out key shared features that link back to earlier theories. I identify an abstract principle which cuts across many theories past and present: psychedelic drugs perturb universal brain processes that normally serve to constrain neural systems central to perception, emotion, cognition, and sense of self. I conclude that making an explicit effort to investigate the principles and mechanisms of psychedelic drug effects is a uniquely powerful way to iteratively develop and test unifying theories of brain function.


[Figure: fphar-09-00172-g002]

Subjective rating scale items selected after psilocybin (blue) and placebo (red) (n = 15) (Muthukumaraswamy et al., 2013). “Items were completed using a visual analog scale format, with a bottom anchor of ‘no, not more than usually’ and a top anchor of ‘yes, much more than usually’ for every item, with the exception of ‘I felt entirely normal,’ which had bottom and top anchors of ‘No, I experienced a different state altogether’ and ‘Yes, I felt just as I normally do,’ respectively. Shown are the mean ratings for 15 participants plus the positive SEMs. All items marked with an asterisk were scored significantly higher after psilocybin than placebo infusion at a Bonferroni-corrected significance level of p < 0.0022 (0.05/23 items)” (Muthukumaraswamy et al., 2013, p. 15176).


Neuropharmacology and Neurophysiological Correlates of Psychedelic Drug Effects

Klee recognized that his above hypotheses, inspired by psychoanalytic theory and LSD effects, required neurophysiological evidence. “As far as I am aware, however, adequate neurophysiological evidence is lacking … The long awaited millennium in which biochemical, physiological, and psychological processes can be freely correlated still seems a great distance off” (Klee, 1963, p. 466, 473). What clues have recent investigations uncovered?

A psychedelic drug molecule impacts a neuron by binding to and altering the conformation of receptors on the surface of the neuron (Nichols, 2016). The receptor interaction most implicated in producing classic psychedelic drug effects is agonist or partial agonist activity at serotonin (5-HT) receptor type 2A (5-HT2A) (Nichols, 2016). A molecule’s propensity for 5-HT2A affinity and agonist activity predicts its potential for (and potency of) subjective psychedelic effects (Glennon et al., 1984; McKenna et al., 1990; Halberstadt, 2015; Nichols, 2016; Rickli et al., 2016). When a psychedelic drug’s 5-HT2A agonist activity is intentionally blocked using 5-HT2A antagonist drugs (e.g., ketanserin), the subjective effects are blocked or attenuated in humans under psilocybin (Vollenweider et al., 1998; Kometer et al., 2013), LSD (Kraehenmann et al., 2017a,b; Preller et al., 2017), and ayahuasca (Valle et al., 2016). Importantly, while the above evidence makes it clear that 5-HT2A activation is a necessary (if not sufficient) mediator of the hallmark subjective effects of classic psychedelic drugs, this does not entail that 5-HT2A activation is the sole neurochemical cause of all subjective effects. For example, 5-HT2A activation might trigger neurochemical modulations ‘downstream’ (e.g., changes in glutamate transmission) which could also play causal roles in producing psychedelic effects (Nichols, 2016). Moreover, most psychedelic drug molecules activate other receptors in addition to 5-HT2A (e.g., 5-HT1A, 5-HT2C, dopamine, sigma, etc.) and these activations may importantly contribute to the overall profile of subjective effects even if 5-HT2A activation is required for their effects to occur (Ray, 2010, 2016).

How does psychedelic drug-induced 5-HT2A receptor agonism change the behavior of the host neuron? Generally, 5-HT2A activation has a depolarizing effect on the neuron, making it more excitable (more likely to fire) (Andrade, 2011; Nichols, 2016). Importantly, this does not necessarily entail that 5-HT2A activation will have an overall excitatory effect throughout the brain, particularly if the excitation occurs in inhibitory neurons (Andrade, 2011). This important consideration (captured by the adage ‘one neuron’s excitation is another neuron’s inhibition’) should be kept in mind when tracing causal links in the pharmaco-neurophysiology of psychedelic drug effects.

In mammalian brains, neurons tend to ‘fire together’ in synchronized rhythms known as temporal oscillations (brain waves). MEG and EEG equipment measure the electromagnetic disturbances produced by the temporal oscillations of large neural populations and these measurements can be quantified according to their amplitude (power) and frequency (timing) (Buzsáki and Draguhn, 2004). Specific combinations of frequency and amplitude can be correlated with distinct brain states, including waking ‘resting’ state, various attentional tasks, anesthesia, REM sleep, and deep sleep (Tononi and Koch, 2008; Atasoy et al., 2017a). In what ways do temporal oscillations change under psychedelic drugs? MEG and EEG studies consistently show reductions in oscillatory power across a broad frequency range under ayahuasca (Riba et al., 2002, 2004; Schenberg et al., 2015; Valle et al., 2016), psilocybin (Muthukumaraswamy et al., 2013; Kometer et al., 2015; Schartner et al., 2017), and LSD (Carhart-Harris et al., 2016c; Schartner et al., 2017). Reductions in the power of alpha-band oscillations, localized mainly to parietal and occipital cortex, have been correlated with intensity of subjective visual effects—e.g., ‘I saw geometric patterns’ or ‘My imagination was extremely vivid’—under psilocybin (Kometer et al., 2013; Muthukumaraswamy et al., 2013; Schartner et al., 2017) and ayahuasca (Riba et al., 2004; Valle et al., 2016). Under LSD, reductions in alpha power still correlated with intensity of subjective visual effects but associated alpha reductions were more widely distributed throughout the brain (Carhart-Harris et al., 2016c). Furthermore, ego-dissolution effects and mystical-type experiences (e.g., ‘I experienced a disintegration of my “self” or “ego”’ or ‘The experience had a supernatural quality’) have been correlated with reductions in alpha power localized to anterior and posterior cingulate cortices and the parahippocampal regions under psilocybin (Muthukumaraswamy et al., 2013; Kometer et al., 2015) and throughout the brain under LSD (Carhart-Harris et al., 2016c).

The concept of functional connectivity rests upon fMRI brain imaging observations that reveal temporal correlations of activity occurring in spatially remote regions of the brain which form highly structured patterns (brain networks) (Buckner et al., 2013). Imaging of brains during perceptual or cognitive task performance reveals patterns of functional connectivity known as functional networks; e.g., control network, dorsal attention network, ventral attention network, visual network, auditory network, and so on. Imaging brains in taskless resting conditions reveals resting-state functional connectivity (RSFC) and structured patterns of RSFC known as resting state networks (RSNs; Deco et al., 2011). One particular RSN, the default mode network (DMN; Buckner et al., 2008), increases activity in the absence of tasks and decreases activity during task performance (Fox and Raichle, 2007). DMN activity is strong during internally directed cognition and a variety of other ‘metacognitive’ functions (Buckner et al., 2008). DMN activation in normal waking states exhibits ‘inverse coupling’ or anticorrelation with the activation of task-positive functional networks, meaning that DMN and functional networks are often mutually exclusive; one deactivates as the other activates and vice versa (Fox and Raichle, 2007).

In what ways does brain network connectivity change under psychedelic drugs? First, functional connectivity between key ‘hub’ areas—mPFC and PCC—is reduced. Second, the ‘strength’ or oscillatory power of the DMN is weakened and its intrinsic functional connectivity becomes disintegrated as its component nodes become decoupled under psilocybin (Carhart-Harris et al., 2012, 2013), ayahuasca (Palhano-Fontes et al., 2015), and LSD (Carhart-Harris et al., 2016c; Speth et al., 2016). Third, brain networks that normally show anticorrelation become active simultaneously under psychedelic drugs. This situation, which can be described as increased between-network functional connectivity, occurs under psilocybin (Carhart-Harris et al., 2012, 2013; Roseman et al., 2014; Tagliazucchi et al., 2014), ayahuasca (Palhano-Fontes et al., 2015) and especially LSD (Carhart-Harris et al., 2016c; Tagliazucchi et al., 2016). Fourth and finally, the overall repertoire of explored functional connectivity motifs is substantially expanded and its informational dynamics become more diverse and entropic compared with normal waking states (Tagliazucchi et al., 2014, 2016; Alonso et al., 2015; Lebedev et al., 2016; Viol et al., 2016; Atasoy et al., 2017b; Schartner et al., 2017). Notably, the magnitude of occurrence of the above four neurodynamical themes correlates with subjective intensity of psychedelic effects during the drug session. Furthermore, visual cortex is activated during eyes-closed psychedelic visual imagery (de Araujo et al., 2012; Carhart-Harris et al., 2016c) and under LSD “the early visual system behaves ‘as if’ it were receiving spatially localized visual information” as V1-V3 RSFC is activated in a retinotopic fashion (Roseman et al., 2016, p. 3036).

Taken together, the recently discovered neurophysiological correlates of subjective psychedelic effects present an important puzzle for 21st-century neuroscience. A key clue is that 5-HT2A receptor agonism leads to desynchronization of oscillatory activity, disintegration of intrinsic integrity in the DMN and related brain networks, and an overall brain dynamic characterized by increased between-network global functional connectivity, expanded signal diversity, and a larger repertoire of structured neurophysiological activation patterns. Crucially, these characteristic traits of psychedelic brain activity have been correlated with the phenomenological dynamics and intensity of subjective psychedelic effects.


21st-Century Theories of Psychedelic Drug Effects

Entropic Brain Theory

Entropic Brain Theory (EBT; Carhart-Harris et al., 2014) links the phenomenology and neurophysiology of psychedelic effects by characterizing both in terms of the quantitative notions of entropy and uncertainty. Entropy is a quantitative index of a system’s (physical) disorder or randomness which can simultaneously describe its (informational) uncertainty. EBT “proposes that the quality of any conscious state depends on the system’s entropy measured via key parameters of brain function” (Carhart-Harris et al., 2014, p. 1). Their hypothesis states that hallmark psychedelic effects (e.g., perceptual destabilization, cognitive flexibility, ego dissolution) can be mapped directly onto elevated levels of entropy/uncertainty measured in brain activity, e.g., widened repertoire of functional connectivity patterns, reduced anticorrelation of brain networks, and desynchronization of RSN activity. More specifically, EBT characterizes the difference between psychedelic states and normal waking states in terms of how the underlying brain dynamics are positioned on a scale between the two extremes of order and disorder—a concept known as ‘self-organized criticality’ (Beggs and Plenz, 2003). A system with high order (low entropy) exhibits dynamics that resemble ‘petrification’ and are relatively inflexible but more stable, while a system with low order (high entropy) exhibits dynamics that resemble ‘formlessness’ and are more flexible but less stable. The notion of ‘criticality’ describes the transition zone in which the brain remains poised between order and disorder. Physical systems at criticality exhibit increased transient ‘metastable’ states, increased sensitivity to perturbation, and increased propensity for cascading ‘avalanches’ of metastable activity. Importantly, EBT points out that these characteristics are consistent with psychedelic phenomenology, e.g., hypersensitivity to external stimuli, broadened range of experiences, or rapidly shifting perceptual and mental contents. Furthermore, EBT uses the notion of criticality to characterize the difference between psychedelic states and normal waking states as it “describes cognition in adult modern humans as ‘near critical’ but ‘sub-critical’—meaning that its dynamics are poised in a position between the two extremes of formlessness and petrification where there is an optimal balance between order and flexibility” (Carhart-Harris et al., 2014, p. 12). EBT hypothesizes that psychedelic drugs interfere with ‘entropy-suppression’ brain mechanisms which normally sustain sub-critical brain dynamics, thus bringing the brain “closer to criticality in the psychedelic state” (Carhart-Harris et al., 2014, p. 12).
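EBT’s use of entropy can be made concrete with a minimal sketch. Assuming we describe a brain state as a probability distribution over a small set of functional-connectivity motifs (the four-motif repertoires below are invented for illustration, not empirical data), Shannon entropy quantifies how ‘widened’ the repertoire is:

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical occupancy probabilities over four functional-connectivity
# motifs; the numbers are invented to illustrate EBT's claim, not data.
ordered_state  = [0.85, 0.05, 0.05, 0.05]  # 'petrified': one dominant motif
entropic_state = [0.25, 0.25, 0.25, 0.25]  # widened, flatter repertoire

# The flatter repertoire has strictly higher entropy (2.0 bits, the
# maximum for four states), matching EBT's picture of psychedelic states.
assert shannon_entropy(entropic_state) > shannon_entropy(ordered_state)
print(shannon_entropy(entropic_state))  # prints 2.0
```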


Integrated Information Theory

Integrated Information Theory (IIT) is a general theoretical framework which describes the relationship between consciousness and its physical substrates (Oizumi et al., 2014; Tononi, 2004, 2008). While EBT is already loosely consistent with the core principles of IIT, Gallimore (2015) demonstrates how EBT’s hypotheses can be operationalized using the technical concepts of the IIT framework. Using EBT and recent neuroimaging data as a foundation, Gallimore develops an IIT-based model of psychedelic effects. Consistent with EBT, this IIT-based model describes the brain’s continual challenge of minimizing entropy while retaining flexibility. Gallimore formally restates this problem using IIT parameters: brains attempt to optimize the give-and-take dynamic between cause-effect information and cognitive flexibility. In IIT, a (neural) system generates cause-effect information when the mechanisms which make up its current state constrain the set of states which could causally precede or follow the current state. In other words, each mechanistic state of the brain: (1) limits the set of past states which could have causally given rise to it, and (2) limits the set of future states which can causally follow from it. Thus, each current state of the mechanisms within a neural system (or subsystem) has an associated cause-effect repertoire which specifies a certain amount of cause-effect information as a function of how stringently it constrains the unconstrained state repertoire of all possible system states. Increasing the entropy within a cause-effect repertoire will in effect constrain the system less stringently as the causal possibilities are expanded in both temporal directions as the system moves closer to its unconstrained repertoire of all possible states. Moreover, increasing the entropy within a cause-effect repertoire equivalently increases the uncertainty associated with its past (and future) causal interactions. Using this IIT-based framework, Gallimore (2015) argues that, compared with normal waking states, psychedelic brain states exhibit higher entropy, higher cognitive flexibility, but lower cause-effect information.
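One way to make this trade-off concrete is a simplified sketch in which cause-effect information is scored as the relative entropy (KL divergence) between a constrained repertoire and the unconstrained maximum-entropy repertoire, in the spirit of earlier IIT formulations that used relative entropy (later versions of IIT use other distance measures, such as the earth mover’s distance). The repertoires below are invented for illustration:

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

n = 4  # hypothetical number of possible past states
unconstrained = [1.0 / n] * n  # maximum-entropy (unconstrained) repertoire

# Invented cause repertoires: a sober state constrains its possible past
# states tightly; a higher-entropy (psychedelic-like) state constrains
# them less, sitting closer to the unconstrained repertoire.
constrained = [0.70, 0.10, 0.10, 0.10]
flattened   = [0.30, 0.25, 0.25, 0.20]

info_sober = kl_divergence(constrained, unconstrained)
info_psych = kl_divergence(flattened, unconstrained)

# Higher entropy within the repertoire => lower cause-effect information.
assert info_sober > info_psych
```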


Predictive Processing

The first modern brain imaging measurements in humans under psilocybin yielded somewhat unexpected results: reductions in oscillatory power (MEG) and cerebral blood flow (fMRI) correlated with the intensity of subjective psychedelic effects (Carhart-Harris et al., 2012; Muthukumaraswamy et al., 2013). In their discussion, the authors suggest that their findings, although surprising through the lens of commonly held beliefs about how brain activity maps to subjective phenomenology, may actually be consistent with a theory of brain function known as the free energy principle (FEP; Friston, 2010).

In one model of global brain function based on the free-energy principle (Friston, 2010), activity in deep-layer projection neurons encodes top-down inferences about the world. Speculatively, if deep-layer pyramidal cells were to become hyperexcitable during the psychedelic state, information processing would be biased in the direction of inference—such that implicit models of the world become spontaneously manifest—intruding into consciousness without prior invitation from sensory data. This could explain many of the subjective effects of psychedelics (Muthukumaraswamy et al., 2013, p. 15181).

What is FEP? “In this view, the brain is an inference machine that actively predicts and explains its sensations. Central to this hypothesis is a probabilistic model that can generate predictions, against which sensory samples are tested to update beliefs about their causes” (Friston, 2010). FEP is a formulation of a broader conceptual framework emerging in cognitive neuroscience known as predictive processing (PP; Clark, 2013). PP has links to the Bayesian brain hypothesis (Knill and Pouget, 2004), predictive coding (Rao and Ballard, 1999), and earlier theories of perception and cognition (MacKay, 1956; Neisser, 1967; Gregory, 1968) dating back to Helmholtz (1925), who was inspired by Kant (1996; see Swanson, 2016). At the turn of the 21st century, the ideas of Helmholtz catalyzed innovations in machine learning (Dayan et al., 1995), new understandings of cortical organization (Mumford, 1992; Friston, 2005), and theories of how perception works (Kersten and Yuille, 2003; Lee and Mumford, 2003).


Conclusion

The four key features identified in filtration and psychoanalytic accounts from the late 19th and early 20th century continue to operate in 21st-century cognitive neuroscience: (1) psychedelic drugs produce their characteristic diversity of effects because they perturb adaptive mechanisms which normally constrain perception, emotion, cognition, and self-reference; (2) these adaptive mechanisms can develop pathologies rooted in either too much or too little constraint; (3) psychedelic effects appear to share elements with psychotic symptoms because both involve weakened constraints; and (4) psychedelic drugs are therapeutically useful precisely because they offer a way to temporarily inhibit these adaptive constraints. It is on these four points that EBT, IIT, and PP seem consistent with each other and with earlier filtration and psychoanalytic accounts. EBT and IIT describe psychedelic brain dynamics and link them to phenomenological dynamics, while PP describes informational principles and plausible neural information exchanges which might underlie the larger-scale dynamics described by EBT and IIT. Certain descriptions of neural entropy-suppression mechanisms (EBT), cause-effect information constraints (IIT), or prediction-error minimization strategies (PP, FEP) are loosely consistent with Freud’s ego and Huxley’s cerebral reducing valve.


Qualia Computing comment: As you can see above, 21st-century theories of psychedelic action have a lot of interesting commonalities. A one-line summary of what they all agree on could be: psychedelics increase the available state-space of consciousness by removing constraints that are normally imposed by standard brain functioning. That said, they do not make specific predictions about valence. That is, they leave the question of “which alien states of consciousness will feel good and which ones will feel bad” completely unaddressed. In the following posts about the presentations given by members of the Qualia Research Institute at The Science of Consciousness 2018, you will see how, unlike other modern accounts, our Qualia Formalist approach to consciousness can elucidate this matter.

What If God Were a Closed Individualist Presentist Hedonistic Utilitarian With an Information-Theoretic Identity of Indiscernibles Ontology?

Extract from “Unsong” (chapter 18):

There’s an old Jewish children’s song called Had Gadya. It starts:

A little goat, a little goat
My father bought for two silver coins,
A little goat, a little goat

Then came the cat that ate the goat
My father bought for two silver coins
A little goat, a little goat

Then came the dog that bit the cat…

And so on. A stick hits the dog, a fire burns the stick, water quenches the fire, an ox drinks the water, a butcher slaughters the ox, the Angel of Death takes the butcher, and finally God destroys the Angel of Death. Throughout all of these verses, it is emphasized that it is indeed a little goat, and the father did indeed buy it for two silver coins.

[…]

As far as I know, no one has previously linked this song to the Lurianic Kabbalah. So I will say it: the deepest meaning of Had Gadya is a description of how and why God created the world. As an encore, it also resolves the philosophical problem of evil.

The most prominent Biblical reference to a goat is the scapegoating ritual. Once a year, the High Priest of Israel would get rid of the sins of the Jewish people by mystically transferring all of them onto a goat, then yelling at the goat until it ran off somewhere, presumably taking all the sin with it.

The thing is, at that point the goat contained an entire nation-year worth of sin. That goat was super evil. As a result, many religious and mystical traditions have associated unholy forces with goats ever since, from the goat demon Baphomet to the classical rather goat-like appearance of Satan.

So the goat represents evil. I’ll go along with everyone else saying the father represents God here. So God buys evil with two silver coins. What’s up?

The most famous question in theology is “Why did God create a universe filled with so much that is evil?” The classical answers tend to be kind of weaselly, and center around something like free will or necessary principles or mysterious ways. Something along the lines of “Even though God’s omnipotent, creating a universe without evil just isn’t possible.”

But here we have God buying evil with two silver coins. Buying to me represents an intentional action. Let’s go further – buying represents a sacrifice. Buying is when you sacrifice something dear to you to get something you want even more. Evil isn’t something God couldn’t figure out how to avoid, it’s something He covets.

What did God sacrifice for the sake of evil? Two silver coins. We immediately notice the number “two”. Two is not typically associated with God. God is One. Two is right out. The kabbalists identify the worst demon, the nadir of all demons, as Thamiel, whose name means “duality in God”. Two is dissonance, divorce, division, dilemmas, distance, discrimination, diabolism.

This, then, was God’s sacrifice. In order to create evil, He took up duality.

“Why would God want to create evil? God is pure Good!”

Exactly. The creation of anything at all other than God requires evil. God is perfect. Everything else is imperfect. Imperfection contains evil by definition. Two scoops of evil is the first ingredient in the recipe for creating universes. Finitude is evil. Form is evil. Without evil all you have is God, who, as the kabbalists tell us, is pure Nothing. If you want something, evil is part of the deal.

Now count the number of creatures in the song. God, angel, butcher, ox, water, fire, stick, dog, cat, goat. Ten steps from God to goat. This is the same description of the ten sephirot we’ve found elsewhere, the ten levels by which God’s ineffability connects to the sinful material world without destroying it. This is not a coincidence because nothing is ever a coincidence. Had Gadya isn’t just a silly children’s song about the stages of advancement of the human soul, the appropriate rituals for celebrating Passover in the Temple, the ancient Sumerian pantheon, and the historical conquests of King Tiglath-Pileser III. It’s also a blueprint for the creation of the universe. Just like everything else.


(see also: ANSWER TO JOB)