“Fixing Cognitive Biases to Shape the Next Generation of Humans”
– an interview with João Fabiano (translated from Portuguese)
Edge of Tomorrow Report – 3.25.2010
by Carlos R. B. Azevedo (@crbazevedo)
João Fabiano has been researching “Philosophy of Mind and Transhumanism”, focusing on methods to address cognitive deviations from rationality that pose risks to society and could lead to dangerous loss scenarios for humanity. These methods are based mainly on modelling rationality through algorithms or on treating such biases with neurochemicals.
The study of cognitive biases was initiated in the 1970s by two psychologists, Amos Tversky and Daniel Kahneman – the latter a Nobel laureate in economics for their joint research on prospect theory. In this interview, João talks about his research, the role of transhumanism, the limits of human rationality and how it is possible to extend our cognition.
EOT: What are cognitive biases, prospect theory, and transhumanism and how do these relate to each other?
Fabiano: When we face situations in which we must decide what to do from uncertain data – the field of study of prospect theory – we often make errors in determining the best action. These errors are frequently systematic and recurrent, with a certain degree of generality across the human population, and are known as cognitive biases: systematic deviations of human cognition from a given rationality paradigm.
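Prospect theory's best-known departure from classical rationality is loss aversion: losses loom larger than equal gains. A minimal sketch of its value function in Python, using the parameter estimates commonly cited from Tversky and Kahneman's 1992 work (the dollar amounts below are invented for illustration):

```python
ALPHA = 0.88   # diminishing sensitivity (Tversky & Kahneman's 1992 estimate)
LAMBDA = 2.25  # loss-aversion coefficient (same source)

def value(x):
    """Subjective value of a gain or loss x, relative to a reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

# A $100 loss hurts more than a $100 gain pleases:
assert abs(value(-100)) > value(100)
```

This asymmetry is one reason decisions made from uncertain data deviate systematically from the expected-value answer.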
Of all the uncertain scenarios humanity has ever faced, the fate of current technology – which seems to grow exponentially – and of humanity itself is certainly among the most difficult to assess, and therefore among the most subject to biases. Transhumanism advocates the rational use of technology to profoundly improve the human condition. This goal can only be achieved through extensive knowledge of the cognitive biases that affect our decisions.
EOT: You are mainly interested in methods to address cognitive biases. Are there any biases in particular that deserve our foremost attention?
Fabiano: Certainly the biases affecting our assessment of very high risks are the most important to watch. There are special types of risk – catastrophic risks, with the potential to kill more than 10 million human beings, and existential risks, with the potential to destroy the entire human race – which are particularly neglected.
I think the two major biases affecting this area are: (1) the intentionality bias: devoting more attention to risks posed by humans than to those posed by nature; and (2) observational biases: those that prevent us from obtaining information about our probability of extinction from our own history. An extensive compilation of other biases can be found in the article “Cognitive Biases Potentially Affecting Judgment of Global Risks” by AI researcher Eliezer Yudkowsky.
EOT: Could you brief us on your research on cognitive enhancers and tell us in advance what your main findings are?
Fabiano: I discovered some shocking things. The main one is that the cognitive enhancers currently in use are neither the safest nor the most effective. Most drugs used primarily or secondarily for cognitive enhancement (e.g. Ritalin, amphetamines, coffee and cigarettes) are addictive and have many health side effects, including a substantial risk of death. The newer drugs (e.g. modafinil, piracetam and Aricept) are safer and more effective; however, due to a long list of biases, they are still viewed with caution by most of the population concerned.
The main bias affecting our judgment of cognitive enhancers is the status quo bias: the bias that assumes, a priori, that any change to the status quo is bad. It often manifests itself in comparing the risk of something [i.e. the new drug] with nothing [i.e. the absence of the drug], instead of comparing it with the risk of something else [i.e. an old drug] that would be replaced. This occurs, for instance, when the risk of modafinil is compared with zero risk rather than with the risk of caffeine.
Another set of persistent biases, which occurs even within the scientific community, consists of those affecting how we absorb statistical information. These prevent the human mind from learning that modafinil is safer than caffeine when we see a poll showing a 5% incidence of coronary complications among coffee users as opposed to 0% among modafinil users. Our brain is programmed to learn that something is unsafe when we hear of someone who has suffered ill effects from the substance, but not when we see an abstract number in a research report.
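The two framings Fabiano contrasts can be written out in a few lines of Python. All the incidence figures below are invented for the sketch (the caffeine figure echoes the hypothetical 5% poll above); none are real data:

```python
# Hypothetical incidence figures, for illustration only.
coffee_risk = 0.05      # coronary complications among coffee users
modafinil_risk = 0.001  # a small nonzero figure, invented for the sketch

# Status quo framing: compare the new drug against "nothing" (zero risk).
biased_rejects = modafinil_risk > 0.0        # True: any nonzero risk looks bad

# Replacement framing: compare it against the substance it would replace.
unbiased_accepts = modafinil_risk < coffee_risk   # True: it is the safer option
```

The same number yields opposite verdicts depending only on the baseline chosen, which is the whole force of the status quo bias in this example.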
EOT: The algorithmic approach to addressing cognitive bias seems to assume that rationality is a measurable quantity that depends on the outcomes (i.e., pay-offs) of the decision process. However, it is known that each individual is subject to different perceptions of reality, being more or less satisfied with such outcomes. What are the main difficulties (if any) of considering a “universal rationality” in such models?
Fabiano: We can define a rational action as the one that maximizes gains and minimizes losses. What counts as a gain and what counts as a loss is left to be determined. Any value can be taken as the one to be maximized, and any action – however counter-intuitive it may be – taken to maximize that value can be regarded as rational.
The evolutionary history of humanity has left us relatively uniform with respect to many values. If we want to speak of a universal human rationality, I think we should take these values into account, such as happiness and the preservation of life. These two alone already give us plenty of work.
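Fabiano's definition of rational action is, in effect, expected-value maximization once the gains and losses are fixed. A minimal Python sketch, with actions, payoffs and probabilities all invented for illustration:

```python
# Each action maps to a list of (probability, payoff) outcomes; all numbers invented.
actions = {
    "act": [(0.9, 10.0), (0.1, -20.0)],   # usually gains, occasionally loses big
    "abstain": [(1.0, 0.0)],              # guaranteed status quo
}

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

# Under this definition, the rational action is the expected-value maximizer.
best = max(actions, key=lambda a: expected_value(actions[a]))
```

Here "act" wins with an expected value of 7.0 against 0.0 for "abstain"; changing what is counted as a gain or a loss changes the answer, which is exactly the "left to be determined" caveat in the definition.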
EOT: On the neurochemicals approach, do you know of any result suggesting a correlation between the levels of such substances and the incidence of cognitive bias?
Fabiano: Some theories of cognitive biases explain them as shortcuts that human cognition takes to solve certain tasks. In most cases, these shortcuts achieve the desired result with less processing power than the ideal reasoning would require. In some cases, however, they lead to mistakes, and some of these errors can prove catastrophic.
By increasing our cognitive ability, cognitive enhancers have great potential to refine our thinking: raising our processing capacity helps us avoid shortcuts that do not always reach the correct answer.
EOT: Transhumanists often point to the possibility of cognitive enhancement through neurochemicals. However, according to the W.H.O., about one billion people still lack access to safe drinking water. How can we guarantee that such technologies will be available to everyone and will not be used to support an emerging elite, thereby contributing to inequality?
Fabiano: Inequality is a serious problem, which with some luck may find a better solution if the population can evaluate the suffering of others more effectively. A major bias affecting inequality is bystander apathy, which leads no one in a crowd to help an injured person simply because everyone is waiting for someone else to act.
I think that only a global distribution of these technologies, so that everyone has equal access, would ensure that the entire population benefits from them. Yet few people would be willing to support this idea, both in general and in the specific case of cognitive enhancers. This is a problem big enough to be analyzed and solved separately, on its own terms.
EOT: It seems that biological evolution has become a conscious process, as humans are now capable not only of directly interfering in genetic inheritance through applied knowledge from disciplines such as bioengineering, but also of discussing what the next generation of humans should be like. How is the discussion between transhumanists and society going?
Fabiano: The discussion has been growing at an accelerated pace. There is an increasing number of academic and popular publications on the subject, and media exposure has never been greater. As Nick Bostrom has noted, the specialized debate increasingly concerns how transhumanist technologies should be implemented to maximize their benefits, no longer whether we should use them at all.
However, much of the general population is still relatively removed from the discussion. Many still imagine a future populated only by technological gadgets and do not understand that technology will have a much more profound impact on the human condition.
EOT: Recently, experts were asked “How long till human-level AI?” (see EOTomorrow entry: http://wp.me/pMYsS-7v). Are you optimistic about machines passing a Turing Test in the near future or even achieving superhuman intelligence?
Fabiano: Personally, I would like to believe the most optimistic forecasts. If the current development of technology continues at this rate, I believe that in a few decades we will reach human-level intelligence, and then superhuman intelligence, in a safe manner. But given the number of catastrophic risks – large meteors, supervolcanism, epidemics, nuclear war and terrorism, nanotechnology, biotechnology and, finally, a superintelligence going wrong and destroying what we value most – this may prove to be a big ‘if’. I consider the current epistemic state of this discussion to be one of great uncertainty.
See Nick Bostrom’s article “The Mysteries of Self-Locating Belief and Anthropic Reasoning”, available at: http://anthropic-principle.com/preprints/mys/mysteries.pdf
“Cognitive Biases Potentially Affecting Judgment of Global Risks” is available at: http://www.singinst.org/upload/cognitive-biases.pdf
Readers can contact João Fabiano at joaolkf at gmail dot com
Want to contribute data to Edge of Tomorrow? Contact us.
Comments encouraged below.