Monday, October 12, 2009

Part 2: Interview Series with Aaron Blaisdell of UCLA

In the first installment of this fabulous series, I left you hanging on the edge of your seat by cutting off mid-answer, leaving you itching for more from the epistemocratic Dr. Blaisdell.

Well, here you go; on with the show ...

KP: In what ways does your research inform your personal health mythology or strategy?

AB: ... A naive observer, someone like a Martian or a bushman, who saw a barometer change and then the weather change might become superstitious and begin doing a rain dance, thinking, "Oh, this event causes that; my crops are doing well, and it would be really nice if that watering hole weren't drying up, so the animals would come back. Let's tamper with that barometer." That would be an error of deriving cause-and-effect relationships in a purely associative way, because the two things are actually both effects of a common cause that this particular individual didn't know about. But the scientific method allows us to discover hidden causes in our world by manipulating events. We do this through experiments. Say we thought the causal model was the one I initially came up with: that the barometer causes changes in the weather. I am going to tamper with the barometer. Doing that experiment, you are going to discover that it doesn't cause changes in the weather. So you have to give up that causal model and think about what other causal models might be more accurate. Well, maybe there is a common cause. That is how we come to know about common causes.

It's through science and manipulation, what we call intervention: you intervene on one variable, and if it affects another variable every time you do it, then you grow in your confidence that event A is a cause of event B. But if you manipulate event A and it doesn't affect event B, maybe they are correlated because of some other causal structure.
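The observation-versus-intervention distinction Blaisdell describes can be illustrated with a toy simulation (a sketch of my own; the variable names and probabilities are illustrative, not from the interview). A hidden pressure variable is a common cause of both the barometer reading and the storm: observing a low barometer predicts the storm, but setting the barometer yourself does not.

```python
import random

random.seed(0)

def trial(intervene_barometer=None):
    """One trial of the common-cause model: pressure -> barometer, pressure -> storm."""
    low_pressure = random.random() < 0.5           # hidden common cause
    storm = low_pressure                           # pressure causes the storm
    if intervene_barometer is None:
        barometer = low_pressure                   # pressure also drives the barometer
    else:
        barometer = intervene_barometer            # intervention: we set the dial ourselves
    return barometer, storm

N = 10_000

# Observation: among trials where the barometer reads "low", how often does a storm follow?
obs = [trial() for _ in range(N)]
p_storm_given_obs = sum(s for b, s in obs if b) / sum(b for b, _ in obs)

# Intervention: we force the barometer to read "low" on every trial.
ints = [trial(intervene_barometer=True) for _ in range(N)]
p_storm_given_do = sum(s for _, s in ints) / N

print(f"P(storm | observe low barometer) = {p_storm_given_obs:.2f}")  # ~1.00
print(f"P(storm | set barometer low)     = {p_storm_given_do:.2f}")   # ~0.50
```

Tampering with the barometer breaks the correlation, exactly the signature that tells the experimenter the barometer is an effect, not a cause.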

Can rats think like that?

Most people would assume not. Maybe they can discover cause-and-effect relationships through instrumental learning. But can they, before they even have a chance to intervene on something, entertain the idea that two events are correlated because of a common cause? That's what I set out to test, and I actually showed evidence of it in rats.

KP: Is this similar to Taleb's idea of tinkering?

AB: Yeah, all these things are really playing with similar ideas—how we glean information and how we think about what we don't know.

So, some of the foundations of tinkering: it's not just instrumental learning, trial-and-error. Maybe, as Popper said, he wanted his hypotheses to die in his stead. Maybe mental tinkering [BP's note: I call this thinkering], before you actually physically tinker, allows you to throw out some bad hypotheses, depending on the model or mythology you are coming from, before you try out some of the better ones. Rats have some of this.

To get back to *the rat study—it's very simple actually. What I did is: I had the rats learn that a light, a little flashing light in a Pavlovian box, is followed sometimes by a tone and sometimes by food. So they might have used Pavlovian conditioning; just as I said, Pavlovian conditioning might be the substrate by which animals learn to piece together spatial maps and maybe causal maps as well. If they treat the light as a common cause of the tone and of food, they see [hear] the tone and they predict food might happen. Just like if you see the barometer drop, you think, "Oh, the storm might happen." But if you see someone tamper with the barometer, and you know that the barometer and the storm aren't causally related, then you won't think that the weather is going to change. So, the question is, if the rat intervenes to make the tone happen, will it now no longer think the food will occur?

So there were a bunch of rats; they all had the same training—light as an antecedent to tone and food. Then, at test, some of the rats got tone and they tended to go look in the food section. So they were expecting food based on the tone—which humans would say is a diagnostic reasoning process. “Tone is there because light causes tone and light also causes food. Oh, there must be food.” Or, it's just second-order Pavlovian conditioning. The critical test was with another group of rats that got the same training. We gave them a lever that they had never had before. They were in this box, and they have a lever that is rigged so that if they press the lever the tone will immediately sound. So now the question is, do the rats attribute that tone to being caused by themselves? That is, did they intervene to make that variable change? If they thought that they were the cause of the tone, that means it couldn't have been the light; therefore the other effect of the light, food, would not have been expected. In that case, the intervening rats, after hearing the tone of their own intervention, should not expect food. Indeed, they didn't go to food nearly as much. That is the essence of the finding and how it fits in with this idea of causal models and how we go about testing our world.
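The two test conditions can be sketched as a small Bayesian calculation over the trained causal model, light → tone and light → food (the numbers are illustrative assumptions of mine, not from the study). Hearing a tone supports diagnostic inference back to the light, and so to food; causing the tone by lever press severs the light → tone link ("graph surgery"), so the tone carries no information about the light.

```python
# Toy causal model of the training: light -> tone, light -> food.
# All probabilities are illustrative, not taken from the paper.
p_light = 0.3                 # prior probability the light was on
p_tone_given_light = 0.8
p_tone_baseline = 0.05        # tone occurring without the light (noise)
p_food_given_light = 0.8

# Observation condition: the rat hears a tone and reasons diagnostically
# back to the light via Bayes' rule, then forward to food.
p_tone = p_light * p_tone_given_light + (1 - p_light) * p_tone_baseline
p_light_given_tone = p_light * p_tone_given_light / p_tone
p_food_obs = p_light_given_tone * p_food_given_light

# Intervention condition: the rat's own lever press caused the tone, so the
# tone is no longer evidence for the light; expectation of food falls back
# to the prior pathway.
p_food_do = p_light * p_food_given_light

print(f"P(food | hear tone)          = {p_food_obs:.2f}")
print(f"P(food | lever caused tone)  = {p_food_do:.2f}")
```

The observation condition yields a much higher food expectation than the intervention condition, which is the qualitative pattern the intervening rats showed by searching the food hopper less.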

That has changed how I have been looking at the world—the way I look at this whole nutritional lifestyle change that I have been going through—it's kind of a metamorphosis as I go through all these blogs and stuff. I am tinkering with these ideas myself. I am realizing that I am more cognizant of the fact that I am creating causal models. Now, when I am entertaining a hypothesis, I am doing so through this kind of system of spinning ideas: "Is this the right mix or is that?" Then you can go on and test it through self-experimentation, trying to discover variables that are related causally and others that are not related causally. That's a very powerful tool.

Is there a specific example of a personal change that happened through this causal model?

I have followed a lot of the literature on grains being inflammatory. I have cut grains almost entirely out of my diet, and a lot of my key inflammatory markers have disappeared. My gums no longer bleed when I floss them. I have had confirmation of the hypothesis that I gleaned from the internet. My belt size went down a notch. Don talked about this on Primal Wisdom: When he and others used to eat a lot of carbs, they'd feel bloated; they'd loosen their belts. I used to chalk it up to, "I just ate a lot; my stomach is bigger." That's a natural hypothesis; you think in terms of cause-and-effect: Stomach is bigger; it's basically a bag that stores food; so, you loosen your belt. But now I can eat as much as I want of meat and such, as long as it doesn't have grains or bad fats, like industrial oils. I don't get the inflammatory response anymore. I threw out one hypothesis in favor of another that is more consistent with the evidence.

As Aaron shares, the best we can do is be humble, tinker, and live each day by our yet-to-be-falsified personal ('n=1') mythologies, positioning ourselves to seize positive Black Swan opportunities—chance favors the prepared mind—while clipping our exposure to negative Black Swan strikes: it's perceptive optimism in action (thanks to Dave Lull).

At some point, it is inevitable: we will have to re-edit the stories that we tell ourselves, as Aaron has done; we will have to let go of those falsified components of our narratives.

Sometimes, this can be quite challenging; but, the longer we wait, the more challenging it will become.

This is the process of becoming a Mythocrat, the Sophiological friend of the Epistemological Epistemocrat. They're two kindred, complementary archetypes.

Epistemocracy and Mythocracy combine to form a dynamic 'united dyad' for interdisciplinary inquiry into the human condition; it's simply reflection about, as Aaron said, "how we glean information and how we think about what we don't know."

Again, please leave any questions or remarks for Aaron in the comments section of this post, and he will respond promptly. He's awesome that way.

Please tune in again soon for Part 3 of this wonderful Interview Series.

Thanks again to Kai and Aaron: cheers!

To good health,


*Here's his abstract, by the way:
Causal Reasoning in Rats

Aaron P. Blaisdell (1*), Kosuke Sawa (2), Kenneth J. Leising (1), Michael R. Waldmann (3)

Empirical research with nonhuman primates appears to support the view that causal reasoning is a key cognitive faculty that divides humans from animals. The claim is that animals approximate causal learning using associative processes. The present results cast doubt on that conclusion. Rats made causal inferences in a basic task that taps into core features of causal reasoning without requiring complex physical knowledge. They derived predictions of the outcomes of interventions after passive observational learning of different kinds of causal models. These competencies cannot be explained by current associative theories but are consistent with causal Bayes net theories.

1 Department of Psychology, University of California, Los Angeles, CA 90095, USA.
2 Japan Society for the Promotion of Science, Nagoya University, Nagoya 464-8601, Japan.
3 Department of Psychology, University of Göttingen, 37073 Göttingen, Germany.

* To whom correspondence should be addressed. E-mail: