1/24/15

Nearly 100% Psychobabble Or: Freud, Kahneman, and the Question of Bounded Self Control

When I was very little someone told me the world was round. I don’t remember who told me, or where I was at the time, but I do remember the picture that I drew in my head. (I may have also drawn it on paper, but no record survives). It looked something like this:  




When I was a little older, I don’t remember exactly how old, I drew another picture in my head, a picture of my brain. It looked something like this: 



I imagined the brain as two distinct parts, an inner part that held memory, controlled emotions, and generated ideas, and an outer part that interpreted the deep mysterious movings of the inner part and turned them into language. When I was asked a question, the inner part supplied the answers, while the outer part ran in circles like an anxious lap dog, panting “What’s the answer? You know the answer, right? We definitely know the answer. So what’s the answer?” When I felt sad, or angry, or happy, it was the inner part that supplied the feeling, while the outer part looked for an explanation. “Oh, you must feel sad because — didn’t want to sit with you at lunch. Or maybe you’re tired. Or maybe we just really don’t want to go to swim practice.” When I was creative, the outer part celebrated: “You’re great! We’re great! What a great idea!”

I never told anyone about the two-part picture of my brain. By then I was old enough to know what an actual brain looked like, roughly, and to grasp the difference between a representation and reality. It was just a funny picture of my head, for my head, by my head.  

In high school, I took AP Psychology, and was taught to think of the brain as a collection of parts that each controlled particular functions. Accurate or not, I had a new picture of the brain, one with more parts than two: 

Fast forward to last summer, when I read “Thinking, Fast and Slow” by Daniel Kahneman. In AP Psychology and in another psychology course I took in college, I had learned about the systematic mistakes that the human brain makes, many of which were first identified by Kahneman and his colleague Amos Tversky, and formalized into neat, wikipediable concepts: anchoring, availability, representativeness, loss aversion. But Kahneman went further in his book, putting all of the small things he and Tversky had learned over a long career into a new framework, a new picture of the brain. Or more accurately, a picture of two brains, one powerful and fast, but subject to a million biases (System 1), and one careful and logical, but achingly slow (System 2). 


If you can't tell, that's a brain dressed up as a turtle and a brain dressed up as a rabbit. 

I couldn’t help but be reminded of my old two-part picture of the brain. System 1 was clearly the mysterious inner part, the black hole that I couldn’t control but that was at the same time responsible for most of what I am. System 2 was the outer part, the question-asker, the explanation-searcher. Clearly most of my pseudo-theory-not-a-theory didn’t line up, but I did feel somewhat pseudo-psychically validated. 

Fast forward again to LSE, and economics. We spend most of our time, of course, modeling “rational” agents, decision makers who seek to maximize their utility given complete information about prices, quantities, and the preferences of other agents. We all know, of course, that this is not the “real world,” but we find use in it anyway: principles that we can take away and apply to real-world situations where, in the absence of complete information, perfect optimization is not possible. But haunting us always is a spectre, the spectre of “irrationality.” 

I hate the word “irrationality” and I avoid using it when I can. It’s definitely one of those buzzwords that has so many meanings it has no meaning, and I hate it particularly because I think that many of the meanings people associate with it are wrong. 

For example, I often hear something along the lines of “irrationality means that people are not utility-maximizing agents.” But the beauty of the concept of utility is that it can justify almost any decision, if you assume the right form for the utility function. Sometimes the flaw in this meaning of “irrationality” is obvious. “Oh, look, she took the blame for something her friend did. Clearly she is not maximizing utility.” No, she just receives utility from altruism. Or from masochism.

Sometimes defending utility-maximizing rationality is easy, but sometimes it requires a little more subtlety. Defending it is important because it lies at the foundation of almost everything we do in economics; even behavioral economics usually assumes that agents are maximizing some utility function, albeit a funky one. In the 1940s John von Neumann and Oskar Morgenstern defined a rational agent as someone whose preferences can be represented by a utility function that satisfies four key axioms:

1) Completeness
2) Transitivity 
3) Continuity 
4) Independence of Irrelevant Alternatives 

You can read Wikipedia if you want more. There are many examples of actions that appear to violate one or more of the von Neumann-Morgenstern (VNM) axioms, but in most cases I think they actually don’t.
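For reference, here is the standard textbook statement of the four axioms, written over lotteries (probability mixtures of outcomes) $p$, $q$, $r$ with a preference relation $\succeq$; this is the usual formulation, not anything specific to the examples below:

```latex
\begin{itemize}
  \item \textbf{Completeness:} for all lotteries $p, q$: either $p \succeq q$ or $q \succeq p$.
  \item \textbf{Transitivity:} if $p \succeq q$ and $q \succeq r$, then $p \succeq r$.
  \item \textbf{Continuity:} if $p \succeq q \succeq r$, there exists $\alpha \in [0,1]$
        such that $\alpha p + (1-\alpha) r \sim q$.
  \item \textbf{Independence:} if $p \succeq q$, then
        $\alpha p + (1-\alpha) r \succeq \alpha q + (1-\alpha) r$
        for all $r$ and all $\alpha \in (0,1]$.
\end{itemize}
```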

For example, the transitivity axiom requires that if you choose an orange over a pear, and a pear over an apple, then to be rational you must choose an orange over an apple. But the VNM axioms don't do well with time. It’s impossible, of course, to be given all three discrete choices simultaneously; then you would just be choosing between an apple, a pear, and an orange. You can always argue that in the interval between the pear-versus-apple choice and the orange-versus-apple choice your mood changed enough to change your preferences; the form of your (rational) utility function changed. No one said preferences have to be constant, did they? (Do I contradict myself? Very well then I contradict myself. I am large, I contain multitudes.)
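To make the mood-change defense concrete, here is a toy sketch (all names and numbers are invented for illustration): three pairwise choices that look like a preference cycle, yet each one maximizes some perfectly well-behaved utility function, as long as the function is allowed to drift between choices.

```python
# A choice is "rational" at each moment if it maximizes the utility
# function in force at that moment.
def choose(a, b, utility):
    return a if utility[a] >= utility[b] else b

# Three moods at three moments; each is a valid utility over the fruit.
moods = [
    {"orange": 3, "pear": 2, "apple": 1},   # moment 1: orange beats pear
    {"orange": 1, "pear": 3, "apple": 2},   # moment 2: pear beats apple
    {"orange": 1, "pear": 2, "apple": 3},   # moment 3: apple beats orange (!)
]

choices = [
    choose("orange", "pear", moods[0]),
    choose("pear", "apple", moods[1]),
    choose("apple", "orange", moods[2]),
]
# Looks intransitive across time, but no single moment violated anything.
print(choices)
```

Whether you call the drifting utility function a rescue of rationality or a surrender of its predictive power is, of course, the whole debate.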



Another frequently violated axiom is the independence of irrelevant alternatives. IIA asserts that if I’m choosing between an apple and an orange, the presence of a lemon (which clearly I don’t prefer) shouldn’t affect my choice. But what if the lemon reminds me of how sour citrus can taste, and I therefore choose the apple? The VNM axioms also don't deal well with information. The axioms extend easily enough to cover known probabilities (say, for example, there is a 30% chance the apple is mealy and a 20% chance the orange is dry) but they can’t really deal with the absence of information. To maintain VNM rationality, you have to carry around an incredible amount of information about your own preferences. How much did I enjoy the last orange I ate? How much did I enjoy the last apple? How did I feel an hour after I ate each? What is the average utility I've gleaned from all prior experiences with each type of fruit? The additional utility I could achieve from choosing the utility-maximizing option is probably not worth the disutility of the mental effort required to find it. If the information I can assemble about my own preferences with a reasonable amount of effort is augmented by the presence of a lemon, does that really mean I’m irrational? 
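The effort argument can be put in back-of-the-envelope numbers (entirely hypothetical ones): if full optimization squeezes out a little extra utility but costs real mental effort, the lemon-triggered heuristic can win on net.

```python
# True utilities of each fruit, which I could only learn by exhaustively
# recalling every past fruit experience. Numbers are invented.
utilities = {"apple": 5.0, "orange": 5.3}

# Disutility of actually performing that recall and computation.
effort_cost = 0.5

# Full optimization: find the true best fruit, but pay the effort cost.
full_optimization = max(utilities.values()) - effort_cost   # 5.3 - 0.5

# Heuristic: the lemon reminded me citrus can be sour, so grab the apple.
heuristic = utilities["apple"]

# The shortcut beats the "rational" procedure on net utility.
print(full_optimization, heuristic)
```

So a choice that violates IIA on paper can still be the net-utility-maximizing move once the cost of deciding is counted.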

I probably lost you somewhere in the last paragraph. The point is that I think limited information, or equivalently limited mental processing power, can explain a lot of “irrationality,” and that mis-specifications of the utility function (ignoring utility from altruism, or masochism, for instance, or not adjusting to account for changes in mood) can explain a good deal more. I don't think that either of those things mean that we are not maximizing utility. 

A Brookings paper I recently read for class breaks “irrationality” into three parts: 

1) Imperfect optimization 
2) Non-standard preferences 
3) Bounded Self-Control 

They further break “imperfect optimization” into “limited attention,” “limited computational capacity,” and “biased reasoning.” To me, the first two clearly refer to a lack of information. To elaborate on the third, they discuss many of the heuristics and biases first identified by Kahneman and Tversky, and today taught in Psychology 101. Kahneman points out that most of the time these heuristics and biases are very useful; they allow the brain to take shortcuts and arrive at a decision with much less information than a truly rational agent would need to accumulate. All of “imperfect optimization,” therefore, I categorize as “irrationality” by lack of information. 

“Non-standard preferences” clearly aligns with mis-specifications of the utility function. Yes, frequent mood swings and strong preference for altruism (or for masochism) make life difficult for modeling economists, but I don't think they imply insanity. “Non-standard preferences” therefore, I categorize as “irrationality” by originality. 

“Bounded Self-Control” though. There’s a doozy. It’s certainly real; it’s why I’m writing this blog post instead of doing problem sets. But it’s not a lack of information, exactly. I’m aware that my time and effort could better be expended elsewhere. It’s not non-standard preferences, really. When I’m scrolling through Buzzfeed, I’m neither happy in the moment nor when I finally force myself back to work. 

In microeconomics we briefly learned about something called the “two selves” or “multiple selves” model. In short: I have two selves, one that wants healthy things and one that wants fatty things. The two selves are hungry; when they are choosing a restaurant the healthy self is in control, but when presented with a menu the fatty self takes over. They eat the fattiest thing at the healthiest restaurant. 
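The restaurant story can be sketched as a tiny two-stage game (restaurant names, dishes, and utility numbers are all invented for illustration): the healthy self moves first and picks the venue, then the fatty self picks from whatever menu it finds there.

```python
# Each dish maps to a pair: (healthy self's utility, fatty self's utility).
RESTAURANTS = {
    "salad_bar":    {"garden salad": (8, 2), "loaded fries": (3, 9)},
    "burger_joint": {"side salad":   (5, 1), "double burger": (2, 10)},
}

def healthy_value(menu):
    # The healthy self evaluates a restaurant by its best dish,
    # naively assuming it will get to choose the dish later.
    return max(h for h, f in menu.values())

def fatty_choice(menu):
    # Once inside, the fatty self grabs whatever it likes best.
    return max(menu, key=lambda dish: menu[dish][1])

# Stage 1: healthy self picks the restaurant.
restaurant = max(RESTAURANTS, key=lambda r: healthy_value(RESTAURANTS[r]))
# Stage 2: fatty self picks the dish.
dish = fatty_choice(RESTAURANTS[restaurant])

# Result: the fattiest thing at the healthiest restaurant.
print(restaurant, dish)
```

The awkwardness shows up immediately: the healthy self here is naive about stage 2. A "sophisticated" version would anticipate the fatty self's takeover and rank restaurants by the fatty self's eventual pick, which is exactly the kind of patch that makes these models mathematically clunky.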



Two selves… two brains… two parts of one brain… one part that makes choices taking long-term goals into account and the other that eats three servings of fries and goes home with a stomachache. One rational, and the other… not. I like the two selves model, but it’s awkward, both mathematically and conceptually. Where does fatty come from? Why does it exist, when its presence clearly decreases the utility of the whole? Where does fatty end and healthy begin? And this is where I bring in Freud. 

(Disclaimer: I’ve only read a little Freud, and I don't do him justice. What I’m really bringing in here are the concepts we all vaguely associate with Freud—the unconscious, the primal, and sure, sex. And also the concepts we associate with Darwin, because basically all of social science, and science, is built on Darwin. One can make all sorts of ridiculous blanket statements in parentheses.)

Over the holidays I read “Civilization and its Discontents,” one of a stack of yellowed books from the seventies that my father found in the attic and brought down for distribution on Christmas morning. In the book, Freud asserts that the purpose of life is to maximize pleasure or minimize suffering (he even gets the word “utility” in there once or twice) and that civilization, or, as I shall henceforth call it, the economy, is at once man’s greatest tool to achieve happiness and the primary cause of his suffering. We get housing, protection from violence, and health insurance, but in return we must repress our violent instincts and divert part of our libido from sexual objects towards society itself to foster cohesion…or something like that...and then a bunch of stuff about how civilization is the natural result of the son’s desire to kill the father, and peeing on fire is an expression of man’s victory over... something. Anyway: 


Our civilized self, or ego, knows that civilization is for the best, and wants to play by the rules. Our primal self, or id, doesn’t want to play by the rules of the economy; its responses were coded before the economy came into being. It doesn’t want to get up and commoditize its labor every morning to earn money to exchange for goods in markets that didn’t exist during the dawn of the world, doesn’t want to make long-term plans for a future that in its youth was far too uncertain to ever plan for, doesn’t want to forgo the fatty, nourishing food for the salad that, as a starving caveman, it would have certainly regretted later. Its heuristics and biases served it well back then. The primal self has preferences, and they may have once been utility maximizing, but they are no longer.

Alright, let’s line ‘em up and take stock: 


Thoughts? 

At first glance, it's easy to dismiss the first row, the fatty part, as irrational, to argue that it is a relic, and that we would be better off without it. But neither Kahneman nor Freud (I think) discredits that part of the psyche. Rather, they are in awe of it. Kahneman recognizes that System 1 is responsible for most of our behavior; the million decisions required to wake up, get dressed, eat breakfast, brush teeth, and get to work or class are made with almost no intervention from System 2. In my head's picture of my head, the inner part is intuition and creativity. But it is also uncontrollable. In Freud's words: 
"One might compare the relation of the ego to the id with that between a rider and his horse. The horse provides the locomotor energy, and the rider has the prerogative of determining the goal and of guiding the movements of his powerful mount towards it. But all too often in the relations between the ego and the id we find a picture of the less ideal situation in which the rider is obliged to guide his horse in the direction in which it itself wants to go."
Is it possible to retain the power and the intuition of the inner self while exorcising the irrationality? Or will our rational self, our civilized self, our economic self, always struggle with this more primitive psychology? I'm not sure. For the moment, it certainly makes modeling, policy, and life more difficult. It’s a balance. We all have to manage our fatty. Sometimes that means denying it fries, or putting it in an environment where it can only choose salad, and sometimes that means letting it run amok and making up for it tomorrow. But if it's necessary, I guess I’m ok with a little irrationality, for now. 


