Wednesday 4 November 2009

Week 4 - Expected Utility Theory, Prospect Theory & the Priority Heuristic

This week we were asked to read The Priority Heuristic: Making Choices Without Trade-Offs by Brandstätter, Gigerenzer and Hertwig (2006). In my view this article was quite heavy, but nevertheless interesting. Before the lecture I was quite confused about the difference between Expected Utility Theory and Prospect Theory, but luckily Dr Hardman explained the differences clearly in the lecture.

From what I understood, Expected Utility Theory (EUT) deals with decisions under uncertainty. The theory claims that when people make risky decisions, they try to maximise their expected utility. From this lecture I have learnt that everybody values outcomes differently; for example, £10 may mean a lot to one person but not to another. According to EUT we can calculate the expected utility by multiplying the utility of each outcome by its probability and then summing across the possible outcomes. For example, if I were to flip a coin and win £100 if it landed on heads and nothing if it landed on tails, the expected utility would be (100 x 0.5) + (0 x 0.5) = 50. However, given a choice between A, a 50% chance of winning £50, and B, a certain win of £30, you might think that because 'people try to maximise their expected utility' they would choose A, but most people would actually choose B, as there is an overwhelming amount of evidence that people favour smaller but certain gains and are risk averse. Interestingly, the priority heuristic can predict people's choices through three steps: the priority rule, the stopping rule and the decision rule.
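The three rules above can be sketched in code. This is only a minimal sketch of how I understood the heuristic from the article, restricted to simple gambles with gains, and it leaves out the paper's detail of rounding the aspiration level to a "prominent" number.

```python
# A minimal sketch of the priority heuristic for two-outcome gambles with
# gains, as described by Brandstätter et al. (2006). A gamble is represented
# as (minimum gain, probability of minimum gain, maximum gain).

def priority_heuristic(a, b):
    """Choose between gambles a and b, each (min_gain, p_min, max_gain)."""
    a_min, a_pmin, a_max = a
    b_min, b_pmin, b_max = b
    # Aspiration level: one tenth of the largest maximum gain
    # (the paper also rounds this to a prominent number; omitted here).
    aspiration = 0.1 * max(a_max, b_max)

    # Priority rule: look at minimum gains first.
    if abs(a_min - b_min) >= aspiration:        # stopping rule: difference is big enough
        return 'A' if a_min > b_min else 'B'    # decision rule: better minimum gain wins
    # Next reason: probabilities of the minimum gains (aspiration level 0.1).
    if abs(a_pmin - b_pmin) >= 0.1:
        return 'A' if a_pmin < b_pmin else 'B'  # lower chance of the worst outcome wins
    # Last reason: maximum gains decide.
    return 'A' if a_max > b_max else 'B'

# The example from this post: A = 50% chance of £50 (else nothing), B = £30 for sure.
print(priority_heuristic((0, 0.5, 50), (30, 1.0, 30)))  # 'B'
```

Here the minimum gains (£0 vs £30) differ by more than a tenth of the largest gain (£5), so examination stops at the first reason and B is chosen, matching what most people actually do.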

In contrast, Prospect Theory, which is a modification of EUT, proposes that decision-makers code potential outcomes as gains and losses relative to a reference point, which may also be an expectation or aspiration. Changes close to the reference point are felt more strongly than those further from it. According to Prospect Theory 'losses loom larger than gains', and this can be seen in the steeper curvature of the value curve for losses compared to gains. I can relate to this theory as I play poker. One day I won £20 and I was quite happy, though not overly so; however, on another day I lost £15 and I was bickering about it for the rest of the day. This lecture and article have taught me why I became so risk-taking when I lost my first £5, as I tried to recuperate the money I had lost. In future I will try to control myself if I lose money, and not triple my losses!
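The poker story can be checked against a toy version of the value function. The particular numbers below (exponent 0.88, loss-aversion coefficient 2.25) are the estimates Tversky and Kahneman reported in 1992, not anything from this week's reading; any similar concave-for-gains, steeper-for-losses pair would show the same pattern.

```python
# A toy prospect-theory value function illustrating 'losses loom larger
# than gains'. Parameters are Tversky & Kahneman's (1992) estimates.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain/loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha               # concave curve for gains
    return -lam * ((-x) ** alpha)       # steeper curve for losses

# Winning £20 vs losing £15:
print(value(20))    # roughly +14
print(value(-15))   # roughly -24
```

So even though the win was larger in money terms, the smaller loss carries more subjective weight, which is exactly why a lost £15 can nag at you all day while a won £20 feels only mildly pleasant.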

2 comments:

  1. In the example of the coin toss, your calculation is actually for expected value, not for expected utility. That is, the expected VALUE is (0.5 x 100) + (0.5 x 0) = 50. The expected utility of the gamble would be: (0.5 x u100) + (0.5 x u0), where "u" means "the utility of...".

    The fact that someone might prefer a sure thing of £30 to the coin toss that might get £100 if heads comes up is not consistent with expected value theory, but is entirely consistent with the risk aversion that is usually assumed in expected utility theory (recall the curve on the graph I showed). If we measure utility on a zero-to-one scale, then we can assign 1 as the utility for £100, 0 as the utility for £0, and the utility of £30 is as yet unknown. However, if someone prefers the sure thing to the gamble, this implies that u£30 > 0.5 x 1, meaning that the utility of £30 exceeds 0.5. We could of course use the methods described on last week's handout in order to measure a person's utility for £30.
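    To make the distinction concrete, here is the same arithmetic written out in code. The utility of 0.7 assigned to £30 is purely an illustrative assumption; a real person's value would have to be measured with the methods from the handout.

    ```python
    # Expected value vs expected utility for the coin toss, with utility
    # measured on a 0-to-1 scale. u(£30) = 0.7 is an assumed value.

    def expected_value(outcomes):
        """outcomes: list of (amount, probability) pairs."""
        return sum(x * p for x, p in outcomes)

    def expected_utility(outcomes, u):
        """u: dictionary mapping each amount to its utility."""
        return sum(u[x] * p for x, p in outcomes)

    gamble = [(100, 0.5), (0, 0.5)]
    u = {100: 1.0, 0: 0.0, 30: 0.7}   # any u(30) > 0.5 implies risk aversion here

    print(expected_value(gamble))            # 50.0 in money terms
    print(expected_utility(gamble, u))       # 0.5 in utility terms
    print(expected_utility([(30, 1.0)], u))  # 0.7 > 0.5, so the sure £30 is preferred
    ```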

    However, some of the other examples described last week (from Kahneman & Tversky, 1979) show how people violate certain principles of expected utility theory. These are also described in Chapter 7 of my textbook.

    ReplyDelete
  2. Thank you, and sorry for the misunderstanding; I fully understand it now after reading your comment and the textbook. Shall I edit this blog?

    ReplyDelete