Wednesday 2 December 2009

Last week we discussed what we have learnt over the past seven weeks. A question arose about what is involved in making a decision or judgement; some students claimed that we make decisions based on logic (e.g. by outlining the pros and cons, or advantages and disadvantages). However, this does not explain why we can make good decisions under limited time and knowledge. I did not say much in the lecture, as I am quite shy, but I recently saw a great documentary called "My Brilliant Brain - Make Me A Genius". Your first impression may be that it has nothing to do with judgement and decision making, but there is a part where Susan Polgar (the first woman to earn the chess grandmaster title through regular tournament play) uses her instinct to choose her next move in a chess game played with under 60 seconds on the clock. In other words, she draws on her memory of previous games and plays her next move on instinct. Most people who play chess deliberate before making their next move, using logic and weighing up the alternatives. Remarkably, playing this way, Susan still wins.

This may suggest that we unconsciously use our memory when we make a quick, snap judgement or decision. I personally believe that our brains are extremely complex; when we make a decision or judgement we combine our memories, emotions and morals to make a rapid choice that we think is best. Please click on the hyperlink below to watch the documentary, or search for it on Google Video.

My Brilliant Brain - Make Me A Genius
Monday 23 November 2009
Week 6 - The Endowment Effect
The endowment effect suggests that people tend to demand more for an item they own than they would be willing to pay to acquire it. This contradicts standard economic theory, which holds that the value of an object is independent of initial ownership (Kahneman et al., 1990). Loss aversion, as captured by the value function of prospect theory, suggests that people should find it difficult to part with things that they own.
Kahneman et al. (1990) conducted a study with students to investigate this effect. The students were randomly allocated to three groups. The first group, the 'sellers', were given a coffee mug and asked whether they were willing to sell it at a series of prices ranging from $0.25 to $9.25. The second group, the 'buyers', were asked whether they were willing to buy a mug at the same set of prices. The sellers set a higher median price ($7.12) than the buyers ($2.87), which is consistent with the endowment effect predicted by loss aversion. The third group, the 'choosers', provided independent valuations of the mugs to test whether the sellers were overpricing them: for each price they chose whether they would rather have a mug or the cash. The choosers' median price was $3.12, quite close to the buyers' valuation, suggesting that the value of an object depends on ownership, even when that ownership is randomly assigned. However, some authors have argued that the endowment effect only appears with naive participants and is reduced in people with experience of market settings (Coursey et al., 1987).
Friday 20 November 2009
Week 5 - Framing Effects
Framing effects occur when different descriptions of the same decision problem give rise to predictably different preferences (Tversky & Kahneman, 1981). One study of framing effects was conducted by Tversky and Kahneman (1981), in which participants were presented with the 'Asian disease' problem. The researchers proposed two alternative programs to combat the disease; the programs were presented to participants in either a positive frame or a negative frame, although the 'acts, outcomes, and contingencies' associated with the two versions were essentially the same.
For example, in the positive frame one of the alternative programs was presented as "If Program A is adopted, 200 people will be saved", whereas in the negative frame it was presented as "If Program A is adopted, 400 people will die". Program B, on the other hand, was described as the riskier option; in the positive frame it read "there is a one-third probability that 600 people will be saved, and a two-thirds probability that no people will be saved." In the positive frame the two alternatives are described as gains, and in the negative frame they are evaluated as losses. Participants overwhelmingly chose Program A when it was presented in the positive frame, and overwhelmingly chose Program B when it was presented in the negative frame.
This is in line with prospect theory, as decision makers tend to be risk averse when choosing between perceived gains and risk seeking when choosing between perceived losses. This could account for why people choose the sure option, Program A, in the positive frame and the riskier option, Program B, in the negative frame. However, can framing effects be avoided?
This week we were asked to read Deep Thoughts and Shallow Frames by LeBoeuf and Shafir. The article sought to explain how framing effects might be avoided, and whether this depends on how the individual analyses the problem presented to them. On one view, framing effects can be avoided if respondents put more thought into a decision problem; the claim is that by thinking harder about the problem they would detect alternative ways of framing it (Smith, 1985). Some research has supported this view, as framing effects occurred less often when participants were asked to provide justifications for their decisions.
LeBoeuf and Shafir also investigated whether framing effects are moderated by respondents' tendency to give decisions greater thought, using the Need for Cognition (NC) scale, which captures 'differences among individuals in their tendency to engage in and enjoy thinking' (Cacioppo & Petty, 1982, p. 116). Individuals high in NC tend to think more, search for more information, and pay less attention to surface cues than those low in NC (Verplanken et al., 1992; Heppner et al., 1983). Although this approach sounds promising, studies comparing people high and low in NC have yielded mixed results. For example, Study 1 of the article found that framing effects were not moderated by the degree of thought given to a problem: the responses of both high- and low-NC participants were heavily and equally influenced by the frame provided.
In my view, everyone is susceptible to framing effects, no matter how much thought is put into a decision problem. Framing effects are everywhere, from advertising to politics. For example, after 9/11 the USA PATRIOT Act was presented as a law to 'monitor its citizens for the purpose of countering terrorism', which is, in my view, quite vague, and its very title implies that anyone who opposes it is unpatriotic. The law was signed on 26 October 2001.
Wednesday 4 November 2009
Measuring Utility
We were asked to measure our utility function for a certain range of monetary values, using two different methods: the Certainty Equivalence Method and the Probability Equivalence Method. The graph above is my utility function using the Certainty Equivalence Method, and below is my utility function using the Probability Equivalence Method.
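To make the procedure concrete, here is a minimal sketch (in Python) of how the certainty equivalence method can build up points on a utility function. It assumes the scale is normalised so that u(£0) = 0 and u(£100) = 1, and the stated certainty equivalents are invented example answers rather than my actual responses.

```python
# A minimal sketch of the certainty equivalence method, assuming the scale
# is normalised so that u(£0) = 0 and u(£100) = 1. The certainty
# equivalents below are invented example answers.

def add_point(points, low, high, certainty_equivalent):
    """The certain amount judged equivalent to a 50/50 gamble between
    `low` and `high` is assigned a utility halfway between theirs."""
    points[certainty_equivalent] = 0.5 * points[low] + 0.5 * points[high]

points = {0: 0.0, 100: 1.0}      # endpoints of the scale
add_point(points, 0, 100, 35)    # £35 judged equivalent to a 50/50 gamble over £0 / £100
add_point(points, 0, 35, 12)     # bisect the lower half of the scale
add_point(points, 35, 100, 60)   # bisect the upper half of the scale

for amount in sorted(points):
    print(f"£{amount}: u = {points[amount]:.2f}")
```

Repeating this bisection gives enough points to plot a curve like the graphs shown here; the probability equivalence method works the other way round, fixing the certain amount and asking what winning probability would make the gamble equally attractive.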
Week 4 - Expected Utility Theory, Prospect Theory & Priority Heuristics
This week we were asked to read The Priority Heuristic: Making Choices Without Trade-Offs by Brandstätter, Gigerenzer and Hertwig (2006). The article was, in my view, quite heavy, but nevertheless interesting. Before the lecture I was quite confused about the difference between expected utility theory and prospect theory, but luckily Dr Hardman explained the differences clearly in the lecture.
From what I understood, expected utility theory (EUT) deals with decisions under risk and uncertainty. The theory claims that when people make risky decisions, they try to maximise their expected utility. From this lecture I have learnt that everybody values outcomes differently; for example, £10 may mean a lot to one person but not to another. According to EUT we calculate expected utility by multiplying the utility of each outcome by its probability and then summing across the possible outcomes. For example, if I flip a coin and win £100 if it lands on heads and nothing if it lands on tails, the expected value is (100 x 0.5) + (0 x 0.5) = 50. However, suppose you can choose between A, a 50% chance of winning £100 (expected value £50), and B, a certain win of £30. If people simply maximised expected value they should choose A, but most people actually choose B; there is an overwhelming amount of evidence that people favour certain gains and are risk averse over gains. Interestingly, the priority heuristic can predict people's choices through three simple steps: a priority rule, a stopping rule and a decision rule.
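To check the arithmetic, here is a small Python sketch of the calculation. The gambles and the square-root utility function are illustrative assumptions only; with a concave (risk-averse) utility function the certain £30 comes out ahead of the 50/50 gamble, even though the gamble has the higher expected value.

```python
import math

def expected_value(gamble):
    """Sum of probability x outcome over all outcomes of a gamble."""
    return sum(p * x for x, p in gamble)

def expected_utility(gamble, u):
    """Same calculation, but each outcome is first passed through a utility function."""
    return sum(p * u(x) for x, p in gamble)

coin_flip = [(100, 0.5), (0, 0.5)]   # the coin-flip example from the text
A = [(100, 0.5), (0, 0.5)]           # 50% chance of £100
B = [(30, 1.0)]                      # certain £30

print(expected_value(coin_flip))       # 50.0, as in the text

# With a concave utility function such as u(x) = sqrt(x), the certain win
# is preferred, matching what most people actually choose:
print(expected_utility(A, math.sqrt))  # 5.0
print(expected_utility(B, math.sqrt))  # ~5.48 -> B preferred
```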
In contrast, prospect theory, which is a modification of EUT, proposes that decision makers code potential outcomes as gains and losses relative to a reference point, which may be an expectation or aspiration. Changes close to the reference point have a greater impact than changes further away from it. According to prospect theory 'losses loom larger than gains', and this can be seen in the steeper curvature of the value function for losses compared with gains. I can relate to this theory as I play poker. One day I won £20 and I was quite happy, though not overly so; on another day I lost £15 and I was complaining about it for the rest of the day. This lecture and article have taught me why I became so risk seeking when I lost my first £5, as I tried to recoup the money I had lost. In future I will try to control myself if I lose money, and not triple my losses!
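As an illustration of 'losses loom larger than gains', here is a short Python sketch of a prospect theory value function. The parameter values are the commonly cited Tversky and Kahneman (1992) estimates, used here only as an assumption for illustration; applied to my poker example, the £15 loss ends up weighing more heavily than the £20 win.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect theory value function: concave for gains, convex and steeper
    for losses. Parameters are the commonly cited Tversky & Kahneman (1992)
    estimates, used purely for illustration."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# The poker example: a £20 win versus a £15 loss.
print(value(20))    # about  13.9
print(value(-15))   # about -24.4 -> the smaller loss is felt more strongly than the larger win
```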
Tuesday 3 November 2009
Week 2/3 - Reasoning the Fast and Frugal Way
This is my first entry and the first time that I have created a blog, so I am quite uncertain how to compose one. Nevertheless, I will give it a go. I am going to write about week 2, after reading Reasoning the Fast and Frugal Way: Models of Bounded Rationality by Gigerenzer and Goldstein (1996), which I found quite interesting.
Simon's (1956, 1982) models of bounded rationality attempt to understand how humans behave when they have little time and knowledge. He argued that information-processing systems typically need to satisfice rather than optimise. Unlike the classical view, in which the laws of human inference are the laws of probability and statistics, Simon argued that we choose the first option that satisfies our aspiration level, instead of enumerating all possible alternatives and estimating probabilities and utilities for every outcome. In accordance with Simon, I believe that we humans are limited information processors and that our minds are shaped by the environment.
To demonstrate Simon's notion of satisficing, various simple algorithms were tested: using computer simulation, Gigerenzer and Goldstein (1996) held a competition between the satisficing "Take The Best" algorithm and several "rational" inference procedures. In the article, satisficing algorithms use limited knowledge and performed, to my surprise, extremely well. One example of an inferential task, where participants must choose between two alternatives using only knowledge retrieved from memory, is: "Which city has a larger population? (a) Hamburg (b) Cologne". This is a great way to test participants under limited time and knowledge (obviously the participants were not German). Surprisingly, around 90% of US students gave the correct answer (Hamburg), and vice versa for German students asked to choose between two American cities.
Many studies seem to support the idea that we can make correct inferences under limited time and knowledge; perhaps this ability arose through evolution. For example, in an emergency (a lion approaches you) we go into fight-or-flight mode and have to decide under limited time and knowledge in order to survive. We would not have the time or knowledge to search for all the relevant information, as the classical view suggests; by the time we had gathered it all, we would have been eaten. (This is not written anywhere, nor based on evidence; I am just hypothesising.)
In the example of which of the two cities is more populated, Gigerenzer explained that we search for a cue, for example that Hamburg has a top football team or an intercity airport, and choose one city over the other on that basis. The Take The Best algorithm is the first satisficing algorithm in the article; it is called Take The Best because its policy is "take the best, ignore the rest". Initially I thought this algorithm was rather simplistic, until I learnt how accurately it made inferences about a real-world environment. Take The Best parallels Simon's notion of satisficing: the algorithm stops searching after the first discriminating cue is found, just as Simon's satisficing rule stops searching when an option meets the aspiration level.
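Here is a minimal Python sketch of the Take The Best idea: check cues in order of validity and stop at the first one that discriminates. The cue values and the cue ordering are invented for illustration, and the recognition step of the full algorithm is left out for brevity.

```python
# A minimal sketch of Take The Best: search cues in order of validity and
# stop at the first cue that discriminates. Cue values and the cue order
# below are invented for illustration; the recognition step is omitted.

cities = {
    # 1 = has the feature, 0 = does not, None = unknown
    "Hamburg": {"capital": 0, "top_football_team": 1, "intercity_airport": 1},
    "Cologne": {"capital": 0, "top_football_team": 0, "intercity_airport": 1},
}

cue_order = ["capital", "top_football_team", "intercity_airport"]  # best cue first

def take_the_best(a, b):
    """Infer which of two cities is larger, or return None to guess."""
    for cue in cue_order:
        va, vb = cities[a].get(cue), cities[b].get(cue)
        if va == 1 and vb in (0, None):
            return a            # the first discriminating cue decides
        if vb == 1 and va in (0, None):
            return b
    return None                 # no cue discriminates: guess

print(take_the_best("Hamburg", "Cologne"))   # -> Hamburg, via the football-team cue
```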
To conclude, before I write another 1,000 useless words, I agree with Simon's concept of satisficing, and I am impressed that the Take The Best algorithm performed so well with limited information and did not need to compute weighted sums of cue values.