Today we are going to explore how to value options. By "options", I mean situations in which you have the right, but not the obligation, to take an action. Having a right without an obligation is a bit like a free lunch: it is inherently valuable, and our goal is to quantify this.

The classic example is the "call" option on a stock. If you go to a financial institution and purchase a call option, you are buying the right, but not the obligation, to later purchase the underlying stock at a specified price. Such an option has value, and indeed people buy and sell options just as they buy and sell other financial instruments.
As I write this, IBM (International Business Machines) stock sells for $167 per share. Call options on IBM are available in a variety of configurations. For instance, one can select the "strike price" (the price at which you may later buy the stock) and the "expiration date" (when you have to choose whether or not to exercise your option). Currently, you can buy a call on IBM with a strike price of $175, expiring on January 12, 2012, for $6.70.
What does this mean? It means that by paying $6.70 today, you receive the right, but not the obligation, to purchase a share of IBM stock on Jan 12th for $175. [Note: actual option contracts are typically much more complex, e.g. some permit exercise prior to expiration, some require margin (an up-front "down payment"), and so on, but we are going to ignore those complexities for today.]
Imagine that IBM stock rises to $200 on Jan 12th. You can "exercise" your option, buying IBM for $175, and immediately turn around and sell the share on the open market for $200, thereby making a $25 profit. On the other hand, if IBM falls to $150, you simply walk away: you refrain from exercising the option, so you pay nothing. In fact, IBM might go up or down by various amounts, but the pattern is the same: if it is below $175 on Jan 12th, you walk away and lose nothing; if it is above $175, you exercise and resell, pocketing the difference as profit.
You cannot lose, and you might win, so clearly this "right without obligation" has value. No one will give you this right, this "option", without some compensation. That compensation is currently $6.70, though that number will change over time as a function of several factors, including the amount of time remaining before expiration, interest rates, the current gap between the stock price and the strike price, and the volatility (tendency to move up or down in price) of IBM stock.
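In R (the language we will use later in this post), the payoff rule just described is a one-liner. Here is a minimal sketch of mine; the function name is made up purely for illustration.

callPayoff <- function(stockAtExpiry, strike = 175) {
  pmax(stockAtExpiry - strike, 0)   # exercise only when the stock ends above the strike
}
callPayoff(c(150, 200))             # 0 and 25, matching the walk-away and exercise cases above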
Despite the phrase "you cannot lose" above, buying a call option is actually quite risky, because you have to pay the "compensation" up front, before you know how things will turn out.
Suppose you have $167 available to invest. If you buy one share of IBM today, then even if IBM goes down to $150 in January, you can still get most of your money back by selling it at that point. You will have lost $17, or about 10 percent of your original investment.
In contrast, if you invest that $167 in options, what happens? At $6.70 each, you can buy about 25 options. If IBM goes up to $200, you make $25 * 25 = $625: you more than triple your money over a four-month period, quite a spectacular profit.
But, if IBM goes down to $150, when you walk away, you lose your entire $167 investment! Indeed, even if IBM goes up to $175, you still lose your entire investment! Even if IBM rises all the way to $181, you lose some money, since the $6 profit per option is less than the $6.70 you spent to acquire it in the first place: your $167 falls to $150. Only if IBM rises to at least $182 do you finally make money: (7 - 6.70) * 25 = $7.50, so your $167 rises to $174.50, about a 5% gain (not counting what you lose to taxes, commissions and bid-ask spreads).
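If you want to check this arithmetic yourself, here is a small R sketch of the comparison (my own illustration; it ignores taxes, commissions and bid-ask spreads, and rounds to a whole 25 options):

finalPrice   <- c(150, 175, 181, 182, 200)
stockValue   <- finalPrice                       # $167 buys one share outright
optionsValue <- 25 * pmax(finalPrice - 175, 0)   # $167 buys about 25 calls at $6.70 each
rbind(finalPrice, stockValue, optionsValue)

The options row shows the leverage at work: nothing at all below the strike, then rapidly overtaking the single share once IBM rises far enough.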
Intuitively, the $6.70 price up front is some kind of probability-weighted average of the potentially large gains on the upside and the zero you get on the downside, although as it turns out, the calculation and interpretation of the probabilities are more subtle than we might expect.
Said another way, options are leveraged: buying a call option is similar to taking out a loan and using the loan proceeds to buy the stock. If the stock rises, you magnify your gain, but if the stock falls, you magnify your loss, just as the "simple machine" known as a lever allows you to magnify force to lift a heavy object. The one difference (and it is an important one) is that by buying the option, you limit your potential loss to the initial option price. You cannot lose more than you put in. With the loan, there is no downside protection: you can lose more than your initial investment if the stock goes down.
For instance, suppose you have $7, take out a $160 loan, and buy a share of IBM, which then falls to $150. You can sell the share for $150 and give that to the bank, but the story is not over. Not only did you lose all of your original $7, but now you also have a debt of $10. This is why buying "on credit", or "on margin", is risky. The leverage inherent in "derivative securities" (an ever-expanding class of financial instruments that includes options, futures, swaps, and all sorts of other exotica) was one of the contributing factors behind the 2008 financial meltdown. Derivative securities, including options, do have legitimate, constructive uses, but they also introduce risk, which requires proper management.
Think of buying a house with a mortgage: if the house price falls far enough, you can be "under water", i.e. you can have negative equity, owing more than the house is worth. Suppose you need to sell the house to move to another city to take a new job. If the mortgage laws require you to pay off the original mortgage in full, you will be stuck with a debt for the difference between the mortgage balance and the house price: the negative equity will haunt you. If instead the mortgage laws give you the option to walk away (move out, give the house back to the bank, and be done with the mortgage), you have a valuable advantage: at worst, you lose the initial down-payment you made on the house, but you do not have to pay the negative equity.
In this case, it is the bank that bears the risk. To manage that risk, banks used to put in place requirements such as a 20% down-payment. During the housing bubble, many banks started offering mortgages with 10%, 5%, 3%, or even 0% down, which greatly increased their leverage. With a 20% down-payment, house prices have to fall a great deal (at least 20%) before any home owner is under water. But with 0% down, any fall in house prices, however small, is enough to tip some people upside down.
Mortgages often provide yet another kind of option: the ability to refinance without penalty. If interest rates come down, you can pay off your old mortgage with a new, lower-rate one, but if interest rates rise, you can keep on paying the old rate (assuming you have a traditional fixed-rate mortgage, not an adjustable-rate one).
As these examples illustrate, options can be embedded inside other contracts (e.g. mortgages). Whether they are stand-alone financial instruments or embedded as terms in another contract, they clearly have value.
So, how do you figure out the value of an option?
In general, the calculations are complicated. You need to know something about the probability distribution of future outcomes, but unfortunately, this is not really something anyone can know. So people make assumptions. One common (but demonstrably incorrect) assumption is that stock prices (for stock options; or interest rates, in the mortgage case) move up or down based on a normal probability distribution. To be more precise, people often assume stock prices follow a "log-normal random walk". This assumption is very convenient, since it greatly simplifies the mathematics: normal distributions are to probability what constant coefficients and linearity are to differential equations.
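To see what this assumption looks like in practice, here is a minimal R sketch of a log-normal random walk. The drift and volatility numbers are purely illustrative guesses of mine, not market estimates:

set.seed(42)
S0 <- 167                       # today's price, from the IBM example above
mu <- 0.08; sigma <- 0.25       # assumed annual drift and volatility (hypothetical)
dt <- 1/252                     # one trading day
days <- 85                      # roughly four months of trading days
logReturns <- rnorm(days, mean = (mu - sigma^2/2)*dt, sd = sigma*sqrt(dt))
path <- S0 * exp(cumsum(logReturns))
tail(path, 1)                   # one simulated price at expiration

Each daily log-return is an independent normal draw, which is exactly the simplification that makes the subsequent mathematics tractable.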
Once you have made an assumption about how the underlying stock (or interest rate) behaves, you can "derive" how the corresponding "derivative security" should be priced. The resulting calculations are still complicated, but in the early 1970s economists like Fischer Black, Myron Scholes and Robert Merton worked them out, some ultimately winning the Nobel prize.
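For the curious, the closed-form answer they derived for a simple call is the Black-Scholes formula. Here is a short R version; the interest rate and volatility inputs below are guesses of mine, chosen only to show that the quoted $6.70 is a plausible order of magnitude:

blackScholesCall <- function(S, K, T, r, sigma) {
  d1 <- (log(S/K) + (r + sigma^2/2)*T) / (sigma*sqrt(T))
  d2 <- d1 - sigma*sqrt(T)
  S*pnorm(d1) - K*exp(-r*T)*pnorm(d2)
}
blackScholesCall(S = 167, K = 175, T = 4/12, r = 0.01, sigma = 0.25)   # a few dollars, same ballpark as $6.70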
However, real stocks are prone to crashes much more frequently and dramatically than the normal probability model would suggest, so more elaborate models for pricing options are still a major topic of current research in academia and on Wall Street.
For today, though, I would like to give you the flavor of these calculations, while omitting some of the complexities that arise with actual financial instruments. The trouble with analyzing stock options is that we would first have to develop quite a bit of economic theory (interest rates, no-arbitrage conditions) before we could derive the actual equations. So that we can focus on the mathematics without being distracted by the economics, we are going to analyze something simpler: a card game sometimes known as Red-Or-Black.
With stocks, we would also need to choose a probability model and explain why it was plausible. In contrast, in the simple world of the card game, there will be "nothing up my sleeve": all aspects of the game are straightforward and visible.
Despite this simplicity, it turns out that the actual process of valuing stock options is very similar to the calculation we are about to perform.
Here's how the game works. Two people (a "dealer" and a "player") play the game using an ordinary 52-card deck, containing 26 red cards and 26 black ones. The dealer shuffles the deck. The player now turns over the top card. If the card is red, she pays the dealer one dollar. If the card is black, the dealer pays her one dollar. The player now decides whether to stop or to keep playing by turning over the next card. If she keeps playing, the same rules apply: on red, the player loses a dollar; on black, she gains a dollar. She also continues to have the option to stop the game at any point. The game ends when the player chooses to stop it, or when all 52 cards have been played.
The central question is: how much money should the dealer charge the player as a "fair" entry fee for the privilege of playing this game? Equivalently, what is the average amount the player can expect to win, per game, if she plays a large number of games?
Both of these questions implicitly assume the player knows and follows the optimal strategy for the game, so an equally relevant question is "and just what is the optimal strategy?", meaning "under what conditions should the player choose to stop?"
At first glance, the game looks symmetric, suggesting that the fair value is zero. Half the cards are red, half the cards are black, so it feels like a 50% kind of situation. However, this is not the case.
In fact, the player cannot lose. No matter how badly behind she may be, all she need do is keep playing to the end of the deck, and for sure, she will achieve a score of zero. For instance, suppose she draws three red cards in a row, so she is down three dollars. The rest of the cards that remain to be turned over must therefore not be half red, half black anymore: they contain three extra black cards. Even if those three extra black cards are all the way at the bottom of the deck, if the player simply plays all the way to the end, they will eventually turn up, resulting in a final score of 26 - 26 = 0.
However, the player can also win. For instance, if the first card is black, and the player stops after it, she wins a dollar. The chance that the first card is black is 50%, so the simple strategy called "stop if the first card is black, else play till the end" yields an average payoff of 50 cents (half the time it yields a dollar, half the time it yields zero). Thus, the dealer is going to need to charge at least 50 cents as an entry fee to avoid being taken advantage of.
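If you would like to check that 50-cent figure empirically, here is a quick simulation sketch of my own (not needed for anything that follows):

set.seed(1)
naive <- replicate(1e5, {
  deck <- sample(rep(c(-1, +1), 26))   # -1 = red, +1 = black, shuffled
  if (deck[1] == +1) 1 else 0          # stop after a black first card; otherwise play to the end for a score of 0
})
mean(naive)                            # close to 0.50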
Can the player do better than this? What is the actual optimal strategy? If the first card is black, the player wins a dollar, but now the remaining deck has more red cards than black. If the player draws a second card, the odds favor her losing her lead. Maybe she should quit while she is ahead?
If you think about this for a while, it may seem quite difficult to analyze. An enormous number of patterns could arise:
- red, black, red, black, ...
- black, red, black, red, ...
- red, red, black, red, ...
- and many, many more (52-factorial, which is much larger than the number of seconds in the history of the universe since the Big Bang - see Powers of Two Back in Time)
So we are going to need some sort of systematic approach!
Enter "Backward Induction". This is a very clever idea, though alsovery simple once you see it. It goes back at least to John von Neumannand Oskar Morgenstern (1944:
Theory of Games and Economic Behavior).
The idea is this:
Start from the end and work backward! At the end of the game, there is just one card left. It is easy to see how to act. There are only two possibilities:
- The player is ahead by 26-25 = 1 dollar, so the last card is red. It is time to stop, thereby locking in the dollar win.
- The player is behind: 25-26 = -1 dollar, so the last card is black. The player should turn over the final, black, card, and receive a dollar, thereby achieving break-even (a score of zero).
Notice that we do not yet know the probability of being in these situations; this will emerge later. What we do see is how to use the current score to determine the optimal choice.
What if there are two cards left? Things are now a bit more complicated, but only a little. There are now 3 possible situations:
- The score is 26-24 = +2 in the player's favor. The player should stop, since only red cards remain.
- The score is 24-26 = -2, so the player is behind. They should continue playing, since only black cards remain.
- The score is 25-25 = 0. That means there is one red and one black card remaining. Since the deck was shuffled, the chance of drawing black next is 50%. Drawing black puts the player in a situation analyzed previously: she stops, and wins +1. Drawing red also puts the player in a situation analyzed previously: she continues to the end, and wins zero. Since these are equally likely, the expected payoff from taking the next card is 50 cents, which is positive, so the player should indeed take the card: over a large number of repeated games she will win a dollar about half the time and lose nothing otherwise.
This is easier to follow if we draw a little picture.

In the picture, each box represents a potential situation the player can find herself in. The boxes are organized on a grid based on the number of red and black cards that remain in the deck. The player knows where she is based on what has happened so far in the game, as shown in the scoring equation at the top of each box.
Along the edges of the map, we know what to do: if there are only black cards left, play; if only red cards, stop. The black arrows show how "playing" takes you to the next box (or boxes, when you do not know whether you will draw red or black). If the recommended action is "stop", I made the arrow gray instead. In both cases, I put the final score in parentheses after the recommended action. For instance, in the top left box, the player is at -2, but the final score will be zero since she can simply turn over the two remaining black cards.
In the middle of the map, it is less clear what to do; that's where backward induction helps. At the 25-25 = 0 box, we are not sure what to do, since we have a 50% chance of drawing red or black. However, in both cases, we move to an edge box, where we already know how the game will play out. If we stop, we get zero, but if we play, we have a 50% chance of ending at zero or at one, so we recommend "play" and record a 0.5 score in parentheses as the expected value (the average score we get after playing many games).
Do you see where this is headed? If there are 3 cards left, then there are 4 possible scenarios: 3 red, 2 red + 1 black, 1 red + 2 black, or 3 black. In each scenario, we can analyze the optimal strategy by examining the possible next draws and noticing that they all reduce to cases we have analyzed previously. In particular, we compare the average score from the (one or) two potential future outcomes against the current score if we stop, in order to decide whether to continue playing.
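Written as a recursion, that comparison is the whole procedure. Here is a small sketch of my own (the grid-based version we will actually use appears below):

# gameValue(b, r): expected final score under optimal play, given that
# b black and r red cards have been drawn so far (current score = b - r).
gameValue <- function(b, r, memo = new.env()) {
  key <- paste(b, r)
  if (exists(key, envir = memo, inherits = FALSE)) return(get(key, envir = memo))
  B <- 26 - b; R <- 26 - r            # cards remaining
  if (B == 0)      v <- b - r         # only red cards left: stop now
  else if (R == 0) v <- 0             # only black cards left: play to the end
  else {
    expect <- (B/(B+R))*gameValue(b+1, r, memo) + (R/(B+R))*gameValue(b, r+1, memo)
    v <- max(b - r, expect)           # stop now, or take the expected value of playing on
  }
  assign(key, v, envir = memo)
  v
}
gameValue(0, 0)   # the value of the whole game, about 2.62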

With 4 cards left, there are 5 possible cases to analyze. And so on, back to the beginning of the game, where with all 52 cards remaining, there is only one scenario to analyze. We can visualize the full game as a 27 by 27 rectangular grid, with the x-axis representing the number of black cards played so far (rather than the number remaining: this way the values increase as we move to the right) and the y-axis representing the number of red cards played so far.
Instead of trying to draw the 27x27 grid and label it all by hand, let's automate the process using R. As regular readers know, R is a free, high-quality, cross-platform, open-source programming language for statistical computing. If you have not already set up R, see How Do We Know? for easy-to-follow information on downloading and installing R, which should take you less than five minutes. Now copy and paste the following code into R.
colorScheme <- function(s) {
  if(s==0) C <- "yellow"
  else if(s<0) {
    if(s > -5) C <- "orange" else "red"
  } else { # s > 0
    if(s > 5) C <- "cyan" else hsv(0.17+0.06*s)
  }
}

setup <- function(name) {
  png(name, 800, 500)
  par(mar=c(5, 5, 2, 2))
  z <- c(-0.5, 26.5)
  plot(z, z, type="n", xlab="Black cards played", ylab="Red cards played",
       cex.axis=1.5, cex.lab=1.5, xaxs="i", yaxs="i")
}

setup("position.png")
for(b in 0:26)       # b = black cards played
  for(r in 0:26)     # r = red cards played
    rect(b-0.5, r-0.5, b+0.5, r+0.5, col=colorScheme(b-r))
dev.off()

value <- matrix(0, 27, 27)
play  <- 0*value     # boolean
show  <- 0*value     # boolean

## backward induction process:
for(n in 52:0)                            # n = total cards played so far
  for(b in max(0,n-26):min(n,26)) {       # black so far
    r <- n-b                              # red cards played so far
    R <- 26 - r                           # red cards remaining
    B <- 26 - b                           # black cards remaining
    if(B == 0) {                          # right edge of map
      play[b+1,r+1]  <- FALSE             # stop
      value[b+1,r+1] <- b-r
    } else if(R == 0) {                   # top edge of map
      play[b+1,r+1]  <- TRUE              # continue
      value[b+1,r+1] <- 0
    } else {                              # apply backward induction step:
      ## expected value if continue playing
      fracRed <- R/(R+B)                  # fraction red remaining
      expect  <- (1-fracRed)*value[b+2,r+1] + fracRed*value[b+1,r+2]
      current <- b - r                    # value if stop now
      play[b+1,r+1]  <- expect >= current
      value[b+1,r+1] <- max(expect, current)
    }
  }

for(b in 0:25)
  for(r in 0:25)
    if(play[b+1,r+1])
      show[b+2,r+1] <- show[b+1,r+2] <- 1

setup("value.png")
for(b in 0:26)
  for(r in 0:26)
    if(play[b+1,r+1] | show[b+1,r+1])
      rect(b-0.5, r-0.5, b+0.5, r+0.5, col=colorScheme(value[b+1,r+1]))
lines(c(0,26), c(0,26), lwd=2)
dev.off()
When we run this, we get the following two pictures. First, a very symmetrical picture that shows the current score at every node in the tree (every box or position in the game).

Red and orange are bad, green and cyan (light blue) are good. The top part of the frame is red (or orange), because the player is behind. The right part of the frame is green (or cyan), because the player is ahead. Along the diagonal is a yellow band where the score is zero: the player and the dealer are tied.
Second, a much less symmetrical picture showing the optimal strategy and final payoffs, using the same color scheme.

Red and orange have vanished: the worst the player can get is zero, which is yellow. The ragged stairstep helps us visualize the optimal strategy by marking where the player should stop. If you reach a colored box with nothing to its right, the optimal thing to do is to stop.
We can clearly see some patterns; for instance, when you are behind (upper left), you always Play, since doing so will certainly raise your score (to zero). On the other hand, if you are lucky enough to get really far ahead (by 6 cards: lower right), you will Stop, since there will be lots of red cards and few black ones left.
I've drawn a diagonal black line through the middle (where the yellow band was in the first picture), to emphasize the way the optimal strategy changes throughout the game. Early on, you keep playing until you have a 5 or 6 point advantage, if possible. But if that doesn't happen quickly, then as the game goes on you have to lower your threshold. By the end, you stop if you find yourself ahead by even one card.
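If you have run the code above, you can also read this shrinking threshold directly out of the play matrix, rather than eyeballing the picture. A small sketch of my own:

stopLead <- sapply(0:26, function(r) {
  b <- which(play[, r+1] == 0)[1] - 1   # smallest number of black cards drawn at which the rule says "stop"
  b - r                                 # ...expressed as the lead (b - r) at that point
})
stopLead   # the threshold lead: around 5 or 6 early in the deck, shrinking as cards run out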
You cannot tell this from the colors, but typing

value[1,1]

shows that the value in the bottom left cell is 2.624, which is significantly higher than the 0.50 we obtained via the naive "stop if the first card is black, else play to the end" strategy discussed earlier.
This means the dealer needs to charge a $2.63 entry fee (or more). Otherwise potential players will flock to his doorstep asking to play the game, knowing they will, on average, make money by doing so.
The important part of the algorithm is the section labeled "backward induction process"; all the rest is just graphics. The core equations are

fracRed <- R/(R+B)   # fraction red remaining
expect  <- (1-fracRed)*value[b+2,r+1] + fracRed*value[b+1,r+2]
current <- b - r     # value if stop now
play[b+1,r+1]  <- expect >= current
value[b+1,r+1] <- max(expect, current)
We are doing the same thing as shown in the "two cards left" and "three cards left" examples earlier. We walk backward through the map, starting in the upper right corner (26,26) and moving toward the lower left (0,0) where the game begins. At each point, we calculate the fraction of red cards that still remain. From this, and knowing what happens along the top and right boundaries of the map, we can calculate the expected value of continuing to play. We compare this to the current score if we stop now. We choose whichever is larger.
In R, matrix row and column indices start at 1, which is why we have to write awkward things like "value[b+1,r+1]". The "value" and "play" variables are 27 by 27 square matrices, with rows and columns indexed 1 to 27. The expression "value[1,2]" is the value in the first row and second column. Since "b", the number of black cards played so far, could be anything from 0 to 26, we have to add one to make the range be 1 to 27. Ditto for "r".
So far, we have discussed the game from the perspective of players who can play a large number of times, and who are therefore mainly interested in the long-term average amount they can win.
But suppose you are an individual investor with shallow pockets: you cannot afford to play a large number of games. Perhaps you care about the full probability distribution of outcomes, not just the average. Moreover, perhaps you want to be more conservative, or more aggressive, than the "optimal" rule recommends.
Let's compare the following three "stopping rules":
- We can approximate the optimal rule by: stop as soon as (b-r) > 5.5 - r/6
- We can try to be conservative and 'protect' wins by locking them in prematurely, using: stop as soon as (b-r) > 0.5*(5.5 - r/6)
- Or we can be greedy and hope for extra gains, using: stop as soon as (b-r) > 1.5*(5.5 - r/6)
Now we can run some simulations to find out how these rules actually perform. Try this in R:
sim <- function(index) {
  # play one game, drawing cards at random, using the rule
  # "stop as soon as (b-r) reaches scale*(5.5 - r/6)"; return the final score
  b <- r <- 0
  while(b+r < 52 & (b-r) < scale*(5.5-r/6)) {
    B <- 26 - b
    R <- 26 - r
    fracRed <- R/(B+R)
    if(runif(1) < fracRed) r <- r+1 else b <- b+1
  }
  b-r
}

N <- 1e4                              # number of simulated games per rule
scale <- 0.5                          # stop early: the conservative rule
conservative <- z <- sapply(1:N, sim)
scale <- 1                            # approximately the optimal rule
optimal <- z <- sapply(1:N, sim)
scale <- 1.5                          # hold out for more: the greedy rule
greedy <- z <- sapply(1:N, sim)

show <- function(col, val, dx) {
  # print the mean score and draw a histogram bar for each score 0..15
  cat(col, mean(val), "\n")
  for(i in 0:15) {
    s <- sum(val==i)
    if(s>0) rect(i+0.2*(dx-1), 0, i+0.2*dx, s/N, col=col)
  }
}

png("density.png", 800, 500)
par(mar=c(5, 5, 2, 2))
plot(c(0,9), c(0,0.6), type="n", xlab="Final Score", ylab="Frequency",
     cex.axis=1.5, cex.lab=1.5, xaxs="i", yaxs="i")
show("blue", conservative, 1)
show("green", optimal, 2)
show("red", greedy, 3)
dev.off()
Here are the results. The histograms plot the frequency of each score, colored by which rule we were using, tested over 10,000 trials.

The average scores are: 2.09 (red: greedy), 2.14 (blue: conservative), and 2.54 (green: pretty close to the optimal 2.62).
Clearly the red (greedy) strategy is a bad idea: by continuing to play after gaining a large lead, you generally fritter the lead away, since by that point the deck is heavy with red cards. Thus, while you occasionally do get a higher score than is possible with the other rules, you are also much more likely to slide into negative territory and have to go to the end of the deck, getting a zero.
In contrast, the blue (conservative) strategy does lock in early gains, resulting in frequent scores of 3, and relatively few zeros. But, assuming you are risk-neutral, the green strategy is still better, since on average it scores 2.54 instead of 2.14.
However, if you only get to play the game once, you may well be happier with the blue strategy. You are much less likely to get zero than with either of the alternative strategies, and you have a high chance of getting a 3, which is quite respectable; indeed, 3 exceeds the mean score of any strategy. You give up the potential for really high scores, but you also reduce the chance of getting nothing.
I hope you enjoyed this discussion. As usual, questions, comments and other suggestions are welcome. See you next time!