
University of Connecticut

School of Business

Risk Modeling Project:

“Dot-com” Bubble: 1996 - 2002

Group 8

Group Members:

Tianyan Li

Wang Dai

Jianguo Li

Meng Xu

John Almeida


I. Overview of the Dot-Com Crisis

1 Causes

The first US web sites on the World Wide Web were established in 1992, but "the web" grew at an exponential rate (Adamic & Huberman, 1999). By the mid-90's, software such as AOL, Internet Explorer, and Netscape gave users the ability to "surf" the web via sites such as Lycos, AltaVista, and Yahoo (Week, 1996). Schools were increasingly interested in providing internet capabilities to students, and enrollment in US colleges and universities was at an all-time high: close to 40% of the population of 22-24 year-olds was in school in the mid-1990's (US Census Bureau, 2013). Computers and internet access became increasingly available at home and work: 18% of households had internet access by 1997 (US Census Bureau, 2013).

Around the same time, the US government initiated capital gains tax cuts that created available capital among the wealthiest income groups. This is considered to have increased volatility in the market by leading to more speculative investments (Dai, Shackelford, & Zhang, 2013). Thus, when the supply of internet capability and the "free" money from recent tax cuts met the demand for experimentation in the new medium, the "dot-com" boom was born. Named after the ubiquitous domain suffix indicating a commercial (as opposed to military, government, or educational) website, this phenomenon is alternately called the "dot-com bubble," the "tech bubble," or something similar.

Much of this was enabled by the NASDAQ. Although the NASDAQ was founded in the 1970's and became popular for over-the-counter trades, it found it was able to expand by offering an electronic interface for traders (Simms, 1999). NASDAQ's emphasis on technology allowed it to join the London Stock Exchange to form an intercontinental securities market. The NASDAQ's competitive advantage in technology over the NYSE made it the perfect market for the new technology companies and their stocks. From 1995 to 2000, NASDAQ's volume grew from 300 million to 2 billion shares traded per day (Yahoo! Finance).

2 Starting and Growing the Bubble

Many consider the start of the “tech” bubble to be Netscape’s initial public offering (IPO).

Netscape opened at $71 (U.S.), more than double the $28 public-offering price set late Tuesday. But after an intra-day high of $75, the stock drifted to close up $30.25 at $58.25. Volume soared to 13.8 million shares, making Netscape the second most-active Nasdaq issue and the third-best performing initial public offering in history (Toronto Star, 1995).

The dot-com boom proceeded as initial public offerings (IPOs) and venture capital funding for rapidly expanding technology companies became increasingly popular. At the same time, both the desire to trade stocks and the technology to become one's own stockbroker spread, as companies such as Charles Schwab, Datek, and E*Trade grew increasingly popular (Barboza, 1999). Companies such as broadcast.com (sold for $5.7B), Geocities.com (sold for $3.5B), and theglobe.com ($28M) all successfully raised substantial amounts of money from investors hoping to find a quick return on their investments (Honan & Leckart, 2010). Other companies such as Microsoft, Dell, and Intel enjoyed solid business before the internet craze, but certainly enjoyed a surge in business from new customers (Picker & Levy, 2014). While the trading happened at internet terminals and "on" NASDAQ (physically located in New York), most of the new technology companies were in "Silicon Valley" (Picker & Levy, 2014).

3 Height of the Bubble and Thereafter

However, a problem emerged: many analysts started to doubt that these new, unproven technology companies could earn the profits they claimed they could. The technology companies typically had almost no property, plant, or equipment, which meant very low startup costs and no required debt. With hundreds of millions of potential customers in a business-to-consumer model, or millions of potential customers in a business-to-business model (with other businesses as customers), there was no way to provide an accurate assessment of future revenue. It became clear that the projections of future profits were based on questionable assumptions at best and reckless speculation at worst (Cassidy, 2003).

Startup technology companies that had been flooded with cash began to exit the market at an increasing rate. By March 2001, such (formerly) notable companies as boo.com, pets.com, and etoys.com had failed. The NASDAQ peaked on March 10, 2000 (Yahoo! Finance). Although it is impossible to identify a single cause of the inflection point, layoffs peaked in April 2000 and technology company closures peaked in May 2000 (BBC, 2002).

The impact of the dot-com bubble's deflation varied across groups. The "average American citizen" was hardly any worse off as the dot-com businesses declared bankruptcy and ceased operations. Arguably, the "average investor," assuming a well-balanced portfolio and a long-term investment strategy, survived reasonably well. The group hardest hit was technology workers, who had to contend with unpredictable employment conditions (Cassidy, 2003). The dot-com crisis had no clearly defined ending. Many of the unproven startups were bought by larger existing businesses, which only delayed realizing their losses. Many companies whose wealth had rapidly accumulated lost it equally quickly. Several of the companies that grew substantially while maintaining reasonable expenses actually survived the crisis and craze to provide valuable services to their customers: eBay, Amazon, and even Priceline (Cassidy, 2003).

II. Data Analysis

1 Data Selection

The NASDAQ index is a stock market index of the common stocks and similar securities listed on the NASDAQ stock market, comprising over 3,000 components. It is highly followed in the U.S. as an indicator of the performance of technology and growth companies. At the time, most dot-com companies were listed on the NASDAQ, so the NASDAQ index clearly illustrates the dot-com bubble. The data sample ranges from 1/1/1990 to 12/31/2004, where the data from 1/1/1990 to 1/1/1992 is used for model building and the rest for analyzing the dot-com crisis. We collected the data from www.finance.yahoo.com.

2 Risk Models

2.1 Historical Simulation

To start the analysis of the financial crisis, we first build a historical simulation model. We find today's VaR by simply choosing a percentile of the historical return data. (Please see Appendix 1 and the Excel file for procedure details.) The results are in the figure below.
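To make the procedure concrete, here is a minimal Python sketch; the file name nasdaq.csv, the column names, and the window length m = 250 are our illustrative assumptions, not values taken from the Excel file.

    import numpy as np
    import pandas as pd

    # Hypothetical input: daily NASDAQ data with "Date" and "Adj Close" columns.
    prices = pd.read_csv("nasdaq.csv", parse_dates=["Date"], index_col="Date")
    returns = np.log(prices["Adj Close"]).diff().dropna()

    def hs_var_es(rets, m=250, p=0.01):
        """Rolling Historical Simulation VaR and ES with coverage rate p."""
        var, es = {}, {}
        for t in range(m, len(rets)):
            window = rets.iloc[t - m:t]              # the past m returns
            q = np.percentile(window, 100 * p)       # 100*p-th percentile of returns
            var[rets.index[t]] = -q                  # VaR is the negated percentile
            es[rets.index[t]] = -window[window <= q].mean()  # average loss beyond VaR
        return pd.Series(var), pd.Series(es)

    var_hs, es_hs = hs_var_es(returns)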

[Figure: Historical Simulation daily 1% VaR ("VaR 1%"), 1992-2004]

The reaction of this model is relatively slow compared to the others we develop later.

2.2 RiskMetrics Model

We develop the RiskMetrics model to find the VaR, estimating the parameter first. We use the historical data from 1990 to 1992 for the estimation. The estimated parameter of the model is shown below.

RiskMetrics

Initial value: λ = 0.94

Maximum likelihood estimate: λ = 0.940090181

Log likelihood: 9611.838209


The value of the parameter is very close to 0.94, which is also the value applied by JP Morgan Chase. After estimating the parameter, we must choose a distribution for the returns. First, we can choose the normal distribution. Sometimes the t distribution or the filtered historical approach fits better, so we develop all three distributions for the model. (Please see Appendix 1 and the Excel file for procedure details.)
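As a rough sketch of the recursion and the normal-distribution VaR and ES, assuming returns is the pandas Series of log returns from the earlier sketch; initializing the variance with the sample variance is our assumption.

    import numpy as np
    from scipy.stats import norm

    lam, p = 0.940090181, 0.01          # estimated lambda and coverage rate
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()           # initialization (assumed)
    for t in range(1, len(returns)):
        # Tomorrow's variance = lam * today's variance + (1 - lam) * today's squared return
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns.iloc[t - 1] ** 2

    sigma = np.sqrt(sigma2)
    var_rm = -sigma * norm.ppf(p)               # normal-distribution VaR
    es_rm = sigma * norm.pdf(norm.ppf(p)) / p   # normal-distribution ES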

We can then get the results of this model, shown below. To illustrate, we show only the results for normally distributed returns.

[Figure: RiskMetrics daily VaR, normal distribution, 1992-2004]

We can also use the RiskMetrics model to find the daily Expected Shortfall. Below is the result for the normal distribution. (For the t-distribution, please see the Excel file.)

[Figure: RiskMetrics daily ES, normal distribution, 1992-2004]


2.3 GARCH Model

Similar to the RiskMetrics model, we can use the GARCH model to find the VaR and Expected Shortfall. The parameters are estimated by maximum likelihood. The results are shown below.

Maximum Likelihood Estimation

Starting values: α = 0.0500000, β = 0.8000000, ω = 1.500E-06

Results: α = 0.1171359, β = 0.8828641, ω = 1.421E-06

Log likelihood: 9624.001

Persistence: 1.0000000

With these parameters, we can build the GARCH model to find the VaR and ES values. Again, we can apply the normal distribution, the t distribution, or Filtered Historical Simulation as the distribution of the model. The result for VaR under the normal distribution is shown below. (For the t-distribution and Filtered Historical Simulation, please see the Excel file.)
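A minimal sketch of the maximum likelihood step, assuming the log-return series from the earlier sketches and using scipy's optimizer; the parameter bounds are our assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def garch_neg_loglik(params, r):
        """Negative Gaussian log likelihood for GARCH(1,1):
        sigma2[t] = w + a * r[t-1]**2 + b * sigma2[t-1]."""
        a, b, w = params
        sigma2 = np.empty(len(r))
        sigma2[0] = r.var()
        for t in range(1, len(r)):
            sigma2[t] = w + a * r[t - 1] ** 2 + b * sigma2[t - 1]
        return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

    r = returns.to_numpy()
    res = minimize(garch_neg_loglik, x0=[0.05, 0.80, 1.5e-6], args=(r,),
                   bounds=[(1e-8, 1.0), (1e-8, 1.0), (1e-12, 1e-3)])
    a, b, w = res.x
    print("persistence:", a + b, "log likelihood:", -res.fun)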


[Figure: GARCH daily VaR, normal distribution, 1992-2004]

2.4 NGARCH Model

Similar to the GARCH model, we can use the NGARCH model to find the VaR and Expected Shortfall. The parameters are estimated by maximum likelihood. The results are shown below.

Maximum Likelihood Estimation

Starting values: α = 0.0500000, β = 0.8000000, ω = 1.500E-06, θ = 1.2500000

Results: α = 0.1041919, β = 0.7916445, ω = 4.359E-06, θ = 0.9998649

Log likelihood: 9613.38

Persistence: 1.000000


With these parameters, we can build the NGARCH model to find the VaR and ES values. Again, we can apply the normal distribution, the t distribution, or Filtered Historical Simulation as the distribution of the model. The VaR comparison under the normal distribution is shown below. (For the t-distribution and Filtered Historical Simulation, please see the Excel file.)
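The only change from GARCH is the leverage term; a minimal sketch of the variance recursion with the estimated parameters above (returns as in the earlier sketches):

    import numpy as np

    a, b, w, theta = 0.1041919, 0.7916445, 4.359e-06, 0.9998649
    r = returns.to_numpy()
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        # Negative returns raise variance more than positive ones when theta > 0
        sigma2[t] = w + a * (r[t - 1] - theta * np.sqrt(sigma2[t - 1])) ** 2 + b * sigma2[t - 1]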

[Figure: VaR comparison under the NGARCH model (VaR and FHS VaR), 1992-2004]


3. Back Testing

To check the validity of these models, we apply backtesting techniques. First, we test the independence of the VaR hits. Then we perform a coverage test, comparing the realized fraction of hits to the theoretical coverage rate indicated by the VaR model. Finally, we test independence and coverage jointly by combining the two separate tests. The results for the RiskMetrics and GARCH models are listed below. (For procedure details, please see Appendix 1 and the Excel file.)
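A minimal sketch of the three likelihood ratio tests (Kupiec's unconditional coverage LRuc, the independence test LRind, and the conditional coverage test LRcc = LRuc + LRind), assuming hits is a 0/1 sequence of days on which the loss exceeded the VaR; edge cases such as zero hits are not handled here.

    import numpy as np
    from scipy.stats import chi2

    def backtest(hits, p=0.01):
        """Return the LRuc, LRind, LRcc statistics with chi-square p-values."""
        hits = np.asarray(hits, dtype=int)
        T, T1 = len(hits), hits.sum()
        pi = T1 / T
        # Unconditional coverage: realized hit rate pi vs. promised rate p
        lr_uc = -2 * (T1 * np.log(p / pi) + (T - T1) * np.log((1 - p) / (1 - pi)))
        # Independence: first-order Markov transitions vs. iid hits
        n = np.zeros((2, 2))
        for prev, cur in zip(hits[:-1], hits[1:]):
            n[prev, cur] += 1
        pi01, pi11 = n[0, 1] / n[0].sum(), n[1, 1] / n[1].sum()
        pi1 = (n[0, 1] + n[1, 1]) / n.sum()
        ll_markov = (n[0, 0] * np.log(1 - pi01) + n[0, 1] * np.log(pi01)
                     + n[1, 0] * np.log(1 - pi11) + n[1, 1] * np.log(pi11))
        ll_iid = (n[0, 1] + n[1, 1]) * np.log(pi1) + (n[0, 0] + n[1, 0]) * np.log(1 - pi1)
        lr_ind = -2 * (ll_iid - ll_markov)
        lr_cc = lr_uc + lr_ind
        return {"LRuc": (lr_uc, chi2.sf(lr_uc, 1)),
                "LRind": (lr_ind, chi2.sf(lr_ind, 1)),
                "LRcc": (lr_cc, chi2.sf(lr_cc, 2))}

A p-value below the 10% significance level rejects the VaR model for that test.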

Hypothesis Testing (Chi-Square Test)

Significance level = 10%

       | RiskMetrics                                 | GARCH
LRuc   | Reject VaR model; Don't reject VaR model    | Reject VaR model; Reject VaR model
LRind  | Don't reject VaR model; Reject VaR model    | Reject VaR model; Don't reject VaR model
LRcc   | Reject VaR model; Don't reject VaR model    | Don't reject VaR model; Don't reject VaR model

Similar tests can be applied to the other models under the t distribution and the filtered historical approach.

4. Analysis for the Crisis with Risk Models

4.1 Model Selection

To select the best model for this analysis, we use an exceedance analysis for each model. To keep the illustration simple, we choose the RiskMetrics, GARCH, and NGARCH models under the normal distribution; we ignore Historical Simulation here because the other models have much better characteristics. The exceedance is the number of standard deviations by which a negative return (loss) exceeds the corresponding VaR indicated by the model; if the loss does not exceed the VaR, the exceedance is zero. The results are below. (For procedure details, please see Appendix 1 and the Excel file.)
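A rough sketch of the computation; var_model and sigma_model are hypothetical names for a given model's VaR and volatility series, aligned with the return series from the earlier sketches.

    import numpy as np

    loss = -returns.to_numpy()
    # Exceedance: standard deviations by which the loss exceeds the VaR, else zero
    exceedance = np.where(loss > var_model, (loss - var_model) / sigma_model, 0.0)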

RiskMetrics Model

[Figure: exceedances of the 5% RiskMetrics VaR ("RM 5%"), 1992-2004]

GARCH Model

[Figure: exceedances of the GARCH model VaR, 1992-2004]

NGARCH Model

[Figure: exceedances of the NGARCH model VaR, 1992-2004]

The exceedance analysis shows that the NGARCH model is the best of the three: its exceedance frequency is closest to the 5% indicated by the model, and the distribution of hits is relatively random. Therefore, we use this model to analyze the signals from the crisis.

4.2 Crisis Analysis

First, looking at the return and price figures for the crisis period gives us basic information about the crisis. From the historical data, we have the figures below.

[Figure: NASDAQ adjusted close prices, 1992-2004]

[Figure: NASDAQ daily returns, 1992-2004]

From these figures, we can see that the market was highly volatile during 1999-2001, with the price peaking in 2000. We then look at the VaR indicated by the NGARCH model.

[Figure: NGARCH daily VaR, 1992-2004]

4.2.1 Information Before the Crisis

We can find large VaRs from 1998 to 1999. This is a signal of high market volatility, which can be understood as a sign of irrational market behavior. Therefore, as risk managers, we could detect this signal before the crisis and before the large losses arrived.


4.2.2 Information During the Crisis

During the crisis from 1999 to 2001, the 1% VaR reached as high as 16%. Even worse, the largest exceedance of the VaR was about 5 standard deviations under the NGARCH model and 4 standard deviations under the GARCH model.

The technical adjustment we can apply here is to update the model or its parameters more frequently; otherwise, the risk management system cannot work well during this period.

4.2.3 Information After the Crisis

After 2001, market volatility decreased slowly and finally became stable after 2002. This can be interpreted as a signal that the market had returned to a normal situation. At this point, risk managers can adjust their models so that they do not produce overly conservative strategies.

III. Stress Testing

We perform stress testing because most risk management work is short of data samples. This can be a big issue when the available historical data does not fully reflect the potential risks going forward. For example, the available data may lack extreme events such as an equity market crash, which occurs very infrequently.

Since the portfolio consists of eBay and Amazon, the first step of the stress test is to use Excel's Solver to find the weight of each asset that minimizes the unconditional portfolio VaR. As the Excel results show, the weight of eBay is 0.53 and the weight of Amazon is 0.47. The portfolio construction results, from the Excel file, are in the table below; a sketch of the minimization follows it.

Minimizing

eBay: variance 0.0024956, weight 0.5302198, VaR 0.0846235

Amazon: variance 0.0028244, weight 0.4697802, VaR 0.0847399

Portfolio: unconditional covariance -6.024E-05, variance 0.0012949, VaR 0.0837137, correlation -0.0226896
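For two assets this minimization has a closed form, so Solver is not strictly needed; the following minimal sketch uses the unconditional values from the table as a check on the Excel result, not as a reproduction of its formulas.

    # Minimum-variance weight for eBay in the two-asset (eBay, Amazon) portfolio:
    # w* = (var_amzn - cov) / (var_ebay + var_amzn - 2 * cov)
    var_ebay, var_amzn, cov = 0.0024956, 0.0028244, -6.024e-05
    w_ebay = (var_amzn - cov) / (var_ebay + var_amzn - 2 * cov)
    w_amzn = 1.0 - w_ebay
    port_var = (w_ebay ** 2 * var_ebay + w_amzn ** 2 * var_amzn
                + 2.0 * w_ebay * w_amzn * cov)
    print(w_ebay, w_amzn, port_var)   # ~0.5302, 0.4698, 0.0012949, matching the table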

Next, under the RiskMetrics model, we calculate the portfolio returns and variance over the period for which we collected data (see Appendix). We select January 2002 to June 2004 as the normal period and January 1999 to December 2000 as the crisis period, choosing a 60-day window from each.

The chart below shows the daily VaR during the testing period, using the historical data collected during the normal period.

[Figure: daily portfolio VaR, normal scenario ("Normal Time"), January-March 1999 test window]

As we can see, the daily VaR (the blue line) remains stable around the 8% level; the maximum and minimum VaR are approximately 10% and 6%, and there is not much difference between the points along the line.

The second chart shows the relationship between the crisis daily VaR and the daily VaR under perfect correlation.

[Figure: daily portfolio VaR, crisis scenario ("VaR-Crisis" vs. "VaR-Crisis-Cor=1"), January-March 1999 test window]

The VaR with the higher correlation takes higher values, but the shapes of the two curves are very similar.

Compared with the daily VaR under the first scenario, the crisis daily VaR is somewhat higher. As we can see, before September 2004 the red line decreases rapidly and is much higher than the blue line. The reason is that, without considering diversification benefits, the unconditional VaR of a portfolio that involves eBay alone becomes much larger; in this case, the weight of eBay in the portfolio is equal to one. However, the red line's rate of decrease slows: although it keeps falling as time passes, it does so more and more slowly. After September 2004, the red line sits below the blue line; the reason, we believe, is that the other asset in the portfolio, Amazon, is much riskier than eBay.

Finally, we can use filtered historical simulation over the different time periods for the different scenarios to calculate the daily VaR, assuming a normal distribution for both stocks.

IV. Conclusions and Recommendations

There was a substantial amount of benefit that occurred as the internet grew in the late 1990's. The internet created substantial benefits for its users: time savings, increased communication, and new learning opportunities. However, during this time there was also a substantial amount of volatility, confusion, and waste. To support growth, even "excited" growth, while reducing "irrational exuberance," the government could enact a tiered capital gains tax, for example taxing capital gains at 20% and taxing capital gains above $1,000,000 at 50%. This would reduce the incentive for individual investors to seek large gains (Pascal, 2000).

Alternately, the government could create a tax policy that triggered a tax on transactions if the market grew too quickly or if volatility reached a certain level. Both of these measures would create a natural "brake" for the economy should stocks start to trade too quickly. Additionally, the government could create a system in which new (especially unproven) companies that experienced substantial capital gains would also face substantial taxes not applicable to larger companies. Such a mechanism would encourage companies to carefully consider the demand for their stock and price the IPO appropriately: an IPO that is priced according to its value should maintain a constant price rather than experience wild price swings (Rydqvist, 1997).

The dot-com boom showed that new technologies, while exciting, carry both costs and benefits. The benefits of technologies include time savings, money savings, increased interaction, and new opportunities. Time and money savings can sometimes be difficult to quantify, but the benefit of a better-quality interaction or a new opportunity is usually very difficult to quantify. This can add substantial volatility to the market, creating risk for investors and calling underlying assumptions into question. A general rule is to look for logical errors in underlying assumptions (for example, that demand for the internet would necessarily translate into revenue for new internet companies) and to create stress tests for different circumstances. Although investing in any market necessarily involves risk, a well-managed portfolio works hard to find unnecessary risk and adjust itself accordingly.


References

Adamic, L. A., & Huberman, B. A. (1999, September). Internet: Growth dynamics of the World-Wide Web. Nature, 401.

Barboza, D. (1999, February 1). Small On-Line Brokers Raise Share of Trades. New York Times.

BBC. (2002, March 12). Dot-Com Timeline. London, England. Retrieved from http://news.bbc.co.uk/2/hi/business/1869544.stm

Cassidy, J. (2003). Dot.con: How America Lost Its Mind and Its Money in the Internet Era. New York: Harper Perennial.

Dai, Z., Shackelford, D. A., & Zhang, H. H. (2013, September). Does Financial Constraint Affect the Relation between Shareholder Taxes and the Cost of Equity Capital? The Accounting Review, 88(5).

Honan, M., & Leckart, S. (2010, February 10). 10 Years After: A Look Back at the Dotcom Boom and Bust. Wired.

Picker, L., & Levy, A. (2014, March 6). IPO Dot-Com Bubble Echo Seen Muted as Older Companies Debut. Bloomberg.com.

Simms, M. (1999, February). The History: How Nasdaq Was Born. Traders Magazine.

Toronto Star. (1995, August 10). Netscape skyrockets at launch Huge demand for shares of Internet software firm. Toronto, Canada.

US Census Bureau. (2013). Computer and Internet Use in the United States. Washington, DC: US Department of Commerce.

Week, P. B. (1996, April 15). On Wall Street: Net Directories Get Boost from Stock Offerings Lycos, Excite, and Yahoo!

Yahoo! Finance. (n.d.). NASDAQ Composite (^IXIC).


Appendix 1: Risk Modeling Techniques

Historical Simulation

The HS technique is the simplest way to calculate the VaR and ES (Expected Shortfall). It assumes that the distribution of tomorrow's portfolio returns, R_{PF,t+1}, is well approximated by the empirical distribution of the past m observations.

Consider the availability of a past sequence of m daily hypothetical portfolio returns, calculated using past prices of the underlying assets of the portfolio but today's portfolio weights. This pseudo portfolio return is defined as

R_{PF,t+1-\tau} = \sum_{i=1}^{n} w_{i,t} R_{i,t+1-\tau}, \quad \tau = 1, 2, \ldots, m

The VaR with coverage rate p is then simply calculated as the 100p-th percentile of the sequence of past portfolio returns:

VaR^{p}_{t+1} = -Percentile(\{R_{PF,t+1-\tau}\}_{\tau=1}^{m}, 100p)

In Excel, we can sort the returns in ascending order and choose the VaR to be (minus) the number such that only 100p% of the observations are smaller.

Expected shortfall is an alternative risk measure. It is defined as the expected return given that the return falls below the VaR. So for the 1-day horizon, we have

ES^{p}_{t+1} = -E_t[\, R_{PF,t+1} \mid R_{PF,t+1} < -VaR^{p}_{t+1} \,]

Weighted Historical Simulation

WHS technique is improvement of HS by designing to relieve the tension in the choice of sample size, m. We assign relatively more weight to the most recent observations and relatively less weight to the return further in the past.

In WHS, our sample of m past hypothetical returns, is assigned probability weight declining

exponentially through the past as follows.

So we can easily see that when

After using this way to get the return, we could repeat the process in the HS to get the VaR.
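A minimal sketch of the weighting and the weighted percentile; the decay lam = 0.99 is an assumed value, and window holds the past m returns with the most recent first.

    import numpy as np

    def whs_var(window, lam=0.99, p=0.01):
        """Weighted Historical Simulation VaR; window[0] is the most recent return."""
        m = len(window)
        eta = lam ** np.arange(m) * (1 - lam) / (1 - lam ** m)  # weights sum to one
        order = np.argsort(window)            # sort returns ascending
        cum = np.cumsum(eta[order])           # accumulate the reordered weights
        idx = np.searchsorted(cum, p)         # first return whose cumulative weight reaches p
        return -np.asarray(window)[order][idx]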


RiskMetrics Models

RiskMetrics system considers the following model, where the weights on past squared returns decline exponentially as we move backward in time. RM variance model formula is

So RiskMetrics model’s foreast for tomorrow’s volatility can be seen as a weighted average of today’s volatility and today’s squared return.

RM tracks variance changes in a way that is broadly consistent with observed returns.RM found that the estimates were quite similar across assets, and they therefore simply set ^=0.94 for every asset for daily variance forecasting. So

In Excel, after we got the today’s variance and returns, use ,then we can calculate the tomorrow’s variance.

GARCH model

GARCH model is the simplest model that capture important features of returns data and that are flexible enough to accommodate specific aspects of individual assets. The formula is

Note that RM model can be viewed as a special case of the simple GARCH model if a =1 -^, B=^ amd w=0/. But there is a important advantage about GARCH: it consider the fact that the long-run average variance tends to be relatively stable over time.

In Excel, we could use the Maximum likelihood Estimation to estimate the three parameters w,a,b. Then with the help of the today’s returns ,today’s variance, we could get the tomorrow’s variance.

NGARCH model

NGARCH model (nonlinear GARCH model) is a extensions of the GARCH Model. It could be used to captured the leverage effect, which means that negative return increases variance by

Page 20: Report Final

more than a positive return of the same magnitude. GARCH Models is modified so that the weight given to the return depends on whether the return is positive or negative in the following

simple mannar:

Which is sometimes refered to as the NGARCH model.

It is strictly speaking a positive piece of news, zt>0,rather than raw reurn Rt,which has less of an

impact on variance than a negative piece of news, if θ>0.

In Excel, we could use the Maximum likelihood Estimation to estimate the three parameters w,a,b. Then with the help of the today’s returns ,today’s variance, we could get the tomorrow’s variance.

Filtered Historical Simulation

The FHS approach attempts to combine the best of the model-based with the best of the model-free approaches in a very intuitive fashion.

Using the GARCH Models, where

With the sequence of past returns, ,we can estimate the GARCH model and calculate past standardized returns from the observed returns and from the estimatd standard

deviations as

So we can calculate the VaR by using following formula:

For ES, we can calculate in the following way:
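A minimal sketch, reusing the GARCH parameters a, b, w, the return array r, and the variance series sigma2 from the earlier sketch; the window length m = 250 is an assumption.

    import numpy as np

    p, m = 0.01, 250
    sigma = np.sqrt(sigma2)                       # GARCH standard deviations
    z_hat = r / sigma                             # past standardized returns
    sigma_next = np.sqrt(w + a * r[-1] ** 2 + b * sigma2[-1])  # tomorrow's GARCH volatility
    window = z_hat[-m:]
    q = np.percentile(window, 100 * p)            # empirical quantile of standardized returns
    var_fhs = -sigma_next * q
    es_fhs = -sigma_next * window[window <= q].mean()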

t-Distribution

The t-distribution can capture the most important deviations from normality. It is defined by its density with d degrees of freedom, where d is the only parameter that we need maximum likelihood estimation to obtain.

We can get the VaR as

VaR^{p}_{t+1} = -\sigma_{t+1} \sqrt{(d-2)/d} \; t_d^{-1}(p)

and the ES as the expected return conditional on the return falling below the VaR.
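A minimal sketch of the VaR step using scipy's Student-t quantile; d = 10 is a placeholder for the maximum likelihood estimate, and sigma is a volatility series from one of the variance models above.

    import numpy as np
    from scipy.stats import t

    d, p = 10.0, 0.01    # degrees of freedom (placeholder) and coverage rate
    # Standardize the t quantile so the distribution has unit variance
    q = np.sqrt((d - 2) / d) * t.ppf(p, d)
    var_t = -sigma * q   # element-wise over the volatility series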


Monte Carlo Simulation

Monte Carlo simulation relies on artificial random numbers to simulate hypothetical daily returns from day t+1 to day t+K.

In MCS, we first use the GARCH model to obtain tomorrow's variance, \sigma^2_{t+1}. Then, using a random number generator, we generate a set of artificial random numbers drawn from N(0,1),

\check{z}_{i,1}, \quad i = 1, 2, \ldots, MC

From these random numbers we calculate a set of hypothetical returns for tomorrow as

\check{R}_{i,t+1} = \sigma_{t+1} \check{z}_{i,1}

Given these hypothetical returns, we update the variance to get a set of hypothetical variances for the day after tomorrow, t+2:

\check{\sigma}^2_{i,t+2} = \omega + \alpha \check{R}^2_{i,t+1} + \beta \sigma^2_{t+1}

Then, given a new set of random numbers drawn from the N(0,1) distribution, we calculate a set of hypothetical returns for day t+2 and update the variance again. After repeating these steps K times, we have simulated the hypothetical daily returns from day t+1 to day t+K.

For the VaR and ES of Monte Carlo, we calculate them from the simulated returns:

VaR^{p}_{t+1} = -Percentile(\{\check{R}_{i,t+1}\}_{i=1}^{MC}, 100p)

with the ES computed as the average of the simulated returns that fall below -VaR^{p}_{t+1}.
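A minimal sketch of the scheme under the GARCH model; the parameters a, b, w, the return array r, and the variance series sigma2 come from the earlier sketch, while the number of paths MC and the horizon K are assumed values.

    import numpy as np

    rng = np.random.default_rng(0)
    MC, K, p = 10_000, 10, 0.01
    sim_r = np.zeros((MC, K))
    sim_s2 = np.full(MC, w + a * r[-1] ** 2 + b * sigma2[-1])  # tomorrow's variance
    for k in range(K):
        z = rng.standard_normal(MC)                      # artificial N(0,1) shocks
        sim_r[:, k] = np.sqrt(sim_s2) * z                # hypothetical returns for day t+k+1
        sim_s2 = w + a * sim_r[:, k] ** 2 + b * sim_s2   # update each path's variance
    total = sim_r.sum(axis=1)                            # K-day hypothetical returns
    var_mc = -np.percentile(total, 100 * p)              # Monte Carlo VaR
    es_mc = -total[total <= -var_mc].mean()              # Monte Carlo ES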


Appendix 2: Figures from the Risk Models

(1) RiskMetrics

Normal Distribution

[Figure: RiskMetrics daily VaR, normal distribution, 1% and 5%, 1992-2004]

[Figure: RiskMetrics daily ES, normal distribution, 1% and 5%, 1992-2004]

[Figure: RiskMetrics 1% VaR and ES, 1992-2004]

[Figure: RiskMetrics 5% VaR and ES, 1992-2004]

t-Distribution

[Figure: RiskMetrics daily VaR, t-distribution, 1% and 5%, 1992-2004]

(2) GARCH

Normal Distribution

[Figure: GARCH daily VaR, normal distribution, 1% and 5%, 1992-2004]

[Figure: GARCH daily ES, normal distribution, 1% and 5%, 1992-2004]

[Figure: GARCH 1% VaR and ES, 1992-2004]

[Figure: GARCH 5% VaR and ES, 1992-2004]

t-Distribution

[Figure: GARCH daily VaR, t-distribution, 1% and 5%, 1992-2004]

(3) NGARCH

Normal Distribution

[Figure: NGARCH daily VaR, normal distribution, 1% and 5%, 1992-2004]

[Figure: NGARCH daily ES, normal distribution, 1% and 5%, 1992-2004]

[Figure: NGARCH 1% VaR and ES, 1992-2004]

[Figure: NGARCH 5% VaR and ES, 1992-2004]

t-Distribution

[Figure: NGARCH daily VaR, t-distribution, 1% and 5%, 1992-2004]