
What Do Rating Agency Announcements Signal:

Confirmation or New Information?

Sander J.J. Konijn (a,b), Herbert A. Rijken (a)

    (a) Department of Finance, VU University Amsterdam

    (b) Tinbergen Institute, Amsterdam

    Abstract

Prior surveys and empirical research suggest that rating agencies respond slowly to changes in underlying credit quality. We consider rating changes, watchlist additions

    and outlook assignments from 1991 to 2005. Like previous research, we find significant

    negative pre-announcement abnormal return reactions, potentially followed by positive

    post-announcement corrections. These patterns suggest it may be vital to account for

    alternative measures of creditworthiness used by market participants. We estimate

    point-in-time default prediction models to make a distinction between confirmed and

    unconfirmed announcements. We obtain no significant positive post-announcement

    returns if announcements are out of line with pre-announcement point-in-time credit

    model tendencies. Significant positive post-announcement returns typically materialize

    when announcements are in line with pre-announcement point-in-time credit quality

    deteriorations. In turn, pre-announcement return reactions become less severe when

    downgrades are preceded by watchlist additions. We finally find that downgrade an-

    nouncement abnormal return reactions are predominantly related to severity of the

downgrade signal. This refers not only to the number of notches downgraded. Returns are also larger when we either observe a concomitant rating change by Standard & Poor's in close range, or the company is newly added to the watchlist when it is downgraded.

    Key words: Rating Agencies, Rating changes, Watchlist, Outlook, Event study, Stock

    returns, Default prediction, Credit-scoring.

    JEL classification: tbw.

    Corresponding author: Herbert A. Rijken, FEWEB/FIN, VU University Amsterdam, De Boelelaan 1105,

    1081 HV Amsterdam, The Netherlands. Phone: +31 20 59??????; fax: +31 20 59??????; e-mail:

    [email protected].


    1 Introduction

    Information disclosure by rating agencies has been examined in two alternative ways. A

    first approach looks at the accuracy and consistency of rating levels, and timeliness of rating

    changes vis-a-vis alternative measures of creditworthiness. Ederington et al. (1987) and

    Perraudin and Taylor (2004) relate bond ratings to yield spreads and bond prices, respec-

    tively. Altman and Rijken (2005) and Loffler (2005) shed light on the underlying rating

    process, trying to explain why rating changes are rare, serially dependent and predictable

    using borrower fundamentals. Surveys by Ellis (1998) and Baker and Mansi (2002) reveal

    that market participants believe that agency ratings adjust slowly to changes in corporate

    credit quality.

    Secondly, one may examine return reactions surrounding rating agency announcements.

    Using daily stock return data Holthausen and Leftwich (1986) obtain significant abnor-

    mal return reactions surrounding downgrades and watches for down- and upgrade. Hand,

    Holthausen and Leftwich (1992) find significant excess returns when watches for downgrade

    and downgrades are announced. Goh and Ederington (1993)(1999) obtain significant nega-

    tive return reactions surrounding downgrades, but insignificant and small responses in case

    of upgrades. When larger event windows are considered, previous research typically reveals

    that announcements tend to be preceded by large and significant return responses.

Instead of equity markets, several authors look at bond market reactions, see Norden

    and Weber (2004) for a brief outline. The main disadvantage of bond data in event study

    analysis is potential illiquidity. Corporate bonds are frequently bought and held to maturity,

    such that actual trading volume can be quite low, see Alexander, Edwards and Ferri (1998).

    Nonetheless, the overall pattern is quite similar to equity markets. Studies find significant

    (pre-)announcement window abnormal returns (WAR) surrounding downgrades and watches

for downgrade. The impact of positive announcements seems to be lower.

Recent studies have given more attention to credit default swaps (CDS). CDSs are directly

    related to credit risk. As a result, they seem to be very suitable to determine the importance

    of rating agency announcements. What is more, Blanco, Brennan and Marsh (2005) show


    that most pricing relevant information flows from CDS to bond markets.

    Conditioning on rating events, Hull, Predescu and White (2004) reveal that index ad-

    justed CDS spreads (i.e., adjusted by a spread index of similarly rated CDSs) widen sig-

    nificantly before downgrades, watches for downgrade and outlooks. Considering the an-

nouncement window itself, the authors only find a significant response in case of watches for downgrade. Norden and Weber (2004) study the response of equity and CDS markets to

    watchlist additions and rating changes. The authors obtain significant announcement effects

    in both markets with respect to both event types. Moreover, results once again indicate

    that both markets anticipate downgrades as well as watches for downgrade. In general stock

    markets anticipate downgrades more steadily than CDS markets.

    Inspired by previous empirical findings, we combine both strands of research to more

accurately assess the information content of Moody's rating announcements. Significant pre-announcement abnormal returns suggest it may be vital to account for alternative measures

of creditworthiness used by market participants. Because the latter are unobserved, we estimate

    point-in-time default prediction models, using only publicly available information. Point-

    in-time models should by definition respond quickly to changes in underlying fundamentals.

    This in turn allows us to identify announcements that might predominantly confirm an

    underlying tendency. We subsequently take an event study approach to determine whether

    we observe a different return response surrounding the events considered. Our sample period

    runs from 1991 to 2005. Besides rating changes and watchlist additions we consider outlook

    assignments as well.

    Unconditional estimation results confirm that sharp negative equity returns already ma-

    terialize prior to negative rating announcements. In case of downgrades, and somewhat less

    so in case of watches for downgrade, negative announcement and pre-announcement window

    abnormal returns (WAR) are partially annihilated by positive average post-announcement

abnormal returns. The latter is in line with Glascock, Davidson and Henderson (1987), Norden and Weber (2004) and, considering bond market reactions, Heinke and Steiner (2001).

    We obtain no significant positive post-announcement returns if announcements do not


    confirm a pre-announcement point-in-time credit model tendency. This suggests that new

    information is fully absorbed once it is revealed. On the other hand, significant positive

    post-announcement returns typically materialize when announcements are in line with pre-

    announcement point-in-time credit quality deteriorations. In this case, formal rating agency

announcements might predominantly resolve underlying uncertainty, which puts the market at ease.

    In case of downgrades, we find that pre-announcement return reactions are mostly related

    to watchlist precedence. Pre-announcement negative abnormal returns are less severe when

    downgrades are preceded by watchlist additions. Immediate significant negative abnormal

    return reactions are already obtained once the watchlist addition is announced.

    We finally find that announcement window abnormal returns are predominantly deter-

mined by severity of the downgrade signal. This refers not only to the number of notches downgraded. Returns are also stronger when we either observe a concomitant rating change by

Standard & Poor's in close range, or the company is newly added to the watchlist when it

    is downgraded.

    The remainder of this paper is organized as follows. To better understand the rating

process, section 2 gives an overview of Moody's watchlist and outlook assignments and resolution (periods). Section 3 presents estimated window abnormal returns related to rating changes, watchlist additions and outlook assignments. This section is subdivided into subsections related to unconditional WARs and conditional WARs. In the latter case we condition

    on attributes related to individual announcements. Section 4 concludes.

    2 Signaling Creditworthiness: Rating, Watchlist and

    Outlook assignments

    Before examining the information content of announcements, we first look more closely at

the rating process itself, which is described in Moody's (2004).1 We consider the total rating

1See also Standard & Poor's (2005).


history of companies, obtained from the July 2005 version of Moody's DRS database. An

extended version of watchlist and outlook information was provided separately by Moody's

    Investors Service.

Rating changes are the ultimate means by which rating agencies publicly express their opinion regarding relative changes in the underlying creditworthiness of rated issuers. This in turn reflects an issuer's ability and willingness to meet contractual obligations.

More recently Moody's started to supplement its rating assignments with rating reviews and outlooks. Though Moody's already published watchlist assignments back in 1985, these have only been a formal part of the credit quality designation process since October 1991. Somewhat later, in January 1995, Moody's also started to assign rating outlooks. As a result

    our data set stretches from October 1991 to February 2005.

In assigning ratings, agencies make use of a through-the-cycle rating methodology. That is, ratings are only revised when there is a significant permanent, long-term component in the

    underlying credit quality change, see also Cantor and Mann (2003). Watchlist and outlook

    assignments seem to deal with the continuous struggle of agencies to strike a balance between

    rating timeliness and rating stability.

    Outlook and watchlist assignments have different implications, see Cantor and Hamilton

(2004). A rating outlook represents an opinion regarding the likely direction of an issuer's

    credit quality. Watches, also called rating reviews, can be considered as a subset of rating

    outlooks, giving a much stronger indication about a possible future rating change.

    Looking at fractions of rating changes preceded by a watch or outlook confirms this dif-

    ference in interpretation. From October 1991 to February 2005, 36 (31) percent of observed

    downgrades (upgrades) were preceded by a watch for downgrade (upgrade). The correspond-

    ing numbers in case of negative (positive) outlooks are 16 (10) percent, respectively.

    Cantor and Hamilton (2004) underline the importance of watches and outlooks in terms

    of signaling. They show that it becomes much harder to predict future rating changes from

    past rating changes once one controls for watchlist and outlook status.


    2.1 Watchlist and Outlook: Resolution and Resolution period

    Table 1 provides the total number of watchlist and outlook assignments within our sample

    period. We will focus on those assignments with a clear indicated direction. In particular,

    positive and negative outlooks and watches for upgrade or downgrade.

    Insert Table 1

It is clear that the number of negative watchlist and outlook indications is always larger than the number of positive counterparts from 1995 onwards. The fraction of positive to negative watches (outlooks) closely resembles the fraction of upgrades to downgrades.

    Watchlist and outlook assignments are by and large equally divided between investment-

    and speculative-grade issuers, with the exception of watches for downgrade. In the latter

    case the number of assignments in the investment-grade category is almost twice as large.

    This is in line with the fraction of speculative-grade to total number of issuers, which ranges

    from 30 to 39 percent in the period considered. However, the result is surprising if one

recognizes that the number of downgrades is almost equally divided between investment-

    and speculative-grade issuers.

    Table 2 gives an overview of watchlist and outlook resolutions. Patterns are similar

across positive and negative watchlist (outlook) assignments. In more than 2/3 of the cases in which a company was added to the watchlist, it eventually experienced a rating change in the same

    direction. Rating changes in the opposite direction are rare. Though still relatively small,

    the latter is observed more frequently in case of outlooks. The fraction of outlooks that

directly led to rating changes in the indicated direction is much smaller compared to watchlist resolutions, at somewhat more than 1/4. Intended and empirical resolution periods of outlooks exceed those of watchlist assignments. As a result, it comes as no surprise that a significant part of outlooks is still unresolved at the end of our sample period (i.e., right-censored).

    Insert Table 2


    The second part of the table shows that subdivisions of watchlist resolutions between

    investment- and speculative-grade issuers reflect those of initial assignments, see table 1.

    This is not true in case of outlooks.

As far as outlooks are concerned, this is only part of the story. The lower part of table 2 indicates that about half of the outlooks that did not lead to an immediate rating change were succeeded by a watch or outlook in a similar direction. The numbers in

    parentheses next to these counts denote the cases that led to a rating change in similar

    direction.

One may consider outlooks followed by a watch, and subsequently by a rating

    change in similar direction, as resolved. Incorporating these cases, resolution fractions of

    positive and negative outlooks increase to about 0.42. This is still smaller than watchlist

assignments. Taking this broader view, the speculative- and investment-grade subdivision at resolution (in the intended direction) more closely resembles the corresponding subdivision at

    inception.

    We finally consider resolution periods, defined as the length of time from inception to

outlook or watchlist resolution. Moody's (2004) states that an outlook should be interpreted

    as an opinion regarding the likely direction of a rating change over the medium term, typically

    18 to 36 months. Watches on the other hand indicate that the rating is under review for

possible rating change in the short term, usually within 90 days.2

    Figure 1 provides frequency distributions of watchlist and outlook resolution periods,

    excluding right censored cases. The numbers on top of the bars refer to the percentage

    of cases within each bin that experienced a rating change in the intended direction. For

    example, looking at the negative outlook plot, 56 out of 200 (i.e., 28 percent) negative

    outlooks with a resolution period between 500 and 600 days ended up in a rating downgrade.

2Standard & Poor's (2005) uses a similar distinction. Outlooks are supposed to assess the potential direction

    of a long term credit rating and are typically resolved within 6 months to 2 years. Credit watches indicate

    the potential direction of a rating change in a short- or long term rating and are normally completed within

    90 days.


    Insert Figure 1

The plotted distributions of positive and negative outlooks are broadly similar in shape.

    The largest dissimilarity seems to be the relatively large number of negative outlooks resolved

    within 100 days. Given these cases, the fraction of negative outlooks leading to a rating

    downgrade is relatively high as well.

    Dissimilarities between watches for downgrade and upgrade are more pronounced. Res-

    olution periods of watches for downgrade are relatively more concentrated at the lower end

    of the frequency distribution. Short resolution periods are associated with slightly higher

    actual downgrade fractions as well.

    3 Measuring Window Abnormal Returns (WAR)

    3.1 Data Set

    Given previous insights, we next examine WARs surrounding rating agency announcements.

To arrive at our sample we combine data from several sources. Companies' rating histories are obtained from the July 2005 version of Moody's DRS database. An extended version of watchlist and outlook information is provided separately by Moody's Investors Service. Standard & Poor's issuer ratings are obtained from the June 2005 version of Standard & Poor's CREDITPRO 7.0 database.3 Daily stock returns, and stock index return data, are

    taken from Thomson Financial / Datastream. The latter refer to the theoretical growth in

    value of a share holding position over a one day period, assuming dividends are re-invested

    to purchase additional units of stock at the closing price applicable on the ex-dividend date,

    ignoring tax and reinvestment charges.

As a first step we tried to match Moody's rated companies with U.S. companies that are,

    or once were, traded on either the NYSE or the NASDAQ. To facilitate our own ranking of

    3We did not collect data on smaller players (e.g., Fitch, Duff & Phelps). Norden and Weber (2004) find

that market responses to rating events by Fitch are considerably weaker than those of Standard & Poor's and Moody's.


    companies in terms of default probability later on, we narrow our sample further to those

    companies for which we were able to obtain company specific variables. The latter are

obtained from Standard & Poor's COMPUSTAT database.4

    Our final sample period runs from October 1991 to February 2005. Matching the different

data sources leaves us with 1099 U.S. companies.

Calculating a rating transition matrix reveals that most rating activity is concentrated

    along the main diagonal. This confirms that ratings most frequently change only gradually

(i.e., notch by notch). Calculating a similar transition matrix with respect to Moody's entire rated universe reveals that rating changes at both the upper and lower ends of the rating spectrum are relatively sparse. This is especially true in case of upgrades at

    the upper end of the rating scale, which might have been caused by exclusion of financial

companies. Apart from this, the sample seems to be a reasonable reflection of the type of rating changes observed within the sample period.5

    3.2 Unconditional WAR

    To measure the impact of rating agency announcements we make use of a common approach

in event study analysis, relating the return of company i, R_it, to the market portfolio, R_mt

(i.e., the market or one-factor model):6

R_{it} = \alpha_i + \beta_i R_{mt} + u_{it}    (1)

    The model is estimated using a 400 trading day window, equally divided between the pre-

    and post-event period.7 We take such an approach because it is not uncommon for events to

4We consider non-financial companies only. COMPUSTAT data on financials, like banks, is frequently non-available. Moreover, in contrast with non-financial companies, financial companies are highly regulated, making them rather special compared to non-financials.

5These results are available from the corresponding author upon request.

6Though a multifactor model should provide a better fit, Campbell, Lo and MacKinlay (1997) note that the gains from employing multifactor models in event study analysis are limited.

7Results tend to be insensitive to the length of the estimation window.


    be surrounded by other announcements. Indeed, the very existence of watchlist and outlook

    assignments might be explained by a willingness to disseminate some information as early as

    possible. By using a reasonably large estimation window, and considering data from the pre-

    and post-event periods, estimates will be less susceptible to announcements surrounding the

actual event. Moreover, including post-event data allows us to incorporate potential changes in return variability as a result of the announcement itself. We consider a 60 trading day

event window to estimate excess returns surrounding Moody's announcements.
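To make the mechanics concrete, the following sketch estimates equation (1) by OLS and sums the event-window abnormal returns into a WAR. It is a minimal Python illustration under our reading of the setup (a 400-day estimation window split around the event and a roughly 60-day event window); the exact window boundaries and variable names are illustrative, not the authors' code.

```python
import numpy as np
import pandas as pd

def window_abnormal_return(stock_ret, market_ret, event_idx,
                           est_half=200, event_half=30):
    """Market-model WAR for one announcement (sketch of equation (1)).

    stock_ret, market_ret : pd.Series of daily returns on a common index.
    event_idx             : integer position of the announcement day.
    Assumes the series is long enough around event_idx; the estimation
    sample excludes the event window itself (an assumption on our part).
    """
    # Estimation sample: 200 trading days before and 200 after the event.
    pre = slice(event_idx - est_half - event_half, event_idx - event_half)
    post = slice(event_idx + event_half + 1,
                 event_idx + event_half + 1 + est_half)
    est_y = pd.concat([stock_ret.iloc[pre], stock_ret.iloc[post]])
    est_x = pd.concat([market_ret.iloc[pre], market_ret.iloc[post]])

    # OLS estimates of alpha_i and beta_i in R_it = alpha_i + beta_i R_mt + u_it.
    X = np.column_stack([np.ones(len(est_x)), est_x.values])
    alpha, beta = np.linalg.lstsq(X, est_y.values, rcond=None)[0]

    # Abnormal returns over the event window, summed into the WAR.
    ev = slice(event_idx - event_half, event_idx + event_half + 1)
    ar = stock_ret.iloc[ev].values - (alpha + beta * market_ret.iloc[ev].values)
    return ar.sum()
```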

    In line with other studies, we find that the overall impact of rating agency announcements

    is largest in case of negative announcements. Due to space considerations we therefore

    predominantly report results on the latter, commenting on positive announcements only

    when appropriate.

The unshaded columns of table 3 give an overview of window abnormal returns (WAR) related to negative announcements. The watchlist and outlook sample composition seems

    reasonable given information provided in table 1. The skewed division of watches for down-

    grade between investment- and speculative-grade ratings is comparable to table 1.

    Insert Table 3

    In case of downgrades we obtain an average total event WAR of 1.6 percent. The largest

event WAR is associated with watches for downgrade, -5 percent. More than 2/3 of them eventually wind up in a rating downgrade. Considering the relatively short resolution period

    as well, the market seems to be fully aware of the seriousness of this signal. The return impact

    of negative outlooks is much less severe. This is in accordance with a longer resolution period,

    and a lower likelihood of an eventual rating change.

Excess return patterns seem to be somewhat similar across announcement types. A

    significant part of abnormal returns materializes prior to the announcement day window,

subsequently being followed by a correction in the opposite direction, especially in case of downgrades and negative outlooks.

    The impact of positive announcements turns out to be small and insignificant. However,

    watches for upgrade are a clear exception to this rule, with a total event WAR of almost


    4 percent. Like their negative counterparts, a significant part of this excess return mate-

    rializes prior to and within the announcement day window. However, we do not obtain a

significant post-announcement reaction. All information seems to be incorporated once the announcement has been made.

We note that results reported so far are possibly contaminated. There might be concomitant announcements surrounding the event considered. One could think of a watch or

    outlook preceding the actual rating change within close range. To deal with the problem of

    contamination we exclude observations if they are preceded by an announcement in similar

    direction within the pre-announcement event window. In case of watches and outlooks we

    moreover exclude observations if we observe a concomitant rating change at the announce-

    ment day. We did not follow the same practice the other way round. Rating changes are

considered to be the ultimate signaling device. It seems odd to exclude a rating change due to an outlook assignment at the same date.8

    Figure 2 graphically depicts estimated cumulative abnormal returns of the uncontam-

    inated samples, including positive announcements. The gray columns in table 3 report

    corresponding WAR statistics with respect to negative announcements. In general not many

    observations are lost in case of positive and negative watchlist announcements. This implies

    that they are relatively stand alone announcements. Observational losses within other cate-

    gories are relatively more pronounced, about 1/3 (1/5) in case of negative (positive) outlook

    and downgrade (upgrade) announcements.

    Insert Figure 2

Broadly speaking, results are similar to the contaminated samples, especially so in case of

    positive announcements, though there are some differences. Returns prior to the announce-

    ment day window decrease in magnitude and significance across all announcement types, but

    do not vanish. The significant post-announcement return in case of watches for downgrade

    diminishes both in magnitude and significance. The largest difference is obtained in case of

    8In a cross-sectional regression later on we determine the additional impact of watchlist and outlook

assignments on excess returns within the announcement day window as far as rating changes are concerned.


    negative outlooks. Results in the first interval of the event window, and the positive correc-

tion afterwards, are driven by concomitant downgrades at announcement dates. The latter

    are responsible for 80 percent of the total sample loss of 1/3. Abnormal returns immediately

    prior to and within the announcement day window remain significant.

    3.2.1 Robustness

    As a robustness check, we first determine whether we have to account for beta shifts sur-

    rounding rating agency announcements. If default risk is systematic it will be priced, see

    Denis and Denis (1995) and Vassalou and Xing (2004). On the other hand, the likelihood

    of default might be foremostly related to idiosyncratic factors, see Asquith, Gertner and

    Scharfstein (1994), Opler and Titman (1994), Dichev and Piotroski (2001).

    To test for beta stationarity surrounding agency announcements we use the testing pro-

    cedure of Impson, Glascock and Karafiath (1992), which is briefly described in the appen-

dix. Table 4 reports a unanimous rejection of systematic beta shifts surrounding agency

    announcements. This result in itself does not necessarily imply that default risk is predomi-

    nantly idiosyncratic.9 However, for our purposes it at least implies we do not have to account

    for beta shifts.

    Insert Table 4

    Secondly, Corhay and Tourani Rad (1996) indicate results may be significantly affected

    by inefficiencies in the estimation procedure. As a result we allowed for a (skewed-t)

    GARCH(1,1) specification with respect to the normal performance return model, see also

    Abad-Romero and Robles-Fernandez (2006). We did not find significant differences in esti-

    mated WARs.10 For comparability reasons we stick with ordinary estimation of WARs in

    9For example, if rating agencies are relatively slow in information dissemination a split up based on

announcement times might be incorrect. Changes in required rates of return could have materialized before the actual announcement.

    10The parameters are estimated by maximum likelihood using the G@RCH-package of the Ox programming

language, see Laurent and Peters (2002). These results are available from the corresponding author upon

    request.


    the remainder.
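The GARCH robustness check was carried out with the G@RCH package for Ox. As a rough stand-in, the sketch below shows how a skewed-t GARCH(1,1) market model could be fitted in Python with the `arch` package; the package choice and function arguments are ours, not the authors'.

```python
from arch import arch_model

def garch_market_model(stock_ret, market_ret):
    """Skewed-t GARCH(1,1) market model as an alternative to OLS estimation
    of equation (1).  `stock_ret` and `market_ret` are daily return series
    (assumed to be in percent so the optimizer is well scaled)."""
    # Mean equation: least-squares regression of R_it on R_mt (market model);
    # variance equation: GARCH(1,1) with skewed Student-t innovations.
    am = arch_model(stock_ret, x=market_ret, mean="LS",
                    vol="GARCH", p=1, q=1, dist="skewt")
    res = am.fit(disp="off")
    return res  # res.params holds the mean and GARCH parameters
```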

    Finally, to make our results less susceptible to outliers, we consider two nonparametric

    tests which are fully specified in the appendix. The generalized sign test examines whether,

    within a specific event window, the number of stocks with positive WARs differs significantly

from the number expected in the absence of abnormal performance. The latter is based on the average fraction of positive abnormal returns observed in the estimation period. In

case of negative announcements we would expect fewer positive WARs than this benchmark number, leading to a negative statistic. As an alternative, we transform the time series

    of residuals into their respective ranks. The nonparametric rank test examines whether the

    mean rank obtained within a specific window differs significantly from the average rank of

    the time series as a whole.

The last two lines of table 5 reveal that both nonparametric tests are broadly in line with prior results. For example, in case of watches for downgrade, sign tests clearly indicate that

    the reported number of negative WARs is significantly larger than expected. The same holds

    true with respect to the announcement day window and the window immediately preceding

    it. The rank tests in turn confirm that the average rank obtained in these windows is

    significantly lower than the overall mean rank.

    Overall, we confirm that sharp negative abnormal returns already materialize prior to

    negative rating events. In case of downgrades, and somewhat less so in case of watches

    for downgrade, negative announcement and pre-announcement window abnormal returns

    are partially annihilated by positive post-announcement abnormal returns. The latter is in

line with Glascock et al. (1987), Norden and Weber (2004) and, considering bond market

    reactions, Heinke and Steiner (2001).

    3.3 Conditional WAR

    3.3.1 Investment- versus Speculative-Grade

    Up till now announcements related to specific rating events have been treated similarly.

    Jorion and Zhang (2005) underline that return reactions to rating announcements may be


    higher once ratings decline. The structural model of Merton (1974) implies that changes

    in equity value, due to underlying changes in default probability, are larger once default

    probability is at a higher level to start with. Moreover, differences in default probabilities

    between adjacent rating classes become larger once ratings decline, see Cantor, Hamilton,

Ou and Varma (2006).

We first determine whether (uncontaminated) results in table 3 are predominantly driven

by investment- or speculative-grade issuers. This is not only interesting in itself; it is also

important to know whether differences between alternative subsamples should predominantly

    be ascribed to alternative rating compositions. To focus more clearly on differences between

    prior-, post- and announcement window returns we adjust our windows accordingly.

    Table 5 reveals that speculative-grade total event WARs are generally larger than their

investment-grade counterparts. WAR patterns across specific windows are similar for both rating categories in case of negative outlooks. Looking at watches for downgrade and rating

    changes instead, we obtain larger (pre-)announcement WARs. Only in case of watches

for downgrade are speculative-grade post-announcement WARs larger as well, such that

    differences between total event WARs are less pronounced.

    Insert Table 5

    As a robustness check, we again consider asymptotic versions of two nonparametric tests,

    which are outlined in the appendix. The Mann-Whitney U test first orders the combined

    sample of WARs in each window, and subsequently compares the mean rank of populations

    that are to be compared. The Kolmogorov-Smirnov test examines the maximum distance

    between underlying empirical distribution functions.

    Kolmogorov-Smirnov statistics at the lower end of table 5 almost uniformly reject sim-

    ilarity of WAR distributions across subsamples. Though there are differences in terms of

    significance levels, this casts some doubt on testing power. Mann-Whitney U tests only

    confirm rank differences in case of rating change announcements.


    3.3.2 Confirmed versus Unconfirmed Announcements

    Keeping differences across investment- and speculative-grade rating classes in mind, we turn

    to WAR patterns. Surveys by Ellis (1998) and Baker and Mansi (2002) reveal that market

    participants believe that agency ratings adjust slowly to changes in corporate credit quality.

Recent studies by Norden and Weber (2004) and Hull et al. (2004) document pre-

    announcement CDS spread responses to negative rating agency announcements, and pre-

    dictability of rating events given CDS spread changes. Together with insurance companies,

    and the recent increasing share of hedge funds, banks represent the majority of market par-

    ticipants in CDS markets, see British Bankers Association (2006). This implies that CDS

    market participants are generally well informed. As an indication, Blanco et al. (2005) show

    that most pricing relevant information flows from CDS to bond markets.

    Surveys and empirical results suggest it would be vital to explicitly account for opinions

    held by market participants that are unrelated to possible sluggish information provisioning

by rating agencies. Because these opinions are unobserved, we estimate point-in-time default prediction

    models, which respond quickly to changes in underlying fundamentals. This allows us to dif-

    ferentiate between announcements that are in line with an underlying tendency (confirmed),

    and those that are not (unconfirmed).

    Some studies have tried to differentiate between expected and unexpected rating changes

    as well. Hand et al. (1992) relate corporate bond yields to the median yield of bonds

    within similar rating classes. They do find a stronger announcement WAR when watches

    for downgrade are classified as unexpected. Goh and Ederington (1993) look at underlying

causes of downgrades. Downgrades due to deteriorations or improvements in a firm's prospects

    or performance, like cash flow generation, are considered to be forward looking whilst others

    are classified as backward looking (i.e., expected). The authors find a larger announcement

    WAR in case of forward looking announcements. However, they obtain no clear differences

    in pre- and post-announcement WARs.


    Point-in-Time Ratings To proxy for point-in-time default risk assessment we estimate

logit models in a panel data setting along the lines of Altman and Rijken (2005, 2006):11

L(\beta \mid x, y) = \prod_t \prod_i \left[\Lambda(\beta^T x_{it})\right]^{1-y_i} \left[1 - \Lambda(\beta^T x_{it})\right]^{y_i}    (2)

where \Lambda(\beta^T x_{it}) = (1 + e^{-\beta^T x_{it}})^{-1}. The (transformed) company-specific variables x_it used are: net working capital (scaled by total assets, TA), WK/TA; retained earnings, ln(1 - RE/TA); earnings before interest and taxes, ln(1 - EBIT/TA); and market value of equity to book value of liabilities, 1 + ln(ME/BL). Net working capital proxies for short-term liquidity. The other three variables are related to past, current and future profitability. The last variable is of course also a measure of financial leverage. We moreover include the too-big-to-fail proxy Size, defined as total liabilities scaled by the total value of the U.S. equity market, 1 + ln(BL/Mkt); the firm's stock return vis-a-vis the equally weighted market return in the 12 months preceding t, AR; and \sigma(AR), the standard deviation of monthly abnormal returns in the 12 months preceding t.

The definition of y_i depends on the estimated model. We estimate a long-term default prediction model (ldp) and a marginal default prediction model (mdp). In the former case, y_i equals 0 if company i defaults before t + T, where T is set equal to 6 years. In case of the mdp model, y_i is equal to 0 if company i defaults in a future period (t + T_1, t + T_1 + T), where both T_1 and T are set equal to 3 years.12 As a result the mdp model focuses exclusively on the long term, in a sense looking through-the-cycle, whilst the ldp model also accounts for short-term default risk.
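As an illustration of how such a point-in-time logit could be fitted on a firm-month panel, the sketch below uses statsmodels; the column names and the `months_to_default` bookkeeping are hypothetical stand-ins for the data described above, not the authors' actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per firm-month with (transformed) predictors from equation (2);
# the names below are illustrative, not taken from the paper.
FEATURES = ["WK_TA", "ln_1m_RE_TA", "ln_1m_EBIT_TA", "ln_ME_BL",
            "size", "AR_12m", "sigma_AR_12m"]

def fit_ldp(panel: pd.DataFrame, horizon_years: int = 6):
    """Long-term default prediction (ldp) logit, a sketch of equation (2).

    Assumes `panel` contains `months_to_default` (np.inf if the firm never
    defaults in the sample) alongside the predictor columns above."""
    # Paper's coding: y_i = 0 if the firm defaults before t + T, 1 otherwise.
    y = (panel["months_to_default"] >= 12 * horizon_years).astype(int)
    X = sm.add_constant(panel[FEATURES])
    return sm.Logit(y, X).fit(disp=0)

# Point-in-time default probability per firm-month = 1 - fitted survival prob.:
# pd_hat = 1.0 - fit_ldp(panel).predict(sm.add_constant(panel[FEATURES]))
```

The mdp variant would only change the construction of y, requiring default to occur in the future interval (t + T_1, t + T_1 + T).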

    At the end of each month company rating data is linked to company specific model

variables. The estimation period stretches from April 1982, when Moody's began to add

    rating modifiers within broad rating classes, to the beginning of 2005, resulting in an average

11We could have considered a structural or intensity-based model as well. Lando (2004), Chapter 4, gives an overview of alternative statistical techniques.

    12The parameter estimates of the mdp model do not change substantially when T1 is varied between 3 and

6 years and T is allowed to vary between 1 and 3 years.


time series of 85 months per issuer. Because we make use of a longer estimation period, and

    there is no need for daily stock data, the credit models are estimated using a sample of 2239

    U.S. companies. This is significantly larger than the sample obtained to estimate WARs

    surrounding rating agency announcements (i.e., 1099 U.S. companies).

Table 6 reports estimation results on the ldp and mdp models. Though working capital has a negative coefficient, coefficient signs are generally as expected. The lower part of the table reports relative weights, RW_i = \sigma_i |\beta_i| / \sum_j \sigma_j |\beta_j|, where \sigma_j denotes the standard error of variable j in the pooled sample. This gives an indication of the relative importance of the

    variables considered. Both models give most weight to retained earnings, leverage and size.
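Given estimated coefficients and the pooled-sample dispersions, the relative weights can be computed directly, as in this small helper (our own illustration, not part of the paper):

```python
import numpy as np

def relative_weights(beta, sigma):
    """RW_i = sigma_i * |beta_i| / sum_j sigma_j * |beta_j|, where sigma_j is
    the standard error of variable j in the pooled estimation sample."""
    beta, sigma = np.asarray(beta, float), np.asarray(sigma, float)
    contrib = sigma * np.abs(beta)
    return contrib / contrib.sum()
```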

    Insert Table 6

    Given estimated default prediction models, we obtain a monthly ranking of companies

    in terms of estimated default probabilities. Each month we designate companies to rating

classes based on this relative ranking. The number of companies assigned to a specific rating class is commensurate with the actual number of companies in that rating class, as

    suggested by agency ratings. In the end we are left with what will be called credit model

    ratings.
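A minimal sketch of this monthly mapping is given below, assuming every issuer carries an agency rating drawn from a fixed set of classes; the class labels are illustrative.

```python
import numpy as np
import pandas as pd

def credit_model_ratings(pd_hat: pd.Series, agency_ratings: pd.Series) -> pd.Series:
    """Assign point-in-time 'credit model ratings' for one month.

    Firms are ranked by estimated default probability and assigned to rating
    classes so that class sizes match the distribution of actual agency
    ratings in that month (a sketch of the paper's procedure).
    Assumes pd_hat and agency_ratings share the same index and every agency
    rating belongs to the classes listed below."""
    order = ["Aaa", "Aa", "A", "Baa", "Ba", "B", "Caa-C"]   # best to worst
    counts = agency_ratings.value_counts().reindex(order, fill_value=0)

    ranked = pd_hat.sort_values().index            # lowest PD first
    labels = np.repeat(order, counts.values)       # class labels in rank order
    return pd.Series(labels, index=ranked).reindex(pd_hat.index)
```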

    We next define an announcement as confirmed if we observe a credit model rating change

    in similar direction within a fixed window prior to announcement. The latter is defined as

    the difference between the first and last credit model rating of the window considered. For

    example, if the credit model rating deteriorates prior to a negative rating agency announce-

    ment, it is classified as confirmed. When no credit model rating change, or even an opposite

    tendency, is observed, the rating change is classified as unconfirmed.

    The constructed credit model ratings are plain point-in-time ratings. As a result these

    models do not suffer from possible conservatism in information provisioning by rating agen-

    cies. Conditioning on agency rating changes, Altman and Rijken (2006) note that, in a 4

    year interval surrounding rating changes, about 80-90% of the credit model rating change

    occurs in the 2 year period prior to an actual agency rating change. On average credit model


    ratings anticipate agency rating changes by about 3/4 (1/2) year in case of upgrades (down-

    grades). Given data availability, our fixed credit model window runs from one year prior to

    announcement to the announcement itself.
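A hedged sketch of the resulting classification rule follows, assuming credit model ratings are stored on a numeric scale where higher numbers mean worse credit quality (our convention, not the paper's):

```python
import pandas as pd

def classify_announcement(cm_ratings: pd.Series, ann_date, direction: str,
                          window_days: int = 365) -> str:
    """Label an announcement as 'confirmed' or 'unconfirmed'.

    cm_ratings : one issuer's monthly credit model ratings on a numeric scale
                 (higher = worse), indexed by date and sorted ascending.
    direction  : 'down' for negative announcements, 'up' for positive ones.
    The credit model tendency is the difference between the first and last
    credit model rating in the year before the announcement."""
    start = ann_date - pd.Timedelta(days=window_days)
    window = cm_ratings.loc[start:ann_date]
    if len(window) < 2:
        return "unconfirmed"
    change = window.iloc[-1] - window.iloc[0]      # > 0 means deterioration
    if direction == "down" and change > 0:
        return "confirmed"
    if direction == "up" and change < 0:
        return "confirmed"
    return "unconfirmed"
```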

    Confirmed versus Unconfirmed WARs Figure 3 graphically depicts cumulative abnor-

    mal returns of announcements that were in line with a downward moving ldp credit model

    rating (confirmed), and those where this was not the case (unconfirmed).13 Table 7 reports

corresponding WARs. We again exclude watches and outlooks if Moody's or Standard & Poor's changes the company's rating at the same date.

    Insert Figure 3 and Table 7

Table 7 reveals that subsamples do not differ that much in size, with the exception of downgrade announcements. The relative magnitudes of investment- and speculative-grade

    companies are similar in corresponding subsamples. Results are therefore not a priori driven

    by differences in rating composition.

    We obtain clear differences in abnormal return patterns. Firstly, looking at downgrades

    and watches for downgrade, announcement WARs are less severe in case of confirmed an-

    nouncements. Lacking confirmation apparently leads to a lower immediate response. Sec-

    ondly, downgrades classified as unconfirmed show no significant pre- and post-announcement

    WARs. Their confirmed counterparts experience a significant negative abnormal return prior

    to the downgrade announcement, followed by a positive correction afterwards. Similar dif-

ferences in post-announcement WARs are obtained in case of watches for downgrade and

    negative outlooks. Except for negative outlooks, these findings are almost uniformly con-

    firmed by non-parametric Kolmogorov-Smirnov and Mann-Whitney U tests.

    We checked the previous results for robustness, focusing on pre- and post-announcement

    windows. Due to space considerations these results are not reported.14

    13In general, subdividing our sample based on the mdp model gives results that are very close to the ones

obtained in the ldp case. In the remainder we only report results related to the ldp model.

14Results are available from the corresponding author upon request.


    As a first check we shortened the interval used to subdivide our samples. Instead of

    using the full one year period prior to announcement, we temporarily only considered credit

    model ratings assigned strictly outside the event window. This alternative setup only had

    an impact on the pre-announcement WAR difference in case of downgrades, which became

somewhat smaller. Secondly, we excluded company WARs that were smaller (larger) than the 10 (90) percent quantile of the empirical WAR distributions. Results resembled our base

    case very closely. Like nonparametric inference, this confirms that our results are not unduly

    driven by extreme outliers. Finally, we experimented with exclusion of observations if they

    were preceded by an announcement in similar direction within the entire pre-announcement

    event window. Once again, this did not affect our overall finding.

    To conclude, we obtain no significant post-announcement WARs when announcements

do not confirm a deteriorating pre-announcement credit model tendency. This suggests that new information is fully absorbed once it is revealed. Positive post-announcement

    WARs materialize when announcements are consistent with prior point-in-time credit quality

    deteriorations. This suggests a pre-announcement excessive response once concerns about

    companies grow. Formal rating agency announcements might then predominantly resolve

    some underlying uncertainty, which puts the market at ease.

    3.3.3 Watchlist Resolution And (Un)confirmed Announcements

    In section 3.3.2 we obtain a pre-announcement return differential only in case of downgrades.

    This differential might to a large extent be related to prior signaling by rating agencies.

    This is confirmed if we subdivide our downgrade sample by watchlist precedence. The

    average downgrade pre-announcement WAR related to watchlist precedence, -1 percent, is

    significantly smaller than similar reactions with no watchlist precedence, -5 percent.

    As shown in table 2, watchlist precedence and rating levels are intimately related. Split-

    ting the downgrade sample up by rating grade first, and estimating WARs conditional

    on watchlist precedence gives similar results. No matter whether we are dealing with

    speculative-grade or investment-grade companies, pre-announcement WARs are significantly


    more negative when downgrades were not preceded by a watchlist addition.

    We finally subdivide our downgrade sample into four subsamples, based on watchlist

    precedence on the one hand and credit model indication on the other. Table 8 reveals

that, after conditioning on watchlist precedence first, the relative sample decomposition in terms of speculative- and investment-grade companies is quite similar across confirmed and unconfirmed subsamples (i.e., vertical direction). However, if we condition on credit model

indication first, the distribution between investment- and speculative-grade companies is

    turned upside down if we subsequently condition on watchlist precedence (i.e., horizontal

    direction). This is in accordance with the intimate link between watchlist additions and the

    investment-grade rating category.

    Insert Table 8

    As expected the total event WAR is highest and statistically very significant in the un-

    preceded (i.e., no watch), unconfirmed subsample. Once again, two observations stand out,

    and are consistently backed by nonparametric tests at the 1% significance level. Firstly,

    across subsamples pre-announcement abnormal returns are more severe when downgrades

    are not preceded by watchlist additions. Secondly, when downgrades are classified as con-

    firmed, we obtain significant and large positive post-announcement abnormal return reac-

    tions, whether or not downgrades were preceded by watchlist additions. This is not true in

    case of their unconfirmed counterparts.

    3.3.4 Announcement WARs: Multivariate Regression

    Previous results give additional insight into pre- and post-announcement WARs, but reveal

    less with respect to announcement WARs. To more accurately control for additional infor-

    mation, we consider a multivariate regression framework in case of downgrade announcement

WARs. Our main interest still centers on the impact of rating agency signals (i.e., outlooks and watches), and prior changes in market participants' point-in-time default risk assess-

    ment. The latter is again captured by ldp credit model rating changes one year prior to

    downgrades.


To assess the importance of watchlist (outlook) additions, we include watch_period (outlook_period).

    If the rating change was a resolution of a watchlist (outlook) assignment, the variable is de-

    fined as the natural log of one plus the watchlist (outlook) period. If not, the variable equals

the maximum value of this subsample. Moreover, we include watch_at (outlook_at), which is equal to 1 if a company's rating changes but the watch (outlook) was not lifted, or we observed a concomitant new watchlist (outlook) assignment.

Apart from these variables, we consider several control variables: Bound_Inv/Spec, which indicates whether the company's rating crosses the investment-/speculative-grade boundary;

    #notch, number of notches downgraded; default, equal to 1 if we observed a default within

    one year after the rating change, which could have been anticipated; broad, equal to 1 when

the rating change was across broad rating classes; and S&P_(t0,t1), which indicates whether we observed a rating change in similar direction by Standard & Poor's within (t0, t1). We expect to obtain a negative coefficient with respect to all variables. When Standard & Poor's rating change is announced (much) earlier than Moody's, we might also obtain a positive coefficient, as Moody's downgrade comes as no surprise then. We also include WAR_(-30,-2), the WAR in the full pre-announcement window.
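For concreteness, the cross-sectional regression could be set up as below; the column names are illustrative stand-ins for the regressors just described, and the formula-based OLS call is our choice rather than the authors' implementation.

```python
import statsmodels.formula.api as smf

def announcement_war_regression(downgrades):
    """Sketch of the regression of downgrade announcement WARs on
    signal-severity and control variables (hypothetical column names)."""
    formula = ("war_ann ~ watch_period + outlook_period + watch_at + outlook_at"
               " + bound_inv_spec + n_notch + default_1y + broad"
               " + sp_window + war_pre + ldp_change")
    return smf.ols(formula, data=downgrades).fit()

# usage (assuming `downgrades` is a DataFrame with one row per downgrade):
# print(announcement_war_regression(downgrades).summary())
```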

    Model 1 in table 9 indicates that control variables predominantly enter with their ex-

pected sign. We obtain significant coefficients with respect to default, #notch and WAR_(-30,-2).

    Though it seems that negative pre-announcement WARs strengthen announcement WARs,

    the actual impact is modest. Prior changes in point-in-time default risk assessment, as cap-

tured by ldp, seem to have no bearing on announcement WARs. The estimation result

    predominantly indicates that announcement WARs are more severe when additional infor-

mation is revealed. This can either be a concomitant rating change by Standard & Poor's in the same time interval, S&P_(-1,+1), or a new/non-lifted watchlist (outlook) assignment. The length of watchlist and outlook resolution periods has no significant impact on announce-

    ment WARs.

    Insert Table 9

In model 2 we replace plain resolution periods by watch_res,-90, outlook_res,-180, watch_res,+90 and


outlook_res,+180. These variables simply indicate whether the rating change was a resolution

    to a watch or outlook in similar direction within or beyond 90 or 180 days, corresponding by

and large to the median resolution period given an eventual downgrade. Since the watch_at and outlook_at variables turned out to be very significant, we split these cases further up between watches (outlooks) that are not lifted when ratings change, watch_at,old (outlook_at,old), and new watchlist (outlook) additions, watch_at,new (outlook_at,new).

    Parameter estimates reveal that announcement WARs are unaffected by watchlist prece-

    dence. Results confirm that announcement WARs are predominantly affected by downgrade

severity. This refers not only to the number of notches downgraded. Returns are also larger when

we either observe a concomitant rating change by Standard & Poor's in the announcement

    window, or the company is newly added to the watchlist when it is downgraded. Direct out-

look resolution (i.e., no watchlist addition) might be an important signal as well. We indeed find relatively large, but insignificant, coefficients related to outlook resolution dummies.

    4 Conclusion

    Prior surveys and benchmark studies indicate that agency ratings respond slowly to changes

in underlying credit quality. Empirical research on return reactions, on the other hand, frequently reports abnormal return reactions prior to negative announcements. We confirm the latter, considering three types of announcements: rating changes, watchlist additions

    and outlook assignments. We also obtain significant positive post-announcement returns in

    case of negative announcements, which is in line with several event studies.

Given previous empirical findings we estimate point-in-time default prediction models.

    These models do not suffer from possible conservatism in information provisioning by rat-

    ing agencies. This allows us to make a distinction between confirmed and unconfirmed

    announcements.

    We obtain no significant positive post-announcement returns if announcements are out

    of line with pre-announcement point-in-time credit model tendencies. This indicates that


    new information is fully absorbed once it is revealed. On the other hand, significant positive

    post-announcement returns typically materialize when announcements are in line with pre-

    announcement point-in-time credit quality deteriorations. This suggests a pre-announcement

excessive response once market participants' concerns about specific companies grow. Formal rating agency announcements might then predominantly resolve some underlying uncertainty, which puts the market at ease.

    In case of downgrades, we find that pre-announcement return reactions are intimately

    related to watchlist precedence. Pre-announcement negative abnormal returns are less severe

    when downgrades are preceded by watchlist additions. Assigning a watch for downgrade to a

    specific company already leads to an immediate significant negative abnormal return reaction

    once the watchlist addition is announced.

We finally find that announcement window abnormal return reactions are predominantly determined by severity of the downgrade signal. This refers not only to the number of notches

    downgraded. Returns are also larger when we either observe a concomitant rating change

by Standard & Poor's in close range, or the company is newly added to the watchlist when

    it is downgraded.


    5 Appendix

    5.1 Beta stationarity

    See Impson et al. (1992). We write:

R_{ki} = X_{ki}\beta_{ki} + u_{ki}    (3)

where \beta_{ki} denotes the estimated parameter vector of the market model of the k-th company, out of K companies, in period i, the latter in this case being either the period prior to the rating change (i = 1) or the period after the rating change (i = 2). The shift in the parameter vector then equals \delta_k = \beta_{k2} - \beta_{k1}. Denoting \delta = (\delta_1, \delta_2, \ldots, \delta_K)^T and R = (0\ 1\ 0\ 1\ \ldots\ 0\ 1), we then have:

\delta^T R^T \left[ R\, V(\delta)\, R^T \right]^{-1} R\, \delta \sim_a \chi^2(1)    (4)

with V(\delta) being a block-diagonal matrix with k-th block:

\sigma^2_{k1} \left( X^T_{k1} X_{k1} \right)^{-1} + \sigma^2_{k2} \left( X^T_{k2} X_{k2} \right)^{-1}    (5)
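A sketch of how the statistic in equations (3)-(5) could be computed, assuming per-firm return series for the pre- and post-event estimation periods are already at hand:

```python
import numpy as np
from scipy import stats

def beta_shift_test(returns_pre, returns_post, market_pre, market_post):
    """Chi-square test for a common beta shift around the event, following
    equations (3)-(5).  Each argument is a list with one numpy array per firm."""
    deltas, blocks = [], []
    for y1, y2, m1, m2 in zip(returns_pre, returns_post, market_pre, market_post):
        X1 = np.column_stack([np.ones(len(m1)), m1])
        X2 = np.column_stack([np.ones(len(m2)), m2])
        b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
        b2, *_ = np.linalg.lstsq(X2, y2, rcond=None)
        s1 = np.sum((y1 - X1 @ b1) ** 2) / (len(y1) - 2)   # sigma^2_{k1}
        s2 = np.sum((y2 - X2 @ b2) ** 2) / (len(y2) - 2)   # sigma^2_{k2}
        deltas.append(b2 - b1)                             # delta_k (2-vector)
        blocks.append(s1 * np.linalg.inv(X1.T @ X1) +
                      s2 * np.linalg.inv(X2.T @ X2))

    delta = np.concatenate(deltas)                 # stacked shift vector
    V = np.zeros((2 * len(blocks), 2 * len(blocks)))
    for k, blk in enumerate(blocks):
        V[2 * k:2 * k + 2, 2 * k:2 * k + 2] = blk  # block-diagonal V(delta)
    R = np.tile([0.0, 1.0], len(blocks))           # selects the beta shifts

    stat = (R @ delta) ** 2 / (R @ V @ R)          # equation (4), chi2(1)
    return stat, 1 - stats.chi2.cdf(stat, df=1)
```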

    5.2 Nonparametric Tests w.r.t. WARs

    5.2.1 Sign Test

See Cowan (1992). Define N = n_est + n_event, and K = #firms, where n_est (n_event) denotes the number of return observations in the estimation (event) window. The Sign Test is given by:

\text{Sign Test} = \frac{w - K\hat{p}}{\left[ K\hat{p}(1 - \hat{p}) \right]^{1/2}} \sim_a N(0, 1)    (6)

where:

w = \sum_{k=1}^{K} 1\{WAR_k > 0\}    (7)


    and

\hat{p} = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{n_{est}} \sum_{t=1}^{n_{est}} 1\{AR_{kt} > 0\}    (8)
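A compact implementation of the generalized sign test, under the assumption that firm-level WARs and estimation-window abnormal returns are available as arrays:

```python
import numpy as np

def generalized_sign_test(war, est_ar):
    """Generalized sign test of equations (6)-(8).

    war    : array of window abnormal returns, one per firm (length K).
    est_ar : K x n_est array of estimation-window abnormal returns."""
    K = len(war)
    p_hat = np.mean(est_ar > 0, axis=1).mean()      # equation (8)
    w = np.sum(war > 0)                             # equation (7)
    return (w - K * p_hat) / np.sqrt(K * p_hat * (1 - p_hat))  # ~ N(0, 1)
```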

    5.2.2 Rank Test

See Cowan (1992). If we define M_kt as the rank of the abnormal return of company k at time t,

M_{kt} = \mathrm{rank}(AR_{kt})    (9)

the Rank Test is given by:

\text{Rank Test} = (n_{event})^{1/2} \, \frac{\frac{1}{n_{event}} \sum_{t \in \text{event}} \left( \frac{1}{K} \sum_{k=1}^{K} M_{kt} \right) - \bar{M}}{\left[ \frac{1}{N} \sum_{t=1}^{N} \left( \frac{1}{K} \sum_{k=1}^{K} M_{kt} - \bar{M} \right)^{2} \right]^{1/2}} \sim_a N(0, 1)    (10)

where \bar{M} denotes the mean rank, (N + 1)/2.
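A sketch of the rank test, assuming a firm-by-day matrix of abnormal returns covering both the estimation and event windows:

```python
import numpy as np
from scipy import stats

def rank_test(ar, event_cols):
    """Rank test of equations (9)-(10).

    ar         : K x N array of abnormal returns (estimation + event window).
    event_cols : boolean mask of length N marking the event-window columns."""
    K, N = ar.shape
    M = stats.rankdata(ar, axis=1)                 # M_kt = rank(AR_kt), eq. (9)
    M_bar = (N + 1) / 2.0                          # mean rank
    mean_rank_t = M.mean(axis=0)                   # (1/K) sum_k M_kt per day
    n_event = int(np.sum(event_cols))
    num = np.sqrt(n_event) * (mean_rank_t[event_cols].mean() - M_bar)
    den = np.sqrt(np.mean((mean_rank_t - M_bar) ** 2))
    return num / den                               # ~ N(0, 1), equation (10)
```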

    5.3 Non-Parametric Tests for Two Independent Samples

    5.3.1 Mann-Whitney U

    See Sheskin (2003). Suppose we have two samples, with sample sizes n1 and n2. Define:

U_1 = n_1 n_2 + \frac{n_1(n_1 + 1)}{2} - \sum_i M_{i1}    (11)

U_2 = n_1 n_2 + \frac{n_2(n_2 + 1)}{2} - \sum_i M_{i2}    (12)

where n_i equals the number of observations in subsample i, whilst \sum_i M_{ij} denotes the sum of the ranks of the j-th subsample. Then:

z = \frac{\min(U_1, U_2) - \frac{n_1 n_2}{2}}{\sqrt{\frac{n_1 n_2 (n_1 + n_2 + 1)}{12}}} \sim_a N(0, 1)    (13)
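The asymptotic Mann-Whitney statistic can be computed directly from the pooled ranks, as sketched below; scipy.stats.mannwhitneyu offers an equivalent library alternative.

```python
import numpy as np
from scipy import stats

def mann_whitney_z(war1, war2):
    """Asymptotic Mann-Whitney U test of equations (11)-(13)."""
    n1, n2 = len(war1), len(war2)
    ranks = stats.rankdata(np.concatenate([war1, war2]))
    R1, R2 = ranks[:n1].sum(), ranks[n1:].sum()
    U1 = n1 * n2 + n1 * (n1 + 1) / 2 - R1          # equation (11)
    U2 = n1 * n2 + n2 * (n2 + 1) / 2 - R2          # equation (12)
    U = min(U1, U2)
    return (U - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
```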


    5.3.2 Kolmogorov-Smirnov

See Sheskin (2003). The Kolmogorov-Smirnov test statistic is given by:

4\left[\max_i \left|F_1(x_i) - F_2(x_i)\right|\right]^{2} \frac{n_1 n_2}{n_1 + n_2} \;\overset{a}{\sim}\; \chi^2(2)    (14)

where F_j(x_i) denotes the empirical distribution of subsample j evaluated at x_i (i.e., the ith observation from a total of N = n_1 + n_2).
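The corresponding Python sketch, again assuming two vectors of window abnormal returns as inputs:

import numpy as np
from scipy import stats

def ks_chi2(war1, war2):
    """Chi-square approximation to the two-sample Kolmogorov-Smirnov test, eq. (14)."""
    n1, n2 = len(war1), len(war2)
    grid = np.sort(np.concatenate([war1, war2]))                    # pooled evaluation points
    F1 = np.searchsorted(np.sort(war1), grid, side="right") / n1    # empirical CDF of sample 1
    F2 = np.searchsorted(np.sort(war2), grid, side="right") / n2    # empirical CDF of sample 2
    D = np.max(np.abs(F1 - F2))                                     # maximum CDF distance
    chi2_stat = 4 * D ** 2 * n1 * n2 / (n1 + n2)
    return chi2_stat, stats.chi2.sf(chi2_stat, df=2)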

    References

Abad-Romero, P., Robles-Fernandez, M.D., 2006, Risk and Return Around Bond Rating Changes: New Evidence From the Spanish Stock Market, Journal of Business Finance & Accounting, 33(5-6), 885-908.
Alexander, G., Edwards, A.K., Ferri, M.G., 1998, Trading Volumes and Liquidity in Nasdaq's High-Yield Bond Market, Unpublished Manuscript, Securities and Exchange Commission, Washington DC.
Altman, E.I., Rijken, H.A., 2005, How Rating Agencies Achieve Rating Stability, Journal of Banking & Finance, 28(11), 2679-2714.
Altman, E.I., Rijken, H.A., 2006, A Point-in-Time Perspective on Through-the-Cycle Ratings, Financial Analysts Journal, 62(1), 54-70.
Asquith, P., Gertner, R., Scharfstein, D., 1994, Anatomy of Financial Distress: An Examination of Junk Bond Issuers, Quarterly Journal of Economics, 109, 625-658.
Baker, H.K., Mansi, S.A., 2002, Assessing Credit Agencies by Bond Issuers and Institutional Investors, Journal of Business Finance & Accounting, 29(9-10), 1367-1398.
Blanco, R., Brennan, S., Marsh, I.W., 2005, An Empirical Analysis of the Dynamic Relation between Investment-Grade Bonds and Credit Default Swaps, The Journal of Finance, 60(5), 2255-2281.
British Bankers' Association, 2006, BBA Credit Derivatives Report 2006, London.
Campbell, J.Y., Lo, A.W., MacKinlay, A.C., 1997, The Econometrics of Financial Markets, 2nd edn, Princeton University Press, Princeton, NJ.
Cantor, R., Mann, C., 2003, Are Corporate Bond Ratings Procyclical?, Special Comment, Moody's Investors Service, October.
Cantor, R., Hamilton, D.T., 2004, Rating Transitions and Default Conditional on Outlooks, Journal of Fixed Income, 14(2), 54-66.


Cantor, R., Hamilton, D.T., Ou, S., Varma, P., 2006, Default and Recovery Rates of Corporate Bond Issuers, 1920-2005, Special Comment, Moody's Investors Service.
Corhay, A., Tourani Rad, A., 1996, Conditional Heteroskedasticity Adjusted Market Model and an Event Study, Quarterly Review of Economics and Finance, 36(4), 529-538.
Cowan, A.R., 1992, Nonparametric Event Study Tests, Review of Quantitative Finance and Accounting, 2, 343-358.
Denis, D.J., Denis, D., 1995, Causes of Financial Distress Following Leveraged Recapitalization, Journal of Financial Economics, 37(2), 129-157.
Dichev, I.D., Piotroski, J.D., 2001, The Long-Run Stock Returns Following Bond Ratings Changes, The Journal of Finance, 56(1), 173-203.
Ederington, L.H., Yawitz, J.B., Roberts, B.E., 1987, The Informational Content of Bond Ratings, Journal of Financial Research, 10(3), 211-226.
Ederington, L.H., Goh, J.C., 1993, Is a Bond Rating Downgrade Bad News, Good News, or No News for Stockholders?, The Journal of Finance, 48(5), 2001-2008.
Ederington, L.H., Goh, J.C., 1999, Cross-Sectional Variation in the Stock Market Reaction to Bond Rating Changes, The Quarterly Review of Economics and Finance, 39(1), 101-112.
Ellis, D.M., 1998, Different Sides of the Same Story: Investors' and Issuers' Views of Rating Agencies, The Journal of Fixed Income, 7(4), 35-45.
Glascock, J.L., Davidson, W.N., Henderson, G.V., 1987, Announcement Effects of Moody's Bond Rating Changes on Equity Returns, Quarterly Journal of Business and Economics, 26(3), 67-78.
Hand, J.R.M., Holthausen, R.W., Leftwich, R.W., 1992, The Effect of Bond Rating Agency Announcements on Bond and Stock Prices, The Journal of Finance, 47(2), 733-752.
Heinke, V.G., Steiner, M., 2001, Event Study Concerning International Bond Price Effects of Credit Rating Actions, International Journal of Finance and Economics, 6(2), 139-157.
Holthausen, R.W., Leftwich, R.W., 1986, The Effect of Bond Rating Changes on Common Stock Prices, Journal of Financial Economics, 17(1), 57-89.
Hull, J., Predescu, M., White, A., 2004, The Relationship Between Credit Default Swap Spreads, Bond Yields, and Credit Rating Announcements, Journal of Banking & Finance, 28(11), 2789-2811.
Impson, C.M., Glascock, J., Karafiath, I., 1992, Testing Beta Stationarity Across Bond Rating Changes, The Financial Review, 27(4), 607-618.
Jorion, P., Zhang, G., 2005, Non-Linear Effects of Bond Rating Changes, Unpublished Manuscript.


Lando, D., 2004, Credit Risk Modeling: Theory and Applications, Princeton University Press, Princeton, NJ.
Laurent, S., Peters, J.-P., 2002, G@RCH 2.2: An Ox Package for Estimating and Forecasting Various ARCH Models, Journal of Economic Surveys, 16(3), 447-485.
Loffler, G., 2005, Avoiding the Rating Bounce: Why Rating Agencies are Slow to React to New Information, Journal of Economic Behavior & Organization, 56(3), 365-381.
Merton, R.C., 1974, On the Pricing of Corporate Debt: The Risk Structure of Interest Rates, The Journal of Finance, 29(2), 449-470.
Moody's Investors Service, 2004, Guide to Moody's Ratings, Rating Process and Rating Practices, June. (www.moodys.com)
Norden, L., Weber, M., 2004, Informational Efficiency of Credit Default Swap and Stock Markets: The Impact of Credit Rating Announcements, Journal of Banking & Finance, 28(11), 2813-2843.
Opler, T., Titman, S., 1994, Financial Distress and Corporate Performance, Journal of Finance, 49, 1015-1040.
Perraudin, W., Taylor, A.P., 2004, On the Consistency of Ratings and Bond Market Yields, Journal of Banking & Finance, 28(11), 2769-2788.
Sheskin, D.J., 2003, Handbook of Parametric and Nonparametric Statistical Procedures, 3rd edn, Chapman & Hall/CRC, Boca Raton, FL.
Standard & Poor's, 2005, Corporate Ratings Criteria, McGraw-Hill Companies. (www.standardandpoors.com)
Vassalou, M., Xing, Y., 2004, Default Risk in Equity Returns, The Journal of Finance, 59(2), 831-868.


Table 1: Moody's Watchlist and Outlook Assignments (1991-2005): Totals and Affected Rating Grades

                             Watch Upgrade   Watch Downgrade   Positive Outlook   Negative Outlook
Total: 1991-2005             2658            5616              2027               4290
Total: 1995-2005             2401            5063              2027               4290
Investment-grade             1439 (0.541)    3605 (0.641)      890 (0.439)        2155 (0.502)
Speculative-grade            1204 (0.452)    1975 (0.351)      1037 (0.511)       1961 (0.457)
WR                           3 (0.001)       7 (0.001)         16 (0.007)         17 (0.003)
Not Assigned                 12 (0.004)      29 (0.005)        84 (0.041)         157 (0.036)

Notes: This table gives an overview of watchlist additions and outlook assignments by Moody's Investors Service. The sample period runs from October 1991 to February 2005. The lower part of the table splits assignments up according to rating levels at the start of the watchlist or outlook period. WR is short for a withdrawn rating. Not Assigned denotes cases with no rating data available for the company considered. Relative fractions as a % of all 1991-2005 assignments are given in parentheses.

Table 2: Moody's Watchlist and Outlook Assignments (1991-2005): Resolutions

                             Watch Upgrade   Watch Downgrade   Positive Outlook   Negative Outlook
Upgrade                      1852 (0.710)    28 (0.005)        476 (0.290)        123 (0.033)
Downgrade                    18 (0.006)      3719 (0.676)      129 (0.078)        1049 (0.282)
No Rating Change             569 (0.218)     1629 (0.296)      795 (0.485)        1954 (0.526)
WR                           151 (0.057)     90 (0.016)        122 (0.074)        373 (0.100)
Not Assigned                 12 (0.004)      29 (0.005)        84 (0.051)         157 (0.042)
Left Censored                3 (0.001)       3 (0.000)         30 (0.018)         49 (0.013)
Right Censored               53              118               391                581

Intended Direction (Total)   1852            3719              476                1049
Investment-grade             1048 (0.565)    2436 (0.655)      181 (0.380)        387 (0.368)
Speculative-grade            804 (0.434)     1283 (0.344)      294 (0.617)        660 (0.629)
WR                                                             1 (0.000)          2 (0.000)

Successor
Watch Upgrade                                                  273 (211)
Positive Outlook             94 (33)
Watch Downgrade                                                                   672 (499)
Negative Outlook                             343 (31)

Intended Direction (Total)                                     687                1548
Speculative-grade                                              298 (0.434)        717 (0.463)
Investment-grade                                               389 (0.566)        831 (0.537)

Notes: This table gives an overview of watchlist and outlook resolutions. The sample period runs from October 1991 to February 2005. The upper part of the table indicates whether the watchlist or outlook assignment led to a rating change, or the rating was left unchanged. Corresponding fractions as a % of resolved cases (i.e., excluding right censored cases) are given in parentheses. WR is short for a withdrawn rating. Not Assigned denotes cases with no rating data available for the company considered. Left (Right) Censored refers to cases where there was no rating data available at the start (end) of the watchlist or outlook period. The middle part of the table splits resolved cases in the intended direction up according to rating levels at the start of the watchlist or outlook period. In the case of outlooks the lower part of the table indicates whether the outlook or watch was succeeded by a watch or outlook in similar direction. Cases that eventually led to a rating change in the intended direction are given in parentheses. The final part adds outlooks that eventually ended up (i.e., either directly or indirectly) in a rating change in the intended direction (e.g., positive outlook: 476 + 211), and gives the rating category of the company at the start of the outlook assignment.


    Table 4: Testing for Beta Stationarity Surrounding Agency Rating Announcements

    Downgrade Watch Down Negative Outlook Upgrade Watch Up Positive Outlook

    # obs 804 756 331 544 254 236

    Avg. Beta Change 0.064 0.061 0.067 0.041 -0.004 0.091

    Statistic 2.20 2.46 2.49 2.32 2.45 2.57

    p-value 0.14 0.12 0.11 0.13 0.12 0.11

Notes: This table provides estimates of average beta changes as a result of rating agency announcements. The sample period runs from October 1991 to February 2005. The statistic refers to the beta stationarity test proposed by Impson et al. (1992), which is briefly described in the appendix.


Table 5: Uncontaminated Sample: Investment versus Speculative-grade Announcements

Investment-grade

Downgrade
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    410         410         410         410         410
WAR                  0.008       -0.006 a    0.024 a     -0.010 a    0.025 a
# Positive WAR       224         181         257         203         241
# Negative WAR       186         229         153         207         169
Sign test            2.417 a     -1.831 b    5.678 a     0.342       4.097 a
Rank test            0.904       -2.193 b    3.091 a     -1.557 c    2.007 b

Watch Down
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    558         558         558         558         558
WAR                  -0.017 a    -0.018 a    0.005       -0.029 a    -0.040 a
# Positive WAR       250         240         291         230         237
# Negative WAR       303         313         262         323         316
Sign test            -1.496 c    -2.347 a    1.992 b     -3.198 a    -2.603 a
Rank test            -2.237 b    -5.033 a    0.690       -5.900 a    -3.365 a

Negative Outlook
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    194         194         194         194         194
WAR                  -0.013 c    -0.015 a    0.013 c     -0.018 a    -0.017
# Positive WAR       85          70          107         81          88
# Negative WAR       103         118         81          107         100
Sign test            -0.936      -3.125 a    2.274 b     -1.520 c    -0.499
Rank test            -0.355      -3.308 a    1.575 c     -2.833 a    -0.272

Speculative-grade

Downgrade
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    394         394         394         394         394
WAR                  -0.059 a    -0.017 a    0.023 b     -0.025 a    -0.057 a
# Positive WAR       158         164         225         161         175
# Negative WAR       236         230         169         233         219
Sign test            -2.881 a    -2.276 b    3.879 a     -2.579 a    -1.166
Rank test            -3.065 a    -2.321 b    0.963       -2.842 a    -2.352 b
Mann-Whitney U       -5.225 a    -1.576 c    -0.300      -2.913 a    -4.097 a
Kolmogorov-Smirnov   49.529 a    11.055 a    11.994 a    30.556 a    41.157 a

Watch Down
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    198         198         198         198         198
WAR                  -0.033 b    -0.026 a    0.029 b     -0.047 a    -0.047 b
# Positive WAR       88          79          104         87          89
# Negative WAR       105         114         89          106         104
Sign test            -0.673      -1.970 b    1.632 c     -0.817      -0.529
Rank test            -1.602 c    -3.187 a    0.377       -2.968 a    -1.995 b
Mann-Whitney U       -0.671      -0.505      -0.057      -0.645      -0.092
Kolmogorov-Smirnov   11.655 a    5.379 c     6.124 b     16.669 a    9.224 a

Negative Outlook
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    137         137         137         137         137
WAR                  -0.020      -0.013 b    0.011       -0.021 b    -0.027
# Positive WAR       60          63          63          56          64
# Negative WAR       71          68          68          75          67
Sign test            -0.449      0.076       0.076       -1.149      0.251
Rank test            0.465       -0.796      0.797       -0.068      0.702
Mann-Whitney U       -0.349      -0.929      -0.436      -0.373      -0.463
Kolmogorov-Smirnov   7.916 b     4.789 c     6.772 b     9.322 a     9.003 b

Notes: This table gives an overview of estimated window abnormal returns (WARs) related to specific (uncontaminated) rating agency announcements, subdivided by rating class. The sample period runs from October 1991 to February 2005. Background information is provided in Table 3. Underneath WAR estimates the table reports the number of positive and negative WARs within each window. The lower part of the table compares WARs across rated subsamples, using nonparametric testing procedures. Both tests are described in the appendix. The Mann-Whitney U test first orders the combined sample of WARs in each window, and subsequently compares the mean rank of both populations. The Kolmogorov-Smirnov test examines the maximum distance between the underlying empirical distribution functions. a, b and c denote significance at the 1, 5 and 10% confidence level, respectively.


Table 6: Long Term and Marginal Default Prediction Models

                     LDP                            MDP
Variable             Coeff      t-value    RW       Coeff      t-value    RW
constant             5.33 a     10.2                5.58 a     10.2
WK/TA                -0.98 a    2.6        -6.7     -1.53 a    3.9        -13.4
RE/TA                1.32 a     4.8        16.2     1.16 a     4.2        18.4
EBIT/TA              0.96       1.1        2.8      0.03       0.0        0.1
ME/BL                0.85 a     10.5       31.6     0.67 a     7.9        32.1
Size                 0.38 a     6.8        21.7     0.33 a     5.8        24.5
σ(AR)                -5.81 a    7.8        -13.7    -3.07 a    3.8        -9.3
AR                   5.50 a     4.7        7.4      1.25       0.9        2.2
pseudo R-squared     0.26                           0.14
# obs.               114535                         107445
# default obs.       13304                          6214

Notes: This table presents parameter estimates of logit regressions concerning default events. LDP and MDP denote the long term and marginal default prediction model, respectively. Default is indicated as 0, no default is indicated as 1. The (transformed) company-specific variables used are: net working capital (scaled by total assets, TA), WK/TA; retained earnings, ln(1-RE/TA); earnings before interest and taxes, ln(1-EBIT/TA); market value of equity to book value of liabilities, 1+ln(ME/BL); Size, defined as total liabilities scaled by the total value of the U.S. equity market, 1+ln(BL/Mkt); the firm's stock return vis-a-vis the equally weighted market return in the preceding 12 months, AR; and σ(AR), the standard error of monthly abnormal returns over that same period. Standard errors of the estimated parameters are generalized Huber/White robust standard errors, relaxing the error term distribution and independence among observations. RW gives the relative weight of each variable, defined as the estimated coefficient times the sample standard deviation of the variable, scaled by the sum of the absolute values of these products (in %). a, b and c denote significance at the 1, 5 and 10% confidence level, respectively.
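The relative-weight definition above is a reconstruction from the reported figures (the |RW| values of each model sum to roughly 100); the sketch below merely illustrates that normalisation, with the standard deviations serving as hypothetical placeholders rather than the paper's values.

import numpy as np

variables = ["WK/TA", "RE/TA", "EBIT/TA", "ME/BL", "Size", "sigma(AR)", "AR"]
coeffs = np.array([-0.98, 1.32, 0.96, 0.85, 0.38, -5.81, 5.50])   # LDP coefficients from Table 6
stdevs = np.array([0.15, 0.30, 0.06, 0.80, 1.20, 0.05, 0.03])     # hypothetical sigma_i, for illustration only
rw = 100 * coeffs * stdevs / np.sum(np.abs(coeffs * stdevs))      # relative weights; |rw| sums to 100
print(dict(zip(variables, rw.round(1))))

# Consistency check on the reported LDP weights: their absolute values sum to ~100.
rw_reported = np.array([-6.7, 16.2, 2.8, 31.6, 21.7, -13.7, 7.4])
print(np.abs(rw_reported).sum())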


Table 7: Indicated Credit Quality Change by ldp Model

Confirmed

Downgrade
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    595         595         595         595         595
Investment           334         334         334         334         334
Speculative          261         261         261         261         261
WAR                  -0.038 a    -0.016 a    0.058 a     -0.019 a    0.002
# Positive WAR       267         257         392         288         328
# Negative WAR       328         338         203         307         267
Sign test            -1.631 c    -2.440 a    8.489 a     0.069       3.308 a
Rank test            -3.066 a    -2.944 a    4.380 a     -1.384 c    0.239

Watch Down
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    363         363         363         363         363
Investment           260         260         260         260         260
Speculative          103         103         103         103         103
WAR                  -0.031 a    -0.027 a    0.032 a     -0.046 a    -0.042 a
# Positive WAR       156         149         208         132         161
# Negative WAR       207         214         155         231         202
Sign test            -2.083 b    -2.818 a    3.378 a     -4.603 a    -1.558 c
Rank test            -2.300 b    -5.342 a    1.711 b     -4.992 a    -2.295 b

Negative Outlook
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    156         156         156         156         156
Investment           86          86          86          86          86
Speculative          70          70          70          70          70
WAR                  -0.011      -0.011 a    0.029 b     -0.016 b    0.009
# Positive WAR       71          66          91          75          79
# Negative WAR       85          90          65          81          77
Sign test            -0.738      -1.539 c    2.466 a     -0.097      0.544
Rank test            -1.354 c    -1.766 b    2.668 a     -0.703      0.772

Unconfirmed

Downgrade
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    369         369         369         369         369
Investment           211         211         211         211         211
Speculative          158         158         158         158         158
WAR                  -0.007      -0.007 a    0.007       -0.027 a    -0.023 b
# Positive WAR       183         157         204         156         185
# Negative WAR       186         212         165         213         184
Sign test            0.732       -1.978 b    2.921 a     -2.082 b    0.940
Rank test            0.389       -1.555 c    1.667 b     -4.080 a    -0.008
Mann-Whitney U       -2.204 b    -0.46       -4.500 a    -2.530 a    -1.856 b
Kolmogorov-Smirnov   11.403 a    6.923 b     31.217 a    17.439 a    11.600 a

Watch Down
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    317         317         317         317         317
Investment           233         233         233         233         233
Speculative          84          84          84          84          84
WAR                  -0.025 a    -0.004 b    -0.012 b    -0.017 a    -0.050 a
# Positive WAR       144         144         152         151         138
# Negative WAR       173         173         165         166         179
Sign test            -0.980      -0.980      -0.081      -0.193      -1.655 b
Rank test            -2.158 b    -1.944 b    -0.950      -3.576 a    -3.315 a
Mann-Whitney U       -0.105      -2.486 a    -3.177 a    -2.763 a    -0.139
Kolmogorov-Smirnov   3.701       8.121 b     15.815 a    10.714 a    3.224

Negative Outlook
Window               (-30,-5)    (-1,1)      (5,30)      (-5,5)      (-30,30)
n                    137         137         137         137         137
Investment           86          86          86          86          86
Speculative          51          51          51          51          51
WAR                  -0.013      -0.014 a    0.005       -0.019 a    -0.027 c
# Positive WAR       64          57          69          48          65
# Negative WAR       73          80          68          89          72
Sign test            -0.376      -1.573 c    0.479       -3.112 a    -0.205
Rank test            1.029       -2.380 a    0.622       -2.163 b    0.089
Mann-Whitney U       -0.516      -0.655      -1.588 c    -1.245      -0.868
Kolmogorov-Smirnov   1.883       1.874       3.394       7.007 b     3.194 c

Notes: This table gives an overview of estimated window abnormal returns (WARs) related to specific rating agency announcements. The sample period runs from October 1991 to February 2005. The upper part of the table refers to rating agency announcements that are in accordance with a similar movement in point-in-time credit model ratings (i.e., confirmed). The lower part of the table refers to the remaining cases (i.e., unconfirmed). Table layout is as described in Tables 3 and 5. a, b and c denote significance at the 1, 5 and 10% confidence level, respectively.


    Table 8: Downgrades: Differentiation Based on Credit Quality Changes by ldp Model and Watchlist Resolution

    Confirmed + Watch (1) Confirmed + No Watch (2)

    Window (-30,-5) (-1,1) (5,30) (-5,5) (-30,30) Window (-30,-5) (-1,1) (5,30) (-5,5) (-30,30)

    n 342 342 342 342 342 n 253 253 253 253 253

    Investment 239 239 239 239 239 Investment 91 91 91 91 91

    Speculative 103 103 103 103 103 Speculative 162 162 162 162 162

    WAR -0.023 b -0.009 a 0.057 a -0.006 0.023 c WAR -0.058 a -0.025 a 0.059 a -0.038 a -0.028 c

    # Positive WAR 171 157 224 177 197 # Positive WAR 96 100 168 111 131

    # Negative WAR 171 185 118 165 145 # Negative WAR 157 153 85 142 122

    Sign test 0.559 -0.921 6.164 a 1.194 3.309 a Sign test -3.200 a -2.697 a 5.860 a -1.313 c 1.204

    Rank test -1.425 c -1.935 b 4.008 a -0.099 1.397 c Rank test -3.535 a -2.741 a 2.779 a -2.222 b -1.204

    Unconfirmed + Watch (3) Unconfirmed + No Watch (4)

    Window (-30,-5) (-1,1) (5,30) (-5,5) (-30,30) Window (-30,-5) (-1,1) (5,30) (-5,5) (-30,30)

    n 194 194 194 194 194 n 175 175 175 175 175

    Investment 150 150 150 150 150 Investment 61 61 61 61 61

    Speculative 44 44 44 44 44 Speculative 114 114 114 114 114

    WAR 0.018 a -0.008 a 0.013 c -0.016 a 0.019 b WAR -0.043 a -0.006 c -0.002 -0.039 a -0.078 a

    # Positive WAR 109 79 110 82 112 # Positive WAR 74 78 94 74 73

    # Negative WAR 85 115 84 112 82 # Negative WAR 101 97 81 101 102

Sign test 2.240 b -2.070 b 2.384 a -1.639 c 2.672 a Sign test -1.298 c -0.692 1.731 b -1.298 c -1.449 c

    Rank test 2.377 a -1.602 c 1.671 b -2.414 a 1.824 b Rank test -1.720 b -0.631 0.721 -3.356 a -1.744 b

    (1) versus (3) (2) versus (4)

    Mann-Whitney U -2.400 a -0.345 -2.865 a -2.148 b -0.377 Mann-Whitney U -1.008 -1.019 -3.473 a -1.330 c -1.902 b

    Kolmogorov-Smirnov 15.296 a 6.832 b 17.406 a 10.375 a 6.628 b Kolmogorov-Smirnov 5.064 c 3.240 16.202 a 10.214 a 10.382 a

    (1) versus (2) (3) versus (4)

    Mann-Whitney U -2.627 a -1.082 -0.754 -1.684 b -1.448 c Mann-Whitney U -3.901 a -0.314 -0.716 -1.762 b -3.668 a

    Kolmogorov-Smirnov 10.850 a 4.916 c 2.614 9.674 a 7.940 b Kolmogorov-Smirnov 28.069 a 6.774 b 3.961 17.048 a 22.642 a

    Notes: This table gives an overview of window abnormal returns (WARs) subdivided by credit model confirmation and watchlist precedence. For example, the

    Confirmed + Watch (1) part of the table refers to downgrades that were preceded by a watch for downgrade, and experienced a deteriorating credit model rating

    prior to the downgrade announcement. Table layout is as described in tables 3 and 5. a, b and c denote significance at the 1, 5 and 10% confidence level,

    respectively.


Table 9: Downgrade Announcement WAR: Multivariate Regression Analysis

                     Model 1                  Model 2
Variable             Coeff     Std. Error     Coeff     Std. Error
broad                0.004     (0.006)        0.004     (0.006)
Bound inv/spec       -0.013    (0.015)        -0.009    (0.015)
default              -0.053    (0.037) c      -0.048    (0.035) c
#notch               -0.006    (0.004) c      -0.005    (0.004) c
S&P (-1,1)           -0.027    (0.015) b      -0.024    (0.014) b
S&P (-30,-2)         -0.001    (0.006)        -0.003    (0.006)
WAR (-30,-2)         0.052     (0.033) c      0.049     (0.032) c
ldp                  0.004     (0.005)        0.005     (0.005)
Watch at             -0.032    (0.011) a
Watch at,new         -0.065    (0.020) a
Watch at,old         -0.001    (0.012)
Outlook at           -0.016    (0.007) b
Outlook at,new       -0.018    (0.008) a
Outlook at,old       0.011     (0.031)
Watch period         -0.003    (0.003)
Outlook period       0.004     (0.009)
Watch res,-90        -0.005    (0.007)
Watch res,+90        -0.004    (0.006)
Outlook res,-180     -0.029    (0.027)
Outlook res,+180     -0.031    (0.034)
R-squared            0.080                    0.100
R-squared Adj        0.054                    0.070
F-Stat               3.047     0.000 a        3.379     0.000 a

Notes: The table gives an overview of parameter estimates from multivariate regressions of announcement window abnormal returns (i.e., over (-1,1)) on the variables of interest. The control variables used are: broad, which indicates whether the rating change was across broad rating classes; Bound inv/spec, which indicates whether the company's rating crosses the investment-/speculative-grade boundary; default, equal to 1 if we observed a default within one year after the rating change; #notch, the number of notches downgraded; S&P (t0,t1), which indicates a rating change in a similar direction by Standard & Poor's within (t0,t1); WAR (t0,t1), the abnormal return in the interval (t0,t1). We also include dummies related to broad industry and rating classes. The impact of rating agency signals (i.e., outlooks and watches) and credit model signals is measured by: ldp, credit model rating changes one year prior to downgrades; Watch at (Outlook at), equal to 1 if a company's rating changes but the watch (outlook) was not lifted, or we observed a concomitant new watchlist (outlook) assignment; Watch period (Outlook period), the natural log of one plus the watchlist (outlook) period; Watch res,-90 and Watch res,+90 (Outlook res,-180 and Outlook res,+180), which indicate whether the rating change was a resolution of a watch (outlook) in a similar direction within or beyond 90 (180) days; Watch at,new (Outlook at,new), equal to 1 if a company's rating changes and we observe a concomitant watchlist (outlook) assignment; Watch at,old (Outlook at,old), equal to 1 if watches (outlooks) are not lifted when ratings change. a, b and c denote significance at the 1, 5 and 10% confidence level, respectively.
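To make the regression design concrete, a schematic Python sketch follows; the DataFrame column names are hypothetical stand-ins for the variables described in the notes above, and the paper's full specification (including the industry and rating-class dummies and the Model 1/Model 2 variants) is not reproduced.

import pandas as pd
import statsmodels.api as sm

def downgrade_war_regression(df: pd.DataFrame):
    """df: one row per downgrade; columns are illustrative stand-ins for the Table 9 regressors."""
    y = df["war_m1_p1"]                              # WAR over the (-1,1) announcement window
    X = df[["broad", "bound_inv_spec", "default_1y", "n_notch",
            "sp_m1_p1", "sp_m30_m2", "war_m30_m2", "ldp_change",
            "watch_at_new", "watch_at_old", "outlook_at_new", "outlook_at_old"]]
    X = sm.add_constant(X)                           # add the intercept
    return sm.OLS(y, X).fit()                        # plain OLS, as reported in the table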


Figure 1: Watchlist and Outlooks (1991-2005): Resolution Period

Notes: The figures depict frequency distributions of watchlist and outlook resolution periods (i.e., excluding right censored cases). The sample period runs from October 1991 to February 2005. The numbers on top of the bars refer to the percentage of cases within each bin that experienced a rating change in the intended direction. The number of watches and outlooks with a negative direction is by and large twice as large as the number with a positive direction. Given the differences in sample sizes, the vertical axes are scaled so as to facilitate comparison.


Figure 2: Cumulative Abnormal Returns, Uncontaminated Moody's Announcements (1991-2005)

[Two panels plotting cumulative abnormal returns against trading days (-30 to +30). Left panel: Moody's Negative Rating Announcements, with series for Negative Outlook, Watch Down and Downgrade. Right panel: Moody's Positive Rating Announcements, with series for Positive Outlook, Watch Up and Upgrade.]

Notes: The two graphs depict estimated Cumulative Abnormal Returns (CARs) surrounding rating agency announcements. Observations preceded by an announcement in similar direction are excluded. In the case of watches for downgrade and negative outlooks we also exclude observations with a concomitant rating change at the announcement date.


Figure 3: Cumulative Abnormal Returns, Confirmed versus Unconfirmed Announcements by Moody's (1991-2005)

[Six panels plotting cumulative abnormal returns against trading days (-30 to +30): Moody's Downgrade Confirmed and Unconfirmed, Moody's Watch Down Confirmed and Unconfirmed, and Moody's Negative Outlook Confirmed and Unconfirmed.]

Notes: The graphs depict estimated Cumulative Abnormal Returns (CARs) surrounding rating agency announcements. The sample period runs from October 1991 to February 2005. Left hand side graphs refer to rating agency announcements that are in accordance with a similar movement in point-in-time credit model ratings (i.e., confirmed). Right hand side graphs refer to the remaining cases (i.e., unconfirmed). Corresponding WARs are provided in Table 7.