
RADAR-RAINFALL UNCERTAINTIES: Where Are We after Thirty Years of Effort?

by Witold F. Krajewski, Gabriele Villarini, and James A. Smith

AFFILIATIONS: Krajewski, IIHR-Hydroscience & Engineering, The University of Iowa, Iowa City, Iowa; Villarini and Smith, Department of Civil and Environmental Engineering, Princeton University, Princeton, New Jersey

CORRESPONDING AUTHOR: Witold F. Krajewski, IIHR-Hydroscience & Engineering, The University of Iowa, Iowa City, IA 52242

E-mail: [email protected]

The abstract for this article can be found in this issue, following the table of contents.

DOI: 10.1175/2009BAMS2747.1

In final form 7 September 2009

© 2010 American Meteorological Society

Now is a good time to assess three decades of progress since Jim Wilson and Ed Brandes summarized the operational capability of radar to provide quantitative rainfall estimates with potential applications to hydrology.

    The purpose of this article is to honor Jim Wilson

    and Ed Brandes for their seminal paper (Wilson

and Brandes 1979), "Radar measurement of rainfall: A summary." The work has been frequently

    cited [163 times according to the Institute for Scientific

    Information (ISI) Web of Knowledge as of 7 June

    2009], and it was a comprehensive attempt to summa-

    rize the capabilities of weather radar to provide quan-titative estimates of precipitation, which inspired a

    generation of radar hydrometeorologists in the United

    States and elsewhere. They discussed the numerous

    sources of uncertainties associated with radar-based

    rainfall estimates, including calibration, attenuation,

    bright band, anomalous propagation, beam blockage,

    ground clutter and spurious returns, random errors,

    and variability in the relation between reflectivity

Z and rainfall rate R (Z–R relations). The authors

    also addressed the possible impact of the errors in

    rain gauge measurements of rainfall and sampling

    uncertainties (errors resulting from the approxima-

    tion of an areal estimate using a point measurement).

    In particular, based on contemporary research (e.g.,

Huff 1970; Woodley et al. 1975) concerning the spatial sampling error, Wilson and Brandes (1979) reported

    that it decreases with increasing area size, increasing

    time period, increasing gage density, and increasing

    rainfall amount. Based on more recent research, we

have developed quantitative models that describe how the spatial sampling errors decrease with increasing temporal scale, decreasing spatial scale, and increasing rain gauge network density and rainfall amount (e.g., Ciach and

    Krajewski 1999; Zhang et al. 2007; Villarini et al. 2008;

    Villarini and Krajewski 2008).
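To make the qualitative statement above concrete, the following minimal Python sketch (not part of the original analysis) simulates a spatially correlated rainfall field with an assumed exponential correlation function and shows how the error of a gauge-network estimate of the areal mean shrinks as the number of gauges grows. The grid size, correlation length, and lognormal field model are illustrative assumptions only.

```python
# Hedged Monte Carlo sketch of the point-to-area sampling error discussed above.
# All numbers (grid size, correlation length, field model) are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n = 20                        # grid is n x n pixels; pixel spacing = 1 km (assumed)
corr_length_km = 15.0         # assumed spatial correlation length

# Covariance matrix of a log-rainfall field with exponential spatial correlation.
xy = np.array([(i, j) for i in range(n) for j in range(n)], dtype=float)
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
cov = np.exp(-dist / corr_length_km)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n * n))

def sampling_rmse(num_gauges, n_storms=500):
    """RMSE of the gauge-network estimate of the areal mean rainfall."""
    errors = []
    for _ in range(n_storms):
        field = np.exp(L @ rng.standard_normal(n * n))   # lognormal "rain" field
        areal_mean = field.mean()
        gauges = rng.choice(n * n, size=num_gauges, replace=False)
        errors.append(field[gauges].mean() - areal_mean)
    return np.sqrt(np.mean(np.square(errors)))

for k in (1, 4, 16, 64):
    print(f"{k:3d} gauges over {n * n} km^2 -> sampling RMSE {sampling_rmse(k):.3f}")
```

The printed RMSE decreases monotonically with the number of gauges, which is the qualitative behavior reported by Wilson and Brandes (1979) and quantified by the later error-variance models cited above.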

These uncertainties notwithstanding, Wilson and Brandes foresaw the operational utility of radar-

    rainfall estimation and promoted its use in flash flood

    forecasting, noting that radar can be of lifesaving

    usefulness by alerting forecasters to the potential for

flash flooding.

    In this article, rather than trying to review the

    (sizable) literature of the different methods of radar-

    rainfall estimation and their accompanying sources

    of uncertainties, our goal is to answer the question,

    How much better can we do now versus what was

    done 30 yr ago? To answer this question, we replicate,


    to the highest degree possible, the analysis docu-

    mented in Wilson and Brandes (1979) and report the

results. Despite considerable technological progress in electronics, computing, and communication, the basics of weather radar remain unchanged.

    Because our focus is on recent operational data in

    the United States, we do not assess polarimetric

radar performance (e.g., Zrnić 1996; Petersen et al. 1999; Zrnić and Ryzhkov 1999; Doviak et al. 2000;

    Bringi and Chandrasekar 2001; Brandes et al. 2002,

2004). We limit our analysis to a number of warm-season storms similar to that analyzed by Wilson and Brandes (1979), and we retain the performance statistics they used.

    This paper is organized as follows. In the next

    section, we describe the data and the setup used for

    the analysis. In the Results section, we present the

    results of this study, and in the last section, we sum-

    marize our main points and conclude the paper.

SETUP AND DATA. The Wilson and Brandes (1979) study represented the operational weather radar

    technology of the 1970s and 1980s. In 1971, the U.S.

    National Weather Service (NWS) began the Digitized

    Radar Experiment (D/RADEX) to improve the opera-

    tional use of radar data through computer processing.

    The early stages of D/RADEX involved four sites:

    Kansas City, Missouri, established in August 1971;

Oklahoma City, Oklahoma (October 1971); Fort Worth, Texas (December 1971); and Monett, Missouri

    (February 1972). Several useful hydrometeoro-logical products were developed under the program,

    including echo tops, vertically integrated liquid water

    content, severe weather probability, storm structure,

    and rainfall accumulation. In 1983, the system was

    upgraded to quasi-operational status and renamed the

    Radar Data Processor, version II (RADAP II). As of

    1991, the RADAP II network consisted of 12 sites. Six

    of these sites cover a major portion of the Arkansas

    River basin in the central part of the United States.

    The RADAP II system was designed as a prototype

    of the Next Generation Weather Radar (NEXRAD)system of modern Doppler radars (Heiss et al. 1990;

    Crum and Alberty 1993; Klazura and Imy 1993) and

    served in that role until it ceased operation in 1992.

    The RADAP II system used two types of radar:

    Weather Surveillance Radar-1957 (WSR-57) and

    Weather Surveillance Radar-1974 S band (WSR-74S).

Both share the same basic characteristics of a 2.2° beamwidth and a 10-cm wavelength (S band). The

    RADAP II sites collected base-level and tilt-sequence

    (volumetric) observations of reflectivity every

10–12 min. These observations were built from input

scans of data processed into 180 radials covering 360°

    of azimuth under the radar umbrella. The radials

    were centered on even azimuths, covered a range from

    10 to 126 nautical miles, and contained a data value

    for each nautical mile of range. The data values were

given in 16 (0–15) categories of radar reflectivity.

    In the early 1990s, WSR-57 and WSR-74S radars

were replaced by the NEXRAD network of Weather Surveillance Radar-1988 Doppler (WSR-88D) radars

    (e.g., Crum and Alberty 1993; Crum et al. 1998). The

    first operational WSR-88D was installed in fall of

    1990 in Norman, Oklahoma, while the last one was

    installed in the summer of 1997 in North Webster,

    Indiana. WSR-88D is an S-band radar with Doppler

capability. WSR-88D has a narrower 0.95° beamwidth and collects base-level and tilt-sequence (volumetric) observations of reflectivity every 5–6 or 10–12 min,

    depending on the scanning strategy (e.g., Klazura

    and Imy 1993). Apart from the base data products

    (reflectivity, mean radial velocity, and spectrum

    width), WSR-88D radars use many different algo-

    rithms to convert the base data into several hydro-

    meteorological products (Klazura and Imy 1993). In

    particular, rainfall estimates are generated through

    the Precipitation Processing System (PPS; Fulton et al.

1998). The PPS is a suite of algorithms with more than 40

    adaptable parameters. Five subalgorithms comprise

    the PPS: reflectivity preprocessing, rain-rate conver-

sion, rainfall accumulation, gauge–radar adjustment,

    and rainfall product generation. Over the years, the

PPS has undergone frequent, albeit rather minor, improvements. It has now evolved into a mature and

    robust algorithm that provides the nation with crucial

    precipitation data. The rainfall products generated by

    the PPS include the Digital Precipitation Array (DPA),

    3-h total, Digital Hybrid Reflectivity, and storm total.

    In the summer of 2008, WSR-88D radars began col-

    lecting data in the so-called superresolution, that is, at

0.5° in azimuth and 250 m in range. However, the PPS

    does not utilize this new capability, largely because

    of the arrival of dual polarization [dual-polarization

rain-rate products will be available on a 250 m × 1° polar grid (Istok et al. 2009)].

    Wilson and Brandes (1979) used radar-rainfall

    estimates from the National Severe Storms Labora-

    tory (NSSL) WSR-57 radar located close to Oklahoma

    City. The radar was operated with a sampling interval

    of 5 min. To convert reflectivity into rainfall rate,

the authors used the Marshall–Palmer Z–R relation [Z = 200R^1.6 (Marshall and Palmer 1948; Marshall et al. 1955)]. The radar data were complemented with

    measurements from rain gauges located between 45

    and 100 km from the radar site. Wilson and Brandes


    (1979) compared radar and rain gauge data for

    14 Oklahoma storms with durations ranging from

    2 to 12 h from April to June 1974, plus a storm in

    April 1975.

    We mimic the analysis of Wilson and Brandes

    (1979) as closely as possible. Our analysis is based

    on 20 Oklahoma storms selected from a 6-yr period

from January 1998 to December 2003. The radar data came from the Oklahoma City WSR-88D radar

(KTLX). We processed level II data through Build 4 (newer versions are currently available) of the Open

    Radar Product Generator (ORPG) version of the PPS

    software system used by the NWS and generated

    DPAs. This product represents hourly accumulation

    maps averaged over the Hydrologic Rainfall Analysis

Project (HRAP) grid [approximately 4 km × 4 km pixels (e.g., Reed and Maidment 1999)]. We used the NEXRAD Z–R relation [Z = 300R^1.4; see Fulton et al. (1998)] to convert reflectivity into rainfall rate.

    By using the ORPG, we assured consistency in data

    processing for all storms.
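As an illustration of how such hourly accumulation grids could be turned into the storm totals compared against the gauges below, the following sketch sums hourly maps over an event and reads off the cells collocated with the gauges. The grid, the random "rain," and the gauge-to-cell mapping are stand-ins, not the actual HRAP geometry or the ORPG output format.

```python
# Hedged sketch: accumulate hourly DPA-style grids into storm totals and
# extract the grid cell collocated with each gauge. All values are invented.
import numpy as np

def storm_total_at_gauges(hourly_maps, gauge_cells):
    """hourly_maps: array (n_hours, ny, nx) of hourly accumulations (mm).
    gauge_cells: dict mapping gauge id -> (row, col) of its collocated cell."""
    total = np.asarray(hourly_maps).sum(axis=0)          # storm-total map
    return {gid: float(total[r, c]) for gid, (r, c) in gauge_cells.items()}

# Toy example: 12 hourly maps on a 50 x 50 grid of ~4 km cells, random rain.
rng = np.random.default_rng(1)
maps = rng.gamma(shape=0.3, scale=2.0, size=(12, 50, 50))
print(storm_total_at_gauges(maps, {"mesonet_01": (10, 22), "micronet_07": (31, 8)}))
```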

    Our radar-rainfall estimates are complemented

    by concurrent and collocated rain gauge measure-

    ments from the Oklahoma Mesonet (e.g., Brock et al.

    1995) and U.S. Department of Agriculture (USDA)

    Agricultural Research Service (ARS) Micronet (e.g.,

    Allen and Naney 1991). As shown in Fig. 1, the former

    consists of more than 110 weather stations distributed

    almost uniformly across the state of Oklahoma, with

    an intergauge distance of about 50 km; the Micronet is

a much denser network that consists of 42 rain gauges located between 70 and 100 km from the Oklahoma

    City radar site with an intergauge distance of about

5 km. As in Wilson and Brandes (1979), we include in our analysis only those gauges located between 45 and 100 km from the radar.
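A minimal sketch of this range filter follows, using the haversine great-circle distance. The KTLX coordinates are approximate, and the gauge locations are invented for illustration.

```python
# Hedged sketch: keep only gauges whose great-circle distance from the radar
# lies between 45 and 100 km. Gauge coordinates below are made up.
import math

def haversine_km(lat1, lon1, lat2, lon2, earth_radius_km=6371.0):
    """Great-circle distance between two points given in decimal degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

KTLX_LAT, KTLX_LON = 35.333, -97.278      # approximate Oklahoma City (KTLX) site
gauges = {"gauge_A": (34.75, -97.60), "gauge_B": (35.40, -97.30), "gauge_C": (34.40, -98.10)}

selected = {}
for name, (lat, lon) in gauges.items():
    d = haversine_km(KTLX_LAT, KTLX_LON, lat, lon)
    if 45.0 <= d <= 100.0:                # the 45-100-km annulus used in this study
        selected[name] = round(d, 1)
print(selected)                           # only gauge_A falls inside the annulus
```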

While we made every effort to reproduce the Wilson and Brandes (1979) analysis as closely as possible based on the information reported in their paper (see in particular the Results section of their paper), there are

    some obvious differences between the two studies.

First, the radar technology is different. Our radar data come from the WSR-88D radar (e.g., Crum

    and Alberty 1993; Klazura and Imy 1993), which is

    much more sensitive and has a larger antenna than

    the WSR-57 radar used by Wilson and Brandes.

    Even though both of these radars operate at S band,

the WSR-57 has a 2.2° beamwidth, while the WSR-88D has a 0.95° beamwidth. This means that at 100-km range from the radar, the WSR-57 provides reflectivity over pixels that are about 2 km × 2 km (1 n mi), while the reflectivity data available for the WSR-88D correspond to about 1 km × 1 km pixels. Wilson and Brandes

    (1979) analyzed their rainfall estimates in polar

coordinates with 2° × 1 km resolution (E. Brandes

    2009, personal communication). In our case, the

    radar-rainfall estimates were averaged over the HRAP

    grid. Even though this presumably implies a factor of

    4 ratio in the spatial resolution between the two stud-

    ies, we do not think that this significantly impacts our

    comparisons, because we are working with rainfall

    storm totals. The representativeness of rain gauge

    data with respect to both resolutions can be assessed

    (not included) based on the information collected at

    the Piconet rain gauge network at the Oklahoma City

    Airport and reported by Ciach and Krajewski (2006).

Another difference is the Z–R relationship used in the two studies: the default Z–R relation for the WSR-57 was Marshall–Palmer (Z = 200R^1.6), while we used the NEXRAD Z–R relation (Z = 300R^1.4), which is the most common Z–R relationship used operationally. This

difference in the Z–R relationship might impact the results because there is some evidence (e.g., Villarini 2008; Villarini and Krajewski 2010) that the Marshall–Palmer Z–R relation leads to higher errors in radar-rainfall estimates for a midwestern climate. We did not reprocess the data with the Marshall–Palmer Z–R relation because it would necessitate use of the ORPG, which is exceedingly tedious and cumbersome.
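For illustration, the sketch below inverts the two Z–R relations mentioned above, Marshall–Palmer (Z = 200R^1.6) and the NEXRAD default (Z = 300R^1.4), for a few reflectivity values. It is only the textbook inversion R = (Z/a)^(1/b) and does not reproduce the thresholds and quality control applied by the operational PPS.

```python
# Hedged sketch of the two Z-R conversions: Z (mm^6 m^-3) is recovered from the
# reflectivity factor in dBZ, then Z = a * R**b is inverted for the rain rate R.
def rain_rate_mm_per_h(dbz, a, b):
    """Rain rate R (mm/h) from reflectivity in dBZ for the relation Z = a * R**b."""
    z_linear = 10.0 ** (dbz / 10.0)       # convert dBZ to linear reflectivity factor
    return (z_linear / a) ** (1.0 / b)

for dbz in (20.0, 30.0, 40.0, 50.0):
    r_mp = rain_rate_mm_per_h(dbz, a=200.0, b=1.6)       # Marshall-Palmer
    r_nexrad = rain_rate_mm_per_h(dbz, a=300.0, b=1.4)   # NEXRAD default
    print(f"{dbz:4.0f} dBZ -> Marshall-Palmer {r_mp:6.2f} mm/h, NEXRAD {r_nexrad:6.2f} mm/h")
```

At moderate reflectivities the two relations give similar rain rates, but the differences grow toward the high-reflectivity end, which is one reason the choice of Z–R relation can affect storm-total comparisons.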

    As we mentioned before, we selected 20 storms for

    our study. We defined the beginning of a storm as

the time at which a 1-h rainfall accumulation larger than zero was first detected by any of the rain gauges between 45 and 100 km from the radar; we defined

    the conclusion of the event as the time at which a

prolonged zero-rainfall period began.

Fig. 1. Map with the location of the rain gauges and radar site.

Wilson and Brandes (1979) did not offer their definition of a storm, but we were able to determine that "[s]torm periods began comfortably before any gauge used for comparison experienced rain and ended well after rain stopped at the last site affected." Events with rain

    already in progress were not selected to avoid errors

    associated with gauge timing (E. Brandes 2009, per-

    sonal communication). Thus, even though the details

    of the definition of a storm in these two studies might

differ, we do not believe that this aspect would significantly influence our comparison of the results.
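A minimal sketch of this storm definition is given below. The "prolonged zero-rainfall period" threshold (six dry hours) and the toy data are assumptions made purely for illustration, since the paper does not specify the dry-spell length.

```python
# Hedged sketch of the event definition used here: start at the first hour with
# nonzero 1-h accumulation at any gauge in the 45-100-km annulus; end when a
# prolonged dry spell begins. The dry_hours threshold is an assumption.
import numpy as np

def find_storm(hourly_by_gauge, dry_hours=6):
    """hourly_by_gauge: array (n_gauges, n_hours) of 1-h accumulations (mm).
    Returns (start_hour, end_hour) indices, or None if no rain was observed."""
    any_rain = np.asarray(hourly_by_gauge).max(axis=0) > 0.0   # network-wide wet/dry flag
    wet_hours = np.flatnonzero(any_rain)
    if wet_hours.size == 0:
        return None
    start = wet_hours[0]
    dry_run = 0
    for t in range(start, any_rain.size):
        dry_run = 0 if any_rain[t] else dry_run + 1
        if dry_run >= dry_hours:
            return start, t - dry_hours      # hour just before the dry spell began
    return start, any_rain.size - 1

# Toy 3-gauge, 24-h example (mm); values invented for illustration.
rain = np.zeros((3, 24))
rain[0, 5:9] = [1.2, 4.0, 2.5, 0.3]
rain[2, 6:11] = [0.5, 3.1, 6.2, 1.0, 0.2]
print(find_storm(rain))                      # -> (5, 10)
```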

    RESULTS. In Table 1, we summarize the results of

    our analysis for the 20 storms considered in this study.

    We tried to include only events that occurred between

April and October (the only exceptions were events 1 and 3) to avoid problems associated with rainfall

    measurements by radar and rain gauges during the

    winter months (e.g., Smith et al. 1996; Germann et al.

    2006; Ciach et al. 2007). This is in agreement with

    Table 2 in Wilson and Brandes (1979), where they only

included events that occurred in April–June.

    A larger number of rain gauges were available for

    this study. With the exception of event 6, in which

    only 44 rain gauges were working for the entire storm,

    we used at least 53 rain gauges for each event. On

    average, Wilson and Brandes (1979) used measure-

    ments from 16 rain gauges, with a minimum of 5 and

    a maximum of 22. Comparing the storm durations

    between the two studies, it is evident that our events

    lasted longer than those in Wilson and Brandes

    (1979). In this study, we have events with durations

    ranging from 6 to 52 h (on average, they lasted ap-

proximately 21 h). The events selected by Wilson and Brandes (1979) were much shorter, ranging from 2

    to 12 h, with an average duration of less than 6 h.

Moreover, the rain gauge averages (G̅) for our storms

    tend to be higher than those in Wilson and Brandes

    (1979). It is possible that these differences are due to

the contrasting definitions of a storm in the two studies.

    In column 8 in Table 1, we report the results for

    the same quantities as those in Table 2 in Wilson and

Brandes (1979). In particular, representing with G_i and R_i the storm-total rainfall accumulation values for the ith rain gauge measurement and the corresponding radar-rainfall estimate, we write

G̅ = (1/N) Σ_i G_i,  (1)

R̅ = (1/N) Σ_i R_i,  (2)

E[G/R] = (1/N) Σ_i (G_i/R_i),  (3)

relative dispersion about E[G/R] = σ[G/R]/E[G/R],  (4)

together with the average difference (5) and the average difference with the storm bias removed (6), where N is the number of gauges available during a particular event, the sums run over the N gauges, and σ[G/R] is the standard deviation of the ratios between rain gauge measurements G and radar-rainfall estimates R.
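As a worked illustration of these statistics, the sketch below computes Eqs. (1)–(4) directly from their definitions. Because the formulas behind the average difference (5) and the bias-removed average difference (6) are not reproduced above, the code assumes they are per-gauge relative absolute differences; that choice is our assumption for illustration, not necessarily the published definition.

```python
# Hedged sketch of the Table 1 comparison statistics for one storm.
# Eqs. (1)-(4) follow the definitions in the text; the two "average difference"
# statistics use an assumed per-gauge relative form.
import numpy as np

def gauge_radar_stats(g, r):
    """Paired storm-total accumulations (mm) from N gauges (g) and radar pixels (r)."""
    g = np.asarray(g, dtype=float)
    r = np.asarray(r, dtype=float)
    ratios = g / r
    bias = ratios.mean()                                    # Eq. (3), E[G/R]
    return {
        "G_bar_mm": g.mean(),                               # Eq. (1)
        "R_bar_mm": r.mean(),                               # Eq. (2)
        "E[G/R]": bias,                                     # Eq. (3)
        "rel_disp_pct": 100.0 * ratios.std(ddof=1) / bias,  # Eq. (4)
        "avg_diff_pct": 100.0 * np.mean(np.abs(g - r) / g),             # assumed Eq. (5)
        "avg_diff_bias_removed_pct": 100.0 * np.mean(np.abs(g - bias * r) / g),  # assumed Eq. (6)
    }

# Toy example with invented numbers (not data from the study):
print(gauge_radar_stats(g=[30.0, 45.0, 25.0, 60.0, 38.0],
                        r=[28.0, 50.0, 20.0, 55.0, 42.0]))
```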

    The column with the ratio between rain gauge and

    radar accumulations (column 8) reports values ranging

from 0.58 (overestimation by the radar compared to the rain gauges) to 1.66 (underestimation by the radar

    compared to the rain gauges). Wilson and Brandes

    (1979) found values ranging from 0.41 to 2.41, with a

    more extreme underestimation and overestimation by

    the radar compared to the rain gauges. When averag-

    ing the results from the 20 selected storms, we obtain a

    value of 0.99 compared to a value of 1.04 from Wilson

    and Brandes (1979). Therefore, in general we have an

overall improvement in terms of the average gauge–radar ratios (with a value closer to 1 for the WSR-88D), but

the statistical significance of the difference is difficult to establish within the scope of this study.

    Column 9 summarizes the values of the coefficient

of variation of the gauge–radar ratios. Based on our

    analysis, we obtain values ranging from about 15%

    to 45%, with an average value of 25.5%. Even in this

    case, we observe an overall improvement compared

    to the results presented in Wilson and Brandes (1979):

    their results ranged from 10% to 46%, with an average

    value of 30%. Therefore, we have an overall improve-

    ment on the order of 17% compared to the results

    published 30 yr ago.


    We obtain an even larger improvement when

    comparing the average differences between radar

    and rain gauges (column 10). Overall, we observe

    an average difference of 41.7% between radar and

rain gauges (48.9% considering only events lasting 15 h or less), with values ranging from about 15% to 91%.

    These values are smaller than those reported in

    Wilson and Brandes (1979); on average, the differ-

    ence between radar and rain gauges was 63%, with

    values ranging from 30% to 160%. Therefore, in this

    case, we observe a reduction in the average differ-

    ences between radar and rain gauges on the order

    of 33% with respect to the data analyzed by Wilson

    and Brandes (1979).

Table 1. Summary of the comparisons between radar and rain gauges for the 20 selected storms.

| Event | Date (begin) | Date (end) | Number of gauges | Duration (h) | G̅ (mm) | R̅ (mm) | E[G/R] | Relative dispersion about E[G/R] (%) | Avg diff (%) | Avg diff, E[G/R] removed (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0100 UTC 4 Jan 1998 | 1500 UTC 4 Jan 1998 | 54 | 15 | 60.57 | 45.98 | 1.33 | 15.02 | 23.57 | 11.89 |
| 2 | 1100 UTC 26 Apr 1998 | 1700 UTC 27 Apr 1998 | 53 | 31 | 61.40 | 44.82 | 1.39 | 20.57 | 27.18 | 16.35 |
| 3 | 2200 UTC 11 Mar 1999 | 0000 UTC 13 Mar 1999 | 56 | 27 | 42.15 | 32.49 | 1.38 | 20.55 | 32.29 | 19.93 |
| 4 | 0100 UTC 14 Apr 1999 | 1400 UTC 14 Apr 1999 | 56 | 14 | 28.37 | 18.74 | 1.48 | 17.15 | 30.40 | 14.86 |
| 5 | 0200 UTC 10 May 1999 | 1100 UTC 10 May 1999 | 55 | 10 | 31.17 | 31.21 | 1.04 | 23.28 | 15.94 | 17.24 |
| 6 | 0200 UTC 30 Oct 1999 | 2300 UTC 31 Oct 1999 | 44 | 46 | 55.31 | 52.33 | 1.10 | 23.17 | 16.54 | 17.38 |
| 7 | 0900 UTC 28 Jun 2000 | 1900 UTC 28 Jun 2000 | 55 | 11 | 28.34 | 29.16 | 0.97 | 17.42 | 14.93 | 14.38 |
| 8 | 0800 UTC 22 Jul 2000 | 1700 UTC 22 Jul 2000 | 53 | 10 | 21.46 | 27.33 | 0.81 | 36.54 | 41.29 | 25.09 |
| 9 | 1800 UTC 4 May 2001 | 1400 UTC 5 May 2001 | 56 | 21 | 32.67 | 41.83 | 0.78 | 18.74 | 32.92 | 13.79 |
| 10 | 0200 UTC 28 May 2001 | 1100 UTC 28 May 2001 | 54 | 10 | 31.10 | 40.99 | 0.80 | 27.63 | 39.80 | 24.32 |
| 11 | 1200 UTC 12 Apr 2002 | 2300 UTC 12 Apr 2002 | 55 | 12 | 19.56 | 38.30 | 0.58 | 33.84 | 91.37 | 24.33 |
| 12 | 2100 UTC 4 Jun 2002 | 1600 UTC 5 Jun 2002 | 54 | 20 | 42.45 | 68.58 | 0.62 | 18.61 | 68.06 | 15.23 |
| 13 | 1500 UTC 13 Jun 2002 | 2000 UTC 13 Jun 2002 | 55 | 6 | 18.87 | 29.12 | 0.65 | 21.32 | 61.24 | 17.87 |
| 14 | 0800 UTC 27 Aug 2002 | 1500 UTC 27 Aug 2002 | 54 | 8 | 21.78 | 32.89 | 0.65 | 30.53 | 72.64 | 31.13 |
| 15 | 1300 UTC 8 Sep 2002 | 1800 UTC 9 Sep 2002 | 55 | 30 | 40.20 | 24.80 | 1.66 | 34.12 | 37.32 | 32.98 |
| 16 | 0000 UTC 19 Sep 2002 | 1400 UTC 19 Sep 2002 | 54 | 15 | 27.68 | 44.15 | 0.63 | 25.47 | 69.04 | 22.25 |
| 17 | 0900 UTC 8 Oct 2002 | 2100 UTC 9 Oct 2002 | 56 | 37 | 61.69 | 47.66 | 1.26 | 17.23 | 24.52 | 15.03 |
| 18 | 1100 UTC 4 Jun 2003 | 2100 UTC 5 Jun 2003 | 54 | 35 | 45.58 | 53.34 | 0.90 | 45.48 | 34.41 | 26.01 |
| 19 | 0900 UTC 9 Aug 2003 | 1700 UTC 9 Aug 2003 | 55 | 9 | 23.46 | 28.66 | 0.71 | 39.06 | 77.30 | 44.72 |
| 20 | 2000 UTC 29 Aug 2003 | 2300 UTC 31 Aug 2003 | 55 | 52 | 64.80 | 67.55 | 0.96 | 24.72 | 23.23 | 21.98 |
| Average (20 cases) | | | | | | | 0.99 | 25.5 | 41.7 | 21.3 |


    Finally, we recalculated the average differences

    after correcting for the storm bias (column 11). The

    average differences between radar and rain gauges

    decreased, with values ranging from 11.9% to 44.7%,

and with an average value of 21.3% (26.1% when considering only events lasting 15 h or less). The results

    in Wilson and Brandes (1979) were closer to these,

with values ranging from 8% to 42%, and on average equal to 24%.

    DISCUSSION AND CONCLUSIONS. This

    short paper was written to acknowledge Wilson and

    Brandes for their benchmark paper (Wilson and

    Brandes 1979). Their study can be considered one

    of the first in which uncertainties in radar-rainfall

    estimates were quantitatively summarized and

    discussed. Despite considerable uncertainties in

    radar-rainfall estimates revealed by the study, they

    foresaw the tremendous potential of using radar in

    hydrology. Now, it is hard to even imagine the absence

    of radar-rainfall maps in our everyday personal and

    professional activities.

    Thirty years after their paper was published,

    we sought to answer the following question: How

    much better can we do now? Our analysis confirms

    that we can do significantly better. We found an

    overall reduction in the average differences between

    radar and rain gauges on the order of 33%; we also

    found a reduction in the coefficient of variation of

    the expected value of the ratios between rain gauge

measurements and radar estimates on the order of 17%, and much of this improvement can be associ-

    ated with the improved radar hardware and software

(e.g., smaller beamwidth). These numbers should

    be interpreted in a qualitative sense rather than as

    a new benchmark. There are several other studies

    documented in the literature that have evaluated vari-

    ous aspects of the current technology performance

    (e.g., Smith et al. 1996; Baeck and Smith 1998; Young

    et al. 1999; Brandes et al. 1999; Westrick et al. 1999;

    Seo et al. 2000; Ciach et al. 2007) and cumulatively

provided a much more comprehensive assessment. Efforts continue to improve operational quantita-

    tive precipitation estimation (QPE) products (e.g.,

    Vasiloff et al. 2007), likely resulting in better results

    than those indicated by our simple analysis. In the

    future, it is expected that radar-rainfall uncertainties

    will be reduced even more with the use of polari-

metric radars (e.g., Zrnić 1996; Petersen et al. 1999; Zrnić and Ryzhkov 1999; Bringi and Chandrasekar

    2001; Brandes et al. 2002, 2004), and we hope that in

    30 yr there will be a similar study reporting further

    progress.

    Benchmarking performance of observing systems

    is a critical element necessary for continued progress.

    How can you argue that you could do better if you do

    not know how well you can do? We take this oppor-

    tunity to call for a more systematic approach to our

(hydrologic) community's efforts to monitor its own

    progress. Preserving data and legacy software would

allow long-term reanalysis similar to what has been done in the area of numerical weather prediction modeling (e.g., Kalnay et al. 1996). Developing adequate

    performance measures and evaluation technologies

    should be an integral part of the approach. While

    going back in time would be unfeasible, all-digital

    archives of the present and future should facilitate

    reanalysis and closer monitoring of the progress.

    While trying to reduce uncertainties in radar-

    rainfall estimates, we should also focus on finding

    ways to account for various error sources. In

    particular, we think that empirically supported

    modeling of the total radar-rainfall uncertainties

    warrants future investigations and studies. In the

literature, only a few studies present results about the impact of the total uncertainties, and in the vast majority of cases these models are based on assumptions and educated guesses.

    One key factor in solving the persistent problem

    of radar-rainfall uncertainties is the availability of

    dense rain gauge networks that could provide valu-

    able information for modeling these uncertainties.

    Consequently, networks should be located in different

parts of the United States that are characterized by different topography and climatic conditions. While

    meteorological networks in the United States are

    abundant (e.g., National Research Council 2009),

only a few meet the density and quality requirements needed to support radar-rainfall uncertainty

    studies.

    Despite over 30 yr of effort, the comprehensive

    characterization of uncertainty of radar-rainfall

    estimation has not been achieved. We hope that

    ensemble forecasting (e.g., Schaake et al. 2007)

in both meteorology and hydrology will lead to an intensification of effort, because error covariance

    is needed (e.g., Berenguer and Zawadzki 2008) to

    improve forecasting ability. The celebrated study by

    Wilson and Brandes (1979) will remain a cornerstone

    of this effort.

ACKNOWLEDGMENTS. We appreciate the useful

    comments and clarifications provided by both Jim Wilson

    and Ed Brandes. Partial support for the first author was

    provided by the Rose and Joseph Summers Endowment.

    The second author was supported by NASA Headquarters


    under the Earth Science Fellowship Grant NNX06AF23H

    while he was a graduate student at The University of Iowa.

    The authors also acknowledge discussions over the years

    with Grzegorz Ciach, Dominique Creutin, Dong-Jun Seo,

    and Dave Kitzmiller, among many others.

REFERENCES

Allen, P. B., and J. W. Naney, 1991: Hydrology of the Little Washita River Watershed, Oklahoma: Data and analyses. U.S. Department of Agriculture, Agricultural Research Service Publication ARS-90, 73 pp.

Baeck, M. L., and J. A. Smith, 1998: Rainfall estimation by the WSR-88D for heavy rainfall events. Wea. Forecasting, 13, 416–436.

Berenguer, M., and I. Zawadzki, 2008: A study of the error covariance matrix of radar rainfall estimates in stratiform rain. Wea. Forecasting, 23, 1085–1101.

Brandes, E. A., J. Vivekanandan, and J. W. Wilson, 1999: A comparison of radar reflectivity estimates of rainfall from collocated radars. J. Atmos. Oceanic Technol., 16, 1264–1272.

Brandes, E. A., G. Zhang, and J. Vivekanandan, 2002: Experiments in rainfall estimation with a polarimetric radar in a subtropical environment. J. Appl. Meteor., 41, 674–685.

Brandes, E. A., G. Zhang, and J. Vivekanandan, 2004: Comparison of polarimetric radar drop size distribution retrieval algorithms. J. Atmos. Oceanic Technol., 21, 584–598.

Bringi, V. N., and V. Chandrasekar, 2001: Polarimetric Doppler Weather Radar: Principles and Applications. Cambridge University Press, 634 pp.

Brock, F. V., K. C. Crawford, R. L. Elliott, G. W. Cuperus, S. J. Stadler, H. L. Johnson, and M. D. Eilts, 1995: The Oklahoma Mesonet: A technical overview. J. Atmos. Oceanic Technol., 12, 5–19.

Ciach, G. J., and W. F. Krajewski, 1999: On the estimation of radar rainfall error variance. Adv. Water Resour., 22, 585–595.

Ciach, G. J., and W. F. Krajewski, 2006: Analysis and modeling of spatial correlation structure in small-scale rainfall in Central Oklahoma. Adv. Water Resour., 29, 1450–1463.

Ciach, G. J., W. F. Krajewski, and G. Villarini, 2007: Product-error-driven uncertainty model for probabilistic quantitative precipitation estimation with NEXRAD data. J. Hydrometeor., 8, 1325–1347.

Crum, T. D., and R. L. Alberty, 1993: The WSR-88D and the WSR-88D operational support facility. Bull. Amer. Meteor. Soc., 74, 1669–1687.

Crum, T. D., R. E. Saffle, and J. W. Wilson, 1998: An update on the NEXRAD program and future WSR-88D support to operations. Wea. Forecasting, 13, 253–262.

Doviak, R. J., V. Bringi, A. Ryzhkov, A. Zahrai, and D. Zrnić, 2000: Considerations for polarimetric upgrades to operational WSR-88D radars. J. Atmos. Oceanic Technol., 17, 257–278.

Fulton, R. A., J. P. Breidenbach, D.-J. Seo, D. A. Miller, and T. O'Bannon, 1998: WSR-88D rainfall algorithm. Wea. Forecasting, 13, 377–395.

Germann, U., G. Galli, M. Boscacci, and M. Bolliger, 2006: Radar precipitation measurement in a mountainous region. Quart. J. Roy. Meteor. Soc., 132, 1669–1692.

Heiss, W. H., D. L. McGrew, and D. Sirmans, 1990: Next Generation Weather Radar (WSR-88D). Microwave J., 33, 79–98.

Huff, F. A., 1970: Sampling errors in measurement of mean precipitation. J. Appl. Meteor., 9, 35–44.

Istok, M., and Coauthors, 2009: WSR-88D dual polarization initial operational capabilities. Preprints, 25th Conf. on International Interactive Information and Processing Systems (IIPS) for Meteorology, Oceanography, and Hydrology, Phoenix, AZ, Amer. Meteor. Soc., 15.5. [Available online at http://ams.confex.com/ams/pdfpapers/148927.pdf.]

Kalnay, E., and Coauthors, 1996: The NCEP/NCAR 40-Year Reanalysis Project. Bull. Amer. Meteor. Soc., 77, 437–471.

Klazura, G. E., and D. A. Imy, 1993: A description of the initial set of analysis products available from the NEXRAD WSR-88D system. Bull. Amer. Meteor. Soc., 74, 1293–1311.

Marshall, J. S., and W. M. Palmer, 1948: The distribution of raindrops with size. J. Meteor., 5, 165–166.

Marshall, J. S., W. Hitschfeld, and K. L. S. Gunn, 1955: Advances in radar weather. Advances in Geophysics, Vol. 2, Academic Press, 1–56.

National Research Council, 2009: Observing Weather and Climate from the Ground Up: A Nationwide Network of Networks. National Academies Press, 234 pp.

Petersen, W. A., and Coauthors, 1999: Mesoscale and radar observations of the Fort Collins flash flood of 28 July 1997. Bull. Amer. Meteor. Soc., 80, 191–216.

Reed, S. M., and D. R. Maidment, 1999: Coordinate transformations for using NEXRAD data in GIS-based hydrologic modeling. J. Hydrol. Eng., 4 (2), 174–182.

Schaake, J. C., T. M. Hamill, R. Buizza, and M. Clark, 2007: HEPEX: The Hydrological Ensemble Prediction Experiment. Bull. Amer. Meteor. Soc., 88, 1541–1547.

Seo, D.-J., J. Breidenbach, R. Fulton, D. Miller, and T. O'Bannon, 2000: Real-time adjustment of range-dependent biases in WSR-88D rainfall estimates due to nonuniform vertical profile of reflectivity. J. Hydrometeor., 1, 222–240.

Smith, J. A., D.-J. Seo, M. Baeck, and M. Hudlow, 1996: An intercomparison study of NEXRAD precipitation estimates. Water Resour. Res., 32, 2035–2045.

Vasiloff, S. V., and Coauthors, 2007: Improving QPE and very short-term QPF: An initiative for a community-wide integrated approach. Bull. Amer. Meteor. Soc., 88, 1899–1911.

Villarini, G., 2008: Empirically-based modeling of radar-rainfall uncertainties. Ph.D. thesis, The University of Iowa, 321 pp.

Villarini, G., and W. F. Krajewski, 2008: Empirically-based modeling of spatial sampling uncertainties associated with rainfall measurements by rain gauges. Adv. Water Resour., 31, 1015–1023.

Villarini, G., and W. F. Krajewski, 2010: Sensitivity studies of the models of radar-rainfall uncertainties. J. Appl. Meteor. Climatol., in press, doi:10.1175/2009JAMC2188.1.

Villarini, G., P. V. Mandapaka, W. F. Krajewski, and R. J. Moore, 2008: Rainfall and sampling errors: A rain gauge perspective. J. Geophys. Res., 113, D11102, doi:10.1029/2007JD009214.

Westrick, K. J., C. F. Mass, and B. A. Colle, 1999: The limitations of the WSR-88D radar network for quantitative precipitation measurement over the coastal United States. Bull. Amer. Meteor. Soc., 80, 2289–2298.

Wilson, J. W., and E. A. Brandes, 1979: Radar measurement of rainfall: A summary. Bull. Amer. Meteor. Soc., 60, 1048–1058.

Woodley, W., A. Olsen, A. Herndon, and V. Wiggert, 1975: Comparison of gage and radar methods of convective rain measurement. J. Appl. Meteor., 14, 909–928.

Young, C. B., B. R. Nelson, A. A. Bradley, J. A. Smith, C. D. Peters-Lidard, A. Kruger, and M. L. Baeck, 1999: An evaluation of NEXRAD precipitation estimates in complex terrain. J. Geophys. Res., 104, 19,691–19,703.

Zhang, Y., T. Adams, and J. V. Bonta, 2007: Subpixel-scale rainfall variability and the effects on the separation of radar and gauge rainfall errors. J. Hydrometeor., 8, 1348–1363.

Zrnić, D. S., 1996: Weather radar polarimetry: Trends toward operational applications. Bull. Amer. Meteor. Soc., 77, 1529–1534.

Zrnić, D. S., and A. V. Ryzhkov, 1999: Polarimetry for weather surveillance radars. Bull. Amer. Meteor. Soc., 80, 389–406.
