
Quantitative techniques in competition analysis

Prepared for the Office of Fair Trading
by LECG Ltd

October 1999

Research paper 17
OFT 266

LECG Ltd provides sophisticated economic and financial analysis, public policy analysis, litigation support and strategic management consulting. It is a subsidiary of Navigant Consulting Inc, a US-based company with offices in the United States, Canada, New Zealand, Argentina and Europe.

This report was prepared by

Thomas Hoehn
James Langenfeld
Meloria Meschi
Leonard Waverman

based at LECG Ltd

London office:
40-43 Chancery Lane
London WC2A 1SL
England
tel: 0171-269 0500
fax: 0171-269 0515

Brussels office:
Stéphanie Square Centre
Avenue Louise 65
1050 Brussels
Belgium
tel: 00 322-534 5545
fax: 00 322-535 7700

Evanston office:
1603 Orrington Ave
Suite 1500
Evanston, Illinois 60201
USA
tel: 001-847 475 1566
fax: 001-847 475 1031

Please note any views expressed in this paper are those of the authors: they do not necessarily reflect the views of the Director General of Fair Trading.


QUANTITATIVE TECHNIQUES IN COMPETITION ANALYSIS

PREFACE

This paper is the 17th of a series of research papers (listed overleaf) to be published by the Office of Fair Trading. These papers report the findings of projects commissioned by the OFT as part of its ongoing programme of research into aspects of UK Competition and Consumer Policy. The intention is that research findings should be made available to a wider audience of practitioners, both for information and as a basis for discussion. Any views expressed in this paper are those of the authors: they do not necessarily reflect the views of the Director General of Fair Trading. This report is not and should not be treated as a guideline issued as a consequence of the Director General’s obligation to publish general advice and information under the Competition Act 1998.

Comments on the paper should be sent to me, at the address shown below. Research proposals on other aspects of UK Competition and Consumer Policy would also be welcomed. Requests for additional copies of this paper (or copies of earlier papers in this series) should, however, be sent to the address shown on page 2.

Peter Bamford
Chief Economist
Office of Fair Trading
Chancery House
53 Chancery Lane
London WC2A 1SP


OFFICE OF FAIR TRADING RESEARCH PAPERS

1 Market Definition in UK Competition Policy, National Economic Research Association, February 1993

2 Barriers to Entry and Exit in UK Competition Policy, London Economics, March 1994

3 Packaged Mortgages: Results of Consumer Surveys, Research Surveys of Great Britain, June 1994

4 Consumers’ Appreciation of Annual Percentage Rates - Taylor Nelson AGB survey results, June 1994

5 Predatory Behaviour in UK Competition Policy, Geoffrey Myers, November 1994

6 Underwriting of Rights Issues: a study of the returns earned by sub-underwriters from UK rights issues, Paul Marsh, November 1994

7 Transparency and Liquidity: a study of large trades on the London Stock Exchange under different publication rules, Gordon Gemmill, November 1994

8 Gambling, Competitions and Prize Draws - Taylor Nelson AGB survey results, September 1996

9 Consumer dissatisfaction - Taylor Nelson AGB survey results, December 1996

10 The Assessment of Profitability by Competition Authorities, Martin Graham and Anthony Steele, February 1997

11 Consumer detriment under conditions of imperfect information, London Economics, August 1997

12 Vertical Restraints and Competition Policy, Paul W Dobson and Michael Waterson, December 1996

13 Competition in retailing, London Economics, September 1997

14 The effectiveness of undertakings in the bus industry, National Economic Research Associates, December 1997

15 Vulnerable Consumer Groups: Quantification and Analysis, Ramil Burden, April 1998

16 The Welfare Consequences of the Exercise of Buyer Power, Paul Dobson, Michael Waterson and Alex Chu, September 1998

Copies of these papers are available, free of charge, from:

Office of Fair Trading, PO Box 366, Hayes UB3 1XB
Tel: 0870 60 60 321, Fax: 0870 60 70 321, e-mail: [email protected]


CONTENTS

Preface 1
Office of Fair Trading Research Papers 2
List of Abbreviations 4

PART I: INTRODUCTION AND OVERVIEW
1 Introduction 5
2 How quantitative techniques support antitrust analysis in practice 11

PART II: STATISTICAL TESTS OF PRICES AND PRICE TRENDS
3 Cross-sectional price tests 43
4 Hedonic price analysis 49
5 Price correlation 53
6 Speed of adjustment test 57
7 Causality tests 59
8 Dynamic price regressions and co-integration analysis 63

PART III: DEMAND ANALYSIS
9 Residual demand analysis 69
10 Critical loss analysis 77
11 Import penetration tests 81
12 Survey techniques 83

PART IV: MODELS OF COMPETITION
13 Price-concentration studies 87
14 Analysis of differentiated products: the diversion ratio 93
15 Analysis of differentiated products: estimation of demand systems 97
16 Bidding studies 103

PART V: OTHER TECHNIQUES AND CONCLUSIONS
17 Time series event studies of stock markets’ reactions to news 107
18 Conclusions 109

Appendix
A Glossary of terms 117
B Bibliography 127
C List of case summaries 137


LIST OF ABBREVIATIONS

Below are the full versions of those abbreviations which occur regularly in the text.

For an explanation of some common words or phrases please refer to Appendix A.

BEUC Bureau Européen des Unions de Consommateurs

- also known as the European Consumers’ Organisation

DGFT Director General of Fair Trading

DOJ United States’ Department of Justice

ECJ European Court of Justice

FTC United States’ Federal Trade Commission

HHI Herfindahl-Hirschman Index

IIAA Independence of Irrelevant Alternatives Assumption

IFS Institute for Fiscal Studies

IO industrial organisation

MMC Monopolies and Mergers Commission

OLS Ordinary Least Squares (see also glossary)

SAS a dedicated statistical package

SPSS a dedicated statistical package


PART I: INTRODUCTION AND OVERVIEW

1 INTRODUCTION

Background to the study

1.1 LECG Ltd has been commissioned by the Office of Fair Trading (OFT) to undertake a study of the main quantitative techniques used in competition, that is, antitrust, analysis.

1.2 This project is one of a series of studies the OFT has commissioned on issues relevant to UK competition policy. The purpose of these studies is to stimulate debate among a wider audience of antitrust practitioners. These studies do not necessarily reflect the views of the Director General of Fair Trading (DGFT). In particular, this report should not be taken as providing general advice and information about the way the Director General expects competition policy to operate. Other related past studies include Market Definition in UK Competition Policy by National Economic Research Associates, 1993, and Barriers to Entry and Exit in UK Competition Policy by London Economics, 1994.

1.3 Over recent years, the use of quantitative analysis in antitrust has increased for a variety of reasons. These reasons include the development of modern and fairly reliable quantitative techniques, advancements in user-friendly software and cheap hardware, availability of more and better data and, not least, an increasing use of economists and economic evidence by antitrust authorities and the companies concerned. None of the previous OFT studies has dealt specifically with the range of quantitative techniques used in competition policy cases. This project therefore fills an important gap in the series of research papers published by the OFT.

Antitrust legislation in the UK

1.4 Competition policy in the UK is conducted within the framework of certain pieces of legislation. The main areas of antitrust that fall under the jurisdiction of the OFT and/or the Competition Commission - formerly the Monopolies and Mergers Commission (MMC) - are:


• monopoly and abuse of dominant position;
• mergers; and
• agreements between firms (vertical and horizontal).

1.5 Until 1998, UK policy in these areas of antitrust was covered by four main pieces of legislation. Mergers are governed by the Fair Trading Act 1973, which together with the Competition Act 1980 also covers monopolies and anti-competitive practices. Agreements between firms could be investigated as part of monopoly enquiries under the Fair Trading Act or under the Restrictive Trade Practices Act 1976 and the Resale Prices Act 1976.

1.6 Competition policy legislation in the UK is currently, however, in the process of change with the introduction of the Competition Act 1998. The Competition Act 1998 replaces or amends much of the above legislation, notably the Restrictive Trade Practices Act, the Resale Prices Act, and the majority of the Competition Act 1980. Some parts of the previous legislation remain unchanged, such as the provisions for dealing with mergers under the Fair Trading Act.

1.7 The new legislation introduces two prohibitions: one of agreements (whether written or not) which prevent, restrict or distort competition and may affect trade within the UK; the other of conduct by dominant companies which amounts to an abuse of their position in a market in the UK. The two prohibitions come into force on 1 March 2000. The prohibitions in the Competition Act are based on Articles 85 and 86 of the EC Treaty.[1] The Competition Act gives the DGFT powers to investigate undertakings believed to be involved in anti-competitive activities, and to impose financial penalties where appropriate.

1.8 The Competition Act is applied and enforced by the DGFT and, in relation to the regulated utility sectors, concurrently with the regulators for telecommunications, gas, electricity, water and sewerage and railway services. A new Competition Commission, incorporating the former MMC, has been established which hears appeals.

[Footnote 1] As of May 1 1999, Articles 85 and 86 have been renumbered as Articles 81 and 82 under the Treaty of Amsterdam. This report however generally refers to the prohibitions as Articles 85 and 86.

[Footnote 2] The Competition Act 1998: OFT 400, The Major Provisions; OFT 401, The Chapter I Prohibition; OFT 402, The Chapter II Prohibition; OFT 403, Market Definition; OFT 404, Powers of Investigation; OFT 405, Concurrent Application to Regulated Industries; OFT 406, Transitional Arrangements; OFT 407, Enforcement; OFT 408, Trade Associations, Professions and Self-Regulating Bodies.

[Footnote 3] These approaches may not always be appropriate under the Competition Act 1998.


1.9 In March 1999, the DGFT published a series of guidelines under the Competition Act[2] setting out general advice and information about the application and enforcement of the prohibitions. As noted earlier this report is an independent piece of economic research and, as such, does not constitute a guideline under the Act.

Antitrust issues requiring quantitative techniques

1.10 The application of quantitative techniques to antitrust has arisen naturally from the need to answer the central questions of antitrust analysis, many of which may involve quantification, as these examples make clear:[3]

• Market definition: What products, geographic areas, and suppliers/buyers form part of the relevant market?

• Market structure issues: How should one measure concentration, market shares, entry barriers and exit conditions?

• Pricing issues: Are movements in market prices consistent with competition, with monopoly or with collusion? Do we observe prices in one geographic market that are significantly higher than in others? Are price-cost relationships consistent with predatory pricing?

• Other behavioural issues: To what extent do leading firms’ non-price strategies, for example, on matters such as supply constraints, distribution agreements, investment, advertising or patent licensing, lessen competition or improve industry performance? Where both effects exist, which is more significant? Do variations in cost efficiency explain profit variations?

• Vertical issues: To what extent does vertical integration or contracting (such as tied pubs or other exclusive arrangements) by leading firms reduce competition and/or yield efficiencies?

• Special merger issues: How much might the merger concerned change pricing and other market behaviour, either by lessening competition or by promoting efficiency?

• Potential entry and competitive expansion: How responsive are both potential entrants and competing fringe firms to increased prices or margins in the relevant market?

1.11 Each of these areas provides scope for some degree of quantitative analysis; however not all such analyses need to use complex formal mathematical or statistical techniques. It should be noted that quantitative analysis interacts with qualitative analysis in a complex way. Rarely will quantitative techniques and analysis alone decide matters, though they can provide very valuable evidence in cases. It should be stressed that the weighing and sifting of evidence will always involve expert judgement on the part of the competition authorities.

Classification of quantitative techniques

1.12 The techniques outlined in this review are those designed to test an economic hypothesis, to the exclusion of exploratory data analysis. The menu of selected techniques below ranges from uncomplicated descriptive statistics (for example, average price levels, and sales and price trends) to advanced econometric estimation of demand and supply functions.

Statistical tests of prices and price trends (Part II)

• Cross-sectional price test
• Hedonic price analysis
• Price correlation
• Speed of adjustment test
• Causality test
• Dynamic price regressions and co-integration analysis

Demand analysis (Part III)

• Residual demand analysis
• Critical loss analysis
• Import penetration tests
• Survey techniques


Models of competition (Part IV)

• Price-concentration studies
• Analysis of differentiated products
• Bidding studies

1.13 Other techniques not covered by this report in detail are:

• analysis of profitability;
• analysis of acquisition price;
• time series event studies of stock markets’ reactions to news;
• econometric and Data Envelopment Analysis (DEA) of relative efficiency;
• cost analysis; and
• cluster analysis, discriminant analysis, factor analysis etc.

Aim and structure of this report

1.14 This study reviews the application of quantitative techniques from both a technical and practical perspective. Each technique, or major group of techniques, is summarised in terms of its main elements. At the same time, care is taken to put these techniques into context and provide an overview of their uses.

1.15 Each technique is discussed under three headings. First, each statistical test is described briefly. Secondly, comments on data requirements and ease of computation are added. Thirdly, the technique is discussed in terms of problems of interpretation. The latter heading is vital, as it is the interpretation of statistical relationships that is crucial in antitrust cases. Economic significance can be different from statistical significance.

1.16 Before dealing with the various tests we also present an overview in Chapter 2 of the general uses and applications of quantitative techniques in US, UK and EU competition policy. This overview places the techniques that are examined in later chapters into a broader context and offers an illustration of the competition issues that typically require quantification using statistical and econometric tests.

1.17 Throughout the report we include case studies that illustrate the applications of quantitative techniques. These case studies often deal with more than one technique and so should be read within the context of the entire report. However, they do illustrate how quantitative questions can be at the core of a case and its outcomes.


1.18 In the following six chapters, we describe statistical techniques that analyse price only: price tests, covering cross-sectional statistical comparisons (Chapter 3); hedonic price analysis (Chapter 4); and time series price comparisons (Chapters 5 to 8). Next, we examine quantitative techniques that are more closely connected to economic theory, and are used in antitrust analysis for market definition and for the analysis of demand: residual demand analysis and critical loss analysis (Chapters 9 and 10); import penetration tests (Chapter 11); and survey techniques (Chapter 12). In the final chapters, we describe those techniques that are used to estimate or simulate models of competition in order to detect anti-competitive behaviour: price-concentration studies (Chapter 13); analysis of differentiated product markets using the diversion ratio (Chapter 14); analysis of differentiated product markets using an estimation of demand systems (Chapter 15); bidding techniques (Chapter 16); and time series event studies (Chapter 17).


2 HOW QUANTITATIVE TECHNIQUES SUPPORT ANTITRUST ANALYSIS IN PRACTICE

2.1 This chapter reviews the use of quantitative techniques in the context of antitrust analysis and draws on the application of these techniques to real life cases in the UK, US and EC. While this cannot be comprehensive, the aim is to illustrate the central role that quantitative questions and techniques can play in casework. It is not the intention in this chapter to go into each and every detail of how a specific technique is employed. The reader who wants to have more detailed information about a specific technique can turn to later pages of the report where fuller explanations can be found.

2.2 The analysis of mergers, restrictive agreements and abuses of a dominant position have all followed a similar analytical approach. This approach has involved a number of steps:

Step 1 Identification of the firms concerned; preliminary analysis of their activities; determination of relevant jurisdiction

Step 2 Definition of the affected markets in their product and geographic dimension, leading to an assessment of the position of the firms in the affected markets

Step 3 Assessment of any potential adverse effects on competition of the alleged restrictive or abusive behaviour, or the proposed merger

Step 4 Assessment of possible efficiency defences and other relevant public interest justifications.

2.3 Each step may involve a degree of quantification. Step 1 requires the description of the activities of a firm principally through the use of financial indicators. Quantitative techniques that are essentially economic and of a minimum degree of technical sophistication have more often been used in Steps 2 and 3. Step 2 dominates merger proceedings, where the assessment of the competitive effect of an acquisition of a competitor is highly dependent on the assessment of the change in concentration in the affected markets. In monopoly situations Steps 2 and 3 have generally been given equal importance as both a monopoly and its abuse must be found, while restrictive agreements or practices focussed more on Steps 3 and 4. In general, the more contentious and complex the case, the more sophisticated the techniques that have been applied.

[Footnote 4] For an overview see Doern, G.B., 1996.

[Footnote 5] Commission Notice on the definition of the relevant market for the purposes of Community competition law, OJ 1997 C372/5.

[Footnote 6] Supra footnote 5.


2.4 The use of quantitative techniques has differed from country to country and, in some instances, between different authorities within the same country. In the US, antitrust authorities and the courts have a longer tradition of relying on economic analysis and empirical verification. This is partly due to the increased influence of economists in the Department of Justice, which became noticeable during the 1970s, but is also due to the more litigious nature of US antitrust policy which is very demanding in terms of supporting economic and factual evidence. Expert testimony is more often required in a litigation setting where the adversarial process pitches expert against expert and where each party tries to expose the weakness of the other parties’ arguments and evidence. An investigative procedure poses different demands on the parties involved and does not allow them to influence the investigative process as much. On the contrary it is the investigating authority which drives the process and this is typically the case in Europe (MMC, the European Commission’s DGIV, etc).[4]

2.5 European antitrust authorities have only more recently paid more explicit attention to empirical evidence provided by economists. The introduction of the European Merger Regulation 4064/89 has arguably provoked a major change in the use of economics and expert economic evidence, culminating in the recent Commission Notice on market definition,[5] which explicitly advocates the use of quantitative techniques to provide evidence of demand substitution:

‘There are a number of quantitative tests that have specifically been designed for the purpose of delineating markets. These tests consist of various econometric and statistical approaches: estimates of elasticities and cross-price elasticities for the demand of a product, tests based on similarity of price movements over time, the analysis of causality between price series and similarity of price levels and/or their convergence. The Commission takes into account the available quantitative evidence capable of withstanding rigorous scrutiny for the purposes of establishing patterns of substitution in the past.’[6]

2.6 Four basic applications of quantitative analysis in antitrust are examined in the rest of this chapter:

• the determination of relevant antitrust markets;
• the analysis of market structure;
• the analysis of competition, in particular the analysis of pricing behaviour; and
• the analysis of efficiency effects.

[Footnote 7] Department of Justice and Federal Trade Commission, 1992, Horizontal Merger Guidelines, Antitrust and Trade Regulation Report, 69(1559), Washington D.C. For a more thorough discussion on the application of the guidelines see Langenfeld, J., 1996, The Merger Guidelines as Applied, in Coate, M. and A. Kleit, eds., The Economics of the Antitrust Process. An interesting discussion on the evolution of US merger policy with respect to market definition can be found in Lande, R. and J. Langenfeld, 1997, From Surrogates to Stories: The Evolution of Federal Merger Policy, Antitrust Magazine, Spring: 5-9.

The delineation of markets

2.7 Because of the increased importance of quantitative analysis for defining antitrust markets it is worth spelling out in more detail the type of empirical evidence that is required to establish the extent of demand-side and supply-side substitution, these being the key criteria for defining a relevant antitrust market.

2.8 The most well known and largely accepted method used by competition authorities in many countries is the hypothetical monopolist, or cartel, test. This test seeks to identify the smallest set of products and producers (containing the product under investigation), where a hypothetical monopolist or cartel, controlling the supply of all the products in that set, could increase profits by instituting a small, but appreciable, permanent increase in price over the competitive level. This is also known as the SSNIP (Small but Significant, Non-transitory Increase in Price) test. The underlying approach can be applied to geographic market identification as well as to product market identification.
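To make the profitability logic of the SSNIP test concrete, the short sketch below has been added to this text as an illustrative aid; it is not part of the original report. The price, unit cost, quantity, the 5% rise and the constant-elasticity demand response are all invented assumptions.

    # Illustrative sketch only: a stylised SSNIP profitability check for a
    # hypothetical monopolist. All figures are invented for illustration.

    def ssnip_profitable(price, unit_cost, quantity, price_rise=0.05, own_elasticity=-2.0):
        """Return True if the price rise raises the hypothetical monopolist's profit.

        A constant own-price elasticity is assumed, so the volume lost is
        approximated as quantity * |elasticity| * price_rise.
        """
        new_price = price * (1 + price_rise)
        new_quantity = quantity * (1 + own_elasticity * price_rise)
        return (new_price - unit_cost) * new_quantity > (price - unit_cost) * quantity

    # With a 40% margin, a 5% rise pays if demand is fairly inelastic (-2) ...
    print(ssnip_profitable(price=1.00, unit_cost=0.60, quantity=1000, own_elasticity=-2.0))  # True
    # ... but not if customers switch away more readily (-3), which would point
    # to widening the candidate market and repeating the test.
    print(ssnip_profitable(price=1.00, unit_cost=0.60, quantity=1000, own_elasticity=-3.0))  # False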

2.9 This approach was pioneered in the USA where the competition authorities - primarily the Department of Justice (DOJ) and the Federal Trade Commission (FTC) - first set out these principles in the 1984 Horizontal Merger Guidelines, which have since been revised several times.[7] This approach has also been set out by the European Commission in its Notice on market definition.[8] In the UK, the OFT has recently referred to the use of this approach in the guideline Market Definition (reference: OFT403) issued under the Competition Act 1998.

[Footnote 8] In its Notice on market definition the European Commission says: ‘The question to be answered is whether the parties’ customers would switch to readily available substitutes or to suppliers located elsewhere in response to a hypothetical small (in the region of 5-10%) permanent relative price increase in the products and areas being considered. If substitution would be enough to make the price increase unprofitable because of the resulting loss of sales, additional substitutes and areas are included in the relevant market. This would be done until the set of products and geographic areas is such that small permanent increases in relative prices would be profitable.’ This Notice is also referred to in paragraph 2.5.

[Footnote 9] The cost here does not have to be in monetary terms, for example, if a consumer has to take a bus three hours earlier than his normal time to be able to switch to another operator, it is a cost to this customer.

2.10 In the past, MMC reports have not always set out a rigorous definition of the relevant market. There are of course exceptions such as the London Clubs International/Capital Corporation merger report (1997) where the SSNIP test is used to define the relevant market.

2.11 The correct definition of the relevant antitrust market is an important feature of an accurate competition analysis. A too narrowly defined market can lead to unnecessary competition concerns, and on the other hand, a too widely defined market may disguise real competition problems. This will certainly be the case if too much emphasis is placed on the market share arising from an ‘incorrect’ market definition.

Demand-side substitution

2.12 Analysis of demand-side substitution focuses on what substitutes exist for buyers and whether enough customers would switch, in the event of a price increase, without incurring a cost,[9] to constrain the behaviour of suppliers of the products in question. This is an essentially empirical question.

2.13 Substitutes do not have to be identical products to be included in the same market. Indeed most products and services today are differentiated products. Nor do product prices have to be identical. For example, if two products serve the same purpose, but one is of a different specification, perhaps a higher quality, they might still be in the same market, as long as customers prefer it due to a higher price-quality ratio. For example, a Mercedes-Benz may last 400,000 kilometres, but a luxury Ford may last 200,000 kilometres. If the two cars were the same except for the life of the car, then one would not expect their prices to be the same. If the price of the Mercedes were to increase, its cost per kilometre driven would then be greater than the Ford’s and some consumers would switch to the Ford. In addition, products do not have to be direct substitutes to be included in the same market. There may be a chain of substitution between them.[10]

[Footnote 10] See diversion analysis below, for a more detailed discussion of this concept.

[Footnote 11] The profit change is not simply the result of demand side changes but depends also on the way lower output, arising as a result of the increased price, affects costs.

[Footnote 12] The European Commission is known to take evidence of demand-side substitution in the range of 10-20% very seriously.

[Footnote 13] The MMC investigated the relevance of switching costs in their report on Video Games (1995) and undertook a similar analysis in the context of Telephone Number Portability (1995).

[Footnote 14] Cfr. DOJ-FTC, 1992, Horizontal Merger Guidelines, op. cit., paragraph 2.9.

2.14 Moreover, it is not necessary for all consumers, or even the majority, to switch actively to substitute products for the products still to be regarded as substitutes and in the same market. The important factor is whether the number of customers likely to switch is large enough to prevent a hypothetical monopolist maintaining prices above competitive levels. In fact if a 10% price increase were to lead to as little as 10-20% of customers switching to substitute products the benefit of the price increase would be lost and it would be unprofitable for the company to make the price increase.[11] The behaviour of so-called ‘marginal consumers’ who are most likely to switch keeps prices competitive not only for themselves but also for other consumers who are not able to switch, assuming that suppliers cannot price discriminate among customer groups. Clearly the stronger the evidence that consumers would switch, the less likely it is that a particular product or group of products is in a market on its own.[12]
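The arithmetic behind this point can be made explicit with a stylised calculation added to this text for illustration, using invented figures: whether a price rise pays depends on comparing the extra margin on retained sales with the margin lost on customers who switch, which is the point footnote 11 makes about costs.

    # Stylised illustration with invented numbers: the profit effect of a 10%
    # price rise when a given share of customers switches away. Unit cost is
    # assumed constant, so cost savings on lost output are reflected in the
    # margin calculation.

    def profit(price, unit_cost, quantity):
        return (price - unit_cost) * quantity

    base_price, unit_cost, base_quantity = 1.00, 0.50, 1000

    for share_switching in (0.10, 0.20):
        retained = base_quantity * (1 - share_switching)
        change = profit(base_price * 1.10, unit_cost, retained) - profit(base_price, unit_cost, base_quantity)
        print(f"{share_switching:.0%} switch away: profit change {change:+.0f}")

    # With a 50% margin, a 10% loss of customers still leaves the rise profitable
    # (+40) while a 20% loss makes it unprofitable (-20); the break-even loss of
    # sales here is 0.10 / (0.10 + 0.50), roughly 17%.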

2.15 The costs of switching can, however, be very important for customers. For example, changing from electric to gas heating, following a fall in the price of gas, may involve a substantial amount of investment in new equipment. Another example of a market where switching costs can be significant is the market for video games. Here consumers are faced with video games developed around different hardware – the PC, or the console – giving rise to switching costs for consumers in the video games market.[13] In the presence of switching costs there may be a large gap between short and long run demand substitution.

Supply-side substitution

2.16 In the absence of demand-side substitution market power may still be constrained by supply-side substitution.[14] Supply-side substitution occurs where suppliers are able to respond rapidly to small but permanent changes in relative prices by switching production to the relevant products, without incurring significant additional costs or risks. In these circumstances, the potential for supply-side substitution will have a similar disciplinary effect to demand-side substitution on the competitive behaviour of the companies involved.

[Footnote 15] European Commission, 1992, Case IV/M.0291, 1992, Torras/Sarrio.

[Footnote 16] European Commission, 1973, Case 6/72, ECR 215, Europemballage Corpn and Continental Can Co Inc.

[Footnote 17] The same factors apply to the analysis of consumer switching costs for the assessment of demand-side substitutability.

2.17 As with demand-side substitution, supply-side substitution needs to be relatively quick, for without speed its effectiveness in constraining current market power is reduced. It is a matter of opinion how quickly supply-side substitution should take place, to distinguish it from entry, but it is often set by competition authorities to within a year.

2.18 An example of this is the supply of paper used in publishing.[15] Paper is produced in various grades dependent on the coating used. From a customer’s point of view these types of paper are not viewed as substitutes. However, these grades are produced with the same plant and raw materials so it is relatively easy for manufacturers to switch production between different grades. If a hypothetical monopolist in one grade of paper tried to set prices above competitive levels, manufacturers currently producing other grades could start to supply this grade.

2.19 Analysing short run supply-side substitution raises similar issues to the consideration of barriers to entry. Both are concerned with establishing whether firms will be able to begin supplying a product in competition with another existing firm. The distinction is only one of timing, that is, the speed of set-up.

2.20 The European Commission now makes explicit reference to short run supply-side substitution as a factor that should be considered in defining markets. This reflects the European Court of Justice’s judgement in Continental Can,[16] which was critical of the failure by the Commission to include supply-side substitutes within the market.

2.21 The types of evidence to be used in an assessment of supply-side substitution include the following:[17]

• systematic analysis of firms that have started or stopped producing the products in question;

• the time required to begin supplying the products in question;

• enquiries of potential suppliers to see if substitution is possible (even if the potential supplier currently has no plans to enter the market) and at what cost;

• enquiries of firms might be included to determine whether existing capacity is tied up, perhaps because of long term contracts;

• the views of customers - in particular, their views on whether they would switch to the new supply, and whether the costs of switching were prohibitive; and

• an evaluation of the ‘sunk costs’ of switching, to see if potential suppliers can begin producing the products in question without risking substantial investment.

[Footnote 18] Nestle Perrier, 92/553/Cee, OJ 5-12-1992, vol. L 356.

2.22 It is probably fair to say that quantitative measurement techniques have so far been applied rather more to the demand side than to the supply side in competition cases.

CASE STUDY 1: NESTLE-PERRIER MERGER

The Nestle-Perrier merger case of 1992 is interesting for a number of reasons.[18] First, the Nestle-Perrier case is interesting because it provides an example of how markets can be defined for antitrust purposes. In this instance very little empirical analysis was undertaken to determine the correct antitrust market. Nestle notified the Commission of its intention to buy Perrier and cede Volvic, a Nestle mineral water brand, to BSN, the second biggest competitor. This was designed to pre-empt any involvement by the Commission on the basis of concerns about Nestle’s market position. Nestle and Perrier together had 75%, by volume, of the ‘market for mineral waters’. Nestle was claiming that the relevant market was the ‘non-alcoholic refreshing drinks’ market, which would include colas and all other soft drinks. To determine the relevant market for still and sparkling mineral waters the Commission used surveys and comparisons supplied by consumers, trade associations and supermarkets. There appears to have been some emphasis on price correlation, and graphs showing parallel price movements over time for all mineral water.

Secondly, the Nestle-Perrier case provides a good example of barriers to entry and the conclusions of the European Commission as to their relevance in merger cases. In the Nestle-Perrier case the Commission pointed to the high degree of brand recognition in the mineral water industry. This brand recognition was due to intensive advertising campaigns that the three major firms (Nestle, Perrier and BSN) had undertaken over several years. New entrants to the market would have faced similar expenditure requirements to attract and retain customers. The Commission concluded that a new brand would require a long lead-time and heavy investment in advertising and promotion to compete in the market, and would have difficulty establishing a presence in the market due to the large number of brands already introduced by the top three firms. In its judgment, the Commission implicitly appealed to the theory that advertising is a barrier to entry because it is a sunk cost that cannot be recovered nor transferred to other uses.

It also used the brand proliferation argument of Schmalensee.[19] Both of these factors have been recognised as potential barriers to entry in modern IO literature. However, both are difficult to apply and the brand proliferation argument, in particular, has been criticised.

Subsequent analysis showed both the pre- and post-merger market shares of Nestle and BSN to be very high. So, although Perrier had chosen to divest Volvic to BSN, the market positions of the two were effectively made more symmetric without their combined market share falling at all. In other words, the divestiture would strengthen the duopolistic structure of the market. Despite this, the Commission would not block the merger outright, instead negotiating a remedy with Nestle and Perrier. Nestle was allowed to keep Perrier, but had to sell eight of its lesser brands of mineral water to another company, which was not in the market at that time, to create competition in the market.[20]

[Footnote 19] Schmalensee, R., 1978, Entry deterrence in the ready-to-eat breakfast cereal industry, Bell Journal of Economics, 9: 305-27, in Office of Fair Trading Research Paper 2, 1994, Barriers to entry and exit in UK competition.

[Footnote 20] The other important feature of the case was that the Commission for the first time investigated the matter not as a single firm dominance case (monopoly) but as a multiple firm dominance case (oligopoly).

The relevant geographic market

2.23 The relevant geographic market is the area over which demand-side and supply-side product substitution takes place. Of particular importance in defining the geographic market is the degree to which chains of substitution extend the market, and the role played by imports in conditioning the ability of local suppliers to raise prices.

2.24 The type of evidence that can be used to determine the extent of a geographic market includes surveys of consumer and competitor behaviour; estimates of price elasticities in different areas; and analysis of price changes across contiguous geographic areas. The latter can provide reasonable evidence that two areas are in the same market if the prices for the product under examination move together in the two areas for reasons unrelated to changes in the costs of production.
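As a purely illustrative aside added to this text, using synthetic data, price co-movement between two areas can be summarised with simple correlations. The example also shows why the qualification above matters: a shared input cost trend can make prices in two areas move together even when neither constrains the other.

    # Illustrative sketch with synthetic data: correlation of prices in two areas,
    # in levels and in month-on-month changes. A shared cost trend is built in
    # deliberately to show that co-movement may reflect common costs rather than
    # substitution between the areas.
    import numpy as np

    rng = np.random.default_rng(0)
    months = 48
    common_cost = np.cumsum(rng.normal(0.0, 0.02, months))   # shared input cost trend
    price_a = 1.00 + common_cost + rng.normal(0.0, 0.01, months)
    price_b = 1.00 + common_cost + rng.normal(0.0, 0.01, months)

    corr_levels = np.corrcoef(price_a, price_b)[0, 1]
    corr_changes = np.corrcoef(np.diff(price_a), np.diff(price_b))[0, 1]
    print(f"correlation of price levels:  {corr_levels:.2f}")
    print(f"correlation of price changes: {corr_changes:.2f}")
    # High correlation driven by the common cost series is not, by itself,
    # evidence that the two areas form one geographic market.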

[Footnote 21] Sheffman, D.T. and P.T. Spiller, 1987, Geographic market definition under the US Department of Justice Merger Guidelines, Journal of Law and Economics, 30: 123-47.

[Footnote 22] The vector contains crude oil price, energy use, transport costs for crude oil, refining capacity, and a weather variable. See Chapter 9 for a description of residual demand analysis.

[Footnote 23] The vector contains per capita income, industrial production, and seasonal factors.

[Footnote 24] This result is confirmed by Stigler, G.J. and R.A. Sherwin, 1985, The Extent of the Market, Journal of Law and Economics, 28: 555-85. See paragraph 5.8 below for a more detailed discussion.

CASE STUDY 2: DEFINING THE GEOGRAPHICAL EXTENT OF US PETROL MARKETS

During the 1980s a number of mergers occurred between petrol producers in the US, and the determination of the relevant geographic markets for wholesale fuel became a heavily contested issue. There is very little doubt that the area west of the Rocky Mountains constitutes an isolated geographic market, as no petrol is transported there from the rest of the country. All petrol consumed on the West Coast is either imported or refined west of the Rockies. It is more difficult to establish in which way the other areas of the country are connected.

Refineries located in the Gulf Coast, which produces about half of the total US production, provide virtually all the petrol sold in the South-East. The petrol flows to the region via two pipelines, the Colonial and the Plantation. Terminals are clustered along these pipelines, usually close to main urban areas, and petrol is transported from these terminals to the final destination by lorry. North-eastern buyers are supplied with petrol by three main sources: local refineries located near the larger cities (the North-East produces about 15% of the total US output); the Gulf Coast refineries via the Colonial pipeline; and foreign refineries. The existence of two different and one common source of supply (the Colonial pipeline) makes it impossible to determine, without a detailed investigation, whether the North-East and the South-East of the US are in the same geographic market.

In a leading article on the use of quantitative techniques in market definition, Sheffman and Spiller[21] used residual demand analysis to determine whether the relevant antitrust market for gasoline refining in the eastern United States covers the whole area east of the Rocky Mountains, or whether the Gulf Coast and the North-East together, or the North-East alone, form an antitrust market. This analysis used monthly data for the period April 1981 to February 1985 in order to estimate residual demand functions for wholesale gasoline; the regressors include a vector of cost shift variables for potential competitors,[22] to account for supply-side substitution, and a vector of demand shift variables,[23] to account for demand-side substitution.

The results showed that there were two relevant antitrust markets east of the Rocky Mountains relevant for merger analysis: the Gulf Coast alone, for mergers that happened between refineries within this area, and the area that is comprised of the North-East plus the Gulf Coast for mergers that included the North-East. The results obtained with residual demands differed from those from correlation tests that show highly correlated prices across the whole area east of the Rockies, leading to a wide market definition.[24] In other words, the economic market was larger than the antitrust market. To be more specific, while ‘historical’ petrol prices tend to converge across the area east of the Rockies, refineries in the Gulf Coast have the potential power to promote a long-lasting price increase by cutting capacity in the Gulf area alone. Such a price increase cannot travel beyond the regional boundaries of the Gulf Coast, which makes that area an antitrust market.
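A stylised sketch of the kind of regression involved is given below. It has been added for illustration only, with simulated data and invented variable names; it is not Sheffman and Spiller's actual specification or data, and a real study would address the endogeneity of quantity, typically with instrumental variables.

    # Stylised residual demand sketch on simulated data (invented specification).
    # The inverse residual demand regresses the candidate group's price on its
    # own quantity, controlling for rivals' cost shifters and demand shifters.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 47                                              # e.g. monthly observations
    log_quantity = rng.normal(10.0, 0.10, n)            # endogenous in practice
    rival_cost_index = rng.normal(0.0, 1.0, n)          # cost shifter for potential competitors
    income_index = rng.normal(0.0, 1.0, n)              # demand shifter
    log_price = 2.0 - 0.5 * log_quantity + 0.3 * rival_cost_index \
                + 0.2 * income_index + rng.normal(0.0, 0.05, n)

    X = np.column_stack([np.ones(n), log_quantity, rival_cost_index, income_index])
    coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)
    print("constant, own quantity, rival cost, income:", np.round(coef, 2))
    # A strongly negative own-quantity coefficient suggests the candidate group
    # faces an inelastic residual demand and so may constitute an antitrust market.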

Quantitative tests for market definition

2.25 There are a number of quantitative tests available to help market definition, and there is much debate on which is the most adequate. In Chapters 5 to 8 we review tests that are based on the analysis of price trends as well as the more sophisticated test of demand (Chapter 9) or a system of demand equations (Chapter 15). Generally, tests based on price trends alone should be treated with caution, as they do not allow an assessment of whether prices could be profitably raised by market participants. However, the paucity of the data available often prevents the analyst from estimating more appropriate demand models, so that antitrust markets are defined on the basis of price tests alone. The two examples in this chapter on the definition of the relevant market for radio advertising and the relevant market for petrol highlight some of the techniques used in the definition of the relevant product market and the geographic market respectively.

Analysis of market structure

2.26 The traditional analysis of antitrust is firmly rooted in the structure-conduct-performance paradigm developed by Bain.[25] According to this view, it is the structure of the market that determines its performance, via the conduct of its participants. Performance is measured by the ability to charge a price above the competitive level, thereby earning a positive mark-up. In line with this paradigm the degree of concentration in a market has long been considered one of its major structural characteristics, and analysis of market structure then becomes a key indicator of actual or potential market power.

2.27 It is now recognised at both a theoretical and empirical level that the structure-conduct-performance approach is overly simplistic and that matters are more complex. Nevertheless, considerable emphasis in antitrust cases does still seem to be put on structural data, perhaps partly because it is relatively easy to collect.

[Footnote 25] Bain, J.S., 1956, Barriers to New Competition: Their Character and Consequences in Manufacturing Industries, Cambridge: Harvard University Press; and Bain, J.S., 1951, Relation of Profit Rates to Industry Concentration: American Manufacturing, 1936-1940, Quarterly Journal of Economics, 65: 293-324.

Concentration indices

2.28 Here, market shares are calculated for all firms identified as participants in the market. They may be calculated on the basis of a firm’s sales or shipments or capacities, depending upon the nature of the market. Note that total sales (or capacity) include those that are likely to arise in response to a small, non-transitory increase in price. So, even firms not currently producing for, or selling in, the market are assigned ‘hypothetical’ market shares. As already noted, the simple market share held by a firm, or a merged firm, may be used to trigger an investigation.

[Footnote 26] There are other indices that provide a measure of market concentration. The HHI is simply the one that is used most widely.

[Footnote 27] HHI levels of 1000 and 1800 correspond to four-firm concentration ratios of 50% and 70% respectively. See also paragraphs 3.2 to 3.6 for a further discussion of HHI.

[Footnote 28] Farrell, J. and C. Shapiro, 1990, Horizontal Mergers: An Equilibrium Analysis, American Economic Review, 80: 107-26.

2.29 The Herfindahl-Hirschman Index (HHI) is also used, for example, by the US competition authorities to measure market concentration.[26] The HHI is simply the sum of squares of individual market shares (and so it gives proportionately greater weight to larger firms). According to the FTC/DOJ Merger Guidelines, an ‘unconcentrated market’ has an HHI less than 1000, a ‘moderately concentrated market’ has an HHI between 1000 and 1800, and a ‘concentrated market’ an HHI greater than 1800, while a pure monopoly would have an HHI of 10,000.[27] Any merger which would leave the HHI below 1000 is considered unlikely to raise concerns that it will significantly reduce competition. A merger leading to an increase in the HHI of less than 100 points, when HHI is between 1000 and 1800, will also not normally be investigated. When the HHI exceeds 1800 and a proposed merger leads to an increase of more than 50 points, serious competitive concerns are deemed to be raised.
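The index and the screening thresholds just described are straightforward to compute; the sketch below has been added for illustration, using invented market shares for a hypothetical merger.

    # Illustrative sketch: HHI from market shares in percentage points (a pure
    # monopoly scores 100 squared = 10,000), with the FTC/DOJ thresholds
    # described above. The shares are invented.

    def hhi(shares_percent):
        return sum(s ** 2 for s in shares_percent)

    pre_merger = [30, 25, 20, 15, 10]
    post_merger = [55, 20, 15, 10]            # the two largest firms combine

    pre, post = hhi(pre_merger), hhi(post_merger)
    increase = post - pre
    print(f"pre-merger HHI {pre}, post-merger HHI {post}, increase {increase}")

    if post < 1000:
        print("unconcentrated: merger unlikely to raise concerns")
    elif post <= 1800:
        status = "not normally investigated" if increase < 100 else "may be investigated"
        print(f"moderately concentrated: increase of {increase} points, {status}")
    else:
        status = "serious competitive concerns deemed raised" if increase > 50 else "smaller increase"
        print(f"concentrated: increase of {increase} points, {status}")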

2.30 Measures of market concentration are only as good as the implied definition of the market. Even when that is deemed to be unproblematic, however, market concentration measures are economically difficult to interpret, and the connection between market share and market power is far from clear. Farrell and Shapiro[28] have argued that the HHI is a poor reflector of the welfare consequences which flow from a merger, and that ‘increases in the HHI are not associated with a lessening of economic welfare’. In fact, in some simple Cournot models, ‘more concentration amongst the non-merging firms makes it more likely that the merger will be welfare enhancing.’

2.31 Further:

‘Implicitly the guidelines assume a reliable (inverse) relationship between market concentration and market performance. In particular the entire approach presumes that a structural change, such as a merger, that increases the equilibrium value of [the] HHI also symmetrically reduces equilibrium welfare defined as the sum of producer and consumer surplus. Is there in fact such a reliable relationship between changes in market concentration and changes in economic welfare? In some very special circumstances there is…, but if the competing firms are not equally efficient, or there are economies of scale, there is no reason to expect that concentration and welfare will move in opposite directions.’

[Footnote 29] London Economics, 1997, Competition in Retailing, Office of Fair Trading Research Paper 13, discusses the analysis of competition in retail markets.

[Footnote 30] Frankel, A. and J. Langenfeld, 1997, Sea Change or Submarkets?, Global Competition Review, June/July: 29-30.

2.32 Here the now familiar objections to an overly deterministic and structuralist approach to competition are being made. Concentration indices, like all indices, are at best rough measures of the quantities of interest – in this case market power – and must be used with care. Analyses relying exclusively on such measures are likely to be led into error.

Price-concentration analysis

2.33 A frequently-used test to assess the impact of concentration in an industry (market) is to compare a number of local markets in terms of their supply characteristics. The hypothesis is that higher degrees of concentration go hand-in-hand with higher prices and price-cost margins. Such tests are sometimes used in retailing markets that are local. The SCI/Plantsbrook (1995) case outlined in paragraphs 13.14 to 13.18 and the case study below on the US merger of two office supply chains give examples of this analysis. Other examples are the petrol enquiry by the MMC in 1989 and the merger of betting shop chains, Grand Met and William Hill, also in 1989. The MMC found that the merger of Grand Met and William Hill bookmakers would create a number of local monopoly situations leading to reduced competition in off-course betting at a local level, and that Grand Met should therefore divest certain betting offices. Offices should be divested where former Mecca betting offices (belonging to Grand Met) and William Hill betting offices were within a quarter of a mile of each other, and where there were no other betting offices within a quarter of a mile of one of these offices. There was some debate, however, over whether the divestiture went far enough. The existence of barriers to entry at the local level as well as at the national level, where the two chains of bookmakers compete in terms of advertising and promotion, would suggest that the merger needed to be examined more closely with regard to its negative effect on competition.[29]
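A minimal sketch of such a study follows, added for illustration with simulated data and invented variable names: local prices are regressed on a local concentration measure with a cost control, which is also the kind of control the parties argued about in the Staples case described next.

    # Illustrative price-concentration sketch on simulated local-market data.
    # Variable names and data are invented; a real study would use observed
    # prices, concentration measures and cost controls for each local market.
    import numpy as np

    rng = np.random.default_rng(2)
    n_markets = 80
    local_hhi = rng.uniform(1000, 6000, n_markets)        # local concentration
    local_cost = rng.normal(0.0, 1.0, n_markets)          # e.g. local rents and wages
    price = 100 + 0.002 * local_hhi + 3.0 * local_cost + rng.normal(0.0, 2.0, n_markets)

    X = np.column_stack([np.ones(n_markets), local_hhi, local_cost])
    coef, *_ = np.linalg.lstsq(X, price, rcond=None)
    print("constant, HHI coefficient, cost coefficient:", np.round(coef, 4))
    # A positive coefficient on concentration, with cost controls included, is
    # consistent with the hypothesis that more concentrated local markets
    # sustain higher prices; omitting the cost control could bias the result.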

CASE STUDY 3: STAPLES AND OFFICE DEPOT[30]

Following industry consolidation, Staples and Office Depot are two of the three remaining office supply ‘superstore’ chains in operation in the US - the other being Office Max. In September 1996 Staples Inc agreed to acquire rival Office Depot in an acquisition valued at $3.4 billion. Prior to the emergence of superstores in the mid-1980s, businesses and consumers typically purchased office supplies through dealers that offered items listed in a catalogue published by one of several office supply wholesalers. The superstore chains followed a different strategy. They constructed large, efficient warehouse-style stores where a variety of items were offered. Although this fell far short of the variety offered by traditional wholesalers with their immense catalogues, the cost savings on popular items were passed on to consumers in the form of lower prices. Most consumable office supplies are still, however, sold through other channels, including traditional distributors and their dealers, contract stationers, mass merchandisers and others.

In April 1997 the FTC rejected an offer by Staples to divest up to 63 stores to Office Max as a condition for permitting the acquisition to proceed. This was unusual for a US antitrust agency that often settles merger challenges with this type of consent agreement.

The FTC argued that the companies’ documents and statistical evidence demonstrated that Staples and Office Depot are particularly close competitors. That is, in geographic markets (metropolitan areas) in which the two firms compete with one another, office supply prices are significantly lower than in metropolitan areas in which only one or the other is present. The FTC further concluded that the relevant product market includes only ‘the sale of office supplies through office superstores’ and that Staples’ acquisition would lessen competition in violation of antitrust legislation.

Staples and Office Depot rejected the FTC’s statistical study of the relationship between price and head-to-head superstore competition on two grounds. First, they claimed that the FTC’s results were unrepresentative because of the particular set of office supply products analysed. Secondly, they argued that the FTC’s results did not take proper account of the fact that higher prices were found in the cities which generally have higher costs for doing business. Furthermore, Staples and Office Depot together account for only 5% of total annual sales of office supply products in the US.

Both the FTC and the parties seem to agree that the advent of superstores has brought systematically lower prices to office supply consumers. Staples and Office Depot argued that the merger would allow more of the same, while the FTC maintained that it is this very cost and price reduction caused by the formation of superstores that has effectively turned them into their own relevant market. This case was interesting in that it showed the FTC’s general tendency towards focusing on anti-competitive effects involving either narrow markets or small parts of larger markets. It also highlighted its increasing reliance on sophisticated economic theories and statistical techniques to define these narrow markets and estimate the likely competitive effects of mergers.

Barriers to entry

2.34 Increases in concentration do not necessarily result in higher prices. If market entry is easy, the threat of entry by potential competitors reduces the ability to exercise market power. The prevalence of barriers to entry has been a long-standing issue of debate among economists. Barriers to entry can be detected and their magnitude assessed by examining the profitability of firms from their activities in the relevant market. This is often done by comparing the Accounting Rate of Return (ARR) with the risk adjusted cost of capital. Martin Graham and Anthony Steele have described a superior technique in which the Certainty Equivalent Accounting Rate of Return (CARR) is compared with the risk free rate.[31] Recent, largely theoretical, work in industrial organisation has greatly clarified the approach that should be taken to the analysis of entry conditions and barriers to entry. The so-called ‘new industrial organisation’ economics (‘the new IO’)[32] has brought strategic issues to the fore and has contributed greatly to our understanding.

[Footnote 31] Martin Graham and Anthony Steele, 1997, The Assessment of Profitability by Competition Authorities, OFT Research Paper 10.

[Footnote 32] Harbord, D. and T. Hoehn, 1994, Barriers to Entry and Exit in European Competition Policy, International Review of Law and Economics: 411-35.

[Footnote 33] Baumol, W.J., J.C. Panzar and R.D. Willig, 1982, Contestable Markets and the Theory of Industry Structure, NY: Harcourt Brace Jovanovich. The other side of this problem is that sunk costs also constitute a barrier to exit for the incumbent firms.

[Footnote 34] Paragraph 122 of the decision in the ECJ case 27/76.

2.35 In particular the new IO allows us to isolate a small number of factors of crucial importance in assessing barriers to entry, and questions concerning market power, and so directs attention towards particular aspects of the firm’s technology, market structure and firm behaviour. To be more specific, the new IO has revealed the fundamental importance of sunk costs, the nature of post-entry competition and strategic interaction between incumbents and entrants as being crucial to any analysis or case study of entry conditions in particular markets.

2.36 Against this background it is surprising to find that so much analysis of mergers is based on the structure-conduct-performance paradigm. The MMC and DOJ, as well as the European Commission’s DGIV, put a lot of emphasis on the analysis of market structure and concentration ratios.

2.37 For example, the existence of high economies of scale is the typical Bainian barrier – the incumbents can set pre-entry output at such high levels that new entrants would be forced to sell at below cost. In a number of cases the MMC or the European Court of Justice (ECJ) have concluded that economies of scale deter potential competitors. Although it is unusual to find significant economies of scale without associated sunk costs, it is nevertheless important to make the distinction between economies of scale that involve sunk costs and those which do not.[33] The existence of sunk costs which are irrecoverable if entry is unsuccessful is clearly recognised in United Brands:[34]

‘The particular barriers to entry to competitors entering the market are the exceptionally large capital investments required for the creation and running of a banana plantation, …and the actual cost of entry made up inter alia of all the general expenses incurred in penetrating the market such as the setting up of an adequate commercial network, the mounting of very large scale advertising campaigns, all those financial risks, the costs of which are irrecoverable if the attempt fails.’

[Footnote 35] [1991] OJ L 334/42.

[Footnote 36] London Economics, 1994, The Assessment of Barriers to Entry and Exit in UK Competition Policy, OFT Research Paper 2.

2.39 The view that sunk costs are a barrier to entry is also argued by the Commission in the de Havilland case.[35] Aerospatiale and Alenia, who control the largest European and worldwide producer of regional aircraft, proposed to acquire the second largest producer, de Havilland. The aircraft industry is characterised by high sunk costs in both plant and equipment, and in the costs of changing designs, which deter post-design alterations. The Commission found that a time-lag of two to three years for market research was required to determine the type of plane a market needed, and that the total lag time was six to seven years from initial research to point of delivery. Potential entrants from around the world were identified, but the Commission concluded that the additional investment required in research and development, and in design changes, made entry unlikely.

2.40 In their research report for the OFT, London Economics recommend a seven-step procedure to assess the existence of barriers to entry.[36] The US approach is similar in that the DOJ focuses on the history of entry as well as the cost conditions under which viable entry is expected to occur. The US approach outlines three ways to deal with barriers to entry:

• Assess actual experience with entry into the market under investigation. The turnover of firms in the industry can be taken as an indication of ease of entry and exit. The higher the turnover, the easier is entry.

• Estimate the 'minimum viable scale' (MVS). The US DOJ Merger Guidelines provide a heuristic test for assessing entry into markets for homogeneous goods: if a firm can profitably enter the market with a market share of less than 5%, then entry can be assumed to be likely. The reasoning for the 5% benchmark is as follows. Assuming a unit price elasticity of demand, if the market price is raised by 5% due to a merger or other anti-competitive act, market demand will decrease by 5%. This creates an opportunity for entry, because it frees 5% of demand capacity. The test consists of determining whether there could be a firm that would enter the market, produce that extra 5% and still be in business when the price goes down by 5%, back to its original level, as a result of the increased supply due to the new entry. Historical data on entry patterns in the market under investigation can be used as evidence for this test; the analyst will look at the size of entrants in terms of market share at the time of entry, and at the size they reached after one or two years. In the absence of historical data, a less preferred alternative would be to look at the size and profitability of current firms. This way of proceeding will give the analyst an idea of whether entry could be possible, but it will not shed light on whether some new firm could actually enter the market: that depends on sunk costs. However, in the absence of information on sunk costs, this methodology might be the only one available to the analyst.

• Undertake pro forma calculations in a business-type analysis. The analyst who has information on the current market price, variable cost and initial investment will calculate whether it would be profitable to enter the market. From these calculations one can assess how large the company would have to be in order to be profitable, and so estimate the minimum viable scale.
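The following is a minimal sketch, with entirely hypothetical figures, of how such a pro forma entry calculation might be set out and compared with the 5% benchmark discussed above; the function name, parameter values and discounting assumptions are illustrative, not taken from the Guidelines.

```python
# Illustrative pro forma entry calculation (all figures hypothetical).
def breakeven_share(entry_cost, annual_fixed_cost, price, unit_variable_cost,
                    market_volume, years=5, discount_rate=0.10):
    """Market share an entrant would need in order to recover its sunk entry
    cost over `years`, given current price and cost conditions."""
    margin = price - unit_variable_cost
    # Present value factor for `years` equal annual contributions.
    annuity = sum(1.0 / (1.0 + discount_rate) ** t for t in range(1, years + 1))
    # Annual contribution needed to cover fixed costs and amortise entry cost.
    required_contribution = annual_fixed_cost + entry_cost / annuity
    required_volume = required_contribution / margin
    return required_volume / market_volume

share = breakeven_share(entry_cost=20e6, annual_fixed_cost=2e6,
                        price=10.0, unit_variable_cost=7.0,
                        market_volume=100e6)
print(f"Break-even market share: {share:.1%}")
# Under the Merger Guidelines heuristic described above, entry is more
# plausible if this break-even share is below roughly 5% of the market.
```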

2.41 The London Economics approach is more wide-ranging and includes the analysis of strategic behaviour, which could also be considered to be part of the assessment of competition and anti-competitive behaviour.

Analysis of competition and the scope for market power

Analysis of prices and price trends

2.42 The analysis of prices, price trends and relative price levels can be an important part of a competition investigation. Price analyses are particularly useful in investigations of alleged price fixing and bid rigging during procurement auctions, and can contribute to market definition analysis (as indicated above). The simple analysis of prices already provides a significant amount of information; once prices of products are analysed together with the respective quantities sold in the market, additional information is generated. More generally, prices are the main element in competition (although non-price factors can also be important) and so price levels directly affect consumer welfare.[37] Problems in competition normally manifest themselves in non-competitive price levels.

[37] Hayek's dictum that all relevant economic information is contained in a product's price dates from his classic article and remains, despite its oversimplification, a powerful statement today. Hayek, F., 1945, The Use of Knowledge in Society, American Economic Review, 35: 519-30.

2.43 Below are just a few examples of instances in the past ten years where competition issues were raised on price grounds. Typically the public or some public watchdog argued that price was too high and that competition was in danger of malfunctioning unless the authorities intervened.

[38] Monopolies and Mergers Commission, 1994, The Supply of Recorded Music: A Report on the Supply in the UK of Pre-recorded Compact Discs, Vinyl Discs and Tapes Containing Music.

CASE STUDY 4: THE SUPPLY OF RECORDED MUSIC

The 1993 MMC inquiry into the supply of recorded music[38] was prompted by concern about the prices of compact discs (CDs), with particular emphasis on the difference in prices between the UK and the US. The Consumers' Association presented the MMC with a body of evidence gathered during their on-going observation of CD prices. They highlighted that CD prices had remained high since their introduction into the UK market while the price of CD players had fallen substantially; that the production costs for CDs had fallen; and that there was widespread consumer dissatisfaction with the level of CD prices in the UK compared to the US.

For the purposes of the investigation, the MMC commissioned a survey that compared the retail prices of pre-selected, full price album titles, for both CDs and cassettes, across the UK, USA, Germany, France and Denmark. The average prices for the pre-selected CDs, without tax and adjusted to pound sterling equivalents, were then compared across the five countries. The results of the survey are presented in the table below. The results show that the average CD was priced 8% lower in the US than in the UK and that average prices in the other countries were higher than in the UK. Cassette prices exhibited a similar pattern, with the US being 12.9% lower than the UK.

One of the record companies, Sony, carried out its own survey on the prices of a larger sample of titles in the UK and US. Looking at weighted average prices of full price titles, they found that prices in the US were 5.8% lower than in the UK for CDs, and 11% lower than in the UK for cassettes. These results are similar to the MMC's results. However, because of the larger sample, the statistical significance of the smaller difference in the Sony survey could be confirmed. Furthermore, Sony found - from an analysis of the average (unweighted) retail prices of Sony CDs in different US cities - that the price range was actually greater within the US than between the US and the UK.

Table 1: Cost in Pounds Sterling of Pre-selected CD Titles in Europe and the US

Pre-selected Titles                               UK      US      F*      G*      Denmark
Diva – Annie Lennox                               11.78   10.21   13.03   11.23   11.38
Soul Dancing – Taylor Dayne                       11.25    9.83   12.87   11.19   11.58
Zooropa – U2                                      10.22    9.85   11.88   10.72   11.33
Keep the Faith – Bon Jovi                         10.56   10.53   12.81   10.89   11.50
River of Dreams – Billy Joel                      10.33    9.45   11.74   10.75   11.28
Timeless – Michael Bolton                         11.26   10.45   12.41   11.00   11.28
Tubular Bells II – Mike Oldfield                  11.71   10.21   12.71   10.92   11.25
What's Love Got to Do With It? – Tina Turner      10.06    9.67   13.12   11.15   11.44
Column Average                                    10.90   10.03   12.57   10.98   11.38

* F is France
* G is Germany

Source: BMRB International Survey of retail prices, September 1993

Given the consistent findings of lower prices for recorded music in the US than in the UK, the MMC asked a specialist retail consultancy to assess whether the differences in CD and cassette prices between the US and the UK were reflected in similar differences in the prices of other products. A price audit on a carefully matched basket of manufactured leisure goods, sold at similar prices to recorded music, was undertaken in late 1993. The audit found that on average the US prices (using the same exchange rate as the original MMC survey and without tax) were 8% lower than for the same goods in the UK. This result was in line with the results of the price surveys for CDs, indicating that there was nothing unusual or atypical about this market. These findings contributed to the MMC's conclusion that the complex monopoly in the supply of recorded music did not act against the public interest.

2.44 In the petrol enquiry the MMC report discussed at some length, and summarised, empirical evidence regarding the transmission of price changes for crude oil to prices at the petrol station. This involved the analysis of long-term price trends through dynamic regression analysis. The justification for this analysis was, on the one hand, the apparent uniformity of prices at petrol stations in local areas and, on the other, the speed at which prices were adjusted upwards when the price of crude oil rose on the Rotterdam spot market. The MMC investigated the supply of petrol in the UK and in its report cleared the industry of any anti-competitive practices.[39]

[39] Monopolies and Mergers Commission, 1989, The Supply of Petrol.

2.45 The analysis of competition in antitrust investigations often takes the form of seeking to establish the effects of a particular set of actions or a change in behaviour. This is the case when the formation of a cartel is said to have led to higher prices than otherwise would have been the case. Similarly, the merger of two companies active in the same market is an event that may lead to a change in output, quality or price, to the detriment of consumers. While in the case of a notification of a merger the event is in the future, in many cases there is a historical event that can be evaluated in an empirical fashion.

2.46 There is no single quantitative technique that is designed to capture the historical impact of an event. Rather, event analysis, or impact analysis, is an element in most quantitative techniques. For example, the analysis of price trends mentioned above could entail a statistical test for a structural break in the time series. Did the break-up of the international coffee cartel lead to a change in green coffee bean prices traded on the international commodity markets? Did the ending of anti-dumping duties on soda ash lead to a decrease in soda ash prices in the European Union? Did the co-ordinated price announcements of the European carton board producers lead to higher prices than the free interaction of supply and demand would lead us to expect? More generally, events can be analysed in a number of ways (a short illustration follows the list below):

• Time series of prices can be evaluated in terms of structural breaks (for example, standard tests exist in most econometric packages; dummy variables that relate to the event can be included in regression equations).

• Actual price trends can be compared with what a model of competition in the absence of the event would predict (bidding models, for example, or oligopoly models where the number and identity of players are changed).

• The counterfactual can be empirically established in some instances (stock market reactions to news of a merger or price announcement can be compared to an index).
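The sketch below illustrates the first of these approaches on simulated data: a price series is regressed on an input cost series plus a dummy variable for the post-event period, and a significant coefficient on the dummy points to a structural break. The data, variable names and parameter values are hypothetical assumptions made purely for illustration.

```python
# Minimal event-analysis sketch: dummy-variable regression for a structural break.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120                                          # monthly observations
cost = 50 + np.cumsum(rng.normal(0, 0.5, n))     # an input cost series
post = (np.arange(n) >= 60).astype(float)        # 1 after the alleged event
price = 10 + 1.2 * cost + 5.0 * post + rng.normal(0, 2, n)

X = sm.add_constant(pd.DataFrame({"cost": cost, "post_event": post}))
res = sm.OLS(price, X).fit()
print(res.summary())
# A significantly positive coefficient on `post_event` indicates that the price
# level shifted after the event, conditional on movements in cost.
```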

2.47 Event analysis, or impact analysis, played an important part in the Carton board case.[40] Here the European Commission imposed heavy fines on companies who had formed a cartel which, among other things, co-ordinated regular price announcements over a period of five years. The decision was recently confirmed on appeal at the Court of First Instance (the decision of 14 May 1998). As part of their defence, in order to argue mitigating circumstances, a group of companies commissioned a study of actual price trends and submitted it to the Commission. This study analysed the actual behaviour of prices as against the trend of prices implied by the series of price announcements and showed a major divergence between these two price series. The study showed how the prices achieved in the market place only followed the price announcements initially, and even then only to a small degree. The Court noted the lack of any verification by the Commission of the actual effect of the cartelistic practice, but this did not affect the decision to uphold the Commission's findings.

[40] Carton board, 1994, Case IV/33833, OJ L243.

Models of competition

2.48 A number of empirical approaches and simulation models have been developed with the aim of directly assessing the scope for market power following a merger between producers of differentiated products. Similarly, existing rather than potential market power may manifest itself through anti-competitive behaviour which limits the extent to which a leading firm will lose sales to close competitors. One technique that seeks to quantify these effects is the so-called 'diversion ratio', which measures the degree to which a firm is subject to loss of sales to competitors who provide many customers' first and second preferred choices (with a homogeneous, or identical, product market any price difference should, all other factors being equal, lead to full and immediate substitution to a rival).
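As a minimal sketch, the diversion ratio can be computed either from survey evidence on customers' second choices or from estimated demand elasticities; the two small functions below show both routes, with entirely hypothetical figures and function names chosen for illustration only.

```python
# Illustrative diversion ratio calculations (hypothetical figures).
def diversion_from_survey(switchers_to_rival, total_switchers):
    """Share of the firm's lost sales captured by a given rival,
    from survey data on customers' second choices."""
    return switchers_to_rival / total_switchers

def diversion_from_elasticities(e_cross_rival, e_own, q_own, q_rival):
    """Diversion ratio implied by demand elasticities: the rise in the rival's
    sales divided by the fall in the firm's own sales when its price rises."""
    return (e_cross_rival * q_rival) / (-e_own * q_own)

# Of 200 surveyed customers of brand A who would switch after a price rise,
# 120 name brand B as their second choice.
print(f"Diversion A->B (survey): {diversion_from_survey(120, 200):.0%}")
print(f"Diversion A->B (elasticities): "
      f"{diversion_from_elasticities(0.8, -2.0, q_own=1000, q_rival=1200):.0%}")
```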

2.49 With the same objective (to establish the scope for independent behaviour), the Department of Justice has employed statistical estimation of the demand for the differentiated products of competing firms, and then simulated potential price increases based on models in which consumers' rankings of goods are independent of prices (in technical terms, logit demand models). Such or related methods have been used in cases concerning fragrances, desk-top publishing software, wholesale bread bakers, and many others.[41]

[41] Sources include Werden, G.J. and L.M. Froeb, 1994, The Effects of Mergers in Differentiated Products Industries: Logit Demand and Merger Policy, Journal of Law, Economics, and Organization, 10(2): 407-26.

2.50 Other empirical and simulation approaches follow an approach similar to the logit model, but are less restrictive in their assumptions. They also use data on prices, quantities of sales, margins and costs in attempts to estimate the full system of demand equations for competing differentiated products.[42] Then, the effects of a merger (or other practice) on prices are simulated from the elasticities and other relations derived from these estimated equations. Such methods have been used in presentations to the US enforcement agencies, most often for consumer products for which scanner-based price data is available. In Europe such techniques have been little used. One notable exception is the Kimberly-Clark/Scott case where such techniques were employed. Another case where the nature of competition was modelled explicitly is the Boeing/McDonnell Douglas merger. There an interested third party provided empirical evidence of the bidding process in the sale of civilian aircraft.

[42] A leading example is Hausman, J.J., G. Leonard, and J.D. Zona, 1994, Competitive Analysis with Differentiated Products, Annales d'Economie et de Statistique, 34: 159-80.
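To make the mechanics concrete, the following is a stylised sketch of a logit-based merger simulation of the kind described above: demand is logit, single-product firms set prices in Bertrand competition, and the merger of two of them is simulated by re-solving the pricing equilibrium under joint ownership. The price coefficient, mean utilities and costs are hypothetical assumptions, not estimates from any of the cases mentioned.

```python
# Stylised logit merger simulation (hypothetical parameters).
import numpy as np

alpha = 0.08                             # consumers' price sensitivity
delta = np.array([1.0, 0.8, 0.6])        # mean utilities of three products
cost = np.array([30.0, 28.0, 25.0])      # marginal costs

def shares(prices):
    u = np.exp(delta - alpha * prices)
    return u / (1.0 + u.sum())           # outside good utility normalised to 0

def equilibrium(owner):
    """Iterate the logit first-order conditions: each product's margin equals
    1 / (alpha * (1 - owning firm's total share))."""
    p = cost + 10.0
    for _ in range(1000):
        s = shares(p)
        firm_share = np.array([s[owner == owner[i]].sum() for i in range(len(p))])
        p_new = cost + 1.0 / (alpha * (1.0 - firm_share))
        if np.max(np.abs(p_new - p)) < 1e-8:
            break
        p = p_new
    return p

pre = equilibrium(np.array([0, 1, 2]))   # three independent firms
post = equilibrium(np.array([0, 0, 2]))  # firms 1 and 2 merge
print("Pre-merger prices: ", np.round(pre, 2))
print("Post-merger prices:", np.round(post, 2))
print("Price changes (%): ", np.round(100 * (post / pre - 1), 1))
```

In practice the demand parameters would be estimated from data rather than assumed, which is precisely where the econometric techniques discussed in the later chapters come in.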

[43] Bishop, B., 1997, The Boeing/McDonnell Douglas Merger, European Competition Law Review, 18(7): 417-19.

CASE STUDY 5: THE BOEING/MCDONNELL DOUGLAS MERGER

This merger between two of the three main suppliers of civil aircraft has been the cause of much debate.[43] In normal circumstances a merger that reduced the number of firms in an industry from three to two, and saw a large increase in the HHI, would have provoked an antitrust suit by the FTC and have faced stiff resistance from the European Commission. Instead the merger was allowed to proceed, with the European Commission imposing the relatively weak remedy of accounting separation for the military and civil sides of the combined Boeing/McDonnell Douglas Corporation.

One of the arguments used by Airbus, the main competitor to Boeing and the McDonnell Douglas Corporation (MDC), to influence this decision was a bidding study. This study revealed that in 54 'campaigns', or bidding procedures, the presence of the MDC as a bidder played a vital role in reducing the prices paid by airlines. On average, prices were 7.6% higher when the MDC did not bid. The result was the same regardless of whether the size of the sale and other factors were statistically controlled for. These results would suggest that the merger might be anti-competitive, as competition in the civil aircraft market was likely to be reduced, so increasing the prices paid by airlines and the fares subsequently paid by consumers.

The obvious conclusions that were drawn from the bidding study assumed that the future would be similar to the past in the absence of the merger. Boeing, however, argued that the MDC was a failing firm (an argument often used in antitrust cases) and that no-one would buy MDC products again. The argument succeeded even though there was evidence to suggest that the MDC was not a failing firm in the classical sense. Indeed, for its newest aircraft there was an order backlog representing years of production work, and even as a spares business the MDC would be profitable for several years.

It is interesting that Airbus was the only party to oppose the merger. Even the US customers of Boeing did not question the amalgamation of the two suppliers. This lack of involvement by US airlines was used to support the decision to allow the merger. Airbus, on the other hand, which, based on the results of the bidding study, was set to benefit from increased prices, opposed the merger. It feared that Boeing would use its new market power in a predatory manner, using offset deals on military aircraft produced by the military unit of MDC.

Predatory pricing

2.51 In contrast to concerns that prices are too high, prices that are too low may also be troublesome. As an empirical matter it is very hard to determine when pricing is predatory. The offence – low pricing – is also a prime virtue of the competitive process. Distinguishing predatory from normal competitive behaviour is therefore a subtle task. Among others, London Economics (1994) proposed a two-part test for predatory pricing.[44] The first step is an analysis of market structure to determine whether predatory behaviour is potentially a rational strategy. The crucial question is whether the alleged predator, if successful in deterring entry or inducing exit, could recover the short-term losses incurred. The second step is an examination of conduct or market behaviour using a price-cost test, such as the one suggested by Areeda and Turner,[45] which seeks to establish whether prices are below variable costs. In addition it is useful to investigate the history of entry-deterring behaviour in the market and evidence of intent. Modern theory also suggests that capital market imperfections, for example information asymmetries and financial constraints, can be important in supporting predatory behaviour.

[44] Similar tests have been proposed, and used, by the UK's Office of Fair Trading; see G. Myers, 1994, Predatory Behaviour in UK Competition Policy, OFT Research Paper 5, which adds to the analysis of predation an assessment of intent as well as a discussion of the cost tests that can be applied. See also Judge Easterbrook in A.A. Poultry Farms Inc. v. Rose Acre Farms Inc., F.2d 1396 (7th Cir. 1989), and Klevorick (1993).

[45] Areeda, P. and D. Turner, 1975, Predatory Pricing and Related Practices Under Section 2 of the Sherman Act, Harvard Law Review, 88: 697-733.

2.52 Areeda and Turner's price-cost test[46] excludes only the following from variable costs: (i) capital costs, (ii) property and other taxes, and (iii) depreciation. It is important to determine which costs were truly 'avoidable' in the sense that they would not have been incurred otherwise, that is, if prices had not been lowered and output or sales thereby increased. As stated by Areeda and Hovenkamp:[47]

'Which costs are to be considered variable and fixed is a function of the time period and how large a range of output is being considered. All costs are variable in the long run... A predatory pricing rule should focus on those costs which are variable in the relevant time period. ...The cost-based rules must focus on costs which the defendant should have considered when setting the allegedly predatory price.'

So, the distinction between fixed and variable costs will depend upon the range of output and the time period involved, the nature of the firm's contracts with its input suppliers, whether or not it has excess capacity, and so on. The important point is the identification of the avoidable costs upon which economic decisions are based.

[46] Areeda, P. and D. Turner, 1975, op. cit.

[47] Areeda, P. and H. Hovenkamp, 1992, Antitrust Law: An Analysis of Antitrust Principles and Their Application, 1992 Supplement, Boston: Little, Brown.
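The arithmetic involved is simple but the classification of costs is not; the sketch below, with wholly hypothetical cost items and figures, illustrates the kind of avoidable-cost comparison described above. It is an illustration of the logic only, not a statement of how any authority actually applies the test.

```python
# Illustrative avoidable-cost comparison (hypothetical figures).
incremental_units = 100_000        # extra output sold during the low-price episode
price = 4.20

costs = {
    "materials":           260_000,   # avoidable: varies directly with output
    "direct labour":       120_000,   # avoidable: hired for the episode
    "extra distribution":   45_000,   # avoidable
    "depreciation":         80_000,   # not avoidable in the relevant period
    "property taxes":       30_000,   # not avoidable
}
avoidable = {"materials", "direct labour", "extra distribution"}

avg_avoidable_cost = sum(v for k, v in costs.items() if k in avoidable) / incremental_units
print(f"Average avoidable cost: {avg_avoidable_cost:.2f} vs price {price:.2f}")
# A price below average avoidable cost would, on an Areeda-Turner style test,
# point towards predation; a price above it would not.
```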

CASE STUDY 6: AKZO

The classic finding of predation in EU competition law is the AKZO case.[48] AKZO Chemie was the major European producer of organic peroxides, one of which, benzoyl-peroxide, was used in flour additives in the UK and Ireland. Most sales of organic peroxides, however, were in the European plastics market, where AKZO was a dominant supplier.[49] ECS, a UK producer of flour additives, began to produce benzoyl-peroxide for its own use in 1977 after a series of price rises by its main supplier, AKZO.[50] When ECS started to expand into the more lucrative European plastics market in 1979, AKZO responded with direct threats of overall price reductions in the UK flour additives market, and price cuts targeted at ECS's main customers, if ECS did not withdraw from the plastics sector. In December 1979, ECS was granted a High Court injunction in the UK to prevent AKZO from implementing its threats. An out-of-court settlement was subsequently reached in which AKZO undertook not to reduce its selling prices in the UK or elsewhere 'with the intention of eliminating ECS as a competitor'.

Prior to the dispute AKZO regularly increased its prices to its UK customers by increments of 10%. ECS tended to follow AKZO's UK price increases whilst maintaining its own prices approximately 10% below AKZO's. In March 1980, following the out-of-court settlement, AKZO again increased its UK prices by 10%, but on this occasion ECS did not follow, increasing the normal price gap between the two companies. Some of AKZO's large customers subsequently approached ECS for price quotations. AKZO responded by matching or bettering ECS price offers, and undercutting ECS's prices to its own customers.[51] This resulted in AKZO gaining market share at the expense of ECS.

The price history as described by the Commission would appear to be consistent with vigorous price competition following a breakdown of previously co-ordinated pricing strategies, or with predation.[52] The Commission concluded in favour of predation on the basis of internal AKZO documents which indicated that eliminating ECS was its strategy, and internal management documents apparently demonstrating that AKZO's prices for selected customers were less than average variable, or marginal, costs. The Commission argued that AKZO's predatory behaviour was creating a barrier to entry, and pointed to evidence of other predatory episodes as well as evidence of financial difficulties at ECS which limited its ability to sustain a prolonged price war.

This case contains practically all of the ingredients required for successful predation. AKZO had significant market power in each of the markets in question (that is, large market shares in both the UK flour additives market and the EU plastics market), evidence of predatory intent was given, as well as evidence of previous predatory episodes. Prices were found to be below average variable cost in targeted market segments and ECS was found to be financially constrained. In addition, AKZO was apparently targeting a market of minor importance to protect its more lucrative European plastics market, so minimising the costs of predation while inflicting maximum damage on its competitor.

The case has received widespread attention. On appeal the ECJ supported the main findings of the Commission, despite a dissenting opinion by the Advocate-General,[53] and AKZO was fined ECU 7,500,000.

[48] Phlips, L. and I.M. Moras, 1993, The AKZO Decision: A Case of Predatory Pricing?, Journal of Industrial Economics, 41: 315-21.

[49] AKZO's market share in the EC market for organic peroxides in 1981 was 50%.

[50] In 1982 AKZO's market share in the UK flour additives market was 52%, followed by ECS with a market share of 35%, and Diaflex with 13%. Diaflex purchased its raw materials from AKZO. There were three large buyers of roughly comparable size with a combined market share of 85%, plus a number of smaller independent flour mills.

[51] Diaflex followed suit and offered prices similar to those quoted by AKZO to two large independent customers of ECS. Price cutting continued until 1983, when ECS was granted interim measures by the Commission.

[52] See Phlips, L. and I.M. Moras, 1993, op. cit., who interpret the price history as evidence of 'the reaction of a dominant firm that lost its price leadership and tries to discipline a deviant', the result being a shift from 'a price leadership situation towards a more competitive one'. We are not unsympathetic to this interpretation, although their argument that the market was characterised by 'complete information', making predation a non-credible strategy, strikes us as far-fetched.

[53] The Advocate-General disagreed with the Commission's approach to market definition, and argued that, in any case, it was not sufficiently proved that AKZO held a dominant position in the relevant market. He also found insufficient evidence of abuse of a dominant position.

Evaluation of efficiency defences

2.53 A major area of economic antitrust analysis that requires quantification is the so-called 'efficiency defence'. This relates to claims of efficiency gains from certain restrictive practices between firms and from proposals for establishing joint ventures or full-blown mergers. Economics distinguishes three types of efficiency, and each presents a number of problems when it comes to empirical verification:

• allocative efficiency, which means that prices reflect costs such that firms produce relatively more of what people want and are willing to pay for. As a result, resources are allocated within the economy in such a way that the output most valued by consumers is produced;

• productive (internal) or technical efficiency, which means that, given output, production takes place in practice using the most effective combination of inputs: so productive efficiency implies that internal slack is absent; and

• dynamic efficiency, which means that there is an optimal trade-off between current consumption and investment in innovation and technological progress.

2.54 The role of antitrust legislation is to improve allocative efficiency without restricting the productive and dynamic efficiency of firms to the extent that there is no gain, or a net loss, in consumer welfare.[54]

[54] Bork, R., 1978, The Antitrust Paradox, Maxwell Macmillan: Oxford.

Allocative efficiency

2.55 Traditionally, the economic literature has put particular emphasis on how competition might promote allocative efficiency. This mechanism is easier to understand and study empirically than other notions of efficiency. Quite simply, competitive pressures tend to push prices towards marginal costs by eroding market power. Given a certain number of firms in a market, the alignment of prices with marginal costs generates allocative efficiency. In this perspective, collaboration between independent firms or mergers that reduce the number of firms are unlikely to promote allocative efficiency. The verification or falsification of this hypothesis is the subject of the price-cost margin and concentration ratio analysis discussed above in paragraphs 2.42 to 2.52. Such a test is designed to support or reject the hypothesis that allocative efficiency is being impaired by a merger proposal.

2.56 However, in some circumstances marginal cost pricing may conflict with other objectives. In particular, in the presence of increasing returns to scale, increasing the competitive pressures on prices may not give the optimal incentive for market entry. More competition (in the sense of more firms) causes prices to fall towards marginal costs, but at the same time less advantage is taken of scale economies related to fixed entry costs, and so average cost rises. Under fairly general conditions, the negative externality that an additional entrant imposes on existing firms, by taking business from them, may outweigh the positive externality to consumers in terms of lower prices.

2.57 In this perspective the assessment of economies of scale and, in particular, their empirical verification becomes central to an antitrust assessment. The measurement of economies of scale is, however, not a trivial task.[55] While it is in general possible to use accounting data to derive point estimates of costs at different levels of output, it is only through a proper econometric estimation of cost functions that more reliable estimates can be obtained. Numerous empirical studies of economies of scale have been undertaken at the industry level.[56] In the European context the study by Davies and Lyons is a good example.[57] Another excellent study that goes beyond cross-sectional analysis of industries and tries to explore the way unique cost structures govern the performance of selected industries is Sutton.[58] In his path-breaking study he deals with, among others, the salt, sugar, soft drinks and beer industries and shows how exogenous sunk costs (technological) and endogenous sunk costs (advertising) interact with economies of scale and determine minimum boundaries of concentration for a specific industry.

[55] Not least because many production processes are multi-product and not simple single production processes. See Baumol, W.J., J.C. Panzar and R.D. Willig, 1982, Contestable Markets and the Theory of Industry Structure, Harcourt Brace Jovanovich: New York.

[56] See, for example, the overview by Schmalensee, R., 1989, Inter-Industry Studies of Structure and Performance, in R. Schmalensee and R.D. Willig (eds.), The Handbook of Industrial Organisation, Volume II, North Holland: New York.

[57] Davies, S. and B. Lyons, 1996, Industrial Organisation in the European Union: Structure, Strategy and Competitive Mechanism, Clarendon Press: Oxford.

[58] Sutton, J., 1991, Sunk Costs and Market Structure: Price Competition, Advertising and the Evolution of Concentration, MIT Press: Cambridge.

2.58 In the context of antitrust proceedings the estimation of economies of scale is very time-consuming and cannot usually be undertaken in merger investigations that are subject to tight deadlines. However, major industry investigations allow room for more in-depth research. In the UK the beer industry[59] and the salt industry,[60] for example, have been extensively investigated. In Europe, the sugar industry has been investigated by the European Commission on a number of occasions.[61]

[59] Monopolies and Mergers Commission, 1989, The Supply of Beer.

[60] Monopolies and Mergers Commission, 1986, White Salt.

[61] The European Commission has undertaken several antitrust investigations, including Irish Sugar, case 97/624, Sugar Beet, case 90/45, and Napier Brown/British Sugar, case 88/518.

2.59 Another area of antitrust where cost structures and, in particular, economies of scale are absolutely central to the considerations of an investigating authority is the failing company defence. Only if economies of scale exist and are very large with respect to the size of a market can this argument for a merger be made. The acquisition of British Caledonian by British Airways in 1987 was such a case, albeit controversially so. The failing company defence was also used in the Boeing/McDonnell Douglas merger, summarised in Case Study 5 above. In the European Union the issue of unviable cost structures and the need to achieve efficient levels of production is the key element in State Aid proceedings under Article 92 of the Treaty of Rome. Typically the Commission undertakes an industry and cost analysis before deciding whether or not to allow state support to an individual company.[62]

[62] For a review of the state aid system see Hancher, L.T., T. Ottervanger and P.J. Slot, 1994, Chancery Law Publishing, and Chapter 12 in London Economics, 1997, Competition Issues, Volume 3, Subseries V, Single Market Review 96, Office for Official Publications of the European Communities.

Productive efficiency

2.60 The causality between competition and productive efficiency is deeply rooted in economic folklore: starting from Hicks' notion that 'the best of all monopoly profits is a quiet life', economists have always had a 'vague suspicion that competition is the enemy of sloth'.[63] The theoretical literature is not in agreement on the exact nature of this relationship. However, the empirical literature provides a relative wealth of evidence to support the notion that competition enhances productive efficiency. To mention but a few, Caves and Barton,[64] and Caves,[65] use frontier production function techniques to estimate technical efficiency indices in a number of industries, and relate these to concentration (as a proxy for competition). They find that increases in concentration beyond a certain threshold tend to reduce technical efficiency. Nickell[66] finds that market concentration has an adverse effect on the level of total factor productivity. This means that, all other factors being equal, an increase in market concentration should be followed by a fall in productivity.

[63] Caves, R., 1990, Industrial Organisation, Corporate Strategy and Structure, Journal of Economic Literature, 64-92.

[64] Caves, R. and D.E. Barton, 1990, Efficiency in US Manufacturing Industries, MIT Press: Cambridge.

[65] Caves, 1992, Productivity Dynamics in Manufacturing Plants, Brookings Papers on Economic Activity, 187-267.

[66] Nickell, 1992, Productivity Growth in UK Companies, 1975-86, European Economic Review, Vol. 36, 1055-91.

2.61 The MMC has not relied on such techniques very often. The major exception is the assessment of mergers in the utility sector, where the justification depended on claims of significant productive efficiency gains. Two parallel merger references to the MMC in 1996 are good examples: Severn Trent/South West Water and Wessex Water/South West Water. Both relied on extensive empirical analysis of expected efficiency gains, with several consultant studies being submitted to the MMC.

[67] Monopolies and Mergers Commission, 1995, South West Water Services Ltd: A Report on the Determination of Adjustment Factors and Infrastructure Charges for South West Water Services Ltd.

CASE STUDY 7: SOUTH WEST WATER SERVICES LTD

In the South West Water Services Ltd (SWWS) case of 1995,[67] the MMC was required to determine the adjustment factor (K), and the standard amounts charged for infrastructure, as calculated for SWWS from 1995 to 2005. The adjustment factor is the percentage by which the weighted average charges for the supply of water and sewerage services are allowed to change relative to the retail price index. Infrastructure charges are a way of recovering the costs of making new water and sewerage connections. SWWS's adjustment factors and infrastructure charges were significantly greater than those calculated by the Director of Water Services.

The Office of Water Services (OFWAT) commissioned an analysis of the efficiency of the various companies providing water and sewerage services throughout England and Wales. Several regressions were then undertaken for water and sewerage services. These regressions sought to explain some of the variation between companies in the costs of carrying out certain activities in terms of physical or demographic variables not directly under managerial control. Once the appropriate variables were accounted for, any remaining variation was attributed to errors in the data, the fit of the model, or the greater or lesser efficiency of the company. The results of these regressions were then used to rank the companies into ten efficiency bands in order of the difference between their actual operating expenditure and the operating expenditure predicted by the regression equation.
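The following is a minimal sketch, on simulated data, of this style of benchmarking regression: operating expenditure is regressed on cost drivers outside managerial control and companies are ranked by their residuals. The variable names, drivers and figures are hypothetical and are not taken from the OFWAT analysis.

```python
# Illustrative efficiency-benchmarking regression (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_companies = 10
drivers = pd.DataFrame({
    "properties_000": rng.uniform(200, 2000, n_companies),   # properties served
    "mains_length_km": rng.uniform(1000, 15000, n_companies),
})
true_opex = 5 + 0.02 * drivers["properties_000"] + 0.001 * drivers["mains_length_km"]
opex = true_opex * rng.uniform(0.9, 1.2, n_companies)   # company-specific (in)efficiency

X = sm.add_constant(drivers)
res = sm.OLS(opex, X).fit()
efficiency_gap = opex - res.fittedvalues    # positive residual = costlier than predicted
print(efficiency_gap.sort_values())         # companies at the top spend less than predicted
# In the OFWAT exercise such residuals were grouped into efficiency bands.
```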

OFWAT also commissioned Data Envelopment Analysis (DEA). Separate DEA runs were carried out for water distribution and treatment. The sum of the expected costs for each company from these two runs was then added to overall average water business activities costs, and the result divided by actual distribution, treatment and business activities costs to give an overall efficiency ratio. For most of the companies the results of the DEA runs were similar to those of the regressions. Where they were significantly better, that particular company was raised an efficiency band. The same methods were then used to analyse sewerage services.

The MMC mentioned that the use of DEA in this context was relatively novel and that it requires further development. Presently it is used to test and confirm the results produced by other, less formal, analysis.

On the basis of this and other evidence, the MMC's findings broadly followed those of OFWAT, although the MMC did allow a slightly larger adjustment factor to account for the substantial investment programme being undertaken by SWWS to meet environmental standards. Generally, however, the MMC felt that SWWS's rate of return was well in excess of the cost of this capital and that there was scope for reducing rates of return to the benefit of customers without hindering the company's ability to finance its investment programme.

Dynamic efficiency

2.62 Dynamic efficiency is defined as the optimal trade-off between current consumption and investment in technological progress. The intensity of competition may be expected to affect the incentive to undertake research and development, since it will condition the firm's rewards from innovation.[68] Early discussions of the relationship between product-market competition and innovation (for example, that of Schumpeter) held that the driving force in the process was firms with ex-ante market power, rather than just the prospect of market power. In this Schumpeterian view, as the product market becomes more competitive, the payoff to innovation would become lower, and the incentive for research and development would be blunted.

[68] It is worth pointing out that there is, in principle, a trade-off between allocative and dynamic efficiency: costs might fall as a result of competition in technological innovation, yet this may actually lead to increased market concentration. Prices in excess of marginal costs might also be necessary in order to give firms a suitable return on their R&D effort; see von Weizsäcker, C.C., 1980, A Welfare Analysis of Barriers to Entry, Bell Journal of Economics, Vol. 11: 399-420.

2.63 However, this conjecture has been extensively challenged. More product market competition could lead to stronger incentives to innovate, since a potential benefit of innovation is escape from tough competition by earning a monopoly right to an invention protected by a patent.[69] The more recent theoretical literature on the subject has treated innovation as a patent race between firms, where the 'prize' for being first (and so being able to appropriate the profits from the innovation) is the incentive spurring firms along. These micro models of research and development investments actually suggest that competitive pressures typically boost, rather than dampen, innovation. The optimal innovation pace will accelerate under the threat that actual (or potential) competitors may register the patent first.

[69] Arrow, K.J., 1962, Economic Welfare and the Allocation of Resources for Invention, in NBER, The Rate and Direction of Inventive Activity: Economic and Social Factors, Princeton University Press: 609-25.

2.67 These models are predicated on the existence of patents and other intellectual property rights (IPRs) to provide the necessary reward function for research and development. If these are missing, too narrowly defined, or costly to enforce, then achieving the correct trade-off between allocative and dynamic efficiency becomes an issue for antitrust authorities. Similarly, there may be cases where IPRs are too widely defined or are used to leverage legally protected market power into other markets where these do not have any justification.

2.68 In the US the recognition of the importance of dynamic efficiency has led to a debate over the best ways of incorporating these considerations into competition policy practice. Various proposals have been put forward. The two most relevant approaches are (i) to extend the time scale for considering potential entry within the traditional analytical framework from the usual two years to four years, and (ii) to separately identify so-called innovation markets and analyse the nature of competition within them.

2.69 An innovation market is a term capturing the research and development activity that occurs, normally within a company, which provides the springboard for future generation products and processes.[70] 'Future generation products' are products that do not presently exist but will result from current, or proposed, research and development. While it is relatively easy to talk about these different markets conceptually, as one moves from the first to the third market the uncertainties of analysis rapidly increase. These uncertainties are associated both with data availability and with one's ability to analyse the interaction of technology, competitive behaviour and market structure. This makes the application of the innovation market concept difficult in practice, and in the US the debate about its applicability is still ongoing.

[70] Gilbert, R.J., 1995, The 1995 Antitrust Guidelines for the Licensing of Intellectual Property: New Signposts for the Intersection of Intellectual Property and Antitrust Laws, paper given at the ABA Section of Antitrust Law Spring Meeting, Washington DC.

2.70 In Europe, innovation markets have not been explicitly recognised. However, according to John Temple Lang,[71] in a number of cases involving high tech industries the Commission has arrived 'at much the same results by using the more traditional concept of competition by two companies in research and development directed towards the same goal.' Cases mentioned by Temple Lang include Upjohn-Pharmacia, Glaxo-Welcome, Elf Atochem/Union Carbide, and Enichem Union Carbide.[72] Temple Lang observes that these decisions of the Commission show a willingness to let mergers, joint ventures or other restrictive agreements go through.

[71] Temple Lang, J., 1996, Innovation Markets and High Technology Industries, paper presented at the Fordham Corporate Law Institute.

[72] Twenty-fifth Report on Competition Policy, European Commission, 1995.

2.71 In the UK, such issues have also played a role in recent cases involving video games and telephone number portability,[73] where an incumbent was found to inhibit innovation by imposing switching costs on entrants and their customers. The GEC/VSEL report[74] also canvassed these ideas with regard to the high tech issues in the defence industries.

[73] Monopolies and Mergers Commission, 1995, Video Games, and Monopolies and Mergers Commission, 1995, Telephone Number Portability.

[74] Monopolies and Mergers Commission, 1995, The General Electric Company plc and VSEL plc.

Conclusions

2.72 In this chapter we have provided an overview of some of the key issues in antitrust with the aim of showing how empirical economic analysis supports the application of competition law in practice. Four main areas of antitrust have been covered:

• market definition;

• market structure analysis;

• models of competition; and

• efficiency defences.

2.73 This overview has demonstrated that in many cases it is only by the use of appropriate statistical and econometric techniques that a case can be sensibly progressed and analysed. This is especially important under a rule of reason approach. Quantification of economic relationships in antitrust is not all about measuring demand-side substitutability for market definition purposes. There are many tests that have been applied by antitrust economists to determine, for example, the effect of a merger on prices, or to understand the cost savings of a merger in the utility industries. What is equally clear from this review is that the range of techniques is very wide, in terms of both technical and economic sophistication. For example, the analysis of price trends ranges from simple price comparisons across countries to the analysis of structural breaks or co-integration analysis of time series. Similarly, the analysis of demand can become very sophisticated once the interdependence of a system of demand equations is analysed simultaneously.

2.74 In the following chapters, we describe those statistical and econometric techniques that are most commonly used in antitrust analysis. The techniques are described in ascending order of difficulty, from the simplest comparisons of prices to the estimation of fully-fledged econometric models stemming directly from theoretical economic models.

PART II: STATISTICAL TESTS OF PRICES AND PRICE TRENDS

3 CROSS-SECTIONAL PRICE TESTS

3.1 Cross-sectional price tests use hypothesis testing to establish whether two sets of prices are uniform, taking into account differences in costs or other external forces that could affect prices. The two sets can pertain either to two geographic areas, to two products, or to two periods of time, and support the assessment of market power or the effect of cartelisation. These tests are based on comparisons of cross-sectional data, and make use of purely statistical tests;[75] that is, no economic theory or behaviour is explicitly analysed.

[75] Technically, a statistical test is a statistic calculated from a sample in order to test a hypothesis about the population from which the sample is drawn. When testing hypotheses concerning more than one population, the test statistic is computed from more than one sample.

3.2 The two sets of prices to be compared can be considered as random samples from two populations.[76] The test for price uniformity is then a test of the null hypothesis that the distributions of the two price populations are identical. The testing procedures vary according to the sampling methodology adopted. We will first consider the case where the two price samples are independent; then we will consider the case of paired samples. To give an example, if we want to establish whether there is evidence that prices are higher in one area than in another, due, for instance, to price fixing, then we collect price samples from the two areas; these samples can be considered independent. If we suspect that producers have fixed the price of a certain range of products at some point, we can compare two sets of prices before and after the alleged fixing has taken place; these samples cannot be considered as being independent of each other and will be paired.

[76] There are cases when the analyst needs to compare several samples of data. There are several tests available for this task. An excellent exposition can be found in Chapter 12 of Rice, J.A., 1995, Mathematical Statistics and Data Analysis, 2nd Edition, Duxbury Press.

Description of the technique: case of independent samples

3.3 Consider first the case of two independent samples. Two sets of prices, P1 and P2, are drawn from two normal distributions with means µ1 and µ2 and identical variance σ².[77] The average prices in the two samples are unbiased estimates of the population means µ1 and µ2, and the pooled sample variance S² is an unbiased estimate of the population variance. A test statistic of the hypothesis µ1 = µ2 when the sample sizes are n and m is given by:

$t = \frac{\bar{P}_1 - \bar{P}_2}{S\sqrt{1/n + 1/m}}$   [EQUATION 1]

which is distributed as a Student's t-statistic with degrees of freedom equal to the total number of observations minus 2. If the means of the two populations are the same, the estimated t has to be smaller than the tabulated critical values for those degrees of freedom and a significance level of 10% or less. As a rule of thumb, an estimated t value of less than or equal to two supports the hypothesis that prices are uniform across the two populations.

[77] Statistical packages such as SPSS will test the validity of this assumption and, if it is rejected, will present an alternative, more robust test.

Description of the technique: case of matched samples

3.4 We now consider the case of so-called paired samples. These samples are not independent because they consist of matched observations, often 'before' and 'after' measurements on the same set of prices. The sets of prices, P1 and P2, are drawn from two normal distributions with means µ1 and µ2 and variances σ1² and σ2². The test of the hypothesis that the two population means are the same is equivalent to a test of µ1 − µ2 = 0. The average prices in the two samples are unbiased estimates of the population means, and the sample variances (S1² and S2²) are unbiased estimates of the population variances. Then the unbiased sample estimates of (µ1 − µ2) and of its variance are $\bar{D} = \bar{P}_1 - \bar{P}_2$ and $S_D^2 = S_1^2 + S_2^2 - 2S_{12}$, where $S_{12}$ is the sample covariance between the paired prices. A test statistic of the hypothesis µ1 − µ2 = 0 is given by:

$t = \frac{\bar{D}}{S_D/\sqrt{n}}$   [EQUATION 2]

which is distributed as a Student's t-statistic with degrees of freedom equal to the total number of pairs minus one. Once the test statistic has been computed, the testing procedure proceeds as above.

3.5 Where possible, paired samples should be used. Since $S_D < S$, the paired-sample test is potentially a more powerful discriminator. So, for example, when analysing the supply of recorded music, the MMC compared prices of paired samples of CDs across various countries.[78]

[78] See Case Study 4 for a more detailed exposition of the MMC enquiry.

Data and computational requirements

3.6 The implementation of this test is quite straightforward and does not require sophisticated computer packages or computational skills, although a dedicated statistical package such as SPSS or SAS will often be convenient. It should be noted, however, that at least 20 observations (as a rough guide) are needed in order to apply this test. A problem can arise if the distribution of the price data is not normal but shows some degree of skewness. In such a case, the theory of normal distributions cannot be applied.[79] It is then advisable to transform the data before performing the t-test. The most popular transformations for dealing with skewness are taking logs or square roots.

[79] When data is not normal, alternative non-parametric testing procedures (such as the Wilcoxon test statistic) are available. See Rice, J.A., 1995, op. cit., for a discussion.

3.7 With SPSS the user is prompted as to whether the test is to be applied to independent or paired samples. If using a spreadsheet, the implementation of the relevant hypothesis test involves creating a new variable equal to the difference between each pair of matched prices. Then the mean and standard deviation of the new variable are computed: the test statistic is the ratio between these two values, multiplied by the square root of the number of pairs (see Equation 2).

Interpretation

3.8 As with all techniques, the data must be able to inform the analysis and be capable of rejecting a given hypothesis. In international price comparisons in particular, there are complications arising from the different treatment of taxes (in the US, for example, local sales taxes are added to the indicated price, whereas European prices might include VAT) and from the use of an appropriate exchange rate. The actual rate of exchange between two currencies does not make allowance for the different purchasing power that a unit of each currency possesses. This suggests the use of an exchange rate that reflects purchasing power parity (PPP) between the two currencies. While the use of PPP-adjusted exchange rates may in theory improve the usefulness of international price comparisons, they introduce a degree of uncertainty into the analysis, as the calculation of PPPs is not straightforward. Many formulas have been developed to calculate PPP rates, and which of them is best, if any, is a matter of wide debate. Selection problems also arise when using actual exchange rates; the choice between spot rates and averages over the relevant period has to be informed by a deep understanding of the data.

3.9 Provided the data is appropriate, the testing procedures we have just described lead to unequivocal conclusions about whether the means of the two distributions are statistically the same. However, if the two means turn out to be different, but their difference is very small, the analyst will have to use his or her judgement to decide how large a mean difference is needed to establish unequivocally that differences in product prices across the two sets are economically significant. Also, the fact that average prices turn out to be different in two countries can be ascribed to many different causes, as the MMC has found in its investigations.

Application: international comparison of prices

3.10 In 1993 there was a press campaign about prices of CDs in the UK being up to 50% higher than in the US. This case is more thoroughly discussed in Case Study 4 above. The following is a summary of the events that took place.

3.11 After a highly charged public debate and a parliamentary investigation by the Trade and Industry Committee of the House of Commons, the competition authorities were asked to investigate whether the record companies were preventing parallel imports of CDs to the disadvantage of UK consumers.

3.12 In the subsequent enquiry the MMC commissioned market research into the relative prices of a range of consumer goods in the US and the UK and received other evidence on the distribution of prices of recorded music across different categories, types of outlet and geographical areas.[80] Note that in this case establishing a price difference was not sufficient. The overall difference in prices across countries was also deemed relevant.

[80] Supra, footnote 38.

3.13 The UK tended to be cheaper than France, Germany or Denmark but dearer than the USA (after taking account of taxes). As might be expected, this was found to be sensitive to the exchange rate used. The report noted the difficulties of interpretation when using international price comparisons (see paragraph 7.105 of the report). However, this may be because no clear conclusions of excessive pricing were demonstrated.

3.14 Another frequent complaint of discriminatory pricing which adversely affects UK consumers concerns the pricing of motor cars in Europe. Large price differentials have been observed between prices of cars sold in Belgium and the UK.[81] The European Commission regularly publishes a report on price differentials across the EU.[82] Both the MMC and the European Commission's DGIV have undertaken major investigations into car pricing. The MMC reported its findings in a 1992 report,[83] and the European Commission recently fined the Volkswagen Group for distribution agreements and practices that were held to be anti-competitive because they stopped consumers from exploiting international price differentials.

[81] Ashworth, M.H., J.A. Kay and T.A.E. Sharpe, 1982, Differentials Between Car Prices in the United Kingdom and Belgium, IFS Report Series No 2, Institute for Fiscal Studies.

[82] European Commission, 1995, Car Price Differentials in the European Union on 1 May 1995, IP/95/768.

[83] Monopolies and Mergers Commission, 1992, New Motor Cars: A Report on the Supply of New Motor Cars Within the United Kingdom.

3.15 In the motor cars enquiries of the MMC and the European Commission's DGIV it was necessary to undertake price surveys across different countries. These surveys had to deal with the problem of comparing prices of models that are not sold to the same specification and therefore differ in terms of their characteristics. To deal with these problems of comparability, hedonic price indices had to be employed.

[84] For an excellent exposition of this technique, see Chapter 4 in Berndt, E.R., 1991, The Practice of Econometrics: Classic and Contemporary, Addison Wesley.

4 HEDONIC PRICE ANALYSIS

4.1 Hedonic price analysis is used to compare the prices of products whose quality changes over time or across product space, due to technological or subjective factors, or to other services and optional equipment. Typical examples of products whose quality differs at one point in time, or whose quality varies dramatically over time, are cars and computers. In such circumstances, price analysis has to be adjusted to account properly for quality differences or quality changes. Hedonic price analysis is a particular kind of regression analysis that has been developed to purge prices of the effect of quality differences, so that the pure price difference between 'standardised' products can be isolated. Purged prices can then be used to carry out other price tests.

Description of the technique[84]

4.2 We consider a product W with, say, three distinctive characteristics, (X,Y,Z); for example,if W were cars, the characteristics could be horsepower, weight, luxury or basic, etc.There are many brands of W, and each brand supplies more than one version of W. Howdo we compare the prices of the different versions of W for sale, and how do we comparethe price of W over time, let us say over three years, if (X,Y,Z) tend to change quickly.The price of W can be expressed as:

log P(W_i) = α_1 + α_2·D_2 + α_3·D_3 + b_1·X_i + b_2·Y_i + b_3·Z_i + u_i    [EQUATION 3]

where the subscript i denotes one of the many brands or versions of W available for sale. D_2 is a dummy variable equal to one in period 2 and to zero in the other two periods. Likewise, D_3 is equal to one in period 3 and to zero in the other two periods. u_i is a random error term with mean zero and constant variance. After data on price, and on characteristics X, Y, and Z are gathered for many brands and versions of W for the three periods, Equation 3 is estimated by OLS. Quality-adjusted price indices for the years 1, 2, and 3 can be obtained very simply by taking anti-logarithms of the estimated coefficients of the dummy variables, α_2 and α_3. Normalising the base year value to unity in year 1, the hedonic price index will be 1 in period 1, exp(α_2) in period 2 and exp(α_3) in period 3. So, if the hedonic price index is 1 in period 1 and 0.87 in period 2, the quality-adjusted average price of W has gone down by 13% between periods 1 and 2. If the non-adjusted price of W had actually gone up substantially, prompting allegations of price fixing or monopolistic abuse, hedonic price analysis would show that all the price increase was due to changes in the quality mix of the product, not necessarily to anti-competitive behaviour by the producers or retailers.
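
To make the mechanics concrete, the following is a minimal sketch in Python of estimating Equation 3 by OLS and converting the dummy coefficients into a quality-adjusted price index. The data, coefficient values and variable names are simulated purely for illustration and do not come from any study discussed in this paper; the same calculation can be run in any econometric package or spreadsheet with a regression routine.

```python
# Minimal sketch of the hedonic regression in Equation 3, estimated on
# simulated (hypothetical) data; names follow the text, not any real case.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
frames = []
for period in (1, 2, 3):
    n = 60                                   # versions observed in each period
    X = rng.uniform(60, 150, n)              # e.g. horsepower
    Y = rng.uniform(800, 1600, n)            # e.g. weight
    Z = rng.integers(0, 2, n)                # e.g. luxury (1) or basic (0)
    # 'true' model: quality effects plus a pure price fall of about 13% in period 2
    log_price = (1.5 + 0.004 * X + 0.0006 * Y + 0.10 * Z
                 + (-0.14 if period == 2 else 0.0)
                 + (0.05 if period == 3 else 0.0)
                 + rng.normal(0, 0.05, n))
    frames.append(pd.DataFrame({"log_price": log_price, "X": X, "Y": Y, "Z": Z,
                                "D2": int(period == 2), "D3": int(period == 3)}))
data = pd.concat(frames, ignore_index=True)

# Equation 3: log P_i = a1 + a2*D2 + a3*D3 + b1*X_i + b2*Y_i + b3*Z_i + u_i
fit = smf.ols("log_price ~ D2 + D3 + X + Y + Z", data=data).fit()

# Quality-adjusted price index, normalised to 1 in period 1
index = {1: 1.0, 2: float(np.exp(fit.params["D2"])), 3: float(np.exp(fit.params["D3"]))}
print("quality-adjusted price index:", index)
```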

Footnote 85: The R² measures the percentage of the variability in the dependent variable which is 'explained' by the regression. An R² of 0 means that the regression does not have any explanatory power, while with an R² of 1, 100% of the variability is explained by the regression ('perfect fit'). It should be noted that the R² tends to be quite low when the estimation is carried out with cross-sectional data, and this should be taken into account when interpreting the results.
Footnote 86: Ashworth, M.H., J.A. Kay and T.A.E. Sharpe, 1982, op. cit.


Data and computational requirements

4.3 The implementation of this technique, for the above example, requires a combination of time series and cross-sectional data on the product price and its characteristics. Data will be needed on a range of prices over a number of years. It is advisable to have a large enough number of observations for the results to be meaningful; the analysis should be carried out with at least 20 to 30 observations plus as many observations as the number of regressors in the estimated equation. The estimation of hedonic price regressions can be carried out using all econometric packages and those spreadsheets that have a built-in routine to run multivariate regressions.

Interpretation

4.4 The computation of hedonic price indices requires a detailed knowledge of the demand and supply of the product for which the index is calculated, as it is of crucial importance to the unbiasedness of the results that no significant quality variable is omitted from the regression. The quality of the results, as with all empirical analyses, depends crucially on how well the model explains the variability of the dependent variable (here, prices). This is measured by the R² of the regression. Another problem that is often encountered when dealing with cross-sectional data is that of heteroscedasticity, that is, non-constancy of the error term variance. Although heteroscedasticity does not affect the unbiasedness of the results, it invalidates the significance tests. Heteroscedasticity can be dealt with using weighted rather than Ordinary Least Squares regression: the procedure is available in most econometric packages.
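
As a purely illustrative sketch (simulated data, with the form of the heteroscedasticity assumed to be known), the weighted least squares correction mentioned above can be applied as follows; in practice the weights would have to be justified by an inspection of the residuals.

```python
# Minimal sketch of correcting for heteroscedasticity by weighted least
# squares, on simulated (hypothetical) cross-sectional data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(1, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 0.1 * x, n)   # error variance grows with x

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                        # still unbiased, but t-tests are invalid
wls = sm.WLS(y, X, weights=1.0 / x**2).fit()    # weights inversely proportional to variance
print("OLS slope:", round(ols.params[1], 3), "  WLS slope:", round(wls.params[1], 3))
print("OLS s.e.: ", round(ols.bse[1], 4), "  WLS s.e.: ", round(wls.bse[1], 4))
```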

Application: car price differentials

4.5 One of the most common tests of progress towards European Union market integration is the trend towards price convergence. The question is whether the prices charged in national markets for similar products differ significantly, and if so, whether prices are converging. In the case of cars, models differ widely across countries and their characteristics change substantially over time. Hence the particular nature of this market makes hedonic price analysis the best methodology for carrying out comparisons of prices between national markets.

4.6 For the UK, the first available study is a 1982 IFS report analysing car prices in the UK and Belgium. The report showed that the average price differential, at 39%, underestimated the quality-adjusted differential, which was 44%. Ten years later the MMC compared car prices in the UK, Germany, France, the Netherlands and Belgium. The study sought to control for differences in characteristics, but not all specification differences were eliminated. Also, comparisons referred to the year 1990, when the UK and the other countries were at very different phases of the economic cycle. Surprisingly, the MMC report found no significant differentials in general, and only significant differences in the prices of smaller models. The conclusions reached by the MMC report were not substantiated by further studies: a report by LAL found quality-adjusted differentials between the UK and other countries ranging from 13.8% to 35% for four models. All of the UK studies have used hedonic price analysis. The variation in their results may be due to incorrect filtering of the effect of the varying characteristics, which would lead to the results being somewhat biased.

Footnote 87: Monopolies and Mergers Commission, 1992, op. cit.
Footnote 88: Euromotor, 1991, Year 2000 and Beyond – The Car Marketing Challenge in Europe, Euromotor Reports.
Footnote 89: BEUC, 1989, EEC Study on Car Prices and Progress Towards 1992, BEUC/10/89.
Footnote 90: Flam, H. and H. Nordstrom, 1995, Why Do Pre-Tax Car Prices Differ So Much Across European Countries?, CEPR Discussion Paper No 1181.
Footnote 91: European Commission, 1995, op. cit.

4.7 At the European level, a report by the BEUC on behalf of the European Commission's DGXI estimated differentials of between 12% and 50% for UK cars compared to Belgium, Germany, Greece, Spain, France, Ireland, Luxembourg and the Netherlands. The results of the study are unreliable, as they were not quality-adjusted. Flam and Nordstrom found price differences as high as 50%, averaging at around 12%. Model comparisons carried out in 1995 by the European Commission showed differentials in excess of 20%. Although cars were compared by model, the specifications of each model tend to vary across countries and we expect the estimated differentials to be biased.

Table 2 Estimated Differentials UK – Other Countries

Study | Reference Year | Countries Used in the Comparison | Estimated Difference
IFS   | 1981           | B                                | 44%
BEUC  | 1989           | B, DK, G, GP, E, F, Irel, Lux, N | 12-50%
MMC   | 1990           | G, F, B, NL                      | None Significant
LAL   | 1991           | n.a.                             | 13.8-35%


4.8 Hedonic price adjustment is therefore an important tool to deal with the prices of differentiated products, but results are dependent on model specification of the quality attributes.


5 PRICE CORRELATION

5.1 Price correlation is frequently used to determine whether two products or two geographic areas are in the same economic market. It is also often used to measure the degree of interdependence between prices and market shares or the concentration of sellers. Correlation analysis does not allow the analyst to make an inference on the causation of the relationship between the two variables being examined, but only on their degree of association. As with other techniques, this provides one piece of evidence in a case.

Description of the technique

5.2 Correlation analysis is a statistical technique used to measure the degree of interdependence between two variables. Two variables are said to be correlated if a change in one variable is associated with a change in the other. This need not imply a causal relationship between the two since the movement in both variables can be influenced by other variables not included in the analysis. Correlation is positive when the changes in the two variables have the same sign (that is, they both become larger or smaller), and negative otherwise (that is, one becomes larger while the other becomes smaller). Variables that are independent do not depend upon each other and will only be correlated by chance ('spurious correlation').

5.3 The degree of association between two variables is sometimes measured by a statistical parameter called covariance, which is dependent on the unit of measurement used. The correlation coefficient between two variables, x_1 and x_2, is, however, a standardised measure of association between the two variables:

ρ_12 = σ_12 / (σ_1·σ_2)

where σ_12 is the covariance between x_1 and x_2, and σ_1 and σ_2 are the square roots of the variances of x_1 and x_2 respectively. The correlation coefficient is a number ranging between -1 and 1. A coefficient of -1 implies perfect negative correlation, a coefficient of 1 implies perfect positive correlation, and a coefficient of zero implies no correlation (although it does not necessarily imply that the variables are independent).

Footnote 92: Computing correlation coefficients with less than 15 observations is meaningless from a statistical point of view.
Footnote 93: Stigler, G.J. and Sherwin, R.A., 1985, op. cit.
Footnote 94: See Waverman, L., 1991, Econometric Modeling of Energy Demand: When are Substitutes Good Substitutes, in D. Hawdon (ed.), Energy Demand: Evidence and Expectations. Academic Press.
Footnote 95: ibid, p 562.
Footnote 96: Stigler G.J. and R.A. Sherwin, 1985, op. cit., are aware of this problem, and discuss the possible solutions.


Data and computational requirements

5.4 The implementation of price correlation tests requires time series of data which have at least 20 observations. It is customary to compute the correlation coefficient using the natural logarithm (log) of the price series, both due to efficiency reasons and because the first log difference is an approximation of the growth rate. Equal changes in the log represent equal percentage changes in price. Correlations should always be computed both between levels and differences in the log prices. The computational requirements to carry out the test are minimal; all statistical and econometric packages and most spreadsheets have in-built routines to compute correlation coefficients. Packages such as SPSS also provide results from significance tests on the estimated correlation coefficients.
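
A minimal sketch of the computation in Python follows; the two log price series are simulated and share a hypothetical common factor, so the purging step discussed in paragraph 5.6 below is also shown.

```python
# Minimal sketch of the price correlation test on simulated (hypothetical)
# log price series: correlation of levels, of first differences, and of
# series purged of a common factor.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
t = 60                                            # monthly observations
common = np.cumsum(rng.normal(0, 0.02, t))        # common factor, e.g. an input cost
log_p1 = 2.0 + common + rng.normal(0, 0.01, t)
log_p2 = 1.8 + common + rng.normal(0, 0.01, t)
prices = pd.DataFrame({"p1": log_p1, "p2": log_p2})

print("levels:     ", round(prices["p1"].corr(prices["p2"]), 3))
print("differences:", round(prices["p1"].diff().corr(prices["p2"].diff()), 3))

# Purge the common factor by regressing each series on it and correlating residuals
r1 = log_p1 - np.polyval(np.polyfit(common, log_p1, 1), common)
r2 = log_p2 - np.polyval(np.polyfit(common, log_p2, 1), common)
print("purged:     ", round(float(np.corrcoef(r1, r2)[0, 1]), 3))
```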

Interpretation

5.5 Stigler and Sherwin argued that given time series price data for two products or areas, the correlation coefficients between their levels and first differences can be used to determine whether these products or areas are in the same market. Prices can differ because of transport and transaction costs or because of temporary demand or supply shocks, so that the correlation coefficients will be less than 1 even in a perfect market. It is however impossible to determine how big the correlation coefficient needs to be in order for the analysis to conclude that two areas or products are in the same market. Stigler and Sherwin '... believe that no unique criterion exists, quite aside from the fact that the degree of correspondence of two price series will vary with the unit and duration of time, the kind of price reported, and other factors'. In other words, even if the estimated correlation coefficient is statistically different from zero, the economic interpretation of the test is not straightforward. This is due to the lack of an obvious cut-off point where it can be decided whether the estimated degree of interdependence between the prices can be taken as an indication of price uniformity.

5.6 A further problem with the use of the correlation coefficient is that if there are common factors influencing prices this statistic can lead to erroneous conclusions. To see why, consider the case of two producers using the same input, so that the prices of their products are highly correlated with the input price. The analyst will find a high correlation coefficient between the prices of the two products: that is because both their prices are influenced by the input price, not because they are interdependent. Unless the influence of common factors is purged, the use of the correlation coefficient as a test of price interdependence leads to wrong conclusions, regardless of the size or the statistical significance of the estimate. This is especially the case when the series cover periods of high inflation and are therefore trended, or when the data is seasonal. The influence of common factors can be purged by de-trending all the variables first or by using regression analysis: the price is regressed on the influencing factor (input price, or a time trend, or seasonal dummies, etc), and the residuals from that regression are taken to represent the purged series.

Footnote 97: Ibid.
Footnote 98: Ibid.

5.7 A further problem with using correlation analysis lies in the fact that price responses for some products, and in some areas, might be delayed. This would be the case when, for instance, prices are negotiated at discrete time intervals which are not synchronised: the analyst could find a very low correlation when in fact the series are highly correlated in the long run. A visual inspection of the plotted price series can be of help in such cases. Another instance when prices of products that are closely related have low correlation is when the products are good substitutes and their supply is elastic.

Application: wholesale petrol markets in the US

5.8 Stigler and Sherwin have used correlation analysis to test whether the cities of Chicago, Detroit and New Orleans are in the same market for wholesale petrol. They correlate monthly fuel prices in the three cities during the period 1980-82 inclusive. Stigler and Sherwin eliminate the effect of serial correlation by taking the first difference on every third price. They also remove the effect of common factors, which is a very important step in the analysis of petrol prices, as fluctuations in the price of crude oil tend to influence the price of refined petrol quite heavily. The resulting correlation coefficients are very high: the coefficient between New Orleans and Chicago is 0.792; that between New Orleans and Detroit is 0.967; and that between Chicago and Detroit is 0.77. These results indicated to the authors that the three cities are in the same economic market. However, correlation analysis, as the sole means of determining market breadth, is no longer considered a sufficiently robust approach.


Footnote 99: The test was developed in Horowitz, I., 1981, Market Definition in Antitrust Analysis: A Regression Approach, Southern Economic Journal, 48: 1-16.
Footnote 100: Note that it is common practice to use the log form when running regressions involving rates of growth, or when estimating demand functions. This has desirable properties for interpretation. However, it is an empirical matter whether the functional relationship is better estimated in levels or logs, and this can be tested.
Footnote 101: A time series is said to be stationary if the properties (that is, mean and variance) of its elements do not depend on time. This requires the mean and variance to be constant over time. If the absolute value of the auto-regressive parameter, δ, is equal to or bigger than one, the series is non-stationary because its variance becomes infinite over time.


6 SPEED OF ADJUSTMENT TEST

6.1 In this and the following two chapters we review quantitative techniques based on dynamic models. Economic theories tell us what happens in equilibrium; for example, economic theory predicts that if two homogeneous products are in the same market their prices have to be the same, making due allowances for transportation costs. Reality is however different. When shocks happen, there are adjustment lags before the system returns to equilibrium. Dynamic models account for this. The speed of adjustment test is a market definition test based on the idea that if two products are in the same market the difference in their prices is stable over time, so that relative prices tend to return to their equilibrium value after a shock. The variable of interest here is the price difference, and the underlying dynamic assumption is that the current difference between the two prices is a fraction of the difference observed in the last period. This technique has hardly ever been used, however, as it is fundamentally flawed.

Description of the technique

6.2 The test is carried out by estimating a linear relationship between current and past price differences:

(log P_1t − log P_2t) = α + δ·(log P_1,t-1 − log P_2,t-1) + u_t    [EQUATION 4]

where 1 and 2 denote the two products or regions; δ is a parameter measuring the speed of adjustment to the equilibrium; and u_t is a random error term with mean 0 and constant variance. Equation 4 represents a first order auto-regressive process, as the current value of the price difference is a function of its past value plus a random element with mean zero and constant variance. α is the long-run price difference. In order for this process to be stationary the absolute value of δ has to be less than one. If δ is equal to zero in Equation 4, adjustment is instantaneous. The larger the estimate of δ, the slower the adjustment process.

Footnote 102: See Stigler and Sherwin, op. cit., p. 583 for a discussion.
Footnote 103: Although we should expect this result, it sometimes turns out that δ is similar with different time frames. This makes interpretation difficult, and suggests that the estimate is not robust.
Footnote 104: For further discussion, see Werden, G.J. and L.M. Froeb, 1993, Correlation, Causality and All that Jazz: The Inherent Shortcomings of Price Tests for Antitrust Markets, Review of Industrial Organization, 8: 329-53, p. 341.


Data and computational requirements

6.3 To carry out this test, time series of price data is needed. The series are first transformed into logs and the price difference variable is created. Using linear regression analysis (OLS) this variable is regressed on a constant and on its lagged value. All statistical and econometric packages and most spreadsheets have in-built routines to estimate OLS regressions.
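
The regression just described can be sketched as follows in Python; the two log price series are simulated with a known adjustment parameter, purely for illustration.

```python
# Minimal sketch of the speed of adjustment regression in Equation 4,
# on simulated (hypothetical) log price series.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
t = 80
log_p2 = 1.0 + np.cumsum(rng.normal(0, 0.02, t))   # log price of product/region 2
gap = np.zeros(t)
for s in range(1, t):                               # price gap reverts with delta = 0.6
    gap[s] = 0.04 + 0.6 * gap[s - 1] + rng.normal(0, 0.01)
log_p1 = log_p2 + gap                               # log price of product/region 1

d = pd.DataFrame({"gap": log_p1 - log_p2})
d["gap_lag"] = d["gap"].shift(1)

# Equation 4: gap_t = alpha + delta * gap_{t-1} + u_t
fit = smf.ols("gap ~ gap_lag", data=d.dropna()).fit()
print(fit.params)       # the coefficient on 'gap_lag' is the estimated delta
print(fit.tvalues)      # t-statistics, including the test of delta = 0
```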

Interpretation

6.4 A simple t-statistic is used to test the hypothesis that δ is equal to zero; the t-statistic will be automatically supplied with the regression output. If δ turns out to be bigger than zero, the analyst is faced with the same problem mentioned in relation to the correlation test, that is, to determine the critical value of δ in economic terms. Again, there is no predetermined rule.

6.5 The speed of adjustment test has serious drawbacks. First of all, it is sensitive to the frequency of observation: a slow adjustment with daily data might appear as instantaneous with quarterly or annual data; for example, a 5% adjustment on a daily basis implies adjustment within a month. Secondly, the above model assumes that the u_t are serially independent. This would be violated if each price series itself follows a first order auto-regressive process: in that case the estimated δ measures the degree of dependence of each price on its past value, not whether the price difference is constant or not. So, if the price series are highly auto-correlated the estimated δ will be overestimated and the analyst will draw the erroneous conclusion that the speed of adjustment is slow. Similar problems arise if the price series are trended or follow a seasonal pattern. As it is quite likely for price data to exhibit these characteristics, the use of this technique is not advisable. Finally, this technique is overly restrictive because it imposes a particular pattern on the dynamic adjustment process.

Footnote 105: Granger causality is a specific econometric concept which does not necessarily imply causality in the normal sense of the word.


7 CAUSALITY TESTS

7.1 Rather than looking at the degree of interdependence between two prices, causality tests seek to determine if there is causation from one series to the other, or if they mutually determine each other. Although the definition of causality is not a straightforward matter, one econometric testing procedure has gained considerable success in recent years: Granger causality testing.

Description of the technique

7.2 The idea behind Granger causality is simple: consider two time series of price data, P_1 and P_2. P_2 is said to Granger-cause P_1 if prediction of the current value of P_1 is enhanced by using past values of P_2. This can be shown using Equation 5:

P_1t = Σ_{s=1..T} β_s·P_1,t-s + Σ_{s=1..T} γ_s·P_2,t-s + u_t    [EQUATION 5]

where u_t is a random error term with mean 0 and constant variance. The empirical implementation of the test proceeds as follows. P_1 in Equation 5 is regressed on its past values and on past values of P_2. Although the choice of lags is arbitrary, in order to avoid omitted variable bias it is customary to start with a high number of lags, choosing the same number of lags for both price series, and then reduce the number of lags by dropping those that are not significant. The analyst will have to keep in mind, however, that the lagged price variables typically tend to be highly correlated. This creates multicollinearity, resulting in very high standard errors, and therefore low t-ratios. The choice of the number of lags has to be made bearing this problem in mind. If past levels of P_2 have no influence in determining the current value of P_1, then all the coefficients on the lagged values of P_2 in Equation 5 have to be equal to zero. This hypothesis is tested by means of an F-test. The same method is followed to test whether P_1 Granger-causes P_2.
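
The F-test described above can be sketched in Python as follows; the series are simulated so that P_2 leads P_1, and the lag length is fixed at two purely to keep the example short.

```python
# Minimal sketch of the Granger causality F-test in Equation 5, on simulated
# (hypothetical) price series with a fixed lag length of 2.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
t = 120
p1, p2 = np.zeros(t), np.zeros(t)
for s in range(1, t):
    p2[s] = 0.7 * p2[s - 1] + rng.normal(0, 0.02)
    p1[s] = 0.5 * p1[s - 1] + 0.3 * p2[s - 1] + rng.normal(0, 0.02)   # P2 leads P1

d = pd.DataFrame({"p1": p1, "p2": p2})
for lag in (1, 2):
    d[f"p1_l{lag}"] = d["p1"].shift(lag)
    d[f"p2_l{lag}"] = d["p2"].shift(lag)
d = d.dropna()

# Unrestricted regression: P1 on its own lags and on lags of P2 (Equation 5)
exog = sm.add_constant(d[["p1_l1", "p1_l2", "p2_l1", "p2_l2"]])
unrestricted = sm.OLS(d["p1"], exog).fit()

# Joint F-test that both coefficients on lagged P2 are zero
print(unrestricted.f_test("p2_l1 = 0, p2_l2 = 0"))   # small p-value: P2 Granger-causes P1
```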

Data and computational requirements

7.3 The computational requirements to perform causality testing are pretty much the same as those for the speed of adjustment test. It is, however, advisable to have a substantially longer time series of data. This is because for each additional lag, two extra right-hand side variables are introduced into the equation and one additional observation is lost. For instance, if there are five lags in the regression, five observations are lost and there are ten regressors in the equation. A sample of about 50 observations would be the minimum requirement. To perform the estimation and testing it is preferable to use econometric packages rather than spreadsheets, because packages such as Microfit and PC-Give have built-in routines to perform the F-test for causality.

Footnote 106: See Kennedy, P., 1993, A Guide to Econometrics, 3rd Edition. Oxford: Basil Blackwell, page 68 for a discussion of this issue, and of this technique in general.

Interpretation

7.4 Causality testing is an atheoretical method and does not require assumptions on the dynamic properties of the price adjustment mechanism. As noted above, Granger causality is not causality in the normal sense of the word. At best it may provide circumstantial evidence. As with many statistical tests, a negative result may be easier to interpret than a positive one. Because of the nature of the possible spurious correlations, it is important to purge the price data of the effect of common factors. This can be done in two ways. First, the series can be purged independently by regressing each of them on the vector of common factors and using the residuals from those regressions as the purged variables. Secondly, the common factors can be added to the estimated Equation 5. If the influence of common factors is not eliminated, the results will be misinterpreted.

7.5 One major drawback associated with the use of this methodology is that the presence of auto-correlation in the error term attached to Equation 5 invalidates the F-test. In time series data, random shocks have effects that often persist for more than one time period; also, owing to inertia, past actions often influence current actions. In these cases it is said that the disturbances are auto-correlated, and their covariance is different from zero. The presence of serial correlation in the error term does not affect the unbiasedness of the estimated parameters, but invalidates the F-test. In order to solve this problem it is customary, before running the regression, to transform the data series so as to eliminate the auto-correlation in the errors. There is, however, great debate among econometricians as to how to transform the data and, more worryingly, the extent to which the test results change as a consequence of the transformation used.

7.6 Finally, the results obtained from this exogeneity test are not always clear-cut, and there is one problem that analysts often have to face. Suppose that the F-test results suggest that the coefficients on past values of P_2 in Equation 5 are simultaneously equal to zero, and that the t-tests on the individual coefficients show that one (or more) of them is significantly different from zero. How should this result be interpreted? It is up to the analyst's experience and wisdom to decide whether the size of the impact of such variables is economically significant.

Footnote 107: Slade, M.E., 1986, Exogeneity Tests of Market Boundaries Applied to Petroleum Products, The Journal of Industrial Economics, 34: 291-302.
Footnote 108: Slade added a vector of common factors to each regression in order to eliminate the effect of common factors. Such a vector contained quadratic functions of time. The length of the lags to be added to each equation was set at five, as the addition of five lags eliminated all traces of autocorrelation.

Application: US petrol markets

7.7 Margaret Slade has used causality tests to determine whether the north-eastern, south-eastern and western regions of the US are in the same market. Using weekly data on wholesale prices for the year from March 1981 to February 1982 (that is, 52 observations), Slade chose two cities for each region, namely Greensboro and Spartanburg for the South-East; Baltimore and Boston for the North-East; and Los Angeles and San Francisco for the West Coast. Causality tests were performed between each pair of cities, both within and across regions. The tests for exogeneity between each pair of cities led to the following conclusions. The South-East represents one geographic market. There is some evidence of interrelation between city pairs in the North-East and South-East, but it is weak and therefore inconclusive. The West Coast and the South-East form quite distinct markets, as might be expected.


Footnote 109: The interested reader will find an excellent exposition of time series techniques in Charemza, W.W., and D.F. Deadman, 1997, New Directions in Econometric Practice, 2nd edition. Edward Elgar.

8 DYNAMIC PRICE REGRESSIONS AND CO-INTEGRATION ANALYSIS

8.1 Dynamic price regressions and co-integration analysis techniques are used to determine the extent of the market and to analyse the mechanisms by which price changes are transmitted across products or geographic areas. Price adjustments across markets may take place over a period of time rather than instantaneously, so that assessing whether markets are integrated can depend critically on the length of the price adjustment. The reactive adjustment process to changes in one price through a set of products or geographic areas can be represented by a class of econometric models called error correction models (ECM). ECM can be used to test whether two or more series of price data exhibit stable long-term relationships and to estimate the time required for such relationships to be re-established when a shock causes them to depart from equilibrium. Although the analysis of prices alone is not sufficient to establish whether or not a market is an antitrust market, it is often the case that no other data but time series of prices are available to the analyst. In that case, the techniques developed in this chapter should be used, as they are the most correct ones from a statistical point of view.

Description of the technique

8.2 Consider the following general-lag model, where capital letters indicate natural logarithms:

P_1t = α_0 + β_0·P_2t + β_1·P_2,t-1 + λ·P_1,t-1 + u_t    [EQUATION 6]

Subtracting P_1,t-1 from both sides of the equation, and adding and subtracting β_0·P_2,t-1 on the right-hand side, after simple manipulations we obtain:

ΔP_1t = α_0 + β_0·ΔP_2t − (1−λ)·{P_1,t-1 − [(β_0 + β_1) / (1−λ)]·P_2,t-1} + u_t    [EQUATION 7]

where ΔP_1t = (P_1t − P_1,t-1), ΔP_2t = (P_2t − P_2,t-1) and u_t is a random error term with mean 0 and constant variance. α_0 is the long-term difference between the two prices. Equation 7 is called the Error Correction representation of Equation 6. The last term, (1−λ)·{P_1,t-1 − [(β_0 + β_1) / (1−λ)]·P_2,t-1}, is called the error-correction term because it reflects the current 'error' in attaining long-run equilibrium: it measures the extent to which the two prices have diverged. The parameter λ has to be less than one for the system to be stable, that is, to ensure convergence towards the equilibrium. Then −(1−λ) is negative, which implies that the deviation from the long-run equilibrium is corrected during the next periods. If λ were equal to zero, the adjustment would be instantaneous. Another advantage of using Equation 7 rather than Equation 6 is that by regressing ΔP_1t on ΔP_2t and the levels we have less multicollinearity, and therefore more precise estimates.

Footnote 110: Engle, R.F. and C.W.J. Granger, 1987, "Co-integration and Error Correction: Representation, Estimation and Testing", Econometrica, 55: 251-76.
Footnote 111: Op. cit., page 251.
Footnote 112: Ibid.
Footnote 113: Consider the simplest example of an I(1) series, a random walk. Let x_t = x_t-1 + u_t, where u_t is a stationary error term. We can see that x_t is I(1) as Δx_t = u_t, which is I(0). Now consider a more general form x_t = a·x_t-1 + u_t. If the absolute value of a is equal to 1 then x_t is I(1), that is, non-stationary. If the absolute value of a is less than 1 then x_t is I(0), that is, stationary. Formal tests for stationarity are tests of the null hypothesis that a = 1, hence the name unit root test. There is a wide variety of unit root tests available. The critical values for establishing whether the results of the test imply a unit root differ, however, according to what kind of integrated process is assumed, that is, a random walk, or a random walk with drift, or a random walk with drift and added time trend, etc. The setting up and interpretation of these tests require an experienced practitioner.
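
Returning to Equation 7, a minimal sketch of its estimation by OLS is given below. The series are simulated with a known speed of adjustment, and the unrestricted form of the regression is used purely for illustration.

```python
# Minimal sketch of estimating the error correction form in Equation 7 by OLS,
# on simulated (hypothetical) log price series.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
t = 200
p2 = np.cumsum(rng.normal(0, 0.02, t))              # log price 2, a random walk
p1 = np.zeros(t)
for s in range(1, t):                               # p1 adjusts towards p2 + 0.1
    p1[s] = p1[s - 1] + 0.3 * (0.1 + p2[s - 1] - p1[s - 1]) + rng.normal(0, 0.01)

d = pd.DataFrame({"p1": p1, "p2": p2})
d["dp1"], d["dp2"] = d["p1"].diff(), d["p2"].diff()
d["p1_lag"], d["p2_lag"] = d["p1"].shift(1), d["p2"].shift(1)

# Unrestricted form of Equation 7:
#   dP1_t = a0 + b0*dP2_t + g1*P1_{t-1} + g2*P2_{t-1} + u_t, with g1 = -(1 - lambda)
fit = smf.ols("dp1 ~ dp2 + p1_lag + p2_lag", data=d.dropna()).fit()
print(fit.params)
print("estimated speed of adjustment (1 - lambda):", round(-fit.params["p1_lag"], 3))
```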

8.3 Error correction models are a powerful tool in econometrics, as they allow the estimation of equilibrium relationships using time series of data that are non-stationary. Generally speaking, a stationary series has a mean to which it tends to return, while non-stationary series tend to wander widely; also, a stationary series always has a finite variance (that is, shocks only have transitory effects) and its autocorrelations tend to die out as the interval over which they are measured widens. These differences suggest that when plotting the data series against time, a stationary series will cut the horizontal axis many times, while a non-stationary series will not. Econometricians have discovered that many time series of economic data are non-stationary, more precisely that they are integrated of order 1. A series is said to be integrated of order 1 if it can be made stationary by taking its first difference. Two non-stationary time series are said to be co-integrated if they have a linear combination that is stationary. Engle and Granger have supplied many examples of non-stationary series that might have stationary linear combinations. Among them, they cited the prices of 'close substitutes in the same market'. Engle and Granger also showed that integrated series whose relationship can be expressed in the form of an ECM are co-integrated. So, rather than having to estimate statistical models using differenced data, and losing valuable economic information in the process, the problem of non-stationarity can be solved by estimating an ECM, from which we can estimate directly the speed of adjustment of price movements to their equilibrium relationship after a shock has taken place. That is, we can use price levels that contain more information than price differences to measure the reactive adjustment of prices between regions.

Data and computational requirements

8.4 Although the empirical implementation of the convergence test in Equation 7 does not appear difficult, as it involves running an OLS regression and using a t-test for the null hypothesis that λ is equal to zero, the situation can become complicated. This happens if the estimated λ is positive or greater than one in absolute value. Then there is evidence of non-stationarity and new solutions need to be found. The correct way of proceeding is as follows. First, the analyst has to test for stationarity in the two price series via a unit root test. If, and only if, the test results show that the data is non-stationary, then the analysis requires testing whether they are co-integrated. Testing for co-integration implies testing whether there exists a linear combination of the two series that is stationary. If evidence of co-integration is found, then the conclusion can be drawn that the relationship between price movements tends to equilibrium in the long run. So, if a simple ECM representation cannot be found, co-integration analysis becomes more sophisticated and needs to be carried out by experienced analysts. Moreover, if the analysis involves more than two prices, it can only be performed using specialist software. This technique requires the availability of long time series of data, with at least 50 observations.

Footnote 114: If two I(1) variables are co-integrated, then the regression of one on the other produces I(0) residuals. Most tests for co-integration are unit root tests applied to the residuals.
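
The sequence just described (unit root tests on each series, then a test for co-integration) can be sketched in Python with the routines available in the statsmodels library; the two log price series below are simulated so that they are I(1) and co-integrated.

```python
# Minimal sketch of the procedure in paragraph 8.4: augmented Dickey-Fuller
# unit root tests, then an Engle-Granger co-integration test, on simulated
# (hypothetical) log price series.
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(6)
t = 200
p2 = np.cumsum(rng.normal(0, 0.02, t))       # non-stationary, I(1)
p1 = 0.05 + p2 + rng.normal(0, 0.01, t)      # co-integrated with p2

for name, series in (("P1", p1), ("P2", p2)):
    stat, pval, *_ = adfuller(series)
    print(f"ADF test for {name}: statistic = {stat:.2f}, p-value = {pval:.3f}")

# Engle-Granger: are the residuals of P1 regressed on P2 stationary?
stat, pval, _ = coint(p1, p2)
print(f"co-integration test: statistic = {stat:.2f}, p-value = {pval:.3f}")
```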

Interpretation

8.5 There are some caveats that need to be carefully considered before drawing any conclusions from Equation 7, even when the estimated λ has the correct value and sign. First, it is advisable to account for the influence of common factors in much the same way as when using causality tests or correlation analysis. Secondly, the lag structure of the model may be more complex than in Equation 7. It is advisable to introduce more lags and test for their significance before discarding them. Thirdly, if the error term is auto-correlated this invalidates the significance test. To test for auto-correlation, the residuals from Equation 7 are regressed on their lagged values and on the regressors of Equation 7. An F-test is then used to test the joint significance of the coefficients of the lagged residuals.

8.6 The estimated speed of adjustment in Equation 7 might be too slow to make any economic sense. This problem is even more severe when co-integration tests are used to determine convergence. This is because co-integration is a long-run concept, and a co-integrating relationship will be found even when one price changes and it takes several years for the other price(s) to adjust.

8.7 All techniques based on the analysis of prices alone, including co-integration techniques, are very useful to define economic markets, but they should be used with care when establishing relevant antitrust markets. The fact that prices in one area are found to affect prices in another area is not sufficient proof of the existence of a wider antitrust market. What needs to be determined in defining an antitrust market is whether two areas that are in the same market at historical prices would still be in the same market if the producers in one area were to increase their price by some significant and non-transitory amount. This question cannot be answered by looking at price movements alone.

Footnote 115: Monopolies and Mergers Commission, 1991, Soluble Coffee, A Report on the Supply of Soluble Coffee for Retail Sale in the UK.
Footnote 116: Similar studies were undertaken in the course of the MMC's investigation into the supply of petrol in 1989 (Monopolies and Mergers Commission, 1989, The Supply of Petrol). One study was undertaken by the Department of Energy and the other by economic consultants on behalf of the Petrol Retailers' Association (PRA). The Department of Energy study examined the relationship between UK retail petrol prices and spot petrol prices in the Rotterdam market. It found that pump prices adjusted to spot prices, over the long run, in similar ways across the six markets investigated (Belgium, France, West Germany, Italy, Netherlands and the United Kingdom). There was also evidence that the UK prices were more responsive than elsewhere. The study also found that pump prices in all six countries followed movements in spot prices with a short lag. The study on behalf of the PRA looked at the relationship between crude oil prices and retail petrol prices in the UK. This study found that retail prices rose following a rise in crude prices in the period 1977 to 1984 but did not experience an equivalent fall when crude prices dropped during 1985 to 1988. The consultants also looked at the relationship between retail prices and the Rotterdam price for the same two periods. They concluded that margins between crude, Rotterdam petrol and UK retail prices had widened since 1985.


Application: the supply of soluble coffee

8.8 In 1990 the MMC was asked to investigate and report on the sale of soluble coffee in the UK retail market. The Director General of Fair Trading was concerned that The Nestle Company Ltd (Nestle) was in a dominant position, given that it supplied 48% of the volume and 56% of the value of soluble coffee for retail sale in the UK. Nestle's profitability was also found to be higher than that of the majority of other firms in the industry. The MMC confirmed that a scale monopoly existed with respect to Nestle (whose main brand is Nescafe) but did not find any behaviour that operated against the public interest.

8.9 Of particular interest to the Commission was the slow adjustment of soluble coffee prices to changes in the prices of coffee beans which, just prior to the time of the enquiry, had experienced major fluctuations. This can be seen in Figure 1 below. Nescafe was particularly slow to adjust to these price changes. Nestle claimed that several factors influenced its decision whether or not to transmit the frequent changes in green coffee bean prices to consumers. First, the level of price volatility would be confusing to the customer and be difficult for the trade as a whole to manage. Secondly, in order to maintain consumer confidence Nestle avoided sharp price changes by smoothing out price increases. The MMC considered the following two studies of the price transmission mechanism that applied quantitative techniques.


Figure 1: The Nescafe Wholesale List Price and the Green Bean Price Lagged

8.10 An internal study by the Ministry of Agriculture, Fisheries and Food (MAFF), comparing the prices of instant and ground coffee with changes in green coffee bean prices between 1979 and 1989, was based on quarterly data from the National Food Survey. MAFF analysed the correlation of retail prices of ground and instant coffee with the level of the raw bean equivalent price based on the sub-group indices of the Producer Price Index. The results from this analysis showed a closer relationship between ground coffee prices and raw bean prices than between soluble coffee prices and raw bean prices. The correlation became stronger if the price of ground coffee was lagged two quarters, while the instant coffee price was best correlated after one quarter. Furthermore, the study found that, on average, both instant and ground coffee prices respond more to raw bean price rises than to price falls.

8.11 GFL, the second largest supplier of soluble coffee in the UK market, submitted a study undertaken by economic consultants which analysed the relationship between input and output prices of soluble coffee. The analysis focused particularly on how the price changes in the green coffee bean market fed through to retail prices and the extent to which this transmission explained changes in retail coffee prices. The econometric estimation showed that an increase in the cost of beans for delivery led to an almost exact increase in retail selling prices. Furthermore, estimations of the relationship between green coffee bean prices and wholesale realisations (1981-90) found that these prices also moved closely together. Over 50% of the change in the purchase cost of beans fed through to wholesale realisations within three months, and 75% of any change in wholesale realisations was transmitted to retail prices in the same quarter. When testing for asymmetry in the relationship between green bean and output prices for periods of green bean price rises against price falls, the consultants found no asymmetry – both increases and decreases were reflected in output prices to a similar extent and within similar time periods. The price of Maxwell House (GFL's leading brand) was found largely to follow movements in the price of Nescafe.

Footnote 117: Langenfeld, J.L. and G.C. Watkins, 1998, Geographic Oil Product Market Test: An Application Using Pricing Data, Mimeo: LECG Inc.

Application: petrol markets in Colorado

8.12 Langenfeld and Watkins use co-integration analysis to test whether petrol prices in Denver are linked with those in the neighbouring towns of Tulsa, Kansas City, Cheyenne and Billings. They use weekly price series for the period from January 1992 to July 1997 inclusive, purged of the common effect of the price of crude oil. They find that Denver petrol prices have an equilibrium relationship with those in Tulsa, Kansas City, Cheyenne and Billings, although the relationship with Billings is somewhat weaker. The evidence therefore shows that these five towns are part of the same market for wholesale petrol.

Footnote 118: See Carlton, D.W. and J.M. Perloff, 1994, Modern Industrial Organization, New York: Harper Collins, for an excellent exposition of the theory of residual demand.

PART III: DEMAND ANALYSIS

9 RESIDUAL DEMAND ANALYSIS

9.1 The residual demand facing a firm or a group of firms is the demand function specifying the level of sales made by the firm or group as a function of the price they charge. The estimation of the residual demand allows the analyst to understand the competitive behaviour of a firm or group of firms, by accounting for supply substitution effects. In the discussion below, the term 'firm' is used, but it can be substituted with 'a group of firms' without loss of generality.

9.2 A firm operating in a competitive environment does not have the power to raise the price above the competitive level. Let us define own-price elasticity as the percentage decrease in any product's demand due to a percentage increase in its price, given that the prices of other products remain unchanged. Accordingly, the own-price elasticity will always be negative, although it is often discussed in terms of the absolute value. Any segment of a demand curve is said to be elastic if, when the price rises by x%, the quantity demanded decreases by more than x%. This corresponds to an absolute value of the elasticity larger than one. A firm operating in a perfectly competitive market faces an infinitely elastic residual demand curve. This is because if the firm raises the price of its product even slightly, it will lose all its customers to the competition. The fewer the competitive constraints from other products or firms, the less elastic the residual demand curve faced by the firm. What this implies is that, by reducing the quantity it supplies, the firm could cause a long-lasting price increase. So, the elasticity of the residual demand curve conveys invaluable information on the competitive situation of a firm. In general, the higher the elasticity, the lower is the potential power of the firm to force a significant and non-transitory price increase in the market for its product.

9.3 Formally, the residual demand faced by any firm is that part of the total demand which is not met by the other firms in the industry:

D_r(p) = D(p) − S_o(p)    [EQUATION 8]

where D_r(p) is the residual demand; D(p) is total demand; and S_o(p) is the supply of the other firms in the industry; all three are a function of own and competitors' prices, p. Equation 8 shows that the residual demand also depends on the supply response of the other firms. As with demand, supply is said to be elastic when a price increase of x% causes an increase in supply of more than x%. If there are n identical firms in the market, the demand elasticity for any of the n firms is given by:

ε_r = ε·n − ε_o·(n−1)        ε_r < 0; ε < 0; ε_o > 0    [EQUATION 9]

where ε_r is the price elasticity of residual demand, ε is the elasticity of total demand for the (homogeneous) product, and ε_o is the supply elasticity of the other firms in the industry. So, if n=1 the economic market is a monopoly, and the residual and total demand elasticities coincide. The larger n, the larger in absolute value ε_r will be, even if ε and ε_o are small, in simple economic models.

9.4 As pointed out by Werden and Froeb, in homogeneous product markets the elasticity of the residual demand conveys all the information that is needed to define an antitrust market, as defined through the hypothetical monopolist test. From an estimate of such elasticity, the analyst can infer whether a firm or a group of firms could cause a significant and long-lasting price increase. If that is the case, then the product sold by the firms, or the geographic area in which they operate, constitutes an antitrust market. Residual demand analysis is so powerful because it can be applied to any kind of market, and with due modifications it can be widened to allow for the analysis of markets with differentiated products. In what follows we discuss how residual demand analysis can be implemented empirically.

Footnote 119: Carlton and Perloff (1994, p. 102) calculate the elasticity of residual demand facing a firm operating in a market with increasing numbers of firms. Assuming that the elasticity of supply of firms is zero, they present three scenarios. In the first, total demand is inelastic (elasticity = -0.5) and the elasticity of residual demand varies between -5 (n=10) and -500 (n=1,000). When the elasticity of total demand is unitary, that of the residual demand lies between -10 (n=10) and -1,000 (n=1,000). Finally, with a moderately elastic total demand elasticity of -5, the elasticity of residual demand varies from -50 (n=10) to -5,000 (n=1,000). To show that the elasticity of residual demand depends crucially on the number of competitors, Carlton and Perloff, 1994, p. 103, compute it for US agricultural markets where the elasticity of supply is 0. The demand for many crops is fairly inelastic but farmer numbers are high. For instance, the elasticity of demand for apples is -0.21 but with 41,187 farmers in the market each farm faces an elasticity of -8,649. The sweet corn market discussed has roughly unitary elasticity (-1.06), but with 29,260 producers each farmer faces an elasticity of -31,353. What this implies is that if a single farmer increased its price by one thousandth of one per cent, its demand would fall by 31%. This is evidence enough to show that farmers are price-takers.
Footnote 120: Werden, G.J. and L.M. Froeb, 1993, op. cit.
Footnote 121: A model to analyse residual demand in markets with differentiated products was developed in Baker, J.B. and T.F. Bresnahan, 1985, The Gains from Mergers or Collusion in Product Differentiated Industries, Journal of Industrial Economics, 35: 427-44.

Description of the technique


9.5 The objective of residual demand analysis, applied to antitrust problems, is to obtain an unbiased estimate of the own-price elasticity of the residual demand curve. In general, this can be done by using Instrumental Variables (IV) or other simultaneous equations techniques, or simply a 'reduced form' simple regression. The empirical test revolves around finding an answer to the following question. Assume a group of firms have the following residual demand function:

G_i = f(P_i, X, Y)    [EQUATION 10]

where the subscript i identifies the group of firms, P_i is the price they charge, which subsumes their cost structure, X is a vector of cost shift variables affecting the group's rivals, and Y is a demand shift variable affecting the behaviour of consumers (such as income, for instance). The problem with the formulation above is that the quantity sold and the price charged by group i are simultaneously determined. In order to obtain an unbiased estimate of the residual demand the analyst needs to reformulate the problem in the following way. The price charged by group i depends on the costs its members face. Assume that there is a cost shift variable, Z, that only affects the costs of the firms in the group, not their rivals' costs. If the members of group i were able to transfer an increase in the price of Z directly onto the customers, then they would form an antitrust market. As G_i and P_i are simultaneously determined, and they are both a function of the cost shift variable Z, we can express the so-called reduced-form price and quantity equations as:

G_i = G(Z, X, Y)
P_i = P(Z, X, Y)    [EQUATION 11]

where 'reduced-form equation' means an equation in which all the right-hand side variables are not influenced by the left-hand side variable. Technically speaking, a reduced-form equation is one with no endogenous (simultaneous) variables on its right-hand side, and its estimation yields unbiased parameter estimates. It is customary in the literature to estimate the reduced-form price equation, rather than the quantity equation. There are two main reasons for this. First, it can be shown that a test of the null hypothesis that ∂P_i/∂Z = 0 is all that is needed to test for market power. If ∂P_i/∂Z = 0, the group of firms under investigation do not form an antitrust market because they cannot pass on price increases unique to the group to their customers, due to competition from other firms. Secondly, data on prices is often more precise than data on quantities, and this affects the precision of the estimates.

Footnote 122: A good example of this test can be found in Scheffman, D.T. and P.T. Spiller, 1987, op. cit., Table 5, p. 144.

9.6 To implement the test, the analyst must proceed as follows. First, data is gathered on the price, quantity sold and cost shift variable for the group of firms under analysis; on cost shift variables for rival firms in the industry; and on the demand shift variable. Secondly, the price equation is estimated by instrumental variable techniques. This requires regressing the quantity sold by the group of firms on (Z, X, Y):

G_i = a_0 + a_1·Z + a_2·Y + a′·X + u_i    [EQUATION 12]

Then, using the estimated parameters, the fitted values for G_i are calculated as:

Ĝ_i = â_0 + â_1·Z + â_2·Y + â′·X    [EQUATION 13]

Finally, the price equation is estimated as the last step of the instrumental variable (IV) regression procedure:

P_i = b_0 + b_1·Ĝ_i + b_2·Y + b′·X + e_i    [EQUATION 14]

This procedure provides the estimated price elasticity of the residual demand equation, which is given by 1/b_1. The estimate can be used to test the null hypothesis that such an elasticity is equal to any pre-specified value; the test statistic used will be a simple t-test. If the analyst is not interested in the estimated price elasticity, but only in whether this is perfectly elastic, that is, if costs limited to the group cannot be passed on because of competition from others, a simpler procedure can be followed. This involves estimating the reduced-form price equation:

P_i = c_0 + c_1·Z + c_2·Y + c′·X + v_i    [EQUATION 15]

and testing, by means of a t-test, the null hypothesis that c_1 = 0.
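
The two-step procedure in Equations 12 to 14, and the simpler reduced-form check in Equation 15, can be sketched as follows. The data are simulated and the reduced-form coefficients are arbitrary: the point is only to show the mechanics, with Z as the group-specific cost shifter, X the rivals' cost shifter and Y the demand shifter, all in logs so that 1/b_1 can be read as an elasticity.

```python
# Minimal sketch of Equations 12-15 on simulated (hypothetical) data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
t = 100
Z = rng.normal(0, 1, t)        # cost shifter specific to the group of firms
X = rng.normal(0, 1, t)        # cost shifter for rival firms
Y = rng.normal(0, 1, t)        # demand shifter (e.g. income)
G = 5.0 - 0.8 * Z + 0.3 * X + 0.5 * Y + rng.normal(0, 0.1, t)   # log quantity
P = 1.0 + 0.4 * Z + 0.2 * X + 0.1 * Y + rng.normal(0, 0.1, t)   # log price
d = pd.DataFrame({"G": G, "P": P, "Z": Z, "X": X, "Y": Y})

first = smf.ols("G ~ Z + Y + X", data=d).fit()        # Equation 12
d["G_hat"] = first.fittedvalues                       # Equation 13: fitted quantities

second = smf.ols("P ~ G_hat + Y + X", data=d).fit()   # Equation 14
b1 = second.params["G_hat"]
print("estimated b1:", round(b1, 3))
print("implied residual demand elasticity (1/b1):", round(1 / b1, 3))

# Equation 15: can group-specific cost increases be passed on at all?
reduced = smf.ols("P ~ Z + Y + X", data=d).fit()
print(reduced.t_test("Z = 0"))                        # t-test of c1 = 0
```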

Data and computational requirements

9.7 The implementation of this technique requires the use of time series of data on prices, quantities, cost and demand-shift variables. The analyst should make sure that there are enough observations for the results to be meaningful: a minimum of about 50 observations could suffice, but it is strongly recommended to use longer series, especially if lagged values of the dependent variable also need to be added to the regression to model price dynamics. The estimation of residual demand regressions can be carried out using any econometric package. The most user-friendly packages, such as Microfit, contain in-built routines to automatically perform IV estimation.

Footnote 123: Baker, J.B., 1987, Why Price Correlations Do Not Define Antitrust Markets: On Econometric Algorithms for Market Definition, Working Paper No 149, Washington DC: Bureau of Economics, Federal Trade Commission.

Interpretation

9.8 The estimation of the elasticity of residual demand requires an in-depth knowledge of the production process for the product or service under study. It is of crucial importance that the cost shift variable for the group of firms under analysis be chosen correctly: if the instrumental variable is poorly chosen, the estimated elasticity will be misleading. Also, it is crucial for the unbiasedness of the result that no important cost or demand shift variable be omitted from the analysis. One problem that is often encountered by analysts wanting to use this technique is the lack of data on cost shifters; more precisely, it is often very difficult to identify a variable that is a cost shifter only for the firm(s) of interest. In the particular case when a merger is investigated between a home producer and a producer abroad, the exchange rate can be used as a cost-shift variable. However, if there are many competitors in the home and foreign markets the costs are different and the estimation becomes extremely difficult.

9.9 Moreover, as pointed out by Baker, there are some technical problems associated with the use of this technique that have to be carefully addressed and solved before conclusions can be drawn from the results. These problems relate to aggregation across the firms in the group under analysis, and to the dynamic specification of the price regression (especially error auto-correlation).

9.10 In contrast to the empirical analysis of price movements, the analysis of residual demand stems from theoretical economic models: the estimated own-price elasticity is directly derived from an equilibrium model of supply, demand, and behavioural assumptions. Based on behavioural assumptions, residual demand analysis allows the analyst to draw conclusions on whether a group of firms could impose a non-competitive price for its product. If the answer is positive, then the analyst concludes that those firms represent an antitrust market. So, while price analysis is a useful tool in the definition of an economic market, that is, a market within which the law of one price holds, and can offer significant insights into antitrust markets, residual demand analysis is the technique that has to be used to define an antitrust market directly.

9.11 The definition of an antitrust market is a fundamental issue in merger analysis, where the ability of two merging firms to generate a significant and non-transitory price increase is the first and foremost indication of a likely anti-competitive outcome from a merger. However, it is important to notice that if the analysis reveals that the firm or group of firms under scrutiny have the ability to exercise some degree of market power, it is the analyst who has to determine whether such market power is large enough to generate concern.

Application: National Express Group plc and Midland Main Line Ltd

9.12 In 1996 the MMC investigated the National Express Group plc (NEL) and Midland Main Line Ltd (MML) case. In this instance the MMC was asked to examine the extent to which competition between NEL's coach services and MML's rail services was affected by the merger of these two companies. The quantitative evidence in the case centred around cross-price and own-price elasticities.

9.13 One study looked at the cross-price elasticities between coach and rail. The results of the study showed that there were positive and significant cross-price elasticities for rail travel with respect to coach prices. Another study examining the determinants of rail demand found a smaller relationship between the extent of coach competition and demand for rail services. These results may, however, be explained by a more aggressive pricing policy by rail companies in response to coach deregulation.

9.14 Several studies were also conducted on the own-price elasticities of rail. One study on second class journeys between Nottingham and London found an own-price elasticity of -1.48, implying that a 10% reduction in rail fares would increase demand by 14.8%. The MMC commissioned its own study to analyse elasticities for leisure passengers on this same route. Own-price elasticities were closer to zero than in the above studies, suggesting more inelastic demand for rail journeys into London. The demand for coach travel was found to be more sensitive to price changes. The cross-price elasticities found in the MMC-commissioned study indicate that there is some degree of substitution between rail and coach travel, and so demand for rail travel is to some extent responsive to coach fares.

9.15 The various parties to the merger also commissioned studies based on cross-price andown-price elasticities. The results of these studies were different to those reached by theMMC-commissioned study. Many criticisms centred on the routes that the MMC-commissioned study analysed, in particular the use of data from regional railways to drawconclusions about routes that included London as their origin or destination. There wasalso dispute about elasticities that included all travel, and not just leisure or businesstravel. These are just some of the factors that will affect the results of elasticity studies.


9.16 The MMC, in deciding on the case, accepted that rail services provided stronger competition to coach than coach services did to rail. However, the MMC felt that the evidence also suggested that coach services do provide a degree of competition to rail services, and that joint ownership of rail and coach services dilutes this competition.


[124] For residual demand analysis, see Chapter 9; for survey techniques, see Chapter 12.

[125] Harris, B.C. and J.J. Simons, 1989, Focusing Market Definition: How Much Substitution is Necessary?, Research in Law and Economics, 12: 207-226.
[126] Harris, B.C. and J.J. Simons, 1989, op. cit., page 211.


10 CRITICAL LOSS ANALYSIS

10.1 Critical loss analysis is a necessary complement to residual demand analysis and survey techniques,[124] as it supplies the critical value of the elasticity of residual demand to be used in antitrust analysis.

10.2 Firms operating in markets where there is some degree of market power will experience a loss in sales if they unilaterally raise the price of their product. This technique estimates the 'critical' loss in sales that would render unprofitable a unilateral price increase on behalf of a firm or group of firms. We can identify two effects on profits resulting from a unilateral price increase. On the one hand, as the price goes up some consumers switch to competitive or substitute products, and sales decline. On the other hand, profits on those sales that are retained increase. Any price increase is therefore only profitable if the second effect outweighs the first. Once the antitrust authorities have determined the size of the price increase, the investigator proceeds to establish whether such a price increase would, in fact, be profitable.

10.3 Critical loss analysis was developed by Harris and Simons,[125] who define the critical loss, 'for any given price increase, (as) the percentage loss in sales necessary to make the specified price increase unprofitable.'[126] After having assessed the critical loss, it is possible to determine whether it is likely to occur as a result of a merger or other potentially anti-competitive agreement. If the results show that the actual loss in sales caused by the price increase will be less than the critical loss, then that price increase would be profitable and the investigation will proceed further.

Description of the technique

10.4 This technique relies on the use of simulation rather than econometric methods to assess the critical loss. The empirical test is aimed at answering the following two questions. (i) Assume that a firm raises its price by Y%, and loses some customers as a result. What would be the loss in sales that would leave profits unchanged and so cause the firm (or group of firms) to be indifferent between raising the price or not? Such a loss in sales represents a critical value because for any larger loss it will be unprofitable to increase the price, and for any smaller loss increasing the price will be profitable. (ii) What would be the actual loss in sales that resulted from the price increase?

[127] This is Equation 13 in Harris, B.C. and J.J. Simons, 1989, op. cit. The reader is referred to the original article for the derivation of the formula.
[128] An antitrust market is defined as the set of products or the geographical area over which a hypothetical monopolist could impose a profitable 'small but significant and non-transitory price increase' (US DOJ Merger Guidelines). Such a price increase is often taken to be 5% (so Y = 0.05).


10.5 Under the assumptions that before the price increase the market is competitive, and therefore the price equals marginal cost, and that fixed and average variable costs remain unchanged after the price increase, the critical loss in percentage terms is equal to:[127]

Critical Loss = [Y / (Y + PCM)] × 100 [EQUATION 16]

where Y is the proportional hypothesised price increase[128] and PCM is the price–cost margin, equal to [(Initial Price – Average Variable Cost) / Initial Price]. For example, assuming that the initial price for the product of interest is 100 and that the average variable cost is 60, the PCM is 0.4; if the hypothesised price increase is 5% (Y = 0.05), then the critical loss is 11.1%.

10.6 The second step in the analysis is the assessment of the actual loss in sales due to the price increase. How many sales will be lost depends on the residual demand elasticity facing the firm or group under investigation. To each critical loss corresponds a critical elasticity, computed by dividing the critical loss by the assumed percentage increase in price. For the above example, the critical elasticity is 2.2. If the estimated residual elasticity is larger than this critical value, the actual loss will be larger than the critical loss, and the price increase will be unprofitable. We have discussed the estimation of residual demand elasticity in Chapter 9. This estimation is often difficult due to data limitations. In such a case, proxies for residual elasticity can be obtained by using the results from consumer surveys, described in Chapter 12, although this can be expensive and time-consuming.
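
To make the arithmetic concrete, the worked example in paragraphs 10.5 and 10.6 can be reproduced in a few lines of code; the figures used below (an initial price of 100, an average variable cost of 60 and a 5% hypothesised increase) are simply those of the example.

```python
# Critical loss (Equation 16) and the corresponding critical elasticity,
# reproducing the worked example in paragraphs 10.5 and 10.6.

initial_price = 100.0           # example values from the text
avg_variable_cost = 60.0
Y = 0.05                        # hypothesised price increase (5%)

pcm = (initial_price - avg_variable_cost) / initial_price   # price-cost margin = 0.40
critical_loss = Y / (Y + pcm) * 100                         # in percent
critical_elasticity = critical_loss / (Y * 100)             # critical loss / percentage price increase

print(f"PCM = {pcm:.2f}")                                   # 0.40
print(f"Critical loss = {critical_loss:.1f}%")              # 11.1%
print(f"Critical elasticity = {critical_elasticity:.1f}")   # 2.2
```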

Data and computational requirements

10.7 The estimation of the critical loss requires remarkably little information. The only two data points needed are the initial price and the average variable cost, and these are very easily obtainable. As for computational requirements, all that is needed is pen and paper, or a pocket calculator. The estimation of the actual loss is much more difficult, requiring estimation of the residual demand elasticity as in Chapter 9, or survey data as in Chapter 12.

Interpretation

10.8 Critical loss analysis provides a reliable rule upon which the profitability of price increases can be assessed. It provides a much needed benchmark for decision making in the definition of an antitrust market. This technique cannot be used on its own because, while it provides a yardstick, it does not tell us anything about the actual loss in sales which is likely to occur as a result of the hypothesised price increase.

10.9 The estimated critical loss is a function of the price increase and the price-cost margin. As shown in the table below, for a given price increase, the higher the price-cost margin, the lower the critical loss. For a given price-cost margin, the higher the price increase, the higher the critical loss. This sensitivity of the critical loss to the assumed price increase makes it very important that the hypothesised price increase is correctly chosen.

Table 3 Critical Losses in Sales Necessary to Make a Price Increase Unprofitable under Various Assumptions

                            Price Increase Assumptions
Price-Cost Margin           5%          10%         15%
20%                         20%         33%         43%
30%                         14%         25%         33%
40%                         11%         20%         27%
50%                          9%         17%         23%
60%                          8%         14%         20%
70%                          7%         13%         18%
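
The entries in Table 3 follow directly from Equation 16; the short sketch below regenerates them (to the nearest whole percentage point) for the margins and price increases shown.

```python
# Regenerate Table 3: critical loss = Y / (Y + PCM), expressed in percent.

price_increases = [0.05, 0.10, 0.15]
margins = [0.20, 0.30, 0.40, 0.50, 0.60, 0.70]

print("PCM   " + "".join(f"{y:>8.0%}" for y in price_increases))
for pcm in margins:
    row = "".join(f"{y / (y + pcm):>8.0%}" for y in price_increases)
    print(f"{pcm:<6.0%}{row}")
```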

10.10 For instance, say that a newspaper has 100 customers to whom it sells one newspaper each at $1, earns advertising revenues of $3 per customer and is earning a variable margin (over variable costs) of $3 per customer (that is, its variable costs are $1 per subscriber). If it raises price by $0.05 and loses only one customer, its profit gain is $0.05 × 99 (= $4.95) and its loss is $3 × 1 (= $3), and so this increase would be profitable. However, if it loses two customers, its profit gain is now $0.05 × 98 (= $4.90) and its losses are now $3 × 2 (= $6), and the price change is unprofitable. So, in this simple model, if the variable margin were 75% and the price increase were 5%, a loss of more than 1% of customers would make the price increase unprofitable (assuming no impact on marginal costs from the customer loss). By contrast, if the only revenue the newspaper received per reader were $1 for the price of the newspaper and the variable margin still equalled 75%, a 5% increase in price would require a 6% decline in subscribers to be unprofitable. Table 3 shows the minimum customer loss required for a price increase to be unprofitable for various assumptions on percentage price increases and variable margins (assuming no impact on marginal cost). Table 3 considers two situations: (i) the impact on profits without considering the impact on advertising revenues; and (ii) the impact on profits assuming that advertising revenue per subscriber is three times the original subscription price and that advertising revenue per subscriber does not change as a result of the price increase.[129]

[129] So, each lost subscriber results in lost advertising revenues equal to the original advertising revenue per subscriber. For example, if subscription prices are $1, we assume that advertising revenues per subscriber equal $3 (consistent with the ratio of advertising to subscriber revenues). If subscription prices increase to $1.05, we assume that advertising revenues per subscriber remain at $3. Each customer who stops subscribing due to the price increase results in lost revenues of $1 for the subscription and $3 for the advertising revenue, and lower costs in the amount of the marginal cost of producing a paper for that subscriber.
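
The newspaper illustration in paragraph 10.10 can be checked directly. The sketch below computes the change in profit from a $0.05 price rise for a given number of lost subscribers, under the stated assumptions (100 subscribers, a $1 cover price, $3 of advertising revenue per subscriber, the variable cost per subscriber as indicated, and no effect on marginal cost).

```python
# Profit change for the newspaper example: a $0.05 price increase,
# with and without advertising revenue per subscriber.

def profit_change(lost_subscribers, subscribers=100, price=1.00,
                  price_rise=0.05, ad_revenue=3.00, variable_cost=1.00):
    """Change in profit when `lost_subscribers` readers stop buying."""
    retained = subscribers - lost_subscribers
    gain = price_rise * retained                              # extra margin on retained sales
    margin_per_lost = (price + ad_revenue) - variable_cost    # margin forgone on each lost sale
    return gain - margin_per_lost * lost_subscribers

print(profit_change(1))     # +1.95: gain 4.95, loss 3.00 -> still profitable
print(profit_change(2))     # -1.10: gain 4.90, loss 6.00 -> unprofitable

# Without advertising revenue, but keeping a 75% variable margin on the $1 cover price
# (variable cost of $0.25), roughly 6% of subscribers must be lost before the rise fails to pay.
print(profit_change(6, ad_revenue=0.0, variable_cost=0.25))   # +0.20: still (just) profitable
print(profit_change(7, ad_revenue=0.0, variable_cost=0.25))   # -0.60: now unprofitable
```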

10.11 There are two circumstances in which critical loss analysis requires adjustments. First, the technique has to be modified if the firm(s) produce(s) more than one product and the reduction in the sales of one product allows the firm(s) to produce and sell more of another product. Then the computations have to take into account the increase in profits generated by these extra sales. Secondly, adjustments are required if the firm(s) sell(s) products that are production complements, so that the reduction in the sales of one product will force the firm(s) to produce, and sell, less of the other product as well. Then the computations have to take into account the decrease in profits stemming from the additional loss in sales.

[130] For the use of this variable see, for example, Jacquemin, A. and A. Sapir, 1988, International Trade and Integration of the European Community: An Econometric Analysis, European Economic Review, 31: 1439-49.


11 IMPORT PENETRATION TESTS

11.1 How foreign competition is treated is an important issue in antitrust analysis. The main question is whether foreign producers could thwart any attempt by domestic producers to raise prices. If that were the case, domestic producers could not effectively exercise market power. Import penetration tests may be used to assess whether imports are highly sensitive to variations in domestic prices, or better, to relative domestic and foreign prices. If imports are sensitive to domestic prices, and if there are no quotas or other restrictive trade practices, supply substitution is high. Consequently, the likelihood that a domestic firm or group of firms exercises market power is quite low.

Description of the technique

11.2 Formally, the sensitivity of imports to domestic prices is measured by the price elasticity of imports. Much as with domestic supply, import supply is said to be elastic if, when the domestic price rises by x%, the quantity imported rises by more than x%. Of course, a rise in domestic prices relative to foreign prices does not necessarily have to be caused by domestic producers; domestic prices also rise relative to foreign prices when the exchange rate of the domestic currency appreciates. Import elasticities estimated in the context of exchange rate fluctuations can then be used as proxies to analyse the effect of a potential price increase by a set of domestic producers on foreign suppliers.

11.3 The responsiveness of imports to price changes is estimated by regression methods. The logarithm of the quantity imported (X_t) is regressed upon a constant term (α), the log of the price of the product - or better, of the price relative to foreign prices, that is, the price expressed in a common foreign currency (P_t) - and the log of the importing country's per-capita income (Y_t). Often a variable to proxy world demand conditions (such as the growth rate of consumption in OECD countries) is added to the regression (wd_t),[130] as well as a time trend (t) to pick up the effects of capacity and productivity growth:

X_t = α + β_1 P_t + β_2 Y_t + β_3 wd_t + β_4 t + u_t [EQUATION 17]

where u_t is an error term with the usual properties. The estimated coefficient on the price variable measures the price elasticity of imports; if the estimated elasticity is higher than one, then import demand is elastic and there is limited room to exercise market power by the local producers.
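
A minimal sketch of the import regression in Equation 17 is given below, using simulated annual data and ordinary least squares; the variable names and figures are purely illustrative, and any standard econometric package would perform the same estimation.

```python
# Illustrative estimation of Equation 17 by OLS on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 40                                                # at least 35 data points are recommended
trend = np.arange(T)
log_rel_price = rng.normal(0.0, 0.10, T)              # log of domestic relative to foreign prices
log_income = 0.02 * trend + rng.normal(0, 0.01, T)    # log per-capita income
oecd_demand = rng.normal(0.02, 0.01, T)               # proxy for world demand conditions
log_imports = (1.0 + 1.5 * log_rel_price + 0.8 * log_income
               + 0.5 * oecd_demand + 0.01 * trend + rng.normal(0, 0.05, T))

X = sm.add_constant(np.column_stack([log_rel_price, log_income, oecd_demand, trend]))
result = sm.OLS(log_imports, X).fit()

print(result.summary())
print("Estimated import price elasticity:", round(result.params[1], 2))
# An estimate above one points to elastic import supply and hence limited scope
# for domestic producers to exercise market power.
```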

[131] See Chapter 8 for a discussion of co-integration.


Data and computational requirements

11.4 The estimation of the import demand equation requires a time series of data on all the relevant variables; there should be at least 35 data points to estimate the simple equation described above, but longer series are strongly recommended. Any econometric package can be used to run the regression. Data on commodity imports, as on the other variables needed in the estimation, is readily available from official statistical sources.

Interpretation

11.5 The problems arising from the estimation of import equations are mainly econometric in nature. The equation described above has no dynamic component, so auto-correlation in the error term is a likely problem. It would also be advisable to test for co-integration,[131] and if evidence is found for the existence of a co-integrating relationship, the equation should be modified accordingly.

11.6 One main problem with estimating import elasticity using single equation methods is that of simultaneity bias: are we estimating the demand or the supply of imports? Can we really distinguish between the two? This problem can be serious and can invalidate the results. For this reason, it is preferable to estimate import elasticities within a well-defined model of supply and demand.

[132] These techniques are discussed in Chapters 3 and 4 respectively.
[133] The estimation of demand equations using survey techniques is discussed in Chapter 14.


12 SURVEY TECHNIQUES

12.1 There are many situations in which raw price, quantity or other data, whether cross-sectional or time series, is not available. In this case techniques of the type outlined so far cannot be applied. One solution to this predicament is to create an ad hoc data set, from which the likely behaviour of consumers and market participants can be inferred. As data on the whole population of consumers and/or producers is generally not available, this is usually done through a survey, which can then be used to carry out national or international price comparisons, using cross-sectional price tests and hedonic price analysis;[132] to estimate consumer demand functions;[133] or to infer market definition. In this chapter we discuss the use of surveys for direct inference on market definition.

Description of the technique

12.2 To carry out a survey, it is first necessary to consider what information is required for the investigation being carried out, including some indication of the accuracy of the results. This information will guide the analyst in the selection of the sample, which has to be representative of the population the analyst wishes to investigate. Such a population could be either the actual consumers of a product, or the actual rivals in the production of such a product. The definition of the population under investigation is crucial, especially in situations where products are differentiated.

12.3 Secondly, a questionnaire has to be devised in such a way as to minimise the likelihood of obtaining biased responses. This should be piloted on a small sub-sample to identify ambiguities and other problems with the draft questionnaire. Thirdly, the questionnaires have to be mailed to the individuals in the sample, or interviews have to be conducted. Fourthly, the data has to be transferred into files and cleaned. If individuals or companies with certain characteristics have been over-sampled (for instance, consumers with higher incomes, or larger firms), weights have to be calculated at this stage.

12.4 Finally, the answers must be counted, and frequency counts and other statistical methods applied to make inferences from the data. For instance, the proportion of customers that would switch product for a given price increase can be computed, and the relevant demand elasticity calculated.
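
As a simple illustration of paragraph 12.4, the share of respondents who say they would switch away for a hypothesised price rise can be turned into a rough elasticity figure. The counts below are invented, and in practice any sampling weights would have to be applied first.

```python
# Rough demand-elasticity proxy from survey responses (invented figures).

respondents = 500
would_switch = 60            # say they would stop buying after a 10% price rise
price_rise = 0.10

share_lost = would_switch / respondents          # 12% of sales lost
implied_elasticity = share_lost / price_rise     # % change in quantity / % change in price

print(f"Implied own-price elasticity: {implied_elasticity:.1f}")    # 1.2
# Set against the critical elasticity from a critical loss calculation (Chapter 10),
# this gives a first indication of whether the hypothesised price rise would pay.
```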

[134] Shull, B., 1989, Provisional Markets, Relevant Markets and Banking Markets: The Justice Department's Merger Guidelines in Wise County, Virginia, Antitrust Bulletin, 34: 411-428.


Data and computational requirements

12.5 Carrying out a survey can be a very expensive undertaking, and in general it should only be done if no other appropriate technique is available. The sampling, questionnaire design and analysis phases all require the use of professional statisticians, and it is best to consult them as soon as the possibility of a survey becomes apparent. Biased results are worse than useless, since they can be seriously misleading. Generally speaking, the sample size has to be large if it is to be representative of the population. For example, in the investigation of a merger between Virginia National Bank and the First State Bank of Wise County, Virginia, a questionnaire was mailed to 750 households and over 1,400 companies in the County.[134]

12.6 Surveys can also be informal in nature. In the US, expert economists will often interview customers, for example, to determine their likelihood of switching in the face of a price increase limited to products being considered for a hypothetical market definition. If such data is accurate, then it would help an economist answer the key questions in defining an antitrust market. However, one should be very careful in considering the views of a small number of people in a sector as representative: informal surveys are not statistically valid.

12.7 A representative survey usually results in a large dataset, which requires the use of a good statistical package, such as SAS or SPSS, for the handling phases.

Interpretation

12.8 Using survey data of the kind described above to define markets can be misleading, as it elicits only information on what people say they did or would do, rather than what they would actually do in response to economic incentives or competitive pressures. In other words, the results depend heavily on the way the questions are phrased, and they are at best hypothetical. Given the nature of the analysis, the questions asked are necessarily of the kind 'what would you do if...'. The answer to that question does not actually shed light on what the individual will necessarily do when faced with the situation.

12.9 It is also very important to ask the right question. In the Wise County case, the question asked was, 'what would you have done if a bank in a specified town, or a bank in the next town, had offered a 1% lower interest rate on a loan?' This question poses two problems. First, the right question would have been about the response to a 1% price increase by the local bank. This way, the respondent would have defined his geographic market, rather than the questionnaire suggesting it. Secondly, one would expect the answer to be yes to a question about whether the person would go to the next town to shop. So, the question allows the creation of a chain of substitution that would set too large a geographic market.[135]

[135] See Shull, B., 1989, op. cit.
[136] Langenfeld, J., 1998, The Triumph and Failure of the US Merger Guidelines in Litigation, The Global Competition Review, December 1997/January 1998: 36-37.

Application: the Engelhard/Floridin merger[136]

12.10 In the Engelhard case, the District Court for the Middle District of Georgia, US, disagreed with the US Department of Justice, which sought to block the acquisition of Floridin by Engelhard. The merger would have reduced the number of competitors in the US market for gel clay (GQA) from three to two. The issue of whether the market for GQA was the relevant antitrust market was hotly debated.

12.11 One of the DOJ's expert witnesses interviewed many existing customers of Engelhard and Floridin, asking whether they would switch for a 5 or 10% price increase in GQA. In addition, a number of customers were questioned directly. In general, the customers said that they would not switch to other products for a 5 to 10% price increase in GQA.

12.12 The Court explicitly disagreed with the DOJ and its experts' use of the 5 to 10% price increase test for determining whether customers would substitute other products in sufficient numbers to make such a price increase unprofitable and, so, for excluding other products from the relevant product market definition.

12.13 The Court noted that GQA accounted for less than 10% of the total cost of the final product, so that a 5 to 10% price increase in GQA would raise the final production cost by very little. Moreover, changing GQA supplier is costly, as it requires product testing and potential reformulation. So it is not surprising that some customers stated that they would not switch from Engelhard's GQA to Floridin's. Therefore, a higher price increase test would be necessary to establish the product market.

12.14 After considering all the evidence produced, the Court concluded that the evidence did not support the claim that GQA was the relevant product market. The Court concluded that there was insufficient evidence to define the market, and that the DOJ had therefore failed to carry its burden of persuasion for an essential element of the case.


[137] Bain, J.S., 1951, op. cit.
[138] See Fairburn, J. and P. Geroski, 1993, The Empirical Analysis of Market Structure and Performance, in Bishop, M. and J. Kay, eds., European Mergers and Merger Policy, Oxford: Oxford University Press, pp. 217-238, for a discussion of the market power hypothesis.
[139] Lerner, A., 1934, The Concept of Monopoly and the Measurement of Monopoly Power, Review of Economic Studies, 1: 157-75.


PART IV: MODELS OF COMPETITION

13 PRICE-CONCENTRATION STUDIES

13.1 Frequently applied in merger cases in the UK, price-concentration studies are based on the structure-conduct-performance paradigm developed by Bain.[137] According to this well known theory, market structure influences the performance of market participants via their conduct. Market concentration is the most commonly used proxy for market structure; and the so-called 'market power hypothesis'[138] implies that concentration affects market performance (that is, profit margins) via its effect on pricing. In merger cases, the market power hypothesis translates into the following testable proposition: if higher concentration is associated with higher prices/profits in the relevant market, then a merger that has a significant impact on concentration in a particular market raises anti-competitive concerns.

Description of the technique

13.2 The theoretical underpinning of the structure-conduct-performance paradigm can be traced to the Lerner index of monopoly power,[139] which for any firm i can be written as:

(P – MC_i) / P = [α_i + (1 – α_i) s_i] / ε_dp [EQUATION 18]

where P represents the market price; MC_i the marginal cost of firm i; s_i the share of firm i in total industry output; ε_dp the (absolute) industry price elasticity of demand; and α_i the conjectural variation term. The conjectural variation term is a measure of the percentage change in output that firm i expects other firms in the industry to undertake in response to a 1% change in its own output. If α_i = 0, the firm expects no reaction on behalf of other firms to an increase in its own output: the firm operates in a Cournot oligopoly. If α_i = 1, the firm expects its rivals to, say, decrease their output by 1% in response to a 1% decrease in its own output; this implies that the other firms behave so as to make it possible to restrict market output (thereby causing a price increase). Finally, if α_i = –1, the firm expects its rivals to offset its, say, 1% output reduction by a 1% increase in their own output.

[140] See Martin, S., 1993, Advanced Industrial Economics, Oxford: Blackwells, Chapters 16 and 17, for an in-depth discussion of the issue.
[141] Martin, S., 1993, op. cit., page 547.
[142] Usually the concentration measure enters the regression linearly. However, if the degree of collusion is greater the higher the market concentration, then the latter could influence prices in a non-linear fashion. This could result in insignificant estimates for the coefficient of the concentration measure, and in the wrong inference being drawn. This problem can easily be solved by entering both the level and the square of the concentration variable as regressors, or by the use of dummy variables for different levels of concentration.
[143] Bain, J.S., 1956, op. cit.

13.3 If we assume that α_i/ε_dp and (1 – α_i)/ε_dp in Equation 18 are parameters to be estimated, the empirical test of the structure-conduct-performance paradigm at the industry level can be carried out by testing whether the price-cost margin increases with the degree of concentration in the industry. At the firm level, price-cost margins depend on both industry concentration and the firm's market share. The degree of concentration is the variable used to proxy market structure. The most common measures of concentration are the k-firm concentration ratio, CRk, and the HHI. The CRk index measures the proportion of market sales accounted for by the largest k firms, where k is normally taken to be 4, 5 or 8 (CR4, CR5, CR8). The HHI is the sum of the squared individual market shares of all the firms in the industry.

13.4 Price-cost margins are defined in terms of marginal costs. Unfortunately, marginal costs are often not observable, and various proxies for the degree of profitability have been used in the empirical literature.[140] Accounting measures of profits (the rate of return on sales), Tobin's q (the ratio of the stock market value of a company to the replacement cost of its assets), and the rate of return on equity holdings have all been used as proxies for the price-cost margin. Martin[141] shows that 'there is a one-to-one mapping between price models and price-cost margin models … In principle, if one has a model of profitability, one also has a model of price'. So the market power hypothesis, that is, the ability to impose prices that are above the competitive level, can be tested using either prices or one of the above profitability measures as the dependent variable in a regression model.

13.5 Price-concentration studies are carried out at either industry or firm level by running a regression with price levels as the dependent variable, and a concentration measure and other variables as regressors, usually all in levels.[142] As market power in concentrated industries cannot be exercised unless there are barriers to entry,[143] such barriers have to be accounted for in the price regressions. Product differentiation, economies of scale and absolute cost advantages all represent barriers to entry. Product differentiation can be proxied by using the advertising-to-sales ratio, while economies of scale can be proxied by using an estimate of the minimum efficient scale of production. The level of imports, or the imports-to-sales ratio, is also added as a regressor to account for the effect of foreign competition on market power.

[144] Bain, J.S., 1956, op. cit.
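
A stylised version of the price regression described in paragraph 13.5, run on simulated firm-level data, might look as follows; the regressors and their names are illustrative stand-ins for the concentration, entry-barrier and import variables discussed in the text.

```python
# Illustrative price-concentration regression on simulated firm-level data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 40                                      # a cross-section of around 40 firms/markets
concentration = rng.uniform(0.2, 0.9, n)    # e.g. CR4 or HHI in each local market
adv_sales = rng.uniform(0.0, 0.1, n)        # advertising-to-sales ratio (product differentiation)
imports_sales = rng.uniform(0.0, 0.3, n)    # imports-to-sales ratio (foreign competition)
cost_index = rng.normal(1.0, 0.1, n)        # local cost conditions
price = (10 + 4 * concentration + 8 * adv_sales - 3 * imports_sales
         + 5 * cost_index + rng.normal(0, 0.5, n))

X = sm.add_constant(np.column_stack([concentration, adv_sales, imports_sales, cost_index]))
print(sm.OLS(price, X).fit().summary())
# A positive, significant coefficient on the concentration variable is the pattern
# that would raise concern about the price effect of a further rise in concentration.
```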

13.6 From the regression results the analyst learns the degree to which prices, or profits, are dependent upon industry concentration. If the coefficient is high and significant, this can be taken as an indication that any further increase in market concentration, due to a merger for instance, would significantly increase market power and therefore the price that consumers of the product have to pay.

Data and computational requirements

13.7 The data requirements for this technique depend on whether the study is carried out at industry or firm level. Industry-level analyses require time series data, while firm-level analyses utilise either cross-sectional or - preferably - panel data. For time series analysis, a sufficiently long time series of data is necessary, and this can create problems for some of the variables. For cross-sectional analysis, data on single firms within the market of interest is required; a sample of around 40 firms should be sufficient - using fewer is not advisable. Smaller samples of firms can be used if observations on each firm are available for more than one time period, that is, if the data is in the form of a panel.

13.8 Time series and cross-sectional regressions can be carried out using most available econometrics packages, while panel data analysis requires specialised software packages, such as Limdep.

13.9 In the absence of information on the other variables, the correlation coefficient between price and concentration can be considered, but such practice should be avoided whenever possible, as omitting the effects of entry, foreign competition and/or product differentiation would lead to biased estimates.

Interpretation

13.10 The empirical analysis of the relationship between price and concentration relies on the assumption that marginal costs are constant: according to economic theory, in the presence of market power the price will be raised relative to marginal cost; therefore the analyst should look at the relationship between price and concentration while holding marginal costs constant.[144] The regression of price on concentration yields biased results if marginal costs are not constant and are correlated with concentration. This is a major problem with the use of this technique, especially when industry rather than firm level data is used.

[145] Geroski, P., 1982, Simultaneous Equation Models of the Structure-Performance Paradigm, European Economic Review, 19: 145-58.
[146] Demsetz, H., 1973, Industry Structure, Market Rivalry and Public Policy, Journal of Law and Economics, 11: 55-65; and Demsetz, H., 1974, Two Systems of Belief about Monopoly, in Goldschmid, H.J., H.M. Mann and J.F. Weston, eds, Industrial Concentration: The New Learning, Boston: Little Brown.
[147] See Baker, J.B. and T.F. Bresnahan, 1992, Empirical Methods of Identifying and Measuring Market Power, Antitrust Law Journal, 61: 3-16, for a discussion.

13.11 The implementation of this technique relies on the assumption that prices are influenced by concentration and by the other right hand side variables without influencing them in turn. Technically speaking, the regressors are assumed to be exogenous. However, a high price could result in new entry and more imports into the market, and ultimately lead to lower concentration. Therefore, concentration, imports and entry could themselves be influenced by price. In a detailed empirical investigation, however, Geroski[145] found supporting evidence only for the endogeneity of the trade variable. The imports or imports-to-sales ratio should then be instrumented; lagged values can be used to get around the problem.

13.12 The market power hypothesis has been criticised by the economists of the Chicago school. Demsetz[146] argued that if large firms are more efficient, then a positive relationship between concentration and profitability will be due to higher efficiency, not to the presence of market power. This argument is, however, only valid if the low-cost firms operate at full capacity. In such a case, the high-cost firms will supply the market with the difference between market demand and the low-cost firms' supply, and the market price will be set at the high-cost firms' level. The low-cost firms will then earn an efficiency rent. If, however, the low-cost firms have spare capacity and restrict their output to keep prices artificially high so that high-cost firms will not be forced to exit the market, then the low-cost firms will earn an economic profit that reflects the exercise of market power. This would be the typical case if there is collusion in the market.

13.13 Finally, this methodology has been criticised because it does not take into account the fact that, in a market with differentiated products, the existence of substitute products immediately outside the relevant market could influence the pricing behaviour of the firms in the market, no matter how concentrated the market is.[147]

[148] Monopolies and Mergers Commission, 1995, Service Corporation International and Plantsbrook Group Plc.


Application: the SCI/Plantsbrook merger

13.14 In this case the MMC analysed the degree of competition in funeral service markets.[148] The investigation was prompted by the proposed merger of Service Corporation International (SCI) and Plantsbrook Group PLC (Plantsbrook). The Commission concluded that the market for funeral directing services was local and relatively stable, and that although there was price competition, there was no consensus as to the strength of this competition. Several econometric studies informed this conclusion.

13.15 An SCI-commissioned study undertook multiple regression analysis to explain differences in funeral prices by differences in six chosen variables. Two of these variables were designed to pick up the effects of local concentration - the level of concentration and the number of different firms within the areas chosen. Major differences in the quality of funerals were represented by dummy variables for coffin type, and for other 'extras' bought for the funeral. The two remaining variables were funeral costs and average wages in the locality.

13.16 These regressions concluded that the only significant variable was the one relating to 'extras', and therefore that price was not sensitive to the number of competitors. These results were essentially the same for funeral outlets outside London and for those in the London study. The conclusion of the SCI-commissioned study was that there was no relationship between the price charged and the level of concentration, regardless of how the market was defined (from half a mile to three miles). The study also analysed the relationship between the number of competitors and price increases, and again found no relationship.

13.17 An MMC-commissioned study reviewed the dataset used in the regression analysis. This study found several problems with the dataset and the model used. First, the measure of concentration used did not provide a good proxy for competitive pressure, as differences between outlets were not considered (for example, larger outlets belonging to major competitors as opposed to small, independent outlets). Secondly, nearly all the explanatory power of the model came from the 'extras' and the variable for coffin types. However, these could also be consequences of concentration rather than independent determinants of price, that is, they could be endogenous to the model. Finally, standard econometric testing of the model showed it was not robust, in that it could not generate reliable outcomes in terms of the specified relationships. The MMC-commissioned study could not, however, generate an alternative model, claiming that the dataset available made the development of a robust model impossible.

13.18 It seems that the MMC may have concluded that the gathered evidence meant either that funeral markets were extremely localised or that there was no effective price competition even where there were several competitors. This then led to the recommendation that SCI divest itself of some funeral parlours where it appeared that the merger had created a local monopoly. It is interesting to note that FTC economists did the same type of statistical analysis in the US funeral homes mergers, and found that prices were higher in markets with few competitors and in rural markets.

[149] See Willig, R.D., 1991, Merger Analysis, Industrial Organisation Theory, and Merger Guidelines, Brookings Papers: Microeconomics, pp. 281-332, for a discussion.
[150] The cross-elasticity of demand between two goods, 1 and 2, is defined as the percentage change in the demand for good 1 when the price of good 2 is raised by x%.
[151] The logit model was developed in McFadden, D., 1973, Conditional Logit Analysis of Qualitative Choice Behavior, in Zarembka, P., ed., Frontiers in Econometrics, New York: Academic Press. For applications to antitrust issues, see: Froeb, L.M., G.J. Werden and T.J. Tardiff, 1993, The Demsetz Postulate and the Effects of Mergers in Differentiated Products Industries, Economic Analysis Group Discussion Paper 93-5 (August 24, 1993); Werden, G.J. and M.L. Froeb, 1994, op. cit.; Willig, R.D., 1991, op. cit.


14 ANALYSIS OF DIFFERENTIATED PRODUCTS: THE DIVERSION RATIO

14.1 Many markets are characterised by differentiated products, which are not perfect substitutes for one another. In markets with differentiated products, prices are set to balance the added revenue from marginal sales against the loss that would be incurred by losing such sales due to a higher price. When producers of two close substitutes merge, there is a strong incentive to raise unilaterally the price of at least one product above the pre-merger level.[149] This is because those sales of one product that would be lost due to an increase in its price would be partly or wholly recouped through increased sales of the substitute product. Whether such a price increase is profitable depends crucially on whether there are enough consumers for whom the two products represent their first and second consumption choices: the closer substitutes the two products are, the higher is the likelihood that a unilateral post-merger price increase would be profitable.

14.2 Antitrust authorities dealing with mergers in differentiated product markets need information about the substitutability of the two products, both with each other and with other products, in order to assess the likelihood of a price increase. This, and the following, chapter review the quantitative techniques available to assess the price effect of a merger in markets with differentiated products. The technique discussed in this chapter, diversion analysis, allows the analyst to make inferences about sales diversion between two competing products by using market share data only. This technique relies on very restrictive assumptions, but it is easy to implement and uses readily available data.

Description of the technique

14.3 Diversion analysis relies heavily upon the Independence of Irrelevant Alternatives Assumption (IIAA), which implies that the cross-elasticities of demand between Product 1 and all the other products are identical.[150] If the IIAA holds, then the demand for a given product can be analysed using the multinomial logit model, an econometric model that allows the estimation of discrete demand models where individuals choose one from a set of differentiated, alternative goods.[151] The first step in investigating the effects of a merger between sellers of differentiated goods, say 1 and 2, is to assess the proportion of the sales of Good 1 that would be lost to Good 2 following a given price increase for Good 1. This can be done very simply by dividing the cross-elasticity of demand between Products 1 and 2 by the own-price elasticity of demand for Product 1. This ratio, defined as the diversion ratio,[152] can be immediately calculated if estimates of own- and cross-elasticities are available. However, this is not always the case, due to data and time limitations. A very useful property of the logit model is that it allows one to express the diversion ratio in terms of market shares. If all the products in the market are either 'close' or 'distant' substitutes for each other, then the diversion ratio between Products 1 and 2 (DR_12) is given by:

DR_12 = (Market Share of Product 2) / (1 – Market Share of Product 1) [EQUATION 19]

It is evident from the above formula that, all other factors being equal, the lower the market share of Product 2, the lower the diversion ratio. Also, all other factors being equal, the higher the share of Product 1, the higher the diversion ratio. These imply that the diversion ratio will be higher when one of the merging partners is a dominant firm. For example, if the market share of Product 2 is 20%, the diversion ratio will be 0.25 if the market share of Product 1 is 20%, and 0.33 if its market share is 40%. If, on the other hand, the market share of Product 2 is lower, say 10%, and that of Product 1 is again 20%, then the diversion ratio is 0.12.

[152] Shapiro, C., 1995, Mergers with Differentiated Products, Address before the American Bar Association and International Bar Association program, The Merger Review Process in the US and Abroad, Washington DC, November 9, 1995.
[153] Ibid.
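
Equation 19 and the numerical illustration above translate directly into code; a minimal sketch:

```python
# Diversion ratio from market shares (Equation 19).

def diversion_ratio(share_1, share_2):
    """Share of Product 1's lost sales captured by Product 2, under the logit/IIAA assumption."""
    return share_2 / (1.0 - share_1)

print(round(diversion_ratio(0.20, 0.20), 2))    # 0.25
print(round(diversion_ratio(0.40, 0.20), 2))    # 0.33
print(round(diversion_ratio(0.20, 0.10), 2))    # 0.12 (0.125 before rounding)
```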

14.5 Having computed the diversion ratio, the analyst proceeds to estimate the likely price increase resulting from the merger. Under the assumption that the elasticity of demand is constant over the price range that includes the pre- and post-merger prices, the formula to compute the post-merger profit-maximising price increase for Product 1 is:[153]

Post-Merger Price Increase = (Mark-up × DR) / (1 – Mark-up – DR) [EQUATION 20]

where, under the assumption that the merger has no effects on costs, the variable Mark-up is the pre-merger mark-up for Product 1: (price – incremental cost) / price. Continuing with the example above, and assuming a mark-up of 40%, the resulting price increases will be: 29% for a diversion ratio of 0.25; 49% for a diversion ratio of 0.33; and 10% for a diversion ratio of 0.12.
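
The post-merger price rise in Equation 20 follows in the same way; the sketch below reproduces the figures quoted above for a 40% mark-up.

```python
# Post-merger price increase under constant elasticity (Equation 20).

def price_increase(mark_up, dr):
    """Profit-maximising unilateral price rise for Product 1, constant-elasticity case."""
    return (mark_up * dr) / (1.0 - mark_up - dr)

for dr in (0.25, 0.33, 0.12):
    print(f"DR = {dr:.2f}: price rise = {price_increase(0.40, dr):.0%}")
# DR = 0.25 -> 29%, DR = 0.33 -> 49%, DR = 0.12 -> 10%
```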

[154] See Hausman, J.J. and G.K. Leonard, 1997, Economic Analysis of Differentiated Products Mergers Using Real World Data, George Mason Law Review, 5: 321-46.
[155] The assumption that the price elasticity goes up with the price is quite a reasonable one. Let q be the quantity demanded and p the price; the own-price elasticity is then (∂q/∂p)·(p/q). With a linear demand the slope coefficient ∂q/∂p is constant, so as p rises and q falls, p/q increases and so does the (absolute) elasticity.
[156] Diamond, P. and J.J. Hausman, 1994, Contingent Valuation: Is Some Number Better Than No Number?, Journal of Economic Perspectives, 45.


Data and computational requirements

14.6 Diversion analysis is a simulation technique that requires the availability of only a few data points: the market shares of the merging partners, and the price mark-up. All that is required to compute the diversion ratio and the subsequent price increase is a pocket calculator.

Interpretation

14.7 Because of its simplicity, diversion analysis provides a very straightforward way to estimate the likely effects of mergers between producers of substitute products. It has, however, been fiercely criticised.[154] The problem with using this technique is that it is based on the IIAA, which is often not realistic. The IIAA imposes too strong a set of constraints on the cross-elasticities of demand. For example, if the price of a certain model of luxury car went up, we would expect the demand for other luxury models to go up, not the demand for, say, small hatchbacks. But the IIAA implies that the demand for all other cars will go up, with the models that sell most, no matter what their size, receiving the largest share of the diverted demand. This is obviously an untenable assumption.

14.8 Another very strong assumption is the constancy of the own-price elasticity of demand. If the own-price elasticity for Product 1 rises as its price goes up,[155] then the use of the diversion ratio leads to an overestimate of the price increase. This problem is more serious the higher the values of the mark-up and DR. If the demand for the product is linear, the elasticity rises as the price rises, making the optimal post-merger price increase smaller. With linear demand, an alternative formula should be used to compute the post-merger price increase: (Mark-up × DR) / [2(1 – DR)].
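
The linear-demand variant mentioned at the end of paragraph 14.8 gives markedly smaller figures for the same inputs; the comparison below uses the same 40% mark-up and the diversion ratios from the earlier example.

```python
# Constant-elasticity versus linear-demand post-merger price rise.

def rise_constant_elasticity(mark_up, dr):
    return (mark_up * dr) / (1.0 - mark_up - dr)

def rise_linear_demand(mark_up, dr):
    return (mark_up * dr) / (2.0 * (1.0 - dr))

for dr in (0.25, 0.33, 0.12):
    print(f"DR = {dr:.2f}: {rise_constant_elasticity(0.40, dr):.0%} (constant elasticity) "
          f"vs {rise_linear_demand(0.40, dr):.0%} (linear demand)")
```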

14.9 Due to the problems that arise with this technique, and its reliance on restrictive assumptions that cannot be tested when data is not available, we advise caution in the use of diversion ratios. Diamond and Hausman[156] make the very convincing argument that when data is not available, it is better to make no inference than the wrong one. Whenever data is available, it would be advisable to carry out a fully fledged estimation of a demand model and estimate the elasticities directly.


[157] Hausman, J.J., G.K. Leonard and J.D. Zona, 1994, Competitive Analysis with Differentiated Products, Annales d'Economie et de Statistique, 34: 159-180.
[158] Hausman, J.J. and G.K. Leonard, 1997, op. cit.
[159] The model was developed in Deaton, A. and J. Muellbauer, 1980, An Almost Ideal Demand System, American Economic Review, 70: 313-326.


15 ANALYSIS OF DIFFERENTIATED PRODUCTS: ESTIMATION OF DEMAND SYSTEMS

15.1 The diversion analysis described in the last chapter is often used when it is unfeasible to estimate the matrix of cross-elasticities for a given product market, or when the IIAA hypothesis is thought to hold. Two recent articles, by Hausman, Leonard and Zona[157] and by Hausman and Leonard,[158] develop a methodology to compute the likely price increase resulting from a proposed merger using econometric estimates of the matrix of market elasticities. The econometric estimation is based on supermarket scanner data. As the analysis is very technical, it should only be carried out by an expert practitioner. Once the elasticities have been estimated, however, it is very simple to compute the diversion ratio and then evaluate the price increase. This is a field of research that is developing rapidly.

Description of the technique

15.2 The econometric analysis of demand for substitute products is based on Gorman's multi-stage budgeting theory. According to this approach, there are three levels of demand for a differentiated product, such as bread: (i) the 'top' level, at which the overall demand for the product (that is, bread) is determined; (ii) the 'middle' level, at which the demand for different types of the product (that is, white bread, wholemeal bread, etc.) is determined; and (iii) the 'bottom' level, at which the demand for each brand within a type is determined. The estimation procedure starts at the bottom level, using an econometric model known as AIDS:[159]

S_int = α_in + β_i log(y_nt / P_nt) + Σ_{j=1…J} γ_ij log p_jnt + ε_int [EQUATION 21]

where the subscript i denotes the brand; t denotes the time period; and n denotes the city or area representing the sampling unit. S_int is the share of brand i in the total expenditure on the type analysed; y_nt is the overall expenditure for that type; P_nt is a price index; and p_jnt is the price of each brand j in the segment. The intercept parameters, α, measure the effect of the brand and area; the β parameters are brand-specific and measure the extent to which the expenditure share of brand i in total expenditure for that type depends on the total expenditure itself. Finally, the γ parameters measure own- and cross-price effects (not elasticities, as the dependent variable is not in logs). This system of demand equations has the very useful characteristic of being quite unrestrictive as to price effects: it allows one to estimate the whole matrix of price effects without making a priori assumptions. The econometric estimates of the 'bottom' level, which have to be carried out as usual using instrumental variables due to the likely endogeneity of the regressors,[160] can then be used to construct price indexes π_mnt for each of the M product types, for each area n and time t. Such indices are used as instruments for the different types of the product in the estimation of the 'middle' level:

log q_mnt = δ_mn + β_m log Y_nt + Σ_{k=1…M} σ_mk log π_knt + ε_mnt [EQUATION 22]

where q_mnt is the quantity demanded in the market, the subscript m identifying the different types of product, and n and t are as defined above. Y_nt denotes total expenditure on the product under analysis. The intercept terms, δ, measure the effect of the type and area; the parameters β measure the effect of changes in total product expenditure on the quantity demanded of each type of the product. Finally, the parameters σ measure the own- and cross-price elasticities of product types, and can be used to estimate a price index for the product as a whole, Π_t. Once the index has been constructed and suitably deflated, the 'top' level equation can be estimated:

log G_t = β_0 + β_1 log YD_t + β_2 log Π_t + β_3 Z_t + ε_t [EQUATION 23]

where G_t is total demand for the product, YD_t is real disposable income, and Z_t is a vector of variables affecting the demand for the product (these could be, for instance, demographic variables).

[160] See Chapter 9 on residual demand analysis for a discussion of endogeneity in demand analysis.

15.3 Once the model has been fully estimated, the full set of unconditional elasticities of demand for each brand in the market can be calculated. These figures can then be used to estimate the likely unilateral price increases from the merger under investigation. Assuming that marginal costs are identical pre- and post-merger, the percentage change in price for the product produced by the merging firms is equal to:

Price Change = (P^1 – P^0) / P^0 = (1 – w^0) / (1 – w^1) – 1 [EQUATION 24]

where the superscripts 1 and 0 refer to post- and pre-merger respectively, P is the price and w is the vector of price-cost mark-ups. The mark-ups have to be computed for the pre- and post-merger periods, and w can be calculated using the following formula:

w = –(E)^(–1) R [EQUATION 25]

where E is the matrix of own- and cross-price elasticities for the products of interest, multiplied by the revenue share for the product (that is, average retail price times the quantity sold), and R is the vector of revenues.
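
Once the elasticity matrix has been estimated, Equations 24 and 25 are simple matrix calculations. The sketch below merely evaluates those two formulas for illustrative two-brand inputs; the way the revenue-share-weighted elasticity matrices are built for the pre- and post-merger ownership structures (a diagonal matrix before the merger, and one that also includes the cross terms between the merging brands afterwards) is an assumption about how the formulas are applied, and none of the numbers is taken from any actual case.

```python
# Schematic evaluation of Equations 24 and 25 with illustrative inputs.
import numpy as np

def mark_ups(E, R):
    """Equation 25: w = -(E)^(-1) R."""
    return -np.linalg.solve(E, R)

def price_change(w_pre, w_post):
    """Equation 24: (P1 - P0)/P0 = (1 - w_pre)/(1 - w_post) - 1, element by element."""
    return (1.0 - w_pre) / (1.0 - w_post) - 1.0

R = np.array([0.08, 0.07])              # revenue shares of the two merging brands (invented)
E_pre = np.array([[-0.28, 0.00],        # pre-merger: each brand priced on its own,
                  [0.00, -0.30]])       # so only own-price terms enter
E_post = np.array([[-0.28, 0.01],       # post-merger: the cross terms between the
                   [0.02, -0.30]])      # merging brands are internalised

w_pre, w_post = mark_ups(E_pre, R), mark_ups(E_post, R)
print("Pre-merger mark-ups :", np.round(w_pre, 3))      # about [0.286, 0.233]
print("Post-merger mark-ups:", np.round(w_post, 3))     # about [0.295, 0.253]
print("Implied price change:", np.round(price_change(w_pre, w_post), 3))   # about [0.013, 0.026]
```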


Data and computational requirements

15.4 The data requirements for the analysis described above are demanding. Information is needed on prices and quantities sold at three levels of aggregation. The potential endogeneity of prices calls for the availability of appropriate instrumental variables. Supermarket scanner data can be used to perform the estimation procedures. However, computational requirements are such that estimation should only be carried out by expert practitioners.

Interpretation

15.5 Unlike most techniques reviewed in this paper, the interpretation of the coefficients obtained from the estimation of AIDS models is not straightforward. The coefficients of the 'bottom' level demand equation do not represent elasticities but have to be manipulated in order to obtain the elasticity matrix. Moreover, further manipulations have to be carried out in order to obtain unconditional elasticities, as the estimated ones are conditional on the expenditure for a given product type, y_nt.

15.6 Difficulties notwithstanding, this technique is very robust and powerful, and the ready availability of supermarket scanner data and excellent software should ensure that the assessment of likely unilateral effects from mergers is carried out using the estimation of demand systems, rather than more mechanical - and often biased - techniques. However, care needs to be taken in using retail prices to assess markets at stages before retailing, say, manufacturing. If retail markets are not fully competitive, then retail prices may be biased and invalid.

Application: the Kimberley-Clark/Scott merger case

15.7 In 1995, Kimberley-Clark (KC) announced the merger of its world-wide operations with the Scott paper company. The two companies were among the largest world producers of hygienic paper products, and a merger between them would have created the largest such producer in the world. Both the US DOJ and the European Commission investigated the merger.

15.8 Jerry Hausman and Gregory Leonard consulted for Kimberley-Clark, and presented very detailed evidence to the DOJ on the likely price effects of the merger in the market for bath tissue in the US. At the time of the merger, KC had just introduced a new product, Kleenex Bath Tissue, in the 'premium' segment of the toilet tissue market. Scott produced two different brands: Cottonelle, a premium brand, and ScotTissue, an 'economy' brand. The issue at hand in the merger case was whether KC and Scott could profitably raise the price of their three products. Using supermarket scanner data from five cities for the period from January 1992 to May 1995, Hausman and Leonard[161] presented evidence to the contrary.

[161] Hausman, J.J. and G.K. Leonard, 1997, op. cit.
[162] Hausman, J.J. and G.K. Leonard, 1997, op. cit.
[163] Hausman, J.J. and G.K. Leonard, 1997, op. cit.

15.9 The market for premium toilet paper was dominated by Charmin, a Procter & Gamble brand, with a 31% share of the whole tissue market; the shares of the second and third brands, Northern and Angel Soft, were 12.4% and 8.8% respectively; Kleenex had a share of 7.5% and Cottonelle of 6.7%. The economy brands were dominated by ScotTissue, with a 16.7% share of the total market, followed by other brands, with 9.4%, and private labels, with 7.6%.

15.10 Hausman and Leonard estimated a demand system for toilet tissue.[162] They found that the own-price elasticities for Kleenex, Cottonelle and ScotTissue were –3.4, –4.5 and –2.9 respectively, implying sales reductions of 34%, 45% and 29% respectively in response to a 10% increase in the own price. The estimated cross-price elasticities were very low. The largest was that between Kleenex and Charmin: at 0.68, it implies that sales of Kleenex would go up by 6.8% in response to a 10% increase in the price of Charmin. All the other cross-elasticities were lower than that.

15.11 Examining the elasticity matrix for toilet tissue, the following conclusions could be drawn. First, there was evidence of two separate market segments, the premium market and the economy market. Secondly, the cross-elasticities were all different from each other and asymmetric; that is, the elasticity between Kleenex and Charmin was different from that between Kleenex and Cottonelle, and from that between Charmin and Kleenex. This is hard evidence of the untenability of the IIAA assumption.

15.12 Using the estimated elasticities of demand, Hausman and Leonard[163] then predicted the price effects of a merger between Kimberley-Clark and Scott. Assuming no cost efficiencies, the prices of Kleenex, Cottonelle and ScotTissue would be raised by 2.4%, 1.4% and 1.2% respectively. These are very low values, and the merger was approved. It is interesting to note that diversion analysis based on share data would predict much larger price increases; for example, the simulated price increase for Kleenex would be 12.7%, more than five times as large.

[164] Hausman, J.J. and G.K. Leonard, 1997, op. cit.


15.13 As this example indicates, the use of diversion analysis to simulate price increases resulting from mergers can lead to very biased results when the products in the market are not perceived by consumers as equally substitutable. The margin of error is quite large, and this can lead the authorities to reach the wrong conclusions.

15.14 For the econometric estimation of full demand systems, the data requirements are quite large. Hausman and Leonard[164] used retailer scanner data in the form of a panel, from which the responsiveness of brand sales to changes in own and substitute prices can be inferred. The estimation of demand systems using econometric analysis requires specialised software, and specialised training in order to program the software and interpret the results.


[165] For an excellent survey of the economic literature on the subject, see McAfee, R.P. and J. McMillan, 1987, Auctions and Bidding, Journal of Economic Literature, 25: 699-738. There has been a host of empirical academic papers on auctions; for a comprehensive survey, see Hendricks, K. and H. Paarsch, 1995, A Survey of Recent Empirical Work Concerning Auctions, Canadian Journal of Economics, 28: 403-26.
[166] Porter, R.H. and J.D. Zona, 1993, Detection of Bid Rigging in Procurement Auctions, Journal of Political Economy, 101: 518-38.


16 BIDDING STUDIES

16.1 Custom-made or otherwise unusual products, and large orders of a product, are often bought and sold by formal or informal bidding procedures. The range of products sold in this manner is extensive and includes contracts for major infrastructure and capital investment projects (for example, refineries and power plants) including their plant-specific equipment (for example, gas turbines and pumps). These auctions are characterised by informational asymmetries, as each bidder does not know what the competitors are bidding.[165]

16.2 This has serious implications for antitrust analysis. In circumstances where there are repeated bids there is a strong incentive for some of the bidders to collude and form a cartel. Among bidding studies, of particular importance are those aimed at detecting bid rigging in order to stop or prevent the anti-competitive behaviour of a cartel of bidders.

Description of the technique when sufficient data is available

16.3 In order to detect bid rigging, the first step is to distinguish between bidders who are suspected of forming a cartel and bidders who are not. If there is no bid rigging, the determinants of the bidding price for the two groups will not be very different. The empirical analysis then seeks to ascertain whether discrepancies can be identified. We will discuss a methodology introduced by Porter and Zona.[166]

16.4 The econometric analysis of bid offers is carried out by running regressions of the bid price for the auctions in which at least two members of the alleged cartel have taken part:

log(b_ij) = a_j + βX_ij + u_ij     [EQUATION 26]

where b_ij is the price bid by bidder i in auction j; a_j is an auction-specific variable equal to 1 for auction j and to zero for any other auction; X_ij is a vector of variables affecting firm i's winning chances (like geographic proximity) or its costs; and u_ij is a variable conveying the information that the firm has at each auction. u_ij is a stochastic term with non-constant variance, as the variance varies with each auction. Due to this property of the error variance, the model has to be estimated using Generalised Least Squares (GLS) techniques. Estimation proceeds as follows:

Step 1 Equation 26 is estimated by Ordinary Least Squares, and the residuals calculated.

Step 2 The mean residual among the bidders participating at each auction is computed and each data point is divided by the appropriate mean residual.

Step 3 The model is re-estimated.

Equation 26 has to be estimated over three samples: the whole sample; the sample including the potential cartel bidders; and the sample of non-colluding bidders. If there has been bid rigging, the price equation for the cartel will be significantly different from that of the non-colluding bidders. A simple Chow test is performed at this point to assess whether the two sets of parameters are the same. Define the whole sample as 'whole'; the cartel sample as 1; the non-cartel sample as 2; let n be the number of bids in sample 1; m the number of bids in sample 2; k the number of regressors; and RSS the residual sum of squares from the regression. Then:

F = [(RSS_whole - RSS_1 - RSS_2) / k] / [(RSS_1 + RSS_2) / (n + m - 2k)]     [EQUATION 27]

is distributed as a Fisher's F with k and (n + m - 2k) degrees of freedom. If the computed value is higher than the tabulated critical value, then there is evidence of bid rigging.
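
A minimal sketch of this procedure is given below, assuming hypothetical bid data and plain OLS rather than the full GLS weighting described in the steps above; the variable names (capacity_used, distance_km) are invented stand-ins for the cost and proximity variables in X.

```python
# Sketch of the bid-regression Chow test described above (hypothetical data,
# plain OLS rather than the full GLS weighting, purely illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 300
bids = pd.DataFrame({
    "auction": rng.integers(0, 10, n).astype(str),   # auction identifier
    "log_bid": rng.normal(10, 0.3, n),               # log of the bid price
    "capacity_used": rng.uniform(0, 1, n),           # proxy cost/backlog variable
    "distance_km": rng.uniform(5, 200, n),           # proximity to the job
    "cartel": rng.integers(0, 2, n),                 # 1 = suspected cartel member
})

formula = "log_bid ~ C(auction) + capacity_used + distance_km"
pooled = smf.ols(formula, data=bids).fit()
cartel = smf.ols(formula, data=bids[bids.cartel == 1]).fit()
others = smf.ols(formula, data=bids[bids.cartel == 0]).fit()

# Chow test: do cartel and non-cartel bidders share the same price equation?
k = pooled.df_model + 1                               # parameters incl. intercept
df_denom = cartel.df_resid + others.df_resid
F = ((pooled.ssr - cartel.ssr - others.ssr) / k) / \
    ((cartel.ssr + others.ssr) / df_denom)
p_value = stats.f.sf(F, k, df_denom)
print(f"Chow F = {F:.2f}, p-value = {p_value:.3f}")
```

In a real investigation the bid data, cost variables and GLS weighting of the text would replace the random numbers used here; the structure of the test itself is unchanged.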

Data and computational requirements

16.5 The estimation of this technique requires data on successive bids for a number of bidders. The GLS method described above can be performed using a good econometric package. The estimation can then be carried out quite easily, and the parameters, t-ratios and other statistics will all be correct.

16.6 Data on costs, however, might be difficult to come by, and less sophisticated but still valid methods can be used to determine whether bid rigging has occurred. They will be discussed below.

[167] Porter, R.H. and J.D. Zona, 1993, op. cit., page 518.


Description of the technique when sufficient data is not available

16.7 Using data on market shares for the alleged cartel bidders, the analyst will investigate whether the distribution of market shares (as represented, say, by the numbers of bids submitted) has remained constant over the period when the cartel was supposedly in force. If the distribution has remained stable, this is an indication that the bidders might have been colluding, because by keeping shares stable the cartel removes the incentive for cheating.

16.8 An alternative methodology to test for the presence of a bidding cartel is to assess whether the distribution of cartel bids is more concentrated than that of non-cartel bids. By using a simple test for the equality of means, such as those described in Chapter 3, the analyst will discover whether the ratio between the lowest and second lowest bid for the cartel and non-cartel bidders is the same. If it is not, there is some evidence of bid rigging.
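
A minimal sketch of this second check, assuming entirely hypothetical bid data: compute the ratio of the lowest to the second-lowest bid auction by auction, and compare the mean ratio for auctions involving suspected cartel members with that for the remaining auctions.

```python
# Sketch of the informal check above: compare the lowest/second-lowest bid
# ratio for suspected cartel auctions against the rest (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def low_ratio(bids):
    """Ratio of the lowest to the second-lowest bid in one auction."""
    b = np.sort(bids)
    return b[0] / b[1]

# Hypothetical auctions: 40 with suspected cartel bidders, 40 without
cartel_ratios = [low_ratio(rng.uniform(90, 110, size=5)) for _ in range(40)]
other_ratios = [low_ratio(rng.uniform(80, 120, size=5)) for _ in range(40)]

t_stat, p_value = stats.ttest_ind(cartel_ratios, other_ratios, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A significant difference in these ratios is consistent with bid rigging,
# but it is weak evidence and not proof on its own.
```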

Interpretation

16.9 Bid rigging is a serious problem. Porter and Zona[167] report that 'between 1982 and 1988 more than half of the criminal cases filed by the Antitrust Division of the Department of Justice involved bid rigging or price fixing in auction markets...'. The techniques we have discussed provide one formal and two informal tests to detect such behaviour.

16.10 One of the main problems associated with these techniques is that they rely heavily on the distinction between cartel members and other bidders. It is therefore very important that the samples are separated correctly. This requires that the analyst has an in-depth knowledge of the industry under investigation, especially its history.

16.11 A further problem is that bids may be different because the members of the alleged cartel are larger, more efficient firms, and therefore have lower costs. This allows them to bid lower than the competition, but this is due to their efficiency, not their anti-competitive behaviour. This suggests that costs should be looked at whenever possible when investigating alleged anti-competitive behaviour.


Application: paving tenders in Suffolk and Nassau Counties, New York

16.12 This case relates to procurement contracts, namely paving contracts awarded by the Department of Transportation in Suffolk and Nassau counties, New York, between 1979 and 1985. Five firms were suspected of colluding by designating which of them would submit the serious bid for a contract and how much it would bid. In 1984, one of these five firms was convicted of bid rigging, and the other four were listed as conspirators on a bid which resulted in a contract to build a motorway on Long Island. The five firms were sued repeatedly in various other instances. Using data on 75 auctions for paving contracts, in which there were 319 non-cartel bids and 157 cartel bids, and applying the GLS methodology, Porter and Zona found convincing evidence that the bidding behaviour of cartel firms was different from that of non-cartel firms, and that therefore there was evidence of bid rigging.

[168] For a more exhaustive analysis of the subject, see Stillman, R., 1983, Examining Antitrust Policy Towards Horizontal Mergers, Journal of Financial Economics, 11: 225-40; and Eckbo, B.E., 1983, Horizontal Mergers and Collusion, Journal of Financial Economics, 11: 241-73.


PART V: OTHER TECHNIQUES AND CONCLUSIONS

17 TIME SERIES EVENT STUDIES OF STOCK MARKETS' REACTIONS TO NEWS

17.1 The use of financial analysis in competition policy is outside the terms of reference of this project. For this reason, we provide here only a brief summary of the hypotheses underlying the time series analysis of stock markets' reactions to news.

17.2 Stock markets' reactions to news can be a particularly valuable source of information that may lead to inferences about the nature of a merger or a take-over. The rationale behind the analysis of stock markets' reactions is quite simple, and stems from financial theory. The stock market is assumed to be efficient, so that asset prices reflect the true underlying value of a company. When a merger between two companies takes place, there are two possible outcomes in the product market.[168] First, if market power increases substantially after a merger, the product price will increase and so will profits for both the merger partners and the other firms operating in the market, at least in the short run (that is, prior to entry by new players). This implies that, on the assumption that the stock market is efficient, both the merging firms and horizontal rival firms in the industry should earn positive abnormal returns when the anti-competitive merger is announced. Also, when steps are taken by the authorities to investigate the merger, negative abnormal returns should be observed.

17.3 Secondly, if the merger generates cost efficiencies, then the merged firm will be more profitable than the sum of the pre-merger entities, all other factors being equal; such higher profitability, however, will not extend to other firms in the industry. This implies that the merging partners should gain positive abnormal returns around the date of the merger announcement, and negative abnormal returns around the date when antitrust investigations are announced. The situation for rival firms is, however, more complex. If the market expects the cost efficiencies to be easily passed along to other players in the industry, then rival firms should also earn positive abnormal returns when the merger is announced, and negative returns when legal proceedings are announced by the authorities. Otherwise, negative abnormal returns can be expected for the rivals at the time the merger is announced and positive returns when the investigation is announced.

17.4 Summarising, observing positive abnormal returns for the merging partners at the time of the merger announcement does not allow the analyst to distinguish between a merger which is expected to raise prices and one which is expected to lower costs. Likewise, observing positive abnormal returns for the horizontal rivals does not make it possible to discriminate between the two hypotheses of a price-increasing versus a cost-reducing merger. However, observing insignificant or negative abnormal returns for the rivals around the announcement date is a sufficient condition to conclude that the market expects the merger to be cost-reducing, not price-increasing.

17.5 The above-mentioned hypothesis is tested by using time series stock-price data for rival firms, comparing their actual stock price returns around the announcement date with a counterfactual measure of what the return would have been had the merger not taken place, and summing the differences to obtain the cumulative abnormal returns. The counterfactual return for an asset can be calculated using the mean-adjusted return model (MARM); the market model; the capital asset pricing model (CAPM); or the market index model. According to the MARM, the counterfactual return on an asset is simply its average return over a specified period. The market model return is the predicted value from a regression of actual returns on an intercept and the returns on a market index. The CAPM requires estimating the predicted values from a regression of actual returns on the returns on a risk-free asset and on the difference between the return on a market index and on the risk-free asset. Finally, the counterfactual return for the market index model is simply the return on the market index itself.
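
The sketch below illustrates the market-model variant with simulated daily returns; the window lengths, the returns themselves and the rough significance check are illustrative assumptions rather than a prescribed procedure.

```python
# Sketch of a market-model event study (one of the counterfactuals listed
# above). Returns are hypothetical; in practice one would use actual daily
# returns for the rival firm and a market index around the announcement date.
import numpy as np

rng = np.random.default_rng(2)
market = rng.normal(0.0003, 0.01, 260)                      # daily market returns
rival = 0.0001 + 1.1 * market + rng.normal(0, 0.008, 260)   # rival firm returns

est_win = slice(0, 220)          # estimation window
event_win = slice(220, 241)      # event window around the announcement

# Market model: regress firm returns on market returns in the estimation window
beta, alpha = np.polyfit(market[est_win], rival[est_win], 1)

# Abnormal returns = actual minus counterfactual (alpha + beta * market return)
ar = rival[event_win] - (alpha + beta * market[event_win])
car = ar.cumsum()                # cumulative abnormal returns

# Rough significance check based on the estimation-window residual variance
# (a simplification of standard event-study practice)
resid = rival[est_win] - (alpha + beta * market[est_win])
se_car = resid.std(ddof=2) * np.sqrt(len(ar))
print(f"CAR over event window: {car[-1]:.4f} (approx. t = {car[-1] / se_car:.2f})")
```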

17.6 The implementation of this technique is simple, and it requires readily available data. However, it has to be borne in mind that unless the abnormal returns for the rival firms turn out to be negative or insignificantly different from zero, this technique does not lead to unambiguous conclusions. Moreover, the assumption of (strong form) capital market efficiency may not be justified: most studies show only weak form, or semi-strong form, efficiency. In these circumstances stock market data may not fully capture the truth about market power or cost reduction possibilities.


18 CONCLUSIONS

18.1 In this report we have investigated the use of quantitative techniques in antitrust analysis. This includes a review of the role of collecting and analysing empirical evidence in cases dealt with by antitrust authorities in the UK, US and the EC. The largest part of the report is devoted to a systematic survey of 15 techniques. Each technique is described in terms of its main characteristics and its data requirements. A critical evaluation is provided through illustrative case summaries and in a discussion of the problems of interpreting the statistical and economic significance of each technique.

Role of quantitative techniques in antitrust analysis

18.2 Empirical analysis of economic issues is increasingly becoming an essential feature of any major antitrust investigation, whether it be a merger which gives rise to complaints of monopolisation, the abuse of a dominant position, a review of the compatibility of a joint venture, or other co-operative agreements between firms.

18.3 This trend is almost universal and particularly pronounced regarding the definition of the relevant markets in merger enquiries. Here the recent European Commission Notice on market definition from December 1997 (see paragraph 2.5) has marked a watershed. Like the US Merger Guidelines (see paragraph 2.9), which over the last 15 years have led to the development of a more consistent and empirically based approach to market definition, the Commission Notice can be expected to lead to similar changes in antitrust analysis in Europe.

18.4 Market definition is, however, not the only area of antitrust analysis that relies on quantification and the use of quantitative techniques. The analysis of market structure has regularly been subject to empirical analysis in a number of cases in the UK and the US. Usually it is the link between profit margins and levels of concentration across different geographic (local) markets that is studied through econometric analysis. Another area of antitrust analysis that calls for increased use of empirical analysis is the competitive behaviour of firms. With the advances in industrial economics, more and more models of competition are being applied empirically to study, among other things, the effect of a merger between firms supplying similar but not identical goods (key techniques used in this context are diversion ratios and demand analysis for differentiated products).

18.5 Finally, quantitative techniques can be used for the analysis of costs. While outside the terms of reference of this study, economic analysis of cost functions is nevertheless useful for the assessment of economies of scale and the assessment of possible efficiency gains in a merger. It is here that economics and accounting come together to inform antitrust analysis.

18.6 While very data intensive and requiring sophisticated econometric techniques, the above models meet the main criteria for the appropriate use of quantitative techniques in antitrust analysis, namely that they are capable of providing statistical results that have an economic meaning. The same is true for bidding models, residual demand analysis, time series studies of stock market reactions to news and, to a lesser extent, co-integration analysis.

18.7 Quantitative techniques are only tools which assist in assessing the structure and conduct of an industry. These tools can, and should, be used to give an answer to well-defined economic questions; these answers will be more accurate if the questions are properly framed, if the data is reliable, and if the statistical tests are strong.

18.8 In order to frame the question correctly, the analyst has to become familiar with the industry under investigation, and the behaviour of the main players within it. Only with an in-depth knowledge of the industry will the analyst be able to ask the right questions. Therefore one cannot begin with the use of these tools. They can only be employed after the analyst already has a good understanding of the background to the case, but is seeking specific answers to crucial questions - what is the exact market, what role does price formation play, what is likely to occur if firms X and Y merge? Public sources, and interviews with businessmen and other parties involved in the investigation, will take the analyst a long way in understanding both the functioning of the industry and the focal points that characterise the case under investigation.

18.9 The process of translating this knowledge into an analytical framework allows the main issues to be targeted by the relevant empirical tests. The key issues in antitrust analysis are clearly market definition, market structure, and the nature of competition and efficiency effects. Each of these issues should be analysed within a well-defined theoretical framework, from which hypotheses can be derived and tested empirically.

18.10 The actual implementation of the empirical test should always be carried out using the best statistical techniques given the available data and the question to which an answer is sought. Therefore, the analyst must assess the reliability and suitability of the available data before implementing any empirical test. Statistical data is subject to sampling errors, biases, and changing definitions which have to be understood. The importance of this rather pedantic part of the investigation cannot be stressed enough. By simply plotting the different data series and computing basic statistics such as means, standard deviations, measures of skewness, or frequency counts, it is possible to gain a good feeling for the quality of the data and the basic underlying facts. It is very important not to fall into the obvious temptation of trying to infer the hypothesis to be tested from what appears to be the answer in the data. The hypothesis has to come from economic theory.

18.11 In carrying out the empirical tests, the analyst should follow the golden rule - stick to the formulated hypothesis. Data mining, especially, should be scrupulously avoided. By data mining we mean the procedure where, for example, dozens of different regressions are run with as many variables as possible expressed in as many forms, in the hope of obtaining statistically significant results. The truth is that if the data is 'tortured' hard enough, we are quite likely to obtain at least one significant result. For instance, if we run ten regressions with the same dependent variable but different regressors we have a 40% probability of obtaining at least one regression with a significant coefficient, even though none of the regressors are related to the dependent variable. Significance obtained by mining the data is meaningless, and would not stand up to a thorough investigation.
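
The 40% figure follows from a standard multiple-testing calculation, assuming ten independent regressions each with a 5% chance of producing a spuriously significant coefficient:

```latex
\Pr(\text{at least one significant result by chance}) \;=\; 1 - (1 - 0.05)^{10} \;=\; 1 - 0.95^{10} \;\approx\; 0.40
```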

18.12 When quantitative techniques are used correctly and rigorously, in the fashion described above, they can be helpful tools. They are not, however, magic bullets. All empirical analysis performed during an investigation has to be weighed before it is used to derive public policy. Obviously, the weaker the data, or the more primitive or potentially misleading the statistical technique used, the less weight the empirical analysis should receive.

18.13 With these preliminary observations about the preparation for, and the undertaking of, quantitative tests, we draw the following conclusions from our survey of 16 quantitative techniques.

Statistical tests of prices and price trends (Part II)

18.14 Chapters 3 to 8 above describe quantitative techniques used in antitrust analysis to analyse single variables, namely prices. Apart from hedonic analysis, these techniques are all geared to test whether there are systematic differences between the prices of two or more products or services, or between a product's prices across different areas or time periods.

18.15 Cross-sectional price tests are very simple statistical tests that are not based on any economic hypothesis. For this reason, they should be given little weight in the context of an antitrust investigation. More specifically, although these tests allow the analyst to establish whether two products or services have the same mean price across regions, or before and after an alleged anti-competitive event has taken place, they do not provide any explanation as to why this is so. These tests are often used as a first step in an analysis, or as the only step when no data other than price data is available. It is important to emphasise that price data has to be carefully examined before it is used. Questions to be considered here include: is the data actually in the form of transaction prices and not list prices; are all characteristics of the prices known (for example, warranty terms); are the goods for which price data exists homogeneous?

18.16 The common price tests when time series data is available are discussed below.

18.17 Hedonic price analysis is a good tool to use as an intermediate step during an investigation into markets where product characteristics vary frequently. This technique allows the analyst to 'deflate' prices for the effect of qualitatively different characteristics, thereby obtaining price series that are comparable. However, caution is important. The results obtained with hedonic price analysis are very sensitive to the correct specification of the product's characteristics, so that an in-depth knowledge of the product qualities is required to obtain meaningful results.

18.18 Price correlation is a weak test, which should be used as a starting point in an investigation, but never as the sole piece of evidence. In too many analyses price correlations appear to be the end rather than the beginning of statistical analysis. Spurious correlation is a serious problem that can lead the analyst to conclude that two series of prices are associated with one another while in reality they are both related to some other variable that is influencing them.

18.19 Speed of adjustment tests and Granger causality are techniques that were used in antitrust analysis for a short time. They have since been subsumed by co-integration analysis. We have included these techniques because they represent important steps towards a statistically correct methodology for testing whether price series tend to move together.

18.20 Co-integration analysis is a very robust methodology. When used properly with adequate data, this test does provide statistically meaningful results, unbiased by problems such as spurious correlation. It is therefore one of our preferred tests. It does, however, require a highly skilled analyst for the test to be carried out properly. It should, therefore, only be undertaken by expert statisticians or econometricians trained in time series analysis.


18.21 All statistical tests of price homogeneity, no matter how sophisticated, supply weak evidence in antitrust analysis as they are only capable of providing information on whether price series are related to each other. The fact that the price of a certain product or area is found to affect prices of other products or areas is not sufficient proof that those products or areas are in the same antitrust market. What needs to be determined is whether areas or products that are in the same market at historical prices would still be in the same market if prices in one area, or for one set of products, were increased by a significant and non-transitory amount.

18.22 We conclude that quantitative techniques based on the analysis of prices alone are, aside from co-integration analysis, relatively weak tests and should only be used if no other data is available. When these techniques are used, they should carry a lighter weight in the context of an antitrust investigation than techniques based on tests of well-defined economic hypotheses.

Analysis of demand (Part III)

18.23 The techniques discussed in the report dealing with demand analysis have been developed as quantitative tests of well-defined economic models. Since these tests derive from a theoretical structure, they do not have some of the statistical and interpretation problems which exist with the simpler price analyses described above.

18.24 Residual demand analysis is theoretically ideal, as it allows the direct estimation of supply substitution effects in the market for a product or service. It is a preferred technique. In a perfectly competitive market, a single firm does not have the power to raise its price beyond the market price, because it will lose its customers to the competition: hence the demand elasticity facing the firm is infinite. Residual demand analysis provides a test for this hypothesis of perfect competition. Unfortunately there is rarely enough information available to estimate a residual demand model correctly, as data is needed on cost shifting variables of the firms operating in the industry - both those under investigation and their competitors.

18.25 One interesting exception is when the market under investigation includes foreign competitors. In that case exchange rates can be used as cost shifters and the model estimated.

18.26 Residual demand analysis supplies the value of the elasticity of residual demand for the firm under investigation. The antitrust analyst is, however, concerned about whether - given such an elasticity - the firm can profitably raise the price by a significant amount. Economic theory provides no guide as to when the residual elasticity is big enough to infer effective competition.[169]

[169] One paper is Waverman, L., 1991, Econometric Modelling of Energy Demand: When are Substitutes Good Substitutes, in D. Hawdon (ed.), Energy Demand: Evidence and Expectations, Academic Press.

18.27 Critical loss analysis provides an answer to whether competitive constraints are strong enough to consider a range of products to be in the same market. In this respect, critical loss analysis is a second-tier technique, used to calculate the change in the quantity sold that, for a given level of price-cost margin and price increase, makes that price increase unprofitable. This technique is useful but cannot be used on its own; it has to be used as a complement to residual demand estimation. This is because critical loss analysis provides an estimate, for a given price-cost margin and a given price increase, of the critical value of the elasticity of demand. If the actual value - obtained with econometric analysis or other means - is larger than this critical value, then that price increase will be unprofitable.
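
The report does not reproduce the formula, but the standard break-even version of the calculation, for a proportionate price rise X and a price-cost margin M (both expressed relative to price), is:

```latex
\text{critical loss} \;=\; \frac{X}{X + M}, \qquad
\text{critical elasticity} \;=\; \frac{\text{critical loss}}{X} \;=\; \frac{1}{X + M}
```

For example, with a 40 per cent margin and a 5 per cent price rise, an actual loss of sales greater than 0.05/0.45, or roughly 11 per cent, would make the increase unprofitable.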

18.28 Import penetration tests are a useful tool to measure the constraint imposed on local producers by foreign competition. How to treat foreign competition in antitrust analysis is a question that should be answered within a theoretical model of demand. The first step in the analysis of foreign competition should be a careful examination of what the imports are, and what proportion of sales they represent. Then, if it is found that the market has a strong foreign presence, and if sufficient data is available, the constraining impact of foreign competition has to be assessed. This can be done either by a direct estimate of the import elasticity with respect to domestic prices, or by incorporating the effect into a residual demand model.

18.29 Survey techniques should be used when sufficient data is not available to carry out fully-fledged demand estimation, provided the survey is carried out correctly. So, although surveys can provide a valuable insight into the questions being investigated, they should be designed and used carefully.

Models of competition (Part IV)

18.30 Within the structure-conduct-performance paradigm, price-concentration studies can be helpful in assessing the impact of market structure (that is, concentration) on the pricing behaviour of market participants. The appropriate test of the underlying economic theory requires the use of price-cost margins as the dependent variable in the econometric analysis. The hypothesis to be tested is that higher concentration leads to higher price-cost margins. The use of price data alone instead of margins relies on the rather restrictive assumption that marginal costs are constant. Price-concentration analysis can be carried out by calculating the correlation coefficient between the two variables. That is, however, not an advisable practice: information about why the two variables are related is required, and that cannot be obtained from correlation analysis. Note that the underlying theory concerns the relationship at the level of the individual firm.
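
A minimal sketch of such a firm-level regression, on entirely hypothetical local-market data, is set out below; the variable names and controls are illustrative assumptions, not a prescribed specification.

```python
# Sketch of a firm-level price-concentration regression (hypothetical data):
# price-cost margin regressed on local-market concentration plus controls.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
data = pd.DataFrame({
    "margin": rng.uniform(0.05, 0.35, n),       # firm price-cost margin
    "hhi": rng.uniform(1000, 6000, n),          # local-market concentration (HHI)
    "market_size": rng.uniform(10, 500, n),     # demand-side control
    "firm_share": rng.uniform(0.05, 0.6, n),    # firm's own market share
})

model = smf.ols("margin ~ hhi + firm_share + np.log(market_size)", data=data).fit()
print(model.summary().tables[1])
# The hypothesis of interest is a positive, significant coefficient on hhi;
# controlling for the firm's own share helps to separate the market power
# story from the efficiency story discussed in the next paragraph.
```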

18.31 Often industry-wide rather than firm-specific tests are used. We would not recommend this approach. Industry-level analysis of price-concentration relationships using time series data should only be performed if firm-level data is not available. And if industry data is used, great care should be taken with the ensuing interpretation. One of the main problems with industry-level data is that market shares tend to be stable over time, so that the concentration measure varies very little. Variability is higher in firm-level data with a cross-sectional element. Also, the use of industry-level data does not allow a distinction to be made between the market power and the efficiency hypotheses, which represents a serious drawback.

18.32 In recent years, the scope for abusing market power in differentiated product markets by merging competitors has come to the forefront in antitrust analysis. Techniques have therefore been developed to test whether a merger between producers of competing products is likely to lead to an increase in the price of their product(s). Diversion analysis is a simulation technique that can be used for this purpose; it is very easy to carry out and requires very little data, but it has been strongly criticised, as the underlying economic assumptions are too restrictive. It is common to assume that the lost sales are diverted equally across all competitors, an assumption which is simply incorrect for any differentiated good industry. As supermarket scanner data on prices and quantities of products sold becomes increasingly available, we believe that the estimation of demand systems should become common practice. This technique is based on sound economic theory, and it is also econometrically sound. Demand system estimation is, however, a sophisticated analysis requiring the use of simultaneous equation techniques and may take the competition policy analyst far afield.

18.33 In Chapter 16 we reviewed techniques aimed at detecting bid-rigging and cartel behaviour in procurement auctions. As investigations into these kinds of actions represent a good proportion of the activity of antitrust authorities, these techniques are important. They are also quite reliable, and easy to apply.

18.34 Finally, the anti-competitive effects of a merger can also be investigated using time series event data of stock markets' reactions to news. We have discussed this technique very briefly, as the OFT has developed in-house literature on the subject. This type of analysis is attractive because the required data is readily available. The main problem with this technique is that it is not easy to define when the new information became generally available, and so the technique does not always provide unambiguous conclusions.


Concluding remarks

18.35 From this brief review of the strong points and pitfalls of the various techniques covered in this report, we arrive at the following conclusions. First and foremost, it is very important to specify correctly the hypothesis being tested and to ensure that it is in line with the underlying economic assumptions, that is, in line with a well-defined economic model. Secondly, there is no substitute for a good understanding of firm activities in the industry under investigation. Thirdly, care must be taken to obtain sufficient data, capable of being subjected to statistical tests. These aspects of any empirical investigation are interdependent. The empirical results obtained have to be analysed in light of the strength of the data and techniques used. Results derived using first class data and powerful empirical techniques should be given a heavier weight in the context of an investigation than results obtained using poor data and/or weak tests.


APPENDICES

A GLOSSARY OF TERMS

The glossary of terms starting overleaf explains some of the main statistical terms used in the text. The following references provide a more complete explanation of the underlying theory and concepts.

Fisher, F., 1980, Multiple regression in legal proceedings, in J. Monz (ed.), Industrial Organisation, Economics and the Law, The MIT Press: Cambridge, Mass.

Kaye, D. and D. Freedman, 1994, Reference Guide on Statistics, Reference Manual on Scientific Evidence, Federal Judicial Centre.

Wonnacott, T. and R. Wonnacott, 1990, Introductory Statistics for Business and Economics, 4th edition, John Wiley & Sons: New York.

Term Explanation

Autocorrelation (serial correlation): In regression analysis, there is autocorrelation when the error terms of different observations are correlated, that is, each observation is statistically dependent on the previous ones. The vast majority of cases arise in the context of time series data. Correlation between the time series residuals at different points in time is called autocorrelation. Correlation between neighbouring residuals (at times t and t+1) is called first-order autocorrelation. In general, correlation between residuals at times t and t+d is called dth-order autocorrelation. When Least Squares techniques are being used to estimate the parameters of the regression, the estimates will be unbiased but inefficient in the presence of autocorrelation. Autocorrelation casts doubt on results of Least Squares and any inferences drawn from them. However, techniques are available to improve the fit of the model and the reliability of inferences and forecasts, for example, autoregressive models.

Bias: A systematic tendency for an estimate to be too high or too low.


Coefficient of Determination (R²): In regression analysis, the coefficient of determination (R²) is a measure of the proportion of the total variation in the dependent variable that can be explained by the regression equation. R² varies between 0 and 1. An R² with a value of zero implies that movements in the independent variable (X) do not explain any of the movement in the dependent variable (Y). The higher the R², the greater the association between movements in the dependent and independent variables. Obviously, an R² of unity implies that the entire variation of the dependent variable can be explained by the model. R² is sometimes used as a measure of the strength of a relationship that has been fitted by Least Squares. R² tends to be larger when the regression involves time series data, and lower where cross-sectional data is used. This is because in cross-sectional data, individual effects are more important. Where two regression models have the same dependent variables their explanatory power can be compared by using the adjusted coefficient of determination, which corrects for differences in the number of the regressors. When the dependent variables are different (for example, one regression uses price levels as the dependent variable and another uses log prices) then the explanatory power of the two regressions cannot be compared by using R².


Confidence Interval: An estimate, expressed as a range, for a quantity in the population. For example, a 95% confidence interval is the interval within which one can be 95% confident that the true population value lies. If an estimate from a large sample is unbiased, a 95% confidence interval is the range from two standard errors below to two standard errors above the estimate. Intervals obtained in this way cover the true value about 95% of the time.

Put another way, the confidence interval can be considered as simply the set of acceptable hypotheses. It reflects one's confidence in the estimation process of the population value. Therefore any hypothesis that lies outside the confidence interval may be judged implausible.

Covariance: Used to measure how two variables, say X and Y, vary together, that is, the extent of joint variability or association. The covariance will be positive when X and Y tend to be large together or small together. When the covariance is negative, one tends to be large while the other tends to be small.

Dummy variable: These are variables usually assigned a value of 1 or 0. They can be used to assign values to categories, such as male or female, thereby transforming them into numerical terms amenable to statistical tools. Dummy variables can also be used in regression analysis.

Efficient Estimator: Among all the possible unbiased estimators, the efficient estimator is the one with the minimum variance or, equivalently, standard error. If the estimator is not efficient, inferences based on the t and F tests will be incorrect.


Fisher's F-Test: When more than two population means have to be compared, the two-sample t-test can no longer be used. An extension is then provided by the F-test, which compares the variance explained by the difference between the sample means with the unexplained variance within the samples. An ANOVA (analysis of variance) table provides an orderly way to calculate F, to test whether there is a statistically significant difference between the populations.

Heteroscedasticity: One of the assumptions of the standard Least Squares model is that the variance of the errors is constant and does not depend on the variation of X (the independent variable). Errors with this property are said to be homoscedastic. If the variance of the errors is not constant, the errors are said to be heteroscedastic. There are many statistical tests to detect heteroscedasticity. When Least Squares techniques are being used to estimate the parameters of the regression, the estimates will be unbiased but inefficient in the presence of heteroscedasticity.

Heteroscedasticity tends to be a problem when cross-sectional data is estimated. Most computer packages contain a routine that enables the analyst to correct the covariance matrix, thereby solving the heteroscedasticity problem.

Hypothesis: See Statistical Test.

Independence: Two variables, say X and Y, are independent if the value of X is not in any way affected by the value of Y.


Least Squares: Consider the simple case where there are only two variables, such that Y depends on the value of X, and the relationship between X and Y is a straight line. In this case the equation which best fits the data is of the form Ŷ = a + bX. Formulae to calculate a (the intercept) and b (the slope) then have to be found. The least squares method calculates a and b so as to minimise the sum of the squares of the differences between the estimates (Ŷ) and the actual observations (Y). The differences are squared because some of the deviations (Y - Ŷ) will be positive and some will be negative, that is, some estimates will lie above the fitted line, others below.

Null Hypothesis: See Statistical Test.

Outlier: An observation that is far removed from the majority of the data in a sample. Outliers may indicate an incorrect measurement (for example, the information was incorrectly entered into a spreadsheet), and may exert undue influence on summary statistics such as the mean, and on statistical tests that incorporate these statistics, such as regressions. As a general rule any observation that is more than three standard deviations from the mean needs to be treated with suspicion.

Population: The total collection of objects or people to be studied, from which a sample is drawn. The population can be of any size.


Random sample: A sample drawn from a population where each observation is equally likely to be chosen each time an observation is drawn; by extension, each of the possible samples of a given size has an equal probability of being selected. Random samples do not ensure that survey results are accurate but they do enable an assessment of the degree of accuracy of the results to be made using statistical techniques.

Regression: In its simplest terms a regression is designed to show how one variable affects another. If there is more than one explanatory variable, then the regression is called 'multiple'. This is the most common form of regression in economic applications.

There are two main uses of multiple regression. In the first use, 'testing hypotheses', the aim is to state whether or not a particular relationship is true. In the second use, 'parameter estimation', the interest is to establish the precise magnitude of the effects involved. There is a third use, which is not so widespread, and that is forecasting the values of some variable.

Residuals: The difference between an actual and a predicted value, typically drawn from a regression equation.


Standard Error: A statistic is simply a function of the variables in the sample. Through repeated sampling it would be possible to calculate all the possible values of the statistic that one could observe. These possible values constitute the sampling distribution of the statistic. The standard deviation of the sampling distribution for a statistic is often called the standard error of the statistic.

Associated with the estimated value of each regression coefficient (a and b as mentioned above) is a figure known as the standard error of the coefficient. The standard error is a measure of the coefficient's reliability. Generally, the larger the standard error, the less reliable or accurate is the estimated value of the coefficient.

In large samples, it is generally the case that the true population mean is within approximately two standard errors of the estimated mean 95% of the time.

Stationary Process: A process whose statistical properties do not vary over time, that is, a series which has a constant mean and variance over time.


Statistical Test: Using sample data, a statistical test is a procedure performed to estimate the probability that a hypothesis about the population from which the sample was drawn is true. This involves testing a null hypothesis against an alternative hypothesis. The null hypothesis is often the hypothesis that there is no difference between the population parameters, or that the population parameter equals zero. The alternative hypothesis is the converse of the null hypothesis. An appropriate test statistic is chosen to evaluate the null hypothesis and calculated for the sample of data. If the probability (the significance level) is judged small enough, the null hypothesis is rejected. There is a wide range of statistical tests available. Among the most commonly used test statistics are the Student's t, the χ² and the Fisher's F.

Student's t-statistic: A statistical test used to determine the probability that a statistic obtained from sample data is merely a reflection of a chance variation in the sample(s) rather than a measure of a true population parameter.

In regression analysis the standard error of an estimated coefficient is used to make a statistical test of the hypothesis that the true coefficient is actually zero, that is, that the variable to which it corresponds has no impact on the dependent variable. The ratio of the estimated coefficient to its standard error is the t-statistic.

In large samples, a t-statistic of approximately 2 means that there is less than a 1 in 20 chance that the true coefficient is actually 0 and that the observed coefficient arises by chance. In this case the coefficient is said to be 'significant at the 5% level'. A t-statistic of approximately 2½ means that there is only a 1 in 100 chance that the true coefficient is 0, that is, the coefficient is significant at the 1% level. In smaller samples the t-statistics need to be larger for any given significance level.


Unbiased Estimator: An estimate of a parameter is said to be unbiased when there is no systematic error in the estimation procedure used. If an estimate is unbiased, the expected value of the parameter estimate is equal to the expected value of the (unknown) population parameter. Put more simply, U is an unbiased estimator of G if the expected value of U is equal to G. For example, if the sample mean x̄ is, on average, equal to µ then it is an unbiased estimate of µ. Note that a systematic error is not a random error.


B BIBLIOGRAPHY

Areeda, P. and H. Hovenkamp, 1992, Antitrust Law: An Analysis of Antitrust Principles and their Application, 1992 Supplement, Boston: Little Brown

Areeda, P. and D. Turner, 1975, Predatory Pricing and Related Practices under Section 2 ofthe Sherman Act, Harvard Law Review, 88: 697-733

Arrow, K., 1962, Economic Welfare and the Allocation of Resources for Invention, in NBER,The Rate and Direction of Inventive Activity: Economic and Social Factors, PrincetonUniversity Press: 609-25

Ashworth, M.H., J.A. Kay and T.A.E. Sharpe, 1982, Differentials Between car Prices in theUnited Kingdom and Belgium, IFS Report Series No 2. London: Institute for Fiscal Studies

Bain, J.S., 1956, Barriers to New Competition: Their Character and Consequences inManufacturing Industries, Cambridge: Harvard University Press

Bain, J.S., 1951, Relation of Profit Rates to Industry Concentration: AmericanManufacturing, 1936-1940, Quarterly Journal of Economics, 65: 293-324

Baker, J.,1997, Econometric Analysis in FTC. v. Staples, prepared remarks before theEconomics Committee, Section of Antitrust Law, American Bar Association, Washington,DC, 18 July, 1997

Baker, J.B., 1987, Why Price Correlations Do Not Define Antitrust Markets: On EconometricAlgorithms for Market Definition, Working Paper No 149, Bureau of Economics, U.S.Federal Trade Commission

Baker, J.B. and T.F. Bresnahan, 1992, Empirical Methods of Identifying and MeasuringMarket Power, Antitrust Law Journal, 61: 3-16

Baker, J.B. and T.F. Bresnahan, 1985, The Gains from Mergers or Collusion in ProductDifferentiated Industries, Journal of Industrial Economics, 35: 427-44

Baumol, W.J., J.C. Panzar and R.D. Willig, 1982, Contestable Markets and the Theory of Industrial Structure, NY: Harcourt Brace Jovanovich

Berndt, E.R., 1991, The Practice of Econometrics: Classic and Contemporary. AddisonWesley


BEUC, 1989, EEC Study on Car Prices and Progress Towards 1992, BEUC/10/89

Bishop, B., 1997, The Boeing/McDonnell Douglas Merger, European Competition LawReview, 18 (7): 417-19

Bork, R., 1978, The Antitrust Paradox, New York: Basic Books

Bresnahan, T. , 1989, Empirical Studies of Industries with Market Power, in Schmalensee, R.and R. Willig, eds., Handbook of Industrial Organisation, Volume 2. Amsterdam: NorthHolland

Carleton, W.T., R.S. Harris and J.F. Stewart, 1980, An Empirical Study of Merger Motives.Washington, D.C.: Federal Trade Commission

Carlton, D.W. and J.M. Perloff, 1994, Modern Industrial Organisation, New York: HarperCollins

Carton board, 1994, Case IV/33833, OJ L243

Caves, R., 1992, Productivity Dynamics in Manufacturing Plants, Brookings Papers onEconomic Activity: 187-267

Caves, R., 1990, Industrial Organisation, corporate strategy and structure, Journal ofEconomic Literature: 64-92

Caves, R. and D.R. Barton, 1990, Efficiency in US Manufacturing Industries, MIT Press:Cambridge

Charemza, W.W. and D.F. Deadman, 1997, New Directions in Econometric Practice, 2nd edition. Edward Elgar

Commission Notice on the Definition of the Relevant market for the Purposes of Communitycompetition law, 1997, OJ C372/5

Davies, S. and B. Lyons, 1996, Industrial Organisation in the European Union: Structure,strategy, and competitive mechanism, Clarendon Press: Oxford

Deaton, A. and J. Muellbauer, 1980, An Almost Ideal Demand System, American EconomicReview, 70: 313-326

Demsetz, H., 1973, Industry Structure, Market Rivalry and Public Policy, Journal of Law andEconomics, 11: 55-65


Demsetz, H., 1974, Two Systems of Belief about Monopoly, in Goldschmid, H.J., H.M.Mann, and J.F. Weston, eds, Industrial Concentration: The New Learning, Boston: LittleBrown

Department of Justice and Federal Trade Commission, 1992, Horizontal Merger Guidelines,Antitrust and Trade Regulation Report, 69 (1559), Washington D.C

Diamond, P. and J.J. Hausman, 1994, Contingent Valuation: Is Some Number Better ThanNo Number?, Journal of Economic Perspectives, 45

Doern, G.B. and S. Wilks (eds.), 1996, Comparative Competition Policy: National Institutionsin a Global Market. Oxford: Clarendon Press

Easterbrook, Judge, A.A. Poultry Farms Inc. V. Rose Acre Farms Inc. F.2d 1396 (7th Cir. 1989)

Eckbo, B.E.,1983, Horizontal Mergers and Collusion, Journal of Financial Economics, 11: 241-73

Engle, R.F. and C.W.J. Granger, 1987, “Co-integration and Error Correction: Representation,Estimation and Testing”, Econometrica, 55: 251-76

Euromotor, 1991, Year 2000 and Beyond - The Car Marketing Challenge in Europe, EuromotorReports: 183-84

European Commission, Irish Sugar case 97/624, Sugar Beet, case 90/45, and NapierBrown/British Sugar case 88/518

European Commission, 1992, Case IV/M.0291, 1992, Torras/Sarrio

European Commission, 1995, Car Price Differentials in the European Union on 1 May 1995,IP/95/768

European Commission, 1995, Twenty-fifth Report on Competition Policy

Fairburn, J. and P. Geroski, 1993, The Empirical Analysis of Market Structure and Performance,in Bishop, M. and J. Kay, eds., European Mergers and Merger Policy. Oxford: OxfordUniversity Press, pp. 217-238

Farrell J. and C. Shapiro, 1990, Horizontal Mergers: An Equilibrium Analysis, AmericanEconomic Review, 80: 107-26


Flam, H. and H. Nordstrom, 1995, Why Do Pre-Tax Car Prices Differ So Much Across EuropeanCountries?, CEPR Discussion Paper series No. 1181

Frankel, A. and J. Langenfeld, 1997, Sea change or Sub-markets?, Global Competition Review,June/July: 29-30

Froeb, L.M., G.J. Werden and T.J. Tardiff, 1993, The Demsetz Postulate and the Effects ofMergers in Differentiated Products Industries, Economic Analysis Group Discussion Paper 93-5(August 24, 1993)

Geroski, P., 1982, Simultaneous Equation Models of the Structure-Performance Paradigm,European Economic Review, 19: 145-58

Gilbert, R.J., 1995, The Antitrust Guidelines for the Licensing of Intellectual Property: Newsignposts for the intersection of intellectual property and antitrust laws, American BarAssociation Antitrust Law Spring Meeting, Washington D.C

Graham, M. and A. Steele, 1997, The Assessment of Profitability by Competition Authorities,OFT Research Paper 10

Harbord, D. and T. Hoehn, 1994, Barriers to Entry and Exit in European Competition Policy,International Review of Law and Economics

Harris, B.C. and J.J. Simons, 1989, Focusing Market Definition: How Much Substitution isNecessary? Research in Law and Economics, 12: 207-226

Hausman, J.J. and G.K. Leonard, 1997, Economic Analysis of Differentiated Products MergersUsing Real World Data, George Mason Law Review, 5: 321-46

Hausman, J.J., G.K. Leonard, and J. D. Zona, 1994, Competitive Analysis with DifferentiatedProducts, Annales d'Economie et de Statistique,, 34: 159-80

Hayek, F., 1945, The Use of Knowledge in Society, American Economic Review; 35: 519-30

Hendricks, J. and P. Paarsh, 1995, A Survey of Recent Empirical Works Concerning Auctions,Canadian Journal of Economics, 28: 403-26

Horowitz, I., 1981, Market Definition in Antitrust Analysis: A Regression Approach, SouthernEconomic Journal, 48: 1-16

Jacquemin, A. and A. Sapir, 1988, International Trade and Integration of the EuropeanCommunity: An Econometric Analysis, European Economic Review, 31: 1439-49


Kennedy, P., 1993, A Guide to Econometrics, 3rd Edition. Oxford: Basil Blackwell

Klass, M. W. and G. I. Rosenberg, 1985, Economic Analysis of the Proposed Acquisition ofAssets by Lafarge Corporation From National Gypsum Company, Glassman-Oliver EconomicConsultants, November 8, 1985

Klevorick, A., 1993, The Current State of the Law and Economics of Predatory Pricing,American Economic Review, 83: 162-167

Kim, E.H. and V. Singal, 1993, Mergers and Market Power: Evidence from the Airline Industry,American Economic Review, 83:549-569

Lande, R. and J. Langenfeld, 1997, From Surrogates to Stories: The Evolution of Federal Mergerpolicy, Antitrust Magazine, Spring: 5-9

Landes, W.M. and R.A. Posner, 1981, Market Power in Antitrust Cases, Harvard Law Review,94:937-96

Langenfeld, J.L. and G.C. Watkins, 1998, Geographic Oil Product Market Test: An ApplicationUsing Pricing Data, Mimeo: LECG Inc

Langenfeld, J.L., 1998, The Triumph and Failure of the US Merger Guidelines in Litigation, TheGlobal Competition Review, Dec 1997/Jan 1998: 36-37

Langenfeld, J.L., 1996, The Merger Guidelines as Applied, in Coate, M. and A. Kleit, 1996, TheEconomics of the Antitrust Process

Lerner, A., 1934, The Concept of Monopoly and the Measurement of Monopoly Power, Reviewof Economic Studies, 1: 157-75

Liebowitz, S.J., 1987, The Measurement and Mis-measurement of Monopoly Power, International Review of Law and Economics, 7: 89-99

London Economics, 1997, Competition in Retailing, Office of Fair Trading Research Paper 13.

London Economics, 1997, Competition Issues, Vol. 3, Sub-series V, Single market review 96,Office for Official Publications of the European Communities

London Economics, 1994, The Assessment of Barriers to Entry and Exit in UK CompetitionPolicy, OFT Research Paper 2


McAfee, R.P. and J. McMillan, 1987, Auctions and Bidding, Journal of Economic Literature,25: 699-738

McFadden, D., 1973, Conditional Logit Analysis of Qualitative Choice Behaviour, in Zarembka,P. ,ed., Frontiers in Econometrics. New York: Academic Press

Monopolies and Mergers Commission, 1986, White Salt

Monopolies and Mergers Commission, 1989, The Supply of Petrol. A Report on the Supply in the United Kingdom of Petrol by Wholesale

Monopolies and Mergers Commission, 1989, The Supply of Beer

Monopolies and Mergers Commission, 1991, Soluble Coffee. A Report on the Supply of Soluble Coffee for Retail Sale within the United Kingdom

Monopolies and Mergers Commission, 1992, Bond Helicopters Ltd and British International Helicopters Ltd

Monopolies and Mergers Commission, 1992, New Motor Cars. A Report on the Supply of New Motor Cars within the United Kingdom

Monopolies and Mergers Commission, 1994, The Supply of Recorded Music. A Report on the Supply in the UK of Pre-recorded Compact Discs, Vinyl Discs and Tapes Containing Music

Monopolies and Mergers Commission, 1995, Service Corporation International and Plantsbrook Group plc

Monopolies and Mergers Commission, 1995, South West Water Services Ltd

Monopolies and Mergers Commission, 1995, Telephone Number Portability

Monopolies and Mergers Commission, 1995, The General Electric Company plc and VSEL plc

Monopolies and Mergers Commission, 1995, Video Games

Monopolies and Mergers Commission, 1996, National Express Group plc and Midland Main Line Limited

Monopolies and Mergers Commission, 1998, Capital Radio plc and Virgin Holdings Limited

Mueller, D.C., 1985, Mergers and Market Share, Review of Economics and Statistics, 47: 259-67


Murray, J. and N. Sarantis, Price-Quality Relations and Hedonic Price Index for Cars in the UK

Myers, G., 1994, Predatory Behaviour in UK Competition Policy, Office of Fair TradingResearch Paper 5

Nickell, R., 1992, Productivity Growth in UK companies, 1975-86, European Economic Review,Vol. 36: 1055-91

NERA, 1993, Market Definition in UK Competition Policy, Office of Fair Trading ResearchPaper 1

Official Journal, 1991, Aerospatiale/Alenia/De Havilland, M. 53

Official Journal, 1992, Nestlé/Perrier, 92/553/CEE, 5-12, vol. L356

Official Journal, 1994, Carton board Case IV/33833, L243

OFT, 1999, The Competition Act 1998: OFT 400, The Major Provisions; OFT 401, The Chapter1 Prohibition; OFT 402, The Chapter II Prohibition; OFT 403, Market Definition; OFT 404,Powers of Investigation; OFT 405, Concurrent Application to Regulated Industries; OFT 406,Transitional Arrangements; OFT 407, Enforcement; OFT 408, Trade Associations, Professionsand Self-Regulating Bodies

Pantler, P.A., 1983, A Guide to the Herfindahl Index for Antitrust Attorneys, Journal of Law andEconomics, 22: 209-11

Phlips, L. and I.M. Moras, 1993, The AKZO decision: a case of predatory pricing?, Journal of Industrial Economics, 41: 315-21

Porter R.H. and J.D. Zona, 1993, Detection of Bid Rigging in Procurement Auctions, Journalof Political Economy, 101: 518-38

Posner, R., 1979, The Chicago School of Antitrust Analysis, University of Pennsylvania LawReview, 127: 159-83

Rice, J.A., 1995, Mathematical Statistics and Data Analysis, 2nd Edition, Duxbury Press

Shapiro, C., 1995, Mergers with Differentiated Products, Address before the American BarAssociation & International Bar Association program, The Merger Review Process in the U.S.and Abroad, Washington, D.C., November 9, 1995


Scheffman, D.T. and P.T. Spiller, 1987, Geographic market definition under the US Departmentof Justice Merger Guidelines, Journal of Law and Economics, 30: 123-47

Schmalensee, R., 1989, Inter-industry Studies of Structure and Performance, in R. Schmalensee and R.D. Willig (eds.), The Handbook of Industrial Organisation, Vol. II, North Holland: New York

Schmalensee, R., 1978, Entry deterrence in the ready-to-eat breakfast cereal industry, Bell Journal of Economics, 9: 305-27, in Office of Fair Trading Research Paper 2, 1994, Barriers to Entry and Exit in UK Competition Policy

Shull B., 1989, Provisional Markets, Relevant Markets and Banking Markets: The JusticeDepartment’s Merger Guidelines in Wise County, Virginia, Antitrust Bulletin, 34: 411-428

Slade, M.E., 1986, Exogeneity Tests of Market Boundaries Applied to Petroleum Products,Journal of Industrial Economics, 34: 291-302

Stewart, J.F., W.T. Carleton and R.S. Harris, 1984, The Role of Market Structure in MergerBehaviour, Journal of Industrial Economics, 32: 293-312

Stigler, G., 1968, The Organisation of Industry, Homewood: Richard D. Irwin Inc

Stigler, G. and R.A. Sherwin, 1985, The Extent of the Market, Journal of Law and Economics,28: 555-85

Stillman, R., 1983, Examining Antitrust Policy Towards Horizontal Mergers, Journal ofFinancial Economics, 11: 225-40

Sutton, J., 1991, Sunk costs and market structure: price competition, advertising and theevolution of concentration, MIT Press: Cambridge

Temple Lang, J., 1996, Innovation Markets and High Technology Industries, Paper presented atthe Fordham Corporate Law Institute

Utton, M.A., 1986, Profits and the Stability of Monopoly, Cambridge: Cambridge UniversityPress

Von Weizsäcker, C.C., 1980, A Welfare Analysis of Barriers to Entry, Bell Journal ofEconomics, Vol. 11: 399-420

Waverman, L., 1991, Econometric Modelling of Energy Demand: When are Substitutes GoodSubstitutes, in D. Hawdon (ed.), Energy Demand: Evidence and Expectations. Academic Press


Werden, G.J. and L.M. Froeb, 1994, The Effects of Mergers in Differentiated ProductsIndustries: Logit Demand and Merger Policy, Journal of Law, Economics, and Organisation, 10:407-26

Werden, G.J. and L.M. Froeb, 1993, Correlation, Causality and All that Jazz: The InherentShortcomings of Price Tests for Antitrust Markets, Review of Industrial Organisation, 8: 329-53

Willig, R.D., 1991, Merger Analysis, Industrial Organisation Theory, and Merger Guidelines,Brookings Papers: Microeconomics: 281-332


C LIST OF CASE SUMMARIES

Case study 1: Nestle-Perrier merger - following paragraph 2.22

Case study 2: defining the geographical extent of US petrol markets - following paragraph 2.24

Case study 3: Staples and Office Depot - following paragraph 2.33

Case study 4: the supply of recorded music - following paragraph 2.43

Case study 5: the Boeing/McDonnell Douglas merger - following paragraph 2.50

Case study 6: AKZO - following paragraph 2.52

Case study 7: South West Water Services Ltd - following paragraph 2.61

Application of cross-sectional price tests: international comparison of prices - CDs and motor cars - paragraphs 3.10 to 3.15

Application of hedonic price analysis: car price differentials - paragraphs 4.5 to 4.8

Application of price correlation: wholesale petrol markets in the US - paragraph 5.8

Application of causality tests: US petrol markets - paragraph 7.7

Application of dynamic price regressions: the supply of soluble coffee - paragraphs 8.8 to 8.11

Application of co-integration analysis: petrol markets in Colorado - paragraph 8.12

Application of residual demand analysis: National Express Group plc and Midland Main Line Ltd - paragraphs 9.12 to 9.16

Application of survey techniques: the Engelhard/Floridin merger - paragraphs 12.10 to 12.14

Application of price-concentration studies: the SCI/Plantsbrook merger - paragraphs 13.14 to 13.18

Application of analysis of differentiated products - estimation of demand systems: the Kimberley-Clark/Scott merger case - paragraphs 15.7 to 15.14

Application of bidding studies: paving tenders in Suffolk and Nassau Counties, New York - paragraph 16.12