CEF 2009 Pre-Conference Tutorial on “Heterogeneous and Multi-Agent Modelling”
15th International Conference Computing in Economics and Finance, University of
Technology at Sydney, Sydney, Australia July 14, 2009 Shu-Heng Chen, [email protected]
Department of Economics, National Chengchi University
Taipei, Taiwan
http://www.aiecon.org/
Time Table
9:00-10:30 Session 1
10:30-11:00 Coffee Break
11:00-12:30 Session 2
12:30-14:00 Lunch
14:00-15:30 Session 3
15:30-16:00 Coffee Break
16:00-17:00 Session 4
Plan of the Tutorial
There are two ways to see what will be covered in this 5.5-hour tutorial.
First, what kinds of questions have been raised and addressed?
Second, what specific models have been motivated to substantiate the study of these questions?
Of course, the tutorial will be more self-contained if one can also see how the two complement each other.
Plan of the Tutorial
This plan naturally leads us to three summary pages of the tutorial.
(Feature Page) The first page is a summary of views, perceptions, insights, etc., regarding the nature of agent-based modeling in economics.
(Illustration Page) The second is a summary of the agent-based models covered in this tutorial.
(Extension Page) The third is a list of what has not been said much, or not said at all, in this tutorial, but is not the least important.
Plan of the Tutorial
A plan should also include time allocation. We have a total of 5.5 hours, separated into four sessions: three 90-minute sessions and one 60-minute session.
This time budget will be allocated between the feature page and the illustration page.
Feature Page
What is agent-based computational economics (ACE)?
Why ACE, in light of the development of other disciplines?
What is the relation between ACE and experimental economics (EE)? What is the relation between software agents and human agents?
What is the relation between ACE and behavioral economics or psychological economics? From Homo Economicus to Homo Sapiens (Thaler, 2000)
What is the relation between ACE and evolutionary economics? From Homogeneity to Heterogeneity
Illustration Page: Models
Thomas Schelling's Segregation Model
Agent-Based Cobweb Model
Agent-Based OLG Model
Agent-Based Double Auction Market
Agent-Based Lottery Market
Agent-Based Financial Market
Illustration Page: Tools
Reinforcement Learning
Classifier System
Fuzzy Logic
Fuzzy Classifier System
Genetic Algorithms
Genetic Programming
Self-Organizing Maps
Computational Social Sciences: What and Why?
The tutorial tries to answer two questions which we consider quite fundamental to the study of agent-based economic models, namely, what and why. What is agent-based computational economics? Why do we need agent-based modeling of the economy?
These two questions are generally shared by other social scientists who are also interested in agent-based modeling.
Therefore, they are better addressed against a broader background, i.e., agent-based computational social sciences.
What
To answer the first question, it would be nice if we could start with some very simple agent-based social or economic models which nevertheless have all the essence of agent-based models.
In fact, some of the earliest agent-based economic and social models satisfy this need.
Cellular automata, first introduced by von Neumann and later used by Thomas Schelling, provide such an illustration.
Schelling's Segregation Model
http://ccl.northwestern.edu/netlogo/
Computational Social Sciences: What are they?
Over the last decade, we have witnessed agent-based modeling and simulation (ABMS) being extensively used across different disciplines of the social sciences.
This tendency enables agent-based social scientists to find a common language, and facilitates the resulting interdisciplinary communication and collaboration, which in turn defines a common interest among social scientists.
This gathering has also caused the emergence of a new discipline across the social sciences, known as computational social sciences.
CSS is also known as...
agent-based social sciences (Trajkovski and Collins, 2009)
bottom-up social sciences (Epstein and Axtell, 1996)
algorithmic (behavioral) social sciences
generative social sciences (Epstein, 2006)
Software Agents
Schelling's segregation model provides a lucid illustration of the constituents of an agent-based model.
First, it answers why CSS is called algorithmic social sciences: each agent (actor) is represented by an algorithm or a computational program.
This algorithm (program) corresponds to the decision rules, behavioral models, or even preferences that characterize the agent.
In a sense, it is a simple model of man. Borrowing the term from computer science, one may also call it a software agent or an autonomous agent.
Embeddedness
In Schelling's model, the embeddedness is a two-dimensional cellular automaton which defines the geography of the space in which agents live.
The geography (topology) of the city further defines a social network for each agent.
In addition to geographies and social networks, other embeddednesses include institutions, cultures, histories, etc.
Aggregation (Emergence)
Third, it also answers why CSS is called bottom-up social sciences.
The segregation phenomenon, as an aggregate phenomenon, is a sum of the interactions of fairly tolerant people.
Obviously, this is not a linear scaling-up. ``From the bottom up'' normally refers to surprising phenomena that would not be predicted from the model itself, which focuses on the actions of individual agents rather than overarching downward-focused principles.
Embeddedness
Interactions
Agents
Thomas Schelling’s Segregation
mobile agents with a preference for identity
2-dimensional chessboard-like network
local interaction
Distinguishing Features
micro-meso-macroscopic structure of social phenomena
micro-macro relations: aggregation, emergence
Thomas Schelling was one of the pioneers in the field of agent-based social modeling. He emphasized the value of starting with rules of behavior for individuals and using simulation to discover the implications for large-scale outcomes. He called this ``micromotives and macrobehavior.''
Segregation Model
The space is a checkerboard with 64 squares representing places where people can live.
There are two types of agents, represented by pennies and nickels. (One can imagine the agents as Greens and Reds.)
The coins are placed at random among the squares, with no more than one per square.
An agent will be content if more than one-half of its immediate neighbors are of the same type as itself.
The immediate neighbors are the occupants of the adjacent squares.
For example, if all eight adjacent squares were occupied, the actor is content if at least five of them are the same type as itself.
If an actor is content, it stays put. If it is not content, it moves.
In Schelling's original model, it would move to one of the nearest squares where it would be content.
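The rules above can be sketched in a few dozen lines of code. The following is a minimal sketch, not Schelling's original implementation: the board size, the number of agents and vacancies, and the relocation rule (a random vacancy where the agent would be content, instead of the nearest one) are illustrative simplifications.

```python
import random

SIZE = 8          # 8x8 checkerboard, 64 squares
random.seed(0)

def neighbors(grid, r, c):
    """Occupants of the (up to 8) adjacent squares."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < SIZE and 0 <= cc < SIZE and grid[rr][cc]:
                out.append(grid[rr][cc])
    return out

def content(grid, r, c):
    """Content if more than half of the occupied neighboring squares
    hold agents of the same type (a lonely agent counts as content)."""
    nbrs = neighbors(grid, r, c)
    if not nbrs:
        return True
    return sum(1 for x in nbrs if x == grid[r][c]) > len(nbrs) / 2

def step(grid):
    """Relocate each discontent agent to a random vacancy where it
    would be content (random relocation, a simplification of
    Schelling's 'nearest satisfactory square'). Returns moves made."""
    moved = 0
    cells = [(r, c) for r in range(SIZE) for c in range(SIZE)]
    random.shuffle(cells)
    for r, c in cells:
        if grid[r][c] and not content(grid, r, c):
            vacants = [(rr, cc) for rr, cc in cells if not grid[rr][cc]]
            random.shuffle(vacants)
            for rr, cc in vacants:
                grid[rr][cc], grid[r][c] = grid[r][c], None  # try a move
                if content(grid, rr, cc):
                    moved += 1
                    break
                grid[r][c], grid[rr][cc] = grid[rr][cc], None  # undo
    return moved

flat = ['G'] * 26 + ['R'] * 26 + [None] * 12   # two types plus vacancies
random.shuffle(flat)
grid = [flat[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]
for _ in range(50):
    if step(grid) == 0:   # everyone content: stop
        break
```

Running the loop typically produces visibly clustered neighborhoods even though every individual agent tolerates being in the minority up to one half, which is exactly the micromotives-to-macrobehavior point.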
Schelling's Segregation Model (2-Dimensional Cellular Automata)
Schelling's Segregation Model (Replicated from Pans and Vriend, 2005)
Schelling's Segregation Model (Replicated from Pans and Vriend, 2005)
Agent's Preference (Strictly prefers integration)
Schelling's Segregation Model (Replicated from Pans and Vriend, 2005)
The Usual Defense
The usual defense of agent-based modeling is its superiority over its alternatives, mainly the top-down, system-dynamics or equation-based systems.
The superiority can mean a better understanding (explanation) of social phenomena, better forecasting of the future, among others.
Epstein’s Long List (Epstein, 2008)
1. Explain (very distinct from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialogue
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple)
Scalable Extensions of Human-Subject Experiments
An easy answer for using agent-based modeling is that it is an extension of the experimental social sciences, including experimental economics, experimental political science, etc.
Experimental social sciences have difficulties scaling up. The most obvious limitation is money:
a limited number of subjects, a limited number of scenarios, a limited number of repetitions.
Others are human fatigue and the physical constraints of an on-site lab.
Hence, replacing human agents with software agents seems to be an attractive alternative when the above-mentioned constraints are stringent.
Is this a real defense? However, if this defense were true, then what we would really expect to see from the development of the literature is the following research pattern.
An experimental economist conducts an experiment with 20 human subjects lasting for 2 hours.
An agent-based computational economist scales up this experiment to 2,000 software agents lasting for 2 days.
Unfortunately, if we search the literature, we will be disappointed to see that there are not many ACE papers motivated in this vein.
In fact, the second wave of ACE models, appearing in the middle 1990s, was in pursuit of a much more modest goal: understanding the human behavior observed in experiments.
So, what is wrong? The answer is that human agents are not that easily replaceable. This leads to our next subject.
ACE and Experimental Economics
Historical Background
Three-Stage Development
Replication (John Duffy, 2006)
Competition (Tesfatsion, 2009)
Cooperation and Coordination (Chen, 2009)
A Wrap-Up Example
Example 4: Agent-Based Double Auction Markets
Historical Background
In the middle 1990s, after the cellular-automata tradition, another class of agent-based economic models appeared. This series of ACE models has some distinguishing characteristics:
They are strongly motivated by human-subject experiments.
They illustrate how a neo-classical (homogeneous rational-expectations) model can be rewritten as an agent-based economic model using heterogeneous interacting learning agents.
They show that the rational-expectations equilibrium can be approached via these agent-based models. Or, in the case that there are multiple equilibria, ACE can help us make a selection.
Example 2: Agent-Based Cobweb Models
Example 3: Agent-Based Overlapping Generations Models
Cobweb Model
This model plays an important role in macroeconomics, because it is the place where the concept of rational expectations originated (Muth, 1961).
It is also the first neo-classical macroeconomic prototype to which an agent-based computational approach was applied (Arifovic, 1994).
Cobweb Model
A competitive market composed of n firms.
Cost function of each firm:
$$c_{t,i} = x q_{t,i} + \frac{1}{2} y n q_{t,i}^2, \quad i = 1, 2, \ldots, n$$
Expected profit of firm i:
$$\pi^e_{t,i} = P^e_{t,i} q_{t,i} - c_{t,i}$$
Cobweb Model
Expected Profit Maximization
$$q_{t,i} = \frac{1}{yn}\left(P^e_{t,i} - x\right)$$
Market Equilibrium
$$P_t = A - B \sum_{i=1}^{n} q_{t,i} = A - B \frac{1}{yn} \sum_{i=1}^{n} \left(P^e_{t,i} - x\right)$$
Cobweb Model
Homogeneous Expectations
$$P^e_{t,i} = P^e_t, \quad \forall i \quad \Longrightarrow \quad P_t = A - \frac{B}{y}\left(P^e_t - x\right)$$
Homogeneous Rational Expectations Equilibrium
$$P^e_t = P_t = P^*$$
$$P^* = \frac{Ay + Bx}{B + y}, \qquad Q^* = \frac{A - x}{B + y}$$
Dynamics of the Cobweb Model
A question typically asked in the 1990s was: would the actual price converge to the homogeneous rational expectations equilibrium (HREE) price, even though agents are boundedly rational and do not have rational expectations?
Earlier studies show that in general the market will not converge to the HREE (Ezekiel, 1938; Bray, 1982; Marcet and Sargent, 1989).
These studies indicate that, depending on the so-called cobweb ratio, the market dynamics can be separated into the stable case and the unstable case:
$$B/y < 1: \text{ stable case}, \qquad B/y > 1: \text{ unstable case}$$
Cobweb Experiments
Experimental evidence, however, shows that even in the unstable case, the cobweb model is still stable (Wellford, 1989). The following two figures are from Arifovic (1994).
Replication of the Cobweb Experiments
Can we replicate the cobweb experimental results with boundedly rational agents, at least qualitatively?
Yes. In 1994, Arifovic (1994) gave the first positive result, and two years later Chen and Yeh (1996) also replicated this result, albeit with a different setup.
What is commonly shared by these two studies is a market composed of agents (firms) who start with heterogeneous beliefs (in either quantities or prices).
They then learn and adapt under a social or individual learning process driven by evolutionary algorithms, such as genetic algorithms or genetic programming.
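The flavor of such a replication can be sketched with a toy agent-based cobweb market. Firms hold heterogeneous price forecasts, supply according to the profit-maximizing rule, and revise their forecasts toward the realized price. Simple adaptive expectations are used here as a stand-in for the genetic-algorithm / genetic-programming learning of Arifovic (1994) and Chen and Yeh (1996), and the parameter values A, B, x, y, lam are illustrative choices (the cobweb ratio B/y = 0.5 puts us in the stable case).

```python
import random

# Demand: P = A - B*Q; each firm supplies q_i = (P^e_i - x)/(y*n).
A, B, x, y, n = 10.0, 1.0, 1.0, 2.0, 20
lam = 0.5                             # adaptive-expectations weight
p_star = (A * y + B * x) / (B + y)    # HREE price, here 7.0

random.seed(1)
beliefs = [random.uniform(0.0, A) for _ in range(n)]  # heterogeneous P^e_i

for t in range(200):
    # each firm supplies q_i = (P^e_i - x)/(y*n), truncated at zero
    supply = sum(max(pe - x, 0.0) / (y * n) for pe in beliefs)
    price = A - B * supply            # market-clearing (inverse demand)
    # each firm adapts its forecast toward the realized price
    beliefs = [pe + lam * (price - pe) for pe in beliefs]
```

With these parameters the heterogeneous beliefs contract toward a common value and the market price converges to the HREE price P* = 7, illustrating the convergence result the text describes, even though no agent holds rational expectations.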
Chen and Yeh (1996)
Take Chen and Yeh as an example. Each agent is initially given an arbitrary forecasting function of price.
As time goes on, each agent will revise her own forecasting function by learning from herself (individual learning) or from others (social learning).
$$P^e_{t,i} = f_i(P_{t-1}, P_{t-2}, \ldots), \quad i = 1, 2, \ldots, n$$
(Diagram: the array of individual forecasts $(P^e_{t,1}, P^e_{t,2}, \ldots, P^e_{t,n})$ for successive periods $t, t+1, t+2, \ldots$, each determining the corresponding market price $P_t, P_{t+1}, P_{t+2}, \ldots$)
Market Price Bottom-Up
$$P_t = h(\mathbf{P}^e_t), \quad \text{where } \mathbf{P}^e_t = (P^e_{t,1}, P^e_{t,2}, \ldots, P^e_{t,n})'$$
In the cobweb market, $h$ is given by the market-clearing condition $P_t = A - \frac{B}{yn}\sum_{i=1}^{n}(P^e_{t,i} - x)$.
Social Learning
In Chen and Yeh (1996), the learning of the population of the agents is driven by an evolutionary algorithm known as genetic programming.
(Diagram: genetic programming (GP) maps the population of forecasting functions from one period to the next, $\mathbf{P}^e_t \rightarrow \mathbf{P}^e_{t+1} \rightarrow \mathbf{P}^e_{t+2} \rightarrow \cdots$)
Overlapping Generations (OLG) Models
The OLG model has been extensively applied to studies of savings, bequests, demand for assets, prices of assets, inflation, business cycles, economic growth, and the effects of taxes, social security, and budget deficits.
2-Period OLG Model
It consists of overlapping generations of two-period-lived agents.
At time t, N young agents are born. Each of them lives for two periods (t, t+1). At time t, each of them is endowed with $e^1$ units of a perishable consumption good, and with $e^2$ units at time t+1, where $e^1 > e^2 > 0$.
An agent born at time t consumes $(c^1_t, c^2_t)$ in both periods.
All agents have identical preferences given by
$$U(c^1_t, c^2_t) = \ln(c^1_t) + \ln(c^2_t)$$
2-Period OLG Model
In addition to the perishable consumption good, there is an asset called money which is held distributively by the old generation.
Notation:
$m_{i,t}$: the nominal money balances that agent i acquires at time t and spends at time t+1
$P_t$: the nominal price level at time t
$\pi_t = P_t / P_{t-1}$: the gross inflation rate
$H_t$: the nominal money supply at time t
Optimization
$$\max_{(c^1_{i,t},\, c^2_{i,t})} \ \ln(c^1_{i,t}) + \ln(c^2_{i,t})$$
$$\text{s.t.} \quad c^1_{i,t} + \frac{m_{i,t}}{P_t} \le e^1, \qquad c^2_{i,t} \le e^2 + \frac{m_{i,t}}{P_{t+1}}$$
The first-order condition gives the savings function
$$s_{i,t} = e^1 - c^1_{i,t} = \frac{1}{2}\left(e^1 - e^2 \pi^e_{i,t+1}\right)$$
Money market clearing and the government budget constraint:
$$H_t = P_t \sum_{i=1}^{N} s_{i,t}, \qquad G = \frac{H_t - H_{t-1}}{P_t}$$
Price Dynamics
spending government :
at timeagent of saving :,
G
ti s ti
state)(steady
),1
)((2
1
foresight)(perfect
1
2
1
22
1
1
,
tt
tt
teti
N
Gg
e
e
e
g
e
e
Steady State Equilibria
The steady-state condition reduces to the quadratic $e^2 \pi^2 - (e^1 + e^2 - 2g)\pi + e^1 = 0$, whose two roots are
$$\pi^*_L = \frac{1}{2}\left[\left(1 + \frac{e^1}{e^2} - \frac{2g}{e^2}\right) - \sqrt{\left(1 + \frac{e^1}{e^2} - \frac{2g}{e^2}\right)^2 - 4\frac{e^1}{e^2}}\right]$$
$$\pi^*_H = \frac{1}{2}\left[\left(1 + \frac{e^1}{e^2} - \frac{2g}{e^2}\right) + \sqrt{\left(1 + \frac{e^1}{e^2} - \frac{2g}{e^2}\right)^2 - 4\frac{e^1}{e^2}}\right]$$
the low-inflation ($\pi^*_L$) and high-inflation ($\pi^*_H$) steady states.
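The two steady states are straightforward to compute numerically. The following sketch solves the quadratic implied by the steady-state condition $\pi = s/(s-g)$ with $s = \frac{1}{2}(e^1 - e^2\pi)$; the endowment and spending values e1, e2, g are illustrative, not taken from any cited experiment.

```python
import math

# Steady-state condition pi = s/(s - g), s = (e1 - e2*pi)/2,
# rearranges to: e2*pi^2 - (e1 + e2 - 2g)*pi + e1 = 0.
e1, e2, g = 10.0, 4.0, 0.5            # illustrative values

b = 1 + e1 / e2 - 2 * g / e2          # = (e1 + e2 - 2g)/e2
disc = b * b - 4 * e1 / e2            # discriminant
pi_L = (b - math.sqrt(disc)) / 2      # low-inflation steady state
pi_H = (b + math.sqrt(disc)) / 2      # high-inflation steady state

# sanity check: both roots satisfy pi*(s - g) = s
for pi in (pi_L, pi_H):
    s = (e1 - e2 * pi) / 2
    assert abs(pi * (s - g) - s) < 1e-9
```

For these values the roots are 1.25 and 2.0, i.e., a low-inflation and a high-inflation equilibrium; when the discriminant is negative, no monetary steady state exists.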
Multiple Equilibria
OLG Experiments
The two equilibria of the 2-period OLG model raise the issue of equilibrium selection.
This is particularly interesting because the two equilibria, high inflation vs. low inflation, do have different welfare implications.
To address this problem, experiments with human subjects have been run to see which equilibrium is more likely to appear (Marimon and Sunder, 1993, 1994; Bernasconi and Kirchkamp, 2000).
The results consistently show that the low-inflation equilibrium is chosen.
Agent-Based OLG Model
Can agent-based models replicate these experimental results? The answer, to some extent, is yes.
The general idea is very similar to the one in the agent-based cobweb model.
The homogeneous rational-expectations (perfect-foresight) assumption is replaced with boundedly rational agents holding heterogeneous beliefs in terms of consumption (Arifovic, 1995) or price expectations (Bullard and Duffy, 1999; Chen and Yeh, 1999).
This population of heterogeneous beliefs is then revised via either individual learning or social learning.
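A toy version of this exercise can be sketched as follows. Agents hold heterogeneous inflation expectations, save according to $s_i = \frac{1}{2}(e^1 - e^2\pi^e_i)$, and revise their expectations adaptively toward realized inflation; the adaptive rule is a stand-in for the GA / GP learning of Arifovic (1995) and Chen and Yeh (1999), and e1, e2, g, lam, N and the initial beliefs are illustrative values.

```python
import random

# Realized gross inflation: pi_t = (mean saving at t-1)/(mean saving at t - g).
e1, e2, g, lam, N = 10.0, 4.0, 0.5, 0.3, 50

random.seed(2)
beliefs = [random.uniform(1.0, 1.5) for _ in range(N)]  # pi^e_{i,t+1}
last_pi = sum(beliefs) / N            # starting value for realized inflation
prev_saving = sum((e1 - e2 * b) / 2 for b in beliefs) / N

for t in range(300):
    # each agent revises her expectation toward the last realized inflation
    beliefs = [(1 - lam) * b + lam * last_pi for b in beliefs]
    saving = sum((e1 - e2 * b) / 2 for b in beliefs) / N
    last_pi = prev_saving / (saving - g)
    prev_saving = saving
# for these parameters the low-inflation root of
# e2*pi^2 - (e1+e2-2g)*pi + e1 = 0 is 1.25, and the economy settles there
```

Under this simple learning rule the economy converges to the low-inflation steady state rather than the high-inflation one, in line with the equilibrium-selection result reported from the human-subject experiments.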
Chen and Yeh (1999)
Take Chen and Yeh (1999) as an example.
Agents are initially endowed with arbitrary inflation expectation functions.
$$\pi^e_{i,t+1} = f_i(\pi_{t-1}, \pi_{t-2}, \ldots), \quad i = 1, 2, \ldots, N$$
(Diagram: the individual inflation expectations $(\pi^e_{1,t}, \ldots, \pi^e_{N,t})$ determine the savings $(s_{1,t}, \ldots, s_{N,t})$, which in turn determine the price level $P_t$ and hence the realized inflation rate.)
Forecasting errors as feedbacks to trigger further review and revision
Inflation Bottom Up
(Diagram: in each period t, the population of individual expectations $(\pi^e_{1,t}, \ldots, \pi^e_{N,t})$ aggregates into the realized inflation rate $\pi_t$; genetic programming (GP) then evolves the population of expectation functions from one period to the next, $\Pi^e_t \rightarrow \Pi^e_{t+1} \rightarrow \Pi^e_{t+2} \rightarrow \cdots$)
Inflation Expectations Dynamics
Three-Stage Development
Duffy, J. (2006) Agent-based models and human subject experiments. In: Tesfatsion, L., Judd, K. (eds), Handbook of Computational Economics: Agent-Based Computational Economics, Vol. 2. Elsevier, Oxford, UK, 949-1011.
Tesfatsion, L. (2009) From human-subject experiments to computational-agent experiments (and everything in between), keynote speech at the 2009 International Meetings of the Economic Science Association, www.econ.iastate.edu/tesfatsi/ESA2009.LT.pdf
Chen, S.-H. (2009) Collaborative computational intelligence in economics. In: Mumford, C.L., Jain, L.C. (eds), Computational Intelligence: Collaboration, Fusion and Emergence, Intelligent Systems Reference Library, Vol. 1, Chapter 8. Springer.
Mirroring: A Statistical Criterion or an AI Criterion?
Presumably, software agents are naturally related to human agents, since the former are frequently designed to replicate or mirror the latter to different degrees of precision.
Arthur's "calibrating artificial agents" or his version of the Turing test probably provides the earliest guideline for this mirroring function.
The statistical criterion has now been taken into account by agent-based economic model-builders, whereas not many follow the AI criterion.
Arthur (1993): Calibrating Artificial Agents
An important question then for economics is how to construct economic models that are based on an actual human rationality that is bounded or limited. As an ideal, we would want to build our economic models around theoretical agents whose rationality is bounded in exactly the same way human rationality is bounded, and whose decision-making behavior matches or replicates real human decision-making behavior. (Arthur, 1993, p. 2.)
Calibrating Artificial Agents
To build the software agents upon empirical grounds, he further suggested a statistical approach to designing software agents: first, parameterize the software agents in terms of their decision algorithms, and, second, calibrate them.
Universal Decision Algorithms?
The next important question is the choice of the parametric software agents, or the choice of parametric decision algorithms.
Although he did suggest using those learning algorithms already widely applied in economics, he did not believe in the existence of a universal economic agent, characterized by a universal decision algorithm applicable to all economic problems.
Instead, it is context-dependent; behavior can differ considerably from one decision problem to another.
Example: Reinforcement Learning in N-Armed Bandit Problem (Arthur, 1993)
N-Armed Bandit Problem
In the N-armed bandit problem, the agent is provided with a set of N alternatives from which he has to choose one.
The consequence of choosing the nth alternative is a payoff $r_n$ which is random, n = 1, 2, ..., N.
The stochastic structure of $r_n$ is unknown to the agent.
Stated formally, over a horizon of H choices $\{d_1, d_2, \ldots, d_H\}$ the agent seeks to maximize the average realized payoff:
$$r_n \overset{iid}{\sim} f_n(r), \quad E(r_n) = u_n, \quad n = 1, 2, \ldots, N$$
$$\max_{\{d_1, d_2, \ldots, d_H\}} \ \frac{1}{H}\sum_{i=1}^{H} r_{d_i}, \quad d_i \in \{1, 2, \ldots, N\}$$
Robillard’s Experiment
Conducted by Laval Robillard at Harvard in 1952-53 (Bush and Mosteller, 1955)
2-armed bandit problem
Experimental design: $(f_A, f_B)$
Option A: pays 1 with probability $f_A$, 0 with probability $1 - f_A$
Option B: pays 1 with probability $f_B$, 0 with probability $1 - f_B$
Reinforcement Learning (Law of Effect)
The essence of reinforcement learning is very simple: choices that have led to good outcomes in the past are more likely to be repeated in the future.
It is a psychologically motivated learning algorithm (from behavioral psychology, behaviorism, and animal experiments).
It is very popular in agent-based modeling, roughly as popular as evolutionary algorithms.
There are several different versions of reinforcement learning (RL).
Arthur's 2-parameter version mainly focuses on the speed of learning (Arthur, 1993).
Roth and Erev's 3-, 4- or 5-parameter versions (Roth and Erev, 1995; Erev and Roth, 1998) further extend it to cover several different psychological or cognitive considerations, such as memory, attention, and aspiration (reference point).
Arthur's 2-Parameter RL
Stochastic choice ($q_n(t)$: strength of alternative n):
$$p_n(t) = \frac{q_n(t)}{Q(t)}, \quad Q(t) = \sum_{n=1}^{N} q_n(t), \quad n = 1, 2, \ldots, N$$
Updating:
$$q_n(t+1) = \begin{cases} q_n(t) + \Pi(t), & \text{if } n \text{ is activated at time } t \\ q_n(t), & \text{if } n \text{ is not activated} \end{cases}$$
Normalization: strengths are rescaled so that the total strength follows $C(t) = C t^{\nu}$ ($C$ a constant):
$$q_n^{norm}(t) = q_n(t)\frac{C(t)}{Q(t)}$$
(Arthur's calibration: 31, 0)
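This learning scheme can be sketched on a 2-armed bandit in a few lines. The sketch below is a stylized illustration, not Arthur's calibrated model: the values of C and nu and the arms' success probabilities are illustrative choices.

```python
import random

# Arthur-style reinforcement learning on a 2-armed bandit: strengths
# q_n are reinforced by realized payoffs, then renormalized so the
# total strength follows C * t**nu.
random.seed(3)
f = [0.9, 0.1]       # success probabilities of the two arms (illustrative)
C, nu = 1.0, 1.0     # total-strength schedule C * t**nu (illustrative)
q = [1.0, 1.0]       # initial strengths
choices = []

for t in range(1, 5001):
    total = sum(q)
    # stochastic choice: arm n is picked with probability q_n / total
    n = 0 if random.random() < q[0] / total else 1
    choices.append(n)
    payoff = 1.0 if random.random() < f[n] else 0.0
    q[n] += payoff                     # reinforce the activated arm
    scale = C * t**nu / sum(q)         # renormalization step
    q = [x * scale for x in q]

share_best = choices[-1000:].count(0) / 1000  # late-run share of the better arm
```

Because the better arm is reinforced more often, its choice probability drifts upward over time; the speed of that drift is governed by the normalization schedule, which is exactly why the tutorial describes Arthur's version as focusing on the speed of learning.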
Arthur (1993): Turing Test
What would it mean to calibrate a behavioral algorithm? In designing an algorithm to represent human behavior in a particular context, we would be interested not only in reproducing statistically the characteristics of human choice, but also in reproducing the "style" in which humans choose, possibly even the ways in which they might depart from perfect rationality. The ideal would be algorithmic behavior that could pass the Turing test of being indistinguishable from human behavior with its foibles, departures and errors, to an observer who was not informed whether the behavior was algorithm-generated or human-generated (Turing 1956). Calibration ought not to be merely a matter of fitting parameters, but also one of building human-like qualitative behavior into the algorithm specification itself. (Arthur, 1993, p. 3)
Turing Test
Of course, the AI criterion is much broader than the statistical one, and its implementation may be harder; hence, it has drawn much less attention from ACE economists.
Ecemis, Bonabeau, and Ashburn (2005) and Arifovic, McKelvey, and Pevnitskaya (2006) are the only examples known to us.
Market Frenzy
Using a technique called interactive evolutionary computation, they mimic a market frenzy that occurred on the London Stock Exchange in September 2002.
The event began at 10:10 am, and within 5 minutes the FTSE 100 index rose from 3,860 to 4,060. Within another few minutes, the index fell to 3,755, before returning to a value slightly above its original level at the end of the 20 minutes (Figure 2).
A "qualitative match" includes matching the amplitude, period, phase, and damping rate of the approximate wave, and of course the size, shape, and location of the price history.
Competing Interaction
In addition to mirroring the behavior of human agents, software agents are also used to interact directly with human agents (Chen and Tai, 2005).
U-Mart
The agent-based financial system U-MART provides one illustration (Shiozawa, Nakajima, Matsui, Koyama, Taniguchi, and Hashimoto, 2006).
U-MART stands for Unreal Market as an Artificial Research Test bed.
U-MART enables us to address two basic questions in such an integrated system:
Can human agents compete with the software agents when they are placed together in the market?
Can the participation of software agents lead to different dynamics, such as price convergence, market efficiency, etc.?
Agent-Based Electronic Market
Grossklags and Schmidt (2006) studied whether market efficiency can be enhanced when software agents are introduced into markets originally composed solely of human agents.
They designed a continuous double auction market in the style of the Iowa electronic market, and introduced software agents with a passive arbitrage-seeking strategy into the market experiment with human agents.
They then went further to distinguish the case where human agents are informed of the presence of software agents from the case where they are not.
Agent-Based Electronic Market
Whether or not the human agents are informed of the presence of the software agents can have significant impacts on market efficiency (in the form of price deviations from the fundamental price).
They found that if human agents are well informed, the presence of software agents leads to more efficient market prices compared to the baseline treatment without software agents.
Otherwise, the introduction of software agents results in lower market efficiency.
Agent-Based Double Auction Markets
The double auction (DA) market is probably the most illuminating illustration of the connection between agent-based computational economics and experimental economics.
Having said that, we notice that the DA is the context in which various versions of agents, crossing both the realm of EE and that of ACE, have been proposed.
It further serves as a backbone for more sophisticated agent-based models involving elements of trading, such as the agent-based fish market (Kirman and Vriend, 2001), the agent-based power market (Weidlich, 2008), and the agent-based financial market.
So, it is a kind of connection from the past to the future, from the basic to the advanced. Therefore, it will be nice to have a quick review of the development of the agent-based double auction market.
Agent-Based Double Auction Markets
Experiments with Human Subjects
Gode and Sunder (1993)
Rust, Miller and Palmer (1993, 1994)
Andrews and Prager (1994)
Chen, Chie and Tai (2002, 2003)
Double Auction Market
Both sides of the market, buyers and sellers, are able to submit prices (bids or asks).
The bids and asks are then matched under different trading rules. Most rules will match the most competitive bids (the highest few bids) with the most competitive asks (the lowest few asks).
(AURORA Rule) In an extreme case (one unit per transaction), only the holder of the highest bid (current bid) and the lowest ask (current ask) can be matched, of course, under the condition that the former is greater than the latter.
The transaction price will then be somewhere between the current bid and the current ask, say the middle of the two.
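One trading step under this extreme one-unit rule can be sketched as follows; the trader IDs and quote values are made-up illustrations, and the midpoint price is one of the admissible choices between the current bid and current ask.

```python
# One trading step: only the holder of the current (highest) bid and the
# current (lowest) ask can trade, provided the bid exceeds the ask; the
# transaction price is set at the midpoint of the two.
def trading_step(bids, asks):
    """bids/asks: lists of (trader_id, price). Returns (buyer, seller,
    price) if the current bid and current ask cross, else None."""
    if not bids or not asks:
        return None
    buyer, bid = max(bids, key=lambda b: b[1])    # current bid
    seller, ask = min(asks, key=lambda a: a[1])   # current ask
    if bid > ask:                                 # quotes cross: trade
        return (buyer, seller, (bid + ask) / 2)   # midpoint price
    return None

deal = trading_step([("b1", 10.0), ("b2", 12.0)],
                    [("s1", 11.0), ("s2", 13.0)])
```

Here b2's bid of 12.0 crosses s1's ask of 11.0, so those two trade at 11.5, while the less competitive quotes (b1, s2) go unmatched, which is the essence of the matching rule described above.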
Four Kinds of Agents
Zero-Intelligent Agents (Gode and Sunder, 1993)
Programmed Agents (Rust, et al, 1993, 1994)
Calibrated Agents (Arthur, 1991, 1993; Chen and Hsieh, 2009)
Autonomous Agents (Andrews and Prager, 1994; Dawid, 1999; Chen, 2000)
Double Auction Experiments with Human Subjects
(Diagram: N1 buyers and N2 sellers trade in a DA market. A token-value generation process produces a token-value table, which determines the demand and supply curves (market structure, curves' shapes) and the competitive equilibria; total surplus = consumers' + producers' surplus. A trading period is an S-step loop; each trading step consists of bid-and-ask followed by buy-and-sell under the AURORA rule, yielding the actual price and actual surplus.)
Double Auction Experiments
To place a real double auction environment into a laboratory, one needs to create the "incentive" for market participants.
This is normally done through a token-value generating mechanism.
Gode and Sunder (1993)
There are two research questions in agent-based double auction markets. The first one is: to achieve the degree of market efficiency which we observed in the market experiments with human subjects, what is the minimum degree of intelligence required of our artificial agents?
In other words, if we want to replace the human agents in the market experiment with software agents, how smart should we expect these software agents to be?
Gode and Sunder (1993)'s "zero-intelligence agent" is mainly an answer to this question.
Gode and Sunder (1993)
[Figure: the same double auction market as above, but with each buyer and seller replaced by a random (zero-intelligence) trader.]
Intelligence-Irrelevance Hypothesis
The zero-intelligence agent is a concept of randomly behaving agents, who are not purposive and are unable to learn.
To trade, they simply bid (ask) randomly, but are constrained by their true reservation price (zero-profit price).
Gode and Sunder showed that the market efficiency coming out of a group of zero-intelligence agents can match, or sometimes even exceed, what we observed in human-subject experiments.
Therefore, their work, to some extent, verified the long-held "intelligence-irrelevance hypothesis" in double auction market experiments.
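A minimal sketch of how budget-constrained zero-intelligence (ZI-C) trading can be simulated, in the spirit of Gode and Sunder (1993). The function names, the price ceiling, and the step-by-step matching loop are illustrative assumptions, not the original implementation.

```python
import random

def zi_quote(is_buyer, reservation, price_ceiling=200.0):
    """A constrained zero-intelligence (ZI-C) quote: random, but never
    loss-making relative to the trader's true reservation price."""
    if is_buyer:
        # bid anywhere at or below the buyer's token value
        return random.uniform(0.0, reservation)
    # ask anywhere at or above the seller's cost, up to an arbitrary ceiling
    return random.uniform(reservation, price_ceiling)

def simulate_period(buyer_values, seller_costs, steps=1000, seed=0):
    """Run random bid/ask steps; a trade occurs whenever bid >= ask.
    Returns realized surplus as a fraction of the maximum total surplus
    (the usual allocative-efficiency measure)."""
    random.seed(seed)
    buyers = sorted(buyer_values, reverse=True)
    sellers = sorted(seller_costs)
    max_surplus = sum(v - c for v, c in zip(buyers, sellers) if v > c)
    realized = 0.0
    for _ in range(steps):
        if not buyers or not sellers:
            break
        b, s = random.choice(buyers), random.choice(sellers)
        if zi_quote(True, b) >= zi_quote(False, s):
            realized += b - s  # surplus depends only on who trades
            buyers.remove(b)
            sellers.remove(s)
    return realized / max_surplus if max_surplus else 0.0
```

Because bids never exceed the buyer's value and asks never fall below the seller's cost, every trade yields a non-negative surplus; the efficiency ratio is typically high, which is the Gode-Sunder point.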
Extensions
Cliff (1997) showed that Gode-Sunder's ZI agents work only for symmetric markets, not asymmetric markets.
Cliff (1997) and Cliff and Bruten (1997) then argued that software agents need to be smarter to match human-subject experiments.
They therefore added a little learning capability to the ZI agent, which they called the ZI-Plus agent, or ZIP agent.
Nevertheless, the ZI agent has now been extensively used in agent-based economic and financial models (see Ladley, 2009 for a survey).
The virtue of the ZI-agent device is its simplicity and, therefore, analytical tractability. Hence, it serves as a benchmark for ACE.
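The flavor of the ZIP agent's extra learning capability can be sketched as a single Widrow-Hoff (delta-rule) step. This is a simplified sketch of Cliff and Bruten's idea (the actual ZIP rule also uses momentum and randomly perturbed target prices); all names and parameter values here are illustrative.

```python
def zip_update(limit_price, margin, target_price, beta=0.3):
    """One Widrow-Hoff (delta-rule) step of a ZIP-style seller:
    nudge the shout price toward a target price suggested by recent
    market events, then back out the implied new profit margin.

    The seller's shout price is limit_price * (1 + margin), margin >= 0.
    """
    shout = limit_price * (1.0 + margin)
    shout += beta * (target_price - shout)  # move part-way to the target
    return shout / limit_price - 1.0        # the new profit margin

# a seller with cost 100 and margin 0.2 shouts 120; seeing trades near
# 110, one update lowers the shout to 117, i.e. a margin of 0.17
```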
Santa Fe Double Auction Tournament
The weakness of the ZI agents is that they are not purposive, but human agents are.
Hence, they help us little to see the market dynamics from a game-theoretic viewpoint, be it static or evolutionary.
The second research question of the agent-based double auction market is exactly about how to be winners.
The inquiry into an effective characterization of the ``optimal'' trading strategies used in the double auction market has led to a series of tournaments, known as the Santa Fe Double Auction Tournament.
This tournament, organized by the Santa Fe Institute, invited participants to submit trading strategies (programs) and tested their performance relative to other submitted programs in the Santa Fe Token Exchange, an artificial market operated by the double auction mechanism.
They received 25 submissions, and the best-performing strategy is called the Kaplan strategy (a background-player strategy).
Rust, Miller and Palmer (1993, 1994)
[Figure: the same double auction market as above, now populated by the submitted trading programs in place of human buyers and sellers.]
Tournament vs. Experiment: Off-Line vs. On-Line
The idea of using a tournament as a form of agent-based modeling originated from Axelrod (1984)'s Iterated Prisoner's Dilemma tournament.
Software agents in the SFI-DA model are programmed agents, but they are hand-written by humans, and hence they can also be considered human agents.
However, the off-line setup of this tournament makes the participants unable to revise their programs once submitted, and hence their incarnations are unable to learn as their authors might.
Behavioral Economics
Behavioral economists care about what people actually do, and why, instead of what people ought to do in light of pure logic.
Behavioral economists are not satisfied with the act-as-if methodology, because it hides the real process by which the actual decision is made.
This tendency drives them to learn more about the "hardware" in which the agent's decision is made, such as cognitive capacity, personality, cultural background, and even down to neurophysiological details.
Inevitably, this leads to an interdisciplinary study overarching economics, psychology and neuroscience.
ACE with Homo Sapiens
Obviously, over the last decade, we have seen a fast development of the so-called behavioral experiments, which take cognitive capacity, intelligence, personality, emotion, risk attitude, and culture as control attributes of experiments with human subjects.
Needless to say, this development will soon be further rooted down to the brain and its underlying DNA.
While ACE is very sympathetic to the idea of bounded rationality, most ACE models developed so far have not taken this development into account, although they may have the potential to do so.
Given the relation between ACE and EE discussed earlier, it is expected that ACE can develop various autonomous agents or software agents such that this cognitive, personality, and cultural background can be incorporated.
This is certainly an important step toward successful models of Homo Sapiens.
Example 5: Agent-Based Lottery Markets (Chen and Chie, 2008)
Agent-Based Lottery Markets
The lottery market is an area where gambling psychology plays quite an active role.
Chen and Chie (2008) develop an agent-based lottery market whose autonomous agents are grounded largely in this gambling psychology.
Their artificial agents have the potential to develop three psychological characteristics:
Halo Effect (Lottomania)
Conscious Selection (Neglect of probability)
Regret-Aversion (Interdependent preference)
Agent-Based Lottery Markets
What distinguishes their model from typical behavioral models is that these three characteristics are not imposed exogenously, but only have the possibility to emerge as an evolutionary outcome.
This exemplifies how ACE can work with behavioral economics by making the implicit selection process explicit, and it provides a stability test for these behavioral patterns.
This model is also used to answer the question of the optimal lottery tax rate.
Fuzzy Inference System
(Sugeno-style fuzzy rules)

Rule i: If J_t is A_i, then r = a_i

A_i (i = 1, …, k): fuzzy sets
a_i (i = 1, …, k): proportion of income to bet
J_t: the jackpot size updated at the days of the t-th issue
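The zero-order Sugeno scheme described above can be sketched in Python. The triangular membership functions and the three-rule base (small, medium, large jackpot, in millions) are hypothetical choices for illustration, not the rule base of Chen and Chie (2008).

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on the support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bet_proportion(jackpot, rules):
    """Zero-order Sugeno inference: each rule i says 'If jackpot J is A_i,
    then bet proportion r = a_i'; the output is the firing-strength-weighted
    average of the constant consequents a_i."""
    weights = [mu(jackpot) for mu, _ in rules]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * a for w, (_, a) in zip(weights, rules)) / total

# hypothetical rule base over the jackpot size (in millions)
rules = [
    (lambda J: triangular(J, -1, 0, 50),    0.01),  # small jackpot -> bet 1%
    (lambda J: triangular(J, 0, 50, 100),   0.05),  # medium jackpot -> bet 5%
    (lambda J: triangular(J, 50, 100, 151), 0.10),  # large jackpot -> bet 10%
]
```

For example, a jackpot of 75 fires the medium and large rules equally, yielding a bet proportion of 0.075.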
Autonomous Agents with Three Psychological Characteristics
Evolutionary Economics
Economics is about change, and that subject has been very clearly stated in Alfred Marshall's following famous quotation.
“Economics, like biology, deals with a matter, of which the inner nature and constitution, as well as outer form, are constantly changing.” (Marshall, 1924, p. 772)
ACE has been considered the ``modern'' computerized evolutionary economics, taking up this legacy of Alfred Marshall (Tesfatsion, 2001).
ACE and the Legacy of Marshall
In terms of the legacy of Marshall, one unique feature of the ACE model is its capability of modeling intrinsically constant change.
(Co-evolution) One essential ingredient for triggering constant change is to equip agents with a novelty-discovering or chance-discovering capability so that they may constantly exploit the surrounding environment, which causes the surrounding environment to act or react and hence to change constantly.
If economics is about constant change, and that happens because autonomous agents keep on searching for chances and novelties, then the change in each individual and the change in the microstructure must accompany the holistic picture of constant change (the N-type agent-based financial model, for example).
We have already experienced novelty-discovering agents in the agent-based double auction markets; we shall now focus more on the microstructure dynamics, which is about heterogeneities and their respective dynamics.
Heterogeneity
Heterogeneity is another feature of ACE. Due to its computational flexibility, ACE can accommodate different kinds of heterogeneity to different degrees.
Some kinds of heterogeneity are given, such as genetic material, DNA, intelligence, personality traits, preferences, endowments and cultures, but some are endogenously determined, such as income, wealth, firm size (Robert Axtell), market share, life expectancy, expectations and beliefs.
Some are partially exogenous, but still may have the potential to change over time, such as personality, culture, etc.
The degree of heterogeneity (the distribution) can change with evolution, and new species which did not exist before can appear.
For the exogenously given heterogeneities, ACE can study their long-term effects (survival analysis); for the endogenously determined ones, ACE can study their causes and predict their future.
Approaches to Microstructure Dynamics and Heterogeneity
Microstructure dynamics can be manifested in light of the statistical-mechanics approach, also called the mesoscopic approach (Aoki, 1996, 2002, 2006).
This approach actually connects ACE to physics, sociophysics, or econophysics.
With this approach, the set of behaviors or strategies used to characterize agents is finite or bounded.
A finite set does allow us to study the microstructure dynamics on solid ground, but it inevitably implies the absence of novelties and their discovery processes.
Hence, alternative approaches exist to extend the analysis of microstructure dynamics to an infinite set, so that rich microstructure dynamics are embedded within novelty-discovering processes.
Example 6: Agent-Based Financial Markets (Santa Fe)
Santa Fe Artificial Stock Markets: Origin
Origin: Brian Arthur at the Santa Fe Institute (SFI)
Arthur, B. (1992), "On Learning and Adaptation in the Economy," 92-07-038.
Palmer, R. G., W. B. Arthur, J. H. Holland, B. LeBaron, and P. Tayler (1994), "Artificial Economic Life: A Simple Model of a Stockmarket," Physica D, 75, pp. 264-274.
Tayler, P. (1995), "Modelling Artificial Stocks Markets Using Genetic Algorithms," in S. Goonatilake and P. Treleaven (eds.), Intelligent Systems for Finance and Business, pp. 271-288.
Artificial Stock Markets: Further Development
Arthur, W. B., J. Holland, B. LeBaron, R. Palmer and P. Tayler (1997), ``Asset Pricing under Endogenous Expectations in an Artificial Stock Market,'' in W. B. Arthur, S. Durlauf & D. Lane (eds.), The Economy as an Evolving Complex System II, Addison-Wesley, pp. 15-44.
LeBaron, B., W. B. Arthur and R. Palmer (2000), ``Time Series Properties of an Artificial Stock Market,'' Journal of Economic Dynamics and Control.
LeBaron, B. (1999), ``Building Financial Markets with Artificial Agents: Desired goals and present techniques ,’’ in G. Karakoulas (ed.), Computational Markets, MIT Press.
LeBaron, B. (2001), ``Evolution and Time Horizons in an Agent Based Stock Market,’’ Macroeconomic Dynamics, 5, pp. 225--254.
Santa Fe Institute Artificial Stock Markets
Two assets: one risky and one riskless. Dividends and interest are exogenously given: the dividend follows a stochastic process and the interest rate is exogenously fixed.
Agents share a common CARA utility function, and myopically maximize their next-period expected wealth by finding the optimal portfolio.
Utility Function

U(W_{i,t}) = -exp(-λ W_{i,t})

λ: degree of risk aversion
M_{i,t}: money held by agent i at time t
h_{i,t}: shares held by agent i at time t

W_{i,t} = M_{i,t} + P_t h_{i,t}
W_{i,t+1} = (1 + r) M_{i,t} + h_{i,t} (P_{t+1} + D_{t+1})

Max E[U(W_{i,t+1})]
Optimal Investment

Max E[U(W_{i,t+1})] = E(-exp(-λ W_{i,t+1}) | I_{i,t})
s.t. W_{i,t+1} = (1 + r) M_{i,t} + h_{i,t} (P_{t+1} + D_{t+1})

D_t ~ Gaussian(d̄, σ_d²)

h*_{i,t} = [E_{i,t}(P_{t+1} + D_{t+1}) - (1 + r) P_t] / (λ σ²_{i,t})
Homogeneous Rational Expectations Equilibrium

Market Equilibrium:
Σ_{i=1}^{N} h*_{i,t} = Σ_{i=1}^{N} [E_{i,t}(P_{t+1} + D_{t+1}) - (1 + r) P_t] / (λ σ²_{i,t}) = H

Homogeneous Expectations:
N [E_t(P_{t+1} + D_{t+1}) - (1 + r) P_t] / (λ σ²_t) = H

Homogeneous Rational Expectations Equilibrium:
P = (d̄ - λ σ_d² h) / r, where h = H/N
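As a consistency check on the CARA-Gaussian formulas above, the following sketch computes the homogeneous rational-expectations price and verifies that, at that price, each agent's optimal demand equals the per-capita supply H/N. The parameter values are arbitrary illustrations.

```python
def optimal_holding(expected_payoff, price, r, lam, var):
    """CARA-Gaussian share demand:
    h* = (E[P_{t+1} + D_{t+1}] - (1 + r) P_t) / (lam * var)."""
    return (expected_payoff - (1.0 + r) * price) / (lam * var)

def ree_price(d_bar, r, lam, var_d, H, N):
    """Homogeneous rational-expectations equilibrium price for i.i.d.
    dividends: P = (d_bar - lam * var_d * H / N) / r."""
    return (d_bar - lam * var_d * H / N) / r

# arbitrary parameters: mean dividend 10, r = 5%, lam = 0.5, var_d = 4
P = ree_price(d_bar=10.0, r=0.05, lam=0.5, var_d=4.0, H=100.0, N=100.0)
# in the REE the price is constant, so E[P' + D'] = P + d_bar
h_star = optimal_holding(P + 10.0, P, r=0.05, lam=0.5, var=4.0)
```

With these numbers, P = 160 and h* = 1, which is exactly the per-capita supply H/N.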
Heterogeneous Expectations
The main departure of the SFI agent-based financial model is to assume that information is imperfect; e.g., the stochastic nature of the dividend is unknown to agents, which makes homogeneous expectations hard to realize.
They then build models of heterogeneous expectations in the spirit of bounded rationality.
The focus is on how agents form their expectations:
E_{i,t}(P_{t+1} + D_{t+1}) =
α^(1)_{i,t} (P_t + D_t) + β^(1)_{i,t}, if S_t ∈ A^(1)
α^(2)_{i,t} (P_t + D_t) + β^(2)_{i,t}, if S_t ∈ A^(2)
α^(3)_{i,t} (P_t + D_t) + β^(3)_{i,t}, if S_t ∈ A^(3)
…
α^(k)_{i,t} (P_t + D_t) + β^(k)_{i,t}, if S_t ∈ A^(k)

where S_t is the market state at time t and A^(j) is the condition part of the j-th forecasting rule.
Classifier System + Genetic Algorithms
SFI's Artificial Adaptive Agents
They model each adaptive agent with a classifier system, which is evolved over time using genetic algorithms.
Each constituent of the classifier system is a linear regression; given the condition it has to meet before activation, it is more like a local linear regression.
The entire classifier system is, therefore, a non-linear forecasting function composed of many local linear forecasts.
The set of conditions (thresholds) and the regression coefficients are all open to change as agents learn from their experience.
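The idea of condition-matched local linear forecasts can be sketched as below. The condition bits (for example, whether the price is above a moving average) and the coefficient values are hypothetical; the real SFI market uses accuracy-weighted rule selection with GA-evolved conditions rather than the first-match lookup shown here.

```python
def classifier_forecast(state, rules, default=(1.0, 0.0)):
    """Piece together a non-linear forecast from condition-matched local
    linear rules: each rule is (condition, a, b), and the first rule whose
    condition matches the current market state supplies
    E[P' + D'] = a * (P + D) + b."""
    p_plus_d = state["P"] + state["D"]
    for condition, a, b in rules:
        if condition(state):
            return a * p_plus_d + b
    a, b = default  # fall back to a naive no-change rule
    return a * p_plus_d + b

# hypothetical condition bits, e.g. "price is above its 250-period MA"
rules = [
    (lambda s: s["P"] > s["ma250"], 1.02, 0.0),   # trend-following forecast
    (lambda s: s["P"] <= s["ma250"], 0.95, 5.0),  # mean-reverting forecast
]
```

Because different states activate different (a, b) pairs, the overall forecasting function is non-linear even though each piece is linear.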
SFI agent-based financial markets
The SFI artificial stock market has been used to understand when the market will behave close to what the fundamental equilibrium predicts, and when it will deviate from it, by manipulating various parameters of the model, such as the speed of learning, the time horizon of learning, etc.
It has also been used to explain some financial stylized facts (we will talk more on this later).
A comprehensive review of the SFI stock market can be found in Ehrentreich (2007).
In terms of modeling autonomous agents, there are a number of further variations. The two which are more in the vein of the SFI model are Tay and Linn (2001) and Chen and Yeh (2001).
SFI Agent-Based Financial Markets
One essential advantage of the ACE model is that it can implement survival analysis in an evolutionary context (Axtell, 2004; LeBaron, 2006).
Within this context, one can test various behavioral hypotheses proposed by either neoclassical or behavioral economists.
There are two test forms: a weak one and a strong one. For the former, the hypothesis is directly planted into the model, and we watch its survivability; for the latter, the hypothesis is tested as an emergent property.
More examples would be:
Chen and Yeh (2002) on the martingale believers
Chen and Huang (2008) on the risk-preference-irrelevance hypothesis
Chen and Chie (2008) on the neo-classical gamblers
Variations of SFI
Tay and Linn (2001) apply a fuzzy classifier system instead of the original crisp classifier system proposed by the SFI.
Using fuzzy logic, Tay and Linn (2001) are able to model linguistic decision rules such as:
If price/fundamental value is high, then both α and β should be high:
E_{i,t}(P_{t+1}) = α_{i,t} P_t + β_{i,t}
Variations of SFI
Chen and Yeh (2001) apply genetic programming to give a general non-linear and nonparametric expectation formation, which leaves the size and shape to be determined through evolution.
Using genetic programming, they are able to measure the complexity of the rules used by the agents, and watch the dynamics of that complexity.
Using genetic programming, they can also observe what kinds of variables agents will use in forecasting the future price, through which they provide an alternative test for the sunspot equilibrium (Chen, Liao and Chou, 2008).
Furthermore, in the vein of evolution, their model can also be used to illustrate the dinosaur hypothesis originally proposed by Arthur (1992).
E_{i,t}(P_{t+1}) = f(P_t, P_{t-1}, …, V_t, V_{t-1}, …)
V_t: trading volume at time t
ACE and Econometrics
Earlier, we mentioned the relation between ACE and EE with specific reference to the mirroring function (recall the calibration work done by Arthur on the reinforcement learning model).
This goal generally applies to other ACE models which deal directly with field data instead of experimental data.
While there are still a lot of ACE researchers who consider their models mainly as thought experiments, there is an increasing interest in building ``empirically based, agent-based models'' (Janssen and Ostrom, 2006).
This requires ACE researchers to validate their models with real data, and has further developed ACE into econometric models which may be estimated by standard econometrics or other less standard estimation approaches.
Maybe the most mature area in which to see the connection between ACE and econometrics is, once again, agent-based financial models.
Example 7: Agent-Based Financial Markets (Chen, Chang and Du, 2009)
ACE and Econometrics
Chen, Chang and Du (2009) present the development of agent-based computational economics in light of its relation to econometrics.
They propose a three-stage development and illustrate the development using the literature of agent-based financial modeling.
The three-stage development is:
Presenting ACE with Econometrics
Building ACE with Econometrics
Emerging Econometrics with ACE
ACE and Econometrics
The agent-based financial market has made itself a promising example for agent-based social sciences.
It has, to an extent, successfully replicated some familiar stylized facts, and it points to their possible causes, so it enriches the theory of financial economics.
However, based on the progress achieved so far, 2-type or 3-type models seem to be good enough. In this manner, finance is more like the complexity science of the 1980s.
Econometric estimation of agent-based financial models enables us to learn further from the data, in particular, about the behavioral aspects of financial agents.
When applied to different markets, it may also shed light on the heterogeneity of financial agents across markets.
ACE and Econometrics
Nevertheless, a few questions have also been observed. First, is the observed sustained heterogeneity of financial agents across different markets an empirical fact, or is it just a spurious outcome of these simple agent-based models?
Second, introducing additional types of financial agents into the market can, in some cases, result in significant changes in the estimated parameters. This also requires careful addressing.
Complex agent-based financial models are not unemployed, however. In fact, it is expected that, when we move to other less exploited stylized facts, the autonomous-agent designs may become more helpful, as a few recent studies have already indicated.
Nevertheless, it remains an issue whether one should seriously estimate these complex agent-based models. Why and how?
The SFI-like agent-based models should not be evaluated purely on the basis of their econometric or forecasting performance.
ACE and Econometrics
Instead of searching for an econometric foundation for these models, one may think in the reverse way: the best role for them to play is to serve as an agent-based foundation of econometrics, as they can contribute to our study of the aggregation problem.
Solving the aggregation problem involves various uses of micro-macro models, and these complex agent-based models may enable us to know more about the complex micro-macro relations than the simple agent-based models.
[Figure: the 50 ACF models and the stylized facts they address, divided into N-Type Designs (38), comprising 2-Type (18), 3-Type (9) and Many-Type (11) designs, and Autonomous-Agent Designs (12).]
Collection of ACF Models
In this paper, we survey a large number of agent-based financial market models (50, to be exact). A survey of this size allows us to examine models across many different classes.
While there are already some taxonomies of agent-based financial models in the literature, our perspective here is more concerned with the simplicity and complexity of the models, in particular, the number of possible behavioral rules used in the model.
This concern draws our attention to the software-agent designs and divides the literature into the following two groups:
N-Type Designs (N can be few, such as 2 or 3, or many)
Autonomous-Agent Designs
The Two Groups
The first group corresponds to the survey given by Hommes (2006), whereas the second group corresponds to the survey given by LeBaron (2006).
The two groups can also be put into an interesting contrast.
If we consider heterogeneity, adaptation, and interactions as three essential ingredients of ACF, then the first group tends to be simpler in each of these three elements, while the latter is more complex in each of the three.
This contrast, from simple to complex, therefore enables us to reflect upon the heated discussion of the simplicity principle in modeling complex adaptive systems. The specific question, for example, is what the ``marginal gains'' from making more complex models are.
Alternatively put, what is the minimum number of clusters of financial agents required to replicate the financial stylized facts?
N-Type Models and the SFI Models
Models with the N-type designs mainly cover the three major classes of ACF, namely:
Kirman's Ant Models (Kirman, 1991, 1993)
Lux's IAH Models (Lux, 1995, 1997, 1998; Lux and Marchesi, 1999, 2000)
Brock and Hommes' ABS Models (Brock and Hommes, 1998)
They also include some others which may be distinguished from the three above, such as the Ising models, minority-game ($ game) models, prospect-theory-based models, and threshold models.
Models with the autonomous-agent designs are mainly either SFI (Santa Fe Institute) models or their variants.
Distribution of the 50
This sample is by no means exhaustive, but we hope that it represents the underlying population well.
Sample Size: 50
N-Type Designs: 38
2-Type Designs: 18
3-Type Designs: 9
Many-Type Designs: 11
Autonomous-Agent Designs: 12
Demographic Structure
These four tables are by no means exhaustive, but just a sample of a large pile of existing studies.
Nonetheless, we believe that they represent some basic characteristics of the underlying literature well.
The largest class of ACF models is the few-type design (50%).
Two Remarks
We do not verify the models, and hence are not in a position to give a second check on whether the reported results are correct. In this regard, we assume that the verification of each model has been confirmed during the refereeing process.
We, however, do make a minimal effort to see whether proper statistics have been provided to support the claimed replication. A study which does not satisfy this criterion will not be taken seriously.
First, there are four stylized facts which obviously receive more intensive attention than the rest. These four are fat tails (41 counts), volatility clustering (37), absence of autocorrelations (27), and long memory of returns (20).
Second, we also notice that all the stylized facts explained pertain exclusively to asset prices; in particular, all these efforts are made to tackle low-frequency financial time series.
The Role of Heterogeneity and Learning
Do many-type models gain additional explanatory power over the few-type models? Many-type models do not perform significantly better than the few-type models.
Would more complex learning behavior help? There is little marginal gain over the baseline models (2- or 3-type models).
Furthermore, baseline models facilitate the estimation or calibration work, which characterizes the second-stage development.
Building ACE with Econometrics
In the second stage, an ACE model is treated as a parametric model, and its parameters are estimated using real financial data.
What concerns us is no longer just the stylized facts, but also the behavior of financial agents and their embeddings.
Up to the present, only the three major N-type models (ANT, IAH and ABS) have been seriously estimated.
Given the differences among the three models, what is estimated is obviously different, but, generally, it includes two things, namely, the behavior of financial agents and their embeddings.
What to Estimate and What to Know
Despite their technical details and differences, the three estimation works share a common interest, namely, the evolving fractions of financial agents.
Two features are involved: first, large swings between fundamentalists and chartists; second, dominance of one cluster of financial behavior over a long period of time.
Putting them together, we may call this the market fraction hypothesis.
What to Estimate
In addition to the evolving market fractions, there are more details of financial agents' behavior, such as:
Beliefs: reverting coefficients, extrapolating coefficients
Memory: memory in fitness and memory in belief formation
Intensity of choice
Risk perceptions
The length of the moving-average window (fundamentalists)
Fitness measure (realized profits or risk-adjusted profits)
but these have received relatively less attention.
Amilon (2008) addressed the behavioral aspects found in his empirical study of 2-type and 3-type ABS models.
Aggregation Problems: Aggregation over Evolving Interacting Heterogeneous Agents
Aggregation problems are among the most difficult problems faced in either the theoretical or empirical study of economics. …There is no quick, easy, or obvious fix to dealing with aggregation problems in general (Blundell and Stoker, 2005, JEL)
s_i = f(x_i)
S = Σ_i s_i
E(S) = f(E(X)) + bias
plim f̂ = f + bias (representative agents)

How big is the bias?
Can we gauge a possible range of the bias?
Aggregation Problems
Example: Agent-Based CCAPM
Chen and Huang (2008, JEBO) and Chen, Huang and Wang (2009).
We assume that all financial agents have a unitary risk-aversion coefficient, and, starting from there, we can generate a series of artificial data from the artificial market.
Data Generated
Individual-level data:
{c_t^i}: individual consumption
{s_t^i}: individual saving rate
{q_{m,t}^i}: individual portfolio
{h_{m,t}^i}: individual holding share of each asset m (m = 1, …, M)
{R_t^i}: individual return

Aggregate-level data:
{c_t}: aggregate consumption
{p_{m,t}}: asset price
{R_{m,t}}: return
So, basically, regardless of whether we use data at the individual level or at the macro level, we are far away from the true value (which is one), but the estimates from the aggregated data are even further away.
If we ignore the error and take the econometric findings without hesitation, then we can even come up with some spurious relations, for example, a relation between risk aversion and wealth.
Information Sciences (2007): ``If agents are heterogeneous, some standard procedures (e.g. cointegration, Granger causality, impulse-response functions of structural VARs) lose their significance. Moreover, neglecting heterogeneity in aggregate equations generates spurious evidence of dynamic structure.''
ACE and Networks
Most ACE models which we consider in this lecture do not contain an explicit network for interaction (Thomas Schelling's model is the only exception).
However, a lot of ACE models do take networks (physical networks or social networks) into account.
In these models, the network topology becomes an exogenous variable or an additional parameter of the ACE model, which may have a non-trivial real effect.
However, recent interdisciplinary studies overarching economics, game theory and sociology have started to endogenize the formation of network topologies using ACE models.
Of course, further complexification can arise when one considers a full cycle between ACE and the embedded network topologies; they do feed back to affect each other.
This has been a very active area of study as data from the WWW has become extremely huge.
Concluding Remarks
Q: If I have no experience with ACE, but am interested in learning more about it, and possibly considering making an investment here, where should I start?
A: Prof. Leigh Tesfatsion's maintained website provides a total solution for beginners. With this website, I can peacefully stop here. http://www.econ.iastate.edu/tesfatsi/