Agent-based modeling
Péter Gács
Computer Science Department, Boston University
Spring 2019
It is best not to print these slides, but rather to download them frequently, since they will probably evolve during the semester.
The class structure
See the course homepage.
In the notes, section numbers and titles generally refer to the book:
Wilensky-Rand: Agent-Based Modeling.
Goals
• Experience in programming as a tool of exploration.
• Setting up models whose behavior is hard to predict without trying it out. They typically involve many interacting parts.
• Mostly, the models represent some real-world phenomenon, but brutally simplified. The question is then whether what is retained are the essential elements. Examples: ant foraging, flocking.
• Some of the models do not represent anything in the real world, but they are still intriguing, due to the complex behavior arising from simple rules. Examples: cellular automata, the Game of Life.
Technical details
Starting to program from scratch would take too much time to get to the fun part. We will use two platforms.
Netlogo You may have seen it even in high-school.
Advantages Very easy to start using. Has been around for 20 years, with lots of sample models developed. Written in Java; has an API allowing you to control it from Java.
Disadvantages You have to learn its language (however simple), which you will not use elsewhere.
Its comfortable environment is somewhat restrictive.
Details of implementation (documented source code) are not readily available.
Harder to optimize for larger models.
MASON One of several alternatives.
Advantages In Java, a language you already know and will probably use in the future.
Well documented (manual and classdocs).
Disadvantages More effort to start (even if you know some Java) than Netlogo. But in the process you learn much about:
• the structure of a mature software system
• how to interact with and modify one.
Model analysis
• In many interesting cases, just playing with parameters and initial settings does not give enough information about the model. We will need some programming tools to compare scenarios, systematically and quantitatively.
• The platforms Netlogo and MASON supply some simple tools:
  • plots, histograms, and so on
  • repeating runs over a range of parameters, outputting the results in a standardized form.
• But a lot of the analysis will have to happen using outside tools:
  • special modules in one of the languages you know, Python or Java
  • the data-analysis language R
  • even just a spreadsheet.
We will explore some of these possibilities; you will most likely also use some in your projects.
Modeling real-world phenomena
Explore the relation of the model to reality.
• Fitting parameters• Performing comparisons• Comparing alternatives
Flocking
• The natural phenomenon: flocks of starlings, fish schools, V-formation flight.
• What controls these global movements? Idea: they arise even though the animals obey only some very simple local rules. How to check this?
• Invent such rules and simulate the animals!
• A Netlogo version.
Paths
A simple model in which to see several Netlogo features: patches, other agents (walkers), movement.
We observe both the model and the code together.
Other models
In Netlogo, under File/Models Library, a large selection of models from
• Biology, Chemistry and Physics, Earth Science
• Computer Science
• Economics, Social Science
We will explore some of these and related examples, but you should explore them, too.
Soon you will start working on a project of your own: these examples can give you plenty of ideas.
Traffic
An example I am particularly interested in: traffic control.
Given: an urban grid, and drivers, each with a starting place and a goal.
Questions:
• Model different kinds of intersection: variants of traffic lights, roundabouts.
• Explore smarter strategies for traffic lights.
• Effect of giving congestion information to drivers.
What else in the lectures
• Implementation details of a simulation platform (scheduling, lookups, efficiency).
• Techniques and tools for developing and analyzing the models.
• Discussing the ideas and problems arising during your work.
Topics I will probably avoid though they are interesting:
• 3D visualization• Multi-thread techniques
(You may use such techniques in your project if you want.)
Structure of Netlogo
Object-oriented: the objects are called agents. They have their own variables and methods.
Environment Frequently represented by some agents called patches: 2D squares with fixed coordinates.
Turtles All other agents (a legacy name). Location is a built-in property. Various breeds correspond to Java classes.
Links between agents can also be agents.
Observer One central agent.
Graphical representation (shapes and colors are built-in properties).
Peculiarities of Netlogo
Few parentheses Can be confusing: calling a function without arguments looks the same as referring to a variable. You can add more parentheses for readability. Square brackets, on the other hand, are reserved for special syntactic functions.
Many built-in primitives These do make life easier if you know them. You don't need to know all of them to start programming.
Identifier names You can use something like thirst-intensity. But then the arithmetic operators -, +, * must be surrounded by spaces!
Comment After ; the rest of the line is a comment.
Breeds
Declare a class of agents by giving the plural and singular forms of their names:

breed [worms worm]
breed [sheep a-sheep]

(sheep is its own plural, so we invented a different singular form)
Variables
Once a variable x is declared, you give it value by
set x 3
x = 3 is not an assignment but a condition testing equality.
If variable x is local to a procedure, then the first time you must give it a value by
let x 3
Variables can be of three kinds:
Local Declared by setting it first with the let command.
Global Say x, my-place, declared by

globals [ x my-place ]

If a global variable is controlled from the interface, then it is considered declared there, and should not be declared here (best to comment out its declaration).
Belonging to a class If there is a breed worms, then we declare the attributes of agents belonging to it as

patches-own [wind-direction grass-level]
worms-own [head tail excitation-level]

There are some built-in attributes like color, patch-here.
Referring to attributes and methods of an object
To refer to variable x of agent head-sheep in a procedure not belonging to this breed, use the "[x] of" syntax:
set y [x] of head-sheep
(Would be head-sheep.x in Java.)
To change x in head-sheep in the same situation:
ask head-sheep [set x 3]
Flow control
if x < 5 [set x x + 1]
ifelse x < 5
  [set x x + 1]
  [set x x - 1]
while [x < 5] [set x x + 1]
Procedures
Definition of a procedure of sheep.
to eat-grass-p [amount half?]
  let x amount
  if half? [set x x / 2]
  let y [grass-level] of patch-here
  if y > x [set y x]
  set energy energy + y
  ask patch-here [set grass-level grass-level - y]
end
Call this by for example
ask this-sheep [eat-grass-p 5 true]
(In Java, you would write this_sheep.eat_grass_p(5, true).)
Reporters
Reporter: a procedure returning a value (in Java both are called functions).
; reports the amount actually eaten:
to-report eat-grass-r [amount half?]
  let x amount
  if half? [set x x / 2]
  let y [grass-level] of patch-here
  if y > x [set y x]
  set energy energy + y
  ask patch-here [set grass-level grass-level - y]
  report y
end

Call this as: set z [eat-grass-r 5 true] of this-sheep.
(In Java, you would write z = this_sheep.eat_grass_r(5, true).)
Stop
• Within a procedure definition, stop will stop the procedure(like return in Java).
• Within a reporter definition, report x will stop the reporter(like return x in Java).
• If the procedure is called by a "forever" button, then the repetition will also stop.
Self and myself
Within a procedure for some agent, the word self refers to the agent itself. Within a command or reporter called for another agent, the calling agent refers to itself as myself. For example:

let d 0
set targets sheep with [distance myself < 5]
if any? targets [
  let target one-of targets
  set d distance target
]
Agentsets
Using sets of agents avoids explicit loops.
ask turtles with [color = red] [forward 2]
• turtles is an agentset; turtles with [color = red] is a subset.
• heading is a built-in variable showing the angle of the direction of the turtle, counting clockwise from north.
• forward is a built-in procedure moving the turtle in the direction of its heading: forward 2 asks the turtle to move forward 2 units.
• The action forward 2 will be performed on the agentset in random order.
Lists
What Java calls arrays are lists here.
• Defining lists:

set L (list (random 2) (random 2) (eat-grass-r 5 false))

L is now a 3-element list, indexed by 0, 1, 2.
• Lists with constant elements can be defined more simply:

set L1 [1 3 5 7]
set L2 [[1 3] [5 7]]
• Referring to a list element, changing a list element:
set x item 2 L1              ; now x = 5
set x item 1 (item 0 L2)     ; now x = 3
set L1 replace-item 2 L1 7   ; now L1 = [1 3 7 7]
• Adding element e to the beginning or end of a list:
set L fput e L
set L lput e L
• Removing the first element or last element:
set L but-first L
set L but-last L
• List from an agentset:
set L sort turtles with [color = red]
• Concatenating lists:
set L (sentence [1 2] ["a" "b"] [3 1])

Now L is [1 2 "a" "b" 3 1].
• Concatenating strings:
set S (word "Good " "morning" "?")
Now S is "Good morning?" .
There are no primitives for splitting up a string; here is a trick using the read-from-string primitive. This is useful when you read in a file line by line, as in the Life example.
If s is "3 25 12.5 33" then

let s1 (word "[" s "]")
Now s1 is "[3 25 12.5 33]"
let L read-from-string s1
Now L is the list [3 25 12.5 33].
Anonymous procedures and reporters, iterate on list
(Sometimes called lambda expressions.)
• Anonymous procedure:

[x -> print x]
• Anonymous reporter:
[x -> 3 * x]
• Iterate through a list by anonymous procedure:
let c 0
foreach (list turtle1 turtle2 turtle3) [
  t -> if [color] of t = red [set c c + 1]
]

Now c shows the number of red turtles in the list.
• Produce a new list by anonymous reporter:
set L map [t -> 3 * t] [2 5 10]
Now L is [6 15 30].
• Just like in Python (except for lack of parentheses),
range 8
returns the list [0 1 2 3 4 5 6 7]. This is how we can have "for" loops:

foreach range 8 [
  i -> create-turtles 1 [move-to patch i i]
]
This creates 8 turtles along the diagonal.
Files
• Your model can be saved in a file. As seen in an editor, the code lasts until the line @#$#@#$#@. The text below it records the current state and user interface choices. Don't edit it (unless you know what you are doing).
• Some commands to read/write files: file-open, file-read, file-read-line, file-write, file-type, and so on; see the Netlogo Dictionary. Example: the initial state in the Game of Life.
• file-open must be used again even on a file that is already open, if you read/wrote another file in the meantime. To copy a line from file2.txt to an open file file1.txt:

file-open "file1.txt"
file-open "file2.txt"
set line file-read-line
file-open "file1.txt"
file-type (word line "\n")
Randomization
Explicit • random-float 1 returns a random number between 0 and 1.
• random 50 returns a random integer between 0 and 49.
Implicit • When you ask an agentset to carry out a procedure, it is carried out in random order.
• one-of turtles with [color = white] returns a random element of the set of white turtles.
Repeatability Calling random-seed 1984 at the beginning sets the seed to a specific number, so on repetition of the program the same random numbers will be chosen.
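The repeatability point can be illustrated in Python (the seed value 1984 and the range mirror random-seed and random 50; the function name is ours):

```python
import random

def sample_run(seed):
    """Fix the seed, then draw 5 numbers as 'random 50' would."""
    random.seed(seed)   # analogous to Netlogo's random-seed
    return [random.randrange(50) for _ in range(5)]

# Two runs with the same seed reproduce the same random choices:
assert sample_run(1984) == sample_run(1984)
```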
Reflection
• If there is a procedure (or even just a sequence of commands or an anonymous procedure) with the name eat-grass, then

set p "eat-grass"
run p

has the same effect as calling eat-grass directly. This is useful when the code has to decide at run time which procedure to use and you don't want a long list of case distinctions. The name of the procedure can also come from user input.
• runresult does the same with reporters.
Main procedures
A typical Netlogo program has at least two procedures:
• setup: to initialize some variables and create agents.
• go: the procedure that will be run repeatedly.
These procedures can call any number of other ones. Their names are just conventions; you can use other names and can have several such procedures.
Interface
• The Netlogo window has three tabs: Interface, Info and Code. The meaning of Info and Code is clear.
• The interface has a display whose properties can be controlled by the Settings.
• Other interface items can be added using the selector at top: Button, Input, Monitor, and so on.
• A button calls some commands. You will probably create a setup button and a go button.
• Each button has a checkbox called Forever. If it is checked, then the button stays pushed and the commands are called repeatedly until the button is released. You will probably want two versions of the go button: one with Forever checked and one without (which can be called go once, or step).
• The Input button (and Slider, Chooser, Switch) controls some global variables. These widgets also declare the variables, so the variables should not be declared in the code.
Game of Life
• A cellular automaton runs on a grid (in this case a 2D grid).
• Each point (site, cell) of the grid holds a state (in this case 0 or 1, corresponding to "dead" or "living").
• In each time step, each cell updates its state. For this, it applies a transition rule telling its new state as a function of the states of its (here 8) neighbors and itself. This rule defines our cellular automaton.
• Even very simple rules can lead to complex behaviors. In the Game of Life, the rule is:
  If you have 2 live neighbors, keep your state.
  If you have 3 live neighbors, live.
  Otherwise die.
• As each cell applies the transition rule, the global state evolves in interesting ways. We saw local configurations called gliders.
• A particularly interesting initial configuration can be loaded from a file: Gosper's glider gun. It repeatedly undergoes a sequence of state changes and emits a glider.
• There is enough complexity in the behavior of the Game of Life that, with an appropriate initial state, it can be programmed to implement any computation.
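As a minimal sketch (independent of Netlogo), the transition rule can be written as a synchronous update on the set of live cells:

```python
from collections import Counter

def life_step(live):
    """One synchronous Game-of-Life step.
    `live` is a set of (x, y) coordinates of living cells."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    new = set()
    for cell, n in counts.items():
        if n == 3:                      # 3 live neighbors: live
            new.add(cell)
        elif n == 2 and cell in live:   # 2 live neighbors: keep your state
            new.add(cell)
    return new                          # otherwise: die

# A "blinker" oscillates with period 2:
blinker = {(0, -1), (0, 0), (0, 1)}
assert life_step(life_step(blinker)) == blinker
```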
DLA (Diffusion-Limited Aggregation)
Droplets fly around randomly, and when they reach a frozen part, they stick to it.
We start from a frozen "seed".
1 Simplest version; wiggle for smoother random walks.
2 Decreasing the probability of sticking.
3 Adding influence by the frozen neighbors. Use of a toggle, and of read-from-string to get the probabilities from an interface control.
4 A hex version, allowing evaporation.
This is far from modeling real snowflakes (compare snowflakes grown in the laboratory).
Goals
Explore the dynamics of the predator-prey relationship. An old topic, first modeled by equations:
ds/dt = αs − βsw
dw/dt = δsw − γw
The equation model may assume a too aggregate point of view, abstracting away too many of the details of real situations, even though similar oscillations have been observed in nature, too.
An agent-based model may come closer, and may allow the examination of the influence of more factors. Our model will have the following ingredients:
• A field of grass.
• Sheep move around and eat grass.
• Wolves move around and feed on sheep.
Actually, an oscillating predator-prey dynamic may be observed already in just the sheep-grass relation.
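For later comparison with the simulation, it is worth recording the equilibrium of the equation model (a standard computation, with s the prey and w the predator population): setting both derivatives to zero for nonzero populations,

```latex
\frac{ds}{dt} = \alpha s - \beta s w = 0 \;\Rightarrow\; w^* = \frac{\alpha}{\beta},
\qquad
\frac{dw}{dt} = \delta s w - \gamma w = 0 \;\Rightarrow\; s^* = \frac{\gamma}{\delta}.
```

The populations in the equation model oscillate around roughly these values.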
Build-up strategy
In creating a model, we can approach from two directions:
Top-down Specify carefully all the requirements and the conceptual details before starting to code. (Unavoidable if the designer and the programmer are different.)
Bottom-up Start with the smallest possible element that you can test for working. Add complexity gradually, testing and modifying. Test added elements individually (unit testing) as much as possible.
In general, we combine the two approaches. Spell out as much as possible what your goals are, but then build gradually. Not only is it difficult to debug a large piece of code, but the coding experience will typically lead to some changes in the original specifications.
Main points to consider at design (as seen in this example):
Driving question Under what conditions do we see oscillating population levels?
Agent types Sheep, wolves, grass.
Agent properties Energy, location, heading, grass-amount.
Agent behavior Move, die, reproduce, eat (each breed), regrow (grass).
Parameters Initial number of sheep and wolves, cost of living (moving) for each breed, energy gain (each breed), grass regrowth rate, reproduction threshold and cost (each breed). Possible additions: say, how far do the wolves see?
Time step Move, die maybe, eat, reproduce maybe, regrow grass.
What is randomized Move direction in each step, initial grass amount, sheep and wolf placement.
What to measure Sheep and wolf population over time.
Steps of development
1 Create the sheep, let them move. Setup and go.
2 Slider to control the initial number.
  Sheep energy, its consumption per step. Slider to control it.
  Dying.
  Plot the number of sheep.
3 Grass. Initializing it. Eating it (affects both the sheep and the grass). Slider to control energy gain from it.
  Regrowing it. Slider to control the rate.
  Updating its color (we could also visualize the sheep's energy somehow).
4 Let the sheep reproduce (some parameter could control this, too). Can we produce oscillating population levels here?
5 Wolves. Slider to control initial number. They also move, die, eat (sheep) and reproduce. Slider to control energy gain from sheep.
  Plot also the grass amount and the number of wolves (in the same plot).
Links, epidemic
Links are agents in Netlogo. There can be several breeds of them.
Example: spread of an epidemic.
• Persons on a 2D grid. Each person is either susceptible or resistant (maybe because they were vaccinated).
• Each person also has a number (0, 1, 2; a model choice) of links to a random other person anywhere. Disease would spread to neighbors both on the grid and along links.
• Except for the links, similar to the Forest Fire model.
  Trees ↔ susceptible persons.
  Fire ↔ disease.
  Rigorous mathematical study of the probability of spreading far: percolation theory.
Critical value
• Random initial placement of trees, with some density 0 < d ≤ 1.
• Light them up in a few (say 5) random places.
• Measure the proportion of all trees burned.
• Average over a (reasonable) number of repetitions to get a number b(d).
• b(d) is clearly increasing, but in a special way.
  First it stays close to 0; then, after d passes some value c, it gets rapidly close to 1.
  c is called a critical value.
• In the Forest Fire case, c ≈ 0.6.
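The measurement of b(d) can be sketched in Python. This is only a sketch: the grid size, the number of fires, the 4-neighbor spreading, and the repetition count are assumptions, not the Netlogo model's exact settings.

```python
import random

def burned_fraction(d, size=30, fires=5, rng=random):
    """Place trees with density d on a size x size grid, ignite a few
    random trees, spread fire to 4-neighbors, report the fraction burned."""
    trees = {(x, y) for x in range(size) for y in range(size)
             if rng.random() < d}
    if not trees:
        return 0.0
    front = set(rng.sample(sorted(trees), min(fires, len(trees))))
    burned = set(front)
    while front:   # breadth-first fire spread
        nbrs = {(x + dx, y + dy) for (x, y) in front
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))}
        front = (nbrs & trees) - burned
        burned |= front
    return len(burned) / len(trees)

def b(d, reps=20):
    """Average the burned fraction over reps repetitions."""
    return sum(burned_fraction(d) for _ in range(reps)) / reps
```

Plotting b(d) for a range of densities shows the jump near the critical value.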
Epidemic, links
• Same questions, when random links are also present.
• Now the case d = 1 is also interesting: we may want to know how fast the disease spreads to everybody. The number of steps needed is (half of) the diameter of the graph. Related to the small world phenomenon (six degrees of separation).
• The distribution (by length) of the random links may also be important; we just set it uniform.
• Compute the critical value for 0, 1, 2 random links per person. (We could have considered 0 ≤ λ < 1 links: choose with probability λ whether to put a link.)
BehaviorSpace
Netlogo has a tool to organize a series of experiments and to record their results: Tools/BehaviorSpace.
Create a new experiment, name it, edit it. Choose:
• Which variables to vary, over which values.
• How many repetitions.
• The reporter for the result.
• Whether to record every step or only the end result.
• Some other options.
After hitting Run, choose:
• Kind of output (table, see below, is good).
• Whether to update the display (faster if not).
• Number of simultaneous runs (if you have several processors).
I prefer the Table output (to be processed by other programs, for example in Python) to Spreadsheet (where processing is harder to standardize and repeat).
Processing the table
The table is a csv file: comma-separated values. The first 6 lines are metadata that we will delete for processing. The next line shows the column headers.
The Python program process_table.py computes the estimated critical values.
• For convenience, it relies on an off-the-shelf Python module called csv.
• Parameters to the program are taken from a file whose name is given as a command-line argument, and which is imported (using the module importlib). It could be named, say, experiment-2.py.
• Delete the first 6 lines (by a utility function).
• Import into a list of dictionaries.
• Sort the list by the run number (the output of several processors was interleaved), write the result.
• Compute averages over the runs, write the result.
• Compute the critical values, with the given threshold (now 0.1), write the result.
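A minimal sketch of the first few processing steps (the miniature table below is hypothetical, as are its column names; process_table.py itself does more):

```python
import csv
import io

def read_table(text, skip=6):
    """Drop the first `skip` metadata lines of a BehaviorSpace table,
    read the rest as csv into a list of dictionaries, and sort the
    rows by run number (runs from parallel processors interleave)."""
    body = "\n".join(text.splitlines()[skip:])
    rows = list(csv.DictReader(io.StringIO(body)))
    rows.sort(key=lambda r: int(r["[run number]"]))
    return rows

# Hypothetical miniature table: 6 metadata lines, a header, two runs.
sample = "\n".join(["metadata"] * 6 + [
    '"[run number]","density","burned-fraction"',
    '"2","0.7","0.81"',
    '"1","0.5","0.12"',
])
rows = read_table(sample)
assert [r["density"] for r in rows] == ["0.5", "0.7"]
```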
Introduction to Mason
• We will write programs in Java, relying on a well-developed set of tools, called MASON.
• A simulation, like any other complex system, is best broken up into a number of parts that carry out different functionalities. Each such part is generally a Java class, or a package containing related classes.
Working with Java, installing Mason
1 Make sure that you can use Java on your computer. In a command window, the javac and java commands must work. If they don't, then either Java is not installed or it is not in your path: figure this out.
2 Download mason.zip from the Mason website. Store it in some folder and unzip it into a folder mason. Remember its complete pathname: for example /Users/your-name/spring19/mason. This will be used in the classpath.
3 Create a work folder, say /Users/your-name/spring19/mason-sandbox.
4 To work with some package named, for example, students0, create a subfolder mason-sandbox/students0.
5 Create a class file, for example students0/Students.java, with an editor.
6 To compile it (and other class files in the package), at the command prompt $, say

$ javac -classpath /Users/your-name/spring19/mason:. \
    students0/*.java

Under Windows, you must say something like

$ javac -classpath C:\Users\your-name\spring19\mason;. \
    students0\*.java

7 To run it, assuming Students.java has a main() method, say

$ java -classpath /Users/your-name/spring19/mason:. \
    students0.Students
Structure
SimState The class of the central player, setting up things and putting them on the schedule.
Schedule List of events waiting to happen, to be called in order of priority.
Events The agents generating the events implement an interface called Steppable.
Environment Agents operate in various kinds of space (2D, 3D, discrete, continuous, or just a network of relations). These environments are represented by classes from the package field (and its subpackages).
User interface
The graphical interface is well separated from the main simulation: without it, the simulation can run much faster.
Display Classes in the display package.
Portrayal Visual representation of some components (agents, fields): the portrayal package.
Console The class of the main controller window.
• Provides ways to run or stop the simulation, and various menus.
• Offers ways to inspect (or even change) parts of the simulation, for example some numerical values carried by some agents, calling some inspectors.
GUIState The class responsible for connecting the simulation with the GUI elements. Has access to all the above components.
Documentation
To learn Mason, we can use the following resources.
Manual Quite detailed, starts with a tutorial example.
Classdocs Description of each class in docs/classdocs/index.html.
Tutorials More tutorials in sim/app.
Source code
Cellular automata
Tutorial 1 A basic implementation of the Game of Life.
Tutorial 2 Adds a user interface to Tutorial 1.
The posted implementation changes these tutorials somewhat, as described below.
Tutorials 1-2: cellular automata simulation
SimState The central class of a simulation is a subclass of SimState; here it is called CaState. It can prepare the simulation and schedule the agents. Its main() function can run the simulation without a GUI (graphical user interface).
Steppable The agent classes implement Steppable; there are two here, LifeCa and VoteCa. They must have a step() method for the agent to execute every time the schedule calls it.
GUIState The GUI to the simulation is handled by a subclass of GUIState; here it is called CaGui. It sets up the display and the console (which contains inspectors), and also the portrayals.
Configuration files
A program's behavior generally depends on some external information. There are several ways to receive this:
• Command line
• Configuration file
• Graphical user interface
Configuration files seem to be the most flexible. Java offers a basic configuration file format in the class java.util.Properties. An object of class Properties is a dictionary whose keys and values are strings. Its load() method loads it from a file.
In our case this file is configs/ca.properties. It consists of
key = value
lines, where "value" is a string ending only at the end of the line (so don't leave spaces after it!).
Input from a file
• A simulation typically starts from an initial global state: in a lot of cases this needs to be read from a file.
• In the case of 2D cellular automata, the initial state would be a 2D array. An economical way to store such arrays is some basic picture format, like PGM. (Look up Netpbm Format in Wikipedia.)
• The Mason class sim.util.TableLoader has methods to read from such files.
• An example class that writes such files is MakeInit (also found in the tutorial1and2 folder).
• The methods in MakeInit are a way to save a given CA global state (a field) to a file. This is different from checkpointing (see later): that would save the entire Java simulation state.
1 Schoolyard
To get practice, we will follow in class the manual's tutorial buildup of the schoolyard simulation. Everybody is supposed to download Mason and follow this on their laptop! See the instructions above on Working with Java, Installing Mason.
The gradual buildup can also be followed in Tutorials 01-14 of the folder mason/sim/app/wccs.
• The scene is a schoolyard with some students in it, with a network of relations that regulates their movements.
• Find a work folder. Create a subfolder called students0. This will be the package name, too.
• Within folder students0, define the main class Students in file Students.java.
2 Add agents
• Define the field:
public Continuous2D yard = new Continuous2D(1.0, 100, 100);

The object of class Continuous2D created is a 100 × 100 square. The first argument, 1.0, corresponds to the size of "patches" in Netlogo: it helps in lookups, but agents can be placed at any real-valued coordinates.
• Create a number of students and place them in the yard.

yard.setObjectLocation(student, new Double2D(x, y));

Here x, y are of type double. This lets the object yard know that there is a student at location (x, y).
3 Actions for the agents
• The class Student implements Steppable. Its step() function is the action: moving by a (randomized) "force" towards the center.
• The Students.start() function gets the command:

schedule.scheduleRepeating(student);

which will put each student on the schedule: the schedule will repeatedly execute each student's step().
4 GUI
• Create the class StudentsWithUI, which extends GUIState. Its constructor depends on a SimState, currently on Students.
• Its main() function creates the console from which the GUI runs the simulation.
5 Display and portrayals
• Portrayal classes give the simulation elements a visual representation.

ContinuousPortrayal2D yardPortrayal = new ContinuousPortrayal2D();

will be used to represent the yard. In the setupPortrayals() function, the portrayals of the field and the students are given by these commands:

yardPortrayal.setField(students.yard);
yardPortrayal.setPortrayalForAll(new OvalPortrayal2D());

This makes students appear as grey disks.
• The start() function gets called when the Play button is first pushed. It calls setupPortrayals(). But first, like several of the other standard functions, it calls super.start(), the corresponding function of the superclass GUIState.
• load() is similar to start(), but is called when the simulation is restarted from a checkpoint state.
• init() is called at the time the console is set up. It
  • sets up the display,
  • registers it with the console,
  • attaches the portrayals to it.
• quit() disposes of the display when the console (and thus the simulation) is killed.
6,7 Network
• Create a Network with the students as nodes.
• Add undirected edges with a random (positive or negative) weight (buddiness) showing the strength of friendship or enmity.
• The set of nodes is a Bag: a set to which one can add and from which one can delete. Its elements are also indexed, and are accessible by get(i).
• In Student.java, add "forces" attracting to friends and repelling from enemies. Not really forces, rather velocities, moving the student towards friends, etc.
8 Visualize the network
• import sim.portrayal.network.*;

NetworkPortrayal2D buddiesPortrayal = new NetworkPortrayal2D();

• Tell the portrayal about the field, and how to portray the edges:

buddiesPortrayal.setField(
    new SpatialNetwork2D(students.yard, students.buddies));
buddiesPortrayal.setPortrayalForAll(
    new SimpleEdgePortrayal2D());

• Attach the portrayal to the display:

display.attach(buddiesPortrayal, "Buddies");

• Now the display shows (thin black) edges between students.
9 Edge colors; extend a class anonymously
• Mason does not give a direct way to draw colored edges, but one can still do it. This is the first example of adding features by extending a class.
• Extend the class SimpleEdgePortrayal2D, overriding its draw() method.
• In the override, one sets some properties of the instance before calling super.draw().
We could define a new class
MyPortrayal extends SimpleEdgePortrayal2D
in a new file. Instead, in this one-shot use, we just add the override at the place where the instance is created:

buddiesPortrayal.setPortrayalForAll(
    new SimpleEdgePortrayal2D() {
        public void draw(Object obj, Graphics2D graphics, DrawInfo2D info) {
            ... // Set some properties.
            super.draw(obj, graphics, info);
        }
    });

This created a new class: it is anonymous (with no name like MyPortrayal). Therefore Mason serialization also requires a line:

private static final long serialVersionUID = 1;
Please read the rest of the buildup of the Students example yourself in the Mason manual. You may need some of it in later homeworks.
Genetic algorithm
An interesting application of multi-agent modeling is imitating biological evolution. The Netlogo model Robby the Robot demonstrates the basic ideas.
Tasks or problems on which to measure fitness.
Individuals (a set of them) exposed to these problems.
Genetic material in them, encoding possible solution methods of the problems.
Selection and reproduction mechanism to choose the fittest for survival.
Mutations allowing for new methods to emerge.
Recombination Various ways of combining different genomes, like sexual reproduction, crossover, and so on.
Task Picking up cans on a 10 × 10 grid surrounded by a wall.
Environment The (random) placement of these cans (with some density).
Cell states Empty, can, wall: 3^5 = 243 states of a 5-cell neighborhood (called situations).
Actions 7 possible: go east, go north, go west, go south, random, pick up a can, do nothing.
Strategy Function assigning an action to each situation.
Chromosome Encoding of a strategy as a list of alleles of length 243. An allele is a possible value at a position of the chromosome (think gene).
Evaluation • (Cost of step), reward for a can, penalty for hitting a wall or picking up with no can.
• Run for, say, 100 steps.
• Average this over 20 randomly chosen environments.
Selection 100 (say) individuals in a population. Repeat 50 times the following tournament:
• Pick the best of 15 random individuals for parent 1; repeat for parent 2.
• Produce 2 children, using random crossover of the parents' chromosomes.
Repeat for a lot of generations.
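The tournament scheme above can be sketched in Python. The chromosome length and the toy fitness function below are placeholders; the real fitness would come from the can-collecting runs.

```python
import random

def tournament_pick(population, fitness, k=15, rng=random):
    """Best of k randomly chosen individuals."""
    return max(rng.sample(population, k), key=fitness)

def crossover(mom, dad, rng=random):
    """Cut both chromosomes at one random point, swap the tails."""
    cut = rng.randrange(1, len(mom))
    return mom[:cut] + dad[cut:], dad[:cut] + mom[cut:]

def next_generation(population, fitness, rng=random):
    """50 tournaments on a 100-individual population -> 100 children."""
    children = []
    for _ in range(len(population) // 2):
        p1 = tournament_pick(population, fitness, rng=rng)
        p2 = tournament_pick(population, fitness, rng=rng)
        children.extend(crossover(p1, p2, rng=rng))
    return children

random.seed(1)
pop = [tuple(random.randrange(7) for _ in range(10)) for _ in range(100)]
kids = next_generation(pop, sum)   # toy fitness: sum of alleles
assert len(kids) == len(pop)
```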
Remarks
• One must be very patient to see fitness levels rise.• What would be your optimal strategy?• I think it is a mistake not to give Robby any memory (even just
for keeping a direction of movement). A memory cell with 4states (say to remember the last heading) would increase thestrategy size 4 times, but may give better results (try!).Or, at least one could have just an extra action saying “maintainheading”.
• Crossover could be replaced with something more general for sexual reproduction.
• How much does sexual reproduction help here compared to just mutations? Only experiments can tell.
A table (like Robby's "chromosome") is a terrible representation of strategy for either learning or evolution: its size grows exponentially with the number of possible situations. Other possibilities:
• Boolean circuit.
• Neural network.
• List of simple rules.
• Program in some very simple language.
We don't need to represent all strategies, but should try to make the "simple" (and good) strategies easier to find.
Implementation
• Actions are obtained by indexing into the strategy; some math is needed for computing the index.
• The environment is only shown when asked for (for speed). So there are 2 alternative displays:
• The environment (with Robby in it).
• A display of the strategies (as individuals), fitter to the right, vertically spread out by allele distance.
• Technically (awkwardly), the program switches displays by procedures that hide the turtles in one or the other, also using the Boolean variable visuals?.
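The index computation can be sketched as follows (the ordering of the five cells and the numeric encoding of the states are my assumptions, not taken from the model):

```python
# Cell states of Robby's 5-cell neighborhood, encoded as base-3 digits.
EMPTY, CAN, WALL = 0, 1, 2

def situation_index(north, south, east, west, center):
    """Treat the five states as digits of a base-3 number: 3**5 = 243 indices."""
    index = 0
    for state in (north, south, east, west, center):
        index = 3 * index + state
    return index

# An action is then a simple table lookup into the length-243 strategy:
def action(strategy, *cells):
    return strategy[situation_index(*cells)]
```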
Controlling a Mason simulation
How to run for a certain number of steps? For a certain number of repetitions? See the manual on the Students simulation, and on using doLoop().
Flocking in Mason
• Birds or fish. They move around, keep together and avoid obstacles as a group.
• Achieved by adding up the influence of several factors to determine the next step.
Avoidance Don't bump into each other or into obstacles.
Cohesion Keep together.
Common speed Adjust to the common speed of neighbors.
Inertia Don't change your own speed too abruptly (not physical).
• Details of Mason implementation.
• Visualisation (including trails): wrapping portrayals into other portrayals repeatedly, adding features.
When freedom of choice is not optimal
More freedom may yield worse results for all agents.
Example: Braess's paradox. Adding a new road may slow down the traffic.
• Cars traveling from point s to point t. A new car enters every µ/2 seconds at point s.
• Two connections: s → v1 → t, and s → v2 → t.
[Figure: road network with source s, sink t, and intermediate nodes v1, v2.]
• Roads s → v1 and v2 → t are simple: a car takes α seconds to pass each.
• Road s → v2 is different: a car takes essentially no time to pass it, but there is a speed bump at the end which holds up every car for µ seconds, so the cars queue up before it. Road v1 → t is of the same kind.
• Travel time of each car on road s → v1 → t depends on the number n1 of cars ahead of it on road v1 → t:
α + n1µ.
It is similar on road s → v2 → t, with n2µ + α.
• If an entering car knows the numbers n1, n2, it will choose the one with the smaller ni, so assume n1 = n2 = n.
• Since each route then receives a car every µ seconds, the numbers ni = n don't change in time.
• Assume nµ = 0.45α, so the total travel time is 1.45α.
[Figure: the same network, now with the new link v2 → v1 added.]
Open a connection v2 → v1: fast, taking no time.
• Now a car that took s → v1 → t switches to s → v2 → v1 → t, since the time for the first leg becomes (n2 + 1)µ < α. Similarly, a car that took s → v2 → t switches to s → v2 → v1 → t, new time (n1 + 1)µ < α for the second leg.
• Other cars follow, until all take s → v2 → v1 → t, using time
2(n1 + n2)µ = 4nµ = 1.8α > 1.45α,
worse for everybody than it was.
• Still no incentive to switch back to the old route: s → v2 costs 2nµ = 0.9α while s → v1 costs α. (Similarly for the last leg.)
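The arithmetic above can be checked numerically; a tiny sketch taking α = 1 for concreteness (the function names are mine):

```python
def route_time_before(alpha, n_mu):
    # Old equilibrium: one simple road (alpha) plus one speed-bump queue (n*mu).
    return alpha + n_mu

def route_time_after(alpha, n_mu):
    # After the new link all cars use both bumps, so each queue has
    # doubled: 2*n*mu per leg, two legs.
    return 2 * n_mu + 2 * n_mu

alpha = 1.0
n_mu = 0.45 * alpha
# route_time_before gives 1.45*alpha, route_time_after gives 1.8*alpha:
# the extra road made the trip longer for everybody.
```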
Nash equilibrium
The freedom of the extra link v2 → v1 resulted in a worse travel time for everyone.
• The situation can be formulated as a multi-person game.
In a game for players J1, . . . , Jk, every player Ji has a choice from a set of strategies. Once the strategies σ1, . . . , σk are chosen, the game is played and every player Ji gets some (positive or negative) payoff fi(σ1, . . . , σk).
• An equilibrium is a set of strategies σ1, . . . , σk such that no player Ji alone has an incentive to change her strategy to some σ′i. It is also called a Nash equilibrium, after John Nash, who proved that it always exists in a finite game (if we allow randomized strategies; not needed here).
• Our game has only this one equilibrium.
• If a Nash equilibrium (there could be several) is "not socially optimal" (like here), the difference is sometimes called the price of anarchy.
• Older term, for a more special situation: the tragedy of the commons (common property overexploited by everybody).
Inherited cooperation
A possible solution to "bad" equilibria: the players may have a built-in/inherited behavior overriding harmful greediness.
Example: the Social Science/Cooperation model in Netlogo.
• A field of grass with a threshold height called low-high-threshold. If it is eaten below that, it grows back slower (probabilities low-growth-chance vs. high-growth-chance).
• Two kinds of cows: greedy and cooperative.
• Greedy ones eat all grass they find.
• Cooperative ones eat only down to the threshold, and suffer the consequences of less nutrition, slower reproduction.
• Which kind of cows will thrive in the long run? Depends on the parameters: mainly on the difference between high-growth-chance and low-growth-chance. (Many other parameters to play with.)
Data structures for simulation
Both in Netlogo and in Mason, some data structures are used to achieve efficiency. In Mason, they can be inspected in the source code.
• The scheduler uses a priority queue data structure to quickly find the event to be scheduled next. The implementation in Mason is a (binary) heap.
• Some of the fields are sparse, since only certain of their locations contain agents. When looking for agents near a location, we don't want to perform a complete loop over all agents (or all locations). The data structures used are simple versions of locality-sensitive hashing.
Priority queues
• Each item has a priority given by a number label. Smaller numbers: higher priority.
• Goal: a data structure into which it is fast to insert an item, and fast to extract the item with the lowest label.
• Our implementation: a heap. You may have learned about heaps in a data structures course (nothing to do with the notion of heap in memory management).
Operations: insert(H, x), x = extractMin(H).
In the heap implementation, both take O(log n) steps where n is the size.
Main ideas:
• Tree with keys decreasing towards the root. Heap property: H[parent(i)].key ≤ H[i].key.
• A special array implementation of this tree:
parent(i) = ⌊i/2⌋, leftChild(i) = 2i, rightChild(i) = 2i + 1.
If an insertion requires it, double the size of the array.
Insertion: let the item rise to its place (towards the root).
Extract-min: replace the root with the last item, let it sink to its place.
The log n multiplier is not so harmless when we have a lot of agents. There could be other issues, but individually scheduling 1000 agents seems to result in a great slowdown.
Compare the implementations of a 1-dimensional traffic model, where each car moves at a random rate depending on its neighborhood. If the rate of movement is c.r < 1 for car c then there are two ways to do it:
1. The schedule asks for an update attempt for every car c. For car c, choose a uniform random variable Y ∈ [0, 1]. If Y < c.r then update (including recomputing the values c.r for c and its neighbors), else don't.
2. Each car c is an agent, with a time c.t to move. They are on a priority queue with c.t as the key. The one with the smallest c.t updates. Then the rates and update times are recomputed. The update time c.t becomes c.t + τ where (a typical choice for) the random variable τ is taken from the exponential distribution: P{τ > s} = e^(−c.r·s) (makes sense also if c.r ≥ 1).
In my two implementations there is a huge difference: 2 is much slower than 1. I am still not sure that such a difference is justified. We will look at the Heap implementation of Mason and the implementations 1, 2. The Heap could possibly be made more efficient, for example by just updating the priorities of some items instead of removing and reinserting them; it is not clear how much difference this would make.
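Implementation 2 can be sketched with Python's standard heapq module (a toy version, not the Mason code; the per-car rates here are illustrative, and the actual update step is left as a comment):

```python
import heapq
import math
import random

def exp_time(rate):
    # Waiting time tau with P(tau > s) = exp(-rate * s); works for any rate > 0.
    return -math.log(1.0 - random.random()) / rate

def run(rates, t_end):
    """Event-driven loop: car i fires at random times with its own rate.
    Returns how many times each car was updated before t_end."""
    queue = [(exp_time(r), i) for i, r in enumerate(rates)]
    heapq.heapify(queue)
    counts = [0] * len(rates)
    while queue[0][0] < t_end:
        t, i = heapq.heappop(queue)      # car with the smallest update time
        counts[i] += 1                   # ...the actual car update would go here
        heapq.heappush(queue, (t + exp_time(rates[i]), i))
    return counts
```

In the real model the update would also recompute the rates of the neighbors and adjust (or reinsert) their queue entries, which is exactly the expensive part discussed above.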
Caveat I cannot easily upload these implementations for you to run: they are part of a larger IntelliJ Mason project of cellular automata.
Rabbits and wolves
Comments on the rabbits-and-wolves homework.
• Fine that everybody submitted a working program. But (as usual) it was done at the last minute, leaving by far not enough time for analysis.
• Experimentation might lead to some more realistic choices.
• Wolves reproduce slower than rabbits.
• A wolf does not gain more energy from a rabbit than what the rabbit has (see the Netlogo sheep-wolves model).
• The completely random jump-around may be replaced by one in which rabbits (and wolves) roam wider if they don't find food locally.
• It may also make you observe some bugs:
• Number of wolves never increases.
• Number of wolves never decreases to some constant above 0.
• Rabbits portrayed with grey dots in the middle.
Choices for the grid to store the wolves:
• A separate sparse grid, similar to the ones for rabbits. Still, it is inefficient to loop through all rabbits just to find one nearby. Sparse grids are there just in order to have methods to find nearby objects fast.
• The same grid.
In the second case, when a wolf collects in a bag the objects in the neighborhood, it will need to distinguish rabbits.
• Inefficient choice: run through all agents to eliminate the ones that happen to be wolves.
• From the elements of the bag, keep the ones that are rabbits. Two ways:
• if (o instanceof Wolf) { ...
• Rabbit r = new Rabbit();
  for (Object o : b) {
      if (o.getClass() == r.getClass()) { ...
Project progress
Adam Fishers with their ships, nets and the market. Mason.
Aditya Addiction, influenced by various factors, including social ones. Netlogo. ??
Chris Passengers reaching their seats on a train. Netlogo.
Jean Road repair strategies. Mason.
Julius Kenmore square traffic. Using SUMO, Flow.
Seiya Population decline, by prefecture, in Japan. Netlogo.
Thien, Tri Wolves and rabbits learning to survive simultaneously. Python Mesa (?).
Visual analysis in Python
We have already seen a way to use Python to process comma-separated files produced by Netlogo BehaviorSpace. Now we will see some Python ways to display the information using matplotlib, a plotting library in Python. It also uses numpy, a module for numerical calculations. As usual, we will learn from each only as much as needed for our examples.
Plot, errorbar netlogo/flocking/stdevs_plot.py: plot with error bars.
Contour plot netlogo/investor/utility_contour_plot.py: 2D contour plot.
Several curves matplotlib-examples/lotka-volterra.py.
Animation predator-prey (the above) and matplotlib-examples/double_pendulum_animated.py.
Iterated Prisoner’s Dilemma
A somewhat more professional example of an agent-based model.
• The non-zero-sum game called Prisoner's Dilemma. Behaviors: C = "cooperate", D = "defect".
Payoffs (row player's payoff first):

       C          D
  C  (3, 3)    (−1, 5)
  D  (5, −1)   (0, 0)

The only equilibrium is (D, D).
• Change of model: iterated PD.
Breeds of players with various strategies, some natural selection with mutation probability µ.
Which strategies (breeds) prevail?
Strategies
Naive cooperator NC: always cooperate.
Naive defector ND: always defect.
Tit-for-tat TFT: Copy your partner’s previous behavior.
Pavlov: Switch behavior when the partner defects.
New element: space. Movement to find a new partner. All the above strategies have mobile versions, but there are some new ones.
Walk-away cooperator WAC: cooperate, and walk away if the partner defects.
Walk-away defector WAD: defect, and walk away if the partner defects.
New element: with a small error probability e, the behavior changes to its opposite (just in the current step).
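A toy sketch of the iterated game with the payoff values above (no space, no error probability; the function names are mine):

```python
# Payoffs from the matrix above: R = 3 for (C,C), S = -1 / T = 5 for (C,D),
# P = 0 for (D,D); each entry is (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (-1, 5),
          ('D', 'C'): (5, -1), ('D', 'D'): (0, 0)}

def tft(my_hist, their_hist):
    # Tit-for-tat: cooperate first, then copy the partner's previous move.
    return their_hist[-1] if their_hist else 'C'

def nd(my_hist, their_hist):
    return 'D'                     # naive defector

def pavlov(my_hist, their_hist):
    # Switch behavior when the partner defects, else keep the previous move.
    if not my_hist:
        return 'C'
    if their_hist[-1] == 'D':
        return 'D' if my_hist[-1] == 'C' else 'C'
    return my_hist[-1]

def play(strat1, strat2, rounds):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat1(h1, h2), strat2(h2, h1)
        p, q = PAYOFF[(a, b)]
        score1 += p
        score2 += q
        h1.append(a)
        h2.append(b)
    return score1, score2
```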
Experiments
Research papers analyze the outcome under varying circumstances. I looked at two.
• Aktipis 2004, Journal of Theoretical Biology. The Starlogo program is not available any more to replicate.
• Premo and Brown 2019, Theoretical Population Biology. Netlogo 6.0.2 program available on www.comses.net. We can inspect it and play with it.
The experiments capped the number of agents by culling them if their number grew above a maximum.
• Start with some of the above breeds (randomly placed with random initial energies). The experiments depend on the choice of competing breeds.
• In several combinations the WACs seem to have a competitive advantage, but this depends much on the density and error probability.
• Invasion: once a stable proportion of a group of breeds is achieved, another breed is introduced to see its ability to invade the other ones.
• It may happen that breed A helps breed B against breed C but then is defeated by breed B.
WAC is favored only at high density and low error.
Please cite this article as: L.S. Premo and J.R. Brown, The opportunity cost of walking away in the spatial iterated prisoner's dilemma. Theoretical Population Biology (2019), https://doi.org/10.1016/j.tpb.2019.03.004.
L.S. Premo and J.R. Brown / Theoretical Population Biology xxx (xxxx) xxx 3
If a cell contains an odd number of agents, one of those agents will remain unpaired after all of the other agents in the cell have formed mutually exclusive dyads. Second, paired agents play the PD. The strategy each agent plays is prone to error, as described above. Each player's energy is updated with its payoff: T, R, P, or S. Agents that do not play the PD neither gain nor lose energy during this stage. Third, agents with energy less than or equal to zero die and are removed from the simulation. The fourth stage involves a random movement strategy known as "sidestep" (Smaldino and Schank, 2012). Sidestep allows an agent to move one cell to its left or right with equal probability (Smaldino and Schank, 2012). After moving to its left or right the agent sets its heading to face the direction it moved. For example, an agent with a heading of 90 that moves to its left would move one cell "up" the lattice and then set its heading to 0, the direction in which it moved. An agent will "sidestep" into an adjacent cell if (1) the agent is a WAC or WAD and its partner played D during the current time step or (2) the agent had no partner during the current time step. Fifth, any agent with energy greater than or equal to 100 reproduces asexually and transfers half of its energy to its offspring. Sixth, if after the reproduction stage population size exceeds K, then the population is exposed to mortality. We employ the same form of density-dependent regulation as in Aktipis (2004), and it works as follows. The energy of a randomly chosen agent is reduced by 10. If the affected agent's energy is less than or equal to zero, then it is immediately removed from the simulation, reducing the size of the population by 1. This continues until the population has been reduced to K agents. Because agents are chosen randomly (and with replacement) during this process, those with lower energy levels are on average more likely to die than those with higher energy levels.
The initial population of agents is divided evenly among the number of strategies included in the competition. For example, in simulations that include all four strategies (WAC, WAD, NC, and ND), the initial population contains 25 agents of each starting strategy. In simulations that include just two starting strategies (say, NC and ND) the initial population contains 50 NCs and 50 NDs. And as in Aktipis (2004), at the start of each simulation agents are distributed randomly throughout the lattice, each agent's energy level is set to an integer chosen randomly from a uniform distribution with inclusive boundaries of 0 and 49, and each agent's heading is set to one of the four possible values with equal probability.
A measure of strategy success is needed to assess the effects of population density, error, and offspring dispersal on the evolution of cooperation. In experiments that exclude mutation, let the proportion of 1000 unique simulation runs in which the strategy evolved to fixation serve as a measure of that strategy's success. Because mutation "reintroduces" strategies to a population, and thus no strategy that evolves to fixation during a simulation remains fixed indefinitely, we need a slightly different measure of success when µ > 0. For experiments in which µ > 0, let the proportion of runs (out of 100 this time due to much longer run times) in which the number of agents with the strategy is equal to or greater than 95 at the end of the 500,000th time step serve as the measure of success of that strategy. These proportions need not sum to 1 when µ > 0.
3. Results
Aktipis found that WAC evolved to fixation in 10 out of 10 simulations in which the initial population contained an equal number of WAC, WAD, NC, and ND agents (K = 100, µ = 0, e = .001, global offspring dispersal; see column 1 of Table 1 in Aktipis, 2004). Aktipis' model employs a 25 × 25 lattice, which corresponds to d = .16 when K = 100. Although we were unable to replicate her results for d = .16 (Table 2), the success of WAC at our next highest d value is consistent with her finding. The source code for the original model is currently unavailable, so we can only speculate as to what might explain this apparent difference.¹

Table 2. Strategy success in four-strategy competitions. Cell values provide the proportion of 1000 simulation runs in which each strategy evolved to fixation. Population density (d) decreases from left to right: .649 (dark grey), .160 (grey), .040 (light grey), .010 (white). Offspring dispersal is global and µ = 0.

What is clear is that with low to moderate error WAC's success (as measured by the proportion of runs in which WAC evolved to fixation) decreases as d decreases (Table 2). WAC is entirely unsuccessful when error is relatively high (e = .1), regardless of d. Our observation that increasing error reduces WAC's success is consistent with what Aktipis recorded (Aktipis, 2004, Table 2) in competitions among eight strategies. Our results also clearly highlight the important relationship between population density and the success of NC. In contrast to WAC, NC's success increases as d decreases for all values of e tested.

To develop a better understanding of how population density and error affect the evolution of cooperation in the four-strategy competition we conducted the same experiment for each possible pair-wise combination of the four basic strategies (Table 3). The simulation results highlight an interaction between population density and strategic error. Decreasing population density strengthens NC relative to WAD and ND for all of the error rates tested here. We find that decreasing d strengthens NC relative to WAC, but only when error is very low. Even the highest population density we examined (d = .649) is sufficiently low to ensure the success of NC over WAC when e ≥ .01. The effect of density on the success of WAC is more complicated. When error is high (e = .1), decreasing population density strengthens WAC relative to WAD, but weakens WAC relative to ND. All values of d tested here were sufficiently low to ensure that WAC outcompetes either

Footnote 1: We were unable to replicate the Aktipis (2004) result for d = .16. We cannot definitively explain this disparity without access to the original source code. Perhaps during the course of the original study StarLogo was at some point reinstalled or updated without the default size of the lattice being enlarged to 25 × 25, or perhaps 25 × 25 was a typo in the original manuscript and the size of the lattice used during data collection was smaller than reported.
Pairwise contest. WAC prevails over defectors only at high density and low error. NC prevails over WAC under all these conditions (opportunity cost).
Table 3. Strategy success in pair-wise competitions. Cell values provide the proportion of 1000 simulation runs in which the row strategy evolved to fixation over the column strategy. Population density (d) is .649 (dark grey), .160 (grey), .040 (light grey), .010 (white). Offspring dispersal is global and µ = 0.
type of defector when error (and, thus, the opportunity cost of walking away) is relatively low (e ≤ .01).

For the higher population densities investigated here, increasing e from .001 to .1 strengthens NC relative to WAC. When population density is low, increasing e over the same range strengthens NC relative to WAD but slightly weakens NC relative to ND (although NC remains stronger than ND even when e = .1). Finally, our results show that increasing e from .001 to .1 weakens WAC relative to NC and to WAD only when d is relatively high, and it weakens WAC relative to ND only when d is relatively low.

To summarize the pair-wise competition results: for all values of d tested here, WAC outcompetes either defector strategy when error is relatively low (e ≤ .01). For all e values tested here, NC regularly outcompetes WAD and ND when d is relatively low (d ≤ .04). NC outcompetes WAC under all values of d and e tested here.
Armed with a better understanding of which strategy prevails in each of the pair-wise competitions, we introduce mutation (µ > 0) to investigate the robustness of the results of our original four-strategy competitions. Table 4 provides data on the long-term success of each strategy (as measured by the proportion of runs in which at least 95 agents display that strategy after the 500,000th time step). Comparing Table 4 to Table 2 shows that WAC enjoys less success when µ > 0 than when µ = 0. In the presence of mutation, both defector strategies enjoy greater success at the expense of WAC in the very same area of the parameter space that was conducive to cooperation when µ = 0. Although NC also fares worse when µ > 0, it is more robust than WAC to invasion by defectors under global offspring dispersal (Table 4).
Like Smaldino and Schank (2012), we are also interested in how local offspring dispersal affects the evolution of cooperation. Here we investigate whether the effects of error and population density hold under the assumption that a new agent begins its life in a cell chosen at random from those within a radius of 3 cells of its parent's position (excluding the parent's cell) rather than in a cell chosen at random from the entire lattice. Under either form of dispersal, offspring are often placed in a cell that does not contain other agents. Because offspring dispersal merely defines the maximum distance from its parent that each offspring will begin to search for its first partner, global offspring dispersal does not preclude an offspring from being placed near its parent.

Table 5 shows that local offspring dispersal enhances WAC's success at low to moderate population densities for low to moderate error rates. Here, WAC gains the ground surrendered by NC, which generally fares worse under local offspring dispersal than under global offspring dispersal (compare Table 5 to Table 2). Mutation amplifies the negative effect of local offspring dispersal on NC (Table 6). In the presence of mutation and local offspring dispersal, WAC gains some of the ground surrendered by NC, but only when error is very low (compare Table 6 to Table 2).
4. Discussion
4.1. High error and low population density increase the opportunity cost of walking away
Our results highlight conditions in which walking away does not carry the day. WAC's success in the four-strategy competition decreases with increasing error and/or with decreasing population density (Tables 2, 4, 5, and 6). This finding, which echoes some of Aktipis' results from competitions among eight strategies (Aktipis, 2004), can be explained by the ways in which error and population density affect the opportunity cost of walking away from erroneous defections.

In addition to being "nice", "clear", and "forgiving", the ability to respond to defection has been shown to facilitate the evolution of cooperation in previous IPD studies that do not include error or space (e.g., Axelrod, 1984; Izquierdo et al., 2010, 2014; Fujiwara-Greve and Okuno-Fujiwara, 2009; Zhang et al., 2016; Zheng et al., 2017; Křivan and Cressman, 2017). The ability to walk away from defection provides a mechanism by which an individual can protect itself from prolonged exposure to a defector in the spatial IPD. However, it is important to note that when e > 0, as in Aktipis (2004) and in our model, agents that walk away from a defection are not necessarily walking away from a defector. Walking away in response to an "honest mistake" by a cooperator incurs a cost. Let us first focus on how this cost might weaken WAC relative to NC in the context of the three possible types of cooperator-cooperator dyads: WAC-NC, WAC-WAC, and NC-NC.
• Mutation (µ > 0) has a somewhat similar effect to error, but there can be differences when another breed is introduced (fox into the henhouse).
• The effect of global versus local dispersal. So far, offspring were dispersed globally. Local dispersal may make a difference, for example by increasing the clustering of NC ("henhouse") vulnerable to a "fox" (ND or WAD).
• Under conditions when defectors prevail, interestingly ND appears stronger than WAD. (This probably depends on the particular T > R > P > S values.)
Word of mouth
Marketing over a network. A complex example of the propagation of expertise, using Netlogo. Spread of:
Awareness by advertisement and word of mouth.
Expertise only by word of mouth. Initial percentage of experts may vary from very small to rather large.
Examples Who are experts? It depends.
• Plug-in electric vehicles: How to charge? Charging stations? Acceleration?
• Reversible contraception devices: How does it work? Is it really reversible? Side effects?
How these qualities spread depends on some properties of the individuals.
Varying states of awareness
Unaware U .
Aware A.
Seeker S. Will seek expertise when given a partner. Times out.
Varying states of expertise
Ignorant I .
Knowledgeable K .
Promoter P. Passes on expertise when given a partner. Times out.
Constant predispositions of individuals
Curious? (U, I) + awareness → (S, I), else (A, I).
Supporter? (U, K) + awareness → (A, P), else (A, K).
Enthusiast? (S, I) + expertise → (A, P), else (A, K);
(U, I) + expertise → (U, P), else (U, K).
If interaction takes place, both awareness and expertise are transferred.
• Awareness may spread as curious agents turn into seekers. This may result in cascades of awareness. (Green links.)
• If they receive expertise at the same time: may result in chains of retrieval. (Purple links.)
• Expertise may also spread without awareness as enthusiasts turn into promoters. This may result in cascades of expertise. (Orange links.)
The network
Netlogo has a network extension, loaded via extensions [nw]. It defines many network functions and many typical networks. The one used here is a so-called "small world" proposed by Watts-Strogatz:
(nw:generate-watts-strogatz turtles links count-individuals count-neighbors 0.1)
• Nodes arranged on a circle.
• Each is connected to its count-neighbors nearest neighbors.
• Then 10% of the links are randomly redirected to create non-local connections.
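The construction can be sketched in plain Python (a simplified version of what nw:generate-watts-strogatz does; the collision handling is only approximate):

```python
import random

def watts_strogatz(n, k, p):
    """Ring of n nodes, each linked to its k nearest neighbors (k even);
    then each link is redirected with probability p to a random node."""
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))      # store each undirected link once
    result = set()
    for (a, b) in edges:
        if random.random() < p:              # redirect this link
            c = random.randrange(n)
            # crude collision avoidance: no self-loop, no duplicate of a
            # link already placed (a sketch, not Netlogo's exact rule)
            while c == a or (a, c) in result or (c, a) in result:
                c = random.randrange(n)
            result.add((a, c))
        else:
            result.add((a, b))
    return result
```

With p = 0 this is exactly the ring lattice; small p keeps the local clustering while adding a few long-range shortcuts.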
Scenarios
What can be varied?
• Number of individuals, number of neighbors.
• Percentage reached in one step by advertisement.
• Initial number of experts.
• Initial proportions of: curious, enthusiasts, supporters.
Categories, by the initial number of experts:
Disruptive < 5%. Supporters' role limited. Lower bound on curious; then enthusiasts more important.
Incremental 10%. Supporters and enthusiasts become more important.
Well-known > 50%. Some supporters: curious not that important. Even more supporters: enthusiasts' role is also limited.
Trust
What makes people behave honestly with strangers? What makes societies/groups/environments differ in this? This model by Klein and Marx examines a brutally simplified situation.
• Agents with two characteristics:
Trustworthiness In this model, an immutable Boolean. Reason: the time scale on which it would change is larger than the one considered here. (o or x)
Trust degree A real number, an estimate of the trustworthiness of a random stranger. Changes with experience. (< 0.5: red, ≥ 0.5: green)
• In a 2D space. In each round, agents try to pair up with an agent nearby for a game. The one who found the other is called the trustor, the other the trustee.
• If the trustor is not trusting (trust < 0.5), it defects. It learns nothing; the trustee learns that the trustor was not trusting. This is indirect experience: the partner may be not trusting due to her own (direct and indirect) negative experiences.
• Else if the trustee is trustworthy, cooperation. The trustor learns that the trustee is trustworthy (direct). The trustee learns that the trustor is trusting (indirect).
• Else the trustee defects. The trustor learns that the trustee is not trustworthy (direct). The trustee learns that the trustor is trusting (indirect).
Learning algorithm
T′ = T·(1 − ws) + E·ws, where
• T = current trust, T′ the trust after having the experience.
• E = the new experience: 0 or 1, showing whether the partner was trustworthy (if the trustee; direct) or trusting (if the trustor; indirect).
• w is the weight given to the new experience.
• It is multiplied by the social factor s. If the experience is direct then s = 1; if it is indirect then s is some constant < 1.
So T′ is a weighted average of T and E. The weight of E is w·s, with s depending on whether the experience is direct. This gradually discounts past experiences by factors 1 − ws.
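The update rule is easy to sketch (the numeric values below are the suggested w = 0.03 and an assumed indirect factor s = 0.5):

```python
def updated_trust(T, E, w, s):
    """One learning step: weighted average of the old trust T and the
    experience E (0 or 1), with weight w*s on E."""
    return T * (1 - w * s) + E * w * s

# Repeated positive direct experiences (E = 1, s = 1) pull trust toward 1
# geometrically: after k steps the distance to 1 shrinks by (1 - w)**k.
T = 0.3
for _ in range(100):
    T = updated_trust(T, E=1, w=0.03, s=1.0)
```

With indirect experiences (s < 1) the same loop converges more slowly, which is the point of the social factor.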
A Netlogo model. Parameters to vary:
• Size of space.• Number of agents.• Number of trustworthy persons (to be picked randomly).• Initial mean trust. In the model, the mean of a normal
distribution with standard deviation 0.2. The actual initial trustT of each agent is an independent random variable chosen fromthis distribution.
• Weight of new information w: try a value 0.03.• Social factor s ≤ 1.• Mobility: how far to search for partners. If it is small then the
space may become segregated into islands within which agentstrust each other (green) or distrust each other (red).Divide the space into large squares of some given size.Segregation is measured by the total variation distance betweenthe green and the red distribution.
• See how trust or distrust prevails, given a fraction of untrustworthy agents and an initial trust expectation (which could be called prejudice). A function of both variables.
• Create separate graphs depending on the values of
  • mobility
  • weight of new information
  • social learning factor.
• Examine the robustness of trust by introducing a shock.
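The segregation measure mentioned above, the total variation distance between the green and red distributions over the large squares, could be computed as in this sketch. The class name and the input format (two probability vectors over the same squares) are assumptions:

```java
// Sketch of the segregation measure: the total variation distance
// between two distributions over the large squares,
// TV(p, q) = (1/2) * sum_i |p_i - q_i|.
// The class name and input format are assumptions for illustration.
public class TotalVariation {
    /** p and q are probability distributions over the same set of squares. */
    static double tv(double[] p, double[] q) {
        double sum = 0.0;
        for (int i = 0; i < p.length; i++)
            sum += Math.abs(p[i] - q[i]);
        return sum / 2.0;
    }

    public static void main(String[] args) {
        // Fully segregated: all green agents in square 0, all red in square 1.
        System.out.println(tv(new double[]{1.0, 0.0}, new double[]{0.0, 1.0})); // 1.0
        // Identical distributions: no segregation.
        System.out.println(tv(new double[]{0.5, 0.5}, new double[]{0.5, 0.5})); // 0.0
    }
}
```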
Program efficiency
In this lecture we use the MASON example Heatbugs to illustrate some common ways to speed up your simulations.
• 2D grid; each position has some temperature.
• Heat diffuses at some rate, and gets lost (“evaporates”) at another rate.
• Some bugs move around in this space.
• Bugs generate heat at individual rates.
• Each has its own ideal (comfort) temperature, and it tries to move towards it (thus to cooler or to warmer places).
• Compared to other examples, there is not much new in the implementation, but the program comments illustrate an optimization process.
• Only the diffusion process is considered, it being the most time-consuming one. It is handled by a single agent of class Diffuser. (There is also a multi-threaded version.)
Diffusion formula
The neighborhood N(p) of cell p has size |N(p)| = n = 9.
Cell p distributes a fraction δ (the diffusion rate) of its heat h(p) among its neighbors: δh(p)/n to each one (including itself), and receives δh(q)/n from each neighbor q ∈ N(p). Total received:

δ ∑_{q∈N(p)} h(q)/n = δA(p),

where A(p) is the average heat of the neighborhood. Since it distributes δh(p), the new value before evaporation is

h(p) + δ(A(p) − h(p)).

Evaporation multiplies this by the evaporation rate η:
η(h(p) + δ(A(p) − h(p))).
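The resulting per-cell update can be checked in isolation. A minimal sketch; the names `DiffusionStep` and `newHeat` are illustrative:

```java
// Sketch of the per-cell diffusion update derived above:
// new value = eta * (h + delta * (A - h)),
// where A is the neighborhood average heat. Names are illustrative.
public class DiffusionStep {
    static double newHeat(double h, double avg, double delta, double eta) {
        return eta * (h + delta * (avg - h));
    }

    public static void main(String[] args) {
        // With full diffusion (delta = 1) and no evaporation (eta = 1),
        // a cell's heat becomes exactly the neighborhood average.
        System.out.println(newHeat(1.0, 2.0, 1.0, 1.0)); // 2.0
        // With delta = 0 nothing diffuses; only evaporation acts.
        System.out.println(newHeat(3.0, 5.0, 0.0, 1.0)); // 3.0
    }
}
```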
    for(int x = 0; x < heatbugs.valgrid.getWidth(); x++)
        for(int y = 0; y < heatbugs.valgrid.getHeight(); y++) {
            average = 0.0;
            heatbugs.valgrid.getMooreLocations(x, y, 1, Grid2D.TOROIDAL,
                true, xNeighbors, yNeighbors);  // true: include origin
            for(int i = 0; i < xNeighbors.numObjs; i++)
                average += heatbugs.valgrid.get(xNeighbors.get(i),
                                                yNeighbors.get(i));
            average /= 9.0;
            heatbugs.valgrid2.set(x, y, heatbugs.evaporationRate * (
                heatbugs.valgrid.get(x, y) + heatbugs.diffusionRate *
                (average - heatbugs.valgrid.get(x, y))
            ));
        }
• Filling bags with heatbugs.valgrid.getMooreLocations(...) is expensive. Instead, loop through the neighbors explicitly.
• Access the field array of the grid directly, not through get()/set().
• Use stx()/sty() instead of tx()/ty() when only small steps are made.
• Take all instance variables out of the loop, defining local variables instead.
    final DoubleGrid2D _valgrid = heatbugs.valgrid;
    final double[][] _valgrid_field = heatbugs.valgrid.field;
    final double[][] _valgrid2_field = heatbugs.valgrid2.field;
    final int _gridWidth = heatbugs.valgrid.getWidth();
    final int _gridHeight = heatbugs.valgrid.getHeight();
    final double _evaporationRate = heatbugs.evaporationRate;
    final double _diffusionRate = heatbugs.diffusionRate;

    double average;
    for(int x = 0; x < _gridWidth; x++)
        for(int y = 0; y < _gridHeight; y++) {
            average = 0.0;
            for(int dx = -1; dx < 2; dx++)
                for(int dy = -1; dy < 2; dy++) {
                    int xx = _valgrid.stx(x + dx);
                    int yy = _valgrid.sty(y + dy);
                    average += _valgrid_field[xx][yy];
                }
            average /= 9.0;
            _valgrid2_field[x][y] = _evaporationRate *
                (_valgrid_field[x][y] + _diffusionRate *
                 (average - _valgrid_field[x][y]));
        }
Waste: the row _valgrid_field[xx] is looked up repeatedly for each y.
• Instead, use the rolling row references

    _current = _valgrid_field[x]
    _past    = _valgrid_field[_valgrid.stx(x-1)]
    _next    = _valgrid_field[_valgrid.stx(x+1)]
    _put     = _valgrid2_field[x]

• Replace the 3 × 3 inner for loop with an explicit sum of 9 terms.
• Use sty() only when really needed; save the value when it can be reused.
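The rolling-row idea can be demonstrated on a plain double[][] grid, without MASON. In this self-contained sketch the class and method names are illustrative, and a toroidal wrap() helper stands in for MASON's stx()/sty():

```java
// Self-contained sketch of the rolling-row optimization on a plain
// double[][] grid (no MASON): the row references past/current/next are
// fetched once per x, the 3x3 neighborhood is summed explicitly, and
// the wrapped y-indices are computed once per y and reused.
public class RollingDiffusion {
    // Toroidal wrap, standing in for MASON's stx()/sty().
    static int wrap(int i, int n) { return ((i % n) + n) % n; }

    /** One diffusion + evaporation step; writes the result into dst. */
    static void step(double[][] src, double[][] dst,
                     double diffusionRate, double evaporationRate) {
        int w = src.length, h = src[0].length;
        for (int x = 0; x < w; x++) {
            double[] past    = src[wrap(x - 1, w)];
            double[] current = src[x];
            double[] next    = src[wrap(x + 1, w)];
            double[] put     = dst[x];
            for (int y = 0; y < h; y++) {
                int ym = wrap(y - 1, h), yp = wrap(y + 1, h);
                double average =
                    (past[ym]    + past[y]    + past[yp] +
                     current[ym] + current[y] + current[yp] +
                     next[ym]    + next[y]    + next[yp]) / 9.0;
                put[y] = evaporationRate *
                    (current[y] + diffusionRate * (average - current[y]));
            }
        }
    }

    public static void main(String[] args) {
        double[][] src = new double[4][4], dst = new double[4][4];
        src[1][1] = 9.0;             // a single hot cell
        step(src, dst, 1.0, 1.0);    // full diffusion, no evaporation
        // With delta = eta = 1 every cell becomes its neighborhood
        // average, so each of the 9 cells around (1,1) gets 9/9 = 1.
        System.out.println(dst[1][1]); // 1.0
    }
}
```

With η = 1 the step conserves total heat, which gives a quick sanity check for any reimplementation.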