
CHAPTER 5

Introduction to Neuroscience

Paul W. Glimcher

OUTLINE

Introduction

The Cellular Structure of Information Encoding in the Brain

The Large-Scale Anatomical Structure of the Brain

Organizing Principles of Representation in the Brain

Neuronal Stochasticity

Plasticity and Memory

Summary and Conclusions

References

INTRODUCTION

Perhaps the greatest hindrance to an economist hoping to seriously study, evaluate or engage in neuroeconomic research is the lack of a foundational understanding of modern neurobiology. Modern neurobiology provides a wealth of constraints and restrictions that shape neuroeconomic theory, a fact that is not always appreciated by economists. The widespread lay view that "the human brain can compute anything," that when it comes to anatomy "everything is connected to everything," or the complementary assumption that "what we know about the brain today is not enough to impose any meaningful constraints on economics" are all equally false. Modern neurobiologists know basically how information is encoded in our nervous systems, can place constraints on the costs of this encoding, have a fairly complete (and surprisingly sparse) map of what connects to what in the human brain, and can even explain biophysical/molecular constraints on the form and costs of learning. These are real accomplishments that simply cannot be overlooked by anyone seriously hoping to link the algorithmic structure of the human brain to reduced-form models of decision making.

What follows is an effort to provide, in extremely compact form, a précis of the foundations of modern neurobiology, focusing on those aspects that are most relevant to the neuroeconomist with primary training in the social sciences. This chapter and the methodological chapter that follows (Chapter 6) offer a scaffold on which future study can build. Students who, after reading these two chapters, wish to learn more about the brain might turn to any number of textbooks. We would suggest that those interested in basic neuroscience consider reading Breedlove et al. (2010) or the slightly more advanced Bear et al. (2007). For those interested in a more detailed treatment of the brain imaging methods discussed in Chapter 6, Huettel et al. (2008) is recommended. For advanced material the reader is referred to either of the two standard graduate texts: Squire et al. (2012) or Kandel et al. (2012).

The chapter that follows breaks down a foundational approach to neuroscience into four sections: the cellular structure of information encoding in the brain; the large-scale anatomical structure of the brain; organizing principles of representation in the brain; and the biophysical mechanisms of learning and plasticity.

THE CELLULAR STRUCTURE OF INFORMATION ENCODING IN THE BRAIN

Like all organs the vertebrate brain is composed of cells, tiny self-sustaining units that are typically about a thousandth of an inch in diameter. The brain is composed of two types of these cells: glia and neurons. Glia are support cells of several different kinds that play structural and metabolic roles in the maintenance of the brain. Together these cells perform several functions critical for neural processing. First, they play a key role in maintaining the biophysical environment required for nerve function. Second, they provide structural support, in the form of a highly dynamic armature that both maintains the spatial configurations of nerve cells and allows plastic changes to that configuration when required. Third, they serve as a kind of filter that allows some compounds found in the blood access to nerve cells, but prevents other classes of compounds from crossing into the brain; a collective property of the glia and blood vessels of the brain known as the blood–brain barrier. Finally, they can serve as a kind of living neural insulation wrapped around specific parts of some nerve cells; this insulation has the effect of speeding the rate at which information travels in the brain.

Neuroeconomics. DOI: http://dx.doi.org/10.1016/B978-0-12-416008-8.00005-X © 2014 Elsevier Inc. All rights reserved.

It is, however, neurons, or nerve cells, which perform computations and serve as the foundation for mental function. Figure 5.1 shows a cartoon of a fairly typical neuron. The large bulbous center of the cell, or cell body, contains all of the machinery necessary to keep the cell alive. It is mostly here that costly sugars and oxygen absorbed from the blood are processed to provide the energy that powers the cell, and it is also here that DNA is used as a blueprint to produce new cellular components. Note that extending from the cell body are long thin processes called dendrites. These extensions serve as the inputs to a nerve cell, the structural mechanism by which signals from other nerve cells are mathematically integrated and analyzed during neural computation. Also extending from the cell body is a single long thin process called the axon. The axon serves as an output wire for the nerve cell. Axons may be quite long, in rare cases almost a meter, and nerve cells use these axons to broadcast the outputs of their dendritic computation to other nerve cells, even if those recipient cells are quite distant. They accomplish this connection to other nerve cells at the end of the axon, the tips of the axons making physical contact with the dendrites of other neurons. The cellular specialization at this contact is called the nerve terminal, a point of contact that is highly specialized to maximize computational flexibility. This nerve-ending-to-dendrite junction, the synapse, allows a receiving neuron to add, subtract, multiply, divide or even mathematically integrate the many continuous real-valued signals that its dendrites receive from the nerve terminals that impinge upon it. It does this using simple electrochemical processes that physically instantiate mathematical operations on real numbers that have been mapped into the form of electrical voltages.

FIGURE 5.1 A neuron. Here we can see a neuron making a synaptic contact with a second neuron. The main elements of the neuron are labeled.

To better understand this process, however, we next have to understand what it means for a nerve cell to send a "signal" to another nerve cell. Formally, signals in nerve cells are called action potentials (or, more colloquially, spikes) and they reflect a rather simple electrochemical process that is now very well understood (Aidley, 1998; Hodgkin and Huxley, 1952). Like all cells, nerve cells are surrounded by membranes that restrict the flow of chemicals both into and out of the cell (Figure 5.2). These membranes are essentially very thin walls impenetrable to water-based chemicals that are perforated by tiny tubes colloquially called channels. To a first approximation, we can think of these membranes as particularly restrictive to the flow of a particular chemical compound, the positively charged atom sodium (the active ingredient in table salt). Sodium is abundant outside (but not inside) of these cells; the nerve membrane limits the flow of this chemical into the cell and at the same time completely prevents the flow of larger negatively charged proteins (which are abundant inside but not outside) out of the cell. The critical feature is that this regulation of flow, and the separation of electrical charge that it imposes, produces a stable equilibrium between two physico-chemical forces. The high concentration of sodium outside the cell sets up a force known as diffusion that pushes to equalize the concentration of sodium inside and outside the cell. This force acts to drive sodium into the cell. In opposition, an electrical force (involving the positively charged ion potassium, which is overrepresented inside the cell at equilibrium) seeks to distribute electrical charge equally by driving sodium out of the cell. Because of the construction of the membrane, these two forces reach a stable equilibrium state at which: (i) the inside of the cell carries a negative charge (meaning force is exerted outwards) and this outward-pushing force is opposed by (ii) an equal and opposite diffusive force pushing sodium inside. This equilibrium state is called the resting potential, and perturbations of this equilibrium induced by transient changes in the strength of the diffusive force serve as the conceptual centerpiece for all neural computation.

These perturbations turn out to be quite easy to induce by opening and closing the tubes, or channels, that span the membrane (Hille, 2001). Consider an openable ion channel (Figure 5.3), a hollow tube spanning the membrane with a hole that can be opened, and which when opened permits sodium atoms to cross the membrane one at a time. When several of these channels open on a dendrite, the result is that the dendrite is driven towards a new equilibrium state by the new membrane condition, which induces movement of the electrically charged particle sodium (by diffusion) into the cell. This new equilibrium, one associated with a stronger diffusive force created by the open channels, is characterized by a commensurate change in the electrical force, in this case a shift to a higher voltage inside the cell.

What opens these tiny ion channels? The answer is that chemicals, called neurotransmitters, transiently open channels of this type located on the dendrites. Thus neurotransmitters have the effect of briefly shifting the electrical voltage across each neuron's membrane. This voltage is, of course, a signed real number ranging from about −100 to +100 millivolts, and the state of the membrane's ion channels determines the numerical value of this physical quantity at any given time. (We should note that the precise dynamics of this membrane voltage are well described by a set of widely understood differential equations; see Hodgkin and Huxley, 1952.)
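The shift toward a new equilibrium when channels open can be illustrated with a much simpler model than the full Hodgkin–Huxley equations: a first-order relaxation. The sketch below is illustrative only; the parameter values (resting potential −70 mV, sodium equilibrium potential +55 mV, relative open-channel conductance 0.5) are round numbers chosen for the example, not measured constants.

```python
# A first-order sketch of the membrane voltage relaxing toward a new
# equilibrium when sodium channels open. All values are illustrative.

def simulate_membrane(v_rest=-70.0, e_na=55.0, g_open=0.5,
                      tau=10.0, dt=0.1, steps=1000):
    """Euler integration of dV/dt = ((v_rest - V) + g_open*(e_na - V)) / tau."""
    v = v_rest
    trace = []
    for _ in range(steps):
        v += dt * ((v_rest - v) + g_open * (e_na - v)) / tau
        trace.append(v)
    return trace

trace = simulate_membrane()
# The voltage settles at a weighted average of the two driving forces:
# v_inf = (v_rest + g_open * e_na) / (1 + g_open), about -28 mV here.
```

Opening more channels (a larger `g_open`) pulls the equilibrium further toward the sodium potential, which is the sense in which neurotransmitter concentration maps onto membrane voltage.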

Sodium channels are not, however, the only type of channel located on the dendrites that determines the membrane voltage. Other classes of channels can have an effect opposite to that of sodium channels, causing the local voltage to transiently shift to a lower (rather than a higher) equilibrium. By mixing and matching both channel types and neurotransmitters, we can therefore construct a kind of instantaneous mechanical adding machine that sums the effects of different neurotransmitters (each with slightly different but well-understood temporal dynamics) as an electrical voltage across the membrane.

Thus, to take a specific example, one kind of neurotransmitter might open voltage-increasing channels on a particular neuron. The more of that neurotransmitter that is present in any one synapse on that neuron, the larger the number of voltage-increasing channels that are opened at that synapse, and thus the higher the voltage in the dendrite on which that synapse is located. Voltage at a single synapse can thus be thought of as a monotone, and it turns out sometimes even a linear, function of local neurotransmitter concentration.

FIGURE 5.2 A schematic of the neuronal membrane. Ion channels span the membrane, allowing chemicals like sodium to enter the cell one atom at a time.


Another kind of neurotransmitter might open voltage-decreasing channels that have the opposite effect, and both of these two classes of operations might even occur at the same time in adjacent synapses, thus affecting together the voltage in a single dendrite. The important point is that the physical membrane reacts to each of these changes, effectively averaging these electrical fields over space and time such that the instantaneous electrical field across the entire dendrite is a (surprisingly linear) readout of the sum of the neuron's synaptic inputs (Shepherd, 2004).
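The "adding machine" picture can be made concrete with a toy linear model. This is a sketch, not a biophysical simulation: the `concentration` and `gain` values, and the function name, are hypothetical quantities invented for illustration, with positive gains standing in for voltage-increasing channels and negative gains for voltage-decreasing ones.

```python
# Toy sketch of the dendritic membrane as a near-linear summing device.
# Each synapse contributes a signed voltage deflection proportional to
# its local neurotransmitter concentration. Values are illustrative.

def dendritic_voltage(synapses, v_rest=-70.0):
    """Sum signed synaptic contributions onto the resting potential.

    synapses: list of (concentration, gain) pairs, where gain > 0 for
    voltage-increasing channels and gain < 0 for voltage-decreasing ones.
    """
    return v_rest + sum(conc * gain for conc, gain in synapses)

# Two voltage-increasing synapses and one voltage-decreasing synapse
# acting together on the same dendrite:
v = dendritic_voltage([(1.0, 5.0), (0.5, 5.0), (1.0, -3.0)])
```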

More sophisticated channels and transmitters can perform even more complicated computations. One class of channels, for example, stays open for a very long time after activation by a molecule of neurotransmitter. In synapses with channels of this kind, the membrane voltage continues to increase if a fixed number of neurotransmitter molecules are released every instant, a kind of mechanical integration with a time constant locked to the open-time of the channel. Another class of channel affects the voltage of the membrane in a non-monotonic fashion, always driving membrane voltage towards a fixed point in voltage, a kind of non-linear interaction often called shunting.

A tremendous amount is known about neurotransmitters and the receptor-channels with which they interact, and it is now known that synapses are highly heterogeneous (Cooper et al., 2003). One synapse may be specialized for linear summation and another for a short time constant of integration, but all of them share these simple building blocks. For all of them, the dendritic voltage is a continuous variable reflecting the output of a voltage computation being performed by neurotransmitter levels in the synapses that cover the surface of the dendrite.

FIGURE 5.3 Ion channels in the open and closed states.

The next step in neural computation within a single neuron involves a nonlinear threshold. The ion channels along the axon, it turns out, are different from those in the dendrites. These ion channels open to allow sodium to enter the cell freely whenever the voltage near them exceeds a fixed threshold. Consider now what this means. Whenever the dendritic "computation" (the summed voltage in that region of the cell) exceeds a fixed threshold, these voltage-gated sodium channels all open, thus driving the entire cell to a new equilibrium that has a much higher voltage. What this means in practice is that once the voltage of the cell is high enough to trigger the opening of these voltage-sensitive channels, those channels open. This in turn drives the voltage even higher. That in turn activates adjacent channels in the axon, which, although far away from the dendrite, are subsequently opened by this more proximal shift in the equilibrium voltage. What happens is thus a wave of equilibrium shifts, realized as a change in the electrical state of the cell that propagates down the axon to the axon terminal, somewhat like a zipper opening ion channels as it moves itself along the axon. This wave of activation is the action potential and, importantly, it is always of the same voltage, roughly the one specified by the equilibrium state induced by the opening of these voltage-sensitive channels (less some interesting dynamical issues). It is this mechanism that allows a cell to signal to the nerve endings, which may be a meter away, that the voltage of the cell body has crossed a specified threshold (Nicholls, 2012).
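In computational neuroscience this threshold mechanism is often abstracted as a "leaky integrate-and-fire" model. The sketch below assumes a constant input drive, and all parameter values (rest −70 mV, threshold −55 mV, time constant 10 ms) are illustrative round numbers rather than measurements; it demonstrates only that crossing a fixed voltage threshold yields all-or-none events.

```python
# Leaky integrate-and-fire sketch of the threshold mechanism.
# Parameter values are illustrative, not measured.

def count_spikes(i_input, v_rest=-70.0, v_thresh=-55.0,
                 tau=10.0, dt=0.1, duration=1000.0):
    """Count all-or-none spikes over `duration` ms of constant drive."""
    v = v_rest
    spikes = 0
    for _ in range(int(duration / dt)):
        # Leaky integration of a constant input drive (in mV of drive)
        v += dt * ((v_rest - v) + i_input) / tau
        if v >= v_thresh:   # threshold crossed: an all-or-none event...
            spikes += 1
            v = v_rest      # ...followed by a reset to rest
    return spikes

# Sub-threshold drive produces no spikes; stronger drive produces more:
low, high = count_spikes(16.0), count_spikes(30.0)
```

Note that the output is a spike count, not a voltage: once the threshold intervenes, the continuous dendritic computation can only be read out through the rate of these discrete events, which is the point taken up next.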

It is critical to recognize, however, that we have transformed a continuous and largely linear variable, membrane voltage, into a discrete single event. How then can nerve cells communicate the kinds of continuous real numbers that we need for meaningful computation? The answer is that the action potential itself is automatically reset after about a thousandth of a second, a process that can be likened to a second zipper travelling just behind the first that closes up the channels in the membrane and thus returns the cell to its baseline (or resting) equilibrium state. A second action potential is then generated by this cell if and only if the voltage in the dendrites remains above threshold after this resetting process is complete. Because of the stochastic mechanics of channel opening, however, the higher the voltage is above threshold the more rapidly this second action potential can be generated. The result is that the rate of action potential generation, the frequency with which action potentials are generated, becomes a roughly linear function of dendritic voltage. In practice this means that the number of action potentials generated per second by a cell is the continuous variable onto which any voltage calculation in the dendrites must be mapped (Aidley, 1998; Nicholls, 2012). This is a bounded, monotonic, and near-linear transformation that relates dendritic voltage to action potential rate. Action potential rate ranges from about 0 to 100 action potentials per second (or Hertz, the units of frequency) for a typical neuron. Note that this is a positively valued range, which imposes some interesting computational constraints. Negative values can be encoded by assigning two neurons to the encoding, one for positive values and one for negative values. Alternatively, negative values can be encoded by defining, for example, 50 action potentials per second (or some other frequency) as "0." Both encoding techniques have been observed in the mammalian brain for different subsystems. The range is also very finite, in practice, because of a fixed and significant variance associated with these action potential rates.¹
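The two schemes for encoding signed values in non-negative firing rates can be made concrete with a short sketch. The 0–100 Hz range and the 50 Hz baseline come from the examples in the text; the function names and the one-rate-unit-per-value-unit mapping are invented for illustration.

```python
# Two ways to encode a signed quantity in non-negative firing rates.
# Rates are clipped to an illustrative 0-100 Hz range.

def clip(rate, lo=0.0, hi=100.0):
    return max(lo, min(hi, rate))

def two_neuron_code(x):
    """Scheme 1: one neuron carries positive values, a partner neuron
    carries negative values (both report non-negative rates)."""
    return clip(x), clip(-x)

def offset_code(x, baseline=50.0):
    """Scheme 2: a single neuron fires at 50 Hz for '0', above baseline
    for positive values, below baseline for negative values."""
    return clip(baseline + x)

pos_cell, neg_cell = two_neuron_code(-20.0)
rate = offset_code(-20.0)
```

The clipping step is what makes the range "very finite" in practice: values beyond the representable range saturate rather than being transmitted.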

What happens to these action potentials next, after they reach the nerve terminal? The answer is that each action potential triggers the release of a tiny quantity of neurotransmitter from each terminal (Figure 5.4). This neurotransmitter then diffuses across a truly tiny space, the synapse, which separates each nerve terminal from the dendrite with which it communicates. Lying at the far side of the synapse, on the surface of the dendrite, are the same ion channels that we encountered when discussing dendritic function above. These were the ion channels that were opened or closed by neurotransmitter molecules. These neurotransmitter molecules thus serve to open ion channels in those dendrites, causing the membrane of the post-synaptic cell to change voltage. This completes the passage of the signal through a single neuron and initiates a new computation at the next neuron. Neuronal computation is thus incremental and serial, with chains or networks of neurons performing parallel mini-computations in continuous time.

FIGURE 5.4 A synapse.

¹Let us draw attention here to how obviously cardinal and linear this discussion of firing rates as encoding schemes is. To a neurobiologist, who is essentially an algorithmic engineer, this is the most natural way to imagine firing rates. Perhaps somewhat surprisingly, there is also a huge amount of data to support the conclusion that firing rates actually are monotone with important environmental variables. Perhaps even more surprisingly, the activity level of a given neuron during rest actually does correspond, in most cases, to the default state of the variable being encoded.

At a micro-scale, networks of neurons can thus be viewed as largely linear devices that can perform essentially any specifiable computation, either singly or in groups, and a large proportion of the theorists and empiricists in neuroscience devote their time to the study of neural computation at this level. Some of the work described in Chapter 23 examines neural representation at this level of analysis. Neuronal recording studies conducted by neuroeconomists in monkeys, to take another example, take advantage of this fact by measuring, one neuron at a time, the rate at which action potentials are generated as a function of either the options that a monkey faces or the choices that he makes. This allows them to test the hypothesis, for example, that, to within a linear transform, the neurons of a particular brain region encode in their spike rate a quantity linearly proportional to the expected utility of an option.
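Testing whether a spike rate encodes expected utility "to within a linear transform" amounts, in its simplest form, to a regression. The sketch below uses simulated firing rates, not recordings: the slope, intercept, and noise level are invented numbers, and the analysis simply recovers the linear relationship by least squares.

```python
# Simulated sketch of the linear-encoding test: invent a neuron whose
# rate is a linear function of expected utility plus noise, then
# recover that relationship by least squares. No real data are used.
import numpy as np

rng = np.random.default_rng(0)
expected_utility = rng.uniform(0.0, 1.0, size=200)      # one value per trial
rates = 10.0 + 40.0 * expected_utility + rng.normal(0.0, 2.0, size=200)

# Least-squares fit of rate = a + b * expected_utility
X = np.column_stack([np.ones_like(expected_utility), expected_utility])
(a, b), *_ = np.linalg.lstsq(X, rates, rcond=None)

residuals = rates - (a + b * expected_utility)
r_squared = 1.0 - residuals.var() / rates.var()
# A high r_squared is consistent with encoding to within a linear transform.
```

In a real experiment the expected utilities would come from a behavioral model of the monkey's choices and the rates from recorded neurons; the logic of the fit is the same.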

A final point that needs to be made before we leave the study of neurons is that all of these processes (the generation of action potentials, the release of neurotransmitter, the maintenance of dendritic electro-chemical equilibrium, even the maintenance of the glia that support these nerve cells) are metabolically costly. All of these processes consume energy in the form of oxygen and sugars. These are probably the most costly metabolic processes in the human body. Over 20% of the oxygen and sugar we employ as humans is used in the brain, even though the brain represents only about 3% of the mass of the human body. So it is important to remember that more neural activity means more metabolic cost. This has two important implications. First, minimizing this activity is a central feature of the cost functions that lie behind neural computation. Second, this metabolic demand is what is measured in most human brain scanning experiments. To the degree that this metabolic cost is a linear function of neuronal activity, measurements of metabolic state reflect the underlying neural activity, a point taken up in the next chapter.

THE LARGE-SCALE ANATOMICAL STRUCTURE OF THE BRAIN

Studies of single neurons do show evidence of a clear mapping between economic theory and brain function, but it is also critical to understand the size of the human brain when one is considering the function of single neurons. The human brain is composed of about 10¹¹ neurons. The average neuron receives, on its dendrites, inputs from hundreds of other neurons and in turn makes synaptic contacts at its nerve endings with hundreds of other neurons. Estimates of the total number of synapses in a single human brain are typically in the range of 10¹⁵. If we were to imagine that 10⁷ of these neurons encoded (for example) expected utility (to within a linear transform), and that those neurons were randomly distributed in the brain, then it would in practice be impossible to find those neurons looking for them one at a time. The existence of a second hidden cost function, however, solves this problem for neuroscientists. Axons are particularly costly to maintain and as a result evolution has shaped the human brain to minimize total axonal length (Van Essen, 1997). To achieve axonal minimization, two principles seem to be widely adhered to in the neural architecture. Neurons engaged in related computations tend to be grouped closely together, and communication between distant groups of neurons tends to employ highly efficient coding schemes employing a minimum number of axons.

These ex ante constraints, and a wealth of empirical evidence, now support the conclusion that the brain is organized around a set of modular processing stages (Brodmann and Garey, 1999; Felleman and Van Essen, 1991; Gazzaniga, 2009). Discrete regions of the brain perform specific computations and pass their computational outputs, in a highly compact form, to other brain areas for additional processing. The next step in understanding the brain is thus to move to a less reductionist level of analysis at which we can view the modular structure of the brain, examine the basic functions and structural modules, and determine the known (and limited) patterns of connectivity between these modules. We need to maintain, however, a clear mapping between analysis at the level of neurons, analysis at the level of modules, and analysis at the level of inter-module (brain area) organization. To that end we next examine the basic modular structure of the human brain, the fundamentals of neuroanatomy, before turning to within-module features of the brain structure.

Broadly speaking, the primate, and hence human, brain can be divided into three main divisions. The boundaries of these three divisions are based on converging evidence from developmental, genetic, physiological and anatomical sources. These three divisions are, front to back, the telencephalon, or forebrain, the mesencephalon, or midbrain, and the brainstem or hindbrain (Figure 5.5). For the purposes of contemporary neuroeconomic study, the telencephalon, which all vertebrates possess in some form, will be our almost exclusive focus. The mesencephalon, which lies beneath the telencephalon, is outside the domain of nearly all contemporary neuroeconomic study. The brainstem, which for our purposes includes the pons and medulla, plays many critical roles in functions ranging from movement generation to breathing but is almost entirely outside the focus of neuroeconomic research today. A final area, the cerebellum, lies outside the brainstem and is principally involved in movement control.

FIGURE 5.5 Main divisions of the human brain.

The telencephalon itself can be divided into three main divisions that will be familiar to many neuroeconomists: the cerebral cortex, the basal ganglia, and the thalamus. For the purposes of this review we will restrict ourselves to the first two of these. Of those two, the more evolutionarily ancient structure is the basal ganglia. This is a brain region possessed in some form by all vertebrates. (Vertebrates include fish, amphibians, reptiles, birds and mammals.) The cerebral cortex is a much more recently evolved structure. The first true cerebral cortex probably arose only about 120 million years ago, towards the end of the age of dinosaurs, when mammals first became a widespread group of animals. Well-developed cerebral cortices occur in nearly all mammals and this structure is particularly well developed in primates, a group of animals who arose about 60 million years ago. So while the cerebral cortex is a recently evolved structure, it is also one that we share with a great many other familiar species ranging from mice to monkeys, a point repeated in Chapter 7.

The basal ganglia, which lie beneath the cerebral cortex, are composed of a number of sub-regions in humans and are dealt with in more detail in Chapters 15, 16 and 17. There are five of these regions that are most important to neuroeconomists. The caudate and putamen together are known as the striatum. The striatum, and in particular the lower, or ventral, striatum, is of particular interest to neuroeconomists because activity here appears to encode something like option value during choice tasks (see Chapters 8 and 20 for more on this point). These areas receive extensive inputs from the frontal cortex and send almost all of their outputs to two other nuclei of the basal ganglia, the globus pallidus and the substantia nigra pars reticulata. Speaking generally, the caudate and putamen are the input areas of the basal ganglia (axons entering the basal ganglia generally synapse in these areas) and the globus pallidus and substantia nigra pars reticulata are the output areas (axons leaving the basal ganglia generally originate in these areas). These output areas project, in turn, to a subregion of the thalamus that serves as a relay, passing that information back to the frontal cortex. The core circuit of the basal ganglia is thus a loop that takes information from the frontal cortex and passes it back to the frontal cortex after processing. The one remaining critical region of the basal ganglia is the dopaminergic system, composed of the dopamine-releasing neurons of the ventral tegmental area and the substantia nigra pars compacta. These neurons receive projections from the output nuclei of the basal ganglia as well as from many other areas, and project both to the frontal cortex and the input nuclei of the basal ganglia, where their axon terminals release the neurotransmitter dopamine. The dopamine neurons have been of particular interest, as will be described below, because there is now overwhelming evidence that these neurons encode a reward prediction error signal appropriate for error-correction-based learning. Much more detail on these dopamine-associated systems of the basal ganglia and frontal cortex can be found in Section 3 of this volume, which begins with Chapter 15.
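The reward prediction error idea can be illustrated with a minimal Rescorla-Wagner-style update: the error is the difference between the delivered reward and the current expectation, and the expectation is nudged toward the reward by a learning rate. The full dopamine account is usually framed as temporal-difference learning (see Section 3); the learning rate and reward sequence below are arbitrary illustrative choices.

```python
# Minimal sketch of an error-correction learning rule of the kind the
# dopamine prediction-error signal supports. Values are illustrative.

def learn_value(rewards, alpha=0.2):
    """Return per-trial prediction errors and the final value estimate."""
    value = 0.0
    errors = []
    for r in rewards:
        delta = r - value        # reward prediction error
        value += alpha * delta   # error-correction update
        errors.append(delta)
    return errors, value

# Twenty trials of a fully reliable reward of 1.0:
errors, value = learn_value([1.0] * 20)
# Early trials produce large errors; as the reward becomes predicted,
# the error shrinks toward zero and the estimate approaches 1.0.
```

This captures the qualitative signature reported for dopamine neurons: a large response to unexpected reward that fades as the reward becomes fully predicted.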

The cerebral cortex of the telencephalon is much larger than the basal ganglia in most primate species and is surprisingly homogeneous in structure. Essentially every cortex (with the exception of the evolutionarily ancient hippocampus and olfactory cortex) is a 6-layered sheet (Figure 5.6), with each of the layers showing very specific functional specializations and some specialization from area to area. Layer 5, for example, always contains a specific class of cells called pyramidal neurons that send axons out of the sheet to make connections with other distant regions in the cortex. Layer 4, by contrast, is a layer that receives input from other regions. The most important feature of this architecture is the fact that "cortex" is thus a fairly homogeneous device performing a limited set of processing operations locally before passing these mathematically transformed signals to other places, typically other places in the cortex.

FIGURE 5.6 The cerebral cortex is a six-layered sheet.

The six-layered structure of the cortex also means that this processing system is, at least structurally, a sheet-like device. This is obvious on gross inspection. The crinkled surface of the brain reveals that the cerebral cortex is a folded sheet that has been crumpled up to fit inside the skull. Beneath this folded sheet are dense runs of axons for interconnections between different places in the cortex. The sheet itself, composed largely of cell bodies, is referred to as grey matter. The dense runs of axons beneath it are referred to as white matter. For hundreds of years this sheet has been divided into four to five main subdivisions, or lobes, that provide the first-order nomenclature for these systems. These are not functional subdivisions, but rather names of convenience. These main divisions are the frontal, parietal, occipital, and temporal lobes. Until recently the insula was considered an independent fifth lobe, although it is now often referred to as part of the frontal lobe.

Despite this casual parcellation into lobes, until the twentieth century it was widely believed that the cortex was generally homogeneous not only with regard to its anatomy but also with regard to its function. That conclusion was successfully challenged when it was demonstrated that some sub-areas in the cortex served quite specific functional roles (e.g., Ferrier, 1890). One area, for example, is unique in that it receives inputs (via the thalamus) from the retina: an area now called the primary visual cortex. Another projects outputs uniquely to the muscles (via the spinal cord): the primary motor cortex. Ultimately, these specializations led the famous German anatomist Korbinian Brodmann (Brodmann and Garey, 1999) to conduct a series of very detailed microscopic analyses of the cerebral cortex in a range of different animal species. What Brodmann found was that there are small differences between the anatomical structures of different regions of the cortex, differences small enough that they had been overlooked in the preceding two centuries. Based on these differences Brodmann divided the cortex into a large number of numerically labeled sub-areas, which still bear his numbers as names (Figure 5.7). Brodmann's area 17, for example, can be shown to correspond to the primary visual cortex, while area 4 can be shown to correspond precisely to the primary motor cortex.

The principal Brodmann-area subdivisions, at a functional level, thus parcellate the cortex into a series of areas with now largely known interconnectivities and with discrete functions. Both of these properties are important. The connectivities are surprisingly sparse in the sense that each cortical area connects with only a few other areas, and it should be noted that at this gross level of analysis these connections can be treated as identical across individuals. It is also true that the functions of these areas are often surprisingly discrete and very well defined. The identification of a subset of the Brodmann areas as receiving sensory inputs from places like the eyes and ears has naturally led to the designation of this group of areas as "sensory." In a similar way, the group of areas associated with movement control has been designated "motor" areas. The frontal cortex, and the many sub-areas that make it up, are typical examples of non-sensory and non-motor areas that are often identified as "association" areas.

Two final areas that deserve particular anatomical mention with regard to neuroeconomic study are the amygdala and the hippocampus. The amygdala is a portion of the telencephalon that is not classically considered part of the cerebral cortex or the basal ganglia. Developmentally and evolutionarily it is one of the oldest parts of the telencephalon, and its cellular structure is quite distinctive, certainly different from both the basal ganglia and the cortex. The amygdala is of particular interest because a wealth of studies now suggest that the psychological state of fear can be mapped to activation of the amygdala (LeDoux, 1996). (Although the reverse mapping, from amygdala activation to the psychological state of fear, has not been universally observed to be the case.) Anatomically, the amygdala receives inputs from many sensory systems and sends output to the hypothalamus, from which it can initiate somatic fear responses, and can regulate activity in several frontal cortical areas. Generalizing from these observations has led to the suggestion that psychologically defined emotional states may well map to neurally localizable activity. The good news is that this seems to be the case for fear. The bad news is that there is no compelling evidence, as yet, for such specific localization of other psychologically defined emotions. Chapter 12 provides much more information on this interesting structure and its relation to emotion and decision making.

FIGURE 5.7 Brodmann areas of the human brain. (The figure labels the numbered Brodmann areas on a lateral view of the cortex.)

The hippocampus lies adjacent to the amygdala and is a three-layered cortex-like structure that is widely believed to be the evolutionary progenitor of the cerebral cortex (Kaas, 2007). The hippocampus plays a critical role in the formation of several classes of long-term memory (Squire and Kandel, 2009). Bilateral (both left and right) damage to the hippocampus leads to a peculiar deficit in the ability to make long-term verbal memories. The hippocampus looms large in neuroscientific studies of learning and memory. It is here that many of the basic biochemical mechanisms of learning and memory were first worked out. It is also an area particularly associated with learning and memory with regard to spatial location and "cognitive map" building. While it has received very little attention from neuroeconomists, there is some evidence that this is beginning to change. A brief discussion of some of the studies that tie the hippocampus to some classes of value learning can be found in Chapter 21.

ORGANIZING PRINCIPLES OF REPRESENTATION IN THE BRAIN

We turn next to the structural features that manifest themselves within cortical areas. Several fundamental features of cortical representation have been observed again and again, in area after area. These features serve as important representational constraints and lie at the heart of a number of standard neuroeconomic models.

The central principle for understanding representation in the cerebral cortex is the notion of modularity. We now have overwhelming evidence that activity in each of the Brodmann areas, at least to a first approximation, represents a very limited class of information, and that within most of these areas that information is organized into a modular, repetitively tiled representation that constrains the computations a given area can perform. To make this clear we turn to a brief tour of the cortical area responsible for vision.

Vision begins in the cortex in Brodmann's area 17, often known as the visual cortex (Figure 5.7). Area 17 is unique in that it is the only cortical area that receives signals directly from the eyeball itself (Hubel, 1988).2 It is important to understand that this is the only portion of the cortex receiving direct visual signals from the retina. And it is equally important to understand that this area receives no direct inputs from any other sensory system. This is a highly specialized module dedicated to representing, filtering and processing signals about the visual world.

Within area 17, information about the visual world is organized topographically at both a global level and at a "tiled local level." Recall that because area 17 is a cortical area it is in essence a flat sheet that has been folded up to fit inside the back of the head. We can thus either mechanically or computer-graphically unfold this sheet and see area 17 (or any other area) as the two-dimensional object that it really is. And of course it is equally important to remember that because the human brain is bilaterally symmetrical there are two of these flat sheets, one on each side. What we observe if we examine the pattern of activity in these two areas is that each represents information about the contralateral part of the visual world; all activity in the right area 17 carries information about the left half of the visual world. This is an observation that is repeated in many cortical areas. Sensory areas tend to represent contralateral sensory events, and movement control areas represent either movements of the contralateral half of the body or movements into contralateral extra-personal space (depending on the level of abstraction employed in a particular area).

Within each area 17, however, there is much more organization immediately apparent. Each area 17 forms a complete topographic map of the contralateral visual space it represents. The back-most edge of each V1 represents the visual world located straight ahead, and as we move gradually forward on the sheet the point in visual space being represented moves out to the periphery. As one moves up in the sheet the represented region moves downwards in visual space, and as one moves downwards the area being represented moves upwards in visual space. More formally, the global organization of area 17 is an affine transformed map of the horizontal and vertical coordinates of the visual world. And it is also important to note that this kind of global affine transformed map is ubiquitous in the cortex and in other brain areas. Areas 1, 2, and 3 provide affine transformed maps of the body surface where activity represents tactile (rather than visual) stimulation. Other areas provide maps of other sensations and even of higher-order properties. Topographic organization is a fundamental representational property of the brain.
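The affine mapping described above amounts to a gain-and-offset transform of visual-field coordinates. The sketch below is purely illustrative; the gains and offsets are invented for the example, not measured values:

```python
def visual_to_cortex(azimuth_deg, elevation_deg):
    """Toy affine map from a visual-field location (degrees of azimuth
    and elevation) to a position on the flattened right-hemisphere
    sheet (arbitrary units). The negative gains capture the inversions
    described in the text: leftward and downward locations in the
    visual field map to forward and upward positions on the sheet."""
    ax, ay = -0.5, -0.5   # hypothetical gains (units per degree)
    bx, by = 10.0, 5.0    # hypothetical position of the foveal point
    return (ax * azimuth_deg + bx, ay * elevation_deg + by)

fovea = visual_to_cortex(0.0, 0.0)      # straight ahead
left20 = visual_to_cortex(-20.0, 0.0)   # 20 degrees into the left field
```

Because the transform is affine, neighboring points in visual space always land at neighboring points on the sheet, which is what makes the map "topographic."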

If we zoom in on a particular location in area 17 we can discover an even finer-grained representational structure (Hubel, 1988). If, for example, we zoom in on the region of the right area 17 topography that represents a point 10° to the left of straight ahead in the visual world, we find nearly a million neurons examining this area, each in slightly different ways. Within this small region we find that one half of the area represents information originating from the left eye and half from the right eye, with intermediate points lying on the borders of these two regions drawing information from both eyes simultaneously. At the center of each eye-specific representation we find a cluster of cells specialized for the representation of color-related information. And as we move around these color areas we find cells specialized for representing different patterns of light and dark (one cell might, for example, represent diagonally oriented black bars on a white field while a nearby group might respond to vertically oriented bars). The critical observation here is that at this more microscopic scale we also see a kind of topographic representation of the visual world, and this fine-scale representation (sometimes called an "ice-cube") is tiled throughout area 17. Each of these tiles is known as a cortical column. The result is a compact repetitive code that effectively lays about seven near-orthogonal dimensions of information onto a two-dimensional cortical sheet.

2 On their way to area 17 these signals pass through the thalamus, an important point but one we neglect here for clarity.

While we have taken the time to examine this fine-scale coding in area 17, it is equally important to note that this kind of topographic organization has been observed in many Brodmann areas, in essentially all areas that are well understood. Indeed, we even understand the developmental processes that generate these maps and how the statistical properties of the inputs to a cortical area lead to this tiled orthogonal representation, at least to a first approximation (Sanes et al., 2012). For this reason it is widely assumed that essentially all cortical areas have this kind of underlying organization. This is a fact often obscured by the kinds of low-resolution brain imaging employed in many neuroeconomic studies. The little tiles, or ice-cubes, lie below the limits of standard fMRI technologies. Typical fMRI averaging protocols make it impossible to see even the major organizational features of a brain area. It should be stressed, though, that the major organizational axes of Brodmann areas can be imaged using fMRI. Area 17 has been well imaged in this regard for over a decade, and topographic maps of choice-related areas in the parietal cortex have also been successfully generated.

Neuronal Stochasticity

If we zoom in again, much closer, we see an important feature of the cells that make up the ice cubes, one that should also not be overlooked. Recall that each of these neurons represents information in its action potential generation rate. A cell embedded in a column that represents diagonally oriented black bars located 10° to the right of straight ahead does so by firing action potentials at a high rate. As the visual stimulus 10° to the right of straight ahead deviates from that ideal object, the firing rate of this neuron declines smoothly (and nearby neurons more perfectly tuned to the new stimulus increase their firing rates). But it is very important to understand that this is not in fact a deterministic process but rather a stochastic one.

Neuronal action potential rates are typically described as Poisson-like stochastic processes (Aidley, 1998), the details of the specific distribution depending on cell type. If the distribution of intervals between action potentials for almost any cortical neuron is plotted, one observes a near-Poisson distribution. "Near-Poisson," because sequential action potentials are not completely independent, as is required for a true Poisson distribution. This is because neurons cannot fire two action potentials at exactly the same time, nor can two action potentials be fired one immediately after the other (for well-understood biophysical reasons). This imposes a truncation on the distribution that differs between different classes of neurons and leads to some heterogeneity in the stochastic structure of neuronal action potential rates.

Cortical neurons, to take the best-studied example, are quite homogeneous in their stochastic structure. It has been known for over 30 years (Tolhurst et al., 1983) that mean firing rate is roughly proportional to variance for these neurons and that they show a coefficient of variation (CV) of slightly more than 1. (The CV is the standard deviation divided by the mean, and would be 1 for a true Poisson process.) Extensive studies of the CV of neurons and of the time-windowed version of the CV, the Fano factor, have also been conducted (e.g., Churchland et al., 2010), providing even more detailed information on the stochastic properties of these elements.
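The refractory truncation described above can be simulated directly. The sketch below draws exponential (Poisson) waiting times, adds an absolute refractory period, and computes the CV of the resulting inter-spike intervals. The rate and refractory values are illustrative, and the slow rate fluctuations that push empirical cortical CVs slightly above 1 are deliberately left out of this toy model:

```python
import random

def simulate_isis(rate_hz, refractory_s, n=50000, seed=0):
    """Inter-spike intervals modeled as exponential (Poisson) waiting
    times plus an absolute refractory period that truncates the
    shortest intervals."""
    rng = random.Random(seed)
    return [refractory_s + rng.expovariate(rate_hz) for _ in range(n)]

def cv(xs):
    """Coefficient of variation: standard deviation divided by mean."""
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return sd / m

cv_poisson = cv(simulate_isis(50.0, refractory_s=0.0))      # close to 1
cv_truncated = cv(simulate_isis(50.0, refractory_s=0.002))  # pulled below 1
```

Adding the refractory period shifts every interval by a constant, raising the mean without changing the spread, so the CV falls below the pure-Poisson value of 1, exactly the truncation effect the text describes.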

Not all neurons, however, show this same degree of variability, suggesting that this is not an obligate property of the biophysics of neurons. The well-studied dopamine neurons discussed in Section 3 show CVs close to 0.6 (Bayer et al., 2007), and neurons in the brainstem have even been observed with CVs of less than 0.4, a spike rate that begins to appear clock-like in its regularity (e.g., Young et al., 1988).

The precise sources of this stochasticity in firing rate remain incompletely understood. Recall from earlier in this chapter that neurons experience a changing membrane voltage driven by the effects of neurotransmitters on ion channel conductances. When that voltage crosses a threshold, an action potential is generated. The higher it is above that threshold, the higher the rate of action potential generation. Mainen and Sejnowski (1995) recorded the time-varying voltage of a group of neurons while also recording when they generated action potentials. They then artificially injected that same voltage pattern again and again and observed that the action potentials were always generated at exactly the same time. From this they concluded that the transform relating voltage to spike rate is fully deterministic. This suggests that the stochasticity observed in neurons arises from stochasticity in their membrane voltages, and hence from the synaptic interactions that give rise to those membrane currents. The latest available data suggest that this stochasticity results from a mixture of thermal noise and the fact that membrane voltages are driven by very small numbers of atomic-scale events (see Glimcher, 2005 for a review). Current estimates suggest that when one synapse releases a neurotransmitter, that may lead to the opening of as few as two ion channels on the target neuron, which may in turn lead to as few as 10 charged atoms crossing through the two open ion channels (Hofer and Bonhoeffer, 2010; Holtmaat and Svoboda, 2009). These very small numbers suggest that the number of charged particles influencing membrane voltage is highly variable and reflects the local properties of the fluid immediately adjacent to the open channels. In sum, these data suggest that membrane voltages are driven by random processes involving far too few events for the law of large numbers to smooth them out. The result is assumed to be the stochasticity observed in neuronal firing rates.

This stochasticity in neuronal firing rates is important to neuroeconomists for several reasons. First and foremost, it places strong limits on the amount of information that a neuron can carry in its firing rate. If cortical neurons are bounded in their firing rates between 0 and 100 Hz but have a CV of 1.1, then one cannot think of them as mapping an infinite length of the real number line into their firing rate in any meaningful way. Being able to tell a rate of 50 Hz from a rate of 50.1 Hz when the standard deviation of firing rate is also 50 Hz is impractical in finite time. Thus the precision with which a single neuron can encode the real number line is very strongly bounded.
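This discrimination problem can be made concrete with a signal-detection-style calculation, expressing the separation of two mean rates in units of their CV-scaled standard deviation (the function below is a standard d-prime computation applied to the numbers used in the text, not a method from the chapter itself):

```python
def d_prime(rate_a, rate_b, cv=1.1):
    """Discriminability of two mean firing rates when the standard
    deviation scales with the mean (sd = CV * rate), as described
    for cortical neurons."""
    sd = cv * (rate_a + rate_b) / 2.0   # sd at the midpoint rate
    return abs(rate_a - rate_b) / sd

close_rates = d_prime(50.0, 50.1)  # essentially indistinguishable
far_rates = d_prime(20.0, 80.0)    # comfortably separable
```

A d-prime this small for the 50 vs. 50.1 Hz comparison means enormous numbers of independent observations (or of neurons) would be needed to tell the two rates apart, which is exactly the bound on single-neuron precision described above.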

Second, the stochasticity of single neurons links them almost unavoidably to the random utility models of McFadden (2005). If, as is widely assumed, neurons encode a utility-like object in their firing rates, then they must do so in a stochastic manner. Various mechanisms for encoding information across multiple neurons may limit the effective variance-to-mean-rate relationship, but the fundamentally stochastic nature of these neurons will unavoidably ally their analysis to McFadden's approach.

PLASTICITY AND MEMORY

The final topic area with which any neuroeconomist must have some familiarity is the set of processes that allow for the formation of new memories, or for the storage of any kind of information. In the early 1900s it was widely held that learning and memory could be viewed as a unitary object, probably broadly distributed in the primate brain. Over the course of the last century, however, that early view has been heavily revised. It is now widely known that the primate brain physically embodies a large number of learning and memory systems that are localized to specific modules in the cortex, in extra-cortical areas like the hippocampus and amygdala, and in the basal ganglia, amongst others (e.g., Squire and Kandel, 2009). Each of these modules appears specialized in some ways, and the memories that each of these modules encodes also appear to be specialized.

The amygdala, to take one well-known example, appears to play a critical role in emotional learning. Damage to the amygdala wipes out the ability of humans and animals to learn to fear events in the outside world. Importantly, it does not eliminate the ability to learn that events in the outside world are dangerous, but rather specifically targets learning related to the emotion of fear (see Chapter 12 for more on this). In a similar way, the basal ganglia is widely believed to serve as a module for learning the values of actions, a topic taken up in Chapters 15, 16 and 17.

Learning and storing information is thus known to be a strongly modularized process, with modules having overlapping functionality. This may be relevant to neuroeconomists because different learning and memory systems may store different values for the same good or action. This is a point taken up in Chapter 21.

Despite the heterogeneity of learning mechanisms at the modular level, learning mechanisms at the biochemical level turn out to be quite homogeneous, and the features of this mechanism impose some interesting constraints on how information (like learned preferences or beliefs) might be stored and accessed. At an algorithmic level, these mechanisms are arranged around what is known as the Hebbian synapse, named after the Canadian neurobiologist Donald Hebb (1949). Hebb's goal was to describe a computational mechanism that could link stimulus and response using only phenomena local to the synapse. His model was, in essence, designed to account for Pavlov's dog, which learns to salivate in response to a ringing bell if that bell is coincident with the delivery of food. He believed that a circuit accomplishing this could be constructed if, whenever a "presynaptic" and a "postsynaptic" neuron fired action potentials in close temporal proximity, the synapse between them was strengthened. His idea was that the neurons causing salivation would be firing action potentials whenever food was delivered. If a synapse carrying information about the bell being rung were activated at the same time that the salivation neurons were active, then his mechanism would strengthen the connection between the bell-encoding and salivation-triggering neurons, until the synapse between those two neurons was itself strong enough to activate the salivation system independently.
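Hebb's coincidence rule can be sketched in a few lines. The 0/1 activity flags, learning rate, and weight ceiling below are illustrative choices, not quantities from the chapter:

```python
def hebbian_update(w, pre, post, eta=0.5, w_max=1.0):
    """Hebb's rule: strengthen a synapse whenever presynaptic and
    postsynaptic activity coincide. pre and post are 0/1 activity
    flags; eta and w_max are illustrative parameters."""
    return min(w_max, w + eta * pre * post)

# Pavlov-style pairing: the bell input is repeatedly active at the
# same moment that food drives the salivation neuron, so the
# bell-to-salivation synapse grows until it reaches its ceiling and
# can drive the response on its own.
w = 0.0
for _ in range(3):
    w = hebbian_update(w, pre=1, post=1)
```

Note that the update uses only quantities available at the synapse itself (the two activity flags and the current weight), which is the locality constraint Hebb set out to satisfy.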

Subsequent to Hebb’s description of this algorithm,its biophysical instantiation was discovered by Bliss andLomo (1973). They found a class of synaptic ion channelthat allowed the atom calcium to enter the cell if andonly if the neuron upstream of them (the presynapticneuron) was releasing neurotransmitter (and was henceactive) and the neuron in which they were embedded(the postsynaptic neuron) had a high membrane voltage(and was hence also active). They found that under theseconditions this kind of co-activation led to a long-lastingstrengthening of the active synapse that they called long-term potentiation or LTP. Subsequent studies have estab-lished many features of the biochemistry of this processand have shown how entry of calcium leads to perma-nent increases in synaptic strength (Shepherd, 2004).

Subsequent studies have broadened our understanding of these classes of mechanisms in a number of ways. It is, for example, now known that a process for synaptic weakening also exists that complements the LTP process (Lynch et al., 1977; Wiig et al., 1996). It is also known that dopamine can participate in a process much like LTP, although in this particular case three events must co-occur for synaptic strengthening to be observed: both the pre- and postsynaptic neurons must be coactive and dopamine must be present. Details of that process, and how it appears to allow the values of actions and events to be learned, are discussed in more detail in Chapters 15 and 16.
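The three-factor condition can be expressed by gating the Hebbian coincidence term with a dopamine signal. Treating the three factors as a simple product of 0/1 flags is a common simplification, and the values below are illustrative:

```python
def three_factor_update(w, pre, post, dopamine, eta=0.1):
    """Dopamine-gated plasticity: the pre/post coincidence changes the
    weight only when dopamine is also present (all three factors are
    0/1 flags here; eta is an illustrative learning rate)."""
    return w + eta * pre * post * dopamine

w_without_da = three_factor_update(0.2, pre=1, post=1, dopamine=0)
w_with_da = three_factor_update(0.2, pre=1, post=1, dopamine=1)
```

Because dopamine multiplies the coincidence term, pre/post co-activity alone leaves the weight unchanged; strengthening occurs only when all three events coincide, as the text requires.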

To summarize, memory is not a unitary object but rather a distributed series of objects localized to many of the brain modules encountered in the earlier portions of this chapter. That means that there are a number of parallel mechanisms that might be expected to participate in economic processes like belief formation or the learning of preferences. The biochemical mechanism by which information is stored in the nervous system over periods of days or longer is a process of synaptic modification. If one thinks of the passage of an action potential from one neuron to another as a transfer function, memories are encoded through changes in those transfer functions. At a more algorithmic level, the details of the processes that impose these changes in the synaptic transfer function are also important. They are the product of fairly local computations, and the structure of those processes shapes our understanding of how learning occurs. This is a point developed in much more detail in Section 3.

SUMMARY AND CONCLUSIONS

For an economist interested in neuroscience there are two central messages about the foundations of neuroscience. The first is that there seem to be clear and consistent mappings between events at the neural level and events at the behavioral level. The second, which follows from the first, is that the details of neurobiological function provide valuable constraints for economic theories. What this points out, in turn, is the critical need for basic neurobiological literacy amongst neuroeconomists.

References

Aidley, D.J., 1998. The Physiology of Excitable Cells. Cambridge University Press.
Bayer, H.M., Lau, B., Glimcher, P.W., 2007. Statistics of midbrain dopamine neuron spike trains in the awake primate. J. Neurophysiol. 98, 1428–1439.
Bear, M.F., Connors, B.W., Paradiso, M.A., 2007. Neuroscience: Exploring the Brain. Lippincott Williams & Wilkins.
Bliss, T.V., Lomo, T., 1973. Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. J. Physiol. 232, 331–356.
Breedlove, S.M., Watson, N.V., Rosenzweig, M.R., 2010. Biological Psychology: An Introduction to Behavioral, Cognitive, and Clinical Neuroscience. Sinauer Associates, Inc. Publishers.
Brodmann, K., Garey, L., 1999. Brodmann's Localisation in the Cerebral Cortex. Imperial College Press (Distributed by World Scientific Pub.).
Churchland, M.M., Yu, B.M., Cunningham, J.P., Sugrue, L.P., Cohen, M.R., Corrado, G.S., Shenoy, K.V., 2010. Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nat. Neurosci. 13, 369–378.
Cooper, J.R., Bloom, F.E., Roth, R.H., 2003. The Biochemical Basis of Neuropharmacology. Oxford University Press.
Felleman, D.J., Van Essen, D.C., 1991. Distributed hierarchical processing in the primate cerebral cortex. Cereb. Cortex. 1, 1–47.
Ferrier, D., 1890. The Croonian Lectures on Cerebral Localisation. Smith, Elder and Co.
Gazzaniga, M.S., 2009. The Cognitive Neurosciences. MIT Press.
Glimcher, P.W., 2005. Indeterminacy in brain and behavior. Annu. Rev. Psychol. 56, 25–56.
Hebb, D.O., 1949. The Organization of Behavior; A Neuropsychological Theory. Wiley.
Hille, B., 2001. Ion Channels of Excitable Membranes. Sinauer.
Hodgkin, A.L., Huxley, A.F., 1952. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500–544.
Hofer, S.B., Bonhoeffer, T., 2010. Dendritic spines: the stuff that memories are made of? Curr. Biol. 20, R157–159.
Holtmaat, A., Svoboda, K., 2009. Experience-dependent structural synaptic plasticity in the mammalian brain. Nat. Rev. Neurosci. 10, 647–658.
Hubel, D.H., 1988. Eye, Brain, and Vision. Scientific American Library (Distributed by W.H. Freeman).
Huettel, S.A., Song, A.W., McCarthy, G., 2008. Functional Magnetic Resonance Imaging. Sinauer Associates.
Kaas, J.H., 2007. Evolution of Nervous Systems: A Comprehensive Reference. Elsevier Academic Press.
Kandel, E.R., Schwartz, J.H., Jessell, T.M., Siegelbaum, S.A., Hudspeth, A.J., 2012. Principles of Neural Science (fifth edition). McGraw-Hill Professional.
LeDoux, J.E., 1996. The Emotional Brain: The Mysterious Underpinnings of Emotional Life. Simon & Schuster.
Lynch, G.S., Dunwiddie, T., Gribkoff, V., 1977. Heterosynaptic depression: a postsynaptic correlate of long-term potentiation. Nature. 266, 737–739.
Mainen, Z.F., Sejnowski, T.J., 1995. Reliability of spike timing in neocortical neurons. Science. 268, 1503–1506.
McFadden, D.L., 2005. Revealed stochastic preference: a synthesis. Econ. Theor. 26, 245–264.
Nicholls, J.G., 2012. From Neuron to Brain. Sinauer Associates.
Sanes, D.H., Reh, T.A., Harris, W.A., 2012. Development of the Nervous System. Academic Press.
Shepherd, G.M., 2004. The Synaptic Organization of the Brain. Oxford University Press.
Squire, L.R., et al., 2012. Fundamental Neuroscience (fourth edition). Elsevier/Academic Press.
Squire, L.R., Kandel, E.R., 2009. Memory: From Mind to Molecules. Roberts & Co.
Tolhurst, D.J., Movshon, J.A., Dean, A.F., 1983. The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Res. 23, 775–785.
Van Essen, D.C., 1997. A tension-based theory of morphogenesis and compact wiring in the central nervous system. Nature. 385, 313–318.
Wiig, K.A., Cooper, L.N., Bear, M.F., 1996. Temporally graded retrograde amnesia following separate and combined lesions of the perirhinal cortex and fornix in the rat. Learn. Mem. 3, 313–325.
Young, E.D., Robert, J.M., Shofner, W.P., 1988. Regularity and latency of units in ventral cochlear nucleus: implications for unit classification and generation of response properties. J. Neurophysiol. 60, 1–29.