Simulating Human Single Motor Units using Self-Organizing Agents
Onder Gurcan∗†, Carole Bernon†, Kemal S. Turker‡, Jean-Pierre Mano§, Pierre Glize† and Oguz Dikenelli∗
∗Ege University, Computer Engineering Department, Izmir, Turkey
†Paul Sabatier University, Institut de Recherche en Informatique de Toulouse, Toulouse, France
‡Koc University, School of Medicine, Rumelifeneri Yolu, 34450 Sariyer, Istanbul, Turkey
§Upetec, Ramonville St Agne, Toulouse, France
Abstract—Understanding the functional synaptic connectivity of the human central nervous system is one of the holy grails of neuroscience. Due to the complexity of the nervous system, it is common to reduce the problem to smaller networks such as motor unit pathways. In this sense, we designed and developed a simulation model that learns to act in the same way as human single motor units by using findings on human subjects. The developed model is based on self-organizing agents whose nominal and cooperative behaviors are based on the current knowledge of biological neural networks. The results show that the simulation model generates functionality similar to the observed data.
Keywords-self-wiring; biological neural networks
I. INTRODUCTION
Understanding the architecture of the brain and mind is
one of the grand challenges accepted by the UK Computing
Research Committee (UKCRC) so far [1]. Despite
its importance, our understanding of how these highly
complex, highly parallel interconnected systems of neurons
and synapses work is still very basic. However, knowledge of the
synaptic connections between neurons is a key prerequisite
to understand the operation of the nervous system. While
the anatomy of these connections can be obtained by histo-
chemical methods, their functional connections can only be
determined using electrophysiological recordings. Compared
with the studies on human subjects, interpretations of the
connections of the activated fibers and neurons are straight-
forward and easy in animal experiments since it is possible
to make direct measurements. Although direct experiments
cannot be performed in humans, various indirect experiments
are performed to explore functional connections. However,
there are still gaps in our understanding of these wirings due
to technical difficulties. In addition, there is no satisfactory
theory on how these unknown parts of central nervous
system (CNS) operate. Therefore, neuroscientists still rely
upon the knowledge that is obtained in animal studies.
Thus, human studies revealing functional connectivity at the
network level are still missing.
The CNS can be understood in terms of complex networks.
Characterizing the structure and function of complex networks
[2], [3], [4] is an interdisciplinary approach called network
science [5]. Recent collaborative studies in network science
and neuroscience show that the CNS has features of
complex networks, such as small-world topology, highly
connected hubs and modularity [6]. In another work, it
has been shown that an initially random wiring diagram
can evolve to a functional state characterized by a small-
world topology of the most strongly connected nodes and
by self-organized critical dynamics [7]. Thus, it seems that
the neural wiring problem can be reduced to the network
formation problem in which each node has discretion in
autonomously forming its links in the network relationship
[8]. Furthermore, it is widely accepted that agent-based
modeling and simulation coordinated by self-organization
and emergence mechanisms is an effective way for simulat-
ing biological systems [9] since it is possible to associate
different elements of a biological process with autonomous
computing entities called agents [10].
In this sense, we have designed and developed a self-organizing
agent-based model that learns the dynamics of the CNS over time.
The model uses temporal data collected from
human subjects in order to estimate unknown connections
and aims to reproduce what is observed in natural situations.
The model has a number of properties for simulating real
biological neural networks. First, dynamic activity and spik-
ing are modeled in the individual neuron (cell) scale. This
scale was chosen since it represents the best compromise
between dynamics, complexity and observability for simulat-
ing the functional connectivity of neural networks [11]. The
dynamics of individual neurons are modeled as autonomous
agent behaviors. Driven by these autonomous behaviors,
an artificial neural network emerges through modification,
recruitment and dismissal of neurons. Secondly, the effect
of a spike on a target neuron is defined as a temporal
membrane potential change in response to the influence
of a source neuron that connects to it. That influence is
not instantaneous, and is delayed by the physical distance
between neurons (the speed of transmission is assumed
the same for all connections). Finally, to more realistically
simulate experimental conditions the model is initialized
with known connectivities, dynamic parameters and noise.
The remainder of this paper is organized as follows. The
next section gives background information about the CNS and
explains the exploration problem of synaptic functional connectivity.
In Section 3, the agent-based model is introduced
and explained in detail. Section 4 gives an experimental
2012 IEEE Sixth International Conference on Self-Adaptive and Self-Organizing Systems
978-0-7695-4851-7/12 $26.00 © 2012 IEEE
DOI 10.1109/SASO.2012.18
frame and shows its results. The discussion of the results
and the approach is given in Section 6. Related work is
given in Section 7 and, finally, Section 8 concludes the paper.
II. SYNAPTIC FUNCTIONAL CONNECTIVITY PROBLEM
A. The Central Nervous System (CNS)
The nervous system is a network of neurons that com-
municate information about an organism’s surroundings and
itself. A neuron is an excitable cell that processes and
transmits information by electrochemical signaling. A typ-
ical neuron can be divided into three functionally distinct
parts: dendrites, soma and axon. Roughly, the dendrites
play the role of the input device that collects synaptic
potentials from other neurons and transmits them to the
soma. The soma is the central processing unit that performs
an important non-linear processing step (integrate & fire
model): if the total synaptic potential exceeds a certain
threshold in a certain time interval (temporal integration)
and makes the neuron membrane potential depolarize
to the threshold, then a spike with 0.5 ms duration and
100 mV amplitude is generated [12]. After emitting the
spike, the neuron membrane potential is reset to a lower
value (after hyper-polarization - AHP) from where it starts
to move towards the threshold again if there is sufficient
input current. Meanwhile, the spike is taken over by the
output device, the axon, which delivers the spike to other
neurons through synapses. A synapse is a junction between
two neurons. Most synapses occur between an axon terminal
of one (presynaptic) neuron and a dendrite of or the soma of
a second (postsynaptic) neuron, or between an axon terminal
and a second axon terminal. When a spike transmitted by
the presynaptic neuron reaches a synapse, a postsynaptic
potential (PSP) occurs on the postsynaptic neuron for 4.0 ms
(PSP duration). This PSP can either increase (excitatory PSP
- EPSP) or decrease (inhibitory PSP - IPSP) a postsynaptic
neuron’s ability to generate a spike. The PSP for a unitary
synapse can range from 0.07 mV to 0.60 mV, but normally
it is between 0.10 and 0.20 mV (see Figure 4 in [13]). The
time for a spike to reach the postsynaptic neuron involves the
axonal delay and the synaptic processing time. The axonal
delay accounts for the forward-propagation of the spike to
the synapse through the axon, while the synaptic processing
time accounts for the conduction of the spike along the
dendritic tree toward the soma.
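As a concrete illustration, the integrate & fire behavior described above can be sketched as follows. This is a minimal sketch under stated assumptions: the class and parameter names are ours, the explicit threshold value is illustrative, PSP decay within the temporal-integration window is ignored, and the 10 mV AHP depth reported later in this section is used as the reset depth.

```python
# Minimal integrate-and-fire sketch of the soma's behavior described
# above. Names and the explicit threshold value are illustrative
# assumptions; PSP decay within the temporal-integration window is
# ignored for brevity.

AHP_AMPLITUDE_MV = 10.0   # reset depth below threshold after a spike


class Neuron:
    def __init__(self, threshold_mv=-55.0):
        self.threshold = threshold_mv
        # start at the after-hyperpolarization (AHP) level
        self.potential = threshold_mv - AHP_AMPLITUDE_MV
        self.spike_times = []

    def receive_psp(self, amplitude_mv, t_ms):
        """Apply a unitary PSP (positive = EPSP, negative = IPSP);
        unitary amplitudes normally lie between 0.10 and 0.20 mV."""
        self.potential += amplitude_mv
        if self.potential >= self.threshold:
            self.fire(t_ms)

    def fire(self, t_ms):
        # emit a spike and reset to the AHP level
        self.spike_times.append(t_ms)
        self.potential = self.threshold - AHP_AMPLITUDE_MV
```

Feeding such a neuron a steady stream of EPSPs makes it depolarize to threshold, fire, and reset, which is the nominal behavior the agents in Section III reproduce.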
CNS pathways basically utilize three distinct neuron
types: sensory neurons, interneurons and motoneurons. Sen-
sory neurons are responsible for converting external stimuli
from the environment into internal stimuli. Unlike other
neurons, whose inputs come from other neurons, sensory
neurons are activated by physical modalities such as light,
sound, and temperature. Interneurons form a connection
between other neurons and take part in the long loop
of the reflex events to maintain the postural balance of
the subject. Interneurons are most of the time ready to
Figure 1. Tonic firing of a neuron (modified from [14]). During tonic firing, a neuron receives continuous current and hence its membrane potential continuously rises to the firing threshold, making it fire spontaneous spikes (a). The time intervals between consecutive spikes are called interspike intervals (ISI) and the instantaneous frequency of a spike is calculated as f = 1000/ISI. While an EPSP induces a phase-forward movement of the next spike (and thus increases the instant frequency) (b), an IPSP delays the occurrence of the next spike (and thus decreases the instant frequency) (c).
react to disturbances that could evoke a reflex [15], [16],
[17]. Motoneurons project their axons outside the CNS and
control muscles. Motoneurons are tonically active and are
affected by neurons connected to them. Hundreds of EPSPs
and IPSPs from sensory neurons and interneurons arrive
at different times onto a motoneuron. This busy traffic of
inputs creates the 'synaptic noise' on the membrane of the
motoneuron. As a consequence of this noise, spikes occur
at nearly random times (Figure 1). In several intracellular
studies of tonically active motoneurons (e.g., [18], [19]), it
has been reported that the amplitude of AHP is 10 mV.
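The ISI-to-frequency relation of Figure 1 (f = 1000/ISI) amounts to the following short sketch; the function name is ours, not from the paper.

```python
def instantaneous_frequencies(spike_times_ms):
    """Instantaneous frequency of each spike in Hz: f = 1000 / ISI,
    where ISI is the interval (in ms) to the preceding spike."""
    return [1000.0 / (t - s)
            for s, t in zip(spike_times_ms, spike_times_ms[1:])]
```

For example, spikes at 0, 100 and 150 ms give instantaneous frequencies of 10 Hz and 20 Hz: an EPSP that advances a spike shortens the ISI and so raises this value, while an IPSP lowers it.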
B. Exploration of Synaptic Functional Connectivity
Wiring of neurons is a key prerequisite to understand the
operation of the nervous system. While the anatomy of these
connections can be obtained by histochemical methods,
their functional connections can only be determined using
electrophysiological recordings. The functional connection
of selected sensory neurons or corticospinal fibers to mo-
toneurons can be studied directly in animal preparations.
Compared with the studies on human subjects, experiments
on animals have advantages since precise stimulation of selected
nerve fibers / neurons and intracellular recordings
from exact sites are possible. Therefore, interpretations of
the connections of the stimulated fibers and neurons are
straightforward and easy in animal experiments. However,
studies on experimental animals also have limitations. To
prepare animals for experiments, they have to be reduced
(anesthetized, decerebrated, sliced, etc.). It is well recognized
that such reductions have severe effects on synaptic
potentials. Furthermore, one also needs to remember that
active involvement of the supra-spinal pathways to discharges
of motoneurons is either completely suppressed or altered
significantly in these reduced animal preparations. There-
fore, although experiments on animals deliver the influence
of a stimulated fiber or neuron to another fiber or neuron
directly, one needs to accept the results of such experiments
with the experimental settings in mind. In particular, describ-
ing findings on reduced animal preparations as ‘functional
connections of neurons’ should be done with reservations.
Although such direct experiments cannot be performed
in humans, any experimental findings on conscious human
subjects are likely to be more functional since they are not
influenced by anesthetics or any other reduction processes
and since the activity of supra-spinal centers is maintained.
To study the functional connection of neurons in human sub-
jects it has been customary to use stimulus-evoked changes
in the discharge probability and rate of one or more motor units in response to stimulation of a set of sensory neurons
(peripheral afferents) or corticospinal fibers. These are the
most common ways to investigate the workings of peripheral
and central pathways in human subjects. Although these are
indirect methods of studying the human nervous system, they
are nevertheless extremely useful as there is no other method
available yet to record synaptic properties directly in human
subjects.
Motor units are composed of one or more motoneurons
and the corresponding muscle fibers they innervate. When
motor units are activated, all of the muscle fibers they
innervate contract. Likewise, stimulation of a sensory-motor
peripheral nerve fiber produces similar contractions. Basi-
cally, a sensory-motor peripheral nerve fiber is composed of
the following components (from largest to narrowest): group
Ia and group Ib sensory axons, alpha motor axons; group II,
group III and group IV sensory axons and gamma motor
axons. A nerve fiber’s threshold to electrical stimulation is
inversely proportional to the diameter of its axon: larger
axons are more sensitive to electrical stimulation. In this
sense, while in a low-threshold stimulation experiment only
group Ia and group Ib sensory axons are activated, in a
high-threshold stimulation experiment all the sensory axons
can be activated. Additionally, it is known that group Ia
afferents make monosynaptic connections, group Ib sensory
neurons make disynaptic connections and group II sensory
neurons make polysynaptic connections with the alpha mo-
tor neurons. The output from the motor units is through
the motoneurons, and is measured by reflex recordings
from the muscle. However, most of the synaptic inputs to
motoneurons from sensory neurons do not go directly to
motoneurons, but rather to interneurons that synapse with
the motoneurons. The synaptic connectivity of these
interneurons is still not fully known.
C. Functional Connectivity Analysis of Motor Units
The ability to record motor unit activity in human subjects
has provided a wealth of information about the neural control
of motoneurons, and in particular has allowed the study of
how reflex and descending control of motoneurons change as
a function of task, during fatigue and following nervous sys-
tem injury. Although synaptic potentials cannot be directly
recorded in human motoneurons, their characteristics can
be inferred from measurements of the effects of activating
a set of peripheral or descending fibers on the discharge
probability of one or more motoneurons. Recently, neuroscientists
have been assessing these effects by compiling a
peristimulus frequencygram (PSF) that uses the instantaneous
discharge rates of single motor units to estimate the
synaptic potentials produced by afferent stimulation (for a
review see [20]). The PSF plots the instantaneous discharge rate values
against the time of the stimulus and is used to examine
reflex effects on motoneurons, as well as the sign of the net
common input that underlies the synchronous discharge of
human motor units (Figure 2a). The instantaneous frequency
values comprising the PSF should not necessarily be affected
by previous (prestimulus) activity at any particular time.
However, since the discharge frequency of a motoneuron
reflects the net current reaching the soma [21], any signifi-
cant change in the poststimulus discharge frequency should
indicate the sign and the profile of the net input.
To determine significant deflections, the cumulative sum
(CUSUM) of the PSF record is used (Figure 2b). The CUSUM
is calculated by subtracting the mean pre-stimulus baseline
from the values in each bin and integrating the remainder
[22]; PSP-induced effects are considered significant if the
post-stimulus CUSUM values exceed the maximum pre-
stimulus CUSUM deviation from zero (i.e. the error box
[23], [24], indicated by the horizontal lines in Figure 2b).
As can be seen from this figure, there is an early and long-
lasting excitation (LLE) indicated by the increased frequency
from about 40 ms poststimulus to about 100 ms. After this
LLE there is a period of long-lasting inhibition (LLI) going
from about 100 ms poststimulus to about 300 ms. However,
since the subject is able to voluntarily change the discharge
rate of his/her motor unit from about 200 ms after stimulation,
events later than 200 ms cannot be considered reflex events. Only
poststimulus discharge rates earlier than 200 ms can give
exact information about the network of the motor unit.
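The CUSUM construction and the error-box significance test described above can be sketched as follows. This is a sketch under stated assumptions: the bin values stand for PSF discharge-rate values per time bin, and the function names are ours.

```python
def psf_cusum(bin_values, n_prestim_bins):
    """CUSUM of a PSF record: subtract the mean pre-stimulus baseline
    from the value in each bin and integrate the remainder."""
    baseline = sum(bin_values[:n_prestim_bins]) / n_prestim_bins
    cusum, total = [], 0.0
    for v in bin_values:
        total += v - baseline
        cusum.append(total)
    return cusum


def significant_deflection(cusum, n_prestim_bins):
    """A PSP-induced effect is taken as significant when a post-stimulus
    CUSUM value leaves the 'error box', i.e. exceeds the maximum
    pre-stimulus CUSUM deviation from zero."""
    error_box = max(abs(v) for v in cusum[:n_prestim_bins])
    return any(abs(v) > error_box for v in cusum[n_prestim_bins:])
```

A sustained post-stimulus rise in discharge rate then shows up as a steadily climbing CUSUM that escapes the error box, which is how the LLE and LLI windows are read off Figure 2b.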
III. SIMULATING WITH SELF-ORGANIZING AGENTS
To tackle the synaptic connectivity problem, we
have selected agent-based simulation of the system [8]
with self-organizing agents using the Adaptive Multi-Agent
Systems (AMAS) approach [25].
A. The AMAS Approach
In the AMAS approach, the system is composed of a
dynamic set of agents A = {a0, a1, ...}. In this
approach, a system is said to be functionally adequate if it
produces the function for which it was designed, according
to the viewpoint of an external observer who knows its
finality. To reach this functional adequacy, it has been proven
that each autonomous agent ai ∈ A must keep relations as
cooperative as possible with its social (other agents) and
Figure 2. PSF and PSF-CUSUM from motoneuron responses to single PSP recorded from human soleus muscle.
physical environment [25], [26]. The configuration of inputs
coming from other agents (and physical environment) leads
ai to produce a new decision. A non-desired configuration
of inputs causes a non-cooperative situation (NCS) to occur.
ai is able to memorize, forget and spontaneously send
feedbacks related to desired or non-desired configurations
of inputs coming from other agents. We denote the set of
feedbacks as F and model sending a feedback fa ∈ F using
an action of the form send(fa, R) where a is the source of fa
and R ⊂ A \ {a} is the set of receiver agents. A feedback fa ∈ F can
be about increasing the value of the input (fa↑), decreasing
the value of the input (fa↓) or informing that the input is
good (fa≈).
When a feedback about a NCS is received by an agent,
at any time during its lifecycle, it acts in order to avoid or
overcome this situation [27] for coming back to a coopera-
tive state. This provides an agent with learning capabilities
and makes it constantly adapt to new situations that are
judged harmful. In case a NCS cannot be overcome by an
agent, it keeps track of this situation by using a level of
annoyance value ψfa where fa is the feedback about this
NCS. When a NCS is overcome, ψfa is set to 0, otherwise
it is increased by 1. The first behavior an agent tries to
adopt to overcome a NCS is a tuning behavior in which
it tries to adjust its internal parameters. If this tuning is
impossible because a limit is reached or the agent knows
that a worse situation will occur if it adjusts in a given way,
it may propagate the feedback (or an interpretation of it)
to other agents that may handle it. If such a behavior of
tuning fails many times and ψfa crosses the reorganization
annoyance threshold ψreorganization (reorganization condi-
tion), an agent adopts a reorganization behavior in which
it tries to change the way in which it interacts with others
(e.g., by changing a link with another agent, by creating a
new one, by changing the way in which it communicates
with another one and so on). In the same way, for many
reasons, this behavior may fail to counteract the NCS and a
last kind of behavior may be adopted by the agent: evolution
behavior. This is detected when ψfa crosses the evolution
annoyance threshold ψevolution (evolution condition). In the
evolution step, an agent may create a new one (e.g., for
helping itself because it found nobody else) or may accept to
disappear (e.g., it was totally useless and decides to leave the
system). In these two last levels, propagation of a problem
to other agents is always possible if a local processing is
not achieved. The overall algorithm for suppressing a NCS
by an agent is given in Algorithm 1.
Algorithm 1 Cooperative behaviors of an agent ai upon
receiving a feedback faj about a NCS of the agent aj.
1: if ψfaj < ψreorganization // tuning condition
2:   if tuning behavior succeeds ψfaj ← 0
3:   else ψfaj ← ψfaj + 1 and send(faj, R) endif
4: else if ψfaj < ψevolution // reorganization condition
5:   if reorganization behavior succeeds ψfaj ← 0
6:   else ψfaj ← ψfaj + 1 and send(faj, R) endif
7: else // evolution condition
8:   if evolution behavior succeeds ψfaj ← 0
9:   else send(faj, R) endif
10: endif
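Algorithm 1's escalation from tuning to reorganization to evolution, driven by the annoyance counter ψ, can be sketched in Python as follows. The concrete threshold values and the callback signatures are our assumptions for illustration, not values from the paper.

```python
# Hypothetical thresholds; the paper leaves their values unspecified.
PSI_REORGANIZATION = 3
PSI_EVOLUTION = 6


class Agent:
    def __init__(self):
        self.psi = {}  # annoyance level per feedback

    def on_feedback(self, f, try_tuning, try_reorganization,
                    try_evolution, propagate):
        """Suppress the NCS signalled by feedback f, escalating from
        tuning to reorganization to evolution as ψ grows."""
        psi = self.psi.get(f, 0)
        if psi < PSI_REORGANIZATION:          # tuning condition
            if try_tuning(f):
                self.psi[f] = 0
            else:
                self.psi[f] = psi + 1
                propagate(f)                  # send(f, R)
        elif psi < PSI_EVOLUTION:             # reorganization condition
            if try_reorganization(f):
                self.psi[f] = 0
            else:
                self.psi[f] = psi + 1
                propagate(f)
        else:                                 # evolution condition
            if try_evolution(f):
                self.psi[f] = 0
            else:
                propagate(f)
```

Repeated tuning failures thus push the agent past ψreorganization into reorganization, and further failures past ψevolution into evolution, exactly the three-level escalation the text describes.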
The AMAS approach is a proscriptive one because each
agent must first of all anticipate, avoid and repair a NCS.
Thus, the designer, according to the problem to be solved,
has: 1) to determine what an agent is, then 2) to define
the nominal behavior which represents an agent’s behavior
when no NCS exists, then 3) to deduce the NCSs the agent
can be faced with, and finally 4) to define the cooperative
behavior (tuning, reorganization and/or evolution) the agent
has to perform when faced with each NCS in order to come
back to a cooperative state. This is the process adopted in
the remainder of this section. Moreover, to build a real
self-adaptive system, the designer has to keep in mind that agents
only have a local view of their environment and that they do
not have to base their reasoning on the collective function
that the system must achieve.
B. Identification of agents and their nominal behaviors
We model the agent-based simulation model Sim,
capturing all design decisions taken based on the
AMAS theory as Sim = (G, ν) where G is the agent-based
neural network and ν is the external observer agent of G.
1) Agent-based Neural Network Model: We model the
neural network as a dynamic directed graph G(t) = (N(t), S(t))
where N(t) ⊂ A denotes the time-varying
neuron agent (vertex) set and S(t) denotes the time-varying
synapse (edge) set. The set of excitatory (respectively
inhibitory) neuron agents at time t is denoted by N+(t)
(resp. N−(t)) where N(t) = N+(t) ∪ N−(t). One nominal
behavior of neuron agents is spike firing. A neuron agent
n fires a spike when its membrane potential pn crosses
the firing threshold. We define N spike(t) to be the set of
neuron agents that fired their last spike at time t. We also
denote tn for indicating the last spike firing time of the
neuron agent n where n ∈ N spike(tn). When a neuron agent
n fires a spike, this spike is emitted through its synapses
to the postsynaptic neuron agents. We denote the set of
presynaptic neighbors of a neuron agent n at time t as
Pren(t) = {m ∈ N (t)|{m,n} ∈ S(t)} and the set of
postsynaptic neighbors of a neuron agent n ∈ N at time t
as Postn(t) = {k ∈ N(t) | {n, k} ∈ S(t)}. Apart from pre-
and postsynaptic neighbors, a neuron agent has also another
type of neighborhood which contains all the neuron agents
it has contacted during its lifetime, even the synapses in
between removed after some time. Formally, m is a friend agent
of a neuron agent n at time t if there exists
some time t′ ≤ t such that m ∈ Pren(t′) or m ∈ Postn(t′).
The set of all friends of n at time t is denoted by Friendn(t).
The set of presynaptic neuron agents that contribute to
the activation of a postsynaptic neuron n at time tn is
modeled as Contn(tn) where Contn(tn) ⊂ Pren(tn) and
tn > tk > tn − 4.0 for all k ∈ Contn(tn). Lastly, a neuron
agent has to know the friends that were activated temporally
closest to its own activation. Formally, m is a temporally closest friend
agent of a neuron agent n at time t if there exists some time
t′ < tn ≤ t such that m ∈ Friendn(t), t′ = tm and there is
no t′′ with t′ < t′′ < tn such that t′′ = tk for some
k ∈ Friendn(t). The set of all temporally closest friends of
n at time t is denoted by Tempn(t).
A synapse {n,m} conducts a spike from n to m through
the interval [tn, t′] if n ∈ N spike(tn), and t′ = tn + dnm
where dnm is the delay for delivering the spike from n to
m. We denote the spike delay as dnm = dax_nm + dden_nm where
dax_nm is the axonal delay of {n,m} and dden_nm is the synaptic
processing time. We assume that dden_nm = 0.5 ms [28]. dax_nm,
on the other hand, may change depending on the length
and type of the axon. We also say that a synapse {n,m}
potentiates (respectively depresses) the membrane potential
p of m with a synaptic strength η at time t′ where 0.07 ≤
|η| ≤ 0.60 during the PSP duration dpsp = 4.0 if n ∈ N+
(resp. n ∈ N−) and m is not removed at any time during
the interval [t′, t′ + dpsp].
We model the set of sensory neuron agents at time t as
K(t) ⊂ N+(t) where for all n ∈ K(t), we have Pren(t) =
∅ and Postn(t) ≠ ∅. Since Pren(t) = ∅, they have a
nominal action of the form activate() triggered by the viewer
agent (see next subsection) in order to be able to fire.
We model the set of motoneuron agents at time t as
M(t) ⊂ N+(t) where for all n ∈ M(t), we have
Pren(t) ≠ ∅ and Postn(t) = ∅. In the current model there
is only one n ∈ M(t) since the focus is on single motor
units. Apart from the integrate & fire nominal behavior, a
motoneuron agent n continuously increases its membrane
potential pn with Δp in order to imitate tonic firing behavior.
Finally, we model the set of interneuron agents as I ⊂ N
where for all n ∈ I, we have Pren ≠ ∅, Postn ≠ ∅ and
dax = 0 since their axonal delays are extremely low.
2) The viewer agent: The viewer agent ν is designed
to trigger the recruitment of synaptic connections and the
functional connectivity of the agent-based neural network
G. It acts like a surface electrode and gives inputs to G
by coordinating random activation of all sensory neuron
agents s ∈ K. Meanwhile, it monitors and records the
outputs of the motoneuron agent m that take place over
time to compare them with reference data. This comparison
takes place between the latency of the beginning (lbegin)
and the end (lend) of the network. These parameters are
given to ν initially, and the other outputs of m are regarded
as its unaffected behavior. When an output is observed by
ν at time t, first the latency of this output (lcurrent) is
calculated (Algorithm 2, line 1), then the observed output
is compared to the reference output observed at the same
latency (Algorithm 2, line 4) with a tolerance of τ , and
finally an appropriate feedback is sent to m (Algorithm 2,
lines 5, 6 and 7). According to this comparison, ν makes
assessments about the behavior of m for detecting if it is
functionally adequate to the real motoneuron or not. If the
observed output is generated between lbegin and lend where
lend > lbegin > 0, ν sends a feedback f ∈ Fif to m.
Otherwise, ν does not send any feedback to m.
Algorithm 2 Response of viewer agent ν upon monitoring
an output at time t from motoneuron m where tsti is the
last stimulation time of sensory neurons.
1: lcurrent ← (t − tsti)
2: if lbegin ≤ lcurrent ≤ lend
3:   generate the simulated PSF
4:   calculate the difference ε of PSFs at time lcurrent
5:   if ε > τ then send(fν↓, m)
6:   else if ε < −τ then send(fν↑, m)
7:   else send(fν≈, m) endif
8: endif
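Under stated assumptions, the viewer's decision rule of Algorithm 2 can be sketched as follows: the two PSFs are given as latency-to-frequency functions, the tolerance is read as a symmetric band ±τ around the reference, and the returned strings stand in for fν↑, fν↓ and fν≈. All names are ours.

```python
def viewer_response(t, t_sti, l_begin, l_end,
                    simulated_psf, reference_psf, tau):
    """Compare the simulated output at latency (t - t_sti) against the
    reference PSF and choose a feedback, or None outside the window."""
    l_current = t - t_sti                 # latency of this output
    if not (l_begin <= l_current <= l_end):
        return None                       # outside [l_begin, l_end]: no feedback
    eps = simulated_psf(l_current) - reference_psf(l_current)
    if eps > tau:
        return "decrease"                 # f_v down: firing too fast
    if eps < -tau:
        return "increase"                 # f_v up: firing too slow
    return "good"                         # f_v good: within tolerance
```

Outputs earlier than lbegin or later than lend are thus treated as the motoneuron's unaffected behavior and draw no feedback.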
Additionally, ν is responsible for stopping the simulation
run when the evolution of the neural network ends. ν detects
this situation by evaluating the output of m. Formally, G is
said to be stable at time t′ if for all t1, t2 ∈ R+ where
t2 > t1 ≥ t′, every feedback f sent during [t1, t2] satisfies f ∈ Fif≈.
C. Identification of Non-Cooperative Situations
The proposed agent-based neural network model, in which
neuron agents and synapses can be inserted or removed, is
subject to NCSs. All NCSs are identified by analyzing the
possible bad situations of real human single motor units.
1) Bad Temporal Integration: The temporal integration
of the inputs provided by synapses affects what a neuron
agent does. For an interneuron agent n, these inputs affect
whether n can fire or not, while for a motoneuron agent
m they affect the frequency of the spikes of m. Sensory
neurons, however, never detect this situation since they do
not need input neurons in order to fire. Consequently, when
this temporal integration is bad, a neuron agent either cannot
fire or has a bad firing behavior. When such a situation
is detected at time t, the neuron agent should improve its
existing inputs or should search for new inputs with the right
timing. To do so, it sends a temporal integration increase
or decrease feedback (f ∈ Fti↑ or f ∈ Fti↓) to some or
all of its neighbor neuron agents. Otherwise, the temporal
integration is good and a temporal integration good feedback
(f ∈ Fti≈) is sent to Pren(t).
An interneuron agent detects a bad temporal integration
NCS at time t if during the interval [t − Δtmax, t] it did not
generate any spikes where Δtmax is the maximum time slice
an interneuron agent can stay without spiking. However, the
motoneuron agent m is unable to detect the same situation
by itself. It detects when it receives an instant frequency
feedback fν ∈ Fif at time t about its last spike at time
tm (see Section III-C2). Since this spike is related to the
temporal integration of its own membrane potential and its
presynapses, there are two cases: (1) Contm(tm) = ∅, and
(2) Contm(tm) ≠ ∅. In the first case, m sends a feedback
fm ∈ Fif to its temporally closest friend neurons to be
able to have contributor synapses. In the second case, the
problem can be turned into a temporal integration problem
and a temporal integration feedback fm ∈ Fti is sent to its
temporally closest friends Tempm(t).
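The interneuron's detection rule just described can be sketched directly; the function name and argument layout are ours.

```python
def bad_temporal_integration(spike_times, t, dt_max):
    """An interneuron agent detects the 'bad temporal integration' NCS
    at time t when it produced no spike during [t - dt_max, t],
    where dt_max is the maximum time slice it may stay silent."""
    return not any(t - dt_max <= s <= t for s in spike_times)
```

The motoneuron agent, by contrast, cannot apply this rule to itself and relies on the viewer's instant frequency feedback, as described above.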
2) Bad Instant Frequency: A motoneuron agent fires
continuously and its firing behavior might be affected by
its pre-synapses when a stimulation is given to the sensory
neuron agents. The motoneuron agent is expected to gen-
erate frequencies similar to the reference data. When the
motoneuron agent emits a spike, the viewer agent observes
it and calculates the instant frequency value for that spike
using the previously emitted spike. However, it is not
logical to compare an individual frequency value to the
reference data since there can be many frequency values
at a specific time and the reference data contain the noisy
behavior of the motoneuron. In this sense, to reduce the
noise and to facilitate the comparison, the moving average
frequency values are used. Consequently, the average fre-
quency at time of spike is expected to be close enough to
the average frequency of the reference data. As a result of
this comparison, the viewer sends an instant frequency isgood, increase or decrease feedback (fν ∈ Fif↑, fν ∈ Fif↓or fν ∈ Fif≈) to the motoneuron agent.
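The moving-average smoothing used for this comparison can be sketched as follows; the window length is an assumption, since the paper does not state one.

```python
def moving_average(values, window):
    """Moving average used to smooth noisy instantaneous-frequency
    values before comparing simulated and reference discharge rates."""
    out = []
    for i in range(len(values) - window + 1):
        out.append(sum(values[i:i + window]) / window)
    return out
```

Comparing smoothed simulated frequencies against smoothed reference frequencies reduces the influence of the synaptic noise that makes individual spike times nearly random.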
D. Cooperative Behaviors
The tuning behavior of neuron agents is modelled using the action of the form tune({n,m}, f) for n, m ∈ N(t) and f ∈ F, which corresponds to the adjustment of {n,m}.η by f at time t. An autonomous and cooperative neuron agent must be able to decide by itself the modification of its synapse. Thus, this action can only be executed by n over {n,m}. Moreover, it is assumed that no opposite adjustment is done at the same time. The reorganization behaviors of neuron agents are modelled using actions of the form add({n,m}) and remove({n,m}) for n, m ∈ N(t), which correspond to the formation and suppression (respectively) of {n,m} at time t. It is assumed that no synapse is both added and removed at the same time. The evolution behaviors of neuron agents are modelled using actions of the form create(n,m), createInverse(n,m) and remove(n) for n, m ∈ N, which correspond to the creation and suppression (respectively) of neuron agents. It is assumed that no neuron agent is both added and removed at the same time.
NCSs are suppressed by processing the aforementioned actions as described in the following subsections.
Algorithm 3 Response to the feedback fm received at time t by neuron agent n, where fm ∈ Fif, m ∈ M(t) and n ∈ N(t).
1: ▷ evolution condition
2: if n ∈ N+ then create(Pren ∪ {n}, m) endif
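The tuning, reorganization and evolution action vocabulary can be sketched on a minimal network container (an illustrative Python sketch, not the RePast implementation; the `Network` class, string neuron identifiers and default strength are our assumptions; `createInverse` and neuron removal are analogous and omitted for brevity):

```python
class Network:
    """Minimal container for the cooperative actions tune/add/remove/create."""
    def __init__(self):
        self.neurons = set()
        self.synapses = {}  # frozenset({n, m}) -> strength {n,m}.eta

    def tune(self, n, m, f):
        # Adjustment of {n,m}.eta by f; only the presynaptic agent n
        # may execute this action over {n,m}.
        self.synapses[frozenset((n, m))] += f

    def add(self, n, m, eta=0.0):
        # Reorganization: formation of the synapse {n,m}.
        self.synapses[frozenset((n, m))] = eta

    def remove_synapse(self, n, m):
        # Reorganization: suppression of the synapse {n,m}.
        del self.synapses[frozenset((n, m))]

    def create(self, pre, m):
        # Evolution: create a new interneuron agent wired between the
        # presynaptic set `pre` and m.
        k = f"i{len(self.neurons)}"
        self.neurons.add(k)
        for p in pre:
            self.add(p, k)
        self.add(k, m)
        return k
```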
1) Suppression of “Bad Instant Frequency” NCS: When the motoneuron agent m receives fν ∈ Fif from ν about its last spike at time tm, it evaluates fν taking into account Prem(t) in a temporal manner. There are two cases, depending on whether some n ∈ Prem(t) that affects its last spike at the right time exists or not. If there exists some n ∈ Prespikem(t′) where t′ = tm − 0.5, m turns the problem into a temporal integration problem and sends fm ∈ Fti to all n ∈ Prem(t′). Otherwise, m sends fm ∈ Fif to Prespikem(t′) where Prespikem(t″) = ∅ for all tm ≥ t″ > t′. When a feedback fm ∈ Fif is received by a neuron agent n, since it cannot help m by tuning or reorganization, it directly executes its evolution behavior (Algorithm 3): it creates an excitatory interneuron agent k by including itself as one of the presynaptic neurons of k. Therefore k will likely fire after n and there will be a time shift in the network.
2) Suppression of “Bad Temporal Integration” NCS: When a feedback fm ∈ Fti is received by a neuron agent n, it first tries to tune the synapse {n,m} if m ∈ Postn (Algorithm 4, line 2). If n cannot help m by tuning, it tries reorganization: either by adding a synapse in between if there is no synapse, or by removing the existing synapse. A
Algorithm 4 Response to the feedback fm received at time t by neuron agent n, where fm ∈ Fti and m, n ∈ N(t).
1: ▷ tuning condition
2: if m ∈ Postn then tune({n,m}, fm) endif
3: ▷ reorganization condition
4: if m ∈ Postn then
5:   if ((n ∈ N+ ∧ fm ↓) ∨ (n ∈ N− ∧ fm ↑)) then
6:     remove({n,m}) endif
7: else // m ∉ Postn
8:   if ((n ∈ N+ ∧ fm ↑) ∨ (n ∈ N− ∧ fm ↓)) then
9:     add({n,m}) endif
10: ▷ evolution condition
11: if m ∈ Postn then
12:   if ((n ∈ N+ ∧ fm ↑) ∨ (n ∈ N− ∧ fm ↓)) then
13:     create(Pren, m) endif
14: else // m ∉ Postn
15:   if ((n ∈ N+ ∧ fm ↓) ∨ (n ∈ N− ∧ fm ↑)) then
16:     createInverse(Pren, m) endif

When n is unable to help, it propagates the feedback fm to all its temporally closest friends Tempn(tn) using send(fm, Tempn(tn)).
neuron n ∈ N+ (respectively n ∈ N−) adds a new synapse (Algorithm 4, line 9) if fm ↑ (resp. fm ↓) and removes the existing synapse (Algorithm 4, line 6) if fm ↓ (resp. fm ↑). As a last resort, n creates a new neuron for helping m. A neuron n ∈ N+ (respectively n ∈ N−) creates a new neuron k ∈ N+ (resp. k ∈ N−) (Algorithm 4, line 13) if fm ↑ (resp. fm ↓) and creates a new neuron k ∈ N− (resp. k ∈ N+) (Algorithm 4, line 16) if fm ↓ (resp. fm ↑).
As mentioned before, the bad temporal integration NCS
can be detected by both interneuron agents and the mo-
toneuron agent. When detected by an interneuron agent,
any synaptic strength change within a 4 ms time range is
welcome, since the objective is to activate the interneuron
agent. Thus, when a neuron agent cannot suppress such a
situation, it asks its presynaptic or postsynaptic neighbors.
For the motoneuron agent, however, the objective is to have
a synaptic strength change at a specific time. Thus, when
a neuron agent cannot suppress such a situation, it asks its
temporally closest friend neurons. This way, the feedback
propagates through the network.
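Read as a decision procedure, the branches of Algorithm 4 can be sketched as follows (an illustrative Python sketch; in the model the three branches are successive fallbacks gated by the annoyance thresholds, which are omitted here, and the returned list only names the actions that would fire):

```python
def respond_to_fti(n_excitatory, m_in_post, fm_up):
    # n_excitatory: n is in N+ (True) or N- (False)
    # m_in_post:    whether m is in Post_n (a synapse {n,m} exists)
    # fm_up:        the temporal integration feedback asks for an increase
    actions = []
    if m_in_post:
        actions.append("tune")  # tuning condition (line 2)
        # reorganization: remove when the feedback opposes n's sign
        if (n_excitatory and not fm_up) or (not n_excitatory and fm_up):
            actions.append("remove")
        # evolution: create a same-sign neuron when it matches n's sign
        if (n_excitatory and fm_up) or (not n_excitatory and not fm_up):
            actions.append("create")
    else:
        # reorganization: add a synapse when the feedback matches n's sign
        if (n_excitatory and fm_up) or (not n_excitatory and not fm_up):
            actions.append("add")
        # evolution: create an opposite-sign neuron otherwise
        if (n_excitatory and not fm_up) or (not n_excitatory and fm_up):
            actions.append("createInverse")
    return actions
```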
IV. EXPERIMENTAL FRAME
The proposed model was implemented using RePast Sym-
phony 2.0.0 beta, an agent-based simulation environment
written in Java [29]. This model was then verified and
validated by using the model testing framework given in
[30]. The dynamic parameter of the synapses, the strength,
is implemented as described in [31]. The model proceeds
0.5 ms time steps (tick). The suitable parameter space was
determined in preliminary investigations.
A. AMAS Parameters
The reorganization annoyance threshold is set to 20
(ψreorganization = 20) and the evolution annoyance thresh-
old is set to 40 (ψevolution = 40).
B. Initial Scenario
To test our model we have chosen to simulate the neural
circuitry of single motor units using the data obtained from
low-threshold stimulation experiments on human soleus
muscles (Figure 2a). The only a priori information about sin-
gle motor units is that Ia sensory neurons make monosynap-
tic connections with the motoneuron (see Section II-B). In
this sense, we considered this path as the shortest path in our
network and defined its duration as l. Thus, we initialized the
simulations as N (0) = {s,m}, S(0) = {{s,m}, {m,∅}}and d{s,m} = d{m,∅} = l/2 where s ∈ K, m ∈M and l is
the latency of the beginning of LLE extracted from the PSF-
CUSUM of the reference experimental data (Figure 2b). In
this sense, lbegin is set to l since the earliest stimulus-evoked
change of the motoneuron behavior can be observed at l and
lend is set to 200 (see Section II-C).
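Under these definitions, the initial scenario can be sketched as follows (illustrative Python; `None` stands for ∅, the output link, and the argument l is an example value rather than a measured latency):

```python
def initial_network(l):
    # N(0) = {s, m}: one sensory neuron agent monosynaptically
    # connected to the motoneuron agent; the shortest-path duration l
    # is split evenly over the two links, d{s,m} = d{m,None} = l/2.
    neurons = {"s", "m"}
    delays = {("s", "m"): l / 2, ("m", None): l / 2}
    l_begin, l_end = l, 200  # earliest observable change, and 200 (Sect. II-C)
    return neurons, delays, l_begin, l_end
```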
C. Data Configuration
All the data used in these experiments were obtained
by recording the single motor unit activity from human
subjects. To be able to determine the similarity between
the reference and the simulated data, both of them were
converted into their respective moving average frequencies.
The frequency of firing of motor units integrates the exci-
tatory and inhibitory synaptic activities [21] and involves
the noisy behavior of the motoneuron. The moving aver-
age frequencies reduce the noise of the motoneuron and
represent the net effect on the respective motoneuron. Any
significant increase1 (respectively decrease) in the average
frequency of the motor unit represents a stimulus-induced
net excitatory (resp. inhibitory) effect. Therefore, each time
the motoneuron agent emits a spike, the average frequency
for that spike is calculated and compared to the reference
average frequency and a proper feedback is sent to the
motoneuron agent by the viewer agent.
In order to provide a good feedback mechanism, the
moving average frequency diagram was developed in 3
stages: (1) the bin size is set to 0.5 and the bins in the PSF
diagram with no frequency value were given the values in the
preceding bin. This was to ensure that the average frequency
value did not suddenly drop down to 0 Hz in these bins
[32]. This approach assumed that empty bins represented
the same frequency as the preceding bin even though they
failed to be filled due to the low number of trials and/or
due to the chance. (2) Then the raw record of the PSF was
modified by averaging the frequency values in each bin (600
1 A significant change in the activity is a change which is significant compared to the prestimulus activity.
Table I. RESULTS FOR 10 SIMULATION EXPERIMENTS

| Similarity (r²) | Neuron agents | Synapses | Convergence in ticks |
|---|---|---|---|
| 0.9093 | 621 | 1013 | 97,201,624 |
| 0.9159 | 778 | 1307 | 202,681,101 |
| 0.9179 | 611 | 993 | 100,400,005 |
| 0.9186 | 680 | 1121 | 123,279,523 |
| 0.9214 | 564 | 905 | 97,003,656 |
| 0.9348 | 583 | 935 | 94,000,891 |
| 0.9351 | 769 | 1292 | 158,718,110 |
| 0.9521 | 664 | 1085 | 122,630,387 |
| 0.9586 | 553 | 875 | 81,120,244 |
| 0.9592 | 611 | 992 | 116,481,384 |
ms bandwidth). (3) The averaged frequency record was then filtered by the moving averager using a 4-bin average and smoothed 8 times.
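The three stages can be sketched as follows (an illustrative Python sketch of the described pipeline, not the original implementation; each entry of `psf_bins` is the list of frequency values falling in one 0.5 bin):

```python
def moving_average(xs, window):
    # Trailing moving average over up to `window` bins.
    out = []
    for i in range(len(xs)):
        lo = max(0, i - window + 1)
        out.append(sum(xs[lo:i + 1]) / (i + 1 - lo))
    return out

def average_psf(psf_bins, window=4, smooth_passes=8):
    # Stages 1+2: per-bin averages; an empty bin inherits the value of
    # the preceding bin so the average never drops suddenly to 0 Hz.
    averaged, prev = [], 0.0
    for values in psf_bins:
        prev = sum(values) / len(values) if values else prev
        averaged.append(prev)
    # Stage 3: a 4-bin moving average applied 8 times.
    for _ in range(smooth_passes):
        averaged = moving_average(averaged, window)
    return averaged
```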
D. Tonic Firing Configuration
For imitating the nominal behavior of the biological mo-
toneuron, the motoneuron agent m uses the reference data.
When the reference data are provided to m, it removes the
discharge rate values after stimuli and calculates a statistical
distribution using the remaining part. Then using the statis-
tical parameters of the distribution, a frequency generator,
which is used to generate consecutive interspike interval
(ISI) values for m, is created. Each time a new ISI value
is calculated using this generator, the instant membrane
potential increase Δp is calculated as Δp = AHPm/ISIand at each tick, pm is increased by Δp ∗ tick. For the
statistical calculations, the SSJ2 (Stochastic Simulation in
Java) library is used. To increase the reliability of the
motoneuron agent, the goodness-of-fit test for the tonic firing
behavior is also performed using the aforementioned testing
framework [30]3.
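The update rule can be sketched as follows (illustrative Python; the ISI generator is passed in as a callable, whereas the model fits it to the reference data with SSJ, and the spike/reset rule is our assumption about how the potential is consumed):

```python
def tonic_membrane_trace(ahp, isi_gen, ticks, tick=0.5):
    # Each ISI drawn from the fitted generator yields a potential
    # increase dp = AHP / ISI per ms; the membrane potential p rises by
    # dp * tick every 0.5 ms tick, and a spike is emitted (p reset)
    # once p reaches the AHP level.
    p, dp, spikes = 0.0, ahp / isi_gen(), []
    for i in range(ticks):
        p += dp * tick
        if p >= ahp:
            spikes.append(i * tick)          # spike time in ms
            p, dp = 0.0, ahp / isi_gen()     # start the next interval
    return spikes
```

With a constant 10 ms ISI generator, the sketch emits one spike every 10 ms, as expected.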
V. SIMULATION RESULTS
After the simulation ends, the results are analyzed in
order to ensure that the generated network is functionally
equivalent to the reference real network. To calculate the
similarity of two networks, a cross-correlation analysis is
performed between the simulated PSF-CUSUM and the
reference PSF-CUSUM. The correlation r will yield a 0
when there is no correlation (totally uncorrelated) and a
1 for total correlation (totally correlated). The degree of
similarity is then calculated as r2. This information is fair
enough in order to claim that the two underlying networks
are functionally equivalent or not.
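The similarity measure can be sketched as the squared zero-lag correlation coefficient between the two CUSUM traces (an illustrative Python sketch):

```python
def similarity_r2(simulated, reference):
    # Pearson correlation r between the simulated and reference
    # PSF-CUSUMs at zero lag; the degree of similarity is r squared.
    n = len(simulated)
    mean_s = sum(simulated) / n
    mean_r = sum(reference) / n
    cov = sum((s - mean_s) * (r - mean_r)
              for s, r in zip(simulated, reference))
    var_s = sum((s - mean_s) ** 2 for s in simulated)
    var_r = sum((r - mean_r) ** 2 for r in reference)
    r = cov / (var_s * var_r) ** 0.5
    return r * r
```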
In 10 simulation experiments (Table I), the functional similarity observed is between 90.93% and 95.92% (average 93.48%), the number of neuron agents is between 553 and 778 (average 643.4) and the number of synapses is between 875 and 1307 (average 1051.8). Figure 3 shows the PSF and the PSF-CUSUM diagrams of a simulation run.

2 http://www.iro.umontreal.ca/~simardr/ssj/indexe.html, last access on 14 July 2012.

3 The case study given in [30] describes this test comprehensively.
VI. DISCUSSION
The results are very promising. They show that the
developed model is able to learn and simulate the functional
behavior of human single motor units. The average number
of neuron agents observed at the end of simulations is not
an odd value for a single motor unit pathway. However, it is
unclear that this is the case. Moreover, the model does not
simulate the functional behavior smoothly (see Figure 3b).
This is probably caused by the lack of proper feedbacks
(see the gaps between discharge rates in Figure 3a), since
the viewer agent is sending feedbacks in response to the
observed outputs. Besides, according to the AMAS theory,
giving right feedbacks to the right agents is important. Thus,
the calculation of the average PSF must be done carefully
since it affects right recruitment of the network. It should
be done slightly without displacing the peaks and throughs
significantly. During our preliminary investigations, it has
been observed that when the average PSF is smoothed
more, the information about the dynamics of the system
is lost, and as long as it is smoothed less, the noise of
the motoneuron agent prevents the interneuron agents to
learn the right dynamics. The tolerance value τ is also
an important parameter closely related to right feedbacks.
Similar to smoothing, when τ is higher, the information
about the dynamics of the system is lost, and when τ is
smaller, it is harder to detect good outputs.
Another important aspect is the convergence efficiency of the model. Parameters that affect the convergence are the annoyance thresholds and the AHP level of interneurons. The annoyance thresholds are set to high values to increase the propagation of the feedbacks inside the network. When these parameters are lower, the agents tend to perform reorganization and evolution more quickly and the perturbation of the system increases. When they are higher, although the dynamics of the network do not change and the network successfully converges, the convergence efficiency decreases. For computational simplicity and for increasing the convergence speed, we set the AHP level of interneurons to a low value, so that they can be activated by as few as two synapses. However, it is unclear whether this should be the case for real networks.
VII. RELATED WORK
Buibas et al. [11] present a formal modeling framework
for using real-world data to map the functional topology
of complex dynamic networks. The framework formally
defines key features of cellular neural network signalling
and experimental constraints associated with observation
and stimulus control, and can accommodate any appropriate
Figure 3. The simulated PSF (a) and PSF-CUSUM (b) compiled from the motoneuron agent responses of a simulation experiment. The functional similarity between the simulated network (b) and the real network (Figure 2b) is 95.92%.
model of intracellular dynamics. They claim that the framework is particularly well-suited for estimating the functional connectivity in biological neural networks from experimentally observable temporal data. However, the framework is unable to estimate and map the functional topology of complex networks with unknown connectivities. There are also approaches for estimating parameters and dynamics of small groups of neurons [33], [34]. Eldawlatly et al. [33]
use dynamic bayesian networks in reconstructing functional
neuronal networks from spike train ensembles. Their model
can discriminate between mono- and polysynaptic links
between spiking cortical neurons. Makarov et al. [34] present
a deterministic model for neural networks whose dynamic
behavior fits experimental data. They also use spike trains
and their model permits inferring properties of the ensemble
that cannot be directly obtained from the observed spike
trains. However, these studies are far from establishing a
complete functional connectivity of larger networks such as
single motor unit pathways.
There are also simulators focused on creating a wiring
diagram of all the neurons in the brain [35], [36]. The
Human Connectome Project (HCP) [35] is an ambitious
effort to map the neural pathways that underlie human brain
function. The HCP proposes to achieve this by using new-generation magnetic resonance imaging (MRI) machines to trace the connectomes of more than 1,000 individuals. The Human Brain Project
[36] aims to create a wiring diagram of all the neurons
in the brain, and neuroscientists have developed innovative
techniques for automatically imaging slices of mouse and
cat brain, yielding terabytes of data so far. However, it
is not proven that MRI techniques can produce a reliable
picture of normal connectivity, never mind the types of
abnormal connection likely to be found in brain disorders,
and some researchers argue that the techniques have not been
adequately validated [37].
VIII. CONCLUSIONS
Although there are studies aimed at simulating biological neural networks in the literature, in this study we have, for the first time, generated artificial neural networks for human single motor unit pathways using PSF data through a self-organization process. The generated networks are modified through the NCSs of agents. Driven by intermittent activations of the sensory neuron agents and the spontaneous activity of the motoneuron agent, an artificial neuronal pathway emerges through recruitment, dismissal and modification of neuron agents and synapses. Our present findings do not constitute a proof that the simulated neural network is exactly the same as the real one. However, it is shown that the developed simulator is able to generate what is observed in natural situations. Since the nominal behaviors of the model elements conform to the biological elements (they are all verified and validated), it can also be said that the simulated network is biologically plausible. This is a striking conclusion that makes our understanding of the synaptic functional connectivity of human motor units clearer. Nevertheless, a detailed analysis of the generated network should be done in order to support this conclusion. As a result of this analysis, the number of neuron agents should be optimised if necessary.
To increase the reliability of the model, we are planning to use single motor unit data recorded from other muscles (e.g. the human tibialis anterior muscle). Furthermore, it would be interesting to continue this research along multiple motor unit pathways, to see to what extent the pathways in humans can be simulated.
ACKNOWLEDGMENT
Onder Gurcan is supported by the Turkish Scientific
and Technological Research Council (TUBITAK) through
a domestic PhD scholarship program (BAYG-2211) and
the French Government through the cotutelle scholarship
program.
REFERENCES
[1] J. Kavanagh and W. Hall, Eds., Grand Challenges in Computing Research Conference 2008. UKCRC, 2008.
[2] S. Strogatz, “Exploring complex networks,” Nature, vol. 410, pp. 268–276, 2001.
[3] R. Albert and A. Barabasi, “Statistical mechanics of complex networks,” Rev. Mod. Phys., vol. 74, no. 1, pp. 47–97, Jan 2002.
[4] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.-U. Hwang, “Complex networks: Structure and dynamics,” Physics Reports, vol. 424, no. 4-5, pp. 175–308, Feb 2006.
[5] K. Borner, S. Sanyal, and A. Vespignani, “Network science,” in Ann. Rev. of Information Science & Technology, vol. 41, 2007, pp. 537–607.
[6] E. Bullmore and O. Sporns, “Complex brain networks: graph theoretical analysis of structural and functional systems,” Nature Rev. Neurosci., vol. 10, pp. 186–198, 2009.
[7] B. Siri, M. Quoy, B. Delord, B. Cessac, and H. Berry, “Effects of hebbian learning on the dynamics and structure of random networks with inhibitory and excitatory neurons,” Journal of Physiology-Paris, vol. 101, no. 1-3, pp. 136–148, 2007.
[8] O. Gurcan, O. Dikenelli, and K. S. Turker, “Agent-based exploration of wiring of biological neural networks: Position paper,” in 20th Europ. Meeting on Cybern. and Systems Res., R. Trumph, Ed., Vienna, Austria, 2010, pp. 509–514.
[9] G. Di Marzo Serugendo, M. Gleizes, and A. Karageorgos, “Self-organization in multi-agent systems,” Knowl. Eng. Rev., vol. 20, no. 2, pp. 165–189, 2005.
[10] F. Amigoni and V. Schiaffonati, “Multiagent-based simulation in biology,” Model-Based Reasoning in Science, Technology, and Medicine, pp. 179–191, 2007.
[11] M. Buibas and G. A. Silva, “A framework for simulating and estimating the state and functional topology of complex dynamic geometric networks,” Neural Comput., vol. 23, no. 1, pp. 183–214, 2011.
[12] W. Gerstner and W. Kistler, Spiking Neuron Models. Cambridge University Press, August 2002.
[13] R. Iansek and S. Redman, “The amplitude, time course and charge of unitary post-synaptic potentials evoked in spinal motoneurone dendrites,” J. of Neurophysiol., vol. 234, pp. 665–688, 1973.
[14] K. S. Turker and T. S. Miles, “Threshold depolarization measurements in resting human motoneurones,” Journal of Neuroscience Methods, vol. 39, no. 1, pp. 103–107, 1991.
[15] C. Capaday, “The special nature of human walking and its neural control,” Trends in Neurosci., vol. 25, no. 7, pp. 370–376, July 2002.
[16] T. Lam and K. Pearson, “The role of proprioceptive feedback in the regulation and adaptation of locomotor activity,” Adv Exp Med Biol, vol. 508, pp. 343–355, 2002.
[17] J. Misiaszek, “Neural control of walking balance: If falling then react else continue,” Exercise & Sport Sciences Reviews, vol. 34, no. 3, pp. 128–134, July 2006.
[18] W. Calvin and P. Schwindt, “Steps in production of motoneuron spikes during rhythmic firing,” J. of Neurophysiol., vol. 35, pp. 297–310, 1972.
[19] P. Schwindt and W. Crill, “Factors influencing motoneuron rhythmic firing: results from a voltage-clamp study,” J. of Neurophysiol., vol. 48, pp. 875–890, 1982.
[20] K. S. Turker and R. K. Powers, “Black box revisited: a technique for estimating postsynaptic potentials in neurons,” Trends in Neurosci., vol. 28, no. 7, pp. 379–386, July 2005.
[21] A. Gydikov, N. Tankov, L. Gerilovsky, and N. Radicheva, “Motor unit activity upon polysynaptic reflex in man,” Agressologie, vol. 18, no. 2, pp. 103–108, 1977.
[22] P. Ellaway, “Cumulative sum technique and its application to the analysis of peristimulus time histograms,” Electroen. Clin. Neuro., vol. 45, no. 2, pp. 302–304, 1978.
[23] K. S. Turker, J. Yang, and P. Brodin, “Conditions for excitatory or inhibitory masseteric reflexes elicited by tooth pressure in man,” Arch Oral Biol, vol. 42, no. 2, pp. 121–128, 1997.
[24] R. S. A. Brinkworth and K. S. Turker, “A method for quantifying reflex responses from intra-muscular and surface electromyogram,” J Neurosci Meth, vol. 122, no. 2, pp. 179–193, 2003.
[25] D. Capera, J. George, M. Gleizes, and P. Glize, “The AMAS theory for complex problem solving based on self-organizing cooperative agents,” in WETICE ’03: Proc. of the 20th Int. Workshop on Enabling Technologies. Washington, DC, USA: IEEE CS, 2003, p. 383.
[26] V. Camps, M. P. Gleizes, and P. Glize, “A self-organization process based on cooperation theory for adaptive artificial systems,” in 1st Int. Conf. on Philosophy and Comp. Sci.: Processes of Evolution in Real and Virtual Systems, Krakow, Poland, 1998.
[27] C. Bernon, D. Capera, and J.-P. Mano, “Engineering Self-modeling Systems: Application to Biology,” in Engineering Societies in the Agents World IX, A. Artikis, G. Picard, and L. Vercouter, Eds. Berlin, Heidelberg: Springer-Verlag, 2009, pp. 248–263.
[28] E. R. Kandel, J. Schwartz, and T. M. Jessell, Principles of Neural Science, 4th edition. Mc Graw Hill, 2000.
[29] M. North, N. Collier, and J. Vos, “Experiences creating three implementations of the repast agent modeling toolkit,” ACM Trans. Model. Comput. Simul., vol. 16, no. 1, pp. 1–25, January 2006.
[30] O. Gurcan, O. Dikenelli, and C. Bernon, “Towards a generic testing framework for agent-based simulation models,” in 5th International Workshop on Multi-Agent Systems and Simulation (MAS&S’11), Szczecin, Poland, September 2011, pp. 635–642.
[31] S. Lemouzy, V. Camps, and P. Glize, “Principles and properties of a MAS learning algorithm: A comparison with standard learning algorithms applied to implicit feedback assessment,” in Proceedings of the 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology - Volume 02, ser. WI-IAT ’11. Washington, DC, USA: IEEE Computer Society, 2011, pp. 228–235.
[32] K. S. Turker and H. B. Cheng, “Motor-unit firing frequency can be used for the estimation of synaptic potentials in human motoneurones,” J Neurosci Meth, vol. 53, no. 2, pp. 225–234, 1994.
[33] S. Eldawlatly, Y. Zhou, R. Jin, and K. G. Oweiss, “On the use of dynamic bayesian networks in reconstructing functional neuronal networks from spike train ensembles,” Neural Comput., vol. 22, no. 1, pp. 158–189, Jan. 2010.
[34] V. A. Makarov, F. Panetsos, and O. de Feo, “A method for determining neural connectivity and inferring the underlying network dynamics using extracellular spike recordings,” J Neurosci Meth, vol. 144, no. 2, pp. 265–279, 2005.
[35] B. Rosen, V. J. Wedeen, J. D. V. Horn, B. Fischl, R. L. Buckner, L. Wald, M. Hamalainen, S. Stufflebeam, J. Roffman, D. W. Shattuck, T. PM, R. P. Woods, N. Freimer, R. Bilder, and A. W. Toga, “The human connectome project,” in 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain, 2010.
[36] H. Markram, “The blue brain project,” Nature Reviews Neuroscience, vol. 7, pp. 153–160, 2006.
[37] J. Bardin, “Making connections: Is a project to map the brain’s full communications network worth the money?” Nature, vol. 483, pp. 394–396, 12 March 2012.