VOLUME XX, 2018
2169-3536 © 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.
Digital Object Identifier 10.1109/ACCESS.2018.Doi Number
Multidisciplinary and Historical Perspectives for Developing Intelligent and Resource-Efficient Systems
Aarne Mämmelä1, Senior Member, IEEE, Jukka Riekki2, Member, IEEE, Adrian Kotelba1, Senior Member, IEEE, and Antti Anttonen1, Senior Member, IEEE
1VTT Technical Research Centre of Finland Ltd, P.O. Box 1100, FI-90571 Oulu, Finland
2University of Oulu, P.O. Box 8000, FI-90014 University of Oulu, Finland
Corresponding author: Aarne Mämmelä (e-mail: Aarne.Mammela@vtt.fi).
The work reported in this paper was partially supported by the European H2020 Research and Innovation project COHERENT (GA No. H2020-ICT-671639).
The work was also conducted in the framework of the self-funded Future Communications Growth Area project and the AFORM5 project of the VTT
Technical Research Centre of Finland Ltd.
ABSTRACT As communication and computation systems become more complex and target higher
performance, the fundamental limits of nature can be expected to constrain their development and
optimization. This calls for intelligent use of basic resources, that is, materials, energy, information, time,
frequency, and space. We present a multidisciplinary and historical review on the body of knowledge that
can be applied in researching such intelligent and resource-efficient systems. We review general system
theory, decision theory, control theory, computer science, and communication theory. While
multidisciplinarity has been recognized as important, there are no earlier reviews covering all five of these
disciplines. Based on the review, we build a chronology of intelligent systems and identify connections
between the disciplines. Optimization, decision-making, open- and closed-loop control, hierarchy, and degree
of centralization turn out to be recurring themes in these disciplines, which have converged to similar solutions
that are based on remote control, automation, autonomy, and self-organization. We use future wireless
networks as an example to illustrate the open questions and how they can be addressed by applying
multidisciplinary knowledge. This paper can help researchers to use knowledge outside their own field and
avoid repeating the work done already. The resulting consolidated view can speed up research and is
especially important when the fundamental limits of nature are approached and new insights are required to
overcome the challenges. The general, long-standing problem to be tackled is multiobjective optimization
with autonomous and distributed decision-making in an uncertain, dynamic, and nonlinear environment
where the objectives are mutually conflicting.
INDEX TERMS Autonomous systems, fundamental limits, multidisciplinary view, resource efficiency
I. INTRODUCTION
We present a multidisciplinary review of reviews of
intelligent technologies and resource efficiency, which are
seen as crucial in 2010-2050 but set mutually conflicting
requirements [1], [2], [3]. Wilenius [2] refers to this period
and states: “This emerging new wave of development calls
for intelligent use of resources.” We argue that a
multidisciplinary approach is required in researching
intelligent resource usage because the fundamental limits of
nature are expected to constrain the development of
communication and computation systems when these
systems become more complex and target higher
performance. Moreover, analytical thinking leading to
specialized solutions and systems is not sufficient on its
own; rather, systems thinking and more general solutions are
required. The rest of the introduction presents justification
for multidisciplinary studies, introduces the basic concepts,
and describes the contributions and structure of this paper.
A. MULTIDISCIPLINARY VIEW
Scientific inquiry has been successfully carried out in a
compartmentalized manner in specialized disciplines. We
have surveyed the earlier studies [4], [5], [6] and identified
five disciplines that can contribute to the research of
intelligent and resource-efficient systems. These disciplines
are general system theory [7], [4], decision theory [8], [9],
control theory [10], [11], [12], computer science [13], [6],
and communication theory [14], [15], [16]. Moreover, we
include electronics in general system theory as a method to
connect ideas to reality and to study implementation
complexity. The purpose of general system theory is to
gather results from different disciplines.
Decision theory has been developed in operations
research, management theory, and economics [4], [17], [6].
The theory is used in various disciplines and is especially
relevant when we consider the decisions in the control loop
in Fig. 1a). A human, who acts as a decision maker in the
remote control station on the left, receives real-time sensing
information (for example, location, direction, and speed)
from a ship on the right and is therefore able to control it
remotely. The control or feedback loop is generalized in Fig.
1b where the decision maker is replaced with a decision
block (Decide), a fundamental element in many automata.
The decision blocks may form a hierarchy and exchange
sensing and control information with the upper level decision
blocks. The process or the system to be controlled cannot be
accurately modeled and therefore it is usually assumed to
include some noise.
FIGURE 1. A control loop. a) An example where a ship is remotely controlled. b) A general control loop where the system to be controlled is called a process. Solid lines with arrows denote control signals and dashed lines with arrows denote sensing signals. The sense, decide, and act blocks close the loop through the process that is often called the plant.
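The sense-decide-act structure of Fig. 1b can be illustrated with a minimal simulation. This is a hedged sketch: the set-point, gain, and noise level below are illustrative assumptions, not values from the paper.

```python
import random

def closed_loop(setpoint, steps=50, gain=0.5, noise_std=0.1, seed=1):
    """Minimal sense-decide-act loop around a noisy process (cf. Fig. 1b).

    Sense: read a noisy measurement of the process state.
    Decide: compare the measurement with the set-point.
    Act: apply a correction proportional to the deviation.
    """
    rng = random.Random(seed)
    state = 0.0  # process state to be controlled
    for _ in range(steps):
        sensed = state + rng.gauss(0.0, noise_std)  # Sense (measurement noise)
        error = setpoint - sensed                   # Decide
        state += gain * error                       # Act on the process
    return state

final = closed_loop(10.0)  # the state settles near the set-point despite the noise
```

Even though the process model is not exact (the noise term stands in for modeling error), the feedback loop drives the state toward the set-point, which is exactly the point made above.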
Humans can participate in the decision-making of many
intelligent and resource-efficient systems. Due to advances
in science and technology, humans can have wider
awareness of their environment, even globally, and actual
presence is no longer needed. The awareness can in the future
be expanded in the form of telepresence where all the senses
(for example, sight, hearing, and sense of touch) come into
use and the delays are minimized, so that the interactions
with the environment look almost instantaneous and an
operator feels physically present at the remote site [21], [22].
In virtual reality, the user has the impression of being
physically present in an imaginary environment instead of
actual presence.
Multidisciplinary studies of the five disciplines are still
rare [18], [19] and they usually have rather narrow scopes.
On the other hand, general books such as [7], [20], [4] have
much wider scopes. Books on systems thinking [4] cover the
history only until about 1980. In addition, the general books
[7], [20] need an update as most of the novel concepts
covered by our studies are excluded from these books and
the focus in [7] is on mechanical engineering. Ogata [10]
presents an excellent summary about control engineering but
does not discuss hierarchical, distributed, learning,
autonomous, and self-organizing control. Saridis [5]
combines the results of artificial intelligence (AI), operations
research, and control theory and uses them in intelligent
hierarchical control.
Control theory, computer science, and communication
theory have so far progressed rather independently although
they show similar trends, including the use of hierarchy and
feedback, especially in autonomous and self-organizing
systems. Control theory has often been leading the
development, sometimes by decades. The most advanced
concepts in control theory, computer science, and
communication theory are presented in the simplified
chronology in Fig. 2. A more extensive chronology is
presented in Section II. We observe that in control theory the
most advanced ideas are autonomy and remote control over
a network, whereas in computer science, the idea is
autonomy in self-organizing distributed computing, and in
communications, the idea is autonomy in a self-organizing
network. These terms are explained in Section II.
FIGURE 2. A simplified chronology of the most advanced systems that combine efforts from control theory, computer science, and communication theory.
Control theory, computer science, and communication
theory form a triplet that is merged in automatic,
autonomous, and networked control systems, which appear
under different terms such as cyber-physical systems [23]
and time-sensitive networks [24] (Fig. 2). Robotics is an area that
has used the results of different disciplines extensively,
including ideas of intelligent agents [25], [26]. Originally,
Wiener (1948) combined control and communications in his
cybernetics [4]. Autonomic computing combines control
theory and computer science [27]; remote control or
teleoperation combines control theory and communication
theory [28]; and distributed computing combines computer
science and communications [29]. Tactile Internet [30], [31],
[32] is an idea that has been developed in control theory for
many decades since about 1945 in the form of teleoperation
or remote control, networked control systems, and
kinesthetic and haptic feedback [33], [34], [35], [36].
Without a systems view connecting these disciplines
together, reinventions of similar concepts might happen also
in the future.
Multidisciplinary research covering the five disciplines
would facilitate accessing knowledge outside each
researcher’s own field and avoid repeating the work already
done. Moreover, multidisciplinary thinking enables the
emergence of new scientific principles and opportunities as
well [37]. On the other hand, multidisciplinary studies are
challenging since the amount of literature is increasing
exponentially, doubling every 10-15 years depending on the
discipline. Thus, historical reviews become important.
A map of the world can be used to illustrate the role of
multidisciplinary research. The map is based on the efforts
of thousands of cartographers. Such knowledge is not meant
to replace the more detailed knowledge attained by
specialists, but to show the relations of each part to the other
parts and to the whole, and thereby to unify and organize. This
example was originally presented by the founder of the
discipline of history of science, George Sarton (1884-1956)
[38]. He was interested in natural and human sciences,
including social sciences, and his goal was to create a
philosophy of science with the unity of science as a core idea.
Sarton’s predecessor was William Whewell (1794-1866).
Conventionally, only the natural sciences have been seen as
important in engineering, but the human sciences also have
connections to engineering, in studying the needs and use of
technology, and so do the social sciences [39].
Some hierarchical systems include a phenomenon called
emergence [4]. It means that the upper level system has
properties that are not predictable from the lower levels.
Complexity in general calls for system studies to find
common goals for experts [40]. The multidisciplinary view is
an additive view connected to systems thinking [41], [42]. The
basic tools supporting systems thinking are bibliographies
showing available reviews, books, and original papers,
chronologies presenting history, and conceptual analysis in
the form of classification or taxonomies and hierarchies.
Understanding the history is a prerequisite for systems
thinking. Knowing the history helps to better understand the
state of the art and future trends. Furthermore, conceptual
analysis is needed to find unified terminology for
communication between different disciplines [7]. This may
imply that some disciplines change their definitions.
The multidisciplinary view is, however, only the first step
towards a holistic top-down systems view, which is the
opposite of reductive analytical bottom-up view [41], [42].
The interdisciplinary view is a more advanced, interactive
view, and the third and most advanced view, the
transdisciplinary view, is the holistic systems view. Hence, this paper presents our first
step towards the systems view. The term “analytical view” is
commonly used to refer to the opposite view leading to
specialized solutions and systems, but this terminology
includes a small inconsistency since “analysis” is also used
in systems view [41]. The difference is that in the systems
view, a system is observed as a whole and as part of the
environment, whereas in the analytical view, the focus is on
the parts of the system isolated from each other and from the
environment. In the systems view, all the approaches that can
maximize the system performance are considered in order to
achieve a good balance, for example, between computation
and communication. The best practices from all the
disciplines can be used. Due to emergence, the reductive
analytical view cannot produce as extensive knowledge as
the holistic systems view.
Our paper is about linking disciplines together, not about
the details in individual disciplines. The details are covered
in the references. The output of interdisciplinary scientific
research can be measured [43]. The top 1% most cited papers
exhibit higher levels of interdisciplinarity than other papers
in other citation rank classes [44]. Thus, interdisciplinary
research plays an important role in generating high impact
knowledge. However, in some disciplines, editors tend to
prefer contributions directly related to new methods and
algorithms rather than results obtained by applying
existing methods [45].
B. BASIC CONCEPTS
Future communication and computation systems need to be
intelligent, that is, they need to have the ability to act
appropriately in an uncertain environment [46], [6].
Appropriate actions advance the goal of the system and
uncertainty means lack of precise knowledge [8]. A system
is manually controlled if a human is in the decision loop (Fig.
3); remotely controlled manipulators are one
example [22]. A system is automatic if it
works without human intervention but uses some
predetermined control information, for example an external
control signal (Fig. 1b) called the set-point value or a
reference signal that may represent the route to the goal [10].
If there is an obstacle, an automatic system stops and waits
until the obstacle is removed. An automatic system is
autonomous or self-governing if it can achieve its goal
without any external control [12], [47], [48], [49]. An
autonomous system defines its own route to the goal, and if
there is an obstacle, the system is able to find a new route.
Self-organizing systems are advanced forms of autonomous
systems, able to change their own structure.
FIGURE 3. Manually-controlled, automatic, and autonomous systems. A self-organizing system is an advanced form of an autonomous system.
The term autonomic has been earlier used only for
autonomic nervous systems implying unconscious control.
Some disciplines, especially computing, started to use the
term autonomic in about 2001 [50], and the usage spread to
other disciplines, for example communication and control
[51], [52]. Authors have different opinions about the
meaning of the term. The consensus seems to be that
autonomic systems are more advanced than autonomous
systems. Autonomic computing systems “can manage
themselves, given high-level objectives from administrators”
[50]. They are self-managing, i.e., they have self-
configuring, self-optimizing, self-healing, self-monitoring,
and self-protecting properties [50], [47]. In [52], the authors
explain that, contrary to autonomous systems, autonomic
systems cooperate using a dedicated communication
channel. We prefer the general terms cooperative and
self-organizing systems.
Generally, a system is defined to be a set of parts with
some relationships so that the parts form a whole which has
properties of the whole rather than properties of the parts [4].
A system is an object of study whose boundaries are defined
by its observer [20]. When several disciplines meet, special
care is needed even with the basic concepts like the system
being studied. When a part of a larger system is studied, the
boundaries drawn by the observer determine which parts of
the larger system belong to the system being studied and
which parts belong to its environment. For example, in
artificial intelligence, the focus is typically on an agent
interacting with its environment via sensors and actuators
[6], whereas control theory focuses on controlling a plant or
a process in the environment that is included in the feedback
loop [10]. The same entity can be described in both ways, but
the observed boundaries can differ. We follow the
conventions of control theory.
According to [4], control, communication, emergence,
hierarchy, and open systems are essential concepts in
systems thinking. We also consider resource-efficiency an
important concept. A general control, learning, or decision
loop or cycle based on feedback is a core component of any
intelligent and resource-efficient system and a unifying
concept for the five disciplines covered in this survey. After
the stability analysis of the feedback loop [53], the learning
loop was used by Dewey, Lewin, and Piaget, whose work
led to Kolb's concept of experiential learning [54]. Eykhoff
(1960) used the loop to describe the process of learning in
adaptive systems [55]. The operational phases of the loop
were measurement, data reduction or learning, decision, and
adjustment. The loop has been proposed independently many
times with different names such as sense-plan-act paradigm
in robotic systems [26]; observe, orient, decide, and act loop
(OODA) in combat operations process [56]; decision-
making process in situation awareness [57]; cognition or
cognitive cycle in cognitive radios [58], [59]; and monitor,
analyze, plan, execute, and knowledge loop (MAPE-K) in
autonomic computing [50].
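The shared shape of these loops can be written down schematically; the sketch below uses the MAPE-K names [50], but the managed quantity ("load"), the target value, and the proportional plan rule are our own illustrative assumptions.

```python
class MapeK:
    """Schematic MAPE-K loop: Monitor, Analyze, Plan, and Execute
    phases operating on a shared Knowledge base."""

    def __init__(self, target):
        self.knowledge = {"target": target, "history": []}  # K: shared knowledge

    def monitor(self, reading):
        self.knowledge["history"].append(reading)           # M: collect sensing data

    def analyze(self):
        # A: derive a symptom (deviation from the target)
        return self.knowledge["history"][-1] - self.knowledge["target"]

    def plan(self, deviation):
        return -0.5 * deviation                             # P: proportional correction

    def execute(self, action, managed_system):
        managed_system["load"] += action                    # E: act on the system

managed = {"load": 80.0}          # illustrative managed resource
loop = MapeK(target=50.0)
for _ in range(10):               # each pass halves the deviation from the target
    loop.monitor(managed["load"])
    loop.execute(loop.plan(loop.analyze()), managed)
```

The same skeleton, renamed, yields the sense-plan-act, OODA, or cognitive-cycle variants listed above; only the phase names and the knowledge representation differ.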
All agents are based on a control loop [60], [25], [6]. A
human agent has sensors (e.g., eyes, ears, and skin) and
actuators (e.g., hands, legs, and vocal tract). A robotic agent
may have cameras, infrared range finders, and various
motors. In social sciences, autonomous agents are called
actors which are usually humans [41]. Such agents and
actors show rational goal-directed behavior [46], [6]. In a
multiagent system, the goals of individual agents may be
mutually conflicting. Hence, if cooperation is not organized,
competition may lead to unstable or otherwise less optimal
behavior. In fact, as discussed above, many different control
loops can be studied when system borders outline different
parts of the agent and place the rest of the agent into the
environment. The highest level loop outlines the whole agent
from its environment and is usually the focus in artificial
intelligence. When the studied system covers smaller parts
of the agent, control theory is typically applied, for example,
to turn the camera of a mobile robot or to control its heading
direction. Communication theory enters the stage when the
agent components are distributed in space and additional
control loops are needed for communication. Hierarchies
(discussed below) can be used to relate the different control
loops with each other.
Agents can be divided into hardware and software agents
depending on the implementation [25], [26], [61]. In
applications, agents can also be divided into autonomous
agents, cooperating multiagent systems, and assistant agents
assisting humans. According to the architecture, agents are
divided into reactive, proactive or deliberative, and
interacting or social agents and their mixed or hybrid forms.
Interacting agents can be either competitive or cooperative.
The goal of the control loop is to improve the
performance, that is, to achieve the desired plant behavior
while using resources efficiently. The key performance
indicators (KPIs) are often efficiency metrics demonstrating
the efficient use of resources. Metrics are also called criteria or
objective functions. Each objective function may have a
constraint and a goal and those goals may be conflicting [62].
Objective functions, constraints, and goals are treated in [62]
as design metrics. We assume that objective, criterion, and
metric are synonymous terms. Optimization is moving from
the conventional single-objective optimization to
multiobjective optimization where the set of Pareto optimal
solutions is an essential concept [62]. The optimum is not
unique but shows the needed trade-offs. Intervention of a
decision maker is needed to make the final decision in
multiple-criteria decision-making; fairness must be
considered as well. In self-organizing systems, the structure
of the plant can be changed in addition to its behavior.
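A Pareto-optimal set can be computed with a simple dominance filter. The sketch below assumes minimization of every objective, and the objective vectors are made-up illustrative values.

```python
def pareto_front(points):
    """Return the non-dominated points (minimization in every objective).

    A point q dominates p if q is no worse in every objective and
    strictly better in at least one; for distinct objective vectors
    the strict improvement is implied by q != p.
    """
    return [
        p for p in points
        if not any(
            q != p and all(qi <= pi for qi, pi in zip(q, p))
            for q in points
        )
    ]

# Illustrative trade-off, e.g., between energy use and delay:
points = [(1, 5), (2, 3), (3, 4), (4, 2), (5, 1)]
front = pareto_front(points)  # (3, 4) is dominated by (2, 3)
```

The surviving points are exactly the needed trade-offs: none of them can be improved in one objective without worsening another, so a decision maker must still choose among them.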
Stability is an essential property for the feedback loop. In
general, negative feedback is used, and a delay in the loop
may have harmful effects on stability.
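The harmful effect of loop delay can be demonstrated with a one-line plant model; the gain and delay values below are illustrative assumptions chosen to make the contrast visible.

```python
def feedback(gain, delay, steps=200, x0=1.0):
    """Simulate x[k+1] = x[k] - gain * x[k - delay]: negative feedback
    acting on a measurement that is `delay` steps old."""
    x = [x0] * (delay + 1)  # the state is held at x0 before k = 0
    for _ in range(steps):
        x.append(x[-1] - gain * x[-1 - delay])
    return x

no_delay = feedback(gain=1.0, delay=0)  # settles immediately
delayed = feedback(gain=1.0, delay=4)   # same gain, but growing oscillation
```

With no delay the loop cancels the deviation in a single step, whereas a four-step-old measurement makes the same negative feedback overcorrect and oscillate with growing amplitude, i.e., the loop becomes unstable.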
Complex systems are also called large-scale systems and
they are usually hierarchical [63]. In such a case, the set-
point value or the reference signal of a decision block can be
generated as an action by a decision block at an upper level
in the hierarchy; on the other hand, a decision block may
provide some sensing information to the upper level
(Information exchange in Fig. 1b). Hierarchical systems
combine centralized and decentralized control, often in a
mixed form called distributed control [29], [64].
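The exchange of set-points downward and sensing information upward can be sketched with a two-level hierarchy of decision blocks; the supervisor rule, threshold, and gain below are illustrative assumptions, not from the paper.

```python
def supervisor(reported_load):
    """Upper-level decision block: turn aggregated sensing information
    from below into a set-point for the lower level (illustrative rule)."""
    return 0.0 if reported_load > 100.0 else 50.0

def regulator(state, setpoint, gain=0.5):
    """Lower-level decision block: track the set-point issued from above."""
    return state + gain * (setpoint - state)

state, load = 0.0, 20.0
for _ in range(30):
    sp = supervisor(load)         # control information flows downward
    state = regulator(state, sp)  # the lower loop tracks the set-point
    load = 20.0                   # sensing information reported upward (held fixed here)
```

The lower loop handles fast local regulation while the upper level makes slower, coarser decisions, which is the essence of combining centralized and decentralized control.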
Fundamental limits of nature set constraints for complex
communication and computation systems that target high
performance. We are quickly approaching many different
fundamental limits of nature because of exponential trends
in requirements such as bit rate and delay [65]. These trends
have been possible because of miniaturization of electronics.
According to Moore’s prediction in 1975 (usually called
Moore’s law), the number of transistors on a chip has been
doubled every second year and this will continue until about
2021 [66], [67]. According to Keyes’ prediction, the
switching energy of a transistor has reduced by a factor of
100 every ten years until about 2000 after which there has
been a slowdown [66]. Understanding of the fundamental
limits of nature is indispensable to obtain high resource
efficiency when the resources are scarce [40], [4], [69]. We
also explain the fundamental limits relevant to
communication systems, whereas in [69], the limits were
presented only for computing.
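These exponential trends can be stated numerically. In the sketch below, the anchor points (the Intel 4004 with 2300 transistors in 1971, and a normalized switching energy in 1980) are our own illustrative assumptions, not figures from [66], [67].

```python
def moore_transistors(n0, year0, year):
    """Transistor count under Moore's 1975 prediction: doubling every two years."""
    return n0 * 2 ** ((year - year0) / 2)

def keyes_switching_energy(e0, year0, year):
    """Switching energy under Keyes' trend: a factor-of-100 reduction per decade."""
    return e0 * 100 ** (-(year - year0) / 10)

# Twenty years of doubling every two years gives a factor of 2**10 = 1024:
n_1991 = moore_transistors(2300, 1971, 1991)      # 2300 * 1024
# Two decades of Keyes' trend gives a reduction by 100**2 = 10**4:
e_2000 = keyes_switching_energy(1.0, 1980, 2000)  # 1.0 / 10**4
```

The steepness of these factors explains why the fundamental limits, once distant, are now being approached within a few technology generations.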
Although surpassing the limits is being discussed (e.g.,
delays could be reduced to zero in the quantum Internet
[70]), our plan is to approach these limits with intelligent
management of the basic resources. We derive the basic
resources directly from the open system concept. Open
systems are such that their boundaries are crossed by
materials (mass), energy (power), and information (data and
control) [71], [7], [4]. In university physics, only closed
systems are usually considered although all technical
systems are open. Conventionally, the basic resources in
communications have included only energy and bandwidth
[72].
C. NEW CONTRIBUTION
Our contribution to the state of the art is the following: We
present an extensive multidisciplinary bibliography of earlier
reviews and books. This bibliography includes over 200
carefully selected books and review papers searched with the
keywords intelligence (a method to manage uncertainty in
complex environments), feedback (an enabler for learning
and iterative problem solving), hierarchy (a method to
manage complexity), distributed systems (a result from the
spatial extent of a network), and resource efficiency (a metric
to assess how well the basic resources are used to produce
the desired result). With the original papers, the total number
of references exceeds 400.
We show that the ideas have progressed partially
independently in different disciplines as demonstrated by the
comprehensive chronology covering the last hundred years.
We present a classification of hierarchical and distributed
systems using the early results of control theory. Using the
hierarchy of natural systems [73], we present a modern
hierarchy of human-made systems with increasing
uncertainty and decision complexity. Our hierarchy includes
static, simple dynamic, control, adaptive, learning, and self-
organizing systems. Using the open system concept, we
explain the relationships between the basic resources that are
materials, energy, information, time, frequency, and space.
The fundamental limits and the ideas to treat them are
summarized. We show that in many cases we are
approaching the limits and because of many conflicting
goals, multiple-criteria decision-making will become
mandatory to obtain improvement.
The bibliography, chronology, and conceptual analysis
provide a good starting point for a researcher who is
developing intelligent and resource-efficient systems and
evaluating whether the multidisciplinary systems approach
could be beneficial in the research being conducted. We use
future wireless networks as an example to illustrate the open
questions related to intelligent and resource-efficient
systems and how they can be addressed by applying
multidisciplinary knowledge. Wireless networks pose an
exceptional challenge for control since they are complex,
distributed in space, and the environment is rapidly changing
due to mobility.
Open problems of science were discussed by du
Bois-Reymond in 1880, and they are still relevant [40]. Many of
them are related to emergence. Among them are
morphogenesis or self-organization [71] and consciousness
that is characterized by sensation, emotion, volition, and
thought [74]. Artificial consciousness including emotion and
volition is an active area of contemporary research [75], [76].
We focus on the cognition part of consciousness, including
the problems of sensing, deciding, and self-organization, and
exclude the problems of emotion and volition, implying free
and independent decisions, from this survey.
D. STRUCTURE OF THE PAPER
The paper is organized as follows. In Section II, we
present the major general reviews in different related
disciplines and the relevant definitions. We emphasize the
historical development of the disciplines by presenting a
chronology covering the last hundred years. In Section III,
we summarize the observed connections between the
disciplines. In Section IV, we specifically apply the state-of-
the-art results to wireless communication systems. Finally,
Section V includes our conclusions.
II. LITERATURE AND CHRONOLOGY OF INTELLIGENT RESOURCE-EFFICIENT SYSTEMS
Earlier reviews and books are listed in Table I using the
classification presented in the introduction. Due to the wide
scope of the review, we could select only a small fraction of
the literature available, and such selections are always
somewhat subjective. Quality was the main selection
criterion. Papers with clear conceptual analysis and broad
historical reviews were emphasized. The newest review
papers and books were preferred. Some reviews cover a
rather short time span; hence, some old reviews were
selected to provide complementary information. Journal and
magazine papers were favored over conference papers and
reports. The papers matching more than one category were
classified based on their main contribution.
TABLE I. Reviews and books
In Fig. 4, we present a chronology of intelligent systems
based on our review. We use the same division of disciplines
as in Table I but add some original papers. The concepts
presented in Fig. 4 are somewhat different from those in
Table I since our purpose is to show the trends explicitly. The
details can be found in the references given in Table I and
elsewhere in this review.
FIGURE 4. Chronology of intelligent resource-efficient systems. The concepts and acronyms are explained in the text. The date of the first occurrence of a concept in the literature is indicated with the position of the first letter of each term.
In addition to the common divergence in research, there
is an opposite convergence trend, which is seen for example
in networked control systems. We can also see the general
trend towards more intelligent systems, though different
terminology is being used for similar concepts.
A. GENERAL SYSTEM THEORY
The history of general system theory starts in antiquity [4],
but systems thinking is basically an idea of the last century
[20], originally started by Bertalanffy in a seminar
presentation in 1937 and in a written form in 1945 [71]. He
also presented the idea of open systems in 1940. A precursor
of Bertalanffy’s work was Alexander Bogdanov’s (1912)
universal organizational science or tectology [267].
Bogdanov was the pseudonym of Alexandr Malinovsky.
Since the 1950s, the concept of system of systems has been
used “to describe systems that are composed of independent
constituent systems, which act jointly towards a common
goal through the synergism between them” [268].
Bertalanffy’s theory started from biology [71], which can
be seen as a system theory of life. The International Society
for the Systems Sciences (ISSS; until 1988, the Society for
General Systems Research) was founded in 1954 “to
encourage the development of theoretical systems which are
applicable to more than one of the traditional departments of
knowledge”. The International Council on Systems
Engineering (ICOSE) was founded in 1990. There are good
specific vocabularies on systems engineering, for example
[269]. In addition, the general standards dictionary is very
useful [270].
Bionics is the science of systems which have some
function copied from nature, or which represent
characteristics of natural systems or their analogs [96]. Many
other synonymous terms are used as well, for example
biomimetics.
Most fundamental limits of nature were discovered
between 1850 and 1950 [271]. Cybernetics by Wiener (1948)
combines the results of communication and control [83], [4].
Cybernetics has been followed by second and higher order
cybernetics, which uses positive feedback in addition to
negative feedback and is useful for self-organizing systems
[272], [85]. The origin of adaptive systems is in the work of
Darwin (1859) [81], but he used the term for self-
organization. In biology, self-organization is called
morphogenesis that was described by Thompson (1917) and
Turing (1952) [71], [80]. Morphogenesis can be described by
using second-order cybernetics and positive feedback so that
the system can diminish or grow when necessary [272], [41],
[85]. Emergence appears in morphogenesis as new forms
are created when moving to a higher hierarchy level.
Emergence has been studied since the work of Broad (1923)
[4]. Early work on self-organization was also presented in a
paper by Ashby (1947) [84]. He developed the first adaptive
system called homeostat as a simulation of the brain in 1948
[273].
The study of complex systems is closely related to
bionics [90], [91]. The human brain is the most complex system
that humans are aware of and is therefore a good model for
engineering. There is as yet no single complexity theory, but
there are a number of theories, for example nonlinear
dynamical system theory, chaos theory, and catastrophe
theory and several computer models, for example cellular
automata, neural networks, and genetic algorithms.
The attempts to simulate the brain since the 1980’s are
summarized in [274], [275]. Reference [101] presents a
modern summary of theories of cognition. There are three
main approaches, including symbolic approach,
connectionism, and dynamicism. By far the dominant
paradigm is the symbolic approach where the mind is
assumed to be software in a computer. In connectionism, the
mind is assumed to consist of large networks of nodes. In
dynamicism, the mind is compared to continuously coupled,
nonlinear dynamical systems, including feedback loops. The
first cognitive architecture was the General
Problem Solver (GPS) by Newell and Simon (1976). The
most successful and widely applied cognitive architecture is
the Adaptive Control of Thought - Rational (ACT-R)
architecture by Anderson (1976), which relies on symbolic representations and incorporates connectionist-like mechanisms. The author of [101] ignores similar work done
in robotics [25], [11], [26]. Some of this work is directly
developing computational models of the mind. Thus, we can
see two parallel and independent tracks of research, one in
neuroscience [101] and the other one in robotics [25], [11],
[12], [26].
The Semantic Pointer Architecture Unified Network (Spaun) by Eliasmith (2012) is the best simulation of the human brain [101]. This artificial brain includes 2.5 million neurons (0.003% of the human brain) and works in a similar way. Spaun
can recognize numbers 0-10 and solve simple tasks. One
modern approach to simulate the human brain is
neuromorphic chips [276], [277], [278], [279]. The term
neuromorphic was established in [93], [94]. The
implementation approach is digital: firing neurons send spikes of electrical impulses, each of which corresponds to 40 bits. The timing of the spikes conveys important information. Neurosynaptic chips are event-driven and operate only when they are needed, resulting in a cooler operating environment and lower energy use [280]. Some of the newest brain
simulations are summarized in [278].
There have been attempts to simulate the human brain with a supercomputer. Simulating one second of 1% of human brain functionality took 40 minutes, which implies a gap of over five orders of magnitude from real-time simulation of the whole brain [281]. Also in the use
of power (in W), there was over five orders of magnitude
difference since the power consumption of the
supercomputer was 12.66 MW, but the power consumption
of the human brain is only 14.6 W [97]. The reason for this
kind of large difference is a mismatch between the
computing task and the computing platform. The human
brain is not based on floating point multiplications. It has
been estimated that the number of synapses in the brain is 2.4 × 10^14, the average firing rate of 1 ms spikes is 5 Hz, and each spike corresponds to 50 floating-point operations [97]. Thus, the human brain has a computing speed equivalent to 6 × 10^16 floating-point operations per second (FLOPS), and
the energy consumption per operation is 0.24 fJ, which is five
to six orders of magnitude lower than that of the most energy
efficient digital signal processor. On average, only 3% of the neurons are used at the same time [282]. Some authors
have proposed massively parallel architectures that would
have brain-like speeds [283]. A new paradigm for computer
architectures is based on memcomputing, inspired by the
brain [284].
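The brain-capacity figures quoted above can be checked with a short back-of-the-envelope calculation; all numbers are taken from the text, and the variable names are our own:

```python
# Reproducing the brain-capacity estimate quoted above (figures from [97] as cited).
synapses = 2.4e14        # estimated number of synapses in the brain
firing_rate_hz = 5       # average firing rate of 1 ms spikes
flops_per_spike = 50     # floating-point operations attributed to one spike
brain_power_w = 14.6     # power consumption of the human brain

flops = synapses * firing_rate_hz * flops_per_spike  # equivalent computing speed
energy_per_op_j = brain_power_w / flops              # energy per operation

print(f"{flops:.1e} FLOPS")                           # 6.0e+16 FLOPS
print(f"{energy_per_op_j * 1e15:.2f} fJ per operation")  # 0.24 fJ per operation
```

The result, 6 × 10^16 FLOPS at 0.24 fJ per operation, matches the five-to-six orders of magnitude gap to digital signal processors noted in the text.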
Deep Blue, a chess-playing computer which won against
the human champion in 1997, is a milestone in smart
computing technology [285]. Deep Blue was followed by
Watson which in 2011 won against two champions of the US
TV quiz show with question-answering (QA) technology.
AlphaGo, in turn, was able to win in 2016 against one of the
best Go players in the world. A more advanced version is
AlphaGo Zero that can improve itself as a player when only
the rules of the game are given [286].
B. DECISION THEORY
Decision theory is a theory of choice, and its origins are in
the 1600s [106], [108], [109]. Decision theory is concerned
with the choices of an individual decision maker. A subfield
of decision theory is operations research (1936), where operations originally referred to military operations. Situation awareness is an essential part of operations research to support human decision-making in a dynamic environment [57]. The term has been used since World War I.
Situation awareness includes the perception of the elements
in the environment in time and space, comprehension of their
meaning, and the projection of their status to the near future.
The starting point of multiobjective optimization (MOO)
was Kuhn and Tucker’s paper (1951) about vector
optimization [106]. Modern multiple-criteria decision-
making (MCDM) theory started officially from the
conference Multiple Criteria Decision Making organized in
1972 [8].
The MCDM problems typically lack a unique solution. Hence, the solution has to be determined based on the subjective preferences of the decision maker, that is, the decision maker selects the preferred one of the reasonable alternatives [62]. The reasonable alternatives can be defined more
precisely by using the concept of Pareto efficiency [62],
[114]. Pareto optimal solutions (1906) are those solutions
where no improvement in any objective can be made without
worsening some other objective. The set of reasonable
alternatives is typically identified using MOO methods,
including scalarization algorithms (e.g., weighted sum or
weighted product algorithm). The function used in
scalarization is sometimes called the utility function and its
output is called the utility [234]. Alternatively, the set of
reasonable alternatives can be found using heuristic
methods, metaheuristic algorithms (e.g., evolutionary
algorithms, genetic algorithms, swarm intelligence, neural
networks), and other optimization algorithms (fuzzy logic,
game theory). Heuristic approaches differ from MOO since
they are not guaranteed to find the optimal solution but they
often find an approximate solution which is good enough.
The MOO methods are thoroughly reviewed in [8]. The general problems in optimization include a search space so large as to forbid an exhaustive search for the optimum, the modeling of a complicated problem, changes over time that require an entire series of solutions, and constraints in the form of fundamental and practical limits [163].
Game theory by von Neumann and Morgenstern (1947)
[8], [106] is concerned with interactions of decision makers
whose decisions affect each other. In game theory, decision
makers, simply referred to as players, are assumed to be
rational, which implies that they know each other’s strategy
and each of them tries to maximize their own utility. The
game will finally lead to the Nash equilibrium (1950), a situation where no player can gain anything by unilaterally changing his own strategy. In general, only if all the players cooperate may the system converge to the Pareto optimal equilibrium: the performance of no player can be improved without decreasing the performance of some other player. In practical situations, the players typically have
incomplete information. The problem of incomplete
information was solved by Harsanyi (1967) [106]. He
assumed that the information available can be modelled as a
Bayesian probability distribution on variables of interest.
Thus, the game with incomplete information can be
transformed into an equivalent Bayesian game with complete
information.
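The Nash-equilibrium idea can be illustrated with a minimal sketch. For a hypothetical two-player game in normal form (here a prisoner's-dilemma payoff matrix, where higher payoff is better), an outcome is a Nash equilibrium if no player can gain by unilaterally changing its own strategy:

```python
# Minimal sketch: pure-strategy Nash equilibria of a 2x2 game found by
# checking unilateral deviations (hypothetical prisoner's-dilemma payoffs).
payoffs = {  # (row action, column action) -> (row payoff, column payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ("C", "D")

def is_nash(r, c):
    """True if neither player gains by unilaterally changing its own strategy."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in actions)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # [('D', 'D')]
```

The example also shows the Nash/Pareto distinction made in the text: (D, D) is the Nash equilibrium, yet it is not Pareto optimal, since cooperating at (C, C) would improve both players.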
Game-theoretic considerations provide useful insight
into decision problems where a decision maker faces
multiple, possibly conflicting, criteria. The main idea is quite simple: if all players cooperate, the result is the same as if a single decision maker solves an MCDM problem [62], [287], [9], [110].
Decision theory has produced such widely used
principles as linear programming or optimization by
Dantzig, Kantorovich, and Koopmans (1947), nonlinear
programming by Kuhn and Tucker (1951), dynamic
programming or multistage optimization by Bellman (1950),
and statistical decision theory by Wald (1950) [106]. Graph
theory by Euler (1736) and König (1936) and fuzzy sets and
fuzzy logic by Zadeh (1965) are also products of decision
theory.
C. CONTROL THEORY
Control theory is a theory of automata, based on the concept of feedback, although some control systems are open-loop systems [10]. Control may be centralized, decentralized, or
distributed. In a decentralized system, the local control units
are autonomous [288] and compete with each other, but in
distributed systems, they cooperate at least with their
neighbors at the same level in the hierarchy [29], [64].
Modern systems based on remote control are called
networked control systems (NCSs), which now include
haptic information, i.e., both kinesthetic (sense of
movement) and tactile (sense of touch) information [28],
[244]. The NCS is a general concept that includes both
control over network and control of a network. Robotics is
heavily based on control theory. A robot is a machine
capable of carrying out a complex series of actions
automatically [122].
Feedback has been used since antiquity [120]. Feedback was first applied in a water clock by Ktesibios in the third century BCE. In the modern period, the first to use feedback was Drebbel, who invented the thermostat early in the 1600s [6]. Feedback was also included in Watt’s steam
engine (1769) and later analyzed by Maxwell (1868) but the
analysis was forgotten until Wiener (1948) [128], [121].
Lyapunov presented a theory for stability of nonlinear
dynamic systems in 1892, but this result was also forgotten
until about 1960 [289]. The now common proportional-integral-derivative (PID) controller was developed by Minorsky (1922) by studying the steering of a ship [10].
The negative-feedback amplifier was invented by Black (1927), but his patent application was accepted only in 1937 since the idea still looked controversial at that time [121]. Feedback was used to reduce distortion. The starting point of modern control theory and intelligent systems is Nyquist’s stability analysis of feedback [53], after which the generality of this old concept was finally realized. Nyquist was Black’s assistant. Feedback can help to find a solution
iteratively when it is not possible to find it directly [290],
[111]. In the iterative approach, a solution is kept in a
memory and updated in each iteration. Feedback is used for
two general purposes, namely for finding a solution in an
unknown noisy environment and for tracking changes in the
environment. Feedback must be slow enough so that it can
reduce the effects of noise, but fast enough so that it can track
the changes in the environment. A delay within the control
loop has detrimental effects on the stability and convergence
of the loop [291], [145]. Delays slow down the loop, and
such delays should be avoided as much as possible. For
example, for machines on or near the Moon, the unavoidable
delays are typically 3 s.
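The destabilizing effect of loop delay can be illustrated with a minimal simulation; the gain, delay, and step counts are illustrative. A simple integrating controller tracks a constant target, first with fresh measurements and then with stale ones:

```python
# Minimal sketch (illustrative parameters): the same feedback gain that
# converges with fresh measurements destabilizes the loop when the
# measurement is delayed.
def run_loop(gain, delay_steps, n_steps=60):
    target, x = 1.0, 0.0
    buf = [0.0] * (delay_steps + 1)   # queue of delayed measurements of x
    traj = []
    for _ in range(n_steps):
        observed = buf[0]             # controller sees a stale measurement
        x += gain * (target - observed)  # feedback correction
        buf = buf[1:] + [x]
        traj.append(x)
    return traj

print(run_loop(0.5, 0)[-1])                         # converges close to 1.0
print(max(abs(v - 1.0) for v in run_loop(0.5, 4)))  # 4-step delay: oscillates and diverges
```

With zero delay the error halves on every step; with a four-step delay the same gain exceeds the stability limit of the loop, so the error grows instead of shrinking, matching the detrimental effect described above.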
Feedback algorithms are often called closed-loop
algorithms. In some cases, it may be necessary to use open-
loop, feedforward, or block algorithms that try to find the
solution either directly in one shot or iteratively, but without
using the sensing output from the environment [262], [133],
[10]. Such systems work blindly [6], using for example a predefined time, as in a washing machine or in traffic control [10]. The advantages and disadvantages of
open-loop control are summarized in [10]. Open-loop
control is simpler than closed-loop control, there is no
stability problem, and no need to measure the process output.
Moreover, open-loop control has the benefit of being faster than closed-loop control, which is important in many applications since closed-loop algorithms may be very slow, especially when there are many variables to be controlled.
However, for open-loop control, the convergence time may
not be easy to predict and depends on the number of degrees
of freedom or the number of variables [133]. In addition,
disturbances may cause errors and frequent calibration may
be needed.
In communications, open-loop control is sometimes used
in synchronization and channel estimation [292] and
transmitter power control where the reciprocity of the
channel is used [293]. In transmitter power control (TPC),
the closed-loop control may be impractical if the delay in the
feedback channel is excessive. Open-loop control is used in some biological systems such as the human brain [294].
In many cases, the optimum cannot be found with a
feedback loop only, but it is found in two phases called
acquisition and tracking [295], [208] where open and closed-
loop control are combined. There may be local optima that
must be avoided. In acquisition, a rough estimate is first found by using an open-loop algorithm based on an exhaustive grid search with a suitable resolution. An alternative is random search [5]. Acquisition is often based on a correlation process; once a rough estimate is available, the tracking phase may start. Correlation is an optimal approach if the additive noise is white and Gaussian. Tracking is usually iterative and based on a feedback loop, and therefore it may converge to the global optimum only when it starts close enough.
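The two-phase idea can be sketched as follows for a hypothetical multimodal cost function: open-loop acquisition by exhaustive grid search yields a rough estimate, after which closed-loop tracking by iterative gradient descent converges to the nearby optimum:

```python
import math

# Two-phase optimization sketch (the cost function and parameters are
# hypothetical): acquisition by grid search, then tracking by gradient descent.
def cost(x):
    return (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)  # sine term creates local optima

# Acquisition: open-loop exhaustive grid search with a suitable resolution.
grid = [i * 0.5 for i in range(-10, 21)]   # -5.0 .. 10.0 in steps of 0.5
x = min(grid, key=cost)                    # rough estimate in the right basin

# Tracking: iterative feedback loop started close enough to the optimum.
step = 0.01
for _ in range(200):
    gradient = 2.0 * (x - 3.0) + 10.0 * math.cos(5.0 * x)
    x -= step * gradient                   # move against the gradient

print(round(x, 2))  # ≈ 3.44, the global minimum
```

Starting the gradient descent from an arbitrary point instead of the acquisition estimate could leave it trapped in one of the local minima, which is exactly why the coarse open-loop phase precedes the closed-loop phase.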
In communications, the signal-to-noise ratio per symbol
is usually rather small, and averaging of channel parameters
must be done over several symbols [296]. The same rule is
valid for open-loop systems. Usually, the feedback works
iteratively, but in some cases, a direct or one-shot solution
may be used in the feedback, updated every now and then
[111].
Adaptive systems became popular at the end of the
1950s. Adaptive control was first presented in [297] by
borrowing the term from biology. Adaptive control systems
are systems that monitor their own performance and adjust
some of their variables in the direction of better performance.
The environment may be slowly changing. The workhorse in
adaptive systems is the least-mean square (LMS) algorithm
[263], [208], [134], which was devised by Widrow and Hoff
in 1959 but whose original version was presented by von
Mises and Pollaczek-Geiringer (1929) and Robbins and
Monro in 1951 [132], [134], [238]. An early review paper is
[298], whereas [299] includes a bibliography of 49 papers.
Saridis derived in 1988 optimal control by using the entropy
concept [5]. He showed that the entropy is minimized by
using a three-level hierarchy if the process has widely
different time resolutions or scales, i.e., having at least an
order of magnitude difference, which makes the hierarchical
control possible. A similar conclusion can be drawn for
different spatial and frequency resolutions. The history of
self-organizing control was started by Mesarovic [63] and
further developed by Saridis [300].
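A minimal sketch of the LMS algorithm described above, with a hypothetical 3-tap system to be identified and illustrative parameters; the filter monitors its own error and adjusts its weights in the direction of better performance:

```python
import random

# Minimal LMS sketch (hypothetical system and illustrative parameters):
# identifying an unknown 3-tap linear system from noisy observations.
random.seed(1)
true_w = [0.8, -0.4, 0.2]   # unknown system to be identified
w = [0.0, 0.0, 0.0]         # adaptive filter weights
mu = 0.05                   # step size

for _ in range(5000):
    x = [random.gauss(0, 1) for _ in true_w]                            # input vector
    d = sum(a * b for a, b in zip(true_w, x)) + random.gauss(0, 0.01)   # desired output
    y = sum(a * b for a, b in zip(w, x))                                # filter output
    e = d - y                                                           # error signal
    w = [wi + mu * e * xi for wi, xi in zip(w, x)]                      # LMS update

print([round(wi, 2) for wi in w])  # converges close to [0.8, -0.4, 0.2]
```

Each update nudges the weights along the instantaneous gradient of the squared error, which is the stochastic-approximation idea going back to Robbins and Monro mentioned in the text.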
The first learning control systems were summarized in
[301], [302], [303]. According to [302], the problem of
unsupervised learning was defined in machine learning and
pattern recognition by Abramson (1963). The first
unsupervised learning control systems were described in
[304], [302], [305]. One of the first books on adaptive and
learning control systems was [132] whose original version
was published in Russian in 1968. Tsypkin used the term
self-learning to describe unsupervised learning. According to
him, the term “learning without a teacher,” a term used at
that time for unsupervised learning, is a misnomer since in
self-learning there is really a teacher that has defined the
methodology for learning [305]. Intelligent control was
originally proposed in [306], but a clear definition was given
in [46]. The same definition was later adopted in 1993 by the IEEE Control Systems Society for intelligent control [307]. The term autonomous control became popular in the 1980s [135], [128], [12]. Originally, the term intelligent
control was used instead of autonomous control [126]. The
authors of [135] have the opinion that autonomy is the real objective and intelligent controllers are one way to achieve it. The first mobile robots already had a form of autonomy.
The term “robot” was first used to denote an automaton
in 1917 by J. Čapek [6]. The actual history of robotics started
from a robotic arm by Pollard (1938) [122]. The first tests
with remote control (at that time known as teleoperation)
were made by Goertz who started his work on an
electronically controlled telemanipulator in about 1945 and
published it with Thompson in 1954 [33], [35], [241]. Tesla
demonstrated a radio-controlled miniature boat already in
1898. In 1966, Ferrell published a paper on contact force feedback, the first to consider delays in control loops. It was the first rudimentary haptic kinesthetic feedback system where a delay as small as 100 ms was shown to destabilize the teleoperator. One of the first NCSs was the Controller Area Network (CAN) developed in 1983-1986 [34]. Anderson and
Spong (1989) published a paper solving for the first time the
stability problem caused by the feedback delay [35]. In some
industrial applications, a 1 ms control delay is a requirement
[31]. Remote control over the Internet was started by
Goldberg (1995) [147].
The reaction times of human senses are summarized in
[105]. The speed of electrochemical signals in human nerves
is in the order of 10-100 m/s. If the speed is 100 m/s, the
signal propagates 10 cm in 1 ms. Our ears have a reaction
time of 140-160 ms, including the delay of the brain, our eyes have a reaction time of 180-200 ms, and the sense of touch has a reaction time of 155 ms. An auditory stimulus takes 8-10 ms to reach the brain and a visual stimulus takes 20-40 ms, which explains the difference in reaction times. The bulk of the reaction time is used by the brain to make a decision. The
shortest allowed reaction time of a sprinter is 100 ms; anything shorter is ruled a false start according to the International Association of Athletics Federations (IAAF) [308].
According to [146], the first review paper on tactile
interaction was published in 1984. The number of papers
started to grow in 1991. McCauley and Sharkey (1992)
published the first paper on cybersickness caused by the
closed-loop delay in the control loop that carries information
on the sensations of sight and touch [309], [151], [31]. The
delay should not be much larger than 1 ms. A more detailed, up-to-date discussion is in [153], [310]. The requirement
depends on the task at hand. The most demanding tasks in
this respect are dragging on a touch screen (maximum delay
of 1 ms) and inking or line drawing using a stylus (maximum
delay of 7 ms). Mouse control allows 60 ms delay [311]. The
sense of touch can differentiate between two stimuli just 5
ms apart [149], [151]. Our ear can resolve clicks separated
by 0.01 ms, while the eye requires 25 ms [312]. Touch is
highly sensitive to vibrations up to 1 kHz with the peak
sensitivity at about 250 Hz. In 2001, Elhajj et al. published a
review paper on haptic feedback over the Internet,
complementary to the conventional audiovisual feedback
[35], [241].
The development in robotics has continued from a
robotic arm towards mobile robots, bipedal walking robots,
and finally to cooperating robots [122]. An important branch
of robotics is microrobotics, which is leading to
nanorobotics. The names describe the size of the robots in
the micrometer and nanometer range, respectively.
Heinlein first proposed in 1942 a primitive telepresence
master-slave manipulator system. The term telepresence was
coined in a 1980 article by Minsky [21], who outlined his
vision for an adapted version of the older concept of
teleoperation that focused on giving a remote participant a
feeling of actually being present at a different location. One
of the first prototypes on virtual reality was developed by
Heilig (1962).
The results of control theory and computer science are
nicely combined in autonomous robots, which are agents at
the same time [25], [11], [12], [26]. Early mobile robots were
the two tortoises built by Walter in 1948-1949 [12], [313].
They were also the first autonomous robots. One of the first
robot architectures in the 1960s was Shakey by Nilsson
(1969). In the beginning, only one hierarchy level was used,
but such robots were slow and ponderous. The most common
mixed architecture is based on the layer hierarchy. Brooks’
reactive subsumption architecture (1986) was the first
significant improvement where the layers were connected to
the sensors and actuators directly as in [136], leading later to
Arkin’s autonomous robot architecture (1990). In [11], [12], several architectures are summarized to express a computational model of intelligence. The authors of [11] use a model called the Real-Time Control System (RCS), originally developed by Barbera (1979). In modern proactive
architectures originated by Firby (1989), the lowest layer is
responsible for the movement. The next layer is responsible
for choosing the current movement, and the uppermost layer
is responsible for achieving long-term goals within resource
constraints using planning. The newest developments are summarized in [190], [192], [61].
One of the first outdoor navigation systems was the
autonomous car reported by Tsugawa in 1979 [155], [158].
The car could drive autonomously at 30 km/h. It used a pair
of stereo cameras mounted vertically to detect expected
obstacles. The navigation relied mostly on obstacle
avoidance. Such self-driving or driverless cars are a form of
autonomous ground vehicles (AGVs). Similar principles
have been later developed for unmanned aerial vehicles
(UAVs) and autonomous underwater vehicles (AUVs).
Navigation can be map-based or mapless. Unmanned surface
vehicles (USVs) or autonomous surface vehicles (ASVs)
such as autonomous ships operate on the surface of the water
without a crew [159]. A large proliferation of USV projects
appeared in the 1990s.
D. COMPUTER SCIENCE
Computer science includes two areas that are of interest to
us, namely artificial intelligence and computational
complexity analysis. The goal of computational complexity
analysis is to find efficient algorithms in terms of time and
space or the size of the memory [13]. The computational
complexity analysis started from the work of Hartmanis and
Stearns (1965) and was later complemented by Cook, Levin,
and Karp [171].
Artificial intelligence is a science of intelligent agents
[6]. Agents sense their environment and perform actions.
Machine intelligence is divided into artificial and computational intelligence [314]. Artificial intelligence refers to conventional hard computing methods using binary
logic, whereas computational intelligence includes such soft
computing methods as neural networks, evolutionary
computing (e.g. genetic algorithms), fuzzy computing, and
biologically inspired algorithms (e.g. swarm intelligence)
[170]. Some of these methods may be classified as belonging to decision theory, but they are applied extensively in computer science.
Augmented intelligence, also called intelligence amplification, is complementary to artificial intelligence, referring to the use of computer science in augmenting human intelligence. Ambient intelligence (AmI) brings
intelligence to our everyday environments and makes those
environments sensitive to us [178], [179]. The term was
coined by the European Commission in 2001. Ambient
intelligence research builds upon advances in sensors and
sensor networks, context-aware computing, pervasive or
ubiquitous computing, and artificial intelligence. Context
awareness is defined as the use of context to provide task-
relevant information or services to a user [180], [240], [181].
The term was introduced for the first time by Schilit in 1995
[240]. Cyber-physical systems (CPSs) are integrations of
computation and physical processes, combining computing,
control, and communications [195], [23], [196]. The
National Science Foundation (NSF) has defined CPS as
systems “where physical and software components are
deeply intertwined, each operating on different spatial and
temporal scales, exhibiting multiple and distinct behavioral
modalities, and interacting with each other in a myriad of
ways that change with context” [196]. CPS covers NCS
whose origin is in control theory [28]. The predecessors of
CPS were embedded systems [315], [316].
The starting point of computer science was the digital
computer (1946) and the supercomputer (1958), which led to
the concepts of artificial intelligence (1956), machine
learning (1959), and computational intelligence (1994)
[106], [122], [314], [162]. Neural networks were introduced even earlier by McCulloch and Pitts (1943) and the first perceptron rule was developed by Rosenblatt (1960) [317],
[318], [319]. An important step was the multilayer neural network, various versions of which have been developed since 1965; the term deep learning has been used for large multilayer networks since 1986. The pioneers of artificial intelligence
were Newell, Simon, McCarthy, Minsky, and Samuel.
McCarthy proposed the term artificial intelligence and later
Samuel proposed the term machine learning. The first
practical AI systems were expert systems by Feigenbaum
(1975) [122].
The terms cloud and edge computing were first used in
about 1996. Edge computing refers to computing in devices.
Fog computing is an intermediate form between cloud and edge computing, in which cloud resources reside in the local area network near the edge of the network.
The origins of big data are in pattern recognition which
has a long history in statistics [320], [321]. The first papers
in the IEEE literature were published in 1954 [322].
Knowledge discovery and data mining started in about 1990.
The term big data was first suggested by Mashey (1998) after
which it became popular [323]. It has applications in
artificial intelligence. The appearance of Internet of Things
(IoT, 1999) has made the available data especially large.
Software agents were developed by Hewitt (1977). This led to multiagent systems (MAS), which in about 1980 were called distributed AI [187]. Multiagent systems pose
the problem of distributed learning. Mobile agents (1995) are
a more recent development [324]. They were first called
itinerant agents. More recently, autonomic computing (2001)
[50] and cyber-physical systems (2006) [23] were
developed. The first conference on distributed computing
was organized in 1982, but the history is much longer and
related to the first computer networks. According to [193],
the origin of autonomic computing was a project called
Situational Awareness System (SAS) funded by the Defense
Advanced Research Projects Agency (DARPA) in 1997. The
project’s aim was to create personal communication and
location devices for up to 10,000 soldiers on the battlefield.
Decentralized multihop ad hoc routing was used in a difficult
environment with the goal to keep round-trip latency below
200 ms. Thus, autonomic computing is a form of self-
organizing computing and its history is in distributed
computing.
E. COMMUNICATION THEORY
Communication theory is a theory of data transmission
through a network of links [15], [16]. Usually, redundant control information is also needed to recover the transmitted data.
Programmable networks include active networks (1996)
and software-defined networks (1998) [214]. A software-
defined network (SDN) is a programmable network that
separates the control plane from the data plane [214], [217],
[218]. The network intelligence is logically centralized in
software-based controllers and the network switches become
simple forwarding devices. The SDNs represent centralized
systems.
A self-organizing network (SON) is an autonomous
network organized without any external control unit [224],
[51]. A self-organizing network “will create its own
connections, topology, transmission schedules, and routing
patterns in a distributed manner” [222]. Routing is an
example of self-organization. The individual nodes interact
directly with each other in a distributed fashion. The idea of
self-organization in communications may be traced back to
packet switching (1964) and dynamic channel assignment
(DCA) (1971) [253], [199]. Packet switching eventually led
to the invention of Internet Protocol by Cerf and Kahn
(1974), which is the basis of the Internet. An ad hoc network
is a collection of, possibly mobile, communication nodes that
wish to communicate but have no fixed infrastructure
available [223]. Ad hoc networks are therefore always
SONs. They have also been called packet radio and multihop
networks. Self-organization can include centralized control
if the control unit is within the network. Usually, SONs are
implemented using distributed control [222], [224].
Ideas from different disciplines are now merged for
example in Tactile Internet [30], [31], which has its roots in
teleoperation [33], networked control systems [28], and
haptic communications [35]. In 2010, Lenay mentioned the
term Tactile Internet for the first time in the scientific
literature, and the concept was introduced in more detail in
[31], [32]. Tactile Internet is a version of NCS with the haptic
feedback, both originally developed in control theory [28].
Some authors have proposed a more general term Haptic
Internet [244]. Time sensitive networking (TSN) is a set of
standards being developed since 2012 to support time
synchronized low latency traffic [24]. Internet of Skills
combines the results of communications, robotics, artificial
intelligence, and Tactile Internet [325]. Skills are capabilities
or behaviors that a human being or an agent may have [25],
[26]. Examples include remote surgery, education, and
repair.
Autonomic computing resulted in the term autonomic
networks whose special case is cognitive networks [51],
[232], [236]. Autonomic networks are essentially self-
organizing networks. Regarding bionics, bio-inspired
networking and signal processing [265], [266] have been
studied for some time.
Much of the theoretical basis of communications is in
graph theory by Euler (1736), electromagnetic field theory
by Maxwell (1861), queuing theory by Erlang (1909), time-
frequency analysis by Gabor (1946), information theory by
Shannon (1948), and statistical decision theory by Wald
(1950). Before the statistical decision theory, optimization
was done by using the signal-to-noise ratio without any
statistical information [198]. The optimal receiver was the
matched filter which was invented many times. One of the
earliest inventions was made by W. W. Hansen of Stanford
University in 1941 although usually North (1943) is seen as
the original inventor [208], [198]. Matched filtering
corresponds to correlation. More generally, the optimal
receiver is the maximum a posteriori probability (MAP)
receiver by Woodward and Davies (1952). The graph theory
and statistics were combined in random graphs, developed
by Erdős and Rényi (1959) [106]. We concentrate on
intelligent aspects used in communications, including
hierarchy and feedback.
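The correspondence between matched filtering and correlation noted above can be sketched as follows; the pulse, noise level, and delay are illustrative. The receiver correlates the noisy signal with the known pulse, and the correlation peak estimates the pulse position:

```python
import random

# Matched filtering as correlation (illustrative data): detecting a known
# pulse embedded in white Gaussian noise.
random.seed(7)
pulse = [1.0, -1.0, 1.0, 1.0]                    # known transmitted pulse
delay = 5                                        # true position of the pulse
rx = [random.gauss(0, 0.2) for _ in range(16)]   # white Gaussian noise
for i, p in enumerate(pulse):
    rx[delay + i] += p                           # embed the pulse in the noise

# Matched-filter output at lag n = correlation of rx with the known pulse.
corr = [sum(rx[n + i] * p for i, p in enumerate(pulse))
        for n in range(len(rx) - len(pulse) + 1)]
print(corr.index(max(corr)))                     # estimated pulse position
```

For white Gaussian noise this correlator is the optimal detector, as the text states; the peak lag recovers the pulse delay, which is also why acquisition in synchronization is often correlation-based.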
One of the biggest joint efforts of computer and communication scientists was the Open Systems Interconnection (OSI) model, which was finalized in 1984
[326], [15], [16]. It is a layer hierarchy that includes seven
layers, that is, the physical, data link, network, transport,
presentation, session, and application layers. The Internet is
using a modified version of this model, now called the Transmission Control Protocol/Internet Protocol (TCP/IP) model, whose most important protocols, TCP and IP, are in the transport and network layers, respectively. The presentation
and session layers are excluded from the TCP/IP model. One
trend is to simplify the protocols for specific purposes, for
example [327]. The name of the protocol data unit (PDU)
depends on the layer [15], [16]. In the physical layer, the
PDU is a bit or symbol, in the data link layer, it is a frame,
and in the network layer, it is a packet. A data packet is
similar to a postal packet in the sense that it carries the
address of the destination with it. Each packet is routed in the
network layer from the source to the destination through the
network, and the packets are collected and put in the right
order in the transport layer. In the TCP/IP model, the PDU
in the network layer is a datagram, in the transport layer, it
is a segment, and in the application layer, it is a message or
bit. Hierarchy has been used in various other forms in
communication networks, including framing [15], routing by
Kleinrock (1977) [248], and network synchronization by
Lindsey (1980) [246], [247].
The use of feedback in communications started from the
phase-locked loop (PLL) by Appleton (1923) [328],
automatic gain control (AGC) by Wheeler (1925) [329],
automatic frequency control (AFC) by Travis (1935) [330],
and delay-locked loop (DLL) by Guanella (1938) [331]. One
of the first adaptive receivers was the Rake receiver by Price
and Green (1958) [208]. It is a form of adaptive matched
filter and an adaptive multipath diversity combiner, but the
authors did not use the term “adaptive” since it was not
commonly used in engineering at that time. Instead, the term
“automatic” was used. The adaptive equalizer was developed
by Lucky (1965, 1966) to combat intersymbol interference
(ISI) [332], [264]. Applebaum (1966) and Widrow (1967)
were the first to write about adaptive antenna beamforming
[133], [134]. Control theory is also widely used in adaptive
transmission, for example in TPC and adaptive modulation
and coding (AMC) [333], [202]. TPC research started from
the work of Hayes (1968), which was later complemented by
adaptive bit rate control by Cavers (1972). The ideas were
combined by Hentinen (1974). Adaptive automatic repeat
request (ARQ) schemes started from the work of
Mandelbaum (1972) [334], and AMC in fading channels from
the work of Vucetic (1991) and Alamouti and Kallel (1994)
[333]. Such adaptive transmitters usually need a feedback
channel from the receiver to the transmitter.
Most of the feedback mechanisms belong to the physical
or data link layer. Feedback is also used in flow control and
congestion control in the transport layer [16]. Flow control
means sender and receiver speed matching because of finite
buffers. Congestion control is needed because of the finite
capacity of the network. Feedback can also provide
information on throughput, delay, jitter, reliability, and other
network properties in the transport layer [15]. The properties
form the state of the network.
In wireless communications, spectrum management is
used to avoid interference within a system and between
systems [335], [256]. Spectrum allocation has traditionally
been static in the form of spectrum regulation [336], [337].
After the introduction of cognitive radios by Mitola (1999)
[58], the idea of dynamic spectrum allocation (DSA)
appeared to manage the use of spectrum within a system
[254], [256]. An earlier similar term is radio resource
management (RRM) used for resource management between
different systems [335]. Usually, the radio resource of
interest is spectrum. A precursor of DSA was dynamic
channel assignment (DCA) by Cox and Reudink (1971) [253]. DCA
and RRM were precursors of the cognitive radio concept.
Applications of artificial intelligence in communications are
introduced in [239], [231]. In addition, the use of agent-based
optimization and big data in communications is considered in
[338], [339].
Network traffic monitors and network analyzers have a
long history, dating back to about 1980. The general term network
awareness was proposed in [340]. It is quite similar to
situation awareness in operations research [57] and context
awareness in computer science [181]. Network sensing or
monitoring is the process of collecting information about
network performance. The term network monitor refers to an
entity in charge of the network-sensing tasks in a computer
or network. Existing network-aware systems monitor such
variables as available throughput, latency, packet loss rate,
and the load for different computers in the network.
III. RECOGNIZED CONNECTIONS BETWEEN DISCIPLINES
The connections that we have observed between the five
disciplines are presented in this section. The basic concept is
the control loop (Fig. 1b); in all the disciplines, the plant is
controlled to achieve or keep some state of the system or to
achieve some behavior. When a single decision block is not
sufficient, the blocks must be hierarchical and distributed.
Generally, hierarchy is used to limit the complexity of a
system, especially when there are different spatial, time, and
frequency resolutions [63]. A decision block usually has a
limited decision capability, and therefore the overall goal of
the system is divided into more manageable subgoals. We
focus on control hierarchy, that is, how the decision-making
is organized and how control signals flow in the hierarchy.
Hierarchies are also important for conceptual analysis to
show explicitly the relationships between different concepts
and the different terminology in different disciplines.
Systems are organized in a hierarchy according to the
complexity of the decision block. The classifications
emphasize differences rather than excluding the possibility
that a system belongs to more than one class [63]. Many of
the ideas were first developed in control theory, but they are
useful in all disciplines. Communication networks have a
spatial extent by definition since the users are spatially
distributed, and the networks enable distributed control and
computing. We also include basic resources in the analysis.
The fundamental limits will lead us to multiple-criteria
decision-making.
A. CONTROL LOOP
Many of the most advanced systems in Fig. 2 are based on
the feedback concept. A general definition of feedback is
“the transmission of information about the actual
performance of any machine (in the general sense) to an
earlier stage in order to modify its operation” [4]. More
narrowly, feedback control refers to “an operation that, in the
presence of disturbances, tends to reduce the difference
between the output of a system and some reference input and
does so on the basis of this difference” [10]. We prefer the
more general definition from [4] since some feedback
systems based on unsupervised learning do not need a
reference signal, but their goal is to optimize some
performance criterion, possibly observing certain
constraints. In negative feedback, the modification is such as
to reduce the difference between actual and desired
performance; positive feedback in general induces instability
by reinforcing a change in performance. In a complex control
system, there may be a positive-feedback inner loop. Such a
loop is usually stabilized by the outer loop. Positive feedback
is used especially in some learning and self-organizing
systems.
The general feedback control loop consists of sense,
decide, act, and plant blocks [63], [124], [10], [6]; see Fig.
1b and, for an alternative form, Fig. 5. These four blocks form
the system being studied that can be called the control
system. The plant, i.e., the process, is the device or operation
being controlled. The control is realized with the sense,
decide, and act blocks. The plant itself may include control
loops that have been left outside the scope of the control loop
being studied. The interfaces between the blocks must be
defined by the observer. We focus on the decision block that
can be called control unit or controller. The plant is
controlled by performing actions according to the decisions
made by the decision block. The state including the
properties or performance of the plant is monitored with
sensors; this provides performance feedback [63], [4]. The
decision block may use external information in the form of a
set-point value or reference signal. The decision block is
goal-seeking, i.e., its purpose is to solve some possibly
constrained optimization problem so that the control system
obtains the maximum performance.
FIGURE 5. The control system in Fig. 1b in an alternative form. D refers to the decision block and P refers to the plant or process.
The sensor measures the state of the plant. The state of
the system is the total of all the measures of the performance
at a given time [7]. Usually, the state is expressed by using
certain KPIs such as throughput, delay, and reliability [65].
The desirable states are goals that are in general mutually
conflicting. Sensing is part of perception, a process by
which sensory input is transformed into structured
information or knowledge about the environment, useful for
reasoning [11]. In communications, the sensor is often
called the monitor [340]. The decision block essentially
makes choices of actions based on the sensed data. The
actuator executes the selected action. Acting may be also
called executing [26]. The decision block may include a
memory to which it collects the data and an input for a set-
point value or a reference signal. In self-organizing systems,
the actions change the structure of the system in an
autonomous way.
The decision block is a key element in the feedback loop.
In general, the block is deterministic or partially random. A
deterministic algorithm implies that when the input data are
the same, the output data are always the same. Because of
the feedback, the input data are changing.
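The sense, decide, act, and plant blocks described above can be sketched as a minimal loop; the idealized sensor, the proportional decision rule, the trivially simple plant, and all names and values below are our own illustration, not taken from the cited works:

```python
# Minimal sense-decide-act-plant loop (illustrative sketch).

def sense(plant_state):
    # Ideal sensor: measures the plant state directly, without noise.
    return plant_state

def decide(measurement, set_point, gain=0.5):
    # Negative feedback: the control signal is proportional to the error
    # between the reference input (set point) and the measured output.
    return gain * (set_point - measurement)

def act(plant_state, control):
    # Actuator: applies the control action to a trivially simple plant.
    return plant_state + control

set_point = 10.0          # reference input given externally
state = 0.0               # initial plant state
for _ in range(50):
    measurement = sense(state)
    control = decide(measurement, set_point)
    state = act(state, control)

# The negative feedback drives the plant state toward the reference input.
assert abs(state - set_point) < 1e-3
```

Each pass of the loop halves the remaining error, so the state converges geometrically to the set point; a positive-feedback variant (a negative gain) would instead amplify the error and make the loop unstable.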
B. HIERARCHY CONCEPT
An important general concept developed in control theory is
hierarchy. The human brain has a hierarchical structure
according to some theories [99]. A classification of
hierarchies is presented in Fig. 6 [63]. The terminology is
different in different disciplines. There are three main
hierarchies, including nested hierarchy (also called stratified
hierarchy), layer hierarchy (also called multilayer
hierarchy), and dominance hierarchy (also called
multiechelon hierarchy). In robotics, the layers are
sometimes called tiers [26]. The dominance hierarchy may
be also called organizational hierarchy or more loosely tree
hierarchy. We will use the descriptive terms nested, layer,
and dominance hierarchy. The term level is reserved as a
generic term referring to a hierarchy level. Mesarovic calls
the levels in nested, layer, and dominance hierarchies
strata, layers, and echelons, respectively. Any multilevel
system may in general be described by using more than one
of these hierarchies [124]. When considering control, these
three hierarchies can be used to organize control decisions
and the flow of sensing and control signals.
FIGURE 6. Three common hierarchies.
The nested hierarchy is often used as a description or
abstraction hierarchy [63]. Most natural and human-made
systems can be described using this abstraction hierarchy and
modularity. In the nested hierarchy, the lower level systems
are inside the upper level systems, whereas in other
hierarchies, lower level systems are below and not inside the
upper level systems. The layer hierarchy by Lefkowitz and
dominance hierarchy by Mesarovic are the two classical
decision hierarchies in control theory [127]. In the layer
hierarchy, each level has a different time resolution but
shares a single goal. In the dominance hierarchy, each level
has a different time, frequency, and spatial resolution, and
there can be many conflicting goals. The dominance
hierarchy is especially useful for centralized control when
the plant is spatially distributed. The layer hierarchy is a
special case of the dominance hierarchy [63].
Information exchange between spatially separated
decision units can be done by using a shared memory or
database, message passing, or their mixture [167]. The
shared memory architecture is often called the blackboard
architecture.
In the diagrams (Fig. 6), the decision blocks interact
directly only with the nearest hierarchy level. This is only
apparent, since any of the decision blocks may transmit
information to any other level. For example, Meystel’s
multiresolutional architecture is a special case of the
dominance hierarchy where each of the levels communicates
directly with the process but with a different resolution [136].
A system can be modeled at four abstraction levels,
namely, functional, behavioral, structural, and physical
levels (Fig. 7). We have combined the terminology used in
different disciplines [7], [20], [341], [342] to give a unified
picture. The functional level describes the input-output
relationship of the system. It corresponds to the systems
view. The model at the functional level may be called a
mathematical or black-box model having no internal
behavior or structure that would represent the reality [343].
At the behavioral level, the behavior is defined as a set of
successively attained states of the system where the state
includes all the properties of a system at a given time [7]. In
our case, the properties are measured with performance
metrics. Some authors consider functional and behavioral
levels to form one level. At the structural level, the structure
is the set of parts and their relationships. The structure is
“deeper” in the system and therefore more difficult to change
than the behavior. This is done in self-organizing systems
where organization is the same as structure. The structural
level is sometimes called architectural level. The model in
the behavioral and structural levels is called a theoretical
model, more detailed than the mathematical model [343].
The physical level of the system is located below the
structural level defining the physical parts of the system. It is
an analytical or reductionistic view to the system.
FIGURE 7. Typical abstraction levels of a system in different disciplines.
C. HIERARCHICAL AND DISTRIBUTED CONTROL
Hierarchical control was first extensively studied in [63].
Based on this work, the hierarchy in [126] has been so far the
standard approach in control theory, intelligent agents, and
robots [128], [25], [26] although the terminology may be
different. Control is often structured hierarchically because
the decision-making capability of a single control unit may
be limited, subsystems may be far from each other and have
limited communication with each other, there is a cost, delay,
or distortion in transmitting information, and subsystems
make decisions autonomously [127].
Usually, in a layer or dominance hierarchy (Fig. 6), three
functional levels are used: controller, coordinator, and
manager or organizer (Fig. 8). The hierarchy is based on the
use of different time and spatial resolutions, and it has also
strong analytical evidence based on the entropy concept [5].
However, many variations have been proposed with a
different number of levels, and with different names for the
levels as well; some examples are given below.
FIGURE 8. Hierarchical control in a layer and dominance hierarchy. Examples are given from robotics and communications.
The controller is a control system that has the lowest
complexity and the highest speed in the hierarchy and works
in real time with high accuracy. It has only a local view on
the system. It has no memory, and it uses high time,
frequency, and spatial resolutions [249]. The coordinator is
a learning system that does not work in real time and has a
short-term memory, which gives it the possibility to learn from
earlier experience. The manager or organizer is a self-
organizing system that has the highest decision complexity
and the lowest speed. It uses a long-term memory and low
spatial, time, and frequency resolution and has an overview
on the whole system. It can change the organization or the
structure of the system, thus the name organizer. In general,
the organizer must be capable of planning. Planning is a
reasoning process by which a system predicts the future and
selects the best course of action to achieve the goal [11].
The dominance hierarchy (Fig. 6) forms the basis of
centralized control in control theory. In communications
theory, the implementation can be based on an SDN [344].
Such a system works reliably but needs much control
information. The amount of control information is
minimized when the time, frequency, and spatial resolutions
at each hierarchy level are different. That is the case in most
practical situations. In robotics, the levels are sometimes
called behavioral, executive, and task-planning layer [26].
In the OSI model, the physical layer corresponds to the
controller (for example synchronization and adaptive
equalizers and estimators resemble control algorithms), the
data link layer corresponds to the coordinator (for example
medium access control is used to share the link resources
using a frame structure), and the network layer corresponds
to the organizer (for example routing). The time resolutions
can be mapped to each of the layers using the parameters in
the present fourth generation (4G) Third Generation
Partnership Project (3GPP) “Long Term Evolution –
Advanced” (LTE-Advanced) system [205]. The radio frame
of 10 ms is divided into ten subframes of 1 ms, each
including 14 orthogonal frequency division multiplexing
(OFDM) symbols having a subcarrier spacing of 15 kHz.
There are four essential time resolutions [205], [345]. These
are the sampling interval 32.5 ns, symbol interval 66.7 µs,
subframe 1 ms, and mean user interarrival time, which
depends on the prevailing network congestion condition and
is typically assumed to be much higher than 1 ms.
The physical layer processes samples and symbols. The
maximum sampling interval used, for example, in filtering
depends on the inverse of the maximum bandwidth of 20
MHz. The symbol interval used in modulation is the inverse
of subcarrier spacing and is large enough to minimize the
effects of intersymbol interference because of multipath
propagation. The data link layer operates with frames and
their subframes using the 1 ms resolution for, e.g.,
scheduling and link adaptation. The length of the subframe
corresponds to the coherence time of the channel with the
maximum terminal speed of 350 km/h at 2.1 GHz. Routing
belongs to the network layer. The functions of network layer,
such as admission control, are executed during the setup of
new user data flows within the time resolution that depends
on the observed mean user interarrival time. Typical
periodicity for some self-organizing decisions such as load
balancing is within 1-10 s. Thus, the different layers operate
with widely different time resolutions and bandwidths using
the idea of hierarchy in Fig. 8. In the quality of service (QoS)
requirements, the actual delays in the data link layer are
between 50 ms for real-time games and 300 ms for file
transfer.
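The quoted time resolutions follow from simple arithmetic, as the sketch below verifies. The 30.72 MHz sampling rate (which yields the 32.5 ns figure) and the 1/(2fD) coherence-time rule of thumb are standard LTE-related assumptions introduced here, not values stated above:

```python
# Back-of-the-envelope check of the LTE time resolutions.

subcarrier_spacing = 15e3                # Hz
symbol_interval = 1 / subcarrier_spacing # OFDM symbol interval (no cyclic prefix)
assert abs(symbol_interval - 66.7e-6) < 0.1e-6   # about 66.7 microseconds

sampling_rate = 30.72e6                  # Hz, standard LTE rate for 20 MHz bandwidth
sampling_interval = 1 / sampling_rate
assert abs(sampling_interval - 32.5e-9) < 0.1e-9 # about 32.5 ns

# Doppler spread and coherence time at 350 km/h and 2.1 GHz.
v = 350 / 3.6                            # terminal speed, m/s
fc, c = 2.1e9, 3e8                       # carrier frequency and speed of light
doppler = v * fc / c                     # maximum Doppler shift, about 680 Hz
coherence_time = 1 / (2 * doppler)       # rule of thumb, about 0.7 ms
assert 0.5e-3 < coherence_time < 1e-3    # same order as the 1 ms subframe
```

The check confirms that the 1 ms subframe is of the same order as the channel coherence time at the maximum terminal speed, which is why link adaptation operates at the subframe resolution.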
In hierarchical routing, the three functional levels are
data plane, control plane, and management plane [249]. The
data plane (also called the user plane) performs packet
forwarding and implements functions such as queue
management and packet scheduling. The data plane is local
to an individual router and operates at the speed of packet
arrivals. A primary task of the control plane is to compute
the shortest routes between IP subnets. The control plane
operates at the time resolution of seconds without having a
complete view of the whole network. The management plane
stores and analyzes measurement data from the network and
generates the configuration state on the individual routers.
The management plane operates at the time resolution of
minutes or hours and has the spatial view of the whole
network.
In network function virtualization (NFV) [346], an
additional level, orchestration, is located above management.
Management creates and manages the infrastructure called
virtual network that consists of virtual resources: data links,
routers, memory capacity, and processing capacity.
Orchestration collects the virtual resources needed by an
end-to-end service from the virtual resources provided by the
manager.
Control systems can be classified according to the degree
of centralization into centralized and decentralized control
systems and their mixed forms, from which the most relevant
ones are distributed or clustered control systems [64] (Figs.
9 and 10). A similar classification can be presented for
computing systems [29]. The trade-off between edge and
cloud computing [347] is closely related to the degree of
centralization. In centralized control, a central control unit
controls all parts of the system using sensor information. In
decentralized control, many goals compete with each other,
and the local control units do not communicate with their
neighbors although they may sense their environment.
Hence, the local control units are autonomous [288].
Distributed control is a mixture of centralized and
decentralized control. This control system type should not be
confused with the “hybrid system” term in control theory
and computer science that refers to a combination of
continuous-time and discrete-time systems. In distributed
control with many goals, the local control units cooperate
with their neighbors by forming clusters, coalitions, teams,
swarms, or platoons, and the clusters may compete with each
other. Distributed control has the performance advantage of
centralized control while maintaining scalability, ease of
implementation, and robustness of decentralized control
[64].
FIGURE 9. Degree of centralization. a) Centralized, decentralized, and distributed control. b) Distributed control combined with dominance hierarchy using clusters of control units. FIGURE 10. Properties of systems with different degrees of centralization.
The degree of centralization can be combined with the
hierarchy concept. Fig. 9b [63] presents an example of
combining distributed control with dominance hierarchy.
Purely decentralized control schemes do not have enough
information to make fast decisions without an extensive
amount of trial-and-error iterations. In the centralized
approach, sensing information moves upwards in the
hierarchy and the control information moves downwards.
While a global decision maker at the highest hierarchy level
enables noniterative optimal decisions, the system may
become unreliable, require high signaling overhead, or lead
to large control delays. Another way to use the hierarchy
concept is to rely on distributed decision-making but allow
sensing information to be exchanged between the hierarchy
levels. An example of such a distributed approach is
presented in [348]. The raw control data obtained from the
lowest hierarchy level with local network nodes are centrally
aggregated and distributed back to the local nodes. The local
control units can then work autonomously based on a
combination of locally sensed information and the optional
aggregated awareness of the overall situation. This may help
to overcome the aforementioned bottlenecks of purely
centralized or decentralized control approaches.
D. HIERARCHY OF SYSTEMS
The performance feedback concept in Fig. 5 and the
behavioral and structural levels in Fig. 7 form the basis of the
general hierarchy of systems (Fig. 11). We have formed this
hierarchy according to the increasing uncertainty and
sophistication of the decision block in Fig. 5. Each hierarchy
level includes conceptually the lower level as a special case.
When one moves upwards in the hierarchy, both the
complexity and energy consumption of the decision block
are increased. Any system should be implemented at the
lowest possible level to reduce power consumption as much
as possible, possibly by using decentralized or distributed
control to reduce communication between the different parts
of the system.
FIGURE 11. General hierarchy of systems in different disciplines according to the complexity of decisions. The structure changes only in self-organizing systems, otherwise the behavior changes, see Fig. 7.
The starting point for the hierarchy is [73], which was
originally developed mainly for natural systems. We have
also used the hierarchies in [63], [126], [20] and modified
them to human-made systems and used the newest
terminology. To support our views, we have also used many
other references [297], [301], [63], [303], [306], [126], [41]
to define the names of the hierarchy levels and their order.
Some other hierarchies have been presented in the literature,
for example in [227], but the hierarchy shown in Fig. 11 has
the longest history: it was developed into this form already by
1970 [73], [63] and is still commonly used in control
theory [128], [5].
The lowest hierarchy level includes static systems, which
are fixed structures such as passive analog filters with no
decision capability. No energy is consumed for information
processing. Passive filters differ from active filters in that
they do not use any power supply. Thus, they can only
selectively attenuate signals at different frequencies, and part
of the energy in the input signal is changed to heat. They
correspond to infinite impulse response (IIR) filters in the
digital domain where the filters are always active and need a
power supply. Systematic design of passive filters was
started by Butterworth (1930).
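The Butterworth design gives a maximally flat magnitude response, |H(jω)|² = 1/(1 + (ω/ωc)^(2n)). The sketch below illustrates its two defining properties; the cutoff frequency and orders are chosen arbitrarily:

```python
# Butterworth power gain |H(jw)|^2 = 1 / (1 + (w/wc)^(2n)):
# maximally flat in the passband and monotonically attenuating.

def butterworth_gain(w, wc=1.0, order=4):
    return 1.0 / (1.0 + (w / wc) ** (2 * order))

# At the cutoff frequency, the power gain is always 1/2 (-3 dB),
# independently of the filter order ...
assert abs(butterworth_gain(1.0) - 0.5) < 1e-12
assert abs(butterworth_gain(1.0, order=8) - 0.5) < 1e-12

# ... and a higher order gives a sharper roll-off in the stopband.
assert butterworth_gain(2.0, order=8) < butterworth_gain(2.0, order=4)
```

In a passive realization, the attenuated part of the input energy is dissipated as heat in the filter elements, consistently with the description above.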
The systems at the next level are simple dynamic systems
or clockworks which make periodic predetermined changes.
They are simple automata since they do not need manual
intervention after they have started their operation. The
clockworks form the basis for all synchronous computing
systems. Even when a computing system is in a sleep mode,
it still needs a clock to be able to wake up. The second level
also includes some simple machines. In general, a machine
is a system that changes energy to some mechanical
movement. In computing, a machine is an open system with
an internal state whose next state depends on the state of the
environment and the earlier internal state [71]. Hence, all
machines are state machines and in practice finite state
machines because of their finite memory. A motor is a
machine that turns energy into work in the form of rotating
motion. A motor is a clockwork and may include a gear for
manual control. Historically, a mill is a machine that takes its
energy from a stream of water or wind.
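The view of a machine as an open system whose next internal state depends on the environment input and the previous internal state is exactly a finite state machine. The two-state toggle below and its events are made up purely for illustration:

```python
# A minimal finite state machine: the next state is a function of the
# current internal state and the input from the environment.

transitions = {
    ("off", "press"): "on",
    ("on", "press"): "off",
    ("on", "timeout"): "off",     # the machine shuts itself down
    ("off", "timeout"): "off",
}

def step(state, environment_input):
    # Deterministic next-state function of the finite state machine.
    return transitions[(state, environment_input)]

state = "off"
for event in ["press", "timeout", "press", "press"]:
    state = step(state, event)
assert state == "off"
```

Because the transition table is finite, the machine's memory is finite as well, matching the observation that practical machines are finite state machines.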
Control systems are automata acting upon controlled
variables to eliminate the effect of disturbances [349].
Examples include a thermostat and a PID controller. The set-
point value is fixed and given by a human operator.
Automatic systems are usually based on feedback and they
do not need any manual intervention [126], [349].
Adaptive systems need a performance criterion and they
eliminate the effect of variable disturbances on the
performance by using an algorithm in the feedback loop
[349]. The fixed set-point value is replaced by a reference
signal that is also called a desired or training signal.
Typically, a reference signal with a uniform spectrum is
needed if the system has some frequency-selectivity.
Adaptive receivers may be in a decision-directed mode. In
that case, the decisions of the receiver replace the external
reference signal. Some algorithms are blind in the sense that
they can generate their reference signal from the received
signal by using some kind of nonlinearity [208]. Sometimes,
adaptive systems are called “smart” [5].
Learning systems are adaptive systems that include
memory [128]. In the literature, also the term cognitive is
commonly used for learning systems [58], [258]. Simon
defines learning as “any process by which a system improves
its performance” [350]. The goal is to make the system faster,
more accurate, or more robust. A learning system changes its
behavior based on past experience, whereas adaptive
systems always behave in the same way. In machine
learning, algorithms are classified into supervised,
unsupervised, reinforcement, and evolutionary learning
algorithms [166]. Supervised learning is using a reference
signal just like in adaptive systems. Unsupervised learning
may be based, e.g., on pattern recognition. Reinforcement
learning is conceptually between supervised and
unsupervised learning; its optimization is based on
maximizing a numerical reward that represents some
performance metric (Fig. 12). An example of reinforcement
learning is Q-learning. Reinforcement learning algorithms
may use positive feedback in addition to the conventional
negative feedback [136].
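Q-learning can be sketched with a minimal tabular example; the five-state corridor task, its reward, and the learning parameters below are illustrative assumptions of ours:

```python
import numpy as np

# Tabular Q-learning on a made-up five-state corridor: the agent moves
# left (0) or right (1) and is rewarded only on reaching the rightmost state.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9                 # assumed learning rate and discount factor
rng = np.random.default_rng(2)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        a = int(rng.integers(n_actions))   # random exploration (off-policy)
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Feedback on the temporal-difference error, driven by the reward.
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# The learned greedy policy moves right in every non-terminal state.
assert all(int(np.argmax(Q[s])) == 1 for s in range(n_states - 1))
```

The numerical reward plays the role of the performance metric in Fig. 12: no reference signal is given, yet the feedback of the temporal-difference error shapes the stored value estimates until the goal-reaching behavior emerges.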
FIGURE 12. Reinforcement learning as a special case of the control loop in Fig. 5.
Autonomous and intelligent systems are advanced forms
of learning systems. Autonomous systems are learning
systems that do not need any external reference signal [48],
[49]. The unsupervised, reinforcement, and evolutionary
learning algorithms are autonomous. Decision-directed and
blind algorithms used in communications are simple
autonomous algorithms [208].
Intelligent systems are advanced autonomous systems
where the concept “intelligent” has many different
definitions [164]. We use those in control theory and
artificial intelligence. Legg and Hutter’s general definition is
a good starting point [164]: “Intelligence measures an
agent’s ability to achieve goals in a wide range of
environments.” The authors in [164] have summarized 10
definitions and 52 statements on intelligence. Some authors
use a broad interpretation that all systems from control
systems to self-organizing systems have a different level of
sophistication regarding intelligence [11]. In general, we
expect some form of autonomy and therefore learning
capability from intelligent systems [6]. Human intelligence
is a result of evolution and therefore also self-organization
[11].
In control theory, intelligence is defined as [46] “the
ability of a system to act appropriately in an uncertain
environment, where appropriate action is that which
increases the probability of success, and success is the
achievement of behavioral subgoals that support the
system’s ultimate goal. Both the criteria of success and the
system’s ultimate goal are defined external to the intelligent
system. For an intelligent machine system, the goals and
success criteria are typically defined by designers,
programmers, and operators. For intelligent biological
creatures, the ultimate goal is gene propagation, and success
criteria are defined by the processes of natural selection.”
In artificial intelligence, a rational agent is defined as
“any device that perceives its environment and takes actions
that maximize its chance of success at some goal” [6].
Furthermore, “computer agents are expected to … operate
autonomously, perceive their environment, persist over a
prolonged time period, adapt to change, and create and
pursue goals. A rational agent is one that acts so as to achieve
the best outcome or, when there is uncertainty, the best
expected outcome.” The definition in artificial intelligence,
regarding computer agents, is close to that of control theory.
In control theory, the environment is always assumed to be
uncertain, whereas in artificial intelligence, the environment
may be also perfectly known. For example, the traveling
salesman problem is complex, but there is no uncertainty.
Complex deterministic systems may appear stochastic or
probabilistic if the environment is only partially observable
[6]. In artificial intelligence, the goals may be set by the
system itself, whereas in control theory, the goals are defined
external to the intelligent system.
Uncertainty means lack of precise knowledge [8]. For
example, a mobile wireless channel is random and initially
unknown [351]. Uncertainty can be divided into epistemic or
knowledge uncertainty and aleatory or probabilistic
uncertainty. Epistemic uncertainty can be reduced by sensing
or measuring some variables of the system. In general, this
type of uncertainty can be reduced by simply obtaining more
information. Aleatory uncertainty, on the other hand, is
related to natural changes in nature which cannot be
controlled or eliminated. Aleatory uncertainty cannot be
reduced by obtaining more information. It may be too costly,
time consuming, or technologically infeasible to make the
observations, or the quantity in which we are interested, such
as the probability of a rare event or condition occurring in
the future, cannot be observed.
Control systems (Fig. 5) can effectively cope with
epistemic uncertainty. They sense the environment, i.e.,
observe the outputs of the process, and the feedback loop
provides this information to the decision block. Starting from
learning systems, one is able to model aleatory uncertainty
by performing statistical analysis of environment and system
states. Learning systems can store information about the
observed phenomena in their memories and can infer yet-unobserved phenomena in the future.
Self-organizing systems are autonomous learning
systems that are able to change their structure, in addition to their behavior (Fig. 7). Self-organization or spontaneous order
refers to structural changes that do not need an external
control unit. Sometimes, such systems are called self-
restructuring [20]. In computer science, they are called
autonomic computing systems [50] and multiagent systems
[187] and in communications, they are called self-organizing
networks [222]. Many biological organisms are based on
self-organization through evolution and development of an
embryo.
The term self-organization was originally developed in
biology where the self-organization is “a phenomenon in
which system-level patterns spontaneously arise solely from
interactions among subunits of the system” [352]. The author
in [352] summarized ten definitions of self-organization
since the year 1947. Different authors have different
opinions on whether the interactions must be local or
whether the basic requirement is that there is no external
control unit. We use the latter more general definition.
Similar concepts are stigmergy and self-assembly, which
cannot be clearly separated from self-organization. In
stigmergy, the interactions are indirect through the
environment, for example temperature. This is a form of
global interaction. Stigmergy has various applications in
robotics, multiagent systems, and communication networks
[353]. Stigmergic communication was applied to optimization by Dorigo in 1992 in the form of ant colony optimization, which is an example of swarm intelligence. It allows for very
efficient distributed control and optimization in a variety of
problem domains. In self-assembly, order is created by local
interactions for example between molecules. Self-assembly
is usually static. The system approaches an equilibrium by
reducing its energy. Dynamic self-assembly is usually called
self-organization. Self-organization may or may not include
positive feedback in addition to negative feedback [352].
Self-organizing systems can be based on centralized,
decentralized, or distributed control (Fig. 9). Some of the
first self-organizing communication networks were
distributed [354]. Such networks consist of a set of node
clusters, each node belonging to at least one cluster. Every
cluster has its own cluster head which acts as a local
controller for the nodes in that cluster. In one extreme, self-
organizing networks can be decentralized, so that even the
nodes work autonomously using for example swarm
intelligence [224]. In computer science, the term self-
management includes different autonomic properties [50],
[47]. Interacting cooperative systems are the most advanced
form of self-organizing systems because of possibly
conflicting goals [25]. Each society member is a learning,
autonomous, or intelligent agent or robot that must follow
certain laws, so that it does not do any harm to other entities
in its environment. Societies of autonomous agents or actors
are sometimes called value-laden systems with a culture
based on values [41]. Values such as trust or fairness are
needed in addition to laws to enable cooperation safely. An
essential concept in cooperating systems is collision
avoidance, which must be used also in communication
systems [15]. Autonomous agents may even teach each
other.
E. BASIC RESOURCES
We explain the basic resources using the open system
concept presented in Fig. 13 [71], [7], [4]. A system is
defined by the observer with its boundaries [20], and it can
be for example a control system or an actor (Fig. 5). The
three basic flows through all systems include materials,
energy, and information. Systems can be classified into
apparatuses (main flow is materials), machines (main flow is
energy), and devices (main flow is information) [356]. Parts
of those resources are transformed into waste, following the entropy law, and they are available for recycling.
FIGURE 13. Open system concept and basic resources that include materials (M), energy (E), information (I), time, frequency, and space.
Information includes data and control information [7]. It
is not an independent resource, but some energy is needed to
carry it. Transmitting information needs a certain bandwidth
and time which is perceived as delay. Information is
considered the third fundamental quantity in addition to
materials and energy [355]. Information can be presented as
a five-level hierarchy whose levels are from bottom up:
statistics (statistical information about symbols), syntactics
(information about the relationships between symbols),
semantics (information about the meaning of symbols in
reality), pragmatics (information about the practical effect of
symbols in reality), and apobetics (information about the
purpose of symbols in reality). The symbols may be, for
example, words. The hierarchy was originally developed by
Morris (1938), excluding the lowest and uppermost levels.
The system needs some space within its boundaries.
Therefore, the three additional basic resources are frequency,
time, and space. In electronics, the spatial limitation
including the form is often called form factor. All of the basic
resources are summarized in Table II.
TABLE II. Basic resources
Complexity of a system is in general defined as the
number of parts and interconnections within the system. In
engineering, we need a more fundamental definition.
Complexity may be defined as the amount of basic resources
needed to build the system up (these are called capital costs)
and the amount of basic resources used during operation
(these are called operational costs) [357]. In computer
science, computational complexity is defined by the time and
space requirements of algorithms [13]. Energy consumption
is also an important complexity parameter [86].
F. FUNDAMENTAL LIMITS
Exponential trends in performance requirements [65] are extremely fast, and we will soon approach some fundamental limits of nature, which can be seen as walls beyond which we cannot go. They also form constraints on our designs.
Some of the fundamental limits of nature are listed in Table
III. We have included only those which are most interesting
for our purposes. There are good summaries available
elsewhere [40], [271], [174], [69] and additional references
are included in [66] where some special issues of journals are
mentioned.
TABLE III. Fundamental limits and principles
According to the energy conservation law, energy does not disappear but is changed into other forms.
The entropy law states that the available energy is reduced
and new energy is needed all the time. In general, heat has
the largest entropy. This causes heat transfer problems since
the heat must be removed to the environment, so that the
temperature of the components does not increase too much,
otherwise the components may be destroyed [358]. In
addition to a data flow, our system includes an energy flow
from energy source to energy sink, and cooling is based on
conduction, convection, or radiation. The energy density of
batteries is measured in J/dm³ or J/kg, and it is increasing
quite slowly, in the order of 50% in ten years [359]. The
cooling efficiency (in W/cm2) is also increasing slowly and
there are fundamental limits for it [360]. According to the
present understanding, the maximum cooling efficiency is
0.25 - 1 W/cm2 with free air or water convection and up to
150 W/cm2 with forced water cooling [358].
In practice, the clock frequencies on a processor chip
were limited to 4 GHz in about 2004 mainly because of
power consumption limitations [361]. The maximum power
in hand-held terminals is 3 W because of cooling problems,
and the maximum power in single-chip processors is 200 W
[362], [363]. The maximum transmission power in hand-held terminals is 200 mW for safety reasons.
When the clock frequency was limited, the computing speed could still be increased by parallel processing, but the maximum speed-up also has theoretical bounds [364], [365],
[366]. The speed does not increase linearly with the number
of parallel processors.
The absolute zero is the lowest temperature in the Kelvin
scale. In practice, electronics is usually working at the room
temperature (290 K), and it would be expensive to reduce the
temperature to reduce the thermal noise level [367]. Thus,
thermal noise limits the energy efficiency of charge-based
electronics.
The most elementary operation is a switching operation
with the energy 𝐸sw, which must fulfil the inequality 𝐸sw/𝑁0
≥ ln 2, which is the Szilard limit [368]. The noise power
spectral density is 𝑁0 = kT, where k is the Boltzmann
constant and T is the absolute temperature [369]. The limit
was originally derived by Szilard (1929) and generalized by
Brillouin (1953), but it is also called the Boltzmann or
Landauer limit [369], [368]. We can derive the maximum computing rate 𝐶sw (in logic operations per second) for a given power consumption 𝑃sw. Since 𝑃sw = 𝐸sw𝐶sw, we obtain 𝐶sw/𝑃sw = 1/𝐸sw, which at the limit becomes 𝐶sw/𝑃sw = 1/(𝑁0 ln 2). In practice, due to noise, we must be well below
this limit to guarantee high reliability in the computing [67].
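As a numerical illustration (ours, not from the cited references) of the Szilard limit at the room temperature of 290 K assumed above, the minimum switching energy and the corresponding ceiling on computing rate per watt can be evaluated directly:

```python
import math

k = 1.380649e-23   # Boltzmann constant (J/K)
T = 290.0          # room temperature assumed in the text (K)

N0 = k * T                    # noise power spectral density N0 = kT (J)
E_sw_min = N0 * math.log(2)   # Szilard limit: E_sw >= N0 ln 2
C_per_W = 1.0 / E_sw_min      # limit C_sw/P_sw = 1/(N0 ln 2)

print(f"minimum switching energy: {E_sw_min:.3e} J")        # ~2.78e-21 J
print(f"computing rate per watt:  {C_per_W:.3e} ops/s/W")   # ~3.60e20
```

Practical logic must operate orders of magnitude above this energy to keep the error probability low, as noted in the text.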
In addition, electronics is limited by the Heisenberg limit
because of quantum effects [369]. The Szilard and
Heisenberg limits give a lower bound for the gate length of
a transistor. The theoretical lower bound for the gate length
is 4 nm, which corresponds only to about 20 silicon atoms
when we take the lattice structure into account. Because of
high manufacturing costs, the practical lower limit is 10 nm
for silicon transistors [370], [66]. With other materials, the
gate length can be smaller since the Heisenberg limit can be
reduced with a heavier effective carrier mass [371].
Although the transistors would be smaller, the Szilard limit
cannot be surpassed with irreversible or noninvertible
computing and the cooling problems would be larger
because of larger power density [360]. Reversible computing
is not seen as a practical approach for our purposes [368].
Thus, Moore’s prediction is not expected to continue for
silicon transistors after about 2021 [66].
A similar limit called the Shannon limit (1949) exists in
communications for the received energy per bit 𝐸b. It can be
derived from the Shannon channel capacity equation. The
ratio 𝐸b/𝑁0 is often called signal-to-noise ratio per bit, which
for reliable transmission must fulfil the inequality 𝐸b/𝑁0 ≥
ln 2, which is the Shannon limit. For an infinite bandwidth
and finite received power 𝑃, the capacity 𝐶 achieves its maximum value, which in normalized form is 𝐶/𝑃 = 1/(𝑁0 ln 2) [208]. A similar result was derived above for the
maximum computing rate.
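To make the Shannon limit concrete, the sketch below (with illustrative numbers of our own choosing, not taken from the cited sources) evaluates the limit in decibels and shows numerically how the band-limited capacity 𝐶 = 𝐵 log2(1 + 𝑃/(𝑁0𝐵)) approaches its infinite-bandwidth maximum 𝐶 = 𝑃/(𝑁0 ln 2):

```python
import math

ebno_min = math.log(2)  # Shannon limit: Eb/N0 >= ln 2
print(f"Eb/N0 >= {ebno_min:.4f} = {10 * math.log10(ebno_min):.2f} dB")

# With P/N0 fixed, capacity grows with bandwidth B but saturates:
P_over_N0 = 1e6  # received power over noise density (1/s), arbitrary
for B in (1e5, 1e6, 1e7, 1e9):
    C = B * math.log2(1 + P_over_N0 / B)
    print(f"B = {B:.0e} Hz: C = {C:.4e} bit/s")
print(f"infinite-bandwidth limit: {P_over_N0 / math.log(2):.4e} bit/s")
```

The first line prints the familiar −1.59 dB value of the Shannon limit.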
For unbiased estimators, a lower bound called Cramer-
Rao lower bound exists, depending on the available signal-
to-noise ratio and the waveform. Since in communications
each bit and symbol has a limited signal-to-noise ratio, the
Cramer-Rao bound must be reduced by using the energy of
several symbols in the estimator.
The Gabor uncertainty principle (1946) says that the
product of signal duration and bandwidth cannot be below a
certain limit that is in the order of one, and the exact limit
depends on the definitions of duration and bandwidth. Thus,
if we reduce the duration of a pulse, the bandwidth is
increased and vice versa. An earlier result was from Nyquist
(1928) giving the minimum bandwidth where the pulses do
not interfere with each other if sent serially. Each pulse can
carry several bits if the pulse has many discrete levels (i.e.,
carrier amplitudes or phases). The maximum bit rate for a
certain bandwidth in a noisy channel is given by the Shannon
capacity, which can be approached by suitable modulation
and channel coding methods. Optical imaging instruments
have the Abbe diffraction limit for resolution. Abbe–
Rayleigh rule says that the wavelength of light is
approximately the smallest distance between two points that
can be distinguished with a lens [69]. The limit has also been applied to antennas. However, it has later been shown that there are no theoretical limits for the directivity of antennas, and a concept called superresolution has emerged [372].
The speed of light is the upper speed limit, which
introduces challenges in communication networks and on
chips. The speed of light corresponds to a delay of 1 µs for a distance of 300 m. Radio waves are reflected from flat
surfaces and this creates the problem with multipath
propagation and resulting fading, in addition to shadowing
caused by the obstacles. Fading is a result of multipath
propagation and the finite speed of radio waves. Typical
delay spreads are 100 ns for indoor and 10 µs for outdoor
systems. The delay spreads are much larger than the period
of the carrier with typical frequencies larger than 1 GHz,
causing constructive or destructive fading depending on the
phase relationships in a dynamic situation. On a chip, the delays of wires or metal interconnections are now more
significant than the delays of logic gates [373].
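The propagation-delay numbers above follow directly from the speed of light; a small sketch (the distances are illustrative values of our own choosing):

```python
c = 299_792_458.0  # speed of light in vacuum (m/s)

def propagation_delay(distance_m: float) -> float:
    """One-way propagation delay over a given distance."""
    return distance_m / c

print(f"300 m -> {propagation_delay(300) * 1e6:.2f} us")  # ~1 us
# A 100 ns indoor delay spread corresponds to ~30 m of excess path:
print(f"100 ns of delay -> {100e-9 * c:.1f} m of path difference")
# At a 1 GHz carrier (period 1 ns), that spread spans ~100 carrier periods:
print(f"carrier periods spanned: {100e-9 / 1e-9:.0f}")
```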
There are problems that computers cannot solve [40],
[174]. Some problems are intractable or even unsolvable.
Problems can be classified into three groups: decision
problems (for example, yes or no), search problems (e.g.,
seven bridges of Königsberg), and optimization problems
(finding the best solution from all feasible solutions). The
difficulty of problems can also be classified into three groups
[174], [374]: First, tractable problems have polynomial
complexity (P). Second, intractable problems are problems
with exponential or superpolynomial complexity, including
nondeterministic polynomial (NP) complete and NP hard
problems. Although no formal proof exists, the common consensus is that NP complete problems are not tractable. Third, for unsolvable problems, no solution can be found with any computer according to the Church-Turing conjecture. A distinction between polynomial and exponential complexity was first made by von Neumann in
1953 [171]. The tractability of polynomial complexity
problems was first expressed by Edmonds (1965).
Intractable problems can in principle be solved, i.e., the
algorithm will eventually terminate, but the solution may
take an enormous amount of time and the number of memory
elements may be also enormous. Many engineering
problems are resource allocation problems that are often
intractable. Solving a problem is in general more difficult
than verifying the solution.
NP problems are decision problems whose solution can
be verified in polynomial time, but the problems cannot
always be solved in polynomial time with a deterministic
algorithm. A nondeterministic algorithm means, in practice, guessing. NP problems include, e.g., polynomial time (P)
problems and NP complete problems. Quantum computers
can solve some exponential complexity NP problems in
polynomial time, but it is likely that quantum computers
cannot solve NP complete problems in polynomial time.
NP complete problems are the hardest problems in the NP
class. The existence of NP complete problems was
discovered independently by Cook (published in 1971) and
Levin (published in 1973, but discovered earlier). This is
called the Cook-Levin theorem. As far as is known, a solution can be found only in superpolynomial time, usually exponential time. Polynomial-time approximations are available, for example randomization, restriction, parameterization, and heuristics. There are over one thousand
NP complete problems, for example traveling salesman
problem (shortest path problem), knapsack problem
(resource allocation problem), timetable design, and
computer circuit fault detection problem [174]. For these
problems, the resources needed to find a solution are not
known, thus one can only compare the most recent result
with earlier results and continue solving until no significant
improvement can be obtained any more.
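As a toy illustration of why such problems resist exhaustive search while candidate solutions remain easy to verify, the sketch below (a hypothetical 5-city traveling salesman instance of our own making, not from the cited sources) solves the problem by brute force; checking one tour's length is polynomial time, but the number of candidate tours grows factorially:

```python
import itertools
import math

# Symmetric distance matrix of a hypothetical 5-city instance.
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]

def tour_length(tour):
    """Verification step: summing n edge weights is polynomial time."""
    return sum(D[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def brute_force_tsp(n):
    """Exhaustive search over (n-1)! closed tours starting at city 0."""
    best = min(itertools.permutations(range(1, n)),
               key=lambda p: tour_length((0,) + p))
    return (0,) + best, tour_length((0,) + best)

tour, length = brute_force_tsp(len(D))
print(tour, length)  # shortest closed tour and its length
print("tours to examine for n = 5, 10, 15:",
      [math.factorial(n - 1) for n in (5, 10, 15)])
```

Already at 15 cities the search space exceeds 8 × 10^10 tours, which is why only heuristic or approximate methods scale.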
NP hard problems are a more general class of hard
problems, with NP complete problems as a special case.
Solutions cannot be always verified in polynomial time and
thus not all NP hard problems are NP problems. For example, the halting or termination problem is NP hard but not NP
complete. The halting problem is about whether a given
program halts for a given input in a finite time.
The analytical or deductive methods are not without
limitations. Gödel’s incompleteness theorem says that there
are always theorems which can neither be proved nor
disproved [40]. Often, the theorem is interpreted so that there
is no axiomatic system that would cover the whole
mathematics. The theorem is equivalent to the statement that
we can never tell whether a program is the shortest one that
will accomplish a given task [176].
Some deterministic feedback systems are chaotic and thus unpredictable if they include a nonlinearity,
possibly caused by quantization, in the feedback loop [40].
The behavior of such systems is sensitive to initial
conditions. The algorithms based on feedback may converge
to a limit cycle or hang-up. Noise may be intentionally used
in the loop to avoid the limit cycle. The use of noise is
sometimes called dithering. In communications, repeated
collisions can be avoided by using random transmission
intervals [15]. A chaotic situation may also occur when many
adaptive loops interact with each other [262], for example, in
adaptive transmission and reception loops, or in a network
using transmitter power control. Generally, such situations
can be avoided by decoupling the interacting control loops.
Common solutions are to use centralized control or widely
different convergence speeds for different control loops.
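The sensitivity to initial conditions can be demonstrated with the classic logistic map, a textbook nonlinear feedback recursion (our own illustration, not an example from the cited sources):

```python
def logistic_trajectory(x, r=4.0, steps=50):
    """Iterate the nonlinear feedback recursion x <- r*x*(1 - x).
    For r = 4 the map is chaotic on the interval [0, 1]."""
    xs = []
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # perturbed by one part in a million
# The two trajectories stay close at first, then diverge completely:
for k in (0, 10, 20, 30):
    print(k, abs(a[k] - b[k]))
```

After roughly twenty iterations the one-in-a-million perturbation has grown to the size of the state itself, so long-term prediction is impossible even though the system is fully deterministic.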
G. MULTIPLE-CRITERIA DECISION-MAKING (MCDM)
Fundamental limits impose constraints on our design. When we are far from the limits, each objective can be optimized independently without affecting the other objectives too much. MCDM becomes relevant near the fundamental
limits [62], [113], [114]. From the MOO theory, we know
that the optimum, which is called Pareto optimum, is not
unique, but it gives the optimal trade-off between the
objectives. To make the final decision, we must rely on
subjective preferences. There are a significant number of algorithms to find the optimum, each having its strengths and weaknesses. The description of those algorithms is outside the scope of this review; the interested reader is referred to [8], [114] for more details.
Different algorithms are feasible for the different
hierarchy levels in Figs. 8 and 11. Selection criteria for
adaptive algorithms are summarized in [134]. The same
criteria can be used also for learning algorithms. Important
criteria include stability and rate of convergence, tracking
ability in a dynamic environment, and computational
complexity. In general, the algorithms are based on feedback
and their convergence speed depends on the number of
degrees of freedom in the optimization problem. The
algorithms can only track slow changes due to a trade-off
between noise and lag error [208].
In self-organizing control, heuristic MCDM algorithms
are often used when the structure must be changed [63].
MCDM algorithms can select the lower level algorithms and
decide some parameters for those algorithms. The selection of algorithms for MCDM depends on the form of preference representation, whether the objective functions are linear or nonlinear, the “shape” of the feasible region (convex vs. nonconvex), and, most importantly, their computational complexity. For many engineering problems, the
feasible set is nonconvex with a nonlinear objective function
[62], [8]. Thus, heuristic methods (e.g., evolutionary and genetic algorithms) have been popular. To guarantee global
convergence, genetic algorithms can be modified by using
immigration in addition to mutations [5]. A high immigration rate forces the algorithm towards random search, and a low immigration rate towards a genetic algorithm. The classical
approach is to use scalarization methods, that is, a
multiobjective problem is reduced to a single-objective
problem by combining the objectives [62]. This works well
if the multiple objectives are somewhat independent and
monotonic, and the set of all possible solutions is convex. In
some cases, grid search could be a solution [295]. As an
example, optimization of the weights of a finite impulse
response (FIR) filter is known to be a convex problem, but if
the filter is an IIR filter, the problem is in general nonconvex
[133], [375]. Furthermore, IIR filters may experience
stability problems unless the optimization problem is
constrained in such a way that poles of the transfer function
lie inside a unit circle in the z-plane.
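The scalarization approach mentioned above can be sketched on a toy biobjective problem (our own illustrative objectives 𝑓1(𝑥) = 𝑥² and 𝑓2(𝑥) = (𝑥 − 2)², whose Pareto-optimal set is the interval [0, 2]); sweeping the weight traces out Pareto-optimal trade-offs:

```python
def f1(x):
    return x * x          # first objective

def f2(x):
    return (x - 2) ** 2   # second, conflicting objective

def scalarized_min(w, grid):
    """Reduce the biobjective problem to one objective: w*f1 + (1-w)*f2."""
    return min(grid, key=lambda x: w * f1(x) + (1 - w) * f2(x))

grid = [i / 1000 for i in range(-1000, 3001)]  # grid search over [-1, 3]
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    x = scalarized_min(w, grid)
    print(f"w = {w:.2f}: x* = {x:.2f}, f1 = {f1(x):.2f}, f2 = {f2(x):.2f}")
```

Each weight yields one Pareto-optimal point; because this toy problem is convex, the whole front can be reached this way, which matches the condition stated in the text for when scalarization works well.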
IV. MANIFESTATION OF SOME MULTIDISCIPLINARY CONCEPTS IN COMMUNICATION NETWORKS
In this section, we briefly summarize the state of the art from
the perspective of communication networks. The more
advanced interdisciplinary and transdisciplinary views [42]
remain for later work, as discussed in the introduction.
A. KEY PERFORMANCE INDICATORS (KPIS)
We will use examples of KPIs from communication systems
since in that discipline comprehensive analyses have been
made for several generations of mobile cellular systems
[376], [65]. The network-induced constraints in networked
control systems are described in [140]. The first generation
cellular systems were introduced at the beginning of the 1980s, and a new generation has been introduced every ten years. The networks of the 2010s are thus called the fourth generation (4G) systems, and those of the 2020s are called the fifth generation (5G) systems.
The network requirements are called performance
requirements or KPIs. Network performance must be
monitored and controlled through feedback (Fig. 5) [340].
Performance is defined as “the manner in which or the
efficiency in using the available resources with which
something reacts or fulfils its intended purpose” [377]. By
efficiency, we mean the ratio of benefits and expenditures [7]
where the benefits fulfil some user needs and expenditures
are some basic resources (Fig. 13). In communication
networks, the benefits are usually the correctly received data
bits and the expenditures are the other basic resources
including control bits. In computer science, the benefits are
operations consisting of elementary arithmetic operations,
including additions, subtractions, multiplications, and
divisions [378], [111]. There are also lower level operations
called logic operations in electronics and various higher-
level operations [13].
The metrics can be measured in any of the OSI layers
[15], [16]. In the application layer, subjective quality of
experience (QoE) metrics can be used in addition to the
metrics listed above [379]. When performance is measured,
we must define the layer, the corresponding PDU, and
whether the measurement is done one-way or round-trip.
The requirements of 4G and 5G systems are summarized
in Table IV [380], [65], [381]. Many of the KPIs are self-
explanatory. The requirements are developing exponentially.
For example, the network energy efficiency requirement is
increased by a factor of 100, so that the power consumption
per unit area is not increased when the area traffic capacity
is increased by the same factor.
Data rate requirements are measured in bit/s in the
application layer. The number of PDUs per second is often called the throughput, which is often expressed in bit/s.
user experienced data rate should be available in at least 95%
of the locations, including at the cell-edge, for at least 95%
of the time within the considered environment. In computer
science, the throughput is measured in operations per second.
TABLE IV. KPIs and requirements in 4G and 5G communications networks
Spectral efficiency is the throughput normalized by the
bandwidth in a cell (in bit/s/Hz) [250]. Spectral efficiency
can be improved by using multilevel modulation and channel
coding methods and multiple antennas at the expense of
energy efficiency. The best known channel codes include
low-density parity check codes and polar codes. Network
energy efficiency refers to the quantity of information bits
transmitted to or received from users, per unit of energy
consumption of the network (in bit/J), including cellular
technologies, radio access and core networks, and data
centres. Energy efficiency in communications is thus the
inverse of the total energy per bit. Energy efficiency is the
same as power efficiency since 1 bit/J = 1 bit/Ws = 1 bit/s/W.
It is the throughput normalized by the power. Only data bits
are counted in the numerator of the energy efficiency
although all energy (data and control) is taken into account
in the denominator. Energy efficiency can in general be
improved by avoiding complex algorithms but usually this is
obtained at the expense of spectral efficiency. The
throughput normalized by the area is the area efficiency used
in electronics [382], which is called area traffic capacity in
5G systems. Radio Resource Management (RRM) and
Dynamic Spectrum Allocation (DSA) methods are crucial in
avoiding interference and in obtaining the maximum area
traffic capacity.
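The normalized metrics above are simple ratios; a minimal sketch with hypothetical numbers (not taken from Table IV):

```python
def spectral_efficiency(throughput_bps: float, bandwidth_hz: float) -> float:
    """Cell throughput normalized by bandwidth (bit/s/Hz)."""
    return throughput_bps / bandwidth_hz

def energy_efficiency(data_bits: float, total_energy_j: float) -> float:
    """Data bits delivered per joule of total (data + control) energy
    consumed by the network (bit/J = bit/s/W)."""
    return data_bits / total_energy_j

# Hypothetical cell: 100 Mbit/s over 20 MHz, consuming 10 W for one second.
print(spectral_efficiency(100e6, 20e6), "bit/s/Hz")   # 5.0
print(energy_efficiency(100e6, 10.0 * 1.0), "bit/J")  # 10000000.0
```

Note that only the data bits appear in the numerator of the energy efficiency, while the denominator counts all consumed energy, as defined in the text.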
Similar normalized throughputs can be used also in
computer science when throughput is replaced by operations
per second. For example, the energy or power efficiency is
measured in operations/s/W [383] and its inverse is the
energy per operation.
Delay and latency are synonymous terms. They are
measured from the beginning of the transmission of a PDU
to the end of reception of the same PDU [384], [252]. Delay
includes processing, packetization, transmission, queuing,
and propagation delays. Total delay is also called transfer
delay or transit delay [385]. In the application layer, total
delay is called end-to-end delay. Processing delays are
caused by inefficient processing, for example interleaving
and ARQ. Packetization delay is incurred in filling up a
packet with data symbols. Transmission delay is caused by
serial transmission; it is the delay between the transmission
of the first and the last bits of a PDU. Queuing delays occur
when buffers in network devices become flooded.
Propagation delay is caused by the physical medium because
of the finite propagation speed of the electromagnetic waves.
Differently from most other metrics, delay is usually not
expressed as a ratio. However, we propose a normalization
with the smallest possible propagation delay using the speed
of light in vacuum for a given distance. In this case, delay
efficiency would be the ratio of the minimum propagation
delay and the actual total delay (in %). In computer science,
delay is measured for each operation.
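The delay efficiency proposed above can be computed as follows (the 1000 km link and 20 ms total delay are hypothetical values chosen for illustration):

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum (m/s)

def delay_efficiency(distance_m: float, total_delay_s: float) -> float:
    """Proposed metric: minimum possible propagation delay over the
    distance (at the speed of light in vacuum) divided by the actual
    total delay; a value of 100% would mean no avoidable delay."""
    return (distance_m / C_VACUUM) / total_delay_s

# 1000 km link with a measured 20 ms end-to-end delay:
print(f"{delay_efficiency(1_000_000, 20e-3):.1%}")  # 16.7%
```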
The delay across the Internet can be on the order of 100
ms or even more depending on the physical distance [384].
Typically, computer users perceive delays under 100 ms as unnoticeable. Conversations appear as real-time two-way
communications when we receive the audio signal within 70
to 100 ms [32]. Lip synchronization between a video stream
and its soundtrack requires similar delays for video and
audio, otherwise the sound seems disconnected from the
movements on the video. In haptic communications, the
maximum delay should be in the order of 1 ms to avoid
cybersickness [31]. If the delay is limited to 1 ms, the
network radius must be limited to a few kilometers [243]. A
more recent concept is the age of information, introduced by Kaul et al. (2011) to characterize the freshness of the knowledge about a remotely observed process [386]. The
age of information can be defined as the time interval from
the start of the transmission of the information to the present
time.
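The age-of-information definition above can be sketched directly (the update times below are hypothetical):

```python
def age_of_information(now: float, latest_generation_time: float) -> float:
    """Time elapsed since the generation (here, the transmission start)
    of the newest update received so far."""
    return now - latest_generation_time

# (reception time, generation time) of updates about a remote process:
received = [(1, 0), (5, 3), (8, 7)]

for t in range(10):
    gens = [g for r, g in received if r <= t]
    aoi = age_of_information(t, max(gens)) if gens else None
    print(t, aoi)  # sawtooth: grows linearly, drops at each reception
```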
The concept of dependability includes availability and reliability [387], [388], [389]. Reliability is measured only
when the network is available [390]. The network
availability X is defined as follows: the network is available
for the targeted communication in X% of the locations where
the network is deployed and X% of the time. No numerical
value has been defined for availability in [65], [381], but in
[390] availability requirement is 99.999%. Availability is
improved if the robustness of the system is increased.
Reliability can be defined in many ways, and we present
the most common definitions. Reliability is usually defined as the number of sent PDUs successfully delivered to the destination divided by the total number of sent PDUs [381].
Reliability is thus the complement of the error rate, i.e.,
reliability = 1 - error rate. An alternative is to define the
average interval between errors [391]. This may be more
fundamental from the user point of view than the error rate,
which does not directly tell the time distribution of errors
unless the throughput is given.
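The two definitions can be related through the throughput; a minimal sketch with hypothetical numbers:

```python
def reliability(delivered_pdus: int, sent_pdus: int) -> float:
    """Delivered PDUs over sent PDUs (= 1 - error rate)."""
    return delivered_pdus / sent_pdus

def mean_error_interval(error_rate: float, pdu_rate: float) -> float:
    """Average time between erroneous PDUs for a given throughput (s)."""
    return 1.0 / (error_rate * pdu_rate)

r = reliability(99_999, 100_000)
print(r)  # 0.99999, i.e., an error rate of 1e-5
print(mean_error_interval(1 - r, 1000))  # ~100 s between errors at 1000 PDU/s
```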
Because of different fundamental limits, there is a
fundamental trade-off between throughput, delay, and
reliability [201], [392] and therefore also between spectral,
energy, spatial, and delay efficiency, and reliability [208],
[392]. We have thus these five basic metrics. An example of
the trade-offs is computation-communication trade-off
[393], [66], [394]. In [393], the related trade-off between
operating and radiated energy is explained. Because of the
fundamental trade-off, it is reasonable to divide systems
according to the most critical performance metric. The
systems can be divided into bandwidth-, energy-, delay-,
space-, and reliability-limited or -sensitive systems [384],
[395], [396], [65].
B. HIERARCHY CONCEPT
Hierarchical control has been studied extensively in different
disciplines, as described in Section II. Hierarchical concepts
have played a significant role also in wireless
communications, such as in the form of OSI model [326],
[15], [16]. However, the interest of applying hierarchical
control paradigms to the rather complicated heterogeneous
wireless network architectures, providing a multitude of
different services, has emerged only recently. The future
networks must simultaneously support a massive number of
low data rate subscribers for sensing applications and a
number of ultrahigh data rate subscribers for high-definition
multimedia streaming applications. Augmented reality user
interfaces with large bandwidth and small latency might be
the first popular application for 5G. The main objective of
the required hierarchical arrangements is to find suitable
trade-offs between optimized performance criteria and
control complexity while allowing scalable network size. In
upcoming 5G heterogeneous networks, the RRM can be
done at different hierarchy levels, mainly reflecting the
selectable degree of centralization and resource allocation
resolution. We will next discuss these important topics more
closely.
Degree of centralization. A major debate in system
architecture discussions of modern wireless systems is the
degree of centralization of the underlying network control.
The RRM can be divided into centralized, decentralized, and
mixed (also semi-centralized, partially decentralized)
schemes [29], [64], [398]. Centralized schemes collect all
necessary information of a network to make all relevant
RRM decisions, while fully decentralized schemes use only
local information at the base stations and user devices.
The main trade-off between centralized and decentralized
control schemes reduces largely into 1) the required number
of internode iterations, i.e., iterations between control
decisions of separate network nodes to find an optimal
solution, 2) the amount of required control data per iteration,
and 3) the delay to convey information in each iteration.
Centralized schemes enable optimal decisions within a single
node without internode iterations, i.e., as one-shot decision
at the cost of large delay and excessive amount of control
data; whereas decentralized schemes may require a high
number of internode iterations every time the environment
changes. In general, a control loop (either open or closed)
may be required in all approaches because the environment
is changing over time or an iterative structure must replace a
one-shot solution that is too complicated to be solved in
real time. Mixed schemes aim at finding suitable trade-offs
between the above three trade-off factors that fit the given
application requirements. A more detailed analysis of the
optimal degree of centralization in 5G networks can be found
in [397], which covers both the centralization of network
control and the centralization of signal processing.
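This trade-off can be illustrated with a classic power-control toy example (an illustrative sketch, not taken from the text above): in the Foschini–Miljanic iteration, each link updates its transmit power using only its locally measured signal-to-interference-plus-noise ratio (SINR), and the iteration converges to the same fixed point that a centralized controller computes in one shot by solving a linear system from global channel knowledge.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                          # number of links (assumed)
G = rng.uniform(0.01, 0.1, (n, n))             # cross-link gains (assumed)
np.fill_diagonal(G, rng.uniform(0.8, 1.0, n))  # direct-link gains (assumed)
noise, gamma = 1e-3, 2.0                       # noise power, SINR target

# Centralized one-shot decision: global knowledge of G lets a single
# node solve the linear system (I - F) p = u for the target powers.
F = gamma * (G - np.diag(np.diag(G))) / np.diag(G)[:, None]
u = gamma * noise / np.diag(G)
p_central = np.linalg.solve(np.eye(n) - F, u)

# Decentralized scheme: each link iterates using only its local SINR.
p = np.ones(n)
for _ in range(200):
    interference = G @ p - np.diag(G) * p + noise
    sinr = np.diag(G) * p / interference
    p = (gamma / sinr) * p                     # one local update per link
print(np.allclose(p, p_central, rtol=1e-6))    # True: same fixed point
```

With the small cross-link gains assumed here the iteration is a contraction, so both schemes reach the same solution; the decentralized one pays with iterations, the centralized one with the signaling needed to gather the full gain matrix.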
In mixed schemes, many features of centralized schemes
are used, but in this case, the role of the central unit is only
assistive or auxiliary [399]. The third-generation (3G)
cellular networks were highly centralized, whereas in 4G
networks the degree of centralization was drastically
reduced, giving individual base stations significantly more
authority to make RRM decisions [400]. In the upcoming 5G
networks, finding the optimal degree of centralization is one
of the key challenges, in which the locations of both the
control units and the server units are important [347].
The control time resolution and delay requirements of the
controlled RRM parameters heavily affect the optimal degree
of centralization of the control architecture, and the degree
should be adaptively modified. Typically, physical
layer variables are adjusted in a millisecond resolution,
whereas network layer variables can be adapted with orders
of magnitude lower time resolutions. The recent software-
defined networks further separate physical and logical
controllability [401]. Therein, network designers resort to a
physically distributed control plane, on which a logically
centralized control plane operates. In essence, the logically
centralized network view enables simplified models of the
actual distributed physical world but also results in
fundamental trade-offs between complexity and model
consistency.
Resource allocation resolution. Another important factor
naturally incorporating a hierarchical structure is the
resource allocation resolution for individual users and
network cells. In general, the higher the resolution, the better
the achieved optimization capability, but at the cost of higher
control complexity. For instance, in the 3GPP standards,
different terminal categories support different resource
allocation resolutions that match the respective QoS requirements
[402]. In the 4G systems, the smallest resource block
allocated to users is 180 kHz in the frequency domain and 1
ms in the time domain. In the newest 3GPP releases, the
resolution is higher for narrowband Internet of Things (NB-
IoT) devices: resource blocks as small as 3.75 kHz can be
allocated to a single device, which helps to reduce power
consumption at very low data rates [403]. Moreover, the
network virtualization concept is one of the most promising
approaches to implement efficient resource control for 5G
systems [404]. Therein, wireless virtual resources are
obtained by slicing the network infrastructure into multiple
virtual slices that can be used very flexibly, independently
of the location or the operator requesting them. Based on the
requirements, the virtualization can be done at the network
level, user flow level, packet level, subchannel level, or even
hardware level.
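The jump in allocation resolution from 4G resource blocks to NB-IoT tones can be quantified with simple arithmetic. In the sketch below, the 1.4 MHz carrier bandwidth is an illustrative assumption, while the 180 kHz and 3.75 kHz unit widths come from the text:

```python
# Compare scheduling granularity: how many of the smallest allocatable
# frequency units fit in a given carrier bandwidth.
def units_in_band(bandwidth_hz: float, unit_hz: float) -> int:
    return int(bandwidth_hz // unit_hz)

band = 1.4e6          # illustrative narrowband carrier, 1.4 MHz (assumed)
lte_rb = 180e3        # smallest 4G resource block width [402]
nbiot_tone = 3.75e3   # NB-IoT single-tone width [403]

print(units_in_band(band, lte_rb))      # 7 schedulable blocks
print(units_in_band(band, nbiot_tone))  # 373 schedulable tones
```

The 48-fold finer unit width (180 / 3.75) gives the scheduler far more degrees of freedom per device, at the cost of correspondingly more control state to manage.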
C. RESOURCE EFFICIENCY AND TRADE-OFFS
Efficient use of available basic resources (Fig. 13) is the
primary task of the RRM. The main objective of the RRM is
to control the use of available resources in such a way that
users’ QoS requirements are met and, at the same time, the
overall radio resource usage at a system level is minimized.
To this end, radio resource management uses different
mechanisms at different communication layers that include,
for example, channel quality measurement and reporting,
link adaptation, retransmission control, dynamic scheduling,
and management of QoS parameters.
The recent exponential growth in the volume of
transmitted data, coupled with the high cost of energy,
highlights the need to consider the energy consumption of
communications equipment. Thus, efficient use of resources
requires a holistic approach to network planning, radio
resource management, and physical layer transmission.
Network planning. At the network planning stage,
designers typically face the task of balancing the amount of
basic resources needed to build up the system (i.e., capital
costs) against the amount of basic resources used during
operation (i.e., operational costs). To reduce capital costs,
network designers would prefer to cover the service area
with a few macro cells which, in turn, may lead to increased
energy consumption and thus higher operational costs. On
the other hand, increasing the number of base stations that
serve a given area may decrease total energy consumption
but increase capital investments. These considerations lead
to a conclusion that a mixed deployment scheme, which
involves macro-, micro-, pico-, and femtocells, may be
needed to balance these two types of costs. The
corresponding maximum cell radii are 35 km, 2 km, 200 m,
and 10 m, respectively [405]. Other possibilities include
introduction of so-called sleep modes in base stations and
deployment of cooperative networks with distributed
antenna systems that consist of many low-cost and small-
coverage units. A major engineering challenge would be the
control of such a network and ensuring its resource
efficiency. We conjecture that such networks would need to
be capable of self-organization and require distributed
control.
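A minimal sketch of the capex/opex balance, under purely illustrative cost constants and path-loss assumptions (none of the numbers below come from the text): covering a fixed area with N base stations shrinks the cell radius as 1/sqrt(N), so the per-site transmit energy falls with the radius raised to the path-loss exponent, while each extra site adds fixed capital and idle-power costs.

```python
import math

AREA = 100.0          # service area, km^2 (assumed)
ALPHA = 3.5           # path-loss exponent (assumed)
CAPEX_PER_BS = 50.0   # capital cost per site, arbitrary units (assumed)
IDLE_COST = 2.0       # lifetime idle-energy cost per site (assumed)
TX_COST = 8.0         # lifetime transmit-energy cost at 1 km radius (assumed)

def total_cost(n_bs: int) -> float:
    # Cell radius shrinks as 1/sqrt(N); transmit energy scales as r**ALPHA.
    radius = math.sqrt(AREA / (math.pi * n_bs))
    opex = n_bs * (IDLE_COST + TX_COST * radius**ALPHA)
    return n_bs * CAPEX_PER_BS + opex

# Sweep the deployment density and pick the cost-minimizing site count.
best = min(range(1, 200), key=total_cost)
print(best)
```

Even this toy model reproduces the qualitative conclusion above: very few macro sites and very many small sites are both expensive, and the minimum lies at a mixed, intermediate density.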
Radio resource management. Dynamic scheduling is the
main mechanism of optimizing the use of resources with
respect to the requirements of upper layers. A dynamic
packet scheduler is responsible for allocating resource blocks
in time and frequency and assigning the modulation and
coding scheme. In addition to that, a dynamic scheduler
needs to cope with channel and traffic uncertainty.
One of the major challenges is to balance the average
end-to-end service delay against the average energy consumed
in transmission. A mathematical model describing
relationships between end-to-end delay and average
consumed energy is quite difficult to create because it must
involve information and queueing theories as well as
uncertainties in both the channel and the traffic patterns. MCDM under
uncertainty could be used to obtain a set of reasonable pairs
of end-to-end delay and energy consumption with some
advanced learning system acting as a decision maker.
Another major challenge in radio resource management
is to efficiently balance the control (signaling) and data
symbols. The present wireless networks rely on a separation
of control and data symbols, which does not seem optimal
from the energy-efficiency point of view. In general, studies
on resource allocation between control and data symbols are
in their initial stages; only limited results are available in
the literature for point-to-point links. MCDM could be used to
find a reasonable balance between control and data bits with
some advanced learning system acting as a decision maker
and making the final selection.
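Both balancing problems above could be approached with weighted-sum scalarization, one common MCDM technique. The sketch below uses an illustrative M/M/1 delay model and toy numbers (all assumed, not from the text) to show how a preference weight selects one delay–energy operating point:

```python
import numpy as np

# Toy link model: service rate grows logarithmically with transmit power,
# mean delay follows an M/M/1 queue, and energy per packet is power x delay.
powers = np.linspace(0.5, 20.0, 400)      # candidate transmit powers (assumed)
arrivals = 1.0                            # packet arrival rate (assumed)
rate = 2.0 * np.log2(1.0 + powers)        # service rate, packets/s (assumed)
delay = 1.0 / (rate - arrivals)           # M/M/1 mean delay
energy = powers * delay                   # energy spent per packet

# Weighted-sum scalarization: each weight w encodes a subjective
# preference and picks one Pareto-optimal power level.
for w in (0.1, 0.5, 0.9):
    score = w * delay + (1 - w) * energy
    p_opt = powers[np.argmin(score)]
    print(f"w={w}: power {p_opt:.2f}")
```

A larger weight on delay pushes the decision toward higher transmit power, which is exactly the role of the "decision maker" in the text: resolving a non-unique optimum according to preferences.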
Physical layer transmission. Many concepts of control
and adaptive systems have already been applied to
communication links, for example, AGC, AFC, and AMC.
However, many trade-offs still require further studies, for
example, the achievable bit rate versus energy consumption
or used bandwidth versus power needed for transmission.
One of the major difficulties is the lack of sufficiently
accurate models of hardware energy consumption, which
precludes accurate estimation of energy consumption for
different modulation and coding schemes. Other difficulties
arise when one wishes to study multiuser and multicell
scenarios where the achievable link spectral efficiency of a
given link depends on interference conditions.
We expect that autonomous radios and networks, which
are a form of learning systems, will be capable of achieving
a reasonable balance between the used bandwidth and the power
needed for transmission due to their inherent capability of
flexible use of available spectrum. Furthermore, cooperative
communication, which is again a manifestation of learning
systems, will help to balance spectral and energy efficiency.
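The bandwidth-versus-power trade-off mentioned above has a clean information-theoretic illustration: from the Shannon capacity C = B log2(1 + P/(N0 B)), the power needed for a fixed bit rate falls as the bandwidth grows, approaching the ultimate Eb/N0 limit of ln 2 (about -1.59 dB). A minimal sketch, where the noise density and target rate are illustrative assumptions:

```python
import math

N0 = 4e-21          # noise spectral density, W/Hz (roughly kT at 290 K)
R = 1e6             # target bit rate: 1 Mbit/s (assumed)

for B in (0.5e6, 1e6, 5e6, 50e6):
    # Minimum power from inverting C = B log2(1 + P / (N0 * B)) at C = R.
    P = N0 * B * (2 ** (R / B) - 1)
    ebn0_db = 10 * math.log10(P / (R * N0))   # energy per bit over N0
    print(f"B = {B/1e6:4.1f} MHz: Eb/N0 = {ebn0_db:+.2f} dB")

print(10 * math.log10(math.log(2)))  # asymptotic limit: about -1.59 dB
```

Spending bandwidth to save power (or vice versa) along this curve is precisely the kind of flexible spectrum use expected of autonomous radios.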
V. CONCLUSION
We started this work with emphasis on control theory since
it is the origin of many of the concepts that are relevant for
intelligent and resource-efficient systems. Our work will
continue in the future by integrating a large body of artificial
intelligence research into the conceptual model. In control
theory, the basic system models are autonomous and remote
control systems; in computer science, the basic system model
is distributed self-organizing computing; and in
communication theory, self-organizing networks. We further
note that in addition to the normal divergent trend in
research, there is also a converging trend, leading to similar
concepts in different disciplines. Cyber-physical systems in
computer science and the Tactile Internet in communication
theory can be seen as different forms of networked control
systems with haptic feedback, which have been available in
control theory in a rudimentary form since 1966 and have
their roots in teleoperation as far back as 1945. They are
also examples of telepresence, an
idea that was presented in 1980. We can also use the newest
theories of brain studies, morphogenesis, and second-order
cybernetics to improve self-organizing systems. We also
note that two independent tracks exist in brain studies, one in
neuroscience and another one in robotics. It would be very
useful to combine these efforts.
Future computing and communications systems need to
be based on intelligent use of resources. Since we are
approaching the fundamental limits in various fields, the
future development must be based on multiobjective
constrained optimization which can identify an optimal
combination for the use of multiple scarce resource types.
However, maintaining exponential predictions, such as those
of Moore and Keyes, is questionable. The limited basic
resources in most disciplines include materials, energy,
information, time, frequency, and space. Major
requirements, especially in communications, include
throughput, delay, and reliability, which are mutually
conflicting. When throughput is normalized with bandwidth,
power, and area, we obtain spectral, energy, and area
efficiency, respectively, that are also mutually conflicting
objectives. Control overhead is properly taken into account
in these metrics when it is included only in the expenditures
and the throughput includes only the data bits.
Society expects increasingly higher research impact,
which calls for multidisciplinary approaches in order to
efficiently solve the demanding problems at hand. The relevance of
research can be enhanced by systems thinking. We presented
an extensive bibliography of earlier reviews and books in
general system theory, decision theory, control theory,
computer science, and communication theory. Our
chronology covers the last hundred years. To support
conceptual analysis, we presented a modern hierarchy of
human-made systems. The hierarchy includes static, simple
dynamic, control, adaptive, learning, and self-organizing
systems. Autonomous and intelligent systems are advanced
forms of learning systems. Self-organizing systems may
consist of cooperating agents or robots so that they form a
society.
Agents are hierarchical systems that are divided into
reactive, proactive, and interacting (competitive or
cooperative) agents, and their mixed forms. Proactive
systems are able to plan, that is, to predict the future and
select the best course of action to achieve the goal. These
ideas are also useful in communication theory. The use of
abstraction levels is mandatory in complex systems, in
communications theory in the form of network graphs.
We observed that feedback (for learning), hierarchy (for
managing complexity), degree of centralization (for
scalability), optimization (for efficient use of resources), and
decision-making (for subjective preferences when the
optimum is not unique) form the cornerstones of future
systems. In communications, the highest potential for highly
intelligent systems is in hierarchical and distributed resource
management using different time, frequency, and spatial
resolutions at each hierarchy level. The hierarchy has been
shown to be optimal in minimizing entropy. Not only are
spectrum and energy seen as important resources, but also the
other basic resources, such as time and delay. The intrinsic
value is not the intelligence itself but the hierarchy of
systems providing different levels of autonomy. The
hierarchy level must be selected according to the needs,
since the higher we are in the hierarchy of systems, the
higher the complexity that is needed. In addition to closed-loop iterative
feedback systems, we also need possibly approximate and
suboptimal open-loop and one-shot solutions because of the
challenges with convergence time in the iterative systems.
The convergence time is not in general easy to predict and
depends on the number of degrees of freedom.
The multidisciplinary approach is demanding, and it is not
reasonable to educate all researchers as generalists.
Rather, the efficiency of our society is based on division of
work, and the experts should be coordinated with common
goals. This calls for systems thinking and knowledge
exchange between disciplines. Information acquisition tools
representing knowledge from multiple disciplines could
advance such efforts. The tools should be based on a unified
terminology and definitions for the disciplines in question.
Such a terminology and definitions are important for
personal communication between specialists in different
disciplines as well.
Our multidisciplinary view is the first step towards the
most advanced transdisciplinary view, which can be called
systems view. We will continue this endeavor with more
detailed conceptual analysis in order to produce a more
complete and more unified conceptual framework that
combines the bodies of knowledge from the five disciplines.
Our aim is that researchers from different disciplines can
place their efforts in this framework and identify the related
research that can help them to generate new knowledge that
advances the development of intelligent and resource-
efficient systems. The general, long-standing problem to be
tackled is optimization and distributed decision-making in an
uncertain, dynamic, and nonlinear environment with
mutually conflicting objectives.
ACKNOWLEDGMENT
Comments from Heikki Ailisto, Jarmo Alanen, Tapio
Frantti, Tapio Heikkilä, Hannu Heusala, Jyrki Huusko,
Marko Höyhtyä, Juhani Iivari, Pertti Järvensivu, Laszlo B.
Kish, Kari Leppälä, Teemu Leppänen, Ilkka Norros, Susanna
Pantsar, Pertti Raatikainen, and Alexandre Valentian are
gratefully acknowledged.
REFERENCES
[1] J. B. Moody and B. Nogrady, The Sixth Wave: How to Succeed in a
Resource-Limited World. Random House Australia, 2010.
[2] M. Wilenius, “Leadership in the sixth wave—excursions into the new
paradigm of the Kondratieff cycle 2010–2050,” European Journal of
Futures Research, vol. 2, Article 36, pp. 1-11, December 2014.
[3] “Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2016–2021,” Cisco, San Jose, CA, White Paper,
Feb. 2017.
[4] P. Checkland, Systems Thinking, Systems Practice, new ed.
Chichester, UK: John Wiley & Sons, 1999.
[5] G. N. Saridis, Hierarchically Intelligent Machines. Singapore: World
Scientific Pub Co, 2001.
[6] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach,
3rd ed. Upper Saddle River, NJ: Prentice-Hall, 2010.
[7] V. Hubka and W. E. Eder, Theory of Technical Systems: A Total
Concept Theory for Engineering Design. Berlin: Springer-Verlag,
1988.
[8] J. Figueira, S. Greco, and M. Ehrgott (Eds.), Multiple-Criteria
Decision Analysis: State of the Art Surveys. Boston, MA: Springer,
2005.
[9] S. S. Rao, Engineering Optimization: Theory and Practice, 4th ed. Hoboken, NJ: John Wiley & Sons, 2009.
[10] K. Ogata, Modern Control Engineering, 5th ed. Boston, MA:
Prentice-Hall, 2010.
[11] J. S. Albus and A. M. Meystel, Engineering of Mind: An Introduction
to the Science of Intelligent Systems. New York: John Wiley & Sons,
2001.
[12] G. A. Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control. Cambridge, MA: MIT Press, 2005.
[13] M. Sipser, Introduction to the Theory of Computation, 2nd ed.
Boston, MA: Thomson Course Technology, 2006.
[14] S. Haykin, Communication Systems, 4th ed. New York: John Wiley
& Sons, 2001.
[15] A. S. Tanenbaum and D. J. Wetherall, Computer Networks, 5th ed. Prentice-Hall, 2011.
[16] J. F. Kurose and K. D. Ross, Computer Networking: A Top-Down
Approach Featuring the Internet, 6th ed. Boston, MA: Addison-
Wesley, 2013.
[17] M. C. Jackson, “Fifty years of systems thinking for management,”
Journal of the Operational Research Society, vol. 60, Supplement 1,
pp. S24–S32, May 2009.
[18] W.-C. Hu, Multidisciplinary Perspectives on Telecommunications,
Wireless Systems, and Mobile Computing. Hershey, PA: Information
Science Reference, 2014.
[19] G. M. Dimirovski (Ed.), Complex Systems: Relationships Between
Control, Communications and Computing. Springer International
Publishing Switzerland, 2016.
[20] S. J. Kline, Conceptual Foundations of Multidisciplinary Thinking.
Stanford University Press, 1995.
[21] M. Minsky, “Telepresence,” OMNI Magazine, vol. 2, no. 6, pp. 45-
51, Jun. 1980.
[22] T. B. Sheridan, Telerobotics, Automation, and Human Supervisory
Control. Cambridge, MA: MIT Press, 1992
[23] K. D. Kim and P. R. Kumar, “Cyber–physical systems: A perspective
at the centennial,” Proc. IEEE, vol. 100, Special Centennial Issue, pp.
1287 – 1308, May 2012.
[24] M. Lévesque and D. Tipper, “A survey of clock synchronization over
packet-switched networks,” IEEE Communications Surveys &
Tutorials, vol. 18, no. 4, pp. 2926 - 2947, Fourth Quarter 2016.
[25] J. P. Muller, “Architectures and applications of intelligent agents: A
survey,” Knowledge Engineering Review, vol. 13, no. 4, pp. 353-380,
1998.
[26] D. Kortenkamp and R. Simmons, “Robotic systems architectures and
programming,” in Springer Handbook of Robotics, B. Siciliano and
O. Khatib (Eds.), Berlin: Springer, 2008, pp. 187-206.
[27] S. Dobson, R. Sterritt, P. Nixon, and M. Hinchey, “Fulfilling the
vision of autonomic computing,” Computer, vol. 43, pp. 35-41, Jan. 2010.
[28] R. A. Gupta and M.-Y. Chow, “Networked control system: Overview
and research trends,” IEEE Transactions on Industrial Electronics, vol. 57, pp. 2527-2535, Jul. 2010.
[29] A. S. Tanenbaum and M. van Steen, Distributed Systems: Principles
and Paradigms, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 2007.
[30] C. Lenay, “ ‘It’s so touching’: Emotional value in distal contact,”
International Journal of Design, vol. 4, pp. 15-25, no. 2, 2010.
[31] G. P. Fettweis, “The Tactile Internet: Applications and challenges,”
IEEE Vehicular Technology Magazine, vol. 9, pp. 64–70, Mar. 2014.
[32] G. Fettweis and S. Alamouti, “5G: Personal mobile internet beyond
what cellular did to telephony,” IEEE Communications Magazine,
vol. 52, no. 2, pp. 140 – 145, Feb. 2014.
[33] T. B. Sheridan, “Telerobotics,” Automatica, vol. 25, no. 4, pp. 487-
507, 1989.
[34] J. Baillieul and P. J. Antsaklis, “Control and communication challenges in networked real-time systems,” Proc. IEEE, vol. 95, no.
1, pp. 9-28, Jan. 2007.
[35] I. Elhajj, N. Xi, W. K. Fung, Y. H. Liu, W. J. Li, T. Kaga, and T. Fukuda, “Haptic information in Internet-based teleoperation,”
IEEE/ASME Transactions on Mechatronics, vol. 6, no. 3, pp. 295 -
304, Sep. 2001.
[36] R. Aracil, J. M. Azorín, M. Ferre, and C. Peña, “Bilateral control by
state convergence based on transparency for systems with time
delay,” Robotics and Autonomous Systems, vol. 61, pp. 86-94, Feb.
2013.
[37] P. A. Sharp and R. Langer, “Promoting convergence in biomedical
science,” Science, vol. 333, 20 July 2011, p. 527.
[38] G. Sarton, “The new humanism,” Isis, vol. 6, no. 1, pp. 9-42, 1924.
[39] M. Crete-Nishihata, R. Deibert, and A. Senft, “Not by technical
means alone: The multidisciplinary challenge of studying
information controls,” IEEE Internet Computing, vol. 17, no. 3, pp.
34 - 41, May-Jun. 2013.
[40] J. D. Barrow, Impossibility: The Limits of Science and the Science of
Limits. Oxford University Press, 1998.
[41] I. Arbnor and B. Bjerke, Methodology for Creating Business
Knowledge, 2nd ed. Sage Publications, 1997.
[42] B. C. K. Choi and A. W. P. Pak, “Multidisciplinarity,
interdisciplinarity and transdisciplinarity in health research, services,
education and policy: 1. Definitions, objectives, and evidence of
effectiveness,” Clin. Invest. Med., vol. 29, pp. 351–364, Dec. 2006.
[43] C. S. Wagner, J. D. Roessner, K. Bobb, J. T. Klein, K. W. Boyack, J.
Keyton, I. Rafols, and K Börner, “Approaches to understanding and
measuring interdisciplinary scientific research (IDR): A review of the
literature,” Journal of Informetrics, vol. 5, pp. 14–26, Jan. 2011.
[44] S. Chen, C. Arsenault, and V. Larivière, “Are top-cited papers more
interdisciplinary?” Journal of Informetrics, vol. 9, pp. 1034-1046,
Oct. 2015.
[45] F. N. Silva, F. A. Rodrigues, O. N. Oliveira, Jr., and L. da F. Costa,
“Quantifying the interdisciplinarity of scientific journals and fields,”
Journal of Informetrics, vol. 7, pp. 469-477, April 2013.
[46] J. S. Albus, “Outline for a theory of intelligence,” IEEE Transactions
on Systems, Man, and Cybernetics, vol. 21, pp. 473-509, May/June
1991.
[47] W. F. Truszkowski, M. G. Hinchey, J. L. Rash, and C. A. Rouff,
“Autonomous and autonomic systems: A paradigm for future space exploration missions,” IEEE Transactions on Systems, Man, and
Cybernetics, Part C (Applications and Reviews), vol. 36, no. 3, pp.
279 - 291, May 2006.
[48] S. Huberman, C. Leung, and T. Le-Ngoc, “Dynamic spectrum
management (DSM) algorithms for multi-user xDSL,” IEEE
Communications Surveys & Tutorials, vol. 14, pp. 109-130, First
Quarter 2012.
[49] C. D. Nguyen, S. Miles, A. Perini, P. Tonella, M. Harman, and M.
Luck, “Evolutionary testing of autonomous software agents,” Auton.
Agent Multi-Agent Syst., vol. 25, pp. 260–283, 2012.
[50] J. O. Kephart and D. M. Chess, “The vision of autonomic computing,” Computer, vol. 36, no. 1, pp. 41 – 50, Jan. 2003.
[51] S. Dobson et al., “A survey of autonomic communications,” ACM
Transactions on Autonomous and Adaptive Systems, vol. 1, no. 2, pp. 223-259, Dec. 2006.
[52] M. Schaefer, J. Vokriek, D. Pinotti, and F. Tango, “Multi-agent
traffic simulation for development and validation of autonomous car-to-car systems” in Autonomic Road Transport Systems, T. L.
McCluskey, A. Kotsialos, J. P. Muller, F. Klugl, O. Rana, and R.
Schumann (Eds.), Springer International Publishing Switzerland 2016, pp. 165-180.
[53] H. Nyquist, “Regeneration theory,” Bell System Technical Journal,
vol. 11, pp. 126-147, Jan. 1932.
[54] R. Miettinen, “The concept of experiential learning and John
Dewey’s theory of reflective thought and action,” International
Journal of Lifelong Education, vol. 19, pp. 54-72, Jan.-Feb. 2000.
[55] P. Eykhoff, “Adaptive and optimalizing control systems,” IEEE
Transactions on Automatic Control, pp. 148-152, June 1960.
[56] F. Osinga, Science, Strategy and War: The Strategic Theory of John
Boyd. Delft, the Netherlands: Eburon Academic Publishers, 2005.
[57] M. R. Endsley, “Towards a theory of situation awareness in dynamic
systems,” Human Factors, vol. 37, no. 1, pp. 32-64, Mar. 1995.
[58] J. Mitola and G. Q. Maguire, “Cognitive radio: Making software
radios more personal,” IEEE Personal Communications, vol. 6, no.
2, pp. 13 – 18, Aug. 1999.
[59] S. Haykin, “Cognitive radio: Brain-empowered wireless
communications,” IEEE Journal on Selected Areas in
Communications, pp. 201-220, February 2005.
[60] J. P. Muller, The Design of Intelligent Agents: A Layered Approach.
Berlin: Springer, 1996.
[61] D. Ye, M. Zhang, and A. V. Vasilakos, “A survey of self-organization
mechanisms in multiagent systems,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, pp. 441-461, Mar. 2017.
[62] R. T. Marler and J. S. Arora, “Survey of multi-objective optimization
methods for engineering,” Struct. Multidiscipl. Optim., vol. 26, no. 6,
pp. 369-395, Apr. 2004.
[63] M. Mesarovic, D. Macko, and Y. Takahara, Theory of Hierarchical,
Multilevel Systems. New York: Academic Press, 1970.
[64] K. D. Frampton, O. N. Baumann, and P. Gardonio, “A comparison of
decentralized, distributed, and centralized vibro-acoustic control,”
The Journal of the Acoustical Society of America, vol. 128, pp. 2798–
2806, Nov. 2010.
[65] ITU-R Rec. M.2083 “IMT Vision – Framework and overall
objectives of the future development of IMT for 2020 and beyond,”
ITU-R, Sep. 2015.
[66] R. Courtland, “Transistors could stop shrinking in 2021,” IEEE Spectrum, vol. 53, pp. 9-11, Sep. 2016.
[67] A. Mämmelä and A. Anttonen, “Why will computing power need
particular attention in future wireless devices?” IEEE Circuits and
Systems Magazine, vol. 17, pp. 12-26, First Quarter 2017.
[68] R. S. Williams, “What’s next?” Computing in Science &
Engineering, vol. 19, no. 2, pp. 7 - 13, Mar.-Apr. 2017.
[69] I. L. Markov, “Limits on fundamental limits to computation,” Nature,
vol. 512, pp. 147–154, 14 Aug. 2014.
[70] H. J. Kimble, “The quantum Internet,” Nature, vol. 453, pp. 1023-
1030, Jun. 2008.
[71] L. von Bertalanffy, General System Theory: Foundations,
Development, Applications, rev. ed. New York: George Braziller,
1971.
[72] M. P. Ristenbatt, “Alternatives in digital communications,” Proc.
IEEE, vol. 61, pp. 703–721, Jun. 1973.
[73] K. E. Boulding, “General systems theory,” Management Science, vol. 2, no. 3, pp. 197-208, Apr. 1956.
[74] P. B. Gove (Ed.), Webster’s Third New International Dictionary of
the English Language, Unabridged. Springfield, MA: Merriam-Webster, 1993.
[75] G. Buttazzo, “Artificial consciousness: Utopia or real possibility?”
Computer, vol. 34, pp. 24-30, Jul. 2001.
[76] B. J. Baars and S. Franklin, “How conscious experience and working
memory interact,” Trends in Cognitive Sciences, vol. 7, pp. 166-172,
Apr. 2003.
[77] A. D. Hall, A Methodology for Systems Engineering. Princeton, NJ:
D. Van Nostrand Company, 1962.
[78] A. Kossiakoff, W. N. Sweet, S. J. Seymour, and S. M. Biemer,
Systems Engineering: Principles and Practice, 2nd ed. Hoboken, NJ:
John Wiley & Sons, 2011.
[79] C. B. Nielsen, P. G. Larsen, J. Fitzgerald, J. Woodcock, and J.
Peleska, “Systems of systems engineering: Basic concepts, model-
based techniques, and research directions,” ACM Computing Surveys,
vol. 48, no. 2, pp. 18:1-18:41, Sep. 2015
[80] R. Kicinger, T. Arciszewski, and K. De Jong, “Evolutionary
computation and structural design: A survey of the state-of-the-art,”
Computers and Structures, vol. 83, pp. 1943–1978, 2005.
[81] H. A. Orr, “The genetic theory of adaptation: A brief history,” Nature
Reviews, vol. 6, pp. 119-127, Feb. 2005.
[82] E. Karsenti, “Self-organization in cell biology: A brief history,”
Nature Reviews, vol. 9, pp. 255-262, Mar. 2008.
[83] N. Wiener, Cybernetics: Or Control and Communication in the
Animal and the Machine, 2nd ed. MIT Press, 1965.
[84] W. R. Ashby, An Introduction to Cybernetics. London: Chapman &
Hall, 1956.
[85] R. G. Mancilla, “Introduction to sociocybernetics (Part 1): Third
order cybernetics and a basic framework for society,” Journal of
Sociocybernetics, vol. 9, pp. 35-56, 2011.
[86] C. Mead and L. Conway, Introduction to VLSI Systems. Reading,
MA: Addison-Wesley, 1980.
[87] J. Allen, “Computer architecture for digital signal processing,” Proc.
IEEE, vol. 73, no. 5, pp. 852-873, May 1985.
[88] J. M. Rabaey, Low Power Design Essentials. New York: Springer-
Verlag, 2009.
[89] O. Wolkenhauer, H. Kitano, and K.-H. Cho, “System biology:
Looking at opportunities and challenges in applying systems theory
to molecular and cell biology,” IEEE Control Systems Magazine,
Aug. 2003.
[90] P. Anderson, “Perspective: Complexity theory and organization
science,” Organization Science, vol. 10, no. 3, pp. 216-232, May-Jun.
1999.
[91] S. M. Manson, “Simplifying complexity: A review of complexity
theory,” Geoforum, vol. 32, pp. 405-414, 2001.
[92] B. Burnes, “Complexity theories and organizational change,”
International Journal of Management Reviews, vol. 7, no. 2, pp. 73-
90, Jun. 2005.
[93] C. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-
Wesley, 1989.
[94] C. Mead, “Neuromorphic electronic systems,” Proc. IEEE, vol. 78,
no. 10, pp. 1629-1630, Oct. 1990.
[95] D. H. Ballard, An Introduction to Natural Computation. MIT Press,
2000.
[96] J. F. V. Vincent, O. A. Bogatyreva, N. R. Bogatyrev, A. Bowyer, and A.-K. Pahl, “Biomimetics: Its practice and theory,” Journal of the
Royal Society Interface, vol. 3, pp. 471–482, 2006.
[97] R. Sarpeshkar, Ultra Low Power Bioelectronics: Fundamentals,
Biomedical Applications, and Bio-Inspired Systems. Cambridge, UK:
Cambridge University Press, 2010.
[98] M. Kaiser, “Brain architecture: A design for natural computation,”
Phil. Trans. R. Soc. A vol. 365, pp. 3033–3045, Dec. 2007.
[99] Y. Wang, W. Kinsner, and D. Zhang, “Contemporary cybernetics and
its facets of cognitive informatics and computational intelligence,”
IEEE Transactions on Systems, Man, And Cybernetics—Part B:
Cybernetics, vol. 39, pp. 823-833, Aug. 2009.
[100] E. Bullmore and O. Sporns, “The economy of brain network
organization,” Nature Reviews: Neuroscience, vol. 13, pp. 336-349,
Apr. 2012.
[101] C. Eliasmith, How to Build a Brain: A Neural Architecture for
Biological Cognition. New York: Oxford University Press, 2013.
[102] A. Goriely et al., “Mechanics of the brain: Perspectives, challenges,
and opportunities,” Biomech. Model. Mechanobiol., vol. 14, pp. 931–
965, 2015.
[103] A. P. Wickens, A History of the Brain: From Stone Age Surgery to
Modern Neuroscience. London: Psychology Press, 2015.
[104] D. Fox, “Limits of intelligence,” Scientific American, Physics at the
Limits, Special Collector’s Edition, 2011, W. W. Gibbs (Ed.), pp.
104-111.
[105] R. J. Kosinski, “A literature review on reaction time,” Technical
Note, Clemson University, Clemson, SC, Sep. 2013.
[106] S. I. Gass and A. A. Assad, An Annotated Timeline of Operations
Research: An Informal History. Berlin: Springer, 2005.
[107] A. Tsoukias, “From decision theory to decision aiding methodology,”
European Journal of Operational Research, vol. 187, pp. 138–161,
2008.
[108] J. Rosenhead, “Reflections on fifty years of operational research,”
Journal of the Operational Research Society, vol. 60, pp. S5-S15, May 2009.
[109] H. W. Ittman, “Recent developments in operations research: A
personal perspective,” vol. 25, no. 2, pp. 87–105, 2009.
[110] M. M. Köksalan, J. Wallenius, and S. Zionts, Multiple Criteria
Decision Making: From Early History to the 21st Century.
Singapore: World Scientific Publishing, 2011.
[111] C. L. Byrne, Applied Iterative Methods. Wellesley, MA: A K Peters,
2008.
[112] R. Roy, S. Hinduja, and R. Teti, “Recent advances in engineering
design optimisation: Challenges and future trends,” CIRP Annals -
Manufacturing Technology, vol. 57, pp. 697–715, 2008.
[113] E. Björnson, E. Jorswieck, M. Debbah, and B. Ottersten,
“Multiobjective signal processing optimization: The way to balance
conflicting metrics in 5G systems,” IEEE Signal Processing
Magazine, vol. 31, pp. 14 – 23, Nov. 2014.
[114] Z. Fei, B. Li, S. Yang, C. Xing, H. Chen, and L. Hanzo, “A survey of
multi-objective optimization in wireless sensor networks: Metrics,
algorithms, and open problems,” IEEE Communications Surveys &
Tutorials, vol. 19, pp. 550-586, First Quarter 2017.
[115] Y.-C. Ho, “Team decision theory and information structures,” Proc. IEEE, vol. 68, no. 6, pp. 644 - 654, June 1980.
[116] W. Saad, Z. Han, M. Debbah, A. Hjorungnes, and T. Basar,
“Coalitional game theory for communication networks,” IEEE Signal
Processing Magazine, vol. 26, no. 5, pp. 77 - 97, 2009.
[117] M. Srinivas and L. M. Patnaik, “Genetic algorithms: A survey,”
Computer, vol. 27, no. 6, pp. 17 - 26, Jun. 1994.
[118] C. A. C. Coello, “Evolutionary multi-objective optimization: A historical view of the field,” IEEE Computational Intelligence
Magazine, pp. 28-36, Feb. 2006.
[119] C. A. C. Coello, G. B. Lamont, and D. A. Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems, 2nd
ed. New York: Springer, 2007.
[120] O. Mayr, “The origins of feedback control,” Scientific American, vol.
223, pp. 110-118, Oct. 1970.
[121] S. Bennett, “A brief history of automatic control,” IEEE Control
Systems, vol. 16, pp. 17–25, Jun. 1996.
[122] L. Nocks, The Robot: The Life Story of a Technology. Westport, CT:
Greenwood Press, 2007.
[123] C. Bissell, “History of automatic control,” in Springer Handbook of Automation, S. Y. Nof (Ed.), Berlin: Springer, 2009, pp. 53-69.
[124] M. Mesarovic, “Multilevel systems and concepts in process control,”
Proc. IEEE, vol. 58, pp. 111-125, Jan. 1970.
[125] N. Sandell, P. Varaiya, M. Athans, and M. Safonov,
“Survey of decentralized control methods for large scale systems,”
IEEE Transactions on Automatic Control, vol. AC-23, pp. 108-128, Apr. 1978.
[126] G. N. Saridis, “Toward the realization of intelligent controls,” Proc.
IEEE, vol. 67, pp. 1115-1133, Aug. 1979.
[127] W. Findeisen, F. N. Bailey, M. Brdyś, K. Malinowski, P. Tatjewski,
and A. Wozniak, Control and Coordination in Hierarchical Systems.
New York: John Wiley & Sons, 1980.
[128] P. J. Antsaklis and K. M. Passino (Eds.), An Introduction to
Intelligent and Autonomous Control. Norwell, MA: Kluwer
Academic Publishers, 1993.
[129] D. D. Siljak and A. I. Zecevic, “Control of large-scale systems:
Beyond decentralized feedback,” Annual Reviews in Control, vol. 29,
no. 2, pp. 169–179, 2005.
[130] L. Bakule, “Decentralized control: An overview,” Annual Reviews in Control, vol. 32, pp. 87–98, April 2008.
[131] R. Scattolini, “Architectures for distributed and hierarchical model
predictive control – A review,” Journal of Process Control, vol. 19, pp. 723–731, 2009.
[132] Y. Z. Tsypkin, Adaptation and Learning in Automatic Systems. New
York: Academic Press, 1972.
[133] B. Widrow and S. D. Stearns, Adaptive Signal Processing.
Englewood Cliffs, NJ: Prentice-Hall, 1985.
[134] S. Haykin, Adaptive Filter Theory, 5th ed. Englewood Cliffs, NJ: Prentice-Hall, 2013.
[135] P. J. Antsaklis, K. M. Passino, and S. J. Wang, “Towards intelligent
autonomous control systems: Architecture and fundamental issues,” Journal of Intelligent and Robotic Systems, vol. 1, pp. 315-342, 1989.
[136] A. M. Meystel and J. S. Albus, Intelligent Systems: Architecture,
Design, and Control. New York: John Wiley & Sons, 2002.
[137] W. Zhang, M. S. Branicky, and S. M. Phillips, “Stability of
networked control systems,” IEEE Control Systems Magazine, vol.
21, pp. 84-99, Feb. 2001.
[138] G. C. Walsh and H. Ye, “Scheduling of networked control systems,”
IEEE Control Systems Magazine, vol. 21, no. 1, pp. 57-65, Feb. 2001.
[139] J. P. Hespanha, P. Naghshtabrizi, and Y. Xu, “A survey of recent
results in networked control systems,” Proc. IEEE, vol. 95, no. 1, pp.
138–162, Jan. 2007.
[140] L. Zhang, H. Gao, and O. Kaynak, “Network-induced constraints in
networked control systems—A survey,” IEEE Transactions on
Industrial Informatics, vol. 9, pp. 403-416, Feb. 2013.
[141] Y. Q. Xia, Y. L. Gao, L. P. Yan, and M. Y. Fu, “Recent progress in
networked control systems—A survey,” International Journal of
Automation and Computing, vol. 12, pp. 343-367, Aug. 2015.
[142] B. Galloway and G. P. Hancke, “Introduction to industrial control networks,” IEEE Communications Surveys & Tutorials, vol. 15, pp.
860-880, Second Quarter 2013.
[143] K.-Y. You and L.-H. Xie, “Survey of recent progress in networked control systems,” Acta Automatica Sinica, vol. 39, pp. 101-117, Feb.
2013.
[144] E. Garcia, P. J. Antsaklis, and L. A. Montestruque, Model-Based
Control of Networked Systems. New York: Springer, 2014.
[145] T. B. Sheridan, “Space teleoperation through time delay: Review and
prognosis,” IEEE Transactions on Robotics and Automation, vol. 9,
pp. 592-606, Oct. 1993.
[146] M. H. Lee and H. R. Nicholls, “Tactile sensing for mechatronics - A
state of the art survey,” Mechatronics, vol. 9, pp. 1-31, 1999.
[147] P. F. Hokayem and M. W. Spong, “Bilateral teleoperation: An
historical survey,” Automatica, vol. 42, no. 12, pp. 2035-2057, 2006.
[148] A. El Saddik, “The potential of haptics technologies,” IEEE
Instrumentation & Measurement Magazine, vol. 10, no. 1, pp. 10-17,
Feb. 2007.
[149] A. El Saddik, M. Orozco, M. Eid, and J. Cha, Haptics Technologies:
Bringing Touch to Multimedia. Berlin: Springer, 2011.
[150] R. S. Dahiya, G. Metta, M. Valle, and G. Sandini, “Tactile sensing—
From humans to humanoids,” IEEE Transactions on Robotics, vol.
26, no. 1, pp. 1-20, Feb. 2010.
[151] R. S. Dahiya and M. Valle, Robotic Tactile Sensing: Technologies
and System. New York: Springer, 2013.
[152] M. A. Eid and H. Al Osman, “Affective haptics: Current research
and future directions,” IEEE Access, vol. 4, pp. 26 - 40, 2016.
[153] C. Attig, N. Rauh, T. Franke, and J. F. Krems, “System latency
guidelines then and now - Is zero latency really considered
necessary?” in Engineering Psychology and Cognitive Ergonomics:
Cognition and Design, D. Harris, Ed. Cham, Switzerland: Springer,
2017, pp. 3-33.
[154] R. Murphy, Introduction to AI Robotics. Cambridge, MA: MIT Press,
2000.
[155] G. N. DeSouza and A. C. Kak, “Vision for mobile robot navigation:
A survey,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 24, no. 2, pp. 237-267, Feb. 2002.
[156] T. Fong, I. Nourbakhsh, and K. Dautenhahn, “A survey of socially
interactive robots,” Robotics and Autonomous Systems. vol. 42, pp.
143–166, 2003.
[157] R. C. Luo, K. L. Su, S. H. Shen, and K. H. Tsai, “Networked
intelligent robots through the Internet: Issues and opportunities,”
Proc. IEEE, vol. 91, no. 3, pp. 371 - 382, Mar. 2003.
[158] F. Bonin-Font, A. Ortiz, and G. Oliver, “Visual navigation for mobile
robots: A survey,” J. Intell. Robot. Syst., vol. 53, pp. 263-296, 2008.
[159] Z. Liu, Y. Zhang, X. Yu, and C. Yuan, “Unmanned surface vehicles:
An overview of developments and challenges,” Annual Reviews in
Control, vol. 41, pp. 71-93, 2016.
[160] V. Pratt, Thinking Machines: Evolution of Artificial Intelligence.
Oxford, UK: Blackwell, 1987.
[161] P. McCorduck, Machines Who Think: A Personal Inquiry into the
History and Prospects of Artificial Intelligence, 2nd ed. Natick, MA:
A K Peters, 2004.
[162] N. J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas
and Achievements. New York: Cambridge University Press, 2010.
[163] Z. Michalewicz and D. B. Fogel, How to Solve It: Modern Heuristics,
2nd ed. Springer, 2004.
[164] S. Legg and M. Hutter, “Universal intelligence: A definition of machine intelligence,” Minds & Machines, vol. 17, pp. 391–444,
2007.
[165] S. Sun, “A survey of multi-view machine learning,” Neural Comput.
& Applic., vol. 23, pp. 2031–2038, 2013.
[166] S. Marsland, Machine Learning: An Algorithmic Perspective, 2nd ed.
New York: Chapman & Hall/CRC, 2015.
[167] K. S. Decker, “Distributed problem solving techniques: A survey,”
IEEE Transactions on Systems, Man and Cybernetics, vol. SMC-17,
pp. 729-740, Sep.-Oct. 1987.
[168] D. Datla et al., “Wireless distributed computing: A survey of research
challenges,” IEEE Communications Magazine, vol. 50, pp. 144-152,
Jan. 2012.
[169] R. C. Eberhart and Y. Shi, Computational Intelligence: Concepts to
Implementations. Burlington, MA: Morgan Kaufmann, 2007.
[170] R. V. Kulkarni, A. Forster, and G. K. Venayagamoorthy,
“Computational intelligence in wireless sensor networks: A survey,”
IEEE Communications Surveys & Tutorials, vol. 13, pp. 68 - 96, First
Quarter 2011.
[171] S. A. Cook, “An overview of computational complexity,”
Communications of the ACM, vol. 26, pp. 401-408, Jun. 1983.
[172] L. Fortnow and S. Homer, “A short history of computational
complexity,” Bulletin of the European Association for Theoretical
Computer Science, vol. 80, pp. 95-133, Jun. 2003.
[173] O. Goldreich, Computational Complexity: A Conceptual Perspective.
New York: Cambridge University Press, 2008.
[174] A. K. Dewdney, Beyond Reason: Eight Great Problems That Reveal
the Limits of Science. Hoboken, NJ: John Wiley & Sons, 2004.
[175] P. Cockshott, L. M. Mackenzie, and G. Michaelson, Computation and Its Limits. Oxford, UK: Oxford University Press, 2015.
[176] S. B. Cooper and M. I. Soskova, The Incomputable: Journeys Beyond
the Turing Barrier. Cham, Switzerland: Springer, 2017.
[177] N. Zheng et al., “Hybrid-augmented intelligence: Collaboration and
cognition,” Frontiers of Information Technology & Electronic
Engineering, vol. 18, no. 2, pp. 153-179, Feb. 2017.
[178] D. J. Cook, J. C. Augusto, and V. R. Jakkula, “Ambient intelligence: Technologies, applications, and opportunities,” Pervasive and
Mobile Computing, vol. 5, pp. 277-298, 2009.
[179] F. Sadri, “Ambient intelligence: A survey,” ACM Computing Surveys, vol. 43, no. 4, pp. 36.1-36.66, Oct. 2011.
[180] G. D. Abowd and A. K. Dey, “Towards a better understanding of
context and context-awareness,” in Lecture Notes in Computer Science 1707: Handheld and Ubiquitous Computing, H. W.
Gellersen (Ed.), Berlin: Springer, 1999, pp. 304-307.
[181] D. Zhang, H. Huang, C.-F. Lai, X. Liang, Q. Zou, and M. Guo, “Survey on context-awareness in ubiquitous media,” Multimed. Tools
Appl., vol. 67, pp. 179–211, 2013.
[182] A. Kovacevic, O. Heckmann, N. C. Liebau, and R. Steinmetz,
“Location awareness—Improving distributed multimedia
communication,” Proceedings of the IEEE, vol. 96, pp. 131 - 142,
Jan. 2008.
[183] M. Wooldridge and N. R. Jennings, “Intelligent agents: Theory and
practice,” Knowledge Engineering Review, vol. 10, no. 2, pp. 115-
152, 1995.
[184] S. Franklin and A. Graesser, “Is it an agent, or just a program?: A
taxonomy for autonomous agents,” in Intelligent Agents III. J. P.
Muller, M. Wooldridge, and N. R. Jennings, Eds. Berlin, Germany:
Springer-Verlag, 1997, pp. 21–35.
[185] Y. U. Cao, A. S. Fukunaga, and A. B. Kahng, “Cooperative mobile
robotics: Antecedents and directions,” Autonomous Robots, vol. 4,
no. 1, pp. 7–27, 1997.
[186] N. R. Jennings, K. Sycara, and M. Wooldridge, “A roadmap of agent
research and development,” Autonomous Agents and Multi-Agent
Systems, vol. 1, pp. 7–38, 1998.
[187] K. P. Sycara, “Multiagent systems,” AI Magazine, vol. 19, no. 2,
1998.
[188] P. Stone and M. Veloso, “Multiagent systems: A survey from a machine
learning perspective,” Autonomous Robots, vol. 8, no. 3, pp. 345–
383, Jun. 2000.
[189] M. Wooldridge, An Introduction to Multiagent Systems, 2nd ed. John
Wiley & Sons, 2009.
[190] Y. Cao, W. Yu, W. Ren, and G. Chen, “An overview of recent
progress in the study of distributed multi-agent coordination,” IEEE
Trans. Ind. Informat., vol. 9, pp. 427–438, Feb. 2013.
[191] G. Weiss (Ed.), Multiagent Systems: A Modern Approach to
Distributed Artificial Intelligence. Cambridge, MA: MIT Press,
1999.
[192] Z. Yan, N. Jouandeau, and A. A. Cherif, “A survey and analysis of
multi-robot coordination,” International Journal of Advanced
Robotic Systems, vol. 10, no. 399, pp. 1-18, 2013.
[193] M. C. Huebscher and J. A. McCann, “A survey of autonomic
computing—degrees, models, and applications,” ACM Computing Surveys, vol. 40, pp. 7:1-7:28, Aug. 2008.
[194] A. K. Noor, “Potential of cognitive computing and cognitive
systems,” Open Eng., vol. 5, pp. 75–88, 2015.
[195] P. Derler, E. A. Lee, and A. S. Vincentelli, “Modeling cyber–physical
systems,” Proc. IEEE, vol. 100, no. 1, pp. 13-28, Jan. 2012.
[196] S. K. Khaitan and J. D. McCalley, “Design techniques and
applications of cyberphysical systems: A survey,” IEEE Systems
Journal, vol. 9, no. 2, pp. 350-365, Jun. 2015.
[197] D. S. Nunes, P. Zhang, and J. S. Silva, “A survey on human-in-the-
loop applications towards an Internet of All,” IEEE Communications
Surveys & Tutorials, vol. 17, no. 2, pp. 944-965, Second Quarter
2015.
[198] E. T. Jaynes, Probability Theory: The Logic of Science. New York:
Cambridge University Press, 2003.
[199] A. Huurdeman, The Worldwide History of Telecommunications.
Hoboken, NJ: John Wiley & Sons, 2003.
[200] T. K. Sarkar, R. Mailloux, A. A. Oliner, M. Salazar-Palma, and D. L.
Sengupta, History of Wireless. Hoboken, NJ: John Wiley & Sons,
2006.
[201] P. R. Kumar, “New technological vistas for systems and control: The
example of wireless networks,” IEEE Control Systems Magazine,
vol. 21, no. 1, pp. 24-37, Feb. 2001.
[202] H. Koivo and M. Elmusrati, Systems Engineering in Wireless
Communications. Chichester, UK: John Wiley & Sons, 2009.
[203] D. E. Comer, Internetworking with TCP/IP, vol. I, 4th ed. Essex, UK:
Pearson, 2000.
[204] K. R. Fall and W. R. Stevens, TCP/IP Illustrated, Volume 1: The
Protocols. Pearson, 2012.
[205] H. Holma and A. Toskala, LTE for UMTS: Evolution to the LTE-
Advanced, 2nd ed. Chichester, West Sussex, UK: John Wiley & Sons,
2011.
[206] E. Dahlman, S. Parkvall, and J. Skold, 4G: LTE/LTE-Advanced for Mobile Broadband, 2nd ed. Academic Press, 2014.
[207] S. Benedetto, E. Biglieri, and V. Castellani, Principles of Digital
Transmission. New York: Kluwer, 1999.
[208] J. G. Proakis, Digital Communications, 4th ed. New York: McGraw-
Hill, 2001.
[209] B. Sklar, Digital Communications, 2nd ed. Upper Saddle River, NJ:
Prentice-Hall, 2001.
[210] M. E. J. Newman, “The structure and function of complex networks,”
SIAM Review, vol. 45, no. 2, pp. 167–256, 2003.
[211] W. Ni, I. B. Collings, J. Lipman, X. Wang, M. Tao, and M.
Abolhasan, “Graph theory and its applications to future network
planning: Software-defined online small cell management,” IEEE
Wireless Communications, vol. 22, pp. 52 - 60, 2015.
[212] A.-L. Barabasi, Network Science. Cambridge, UK: Cambridge
University Press, 2016.
[213] R. Jain and S. Paul, “Network virtualization and software defined
networking for cloud computing: A survey,” IEEE Communications
Magazine, vol. 51, pp. 24-31, Nov. 2013.
[214] N. Feamster, J. Rexford, and E. Zegura, “The road to SDN: An
intellectual history of programmable networks,” ACM Queue, vol.
11, no. 12, pp. 1-21, Dec. 2013.
[215] N. A. Jagadeesan and B. Krishnamachari, “Software-defined
networking paradigms in wireless networks: A survey,” ACM
Computing Surveys, vol. 47, pp. 27:1-27:11, Nov. 2014.
[216] Y. Jarraya, T. Madi, and M. Debbabi, “A survey and a layered
taxonomy of software-defined networking,” IEEE Communication
Surveys & Tutorials, vol. 16, pp. 1955-1980, Fourth Quarter 2014.
[217] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T.
Turletti, “A survey of software-defined networking: Past present, and
future of programmable networks,” IEEE Communications Surveys
& Tutorials, vol. 16, pp. 1617-1634, Third Quarter 2014.
[218] D. Kreutz, F. M. V. Ramos, and P. E. Verissimo, “Software-defined
networking: A comprehensive survey,” Proc. IEEE, vol. 103, pp. 14
– 76, Jan. 2015.
[219] W. Xia, Y. Wen, C. H. Foh, D. Niyato, and H. Xie, “A survey on
software-defined networking,” IEEE Communications Surveys &
Tutorials, vol. 17, pp. 27-51, First Quarter 2015.
[220] A. Mohamed, O. Onireti, M. A. Imran, A. Imran, and R. Tafazolli,
“Control-data separation architecture for cellular radio access
networks: A survey and outlook,” IEEE Communication Surveys &
Tutorials, vol. 18, pp. 446-465, First Quarter 2016.
[221] R. E. Kahn, S. A. Gronemeyer, J. Burchfiel, and R. C. Kunzelman,
“Advances in packet radio technology,” Proc. IEEE, vol. 66, pp.
1468 - 1496, Nov. 1978.
[222] T. Robertazzi and P. Sarachik, “Self-organizing communication
networks,” IEEE Communications Magazine, vol. 24, pp. 28–33, Jan. 1986.
[223] R. Ramanathan and J. Redi, “A brief overview of ad hoc networks:
Challenges and directions,” IEEE Communications Magazine, vol. 40, no. 5, pp. 20-22, May 2002.
[224] C. Prehofer and C. Bettstetter, “Self-organization in communication
networks: Principles and design paradigms,” IEEE Communications
Magazine, vol. 43, no. 7, pp. 78–85, 2005.
[225] K. L. Mills, “A brief survey of self-organization in wireless sensor
networks,” Wireless Commun. Mobile Comput., vol. 7, no. 7, pp.
823–834, 2007. [226] F. Dressler, “A study of self-organization mechanisms in ad hoc and
sensor networks,” Comput. Commun., vol. 31, no. 13, pp. 3018–3029,
2008. [227] O. G. Aliu, A. Imran, M. A. Imran, and B. Evans, “A survey of self
organisation in future cellular networks,” IEEE Communications
Surveys & Tutorials, vol. 15, no. 1, pp. 336-361, First Quarter 2013.
[228] Z. Zhang, K. Long, and J. Wang, “Self-organization paradigms and
optimization approaches for cognitive radio technologies: A survey,”
IEEE Wireless Commun., vol. 20, no. 2, pp. 36–42, Apr. 2013.
[229] M. Conti and S. Giordano, “Mobile ad hoc networking: Milestones, challenges, and new research directions,” IEEE Commun. Mag., pp. 85-96, Jan.
2014.
[230] A. J. Fehske, I. Viering, J. Voigt, C. Sartori, S. Redana, and G. P.
Fettweis, “Small-cell self-organizing wireless networks,” Proc.
IEEE, vol. 102, pp. 334 - 350, March 2014.
[231] R. Li, Z. Zhao, X. Zhou, G. Ding, Y. Chen, Z. Wang, and H. Zhang,
“Intelligent 5G: When cellular networks meet artificial intelligence,”
IEEE Wireless Communications, vol. 24, no. 5, pp. 175-183, Oct.
2017.
[232] B. Jennings, S. van der Meer, S. Balasubramaniam, D. Botvich, M.
O Foghlu, W. Donnelly, and J. Strassner, “Towards autonomic
management of communications networks,” IEEE Communications
Magazine, vol. 45, pp. 112 - 121, Oct. 2007.
[233] N. Samaan and A. Karmouch, “Towards autonomic network
management: An analysis of current and future research directions,”
IEEE Communications Surveys & Tutorials, vol. 11, pp. 22-36, Third
Quarter 2009.
[234] Y. Zhao, S. Mao, J. O. Neel, and J. H. Reed, “Performance evaluation
of cognitive radios: Metrics, utility functions, and methodology,”
Proc. IEEE, vol. 97, pp. 642-659, Apr. 2009.
[235] B. Wang and K. J. R. Liu, “Advances in cognitive radio networks: A
survey,” IEEE Journal of Selected Topics in Signal Processing, vol.
5, no. 1, pp. 5-23, Feb. 2011.
[236] Z. Movahedi, M. Ayari, R. Langar, and G. Pujolle, “A survey of
autonomic network architectures and evaluation criteria,” IEEE
Communications Surveys & Tutorials, vol. 14, pp. 464-490, Second
Quarter 2012.
[237] L. Gavrilovska, V. Atanasovski, I. Macaluso, and L. A. DaSilva, “Learning and reasoning in cognitive radio networks,” IEEE
Communications Surveys & Tutorials, vol. 15, pp. 1761-1777, Fourth
Quarter 2013.
[238] A. H. Sayed, “Adaptive networks,” Proc. IEEE, vol. 102, pp. 460-
497, Apr. 2014.
[239] X. Wang, X. Li, and V. C. M. Leung, “Artificial intelligence-based techniques for emerging heterogeneous network: State of the arts,
opportunities, and challenges,” IEEE Access, vol. 3, pp. 1379-1391, 2015.
[240] P. Makris, D. N. Skoutas, and C. Skianis, “A survey on context-aware
mobile and wireless networking: On networking and computing
environments’ integration,” IEEE Communications Surveys &
Tutorials, vol. 15, pp. 362-386, First Quarter 2013.
[241] E. Steinbach, S. Hirche, M. Ernst, F. Brandi, R. Chaudhari, J.
Kammerl, and I. Vittorias, “Haptic communications,” Proc. IEEE,
vol. 100, no. 4, pp. 937 - 956, Apr. 2012.
[242] M. Maier, M. Chowdhury, B. P. Rimal, and D. P. Van, “The Tactile Internet: Vision, recent progress, and open challenges,” IEEE
Communications Magazine, vol. 54, pp. 138-145, May 2016.
[243] M. Simsek, A. Aijaz, M. Dohler, J. Sachs, and G. Fettweis, “5G-
enabled Tactile Internet,” IEEE J. Sel. Areas. Commun., vol. 34, no.
3, pp. 460-473, Mar. 2016.
[244] A. Aijaz, M. Dohler, A. H. Aghvami, V. Friderikos, and M. Frodigh,
“Realizing the Tactile Internet: Haptic communications over next
generation 5G cellular networks,” IEEE Wireless Communications, vol. 24, no. 2, pp. 82-89, Apr. 2017.
[245] D. van den Berg et al., “Challenges in haptic communications over
the Tactile Internet,” IEEE Access, vol. 5, pp. 23502-23518, 2017.
[246] W. Lindsey, F. Ghazvinian, W. C. Hagmann, and K. Dessouky,
“Network synchronization,” Proc. IEEE, vol. 73, pp. 1445-1467,
Oct. 1985.
[247] S. Bregni, “A historical perspective on network synchronization,” IEEE Communications Magazine, vol. 36, pp. 158 - 166, June 1998.
[248] S. Chen and K. Nahrstedt, “An overview of quality-of-service routing
for next-generation high-speed networks: Problems and solutions,”
IEEE Network, vol. 12, no. 6, pp. 64-79, Nov.-Dec. 1998.
[249] J. Rexford, A. Greenberg, G. Hjalmtysson, D. A. Maltz, A. Myers,
G. Xie, J. Zhan, and H. Zhang, “Network-wide decision making:
Toward a wafer-thin control plane,” in Proc. HotNets’04.
[250] P. Rysavy, “Challenges and considerations in defining spectrum
efficiency,” Proc. IEEE, vol. 102, pp. 386-392, Mar. 2014.
[251] Z. Hasan, H. Boostanimehr, and V. K. Bhargava, “Green cellular
networks: A survey, some research issues and challenges,” IEEE
Communications Surveys & Tutorials, vol. 13, pp. 524-540, Fourth
Quarter 2011.
[252] B. Briscoe et al., “Reducing Internet latency: A survey of techniques
and their merits,” IEEE Communications Surveys & Tutorials, vol.
18, pp. 2149-2196, Third Quarter 2016.
[253] I. Katzela and M. Naghshineh, “Channel assignment schemes for
cellular mobile telecommunication systems: A comprehensive
survey,” IEEE Personal Communications, vol. 3, no. 3, pp. 10-31, Jun. 1996.
[254] P. Leaves, K. Moessner, R. Tafazolli, D. Grandblaise, D. Bourse, R.
Tonjes, and M. Breveglieri, “Dynamic spectrum allocation in
composite reconfigurable wireless networks,” IEEE Communications Magazine, vol. 42, pp. 72 - 81, May 2004.
[255] G. Salami, O. Durowoju, A. Attar, O. Holland, R. Tafazolli, and H.
Aghvami, “A comparison between the centralized and distributed
approaches for spectrum management,” IEEE Communications
Surveys & Tutorials, vol. 13, pp. 274-290, Second Quarter 2011.
[256] M. J. Marcus, “Spectrum policy for radio spectrum access,” Proc.
IEEE, vol. 100, pp. 1685 - 1691, Special Centennial Issue, 2012.
[257] Z. Han and K. J. R. Liu, Resource Allocation for Wireless Networks:
Basics, Techniques, and Applications. Cambridge, UK: Cambridge
University Press, 2008.
[258] S. Haykin, Cognitive Dynamic Systems: Perception-Action Cycle,
Radar, and Radio. Cambridge, UK: Cambridge University Press,
2012.
[259] M. Bkassiny, Y. Li, and S. K. Jayaweera, “A survey on machine-
learning techniques in cognitive radios,” IEEE Communications
Surveys & Tutorials, vol. 15, pp. 1136 – 1159, Third Quarter 2013.
[260] S. Haykin and J. M. Fuster, “On cognitive dynamic systems:
Cognitive neuroscience and engineering learning from each other,” Proceedings of the IEEE, vol. 102, no. 4, pp. 608-628, 2014.
[261] S. Haykin, P. Setoodeh, S. Feng, and D. Findlay, “Cognitive dynamic system as the brain of complex networks,” IEEE Journal on Selected
Areas in Communications, vol. 34, no. 10, pp. 2791-2800, Oct. 2016.
[262] T. Claasen and W. Mecklenbräuker, “Adaptive techniques for signal processing in communications,” IEEE Commun. Mag., vol. 23, pp. 8
– 19, Nov. 1985.
[263] S. U. H. Qureshi, “Adaptive equalization,” Proc. IEEE, vol. 73, no. 9, pp. 1349 - 1387, Sep. 1985.
[264] D. Falconer, “History of equalization 1860–1980,” IEEE
Communications Magazine, vol. 49, pp. 42-50, Oct. 2011.
[265] F. Dressler and O. B. Akan, “A survey on bio-inspired networking,”
Computer Networks, vol. 54, no. 6, pp. 881–900, Apr. 2010.
[266] M. Meisel, V. Pappas, and L. Zhang, “A taxonomy of biologically inspired research in computer networking,” Computer Networks, vol.
54, no. 6, pp. 901–916, Apr. 2010.
[267] P. Dudley, “Back to basics? Tektology and General System Theory (GST),” Systems Practice, vol. 9, pp. 273–284, June 1996.
[268] C. B. Nielsen, P. G. Larsen, J. Fitzgerald, J. Woodcock, and J.
Peleska, “Systems of systems engineering: Basic concepts, model-
based techniques, and research directions,” ACM Computing Surveys,
vol. 48, no. 2, Article 18, pp. 18:1-18:41, Sep. 2015.
[269] Software and Systems Engineering Vocabulary (SE VOCAB), IEEE
Computer Society and ISO/IEC JTC 1/SC7, 2017. [Online].
Available: https://pascal.computer.org.
[270] IEEE Standards Dictionary, IEEE, 2017. [Online]. Available:
http://ieeexplore.ieee.org.
[271] L. Lundheim, “On Shannon and ‘Shannon’s formula’,” Telektronikk,
vol. 98, no. 1, pp. 20-29, 2002.
[272] M. Maruyama, “The second cybernetics: Deviation-amplifying
mutual causal processes,” American Scientist, vol. 51, no. 2, pp. 164-
179, 1963.
[273] W. R. Ashby, Design for a Brain: The Origin of Adaptive Behavior.
Chapman & Hall, 1960.
[274] G. M. Shepherd et al., “The Human Brain Project: Neuroinformatics
tools for integrating, searching and modelling multidisciplinary
neuroscience data,” Trends in Neuroscience, vol. 21, no. 11, pp. 460-
468, 1998.
[275] N. Rose, “The Human Brain Project: Social and ethical challenges,”
Neuron, vol. 82, pp. 1212-1215, Jun. 2014.
[276] S. B. Furber, F. Galluppi, S. Temple, and L. A. Plana, “The
SpiNNaker project,” Proc. IEEE, vol. 102, no. 5, pp. 652-665, May
2014.
[277] D. Monroe, “Neuromorphic computing gets ready for the (really) big
time,” Communications of the ACM, vol. 57, pp. 13-15, Jun. 2014.
[278] X. Liu, Y. Zeng, T. Zhang, and B. Xu, “Parallel brain simulator: A
multi-scale and parallel brain-inspired neural network modeling and
simulation platform,” Cogn. Comput., vol. 8, pp. 967–981, 2016.
[279] M. L. Schneider et al., “Ultralow power artificial synapses using nanotextured magnetic Josephson junctions,” Science Advances, vol.
4, no. 1, pp. 1-8, Jan. 2018.
[280] P. A. Merolla et al., “A million spiking-neuron integrated circuit with a scalable communication network and interface,” Science, vol. 345,
no. 6197, pp. 668-673, Aug. 2014.
[281] “Largest neuronal network simulation achieved using K computer,” RIKEN, Press Release, 2 August 2013. [Online]. Available:
http://www.riken.jp.
[282] P. Lennie, “The cost of cortical computation,” Current Biology, vol. 13, no. 6, pp. 493-497, Mar. 2003.
[283] R. Courtland, “Soggy computing,” IEEE Spectrum, vol. 52, no. 5, p.
18, May 2015.
[284] F. L. Traversa and M. Di Ventra, “Universal memcomputing
machines,” IEEE Transactions on Neural Networks and Learning
Systems, vol. 26, no. 11, pp. 2702-2715, Nov. 2015.
[285] D. Ferrucci et al., “Building Watson: An overview of the DeepQA
project,” AI Magazine, vol. 31, no. 3, pp. 59-79, Fall 2010.
[286] D. Silver et al., “Mastering the game of Go without human knowledge,” Nature, vol. 550, pp. 354-359, Oct. 2017.
[287] P. B. Bonissone, R. Subhu, and J. Lizzi, “Multicriteria decision
making (MCDM): A framework for research and applications,” IEEE Computational Intelligence Magazine, pp. 48-61, Aug. 2009.
[288] P. Tsiaflakis, F. Glineur, and M. Moonen, “Real-time dynamic
spectrum management for multi-user multi-carrier communication systems,” IEEE Transactions on Communications, vol. 62, pp. 1124-
1137, Mar. 2014.
[289] P. C. Parks, “A. M. Lyapunov's stability theory—100 years on,” IMA
Journal of Mathematical Control & Information, vol. 9, no. 4,
pp. 275–303, 1992.
[290] J. Makhoul, “Linear prediction: A tutorial review,” Proc. IEEE, vol.
63, no. 4, pp. 561-580, Apr. 1975.
[291] G. Long, F. Ling, and J. G. Proakis, “The LMS algorithm with
delayed coefficient adaptation,” IEEE Transactions on Acoustics,
Speech, and Signal Processing, vol. 37, no. 9, pp. 1397-1405, Sep.
1989.
[292] S. A. Fechtel and H. Meyr, “Optimal parametric feedforward
estimation of frequency-selective fading radio channels,” IEEE
Transactions on Communications, vol. 42, pp. 1639-1650, Feb.-
Mar.-Apr. 1994.
[293] A. M. Monk and L. B. Milstein, “Open-loop power control error in a
land mobile satellite system,” IEEE Journal on Selected Areas in
Communications, vol. 13, pp. 205-212, Feb. 1995.
[294] P. Kanerva, “Hyperdimensional computing: An introduction to
computing in distributed representation with high-dimensional
random vectors,” Cognitive Computation, vol. 1, pp. 139–159, Jun.
2009.
[295] S. M. Kay, Fundamentals of Statistical Signal Processing:
Estimation Theory. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[296] M. Schwartz, W. R. Bennett, and S. Stein, Communication Systems
and Techniques. New York: McGraw-Hill, 1966.
[297] R. F. Drenick and R. A. Shahbender, “Adaptive servomechanisms,”
Transactions of the American Institute of Electrical Engineers, Part
II: Applications and Industry, vol. 76, pp. 286-292, Nov. 1957.
[298] J. Aseltine, A. R. Mancini, and C. W. Sarture, “A survey of adaptive
control systems,” IRE Transactions on Automatic Control, vol. 6, pp.
102-108, Dec. 1958.
[299] P. R. Stromer, “Adaptive or self-optimizing control systems: A
bibliography,” IRE Transactions on Automatic Control, vol. 7, pp.
65-68, 1959.
[300] G. N. Saridis, Self-Organizing Control of Stochastic Systems. New
York: Marcel Dekker 1977.
[301] J. Sklansky, “Learning systems for automatic control,” IEEE
Transactions on Automatic Control, vol. AC-11, no. 1, pp. 6-19, Jan.
1966.
[302] J. Spragins, “Learning without a teacher,” IEEE Transactions on
Information Theory, vol. IT-12, pp. 223-230, Apr. 1966.
[303] K. S. Fu, “Learning systems–A review and outlook,” IEEE
Transactions on Automatic Control, vol. AC-15, pp. 210-221, Apr. 1970.
[304] Z. Nikolic and K. Fu, “An algorithm for learning without external
supervision and its applications to learning control systems,” IEEE
Transactions on Automatic Control, vol. AC-11, no. 3, pp. 414 - 422,
Jul. 1966.
[305] Y. Z. Tsypkin, “Self-learning–What is it?” IEEE Transactions on Automatic Control, vol. AC-13, pp. 608-612, Dec. 1968.
[306] K. S. Fu, “Learning control systems and intelligent control systems:
An Intersection of artificial intelligence and automatic control,” IEEE Transactions on Automatic Control, vol. AC-16, pp. 70-72, Feb.
1971.
[307] P. J. Antsaklis, “Intelligent learning control,” IEEE Control Systems, pp. 5-7, Jun. 1995.
[308] M. T. G. Pain and A. Hibbs, “Sprint starts and the minimum auditory
reaction time,” Journal of Sports Sciences, vol. 25, no. 1, pp. 79-86, 2007.
[309] M. L. van Emmerick, S. C. de Vries, and J. E. Bos, “Internal and
external fields of view affect cybersickness,” Displays, vol. 32, pp. 169–174, 2011.
[310] M. Annett, “(Digitally) Inking in the 21st century,” IEEE Computer
Graphics and Applications, vol. 37, no. 1, pp. 92-99, Jan.-Feb. 2017.
[311] V. Forch, T. Franke, N. Rauh, and J. F. Krems, “Are 100 ms fast
enough? Characterizing latency perception thresholds in mouse-
based interaction,” in Engineering Psychology and Cognitive
Ergonomics: Cognition and Design, D. Harris, Ed. Cham,
Switzerland: Springer, 2017, pp. 45-56.
[312] M. A. Heller and W. Schiff (Eds.), The Psychology of Touch. New York: Psychology Press, 1991.
[313] P. F. Bladin, “W. Gray Walter, pioneer in the electroencephalogram,
robotics, cybernetics, artificial intelligence,” Journal of Clinical Neuroscience, vol. 13, pp. 170-177, 2006.
[314] I. J. Rudas and J. Fodor, “Intelligent systems,” Int. J. of Computers,
Communications & Control, vol. III, pp. 132-138, 2008.
[315] P. Zave, “An operational approach to requirements specification for
embedded systems,” IEEE Transactions on Software Engineering,
vol. SE-8, no. 3, pp. 250-269, May 1982.
[316] S. Edwards, L. Lavagno, E. A. Lee, and A. Sangiovanni-Vincentelli,
“Design of embedded systems: Formal models, validation, and
synthesis,” Proc. IEEE, vol. 85, no. 3, pp. 366-390, Mar. 1997.
[317] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol.
521, pp. 436-444, May 2015.
[318] J. Schmidhuber, “Deep learning in neural networks: An overview,” Neural Networks, vol. 61, pp. 85–117, Jan. 2015.
[319] B. Widrow and M. A. Lehr, “30 years of adaptive neural networks:
Perceptron, Madaline, and backpropagation,” Proc. IEEE, vol. 78,
no. 9, pp. 1415 - 1442, Sep. 1990.
[320] K.-S. Fu, “Recent developments in pattern recognition,” IEEE
Transactions on Computers, vol. C-29, no. 10, pp. 697-722, Oct.
1980.
[321] A. K. Jain, R. P. W. Duin, and J. Mao, “Statistical pattern recognition:
A review,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 22, no. 1, pp. 4-37, Jan. 2000.
[322] L. N. Kanal, “Interactive pattern analysis and classification systems:
A survey and commentary,” Proc. IEEE, vol. 60, no. 10, pp. 1200-
1215, Oct. 1972.
[323] W. Fan and A. Bifet, “Mining big data: Current status, and forecast
to the future,” ACM SIGKDD Explorations Newsletter, vol. 14, no.
2, pp. 1-5, Dec. 2012.
[324] D. Chess, B. Grosof, C. Harrison, D. Levine, C. Parris, and G. Tsudik,
“Itinerant agents for mobile computing,” IEEE Personal
Communications, vol. 2, pp. 34-49, Oct. 1995.
[325] M. Dohler, “Internet of Skills, where robotics meets AI, 5G and the
Tactile Internet,” in Proc. EuCNC’17.
[326] J. Day and H. Zimmermann, “The OSI reference model,” Proc.
IEEE, vol. 71, no. 12, pp. 1334-1340, Dec. 1983.
[327] C. Bormann, A. P. Castellani, and Z. Shelby, “CoAP: An application
protocol for billions of tiny internet nodes,” IEEE Internet
Computing, vol. 16, pp. 62-67, Mar.-Apr. 2012.
[328] S. C. Gupta, “Phase-locked loops,” Proc. IEEE, vol. 63, no. 2, pp.
291-306, Feb. 1975.
[329] F. Nebeker, “Harold Alden Wheeler: A lifetime of applied
electronics,” Proc. IEEE, vol. 80, pp. 1223-1236, Aug. 1992.
[330] K. R. Thrower, “History of tuning,” in Proc. International
Conference on 100 Years of Radio, 1995, pp. 107-113.
[331] R. A. Scholtz, “The origins of spread-spectrum communications,”
IEEE Transactions on Communications, vol. COM-30, pp. 822-854,
May 1982.
[332] R. W. Lucky, J. Salz, and E. J. Weldon, Jr., Principles of Data
Communication. New York: McGraw-Hill, 1968.
[333] S. M. Alamouti and S. Kallel, “Adaptive trellis-coded multiple-
phase-shift keying for Rayleigh fading channels,” IEEE Transactions
on Communications, vol. 42, pp. 2305-2314, June 1994.
[334] S. Lin, D. J. Costello, and M. J. Miller, “Automatic-repeat-request
error-control schemes,” IEEE Communications Magazine, vol. 22,
no. 12, pp. 5-17, Dec. 1984.
[335] J. Zander, “Radio resource management in the future wireless
networks: Requirements and limitations,” IEEE Communications Magazine, vol. 35, no. 8, pp. 30-36, Aug. 1997.
[336] K. Olms, “The post-WARC frequency situation in Central Europe,”
IEEE Trans. Commun., vol. 29, pp. 1098–1101, Aug. 1981.
[337] R. C. Kirby, “History and trends in international radio regulation,” in
Proc. International Conference on 100 Years of Radio, 1995.
[338] A. Mc Gibney, M. Klepal, and D. Pesch, “Agent-based optimization
for large scale WLAN design,” IEEE Transactions on Evolutionary
Computation, vol. 15, no. 4, pp. 470-486, Aug. 2011.
[339] A. Imran, A. Zoha, and A. Abu-Dayya, “Challenges in 5G: How to
empower SON with big data for enabling 5G,” IEEE Network, vol.
28, no. 6, pp. 27-33. Nov.-Dec. 2014.
[340] W. Caripe, G. Cybenko, K. Moizumi, and R. Gray, “Network
awareness and mobile agent systems,” IEEE Communications
Magazine, vol. 36, pp. 44-49, Jul. 1998.
[341] N. D. Dutt and D. D. Gajski, “Design synthesis and silicon compilation,” IEEE Design & Test of Computers, vol. 7, no. 6, pp. 8-
23, Dec. 1990.
[342] P. E. Vermaas and K. Dorst, “On the conceptual framework of John
Gero’s FBS-model and the prescriptive aims of design
methodology,” Design Studies, vol. 28, pp. 133-157, Mar. 2007.
[343] I. G. Barbour, Myths, Models, and Paradigms. New York: Harper &
Row, 1974.
[344] R. Szabo, M. Kind, F.-J. Westphal, H. Woesner, D. Jocha, and A.
Csaszar, “Elastic network functions: Opportunities and challenges,”
IEEE Network, vol. 29, pp. 15–21, May-Jun. 2015.
[345] G. Ku and J. M. Walsh, “Resource allocation and link adaptation in
the LTE and LTE Advanced: A tutorial,” IEEE Communications
Surveys & Tutorials, vol. 17, no. 3, pp. 1605-1633, Third Quarter
2015.
[346] R. Mijumbi, J. Serrat, J.-L. Gorricho, S. Latré, M. Charalambides,
and D. Lopez, “Management and orchestration challenges in network
function virtualization,” IEEE Communications Magazine, vol. 54,
pp. 98-105, Jan. 2016.
[347] S.-C. Hung, H. Hsu, S.-Y. Lien, and K.-C. Chen, “Architecture
harmonization between cloud radio access networks and fog
networks,” IEEE Access, vol. 3, pp. 3019-3034, Dec. 2015.
[348] M. Haddad, S. Elayoubi, E. Altman, and Z. Altman, “A hybrid
approach for radio resource management in heterogeneous cognitive
networks,” IEEE Journal on Selected Areas in Communications, vol.
29, no. 4, pp. 831-842, Apr. 2011.
[349] I. D. Landau, R. Lozano, M. M’Saad, and A. Karimi, Adaptive
Control: Algorithms, Analysis and Applications. London: Springer,
2011.
[350] P. R. Cohen and E. A. Feigenbaum, The Handbook of Artificial
Intelligence, vol. III. Los Altos, CA: William Kaufmann, 1982.
[351] E. Biglieri, J. Proakis, and S. Shamai, “Fading channels: Information-
theoretic and communications aspects,” IEEE Transactions on
Information Theory, vol. 44, pp. 2619-2692, Oct. 1998.
[352] C. Anderson, “Self-organization in relation to similar concepts: Are
the boundaries to self-organization indistinct?” Biol. Bull., vol. 202,
pp. 247-255, Jun. 2002.
[353] M. Dorigo, M. Birattari, and T. Stutzle, “Ant colony optimization:
Artificial ants as a computational intelligence technique,” IEEE
Computational Intelligence Magazine, pp. 28-39, Nov. 2006.
[354] D. J. Baker and A. Ephremides, “The architectural organization of a
mobile radio network via a distributed algorithm,” IEEE
Transactions on Communications, vol. COM-29, pp. 1694-1701, Nov. 1981.
[355] W. Gitt, “Information: The third fundamental quantity,” Siemens
Review, vol. 56, part 6, pp. 2-7, Nov.-Dec. 1989.
[356] G. Pahl, W. Beitz, J. Feldhusen, and K.-H. Grote, Engineering
Design: A Systematic Approach, 3rd ed. Springer, 2007.
[357] P. Lima and G. Saridis, “Measuring complexity of intelligent
machines,” in Proc. IEEE International Conference on Robotics and
Automation, 1993.
[358] R. K. Cavin, III, P. Lugli, and V. V. Zhirnov, “Science and
engineering beyond Moore’s law,” Proc. IEEE, vol. 100, pp. 1720–1749, Special Centennial Issue, May 2012.
[359] M. S. Whittingham, “History, evolution, and future status of energy
storage,” Proc. IEEE, vol. 100, pp. 1518–1534, Centennial Special
Issue, 2012.
[360] A. Avila, R. K. Cavin, V. V. Zhirnov, and H. H. Hosack, “Fundamental limits of heat transfer,” in Proc. THERMES’07, pp. 25-
32.
[361] S. J. B. Yoo, “Energy efficiency in the future Internet: The role of
optical packet switching and optical-label switching,” IEEE Journal
of Selected Topics in Quantum Electronics, vol. 17, pp. 406-418,
Mar.-Apr. 2011.
[362] Q. Xie, M. J. Dousti, and M. Pedram, “Therminator: A thermal
simulator for smartphones producing accurate chip and skin
temperature maps,” in Proc. ISLPED’14, pp. 117-122.
[363] “International Technology Roadmap for Semiconductors (ITRS),”
ITRS, 2001–2016, www.itrs2.net.
[364] G. M. Amdahl, “Computer architecture and Amdahl’s law,” IEEE Computer, vol. 46, pp. 38–46, Dec. 2013.
[365] J. L. Gustafson, “Reevaluating Amdahl's law,” Communications of
the ACM, vol. 31, no. 5, pp. 532-533, 1988.
[366] D. Fisher, “Your favorite parallel algorithms might not be as fast as
you think,” IEEE Transactions on Computers, vol. 37, pp. 211-213,
Feb. 1988.
[367] V. V. Zhirnov, R. K. Cavin, III, J. A. Hutchby, and G. I. Bourianoff,
“Limits to binary logic switch scaling – A Gedanken model,” Proc.
IEEE, vol. 91, no. 11, pp. 1934–1939, Nov. 2003.
[368] L. B. Kish, C.-G. Granqvist, S. P. Khatri, and F. Peper, “Zero and
negative energy dissipation at information-theoretic erasure,” J.
Comput. Electron., vol. 15, pp. 335–339, 2016.
[369] V. V. Zhirnov and R. K. Cavin, “Future microsystems for information
processing: Limits and lessons from living systems,” IEEE Journal
of the Electron Device Society, vol. 1, pp. 29–47, Feb. 2013.
[370] M. M. Waldrop, “More than Moore,” Nature, vol. 530, pp. 145-147,
Feb. 2016.
[371] V. V. Zhirnov and R. K. Cavin, “Emerging research nanoelectronic
devices: The choice of information carrier,” ECS Transactions, vol.
11, no. 6, pp. 17-28, 2007.
[372] N. I. Zheludev, “What diffraction limit?” Nature Materials, vol. 7,
pp. 420-422, June 2008.
[373] J. D. Meindl, “Beyond Moore's law: The interconnect era,” Computing in Science & Engineering, vol. 5, no. 1, pp. 20–24, Jan.-
Feb. 2003.
[374] G. J. Woeginger, “Open problems around exact algorithms,” Discrete
Applied Mathematics, vol. 156, pp. 397–405, 2008.
[375] J. J. Shynk, “Adaptive IIR filtering,” IEEE ASSP Mag., vol. 6, no. 2,
pp. 4-21, Apr. 1989.
[376] U. Varshney, “4G wireless networks,” IT Professional, vol. 14, pp. 34-
39, Sept.-Oct. 2012.
[377] Random House Webster’s Concise College Dictionary. New York: Random House, 1999.
[378] P. Pirsch, Architectures for Digital Signal Processing. New York:
John Wiley & Sons, 1998.
[379] R. Stankiewicz and A. Jajszczyk, “A survey of QoE assurance in
converged networks,” Computer Networks, vol. 55, pp. 1459-1473,
May 2011.
[380] ITU-R Rep. M.2134, “Requirements related to technical performance
for IMT-Advanced radio interface(s),” 2008.
[381] ITU-R WP 5D Document 5/40-E, “Minimum requirements related to
technical performance for IMT-2020 radio interface(s),” Feb. 2017.
[382] F. Kienle, N. Wehn, and H. Meyr, “On complexity, energy- and
implementation-efficiency of channel decoders,” IEEE Transactions
on Communications, vol. 59, pp. 3301-3310, Dec. 2011.
[383] G. Frantz, “Signal core: A short history of the digital signal
processor,” IEEE Solid-State Circuits Magazine, vol. 4, pp. 16–20,
Spring 2012.
[384] T. C. Kwok, “Residential broadband Internet services and
applications requirements,” IEEE Communications Magazine, vol.
35, pp. 76-83, June 1997.
[385] “Vocabulary of terms of broadband aspects of ISDN,” ITU-T Rec.
I.113, Jun. 1997.
[386] A. Kosta, N. Pappas, and V. Angelakis, “Age of information: A new
concept, metric, and tool,” Foundations and Trends in Networking,
vol. 12, no. 3, pp. 162-259, 2017.
[387] A. Avizienis, J.-C. Laprie, B. Randell, and C. Landwehr, “Basic
concepts and taxonomy of dependable and secure computing,” IEEE
Transactions on Dependable and Secure Computing, vol. 1, no. 1, pp.
11-33, Jan.-Mar. 2004.
[388] M. Al-Kuwaiti, N. Kyriakopoulos, and S. Hussein, “A comparative
analysis of network dependability, fault-tolerance, reliability,
security, and survivability,” IEEE Communications Surveys &
Tutorials, vol. 11, pp. 106-124, Second Quarter 2009.
[389] J. P. G. Sterbenz, D. Hutchison, E. K. Cetinkaya, A. Jabbar, J. R.
Rohrer, M. Schöller, and P. Smith, “Resilience and survivability in communication networks: Strategies, principles, and survey of
disciplines,” Computer Networks, vol. 54, pp. 1245–1265, June 2010.
[390] “5G White Paper,” NGMN Alliance, Frankfurt, Germany, White
Paper, Feb. 2015.
[391] M. I. Skolnik, Introduction to Radar Systems, 3rd ed. New York:
McGraw-Hill, 2001.
[392] B. Soret, P. Mogensen, K. I. Pedersen, and M. C. Aguayo-Torres,
“Fundamental tradeoffs among reliability, latency and throughput in
cellular networks,” in Proc. GLOBECOM’14 Workshops.
[393] R. L. G. Cavalcante, S. Stanczak, M. Schubert, A. Eisenblätter, and
U. Turke, “Toward energy-efficient 5G wireless communications
technologies,” IEEE Signal Processing Magazine, vol. 31, pp. 24-34,
Nov. 2014.
[394] X. Ge, J. Yang, H. Gharavi, and Y. Sun, “Energy efficiency
challenges of 5G small cell networks,” IEEE Communications
Magazine, vol. 55, pp. 184-191, May 2017.
[395] J. Hagenauer and T. Stockhammer, “Channel coding and
transmission aspects for wireless multimedia,” Proc. IEEE, vol. 87,
no. 10, pp. 1764-1777, Oct. 1999.
[396] G. Fettweis, “A 5G wireless communications vision,” Microwave
Journal, vol. 55, pp. 24–36, Dec. 2012.
[397] P. Rost, C. J. Bernardos, A. De Domenico, M. Di Girolamo, M.
Lalam, A. Maeder, D. Sabella, and D. Wubben, “Cloud technologies
for flexible 5G radio access networks,” IEEE Communications
Magazine, vol. 52, no. 5, pp. 68-75, May 2014.
[398] Y. Lee, T. Chuah, J. Loo, and A. Vinel, “Recent advances in radio
resource management for heterogeneous LTE/LTE-A networks,” IEEE Commun. Surveys & Tut., vol. 16, no. 4, pp. 2142-2180, Fourth
Quarter 2014.
[399] E. Pateromichelakis, M. Shariat, A. Quddus, and R. Tafazolli, “On
the evolution of multi-cell scheduling in 3GPP LTE/LTE-A,” IEEE
Commun. Surveys & Tut., vol. 15, pp. 701-717, Second Quarter 2013.
[400] T. Chen, H. Zhang, X. Chen, and O. Tirkkonen, “SoftMobile: Control
evolution for future heterogeneous mobile networks,” IEEE Wireless
Commun., vol. 21, pp. 70-78, Dec. 2014.
[401] D. Levin, A. Wundsam, B. Heller, N. Handigol, and A. Feldmann,
“Logically centralized? State distribution trade-offs in software
defined networks,” in Proc. HotSDN, Helsinki, Finland, Aug. 2012,
pp. 1-6.
[402] “Evolved Universal Terrestrial Radio Access (E-UTRA); User
equipment radio access capabilities,” 3GPP TS 36.306, 3GPP
Technical Specification, v13.5.0, Mar. 2017.
[403] Y.-P. E. Wang, X. Lin, A. Adhikary, A. Grövlen, Y. Sui, Y.
Blankenship, J. Bergman, and H. S. Razaghi, “A primer on 3GPP
narrowband Internet of Things,” IEEE Commun. Mag., vol. 55, no.
3, pp. 117-123, Mar. 2017.
[404] C. Liang and F. R. Yu, “Wireless network virtualization: A survey,
some research issues and challenges,” IEEE Communications Surveys
& Tutorials, vol. 17, pp. 358-380, First Quarter 2015.
[405] I. F. Akyildiz, S. Nie, S.-C. Lin, and M. Chandrasekaran, “5G
roadmap: 10 key enabling technologies,” Computer Networks, vol.
106, pp. 17–48, Sep. 2016.
AARNE MÄMMELÄ (M’83–SM’99) received the degree of D.Sc. (Tech.) (with honors) from the University of Oulu in 1996. He was with the University of Oulu from 1982 to 1993. In 1993, he joined VTT Technical Research Centre of Finland in Oulu, where he has been a Research Professor of digital signal processing in wireless communications since 1996. He visited the University of Kaiserslautern in Germany in 1990-1991 and the University of Canterbury in New Zealand in 1996-1997. Since 2004, he has been a Docent (equivalent to Adjunct Professor) at the University of Oulu. He is a Technical Editor of the IEEE Wireless Communications and a member of the Research Council of Natural Sciences and Engineering in the Academy of Finland, which funds basic research. For over 15 years, he has lectured on research methodology at the University of Oulu, covering the systems approach in addition to the conventional analytical approach. His research interests are in general system analysis, the philosophy and history of science and engineering, adaptive and learning systems, and resource efficiency in communications.
JUKKA RIEKKI (M’92) received the D.Sc. (Tech.) degree from the Department of Electrical Engineering, University of Oulu, Finland, in 1999. Since 2001, he has been a professor of software architectures for embedded systems at the University of Oulu and, since 2014, the dean of the Faculty of Information Technology and Electrical Engineering. He visited the Intelligent Systems Division of the Electrotechnical Laboratory, Tsukuba, Japan, in 1994–1996. In his Ph.D. thesis, he studied reactive task execution of mobile robots. In 2000, he started to study context-aware and ubiquitous systems. In recent years, he has shifted his research focus to the Internet of Things and edge computing, emphasizing the efficient application of Artificial Intelligence and Semantic Web technologies in distributed and resource-limited systems. He has advised nine doctoral theses and has published 40 journal papers, 6 book chapters, and over 160 conference papers.
ADRIAN KOTELBA (M’08–SM’15) received the D.Sc. (Tech.) degree from the Department of Electrical and Information Engineering, University of Oulu, Finland, in 2013. He works as a Senior Scientist at VTT Technical Research Centre of Finland Ltd. His research interests include communications theory, adaptive signal processing, adaptive modulation and coding, cross-layer design, cryptography, and physical layer security. He has participated in a number of research projects related to the modelling of wireless multi-antenna channels, multimedia transmission in cellular networks, adaptive radio resource allocation and adaptive signal processing, including adaptive compensation of nonlinear amplifiers, and the security of wireless networks. His doctoral thesis, published in 2013, applied decision theory, including multiple-criteria decision making and multi-objective optimization with variational methods, to adaptive transmission in wireless fading channels.
ANTTI ANTTONEN (M’10–SM’14) received the D.Sc. (Tech.) degree from the Department of Electrical and Information Engineering, University of Oulu, Finland, in 2011. He works as a Senior Scientist at VTT Technical Research Centre of Finland Ltd. He has made extended research collaboration visits to Lucent Technologies (USA) in 2000, the University of Hannover (Germany) in 2008, and the University of Leuven (Belgium) in 2014. He has participated in numerous national and European research projects related to wireless communications. His main interests include energy-efficient transmission algorithms (physical layer, decision theory) and cooperative smart resource management (network layer, communications and queueing theory) for personal area and cellular networks. His doctoral thesis, published in 2011, was about designing resource-efficient and low-complexity short-range communication transceivers. He has further published several scientific journal papers related to energy efficiency, energy sustainability, and delay sensitivity.
FIGURE 1. A control loop. a) An example where a ship is remotely controlled. b) A general control loop where the system to be controlled is called a process. Solid lines with arrows denote control signals and dashed lines with arrows denote sensing signals. The sense, decide, and act blocks close the loop through the process that is often called the plant.
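The sense-decide-act loop of Fig. 1b can be sketched in a few lines of code. This is a minimal sketch only: the first-order plant model, the proportional control law, and the set-point below are illustrative assumptions, not taken from the paper.

```python
# Minimal sense-decide-act control loop (cf. Fig. 1b): a proportional
# controller drives a first-order plant toward a set-point.
# The plant model, gain, and set-point are illustrative assumptions.

def run_control_loop(setpoint=1.0, gain=0.5, steps=50):
    state = 0.0  # state of the plant (process)
    for _ in range(steps):
        sensed = state             # sense: observe the plant state (noise omitted)
        error = setpoint - sensed  # feedback: compare with the set-point
        action = gain * error      # decide: proportional control law
        state += action            # act: apply the control signal to the plant
    return state

print(run_control_loop())  # converges toward the set-point 1.0
```

Each iteration closes the loop through the plant, so the remaining error shrinks geometrically by the factor (1 - gain) per step.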
[Figure: timeline, 1950-2030, with entries including automatic systems, remote control, artificial intelligence, distributed computing, broadcast networks, local area networks, cellular systems, embedded systems, autonomous robots, mobile agents, multiagent systems, self-organizing networks, autonomic computing, autonomic networks, autonomic robots, networked control systems, cyber-physical systems, haptic and kinesthetic feedback, and time-sensitive networks.]
FIGURE 2. A simplified chronology of the most advanced systems that combine efforts from control theory, computer science, and communication theory.
FIGURE 3. Manually-controlled, automatic, and autonomous systems. A self-organizing system is an advanced form of an autonomous system.
[Figure: five timeline panels, 1900-2030, one per discipline: general system theory, decision theory, control theory, computer science, and communication theory. Acronyms used in the figure: OSI, Open Systems Interconnection; AGC, automatic gain control; PLL, phase-locked loop; AFC, automatic frequency control; DLL, delay-locked loop; TPC, transmitter power control; ARQ, automatic repeat request; AMC, adaptive modulation and coding; DCA, dynamic channel assignment; RRM, radio resource management; DSA, dynamic spectrum allocation.]
FIGURE 4. Chronology of intelligent resource-efficient systems. The concepts and acronyms are explained in the text. The date of the first occurrence of a concept in the literature is indicated with the position of the first letter of each term.
36
FIGURE 5. The control system in Fig. 1b in an alternative form. D refers to the decision block and P refers to the plant or process.
FIGURE 6. Three common hierarchies: the nested hierarchy, the layer hierarchy, and the dominance hierarchy.
FIGURE 7. Typical abstraction levels of a system in different disciplines: the functional, behavioral, structural, and physical description levels.
FIGURE 8. Hierarchical control in a layer and dominance hierarchy. Examples are given from robotics and communications. The hierarchy levels and their properties:
- Controller: low complexity, high speed (wide bandwidth), high accuracy, no memory, narrow spatial extent; control and adaptive algorithms. Robotics: behavioral layer, responsible for movement and avoiding obstacles, closest to sensors and actuators. Communications: physical layer, bits or symbols transmitted.
- Coordinator: short-term memory; learning algorithms. Robotics: executive layer, coordinates the behavioral layer and chooses the current behavior (movement) to achieve a task. Communications: data link layer, frames transmitted.
- Manager (Organizer): high complexity, low speed (small bandwidth), low accuracy, long-term memory, wide spatial extent; autonomous and self-organizing algorithms. Robotics: task-planning layer, long-term goals of the robot within resource constraints, taking into account priorities, order of tasks, and recharging. Communications: network layer, packets or datagrams transmitted.
FIGURE 9. Degree of centralization. a) Centralized, decentralized, and distributed control. b) Distributed control combined with dominance hierarchy using clusters of control units.
FIGURE 10. Properties of systems with different degrees of centralization:
- Centralized (automatic): a central control unit is used to force cooperation; sensing information proceeds upwards and control information downwards in the hierarchy; also called automatic systems.
- Decentralized (autonomous): interacting competitive rational systems, which may be reactive or proactive, sensing the environment; also called autonomous systems.
- Distributed (autonomic): interacting cooperative systems forming clusters; message passing or shared memory is used to exchange information; sometimes called autonomic systems.
FIGURE 11. General hierarchy of systems in different disciplines according to the complexity of decisions. The structure changes only in self-organizing systems; otherwise the behavior changes, see Fig. 7.
1. Static systems: everything fixed. Example: passive filter.
2. Simple dynamic systems: periodic predetermined changes. Example: clock.
3. Control systems: feedback, set-point value. Example: PID control.
4. Adaptive systems: reference signal, performance criterion, algorithm. Examples: adaptive estimator, parameter identification, LMS algorithm.
5. Learning systems: memory, past experience. Examples: pattern recognition, Q-learning.
6. Self-organizing systems: structure changes. Examples: cooperating robots, self-organizing networks, structural identification.
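The LMS algorithm named at the adaptive-systems level can be illustrated with a small system-identification sketch. The unknown system, filter length, step size, and noiseless Gaussian input below are illustrative assumptions, not from the paper.

```python
import random

def lms_identify(true_weights, step=0.05, iterations=2000, seed=1):
    """Identify an unknown FIR system with the LMS algorithm.

    The adaptive filter compares its output with a reference signal
    (the unknown system's output) and updates its weights along the
    negative gradient of the squared error.
    """
    rng = random.Random(seed)
    n = len(true_weights)
    w = [0.0] * n  # adaptive filter weights
    x = [0.0] * n  # input delay line
    for _ in range(iterations):
        x = [rng.gauss(0.0, 1.0)] + x[:-1]                # new input sample
        d = sum(a * b for a, b in zip(true_weights, x))   # reference (desired) signal
        y = sum(a * b for a, b in zip(w, x))              # adaptive filter output
        e = d - y                                         # error signal
        w = [wi + step * e * xi for wi, xi in zip(w, x)]  # LMS weight update
    return w

w = lms_identify([0.8, -0.3, 0.1])
print(w)  # close to the unknown weights [0.8, -0.3, 0.1]
```

The reference signal and the performance criterion (squared error) are exactly the ingredients that distinguish level 4 from the fixed controllers of level 3.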
FIGURE 12. Reinforcement learning as a special case of the control loop in Fig. 5.
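The reinforcement-learning loop of Fig. 12 can be made concrete with tabular Q-learning on a toy chain environment. The environment, reward structure, and hyperparameters below are illustrative assumptions, not from the paper.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a chain of states: the decision block (D) senses
    the state of the process (P), acts (left or right), and receives a reward
    of 1 only at the rightmost state, closing the loop of Fig. 12."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q(state, action); 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # decide: epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else (0 if q[s][0] > q[s][1] else 1)
            # act: the process moves to a new state
            s2 = max(0, s - 1) if a == 0 else s + 1
            # sense: observe the reward from the process
            r = 1.0 if s2 == n_states - 1 else 0.0
            # learn: temporal-difference update of the value estimate
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_chain()
print(all(row[1] > row[0] for row in q[:-1]))  # "right" is preferred in every non-goal state
```

The reward signal plays the role of the performance feedback in the control loop, while the learned Q-table is the memory that places such systems on the learning level of Fig. 11.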
FIGURE 13. Open system concept and basic resources that include materials (M), energy (E), information (I), time, frequency, and space.
TABLE I. Reviews and books in various disciplines
General system theory History [77], [71], [4], systems engineering [7], [78], multidisciplinary conceptual analysis [20],
systems of systems [79], adaptation and self-organization [80], [81], [82], cybernetics [83], [84], [85], fundamental limits [40], complexity in electronics [86], [87], [88], system biology [89], complexity theories [90], [91], [92], bionics (biomimetics) [93], [94], [95], [96], [97], brain models [11], [75], [76], [98], [99], [100], [101], [102], [103], [104], reaction times of human senses [105]
Decision theory History [106], [107], [108], [109], [110], general theory [9], multiple-criteria decision-making and multiobjective optimization [62], [8], [111], [112], [113], [114], game theory [115], [116], genetic and evolutionary algorithms [117], [118], [119]
Control theory History [120], [121], [122], [123], general theory [10], hierarchical and distributed control [63], [124], [125], [126], [127], [128], [129], [130], [131], adaptive and learning systems [132], [133], [134], autonomous and intelligent control [135], [5], [136], networked control systems [137], [138], [34], [139], [28], [140], [141], [142], [143], [144], remote control and haptic feedback [33], [145], [146], [35], [147], [148], [149], [150], [151], [152], [153], mobile robots and navigation [60], [25], [154], [155], [156], [157], [12], [158], [26], [159]
Computer science History [160], [161], [162], general theory [13], artificial intelligence [6], machine learning [163], [164], [165], [166], distributed computing [167], [29], [168], computational intelligence [169], [170], computational complexity [171], [172], [173], fundamental limits in computation [174], [69], [175], [176], augmented intelligence [177], ambient intelligence [178], [179], context awareness [180], [181], location awareness [182], multiagent systems [183], [184], [185], [186], [187], [188], [189], [190], [191], [192], [61], autonomic computing [50], [193], [27], [194], cyber-physical systems [195], [23], [196], [197]
Communication theory History [198], [199], [200], general theory [201], [202], [15], [16], [203], [204], [205], [206], [207], [14], [208], [209], graph theory [210], [211], [212], basic resources [72], software-defined networks [213], [214], [215], [216], [217], [218], [219], [220], ad hoc and self-organizing networks [221], [222], [223], [224], [225], [226], [227], [228], [229], [230], [231], adaptive, cognitive, and autonomous networks [51], [232], [233], [234], [235], [236], [237], [238], [239], context-aware networking [240], tactile and haptic communications [241], [31], [32], [242], [243], [244], [245], hierarchical network synchronization [246], [247], [24], hierarchical routing [248], [249], spectral efficiency [250], energy efficiency [251], reduction of delays [252], radio resource management and dynamic spectrum allocation [253], [254], [255], [256], [257], cognitive radios [58], [59], [258], [259], [260], [261], adaptive systems [262], [263], [264], bio-inspired networking and signal processing [265], [266]
TABLE II. Basic resources

Basic resource: other related terms
Materials: mass
Energy: power
Information: data, control
Time: delay, latency
Frequency: bandwidth, spectrum
Space: size, area, volume
TABLE III. Fundamental limits and principles

Energy conservation law (Joule 1843): energy only changes its form.
Entropy law (Carnot 1824, Clausius 1854): available energy is reduced.
Heat transfer (Avila et al. 2007): limits for cooling.
Maximum speed-up (Amdahl 1967, Gustafson 1988): parallel processing.
Absolute zero (Kelvin 1848): lowest temperature.
Szilard limit (Landauer limit) (Szilard 1929): energy limit for computation.
Heisenberg limit (Heisenberg 1927): location and velocity.
Channel capacity (Shannon 1948): energy limit for communication.
Cramer-Rao bound (Rao 1945, Cramer 1946): smallest variance.
Gabor uncertainty principle (Gabor 1946): duration and bandwidth of signals.
Abbe diffraction limit (Abbe 1873): optical imaging instruments.
Upper speed limit (Einstein 1905): speed of light in vacuum.
Church-Turing conjecture (Church 1936, Turing 1936): unsolvable problems.
Cook-Levin theorem (Cook 1971, Levin 1973): intractable problems.
Incompleteness theorem (Gödel 1931): unprovable theorems.
Unpredictable systems (Lorenz 1963): chaotic systems.
TABLE IV. KPIs and requirements in 4G and 5G communications networks

Key performance indicator | Basic resources | Requirement in 4G | Requirement in 5G
Peak data rate (bit rate) | Time | 1 Gbit/s | 20 Gbit/s
User experienced data rate | Time | 10 Mbit/s | 100 Mbit/s
Peak spectral efficiency | Time and frequency | 15 bit/s/Hz | 30 bit/s/Hz
Average spectral efficiency | Time and frequency | 3 bit/s/Hz | 9 bit/s/Hz
Network energy efficiency | Energy | 1× | 100×
Area traffic capacity | Time and space | 0.1 Mbit/s/m² | 10 Mbit/s/m²
Delay (latency) | Time | 10 ms | 1 ms
Reliability | Data | 99.99% | 99.999%