1. INTRODUCTION
As enterprises strive to meet their current challenges, they require an IT infrastructure that
supports their business goals: an infrastructure that enables the business to be more responsive,
variable, focused, and resilient. Autonomic systems are systems that are self-configuring,
self-healing, self-protecting, and self-optimizing. They are intelligent open systems that manage
complexity, know themselves, continuously tune themselves, adapt to unpredictable conditions,
prevent and recover from failures, and provide a safe environment. They let enterprises focus
on their business, not on their IT infrastructure.
Autonomic computing refers to the self-managing characteristics
of distributed computing resources, which adapt to unpredictable changes while hiding their
intrinsic complexity from operators and users. Started by IBM in 2001, this initiative ultimately
aims to develop computer systems capable of self-management, to overcome the rapidly growing
complexity of computing systems management, and to reduce the barrier that this complexity
poses to further growth.
The system makes decisions on its own, using high-level policies;
it constantly checks and optimizes its status and automatically adapts itself to changing
conditions. An autonomic computing framework is composed of autonomic components
(AC) interacting with each other. An AC can be modeled in terms of two main control loops
(local and global) with sensors (for self-monitoring), effectors (for self-adjustment),
knowledge, and a planner/adapter for applying policies based on self- and environment
awareness.
Driven by this vision, a variety of architectural frameworks
based on "self-regulating" autonomic components have recently been proposed. A very
similar trend has characterized recent research in the area of multi-agent
systems. However, most of these approaches are conceived with centralized or
cluster-based server architectures in mind and mostly address the need to reduce
management costs rather than the need to enable complex software systems or to provide
innovative services. Some autonomic systems involve mobile agents interacting via loosely
coupled communication mechanisms.
Autonomy-oriented computation is a paradigm proposed by
Jiming Liu in 2001 that uses artificial systems imitating social animals' collective behaviours
to solve difficult computational problems. For example, ant colony optimization could be
studied in this paradigm. The term autonomic computing is emblematic of a vast and
somewhat tangled hierarchy of natural self-governing systems, many of which consist
of myriad interacting, self-governing components that in turn comprise large numbers
of interacting, autonomous, self-governing components at the next level down. The
enormous range in scale, starting with molecular machines within cells and extending to
human markets, societies, and the entire world socioeconomy, mirrors that of
computing systems, which run from individual devices to the entire Internet. Thus, we
believe it will be profitable to seek inspiration in the self-governance of social and
economic systems as well as purely biological ones.
2. EVOLUTION
2.1 AUTONOMIC SYSTEMS
Autonomic systems are systems that are self-configuring, self-healing, self-
protecting, and self-optimizing. They are 'intelligent' open systems that manage complexity,
know themselves, continuously tune themselves, adapt to unpredictable conditions, prevent
and recover from failures, and provide a safe environment.
The term 'autonomic' comes from an analogy to the autonomic nervous
system in the human body, which adjusts to many situations automatically, without any
external help. A possible solution to the complexity problem is therefore to enable modern,
networked computing systems to manage themselves without direct human intervention. The
Autonomic Computing Initiative (ACI) aims to provide the foundation for such autonomic
systems. It is inspired by the autonomic nervous system of the human body, which controls
important bodily functions (e.g. respiration, heart rate, and blood pressure) without any
conscious intervention.
In a self-managing autonomic system, the human operator takes on a new role: instead of
controlling the system directly, he or she defines general policies and rules that guide the self-
management process (a small, illustrative sketch of such policies follows the list below). For this
process, IBM defined the following four functional areas:
1. Self-configuration: Automatic configuration of components;
2. Self-healing: Automatic discovery and correction of faults;
3. Self-optimization: Automatic monitoring and control of resources to ensure optimal
functioning with respect to the defined requirements;
4. Self-protection: Proactive identification of and protection from arbitrary attacks.
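To make the idea of operator-defined policies concrete, a small sketch in Python follows. It is
purely illustrative: the policy fields, condition strings, and actions are hypothetical and do not
correspond to any specific IBM policy format; the point is only that the operator states
high-level goals rather than issuing low-level commands.

    # Hypothetical, minimal policy definitions an operator might hand to an
    # autonomic manager instead of configuring each component by hand.
    from dataclasses import dataclass

    @dataclass
    class Policy:
        concern: str     # which self-* area the policy addresses
        condition: str   # high-level condition the manager watches for
        action: str      # corrective action the manager may plan

    operator_policies = [
        Policy("self-optimization", "avg_response_time_s > 2.0", "add_server_to_pool"),
        Policy("self-healing", "missed_heartbeats >= 3", "restart_component"),
        Policy("self-protection", "failed_logins_per_min > 100", "throttle_source_ip"),
    ]

    for p in operator_policies:
        print(f"{p.concern}: when {p.condition} then {p.action}")

An autonomic manager would evaluate such rules in its planning step; the operator's job then
reduces to choosing and tuning the policies themselves.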
IBM also defined five evolutionary levels, known as the autonomic deployment model: level 1 is
the basic level, representing the current situation in which systems are essentially managed
manually; levels 2 to 4 introduce increasingly automated management functions; and level 5
represents the ultimate goal of fully autonomic, self-managing systems.
The design complexity of autonomic systems can be reduced by utilizing design patterns
such as the model-view-controller (MVC) pattern, which improves separation of concerns by
encapsulating functional concerns.
2.2 The Complexity Problem
The increasing complexity of computing systems is overwhelming the capabilities of
software developers and system administrators who design, evaluate, integrate, and
manage these systems. Today, computing systems include very complex infrastructures and
operate in complex heterogeneous environments. With the proliferation of handheld devices,
the ever-expanding spectrum of users, and the emergence of the information economy with
the advent of the Web, computing vendors have difficulty providing an infrastructure to
address all the needs of users, devices, and applications. SOAs with Web services as their
core technology have solved many problems, but they have also raised numerous complexity
issues. One approach to deal with the business challenges arising from these complexity
problems is to make the systems more self-managed or autonomic. For a typical information
system consisting of an application server, a Web server, messaging facilities, and layers of
middleware and operating systems, the number of tuning parameters exceeds human
comprehension and analytical capabilities. Thus, major software and system vendors
endeavor to create autonomic, dynamic, or self-managing systems by developing methods,
architecture models, middleware, algorithms, and policies to mitigate the complexity
problem. In a 2004 Economist article, Kluth investigates how other industrial sectors
successfully dealt with complexity [Kluth 04]. He and others have argued that for a
technology to be truly successful, its complexity has to disappear. He illustrates his
arguments with many examples including the automobile and electricity markets. Only
mechanics were able to operate early automobiles successfully. In the early 20th century,
companies needed a position of vice president of electricity to deal with power generation
and consumption issues. In both cases, the respective industries managed to reduce the need
for human expertise and simplify the usage of the underlying technology. However, usage
simplicity comes with an increased complexity of the overall system (e.g., what is "under the
hood"). Basically for every mouse click or return we take out of the user experience, 20
things have to happen in the software behind the scenes. Given this historical perspective
with this predictable path of technology evolution, maybe there is hope for the information
technology sector.
Not only the number of computer systems but also the complexity of those systems is
increasing. The drastic increase in the number of Internet users can be seen clearly in the
following graph. Because we are familiar with the advanced features of these
systems, we can readily conclude that the internal architecture has become highly complex
in order to make the systems user friendly.
Fig 1. Global population v/s Internet Users
2.3 The Evolution Problem
By attacking the software complexity problem through technology simplification and automation,
autonomic computing also promises to solve selected software evolution problems. Instrumenting
software systems with autonomic technology will allow us to monitor or verify requirements
(functional or nonfunctional) over long periods of time. For example, self-managing systems will be
able to monitor and control the brittleness of legacy systems, provide automatic updates to evolve
installed software, adapt safety critical systems without halting them, immunize computers
against malware automatically, facilitate enterprise integration with self-managing integration
mechanisms, document architectural drift by equipping systems with architecture analysis
frameworks, and keep the values of quality attributes within desired ranges.
3. WORKING
In order for the autonomic managers and the managed resources in an autonomic system to
work together, the developers of these components need a common set of capabilities. This
section conceptually describes an initial set of core capabilities that are needed to build
autonomic systems. These core capabilities include:
Solution Installation: A common solution knowledge capability eliminates the
complexity introduced by many formats and many installation tools. By capturing
install and configuration information in a consistent manner, autonomic managers can
share the facilities as well as information regarding the installed environment.
Common Systems Administration: Autonomic systems require common console
technology to create a consistent human interface for the autonomic managers of the
IT infrastructure. The common console capability provides a framework for reuse and
consistent presentation for other autonomic core technologies. The primary goal of a
common console is to provide a single platform that can host all of the administrative
console functions in server, software, and storage products in a manner that allows
users to manage solutions rather than managing individual systems or products.
Problem Determination: Autonomic managers take actions based on problems or
situations they observe in the managed resource. Therefore, one of the most basic
capabilities is being able to extract high-quality data to determine whether or not a
problem exists in a managed resource. In this context, a problem is a situation in which
an autonomic manager needs to take action. A major cause of poor-quality
information is the diversity in the format and content of the information provided by
the managed resources. To address this diversity, a common problem-determination
architecture normalizes the collected data in terms of format, content, organization,
and sufficiency (a small normalization sketch in this spirit follows this list).
Autonomic Monitoring: Autonomic monitoring is a capability that provides an
extensible run-time environment for an autonomic manager to gather and filter data
obtained through sensors. Autonomic managers can utilize this capability as
a mechanism for representing, filtering, aggregating, and performing a range of
analyses on sensor data.
Complex Analysis: Autonomic managers need to have the capability to perform
complex data analysis and reasoning on the information provided through sensors.
The analysis will be influenced by stored knowledge data. An autonomic manager's
ability to quickly analyze and make sense of this data is crucial to its successful
operation.
Policy-based Management: Policies are a key part of the knowledge used by
autonomic managers to make decisions, essentially controlling the planning portion of
the autonomic manager. By defining policies in a standard way, they can be shared
across autonomic managers to enable entire systems to be managed by a common set
of policies.
Heterogeneous Workload Management: Heterogeneous workload management
includes the capability to instrument system components uniformly to manage
workflow through the system. Business workload management is a core technology
that monitors end-to-end response times for transactions or segments of transactions,
rather than at the component level, across the heterogeneous infrastructure.
These capabilities are enabled through a series of tools and facilities that are collected in
the IBM Autonomic Computing Toolkit.
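As a rough illustration of the problem-determination idea above, the sketch below normalizes
events from two differently formatted sources into one common record so that a single analysis
step can act on them. The field names and the normalize() helper are invented for this example;
they do not reproduce IBM's actual common event schema.

    # Hypothetical sketch: mapping source-specific events into a common format
    # that an autonomic manager can filter and analyze uniformly.
    from datetime import datetime, timezone

    def normalize(raw: dict, source: str) -> dict:
        if source == "webserver":
            return {"resource": raw["host"], "severity": raw["level"].upper(),
                    "message": raw["msg"], "time": raw["ts"]}
        if source == "database":
            return {"resource": raw["instance"], "severity": raw["sev"],
                    "message": raw["text"],
                    "time": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc).isoformat()}
        raise ValueError(f"unknown source: {source}")

    events = [
        normalize({"host": "web01", "level": "error", "msg": "worker crashed",
                   "ts": "2024-01-01T10:00:00+00:00"}, "webserver"),
        normalize({"instance": "db02", "sev": "WARNING", "text": "slow query",
                   "epoch": 1704103200}, "database"),
    ]
    # With one uniform format, a single rule can decide what needs attention.
    problems = [e for e in events if e["severity"] in ("ERROR", "CRITICAL")]
    print(problems)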
3.2 ELEMENTS OF AUTONOMIC COMPUTING
Fig 3. Elements of Autonomic Computing
IBM researchers have established an architectural framework for autonomic
systems. An autonomic system consists of a set of autonomic elements that contain and
manage resources and deliver services to humans or other autonomic elements. An
autonomic element consists of one autonomic manager and one or more managed
elements. At the core of an autonomic element is a control loop that integrates the
manager with the managed element. The autonomic manager consists of sensors, effectors,
and a five-component analysis and planning engine, as depicted in Fig 3. The monitor
observes the sensors, filters the data collected from them, and stores the distilled data in the
knowledge base. The analysis engine compares the collected data against the desired sensor
values, which are also stored in the knowledge base. The planning engine devises strategies
to correct the trends identified by the analysis engine. The execution engine finally adjusts
the parameters of the managed element by means of the effectors and stores the affected
values in the knowledge base.
Now we can discuss these elements in detail. The autonomic manager implements the autonomic
control loop by dividing it into four parts: monitor, analyze, plan, and execute. The
control loop carries out tasks as efficiently as possible based on high-level policies; a minimal
sketch of such a loop is given at the end of this section.
Monitor: Through information received from sensors, the resource monitors the
environment for specific, predefined conditions. These conditions do not have to be
errors; they can be a certain load level or type of request.
Analyze: Once the condition is detected, what does it mean? The resource must
analyze the information to determine whether action should be taken.
Plan: If action must be taken, what action? The resource might simply notify the
administrator, or it might take more extensive action, such as provisioning another
hard drive.
Execute: It is this part of the control loop that sends the instruction to the effector,
which actually affects or carries out the planned actions.
The managed resources are controlled system components that can range from single
resources such as a server, database server, or router to collections of resources like server
pools, clusters, or business applications.
All of the actions in the autonomic control loop either make use of or supplement the
knowledge base for the resource. For example, the knowledge base helps the analysis phase
of the control loop to understand the information it's getting from the monitor phase. It also
provides the plan phase with information that helps it select the action to be performed.
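A minimal sketch of such a monitor-analyze-plan-execute loop is given below. It is illustrative
only: the managed resource, the cpu_load metric, the scaling rule, and the threshold stored in the
knowledge base are assumptions made for this example, not part of IBM's reference architecture.

    # Hypothetical MAPE-style control loop over a single managed resource.
    class ManagedResource:
        def __init__(self):
            self.cpu_load = 0.95   # value reported by a sensor (assumed metric)
            self.instances = 1     # parameter adjustable through an effector

    knowledge = {"max_cpu_load": 0.80}          # desired range kept as knowledge

    def monitor(res):                           # sensors observe predefined conditions
        return {"cpu_load": res.cpu_load}

    def analyze(symptoms):                      # compare observation with knowledge
        return symptoms["cpu_load"] > knowledge["max_cpu_load"]

    def plan(problem_detected):                 # choose an action (a fixed rule here)
        return "add_instance" if problem_detected else None

    def execute(res, action):                   # effector carries out the planned action
        if action == "add_instance":
            res.instances += 1
            res.cpu_load /= 2                   # crude assumption: load halves per instance

    resource = ManagedResource()
    for _ in range(3):                          # a few iterations of the control loop
        execute(resource, plan(analyze(monitor(resource))))
    print(resource.instances, round(resource.cpu_load, 2))

Each pass consults the knowledge base (the threshold) exactly as described above: the monitor
feeds the analysis, the analysis drives the plan, and the execute step acts through the effector.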
4. APPLICATIONS
A fundamental building block of an autonomic system is the sensing capability (sensors),
which enables the system to observe its external operational context. Inherent to an
autonomic system is knowledge of its Purpose (intention) and the Know-how to operate
itself (e.g., bootstrapping, configuration knowledge, interpretation of sensory data, etc.)
without external intervention. The actual operation of the autonomic system is dictated by its
Logic, which is responsible for making the right decisions to serve its Purpose and is influenced
by the observation of the operational context (based on the sensor input).
This model highlights the fact that the operation of an autonomic system is purpose-driven.
Its purpose includes its mission (e.g., the service it is supposed to offer), its policies (e.g., those
that define its basic behaviour), and its "survival instinct". Seen as a control system, this would
be encoded as a feedback error function or, in a heuristically assisted system, as an algorithm
combined with a set of heuristics that bound its operational space.
Control loops
A basic concept applied in autonomic systems is the closed control loop. This
well-known concept stems from process control theory. Essentially, a closed control loop in
a self-managing system monitors some resource (a software or hardware component) and
autonomously tries to keep its parameters within a desired range.
According to IBM, hundreds or even thousands of these control loops are expected to work in
a large-scale self-managing computer system.
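The sketch below is a toy closed loop in the spirit of a simple proportional controller from
process control: it repeatedly measures a utilization figure and nudges capacity toward a desired
setpoint. The metric, setpoint, and gain are assumptions chosen only to make the loop's
behaviour visible.

    # Hypothetical closed control loop: keep a resource's utilization near a
    # desired setpoint by repeatedly adjusting its capacity (e.g. worker count).
    SETPOINT = 0.70     # desired utilization
    GAIN = 0.4          # proportional gain (illustrative value)

    demand = 140.0      # incoming work per second (assumed constant here)
    capacity = 100.0    # current processing capacity

    for step in range(10):
        utilization = demand / capacity      # monitor the resource
        error = utilization - SETPOINT       # deviation from the desired range
        capacity += GAIN * error * capacity  # actuate: grow or shrink capacity
        print(f"step {step}: utilization={utilization:.2f} capacity={capacity:.1f}")

Each pass through the loop corrects part of the remaining error, so the utilization settles toward
the setpoint without any human retuning the capacity by hand.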
5. ADVANTAGES
Fig 2. Diagram of characteristics of autonomic computing
An autonomic system can self-configure at runtime to meet changing operating
environments, self-tune to optimize its performance, self-heal when it encounters
unexpected obstacles during its operation, and, of particular current interest, protect
itself from malicious attacks. Research and development teams concentrate on
developing theories, methods, tools, and technology for building self-healing, self-
configuring, self-optimizing, and self-protecting systems, as depicted in Fig 2. An
autonomic system may self-manage a single property or multiple such properties.
An autonomic system has the following characteristics:
Self-Configuring: Self-configuring systems provide increased responsiveness
by adapting to a dynamically changing environment. A self-configuring system
must be able to configure and reconfigure itself under varying and unpredictable
conditions. Varying degrees of end-user involvement should be allowed, from
user-based reconfiguration to automatic reconfiguration based on monitoring and
feedback loops. For example, the user may be given the option of reconfiguring
the system at runtime; alternatively, adaptive algorithms could learn the best
configurations to achieve mandated performance or to service any other desired
functional or nonfunctional requirement. Variability can be accommodated at
design time (e.g., by implementing goal graphs) or at runtime (e.g., by adjusting
parameters). Systems should be designed to provide configurability at a feature
level with capabilities such as separation of concerns, levels of indirection,
integration mechanisms (data and control), scripting layers, plug and play, and
set-up wizards. Adaptive algorithms have to detect and respond to short-term and
long-term trends.
Self-Optimizing: Self-optimizing systems provide operational efficiency by
tuning resources and balancing workloads. Such a system will continually monitor
and tune its resources and operations. In general, the system will continually seek
to optimize its operation with respect to a set of prioritized nonfunctional
requirements to meet the ever changing needs of the application environment.
Capabilities such as repartitioning, reclustering, load balancing, and rerouting
must be designed into the system to provide self-optimization. Again, adaptive
algorithms, along with other systems, are needed for monitoring and response.
Self-Healing: Self-healing systems provide resiliency by discovering
and preventing disruptions as well as recovering from malfunctions. Such a
system will be able to recover, without loss of data or noticeable delays in
processing, from routine and extraordinary events that might cause some of its
parts to malfunction. Self-recovery means that the system will select, possibly
with user input, an alternative configuration to the one it is currently using and
will switch to that configuration with minimal loss of information or delay
(a minimal watchdog sketch in this spirit follows this list).
Self-Protecting: Self-protecting systems secure information and resources
by anticipating, detecting, and protecting against attacks. Such a system will
be capable of protecting itself by detecting and counteracting threats through the
use of pattern recognition and other techniques. This capability means that the
design of the system will include an analysis of the vulnerabilities and the
inclusion of protective mechanisms that might be employed when a threat is
detected. The design must provide for capabilities to recognize and handle
different kinds of threats in various contexts more easily, thereby reducing the
burden on administrators.
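As a rough illustration of the self-healing behaviour described above, the following watchdog
sketch detects a failed component and switches to an alternative configuration. The component
names, the simulated health check, and the restart step are all invented for this example.

    # Hypothetical self-healing watchdog: detect a failed component and fail
    # over to a standby configuration with minimal disruption.
    import random

    class Component:
        def __init__(self, name):
            self.name = name
            self.healthy = True

        def health_check(self) -> bool:
            # Simulated probe; a real check might call a service endpoint.
            if random.random() < 0.3:
                self.healthy = False
            return self.healthy

    active, standby = Component("primary"), Component("standby")

    for tick in range(5):
        if not active.health_check():
            print(f"{active.name} failed; switching to {standby.name}")
            active, standby = standby, active   # switch to the alternative configuration
            standby.healthy = True              # repair (restart) the failed component
    print(f"serving from: {active.name}")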
Some More Characteristics…
Reflexivity: An autonomic system must have detailed knowledge of its
components, current status, capabilities, limits, boundaries, interdependencies
with other systems, and available resources. Moreover, the system must be aware of
its possible configurations and how they affect particular nonfunctional
requirements.
Adapting: At the core of the complexity problem addressed by the AC initiative
is the problem of evaluating complex tradeoffs to make informed decisions. Most of
the characteristics listed above are founded on the ability of an autonomic system to
monitor its performance and its environment and respond to changes by
switching to a different behavior. At the core of this ability is a control loop.
Sensors observe an activity of a controlled process, a controller component
decides what has to be done, and then the controller component executes the
required operations through a set of actuators. The adaptive mechanisms to be
explored will be inspired by work on machine learning, multi-agent systems, and
control theory.
Automatic: This essentially means that the system is able to control its own internal
functions and operations. As such, an autonomic system must be self-contained and able to
start up and operate without any manual intervention or external help. Again, the
knowledge required to bootstrap the system (know-how) must be inherent to the
system.
Aware: An autonomic system must be able to monitor (sense) its operational
context as well as its internal state in order to be able to assess if its current operation
serves its purpose. Awareness will control adaptation of its operational behaviour in
response to context or state changes.
5.2 NEED FOR AUTONOMIC COMPUTING
In the evolution of humans and human society, automation has always been a
foundation for progress. If people can handle one of their needs automatically, they free their
minds and resources to concentrate on other tasks, and step by step they gain the ability to
tackle more complex problems. For example, few people worry about harvesting grain, grinding
flour, and baking bread; most people eat bread without worrying about producing it, because
they can simply buy it at a nearby store, leaving them free to do other work. Farmers likewise
benefit from automation by using machines for their work, saving time and cost and increasing
the harvest obtained per unit of land. But computing systems have shown that evolution via
automation also produces complexity as an unavoidable byproduct. Follow the evolution of
computers from single machines to modular systems to personal computers networked with
larger machines, and an unmistakable pattern emerges: compared with previous machines there
has been incredible progress in almost every aspect of computing, for example microprocessor
power up by a factor of about 10,000, storage capacity by a factor of about 45,000, and
communication speeds by a factor of about 1,000,000. Along with that growth have come
increasingly sophisticated architectures governed by software whose complexity now demands
tens of millions of lines of code; some operating environments weigh in at over 30 million lines
of code created by over 4,000 programmers. In fact, the growing complexity of the IT
infrastructure threatens to undermine the very benefits information technology aims to provide.
Until now, computer systems have relied mainly on human intervention and administration to
manage this complexity. At current rates of expansion, there will not be enough skilled IT
people to keep the world's computing systems running; even in uncertain economic times,
demand for skilled IT workers remains high. And even if enough skilled people could somehow
be found, the complexity is growing beyond human ability to manage it. As computing
evolves, the overlapping connections, dependencies, and interacting applications call for
administrative decision-making and responses faster than any human can deliver. Identifying
root causes of failures becomes more difficult, while finding ways of increasing system
efficiency generates problems with more variables than any human can hope to solve.
Without new approaches, things will only get worse. To solve this problem, we need
computer systems with autonomic behavior.
Value …
By enabling computer systems to have self-configuring, self-healing, self-optimizing and
self-protecting features, autonomic computing is expected to have many benefits for business
systems, such as reduced operating costs, lower failure rates, more security, and the ability to
have systems that can respond more quickly to the needs of the business within the market in
which they operate.
The implications for an autonomic, on demand business approach are
immediately evident: A network of organized, smart computing components can give clients
what they need, when they need it, without a conscious mental or even physical effort.
A few examples of the results delivered by implementing autonomic computing solutions with
self-management characteristics are operational efficiency, better support of business needs
with IT, and workforce productivity. Systems that are self-managing free up IT resources,
which can then move from mundane system management tasks to working with users to
solve business problems.
Examples
A variety of autonomic computing capabilities are already in use throughout IBM
products, and these products are already helping companies succeed. IBM self-
managing autonomic computing capabilities are present in all IBM software product
families: Information Management, Lotus, Tivoli, Rational, WebSphere, and others.
6. DISADVANTAGES
Dealing with issues of trust is critical for the successful design,
implementation, and operation of AC systems. Since an autonomic system is
supposed to reduce human interference or even take over certain heretofore human
duties, it is imperative to make trust development a core component of its design.
Even when users begin to trust the policies hard-wired into low-level autonomic
elements, it is a big step to gain their trust in higher level autonomic elements that
use these low-level elements as part of their policies. Autonomic elements are
instrumented to provide feedback to users beyond what they provide as their service.
Deciding what kind of feedback to provide and how to instrument the autonomic
elements is a difficult problem. The trust feedback required by users will evolve with
the evolution of the autonomic system. However, the AC field can draw experience from
the automation and HCI communities to tackle these problems.
Autonomic systems can become more trustable by actively
communicating with their users. Improved interaction will also allow these systems
to be more autonomous over time, exhibiting increased initiative without losing the
users' trust. Higher trustability and usability should, in turn, lead to improved
adoptability.
7. FUTURE OF AUTONOMIC COMPUTING
The computer industry is filled with pundits, speculators, visionaries, salesmen, brilliant
architects, and professors. Each provides invaluable insight into their experience, their
intelligence, their alma mater, their ticker symbol, their ego, and what's next. Some win the
"what's next" lottery; others work in brilliance for years in relative obscurity.
Seemingly, a world that has deployed over 1 billion devices a year for the last 3 years is
incapable of understanding the gravity of new programming models, a new hardware
architecture, a sleek new design that delivers on a vision Gene Roddenberry imagined in
the 1960s or Da Vinci in the 15th century. What is old is new, and let me tell you why: it
will revolutionize the industry (not "evolutionize", a term reserved for slower-growing
industries that require government assistance every decade or so), transform your
environment, and provide freedoms you had only hoped to enjoy. And we invented it 40
years ago. Does any of this sound familiar?
It should. These are the paraphrased slogans of an industry in transition. Real products
matter, product differentiation matters, standards matter, interoperability matters, and
shareholders pay for future expectations.
The future of computing…is NOW. The future of the computer industry is NOW. The
next generation of computer programming, software architectures and transformational
technologies is NOW. As an industry we have finally begun to embrace interface,
architectural and software programming standards to usher in a new era of interoperability
and scalability. Behind us are the days of “proprietary interfaces” (What does that actually
mean other than I am going to sell you some extra accessories that will be worthless in 2
years?), which do not provide a differentiated performance/cost advantage. Gone are the days
of developing programming languages that lock-in customers to individual companies,
whether vendors innovate or not. These rules of the past are slowly melting away, allowing
the entire industry to embrace interoperability and standards at the highest level in history.
Industry diversity is healthy and ensures that the most innovative and technologically relevant
companies will “win” most of the time. Allowing the 1 Billion and the Next Billion
customers of the world to enjoy the best interface technology yet developed….each other. It
also provides us with a unique ability to move to the next phase in our dynamic industry’s
growth, autonomic instrumentation.
Why is this important? Instrumentation matters. As we apply business and personal rules to
our growing compute environments it has become increasingly clear that the more tools we
make available to users the better informed we are in making decisions. The more disclosure
we provide to investors through the use of autonomic programming architectures the more
informed they will be of their investing decisions.
How can you day trade $1B in 35 different stocks without clear autonomic controls in your
data center, your database, your application and your client devices?
How can you move 450 million people efficiently throughout a country for 2 weeks
without autonomic controls on transportation (planes, trains, boats, and automobiles), as
they do during the Spring Festival in China?
How can you process 1 Billion text messages a day without clear business rules? What
happens when these messages are also coming from machines to other machines, modifying
databases, applications and clients?
As humans, we must apply guidelines, much like laws, for our machines to take action when
we are asleep, when we are tired, when we are not present, when we are simply being
human, too slow to react to a rapidly changing environment.
The innovators of the computer industry today understand this NOW. We do not need to
discuss a vision of 40 years ago without a plan to act NOW. Claiming ideas without action is
dishonorable at best, criminal at worst. The innovators of today must build products and
services that help solve the problems of today. We do not need to look to 2050 without a plan
to act NOW. The visionaries of tomorrow are…..not born. The visionaries of today…can call
me in 10 years.
Autonomic controls are in place today, machine-to-machine computer architectures are
here today, and scalable compute engines are here today. Are they perfect? No. Are they
effective? Yes. The design architects, product engineers, and systems designers of today need
to address these concerns. Autonomic instrumentation delivers control to the administrator,
the user, and the developer. Rules engines can be modified to maximize efficiency, minimize
consumption, and increase productivity. All of this will lead to increased shareholder (read:
not just people who buy shares of stock) value across your enterprise, your school, our
hospitals, our governments, and your home.
When executed properly, autonomic controls should be able to deliver 20-25%
performance and efficiency increases with each new generation of Moore's law. In some
cases, as with the Intel Xeon 5500 series, these increases have been over 150% in
virtualization performance; such increases will come from a combination of software architecture
enhancement and silicon optimization. In other cases, they will come through the dedicated
hard work of increasing the instrumentation capability of a processor platform, at the same
price as the previous generation, through energy efficiency and memory controls.
Autonomic controls will also allow end users to avert disasters in our data centers, our homes,
and in our hands. Autonomic instrumentation design frameworks allow users to set
parameters on data migration, data backup, security, memory access, power consumption,
and virtual machine architectures.
8. CONCLUSIONS
The time is right for the emergence of self-managed or autonomic systems. Over the past
decade, we have come to expect that "plug-and-play" for Universal Serial Bus (USB) devices,
such as memory sticks and cameras, simply works, even for technophobic users. Today,
users demand and crave simplicity in computing solutions.
With the advent of Web and grid service architectures, we begin to expect that an average
user can provide Web services with high resiliency and high availability. The goal of building
a system that is used by millions of people each day and administered by a half-time person,
as articulated by Jim Gray of Microsoft Research, seems attainable with the notion of
automatic updates. Thus, autonomic computing seems to be more than just a new
middleware technology; in fact, it may be a solid solution for reining in the
complexity problem.
Historically, most software systems were not designed as self-managing systems.
Retrofitting existing systems with self-management capabilities is a difficult problem. Even if
autonomic computing technology is readily available and taught in computer science and
engineering curricula, it will take another decade for the proliferation of autonomicity in
existing systems.
9. APPENDIXES
FIGURE NUMBER   FIGURE NAME
Fig 1           Global population v/s Internet Users
Fig 2           Diagram of characteristics of autonomic computing
Fig 3           Elements of Autonomic Computing
10. BIBLIOGRAPHY
1. IBM, "Autonomic Computing: IBM's Perspective on the State of Information Technology".
2. Jeffrey O. Kephart and David M. Chess (2003), "The Vision of Autonomic Computing", IEEE Computer.
3. http://www.research.ibm.com/autonomic/research/index.html
4. http://autonomiccomputing.org/
5. http://www.ibm.com/developerworks/autonomic
6. https://communities.intel.com/community/datastack/blog/2009/06/26/the-future-of-autonomic-computing-innovation
7. http://en.wikipedia.org/wiki/Autonomic_computing