God Made Brains, We Make AI



Acknowledgement

I am greatly indebted to God Almighty for being the guiding light throughout; everything I am is due to His grace and not my merits.

I express my heartfelt gratitude to Dr. V. P. Devassia, Principal, College of Engineering, Chengannur, for extending all the facilities required for the presentation of my seminar.

I extend my sincere thanks to my seminar coordinators, Dr. Smitha Dharan, Head, Department of Computer Science and Engineering, and Ms. Shiny B., Assistant Professor in Computer Science and Engineering, for guiding me in my work and providing timely advice and valuable suggestions.

Last but not least, I extend my heartfelt gratitude to my parents and friends for their support and assistance.


Abstract

MoNETA (Modular Neural Exploring Traveling Agent) is brain-inspired software being designed at Boston University's Department of Cognitive and Neural Systems; it will run on a brain-inspired microprocessor under development at HP Labs in California. MoNETA, named after the ancient Roman goddess of memory, is a collective name for a family of neural modeling projects in the Neuromorphics Lab. MoNETA is designed to be modular: a whole-brain system, or artificial nervous system including many cortical and subcortical areas found in mammalian brains, is progressively refined with more complex and adaptive modules and tested in increasingly challenging environments.

MoNETA's neural models are designed to perceive sensory data and produce self-sufficient, observable cognitive behavior; in other words, to be autonomous. The goal of MoNETA is thus to integrate sensory and behavioral (motor) models, as a first step toward an artificial whole-brain system, and to develop an animat that can intelligently interact with and learn to navigate a virtual world, making decisions aimed at increasing rewards while avoiding danger. The animat brain is designed in Cog Ex Machina (Cog), software realized by HP in collaboration with Boston University in the DARPA SyNAPSE project. MoNETA is being implemented using the memristor, a new class of electronic device first built by HP Labs in 2008.

MoNETA will be a general-purpose, mammalian-type intelligence: an artificial, generic creature known as an animat. The key feature distinguishing MoNETA from other AI is that it won't have to be explicitly programmed; MoNETA is engineered to be as adaptable as a mammal's brain. The entity bankrolling the research is DARPA. When work on the brain-inspired microprocessor is complete, MoNETA's first starring role will likely be in the U.S. military, standing in for irreplaceable humans. MoNETA is a technology that could lead to true artificial intelligence.


Contents

1 Introduction
1.1 Artificial Intelligence
1.1.1 An Overview
1.1.2 The Birth of Artificial Intelligence
1.1.3 Cybernetics and Early Neural Networks
1.1.4 AI Challenges
1.1.5 Rise of Expert Systems
1.2 MoNETA
1.2.1 Defense Advanced Research Projects Agency
1.2.2 SyNAPSE
1.3 Neuroscience and AI
2 Detailed Description
2.1 The Brain
2.1.1 Anatomy and Structure
2.1.2 Neuron
2.2 Computational Neuroscience
2.2.1 Von Neumann Architecture
2.2.2 Neuromorphic Architecture
2.2.3 Wetware
2.3 Memristor
2.3.1 Design and Working
2.3.2 Pros and Cons
2.4 Modular Neural Exploring Travelling Agent
2.4.1 Cog Ex Machina
2.4.2 Role of HP
2.4.3 Architecture and Working
2.4.4 Morris Water Navigation Task
2.4.5 Applications
2.4.6 Challenges
3 Conclusion and Future Scope
4 References


List of Figures

1 Representation of Brain Anatomy
2 A Typical Neuron
3 Synapse Generation
4 The Great Brain Race
5 von Neumann Architecture
6 Hardware vs. Wetware
7 Memristor
8 Memristor Symbol
9 Normal Brain Simulation
10 Design for MoNETA
11 Schematic drawing of the Morris water navigation
12 A rat undergoing a Morris water navigation test


1 Introduction

1.1 Artificial Intelligence

The term artificial intelligence generally refers to the ability of a computer to carry out functions and reasoning typical of the human mind. Artificial intelligence is the search for a way to map intelligence into mechanical hardware and enable a structure within that system to formalize thought. It includes the theory and techniques for developing algorithms that allow machines to behave intelligently, at least in specific domains. Artificial Intelligence (AI) is a broad field concerned with getting computers to do tasks that require human intelligence. However, there are many tasks which we might reasonably think require intelligence, such as complex arithmetic, which computers can do very easily. Conversely, there are many tasks that people do without even thinking, such as recognizing a face, which are extremely complex to automate. AI is concerned with these difficult tasks, which seem to require complex and sophisticated reasoning processes and knowledge.

The modern definition of artificial intelligence (or AI) is "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.

John McCarthy, considered the father of artificial intelligence, coined the term in 1956 and defined it as "the science and engineering of making intelligent machines". Other names for the field have been proposed, such as computational intelligence, synthetic intelligence or computational rationality. AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, control theory, probability, optimization and logic. AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others. There are numerous definitions of what artificial intelligence is:

1. Systems that think like humans (focus on reasoning and human framework)

2. Systems that think rationally (focus on reasoning and a general concept of intelligence)

3. Systems that act like humans (focus on behavior and human framework)

4. Systems that act rationally (focus on behavior and a general concept of intelligence)

What is rationality? Simply speaking, "doing the right thing".
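To make this concrete, here is a minimal Python sketch of the perceive-act loop behind that definition of an intelligent agent. The Agent interface and the toy vacuum world are illustrative inventions, not part of the report or of any standard library.

```python
from abc import ABC, abstractmethod

# A minimal rendering of "a system that perceives its environment and takes
# actions which maximize its chances of success". All names are illustrative.
class Agent(ABC):
    @abstractmethod
    def act(self, percept):
        """Map the latest percept to an action."""

class ReflexVacuumAgent(Agent):
    """Acts rationally on the current percept alone: clean if dirty, else move."""
    def act(self, percept):
        location, dirty = percept
        if dirty:
            return "suck"
        return "move_right" if location == "A" else "move_left"

agent = ReflexVacuumAgent()
print(agent.act(("A", True)))   # -> suck
print(agent.act(("A", False)))  # -> move_right
```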


1.1.1 An Overview

The history of artificial intelligence began in antiquity, with myths, stories and rumours of artificial beings endowed with intelligence or consciousness. The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain. The field of artificial intelligence research was founded at a conference on the campus of Dartmouth College in the summer of 1956.

1.1.2 The Birth of Artificial Intelligence

In the 1940s and 1950s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956. Artificial intelligence is based on the assumption that the process of human thought can be mechanized.

Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles, play board games or make logical deductions. The early 1970s saw the development of production systems: programs that exploited a body of knowledge organized in a database, applying production rules to derive answers to specific questions. Expert systems later displaced production systems because of the difficulties the latter encountered, in particular the need to provide the original knowledge in explicit form and the inflexibility of the production rules. Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called "reasoning as search". The principal difficulty was that, for many problems, the number of possible paths through the "maze" was simply astronomical (a situation known as a "combinatorial explosion"). Researchers would reduce the search space by using heuristics or "rules of thumb" that would eliminate those paths that were unlikely to lead to a solution, as the sketch below illustrates.
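As a concrete illustration of "reasoning as search" guided by a heuristic, here is a minimal greedy best-first search over an invented grid-world maze; the grid, the Manhattan-distance heuristic, and all names are ours, not from the report.

```python
import heapq

# Greedy best-first search: always expand the state the heuristic rates as
# most promising; dead ends are abandoned simply by never re-expanding them.
def best_first(start, goal, neighbors, h):
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:                      # reconstruct the path found
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:
                came_from[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

# A 5x5 grid "maze"; the heuristic is Manhattan distance to the goal.
GOAL = (4, 4)
def neighbors(p):
    x, y = p
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in candidates if 0 <= a < 5 and 0 <= b < 5]

print(best_first((0, 0), GOAL, neighbors,
                 lambda p: abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1])))
```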

Progress in AI has continued, despite the rise and fall of its reputation. Problems that had begun to seem impossible in 1970 have been solved, and the solutions are now used in successful commercial products.


1.1.3 Cybernetics and Early Neural Networks

Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing signals). Alan Turing's theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.

Turing's Test. In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines with true intelligence. He noted that "intelligence" is difficult to define, and devised his famous Turing Test: if a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then the machine could be called "intelligent". This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least plausible, and the paper answered all the most common objections to the proposition. The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

1.1.4 AI Challenges

In the early seventies, the capabilities of AI programs were limited. AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.

• Limited computer power: There was not enough memory or processing speed to accomplish anything truly useful. For example, Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only twenty words, because that was all that would fit in memory. It was argued in 1976 that computers were still millions of times too weak to exhibit intelligence, and an analogy was suggested: artificial intelligence requires computer power in the same way that aircraft require power. Below a certain threshold it is impossible, but, as power increases, eventually it could become easy. With regard to computer vision, it was estimated that simply matching the edge- and motion-detection capabilities of the human retina in real time would require a general-purpose computer capable of 10^9 operations/second (1000 MIPS). Nowadays, practical computer vision applications require 10,000 to 1,000,000 MIPS.


• Intractability and the combinatorial explosion: In 1972 Richard Karp showed there are many problems that can probably only be solved in exponential time (in the size of the inputs). Finding optimal solutions to these problems requires unimaginable amounts of computer time except when the problems are trivial.

• Commonsense knowledge and reasoning: Many important artificial intelligence applications like vision or natural language require simply enormous amounts of information about the world: the program needs to have some idea of what it might be looking at or what it is talking about.

• Moravec's paradox: Proving theorems and solving geometry problems is comparatively easy for computers, but a supposedly simple task like recognizing a face or crossing a room without bumping into anything is extremely difficult. This helps explain why research into vision and robotics had made so little progress in the preceding decades.

• The frame and qualification problems: AI researchers (like John McCarthy) who used logic discovered that they could not represent ordinary deductions that involved planning or default reasoning without making changes to the structure of logic itself. They developed new logics (like non-monotonic logics and modal logics) to try to solve the problems.

In the late 1980s, several researchers advocated a completely new approach to artificial intelligence, based on robotics. It was believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world.

1.1.5 Rise of Expert Systems

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules derived from the knowledge of experts. The earliest examples were Dendral, begun in 1965, which identified compounds from spectrometer readings, and MYCIN, developed in 1972, which diagnosed infectious blood diseases. They demonstrated the feasibility of the approach; a toy version of the idea is sketched below.
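The following is a toy forward-chaining rule engine in the spirit of early expert systems such as MYCIN; the rules and fact names are invented for illustration and do not reproduce any real knowledge base.

```python
# Rules are (premises, conclusion) pairs that a real system would elicit
# from human experts. These particular rules are invented for the example.
RULES = [
    ({"fever", "positive_blood_culture"}, "blood_infection"),
    ({"blood_infection", "penicillin_allergy"}, "recommend_alternative_drug"),
    ({"blood_infection"}, "recommend_antibiotic"),
]

def forward_chain(facts):
    """Fire every rule whose premises all hold, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "positive_blood_culture"}))
# Derives blood_infection, then recommend_antibiotic, from the two inputs.
```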

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem), and their simple design made it relatively easy for programs to be built and then modified once they were in place. In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success. Corporations around the world began to develop and deploy expert systems. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.

Artificial intelligence research in the 1990s centered on improving conditions for humans, with a focus on building human-like robots, because scientists were interested in human intelligence and fascinated by trying to copy it. If AI machines become capable of doing tasks originally done by humans, the role of humans will change. Robots have already begun to replace factory workers, and they are acting as surgeons, pilots, astronauts, and so on.

A new paradigm called "intelligent agents" became widely accepted during the 1990s. Although earlier researchers had proposed modular "divide and conquer" approaches to AI, the intelligent agent did not reach its modern form until some concepts were adopted from decision theory and economics into the study of AI.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. The most complicated intelligent agents known are rational, thinking human beings.

Human intelligence involves both "mundane" and "expert" reasoning. By mundane reasoning we mean all those things which (nearly) all of us can routinely do (to varying abilities) in order to act and interact in the world. This includes:

• Vision: The ability to make sense of what we see.

• Natural Language: The ability to communicate with others in any language.

• Planning: The ability to decide on a good sequence of actions to achieve your goals.

• Robotics: The ability to move and act in the world, possibly responding to new perceptions.

By expert reasoning we mean things that only some people are good at, and which require extensive training. It can be especially useful to automate these tasks, as there may be a shortage of human experts. Expert reasoning includes:

• Medical diagnosis.

• Equipment repair.

• Computer configuration.

• Financial planning.


Expert systems are concerned with the automation of these sorts of tasks. AI research is concerned with automating both kinds of reasoning. It turns out, however, that it is the mundane tasks that are by far the hardest to automate.

The field of AI, now more than half a century old, has finally achieved some of its goals. It began to be used successfully throughout the technology industry. Some of the success was due to increasing computer power, and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability.

Applications of AI. AI's use in many ways is not apparent at first glance. Microsoft's homepage uses AI for the help system that lets users go directly to the things they need. Julia, a bot from the 1990s that seems unsophisticated by today's standards, was created using AI in an attempt to pass the Turing Test and show computer intelligence. An important goal of AI research is to allow computers to communicate in natural languages like English. Artificial intelligence is used to help pilots fly planes. Robots incorporating artificial intelligence, interacting with laser, ultrasound and MRI scanning, are performing delicate brain surgery more accurately than traditional surgical approaches. AI was used in the investigation of Mars in July 1997: the artificially intelligent rover Sojourner was programmed to leave the Pathfinder and explore on its own. Another important use of AI is in hazardous-duty robots, which are used in areas of dangerous chemicals or radioactivity. The possibilities of artificial intelligence are endless. In all of these examples, it is key to note that the role of AI is one of creating a safer, more easily used and understood environment.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question-answering system, Watson, defeated the two greatest champions, Brad Rutter and Ken Jennings, by a significant margin.

Today artificial intelligence is used in our homes and in sophisticated establishments such as military bases and the NASA space station. NASA has even sent artificially intelligent robots to explore some planets and to learn more about their atmosphere and habitat, the intention being to investigate whether there is a possibility of humans living on other planets. Major application areas of AI include medical diagnosis, aviation systems, speech recognition, heavy industries and space, financial data analysis, computational neuroscience, and the development of various video and PC games.


The ultimate goal of research in AI and robotics is to produce software that can interact meaningfully with human beings. A huge amount of research effort is being exerted to achieve this aim, and much progress has already been made. Researchers have built androids that can walk on legs, climb stairs, grasp objects without breaking or dropping them, recognize faces and a variety of physical objects, imitate what they see human beings doing, and so on.

Artificial intelligence hasn't stood still over the past half century, even if we never got the humanlike assistants that some thought we'd have by now. Way back in the 1960s, the relatively recent invention of the transistor prompted breathless predictions that machines would outsmart their human handlers within 20 years. Now, 50 years later, it seems the best that has been achieved is automated tech support; computers diagnose patients over the Internet, and high-end cars help keep you from straying out of your lane. But even the most helpful AI must be programmed explicitly to carry out its one specific task. What is required is a general-purpose intelligence that can be set loose on any problem: one that can adapt to a new environment without having to be retrained constantly.

In the case of artificial intelligence, there are at least three obvious reasons that AI could improve unexpectedly fast once it is created. The most obvious reason is that computer chips already run at ten million times the serial speed of human neurons and are still getting faster. The next reason is that an AI can absorb hundreds or thousands of times as much computing power, whereas humans are limited to what they're born with. The third and most powerful reason is that an AI is a recursively self-improving pattern. Just as evolution creates order and structure enormously faster than accidental emergence, recursive self-improvement may create order enormously faster than evolution.

1.2 MoNETA

MoNETA (Modular Neural Exploring Traveling Agent), considered to be a brain on a chip, is the software being designed at Boston University's Department of Cognitive and Neural Systems, which will run on a brain-inspired microprocessor under development at HP Labs in California. It will function according to the principles that distinguish humans most profoundly from machines. MoNETA will do things no computer ever has. It will perceive its surroundings, decide which information is useful, integrate that information into the emerging structure of its reality, and, in some applications, formulate plans that will ensure its survival. In other words, MoNETA can lead to true artificial intelligence.

Researchers have suspected for decades that real artificial intelligence can't be done on traditional hardware, with its rigid adherence to Boolean logic and vast separation between memory and processing. But that knowledge was of little use until about two years ago, when HP built a new class of electronic device called a memristor. Before the memristor, it would have been impossible to create something with the form factor of a brain, the low power requirements, and the instantaneous internal communications. It turns out that those three things are key to making anything that resembles the brain and thus can be trained and coaxed to behave like a brain.

The Defense Advanced Research Projects Agency (DARPA)-sponsored Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project is looking for hardware solutions that reduce the power consumed by electronic synapses so as to achieve a memory density of 10^15 bits per square centimetre. One approach is based on memristive devices. The memristor, initially theorized by University of California, Berkeley Professor Leon Chua and later realized by HP Labs, has the unique property of remembering its stimulation history in its resistive state. It does not require power to maintain its memory, making it ideal for implementing dense, low-power synapses supporting large-scale neural models. The challenge is to build a software platform able to exploit the memristor's capacities.

This platform, named Cog Ex Machina (Cog), is being developed at Hewlett-Packard by Greg Snider. Cog abstracts away the underlying hardware and allocates processing resources to computational algorithms based on CPU/GPU availability. Cog exposes a programming interface that enforces synchronous parallel processing of neural data encoded as multidimensional arrays.
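The report does not document Cog's actual programming interface, so the following is only a hypothetical NumPy sketch of the style it describes: a whole neural population advanced synchronously through multidimensional-array operations rather than neuron by neuron.

```python
import numpy as np

# Hypothetical array-based neural update in the style the text describes;
# this is NOT Cog's real API. Sizes and constants are illustrative.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=(64, 64))  # recurrent connections
rates = np.zeros((64, 64))                      # one sheet of firing rates

def tick(rates, external_input, dt=0.01, tau=0.1):
    """One synchronous update of the entire sheet (leaky rate integration)."""
    drive = weights @ rates + external_input
    return rates + (dt / tau) * (-rates + np.maximum(drive, 0.0))

# Every unit advances in lockstep on each tick; there is no per-neuron loop.
for _ in range(100):
    rates = tick(rates, external_input=np.full((64, 64), 0.2))
print(rates.mean())
```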

The Modular Neural Exploring Traveling Agent (MoNETA) project, supported by DARPA's SyNAPSE program together with HP, uses Cog to progressively implement complex, whole-brain systems able to leverage the power of the memristive hardware that is to be designed. MoNETA is the brain of an animat, a neuromorphic agent that autonomously learns to perform complex behaviors in a virtual environment. It combines visual scene analysis, spatial navigation, and plasticity. The system is intended to replicate a rodent's learning to swim to a submerged platform in the Morris water maze task, a behavior that involves cooperation among several brain areas. The MoNETA brain will eventually implement many cortical and subcortical areas that will allow an animat or robot to engage with a virtual or real environment.


1.2.1 Defense Advanced Research Projects Agency

The entity bankrolling the research that will yield this new artificial intelligence is the U.S. Defense Advanced Research Projects Agency (DARPA).

DARPA is an agency of the United States Department of Defense responsible for the development of new technology for use by the military. DARPA has been responsible for funding the development of many technologies which have had a major effect on the world, including computer networking, as well as NLS, which was both the first hypertext system and an important precursor to the contemporary, ubiquitous graphical user interface.

DARPA is a defense agency with a unique role within the DoD (Department of Defense). DARPA is not tied to a specific operational mission: it supplies technological options for the entire Department and is designed to be the technological engine for transforming the DoD. Originally created as the Advanced Research Projects Agency (ARPA), it was later renamed DARPA (D for Defense). DARPA is independent of other, more conventional military R&D and focuses on many small- and large-scale short-term projects run by small, purpose-built teams.

1.2.2 SyNAPSE

SyNAPSE is a DARPA program that aims to develop electronic neuromorphic machine technology that scales to biological levels. More simply stated, it is an attempt to build a new kind of computer with form, function, and architecture similar to those of the mammalian brain. Such artificial brains would be used in robots whose intelligence matches that of rats, cats, and ultimately even humans.

SyNAPSE is a backronym standing for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. The name alludes to synapses, the junctions between biological neurons. The program is being undertaken by HRL Laboratories (HRL), Hewlett-Packard, and IBM Research.

The initial phase of the SyNAPSE program developed nanometer-scale electronic synaptic components capable of adapting the connection strength between two neurons in a manner analogous to that seen in biological systems, and simulated the utility of these synaptic components in core microcircuits that support the overall system architecture.

Continuing efforts will focus on hardware development through the stages of microcircuit development, fabrication process development, single-chip system development, and multi-chip system development. In support of these hardware developments, the program seeks to develop increasingly capable architecture and design tools, very large-scale computer simulations of the neuromorphic electronic systems to inform the designers and validate the hardware prior to fabrication, and virtual environments for training and testing the simulated and hardware neuromorphic systems.

The vast majority of current-generation computing devices are based on the von Neumann architecture. This core architecture is wonderfully generic and multi-purpose, attributes which enabled the information age. The von Neumann architecture comes with a deep, fundamental limit, however. A von Neumann processor can execute an arbitrary sequence of instructions on arbitrary data, enabling reprogrammability, but the instructions and data must flow over a limited-capacity bus connecting the processor and main memory. Thus, the processor cannot execute a program faster than it can fetch instructions and data from memory. This limit is known as the von Neumann bottleneck.

In the last thirty years, the semiconductor industry has been very successful at avoiding this bottleneck by exponentially increasing clock speed and transistor density, as well as by adding clever features like cache memory, branch prediction, out-of-order execution and multi-core architectures. The exponential increase in clock speed allowed chips to grow exponentially faster without addressing the von Neumann bottleneck at all. From the user's perspective, it doesn't matter that data is flowing over a limited-capacity bus if that bus is ten times faster than the one in a machine two years old. Beyond a clock speed of a few gigahertz, however, processors dissipate too much power to be used economically.

Cache memory, branch prediction and out-of-order execution more directly mitigate the von Neumann bottleneck by holding frequently accessed or soon-to-be-needed data and instructions as close to the processor as possible. The exponential growth in transistor density (colloquially known as Moore's Law) allowed processor designers to convert extra transistors directly into better performance by building bigger caches and more intelligent branch predictors or re-ordering engines.

Multi-core and massively multi-core architectures are harder to place, but still fit within the same general theme: extra transistors are traded for higher performance. Rather than relying on automatic mechanisms alone, though, multi-core chips give programmers much more direct control of the hardware. This works beautifully for many classes of algorithms, but not all, and certainly not for data-intensive, bus-limited ones.

Unfortunately, the exponential transistor-density growth curve cannot continue forever without hitting basic physical limits. At that point, von Neumann processors will cease to grow appreciably faster, and users won't need to keep upgrading their computers every couple of years to stave off obsolescence.

Over six decades, modern electronics has evolved through a series of major developments (e.g., transistors, integrated circuits, memories, microprocessors) leading to the programmable electronic machines that are ubiquitous today. Owing to limitations in both hardware and architecture, these machines are of limited utility in complex, real-world environments, which demand an intelligence that has not yet been captured in an algorithmic-computational paradigm. Compared to biological systems, for example, today's programmable machines are less efficient by a factor of one million to one billion in complex, real-world environments. The SyNAPSE program seeks to break the programmable-machine paradigm and define a new path forward for creating useful, intelligent machines.

The vision for the anticipated DARPA SyNAPSE program is to enable electronic neuromorphic machine technology that is scalable to biological levels. Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real-world systems are always many-body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications, but useful and practical implementations do not yet exist.

SyNAPSE seeks not just to build brain-like chips, but to define a fundamentally distinct form of computational device. These new devices will excel at the kinds of distributed, data-intensive algorithms that complex, real-world environments require, precisely the kinds of algorithms that suffer immensely at the hands of the von Neumann bottleneck.

Consider Deep Blue, IBM's 1.4-ton supercomputer, which in 1997 faced then world chess champion Garry Kasparov. In prior years, Kasparov had defeated the computer's predecessors five times. After a taut series comprising one win apiece and three draws, Deep Blue finally trounced Kasparov in game six. Nevertheless, Deep Blue was not intelligent. To beat Kasparov, its special-purpose hardware used a brute-force strategy of simply calculating the value of 200 million possible chess moves each second. In the same amount of time, Kasparov could consider roughly two chess positions.

Over the next 10 years, computing capabilities skyrocketed: by 2007 the processing power of that 1.4-ton supercomputer had been contained within a Cell microprocessor roughly the size of a thumbnail. In the decade between them, transistor counts had jumped from 7.5 million on an Intel Pentium II to 234 million on the Cell. But that explosion of computing power did not bring artificial intelligence the slightest bit closer, as DARPA's Grand Challenge amply demonstrated. DARPA had launched the Grand Challenge to create autonomous vehicles that could drive themselves without human intervention. AI was credited (again) with a major victory when Stanley, Stanford's Volkswagen Touareg, drove itself 212 kilometers (132 miles) across California's Mojave Desert to claim the US $2 million prize.

1.3 Neuroscience and AI

The brain is the center of the nervous system in all animals except a few primitive invertebrates. The brain is located in the head, usually close to the primary sensory apparatus for vision, hearing, balance, taste, and smell. Neuroscience is the scientific study of the nervous system.

From an evolutionary-biological point of view, the function of the brain is to exert centralized control over the other organs of the body. This centralized control allows rapid and coordinated responses to changes in the environment. Some basic types of responsiveness, such as reflexes, can be mediated by the spinal cord, but sophisticated, purposeful control of behavior based on complex sensory input requires the information-integrating capabilities of a centralized brain.

From a philosophical point of view, what makes the brain special in comparison to other organs is that it forms the physical structure that generates the mind. The mechanisms by which brain activity gives rise to consciousness and thought have been very challenging to understand: despite rapid scientific progress, much about how brains work remains a mystery. The operations of individual brain cells are now understood in considerable detail, but the way they cooperate in ensembles of millions has been very difficult to decipher. The most promising approaches treat the brain as a biological computer, very different in mechanism from electronic computers, but similar in the sense that it acquires information from the surrounding world, stores it, and processes it in a variety of ways.

The field of neuroscience encompasses all approaches that seek to understand the brain and the rest of the nervous system. Cognitive science seeks to unify neuroscience and psychology with other fields that concern themselves with the brain, such as computer science (artificial intelligence and similar fields) and philosophy.

Computational neuroscience is the study of brain function in terms of the information-processing properties of the structures that make up the nervous system. It is an interdisciplinary science that links the diverse fields of neuroscience, cognitive science and psychology with electrical engineering, computer science, mathematics and physics. Computational neuroscience encompasses two approaches: first, the use of computers to study the brain; second, the study of how brains perform computation.

Cognitive science is the interdisciplinary scientific study of the mind and its processes. It includes research on how information is processed (in faculties such as perception, language, memory, reasoning, and emotion), represented, and transformed in behavior, in the (human or other animal) nervous system, or in a machine (e.g., a computer). It spans many levels of analysis, from low-level learning and decision mechanisms to high-level logic and planning, and from neural circuitry to modular brain organization. The collaboration between artificial intelligence and neuroscience can produce an understanding of the mechanisms in the brain that generate human cognition.

Recent years have seen increasing applications of genetic and genomic techniques to the study of the brain. The most common subjects are mice, because of the availability of technical tools. It is now possible with relative ease to "knock out" or mutate a wide variety of genes and then examine the effects on brain function. More sophisticated approaches are also beginning to be used: for example, using the Cre-Lox recombination method it is possible to activate or inactivate genes in specific parts of the brain at specific times.


2 Detailed Description

Artificial intelligence has long been the overarching vision of computing, always the goal but never within reach. But using memristors from HP and steady funding from DARPA, computer scientists at Boston University are on a quest to build the electronic analog of a human brain. The software they are developing, called MoNETA, for Modular Neural Exploring Traveling Agent, should be able to function more like a mammalian brain than a conventional computer.

The key enabling device is the memristor, a concept that HP first realized in 2008. The memristor, put simply, is an electronic component whose resistance depends on the amount of charge that passed through it at a previous time. In other words, it remembers the state it was in the last time charge was applied, unlike a conventional RAM cell (which requires constant power to maintain its state). A simple model of this behavior is sketched below.
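Here is a minimal numerical sketch of that charge-history dependence, using the linear ion-drift model HP published for the device; the parameter values below are illustrative assumptions, not figures from the report.

```python
# Linear ion-drift memristor model: resistance interpolates between R_ON and
# R_OFF as the internal state w (position of the doped region, 0..1) moves
# with the current flowing through the device. Parameter values are assumed.
R_ON, R_OFF = 100.0, 16_000.0   # limiting resistances, ohms
K = 1e4                         # mobility term mu_v * R_ON / D**2, 1/(A*s)

def drive(w, voltage, dt, steps):
    """Apply a constant voltage; return the final state and resistance history."""
    history = []
    for _ in range(steps):
        r = R_ON * w + R_OFF * (1.0 - w)        # instantaneous resistance
        i = voltage / r                         # Ohm's law
        w = min(max(w + K * i * dt, 0.0), 1.0)  # ion drift, saturating at edges
        history.append(r)
    return w, history

w, hist = drive(w=0.1, voltage=1.0, dt=1e-4, steps=10_000)  # 1 s of bias
print(f"resistance moved from {hist[0]:.0f} to {hist[-1]:.0f} ohms")
# With the bias removed (voltage = 0), w stops changing: the device "remembers"
# its resistance with no power applied, unlike a RAM cell.
```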

Their ability to both store and process information as charge passes through them, and to do so with far less power consumption, makes memristors more analogous to the neurons in the brain than any previously developed electronic component, and they are small enough, cheap enough and efficient enough to someday be used to build computing platforms that function more like the brain: learning, making decisions, and even using a machine version of intuition to execute their roles.

Despite recent advances in computational power and memory capacity, realizing brain functions that allow for perception, cognition, and learning on biological temporal and spatial scales remains out of reach for even the fastest computers. By contrast, these functions are easily achieved by mammalian brains. For example, a rodent placed in a water pool can find its way to a submerged platform, using visual cues to localize its position and reach a learned safe location. Even a best-case extrapolation for implementing such behaviour at a functional level in an artificial brain based on conventional technology would consume several orders of magnitude more power and space than its biological counterpart. Clearly, the computational principles employed by a mammalian brain are radically different from those used by today's computers.

Classical implementations of large-scale neural systems on computers use resources such as central processing unit (CPU) and graphics processing unit (GPU) cores, mass memory storage, and parallelization algorithms. Designs for such systems must cope with the power dissipated by data transmission between processing and memory units. By some estimates, this loss is millions of times the power required to actually compute, in the sense of creating meaningful new register contents. Such a high transmission loss is unavoidable as long as memory and computation are physically distant. The creation of an electronic brain stuffed into the volume of a mammalian brain is thus impossible via conventional technology.

2.1 The Brain

The brain is the most sophisticated processing unit known, capable of remarkable tasks. It is the center of the nervous system in most animals and is most highly developed in mammals.

2.1.1 Anatomy and Structure

Figure 1: Representation of Brain Anatomy

The shape and size of the brains of different species vary greatly, and identifying common features is often difficult. Nevertheless, there are a number of principles of brain architecture that apply across a wide range of species, and some aspects of brain structure are common to almost the entire range of animal species. The cerebral cortex is the part of the brain that most strongly distinguishes mammals, and its elaboration carries with it changes to other brain areas. The superior colliculus, which plays a major role in the visual control of behaviour in most vertebrates, shrinks to a small size in mammals, and many of its functions are taken over by visual areas of the cerebral cortex. The cerebellum of mammals contains a large portion (the neocerebellum) dedicated to supporting the cerebral cortex, which has no counterpart in other vertebrates.

The brains of all species are composed primarily of neurons. Neurons are considered the most important cells in the brain. The property that makes neurons unique is their ability to send signals to specific target cells over long distances.


2.1.2 Neuron

Neurons, also known as brain cells, are cells that send and receive electrochemical signals to and from the brain and nervous system. There are about 100 billion neurons in the brain, of many different types. They vary in diameter from 4 microns (.004 mm) to 100 microns (.1 mm), and their length varies from a fraction of an inch to several feet.

Figure 2: A Typical Neuron

Neurons are nerve cells that transmit nerve signals to and from the brain at up to 200 mph. The neuron consists of a cell body with branching dendrites (signal receivers) and a projection called an axon, which conducts the nerve signal. The axon is a long extension of the nerve cell that takes information away from the cell body. Bundles of axons are known as nerves or, within the CNS (central nervous system), as nerve tracts or pathways. The length of an axon can be extraordinary: for example, if a pyramidal cell of the cerebral cortex were magnified so that its cell body became the size of a human body, its axon, equally magnified, would become a cable a few centimeters in diameter, extending more than a kilometer. Axons transmit signals in the form of electrochemical pulses called action potentials, which last less than a thousandth of a second and travel along the axon at speeds of 1 to 100 meters per second. Some neurons emit action potentials constantly, at rates of 10 to 100 per second, usually in irregular patterns; other neurons are quiet most of the time, but occasionally emit a burst of action potentials. Dendrites bring information to the cell body. At the other end of the axon, the axon terminals transmit the electrochemical signal across a synapse, the gap between the axon terminal and the receiving cell. The junction between two neurons is called a synapse.

Synapses are the key functional elements of the brain. The essential function of the brain is cell-to-cell communication, and synapses are the points at which communication occurs. The human brain has been estimated to contain approximately 100 trillion synapses; even the brain of a fruit fly contains several million. The functions of these synapses are very diverse: some are excitatory (they excite the target cell); others are inhibitory; others work by activating second-messenger systems that change the internal chemistry of their target cells in complex ways. A large fraction of synapses are dynamically modifiable; that is, they are capable of changing strength in a way that is controlled by the patterns of signals that pass through them. It is widely believed that activity-dependent modification of synapses is the brain's primary mechanism for learning and memory (a toy version of such a rule is sketched after the figure below).

Figure 3: Synapse Generation
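A toy version of such activity-dependent modification is the classic Hebbian rule ("cells that fire together wire together"). The sketch below, with a small decay term added for stability, is a textbook illustration rather than the brain's actual mechanism; all names and constants are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros((4, 4))   # synaptic strengths from 4 pre- to 4 post-synaptic cells

def hebbian_step(w, pre, post, lr=0.01, decay=0.001):
    """Strengthen synapses whose pre- and post-synaptic cells are co-active."""
    return w + lr * np.outer(pre, post) - decay * w

pattern = np.array([1.0, 0.0, 1.0, 0.0])      # cells 0 and 2 habitually co-fire
for _ in range(500):
    noise = (rng.random(4) < 0.1).astype(float)
    pre = np.clip(pattern + noise, 0.0, 1.0)  # mostly the pattern, plus noise
    post = pre                                # assume co-active targets here
    w = hebbian_step(w, pre, post)

print(np.round(w, 2))  # entries linking the co-active cells grow largest
```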

The functions of the brain depend on the ability of neurons to transmit electrochemical signals to other cells, and on their ability to integrate the electrochemical signals received from other cells. The electrical properties of neurons are controlled by a wide variety of biochemical and metabolic processes, most notably the interactions between neurotransmitters and receptors that take place at synapses.

The invention of electronic computers in the 1940s, along with the development of mathematical information theory, led to a realization that brains can potentially be understood as information-processing systems. This concept formed the basis of the field of cybernetics, and eventually gave rise to the field now known as computational neuroscience. The earliest attempts at cybernetics were somewhat crude in that they treated the brain as essentially a digital computer in disguise. Over the years, though, accumulating information about the electrical responses of brain cells recorded from behaving animals has steadily moved theoretical concepts in the direction of increasing realism.

The essence of the information-processing approach is to try to understand brain function in terms of information flow and the implementation of algorithms. Follow-up studies in higher-order visual areas found cells that detect binocular disparity, color, movement, and aspects of shape, with areas located at increasing distances from the primary visual cortex showing increasingly complex responses. Other investigations of brain areas unrelated to vision have revealed cells with a wide variety of response correlates, some related to memory, some to abstract types of cognition such as space.

Theorists have worked to understand these response patterns by constructing mathematical models of neurons and neural networks, which can be simulated using computers. Some useful models are abstract, focusing on the conceptual structure of neural algorithms rather than the details of how they are implemented in the brain; other models attempt to incorporate data about the biophysical properties of real neurons. No model on any level is yet considered to be a fully valid description of brain function, though. The essential difficulty is that sophisticated computation by neural networks requires distributed processing in which hundreds or thousands of neurons work cooperatively, whereas current methods of recording brain activity can only isolate action potentials from a few dozen neurons at a time.

Despite their speed and memory capacity, silicon-based computers struggle to emulate the sophisticated processing mechanisms of the mammalian brain. The branch of computer science called artificial intelligence tries to narrow the gap, and one of its basic tools is the neural network. An artificial neural network (ANN), usually called a neural network (NN), is a mathematical or computational model that is inspired by the structure and/or functional aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. Modern neural networks are non-linear statistical data-modeling tools. They are usually used to model complex relationships between inputs and outputs or to find patterns in data.
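As a minimal concrete example, the two-layer network below is trained by gradient descent to model XOR, an input-output relationship that no single linear unit can capture; the architecture and hyperparameters are illustrative choices, not anything specified in the report.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(0.0, 0.5, (2, 8))   # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (8, 1))   # hidden -> output weights

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

for _ in range(5000):
    h = np.tanh(X @ W1)                   # nonlinear hidden layer
    out = h @ W2                          # linear readout
    err = out - y                         # gradient of squared error
    grad_h = err @ W2.T                   # backpropagate through the readout
    W2 -= 0.1 * h.T @ err
    W1 -= 0.1 * X.T @ (grad_h * (1.0 - h ** 2))   # tanh derivative

print(np.round((np.tanh(X @ W1) @ W2).ravel(), 2))  # approaches [0, 1, 1, 0]
```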

Now consider the case of a humble rat. Its biological intelligence uses general-purpose wetware (the biochemical hardware-and-software puree that is the brain) to solve tasks like those of the Grand Challenge cars, with much better results. First, a hungry rat will explore creatively for food. It might follow familiar, memorized routes that it has learned are safe, but at the same time it must integrate signals from different senses as it encounters various objects in the environment. The rat can recognize dangerous objects such as a mousetrap and will often avoid them even though it may never have seen the object at that particular angle before. After eating, the rat can quickly disengage its current plan and switch to its next priority. All these simultaneous challenges, with all their varied complexities, are impractical for a machine, because you can't fit a computer that size into a vehicle smaller than a semi. And yet they are negotiated by a brain whose networks of millions of neurons and billions of synapses are distributed across many brain areas: a brain that weighs no more than 2 grams and can operate on the power budget of a Christmas-tree bulb. Why is the rat brain so superior? In a word, architecture. The brain of an adult rat is composed of 21 million nerve cells called neurons (the human brain has about 100 billion).

Figure 4: The Great Brain Race

Neurons talk to each other by way of dendrites and axons. You can think of these tendrils as the in-boxes (dendrites) and out-boxes (axons) of the individual neuron, transmitting electrical impulses from one neuron to another. Most of the processing performed in the nervous system happens in the junctions between neurons: the synapses. Replicating the functionality of the brain has been a challenging task for artificial intelligence over the years, and the branch that focuses on it is known as computational neuroscience.

2.2 Computational Neuroscience

Computational neuroscience has focused largely on building software that can simulate or replicate a mammal's brain on the classic von Neumann computer architecture. This architecture separates the place where data is processed from the place where it is stored, and it has been the staple of computer architectures since the 1960s.

2.2.1 Von Neumann Architecture

The von Neumann architecture, conceived by mathematician John von Neumann, forms the core of nearly every computer system in use today, regardless of size. A von Neumann machine has a random-access memory (RAM), which means that each successive operation can read or write any memory location, independent of the location accessed by the previous operation.

A von Neumann machine also has a central processing unit (CPU) with one or more registers that hold the data being operated on. The CPU has a set of built-in operations (its instruction set) that is far richer than that of the Turing Machine. The CPU can interpret the contents of memory either as instructions or as data according to the fetch-execute cycle. Von Neumann machines form a sequential system.

Figure 5: von Neumann Architecture
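A toy rendering of the fetch-execute cycle just described: one flat memory holds instructions and data alike, and every fetch crosses the same path to the processor. The three-instruction machine below is invented for the example.

```python
# One flat memory: cells 0-2 are interpreted as instructions, 4-5 as data.
memory = [
    ("LOAD", 4),   # acc <- mem[4]
    ("ADD", 5),    # acc <- acc + mem[5]
    ("HALT", 0),
    None,          # unused
    7,             # data
    35,            # data
]

pc, acc = 0, 0
while True:
    op, arg = memory[pc]        # fetch: the instruction crosses the bus...
    pc += 1
    if op == "LOAD":
        acc = memory[arg]       # ...and so does every operand it touches
    elif op == "ADD":
        acc += memory[arg]
    elif op == "HALT":
        break

print(acc)  # 42
```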

On a standard computer, the memory and processor are separated by a data channel,

or bus, between the area where the data is stored and where it is worked on. The

processor reserves a small number of slots, called registers, for storing data during

computation. After doing all the necessary computation, the processor writes the result

back to memory, again using the data bus.
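
As a minimal sketch of this round trip (an illustration in Python of the idea, not of any real instruction set), the loop below fetches each instruction from the shared memory, pulls every operand over the same bus into a register, and pushes the result back over that bus:

# Hypothetical three-instruction machine; program and data share one memory.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
          10: 6, 11: 7, 12: 0}
acc, pc = 0, 0                      # accumulator register and program counter

while True:
    op, addr = memory[pc]           # fetch: the instruction crosses the bus
    pc += 1
    if op == "LOAD":
        acc = memory[addr]          # the operand crosses the same bus
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc          # the result travels back over the bus
    elif op == "HALT":
        break

print(memory[12])                   # 13; every step is serialized through the bus

Every access, instruction or data, contends for the same channel, which is exactly the bottleneck the cache described next tries to relieve.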

To minimize the amount of traffic flowing on the fixed-capacity bus, most modern

processors augment the registers with a cache memory that provides temporary storage

very close to the point of computation. If an often-repeated computation demands

multiple pieces of data, the processor will keep them in that cache, which the

computational unit can then access much more quickly and more efficiently than it can the

main memory.

Hardware implements cache as a block of memory for temporary storage of data

likely to be used again. CPUs and hard drives frequently use a cache, as do web

browsers and web servers. To be cost efficient and to enable an efficient use of data,

caches are relatively small. Nevertheless, caches have proven themselves in many areas

of computing because access patterns in typical computer applications have locality of

reference. References exhibit temporal locality when data that was recently requested is

requested again.
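
A cache exploits exactly this pattern. The sketch below (our illustration; an LRU eviction policy is one common choice, not the only one) keeps recently used addresses in a small store and evicts the least recently used entry when capacity is exceeded:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()            # address -> value, oldest first

    def get(self, addr, load_from_memory):
        if addr in self.store:
            self.store.move_to_end(addr)      # hit: refresh its recency
            return self.store[addr]
        value = load_from_memory(addr)        # miss: slow trip to main memory
        self.store[addr] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict the least recently used
        return value

main_memory = {0: "a", 1: "b", 2: "c"}
cache = LRUCache(capacity=2)
for addr in [0, 1, 0, 0, 2, 0]:               # the repeated 0s hit the cache
    cache.get(addr, main_memory.__getitem__)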


2.2.2 Neuromorphic Architecture

Neuromorphic engineering or neuromorphic computing is a concept developed by Carver

Mead, in the late 1980s, describing the use of very-large-scale integration (VLSI) systems

containing electronic analog circuits to mimic neuro-biological architectures present in

the nervous system. In recent times the term neuromorphic has been used to describe

analog, digital, and mixed-mode analog/digital VLSI and software systems that

implement models of neural systems.

The vast majority of the computing and power budget of such a brain-simulating

system (computer scientists call it a neuromorphic architecture) goes to mimicking the sort

of signal processing that happens inside the brain's synapses. Indeed, modeling just one

individual synapse requires the following to happen in the machinery: The synapse's

state (how likely it is to pass on a signal, such as input from a neuron, which is the major

factor in how strong the association is between any two neurons) resides in a location in main

memory.

To change that state, the processor must package an electronic signal for transfer over

the main bus. That signal must travel between 2 and 10 centimeters to reach the physical

memory and then must be unpackaged to actually access the desired memory location.

Now multiply that sequence by up to 8000 synapses, as many as a single rat neuron might

have. Then multiply that by the number of neurons in the brain you're emulating: billions.

Congratulations! You've just modeled an entire millisecond of brain activity. A biological

brain is able to quickly execute this massive, simultaneous exchange of information, and

to do it in a small package, because it has evolved a number of stupendous shortcuts.
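
The arithmetic behind that punchline is worth making explicit; the short calculation below just multiplies the figures quoted in this section:

neurons = 21_000_000                     # adult rat brain, as quoted above
synapses_per_neuron = 8_000              # upper bound quoted above
round_trips = neurons * synapses_per_neuron
print(f"{round_trips:.2e}")              # ~1.68e11 bus round trips per simulated ms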

Here's what happens in a brain: Neuron 1 spits out an impulse, and the resultant

information is sent down the axon to the synapse of its target, Neuron 2. The synapse

of Neuron 2, having stored its own state locally, evaluates the importance of the

information coming from Neuron 1 by integrating it with its own previous state and the

strength of its connection to Neuron 1. Then, these two pieces of information (the

information from Neuron 1 and the state of Neuron 2's synapse) flow toward the body of

Neuron 2 over the dendrites. And here is the important part:

By the time that information reaches the body of Neuron 2, there is only a single

value; all processing has already taken place during the information transfer. There is

never any need for the brain to take information out of one neuron, spend time processing

it, and then return it to a different set of neurons. Instead, in the mammalian brain,

storage and processing happen at the same time and in the same place.
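
A toy illustration of this colocation (our sketch, not a published MoNETA model): each synapse scales the incoming impulse by its locally stored weight, the dendrite sums the results in transit, and the neuron body receives only a single value.

import numpy as np

weights = np.array([0.8, 0.1, 0.5])   # synaptic states, stored at the synapses
spikes = np.array([1.0, 0.0, 1.0])    # impulses arriving from upstream neurons

dendritic_input = np.dot(weights, spikes)  # processing happens during transfer
fires = dendritic_input > 1.0              # the body sees one number, not three
print(dendritic_input, fires)              # 1.3 True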

That difference is the main reason the human brain can run on the power

budget of a 20-watt lightbulb. But reproducing the brain's functionality on even the


most advanced supercomputers would require a dedicated power plant. To be sure,

locality isn't the only difference. The brain has some brilliantly efficient components

that we just can't reproduce yet. Most crucially, brains can operate at around 100

millivolts. Complementary metal-oxide-semiconductor (CMOS) logic circuits, however, require

a much higher voltage to function properly (close to 1 volt), and the higher operating

voltage means that more power is expended in transmitting signals over wires. Even so,

replicating the structure we've described above is not totally impossible with today's

silicon technology.

A true artificial intelligence could hypothetically run on conventional hardware, but

it would be fantastically inefficient. Inefficient hardware won't stop us from running

neuromorphic algorithms (such as machine vision), but we would need a massive

cluster of high-performance graphics processing units (GPUs) to handle the parallel

computations, and it would come with the power requirements of a midwestern

college town.

2.2.3 Wetware

Figure 6: Hardware vs. Wetware


The term wetware is used to describe the embodiment of the concepts of the physical

construct known as the central nervous system (CNS) and the mental construct known

as the human mind. It is a two-part abstraction drawn from the computer-related

ideas of hardware and software. The first abstraction solely concerns the bioelectric and

biochemical properties of the CNS, specifically the brain. If the impulses traveling along

the various neurons are analogized as software, then the physical neurons would be the

hardware. The amalgamated interaction of this software and hardware is manifested

through continuously changing physical connections, and chemical and electrical influences

spreading across a wide spectrum of supposedly unrelated areas. This interaction

requires a new term that exceeds the definition of the individual terms. The second

abstraction sits at a higher conceptual level. If the human mind is analogized

as software, then the first abstraction described above is the hardware. The process by

which the mind and brain interact to produce the collection of experiences that we

define as self-awareness is still seriously in question. Importantly, the intricate interaction

between the physical and mental realms is observable in many instances. The combination

of these concepts is expressed in the term wetware.

Brain: In the mammalian brain, storage and computation happen at the same

time and in the same place. Neuron 1 sends a signal down the axon to Neuron 2. The

synapse of Neuron 2 evaluates the importance of the information coming from Neuron

1 by contrasting it with its own previous state and the strength of its connection to

Neuron 1. Then, these two pieces of information (the information from Neuron 1 and

the state of Neuron 2's synapse) flow toward the body of Neuron 2 over the dendrites.

By the time that information reaches the body of Neuron 2, there is only a single value;

all computation has already taken place during the information transfer.

Computer: On a computer, the memory and processor are physically separate; a

significant physical distance separates the areas where the data is stored from the areas

where it is manipulated. Modeling just a single synapse requires the following to happen

in the machinery: The synapse's state is in a location in main memory. To change

that state, a signal must originate somewhere on the processor, travel to the edge of

the processor, be packaged for transfer over the main bus, travel between 2 and 10

centimeters to reach the physical memory, and then be unpackaged to actually access

the desired memory location. Multiplying that sequence by up to 8000 synapses (as many

as in a single rat neuron) and then again by the brain's billions of neurons yields a single

millisecond of brain activity.


2.3 Memristor

The memristor is a passive two-terminal circuit element in which there is a functional

relationship between electric charge and magnetic flux linkage. Memristor theory was formulated

and named by Leon Chua in a 1971 paper. In 2008, a team at HP Labs announced

the development of a switching memristor based on a thin film of titanium dioxide. It

has a regime of operation with an approximately linear charge-resistance relationship

as long as the time-integral of the current stays within certain bounds. These devices

are being developed for application in nanoelectronic memories, computer logic, and

neuromorphic computer architectures.

Basically, memristors are small enough, cheap enough, and efficient enough to fill

the bill. Perhaps most important, they have key characteristics that resemble those of

synapses. That's why they will be a crucial enabler of an artificial intelligence worthy

of the term.

Figure 7: Memristor

2.3.1 Design and Working

A memristor is a passive two-terminal electronic component in which there is a

functional relationship between charge and magnetic flux linkage. When current flows in

one direction through the device, the resistance increases; and when current flows in the

opposite direction, the resistance decreases, although it must remain positive. When

the current is stopped, the component retains the last resistance that it had, and when

the flow of charge starts again, the resistance of the circuit will be what it was when it

was last active.
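
This history-dependent resistance can be captured in a few lines. The sketch below implements the linear-drift model commonly used to describe the 2008 HP titanium dioxide device; the parameter values are illustrative, not measured ones. Charge flowing one way widens the doped, low-resistance region of the film; reversing it shrinks that region; with no current the state simply freezes.

import numpy as np

Ron, Roff = 100.0, 16_000.0   # ohms: fully doped and fully undoped limits
D, mu_v = 10e-9, 1e-14        # film thickness (m), dopant mobility (m^2/(V*s))
dt = 1e-4                     # time step (s)
w = 0.1 * D                   # doped-region width: the device's memory variable

def resistance(w):
    return Ron * (w / D) + Roff * (1.0 - w / D)

# Drive with forward current, then none, then reverse current (amperes).
for i in [1e-3] * 200 + [0.0] * 100 + [-1e-3] * 100:
    w = np.clip(w + (mu_v * Ron / D) * i * dt, 0.0, D)  # state drifts with charge
print(f"resistance now: {resistance(w):.0f} ohms")       # unchanged while i was 0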


Figure 8: Memristor Symbol

• Electrical resistor with memory.

• Simpler than existing transistor technology.

In a memristor, resistance changes depending on the amount, direction, and duration

of the voltage that's applied to it.

The interesting thing about a memristor is that whatever its past state, or

resistance, it freezes that state until another voltage is applied to change it. Maintaining

that state requires no power. That's different from a dynamic RAM cell, which requires

a regular refresh charge to maintain its state. The upshot is that thousands of memristors could

substitute for massive banks of power-hogging memory. Just to be clear, the memristor

is not magic: its memristive state does decay over time. That decay can take hours

or centuries depending on the material, and stability must often be traded against energy

requirements, which is one of the major reasons memristors aren't flooding the

market yet.

Physically, a memristor is just an oxide junction between two perpendicular metal

wires. The generic memristor can be thought of as a nanosize sandwich: the bread is

the intersection of the two crossing wires. Between the bread slices is an oxide; charge-

carrying bubbles of oxygen move through that oxide and can be pushed up and down

through the material to determine the state (the last resistance) across the memristor. This

resistance state is what freezes when the power is cut. Recent DARPA-sponsored work

at HP has yielded more complex memristors, so this description is necessarily a bit

generic. The important thing to recall is that the memristor's state can be considered


analogous to the state of the synapse that we mentioned earlier: the state of the synapse

encodes how closely any two neurons are linked, which is a key part of the mammalian

ability to learn new information.
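
To make the analogy concrete, here is a toy sketch (our illustration, not HP's actual programming scheme) that treats a memristor's conductance as a synaptic weight and nudges it with a simple Hebbian rule, so coincident activity strengthens the link and the device retains the weight with no refresh power:

g_min, g_max, lr = 0.001, 1.0, 0.05   # conductance bounds and learning rate
weight = 0.1                          # the synaptic state, held in the device

def hebbian_update(pre, post, w):
    """Strengthen the link only when pre- and post-synaptic spikes coincide."""
    return min(g_max, max(g_min, w + lr * pre * post))

for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]:
    weight = hebbian_update(pre, post, weight)
print(weight)   # ~0.25: grown only on coincident activity, kept without power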

2.3.2 Pros and Cons

• Small, cheap, and efficient enough to fill the bill.

• Hailed as the fourth fundamental circuit element, after the resistor, capacitor, and

inductor.

• Works as an essentially non-volatile memory.

• Addresses the hardware challenges of neuromorphic computing.

• Key characteristics resemble those of biological synapses.

Though memristors are dense, cheap, and tiny, they also have a high failure rate at

present, characteristics that bear an intriguing resemblance to the brain's synapses. It

means that the architecture must by definition tolerate defects in individual circuits,

much the way brains gracefully degrade in performance as synapses are lost, without

sudden system failure. Basically, memristors bring data close to computation, the way

biological systems do, and they use very little power to store that information, just as

the brain does. For a comparable function, the new hardware is expected to use two to three

orders of magnitude less power than Nvidia's Fermi-class GPUs.

For the first time we will begin to bridge the main divide between biological

computation and traditional computation. The use of the memristor addresses the basic

hardware challenge of neuromorphic computing: the need to simultaneously move and

manipulate data, thereby drastically cutting power consumption and space requirements. You might

think that achieving processing that's more like thinking than computation would

require more than just new hardware; it would also require new software. You'd be wrong,

but in a way that might surprise you. Basically, without this paradigm shift in hardware

architecture, you couldn't even think about building MoNETA.

2.4 Modular Neural Exploring Travelling Agent

To build a brain, it is necessary to discard the conceit of separate hardware and software,

because the brain doesn't work that way. In the brain it's all just wetware. To replicate

a mammalian brain, software and hardware would need to be inextricable. Until recently

there was no idea of how to build such a system, but the memristor has allowed


researchers to take a big step closer by approximating the biological form factor: hardware that

can be both small and ultralow power. Collaborating with HP, the Boston University

team's biological algorithms will create this entity: MoNETA. Think of MoNETA as

the application software that does the recognizing, reasoning, and learning. HP chose

the team at Boston University to build it because of its experience at the Center of

Excellence for Learning in Education, Science, and Technology (CELEST), funded by

the National Science Foundation.

CELEST, the Center of Excellence for Learning in Education, Science, and Technology,

synthesizes experimental, modeling, and technological approaches to research in order to

understand how the brain learns as a whole system.

CELEST's core scientific objective is to understand how the whole brain learns,

i.e., how it adapts as an integrated system to enable intelligent autonomous behavior.

CELEST's core technology objective is to develop novel brain-inspired technologies by

implementing key insights gained from experiments and modeling: from bench to models

to applications. To achieve these objectives, CELEST strategically allocates funding

to projects involving collaborations between experimentalists, modelers, and engineers

working together to solve key problems in the neuroscience of learning. Projects include

researchers from at least two of these three disciplines, resulting in research that

1. uses neural models to theoretically guide experimental studies,

2. uses experimental results to improve these models, and

3. translates insights from these experiments and models into technological applica-

tions.

At CELEST, computational modelers, neuroscientists, psychologists, and engineers

collaborate with researchers from Harvard, MIT, Brandeis, and BU's own department

of cognitive and neural systems. CELEST was established to study basic principles of

how the brain plans, organizes, communicates, and remembers.

2.4.1 Cog Ex Machina

To allow the brain models and the neuromorphic hardware to interact, HP built a kind

of special-purpose operating system called Cog Ex Machina. Cog, built by HP principal

investigator Greg Snider, lets system designers interact with the underlying hardware

to do neuromorphic computation. Neuromorphic computation means computation that

can be divided up between hardware that processes like the body of a neuron and

hardware that processes the way dendrites and axons do.


• A special-purpose operating system built by HP.

• Allows the brain models and the neuromorphic hardware to interact.

• Enables neuromorphic computation.

2.4.2 Role of HP

Hewlett-Packard Company, commonly referred to as HP, is an American multinational

information technology corporation headquartered in Palo Alto, California, USA, that

provides products, technologies, software, solutions, and services to individual

consumers, small- and medium-sized businesses, and large enterprises, including customers

in the government, health, and education sectors. It is one of the world's largest

information technology companies, operating in nearly every country. HP specializes in

developing and manufacturing computing, data storage, and networking hardware,

designing software, and delivering services. Major product lines include personal computing

devices, enterprise and industry-standard servers, related storage devices, networking

products, software, and a diverse range of printers and other imaging products.

HP's recent major breakthrough in the development of a next-generation memory

technology called the memristor, which some see as a potential replacement for today's

widely used flash and DRAM technologies, is being utilized in the implementation of

MoNETA. HP is also responsible for the hardware component of the neuromorphic

processor required for building MoNETA. HP is one of the prime

contractors of the SyNAPSE project, to which DARPA has awarded funds. Members

of the Neuromorphics Lab within the Department of Cognitive and Neural Systems at

Boston University have worked in the past year with the Information and Quantum Systems

Lab at HP.

2.4.3 Architecture and Working

The architecture of the brain-inspired microprocessor under development at HP Labs

can be thought of as a kind of memristor-based multicore chip. Basically, memristors

bring data close to computation, the way biological systems do, and they use very

little power to store that information, just as the brain does. To allow the brain models

and the neuromorphic hardware to interact, HP built a special-purpose operating system

called Cog Ex Machina. Neuromorphic computation means computation that can be

divided up between hardware that processes like the body of a neuron and hardware that

processes the way dendrites and axons do.


Figure 9: Normal Brain Simulation

The two kinds of cores deal with processing in fundamentally different ways. A

neuron-type core has a CPU-like architecture that makes it flexible, letting it handle any operation

you throw at it. In that way, its characteristics resemble those of the neuron. But the

trade-off is that the core sucks up a lot of power, so like neurons, these elements should

make up only a small percentage of the system.

A dendritic core works more like a GPU, an inexpensive, high-performance

microprocessor. Like a dendrite, a GPU has a rigid architecture that is optimized for

only a specific kind of computation, in this case the complicated linear algebra

operations that approximate what happens inside a dendrite. Because GPUs are optimized

for parallel computation, we can use them to approximate the distributed computation

that dendrites carry out. But there's a cost to using these, too: GPU cores perform only

a limited set of operations.
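
The division of labor can be pictured in a few lines (a sketch of the idea only; the core labels are ours, not the chip's): the rigid "dendritic" core performs one massively parallel matrix-vector product over locally stored synaptic weights, while the flexible "neuron" core applies whatever per-cell rule we choose to the single value each row produces.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0, 1, size=(1_000, 8_000))   # synaptic states, kept in-core
spikes = (rng.random(8_000) < 0.1).astype(float)   # incoming activity

dendritic = weights @ spikes                # GPU-like core: parallel, inflexible
output = (dendritic > dendritic.mean()).astype(float)  # CPU-like core: any rule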

Figure 10: Design for MoNETA

The dendrite cores in the final DARPA hardware will be much less flexible than

neuron cores, but they will store extraordinary amounts of state information in their

massive memristor-based memory banks, and like the tendrils of neurons, they will make

up the vast bulk of the system's computational elements.

Memristors, finally, will act as the synapses that mediate the information transfer

between the dendrites and axons of different neurons. For a programmer, taking

full advantage of a machine like this, with its two different core types and complicated

memory-storage overlay, is tremendously challenging, because the problems need to be

properly partitioned across those two radically different types of processors. Thanks

to Cog, computational neuroscientists can forget about the hardware and focus on

developing the soul inside the machine.

Truly general-purpose intelligence can emerge only when everything happens all at

once: in intelligent creatures like our humble rat, all perception (including auditory and

visual inputs and the brain areas responsible for the generation of fine finger movements),

emotion, actions, and reactions combine and interact to guide behavior. Perceiving

without action, emotion, higher reasoning, and learning would not only fail to lead to

a general-purpose AI, it wouldn't even pass a commonsense Turing test.

Creating this grail-like unified architecture has been precluded by several practical

limitations. The most important is the lack of a unified theory of the brain. But the

creation of large centers such as CELEST has advanced our understanding of what key


aspects of biological intelligence might be applicable to our task of building a general-

purpose AI.

The animat will be considered successfully built when we are able to motivate MoNETA

to run, swim, and find food dynamically, without being programmed explicitly to do

so. The animat will learn about objects in its environment, navigate to reach its goals,

and avoid dangers without the need for us to program specific objects or behaviours.

We will test our animat in a classic trial called the Morris water navigation task. In

this experiment, neuroscientists teach a rat to swim through a water maze, using visual

cues, to a submerged platform that the rat can't see. That task might seem simple,

but it's anything but. To get to the platform, the rat must use many stupendously

sophisticated brain areas that synchronize vision, touch, spatial navigation, emotions,

intentions, planning, and motor commands.

2.4.4 Morris Water Navigation Task

The Morris water navigation task is a behavioural procedure widely used in behavioural

neuroscience to study spatial learning and memory.

In the typical paradigm, a rat or mouse is placed into a small pool of water (back

end first, to avoid stress, and facing the pool side, to avoid bias) which contains an escape

platform hidden a few millimetres below the water surface. Visual cues, such as coloured

shapes, are placed around the pool in plain sight of the animal. The pool is usually

1.2 to 1.8 metres in diameter and 60 centimetres deep. The pool can also be half-filled

with water to 30 centimetres in depth. A side wall above the waterline prevents the

rat from being distracted by laboratory activity. When released, the rat swims around

the pool in search of an exit while various parameters are recorded, including the time

spent in each quadrant of the pool, the time taken to reach the platform (latency), and

total distance travelled. The rat’s escape from the water reinforces its desire to quickly

find the platform, and on subsequent trials (with the platform in the same position)

the rat is able to locate the platform more rapidly. This improvement in performance

occurs because the rat has learned where the hidden platform is located relative to the

conspicuous visual cues. After enough practice, a capable rat can swim directly from

any release point to the platform.
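
The recorded parameters are straightforward to compute from a sampled trajectory. Below is a small sketch (the sample path and platform position are invented for illustration) that derives latency, total distance travelled, and time spent per quadrant:

import numpy as np

dt = 0.1                                          # sampling interval (s)
path = np.array([[0.9, 0.0], [0.5, 0.3], [0.1, 0.5],
                 [-0.3, 0.4], [-0.5, 0.1], [-0.55, -0.05]])  # x, y in metres
platform, radius = np.array([-0.5, 0.0]), 0.1     # hidden platform

total_distance = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()

on_platform = np.linalg.norm(path - platform, axis=1) < radius
latency = float(np.argmax(on_platform)) * dt if on_platform.any() else None

quadrant = (path[:, 0] >= 0).astype(int) * 2 + (path[:, 1] >= 0).astype(int)
time_per_quadrant = np.bincount(quadrant, minlength=4) * dt

print(f"latency {latency:.1f} s, distance {total_distance:.2f} m")
print("time per quadrant (s):", time_per_quadrant)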

Neuroscientists have studied the water maze task at great length, so we know a great

deal about how a rat's anatomy and physiology react to the task. If we can train the

animat to negotiate this maze, we'll be confident that we have taken an important first

step toward simulating a mammalian intelligence.

Figure 11: Schematic drawing of the Morris water navigation task.

Figure 12: A rat undergoing a Morris water navigation test.

By the middle of next year, researchers will be working with thousands of candidate

animats at once, all with slight variations in their brain architectures. Playing intelligent

designers, they will choose the best ones from the bunch and keep tweaking them until

they unquestionably master tasks like the water maze and other, progressively harder

experiments. The researchers and developers watch each of these simulated animats

interacting with its environment and evolving like a natural organism. They expect to

eventually find the simulation of brain areas and connections that achieves autonomous

intelligent behavior. They will then incorporate those elements into a memristor-based

neural processing chip.

2.4.5 Applications

Once that chip is manufactured, the researchers will build it into robotic platforms that

venture into the real world. Robot companions for the elderly, robots to be sent to


Mars to forage autonomously, and unmanned aerial vehicles will be just the beginning.

Consider a MoNETA-enabled military scout vehicle for a moment. It will be able to go

into a mission with partially known objectives that change suddenly. It will be able to

negotiate unfamiliar terrain, recognize a pattern that indicates hostile activity, make

a new plan, and hightail it out of the hostile area. If the road is blocked, it will be

able to make a spur-of-the-moment decision and go off-road to get home. Will these

chips experience vision and emotions by simulating and appropriately connecting the

brain areas known to be involved in the subjective experiences associated with them? It

will take time to answer that. However, the goal is not to replicate subjective

experience (consciousness) in a chip but rather to build functional machines that can behave

intelligently in complex environments. In other words, the idea is to make machines

that behave as if they are intelligent, emotionally biased, and motivated, without the

constraint that they are actually aware of these feelings, thoughts, and motivations.

MoNETA's large-scale simulations will leverage the Iterative Evolution of Models

software framework (ItEM) as well as high-performance computing resources, such as

the new GPU cluster hosted at HP Labs under the direction of Greg Snider and Dick

Carter. The cluster, called Simcity, features a total of 144 GPUs, 576 GB of conventional

memory, 432 GB of GPU memory, and an InfiniBand interconnect. A prototype cluster

containing three nodes and six GPUs, called Simtown, is also available for testing and

debugging. By the end of 2011, the Neuromorphics Lab will finish building out a twin

system with roughly half the computing power of Simcity.

2.4.6 Challenges

The true challenge for an AI is that it is not possible to pre-program a lifetime of

knowledge into a virtual or robotic animat. It should learn throughout its lifetime

without needing constant reprogramming or needing to be told a priori what is good

for it and what is bad. Such wisdom has to be learned from the interaction between

a brain, with its large (but not infinite) number of synapses that store memories, and

an environment that is constantly changing and dense with information. Intuition,

pattern recognition, improvisation, and the ability to negotiate ambiguity: all of these

things are done really well by mammalian brains, and absolutely abysmally by today's

microprocessors and software.

The animat must be able to learn about objects in its environment, navigate to

reach its goals, and avoid dangers without the need for us to program specific objects

or behaviors. Such an ability comes standard issue in mammals, because our brains

are plastic throughout our lives. We learn to recognize new people and places, and we

acquire new skills without being told to do so. MoNETA will need to do the same.


Also, as mentioned, intelligence in living forms is the result of the coordinated action of

many brain areas. If those activities and concepts are not implemented properly, the

result would be a non-functioning intelligence.

• First, there is no unified theory of the brain.

• The animat should learn without reprogramming, which is the true challenge of AI.

• MoNETA needs to acquire new skills without being told.

• True general-purpose intelligence can emerge only when everything happens all at once.

In creatures like the rat, all perception (including auditory and visual inputs and the brain

areas responsible for the generation of fine finger movements), emotion, actions, and

reactions combine and interact to guide behavior. Perceiving without action, emotion,

higher reasoning, and learning would not only fail to lead to a general-purpose AI, it

wouldn't even pass a Turing test.

The MOdular Neural Exploring Traveling Agent (MoNETA) project aims at developing

an animat that can intelligently interact with and learn to navigate a virtual world, making

decisions aimed at increasing rewards while avoiding danger. The goal is to build functional

machines that can behave intelligently in complex environments: machines that behave as

if they are intelligent, emotionally biased, and motivated.
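
As an illustration of reward-driven navigation only (the text does not say MoNETA uses this algorithm, and its neural models are far richer), tabular Q-learning on a toy grid world with a food cell and a danger cell captures the same reward-versus-danger trade-off:

import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 16, 4            # 4x4 grid; up, down, left, right
goal, danger = 15, 5                   # food cell and a hazardous cell
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration

def step(s, a):
    r, c = divmod(s, 4)
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][a]
    s2 = min(max(r + dr, 0), 3) * 4 + min(max(c + dc, 0), 3)
    reward = 1.0 if s2 == goal else -1.0 if s2 == danger else -0.01
    return s2, reward, s2 in (goal, danger)

for _ in range(500):                   # episodes of trial and error
    s, done = 0, False
    while not done:
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2, reward, done = step(s, a)
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2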


3 Conclusion and Future Scope

MoNETA will be a general-purpose, mammalian-type intelligence, an artificial, generic

creature known as an animat. As mentioned, the key feature distinguishing MoNETA

from other AIs is that it won't have to be explicitly programmed. MoNETA is being

engineered to be as adaptable and efficient as a mammal's brain. We intend to set it

loose on a variety of situations, and it will learn dynamically.

Biological intelligence is the result of the coordinated action of many highly

interconnected and plastic brain areas. Most prior research has focused on modeling those

individual parts of the brain. The results, while impressive in some cases, have been

a piecemeal assortment of experiments, theories, and models that each nicely describes

the architecture and function of a single brain area and its contribution to perception,

emotion, and action. But if one tried to stitch those findings together, the result would

more likely be a disastrous, non-functioning intelligence. Neuromorphic chips won't just

power niche AI applications. The architectural concept described here will reform all

future CPUs. The fact is, conventional computers will just not get significantly more

powerful unless they move to a more parallel and locality-driven architecture. While

neuromorphic chips will first supplement today's CPUs, soon their sheer power will

overwhelm that of today's computer architectures.

The semiconductor industry's relentless push toward smaller and smaller

transistors will soon mean transistors have higher failure rates. This year, the state of the

art is 22-nanometer feature sizes. By 2018, that number will have shrunk to 12 nm, at

which point atomic processes will interfere with transistor function; in other words, they

will become increasingly unreliable. Companies like Intel, Hynix, and of course HP are

putting a lot of resources into finding ways to rely on these unreliable future devices.

Neuromorphic computation will allow that to happen on both memristors and transistors.

It won't be long until all multicore chips integrate a dense, low-power memory with

their CMOS cores.

It is expected that neuromorphic chips will eventually come in as

many flavors as there are brain designs in nature: fruit fly, earthworm, rat, and human.

All our chips will have brains. Neuromorphic chips will revolutionize future CPUs, can

lead to true artificial intelligence, and will bring about astounding changes in the electronics

industry; brain-like systems will also become widely available.

