
FACULTY OF ENGINEERING

DEPARTMENT OF CIVIL AND STRUCTURAL

ENGINEERING

INTELLIGENT URBAN TRAFFIC CONTROL SYSTEM

KKKA 6424

Ir. Dr. Riza Atiq Abdullah O.K. Rahmat

Task 6: Artificial Intelligence

Prepared by:

1-HAIDER FARHAN P65405

2- MUSTAFA TALIB P60915

3- SAHAR ABD ALI P65295


Artificial Intelligence

Research on artificial intelligence has been developing since 1956, when the term "Artificial Intelligence" (AI) was used at the meeting held at Dartmouth College. Artificial intelligence, a comprehensive discipline, was developed based on the interaction of several other disciplines, such as computer science, cybernetics, information theory, psychology, linguistics, and neurophysiology. Artificial intelligence is a branch of computer science concerned with the research, design, and application of intelligent computers.

The goal of this field is to explore how to imitate and execute some of the intelligent functions of the human brain, so that people can develop technology products and establish the relevant theories. The first step was artificial intelligence's rise and fall in the 1950s. The second step: as expert systems emerged, a new upsurge of artificial intelligence research appeared from the end of the 1960s through the 1970s. The third step: in the 1980s, artificial intelligence made great progress with the development of the fifth-generation computer.

The fourth step: in the 1990s, there was a new upsurge of artificial intelligence research. With the development of network technology, especially international Internet technology, research on single intelligent agents began to turn toward the study of distributed artificial intelligence based on the network environment. People study not only distributed problem solving toward a shared goal, but also problem solving by multiple intelligent agents, which has made artificial intelligence more practical.

Additionally, a thriving scene of artificial neural network research and application emerged, reaching deep into all areas of life, after the Hopfield multilayer neural network model was put forward. The main theories and methods of artificial intelligence can be summarized as the symbolist, behaviorist, and connectionist approaches. Since the appearance of artificial intelligence (AI) in the 1950s, many hopes and dreams about it have been generated. We will now describe the latest progress of artificial intelligence technology in the various aspects of civil engineering, and their relationships, as follows.

Artificial intelligence is a branch of computer science concerned with the research, design, and application of intelligent computers. Traditional methods for modeling and optimizing complex structural systems require huge amounts of computing resources, and artificial-intelligence-based solutions can often provide valuable alternatives for efficiently solving problems in civil engineering.

This paper summarizes recently developed methods and theories in the developing directions for applications of artificial intelligence in civil engineering, including evolutionary computation, neural networks, fuzzy systems, expert systems, reasoning, classification, and learning, as well as others such as chaos theory, cuckoo search, the firefly algorithm, knowledge-based engineering, and simulated annealing. The main research trends are also pointed out at the end. The paper provides an overview of the advances of artificial intelligence as applied in civil engineering.

1-Neural network

An artificial neural network, often simply called a neural network, is a mathematical model inspired by biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation. In most cases a neural network is an adaptive system that changes its structure during a learning phase. Neural networks are used to model complex relationships between inputs and outputs or to find patterns in data.

Network function:


The word "network" in the term "artificial neural network" refers to the interconnections between the neurons in the different layers of each system. An example system has three layers. The first layer has input neurons, which send data via synapses to the second layer of neurons, and then via more synapses to the third layer of output neurons. More complex systems have more layers of neurons, some with more input neurons and output neurons. The synapses store parameters called "weights" that manipulate the data in the calculations.

An ANN is typically defined by three types of parameters:

1. The interconnection pattern between different layers of neurons

2. The learning process for updating the weights of the

interconnections

3. The activation function that converts a neuron's weighted input to

its output activation.

Mathematically, a neuron's network function \( f(x) \) is defined as a composition of other functions \( g_i(x) \), which can further be defined as compositions of other functions. This can be conveniently represented as a network structure, with arrows depicting the dependencies between variables. A widely used type of composition is the nonlinear weighted sum \( f(x) = K\left(\sum_i w_i g_i(x)\right) \), where \( K \) (commonly referred to as the activation function [1]) is some predefined function, such as the hyperbolic tangent. It will be convenient in what follows to refer to a collection of functions \( g_i \) as simply a vector \( g = (g_1, g_2, \ldots, g_n) \).
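To make this concrete, the following is a minimal Python sketch (ours, not from the text; the weights and inputs are illustrative) of a single neuron computing \( f(x) = K\left(\sum_i w_i g_i(x)\right) \) with the hyperbolic tangent as the activation function \( K \):

import math

def neuron(weights, inputs, activation=math.tanh):
    # Weighted sum of the incoming values, passed through the activation K.
    weighted_sum = sum(w * g for w, g in zip(weights, inputs))
    return activation(weighted_sum)

# Example: three inputs g = (g1, g2, g3) with illustrative synapse weights.
print(neuron([0.5, -1.2, 0.8], [1.0, 0.3, -0.5]))  # a value in (-1, 1)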

ANN dependency graph

This figure depicts such a decomposition of \( f \), with the dependencies between variables indicated by arrows. These can be interpreted in two ways.

The first view is the functional view: the input \( x \) is transformed into a 3-dimensional vector \( h \), which is then transformed into a 2-dimensional vector \( g \), which is finally transformed into \( f \). This view is most commonly encountered in the context of optimization. The second view is the probabilistic view: the random variable \( F = f(G) \) depends upon the random variable \( G = g(H) \), which depends upon \( H = h(X) \), which depends upon the random variable \( X \). This view is most commonly encountered in the context of graphical models.

The two views are largely equivalent. In either case, for this particular network architecture, the components of individual layers are independent of each other (e.g., the components of \( g \) are independent of each other given their input \( h \)). This naturally enables a degree of parallelism in the implementation.
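As an illustration of this layered structure, here is a short Python/NumPy sketch (a hypothetical network with assumed layer sizes, not the one in the figure) in which an input is mapped to a 3-dimensional vector h, then to a 2-dimensional vector g, then to the output f; each layer is a single matrix-vector product, so all components of a layer are computed independently of one another:

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # assumed 4-dimensional input, 3 hidden units
W2 = rng.normal(size=(2, 3))   # 3-dimensional h mapped to 2-dimensional g
W3 = rng.normal(size=(1, 2))   # 2-dimensional g mapped to the output f

def forward(x):
    h = np.tanh(W1 @ x)        # all 3 components of h computed independently
    g = np.tanh(W2 @ h)        # all 2 components of g computed independently
    return np.tanh(W3 @ g)     # final output f

print(forward(rng.normal(size=4)))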

Two separate depictions of the recurrent ANN dependency graph

Networks such as the one above are commonly called feedforward, because their graph is a directed acyclic graph. Networks with cycles are commonly called recurrent. Such networks are commonly depicted in the manner shown at the top of the figure, where \( f \) is shown as being dependent upon itself. However, an implied temporal dependence is not shown.

2-Genetic algorithm

Genetic algorithms are a family of computational models inspired by evolution. These algorithms encode a potential solution to a specific problem in a simple chromosome-like data structure and apply recombination operators to these structures so as to preserve critical information. Genetic algorithms are often viewed as function optimizers, although the range of problems to which genetic algorithms have been applied is quite broad.

An implementation of a genetic algorithm begins with a population of (typically random) chromosomes. One then evaluates these structures and allocates reproductive opportunities in such a way that those chromosomes which represent a better solution to the target problem are given more chances to reproduce than those chromosomes which are poorer solutions. The "goodness" of a solution is typically defined with respect to the current population.

Working principle of a genetic algorithm:

The working principle of a GA is illustrated in Fig. 1. The major steps involved are the generation of a population of solutions, finding the objective function and the fitness function, and the application of genetic operators. These aspects are described briefly below, and in detail in the following subsection.

/* Algorithm GA */
Formulate initial population
Randomly initialize population
Repeat
    Evaluate objective function
    Find fitness function
    Apply genetic operators:
        Reproduction
        Crossover
        Mutation
Until stopping criteria

Figure 1: The Working Principle of a Simple Genetic Algorithm
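A runnable Python sketch of Figure 1 follows. The design choices (binary chromosomes, fitness-proportionate reproduction, one-point crossover, bit-flip mutation, and a toy one-max objective) are our illustrative assumptions; the loop structure mirrors the pseudocode above.

import random

POP, BITS, GENS, PC, PM = 20, 16, 50, 0.8, 0.01

def objective(ind):
    return sum(ind)                        # toy objective: number of 1-bits

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):                      # Repeat ... Until stopping criteria
    fits = [objective(ind) + 1 for ind in pop]       # evaluate fitness
    nxt = []
    while len(nxt) < POP:
        a, b = random.choices(pop, weights=fits, k=2)  # Reproduction
        if random.random() < PC:                       # Crossover
            cut = random.randrange(1, BITS)
            a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
        nxt += [[1 - g if random.random() < PM else g for g in ind]
                for ind in (a, b)]                     # Mutation
    pop = nxt[:POP]

print(max(objective(ind) for ind in pop))  # best objective value found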

An important characteristic of a genetic algorithm is the coding of the variables that describe the problem.

The most common coding method is to transform the variables into a binary string or vector; GAs perform best when solution vectors are binary. If the problem has more than one variable, a multi-variable coding is constructed by concatenating as many single-variable codings as there are variables in the problem. A genetic algorithm processes a number of solutions simultaneously. Hence, in the first step a population having P individuals is generated by pseudo-random generators, whose individuals each represent a feasible solution.
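The concatenated binary coding described here can be sketched as follows (the bit lengths and variable ranges are assumptions for illustration): each variable occupies a fixed slice of the chromosome, and decoding maps that slice back into the variable's real-valued range.

def decode(bits, low, high):
    # Interpret a slice of the chromosome as an integer, then scale it
    # linearly into [low, high].
    value = int("".join(map(str, bits)), 2)
    return low + (high - low) * value / (2 ** len(bits) - 1)

chromosome = [1, 0, 1, 1, 0, 0, 1, 0,   # variable x1, 8 bits
              0, 1, 1, 1, 1, 0, 0, 1]   # variable x2, 8 bits
x1 = decode(chromosome[:8], 0.0, 5.0)    # assumed range [0, 5]
x2 = decode(chromosome[8:], -1.0, 1.0)   # assumed range [-1, 1]
print(x1, x2)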

This is a representation of solution vectors in a solution space and is called the initial solution. This ensures that the search is robust and unbiased, as it starts from a wide range of points in the solution space. In the next step, individual members of the population are evaluated to find their objective function values. In this step, the exterior penalty function method is utilized to transform a constrained optimization problem into an unconstrained one. This is exclusively problem specific. In the third step, the objective function is mapped into a fitness function that computes a fitness value for each member of the population. This is followed by the application of the GA operators.
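The exterior penalty step can be sketched as follows (the notation, penalty coefficient, and toy problem are our assumptions): a constrained problem "minimize f(x) subject to g(x) <= 0" is replaced by the unconstrained F(x) = f(x) + r * max(0, g(x))^2, and F is then mapped to a fitness value that is larger for better members.

def penalized(f, g, x, r=1000.0):
    # Exterior penalty: infeasible points (g(x) > 0) are charged a cost.
    return f(x) + r * max(0.0, g(x)) ** 2

def fitness(f, g, x):
    # One common mapping for minimization: smaller F gives larger fitness.
    return 1.0 / (1.0 + penalized(f, g, x))

f = lambda x: (x - 2.0) ** 2   # toy objective
g = lambda x: 1.0 - x          # toy constraint x >= 1, written as 1 - x <= 0
print(fitness(f, g, 0.5), fitness(f, g, 2.0))  # infeasible vs. optimal point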

3-Expert system

In artificial intelligence, an expert system is a computer system that

emulates the decision-making ability of a human expert. Expert systems

are designed to solve complex problems by reasoning about knowledge,

like an expert, and not by following the procedure of a developer as is the

case in conventional programming. The first expert systems were created

in the 1970s and then proliferated in the 1980s. Expert systems were

among the first truly successful forms of AI software.

An expert system has a unique structure, different from traditional computer programs. It is divided into two parts: one fixed and independent of the particular expert system, the inference engine; and one variable, the knowledge base. To run an expert system, the engine reasons about the knowledge base like a human. In the 1980s a third part appeared: a dialog interface for communicating with users. This ability to conduct a conversation with users was later called "conversational".


The rule base or knowledge base

In expert system technology, the knowledge base is expressed with natural-language rules of the form IF ... THEN .... For example:

1. "IF it is living THEN it is mortal"

2. "IF his age = known THEN his year of birth = current year - his

age in years"

3. "IF the identity of the germ is not known with certainty AND the

germ is gram-positive AND the morphology of the organism is

"rod" AND the germ is aerobic THEN there is a strong probability

(0.8) that the germ is of type enterobacteriacae"

This formulation has the advantage of speaking in everyday language, which is very rare in computer science (a classic program is coded). Rules express the knowledge to be exploited by the expert system. There exist other formulations of rules, not in everyday language, that are understandable only to computer scientists. Each rule style is adapted to an engine style.
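As a sketch of how such rules might be held in a knowledge base, the first two example rules above can be written as plain data (this dictionary representation is ours, chosen for illustration):

rules = [
    {"if": {"it is living"}, "then": "it is mortal"},
    {"if": {"his age = known"},
     "then": "his year of birth = current year - his age in years"},
]
print(rules[0]["if"], "->", rules[0]["then"])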

The inference engine:

The inference engine is a computer program designed to produce reasoning on rules. In order to produce reasoning, it should be based on logic. There are several kinds of logic: propositional logic, predicate logic of order 1 or higher, epistemic logic, modal logic, temporal logic, fuzzy logic, probabilistic logic (implemented in a Bayesian network), and so on.

Propositional logic is the basic human logic, expressed in syllogisms. An expert system that uses that logic is also called a zeroth-order expert system. With logic, the engine is able to generate new information from the knowledge contained in the rule base and the data to be processed.
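A minimal zeroth-order (propositional) engine can be sketched in Python as follows; it forward-chains over rules in the same representation as the sketch above, repeatedly firing any rule whose IF-part is satisfied until no new fact appears (the facts and rules here are illustrative):

rules = [
    {"if": {"it is living"}, "then": "it is mortal"},
    {"if": {"it is mortal", "it is human"}, "then": "it will die"},
]
facts = {"it is living", "it is human"}

changed = True
while changed:          # keep firing rules until no new fact is derived
    changed = False
    for rule in rules:
        if rule["if"] <= facts and rule["then"] not in facts:
            facts.add(rule["then"])   # the engine generates new information
            changed = True

print(facts)  # now also contains "it is mortal" and "it will die"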

4-Fuzzy logic

Fuzzy logic is a form of many-valued logic or probabilistic logic; it deals with reasoning that is approximate rather than fixed and exact. Compared to traditional binary sets (where variables may take on true or false values), fuzzy logic variables may have a truth value that ranges in degree between 0 and 1. Fuzzy logic has been extended to handle the concept of partial truth, where the truth value may range between completely true and completely false.

Furthermore, when linguistic variables are used, these degrees may be managed by specific membership functions.

The term "fuzzy logic" was introduced with the 1965 proposal

of fuzzy set theory by Lotfi A. Zadeh.Fuzzy logic has been applied to

many fields, from control theory to artificial intelligence. Fuzzy logics

however had been studied since the 1920s as infinite-valued logics

notably by Łukasiewicz and Tarski.

Overview:

Classical logic only permits propositions having a value of truth or falsity. The notion that 1 + 1 = 2 is an absolute, immutable, mathematical truth. However, there exist certain propositions with variable answers, such as asking various people to identify a color. The notion of truth doesn't fall by the wayside; rather, a means of representing and reasoning over partial knowledge is afforded, by aggregating all possible outcomes into a dimensional spectrum.

Both degrees of truth and probabilities range between 0 and 1 and hence may seem similar at first. For example, let a 100 ml glass contain 30 ml of water. Then we may consider two concepts: Empty and Full. The meaning of each of them can be represented by a certain fuzzy set. One might then define the glass as being 0.7 empty and 0.3 full. Note that the concept of emptiness would be subjective and thus would depend on the observer or designer. Another designer might equally well design a set membership function where the glass would be considered full for all values down to 50 ml. It is essential to realize that fuzzy logic uses truth degrees as a mathematical model of the vagueness phenomenon, while probability is a mathematical model of ignorance.

Applying truth values:

A basic application might characterize subranges of a continuous

variable. For instance, a temperature measurement for anti-lock

brakes might have several separate membership functions defining

particular temperature ranges needed to control the brakes properly. Each

function maps the same temperature value to a truth value in the 0 to 1

range. These truth values can then be used to determine how the brakes

should be controlled.

Fuzzy logic temperature

In this image, the meanings of the expressions cold, warm,

and hot are represented by functions mapping a temperature scale. A

point on that scale has three "truth values"—one for each of the three

functions. The vertical line in the image represents a particular

temperature that the three arrows (truth values) gauge. Since the red

arrow points to zero, this temperature may be interpreted as "not hot".

The orange arrow (pointing at 0.2) may describe it as "slightly warm" and

the blue arrow (pointing at 0.8) "fairly cold".
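The three membership functions can be sketched in Python as follows (the breakpoint temperatures are assumptions for illustration, not read from the figure); each function maps one temperature to a truth value in [0, 1], so a single temperature yields three truth degrees at once:

def trapezoid(x, a, b, c, d):
    # Piecewise-linear membership: rises over a..b, flat over b..c,
    # falls over c..d, and is zero outside [a, d].
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

cold = lambda t: trapezoid(t, -40, -39, 5, 15)   # fully cold up to 5 degrees
warm = lambda t: trapezoid(t, 5, 15, 20, 30)
hot = lambda t: trapezoid(t, 20, 30, 60, 61)     # fully hot from 30 degrees

t = 10.0
print(cold(t), warm(t), hot(t))  # three truth values for one temperature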