A
DISSERTATION SYNOPSIS ON
“Computational Intelligence with Neural Network and Swarm Intelligence for
Preventing Asteroid Disaster”
SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
MASTER OF ENGINEERING IN COMPUTER ENGINEERING
BY
Mr. RAHULGADE ASHOK KAILASH WANDANA
(P.G. REGISTRATION NO. MGM-606)
UNDER THE GUIDANCE OF
PROF. VIJAY R BHOSALE
Department of Computer Engineering
MAHATMA GANDHI MISSION COLLEGE OF ENGINEERING AND
TECHNOLOGY
KAMOTHE, NAVI MUMBAI-410209
UNIVERSITY OF MUMBAI
2018 - 19
PROF. VIJAY R BHOSALE DR. K. SANKAR
PROJECT GUIDE HEAD OF DEPARTMENT
MAHATMA GANDHI MISSION COLLEGE OF ENGINEERING AND
TECHNOLOGY
KAMOTHE, NAVI MUMBAI-410209
NAME OF THE DISSERTATION : COMPUTATIONAL INTELLIGENCE WITH
NEURAL NETWORK AND SWARM
INTELLIGENCE FOR PREVENTING
ASTEROID DISASTER
NAME OF THE STUDENT : Mr. RAHULGADE ASHOK KAILASH
WANDANA
CLASS : M.E. (COMPUTER ENGINEERING)
COLLEGE : MAHATMA GANDHI MISSION COLLEGE
OF ENGINEERING AND TECHNOLOGY,
KAMOTHE, NAVI MUMBAI-410209
SEMESTER : IV (CBCGS)
UNIVERSITY REGISTRATION No : MGM- 606
DATE OF REGISTRATION : 30/09/2016
EXAMINATION FEES RECEIPT No. : DU75893585
NAME OF THE GUIDE : PROF. VIJAY R BHOSALE
SEMESTER DETAILS :

Semester | University Seat Number | Month & Year of Passing | Marks Obtained | Total C*G | SGPI
I        | 4405                   | DEC-2016                | 376/600        | 135       | 6.43
II       | 4405                   | MAY-2017                | 385/600        | 146       | 6.95
STUDENT SIGNATURE
Mr. RAHULGADE ASHOK KAILASH WANDANA
ABSTRACT
Swarm Intelligence is an emerging field of biologically inspired Artificial Intelligence
based on behavioral models of social insects such as ants, bees, wasps, and termites. This
technique can be used in a number of applications: the U.S. military is investigating swarm
techniques for controlling unmanned vehicles, and NASA is investigating the use of swarm
technology for planetary mapping. Artificial Neural Networks gather their knowledge by
detecting patterns and relationships in data, and they learn through experience rather than
from programming.
This research derives a new Computational Intelligence model from two existing
models: Artificial Swarm Intelligence and the Artificial Neural Network. Artificial Swarm
Intelligence is a population-based optimization model; the optimization is achieved through
computational algorithms such as global or local Particle Swarm Optimization (PSO). The
Artificial Neural Network is inspired by the biological brain, which consists of layers of
neurons that are interconnected with each other.
When combined, these two models can be used to solve real-time problems. The
proposed model performs with low computational complexity and will help us to detect and
study various asteroids and other Near Earth Objects.
This research is highly inspired by the “Deep Asteroid” project of NASA [20], which
uses a Deep Learning predictive model to classify asteroids according to their physical
properties. This synopsis gives an abstract overview of my research work; the work is
implemented in the Python programming language, and the analysis of the algorithm's results
is based on the outputs obtained from Python.
1. Introduction
In a social colony, a worker does not perform all tasks but rather specializes in a set of
tasks according to its morphology, age, or chance. This division of labor among nest-mates,
whereby different activities are performed simultaneously by groups of specialized
individuals, is believed to be more efficient than if tasks were performed sequentially by
unspecialized individuals.
The social insect metaphor for solving problems has become a hot topic in the last
decade. This approach emphasizes distributedness, direct or indirect interactions among
relatively simple agents, flexibility, and robustness. The number of successful applications is
growing exponentially in combinatorial optimization, communication networks, and robotics,
and more and more researchers are interested in this new way of achieving a form of artificial
intelligence known as swarm intelligence.
Swarm Intelligence systems consist typically of a population of simple agents interacting
locally with one another and with their environment. The inspiration often comes from nature,
especially biological systems. The agents follow very simple rules, and although there is no
centralized control structure dictating how individual agents should behave, local, and to a
certain degree random, interactions between such agents lead to the emergence of "intelligent"
global behavior, unknown to the individual agents. Examples in natural systems of Swarm
Intelligence include ant colonies, bird flocking, animal herding, bacterial growth, fish schooling
and microbial intelligence.
Artificial neural networks (ANN) are computing systems inspired by the biological neural
networks that constitute animal brains. Such systems "learn" to perform tasks by considering
examples, generally without being programmed with any task-specific rules. For example,
in image recognition, they might learn to identify images that contain cats by analyzing example
images that have been manually labeled as "cat" or "no cat" and using the results to identify cats
in other images. They do this without any prior knowledge about cats, e.g., that they have fur,
tails, whiskers and cat-like faces. Instead, they automatically generate identifying characteristics
from the learning material that they process.
An ANN is based on a collection of connected units or nodes called artificial neurons which
loosely model the neurons in a biological brain. Each connection, like the synapses in a
biological brain, can transmit a signal from one artificial neuron to another. An artificial neuron
that receives a signal can process it and then signal additional artificial neurons connected to it.
2. Problem Statement
Artificial Swarm Intelligence is a subfield of Computational Intelligence with the
ability to solve complex problems, mostly optimization problems. Here, 'N' individuals, also
known as agents, represent the swarm. Each agent communicates with the other individuals in
the swarm and shares information about the environment, so each agent has its own
knowledge, also known as cognitive knowledge, and global knowledge, also known as social
knowledge.
With this knowledge, each individual finds the best position in the hyper-dimensional
search space, which takes the agent closer to the optimal solution.
2.1 Problem with the PSO
In real-world applications, the size of the swarm may be large and the search space
vast, so each agent in the swarm performs its own computation. PSO suffers from high
computational complexity and slow convergence speed: high computational complexity
hinders its use in applications with limited power resources, while slow convergence speed
makes it unsuitable for time-critical applications.
In this research, I use a methodology that helps minimize the computational
complexity problem of Artificial Swarm Intelligence; this technique is known as the Artificial
Neural Network.
Artificial neural networks (ANNs) are information processing systems inspired by the
biological neural system, i.e., a model of the human or animal brain. Such systems learn
(progressively improve performance) to do tasks by considering examples, generally without
task-specific programming.
An artificial neural network builds a model that is able to recognize and classify
objects and to learn patterns from features.
In my research, I combine the Artificial Neural Network with Computational Swarm
Intelligence. In this model, before the agents move toward the objective function (the target),
each agent tries to find out the pattern and the type of the objective using the neural network,
and only those agents that have the ability to deal with the target take part in the convergence.
The research shows that using both techniques in conjunction reduces the number of
computations required and makes it possible to find patterns in the search space in
time-critical applications.
2.2 Handling Asteroid Disaster
These combined techniques are used to handle and prevent an asteroid disaster. In my
research, I find various features of an asteroid that distinguish it from other near-Earth
objects. These features help the swarm robots to find patterns among Near Earth Objects, and
we also look for a solution to what happens if such an object comes into the path of the Earth.
To achieve this, I gather Near Earth Object datasets from the NASA Planetary Data
System; we will see the dataset gathering later in this synopsis. We will see how to feed this
dataset to an artificial neural network and the algorithm of global best particle swarm
optimization for global convergence. Finally, we will see our proposed system's algorithm
and block diagram. Our proposed system first performs pattern classification and then
performs the convergence.
3. Review of Literature
The concept of swarm intelligence was employed in work on artificial intelligence
by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems [1]. Boids
is an artificial life program, developed by Craig Reynolds in 1986, which simulates the
flocking behavior of birds; his paper on this topic was published in 1987 in the proceedings of
the ACM SIGGRAPH conference [2]. Self-propelled particles (SPP), also referred to as the
Vicsek model, were introduced in 1995 by Vicsek et al. [3] as a special case of the boids
model introduced in 1986 by Reynolds [4].
PSO is originally attributed to Kennedy, Eberhart, and Shi [5][6] and was first intended
for simulating social behavior [7], as a stylized representation of the movement of organisms in a
bird flock or fish school. The algorithm was simplified and it was observed to be performing
optimization. The book by Kennedy and Eberhart [8] describes many philosophical aspects of
PSO and swarm intelligence. An extensive survey of PSO applications is made by Poli [9] [10].
Recently, a comprehensive review on theoretical and experimental works on PSO has been
published by Bonyadi and Michalewicz [11] and a review of historical and recent developments
along with hybridization perspectives by Sengupta, Basak and Peters [12].
Warren McCulloch and Walter Pitts [13] in 1943 created a computational model for
neural networks. This work led to work on nerve networks and their link to finite automata [14].
In the late 1940s, D. O. Hebb [15] created a learning hypothesis based on the mechanism
of neural plasticity that became known as Hebbian learning. Researchers started applying these
ideas to computational models in 1948 with Turing's B-type machines. Farley and Clark (1954)
first used computational machines, then called "calculators", to simulate a Hebbian network [16].
Other neural network computational machines were created by Rochester, Holland, Habit and
Duda (1956) [17].
Rosenblatt in 1958 [18] created the perceptron, an algorithm for pattern recognition.
With mathematical notation, Rosenblatt described circuitry not in the basic perceptron, such
as the exclusive-or circuit, which could not be processed by neural networks at the time. In
1959, a biological model proposed by Nobel laureates Hubel and Wiesel was based on their
discovery of two types of cells in the primary visual cortex: simple cells and complex cells.
The first functional networks with many layers were published by Ivakhnenko and Lapa in
1965, becoming the Group Method of Data Handling. Neural network research stagnated
after the machine learning research by Minsky and Papert in 1969 [19].
The “Deep Asteroid” project [20] of NASA uses a Deep Learning model to classify
asteroids according to their properties. This project explains the various important features
which can be used for classifying asteroids. Nola Taylor Redd explains asteroids in depth on
her blog on “Space.com” [21]; she explains the physical properties of asteroids and of other
near-Earth objects such as meteoroids and comets.
Muhammad S. Sohail, Muhammad O. Bin Saeed [22] systematically explains how to
deploy the Particle Swarm Optimization algorithm in the time-critical applications with low
complexity. Kevin P. Murphy [23] explains how to build the classifier with the probabilistic
approach. He used conditional probability to build a generative model for classification. Dean
Abbott [24] uses the predictive model for classifying the objects. Wes McKinney [25] shows a
great way to analyze the dataset using python. He shows how to perform the exploratory data
analysis and preprocessing the dataset using Pandas.
Elio Tuci and Christos Ampatzis [26] explain how two cooperating robots can
communicate acoustically. Valerio Sperati, Vito Trianni and Stefano Nolfi [27] describe how
swarm robots can organize their own path; this shows that swarm robots do not require any
explicit path, since they make their own. Cooperation between robots is also discussed by
Frederick Ducatelle, Gianni A. Di Caro, Carlo Pinciroli and Luca M. Gambardella [28].
4. Gathering and Analyzing the Asteroid Dataset
To find the patterns best suited to classify an asteroid, we need to choose a good
dataset. How correctly an artificial neural network classifies the data depends entirely on the
quality of the dataset we choose, because this dataset is used to train our neural network.
There are many websites through which we can collect asteroid data. One of the main
steps is to apply data preprocessing techniques to that dataset. Before gathering this dataset,
you need to study the asteroid and its physical characteristics well.
I collected four features of asteroids during the data gathering phase but, for
simplicity, I select only three features for training. These features are explained below:
Figure 4.1: Collected Dataset of Asteroids from Different Sources
MOID (AU) : Minimum orbit intersection distance
e : Eccentricity
dv (km/s) : Delta velocity, given in kilometers per second
a (AU) : Semi-major axis
Outputs of the Exploratory Data Analysis:
To visualize the distribution of the asteroid dataset, Python's seaborn tool is used. It is
very important to perform Exploratory Data Analysis.
Figure 4.2: Univariate distribution of a (AU) Figure 4.3: Bivariate distribution of a (AU) and e
Figure 4.4: Visualizing pairwise distribution of features
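As a sketch of the univariate summaries behind these plots, the statistics can be computed with the standard library; note that the feature values below are invented for illustration and are not taken from the real dataset, and the commented lines only indicate the corresponding seaborn calls:

```python
import statistics

# Hypothetical sample rows of the three selected features: (a, e, dv).
rows = [(1.08, 0.22, 5.1), (2.47, 0.57, 11.8), (0.92, 0.12, 4.3), (3.05, 0.68, 13.9)]
names = ("a (AU)", "e", "dv (km/s)")

for i, name in enumerate(names):
    col = [r[i] for r in rows]
    # Univariate summary of one feature, analogous to Figure 4.2
    print(f"{name}: mean={statistics.mean(col):.3f}, stdev={statistics.stdev(col):.3f}")

# With pandas/seaborn loaded into a DataFrame df, the plotted views would be, e.g.:
#   sns.distplot(df["a"])                  # univariate distribution (Figure 4.2)
#   sns.jointplot(x="a", y="e", data=df)   # bivariate distribution (Figure 4.3)
#   sns.pairplot(df)                       # pairwise distributions (Figure 4.4)
```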
5. Artificial Neural Network and Analysis of Result
The proposed model consists of a single-layer neural network having three inputs and a
single output neuron. For simplicity, we selected only three of the four features in our dataset
because they are easy to visualize in three-dimensional space: a (AU), e, and dv (km/s).
To train the model, we use the perceptron learning rule. It updates the synaptic weights (W)
and bias (B) using the following formulas:
Wi(new) = Wi(old) + α·t·xi --------------(1)
B(new) = B(old) + α·t --------------(2)
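A minimal sketch of this learning rule in pure Python is shown below. The toy feature values and labels are invented for illustration (the actual training uses the asteroid dataset of Section 4), and updates are applied only on misclassified samples, with targets t in {-1, +1}:

```python
def perceptron_train(samples, targets, alpha=0.01, max_epochs=500):
    """Single-neuron perceptron trained with the rule of Eqs. (1)-(2):
    on a misclassified sample, Wi += alpha*t*xi and B += alpha*t."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, t in zip(samples, targets):
            # Threshold activation: +1 if the weighted sum plus bias is >= 0
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
            if y != t:  # update weights and bias only on a misclassification
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                mistakes += 1
        if mistakes == 0:  # converged: every training sample classified correctly
            break
    return w, b

# Toy samples with features (a, e, dv) rescaled to comparable ranges.
X = [(1.0, 0.1, 0.4), (0.9, 0.2, 0.5), (2.5, 0.6, 1.2), (3.0, 0.7, 1.4)]
T = [-1, -1, 1, 1]
w, b = perceptron_train(X, T)
```

Because the toy data is linearly separable, the perceptron convergence theorem guarantees the loop terminates with all samples classified correctly.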
Output:
1. After one epoch we get weights:
[0.08114604681809592, 0.008194680786234532, 0.01860311345400166]
Figure 5.1: Weights and error after one epoch
2. After 100 epochs we get weights:
[0.09933939927404453, -0.0745049449824916, 0.00371252586824012]
Figure 5.2: Weights and errors after 100 epochs
3. After 500 epochs we get weights:
[0.07769153263323839, 0.02477412376510263, 0.015263283338134315]
Figure 5.3: Weights and errors after 500 epochs.
Calculating cost (Errors) during Training:
1. After 1 Epoch: -0.23921588871571092
2. After 100 Epochs: -0.22810843626573563
3. After 500 Epochs: -0.24186926681407517
6. Artificial Swarm Intelligence and Analysis of Result
The Global Best Particle Swarm Optimization (GBest PSO) algorithm demonstrates how
the swarm updates its velocity in the global hyper-dimensional search space. In GBest
PSO, each individual in the swarm receives social knowledge from all the other individuals in
the swarm and determines the best position in the swarm. All the knowledge in the swarm
helps to find the global best position in the hyper-dimensional search space.
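A minimal GBest PSO sketch in pure Python follows. The parameter values and the objective function are illustrative assumptions; here the swarm minimizes the squared distance to the target point (1, 1, 1), matching the convergence objective used in this work:

```python
import random

def gbest_pso(objective, dim=3, n_particles=20, iters=200,
              w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Global-best PSO: each particle is pulled toward its personal best
    position (cognitive term) and the swarm's global best (social term)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # personal best positions
    pbest_f = [objective(p) for p in pos]     # personal best fitness values
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Objective: squared Euclidean distance to the target point (1, 1, 1).
sphere = lambda p: sum((x - 1.0) ** 2 for x in p)
random.seed(0)
best, best_f = gbest_pso(sphere)
```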
Output:
After 100 generations, we get global best position at:
x-axis = [0.31797679], y-axis = [0.01404559], z-axis = [-0.3848791]
After 500 generations, we get global best position at:
x-axis = [1.39278552], y-axis = [0.86130815], z-axis = [1.68779856]
After 1000 generations, we get global best position at:
x-axis = [0.97081198], y-axis = [1.01590381], z-axis = [1.14433679]
Figure 6.1: Position of the individual in search space. Black dot shows the global best position found by
the swarm since initialization
Errors:
Error can be defined as how far the best individual in the swarm is from the objective
function. In this case, the best individual is the one among the n individuals whose Euclidean
distance is lowest. Based on the best individual obtained from the generations (or, we can say,
epochs), the errors are:
1. After 100 generations, the global best position error is:
[[0.68202321 0.98595441 0.6151209]]
2. After 500 generations, the global best position error is:
[[0.39278552 0.13869185 0.68779856]]
3. After 1000 generations, the global best position error is:
[[0.02918802 0.01590381 0.14433679]]
As we increase the number of generations, we get a lower error rate. To get the errors, we
subtract the GBest position from 1, because our objective convergence is 1.
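The error calculation described here can be reproduced directly from the reported 1000-generation result:

```python
# Global best position reported after 1000 generations (Section 6).
gbest = [0.97081198, 1.01590381, 1.14433679]
target = 1.0  # the objective convergence value on every axis

# Error per axis: absolute difference between the target and the GBest position.
errors = [abs(target - x) for x in gbest]
print([round(e, 8) for e in errors])  # [0.02918802, 0.01590381, 0.14433679]
```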
7. Algorithm and Flowchart of the System:
Algorithm and the flowchart of the proposed system are shown below:
Algorithm:
Step 1: Each individual in the algorithm performs the neural network pattern classification
using the perceptron learning rule.
Step 2: Only those individuals in the swarm that are able to deal with the target perform the
convergence.
Step 3: The global best is computed only over those individuals who take part in the
convergence.
Step 4: Convergence is made to the Euclidean distance in 3-D space, which is equal to 1.
Step 5: Stop.
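The gating in Steps 1-3 can be sketched as follows; `classify` and `pso_step` are hypothetical placeholders standing in for the trained perceptron of Section 5 and the GBest PSO routine of Section 6:

```python
def filter_then_optimize(individuals, weights, bias, pso_step):
    """Sketch of the proposed pipeline: each individual is first classified with
    the perceptron weights; only positively classified individuals (those able
    to deal with the target) take part in the PSO convergence."""
    def classify(x):
        # Step 1: threshold activation with the trained weights and bias
        s = bias + sum(w * xi for w, xi in zip(weights, x))
        return 1 if s >= 0 else -1

    # Step 2: keep only the individuals classified as able to handle the target
    eligible = [x for x in individuals if classify(x) == 1]

    # Step 3: run global best PSO over the eligible individuals only
    return pso_step(eligible)

# Hypothetical weights/bias and swarm; pso_step is stubbed with len() to show
# how many individuals pass the classification gate.
w, b = [0.0, 0.0, 1.0], -0.5
swarm = [[0.2, 0.1, 0.1], [0.3, 0.2, 0.9], [0.1, 0.4, 0.8]]
result = filter_then_optimize(swarm, w, b, pso_step=len)
print(result)  # 2 of the 3 individuals pass the gate
```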
Flowchart of proposed system:
Figure 7.1: Flowchart of Proposed System. Start → each individual performs pattern
classification using the weights obtained by the perceptron learning rule, computing
Y = sig(b + w1*x1 + w2*x2 + w3*x3). If Y = 1, the individual runs the global best particle
swarm optimization algorithm and updates its velocity, Pbest, and Gbest; otherwise, it stops.
Block diagram of proposed system:
Figure 7.2: Block Diagram of Proposed System
8. Conclusion
We have studied artificial swarm intelligence and the artificial neural network. The two
concepts have different problem-solving methodologies. We have studied asteroid datasets
and the properties and features of asteroids. There are still unexplored asteroids present in
space, and scientists are trying to discover new types of asteroids. We can use our proposed
system to solve such real-time complex problems.
Neural Network helps to train the asteroid dataset and we get the result as weights:
[0.07769153263323839, 0.02477412376510263, 0.015263283338134315]
And the global best position in the search space:
x-axis = [0.97081198], y-axis = [1.01590381], z-axis = [1.14433679]
In PSO, when the number of individuals increases, the complexity of the system also
increases, because each individual updates its personal best position in the search space. On
the contrary, when we apply our proposed algorithm, we can achieve the same task with
lower complexity. The following graph compares the two algorithms: the upper line shows
the complexity of PSO with respect to the number of individuals, while the lower line shows
the complexity of our proposed algorithm.
Figure 8.1: Complexity Plot with respect to number of individuals in the system.
REFERENCES
[1] Beni, G.; Wang, J. (1989). "Swarm Intelligence in Cellular Robotic Systems". NATO
Advanced Workshop on Robots and Biological Systems, Tuscany, Italy, June 26-30, 1989.
[2] Reynolds, Craig (1987). "Flocks, herds and schools: A distributed behavioral
model". SIGGRAPH '87: Proceedings of the 14th annual conference on Computer
graphics and interactive techniques.
[3] Vicsek, T.; Czirok, A.; Ben-Jacob, E.; Cohen, I.; Shochet, O. (1995). "Novel type of phase
transition in a system of self-driven particles". Physical Review Letters.
[4] Reynolds, C. W. (1987). "Flocks, herds and schools: A distributed behavioral model"
[5] Kennedy J, Eberhart R (1995), “Particle swarm optimization”, proceedings of IEEE
International conference on neural networks.
[6] Shi, Y.; Eberhart, R.C. (1998). "A modified particle swarm optimizer".
[7] Kennedy, J. (1997). "The particle swarm: social adaptation of knowledge".
[8] Kennedy, J.; Eberhart, R.C. (2001). Swarm Intelligence. Morgan Kaufmann.
[9] Poli, R. (2007). "An analysis of publications on particle swarm optimisation
applications". Technical Report CSM-469, Department of Computer Science, University of
Essex, UK.
[10] Poli, R. (2008). "Analysis of the publications on the applications of particle swarm
optimisation". Journal of Artificial Evolution and Applications, 2008.
[11] Bonyadi, M. R.; Michalewicz, Z. (2017). "Particle swarm optimization for single
objective continuous space problems: a review". Evolutionary Computation.
[12] Sengupta, S.; Basak, S.; Peters II, R.A. (2018). "Particle Swarm Optimization: A Survey
of Historical and Recent Developments with Hybridization Perspectives". Machine
learning and knowledge extraction.
[13] McCulloch, Warren; Walter Pitts (1943). "A Logical Calculus of the Ideas Immanent in
Nervous Activity". Bulletin of Mathematical Biophysics 5(4): 115-133.
[14] Kleene, S.C. (1956). "Representation of Events in Nerve Nets and Finite Automata".
Annals of Mathematics Studies (34). Princeton University Press. pp. 3-41.
[15] Hebb, Donald (1949). The Organization of Behavior. New York: Wiley. ISBN 978-1-
135-63190-1.
[16] Farley, B.G.; W.A. Clark (1954). "Simulation of Self-Organizing Systems by Digital
Computer". IRE Transactions on Information Theory.
[17] Rochester, N.; J.H. Holland; L.H. Habit; W.L. Duda (1956). "Tests on a cell assembly
theory of the action of the brain, using a large digital computer".
[18] Rosenblatt (1958). “The perceptron: A Probabilistic model for information storage and
organization in the brain”
[19] Minsky, Marvin; Papert, Seymour (1969). Perceptrons: An Introduction to Computational
Geometry. MIT Press.
Andries P. Engelbrecht, "Computational Intelligence: An Introduction", Wiley.
[20] “Deep Asteroid Project”, NASA
[21] "Asteroid", by Nola Taylor Redd, Space.com Contributor.
[22] Muhammad S. Sohail, Muhammad O. Bin Saeed, Member, Syed Z. Rizvi, Student
Member Mobien Shoaib, Member and Asrar U. H. Sheikh,” Low-Complexity Particle
Swarm Optimization for Time-Critical Applications”
[23] Kevin P. Murphy, "Machine Learning: A Probabilistic Perspective".
[24] Dean Abbott, “Strategies for building Predictive Models”, KNIME User Group Meeting
Feb2014
[25] Wes McKinney, "Python for Data Analysis", O'Reilly.
[26] Elio Tuci, Christos Ampatzis, "Evolution of acoustic communication between two
cooperating robots".
[27] Valerio Sperati, Vito Trianni, Stefano Nolfi, "Evolution of self-organized path
formation in a swarm of robots".
[28] Frederick Ducatelle, Gianni A. Di Caro, Carlo Pinciroli, Luca M. Gambardella,
"Self-organized Cooperation between Robotic Swarms".