
Birendra Khadka

DATA ANALYSIS THEORY AND PRACTICE

Case: Python and Excel Tools

Thesis

CENTRIA UNIVERSITY OF APPLIED SCIENCES

Information Technology

December 2019


ABSTRACT

Centria University of Applied Sciences

Date: June 2019

Author: Birendra Khadka

Degree programme: Information Technology

Name of thesis: DATA ANALYSIS THEORY AND PRACTICE Case: Python and Excel Tools

Instructor: Kauko Kolehmainen

Supervisor: Kauko Kolehmainen

Pages: 92 + 5

Data science is a multifaceted field used to gain insights from complex data. The aim of this thesis is to examine how data can be analysed with Python and with Excel across different data sets. The thesis analyses coronavirus datasets using Python libraries, analyses Excel datasets with Excel tools, and presents a comparative study of the two approaches. Data analytics is the practice of examining raw data in order to draw conclusions from the information it contains, while data science is an interdisciplinary field that applies scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data; it is closely related to data mining and big data.

The thesis consists of a theoretical part and a practical part. The theoretical part introduces data analysis, compares data analysis with data science, and covers data wrangling, normalization, formatting, binning and exploratory data analysis of the Titanic dataset. The practical part presents a Python-based analysis of the spread of the coronavirus across China, an Excel-based analysis of Excel datasheets, and a comparative study of two data sets using the two types of data analysis: Python-based data analysis and Excel-based data analysis. Finally, the thesis identifies the more suitable data analysis method by describing its features, gives recommendations for individuals who want to build a career in data science, and looks at the future of data analysis.

Key words

AI, ANOVA, CSV, Matplotlib, NumPy, Pandas, Plotly, SciPy, Seaborn, Theano, scikit-learn


CONCEPT DEFINITIONS

CSV: comma-separated values

ANOVA: Analysis of variance

MSE: Mean squared error

MLR: Multiple Linear Regression

SLR: Simple Linear Regression

SAS: Statistical Analysis System

MATLAB: Matrix Laboratory

SQL: Structured Query Language

SCIPY: Scientific Python, a free and open-source Python library used for scientific and technical computing

NUMPY: Numerical Python

MATPLOTLIB: a plotting library for Python

AI: Artificial intelligence

JSON: JavaScript Object Notation

XLSX: a file extension for the Open XML spreadsheet file format used by Microsoft Excel

HDF: Hierarchical Data Format

EXCEL: a spreadsheet program developed by Microsoft

STD: Standard deviation

EDA: Exploratory Data Analysis

IQR: Interquartile Range

MST: Minimum Spanning Tree

SVM: Support Vector Machine

RBF: Radial Basis Function

NCOV: Novel coronavirus


ABSTRACT

CONCEPT DEFINITIONS

CONTENTS

1 INTRODUCTION ................................................................................................................................ 1

2 DATA ANALYSIS ............................................................................................................................... 2

2.1 The role of Data Analytics .......................... 2
2.2 Python in Data Analytics ............................ 4
2.3 Data Science ........................................ 4
2.4 Python for Data Science ............................. 5
2.5 Importance of Data Science .......................... 6
2.6 Python package for Data Science ..................... 7
2.6.1 Numpy ............................................. 7
2.6.2 Pandas ............................................ 8
2.6.3 Matplotlib ........................................ 8
2.6.4 TensorFlow ........................................ 9
2.6.5 Scipy ............................................. 9
2.7 Data Science VS Data Analytics ...................... 10

3 DATA WRANGLING ........................................ 12
3.1 Pre-processing data ................................. 12
3.2 Dealing with missing values ......................... 13
3.3 Data formatting ..................................... 14
3.3.1 Data format based on file ......................... 14
3.3.2 Data format based on directory .................... 14
3.3.3 Connections between database ...................... 15
3.4 Data Normalization .................................. 15
3.4.1 Methods of normalizing data ....................... 16
3.4.2 Simple feature scaling in Python .................. 16
3.4.3 Min-Max in Python ................................. 16
3.4.4 Z-score in Python ................................. 17
3.5 Binning in Python ................................... 19
3.6 Categorical variables in Python ..................... 19

4 EXPLORATORY DATA ANALYSIS IN PYTHON ................... 20
4.1 Descriptive Statistics .............................. 20
4.1.1 Interpretation of Descriptive Statistics .......... 20
4.1.2 Processes of Descriptive Statistics ............... 21
4.2 GroupBy in Python ................................... 22
4.3 Correlation ......................................... 22
4.3.1 Negative and Positive Correlation ................. 22
4.3.2 Multiple and Partial Correlation .................. 23
4.4 Linear and Non-Linear (Curvilinear) Correlation ..... 23
4.5 Analysis of Variance ANOVA .......................... 24
4.5.1 Revealing of analysis of variance ................. 24
4.5.2 Example of How to Use ANOVA ....................... 25
4.5.3 One-Way ANOVA Versus Two-Way ANOVA ................ 25
4.6 Exploratory Analysis of the Titanic data set ........ 26

5 MODEL EVALUATION ...................................... 38
5.1 Regression Models ................................... 38
5.1.1 Sum Squared Error (SSE), Mean Square Error (MSE) and Root Mean Square Error (RMSE) ... 39
5.1.2 Relative Squared Error (RSE) ...................... 43
5.1.3 Mean Absolute Error (MAE) ......................... 43
5.1.4 Mean Absolute Deviation (MAD) ..................... 44
5.1.5 Relative Absolute Error (RAE) ..................... 45
5.2 Classification Models ............................... 45
5.3 Ridge Regression .................................... 46

6 MACHINE LEARNING MODEL EVALUATION ..................... 47
6.1 Holdout ............................................. 47
6.2 Cross-validation .................................... 48
6.3 Model Evaluation Metrics ............................ 48
6.3.1 Classification Accuracy ........................... 49
6.3.2 Multiclass averaging .............................. 50
6.3.3 Confusion metrics ................................. 51
6.3.4 Logarithmic loss .................................. 52
6.3.5 Area under the curve (AUC) ........................ 53
6.3.6 F-Scores .......................................... 54

7 DATA ANALYTICS TOOLS .................................. 55
7.1 R programming ....................................... 55
7.2 Python .............................................. 55
7.3 Apache Spark ........................................ 55
7.4 SAS and Rapid Miner ................................. 56
7.5 Excel ............................................... 56
7.6 Tableau Public ...................................... 57
7.7 KNIME ............................................... 57
7.8 Splunk and Qlik ..................................... 57

8 EXPERIMENTAL DATA ANALYSIS OF THE WUHAN CORONAVIRUS DATASET ... 58
8.1 Coronavirus ......................................... 58
8.2 The emergency, starting today ....................... 58
8.3 A prologue to the dataset ........................... 59
8.3.1 Insight the dataset ............................... 59
8.3.2 Plotting the data ................................. 64

9 EXCEL TOOLS FOR DATA ANALYSIS ......................... 70
9.1 Data Analysis of Excel datasheet .................... 71
9.2 Complementing the Excel data analysis Tool Pak ...... 71
9.3 Excel datasets ...................................... 72
9.3.1 Descriptive Statistics ............................ 72
9.3.2 T-test in Excel ................................... 73
9.3.3 Regression in Excel ............................... 75
9.3.4 ANOVA in Excel .................................... 76
9.3.5 Correlation in Excel .............................. 78
9.4 Excel Data .......................................... 79
9.5 Comparing Excel-based data analysis with Python data analysis ... 79
9.5.1 Getting the data .................................. 80
9.5.2 Analysing data .................................... 81
9.5.3 Portraying ........................................ 81

10 CONCLUSION ............................................................................................................................ 83

REFERENCES ...................................................................................................................................... 86

APPENDICES

GRAPHS

GRAPH 1. Min-Max normalization……………………………………………………………………21

GRAPH 2. Z-Score Normalization in Python…………………………………………………………..22

GRAPH 3. A regression model got a line of best fit…………………………………………………….42

GRAPH 4. A regression model got a different line of best fit…………………………………………...42

GRAPH 5. MSE where baseline is the mean……………………………………………………………43

GRAPH 6. MSE in pattern is correct and mean is incorrect…………………………………………….43

GRAPH 7. Number of confirmed cases by date………………………………………………………...67

GRAPH 8. Plotting two lines for deaths and recoveries respectively……………………………………68

GRAPH 9. Most affected countries other than China…………………………………………………...69

GRAPH 10. Mortality rate over time……………………………………...……………………………70

GRAPH 11. Ten most affecting provinces of China………………………………….………………..71

GRAPH 12. Scatter diagram using regression in Excel tools……………………………………...……79

FIGURES

FIGURE 1. Different roles of Data analytics……………………………………………………………...3

FIGURE 2. Phases of Data science………………………………………………………………………5

FIGURE 3. Importance of Data science in different fields………………………………………….…..7

FIGURE 4. Comparison of Data Science with Data analytics…………………………………………13

FIGURE 5. Importing the Pandas using as pd…………………………………………………………..30

FIGURE 6. Summary of the different attributes………………………………………………………..31

FIGURE 7. Checking the missing values………………………………………………………………31

FIGURE 8. Analyzing the different values distributions……………………………………………….32

FIGURE 9. Coloring the function matplotlib inline…………………………………………………….32

FIGURE 10. Matplotlib separating by gender………………………………………………………….33

FIGURE 11. Comparing survival with age……………………………………………………………..33


FIGURE 12. Counting the values with values counts…………………………………………………..34

FIGURE 13. Binning the aging groups…………………………………………………………………34

FIGURE 14. Age distribution of the passengers………………………………………………………..35

FIGURE 15. Distribution of other variables……………………………………………………………36

FIGURE 16. Dissemination of male passenger…………………………………………………………36

FIGURE 17. Allocation of female passenger…………………………………………………………...37

FIGURE 18. Cross reference regarding with gender……………………………………………………37

FIGURE 19. Distribution of the survived and not survived passenger………………………………….38

FIGURE 20. Example of Regression models…………………………………………………………...40

FIGURE 21. An example of two different classification models to get MSE…………………………51

FIGURE 22. Coding showing the accuracy…………………………………………………………….52

FIGURE 23. Screenshot of coding shows multiclass averaging………...……………………………...54

FIGURE 24. Coding shows confusion metrics…………………………………………………………55

FIGURE 25. Computed programming represents Logarithmic loss……………………………………56

FIGURE 26. Computed programming of AUC………………………………………………………...56

FIGURE 27. Computed programming of AUC when target classes are of type String…………………57

FIGURE 28. Computed programming of F-score………………………………………………………57

FIGURE 29. Computed programming of F-score when target classes are of type String……………….62

FIGURE 30. Comparing last update with date segment………………………………………………...63

FIGURE 31. Checking for duplicate rows……………………………………………………………...64

FIGURE 32. Countries list of spreading infection across the world…………………………………….64

FIGURE 33. Checking the dates in the date section…………………………………………………….64

FIGURE 34. Converting date to datetime object………………………………………………………..65

FIGURE 35. The total number of confirmed cases for each country……………………………………65

FIGURE 36. Assembling the Analysis Tool Pak……………………………………………………….73

FIGURE 37. Checking analysis Tool Pak………………………………………………………………74

FIGURE 38. Data tab showing the data analysis………………………………………………………..74

FIGURE 39. Example of using data analysis Tool Pak…………………………………………………74

FIGURE 40. Descriptive statistics in Excel………………………………...…………………………..75

FIGURE 41. Output of the descriptive statistics by using excel tools…………………………………..76

FIGURE 42. T-test in Excel Tools……..……………………………………………………………….76

FIGURE 43. Output of t-test in Excel……………………………...…………………………………...77

FIGURE 44. Regression in Excel………………………………………………………………………78

FIGURE 45. Output of the regression in Excel Tools………………………………….………………78


FIGURE 46. Analyzing the ANOVA in Excel Tool Pak………………………………………………80

FIGURE 47. Examination of correlation in Excel………………………………………………………81

TABLES

TABLE 1. An example of two different classification models to get MSE……………………………..41

TABLE 2. The main five lines data by using data.head()……………………………………………….62

TABLE 3. Restoring the general details by using describe() function…………………………………..63

TABLE 4. Number of cases reported each day…………………………………………………………66


1 INTRODUCTION

Data analytics is the practice of examining raw data in order to make decisions about the information it contains. Many of the techniques and processes of data analytics have been automated into mechanical processes and algorithms that work over raw data for human use. Data analytics methods can reveal trends and metrics that would otherwise be lost in the mass of data. This information can then be used to optimise processes and increase the overall efficiency of a business or system. Data analytics is a broad term that covers many different kinds of data analysis. Any type of data can be subjected to analytic techniques to gain insight that can be used to improve things. For instance, manufacturing companies often record the runtime, downtime and work queue of their machines and then analyse the data to plan workloads better, so that the machines operate closer to peak capacity.

Data analytics can do much more than point out bottlenecks in production. Gaming companies use data analytics to set reward schedules that keep the majority of players active in the game. Content companies use many of the same analytics to keep users clicking, watching or re-organising content in order to get another view or another click. Data science is a concept that unifies statistics, data analysis and AI and their related methods in order to understand and analyse real phenomena with data. It employs techniques and theories drawn from many fields within mathematics, statistics, computer science and information science.

This thesis mainly explores how to analyze data using Python, moving from the fundamentals of Python to analyzing many different kinds of data. The thesis also illustrates Excel-based data analysis. Finally, the thesis shows how to prepare data for analysis, perform simple statistical analysis, create meaningful data visualizations and predict future trends from data.


2 DATA ANALYSIS

Data analysis is a process of gathering information and preparing it so that conclusions can be drawn from it. Data analysts apply well-known methods for describing and manipulating information, and the resulting insights allow organisations to define better strategies and make far better-informed decisions. Data analysis is defined as a procedure of cleaning, transforming and modelling data to discover useful information for business decision-making. The purpose of data analysis is to extract useful information from data and to make decisions based on that analysis. In other words, data analysis is a procedure of inspecting, cleansing, transforming and modelling data with the goal of finding helpful information and supporting decision-making. Data analysis has many aspects and approaches, encompassing diverse techniques under a variety of names, and it is used in different areas of business, science and social science. In today's business world, data analysis plays a role in making decisions more scientific and in helping organisations operate more effectively. (Thumar 2019.)

Data analysis is a procedure of collecting and modelling data with the objective of discovering the required information. The results are then communicated, suggesting conclusions and supporting decision-making. In some cases data visualisation is used, because it makes it easier to spot useful patterns in the data. The terms data analysis and data modelling are sometimes used to mean much the same thing. (tutorialspoint 2019.)

2.1 The role of Data Analytics

Big data analytics helps an organisation to harness its data and use it to identify new opportunities. That, in turn, leads to smarter business moves, more efficient operations, higher profits and happier customers. The idea is to recognise business prospects earlier and better and then act on them analytically. Data grows at a fast pace and the rate of data growth is high; it is generated by various customers, enterprises and organisations. It is crucial to consolidate the data that is produced across the business, because if it is wasted, a lot of important information will be lost. (Keshari 2019.) The basic roles of data analytics are shown in figure 1.


FIGURE 1. Different roles of Data analytics (Keshari 2019).

Data analysis is a practice of working with raw data and uncovering patterns in it through numerical estimation and computation. Analysing data makes it possible to create new information and to access various insights. Retrieving data from an organisation's database, or extracting data from external sources for research, is one of the main occupations of any data analyst. Data cleaning is a basic stage of the whole data-handling process, and careless, unimproved data makes it harder to review, identify and use the information. When investigating data in order to make key decisions, data analysts should therefore start with prudent data-cleansing methods. It is also important to verify what the cleansing has produced, because cleansing involves deletion operations that may destroy values or organise the information in a one-sided way. (Keshari 2019.)

This activity is clearly necessary for any data analyst. Data examination is the task of studying imprecise, real-world data elements against specific requirements. It is a way of evaluating data using logical reasoning and careful inspection of every piece of data provided, and quantitative tools are used to verify and take the data apart. Extensive data exploration is devoted to discovering patterns, relationships and models in confusing data sets. A data analyst considers both the current situation and the long term: standard analysis helps to understand how the business has performed and to anticipate where business moves and practices are heading, and it can give indications of what could be changed to shift matters in the business's favour. Reporting translates raw data into information, and it alerts organisations so that they can monitor their online business and become aware of when data falls outside expected ranges. Good reporting should raise questions about the business from its end customers. (Keshari 2019.)


2.2 Python in Data Analytics

Python is an interpreted, high-level, general-purpose programming language created by Guido van Rossum. First released in 1991, Python's design philosophy emphasises code readability, notably through its use of significant whitespace. Its language constructs and object-oriented approach are intended to help programmers write clear, logical code for both small and large projects. Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including procedural, object-oriented and functional programming. Python is often described as a "batteries included" language because of its comprehensive standard library. (Dave 2012.)

Data science has gained a great deal of attention over the last few years. The key function of this field is to turn meaningful data into marketing and business strategies that help an organisation grow. The data is stored and examined in order to arrive at coherent findings. Previously, only the top IT companies were involved in this field, but today organisations from many sectors and fields, such as e-commerce, healthcare and others, make use of data analytics. (Donnabhain 2019.)

There are various tools available for data analysis, for instance Hadoop, R, SAS and SQL. Python, however, is the most popular and easiest-to-use tool for data analysis; it has been called the Swiss Army knife of the coding world because it supports structured programming and object-oriented programming as well as functional programming better than many alternatives. According to the Stack Overflow survey of 2018, Python is the most popular programming language in the world and is considered the most suitable language for data science tools and applications. Python also came out on top among developers in the HackerRank 2018 developer survey. (Donnabhain 2019.)

2.3 Data Science

When the world entered the era of big data, the need for data storage, and for data science, also grew. Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data. Data science is closely related to data mining and big data. It is a blend of various tools, algorithms and machine learning principles whose goal is to discover hidden patterns in raw data. (Sharma 2019.)


FIGURE 2. Phases of Data science (Sharma 2019).

As figure 2 illustrates, a data analyst mostly explains what is going on by processing the history of the data. A data scientist, in contrast, not only performs exploratory analysis to discover insights from the data, but also uses various advanced machine learning algorithms to predict the occurrence of a particular event in the future. A data scientist looks at the data from many angles, sometimes angles not known beforehand. In this way, data science is essentially used to make decisions and predictions using predictive causal analysis, prescriptive analysis and machine learning. (Sharma 2019.)

2.4 Python for Data Science

For quantitative and scientific computing, Python stands out as uniquely capable and easy to use. It has long been an industry favourite and is widely used in sectors such as oil and gas, signal processing and finance. In addition, Python has been used to strengthen Google's internal infrastructure and to build applications such as YouTube. Python is a widely used, very popular and adaptable language that is freely available to the public. Its large collection of libraries is used for data manipulation, and it is very easy for beginners to learn. Besides standing on its own, it integrates effectively with existing frameworks and can be used to tackle very demanding problems. Many banks use it to process information, organisations use it for visualisation and processing, and weather-forecasting organisations, for instance, use it for predictive analytics. (Donnabhain 2019.)


Python is preferred over other data science tools for several reasons. Python is considered a beginner-friendly language, and any student or researcher with only basic knowledge can start experimenting with it. The time spent debugging code and working around the restrictions of the language is also limited. Compared with other programming languages such as C, Java and C#, the time needed to write and execute code is shorter, which lets developers and engineers spend more energy on the computation itself. Python provides an extensive collection of libraries covering data handling and artificial intelligence; the most popular libraries include scikit-learn, TensorFlow, Seaborn, PyTorch and Matplotlib. Many practical data science and artificial intelligence resources are freely available online. Compared with other programming languages such as Java and R, Python is considered a more adaptable and faster language, and it offers the flexibility to solve problems that are hard to approach with other programming languages. Many organisations use it to create various types of applications and rapid prototypes. There are also several visualisation options in Python: the Matplotlib library provides a solid foundation on which other libraries, such as ggplot and pandas plotting, are built. These packages help to produce output that can be used on the web and in graphic design. (Donnabhain 2019.)

2.5 Importance of Data Science

Data science is a combination of data inference and analysis that uses scientific methods, processes, algorithms and systems to extract knowledge. It works with both structured and unstructured data to produce insights. Data science is a concept that unifies statistics, AI and data analysis; it clearly draws on techniques and theories from the fields of mathematics, statistics and information science. This section discusses the importance of data science and current trends in the data science industry. (Onlineitguru 2019.)

FIGURE 3. Importance of Data science in different fields (Onlineitguru 2019).


As shown in figure 3, the importance of data science can be grouped into three kinds of analytics. More than 7 billion devices are connected to the internet at present, and roughly 3.5 million terabytes of data are produced each day; before the end of 2019 the volume may approach a million terabytes of data continuously. Descriptive analytics describes snippets of information based on the data set at hand. Predictive analytics looks at a problem and forecasts or estimates outcomes from historical data; for example, a bank statement can be used to anticipate what future costs will be. Prescriptive analytics is used when behaviour needs to be corrected; for instance, when money is being spent on food and travelling, prescriptive analysis can identify the best category to work on in order to reduce the costs. (Onlineitguru 2019.)

2.6 Python package for Data Science

Python libraries and packages are collections of helpful modules and functions that reduce the amount of code needed in everyday work. There are more than 137,000 Python libraries and 198,826 Python packages ready to ease a developer's regular programming experience. These libraries and packages are intended for a wide variety of modern solutions. Python libraries and packages play a fundamental role in everyday machine learning, and their use is not limited to machine learning: data science, image and data manipulation, and data visualisation are all part of their many applications. Python packages are sets of Python modules, while Python libraries are groups of Python functions aimed at carrying out special tasks. However, this thesis discusses both libraries and packages together for simplicity. (Aurora 2020.)

2.6.1 Numpy

In the Python programming language, NumPy is a library that adds support for large multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays. The predecessor of NumPy, Numeric, was originally created by Jim Hugunin with contributions from several other developers. In 2005, Travis Oliphant created NumPy by incorporating features of the competing Numarray into Numeric, with extensive modifications. NumPy is open-source software and has many contributors. (Aurora 2020.)


NumPy (Numerical Python) is the fundamental package for numerical computation in Python; it contains a powerful N-dimensional array object. It has around 18,000 comments on GitHub and an active community of 700 contributors. It is a general-purpose array-processing package that provides high-performance multidimensional objects called arrays, together with tools for working with them. NumPy also partly addresses the slowness problem by providing these multidimensional arrays as well as functions and operators that work efficiently on them. The main feature of NumPy is that it provides fast, precompiled functions for numerical routines. It also supports array-oriented computing for better efficiency, follows an object-oriented approach, and offers compact and faster computations through vectorization. (Aurora 2020.)
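To make the array-oriented, vectorized computing described above concrete, a minimal sketch follows; it assumes only that NumPy is installed, and the sample values are invented for illustration.

import numpy as np

# Invented sample values for one numeric feature.
ages = np.array([22.0, 38.0, 26.0, 35.0, 29.0])

# Vectorized, element-wise operations run in precompiled loops,
# with no explicit Python for-loop.
centred = ages - ages.mean()

print(ages.mean(), ages.std())   # basic summary statistics
print(centred)                   # each value minus the mean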

2.6.2 Pandas

In computer programming, Pandas is a software library written for data manipulation and analysis in the Python programming language. In particular, it offers data structures and operations for manipulating numerical tables and time series. It is free software released under the three-clause BSD licence. The name is derived from the term "panel data", an econometrics term for data sets that include observations over multiple time periods for the same individuals. Pandas (Python data analysis) is an indispensable part of the data science life cycle. It is the most popular and widely used Python library for data science, alongside NumPy and Matplotlib. With around 17,000 comments on GitHub and an active community of 1,200 contributors, it is heavily used for data analysis and cleaning. Pandas provides fast, flexible data structures, such as the DataFrame, which are designed to work with structured data quickly and intuitively. (Aurora 2020.)
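As a minimal sketch of the fast, flexible data structures mentioned above, the lines below build a small DataFrame and inspect it; the column names and rows are hypothetical and serve only to show the kind of tabular handling Pandas provides.

import pandas as pd

# Hypothetical rows, used only to illustrate the DataFrame structure.
df = pd.DataFrame({
    "country": ["China", "Japan", "Thailand"],
    "confirmed": [547, 2, 2],
})

print(df.head())               # first rows of the table
print(df.describe())           # basic descriptive statistics
print(df["confirmed"].sum())   # column-wise aggregation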

2.6.3 Matplotlib

Matplotlib is a plotting library for the Python programming language whose numerical backbone is NumPy. Matplotlib provides an object-oriented API for embedding plots into applications that use general-purpose GUI toolkits such as Tkinter, wxPython, Qt or GTK+. There is also a procedural "pylab" interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged. SciPy makes use of Matplotlib. Matplotlib produces powerful and attractive visualisations. It has around 26,000 comments on GitHub and a very vibrant community of around 700 contributors. Because of the graphs and plots that it produces, it is widely used for data visualisation. It also provides an object-oriented API that can be used to embed those plots into applications. (Aurora 2020.)
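A small sketch of producing a plot of the kind described above; the day and case numbers are invented purely for illustration, and only Matplotlib is assumed to be installed.

import matplotlib.pyplot as plt

# Invented example data: day number versus number of cases.
days = [1, 2, 3, 4, 5]
cases = [45, 62, 121, 198, 291]

plt.plot(days, cases, marker="o")
plt.xlabel("Day")
plt.ylabel("Confirmed cases")
plt.title("Example line plot")
plt.show()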

2.6.4 TensorFlow

TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic maths library and is also used for machine learning applications such as neural networks. It is used for both research and production at Google. TensorFlow was developed by the Google Brain team for internal Google use and was released under the Apache License 2.0 on November 9, 2015. TensorFlow is a library for high-performance numerical computation with around 35,000 comments on GitHub and a lively community of around 1,500 contributors. It is used across various scientific fields. TensorFlow is a framework for defining and running computations that involve tensors, which are partially defined computational objects. (Aurora 2020.)
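A brief sketch of defining and running a computation over tensors, assuming TensorFlow 2.x is installed; the tensor values are arbitrary and chosen only for demonstration.

import tensorflow as tf

# Two constant tensors with arbitrary example values.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# Tensor operations: a matrix multiplication followed by a reduction.
product = tf.matmul(a, b)
total = tf.reduce_sum(product)

print(product.numpy())
print(total.numpy())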

2.6.5 Scipy

SciPy is a free and open-source Python library used for scientific and technical computing. SciPy contains modules for optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE solvers and other tasks common in science and engineering. SciPy builds on the NumPy array object and is part of the NumPy stack, which includes tools like Matplotlib, Pandas and SymPy, together with an expanding set of scientific computing libraries. This NumPy stack has a similar user base to applications such as MATLAB, GNU Octave and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack. (Aurora 2020.)

SciPy (Scientific Python) is another free and open-source Python library widely used in data science for high-level computations. SciPy has around 19,000 comments on GitHub and an active community of around 600 contributors. It is extensively used for scientific and technical computations, because it extends NumPy and provides many user-friendly and efficient routines for scientific calculations. (Aurora 2020.)
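A short sketch of one of the routines SciPy provides, here numerical integration; the function and interval are chosen only as an example, and SciPy and NumPy are assumed to be installed.

import numpy as np
from scipy import integrate

# Numerically integrate sin(x) from 0 to pi; the exact answer is 2.
value, abs_error = integrate.quad(np.sin, 0.0, np.pi)
print(value, abs_error)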


2.7 Data Science VS Data Analytics

Data science is a collective term for a set of increasingly powerful fields that focus on mining large data sets and finding new, creative insights, models, technologies and procedures. Data analytics, by contrast, is concerned with delivering actionable insights that help a company's specialists quickly; it can be seen as one component of the broader practice of data science. (Durcevic 2018.)

FIGURE 4. Comparison of data science with data analytics (Durcevic 2018).

When comparing data science and data analytics, one element that distinguishes the two disciplines is the set of skills needed to succeed in each. For data analytics, fundamental mathematical knowledge and statistical skills are essential, as are programming skills and working knowledge of online data visualisation tools and intermediate statistics. In data science, besides managing and preparing large amounts of unstructured data and extracting insights from it, extensive knowledge of SQL databases and coding is still required. Data scientists increasingly need sophisticated skills in data presentation, exploratory research, programming, data acquisition and guiding methodology. Fundamentally, they must have some machine learning and design or programming skill in order to be able to work with the data. (Durcevic 2018.)

Another difference between data analytics and data science lies in the scope of the problems they address, whether on a small or a large scale. As mentioned above, data science is fundamentally a broad interdisciplinary field that encompasses data analytics and is closely linked with the large-scale processing of structured and unstructured information. Data analytics, on the other hand, is a more focused field that digs into specific components of business operations, and its ultimate purpose is to inform departmental models and streamline how they work. Focusing on structured information, many data analytics models show the current situation and its impact on the company. Although the two disciplines are applied across a wide range of companies, specialisations, ideas and practices, data science is often used in areas such as business analytics, search engine design and autonomous systems, including machine learning and AI, whereas data analytics, as an evolving and progressive discipline of applied data technology, is often used within healthcare, retail, gaming and travel companies to respond quickly to business challenges and goals. (Durcevic 2018.)

Another basic way of separating data analytics from data science is the goal of each discipline. Although the point has already been touched on, it is important enough to emphasise: the primary goal of data science is to use a range of advanced methods and insights to find the questions that need to be asked in order to drive innovation, progress and development, whereas data analytics aims to uncover actionable information from existing data in clearly defined areas, tied to concrete questions, activities and key performance indicators. (Durcevic 2018.)

Beyond the comparison, when examining data science and data analytics one must also pay attention to the similarities between the two: the most important one is that both make use of large amounts of information. Each discipline works with data in different ways to achieve different results, but no matter how different they are, both can use big data to bring benefits to an industry, brand, company or organisation. Organisations that decide to use the full capacity of big data analysis can increase their operating margins by up to 60 %, and because both areas focus on substantial amounts of information, the rewards of data science and data analytics can be remarkable. (Durcevic 2018.)


3 DATA WRANGLING

Data wrangling is the process of cleaning and unifying messy and complex data sets for easy access and analysis. With the amount of data and the number of data sources growing and expanding rapidly, it is becoming increasingly important for the large amounts of available data to be organised for analysis. This process typically involves manually converting data from one raw form into another format in order to allow more convenient consumption and organisation of the data. Data wrangling refers to the process of cleaning, restructuring and enriching the available raw data into a more usable format. This enables the researcher to speed up decision-making and thereby gain insights in less time. Many large companies in the industry derive much of their advantage from their data, and when the data is incomplete, considerable amounts of it have to be broken down and examined. It turns out that organising and cleaning data before analysis is very valuable and can help organisations analyse larger-scale data quickly. (Acadgild 2018.)

3.1 Pre-processing data

A data mining technique that involves transforming raw data into a usable form is known as data preprocessing. Real-world data is often inconsistent and incomplete, affected by conflicting practices and deviations, and it may contain many errors. Data preprocessing is an effective strategy for resolving these problems: it prepares raw data for further handling. Database-driven applications such as customer relationship management and rule-based applications (such as neural networks) use data preprocessing. Data preprocessing is an important step in the data mining process. The expression "garbage in, garbage out" is particularly relevant to data mining and machine learning projects. Data-gathering methods are often loosely controlled, resulting in out-of-range values (e.g. Income: −100), impossible data combinations (e.g. Sex: Male, Pregnant: Yes) and missing values. Analysing data that has not been carefully screened for such issues can produce misleading results. Therefore the representation and quality of the data come first, before running any analysis. Often, data preprocessing is the most important phase of a machine learning project, particularly in computational science. (techopodia 2018.)


If a lot of irrelevant and redundant information is present, or the data is noisy and unreliable, then knowledge discovery during the training phase becomes more difficult. Data preparation and filtering steps can take considerable processing time. Data preprocessing includes cleaning, instance selection, normalization, transformation, feature extraction and selection, and so on. The product of data preprocessing is the final training set. Data preprocessing may also influence how the results of the final data processing can be interpreted. This aspect should be considered carefully when interpretation of the results is a key point, as in the multivariate processing of chemical data (chemometrics). (techopodia 2018.)
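As a concrete illustration of screening for out-of-range values such as the income example mentioned above, a minimal pandas sketch follows; the column names, values and the accepted range are all hypothetical.

import pandas as pd

# Hypothetical records, including one out-of-range income value.
df = pd.DataFrame({
    "income": [42000, -100, 38000],
    "age": [34, 29, 51],
})

# Flag rows whose income falls outside a plausible range, then drop them.
valid = df["income"].between(0, 1_000_000)
print(df[~valid])                         # rows that would be screened out
clean = df[valid].reset_index(drop=True)
print(clean)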

3.2 Dealing with missing values

Most datasets contain missing data, incorrectly encoded data, or other data that cannot be used for modelling. Sometimes missing data is simply that: missing, with no actual value in a field, for instance an empty string in a CSV file. At other times missing data is encoded with a special keyword or string; some common encodings are NA, N/A, None and -1. Before using data with missing fields, those fields must be transformed so that they can be used for analysis and modelling. There are machine learning algorithms and packages that can automatically detect and deal with missing data, but it is still good practice to transform that data manually. (Tunguz 2018.)

As with many other parts of data science, there is a considerable amount of craft and skill involved in deciding how to handle missing data, and here data science may really be more art than science. Understanding the data and the domain from which it comes is important; for example, imputing the mean makes more sense for some features and domains than for others. Having missing values in the data is not necessarily a misfortune: as a rule, one can gather a lot of useful information from missing values, and they can be exploited for feature engineering. One must be careful, however: if there is a feature in the dataset with a very high proportion of missing values, then that feature, much like any other low-variance feature, should be dropped. (Tunguz 2018.)
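The following minimal sketch shows one way to handle the encodings listed above (NA, N/A, None and -1) with pandas; the column names and values are invented for demonstration.

import io
import pandas as pd

# A small CSV text containing the missing-value encodings discussed above.
csv_text = "name,age,income\nAnna,34,42000\nBen,N/A,38000\nCara,29,-1\n"

# Treat the listed keywords as missing values while reading.
df = pd.read_csv(io.StringIO(csv_text), na_values=["NA", "N/A", "None", -1])

print(df.isnull().sum())                          # missing values per column

df["age"] = df["age"].fillna(df["age"].mean())    # impute a numeric column
df = df.dropna()                                  # drop remaining incomplete rows
print(df)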


3.3 Data formatting

Data comes in various sizes and shapes: it may be numeric data, text, multimedia, research data or several other types of data. A data format is the arrangement used for encoding data; data can be encoded in different ways so that various applications and programs can read, recognise and use it. When selecting a data format, a few issues should be checked, such as the characteristics and size of the data, the organisation's infrastructure and the use case. When the read and write speed of a data file matters, some tests should be performed in order to select the correct format. In most cases three different kinds of data formats are distinguished, also referred to as GIS data formats. The data locations of these formats are administered differently and used for different purposes. (ECML 2018.)

3.3.1 Data format based on file

In this type of data format the data set is embedded in a single file or in multiple files, which are stored in an arbitrary folder. Generally only a single file is used, as for DGN; the Shapefile format, by contrast, consists of three files. The file name extensions of these three files differ from one another: SHX, SHP and DBF. Each of the three files is important and required, because the three files perform different functions internally. A format that uses a file name as the data source may contain several layers, and it can sometimes be identified from the file name alone. In a Shapefile, every file contains a data access point, and there is only one layer, named after the file. Examples of file-based data formats include MicroStation design files (DGN), Shapefiles and GeoTIFF images. (ECML 2018.)

3.3.2 Data format based on directory

In this type of data format, whether there is one file or several, they are all stored inside a particular parent folder. In some cases additional folders elsewhere in the directory tree are also required for the format to work properly. The files in these directories represent the various layers of accessible information. For example, PAL.ADF represents polygon data. Since there is more than one file in the folder, additional ADF files are also recorded in an ESRI ArcInfo Coverage; these ADF files contain information about line strings or polygons. All the ADF files together make up the layers available in the folder-based data source. Some examples of directory-based data formats are TIGER from the US Census and ESRI ArcInfo Coverage. (ECML 2018.)

3.3.3 Connections between database

Database connections are in many ways similar to the file- and directory-based data formats described above. For rendering, the map server needs information about the geographic data: it must be pointed to an address inside the database server where the vector dataset lives. The instructions provided through the database connection are held in memory, and the map server then processes them to create a map. The organisation of the data is the most important part, and most attention is focused on the data itself, although metadata and additional features may also be required. Usually the database connection consists of accompanying details such as the host, the server name, the database name, the username and the password. Some examples of database connection formats include ESRI ArcSDE, MySQL and PostGIS. (ECML 2018.)

3.4 Data Normalization

The definition of data normalization is not entirely straightforward, and settling on a single definition can be a little tricky. With all the various descriptions in mind, data normalization is basically a process in which the data in a database is reorganised so that users can make proper use of it for further queries and analysis. When embarking on a data normalization process, a few goals are top priority. The first is to delete all duplicate data that appears in the data set; this cleans up the database and removes any redundancy that may occur, since redundant entries can adversely affect the analysis by introducing unnecessary values, and removing them helps to reorganise the data for inspection. A related goal is to group the data logically: items of data that relate to each other should be stored together in a normalised database, so that if items are linked, they sit close to each other in the data set. (Import.io 2018.)

Sometimes a data set contains conflicting data, so data normalization also aims to resolve such conflicts and settle them before continuing. The third goal is to format the data: this converts the data into a form that allows further processing and analysis. Finally, data normalization consolidates the data, merging it into a much more organised structure. (Import.io 2018.)


3.4.1 Methods of normalizing data

There are several ways to normalize data; three common strategies are described here. The first technique, referred to as simple feature scaling, just divides each value by the maximum value of that feature. This makes the new values range between zero and one. The second technique, called min-max, takes each value X(old), subtracts the minimum value of that feature from it, and then divides by the range of the feature. Again, the resulting new values range between zero and one. The third technique is called the z-score or standard score. For this method, the mean μ of the feature is subtracted from each value, and the result is divided by the standard deviation σ. The resulting values are centred around zero and typically range between minus three and plus three, although they can fall outside this range. (Zhang 2019.)

X(new) = X(old) / X(max)                          (simple feature scaling)
X(new) = (X(old) – X(min)) / (X(max) – X(min))    (min-max)
X(new) = (X(old) – μ) / σ                         (z-score)

3.4.2 Simple feature scaling in Python

Feature scaling is a technique used to standardize the range of independent variables or features of the data. In data preparation it is also known as data normalization and is generally performed during the data preprocessing step. Following the earlier example, the normalization technique can be applied to the length feature. With simple feature scaling, each value is divided by the maximum value of the feature; using the Pandas max method, this can be done in a single line of code. (Zhang 2019.)
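A minimal sketch of simple feature scaling with Pandas is shown below. The DataFrame df and the column name "length" are assumed purely for illustration; in the earlier example the column would come from the loaded dataset.

import pandas as pd

# hypothetical data standing in for the length feature of the earlier example
df = pd.DataFrame({"length": [120.0, 150.0, 180.0, 200.0]})

# simple feature scaling: divide every value by the maximum of the column,
# so the scaled values fall between 0 and 1
df["length"] = df["length"] / df["length"].max()
print(df["length"])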

3.4.3 Min-Max in Python

Min-max normalization is one of the most common ways to normalize data. For each feature, the minimum value of that feature is transformed into a 0, the maximum value is transformed into a 1, and every other value is transformed into a decimal between 0 and 1. For instance, if the minimum value of a feature is 20 and the maximum value is 40, then 30 is transformed to about 0.5, since it lies halfway between 20 and 40. The formula is X(new) = (X(old) – X(min)) / (X(max) – X(min)). (Codeacademy 2019.)

Min-max normalization has one critical drawback: it does not handle outliers very well. For instance, if there are 99 values between 0 and 40 and one value of 100, then the 99 values will all be transformed to values between 0 and 0.4. That data is just as squished as it was before, as the graph below illustrates. (Codeacademy 2018.)

GRAPH 1. Min-Max normalization (Codeacademy 2018).
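A minimal sketch of min-max normalization with Pandas follows; the DataFrame df and the column name "length" are again illustrative assumptions (scikit-learn's MinMaxScaler could be used for the same purpose).

import pandas as pd

df = pd.DataFrame({"length": [120.0, 150.0, 180.0, 200.0]})

# min-max: subtract the minimum and divide by the range,
# so the minimum maps to 0 and the maximum maps to 1
df["length"] = (df["length"] - df["length"].min()) / (df["length"].max() - df["length"].min())
print(df["length"])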

3.4.4 Z-score in Python

Z-score normalization is a strategy for normalizing data that avoids this outlier problem. The formula for z-score normalization is:

X(new) = (X(old) – μ) / σ

Here, μ is the mean value of the feature and σ is the standard deviation of the feature. If a value is equal to the mean of all the values of the feature, it will be normalized to 0. If it is below the mean it will become a negative number, and if it is above the mean it will become a positive number. The size of those negative and positive numbers is determined by the standard deviation of the original feature: if the unnormalized data had a large standard deviation, the normalized values will be closer to 0. Graph 2 below shows the same data as earlier, this time using z-score normalization. (Codeacademy 2019.)

GRAPH 2. Z-Score Normalization in python (Codeacademy 2019).

While the data still looks squished, the points are now on roughly the same scale for both features. Almost all points lie between -2 and 2 on both the x-axis and the y-axis. The only potential downside is that the features are not on exactly the same scale. With min-max normalization, both features were guaranteed to lie between 0 and 1; with z-score normalization, the x-axis now has a range from about -1.5 to 1.5, while the y-axis has a range from about -2 to 2. This is nevertheless much better than before: the x-axis, which previously had a range of 0 to 40, no longer dwarfs the y-axis.
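A minimal sketch of z-score normalization with Pandas; as before, the DataFrame df and the column name "length" are assumed only for illustration.

import pandas as pd

df = pd.DataFrame({"length": [120.0, 150.0, 180.0, 200.0]})

# z-score: subtract the mean and divide by the standard deviation,
# so the scaled values are centred around 0
df["length"] = (df["length"] - df["length"].mean()) / df["length"].std()
print(df["length"])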


3.5 Binning in Python

Binning, or discretization, is the process of transforming numerical variables into categorical counterparts. An example is to bin values for Age into categories such as 20-39, 40-59 and 60-79. Numerical variables are usually discretized in modelling techniques based on frequency tables (for instance decision trees). In addition, binning may improve the accuracy of predictive models by reducing noise or non-linearity. Finally, binning permits easy identification of outliers and of invalid and missing values of numerical variables. (Klein 2019.)

Data binning, which is also called bucketing or discretization, is a technique used in data preparation and statistics. Binning can be used, for example, when there are more potential data points than observed data points. An example is to bin the body heights of people into intervals or categories. Assume the heights of 30 people are measured; the values can lie, roughly guessing, between 1.30 metres and 2.50 metres. Theoretically, 120 different centimetre values are possible, yet at most 30 distinct values can occur in the collected data. One way to group the values could be to place the measured values into bins ranging from 1.30-1.50 metres, 1.50-1.70 metres, 1.70-1.90 metres and so on. This means that the original data values are assigned to a bin into which they fit by their size, and the original values are replaced by values representing the corresponding intervals. Binning is a form of quantization. Bins do not necessarily need to be numerical; they can be categorical values of any kind, such as dogs, cats and hamsters. Binning may also be used to reduce the amount of data, for example by combining neighbouring pixels into single pixels. (Klein 2019.)
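A minimal sketch of the height example above, using the Pandas cut function; the measured heights are invented values used purely for illustration.

import pandas as pd

# hypothetical measured heights in metres
heights = pd.Series([1.42, 1.55, 1.63, 1.71, 1.78, 1.86])

# bin edges from 1.30 m to 1.90 m in 0.20 m steps
bins = [1.30, 1.50, 1.70, 1.90]
labels = ["1.30-1.50 m", "1.50-1.70 m", "1.70-1.90 m"]

# each original value is replaced by the interval it falls into
height_bins = pd.cut(heights, bins=bins, labels=labels)
print(height_bins.value_counts())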

3.6 Categorical variables in Python

Categoricals are a Pandas data type corresponding to categorical variables in statistics. A categorical variable takes on a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class, blood type, country affiliation, observation time or ratings via Likert scales. In contrast to statistical categorical variables, categorical data may have an order (for example 'strongly agree' versus 'agree', or 'first observation' versus 'second observation'), but numerical operations (additions, divisions) are not possible. All values of categorical data are either in the categories or np.nan. Order is defined by the order of the categories, not the lexical order of the values. Internally, the data structure consists of a categories array and an integer array of codes that point to the actual value in the categories array. (Pandas 2014.)
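A minimal sketch of an ordered Pandas categorical; the example ratings are invented for illustration.

import pandas as pd

# an ordered categorical built from Likert-style ratings
ratings = pd.Categorical(
    ["agree", "strongly agree", "agree", "disagree"],
    categories=["disagree", "agree", "strongly agree"],
    ordered=True,
)
s = pd.Series(ratings)

print(s.cat.categories)  # the categories array
print(s.cat.codes)       # integer codes pointing into the categories array
print(s.min(), s.max())  # order-aware operations are allowed for ordered categoricals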


4 EXPLORATORY DATA ANALYSIS IN PYTHON

In data mining, exploratory data analysis (EDA) is an approach to analysing data sets in order to summarize their main characteristics, often with visual methods. EDA is used to see what the data can tell before the formal modelling task begins. It is difficult to look at a whole column of numbers or an entire spreadsheet and determine the important characteristics of the data, and identifying insights by staring at plain numbers can be tedious, boring and overwhelming. Exploratory data analysis techniques have been developed as a remedy for this. Exploratory data analysis methods are usually cross-classified in two ways. First, each method is either univariate (examining a single variable) or multivariate (usually two variables at a time). Second, each method is either graphical or non-graphical. (Chartio 2019.)

4.1 Descriptive Statistics

Descriptive statistics are brief descriptive coefficients that summarize a given data set, which can be either a representation of an entire population or a sample of one. Descriptive statistics are divided into measures of central tendency and measures of variability (spread). Measures of central tendency include the mean, median and mode, while measures of variability include the standard deviation, variance, the minimum and maximum values, and the kurtosis and skewness. (Kenton 2019.)
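In Python, most of these summary statistics can be obtained directly from a Pandas DataFrame. A minimal sketch is shown below; the small data set is invented for illustration.

import pandas as pd

df = pd.DataFrame({"score": [2, 3, 4, 5, 6, 6]})

print(df["score"].describe())  # count, mean, std, min, quartiles, max
print(df["score"].mean())      # measures of central tendency
print(df["score"].median())
print(df["score"].mode())
print(df["score"].var())       # measures of variability
print(df["score"].skew())
print(df["score"].kurt())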

4.1.1 Interpretation of Descriptive Statistics

Descriptive statistics, in simple terms, help describe and understand the features of a data set by giving short summaries about the sample and the measures of the data. The best-known types of descriptive statistics are the measures of centre: the mean, median and mode, which are used at almost all levels of maths and statistics. The mean, or the average, is calculated by adding up all the figures within the data set and then dividing by the number of figures in the set. For example, the sum of the following data set is 20: (2, 3, 4, 5, 6). The mean is 4 (20/5). The mode of a data set is the value that appears most often, and the median is the figure situated in the middle of the data set, i.e. the figure separating the higher figures from the lower figures. Besides these, there are less common types of descriptive statistics that are still very important. (Kenton 2019.)


People use descriptive statistics to repurpose hard-to-understand quantitative insights from a large data set into bite-sized descriptions. A student's grade point average (GPA), for instance, provides a good understanding of descriptive statistics. The idea of a GPA is that it takes data points from a wide range of exams, classes and grades and averages them together to provide a general understanding of a student's overall academic ability. (Kenton 2019.)

4.1.2 Processes of Descriptive Statistics

Every descriptive statistic is either a measure of central tendency or a measure of variability, also known as a measure of dispersion. Measures of central tendency focus on the average or middle values of data sets, whereas measures of variability focus on the dispersion of the data. These two types of measures use graphs, tables and general discussion to help people understand the meaning of the analysed data. Measures of central tendency describe the centre position of a distribution for a data set. A person analyses the frequency of each data point in the distribution and describes it using the mean, median or mode, which measure the most common patterns of the analysed data set. (Kenton 2019.)

Measures of variability, or measures of spread, aid in analysing how dispersed the distribution is for a set of data. For example, while a measure of central tendency may give a person the average of a data set, it does not describe how the data is distributed within the set. So, while the average of the data may be 65 out of 100, there can still be data points at both 1 and 100. Measures of variability help convey this by describing the shape and spread of the data set. Range, quartiles, absolute deviation and variance are all examples of measures of variability. Consider the following data set: 5, 19, 24, 62, 91, 100. The range of that data set is 95, which is calculated by subtracting the lowest number (5) in the data set from the highest (100). (Kenton 2019.)


4.2 GroupBy in Python

The GROUP BY statement in SQL is used to arrange identical data into groups with the help of aggregate functions: if a column has the same value in several rows, those rows are arranged into one group. Python is a popular language for performing data analysis, largely because of the excellent ecosystem of data-centric Python packages. Pandas is one of those packages and makes importing and analysing data much easier; its groupby operation plays the same role as SQL's GROUP BY. (Ranjan 2019.)
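A minimal sketch of a Pandas groupby, analogous to SQL's GROUP BY; the DataFrame and its columns are invented for illustration.

import pandas as pd

df = pd.DataFrame({
    "class": ["first", "third", "first", "third", "second"],
    "fare":  [80.0, 8.0, 60.0, 7.5, 25.0],
})

# rows with the same value in "class" form one group;
# an aggregate function is then applied to each group
print(df.groupby("class")["fare"].mean())
print(df.groupby("class").size())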

4.3 Correlation

Correlation is a statistical tool for measuring the relationship between two or more variables; its purpose is to relate the variation in one variable to the variation in another variable. Correlation is often described as the degree of a linear relationship between two quantitative variables, such as height and weight. A slightly looser definition is also commonly used, in which correlation simply means that there is some association between the two variables. This thesis describes the characteristics of positive and negative correlation, provides some examples of correlations, shows how to quantify a correlation and points out some common correlation pitfalls. When the value of one variable increases as the value of another variable increases, the correlation is called positive. A negative correlation means that the value of one variable decreases as the value of the other variable increases, so the two variables have an opposite relationship. (Bock 2018.)
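A minimal sketch of computing correlations with Pandas; the height and weight values are invented for illustration.

import pandas as pd

df = pd.DataFrame({
    "height": [150, 160, 170, 180, 190],
    "weight": [52, 60, 66, 75, 84],
})

# Pearson correlation coefficient between the two columns
print(df["height"].corr(df["weight"]))

# full correlation matrix of the DataFrame
print(df.corr())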

4.3.1 Negative and Positive Correlation

Whether the correlation between two variables is positive or negative depends on the direction of change. The correlation is positive when the two variables move in the same direction, that is, when an increase in one variable is accompanied, on average, by a corresponding increase in the other, and a decrease in one is accompanied by a decrease in the other. When the two variables move in opposite directions, so that an increase in one variable is accompanied by a decrease in the other, the correlation is negative. (Business Jargons 2019.)


4.3.2 Multiple and Partial Correlation

Whether the correlation is simple, partial or multiple depends on the number of variables studied. When only two variables are considered, the correlation is simple. When at least three variables are involved, the correlation is either multiple or partial. When three or more variables are studied simultaneously, the relationship is called multiple correlation; this is the case, for example, when the relationship between crop yield per acre of land and both the amount of fertilizer used and rainfall is considered. A partial correlation is studied when, out of several related variables, two variables that affect each other are considered while the influence of the other variables is kept constant. For example, in the previous case, examining the relationship between yield and fertilizer at a fixed temperature would be a partial correlation. (Business Jargons 2019.)

4.4 Linear and Non-Linear (Curvilinear) Correlation

When the amount of change in one variable tends to bear a constant ratio to the amount of change in another variable, the correlation is said to be linear: for two such variables the ratio of change remains the same throughout. When the amount of change in one variable does not bear a constant ratio to the amount of change in the other variable, the correlation is called non-linear or curvilinear. For example, if the amount of fertilizer is doubled, wheat production does not simply double in proportion. (Business Jargons 2019.)


4.5 Analysis of Variance ANOVA

Analysis of variance (ANOVA) is an analysis tool used in statistics that splits the observed aggregate variability found inside a data set into two parts: systematic factors and random factors. The systematic factors have a statistical influence on the given data set, while the random factors do not. Analysts use the ANOVA test to determine the influence that independent variables have on the dependent variable in a regression study. The t- and z-test methods developed in the twentieth century were used for statistical analysis until 1918, when Ronald Fisher created the analysis of variance method. ANOVA is also called the Fisher analysis of variance, and it is an extension of the t- and z-tests. The term became well known in 1925, after appearing in Fisher's book "Statistical Methods for Research Workers". It was first used in experimental psychology and was later expanded to subjects that are more complex. (Kenton 2019.)

The Formula of ANOVA is:

F = MST/MSE

Where:

F = ANOVA coefficient

MST = Mean sum of squares due to treatment

MSE = Mean sum of squares due to error
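A one-way ANOVA can be run in Python with SciPy. The sketch below uses invented scores from three hypothetical groups; f_oneway returns the F statistic (the F-ratio described above) and the corresponding p-value.

from scipy import stats

# hypothetical scores from three independent groups
group_a = [85, 90, 78, 92, 88]
group_b = [70, 75, 80, 68, 74]
group_c = [88, 95, 91, 84, 90]

f_statistic, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f_statistic, p_value)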

4.5.1 What analysis of variance reveals

The ANOVA test is the initial step in analysing the factors that affect a given data set. Once the test is finished, an analyst performs additional testing on the systematic factors that measurably contribute to the data set's inconsistency. The analyst uses the ANOVA test results in an F-test to generate additional data that aligns with the proposed regression models. The ANOVA test allows a comparison of more than two groups at the same time to determine whether a relationship exists between them. The result of the ANOVA formula, the F statistic (also called the F-ratio), allows for the analysis of multiple groups of data to determine the variability between samples and within samples. If no real difference exists between the tested groups, which is called the null hypothesis, the result of the ANOVA's F-ratio statistic will be close to 1. Fluctuations in its sampling will likely follow the Fisher F distribution. This is a family of distribution functions with two characteristic numbers, called the numerator degrees of freedom and the denominator degrees of freedom. (Kenton 2019.)

4.5.2 Example of How to Use ANOVA

A researcher might, for example, test students from multiple colleges to see whether students from one of the colleges consistently outperform students from the other colleges. In a business application, an R&D researcher might test two different processes of creating a product to see whether one process is better than the other in terms of cost efficiency. The type of ANOVA test used depends on a number of factors. It is applied when data needs to be experimental. Analysis of variance is employed if there is no access to statistical software, which results in computing ANOVA by hand. It is simple to use and best suited for small samples. With many experimental designs, the sample sizes have to be the same for the various factor level combinations. ANOVA is helpful for testing three or more variables. It is similar to multiple two-sample t-tests, but it results in fewer type I errors and is appropriate for a range of issues. ANOVA groups differences by comparing the means of each group and includes spreading out the variance into diverse sources. It is employed with subjects, test groups, between groups and within groups. (Kenton 2019.)

4.5.3 One-Way ANOVA Versus Two-Way ANOVA

There are two main types of ANOVA: one-way (or unidirectional) and two-way. One-way or two-way refers to the number of independent variables in the analysis of variance test. A one-way ANOVA evaluates the impact of a sole factor on a sole response variable; it determines whether all the samples are the same. The one-way ANOVA is used to determine whether there are any statistically significant differences between the means of three or more independent (unrelated) groups. A two-way ANOVA is an extension of the one-way ANOVA. With a one-way, there is one independent variable affecting a dependent variable; with a two-way ANOVA, there are two independent variables. For instance, a two-way ANOVA enables a company to compare worker productivity based on two independent variables, such as salary and skill set. It is used to observe the interaction between the two factors and tests the effect of the two factors at the same time. (Kenton 2019.)


4.6 Exploratory Analysis of the Titanic data set

This section illustrates important concepts of machine learning through a practical exercise using a real data set about the Titanic disaster. The exercise consists of performing exploratory analysis on the Titanic data set. First, the Titanic disaster data set is introduced: where the data comes from and what the data set looks like. Then an exploratory analysis of the data set is performed. This chapter also briefly introduces the notion of data visualization, which is discussed in more detail during the exercise. The analysis introduces some concepts of machine learning and in particular discusses supervised learning. These concepts are applied to the Titanic data set: the analysis describes survival prediction as a supervised learning problem and then implements it using scikit-learn.

For this implementation, the analysis introduces the library matplotlib for data visualization and scikit-learn for machine learning. If the libraries are not yet available in an Anaconda environment, they can be installed by simply running the command conda install followed by the library name. The story of the Titanic is well known: it was a ship that was supposed to be unsinkable, but during its first voyage more than 100 years ago the tragedy happened, and now the ship is at the bottom of the ocean. More about the Titanic can be read on the web; for instance, Wikipedia has a page with countless details. The data set itself, a data set about passenger survival, is available on Kaggle and can be downloaded from this address: https://www.kaggle.com/c/titanic/data.

Turning to the coding, pandas is imported using the usual alias pd. Depending on the location where the data was saved, the file and folder names might need to be fixed according to the setup; after that, the analysis loads the data into a data frame. The focus is on the training set first, and the other file, which will be used later, is ignored for the time being. There are 891 records in the data frame; the other set has about 400 more records, for a grand total of about 1,300. This number is much smaller than the actual number of passengers on the Titanic, probably only about half.


FIGURE 5. Importing pandas with the alias pd
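A minimal sketch of the loading step shown in figure 5; the file name train.csv and its location are assumptions that depend on where the Kaggle files were saved.

import pandas as pd

# load the Kaggle training set into a data frame
# (the path may need to be fixed according to the setup)
df = pd.read_csv("train.csv")

print(len(df))    # 891 records in the training set
print(df.head())  # first rows, showing the structure of the data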

Using the head function, as in figure 5, the structure of the data can be examined. There are several fields: the passenger ID, which is just a serial number; the Survived variable, where 0 means the passenger did not survive and 1 means the passenger survived; Pclass, the passenger class (first, second or third); then the full name; the sex, male or female; the age in years; then SibSp, the number of siblings or spouses aboard the Titanic, followed by Parch, the number of parents and children aboard; then the ticket number, the cabin number and finally a field describing where the passenger embarked: S for Southampton in England, the starting point of the journey, C for Cherbourg in France and Q for Queenstown in Ireland. The following figure gives a quick summary of the different attributes to get acquainted with the structure of this data set.

FIGURE 6. Summary of the different attributes


One of the first issues to deal with, shown in figure 6, is checking for missing values. Using the count function it can be seen that a couple of fields, Age, Cabin and Embarked, have missing data; this may or may not turn out to be a problem.

FIGURE 7. Checking the missing values

Figure 7 starts with the minimum and maximum values for the field Age. It can be noticed that the minimum value is zero point something, probably for little babies younger than one year whose age was originally given in months or weeks and later normalized, while the maximum value is 80. Next, the value distributions of some other interesting attributes are analysed. For example, using the value_counts function it can be seen how many passengers survived; looking at the result in terms of percentages, more than 61 percent of the passengers did not survive. Similarly, looking at the gender distribution, there are 577 males and 314 females, and the distribution between classes shows that the majority of the passengers were in third class. This is interesting in itself, but such observations can benefit from displaying the data in graphical form, which is fairly easy to accomplish by combining pandas with matplotlib.


FIGURE 8. Analysing the different value distributions

First, as shown in figure 8, the magic function %matplotlib inline is called, which makes sure that the plots are displayed within the notebook. Secondly, an alpha value of 50% is defined, which is used later simply to make some of the plots look a little nicer. Starting from the survival distribution, the value counts are taken and the distribution is plotted; a bar chart is chosen for this type of plot, so the kind is set to bar. What can be noticed is that the split is about 60 to 40.
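A minimal sketch of this plotting step; df is assumed to be the training data frame loaded earlier, and the alpha value is the one mentioned above.

import matplotlib.pyplot as plt
import pandas as pd

# df is the Titanic training data frame loaded earlier
df = pd.read_csv("train.csv")

alpha_color = 0.5  # the 50% alpha mentioned in the text

# distribution of the Survived variable as a bar chart
# (in a Jupyter notebook, %matplotlib inline keeps the plot inside the notebook)
df["Survived"].value_counts().plot(kind="bar", alpha=alpha_color)
plt.show()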


FIGURE 10. Matplotlib separating by gender


Figure 10 performs the same exercise split by gender, and it can be seen that most of the passengers were male. Finally, the same exercise can be replicated over the passenger classes. Here it is advisable to sort by index, to make sure that the classes first, second and third are displayed in the correct order; on this plot it can be observed that most of the passengers were in third class.

FIGURE 11. Comparing survival with age

Figure 11 attempts to compare survival with age; a scatter plot could be used for this analysis, essentially plotting two numeric variables and looking for some correlation. Because the Survived variable is binary, a scatter plot is perhaps not the best choice. To get a scatter plot, the variables to compare as X and Y simply have to be specified; here X is the Survived variable and Y is the age. Figure 11 shows one dot for each passenger, and it can be noticed that the two distributions appear quite similar: surviving passengers are shown on the right and non-surviving passengers on the left, and both groups contain passengers of all kinds of ages.


FIGURE 12. Counting the values with value_counts

Figure 12 filters the data set where the variable Survived is equal to 1 and then focuses on the variable Age. The data is aggregated by counting the values with value_counts, sorting them by age and finally plotting the numbers. This distribution is more precise, but the labels are quite difficult to read because the data has a huge number of values representing different ages. In such situations bucketing, also called binning, is the solution to this problem.

FIGURE 13. Binning the age groups


Figure 13 buckets the ages into groups of 10: the range goes from 0 to 80, so the boundaries of the bins are 0 to 10, 10 to 20 and so on. The analysis uses the Pandas function cut to bin the age and stores the result in a new field called Age Bin. Figure 13 also customizes the labels for the bins, although that is not required for this example. After binning, the distribution can be plotted again. Now it is easier to read: 20 to 30, or maybe even 20 to 40, is the most common age range, as can be observed in figure 13.
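A minimal sketch of this binning step with pd.cut; df is assumed to be the training data frame, and the column name AgeBin is an illustrative choice.

import pandas as pd

df = pd.read_csv("train.csv")

# bin edges 0, 10, 20, ..., 80 give buckets of width 10
bins = range(0, 90, 10)
df["AgeBin"] = pd.cut(df["Age"], bins=bins)

# distribution of the age buckets for the passengers who survived
survived_ages = df[df["Survived"] == 1]["AgeBin"]
print(survived_ages.value_counts().sort_index())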

FIGURE 14. Age distribution of the passengers

Figure 14 shows the same age distribution for the passengers who did not survive. The plot is slightly different, but not by much, and it looks much the same as before. In fact, the plot for all the passengers, without introducing the survival variable at all, is basically the same. What the figure shows, therefore, is that age does not tell much about the survival of a passenger.


FIGURE 15. Distribution of other variables

Figure 15 looks at other variables, for example cross-referencing the passenger's class with survival. The passengers in first class had better luck, although the split is not very clear, perhaps 60/40 again. For third-class passengers the situation is quite the opposite: the majority of the third-class passengers did not survive.


FIGURE 17. Distribution of male and female passengers

Figure 17 describes the gender of the passengers using the Sex field with value_counts, which shows the distribution of the male and female passengers. According to the figure, most of the male passengers did not survive, whereas the majority of the female passengers, by contrast, did survive.


FIGURE 18. Cross-referencing gender with passenger class

Figure 18 cross-references gender with class, starting with the male passengers in first class. It can be observed how boolean indexing is used to combine the two variables. The majority of male passengers in first class did not survive. Similarly, the majority of male passengers in third class did not survive either, but there the distribution is much more skewed: a male passenger in third class had very bad luck.
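A minimal sketch of the boolean indexing described above; df is assumed to be the training data frame.

import pandas as pd

df = pd.read_csv("train.csv")

# combine two conditions with boolean indexing: male passengers in first class
male_first_class = df[(df["Sex"] == "male") & (df["Pclass"] == 1)]

# survival distribution within that subset
print(male_first_class["Survived"].value_counts())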


FIGURE 19. Distribution of survived and non-survived passengers

Figure 19 concerns the female passengers in first class, and here the distribution is quite clear: almost all the female passengers in first class survived. For female passengers in third class, by contrast, the distribution is about 50/50. In conclusion, being a female in first class is a strong indication of survival, while being a male passenger in third class is the strongest indication of not surviving; age, as noticed previously, does not seem to play a significant role here.

This practical exercise puts everything together with a real data set and extracts some interesting information from the data. It has also made the case for data visualization and shown how it can be useful for data exploration, or exploratory data analysis.


5 MODEL EVALUATION

As a model is developed, evaluating the model represents a major step. Some strategies, for example artificial neural network models, perform evaluation while backpropagation is running, because the core of backpropagation is to compare the predicted value with the actual value at a specific point. In most cases, however, the model is evaluated separately through different strategies. Models can be evaluated effectively when working under supervised learning conditions, because the true values are available for the evaluation techniques to work with. As mentioned in chapter 4, supervised learning models are divided into two categories: regression problems and classification problems. Therefore, the evaluation techniques for these models also fall into the same two categories, and there is an important distinction between the strategies applied to evaluate regression models and classification models. In regression, continuous values are handled, and the different methods try to quantify the error between the actual and the predicted values. (Datavedas 2019.)

When evaluating a classification model, on the other hand, the emphasis is on how many data points have been classified correctly. In order to evaluate a classification model effectively, the ways in which the data is misclassified must also be considered. In addition, two different kinds of classification models have to be handled: some, for instance SVM and KNN, produce the class label directly as output, while others, such as logistic regression and Random Forest, are probabilistic models whose output is the probability of a data point belonging to a particular class. By applying a cut-off value to that probability, these probabilities can be turned into class labels, which ultimately classifies the data. (Datavedas 2019.)

5.1 Regression Models

The evaluation of a regression model is based on determining how close the actual value is to the predicted value. There are techniques, for instance the sum of squared errors, the mean squared error and the root mean squared error, which are different ways of measuring this difference. In addition, there are more advanced strategies, such as the adjusted coefficient of determination, which also take the problem of overfitting into account. (Datavedas 2019.) This chapter explores each of these techniques.


FIGURE 20. Example of Regression models (Datavedas 2019).

As figure 20 indicates, different model evaluation methods can be used with supervised learning problems, which helps to explore how well the models have performed. The basic strategy for evaluating a model is to measure accuracy, a comparison between the predicted value and the actual value; however, this alone is not an ideal technique and can lead to poor decisions. Therefore, different measures have to be used to evaluate different models, and selecting the correct evaluation metric is very important in order to identify and select the correct model among the candidates. (Datavedas 2019.)

5.1.1 Sum Squared Error (SSE), Mean Square Error (MSE) and Root Mean Square Error (RMSE)

These are the most widely used methods for assessing a regression model. To find the Sum of Squared Errors, the difference between the actual and the predicted value is taken for each observation and squared, and the squared differences are then summed up:

SSE = Σ (actual – predicted)²

Dividing this value by the number of observations n gives the Mean Square Error, MSE = SSE / n, and taking the square root of the MSE gives the Root Mean Square Error, RMSE = √MSE. (Datavedas 2018.)
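A minimal sketch of computing these errors (and the MAE discussed later in this chapter) with scikit-learn and NumPy; the actual and predicted values are invented for illustration.

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# hypothetical actual values and model predictions
y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.5, 5.0, 4.0, 8.0]

mse = mean_squared_error(y_true, y_pred)   # mean of the squared differences
rmse = np.sqrt(mse)                        # square root of the MSE
sse = mse * len(y_true)                    # sum of squared errors
mae = mean_absolute_error(y_true, y_pred)  # mean of the absolute differences

print(sse, mse, rmse, mae)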


The RMSE estimate should be as low as possible: the lower the RMSE, the better the model's predictions, while a higher RMSE indicates a large difference between the predicted and the true values. RMSE is a popular criterion for evaluating regression models because it is simple, and it assumes that the errors are unbiased and follow a normal distribution. RMSE is often used when selecting features: by computing the RMSE for different combinations of features, it can be tested whether an added feature genuinely improves the model's predictions. One of the disadvantages of MSE/RMSE is that it is easily affected by outliers, so outliers have to be removed for MSE/RMSE to behave well. This can be understood with an example: take a dataset with 10 observations, apply two different models to it and compute the MSE for both. For the first model, every prediction for the ten observations has a $1 error; for the second model, the predictions are correct in nine of the 10 cases and the prediction for the last case is off by $4.16. (Datavedas 2018.)

TABLE 1. An example of two different models compared by MSE (Datavedas 2019).

As the example in table 1 shows, the downside of MSE is that a single major error has the same effect as many minor errors: because the errors are squared in the formula, the one huge error made by Model 2 has the same effect as the ten small errors of Model 1. This problem becomes especially apparent when there are outliers in the data. For example, consider a dataset with 11 observations and the variables "stress level" and "outcome", to which a regression model is fitted to obtain a line of best fit. (Datavedas 2019.)


GRAPH 3. A regression model's line of best fit (Datavedas 2019).

The points that form the line are the predictions, and the distances from the actual data points to the predicted data points are the errors. However, if an outlier is present, the model will attempt to fit it, and in doing so it will produce a different line of best fit, which skews the results.

GRAPH 4. A regression model with a different line of best fit due to an outlier (Datavedas 2019).

Again, the points forming the line are the predictions, and the error is the distance from the actual data point to the predicted data point. When outliers occur, the model attempts to adapt to them, which shifts the line of best fit and distorts the results. This happens especially when the error measure used is MSE. Another disadvantage of these measures is that, unlike correlation coefficients, they are sensitive to the mean and scale of the predictions. For example, in the model above, if the fitted line is essentially just the average of the true values, the MSE will be much lower than for a line that has the correct slope but misses the mean, even though the latter has captured the pattern. Therefore, for MSE the baseline is the mean, as can be seen in graph 5. (Datavedas 2019.)

GRAPH 5. MSE where baseline is the mean (Datavedas 2019).

Graph 6 represents the case where the pattern is right but the mean is not: the MSE gives a poor result, failing to credit the model for having captured the pattern efficiently. Thus the MSE is highly dependent on, and sensitive to, the mean and the scale.

GRAPH 6. MSE when the pattern is correct but the mean is incorrect (Datavedas 2019).


5.1.2 Relative Squared Error (RSE)

The relative squared error (RSE) relates the error of the model to what the error would have been if a simple predictor had been used; more specifically, this simple predictor is just the average of the actual values. The relative squared error therefore takes the total squared error of the model and normalizes it by dividing by the total squared error of the simple predictor. (Datavedas 2019.) It can be used to compare models whose errors are measured in different units.

Mathematically, the relative squared error E(i) of an individual model i is evaluated as:

E(i) = Σ (P(ij) – T(j))² / Σ (T(j) – Tmean)²

where P(ij) is the value predicted by model i for record j, T(j) is the target value for record j, the sums run over all records j, and Tmean is the mean of the target values.

The drawback of MSE that this measure addresses is the influence of the mean and the magnitude of the predictions: it compares the MSE of the model with the MSE of a model that uses the mean as its prediction, i.e. whose line of best fit is simply the average value of the variable Y. If the result is greater than 1, the model created is not even as good as simply predicting the mean, as in graph 6. (Datavedas 2019.)

5.1.3 Mean Absolute Error (MAE)

In statistics, the mean absolute error (MAE) is a measure of the difference between two continuous variables. Assume X and Y are variables of paired observations that express the same phenomenon; examples of Y versus X include predicted versus observed values, subsequent time versus initial time, and one measurement technique versus an alternative measurement technique. Consider a scatter plot of n points, where point i has coordinates (xi, yi). The mean absolute error is the average vertical distance between each point and the identity line, and it is equally the average horizontal distance between each point and the identity line. Unlike MSE, the MAE divides the sum of the absolute differences between the predicted and the actual values by the number of observations, instead of squaring the differences. (Datavedas 2019.)

The formula of MAE is:

MAE = (1/n) Σ |actual(i) – predicted(i)|

If the formula did not take the absolute value, negative differences would offset the positive differences and drive the sum towards zero, so taking the absolute value is essential. Thanks to the absolute value, MAE handles the outlier problem of MSE/RMSE: a single large error is not magnified by squaring and therefore does not overshadow many small errors. On the other hand, MAE is less convenient to optimize than MSE/RMSE, and it does not further penalize large errors (which MSE/RMSE do by squaring the error terms), which limits its usefulness in some applications. Like MSE, MAE still uses the mean as its baseline: a model whose predictions are predominantly the mean will tend to have a small MAE, just as such a model has a low MSE. Also, like MSE/RMSE, MAE is very sensitive to the mean and the scale of the predictions. In this way MAE has its advantages and disadvantages: it handles outliers more gracefully, but it ignores the heavier penalization of larger errors. (Datavedas 2019.)

5.1.4 Mean Absolute Deviation (MAD)

The mean absolute deviation of a data set is the average absolute deviation of the data points from a given central value (most often the mean). It is a summary measure of statistical dispersion or variability. In its general form, the central point may be the mean, median, mode or any other measure of central tendency, or any reference value related to the given data set. The absolute differences between the data points and the central value are summed and divided by the number of data points. (Datavedas 2019.)

In MAE, the absolute differences between the predicted and the true values are summed and then divided by the number of observations, i.e. the differences are averaged. In MAD, the absolute deviations from a central value are taken before the averaging, so MAD is likewise fairly robust to outliers. The most worrying aspect of this measure, however, is that it is not differentiable, which makes it difficult to update the model parameters in order to reduce the error; in this respect MAD offers no help, which makes the technique frustrating to work with. Again, like MSE/RMSE and MAE, it is sensitive to the mean and the scale. (Datavedas 2019.)

5.1.5 Relative Absolute Error (RAE)

The relative absolute error (RAE) is a way of measuring the performance of a predictive model. It is mainly used in machine learning, data mining and operations research. RAE should not be confused with relative error, which is a general measure of precision or accuracy for instruments such as clocks, rulers or scales. The relative absolute error is expressed as a ratio comparing the mean (residual) error to the error produced by a trivial or naive model; a reasonable model (one that produces better results than a trivial model) will therefore give a ratio of less than one. Like the relative squared error, RAE can also be used to compare models whose errors are measured in different units. (Datavedas 2019.)

The formula for relative absolute error is:

RAE = Σ |P(i) – T(i)| / Σ |T(i) – Tmean|

where P(i) is the predicted value, T(i) the actual target value and Tmean the mean of the target values.

5.2 Classification Models

A classification model tries to draw a conclusion from observed values. Given one or more inputs, the model attempts to predict the value of one or more outcomes, so that labels can be assigned to the data set: for example "fraudulent" or "authorized" for transactions, or "spam" or "not spam" for email messages. There are two approaches in machine learning: supervised and unsupervised. In the supervised approach, a labelled training data set is fed to the classification algorithm; the trained model is then used on test data, for instance to decide whether a transaction is "fraudulent". This type of learning belongs to classification. (Datavedas 2019.)

Unsupervised methods, by contrast, are set to search an unlabelled data set and look for groups within it. They are most often used to scan a data set for similarities or to identify patterns or anomalies; a typical use case is the comparison of images. Unsupervised methods can also be used to discover "fraudulent" transactions by looking for inconsistent records in the data set. This type of learning belongs to clustering. Anti-spam software uses the Naive Bayes classification algorithm: when an individual classifies an email message as spam, the words in that message are put into a database of spam words, whereas the words of a legitimate email go into a harmless-mail database. Over time, collections of spam and non-spam words and phrases build up. The anti-spam algorithm can then estimate the probability of an email message being spam or not and handle it accordingly. (Fuchs 2017.)

5.3 Ridge Regression

Tikhonov regularization, named after Andrey Tikhonov, is a technique for the regularization of ill-posed problems. Also called ridge regression, it is especially helpful in alleviating the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters. The method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias. In the simplest case, the problem of a near-singular moment matrix (XᵀX) is alleviated by adding positive elements to the diagonal. The approach can be conceptualized by posing a constraint ΣB² = c on the least squares problem, so that the quantity ||y – XB||² + λ(ΣB² – c) is minimized, where λ is the Lagrange multiplier of the constraint. The minimizer of the problem is the simple ridge estimator

B(ridge) = (XᵀX + λI)^(-1) Xᵀy

where I is the identity matrix and the ridge parameter λ serves as the positive constant shifting the diagonal, thereby decreasing the condition number of the moment matrix. (Bhattacharyya 2018.)
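A minimal sketch of ridge regression with scikit-learn; the training data is invented, and the alpha parameter of Ridge plays the role of the ridge parameter λ.

import numpy as np
from sklearn.linear_model import Ridge

# hypothetical training data: 4 samples with 2 features each
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([3.0, 3.0, 7.0, 7.0])

# alpha corresponds to the ridge parameter lambda
model = Ridge(alpha=1.0)
model.fit(X, y)

print(model.coef_, model.intercept_)
print(model.predict([[2.0, 2.0]]))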


6 MACHINE LEARNING MODEL EVALUATION

Machine learning is becoming an increasingly essential part of everyday life, regardless of whether the techniques are applied to research or business problems. Machine learning models need to be able to give accurate predictions in order to create real value for a specific organization. While training a model is a key step, how the model generalizes on unseen data is an equally significant aspect that should be considered in every machine learning pipeline. (Mutuvi 2019.)

This chapter clarifies the methods used in assessing how well a machine learning model generalizes to new, previously unseen data. It also illustrates how common model evaluation metrics are implemented for classification and regression problems using Python. Methods for evaluating a model's performance are divided into two categories, namely holdout and cross-validation. Both methods use a test set (data not seen by the model) to evaluate model performance. It is not recommended to use the data that was used to build the model to evaluate it: the model would simply remember the whole training set and would therefore always predict the correct label for any point in the training set, which is referred to as overfitting. (Mutuvi 2019.)

6.1 Holdout

The holdout set is sometimes referred to as the testing data; this holdout subset gives the final estimate of the machine learning model's performance after it has been trained and validated. Holdout sets should never be used to make decisions about which algorithms to use or for improving or tuning algorithms. The purpose of holdout evaluation is to test a model on different data than it was trained on, which gives an unbiased estimate of learning performance. (Mutuvi 2019.)

In this method, the dataset is randomly partitioned into three subsets. The training set is the subset of the dataset used to build predictive models. The validation set is the subset used to assess the performance of the model built in the training phase; it provides a test platform for fine-tuning a model's parameters and choosing the best performing model (not all modelling algorithms need a validation set). The test set, or unseen data, is the subset of the dataset used to assess the likely future performance of a model. If a model fits the training set much better than it fits the test set, overfitting is probably the reason. The holdout approach is useful because of its speed, simplicity and flexibility. However, this procedure is often associated with high variance, since differences in the training and test datasets can result in meaningful differences in the estimate of accuracy. (Mutuvi 2019.)

6.2 Cross-validation

Cross-validation is a procedure that involves partitioning the original observation dataset into a training set, used to train the model, and an independent set used to evaluate the analysis. The most widely recognized cross-validation technique is k-fold cross-validation, where the original dataset is partitioned into k equally sized subsamples, called folds. The k is a user-specified number, usually 5 or 10. The procedure is repeated k times, such that each time one of the k subsets is used as the test/validation set and the other k-1 subsets are put together to form a training set. The error estimate is averaged over all k trials to obtain the total effectiveness of the model. (Mutuvi 2019.)

For example, when performing five-fold cross-validation, the data is first divided into 5 pieces of (roughly) equal size. A sequence of models is then trained: the first model is trained using the first fold as the test set, and the remaining folds are used as the training set. This is repeated for each of the 5 parts of the data, and the accuracy estimates are averaged over all 5 trials to obtain the overall effectiveness of the model. As can be seen, each data point appears in a test set exactly once and appears in a training set k-1 times. (Mutuvi 2019.)
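A minimal sketch of five-fold cross-validation with scikit-learn; the iris data set and logistic regression are used here only as convenient stand-ins.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# cv=5 splits the data into 5 folds; each fold is used as the test set once
scores = cross_val_score(model, X, y, cv=5)
print(scores)         # accuracy for each of the 5 folds
print(scores.mean())  # average accuracy over the folds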

6.3 Model Evaluation Metrics

While data preparation and training a machine learning model are key steps in the machine learning pipeline, it is equally vital to measure the performance of the trained model. The extent to which the model generalizes on unseen data is what distinguishes adaptive from non-adaptive machine learning models. By utilizing various metrics for performance evaluation, it is possible to improve the overall predictive power of a model before it is put into production on unseen data. (Mutuvi 2019.)

The idea of building machine learning models works on a constructive feedback principle, and model evaluation metrics are required to quantify model performance. The choice of evaluation metric depends on the given machine learning task (for example classification, regression, ranking, clustering or topic modelling). Some metrics, such as precision and recall, are useful for several tasks. Supervised learning tasks, such as classification and regression, constitute the majority of machine learning applications. The following part covers the metrics for classification tasks; as mentioned, evaluation metrics are tied to the machine learning task at hand. In binary classification there are two possible output classes, while in multi-class classification there are more than two possible classes. (Turi 2018.)

6.3.1 Classification Accuracy

In short, accuracy measures how often the classifier makes the correct prediction. Accuracy is the ratio between the number of correct predictions and the total number of predictions. It is a common evaluation metric for classification problems: the number of correct predictions made as a proportion of all predictions made. In Python, the sklearn module can be used to compute the accuracy of classification tasks, as follows. (Turi 2018.)

FIGURE 22. Coding showing the accuracy (Turi 2018).
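A minimal sketch along the lines of figure 22, using scikit-learn's accuracy_score; the labels are invented for illustration.

from sklearn.metrics import accuracy_score

# hypothetical true labels and predicted labels
y_true = [0, 1, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1]

# fraction of predictions that match the true labels
print(accuracy_score(y_true, y_pred))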

A caveat visible in figure 22 is that accuracy does not distinguish between classes: correct answers for each class are treated in the same manner. This is not sufficient in many situations where the desired weight of each class differs, for example when the cost of misclassification is uneven or when the test data contains far more examples of one class than of the others. For instance, telling a patient that they do not have cancer when in fact they do (a false negative) is completely different from telling a healthy patient that they have a malignant tumour (a false positive). (Turi 2018.)

6.3.2 Multiclass averaging

Multi-class classification is an extension of binary classification, and accuracy metrics for it can be averaged over the individual classes in several ways. Micro-averaging computes the overall metric by counting the correctly and incorrectly predicted examples over all classes together. Macro-averaging computes the metric separately for each class and then takes the mean of the per-class values; this does not take label imbalance into account. The option None returns the metric for each class separately. (Turi 2018.)

FIGURE 23. Screenshot of coding shows multiclass averaging (Turi 2018).
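As a hedged illustration of the three averaging modes described above, the following scikit-learn sketch computes micro, macro and per-class precision on a small invented three-class example:

```python
from sklearn.metrics import precision_score

# Invented three-class example (classes 0, 1 and 2)
y_true = [0, 0, 0, 0, 1, 1, 2, 2, 2, 2]
y_pred = [0, 0, 1, 2, 1, 1, 2, 2, 2, 0]

# 'micro' counts correct and incorrect predictions globally across all classes
print("micro:", precision_score(y_true, y_pred, average="micro"))
# 'macro' averages the per-class scores without weighting by class size
print("macro:", precision_score(y_true, y_pred, average="macro"))
# None returns one score per class
print("per class:", precision_score(y_true, y_pred, average=None))
```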

As figure 23 illustrates, when the number of examples in each class is different, the per-class averaged accuracy will differ from the micro-averaged accuracy. When the classes are imbalanced, i.e. some classes have many more examples than others, the classes with many examples dominate the micro-averaged metrics and accuracy can paint a skewed picture. The per-class averaged accuracy (average="macro") should therefore be considered, as well as the accuracy of each individual class (average=None). However, per-class accuracy has its own caveat: if a class contains only a few examples, the test statistics for that class will be unreliable, so the averaged figure has a higher variance. (Turi 2018.)

6.3.3 Confusion matrix

The confusion matrix provides a more detailed breakdown of the correct and incorrect classifications for each class, showing exactly how the predictions are distributed. The confusion matrix is a table (data frame) consisting of three columns: target label, which names the ground truth; predicted label, which is the predicted class; and count, which is the number of times the target label was predicted as that label. (Turi 2018.) Figure 24 shows the matrix being computed; it clearly reveals which classes the model recognises better. This information would be lost if only the overall accuracy were reported.

FIGURE 24. Coding shows the confusion matrix (Turi 2018).
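A minimal pandas/scikit-learn sketch of the same three-column idea (target label, predicted label, count), using invented labels, might look like this:

```python
import pandas as pd
from sklearn.metrics import confusion_matrix

# Invented ground-truth and predicted labels
y_true = ["cat", "cat", "dog", "dog", "dog", "bird"]
y_pred = ["cat", "dog", "dog", "dog", "cat", "bird"]

# Classic square confusion matrix (rows = true class, columns = predicted class)
labels = ["bird", "cat", "dog"]
print(confusion_matrix(y_true, y_pred, labels=labels))

# The same information as a long table: target label, predicted label, count
pairs = pd.DataFrame({"target_label": y_true, "predicted_label": y_pred})
print(pairs.groupby(["target_label", "predicted_label"]).size().reset_index(name="count"))
```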


6.3.4 Logarithmic loss

Explicitly, if the raw output of a classifier is a numeric probability instead of a class label, then logarithmic loss (log-loss) can be used. The probability can be understood as a gauge of confidence. The true label may be "0", but the classifier may think the example belongs to class "1" with a probability of 0.51; the classifier is then making a mistake, but only just, because the predicted probability is very close to the decision boundary of 0.5. Log-loss is a "softer" measurement of accuracy that incorporates this idea of probabilistic confidence. (Turi 2018.)

Mathematically, log-loss for a binary classifier is defined as follows:

log-loss = -(1/N) Σ_i [ y_i log(p_i) + (1 - y_i) log(1 - p_i) ]

Here, p_i is the probability that the i-th data point belongs to class "1", as judged by the classifier, and y_i is the true label, which is either "0" or "1". The beauty of this definition is that it is closely tied to information theory: intuitively, log-loss measures the unpredictability of the "extra noise" that comes from using a predictor instead of the true labels. By minimising the cross entropy, the classification accuracy is maximised. (Turi 2018.) Figure 25 shows how the logarithmic loss can be computed in code.

FIGURE 25. Computed programming represents Logarithmic loss (Turi 2018).
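A minimal scikit-learn sketch of the same computation, using invented true labels and predicted probabilities, could be:

```python
from sklearn.metrics import log_loss

# Invented true labels and predicted probabilities for class "1"
y_true = [0, 0, 1, 1]
p_pred = [0.10, 0.40, 0.35, 0.80]

# log_loss clips the probabilities internally, so log(0) never occurs
print("log-loss:", log_loss(y_true, p_pred))
```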

Log loss is undefined when the predicted probability is exactly 0 or 1. Consequently, the probabilities are clipped to lie slightly inside that range before the loss is computed. (Turi 2018.)


6.3.5 Area under the curve (AUC)

AUC stands for the area under the curve, where the curve in question is the ROC curve. A good ROC curve encloses a large area (because the true positive rate rises quickly towards 100%), whereas a poor ROC curve barely covers any area. The area under the ROC curve is a performance indicator that measures the ability of a binary classifier to distinguish between the positive and negative classes. (Turi 2018.)

FIGURE 26. Computed programming of AUC (Turi 2018).

The AUC score is computed from a binned approximation of the curve, using 100 thousand bins by default, so as figure 27 shows the results may differ only around the fifth decimal place. Similarly, AUC scores can also be computed when the target classes are of type string. For binary classification, when the target label is a string, the labels are sorted in alphabetical order and the larger label is considered the "positive" label. (Turi 2018.)

FIGURE 27. Computed programming of AUC when target classes are of type String (Turi 2018).
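A hedged scikit-learn sketch of both cases follows; the labels and scores are invented, and the string labels are mapped to 0/1 by hand so that the alphabetical-ordering convention is explicit:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Invented numeric binary labels with predicted scores for the positive class
y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print("AUC:", roc_auc_score(y_true, scores))  # 0.75

# String labels: mapped explicitly so that the alphabetically larger label
# ("pos") plays the role of the positive class
y_str = np.array(["neg", "neg", "pos", "pos"])
y_bin = (y_str == sorted(set(y_str))[-1]).astype(int)
print("AUC with string labels:", roc_auc_score(y_bin, scores))
```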


6.3.6 F-scores

The F1 score is a single measure that combines precision and recall and is defined as their harmonic mean:

F1 = 2 * (precision * recall) / (precision + recall)

The result lies in the range [0, 1], where 1 is the best possible score and 0 the worst. Unlike the arithmetic mean, the harmonic mean tends towards the smaller of its two inputs; therefore, if either precision or recall is low, the F1 score will also be low. The F1 score (sometimes referred to as the balanced F-score) is a special case of the more general F-beta score, in which the beta parameter determines how much weight is given to recall relative to precision. (Turi 2018.)

FIGURE 28. Computed programming of F-score (Turi 2018).

As with the other metrics, F1 scores (or F-beta scores) can also be computed when the target labels are strings, as figure 28 shows. For binary classification, when the target label is a string, the labels are sorted in alphanumeric order and the larger label is considered the "positive" label. (Turi 2018.)

FIGURE 29. Computed programming of F-score when target classes are of type String (Turi 2018).
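A minimal scikit-learn sketch with invented string labels, where "spam" sorts after "ham" and is therefore treated as the positive label, could be:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Invented string labels; "spam" is taken as the positive class
y_true = ["ham", "spam", "spam", "ham", "spam", "ham"]
y_pred = ["ham", "spam", "ham", "ham", "spam", "spam"]

p = precision_score(y_true, y_pred, pos_label="spam")
r = recall_score(y_true, y_pred, pos_label="spam")

# F1 is the harmonic mean of precision and recall
print("precision:", p, "recall:", r)
print("F1:", f1_score(y_true, y_pred, pos_label="spam"))
print("harmonic mean check:", 2 * p * r / (p + r))
```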


7 DATA ANALYTICS TOOLS

Data analytics tools are widely used to provide meaningful analysis of large collections of data. Python is an increasingly popular tool for data analysis. In recent years, several libraries have reached maturity, allowing R and Stata users to take advantage of the power, flexibility and performance of Python without giving up the functionality these older programs have accumulated over the years. (TrustRadius 2019.)

7.1 R programming

R is a leading analytics tool in the industry and is widely used for statistics and data modelling. It can easily manipulate data and present it in different ways. It has surpassed SAS in many respects, such as data capacity, performance and output. R compiles and runs on many different platforms, namely UNIX, Windows and macOS. It has 11,556 packages and allows browsing packages by category. In addition, R provides tools to automatically install packages according to user requirements, and it can also be assembled with big data. (Proschool 2018.)

7.2 Python

Python is an object-oriented scripting language that is easy to read, write and maintain, and it is a free and open-source tool. It was created by Guido van Rossum in the late 1980s and supports both functional and structured programming methods. Python is easy to learn because it is very similar to JavaScript, Ruby and PHP. In addition, Python has excellent machine learning libraries, for instance TensorFlow, Theano, scikit-learn and Keras. Another important feature of Python is that it can be assembled on almost any platform, such as a SQL server, a MongoDB database or JSON. Python also handles text data well. (Proschool 2018.)

7.3 Apache Spark

Apache Spark was developed in 2009 at the AMPLab of the University of California, Berkeley. Apache Spark is a fast, large-scale data processing engine that executes applications in Hadoop clusters considerably faster both in memory and on disk. Spark is built on data science, and its concepts make data science effortless. In addition, Spark is well known for developing data pipelines and machine learning models. Apache Spark includes a library, MLlib, which provides a progressive set of machine learning algorithms for repetitive data science techniques such as classification, regression, collaborative filtering and clustering. (Proschool 2018.)

7.4 SAS and Rapid Miner

SAS is a statistical software suite developed by SAS Institute for data management, advanced analytics, multivariate analysis, business intelligence, criminal investigation and predictive analytics. SAS was further developed during the 1980s and 1990s with the addition of new statistical procedures, additional components and the introduction of JMP. RapidMiner is a data science software platform developed by the company of the same name that provides an integrated environment for data preparation, machine learning, deep learning, text mining and predictive analytics. It is used for commercial and business applications as well as for research, education, training, rapid prototyping and application development, and it supports all steps of the machine learning process, including data preparation, results visualisation, model validation and optimisation. RapidMiner is developed on an open core model. The RapidMiner Studio Free Edition, which is limited to one logical processor and 10,000 data rows, is available under the AGPL licence, with dependencies on various non-open-source components. (Proschool 2018.)

7.5 Excel

Excel is an essential analytical tool and is widely used in almost every industry. Whether an individual is an expert in SAS, R or Tableau, they will still need Excel. Excel becomes very important when a client's internal data has to be analysed. It handles the complex task of summarising data, and previewing a pivot table helps filter the data according to the client's needs. Excel has advanced business analytics options that assist with modelling, including pre-built features such as automatic relationship detection, DAX measure creation and time grouping. (Proschool 2018.)


7.6 Tableau Public

Tableau Software is an American interactive data visualisation software company founded in January 2003 by Christian Chabot, Pat Hanrahan and Chris Stolte in Mountain View, California. The company is currently headquartered in Seattle, Washington, United States, and focuses on business intelligence; on August 1, 2019, Salesforce acquired Tableau. Chabot, Hanrahan and Stolte were researchers at the Department of Computer Science at Stanford University who specialised in visualisation techniques for exploring and analysing relational databases and data cubes. The company started as a commercial outlet for research conducted at Stanford between 1999 and 2002. (Proschool 2018.)

7.7 KNIME

KNIME is a free and open-source data analytics, reporting and integration platform. KNIME integrates various components for machine learning and data mining through its modular data pipelining concept. A graphical user interface and the use of JDBC allow nodes to be assembled that blend different data sources, including pre-processing (ETL: extraction, transformation, loading), for modelling, data analysis and visualisation with little or no programming. Since 2006, KNIME has been used in pharmaceutical research; it is also used in other areas such as CRM customer data analysis, business intelligence, text mining and financial data analysis. (Proschool 2018.)

7.8 Splunk and Qlik

Splunk is an American public multinational company based in San Francisco, California, that produces software for searching, monitoring and analysing machine-generated big data through a web-style interface. Splunk captures, indexes and correlates real-time data in a searchable repository from which it can generate graphs, reports, alerts, dashboards and visualisations. Qlik provides an end-to-end platform which includes data integration, user-driven business intelligence and conversational analytics. The software company was founded in 1993 in Lund, Sweden and is now based in King of Prussia, Pennsylvania, United States. The company's main products are QlikView and Qlik Sense, software for business intelligence and data visualisation. (Proschool 2018.)


8 EXPERIMENTAL DATA ANALYSIS OF THE WUHAN CORONAVIRUS DATASET

A virus first reported in the Chinese city of Wuhan has now spread to more than a dozen countries around the world, sparking a major health and economic emergency. The World Health Organization (WHO) has declared the Wuhan coronavirus outbreak "a public health emergency of international concern". This thesis analyses the current emergency and then dives deeper into Kaggle's "Novel Corona Virus 2019 Dataset".

8.1 Coronavirus

According to the WHO, coronaviruses (CoV) are a large family of viruses that cause illness ranging from the common cold to more severe diseases such as Middle East Respiratory Syndrome (MERS-CoV) and Severe Acute Respiratory Syndrome (SARS-CoV). The novel coronavirus (nCoV) is a new strain that has not previously been identified in humans. The virus recognised as the cause of the ongoing outbreak is referred to as 2019-nCoV or the Wuhan coronavirus.

8.2 The emergency, starting today

According to the latest report by the New York Times, the number of confirmed infections has risen to 37,198 and the death toll in China has climbed to 811, surpassing the death toll of the SARS epidemic. Sixteen cities in China, with a combined population of more than 50 million people, are on lockdown. Airlines around the world have cancelled flights to and from China. Several countries are evacuating their citizens on special flights and placing them under strict quarantine. To make matters worse, stock markets have plunged in China and markets around the world are feeling the effects. Some experts predict that the outbreak poses a risk to the global economy and could potentially trigger far-reaching geopolitical consequences.


8.3 A prologue to the dataset

The "Novel Corona Virus 2019 Dataset", published on Kaggle, has been collected by Johns Hopkins University. The team has gathered data from various sources such as the WHO, regional CDCs and news outlets. They have also created a real-time dashboard to monitor the spread of the virus. The analysis begins by importing the Python libraries, as shown in figure 30, and loading the data, which is read from a local copy of the Kaggle CSV file stored on the PC. Caveat: please note that this local copy of the dataset is not being refreshed, so the observations recorded below may not reflect the current situation.

FIGURE 30. Importing libraries and loading data
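A minimal sketch of these first steps is shown below; the file name 2019_nCoV_data.csv is an assumption about the name under which the Kaggle file was saved locally:

```python
import pandas as pd

# Local copy of the Kaggle "Novel Corona Virus 2019 Dataset"; the file name
# is an assumption about where the CSV was saved on the PC
data = pd.read_csv("2019_nCoV_data.csv")

# Basic orientation: number of rows and columns, and the first five rows
print(data.shape)   # (770, 8) for the snapshot used in this thesis
print(data.head())
```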

8.3.1 Insight into the dataset

The first step is to gain a basic understanding of the dataset and to perform data cleaning tasks if necessary. The number of rows and columns is checked with data.shape, which returns (770, 8), meaning that there are 770 observations and 8 columns in the dataset. The first 5 rows are then inspected with data.head(), as shown in table 2.


TABLE 2. The first five rows of the data obtained with data.head()

The column names are self-explanatory, as shown in figure 30. The first column 'Sno' is just a row number and does not add anything to the analysis. The column 'Last Update' shows the same value as the 'Date' column, except for a few cases where the figures were updated later; figure 30 shows these two columns being dropped before continuing.

FIGURE 30. Comparing last update with date segment
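A hedged sketch of that clean-up step, assuming the column names 'Sno' and 'Last Update' as they appear in the dataset, is:

```python
import pandas as pd

data = pd.read_csv("2019_nCoV_data.csv")  # local copy, as above

# 'Sno' is only a running row number and 'Last Update' mostly repeats 'Date',
# so both columns are dropped before the analysis continues
data = data.drop(columns=["Sno", "Last Update"])
print(data.columns.tolist())
```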

Apart from 'Province/State', none of the columns have null values. Further examination shows that the province names are missing for countries such as the UK, France and India. In this case it is not possible to predict or fill in the missing values from any master list. Table 3 moves on to the numeric columns.


TABLE 3. Summary statistics returned by the describe() function

The describe() method returns summary statistics of the numeric columns in the data frame, as shown in table 3. A quick conclusion from the output is that the data is reported cumulatively, i.e. the number of cases reported on a particular date includes the cases reported earlier. The 'max' value for deaths is 479, which is consistent with media reports from a few days before this data was published.

FIGURE 31. Checking for duplicate rows

In figure 31, the duplicated() method returns a Boolean series which is then used as a mask on the original data frame. The output shows that no two records share the same country, province and date. Consequently, it can be inferred that all observations in the dataset are unique.
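A hedged sketch of that check, assuming the column names 'Country', 'Province/State' and 'Date', is:

```python
import pandas as pd

data = pd.read_csv("2019_nCoV_data.csv")  # local copy, as above

# duplicated() returns a Boolean series; used as a mask it shows any rows
# that share the same country, province and date
mask = data.duplicated(subset=["Country", "Province/State", "Date"])
print(data[mask])   # an empty frame means all observations are unique
```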


FIGURE 32. List of countries to which the infection has spread across the world

Figure 32 lists the countries to which the infection has spread; the data shows that the virus has reached 32 countries across Asia, Europe and America. For the purpose of this analysis, the data for 'China' and 'Mainland China' can be combined. Before moving on, figure 33 checks the dates in the 'Date' column.

FIGURE 33. Checking the dates in the date section

Figure 33 shows that the data has been updated several times a day, so the dates can be extracted from the timestamps and used for further analysis. This helps keep the dates uniform.

FIGURE 34. Converting date to datetime object


In figure 34, the first line of code converts the Date column to a datetime object and the second line extracts the dates from the timestamps. The next step is to get a sense of the effect of the outbreak on each country: the total number of confirmed cases for each country is collected into a df_country data frame. According to the output, China has 16,678 confirmed cases and 479 deaths from the coronavirus, while on a positive note 522 cases have recovered in China.
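A hedged sketch of the date conversion step, using the same assumed file and column names as above, is:

```python
import pandas as pd

data = pd.read_csv("2019_nCoV_data.csv")  # local copy, as above

# Convert the Date column to a datetime object, then keep only the date part
data["Date"] = pd.to_datetime(data["Date"])
data["Date"] = data["Date"].dt.date
print(data["Date"].head())
```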

FIGURE 35. The total number of confirmed cases for each country

Since the data is cumulative, the max() function has to be used together with groupby() to obtain the highest number of reported cases for each country. Using the sum() function, as in table 4, would lead to double counting. The data confirms that China has the highest number of reported cases and that nearly all of the 481 deaths so far have occurred there. As can be seen from figure 35, on a more optimistic note, China also has 522 recoveries, followed by Thailand with 7.
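A minimal sketch of that aggregation, assuming the column names 'Country', 'Confirmed', 'Deaths' and 'Recovered', could be:

```python
import pandas as pd

data = pd.read_csv("2019_nCoV_data.csv")  # local copy, as above

# The counts are cumulative, so the latest (maximum) value per country is taken;
# summing would double-count the same cases across report dates
df_country = (data.groupby("Country")[["Confirmed", "Deaths", "Recovered"]]
              .max()
              .sort_values(by="Confirmed", ascending=False))
print(df_country.head())
```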


TABLE 4. Number of cases reported each day

Table 4 shows the number of cases reported each day around the world, obtained by grouping the data by date. As table 4 shows, the highest number of confirmed cases was reported on 4 February 2020, which means the number of cases is increasing day by day across the world. The data preparation steps are now complete, so the next step is to plot the data and look for emerging trends and patterns.

8.3.2 Plotting the data

For data visualisation, the analysis uses two popular Python libraries, Matplotlib and Seaborn. Matplotlib is the default 2D visualisation library used by most data scientists. Seaborn, built on top of Matplotlib, helps build better-looking and more complex visualisations such as heatmaps. The following graphs, starting with graph 7, establish five observations based on different aspects of the data.


GRAPH 7. Number of confirmed cases by date

The number of cases reported daily has increased by almost 250% since the 28th of January. The number of cases reported on the 4th of February was 3,915. This shows that the virus is highly contagious and spreading quickly.
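A hedged sketch of how a plot like graph 7 can be produced with Matplotlib and Seaborn, using the same assumed file and column names as above, is:

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

data = pd.read_csv("2019_nCoV_data.csv")           # local copy, as above
data["Date"] = pd.to_datetime(data["Date"]).dt.date

# Total confirmed cases reported per day, summed over all countries
daily = data.groupby("Date")["Confirmed"].sum().reset_index()

plt.figure(figsize=(10, 5))
sns.barplot(x="Date", y="Confirmed", data=daily, color="steelblue")
plt.xticks(rotation=45)
plt.title("Number of confirmed cases by date")
plt.tight_layout()
plt.show()
```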


GRAPH 8. Two lines plotting deaths and recoveries respectively

According to graph 8, during the first week the number of deaths was higher than the number of recoveries. Since the 31st of January, the pace of recovery has picked up and is showing a positive trend. There were 255 recoveries on the 4th of February compared with 66 deaths. The recovery rate should keep increasing as more people receive timely treatment and as the search for a cure continues.


GRAPH 9. Most affected countries other than China

China is the most affected country by a large margin, so graph 9 compares the countries other than China. Nations geographically close to China, such as Thailand, Japan and Singapore, have reported more cases than other Asian and European countries. Germany is an exception and has the highest number of cases in Europe.


GRAPH 10. Mortality rate over time

Graph 10 shows the mortality rate over time, calculated at each point in time by dividing the number of deaths by the number of confirmed cases. According to the graph, the death rate has never crossed 3% and is slowly decreasing towards 2%. More recoveries in the coming weeks may reduce it further.
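A minimal sketch of that calculation, using the same assumed file and column names as above, is:

```python
import pandas as pd

data = pd.read_csv("2019_nCoV_data.csv")           # local copy, as above
data["Date"] = pd.to_datetime(data["Date"]).dt.date

# Mortality rate per day = total deaths / total confirmed cases on that day
daily = data.groupby("Date")[["Confirmed", "Deaths"]].sum()
daily["mortality_rate"] = daily["Deaths"] / daily["Confirmed"]
print(daily["mortality_rate"].tail())
```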


GRAPH 11. Ten most affected provinces of China

According to graph 11, the Chinese province of Hubei is the epicentre of the outbreak. It has far more reported cases than the other provinces combined. There are several provinces where there have been no deaths and every affected patient has recovered. The analysis shows the alarming rate at which the Wuhan coronavirus is spreading. At least 811 people have now died during the current outbreak, surpassing the 774 fatalities reported during the SARS epidemic almost two decades earlier.


9 EXCEL TOOLS FOR DATA ANALYSIS

The Analysis ToolPak is an Excel add-in program that provides data analysis tools for financial, statistical and engineering data analysis. Data analysis is incomplete without understanding the story behind the data. There is a great deal of software that can help with statistical analysis, and even though Excel is best known for formulas and spreadsheets, it can do data analysis as well. (Schmuller 2014.)

ANOVA: Single Factor is analysis of variance for two or more samples, whereas ANOVA: Two Factor with Replication is analysis of variance with two independent variables and multiple observations in each combination of the levels of the variables. Similarly, ANOVA: Two Factor without Replication is analysis of variance with two independent variables and one observation in each combination of the levels of the variables; it can be used for repeated measures ANOVA. In contrast, Correlation takes multiple measurements on a sample of individuals and computes a matrix of correlation coefficients for every possible pair of the measurements, while Covariance takes multiple measurements on a sample of individuals and calculates a matrix of covariances for every possible pair of the measurements. (Schmuller 2014.)

Descriptive Statistics generates a report of central tendency, variability and other attributes of the values in the selected range of cells. Exponential Smoothing takes a sequence of values and produces a forecast based on a preceding set of values and on an earlier prediction for those values. F-Test Two-Sample for Variances performs an F-test to compare two variances. Histogram tabulates individual and cumulative frequencies for values in the selected range of cells, while Moving Average takes a sequence of values and calculates a forecast that is the average of a specified number of preceding values. (Schmuller 2014.)

Random Number Generation provides a specified quantity of random numbers generated from one of seven possible distributions. Rank and Percentile creates a table that shows the ordinal rank and the percentile rank of each value in a set of values. Regression creates a report of the regression statistics based on linear regression through a set of data containing one dependent variable and one or more independent variables. Similarly, Sampling creates a sample from the values in a specified range of cells. (Schmuller 2014.)


9.1 Data Analysis of Excel datasheet

Statistical tasks usually call for dedicated statistical software. Such tools can perform calculations like the t-test, the chi-square test and correlation. Excel is not intended for data analysis, but that does not mean it cannot be done. Unfortunately, Excel's built-in statistical functions are not always intuitive, and they can give confusing results. Therefore, instead of using those functions, the Data Analysis ToolPak is used here. Despite some unfortunate quirks, the ToolPak still has considerable statistical value. The following shows what can be done with Excel's statistics add-in.

9.2 Complementing the Excel data analysis tool Pak

The Analysis ToolPak is an Excel add-in program that provides data analysis tools for financial, statistical and engineering data analysis. To load the Analysis ToolPak in Excel, the following steps are carried out: from the File tab, click Options; under Add-ins, select Analysis ToolPak and click the Go button. As shown in figure 36, several add-ins are available; to install the Analysis ToolPak, it is selected and it is then installed automatically in Excel. Once the Data Analysis ToolPak has been added, the Excel Data tab shows the Data Analysis command, as shown in figure 36. Finally, the ToolPak is ready for use: selecting the Data Analysis command opens a list of different analysis tools, for instance the Histogram tool, which in figure 36 produces a histogram of the dataset in use.


FIGURE 36. Assembling the Analysis Tool Pak

9.3 Excel datasets

The implementation uses a dataset of 60 houses that records the property value, the land size, the number of bedrooms, bathrooms and total rooms in the house, the age of the property and whether or not it has a fireplace. This data is used with the Data Analysis tool to calculate descriptive statistics such as the mean, median, mode and range.

9.3.1 Descriptive Statistics

Regardless of which statistical test is being run, it is usually worth obtaining Excel's descriptive statistics first. These provide information on means, medians, variance, standard deviation and standard error, kurtosis, skewness and a variety of other figures. However, the coefficient of variation has to be calculated manually.


FIGURE 40. Descriptive statistics in excel

Running descriptive statistics in Excel is simple. Figure 40 presents the Descriptive Statistics option of the Data Analysis tool. After clicking the Data Analysis tool in the Excel Data tab, Descriptive Statistics is selected and the data is chosen: by clicking the arrow beside the input range field, click and drag to select the data, and press Enter. The coefficient of variation is simply the standard deviation divided by the mean. For this dataset, the mean value of a house is 414,850.00, the median value is 420,550.00 and the standard deviation is 99,416.24, so the coefficient of variation is 23.96%.

FIGURE 41. Output of the descriptive statistics by using excel tools
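The coefficient of variation can be checked with a couple of lines of Python using the figures from the Excel output above:

```python
# Figures taken from the descriptive statistics output above
mean_value = 414850.00
std_dev = 99416.24

# Coefficient of variation = standard deviation / mean
cv = std_dev / mean_value
print(f"Coefficient of variation: {cv:.2%}")   # about 23.96%
```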

9.3.2 T- test in excel

One of the most basic statistical tests, the t-test, is a bit tricky to carry out with the ToolPak. Clicking Data Analysis reveals many options, among them the t-test, as shown in figure 42. There are three variants: t-Test: Paired Two Sample for Means, t-Test: Two-Sample Assuming Equal Variances and t-Test: Two-Sample Assuming Unequal Variances.

FIGURE 42. T-test in excel tools

t-Test: Paired Two Sample for Means should be used when the measurements or observations are paired, i.e. when two measurements were taken on the same subjects, for example measuring blood pressure before and after an intervention. t-Test: Two-Sample Assuming Equal Variances should be used when the measurements are independent. t-Test: Two-Sample Assuming Unequal Variances is also for independent measurements, but is used when the variances are unequal. In this case, since the t statistic of 1.68 is less than the critical value of 2.00, the null hypothesis cannot be rejected. Therefore, it is not possible to conclude that the values of houses with a fireplace differ from the values of houses without one.
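For comparison with the Python side of this thesis, a hedged SciPy sketch of the same unequal-variance t-test is shown below; the two lists are invented stand-ins for the house values with and without a fireplace:

```python
from scipy import stats

# Invented stand-ins for house values with and without a fireplace
with_fireplace = [452000, 310000, 498000, 420000, 515000, 389000]
without_fireplace = [398000, 285000, 410000, 365000, 445000, 330000]

# Welch's t-test (unequal variances), the counterpart of Excel's
# "t-Test: Two-Sample Assuming Unequal Variances"
t_stat, p_value = stats.ttest_ind(with_fireplace, without_fireplace,
                                  equal_var=False)
print("t statistic:", round(t_stat, 2), "p-value:", round(p_value, 3))
```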

FIGURE 43. Output of t-test in excel


9.3.3 Regression in Excel

Regression is one of the most commonly used statistical tests in business, and Excel packs a surprising amount of power for this calculation. Figure 44 shows a multiple regression run in Excel. The dependent variable is house value, and the two independent variables are total rooms and fireplace (yes/no). The aim is to see which is the better predictor of house value, or whether both are good predictors. (Albright 2017.) After clicking Data Analysis, Regression is selected from the list of options. The Input Y Range box should contain the single dependent variable, and the Input X Range box can include several independent variables, as shown in figure 44.

FIGURE 45. Output of the regression in excel tools

As figure 45 shows, the items of interest are the highlighted R Square value, the intercept and the coefficient for total rooms. The coefficients are also highlighted in graph 12. A couple of conclusions can be drawn from them, and a scatter diagram can be created to further support the analysis.


GRAPH 12. Scatter diagram using regression in excel tools

As graph 12 shows, a scatter diagram has been created with property values on one axis and total rooms on the other, with the points scattered around the middle. As the number of rooms increases the property value tends to increase, and a straight line can be drawn through the centre of the points, which suggests a fairly close relationship between property value and total rooms. The R Square value of 73.8% essentially states that about 73 to 74 percent of the variation in property values is explained by total rooms. There is a positive relationship between these two variables. This is just an Excel printout, but the p-value it contains can be used to decide whether to accept or reject the hypothesis. In this case it is concluded at the 5% significance level, based on this p-value, that a positive linear relationship exists.
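A hedged scikit-learn counterpart of the Excel regression, using invented placeholder data for total rooms, fireplace (0/1) and house value, could be:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented placeholder data: total rooms, fireplace (1 = yes, 0 = no), house value
rooms = np.array([5, 6, 7, 8, 9, 10, 6, 8])
fireplace = np.array([0, 0, 1, 1, 1, 1, 0, 1])
value = np.array([280000, 305000, 390000, 420000, 455000, 505000, 315000, 430000])

X = np.column_stack([rooms, fireplace])
model = LinearRegression().fit(X, value)

# R squared plays the same role as the "R Square" cell in the Excel output
print("R squared:", round(model.score(X, value), 3))
print("Coefficients (rooms, fireplace):", model.coef_)
print("Intercept:", model.intercept_)
```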

9.3.4 ANOVA in Excel

The Excel Data Analysis ToolPak provides three types of analysis of variance (ANOVA). Post-hoc tests such as Tukey or Bonferroni cannot be run, but it can still be checked whether several groups differ. There are three ANOVA tests in Excel. ANOVA: Single Factor analyses variance using one dependent variable and one independent variable; when there are more than two groups, it is preferable to running a series of separate t-tests.


ANOVA: Two Factor with Replication is similar to a paired t-test; it involves multiple measurements on the same subjects, and the "two factor" part of the test indicates two independent variables. ANOVA: Two Factor without Replication also involves two independent variables, but with no replication in the measurements. The single-factor case is examined here. In this example, three sets of numbers are observed, labelled Intervention 1, Intervention 2 and Intervention 3. To run the ANOVA, Data Analysis is clicked and ANOVA: Single Factor is selected. (Albright 2017.)

FIGURE 46. Analysing the ANOVA in excel tool Pak


From the analysis in figure 46, note that the P-value is less than 0.05, so the result is significant, indicating a significant difference between at least two of the groups in the test. However, since Excel does not run post-hoc tests to see which groups differ, the best that can be done is to look at the averages shown in the summary. In this example, Intervention 3 looks like the most likely source of the difference. This is not statistically rigorous, but it is a quick way to check whether a difference exists and which groups might be causing it.
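The same single-factor ANOVA can be reproduced in Python with SciPy; the three lists below are invented stand-ins for the Intervention 1, 2 and 3 columns:

```python
from scipy import stats

# Invented stand-ins for the three intervention columns
intervention_1 = [8, 9, 7, 8, 9, 8]
intervention_2 = [9, 8, 9, 10, 8, 9]
intervention_3 = [12, 13, 11, 12, 14, 13]

# One-way (single factor) ANOVA across the three groups
f_stat, p_value = stats.f_oneway(intervention_1, intervention_2, intervention_3)
print("F statistic:", round(f_stat, 2), "p-value:", p_value)
```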

9.3.5 Correlation in excel

Calculating correlation in Excel is much simpler than the t-test or ANOVA. As figure 47 shows, the Data Analysis button opens the ToolPak window, where Correlation is selected. This does not provide any measures of significance, but it shows how each variable relates to the others. A value of one is a perfect correlation, indicating that the values are effectively the same. The closer the correlation value is to one, the stronger the relationship.

FIGURE 47. Examination of correlation in excel


9.4 Excel Data

While Excel is not known for its statistical power, it actually packs some very useful functionality, for example the Power Query tool, which is handy for tasks like combining datasets. There is also the Data Analysis ToolPak add-in, which brings out some of Excel's best features. Having seen how to use the ToolPak, it is possible to experiment further and learn how to use more of its capabilities. (Albright 2017.)

9.5 Comparing excel based data analysis with python data analysis

This part attempts to give a clear rational framework for the Python versus Excel debate and to help people decide whether they should invest their time in learning Python or in improving their Excel skills. All things considered, someone working in finance, accounting, sales, marketing or any other role that only occasionally requires data analysis can continue with Excel. However, if the job is mainly about working with data, then it is worth investing in Python. (Yu 2018.)

The debate around Python versus Excel is not always clear-cut. Speed, efficiency, data analysis features and data volume are regularly cited as reasons not to use Excel. These reasons are partly right, but they miss the bigger picture. The era of data analysis has already arrived: from states and governments to businesses and individuals, big data and data analysis have become trends everyone is aware of, and even people without expert knowledge of data analysis and programming are constantly exposed to them. Clearly, Excel alone is not enough, because the job of an analyst can be summarised in 3 steps: getting the data, analysing the data and presenting the data.

Excel is useful for step 2, but offers only manual and limited options for steps 1 and 3. This makes Excel a reasonable answer for one-off, ad hoc data analysis, but Python is better for everything else. VBA can improve Excel's ability to handle steps 1 and 3, and investing in VBA skills will probably yield results faster than learning Python. This is not because learning VBA is easier than learning Python, but because there is less to learn with VBA than with Python. In any case, because of VBA's limited application outside of interacting with Microsoft Office, its dated and obscure syntax, and its smaller user base, learning Python is the better long-term investment, at least for people who can put in the time to learn it. (Yu 2018.)

9.5.1 Getting the data

Getting the data with Excel is a manual process, whereas getting the data with Python is an automated one. If an individual is producing an ad hoc report or a research project where the data only needs to be loaded once, then there is no need for Python; Excel is sufficient. However, if the report is produced on a schedule, Python will increase efficiency significantly. Suppose an individual has to produce a report that calculates the daily labour spend for the organisation, and the data comes from a website. The manual procedure is: open the website, download the CSV report, open the CSV report, and copy and paste the figures into the main Excel report. (Yu 2018.)

In Python, a script could be written that uses requests to fetch the data and then xlwings to write the retrieved data into the main Excel report. The same steps are listed below, marked with the Python packages that help automate them: open the website (requests or selenium), download the CSV report (requests or selenium), open and copy the CSV report (pandas), and paste the figures into the main Excel report (xlwings). (Yu 2018.)
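A hedged sketch of such a script is shown below. The URL, sheet name and workbook name are placeholders rather than real endpoints, and the xlwings step assumes that Excel is installed on the machine:

```python
import io

import pandas as pd
import requests
import xlwings as xw

# Placeholder URL for the site that publishes the daily CSV report
CSV_URL = "https://example.com/daily_labour_spend.csv"

# Steps 1-2: open the website and download the CSV report
response = requests.get(CSV_URL, timeout=30)
response.raise_for_status()

# Step 3: read the CSV into pandas instead of copying it by hand
report = pd.read_csv(io.StringIO(response.text))

# Step 4: paste the figures into the main Excel report with xlwings
book = xw.Book("main_report.xlsx")               # existing workbook, placeholder name
book.sheets["daily_spend"].range("A1").value = report
book.save()
```

Once the script works, it can be scheduled with a tool such as cron, as described below.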

If four more websites suddenly have to be added to this process, scaling the manual procedure means repeating steps 1 to 4 several times over, whereas for the automated procedure the cost is roughly an hour of coding and the process is still just 'python script.py'. Once the script is complete, the cron utility can be used to schedule the task to run automatically. Likewise, Python can pull data from almost any source. It can scrape data from a website with no download option, connect to any database (MySQL, PostgreSQL or MongoDB), any API, or another Excel file. If something really elaborate is needed, a web interface could even be built with frameworks such as Flask or Django for others to enter data, saving the data straight to a database or an Excel document. (Yu 2018.)


9.5.2 Analysing data

For analysing data, Excel is good because it is visual, simple and flexible. In everyday work, an individual can accomplish a fair amount in Excel because it is intuitive and fast. Despite this thesis' inclination towards Python, an analyst's skill set is incomplete without a decent amount of Excel knowledge (VLOOKUP, SUMIFS, pivot tables). The same guideline applies here: for one-off work, use Excel; for recurring reporting, use Python. The following are some Python packages for analysing data. Pandas is essentially Excel without the GUI and is quite intuitive to learn for anyone who knows Excel; it is the fundamental tool for performing data analysis in Python. Scikit-learn is a machine learning library for statistical analysis. (Yu 2018.)

9.5.3 Portraying

This one is very straightforward: presenting data in Excel usually involves creating a summary page and emailing the whole workbook to the target audience, or copying and pasting the summary page into an email. In Python, not only can those steps be automated, but Python's many visualisation libraries can also be used to create charts and present the data through a web page. Imagine a business team that no longer has to look for the data in an Excel document on their PC while travelling; they can simply open a bookmark in their browser and get all the information. (Yu 2018.)

Finally, a last note on whether a person should put their time into Python or Excel. To repeat what was written at the start of this chapter, VBA, while awkward to learn, will yield results much faster. This is because, even though Python itself is easier to learn, there are many more surrounding things to learn with Python. This should be a significant factor in deciding how an individual invests limited time. If results are needed quickly, VBA is the way to go. If an individual can afford to invest in a longer-term venture, Python can do far more than VBA, and will therefore not only make a better professional but also open more doors in terms of job opportunities. (Yu 2018.)

As an example, starting to write VBA code is as simple as opening an Excel document and pressing Alt+F11, whereas in Python one must not only get a text editor or an IDE but also know a bit of the command line to execute a file. To present data on the web as described in step 3, one needs to know at least one web framework, basic Python, pandas, SQL, the command line, git and code deployment. Even just installing Python can be confusing: in addition to installing Python itself, one also needs to know what virtualenv is. Nevertheless, these should be thought of as hurdles rather than barriers to progress. The demand for Python-enabled analysts and Excel experts is not the same precisely because learning Python is less straightforward than learning Excel, which makes Python skills substantially more valuable. (Yu 2018.)


10 CONCLUSION

Python in data science has enabled data scientists to accomplish more in less time. Python is a flexible programming language that is easy to understand and remarkably powerful as well. Python is highly adaptable and can work effectively in almost any environment. Likewise, with minor changes it can run on any operating system and can be integrated with other programming languages. All of these qualities have made Python the first choice for data scientists and developers.

In addition, Python has mature support for data science and countless other disciplines. That means it gives the data scientist the flexibility to develop data processing solutions that rely on well-structured libraries and that can interface easily with other parts of the increasingly complex infrastructure of modern organisations. Furthermore, it has a massive, strong and growing community and a promising future, both in terms of the jobs being created and the libraries being continually improved. In many ways, Python is the Swiss Army knife of programming languages. It provides excellent libraries in a variety of fields directly related to data science, for example statistics, machine learning, data processing and data visualisation, and in many more subjects bordering it, for example scientific programming, image processing and web development.

As mentioned in this thesis, the Python community is immense and beginner-friendly. In practical terms, that means it is usually easy to search or post on Stack Overflow or other similar sites and find help. It is also relatively easy for others to read and understand code, since the community has built the tools to make it so, which makes getting help and feedback very straightforward. The demand for data scientists who are proficient in Python is likewise effectively massive and growing. Google uses Python for a variety of tasks, and Airbnb, Pinterest and Spotify use it for their data pipelines. Countless start-ups use it for both prototyping and the final product; a typical start-up begins with a Jupyter Notebook and ends up with a Python-based data pipeline for analysis. Lastly, according to the O'Reilly Data Science Survey, using Python as a data scientist is one of the two most significant factors boosting one's salary.


The future of data science and Python is exciting. Data science is a new field and is developing quickly. An ever-increasing number of abstractions are being created to make it easier for the data scientist to concentrate on the main issue, which is examining data, by automating away as much of the routine work as possible. Providing an easy-to-use and uncluttered interface to the programmer is something that is built into Python's philosophy as a language, so it can be anticipated that this trend will continue, with ever more powerful libraries and new ways of viewing and sharing insights being created. Python is here for the long haul in the data environment; the language has the tools and programs available, and there is no better time to start learning it.


REFERENCES

Acadgild, X. 2018. What is Data Wrangling? 6 Steps in Data Wrangling. Available at: https://acadgild.com/blog/6-steps-in-data-wrangling. Accessed 23 December 2019.

Albright, D. 2017. How to Do Basic Data Analysis in Excel. Available at: https://www.makeuseof.com/tag/data-analysis-excel/. Accessed 07 February 2020.

Arora, S. 2020. Big Data and Analytics: Top 5 Python Libraries for Data Science. Available at: https://www.simplilearn.com/top-python-libraries-for-data-science-article. Accessed 02 January 2020.

Bhattacharyy, S. 2018. Ridge and Lasso Regression: L1 and L2 Regularization. Available at: https://towardsdatascience.com/ridge-and-lasso-regression-a-complete-guide-with-python-scikit-learn-e20e34bcbf0b. Accessed 17 December 2019.

Bock, T. 2019. What is Correlation. Available at: https://www.displayr.com/what-is-correlation/. Accessed 04 November 2019.

Business Jargons, 2019. Available at: https://businessjargons.com/types-of-correlation.html. Accessed 05 December 2019.

Chartio, 2019. What is Exploratory Data Analysis. Available at: https://chartio.com/learn/data-analytics/what-is-exploratory-data-analysis/. Accessed 05 January 2020.

Codecademy, 2019. Normalization. Available at: https://www.codecademy.com/articles/normalization. Accessed 05 February 2020.

Dave, K. 2012. A Python Book: Beginning Python, Advanced Python and Python Exercises.

DataVedas, 2019. Available at: https://www.datavedas.com/model-evaluation-techniques/. Accessed 06 November 2019.

Donnabhain, C. 2019. How Python is Used in Data Science. Available at: https://irishtechnews.ie/how-python-is-used-in-data-science/. Accessed 11 October 2019.

Durcevic, S. 2018. Data Science. Available at: https://www.datapine.com/blog/data-science-vs-data-analytics/. Accessed 18 January 2020.

ECML, 2018. 3 Types of Data Formats Explained. Available at: https://www.ecml.org/data-science/3-types-of-data-formats-explained/. Accessed 05 December 2019.

Import.io, 2019. Data Normalization and Its Importance. Available at: https://www.import.io/post/what-is-data-normalization-and-why-is-it-important/. Accessed 23 January 2020.

Kenton, W. 2019. Analysis of Variance. Available at: https://www.investopedia.com/terms/a/anova.asp. Accessed 13 December 2019.

Kenton, W. 2019. Corporate Finance and Accounting, Financial Analysis: Descriptive Statistics. Available at: https://www.investopedia.com/terms/d/descriptive_statistics.asp. Accessed 15 December 2019.

Keshari, K. 2019. Data Analyst Roles and Responsibilities. Available at: https://www.edureka.co/blog/data-analyst-roles-and-responsibilities/. Accessed 04 January 2020.

Klein, B. 2019. Binning in Python and Pandas. Available at: https://www.python-course.eu/pandas_python_binning.php. Accessed 09 December 2019.

McKinney, W. 2012. Python for Data Analysis. First edition.

Mutuvi, S. 2019. Introduction to Machine Learning Model Evaluation. Available at: https://heartbeat.fritz.ai/introduction-to-machine-learning-model-evaluation-fa859e1b2d7f. Accessed 17 December 2019.

Onlineitguru, 2019. Importance of Data Science. Available at: https://onlineitguru.com/blog/importance-of-data-science. Accessed 12 January 2020.

Pandas, 2014. Categorical Data. Available at: https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html. Accessed 07 November 2019.

Proschool, 2018. Available at: https://www.proschoolonline.com/blog/top-10-data-analytics-tools. Accessed 19 December 2019.

Ranjan, S. 2019. Python | Pandas dataframe.groupby(). Available at: https://www.geeksforgeeks.org/python-pandas-dataframe-groupby/. Accessed 09 December 2019.

Schmuller, J. 2016. Statistical Analysis with Excel For Dummies. 4th edition.

Sharma, H. 2019. Data Science and Guide to Data Science. Available at: https://www.edureka.co/blog/what-is-data-science/. Accessed 02 December 2019.

Techopedia, 2019. Data Preprocessing. Available at: https://www.techopedia.com/definition/14650/data-preprocessing. Accessed 03 November 2019.

Thumar, C. 2019. Why Python is Good for Data Analytics. Available at: https://thriveglobal.com/stories/why-python-is-good-for-data-analytics/. Accessed 12 January 2020.

TrustRadius, 2019. Data Science Platforms. Available at: https://www.trustradius.com/data-science#products. Accessed 02 February 2020.

Tunguz, B. 2018. 6 Methods for Dealing with Missing Data. Available at: https://www.zest.ai/blog/6-methods-for-dealing-with-missing-data. Accessed 13 December 2019.

Turi, 2019. Available at: https://turi.com/learn/userguide/evaluation/classification.html#accuracy. Accessed 21 December 2019.

Tutorialspoint, 2019. Available at: https://www.tutorialspoint.com/excel_data_analysis/data_analysis_process.htm. Accessed 12 December 2019.

VanderPlas, J. 2016. Python Data Science Handbook: Essential Tools for Working with Data.

Yu, Y. 2018. Mediocre at Best: Excel is Dead: You Should Learn Python. Available at: https://medium.com/mediocreatbest/excel-is-dead-you-should-learn-python-9bbfa6b82df1. Accessed 24 October 20.
