Data Mining Lab Record for IV B.Tech


  • 7/27/2019 Data Mining Lab Record for IV B.tech

    1/61

    Data Mining Lab Record JNTU WORLD [www.jntuworld.com]

    1 Bharath Annamaneni [JNTU WORLD]

    Syllabus

    Credit Risk Assessment

Description: The business of banks is making loans, so assessing the creditworthiness of an applicant is of crucial importance. You have to develop a system to help a loan officer decide whether the credit of a customer is good or bad. A bank's business rules regarding loans must consider two opposing factors. On the one hand, a bank wants to make as many loans as possible, since interest on these loans is the bank's profit source. On the other hand, a bank cannot afford to make too many bad loans; too many bad loans could lead to the collapse of the bank. The bank's loan policy must therefore involve a compromise: not too strict and not too lenient.

To do the assignment, you first and foremost need some knowledge about the world of credit. You can acquire such knowledge in a number of ways.

1. Knowledge engineering: Find a loan officer who is willing to talk. Interview her and try to represent her knowledge in a number of ways.

2. Books: Find some training manuals for loan officers or perhaps a suitable textbook on finance. Translate this knowledge from text form to production rule form.

3. Common sense: Imagine yourself as a loan officer and make up reasonable rules which can be used to judge the creditworthiness of a loan applicant.

4. Case histories: Find records of actual cases where competent loan officers correctly judged when and when not to approve a loan application.

    The German Credit Data

Actual historical credit data is not always easy to come by because of confidentiality rules. Here is one such data set, consisting of 1000 actual cases collected in Germany.

In spite of the fact that the data is German, you should probably make use of it for this assignment (unless you really can consult a real loan officer!).

There are 20 attributes used in judging a loan applicant (i.e., 7 numerical attributes and 13 categorical or nominal attributes). The goal is to classify the applicant into one of two categories: good or bad.

    Subtasks:

    1. List all the categorical (or nominal) attributes and the real valued attributes separately.

2. What attributes do you think might be crucial in making the credit assessment? Come up with some simple rules in plain English using your selected attributes.

3. One type of model that you can create is a decision tree. Train a decision tree using the complete data set as the training data. Report the model obtained after training.

4. Suppose you use your above model trained on the complete dataset, and classify credit good/bad for each of the examples in the dataset. What % of examples can you classify correctly? (This is also called testing on the training set.) Why do you think you cannot get 100% training accuracy?


    5. Is testing on the training set as you did above a good idea? Why or why not?

6. One approach for solving the problem encountered in the previous question is using cross-validation. Describe briefly what cross-validation is. Train a decision tree again using cross-validation and report your results. Does accuracy increase/decrease? Why?
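The mechanics behind subtask 6 can be sketched directly: split the data into k folds, train on k-1 of them, test on the held-out fold, and rotate until every instance has been tested exactly once. Weka's Explorer does this internally (default k = 10, with stratified folds); the rough, hypothetical Python sketch below shows only the fold bookkeeping, and the `majority_learner` stand-in is not J48.

```python
import random

def cross_validate(instances, labels, train_fn, k=10, seed=1):
    """Plain (unstratified) k-fold cross-validation accuracy estimate.

    train_fn(train_X, train_y) must return a predict(x) callable.
    """
    idx = list(range(len(instances)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]       # k roughly equal folds
    correct = 0
    for fold in folds:
        held_out = set(fold)                    # test on this fold only
        train_X = [instances[i] for i in idx if i not in held_out]
        train_y = [labels[i] for i in idx if i not in held_out]
        predict = train_fn(train_X, train_y)
        correct += sum(1 for i in fold if predict(instances[i]) == labels[i])
    return correct / len(instances)

def majority_learner(X, y):
    """Hypothetical stand-in learner: always predicts the majority class."""
    best = max(set(y), key=y.count)
    return lambda x: best
```

Because every instance is tested on a model that never saw it, the resulting estimate is usually lower, but more honest, than training-set accuracy.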

7. Check to see if the data shows a bias against foreign workers or personal status. One way to do this is to remove these attributes from the data set and see if the decision tree created in those cases is significantly different from the full-dataset case which you have already done. Did removing these attributes have any significant effect? Discuss.

8. Another question might be, do you really need to input so many attributes to get good results? Maybe only a few would do. For example, you could try just having attributes 2, 3, 5, 7, 10, 17 and 21. Try out some combinations. (You had removed two attributes in problem 7. Remember to reload the ARFF data file to get all the attributes initially before you start selecting the ones you want.)

9. Sometimes, the cost of rejecting an applicant who actually has good credit might be higher than accepting an applicant who has bad credit. Instead of counting the misclassifications equally in both cases, give a higher cost to the first case (say cost 5) and a lower cost to the second case, by using a cost matrix in Weka. Train your decision tree and report the decision tree and cross-validation results. Are they significantly different from the results obtained in problem 6?
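The effect of the cost matrix in subtask 9 is easy to state: instead of counting every error as 1, each (actual, predicted) pair is charged its own cost, and the model is judged by total cost rather than error count (in Weka this is what wrapping J48 in a cost-sensitive evaluation achieves). A minimal sketch of the bookkeeping, with hypothetical class labels:

```python
def total_cost(actual, predicted, cost_matrix):
    """Sum the misclassification costs over all instances.

    cost_matrix[(a, p)] is the cost of predicting class p
    when the true class is a.
    """
    return sum(cost_matrix[(a, p)] for a, p in zip(actual, predicted))

# Rejecting a good applicant costs 5; accepting a bad one costs 1.
costs = {("good", "good"): 0, ("good", "bad"): 5,
         ("bad", "bad"): 0, ("bad", "good"): 1}
```

Under such a matrix a classifier that makes few, cheap errors can beat one with higher plain accuracy, which is exactly why the cost-sensitive results in subtask 9 can differ from problem 6.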

10. Do you think it is a good idea to prefer simple decision trees instead of having long complex decision trees? How does the complexity of a decision tree relate to the bias of the model?

11. You can make your decision trees simpler by pruning the nodes. One approach is to use reduced error pruning. Explain this idea briefly. Try reduced error pruning for training your decision trees using cross-validation and report the decision trees you obtain. Also report your accuracy using the pruned model. Does your accuracy increase?

12. How can you convert a decision tree into if-then-else rules? Make up your own small decision tree consisting of 2-3 levels and convert it into a set of rules. There also exist different classifiers that output the model in the form of rules. One such classifier in Weka is rules.PART; train this model and report the set of rules obtained. Sometimes just one attribute can be good enough in making the decision, yes, just one! Can you predict what attribute that might be in this data set? The OneR classifier uses a single attribute to make decisions (it chooses the attribute based on minimum error). Report the rule obtained by training a OneR classifier. Rank the performance of J48, PART and OneR.
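OneR itself is simple enough to sketch: for each attribute, build a rule mapping every attribute value to the majority class among instances having that value, count the rule's errors on the training data, and keep the attribute whose rule errs least. A rough Python sketch for nominal attributes only (Weka's implementation additionally discretizes numeric attributes):

```python
from collections import Counter, defaultdict

def one_r(X, y):
    """OneR: pick the single attribute whose one-level rule has minimum error.

    X is a list of instances (lists of nominal values); y the class labels.
    Returns (attribute_index, rule) where rule maps value -> predicted class.
    """
    best = None
    for a in range(len(X[0])):
        buckets = defaultdict(Counter)          # value -> class counts
        for row, cls in zip(X, y):
            buckets[row[a]][cls] += 1
        # majority class per attribute value
        rule = {v: c.most_common(1)[0][0] for v, c in buckets.items()}
        errors = sum(1 for row, cls in zip(X, y) if rule[row[a]] != cls)
        if best is None or errors < best[0]:
            best = (errors, a, rule)
    return best[1], best[2]
```

The rule OneR reports for the credit data is exactly this structure: one chosen attribute and one predicted class per value of it.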


    1. Weka Introduction

Weka was created by researchers at the University of Waikato in Hamilton, New Zealand. Alex Seewald wrote the original command-line primer and David Scuse the original Experimenter tutorial.

It is a Java-based application: a collection of open-source machine learning algorithms. The routines (functions) are implemented as classes and logically arranged in packages. It comes with an extensive GUI interface, and Weka routines can also be used standalone via the command-line interface.

    The Graphical User Interface

The Weka GUI Chooser (class weka.gui.GUIChooser) provides a starting point for launching Weka's main GUI applications and supporting tools. If one prefers an MDI (multiple document interface) appearance, then this is provided by an alternative launcher called Main (class weka.gui.Main). The GUI Chooser consists of four buttons (one for each of the four major Weka applications) and four menus.

    The buttons can be used to start the following applications:

Explorer: An environment for exploring data with WEKA (the rest of this documentation deals with this application in more detail).

Experimenter: An environment for performing experiments and conducting statistical tests between learning schemes.


Knowledge Flow: This environment supports essentially the same functions as the Explorer but with a drag-and-drop interface. One advantage is that it supports incremental learning.

SimpleCLI: Provides a simple command-line interface that allows direct execution of WEKA commands for operating systems that do not provide their own command-line interface.

    I. Explorer

    The Graphical user interface

    1.1 Section Tabs

At the very top of the window, just below the title bar, is a row of tabs. When the Explorer is first started only the first tab is active; the others are greyed out. This is because it is necessary to open (and potentially pre-process) a data set before starting to explore the data.

    The tabs are as follows:

1. Preprocess. Choose and modify the data being acted on.
2. Classify. Train & test learning schemes that classify or perform regression.
3. Cluster. Learn clusters for the data.
4. Associate. Learn association rules for the data.
5. Select attributes. Select the most relevant attributes in the data.
6. Visualize. View an interactive 2D plot of the data.

Once the tabs are active, clicking on them flicks between different screens, on which the respective actions can be performed. The bottom area of the window (including the status box, the log button, and the Weka bird) stays visible regardless of which section you are in. The Explorer can be easily extended with custom tabs. The Wiki article Adding tabs in the Explorer [7] explains this in detail.

II. Experimenter

2.1 Introduction

The Weka Experiment Environment enables the user to create, run, modify, and analyse experiments in a more convenient manner than is possible when processing the schemes individually. For example, the user can create an experiment that runs several schemes against a series of datasets and then analyse the results to determine if one of the schemes is (statistically) better than the other schemes.


The Experiment Environment can be run from the command line using the Simple CLI. For example, the following commands could be typed into the CLI to run the OneR scheme on the Iris dataset using a basic train and test process. (Note that the commands would be typed on one line into the CLI.) While commands can be typed directly into the CLI, this technique is not particularly convenient and the experiments are not easy to modify. The Experimenter comes in two flavours, either with a simple interface that provides most of the functionality one needs for experiments, or with an interface with full access to the Experimenter's capabilities. You can choose between those two with the Experiment Configuration Mode radio buttons:

- Simple
- Advanced

Both setups allow you to set up standard experiments, that are run locally on a single machine, or remote experiments, which are distributed between several hosts. The distribution of experiments cuts down the time the experiments will take until completion, but on the other hand the setup takes more time. The next section covers the standard experiments (both simple and advanced), followed by the remote experiments and finally the analysing of the results.

III. Knowledge Flow

3.1 Introduction

The Knowledge Flow provides an alternative to the Explorer as a graphical front end to WEKA's core algorithms.


The KnowledgeFlow presents a data-flow inspired interface to WEKA. The user can select WEKA components from a palette, place them on a layout canvas and connect them together in order to form a knowledge flow for processing and analyzing data. At present, all of WEKA's classifiers, filters, clusterers, associators, loaders and savers are available in the KnowledgeFlow, along with some extra tools.

The Knowledge Flow can handle data either incrementally or in batches (the Explorer handles batch data only). Of course learning from data incrementally requires a classifier that can be updated on an instance by instance basis. Currently in WEKA there are ten classifiers that can handle data incrementally.
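The contract for an incremental (updateable) classifier is simply that it can absorb one instance at a time instead of requiring the whole batch up front. The toy Python sketch below illustrates that update pattern with a hypothetical count-based classifier; it is not one of Weka's ten updateable classifiers, just the shape of the interface.

```python
from collections import defaultdict

class IncrementalCounts:
    """Toy updateable classifier: keeps per-attribute-value class counts
    and is updated one instance at a time, the way incremental
    classifiers in a Knowledge Flow stream are."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.class_counts = defaultdict(int)

    def update(self, instance, cls):
        # called once per arriving instance; no batch retraining needed
        self.class_counts[cls] += 1
        for a, v in enumerate(instance):
            self.counts[(a, v)][cls] += 1

    def predict(self, instance):
        # score each class by its prior count plus per-value agreement
        def score(c):
            return self.class_counts[c] + sum(
                self.counts[(a, v)].get(c, 0) for a, v in enumerate(instance))
        return max(self.class_counts, key=score)
```

Because `update` touches only counters, the model stays current as each instance streams through, which is what makes the scrolling accuracy plots described below possible.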

The Knowledge Flow offers the following features:

- intuitive data flow style layout
- process data in batches or incrementally
- process multiple batches or streams in parallel (each separate flow executes in its own thread)
- process multiple streams sequentially via a user-specified order of execution
- chain filters together


- view models produced by classifiers for each fold in a cross validation
- visualize performance of incremental classifiers during processing (scrolling plots of classification accuracy, RMS error, predictions etc.)
- plugin perspectives that add major new functionality (e.g. 3D data visualization, time series forecasting environment etc.)

    IV. Simple CLI

The Simple CLI provides full access to all Weka classes, i.e., classifiers, filters, clusterers, etc., but without the hassle of the CLASSPATH (it uses the one with which Weka was started). It offers a simple Weka shell with separated command line and output.

    4.1 Commands

The following commands are available in the Simple CLI:

java <classname> [<args>]
    invokes a java class with the given arguments (if any)

break


    stops the current thread, e.g., a running classifier, in a friendly manner

kill
    stops the current thread in an unfriendly fashion

cls
    clears the output area

capabilities <classname> [<args>]
    lists the capabilities of the specified class, e.g., for a classifier with its options:
    capabilities weka.classifiers.meta.Bagging -W weka.classifiers.trees.Id3

exit
    exits the Simple CLI

help [<command>]
    provides an overview of the available commands if invoked without a command name as argument, otherwise more help on the specified command

    4.2 Invocation

In order to invoke a Weka class, one has only to prefix the class with java. This command tells the Simple CLI to load a class and execute it with any given parameters. E.g., the J48 classifier can be invoked on the iris dataset with the following command:

    java weka.classifiers.trees.J48 -t c:/temp/iris.arff

    This results in the following output:

    4.3 Command redirection

    Starting with this version of Weka one can perform a basic redirection:


java weka.classifiers.trees.J48 -t test.arff > j48.txt

Note: the > must be preceded and followed by a space, otherwise it is not recognized as redirection, but part of another parameter.

    4.4 Command completion

Commands starting with java support completion for classnames and filenames via Tab (Alt+BackSpace deletes parts of the command again). In case there are several matches, Weka lists all possible matches.

Package name completion:

java weka.cl

results in the following output of possible matches of package names:

Possible matches:
weka.classifiers
weka.clusterers

Classname completion:

java weka.classifiers.meta.A

lists the following classes:

Possible matches:
weka.classifiers.meta.AdaBoostM1
weka.classifiers.meta.AdditiveRegression
weka.classifiers.meta.AttributeSelectedClassifier

Filename completion:

In order for Weka to determine whether the string under the cursor is a classname or a filename, filenames need to be absolute (Unix/Linux: /some/path/file; Windows: C:\Some\Path\file) or relative and starting with a dot (Unix/Linux: ./some/other/path/file; Windows: .\Some\Other\Path\file).


    2. ARFF File Format

An ARFF (= Attribute-Relation File Format) file is an ASCII text file that describes a list of instances sharing a set of attributes.

ARFF files are not the only format one can load; all files that can be converted with Weka's core converters can be loaded as well. The following formats are currently supported:

- ARFF (+ compressed)
- C4.5
- CSV
- libsvm
- binary serialized instances
- XRFF (+ compressed)

2.1 Overview

ARFF files have two distinct sections. The first section is the Header information, which is followed by the Data information. The Header of the ARFF file contains the name of the relation, a list of the attributes (the columns in the data), and their types.

An example header on the standard IRIS dataset looks like this:

% 1. Title: Iris Plants Database
%
% 2. Sources:
% (a) Creator: R.A. Fisher
% (b) Donor: Michael Marshall (MARSHALL%[email protected])
% (c) Date: July, 1988
%
@RELATION iris

@ATTRIBUTE sepallength NUMERIC
@ATTRIBUTE sepalwidth NUMERIC
@ATTRIBUTE petallength NUMERIC
@ATTRIBUTE petalwidth NUMERIC
@ATTRIBUTE class {Iris-setosa,Iris-versicolor,Iris-virginica}

The Data of the ARFF file looks like the following:

@DATA
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa
4.4,2.9,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa

Lines that begin with a % are comments.
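The two-section layout makes ARFF straightforward to read programmatically: skip % comments, collect @attribute declarations until @data, then split each remaining line on commas. A minimal, hypothetical reader in Python that handles only numeric and nominal values plus ? for missing values (Weka's own loaders handle far more, including quoting, strings and dates):

```python
def parse_arff(text):
    """Minimal ARFF reader: returns (relation, attributes, rows)."""
    relation, attributes, rows = None, [], []
    in_data = False
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("%"):        # blank lines and comments
            continue
        low = line.lower()
        if low.startswith("@relation"):
            relation = line.split(None, 1)[1]
        elif low.startswith("@attribute"):
            name, dtype = line.split(None, 2)[1:]   # name and datatype text
            attributes.append((name, dtype))
        elif low.startswith("@data"):
            in_data = True
        elif in_data:
            vals = []
            for (name, dtype), raw in zip(attributes, line.split(",")):
                if raw == "?":
                    vals.append(None)               # missing value
                elif dtype.lower() == "numeric":
                    vals.append(float(raw))
                else:
                    vals.append(raw)                # nominal: keep the token
            rows.append(vals)
    return relation, attributes, rows
```

Note how the declarations are case insensitive (hence the lower-casing), matching the statement below that @RELATION, @ATTRIBUTE and @DATA may appear in any case.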


    The @RELATION, @ATTRIBUTE and @DATA declarations are case insensitive.

    The ARFF Header Section

The ARFF Header section of the file contains the relation declaration and attribute declarations.

The @relation Declaration

The relation name is defined as the first line in the ARFF file. The format is:

@relation <relation-name>

where <relation-name> is a string. The string must be quoted if the name includes spaces.

The @attribute Declarations

Attribute declarations take the form of an ordered sequence of @attribute statements. Each attribute in the data set has its own @attribute statement which uniquely defines the name of that attribute and its data type. The order in which the attributes are declared indicates the column position in the data section of the file. For example, if an attribute is the third one declared then Weka expects that all that attribute's values will be found in the third comma-delimited column.

The format for the @attribute statement is:

@attribute <attribute-name> <datatype>

where the <attribute-name> must start with an alphabetic character. If spaces are to be included in the name then the entire name must be quoted.

The <datatype> can be any of the four types supported by Weka:

- numeric (integer and real are treated as numeric)
- <nominal-specification>
- string
- date [<date-format>]
- relational for multi-instance data (for future use)

where <nominal-specification> and <date-format> are defined below. The keywords numeric, real, integer, string and date are case insensitive.

Numeric attributes

Numeric attributes can be real or integer numbers.

    Nominal attributes

Nominal values are defined by providing a <nominal-specification> listing the possible values:

{<nominal-name1>, <nominal-name2>, <nominal-name3>, ...}

For example, the class value of the Iris dataset can be defined as follows:

@ATTRIBUTE class {Iris-setosa,Iris-versicolor,Iris-virginica}

Values that contain spaces must be quoted.


    String attributes

String attributes allow us to create attributes containing arbitrary textual values. This is very useful in text-mining applications, as we can create datasets with string attributes, then write Weka filters to manipulate strings (like the StringToWordVector filter). String attributes are declared as follows:

    @ATTRIBUTE LCC string

    Date attributes

Date attribute declarations take the form:

@attribute <name> date [<date-format>]

where <name> is the name for the attribute and <date-format> is an optional string specifying how date values should be parsed and printed (this is the same format used by SimpleDateFormat). The default format string accepts the ISO-8601 combined date and time format: yyyy-MM-dd'T'HH:mm:ss. Dates must be specified in the data section as the corresponding string representations of the date/time (see example below).

    Relational attributes

Relational attribute declarations take the form:

@attribute <name> relational
  <further attribute definitions>
@end <name>

For the multi-instance dataset MUSK1 the definition would look like this (... denotes an omission):

@attribute molecule_name {MUSK-jf78,...,NON-MUSK-199}
@attribute bag relational
  @attribute f1 numeric
  ...
  @attribute f166 numeric
@end bag
@attribute class {0,1}
...

The ARFF Data Section

The ARFF Data section of the file contains the data declaration line and the actual instance lines.

The @data Declaration

The @data declaration is a single line denoting the start of the data segment in the file. The format is:

@data

The instance data

Each instance is represented on a single line, with carriage returns denoting the end of the instance. A percent sign (%) introduces a comment, which continues to the end of the line.

Attribute values for each instance are delimited by commas. They must appear in the order that they were declared in the header section (i.e. the data corresponding to the nth @attribute declaration is always found in the nth field of the instance). Missing values are represented by a single question mark, as in:

    @data

    4.4,?,1.5,?,Iris-setosa


Values of string and nominal attributes are case sensitive, and any that contain space or the comment-delimiter character % must be quoted. (The code suggests that double-quotes are acceptable and that a backslash will escape individual characters.)

An example follows:

@relation LCCvsLCSH
@attribute LCC string
@attribute LCSH string
@data
AG5, 'Encyclopedias and dictionaries.;Twentieth century.'
AS262, 'Science -- Soviet Union -- History.'
AE5, 'Encyclopedias and dictionaries.'
AS281, 'Astronomy, Assyro-Babylonian.;Moon -- Phases.'
AS281, 'Astronomy, Assyro-Babylonian.;Moon -- Tables.'

Dates must be specified in the data section using the string representation specified in the attribute declaration.

For example:

@RELATION Timestamps

@ATTRIBUTE timestamp DATE "yyyy-MM-dd HH:mm:ss"

@DATA
"2001-04-03 12:12:12"
"2001-05-03 12:59:55"
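The format string follows Java's SimpleDateFormat conventions, so "yyyy-MM-dd HH:mm:ss" means a four-digit year, two-digit month and day, and a 24-hour time. As an illustrative sketch (not part of Weka), the equivalent strptime pattern in Python would be "%Y-%m-%d %H:%M:%S", which can parse the quoted data values above:

```python
from datetime import datetime

# SimpleDateFormat "yyyy-MM-dd HH:mm:ss" corresponds to this strptime pattern
FMT = "%Y-%m-%d %H:%M:%S"

def parse_timestamp(value):
    """Parse one ARFF date value, stripping the surrounding double quotes."""
    return datetime.strptime(value.strip('"'), FMT)
```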

Relational data must be enclosed within double quotes. For example an instance of the MUSK1 dataset (... denotes an omission):

    MUSK-188,"42,...,30",1


    3. Preprocess Tab

    1. Loading Data

The first four buttons at the top of the preprocess section enable you to load data into WEKA:

    1. Open file.... Brings up a dialog box allowing you to browse for the data file on the local file system.

    2. Open URL.... Asks for a Uniform Resource Locator address for where the data is stored.

3. Open DB.... Reads data from a database. (Note that to make this work you might have to edit the file in weka/experiment/DatabaseUtils.props.)

4. Generate.... Enables you to generate artificial data from a variety of Data Generators.

Using the Open file... button you can read files in a variety of formats: WEKA's ARFF format, CSV format, C4.5 format, or serialized Instances format. ARFF files typically have a .arff extension, CSV files a .csv extension, C4.5 files a .data and .names extension, and serialized Instances objects a .bsi extension.

The Current Relation: Once some data has been loaded, the Preprocess panel shows a variety of information. The Current relation box (the current relation is the currently loaded data, which can be interpreted as a single relational table in database terminology) has three entries:

1. Relation. The name of the relation, as given in the file it was loaded from. Filters (described below) modify the name of a relation.


    2. Instances. The number of instances (data points/records) in the data.

    3. Attributes. The number of attributes (features) in the data.

    2.3 Working with Attributes

Below the Current relation box is a box titled Attributes. There are four buttons, and beneath them is a list of the attributes in the current relation. The list has three columns:

1. No. A number that identifies the attribute, in the order in which attributes are specified in the data file.

2. Selection tick boxes. These allow you to select which attributes are present in the relation.

3. Name. The name of the attribute, as it was declared in the data file.

When you click on different rows in the list of attributes, the fields change in the box to the right titled Selected attribute. This box displays the characteristics of the currently highlighted attribute in the list:

1. Name. The name of the attribute, the same as that given in the attribute list.
2. Type. The type of attribute, most commonly Nominal or Numeric.
3. Missing. The number (and percentage) of instances in the data for which this attribute is missing (unspecified).
4. Distinct. The number of different values that the data contains for this attribute.
5. Unique. The number (and percentage) of instances in the data having a value for this attribute that no other instances have.

Below these statistics is a list showing more information about the values stored in this attribute, which differs depending on its type. If the attribute is nominal, the list consists of each possible value for the attribute along with the number of instances that have that value. If the attribute is numeric, the list gives four statistics describing the distribution of values in the data: the minimum, maximum, mean and standard deviation. And below these statistics there is a coloured histogram, colour-coded according to the attribute chosen as the Class using the box above the histogram. (This box will bring up a drop-down list of available selections when clicked.) Note that only nominal Class attributes will result in a colour-coding. Finally, after pressing the Visualize All button, histograms for all the attributes in the data are shown in a separate window.
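The counts and numeric statistics shown in the Selected attribute box can all be reproduced from a single column of values. A sketch of the computation (assuming the sample standard deviation with n-1 in the denominator; None stands in for a missing '?'):

```python
import math
from collections import Counter

def attribute_stats(values):
    """Per-attribute statistics like those in the Selected attribute box."""
    present = [v for v in values if v is not None]
    counts = Counter(present)
    stats = {
        "missing": len(values) - len(present),          # unspecified values
        "distinct": len(counts),                        # different values
        "unique": sum(1 for c in counts.values() if c == 1),
    }
    if present and all(isinstance(v, (int, float)) for v in present):
        mean = sum(present) / len(present)
        var = (sum((v - mean) ** 2 for v in present) / (len(present) - 1)
               if len(present) > 1 else 0.0)
        stats.update(minimum=min(present), maximum=max(present),
                     mean=mean, stdev=math.sqrt(var))
    return stats
```

For a nominal column the same Counter already holds the value/count list that the panel displays.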

Returning to the attribute list, to begin with all the tick boxes are unticked. They can be toggled on/off by clicking on them individually. The four buttons above can also be used to change the selection:


1. All. All boxes are ticked.
2. None. All boxes are cleared (unticked).
3. Invert. Boxes that are ticked become unticked and vice versa.
4. Pattern. Enables the user to select attributes based on a Perl 5 regular expression. E.g., .*id selects all attributes whose name ends with id.
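The Pattern button's behaviour amounts to matching each attribute name in full against the expression. Python's re syntax is close to Perl 5 for simple patterns like this, so the selection logic can be sketched as:

```python
import re

def select_by_pattern(attribute_names, pattern):
    """Mimic the Pattern button: keep attributes whose full name matches
    the regular expression (Perl 5 style in Weka; Python's re is close
    enough for simple patterns)."""
    rx = re.compile(pattern)
    return [name for name in attribute_names if rx.fullmatch(name)]
```

With the pattern .*id this selects hypothetical names such as customer_id while leaving age unticked.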


Once the desired attributes have been selected, they can be removed by clicking the Remove button below the list of attributes. Note that this can be undone by clicking the Undo button, which is located next to the Edit button in the top-right corner of the Preprocess panel.

    Working With Filters

The preprocess section allows filters to be defined that transform the data in various ways. The Filter box is used to set up the filters that are required. At the left of the Filter box is a Choose button. By clicking this button it is possible to select one of the filters in WEKA. Once a filter has been selected, its name and options are shown in the field next to the Choose button. Clicking on this box with the left mouse button brings up a GenericObjectEditor dialog box. A click with the right mouse button (or Alt+Shift+left click) brings up a menu where you can choose either to display the properties in a GenericObjectEditor dialog box, or to copy the current setup string to the clipboard.

    The GenericObjectEditor Dialog Box

The GenericObjectEditor dialog box lets you configure a filter. The same kind of dialog box is used to configure other objects, such as classifiers and clusterers (see below). The fields in the window reflect the available options. Right-clicking (or Alt+Shift+Left-Click) on such a field will bring up a popup menu, listing the following options:


1. Show properties... has the same effect as left-clicking on the field, i.e., a dialog appears allowing you to alter the settings.

2. Copy configuration to clipboard copies the currently displayed configuration string to the system's clipboard and therefore can be used anywhere else in WEKA or in the console. This is rather handy if you have to set up complicated, nested schemes.

3. Enter configuration... is the receiving end for configurations that got copied to the clipboard earlier on. In this dialog you can enter a class name followed by options (if the class supports these). This also allows you to transfer a filter setting from the Preprocess panel to a FilteredClassifier used in the Classify panel.

Left-clicking on any of these gives an opportunity to alter the filter's settings. For example, the setting may take a text string, in which case you type the string into the text field provided. Or it may give a drop-down box listing several states to choose from. Or it may do something else, depending on the information required. Information on the options is provided in a tool tip if you let the mouse pointer hover over the corresponding field. More information on the filter and its options can be obtained by clicking on the More button in the About panel at the top of the GenericObjectEditor window.

Some objects display a brief description of what they do in an About box, along with a More button. Clicking on the More button brings up a window describing what the different options do. Others have an additional button, Capabilities, which lists the types of attributes and classes the object can handle.

At the bottom of the GenericObjectEditor dialog are four buttons. The first two, Open... and Save..., allow object configurations to be stored for future use. The Cancel button backs out without remembering any changes that have been made. Once you are happy with the object and settings you have chosen, click OK to return to the main Explorer window.

    Applying Filters

Once you have selected and configured a filter, you can apply it to the data by pressing the Apply button at the right end of the Filter panel in the Preprocess panel. The Preprocess panel will then show the transformed data. The change can be undone by pressing the Undo button. You can also use the Edit... button to modify your data manually in a dataset editor. Finally, the Save... button at the top right of the Preprocess panel saves the current version of the relation in file formats that can represent the relation, allowing it to be kept for future use.

    Note: Some of the filters behave differently depending on whether a class attribute has been set or not (using the box above the histogram, which will bring up a drop-down list of possible selections when clicked). In particular, the supervised filters require a class attribute to be set, and some of the unsupervised attribute filters will skip the class attribute if one is set. Note that it is also possible to set Class to None, in which case no class is set.

    w.jntuworld.com

    www.jntuworld.com

    www.jwjobs.net

  • 7/27/2019 Data Mining Lab Record for IV B.tech

    18/61

    Data Mining Lab Record JNTU WORLD [www.jntuworld.com]

    18 Bharath Annamaneni [JNTU WORLD]

    4. Classification Tab

    5.3.1 Selecting a Classifier

    At the top of the classify section is the Classifier box. This box has a text field that gives the name of the currently selected classifier, and its options. Clicking on the text box with the left mouse button brings up a GenericObjectEditor dialog box, just the same as for filters, that you can use to configure the options of the current classifier. With a right click (or Alt+Shift+left click) you can once again copy the setup string to the clipboard or display the properties in a GenericObjectEditor dialog box. The Choose button allows you to choose one of the classifiers that are available in WEKA.

    5.3.2 Test Options

    The result of applying the chosen classifier will be tested according to the options that are set by clicking in the Test options box. There are four test modes:

    1. Use training set. The classifier is evaluated on how well it predicts the class of the instances it was trained on.

    2. Supplied test set. The classifier is evaluated on how well it predicts the class of a set of instances loaded from a file. Clicking the Set... button brings up a dialog allowing you to choose the file to test on.

    3. Cross-validation. The classifier is evaluated by cross-validation, using the number of folds that are entered in the Folds text field.

    4. Percentage split. The classifier is evaluated on how well it predicts a certain percentage of the data which is held out for testing. The amount of data held out depends on the value entered in the % field.

    Note: No matter which evaluation method is used, the model that is output is always the one built from all the training data. Further testing options can be set by clicking on the More options... button:
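How the data is divided in the percentage-split and cross-validation test modes can be sketched in a few lines of Python (a toy illustration of the idea, not WEKA's actual implementation; the seed parameter plays the role of the Random seed for xval / % Split option described below):

```python
import random

def percentage_split(instances, train_pct=66, seed=1):
    """Shuffle, then hold out the last (100 - train_pct)% for testing."""
    data = list(instances)
    random.Random(seed).shuffle(data)
    cut = round(len(data) * train_pct / 100)
    return data[:cut], data[cut:]

def cross_validation_folds(instances, folds=10, seed=1):
    """Yield (train, test) pairs; every instance is tested exactly once."""
    data = list(instances)
    random.Random(seed).shuffle(data)
    for k in range(folds):
        test = data[k::folds]          # every folds-th shuffled instance
        train = [x for i, x in enumerate(data) if i % folds != k]
        yield train, test

train, test = percentage_split(range(100), train_pct=66)
assert len(train) == 66 and len(test) == 34
```

With cross-validation, the union of the test folds covers every instance exactly once, which is why WEKA can report one prediction per instance in that mode.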


    1. Output model. The classification model on the full training set is output so that it can be viewed, visualized, etc. This option is selected by default.

    2. Output per-class stats. The precision/recall and true/false statistics for each class are output. This option is also selected by default.

    3. Output entropy evaluation measures. Entropy evaluation measures are included in the output. This option is not selected by default.

    4. Output confusion matrix. The confusion matrix of the classifier's predictions is included in the output. This option is selected by default.

    5. Store predictions for visualization. The classifier's predictions are remembered so that they can be visualized. This option is selected by default.

    6. Output predictions. The predictions on the evaluation data are output.

    Note that in the case of a cross-validation the instance numbers do not correspond to the location in the data!


    7. Output additional attributes. If additional attributes need to be output alongside the predictions, e.g., an ID attribute for tracking misclassifications, then the index of this attribute can be specified here. The usual Weka ranges are supported, first and last are therefore valid indices as well (example: first-3,6,8,12-last).

    8. Cost-sensitive evaluation. The errors are evaluated with respect to a cost matrix. The Set... button allows you to specify the cost matrix used.

    9. Random seed for xval / % Split. This specifies the random seed used when randomizing the data before it is divided up for evaluation purposes.

    10. Preserve order for % Split. This suppresses the randomization of the data before splitting into train and test set.

    11. Output source code. If the classifier can output the built model as Java source code, you can specify the class name here. The code will be printed in the Classifier output area.
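The cost-sensitive evaluation of option 8 boils down to weighting each cell of the confusion matrix by the matching cost-matrix entry. A minimal sketch (the example matrices are illustrative; rows are taken as the actual class and columns as the predicted class, which is an assumption about orientation):

```python
def total_cost(confusion, costs):
    """Sum of (count * cost) over all actual/predicted class pairs.
    Diagonal costs are normally 0, so correct predictions are free."""
    return sum(confusion[i][j] * costs[i][j]
               for i in range(len(confusion))
               for j in range(len(confusion[i])))

# rows = actual class, columns = predicted class
confusion = [[669, 31],     # actual good
             [114, 186]]    # actual bad
costs     = [[0.0, 1.0],    # misclassifying good as bad costs 1
             [5.0, 0.0]]    # misclassifying bad as good costs 5
print(total_cost(confusion, costs))  # 31*1 + 114*5 = 601.0
```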

    5.3.3 The Class Attribute

    The classifiers in WEKA are designed to be trained to predict a single class attribute, which is the target for prediction. Some classifiers can only learn nominal classes; others can only learn numeric classes (regression problems); still others can learn both.

    By default, the class is taken to be the last attribute in the data. If you want to train a classifier to predict a different attribute, click on the box below the Test options box to bring up a drop-down list of attributes to choose from.

    5.3.4 Training a Classifier

    Once the classifier, test options and class have all been set, the learning process is started by clicking on the Start button. While the classifier is busy being trained, the little bird moves around. You can stop the training process at any time by clicking on the Stop button. When training is complete, several things happen. The Classifier output area to the right of the display is filled with text describing the results of training and testing. A new entry appears in the Result list box. We look at the result list below; but first we investigate the text that has been output.

    5.3.5 The Classifier Output Text

    The text in the Classifier output area has scroll bars allowing you to browse the results. Clicking with the left mouse button into the text area, while holding Alt and Shift, brings up a dialog that enables you to save the displayed output in a variety of formats (currently, BMP, EPS, JPEG and PNG). Of course, you can also resize the Explorer window to get a larger display area. The output is split into several sections:

    1. Run information. A list of information giving the learning scheme options, relation name, instances, attributes and test mode that were involved in the process.


    2. Classifier model (full training set). A textual representation of the classification model that was produced on the full training data.

    3. The results of the chosen test mode are broken down thus:

    4. Summary. A list of statistics summarizing how accurately the classifier was able to predict the true class of the instances under the chosen test mode.

    5. Detailed Accuracy By Class. A more detailed per-class breakdown of the classifier's prediction accuracy.

    6. Confusion Matrix. Shows how many instances have been assigned to each class. Elements show the number of test examples whose actual class is the row and whose predicted class is the column.

    7. Source code (optional). This section lists the Java source code if one chose Output source code in the More options dialog.
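The per-class figures reported under Detailed Accuracy By Class follow directly from the confusion matrix. A sketch of the arithmetic (the example counts are chosen to be consistent with the J48 training-set figures shown later in this record, not copied from WEKA):

```python
def per_class_stats(confusion, cls):
    """TP rate (recall), FP rate, precision and F-measure for class index
    `cls`, where confusion[actual][predicted] holds instance counts."""
    n = len(confusion)
    tp = confusion[cls][cls]
    fn = sum(confusion[cls][j] for j in range(n)) - tp
    fp = sum(confusion[i][cls] for i in range(n)) - tp
    tn = sum(map(sum, confusion)) - tp - fn - fp
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    fp_rate = fp / (fp + tn)
    f_measure = 2 * precision * recall / (precision + recall)
    return recall, fp_rate, precision, f_measure

conf = [[669, 31], [114, 186]]   # counts consistent with the J48 run below
recall, fp_rate, precision, f = per_class_stats(conf, 0)
print(round(recall, 3), round(fp_rate, 3), round(precision, 3))  # 0.956 0.38 0.854
```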


    5. Clustering Tab

    5.4.1 Selecting a Clusterer

    By now you will be familiar with the process of selecting and configuring objects. Clicking on the clustering scheme listed in the Clusterer box at the top of the window brings up a GenericObjectEditor dialog with which to choose a new clustering scheme.

    5.4.2 Cluster Modes

    The Cluster mode box is used to choose what to cluster and how to evaluate the results. The first three options are the same as for classification: Use training set, Supplied test set and Percentage split (Section 5.3.1), except that now the data is assigned to clusters instead of trying to predict a specific class. The fourth mode, Classes to clusters evaluation, compares how well the chosen clusters match up with a pre-assigned class in the data. The drop-down box below this option selects the class, just as in the Classify panel.

    An additional option in the Cluster mode box, the Store clusters for visualization tick box, determines whether or not it will be possible to visualize the clusters once training is complete. When dealing with datasets that are so large that memory becomes a problem it may be helpful to disable this option.
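The Classes to clusters evaluation mode described above can be sketched as: give each cluster the majority class of its members, then count how many instances agree with their cluster's class (toy data, not WEKA's code):

```python
from collections import Counter

def classes_to_clusters_accuracy(cluster_ids, class_labels):
    """Map each cluster to its most frequent class, then score the fraction
    of instances whose class matches their cluster's assigned class."""
    by_cluster = {}
    for c, y in zip(cluster_ids, class_labels):
        by_cluster.setdefault(c, []).append(y)
    majority = {c: Counter(ys).most_common(1)[0][0]
                for c, ys in by_cluster.items()}
    hits = sum(majority[c] == y for c, y in zip(cluster_ids, class_labels))
    return hits / len(class_labels)

clusters = [0, 0, 0, 1, 1, 1]
classes  = ["good", "good", "bad", "bad", "bad", "good"]
print(classes_to_clusters_accuracy(clusters, classes))  # 4/6 matched
```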


    5.4.3 Ignoring Attributes

    Often, some attributes in the data should be ignored when clustering. The Ignore attributes button brings up a small window that allows you to select which attributes are ignored. Clicking on an attribute in the window highlights it, holding down the SHIFT key selects a range of consecutive attributes, and holding down CTRL toggles individual attributes on and off. To cancel the selection, back out with the Cancel button. To activate it, click the Select button. The next time clustering is invoked, the selected attributes are ignored.

    5.4.4 Working with Filters

    The FilteredClusterer meta-clusterer offers the user the possibility to apply filters directly before the clusterer is learned. This approach eliminates the manual application of a filter in the Preprocess panel, since the data gets processed on the fly. This is useful if one needs to try out different filter setups.

    5.4.5 Learning Clusters

    The Cluster section, like the Classify section, has Start/Stop buttons, a result text area and a result list. These all behave just like their classification counterparts. Right-clicking an entry in the result list brings up a similar menu, except that it shows only two visualization options: Visualize cluster assignments and Visualize tree. The latter is grayed out when it is not applicable.


    6. Associate Tab

    5.5.1 Setting Up

    This panel contains schemes for learning association rules, and the learners are chosen and configured in the same way as the clusterers, filters, and classifiers in the other panels.

    5.5.2 Learning Associations

    Once appropriate parameters for the association rule learner have been set, click the Start button. When complete, right-clicking on an entry in the result list allows the results to be viewed or saved.


    7. Selecting Attributes Tab

    5.6.1 Searching and Evaluating

    Attribute selection involves searching through all possible combinations of attributes in the data to find which subset of attributes works best for prediction. To do this, two objects must be set up: an attribute evaluator and a search method. The evaluator determines what method is used to assign a worth to each subset of attributes. The search method determines what style of search is performed.
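The evaluator/search split can be sketched as two pluggable pieces: a function that scores an attribute subset, and a strategy that decides which subsets to try. Below is a greedy forward search with a stand-in evaluator (the scoring function here is a hypothetical toy; real evaluators such as CfsSubsetEval compute a subset's merit from the data):

```python
def forward_search(attributes, evaluate):
    """Greedy forward selection: grow the subset one attribute at a time,
    keeping each addition only if it improves the evaluator's score."""
    selected, best = [], evaluate([])
    improved = True
    while improved:
        improved = False
        for a in attributes:
            if a in selected:
                continue
            score = evaluate(selected + [a])
            if score > best:
                selected, best = selected + [a], score
                improved = True
    return selected, best

# stand-in evaluator: pretend attributes 2 and 5 carry all the signal,
# with a small penalty per attribute to favour small subsets
toy_merit = lambda subset: sum(1 for a in subset if a in (2, 5)) - 0.01 * len(subset)
subset, merit = forward_search(range(1, 8), toy_merit)
print(sorted(subset))  # [2, 5]
```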


    5.6.2 Options

    The Attribute Selection Mode box has two options:

    1. Use full training set. The worth of the attribute subset is determined using the full set of training data.

    2. Cross-validation. The worth of the attribute subset is determined by a process of cross-validation. The Fold and Seed fields set the number of folds to use and the random seed used when shuffling the data. As with Classify (Section 5.3.1), there is a drop-down box that can be used to specify which attribute to treat as the class.

    5.6.3 Performing Selection

    Clicking Start starts running the attribute selection process. When it is finished, the results are output into the result area, and an entry is added to the result list. Right-clicking on the result list gives several options. The first three (View in main window, View in separate window and Save result buffer) are the same as for the classify panel. It is also possible to Visualize reduced data, or if you have used an attribute transformer such as Principal Components, Visualize transformed data. The reduced/transformed data can be saved to a file with the Save reduced data... or Save transformed data... option.

    In case one wants to reduce/transform a training and a test set at the same time and not use the AttributeSelectedClassifier from the classifier panel, it is best to use the AttributeSelection filter (a supervised attribute filter) in batch mode (-b) from the command line or in the Simple CLI. The batch mode allows one to specify an additional input and output file pair (options -r and -s), which is processed with the filter setup that was determined based on the training data.


    8. Visualizing Tab

    WEKA's visualization section allows you to visualize 2D plots of the current relation.

    5.7.1 The scatter plot matrix

    When you select the Visualize panel, it shows a scatter plot matrix for all the attributes, colour coded according to the currently selected class. It is possible to change the size of each individual 2D plot and the point size, and to randomly jitter the data (to uncover obscured points). It is also possible to change the attribute used to colour the plots, to select only a subset of attributes for inclusion in the scatter plot matrix, and to sub sample the data. Note that changes will only come into effect once the Update button has been pressed.

    5.7.2 Selecting an individual 2D scatter plot

    When you click on a cell in the scatter plot matrix, this will bring up a separate window with a visualization of the scatter plot you selected. (We described above how to visualize particular results in a separate window, for example classifier errors; the same visualization controls are used here.) Data points are plotted in the main area of the window. At the top are two drop-down list buttons for selecting the axes to plot. The one on the left shows which attribute is used for the x-axis; the one on the right shows which is used for the y-axis.

    Beneath the x-axis selector is a drop-down list for choosing the colour scheme. This allows you to colour the points based on the attribute selected. Below the plot area, a legend describes what values the colours correspond to. If the values are discrete, you can modify the colour used for each one by clicking on them and making an appropriate selection in the window that pops up.

    To the right of the plot area is a series of horizontal strips. Each strip represents an attribute, and the dots within it show the distribution of values of the attribute. These values are randomly scattered vertically to help you see concentrations of points. You can choose what axes are used in the main graph by clicking on these strips. Left-clicking an attribute strip changes the x-axis to that attribute, whereas right-clicking changes the y-axis. The X and Y written beside the strips show what the current axes are (B is used for both X and Y).

    Above the attribute strips is a slider labelled Jitter, which is a random displacement given to all points in the plot. Dragging it to the right increases the amount of jitter, which is useful for spotting concentrations of points. Without jitter, a million instances at the same point would look no different to just a single lonely instance.
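Jitter is just a small bounded random displacement added to each point's plotted position; a minimal sketch of the idea:

```python
import random

def jittered(points, amount, seed=1):
    """Displace each (x, y) by up to +/-amount in both directions, so that
    coincident points spread into a small visible cloud."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-amount, amount),
             y + rng.uniform(-amount, amount)) for x, y in points]

# five instances at exactly the same point become five distinct dots
cloud = jittered([(1.0, 1.0)] * 5, amount=0.05)
assert len(set(cloud)) == 5
```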

    5.7.3 Selecting Instances

    There may be situations where it is helpful to select a subset of the data using the visualization tool. (A special case of this is the User Classifier in the Classify panel, which lets you build your own classifier by interactively selecting instances.)

    Below the y-axis selector button is a drop-down list button for choosing a selection method. A group of data points can be selected in four ways:

    1. Select Instance. Clicking on an individual data point brings up a window listing its attributes. If more than one point appears at the same location, more than one set of attributes is shown.

    2. Rectangle. You can create a rectangle, by dragging, that selects the points inside it.

    3. Polygon. You can build a free-form polygon that selects the points inside it. Left-click to add vertices to the polygon, right-click to complete it. The polygon will always be closed off by connecting the first point to the last.

    4. Polyline. You can build a polyline that distinguishes the points on one side from those on the other. Left-click to add vertices to the polyline, right-click to finish. The resulting shape is open (as opposed to a polygon, which is always closed).

    Once an area of the plot has been selected using Rectangle, Polygon or Polyline, it turns grey. At this point, clicking the Submit button removes all instances from the plot except those within the grey selection area. Clicking on the Clear button erases the selected area without affecting the graph.

    Once any points have been removed from the graph, the Submit button changes to a Reset button. This button undoes all previous removals and returns you to the original graph with all points included. Finally, clicking the Save button allows you to save the currently visible instances to a new ARFF file.
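The ARFF file produced by Save is a plain-text header of attribute declarations followed by the data rows. A minimal writer sketch (the relation and attribute names are illustrative, not tied to any particular run):

```python
def write_arff(path, relation, attributes, rows):
    """attributes: list of (name, type) where type is the string 'numeric'
    or a list of nominal values; rows: list of value tuples."""
    with open(path, "w") as f:
        f.write(f"@relation {relation}\n\n")
        for name, typ in attributes:
            spec = typ if typ == "numeric" else "{" + ",".join(typ) + "}"
            f.write(f"@attribute {name} {spec}\n")
        f.write("\n@data\n")
        for row in rows:
            f.write(",".join(str(v) for v in row) + "\n")

write_arff("subset.arff", "german_credit_subset",
           [("duration", "numeric"), ("class", ["good", "bad"])],
           [(6, "good"), (48, "bad")])
```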


    9. Description of German Credit Data.

    Credit Risk Assessment

    Description: The business of banks is making loans. Assessing the credit worthiness of an applicant is of crucial importance. You have to develop a system to help a loan officer decide whether the credit of a customer is good or bad. A bank's business rules regarding loans must consider two opposing factors. On the one hand, a bank wants to make as many loans as possible. Interest on these loans is the bank's profit source. On the other hand, a bank cannot afford to make too many bad loans. Too many bad loans could lead to the collapse of the bank. The bank's loan policy must involve a compromise: not too strict and not too lenient.

    To do the assignment, you first and foremost need some knowledge about the world of credit. You can acquire such knowledge in a number of ways.

    1. Knowledge engineering: Find a loan officer who is willing to talk. Interview her and try to represent her knowledge in a number of ways.

    2. Books: Find some training manuals for loan officers or perhaps a suitable textbook on finance. Translate this knowledge from text form to production rule form.

    3. Common sense: Imagine yourself as a loan officer and make up reasonable rules which can be used to judge the credit worthiness of a loan applicant.

    4. Case histories: Find records of actual cases where competent loan officers correctly judged when and when not to approve a loan application.

    The German Credit Data

    Actual historical credit data is not always easy to come by because of confidentiality rules. Here is one such data set, consisting of 1000 actual cases collected in Germany.

    In spite of the fact that the data is German, you should probably make use of it for this assignment (unless you really can consult a real loan officer!).

    There are 20 attributes used in judging a loan applicant (i.e., 7 numerical attributes and 13 categorical or nominal attributes). The goal is to classify the applicant into one of two categories: good or bad. The attributes present in the German credit data are:

    1. Checking_Status
    2. Duration
    3. Credit_history
    4. Purpose
    5. Credit_amount
    6. Savings_status
    7. Employment
    8. Installment_Commitment
    9. Personal_status
    10. Other_parties
    11. Residence_since
    12. Property_Magnitude
    13. Age


    14. Other_payment_plans
    15. Housing
    16. Existing_credits
    17. Job
    18. Num_dependents
    19. Own_telephone
    20. Foreign_worker
    21. Class

    Tasks (Turn in your answers to the following tasks)

    1. List all the categorical (or nominal) attributes and the real valued attributes separately.

    Ans) The following are the Categorical (or Nominal) attributes:

    1. Checking_Status
    2. Credit_history
    3. Purpose
    4. Savings_status
    5. Employment
    6. Personal_status
    7. Other_parties
    8. Property_Magnitude
    9. Other_payment_plans
    10. Housing
    11. Job
    12. Own_telephone
    13. Foreign_worker

    The following are the Numerical attributes:

    1. Duration
    2. Credit_amount
    3. Installment_Commitment
    4. Residence_since
    5. Age
    6. Existing_credits
    7. Num_dependents

    2. What attributes do you think might be crucial in making the credit assessment? Come up with some simple rules in plain English using your selected attributes.

    Ans) The following attributes may be crucial in making the credit assessment:
    1. Credit_amount
    2. Age
    3. Job
    4. Savings_status
    5. Existing_credits
    6. Installment_commitment
    7. Property_magnitude

    3. One type of model that you can create is a Decision Tree. Train a decision tree using the complete data set as the training data. Report the model obtained after training.

    Ans) We created a decision tree by using the J48 technique with the complete dataset as the training data. The following model was obtained after training.

    === Run information ===

    Scheme:       weka.classifiers.trees.J48 -C 0.25 -M 2
    Relation:     german_credit
    Instances:    1000

    Attributes:   21
                  checking_status
                  duration
                  credit_history
                  purpose
                  credit_amount
                  savings_status
                  employment
                  installment_commitment
                  personal_status
                  other_parties
                  residence_since
                  property_magnitude
                  age
                  other_payment_plans
                  housing
                  existing_credits
                  job
                  num_dependents
                  own_telephone
                  foreign_worker
                  class

    Test mode: evaluate on training data

    === Classifier model (full training set) ===

    J48 pruned tree

    ------------------

    Number of Leaves : 103

    Size of the tree : 140

    Time taken to build model: 0.08 seconds

    === Evaluation on training set ===

    === Summary ===

    Correctly Classified Instances         855               85.5    %
    Incorrectly Classified Instances       145               14.5    %
    Kappa statistic                          0.6251
    Mean absolute error                      0.2312
    Root mean squared error                  0.34
    Relative absolute error                 55.0377 %
    Root relative squared error             74.2015 %
    Coverage of cases (0.95 level)         100      %
    Mean rel. region size (0.95 level)      93.3    %
    Total Number of Instances             1000

    === Detailed Accuracy By Class ===

                    TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
                    0.956    0.38     0.854      0.956   0.902      0.857     good
                    0.62     0.044    0.857      0.62    0.72       0.857     bad


    Weighted Avg.   0.855    0.279    0.855      0.855   0.847      0.857

    === Confusion Matrix ===

       a   b   <-- classified as
     669  31 |   a = good
     114 186 |   b = bad


                    TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
                    0.84     0.61     0.763      0.84    0.799      0.639     good
                    0.39     0.16     0.511      0.39    0.442      0.639     bad
    Weighted Avg.   0.705    0.475    0.687      0.705   0.692      0.639

    === Confusion Matrix ===

       a   b   <-- classified as
     588 112 |   a = good
     183 117 |   b = bad


    Root relative squared error            101.8806 %
    Coverage of cases (0.95 level)          92.8    %
    Mean rel. region size (0.95 level)      91.3    %
    Total Number of Instances             1000

    === Detailed Accuracy By Class ===

                    TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
                    0.891    0.677    0.755      0.891   0.817      0.662     good
                    0.323    0.109    0.561      0.323   0.41       0.662     bad
    Weighted Avg.   0.721    0.506    0.696      0.721   0.695      0.662

    === Confusion Matrix ===

       a   b   <-- classified as
     624  76 |   a = good
     203  97 |   b = bad


    Mean rel. region size (0.95 level)      91.9    %
    Total Number of Instances             1000

    === Detailed Accuracy By Class ===

                    TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
                    0.954    0.363    0.86       0.954   0.905      0.867     good
                    0.637    0.046    0.857      0.637   0.73       0.867     bad
    Weighted Avg.   0.859    0.268    0.859      0.859   0.852      0.867

    === Confusion Matrix ===

       a   b   <-- classified as
     668  32 |   a = good
     109 191 |   b = bad


    8. Another question might be, do you really need to input so many attributes to get good results? Maybe only a few would do. For example, you could try just having attributes 2, 3, 5, 7, 10, 17 and 21. Try out some combinations. (You had removed two attributes in problem 7. Remember to reload the arff data file to get all the attributes initially before you start selecting the ones you want.)

    Ans) We use the Preprocess tab in the Weka GUI Explorer to remove the 2nd attribute (Duration). In the Classify tab, select the Use training set option, then press the Start button. When these attributes are removed from the dataset, we can see a change in the accuracy compared to the full data set.

    === Evaluation on training set ===
    === Summary ===

    Correctly Classified Instances         841               84.1    %
    Incorrectly Classified Instances       159               15.9    %

    === Confusion Matrix ===

    a b


    a b


    === Summary ===

    Correctly Classified Instances         859               85.9    %
    Incorrectly Classified Instances       141               14.1    %

    === Confusion Matrix ===

    a b


    0.0  5.0
    1.0  0.0

    Then close the cost matrix editor, then press the OK button. Then press the Start button.

    === Evaluation on training set ===
    === Summary ===

    Correctly Classified Instances         855               85.5    %
    Incorrectly Classified Instances       145               14.5    %

    === Confusion Matrix ===

    a b


    176 124 | b = bad

    By using the pruned model, the accuracy decreased. Therefore, by pruning the nodes we can make our decision tree simpler.

    12. How can you convert a Decision Tree into if-then-else rules? Make up your own small Decision Tree consisting of 2-3 levels and convert it into a set of rules. There also exist different classifiers that output the model in the form of rules. One such classifier in weka is rules.PART; train this model and report the set of rules obtained. Sometimes just one attribute can be good enough in making the decision, yes, just one! Can you predict what attribute that might be in this data set? The OneR classifier uses a single attribute to make decisions (it chooses the attribute based on minimum error). Report the rule obtained by training a OneR classifier. Rank the performance of J48, PART and OneR.

    Ans) Sample Decision Tree of 2-3 levels.

    Converting the decision tree into a set of rules is as follows.

    Rule1: If age = youth AND student = yes THEN buys_computer = yes
    Rule2: If age = youth AND student = no THEN buys_computer = no
    Rule3: If age = middle_aged THEN buys_computer = yes
    Rule4: If age = senior AND credit_rating = excellent THEN buys_computer = yes
    Rule5: If age = senior AND credit_rating = fair THEN buys_computer = no
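The same rule set, transcribed directly as an if-then-else chain in Python:

```python
def buys_computer(age, student=None, credit_rating=None):
    if age == "youth":                                  # Rules 1 and 2
        return "yes" if student == "yes" else "no"
    elif age == "middle_aged":                          # Rule 3
        return "yes"
    else:                                               # Rules 4 and 5 (senior)
        return "yes" if credit_rating == "excellent" else "no"

print(buys_computer("senior", credit_rating="excellent"))  # yes
```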

    In the Weka GUI Explorer, select the Classify tab, and in that select the Use training set option. There also exist different classifiers that output the model in the form of rules; such classifiers in weka are PART and OneR. Then go to Choose, select Rules, in that select PART, and press the Start button.

    Age?
    |-- youth --> Student?
    |     |-- yes --> buys_computer = yes
    |     |-- no  --> buys_computer = no
    |-- middle_aged --> buys_computer = yes
    |-- senior --> Credit_rating?
          |-- excellent --> buys_computer = yes
          |-- fair      --> buys_computer = no
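OneR's minimum-error choice can be sketched as: for every attribute, build a one-level rule mapping each attribute value to its majority class, then keep the attribute whose rule makes the fewest errors on the training data (a toy implementation on made-up rows, not weka's rules.OneR):

```python
from collections import Counter

def one_r(rows, target):
    """rows: list of dicts. For each attribute, map every value to its
    majority class; pick the attribute with the fewest training errors."""
    best_attr, best_rule, best_errors = None, None, len(rows) + 1
    for attr in rows[0]:
        if attr == target:
            continue
        counts = {}
        for r in rows:
            counts.setdefault(r[attr], Counter())[r[target]] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in counts.items()}
        errors = sum(r[target] != rule[r[attr]] for r in rows)
        if errors < best_errors:
            best_attr, best_rule, best_errors = attr, rule, errors
    return best_attr, best_rule

data = [
    {"age": "youth",       "student": "no",  "buys": "no"},
    {"age": "youth",       "student": "yes", "buys": "yes"},
    {"age": "middle_aged", "student": "no",  "buys": "yes"},
    {"age": "senior",      "student": "yes", "buys": "yes"},
    {"age": "senior",      "student": "no",  "buys": "no"},
]
attr, rule = one_r(data, "buys")
print(attr)  # student (1 training error, versus 2 for age)
```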


    === Evaluation on training set ===
    === Summary ===

    Correctly Classified Instances 897 89.7 %

    Incorrectly Classified Instances 103 10.3 %

    === Confusion Matrix ===

    a b


    VIVA VOCE QUESTIONS AND ANSWERS

    1.Define Data mining.

    It refers to extracting or mining knowledge from large amounts of data. Data mining is a process of discovering interesting knowledge from large amounts of data stored either in databases, data warehouses, or other information repositories.

    2.Give some alternative terms for data mining.

    Knowledge mining
    Knowledge extraction
    Data/pattern analysis
    Data archaeology
    Data dredging

    3.What is KDD.

    KDD-Knowledge Discovery in Databases.

    4.What are the steps involved in KDD process.

    Data cleaning
    Data integration
    Data selection
    Data transformation
    Data mining
    Pattern evaluation
    Knowledge presentation

    5.What is the use of the knowledge base?

    Knowledge base is domain knowledge that is used to guide the search or evaluate the interestingness of resulting patterns. Such knowledge can include concept hierarchies used to organize attribute/attribute values into different levels of abstraction.

    6.Mention some of the data mining techniques.

    Statistics
    Machine learning
    Decision Tree
    Hidden Markov models
    Artificial Intelligence
    Genetic Algorithm
    Meta learning

    7.Give few statistical techniques.

    Point estimation
    Data summarization
    Bayesian techniques
    Hypothesis testing
    Correlation
    Regression


    18. Classifications of data mining systems.

    Based on the kinds of databases mined:
      - According to data model: relational, transactional, object-oriented, object-relational, and data warehouse mining systems.
      - According to types of data: spatial, time-series, text, and multimedia data mining systems.

    Based on the kinds of knowledge mined:
      - According to functionalities: characterization, discrimination, association, classification, clustering, outlier analysis, and evolution analysis.
      - According to levels of abstraction of the knowledge mined: generalized knowledge (high level of abstraction) or primitive-level knowledge (raw data level).
      - According to mining data regularities versus mining data irregularities.

    Based on the kinds of techniques utilized:
      - According to user interaction: autonomous systems, interactive exploratory systems, or query-driven systems.
      - According to methods of data analysis: database-oriented, data-warehouse-oriented, machine learning, statistics, visualization, pattern recognition, or neural networks.

    Based on the applications adopted: finance, telecommunication, DNA, stock markets, e-mail, and so on.


    19. Describe challenges to data mining regarding data mining methodology and user interaction issues.

    - Mining different kinds of knowledge in databases
    - Interactive mining of knowledge at multiple levels of abstraction
    - Incorporation of background knowledge
    - Data mining query languages and ad hoc data mining
    - Presentation and visualization of data mining results
    - Handling noisy or incomplete data
    - Pattern evaluation

    20. Describe challenges to data mining regarding performance issues.

    - Efficiency and scalability of data mining algorithms
    - Parallel, distributed, and incremental mining algorithms

    21.Describe issues relating to the diversity of database types.

    - Handling of relational and complex types of data
    - Mining information from heterogeneous databases and global information systems

    22.What is meant by pattern?

    A pattern represents knowledge if it is easily understood by humans; valid on test data with some degree of certainty; and potentially useful, novel, or validating of a hunch about which the user was curious. Measures of pattern interestingness, either objective or subjective, can be used to guide the discovery process.

    23.How is a data warehouse different from a database?

    A data warehouse is a repository of multiple heterogeneous data sources, organized under a unified schema at a single site in order to facilitate management decision-making. A database consists of a collection of interrelated data.

    1. Define Association Rule Mining.

    Association rule mining searches for interesting relationships among items in a given data set

    2. When we can say the association rules are interesting?

    Association rules are considered interesting if they satisfy both a minimum support threshold and a minimum confidence threshold. Users or domain experts can set such thresholds.

    3. Explain association rules in mathematical notation.

    Let I = {i1, i2, ..., im} be a set of items, and let D, the task-relevant data, be a set of database transactions, where each transaction T is a set of items such that T ⊆ I. An association rule is an implication of the form A => B, where A ⊂ I, B ⊂ I, and A ∩ B = ∅. The rule A => B holds in the transaction set D with support s, where s is the percentage of transactions in D that contain A ∪ B. The rule A => B has confidence c in the transaction set D if c is the percentage of transactions in D containing A that also contain B.

    4. Define support and confidence in Association rule mining.

    Support s is the percentage of transactions in D that contain A ∪ B.


    Confidence c is the percentage of transactions in D containing A that also contain B.

    support(A => B) = P(A ∪ B)
    confidence(A => B) = P(B | A)
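These definitions are easy to verify on a toy transaction set. A minimal Python sketch (the transactions and the rule bread => milk are invented purely for illustration):

```python
# Compute support and confidence for a rule A => B over a toy
# transaction database D. support(A=>B) = P(A ∪ B); confidence = P(B|A).
D = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(A, B, transactions):
    """Fraction of transactions containing A that also contain B."""
    return support(A | B, transactions) / support(A, transactions)

s = support({"bread", "milk"}, D)       # support(bread => milk)
c = confidence({"bread"}, {"milk"}, D)  # confidence(bread => milk)
```

Here support is 2/4 (two of four transactions contain both items) and confidence is 2/3 (two of the three bread transactions also contain milk).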

    5. How are association rules mined from large databases?

    Step 1: Find all frequent itemsets.
    Step 2: Generate strong association rules from the frequent itemsets.

    6. Describe the different classifications of Association rule mining.

    Based on the types of values handled in the rule:
      i. Boolean association rules
      ii. Quantitative association rules
    Based on the dimensions of data involved:
      i. Single-dimensional association rules
      ii. Multidimensional association rules
    Based on the levels of abstraction involved:
      i. Multilevel association rules
      ii. Single-level association rules
    Based on various extensions:
      i. Correlation analysis

    7. What is the purpose of the Apriori algorithm?

    The Apriori algorithm is an influential algorithm for mining frequent itemsets for Boolean association rules. The name of the algorithm reflects the fact that the algorithm uses prior knowledge of frequent-itemset properties.

    8. Define anti-monotone property.

    If a set cannot pass a test, all of its supersets will fail the same test as well.

    9. How to generate association rules from frequent item sets?

    Association rules can be generated as follows. For each frequent itemset l, generate all nonempty subsets of l. For every nonempty subset s of l, output the rule s => (l − s) if support_count(l) / support_count(s) >= min_conf, where min_conf is the minimum confidence threshold.
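The procedure above translates almost line for line into code. A minimal, illustrative Python version (the frequent-itemset support counts are invented for the example):

```python
from itertools import combinations

# support_count for each frequent itemset (toy numbers for illustration)
support_count = {
    frozenset({"a"}): 6,
    frozenset({"b"}): 7,
    frozenset({"a", "b"}): 4,
}

def gen_rules(l, min_conf):
    """For each nonempty proper subset s of l, emit s => (l - s)
    when support_count(l) / support_count(s) >= min_conf."""
    rules = []
    for r in range(1, len(l)):
        for s in combinations(sorted(l), r):
            s = frozenset(s)
            conf = support_count[l] / support_count[s]
            if conf >= min_conf:
                rules.append((set(s), set(l - s), conf))
    return rules

rules = gen_rules(frozenset({"a", "b"}), min_conf=0.6)
```

With these toy counts, {a} => {b} has confidence 4/6 and passes the 0.6 threshold, while {b} => {a} has confidence 4/7 and is rejected.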

    10. Give few techniques to improve the efficiency of Apriori algorithm.

    Hash-based technique
    Transaction reduction
    Partitioning
    Sampling
    Dynamic itemset counting

    11. What are the factors affecting the performance of the Apriori candidate-generation technique?

    Need to generate a huge number of candidate sets

    Need to repeatedly scan the database and check a large set of candidates by pattern matching

    12. Describe the method of generating frequent item sets without candidate generation.

    Frequent-pattern growth (FP-growth) adopts a divide-and-conquer strategy. Steps:

    1. Compress the database representing frequent items into a frequent-pattern tree (FP-tree).
    2. Divide the compressed database into a set of conditional databases.
    3. Mine each conditional database separately.

    13. Define Iceberg query.

    An iceberg query computes an aggregate function over an attribute or set of attributes in order to find aggregate values above some specified threshold. Given a relation R with attributes a1, a2, ..., an and b, and an aggregate function agg_f, an iceberg query has the form:

    SELECT R.a1, R.a2, ..., R.an, agg_f(R.b)
    FROM relation R
    GROUP BY R.a1, R.a2, ..., R.an
    HAVING agg_f(R.b) >= threshold
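The same group-then-filter idea can be sketched outside SQL. A small illustrative Python equivalent (the rows and the count threshold are made up; COUNT stands in for agg_f):

```python
from collections import Counter

# Toy relation: rows of (customer, item). The "iceberg" keeps only
# those groups whose aggregate (here COUNT) clears the threshold.
rows = [("c1", "x"), ("c1", "y"), ("c1", "z"), ("c2", "x"), ("c3", "x")]
threshold = 2

counts = Counter(cust for cust, _ in rows)  # GROUP BY customer, COUNT(*)
iceberg = {cust: n for cust, n in counts.items() if n >= threshold}  # HAVING
```

Only customer c1 (with three rows) rises above the threshold; the rest of the "iceberg" stays below the surface.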

    14. Mention few approaches to mining Multilevel Association Rules

    - Uniform minimum support for all levels (uniform support)
    - Reduced minimum support at lower levels (reduced support)
    - Level-by-level independent
    - Level-cross filtering by single item
    - Level-cross filtering by k-itemset

    15. What are multidimensional association rules?

    Association rules that involve two or more dimensions or predicates.
    - Interdimension association rule: a multidimensional association rule with no repeated predicate or dimension.
    - Hybrid-dimension association rule: a multidimensional association rule with multiple occurrences of some predicates or dimensions.

    16. Define constraint-Based Association Mining.

    Mining is performed under the guidance of various kinds of constraints provided by the user.

    The constraints include the following:
    - Knowledge type constraints
    - Data constraints
    - Dimension/level constraints
    - Interestingness constraints
    - Rule constraints

    17. Define the concept of classification.

    Classification is a two-step process. First, a model is built describing a predefined set of data classes or concepts.

    The model is constructed by analyzing database tuples described by attributes.


    The model is used for classification.

    18. What is Decision tree?

    A decision tree is a flowchart-like tree structure, where each internal node denotes a test on an attribute, each branch represents an outcome of the test, and leaf nodes represent classes or class distributions. The topmost node in a tree is the root node.

    19. What is Attribute Selection Measure?

    The information gain measure is used to select the test attribute at each node in the decision tree. Such a measure is referred to as an attribute selection measure, or a measure of the goodness of split.
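As an illustration, information gain can be computed directly from class counts: it is the entropy of the parent's class distribution minus the weighted entropy of the partitions a split produces. A small sketch (the class counts below are the familiar 9-yes / 5-no toy example, not data from this lab):

```python
from math import log2

def info(counts):
    """Entropy of a class distribution given raw class counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def info_gain(parent_counts, partitions):
    """Information gain of splitting the parent into the given partitions."""
    total = sum(parent_counts)
    expected = sum(sum(p) / total * info(p) for p in partitions)
    return info(parent_counts) - expected

# Toy split: 9 "yes" / 5 "no" at the parent, three child partitions.
gain = info_gain([9, 5], [[2, 3], [4, 0], [3, 2]])
```

The attribute with the highest gain (here about 0.247 bits) is chosen as the test attribute at the node.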

    20. Describe Tree pruning methods.

    When a decision tree is built, many of the branches will reflect anomalies in the training data due to noise or outliers. Tree pruning methods address this problem of overfitting the data. Approaches:

    - Prepruning
    - Postpruning

    21. Define Pre Pruning

    A tree is pruned by halting its construction early. Upon halting, the node becomes a leaf. The leaf may hold the most frequent class among the subset samples.

    22. Define Post Pruning.

    Postpruning removes branches from a fully grown tree. A tree node is pruned by removing its branches.

    Example: the cost-complexity pruning algorithm.

    23. What is meant by Pattern?

    Pattern represents the knowledge.

    24. Define the concept of prediction.

    Prediction can be viewed as the construction and use of a model to assess the class of an unlabeled sample, or to assess the value or value ranges of an attribute that a given sample is likely to have.

    1.Define Clustering?

    Clustering is the process of grouping physical or conceptual data objects into clusters.

    2. What do you mean by Cluster Analysis?

    Cluster analysis is the process of analyzing the various clusters to organize the different objects into meaningful and descriptive groups.

    3. What are the fields in which clustering techniques are used?

    - Clustering is used in biology to develop new plant and animal taxonomies.
    - Clustering is used in business to enable marketers to develop new, distinct groups of their customers and characterize the customer groups on the basis of purchasing.
    - Clustering is used in the identification of groups of automobile insurance policy customers.


    - Clustering is used in the identification of groups of houses in a city on the basis of house type, cost, and geographical location.
    - Clustering is used to classify documents on the web for information discovery.

    4. What are the requirements of cluster analysis?

    The basic requirements of cluster analysis are:
    - Dealing with different types of attributes
    - Dealing with noisy data
    - Constraints on clustering
    - Dealing with arbitrary shapes
    - High dimensionality
    - Ordering of input data
    - Interpretability and usability
    - Determining input parameters
    - Scalability

    5.What are the different types of data used for cluster analysis?

    The different types of data used for cluster analysis are interval-scaled, binary, nominal, ordinal, and ratio-scaled data.

    6. What are interval-scaled variables?

    Interval-scaled variables are continuous measurements on a linear scale; for example, height and weight, weather temperature, or coordinates for any cluster. Distances between such measurements can be calculated using the Euclidean distance or the Minkowski distance.
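The two distance measures just mentioned fit in a few lines: the Euclidean distance is simply the Minkowski distance with p = 2. A minimal sketch (the sample points are arbitrary):

```python
def minkowski(x, y, p):
    """Minkowski distance of order p between two numeric vectors."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

def euclidean(x, y):
    """Euclidean distance: the p = 2 special case of Minkowski."""
    return minkowski(x, y, 2)

d = euclidean((0, 0), (3, 4))  # classic 3-4-5 right triangle
```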

    7. Define Binary variables? And what are the two types of binary variables?

    Binary variables have two states, 0 and 1: when the state is 0 the variable is absent, and when the state is 1 the variable is present. There are two types of binary variables, symmetric and asymmetric. Symmetric binary variables are those whose states have the same values and weights; asymmetric binary variables are those whose states do not have the same values and weights.

    8. Define nominal, ordinal and ratio scaled variables?

    A nominal variable is a generalization of the binary variable; it has more than two states. For example, a nominal variable color may have four states: red, green, yellow, or black. For a nominal variable the total number of states is N, and the states may be denoted by letters, symbols, or integers. An ordinal variable also has more than two states, but all these states are ordered in a meaningful sequence. A ratio-scaled variable makes positive measurements on a nonlinear scale, such as an exponential scale, following a formula such as Ae^(Bt) or Ae^(-Bt), where A and B are constants.

    9. What do you mean by the partitioning method?

    In the partitioning method, a partitioning algorithm arranges all the objects into various partitions, where the total number of partitions is less than the total number of objects. Each partition represents a cluster. The two main partitioning methods are k-means and k-medoids.
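The k-means idea can be sketched compactly: alternately assign each point to its nearest center, then move each center to the mean of its cluster. A minimal 1-D illustration (the points, initial centers, and fixed iteration count are arbitrary; a real implementation would also check for convergence):

```python
# Minimal k-means for 1-D points, assuming initial centers are given.
def kmeans(points, centers, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans([1.0, 2.0, 9.0, 10.0], centers=[1.0, 9.0])
```

On this toy data the algorithm converges immediately to the two natural groups, with centers at 1.5 and 9.5.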

    10. Define CLARA and CLARANS?


    Clustering LARge Applications is called CLARA. The efficiency of CLARA depends upon the size of the representative data set. CLARA does not work properly if none of the selected representative data sets yields the best k-medoids. To overcome this drawback, a new algorithm, Clustering Large Applications based upon RANdomized search (CLARANS), was introduced. CLARANS works like CLARA; the only difference between CLARA and CLARANS is the clustering process that is done after selecting the representative data sets.

    11. What is Hierarchical method?

    The hierarchical method groups all the objects into a tree of clusters that are arranged in a hierarchical order. This method works on bottom-up or top-down approaches.

    12. Differentiate agglomerative and divisive hierarchical clustering.

    The agglomerative hierarchical clustering method works bottom-up: each object starts in its own cluster, single clusters are merged into larger clusters, and merging continues until all the singleton clusters are merged into one big cluster that contains all the objects. The divisive hierarchical clustering method works top-down: all the objects start in one big cluster, and the large cluster is continuously divided into smaller clusters until each cluster holds a single object.

    13. What is CURE?

    Clustering Using REpresentatives is called CURE. Clustering algorithms generally work on spherical and similar-size clusters. CURE overcomes the restriction to spherical, similar-size clusters and is more robust with respect to outliers.

    14. Define Chameleon method?

    Chameleon is another hierarchical clustering method that uses dynamic modeling. Chameleon was introduced to overcome the drawbacks of the CURE method. In this method two clusters are merged if the interconnectivity between the two clusters is greater than the interconnectivity between the objects within each cluster.

    15. Define Density based method?

    The density-based method deals with arbitrarily shaped clusters. In density-based methods, clusters are formed on the basis of the regions where the density of the objects is high.

    16. What is a DBSCAN?

    Density-Based Spatial Clustering of Applications with Noise is called DBSCAN. DBSCAN is a density-based clustering method that converts high-density object regions into clusters with arbitrary shapes and sizes. DBSCAN defines a cluster as a maximal set of density-connected points.

    17. What do you mean by Grid Based Method?

    In this method objects are represented by a multiresolution grid data structure. All the objects are quantized into a finite number of cells, and the collection of cells builds the grid structure of objects. The clustering operations are performed on that grid structure. This method is widely used because its processing time is very fast and is independent of the number of objects.

    18. What is a STING?


    STatistical INformation Grid is called STING; it is a grid-based multiresolution clustering method. In STING, all the objects are contained in rectangular cells; these cells are kept at various levels of resolution, and these levels are arranged in a hierarchical structure.

    19. Define Wave Cluster?

    WaveCluster is a grid-based multiresolution clustering method. In this method all the objects are represented by a multidimensional grid structure, and a wavelet transformation is applied to find the dense regions. Each grid cell contains the information of the group of objects that map into that cell. A wavelet transformation is a signal-processing technique that decomposes a signal into various frequency sub-bands.

    20. What is Model based method?

    Model-based methods are used to optimize the fit between a given data set and a mathematical model. These methods assume that the data are generated by underlying probability distributions. There are two basic approaches:

    1. Statistical approach
    2. Neural network approach

    21. What is the use of Regression?

    Regression can be used to solve classification problems, but it can also be used for applications such as forecasting. Regression can be performed using many different types of techniques; in essence, regression takes a set of data and fits the data to a formula.

    22. What are the reasons for not using the linear regression model to estimate the output data?

    There are many reasons. One is that the data simply do not fit a linear model. It is also possible that the data do in fact follow a linear model, but the linear model generated is poor because noise or outliers exist in the data. Noise is erroneous data, and outliers are data values that are exceptions to the usual and expected data.

    23. What are the two approaches used by regression to perform classification?

    Regression can be used to perform classification using the following approaches:

    1. Division: the data are divided into regions based on class.
    2. Prediction: formulas are generated to predict the output class value.

    24. What do you mean by logistic regression?

    Instead of fitting the data to a straight line, logistic regression uses a logistic curve. The formula for the univariate logistic curve is

        p = e^(c0 + c1*x1) / (1 + e^(c0 + c1*x1))

    The logistic curve gives a value between 0 and 1, so it can be interpreted as the probability of class membership.
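The univariate logistic curve is easy to evaluate directly. A small sketch (the coefficients c0 and c1 are arbitrary illustration values, not fitted from any data):

```python
from math import exp

def logistic(x, c0, c1):
    """Univariate logistic curve: p = e^(c0+c1*x) / (1 + e^(c0+c1*x))."""
    z = exp(c0 + c1 * x)
    return z / (1 + z)

p = logistic(0.0, c0=0.0, c1=1.0)  # e^0 = 1, so the curve gives 0.5 here
```

Whatever the coefficients, the output always lies strictly between 0 and 1, which is what lets it be read as a class-membership probability.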

    25. What is Time Series Analysis?

    A time series is a set of attribute values over a period of time. Time series analysis may be viewed as finding patterns in the data and predicting future values.


    26. What are the various detected patterns?

    Detected patterns may include:
    - Trends: systematic, non-repetitive changes to the values over time.
    - Cycles: the observed behavior is cyclic.
    - Seasonal: the detected patterns may be based on time of year, month, or day.
    - Outliers: to assist in pattern detection, techniques may be needed to remove or reduce the impact of outliers.

    27. What is Smoothing?

    Smoothing is an approach used to remove the nonsystematic behaviors found in a time series. It usually takes the form of finding moving averages of attribute values. It is used to filter out noise and outliers.
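A moving average is straightforward to compute: replace each window of consecutive values by its mean. A minimal sketch (the series and the window size are arbitrary; the 9 plays the role of an outlier being damped):

```python
def moving_average(series, window):
    """Simple moving average: mean of each consecutive window of values."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

smoothed = moving_average([1, 2, 9, 2, 3], window=3)
```

The outlier 9 no longer appears directly in the output; its influence is spread across the windows that contain it.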

    28. Give the formula for Pearson's r.

    One standard formula to measure correlation is the correlation coefficient r, sometimes called Pearson's r. Given two time series X and Y with means X̄ and Ȳ, each with n elements, the formula for r is

        r = Σ(xi − X̄)(yi − Ȳ) / [ Σ(xi − X̄)² · Σ(yi − Ȳ)² ]^(1/2)
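The formula translates line by line into code. A small sketch (the two series are toy values; a perfectly linear relationship gives r = 1, a perfectly inverse one gives r = -1):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient r for two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # exactly linear relationship
```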

    29. What is autoregression?

    Autoregression is a method of predicting a future time series value by looking at previous values. Given a time series X = (x1, x2, ..., xn), a future value x(n+1) can be found as a linear combination of the previous values plus a random error, of the form

        x(n+1) = φ1·x(n) + φ2·x(n−1) + ... + e(n+1)

    where e(n+1) represents a random error at time n+1. In addition, each element in the time series can be viewed as a combination of a random error and a linear combination of previous values.

    1.Define data warehouse?

    A data warehouse is a repository of multiple heterogeneous data sources organized under a unified schema at a single site to facilitate management decision making. (Or) A data warehouse is a subject-oriented, time-variant, and nonvolatile collection of data in support of management's decision-making process.

    2.What are operational databases?

    Organizations maintain large databases that are updated by daily transactions; these are called operational databases.

    3.Define OLTP?

    If an on-line operational database system is used for efficient retrieval, efficient storage, and management of large amounts of data, the system is said to perform on-line transaction processing.

    4.Define OLAP?

    Data warehouse systems serve users (or knowledge workers) in the role of data analysis and decision making. Such systems can organize and present data in various formats. These systems are known as on-line analytical processing (OLAP) systems.


    5. How is a database design represented in OLTP systems?

    Entity-relationship model

    6. How is a database design represented in OLAP systems?

    Star schema
    Snowflake schema
    Fact constellation schema

    7.Write short notes on multidimensional data model?

    Data warehouses and OLAP tools are based on a multidimensional data model. This model is used for the design of corporate data warehouses and departmental data marts. It contains the star schema, snowflake schema, and fact constellation schemas. The core of the multidimensional model is the data cube.

    8. Define data cube.

    A data cube consists of a large set of facts (or measures) and a number of dimensions.

    9.What are facts?

    Facts are numerical measures. Facts can also be considered as quantities by which we can analyze the relationships between dimensions.

    10.What are dimensions?

    Dimensions are the entities (or perspectives) with respect to which an organization keeps records; they are hierarchical in nature.

    11.Define dimension