

INTELLIGENT AGENTS FOR MANAGEMENT OF LEARNING: An Introduction and A Case Study

Ibrahim F. Imam

Center for Machine Learning and Inference George Mason University

4400 University Dr., Fairfax, VA 22030

email: [email protected]

ABSTRACT

Machine learning systems are widely used for obtaining useful knowledge that can assist users in different ways. Most learning systems are equipped with a set of learning parameters to adapt the learning algorithm to different kinds of problems. Since many users exert the least effort possible, these parameters may not be set properly, or they may be used with their default values. This may cause problems, resulting in learning less accurate or more complex knowledge. This paper proposes an initial framework for building intelligent agents that assist the users in managing the learning process using the available set of learning parameters. These agents gain initial knowledge through experimental runs. Then they use this initial knowledge to set the learning parameters whenever the learning system is used later. The agents adapt their knowledge, using an explanation-based learning approach, when a new contradictory feedback from the user is given. A case study is introduced for building intelligent agents for managing the learning system AQ15. The results were tested on two of the MONKs problems.

Key words: intelligent agent, machine learning, adaptive systems

1. Introduction

Learning systems are essential tools in the development of intelligent applications. Most learning systems are designed to learn concept descriptions from different kinds of datasets (e.g. simple, complex, noisy, small, large, etc.). These capabilities are usually presented in the learning systems by a set of learning parameters. Each learning parameter represents one characteristic of the system's performance. Ignoring and/or misusing these parameters may limit or misdirect the learning algorithm, resulting in learning less accurate, illogical, or complex concept descriptions. Selecting the optimal settings of parameters for a given problem is considered to be a way of managing a learning system. Managing learning systems is the process of selecting the best learning strategies, selecting the optimal set of parameters, selecting the best space representation, and/or using background knowledge to improve the performance of the learning system on a given problem. This requires certain expertise which is not available to common users. A simple and interesting approach to transfer this expertise along with the learning systems is to develop a set of intelligent agents that can easily adapt the learning process to suit a given problem or task.


The paper introduces an initial framework for developing intelligent agents that manage learning systems through the learning parameters. The goal of these agents is to positively influence the learning process to produce more efficient and/or optimized knowledge for the given problem. The framework can be generalized to cover three categories of intelligent agents for managing the learning process: 1) determining the appropriate parameter setting for the learning system; 2) searching for the representation space, knowledge representation, or set of attributes that is most relevant to the decision classes; 3) selecting the most appropriate inferential process or developing the optimal plan for learning from a set of different inferential processes (e.g. induction, deduction, generalization, analogy, etc.).

Most of the related research in this area is concerned with the last two categories. Examples of such work include intelligent agents that search for the best inferential process to apply, such as MOBAL (Morik, 1994). MOBAL is a model for balancing the cooperation between several learning algorithms to support knowledge engineers in developing knowledge-based systems. Examples of the second category of learning management agents include intelligent agents that construct a new representation space for a given problem. These agents are very useful when the original representation space is inadequate for learning a hypothesis (Wnek & Michalski, 1993; Rendell & Seshu, 1990). An equally attractive example is agents that modify the knowledge to accomplish a given task. Examples of such tasks include: 1) performing a decision making process (Michalski & Imam, 1994); 2) improving the performance of the learned knowledge (Baffes & Mooney, 1993). There is not much work on the development of parametric agents.

The paper mainly focuses on the first category of intelligent agents that manage the learning process. A simple case study is introduced to explain the method. The case study is concerned with developing an intelligent parametric agent for the learning system AQ15 (Michalski et al., 1986). The intelligent agent gains its primary knowledge from experimental runs. About 162 experiments were executed on randomly selected datasets. A subset of these datasets is selected and used to derive the primary knowledge of the intelligent agent. The knowledge of the intelligent agent is tested with two of the MONKs problems (Thrun, Mitchell & Cheng, 1991).

2. Intelligent Agents

2.1. An Overview and Related Work

An intelligent agent is a system that assists users in a certain application. Intelligent agents use learning methodologies to perform the required task. The learning capabilities allow agents to dynamically adapt their procedure of assisting the user. Minsky (1994) defines an agent as a machine (system) that accomplishes something the user needed without the user knowing how it did it. He differentiates between an intelligent agent, which uses learning capabilities to grow or to improve its capabilities, and a stable agent, which develops to a certain point then freezes into some stable form. In this paper, the concept of "growing" an agent is limited to changing the representation of knowledge, adapting a set of parameters, or adding a new learning strategy.

Intelligent agents, in general, can be classified into two categories depending on the function of the learning algorithm: solving the problem or organizing the solution. The first category is concerned with using the learning systems directly in assisting the users. An example of such an agent is CAP (Calendar Apprentice) (Mitchell et al., 1994), which assists the user in scheduling his/her calendar using his/her own scheduling preferences. CAP is an intelligent agent that learns rules from a set of examples that describe previous meetings. The learning algorithm is applied to different sets of decision classes or features (e.g. duration of a meeting, location of a meeting, day of the week, and appropriate time for the meeting). CAP has intelligent features that fit in the other category of intelligent agents; it acquires factual knowledge about each new attendee that appears on the calendar and uses this knowledge later in controlling the learned rules.

The second category is concerned with agents that use a learning algorithm to assist the user in performing intelligent tasks using one or more learning systems. In other words, these agents work as front-ends for learning systems. The learning functions of these agents can include selecting the best set of parameters for different inputs, reformulating the representation space to simplify the task, mapping the learned knowledge into simple formats for the given task, or searching for the best sequence of inferences that may result in solving the problem. The implementation of such agents is very difficult due to the fact that the agents must grow by learning how to improve the use of learning systems without much interaction from the user. There are quite interesting methodologies in this area; however, most of that work has not been used in the development of intelligent agents. In other words, there is a need for using these strategies in interacting with and assisting the user.

Brodley (1993) addressed the problem of automatically selecting one or more algorithms or models, from among a set of learning algorithms, that will produce the best accuracy for a given task. A model class selection (MCS) system was developed to learn a hybrid tree that determines the best model class for each node. MCS uses a set of heuristic rules that describe the biases of learning algorithms and learning representations to solve the given problem. These rules represent an information guide in generating the hybrid tree.

An example of an intelligent agent that has different capabilities is Copycat (Mitchell, 1993). Copycat was used to discover new analogies in alphabetic letter strings. Copycat consists of different agents that compete with one another to find the strongest analogy. An agent may search for an analogy to the string "ss" where the first and last letters are equal; another may search for similarity of the same string as a repetition, as it appears in the string "aaxxsskk"; a third agent may search for an analogy where the characters of that string are successors to surrounding characters, such as in "pqrsst". Copycat can discover entirely unpredictable analogies.

2.2. Important Issues in the Development of an Intelligent Agent

This subsection presents some of the basic issues in developing intelligent agents. The development of any agent, in general, should be guided by some limitations on the functions of that agent. These limitations define the degree of specialty of an agent. Finally, the success of an intelligent agent depends on the ability to use that agent. The usability of an agent somehow depends on the simplicity of using the agent and the amount of control given to the user.

2.2.1. Specialty of Intelligent Agents

The degree of specialty of an agent is defined by how much assistance can be provided by that agent. For example, an agent that sells airplane tickets can explain information regarding the flight number, the time of the travel, etc. However, such an agent cannot tell the user the make of the engine of the airplane. The specialty of a travel agent is limited to traveling information. Another kind of information that may be answered by a human travel agent, but is very difficult to get from an autonomous agent, is common sense information such as time and space (Minsky, 1994). Measuring the degree of specialty of an intelligent agent should exclude this common sense information.

Determining the specialty of an intelligent agent becomes more difficult when many agents must interact together. Since each agent is specialized in performing one or more tasks different from the others, some knowledge should be shared to allow mutual interaction (Guha & Lenat, 1994). This knowledge is considered equivalent to common sense knowledge. Determining the knowledge to be shared between different (a group or all) agents increases the specialty of an agent and the expectation that it will use such knowledge whenever needed.

2.2.2. Usability of Intelligent Agents

The usability of an intelligent agent relates to the user's acceptance of the agent as an assistant. In other words, the usability of an agent is determined by the reactions that may be taken by the user toward these agents. Usability of an agent may not be that important if the agent has no intelligence. Intelligent agents that do not have adequate control may adapt their procedure of assisting the user in such a way that it is no longer suitable for the user. For example, a travel agent discovers that people ask for more information about different flights, so it adjusts its procedure of reserving a flight to provide extra information at the beginning of the inquiry. A real life example is a company that uses time consuming telephone voice messages to classify the request of a user. Such companies record a long message with lots of choices, and under each choice there are many items. The agent who organizes this message uses his/her intelligence to classify all possible requests of the company's customers. In spite of all the advantages that this company may have, many customers would prefer a smaller company which has an agent that receives their exact request and processes it in a few seconds instead.

Another very important aspect of the usability of intelligent agents is the false image that users may have of intelligent agents. Simply stated, it is the idea that intelligent agents are human-like automatons that work without supervision to assist the user. Norman (1994) proposed a set of factors to close the gap between the users and intelligent agents. These factors include ensuring people's feeling of control over their systems, the nature of the human-agent interaction, built-in safeguards to prevent runaway computation, providing accurate expectations, privacy concerns, and hiding complexity while simultaneously revealing the underlying operations. These factors form many concerns regarding the development of intelligent agents, and add limitations on the functionality (degree of specialty) of intelligent agents.

2.2.3. Growingness of Intelligent Agents

The growingness of an intelligent agent is its ability to adapt itself without supervision. It can also be defined as changing or extending the specialty of an agent. Subsequently, growingness may negatively affect the usability of intelligent agents. It is very difficult to develop intelligent agents capable of growing. The closest example of an intelligent agent that can grow is the system Copycat (Mitchell, 1993), which was described by Boden (1994) as a system that uses many independent descriptors in trying to interpret a given analogy and to find a new but similar one. The descriptors are applied in parallel with many hidden complexities, such as different probabilistic variations. The growingness in Copycat can be defined in the sense that it is capable of generating unexpected analogies.

2.2.4. Integration of Intelligent Agents

When integrating multiple agents to perform a task, it is necessary to develop a way of communication among them. Such communication requires the sharing of some common sense knowledge, communal knowledge, by all agents. Communal knowledge includes knowledge about the function or the specialty of other agents, when each agent is supposed to start working, what information is needed to accomplish an agent's work, which agent(s) can provide such information, what information should be provided to which agents, etc. Another important issue in integrating intelligent agents is the methodology used to control communication between agents. One way of doing this is to develop a control mechanism (Riecken, 1994) that interacts with all agents (e.g. a Blackboard architecture). Another way is to allow all agents to interact with each other, which requires increasing the understandability of the agents' communication with each other (Guha & Lenat, 1994).

3. Managing the Learning Systems by Intelligent Agents

This section describes the parametric intelligent agent and the proposed methodology for building such an agent. It also provides an analysis of its specialty, growingness, and usability. Finally, the section shows a simple case study.

3.1. Why do we need Intelligent Parametric Agents?

Immediately after the breakout of the Expert Systems technology, many companies, researchers and research laboratories developed shells that allow any user to build his/her own ES. Organizations and individuals started developing their own expert systems, few of which have succeeded. A major reason for the failure of such systems was linked to the ES shells, rather than to the agents (e.g. knowledge engineers) or the interfaces of these shells.

Machine learning systems are becoming very popular and powerful tools for learning, discovery and problem solving. Many learning systems have been publicized, and are available for use by different users. One of the popular ways of misusing these systems is by comparing them with other new systems without setting the correct parameters for the given problem.

An interesting approach to overcome this problem is to develop an intelligent agent as a part of the learning system. The specialty of this agent is to control the use of learning systems within their established set of parameters.

3.2. Methodology for Building Intelligent Parametric Agents

The methodology for developing intelligent parametric agents (IPA) consists of two steps. In the first step, a set of experiments is performed on different kinds of data with different sizes, different characteristics, different numbers of attributes, etc. The parameter settings and the associated results are stored in the form of new sets of examples. The results are classified into two or more categories for each important feature (accuracy, time, complexity, etc.). The new data sets are used to learn a set of rules that describes the feature set in terms of the set of parameters (e.g. accuracy could be high or low and can be described in terms of the degree of pruning or the size of the number of examples; complexity could be complex, simple, or average and can be described in terms of the size of a decision tree or the number of conditions in the decision rules). This set of rules is used by the agent to set the parameters for any future use.
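The first step might be sketched as follows. The parameter grid, its values, and the run_learning_system stand-in are hypothetical placeholders for whatever the managed learner actually exposes; only the overall loop structure reflects the method described above:

```python
import itertools

# Hypothetical grid of learning parameters; the names and values are
# illustrative stand-ins, not a real system's parameter set.
PARAM_GRID = {
    "mode": ["intersecting", "disjoint"],
    "rule_type": ["characteristic", "discriminant"],
    "pruning": ["none", "light", "heavy"],
}

def run_learning_system(dataset, params):
    """Toy stand-in for invoking the managed learning system and
    measuring predictive accuracy on held-out test examples."""
    base = 0.70 if params["mode"] == "intersecting" else 0.60
    return base + (0.05 if params["rule_type"] == "characteristic" else 0.0)

def build_result_vectors(datasets):
    """Step one: run every parameter combination on every dataset and
    store each (parameter setting, measured accuracy) pair as a
    training example for the agent's own rule learner."""
    vectors = []
    for dataset in datasets:
        for combo in itertools.product(*PARAM_GRID.values()):
            params = dict(zip(PARAM_GRID, combo))
            params["accuracy"] = run_learning_system(dataset, params)
            vectors.append(params)
    return vectors
```

Each element of the returned list corresponds to one result vector in the sense used below: a parameter setting plus the accuracy it produced.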

The second step allows the agent to grow and improve itself. This should be done when the parameters suggested by the agent did not produce the expected results (they were more complex, less accurate, etc.). An explanation-based reasoning methodology is used to modify or improve the background knowledge of the intelligent agent. The methodology presented here for modifying the background knowledge of the agent follows the work by Pazzani (1990). Pazzani's method classifies the failure of a hypothesis into three categories: 1) inconsistent prediction; 2) ennoblement violated; 3) unusual input. This paper focuses on the third type of failure. That is, the agent will correct its knowledge when a failure occurs because the learning system was applied to a new or unusual set of data. In case of failure, if the rule related to that failure is more general than the correction, the agent will generate a new rule by adding the correction to the related rules, and keep the old rule. Otherwise, if the related rule is not more general than the correction, the agent will add the correction to the given rule and replace the old rule with the corrected one.
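The rule-update policy in the last two sentences can be sketched as follows, assuming rules are encoded as mappings from attributes to sets of allowed values; this encoding and the generality test are illustrative assumptions, not the paper's representation:

```python
def more_general(rule, correction):
    """A rule is more general than a correction if every condition it
    places is also present in the correction with a narrower or equal
    set of allowed values."""
    return all(attr in correction and correction[attr] <= values
               for attr, values in rule.items())

def refine(knowledge, rule, correction):
    """Update the agent's rule base after contradictory feedback,
    following the two cases described in the text."""
    if more_general(rule, correction):
        # Keep the old rule and add a new, specialized variant.
        knowledge.append({**rule, **correction})
    else:
        # Replace the old rule with the corrected one.
        knowledge[knowledge.index(rule)] = {**rule, **correction}
    return knowledge
```

This mirrors the contradiction handled in the case study below, where a specialized rule is added while the original is kept.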

3.3. Case Study

3.3.1. A Brief Description of the AQ15 Rule Learning Program

AQ15 learns decision rules for a given set of decision classes from examples of decisions, using the STAR methodology (Michalski, 1983). The simplest algorithm based on this methodology, called AQ, starts with a "seed" example of a given decision class, and generates a set of the most general conjunctive descriptions of the seed (alternative decision rules for the seed example). Such a set is called the "star" of the seed example. The algorithm selects from the star a description that optimizes a criterion reflecting the needs of the problem domain. If the criterion is not defined, the program uses a default criterion that selects the description that covers the largest number of positive examples (to minimize the total number of rules needed) and, with the second priority, that involves the smallest number of attributes (to minimize the number of attributes needed for arriving at a decision).

If the selected description does not cover all examples of a given decision class, a new seed is selected from the uncovered examples, and the process continues until a complete class description is generated. The algorithm can work with few examples or with many examples, and optimize the description according to a variety of easily modifiable hypothesis quality criteria.
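The covering loop described above can be sketched as follows. This is a deliberately simplified illustration, not AQ's actual star-generation procedure: the star here considers only single-condition candidate rules, an assumption made to keep the example small:

```python
def covers(rule, example):
    """A rule covers an example if every condition is satisfied."""
    return all(example[a] in vals for a, vals in rule.items())

def star(seed, negatives):
    """Toy stand-in for star generation: try the most general
    single-condition rules that cover the seed and exclude every
    negative example; fall back to the fully specific seed rule."""
    candidates = [{a: {v}} for a, v in seed.items()]
    good = [r for r in candidates if not any(covers(r, n) for n in negatives)]
    return good or [{a: {v} for a, v in seed.items()}]

def aq_cover(positives, negatives):
    """Simplified AQ covering loop: pick a seed from the uncovered
    positives, generate its star, keep the rule covering the most
    positives, and repeat until the class is fully covered."""
    uncovered, rules = list(positives), []
    while uncovered:
        seed = uncovered[0]
        best = max(star(seed, negatives),
                   key=lambda r: sum(covers(r, p) for p in positives))
        rules.append(best)
        uncovered = [p for p in uncovered if not covers(best, p)]
    return rules
```

The default selection criterion above (maximize positives covered) corresponds to the first priority of AQ's default criterion; the attribute-count tie-breaker is omitted for brevity.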

The learned descriptions are represented in the form of a set of decision rules, expressed in an attributional logic calculus called variable-valued logic 1, or VL1 (Michalski, 1973). A distinctive feature of this representation is that it employs, in addition to standard logic operators, the internal disjunction operator (a disjunction of values of the same attribute in a condition) and the "range" operator (to express conditions involving a range of discrete or continuous values). These operators help to simplify rules involving multivalued discrete attributes; the second operator is also used for creating logical expressions involving continuous attributes.
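As an illustration, conditions using the two operators might be evaluated as below. The encoding (a set of values for internal disjunction, a pair for a range) is an assumed representation for this sketch, not AQ15's actual VL1 syntax:

```python
def satisfies(example, conditions):
    """Evaluate a conjunctive rule whose conditions use internal
    disjunction (a set of allowed values, e.g. [color = red v blue])
    or the range operator (a (low, high) pair, e.g. [size = 2..5])."""
    for attr, cond in conditions.items():
        value = example[attr]
        if isinstance(cond, tuple):           # range condition
            low, high = cond
            if not (low <= value <= high):
                return False
        else:                                 # internal disjunction
            if value not in cond:
                return False
    return True
```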

AQ15 can generate decision rules that represent either characteristic or discriminant concept descriptions, depending on the settings of its parameters (Michalski, 1983). A characteristic description states properties that are true for all objects in the concept. The simplest characteristic concept description is in the form of a single conjunctive rule (in general, it can be a set of such rules). The most desirable is the maximal characteristic description, that is, a rule with the longest condition part, i.e., stating as many common properties of objects of the given class as can be determined. A discriminant description states properties that discriminate a given concept from a fixed set of other concepts. The most desirable is the minimal discriminant description, that is, a rule with the shortest condition part. For example, to distinguish a given set of tables from a set of chairs, one may only need to indicate that tables "have a large flat top." A characteristic description of the tables would also include properties such as "have four legs, have no back, have four corners, etc." Discriminant descriptions are usually much shorter than characteristic descriptions.

Another option provided in AQ15 controls the relationship among the generated descriptions ("rulesets" or "covers") of different decision classes. In the "IC" (Intersecting Covers) mode, rulesets of different classes may logically intersect over areas of the description space in which there are no training examples. In the "DC" (Disjoint Covers) mode, descriptions of different classes are logically disjoint. The DC mode descriptions are usually more complex, both in the number of rules and the number of conditions. There is also a "DL" mode (a Decision List mode, also called "VL" mode, for variable-valued logic mode), in which the program generates rulesets that are evaluated in a certain order.

3.3.2. Description of the Method

This section presents a simple case study of a parametric intelligent agent that manages the learning system AQ15 (Michalski et al., 1986). The specialty of this parametric intelligent agent is to determine the best combination of parameters for producing decision rules with high accuracy for a given problem. Another possible specialty of such an agent could be to determine the parameter setting that results in learning the simplest rules (e.g. rules with fewer conditions).

The agent gains initial knowledge and stores it for future use. This knowledge is modified whenever needed. To obtain the initial knowledge for the agent, AQ15 is executed with a set of randomly generated problems (data sets) of different sizes. For each data set, about 18 different experiments are performed. Each experiment consists of a set of training examples, a set of testing examples, and a pre-setting of the learning parameters of AQ15. After running AQ15 with the training examples, the learned rules are tested against the testing examples. The predictive accuracy of testing the learned rules, together with the parameter settings, forms the result vector of the experiment. The result vector of the experiment represents an example that shows the relationship between the parameter settings and the predictive accuracy. Some of the result vectors are used to learn decision rules using AQ15. The learned decision rules are the initial knowledge of the intelligent parametric agent.


Table 1 shows the set of AQ15's parameters (i.e. the attributes used in determining the initial knowledge of the agent and their possible values). The decision attribute, the predictive accuracy, is given values ranging from "minimum" to "maximum". Each learning parameter is represented by one attribute.

Table 1: The set of attributes used in building the initial knowledge base for the agent

The noise attribute is defined either by the user or by the agent. The agent determines the degree of noise by applying the learning system AQ15 on a small set of the training data and assigning a degree of noise based on the strength of the learned rules. Figure 2 shows some of the training examples obtained from running the AQ15 system with different parameter settings on one data set.
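The noise-estimation step might be sketched as follows. The strength measure (fraction of sample examples a rule covers), the 0.2 strength threshold, and the three-way split are illustrative assumptions, since the text does not specify them:

```python
def rule_strength(rule, sample, covers):
    """Hypothetical strength measure: the fraction of the sample
    examples that a learned rule covers."""
    return sum(covers(rule, e) for e in sample) / len(sample)

def estimate_noise(rules, sample, covers, strong=0.2):
    """Assign a degree of noise from the strength of rules learned on
    a small training sample: many weak rules (each covering few
    examples) suggest a noisier data set."""
    weak = sum(rule_strength(r, sample, covers) < strong for r in rules)
    ratio = weak / len(rules)
    if ratio > 0.5:
        return "high"
    if ratio > 0.2:
        return "medium"
    return "low"
```

Here `covers` is any predicate testing whether a rule matches an example, so the sketch is independent of the rule representation.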

 #   x1  x2  x3  x4  x5  x6
 1   C   0   L   F   H   H
 2   D   0   L   F   H   A
 3   C   1   L   F   H   H
 4   D   1   L   F   H   MA
 5   C   V   L   F   H   B
 6   D   V   L   F   H   L
 7   C   0   H   F   H   H
 8   D   0   H   F   H   L
 9   C   1   H   F   H   H
10   D   1   H   F   H   MA
11   C   V   H   F   H   H
12   D   V   H   F   H   L

Figure 2: Examples showing the relation between the parameter settings and the accuracy.

For each randomly generated data set, AQ15 ran with 18 different combinations of the learning parameters (not including non-learning parameters). The results of these runs are divided into two classes, High and Low. An example belongs to the class High if the value of its accuracy, x6, is "Better" or higher, and belongs to the class Low otherwise. AQ15 is executed again on the new data set to learn the initial knowledge for the agent. Figure 3 shows the initial knowledge of the intelligent agent.
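The two-class division can be illustrated as below. The ordered accuracy scale is an assumption reconstructed from the "minimum"-to-"maximum" range mentioned for the decision attribute; the exact labels and ordering may differ:

```python
# Assumed ordered accuracy scale, from the decision attribute's
# "minimum" .. "maximum" range; the labels are illustrative.
SCALE = ["minimum", "low", "average", "better", "high", "maximum"]

def accuracy_class(result_vector):
    """Map a result vector to the two-class scheme: High if the measured
    accuracy is 'better' or above on the scale, Low otherwise."""
    rank = SCALE.index(result_vector["accuracy"])
    return "High" if rank >= SCALE.index("better") else "Low"
```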


Rule 1: The accuracy is High if
    the learning mode is disjoint or intersecting
    and the size of the training data is small or large

Rule 2: The accuracy is High if
    the learning mode is intersecting

Rule 3: The accuracy is High if
    the type of the rules is characteristic
    and the size of the training data is small or large

Figure 3: The initial knowledge of the agent.

The agent was tested on well known data sets, namely the first two MONK problems (Thrun, Mitchell & Cheng, 1991). The MONK-1 and MONK-2 problems have different characteristics; however, they share the number of attributes and their domain values. The MONK problems are concerned with learning a concept description for robot-like figures. The data consists of two decision classes (Pos and Neg) and six attributes x1, ..., x6. The MONK-1 problem has no noise, and can be described using a simple DNF concept. The MONK-2 problem is very complex, and involves an n-of-m concept.

In this paper, we only test the first two rules of the agent's knowledge against the MONKs problems. Random samples from each of the two MONKs problems are selected for training, and the learned rules are tested with the remaining data. The results reported are compared with the first two rules. Table 2 summarizes these results. One contradiction is encountered when the second rule is tested with one of the MONK-1 data sets. In this data set, the MONK-1 problem produced average accuracy while learning characteristic rules in intersecting mode. This contradiction is more specialized than what is known to the intelligent agent. In such a case, the intelligent agent generates a new rule and adds it to its knowledge base. Figure 4 shows the new learned rule.

Table 2: Results from testing the agent with different problems. (X means that there is at least one case which contradicts the rule.)

,Rule 2: The accuracy is High if .

the learning mode is intersecting and the~of the rules is discriminant

Figure 4: A new rule generated from Rule 2 in Figure 3.
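The specialization step that produced the rule above can be sketched as follows. This is an illustrative simplification of the explanation-based revision described in the paper, not its actual implementation; the function and parameter names are hypothetical:

```python
# Illustrative sketch: when an observation contradicts a rule, the agent
# derives a specialized variant whose premise excludes the contradicting
# condition (cf. how Rule 2 in Figure 3 yields the rule in Figure 4).

def specialize(rule_conditions, observation):
    """Return a more specific premise that excludes the contradicting run.

    rule_conditions: dict of parameter -> required value (rule premise)
    observation: dict of parameter -> value for the contradicting run
    """
    specialized = dict(rule_conditions)
    for param, value in observation.items():
        if param not in specialized:
            # Constrain a parameter the rule left unconstrained to differ
            # from the contradicting run; a real agent would use its
            # explanation to choose which parameter to constrain.
            specialized[param] = ("not", value)
            break
    return specialized

rule2 = {"mode": "intersecting"}   # "accuracy is High if mode is intersecting"
bad_run = {"mode": "intersecting", "rule_type": "characteristic"}  # average accuracy observed
print(specialize(rule2, bad_run))
# -> {'mode': 'intersecting', 'rule_type': ('not', 'characteristic')}
```

With only two rule types, "not characteristic" is equivalent to "discriminant", matching the specialized rule in Figure 4.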



3.3.3. How Can Parametric Intelligent Agents Interact with Other Agents?

As we discussed earlier, there are three kinds of intelligent agents for managing the learning process. Having such agents work together requires some sort of complex control. Parametric agents may get feedback from other agents and may send information to them. The intelligent agents that search for the optimal representation space or knowledge representation may require a quick run (which requires fewer beam searches) to obtain feedback, generation of more general rules (using the discriminant rule type), etc. Other agents, which search for the best inferential method to apply to a given problem, may also require a quick run and better accuracy, or simple and accurate rules.

The feedback from other intelligent agents to the parametric agents may specify the set of parameters that are of interest, information on new constraints to be applied to a given problem (which can be used in updating the agent's knowledge), and/or a time at which to run the learning system.
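The feedback described here could, for instance, be represented as a simple message structure passed between agents. The fields below are hypothetical assumptions chosen to mirror the three kinds of feedback the text lists:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AgentFeedback:
    """Hypothetical message from another agent to a parametric agent."""
    parameters_of_interest: list                          # which learning parameters matter
    new_constraints: dict = field(default_factory=dict)   # constraints to fold into the knowledge base
    time_budget_seconds: Optional[float] = None           # how long the learning run may take

# Example: an agent requests a quick, narrow-beam run focused on two parameters.
msg = AgentFeedback(
    parameters_of_interest=["mode", "rule_type"],
    new_constraints={"beam_width": "small"},
    time_budget_seconds=60.0,
)
print(msg.parameters_of_interest)   # -> ['mode', 'rule_type']
```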

Communication between agents that search for the optimal representation space and agents that search for the optimal inferential methodologies is very complex. Both agents may require simultaneous information from each other. These issues will be investigated in future research.

4. Summary and Discussion

The paper presents a brief description of intelligent agents and their applicability to managing learning systems. Intelligent agents are systems that use inferential capabilities to assist users in performing certain tasks. The paper introduces a set of criteria to be considered in the development of intelligent agents. These criteria include defining the specialty of the agent (the functions of interest to the user that the agent is able to perform), the usability of the agent (determined by the user's acceptance of the agent and by how much control the user has over it), and the growingness of the agent (its adaptability for performing its function more efficiently). Intelligent agents should also have information about the outside world, or the pre- and post-conditions of their functions.

Intelligent agents can be of great help in managing learning systems. The paper introduces three ways to manage a learning system: 1) by setting a set of parameters to improve the overall performance of the system; 2) by searching for the optimal representation space for the given problem, or by selecting the most appropriate knowledge representation; 3) by planning the learning process through multistrategy learning capabilities.

With the increased interest in machine learning, it is very necessary to develop, with each learning system, an intelligent agent that searches for the best way to learn accurate, simple, or fast concept descriptions for any given problem. A case study of generating a parametric intelligent agent was presented. The intelligent agent learns its initial knowledge from randomly generated experiments. Each experiment is executed with different parameter settings. The agent uses this knowledge to set the parameters for any future run. Whenever unusual performance occurs, the agent considers this a contradiction to what it knows and starts modifying or correcting its knowledge.
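The run-and-revise cycle summarized here can be sketched as a short loop: predict performance from the knowledge base, compare with the observed result, and record an exception on a contradiction. All names below are hypothetical, and the knowledge base is reduced to a list of (premise, predicted level) pairs:

```python
# Illustrative run-and-revise cycle for a parametric agent.
# Knowledge is a list of (conditions, predicted_level) pairs.

def predicted_level(knowledge, setting):
    """Return the level predicted by the first matching knowledge entry."""
    for conditions, level in knowledge:
        if all(setting.get(k) == v for k, v in conditions.items()):
            return level
    return "Unknown"

def run_and_revise(knowledge, setting, observed_level):
    """Compare the observed performance with the prediction; on a
    contradiction, record a specialized exception for this setting."""
    if predicted_level(knowledge, setting) not in ("Unknown", observed_level):
        # More specific exceptions go first so they are consulted
        # before the general rule they override.
        knowledge.insert(0, (dict(setting), observed_level))
        return "revised"
    return "consistent"

knowledge = [({"mode": "intersecting"}, "High")]
status = run_and_revise(
    knowledge,
    {"mode": "intersecting", "rule_type": "characteristic"},
    "Average",                       # the unusual performance observed
)
print(status, len(knowledge))        # -> revised 2
```

After the revision, the same setting no longer triggers a contradiction, since the exception now shadows the general rule.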



Even though the presented experiment is simple, it shows great potential for further improvement. Future research on this approach will include adding deeper knowledge about the data, and different ways of evaluating the training data before running the learning system. A full implementation of Pazzani's method (1990) for handling different types of contradictions is needed for such agents. Finally, it would be very interesting to bring together multiple agents that can do all sorts of managing of the learning process.

ACKNOWLEDGMENT

The author thanks Ken Kaufman for his review and his useful comments.

This research was conducted in the Center for Artificial Intelligence at George Mason University. The Center's research is supported in part by the Advanced Research Projects Agency under grant No. N00014-91-J-1854, administered by the Office of Naval Research, and under grant No. F49620-92-J-0549, administered by the Air Force Office of Scientific Research; in part by the Office of Naval Research under grant No. N00014-91-J-1351; and in part by the National Science Foundation under grant No. IRI-9020266.

REFERENCES

Baffes, P.T., and Mooney, R.J., "Symbolic Revision of Theories with M-of-N Rules", Proceedings of the Second International Workshop on Multistrategy Learning, pp. 69-75, Harpers Ferry, WV, May 26-29, 1993.

Boden, M.A., "Agents and Creativity", Communications of the ACM, Vol. 37, No. 7, pp. 117-121, July 1994.

Brodley, C.E., "Addressing the Selective Superiority Problem: Automatic Algorithm/Model Class Selection", Proceedings of the Tenth International Conference on Machine Learning, pp. 17-24, 1993.

Guha, R.V., and Lenat, D.B., "Enabling Agents to Work Together", Communications of the ACM, Vol. 37, No. 7, pp. 127-142, July 1994.

Michalski, R.S., "AQVAL/1 - Computer Implementation of a Variable-Valued Logic System VL1 and Examples of its Application to Pattern Recognition", Proceedings of the First International Joint Conference on Pattern Recognition, pp. 3-17, Washington, DC, October 30 - November 1, 1973.

Michalski, R.S., "A Theory and Methodology of Inductive Learning", Artificial Intelligence, Vol. 20, pp. 111-161, 1983.

Michalski, R.S., Mozetic, I., Hong, J., and Lavrac, N., "The Multi-Purpose Incremental Learning System AQ15 and Its Testing Application to Three Medical Domains", Proceedings of AAAI-86, pp. 1041-1045, Philadelphia, PA, 1986.



Minsky, M., "A Conversation with Marvin Minsky about Agents", Communications of the ACM, Vol. 37, No. 7, pp. 23-29, July 1994.

Mitchell, M., Analogy-Making as Perception, MIT Press, Cambridge, Mass., 1993.

Mitchell, T., Caruana, R., Freitag, D., McDermott, J., and Zabowski, D., "Experience with a Learning Personal Assistant", Communications of the ACM, Vol. 37, No. 7, pp. 81-91, July 1994.

Morik, K., "Balanced Cooperative Modeling", in Machine Learning: A Multistrategy Approach, Vol. IV, Michalski, R.S. & Tecuci, G. (Eds.), pp. 259-318, Morgan Kaufmann Pub., San Francisco, CA, 1994.

Norman, D.A., "How Might People Interact with Agents", Communications of the ACM, Vol. 37, No. 7, pp. 68-71, July 1994.

Pazzani, M.J., "Learning Fault Diagnosis Heuristics from Device Descriptions", Machine Learning: An Artificial Intelligence Approach, Vol. 3, Kodratoff, Y. & Michalski, R.S. (Eds.), pp. 214-234, Morgan Kaufmann Pub., 1990.

Rendell, L., and Seshu, R., "Learning Hard Concepts through Constructive Induction: Framework and Rationale", Journal of Computational Intelligence, No. 6, 1990.

Thrun, S.B., Mitchell, T., & Cheng, J. (Eds.), "The MONK's Problems: A Performance Comparison of Different Learning Algorithms", Technical Report, Carnegie Mellon University, October 1991.

Wnek, J., and Michalski, R.S., "Hypothesis-Driven Constructive Induction in AQ17-HCI: A Method and Experiments", Machine Learning, 14, pp. 139-168, Kluwer Academic Publishers, Boston, 1994.
