
Emerging Technologies for Enterprise Optimization in the Process Industries

Rudolf Kulhavý1

Honeywell Laboratories
Pod vodárenskou věží

CZ-182 08 Prague, Czech Republic

Joseph Lu2

Honeywell Hi-Spec Solutions
16404 N. Black Canyon Highway

Phoenix, AZ 85023 U.S.A.

Tariq Samad3

Honeywell Laboratories
3660 Technology Drive

Minneapolis, MN 55418 U.S.A.

Abstract

We discuss three emerging technologies for process enterprise optimization, using a different application domain as an example in each case. The first topic is cross-functional integration on a model predictive control (MPC) foundation, in which a coordination layer is added to dynamically integrate unit-level MPCs. Enterprise optimization for oil refining is used to illustrate the concept. We next discuss data-centric forecasting and optimization, providing some details for how high-dimensional problems can be addressed and outlining an application to a district heating network. The final topic is adaptive software agents and their use for developing bottom-up models of complex systems. With learning and adaptation algorithms, agents can generate optimized decision and control strategies. A tool for the deregulated electric power industry is described. We conclude by emphasizing the importance of seeking multiple approaches to complex problems, leveraging existing foundations and advances in information technology, and a multidisciplinary perspective.

Keywords

Model predictive control, Cross-functional integration, Oil refining, Data-centric modeling, District heating, Adaptive agents, Electric power

Introduction

The history of advances in control is the history of progress in the level of automation. From single-loop regulation to multivariable control to unit-level optimization, we have seen step changes in the efficiency, throughput, and autonomy of operation of process plants. The next step in this progression is the optimization of the enterprise—a process plant, a multifacility business, or even a cross-corporate industry sector.

Enterprise optimization is the object of significant research today. Given the diversity of potential applications and the relative newness of the topic, it is not surprising that no one approach has been accepted as a universal solution. Although it is conceivable that future research will identify a cross-sector solution, at this point it appears that multiple approaches are likely to be necessary to best cover the full spectrum of potential applications.

In this paper, we discuss three emerging technologies for process enterprise optimization: cross-functional integration on a model predictive control (MPC) foundation, data-centric modeling and optimization, and adaptive agents. The discussions are generic, but different process applications are used in each case for illustration: oil refining, district heating, and electric power.

Enterprise Optimization on an MPC Foundation1

MPC has proven to be the most viable multivariable control solution in the continuous process industries and it is becoming increasingly popular in the semibatch and batch industries as well.

[email protected]@[email protected] section is adapted from Lu (1998).

Most industrial MPC products also include a proprietary economic optimization algorithm that is essential for driving the process to deliver more profit. In terms of economic benefit, MPC is one of the most significant enabling technologies for the process industries in recent years, with reported payback times between 3 and 12 months (Hall and Verne, 1993, 1995; Smith, 1993; Sheehan and Reid, 1997; Verne and Escarcega, 1998).

Cross-Functional Integration as a New Trend

Traditionally, most MPC applications are used for stabilizing operations, reducing variability, improving product qualities, and optimizing unit production. In most cases, a divide-and-conquer approach to a complex plantwide production problem is adopted. In this approach, a large plant is divided into many process units, and MPC is then applied on appropriate units. The divide-and-conquer approach reduces the complexity of the plantwide problem, but each application can reach only its local optimum at best. In a complex plant, the composition of local optima can be significantly less than the potential global optimum. For example, the estimated latent benefit for a typical refinery is 2-10 times more than what the combination of MPC controllers can capture (Bodington, 1995).

One possible approach to plantwide control is employing a single controller that is responsible for the whole plant; however, this option is infeasible. To note just the most obvious issue, commissioning or maintenance would require the whole plant to be brought offline. An alternative and practical approach for delivering global optimization benefit is to add a coordination layer on top of all the MPC applications to achieve the global optimum. The coordination layer usually covers multiple functions of the plant, such as operations, production scheduling, and production planning.



Complexity of MPC Coordination. With the divide-and-conquer approach, transfer price is traditionally used for measuring the merit of an advanced control application from a plantwide view and over an appropriate time period. The transfer price of a product (or a feed) is an artificial price assigned to determine the benefit contribution of a process unit to the overall plant under an assumed level of global coordination. A common assumption in calculating transfer price is that the product produced in a unit will travel through the designed paths (or designed processes) with the designed fractions to reach the final designated markets. This assumption is not always valid due to a lack of dynamic coordination among the units.

This phenomenon is referred to as benefit erosion2

(e.g., see Bodington, 1995) and is typically alleviated by manual coordination between different sections of the production processes. For example, after an advanced control application is implemented on a refinery’s crude unit, the yield of the most valuable component often increases, whereas the yield of the least valuable component decreases. The scheduling group would detect the component inventory imbalances (which could cause tank level problems if not corrected in time) rippling through various parts of the refinery, and it would then coordinate the affected parts of the refinery to “digest” the imbalances. With feedback from scheduling and operations groups, the planning group would update its yield models to reflect the yield improvement and rerun the plantwide optimization to generate a new production plan and schedule.

The fundamental cause of benefit erosion, however, is a lack of global coordination or optimization. Generally, the more complex the production scheme, the greater the problem. Therefore, a complex plant, such as a refinery or a chemicals plant, presents a higher benefit potential for cross-functional integration. If a new dynamic or steady-state bottleneck is encountered, or if an infeasible production schedule results, the scheduling group and the planning group would have to work together with the operations group to devise a new solution. The final solution may take a few adjustments or iterations. For complex cases, this process can take a few weeks or even months.

Only when all operations and activities in the refinery are coordinated together will benefit erosion be minimized. Although the situation described in this refinery example may sound primitive, it is still one of the better cases. In reality, different parts of a refinery usually use different tools with different models on different platforms. Engineers and operators in different units look at the same problem with very different time horizons.

2 The benefit loss when the assumed level of global coordination is not reached.

All of these factors complicate plantwide coordination in practice.

Although the cross-functional integration approach requires existing infrastructure to be revamped, including the DCS hardware and supporting software systems, this is increasingly less of an issue. Hardware and infrastructure costs have dropped significantly in recent years, particularly when viewed as a percentage of the total advanced control project budget. Support and organizational “psychology” sometimes hinder progress, but the situation has improved as more and more refineries and chemical plants realize the benefits of advanced control.

Long- and Short-Term Goals. Cross-functional integration takes a holistic approach to the plantwide problem. The integration encompasses a large number of process units and operating activities. Practical considerations suggest that it proceed bottom up, integrating one layer at a time until the level of enterprise optimization is finally reached, as depicted in Figure 1.

In Figure 1, the various layers in the pyramid describe the plantwide automation solution structure and the decision-making hierarchy. Around the pyramid structure is the circle of supply chain, production planning and scheduling, process control, global optimization, and product distribution. As more layers are integrated into the cross-functional optimization, the integrated system will perform more tasks and make more decisions that are made heuristically and manually today. Long envisioned by many industrial researchers, such as Prett and García (1988), this concept is now being further developed with design details of hardware systems, software structures, network interfaces, and application structures.

As a long-term goal, enterprise optimization integrates all activities in the whole business process, from the supply chain to production, and further to the distribution channel. In addition, risk management can also be included as a key technical differentiator. Parallel to the structural development for such a general-purpose complex system, some proof-of-concept projects have been piloted, and experiments have been conducted on several different structures (Bain et al., 1993; del Toro, 1991; Watano et al., 1993). The multiple-layer structure described in Figure 1 is believed to provide the flexibility needed for implementing and operating such a system. Moreover, each layer can be built at an appropriate level of abstraction and over a suitable time horizon. The lower layers capture more detailed information of various local process units over a shorter time horizon, whereas the higher layers capture more of the business essence of the plant over a longer horizon.

As a short-term goal, cross-functional integration could include, in a refinery, for example, raw material allocation, inventory management, production management, unit process control, real-time optimization, product blending, production planning, and production scheduling.


[Figure 1: Enterprise optimization—refinery example. A pyramid of layers (measurement, control, schedule and optimization, production planning, business planning) is surrounded by the cycle of supply and distribution, inventory management, demand forecast, orders, raw material allocation, product blending, optimized production, and the on-market product.]

The instrumentation and real-time database are in the bottom layer. The regulatory control comes in the second layer. Following that are the MPC layer, the global/multiunit optimization layer, the production scheduling layer, and, last, the top production planning layer.

More relevant to the control community, this integration will have a large impact on almost all control, optimization, and scheduling technologies currently employed in the process industries. Advanced process control, primarily MPC and real-time optimization (RTO), will certainly be affected, particularly in terms of connectability, responsiveness, and compatibility.

MPC Considerations

This integration requires us to reassess the MPC and RTO designs in terms of their online connectability and their dynamic integration. There is a need for codesigning MPC and RTO, or at least designing one taking into account that it will be working with the other dynamically. Furthermore, from a control perspective, global coordination requires MPC applications to perform over a much wider operating region and more responsively. This poses new challenges to MPC technology, three of which are discussed below.

Nonlinear MPC. The majority of chemical process dynamics is, loosely speaking, slightly nonlinear and can be well modeled with a linearized model around a given operating point. With the application of CV/MV linearizing transformations, linear MPC can be extended to effectively handle a variety of simple nonlinear dynamics. Practical examples include pH control, valve position control, high-purity quality control, and differential pressure control. However, under cross-functional integration, a process will be required to promptly move its operating point over a significant span, or else to operate under very different conditions. Linear MPC may not be adequate in such cases.

Two examples can serve to illustrate. First, as the operating point of the process migrates, the dynamics of some processes may change dramatically, even to the extent that the gain will change sign. For example, some yields of the catalytic cracking unit in a refinery can change their signs when operated between undercracking and overcracking regions. Second, transition control presents a unique type of nonlinear control problem. During a transition, the process operating point typically moves much faster than usual, and the process typically responds more nonlinearly than usual. Examples include crude switch in a refinery and grade transition in many series production processes such as polymer and paper.

Both of these problems can be seen as instances of multizone control, where the process needs to be regulated most of the time in each of the zones and to be occasionally maneuvered from one zone to another according to three performance criteria. The first criterion is for normal continuous operation in the first zone. The second criterion is for normal operation in the second zone. The third criterion is for special requirements during the transition or migration.



Although solving the multizone control problem alone has merit, solving it while coordinating with other parts of the plant will potentially provide much greater benefit. As enterprise optimization further advances, there is an increasing need for a nonlinear MPC tool that can solve multizone control problems. Likewise, there is an increasing need for MPC formulations, linear and nonlinear, that take into account the requirements of cross-functional optimization.

Model Structure. The MPC model structure should preferably have a linear backbone or linear substructure. The linear model can be developed experimentally, and the nonlinear portion of the dynamics can be added as the need arises. This preference stems from the fact that, in most cases, one does not know a priori if a nonlinear model is necessary.

The model structure should also be scalable. Adding controlled variables (CVs) and manipulated variables (MVs) should not require the existing model to be reidentified or regenerated. The user should easily be able to eliminate any input-output pair of the model. The preferred solution should not require the system to be square or all the MVs to be available all the time.

The ability to share model information in the cross-functional integration scheme is important. The model obtained in an advanced control project is often the single most costly item in the implementation. Sharing model information with the other layers of cross-functional integration can yield substantial monetary savings. This model sharing can be a two-way process: bottom up and top down.

The preferred model-sharing scheme is bottom up. It is much easier for engineers to find and correct modeling errors on a unit-by-unit basis in the control implementation than on a multiunit or plantwide basis under cross-functional integration. Bottom-up model sharing enhances usability, as engineers can verify their models as they commission the MPC applications unit by unit.

The solution technique needs to be algorithmically robust. If multiple solutions exist, it is preferable to stick to one branch of solutions throughout, or better yet, stick to the branch of solutions that requires only the minimum amount of control effort by some criterion.

Coordination Port. An essential requirement for cross-functional integration is an independent port for coordination. Simply sending a steady-state target as a set-point to an MPC controller is inadequate.

[Figure 2: An example of different dynamic optimization paths. A fast path and a slow path lead from a starting point to the global optimum in the space of CV1 (regulated), CV2, and CV3.]

The dynamic response required for coordination is often very different from that for set-point change, and the performance criterion can be completely different from set-point tracking or disturbance rejection. With a single port, it is difficult, if not impossible, to always satisfy both requirements.

A good coordination port implementation in MPC is one of the essential links in instituting cross-functional optimization. The nature of the problem is illustrated in Figure 2 as a simplified example of three CVs and two MVs. The first CV has a set-point, and the other two both have high and low bounds. (The MVs are not shown in the diagram.) The coordination or optimization target is solved by a global optimizer. The controller needs to drive the system to the target in an independent response for coordination. This is similar to designing a two-degree-of-freedom controller for the coordination port, except that the input direction and the response requirement may vary from one transition to another.

Furthermore, when the coordination port problem is not fully specified dynamically, as in many practical problems that we encounter, multiple solution paths to the destination exist, as depicted in Figure 2. Treating this as a traditional control problem by specifying a desired response trajectory is not always suitable for two reasons. First, the CV error violation is usually much less important in the transition than in normal operation. Second, except in trivial cases, one rarely knows a priori which transitional path would be financially optimal yet dynamically feasible.

An alternative solution is to solve for all equal solution paths in terms of the performance criteria (including financial terms) and choose the one that requires the minimum MV movement by some criterion. See Lu and Escarcega (1997) for one such solution to the coordination port implementation.
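The minimum-movement idea can be sketched numerically. When more MVs are available than constrained CVs, infinitely many steady-state MV moves reach the optimizer's target, and the least-norm move is given by the Moore-Penrose pseudoinverse. The toy sketch below, with an invented gain matrix, illustrates the selection criterion only; it is not the algorithm of Lu and Escarcega (1997).

```python
import numpy as np

# Hypothetical steady-state gain matrix: 1 constrained CV, 3 MVs,
# so the CV target is reachable by infinitely many MV moves.
G = np.array([[2.0, -1.0, 0.5]])   # dy = G @ du
dy_target = np.array([1.2])        # CV change requested by the optimizer

# Least-norm MV move among all moves satisfying G @ du = dy_target.
du = np.linalg.pinv(G) @ dy_target
print("MV move:", du, "reaches dy =", G @ du)
```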


Summary

If MPC has been the main theme of the ’80s and ’90s in the process control industries, cross-functional integration and enterprise optimization will be the main theme of the next two decades. As we advance toward a higher level of computer-integrated manufacturing, the problem definition and the scope of classical model predictive control will be expanded. The cross-functional integration approach requires the dynamic coordination of multiple MPCs, not just in terms of steady-state operation, as in many steady-state RTO and composite LP approaches. One practical way of achieving dynamic coordination is by designing a coordination port in the MPC control strategies.

Although truly enterprise-scale applications along these lines are yet to be implemented, our initial projects in cross-functional integration are yielding exciting results. A cross-functional RMPCT3 integration is discussed by Verne and Escarcega (1998), who report significant benefits. More recently, Nath et al. (1999) describe an application of Honeywell’s Profit® Optimizer technology to an ethylene process at Petromont’s Varennes olefins plant. In the Profit Optimizer approach, controllers are not operating in tandem, and the dynamic predictions of bridged disturbance variables are used by individual RMPCTs to ensure dynamic coordination/compensation among them. The controllers thereby work together to protect mutual CV constraints. The optimizer coordinates 10 controllers and other areas, covering the entire plant except for the debutanizer. During the acceptance test for the system, a sustained increase of over 10% in average production was achieved. This production level surpassed the previous all-time record for the plant by over 3.7% and was well above the expectation of a 2.7% increase.

Our experience to date in enterprisewide advanced control coordination has largely been limited to refining and petrochemical plants. For chemical plants, one of the key additional requirements is the integration of scheduling, product switchover, and other discrete-event aspects of plant operation within the formulation. The research of Morari and colleagues (Bemporad and Morari, 1999) on the optimization of hybrid dynamical systems is especially promising in this context.

Exploiting the Data-Centric Enterprise

From a technology that leverages the state-of-the-art in empirical-model-based control, we next turn to a more radical alternative to large-scale optimization. The key idea is that, where first principles or identified models cannot usefully be developed, we can consider historical data as a substitute. In other words, “the data is the model.”

3 RMPCT (Robust Multivariable Predictive Control Technology) is a Honeywell MPC product.

This mantra was not especially useful even a few years ago, but now, with modern storage media available at affordable prices and computer performance increasing at a steady pace, entire process and business histories can be stored in single repositories and used online for enhanced forecasting, decision making, and optimization. This makes it possible to implement a data-centric version of intelligent behavior:

• Focus on what matters—by building a local model, on demand and for the immediate purpose.

• Learn from your errors—by consulting all relevant data in your repository.

• Improve best practices—by adapting proven strategies first.

The data-centric paradigm combines database queries for selecting data relevant to the case, fitting the data retrieved with a model of appropriate structure, and using the resulting model for forecasting or decision making.

When the size of the data repository becomes large enough, the architecture of the database (the data model) becomes crucial. To guarantee sufficiently fast retrieval of historical data, the queries need to be run against a specifically designed data warehouse rather than the operational database. Once the relevant data are retrieved, nonparametric statistical methods can be applied to build a local model fitting the data. Data-centric modeling can thus be seen as a synergistic merger of data warehousing and nonparametric statistics.
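A minimal sketch of the resulting query-then-fit loop follows, with an in-memory SQLite table standing in for the data warehouse; the table, column names, and synthetic demand curve are invented for illustration.

```python
import sqlite3
import numpy as np

# Toy "warehouse": 500 historical records of (outdoor temp, heat load).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE history (temp REAL, load REAL)")
rng = np.random.default_rng(0)
temps = rng.uniform(-15, 15, 500)
loads = 100 - 3 * temps + rng.normal(0, 2, 500)
db.executemany("INSERT INTO history VALUES (?, ?)", zip(temps, loads))

def forecast(temp0, radius=2.0):
    """Query records near the working point and fit a local line."""
    rows = db.execute(
        "SELECT temp, load FROM history WHERE temp BETWEEN ? AND ?",
        (temp0 - radius, temp0 + radius)).fetchall()
    x, y = np.array(rows).T
    slope, intercept = np.polyfit(x, y, 1)   # local model, built on demand
    return slope * temp0 + intercept

print(forecast(-5.0))
```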

Data-centric models can form the basis for enterprise optimization. For example, the central problem of business optimization is matching demand with supply in situations when the supply or demand, or both, are uncertain. Suppose that a reward due to supply is defined as a response variable dependent on the decisions made and other conditions. Supply optimization then amounts to searching for maximum reward over the response surface.

In contrast to traditional response surface methods, data-centric optimization does not assume a global model of the response surface; rather it constructs a local model on demand—for each decision tested in optimization. Since the response is estimated through locally weighted regression (as discussed below), noise is automatically filtered. The uncertainty of the response estimate can also be respected in the optimization by replacing the estimated reward with the expected value of a utility function of the reward. By properly shaping the utility function, one can make decision making either risk-prone or risk-averse. As for the optimization algorithm itself, data-centric models lend themselves to any stochastic optimization method, including simulated annealing, genetic algorithms, and tabu search. (Response surface optimization is only one example of a data-centric optimization scheme. Other schemes, such as optimization over multiunit systems and distributed optimization, are currently being investigated.)



We next contrast data-centric modeling with the more established global and local modeling approaches and provide some technical details. An overview of a reference application concludes this section.

Empirical Modeling

When exploring huge data sets, one must choose between trying to fit the complete behavior of the data and limiting model development to partial target-oriented descriptions.

The global approach generally calls for estimation of comprehensive models such as neural networks or other nonlinear parametric models (Sjöberg et al., 1995). The major advantage of global modeling is in splitting the model-building and model-exploitation phases. Once a model is fit to the data, model look-up is very fast. Global models also provide powerful data compression. After a model is built, the training data is not needed any further. The drawback is that the time necessary for estimation of unknown model parameters can be very long for huge data sets. Also, global models are increasingly sensitive to changes in the data behavior and may become obsolete unless they are periodically retuned.

The local approach makes use of the fact that often it is sufficient to limit the fit of the process behavior to a neighborhood of the current working point. Traditionally, local modeling has been identified with recent-data fitting. Linear regression (Box and Jenkins, 1970) and Kalman filtering (Kalman, 1960) with simple recursive formulae available for parameter/state estimation have become extremely popular tools. Their simplicity comes at some cost, however. Adaptation of local-in-time models is driven solely by the prediction error. When a previously encountered situation is encountered again, learning starts from scratch. Further, when a process is cycling through multiple operating modes, adaptation deteriorates model accuracy.

The data-centric approach is an alternative to both these prevailing paradigms. It extends recent-data fitting to relevant-data fitting (see Figure 3). This “shift of paradigm” combines the advantages of global and local modeling. Namely, it provides

• a global description of the data behavior,

• through a collection of simple local models built on demand,

• using all data relevant to the case.

The price we pay for this powerful mix is that all data need to be at our disposal at all times (i.e., no single compact model is returned as a result of modeling). The data-centric model can be built out of the original data stored in the database without any data compression.

[Figure 3: Global, local-in-time, and local-in-data modeling. Global: all data, via estimation, yield one global model. Local-in-time: recent data, via adaptation, yield a local model. Local-in-data: a query retrieves relevant data, from which a local model is built.]

When applied for forecasting at the enterprise level, the data needs to be aggregated properly. The forecasting literature (see, e.g., West and Harrison, 1989) shows that forecasting from aggregated data yields more robust and more precise forecasts. The design of proper aggregation or more sophisticated preprocessing of data thus becomes a crucial part of data-centric modeling. Poor preprocessing strategies will need to be corrected before data-centric models can be effective.
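As a small illustration of the aggregation step, the sketch below rolls a synthetic 15-minute series up to hourly and daily means with pandas; the series and frequencies are placeholders, not data from the application described later.

```python
import numpy as np
import pandas as pd

# One week of synthetic 15-minute demand readings with a daily cycle.
idx = pd.date_range("2001-01-01", periods=4 * 24 * 7, freq="15min")
demand = pd.Series(
    100 + 10 * np.sin(np.arange(len(idx)) / 96 * 2 * np.pi), index=idx)

hourly = demand.resample("1h").mean()   # aggregate for longer horizons
daily = demand.resample("1D").mean()
print(hourly.head(2), daily.head(2), sep="\n")
```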

Data-Centric Modeling with Locally Weighted Regression

To be more specific, we describe one possible implementation of data-centric modeling, namely, locally weighted regression with local variable bandwidth. The algorithm is by no means the only option and should be understood only as an illustration of the general idea.

It is difficult to trace the originator of local modeling. The concept appeared independently in various fields under names such as locally weighted smoothing (Cleveland, 1979), nonparametric regression (Härdle, 1990), local learning (Bottou and Vapnik, 1992), memory-based learning (Schaal and Atkeson, 1994), instance-based learning (Deng and Moore, 1994), and just-in-time estimation (Cybenko, 1996).

General Regression. Assume that a single dependent variable or response y depends on n independent or regressor variables ϕ1, ϕ2, . . . , ϕn through an unknown static nonlinear function f(·) with precision up to an unpredictable or stochastic component e

y = f(·) + e.

Page 7: Emerging Technologies for Enterprise Optimization in the ...staff.utia.cas.cz/kulhavy/cpc01.pdf3tariq.samad@honeywell.com 1This section is adapted from Lu (1998). batch industries

362 Rudolf Kulhavy, Joseph Lu and Tariq Samad

The objective is to estimate response y0 for any particular regressor ϕ0.

Linearization in Parameters. Rather than trying to fit the above global model to all the data available, data-centric modeling suggests the application of a simpler model to data points (y1, ϕ1), . . . , (yN, ϕN) selected so that ϕ1, ϕ2, . . . , ϕN are close to a specified regressor ϕ.

A typical example is a model linearized around a given column vector ϕ:

y = θ′(ϕ)ϕ + e,

where θ denotes a column vector of regression coefficients and θ′ its transposition.

Curse of Dimensionality. A word of caution applies here. As the dimension of ϕ increases, the data points become extremely sparsely distributed in the corresponding data space. To locate enough historical data points within a neighborhood of the query regressor ϕ0, the neighborhood can become so large that the actual data behavior cannot be explained through a simplified model.

Consider, for instance, a 10-dimensional data cube built over 10-bit data; it contains 2¹⁰⁰ ≈ 10³⁰ bins! Even a trillion (10¹²) data points occupy an almost zero fraction of the data cube bins. Even with 5-bit (drastically aggregated) data, 99.9% of bins are still empty.
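These counts are simple to verify:

```python
# Back-of-the-envelope check of the bin counts quoted above.
bins_10bit = (2 ** 10) ** 10            # 10 dimensions, 10 bits each = 2^100
print(f"{bins_10bit:.2e}")              # about 1.27e30 bins
print(10 ** 12 / bins_10bit)            # fraction filled by a trillion points

bins_5bit = (2 ** 5) ** 10              # 5-bit data: 2^50 bins
print(1 - 10 ** 12 / bins_5bit)         # still more than 99.9% empty
```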

We have an obvious contradiction here. To justify a simple model, we must apply it in a relatively small neighborhood of the query point. To retrieve enough data points for a reliable statistical estimate, we must search within a large enough neighborhood. One must ask: Can data-centric modeling work at all?

Coping with Dimensionality. Luckily, real data behavior is rarely that extreme. First, the data is usually concentrated in several regions around typical operating conditions, which violates the assumption of uniform distribution assumed in mathematical paradoxes. Second, the regressor ϕ often lives in a subspace of lower dimension. That is, ϕ = ϕ(x) where x is a vector of dimension smaller than the dimension of ϕ. Suppose, for instance, that y, x1, and x2 denote process yield, pressure, and temperature, respectively, and the model is a polynomial fit with

ϕi(x1, x2) = x1^{m(i)} x2^{n(i)}.

The dimension of regressor ϕ can easily be much larger than the dimension of x, but it is the dimension of x that matters here; it defines the dimension of a data space within which we search for “similar” data points.

Under the assumption ϕ = ϕ(x), the model is linearized around the vector x:

y = θ′(x)ϕ + e

and applied to data points (y1, x1), . . . , (yN, xN) selected so that ‖xk − x0‖ ≤ d for k = 1, 2, . . . , N, where ‖∆‖ is a Euclidean norm of vector ∆, x0 is a query vector, and d is a properly chosen upper bound on the distance.

Weighted Least Squares. The simplest statistical scheme for estimating the unknown parameters θ is based on minimizing the weighted sum of prediction errors squared:

min_θ ∑_{k=1}^{N} K(‖xk − x0‖) (yk − θ′ϕk)².

Each data point (yk, xk), k = 1, 2, . . . , N is assigned a weight that decreases with the Euclidean distance of xk from x0, through a kernel function K(·). Typical examples of kernel functions are the Gaussian kernel K(∆) = exp(−∆²) or the Epanechnikov kernel K(∆) = max(1 − ∆², 0).

The kernel function assigns zero or practically zero weight to data points (y, x) that appear too far from the query vector x0. We can use this fact to accelerate the database query by searching only for data points within the neighborhood of x0 defined by K(‖xk − x0‖) ≥ ε, where ε is close to zero.

Performance Tuning. The performance of the above algorithm crucially depends on the definition of the Euclidean norm ‖∆‖. In general, the norm is shaped by the “bandwidth” matrix S:

‖∆‖² = ∆′S⁻¹∆.

Through S, it is possible to emphasize or suppress the importance of deviations from the query point x0 in selected directions in the space of x-values. The matrix S depends on x0 and can be determined, for example, by the nearest neighbor or cross-validation method (Hastie and Tibshirani, 1990).
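Putting the pieces together, the sketch below implements the scheme just described: Gaussian kernel weights shaped by a bandwidth matrix S, followed by weighted least squares on a local linear model around the query point. The data set and the choice of S are illustrative assumptions, not code from any fielded system.

```python
import numpy as np

def lwr_predict(X, y, x0, S):
    """Locally weighted regression estimate of y at the query point x0."""
    diff = X - x0
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(S), diff)
    w = np.exp(-d2)                                  # Gaussian kernel weights
    Phi = np.hstack([np.ones((len(X), 1)), diff])    # local linear regressors
    W = np.diag(w)
    theta = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y)
    return theta[0]                                  # prediction at x0

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, (200, 2))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0, 0.05, 200)
S = np.diag([1.0, 4.0])        # wider bandwidth along the second direction
print(lwr_predict(X, y, np.array([5.0, 5.0]), S))
```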

Bayesian Prediction. The precision of prediction is an important issue with any statistical method, but in data-centric modeling it is even more pressing due to the relatively small number of data points used for local modeling. To quantify consistently the prediction uncertainty, one can adopt the Bayesian approach to estimation and prediction (Peterka, 1981). Compared with the least-squares method chosen above for simplicity, the Bayesian method calculates a complete probability distribution of the estimated parameters. The distribution is then used for calculating a probability distribution of the predicted variable(s). From the predictive distribution, any derived statistic such as variance or confidence interval can be computed. Due to the relative simplicity of local models, the Bayesian calculations, notorious for their computational complexity in many tasks, can be performed here analytically, without any approximation (for more details, see Kulhavý and Ivanova, 1999).
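For concreteness, a simplified sketch of the Bayesian variant follows. It assumes a Gaussian prior on the local parameters and a known noise variance, so the predictive mean and variance come out in closed form; the full treatment (Peterka, 1981; Kulhavý and Ivanova, 1999) does not need these simplifications.

```python
import numpy as np

def bayes_local_predict(Phi, y, w, phi0, sigma2=0.01, tau2=10.0):
    """Weighted Bayesian linear regression: predictive mean and variance."""
    W = np.diag(w)
    # Posterior of theta under the prior theta ~ N(0, tau2 * I) and a
    # kernel-weighted Gaussian likelihood with known noise variance sigma2.
    P = np.linalg.inv(Phi.T @ W @ Phi / sigma2 + np.eye(Phi.shape[1]) / tau2)
    m = P @ Phi.T @ W @ y / sigma2
    return phi0 @ m, sigma2 + phi0 @ P @ phi0   # mean, predictive variance

rng = np.random.default_rng(2)
Phi = rng.normal(size=(50, 3))
y = Phi @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 50)
print(bayes_local_predict(Phi, y, np.ones(50), np.array([1.0, 0.0, 0.0])))
```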


Figure 4: A comparison of predicted (bars) and actual (line) steam.

Reference Application

The first operating application of data-centric forecasting and decision making is an operator advisory system in a large combined heat and power company in the Czech Republic. The company supplies heat to a district heating network and electricity to a power grid. The whole system is composed of five generation plants, a steam pipeline network totaling 60 miles, and five primary, partially interconnected hot-water pipeline networks of a total length of 46 miles.

Data-centric forecasting is being applied to predict the total steam, heat, and electricity demand, heat demand in individual hot-water pipelines, and total gas consumption. The forecasts are performed in three horizons—15-minute average one day ahead, 1-hour average one week ahead, and 1-day average one month ahead (Figure 4). Altogether, this requires the computation of 1,632 forecasts every 15 minutes, 2,856 forecasts every hour, and 1,581 forecasts every midnight. The use of highly optimized data marts makes it possible to perform all the computations, including thousands of database queries, while still leaving enough time for other analytic applications.

Data-centric decision making is applied to optimization of set-points (supply temperature and pressure) on hot-water pipeline networks and to economic load allocation over 16 boilers. Insufficient instrumentation of hot-water pipeline networks is solved by complementing data-centric optimization with a simple deterministic model-based procedure for evaluating the objective function. The lack of data is thus compensated for with prior knowledge, combining the underlying physical laws and some simplifying assumptions. The solution can be adapted to a wide range of heat distribution networks with different levels of system knowledge and process monitoring. Data-centric modeling takes into account the outdoor temperature, current (or planned) electricity production, time of day, day of week, whether it is a working day/holiday, and the recency of data. Electricity production is considered to reflect the effect of operators’ decisions in addition to external conditions.

Incomplete measurements on the boilers do not allow for online estimation of combustion efficiency for each boiler separately. Data-centric optimization is configured so as to exploit only available information. From the total fuel consumptions of generation plants and the total heat generated, the overall efficiency of the current boiler configuration is calculated. Data-centric optimization then searches in the process history and suggests possible improvements. For frequent situations and configurations, important for the company, enough points in the history are retrieved and reliable results are obtained. Different configurations are tested and evaluated while taking into account the costs of reconfiguration. One of the most valuable features of data-centric optimization is that it automatically adapts to changing operating conditions (e.g., variations in fuel calorific value, changes in boiler parameters, and aging of equipment).

Summary

A criticism frequently voiced about the data-centric approach is that it is incapable of modeling plant behavior in previously unvisited operational regimes and of handling changes in the plant, such as minor equipment failures, catalyst aging, and general wear and tear. In general, as with any statistical model, the quality of data-centric models depends crucially on the availability and quality of historical data. The lack of data can be compensated only by prior knowledge. One option is to populate the database with both actual and virtual data, the latter coming from a simulation model or domain expert (Kulhavý and Ivanova, 1999).

Thus, the data-centric approach itself suggests a solution to the problem of integrating data and knowledge. A historical database can be merged with a database populated with examples generated based on heuristics or conventional models. Different weightings can be associated with records depending on their source. Thus historical data can be preferred for operational regions that are well represented in the recent history of plant operation, whereas “synthetic” data can be preferred for contexts for which process history provides few associated samples. In effect, a synthesis of multiple types of information sources can be achieved using the database as a common foundation.

To sum up, data-centric models can be applied to a range of problems in the process industries, subject only to an ability to satisfy the database-intensive search requirements and the absence of a single compact model that would permit closed-form analytic solutions. The former will cease to be a real constraint in a few years due to continuing progress in database and computer technology.


The latter issue is fundamental—with on-demand modeling, all decision making becomes iterative. For this reason, the data-centric model should not be considered a replacement for the classical results of decision and control theory, but as a tool that can extend the reach of automation and control to processes that have not been amenable to analytic methods.

Optimization with Adaptive Agents

The term agent is used in multiple senses in the computational intelligence and related communities. We use it here to mean software objects that represent problem domain elements. Agents can, for example, represent units and equipment in a chemical plant, parts of an electricity transmission and distribution network, or suppliers and consumers in supply chains (García-Flores et al., 2000). An agent must be capable of modeling the input/output behavior of a system at a level of fidelity appropriate for problems of interest. Couplings between systems, whether material or energy flows, sensing and control data, or financial transactions, can be captured through interagent communication mechanisms. In some sense, agent-based systems can be seen as extensions of object orientation, although the extensions are substantive enough that the analogy can be misleading.

One common use of agents is to develop bottom-up, componentwise models of complex systems. With subsystem behaviors captured with associated agents, the overall multiagent system can constitute a useful model of an enterprise such as a process plant or even an industry structure. Given some initial conditions and input streams, the evolution of the computational model can track the evolution of the physical system. Often a quantitative match will be neither expected nor obtained, but if the essential elements driving the dynamics of the domain are captured, even qualitative trends can provide insight into and guidance for system operation.

The increasing interest in agent systems can be attributed in part to advances in component software technology. Provided that common interface specifications are defined, agents can be developed in different programming languages and can be executing on different processes or computers. Agents can vary from very simple to extremely sophisticated, and heterogeneous agents may work together in a single application. “Plug-and-play” protocols for agent construction encourage code reuse and modularity while enabling agent developers to work in a variety of programming languages. Generic agent toolkits are now available (e.g., swarm www.swarm.org, Lost Wax www.lostwax.com, and Zeus http://193.113.209.147/projects/agents/zeus/) that can facilitate the design of agent applications for diverse problems. Languages for interagent communication for broad-based applications have also been developed. One such language, KQML, has been fairly widely adopted (Finin et al., 1994).

[Figure 5: Abstract agent architecture. The agent has In and Out message boxes, Status and Control interfaces, Query, Configure, and Probe features, an internal state, a configuration, and online algorithms.]

Decision Making and Adaptation in Agents

Figure 5 shows an example abstract agent architecture, identifying some of the functions that are often incorporated. The In and Out boxes are the ports through which the agent exchanges messages with other agents; Status and Control are interfaces for the agent simulation framework; and the Query, Configure, and Probe features are used for initialization, monitoring, debugging, and so on. An agent can maintain considerable internal state information, which can be used by algorithms that implement its decision logic (its input/output behavior). Other online algorithms can be used to adapt the decision logic to improve some performance criteria (which may be agent-centric or global). The adaptation mechanism is often internal to an agent, but it need not necessarily be so.
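One way to render this abstract architecture in code is sketched below; the class layout and method names are our own illustration, not a published agent framework.

```python
from collections import deque

class Agent:
    def __init__(self, name, config=None):
        self.name = name
        self.in_box = deque()       # messages arriving from other agents
        self.out_box = deque()      # messages to be delivered to other agents
        self.state = {}             # internal state used by the decision logic
        self.config = config or {}  # set through the Configure interface

    def decide(self, message):
        """Decision logic mapping an input message to an output; override."""
        return {"from": self.name, "echo": message}

    def step(self):
        """Process one message; called by the simulation framework (Control)."""
        if self.in_box:
            self.out_box.append(self.decide(self.in_box.popleft()))

    def probe(self):
        """Probe interface: expose internal state for monitoring/debugging."""
        return dict(self.state)

a = Agent("unit-1")
a.in_box.append({"cmd": "status?"})
a.step()
print(a.out_box.popleft())
```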

The adaptation mechanism may be based on genetic algorithms (Mitchell, 1996), genetic programming (Koza, 1992), evolutionary computing (Fogel, 1995), statistical techniques, or artificial neural networks. These algorithms act on structures within an agent, but the adaptation is often based on information obtained from other agents. For example, a low-performing agent may change its program by incorporating elements of neighboring, better-performing agents.

The choice of learning algorithm interacts strongly with how the agent represents its decision-making knowledge and the kind of feedback available. For example, LISP programs may lend themselves to genetic programming. In a supervised learning situation, neural networks may employ an algorithm such as back-propagation or Levenberg-Marquardt minimization. However, in many agent applications, only weaker information is available (e.g., a final score), forcing agents to rely on reinforcement learning algorithms. The learner must solve temporal and spatial credit assignment problems—determining what aspect of its sequence of decisions led to the final (or intermediate) score and what part of its internal representation is responsible. Strategies such as temporal differencing and Q-learning have been proposed for this (Watkins and Dayan, 1992).
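A minimal tabular Q-learning update of the kind such agents can use is shown below; the states, actions, and reward signal are placeholders.

```python
import random
from collections import defaultdict

Q = defaultdict(float)               # Q[(state, action)] -> estimated value
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration

def choose(state, actions):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    """Temporal-difference step toward reward plus discounted future value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

update("s0", "raise_price", 1.0, "s1", ["raise_price", "hold"])
```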

Multiple learning mechanisms can be incorporated into the same agent environment, or even the same agent.


Figure 6: SEPIA interface showing a four-zone, three-generator, two-load scenario.

“Classifier systems” frequently incorporate a genetic algorithm to evolve new classifiers over time. The use of multiple adaptation mechanisms can offer unsuspected advantages. One well-known example is the Baldwin effect, in which learning interacts with evolution even though the learned information is not inherited (Hinton and Nowlan, 1987).

Example: A Simulator for Electric Power Industry Agents

One industry where the integration of business and physical realms has recently taken on a new importance is electric power. Deregulation and competition in the power industries in several countries over the last few years have resulted in new business structures. At the same time, generation and transmission facilities impose hard physical constraints on the power system. For a utility to attempt to maximize its profit while ensuring that its power delivery commitments can be accommodated by the transmission system—which is simultaneously being used by many other utilities and power generators—economics and electricity must be jointly analyzed.

A prototype modeling and optimization tool recently developed under the sponsorship of the Electric Power Research Institute provides an illustration. The tool, named SEPIA (for Simulator for Electric Power Industry Agents), integrates classical power flow models, a bilateral power exchange market, and agents that represent both physical entities (power plants) and business entities (generating companies). Through a full-featured GUI, users can define, configure, and interconnect agents to set up specific industry-relevant scenarios (see Figure 6). A scenario can be simulated with adaptation capabilities enabled in selected agents as desired. For example, a generation company agent can search a space of pricing structures to optimize its profits subject to its generation constraints, the transmission capabilities of the system, and the simultaneous optimization of individualized criteria by other agents that may bear a cooperative or competitive relationship to it. Two reusable and configurable learning/adaptation mechanisms have been incorporated within SEPIA: Q-learning and a genetic classifier system.

SEPIA models a power system as a set of zones interconnected by tie-lines. A transmission operator agent conducts security analysis and available transfer capacity calculation, including first contingency checks for all proposed transactions. A dc power flow algorithm and a simplified ac algorithm are included for this purpose.
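A dc power flow is compact enough to sketch in full. The following is our own illustration on an invented three-bus system, not SEPIA code: line flows are linear in bus angle differences, and bus 0 is taken as the slack.

```python
import numpy as np

# Hypothetical 3-bus system: lines as (from, to, susceptance).
lines = [(0, 1, 10.0), (1, 2, 8.0), (0, 2, 5.0)]
P = np.array([0.0, 1.5, -1.5])   # net injections; the slack balances the rest

n = 3
B = np.zeros((n, n))
for i, j, b in lines:            # assemble the dc susceptance matrix
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])   # bus angles, theta[0] = 0

for i, j, b in lines:
    print(f"flow {i}->{j}: {b * (theta[i] - theta[j]):+.3f}")
```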

Tools such as SEPIA have several potential uses:

• To identify optimized (although not necessarily optimal) operational parameters (such as pricing structures or consumption profiles in the power industry application).

• To determine whether an industry or business structure is stable or meets other criteria (such as fairness of access).

• To evaluate the potential benefits of new technology (e.g., superconducting cables and high-power transmission switching devices).

• To generally help decision makers gain insight into the operation and evolution of a complex system that may not be amenable to more conventional analysis techniques.

Further details on SEPIA are available in Harp et al. (2000). A self-running demonstration can be downloaded from the project Web site at http://www.htc.honeywell.com/projects/sepia.

Summary

Over the last several years, a technological convergence—increasing processing power and memory capacities, component software infrastructure developments, adaptive agent architectures—has made possible the development of a new class of simulation and optimization tools. With careful design, these tools can be used by nonexperts, and they can provide a level of decision support and insight to analysts and planners. It is, however, important to recognize that agent architectures—although the object of significant attention in both the software research community and the high-technology media—are no panacea (Wooldridge and Jennings, 1998). Problems that will benefit from an agent-oriented solution are those that have an appropriate degree of modularity and concurrency. Even then, considerable effort must be invested to endow agents with the necessary domain knowledge. For optimization applications, adaptation capabilities must be carefully enabled in agents to ensure that the flexibility of the agent-based approach does not immediately imply drastic computational inefficiency.



Conclusions

We conclude with some “slogans” that reflect the motivations and philosophy underlying the research reported above.

Embrace pluralism. The more complex and encompassing the problems we attempt to solve, the less likely that any one solution approach will suffice. Key problem characteristics—such as the degree of knowledge and the amount of data available—will vary considerably across the range of enterprise optimization applications. Covering the space of problems of interest requires a multipronged research agenda and the development of approaches that are applicable across the diversity of problem instances.

Leverage existing foundations. New classes of problems need not mean new built-from-scratch solutions. Especially from a pragmatic perspective, an ability to create technology that can “piggy-back” on existing infrastructure can be a key differentiator. The extent of disruption of systems and processes is always a consideration in the adoption of new research results.

Exploit IT advances. Advances in hardware, software, and communication platforms do not just allow more complex algorithms to be run; they also suggest new ways of thinking about problems. Doing away with traditional technology constraints can be liberating for the research community, although it is important to remember that the inertia of a mature, established industry is a significant constraint in its own right.

Pursue multidisciplinary collaborations. Another corollary of complexity is that its management is a multidisciplinary undertaking (Samad and Weyrauch, 2000). In the current context, this is not only a matter of marrying control theory, chemical engineering, and computer science. As we attempt to automate larger-scale systems and pursue the autonomous operation of entire enterprises, any delimiting of multidisciplinary connections seems arbitrary.

It appears to be a law of automation that the larger the scale of the system to be automated, the more specialized and less generic the solution. At one extreme, the PID controller is ubiquitous across all industries (process, aerospace, automotive, buildings, etc.) for single-loop regulation. At the multivariable control level, MPC is the technology of choice for the process industries but has had little impact in others. For enterprise optimization, effective solutions will likely be even more domain-specific. The fact that we have discussed three very different technologies does not imply a fundamental uncertainty about which one of these will ultimately be the unique “winner”; rather, it reflects a fundamental belief that process enterprise optimization is too complex and diverse a problem area for any one solution approach to satisfactorily address.

References

Bain, M. L., K. W. Mansfield, J. G. Maphet, W. H. Bosler, and J. P. Kennedy, Gasoline blending with an integrated on-line optimization, scheduling, and control system, In Proc. National Petroleum Refiners Association Computer Conference, New Orleans, LA (1993).

Bemporad, A. and M. Morari, “Control of systems integrating dynamics, logic, and constraints,” Automatica, 35(3), 407–427 (1999).

Bodington, E., Planning, Scheduling, and Control Integration in the Process Industries. McGraw-Hill, New York (1995).

Bottou, L. and V. Vapnik, “Local learning algorithms,” Neural Computation, 4, 888–900 (1992).

Box, G. E. P. and G. M. Jenkins, Time Series Analysis, Forecasting and Control. Holden-Day, San Francisco (1970).

Cleveland, W. S., “Robust locally-weighted regression and smoothing scatterplots,” J. Amer. Statist. Assoc., 74, 829–836 (1979).

Cybenko, G., Just-in-time learning and estimation, In Bittanti, S. and G. Picci, editors, Identification, Adaptation, Learning, NATO ASI Series, pages 423–434. Springer-Verlag, New York (1996).

del Toro, J. L., Computer-integrated manufacturing at the Monsanto Pensacola plant, Presented at AIChE Spring Meeting, Houston, TX (1991).

Deng, K. and A. W. Moore, Multiresolution instance-based learning, In Proc. 14th Int. Joint Conference on Artificial Intelligence. Morgan Kaufmann (1994).

Finin, T., R. Fritzson, D. McKay, and R. McEntire, KQML as an agent communication language, In Proc. Third International Conference on Information and Knowledge Management (CIKM’94). ACM Press (1994). Online at http://www.cs.umbc.edu/kqml/papers/kqml_acl.ps.

Fogel, D. B., Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. IEEE Press, Piscataway, NJ (1995).

García-Flores, R., X. Z. Wang, and G. E. Goltz, Agent-based information flow for process industries’ supply chain modeling, In Proc. Process Control and Instrumentation 2000, Glasgow (2000).

Hall, J. and T. Verne, “RCU optimization in a DCS gives fast payback,” Hydrocarbon Process., pages 85–92 (1993).

Hall, J. and T. Verne, “Advanced controls improve operation of Lubes Plant dewaxing unit,” Oil and Gas J., pages 25–28 (1995).

Härdle, W., Applied Nonparametric Regression. Cambridge University Press (1990).

Harp, S. A., S. Brignone, B. F. Wollenberg, and T. Samad, “SEPIA: A simulator for electric power industry agents,” IEEE Cont. Sys. Mag., pages 53–69 (2000).

Hastie, T. J. and R. J. Tibshirani, Generalized Additive Models. Chapman & Hall, London (1990).

Hinton, G. E. and S. J. Nowlan, “How learning can guide evolution,” Complex Systems, 1, 495–502 (1987).

Kalman, R. E., “A New Approach to Linear Filtering and Prediction Problems,” Trans. ASME, J. Basic Engineering, pages 35–45 (1960).

Koza, J., Genetic Programming. MIT Press, Cambridge, MA (1992).

Kulhavý, R. and P. Ivanova, Memory-based prediction in control and optimisation, In 14th World Congress of IFAC, volume H, pages 289–294, Beijing, China (1999).


Lu, J. and J. Escarcega, RMPCT: Robust MPC technology simplifies APC, Presented at AIChE Spring Meeting, Houston, TX (1997).

Lu, J., Multi-zone control under enterprise optimization: Needs, challenges and requirements, In Proc. International Symposium on Nonlinear MPC: Assessment and Future Directions, Ascona, Switzerland (1998).

Mitchell, M., An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA (1996).

Nath, R., Z. Alzein, R. Pouwer, and M. Leseur, On-line dynamic optimization of an ethylene plant using Profit® Optimizer, Paper presented at NPRA Computer Conference, Kansas City, MO (1999).

Peterka, V., Bayesian approach to system identification, In Eykhoff, P., editor, Trends and Progress in System Identification, pages 239–304, Elmsford, NY. Pergamon Press (1981).

Prett, D. M. and C. E. García, Fundamental Process Control. Butterworths, Boston (1988).

Samad, T. and J. Weyrauch, editors, Automation, Control and Complexity: An Integrated View. John Wiley and Sons, Chichester, UK (2000).

Schaal, S. and C. G. Atkeson, “Robot juggling: An implementation of memory-based learning,” Control Systems Magazine, 14, 57–71 (1994).

Sheehan, B. and L. Reid, “Robust controls optimize productivity,” Chemical Engineering (1997).

Sjöberg, J., Q. Zhang, L. Ljung, A. Benveniste, B. Delyon, P.-Y. Glorennec, H. Hjalmarsson, and A. Juditsky, “Nonlinear black-box modeling in system identification: A unified overview,” Automatica, 31, 1691–1724 (1995).

Smith, F. B., An FCCU multivariable predictive control application in a DCS, The 48th Annual Symposium on Instrumentation for the Process Industries (1993).

Verne, T. and J. Escarcega, Multi-unit refinery optimization: Minimum investment, maximum return, In The Second International Conference and Exhibition on Process Optimization, pages 1–7, Houston, TX (1998).

Watano, T., K. Tamura, T. Sumiyoshi, and P. Nair, Integration of production, planning, operations and engineering at Idemitsu Petrochemical Company, Presented at AIChE Spring Meeting, Houston, TX (1993).

Watkins, C. J. C. H. and P. Dayan, “Q-Learning,” Machine Learning, 8, 279–292 (1992).

West, M. and J. Harrison, Bayesian Forecasting and Dynamic Models. Springer-Verlag, New York (1989).

Wooldridge, M. and N. R. Jennings, Pitfalls of agent-oriented development, In Proc. of the 2nd Int. Conf. on Autonomous Agents (AA’98), pages 69–76, New York. ACM Press (1998).