
Page 1: The Pennsylvania State University PLANNING THE MEANS OF

The Pennsylvania State University The Graduate School

College of Engineering

PLANNING THE MEANS OF PRODUCTION: A MODEL BASED SIMULATED

ANNEALING APPROACH

A Thesis in Industrial Engineering

by William J. Peck

© 2019 William J. Peck

Submitted in Partial Fulfillment of the Requirements

for the Degree of

Master of Science

May 2019


The thesis of William J. Peck was reviewed and approved* by the following:

Daniel Finke
Assistant Research Professor
Thesis Advisor

S. Ilgin Guler
Assistant Professor of Civil and Environmental Engineering

Robert Voigt
Professor of Industrial Engineering
Industrial Engineering Graduate Program Coordinator

*Signatures are on file in the Graduate School.


Abstract

We present a job-shop-like manufacturing environment and introduce a random demand that is known in advance. It then becomes necessary to form combinations of workers, work stations, and shifts so as to minimize the deviation between the output achieved by a combination and the random demand. Integer programming, simulated annealing, and systems dynamic modeling were used to solve the underlying resource allocation problem, with integer programming serving as the benchmark against which the effectiveness of simulated annealing and systems dynamic modeling is measured. In analyzing the problem, we also probe the scalability and robustness of each technique and offer comparative and contrasting measures for each model. Experimental results reveal the existence of multiple optimal solutions, especially as demand increases, which leads to a discussion of Pareto frontiers and the introduction of a new cost objective in order to achieve a globally optimal solution. Our findings show that simulated annealing is a viable and robust solution technique able to deliver results whose quality is equivalent to that of integer programming. Systems dynamic modeling was not able to achieve the same level of quality as simulated annealing. We conclude by noting the compatibility and comparability of simulated annealing to integer programming as a solution technique for the aforementioned problem, in contrast to systems dynamic modeling, which fell short in terms of optimality and solution quality.


Table of Contents

List of Equations ............................................................................................................................................................. vi
List of Figures ................................................................................................................................................................ vii
List of Tables ................................................................................................................................................................. viii
Acknowledgements ......................................................................................................................................................... ix

1. INTRODUCTION .............................................................................................................................................. 1

1.A. PROBLEM STATEMENT ................................................................................................................................ 3

2. LITERATURE REVIEW .................................................................................................................................. 4

2.A. CYCLICAL DEMAND .................................................................................................................................... 4
2.B. FORECASTING DEMAND .............................................................................................................................. 4
2.C. MANUFACTURING SYSTEMS ........................................................................................................................ 6

2.C.1. Job Shop ................................................................................................................................................ 6
2.C.2. Mass Production .................................................................................................................................... 7
2.C.3. Batch Production ................................................................................................................................... 8

2.D. THE MANY FACETS OF SIMULATION MODELING ......................................................................................... 8
2.D.1. Systems Dynamic Modeling ................................................................................................................... 9
2.D.2. Discrete Event Simulation ..................................................................................................................... 9
2.D.3. Agent Based Modeling ......................................................................................................................... 11
2.D.4. Dynamic Systems ................................................................................................................................. 12
2.D.5. Modeling Summary .............................................................................................................................. 13

2.E. MATHEMATICAL OPTIMIZATION ............................................................................................................... 13
2.E.1. General Mathematical Optimization Concepts .................................................................................... 13
2.E.2. Linear Programming ........................................................................................................................... 14
2.E.3. Nonlinear Programming ...................................................................................................................... 17

2.F. METAHEURISTICS ...................................................................................................................................... 18
2.F.1. Simulated Annealing ............................................................................................................................ 18
2.F.2. Tabu Search ......................................................................................................................................... 20
2.F.3. Genetic Algorithm ................................................................................................................................ 21
2.F.4. Ant Colony Optimization ..................................................................................................................... 22

3. METHODOLOGY ........................................................................................................................................... 24

3.A. MODEL BASED .......................................................................................................................................... 24
3.A.1. Dynamic Systems ................................................................................................................................. 24
3.A.2. Discrete Event Simulation ................................................................................................................... 24
3.A.3. Agent Based Modeling ......................................................................................................................... 25
3.A.4. Systems Dynamic Modeling ................................................................................................................. 26

3.B. SIMULATED ANNEALING APPROACH ......................................................................................................... 27
3.B.1. The Genetic Algorithm ......................................................................................................................... 27
3.B.2. Tabu Search ......................................................................................................................................... 28
3.B.3. Ant Colony Optimization ..................................................................................................................... 29
3.B.4. Simulated Annealing ............................................................................................................................ 30

3.C. MATHEMATICAL PROGRAMMING .............................................................................................................. 32
3.D. METHODOLOGY SUMMARY ....................................................................................................................... 33

4. EXPERIMENTATION & RESULTS ............................................................................................................ 34

4.A. SCALABILITY ............................................................................................................................................. 34
4.A.1. The Base Model ................................................................................................................................... 34
4.A.2. Multiple Operations ............................................................................................................................. 38
4.A.3. Multiple Product Lines ........................................................................................................................ 50

4.B. ROBUSTNESS ............................................................................................................................................. 62

5. CONCLUSIONS .............................................................................................................................................. 68

5.A. FUTURE WORK .......................................................................................................................................... 69


References ........................................................................................................................................................ 70
Appendix A: Multiple Product Line Matlab© Code .......................................................................... 73
Appendix B: Fel'dman Model .................................................................................................................. 100


List of Equations

Eq. 1: Standard Form of a Linear Program ................................................................................................. 15
Eq. 2: Dual of the Standard Linear Program ............................................................................................... 15
Eq. 3: Weak Duality ......................................................................................................................................... 15
Eq. 4: Strong Duality ....................................................................................................................................... 16
Eq. 5: The Revised Simplex Algorithm ........................................................................................................ 16
Eq. 6: General Integer Program .................................................................................................................... 32
Eq. 7: Base Model, Integer Program ............................................................................................................. 35
Eq. 8: Refined Base Model, Integer Program .............................................................................................. 35
Eq. 9: Integer Program for Two Operations, Single Product Line .......................................................... 38
Eq. 10: Integer Program for Two Product Lines ........................................................................................ 51
Eq. 11: Nominal-is-best Mean & Variance Signal to Noise Ratios .......................................................... 63
Eq. 12: Nominal-is-best Variance Only Signal to Noise Ratios ................................................................ 63
Eq. 13: Loss Function ..................................................................................................................................... 66
Eq. 14: Producer Sector Capital Stock in Period t ................................................................................... 100
Eq. 15: Consumer Sector Capital Stock in Period t ................................................................................. 100
Eq. 16: Output in the Producer Sector ...................................................................................................... 100
Eq. 17: Output in the Consumer Sector .................................................................................................... 100
Eq. 18: Rearranged Producer Sector Capital Stock in Period t .............................................................. 100
Eq. 19: Rearranged Consumer Sector Capital Stock in Period t ............................................................ 101
Eq. 20: Final Output in the Consumer Sector .......................................................................................... 101


List of Figures

Figure 1: Simulated Annealing Flowsheet .................................................................................................... 31
Figure 2: Relaxed Integer Program Pareto Efficiency and Frontiers ....................................................... 40
Figure 3: Relaxed Integer Program Pareto Frontiers .................................................................................. 41
Figure 4: Pareto Clouds and Frontiers by Methods with Tradeoff Lines ............................................... 42
Figure 5: Pareto Clouds and Frontiers by Method ..................................................................................... 43
Figure 6: Simulated Annealing Pareto Front ............................................................................................... 45
Figure 7: Simulated Annealing Algorithm Convergence ............................................................................ 47
Figure 8: Feasible Cost Deviation Pareto Frontier, Multiple Operations ................................................ 48
Figure 9: Optimal Deviation Cost Pareto Frontier ..................................................................................... 49
Figure 10: Relaxed Integer Program Pareto Efficiency and Frontiers, First Product Line .................. 52
Figure 11: Pareto Frontiers, Product Line One ........................................................................................... 53
Figure 12: Pareto Clouds and Frontiers by Methods with Tradeoff Lines, Product Line One ........... 53
Figure 13: Pareto Clouds and Frontiers by Method, Operation 2, Product Line One ......................... 54
Figure 14: Pareto Clouds and Frontiers by Method, Operation 1, Product Line One ......................... 55
Figure 15: Relaxed Integer Program Pareto Efficiency and Frontiers, Product Line Two .................. 56
Figure 16: Pareto Frontiers, Product Line Two .......................................................................................... 56
Figure 17: Pareto Clouds and Frontiers by Methods with Tradeoff Lines, Product Line Two ........... 57
Figure 18: Pareto Clouds and Frontiers by Method, Operation 2, Product Line Two ......................... 58
Figure 19: Pareto Clouds and Frontiers by Method, Operation 1, Product Line Two ......................... 59
Figure 20: Feasible Cost Deviation Pareto Frontier, Multiple Product Lines ........................................ 60
Figure 21: Deviation Cost Pareto Frontier, Two Product Lines .............................................................. 61


List of Tables

Table 1: Base Model Results for Single Product Line and Operation ..................................................... 36
Table 2: Simulated Annealing Frequency Tables, Single Operation ........................................................ 37
Table 3: Integer Programming Results, Single Instance ............................................................................. 39
Table 4: Simulated Annealing Results, Single Instance .............................................................................. 43
Table 5: Worker Allocations that Achieve Equivalent Output ................................................................. 45
Table 6: Simulated Annealing Frequency Tables, Operation Two ........................................................... 45
Table 7: Cost per Shift .................................................................................................................................... 46
Table 8: Resulting Costs for Worker Configurations ................................................................................. 48
Table 9: Systems Dynamic Optimization Results, Multiple Operations ................................................. 50
Table 10: Number of Differing Combinations of Workers, Multiple Product Lines ............................ 51
Table 11: Global Optimal Solution Configurations with Cost, Two Product Lines ............................. 61
Table 12: Systems Dynamic Optimization Results, Multiple Product Lines .......................................... 62
Table 13: Signal to Noise Targets .................................................................................................................. 64
Table 14: Signal to Noise Responses ............................................................................................................ 64
Table 15: 𝐴0 Values per Product Line Operation for each Model ........................................................... 65
Table 16: Loss Function Values ..................................................................................................................... 66


The sum of all possible parts… Misty Peck Patrick Peck Hailey Peck Bill Peck Carrie Peck Matthew Norton Renee Norton Kathy Smith John Fitch Suzy McDougal Paul Hamilton June Rutherford Louise Benvie Dusty Garland Greg Garland Kelly Garland Gabrielle Jones Devin Garland Teya Garland Cheryl Fifer Danny Ray Fifer Danica Fifer Paisley Reyes Bill Ballard JoAnne Ballard Edward Pines Jean Paul Vessel Hansuk Sohn Michael Gaume Alice Comer Neil Swapp Kathy Dollahon Sarah Rede Elvira Masson Dan Makens

Monica Makens Jacob Makens Lucas Makens Joseph Paz Kenny Paz Matthew Paz Jerry Paz Ellen Paz Penny Wilson Tom Green Arnold Bustillos Adrianna Bustillos Scott Dick-Peddie Theresa Dick-Peddie Steve Ewing David Lynch Nellie Lynch Alex Lynch Julian Gaides Walter Gaides Jen Gaides Richard VonWolff Terry VonWolff Nic VonWolff Alex VonWolff Sue Henning George Henning Joe Pestovich Austin Martin-Likes Karis Funk Josiah Armstrong Dustin Chavez Ashley Chavez Bell Jacquez Adrian Kison

Robert Parker Warren Neff Reuben Peterson Jillian Adams Devin Oneal Olivia Trautschold Sally Grindstaff Courtney Clark Jess Bishop Travis Moulton Jerrad Auxier Joel Johnson Devin Kimball Caitlin Kimball Tim Dinehart Mary Mendoza Alexis Beichley Armando Mendoza Butch Peel Debbie Peel David Robinson Rhonda Robinson Raymond Friend Morgan Austin Nate Reese Joe Wanat Seamus McFadden Dan Krych Matt Zwetolitz Russ Smith

Acknowledgements

This material is based on work funded by the Office of Naval Research through the Naval Sea Systems Command (NAVSEA) under Contract No. N000024-12-D-6404, Delivery Order 18F8317. The opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Naval Sea Systems Command, nor do they reflect the views of the Office of Naval Research or our collaborators at the Naval Air Systems Command.


1. INTRODUCTION

The ability to estimate demand adequately is more art than science due to the inherent variability of the quantity. Great efforts have been undertaken to provide a general framework that fulfills this goal. Fortunately, with respect to large populations, humans are cyclical creatures prone to seasonal adjustments. This is most evident in shopping: demand patterns for cosmopolitan shoppers are well known thanks to years of empirical evidence. The same cannot be said for manufacturing processes. Demand can follow consumption, but this is not always the case and often depends on the product. Forecasting demand, even for the most mundane part, must be attempted so that the process operates as efficiently as possible. Properly spending resources on such forecasts is a worthy endeavor for any executive or manager who hopes to improve the bottom line. Forecasts are inherently problematic, as they are always wrong. Any layman watching the evening weather report can attest to this. It is not that forecasts are drastically wrong, but that they are never quite right; they often exhibit some incorrectness that colors the ultimate outcome. The term "manufacturing processes" encompasses the entire scope of processes involved in production, so further refinement is needed to identify a specific area or environment to study. For the purposes of this paper, a job shop environment is considered: one in which a small number of piece parts is actually produced but the variety of parts is large. Many unique parts are made, yet the volume or repetition of any specific part is low. As such, scheduling and especially forecasting are particularly tricky, which makes job shops intrinsically worthy of study and simulation. Simulation modeling offers a better estimation of forecasts.
This is due to the ability to run replications. Properties such as the central limit theorem ensure that, after a series of replications, a distribution can be created from which a better estimate of demand is obtained. Instead of relying on a single data point, or a series of empirically gathered data points, simulation modeling can produce a seemingly unending series of forecasts that, when combined, provide a more robust and accurate estimate of demand, or of any parameter in question. Historically, the most common type of simulation modeling has been discrete event modeling. Its applications are many, though they primarily focus on estimating parameters centered on queues and arrival/departure schedules. Discrete event modeling does not present itself as a tool for estimating demand. The implication is not that discrete event modeling should be avoided in the field of forecasting, but rather that there are simulation tools better suited to projection and outlook. In recent years, systems dynamic modeling has been used to estimate workforce, or the number of people a process will require. These processes can be as varied as recruiting for the armed forces or as common as a manufacturing process. A network of stocks, flows, and parameters coupled with a series of feedback loops makes systems dynamic modeling distinctly different from other modeling techniques. When approaching a problem that requires demand to be estimated, it is our understanding that people comprise only a portion of the inputs. Machines, whether they be new or old, processing time, and routing from machine to machine are all individual pieces that must fit together in order to solve this puzzle. In essence, the problem at hand is one of optimization. The measurable outcome of how well this optimization problem has been solved is the ability to meet the variable demand. A result that delivers either a surplus or a shortage is undesirable. A shortage implies that demand was not met and that changes are needed to fulfill the need. A surplus, while appearing lucrative, is also anathema because resources are diverted toward holding and purchasing costs rather than toward production. While we argue that anything other than a solution in which demand is exactly met is undesirable, it is also paramount to understand that such a solution is impractical in the real world. Plainly said, there are simply too many integer constraints that must be satisfied, which results in an output that is either above or below the demand. Thus, it becomes necessary to plan production such that the difference between output and demand is minimized. While we desire a solution that produces output equivalent to demand, we understand that this is most likely impractical and are willing to settle for a solution that minimizes the deviation between output and demand. Within the scope of optimization, one usually pictures a linear or nonlinear program. These problems always have an objective function whose value is either maximized or minimized, depending on the problem, subject to a series of either linear or nonlinear constraint functions. It is the interaction of the variables within the constraint functions, subject to their respective inequality or equality signs, that determines the value of the objective function. The Operations Research literature is rich with methods and algorithms that aim to produce an optimal or near-optimal solution.
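Stepping back to the replication idea introduced earlier in this section, the mechanics can be sketched with a toy demand process. The distribution below (demand as a sum of many small order events) is an illustrative assumption, not the thesis's actual model; the point is only that averaging many replications yields a tighter estimate than any single run.

```python
import random
import statistics

random.seed(42)  # fix the stream so the sketch is repeatable

def one_replication():
    """Simulate one period of demand as the sum of many small order events
    (an assumed toy process), so the central limit theorem applies to the
    average across replications."""
    return sum(random.randint(0, 2) for _ in range(100))

# Run many replications and summarize the resulting demand distribution.
replications = [one_replication() for _ in range(1000)]
estimate = statistics.mean(replications)   # point estimate of demand
spread = statistics.stdev(replications)    # variability across replications
```

A single replication can land anywhere in the distribution, but the mean of a thousand replications is a far more stable forecast, which is exactly the advantage claimed for simulation modeling above.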
While mathematical programming presents itself as an attractive solution procedure, there are multiple avenues open for solving an optimization problem. Of particular interest to this paper is the use of metaheuristics, where optimality is never guaranteed. When adopting the mathematical modeling method, simplifying assumptions are necessary to construct the constraint equations. Often, a large and complicated problem yields a complex mathematical program that requires a great many assumptions. At what point in the formulation do the assumptions made detract from solving the actual problem itself? The ingenuity behind metaheuristics is that they draw their inspiration from natural or biological processes. The genetic algorithm uses the concept of natural selection, while ant colony optimization seeks an optimal solution the way an ant colony seeks out food. The first step all metaheuristics take is to map out the solution space. This is a necessary and important aspect, as the optimization involves moving from neighboring solution to neighboring solution. The initial location within the neighborhood is typically chosen at initialization, and the algorithm is allowed to take a step. After each step, the algorithm evaluates the fitness, or robustness, of that step. The algorithm is sometimes allowed to step to a worse solution; the intent is to avoid converging on a local optimum, and the way each metaheuristic avoids local optima is what sets it apart from the others. The algorithm terminates after a set number of iterations, or stops once the solution has remained relatively steady over the past several iterations. The focus of this paper is a job shop environment where the objective is to find resource configurations that allow demand to be met as closely as possible. The outlook is a model-based simulated annealing approach with significant research contributions coming from the specific areas of simulation modeling, mathematical programming, and metaheuristics. The efficacy of the proposed methodology is demonstrated through multiple experiments involving scalability and robustness. First, we test the methodology's ability to scale and see how larger problems affect solution quality. Second, we test the robustness of the simulated annealing algorithm using Taguchi's concepts of signal-to-noise ratios and loss functions.
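The neighbor-stepping and probabilistic acceptance of worse solutions described above can be sketched as a minimal simulated annealing loop. The toy objective, neighborhood move, and geometric cooling schedule below are illustrative assumptions, not the configuration used later in this thesis:

```python
import math
import random

def simulated_annealing(objective, x0, neighbor, t0=10.0, cooling=0.95, iters=500):
    """Minimal simulated annealing loop: step to a neighboring solution,
    always accept improvements, and accept worse solutions with probability
    exp(-delta/T) so the search can escape local optima."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x)
        fy = objective(y)
        delta = fy - fx
        # Accept improvements outright; accept worse moves probabilistically.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling (one common, assumed schedule)
    return best, fbest

# Toy one-dimensional objective with several local minima.
random.seed(0)
f = lambda v: (v - 3) ** 2 + 2 * math.sin(5 * v)
x, fx = simulated_annealing(f, x0=0.0,
                            neighbor=lambda v: v + random.uniform(-0.5, 0.5))
```

As the temperature falls, worse moves are accepted less often and the search settles, which is the "relatively steady over the past several iterations" termination behavior noted above.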

1.A. Problem Statement

Increasing demand has resulted in shortages for a manufacturing cell that resembles a job shop environment. Several product lines, each with a large variety of individual piece parts, are produced by this cell and are necessary for fabricating larger units. A variety of work stations is required for the production of any given product line, as well as a skilled and specialized workforce. Often, a single worker will perform the vast majority of the operations necessary to deliver a finished part. What changes are needed to meet variable demand?

Given a set of 𝐽 product lines, where 𝐽 = {𝐽1, 𝐽2, … , 𝐽𝑛}, and a set of 𝑂 independent operations, where 𝑂 = {𝑂1, 𝑂2, … , 𝑂𝑚}, determine the appropriate number of workers, shifts, and work stations for each product line 𝐽. At most, there can be three shifts for each product line. A worker needs a work station in order to contribute to the total output of the shop, and each work station can process only one operation at a time. Once a process has begun on a work station, the operation must be completed without interruption. Each shift suffers a productivity penalty, 𝑃𝑠, due to various economic and working conditions. There is a physical capacity on the number of work stations and workers that can perform work within the space. Each operation is processed in the order given by the set 𝑂 for the specific job. The collective production pertaining to a combination of workers, shifts, and work stations shall be known as output. Every period 𝑙, the shop faces a random demand that is known in advance. The goal is to minimize the deviation between output and the random demand per period. Our purpose, therefore, is to solve the above problem using less common methods; in examining them, we chart a course that is not often taken and approach the resource allocation, or configuration, problem in this manner.
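To make the objective concrete, the following is a brute-force sketch of a single-operation, single-station-type version of this problem on a toy instance. The per-worker production rate, the shift penalties 𝑃𝑠, the worker capacity, and the demand value are all illustrative assumptions, not the thesis's data; real instances are solved later via integer programming and simulated annealing rather than enumeration.

```python
from itertools import product

# Toy instance (all numbers are illustrative assumptions):
RATE = 4                              # parts per worker per shift (assumed)
PENALTY = {1: 1.0, 2: 0.9, 3: 0.8}    # shift productivity multipliers P_s (assumed)
MAX_WORKERS = 6                       # physical capacity per shift (assumed)
DEMAND = 55                           # demand realized for this period (assumed)

def output(workers_per_shift):
    """Total output for a (w1, w2, w3) allocation of workers to the three shifts."""
    return sum(w * RATE * PENALTY[s]
               for s, w in enumerate(workers_per_shift, start=1))

# Enumerate every worker-to-shift allocation and keep the one whose output
# deviates least from demand -- a brute-force stand-in for the integer program.
best = min(product(range(MAX_WORKERS + 1), repeat=3),
           key=lambda alloc: abs(output(alloc) - DEMAND))
deviation = abs(output(best) - DEMAND)
```

Even in this tiny instance the integer restriction bites: no allocation hits the demand exactly, and several distinct allocations achieve the same minimum deviation, previewing the multiple-optima behavior discussed in the experiments.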


2. LITERATURE REVIEW

Using broad brush strokes, the introduction made known several important concepts and techniques that require further refinement and inspection. The following is a general discussion of these topics, infused with relevant articles that serve to reinforce the discussion. The goal is not to produce an exhaustive survey, but to gauge the depth and examine the breadth of these various topics so that the reader can understand the natural progression of thought involved in solving the aforementioned problem.

2.A. Cyclical Demand

Much of human life and nature is cyclical, from the time of our rising and falling asleep to the growing and harvest seasons of crops. Taken as a broad statement, humans are cyclical creatures who follow patterns easily observable to the modern eye, and the same concept applies to the production and consumption of goods. Leaving the realm of human nature, the focus moves to demand and how cyclical patterns affect not only consumption but also national trade. Since the late 1940s and early 1950s, economists and statisticians have focused on studying national demand for imported goods. Equations were developed to "relate the quantity of imports to the ratio of import prices to domestic prices, and the level of domestic real income."1 This style of methodology and approach remained largely unchanged until a new generation of economists and statisticians realized that it does not entirely capture the true picture. The criticism is that previous models did not account for distinctions "between the effects of cyclical factors and those factors that are secular in nature."2 An entirely new breed of models had to be created that accounted for seasonal and yearly demand patterns as well as patterns that occur only once a decade or century. Broadly speaking, estimating and understanding import demand for an industrialized country is a relatively easy task to undertake. Governments have the resources to create entire departments and ministries whose sole purpose is to conduct and manage trade policy. Most modern countries have industrial data regarding gross domestic product and import figures that can be used to forecast future trends in the national economy. In this vein, there is really no excuse for a modern industrial country not to have a forecast of national imports or exports.
Taking a step down from the notion of national economies, we arrive at the reason why countries are forced to import goods and ostensibly to export them: manufacturing. Whether they be private or public companies all manufacturing enterprises must take in some sort of material and through processing, transform that material into a commodity that can either be traded independently or combined into an assembly.

2.B. Forecasting Demand

The ability to forecast demand is arguably more difficult for manufacturing companies. They simply do not have the monetary resources that are available to national governments. Empirical data regarding demand for a certain product may not be as well documented as national imports, so using empirical data to inform forecasts may not be an option for all goods. Demand

1 Khan, M., Ross, K. (1975). Cyclical and Secular Income Elasticities of the Demand for Imports. The Review of Economics and Statistics. Volume 57. Number 3. 357-361. 2 Khan et al., β€œCyclical and Secular Income Elasticities of the Demand for Imports.”


forecasts are crucial for any manufacturing enterprise because they allow the company to plan β€œproduction, inventories and work force, and economic lot sizing.”3 Apart from affecting necessary inputs to a manufacturing process, β€œwithout accurate and timely forecasts, systematic approaches to increasing production efficiency… are severely handicapped.”4 The inability to forecast demand properly and accurately is thus felt twice by the process: first in planning the inputs necessary to production, such as workforce and inventory levels, and second in the shortcomings experienced when attempting to expand production efficiency and capacity. The problem lies not only in the allocation of monetary and staffing resources, but also in the various guises that manufacturing can assume. Different industries and systems have been developed to serve the needs of the various goods that are produced.

Within the scope of industry, two terms are used to describe the production of goods. Heavy industry denotes durable goods, or products that often require a large number of assemblies to form a finished product. Sectors within heavy industry include agriculture, shipbuilding, steel production, and chemical production. Light industry is characterized by consumer goods, or products that are relatively simple to make in and of themselves. Sectors within light industry include textile manufacturing and some electrical production. Historically, many Western nations developed light industry first. An example is textile production during the Industrial Revolution in Britain. The textile industry stands out as the first sector to utilize β€œmodern” production techniques such as mechanized looms and waterpower. The Industrial Revolution eventually spread to other sectors, such as iron production, which spurred development in machine technology and further increased upstream demand.
The above industrial revolution might be described as a β€œlight industry” revolution, as textile production represented the largest share of industrialization. A second β€œheavy industry” revolution occurred with the introduction of mass-produced steel, which fed directly into the production of automobiles, chemicals, and refined petroleum. One notable exception to the development of light industry before heavy industry is Imperial Russia, whose state-financed expansion of railroads required massive amounts of steel. It should be said that the industrial prowess of Imperial Russia was never comparable to that of Britain or the United States, and it fell precipitously during the First World War, the Russian Civil War, and under the policy of War Communism. The lack of a middle class led to problems in financing industrialization, since such undertakings tend to be expensive. As a result, industrialization tended to occur where the autocratic government deemed it necessary, because the government provided the funding. It has also been the case that government entities have used investment in heavy industry as a means to fuel economic growth, especially in the face of economic stagnation or decline. One famous example is the policy of deficit spending undertaken by the National Socialist government in Germany during the early 1930s. The government was able to mask its deficit through the introduction of promissory notes known as β€œMEFO bills” (Metallurgische Forschungsgesellschaft). The economist Hjalmar Schacht, head of the German central bank, developed the scheme. It allowed Germany to secretly divert billions of marks towards heavy industry that would later be

3 Willemain, T., Smart, C., Shockor, J., DeSautels, P. (1994). Forecasting Intermittent Demand in Manufacturing: A Comparative Evaluation of Croston’s Method. International Journal of Forecasting. Volume 10. Issue 4. 529-538. 4 Willemain et al., β€œForecasting Intermittent Demand in Manufacturing: A Comparative Evaluation of Croston’s Method.”


converted to rearmament. Around the same time frame, the Soviet Union also embarked on a campaign of industrialization through the funding of heavy industry. Once it became apparent that the World Revolution was not going to occur, leaders within the Soviet Union were forced to look internally for funding that would be applied to industrialization through heavy industry. A policy created by Yevgeni Preobrazhensky was to tax agricultural goods produced by the peasantry. After falling out with Joseph Stalin, who later championed the idea, Preobrazhensky was executed during the Great Purge. Collectivization under the Stalin regime was largely geared towards generating capital and food in order to sustain the process of industrialization. Grigory Fel’dman, an economist at GOSPLAN, carried out the calculations and planning of industrialization. The idea was to favor investment in areas concerning the production of producer goods and to delay investment in consumer goods, the key principle being that output in the consumer sector depends on the amount of capital present in the producer goods sector. Thus, the amount of capital invested in producer goods indirectly governs the output eventually available in the consumer sector (see Appendix II). By initially providing funding for producer goods, that funding will eventually bleed over into consumer production. The question then becomes: how long should this period of austerity (strictness with respect to the production of consumer goods) last before the preferential treatment of producer goods is removed?

Each industry intrinsically faces challenges when it comes to forecasting demand. Heavy industry may see seasonal fluctuations due to weather-related circumstances: a farmer probably won’t buy a new tractor in the winter, waiting instead until spring.
Light industry, due to its focus on consumer goods, deals directly with consumer demand, which can be hard to forecast due to the ever-evolving interests and tastes of the populace.
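The Fel’dman principle mentioned above, that consumer-sector output depends on the capital accumulated in the producer-goods sector, can be illustrated with a toy two-sector growth sketch. The parameter names and values below are illustrative assumptions chosen for exposition, not Fel’dman’s original formulation.

```python
def simulate(years, s, beta=0.25, k_p=100.0, k_c=100.0):
    """Consumer-sector capital after `years`, given reinvestment share `s`.

    A fraction `s` of producer-sector output is reinvested in producer
    capital; the remainder is diverted to consumer-sector capital.
    (All parameters are illustrative assumptions.)
    """
    for _ in range(years):
        output_p = beta * k_p       # producer-sector output from its capital
        k_p += s * output_p         # share plowed back into producer goods
        k_c += (1 - s) * output_p   # remainder builds the consumer sector
    return k_c

# Heavy reinvestment (austerity) depresses consumer capital early on,
# but overtakes the lighter policy once the producer base compounds.
print(simulate(5, s=0.9) < simulate(5, s=0.3),
      simulate(40, s=0.9) > simulate(40, s=0.3))
```

The crossover point of the two policies is one way to frame the β€œhow long should austerity last” question posed above.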

2.C. Manufacturing Systems

Contained within the industries described above, there are multiple systems or layouts that a manufacturing shop can assume. These layouts are often product specific: the production of a specific good will naturally favor a certain layout. The reason for this is that the production of one product can be drastically different from that of another. There are major differences between the manufacturing processes of a toaster and those of a semiconductor. As such, different layouts and methodologies are required to make a toaster versus a semiconductor.

2.C.1. Job Shop

One such layout is the job shop, which produces β€œsmall batches of a large number of different products, most of which require a different set or sequence of processing steps.”5 Job shops tend to be small in scale when compared to the sprawling complexes of mass production. The most common example of a job shop environment is a machine shop, which typically contains a series of machines capable of great flexibility in their work, meaning that a single machine can perform a great number of machining operations either through different configurations or by swapping a particular tool head. In many instances, a machine shop or job shop must be ready to act, as there is great variability in the specificity that a customer may demand. As a result, the job shop must be capable of performing the high level of custom operations that customers can require. An advantage of the job shop layout is that it is inherently resistant to failures. Given that each machine is capable of multiple operations, the breakdown of an individual tool head is insulated, because in all likelihood there is another machine that can perform the same operation.

5 Chase, R., Jacobs, F., Aquilano, N. (2006). Operations Management for Competitive Advantage. McGraw-Hill/Irwin. New York, New York.


This concept can even be applied to the various machines: while regular maintenance should be carried out to avoid breakdowns, the failure of one machine is not catastrophic, as jobs can be routed to another similar machine for processing. The strength of the job shop layout lies in its flexibility to produce a great variety of parts, as well as in the number of unique operations that the shop as a whole can perform. Scheduling in a job shop is difficult because the underlying problem is NP-hard, meaning that no polynomial-time algorithm for it is known, and none exists unless P = NP. For problems that are not NP-hard, by contrast, β€œany desired precision can be obtained in a number of iterations that is bounded above by a polynomial.”6 Job shops produce many different parts, and scheduling becomes intrinsically difficult as the number of piece parts grows large.
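The combinatorial growth behind this difficulty can be made concrete with a crude count: if each of m machines can sequence its n operations in any order, there are up to (n!)^m candidate schedules. This back-of-envelope bound ignores routing and precedence constraints, so it is an illustration of scale rather than an exact enumeration.

```python
import math

def schedule_count(n_jobs, n_machines):
    """Crude upper bound on the number of schedules in a job shop:
    each of the n_machines machines can sequence its n_jobs operations
    in n_jobs! ways (routing constraints ignored)."""
    return math.factorial(n_jobs) ** n_machines

# Even a tiny shop explodes: 5 jobs on 5 machines already admits
# roughly 2.5 * 10^10 candidate sequences.
print(schedule_count(5, 5))   # 24883200000
```

This explosion is why exact methods stall on large instances and heuristics such as simulated annealing become attractive.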

2.C.2. Mass Production

When the reader thinks of manufacturing, a mass production layout probably comes to mind. The reason is that β€œmanufacturing in the United States developed along such distinct lines in the first half of the nineteenth century that English observers in the 1850’s referred to an β€˜American System’ of manufactures.”7 It is not so much that the automobile industry of the late 1910s created mass production; rather, it seized upon the idea and incorporated several new concepts, chief among them the assembly line that allowed Ford to mass-produce his Model T. Being an entirely American idea, the concepts of mass production first appeared in rifle production at the Springfield and Harpers Ferry arsenals. The genius of Ford and the other automobile barons was the use of interchangeable parts coupled with machine tools and the assembly line. In many regards, the blossoming of mass production occurred under Ford who, through the standardization of parts, labor, and assembly, introduced an entirely new system of manufacturing to the world. It was Steinbeck who wrote in Cannery Row, β€œTwo generations of Americans knew more about the Ford coil than the clitoris, about the planetary system of gears than the solar system of stars.”8

The term mass production is a bit misleading. Superficially, one might assume that it is characterized simply by the production of mass quantities of goods. That is not entirely true; the notion of mass production is that higher levels of production are cheaper, per unit, than lower levels. The implication is that producing a high volume of goods is cheaper per unit than producing a lower volume, primarily because various costs that aggregate in setup, labor, and holding are spread over more units.
The three central tenets that delineate mass production are β€œdivision of labor, interchangeable parts, and mechanization.”9 Division of labor implies that a worker has a single task to perform rather than a multitude. That single operation will require the same tool, and the execution of the operation will be the same for all parts that come down the assembly line. This reduces the amount of time the worker spends on an individual part, because everything needed for the operation is immediately at hand. The worker’s single focus should be performing a specific job, not looking for tools or fetching parts from a previous production step. Interchangeable parts defines the tolerances

6 Vanderbei, R. (2014). Linear Programming: Foundations and Extensions. Springer, International Series in Operations Research and Management Science. Fourth Edition. New York, New York. 7 Hounshell, D. (1984). From the American System to Mass Production, 1800 – 1932. The Johns Hopkins University Press. Baltimore, Maryland. 8 Steinbeck, J. (1945). Cannery Row. Viking Press. New York, New York. 9 Duguay, C., Landry, S., Pasin, F. (1997). From Mass Production to Flexible/Agile Production. Ecole des Hautes Etudes Commerciales de Montreal, Quebec, Canada. International Journal of Operations & Production Management. Volume 17. Issue 12. 1183 – 1195.


that a particular part must fall within so that it is nearly identical to any part produced today, tomorrow, or a year from now. The reasons for wanting interchangeable parts are twofold: on one hand, interchangeability streamlines the processing operations a worker must perform; on the other, it allows for easier and quicker repair of the individual part. Interchangeable parts eliminate the need for custom operations by standardizing part tolerances and production. Mechanization, such as the assembly line and improvements in machine tools, serves to enhance the other aspects of mass production. For instance, the assembly line allows a worker to perform more operations because parts are delivered directly via the line, while improvements in machine tools allow for tighter tolerance control.

2.C.3. Batch Production

Presenting itself as a middle alternative between mass production and the job shop, batch production focuses on producing a small number of items in batches. The most familiar example of batch production is a bakery that produces a number of confections. The idea of producing a batch is what sets batch production apart from mass production and the job shop. Machines used in batch production are similar to those used in the job shop; namely, a machine can perform a number of different operations and produce a number of different products. The volume of output is noticeably lower when compared to mass production, but this allows the producer to follow demand more closely, permitting more immediate action regarding future production. In this vein, batch production is much more agile and flexible than mass production, which can struggle to accept and foresee change.

Drawbacks of batch production are inherent to the production of batches. While a defect in a job shop setting is isolated to a single part, a defect in batch production could affect the entire batch. After a batch has finished processing, the machine must undergo a setup process in preparation for another type of product. The machine might undergo multiple setups throughout a day, which can be costly. As far as output is concerned, batch production is able to outpace the job shop, whose focus on producing unique or customizable parts places limits on production volume; when compared to mass production, though, batch production is unable to compete. Turning to flexibility and the ability to adapt quickly to new demand patterns, batch production is favored over mass production. Yet, though it possesses a number of machines capable of producing different products, batch production cannot maintain the high flexibility characteristic of job shops.

The advent of computing and its proliferation throughout society has enabled intensive research and analysis in the area of manufacturing systems.
The ability to model an entire facility in order to gauge output based on input demand is now feasible. Furthermore, breakdowns and subsequent repairs can be introduced so that the simulation model represents a real-world scenario. Simulation modeling is a powerful tool that enables manufacturers to better plan and understand their inputs as well as their outputs.

2.D. The Many Facets of Simulation Modeling

Simulation modeling is a large and varied subject whose general aim is to model a physical system in order to generalize about processes, to better understand complex systems, or to collect and present relevant statistics. In attempting a solution, there are many different avenues a modeler might take; many of these techniques are problem dependent. The following examines the more common


methods and techniques related to simulation modeling, including Systems Dynamic Modeling, Discrete Event Simulation, Agent Based Modeling, and Dynamic Systems.

2.D.1. Systems Dynamic Modeling

A relative newcomer to the world of simulation modeling, systems dynamic modeling enhances β€œunderstanding of an identified problem,”10 as well as β€œimproving comprehension of the structure of the problem and the relationship present between relevant variables.”11 A typical systems dynamic model is composed of a series of interconnecting stocks and flows, whereby the movement of information or other goods is regulated by the intensity of the flow into or out of a stock. Specifically, the flows are regulated by a series of parameters that are often interconnected with other flows or parameters, thus creating feedback loops. These loops have causal effects on each other, meaning that an increase in one stock could result in a decrease in another. Compared to other techniques, systems dynamic modeling is broader: individual arrival or departure times are not collected. The only real concern is the amount present within each stock and how those amounts are interconnected with the system as a whole. A benefit of systems dynamic modeling is the ability to adjust a specific parameter or variable and observe the effect that change has on the entire system. In reality, a systems dynamic model is a series of differential equations that directly determine the intensity of the flow into or out of a stock, as well as the interactions between parameters and variables. Brailsford also points out that this reliance on differential equations creates a certain degree of inflexibility with respect to the addition of probability distribution functions or empirical data. Systems dynamic modeling excels at identifying relationships between variables and parameters in a complex system. Applications of systems dynamic modeling are wide and varied. This is because the use of feedback loops is visible in our society today.
Feedback loops will only continue to grow more prevalent as the world continues to knit itself together into a global economy. Common fields include engineering, economics, biology, business administration, and public policy. Many systems dynamic models focus on staffing levels, correctly modeling attrition rates to ensure a properly staffed facility or even military.12
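The staffing example just mentioned can be sketched numerically: a single workforce stock with a hiring inflow and an attrition outflow, where the inflow is a goal-seeking (negative) feedback on the gap between current and target staffing. The parameter names and values are illustrative assumptions, not drawn from a specific published model.

```python
def workforce_trajectory(steps, target=100.0, stock=60.0,
                         hire_rate=0.2, attrition=0.05, dt=1.0):
    """Euler-integrate one stock with a goal-seeking negative feedback loop:
    the hiring inflow closes a fraction of the staffing gap each period,
    while the outflow is proportional attrition. Returns the stock history."""
    history = [stock]
    for _ in range(steps):
        inflow = hire_rate * (target - stock)   # feedback on the staffing gap
        outflow = attrition * stock             # attrition drains the stock
        stock += dt * (inflow - outflow)
        history.append(stock)
    return history

traj = workforce_trajectory(60)
# The negative loop drives the stock toward its equilibrium,
# target * hire_rate / (hire_rate + attrition) = 80 with these values.
print(round(traj[-1], 2))
```

Note that attrition keeps the steady state below the nominal target, the kind of structural insight stock-and-flow models are built to expose.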

2.D.2. Discrete Event Simulation

The most common and widely used modeling method, discrete event simulation tracks items through a series of processes, queues, and resources. Discrete event simulation is item oriented, meaning that an individual item has worth and there is utility in tracking it throughout the entire system. Items can be thought of as people, manufactured parts, or documents.13

10 Brailsford, S., Hilton, N. (2001). A Comparison of Discrete Event Simulation and System Dynamics for Modeling Healthcare Systems. Proceedings of the 26th meeting of the ORAHS Working Group. 18-39. 11 Brailsford et al., β€œA Comparison of Discrete Event Simulation and System Dynamics for Modeling Healthcare Systems.” 12 Thomas, D., Kwinn, B., McGinnis, M., Bowman, B., Entner, M. (1997). The U.S. Army Enlisted Personnel System: A System Dynamics Approach. Computational Cybernetics and Simulation. 1997 IEEE International Conference. 1263 – 1267. 13 Nelson, B. (2013). Foundations and Methods of Stochastic Simulation: A First Course. Springer. New York, New York.


These items have worth because the modeler is interested in observing how the item interacts with the system. A common example might be a couple arriving at a restaurant; the couple can be represented by an item. The amount of time the item waits in a queue translates into the amount of time the couple waits for a table. Dinner itself would be the process, and in order to be served, the item must seize a resource, in this instance a waiter. Once dinner has been completed, the item releases the resource, which is then free to serve other patrons in the restaurant, and the couple exits the system. A number of statistics might be of interest to the modeler: the amount of time spent waiting for a table, or the total number of patrons served that night. The discrete event model can also answer several questions about the system. Are there enough tables to serve every item that enters the system? Are there enough resources to serve all of the tables and prevent items from reneging? Should more tables or waiters be added? Once these questions have been answered, possible solutions can be addressed. Brailsford asserts, β€œthe aim of these models is often comparison of scenarios, prediction, or optimizing specified performance criteria.”14 Often, multiple replications are necessary to gather relevant information concerning the system or process at hand. Concerning the restaurant above, a modeler may have three different staffing levels, each replicated ten times, in order to estimate the appropriate number of waiters. The underlying principle of replication is randomness, which can easily be implemented in the model through a probability distribution function. Randomness between scenarios is an important aspect of discrete event modeling, allowing managers and customers to make more informed decisions. There are, of course, several drawbacks associated with discrete event modeling.
A large amount of data is required to develop the model. Interarrival times for items entering the system must be known, and in most instances are approximated by some probability distribution function; process times can be estimated likewise. These estimates, though, can be very costly, as they often require large amounts of capital and time to obtain. Discrete event simulations also tend to be smaller in scope: they analyze individual restaurants or specific manufacturing lines. The number of items that the system processes also tends to be smaller, as large quantities of items can make the model too large. A balance has to be struck: a minimum number of items is required to support statistical observations, while too many items obscure the effect an individual item exerts on the system. As the reader might have guessed, interpretation is key for any model. Being able to observe the physical system correctly and replicate it within the software will often be the difference between good and bad models. Data robustness is also key: populating any model with poorly collected data leads to erroneous results from which managers might draw disastrous conclusions about the process.15
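The restaurant example can be sketched as a minimal discrete event simulation with a single waiter and a first-come, first-served queue. The exponential distributions, their means, and the function name below are illustrative assumptions rather than fitted data; a real study would estimate these from observations, as discussed above.

```python
import random

def average_wait(n_parties, seed=42, mean_interarrival=10.0, mean_service=8.0):
    """Single-waiter FIFO restaurant: average time a party spends waiting
    for service, with exponential interarrival and service times."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_parties):
        t += rng.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)
    waiter_free = 0.0   # time at which the single waiter next becomes idle
    total_wait = 0.0
    for arrive in arrivals:
        start = max(arrive, waiter_free)   # wait if the waiter is still busy
        total_wait += start - arrive
        waiter_free = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_parties

# Faster service (a crude stand-in for adding staff) shortens average waits.
print(average_wait(1000, mean_service=4.0) <= average_wait(1000, mean_service=8.0))
```

Running such a model under several staffing scenarios, each with multiple replications (different seeds), is exactly the comparison-of-scenarios use that Brailsford describes.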

14 Brailsford, S., Hilton, N. (2001). A Comparison of Discrete Event Simulation and System Dynamics for Modeling Healthcare Systems. Proceedings of the 26th meeting of the ORAHS Working Group. 18-39. 15 Nelson, B. (2013). Foundations and Methods of Stochastic Simulation: A First Course. Springer. New York, New York


Being the most prevalent simulation modeling technique, discrete event simulation has applications in virtually every sector of industry. So long as queues are present within a process, it can be modeled as a discrete event system. Manufacturing benefits greatly from discrete event simulation, as do processes that involve some sort of interaction with customers. Certain industries use discrete event simulation more than others: it is very popular in the automotive and semiconductor industries, while less prevalent, though emerging, in industries such as pharmaceuticals, textiles, and paper.16

2.D.3. Agent Based Modeling

Particularly within the realm of complexity science, agent based modeling has grown in popularity owing to its ability to model nonlinear relationships and systems. Rather than focusing on individual discrete decisions, agent based modeling focuses on big-picture consequences: consequences that play out on a global level, or at least affect a group or multitude of people.17 Agent behavior is often loosely defined by a small set of rules, and a state chart can be used to help clarify and define interactions between agents. It is best to define the rules governing agent interactions loosely, the purpose being to observe some sort of emergent behavior. Namely, the small, direct or indirect, effects that agents have on each other help to shape a larger collective behavior. This collective behavior cannot be observed at the agent level, which is why it is called emergent. In other words, β€œThe interactions between parts are nonlinear; so the overall behavior cannot be obtained by summing the behaviors of the isolated components.”18 Subtle, and sometimes unexpected, interactions can lead to surprisingly complex collective behaviors. Agent based models are not necessarily in equilibrium; rather, their state is best measured by robustness.19 How agents respond to their environment can often be viewed as a predictive tool regarding their ability to maintain functionality. If a new rule is introduced that severely limits the agents’ ability to interact with each other, a completely new collective behavior may be observed simply due to the lack of communication. Agent based modeling can be applied, for example, to a flock of birds whose behavior and shape can be considered emergent: the decisions that individual birds make shape the formation in which the group operates collectively. Agents operate in an environment that itself has an effect on their interactions.
An agent can also be in a specific state that has direct consequences affecting its interactions. These states can be governed by the state chart, but could also be inherent to the agent’s spatial location within the experiment space. The key point is that agents, through interactions with other agents, create a collective behavior that is hard to predict or even quantify without taking the actions of individuals into account. Behaviors are defined for agents, not for the global system; the collective behavior emerges as the agents interact with each other.

16 Semini, M., Fauske, H., Strandhagen, J. (2006). Applications of Discrete-Event Simulation to Support Manufacturing Logistics Decision-Making: A Survey. Proceedings of the 2006 Winter Simulation Conference. 1946 – 1953. 17 Macal, C., North, M. (2009). Tutorial on Agent Based Modeling and Simulation. Proceedings from the 2009 Winter Simulation Conference. 86 – 98. 18 Scholl, H. (2001). Agent-based and System Dynamics Modeling: A Call for Cross Study and Joint Research. Proceedings of the 34th Hawaii International Conference on System Sciences. 19 Newberry, D. (2012). The Robustness of Agent-Based Models for Electricity Wholesale Markets. Cambridge University. CWPE; 1228. 1 – 16.


Agent based modeling has many applications in the social sciences as well as in ecology and biology. A popular example is the β€œbird-like” or β€œboid” model developed by Craig Reynolds.20 A boid is an object that travels with others, typically in a herd, flock, or school. Boids follow three simple rules: they try to avoid collisions with each other, they attempt to match velocities with nearby members, and they gravitate towards the perceived center of their unit. These rules, along with others that govern the laws of physics, allow boids to fly and wheel about as a single entity composed of individuals following a small set of rules. Within manufacturing enterprises, it is common for authors to propose β€œagent based systems for the objective of shop floor control in batch manufacturing environments.”21 Agent based modeling is also prevalent in other industrial sectors, such as the production of automobile parts and hot steel rolling, among other examples.22 These illustrations lend themselves to applications within heavy industry.
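The three boid rules can be sketched in a few dozen lines. The rule weights and the assumption that every boid sees every other boid are simplifications for exposition, not Reynolds’s exact formulation, which restricts each rule to a local neighborhood.

```python
import random

def step(boids, w_sep=0.05, w_align=0.05, w_coh=0.01):
    """One update of the three boid rules. Each boid is a dict holding a
    2-D position `p` and velocity `v`; every boid sees every other boid."""
    n = len(boids)
    cx = sum(b["p"][0] for b in boids) / n   # flock center of mass
    cy = sum(b["p"][1] for b in boids) / n
    vx = sum(b["v"][0] for b in boids) / n   # flock mean velocity
    vy = sum(b["v"][1] for b in boids) / n
    for b in boids:
        sep_x = sum(b["p"][0] - o["p"][0] for o in boids if o is not b)
        sep_y = sum(b["p"][1] - o["p"][1] for o in boids if o is not b)
        b["v"][0] += w_sep * sep_x / (n - 1)       # 1. steer away from others
        b["v"][1] += w_sep * sep_y / (n - 1)
        b["v"][0] += w_align * (vx - b["v"][0])    # 2. match the mean velocity
        b["v"][1] += w_align * (vy - b["v"][1])
        b["v"][0] += w_coh * (cx - b["p"][0])      # 3. drift toward the center
        b["v"][1] += w_coh * (cy - b["p"][1])
    for b in boids:
        b["p"][0] += b["v"][0]
        b["p"][1] += b["v"][1]

rng = random.Random(0)
flock = [{"p": [rng.uniform(-5, 5), rng.uniform(-5, 5)],
          "v": [rng.uniform(-1, 1), rng.uniform(-1, 1)]} for _ in range(20)]
for _ in range(50):
    step(flock)
```

No flock-level behavior is coded anywhere; whatever wheeling or clustering appears after repeated steps is emergent from the per-agent rules, which is precisely the point made above.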

2.D.4. Dynamic Systems

Although they share similar names, dynamic systems and system dynamics represent two distinct forms of simulation modeling and should be thought of as separate techniques available to researchers. In many ways, though, a dynamic systems model comprises important aspects of both agent based and systems dynamic modeling. The mathematical framework of differential equations that drives systems dynamic models is also the driver of a dynamic systems model, and differential equations likewise provide the impetus for the emergence that was present in agent based modeling. Self-organization, coupled with feedback loops, produces the same sort of collective behavior that is visible at a global level but not at a micro level. Stable states are patterns of interaction that exist between elements; they often describe the sort of push-pull relationship that leads to the development of global collective behavior, even though the interactions themselves occur at the micro level. As the model continues to run and the full extent of the state space is explored, attractors stabilize and become predictable. The state space can be represented graphically by a series of wells: the deeper an attractor, or well, the more likely the system is to enter, and remain within, that state. Over time, the system may enter and leave a series of wells, and as time progresses, the long-run probabilities of remaining in a certain state become better clarified. Wells that are shallow suggest that the system spends little time there and moves away quickly once that state has been reached; conversely, wells that are deep are very difficult to leave and have higher chances of absorption. In the long run, attractors are excellent tools that allow researchers to make predictions regarding future states.
Feedback loops, on the other hand, drive the self-organization of the collective behavior and often either accelerate or stabilize the process. A positive feedback loop β€œis the means by which interactions among system elements amplify particular variations, leading to the

20 Reynolds, C. (1987). Flocks, Herd, and Schools: A Distributed Behavioral Model. Computer Graphics. Volume 21. Number 4. 25-34. 21 Cantamessa, M. (1997). Agent-Based Modeling and Management of Manufacturing Systems. Computers in Industry. Vol. 34. Issue 2. 173 – 186. 22 Monostori, L., Vancza, J., Kumara, S.R.T. (2006). Agent-Based Systems for Manufacturing. CIRP Annals. Volume 55. Issue 2. 697 – 720.


emergence of novelty”23 that manifests itself in the collective behavior. Negative feedback loops have stabilizing tendencies that reduce deviations; it is through this process that systems are absorbed by attractors. That is not to say that positive and negative feedback loops are at odds with each other. Rather, environmental changes can reshape the process through the introduction of a positive feedback loop, while a negative feedback loop will maintain and stabilize the new state. It is no surprise that dynamic systems modeling has many related fields, mostly in the social sciences, particularly psychology and sociology; applications are also prevalent in physics, engineering, and mathematics. The link between dynamic systems and system dynamics allows them to be considered related under the umbrella of simulation modeling, the principal difference being the use of stocks and flows in system dynamics. MATLAB is a popular software package for modeling dynamic systems.24
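The attractor and well behavior described above can be made concrete with a toy dynamic systems model. The sketch below is a hypothetical illustration of my own choosing, not drawn from this thesis: it Euler-integrates the gradient flow dx/dt = -V'(x) for the double-well potential V(x) = (xΒ² - 1)Β², so trajectories starting on either side of the unstable midpoint settle into the nearer well at x = Β±1.

```python
def settle(x0, dt=0.01, steps=2000):
    """Euler-integrate dx/dt = -V'(x) for the double-well V(x) = (x^2 - 1)^2."""
    x = x0
    for _ in range(steps):
        x -= dt * 4 * x * (x**2 - 1)   # -V'(x) = -4x(x^2 - 1)
    return x

# Trajectories are absorbed by the attractor (well) on their side of x = 0.
right = settle(0.5)    # settles near the well at x = +1
left = settle(-0.3)    # settles near the well at x = -1
```

The unstable equilibrium at x = 0 plays the role of a shallow state that the system leaves immediately under any perturbation, while the two wells act as the deep, absorbing states discussed above.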

2.D.5. Modeling Summary

Though not explicitly within the realm of optimization, for myriad reasons, simulation modeling can arguably be considered within the realm of feasible solutions. The entire premise of simulation modeling is to estimate, or to gain a better understanding of, some value of interest, whether it is output or utilization; this value has intrinsic worth that makes it worthy of estimation. It is important to remember that the outputs of simulation models are estimates and should be viewed with a certain level of skepticism, as a model can only approximate real-world scenarios so well. If simulation modeling allows a user to better understand a variety of situations and to select the best or most feasible one, then it deserves consideration alongside optimization and mathematical modeling, which is discussed next.

2.E. Mathematical Optimization

2.E.1. General Mathematical Optimization Concepts

Optimal solutions exist everywhere in society. There exists a shortest path from Betty’s house to the Bingo hall, but whether we are aware of that shortest path, or seize it, is part of the human condition. In going from Betty’s house to the Bingo hall there are an infinite number of avenues and paths available. Some may be more attractive than others, but so long as the end point is the Bingo hall, the route is a feasible solution. There are likewise an infinite number of infeasible solutions, routes that end at the supermarket, the pet store, or anywhere that is not the Bingo hall; the point is that an infeasible solution does not arrive at the Bingo hall. Which solution is optimal depends on what exactly is dictated by the objective function and the optimization problem in general. Take distance: there exists a shortest route from Betty’s house to the Bingo hall that is optimal with respect to distance. On the other hand, we could optimize with respect to time, in which case the route requiring the least travel time is optimal. Either way, an optimal solution represents the best solution to the problem at hand. The problem that was loosely

23 Granic, I., Patterson, G. (2006). Towards a Comprehensive Model of Antisocial Development: A Dynamic Systems Approach. Psychological Review. Vol. 113, No. 1. 101-131. 24 Borshchev, A., Filippov, A. (2004). From System Dynamics and Discrete Event to Practical Agent Based Modeling: Reasons, Techniques, Tools. The 22nd International Conference of the System Dynamics Society.


discussed above is known as the shortest path problem, which, conveniently enough, has a solution methodology in Dijkstra’s algorithm.25

There are two general types of optimization problems: constrained and unconstrained. The key is to realize that constraints act on variables. Many problems place constraints upon their variables; these can take the form of budgetary limits, binary variables indicating where factories are to be placed, or the number of available resources at a node. Unconstrained optimization places no constraints upon its variables. That is not to say the variables are entirely free; rather, β€œconstraints are replaced by penalization terms added to objective function that have the effect of discouraging constraint violations.”26 Variables in unconstrained problems are also real-valued and continuous, whereas constrained optimization problems are often discrete in nature, with variables that can be integer, binary, or non-negative real numbers.27

In constrained optimization the constraints form a feasible region; the optimal solution and all feasible solutions exist within this region. The simplest case may be a small two-dimensional region created by only a few linear constraints, while a more complex feasible region might combine nonlinear constraints in three-dimensional space. Intuitively, the two-dimensional linear model is much easier to solve than the three-dimensional nonlinear case. The goal of the optimization problem is to identify the optimal solution. This is reflected by the objective function and is achieved by either maximizing or minimizing it: if the objective function resembles some sort of cost function, minimization is appropriate; the opposite holds if it represents some sort of profit function.
Graphically, the search for an optimal solution manifests itself in identifying either a global or local maximum, in the case of maximization problems, or a global or local minimum, in the case of minimization problems. For problems containing only linear constraints a local solution will also be a global solution. The same cannot be said for problems with nonlinear constraints, which, more often than not, settle for a local solution owing to the complexity of their feasible regions.
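The shortest path example above can be sketched in a few lines. The following is a standard Dijkstra's algorithm implementation over a small invented road network; the node names and distances are hypothetical, chosen only to echo the Betty's-house illustration.

```python
import heapq

def dijkstra(graph, source, target):
    # graph: dict mapping node -> list of (neighbor, edge_weight) pairs
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d                       # shortest distance found
        if d > dist.get(node, float("inf")):
            continue                       # stale queue entry, skip it
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")                    # target unreachable: no feasible route

# Betty's house to the Bingo hall, with hypothetical distances.
roads = {
    "house":       [("supermarket", 4), ("pet_store", 2)],
    "supermarket": [("bingo_hall", 5)],
    "pet_store":   [("bingo_hall", 8)],
    "bingo_hall":  [],
}
shortest = dijkstra(roads, "house", "bingo_hall")   # 9, via the supermarket
```

Every route that ends at `bingo_hall` is feasible; the algorithm returns the one that is optimal with respect to total distance.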

2.E.2. Linear Programming

Linear programming deals with problems that rely on linear constraints. The term is often misleading, since in modern vocabulary β€œprogramming” typically refers to computer software. The phrase β€œlinear programming” was coined in the early 1940s, long before the notion of modern computing. When one hears β€œprogramming” used in the context of an optimization problem, one should connect it with the use and analysis of algorithms aimed at solving that problem. Leonid Kantorovich is often credited with the mathematical formulation of linear programming, for which he received the Stalin Prize in 1949 and the Nobel Memorial Prize in Economics in 1975. George Dantzig is known for the creation of the simplex algorithm, which is used to solve linear programs.

25 Dijkstra, E. W. (1959). A Note on Two Problems in Connexion with Graphs. Numerische Mathematik. 269-271. 26 Nocedal, J., Wright, S. (2006). Numerical Optimization. Springer. New York, New York. 27 Bazaraa, M., Sherali, H., Shetty, C. M. (2006). Nonlinear Programming: Theory and Algorithms. Wiley. Hoboken, New Jersey.


Linear programs take the form:

min 𝑐𝑇π‘₯ (1.a)

𝑠. 𝑑. 𝐴π‘₯ = 𝑏, π‘₯ β‰₯ 0 (1.b)

Eq. 1: Standard Form of a Linear Program28

Eq. 1 displays the standard form of a linear program. (1.a) is the objective function being minimized, where $c$ is a vector of predetermined coefficients, such as costs, and $x$ is a vector of variables whose values are initially unknown. The objective function can be either minimized or maximized depending on the context of the problem. The constraints are presented in (1.b): $A$ is an $(m \times n)$ matrix of rank $m$ and $b$ is a column vector. Eq. 1 shows only the standard form; it is possible for the constraints to be less-than-or-equal or greater-than-or-equal to $b$, or some combination thereof. Let us refer to the above model as the primal problem. For every primal linear program, there exists a dual linear program of the form:

$\max\ b^T u$ (2.a)

$\text{s.t.}\ A^T u \le c$ (2.b)

Eq. 2: Dual of the Standard Linear Program

The dual program of a standard linear program is displayed in Eq. 2. Since the standard linear program was being minimized, the dual, whose objective function is (2.a), is maximized. Here $b$ is the column vector taken directly from the standard problem and $u$ is a column vector of variables whose values are initially unknown. (2.b) represents the constraints of the dual program, where the matrix $A$ is taken from the primal problem and $c$ is the vector of primal objective coefficients. Every linear program has an associated dual problem, and β€œit turns out that every feasible solution for one of these two linear programs gives a bound on the optimal objective function value for the other.”29 This indicates an intimate relationship between the primal and the dual problem. Within duality theory there are two important concepts used to describe the objective function values of the primal and dual. The first is weak duality, whereby the optimal value of the dual problem ($d^*$) is less than or equal to the optimal value of the primal problem ($p^*$).

$d^* \le p^*$ (3.a)

Eq. 3: Weak Duality

The concept of weak duality is presented in Eq. 3. Its utility manifests itself in many ways. If the optimal value of the dual problem is unbounded ($+\infty$), then the primal problem has no feasible solutions. More generally, a feasible point for the dual problem provides a lower bound on the optimal value of the primal

28 Forst, W., Hoffman, D. (2010). Optimization – Theory and Practice. Springer, Undergraduate Texts in Mathematics and Technology. New York, New York. 29 Vanderbei, R. (2014). Linear Programming: Foundations and Extensions. Springer, International Series in Operations Research and Management Science. Fourth Edition New York, New York.


problem. Conversely, a feasible point for the primal problem provides an upper bound on the optimal value of the dual problem. Under strong duality the optimal value of the dual problem is equal to the optimal value of the primal problem, implying that no duality gap exists. Eq. 4 presents the condition for strong duality.

π‘‘βˆ— = π‘βˆ— (4.a)

Eq. 4: Strong Duality
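Weak and strong duality can be checked numerically. The sketch below, which assumes the SciPy library and an arbitrary toy problem of my own choosing, solves a primal in standard form and its dual with `scipy.optimize.linprog` and compares the optimal values.

```python
from scipy.optimize import linprog

# Toy primal in standard form: min c^T x  s.t.  Ax = b, x >= 0
c = [1.0, 2.0]
A = [[1.0, 1.0]]
b = [1.0]
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None), (0, None)])

# Dual: max b^T u  s.t.  A^T u <= c, u free.  linprog minimizes, so negate b.
dual = linprog([-1.0], A_ub=[[1.0], [1.0]], b_ub=c, bounds=[(None, None)])

p_star = primal.fun        # optimal primal value (here 1.0, at x = (1, 0))
d_star = -dual.fun         # optimal dual value (undo the sign flip)
# Weak duality guarantees d_star <= p_star; strong duality makes them equal.
```

For this small problem both programs attain the same optimal value, illustrating the absence of a duality gap.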

Forst and Hoffman present a β€œRevised Simplex Method”30 whose algorithm is relatively easy to follow. The β€œrevised” method should be considered an improvement on Dantzig’s original simplex method as it is more computationally efficient. The algorithm is as follows:

Let $J = (j_1, \dots, j_m)$ be a feasible basis. (5.a)

Step 1

Choose $K = (k_1, \dots, k_{n-m})$, the indices of the non-basic variables. (5.b)

Compute $\bar{b} = \bar{x}_J$ with $A_J \bar{x}_J = b$. (5.c)

Step 2

$\bar{c}^T = c_K^T - c_J^T A_J^{-1} A_K$ (5.d)

If $\bar{c} \ge 0$, stop: a minimum has been found. (5.e)

Step 3

Otherwise there exists an index $s = k_\sigma$ with $\bar{c}_\sigma < 0$. The index $s$ enters the basis. (5.f)

Step 4

Compute the solution $\bar{a}_s = (\bar{a}_{1,s}, \dots, \bar{a}_{m,s})^T$ of $A_J \bar{a}_s = a_s$. (5.g)

If $\bar{a}_s \le 0$, stop: the objective function is unbounded. (5.h)

Otherwise determine $\varrho \in \{1, \dots, m\}$ with $\min_{\bar{a}_{\mu,s} > 0} \bar{b}_\mu / \bar{a}_{\mu,s} = \bar{b}_\varrho / \bar{a}_{\varrho,s}$. (5.i)

$r = j_\varrho$. The index $r$ leaves the basis. (5.j)

$J' = (j_1, \dots, j_{\varrho-1}, s, j_{\varrho+1}, \dots, j_m)$ (5.k)

Update $J = J'$ and return to Step 1. (5.l)

Eq. 5: The Revised Simplex Algorithm

The revised simplex method is presented in Eq. 5. To begin, we define an index set $J$ and call it a basis if $A_J$ is invertible (5.a). The corresponding variables $x_{j_1}, \dots, x_{j_m} \in x_J$ are denoted basic variables, while all other variables are non-basic. The basis $J$ is a feasible basis if all components of the corresponding basic point are nonnegative. Proceeding to Step 1, we select the set $K$ of indices of the non-basic variables $x_K$ (5.b); it follows that $Ax = A_J x_J + A_K x_K$. (5.c) declares

30 Forst, W., Hoffman, D. (2010). Optimization – Theory and Practice. Springer, Undergraduate Texts in Mathematics and Technology. New York, New York.


that there exists a unique basic point, denoted $\bar{b}$, where $A_J x_J = b$ and $x_K = 0$. Once the unique basic point has been determined, calculate the reduced cost coefficients $\bar{c}^T$ for that basic point (5.d) and determine whether all of the entries are greater than or equal to zero (5.e). If they are, an optimal solution has been obtained; if not, proceed to (5.f). Select an index $\sigma$ such that $\bar{c}_\sigma < 0$; the index $s = k_\sigma$ will enter the basis. In (5.g) compute $\bar{a}_s$ by solving $A_J \bar{a}_s = a_s$. If all of the values in $\bar{a}_s$ are less than or equal to zero, conclude that the problem is unbounded and stop (5.h). Evaluate $\min_{\bar{a}_{\mu,s} > 0} \bar{b}_\mu / \bar{a}_{\mu,s} = \bar{b}_\varrho / \bar{a}_{\varrho,s}$ and determine the index $\varrho$ (5.i). The entry $j_\varrho$ will leave the basis (5.j). Replace $j_\varrho$ with $s$ (5.k), and declare $J = J'$ (5.l).

In essence, the simplex algorithm starts at some initial vertex and proceeds along a series of edges of the feasible region until an optimal vertex is reached. At each step the algorithm checks whether the current solution optimizes the objective function. If it does, the algorithm terminates; otherwise another step is taken toward an optimal solution. Because the feasible region has finitely many vertices, the algorithm is finite provided cycling is avoided. If the objective function is unbounded over the feasible region, the algorithm terminates declaring the problem unbounded.
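The iteration in Eq. 5 can be sketched in code. The NumPy implementation below is a hedged, minimal reading of the revised simplex steps, not the authors' implementation: it uses dense linear solves instead of a maintained basis inverse, naive entering/leaving choices, and no anti-cycling rule, and it assumes the problem arrives in standard form with an initial feasible basis J. The example problem at the bottom is my own toy instance.

```python
import numpy as np

def revised_simplex(c, A, b, J):
    """Minimize c @ x subject to A @ x = b, x >= 0, from feasible basis J."""
    c, A, b = np.asarray(c, float), np.asarray(A, float), np.asarray(b, float)
    m, n = A.shape
    J = list(J)
    while True:
        K = [j for j in range(n) if j not in J]          # non-basic indices
        A_J = A[:, J]
        x_J = np.linalg.solve(A_J, b)                    # basic point (5.c)
        y = np.linalg.solve(A_J.T, c[J])                 # simplex multipliers
        red = c[K] - A[:, K].T @ y                       # reduced costs (5.d)
        if np.all(red >= -1e-9):                         # optimality test (5.e)
            x = np.zeros(n)
            x[J] = x_J
            return x, float(c @ x)
        s = K[int(np.argmin(red))]                       # entering index (5.f)
        a_s = np.linalg.solve(A_J, A[:, s])              # pivot column (5.g)
        if np.all(a_s <= 1e-9):
            raise ValueError("objective function is unbounded")  # (5.h)
        guarded = np.where(a_s > 1e-9, a_s, 1.0)
        ratios = np.where(a_s > 1e-9, x_J / guarded, np.inf)
        J[int(np.argmin(ratios))] = s                    # basis swap (5.i)-(5.l)

# max 3x1 + 2x2 s.t. x1 + x2 <= 4, x1 <= 2, in standard form with slacks:
c = [-3.0, -2.0, 0.0, 0.0]
A = [[1.0, 1.0, 1.0, 0.0],
     [1.0, 0.0, 0.0, 1.0]]
b = [4.0, 2.0]
x, value = revised_simplex(c, A, b, J=[2, 3])   # slack columns as initial basis
```

Here the slack columns supply the initial feasible basis, mirroring the textbook convention; the iteration walks vertex to vertex until all reduced costs are nonnegative.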

2.E.3. Nonlinear Programming

Nonlinear programming is not entirely different from linear programming; the principal difference is the move from linear to nonlinear equations. Superficially, one would imagine that nonlinear and linear programs could be treated as very similar classes of problems, and indeed the formulation process is much the same: a nonlinear program has an objective function that is either minimized or maximized, subject to a series of nonlinear constraints. Nonlinear programs can also be posed in unconstrained form. The major difference between linear and nonlinear programming is the solution technique.31 Within linear programming the simplex method is a very popular way to reach a solution; for nonlinear programming the solution technique is highly dependent on the type of problem, and there accordingly exists a wide variety of techniques. Although a great variety of nonlinear functions can be used to construct a nonlinear program, solution techniques generally hinge on convexity, that is, on whether or not the problem is convex. The following paragraphs discuss some popular methods used to solve nonlinear programs.

Unconstrained optimization is a popular approach. As mentioned above, an unconstrained nonlinear program has an objective function that is either maximized or minimized but lacks formal restrictions. It is even possible to take a constrained problem and convert β€œit into a sequence of unconstrained problems via Lagrangian multipliers or via penalty barrier functions.”32 Another technique is the method of feasible directions, whereby a direction vector is chosen and optimization is performed along that direction in the hope that successive moves will lead to an optimal solution.

31 Vanderbei, R. (2014). Linear Programming: Foundations and Extensions. Springer, International Series in Operations Research and Management Science. Fourth Edition. New York, New York. 32 Bazaraa, M., Sherali, H., Shetty, C. M. (2006). Nonlinear Programming: Theory and Algorithms. Wiley. Hoboken, New Jersey.


It comes as no surprise that duality also exists for nonlinear programming. The use of Lagrange multipliers in nonlinear programming is β€œsimilar to the role played by the Lagrange multipliers of classical calculus where a function of several variables is to be minimized subject to equality constraints.”33 Under strong duality, indicating the lack of a duality gap, an optimal saddle point can be achieved; strong duality holding at the saddle point implies that an optimal solution has been reached.

Penalty and barrier functions are two further methods for solving a given nonlinear program. The penalty method adds a penalty term to the objective function that penalizes it whenever a constraint violation occurs. The generated iterates are infeasible for the original problem, but their limit is optimal for the original, un-penalized problem; this exterior penalty function method produces solutions outside the feasible region that approach the optimal solution. The difference between the penalty and barrier methods is the location of the generated solutions: for the penalty method they are infeasible points that approach the optimal solution from outside, while for the barrier method they lie within the feasible region and are hence feasible, the barrier serving to prevent any move outside the feasible region. The barrier approach has also been referred to as the interior point method. Auslender34 presents an algorithm that unifies the penalty and barrier methods while also providing convergence theorems for both the primal and dual problems.

The final class of solution techniques considered here is the feasible direction methods.
The feasible region is explored by β€œsearching along directions which reduce the objective function while maintaining feasibility.”35 Specifically, the method of Zoutendijk chooses a feasible direction by solving a linear programming subproblem; once a direction has been chosen, optimization is performed along that direction until the best point along it is found. Another technique is subgradient optimization, whereby a prescribed step size is used at each iteration and convergence to an optimal solution is guaranteed under suitable conditions.
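The exterior penalty idea can be made concrete with a minimal sketch under assumptions of my own choosing: the toy problem min x₁² + xβ‚‚Β² subject to x₁ + xβ‚‚ = 1, a quadratic penalty term, and plain gradient descent as the unconstrained solver. Each penalized iterate violates the constraint slightly, but as the penalty parameter ΞΌ grows the iterates approach the constrained optimum (0.5, 0.5).

```python
import numpy as np

def penalized_minimum(mu, iters=5000):
    """Minimize x1^2 + x2^2 + mu*(x1 + x2 - 1)^2 by gradient descent."""
    x = np.zeros(2)
    lr = 1.0 / (2.0 + 4.0 * mu)      # safe step size for this quadratic
    for _ in range(iters):
        # gradient of the penalized (unconstrained) objective
        grad = 2.0 * x + 2.0 * mu * (x.sum() - 1.0)
        x = x - lr * grad
    return x

# The unconstrained solution is infeasible (x1 + x2 < 1), but the violation
# shrinks as mu increases, approaching the constrained optimum (0.5, 0.5).
x = penalized_minimum(100.0)
violation = abs(x.sum() - 1.0)       # small residual constraint violation
```

For this problem the penalized minimizer has the closed form x₁ = xβ‚‚ = ΞΌ/(1 + 2ΞΌ), which makes the exterior-approach behavior easy to verify by hand.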

2.F. Metaheuristics

Metaheuristics are tools whose aim is to provide a quality solution, though not necessarily an optimal one, in a relatively short timeframe. They present themselves as alternatives to more traditional methods such as mathematical programming, which may take considerable time to converge on an optimal solution. The genius behind metaheuristics is their great flexibility in application: in many instances one metaheuristic technique could be swapped for another and still obtain relatively similar results. Popular metaheuristics include Simulated Annealing, Tabu Search, Genetic Algorithms, and Ant Colony Optimization. Each specializes in certain domains, but all can be applied to most general types of problems.

2.F.1. Simulated Annealing

Annealing is a metallurgical process that heats a metal to its transition phase; once there, the temperature is slowly and evenly decreased until the metal reaches its ground state. The purpose is that, if the cooling process is properly controlled, the particles will

33 Mangasarian, O. (1994). Nonlinear Programming. Society for Industrial and Applied Mathematics. Philadelphia, Pennsylvania. 34 Auslender, A. (1999). Penalty and Barrier Methods: A Unified Framework. Society for Industrial and Applied Mathematics. Volume 10. No. 1. 211 – 230. 35 Smith, E. A., Carpenter, W. C. (1978). A Feasible Direction Method Based on Zoutendijk’s Procedure P1. Engineering Optimization. Volume 8. 109 – 112.


rearrange themselves such that defects within the crystal lattice are minimized. In doing so the particles assume the lowest possible energy state, and the resulting metal is more workable due to increased ductility and reduced hardness. Simulated annealing follows a very similar process, where the ground state becomes a global solution and the lowest energy state represents a minimal cost. The method is an adaptation of a Monte Carlo method, the Metropolis algorithm, credited to Nicholas Metropolis at Los Alamos National Laboratory in Los Alamos, New Mexico.

The general idea is that initial parameters are set and a current solution is determined. A move is then made to a neighboring solution, and the two are compared: if the neighboring solution is better than the current solution, it is declared the current solution and the old solution is set aside. If the neighboring solution is not better, there is still a probability that it will be accepted as the new current solution, a probability governed by the temperature. The idea behind occasionally accepting a worse solution is to move away from local solutions in favor of a global solution. Before beginning the algorithm, the user should set values for the initial and final temperatures. Over time, as in the physical annealing process, the temperature decreases and the algorithm begins to home in on a global solution. The chain length determines the number of iterations the process remains at the current temperature; repeated iterations at the same temperature help the algorithm map out the surrounding neighboring solutions.

Lastly, the cooling schedule determines how the temperature decreases after a chain length has been completed; often a fraction of the current temperature is kept in beginning the next chain of runs. The concept of accepting a worse solution is fundamental to simulated annealing. If only improving solutions were accepted, the algorithm would mirror the quenching process, in which the metal is rapidly cooled; this quenching approach will likely lead to a local rather than a global solution, indicating that the algorithm was trapped in some local region and unable to escape. The purpose of the temperature-controlled acceptance probability is precisely to move the search away from local solutions in favor of a global one.

Applications of the simulated annealing algorithm focus on combinatorial optimization problems that occur commonly in β€œcomputer-aided design of integrated circuits, image processing, code design, and neural network theory.”36 The strength of simulated annealing lies in problems that contain a large number of variables. The algorithm must be given sufficient time to β€œcool down” so that it can explore a wide variety of neighboring solutions on the way to a global solution. The choice of initial and final temperatures, as well as chain lengths and cooling schedules, requires great forethought, as these will be the driving forces behind the search process.
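The mechanics above condense into a short generic sketch. The example below is my own toy illustration, not the thesis's model: a one-dimensional objective, a geometric cooling schedule, a fixed chain length, and the Metropolis acceptance rule exp(-Ξ”/T) for worse moves.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=10.0, t_final=1e-3,
                        chain_length=100, cooling=0.9, seed=0):
    rng = random.Random(seed)
    current = best = x0
    t = t0
    while t > t_final:
        for _ in range(chain_length):        # stay at this temperature for the chain
            cand = neighbor(current, rng)
            delta = f(cand) - f(current)
            # always accept improving moves; accept worse ones with prob exp(-delta/t)
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                current = cand
            if f(current) < f(best):
                best = current
        t *= cooling                         # geometric cooling schedule
    return best

# Toy run: minimize (x - 3)^2 starting far from the minimum.
best = simulated_annealing(lambda x: (x - 3.0) ** 2,
                           x0=20.0,
                           neighbor=lambda x, rng: x + rng.uniform(-1.0, 1.0))
```

At high temperatures almost any move is accepted, echoing the molten phase; as the temperature falls toward `t_final` the acceptance of worse moves vanishes and the search behaves like a local descent.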

36 Van Laarhoven, P.J.M., Aarts, E.H.L. (1987). Simulated Annealing: Theory and Applications. D. Reidel Publishing Company. Dordrecht, Holland.


2.F.2. Tabu Search

In order to better approach this metaheuristic, it is best to realize that β€œtabu” is a variant form of taboo, which by definition denotes an act of prohibition. Keeping that in mind, tabu search is similar to simulated annealing. In truth, all metaheuristics aim to provide a β€œgood enough” or adequate solution; ideally the solution would be optimal, but this is not always the case owing to the complexity of the problem. The whole idea of using a metaheuristic is to obtain a good feasible solution.

To begin, tabu search scans the neighborhood of possible solutions and selects an improving, or best available, solution. The move is then placed on the tabu list, which prevents the algorithm from returning to that particular solution. The length of the tabu list is controlled by the memory space; only a few moves are recorded because a limited amount of memory is generally allocated to the list. The most recent move is placed on the list while the oldest move is removed. The purpose of the tabu list is to prevent cycling and to allow the algorithm to escape local solutions. As mentioned, the solution chosen is not necessarily the best overall: the algorithm prefers an improving solution, but if none exists it will choose a worse one in the hope that the solution will eventually improve. The algorithm terminates if the optimal solution is found or if there are no available moves to make, meaning that all possible moves are on the tabu list and it has no option other than to end. The user can also set programmatic limits: if the solution does not improve after x moves, or if the maximum number of iterations or moves is reached, the algorithm terminates. Glover and other authors suggest that β€œinstead of a single tabu list it may be more convenient to use several lists.”37 One possible example is the use of short-term and long-term memory.

The short-term memory functions very much like the method described above: the algorithm performs a neighborhood search, selects the best possible move, places it on the tabu list, and continues to cycle for a set number of iterations. As the name indicates, the short-term memory has a limited amount of space, so the tabu list is relatively small. Whenever the iteration limit is reached, the user can change the number of iterations or the length of the tabu list, choose a new starting point and restart the short-term process, move to the long-term memory space, or stop with the best solution found. The long-term memory uses past searches to bias future searches, more efficiently reducing the search space and delivering a feasible solution. Long-term memory can be used as a learning tool for the algorithm: it can place entire sections of the neighborhood under prohibition or focus the search on a certain region. Tabu search is a very flexible metaheuristic and has been used to solve quadratic assignment problems as well as the traveling salesman problem. Its flexibility lies in the use of its memory spaces; more complex problems may require multiple memory techniques in order to find an adequate solution, and determining how best to use the memory spaces, the number of iterations, and the length of the tabu list is crucial to success.
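A minimal short-term-memory tabu search can be sketched as follows. The toy objective and integer neighborhood are my own invention; real implementations would add aspiration criteria and the long-term memory discussed above.

```python
from collections import deque

def tabu_search(f, x0, neighbors, tabu_size=5, max_iters=100):
    current = best = x0
    tabu = deque([x0], maxlen=tabu_size)     # recency-based tabu list
    for _ in range(max_iters):
        # admissible moves are neighbors not currently on the tabu list
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break                            # every move is tabu: terminate
        current = min(candidates, key=f)     # best admissible move, even if worse
        tabu.append(current)                 # newest move on, oldest falls off
        if f(current) < f(best):
            best = current
    return best

# Toy run: walk the integers toward the minimum of (x - 7)^2.
best = tabu_search(lambda x: (x - 7) ** 2, x0=0,
                   neighbors=lambda x: [x - 1, x + 1])
```

Because recently visited solutions are forbidden, the search cannot cycle between a local solution and its neighbor; it is forced to keep moving, here straight toward the minimum.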

37 Glover, F., Taillard, E., de Werra, D. (1993). A User’s Guide to Tabu Search. Annals of Operations Research. Volume 41, Issue 1. 1-28.


2.F.3. Genetic Algorithm

The genetic algorithm is a metaheuristic that draws its inspiration from the field of genetics; common terms include gene and chromosome. Each iteration of the algorithm produces a new generation whose fitness is then evaluated. Individuals with a high fitness level generally offer good approximations, or are considered robust, with respect to the optimization problem at hand, and the more fit an individual, the higher the chance that it will be selected to help form the next generation. New generations are formed by a series of operators similar to processes seen in nature. One operator is crossover, whereby children are formed by splicing together two parents: a cutting location is determined for each parent and the two parts are swapped to form children. Another operator is mutation, whereby a single individual produces a child: a random element of the parent is chosen and exchanged for another random element, creating a new child.

To begin, the algorithm encodes the solution space, assigning a binary string, or chromosome, to each candidate solution. For large solution spaces the chromosomes will have many binary entries; each binary entry is referred to as a gene, and a chromosome will typically contain a multitude of genes. A random set of solutions is chosen to form the initial, or first, generation, and an initial fitness value is determined for each individual; this fitness value β€œreflects the quality of the solution represented by the individual.”38 Next, individuals are randomly chosen for reproduction, with characteristics that render an individual more fit given a higher chance of selection. Children are generated using the operator techniques described above, less fit or older individuals are removed from the population, and a new generation is formed from those that were not eliminated together with any prospective children.

The fitness of each individual is determined and the algorithm continues until it reaches an iteration threshold set by the user; in each iteration children are created and older or less fit individuals are removed from the population. As is the case for all metaheuristics, set-up is key. Here the most important aspect is the initial encoding, which allows the solution space to be mapped. With regard to job scheduling, β€œencoding consists of defining a string of symbols for each machine (one symbol for each operation). These symbols describe the sequence of operations.”39 It is important to keep in mind that, for a schedule containing multiple machines, a chromosome may be produced that represents an infeasible solution: during a crossover operation, a chromosome may be created that contains routing information for only one machine in a multiple-machine environment. One way to prevent this is to create different chromosome sections and to allow reproduction only within a given section. If a new generation contains infeasible schedules, it is best to control or limit the reproduction process so that only feasible chromosomes are produced. Multiple conditions can terminate the algorithm: the maximum number of iterations or generations has been reached, the fitness of each generation has peaked or plateaued at a certain value, or an optimal or near-optimal solution has been found. Genetic algorithms are mostly applied to scheduling problems as well as neural network problems. Within the scheduling domain, there exist

38 Della Croce, F., Tadei, R., Volta, G. (1995). A Genetic Algorithm for the Job Shop Problem. Computers & Operations Research. Volume 22, No. 1. 15-24. 39 Della Croce et al., β€œA Genetic Algorithm for the Job Shop Problem.”

Page 31: The Pennsylvania State University PLANNING THE MEANS OF

22

scheduling algorithms that use genetic algorithm to schedule in a Flexible Job-Shop environment.40 The uniqueness of this algorithm is its ability to generate initial populations using different strategies. This also applies to the selection of individuals for reproduction in the next generation. Another example of a neural network problem using genetic algorithm presents itself in the form of a multi-objective problem for thermal comfort and energy consumption in residual houses.41 The paper uses a simulation-based artificial neural network to specify building behaviors. The optimization is achieved using a multiobjective genetic algorithm,
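The operators described above, fitness-proportional selection, one-point crossover, and single-gene mutation, can be sketched on a toy "one-max" problem (all parameter values are illustrative and not drawn from this thesis):

```python
import random

random.seed(42)
LENGTH, POP_SIZE, GENERATIONS = 20, 30, 80

def fitness(chrom):
    # Toy "one-max" objective: the more 1-genes, the fitter the individual.
    return sum(chrom)

def select(pop):
    # Fitness-proportional (roulette-wheel) selection; +1 avoids zero weights.
    return random.choices(pop, weights=[fitness(c) + 1 for c in pop], k=1)[0]

def crossover(p1, p2):
    # One cutting location; splice the two parents to form a child.
    x = random.randint(1, LENGTH - 1)
    return p1[:x] + p2[x:]

def mutate(chrom):
    # Exchange one random gene for its complement.
    i = random.randrange(LENGTH)
    return chrom[:i] + [1 - chrom[i]] + chrom[i + 1:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    children = [mutate(crossover(select(population), select(population)))
                for _ in range(POP_SIZE)]
    # Keep the fittest of parents + children; less fit individuals are removed.
    population = sorted(population + children, key=fitness, reverse=True)[:POP_SIZE]

best = max(population, key=fitness)
print(fitness(best))
```

Because the fittest individuals always survive into the next generation, the best fitness observed is non-decreasing across iterations.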

2.F.4. Ant Colony Optimization

As with some of the other biological processes discussed above, ant colony optimization is firmly rooted in the behavior of an ant colony. Whenever an ant colony searches for food it sends ants out in all directions. If an ant finds food it will return to the nest with the collected food, and during the return trip it will leave a pheromone trail that guides other ants to the food source. During their foraging expeditions ants gravitate towards these pheromone trails because they naturally lead to food. Sources that are closer to the nest will see more ant traffic than locations farther from the colony; as a result, more ants will tend towards the closer source of food. The ant colony optimization algorithm acts in a very similar manner to the biological process described above. To begin, a series of artificial ants are deployed into the solution space, each starting with an empty solution. Components of the solution are added iteratively, with backtracking not allowed; that is, an artificial ant may not return to an earlier partial solution. At each construction step "an ant extends its current partial solution by choosing one feasible solution component and adding it to its partial solution"42 set. The important aspect is that feasibility must be maintained at every step. Solutions that are unable to maintain feasibility are either discarded or declared infeasible complete solutions. The construction phase is over once a set of complete solutions has been formed; they are then biased by the pheromone update phase. Pheromones are added to solutions with good components so that they become more attractive for other artificial ants to follow. The idea is to increase the attraction of a good component in a particular solution. Over time, pheromone levels will naturally decrease for all solutions, no matter how robust.
In many ways this evaporation prevents the algorithm from converging to a local solution. On the practical side, "it implements a useful form of forgetting, favoring the exploration of new areas of the search space."43 Once the pheromone phase has been completed a new iteration begins, with artificial ants being released from the colony. These ants may randomly search the solution space or follow a pheromone trail laid down by the previous group.
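The construction, deposit, and evaporation mechanics just described can be sketched on a toy instance with two competing paths to a single food source (path lengths, evaporation rate, and colony size are invented for illustration):

```python
import random

random.seed(7)
lengths = {"short": 1.0, "long": 2.0}     # path lengths: shorter means better
pheromone = {"short": 1.0, "long": 1.0}   # equal attractiveness at the start
RHO, ANTS, ITERATIONS = 0.1, 10, 100      # evaporation rate, colony size, iterations

for _ in range(ITERATIONS):
    # Construction phase: each ant picks a path with pheromone-proportional probability.
    total = pheromone["short"] + pheromone["long"]
    choices = random.choices(["short", "long"],
                             weights=[pheromone[p] / total for p in ("short", "long")],
                             k=ANTS)
    # Pheromone update phase: evaporation first, then deposits that are
    # inversely proportional to path length (better solutions deposit more).
    for p in pheromone:
        pheromone[p] *= (1 - RHO)
    for p in choices:
        pheromone[p] += 1.0 / lengths[p]

print(pheromone)
```

Because the shorter path receives larger deposits while evaporation erodes both trails equally, the colony's traffic concentrates on the shorter path over time.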

40 Pezzella, F., Morganti, G., Ciaschetti, G. (2008). A Genetic Algorithm for the Flexible Job-Shop Scheduling Problem. Computers & Operations Research. Volume 35. 3202-3212. 41 Magnier, L., Haghighat, F. (2010). Multiobjective Optimization of Building Design using TRNSYS Simulations, Genetic Algorithm, and Artificial Neural Network. Building and Environment. Volume 45, Issue 3. 739-746. 42 Dorigo, M., Stützle, T. (2009). Ant Colony Optimization: Overview and Recent Advances. Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle. Technical Report No. TR/IRIDIA/2009-013. 43 Dorigo et al., "Ant Colony Optimization: Overview and Recent Advances."


The uniqueness of ant colony optimization is that the pheromone levels allow the algorithm and its artificial ants to be influenced by previous searches and experiences. The program terminates whenever an optimal or near-optimal solution is found. Like other metaheuristic techniques, ant colony optimization has many applications. These applications concentrate on optimization areas such as scheduling, particularly in job-shop or resource-constrained settings. Assignment and routing problems, such as the quadratic assignment problem and the traveling salesman problem, are also areas of interest.


3. METHODOLOGY

To solve the resource allocation problem occurring within a job shop environment we propose a model based simulation approach. Systems dynamic modeling was chosen due to the novelty of its application to a problem that includes multiple decision variables, including workers, which are ultimately beholden to a random demand figure that is known in advance. Simulated annealing was selected because it lends itself naturally to a problem of this nature. Integer programming acts as a benchmark against which to compare solution quality.

3.A. Model Based

Creating a simulation model of a job shop is trivial; using the simulation model to analyze the job shop is where the rubber hits the road. The Literature Review discussed several popular simulation techniques employed by modelers: systems dynamic modeling, discrete event modeling, agent based modeling, and dynamic systems modeling. In selecting the appropriate technique, we must first understand what exactly we require from the model, since each technique will deliver different outputs and information for the job shop. Our goal is to find an appropriate combination of workers and work stations such that the output they achieve is greater than the random demand for that period. In short, we wish to minimize the difference between output achieved and demand. The following sections discuss why a systems dynamic model was chosen to represent the job shop environment over the other techniques.

3.A.1. Dynamic Systems

The use of dynamic systems often pertains to the development of controls on a time dependent and varying process. The state space is a series of wells that each have an attractor value. Visually, the attractor value is described as the depth of the well: the deeper the well, the stronger the attraction, and conversely for shallower wells. Over time, attractors stabilize and a control strategy can be developed to better understand output and minimize error between the achieved and desired output. The main problem with dynamic systems, with respect to minimizing the deviation between output and demand, is that we wish to maintain a stable workforce. For practical purposes, it does not make sense to have a large number of workers one week and a small number the next. We want a long-term configuration of workers and work stations that achieves a minimal deviation between output and demand. In many regards, our control strategy is already defined: find a stable number of workers and work stations that minimizes the deviation. Hence, dynamic systems have no real role in helping to determine the right configuration in our job shop environment.

3.A.2. Discrete Event Simulation

Discrete event simulation is a very versatile modeling technique, with many applications in queuing theory, supply chain engineering, and general manufacturing. Discrete event simulation excels at providing useful statistics regarding the throughput of an item, the cycle time of an item, or the utilization of a resource. Managers can determine which operation represents a bottleneck and address the problem. Estimates of the raw material needed to produce a finished product can also be obtained, as well as the feasibility of an economic order quantity policy. A drawback of discrete event simulation is the large amount of data needed to construct the model. Generally, stochasticity is derived from random probability distributions for operations. A triangular distribution may provide the min, mode, and max of a pressing operation, while a Weibull distribution may provide the time to failure for that hydraulic press. The rate at which items arrive can also be expressed as a random variable; such is the idea of a Poisson arrival process. Probability distributions are generally drawn from empirical or observational data. In many instances, this collection is time consuming and can be subject to bias. That being said, discrete event simulation is the most widely used form of simulation modeling, and there exists a great proliferation of commercial software whose aim is to handle the execution of discrete event simulations. The general belief is that discrete event simulation "is considered to be more suitable for modeling problems at an operational/tactical level."44 Operational levels tend to focus on the day-to-day management of the operation, such as the number of parts produced in a day or the cycle time of an operation for that week. Tactical levels are a midterm concern, with their period being in the multi-month or quarterly range: will we be able to meet our production quota for this quarter? Tactical-level decisions do not extend past a year. The requirements for our problem are strategic requirements; namely, the number of workers, shifts, and workstations needed. We want to set a level for the number of workers and hire up to that amount, while ensuring that losses due to attrition are replaced so as to keep the number of workers relatively constant over a multi-year planning horizon. A systems dynamic model "is considered appropriate when taking a 'distant' perspective (meaning strategic) where events are seen in the form of patterns of behavior and system structures."45 Staffing policies are often considered strategic decisions, which is why a systems dynamic simulation was chosen over a discrete event model.
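The stochastic ingredients named above map directly onto the standard library's random module. The sketch below simulates a single hypothetical press with Poisson arrivals and triangular service times; all rates and bounds are invented for illustration, and a Weibull time to failure could be drawn analogously with random.weibullvariate:

```python
import random

random.seed(1)
ARRIVAL_RATE = 1 / 12.0   # Poisson process: exponential interarrival times (mean 12 min)
N_JOBS = 2000

clock, free_at = 0.0, 0.0
cycle_times = []
for _ in range(N_JOBS):
    clock += random.expovariate(ARRIVAL_RATE)       # next arrival
    start = max(clock, free_at)                     # wait if the press is busy
    service = random.triangular(5.0, 15.0, 9.0)     # min, max, mode of the pressing operation
    free_at = start + service
    cycle_times.append(free_at - clock)             # waiting time + processing time

print(sum(cycle_times) / N_JOBS)                    # average cycle time statistic
```

Even this minimal model yields the kind of operational statistic (average cycle time) that the text attributes to discrete event simulation; the point is that every number rests on distributions fitted from observed data.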

3.A.3. Agent Based Modeling

The agent is the most important component of agent based modeling. Able to interact with an environment and take independent actions based on a series of predefined rules, the agent represents an active participant in the simulation. Agents represent discrete individuals such as people or boids. The behavior of the agent is governed by a set of rules that ultimately influence its ability to make decisions within the environment. There are multiple agents, and they can interact with each other in the environment. Oftentimes, the goal of agent based modeling is to allow interaction between agents in order to observe a larger phenomenon that is only seen at the macro level. The decisions that agents make can also be influenced by past experiences; thus, the agent has the ability to learn and adapt its behaviors within the environment. The scope of agent based modeling lies in attempting to understand complex, macro phenomena and how they are influenced and created by agent, or micro, interactions. Often there is no direct relationship between the micro and macro elements other than agent interaction. By defining a series of rules for agents to adhere to and placing them in an environment, the modeler makes inferences about the macro effect, attempting to understand how a series of small interactions might lead to a larger happening. It should come as no surprise that agent based modeling is not used for combinatorial resource allocation problems. The process of determining a number of workers and work stations so as to minimize the difference between output and demand is beyond the scope of agent based modeling. Primarily, agent based simulation "can be used to study how patterns and organizations emerge and to discover how system-level structures form that are not apparent from the behavior of individual agents."46 There is room for agent based modeling within extensions of this problem, however. Namely, how would a new shop layout affect demand? Alternatively, how might a new allocation of shifts affect productivity? These are speculative scenarios, but the reader should obtain a better understanding of how to apply agent based modeling to differing scenarios that might arise within a manufacturing/job shop environment. Interactions between agents can produce large and complex phenomena that are not well understood through traditional analytical methods, even though real-world observation suggests that these phenomena exist.

44 Tako, A., Robinson, S. (2012). The Application of Discrete Event Simulation and System Dynamics in the Logistics and Supply Chain Context. Decision Support Systems. Volume 52, Issue 4. 802-815. 45 Tako et al., "The Application of Discrete Event Simulation and System Dynamics in the Logistics and Supply Chain Context."

3.A.4. Systems Dynamic Modeling

The overall goal of simulation modeling is to develop a better understanding of systems that are complex and difficult to analyze using standard analytical tools. Computer aided simulation modeling is useful because it allows researchers to cut through the minutiae and quickly obtain necessary information regarding a system. The downfall of simulation modeling is that rigorous effort must be applied to ensure that the model actually represents the physical system and that the computer executes the model in a logical fashion. All simulation modeling techniques have their specialties: dynamic systems modeling focuses on the development of an optimal control policy, discrete event simulation focuses on operational/tactical decisions, and agent based modeling focuses on emergent behavior. Systems dynamic modeling is described in the following manner:

"The SD methodology, which is adopted in this research, is a modeling and simulation technique specifically designed for long-term, chronic, dynamic management problems. It focuses on understanding how the physical processes, information flows, and managerial policies interact so as to create the dynamics of the variables of interest. The totality of the relationships between these components defines the "structure" of the system. Hence, it is said that the 'structure' of the system, operating over time, generates its "dynamic behavior patterns." It is most crucial in SD that the model structure provides a valid description of the real processes. The typical purpose of a SD study is to understand how and why the dynamics of concern are generated and then search for policies to further improve the system performance. Policies refer to the long-term, macro-level decision rules used by upper management."47

Past research into systems dynamic modeling has used it as a tool to estimate the number of workers or bodies necessary for a system. One such application looked at U.S. Army enlisted personnel and used systems dynamic modeling to understand the impact of policies and their stability on manning requirements.48 Another example of systems dynamic modeling as staffing decision support focused on staff attrition in software development organizations.49 The methodology described above is unique in that, although the number of workers is the quantity determined, those workers ultimately affect the output that the shop achieves. Naturally, workers need work stations to perform work, but given a number of shifts, the entirety of work stations may not be used, due to some economic reason to have more workers on the first shift versus the second shift. When planning the number of work stations, one would want to plan for the maximum number of workers on a shift; adjusting work stations by shift makes no practical sense. Not only are we solving for the number of workers that the process requires, but the output that these collective workers achieve must also be larger than the random demand, and it must represent a minimized deviation between output and demand. Hence, our approach is novel in that it is the first to use systems dynamic modeling to estimate the number of workers and then extend that estimate directly to the output they are able to achieve in a period.

46 Macal, C., North, M. (2009). Tutorial on Agent Based Modeling and Simulation. Proceedings of the 2009 Winter Simulation Conference. 47 Vlachos, D., Georgiadis, P., Iakovou, E. (2004). A System Dynamics Model for Dynamic Capacity Planning of Remanufacturing in Closed-Loop Supply Chains. Computers & Operations Research. Volume 34, Issue 2. 367-394. 48 Thomas, D., Kwinn, B., McGinnis, M., Bowman, B., Entner, M. (1997). The U.S. Army Enlisted Personnel System: A System Dynamics Approach. Computational Cybernetics and Simulation, 1997 IEEE International Conference. 1263-1267. 49 Collofello, J., Houston, D., Rus, I., Chauhan, A., Sycamore, D., Smith-Daniels, D. (1998). A System Dynamics Software Process Simulator for Staffing Policies Decision Support. Proceedings of the Thirty-First Hawaii International Conference on System Sciences. 103-111.
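The staffing logic described here, a workforce stock with a hiring inflow that chases a target level and an attrition outflow, can be sketched as a minimal stock-and-flow model integrated with Euler's method. The target, attrition rate, and adjustment time below are illustrative assumptions, not the calibrated values used later in this thesis:

```python
TARGET = 100.0        # desired workforce level (illustrative)
ATTRITION = 0.05      # fraction of the workforce lost per week
ADJUST_TIME = 4.0     # weeks taken to close the hiring gap
DT, HORIZON = 0.25, 200.0

workers = 40.0        # initial value of the workforce stock
t = 0.0
while t < HORIZON:
    hiring = (TARGET - workers) / ADJUST_TIME   # inflow: hire toward the target
    attrition = ATTRITION * workers             # outflow: workers leaving
    workers += DT * (hiring - attrition)        # Euler integration of the stock
    t += DT

print(round(workers, 1))
```

The run settles at the structural equilibrium TARGET / (1 + ATTRITION * ADJUST_TIME), illustrating the SD claim that the system's structure, not any single decision, generates its long-run behavior pattern.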

3.B. Simulated Annealing Approach

The problem at hand is combinatorial: how best can a series of workers and work stations be chosen and assigned so that their output is greater than demand? The optimization portion of this problem requires that the difference between output and demand be as small as possible. In an ideal situation, the shop would be able to arrange workers and work stations so that output exactly matches demand. Due to various real world complications this is unlikely and, if it occurred, would be more coincidence than a result of actual planning. As such, we wish for output to be greater than demand, but require the difference between the two to be minimized. The goal is to allocate workers and work stations so that the difference between their output and demand is minimized, which will represent an optimal solution. Since this is a resource allocation problem, we must choose a metaheuristic that is appropriate to the given problem. All of the metaheuristics described in the literature review are able to handle combinatorial resource allocation problems, but some are better geared than others for this particular instance. While any of the metaheuristics could have been chosen, we ultimately chose simulated annealing. The next few sections discuss the reasons for and against the various metaheuristics.

3.B.1. The Genetic Algorithm

Genetic algorithm lends itself well to sequencing problems. The most common variant is the binary genetic algorithm, wherein the chromosomal values are binary entries. The most straightforward example of this is the knapsack problem. Imagine that one has a knapsack that has to be filled with items; the knapsack has a certain carrying capacity and each item has an intrinsic utility value. The goal is to fill the knapsack with items such that the capacity is not exceeded and the utility of all items in the knapsack is maximized. A binary chromosomal structure under genetic algorithm would indicate, for a specific solution, which items are to be placed in the bag and which are not. The binary chromosome would then be applied to a function to determine the capacity and utility. Thus, the binary entries are used to indicate the selection of items that are to be placed within the knapsack, and the utility and capacity are merely checks on the solution. A binary encoding of the chromosome is valid because all possible binary values are applicable to the problem: we can either place an item in the knapsack or not, so both binary values have equal representation under the knapsack problem. The same cannot be said whenever the individual chromosomal values are non-binary. As Reeves describes,

"In order to apply any GA to a sequencing problem, there is an obvious practical difficulty. In most 'traditional' GAs, the chromosomal representation is by means of a string of 0s and 1s, and the result of a genetic operator is still a valid chromosome. This is not the case if one uses a permutation to represent the solution of a problem, which is the natural representation for a sequencing problem."50

It is possible to use genetic algorithm with a non-binary encoding; the point of Reeves' paper is to prove exactly that. In selecting simulated annealing as our chosen metaheuristic, we wish to avoid the obvious difficulties associated with chromosomal representation of non-binary values. The problems only compound themselves as one moves to crossover and mutation. The possibility of having an infeasible value in a specific chromosomal location after a crossover forces some modifications to be made. Reeves explains a possible crossover method: choose "one crossover point X randomly," take "the pre-X section of the first parent," and lastly fill "up the chromosome by taking in order each 'legitimate' element from the second parent."51 The rationale behind this method is to preserve the absolute positions of values taken from the first parent as well as the relative positions of those taken from the second parent. Mutation is included with this method so as to prevent premature convergence. Initial solutions, under the general framework, are generated randomly; within the non-binary context, initial solutions are seeded from a good solution obtained via a constructive heuristic. It was also found that these seeded initial solutions converged more quickly than counterparts that were randomly generated. The problem with random generation is ensuring that each specific value within the chromosome is valid for that individual sub-problem representation. The problem of applying genetic algorithm to non-binary combinatorial problems is not insurmountable; in selecting simulated annealing, we simply wished to avoid these intricate and nuanced difficulties.
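The crossover Reeves describes can be sketched directly: keep the pre-X section of the first parent, then fill up the chromosome with each "legitimate" (not yet used) element of the second parent in order. The job permutations below are hypothetical data:

```python
import random

random.seed(3)

def reeves_crossover(parent1, parent2):
    """One-point crossover for permutations, per Reeves' description."""
    x = random.randint(1, len(parent1) - 1)   # crossover point X
    head = parent1[:x]                        # pre-X section of the first parent
    # Fill up the chromosome by taking, in order, each 'legitimate'
    # (non-duplicate) element from the second parent.
    tail = [gene for gene in parent2 if gene not in head]
    return head + tail

p1 = [3, 1, 4, 5, 2, 0]
p2 = [0, 2, 5, 1, 3, 4]
child = reeves_crossover(p1, p2)
print(child)
```

Unlike a naive splice, the child is always a valid permutation: absolute positions come from the first parent and relative order from the second.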

3.B.2. Tabu Search

The same problems that plague genetic algorithm also plague tabu search. In order to apply tabu search to a non-binary combinatorial problem we have to ensure that the values encoded in the vector are feasible for the specific sub-problem being solved. The underlying concept of tabu search is that "a neighborhood can be constructed to identify adjacent solutions that can be reached from any current solution. Pairwise exchanges (or swaps) are frequently used to define neighborhoods in permutation problems, identifying moves that lead from one solution to the next."52 The paper goes on to describe strategic oscillation within the tabu search framework, which is used to vary the search direction and thereby control the eligibility of certain moves. This framework allows a move to an infeasible solution under a penalty that varies with time. It is possible to solve non-binary combinatorial or sequencing problems under the framework of tabu search, but significant alterations are required in order to control the search so that feasible values are chosen for the sequence. When considering these hybrid alternatives, at what point do their specializations create a completely new class of metaheuristics? At what point does tabu search become a subroutine in another, more complex metaheuristic? Our purpose is to find and identify accessible metaheuristics that can solve a general combinatorial resource allocation problem.

50 Reeves, C. (1995). A Genetic Algorithm for Flowshop Sequencing. Computers & Operations Research. Volume 22, No. 1. 5-13. 51 Reeves, "A Genetic Algorithm for Flowshop Sequencing." 52 Glover, F., Kelly, J., Laguna, M. (1995). Genetic Algorithms and Tabu Search: Hybrids for Optimization. Computers & Operations Research. Volume 22, No. 1. 111-134.


Under the basic assumptions of the local search methods and memory structures that tabu search employs, it is possible to maintain feasibility for every encoded value if "candidate lists [are used] to screen moves for examination, both to reduce computational effort and to focus on more promising alternatives."53 The candidate lists themselves constrain the possible values that each individual element within the solution string can assume, and they also ensure feasibility for each sub-problem. Great care would need to be taken when constructing these candidate lists so as to ensure that the search region remains sufficiently large for an optimal solution to be found.
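The pairwise-exchange neighborhood, short-term memory, and aspiration ideas above can be sketched on a toy single-machine sequencing objective (minimize total weighted completion time); the job data, tabu tenure, and iteration count are invented:

```python
from itertools import combinations

proc = [4, 2, 7, 3, 5]          # processing times of hypothetical jobs 0-4
weight = [1, 3, 2, 4, 1]        # job weights

def cost(seq):
    # Total weighted completion time of a job sequence.
    t, total = 0, 0
    for j in seq:
        t += proc[j]
        total += weight[j] * t
    return total

current = [0, 1, 2, 3, 4]
best, tabu = current[:], []
TENURE, ITERATIONS = 4, 30

for _ in range(ITERATIONS):
    candidates = []
    for i, k in combinations(range(5), 2):      # pairwise-exchange neighborhood
        move = (min(current[i], current[k]), max(current[i], current[k]))
        neighbor = current[:]
        neighbor[i], neighbor[k] = neighbor[k], neighbor[i]
        # Aspiration: a tabu move is still allowed if it beats the best found.
        if move not in tabu or cost(neighbor) < cost(best):
            candidates.append((cost(neighbor), neighbor, move))
    c, neighbor, move = min(candidates)         # best admissible neighbor
    current = neighbor
    tabu = (tabu + [move])[-TENURE:]            # short-term memory of recent swaps
    if c < cost(best):
        best = current[:]

print(best, cost(best))
```

On an instance this small the search reaches the weighted-shortest-processing-time optimum quickly; the tabu list's role is to keep the search from immediately undoing a swap once it must move uphill.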

3.B.3. Ant Colony Optimization

As with all of the metaheuristics discussed in the literature review, ant colony optimization can be used to solve combinatorial problems; the goal is to identify which metaheuristic lends itself most naturally to combinatorial problems coupled with resource allocation. The class of problems that ant colony optimization can solve requires "an appropriate graph representation of the problem considered and a heuristic that guides the construction of feasible solutions."54 Many papers discuss the application of ant colony optimization to the traveling salesman problem and its extensions, and in many regards the focus on graph-oriented problems makes sense for this metaheuristic. Generally speaking, ants venture forth from a central nest in search of food. When an ant finds food it returns to the nest, laying down a pheromone trail as it does so. The closer a source of food is to the nest, the stronger the pheromone trail, and ants are generally influenced by pheromone trails. In doing this, ants are essentially solving for the shortest path between the nest and a food source. The caveat is that the closest food source is unknown, so in many regards this process represents a type of learning that will eventually converge on the closest source of food. There are two critical issues associated with ant colony optimization that can affect performance: the first is "finding an appropriate graph representation to ensure that the problem-specific constraints are satisfied" and the second is "finding a heuristic algorithm for the node transition rule to expedite the convergence rate."55 There is no doubt that there exists some graphical representation of the effect of worker and workstation combinations on output, but how is it to be constructed? This is done by having ants begin at a starting location and then proceed through a series of sequential layers.
At each node, the ant chooses an appropriate resource quantity, thus ensuring that the quantity selected will not prove infeasible for the sub-problem. Once the ant has traversed the sequential layers, it terminates at a sink node. At each layer, it is important to determine how an ant selects a node. This is done via the various pheromone trails that exist because previous ants have traversed the system. As an ant travels to a given node, it lays down a pheromone trail that can be used by other ants to find the source of food. Over time, pheromone trails evaporate; this effectively removes trails that have experienced little ant traffic and has a converging effect on the optimal solution.
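A sketch of that layered construction graph for resource allocation: each layer is one resource decision, each node a feasible quantity, and pheromone accumulates on (layer, quantity) pairs. The utility table, budget, and parameters are invented for illustration:

```python
import random

random.seed(11)
value = [[0, 4, 6, 7],    # utility of allocating 0-3 units to resource 0 (hypothetical)
         [0, 3, 5, 6],    # resource 1
         [0, 5, 8, 9]]    # resource 2
BUDGET, ANTS, ITERATIONS, RHO = 5, 8, 60, 0.1
tau = [[1.0] * 4 for _ in range(3)]       # pheromone on (layer, quantity) nodes

best_q, best_val = None, -1
for _ in range(ITERATIONS):
    for _ant in range(ANTS):
        remaining, q = BUDGET, []
        for layer in range(3):            # traverse the sequential layers
            # Only quantities that keep the remaining sub-problem feasible.
            feasible = list(range(min(remaining, 3) + 1))
            weights = [tau[layer][k] for k in feasible]
            k = random.choices(feasible, weights=weights, k=1)[0]
            q.append(k)
            remaining -= k
        val = sum(value[layer][k] for layer, k in enumerate(q))
        if val > best_val:
            best_q, best_val = q[:], val
    # Evaporation for every node, then reinforcement of the best solution.
    for layer in range(3):
        for k in range(4):
            tau[layer][k] *= (1 - RHO)
    for layer, k in enumerate(best_q):
        tau[layer][k] += best_val / 10.0

print(best_q, best_val)
```

Restricting each layer's choices to the remaining budget is exactly the feasibility-by-construction property the text emphasizes: no ant can ever complete an infeasible allocation.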

53 Glover et al., "Genetic Algorithms and Tabu Search: Hybrids for Optimization." 54 Dorigo, M., Gambardella, L. (1997). Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem. IEEE Transactions on Evolutionary Computation. Volume 1, No. 1. 53-66. 55 Yin, P., Wang, J. (2006). Ant Colony Optimization for the Nonlinear Resource Allocation Problem. Applied Mathematics and Computation. Volume 174, Issue 2. 1438-1453.


3.B.4. Simulated Annealing

The attractiveness of simulated annealing is that the variables need not be specified in binary terms, as can be the case in genetic algorithm and tabu search. It is much easier to apply simulated annealing to a non-binary combinatorial problem than the other metaheuristics. This viewpoint is shared by other authors in the literature, who conclude that "simulated annealing can be used to generate resource allocation alternatives."56 The referenced paper investigates land use as the resource to be allocated optimally; the application is the redevelopment of mining land in Spain with variable development costs and physical attributes. The problem that this thesis proposes to solve using simulated annealing involves finding the optimal combination of workers and workstations in order to minimize the difference between output and a fluctuating demand. Due to its ability to easily handle problems of a non-binary nature, simulated annealing was selected as the metaheuristic of choice to solve the combinatorial problem. That is not to say that the other metaheuristics mentioned above cannot handle or solve non-binary combinatorial problems. Our goal is not to issue demerits, but merely to observe that when dealing with a resource allocation problem that requires non-binary combinatorial variables, simulated annealing is a more natural fit than genetic algorithm, tabu search, and even ant colony optimization. When dealing with sequencing problems one would likely turn to genetic algorithm or tabu search, while swarm techniques, such as ant colony optimization, excel at finding optimal paths through a problem that can be reduced to a graph.
The following steps illustrate the ease of simulated annealing: "generate an initial feasible solution, be able to perturb a feasible solution and create another feasible solution, and be able to evaluate the objective function at these solutions."57 Generating the initial solution is trivial, given that the only guideline is that it should be feasible. Techniques that deliver a more robust initial solution may help the algorithm converge to an optimal solution in a shorter amount of time. Since the variables represent non-binary values, perturbation can be achieved by random selection from a probability density function. Once the objective function values for the candidate solutions are obtained, the incumbent solution is selected based on the simulated annealing criterion and the present temperature. This process must be repeated for a sufficiently large number of iterations in order to assure convergence to an optimal solution. The following should give the reader valuable insight into the specifics of our implementation of the simulated annealing algorithm; see Figure 1 below.
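The selection "based on the simulated annealing criterion and the present temperature" is typically the Metropolis rule; a minimal sketch (the function name and sample counts are ours, not the thesis's notation):

```python
import math
import random

def accept(delta, temperature, rng=random):
    """Accept an improving move always; accept a worsening move with
    probability exp(-delta / temperature) (Metropolis criterion)."""
    if delta <= 0:                 # candidate is no worse: always accept
        return True
    return rng.random() < math.exp(-delta / temperature)

rng = random.Random(0)
hot = sum(accept(1.0, 10.0, rng) for _ in range(10_000))   # high temperature
cold = sum(accept(1.0, 0.1, rng) for _ in range(10_000))   # low temperature
print(hot, cold)
```

At high temperature nearly all worsening moves are accepted (broad exploration); as the temperature falls the same worsening move is almost always rejected, which is what lets the search settle into a good region.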

56 Aerts, J., Heuvelink, G. (2002). Using Simulated Annealing for Resource Allocation. International Journal of Geographical Information Science. Volume 16, Number 5. 571-587. 57 Nahar, S., Sahni, S., Shragowitz, E. (1986). Simulated Annealing and Combinatorial Optimization. Proceedings of the 23rd ACM/IEEE Design Automation Conference. Las Vegas, Nevada. 293-299.


Figure 1: Simulated Annealing Flowsheet

To begin, per Figure 1, we initialize all decision variables and define constant values. An initial solution is generated and declared the incumbent solution. A neighboring solution is generated by perturbing the incumbent solution, and a solution is retained based on the simulated annealing acceptance criterion. The temperature is then reduced by the alpha value. This process of generating neighboring solutions from incumbent perturbations is repeated k times, where k is sufficiently large to ensure that the algorithm cools evenly and converges on a near-optimal solution. Once the algorithm has completed the k replications, it moves on to the next operation, if applicable, and repeats the entire process. Once all of the operations for that product line have been looped through, we move on to the next product line and repeat the process. The algorithm terminates when it has successfully looped through all product lines. Once that has been achieved, we collect our disparate data sets into a readable output and report the near-optimal solutions.
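The flow just described can be sketched as nested loops over product lines, operations, and k annealing iterations. The productivity values, demand figures, capacity, and cooling parameters below are illustrative placeholders rather than the experimental settings used later:

```python
import math
import random

random.seed(5)
P = [1.0, 0.9, 0.8]                         # output per worker by shift (illustrative)
CAPACITY = 12                               # work stations available per operation
demand = {("A", 1): 14.0, ("A", 2): 9.0,    # (product line, operation) -> weekly demand
          ("B", 1): 11.0}
ALPHA, T0, K = 0.95, 10.0, 400              # cooling rate, initial temperature, inner iterations

def deviation(w, d):
    out = sum(wi * pi for wi, pi in zip(w, P))
    return out - d if out >= d else float("inf")   # demand must be covered

results = {}
for line_op, d in demand.items():           # loop over product lines and operations
    w = [CAPACITY, CAPACITY, CAPACITY]      # initial feasible (overstaffed) solution
    temp = T0
    for _ in range(K):                      # k replications per operation
        cand = w[:]
        s = random.randrange(3)             # perturb one shift by +/- one worker
        cand[s] = min(CAPACITY, max(0, cand[s] + random.choice([-1, 1])))
        cand.sort(reverse=True)             # keep shift 1 >= shift 2 >= shift 3
        delta = deviation(cand, d) - deviation(w, d)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            w = cand                        # simulated annealing acceptance criterion
        temp *= ALPHA                       # cool by the alpha value
    results[line_op] = (w, deviation(w, d))

print(results)
```

Each (product line, operation) pair gets its own annealing run, mirroring the outer loops of the flowsheet; infeasible candidates (output below demand) carry an infinite deviation and are therefore never accepted.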


3.C. Mathematical Programming

Our methodology is a Model Based Simulated Annealing approach to solving a combinatorial resource allocation problem. We utilize simulation modeling and metaheuristics to determine the number of workers and work stations necessary so that their output represents a minimum positive deviation between output and demand. Essentially, we want to staff our job shop such that its output is sufficient to cover the random demand for that period. Simulation modeling and metaheuristics are inexact solution techniques. It is not so much that the methodology produces inexact solutions, but rather feasible solutions; the problem is that we have no reference for how good those feasible solutions are. It may be the case that the feasible solutions are optimal, which would be ideal, or that they are not very good solutions. In order to provide this type of benchmarking, we propose the use of integer programming to achieve an exact solution, so as to have a reference for the quality of the solutions achieved by our methodology. Integer programming was selected because we require a discrete number of workers and work stations; it makes no practical sense to have half of a worker or two thirds of a work station. Integer programming ensures that solutions contain integer values. Presented below is the general formulation.

π‘€π‘–π‘›π‘–π‘šπ‘–π‘§π‘’ π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› (6.a)

𝑆𝑒𝑏𝑗𝑒𝑐𝑑 π‘‘π‘œ:

π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› = βˆ‘ βˆ‘ βˆ‘ π‘Šπ‘—π‘œπ‘  βˆ— 𝑃𝑠3𝑠=1

π‘šπ‘œ=1

𝑛𝑗=1 βˆ’

π·π‘—π‘œ

40 (6.b)

βˆ‘ βˆ‘ [π‘Šπ‘—π‘œ1 β‰₯ π‘Šπ‘—π‘œ2]π‘šπ‘œ=1

𝑛𝑗=1 (6.c)

βˆ‘ βˆ‘ [π‘Šπ‘—π‘œ2 β‰₯ π‘Šπ‘—π‘œ3]π‘šπ‘œ=1

𝑛𝑗=1 (6.d)

βˆ‘ βˆ‘ [π‘Šπ‘—π‘œ1 ≀ πΆπ‘—π‘œ]π‘šπ‘œ=1

𝑛𝑗=1 (6.e)

π‘Šπ‘—π‘œπ‘  β‰₯ 0, π‘–π‘›π‘‘π‘’π‘”π‘’π‘Ÿ (6.f)

πΆπ‘—π‘œ β‰₯ 0, π‘–π‘›π‘‘π‘’π‘”π‘’π‘Ÿ (6.g)

π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› β‰₯ 0 (6.h)

π·π‘—π‘œ β‰₯ 0 (6.i)

𝑃𝑠 β‰₯ 0 (6.j)

Eq. 6: General Integer Program

In Eq. 6, we have the general integer program for the problem described in the introduction. The objective function is to minimize Deviation (6.a), which is defined in (6.b) as the sum across all product lines, operations, and shifts of $W_{jos} P_s$, where $W_{jos}$ is the number of workers for product line $j$, operation $o$, on shift $s$, and $P_s$ is the productivity per shift. From that, we subtract the random demand per product line operation, $D_{jo}$. The demand is divided by forty so as to ensure equivalent units between the double sum of product line workers over a possible three shifts and the demand. The second constraint (6.c) ensures that for any product line $j$ and operation $o$ the maximum number of workers available on the second shift is equal to or less than the first shift; the same can be said of workers on shift three with respect to the second shift (6.d). Workers on the first shift tend to be more productive than those on the second and third shifts. There are also economic reasons to prioritize the placement of workers on the first shift over the other shifts: workers on the first shift are subject to normal pay rates, whereas the other shifts cost more due to unfavorable hours, such as the night shift. The fourth constraint (6.e) provides a physical capacity constraint on the shop, where $C_{jo}$ is the physical capacity per product line operation on the number of work stations, and ostensibly workers. Essentially, we can only accommodate a certain number of workers and work stations within the shop area. Constraints (6.f) and (6.g) require that $W_{jos}$ and $C_{jo}$ be nonnegative integers, while constraints (6.h), (6.i), and (6.j) require that Deviation, $D_{jo}$, and $P_s$ be nonnegative values, respectively.

3.D. Methodology Summary

In solving the resource allocation problem that manifests itself in a job shop environment, we propose a model based simulated annealing approach. This methodology couples together simulated annealing and systems dynamic modeling. In essence, we develop a systems dynamic model that is used under optimization conditions to achieve configurations of workers, shifts, and work stations. Systems dynamic modeling was chosen because its inclusion in a problem like this is novel: we are planning beyond the worker, whereas past papers only planned for the worker. By saying that we plan beyond the worker, we imply that there are multiple variables within our model and that workers are simply one of them. Ultimately, the values that the variables achieve are dependent on the random demand that is known in advance. The other aspect of our joint methodology is simulated annealing. This metaheuristic excels at solving resource allocation problems and lends itself to this one more easily than, for example, genetic algorithms or tabu search, because it does not require a binary encoding of the variables; rather, it generates an initial solution and perturbs off that solution. The incumbent solution is chosen according to the simulated annealing criteria, and the choice of the incumbent or neighbor solution is highly dependent on the current temperature of the algorithm, which decreases as replications increase. Lastly, integer programming serves as a validation approach that allows us to reach conclusions regarding the quality of a solution produced via systems dynamic modeling or simulated annealing. A single instance of the resource allocation problem thus yields three disparate and distinct solutions. The first comes from the integer program, which is an exact method and sets the benchmark for all other solutions because it represents an optimal solution. The second solution comes from simulated annealing and may not be optimal, which is why we compare it to the integer program. The third and final solution comes from the systems dynamic model; again, we compare that solution to integer programming to determine whether it is a quality solution. If the solutions from the joint approaches of simulated annealing and systems dynamic modeling are identical or equivalent to that of integer programming, then we conclude that their quality is that of optimality and that the methodology is valid for solving a resource allocation problem. If one of the methods fails, say simulated annealing, then we conclude that only the other method, systems dynamic modeling in this example, represents a feasible solution technique for the problem. Thus, depending on the outcomes and solution qualities, there are multiple methods that may or may not represent feasible solution techniques for the resource allocation problem.


4. EXPERIMENTATION & RESULTS

The methodology described a Model Based Simulated Annealing approach to solve the problem under consideration. Namely, given a set of $J$ product lines, where $J = \{J_1, J_2, \ldots, J_n\}$, and a set of $O$ independent operations, where $O = \{O_1, O_2, \ldots, O_m\}$, determine the appropriate number of workers, shifts, and work stations for each product line $J$. At most, there can be three shifts for each product line. A worker needs a work station in order to contribute to the total output of the shop, and each work station can process only one operation at a time. Once a process has begun, the operation must be completed without interruption. Each shift suffers a productivity penalty, $P_s$, due to various economic and working conditions. There exists a physical capacity on the number of work stations and workers that can perform labor within the space. Each operation is processed in the order given by the set $O$ for the specific job. The collective production pertaining to the combination of workers, shifts, and work stations shall be known as output. Every period $l$ the shop faces a random demand. The goal is to minimize the deviation between output and random demand. The experimentation on our methodology will largely focus on two broad applications: scalability and robustness. The first section will describe the results pertaining to scalability, while the second will focus on the robustness of our application. Recall that our proposed methodology is a model based simulated annealing approach, wherein we utilize systems dynamic modeling and simulated annealing to solve the resource allocation problem. We then compare those solutions to that of integer programming and determine the solution quality. If the solutions for a particular technique are equivalent to those of integer programming, then we conclude that that technique represents a feasible solution technique for the problem. For robustness, we analyze whether or not the simulated annealing algorithm is robust, using Taguchi concepts such as the signal to noise ratio and the loss function.

4.A. Scalability

Scalability refers to the notion of scaling the problem up. To begin this section, we focus on developing a base model with a single product line and operation. The methodology is applied to this problem, with results regarding workers, shifts, and work stations recorded along with the run time of the various algorithms and methods. Once the base model has been presented, we move on to more complex versions of the problem. This demonstrates the scalability of the methodology and its ability to solve problems of the general form discussed in the methodology.

4.A.1. The Base Model

In this model we test the simulated annealing algorithm against the integer program, and we utilize the optimizer experiment inherent to AnyLogic © 8.3.1 to test the systems dynamic model against the integer program. The base model is one in which a single product line and operation are under consideration. Presented below is the integer program for the base problem.

π‘€π‘–π‘›π‘–π‘šπ‘–π‘§π‘’ π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› (7.a)

𝑆𝑒𝑏𝑗𝑒𝑐𝑑 π‘‘π‘œ:

π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› = (π‘Š111 βˆ— 𝑃1) + (π‘Š112 βˆ— 𝑃2) + (π‘Š113 βˆ— 𝑃3) βˆ’π·11

40 (7.b)

π‘Š111 β‰₯ π‘Š112 (7.c)

π‘Š112 β‰₯ π‘Š113 (7.d)

π‘Š111 ≀ 𝐢11 (7.e)

Page 44: The Pennsylvania State University PLANNING THE MEANS OF

35

π‘Š111, π‘Š112, π‘Š113, 𝐢11 β‰₯ 0, π‘–π‘›π‘‘π‘’π‘”π‘’π‘Ÿ (7.f)

π·π‘’π‘£π‘–π‘‘π‘Žπ‘–π‘œπ‘›, 𝐷11, 𝑃1, 𝑃2, 𝑃3 β‰₯ 0 (7.g)

Eq. 7: Base Model, Integer Program

With respect to Eq. 7, the objective function (7.a) is to minimize Deviation, which is defined in (7.b) as the sum of workers across the three available shifts multiplied by their respective productivity values; from that, we subtract the random demand. Keep in mind that a single product line and operation are present. We ensure that the maximum number of workers on shift two is at most equal to shift one (7.c); the same can be said of shift three with respect to shift two (7.d). Naturally, the number of workers on the first shift cannot exceed the shop capacity (7.e). Finally, we require that all workers on their respective shifts, as well as the shop capacity, be nonnegative integers (7.f), and that Deviation, the demand, and the productivity values be nonnegative (7.g). We can further develop the integer program by adding in the constant values. We arbitrarily set the demand, $D_{11}$, equal to 363.91 hours. Concerning the productivity penalties, we declare that $P_1 = 1$, which implies that the workers on the first shift for product line one do not face any sort of penalty; this will act as a benchmark for further productivity penalties. Workers on the second shift for product line one face a fifteen percent penalty compared to the first shift; thus $P_2 = 0.85$. In a typical forty-hour workweek, a worker on the first shift would be able to perform forty hours of labor, while a worker on the second shift can only fulfill thirty-four hours of labor. This effect can generally be described as the unavailability of "good" working times. The third shift suffers a twenty-five percent penalty when compared to the first shift; thus $P_3 = 0.75$. In our forty-hour workweek, workers on the third shift can only contribute thirty hours to output. Consider shifts two and three to be outside the traditional 9 a.m. – 5 p.m. work hours. Finally, the shop capacity is arbitrarily set to five, implying that the shop can accommodate five work stations without further losses to productivity. The refined integer program becomes the following:

π‘€π‘–π‘›π‘–π‘šπ‘–π‘§π‘’ π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› (8.a)

𝑆𝑒𝑏𝑗𝑒𝑐𝑑 π‘‘π‘œ:

π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› = (π‘Š111 βˆ— 1) + (π‘Š112 βˆ— 0.85) + (π‘Š113 βˆ— 0.75) βˆ’363.91

40 (8.b)

π‘Š111 β‰₯ π‘Š112 (8.c)

π‘Š112 β‰₯ π‘Š113 (8.d)

π‘Š111 ≀ 5 (8.e)

π‘Š111, π‘Š112, π‘Š113 β‰₯ 0, π‘–π‘›π‘‘π‘’π‘”π‘’π‘Ÿ (8.f)

π·π‘’π‘£π‘–π‘‘π‘Žπ‘–π‘œπ‘› β‰₯ 0 (8.g)

Eq. 8: Refined Base Model, Integer Program

The main difference between Eq. 7 and Eq. 8 is that Eq. 8 represents the refined version of Eq. 7; that is, we define the constant parameters. Notice that the objective function (8.a) is the same. In (8.b) we define the shift productivity values as well as the random demand. The priority constraints (8.c) and (8.d) remain unchanged, while in (8.e) the shop capacity is set to five. Finally, we ensure that all of the workers on their respective shifts are nonnegative integers (8.f), while also requiring that Deviation be nonnegative (8.g).
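Eq. 8's feasible region contains at most a few dozen integer points, so the optimum can be verified by brute force. The following stdlib-only Python sketch (an illustration of ours, not the thesis' Matlab code) enumerates every staffing pattern, reporting the surplus in hours:

```python
# Exhaustive enumeration of Eq. 8's feasible region (stdlib only).
# Constants come from the text; the deviation is reported in hours.
P1, P2, P3 = 1.0, 0.85, 0.75   # shift productivity multipliers
DEMAND = 363.91                # hours
CAP = 5                        # shop capacity

best, best_dev = None, float("inf")
for w1 in range(CAP + 1):              # W111 <= 5
    for w2 in range(w1 + 1):           # W112 <= W111
        for w3 in range(w2 + 1):       # W113 <= W112
            output = 40 * (w1 * P1 + w2 * P2 + w3 * P3)   # hours
            dev = output - DEMAND
            if 0 <= dev < best_dev:    # Deviation must be nonnegative
                best, best_dev = (w1, w2, w3), dev

print(best, round(best_dev, 2))   # -> (5, 4, 1) 2.09
```

The enumeration confirms the base-model optimum reported later for integer programming and simulated annealing: five workers on shift one, four on shift two, one on shift three, with a 2.09-hour surplus over demand.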


With respect to the simulated annealing algorithm presented within the methodology as Figure 1, implementation becomes very simple, as there is only a single product line and operation in this application. The algorithm only has to exert itself on the k loop. Recall that the k loop perturbs the incumbent solution, thus producing a neighboring solution. The simulated annealing criterion then chooses whether the incumbent solution remains or the neighboring solution becomes the incumbent. The driving factor for the simulated annealing criteria is the current value of the temperature. Initially, the algorithm may choose solutions with "worse" objective function values so as to escape local minima. As the temperature approaches zero, the probability of this happening decreases, so the algorithm is said to converge to a near optimal solution given that k is sufficiently large. For this experiment, the temperature was set to one and decreases by an alpha value of 0.996 for every iteration in k. For our purposes, k is set to 10,000, implying that 10,000 neighboring solutions will be perturbed and subsequently compared to 10,000 incumbent values. At the end of 10,000 replications we will show that the objective function value converges to a minimized deviation, and thus we can conclude that the simulated annealing algorithm was successful in this application. The simulated annealing and integer programming methods were scripted and executed in Matlab © R2018a. For purposes pertaining to systems dynamic modeling, we use AnyLogic © 8.3.1 to create the model. The computer on which these scripts were executed had a 3.31 GHz processor and 8 GB of RAM. Once the systems dynamic model has been created, an optimization experiment can be added to solve our combinatorial resource allocation problem. During each replication, the decision values are randomized and used to compute the objective function value; from there, the deviation value is quickly calculated. The optimization experiment employs a whole host of search techniques and heuristics to assign values randomly to the decision variables within their predefined ranges. There are also a number of constraints associated with the optimization experiment that are very similar to the constraints seen in the integer programming model. Results pertaining to the execution of the three methods discussed above are presented below.

Table 1: Base Model Results for Single Product Line and Operation

                               IP      SA     SDO
  Worker Shift 1                5       5       5
  Worker Shift 2                4       4       5
  Worker Shift 3                1       1       0
  Total Shifts Filled           3       3       2
  Work Stations                 5       5       5
  Output (Hrs.)               366     366     370
  Objective Function Value   2.09    2.09    6.09
  Execution Time (Sec.)      0.60    6.12    4.68

Table 1 shows that the results for Integer Programming (IP) and Simulated Annealing (SA) are largely identical. Both have five workers on the first shift, four on the second, and one on the third. All three shifts contain a worker; thus we can say that all the shifts are occupied. The number of work stations is in part derived from the workers on the first shift: since there are five people on that shift, they will require five work stations in order to contribute to demand. Once second shift comes on line, they can use the same work stations that are already in the shop; just note that one work station will be idle and not have a worker present. The same can be said of third shift, with the difference that four work stations will be idle. The output achieved by these workers and work stations is 366 hours. The objective function, which is defined as the deviation between output and demand, is 2.09; this value represents the minimized amount by which output exceeds demand. Lastly, we come to execution time, which is the amount of time required to execute the program. The simulated annealing algorithm takes roughly ten times longer to execute than the integer program. This can be compared to the systems dynamic optimization (SDO) model. A prime difference among the models is the allocation of workers to shifts. Workers on the first shift are equal across all three models. On the second shift, systems dynamic places five workers, while simulated annealing and integer programming place only four. Most interestingly, systems dynamic places no workers on the third shift, while integer programming and simulated annealing each place one. In essence, the systems dynamic model has taken the worker on third shift and placed them on second shift, which lowers the number of shifts filled to two for the systems dynamic model. The number of work stations is equivalent across the three models. The output achieved by the systems dynamic model is 370 hours, which is 6.09 hours over demand. This configuration was found with a step size of one for workers on the first and third shifts and one half on the second shift. The total number of people within the shop is also equivalent across all three models. The difference is where to place the final worker: on the second or the third shift? Placing the final worker on third shift achieves a configuration that minimizes deviation, while placing them on second shift merely achieves a feasible solution.
The execution time for the systems dynamic model was roughly thirty percent shorter than that of its simulated annealing counterpart. We believe that this is largely a result of the number of replications being run. Within the simulated annealing algorithm, we set the number of replications to 10,000, while the optimizer experiment for systems dynamic modeling does not have a set replication number; it most likely terminates after a certain threshold has been reached and the objective function has remained stable with minimal change over a number of iterations. When dealing with metaheuristics like simulated annealing, it is important to remember that they cannot guarantee convergence to an optimal solution. The cooling rate is meant to control the change in solution that is accepted from one iteration to another; we do not want to converge too quickly, or the algorithm might find itself at a local optimum. When the algorithm terminates, we have the option of taking the final answer or the best answer. In the optimal scenario, the final answer and the best answer should be equivalent, showing that the algorithm evenly cooled and converged on a feasible solution with respect to the defined criteria. In this instance, the criteria would be to find a configuration of workers, shifts, and work stations that minimizes deviation. During the course of the experimental runs, it was decided to loop through the algorithm one hundred times, collecting the optimal solution from each loop. The intent is to provide the reader with a more nuanced interpretation of the simulated annealing outputs. The frequency table is presented below.

Table 2: Simulated Annealing Frequency Table, Single Operation

  Output   Achieved
  366*           97
  372             1
  376             2


In Table 2, the output column refers to the collective output that the configuration of workers, shifts, and work stations was able to achieve. The achieved column refers to the number of times out of one hundred that the algorithm achieved that solution. From this, we can conclude that ninety-seven percent of the solutions are able to achieve optimality, which is also indicated by *. The remaining three percent are split between outputs of 372 and 376.

4.A.2. Multiple Operations

Returning to the methodology of a model based simulated annealing approach, we proportionally increase the problem by adding another operation. This experiment is performed because we are interested in how multiple operations will affect the solution quality for the simulated annealing algorithm and the systems dynamic model. In doing this, we continue to probe the scalability of the two models under consideration. Previously, we compared simulated annealing and the systems dynamic optimization model side by side with integer programming, presenting results within the same format or table. For this experiment, we modify that approach: since the systems dynamic model under the optimizer conditions was unable to achieve a quality solution on par with that of integer programming, we now present the two methods separately. First, we present the simulated annealing algorithm and investigate its ability to handle multiple operations; then we move on to the systems dynamic model and determine whether the optimizer inherent within the software can achieve an optimal solution.

π‘€π‘–π‘›π‘–π‘šπ‘–π‘§π‘’ π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› (9.a)

𝑆𝑒𝑏𝑗𝑒𝑐𝑑 π‘‘π‘œ:

π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› = {(π‘Š111 βˆ— 1) + (π‘Š112 βˆ— 0.85) + (π‘Š113 βˆ— 0.75) βˆ’363

40} + {(π‘Š121 βˆ— 1) +

(π‘Š122 βˆ— 0.85) + (π‘Š123 βˆ— 0.75) βˆ’ 617.11

40} (9.b)

π‘Š111 β‰₯ π‘Š112 (9.c)

π‘Š121 β‰₯ π‘Š122 (9.d)

π‘Š112 β‰₯ π‘Š113 (9.e)

π‘Š122 β‰₯ π‘Š123 (9.f)

π‘Š111 ≀ 5 (9.g)

π‘Š121 ≀ 15 (9.h)

π‘Š111, π‘Š112, π‘Š113, π‘Š121, π‘Š122, π‘Š123 β‰₯ 0, π‘–π‘›π‘‘π‘’π‘”π‘’π‘Ÿ (9.i)

π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› β‰₯ 0 (9.j)

Eq. 9: Integer Program for Two Operations, Single Product Line

The objective function (9.a), as it has been throughout, is to minimize Deviation, which is defined in (9.b) as the sum of workers on three shifts and two operations multiplied by their respective productivity values. One difference is that we now see a delineation between operation one and operation two. The first half of (9.b) represents operation one: the workers on that operation collectively achieve an output, from which the random demand for operation one is subtracted. A similar expression forms the second half of (9.b), whereby the workers on operation two collectively achieve an output from which the demand for operation two is subtracted. In (9.c) and (9.d), we ensure that the number of workers on shift two is at most equal to shift one; this applies to both operations. In constraints (9.e) and (9.f), we ensure that workers on shift three are bounded by shift two, meaning that the number of workers on shift three can at most equal shift two. Constraints (9.g) and (9.h) place physical constraints on the number of work stations, and ostensibly workers, that can exist within the shop; each operation adds its own capacity constraint.


Finally, we require all workers on all shifts for both operations to be nonnegative integers (9.i), and we also require Deviation to be nonnegative (9.j). With the introduction of the new operation, we have to account for a new demand within the deviation constraint. The new operation also requires a new set of workers who specialize in that area, as well as work stations that are different from the first operation. For illustrative purposes, imagine that the first operation is some sort of sand blasting activity that requires sand blasters as work stations; the workers on this operation must be skilled in sand blasting. The second operation could be some sort of tack welding activity that requires a welding torch and other miscellaneous equipment; naturally, workers on this operation are skilled with welding equipment and are specifically assigned to welding. Due to the addition of another operation, the physical size of the shop increases to accommodate the extra demand. We retain the constraint from the base model that places a physical limit on the first operation: namely, there is only room for five work stations of the operation one kind. We add another capacity constraint for the second operation, which declares that there is only room for fifteen work stations of the second kind. Refer back to the illustrative example of sand blasters and tack welders: perhaps sand blasters are quite bulky, while welding booths are relatively compact in comparison. Lastly, for all operations we assume shift one is more productive than shift two, which is in turn more productive than shift three. We also follow the priority rule and declare that the maximum number of people on shift three is equal to or less than shift two, and the maximum number of people on shift two is equal to or less than shift one. The simulated annealing algorithm is updated so as to accommodate two operations. Many of the constants discussed for the base model remain unchanged for this application; the same can also be said of the systems dynamic model. The results from integer programming are presented below.

Table 3: Integer Programming Results, Single Instance

                             Operation 1   Operation 2
  Worker Shift 1                       5             8
  Worker Shift 2                       4             7
  Worker Shift 3                       1             2
  Total Shifts Filled                  3             3
  Work Stations                        5             8
  Output (Hrs.)                      366           618
  Objective Function Value             3          0.89
  Execution Time (sec.)                -         12.28

As the name of Table 3 implies, this is only a single instance of the optimal solution, indicating that there are multiple combinations of workers on the various shifts that are able to achieve the same output. The execution time for operation one is empty because a single execution of the integer program produces results for both operations; as Table 3 shows, that execution required 12.28 seconds. As mentioned above, there are multiple combinations of workers that achieve the same output; thus, there are multiple optimal solutions that ultimately differ from each other in the allocation of workers across the shifts. In order to present a more encompassing and compelling dataset, we relax the integrality constraint on the integer program. The result is a relaxed integer program (RIP) that differs from the integer program in the sense that integrality has been relaxed. By doing this, we can present the Pareto Clouds and Frontiers for the relaxed integer program. The Pareto Frontiers represent shifts one, two, and three; see Figure 2 below. This will allow us to better visualize the tradeoffs between workers, since every triplet shift combination has the same deviation.

Figure 2: Relaxed Integer Program Pareto Efficiency and Frontiers

In Figure 2, the Pareto Clouds are the variously colored asterisks: blue for shift one, orange for shift two, and grey for shift three. The Pareto Frontiers are represented by the variously colored circles: yellow for the shift one frontier, purple for the shift two frontier, and light green for the shift three frontier. The difference between the Pareto Clouds and the Frontiers is that the Pareto Frontiers represent points that are not dominated by any point in the Pareto Cloud; thus, all points not in the Pareto Frontier are dominated by the Frontier. The horizontal axis represents the number of workers for operation one, while the vertical axis represents the number of workers for operation two. We wish to emphasize that there is a difference between the relaxed integer program and the integer program: the integrality constraint is relaxed for the relaxed integer program, effectively forming a linear program. Why do we do this? Why not plot the Pareto Clouds and Frontiers for the integer program? The reasons are several. Pareto Frontiers are discussed in the context of tradeoffs whenever multiple solutions occur in mathematical programming; they allow the reader to visualize what happens to resource x when resource y is increased or decreased. For our purposes, we wish to illustrate what happens to the various workers on their respective shifts as we move along the shift one Pareto Frontier. This is hard to visualize with the integer program, since there are so few integer combinations, so we relax integrality to present a more complete illustration. The problem is simply more interesting under the relaxed program because many more combinations can be generated that have the same objective function value. In this way, the reader can gain a more complete view of the tradeoffs as they occur. We will only use the relaxed integer program in conjunction with the Pareto Clouds and Frontiers; when discussing solution quality, especially concerning simulated annealing and the systems dynamic optimization model, we will always return to the original integer program. For a more focused view of the Pareto Frontiers without the Pareto Clouds, we present Figure 3.

Figure 3: Relaxed Integer Program Pareto Frontiers

In Figure 3, notice that the shift one Pareto Frontier for the relaxed integer program is denoted by the yellow circles; the shift two Pareto Frontier is denoted by the purple circles, and light green denotes shift three. The above paragraph mentioned worker tradeoffs in view of maintaining the same objective function value. In order to achieve these tradeoffs, there has to be movement and change in the worker values per shift. The general tradeoffs occurring in the above Pareto Fronts are visible in Figure 4 as we move along the shift one Pareto Frontier.


Figure 4: Pareto Clouds and Frontiers by Methods with Tradeoff Lines

Figure 4 presents two examples: the extreme right example, represented by the dashed line, and the extreme left example, represented by the dash-dot line. The purpose of these examples is to observe the tradeoffs that occur as one increases the number of workers on the first shift for both operations, and the subsequent effect on the other worker values. Keep in mind that both examples yield the same output and ostensibly the same objective function value; we are simply observing what happens to the other shifts as we adjust the allocation of workers on shift one. Shift one was chosen because all other shifts are beholden to it. The extreme right example is so called because it represents the rightmost value on the shift one Pareto Frontier; for shift one, it represents a maximum worker value. The extreme left example is so named because it represents the leftmost point on the shift one Pareto Frontier, which corresponds to a minimum worker value. Thus, as we move from one extreme to the other, we observe the subsequent changes in the worker values for shifts two and three. If one were to decrease the number of shift one workers, a subsequent increase would have to occur in the number of workers present on the second and third shifts. This decrease in shift one workers manifests itself by moving from the extreme right example to the extreme left example, and it applies to both operation one and operation two. Naturally, the converse is also true if one were to increase the number of shift one workers. Note that for operation two, any movement in the third shift is trivial because in both examples the worker value is zero, so one is simply trading zero for zero. This does not occur in the first operation, as the third shift actually has workers present in both examples.
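The frontier-versus-cloud distinction used throughout this section reduces to a standard dominance test. A small Python sketch for two-dimensional points (assuming, for illustration, that fewer workers on each axis is preferred; the function name is our own):

```python
def pareto_frontier(points):
    """Return the points not dominated by any other point.

    Point q dominates point p when q is no worse on both coordinates
    and strictly better on at least one (minimization on both axes).
    """
    unique = list(dict.fromkeys(points))   # drop duplicates first
    return [
        p for p in unique
        if not any(
            q != p and q[0] <= p[0] and q[1] <= p[1] for q in unique
        )
    ]
```

Every point the filter discards is dominated by some frontier point, which is exactly the relationship between the Pareto Clouds and the Pareto Frontiers plotted in Figures 2 through 5.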

In order to show that simulated annealing is a viable option to solve the problem under consideration, we now repeat the process for simulated annealing and begin with Table 4.


Table 4: Simulated Annealing Results, Single Instance

                           Operation 1   Operation 2
Worker Shift 1                       5            13
Worker Shift 2                       4             2
Worker Shift 3                       1             1
Total Shifts Filled                  3             3
Work Stations                        5            13
Output (Hrs.)                      366           618
Objective Function Value             3          0.89
Execution Time (sec.)                -         12.28

Notice that when compared to Table 3 the outputs are equivalent, but the combination of workers on the various shifts is different, indicating multiple optimal solutions with the same objective function value but differing allocations amongst the decision variables. Pareto Frontiers can also be obtained for simulated annealing. Presented next are the Pareto Frontiers for the simulated annealing algorithm, projected onto the Pareto Clouds and Frontiers from the relaxed integer program. In doing so, we wish to demonstrate the compatibility of the two Pareto Fronts, the first coming from the relaxed integer program and the second from the simulated annealing algorithm. See the figure below.

Figure 5: Pareto Clouds and Frontiers by Method

Concerning Figure 5, the points plotted as triangles represent the Pareto Frontiers for the simulated annealing algorithm. They serve a twofold purpose: first, to indicate the effects of integrality upon the problem, and second, to indicate the number of optimal solutions available to each operation.


Beginning with the integrality comment, we note that the Pareto Clouds were formed by relaxing the integrality constraints on the integer program, thus yielding a linear program. The integrality constraints were not relaxed for the simulated annealing algorithm, precisely so that the effects of integrality could be observed. We are able to generate many more optimal solutions without integrality than with it; thus, by imposing integrality and plotting both the integer and non-integer methods, we observe what happens to the frontiers whenever integrality is imposed. One feature of interest is the location of the simulated annealing frontier relative to the relaxed integer programming frontier. The simulated annealing frontier is in all cases displaced to the right of the relaxed integer program, and its points generally lie within the Pareto Clouds. The exceptions are the bottom point in the shift two simulated annealing frontier and the top point for shift one, also in the simulated annealing frontier. The shift one simulated annealing frontier is interesting because it lies on what we shall call an extreme edge. Recall that the number of workers on the first shift cannot exceed five. Within the simulated annealing algorithm that constraint appears to be active, since it produces frontier points equivalent to five workers for operation one. Since the simulated annealing algorithm is an integer method, there are no simulated annealing clouds; there exists only a simulated annealing frontier. Why is this? It is simply due to the effects of integrality. The Pareto Clouds represent triplet points that all correspond to the same output and objective function value, while the Pareto Frontier represents the points within the clouds that dominate the other points. By the very nature of integer programming, we have a sparse number of combinations, or different ways to allocate workers across the various shifts.

Thus, the Pareto Clouds are in effect equal to the Pareto Frontier in all instances, owing to this sparsity of combinations created and enforced by the integrality constraints. Because there are so few, the triplet combinations dominate one another to form the frontier. It is worth mentioning that the reason we do not plot the integer programming Pareto Frontiers is that they would be identical to those of the simulated annealing algorithm. Both methods are integer methods, while the relaxed integer program is a real-valued method, which is why it yields a plethora of real points. It is simply harder to generate integer combinations, so we obtain a small set of combinations that is sparse compared to the real combinations. It is also an important performance measure that the solutions produced by integer programming and simulated annealing are equivalent, or of the same quality. Figure 5 also displays the number of optimal solutions for each operation. As is evident for shift one, there are two triangles indicating two unique worker values for operation two. Concerning the first operation, the projection of those two points yields only a single worker value. The same occurrence can be noted for shifts two and three. Namely, there exist two optimal solutions for operation two where all the worker values per shift are unique, while for the first operation there exists a single unique solution, obtained by projecting the triangular points onto the horizontal axis. The crux of the analyses performed thus far is that all of the worker allocations across the various shifts achieve the same output and objective function value.
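The sparsity of integer combinations described above can be checked directly with a brute-force enumeration. The sketch below assumes the shift productivity multipliers of 1, 0.85, and 0.75 and the 40-hour work week used elsewhere in this chapter, so weekly output equals 40w1 + 34w2 + 30w3 hours; the demand of 618 hours and the 13-station capacity come from the operation two column of Table 4. The function name is ours.

```python
# Enumerate all integer worker allocations (w1, w2, w3) that achieve a
# given weekly output under the priority rule w1 >= w2 >= w3 and the
# work-station capacity limit on shift one.

def equal_output_allocations(target_hours, capacity):
    """All (w1, w2, w3) with 40*w1 + 34*w2 + 30*w3 == target_hours."""
    solutions = []
    for w1 in range(capacity + 1):       # shift one limited by work stations
        for w2 in range(w1 + 1):         # priority rule: w2 <= w1
            for w3 in range(w2 + 1):     # priority rule: w3 <= w2
                # 40*(w1 + 0.85*w2 + 0.75*w3) kept in exact integer form
                if 40 * w1 + 34 * w2 + 30 * w3 == target_hours:
                    solutions.append((w1, w2, w3))
    return solutions

# Operation two from Table 4: exactly two allocations reach 618 hours.
print(equal_output_allocations(618, capacity=13))  # [(8, 7, 2), (13, 2, 1)]
```

Running the same enumeration for operation one, `equal_output_allocations(366, 5)`, returns only `(5, 4, 1)`, consistent with the single optimal solution reported for that operation.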
In keeping with Figures 3 and 5, we can present Figure 6, which shows the simulated annealing Pareto Frontiers; these are equivalent to the Pareto Frontiers for the integer program.


Figure 6: Simulated Annealing Pareto Front

With respect to Figure 6, the gray triangular line denotes the Pareto Frontier for the first shift, the brown triangular line denotes the Pareto Frontier for the second shift, and the blue triangular line denotes that of the third shift. We can now present Table 5, which lists all possible integer worker allocations that achieve an equivalent optimal solution. See the table below.

Table 5: Worker Allocations that Achieve Equivalent Output

                           Operation 1   Operation 2   Operation 2 (alt.)
Worker Shift 1                       5            13                    8
Worker Shift 2                       4             2                    7
Worker Shift 3                       1             1                    2
Total Shifts Filled                  3             3                    3
Work Stations                        5            13                    8
Output (Hrs.)                      366           618                  618
Objective Function Value             3          0.89                 0.89

In Table 5, we confirm what was presented and discussed in Figures 5 and 6. We clearly note that there exist two optimal solutions for operation two that have the same output, and a single optimal solution for operation one. Furthermore, we can use the simulated annealing algorithm in conjunction with a for loop in order to measure the frequency of each of the above optimal solutions for operation two. That frequency table is presented below.

Table 6: Simulated Annealing Frequency Table, Operation Two

Worker Combination   Frequency
13, 2, 1                    65
8, 7, 2                     35


From Table 6 it would appear that the 13, 2, 1 combination of workers on shifts one, two, and three respectively is more likely to be generated over 100 iterations than the 8, 7, 2 combination, which was generated only thirty-five times. All of the results presented and discussed above paint a very favorable picture for simulated annealing and its ability to produce a solution of quality equal to that of the integer program, which has so far produced only optimal solutions. Comments regarding the systems dynamic optimization can be found at the end of this sub-section.

Thus far, we have identified integer worker allocations across various shifts that achieve the same output and objective function value. We used two methods to obtain these configurations: the first was integer programming and the second was the simulated annealing algorithm. Both methods produced solutions that were identical in nature and quality. The question from the perspective of the job shop manager is "which do I select?" That is a valid question to ask. Unfortunately, the current optimization techniques are unable to answer it, since all of the optimization energy was focused on finding worker allocations that minimize the deviation between output and the random demand. Now that the dust has settled, we are still left with a question of optimality. Yes, both solutions are optimal with regard to minimizing the deviation between output and demand, but surely we can further refine the model, or generate constraints, that provide a unique solution? The obvious answer is to use a cost criterion to narrow our multiple optimal solutions down to a single optimal solution with respect to both the deviation between output and demand and cost. We can arbitrarily define the cost per hour of labor performed by each worker on each shift. See the table below for these values.

Table 7: Cost per Shift

          Cost ($) / hr.
Shift 1              100
Shift 2              100
Shift 3              115

From Table 7 we gather that the cost for shifts one and two is equivalent at 100 dollars per hour per worker. For shift three, we increase the cost to 115 dollars per hour per worker, owing to the unfavorable working hours and as an enticement for people to take that shift. Not only does shift three cost the most, it is also the least productive; the prime difference between shifts one and two is that the second shift is less productive than shift one, which sets the benchmark. Whenever the simulated annealing algorithm terminates, it returns a solution that is then compared to integer programming to determine its quality and ultimate optimality. Generally speaking, the reason that deviation is minimized rather than eliminated is that it is extremely unlikely that a configuration will be found that exactly matches demand. Thus, we have to accept a certain amount of deviation from the demand; in our case we require the output to be larger than the demand, so that the deviation is nonnegative. Apart from producing a near optimal solution, the algorithm also generates many feasible solutions whose output does not necessarily constitute a minimized deviation value. Those feasible solutions are stored in a series of vectors that can be collected and analyzed pursuant to the aforementioned cost criterion. As an aside, we present Figure 7, which displays the convergence of the simulated annealing algorithm as the number of iterations grows sufficiently large. The volatility


observed early on is a result of the simulated annealing acceptance criterion, which accepts a worse solution in order to escape a potential local minimum.
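The acceptance rule just described can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the neighborhood move, starting point, and temperature schedule are our assumptions, while the demand of 618 hours, the 13-station capacity, the priority rule, and the shift productivities come from the operation two instance discussed above.

```python
import math
import random

# Shift productivities 1.0 / 0.85 / 0.75 over a 40-hour week.
def output_hours(w):
    w1, w2, w3 = w
    return 40 * (w1 + 0.85 * w2 + 0.75 * w3)

def anneal(demand, capacity, iters=5000, t0=50.0, cooling=0.995, seed=1):
    rng = random.Random(seed)
    w = (capacity, capacity, capacity)                 # feasible start
    best, best_dev = w, abs(output_hours(w) - demand)
    t = t0
    for _ in range(iters):
        cand = list(w)
        cand[rng.randrange(3)] += rng.choice((-1, 1))  # perturb one shift
        cand = tuple(cand)
        # Reject moves violating the priority rule or the capacity limit.
        if not (capacity >= cand[0] >= cand[1] >= cand[2] >= 0):
            continue
        delta = abs(output_hours(cand) - demand) - abs(output_hours(w) - demand)
        # Metropolis rule: always accept improvements; accept a worse
        # neighbor with probability exp(-delta / T).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            w = cand
            dev = abs(output_hours(w) - demand)
            if dev < best_dev:
                best, best_dev = w, dev
        t *= cooling                                   # geometric cooling
    return best, best_dev
```

Early on, when the temperature is high, uphill moves are accepted often, producing the volatility visible in Figure 7; as the temperature falls, the search settles into a low-deviation allocation.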

Figure 7: Simulated Annealing Algorithm Convergence

Deviation is plotted on the vertical axis and the number of iterations on the horizontal axis of Figure 7. Eventually, as the algorithm cools down through temperature decreases, it converges on a solution with a minimized deviation value. All of this is in keeping with how we would expect a simulated annealing algorithm to operate. To show how the simulated annealing algorithm converges on its final solution, all the solutions produced during a single iteration are collected. By summation we obtain a sense of how many workers are on shift one for both operations, how many are on shift two, and likewise for shift three. We also combine the objective function values for each conjoined combination and calculate the cost using the values in Table 7. Lastly, we take the absolute value of the objective function values, which would otherwise be negative whenever demand is larger than the output. Once all this has been achieved, we can graph cost against the absolute value of the objective function value, forming the figure below.


Figure 8: Feasible Cost Deviation Pareto Frontier, Multiple Operations

In Figure 8, we form the feasible cost deviation Pareto Frontier. The horizontal axis is cost and the vertical axis is the absolute value of the objective function value. The blue asterisks represent the feasible solution cloud, while the orange circles represent the Pareto Frontier. By moving along the Pareto Frontier, one can observe the changes that occur as cost is increased or decreased. At the extremes, we can create a configuration of workers that costs very little yet has a large objective function value, that is, a large difference between demand and output, or a configuration that has a relatively large cost for the frontier and a low objective function value, or deviation. Our focus thus far has been on presenting the cost deviation Pareto Frontier for all feasible solutions. We can now refine that criterion to focus on optimal solutions, the goal being a single global optimal solution minimized with respect to both cost and deviation. Recall that there exist two optimal solutions. By applying the costs presented in Table 7, we can obtain the costs for each of those optimal solutions. See Table 8 below, where we present the cost of each worker configuration in Table 5 based on the costs in Table 7.

Table 8: Resulting Costs for Worker Configurations

Configuration (Op. 1 / Op. 2)   Cost ($)
5, 4, 1 / 13, 2, 1               105,200
5, 4, 1 / 8, 7, 2                109,800

Achieving the costs in Table 8 is a purely arithmetic problem. We simply summed the number of workers on each shift across both operations and then multiplied that figure by the respective shift cost and by forty hours, since that is the amount of time each worker works per week. Keep in mind that operation one had the same configuration under each of the differing allocations for operation two. Thus, the optimal operation two configuration is 13, 2, 1 with respect to shifts one, two, and three. When combined with operation one, the total cost is $105,200, which is $4,600 less than the alternative 8, 7, 2 operation two configuration in conjunction with the unique operation one solution. Thus, once we perform the optimization concerning both cost and deviation, we achieve a single optimal solution that minimizes both deviation and cost. We can also present the optimal decision regarding deviation and cost as a Pareto Front of objective function values. That Pareto Frontier is presented in Figure 9.
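The cost arithmetic just described can be reproduced in a few lines. The shift costs come from Table 7 and the worker configurations from Table 5; the function name is ours.

```python
# Weekly cost of a pair of operation configurations: sum the workers on
# each shift across both operations, then multiply by that shift's
# hourly cost (Table 7) and by the 40-hour work week.

SHIFT_COST = (100, 100, 115)   # $/hr for shifts one, two, three

def weekly_cost(op1, op2):
    return sum(40 * cost * (a + b)
               for cost, a, b in zip(SHIFT_COST, op1, op2))

print(weekly_cost((5, 4, 1), (13, 2, 1)))  # 105200
print(weekly_cost((5, 4, 1), (8, 7, 2)))   # 109800
```

The $4,600 gap between the two configurations comes entirely from the 8, 7, 2 allocation shifting work onto the less favorable second and third shifts.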

Figure 9: Optimal Deviation Cost Pareto Frontier

In Figure 9, the horizontal axis represents the deviation for both operations, in hours, and the vertical axis represents costs based on the per-shift costs presented in Table 7. The optimal decision is intuitive if one wishes to minimize costs, since both configurations have the same objective function value, or deviation; the only real difference between them is cost, as is apparent in Table 8, owing to the differing configurations of workers across the various shifts. The optimization problem presented is thus a multi-objective problem wherein we wish to minimize both the deviation, defined as the difference between output and random demand, and the cost. As was alluded to in the introduction to this subsection, we partitioned the analysis, first considering only simulated annealing and only now turning to the systems dynamic model under the optimizer conditions. The reason for this deferral is clear: systems dynamic optimization failed to produce solutions optimal to the problem. As the reader has already observed in the base model experiment, systems dynamic optimization failed to produce an optimal solution, whereas the simulated annealing algorithm and the integer program were able to do so. Now that the discussion regarding multiple optimal solutions for the simulated annealing algorithm and integer program is complete, we can present the shortcomings of systems dynamic optimization. See the table below for the systems dynamic optimization results regarding the original configuration.


Table 9: Systems Dynamic Optimization Results, Multiple Operations

                           Operation 1   Operation 2
Worker Shift 1                       5            10
Worker Shift 2                       3             7
Worker Shift 3                       3             0
Total Shifts Filled                  3             2
Work Stations                        5            10
Output (Hrs.)                      392           638
Objective Function Value            29         20.89
Execution Time (sec.)                -          7.89

In Table 9, one can clearly see that the solutions generated via systems dynamic optimization are not optimal; the objective function values of 29 and 20.89 are far larger than the values of 3 and 0.89 achieved by the other methods (Table 4). The execution time of 7.89 seconds includes generating the solutions for both operation one and operation two, which is why no separate execution time is reported for operation one; both are generated within the same timeframe.

4.A.3. Multiple Product Lines

Up to this point, we have discussed various experiments relating to scalability. Presented first was the base model so the reader could familiarize themselves with the concept of a single product line and operation. Next, we expanded the number of operations and noted the changes that adding another operation created for the shop. Finally, we expand the number of product lines to two. Now, within the job shop, we have two general products that are being produced. Both product lines require two operations. To begin we present the integer program for the two-product line model.

π‘€π‘–π‘›π‘–π‘šπ‘–π‘§π‘’ π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› (10.a)

𝑆𝑒𝑏𝑗𝑒𝑐𝑑 π‘‘π‘œ:

π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› = {(π‘Š111 βˆ— 1) + (π‘Š112 βˆ— 0.85) + (π‘Š113 βˆ— 0.75) βˆ’1400.85

40} + {(π‘Š121 βˆ—

1) (π‘Š122 βˆ— 0.85) + (π‘Š123 βˆ— 0.75) βˆ’ 1325.68

40} + {(π‘Š211 βˆ— 1) + (π‘Š212 βˆ— 0.85) + (π‘Š213 βˆ—

0.75) βˆ’ 2501.56

40} + {(π‘Š221 βˆ— 1) + (π‘Š222 βˆ— 0.85) + (π‘Š223 βˆ— 0.75) βˆ’

660.72

40} (10.b)

π‘Š111 β‰₯ π‘Š112 (10.c)

π‘Š121 β‰₯ π‘Š122 (10.d)

π‘Š211 β‰₯ π‘Š212 (10.e)

π‘Š221 β‰₯ π‘Š222 (10.f)

π‘Š112 β‰₯ π‘Š113 (10.g)

π‘Š122 β‰₯ π‘Š123 (10.h)

π‘Š212 β‰₯ π‘Š213 (10.i)

π‘Š222 β‰₯ π‘Š223 (10.j)

π‘Š111 ≀ 35 (10.k)

π‘Š121 ≀ 30 (10.l)

π‘Š111 ≀ 60 (10.m)

π‘Š121 ≀ 15 (10.n)

π‘Š111, π‘Š112, π‘Š113, π‘Š121, π‘Š122, π‘Š123, β‰₯ 0, π‘–π‘›π‘‘π‘’π‘”π‘’π‘Ÿ (10.o)

Page 60: The Pennsylvania State University PLANNING THE MEANS OF

51

π‘Š211, π‘Š212, π‘Š213, π‘Š221, π‘Š222, π‘Š223, β‰₯ 0, π‘–π‘›π‘‘π‘’π‘”π‘’π‘Ÿ (10.p)

π·π‘’π‘£π‘–π‘Žπ‘‘π‘–π‘œπ‘› β‰₯ 0 (10.q)

Eq. 10: Integer Program for Two Product Lines
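A literal transcription of (10.b) can make the deviation arithmetic concrete. This sketch follows one reading of the equation layout, in which each operation's demand is divided by the 40-hour week; the dictionary encoding and function name are ours.

```python
# Transcription of (10.b): each term sums the productivity-weighted
# workers for one product line operation and subtracts that operation's
# demand divided by the 40-hour week.

DEMAND = {(1, 1): 1400.85, (1, 2): 1325.68,
          (2, 1): 2501.56, (2, 2): 660.72}
PRODUCTIVITY = (1.0, 0.85, 0.75)  # shifts one, two, three

def deviation(w):
    """w maps (product line, operation, shift) -> integer worker count."""
    return sum(
        sum(PRODUCTIVITY[s - 1] * w[(pl, op, s)] for s in (1, 2, 3))
        - DEMAND[(pl, op)] / 40
        for (pl, op) in DEMAND)
```

Under this transcription, adding one shift-one worker anywhere raises the deviation by exactly 1, a shift-two worker by 0.85, and a shift-three worker by 0.75, mirroring the productivity values in the constraint.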

In Eq. 10, the objective is to minimize the total deviation present within the shop across all product lines and operations (10.a). Deviation is defined in (10.b) as the sum, across product lines and operations, of the workers multiplied by their respective productivity values, with each product line operation's demand subtracted from its output. Next, we ensure that the number of workers on the second shift is at most the number of workers on shift one (10.c), (10.d), (10.e), (10.f); this has often been referred to as the priority rule, and it applies across all product line operations. Continuing with the priority rule, we require that the number of workers on shift three be at most the number of workers on shift two (10.g), (10.h), (10.i), (10.j). Next, we declare the capacity per product line operation. For the first product line, first operation, the limit is thirty-five work stations and ostensibly thirty-five people (10.k); for the first product line, second operation, that value is thirty (10.l). The capacity for the second product line, first operation, is sixty (10.m), and fifteen for the second operation of the second product line (10.n). Lastly, we declare that all workers across the various shifts, operations, and product lines must be nonnegative integer values (10.o), (10.p), while the deviation is required to be nonnegative (10.q). In defining the mathematical program for multiple product lines, we expand the overall deviation constraint to account for the increase in the number of product lines and the requisite operations; in essence, we would not have another product line without more operations. By expanding the number of product lines and operations, we are forced to hire workers in order to fill those respective shifts.

We follow the priority rule in filling the shifts, with the priority always towards the first shift. More constraints regarding the priority rule are necessary because there now exist more slots that workers could fill in order to contribute output. With the introduction of two new operations under the second product line, we also have to expand the shop to account for the required work stations; after all, workers require some sort of instrument in order to convert their labor into output. Lastly, we declare that all workers must be nonnegative integer values, while the deviation must be nonnegative. Normally, at this point in the discussion, we would present the optimal solution by operation (Op.) and product line (Pl.). This table would have rows corresponding to workers on their various shifts, the number of work stations required, how many shifts are necessary, the output a given worker configuration achieves, and the objective function value along with the execution time; the columns would be sorted by the sundry product lines and operations. In solving the problem described in Eq. 10, however, we run into a slight problem that is best illustrated in the following table.

Table 10: Number of Differing Combinations of Workers, Multiple Product Lines

Product Line, Operation   Number of Combinations
Pl. 1, Op. 1                                   8
Pl. 1, Op. 2                                  15
Pl. 2, Op. 1                                  19
Pl. 2, Op. 2                                   2


Table 10 communicates to the reader that there are simply too many worker combinations to present in table form. For the first product line, operation one, there are eight different ways to allocate workers across the various shifts to achieve an output that results in a minimized deviation. For the second product line, first operation, there are nineteen different ways to allocate workers that achieve the same output! The reason for this is the demand values: a larger demand admits more ways to allocate workers across the various shifts. The demand for the second product line, first operation, is approximately 2,500 hours, which explains why there are nineteen combinations of workers that achieve the same output. There are so many ways to allocate workers across the sundry product lines and operations that it becomes impractical to present them all, as has been our modus operandi for the previous two subsections. There exist multiple optimal solutions, and Table 10 makes this fact apparent. As a result, we present the Pareto Clouds and Frontiers for multiple product lines. We begin our analysis by focusing on product line one; the Pareto graphs consider each product line separately for clarity. See the figure below.

Figure 10: Relaxed Integer Program Pareto Efficiency and Frontiers, First Product Line

Figure 10 makes many of the same assumptions that have been made previously. We relax the integrality condition on the integer program, resulting in a relaxed integer program, or linear program. The horizontal axis is operation one workers and the vertical axis is operation two workers. The blue asterisks indicate the Pareto Cloud for shift one, the orange asterisks the Pareto Cloud for shift two, and the grey asterisks that for shift three. The Pareto Frontiers are represented as circles: yellow circles for shift one, dark blue for shift two, and green for shift three. Continuing with our analysis, we can present the Pareto Frontiers without the noise of the Pareto Clouds; it is nevertheless instructive to first present the Pareto Clouds and the Pareto Frontiers in a single figure, since by doing so the reader gains insight into the number of possible relaxed combinations that exist for the given demand values. Triplets can be formed from the Pareto Clouds that have the same output. For every point in the shift one set, corresponding points in the shift two and three clouds can be found that


have the same output and ostensibly the same objective function value as another triplet taken from each of the clouds. The next figure presents only the Pareto Frontiers.

Figure 11: Pareto Frontiers, Product Line One

In Figure 11, we observe the three distinct Pareto Frontiers for each of the shifts. They all have a generally positive slope. We notice significant overlap between the Pareto Fronts for the second and third shifts, whereas the first shift Pareto Frontier is rather isolated from the other two. To understand the tradeoffs occurring for each operation on the first product line, we plot the tradeoff lines. See the figure below.

Figure 12: Pareto Clouds and Frontiers by Methods with Tradeoff Lines, Product Line One


In Figure 12, we notice patterns similar to those already mentioned. To begin, we declare the extreme right point, which is anchored in the rightmost point on the shift one Pareto Frontier. It has a corresponding shift two worker value that is traceable by following the dashed line, and it terminates in a shift three worker value. The extreme left point is anchored in the leftmost point on the shift one Pareto Frontier; it also has corresponding shift two and three worker values, which follow the dash-dot line. Each of these examples has the same output, deviation, and objective function value. The difference between them is how each solution allocates workers across the various shifts, as is evident in Figure 12. Concerning the tradeoffs, we notice that as one moves leftward along the shift one Pareto Frontier, decreasing the number of first shift workers, the number of workers on the second and third shifts increases so as to maintain the same output as previously achieved. Similarly, if one moves rightward along the first shift Pareto Frontier, increasing the number of shift one workers, then the number of workers present on the second and third shifts must decrease so as to maintain an equivalent output, deviation, and objective function value. The above graphs apply to the relaxed integer program for the first product line. We can now introduce the simulated annealing algorithm and its Pareto Frontier. This will demonstrate the effect of integrality on the problem as well as illustrate the number of optimal solutions per operation for the first product line.
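The frontier extraction underlying all of these figures is a standard dominance filter. A generic sketch for two objectives, both treated as "smaller is better", follows; the point set is illustrative, not data from the thesis.

```python
# Extract a Pareto Frontier from a cloud of points: a point survives if
# no other point is at least as good in both coordinates and strictly
# better in at least one (equal points do not dominate each other).

def pareto_frontier(points):
    def dominates(p, q):
        return p[0] <= q[0] and p[1] <= q[1] and p != q
    return sorted(p for p in points
                  if not any(dominates(q, p) for q in points))

cloud = [(5, 13), (5, 14), (6, 13), (8, 8), (9, 8)]
print(pareto_frontier(cloud))  # [(5, 13), (8, 8)]
```

In the worker-allocation plots, the same filter applied to each shift's cloud of (operation one, operation two) points yields the circled or triangular frontiers shown in the figures.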

Figure 13: Pareto Clouds and Frontiers by Method, Operation 2, Product Line One

The first observation we wish to make for Figure 13 is that for operation two there exist five different integer combinations of workers that achieve the same output and objective function value. This fact is visible in the shift one cloud, where there are five dark grey triangles. Each of the triangles in the shift one cloud can be traced back to its respective shift two and shift three worker values. Focusing on shift two, notice that there are only three brown triangles; this is because there are only three different choices available, whereas for shift three every shift one allocation has a unique shift three worker value. As was pointed out, this graph not only indicates the effect of integrality constraints on the problem, but also illustrates the number of optimal


solutions for operation two. We now present a similar figure, but this time the focus is on the number of optimal solutions for operation one.

Figure 14: Pareto Clouds and Frontiers by Method, Operation 1, Product Line One

Figure 14, apart from demonstrating the effect of integrality constraints on the problem, also illustrates the number of optimal solutions available for operation one. Namely, there are eight different combinations of workers across the various shifts that achieve an equivalent output and objective function value. The number of optimal solutions is evident from the number of triangles present within the shift one cloud. There are only four triangles for shift two, indicating that the number of workers there can take on only four different values, while there are eight triangles for shift three, indicating eight different values. Thus, each of the triangles in the shift one and shift three simulated annealing Pareto Frontiers represents a unique value, while several of the shift two values recur across differing allocations. The goal of Figures 13 and 14 is to demonstrate the effects of introducing integrality on the problem and to show the number of optimal solution alternatives available for each operation. We use the word alternative because they all achieve the same output, and the choice of one particular allocation over another would ultimately be made by the shop manager; we simply present the different alternatives available to the shop given a particular demand. Now begins the discussion of the second product line. We continue with the standard modus operandi that has served us well for the past few subsections. To start, we present the Pareto Clouds along with the Pareto Frontiers, both generated via the relaxed integer program; see Figure 15 below. Figure 15 displays the Pareto Clouds and Pareto Frontiers for the relaxed integer program. The clouds are denoted by the variously colored asterisks, with blue for shift one, orange for shift two, and grey for shift three. The Frontiers are denoted by circles. The yellow circles apply to shift one, the blue circles to shift two, and the green circles to shift three. Notice that the axis scale for operation one workers on product line two is quite large compared to all previous problems. This is because of the increased demand figure. More workers are required to


contribute towards output such that the difference between output and demand represents a minimized value.

Figure 15: Relaxed Integer Program Pareto Efficiency and Frontiers, Product Line Two

We now present the Pareto Frontiers without the clouds to provide a clearer perspective regarding their location and bounds.

Figure 16: Pareto Frontiers, Product Line Two

Generally speaking, with regard to Figure 16, the Pareto Frontiers have a positive slope. It comes as no surprise that shift one has the highest worker values. This is due to the priority rule, which sets


limits on the second and third shift based on the overarching shift. In order to understand the tradeoffs occurring on the Pareto Frontiers we now present the next figure.

Figure 17: Pareto Clouds and Frontiers by Methods with Tradeoff Lines, Product Line Two

The concept of tradeoffs is displayed in Figure 17. We begin with the extreme right example, which has its origin in the rightmost point on the Pareto Frontier for shift one. We can follow the ensuing dashed line down into the shift two and three clouds and observe the corresponding worker values; this example terminates in the shift three cloud, displaying a worker value for shift three. The extreme left example is complementary with regard to placement on the shift one Pareto Frontier: its existence is rooted in the leftmost point. As we follow the dash-dot line, we observe the corresponding worker values for shifts two and three. The reason we go to pains to present the tradeoffs and their particular locations on the shift one Pareto Frontier is to demonstrate two extreme examples and the subsequent changes occurring on the other shifts as one increases or decreases shift one workers. Shift one was chosen because the remaining shifts are ultimately beholden to it; shifts two and three depend on its worker value since they are not allowed to exceed it. As the number of shift one workers decreases, we observe an ensuing increase in the shift two and shift three worker values. This makes sense because, in order to maintain the same output, deviation, and objective function value while decreasing shift one workers, we would have to increase the workers on shifts two and three. The converse holds if we were to increase the number of shift one workers: a decrease would be seen for shifts two and three. The tradeoffs occurring as we move between extreme points on the shift one Pareto Frontier thus concern whether we increase or decrease the number of workers on shifts two and three; there is an inverse relationship between shift one and shifts two and three, with an increase in one leading to a decrease in the other.
The final step for the second product line is to present the simulated annealing Pareto Frontiers. As was previously mentioned, we gain two important insights by doing this. The first is to observe the


effects of integrality on the problem, since the relaxed integer program is in essence a linear program lacking integrality constraints. The second is to display the multiple optimal solutions in graphical form for each operation. We begin by displaying the multiple optimal solutions for operation two; see the figure below.

Figure 18: Pareto Clouds and Frontiers by Method, Operation 2, Product Line Two

From Figure 18, it is apparent that there exist two optimal solutions for the second operation of product line two. The simulated annealing Pareto Frontiers are denoted by triangles for their respective shift clouds. The Pareto Frontier for the first shift is located on the extreme edge of the feasible region. We know this because the number of workstations, and thus workers, allowed within the shop for that particular operation is sixty, which is a constraint in our integer program; this constraint is active for our simulated annealing algorithm. The simulated annealing Pareto Frontiers are displaced a considerable Cartesian distance from their respective relaxed integer programming Pareto Frontiers. Returning to the notion of multiple optimal solutions, each of the two points within a respective shift cloud has a unique combination of worker values on the other shifts. This is apparent because there are two Frontier points present within each shift cloud, indicating that each worker shift value is unique for that given combination. In Figure 19, we present the simulated annealing Pareto Frontiers superimposed on the relaxed integer program Pareto Frontiers. From this graph we gain insight into the number of optimal solutions that exist for operation one on the second product line, evident in the number of points plotted within the shift one cloud. The dark green triangles correspond to the simulated annealing Pareto Frontiers for shift one, the purple triangles for shift two, and the dark brown for shift three. There exist nineteen different points within the shift one cloud, indicating nineteen unique values that workers on the first shift can assume. For shift two, there are only six different values to which those workers can be allocated, and for shift three, that number is fourteen.
Thus, not every triangular point maps to a unique worker value in shifts two and three, as we observed in Figure 18.


Figure 19: Pareto Clouds and Frontiers by Method, Operation 1, Product Line Two

This finishes our discussion of the scalability of the basic model when expanding the number of product lines, which followed the expansion of the number of operations. The topic thus far has been how simulated annealing compares to integer programming in its ability to solve the problem and achieve a quality solution. The simulated annealing algorithm was able to obtain a number of solutions identical to the integer programming result; thus, we can conclude that simulated annealing represents a feasible solution technique for the problem under consideration. We shall comment on the systems dynamic optimization model later in the section. We attribute this success to the algorithm's consistent ability, across many applications, to produce a robust series of optimal solutions that was in turn mirrored by the results obtained from integer programming. This is a notable achievement because mathematical programming is typically considered an exact method: under necessary and sufficient conditions it is assured to converge on an optimal solution. Metaheuristics, by contrast, are generally expected to provide sufficiently good approximations but are not considered exact methods, and that assurance of convergence is missing. Thus, we have demonstrated that for our specific problem the metaheuristic, simulated annealing, was able to produce results with a solution quality equivalent to that of integer programming.
Proceeding onward, we can have the same sort of discussion that occurred in the multiple operations section regarding the Pareto frontier for feasible simulated annealing solutions. The algorithm collects all of the feasible combinations, so we perform a few arithmetic calculations to obtain the total number of workers per shift across all operations and product lines. Once that has been obtained, we can calculate the cost of each feasible configuration and take the absolute value of the objective function values. Again, we do this because a negative deviation


occurs when output is less than demand, and for the purpose of creating Pareto frontiers we require a positive objective function value. See the figure below.

Figure 20: Feasible Cost Deviation Pareto Frontier, Multiple Product Lines

In Figure 20, the horizontal axis is cost and the vertical axis is the absolute value of the objective function value. The blue asterisks denote the simulated annealing feasible cloud, formed by all feasible points graphed by their respective cost and objective function value. The cost deviation Pareto Frontier is denoted by the orange circles. The tradeoffs occurring between cost and the objective function are very apparent as one moves along the Pareto Frontier. At one extreme, we would have a minimized cost, which results in a large or maximized objective function value. At the other extreme, we can minimize the deviation while paying the maximum cost on the frontier, though only relative to the other points within the cloud. There then comes the decision of choosing an optimal solution with regard to cost out of all of the optimal solutions that were based on deviation. Having a myriad of optimal solutions optimized with regard to deviation between output and demand, how can we optimize with regard to cost and achieve a single solution? This single optimal solution would be optimized with respect to both deviation and cost; we might consider it a global optimal solution. To achieve this global optimal solution, we employ the same scheme that was developed in the previous sub-section. We introduce prices based on Table 7, which are hourly prices for each shift, and apply the labor costs to the optimal solutions across the operations for each product line. Once this has been implemented, we simply select the allocation or configuration that exhibits the lowest cost per operation. The total cost is obtained by combining all operation costs and multiplying that figure by forty hours, since a forty-hour workweek is standard. We present Figure 21 below.
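The cost deviation Pareto Frontier of Figure 20 can be extracted from the feasible cloud with a standard non-dominated sweep, keeping only points that improve deviation as cost increases. A minimal sketch follows; the point values are illustrative placeholders, not the thesis data:

```python
def pareto_frontier(points):
    """Return the non-dominated subset of (cost, deviation) pairs,
    minimizing both coordinates, sorted by increasing cost."""
    frontier = []
    best_dev = float("inf")
    # Sweep by increasing cost; keep a point only if it strictly
    # improves on the best deviation seen so far.
    for cost, dev in sorted(points):
        if dev < best_dev:
            frontier.append((cost, dev))
            best_dev = dev
    return frontier

# Illustrative feasible cloud: (cost, |objective function value|) pairs.
cloud = [(100, 9.0), (120, 4.0), (110, 6.0), (130, 4.5), (150, 1.0)]
print(pareto_frontier(cloud))  # [(100, 9.0), (110, 6.0), (120, 4.0), (150, 1.0)]
```

The dominated point (130, 4.5) is discarded because (120, 4.0) is both cheaper and closer to demand, which is exactly the relationship the orange frontier circles in Figure 20 capture.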


Figure 21: Deviation Cost Pareto Frontier, Two Product Lines

In Figure 21 we present the tradeoffs one experiences between deviation and cost for each product line operation under consideration. Each point within its respective line has the same deviation because we have already optimized with respect to deviation, and the solutions obtained all have equivalent objective function values. Given that deviations are constant for all of the plotted points, it is simple to determine the optimal configuration for each product line operation. We present those configurations in the below Table.

Table 11: Global Optimal Solution Configurations with Cost, Two Product Lines

The configurations presented in Table 11 represent the product line operation optimal solutions when optimized with respect to the costs detailed in Table 7. It is not always the case that the optimal solution is the configuration with the largest number of workers on shift one. Thus, after applying cost and optimizing the ensuing optimal solutions, we achieve a global optimal solution to the two product line problem described at the beginning of this sub-section. We make no claims about product prices; for the purposes of this paper, the reader can assume that they do not exist, as we are only concerned with meeting our quotas and calculating labor costs. To wrap up this subsection, we present the findings from the systems dynamic optimization model. It was not mentioned in the above discussion because it did not achieve an optimal solution, as is apparent in the below table.

                  Product Line 1               Product Line 2
                  Operation 1   Operation 2    Operation 1   Operation 2
Worker Shift 1         31            21             60            14
Worker Shift 2          3             9              3             3
Worker Shift 3          2             3              0             0
Cost ($)          145,200       133,800        252,000        68,000
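The cost-based selection described above, applying hourly shift rates to each deviation-optimal configuration and keeping the cheapest per operation, can be sketched as follows. The rates and candidate configurations here are hypothetical placeholders; the thesis's actual hourly prices are in Table 7:

```python
# Hypothetical hourly rates per shift ($/worker-hour); Table 7 holds the real values.
RATES = {1: 15.0, 2: 18.0, 3: 21.0}
HOURS_PER_WEEK = 40  # standard forty-hour workweek

def config_cost(workers_by_shift):
    """Weekly labor cost of one configuration: for each shift,
    workers * hourly rate, summed and multiplied by 40 hours."""
    return sum(RATES[s] * n for s, n in workers_by_shift.items()) * HOURS_PER_WEEK

def cheapest(configs):
    """Among deviation-optimal configurations for one operation,
    keep the one with the lowest weekly labor cost."""
    return min(configs, key=config_cost)

# Two deviation-optimal configurations for a single operation (illustrative).
candidates = [{1: 20, 2: 13, 3: 6}, {1: 18, 2: 14, 3: 8}]
best = cheapest(candidates)
print(best, config_cost(best))  # prints: {1: 20, 2: 13, 3: 6} 26400.0
```

The shop-wide total is then the sum of the per-operation minima, which mirrors the "combine all operation costs and multiply by forty hours" step in the text.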


Table 12: Systems Dynamic Optimization Results, Multiple Product Lines

Table 12 presents the combinations found by the systems dynamic optimization method when minimizing deviation. These configurations are not optimal and will not produce a global optimal solution that rivals the one produced by the simulated annealing algorithm and integer programming; for these reasons, systems dynamic optimization was not included in the discussion above. This concludes our experiments relating to the scalability of our problem. The problem can be scaled up with regard to both operations and product lines. We began with a base model that had only a single product line and operation and showed that simulated annealing produced a solution whose quality matched that of integer programming, while systems dynamic optimization was unable to provide a solution of the same quality. The base model was then scaled up to include two operations for a single product line, with the same results: simulated annealing matched the solution quality of integer programming and systems dynamic optimization did not. Finally, we increased the number of product lines to two, each containing two operations. Again, simulated annealing found a quality solution matching integer programming, and systems dynamic optimization failed to produce one. Through these scalability experiments we also demonstrated that the simulated annealing algorithm represents a viable solution technique for the problem, since it was able to deliver solutions whose caliber was equal to integer programming; this was not the case for systems dynamic optimization. Within this section, we illustrated the scalability of the methodology and the solution techniques, as well as separating the wheat of simulated annealing from the chaff of systems dynamic optimization. We now proceed to testing the robustness of the methodology.

4.B. Robustness

Leaving now the realm of optimization and optimal solutions, we turn to measures of robustness or quality. We have already discussed solution quality for the simulated annealing algorithm and the systems dynamic model when compared to integer programming, which was considered to have set the benchmark. By and large, we observed that the simulated annealing algorithm was able to deliver a solution whose quality was equivalent to that of integer programming; by this we mean that the two solutions were identical and both achieved optimality. Thus, we were able to demonstrate that simulated annealing presents itself as a feasible solution technique for the resource allocation problem described for the job shop. This is important because metaheuristic techniques represent a nontraditional class of solution techniques, whereas one might traditionally turn to integer programming.

                            Product Line 1               Product Line 2
                            Operation 1   Operation 2    Operation 1   Operation 2
Worker Shift 1                   20            20             50            13
Worker Shift 2                   13            10             10             3
Worker Shift 3                    6             7              6             2
Total Shifts Filled               3             3              3             3
Work Stations                    20            20             50            13
Output (Hrs.)                  1422          1350           2520           682
Objective Function Value      21.15         24.32          18.44         21.28
Execution Time (sec.)             -             -              -         17.95


Thus, we highlight the utility of a nontraditional method for solving a particular problem. Systems dynamic optimization, however, was unable to provide the same solution quality, which is lamentable because its use as a tool to allocate resources based on demand is, so far as we know, unique. It is all well and good that the simulated annealing algorithm delivers a result on par with integer programming, but it comes time to ask: how robust is the algorithm? In asking this question, we concern ourselves not with the output but with the input. The underlying assumption of the algorithm is that there is some random demand that is known and must be planned for. This leads to the formulation of workers, work stations, and shifts in order to try to meet that demand. The previous experiments were concerned with achieving an output that minimized the difference between output and demand, which we called deviation. We now look at the execution of the algorithm and the role that variance, or noise, plays. If the algorithm is robust, we should expect its behavior to be stable across various levels of noise. We define noise in the context of the random demand. Demand is drawn from a random distribution, and the fact that it is known allows planners to allocate workers across various shifts and work stations so as to better utilize the shop as a whole. Our goal, therefore, was to obtain a long run estimate of workers across the various work stations and shifts that minimizes deviation, the understanding being that long run deviation would be minimized. The reason for this is that it makes no sense to try to plan on a week-to-week basis: it is impractical, from both an economic and a moral standpoint, to fire workers one week and then hire some of them back the next because of demand fluctuations.
Besides, under such a policy workers are never fully trained and will most likely never operate as efficiently as they would in long-term employment. In order to determine whether the algorithm is robust, we must first measure the amount of noise present from variance in the random demand distributions. To do this we utilize the signal to noise (S/N) ratios described by Genichi Taguchi.⁡⁸ See the equations below.

πœ‚ = βˆ’10 log (𝑠2) (11.a)

Eq. 11: Nominal-is-best Mean & Variance Signal to Noise Ratios

πœ‚ = 10 log (οΏ½Μ…οΏ½

𝑠)

2

(12.a)

Eq. 12: Nominal-is-best Variance Only Signal to Noise Ratios

For Eq. 11, eta (Ξ·) is the signal to noise ratio to be calculated, and s is the sample standard deviation of the characteristic under consideration (11.a). For Eq. 12, Ξ· and s remain as in (11.a), and yΜ„ is the sample average of the characteristic being considered (12.a). There are three different types of signal to noise equations. The first is smaller-is-better, where one wants to minimize some undesirable effect; Taguchi uses wear on a piston as an example. The second is larger-is-better, where the characteristic needs to be maximized, such as amplification or power. These characteristics are just variations of a common nominal form. For smaller-is-better, the target or nominal value is zero, while for larger-is-better the target is positive infinity. Thus, by defining a target, we can derive the smaller-is-better and larger-is-better equations, which is why we chose to present the nominal-is-best formulations. Another reason is that we actually do have targets we are trying to achieve. Noise is due to variability from the random demand distribution; to measure it, we must first capture a sufficiently large number of samples from the distribution. This is easily accomplished within the framework of the algorithm. The sample size is set to 200 per operation: 800 samples for the multiple product line model, since it has four operations; 400 for the multiple operations model; and 200 for the base model, which has a single operation. The characteristic under consideration is therefore the demand produced by each of the random distributions. We collect at the operation level because each operation has a specific demand toward which the workers contribute output. The targets are the long run demand figures that have already been calculated and are present within the integer programming models. For convenience, we present the targets in the below table.

⁡⁸ Taguchi, G. (1986). Introduction to Quality Engineering: Designing Quality into Products and Processes. Asian Productivity Organization, White Plains, New York.

Table 13: Signal to Noise Targets

Table 13 presents the target values for each of the operations by their respective model. These values are the long run distribution averages after 50,000 iterations. Now that we understand the targets, we proceed to the signal to noise ratios and their respective calculations. Broadly speaking, a signal to noise ratio is the ratio of the mean response to the standard deviation: the signal is the mean response and the noise is the standard deviation. In a perfect setting, there would be no standard deviation to blur the mean value, and we would obtain a single value that completely lacks ambiguity. Unfortunately, that is never the case, and we must always allow for error in any measurement. We now present a table containing the average value over the 200 observations, the sample standard deviation, the signal to noise ratio mean and variance, and the signal to noise ratio variance only, for all models discussed in the scalability subsection. See the table below.

Model           Multiple Pl.   Multiple Op.   Base Model
Pl. 1, Op. 1        1400.8         363           363.91
Pl. 1, Op. 2        1325.7         617.11          -
Pl. 2, Op. 1        2501.6           -             -
Pl. 2, Op. 2         660.72          -             -

Table 14: Signal to Noise Responses

Model              Operation      yΜ„          s        S/N Mean & Var.   S/N Var.
Multiple Product   Pl. 1, Op. 1   1417.71    948.97        -59.55          3.49
Lines              Pl. 1, Op. 2   1284.78    533.38        -54.54          7.64
                   Pl. 2, Op. 1   2506.44   1579.71        -63.97          4.01
                   Pl. 2, Op. 2    668.62    281.87        -49.00          7.50
Multiple           Pl. 1, Op. 1    652.84    454.35        -53.15          3.15
Operations         Pl. 1, Op. 2    857.15    371.67        -51.40          7.26
Base Model         Pl. 1, Op. 1    354.74    239.13        -47.57          3.43


In Table 14, we present the average of the 200-point samples obtained for each operation as yΜ„, displayed for each operation within a product line for a specific model; there are a total of seven averages. Each average has an associated standard deviation, presented in the s column for each product line operation within a specific model. The S/N Mean & Var. column gives the values of the signal to noise ratio mean and variance formula, and the S/N Var. column those of the signal to noise ratio variance only formula. The difference between the two is subtle; both are aimed at meeting the target and reducing variability within the characteristic. The S/N Mean & Var. formula first seeks to reduce the variability within the characteristic and then drive the mean to the nominal or target value, while the S/N Var. formulation seeks only to reduce variation in the characteristic. With regard to specific signal to noise ratio values, it is important to note that higher values typically indicate settings that have a minimizing effect on the noise. Since all of our values are rather low or negative, we can conclude that the noise due to variability within the random distribution is large compared to the signal. Signal to noise values are abstract and difficult to interpret. To provide a more intuitive measure of noise within our random demand distributions, we can present values in terms of a loss function. A loss function describes, in monetary terms, the loss of quality due to variation. It contrasts with a system that merely checks whether the specification is within the tolerance level; a loss function recognizes that every part produced will have some deviation from the mean or target. A black and white world would be one wherein a product is either defective or not. Taguchi's loss function indicates that this black and white world is really more of a grey scale, wherein products exhibit varying degrees of deviation, and the greater the deviation, the greater the loss.
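The two nominal-is-best ratios, as labeled in this thesis (Eq. 11 and Eq. 12), are direct to compute from the sample mean and standard deviation. A sketch, using the Pl. 1, Op. 1 multiple product line figures from Table 14 as a check:

```python
import math

def sn_mean_var(s):
    """Eq. 11 as labeled in this thesis: eta = -10 * log10(s^2)."""
    return -10 * math.log10(s ** 2)

def sn_var_only(y_bar, s):
    """Eq. 12 as labeled in this thesis: eta = 10 * log10((y_bar / s)^2)."""
    return 10 * math.log10((y_bar / s) ** 2)

# Pl. 1, Op. 1, multiple product lines: y_bar = 1417.71, s = 948.97 (Table 14).
print(sn_mean_var(948.97))           # approx. -59.55, matching S/N Mean & Var.
print(sn_var_only(1417.71, 948.97))  # approx.   3.49, matching S/N Var.
```

Note that the large negative Eq. 11 values in Table 14 follow directly from the large sample standard deviations, which is the basis for the conclusion that the noise is large relative to the signal.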

Before calculating these loss functions, there are a few constants that must be clearly and carefully defined. The first is Δ₀, the upper tolerance level defined by the consumer. The next is Aβ‚€, which is essentially the cost of exceeding that upper tolerance level. To begin, we set Δ₀ equal to one hour. The reason for such a stringent tolerance level is that the plan requires the shop to be efficient; overages should not be permitted. Yet we also realize that the plan is not without flaws, and we are willing to accept small amounts of overage, which is why the optimization problem seeks to minimize the difference between output and demand, or deviation. With Δ₀ set to one hour, Aβ‚€, the cost due to that hourly overage, is relatively easy to calculate: simply take the global optimum number of workers per shift, multiply that by the cost per shift, and multiply by Δ₀ to obtain the cost per hour for each product line operation configuration. See the below table for the specific values of Aβ‚€.

Table 15: Aβ‚€ Values per Product Line Operation for each Model

Now that Δ₀ and Aβ‚€ have been determined, we can proceed to the calculation of loss functions for each product line operation on a specific model. The loss function equation is seen below.

L = (Aβ‚€/Δ₀²) Γ— (sΒ² + (yΜ„ βˆ’ T)Β²)   (13.a)

        Multiple Product Lines                                     Multiple Operations          Base Model
        Pl. 1, Op. 1  Pl. 1, Op. 2  Pl. 2, Op. 1  Pl. 2, Op. 2    Pl. 1, Op. 1  Pl. 1, Op. 2    Pl. 1, Op. 1
Aβ‚€ ($)      3630          3345          6300          6300            1615          1015            1015


Eq. 13: Loss Function

For Eq. 13, the loss function L is defined by the following terms: Δ₀ is the upper tolerance level and Aβ‚€ is the cost of violating that upper tolerance level, both of which are known constants; s is the sample standard deviation, yΜ„ is the sample average, and T is the individual target (13.a). Loss functions are not meant to replace the signal to noise ratios, but merely to supplement them. We present the loss function values in the below Table.

Table 16: Loss Function Values

The loss function values for each product line operation and model are presented in Table 16. Here we get another opportunity to analyze noise within the random demand distributions, this time under the guise of losses. Take Pl. 2, Op. 1 for multiple product lines: it has the highest loss function value, at 15,721.65 million dollars. From this, we gather that the sample demands were significantly off from the long run demand or target value, which is why the loss value is so large. Contrast this with the Base Model, which has the lowest loss function value of all the respective operations; we conclude that its sample demands are quite close to the target demand, which is why it exhibits such a low loss value. Thus, higher loss function values should be taken as an indication that the sample demands are far from their respective target values, compared to lower loss function values that are closer to the target. When interpreting loss function values, we offer two cautionary notes. First, we make no claims about the individual values representing any actual monetary loss to the shop due to quality deviations from the target. Second, we suggest considering only the magnitude of the value and treating it as directly correlated with noise. Keep in mind that higher losses are the direct result of large deviations from the target value; it is in this way that we are able to draw parallels between loss function magnitudes and the noise inherent in the random demand distributions. Returning to the concept of noise, we can take loss function values to be an indication of noise within the random demand distribution. Examples like Pl. 2, Op. 1 for multiple product lines exhibit a good deal of noise, as recognized in their elevated loss function values; the reason the sample demands are not able to meet their target values is noise.
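The Table 16 entries follow directly from Eq. 13 with Δ₀ = 1 hour. A sketch checking the Pl. 1, Op. 1 multiple product line figure, taking Aβ‚€ from Table 15 and yΜ„, s, and T from Tables 13 and 14:

```python
def taguchi_loss(a0, delta0, s, y_bar, target):
    """Eq. 13: L = (A0 / delta0^2) * (s^2 + (y_bar - target)^2)."""
    return (a0 / delta0 ** 2) * (s ** 2 + (y_bar - target) ** 2)

# Pl. 1, Op. 1, multiple product lines (Tables 13, 14, and 15).
loss = taguchi_loss(a0=3630, delta0=1, s=948.97, y_bar=1417.71, target=1400.8)
print(loss / 1e6)  # approx. 3270 million dollars, matching Table 16's 3269.99
```

Because s enters squared, the sample variance dominates the (yΜ„ βˆ’ T)Β² bias term here, which is consistent with the text's reading of large losses as a proxy for noise rather than for systematic miss of the target.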
The amount of noise present within the random demand distribution manifests itself in the magnitude of the loss function values displayed in Table 16. So then, the question becomes: is the algorithm robust? The answer is ultimately yes. Clearly, Table 16 presents a number of operations that have a good deal of noise present within their random demand distributions and others that exhibit markedly low levels of noise, all recognizable in the magnitude of the loss function values; loss values increase as deviation from the target increases. But how is the algorithm robust? Recall that our definition of robustness is stable performance across various levels of noise. It is clear from Table 16 that there are various levels of noise present within the random demand distributions that the algorithm deals with when searching for optimal solutions. The conclusions from the previous experiments were that the solution quality of integer programming and simulated annealing are equivalent. Thus, the algorithm is stable across multiple runs at differing levels of noise.

               Multiple Product Lines                                     Multiple Operations          Base Model
               Pl. 1, Op. 1  Pl. 1, Op. 2  Pl. 2, Op. 1  Pl. 2, Op. 2    Pl. 1, Op. 1  Pl. 1, Op. 2    Pl. 1, Op. 1
L (million $)      3269.99        957.22     15,721.65        135.18         442.43        198.69           59.60


We have already noted that the performance of the simulated annealing algorithm was consistent with respect to integer programming across the various applications: the solutions that both methods achieved were identical, which allowed us to conclude that the simulated annealing algorithm is a feasible solution technique for the problem under consideration. We have also shown that there exist varying degrees of noise within the random demand distributions and yet the performance is unaffected. Thus, we conclude that the algorithm is robust: the inputs exhibit varying degrees of noise in their random distributions, and yet the performance is still equivalent to that of integer programming across the various applications.


5. CONCLUSIONS

The past many pages have detailed analyses and experiments aimed at solving a resource allocation problem. The general premise was that for a manufacturing cell resembling a job shop, demand for produced products is increasing. Subsequent increases in workers, work stations, and shifts must be made to meet that increased demand. Planners know the random demand in advance and can make adjustments before the demand hits, so as to achieve a configuration that minimizes the deviation between output and that random demand. We defined output to be the collective work achieved by the workers on their work stations across their various shifts. Work stations are an integral planning object because they allow workers to convert their labor into output. Each shift faces its own productivity challenges due to a myriad of factors; the productivity of the first shift provides the benchmark by which all others are measured. In order to solve the problem and achieve a configuration that minimizes deviation, we looked at several different alternatives. The first was integer programming, which is considered the standard tool for approaching and solving such a problem. There are, of course, alternatives to mathematical programming. One is metaheuristics, which are typically not considered exact methods like their mathematical programming counterparts; nonetheless, they represent a class of solution techniques that pride themselves on delivering "good enough" solutions in a quick time frame. The last avenue we wished to explore was simulation modeling, whose application to this problem might seem unusual and novel, which is why it was selected for inclusion. Our goal was to use the prevailing method of mathematical programming to measure the effectiveness of the metaheuristic and the simulation model with regard to solution quality.
We already know that mathematical programming is a feasible solution technique, but what about metaheuristics and simulation modeling? The choice of integer programming over linear programming is natural because workers, work stations, and shifts must be represented as integer values; it would be impractical, and most likely expensive and messy, to have half a worker on the shop floor. The choice of simulated annealing as the metaheuristic was also natural because it lends itself more easily to resource allocation problems than, say, a genetic algorithm. The difficulty with a genetic algorithm would be binary encoding all of the differing options and interpreting the output; that is not to say it cannot be done, just not in this paper. Lastly, we chose systems dynamic modeling as the simulation modeling tool. Applications of simulation modeling include planning the number of workers for a certain application; thus, it sounded like a good fit. The novelty in applying systems dynamic modeling to the problem is that we are planning not only the number of people, but also the number of work stations and shifts; all of these variables affect the overall output and deviation. Thus, we take the application one step further by having the number of workers be a variable that is ultimately beholden to demand. In determining whether the less common methods of simulated annealing and systems dynamic modeling constitute feasible solution techniques, we designed and executed two experiments. The first was a scalability test that scaled the problem up to more complex levels in order to observe the performance of the methods. The second was a robustness test to assess whether or not the simulated annealing algorithm is robust.


The results of the scalability experiment painted a very favorable picture of simulated annealing with regard to achieving a solution of quality equivalent to integer programming. This occurred across multiple scaling applications, such as expanding the number of operations and then expanding the number of product lines. Systems dynamic modeling was never able to achieve the same solution quality; that is not to say that the results it produced were drastically bad, but they never achieved optimality, whereas integer programming and simulated annealing did. In large part, this may be due to the optimizer built into the software package. From this experiment, we conclude that the simulated annealing algorithm represents a feasible solution technique for the resource allocation problem. Our last experiment concerned whether the simulated annealing algorithm is robust. For an algorithm to be robust, we determined that performance must be steady across various levels of noise; our focus thus shifted from analyzing the outputs of the algorithm to inspecting the inputs. Planners, when developing an allocation plan, base it on the random demand that the shop will be expected to meet, a figure known to them. Thus, the noise inherent in the algorithm is due to the variability within the random demand distributions. For the algorithm to be robust, its performance must be unaffected by differing levels of noise, so we had to determine the noise levels inherent in the random demand distributions. This was done through the calculation of signal to noise ratios and loss function values, all of which are described and presented by Taguchi. Our findings revealed that noise varies across operational levels per product line and specific model.
In calculating the loss function, we are not interested in the literal financial value, but rather in its magnitude, with higher values indicating larger deviations from the target. We determined that noise from the random demand distributions at the operation level was highly variable: certain operations displayed more noise than others, and yet performance across the multiple applications was unchanged by the noise levels. This is apparent because we had already established that the simulated annealing algorithm's solution quality is equivalent to that of integer programming; we therefore concluded that the algorithm is robust. In summary, simulated annealing is a feasible solution technique for the resource allocation problem because the results it produced were optimal and equivalent to those of integer programming, and it is robust because its performance is unaffected by the varying noise levels of the random demand inputs. Systems dynamic modeling does not represent a feasible solution technique because its results were only feasible, not equivalent to those of integer programming.
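Taguchi's smaller-the-better signal-to-noise ratio and quadratic loss can be sketched in a few lines of Python. This is an illustration of the formulas only; the deviation values and the loss coefficient k below are illustrative assumptions, not thesis data:

```python
import math

def sn_smaller_the_better(values):
    """Taguchi smaller-the-better S/N ratio: -10 * log10(mean of squared values)."""
    return -10 * math.log10(sum(v * v for v in values) / len(values))

def quadratic_loss(y, target, k=1.0):
    """Taguchi quadratic loss L = k * (y - target)^2; only its magnitude matters here."""
    return k * (y - target) ** 2

deviations = [0.8, 1.2, 0.5, 2.1]      # illustrative weekly deviations from demand
print(sn_smaller_the_better(deviations))
print(quadratic_loss(36.0, 35.02))
```

Lower (more negative) S/N values and larger loss values both signal greater deviation from the target, which is how the noise comparison across operations is read.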

5.A. Future Work

Future work would include investigating why the systems dynamic optimization model was never able to attain optimality, as well as analyzing the interactions that changing shifts or workers might have on the flow of operations within the job shop. Another avenue is to move from a systems dynamic point of view to an agent-based modeling approach in which the agent is a worker or a part within the shop.


References

Adams, J., Balas, E., Zawack, D. (1988). The Shifting Bottleneck Procedure for Job Shop Scheduling. Management Science. Volume 34. No. 3. 391 – 401.

Aerts, J., Heuvelink, G. (2002). Using Simulated Annealing for Resource Allocation. Geographical Information Science. Volume 16. No. 5. 571 – 587.

Al-Aomar, R. (2006). Incorporating Robustness into Genetic Algorithm Search of Stochastic Simulation Concepts. Simulation Modelling Practice and Theory. Volume 14. Issue 3. 201 – 223.

Angerhofer, B., Angelides, M. (2000). Systems Dynamic Modeling in Supply Chain Management: Research Review. Proceedings of the 2000 Winter Simulation Conference. 342 – 351.

Auslender, A. (1999). Penalty and Barrier Methods: A Unified Framework. SIAM Journal on Optimization. Volume 10. No. 1. 211 – 230.

Bazaraa, M., Sherali, H., Shetty, C. M. (2006). Nonlinear Programming: Theory and Algorithms. Wiley. Hoboken, New Jersey.

Blake, G. (1946). British Ships and Shipbuilders. Collins. London.

Borshchev, A., Filippov, A. (2004). From System Dynamics and Discrete Event to Practical Agent Based Modeling: Reasons, Techniques, Tools. The 22nd International Conference of the System Dynamics Society.

Brailsford, S., Hilton, N. (2001). A Comparison of Discrete Event Simulation and System Dynamics for Modeling Healthcare Systems. Proceedings of the 26th Meeting of the ORAHS Working Group. 18 – 39.

Chase, R., Jacobs, F., Aquilano, N. (2006). Operations Management for Competitive Advantage. McGraw-Hill/Irwin. New York, New York.

Collofello, J., Houston, D., Rus, I., Chauhan, A., Sycamore, D., Smith-Daniels, D. (1998). A Systems Dynamics Software Process Simulator for Staffing Policies Decision Support. Proceedings of the 31st Annual Hawaii International Conference on System Sciences. 103 – 111.

Czyzak, P., Jaszkiewicz, A. (1998). Pareto Simulated Annealing – A Metaheuristic Technique for Multiple Objective Combinatorial Optimization. Journal of Multi-Criteria Decision Analysis. Volume 7. 34 – 47.

Della Croce, F., Tadei, R., Volta, G. (1995). A Genetic Algorithm for the Job Shop Problem. Computers & Operations Research. Volume 22. No. 1. 15 – 24.

Dijkstra, E. W. (1959). A Note on Two Problems in Connexion with Graphs. Numerische Mathematik. Volume 1. 269 – 271.

Dorigo, M., Gambardella, L. (1997). Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem. IEEE Transactions on Evolutionary Computation. Volume 1. No. 1. 53 – 66.

Dorigo, M., Stutzle, T. (2009). Ant Colony Optimization: Overview and Recent Advances. Institut de Recherches Interdisciplinaires et de Developpements en Intelligence Artificielle. Technical Report No. TR/IRIDIA/2009-013.

Duguay, C., Landry, S., Pasin, F. (1997). From Mass Production to Flexible/Agile Production. International Journal of Operations & Production Management. Volume 17. Issue 12. 1183 – 1195.

Forst, W., Hoffman, D. (2010). Optimization – Theory and Practice. Springer Undergraduate Texts in Mathematics and Technology. New York, New York.

Fritzsche, P. (1996). Nazi Modern. Modernism/Modernity. Volume 3. No. 1. 1 – 22.

Glover, F., Taillard, E., de Werra, D. (1993). A User's Guide to Tabu Search. Annals of Operations Research. Volume 41. Issue 1. 1 – 28.

Glover, F., Kelly, J., Laguna, M. (1995). Genetic Algorithms and Tabu Search: Hybrids for Optimization. Computers & Operations Research. Volume 22. No. 1. 111 – 134.

Granic, I., Patterson, G. (2006). Toward a Comprehensive Model of Antisocial Development: A Dynamic Systems Approach. Psychological Review. Volume 113. No. 1. 101 – 131.

Hounshell, D. (1984). From the American System to Mass Production, 1800 – 1932. The Johns Hopkins University Press. Baltimore, Maryland.

Khan, M., Ross, K. (1975). Cyclical and Secular Income Elasticities of the Demand for Imports. The Review of Economics and Statistics. Volume 57. No. 3. 357 – 361.

Macal, C., North, M. (2009). Tutorial on Agent-Based Modeling and Simulation. Proceedings of the 2009 Winter Simulation Conference.

Mangasarian, O. (1994). Nonlinear Programming. Society for Industrial and Applied Mathematics. Philadelphia, Pennsylvania.

Nahar, S., Sahni, S., Shragowitz, E. (1986). Simulated Annealing and Combinatorial Optimization. Proceedings of the 23rd ACM/IEEE Design Automation Conference. Las Vegas, Nevada. 293 – 299.

Nocedal, J., Wright, S. (2006). Numerical Optimization. Springer. New York, New York.

Nove, A. (1992). An Economic History of the USSR: 1917 – 1991. Third Edition. Penguin Press. London.

Nowicki, E., Smutnicki, C. (1996). A Fast Taboo Search Algorithm for the Job Shop Problem. Management Science. Volume 42. No. 6. 797 – 813.

Pezzella, F., Morganti, G., Ciaschetti, G. (2008). A Genetic Algorithm for the Flexible Job-Shop Scheduling Problem. Computers & Operations Research. Volume 35. 3202 – 3212.

Ramasesh, R. (1990). Dynamic Job Shop Scheduling: A Survey of Simulation Research. Omega. Volume 18. Issue 1. 43 – 57.

Reeves, C. (1995). A Genetic Algorithm for Flowshop Sequencing. Computers & Operations Research. Volume 22. No. 1. 5 – 13.

Reynolds, C. (1987). Flocks, Herds, and Schools: A Distributed Behavioral Model. Computer Graphics. Volume 21. No. 4. 25 – 34.

Richmond, B. (1994). Systems Thinking/System Dynamics: Let's Just Get on With It. System Dynamics Review. Volume 10. No. 2 – 3. 135 – 157.

Scholl, H. (2001). Agent-Based and System Dynamics Modeling: A Call for Cross Study and Joint Research. Proceedings of the 34th Hawaii International Conference on System Sciences. 1 – 8.

Smith, E. A., Carpenter, W. C. (1978). A Feasible Direction Method Based on Zoutendijk's Procedure P1. Engineering Optimization. Volume 8. 109 – 112.

Spufford, F. (2012). Red Plenty. Graywolf Press. Minneapolis, Minnesota.

Steinbeck, J. (1945). Cannery Row. Viking Press. New York, New York.

Taguchi, G. (1986). Introduction to Quality Engineering: Designing Quality into Products and Processes. Asian Productivity Organization. White Plains, New York.

Tako, A., Robinson, S. (2012). The Application of Discrete Event Simulation and System Dynamics in the Logistics and Supply Chain Context. Decision Support Systems. Volume 52. Issue 4. 802 – 815.

Thomas, D., Kwinn, B., McGinnis, M., Bowman, B., Entner, M. (1997). The U.S. Army Enlisted Personnel System: A System Dynamics Approach. The 1997 IEEE Conference on Computational Cybernetics and Simulation. Volume 2. 1263 – 1267.

Vanderbei, R. (2014). Linear Programming: Foundations and Extensions. Springer International Series in Operations Research and Management Science. New York, New York.

van Geert, P. (1991). A Dynamic Systems Model of Cognitive and Language Growth. Psychological Review. Volume 98. No. 1. 3 – 53.

van Laarhoven, P. J. M., Aarts, E. H. L., Lenstra, J. (1992). Job Shop Scheduling by Simulated Annealing. Operations Research. Volume 40. No. 1. 113 – 125.

van Laarhoven, P. J. M., Aarts, E. H. L. (1987). Simulated Annealing: Theory and Applications. D. Reidel Publishing Company. Dordrecht, Holland.

Vlachos, D., Georgiadis, P., Iakovou, E. (2007). A System Dynamics Model for Dynamic Capacity Planning of Remanufacturing in Closed-Loop Supply Chains. Computers & Operations Research. Volume 34. 367 – 394.

Willemain, T., Smart, C., Shockor, J., DeSautels, P. (1994). Forecasting Intermittent Demand in Manufacturing: A Comparative Evaluation of Croston's Method. International Journal of Forecasting. Volume 10. Issue 4. 529 – 538.

Yin, P., Wang, J. (2006). Ant Colony Optimization for the Nonlinear Resource Allocation Problem. Applied Mathematics and Computation. Volume 174. Issue 2. 1438 – 1453.


Appendix A: Multiple Product Line MATLAB Code

Integer Programming

clc
clear

g = 1400.8;
r = 1325.7;
u = 2501.6;
y = 660.72;

p = optimproblem;

Deviation = optimvar('Deviation', 4, 'Type', 'continuous', 'LowerBound', 0);
x = optimvar('x', 12, 'Type', 'integer', 'LowerBound', 0);

p.ObjectiveSense = 'minimize';
p.Objective = Deviation(1) + Deviation(2) + Deviation(3) + Deviation(4);

%Operation 1
p.Constraints.c1 = x(1) >= x(2); %Workers on Shift 1 >= Workers on Shift 2
p.Constraints.c2 = x(2) >= x(3); %Workers on Shift 2 >= Workers on Shift 3
p.Constraints.c3 = x(1) <= 35;   %Workers on Shift 1 <= Some physical capacity
p.Constraints.c4 = Deviation(1) == ((x(1) + (0.85 * x(2)) + (0.75 * x(3))) - (g/40));
p.Constraints.c5 = Deviation(1) >= 0;

%Operation 2
p.Constraints.c6 = x(4) >= x(5); %Workers on Shift 1 >= Workers on Shift 2
p.Constraints.c7 = x(5) >= x(6); %Workers on Shift 2 >= Workers on Shift 3
p.Constraints.c8 = x(4) <= 30;   %Workers on Shift 1 <= Some physical capacity
p.Constraints.c9 = Deviation(2) == ((x(4) + (0.85 * x(5)) + (0.75 * x(6))) - (r/40));
p.Constraints.c10 = Deviation(2) >= 0;

%Operation 3
p.Constraints.c11 = x(7) >= x(8); %Workers on Shift 1 >= Workers on Shift 2
p.Constraints.c12 = x(8) >= x(9); %Workers on Shift 2 >= Workers on Shift 3
p.Constraints.c13 = x(7) <= 60;   %Workers on Shift 1 <= Some physical capacity
p.Constraints.c14 = Deviation(3) == ((x(7) + (0.85 * x(8)) + (0.75 * x(9))) - (u/40));
p.Constraints.c15 = Deviation(3) >= 0;

%Operation 4
p.Constraints.c16 = x(10) >= x(11); %Workers on Shift 1 >= Workers on Shift 2
p.Constraints.c17 = x(11) >= x(12); %Workers on Shift 2 >= Workers on Shift 3
p.Constraints.c18 = x(10) <= 15;    %Workers on Shift 1 <= Some physical capacity
p.Constraints.c19 = Deviation(4) == ((x(10) + (0.85 * x(11)) + (0.75 * x(12))) - (y/40));
p.Constraints.c20 = Deviation(4) >= 0;

values = solve(p);
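Because each operation in this model is independent, the single-operation structure can be sanity-checked without a solver. The following brute-force Python sketch is not the thesis solver; the demand value g = 1400.8 and the shift-1 capacity bound of 35 are taken from the program above, and everything else is a direct translation of the constraints:

```python
def best_staffing(demand_hours, cap=35):
    """Enumerate integer staffing (s1 >= s2 >= s3 >= 0, s1 <= cap) and
    return the plan minimizing the nonnegative deviation
    (s1 + 0.85*s2 + 0.75*s3) - demand_hours/40, as in constraint c4."""
    best, best_dev = None, float("inf")
    for s1 in range(cap + 1):
        for s2 in range(s1 + 1):
            for s3 in range(s2 + 1):
                dev = (s1 + 0.85 * s2 + 0.75 * s3) - demand_hours / 40
                if 0 <= dev < best_dev:
                    best, best_dev = (s1, s2, s3), dev
    return best, best_dev

plan, dev = best_staffing(1400.8)   # g from the integer program above
print(plan, round(dev, 4))
```

Since every weighted staffing total is a multiple of 0.05 worker-equivalents, the smallest nonnegative deviation for this demand works out to 0.03; several staffing plans attain it, which previews the multiple-optima discussion in the thesis.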

Simulated Annealing Algorithm

%Master's Thesis
%Two Product Lines
%Two Operations
clc
clear

%Product Line 1
%Constants
Demand_Multiplier = 2.7;

for t = 1:200
    [Op_1_D_1(t), Op_2_D_1(t)] = Temp_PL2_Op_2_1(Demand_Multiplier);
end

for tt = 1:200
    [Op_1_D_2(tt), Op_2_D_2(tt)] = Temp_PL2_Op_2_2(Demand_Multiplier);
end

y_counter = 100;

for y = 1:y_counter

    HoursinaDay = 8;
    DaysinaWeek = 5;
    Temperature = 1;
    KCounter = 10000;
    Productivity_1 = 1;
    Productivity_2 = 0.85;
    Productivity_3 = 0.75;
    Alpha = 0.996;
    IL = 0; IU = 5; KL = 5;

    %Initialization
    Op_1_Initial = 0;
    Op_2_Initial = 0;

    Op_1_S_1 = 0; Op_1_S_2 = 0; Op_1_S_3 = 0;
    Op_2_S_1 = 0; Op_2_S_2 = 0; Op_2_S_3 = 0;

    Shifts = 0;


    while Op_1_D_1 > Op_1_Initial %Initial Solution
        Op_1_S_1 = Op_1_S_1 + randi([IL IU], 1, 1);
        Op_1_S_2 = Op_1_S_2 + randi([IL IU], 1, 1);
        Op_1_S_3 = Op_1_S_3 + randi([IL IU], 1, 1);

        %Op_1_S_1 > Op_1_S_2 > Op_1_S_3
        if Op_1_S_2 >= Op_1_S_1
            Op_1_S_2 = Op_1_S_1 - 1;
        end
        if Op_1_S_3 >= Op_1_S_2
            Op_1_S_3 = Op_1_S_2 - 1;
        end

        %Defining Shifts
        if Op_1_S_1 > 0
            Shifts = 1;
        end
        if Op_1_S_2 > 0
            Shifts = 2;
        end
        if Op_1_S_3 > 0
            Shifts = 3;
        end

        %Defining Work Stations
        S = max(Op_1_S_2, Op_1_S_3);
        Stations = max(Op_1_S_1, S);

        %Capacity Calculation
        Op_1_Initial = (HoursinaDay * DaysinaWeek * Productivity_1 * Op_1_S_1) + ...
            (HoursinaDay * DaysinaWeek * Productivity_2 * Op_1_S_2) + ...
            (HoursinaDay * DaysinaWeek * Productivity_3 * Op_1_S_3);
    end

    MD_Op_1 = Op_1_Initial - Op_1_D_1; %Objective Function Value using mean Op_1 Demand

    Op_1_V = [Op_1_S_1, Op_1_S_2, Op_1_S_3, Shifts, Stations, Op_1_Initial, MD_Op_1]'; %Vector Storage

    Op_1_Solution = [Op_1_S_1, Op_1_S_2, Op_1_S_3, Shifts, Stations, Op_1_Initial, MD_Op_1, 1];

    for k = 1:KCounter %Perturb Variables
        Op_1_V(1, k + 1) = floor(Op_1_V(1, k) + randi([-KL KL], 1, 1));
        Op_1_V(2, k + 1) = floor(Op_1_V(2, k) + randi([-KL KL], 1, 1));
        Op_1_V(3, k + 1) = floor(Op_1_V(3, k) + randi([-KL KL], 1, 1));

        %Op_1_S_1 > Op_1_S_2 > Op_1_S_3
        if Op_1_V(2, k + 1) >= Op_1_V(1, k + 1)
            Op_1_V(2, k + 1) = Op_1_V(1, k + 1) - 1;
        end
        if Op_1_V(3, k + 1) >= Op_1_V(2, k + 1)
            Op_1_V(3, k + 1) = Op_1_V(2, k + 1) - 1;
        end

        %Defining Shifts
        if Op_1_V(1, k + 1) > 0
            Op_1_V(4, k + 1) = 1;
        end
        if Op_1_V(2, k + 1) > 0
            Op_1_V(4, k + 1) = 2;
        end
        if Op_1_V(3, k + 1) > 0
            Op_1_V(4, k + 1) = 3;
        end

        %Non-negativity Constraints
        if Op_1_V(1, k + 1) < 1
            Op_1_V(1, k + 1) = 0;
        end
        if Op_1_V(2, k + 1) < 1
            Op_1_V(2, k + 1) = 0;
        end
        if Op_1_V(3, k + 1) < 1
            Op_1_V(3, k + 1) = 0;
        end

        %Defining Stations
        S = max(Op_1_V(2, k + 1), Op_1_V(3, k + 1));
        Stations = max(Op_1_V(1, k + 1), S);

        %Capacity Calculation
        Op_1_N = (HoursinaDay * DaysinaWeek * Productivity_1 * Op_1_V(1, k + 1)) + ...
            (HoursinaDay * DaysinaWeek * Productivity_2 * Op_1_V(2, k + 1)) + ...
            (HoursinaDay * DaysinaWeek * Productivity_3 * Op_1_V(3, k + 1));

        MD_Op_1_S_N = Op_1_N - Op_1_D_1;

        Op_1_N_V(:, k) = [Op_1_V(1, k + 1), Op_1_V(2, k + 1), Op_1_V(3, k + 1), ...
            Op_1_V(4, k + 1), Stations, Op_1_N, MD_Op_1_S_N];

        MD_Op_1 = Op_1_V(6, k) - Op_1_D_1;


        if abs(MD_Op_1) <= abs(MD_Op_1_S_N) %Incumbent is closer to zero than the Neighbor
            if rand > Temperature %Accept Incumbent
                Op_1_V(:, k + 1) = Op_1_V(:, k);   %Shifts 1-3, Number of Shifts, Stations, Capacity, Objective
            else %Accept Neighbor
                Op_1_V(:, k + 1) = Op_1_N_V(:, k);
            end
        else %Accept Neighbor
            Op_1_V(:, k + 1) = Op_1_N_V(:, k);
        end

        Temperature = Temperature * Alpha;

        %Best Op_1 Solution out of the replications
        if (Op_1_V(7, k) < Op_1_Solution(7)) && (Op_1_V(7, k) > 0)
            Op_1_Solution(1:7) = Op_1_V(1:7, k);
            Op_1_Solution(8) = k + 1;
        end
    end

    Op_1_Solution = Op_1_Solution';

    %Reinitialize Variables
    Shifts = 0;
    Temperature = 1;

    while Op_2_D_1 > Op_2_Initial %Initial Solution
        Op_2_S_1 = Op_2_S_1 + randi([IL IU], 1, 1);
        Op_2_S_2 = Op_2_S_2 + randi([IL IU], 1, 1);
        Op_2_S_3 = Op_2_S_3 + randi([IL IU], 1, 1);

        %Op_2_S_1 > Op_2_S_2 > Op_2_S_3
        if Op_2_S_2 >= Op_2_S_1
            Op_2_S_2 = Op_2_S_1 - 1;
        end
        if Op_2_S_3 >= Op_2_S_2
            Op_2_S_3 = Op_2_S_2 - 1;
        end

        %Defining Shifts
        if Op_2_S_1 > 0
            Shifts = 1;
        end
        if Op_2_S_2 > 0
            Shifts = 2;
        end
        if Op_2_S_3 > 0
            Shifts = 3;
        end

        %Defining Booths
        C = max(Op_2_S_2, Op_2_S_3);
        WS = max(Op_2_S_1, C);

        %Capacity Calculation
        Op_2_Initial = (HoursinaDay * DaysinaWeek * Productivity_1 * Op_2_S_1) + ...
            (HoursinaDay * DaysinaWeek * Productivity_2 * Op_2_S_2) + ...
            (HoursinaDay * DaysinaWeek * Productivity_3 * Op_2_S_3);
    end

    MD_Op_2 = Op_2_Initial - Op_2_D_1; %Objective Function Value using mean Demand

    Op_2_V = [Op_2_S_1, Op_2_S_2, Op_2_S_3, Shifts, WS, Op_2_Initial, MD_Op_2]'; %Vector Storage

    Op_2_Solution = [Op_2_S_1, Op_2_S_2, Op_2_S_3, Shifts, WS, Op_2_Initial, MD_Op_2, 1];

    for k = 1:KCounter %Perturb Variables
        Op_2_V(1, k + 1) = Op_2_V(1, k) + randi([-KL KL], 1, 1);
        Op_2_V(2, k + 1) = Op_2_V(2, k) + randi([-KL KL], 1, 1);
        Op_2_V(3, k + 1) = Op_2_V(3, k) + randi([-KL KL], 1, 1);

        if Op_2_V(2, k + 1) >= Op_2_V(1, k + 1)
            Op_2_V(2, k + 1) = Op_2_V(1, k + 1) - 1;
        end


        if Op_2_V(3, k + 1) >= Op_2_V(2, k + 1)
            Op_2_V(3, k + 1) = Op_2_V(2, k + 1) - 1;
        end

        %Defining Shifts
        if Op_2_V(1, k + 1) > 0
            Op_2_V(4, k + 1) = 1;
        end
        if Op_2_V(2, k + 1) > 0
            Op_2_V(4, k + 1) = 2;
        end
        if Op_2_V(3, k + 1) > 0
            Op_2_V(4, k + 1) = 3;
        end

        %Lower-bound Constraints
        if Op_2_V(1, k + 1) < 2
            Op_2_V(1, k + 1) = 2;
        end
        if Op_2_V(2, k + 1) < 1
            Op_2_V(2, k + 1) = 1;
        end
        if Op_2_V(3, k + 1) < 1
            Op_2_V(3, k + 1) = 0;
        end

        %Defining Booths
        C = max(Op_2_V(2, k + 1), Op_2_V(3, k + 1));
        WS = max(Op_2_V(1, k + 1), C);

        %Capacity Calculation
        Op_2_N = (HoursinaDay * DaysinaWeek * Productivity_1 * Op_2_V(1, k + 1)) + ...
            (HoursinaDay * DaysinaWeek * Productivity_2 * Op_2_V(2, k + 1)) + ...
            (HoursinaDay * DaysinaWeek * Productivity_3 * Op_2_V(3, k + 1));

        MD_Op_2_N = Op_2_N - Op_2_D_1;

        Op_2_N_V(:, k) = [Op_2_V(1, k + 1), Op_2_V(2, k + 1), Op_2_V(3, k + 1), ...
            Op_2_V(4, k + 1), WS, Op_2_N, MD_Op_2_N];

        MD_Op_2 = Op_2_V(6, k) - Op_2_D_1;

        if abs(MD_Op_2) <= abs(MD_Op_2_N) %Incumbent is closer to zero than the Neighbor
            if rand > Temperature %Accept Incumbent
                Op_2_V(:, k + 1) = Op_2_V(:, k);   %Shifts 1-3, Number of Shifts, Booths, Capacity, Objective
            else %Accept Neighbor
                Op_2_V(:, k + 1) = Op_2_N_V(:, k);
            end
        else %Accept Neighbor
            Op_2_V(:, k + 1) = Op_2_N_V(:, k);
        end

        Temperature = Temperature * Alpha;

        %Best Solution out of the replications
        if (Op_2_V(7, k) < Op_2_Solution(7)) && (Op_2_V(7, k) > 0)
            Op_2_Solution(1:7) = Op_2_V(1:7, k);
            Op_2_Solution(8) = k + 1;
        end
    end

    Op_2_Solution = Op_2_Solution';

    q = floor(Op_1_Solution(1)); w = floor(Op_1_Solution(2)); e = floor(Op_1_Solution(3));
    o = floor(Op_2_Solution(1)); p = floor(Op_2_Solution(2)); l = floor(Op_2_Solution(3));

    r = (Productivity_1 * q * 40) + (Productivity_2 * w * 40) + (Productivity_3 * e * 40);
    x = (Productivity_1 * o * 40) + (Productivity_2 * p * 40) + (Productivity_3 * l * 40);

    d = r - Op_1_D_1;
    u = x - Op_2_D_1;

    %Final Solution
    Op_1 = [q, w, e, Op_1_Solution(4), max(q, w), r, d]'
    Op_2 = [o, p, l, Op_2_Solution(4), max(o, p), x, u]'

    %figure
    %plot(Op_1_V(7, :))

    William_1(:, y) = [q, w, e, d]';
    Dan_1(:, y) = [o, p, l, u]';
end

%************************************************************************
%Product Line 2
Absenteeism_1 = 0.20;
Absenteeism_2 = 0.25;
Absenteeism_3 = 0.30;
Demand_Multiplier = 2.7;

for y = 1:y_counter

    HoursinaDay = 8;
    DaysinaWeek = 5;
    Temperature = 1;
    KCounter = 10000;
    Productivity_1 = 1;
    Productivity_2 = 0.85;
    Productivity_3 = 0.75;
    Alpha = 0.996;
    IL = 0; IU = 5; KL = 5;

    %Initialization
    Op_1_Initial = 0;
    Op_2_Initial = 0;

    Op_1_S_1 = 0; Op_1_S_2 = 0; Op_1_S_3 = 0;
    Op_2_S_1 = 0; Op_2_S_2 = 0; Op_2_S_3 = 0;

    Shifts = 0;

    while Op_1_D_2 > Op_1_Initial %Initial Solution
        Op_1_S_1 = Op_1_S_1 + randi([IL IU], 1, 1);
        Op_1_S_2 = Op_1_S_2 + randi([IL IU], 1, 1);
        Op_1_S_3 = Op_1_S_3 + randi([IL IU], 1, 1);

        %Op_1_S_1 > Op_1_S_2 > Op_1_S_3
        if Op_1_S_2 >= Op_1_S_1
            Op_1_S_2 = Op_1_S_1 - 1;
        end
        if Op_1_S_3 >= Op_1_S_2
            Op_1_S_3 = Op_1_S_2 - 1;
        end

        %Defining Shifts
        if Op_1_S_1 > 0
            Shifts = 1;
        end
        if Op_1_S_2 > 0
            Shifts = 2;
        end
        if Op_1_S_3 > 0
            Shifts = 3;
        end

        %Defining Work Stations
        S = max(Op_1_S_2, Op_1_S_3);
        Stations = max(Op_1_S_1, S);

        %Capacity Calculation
        Op_1_Initial = (HoursinaDay * DaysinaWeek * Productivity_1 * Op_1_S_1) + ...
            (HoursinaDay * DaysinaWeek * Productivity_2 * Op_1_S_2) + ...
            (HoursinaDay * DaysinaWeek * Productivity_3 * Op_1_S_3);
    end

    MD_Op_1 = Op_1_Initial - Op_1_D_2; %Objective Function Value using mean Op_1 Demand

    Op_1_V = [Op_1_S_1, Op_1_S_2, Op_1_S_3, Shifts, Stations, Op_1_Initial, MD_Op_1]'; %Vector Storage

    Op_1_Solution = [Op_1_S_1, Op_1_S_2, Op_1_S_3, Shifts, Stations, Op_1_Initial, MD_Op_1, 1];

    for k = 1:KCounter %Perturb Variables
        Op_1_V(1, k + 1) = floor(Op_1_V(1, k) + randi([-KL KL], 1, 1));
        Op_1_V(2, k + 1) = floor(Op_1_V(2, k) + randi([-KL KL], 1, 1));
        Op_1_V(3, k + 1) = floor(Op_1_V(3, k) + randi([-KL KL], 1, 1));

        %Op_1_S_1 > Op_1_S_2 > Op_1_S_3
        if Op_1_V(2, k + 1) >= Op_1_V(1, k + 1)
            Op_1_V(2, k + 1) = Op_1_V(1, k + 1) - 1;
        end
        if Op_1_V(3, k + 1) >= Op_1_V(2, k + 1)
            Op_1_V(3, k + 1) = Op_1_V(2, k + 1) - 1;
        end

        %Defining Shifts
        if Op_1_V(1, k + 1) > 0
            Op_1_V(4, k + 1) = 1;
        end
        if Op_1_V(2, k + 1) > 0
            Op_1_V(4, k + 1) = 2;
        end
        if Op_1_V(3, k + 1) > 0
            Op_1_V(4, k + 1) = 3;
        end

        %Non-negativity Constraints
        if Op_1_V(1, k + 1) < 1
            Op_1_V(1, k + 1) = 0;
        end
        if Op_1_V(2, k + 1) < 1
            Op_1_V(2, k + 1) = 0;
        end
        if Op_1_V(3, k + 1) < 1
            Op_1_V(3, k + 1) = 0;
        end

        %Defining Stations
        S = max(Op_1_V(2, k + 1), Op_1_V(3, k + 1));
        Stations = max(Op_1_V(1, k + 1), S);

        %Capacity Calculation
        Op_1_N = (HoursinaDay * DaysinaWeek * Productivity_1 * Op_1_V(1, k + 1)) + ...
            (HoursinaDay * DaysinaWeek * Productivity_2 * Op_1_V(2, k + 1)) + ...
            (HoursinaDay * DaysinaWeek * Productivity_3 * Op_1_V(3, k + 1));

        MD_Op_1_S_N = Op_1_N - Op_1_D_2;

        Op_1_N_V(:, k) = [Op_1_V(1, k + 1), Op_1_V(2, k + 1), Op_1_V(3, k + 1), ...
            Op_1_V(4, k + 1), Stations, Op_1_N, MD_Op_1_S_N];

        MD_Op_1 = Op_1_V(6, k) - Op_1_D_2;

        if abs(MD_Op_1) <= abs(MD_Op_1_S_N) %Incumbent is closer to zero than the Neighbor
            if rand > Temperature %Accept Incumbent
                Op_1_V(:, k + 1) = Op_1_V(:, k);   %Shifts 1-3, Number of Shifts, Stations, Capacity, Objective
            else %Accept Neighbor
                Op_1_V(:, k + 1) = Op_1_N_V(:, k);
            end
        else %Accept Neighbor
            Op_1_V(:, k + 1) = Op_1_N_V(:, k);
        end

        Temperature = Temperature * Alpha;

        %Best Op_1 Solution out of the replications
        if (Op_1_V(7, k) < Op_1_Solution(7)) && (Op_1_V(7, k) > 0)
            Op_1_Solution(1:7) = Op_1_V(1:7, k);
            Op_1_Solution(8) = k + 1;
        end
    end

    Op_1_Solution = Op_1_Solution';

    %Reinitialize Variables
    Shifts = 0;
    Temperature = 1;

    while Op_2_D_2 > Op_2_Initial %Initial Solution
        Op_2_S_1 = Op_2_S_1 + randi([IL IU], 1, 1);
        Op_2_S_2 = Op_2_S_2 + randi([IL IU], 1, 1);
        Op_2_S_3 = Op_2_S_3 + randi([IL IU], 1, 1);

        if Op_2_S_2 >= Op_2_S_1
            Op_2_S_2 = Op_2_S_1 - 1;
        end
        if Op_2_S_3 >= Op_2_S_2
            Op_2_S_3 = Op_2_S_2 - 1;
        end

        %Defining Shifts
        if Op_2_S_1 > 0
            Shifts = 1;
        end
        if Op_2_S_2 > 0
            Shifts = 2;
        end
        if Op_2_S_3 > 0
            Shifts = 3;
        end

        %Defining Booths
        C = max(Op_2_S_2, Op_2_S_3);
        WS = max(Op_2_S_1, C);

        %Capacity Calculation
        Op_2_Initial = (HoursinaDay * DaysinaWeek * Productivity_1 * Op_2_S_1) + ...
            (HoursinaDay * DaysinaWeek * Productivity_2 * Op_2_S_2) + ...
            (HoursinaDay * DaysinaWeek * Productivity_3 * Op_2_S_3);
    end

    MD_Op_2 = Op_2_Initial - Op_2_D_2; %Objective Function Value using mean Weld Demand

    Op_2_V = [Op_2_S_1, Op_2_S_2, Op_2_S_3, Shifts, WS, Op_2_Initial, MD_Op_2]'; %Vector Storage

    Op_2_Solution = [Op_2_S_1, Op_2_S_2, Op_2_S_3, Shifts, WS, Op_2_Initial, MD_Op_2, 1];

    for k = 1:KCounter %Perturb Variables
        Op_2_V(1, k + 1) = Op_2_V(1, k) + randi([-KL KL], 1, 1);
        Op_2_V(2, k + 1) = Op_2_V(2, k) + randi([-KL KL], 1, 1);
        Op_2_V(3, k + 1) = Op_2_V(3, k) + randi([-KL KL], 1, 1);

        if Op_2_V(2, k + 1) >= Op_2_V(1, k + 1)
            Op_2_V(2, k + 1) = Op_2_V(1, k + 1) - 1;
        end
        if Op_2_V(3, k + 1) >= Op_2_V(2, k + 1)
            Op_2_V(3, k + 1) = Op_2_V(2, k + 1) - 1;
        end


        %Defining Shifts
        if Op_2_V(1, k + 1) > 0
            Op_2_V(4, k + 1) = 1;
        end
        if Op_2_V(2, k + 1) > 0
            Op_2_V(4, k + 1) = 2;
        end
        if Op_2_V(3, k + 1) > 0
            Op_2_V(4, k + 1) = 3;
        end

        %Lower-bound Constraints
        if Op_2_V(1, k + 1) < 2
            Op_2_V(1, k + 1) = 2;
        end
        if Op_2_V(2, k + 1) < 1
            Op_2_V(2, k + 1) = 1;
        end
        if Op_2_V(3, k + 1) < 1
            Op_2_V(3, k + 1) = 0;
        end

        %Defining Booths
        C = max(Op_2_V(2, k + 1), Op_2_V(3, k + 1));
        WS = max(Op_2_V(1, k + 1), C);

        %Capacity Calculation
        Op_2_N = (HoursinaDay * DaysinaWeek * Productivity_1 * Op_2_V(1, k + 1)) + ...
            (HoursinaDay * DaysinaWeek * Productivity_2 * Op_2_V(2, k + 1)) + ...
            (HoursinaDay * DaysinaWeek * Productivity_3 * Op_2_V(3, k + 1));

        MD_Op_2_N = Op_2_N - Op_2_D_2;

        Op_2_N_V(:, k) = [Op_2_V(1, k + 1), Op_2_V(2, k + 1), Op_2_V(3, k + 1), ...
            Op_2_V(4, k + 1), WS, Op_2_N, MD_Op_2_N];

        MD_Op_2 = Op_2_V(6, k) - Op_2_D_2;

        if abs(MD_Op_2) <= abs(MD_Op_2_N) %Incumbent is closer to zero than the Neighbor
            if rand > Temperature %Accept Incumbent
                Op_2_V(:, k + 1) = Op_2_V(:, k);   %Shifts 1-3, Number of Shifts, Booths, Capacity, Objective
            else %Accept Neighbor
                Op_2_V(:, k + 1) = Op_2_N_V(:, k);
            end
        else %Accept Neighbor
            Op_2_V(:, k + 1) = Op_2_N_V(:, k);
        end

        Temperature = Temperature * Alpha;

        %Best Solution out of the replications
        if (Op_2_V(7, k) < Op_2_Solution(7)) && (Op_2_V(7, k) > 0)
            Op_2_Solution(1:7) = Op_2_V(1:7, k);
            Op_2_Solution(8) = k + 1;
        end
    end

    Op_2_Solution = Op_2_Solution';

    q = floor(Op_1_Solution(1)); w = floor(Op_1_Solution(2)); e = floor(Op_1_Solution(3));
    o = floor(Op_2_Solution(1)); p = floor(Op_2_Solution(2)); l = floor(Op_2_Solution(3));

    r = (Productivity_1 * q * 40) + (Productivity_2 * w * 40) + (Productivity_3 * e * 40);
    x = (Productivity_1 * o * 40) + (Productivity_2 * p * 40) + (Productivity_3 * l * 40);
    d = r - Op_1_D_2;
    u = x - Op_2_D_2;

    %Final Solution
    Op_1 = [q, w, e, Op_1_Solution(4), max(q, w), r, d]'
    Op_2 = [o, p, l, Op_2_Solution(4), max(o, p), x, u]'

    %figure
    %plot(Op_1_V(7, :))

    William_2(:, y) = [q, w, e, d]';
    Dan_2(:, y) = [o, p, l, u]';
end

Ancillary Function, Product Line 1

function [AS, WS] = Temp_PL2_Op_2_1(Demand_Multiplier)

Op_1_ND = 1250;
Op_2_ND = 1500;

Op_1_Min = Op_1_ND * 0.05;
Op_1_Max = Op_1_ND * 3;
Op_1_Mode = Op_1_ND * 0.05;

Op_1 = makedist('Triangular', 'a', Op_1_Min, 'b', Op_1_Mode, 'c', Op_1_Max);

Op_2_Min = Op_2_ND * 0.2;
Op_2_Max = Op_2_ND * 2;
Op_2_Mode = Op_2_ND * 0.6;

Op_2 = makedist('Triangular', 'a', Op_2_Min, 'b', Op_2_Mode, 'c', Op_2_Max);

%for i = 1:50000
Op_1_D = (0.40 * random(Op_1, 1, 1)) * Demand_Multiplier;
Op_2_D = (0.35 * random(Op_2, 1, 1)) * Demand_Multiplier;
%end

AS = mean(Op_1_D);
WS = mean(Op_2_D);

Ancillary Functions Product Line 2

function [AS, WS] = Temp_PL2_Op_2_2(Demand_Multiplier)

Op_1_ND = 2250; Op_2_ND = 750;

Op_1_Min = Op_1_ND * 0.05; Op_1_Max = Op_1_ND * 3; Op_1_Mode = Op_1_ND * 0.05;

Op_1 = makedist('Triangular','a',Op_1_Min,'b',Op_1_Mode,'c',Op_1_Max);

Op_2_Min = Op_2_ND * 0.2; Op_2_Max = Op_2_ND * 2; Op_2_Mode = Op_2_ND * 0.6;


Op_2 = makedist('Triangular', 'a', Op_2_Min, 'b', Op_2_Mode, 'c', Op_2_Max);

% for i = 1:50000
Op_1_D = (0.40 * random(Op_1, 1, 1)) * Demand_Multiplier;
Op_2_D = (0.35 * random(Op_2, 1, 1)) * Demand_Multiplier;
% end

AS = mean(Op_1_D);
WS = mean(Op_2_D);

Relaxed Integer Program Pareto Frontier

clc
clear

fitnessfcn = @(x)[(x * 1) + (x * 0.85) + (x * 0.75), (x * 1) + (x * 0.85) + (x * 0.75)];

Aineq = [-1  1  0  0  0  0;
          0 -1  1  0  0  0;
          0  0  0 -1  1  0;
          0  0  0  0 -1  1;
          1  0  0  0  0  0;
          0  0  0  1  0  0];

Bineq = [0; 0; 0; 0; 5; 15];

Aeq = [1 0.85 0.75 0 0 0; 0 0 0 1 0.85 0.75];

Beq = [363/40; 617.11/40];

lb = [0; 0; 0; 0; 0; 0];
up = [5; 5; 5; 15; 15; 15];

rng default  % For reproducibility
x = gamultiobj(fitnessfcn, 6, Aineq, Bineq, Aeq, Beq, lb, up);

plot(x(:,1), x(:,4), 'ko', x(:,2), x(:,5), 'ro', x(:,3), x(:,6), 'bo')
% t = linspace(-1/2, 2);
% y = 1/2 - t;
hold on
% plot(t, y, 'b--')
xlabel('x(1)')
ylabel('x(2)')
title('Pareto Points in Parameter Space')
hold off


rng default
nvars = 2;
opts = optimoptions(@gamultiobj, 'UseVectorized', true, 'PlotFcn', 'gaplotpareto');
[xga, fvalga, ~, gaoutput] = gamultiobj(fitnessfcn, nvars, [], [], [], [], [], [], [], opts);

mout = [1, 0.85, 0.75, 1, 0.85, 0.75];

plot(fvalga(:,1), fvalga(:,4), 'r*', fvalga(:,2), fvalga(:,5), 'b*', fvalga(:,3), fvalga(:,6), 'g*');

Relaxed Integer Programming Pareto Clouds by Shift, Product Line One

Operation 1 Operation 2

S1 S2 S3 S1 S2 S3

31.28518 4.393911 0 15.29987 12.59081 9.520586

21.86845 10.8384 5.251877 29.91797 3.793564 -8.88E-16

20.49508 11.53133 6.29772 30 2.370881 1.503002

28.21712 6.393738 1.824271 14.72175 12.59042 10.29185

21.5892 10.95771 5.488992 29.7408 3.673534 0.372264

30.82755 4.856874 0.085478 15.45406 12.50258 9.415004

31.92589 3.304321 0.380586 18.27916 10.6071 7.796409

21.29165 10.99712 5.841069 12.8179 12.74473 12.65545

16.025 14.63321 8.742359 22.55123 8.420285 4.578699

22.84663 10.27374 4.58758 17.84485 16.10456 2.145031

26.75229 7.52351 2.496973 14.92246 12.27006 10.38732

32.85267 2.131188 0.474423 18.5774 10.42593 7.604079

13.63176 13.46769 13.25428 21.098 9.397914 5.408363

16.73264 12.80227 9.87391 25.01759 6.636871 3.311422

15.28916 13.28542 11.25098 22.61221 8.201387 4.745476

21.44455 10.95895 5.68045 29.0826 4.288527 0.552866

26.82912 7.426118 2.504911 16.41351 14.23061 6.177292

31.68957 3.107282 0.918986 17.69679 10.83156 8.318514

19.05088 13.30376 6.214569 29.75 3.032359 1.086659

18.9271 18.3261 0.687611 20.52507 10.84612 4.530972

18.59821 12.32738 7.924687 23.12389 7.346343 5.032289

31.64321 3.415303 0.631705 18.67109 10.43604 7.467697

21.63755 10.95432 5.428373 17.61384 17.03597 1.397452

27.16871 7.076009 2.448904 16.71967 13.45955 6.642959

19.16147 12.20679 7.310352 27.0227 4.844722 2.669052

18.78174 11.92724 8.133479 24.39041 7.086816 3.637732

19.38568 17.09863 1.467305 22.17652 8.985585 4.43764


24.14425 9.809336 3.383759 14.05742 12.526 11.25064

25.39237 7.806069 3.989957 25.47324 5.563081 3.920848

21.79414 11.45962 4.646913 17.58688 17.58688 0.809031

21.29165 10.99712 5.841069 13.38026 12.37274 12.32722

25.2847 8.255736 3.623906 24.06464 7.051546 4.112065

15.15166 13.92233 10.71247 21.95417 8.757096 4.99307

19.99045 12.65855 5.693047 26.17185 7.352997 0.960809

15.31291 12.69813 11.88491 19.93582 9.588246 6.742231

20.70331 15.03511 2.049125 22.3876 8.933535 4.215189

21.50608 10.83297 5.741198 12.92787 12.70437 12.55456

30.38784 4.670874 0.882555 16.61003 11.70147 8.781617

27.27581 7.028249 2.360236 16.71774 11.43655 8.938247

15.39149 13.22095 11.18761 21.53655 8.915997 5.369797

17.98139 12.08711 9.019426 22.82936 8.290191 4.355301

23.04679 9.955406 4.681492 15.76688 14.3876 6.861538

24.20024 8.928243 4.307677 15.87246 11.36781 10.1432

17.5802 11.69749 9.995915 20.14853 9.834651 6.179354

21.79544 10.64212 5.571677 13.43535 12.88538 11.67277

15.96917 13.04359 10.61837 22.62442 8.03282 4.920248

19.36549 16.91409 1.703376 22.8137 8.618168 4.004477

17.04626 13.71623 8.41993 25.17294 6.43059 3.338073

14.10142 13.35214 12.75902 20.89597 9.869103 5.14372

15.24852 12.99857 11.63026 20.51013 10.66745 4.753378

14.11036 13.8355 12.19928 21.15563 9.364728 5.369132

25.74621 7.565694 3.790601 24.78938 6.11431 4.207936

14.20102 13.67424 12.26117 20.99865 9.617557 5.2919

18.47218 12.35297 8.063728 23.03726 7.419014 5.065438

19.85367 14.77753 3.473912 22.10311 9.039169 4.474795

22.88047 9.903081 4.962552 19.11906 13.12839 3.819086

32.41049 2.502272 0.643433 18.24262 10.58014 7.875687

20.26044 11.63369 6.494562 29.4109 2.81451 1.785688

31.47043 3.952894 0.25281 17.01504 11.41136 8.570407

26.63231 7.394236 2.803455 15.31591 12.0918 10.06475

24.38876 8.589931 4.43973 24.65152 6.648691 3.786129

19.76555 11.93354 6.81459 26.64529 5.382893 2.562341

16.13971 12.8807 10.5756 23.21679 8.247151 3.887506

24.94914 8.501672 3.79259 14.80404 12.3363 10.47014

18.97052 18.20979 0.761543 20.41775 10.87699 4.639081

19.20027 11.82373 7.69275 15.24607 11.76038 10.53348

31.31184 3.545979 0.925438 17.77826 10.87203 8.164013

21.88692 10.78406 5.288839 29.5415 4.017608 0.24805

20.70715 14.04374 3.167571 24.98042 6.961045 2.993593

19.89481 16.57361 1.383494 21.80014 9.701563 4.128039


25.19093 10.04355 1.722736 18.6188 10.68718 7.252797

15.70793 13.65051 10.27885 21.07128 9.285043 5.571908

14.80226 13.05348 12.16303 20.55854 9.568772 5.934

21.87549 10.82257 5.260438 29.82504 3.842975 0.067914

18.80841 18.51817 0.628203 20.59838 10.75008 4.542072

21.03088 12.21559 4.807831 18.91633 14.8022 2.1924

16.28214 14.40874 8.653903 20.8818 9.49862 5.582503

20.59998 11.46667 6.231133 28.63327 4.150752 1.308125

28.84961 5.987995 1.440791 17.6986 11.06686 8.049422

23.36811 9.476254 4.796104 14.44157 11.98425 11.35243

19.48837 12.97362 6.005399 29.28459 3.352827 1.344007

15.18787 14.87872 9.580293 20.95637 9.755913 5.19147

18.35748 12.56503 7.976329 25.41865 6.23583 3.231194

24.33491 8.116351 5.048253 17.97152 11.03837 7.717813

23.59423 9.505699 4.46123 26.9649 5.416202 2.09844

24.67579 9.041611 3.545125 17.57786 15.37218 3.331048

22.85158 9.983525 4.909893 28.2267 4.613887 1.325326

19.60383 16.10905 2.297972 18.10616 11.389 7.140923

22.5008 10.45639 4.841692 17.20995 15.79603 3.341237

21.43421 11.42406 5.167111 18.30488 15.63928 2.058971

21.712 10.67002 5.65132 14.11514 12.00857 11.7601

19.15923 12.30638 7.200461 19.61644 9.993178 6.70914

23.10135 9.801499 4.783174 14.28137 12.50355 10.97748

20.40308 11.55706 6.391226 29.8336 2.516361 1.559997

20.73101 11.13443 6.432966 13.78418 12.37686 11.78399

24.74832 8.394121 4.182237 23.62561 7.529605 4.15563

18.84378 14.49641 5.139024 23.26127 7.793409 4.342449

14.57592 13.13409 12.37347 20.36902 9.673152 6.068406

22.30219 10.28536 5.300342 27.87608 3.89691 2.60539

22.35954 10.1872 5.335117 26.4319 5.918206 2.24017

19.92791 15.61273 2.428362 19.11552 10.85054 6.405362

21.78668 10.8609 5.335407 27.5734 5.009841 1.747641

23.5807 9.076159 4.966084 23.58047 8.163939 3.496905

23.52212 9.630658 4.415763 25.54637 6.25962 3.033931

20.26385 11.61005 6.516802 29.66931 2.637105 1.642196

18.99589 12.18537 7.555391 26.2361 5.205336 3.309151

22.84163 10.06526 4.830542 28.51552 4.55514 1.00681

17.18552 15.11962 6.643746 20.03392 10.22945 5.884733

15.89201 12.92047 10.86078 20.57744 9.624113 5.846079

29.80129 4.78616 1.533967 17.55872 11.88474 7.308998

21.11969 11.11775 5.933634 13.12546 13.03039 11.92161

18.13574 16.71188 3.572225 21.41525 9.90202 4.414047

19.79213 15.9211 2.259907 23.76682 7.909096 3.53727


19.54677 11.82513 7.229154 28.52307 3.650337 2.022187

14.47476 13.17591 12.46096 20.30674 9.758037 6.055239

15.93314 13.5206 10.12579 23.06216 8.092714 4.26871

20.47395 14.37427 3.103891 17.84902 12.77766 5.909961

16.44753 12.01547 11.14575 20.30603 9.857153 5.943848

18.62084 16.34024 3.346606 22.60586 8.966442 3.886892

20.49508 11.53133 6.29772 30 2.370881 1.503002

20.59925 11.4183 6.286923 15.35828 11.36772 10.82888

23.88554 9.768933 3.774485 17.25528 15.24037 3.910542

13.95999 13.74668 12.50045 21.098 9.397914 5.408363

18.23824 11.21931 9.660458 20.25434 9.778136 6.10233

22.71909 10.35877 4.661277 17.90131 16.41402 1.719024

30.88212 4.760431 0.122021 15.66928 12.36432 9.284726

21.44063 11.72208 4.820808 13.61118 12.40768 11.97971

22.18981 11.27499 4.328591 13.39537 12.52667 12.13263

13.63176 13.46769 13.25428 21.098 9.397914 5.408363

20.00048 11.4559 7.04267 18.97545 14.03731 2.980444

18.09708 12.43036 8.476154 25.75219 5.773848 3.310053

17.72518 11.96718 9.496955 22.54166 8.340782 4.681573

20.77892 11.73008 5.694019 15.00536 11.70763 10.91421

17.72276 16.06596 4.854893 20.48248 10.36918 5.128293

31.02503 3.856598 0.955818 17.90288 10.81962 8.057253

16.95944 12.0563 10.41695 20.76371 9.526043 5.708873

19.67591 12.5089 6.282034 26.19217 6.673775 1.703499

25.80225 7.711636 3.550472 24.55817 6.19485 4.424948

27.4917 6.563751 2.598817 22.28701 8.231772 5.144643

25.16298 7.981736 4.096724 21.14895 9.930612 4.736706

25.51663 8.240043 3.332448 23.75817 6.872486 4.723616

21.61194 11.46968 4.878454 19.32793 15.45269 0.906367

22.45634 10.43886 4.920847 17.60414 16.58759 1.918547

19.07582 17.55256 1.366002 21.66914 9.451006 4.58668

14.95611 13.35723 11.61366 21.36432 9.352429 5.104824

16.95647 12.38687 10.04625 24.49856 7.231032 3.330081

26.68523 7.355569 2.776715 16.81122 13.26673 6.739411

23.83805 9.435063 4.216198 24.93876 5.978061 4.163181

27.25175 6.446675 3.051435 19.45217 10.07197 6.838882

22.81495 9.818713 5.145529 27.37881 4.194119 2.931586

19.37131 16.81939 1.802942 21.57357 9.677255 4.457679

25.20267 7.964093 4.063802 25.35485 5.614226 4.020747

24.83711 8.183615 4.302423 16.3276 11.77695 9.072654

23.59727 8.731192 5.334956 16.3992 11.01155 9.844639

21.82675 9.341391 7.004088 18.72901 10.6236 7.177907

22.14765 10.62973 5.116114 17.07147 14.6313 4.845905


23.63013 10.60091 3.17213 14.82725 12.20004 10.59362

21.50667 11.04791 5.496805 27.03997 6.651457 0.598383

13.8098 13.3847 13.11094 21.11925 9.372275 5.409093

17.3874 12.1172 9.777313 18.7784 11.15407 6.510848

26.2239 7.911057 2.76227 15.25853 12.60482 9.559836

29.01685 5.834761 1.391471 16.48673 12.23754 8.338486

19.21848 17.23491 1.535803 19.97258 10.73115 5.397926

19.09299 17.87759 0.97475 20.25118 10.87361 4.86501

17.86686 12.35317 8.870588 21.98246 8.156432 5.636102

13.87382 13.44102 12.96174 21.12657 9.361573 5.411456

26.25691 7.740479 2.911573 16.80516 13.59003 6.38108

17.10845 14.08933 7.914154 19.59778 10.97267 5.623934

26.95222 7.255292 2.53437 16.97508 11.29955 8.750394

26.32563 6.85689 3.821355 19.15952 11.1447 6.013311

23.7252 8.466463 5.464412 17.33312 11.05637 8.548628

29.52789 5.268956 1.351332 16.25153 11.8001 9.147849

20.95134 11.22534 6.03616 20.15845 8.312805 7.89089

31.35801 4.27005 0.043264 15.63855 12.36531 9.324587

16.51334 12.89106 10.06568 23.89381 7.701066 3.603715

20.26368 10.84974 7.378721 23.54 7.589526 4.201867

18.95538 12.03048 7.784951 27.78639 4.27796 2.29312

23.42698 9.684304 4.481815 26.49863 4.977238 3.217618

20.17871 12.45566 5.671969 27.91966 4.200294 2.203456

18.07061 12.35733 8.594214 26.6319 5.047546 2.960248

21.86133 10.80215 5.302457 17.56016 16.93286 1.585876

21.84908 10.84734 5.267575 29.07181 4.494345 0.334

16.52604 13.50794 9.34962 19.36541 10.0986 6.924369

16.73766 14.8664 7.52787 21.76701 9.115171 4.836791

32.48568 2.611156 0.419781 17.93937 10.84276 7.982376

22.24399 10.81184 4.78126 18.11103 14.99972 3.042285

24.94187 8.132077 4.221159 24.03588 7.049611 4.152603

18.40217 12.54499 7.939457 25.45758 6.208759 3.209966

31.56584 3.084635 1.109622 18.75636 10.35391 7.447088

17.40919 12.63509 9.161315 20.4421 9.752769 5.880731

19.63051 12.96758 5.822727 26.90558 5.106222 2.528841

21.68329 10.574 5.798412 27.2633 4.457701 2.786877

21.0737 11.14431 5.964854 13.32278 12.88346 11.82504

19.02977 12.26663 7.418121 18.82582 14.69731 2.431957

20.91745 14.14419 2.773319 19.14791 13.76861 3.05503

21.79414 11.45962 4.646913 17.70425 17.45425 0.802847

21.29165 10.99712 5.841069 12.8179 12.74473 12.65545

32.22138 2.762073 0.601145 18.55243 10.47417 7.582698

32.85267 2.131188 0.474423 18.5774 10.42593 7.604079


23.95225 9.02757 4.525759 14.1684 12.21037 11.46039

Relaxed Integer Programming Pareto Clouds by Shift, Product Line Two

Operation 1 Operation 2

S1 S2 S3 S1 S2 S3

57.33104 6.128184 -1.78E-15 10.02886 5.214155 2.742808

39.25979 18.08193 10.54742 14.72821 2.10564 -4.44E-16

36.97231 18.83528 12.74238 7.05012 7.051104 4.632183

32.65127 32.65226 2.846155 10.41127 5.233761 2.209753

25.93191 25.93291 19.41884 8.024417 6.406543 4.063869

33.36596 31.09232 3.660139 10.7907 4.952803 2.022151

36.5428 19.11118 13.00447 14.71036 1.982309 0.164117

55.78711 6.771469 1.329477 10.51019 4.854092 2.509233

53.23441 8.816046 2.416054 10.73271 4.885897 2.176544

27.98814 24.60292 18.18569 8.565496 6.049188 3.748466

32.95733 31.67772 3.542505 10.73127 4.993518 2.055521

39.67921 18.50071 9.513664 8.895629 8.484579 0.548577

37.37809 18.59856 12.47184 14.81001 1.959255 0.057733

26.48537 25.577 19.08429 7.975678 6.438941 4.092468

58.19017 4.423193 0.787251 10.1244 5.021837 2.832281

38.27989 21.47174 8.012056 9.216212 7.788654 0.909306

50.02838 10.28465 5.026096 11.72552 4.019495 1.834613

29.10685 27.49187 13.4189 8.826704 6.121516 3.318249

55.00798 6.857473 2.271154 10.64614 4.726542 2.472889

36.97207 18.83488 12.74393 14.98745 1.798787 0.002309

36.83502 18.92599 12.8227 7.307219 6.881503 4.481643

38.50247 19.00216 10.51414 8.484415 8.353711 1.245013

36.33801 19.2616 13.10506 7.178648 6.999795 4.519098

39.97427 18.32998 9.313691 8.727212 8.728083 0.497211

47.40314 12.59348 5.910183 12.24406 3.70283 1.502501

44.14638 15.22921 7.265065 12.82071 3.326381 1.160048

47.28498 12.24757 6.459708 11.98899 3.821487 1.707348

42.81585 15.19154 9.080986 8.101407 6.394964 3.974243

45.5854 14.28697 6.414304 9.447813 7.19383 1.274739

48.79809 11.36125 5.446793 12.21078 3.698644 1.550741

50.19312 10.03181 5.093629 9.12646 6.025854 3.025513

34.71042 28.26894 5.068228 11.3173 4.530457 1.799147

45.2333 13.58183 7.682906 13.04853 3.146033 1.060796

56.7549 5.456625 1.529587 10.42315 4.831304 2.651383

57.1077 5.579423 0.919666 10.28072 4.97066 2.683626


36.97234 18.8353 12.74241 7.050631 7.050163 4.632358

40.86643 16.74279 9.922287 8.058674 7.32263 2.980426

59.18103 3.709691 0.274723 9.918417 5.153614 2.957378

32.83202 29.82222 5.812384 10.73065 4.932248 2.126048

38.43168 17.89096 11.86763 14.49026 2.191792 0.219768

46.54984 13.50612 6.013337 9.662866 6.620366 1.637525

55.82884 6.224705 1.893749 10.59872 4.721786 2.54147

40.96893 16.11562 10.49623 7.581007 6.699677 4.32223

37.42102 18.5937 12.41865 14.18868 2.435582 0.345629

56.24877 6.006673 1.580764 9.95262 5.168089 2.896653

33.5876 20.88224 14.93578 9.560445 5.392861 3.164671

36.94751 18.91472 12.68551 7.07334 7.037452 4.616404

36.01897 19.72957 13.00074 13.30172 2.938798 0.957873

34.85329 20.19629 14.02554 7.237435 6.926589 4.523581

34.95754 20.15464 13.93506 13.25444 2.953244 1.004745

32.4619 30.49606 5.542236 10.56054 5.054791 2.21427

45.98429 13.68051 6.569658 9.185352 7.270666 1.537807

51.87342 9.194373 3.802251 11.39922 4.21235 2.05081

54.08703 7.76281 2.473166 10.89184 4.576001 2.315958

37.14943 19.17287 12.12414 7.664798 7.530116 3.270984

48.59386 12.87594 4.002882 11.38898 4.270517 1.998195

33.24916 21.10713 15.13221 9.500314 5.432537 3.199962

52.68821 8.276206 3.756267 9.346932 5.783782 3.006658

32.04566 29.29623 7.456794 10.38931 5.11893 2.36969

41.71916 15.89123 9.751017 12.03239 3.841944 1.626999

52.19843 8.791133 3.825527 11.25514 4.327617 2.112765

38.65635 17.74484 11.73475 13.09604 3.056409 1.098891

35.46919 22.87403 10.17123 13.51889 2.883975 0.730474

38.2669 21.44678 8.05773 9.195278 7.781315 0.945521

34.62237 20.33968 14.17155 12.51956 3.476256 1.3919

52.84769 8.45107 3.345572 10.95603 4.525855 2.287028

44.77453 14.41609 7.349354 12.4735 3.545516 1.374582

46.63262 12.66564 6.855662 11.86312 3.90358 1.782155

32.50662 31.00525 4.905545 10.52529 5.097056 2.213203

33.85988 21.10414 14.3213 7.428307 6.810336 4.400979

48.44082 12.00573 5.192461 11.67937 4.08786 1.81857

35.11869 23.37407 10.07032 9.248539 5.805976 3.113193

43.24062 19.03263 4.162559 11.20441 4.478212 2.009018

57.47438 4.883476 1.220005 9.79005 5.240061 3.030696

33.68263 27.12847 7.730976 11.41688 4.400191 1.814259

32.85323 32.06574 3.241621 10.59399 5.096094 2.122209

41.22343 16.52805 9.69017 13.36424 2.979364 0.828544

48.92661 11.082 5.591575 12.0461 3.803832 1.65146


51.6235 9.531321 3.753178 11.04874 4.464229 2.232961

55.04919 7.317798 1.694247 10.64785 4.762215 2.429904

34.11231 28.14652 6.004418 11.09086 4.660898 1.953343

27.05983 25.2078 18.73676 7.924933 6.472297 4.121948

32.17177 23.68498 13.64798 8.665322 7.056335 2.473955

27.71357 25.02872 18.06929 8.903935 5.836784 3.537944

44.82234 13.9058 7.863463 11.51203 4.178907 1.938517

29.7591 23.57614 16.98772 9.06479 5.738931 3.434059

33.77861 20.92359 14.63508 11.8018 3.994319 1.762005

52.25295 8.911423 3.616867 11.30804 4.274608 2.101915

53.66785 7.97582 2.790276 9.821496 5.872395 2.274098

58.56509 4.151727 0.594364 10.04749 5.07135 2.879307

25.93267 25.92566 19.42719 8.023622 6.40598 4.065419

27.61006 26.81573 16.18199 8.628711 6.078592 3.62933

57.71834 5.028017 0.729273 10.17392 5.024564 2.764292

59.18102 3.707731 0.276151 9.919943 5.153258 2.957064


Appendix B: Fel’dman Model

Recall that there are two sectors within this model: a consumer sector and a producer, or production, sector. An example output of the former could be a lampshade, and an example output of the latter could be a machine tool. We can define k^p and k^c as the capital stock of the producer sector and the consumer sector, respectively. Furthermore, we can define how these two variables evolve.

π‘˜π‘‘π‘ = (1 βˆ’ 𝛿)π‘˜π‘‘βˆ’1

𝑝 + 𝛼𝑦𝑑𝑝

Eq. 14: Producer Sector Capital Stock in Period t

To begin π‘˜π‘‘π‘ is the amount of producer capital stock in period t. We let 𝛿 be a constant depreciation

rate for capital, and 𝛼 is the share of producer goods invested towards the production of producer

goods. Lastly, 𝑦𝑑𝑝 is the output of the producer goods sector.

π‘˜π‘‘π‘ = (1 βˆ’ 𝛿)π‘˜π‘‘βˆ’1

𝑐 + (1 βˆ’ 𝛼)𝑦𝑑𝑝

Eq. 15: Consumer Sector Capital Stock in Period t

Continuing as before, k_t^c is the amount of consumer capital stock in period t. Again, δ is the constant rate at which capital depreciates. Since α is the share of producer goods invested towards the production of producer goods, (1 - α) is the share of producer goods shifted towards the consumer sector. Finally, y_t^p is the amount of output in the producer sector.

We can further refine the two equations if we let output in each sector be proportional to that sector's capital stock in period t.

y_t^p = h k_t^p

Eq. 16: Output in the Producer Sector

Output in the producer sector is defined as some constant h multiplied by the capital stock in the producer sector in period t. We can repeat the same process for output in the consumer goods sector.

𝑦𝑑𝑐 = π‘π‘˜π‘‘

𝑐

Eq. 17: Output in the Consumer Sector

For Eq. 17, output in the consumer sector is the product of some constant p and the capital stock in the consumer sector in period t. Once that has been achieved, we can rearrange Eq. 14 and Eq. 15 by substituting Eq. 16 for y_t^p and solving for the period-t capital stocks. See below.

π‘˜π‘‘π‘ =

(1 βˆ’ 𝛿)π‘˜π‘‘βˆ’1𝑝

(1 βˆ’ π›Όβ„Ž)

Eq. 18: Rearranged Producer Sector Capital Stock in Period t

π‘˜π‘‘π‘ =

(1 βˆ’ 𝛿)(1 βˆ’ 𝛼)π‘˜π‘‘βˆ’1𝑝

(1 βˆ’ π›Όβ„Ž)


Eq. 19: Rearranged Consumer Sector Capital Stock in Period t

Now all that remains is to combine Eq. 19 with Eq. 17 to obtain a complete formula for output in the consumer sector based on the capital stock present in the producer goods sector.

y_t^c = p [ (1 - δ)(1 - α) / (1 - αh) ] k_{t-1}^p

Eq. 20: Final Output in the Consumer Sector

As Eq. 20 makes clear, output in the consumer sector for the current period depends on the amount of capital stock in the producer sector in the previous period. Thus, output in the consumer goods sector is linked to the amount of capital in the producer goods sector. Let us return to α, the share of producer goods diverted towards the production of producer goods. Increasing α directly reduces the amount of capital invested in the consumer goods sector. Indirectly, however, it increases that investment, because a larger α makes the capital present in the producer goods sector grow faster, which in turn raises output in the consumer goods sector. Thus, by injecting the vast majority of the available capital stock into the producer sector, we also stimulate output in the consumer goods sector.

The questions that remain are how long it takes for the consumer goods sector to respond, and for how long one should favor the production of producer goods over the consumer goods demanded by the populace. How long should such a policy of austerity be maintained? This is a model of rapid industrialization that grants preferential status to producer goods by diverting a large portion of industrialization investment towards the production of more producer goods. This is industrialization with respect to the means of production, not the means of consumption. Political or societal pressures may prevent the model from being fully realized, which might leave the producer sector too small to have a meaningful impact on the output of consumer goods.
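The recursion above can be sketched numerically. The following MATLAB fragment is a minimal illustration only; the parameter values (δ, α, h, the horizon, and the initial capital stock) are assumptions chosen for demonstration, not values taken from this thesis. The update rules follow Eq. 18 and Eq. 20 directly.

```matlab
% Illustrative Fel'dman model simulation (all parameter values are assumptions)
delta = 0.10;   % depreciation rate (delta)
alpha = 0.30;   % share of producer output reinvested in producer goods (alpha)
h     = 0.50;   % producer-sector output per unit of producer capital
p     = 0.80;   % consumer-sector output per unit of consumer capital
T     = 20;     % number of periods to simulate

k_p = zeros(1, T);   % producer-sector capital stock
y_c = zeros(1, T);   % consumer-sector output
k_p(1) = 100;        % assumed initial producer capital stock

for t = 2:T
    % Eq. 18: producer capital grows by the factor (1 - delta)/(1 - alpha*h)
    k_p(t) = (1 - delta) * k_p(t-1) / (1 - alpha * h);
    % Eq. 20: consumer output depends on last period's producer capital
    y_c(t) = p * ((1 - delta) * (1 - alpha) / (1 - alpha * h)) * k_p(t-1);
end

plot(1:T, y_c)
xlabel('Period')
ylabel('Consumer-sector output')
title('Consumer output under the Fel''dman recursion')
```

With these assumed values the per-period growth factor is (1 - δ)/(1 - αh) = 0.9/0.85 > 1, so a larger α, despite diverting output away from consumers in the short run, compounds into faster consumer-output growth over the horizon.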