

Electrical Design Space Exploration for High Speed Servers

Caleb Wesley1
ECE Department, North Carolina State University

Bhyrav Mutnury2, Nam Pham3, Erdem Matoglu4, Moises Cases5
IBM Systems and Technology Group

11400 Burnet Road, Austin, TX 78758
Phone: (512) 823-6918; Fax: (512) 823-5938

[email protected], 2bmutnury, 3npham, 4matoglu, [email protected]

Abstract

Today's high-speed systems are characterized by a multitude of electrical design parameters due to their complexity. Therefore, performing a thorough design space exploration is computationally exhaustive. This is primarily the result of the number of variables (electrical parameters) needed to do a comprehensive analysis. In this paper, a Genetic Algorithm (GA) based optimization method is described as a solution to this problem. Traditional statistical approaches such as Design of Experiments (DoE) are explained, along with their advantages and the limitations that the GA based approach improves upon.

1. Introduction

Today's high-speed systems are characterized by a multitude of electrical design parameters due to their complexity. Therefore, performing a thorough design space exploration is computationally exhaustive. This is primarily the result of the number of variables (electrical parameters) needed to do a comprehensive analysis. Approaches such as Monte Carlo and design of experiment (DoE) methods such as orthogonal arrays and Central Composite Design (CCD) are currently used by designers for high-speed system analysis [1][2]. Even though these methods serve to reduce the design space, their use is limited to a linear or weakly non-linear design space. In many cases, they also assume limited interactions between factors and are confined to the structure of the experiment plan.

A novel genetic algorithm (GA) based optimization method is described in this paper to shrink the design space and mitigate some of the limitations of current design methods.

This paper is organized as follows. In section 2, the DoE based CCD approach is discussed. In section 3, the concept of genetic algorithms is described. In section 4, the specific genetic algorithm optimization approach taken in this paper for electrical design space exploration is explained in detail. In section 5, a DDR memory subsystem test case is evaluated with both the DoE based CCD approach and the GA based optimization technique, and the results are compared.

2. Statistical Methods

Figure 1 shows a typical DDR memory subsystem with a driver, transmission lines, and receivers. Such a system has numerous electrical design parameters; variations can include transmission line impedance, transmission line length, and driver and receiver settings. As the number of variations increases, the search space grows exponentially. Thus, there is a need for methods that find the worst case and best case corners without searching the entire space, and that also determine the sensitivity of the output to parameter variations.

One common method currently used by designers is the Monte Carlo statistical approach. Monte Carlo is a brute-force exhaustive search technique in which simulations are selected at random and run until a correct answer is discovered or the entire space has been traversed. An obvious problem arises when there is no way of determining whether a given answer is correct, which is exactly the situation when trying to find the best or worst case corner. In this case, with a purely random selection, many simulations must be run before a sufficient confidence level is obtained. Thus, for situations with no a priori knowledge or assumptions about the data, Monte Carlo scales linearly with the design space. It can be, and still is, used for small design spaces. However, as the design space grows exponentially, its usefulness is limited and a technique that can reduce the design space is desired.


Figure 1. DDR memory subsystem

One reductive method involves estimating points across a linear space that give a good description of the entire space. This approach is used in DoE methods. The goal is to find the smallest number of experiments or simulations that accurately cover the design space. Research is currently being done to find orthogonal arrays for a wide variety of spaces, but their use is still limited to certain factor (variable) and level (number of variations) settings [3]. A typical example involves the designer choosing high, typical, and low settings for a group of parameters, which amounts to a multi-factor, 3-level design. Figure 2 shows an example matrix for a 4-factor, 3-level orthogonal array; a concrete sketch of such an array is given after the figure.



Figure 2. 4-Factor, 3-level orthogonal array
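As a concrete illustration of the reduction an orthogonal array provides (the specific array in Figure 2 is not reproduced here), the sketch below enumerates a standard Taguchi L9(3^4) array, which covers a 4-factor, 3-level space with 9 runs instead of the 3^4 = 81 full factorial. The parameter names and level values are hypothetical stand-ins for board-level settings.

```python
# Standard Taguchi L9(3^4) orthogonal array; levels coded 0 (low), 1 (typical), 2 (high).
# Every pair of columns contains all nine level combinations exactly once.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Hypothetical level tables for four parameters (values are illustrative only).
levels = {
    "lead_tline_z0": (45.0, 50.0, 55.0),   # ohms
    "dimm_tline_z0": (45.0, 50.0, 55.0),   # ohms
    "line_length":   (2.0, 3.0, 4.0),      # inches
    "driver_slew":   (1.0, 2.0, 3.0),      # V/ns
}

names = list(levels)
for run, row in enumerate(L9, start=1):
    settings = {name: levels[name][lvl] for name, lvl in zip(names, row)}
    print(f"run {run}: {settings}")
```

Nine such runs stand in for the 81-point full factorial, which is the kind of reduction the DoE methods above rely on.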

The DoE approaches usually rely on a linear or weakly non-linear design space. They do not natively handle the local spikes intermittent in a non-linear space, so the amount of important data missed increases drastically for non-linear systems. The resolution and structure of the DoE array also limit the number of interactions that can be identified between the various factors in the experiment. In this paper, a GA based optimization approach is proposed to address these limitations.

3. Genetic Algorithms

For non-linear systems, GA based optimization techniques are effective [4][5]. They rely on natural genetics and survival of the fittest to achieve the desired result.

A typical execution of a GA starts with an initial group of variable settings called a population, and each population is considered a generation. The variable settings for one member of a population are called a chromosome. These chromosomes are typically converted into a string of bits (1s and 0s) called genes for easier manipulation. After the initial population is set up, a fitness function is used to determine which chromosomes are the "best" based on their proximity to the optimal solution. Each chromosome is fed into the function to obtain its objective ranking. The fitness function can be tailored to evolve the GA toward the desired results.
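To make the encoding concrete, the following minimal sketch (the names and bit layout are assumptions, not the authors' implementation) maps a chromosome of per-parameter level indices to and from a bit string of genes, using two bits per 3-level parameter.

```python
# Minimal sketch (assumed encoding): one level index (0 = low, 1 = typical, 2 = high)
# per electrical parameter, flattened into a bit string of "genes".
PARAMS = ["driver", "bit_pattern", "lead_z0", "dimm_z0",
          "d2d_z0", "brkout_z0", "c_comp", "dimm_setting"]

def encode(levels):
    """Convert a list of level indices (0-2) into a gene bit string, 2 bits per parameter."""
    return "".join(format(lvl, "02b") for lvl in levels)

def decode(genes):
    """Convert a gene bit string back into level indices, clamping the unused code 3 to 2."""
    return [min(int(genes[i:i + 2], 2), 2) for i in range(0, len(genes), 2)]

chromosome = [2, 0, 1, 1, 2, 0, 0, 1]   # one member of the population
genes = encode(chromosome)               # '1000010110000001'
assert decode(genes) == chromosome
```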

After the chromosomes have been ranked, a method for determining which chromosomes to retain is used to create a new population. One common method, and the one used in this paper, is a roulette wheel, which gives each chromosome a percentage value based on its output from the fitness function. Chromosomes with a better fitness value are given a larger percentage of the wheel. A random number generator is used to spin the wheel and select two or more of the chromosomes. To mimic typical genetics, these chromosomes can be interchanged in various ways in what is called crossover. There is also a chance that a chromosome acquires a small random change to its genes, mimicking mutation. The chance of mutation is usually low, but it is necessary to keep the GA from settling in a local minimum or maximum. Improvements such as the master-to-slave nonlinear genetic algorithm, which focuses on a nonlinear mutation operator that acts only in the global rather than the local search space, can be applied to improve accuracy in nonlinear spaces [6].

An example of crossover and mutation is shown in Figure 3. The darkly shaded chromosome is broken up and swapped with each of the other chromosomes. This process is repeated until the population for the new generation is filled.


Figure 3. Crossover and mutation

A common addition to the GA is elitism, where the best chromosome in each generation is always carried over to the next generation unchanged, even at the expense of other chromosomes. This ensures that the GA gradually converges on a value rather than oscillating.
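The selection and variation operators described above can be sketched as follows. The roulette wheel, 3-chromosome crossover, single-bit mutation, and elitism follow the text; the function names, the string representation (from the encoding sketch above), and the fitness-maximization convention are assumptions.

```python
import random

def roulette_select(population, fitness_values, k):
    """Pick k chromosomes with probability proportional to their fitness (the roulette wheel)."""
    total = sum(fitness_values)
    wheel = [f / total for f in fitness_values]
    return random.choices(population, weights=wheel, k=k)

def crossover3(parents):
    """Cross three chromosomes at two random cut points, as in Figure 3."""
    n = len(parents[0])
    a, b = sorted(random.sample(range(1, n), 2))
    p0, p1, p2 = parents
    # Rotate the middle and tail segments among the three parents.
    return [p0[:a] + p1[a:b] + p2[b:],
            p1[:a] + p2[a:b] + p0[b:],
            p2[:a] + p0[a:b] + p1[b:]]

def mutate(genes, p_mut):
    """Flip each bit independently with probability p_mut."""
    return "".join(bit if random.random() > p_mut else "10"[int(bit)] for bit in genes)

def next_generation(population, fitness_values, p_cross=0.5, p_mut=0.05):
    """Build the next population: elitism, roulette selection, crossover, mutation."""
    elite = population[max(range(len(population)), key=lambda i: fitness_values[i])]
    new_pop = roulette_select(population, fitness_values, len(population))
    if random.random() < p_cross:
        new_pop[:3] = crossover3(new_pop[:3])
    new_pop = [mutate(c, p_mut) for c in new_pop]
    new_pop[0] = elite   # elitism: carry the best chromosome over unchanged
    return new_pop
```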

4. Genetic Algorithm Approach for Design Space Exploration

The focus of this paper is to apply a GA based optimization technique to quickly find the best and/or worst case corner values. In high-speed electrical analysis, it is important to ensure that the voltage at the receiver input (the far end of the transmission line) is compliant with the ASIC requirements or the interface specification. One way to ensure compliance is to plot an eye diagram of the voltage at the receiver. Eye diagrams are plotted by repeatedly overlapping the voltage signal over a cycle period determined by the bit stream frequency.

Attenuation through a lossy channel and reflections from impedance mismatches and connectors are visible in an eye diagram. The height of the eye gives the voltage margin available for the interface; attenuation in a channel usually manifests itself as eye height closure. The width of the eye gives timing information; jitter and intersymbol interference (ISI) in a channel result in eye width closure. Figure 4 shows a typical eye diagram with eye height and width information, and a simplified sketch of how these metrics can be extracted from a waveform follows the figure.




Figure 4. Eye diagram
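The sketch below is a simplified, assumed post-processing step (not the authors' measurement tool) showing one way eye height and width could be estimated from a simulated far-end waveform by folding it on the unit interval.

```python
import numpy as np

def eye_metrics(t, v, ui, threshold=0.9):
    """t, v: waveform sample arrays; ui: unit interval in seconds; threshold: slicer level in volts."""
    phase = (t % ui) / ui                        # sample position inside the UI (0..1)
    centre = (phase > 0.4) & (phase < 0.6)       # samples near the decision point
    highs = v[centre & (v > threshold)]
    lows = v[centre & (v <= threshold)]
    eye_height = highs.min() - lows.max()        # vertical opening at the sampling point
    # Threshold crossings cluster near the UI boundary; their spread is a crude jitter estimate.
    edges = np.where(np.diff((v > threshold).astype(int)) != 0)[0]
    offset = np.minimum(phase[edges], 1.0 - phase[edges])
    eye_width = ui * (1.0 - 2.0 * offset.max())  # UI minus peak-to-peak crossing spread
    return eye_height, eye_width
```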

The fitness functions used in this paper for the calculation of the best and worst case eyes are as follows.

Best case: Eye Height + Eye Width
Worst case: Abs(Eye Max Height - Eye Height) + Abs(Eye Max Width - Eye Width)

Taking the absolute value in the worst case calculation ensures a positive value, so the same roulette wheel method can be applied to both the best case and worst case corner functions.
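Expressed in code, the two fitness functions look like the sketch below. The maximum-eye constants are placeholders (not values from the paper), and in practice the height and width terms may need to be normalized to comparable scales before being summed.

```python
EYE_MAX_HEIGHT = 0.450   # V, hypothetical maximum achievable eye height
EYE_MAX_WIDTH = 300e-12  # s, hypothetical maximum achievable eye width (one bit period)

def fitness_best(eye_height, eye_width):
    """Best case corner: reward wide-open eyes."""
    return eye_height + eye_width

def fitness_worst(eye_height, eye_width):
    """Worst case corner: reward eyes far from the maximum possible opening."""
    return abs(EYE_MAX_HEIGHT - eye_height) + abs(EYE_MAX_WIDTH - eye_width)
```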

Figure 5 shows a flowchart of the genetic algorithm used to find the best and worst case design corners. The values for population size, mutation/crossover probability, stopping criteria, and elitism are set initially.

The population is then created, with each chromosome based on random variable settings unless specifically biased. Each chromosome in the population is simulated and the eye is calculated. The eye height and width are fed into the fitness function to determine the probability of each chromosome carrying through to the next generation. A roulette wheel is created from these probabilities and spun repeatedly until a new population of the same size is generated. A random die then determines whether the first three chromosomes should be crossed over, and another random die is cast for each bit in every chromosome to decide whether it should be mutated (flipped). This finalizes the new generation; these decks are run and the cycle continues until the termination condition is met.

The crossover used in this approach involves three chromosomes from the previous generation. Each of the three chromosomes is divided into three sections based on two random separation points, and all three sections are crossed over with each other as shown in Figure 3. Mutation is important for nonlinear design space optimization because of the multiple false local minima and maxima the algorithm can reach. One way to prevent this local convergence is for the GA based approach to use a mutation probability that gradually increases over the iterations: the low initial probability allows a relatively quick convergence onto one peak at the start, while the higher mutation probability at later stages causes the search to branch out toward the global optimum. A sketch of this loop is given below.
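The loop below sketches the flow of Figure 5 together with the ramped mutation probability and a stall-based termination criterion (N consecutive generations with an unchanged best fitness, as used in the results section). It reuses the helper sketches above, and simulate_deck() is a hypothetical stand-in for building and running a SPICE deck that returns the measured eye height and width.

```python
import random

def run_ga(pop_size=5, n_stall=20, p_mut_start=0.05, p_mut_end=0.20, max_gen=500):
    # Random initial population of bit-string chromosomes (see the encoding sketch).
    population = [encode([random.randrange(3) for _ in PARAMS]) for _ in range(pop_size)]
    best, best_fit, stall, gen = None, float("-inf"), 0, 0
    while stall < n_stall and gen < max_gen:
        # simulate_deck() is hypothetical: generate a SPICE deck for these settings,
        # run it, and return (eye_height, eye_width) from the far-end waveform.
        eyes = [simulate_deck(decode(c)) for c in population]
        fits = [fitness_worst(h, w) for h, w in eyes]
        top = max(fits)
        if top > best_fit:
            best, best_fit, stall = population[fits.index(top)], top, 0
        else:
            stall += 1
        # Mutation probability ramps up over the run to escape false local optima.
        p_mut = p_mut_start + (p_mut_end - p_mut_start) * gen / max_gen
        population = next_generation(population, fits, p_mut=p_mut)
        gen += 1
    return decode(best), best_fit
```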

This genetic algorithm theoretically improves each generation up to the optimal point.

Figure 5. Flowchart of the GA based optimization approach: initialize the crossover/mutation probabilities, population size, and number of iterations; initialize the population; create and simulate SPICE decks and calculate eye width and eye height; calculate the fitness function for each chromosome; use the fitness outputs to build a roulette wheel and spin it to create a new population; perform a 3-chromosome crossover with two independent points in each chromosome; perform random single-bit mutation; repeat until the iteration count is reached.



Biasing the initial population can reduce the number of simulations needed for convergence even further. One way to bias the initial population is to use the results from a DoE study: the best case and worst case corners can be included in the initial population along with other random chromosomes. This helps find the best/worst corners faster and also benefits the sensitivity analysis by ensuring a wide set of experiments.

After the GA converges onto a corner, the sensitivity of the experiments is analyzed using Analysis of Variance (ANOVA) [7][8]. ANOVA is used to determine the interactions among the variables and between the variables and the output. The method fits a curve that is affected by each factor alone and by the interaction of each factor with the others. In this paper only data up to the second order is considered, meaning that interactions of at most two independent parameters are included. For a simple two-parameter case, the ANOVA model can be written as:

Y_{ijk} = \mu + \gamma_i + \beta_j + (\gamma\beta)_{ij} + \epsilon_{ijk}    (1)

where Y_{ijk} is the measurement in the kth deck, \mu is the mean of all simulations, \gamma_i is the main effect of the first parameter, \beta_j is the main effect of the second parameter, (\gamma\beta)_{ij} is the interaction effect of the two factors, and \epsilon_{ijk} is the error term.

One note of special significance: when a design is unbalanced, meaning that different parameters (factors) have different numbers of settings (levels), the order in which the factor effects are calculated can influence the overall result.
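A second-order ANOVA of this form can be run with standard statistical tooling. The sketch below (column names and data are illustrative only, not results from the paper) fits a two-factor model with interaction and prints the ANOVA table; switching typ=2 to typ=1 reproduces the order-dependent behaviour noted above for unbalanced designs.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative data: two factors and a simulated eye-height response in mV.
df = pd.DataFrame({
    "lead_tline": ["low", "low", "nom", "nom", "high", "high"] * 2,
    "c_comp":     ["low"] * 6 + ["high"] * 6,
    "eye_height": [148, 150, 156, 157, 164, 166,
                   144, 146, 152, 154, 159, 161],
})

# Fit main effects plus the two-factor interaction, as in equation (1).
model = ols("eye_height ~ C(lead_tline) * C(c_comp)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # typ=1 is order-dependent for unbalanced data
```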

5. Results

A DDR memory subsystem, shown in Figure 6, is used as a test case. A write cycle is modeled, with the memory controller driving six transmission lines to the farthest DIMM. The first DIMM is terminated with an on-die termination of 100 ohms. The parameters varied in the test case include the driver settings, bit pattern setting, lead transmission line impedance, DIMM transmission line impedance, DIMM-to-DIMM transmission line impedance, and DIMM settings. In total, eight different parameters with three variations each are taken into account, so the size of the design space is

3 × 3 × 3 × 3 × 3 × 3 × 3 × 3 = 3^8 = 6561
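For reference, the full factorial space can be enumerated directly, which makes the 6561-deck count explicit:

```python
from itertools import product

# Full factorial enumeration of 8 parameters at 3 levels each.
full_factorial = list(product(range(3), repeat=8))
print(len(full_factorial))   # 6561 candidate simulation decks
```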

[Figure 6 labels: Brkout, Lead_tline, C_comp, Dim_tline, D2d_tline]

Figure 6. DDR memory subsystem used in the test cases

Case 1

For this test case, all genetic algorithm simulations were run using elitism with a random initial population of five chromosomes. The crossover probability was static at 50% and the mutation probability was static at 5%. Six separate GA simulations were run: three with a termination criterion of 20 consecutive generations with the same best fitness value, two with a criterion of 40 consecutive generations, and one with 60 consecutive generations. A DoE based CCD simulation with 8 factors and 3 levels was also performed. The results for all seven simulations are shown in Table 1.

Bar graphs of the worst case corners for runs #3, #5, and #6 are shown in Figures 7, 8, and 9, respectively, one for each termination criterion. In these graphs the eye height is on the left y-axis, the eye width is on the right y-axis, and the x-axis is the generation number in the GA simulation. The benefit of elitism is easily observed in the graphs, as the overall eye results gradually improve throughout the whole simulation. An interesting note for the worst case graph is the steady result between generations 42 and 68 in Figure 9; if this same simulation had been run with only a 20-generation termination criterion, the result would not have improved beyond this point. This is further exemplified in the table, where increasing the number of generations before stopping improves the accuracy of the result: at only 20 iterations the answers varied widely, at 40 iterations they became more stable, and at 60 iterations the best result of the simulations was achieved. Also of note in the table is the improvement of both the minimum and maximum eye over the CCD simulation in run #6. This is a direct benefit of the randomness innate to genetic algorithms; the assumptions of linearity necessary for CCD sometimes lead to missed points. Thus a GA simulation can improve upon CCD while also offering the advantages of unbalanced designs and nonlinear spaces.

Traversing this search space exhaustively with a Monte Carlo strategy takes hours or even days: even with a modest estimate of 60 seconds per simulation, the 6561 decks would take about four and a half days to complete. This is clearly not a feasible approach.



Table 1. Results of simulation runs

Experiment                              | Largest eye                        | Smallest eye                      | # of simulation runs
GA #1 - mutation 0.05, 20 gen. to exit  | Height: +/-211.7 mV, Width: 300 ps | Height: +/-87.9 mV, Width: 150 ps | Generations: 76, decks run: 215
GA #2 - mutation 0.05, 20 gen. to exit  | Height: +/-215 mV, Width: 290 ps   | Height: +/-68.6 mV, Width: 140 ps | Generations: 55, decks run: 179
GA #3 - mutation 0.05, 20 gen. to exit  | Height: +/-216 mV, Width: 300 ps   | Height: +/-60.2 mV, Width: 128 ps | Generations: 83, decks run: 242
GA #4 - mutation 0.05, 40 gen. to exit  | Height: +/-212.6 mV, Width: 300 ps | Height: +/-43.9 mV, Width: 140 ps | Generations: 149, decks run: 490
GA #5 - mutation 0.05, 40 gen. to exit  | Height: +/-220.4 mV, Width: 290 ps | Height: +/-58.1 mV, Width: 130 ps | Generations: 168, decks run: 524
GA #6 - mutation 0.05, 60 gen. to exit  | Height: +/-212 mV, Width: 300 ps   | Height: +/-56.5 mV, Width: 100 ps | Generations: 224, decks run: 705
DoE based CCD - 8 factors, 3 levels     | Height: +/-230.5 mV, Width: 270 ps | Height: +/-43.8 mV, Width: 130 ps | Decks run: 81

Figure 7. Worst case graph for GA run #3 - 20 iterations

Figure 8. Worst case graph for GA run #5 - 40 iterations

Figure 9. Worst case graph for GA run #6 - 60 iterations

An analysis of variance was done on the results. The three most sensitive parameters for each of the simulations are shown in Table 2. The only difference in the most sensitive parameters between the GA and the CCD is the bit pattern setting versus Lead_tline. A test was done leaving all other settings nominal while varying these two variables from low to nominal to high; the results are shown in Table 3. While not a complete measurement of each variable's sensitivity, the results show the similarity between the two variables, with Lead_tline causing slightly more variation in the overall eye.

Table 2 also contains the sensitivity results for the combined CCD and GA data, which merge the individual results: Dim_tline (dominant in the CCD) has a stronger effect than in the GA alone, and Lead_tline (dominant in the GA) has a greater effect than in the CCD alone.





Table 2. The three most sensitive parameters for each experiment

Experiment | Sensitive parameter 1 | Sensitive parameter 2 | Sensitive parameter 3
GA Run #1  | C_comp                | Lead_tline            | Dim_tline
GA Run #2  | C_comp                | Lead_tline            | Dim_tline
GA Run #3  | C_comp                | Lead_tline            | Dim_tline
GA Run #4  | C_comp                | Dim_tline             | Lead_tline
GA Run #5  | C_comp                | Dim_tline             | Lead_tline
GA Run #6  | Dim_tline             | Lead_tline            | C_comp
CCD        | C_comp                | Dim_tline             | Bit pattern
CCD + GA   | C_comp                | Dim_tline             | Lead_tline

Table 3. Sensitivity analysis results with all parameters set to nominal except the desired variable

Parameter setting              | Bit stream setting (+/- mV / ps) | Lead_tline (+/- mV / ps)
Low                            | Height: 152.1, Width: 220        | Height: 147.2, Width: 230
Nominal                        | Height: 155.7, Width: 250        | Height: 155.7, Width: 250
High                           | Height: 158.4, Width: 260        | Height: 165.1, Width: 260
Difference between high and low | Height: 6.3, Width: 40          | Height: 17.9, Width: 30

Case 2

One aspect that strongly influences how quickly a genetic algorithm converges is the initial population. Case 1 showed that the GA improved on the results of the CCD approach with just a random initial population. One idea is to run the GA after the CCD and bias the initial population with the results from the CCD simulation. There are two main reasons for this: quicker convergence, by guaranteeing good starting points, and improved sensitivity analysis. Since the GA is a random search technique, there is a chance that it will not cover a wide enough area of the space to give a good sensitivity result; adding the CCD sensitivity results ensures a good variety of points for the GA sensitivity analysis.

For this test case, the three best eye results from the CCD are placed into an initial population of five chromosomes; the other two chromosomes are randomized. Another difference in this simulation is that the number of generations to run is set at the start rather than determined by consecutive identical values, because a near-optimal result is already available when the genetic algorithm starts. The mutation probability is kept constant at 5%. A sketch of this seeding is given below.
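The biased initialization can be sketched as follows, reusing the encoding helpers from the earlier sketches; the level indices standing in for the CCD corners are placeholders, not the actual corner settings.

```python
import random

ccd_best_corners = [            # placeholder level indices, not the actual CCD corners
    [2, 0, 1, 1, 2, 0, 0, 1],
    [2, 0, 2, 1, 2, 0, 0, 1],
    [2, 1, 1, 1, 2, 0, 0, 1],
]

def seeded_population(pop_size=5):
    """Three chromosomes taken from the CCD corners, the remainder random."""
    seeds = [encode(corner) for corner in ccd_best_corners]
    randoms = [encode([random.randrange(3) for _ in PARAMS])
               for _ in range(pop_size - len(seeds))]
    return seeds + randoms
```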

Table 4 shows the results of the CCD-biased simulation compared with the best random-initial-population simulation that used the 20-generation termination criterion (GA run #3, 83 generations total) and with the CCD alone. The table shows that overall better results are achieved with only half the generations and with 37% fewer decks run.

Table 4. Results from the CCD-biased GA optimization approach

Experiment                   | Best case (+/- mV / ps)  | Worst case (+/- mV / ps)  | Simulation effort
GA Run #3 with 20 iterations | Height: 212, Width: 300  | Height: 53.2, Width: 130  | Generations: 83, decks: 242
CCD + GA, 40 generations     | Height: 230, Width: 300  | Height: 50.3, Width: 120  | Generations: 40, decks: 153
CCD                          | Height: 230, Width: 270  | Height: 43.8, Width: 130  | Decks: 81

Conclusions

In this paper, a GA based optimization technique is described for design space exploration. The GA based approach mitigates the limitations of the DoE based CCD approach: it gives similar or better results in linear spaces and also scales well to non-linear spaces. It is not restricted to pre-defined orthogonal arrays and supports unbalanced parameters, allowing a much wider set of usable problems. In addition, biasing the initial GA population with the CCD results improves both the convergence and the sensitivity results.

References

[1] E. Matoglu, N. Pham, D. N. de Araujo, M. Cases, and M. Swaminathan, "Statistical signal integrity analysis and diagnosis methodology for high-speed systems," IEEE Transactions on Advanced Packaging, vol. 27, pp. 611-629, Nov. 2004.

[2] K. Y. Wong, V. P. Singh, and J. S. Rustagi, "Statistical methods in manufacturing," Fifteenth IEEE/CHMT International Electronic Manufacturing Technology Symposium, pp. 215-218, Oct. 1993.

[3] Y. Zhang, "Orthogonal arrays obtained by repeating-column difference matrices," Discrete Mathematics, vol. 307, pp. 246-261, 2007.

[4] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 3rd rev. ed., Springer Verlag, March 1996.

[5] D. E. Goldberg, Genetic Algorithms in Search, Optimization & Machine Learning, Addison Wesley Longman, Inc., 1989.

[6] Z.-H. Cui, J.-C. Zeng, and Y.-B. Xu, "Master-to-slave nonlinear genetic algorithm," Proceedings of the 42nd IEEE Conference on Decision and Control, vol. 4, pp. 3830-3833, Dec. 2003.

[7] J. Mandel, The Statistical Analysis of Experimental Data, Dover Publications, 1984.

[8] S. K. Kachigan, Multivariate Statistical Analysis, 2nd ed., Radius Press, June 1991.
