DARPA

Assessing Parameter and Model Sensitivities of Cycle-Time Predictions Using GTX

Andrew B. Kahng, Farinaz Koushanfar, Hua Lu, Dirk Stroobandt

Abstract

The GTX (GSRC Technology Extrapolation) system serves as a flexible platform for integration and comparison of various studies, aimed at calibrating and predicting achievable design in future technology generations. The flexibility of GTX makes it particularly useful for
1. development of new studies that model particular aspects of design and technology, and
2. emulation, comparison, and evaluation of various technology extrapolation methods.
In this poster, we highlight the ability of GTX to evaluate the sensitivity of existing (or newly developed) estimation methods to their input parameters and to their implicit modeling choices.

We integrate three highly influential cycle-time models within GTX and compare the clock frequencies that result when primary input parameters are common to all models. The models' sensitivities to input parameter changes (parameter sensitivity) as well as to changes in the components of the estimation model (model sensitivity) are evaluated next. Our results reveal a surprisingly high level of uncertainty inherent in predictions of future CPU timing. In particular, existing cycle-time models are extremely sensitive both to modeling choices and to changes in device parameters.

GTX: The GSRC Technology Extrapolation System

- Evaluates the impact of both design and process technology on achievable design and associated design problems.
- Sets new requirements for CAD tools and methodologies.
- Allows easy integration, evaluation and comparison of several technology extrapolation efforts.
- Is based on the concepts of (see the sketch after this list):
  - “parameters” (technology description)
  - “rules” (derivation methods)
  - “rule chains” (inference chains)
  - a “derivation engine” (executes rule chains)
  - a “GUI” (represents results, provides user interaction)
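These concepts map naturally onto a small data model. The Python sketch below is illustrative only: it shows one plausible way parameters, rules, rule chains, and a derivation engine could fit together, and does not reflect GTX's actual implementation or APIs; all names (Rule, DerivationEngine, clock_rule, cycle_time_ps) are hypothetical.

```python
# Illustrative-only sketch of the GTX concepts listed above; not GTX's actual code.
from dataclasses import dataclass
from typing import Callable, Dict, List

Parameters = Dict[str, float]  # "parameters": the technology description, name -> value

@dataclass
class Rule:
    """A "rule": a derivation method computing output parameters from inputs."""
    name: str
    inputs: List[str]
    outputs: List[str]
    compute: Callable[[Parameters], Parameters]

class DerivationEngine:
    """The "derivation engine": executes a rule chain (an ordered inference chain)."""
    def run(self, chain: List[Rule], params: Parameters) -> Parameters:
        values = dict(params)
        for rule in chain:
            missing = [p for p in rule.inputs if p not in values]
            if missing:
                raise KeyError(f"rule '{rule.name}' is missing inputs: {missing}")
            values.update(rule.compute(values))
        return values

# Hypothetical rule: derive clock frequency (MHz) from a cycle time in picoseconds.
clock_rule = Rule(
    name="clock_frequency",
    inputs=["cycle_time_ps"],
    outputs=["clock_frequency_mhz"],
    compute=lambda v: {"clock_frequency_mhz": 1e6 / v["cycle_time_ps"]},
)
```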

Sensitivity Analyses

- Goal
  - Investigate sensitivity of existing cycle-time prediction models
  - Evaluate roadmapping efforts
- Types of sensitivity
  - Parameter sensitivity: influence of changes in the primary input parameters
  - Model (rule) sensitivity: influence of changes in the estimation model itself

Experimental setup

- Integration of three highly influential cycle-time models within GTX
  - SUSPENS (Stanford University System Performance Simulator)
  - BACPAC (Berkeley Advanced Chip Performance Calculator)
  - Model of Fisher and Nesbitt, which provides cycle-time values for the current SIA ITRS roadmap
- Implementation of some extensions and optimizations
  - Takahashi's extension to SUSPENS to introduce clock slew calculations
  - Optimizations for wire sizing and buffer sizing from IPEM (Interconnect Performance Estimation Models, Jason Cong, UCLA)
- GTX successfully duplicates original results
- Additional requirements for maximal interoperability of rules
  - Same granularity in all models, preferably as low as possible (may have to split some rules)
  - Uniform parameter names: naming convention!
  - Conversion rules sometimes necessary (vector to single values)
- Three different experiments
  1. For the same primary inputs, compare the results for different models (model sensitivity).
  2. For each model, change the input parameters by +/- 10% and note the difference in the resulting clock frequency (parameter sensitivity; see the sketch after this list).
  3. For each rule in a model, replace it with a rule from another model that computes the same parameter and record the difference in clock frequency (model sensitivity).
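As a concrete illustration of experiment 2, the sketch below perturbs each primary input by +/-10% and reports the worst-case relative change in predicted clock frequency. The model function and parameter names (predict_clock_mhz, logic_depth, stage_delay_ps, global_delay_ps) are hypothetical stand-ins, not the actual SUSPENS, BACPAC, or Fisher equations.

```python
# Hypothetical sketch of experiment 2 (parameter sensitivity): perturb each primary
# input by +/-10% and record the relative change in predicted clock frequency.
from typing import Callable, Dict

def predict_clock_mhz(p: Dict[str, float]) -> float:
    # Toy stand-in for a cycle-time model: logic path delay + global delay (ps).
    cycle_time_ps = p["logic_depth"] * p["stage_delay_ps"] + p["global_delay_ps"]
    return 1e6 / cycle_time_ps

def parameter_sensitivity(model: Callable[[Dict[str, float]], float],
                          base: Dict[str, float]) -> Dict[str, float]:
    f0 = model(base)
    sensitivity = {}
    for name in base:
        deltas = []
        for scale in (0.9, 1.1):               # subtract/add 10% of the value
            perturbed = dict(base, **{name: base[name] * scale})
            deltas.append(abs(model(perturbed) - f0) / f0)
        sensitivity[name] = max(deltas)        # worst-case relative change
    return sensitivity

base_inputs = {"logic_depth": 15.0, "stage_delay_ps": 60.0, "global_delay_ps": 115.0}
print(parameter_sensitivity(predict_clock_mhz, base_inputs))
```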

Model sensitivity of cycle-time predictions

- Common primary input (PI) parameter base for all models (250nm technology, mainly follows default parameter values of BACPAC)
- Expect similar results for all models
- Obtain very different results for SUSPENS and rather close results for BACPAC and Fisher (see table)

Parameter Sensitivity of cycle-time predictions

- Change of a single input parameter value by subtracting/adding 10% of its value, and computation of the resulting clock frequency
- Simultaneous changes of various parameters (in subsets of up to 7 parameters; see the sketch after this list)
- BACPAC is the most robust model (may not be the best!)
- SUSPENS is very sensitive to parameter changes
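The simultaneous-change experiment can be sketched the same way: enumerate subsets of primary inputs up to a chosen size and apply +/-10% perturbations to every member of a subset at once. The code below reuses the hypothetical predict_clock_mhz and base_inputs from the previous sketch and is again illustrative only.

```python
# Hypothetical sketch of the simultaneous-perturbation experiment: perturb every
# subset of up to `max_size` primary inputs by +/-10% at once and record the
# largest relative change in predicted clock frequency per subset.
from itertools import combinations, product

def joint_sensitivity(model, base, max_size=7):
    f0 = model(base)
    worst = {}
    names = list(base)
    for k in range(1, min(max_size, len(names)) + 1):
        for subset in combinations(names, k):
            for scales in product((0.9, 1.1), repeat=k):   # +/-10% per parameter
                perturbed = dict(base)
                for name, s in zip(subset, scales):
                    perturbed[name] = base[name] * s
                change = abs(model(perturbed) - f0) / f0
                worst[subset] = max(worst.get(subset, 0.0), change)
    return worst

worst_cases = joint_sensitivity(predict_clock_mhz, base_inputs, max_size=3)
```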

Sensitivity to rules of other models (model sensitivity)

- Replacement of one rule of BACPAC / Fisher with a rule (or a set of rules) from another model (see the sketch after this list)
- BACPAC and Fisher are comparable except for a few rules
- The Fisher model shows more variation than BACPAC
- Differences are larger for local than for global delay
- Example for the BACPAC rule chain (see figure)
- Assessment of the effects of clock skew (Takahashi) and leading-edge interconnect optimizations via IPEM: with wire sizing, with driver and wire sizing, and with buffer insertion and wire sizing
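Experiment 3 (rule replacement) can be sketched on top of the toy Rule / DerivationEngine classes from the first code sketch: take a model's rule chain, drop in a donor rule from another model that computes the same parameter, and compare the resulting clock-frequency predictions. Everything here (function names, the clock_frequency_mhz key) is hypothetical, not the published models.

```python
# Illustrative-only sketch of experiment 3: swap one rule in a rule chain for a
# donor rule (from another model) that computes the same parameter, then compare
# the resulting clock-frequency predictions. Builds on the Rule/DerivationEngine
# toy classes defined in the earlier sketch.
def swap_rule(chain, original_name, replacement):
    """Return a new chain with the rule named `original_name` replaced."""
    return [replacement if r.name == original_name else r for r in chain]

def rule_sensitivity(engine, chain, donor_rules, base_params,
                     freq_key="clock_frequency_mhz"):
    """Relative change in predicted clock frequency for each swapped-in donor rule."""
    f0 = engine.run(chain, base_params)[freq_key]
    changes = {}
    for donor in donor_rules:          # e.g. Fisher rules dropped into BACPAC's chain
        hybrid = swap_rule(chain, donor.name, donor)
        f = engine.run(hybrid, base_params)[freq_key]
        changes[donor.name] = abs(f - f0) / f0
    return changes
```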

Conclusions

- We evaluated the model sensitivity and parameter sensitivity of current cycle-time models
- These analyses reveal surprising levels of uncertainty and sensitivity to modeling choices in the technology extrapolations that drive roadmapping and R&D investment


Cycle-time predictions for the common primary input base:

Model    | Logic stage delay (ps) | Global delay (ps) | Clock frequency (MHz)
BACPAC   | 893                    | 115               | 745
Fisher   | 1162                   | 204               | 659
SUSPENS  | 665                    | (not modeled)     | 1505

Parameter sensitivity of the predicted clock frequency:

Model   | Very sensitive (>10%) to | Rather insensitive (<5%) to
SUSPENS | Rent exponent (41%!); track utilization factor (routing efficiency); wiring pitch on layers | Dielectric constant (0.2%); input capacitance of a minimum-sized device; logic depth; on-resistance of a minimum-sized device
BACPAC  | Logic depth (12%); supply voltage (7%) | Everything else
Fisher  | Fanout per gate (25%)*; supply voltage | Everything else

* For variations from 3 to 1 and 2