
Winter 2011 ● Issue 14(4)

IN THIS ISSUE

Technical Report

Interpreting Applicability Scores... A value in using applicability scores is that it provides counts for "like both" and "like neither"... (pages 3-4)

President's Message ...................................1

Recent Papers & Reports ...........................1

Course Information ..................................... 2
  Advertising Claims Support Course on Sea Island, GA
  Framework & Drivers of Liking® Courses in Brussels, Belgium

Details of 2012 Spring Courses .............. 5, 6
  March 26 - 30, 2012 - Williamsburg, VA
  A Powerful Framework for Sensory Product Testing and
  Descriptive Analysis and Panel Training

Meet the Instructors and Invited Speakers ........................................7

Tails of Applicability (pgs. 3 & 4 )


Mission Statement: To develop and apply advanced research tools for human perceptual measurement.

C O U R S E C A L E N D A R :

January 24 - 27, 2012  Tokyo, Japan
COURSE 1 (2 DAYS): A Powerful Framework for Sensory Product Testing
COURSE 2 (2 DAYS): Drivers of Liking® and Product Portfolio Optimization

March 5 - 6, 2012  The Cloister - Sea Island, GA
TWO-DAY COURSE: Advertising Claims Support: Case Histories and Principles

March 26 - 30, 2012  The Williamsburg Lodge - Williamsburg, VA
COURSE 1 (3 DAYS): A Powerful Framework for Sensory Product Testing
COURSE 2 (1.5 DAYS): Descriptive Analysis and Panel Training

May 21 - 25, 2012  The Radisson BLU EU - Brussels, Belgium
COURSE 1 (2 DAYS): A Powerful Framework for Sensory Product Testing
COURSE 2 (2.5 DAYS): Drivers of Liking® and Product Portfolio Optimization

P U B L I S H E D  P A P E R S :

Ennis, D.M. and Ennis, J.M. (2012). Accounting for no difference/preference responses or ties in choice experiments. Food Quality and Preference, 23(1), 13-17.

Ennis, J.M., Fayle, C.M. and Ennis, D.M. (2012). eTURF: A competitive TURF algorithm for large datasets. Food Quality and Preference, 23(1), 44-48.

Ennis, D.M. and Ennis, J.M. (2010). Equivalence hypothesis testing. Food Quality and Preference, 21(3), 253-256.

N E W S & E V E N T S

PAGE #

News & Events ................1

Advertising Claims Support & European Courses ...... 2

Technical Report ........ 3, 4

Spring Courses .......... 5, 6

Instructor Bios .................7

President's Message
Winter 2011 ● Issue 14(4)

PAGE 1

TECHNICAL REPORTS:

2011
14(4) Interpreting Applicability Scores
14(3) Illuminating Product Demographic Interactions
14(2) From Many to Few: A Graph Theoretic Screening Tool for Product Developers
14(1) How to Set Identicality Norms for No Preference Data

2010
13(4) Action Standards in a Successful Sensory Discrimination Program
13(3) How to Account for "No Difference/Preference" Counts
13(2) Portfolio Optimization Based on First Choice
13(1) Optimum Product Selection for a Drivers of Liking® Project

2009
12(4) Unfolding Liking using Landscape Segmentation Analysis® and Internal Preference Mapping
12(3) Scaling First-Last, MaxDiff and Best-Worst Data

To download previously published technical reports from our website, become a colleague at www.ifpress.com

W H A T  W E  D O :

Client Services: Provide full-service product and concept testing for product development, market research and legal objectives

Short Courses: Offer internal and external courses on product testing, sensory science, and advertising claims support

Research: Conduct and publish basic research on human perception in the areas of methodology, measurement and modeling

IFPrograms™: License proprietary software to provide access to new modeling tools

2011 Student Award

I am pleased to announce that we will again be sponsoring our annual Student Award. This award is given to a senior undergraduate or graduate student demonstrating an interest in pursuing a career in Marketing, Sensory Science, Psychology, Economics or other related fields.

The selected student will receive the award and a plaque presented at an evening banquet held during our Spring 2012 courses in Williamsburg, Virginia. The winner is also extended a complimentary invitation to attend the courses (see details on pages 5 & 6) with travel and accommodation expenses included. The winner's name will be announced on our website and in The Institute for Perception's Spring newsletter.

If you know a student or are a student who would be interested in applying, the following is required:

● An introductory letter
● A copy of your most recent transcript
● A professional or academic letter of recommendation
● Proof of your scholarly interest as demonstrated by a paper you have published or an essay on a contemporary issue in the field

Best regards,
Daniel M. Ennis
President, The Institute for Perception

Previous 2010 Winner, Brandon Turner

All entries must be postmarked by January 21, 2012.
Send all entry documents to:
The Institute for Perception
ATTN: Dr. Daniel Ennis
7629 Hull Street Rd., Richmond, VA 23235

To Contact Us...
7629 Hull Street Road, Richmond, VA 23235
804-675-2980 ● 804-675-2983
www.ifpress.com ● [email protected]


MONDAY (MARCH 5, 8am - 4pm)

8:00 – 9:00 | Introduction & 5 Key Questions…

9:10 – 10:00 | ASTM Guidelines for Test Protocols

10:10 – 11:00 | Data & Methods

11:10 – Noon | Sensory Intensity & Preference

1:00 – 2:00 | Requirements for a Sound Methodology

2:10 – 3:00 | Choosing the Right Method, Venue & Participants

3:10 – 4:00 | Analysis – Interpretation & Communication

TUESDAY (MARCH 6, 8am - 4pm)

8:00 - 9:00 | Test Power & Consumer Relevance

9:10 – 10:00 | Testing for Equivalence

10:10 – 11:00 | Equivalence – Learning from Cases

11:10 – Noon | Ratio, Multiplicative, "Up-to" & Count-Based Claims

1:00 – 2:00 | Case Examples of Ratio & “Up-to” Claims

2:10 – 3:00 | What to Do with No Difference/Preference Responses

3:10 – 4:00 | Summary, Review & Questions

Case Examples:
● Case 1 - Sensory Intensity: Miller® Lite vs. Bud Light® "more taste" claims and sensory intensity
● Case 2 - Preference: Kraft Foods, Inc. (Tombstone® Pizza) NAD Case 4915 (2008)
● Case 3 - Consumer Relevance: Unilever US (Dove® Beauty Bar) NAD Case 5197 (2010)
● Case 4 - Venue: The Procter & Gamble Company (Swiffer® Dust & Shine Furniture Spray) NAD Case 4960 (2009)
● Case 5 - Equivalence: Pactiv Corporation (Hefty® Odor Block Trash Bags) NAD Case 5105 (2009)
● Case 6 - Ratio Claim: Dominos Pizza Inc. (Oven Baked Sandwiches) NAD Case 5023 (2009)
● Case 7 - "Up-to" Claim: Royal Purple, Ltd. (Royal Purple Motor Oil) NAD Case 4983 (2009)
● Case 8 - No Preference Counts: Frito-Lay (Lay's® Stax®) NAD Case 4270 (2004)

PAGE 2

How do you compete effectively in an increasingly challenging advertising environment? Dr. Daniel Ennis, Dr. John Ennis, and Dr. Benoît Rousseau, along with invited speakers from diverse legal backgrounds, will discuss these issues during this insightful course. The legal team of speakers will include:

● Don Lofty - internal counsel for S. C. Johnson
● Christopher Cole - litigation partner at Manatt, Phelps & Phillips
● Lawrence Weinstein - litigation partner at Proskauer Rose LLP
● David Mallen, Annie Ugurlayan, and Kathryn Farrara of the NAD

2-Day Course, March 5 & 6, 2012 ............. $1,950*
_____________________________________________
* A 25% discount will be applied to each additional registration from the same company
* Fee includes all course materials, continental breakfast, break refreshments, lunches, and group dinners.

C O U R S E I N F O R M A T I O N

ATTEND THESE COURSES IN

Brussels, Belgium

MAY 21 - 25, 2012 at the

Radisson BLU EU

MAY 21 - 22, 2012

A Powerful Framework For Sensory Product Testing
In this course you will achieve a deeper understanding of traditional discrimination and rating methods by learning a common framework in which to interpret results across methodologies.

MAY 23 - 25, 2012

Drivers of Liking® and Product Portfolio Optimization
Learn to "see" the market from your consumers' perspective as you develop an understanding of similarity, Drivers of Liking®, and Landscape Segmentation Analysis®.

You will also be introduced to recently developed novel combinatorial tools, which can enhance the use of the TURF technique and guide the optimal selection of products for consumer category appraisals.

A two-day professional course presented at The Cloister on Sea Island, Georgia
Monday & Tuesday, March 5 & 6, 2012

"How a survey or product test is planned, executed and interpreted is often the persuasive element in an advertising claim dispute."
-- Dr. Daniel M. Ennis

For more information and online registration for any of The Institute for Perception courses, please visit www.ifpress.com/short-courses or contact us at [email protected]


Interpreting Applicability Scores
Daniel M. Ennis and John M. Ennis

2011 Issue 14(4)

T E C H N I C A L R E P O R T

PAGE 3

Background: "Check-all-that-apply" (CATA) lists are a popular tool in product tests [1,2,3]. In a typical test, consumers respond to a series of statements and mark those statements that apply to the product of interest. An advantage of CATA testing is that it provides an opportunity to obtain information from consumers that would be difficult in some cases to extract using either a rating or 2-AFC format. A related method, explored by Loh and Ennis [4] in 1982, is called applicability scoring. In applicability scoring, consumers mark statements that are applicable but also mark statements that are not applicable. In a CATA list, an unmarked item may imply that the consumer does not think that the item applies, but could also mean that the consumer merely missed that item; applicability scoring avoids this ambiguity.

In this report the topic of how to analyze and interpret applicability scores will be discussed. This report will provide guidance on the analysis of applicability counts to test a null hypothesis of no difference and will also discuss the scaling of applicability data using a Thurstonian model. One application of particular interest will be the comparative evaluation of two products on liking.

Scenario: In a consumer test you have obtained applicability data in a sequential design on two seasoned pretzels in a blind format. Figure 1 shows a sample of the applicability items. The applicability variables were randomized for each respondent using computerized data entry and the products were tested in a balanced order of presentation. The consumers were recruited from a homogeneous group of users regarding taste preferences and are loyal users of your main brand. The two products consist of your main brand and a low calorie prototype repacked in a blind format.

Figure 1. Sample items from a typical applicability questionnaire.

Read the phrases and, for each phrase, mark the box to the left if the phrase describes the product tested. Mark the box on the right if the phrase does not describe the product tested.

                              Does Apply    Does Not Apply
  I like this product             □               □
  Has a lasting aftertaste        □               □
  Causes salivation               □               □
  Tastes spicy                    □               □
  Tastes very salty               □               □
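To make the distinction concrete, the sketch below (hypothetical responses to the Figure 1 items, coded in Python) shows why an unmarked CATA item is ambiguous while an applicability score is not.

```python
# Sketch with hypothetical responses to the Figure 1 items:
# 1 = "does apply", 0 = "does not apply", None = no mark recorded.

cata_response = {
    "I like this product": 1,          # marked, so it applies
    "Has a lasting aftertaste": None,  # unmarked: "does not apply" or simply missed?
    "Tastes spicy": None,
}

applicability_response = {
    "I like this product": 1,
    "Has a lasting aftertaste": 0,     # explicitly marked "does not apply"
    "Tastes spicy": 0,
}

# Only the applicability format lets us count explicit "does not apply" responses.
explicit_negatives = sum(v == 0 for v in applicability_response.values())
print(explicit_negatives)  # 2
```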

McNemar's Test: In Table 1, there are four cells and only two of them reflect differences between the products. These cells are the off-diagonal elements that count the number of consumers who found that the item "I like this product" applied to one product but not the other. If the products were identical, the expected values of these cells would be equal. A chi-square test with one degree of freedom using these cells provides a basis for testing a null hypothesis of no difference. This test is conducted by checking whether

    χ²₁ = (n₁₂ − n₂₁)² / (n₁₂ + n₂₁),

where n₁₂ and n₂₁ are the two off-diagonal counts, is greater than 3.84, the cutoff for a central chi-square with one degree of freedom at α = 0.05. Note that since different respondents populate all four of the cells in Table 1, the independence assumption made by McNemar's test is satisfied.

Table 1. Applicability counts for the main brand and the low calorie prototype based on the statement "I like this product."

                               Prototype: Applies    Prototype: Does Not Apply
  Main Brand: Applies                   88                      104
  Main Brand: Does Not Apply            50                       58
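As a quick check on the arithmetic, the test statistic for Table 1 can be reproduced with a few lines of Python; this is a sketch using scipy (the report's own analyses were run in IFPrograms™).

```python
# Sketch: McNemar's test on the off-diagonal counts of Table 1.
from scipy.stats import chi2

n_main_only = 104   # "I like this product" applies to the main brand but not the prototype
n_proto_only = 50   # applies to the prototype but not the main brand

# McNemar chi-square with 1 degree of freedom (no continuity correction)
chi_sq = (n_main_only - n_proto_only) ** 2 / (n_main_only + n_proto_only)
p_value = chi2.sf(chi_sq, df=1)

print(f"chi-square = {chi_sq:.2f}, p = {p_value:.6f}")       # chi-square = 18.94, p < 0.0001
print("Exceeds 3.84 cutoff:", chi_sq > chi2.ppf(0.95, df=1))  # True -> reject H0
```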

Loh and Ennis [4] compared McNemar's test on applicability data with binomial tests on a forced choice preference question using the same pair of products tested separately. They found that McNemar's test on the off-diagonal of the applicability matrix was at least as sensitive in finding differences as a binomial test on the preference data. In addition, they used factor analysis to show that the applicability items provided a richer source of information than 7-point rating scales on similar attributes.

Applies to Both/Neither: In a typical paired test with a no difference/preference option [5], we obtain preference counts for each product as well as a count of the no differences/preferences. However, we do not know from the preference counts whether the consumers are satisfied with either product. Applicability scores provide this additional information. For example, you know from Table 1 that 88 people liked both products and 58 liked neither of them. If this latter count had been much larger than the former, you would have inferred that whatever preferences existed, neither of the products was generally acceptable. Further research with identical products among loyal users, to account for scoring bias, could then have been conducted to substantiate this conclusion.

It is important to note that consumers falling into the like both/like neither categories may still prefer one product over the other. In the case of the "like both" category, they may not dislike the less preferred product enough to provide a "does not apply" score to that product on liking. The purpose of the next section on Thurstonian scaling is to estimate where that cut-off lies.


Thurstonian Scaling: In a Thurstonian model, we assume that each percept is randomly drawn from a distribution with a certain mean and unit variance. One product is assumed to have a mean of zero and the other a mean of δ. A criterion, c, is placed on the attribute axis. If the percept exceeds c, the subject responds "does apply"; otherwise the subject responds "does not apply."

Figure 2. Underlying distributions that give rise to the four possible response patterns. Shaded areas correspond to the probability that a product is liked.

Figure 2 illustrates the distributions of the percepts for two products in the case of "I like this product." The criterion, c, determines the point above which liking would be declared for either product. To obtain the probability of a joint event such as "I like Product A/I do not like Product B," we multiply the probabilities of the independent events "I like Product A" and "I do not like Product B." The probabilities of these individual events are found by computing the areas under the curves for the respective products that are either above or below the c value. For example, to find the probability that a respondent likes the main brand, we determine the area of the blue region shown in Figure 2. To find the probability that a respondent does not like the low calorie prototype, we compute one minus the area of the red region shown in Figure 2.

Interpretation of the Applicability Counts: Applying McNemar's test to the results in Table 1, you reject the null hypothesis that the products performed identically [6]. You then perform a Thurstonian analysis as described above using the IFPrograms™ software package. From this analysis you find that the estimate of δ is 0.46 and that the criterion c is estimated to be 0.1. Table 2 shows the actual and predicted values for the data in Table 1 using the two-parameter Thurstonian model involving δ and c. In a similar manner, you perform this analysis for every attribute. If needed, this approach could be generalized to include more than two products.
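The predicted values in Table 2, and the forced-choice prediction discussed after it, follow directly from the fitted parameters. The sketch below assumes the equal-variance normal model described above, with the prototype centered at 0 and the main brand at δ; the fit itself was obtained with IFPrograms™, and the code simply evaluates the fitted model.

```python
# Sketch: predicted cell counts from the two-parameter Thurstonian model (delta, c),
# assuming unit-variance normal percepts: prototype mean 0, main brand mean delta.
from scipy.stats import norm

delta, c, n = 0.46, 0.1, 300             # fitted parameters and total sample size

p_main_applies = norm.sf(c, loc=delta)   # P(main-brand percept exceeds criterion c)
p_proto_applies = norm.sf(c, loc=0.0)    # P(prototype percept exceeds criterion c)

predicted = {
    "Applies to Main Brand but not Prototype": p_main_applies * (1 - p_proto_applies),
    "Applies to Prototype but not Main Brand": (1 - p_main_applies) * p_proto_applies,
    "Applies to Both": p_main_applies * p_proto_applies,
    "Applies to Neither": (1 - p_main_applies) * (1 - p_proto_applies),
}
for cell, prob in predicted.items():
    print(f"{cell}: {n * prob:.1f}")     # ~103.7, 49.6, 88.4, 58.2, matching Table 2

# Predicted 2-AFC (forced-choice) preference for the main brand at delta = 0.46
print(f"Forced-choice preference for main brand: {norm.cdf(delta / 2 ** 0.5):.0%}")  # ~63%
```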

Table 2. Actual applicability score frequencies for the item "I like this product" and predicted results from a Thurstonian model with c = 0.1 and δ = 0.46.

  Cell                                        Observed    Predicted
  Applies to Main Brand but not Prototype        104        103.74
  Applies to Prototype but not Main Brand         50         49.62
  Applies to Both                                 88         88.43
  Applies to Neither                              58         58.2

Having completed your analysis, you can predict the results of other methods for which a Thurstonian model has been developed [7,8,9,10]. For instance, for δ = 0.46, you predict a forced-choice preference result to be 63% in favor of the main brand. Using the Thurstonian approach, you can also investigate the power of your applicability scoring relative to other testing methods [11].

Conclusion: Applicability scores are frequently obtained in a number of consumer testing applications. For two products, hypothesis testing for data of this type can be accomplished using McNemar's test. These data are also valuable for scaling differences between products within a Thurstonian perspective.

References and Notes
1. Ares, G., Barreiro, C., Deliza, R., Gimenez, A., and Gámbaro, A. (2010). Application of a check-all-that-apply question to the development of chocolate milk desserts. Journal of Sensory Studies, 25(s1), 67-86.
2. Dooley, L., Lee, Y., and Meullenet, J.F. (2010). The application of check-all-that-apply (CATA) consumer profiling to preference mapping of vanilla ice cream and its comparison to classical external preference mapping. Food Quality and Preference, 21(4), 394-401.
3. Plaehn, D. (2011). CATA penalty/reward. Food Quality and Preference, 24(1), 141-152.
4. Loh, C.F. and Ennis, D.M. (1982). Glossary of attributes. Philip Morris Internal Report, accession number 82-288. Retrieved from http://www.pmdocs.com/pdf/2001298236_8263_xhmfyd55uzdszmvz5bnqoqvt.pdf
5. Ennis, D.M. and Ennis, J.M. (2011). Accounting for no difference/preference responses or ties in choice experiments. Food Quality and Preference, 23(1), 13-17.
6. χ²₁ = (104 − 50)²/154 = 18.94, p < 0.0001.
7. Thurstone, L.L. (1927). A law of comparative judgement. Psychological Review, 34(4), 273-286.
8. Hacker, M.J. and Ratcliff, R. (1979). A revised table of d′ for m-alternative forced choice. Perception & Psychophysics, 26, 168-170.
9. Ennis, D.M. (1993). The power of sensory discrimination methods. Journal of Sensory Studies, 8(4), 353-370.
10. Ennis, J.M., Ennis, D.M., Yip, D., and O'Mahony, M. (1998). Thurstonian models for variants of the method of tetrads. British Journal of Mathematical and Statistical Psychology, 51, 205-215.
11. Ennis, J.M. and Jesionka, V. (2011). The power of sensory discrimination methods revisited. Journal of Sensory Studies, 26, 371-382.


A Powerful Framework for Sensory Product Testing

S P R I N G 2 0 1 2 C O U R S E S

Register for courses online at www.ifpress.com/short-courses PAGE 5

MONDAY (MARCH 26, 8am - 4pm)
Topics ______________________________
● Difference testing methods: m-AFC, triangle, duo-trio
● Discussion of a theory underlying all sensory evaluation methods
● Estimating a measure of sensory difference, dꞌ, and its variance from discrimination tests
● Proportion of discriminators in the population
● The method of tetrads
● Same-different and degree of difference methods
Cases ______________________________
● Product differences using m-AFC tests
● Ingredient supplier change: Texture using 2-AFC, duo-trio, and triangle; the issue of power
● 2-AFC and 2-AC on carbonated water

TUESDAY (MARCH 27, 8am - 4pm)
Topics ______________________________
● Power and sample sizes for discrimination methods
● Risk management in product testing
● Setting action standards
● Replicated testing: How to increase power and reduce costs using the beta-binomial model
● New model for replicated same-different
● Torgerson's method of triads: Simultaneous comparison of more than two products
Cases ______________________________
● Replicated testing using fragrance preferences
● Action standards for product improvement and cost reduction
● Multiple comparisons of cookies manufactured using different processes and formulations

WEDNESDAY (MARCH 28, 8am - 4pm)

Topics ______________________________
● Advanced concepts and applications: Retasting, memory and sequence effects
● Measuring the effect of training
● How to get dꞌ values from intensity ratings and ranking data
● Disposition of no difference/preference votes
Cases ______________________________
● Memory and sequence effects in tests involving orange and apple beverages
● Improving discrimination by allowing retasting: A case study using a sports beverage
● Relating trained panel and consumer sensitivities using vanilla ice cream
● Generating a dose response relationship using ranking and rating
● Ingredient change: Getting dꞌ values from descriptive analysis

In this course you will achieve a deeper understanding of commonly used discrimination and rating methods by learning a common framework in which to interpret results across methodologies. In particular, you will learn how to:
● Select the most suitable and powerful discrimination methodology based on the objectives of your project, thus saving time, expense and human resources
● Develop standards to detect when sensory differences and ratings exceed a consumer-relevant acceptability threshold, allowing you to connect results of internal panel testing to consumer response
● Assess and manage risks in product testing decisions
● Relate difference testing results to harmonize historical data and resolve apparent paradoxes across methodologies
● Use the latest theories to treat specific categorical responses, such as those obtained from preference/difference tests with a 'no preference/difference' option

REGISTER NOW FOR COURSES HELD:

March 26 - 30, 2012 in Williamsburg, Virginia
at The Williamsburg Lodge

WHO SHOULD ATTEND?
These courses have been developed for technical and supervisory personnel in sensory evaluation, market research, product and process development, quality assurance, legal, marketing, and general management currently working in consumer product companies.

[Diagram: results of a consumer-panel paired preference test (counts for A, B and no preference, expressed as % preference) are linked through an action standard (pc) to internal-panel difference testing of products A and B.]


Descriptive Analysis and Panel Training

S P R I N G 2 0 1 2 C O U R S E S

PAGE 6 For more information, visit www.ifpress.com

How the Courses are Taught
During two decades of teaching short courses in Sensory and Consumer Science we have gained an appreciation for engaging our audiences so that technical material can be absorbed easily for effective future use. Rather than relying on the standard but often ineffective theory-application approach, we instead interweave an unfolding story with the theoretical and applied material to provide our participants with a sense of discovery and relevance regarding the various tools they encounter. This dual teaching approach has shown itself to be extremely effective at providing participants with a thorough and long-lasting understanding of the course material.

Hands-On Analysis
Throughout the week, you will use IFPrograms™ software to perform the analyses demonstrated in the course. You will be introduced to its capabilities and, upon completion of the course, you will receive a complimentary trial version accessible through the internet.

For a detailed listing of IFPrograms capabilities, please visit www.ifpress.com/software.


THURSDAY (MARCH 29, 8am - 4pm)

Topics ______________________________
● Why use descriptive analysis?
● Review of current descriptive analysis methodologies
● Panel training as category learning
● The importance of feedback in panel training
● Existing and novel approaches to panel monitoring
Cases ______________________________
● Modeling training effects using ice cream samples
● Visual demonstrations of the effectiveness of feedback
● Representing descriptive profiling results using Landscape Segmentation Analysis® (LSA)
● Using descriptive analysis to guide product selection for a category appraisal of premium frozen cookie dough

FRIDAY (MARCH 30, 8am - noon)

Topics _____________________________

● Determining imprecision and bias in descriptive panels using Trellis graphics
● Tracking differences in panelist performance
● Quantifying differences between panelists without replication: Is it possible?
● Measuring panelist variability to improve quality of profiling: Towards a Bayesian approach
Cases ______________________________
● Application of Trellis graphics to carbonated beverages
● Special application of quality control to descriptive analysis
● Identifying poor performing panelists in a fragrance related product category

In this course you will learn new product profiling techniques through an in-depth exploration of descriptive panel methodologies. Specifically, you will connect a substantial body of research from the fields of cognitive psychology and cognitive neuroscience to the field of descriptive analysis and panel training. During this course you will:
● Understand the techniques traditionally used for descriptive analysis and identify their relative strengths and weaknesses
● Go beyond traditional techniques to improve panel training using learnings from cognitive science and related fields, including category learning and feedback administration
● Learn how to use advanced panel monitoring techniques to know your panelists better and to improve measurement accuracy by taking into account individual performance variability
● Become familiar with novel applications of descriptive analysis to quality control and to product selection for category appraisal projects

COURSE FEE & ONLINE REGISTRATION:
Course fee includes all course materials, a copy of our latest book, "Short Stories in Sensory and Consumer Science," a trial version of IFPrograms™, plus continental breakfast, break refreshments, lunches, and a group dinner per course.

Register for courses online at www.ifpress.com/short-courses or call 804-675-2980 for more information.

A Powerful Framework for Sensory Product Testing
  March 26, 27 & 28, 2012 (3 days) ................ $1,350*
Descriptive Analysis and Panel Training
  March 29 & 30, 2012 (1.5 days) ...................... $850*
_______________________________________________
* A 50% discount will be applied to each additional registration from the same company, for the same course
* Academic discount available on request

CANCELLATION POLICY: Registrants who have not cancelled two working days prior to the course will be charged the entire fee. Substitutions are allowed for any reason.

LIMITED ENROLLMENT: Enrollment is limited to ensure individual attention, so early registration is recommended. You may hold your place by calling (804) 675-2980.

LOCATION: Our 2012 Spring courses will be held at the Williamsburg Lodge, located near Colonial Williamsburg's historic district, where hundreds of buildings spanning more than 300 acres have been authentically restored.

HOTEL RESERVATIONS: Participants must make their own hotel reservations; the cost of hotel accommodation is not included in the course fee. Attendees are eligible for a special room rate of $179 per night. To make a reservation, call 1-800-261-9530 and mention The Institute for Perception. For information about Colonial Williamsburg, visit www.colonialwilliamsburg.com.


Dr. Daniel M. Ennis is the President of The Institute for Perception, a full-service research consulting firm, which he established in 1994. Danny holds two doctorates: one in Food Science for research in Environmental Microbiology and a second in Mathematical and Statistical Psychology, and has more than 35 years of experience working on product testing theory and applications for consumer products. He has published over 100 technical reports and peer-reviewed papers in scientific journals. Danny consults both within the US and internationally and has served as an expert witness in a wide variety of Lanham Act cases. ■ 2, 3

Dr. John M. Ennis is Vice President of Research Operations at The Institute for Perception. John received his PhD in Mathematics from the University of California at Santa Barbara and conducted post-doctoral studies in the UCSB Psychology department. An active researcher, John has published in prominent journals in Market Research, Statistics, Mathematics and Psychology and has coauthored a book chapter on Neuroanatomy. The current chair of ASTM E18.04, "Fundamentals of Sensory," John has a strong interest in the widespread adoption of best practices throughout sensory science. ■ 2, 3, 4

Dr. Benoît Rousseau is Senior Vice President at The Institute for Perception. Benoît holds a PhD in Sensory Science and Psychophysics from the University of California, Davis and received his Food Engineering degree from AgroParis Tech in Paris, France. He has conducted extensive experimental research on probabilistic models and has published numerous journal articles as well as several book chapters. Benoît regularly consults with and manages projects for clients in Asia, Latin America, Europe and the US. In his teaching, Benoît is well known for his effective and user-friendly approach to introducing new ideas. ■ 1, 2, 3, 4

KEY to Course Speakers
1 = Asian Courses: January 24 - 27, 2012 Tokyo, Japan

2 = Ad Claims Course: March 5 - 6, 2012 Sea Island, GA

3 = Spring Courses: March 26 - 30, 2012 Williamsburg, VA

4 = European Courses: May 21 - 25, 2012 Brussels, Belgium

I N S T R U C T O R B I O S

Register for courses online at www.ifpress.com/short-courses PAGE 7

Invited Speakers - Advertising Claims Course

David Mallen is the National Advertising Division's Deputy Director and is responsible for the review of claim substantiation and advertising issues for a broad range of products and services. Before joining the NAD, he practiced law at Kensington & Ressler L.L.C., specializing in litigation and representation of a wide range of businesses, including manufacturers of food and consumer products. David graduated from Cornell University and holds a JD degree from Albany Law School of Union University. ■ 2

Annie Ugurlayan is a Senior Staff Attorney at the NAD. Since 2003, she has handled over 150 cases, with a particular focus on cosmetics and food cases. Annie is a published author, Chair of the Consumer Affairs Committee of the New York City Bar Association, and a member of the Board of Directors of the New York Women's Bar Association Foundation and other local and national bar associations. Annie is fluent in French and Armenian, and proficient in Romanian. She is a graduate of Hamilton College and Hofstra University School of Law. ■ 2

Kathryn Farrara is a Senior Attorney at the NAD, Council of Better Business Bureaus. Kathryn reviews national advertising campaigns, analyzes claim substantiation issues, and has successfully resolved disputes relating to food and drug products, telecommunication services, consumer household products, and dietary supplements. She also lectures on NAD's system of voluntary self-regulation and on issues concerning truth-in-advertising. Kathryn is a graduate of the Smeal College of Business at Pennsylvania State University (B.S. in Marketing) and New York Law School. ■ 2

Christopher Cole is a litigation partner at Manatt, Phelps & Phillips where he leads the firm's false advertising litigation practice. He represents his clients in comparative advertising battles under the Lanham Act and before the NAD, as well as in class actions and other commercial litigation matters. Chris has been a frequent author and commentator on advertising litigation issues and green marketing. He is a magna cum laude graduate of Boston University School of Law and holds biology and marine biology degrees from Yale and the University of Miami, respectively. ■ 2

Don Lofty has been with S. C. Johnson & Son, Inc. since 1993, and currently heads the Company's practice group for Marketing and Regulatory Affairs. He specializes in antitrust and trade regulation, with emphasis on advertising law, including practice before the NAD, and has also managed the company's Legal Compliance Program for over ten years. Don received his AB from Dartmouth College in 1969 and his JD from Georgetown University Law Center in 1972. ■ 2

Lawrence I. Weinstein is a litigation partner at Proskauer Rose LLP, where he is co-head of the firm's renowned False Advertising & Trademark Group. Larry is both a distinguished trial lawyer and counselor. His practice covers a broad spectrum of intellectual property law, including false advertising (Lanham Act, consumer class actions, NAD, NARB, ERSP, FTC), trademark, trade secret and copyright matters, as well as sports, art and other complex commercial cases. He is currently a member of Law360's Intellectual Property Law Editorial Board. ■ 2

Invited Speakers - Spring and European Courses

Dr. Michael O'Mahony is Professor and Sensory Scientist in the Department of Food Science and Technology, University of California, Davis, CA. A very entertaining and informative lecturer, Mike is well known for his approach to communicating new concepts to broad audiences. He has published over 100 journal articles in Sensory Science and is the author of Sensory Evaluation of Food: Statistical Methods and Procedures. Mike consults extensively with consumer products companies globally. He holds a PhD in Chemistry and Psychology from Bristol University, UK. ■ 3

Dr. Kevin Blot is Lead Scientist at Unilever, where he manages the Sensation, Perception, and Behavior group for Unilever Global R&D in Trumbull, Connecticut. He holds a PhD in Experimental Psychology in Psychophysics. Recent work, published in seminal Psychology and Sensory Science journals, has focused on touch perception and multivariate visualization approaches. During his tenure at Unilever, Kevin has championed the development and application of probabilistic models to understand consumer needs and motivations, leading to the development of successful global products. ■ 4

Dr. F. Gregory Ashby is Professor and Chair of the Department of Psychology, University of California, Santa Barbara. He received his PhD in Cognitive/Mathematical Psychology from Purdue University. Greg is particularly interested in understanding how people categorize objects and events in their environment, and in developing and testing neurobiologically plausible mathematical models. ■ 3

Frank Rossi is Associate Director, Applied Quantitative Sciences, Kraft Foods in Glenview, Illinois, where he supports Operations, Quality and Marketing Research Organizations. Frank has also held statistical consulting positions with General Foods and Campbell Soup Company. He has authored publications on the statistical aspects of product testing and he obtained an MA in Statistics from The Pennsylvania State University. ■ 3, 4
