Penalty Analysis

Overview

Penalty analysis helps product developers identify potential directions for product improvement, for example by highlighting sensory attributes that may be associated with decreased overall liking (acceptability) of a product. A test compatible with Penalty analysis includes a Category Liking question to measure overall liking and at least one Category Just About Right (JAR) question to collect data about specific sensory attributes.

Penalty analysis measures the change in product liking due to the product having “too much” or “too little” of an attribute. For example, for a dark chocolate bar, you could measure the bitterness of the chocolate against the overall liking of the chocolate bar.

Info
For consumer testing we typically recommend at least 60 results; however, the Penalty analysis will run on a lower N and can still provide valuable information.

Considerations

  1. It is crucial to include both a Liking and a JAR question type in the test to be able to run the report.

    If the question types were accidentally set to a different variety of Category Scale question, they can be changed to the proper type, even after the data is collected. Please do not delete results to make this change!

    Changing the question type is done in the Build tab, in the Category Question Options, as seen in the image below.


  2. Penalty Analysis reports are generated as Excel Workbooks.

  3. Penalty analysis is available only for tests without reps.

  4. In tests with sections, the report can be generated within individual sections. The report is not available across sections in a single test nor across multiple tests.

Test Setup

  1. Create a Standard test.

  2. Ensure that there is at least one Category Liking question with at least one attribute measuring overall liking (e.g., Overall Liking of chocolate bar).

  3. Ensure that there is at least one Category Just About Right question. Include at least one attribute with 5-point, 7-point, or 9-point scales (e.g., bitterness).

  4. Add samples (from the Products library, the Sample list, or generic samples) and a design.

  5. Add panelists.

  6. Specify the sample set distribution method, review any other options in the Logistics tab, and print the Serve report.

  7. Preview and run the test to collect data from panelists.

To Generate the Report

  1. Go to the Results area of your test. Filter results, if required, before generating the report.

  2. Click the Reports tab and select Create report.


  3. In the 1. Select report type list, select Penalty analysis.


  4. Click 2. Select options and specify the Mean Drop, Net Penalty, and Penalty Inclusion thresholds you wish to use in the report.
    Notes
    The thresholds set in Compusense can be used as a starting point; whether you increase or decrease them is a business decision. You can also see which attributes are trending towards the threshold and make a business decision from there.




    1. Mean Drop Thresholds: Represented with red lines in the Mean Drop Chart sheet.
      1. The Mean Drop threshold indicates how different the mean liking of the consumers who selected JAR for a particular attribute is from the mean liking of consumers who did not select JAR for that attribute.

      2. The percentage threshold is the total percentage of consumers who selected anything other than JAR.

    2. Net Penalty Colour Thresholds: If selected, these clearly indicate which attributes are of interest in the Net Penalty Graphs. A net penalty of <0.25 is considered low impact; potential and high impact thresholds can be selected by the analyst.

    3. Penalty Inclusion Threshold: Allows the analyst to set a penalty analysis percentage threshold. If the sum of the percentages for Too Much and Not Enough for a specific attribute is below the selected percentage, that attribute will be excluded from the Mean Drop Chart.

  5. Click 3. Select questions.
    1. Select the Overall Liking attribute. Only one Liking attribute can be selected.
      If no Liking attributes are listed but you believe they should be, review the question setup; a wrong Category question type may have been selected during setup. Follow these steps:
      In the top left-hand corner, click the test name.

      Go to the Build tab, and in the Question options of the Category question that asked panelists for their overall liking, check whether it is set as Category - Liking.

      If it is not, change the question type to Category - Liking, as described in the Considerations area on this page.

      If you are unable to make the change, your test may have been set to Complete. If that is acceptable, go to the Run test tab and undo complete.

      Results will not be deleted unless you click the Delete results button. Return to the Build tab as described above and change the question type.

      Return to the Results area to generate the report. Select the Overall liking attribute.

    2. Scroll down if necessary to select the JAR attributes that you wish to include in the report.

  6. In the 4. Select export type, click Create my report.




Report Details

Here we will review each sheet that can be included in the report. Sheet availability in your report depends on the selections you made before generating the report, as described in the previous section.

Data

This sheet lists the raw data for all included samples and all included attributes. The information in this sheet is used for calculations in all other sheets.


Penalty Tables

The penalty table enables you to look at how consumers’ perceptions of the attributes affect liking. The information found in this sheet is used for the graphs on the Mean Drop Chart sheets.

Notes
Each sample included in the analysis has its own table in this sheet (you will need to scroll down to see them all).


    1. Attribute: Three rows for each attribute, one for each level on the collapsed JAR scale. In our example screenshot, the "chocolate JAR" attribute repeats three times because the attribute has three levels (see the next item below).

    2. Level: While typically a JAR scale would have 5 categories, in this analysis the top two and bottom two options are collapsed; therefore, only three levels are displayed and used in calculations. In our example: Too Weak, Just-about-right, and Too Strong.

    3. Frequencies: The count of responses for each level within an attribute.

    4. %: The frequencies converted into percentages.

      The formula: (Frequency/N)*100

    5. Sum Liking: The sum of the liking responses within the attribute and level. In our example, the "chocolate JAR" attribute at the Too Weak level shows a sum liking of 25. This is determined by looking at the Data sheet. In the screenshot below, highlighted in yellow, we can see the 4 responses for sample 1 that were below JAR (value lower than 3) for the "chocolate JAR" attribute. For those 4 responses we then look at the Overall Liking scores and add them up: 4 + 6 + 7 + 8 = 25.


    6. Mean Liking: The mean liking of the sample by consumers who selected the indicated level for the given attribute.

      The formula: Sum Liking/Frequency

    7. Mean Drop: A comparison of the mean liking of the level above or below JAR with the mean liking of the group who selected JAR. Therefore, results are displayed only in the 'too much' and 'not enough' rows, never in the JAR row.

      The larger the mean drop, the more that attribute/level affects the liking of the sample.

    8. Label: This text will appear in the mean drop graphs.
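    The penalty-table columns above can be reproduced with a few lines of code. The sketch below is an illustrative reconstruction of the math, not Compusense's implementation; the sample data and the 5-point collapse (1–2 → Too Weak, 3 → JAR, 4–5 → Too Strong) are assumptions chosen so the Too Weak row matches the worked example (sum liking 25 from scores 4, 6, 7, 8).

```python
# Illustrative sketch of the penalty-table math (not Compusense's code).
# Hypothetical data: one sample, one JAR attribute on a 5-point scale,
# paired with 9-point overall-liking scores from the same 10 panelists.
jar = [2, 2, 1, 2, 3, 3, 3, 3, 4, 5]      # JAR responses
liking = [4, 6, 7, 8, 7, 8, 9, 8, 5, 3]   # matching Overall Liking scores

def collapse(level):
    # Collapse the 5-point JAR scale to three levels.
    if level <= 2:
        return "Too Weak"
    if level == 3:
        return "JAR"
    return "Too Strong"

n = len(jar)
table = {}
for level_name in ("Too Weak", "JAR", "Too Strong"):
    scores = [l for j, l in zip(jar, liking) if collapse(j) == level_name]
    freq = len(scores)
    table[level_name] = {
        "Frequency": freq,
        "%": freq / n * 100,                    # (Frequency / N) * 100
        "Sum Liking": sum(scores),
        "Mean Liking": sum(scores) / freq if freq else None,
    }

jar_mean = table["JAR"]["Mean Liking"]
for level_name in ("Too Weak", "Too Strong"):
    # Mean Drop: JAR mean liking minus this level's mean liking
    # (blank for the JAR row itself).
    table[level_name]["Mean Drop"] = jar_mean - table[level_name]["Mean Liking"]

print(table)
```

    With these hypothetical responses, the Too Weak row gives Frequency 4, % 40, Sum Liking 25, Mean Liking 6.25, and Mean Drop 1.75 against a JAR mean of 8.0.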

Mean Drop Charts

You will see a mean drop chart sheet for each sample in your study, as well as one sheet for all of your samples together.


    Alert
    If you have selected Penalty Inclusion when generating the report, and if the sum of the percentages for Too much and Not enough for a specific attribute is below the selected penalty inclusion percentage, the attribute will be excluded from the Mean Drop Charts.

    If all attributes for a sample are below the penalty inclusion percentage threshold, that whole sample will be excluded from the Mean Drop Charts. That would mean that at the set threshold, that sample's overall liking is not affected by any of the JAR attributes included in the report.
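    The inclusion rule described in the alert can be sketched as a simple filter. This is an illustrative reconstruction under stated assumptions, not Compusense's code; the attribute names and percentages are hypothetical.

```python
# Penalty Inclusion Threshold sketch (illustrative, not Compusense's code).
# An attribute stays in the Mean Drop Chart only if its off-JAR
# percentages (Too Much + Not Enough) reach the chosen threshold.
inclusion_threshold = 20.0   # percent, chosen by the analyst

attributes = {                # hypothetical (% Not Enough, % Too Much)
    "bitterness": (40.0, 20.0),
    "sweetness": (5.0, 10.0),
}

included = {name for name, (low, high) in attributes.items()
            if low + high >= inclusion_threshold}
print(included)
```

    Here "bitterness" (60% off-JAR) is kept, while "sweetness" (15% off-JAR) falls below the 20% threshold and would be excluded from the charts.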



    1. x axis: This is the frequency percentage (%) of the responses for each level of a specific attribute. See the % column in the Penalty Tables sheet.

    2. y axis: This is the mean drop value. See the Mean Drop column in the Penalty Tables sheet.

    3. Colour (these can be edited in Excel using its built-in functionality): Each sample is assigned its own colour.

    4. Shapes (these can be edited in Excel using its built-in functionality):
      1. Levels above JAR are represented by a triangle.
      2. Levels below JAR are represented by a circle.

    5. Quadrants: If you selected Mean Drop thresholds, they will be represented as two red lines on the graphs, creating easy-to-read quadrants.

      1. Attribute levels found in the bottom left quadrant are below both predetermined thresholds. These attribute levels are of no concern.

      2. Attribute levels found in the top right quadrant are above both predetermined thresholds. This is the area of primary concern, because these attribute levels affected panelists' overall liking of the sample.

      3. Attribute levels found in the top left quadrant represent attribute levels that are above the mean drop threshold but below the frequency threshold. This is the area of secondary concern.

      4. Attribute levels found in the bottom right quadrant represent attribute levels that are below the mean drop threshold but above the frequency threshold. This is another area of secondary concern.



Net Penalty Tables



    1. Attribute: Selected attributes.

    2. Category Low: The collapsed lower levels of your scale; lowest value label is shown.

    3. Net Penalty: Weighted difference of the means between JAR and 'not enough' levels.

      The formula: Net Penalty = [Proportion indicated Not Enough] * [Mean Drop]

    4. Mean Drop: The difference in the means between JAR and 'not enough' levels.

    5. % Not Enough: Percentage of respondents who selected below JAR.

    6. % JAR: Percentage of respondents who selected JAR.

    7. % Too Much: Percentage of respondents who selected above JAR.

    8. Mean Drop: The difference in the means between JAR and 'too much' levels.

    9. Net Penalty: Weighted difference of the means between JAR and 'too much' levels.

      The formula: Net Penalty = [Proportion indicated Too Much] * [Mean Drop]

    10. Category High: The collapsed higher levels of your scale; highest value label is shown.
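    The Net Penalty columns follow directly from the penalty-table numbers. A minimal sketch of the weighting, assuming hypothetical percentages and mean drops (these values are illustrative, not from a real report):

```python
# Net Penalty = [proportion of panelists off-JAR on that side] * [Mean Drop].
# Hypothetical penalty-table values for one attribute of one sample.
pct_not_enough = 40.0   # % of panelists below JAR
pct_too_much = 20.0     # % of panelists above JAR
mean_drop_low = 1.75    # JAR mean liking minus 'not enough' mean liking
mean_drop_high = 4.0    # JAR mean liking minus 'too much' mean liking

net_penalty_low = (pct_not_enough / 100) * mean_drop_low    # 0.70
net_penalty_high = (pct_too_much / 100) * mean_drop_high    # 0.80

# With the default low-impact cutoff of 0.25, both sides here would
# register as more than low impact.
for side, np_ in (("Not Enough", net_penalty_low),
                  ("Too Much", net_penalty_high)):
    print(side, round(np_, 2),
          "low impact" if np_ < 0.25 else "above low-impact cutoff")
```

    Note how a modest mean drop with a large off-JAR percentage can penalize liking as much as a large mean drop affecting few panelists.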


Net Penalty Graphs

This sheet provides a quick and clear way to see which attributes have the greatest effect on the liking of your product.

    Notes
    The Net Penalty thresholds are selected before running the report.


    There is a separate graph for each sample (you will need to scroll down to see them all). Within the graph you will see the following:
    1. Each attribute that meets your threshold settings.

    2. The net penalty of the 'too much' and 'not enough' levels. 

    3. The frequency that the JAR level was selected, displayed as a percentage. 

    The net penalty is calculated by multiplying the percentage of panelists who selected a level other than JAR by the mean drop for that level. This calculation, combined with the impact thresholds, produces the graphs.


  • Related Articles

    • Descriptive Analysis Testing
    • Descriptive Analysis Workbook Review
    • Results and Analysis Review
    • Analysis Across Tests: Graphing
    • Descriptive Analysis Workbook Product Summary