Building An Actionable Insights Story
This guide explains how to turn your test results into a clear insights story: which of the platform's analysis tools to use, where to find the right data, how to interpret it, and how to translate results into clear recommendations for stakeholders.
The platform organizes insights across several key analysis tabs:
- Scorecard – overall KPIs and high-level comparisons
- Explore – charts and consumer verbatims
- Crosstabs – deeper attribute analysis and scale distributions
- Penalty Analysis – identifies which attributes most impact liking
Cheat version: Creating your Insights story
- Start in the Scorecard (your headline view)
- Decide the type of story you’re telling
- Use Explore & Crosstabs to unpack the “why”
- Use Penalty (JAR) analysis to prioritize what to fix (Home Use Tests only)
- Conclude with a simple, clear story
Creating your Insights story
1. Start in the Scorecard (your headline view)
Tab: Scorecard
Purpose: Get the big picture and write your 1–2 line headline.
Use the Scorecard to see overall KPI performance across all products (appeal, overall liking, purchase intent) and to spot any obvious standout concepts/products.
Use the Scorecard to answer:
- Are concepts/products at parity on key KPIs, or is there a clear winner/loser?
- Are there 1–2 attributes where a prototype clearly has an edge (e.g., stronger on attributes like flavor, texture, skin benefits as relevant to product category)?
How to talk about it:
- If differences are statistically significant:
- “[Prototype A] is significantly better than [Prototype B] on [KPIs or specific attributes]”
- If differences are not statistically significant:
- “Concepts are at parity overall; we see directional upsides on [KPIs or specific attributes] for [prototype].”
- If you’re building a product‑selection story with 2+ strong prototypes:
- “Both prototypes are strong; [Prototype A] shows a slight edge on [attribute(s)], which makes it a better base to optimize from.”
This gives you your headline before you dive into any detail.
2. Decide the type of story you’re telling
Tabs: Still in Scorecard, but this is a framing step for everything that follows.
In many product tests, different prototypes perform well on different attributes. For example, one version may score higher on flavor while another performs better on texture.
Since only one product will ultimately launch, the goal is to determine which option provides the strongest foundation to optimize.
Decide upfront:
- Are you telling a “one hero concept/product” story?
- E.g., “[Prototype A] is our main platform; it clearly beats or matches [Prototype B or Competitor X] on key attributes. We’ll optimize [attribute(s)] before launch.”
- Or a “two directions, one final product” story?
- E.g., “We tested two versions. One is stronger on flavor delivery and sweetness, the other on texture. Our final product will combine these strengths.”
This decision guides:
- Whether you show both prototypes explicitly to internal stakeholders/customers/retailers, or keep the story focused on a single “final” product.
- Which comparison charts you prioritize.
3. Use Explore & Crosstabs to unpack the “why”
Tabs: Explore, Crosstabs
Purpose: Move beyond headline KPIs to diagnose what’s driving them.
3.1 Explore tab: Verbatims / Open‑ends and Chart visualizations
Use Explore when you want to:
- Humanize the numbers: give your internal teams qualitative consumer verbatims by product, so they can see what people liked and disliked in consumers’ own words.
Use this to:
- Pull what people like about each prototype (e.g., “nice, creamy texture and pleasant skin feel”; “perfect balance of sweet and sour taste”).
- Capture criticisms that align with penalty analysis (e.g., “chips are not flavorful enough,” “absorption is not quick enough and there’s residue left behind,” “product looks dull”).
These become:
- Bullets on your “What consumers love” / “What needs work” slides.
- Quotations for sales and brand storytelling (e.g., retailer sell‑in decks).
3.2 Crosstabs tab: attribute deep dives & scale distributions
Use Crosstabs when you want to:
- Look at attribute‑level scores (e.g., flavor, sweetness, texture, uniqueness, etc.) across all products.
- This is where you may pick up that one prototype has an edge on [attribute(s)] vs the other prototype.
- Inspect distributions (top‑box, mid‑box, bottom‑box) to explain means that look confusing at first glance:
- E.g., you may see that top‑box and top‑3‑box were similar in Scorecard, but [Prototype B] has more bottom‑3 ratings (more people strongly disliking it), pulling its mean down.
Tell the story like this:
- “Top‑box performance is similar for both prototypes; the difference is that [Prototype B] is more polarizing, with a higher share of strong dislikes, which drags down its mean.”
Use Crosstabs to understand how consumers rated each attribute and to see whether differences are driven by strong likes, strong dislikes, or overall distribution patterns.

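As a rough illustration of the polarization pattern described above, the sketch below uses made‑up 9‑point liking ratings for two hypothetical prototypes (the data, names, and helper functions are all invented for illustration, not platform output). Both prototypes look similar on top‑3‑box, but the second carries more strong dislikes, which drags its mean down:

```python
# Hypothetical 9-point liking ratings (illustrative data only).
proto_a = [7, 7, 8, 8, 8, 9, 6, 7, 8, 7]
proto_b = [9, 9, 8, 8, 9, 8, 2, 1, 9, 8]

def mean(ratings):
    return sum(ratings) / len(ratings)

def box_share(ratings, lo, hi):
    """Share of ratings falling in [lo, hi] on the scale."""
    return sum(lo <= r <= hi for r in ratings) / len(ratings)

for name, r in [("Prototype A", proto_a), ("Prototype B", proto_b)]:
    print(name,
          f"mean={mean(r):.1f}",
          f"top-3-box={box_share(r, 7, 9):.0%}",
          f"bottom-3-box={box_share(r, 1, 3):.0%}")
# Prototype A mean=7.5 top-3-box=90% bottom-3-box=0%
# Prototype B mean=7.1 top-3-box=80% bottom-3-box=20%
```

Here the top‑3‑box figures are close (90% vs 80%), but Prototype B’s 20% bottom‑3‑box share explains its lower mean: it is more polarizing, not simply less liked.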
4. Use Penalty (JAR) analysis to prioritize what to fix
Tab: Analysis → Penalty Analysis
Purpose: Separate “nice‑to‑improve” from “must‑fix” drivers. Use penalty analysis to decide which prototype is the stronger option and which levers matter most.
Background: Penalty analysis uses Just-About-Right (JAR) questions to identify when product attributes are perceived as too weak, too strong, or about right, and shows how those perceptions impact overall liking.
Within the Penalty Analysis tab:
- Look at % About Right vs % Too Weak / Too Strong for each attribute and each concept. This tells you how many people feel the product is “on target” versus needing improvement.
- Note where “too weak / too strong” is linked to a meaningful drop in overall liking or a key KPI. Those are your highest‑impact issues to address.
How to use this insight:
- Identify which attributes are most frequently off‑target (high “too weak/too strong”) and carry a strong penalty to overall liking. These are your priority optimization levers.
- Compare prototypes to see which one:
- Has fewer severe penalties on your most important attributes (often the better option to move forward with), and
- Might have strengths on secondary attributes (e.g., texture, aroma, appearance) that explain why it performs well overall.
- Translate this into simple guidance for your team, for example:
- “Penalty analysis shows that [attribute/s] are the main drivers of dissatisfaction; Prototype A is less exposed here, so it’s our stronger option.”
This is your “what to fix to move the needle” section.
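To make the mechanics of penalty analysis concrete, here is a minimal sketch of the mean‑drop calculation, using invented respondent‑level data (the JAR answers, liking scores, and `penalty` helper are all hypothetical, not the platform’s actual computation):

```python
# Hypothetical respondent data: (JAR answer for one attribute, overall liking on a 9-pt scale).
responses = [
    ("about right", 8), ("about right", 9), ("about right", 8),
    ("about right", 7), ("about right", 8), ("too weak", 6),
    ("too weak", 5), ("too weak", 6), ("too strong", 7), ("too strong", 6),
]

def penalty(responses, level):
    """Mean-drop penalty: liking among 'about right' minus liking among `level`."""
    jar = [lik for ans, lik in responses if ans == "about right"]
    off = [lik for ans, lik in responses if ans == level]
    share = len(off) / len(responses)                  # % of respondents off-target
    drop = sum(jar) / len(jar) - sum(off) / len(off)   # drop in overall liking
    return share, drop

for level in ("too weak", "too strong"):
    share, drop = penalty(responses, level)
    print(f"{level}: {share:.0%} of respondents, liking drop = {drop:.1f}")
# too weak: 30% of respondents, liking drop = 2.3
# too strong: 20% of respondents, liking drop = 1.5
```

An attribute is a priority fix when both numbers are large: many respondents are off‑target and being off‑target comes with a meaningful drop in liking.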
Want to go deeper?
There is a separate help article dedicated to how Penalty Analysis works and how to interpret it step‑by‑step.
5. Conclude with a simple, clear story
Tabs used: Scorecard + Penalty + Explore (attributes & verbatims) + Crosstabs
The end product of all this navigation should be 3–5 crisp insights.
A reusable structure (adapt with your numbers):
- Headline (overall performance):
- “Our prototypes perform on par with, or better than, [each other / the competitor] overall on [KPIs].”
- Specific edges:
- “Prototype A shows a clear edge on [attribute/s] vs Prototype B, with fewer flavor penalties.”
- Trade‑offs and R&D direction for optimization:
- “Prototype A offers better [attribute/s], explaining its slight edge on some overall liking measures, while Prototype B provides stronger delivery on [other attribute/s].”
- Recommendation:
- “Our final product should prioritize Prototype A as the lead option and aim to fix [attribute/s] to further improve delivery perceptions.”
Reference topics (for nuanced questions)
A. Means vs. Top‑box / Top‑3‑box
- For 9‑point scales, use means or top‑3‑box, but don’t switch between top‑2 and top‑3 just to make a product look better.
- It’s fine to move from means to top‑3‑box if 7.1 vs 6.8 looks misleading and the real inference is parity; the goal is clarity, not spin.
- For 5‑point scales, top‑2‑box is the standard.
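To see why switching box conventions mid‑story counts as spin, consider this sketch with invented 9‑point ratings (the data and the `top_box` helper are hypothetical): depending on whether you report top‑2‑box or top‑3‑box, a different product “wins.”

```python
# Hypothetical 9-point ratings (illustrative data only).
proto_a = [9, 9, 8, 6, 6, 5, 9, 8, 6, 5]
proto_b = [7, 7, 7, 7, 7, 8, 8, 6, 6, 6]

def top_box(ratings, n, scale_max=9):
    """Share of ratings in the top n points of the scale."""
    return sum(r > scale_max - n for r in ratings) / len(ratings)

for name, r in [("A", proto_a), ("B", proto_b)]:
    print(f"Prototype {name}: top-2-box={top_box(r, 2):.0%}, top-3-box={top_box(r, 3):.0%}")
# Prototype A: top-2-box=50%, top-3-box=50%
# Prototype B: top-2-box=20%, top-3-box=70%
```

Prototype A leads on top‑2‑box while Prototype B leads on top‑3‑box, so picking whichever cut flatters your preferred product would mislead; choose one convention up front and apply it consistently.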
B. Reconciling “contradictions” (overall liking vs ranking)
- Differences between scale questions (overall liking) and rankings often appear when gaps are tiny (second decimal place). Treat them as parity, not wins.
- Use distribution views (top/mid/bottom‑box) to clarify that the issue is usually polarization vs. average, not that one product is truly loved and the other isn’t.
- Phrase it as: “Both prototypes are similarly liked; one is slightly more polarizing, and when forced to choose, a small share favors [X], but results are very close.”
Quick note on Reports & Downloads
- Reports (Instant Reports): auto‑generate PPTs with Scorecard, attribute charts, and AI‑generated takeaways, which you can then edit to fit your preferred story.
- Downloads: use these when you want the underlying data or chart exports to plug into your own templates.