
Multi-Project Dashboards

Compare Product Performance Across Multiple Projects

Overview

Multi-Project Dashboards enable you to compare product performance across multiple research projects, revealing trends and patterns that aren't visible when analyzing projects individually. By aggregating data from comparable tests, you can track product evolution, benchmark against competitors, and make strategic decisions based on comprehensive historical data.

However, comparing data across projects requires careful consideration of research design and methodology. This guide helps you understand when cross-project comparisons are meaningful and valid, and how to avoid common pitfalls that can lead to misleading conclusions.

Comparison Scenarios

  1. Iterative Testing & Product Refinement. You test a prototype, refine it, and retest with Highlight! Comparing results shows whether refinements improved key metrics or introduced new issues.
  2. Variant or Formula Testing As Samples Are Ready. You run separate tests for product variants (e.g., flavor, scent, packaging) on a rolling basis, as those variants are produced and ready for consumer testing. Comparison identifies which variant performs best and how they differ across consistent metrics.
  3. Benchmark or Competitor Tracking. You repeat the same test periodically (e.g., quarterly or annually) to understand consumer perceptions of your product within a changing competitive landscape. Enables trend analysis and quantifies performance shifts over time.
  4. Audience Comparison. You test the same product among different consumer segments, as new or expansion targets are identified by marketing teams. Comparison highlights how context or demographics shape perceptions, helping you identify the best targets for your product and where to invest more heavily.
  5. Concept-to-Product Validation. You test a concept first, then test the actual product once it’s produced and ready to test! Ensures the final product delivers on the original promise.

Understanding Comparison Pitfalls

Critical Factors That Affect Comparability

  • Audience Criteria
    • Audience criteria and definitions play a crucial role in the validity of cross-project comparisons. Ideally, participants in each study being compared were recruited using the same screening qualifications and quotas, resulting in comparable user types (for example, similar category buyers or product users). If audience composition differed, shifts in results may reflect who participated rather than how the product performed. In some cases, however, audience criteria may intentionally differ from test to test. Be sure to keep this context in mind when reviewing the data.
  • Questionnaire design
    • For data to be meaningfully compared, question wording, order, and response scales should be consistent across projects. Even small changes in phrasing or scale anchors can influence how participants interpret and respond to questions, introducing artificial differences between studies.
  • Sample size
    • The tests being compared should have similar sample sizes. Unequal samples can distort averages or make results appear more (or less) meaningful than they actually are. You can still compare data from tests with different sample sizes, but it’s important to proceed with caution.
    • If sample sizes are small or responses are highly variable, statistical power decreases, and small differences may appear meaningful when they aren’t, or go undetected when they are. When sample sizes fall below 50, look for directional alignment rather than treating every numeric difference as significant (see the sketch after this list).
  • Comparable testing conditions
    • Consistent usage instructions, test duration, and testing environment help ensure results are grounded in the product experience rather than circumstance. If seasonality, test duration, context, or even product freshness differed across projects, those factors should be considered when reviewing differences in outcomes.
  • Product formulation change
    • In some cases, product modifications between tests are substantial enough to invalidate a direct comparison. For example, if a product’s core formulation, sensory profile, or positioning has fundamentally changed, the new version may no longer represent the same underlying experience being measured in the original test. In these cases, it’s more appropriate to treat the newer test as a baseline for the next generation rather than a continuation of the prior benchmark. You can still draw directional learnings—such as whether the new approach better meets consumer needs—but absolute metric-to-metric comparisons should be interpreted cautiously, as they may reflect different product realities rather than true performance shifts.
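
To make the sample-size point concrete, here is a minimal, illustrative Python sketch (not a Highlight feature) that runs a pooled two-proportion z-test on a hypothetical top-2-box score. The respondent counts and the 80% threshold below are assumptions chosen for illustration: the same 10-point gap clears the threshold with 200 respondents per test but not with 40, which is why smaller studies call for directional reads.

    # Illustrative only: a pooled two-proportion z-test showing how the same
    # observed gap can be statistically significant with larger samples but
    # not with small ones. All figures here are hypothetical.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_p_value(successes_a, n_a, successes_b, n_b):
        """Two-sided p-value for the difference between two proportions."""
        p_a, p_b = successes_a / n_a, successes_b / n_b
        pooled = (successes_a + successes_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # The same 10-point gap (70% vs. 60% top-2-box) at two sample sizes.
    for n in (40, 200):
        p = two_proportion_p_value(round(0.70 * n), n, round(0.60 * n), n)
        print(f"n={n} per test: p-value={p:.3f}, "
              f"significant at 80% confidence? {p < 0.20}")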

Mistakes to Avoid When Looking for Direct Apples-to-Apples Comparisons

  • Comparing projects with different audiences or participant criteria
  • Comparing KPIs when the question wording, metric or scale has changed (e.g., 5-pt to 7-pt scale)
  • Ignoring differences in sample size
  • Overlooking external influences (seasonality, holidays, campaigns)
  • Assuming all numerical differences are meaningful without considering statistical power
  • Failing to account for methodological changes (design, instructions, collection mode, environment)
  • Not documenting differences between projects to inform interpretation

Best Practices for Valid Comparisons

Before Creating a Multi-Project Dashboard

  • Review source project methodologies for consistency
  • Verify question alignment and mapping for KPIs critical to compare
  • Assess audience comparability
  • Consider participant instructions, contextual factors and market conditions
  • Document methodological differences between projects to inform interpretation

Interpreting Multi-Project Results

  • Focus on differences that are statistically significant at the 80% confidence level or higher
  • If sample sizes are below 50, treat differences as directional rather than conclusive (see the sketch below)
  • Look for consistent patterns across multiple metrics to confirm the validity of any takeaways
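
As a companion to these guidelines, the brief sketch below is a hypothetical Python helper (not part of the Highlight platform) that labels a difference as significant, directional, or inconclusive using the base-size and confidence thresholds described above.

    # Illustrative helper that applies the reading rules above. The thresholds
    # are assumptions taken from this guide, not platform defaults.
    MIN_BASE_SIZE = 50        # below this, treat differences as directional
    CONFIDENCE_LEVEL = 0.80   # "significant" means p-value < 1 - confidence level

    def label_difference(p_value: float, n_a: int, n_b: int) -> str:
        """Label a cross-project difference per the interpretation guidance."""
        if min(n_a, n_b) < MIN_BASE_SIZE:
            return "directional only (base size below 50)"
        if p_value < 1 - CONFIDENCE_LEVEL:
            return "statistically significant at 80%+ confidence"
        return "not statistically significant; look for consistent patterns"

    # Example: the same p-value reads differently depending on base sizes.
    print(label_difference(p_value=0.04, n_a=150, n_b=160))  # significant
    print(label_difference(p_value=0.04, n_a=35, n_b=160))   # directional only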

Communicating Findings Appropriately

  • Clearly state which projects are included in comparisons
  • Disclose any methodological differences between projects, including design, audience criteria, fieldwork timing, and product changes made between tests
  • Provide appropriate context for differences observed
  • Avoid overconfident conclusions when comparability is limited
  • Include necessary caveats and limitations

Getting Started with Multi-Project Dashboards

Coming Soon