ROI [🔏CLASSIFIED FILE] No. X043 | What is ICE Scoring

📅 2025-10-27

🕒 Reading time: 12 min

🏷️ ICE 🏷️ Prioritization 🏷️ Product Management 🏷️ Learning 🏷️ [🔏CLASSIFIED FILE]




Detective's Memo: "Which feature should we build first?" "Which initiative deserves priority?" These are the eternal mysteries confronting every product team. Most rely on the loudest voice in the room, the highest-paid person's decree, or gut feeling. But true detectives carry a different tool: ICE Scoring, a formula that quantitatively evaluates every idea across three dimensions (Impact, Confidence, Ease) to derive the most efficient order of attack. Why was this methodology, devised by Sean Ellis, founder of GrowthHackers.com, adopted by hyper-growth companies like Intercom, HubSpot, and Slack? How does one simple calculation (I×C×E) untangle trade-off-laden decisions such as "high impact but hard to implement" versus "easy but low payoff"? Uncover the measurement philosophy that gains objectivity not by eliminating subjectivity, but by structuring it.

What is ICE Scoring - Case Overview

ICE Scoring, a three-dimensional priority evaluation method built on Impact, Confidence, and Ease, is a product/growth initiative prioritization framework conceived by Sean Ellis, founder of GrowthHackers.com. For any number of candidate ideas, it rates three factors on a 1-10 scale: Impact (how much the idea moves you toward your goal), Confidence (how certain you are it will succeed), and Ease (how simple it is to implement). It then computes a score via multiplication (I×C×E) or averaging ((I+C+E)/3) and executes from the highest score down. That is the textbook understanding. In actual practice, however, ICE often degrades into mere point assignment, and most product managers fail to grasp its true structural value: controlling ambiguous criteria, managing subjectivity, and building team consensus.

Investigation Memo: ICE is not merely a "priority list creation tool" but a "decision transparency and democratization system." Why does it circumvent HiPPO (Highest Paid Person's Opinion) bias? And why does it gain reproducibility by "structuring" rather than "eliminating" intuition? This is the decision-making foundation of product management—the criteria for determining "what is minimum" in MVP, and the practical application of Baseline of Measurement. We must decode its true nature.

Basic Structure of ICE Scoring - Evidence Analysis

Core Evidence: Multi-dimensional evaluation through three questions

The Three Dimensions of ICE

I: Impact (Magnitude of Effect)

Definition: If this initiative succeeds, how much does it advance toward the goal?

Question:

"Assuming this initiative succeeds perfectly,
 how much does it contribute to goal achievement?"

Evaluation Scale:
- 1-3 points: Minimal impact (microscopic improvement)
- 4-6 points: Moderate impact (partial improvement)
- 7-9 points: Significant impact (clear progress)
- 10 points: Decisive impact (game changer)

Concrete Examples (E-commerce Conversion Rate Goal):

Initiative A: Add one more product image
→ Impact 3 (microscopic improvement)

Initiative B: Add review functionality
→ Impact 7 (significantly affects purchase decisions)

Initiative C: Implement one-click purchase
→ Impact 9 (dramatically simplifies buying process)

Critical Insight: Impact evaluates the "if successful" scenario → Success probability is evaluated by Confidence


C: Confidence (Certainty Level)

Definition: How certain are you this initiative will produce the expected effect?

Question:

"How confident can you be
 that this initiative will actually work?"

Evaluation Scale:
- 1-3 points: Almost no confidence (hypothesis only, no data)
- 4-6 points: Moderate confidence (similar cases exist, partial data)
- 7-9 points: High confidence (proven data, A/B tested)
- 10 points: Complete confidence (verified in-house)

Judgment Materials:
- Past company data
- Competitor success cases
- Industry best practices
- User research results
- Expert opinions

Concrete Examples:

Initiative A: Add one more product image
→ Confidence 8 (widely proven in industry)

Initiative B: Add review functionality
→ Confidence 9 (clear track record: Amazon, etc.)

Initiative C: Implement one-click purchase
→ Confidence 5 (high impact but implementation risks)

Importance of Confidence:
- High Impact × Low Confidence = gambling
- Medium Impact × High Confidence = solid improvement


E: Ease (Implementation Simplicity)

Definition: How easily and quickly can this initiative be implemented?

Question:

"How much time and resources
 does implementing this initiative require?"

Evaluation Scale:
- 1-3 points: Extremely difficult (months, large-scale development)
- 4-6 points: Moderate (weeks, medium-scale development)
- 7-9 points: Relatively easy (days, small-scale development)
- 10 points: Extremely easy (hours, configuration change only)

Consideration Factors:
- Development effort
- Design effort
- Technical difficulty
- External dependencies
- Legal/regulatory hurdles

Concrete Examples:

Initiative A: Add one more product image
→ Ease 9 (just shoot and upload)

Initiative B: Add review functionality
→ Ease 4 (requires database design, UI implementation)

Initiative C: Implement one-click purchase
→ Ease 3 (major payment system overhaul, security)

Connection to the Realization First Principle: Even when Ease is low, a method of realization always exists
→ Manual work, external services, and other substitutes can stand in for full development

ICE Score Calculation Methods

Method 1: Multiplication Approach

ICE Score = Impact × Confidence × Ease

Example: Initiative A (product image)
Impact 3 × Confidence 8 × Ease 9 = 216

Example: Initiative B (review functionality)
Impact 7 × Confidence 9 × Ease 4 = 252

Example: Initiative C (one-click purchase)
Impact 9 × Confidence 5 × Ease 3 = 135

Characteristics:
- A single rock-bottom score drags the whole product down
- Extreme differences get amplified
- Balanced initiatives score higher


Method 2: Average Approach (Addition)

ICE Score = (Impact + Confidence + Ease) / 3

Example: Initiative A (product image)
(3 + 8 + 9) / 3 = 6.67

Example: Initiative B (review functionality)
(7 + 9 + 4) / 3 = 6.67

Example: Initiative C (one-click purchase)
(9 + 5 + 3) / 3 = 5.67

Characteristics:
- One low score's influence is softened
- More intuitive to read (stays on the 1-10 scale)
- Generally recommended

Which Should You Use:
- Use whichever the team agrees on, consistently
- Most teams adopt the average approach
- What matters is comparing everything against the same standard

Note that the two methods can rank differently: the average ties A and B at 6.67, while multiplication puts B (252) ahead of A (216) because A's very low Impact is punished harder under multiplication.
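To make the two methods concrete, here is a minimal sketch in Python; the initiative names and scores are the illustrative ones from the e-commerce example above, not real data.

```python
# A minimal sketch of both ICE calculation methods, using the
# illustrative 1-10 scores from the e-commerce example above.

initiatives = {
    "A: add product image":    {"impact": 3, "confidence": 8, "ease": 9},
    "B: review functionality": {"impact": 7, "confidence": 9, "ease": 4},
    "C: one-click purchase":   {"impact": 9, "confidence": 5, "ease": 3},
}

def ice_multiply(s: dict) -> int:
    """Multiplication approach: I x C x E (range 1-1000)."""
    return s["impact"] * s["confidence"] * s["ease"]

def ice_average(s: dict) -> float:
    """Average approach: (I + C + E) / 3 (range 1-10)."""
    return (s["impact"] + s["confidence"] + s["ease"]) / 3

for name, scores in initiatives.items():
    print(f"{name}: multiply={ice_multiply(scores)}, average={ice_average(scores):.2f}")
# A: multiply=216, average=6.67
# B: multiply=252, average=6.67
# C: multiply=135, average=5.67
```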

ICE Scoring Implementation Procedure - Investigation Methods

Investigation Discovery 1: Intercom's Practical Case

Case Evidence (Product prioritization at a hyper-growth SaaS):

Phase 1: Problem Emergence

Background (circa 2014):

Situation:
- 200+ items in product idea backlog
- Engineer/designer time is finite
- Loudest customer requests get prioritized
- CEO's "gut feeling" changes priorities

Problems:
- Non-data-driven decision-making
- Low team buy-in
- Many post-launch features had minimal effect
- Resource waste

Turning Point: Product Manager seeks "objective prioritization mechanism" → Discovers Sean Ellis's ICE framework

Phase 2: ICE Implementation Process

Step 1: Define Evaluation Criteria

Impact Criteria (relative to company goal):
Goal: Increase Monthly Active Users (MAU)

10 points: Potential for MAU +20% or more
7-9 points: MAU +10-20%
4-6 points: MAU +5-10%
1-3 points: MAU less than +5%

Confidence Criteria:
10 points: A/B tested and proven in-house
7-9 points: Competitor success cases + internal data
4-6 points: Industry best practices
1-3 points: Hypothesis only

Ease Criteria:
10 points: Within 1 day
7-9 points: Within 1 week
4-6 points: Within 1 month
1-3 points: Over 1 month
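Criteria like these can also be codified so that a score no longer depends on who happens to be in the room. The sketch below hard-codes the Impact and Ease thresholds above; the function names and the band midpoints (8, 5, 2) are assumptions of this sketch, and Confidence is left to human judgment because its rubric is qualitative.

```python
def impact_score(expected_mau_lift_pct: float) -> int:
    """Map an estimated MAU lift (%) onto the Impact rubric above.
    Band midpoints (8, 5, 2) are an assumption, not part of ICE."""
    if expected_mau_lift_pct >= 20:
        return 10
    if expected_mau_lift_pct >= 10:
        return 8   # 7-9 band
    if expected_mau_lift_pct >= 5:
        return 5   # 4-6 band
    return 2       # 1-3 band

def ease_score(estimated_days: float) -> int:
    """Map estimated implementation time onto the Ease rubric above."""
    if estimated_days <= 1:
        return 10
    if estimated_days <= 7:
        return 8
    if estimated_days <= 30:
        return 5
    return 2

print(impact_score(12.0), ease_score(5))  # 8 8
```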

Step 2: Score All Initiatives

Process:
1. Weekly evaluation meeting with entire product team
2. 5 minutes discussion per initiative
3. Each member independently assigns scores
4. Use average (discuss extreme outliers)
5. Sort by score

Sample Results:
Initiative A: (9+8+9)/3 = 8.67
Initiative B: (7+9+7)/3 = 7.67  
Initiative C: (10+4+3)/3 = 5.67
Initiative D: (6+7+8)/3 = 7.00

Priority Order: A → B → D → C
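The meeting mechanics in Step 2 are easy to support with a small script. Below is a sketch assuming each member has already submitted one ICE average per initiative (all numbers hypothetical): it averages the team's scores, sorts descending, and flags initiatives where members diverge enough to deserve discussion.

```python
from statistics import mean, stdev

# Hypothetical per-member ICE averages (one value per team member).
member_scores = {
    "Initiative A": [9.0, 8.5, 9.0, 8.5],
    "Initiative B": [7.5, 8.0, 7.5, 8.0],
    "Initiative C": [5.5, 5.5, 8.5, 5.5],  # one outlier -> discuss before settling
    "Initiative D": [7.0, 7.0, 7.0, 7.0],
}

ranked = sorted(member_scores.items(), key=lambda kv: mean(kv[1]), reverse=True)
for name, scores in ranked:
    flag = "  <- discuss: scores diverge" if stdev(scores) > 1.0 else ""
    print(f"{name}: {mean(scores):.2f}{flag}")
# Initiative A: 8.75
# Initiative B: 7.75
# Initiative D: 7.00
# Initiative C: 6.25  <- discuss: scores diverge
```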

Step 3: Execute from Top

Rules:
- Start with highest ICE scores in order
- "But [person X] said..." is invalid
- New ideas must undergo ICE evaluation
- Monthly score review (adapt to market changes)

Phase 3: Post-Implementation Effects

Quantitative Results:

Pre-Implementation (6 months):
- Features released: 18
- Features with impact: 5 (28%)
- Average development effort: 3 weeks/feature

Post-Implementation (6 months):
- Features released: 12 (selective)
- Features with impact: 9 (75%)
- Average development effort: 2 weeks/feature (high Ease priority)

ROI:
Development effort: down 33%
Success rate: +47 percentage points
MAU growth rate: 1.8x acceleration

Qualitative Results:

Team Changes:
- Improved transparency on "why are we doing this"
- Buy-in for decisions
- Decreased CEO decree frequency (data-driven dialogue)
- Strengthened Product Manager authority

Cultural Shift:
"Seems good somehow" → "What's the ICE score?"
became the catchphrase

Investigation Discovery 2: Application at ROI Detective Agency

Case Evidence (Content strategy prioritization):

Challenge: Selecting Articles to Write

Situation:

Backlog:
- Frameworks to write about: 50+
- Time: 2-3 articles/week maximum
- Goal: Reach 100,000 monthly pageviews

Traditional Method:
- Choose by "seems interesting"
- Choose by "easy to write"
→ Pageviews stagnant, goal unmet

ICE Application Process

Setting Evaluation Criteria:

Impact (PV Contribution):
10 points: Monthly search volume 10,000+
7-9 points: Monthly search volume 3,000-10,000
4-6 points: Monthly search volume 1,000-3,000
1-3 points: Monthly search volume <1,000

Confidence (Top Ranking Certainty):
10 points: Competitor article quality is low
7-9 points: Many competitors but differentiation possible
4-6 points: Strong competitors exist
1-3 points: Authority sites dominate

Ease (Writing Ease):
10 points: Already have knowledge/experience (2 hours)
7-9 points: Light research suffices (5 hours)
4-6 points: Deep research needed (10 hours)
1-3 points: Expert interview required (20+ hours)
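A sketch of this rubric in Python; the function name is hypothetical, and Confidence (competitor strength) remains a human judgment passed in directly.

```python
def article_ice(monthly_search_volume: int, confidence: int, est_hours: float) -> float:
    """Average-method ICE for an article candidate, using the rubric above.
    Band values (10, 8, 5, 2) are an assumption of this sketch."""
    if monthly_search_volume >= 10_000:
        impact = 10
    elif monthly_search_volume >= 3_000:
        impact = 8
    elif monthly_search_volume >= 1_000:
        impact = 5
    else:
        impact = 2

    if est_hours <= 2:
        ease = 10
    elif est_hours <= 5:
        ease = 8
    elif est_hours <= 10:
        ease = 5
    else:
        ease = 2

    return round((impact + confidence + ease) / 3, 2)

print(article_ice(5_000, 8, 4))  # 8.0 -- solid volume, winnable, quick to write
```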

Evaluation Results (Excerpt):

Article Candidate A: "What is MVP"
I: 9 (large search volume)
C: 8 (confident we can beat competitors)
E: 9 (have practical experience)
ICE: 8.67 → Priority #1

Article Candidate B: "What is SWOT Analysis"
I: 10 (extremely high search volume)
C: 3 (authority sites dominate)
E: 7 (easy to write)
ICE: 6.67 → Defer

Article Candidate C: "What is Gamification"
I: 5 (medium search volume)
C: 9 (few competitors)
E: 4 (requires specialized knowledge)
ICE: 6.00 → Defer

Execution Results:

Wrote the ten highest-ICE-scoring articles first

Results (after 3 months):
- Top 10 articles average PV: 2,500/month
- Lower articles average PV: 400/month
- Approximately 6x difference

Learning:
"Want to write" ≠ "Gets read"
Objective judgment via ICE produces results

The Power of ICE Scoring - Crime Prevention Effect

Power 1: Neutralizing HiPPO (Highest Paid Person's Opinion)

Traditional:
CEO: "Let's build this feature" → Development starts
→ Data ignored, frontline opinions ignored

Post-ICE:
CEO: "Let's build this feature"
PM: "What's the ICE score?"
CEO: "Impact 10, Confidence 4, Ease 2... 5.3"
PM: "Let's prioritize this 8.5 initiative first"
CEO: "Got it"

Change:
Dialogue through data, not power

Power 2: Decision Transparency and Democratization

Problem: "Why was this prioritized? I don't understand"

Solution:
Share ICE scores for all initiatives
→ Anyone can understand prioritization rationale
→ Increased buy-in and ownership

Effect:
Team cohesion
Trust in priorities

Power 3: Discovering "Quick Wins"

Typical Pattern:
Impact 6, Confidence 8, Ease 10
→ ICE Score 8.0

Characteristics:
- Not a dramatic effect
- But certain to succeed
- Can implement immediately

Strategic Value:
Early victories boost team morale
→ Foundation for bigger challenges
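The quick-win pattern reduces to a simple filter. The thresholds below are an assumption of this sketch, not part of the ICE framework.

```python
def is_quick_win(impact: int, confidence: int, ease: int,
                 min_confidence: int = 7, min_ease: int = 8) -> bool:
    """Flag initiatives that are near-certain and cheap to ship,
    regardless of how dramatic their Impact is."""
    return confidence >= min_confidence and ease >= min_ease

print(is_quick_win(6, 8, 10))  # True  -- the pattern described above
print(is_quick_win(9, 5, 3))   # False -- big bet, neither quick nor certain
```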

Power 4: Optimizing Resource Allocation

Traditional: Pour all resources into large project
→ Huge damage if it fails

ICE Utilization:
Execute high-score initiatives in parallel
→ Risk diversification
→ Accelerated learning cycles

Excellent compatibility with
[Agile Development](/behind_case_files/articles/X038_AGILE_DEVELOPMENT)

Limitations and Caveats of ICE Scoring - Investigation Warnings

Limitation 1: Complete Elimination of Subjectivity is Impossible

Problem:
ICE scores ultimately rely on "human judgment"

Countermeasures:
- Team evaluation (averaging reduces bias)
- Codify evaluation criteria ([Baseline of Measurement](/behind_case_files/articles/X041_BOM))
- Regular retrospectives (improve prediction accuracy)

Critical Recognition:
Aim not for "perfect objectivity"
but for "structured subjectivity"

Limitation 2: Overlooking Strategic Importance

Case:
Initiative X: ICE score 4.5 (low)
However: Prerequisite for next major feature

Countermeasure:
Consider strategic dependencies separately
Understand ICE is "individual evaluation"

Limitation 3: Score Assignment Becomes Perfunctory

Dangerous Pattern:
"Everything gets 8 points anyway"
→ No differentiation, becomes meaningless

Countermeasures:
- Be conscious of relative evaluation
- Regular calibration
- Feedback loop with actual results
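One way to run the feedback-loop countermeasure, sketched under assumptions: re-derive an Impact score from the measured result using the same rubric the team scored against, then watch for a recurring gap. All names, thresholds, and numbers below are hypothetical.

```python
shipped = [
    # (initiative, predicted Impact score, measured MAU lift in %)
    ("feature X", 7, 12.0),
    ("feature Y", 9, 4.0),
]

def impact_from_result(mau_lift_pct: float) -> int:
    """Re-score Impact from the measured result, using the same
    (hypothetical) rubric the team originally scored against."""
    if mau_lift_pct >= 20:
        return 10
    if mau_lift_pct >= 10:
        return 8
    if mau_lift_pct >= 5:
        return 5
    return 2

for name, predicted, lift in shipped:
    actual = impact_from_result(lift)
    print(f"{name}: predicted {predicted}, actual {actual}, error {predicted - actual:+d}")
# feature X: predicted 7, actual 8, error -1
# feature Y: predicted 9, actual 2, error +7  (recurring +errors = over-scoring)
```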

Limitation 4: Undervaluing Long-term Value

Problem:
Low Ease = Deferred
→ Important but difficult initiatives never get started

Countermeasure:
Institutionalize a rule: once per quarter, take on one
"low Ease but high Impact×Confidence" initiative

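A sketch of how that quarterly rule might be applied, with a hypothetical backlog and an assumed Ease cutoff of 3 or below.

```python
backlog = [
    {"name": "payments overhaul", "impact": 9, "confidence": 7, "ease": 2},
    {"name": "mobile rewrite",    "impact": 8, "confidence": 5, "ease": 3},
    {"name": "copy tweak",        "impact": 3, "confidence": 9, "ease": 10},
]

# Once per quarter: among hard initiatives (low Ease), commit to the one
# with the highest Impact x Confidence. Ease <= 3 is an assumed cutoff.
hard_bets = [i for i in backlog if i["ease"] <= 3]
quarterly_bet = max(hard_bets, key=lambda i: i["impact"] * i["confidence"])
print(quarterly_bet["name"])  # payments overhaul (9 x 7 = 63)
```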
Related Frameworks - Relational Evidence

Direct Relations:
- MVP: Use ICE to determine "what is minimum"
- Realization First Principle: A realization method exists even when Ease is low
- Baseline of Measurement: Essential for constructing ICE evaluation criteria

Indirect Relations:
- Agile Development: ICE in sprint planning
- AARRR: Apply ICE to prioritize initiatives at each funnel stage
- HEART Framework: Measure Impact with HEART metrics

Industry-Specific ICE Use Cases - Field Testimonies

SaaS Industry (Slack):

Goal: Increase user engagement

High ICE Score Initiatives:
- Add emoji reactions (I:7, C:9, E:9) → 8.3
- Threading feature (I:9, C:7, E:5) → 7.0

Results:
Implemented emoji first → huge success → momentum for threads too

E-commerce (Amazon):

Goal: Increase conversion rate

ICE Application:
One-click purchase (I:10, C:8, E:3) → 7.0
→ High score but technically difficult
→ Made into long-term project and realized
→ Became source of competitive advantage

Media (Netflix):

Goal: Increase viewing time

Evaluation Example:
Auto-play feature (I:8, C:7, E:9) → 8.0
→ High score, implemented immediately
→ Viewing time increased significantly (though the feature also drew criticism)

Learning:
High ICE still requires separate ethical consideration

Investigation Conclusion - Essence of the Case

The true value of ICE Scoring lies not in creating the "perfect priority list," but in "structuring and making transparent the decision-making process."

Three Essences:

1. Structure intuition rather than eliminate it
   → Convert "somehow" into "because"

2. Gain reproducibility, not perfect objectivity
   → Can make judgments with same process next time

3. Enable dialogue rather than defeat HiPPO
   → Constructive discussion through data

ICE Philosophy:

The world has infinite ideas
But resources are finite
Therefore—
Choose the path that realizes
the greatest impact
with the highest probability
with the least effort

This is the product management truth
embedded in the simple formula
(Impact × Confidence × Ease)

ROI Detective Agency applies this methodology to all decisions, pursuing the path to generate maximum value with limited resources. Only those who decode the cipher called prioritization reach the treasure called success—never forget the truth this case file reveals.

[🔏CLASSIFIED FILE DESIGNATION No. X043 - Investigation Complete]

*Free trial available for eligible customers only