📅 2026-01-02 23:00
🕒 Reading time: 9 min
🏷️ MECE
The day after resolving Global Solutions Inc.'s TOC case, a consultation arrived regarding AI-powered game content development. Volume 30, "The Pursuit of Reproducibility," Case 372 tells the story of systematically decomposing challenges with MECE.
"Detective, we have treasure. Nine years of user data. Conversation data accumulated from our chat game between users and characters. Over 5 million records. Plus past event and story information. Over 2,000 items. But we don't know how to use this treasure."
Misaki Sato, Planning Director of NeuroPlay Inc. from Akihabara, visited 221B Baker Street with an expression mixing anticipation and anxiety. In her hands were schema diagrams of massive databases spanning 9 years, and in stark contrast, a simple plan titled "AI Content Development Roadmap 2026."
"We've operated the chat game 'CharaTalk' for 9 years. 38 employees. Annual revenue of 800 million yen. 120,000 monthly active users. However, planning and operations are entirely manual. We only use generative AI for design, and we lack knowledge about AI utilization methods and possibilities."
NeuroPlay Inc.'s Current State:
- Founded: 2017 (chat game operation)
- Employees: 38
- Annual revenue: 800M yen
- Monthly active users: 120,000
- Issues: unclear data utilization methods, lack of AI knowledge, no system development experience
Deep anxiety permeated Sato's voice.
"The directive from our decision-maker, the CEO, is clear: 'Use this data to create new AI game content by summer 2026.' We envision readable content or adventure games positioned alongside the existing game. But we have absolutely no idea where to start."
Reality of Accumulated Data:
Data 1: Conversation Logs (9 years)
- Total records: 5.23 million
- Period: April 2017 - January 2026
- Structure: user_id, character_id, message, timestamp, sentiment_score
- File size: 18GB (compressed)

Data 2: Event & Story Information
- Total events: 2,147
- Total stories: 8,652
- Structure: event_id, story_id, title, text, choices, outcomes
- File size: 3.2GB

Data 3: User Behavior Logs
- Total records: 18.5 million
- Structure: user_id, action_type, target_id, timestamp
- File size: 25GB
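For concreteness, the three record types might be modeled like this in Python. The field names come from the schemas above; the value types (and the ISO 8601 timestamp format) are assumptions the article does not confirm.

```python
from typing import TypedDict

class ConversationLog(TypedDict):
    """One row of the 9-year conversation log (fields as described above)."""
    user_id: str
    character_id: str
    message: str
    timestamp: str        # ISO 8601 assumed; the source does not specify a format
    sentiment_score: float

class StoryRecord(TypedDict):
    """One event/story record."""
    event_id: str
    story_id: str
    title: str
    text: str
    choices: list[str]    # value types assumed
    outcomes: list[str]

class BehaviorLog(TypedDict):
    """One user behavior log entry."""
    user_id: str
    action_type: str
    target_id: str
    timestamp: str
```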
Current AI Utilization:
- Design: character images generated with Stable Diffusion (50 images/month)
- Everything else: manual (8 planners, 12 scenario writers)
Sato sighed deeply.
"There's another problem. We lack AI × system development knowledge. How to extract data from databases. Which AI models to use. How to train them. We don't know any of it. And the CEO says 'by summer 2026.' We only have 8 months."
"Sato-san, do you believe that solving all challenges at once will meet the summer deadline?"
My question left Sato looking confused.
"Isn't that the case? I thought having AI learn all the data would automatically generate content."
Current Understanding (All-at-Once Approach):
- Expectation: feed all the data to an AI and content is generated automatically
- Problem: challenges are not organized and priorities are unclear
I explained the importance of systematically decomposing challenges with MECE.
"The problem is thinking 'solve all challenges at once.' MECE—Mutually Exclusive and Collectively Exhaustive. Decompose challenges comprehensively without gaps or overlaps, and prioritize to achieve reproducible phased implementation."
"Don't solve everything at once. Decompose challenges with MECE, organizing comprehensively without gaps or overlaps"
"Challenges are always 'tangled threads.' Unraveling them one by one is essential"
"Classify challenges with MECE. Gaps lead to failure, overlaps create waste"
The three members began their analysis. Gemini displayed a "MECE Tree" on the whiteboard.
MECE Principles:
1. Mutually Exclusive: elements do not overlap
2. Collectively Exhaustive: together, the elements cover everything
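Both properties can be checked mechanically once every challenge is assigned to a category. A minimal sketch, with illustrative inputs that are not from the case:

```python
def check_mece(categories: dict[str, set[str]], universe: set[str]) -> None:
    """Verify Mutually Exclusive (no item in two categories) and
    Collectively Exhaustive (every item lands somewhere)."""
    seen: set[str] = set()
    for name, items in categories.items():
        overlap = seen & items
        if overlap:
            print(f"Overlap in {name}: {overlap}")   # violates Mutually Exclusive
        seen |= items
    gaps = universe - seen
    if gaps:
        print(f"Uncovered items: {gaps}")            # violates Collectively Exhaustive

# Illustrative use with hypothetical challenge names:
challenges = {"data extraction", "model selection", "schedule", "budget"}
tree = {
    "A. Data": {"data extraction"},
    "B. AI": {"model selection"},
    "D. PM": {"schedule", "budget"},
}
check_mece(tree, challenges)   # prints nothing: no overlaps, no gaps
```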
"Sato-san, let's first decompose the challenges with MECE."
Step 1: Challenge Identification (1 week)
Challenges Raised by Sato (Unorganized):
1. Don't know how to utilize AI
2. Unclear effective utilization methods for accumulated data
3. Don't know proper data extraction methods from databases
4. Don't know which AI models to use
5. Lack system development knowledge
6. Want to meet the summer 2026 deadline
7. Budget unclear
8. Internal opposition exists
"This challenge list has gaps and overlaps. Let's organize with MECE."
Step 2: MECE Tree Construction (1 week)
Level 1: Major Categories (Classify by challenge nature)
AI-Powered Game Content Development Challenges:
- A. Data-Related Challenges (What Data)
- B. AI Technology Challenges (What AI)
- C. System Development Challenges (What System)
- D. Project Management Challenges (How to Execute)
Level 2: Medium Categories (Details of each major category)
A. Data-Related Challenges:
- A1. Don't know data extraction methods
- A2. Don't know data preprocessing methods
- A3. Data quality unknown

B. AI Technology Challenges:
- B1. Don't know which AI model to use
- B2. Don't know AI model training methods
- B3. Don't know how to evaluate AI output quality

C. System Development Challenges:
- C1. Cannot design system architecture
- C2. Lack development resources (insufficient in-house engineers)
- C3. Lack infrastructure construction knowledge

D. Project Management Challenges:
- D1. Schedule to summer 2026 unclear
- D2. Budget unclear
- D3. Internal consensus not achieved
Step 3: Priority Setting (1 week)
Evaluation Axes (a small scoring sketch follows the results table):
- X-axis: Urgency (1-5 points, 5 = highest)
- Y-axis: Importance (1-5 points, 5 = highest)
Evaluation Results:
| Challenge | Urgency | Importance | Total | Priority |
|---|---|---|---|---|
| A1. Data extraction | 5 | 5 | 10 | 1st |
| B1. AI model selection | 5 | 5 | 10 | 1st |
| D1. Schedule | 5 | 4 | 9 | 3rd |
| A2. Data preprocessing | 4 | 5 | 9 | 3rd |
| B2. AI training methods | 4 | 5 | 9 | 3rd |
| C2. Development resources | 5 | 3 | 8 | 6th |
| D2. Budget | 4 | 4 | 8 | 6th |
| A3. Data quality | 3 | 4 | 7 | 8th |
| B3. AI evaluation methods | 3 | 4 | 7 | 8th |
| C1. Architecture | 3 | 3 | 6 | 10th |
| D3. Internal consensus | 3 | 3 | 6 | 10th |
| C3. Infrastructure | 2 | 3 | 5 | 12th |
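The Total column is simply urgency plus importance, and the Priority column follows standard competition ranking: ties share a rank, then the next rank is skipped. A short sketch reproducing the top six rows:

```python
# Each entry: (challenge, urgency, importance), scores taken from the table above.
challenges = [
    ("A1. Data extraction", 5, 5), ("B1. AI model selection", 5, 5),
    ("D1. Schedule", 5, 4), ("A2. Data preprocessing", 4, 5),
    ("B2. AI training methods", 4, 5), ("C2. Development resources", 5, 3),
]

scored = sorted(challenges, key=lambda c: c[1] + c[2], reverse=True)
rank, prev_total = 0, None
for i, (name, urg, imp) in enumerate(scored, start=1):
    total = urg + imp
    if total != prev_total:       # ties keep the earlier rank: 1st, 1st, 3rd, ...
        rank, prev_total = i, total
    print(f"{rank}. {name}: urgency={urg}, importance={imp}, total={total}")
```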
Phase 1 Priority Measures (Top 5):
Measure 1: Establish Data Extraction Methods (A1)
- Goal: properly export conversation logs and event information
- Duration: 2 weeks
- In charge: external data engineer + in-house DB administrator

Measure 2: AI Model Selection (B1)
- Goal: select the optimal LLM for content generation
- Candidates: GPT-4, Claude 3, Gemini Pro
- Duration: 2 weeks
- In charge: external AI consultant

Measure 3: Schedule Development (D1)
- Goal: create a detailed schedule through summer 2026
- Duration: 1 week
- In charge: PMO

Measure 4: Establish Data Preprocessing Methods (A2)
- Goal: format extracted data for AI training
- Duration: 3 weeks
- In charge: external data scientist

Measure 5: Establish AI Training Methods (B2)
- Goal: implement fine-tuning with the selected LLM
- Duration: 4 weeks
- In charge: external AI engineer
Months 1-2: Execute Measures 1-2 (Data Extraction + AI Model Selection)
Data Extraction Flow (a minimal export sketch follows):
1. Extract conversation logs from PostgreSQL by month
2. Export in JSON format (1 month ≈ 500K records)
3. Prioritize extraction of the most recent 3 months (1.5M records)
4. Extract all event/story information (2,147 items)
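A minimal sketch of the monthly export, assuming a PostgreSQL table named conversation_logs with the columns listed earlier and psycopg2 as the client. The table name, column names, and connection details are illustrative, not from the case.

```python
import json
import psycopg2  # assumed client library

conn = psycopg2.connect("dbname=charatalk user=exporter")  # hypothetical DSN

FIELDS = ("user_id", "character_id", "message", "timestamp", "sentiment_score")

def export_month(year: int, month: int) -> None:
    """Dump one month of conversation logs to a JSON file (~500K records/month)."""
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT user_id, character_id, message, timestamp, sentiment_score
            FROM conversation_logs
            WHERE date_trunc('month', timestamp) = make_date(%s, %s, 1)
            """,
            (year, month),
        )
        rows = [
            dict(zip(FIELDS, (u, c, m, t.isoformat(), s)))
            for (u, c, m, t, s) in cur
        ]
    with open(f"conversations_{year}-{month:02d}.json", "w", encoding="utf-8") as f:
        json.dump(rows, f, ensure_ascii=False)

# Most recent 3 months first, per the flow above:
for y, m in [(2026, 1), (2025, 12), (2025, 11)]:
    export_month(y, m)
```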
AI Model Selection Result:
- Selection: GPT-4 Turbo + RAG (Retrieval-Augmented Generation)
- Reasons: strong conversational context understanding, stable API, good cost efficiency
- Training approach: leverage existing data via RAG rather than fine-tuning
Month 3: Execute Measure 3 (Schedule Development)
Milestones to Summer 2026:
- Months 1-2: Data extraction + AI model selection (complete)
- Months 3-4: Data preprocessing + RAG system construction
- Months 5-6: Prototype development + test play
- Month 7: User testing + feedback integration
- Month 8: Official release (August 2026)
Months 3-4: Execute Measures 4-5 (Data Preprocessing + RAG System Construction)
Data Preprocessing (a sketch of steps 2 and 4 follows):
1. Classify conversation logs by user_id and character_id
2. Extract positive conversations via sentiment analysis (68% of the total)
3. Tag events/stories by theme (romance, adventure, daily life, etc.)
4. Store in a vector database (Pinecone)
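Steps 2 and 4 might look like the following sketch, which filters on the stored sentiment_score and upserts embeddings to Pinecone. The embedding model, index name, batch size, and positivity threshold are all assumptions, using the OpenAI and Pinecone Python clients.

```python
from openai import OpenAI       # assumed: official OpenAI Python client
from pinecone import Pinecone   # assumed: official Pinecone Python client

openai_client = OpenAI()                                  # reads OPENAI_API_KEY
index = Pinecone(api_key="...").Index("charatalk-rag")    # hypothetical index name

def preprocess_and_upsert(records: list[dict]) -> None:
    """Keep positive conversations (step 2) and store their embeddings (step 4)."""
    positive = [r for r in records if r["sentiment_score"] > 0.0]  # threshold assumed
    for i in range(0, len(positive), 100):                          # batch the upserts
        batch = positive[i : i + 100]
        embeddings = openai_client.embeddings.create(
            model="text-embedding-3-small",      # embedding model is an assumption
            input=[r["message"] for r in batch],
        )
        index.upsert(vectors=[
            (f'{r["user_id"]}:{r["timestamp"]}',   # id must be unique per vector
             e.embedding,
             {"character_id": r["character_id"], "message": r["message"]})
            for r, e in zip(batch, embeddings.data)
        ])
```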
RAG System Construction:
1. Vectorize the user's input query
2. Search Pinecone for similar conversations/stories (top 10)
3. Inject the search results into the GPT-4 prompt
4. GPT-4 generates the content
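The query side follows the same four steps, reusing openai_client and index from the preprocessing sketch. Again an illustrative sketch, not the team's actual implementation; the model id and prompt wording are assumptions.

```python
def generate_scene(user_query: str) -> str:
    """RAG flow: embed the query, retrieve the top 10 neighbors, prompt GPT-4."""
    # 1. Vectorize the user input
    q_vec = openai_client.embeddings.create(
        model="text-embedding-3-small", input=user_query
    ).data[0].embedding

    # 2. Search Pinecone for similar conversations/stories (top 10)
    hits = index.query(vector=q_vec, top_k=10, include_metadata=True)
    context = "\n".join(h.metadata["message"] for h in hits.matches)

    # 3-4. Inject the retrieved context into the prompt and generate
    response = openai_client.chat.completions.create(
        model="gpt-4-turbo",   # the case selected "GPT-4 Turbo"; exact id assumed
        messages=[
            {"role": "system",
             "content": "You write the next game scene in the style of the "
                        "following past conversations:\n" + context},
            {"role": "user", "content": user_query},
        ],
    )
    return response.choices[0].message.content
```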
Months 5-6: Prototype Development
Function 1: AI Story Generation
- Generates the next scene when the user selects a choice
- Learns from past popular events and automatically generates similar developments

Function 2: AI Character Conversation
- Characters respond based on the user's past conversation history
- Learns conversation patterns from 9 years of data

Prototype KPIs:
- Story generation speed: within 5 seconds
- User satisfaction: 70% or higher (100 test players)
Month 7: Effect Measurement
KPI 1: Content Production Time
- Before: 12 scenario writers produce 20 items/month at 8 hours per item
- After: AI generates 100 items/month, with 2 hours of human editing per item
- Reduction rate: 75% (8 hours → 2 hours per item)
- Time saved: 6 hours per item

KPI 2: Content Production Cost
- Before: 20 items/month × 8 hours × 3,500 yen (writer hourly rate) = 560K yen/month
- After: 100 items/month × 2 hours × 3,500 yen = 700K yen/month (5x volume at 1.25x cost)
- Cost per item: 28K yen → 7K yen (75% reduction)

KPI 3: User Engagement
- Prototype test satisfaction: 78% (above the 70% target)
- Play time: +35% vs. conventional content
Annual Effects (Month 8 onward, annualized):
Content Production Efficiency (total cost rises 1.25x for 5x the volume):
- Before: 240 items/year × 28K yen = 6.72M yen/year
- After: 1,200 items/year × 7K yen = 8.4M yen/year
- Efficiency per item: 75% improvement

Revenue Increase (New User Acquisition):
- New users from AI-generated content: 1,500/month
- Average monthly billing: 800 yen
- Annual revenue increase: 1,500 × 800 yen × 12 months = 14.4M yen/year

Total Annual Effects:
- Efficiency: 5x content volume at 1.25x cost
- Revenue increase: 14.4M yen/year

Investment:
- External resources (data engineer, AI consultant): 6M yen
- RAG system development: 4M yen
- Total initial investment: 10M yen
- Annual AI API costs: 1.8M yen

ROI:
- (14.4M - 1.8M) / 10M × 100 = 126%
- Payback period: 10M ÷ 12.6M = 0.79 years (about 9.5 months)
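The arithmetic behind these figures, written out as a small check using only numbers from the case:

```python
# Figures from the case, in millions of yen per year.
revenue_increase = 1_500 * 800 * 12 / 1e6   # new users × ARPU × months = 14.4
api_costs = 1.8
initial_investment = 6.0 + 4.0              # external resources + RAG development

net_annual_effect = revenue_increase - api_costs          # 12.6M yen/year
roi = net_annual_effect / initial_investment * 100        # 126%
payback_years = initial_investment / net_annual_effect    # ~0.79 years

print(f"net effect: {net_annual_effect:.1f}M yen, ROI: {roi:.0f}%, "
      f"payback: {payback_years * 12:.1f} months")        # ≈ 9.5 months
```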
That evening, I contemplated the essence of MECE.
NeuroPlay Inc. held the illusion of "solving all challenges at once." However, the raw list contained 8 unorganized challenges riddled with gaps and overlaps, which the MECE decomposition reorganized into 12 distinct items.
By organizing challenges into 4 major categories with MECE—Data, AI Technology, System Development, and Project Management—and prioritizing them, we concentrated on the top 5 measures. This enabled the summer 2026 release in 8 months.
What's important is balancing "Collectively Exhaustive (no gaps)" with "Mutually Exclusive (no overlaps)." Gaps cause important challenges to be overlooked, while overlaps generate wasteful work.
Annual effect of 12.6M yen, ROI of 126%, payback in 9.5 months. And content production volume increased 5-fold.
"Don't solve everything at once. Decompose challenges with MECE. By organizing comprehensively without gaps or overlaps and prioritizing, reproducible phased implementation emerges."
The next case will also depict the moment of systematically solving complex challenges with MECE.
"MECE—Mutually Exclusive and Collectively Exhaustive. Decompose challenges comprehensively without gaps or overlaps. By prioritizing and implementing in phases, true value creation is achieved"—From the Detective's Notes