ROI Case File No.436 'The Thirty Minutes That Never Leave the Meeting Room'
Chapter 1: Vanishing Words
"We have fourteen meetings a week. And all fourteen sets of minutes are written entirely by hand."
The head of corporate planning at Synergy Solutions displayed the weekly view of the company calendar. Monday through Friday, two to four meetings were color-coded across each day.
"We're an enterprise IT consulting firm with approximately 180 employees. Client review meetings, internal project status meetings, executive meetings, cross-departmental task forces—our meetings span a wide range. Attendance varies from as few as five to as many as twenty-five."
The planning director opened one meeting's minutes file. Eight pages in Word. Speaker names, discussion points, decisions, and action items were all organized.
"How much time does the assigned person spend creating these minutes?" I asked.
"Including note-taking during the meeting, it averages about twice the meeting duration. A one-hour meeting means about two hours of minutes preparation. With an average meeting length of about fifty minutes across our fourteen weekly meetings, roughly twenty-three hours per week are consumed by minutes creation—nearly three full business days."
"How many people share this workload?" Claude confirmed.
"Junior staff from each department take turns. About fifteen people are involved in creating minutes, each handling one or two sessions per week. The problem is that their primary job is consulting work. Time for client proposals and analysis is being taken away by meeting minutes."
"Converted to annual cost," Gemini calculated, "that's twenty-three hours per week, about fifty weeks per year, totaling 1,150 hours. At an average consultant hourly rate of 4,500 yen, approximately 5.2 million yen per year disappears into minutes creation."
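Gemini's estimate can be reproduced with a few lines of arithmetic. This is only a sanity check; every figure comes from the conversation above.

```python
# Sanity check of Gemini's annual cost estimate.
# All figures are taken from the case file itself.
HOURS_PER_WEEK = 23        # minutes-creation time across 14 weekly meetings
WEEKS_PER_YEAR = 50        # working weeks per year
HOURLY_RATE_YEN = 4_500    # average consultant hourly rate

annual_hours = HOURS_PER_WEEK * WEEKS_PER_YEAR       # 1,150 hours
annual_cost_yen = annual_hours * HOURLY_RATE_YEN     # 5,175,000 yen

print(f"{annual_hours} hours/year ≈ {annual_cost_yen / 1_000_000:.1f} million yen")
# → 1150 hours/year ≈ 5.2 million yen
```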
"But," the planning director said, lowering her voice, "the problem isn't just cost."
She opened another file—meeting minutes with noticeable gaps.
"The quality of minutes varies enormously. When a veteran writes them, the flow of discussion and decisions are clearly structured. But when a less experienced person writes them, it becomes a list of statements, and you can't tell what was decided. Last month, a documentation gap in meeting minutes caused an agreement with a client to go unshared internally, resulting in rework. The correction took approximately eighty hours."
"So you're considering an AI meeting minutes tool," I confirmed.
"Yes. Some departments have been piloting Google NotebookLM. But—"
The planning director showed NotebookLM's output. Proper nouns were inconsistent. "Director Tanaka" became "Mr. Tanaka" partway through, then simply "the director" by the end. Technical terms were inaccurate too: "SLA" came through as a phonetic spelling rather than as the acronym.
"I'm not sure a tool will solve this. But I can't even organize what the issues are and where to start."
This was a case that required a comprehensive, gap-free mapping of all challenges surrounding meeting minutes—before any tool selection could begin.
Chapter 2: Mutually Exclusive, Collectively Exhaustive
"When multiple challenges are tangled together, the first thing to do is classify."
Gemini drew a large frame on the whiteboard and began dividing it with lines.
"MECE," I explained, "stands for Mutually Exclusive, Collectively Exhaustive. In simple terms, it's a method for classifying challenges with no gaps and no overlaps. When a problem looks complex, often it's because the boundaries between issues are blurred, and you're discussing the same problem from different angles repeatedly. Apply MECE, and the number of truly independent challenges becomes clear."
"What does it look like when you apply MECE to the minutes problem?" the planning director asked.
"We cut along three axes," Claude answered. "Axis one—Process: where in the workflow of creating minutes do problems exist? Axis two—Quality: what problems exist in the content of the completed minutes? Axis three—Utilization: how are the finished minutes used afterward—or are they not used at all? These three are mutually exclusive, and together they exhaustively cover all challenges related to meeting minutes."
[Axis One: Process Challenges]
"Let's break down axis one—process challenges," I said, pointing to the first section.
"We'll divide the minutes creation process into three phases," Gemini explained. "Phase A: recording during the meeting. Phase B: post-meeting documentation. Phase C: distribution and approval of the completed minutes. What problems exist in each?"
The planning director answered. "In phase A, the note-taker is so focused on capturing content that they can't participate in the discussion. In phase B, turning notes into minutes takes twice the meeting time. In phase C, approval is delayed because senior managers are slow to review—sometimes minutes sit untouched for over a week."
"These three phases represent independent challenges," Claude organized. "Solving phase A doesn't eliminate the documentation cost of phase B. Streamlining phase B with AI doesn't fix the approval delay in phase C. Each requires a separate countermeasure."
[Axis Two: Quality Challenges]
"Axis two—quality challenges," I continued.
"Let's apply MECE to quality issues as well," Gemini prompted. "We classify quality problems into three categories. First—accuracy: is what was said recorded correctly? Second—completeness: are important decisions and action items captured without gaps? Third—consistency: are naming conventions and formatting unified?"
The planning director nodded. "Now I understand what felt wrong when we tried NotebookLM. The inconsistent proper nouns were a consistency issue. The misrecognized technical terms were an accuracy issue. And the missing decisions were a completeness issue. Because all three problems were mixed together, we ended up with a vague verdict of 'it just doesn't work.'"
"The countermeasures also split into three," I explained. "Accuracy can be improved through speech recognition engine precision and custom dictionary registration. Completeness can be ensured by setting up meeting agendas and templates in advance and instructing the AI to extract 'decisions' and 'action items.' Consistency can be unified by loading a company glossary and naming conventions into the AI."
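The consistency countermeasure can be illustrated with a toy post-processing step. The glossary entries and the `normalize` helper below are hypothetical, echoing the NotebookLM examples from Chapter 1; a real tool would typically apply its custom dictionary during recognition rather than afterward.

```python
import re

# A minimal sketch of glossary-based normalization: after transcription,
# map known variants back to canonical terms. The entries are illustrative.
GLOSSARY = {
    "Director Tanaka": ["Mr. Tanaka", "the director", "Tanaka-san"],
    "SLA": ["service level agreement", "S.L.A."],
}

def normalize(text: str) -> str:
    """Replace every known variant with its canonical glossary term."""
    for canonical, variants in GLOSSARY.items():
        # Longest variants first, so longer phrases win over substrings.
        for variant in sorted(variants, key=len, reverse=True):
            text = re.sub(re.escape(variant), canonical, text, flags=re.IGNORECASE)
    return text

print(normalize("Mr. Tanaka confirmed the service level agreement."))
# → Director Tanaka confirmed the SLA.
```

In practice the same glossary file would feed both the recognition engine's custom dictionary (accuracy) and a post-edit pass like this one (consistency).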
[Axis Three: Utilization Challenges]
"The third axis must not be forgotten," Claude pointed out. "How the completed minutes are utilized afterward—or whether they're utilized at all."
The planning director's expression changed. "To be honest—not many people go back and read the minutes."
"That's a serious problem," I said. "You're spending 5.2 million yen annually to create minutes that go unread. No matter how much you improve the process and quality of minutes creation, if they're not utilized, the return on investment is zero."
"Let's apply MECE to utilization challenges too," Gemini organized. "First—searchability: can people quickly find needed information from past minutes? Second—connectivity: do action items from minutes automatically feed into task management tools or the next meeting's agenda? Third—accumulation: as meetings continue over time, do minutes build up as an organizational record of decision-making?"
"This third axis," Claude emphasized, "becomes the most important criterion for selecting an AI minutes tool. You should choose not simply the tool with the highest transcription accuracy, but one equipped with search, task management integration, and knowledge accumulation capabilities."
Chapter 3: The Power of the Complete Picture
The planning director studied the three axes and their classification items organized on the whiteboard.
"A vague request to 'implement an AI minutes tool' has been broken down into nine independent challenges. Three process phases, three quality elements, three utilization conditions. Now I can determine where to start."
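The nine-challenge map can be sketched as a small data structure. The labels paraphrase the whiteboard, and the assertions are only a toy stand-in for the "no gaps, no overlaps" check, which in reality is a judgment call, not a mechanical test.

```python
# The MECE challenge map: 3 axes x 3 items = 9 independent challenges.
# Labels paraphrase the whiteboard from Chapter 2.
challenge_map = {
    "Process": ["A: in-meeting recording", "B: post-meeting documentation",
                "C: distribution and approval"],
    "Quality": ["accuracy", "completeness", "consistency"],
    "Utilization": ["searchability", "connectivity", "accumulation"],
}

all_items = [item for items in challenge_map.values() for item in items]
assert len(all_items) == 9                    # collectively exhaustive (by construction)
assert len(set(all_items)) == len(all_items)  # mutually exclusive: no item appears twice
```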
"The essence of MECE," I responded, "is that by dividing the problem correctly, the right priorities become visible. It's precisely because all nine challenges are visible that you can see 'we don't need to tackle them all at once.' Conversely, if you proceed without classifying, you try to solve multiple problems with a single measure and end up with everything half-done."
"In terms of priority," Claude proposed, "start with process phase B—reducing documentation workload. That's where the bulk of the twenty-three hours per week is spent. Automate transcription and summarization with an AI minutes tool, shifting the task from 'writing from scratch' to 'reviewing and editing AI output.' This alone should reduce documentation workload by 60–70%."
"Next," Gemini continued, "quality consistency—developing the company glossary and templates. This can proceed in parallel with the tool rollout. Third, utilization searchability—full-text search for minutes. Execute these three within the first three months."
"For the pilot," I added, "start with five recurring meetings that have ten or more participants, out of the fourteen weekly meetings. Over two months, measure documentation hours, quality scores, and minutes view counts. Use those results to decide on expanding to the remaining nine meetings."
"And," I reminded, "record improvement progress for each of the nine challenges. With the MECE-classified challenge map in hand, you can see at a glance which items are resolved, which are untouched, and where new issues have emerged."
The planning director stood and bowed deeply. "Thank you. Next week, we'll start by developing the company glossary and selecting the pilot meetings."
Chapter 4: When Words Become Assets
After she left, Gemini murmured, "MECE is simple as a thinking method, but actually dividing challenges without gaps or overlaps is surprisingly difficult."
"Indeed," I answered. "The difficulty of MECE lies in choosing the axes for classification. This time, we cut along 'process, quality, utilization'—but we could have cut along a different axis, such as 'internal meetings versus client meetings.' However, that axis wouldn't reveal the structure of the challenges. Whether you can choose the right axes is the practical skill of MECE. And the right axes don't always appear on the first try. Set axes as hypotheses, try classifying, check for gaps or overlaps, and change axes if needed. If you record this trial-and-error process itself, then the next time you face a different problem, your precision in choosing axes improves. That's reproducibility."
Outside the window, meeting room lights in office buildings were switching on and off, floor by floor.
Four months later, a report arrived from Synergy Solutions.
An AI minutes tool was introduced for the five pilot meetings. With 280 terms registered in the company dictionary and templates configured with three categories—"Decisions," "Action Items," and "Pending Items"—documentation workload dropped from twice the meeting time to 0.4 times. The twenty-three hours per week previously spent on minutes creation was reduced to approximately five hours.
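The reported numbers are internally consistent, as a quick check shows; the figures below are taken directly from the report.

```python
# Checking the reported reduction: documentation effort fell from 2.0x
# to 0.4x the meeting time, so weekly hours should scale by the same factor.
hours_before = 23.0                 # weekly hours before implementation
ratio_before, ratio_after = 2.0, 0.4

hours_after = hours_before * ratio_after / ratio_before   # 4.6 ≈ "approximately five"
weekly_saving = hours_before - hours_after                # 18.4 hours/week recovered

print(f"{hours_after:.1f} h/week remaining, {weekly_saving:.1f} h/week saved")
# → 4.6 h/week remaining, 18.4 h/week saved
```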
On the quality front, the average of three monthly gaps in decision documentation that occurred before implementation dropped to zero. Proper noun inconsistencies were automatically unified in 95% of cases through the glossary registration.
But the change the planning director highlighted as most significant was in the third axis—utilization. With full-text search capability, people could now instantly look up "When was that decision made?" in past minutes. As a result, monthly minutes view counts jumped from twelve before implementation to 187.
At the end of the report, the planning director wrote: "We've posted the MECE-classified map of nine challenges on the wall. In four months, we've addressed five of the nine. The remaining four—particularly task management tool integration and converting decision history into organizational knowledge—are next quarter's priorities. Because we can see the full picture, there's no anxiety. And the method of creating this challenge map itself is beginning to spread across the company as a thinking approach applicable to other process improvements."
The words that had vanished in meeting rooms were now being etched into organizational memory—as searchable assets.
"When challenges are tangled together, many people feel 'I don't know where to start.' What MECE provides is a classification technique for untangling that knot. Divide into independent challenges—without gaps, without overlaps. The moment you divide, priorities become visible, and 'what to do now' and 'what can wait' become clear. And if you record the classification axes, the next time you face a different problem, the same thinking pattern applies. MECE is not a tool for solving a single problem—it is a thinking pattern that makes problem-solving itself reproducible."