NLP · LLM · Meeting Intelligence

Turn chaotic meetings into structured decisions.

Minutes of Meeting AI is a structured meeting intelligence system that extracts decisions, action items, and accountability from noisy, multi-speaker conversations so teams never lose what matters.

It focuses on structured extraction and constrained summarization, not generic, free-form meeting summaries.
Live transcript · LLM powered

PM: "We'll ship the API revamp this sprint."
ENG: "Let's block two days for load testing."
PM: "Riya owns QA sign-off by Friday."

Designed to surface decisions and action items first, optimizing for downstream usability over narrative summaries.

Why this exists

Online and hybrid meetings are messy: overlapping speakers, interruptions, and half-finished sentences. Minutes of Meeting AI focuses on what is actionable instead of producing long story-like summaries.

Structured over verbose

The system favors constrained, structured extraction (decisions, tasks, owners) rather than generic abstractive meeting summaries that bury signal in paragraphs.

Built for noisy reality

Prompts and post-processing are tuned for overlapping speakers, interruptions, and implicit decisions that never get stated as neat sentences.

Consistency over flair

The goal is repeatable, low-variance extraction of actionable items across many meetings, not a one-off perfect summary.

How it works

From raw multi-speaker audio or transcripts to a compact, structured view of your meeting in just a few steps.

  1. Upload audio or video

     The FastAPI backend streams your file to disk, enforces a laptop-safe size limit, and extracts mono 16 kHz MP3 audio from videos using ffmpeg.

  2. GPU transcription with Whisper

     A faster-whisper "small" model on CUDA produces a full transcript, then releases GPU memory immediately so your machine stays responsive.

  3. LLM extraction via Ollama

     A local Llama model, running through Ollama, turns the transcript into structured JSON minutes with summary, topics, decisions, and tasks.

  4. Smart handling for long meetings

     Long transcripts are chunked with overlap, facts are extracted per chunk, then consolidated into final minutes validated against a strict schema.
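Step 1's audio extraction can be sketched as below. The helper names are illustrative, and it assumes ffmpeg is installed and on the PATH; the `.mp3` extension lets ffmpeg pick the codec.

```python
import subprocess

def ffmpeg_audio_cmd(src: str, dst: str) -> list[str]:
    # Build an ffmpeg command: drop the video stream, downmix to mono,
    # and resample to 16 kHz, matching the pipeline's transcription input.
    return [
        "ffmpeg", "-y",   # -y: overwrite an existing output file
        "-i", src,
        "-vn",            # discard video
        "-ac", "1",       # mono
        "-ar", "16000",   # 16 kHz sample rate
        dst,
    ]

def extract_audio(src: str, dst: str) -> None:
    # Assumption: ffmpeg is available on PATH.
    subprocess.run(ffmpeg_audio_cmd(src, dst), check=True)
```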
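Step 2 might look like this minimal sketch, assuming the faster-whisper package and a CUDA-capable GPU; the `join_segments` helper is illustrative, not part of the project's API.

```python
def join_segments(pieces: list[str]) -> str:
    # Stitch per-segment text into one transcript string.
    return " ".join(p.strip() for p in pieces if p.strip())

def transcribe(path: str) -> str:
    # Assumption: faster-whisper is installed and a CUDA GPU is present.
    from faster_whisper import WhisperModel

    model = WhisperModel("small", device="cuda", compute_type="float16")
    segments, _info = model.transcribe(path)
    text = join_segments([seg.text for seg in segments])
    del model  # drop the model reference so GPU memory is freed promptly
    return text
```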
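Step 3 could be sketched as follows, assuming the `ollama` Python client, a running Ollama daemon, and a pulled Llama model; the prompt wording and model name are illustrative, not the project's actual prompt.

```python
import json

SCHEMA_HINT = (
    'Return only JSON with keys: "summary", "topics", "decisions", "tasks".'
)

def build_messages(transcript: str) -> list[dict]:
    # Constrain the model to the minutes schema instead of free-form prose.
    return [
        {"role": "system", "content": "You extract meeting minutes. " + SCHEMA_HINT},
        {"role": "user", "content": transcript},
    ]

def extract_minutes(transcript: str, model: str = "llama3") -> dict:
    # Assumption: the `ollama` client is installed and the daemon is running.
    import ollama
    resp = ollama.chat(model=model, messages=build_messages(transcript), format="json")
    return json.loads(resp["message"]["content"])
```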
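Step 4's chunk-with-overlap strategy can be shown as a self-contained sketch; the window sizes and the consolidation keys are illustrative. Overlap means a fact that straddles a boundary still appears whole in at least one chunk.

```python
def chunk_text(text: str, size: int = 4000, overlap: int = 400) -> list[str]:
    # Split a long transcript into overlapping windows.
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def consolidate(per_chunk: list[dict]) -> dict:
    # Merge per-chunk facts, de-duplicating while preserving order.
    merged = {"decisions": [], "tasks": []}
    for facts in per_chunk:
        for key in merged:
            for item in facts.get(key, []):
                if item not in merged[key]:
                    merged[key].append(item)
    return merged
```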

Step 01 · Capture & diarize

Connect your existing transcription pipeline, or feed meeting transcripts directly to the project's components, to get speaker-tagged text as input.

// pseudo: hypothetical helpers, plug in your own transcription/diarization
meeting = load_transcript("sprint-review.json")
segments = diarize(meeting)  // speaker-tagged turns

What it extracts

Instead of a wall of text, Minutes of Meeting AI emits a structured snapshot of the meeting's intent and follow-ups.

Decisions

Clear records of what was decided, including who was involved, so you can answer “when did we agree to this?” instantly.

Action items

Concrete tasks with owners and, when inferable, due dates, so meetings directly translate into execution.

Accountability

Speaker-aware extraction ties commitments back to people and roles to reduce ambiguity and follow-up churn.

{
  "decisions": [
    "Ship API revamp in Sprint 14."
  ],
  "action_items": [
    "Riya to finalize QA plan by Friday.",
    "Arjun to add API load tests to CI."
  ],
  "owners": ["Riya", "Arjun"]
}
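Before export, output of this shape can be checked against a strict schema. A minimal stdlib validator might look like the following; the field list mirrors the example above and is illustrative, not the project's actual schema.

```python
def validate_minutes(data: dict) -> dict:
    # Reject minutes that don't match the expected shape before export.
    required = {"decisions": list, "action_items": list, "owners": list}
    for key, typ in required.items():
        if not isinstance(data.get(key), typ):
            raise ValueError(f"missing or malformed field: {key}")
        if not all(isinstance(item, str) for item in data[key]):
            raise ValueError(f"{key} must contain only strings")
    return data
```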

Under the hood

The project combines modern transcription and LLM techniques with carefully designed prompts and post-processing for more stable, structured outputs.

Research focus

  • Structured extraction & constrained summarization as alternatives to generic abstractive summaries.
  • Robust behavior under overlapping speakers and noisy, real-world transcripts.
  • Optimized outputs for downstream tools and workflows rather than narrative completeness.

Implementation themes

  • Modular pipeline for capture → extract → clean → export.
  • LLM prompting focused on decisions, tasks, and owners as first-class objects.
  • Extensible output schema that can back a UI, email template, or integrations.
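The capture → extract → clean → export pipeline could be wired together as in this sketch, with each stage injected as a plain callable so any one of them can be swapped out; all names here are illustrative.

```python
from typing import Callable

def run_pipeline(
    source: str,
    capture: Callable[[str], str],   # e.g. transcription
    extract: Callable[[str], dict],  # e.g. LLM minutes extraction
    clean: Callable[[dict], dict],   # e.g. schema validation / dedup
    export: Callable[[dict], str],   # e.g. UI payload or email body
) -> str:
    # Run the four stages in order, passing each stage's output forward.
    transcript = capture(source)
    minutes = extract(transcript)
    return export(clean(minutes))
```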

Ready to make your meetings actionable?

Explore the source, adapt the pipeline to your stack, or plug it into your existing transcription system.