faster-whisper

A reimplementation of OpenAI's Whisper model using the CTranslate2 inference engine, up to 4x faster than openai/whisper for the same accuracy while using less memory.

How to install faster-whisper

pip install faster-whisper

faster-whisper use case

Fast voice-to-text transcription using Whisper (Python API)
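A minimal sketch of that Python API. The model size, device, and compute type are illustrative choices, and the helper function name is mine, not part of the library:

```python
def transcribe_file(path: str, model_size: str = "small") -> list[str]:
    """Transcribe an audio file and return its segment texts.

    int8 on CPU is one of several compute_type options; it trades a
    little accuracy for much lower memory use.
    """
    # Imported lazily so this sketch loads even before faster-whisper is installed.
    from faster_whisper import WhisperModel

    model = WhisperModel(model_size, device="cpu", compute_type="int8")
    segments, info = model.transcribe(path, beam_size=5)
    print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
    # segments is a generator; consuming it is what actually runs the decode.
    return [segment.text for segment in segments]
```

Note that `transcribe()` returns a generator, so decoding happens only as the segments are consumed.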

When to use faster-whisper

  • Batch transcription of audio files with GPU acceleration
  • When you need word-level timestamps for subtitling
  • Running Whisper in production with memory constraints

When not to use faster-whisper

  • When you need a simple CLI tool (use whisper-ctranslate2 instead)
  • Real-time transcription without a Python environment

faster-whisper features

  • GPU and CPU inference with fp16/int8/fp32 precision
  • Batched transcription support
  • Word-level timestamps
  • Voice Activity Detection (VAD) filtering with Silero VAD
  • Distil-Whisper model compatibility
  • Automatic model downloading from Hugging Face Hub
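Word-level timestamps and VAD filtering combine in a single `transcribe()` call. A sketch that prints SRT-style word timings; the `srt_timestamp` formatting helper is my own illustration, not part of faster-whisper:

```python
def srt_timestamp(seconds: float) -> str:
    # Helper (not part of faster-whisper): format seconds as HH:MM:SS,mmm for SRT subtitles.
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def print_word_timings(path: str) -> None:
    # Imported lazily so this sketch loads even before faster-whisper is installed.
    from faster_whisper import WhisperModel

    model = WhisperModel("small", device="cpu", compute_type="int8")
    # vad_filter drops non-speech using Silero VAD; word_timestamps enables per-word alignment.
    segments, _ = model.transcribe(path, word_timestamps=True, vad_filter=True)
    for segment in segments:
        for word in segment.words:
            print(f"{srt_timestamp(word.start)} --> {srt_timestamp(word.end)}  {word.word}")
```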
