whisper-ctranslate2
A command-line client compatible with OpenAI Whisper CLI, powered by faster-whisper for up to 4x faster transcription.
How to install whisper-ctranslate2
pip install whisper-ctranslate2
whisper-ctranslate2 use case
Fast CLI-based voice-to-text transcription with Whisper
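A minimal invocation might look like the following. The file name `audio.mp3` and the output directory are illustrative assumptions; the flags mirror the OpenAI Whisper CLI that this tool is compatible with.

```shell
# Transcribe a single file with the small model and write an SRT
# subtitle file to ./out (audio.mp3 is a placeholder file name)
whisper-ctranslate2 audio.mp3 --model small --output_format srt --output_dir out
```

Because the CLI is argument-compatible with OpenAI Whisper, existing `whisper ...` commands can generally be reused by swapping the binary name.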
When to use whisper-ctranslate2
- Transcribing audio files from the command line
- Batch transcription with high performance
- When you need speaker diarization in transcripts
- Running Whisper in CI/CD pipelines
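For the batch and CI/CD use cases above, a sketch of a non-interactive run over a directory of recordings; the paths and model choice are assumptions for illustration:

```shell
# Batch-transcribe every .wav in recordings/ to JSON transcripts.
# "recordings/" and "transcripts/" are placeholder paths.
for f in recordings/*.wav; do
    whisper-ctranslate2 "$f" --model medium --output_format json --output_dir transcripts
done
```

In a CI pipeline, the exit status of the command can be used to fail the job if transcription errors occur.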
When not to use whisper-ctranslate2
- Real-time streaming transcription (the live mode is experimental)
- When you need Python API access (use faster-whisper instead)
whisper-ctranslate2 features
- Up to 4x faster than OpenAI Whisper (up to 16x with batched inference)
- OpenAI Whisper CLI compatibility
- Docker image with pre-loaded models
- Speaker diarization via pyannote.audio
- Voice Activity Detection (VAD) filtering
- Load custom fine-tuned Whisper models
- Live microphone transcription
- JSON, VTT, SRT output formats
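Two of the features above in a sketched invocation. The flag names below follow the project's documentation but may differ by version, so verify them with `whisper-ctranslate2 --help`; the file name and token are placeholders.

```shell
# Skip silent segments with VAD filtering before transcribing
whisper-ctranslate2 interview.mp3 --model small --vad_filter True

# Speaker diarization via pyannote.audio requires a Hugging Face
# access token (YOUR_HF_TOKEN is a placeholder)
whisper-ctranslate2 interview.mp3 --model small --hf_token YOUR_HF_TOKEN
```

Diarization downloads the pyannote.audio models on first use, so the initial run needs network access.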
Similar tools to whisper-ctranslate2
faster-whisper
A reimplementation of OpenAI's Whisper model using CTranslate2, providing up to 4x faster transcription with lower memory usage.
whisper-cpp
High-performance C/C++ port of OpenAI's Whisper speech recognition model. Runs locally, supports many languages, and outputs in multiple formats.