Head-to-head comparison based on real production benchmarks with Gulf Arabic callers.
Mistral's speech model: completely non-functional for Arabic in our testing.
ElevenLabs' realtime STT offering: slow and inaccurate for Arabic.
Produced zero transcriptions for Arabic audio, tested both with and without an explicit language parameter.
Transcription quality was judged unusable in production testing. Not viable for Arabic.
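The benchmark tested Voxtral both with language auto-detection and with Arabic pinned explicitly. A minimal sketch of the two request configurations is below; the endpoint shape, model identifier, and `language` field name are illustrative assumptions based on typical STT REST APIs, not confirmed parameters of either provider.

```python
def build_transcription_request(audio_path, language=None):
    """Assemble form fields for a hypothetical POST /audio/transcriptions call."""
    fields = {
        "model": "voxtral-mini-latest",  # assumed model identifier
        "file": audio_path,              # would be a multipart file upload in practice
    }
    if language is not None:
        fields["language"] = language    # e.g. "ar" to pin Arabic explicitly
    return fields

# The benchmark ran both variants against the same Gulf Arabic audio:
auto_detect = build_transcription_request("caller.wav")
explicit_ar = build_transcription_request("caller.wav", language="ar")
```

In the testing reported here, both variants returned zero transcriptions.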
| Feature | Mistral Voxtral Mini | ElevenLabs Scribe v2 |
|---|---|---|
| Multilingual speech recognition (claimed) | ✓ | ✗ |
| Audio understanding | ✓ | ✗ |
| Real-time streaming transcription | ✗ | ✓ |
| Multiple language support | ✗ | ✓ |
| LiveKit inference integration | ✗ | ✓ |

| Capability | Mistral Voxtral Mini | ElevenLabs Scribe v2 |
|---|---|---|
| Streaming support | ✗ | ✓ |
| LiveKit plugin | ✗ | ✓ |
| Self-hostable | ✗ | ✗ |
| API style | REST | WebSocket streaming |
| SDKs | Python, Node.js | Python, Node.js |
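The API-style row above is the practical difference for voice agents: a REST API transcribes a finished clip in one round trip, while a WebSocket stream returns partial transcripts while the caller is still speaking. The sketch below contrasts the two call shapes; the URLs, message format, and audio framing are illustrative assumptions, not documented contracts of either provider.

```python
import json

def rest_transcribe_request(audio_bytes):
    # REST: the whole file goes up in one request and the transcript comes
    # back in one response, so latency is bounded below by the clip length.
    return {
        "method": "POST",
        "url": "https://api.example.com/v1/audio/transcriptions",  # placeholder URL
        "files": {"file": audio_bytes},
    }

def websocket_stream_messages(audio_bytes):
    # WebSocket streaming: audio is sent in small chunks as it is captured,
    # and partial transcripts can arrive between chunks.
    chunk_size = 3200  # ~100 ms of 16 kHz 16-bit mono audio (assumed format)
    for i in range(0, len(audio_bytes), chunk_size):
        yield json.dumps({"chunk_index": i // chunk_size})  # index stands in for base64 audio

batch_request = rest_transcribe_request(b"\x00" * 16000)
stream_msgs = list(websocket_stream_messages(b"\x00" * 16000))
```

This structural difference is why only the streaming provider can plug into a realtime pipeline such as LiveKit, regardless of transcription quality.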
Does not work for Arabic at all. Zero transcriptions were produced in testing, despite the model's claimed multilingual support.
Poor quality and poor latency for Arabic. Not recommended for any Arabic STT use case.
Mistral Voxtral Mini earns a quality rating of 1/5 (non-functional): it produced zero transcriptions for Arabic audio, tested both with and without an explicit language parameter.
Neither provider is a viable option for Arabic. Mistral Voxtral Mini: does not work for Arabic at all, producing zero transcriptions in testing despite claimed multilingual support. ElevenLabs Scribe v2: poor quality and poor latency for Arabic, and not recommended for any Arabic STT use case.
Mistral Voxtral Mini is priced per request (usage-based, per Mistral API pricing). ElevenLabs Scribe v2 starts at $5 per month (includes STT credits).