Whisper Large v3 Turbo
Transcriptions
whisper-large-v3-turbo
Whisper large-v3-turbo is a fine-tuned version of a pruned Whisper large-v3. In other words, it's the same model, except that the number of decoding layers has been reduced from 32 to 4. As a result, the model is significantly faster, at the expense of a minor quality degradation.
Pricing
Pay-as-you-go rates for this model. More details can be found here.
Transcription (per minute of audio): $0.0001
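As a rough illustration of this rate, the sketch below estimates the cost of a transcription job; the estimate_cost helper is hypothetical and not part of the API.

# Rough cost estimate at the pay-as-you-go rate above ($0.0001 per minute of audio).
# estimate_cost is an illustrative helper, not an API function.
RATE_PER_MINUTE = 0.0001  # USD

def estimate_cost(audio_minutes: float) -> float:
    return audio_minutes * RATE_PER_MINUTE

print(f"${estimate_cost(90):.4f}")  # a 90-minute recording costs about $0.0090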
Capabilities
Input Modalities: Audio
Output Modalities: Text
Supported Parameters
Available parameters for API requests
Language
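If the spoken language is known in advance, it can be passed explicitly rather than left to auto-detection. A minimal sketch using the same OpenAI SDK setup as the code example below, assuming the endpoint accepts ISO-639-1 codes as in the standard OpenAI transcription API:

from openai import OpenAI

client = OpenAI(base_url="https://api.naga.ac/v1", api_key="YOUR_API_KEY")

with open("audio.mp3", "rb") as f:
    transcription = client.audio.transcriptions.create(
        model="whisper-large-v3-turbo",
        file=f,
        language="en",  # assumption: ISO-639-1 code, as in the OpenAI transcription API
    )

print(transcription.text)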
Usage Analytics
Token usage across the last 30 active days
Uptime
Reliability over the last 7 days
Code Example
Example code for using this model through our API with Python (OpenAI SDK) or cURL. Replace placeholders with your API key and model ID.
Basic request example. Make sure your API key has the required permissions. For more details, see our documentation.
from openai import OpenAI

# Point the OpenAI SDK at the API endpoint and authenticate with your key.
client = OpenAI(base_url="https://api.naga.ac/v1", api_key="YOUR_API_KEY")

# Open the audio file in binary mode and request a transcription.
with open("audio.mp3", "rb") as f:
    transcription = client.audio.transcriptions.create(
        model="whisper-large-v3-turbo",
        file=f,
        # Optional: prompt="..." to guide spelling and style,
        # language="en" to hint the spoken language (see Supported Parameters).
    )

# The response object exposes the transcript as plain text.
print(transcription.text)