A GPT-4o-based speech-to-text model for transcribing audio. It offers a lower word error rate, better language recognition, and higher overall accuracy than the original Whisper models; use it when you need more precise transcripts.
Code Example
Example code for calling this model through our API with Python (OpenAI SDK) or cURL. Replace the placeholders with your own API key and the model ID.
The snippet below sends a basic transcription request. Make sure your API key has permission to use this model; see our documentation for more details.
from openai import OpenAI

# Point the client at the API and authenticate with your key.
client = OpenAI(base_url="https://api.naga.ac/v1", api_key="YOUR_API_KEY")

# Open the audio file in binary mode and request a transcription.
with open("audio.mp3", "rb") as f:
    transcription = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",
        file=f,
        # Optional: guide the model with a prompt or pin the input language,
        # e.g. prompt="Technical podcast about machine learning", language="en"
    )

# The transcribed text is returned on the .text attribute.
print(transcription.text)
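The same request can also be sent with cURL. This is a minimal sketch that assumes the API exposes the standard OpenAI-compatible /v1/audio/transcriptions endpoint with multipart form fields; replace YOUR_API_KEY and the file path with your own values.

# Assumes an OpenAI-compatible multipart endpoint at /v1/audio/transcriptions.
curl https://api.naga.ac/v1/audio/transcriptions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F model="gpt-4o-transcribe" \
  -F file="@audio.mp3"

The response is JSON with the transcript in the "text" field, matching what the Python SDK exposes as transcription.text.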