Class AudioTranscriptionRequest
public sealed class AudioTranscriptionRequest : IDisposable
- Inheritance
  object → AudioTranscriptionRequest
- Implements
  IDisposable
Constructors
AudioTranscriptionRequest(Stream, string, string, ChunkingStrategy, string[], string, string, AudioResponseFormat, float?, TimestampGranularity)
Constructor.
public AudioTranscriptionRequest(Stream audio, string audioName, string model = null, ChunkingStrategy chunkingStrategy = null, string[] include = null, string language = null, string prompt = null, AudioResponseFormat responseFormat = AudioResponseFormat.Json, float? temperature = null, TimestampGranularity timestampGranularity = TimestampGranularity.None)
Parameters
audio
Stream
The audio stream to transcribe.
audioName
string
The name of the audio file to transcribe.
model
string
ID of the model to use. Only whisper-1 is currently available.
chunkingStrategy
ChunkingStrategy
Controls how the audio is cut into chunks. When set to "auto", the server first normalizes loudness and then uses voice activity detection (VAD) to choose boundaries. A server_vad object can be provided to tweak VAD detection parameters manually. If unset, the audio is transcribed as a single block.
include
string[]
Additional information to include in the transcription response. logprobs will return the log probabilities of the tokens in the response to understand the model's confidence in the transcription. logprobs only works with response_format set to json and only with the models gpt-4o-transcribe and gpt-4o-mini-transcribe.
language
string
Optional. The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency.
Currently supported languages: Afrikaans, Arabic, Armenian, Azerbaijani, Belarusian, Bosnian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Kazakh, Korean, Latvian, Lithuanian, Macedonian, Malay, Marathi, Maori, Nepali, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Tagalog, Tamil, Thai, Turkish, Ukrainian, Urdu, Vietnamese, and Welsh.
prompt
string
Optional. Text to guide the model's style or continue a previous audio segment. The prompt should be in English.
responseFormat
AudioResponseFormat
Optional. The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt. Defaults to json.
temperature
float?
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit. Defaults to 0.
timestampGranularity
TimestampGranularity
The timestamp granularities to populate for this transcription. response_format must be set to verbose_json to use timestamp granularities. Either or both of these options are supported: Word or Segment.
Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.
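Examples
A minimal sketch of the Stream-based overload; the file name, model choice, and parameter values shown here are illustrative assumptions, and the namespace import for this type is assumed to be in place:
using System.IO;

// Open a local audio file as a stream and build the request (hypothetical file name).
using var audioStream = File.OpenRead("meeting.mp3");
using var request = new AudioTranscriptionRequest(
    audio: audioStream,
    audioName: "meeting.mp3",
    model: "whisper-1",
    language: "en",                            // ISO-639-1 code improves accuracy and latency
    responseFormat: AudioResponseFormat.Json,
    temperature: 0.2f);                        // lower values give more focused, deterministic output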
AudioTranscriptionRequest(Stream, string, string, string, AudioResponseFormat, float?, string, TimestampGranularity)
[Obsolete("Use new .ctr overload with chunkingStrategy and include")]
public AudioTranscriptionRequest(Stream audio, string audioName, string model, string prompt, AudioResponseFormat responseFormat, float? temperature, string language, TimestampGranularity timestampGranularity)
Parameters
audio
Stream
audioName
string
model
string
prompt
string
responseFormat
AudioResponseFormat
temperature
float?
language
string
timestampGranularity
TimestampGranularity
AudioTranscriptionRequest(string, string, ChunkingStrategy, string[], string, string, AudioResponseFormat, float?, TimestampGranularity)
Constructor.
public AudioTranscriptionRequest(string audioPath, string model = null, ChunkingStrategy chunkingStrategy = null, string[] include = null, string language = null, string prompt = null, AudioResponseFormat responseFormat = AudioResponseFormat.Json, float? temperature = null, TimestampGranularity timestampGranularity = TimestampGranularity.None)
Parameters
audioPath
string
The path to the audio file to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
model
string
ID of the model to use.
chunkingStrategy
ChunkingStrategy
Controls how the audio is cut into chunks. When set to "auto", the server first normalizes loudness and then uses voice activity detection (VAD) to choose boundaries. A server_vad object can be provided to tweak VAD detection parameters manually. If unset, the audio is transcribed as a single block.
include
string[]
Additional information to include in the transcription response. logprobs will return the log probabilities of the tokens in the response to understand the model's confidence in the transcription. logprobs only works with response_format set to json and only with the models gpt-4o-transcribe and gpt-4o-mini-transcribe.
language
string
Optional. The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency.
Currently supported languages: Afrikaans, Arabic, Armenian, Azerbaijani, Belarusian, Bosnian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Kazakh, Korean, Latvian, Lithuanian, Macedonian, Malay, Marathi, Maori, Nepali, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Tagalog, Tamil, Thai, Turkish, Ukrainian, Urdu, Vietnamese, and Welsh.
prompt
string
Optional. Text to guide the model's style or continue a previous audio segment. The prompt should be in English.
responseFormat
AudioResponseFormat
Optional. The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt. Defaults to json.
temperature
float?
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit. Defaults to 0.
timestampGranularity
TimestampGranularity
The timestamp granularities to populate for this transcription. response_format must be set to verbose_json to use timestamp granularities. Either or both of these options are supported: Word or Segment.
Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.
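Examples
A minimal sketch of the path-based overload; the file path is hypothetical, and the model name and include value follow the include description above (logprobs requires the json response format and one of the gpt-4o transcribe models):
// Build the request from a file path and ask for token log probabilities (hypothetical path).
using var request = new AudioTranscriptionRequest(
    audioPath: "/data/interview.wav",
    model: "gpt-4o-mini-transcribe",
    include: new[] { "logprobs" },             // only valid with responseFormat json
    responseFormat: AudioResponseFormat.Json,
    prompt: "Transcript of a product interview."); // prompts should be in English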
AudioTranscriptionRequest(string, string, string, AudioResponseFormat, float?, string, TimestampGranularity)
[Obsolete("Use new .ctr overload with chunkingStrategy and include")]
public AudioTranscriptionRequest(string audioPath, string model, string prompt, AudioResponseFormat responseFormat, float? temperature, string language, TimestampGranularity timestampGranularity)
Parameters
audioPath
string
model
string
prompt
string
responseFormat
AudioResponseFormat
temperature
float?
language
string
timestampGranularity
TimestampGranularity
Properties
Audio
The audio file to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
public Stream Audio { get; }
Property Value
- Stream
AudioName
The name of the audio file to transcribe.
public string AudioName { get; }
Property Value
- string
ChunkingStrategy
Controls how the audio is cut into chunks. When set to "auto", the server first normalizes loudness and then uses voice activity detection (VAD) to choose boundaries. A server_vad object can be provided to tweak VAD detection parameters manually. If unset, the audio is transcribed as a single block.
public ChunkingStrategy ChunkingStrategy { get; }
Property Value
- ChunkingStrategy
Include
Additional information to include in the transcription response. logprobs will return the log probabilities of the tokens in the response to understand the model's confidence in the transcription. logprobs only works with response_format set to json and only with the models gpt-4o-transcribe and gpt-4o-mini-transcribe.
public string[] Include { get; }
Property Value
- string[]
Language
Optional. The language of the input audio.
Supplying the input language in ISO-639-1 format will improve accuracy and latency.
Currently supported languages: Afrikaans, Arabic, Armenian, Azerbaijani, Belarusian, Bosnian, Bulgarian, Catalan,
Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hebrew,
Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Kazakh, Korean, Latvian, Lithuanian,
Macedonian, Malay, Marathi, Maori, Nepali, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian,
Slovak, Slovenian, Spanish, Swahili, Swedish, Tagalog, Tamil, Thai, Turkish, Ukrainian, Urdu, Vietnamese, and Welsh.
public string Language { get; }
Property Value
- string
Model
ID of the model to use. Only whisper-1 is currently available.
public string Model { get; }
Property Value
- string
Prompt
Optional. Text to guide the model's style or continue a previous audio segment.
The prompt should be in English.
public string Prompt { get; }
Property Value
- string
ResponseFormat
Optional. The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
Defaults to json.
public AudioResponseFormat ResponseFormat { get; }
Property Value
- AudioResponseFormat
Temperature
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random,
while lower values like 0.2 will make it more focused and deterministic. If set to 0,
the model will use log probability to automatically increase the temperature until certain thresholds are hit.
Defaults to 0.
public float? Temperature { get; }
Property Value
- float?
TimestampGranularities
The timestamp granularities to populate for this transcription.
response_format must be set to verbose_json to use timestamp granularities.
Either or both of these options are supported: Word or Segment.
Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.
public TimestampGranularity TimestampGranularities { get; }
Property Value
- TimestampGranularity
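Examples
A sketch of requesting word-level timestamps; the file path is hypothetical, and the verbose_json member of AudioResponseFormat is assumed to be named Verbose_Json here (check the enum for the exact name):
// Word-level timestamps require the verbose_json response format.
using var request = new AudioTranscriptionRequest(
    audioPath: "/data/lecture.m4a",
    model: "whisper-1",
    responseFormat: AudioResponseFormat.Verbose_Json, // member name assumed
    timestampGranularity: TimestampGranularity.Word); // word timestamps add latency; Segment does not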
Methods
Dispose()
Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
public void Dispose()
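Examples
Because the request wraps an open audio Stream, dispose it once the transcription call completes; a minimal sketch using a using block (the file name is hypothetical):
using (var request = new AudioTranscriptionRequest("clip.ogg"))
{
    // pass the request to your transcription call here
} // Dispose() runs here, freeing the resources the request holds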
~AudioTranscriptionRequest()
protected ~AudioTranscriptionRequest()