Class: OpenAI::Models::Realtime::AudioTranscription
- Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Realtime::AudioTranscription
- Defined in: lib/openai/models/realtime/audio_transcription.rb
Defined Under Namespace
Modules: Model
Instance Attribute Summary
- #language ⇒ String?
  The language of the input audio.
- #model ⇒ Symbol, ...
  The model to use for transcription.
- #prompt ⇒ String?
  An optional text to guide the model’s style or continue a previous audio segment.
Instance Method Summary
- #initialize(language: nil, model: nil, prompt: nil) ⇒ Object
  constructor
  Some parameter documentation has been truncated; see AudioTranscription for more details.
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(language: nil, model: nil, prompt: nil) ⇒ Object
Some parameter documentation has been truncated; see OpenAI::Models::Realtime::AudioTranscription for more details.
# File 'lib/openai/models/realtime/audio_transcription.rb', line 33
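As a usage sketch (assuming, as the keyword signature above suggests, that the inherited BaseModel constructor accepts keyword arguments for the declared optional fields):

require "openai"

# All three fields are optional; omitted ones simply stay unset.
transcription = OpenAI::Models::Realtime::AudioTranscription.new(
  language: "en",                              # ISO-639-1 code
  model: :"gpt-4o-transcribe",                 # assumed symbol form of a Model enum value
  prompt: "expect words related to technology" # free-text prompt for gpt-4o-transcribe models
)

transcription.language # => "en"
transcription.to_h     # hash of set fields, inherited from Internal::Type::BaseModel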
Instance Attribute Details
#language ⇒ String?
The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) (e.g. `en`) format will improve accuracy and latency.
# File 'lib/openai/models/realtime/audio_transcription.rb', line 13

optional :language, String
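For example (keyword construction assumed, as above; omitting language leaves detection to the service, per the description):

OpenAI::Models::Realtime::AudioTranscription.new(language: "fr").language # => "fr"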
#model ⇒ Symbol, ...
The model to use for transcription. Current options are `whisper-1`, `gpt-4o-mini-transcribe`, `gpt-4o-transcribe`, and `gpt-4o-transcribe-diarize`. Use `gpt-4o-transcribe-diarize` when you need diarization with speaker labels.
# File 'lib/openai/models/realtime/audio_transcription.rb', line 21

optional :model, enum: -> { OpenAI::Realtime::AudioTranscription::Model }
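The enum referenced here is the Model module listed under "Defined Under Namespace" above. A sketch, assuming enum members are passed as symbols:

# Pick the diarizing model when speaker labels are needed.
config = OpenAI::Models::Realtime::AudioTranscription.new(model: :"gpt-4o-transcribe-diarize")
config.model # => :"gpt-4o-transcribe-diarize"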
#prompt ⇒ String?
An optional text to guide the model’s style or continue a previous audio segment. For `whisper-1`, the [prompt is a list of keywords](https://platform.openai.com/docs/guides/speech-to-text#prompting). For `gpt-4o-transcribe` models (excluding `gpt-4o-transcribe-diarize`), the prompt is a free text string, for example “expect words related to technology”.
# File 'lib/openai/models/realtime/audio_transcription.rb', line 31

optional :prompt, String
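A sketch of the two prompting styles described above (construction via keyword arguments assumed):

# whisper-1: the prompt acts as a keyword list.
OpenAI::Models::Realtime::AudioTranscription.new(
  model: :"whisper-1",
  prompt: "OpenAI, GPT-4o, diarization"
)

# gpt-4o-transcribe (non-diarize): the prompt is free text.
OpenAI::Models::Realtime::AudioTranscription.new(
  model: :"gpt-4o-transcribe",
  prompt: "expect words related to technology"
)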