Class: OpenAI::Models::Realtime::InputAudioBufferSpeechStartedEvent

Inherits:
Internal::Type::BaseModel
Defined in:
lib/openai/models/realtime/input_audio_buffer_speech_started_event.rb

Instance Attribute Summary

Instance Method Summary

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(audio_start_ms:, event_id:, item_id:, type: :"input_audio_buffer.speech_started") ⇒ Object

Some parameter documentation has been truncated; see OpenAI::Models::Realtime::InputAudioBufferSpeechStartedEvent for more details.

Sent by the server when in `server_vad` mode to indicate that speech has been detected in the audio buffer. This can happen any time audio is added to the buffer (unless speech is already detected). The client may want to use this event to interrupt audio playback or provide visual feedback to the user.

The client should expect to receive an `input_audio_buffer.speech_stopped` event when speech stops. The `item_id` property is the ID of the user message item that will be created when speech stops and will also be included in the `input_audio_buffer.speech_stopped` event (unless the client manually commits the audio buffer during VAD activation).

Parameters:

  • audio_start_ms (Integer)

    Milliseconds from the start of all audio written to the buffer during the session when speech was first detected.

  • event_id (String)

    The unique ID of the server event.

  • item_id (String)

    The ID of the user message item that will be created when speech stops.

  • type (Symbol, :"input_audio_buffer.speech_started") (defaults to: :"input_audio_buffer.speech_started")

    The event type, must be `input_audio_buffer.speech_started`.



# File 'lib/openai/models/realtime/input_audio_buffer_speech_started_event.rb', line 34
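
A minimal sketch, not taken from the gem's documentation, of how a client might react to this event. The `stop_assistant_playback` helper is hypothetical; only the model class, its keyword arguments, and its attribute readers come from this page.

require "openai"

# Hypothetical playback hook; a real client would cancel any in-progress
# assistant audio output here.
def stop_assistant_playback
  puts "interrupting assistant audio playback"
end

# Construct the model directly (normally it arrives as a parsed server event).
event = OpenAI::Models::Realtime::InputAudioBufferSpeechStartedEvent.new(
  audio_start_ms: 1_250,
  event_id: "event_123",
  item_id: "item_456"
)

case event.type
when :"input_audio_buffer.speech_started"
  stop_assistant_playback
  # Remember which user item will be created once speech stops, so it can be
  # correlated with the later `input_audio_buffer.speech_stopped` event.
  pending_item_id = event.item_id
end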

Instance Attribute Details

#audio_start_ms ⇒ Integer

Milliseconds from the start of all audio written to the buffer during the session when speech was first detected. This will correspond to the beginning of audio sent to the model, and thus includes the `prefix_padding_ms` configured in the Session.

Returns:

  • (Integer)


# File 'lib/openai/models/realtime/input_audio_buffer_speech_started_event.rb', line 14

required :audio_start_ms, Integer
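
A small sketch of how `audio_start_ms` might be mapped onto locally buffered audio. The 24 kHz, 16-bit mono PCM format is an assumption about the session's input audio configuration, not something stated on this page.

SAMPLE_RATE_HZ   = 24_000 # assumed input format: 24 kHz PCM16 mono
BYTES_PER_SAMPLE = 2

def speech_start_byte_offset(audio_start_ms)
  samples = (audio_start_ms * SAMPLE_RATE_HZ) / 1_000
  samples * BYTES_PER_SAMPLE
end

speech_start_byte_offset(1_250) # => 60_000 bytes into the local buffer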

#event_id ⇒ String

The unique ID of the server event.

Returns:

  • (String)


# File 'lib/openai/models/realtime/input_audio_buffer_speech_started_event.rb', line 20

required :event_id, String

#item_id ⇒ String

The ID of the user message item that will be created when speech stops.

Returns:

  • (String)


# File 'lib/openai/models/realtime/input_audio_buffer_speech_started_event.rb', line 26

required :item_id, String

#type ⇒ Symbol, :"input_audio_buffer.speech_started"

The event type, must be `input_audio_buffer.speech_started`.

Returns:

  • (Symbol, :"input_audio_buffer.speech_started")


# File 'lib/openai/models/realtime/input_audio_buffer_speech_started_event.rb', line 32

required :type, const: :"input_audio_buffer.speech_started"