Class: OpenAI::Models::Realtime::RealtimeConversationItemAssistantMessage::Content
Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Realtime::RealtimeConversationItemAssistantMessage::Content

Defined in: lib/openai/models/realtime/realtime_conversation_item_assistant_message.rb
Defined Under Namespace
Modules: Type
Instance Attribute Summary
- #audio ⇒ String?
  Base64-encoded audio bytes; these will be parsed as the format specified in the session output audio type configuration.
- #text ⇒ String?
  The text content.
- #transcript ⇒ String?
  The transcript of the audio content; this will always be present if the output type is `audio`.
- #type ⇒ Symbol, ...
  The content type, `output_text` or `output_audio` depending on the session `output_modalities` configuration.
Instance Method Summary
- #initialize(audio: nil, text: nil, transcript: nil, type: nil) ⇒ Object
  constructor
  Some parameter documentation has been truncated; see Content for more details.
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(audio: nil, text: nil, transcript: nil, type: nil) ⇒ Object
Some parameter documentation has been truncated; see OpenAI::Models::Realtime::RealtimeConversationItemAssistantMessage::Content for more details.
# File 'lib/openai/models/realtime/realtime_conversation_item_assistant_message.rb', line 65

class Content < OpenAI::Internal::Type::BaseModel
  # @!attribute audio
  #   Base64-encoded audio bytes, these will be parsed as the format specified in the
  #   session output audio type configuration. This defaults to PCM 16-bit 24kHz mono
  #   if not specified.
  #
  #   @return [String, nil]
  optional :audio, String

  # @!attribute text
  #   The text content.
  #
  #   @return [String, nil]
  optional :text, String

  # @!attribute transcript
  #   The transcript of the audio content, this will always be present if the output
  #   type is `audio`.
  #
  #   @return [String, nil]
  optional :transcript, String

  # @!attribute type
  #   The content type, `output_text` or `output_audio` depending on the session
  #   `output_modalities` configuration.
  #
  #   @return [Symbol, OpenAI::Models::Realtime::RealtimeConversationItemAssistantMessage::Content::Type, nil]
  optional :type, enum: -> { OpenAI::Realtime::RealtimeConversationItemAssistantMessage::Content::Type }

  # @!method initialize(audio: nil, text: nil, transcript: nil, type: nil)
  #   Some parameter documentations has been truncated, see
  #   {OpenAI::Models::Realtime::RealtimeConversationItemAssistantMessage::Content}
  #   for more details.
  #
  #   @param audio [String] Base64-encoded audio bytes, these will be parsed as the format specified in the
  #
  #   @param text [String] The text content.
  #
  #   @param transcript [String] The transcript of the audio content, this will always be present if the output t
  #
  #   @param type [Symbol, OpenAI::Models::Realtime::RealtimeConversationItemAssistantMessage::Content::Type] The content type, `output_text` or `output_audio` depending on the session `outp

  # The content type, `output_text` or `output_audio` depending on the session
  # `output_modalities` configuration.
  #
  # @see OpenAI::Models::Realtime::RealtimeConversationItemAssistantMessage::Content#type
  module Type
    extend OpenAI::Internal::Type::Enum

    OUTPUT_TEXT = :output_text
    OUTPUT_AUDIO = :output_audio

    # @!method self.values
    #   @return [Array<Symbol>]
  end
end
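As a rough usage sketch (not part of the generated documentation), a text-only content part could be built directly from the keyword arguments documented above; the string literal is purely illustrative:

  # Hypothetical example: construct a content part for an assistant message.
  content = OpenAI::Models::Realtime::RealtimeConversationItemAssistantMessage::Content.new(
    text: "Sure, I can help with that.", # illustrative text, not from the source
    type: :output_text                   # one of the Type enum symbols defined above
  )

  content.text #=> "Sure, I can help with that."
  content.type #=> :output_text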
Instance Attribute Details
#audio ⇒ String?
Base64-encoded audio bytes; these will be parsed as the format specified in the session output audio type configuration. This defaults to PCM 16-bit 24kHz mono if not specified.
# File 'lib/openai/models/realtime/realtime_conversation_item_assistant_message.rb', line 72

optional :audio, String
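Since the value must be Base64-encoded bytes in the session's output audio format (PCM 16-bit 24kHz mono by default), a minimal sketch using Ruby's standard Base64 module could look like the following; the file name and variable names are assumptions, not part of the SDK:

  require "base64"

  # Assumed: raw 16-bit, 24kHz, mono PCM samples stored on disk.
  pcm_bytes = File.binread("assistant_reply.pcm")

  # Base64-encode the raw bytes for use as the `audio` value.
  encoded_audio = Base64.strict_encode64(pcm_bytes)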
#text ⇒ String?
The text content.
# File 'lib/openai/models/realtime/realtime_conversation_item_assistant_message.rb', line 78

optional :text, String
#transcript ⇒ String?
The transcript of the audio content; this will always be present if the output type is `audio`.
# File 'lib/openai/models/realtime/realtime_conversation_item_assistant_message.rb', line 85

optional :transcript, String
#type ⇒ Symbol, ...
The content type, `output_text` or `output_audio` depending on the session `output_modalities` configuration.
# File 'lib/openai/models/realtime/realtime_conversation_item_assistant_message.rb', line 92

optional :type, enum: -> { OpenAI::Realtime::RealtimeConversationItemAssistantMessage::Content::Type }
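For reference, the Type enum defined in the source above exposes its members through the documented values class method; a short sketch, with the expected output (based on the constants in the source) shown in comments:

  type_enum = OpenAI::Models::Realtime::RealtimeConversationItemAssistantMessage::Content::Type

  type_enum.values #=> [:output_text, :output_audio]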