Class: OpenAI::Models::Chat::ChatCompletionChunk
- Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Chat::ChatCompletionChunk
- Defined in:
- lib/openai/models/chat/chat_completion_chunk.rb
Defined Under Namespace
Modules: ServiceTier
Classes: Choice
Instance Attribute Summary
- #choices ⇒ Array<OpenAI::Models::Chat::ChatCompletionChunk::Choice>
  A list of chat completion choices.
- #created ⇒ Integer
  The Unix timestamp (in seconds) of when the chat completion was created.
- #id ⇒ String
  A unique identifier for the chat completion.
- #model ⇒ String
  The model used to generate the completion.
- #object ⇒ Symbol, :"chat.completion.chunk"
  The object type, which is always `chat.completion.chunk`.
- #service_tier ⇒ Symbol, ...
  Specifies the processing type used for serving the request.
- #system_fingerprint ⇒ String? (deprecated)
  This fingerprint represents the backend configuration that the model runs with.
- #usage ⇒ OpenAI::Models::CompletionUsage?
  An optional field that will only be present when you set `stream_options: {"include_usage": true}` in your request.
Instance Method Summary
- #initialize(id:, choices:, created:, model:, service_tier: nil, system_fingerprint: nil, usage: nil, object: :"chat.completion.chunk") ⇒ Object
  constructor
  Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(id:, choices:, created:, model:, service_tier: nil, system_fingerprint: nil, usage: nil, object: :"chat.completion.chunk") ⇒ Object
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.

# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 364
Instance Attribute Details
#choices ⇒ Array<OpenAI::Models::Chat::ChatCompletionChunk::Choice>
A list of chat completion choices. Can contain more than one element if `n` is greater than 1. Can also be empty for the last chunk if you set `stream_options: {"include_usage": true}`.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 19

required :choices, -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Chat::ChatCompletionChunk::Choice] }
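As a sketch of how per-chunk `choices` deltas compose across a stream, the plain Ruby below accumulates delta content into a full message. The chunk hashes are hypothetical stand-ins for `ChatCompletionChunk` instances (same field shape, made-up values), not output of the gem:

```ruby
# Hypothetical chunks as plain hashes; a real stream yields
# ChatCompletionChunk objects whose fields mirror this shape.
# Note every chunk shares the same id (see #id above).
chunks = [
  { id: "chatcmpl-abc", choices: [{ index: 0, delta: { role: "assistant", content: "Hel" } }] },
  { id: "chatcmpl-abc", choices: [{ index: 0, delta: { content: "lo!" } }] },
  { id: "chatcmpl-abc", choices: [{ index: 0, delta: {}, finish_reason: "stop" }] }
]

# Accumulate delta content per choice index; a chunk's choices array
# may hold several entries when n > 1, or be empty on the final chunk.
message = Hash.new { |h, k| h[k] = +"" }
chunks.each do |chunk|
  chunk[:choices].each do |choice|
    fragment = choice[:delta][:content]
    message[choice[:index]] << fragment if fragment
  end
end

puts message[0] # => "Hello!"
```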
#created ⇒ Integer
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 26

required :created, Integer
#id ⇒ String
A unique identifier for the chat completion. Each chunk has the same ID.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 11

required :id, String
#model ⇒ String
The model used to generate the completion.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 32

required :model, String
#object ⇒ Symbol, :"chat.completion.chunk"
The object type, which is always `chat.completion.chunk`.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 38

required :object, const: :"chat.completion.chunk"
#service_tier ⇒ Symbol, ...
Specifies the processing type used for serving the request.
- If set to `auto`, then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use `default`.
- If set to `default`, then the request will be processed with the standard pricing and performance for the selected model.
- If set to [flex](https://platform.openai.com/docs/guides/flex-processing) or [priority](https://openai.com/api-priority-processing/), then the request will be processed with the corresponding service tier.
- When not set, the default behavior is `auto`.

When the `service_tier` parameter is set, the response body will include the `service_tier` value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 59

optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletionChunk::ServiceTier }, nil?: true
#system_fingerprint ⇒ String?
Deprecated.
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 69

optional :system_fingerprint, String
#usage ⇒ OpenAI::Models::CompletionUsage?
An optional field that will only be present when you set `stream_options: {"include_usage": true}` in your request. When present, it contains a null value **except for the last chunk**, which contains the token usage statistics for the entire request.
NOTE: If the stream is interrupted or cancelled, you may not receive the final usage chunk which contains the total token usage for the request.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 81

optional :usage, -> { OpenAI::CompletionUsage }, nil?: true
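Because `usage` is null on every chunk except the last (when requested via `stream_options`), a consumer typically captures it while draining the stream. A minimal sketch, again using hypothetical hashes in place of real chunk objects:

```ruby
# Hypothetical chunk hashes; only the final chunk carries usage,
# and its choices array is empty (see #choices above).
chunks = [
  { choices: [{ index: 0, delta: { content: "Hi" } }], usage: nil },
  { choices: [], usage: { prompt_tokens: 5, completion_tokens: 1, total_tokens: 6 } }
]

# Keep the last non-nil usage seen; if the stream is interrupted,
# this stays nil, matching the NOTE above about missing usage chunks.
usage = nil
chunks.each do |chunk|
  usage = chunk[:usage] if chunk[:usage]
end

puts usage[:total_tokens] # => 6
```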