Class: OpenAI::Models::Realtime::RealtimeResponseUsage

Inherits:
Internal::Type::BaseModel show all
Defined in:
lib/openai/models/realtime/realtime_response_usage.rb

Instance Attribute Summary collapse

Instance Method Summary collapse

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(input_token_details: nil, input_tokens: nil, output_token_details: nil, output_tokens: nil, total_tokens: nil) ⇒ Object

Some parameter documentation has been truncated; see OpenAI::Models::Realtime::RealtimeResponseUsage for more details.

Usage statistics for the Response; these correspond to billing. A Realtime API session maintains a conversation context and appends new Items to the Conversation, so output from previous turns (text and audio tokens) becomes the input for later turns.

Parameters:

  • input_token_details (OpenAI::Models::Realtime::RealtimeResponseUsageInputTokenDetails) (defaults to: nil)

    Details about the input tokens used in the Response. Cached tokens are tokens from previous turns in the conversation that are included as context for the current response.

  • input_tokens (Integer) (defaults to: nil)

    The number of input tokens used in the Response, including text and audio tokens.

  • output_token_details (OpenAI::Models::Realtime::RealtimeResponseUsageOutputTokenDetails) (defaults to: nil)

    Details about the output tokens used in the Response.

  • output_tokens (Integer) (defaults to: nil)

    The number of output tokens sent in the Response, including text and audio tokens.

  • total_tokens (Integer) (defaults to: nil)

    The total number of tokens in the Response including input and output text and audio tokens.



# File 'lib/openai/models/realtime/realtime_response_usage.rb', line 43
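As an illustration of how these fields relate, the sketch below uses a plain hash as a stand-in for a RealtimeResponseUsage payload (the field names match the parameters above; the values are invented for the example):

```ruby
# Stand-in for a RealtimeResponseUsage payload; values are illustrative only.
usage = {
  input_tokens: 120,
  output_tokens: 45,
  total_tokens: 165,
  input_token_details: { cached_tokens: 80, text_tokens: 100, audio_tokens: 20 },
  output_token_details: { text_tokens: 30, audio_tokens: 15 }
}

# total_tokens covers both input and output tokens.
puts usage[:total_tokens] == usage[:input_tokens] + usage[:output_tokens]
```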


Instance Attribute Details

#input_token_details ⇒ OpenAI::Models::Realtime::RealtimeResponseUsageInputTokenDetails?

Details about the input tokens used in the Response. Cached tokens are tokens from previous turns in the conversation that are included as context for the current response. Cached tokens here are counted as a subset of input tokens, meaning input tokens will include cached and uncached tokens.



# File 'lib/openai/models/realtime/realtime_response_usage.rb', line 14

optional :input_token_details, -> { OpenAI::Realtime::RealtimeResponseUsageInputTokenDetails }
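Since cached tokens are counted as a subset of input tokens, the uncached portion can be derived by subtraction. A minimal sketch with illustrative numbers (not SDK calls):

```ruby
# Cached tokens are a subset of input tokens, so input_tokens already
# includes them; the uncached portion is the difference.
input_tokens  = 120
cached_tokens = 80
uncached_tokens = input_tokens - cached_tokens
puts uncached_tokens
```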

#input_tokens ⇒ Integer?

The number of input tokens used in the Response, including text and audio tokens.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/realtime/realtime_response_usage.rb', line 21

optional :input_tokens, Integer

#output_token_details ⇒ OpenAI::Models::Realtime::RealtimeResponseUsageOutputTokenDetails?

Details about the output tokens used in the Response.



# File 'lib/openai/models/realtime/realtime_response_usage.rb', line 27

optional :output_token_details, -> { OpenAI::Realtime::RealtimeResponseUsageOutputTokenDetails }

#output_tokens ⇒ Integer?

The number of output tokens sent in the Response, including text and audio tokens.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/realtime/realtime_response_usage.rb', line 34

optional :output_tokens, Integer

#total_tokens ⇒ Integer?

The total number of tokens in the Response including input and output text and audio tokens.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/realtime/realtime_response_usage.rb', line 41

optional :total_tokens, Integer
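Because output from earlier turns becomes input for later turns, per-turn input usage grows over a session; summing total_tokens across turns yields the session's overall usage. A sketch with hypothetical per-turn figures (plain hashes, not SDK objects):

```ruby
# Hypothetical per-turn usage for one Realtime session; note the second
# turn's input_tokens grow because prior output re-enters as context.
per_turn = [
  { input_tokens: 50,  output_tokens: 40, total_tokens: 90 },
  { input_tokens: 110, output_tokens: 35, total_tokens: 145 }
]

session_total = per_turn.sum { |u| u[:total_tokens] }
puts session_total
```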