Class: OpenAI::Models::Responses::InputTokenCountParams::Text

Inherits:
Internal::Type::BaseModel
Defined in:
lib/openai/models/responses/input_token_count_params.rb

Defined Under Namespace

Modules: Verbosity

Instance Attribute Summary

Instance Method Summary

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(format_: nil, verbosity: nil) ⇒ Object

Some parameter documentation has been truncated; see OpenAI::Models::Responses::InputTokenCountParams::Text for more details.

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:

Parameters:



# File 'lib/openai/models/responses/input_token_count_params.rb', line 198
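As a hedged sketch, the keyword arguments above serialize to a hash roughly like the following. The field names follow the attribute documentation below; the `json_schema` payload contents are a made-up illustration, not part of the gem's API.

```ruby
# Hypothetical payload shape for these params, inferred from the attribute
# documentation; the schema contents are illustrative only.
text_params = {
  format: {
    type: "json_schema",
    json_schema: {
      name: "answer",                               # assumed example name
      schema: { type: "object", properties: {} }    # illustrative schema
    }
  },
  verbosity: "low"                                  # one of low, medium, high
}
```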


Instance Attribute Details

#format_ ⇒ OpenAI::Models::ResponseFormatText, ...

An object specifying the format that the model must output.

Configuring `{ "type": "json_schema" }` enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](platform.openai.com/docs/guides/structured-outputs).

The default format is `{ "type": "text" }` with no additional options.

**Not recommended for gpt-4o and newer models:**

Setting to `{ "type": "json_object" }` enables the older JSON mode, which ensures the message the model generates is valid JSON. Using `json_schema` is preferred for models that support it.



# File 'lib/openai/models/responses/input_token_count_params.rb', line 184

optional :format_, union: -> { OpenAI::Responses::ResponseFormatTextConfig }, api_name: :format
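The three documented format shapes can be sketched as plain hashes. This is a hedged illustration: only the `type` discriminators come from the docs above, and the `json_schema` contents are invented for the example.

```ruby
# The default: plain text output, no extra options.
text_format = { type: "text" }

# Older JSON mode: guarantees valid JSON, but does not enforce a schema.
json_object = { type: "json_object" }

# Structured Outputs: the model's reply must match the supplied schema.
json_schema = {
  type: "json_schema",
  json_schema: { name: "example", schema: { type: "object" } } # illustrative
}

formats = [text_format, json_object, json_schema]
```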

#verbosity ⇒ Symbol, ...

Constrains the verbosity of the model’s response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high.



# File 'lib/openai/models/responses/input_token_count_params.rb', line 192

optional :verbosity,
enum: -> {
  OpenAI::Responses::InputTokenCountParams::Text::Verbosity
},
nil?: true
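A hedged sketch of the three documented verbosity values and how one might validate them. The value list mirrors the docs above (note `nil?: true`, so nil is also accepted); the helper method is hypothetical, not part of the gem.

```ruby
# The three documented verbosity values; nil is also allowed.
VERBOSITY_VALUES = [:low, :medium, :high].freeze

# Hypothetical validation helper, not part of the gem's API.
def valid_verbosity?(value)
  value.nil? || VERBOSITY_VALUES.include?(value)
end
```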