Class: OpenAI::Models::Responses::InputTokenCountParams
- Inherits: Internal::Type::BaseModel (Object → Internal::Type::BaseModel → OpenAI::Models::Responses::InputTokenCountParams)
- Extended by: Internal::Type::RequestParameters::Converter
- Includes: Internal::Type::RequestParameters
- Defined in: lib/openai/models/responses/input_token_count_params.rb
Overview
Defined Under Namespace
Modules: Conversation, Input, ToolChoice, Truncation
Classes: Text
Instance Attribute Summary
- #conversation ⇒ String, ...
  The conversation that this response belongs to.
- #input ⇒ String, ...
  Text, image, or file inputs to the model, used to generate a response.
- #instructions ⇒ String?
  A system (or developer) message inserted into the model’s context.
- #model ⇒ String?
  Model ID used to generate the response, like `gpt-4o` or `o3`.
- #parallel_tool_calls ⇒ Boolean?
  Whether to allow the model to run tool calls in parallel.
- #previous_response_id ⇒ String?
  The unique ID of the previous response to the model.
- #reasoning ⇒ OpenAI::Models::Reasoning?
  **gpt-5 and o-series models only** Configuration options for [reasoning models](platform.openai.com/docs/guides/reasoning).
- #text ⇒ OpenAI::Models::Responses::InputTokenCountParams::Text?
  Configuration options for a text response from the model.
- #tool_choice ⇒ Symbol, ...
  How the model should select which tool (or tools) to use when generating a response.
- #tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>?
  An array of tools the model may call while generating a response.
- #truncation ⇒ Symbol, ...
  The truncation strategy to use for the model response.
Attributes included from Internal::Type::RequestParameters
Class Method Summary
- .values ⇒ Array<Symbol>
- .variants ⇒ Array(Symbol, OpenAI::Models::Responses::ToolChoiceOptions, OpenAI::Models::Responses::ToolChoiceAllowed, OpenAI::Models::Responses::ToolChoiceTypes, OpenAI::Models::Responses::ToolChoiceFunction, OpenAI::Models::Responses::ToolChoiceMcp, OpenAI::Models::Responses::ToolChoiceCustom, OpenAI::Models::Responses::ToolChoiceApplyPatch, OpenAI::Models::Responses::ToolChoiceShell)
Instance Method Summary
- #initialize(conversation: nil, input: nil, instructions: nil, model: nil, parallel_tool_calls: nil, previous_response_id: nil, reasoning: nil, text: nil, tool_choice: nil, tools: nil, truncation: nil, request_options: {}) ⇒ Object (constructor)
  Some parameter documentation has been truncated; see InputTokenCountParams for more details.
Methods included from Internal::Type::RequestParameters::Converter
Methods included from Internal::Type::RequestParameters
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(conversation: nil, input: nil, instructions: nil, model: nil, parallel_tool_calls: nil, previous_response_id: nil, reasoning: nil, text: nil, tool_choice: nil, tools: nil, truncation: nil, request_options: {}) ⇒ Object
Some parameter documentation has been truncated; see OpenAI::Models::Responses::InputTokenCountParams for more details.
# File 'lib/openai/models/responses/input_token_count_params.rb', line 106
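A minimal usage sketch in plain Ruby (no gem required): the keyword names mirror the documented attributes, and the `compact` call imitates how optional, `nil?: true` parameters would simply be omitted from the serialized request. The helper name and the pruning behavior are illustrative assumptions, not part of the library.

```ruby
# Illustrative helper (not part of the gem): builds a params hash whose
# keywords mirror #initialize, dropping any parameter left at nil.
def build_input_token_count_params(model: nil, input: nil, instructions: nil,
                                   tool_choice: nil, truncation: nil)
  {
    model: model,
    input: input,
    instructions: instructions,
    tool_choice: tool_choice,
    truncation: truncation
  }.compact
end

params = build_input_token_count_params(
  model: "gpt-4o",
  input: "How many input tokens will this prompt use?",
  truncation: :auto
)
# params now contains only the keys that were actually supplied.
```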
Instance Attribute Details
#conversation ⇒ String, ...
The conversation that this response belongs to. Items from this conversation are prepended to `input_items` for this response request. Input items and output items from this response are automatically added to this conversation after this response completes.
# File 'lib/openai/models/responses/input_token_count_params.rb', line 18
optional :conversation, union: -> { OpenAI::Responses::InputTokenCountParams::Conversation }, nil?: true
#input ⇒ String, ...
Text, image, or file inputs to the model, used to generate a response.
# File 'lib/openai/models/responses/input_token_count_params.rb', line 28
optional :input, union: -> { OpenAI::Responses::InputTokenCountParams::Input }, nil?: true
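As a sketch of the `input` union (the array-item shape below is an assumption based on the Responses API's message format, not verified against the Input module), the value may be a plain string or a list of item hashes:

```ruby
# Two plausible shapes for the `input` union.
string_input = "Summarize this paragraph."

# Array-of-items form; the role/content keys here are assumed for illustration.
items_input = [
  { role: "user", content: "Summarize this paragraph." }
]

# A caller could branch on the shape like so:
def input_kind(input)
  input.is_a?(String) ? "text" : "item list"
end

puts input_kind(string_input)  # prints "text"
puts input_kind(items_input)   # prints "item list"
```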
#instructions ⇒ String?
A system (or developer) message inserted into the model’s context. When used along with `previous_response_id`, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
# File 'lib/openai/models/responses/input_token_count_params.rb', line 37
optional :instructions, String, nil?: true
#model ⇒ String?
Model ID used to generate the response, like `gpt-4o` or `o3`. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the [model guide](platform.openai.com/docs/models) to browse and compare available models.
# File 'lib/openai/models/responses/input_token_count_params.rb', line 47
optional :model, String, nil?: true
#parallel_tool_calls ⇒ Boolean?
Whether to allow the model to run tool calls in parallel.
# File 'lib/openai/models/responses/input_token_count_params.rb', line 53
optional :parallel_tool_calls, OpenAI::Internal::Type::Boolean, nil?: true
#previous_response_id ⇒ String?
The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about [conversation state](platform.openai.com/docs/guides/conversation-state). Cannot be used in conjunction with `conversation`.
# File 'lib/openai/models/responses/input_token_count_params.rb', line 62
optional :previous_response_id, String, nil?: true
#reasoning ⇒ OpenAI::Models::Reasoning?
**gpt-5 and o-series models only** Configuration options for [reasoning models](platform.openai.com/docs/guides/reasoning).
# File 'lib/openai/models/responses/input_token_count_params.rb', line 69
optional :reasoning, -> { OpenAI::Reasoning }, nil?: true
#text ⇒ OpenAI::Models::Responses::InputTokenCountParams::Text?
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
- [Text inputs and outputs](platform.openai.com/docs/guides/text)
- [Structured Outputs](platform.openai.com/docs/guides/structured-outputs)
# File 'lib/openai/models/responses/input_token_count_params.rb', line 79
optional :text, -> { OpenAI::Responses::InputTokenCountParams::Text }, nil?: true
#tool_choice ⇒ Symbol, ...
How the model should select which tool (or tools) to use when generating a response. See the `tools` parameter to see how to specify which tools the model can call.
# File 'lib/openai/models/responses/input_token_count_params.rb', line 87
optional :tool_choice, union: -> { OpenAI::Responses::InputTokenCountParams::ToolChoice }, nil?: true
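To illustrate the `tool_choice` union (shapes inferred from the variant list, with ToolChoiceOptions as a bare Symbol and ToolChoiceFunction as a structured value; the exact keys are assumptions, not verified signatures):

```ruby
# A bare mode symbol (ToolChoiceOptions-style) ...
auto_choice = :auto

# ... versus a structured choice naming one tool (ToolChoiceFunction-style;
# the hash keys here are assumed for illustration).
function_choice = { type: "function", name: "get_weather" }

[auto_choice, function_choice].each do |choice|
  kind = choice.is_a?(Symbol) ? "mode" : "specific tool"
  puts "tool_choice selects a #{kind}: #{choice.inspect}"
end
```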
#tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>?
An array of tools the model may call while generating a response. You can specify which tool to use by setting the `tool_choice` parameter.
# File 'lib/openai/models/responses/input_token_count_params.rb', line 94
optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }, nil?: true
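A sketch of a one-element `tools` array holding a function tool (the hash keys follow the common Responses-API function-tool shape and are assumptions here, not taken from FunctionTool's definition):

```ruby
# One function tool with a JSON-schema parameters object (keys assumed).
tools = [
  {
    type: "function",
    name: "get_weather",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"]
    }
  }
]

# tool_choice could then point at this tool by name, per the attribute docs.
puts "declared #{tools.length} tool(s): #{tools.map { |t| t[:name] }.join(', ')}"
```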
#truncation ⇒ Symbol, ...
The truncation strategy to use for the model response.
- `auto`: If the input to this Response exceeds the model’s context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- `disabled` (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
# File 'lib/openai/models/responses/input_token_count_params.rb', line 104
optional :truncation, enum: -> { OpenAI::Responses::InputTokenCountParams::Truncation }
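The two documented enum values can be captured in a small validation sketch (the constant and helper below are illustrative, not part of the gem):

```ruby
# Per the docs: :auto drops items from the start of the conversation when the
# context window overflows; :disabled (the default) fails with a 400 instead.
TRUNCATION_VALUES = %i[auto disabled].freeze

def validate_truncation(value)
  return value if value.nil? || TRUNCATION_VALUES.include?(value)
  raise ArgumentError, "truncation must be one of #{TRUNCATION_VALUES.inspect}"
end

validate_truncation(:auto)     # => :auto
validate_truncation(:disabled) # => :disabled
```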
Class Method Details
.values ⇒ Array<Symbol>
# File 'lib/openai/models/responses/input_token_count_params.rb', line 225
.variants ⇒ Array(Symbol, OpenAI::Models::Responses::ToolChoiceOptions, OpenAI::Models::Responses::ToolChoiceAllowed, OpenAI::Models::Responses::ToolChoiceTypes, OpenAI::Models::Responses::ToolChoiceFunction, OpenAI::Models::Responses::ToolChoiceMcp, OpenAI::Models::Responses::ToolChoiceCustom, OpenAI::Models::Responses::ToolChoiceApplyPatch, OpenAI::Models::Responses::ToolChoiceShell)
# File 'lib/openai/models/responses/input_token_count_params.rb', line 147