Class: OpenAI::Models::Responses::InputTokenCountParams

Inherits:
Internal::Type::BaseModel
Extended by:
Internal::Type::RequestParameters::Converter
Includes:
Internal::Type::RequestParameters
Defined in:
lib/openai/models/responses/input_token_count_params.rb

Overview

Defined Under Namespace

Modules: Conversation, Input, ToolChoice, Truncation Classes: Text

Instance Attribute Summary

Attributes included from Internal::Type::RequestParameters

#request_options

Class Method Summary

Instance Method Summary

Methods included from Internal::Type::RequestParameters::Converter

dump_request

Methods included from Internal::Type::RequestParameters

included

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(conversation: nil, input: nil, instructions: nil, model: nil, parallel_tool_calls: nil, previous_response_id: nil, reasoning: nil, text: nil, tool_choice: nil, tools: nil, truncation: nil, request_options: {}) ⇒ Object

Some parameter documentation has been truncated; see OpenAI::Models::Responses::InputTokenCountParams for more details.

Parameters:



# File 'lib/openai/models/responses/input_token_count_params.rb', line 106


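All keyword arguments above are optional. A minimal sketch of the params hash this initializer accepts; the model name and values below are illustrative, not defaults:

```ruby
# Illustrative params hash mirroring the keyword arguments above.
params = {
  model: "gpt-4o",
  instructions: "You are a concise assistant.",
  input: "How many input tokens will this consume?",
  parallel_tool_calls: false,
  truncation: :auto
}

# Every key is optional, so an empty hash is also a valid set of params.
params.key?(:model)       # => true
params.fetch(:truncation) # => :auto
```
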
Instance Attribute Details

#conversation ⇒ String, ...

The conversation that this response belongs to. Items from this conversation are prepended to `input_items` for this response request. Input items and output items from this response are automatically added to this conversation after this response completes.



# File 'lib/openai/models/responses/input_token_count_params.rb', line 18

optional :conversation,
union: -> {
  OpenAI::Responses::InputTokenCountParams::Conversation
},
nil?: true

#input ⇒ String, ...

Text, image, or file inputs to the model, used to generate a response.

Returns:



# File 'lib/openai/models/responses/input_token_count_params.rb', line 28

optional :input, union: -> { OpenAI::Responses::InputTokenCountParams::Input }, nil?: true
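The `Input` union accepts either a plain string or an array of input items. A sketch of both shapes; the role and content values are illustrative:

```ruby
# A plain string is the simplest input shape.
input_as_string = "Summarize the attached report."

# An array of input items; each item here is a role/content message hash
# (illustrative shape, one of several item types the union accepts).
input_as_items = [
  { role: "user", content: "Summarize the attached report." }
]
```
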

#instructions ⇒ String?

A system (or developer) message inserted into the model's context. When used along with `previous_response_id`, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.

Returns:

  • (String, nil)


# File 'lib/openai/models/responses/input_token_count_params.rb', line 37

optional :instructions, String, nil?: true

#model ⇒ String?

Model ID used to generate the response, like `gpt-4o` or `o3`. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the [model guide](https://platform.openai.com/docs/models) to browse and compare available models.

Returns:

  • (String, nil)


# File 'lib/openai/models/responses/input_token_count_params.rb', line 47

optional :model, String, nil?: true

#parallel_tool_calls ⇒ Boolean?

Whether to allow the model to run tool calls in parallel.

Returns:



# File 'lib/openai/models/responses/input_token_count_params.rb', line 53

optional :parallel_tool_calls, OpenAI::Internal::Type::Boolean, nil?: true

#previous_response_id ⇒ String?

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about [conversation state](https://platform.openai.com/docs/guides/conversation-state). Cannot be used in conjunction with `conversation`.

Returns:

  • (String, nil)


# File 'lib/openai/models/responses/input_token_count_params.rb', line 62

optional :previous_response_id, String, nil?: true
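Since `previous_response_id` cannot be combined with `conversation`, a caller-side guard can catch the conflict before the request is sent. The helper name and IDs below are hypothetical:

```ruby
# Hypothetical caller-side guard: `conversation` and
# `previous_response_id` are mutually exclusive per the docs above.
def check_conversation_params!(params)
  if params[:conversation] && params[:previous_response_id]
    raise ArgumentError,
          "`conversation` cannot be combined with `previous_response_id`"
  end
  params
end

check_conversation_params!(previous_response_id: "resp_abc") # passes through
```
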

#reasoning ⇒ OpenAI::Models::Reasoning?

**gpt-5 and o-series models only**

Configuration options for [reasoning models](https://platform.openai.com/docs/guides/reasoning).

Returns:



# File 'lib/openai/models/responses/input_token_count_params.rb', line 69

optional :reasoning, -> { OpenAI::Reasoning }, nil?: true

#text ⇒ OpenAI::Models::Responses::InputTokenCountParams::Text?

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:



# File 'lib/openai/models/responses/input_token_count_params.rb', line 79

optional :text, -> { OpenAI::Responses::InputTokenCountParams::Text }, nil?: true

#tool_choice ⇒ Symbol, ...

How the model should select which tool (or tools) to use when generating a response. See the `tools` parameter to see how to specify which tools the model can call.



# File 'lib/openai/models/responses/input_token_count_params.rb', line 87

optional :tool_choice, union: -> { OpenAI::Responses::InputTokenCountParams::ToolChoice }, nil?: true

#tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>?

An array of tools the model may call while generating a response. You can specify which tool to use by setting the `tool_choice` parameter.



# File 'lib/openai/models/responses/input_token_count_params.rb', line 94

optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }, nil?: true
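A sketch of one function-tool entry for the `tools` array; the exact field set is an assumption based on the Responses API function-tool shape, and the tool name and schema are illustrative:

```ruby
# One illustrative function-tool hash; `parameters` is a JSON Schema
# describing the arguments the model may supply when calling the tool.
tools = [
  {
    type: "function",
    name: "get_weather",
    description: "Look up current weather for a city.",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"]
    }
  }
]
```
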

#truncation ⇒ Symbol, ...

The truncation strategy to use for the model response.

- `auto`: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- `disabled` (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.



# File 'lib/openai/models/responses/input_token_count_params.rb', line 104

optional :truncation, enum: -> { OpenAI::Responses::InputTokenCountParams::Truncation }
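The two truncation modes described above can be summarized in a small lookup table (a documentation sketch, not part of the gem):

```ruby
# Summary of the Truncation enum values described above.
TRUNCATION_MODES = {
  auto: "drop items from the start of the conversation to fit the context window",
  disabled: "fail with a 400 error when input exceeds the context window (default)"
}.freeze
```
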

Class Method Details

.values ⇒ Array<Symbol>

Returns:

  • (Array<Symbol>)


# File 'lib/openai/models/responses/input_token_count_params.rb', line 225