Class: OpenAI::Resources::Responses::InputTokens

Inherits: Object
Defined in:
lib/openai/resources/responses/input_tokens.rb

Instance Method Summary

Constructor Details

#initialize(client:) ⇒ InputTokens

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.

Returns a new instance of InputTokens.

API:

  • private

Parameters:

  • client (OpenAI::Client)

# File 'lib/openai/resources/responses/input_tokens.rb', line 58

def initialize(client:)
  @client = client
end

Instance Method Details

#count(conversation: nil, input: nil, instructions: nil, model: nil, parallel_tool_calls: nil, previous_response_id: nil, reasoning: nil, text: nil, tool_choice: nil, tools: nil, truncation: nil, request_options: {}) ⇒ OpenAI::Models::Responses::InputTokenCountResponse

Some parameter documentation has been truncated; see Models::Responses::InputTokenCountParams for more details.

Counts the input tokens for the given request.

Returns an object whose object field is set to response.input_tokens and which carries the computed input_tokens count.
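Illustratively, the returned payload has the following shape; the field values here are hypothetical placeholders, and the authoritative field list is the InputTokenCountResponse model:

```ruby
# Hypothetical example of an input-token count payload.
# "object" is fixed; the input_tokens value depends on the request.
payload = {
  "object" => "response.input_tokens",
  "input_tokens" => 123
}
```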


Parameters:

  • conversation: The conversation that this response belongs to. Items from this conversation are …

  • input: Text, image, or file inputs to the model, used to generate a response

  • instructions: A system (or developer) message inserted into the model’s context.

  • model: Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a w…

  • parallel_tool_calls: Whether to allow the model to run tool calls in parallel.

  • previous_response_id: The unique ID of the previous response to the model. Use this to create multi-tu…

  • reasoning: **gpt-5 and o-series models only** Configuration options for [reasoning models](…

  • text: Configuration options for a text response from the model. Can be plain …

  • tool_choice: Controls which tool the model should use, if any.

  • tools: An array of tools the model may call while generating a response. You can specif…

  • truncation: The truncation strategy to use for the model response. - auto: If the input to …

Returns:

  • (OpenAI::Models::Responses::InputTokenCountResponse)



# File 'lib/openai/resources/responses/input_tokens.rb', line 44

def count(params = {})
  parsed, options = OpenAI::Responses::InputTokenCountParams.dump_request(params)
  @client.request(
    method: :post,
    path: "responses/input_tokens",
    body: parsed,
    model: OpenAI::Models::Responses::InputTokenCountResponse,
    options: options
  )
end
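A minimal usage sketch. This assumes the resource is exposed as client.responses.input_tokens (mirroring the class path) and that an API key is configured; it performs a live API call when run, and the specific model and input values are illustrative only:

```ruby
require "openai"

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI::Client.new

# Count the input tokens the request would consume, without
# generating a response.
result = client.responses.input_tokens.count(
  model: "gpt-4o",
  input: "Write a haiku about autumn.",
  instructions: "You are a concise assistant."
)

# result is an InputTokenCountResponse carrying the token count.
puts result.input_tokens
```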