Class: OpenAI::Models::Responses::Response
- Inherits: Internal::Type::BaseModel
- Ancestors: Object → Internal::Type::BaseModel → OpenAI::Models::Responses::Response
- Defined in: lib/openai/models/responses/response.rb
Overview
Defined Under Namespace
Modules: Instructions, PromptCacheRetention, ServiceTier, ToolChoice, Truncation Classes: Conversation, IncompleteDetails
Instance Attribute Summary
- #background ⇒ Boolean?
  Whether to run the model response in the background.
- #conversation ⇒ OpenAI::Models::Responses::Response::Conversation?
  The conversation that this response belongs to.
- #created_at ⇒ Float
  Unix timestamp (in seconds) of when this Response was created.
- #error ⇒ OpenAI::Models::Responses::ResponseError?
  An error object returned when the model fails to generate a Response.
- #id ⇒ String
  Unique identifier for this Response.
- #incomplete_details ⇒ OpenAI::Models::Responses::Response::IncompleteDetails?
  Details about why the response is incomplete.
- #instructions ⇒ String, ...
  A system (or developer) message inserted into the model’s context.
- #max_output_tokens ⇒ Integer?
  An upper bound for the number of tokens that can be generated for a response, including visible output tokens and [reasoning tokens](https://platform.openai.com/docs/guides/reasoning).
- #max_tool_calls ⇒ Integer?
  The maximum number of total calls to built-in tools that can be processed in a response.
- #metadata ⇒ Hash{Symbol=>String}?
  Set of 16 key-value pairs that can be attached to an object.
- #model ⇒ String, ...
  Model ID used to generate the response, like `gpt-4o` or `o3`.
- #object ⇒ Symbol, :response
  The object type of this resource - always set to `response`.
- #output ⇒ Array<OpenAI::Models::Responses::ResponseOutputMessage, OpenAI::Models::Responses::ResponseFileSearchToolCall, OpenAI::Models::Responses::ResponseFunctionToolCall, OpenAI::Models::Responses::ResponseFunctionWebSearch, OpenAI::Models::Responses::ResponseComputerToolCall, OpenAI::Models::Responses::ResponseReasoningItem, OpenAI::Models::Responses::ResponseCompactionItem, OpenAI::Models::Responses::ResponseOutputItem::ImageGenerationCall, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall, OpenAI::Models::Responses::ResponseOutputItem::LocalShellCall, OpenAI::Models::Responses::ResponseFunctionShellToolCall, OpenAI::Models::Responses::ResponseFunctionShellToolCallOutput, OpenAI::Models::Responses::ResponseApplyPatchToolCall, OpenAI::Models::Responses::ResponseApplyPatchToolCallOutput, OpenAI::Models::Responses::ResponseOutputItem::McpCall, OpenAI::Models::Responses::ResponseOutputItem::McpListTools, OpenAI::Models::Responses::ResponseOutputItem::McpApprovalRequest, OpenAI::Models::Responses::ResponseCustomToolCall>
  An array of content items generated by the model.
- #parallel_tool_calls ⇒ Boolean
  Whether to allow the model to run tool calls in parallel.
- #previous_response_id ⇒ String?
  The unique ID of the previous response to the model.
- #prompt ⇒ OpenAI::Models::Responses::ResponsePrompt?
  Reference to a prompt template and its variables.
- #prompt_cache_key ⇒ String?
  Used by OpenAI to cache responses for similar requests to optimize your cache hit rates.
- #prompt_cache_retention ⇒ Symbol, ...
  The retention policy for the prompt cache.
- #reasoning ⇒ OpenAI::Models::Reasoning?
  **gpt-5 and o-series models only**.
- #safety_identifier ⇒ String?
  A stable identifier used to help detect users of your application that may be violating OpenAI’s usage policies.
- #service_tier ⇒ Symbol, ...
  Specifies the processing type used for serving the request.
- #status ⇒ Symbol, ...
  The status of the response generation.
- #temperature ⇒ Float?
  What sampling temperature to use, between 0 and 2.
- #text ⇒ OpenAI::Models::Responses::ResponseTextConfig?
  Configuration options for a text response from the model.
- #tool_choice ⇒ Symbol, ...
  How the model should select which tool (or tools) to use when generating a response.
- #tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>
  An array of tools the model may call while generating a response.
- #top_logprobs ⇒ Integer?
  An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
- #top_p ⇒ Float?
  An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
- #truncation ⇒ Symbol, ...
  The truncation strategy to use for the model response.
- #usage ⇒ OpenAI::Models::Responses::ResponseUsage?
  Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
- #user ⇒ String? (deprecated)
Class Method Summary
- .values ⇒ Array<Symbol>
- .variants ⇒ Array(Symbol, OpenAI::Models::Responses::ToolChoiceOptions, OpenAI::Models::Responses::ToolChoiceAllowed, OpenAI::Models::Responses::ToolChoiceTypes, OpenAI::Models::Responses::ToolChoiceFunction, OpenAI::Models::Responses::ToolChoiceMcp, OpenAI::Models::Responses::ToolChoiceCustom, OpenAI::Models::Responses::ToolChoiceApplyPatch, OpenAI::Models::Responses::ToolChoiceShell)
Instance Method Summary
- #initialize(id:, ...) ⇒ Object (constructor)
  Returns a new instance of Response.
- #output_text ⇒ String
  Convenience property that aggregates all `output_text` items from the `output` list.
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(id:, ...) ⇒ Object
Returns a new instance of Response; the attributes documented below are accepted as keyword arguments (the signature is truncated here).
# File 'lib/openai/models/responses/response.rb', line 321
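In practice a `Response` is returned by the API rather than built by hand. A minimal sketch, assuming the gem's standard `OpenAI::Client` entry point and its `responses.create` method (the model name and prompt are illustrative):

require "openai"

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI::Client.new

# responses.create returns an OpenAI::Models::Responses::Response.
response = client.responses.create(
  model: "gpt-4o",
  input: "Write a haiku about Ruby."
)

response.id          # => "resp_..."
response.status      # => :completed
response.output_text # => the aggregated assistant text (see #output_text)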
Instance Attribute Details
#background ⇒ Boolean?
Whether to run the model response in the background. [Learn more](https://platform.openai.com/docs/guides/background).
# File 'lib/openai/models/responses/response.rb', line 145
optional :background, OpenAI::Internal::Type::Boolean, nil?: true
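Background responses finish asynchronously, so callers typically poll. A sketch, assuming a `client` as in the constructor example above and the client's `responses.retrieve` method:

resp = client.responses.create(
  model: "gpt-4o",
  input: "Summarize this 200-page report...",
  background: true
)

# Poll until the response leaves the queued/in-progress states; a real
# caller would bound the loop and handle :failed explicitly.
until [:completed, :failed, :cancelled, :incomplete].include?(resp.status)
  sleep(1)
  resp = client.responses.retrieve(resp.id)
end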
#conversation ⇒ OpenAI::Models::Responses::Response::Conversation?
The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation.
# File 'lib/openai/models/responses/response.rb', line 152
optional :conversation, -> { OpenAI::Responses::Response::Conversation }, nil?: true
#created_at ⇒ Float
Unix timestamp (in seconds) of when this Response was created.
# File 'lib/openai/models/responses/response.rb', line 20
required :created_at, Float
#error ⇒ OpenAI::Models::Responses::ResponseError?
An error object returned when the model fails to generate a Response.
# File 'lib/openai/models/responses/response.rb', line 26
required :error, -> { OpenAI::Responses::ResponseError }, nil?: true
#id ⇒ String
Unique identifier for this Response.
# File 'lib/openai/models/responses/response.rb', line 14
required :id, String
#incomplete_details ⇒ OpenAI::Models::Responses::Response::IncompleteDetails?
Details about why the response is incomplete.
# File 'lib/openai/models/responses/response.rb', line 32
required :incomplete_details, -> { OpenAI::Responses::Response::IncompleteDetails }, nil?: true
#instructions ⇒ String, ...
A system (or developer) message inserted into the model’s context.
When used along with `previous_response_id`, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
# File 'lib/openai/models/responses/response.rb', line 42
required :instructions, union: -> { OpenAI::Responses::Response::Instructions }, nil?: true
#max_output_tokens ⇒ Integer?
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and [reasoning tokens](https://platform.openai.com/docs/guides/reasoning).
# File 'lib/openai/models/responses/response.rb', line 160
optional :max_output_tokens, Integer, nil?: true
#max_tool_calls ⇒ Integer?
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
# File 'lib/openai/models/responses/response.rb', line 169
optional :max_tool_calls, Integer, nil?: true
#metadata ⇒ Hash{Symbol=>String}?
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
# File 'lib/openai/models/responses/response.rb', line 53
required :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true
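For example (keys and values below are purely illustrative), metadata round-trips as a Symbol-keyed Hash:

response.metadata # => { order_id: "6735", customer: "acme" }, or nil when unset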
#model ⇒ String, ...
Model ID used to generate the response, like `gpt-4o` or `o3`. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the [model guide](https://platform.openai.com/docs/models) to browse and compare available models.
# File 'lib/openai/models/responses/response.rb', line 63
required :model, union: -> { OpenAI::ResponsesModel }
#object ⇒ Symbol, :response
The object type of this resource - always set to `response`.
# File 'lib/openai/models/responses/response.rb', line 69
required :object, const: :response
#output ⇒ Array<OpenAI::Models::Responses::ResponseOutputMessage, OpenAI::Models::Responses::ResponseFileSearchToolCall, OpenAI::Models::Responses::ResponseFunctionToolCall, OpenAI::Models::Responses::ResponseFunctionWebSearch, OpenAI::Models::Responses::ResponseComputerToolCall, OpenAI::Models::Responses::ResponseReasoningItem, OpenAI::Models::Responses::ResponseCompactionItem, OpenAI::Models::Responses::ResponseOutputItem::ImageGenerationCall, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall, OpenAI::Models::Responses::ResponseOutputItem::LocalShellCall, OpenAI::Models::Responses::ResponseFunctionShellToolCall, OpenAI::Models::Responses::ResponseFunctionShellToolCallOutput, OpenAI::Models::Responses::ResponseApplyPatchToolCall, OpenAI::Models::Responses::ResponseApplyPatchToolCallOutput, OpenAI::Models::Responses::ResponseOutputItem::McpCall, OpenAI::Models::Responses::ResponseOutputItem::McpListTools, OpenAI::Models::Responses::ResponseOutputItem::McpApprovalRequest, OpenAI::Models::Responses::ResponseCustomToolCall>
An array of content items generated by the model.
- The length and order of items in the `output` array is dependent on the model’s response.
- Rather than accessing the first item in the `output` array and assuming it’s an `assistant` message with the content generated by the model, you might consider using the `output_text` property where supported in SDKs.
# File 'lib/openai/models/responses/response.rb', line 81
required :output, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::ResponseOutputItem] }
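A sketch of walking `output` directly when you need more than the aggregated text; the `:function_call` branch assumes the type symbol used by `ResponseFunctionToolCall` and is illustrative:

response.output.each do |item|
  case item.type
  when :message
    # An assistant message: collect its output_text content blocks.
    item.content.each { |c| puts c.text if c.type == :output_text }
  when :function_call
    # A function tool call the model wants your code to execute.
    puts "tool call: #{item.name}(#{item.arguments})"
  end
end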
#parallel_tool_calls ⇒ Boolean
Whether to allow the model to run tool calls in parallel.
# File 'lib/openai/models/responses/response.rb', line 87
required :parallel_tool_calls, OpenAI::Internal::Type::Boolean
#previous_response_id ⇒ String?
The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about [conversation state](https://platform.openai.com/docs/guides/conversation-state). Cannot be used in conjunction with `conversation`.
# File 'lib/openai/models/responses/response.rb', line 178
optional :previous_response_id, String, nil?: true
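A two-turn sketch, again assuming a `client` as in the constructor example (model name and prompts are illustrative):

first = client.responses.create(model: "gpt-4o", input: "Pick a number between 1 and 10.")

followup = client.responses.create(
  model: "gpt-4o",
  input: "Now double it.",
  previous_response_id: first.id
)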
#prompt ⇒ OpenAI::Models::Responses::ResponsePrompt?
Reference to a prompt template and its variables. [Learn more](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
# File 'lib/openai/models/responses/response.rb', line 185
optional :prompt, -> { OpenAI::Responses::ResponsePrompt }, nil?: true
#prompt_cache_key ⇒ String?
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the `user` field. [Learn more](https://platform.openai.com/docs/guides/prompt-caching).
# File 'lib/openai/models/responses/response.rb', line 193
optional :prompt_cache_key, String
#prompt_cache_retention ⇒ Symbol, ...
The retention policy for the prompt cache. Set to `24h` to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. [Learn more](https://platform.openai.com/docs/guides/prompt-caching#prompt-cache-retention).
# File 'lib/openai/models/responses/response.rb', line 202
optional :prompt_cache_retention, enum: -> { OpenAI::Responses::Response::PromptCacheRetention }, nil?: true
#reasoning ⇒ OpenAI::Models::Reasoning?
**gpt-5 and o-series models only**
Configuration options for [reasoning models](https://platform.openai.com/docs/guides/reasoning).
# File 'lib/openai/models/responses/response.rb', line 213
optional :reasoning, -> { OpenAI::Reasoning }, nil?: true
#safety_identifier ⇒ String?
A stable identifier used to help detect users of your application that may be violating OpenAI’s usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. [Learn more](https://platform.openai.com/docs/guides/safety-best-practices#safety-identifiers).
# File 'lib/openai/models/responses/response.rb', line 223
optional :safety_identifier, String
#service_tier ⇒ Symbol, ...
Specifies the processing type used for serving the request.
- If set to `auto`, then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use `default`.
- If set to `default`, then the request will be processed with the standard pricing and performance for the selected model.
- If set to [`flex`](https://platform.openai.com/docs/guides/flex-processing) or [`priority`](https://openai.com/api-priority-processing/), then the request will be processed with the corresponding service tier.
- When not set, the default behavior is `auto`.
When the `service_tier` parameter is set, the response body will include the `service_tier` value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
# File 'lib/openai/models/responses/response.rb', line 244
optional :service_tier, enum: -> { OpenAI::Responses::Response::ServiceTier }, nil?: true
#status ⇒ Symbol, ...
The status of the response generation. One of `completed`, `failed`, `in_progress`, `cancelled`, `queued`, or `incomplete`.
# File 'lib/openai/models/responses/response.rb', line 251
optional :status, enum: -> { OpenAI::Responses::ResponseStatus }
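A sketch of branching on status, using the `#error` and `#incomplete_details` attributes documented above:

case response.status
when :completed
  puts response.output_text
when :incomplete
  warn "incomplete: #{response.incomplete_details&.reason}"
when :failed
  raise "generation failed: #{response.error&.message}"
end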
#temperature ⇒ Float?
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both.
# File 'lib/openai/models/responses/response.rb', line 96
required :temperature, Float, nil?: true
#text ⇒ OpenAI::Models::Responses::ResponseTextConfig?
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
- [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
- [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
# File 'lib/openai/models/responses/response.rb', line 261
optional :text, -> { OpenAI::Responses::ResponseTextConfig }
#tool_choice ⇒ Symbol, ...
How the model should select which tool (or tools) to use when generating a response. See the `tools` parameter to see how to specify which tools the model can call.
# File 'lib/openai/models/responses/response.rb', line 104
required :tool_choice, union: -> { OpenAI::Responses::Response::ToolChoice }
#tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>
An array of tools the model may call while generating a response. You can specify which tool to use by setting the `tool_choice` parameter.
We support the following categories of tools:
- **Built-in tools**: Tools that are provided by OpenAI that extend the model’s capabilities, like [web search](https://platform.openai.com/docs/guides/tools-web-search) or [file search](https://platform.openai.com/docs/guides/tools-file-search). Learn more about [built-in tools](https://platform.openai.com/docs/guides/tools).
- **MCP Tools**: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about [MCP Tools](https://platform.openai.com/docs/guides/tools-connectors-mcp).
- **Function calls (custom tools)**: Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about [function calling](https://platform.openai.com/docs/guides/function-calling). You can also use custom tools to call your own code.
# File 'lib/openai/models/responses/response.rb', line 128
required :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }
#top_logprobs ⇒ Integer?
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
# File 'lib/openai/models/responses/response.rb', line 268
optional :top_logprobs, Integer, nil?: true
#top_p ⇒ Float?
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or `temperature` but not both.
# File 'lib/openai/models/responses/response.rb', line 138
required :top_p, Float, nil?: true
#truncation ⇒ Symbol, ...
The truncation strategy to use for the model response.
- `auto`: If the input to this Response exceeds the model’s context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- `disabled` (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
# File 'lib/openai/models/responses/response.rb', line 280
optional :truncation, enum: -> { OpenAI::Responses::Response::Truncation }, nil?: true
#usage ⇒ OpenAI::Models::Responses::ResponseUsage?
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
# File 'lib/openai/models/responses/response.rb', line 287
optional :usage, -> { OpenAI::Responses::ResponseUsage }
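For example, to log token consumption (accessor names follow the usage object of the Responses API; treat them as assumptions):

if (u = response.usage)
  puts "input=#{u.input_tokens} output=#{u.output_tokens} total=#{u.total_tokens}"
end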
#user ⇒ String?
Deprecated. This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use `prompt_cache_key` instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. [Learn more](https://platform.openai.com/docs/guides/safety-best-practices#safety-identifiers).
# File 'lib/openai/models/responses/response.rb', line 299
optional :user, String
Class Method Details
.values ⇒ Array<Symbol>
# File 'lib/openai/models/responses/response.rb', line 409
.variants ⇒ Array(Symbol, OpenAI::Models::Responses::ToolChoiceOptions, OpenAI::Models::Responses::ToolChoiceAllowed, OpenAI::Models::Responses::ToolChoiceTypes, OpenAI::Models::Responses::ToolChoiceFunction, OpenAI::Models::Responses::ToolChoiceMcp, OpenAI::Models::Responses::ToolChoiceCustom, OpenAI::Models::Responses::ToolChoiceApplyPatch, OpenAI::Models::Responses::ToolChoiceShell)
# File 'lib/openai/models/responses/response.rb', line 480
Instance Method Details
#output_text ⇒ String
Convenience property that aggregates all `output_text` items from the `output` list.
If no `output_text` content blocks exist, then an empty string is returned.
# File 'lib/openai/models/responses/response.rb', line 306

def output_text
  texts = []
  output.each do |item|
    next unless item.type == :message

    item.content.each do |content|
      if content.type == :output_text
        texts << content.text
      end
    end
  end
  texts.join
end
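A quick usage sketch (the output shown is illustrative):

text = response.output_text
text.empty? ? warn("no output_text blocks in this response") : puts(text)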