Class: OpenAI::Models::Responses::ResponseCreateParams
- Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Responses::ResponseCreateParams
- Extended by: Internal::Type::RequestParameters::Converter
- Includes: Internal::Type::RequestParameters
- Defined in: lib/openai/models/responses/response_create_params.rb
Overview
Defined Under Namespace
Modules: Conversation, Input, PromptCacheRetention, ServiceTier, ToolChoice, Truncation Classes: StreamOptions
Instance Attribute Summary
-
#background ⇒ Boolean?
Whether to run the model response in the background.
-
#conversation ⇒ String, ...
The conversation that this response belongs to.
-
#include ⇒ Array<Symbol, OpenAI::Models::Responses::ResponseIncludable>?
Specify additional output data to include in the model response.
-
#input ⇒ String, ...
Text, image, or file inputs to the model, used to generate a response.
-
#instructions ⇒ String?
A system (or developer) message inserted into the model’s context.
-
#max_output_tokens ⇒ Integer?
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and [reasoning tokens](platform.openai.com/docs/guides/reasoning).
-
#max_tool_calls ⇒ Integer?
The maximum number of total calls to built-in tools that can be processed in a response.
-
#metadata ⇒ Hash{Symbol=>String}?
Set of 16 key-value pairs that can be attached to an object.
-
#model ⇒ String, ...
Model ID used to generate the response, like `gpt-4o` or `o3`.
-
#parallel_tool_calls ⇒ Boolean?
Whether to allow the model to run tool calls in parallel.
-
#previous_response_id ⇒ String?
The unique ID of the previous response to the model.
-
#prompt ⇒ OpenAI::Models::Responses::ResponsePrompt?
Reference to a prompt template and its variables.
-
#prompt_cache_key ⇒ String?
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates.
-
#prompt_cache_retention ⇒ Symbol, ...
The retention policy for the prompt cache.
-
#reasoning ⇒ OpenAI::Models::Reasoning?
**gpt-5 and o-series models only**.
-
#safety_identifier ⇒ String?
A stable identifier used to help detect users of your application that may be violating OpenAI’s usage policies.
-
#service_tier ⇒ Symbol, ...
Specifies the processing type used for serving the request.
-
#store ⇒ Boolean?
Whether to store the generated model response for later retrieval via API.
-
#stream_options ⇒ OpenAI::Models::Responses::ResponseCreateParams::StreamOptions?
Options for streaming responses.
-
#temperature ⇒ Float?
What sampling temperature to use, between 0 and 2.
-
#text ⇒ OpenAI::Models::Responses::ResponseTextConfig?
Configuration options for a text response from the model.
-
#tool_choice ⇒ Symbol, ...
How the model should select which tool (or tools) to use when generating a response.
-
#tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>?
An array of tools the model may call while generating a response.
-
#top_logprobs ⇒ Integer?
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
-
#top_p ⇒ Float?
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
-
#truncation ⇒ Symbol, ...
The truncation strategy to use for the model response.
-
#user ⇒ String? (deprecated)
This field is being replaced by `safety_identifier` and `prompt_cache_key`.
Attributes included from Internal::Type::RequestParameters
Class Method Summary
- .values ⇒ Array<Symbol>
- .variants ⇒ Array(Symbol, OpenAI::Models::Responses::ToolChoiceOptions, OpenAI::Models::Responses::ToolChoiceAllowed, OpenAI::Models::Responses::ToolChoiceTypes, OpenAI::Models::Responses::ToolChoiceFunction, OpenAI::Models::Responses::ToolChoiceMcp, OpenAI::Models::Responses::ToolChoiceCustom, OpenAI::Models::Responses::ToolChoiceApplyPatch, OpenAI::Models::Responses::ToolChoiceShell)
Instance Method Summary
-
#initialize(background: nil, conversation: nil, include: nil, input: nil, instructions: nil, max_output_tokens: nil, max_tool_calls: nil, metadata: nil, model: nil, parallel_tool_calls: nil, previous_response_id: nil, prompt: nil, prompt_cache_key: nil, prompt_cache_retention: nil, reasoning: nil, safety_identifier: nil, service_tier: nil, store: nil, stream_options: nil, temperature: nil, text: nil, tool_choice: nil, tools: nil, top_logprobs: nil, top_p: nil, truncation: nil, user: nil, request_options: {}) ⇒ Object
constructor
Some parameter documentation has been truncated; see StreamOptions for more details.
Methods included from Internal::Type::RequestParameters::Converter
Methods included from Internal::Type::RequestParameters
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(background: nil, conversation: nil, include: nil, input: nil, instructions: nil, max_output_tokens: nil, max_tool_calls: nil, metadata: nil, model: nil, parallel_tool_calls: nil, previous_response_id: nil, prompt: nil, prompt_cache_key: nil, prompt_cache_retention: nil, reasoning: nil, safety_identifier: nil, service_tier: nil, store: nil, stream_options: nil, temperature: nil, text: nil, tool_choice: nil, tools: nil, top_logprobs: nil, top_p: nil, truncation: nil, user: nil, request_options: {}) ⇒ Object
Some parameter documentation has been truncated; see StreamOptions for more details.
Options for streaming responses. Only set this when you set `stream: true`.
# File 'lib/openai/models/responses/response_create_params.rb', line 311
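Example (illustrative sketch, not from the source: the client constructor and the `client.responses.create` entry point are assumptions based on typical SDK usage):

require "openai"

# Sketch only: construct a client and pass create params as keyword arguments.
client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

response = client.responses.create(
  model: "gpt-4o",                    # see #model
  input: "Write a haiku about Ruby.", # see #input
  max_output_tokens: 256,             # see #max_output_tokens
  store: true                         # see #store
)
puts response.id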
Instance Attribute Details
#background ⇒ Boolean?
Whether to run the model response in the background. [Learn more](platform.openai.com/docs/guides/background).
# File 'lib/openai/models/responses/response_create_params.rb', line 18

optional :background, OpenAI::Internal::Type::Boolean, nil?: true
#conversation ⇒ String, ...
The conversation that this response belongs to. Items from this conversation are prepended to `input_items` for this response request. Input items and output items from this response are automatically added to this conversation after this response completes.
# File 'lib/openai/models/responses/response_create_params.rb', line 27

optional :conversation,
         union: -> { OpenAI::Responses::ResponseCreateParams::Conversation },
         nil?: true
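Example (sketch; the conversation ID is hypothetical, and `client` is the assumed client from the constructor example):

# `conversation` cannot be combined with `previous_response_id`.
client.responses.create(
  model: "gpt-4o",
  input: "Continue our earlier discussion.",
  conversation: "conv_abc123" # hypothetical conversation ID
)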
#include ⇒ Array<Symbol, OpenAI::Models::Responses::ResponseIncludable>?
Specify additional output data to include in the model response. Currently supported values are:
-
`web_search_call.action.sources`: Include the sources of the web search tool call.
-
`code_interpreter_call.outputs`: Includes the outputs of Python code execution in code interpreter tool call items.
-
`computer_call_output.output.image_url`: Include image URLs from the computer call output.
-
`file_search_call.results`: Include the search results of the file search tool call.
-
`message.input_image.image_url`: Include image URLs from the input message.
-
`message.output_text.logprobs`: Include logprobs with assistant messages.
-
`reasoning.encrypted_content`: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the `store` parameter is set to `false`, or when an organization is enrolled in the zero data retention program).
# File 'lib/openai/models/responses/response_create_params.rb', line 54

optional :include,
         -> { OpenAI::Internal::Type::ArrayOf[enum: OpenAI::Responses::ResponseIncludable] },
         nil?: true
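Example (sketch; the built-in web search tool shape is an assumption):

client.responses.create(
  model: "gpt-4o",
  input: "Find recent Ruby 3.4 release notes.",
  tools: [{ type: :web_search }],              # assumed built-in tool shape
  include: [:"web_search_call.action.sources"] # symbols per the enum above
)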
#input ⇒ String, ...
Text, image, or file inputs to the model, used to generate a response.
Learn more:
-
[Text inputs and outputs](platform.openai.com/docs/guides/text)
-
[Image inputs](platform.openai.com/docs/guides/images)
-
[File inputs](platform.openai.com/docs/guides/pdf-files)
-
[Conversation state](platform.openai.com/docs/guides/conversation-state)
-
[Function calling](platform.openai.com/docs/guides/function-calling)
# File 'lib/openai/models/responses/response_create_params.rb', line 70

optional :input, union: -> { OpenAI::Responses::ResponseCreateParams::Input }
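Example (sketch; the structured item shapes follow the Responses API input format and are assumptions here):

# Plain string input
client.responses.create(model: "gpt-4o", input: "Summarize RFC 2119.")

# Array of structured input items mixing text and an image
client.responses.create(
  model: "gpt-4o",
  input: [
    {
      role: :user,
      content: [
        { type: :input_text, text: "What is in this image?" },
        { type: :input_image, image_url: "https://example.com/cat.png" }
      ]
    }
  ]
)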
#instructions ⇒ String?
A system (or developer) message inserted into the model’s context.
When used along with `previous_response_id`, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
# File 'lib/openai/models/responses/response_create_params.rb', line 80

optional :instructions, String, nil?: true
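Example (sketch of the swap-out behavior; assumes the response object exposes its ID as `first.id`):

first = client.responses.create(
  model: "gpt-4o",
  instructions: "Answer like a pirate.",
  input: "Hello!"
)

# Instructions are not carried over via previous_response_id, so this turn
# gets a fresh system message instead of appending to the old one.
client.responses.create(
  model: "gpt-4o",
  previous_response_id: first.id,
  instructions: "Answer formally.",
  input: "And now?"
)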
#max_output_tokens ⇒ Integer?
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and [reasoning tokens](platform.openai.com/docs/guides/reasoning).
# File 'lib/openai/models/responses/response_create_params.rb', line 88

optional :max_output_tokens, Integer, nil?: true
#max_tool_calls ⇒ Integer?
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
# File 'lib/openai/models/responses/response_create_params.rb', line 97

optional :max_tool_calls, Integer, nil?: true
#metadata ⇒ Hash{Symbol=>String}?
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
# File 'lib/openai/models/responses/response_create_params.rb', line 108

optional :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true
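Example (sketch; key and value names are hypothetical):

client.responses.create(
  model: "gpt-4o",
  input: "Hi",
  metadata: { ticket_id: "T-1234", env: "staging" } # up to 16 short pairs
)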
#model ⇒ String, ...
Model ID used to generate the response, like `gpt-4o` or `o3`. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the [model guide](platform.openai.com/docs/models) to browse and compare available models.
# File 'lib/openai/models/responses/response_create_params.rb', line 118

optional :model, union: -> { OpenAI::ResponsesModel }
#parallel_tool_calls ⇒ Boolean?
Whether to allow the model to run tool calls in parallel.
# File 'lib/openai/models/responses/response_create_params.rb', line 124

optional :parallel_tool_calls, OpenAI::Internal::Type::Boolean, nil?: true
#previous_response_id ⇒ String?
The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about [conversation state](platform.openai.com/docs/guides/conversation-state). Cannot be used in conjunction with `conversation`.
# File 'lib/openai/models/responses/response_create_params.rb', line 133

optional :previous_response_id, String, nil?: true
#prompt ⇒ OpenAI::Models::Responses::ResponsePrompt?
Reference to a prompt template and its variables. [Learn more](platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
# File 'lib/openai/models/responses/response_create_params.rb', line 140

optional :prompt, -> { OpenAI::Responses::ResponsePrompt }, nil?: true
#prompt_cache_key ⇒ String?
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the `user` field. [Learn more](platform.openai.com/docs/guides/prompt-caching).
# File 'lib/openai/models/responses/response_create_params.rb', line 148

optional :prompt_cache_key, String
#prompt_cache_retention ⇒ Symbol, ...
The retention policy for the prompt cache. Set to `24h` to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. [Learn more](platform.openai.com/docs/guides/prompt-caching#prompt-cache-retention).
# File 'lib/openai/models/responses/response_create_params.rb', line 157

optional :prompt_cache_retention,
         enum: -> { OpenAI::Responses::ResponseCreateParams::PromptCacheRetention },
         nil?: true
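Example (sketch, combining the cache key with the `24h` retention value described above):

client.responses.create(
  model: "gpt-4o",
  input: "Hi",
  prompt_cache_key: "support-bot-v3", # see #prompt_cache_key
  prompt_cache_retention: :"24h"
)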
#reasoning ⇒ OpenAI::Models::Reasoning?
**gpt-5 and o-series models only**
Configuration options for [reasoning models](platform.openai.com/docs/guides/reasoning).
# File 'lib/openai/models/responses/response_create_params.rb', line 168

optional :reasoning, -> { OpenAI::Reasoning }, nil?: true
#safety_identifier ⇒ String?
A stable identifier used to help detect users of your application that may be violating OpenAI’s usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. [Learn more](platform.openai.com/docs/guides/safety-best-practices#safety-identifiers).
# File 'lib/openai/models/responses/response_create_params.rb', line 178

optional :safety_identifier, String
#service_tier ⇒ Symbol, ...
Specifies the processing type used for serving the request.
-
If set to `auto`, then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use `default`.
-
If set to `default`, then the request will be processed with the standard pricing and performance for the selected model.
-
If set to `[flex](platform.openai.com/docs/guides/flex-processing)` or `[priority](openai.com/api-priority-processing/)`, then the request will be processed with the corresponding service tier.
-
When not set, the default behavior is `auto`.
When the `service_tier` parameter is set, the response body will include the `service_tier` value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
# File 'lib/openai/models/responses/response_create_params.rb', line 199

optional :service_tier,
         enum: -> { OpenAI::Responses::ResponseCreateParams::ServiceTier },
         nil?: true
#store ⇒ Boolean?
Whether to store the generated model response for later retrieval via API.
# File 'lib/openai/models/responses/response_create_params.rb', line 205

optional :store, OpenAI::Internal::Type::Boolean, nil?: true
#stream_options ⇒ OpenAI::Models::Responses::ResponseCreateParams::StreamOptions?
Options for streaming responses. Only set this when you set `stream: true`.
# File 'lib/openai/models/responses/response_create_params.rb', line 211

optional :stream_options,
         -> { OpenAI::Responses::ResponseCreateParams::StreamOptions },
         nil?: true
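Example (sketch; the streaming entry point is an assumption, and `include_obfuscation` is the StreamOptions field referenced in the constructor note):

stream = client.responses.stream(
  model: "gpt-4o",
  input: "Tell me a short story.",
  stream_options: { include_obfuscation: false }
)
stream.each { |event| print event.delta if event.respond_to?(:delta) }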
#temperature ⇒ Float?
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both.
# File 'lib/openai/models/responses/response_create_params.rb', line 220

optional :temperature, Float, nil?: true
#text ⇒ OpenAI::Models::Responses::ResponseTextConfig?
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
-
[Text inputs and outputs](platform.openai.com/docs/guides/text)
-
[Structured Outputs](platform.openai.com/docs/guides/structured-outputs)
# File 'lib/openai/models/responses/response_create_params.rb', line 230

optional :text,
         union: -> {
           OpenAI::UnionOf[
             OpenAI::Responses::ResponseTextConfig,
             OpenAI::StructuredOutput::JsonSchemaConverter
           ]
         }
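Example (sketch; the `CalendarEvent` model and passing an `OpenAI::BaseModel` subclass as `text` follow the SDK's structured-output pattern, and are assumptions here):

class CalendarEvent < OpenAI::BaseModel
  required :name, String
  required :date, String
  required :participants, OpenAI::ArrayOf[String]
end

response = client.responses.create(
  model: "gpt-4o",
  input: "Alice and Bob are going to a science fair on Friday.",
  text: CalendarEvent # accepted because the union includes JsonSchemaConverter
)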
#tool_choice ⇒ Symbol, ...
How the model should select which tool (or tools) to use when generating a response. See the `tools` parameter to see how to specify which tools the model can call.
# File 'lib/openai/models/responses/response_create_params.rb', line 244

optional :tool_choice, union: -> { OpenAI::Responses::ResponseCreateParams::ToolChoice }
#tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>?
An array of tools the model may call while generating a response. You can specify which tool to use by setting the `tool_choice` parameter.
We support the following categories of tools:
-
**Built-in tools**: Tools that are provided by OpenAI that extend the model’s capabilities, like [web search](platform.openai.com/docs/guides/tools-web-search) or [file search](platform.openai.com/docs/guides/tools-file-search). Learn more about [built-in tools](platform.openai.com/docs/guides/tools).
-
**MCP Tools**: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about [MCP Tools](platform.openai.com/docs/guides/tools-connectors-mcp).
-
**Function calls (custom tools)**: Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about [function calling](platform.openai.com/docs/guides/function-calling). You can also use custom tools to call your own code.
# File 'lib/openai/models/responses/response_create_params.rb', line 268

optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }
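Example (sketch of a function tool; the field names follow the Responses API function-tool shape and are assumptions here):

weather_tool = {
  type: :function,
  name: "get_weather",
  description: "Look up current weather for a city.",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"]
  },
  strict: true
}

client.responses.create(
  model: "gpt-4o",
  input: "What's the weather in Lisbon?",
  tools: [weather_tool],
  tool_choice: :auto # see #tool_choice
)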
#top_logprobs ⇒ Integer?
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
# File 'lib/openai/models/responses/response_create_params.rb', line 275

optional :top_logprobs, Integer, nil?: true
#top_p ⇒ Float?
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or `temperature` but not both.
# File 'lib/openai/models/responses/response_create_params.rb', line 285

optional :top_p, Float, nil?: true
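Example (sketch illustrating the either/or recommendation):

# Tune one sampling knob, not both.
client.responses.create(model: "gpt-4o", input: "Brainstorm names.", temperature: 0.9)
client.responses.create(model: "gpt-4o", input: "Brainstorm names.", top_p: 0.1)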
#truncation ⇒ Symbol, ...
The truncation strategy to use for the model response.
-
`auto`: If the input to this Response exceeds the model’s context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
-
`disabled` (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
# File 'lib/openai/models/responses/response_create_params.rb', line 297

optional :truncation,
         enum: -> { OpenAI::Responses::ResponseCreateParams::Truncation },
         nil?: true
#user ⇒ String?
This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use `prompt_cache_key` instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. [Learn more](platform.openai.com/docs/guides/safety-best-practices#safety-identifiers).
# File 'lib/openai/models/responses/response_create_params.rb', line 309

optional :user, String
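Example (migration sketch off the deprecated field; `current_user_email` is hypothetical):

require "digest"

client.responses.create(
  model: "gpt-4o",
  input: "Hi",
  safety_identifier: Digest::SHA256.hexdigest(current_user_email), # hashed, per the guidance above
  prompt_cache_key: "chat-ui-v2"                                   # cache bucketing
)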
Class Method Details
.values ⇒ Array<Symbol>
# File 'lib/openai/models/responses/response_create_params.rb', line 422
.variants ⇒ Array(Symbol, OpenAI::Models::Responses::ToolChoiceOptions, OpenAI::Models::Responses::ToolChoiceAllowed, OpenAI::Models::Responses::ToolChoiceTypes, OpenAI::Models::Responses::ToolChoiceFunction, OpenAI::Models::Responses::ToolChoiceMcp, OpenAI::Models::Responses::ToolChoiceCustom, OpenAI::Models::Responses::ToolChoiceApplyPatch, OpenAI::Models::Responses::ToolChoiceShell)
# File 'lib/openai/models/responses/response_create_params.rb', line 384