Class: OpenAI::Models::Beta::AssistantUpdateParams
- Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Beta::AssistantUpdateParams
- Extended by: Internal::Type::RequestParameters::Converter
- Includes: Internal::Type::RequestParameters
- Defined in: lib/openai/models/beta/assistant_update_params.rb
Defined Under Namespace
Modules: Model
Classes: ToolResources
Instance Attribute Summary collapse
- #assistant_id ⇒ String
-
#description ⇒ String?
The description of the assistant.
-
#instructions ⇒ String?
The system instructions that the assistant uses.
-
#metadata ⇒ Hash{Symbol=>String}?
Set of 16 key-value pairs that can be attached to an object.
-
#model ⇒ String, ...
ID of the model to use.
-
#name ⇒ String?
The name of the assistant.
-
#reasoning_effort ⇒ Symbol, ...
Constrains effort on reasoning for [reasoning models](https://platform.openai.com/docs/guides/reasoning).
-
#response_format ⇒ Symbol, ...
Specifies the format that the model must output.
-
#temperature ⇒ Float?
What sampling temperature to use, between 0 and 2.
-
#tool_resources ⇒ OpenAI::Models::Beta::AssistantUpdateParams::ToolResources?
A set of resources that are used by the assistant’s tools.
-
#tools ⇒ Array<OpenAI::Models::Beta::CodeInterpreterTool, OpenAI::Models::Beta::FileSearchTool, OpenAI::Models::Beta::FunctionTool>?
A list of tools enabled on the assistant.
-
#top_p ⇒ Float?
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
Attributes included from Internal::Type::RequestParameters
Instance Method Summary collapse
-
#initialize(assistant_id:, description: nil, instructions: nil, metadata: nil, model: nil, name: nil, reasoning_effort: nil, response_format: nil, temperature: nil, tool_resources: nil, tools: nil, top_p: nil, request_options: {}) ⇒ Object
constructor
Some parameter documentation has been truncated; see AssistantUpdateParams for more details.
Methods included from Internal::Type::RequestParameters::Converter
Methods included from Internal::Type::RequestParameters
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(assistant_id:, description: nil, instructions: nil, metadata: nil, model: nil, name: nil, reasoning_effort: nil, response_format: nil, temperature: nil, tool_resources: nil, tools: nil, top_p: nil, request_options: {}) ⇒ Object
Some parameter documentation has been truncated; see OpenAI::Models::Beta::AssistantUpdateParams for more details.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 134
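A minimal usage sketch of the constructor keywords. The ID, model, and metadata values below are placeholders, and the commented-out SDK call shape is an assumption; consult the SDK's Assistants resource documentation for the exact method.

```ruby
# Hypothetical parameter hash mirroring the constructor keywords above.
params = {
  assistant_id: "asst_abc123",          # required; placeholder ID
  name: "Data Analyst",                 # <= 256 characters
  model: "gpt-4o",
  temperature: 0.2,                     # lower values = more deterministic output
  metadata: { project: "q3-report" }
}

# With the official SDK these would typically be passed to an update call, e.g.:
#   client.beta.assistants.update(**params)
# (call shape is an assumption, not taken from this page)
```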
Instance Attribute Details
#assistant_id ⇒ String
# File 'lib/openai/models/beta/assistant_update_params.rb', line 14

required :assistant_id, String
#description ⇒ String?
The description of the assistant. The maximum length is 512 characters.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 20

optional :description, String, nil?: true
#instructions ⇒ String?
The system instructions that the assistant uses. The maximum length is 256,000 characters.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 27

optional :instructions, String, nil?: true
#metadata ⇒ Hash{Symbol=>String}?
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 38

optional :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true
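The documented limits (at most 16 pairs, 64-character keys, 512-character string values) can be checked client-side before sending a request. This helper is an illustrative sketch, not part of the SDK:

```ruby
# Illustrative client-side check of the documented metadata limits.
def valid_metadata?(metadata)
  metadata.size <= 16 &&
    metadata.keys.all? { |k| k.to_s.length <= 64 } &&
    metadata.values.all? { |v| v.is_a?(String) && v.length <= 512 }
end

valid_metadata?({ team: "analytics", env: "prod" })  # => true
valid_metadata?({ note: "x" * 513 })                 # => false (value too long)
```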
#model ⇒ String, ...
ID of the model to use. You can use the [List models](https://platform.openai.com/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](https://platform.openai.com/docs/models) for descriptions of them.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 48

optional :model, union: -> { OpenAI::Beta::AssistantUpdateParams::Model }
#name ⇒ String?
The name of the assistant. The maximum length is 256 characters.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 54

optional :name, String, nil?: true
#reasoning_effort ⇒ Symbol, ...
Constrains effort on reasoning for [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 72

optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
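A sketch of validating an effort value against the set listed above. The symbol list is transcribed from this page's documentation; the SDK's actual OpenAI::ReasoningEffort enum may differ in form:

```ruby
# Supported efforts as documented above (not the SDK's own constant).
REASONING_EFFORTS = %i[none minimal low medium high xhigh].freeze

def reasoning_effort_supported?(effort)
  REASONING_EFFORTS.include?(effort)
end

reasoning_effort_supported?(:medium)  # => true
reasoning_effort_supported?(:turbo)   # => false
```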
#response_format ⇒ Symbol, ...
Specifies the format that the model must output. Compatible with [GPT-4o](https://platform.openai.com/docs/models#gpt-4o), [GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
Setting to `{ "type": "json_schema", "json_schema": … }` enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.
Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 97

optional :response_format, union: -> { OpenAI::Beta::AssistantResponseFormatOption }, nil?: true
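The two hash shapes described above can be sketched as plain Ruby literals. The weather_report schema is a made-up example for illustration, not taken from the SDK:

```ruby
# JSON mode: the model is guaranteed to emit valid JSON (remember to also
# instruct it to produce JSON via a system or user message).
json_mode = { type: "json_object" }

# Structured Outputs: constrain output to a supplied JSON schema.
# The schema content below is hypothetical.
json_schema = {
  type: "json_schema",
  json_schema: {
    name: "weather_report",
    schema: {
      type: "object",
      properties: { city: { type: "string" }, temp_c: { type: "number" } },
      required: ["city", "temp_c"]
    }
  }
}
```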
#temperature ⇒ Float?
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 105

optional :temperature, Float, nil?: true
#tool_resources ⇒ OpenAI::Models::Beta::AssistantUpdateParams::ToolResources?
A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 114

optional :tool_resources, -> { OpenAI::Beta::AssistantUpdateParams::ToolResources }, nil?: true
#tools ⇒ Array<OpenAI::Models::Beta::CodeInterpreterTool, OpenAI::Models::Beta::FileSearchTool, OpenAI::Models::Beta::FunctionTool>?
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 122

optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Beta::AssistantTool] }
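A sketch of a tools array covering the three documented tool types; the get_weather function definition is a hypothetical example:

```ruby
tools = [
  { type: "code_interpreter" },
  { type: "file_search" },
  {
    type: "function",
    function: {
      name: "get_weather",  # hypothetical function, for illustration only
      parameters: {
        type: "object",
        properties: { city: { type: "string" } }
      }
    }
  }
]

# Documented limit: at most 128 tools per assistant.
raise "too many tools" if tools.length > 128
```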
#top_p ⇒ Float?
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 132

optional :top_p, Float, nil?: true
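The recommendation above (alter temperature or top_p, but not both) can be expressed as a simple guard. This is an illustrative client-side convention, not behavior enforced by the SDK or the API:

```ruby
# Build sampling params while honoring the "one of temperature / top_p" guidance.
def sampling_params(temperature: nil, top_p: nil)
  if temperature && top_p
    raise ArgumentError, "set temperature or top_p, not both"
  end
  { temperature: temperature, top_p: top_p }.compact
end

sampling_params(top_p: 0.1)        # => { top_p: 0.1 }
sampling_params(temperature: 0.8)  # => { temperature: 0.8 }
```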