Class: OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams
- Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams
- Defined in: lib/openai/models/evals/run_list_response.rb
Overview
Defined Under Namespace
Classes: Text
Instance Attribute Summary
- #max_completion_tokens ⇒ Integer?
  The maximum number of tokens in the generated output.
- #reasoning_effort ⇒ Symbol, ...
  Constrains effort on reasoning for [reasoning models](platform.openai.com/docs/guides/reasoning).
- #seed ⇒ Integer?
  A seed value to initialize the randomness during sampling.
- #temperature ⇒ Float?
  A higher temperature increases randomness in the outputs.
- #text ⇒ OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams::Text?
  Configuration options for a text response from the model.
- #tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>?
  An array of tools the model may call while generating a response.
- #top_p ⇒ Float?
  An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
Instance Method Summary
- #initialize(max_completion_tokens: nil, reasoning_effort: nil, seed: nil, temperature: nil, text: nil, tools: nil, top_p: nil) ⇒ Object constructor
  Some parameter documentation has been truncated; see SamplingParams for more details.
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(max_completion_tokens: nil, reasoning_effort: nil, seed: nil, temperature: nil, text: nil, tools: nil, top_p: nil) ⇒ Object
Some parameter documentation has been truncated; see OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams for more details.
# File 'lib/openai/models/evals/run_list_response.rb', line 729
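A minimal usage sketch, assuming only the keyword arguments documented on this page; the concrete values are illustrative, not defaults:
# Construct sampling params for a responses data source; every argument is optional.
params = OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams.new(
  max_completion_tokens: 1024,
  temperature: 0.2,
  top_p: 1.0,
  seed: 42
)
params.temperature # => 0.2
params.to_h        # => {max_completion_tokens: 1024, temperature: 0.2, top_p: 1.0, seed: 42}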
Instance Attribute Details
#max_completion_tokens ⇒ Integer?
The maximum number of tokens in the generated output.
# File 'lib/openai/models/evals/run_list_response.rb', line 662
optional :max_completion_tokens, Integer
#reasoning_effort ⇒ Symbol, ...
Constrains effort on reasoning for [reasoning models](platform.openai.com/docs/guides/reasoning). Currently supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before `gpt-5.1` default to `medium` reasoning effort, and do not support `none`.
- The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
# File 'lib/openai/models/evals/run_list_response.rb', line 680
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
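A short sketch of this enum field, assuming OpenAI::ReasoningEffort accepts the documented values as symbols (the field is also nilable):
# :medium is one of the documented enum values; nil leaves the effort unset.
params = OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams.new(
  reasoning_effort: :medium
)
params.reasoning_effort # => :medium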
#seed ⇒ Integer?
A seed value to initialize the randomness during sampling.
# File 'lib/openai/models/evals/run_list_response.rb', line 686
optional :seed, Integer
#temperature ⇒ Float?
A higher temperature increases randomness in the outputs.
# File 'lib/openai/models/evals/run_list_response.rb', line 692
optional :temperature, Float
#text ⇒ OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams::Text?
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
- [Text inputs and outputs](platform.openai.com/docs/guides/text)
- [Structured Outputs](platform.openai.com/docs/guides/structured-outputs)
# File 'lib/openai/models/evals/run_list_response.rb', line 702
optional :text, -> { OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams::Text }
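A sketch of reading this nested configuration back off a listed run; the eval ID and the responses-type data source are assumptions for illustration:
# List runs for an eval and inspect the sampling params of the first one.
client = OpenAI::Client.new # reads OPENAI_API_KEY from the environment
run = client.evals.runs.list("eval_abc123").first
sampling = run.data_source.sampling_params
# sampling.text is nil unless the run configured a text response format.
puts sampling.text.to_h if sampling&.text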
#tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>?
An array of tools the model may call while generating a response. You can specify which tool to use by setting the `tool_choice` parameter.
The two categories of tools you can provide the model are:
- **Built-in tools**: Tools that are provided by OpenAI that extend the model’s capabilities, like [web search](platform.openai.com/docs/guides/tools-web-search) or [file search](platform.openai.com/docs/guides/tools-file-search). Learn more about [built-in tools](platform.openai.com/docs/guides/tools).
- **Function calls (custom tools)**: Functions that are defined by you, enabling the model to call your own code. Learn more about [function calling](platform.openai.com/docs/guides/function-calling).
# File 'lib/openai/models/evals/run_list_response.rb', line 721
optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }
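A sketch of providing a custom function tool in this array, assuming the OpenAI::Models::Responses::FunctionTool shape used elsewhere in the Responses API (a name, a JSON-schema parameters hash, and a strict flag); the tool itself is hypothetical:
# A single custom function tool; built-in tools would be other members of the union.
weather_tool = OpenAI::Models::Responses::FunctionTool.new(
  name: "get_weather",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"]
  },
  strict: true
)
params = OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams.new(
  tools: [weather_tool]
)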
#top_p ⇒ Float?
An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
# File 'lib/openai/models/evals/run_list_response.rb', line 727
optional :top_p, Float
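A brief sketch contrasting the two sampling knobs; the value is illustrative:
# Nucleus sampling: keep only the most probable tokens whose cumulative
# probability reaches 0.9, instead of reshaping the distribution with temperature.
params = OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams.new(top_p: 0.9)
params.top_p # => 0.9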