Class: OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams
- Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams
- Defined in: lib/openai/models/evals/create_eval_completions_run_data_source.rb
Overview
Defined Under Namespace
Modules: ResponseFormat
Instance Attribute Summary collapse
- #max_completion_tokens ⇒ Integer?
  The maximum number of tokens in the generated output.
- #reasoning_effort ⇒ Symbol, ...
  Constrains effort on reasoning for [reasoning models](platform.openai.com/docs/guides/reasoning).
- #response_format ⇒ OpenAI::Models::ResponseFormatText, ...
  An object specifying the format that the model must output.
- #seed ⇒ Integer?
  A seed value used to initialize randomness during sampling.
- #temperature ⇒ Float?
  A higher temperature increases randomness in the outputs.
- #tools ⇒ Array<OpenAI::Models::Chat::ChatCompletionFunctionTool>?
  A list of tools the model may call.
- #top_p ⇒ Float?
  An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
Instance Method Summary collapse
- #initialize(max_completion_tokens: nil, reasoning_effort: nil, response_format: nil, seed: nil, temperature: nil, tools: nil, top_p: nil) ⇒ Object (constructor)
  Some parameter documentation has been truncated; see SamplingParams for more details.
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(max_completion_tokens: nil, reasoning_effort: nil, response_format: nil, seed: nil, temperature: nil, tools: nil, top_p: nil) ⇒ Object
Some parameter documentation has been truncated; see OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams for more details.
# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 521
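As a sketch of how these keyword arguments fit together, the plain hash below mirrors the constructor's parameters; the specific values are illustrative only and are not taken from the source.

```ruby
# Illustrative sampling parameters mirroring the keyword arguments of
# SamplingParams#initialize; every key is optional and defaults to nil.
sampling_params = {
  max_completion_tokens: 256, # cap on generated output tokens
  reasoning_effort: :low,     # :none, :minimal, :low, :medium, :high, or :xhigh
  seed: 42,                   # fixes sampling randomness for reproducible runs
  temperature: 0.2,           # lower values make outputs less random
  top_p: 1.0                  # nucleus sampling; 1.0 includes all tokens
}

# Parameters left nil can simply be dropped before sending a request.
payload = sampling_params.compact
```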
Instance Attribute Details
#max_completion_tokens ⇒ Integer?
The maximum number of tokens in the generated output.
# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 459
optional :max_completion_tokens, Integer
#reasoning_effort ⇒ Symbol, ...
Constrains effort on reasoning for [reasoning models](platform.openai.com/docs/guides/reasoning). Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool calls are supported for all reasoning values in `gpt-5.1`.
- All models before `gpt-5.1` default to `medium` reasoning effort, and do not support `none`.
- The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
- `xhigh` is supported for all models after `gpt-5.1-codex-max`.
# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 477
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
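The per-model rules above can be condensed into a small lookup. This is only a sketch of the defaults described in the bullets; the `"gpt-4.1"` lookup is a hypothetical example of a pre-`gpt-5.1` model.

```ruby
# Default reasoning effort per model, per the notes above.
default_effort = Hash.new(:medium)      # models before gpt-5.1 default to :medium
default_effort["gpt-5.1"]   = :none     # gpt-5.1 defaults to :none (no reasoning)
default_effort["gpt-5-pro"] = :high     # gpt-5-pro defaults to (and only supports) :high

default_effort["gpt-4.1"]  # => :medium (falls through to the Hash default)
```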
#response_format ⇒ OpenAI::Models::ResponseFormatText, ...
An object specifying the format that the model must output.
Setting to `{ "type": "json_schema", "json_schema": ... }` enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](platform.openai.com/docs/guides/structured-outputs).
Setting to `{ "type": "json_object" }` enables the older JSON mode, which ensures the message the model generates is valid JSON. Using `json_schema` is preferred for models that support it.
# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 492
optional :response_format, union: -> { OpenAI::Evals::CreateEvalCompletionsRunDataSource::SamplingParams::ResponseFormat }
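As plain hashes, the two response_format shapes described above look like the following; the `"verdict"` schema is a made-up example, not part of this API's documentation.

```ruby
# Structured Outputs: the model must match the supplied JSON schema.
json_schema_format = {
  type: "json_schema",
  json_schema: {
    name: "verdict",
    schema: {
      type: "object",
      properties: { passed: { type: "boolean" } },
      required: ["passed"]
    }
  }
}

# Older JSON mode: output is guaranteed to be valid JSON, but is not
# constrained to any particular schema.
json_object_format = { type: "json_object" }
```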
#seed ⇒ Integer?
A seed value used to initialize randomness during sampling.
# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 499
optional :seed, Integer
#temperature ⇒ Float?
A higher temperature increases randomness in the outputs.
# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 505
optional :temperature, Float
#tools ⇒ Array<OpenAI::Models::Chat::ChatCompletionFunctionTool>?
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 513
optional :tools, -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Chat::ChatCompletionFunctionTool] }
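A single function tool entry, of the kind the tools array holds, might look like this as a plain hash; the `get_weather` function and its parameters are hypothetical.

```ruby
# One function tool; the model may generate JSON arguments matching the
# declared parameters schema. At most 128 functions are supported.
weather_tool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Look up the current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"]
    }
  }
}

tools = [weather_tool]
raise ArgumentError, "at most 128 functions are supported" if tools.size > 128
```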
#top_p ⇒ Float?
An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 519
optional :top_p, Float