Class: OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams

Inherits:
Internal::Type::BaseModel
Defined in:
lib/openai/models/evals/create_eval_completions_run_data_source.rb

Overview


Defined Under Namespace

Modules: ResponseFormat

Instance Attribute Summary

Instance Method Summary

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(max_completion_tokens: nil, reasoning_effort: nil, response_format: nil, seed: nil, temperature: nil, tools: nil, top_p: nil) ⇒ Object

Some parameter documentation has been truncated; see OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams for more details.




# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 521
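As a sketch of how these keyword arguments fit together, the plain-hash equivalent of a fully specified sampling-params payload looks like the following (all keys mirror the attributes documented below and are optional; the values shown are illustrative, not defaults):

```ruby
# Illustrative sampling_params payload; each key corresponds to an
# attribute documented on this page, and all of them are optional.
sampling_params = {
  max_completion_tokens: 1024,              # cap on generated output tokens
  reasoning_effort: :low,                   # :none, :minimal, :low, :medium, :high, or :xhigh
  response_format: { type: "json_object" }, # older JSON mode; json_schema preferred
  seed: 42,                                 # fixes sampling randomness for reproducibility
  temperature: 0.2,                         # lower values reduce output randomness
  tools: [],                                # function tools the model may call
  top_p: 1.0                                # nucleus sampling; 1.0 keeps all tokens
}
```

Passing such a hash to `SamplingParams.new` (via keyword splat) or using the attributes directly should be equivalent, since the model coerces known fields on construction.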

Instance Attribute Details

#max_completion_tokens ⇒ Integer?

The maximum number of tokens in the generated output.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 459

optional :max_completion_tokens, Integer

#reasoning_effort ⇒ Symbol, ...

Constrains effort on reasoning for [reasoning models](platform.openai.com/docs/guides/reasoning). Currently supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool calls are supported for all reasoning values in `gpt-5.1`.

  • All models before `gpt-5.1` default to `medium` reasoning effort, and do not support `none`.

  • The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.

  • `xhigh` is supported for all models after `gpt-5.1-codex-max`.

Returns:



# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 477

optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

#response_format ⇒ OpenAI::Models::ResponseFormatText, ...

An object specifying the format that the model must output.

Setting to `{ "type": "json_schema", "json_schema": … }` enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](platform.openai.com/docs/guides/structured-outputs).

Setting to `{ "type": "json_object" }` enables the older JSON mode, which ensures the message the model generates is valid JSON. Using `json_schema` is preferred for models that support it.



# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 492

optional :response_format,
union: -> { OpenAI::Evals::CreateEvalCompletionsRunDataSource::SamplingParams::ResponseFormat }
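For instance, a Structured Outputs response format can be expressed as the following plain-hash sketch (the schema name and shape here are hypothetical):

```ruby
# Hypothetical Structured Outputs response format; the model's output
# must then validate against the supplied JSON Schema.
response_format = {
  type: "json_schema",
  json_schema: {
    name: "eval_answer",                # hypothetical schema name
    strict: true,                       # enforce the schema exactly
    schema: {
      type: "object",
      properties: { answer: { type: "string" } },
      required: ["answer"],
      additionalProperties: false       # required for strict Structured Outputs
    }
  }
}
```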

#seed ⇒ Integer?

A seed value to initialize the randomness during sampling.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 499

optional :seed, Integer

#temperature ⇒ Float?

A higher temperature increases randomness in the outputs.

Returns:

  • (Float, nil)


# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 505

optional :temperature, Float

#tools ⇒ Array<OpenAI::Models::Chat::ChatCompletionFunctionTool>?

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.



# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 513

optional :tools, -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Chat::ChatCompletionFunctionTool] }
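Each entry in the array is a function tool. A minimal plain-hash sketch of one (the function name and parameter schema are hypothetical):

```ruby
# Hypothetical function tool; the parameters field is a JSON Schema
# describing the arguments the model may generate for the function.
tools = [
  {
    type: "function",
    function: {
      name: "get_weather",              # hypothetical function name
      description: "Look up current weather for a city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"]
      }
    }
  }
]
```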

#top_p ⇒ Float?

An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

Returns:

  • (Float, nil)


# File 'lib/openai/models/evals/create_eval_completions_run_data_source.rb', line 519

optional :top_p, Float