Class: OpenAI::Models::Evals::RunCreateParams::DataSource::CreateEvalResponsesRunDataSource::SamplingParams

Inherits:
Internal::Type::BaseModel
  • Object
Defined in:
lib/openai/models/evals/run_create_params.rb

Defined Under Namespace

Classes: Text

Instance Attribute Summary

Instance Method Summary

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(max_completion_tokens: nil, reasoning_effort: nil, seed: nil, temperature: nil, text: nil, tools: nil, top_p: nil) ⇒ Object

Some parameter documentation has been truncated; see OpenAI::Models::Evals::RunCreateParams::DataSource::CreateEvalResponsesRunDataSource::SamplingParams for more details.

Parameters:



# File 'lib/openai/models/evals/run_create_params.rb', line 658
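A minimal sketch of the parameters this initializer accepts, written as a plain Ruby hash (BaseModel attributes coerce from hash input); the attribute names come from the signature above, while the concrete values are illustrative only:

```ruby
# Illustrative sampling parameters for an eval run. Every attribute is
# optional, so any subset of these keys is valid.
sampling_params = {
  max_completion_tokens: 256,
  reasoning_effort: :low, # one of :none, :minimal, :low, :medium, :high, :xhigh
  seed: 42,               # fixed seed for reproducible sampling
  temperature: 0.7,
  top_p: 0.9
}

# A partial hash is equally valid:
minimal_params = { temperature: 0.0 }
```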

Instance Attribute Details

#max_completion_tokens ⇒ Integer?

The maximum number of tokens in the generated output.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/evals/run_create_params.rb', line 590

optional :max_completion_tokens, Integer

#reasoning_effort ⇒ Symbol, ...

Constrains effort on reasoning for [reasoning models](platform.openai.com/docs/guides/reasoning). Currently supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool calls are supported for all reasoning values in gpt-5.1.

  • All models before `gpt-5.1` default to `medium` reasoning effort, and do not support `none`.

  • The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.

  • `xhigh` is supported for all models after `gpt-5.1-codex-max`.

Returns:



# File 'lib/openai/models/evals/run_create_params.rb', line 608

optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
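The per-model defaults in the bullets above can be summarized as a small lookup; this is a sketch derived from that list, not an API of the SDK, and `gpt-4o` stands in here for any pre-`gpt-5.1` model:

```ruby
# Default reasoning effort per model, per the notes above.
DEFAULT_REASONING_EFFORT = {
  "gpt-5.1"   => :none,   # gpt-5.1 supports :none, :low, :medium, :high
  "gpt-5-pro" => :high,   # gpt-5-pro only supports :high
  "gpt-4o"    => :medium  # pre-gpt-5.1 models default to :medium, no :none
}.freeze

def default_effort(model)
  # Models before gpt-5.1 fall back to :medium.
  DEFAULT_REASONING_EFFORT.fetch(model, :medium)
end
```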

#seedInteger?

A seed value used to initialize randomness during sampling.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/evals/run_create_params.rb', line 614

optional :seed, Integer

#temperature ⇒ Float?

A higher temperature increases randomness in the outputs.

Returns:

  • (Float, nil)


# File 'lib/openai/models/evals/run_create_params.rb', line 620

optional :temperature, Float

#text ⇒ OpenAI::Models::Evals::RunCreateParams::DataSource::CreateEvalResponsesRunDataSource::SamplingParams::Text?

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:



# File 'lib/openai/models/evals/run_create_params.rb', line 630

optional :text,
-> { OpenAI::Evals::RunCreateParams::DataSource::CreateEvalResponsesRunDataSource::SamplingParams::Text }

#tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>?

An array of tools the model may call while generating a response. You can specify which tool to use by setting the `tool_choice` parameter.

The two categories of tools you can provide the model are:



# File 'lib/openai/models/evals/run_create_params.rb', line 650

optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }
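As a hedged example of the most common entry in this array, a function tool can be written as a hash following the public Responses API function-tool shape; `get_weather` and its parameter schema are invented for illustration:

```ruby
# A single function tool definition, as it might appear in the tools array.
# The function name and JSON Schema here are hypothetical.
weather_tool = {
  type: "function",
  name: "get_weather",
  description: "Look up the current weather for a city",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"]
  }
}

tools = [weather_tool]
```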

#top_p ⇒ Float?

An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

Returns:

  • (Float, nil)


# File 'lib/openai/models/evals/run_create_params.rb', line 656

optional :top_p, Float