Class: OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams

Inherits:
Internal::Type::BaseModel
Defined in:
lib/openai/models/evals/run_cancel_response.rb

Defined Under Namespace

Classes: Text

Instance Attribute Summary

Instance Method Summary

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(max_completion_tokens: nil, reasoning_effort: nil, seed: nil, temperature: nil, text: nil, tools: nil, top_p: nil) ⇒ Object

Some parameter documentation has been truncated; see OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams for more details.

Parameters:

  • max_completion_tokens (Integer, nil) (defaults to: nil) -- The maximum number of tokens in the generated output.

  • reasoning_effort (Symbol, OpenAI::ReasoningEffort, nil) (defaults to: nil) -- Constrains effort on reasoning for reasoning models.

  • seed (Integer, nil) (defaults to: nil) -- A seed value to initialize the randomness during sampling.

  • temperature (Float, nil) (defaults to: nil) -- A higher temperature increases randomness in the outputs.

  • text (OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text, nil) (defaults to: nil) -- Configuration options for a text response from the model.

  • tools (Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>, nil) (defaults to: nil) -- An array of tools the model may call while generating a response.

  • top_p (Float, nil) (defaults to: nil) -- An alternative to temperature for nucleus sampling; 1.0 includes all tokens.


# File 'lib/openai/models/evals/run_cancel_response.rb', line 730
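
As a hedged usage sketch (example values are arbitrary and not taken from the gem's documentation), a SamplingParams value can be built with any subset of the keyword arguments above, since every field is optional:

require "openai"

# Illustrative values only; every keyword argument is optional.
sampling_params =
  OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams.new(
    max_completion_tokens: 1024,
    reasoning_effort: :medium,
    seed: 42,
    temperature: 0.2,
    top_p: 1.0
  )

sampling_params.temperature # => 0.2
sampling_params.to_h        # => a Hash of the populated fields (via BaseModel#to_h)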


Instance Attribute Details

#max_completion_tokens ⇒ Integer?

The maximum number of tokens in the generated output.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/evals/run_cancel_response.rb', line 662

optional :max_completion_tokens, Integer

#reasoning_effort ⇒ Symbol, ...

Constrains effort on reasoning for [reasoning models](platform.openai.com/docs/guides/reasoning). Currently supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool calls are supported for all reasoning values in gpt-5.1.

  • All models before `gpt-5.1` default to `medium` reasoning effort, and do not support `none`.

  • The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.

  • `xhigh` is supported for all models after `gpt-5.1-codex-max`.

Returns:

  • (Symbol, OpenAI::ReasoningEffort, nil)

# File 'lib/openai/models/evals/run_cancel_response.rb', line 680

optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
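
A brief, hedged sketch of choosing an effort level consistent with the model rules above; the symbol values are assumed to coerce through the OpenAI::ReasoningEffort enum:

# Illustrative only: `:none` applies to gpt-5.1-era models; earlier models default to `:medium`.
low_latency = OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams.new(
  reasoning_effort: :none
)
deliberate = OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams.new(
  reasoning_effort: :high
)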

#seed ⇒ Integer?

A seed value to initialize the randomness during sampling.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/evals/run_cancel_response.rb', line 686

optional :seed, Integer

#temperature ⇒ Float?

A higher temperature increases randomness in the outputs.

Returns:

  • (Float, nil)


# File 'lib/openai/models/evals/run_cancel_response.rb', line 692

optional :temperature, Float

#text ⇒ OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text?

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:

  • [Text inputs and outputs](platform.openai.com/docs/guides/text)

  • [Structured Outputs](platform.openai.com/docs/guides/structured-outputs)

Returns:

  • (OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text, nil)

# File 'lib/openai/models/evals/run_cancel_response.rb', line 702

optional :text,
-> { OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text }
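
A hedged sketch of requesting structured JSON output; the nested hash mirrors the Responses API text.format JSON shape and is an assumption for illustration here, since the exact attributes live on the nested Text class:

# Assumed wire shape for structured output; schema and names are hypothetical.
structured = OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams.new(
  text: {
    format: {
      type: "json_schema",
      name: "eval_verdict",
      strict: true,
      schema: {
        type: "object",
        properties: {
          passed: { type: "boolean" },
          rationale: { type: "string" }
        },
        required: ["passed", "rationale"],
        additionalProperties: false
      }
    }
  }
)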

#tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>?

An array of tools the model may call while generating a response. You can specify which tool to use by setting the `tool_choice` parameter.

The two categories of tools you can provide the model are:

  • Built-in tools: tools provided by OpenAI that extend the model's capabilities, like web search or file search.

  • Function calls (custom tools): functions that you define, enabling the model to call your own code.

Returns:

  • (Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>, nil)

# File 'lib/openai/models/evals/run_cancel_response.rb', line 722

optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }
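
A hedged sketch of supplying a single function tool; the hash mirrors the Responses API function-tool JSON shape (an assumption for illustration) and is coerced against the tool union declared above:

# Hypothetical function tool; name, description, and schema are illustrative.
with_tools = OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams.new(
  tools: [
    {
      type: "function",
      name: "lookup_order",
      description: "Look up an order by ID (illustrative, not part of the gem).",
      parameters: {
        type: "object",
        properties: { order_id: { type: "string" } },
        required: ["order_id"],
        additionalProperties: false
      },
      strict: true
    }
  ]
)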

#top_p ⇒ Float?

An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

Returns:

  • (Float, nil)


# File 'lib/openai/models/evals/run_cancel_response.rb', line 728

optional :top_p, Float
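
Temperature and top_p are alternative ways to control sampling randomness; a hedged sketch that fixes the seed and leaves top_p at its inclusive value:

# Illustrative: adjust either temperature or top_p, generally not both.
deterministic_ish = OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams.new(
  seed: 1234,        # best-effort repeatability across runs
  temperature: 0.0,  # minimize randomness
  top_p: 1.0         # include the full token distribution
)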