Class: OpenAI::Models::Evals::RunRetrieveResponse::DataSource::Responses::SamplingParams

Inherits:
Internal::Type::BaseModel show all
Defined in:
lib/openai/models/evals/run_retrieve_response.rb

Overview

See Also:

Defined Under Namespace

Classes: Text

Instance Attribute Summary collapse

Instance Method Summary collapse

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(max_completion_tokens: nil, reasoning_effort: nil, seed: nil, temperature: nil, text: nil, tools: nil, top_p: nil) ⇒ Object

Some parameter documentation has been truncated; see OpenAI::Models::Evals::RunRetrieveResponse::DataSource::Responses::SamplingParams for more details.

Parameters:

  • max_completion_tokens (defaults to: nil)

    The maximum number of tokens in the generated output.

  • reasoning_effort (defaults to: nil)

    Constrains effort on reasoning for reasoning models.

  • seed (defaults to: nil)

    A seed value to initialize the randomness during sampling.

  • temperature (defaults to: nil)

    A higher temperature increases randomness in the outputs.

  • text (defaults to: nil)

    Configuration options for a text response from the model. Can be plain text or structured JSON data.

  • tools (defaults to: nil)

    An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

  • top_p (defaults to: nil)

    An alternative to temperature for nucleus sampling; 1.0 includes all tokens.



# File 'lib/openai/models/evals/run_retrieve_response.rb', line 734
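Taken together, the keyword arguments above can be sketched as a plain options hash (illustrative values only; the keys mirror the initialize signature, and every attribute is optional):

```ruby
# Illustrative sampling parameters; every key is optional and defaults to nil.
sampling_params = {
  max_completion_tokens: 1024, # cap on tokens in the generated output
  reasoning_effort: :medium,   # effort level for reasoning models
  seed: 42,                    # initializes randomness during sampling
  temperature: 0.7,            # higher => more random outputs
  top_p: 1.0                   # nucleus sampling; 1.0 includes all tokens
}
```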


Instance Attribute Details

#max_completion_tokens ⇒ Integer?

The maximum number of tokens in the generated output.

Returns:



# File 'lib/openai/models/evals/run_retrieve_response.rb', line 666

optional :max_completion_tokens, Integer

#reasoning_effort ⇒ Symbol, ...

Constrains effort on reasoning for [reasoning models](platform.openai.com/docs/guides/reasoning). Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

  • gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

  • All models before gpt-5.1 default to medium reasoning effort, and do not support none.

  • The gpt-5-pro model defaults to (and only supports) high reasoning effort.

  • xhigh is supported for all models after gpt-5.1-codex-max.

Returns:



# File 'lib/openai/models/evals/run_retrieve_response.rb', line 684

optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
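The per-model rules above can be sketched as a lookup table (a hypothetical helper, not part of the SDK; the model names and values are taken from the notes above, and the fallback for unlisted models is an assumption):

```ruby
# Hypothetical map of supported reasoning_effort values, per the notes above.
SUPPORTED_REASONING_EFFORTS = {
  "gpt-5.1"   => %i[none low medium high], # defaults to :none
  "gpt-5-pro" => %i[high]                  # defaults to (and only supports) :high
}.freeze

# Assumed fallback: models before gpt-5.1 support low/medium/high but not :none.
def effort_supported?(model, effort)
  SUPPORTED_REASONING_EFFORTS.fetch(model, %i[low medium high]).include?(effort)
end
```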

#seed ⇒ Integer?

A seed value to initialize the randomness during sampling.

Returns:



# File 'lib/openai/models/evals/run_retrieve_response.rb', line 690

optional :seed, Integer

#temperature ⇒ Float?

A higher temperature increases randomness in the outputs.

Returns:



# File 'lib/openai/models/evals/run_retrieve_response.rb', line 696

optional :temperature, Float

#text ⇒ OpenAI::Models::Evals::RunRetrieveResponse::DataSource::Responses::SamplingParams::Text?

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:

Returns:



# File 'lib/openai/models/evals/run_retrieve_response.rb', line 706

optional :text,
-> { OpenAI::Models::Evals::RunRetrieveResponse::DataSource::Responses::SamplingParams::Text }

#tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::ComputerUsePreviewTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::FunctionShellTool, OpenAI::Models::Responses::CustomTool, OpenAI::Models::Responses::NamespaceTool, OpenAI::Models::Responses::ToolSearchTool, OpenAI::Models::Responses::ApplyPatchTool, OpenAI::Models::Responses::WebSearchTool, OpenAI::Models::Responses::WebSearchPreviewTool>?

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are built-in tools and function calls (custom tools).

Returns:



# File 'lib/openai/models/evals/run_retrieve_response.rb', line 726

optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }
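As a sketch, a tools array might carry hash-form function tool definitions in the Responses API style (the exact accepted shape is an assumption here; the union also coerces the typed tool classes listed above):

```ruby
# Hypothetical hash-form function tool definition (Responses API style).
tools = [
  {
    type: "function",
    name: "get_weather",                                   # assumed tool name
    description: "Look up current weather for a city",
    parameters: {                                          # JSON Schema for arguments
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"]
    }
  }
]
```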

#top_p ⇒ Float?

An alternative to temperature for nucleus sampling; 1.0 includes all tokens.

Returns:



# File 'lib/openai/models/evals/run_retrieve_response.rb', line 732

optional :top_p, Float