Class: OpenAI::Resources::Completions

Inherits:
Object
Defined in:
lib/openai/resources/completions.rb

Instance Method Summary

Constructor Details

#initialize(client:) ⇒ Completions

This method is part of a private API. You should avoid using this method if possible, as it may be removed or changed in the future.

Returns a new instance of Completions.

Parameters:

  • client (OpenAI::Client)

API:

  • private

# File 'lib/openai/resources/completions.rb', line 138

def initialize(client:)
  @client = client
end
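
Because the constructor is private, the resource is normally obtained from a client rather than built directly. A rough sketch (the API key lookup is illustrative):

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])
completions = client.completions # => OpenAI::Resources::Completions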

Instance Method Details

#create(model:, prompt:, best_of: nil, echo: nil, frequency_penalty: nil, logit_bias: nil, logprobs: nil, max_tokens: nil, n: nil, presence_penalty: nil, seed: nil, stop: nil, stream_options: nil, suffix: nil, temperature: nil, top_p: nil, user: nil, request_options: {}) ⇒ OpenAI::Models::Completion

See #create_streaming for the streaming counterpart.

Some parameter documentation has been truncated; see Models::CompletionCreateParams for more details.

Creates a completion for the provided prompt and parameters.

Parameters:

  • model: ID of the model to use. You can use the [List models](https://platform.openai.co…

  • prompt: The prompt(s) to generate completions for, encoded as a string, array of strings…

  • best_of: Generates best_of completions server-side and returns the "best" (the one with…

  • echo: Echo back the prompt in addition to the completion.

  • frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their…

  • logit_bias: Modify the likelihood of specified tokens appearing in the completion.

  • logprobs: Include the log probabilities on the logprobs most likely output tokens, as we…

  • max_tokens: The maximum number of tokens that can be generated in the completi…

  • n: How many completions to generate for each prompt.

  • presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whethe…

  • seed: If specified, our system will make a best effort to sample deterministically, su…

  • stop: Not supported with latest reasoning models o3 and o4-mini.

  • stream_options: Options for streaming response. Only set this when you set stream: true.

  • suffix: The suffix that comes after a completion of inserted text.

  • temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will m…

  • top_p: An alternative to sampling with temperature, called nucleus sampling, where the…

  • user: A unique identifier representing your end-user, which can help OpenAI to monitor…

Returns:

  • (OpenAI::Models::Completion)

# File 'lib/openai/resources/completions.rb', line 54

def create(params)
  parsed, options = OpenAI::CompletionCreateParams.dump_request(params)
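  # Callers that set `stream: true` must use #create_streaming instead.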
  if parsed[:stream]
    message = "Please use `#create_streaming` for the streaming use case."
    raise ArgumentError.new(message)
  end
  @client.request(
    method: :post,
    path: "completions",
    body: parsed,
    model: OpenAI::Completion,
    options: options
  )
end
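
A minimal usage sketch, assuming a configured OpenAI::Client (the model name and prompt are illustrative placeholders):

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

completion = client.completions.create(
  model: "gpt-3.5-turbo-instruct",
  prompt: "Say this is a test",
  max_tokens: 16
)
puts completion.choices.first.text

Each choice on the returned OpenAI::Models::Completion carries the generated text.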

#create_streaming(model:, prompt:, best_of: nil, echo: nil, frequency_penalty: nil, logit_bias: nil, logprobs: nil, max_tokens: nil, n: nil, presence_penalty: nil, seed: nil, stop: nil, stream_options: nil, suffix: nil, temperature: nil, top_p: nil, user: nil, request_options: {}) ⇒ OpenAI::Internal::Stream<OpenAI::Models::Completion>

See #create for the non-streaming counterpart.

Some parameter documentation has been truncated; see Models::CompletionCreateParams for more details.

Creates a completion for the provided prompt and parameters.

Parameters:

  • model: ID of the model to use. You can use the [List models](https://platform.openai.co…

  • prompt: The prompt(s) to generate completions for, encoded as a string, array of strings…

  • best_of: Generates best_of completions server-side and returns the "best" (the one with…

  • echo: Echo back the prompt in addition to the completion.

  • frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their…

  • logit_bias: Modify the likelihood of specified tokens appearing in the completion.

  • logprobs: Include the log probabilities on the logprobs most likely output tokens, as we…

  • max_tokens: The maximum number of tokens that can be generated in the completi…

  • n: How many completions to generate for each prompt.

  • presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whethe…

  • seed: If specified, our system will make a best effort to sample deterministically, su…

  • stop: Not supported with latest reasoning models o3 and o4-mini.

  • stream_options: Options for streaming response. Only set this when you set stream: true.

  • suffix: The suffix that comes after a completion of inserted text.

  • temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will m…

  • top_p: An alternative to sampling with temperature, called nucleus sampling, where the…

  • user: A unique identifier representing your end-user, which can help OpenAI to monitor…

Returns:

  • (OpenAI::Internal::Stream<OpenAI::Models::Completion>)

# File 'lib/openai/resources/completions.rb', line 117

def create_streaming(params)
  parsed, options = OpenAI::CompletionCreateParams.dump_request(params)
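  # An explicit `stream: false` belongs in #create; reject it here.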
  unless parsed.fetch(:stream, true)
    message = "Please use `#create` for the non-streaming use case."
    raise ArgumentError.new(message)
  end
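  # Always send `stream: true` in the request body for this streaming variant.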
  parsed.store(:stream, true)
  @client.request(
    method: :post,
    path: "completions",
    headers: {"accept" => "text/event-stream"},
    body: parsed,
    stream: OpenAI::Internal::Stream,
    model: OpenAI::Completion,
    options: options
  )
end
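
A corresponding streaming sketch (again with placeholder model and prompt); the returned OpenAI::Internal::Stream is enumerable and yields completion chunks as they arrive:

stream = client.completions.create_streaming(
  model: "gpt-3.5-turbo-instruct",
  prompt: "Say this is a test",
  max_tokens: 16
)
stream.each do |completion|
  print completion.choices.first.text
end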