Class: Durable::Llm::Client

Inherits:
Object
Defined in:
lib/durable/llm/client.rb

Overview

Unified interface for interacting with different LLM providers

The Client class is a facade that delegates operations such as completion, chat, embedding, and streaming to the appropriate provider instance. It handles parameter processing and model configuration, and offers convenience methods for quick text completion. Provider classes are resolved automatically from the provider name, and default parameters, including model selection, are managed by the client.
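
A minimal usage sketch, assuming valid credentials are configured for each provider:

client = Durable::Llm::Client.new(:openai, model: 'gpt-4')
puts client.complete('Summarize Hamlet in one sentence.')

# The same interface works across providers; only the provider name
# and model change.
client = Durable::Llm::Client.new(:anthropic, model: 'claude-3-opus-20240229')
puts client.complete('Summarize Hamlet in one sentence.')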

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(provider_name, options = {}) ⇒ Client

Initializes a new LLM client for the specified provider

Examples:

Initialize with OpenAI provider

client = Durable::Llm::Client.new(:openai, model: 'gpt-4', api_key: 'sk-...')

Initialize with Anthropic provider

client = Durable::Llm::Client.new(:anthropic, model: 'claude-3-opus-20240229')

Parameters:

  • provider_name (Symbol, String)

    The name of the LLM provider (e.g., :openai, :anthropic)

  • options (Hash) (defaults to: {})

    Configuration options for the provider and client

Options Hash (options):

  • :model (String)

    The default model to use for requests

  • 'model' (String)

    Alternative string key for model

  • :api_key (String)

    API key for authentication (provider-specific)

Raises:

  • (ArgumentError)

    If provider_name is nil or empty

  • (NameError)

    If the provider class cannot be found



# File 'lib/durable/llm/client.rb', line 42

def initialize(provider_name, options = {})
  if provider_name.nil? || provider_name.to_s.strip.empty?
    raise ArgumentError, 'provider_name cannot be nil or empty. Supported providers: ' \
                         "#{Durable::Llm::Providers.available_providers.join(', ')}"
  end
  raise ArgumentError, 'options must be a Hash' unless options.is_a?(Hash)

  # Accept the model under either a string or symbol key, removing it from
  # the options that are passed through to the provider.
  @model = options.delete('model') || options.delete(:model) if options.key?('model') || options.key?(:model)

  provider_class = Durable::Llm::Providers.provider_class_for(provider_name)

  @provider = provider_class.new(**options)
end

Instance Attribute Details

#model ⇒ String?

Returns The default model to use for requests.

Returns:

  • (String, nil)

    The default model to use for requests



# File 'lib/durable/llm/client.rb', line 27

def model
  @model
end

#provider ⇒ Object (readonly)

Returns The underlying provider instance.

Returns:

  • (Object)

    The underlying provider instance



# File 'lib/durable/llm/client.rb', line 24

def provider
  @provider
end

Instance Method Details

#chat(params = {}) ⇒ Object

Performs a chat completion request (alias for completion)
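
Examples:

Chat with a conversation history (illustrative; the response object mirrors #completion, so for OpenAI-style providers the reply text is at choices.first.message.content)

client = Durable::Llm::Client.new(:openai, model: 'gpt-4')
response = client.chat(
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
  temperature: 0.7
)
puts response.choices.first.message.content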

Parameters:

  • params (Hash) (defaults to: {})

    The chat parameters

Options Hash (params):

  • :model (String)

    The model to use (overrides default)

  • :messages (Array<Hash>)

    The conversation messages

  • :temperature (Float)

    Sampling temperature (0.0-2.0)

  • :max_tokens (Integer)

    Maximum tokens to generate

Returns:

  • (Object)

    The chat response object

Raises:

  • (ArgumentError)

    If params is not a Hash

See Also:

  • #completion


# File 'lib/durable/llm/client.rb', line 141

def chat(params = {})
  raise ArgumentError, 'params must be a Hash' unless params.is_a?(Hash)

  @provider.completion(process_params(params))
end

#clone_with(**options) ⇒ Client

Creates a copy of the client with different configuration

Examples:

Clone with different model

gpt4_client = client.clone_with(model: 'gpt-4')
gpt35_client = client.clone_with(model: 'gpt-3.5-turbo')

Parameters:

  • options (Hash)

    New configuration options

Options Hash (**options):

  • :model (String)

    Override the model

Returns:

  • (Client)

    A new client instance with merged configuration



# File 'lib/durable/llm/client.rb', line 251

def clone_with(**options)
  provider_name = @provider.class.name.split('::').last.downcase.to_sym
  # Start from the current model and let the explicit options override it,
  # so clone_with(model: 'gpt-4') actually changes the model.
  self.class.new(provider_name, { model: @model }.merge(options))
end

#complete(text, _opts = {}) ⇒ String Also known as: quick_complete

Performs a text completion with minimal configuration

Examples:

Text completion with OpenAI

client = Durable::Llm::Client.new(:openai, model: 'gpt-4')
response = client.complete('What is the capital of France?')
puts response # => "The capital of France is Paris."

Parameters:

  • text (String)

    The input text to complete

  • _opts (Hash) (defaults to: {})

    Additional options (currently unused, reserved for future use)

Returns:

  • (String)

    The generated completion text

Raises:

  • (ArgumentError)

    If text is nil or empty

  • (Durable::Llm::APIError)

    If the API request fails

  • (IndexError)

    If the response contains no choices

  • (NoMethodError)

    If the response structure is unexpected



# File 'lib/durable/llm/client.rb', line 76

def complete(text, _opts = {})
  if text.nil? || text.to_s.strip.empty?
    raise ArgumentError, 'text cannot be nil or empty. Provide a non-empty string for completion.'
  end

  response = completion(process_params(messages: [{ role: 'user', content: text }]))

  choice = response.choices.first
  unless choice
    raise IndexError, 'No completion choices returned from the API. This may indicate an ' \
                      'API error or invalid request parameters.'
  end

  message = choice.message
  unless message
    raise NoMethodError, 'Response choice has no message. The API response format may be ' \
                         'unexpected or the provider may have changed their response structure.'
  end

  content = message.content
  unless content
    raise NoMethodError, 'Response message has no content. This may occur if the model ' \
                         'refused to respond or if content filtering was applied.'
  end

  content
end

#completion(params = {}) ⇒ Object

Performs a completion request

Examples:

Perform a completion

client = Durable::Llm::Client.new(:openai, model: 'gpt-4')
response = client.completion(
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ],
  temperature: 0.7
)

Parameters:

  • params (Hash) (defaults to: {})

    The completion parameters

Options Hash (params):

  • :model (String)

    The model to use (overrides default)

  • :messages (Array<Hash>)

    The conversation messages

  • :temperature (Float)

    Sampling temperature (0.0-2.0)

  • :max_tokens (Integer)

    Maximum tokens to generate

Returns:

  • (Object)

    The completion response object

Raises:

  • (ArgumentError)

    If params is not a Hash



# File 'lib/durable/llm/client.rb', line 124

def completion(params = {})
  raise ArgumentError, 'params must be a Hash' unless params.is_a?(Hash)

  @provider.completion(process_params(params))
end

#default_params ⇒ Hash

Returns the default parameters to merge with request options
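
A small illustration (assuming a provider that accepts construction without extra options):

Durable::Llm::Client.new(:openai, model: 'gpt-4').default_params # => { model: 'gpt-4' }
Durable::Llm::Client.new(:openai).default_params                 # => {}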

Returns:

  • (Hash)

    Default parameters including model if set



# File 'lib/durable/llm/client.rb', line 59

def default_params
  @model ? { model: @model } : {}
end

#embed(params = {}) ⇒ Object

Performs an embedding request

Examples:

Generate embeddings

client = Durable::Llm::Client.new(:openai)
response = client.embed(
  model: 'text-embedding-ada-002',
  input: 'Hello, world!'
)

Parameters:

  • params (Hash) (defaults to: {})

    The embedding parameters including model and input

Options Hash (params):

  • :model (String)

    The embedding model to use

  • :input (String, Array<String>)

    The text(s) to embed

Returns:

  • (Object)

    The embedding response object

Raises:

  • (ArgumentError)

    If params is not a Hash or missing required fields

  • (NotImplementedError)

    If the provider doesn’t support embeddings

  • (Durable::Llm::APIError)

    If the API request fails



# File 'lib/durable/llm/client.rb', line 162

def embed(params = {})
  raise ArgumentError, 'params must be a Hash' unless params.is_a?(Hash)

  @provider.embedding(**process_params(params))
rescue NotImplementedError
  provider_name = @provider.class.name.split('::').last
  raise NotImplementedError, "#{provider_name} does not support embeddings. " \
                              'Try using a provider like OpenAI that offers embedding models.'
end

#stream(params = {}) {|Object| ... } ⇒ Object

Performs a streaming completion request

Examples:

Stream a completion

client = Durable::Llm::Client.new(:openai, model: 'gpt-4')
client.stream(messages: [{ role: 'user', content: 'Count to 10' }]) do |chunk|
  print chunk.choices.first.delta.content
end

Parameters:

  • params (Hash) (defaults to: {})

    The streaming parameters

Options Hash (params):

  • :model (String)

    The model to use (overrides default)

  • :messages (Array<Hash>)

    The conversation messages

  • :temperature (Float)

    Sampling temperature (0.0-2.0)

  • :max_tokens (Integer)

    Maximum tokens to generate

Yields:

  • (Object)

    Yields stream response chunks as they arrive

Returns:

  • (Object)

    The final response object

Raises:

  • (ArgumentError)

    If params is not a Hash or no block is given

  • (NotImplementedError)

    If the provider doesn’t support streaming

  • (Durable::Llm::APIError)

    If the API request fails



# File 'lib/durable/llm/client.rb', line 189

def stream(params = {}, &block)
  raise ArgumentError, 'params must be a Hash' unless params.is_a?(Hash)
  unless block_given?
    raise ArgumentError, 'block required for streaming. Use: client.stream(params) { |chunk| ... }'
  end

  @provider.stream(process_params(params), &block)
rescue NotImplementedError
  provider_name = @provider.class.name.split('::').last
  raise NotImplementedError, "#{provider_name} does not support streaming. " \
                              'Try using completion() or chat() instead.'
end

#stream? ⇒ Boolean

Checks if the provider supports streaming
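
A hedged usage sketch: guard streaming on provider support and fall back to a plain completion otherwise.

if client.stream?
  client.stream(messages: [{ role: 'user', content: 'Hi' }]) do |chunk|
    print chunk.choices.first.delta.content
  end
else
  puts client.complete('Hi')
end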

Returns:

  • (Boolean)

    True if streaming is supported, false otherwise



# File 'lib/durable/llm/client.rb', line 205

def stream?
  @provider.stream?
end

#with_max_tokens(tokens) ⇒ Client

Sets max tokens for the next request (fluent interface)

Examples:

Fluent max tokens setting

client.with_max_tokens(500).complete('Write a story')

Parameters:

  • tokens (Integer)

    Maximum tokens to generate

Returns:

  • (Client)

    Returns self for method chaining



# File 'lib/durable/llm/client.rb', line 238

def with_max_tokens(tokens)
  @next_max_tokens = tokens
  self
end

#with_model(model_name) ⇒ Client

Sets the model for subsequent requests (fluent interface)

Examples:

Fluent API usage

client = Durable::Llm::Client.new(:openai)
client.with_model('gpt-4').complete('Hello!')

Parameters:

  • model_name (String)

    The model to use

Returns:

  • (Client)

    Returns self for method chaining



# File 'lib/durable/llm/client.rb', line 216

def with_model(model_name)
  @model = model_name
  self
end

#with_temperature(temp) ⇒ Client

Sets temperature for the next request (fluent interface)

Examples:

Fluent temperature setting

client.with_temperature(0.7).complete('Be creative!')

Parameters:

  • temp (Float)

    The temperature value (0.0-2.0)

Returns:

  • (Client)

    Returns self for method chaining



# File 'lib/durable/llm/client.rb', line 227

def with_temperature(temp)
  @next_temperature = temp
  self
end