Class: LLM::Provider (Abstract)

Inherits: Object
Includes: Client
Defined in:
lib/llm/shell/internal/llm.rb/lib/llm/provider.rb

Overview

This class is abstract.

The Provider class is the abstract base class for LLM (Large Language Model) providers.

Direct Known Subclasses

Anthropic, Gemini, Ollama, OpenAI

Constant Summary

@@clients = {}

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false) ⇒ Provider

Returns a new instance of Provider.
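
A minimal construction sketch, assuming the OpenAI subclass and its module-level constructor:

# Providers are normally built through constructors such as LLM.openai,
# which forward these keyword arguments to #initialize.
# persistent: true assumes the net-http-persistent gem is installed.
llm = LLM.openai(key: ENV["KEY"], persistent: true)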

Parameters:

  • key (String, nil)

    The secret key for authentication

  • host (String)

    The host address of the LLM provider

  • port (Integer) (defaults to: 443)

    The port number

  • timeout (Integer) (defaults to: 60)

    The number of seconds to wait for a response

  • ssl (Boolean) (defaults to: true)

    Whether to use SSL for the connection

  • persistent (Boolean) (defaults to: false)

    Whether to use a persistent connection. Requires the net-http-persistent gem.



# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 33

def initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false)
  @key = key
  @host = host
  @port = port
  @timeout = timeout
  @ssl = ssl
  @client = persistent ? persistent_client : transient_client
  @base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
end

Class Method Details

.clients ⇒ Object

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.



# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 17

def self.clients = @@clients

Instance Method Details

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Raises:

  • (NotImplementedError)


# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 168

def assistant_role
  raise NotImplementedError
end

#audio ⇒ LLM::OpenAI::Audio

Returns an interface to the audio API

Returns:

  • (LLM::OpenAI::Audio)

Raises:

  • (NotImplementedError)


# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 132

def audio
  raise NotImplementedError
end

#chat(prompt, params = {}) ⇒ LLM::Bot

Starts a new chat powered by the chat completions API
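
A minimal usage sketch, assuming LLM::Bot exposes an enumerable #messages collection:

llm = LLM.openai(key: ENV["KEY"])
bot = llm.chat("You are a helpful assistant", role: :system)
bot.chat("What is 5 + 2?")
# Each message responds to #role and #content
bot.messages.each { |message| print "[#{message.role}] ", message.content, "\n" }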

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not just those listed here.

Returns:

  • (LLM::Bot)



# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 95

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).chat(prompt, role:)
end

#complete(prompt, params = {}) ⇒ LLM::Response

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(key: ENV["KEY"])
messages = [{role: "system", content: "Your task is to answer all of my questions"}]
res = llm.complete("5 + 2 ?", messages:)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not just those listed here.

Options Hash (params):

  • :role (Symbol)

    Defaults to the provider’s default role

  • :model (String)

    Defaults to the provider’s default model

  • :schema (#to_json, nil)

    Defaults to nil

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 86

def complete(prompt, params = {})
  raise NotImplementedError
end

#default_model ⇒ String

Returns the default model for chat completions

Returns:

  • (String)

    Returns the default model for chat completions

Raises:

  • (NotImplementedError)


# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 175

def default_model
  raise NotImplementedError
end

#embed(input, model: nil, **params) ⇒ LLM::Response

Provides an embedding
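
A minimal usage sketch, assuming the OpenAI subclass:

llm = LLM.openai(key: ENV["KEY"])
# Omitting model: falls back to the provider's default embedding model
res = llm.embed(["ruby", "rails"])
print res.class, "\n" # => LLM::Response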

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String) (defaults to: nil)

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 62

def embed(input, model: nil, **params)
  raise NotImplementedError
end

#files ⇒ LLM::OpenAI::Files

Returns an interface to the files API

Returns:

  • (LLM::OpenAI::Files)

Raises:

  • (NotImplementedError)


# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 139

def files
  raise NotImplementedError
end

#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images

Returns an interface to the images API

Returns:

  • (LLM::OpenAI::Images, LLM::Gemini::Images)

Raises:

  • (NotImplementedError)


# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 125

def images
  raise NotImplementedError
end

#inspect ⇒ String

Note:

The secret key is redacted in inspect for security reasons

Returns an inspection of the provider object

Returns:

  • (String)


# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 47

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @http=#{@http.inspect}>"
end

#models ⇒ LLM::OpenAI::Models

Returns an interface to the models API

Returns:

  • (LLM::OpenAI::Models)

Raises:

  • (NotImplementedError)


# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 146

def models
  raise NotImplementedError
end

#moderations ⇒ LLM::OpenAI::Moderations

Returns an interface to the moderations API

Returns:

  • (LLM::OpenAI::Moderations)

Raises:

  • (NotImplementedError)


# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 153

def moderations
  raise NotImplementedError
end

#respond(prompt, params = {}) ⇒ LLM::Bot

Starts a new chat powered by the responses API
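
A minimal usage sketch, assuming a provider that implements the responses API (e.g. OpenAI):

llm = LLM.openai(key: ENV["KEY"])
# :developer is an OpenAI-specific system-style role
bot = llm.respond("You are my math tutor", role: :developer)
bot.respond("What is 5 + 5?")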

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not just those listed here.

Returns:

  • (LLM::Bot)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 106

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).respond(prompt, role:)
end

#responses ⇒ LLM::OpenAI::Responses

Note:

Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
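
A minimal usage sketch that reuses the call shape shown under #server_tool:

llm = LLM.openai(key: ENV["KEY"])
res = llm.responses.create("Hello, world")
print res.output_text, "\n"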

Returns:

  • (LLM::OpenAI::Responses)

    Returns an interface to the responses API

Raises:

  • (NotImplementedError)


# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 118

def responses
  raise NotImplementedError
end

#schema ⇒ LLM::Schema

Returns an object that can generate a JSON schema
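
A minimal sketch, assuming LLM::Schema exposes a builder-style interface (object, integer, required):

llm    = LLM.openai(key: ENV["KEY"])
schema = llm.schema.object(answer: llm.schema.integer.required)
# The schema is passed through the :schema option documented on #complete
res    = llm.complete("What is 5 + 5?", schema:)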

Returns:

  • (LLM::Schema)



# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 182

def schema
  @schema ||= LLM::Schema.new
end

#server_tool(name, options = {}) ⇒ LLM::ServerTool

Note:

OpenAI, Anthropic, and Gemini provide platform tools for tasks such as web search.

Returns a tool provided by the provider.

Examples:

llm   = LLM.openai(key: ENV["KEY"])
tools = [llm.server_tool(:web_search)]
res   = llm.responses.create("Summarize today's news", tools:)
print res.output_text, "\n"

Parameters:

  • name (String, Symbol)

    The name of the tool

  • options (Hash) (defaults to: {})

    Configuration options for the tool

Returns:

  • (LLM::ServerTool)



# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 223

def server_tool(name, options = {})
  LLM::ServerTool.new(name, options, self)
end

#server_tools ⇒ Hash<String => LLM::ServerTool>

Note:

This list might be outdated; the LLM::Provider#server_tool method can be used if a tool is not found here.

Returns all known tools provided by the provider.
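
A minimal enumeration sketch; the hash may be empty for providers that do not advertise any tools:

llm = LLM.openai(key: ENV["KEY"])
llm.server_tools.each_key { |name| print name, "\n" }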

Returns:

  • (Hash<String => LLM::ServerTool>)



# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 206

def server_tools
  {}
end

#vector_stores ⇒ LLM::OpenAI::VectorStore

Returns an interface to the vector stores API

Returns:

  • (LLM::OpenAI::VectorStore)

    Returns an interface to the vector stores API

Raises:

  • (NotImplementedError)


# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 160

def vector_stores
  raise NotImplementedError
end

#web_search(query:) ⇒ LLM::Response

Provides a web search capability
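
A minimal usage sketch, assuming a provider that implements web search:

llm = LLM.openai(key: ENV["KEY"])
res = llm.web_search(query: "Summarize today's news")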

Parameters:

  • query (String)

    The search query

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 233

def web_search(query:)
  raise NotImplementedError
end

#with(headers:) ⇒ LLM::Provider

Add one or more headers to all requests

Examples:

llm = LLM.openai(key: ENV["KEY"])
llm.with(headers: {"OpenAI-Organization" => ENV["ORG"]})
llm.with(headers: {"OpenAI-Project" => ENV["PROJECT"]})

Parameters:

  • headers (Hash<String,String>)

    One or more headers

Returns:

  • (LLM::Provider)



# File 'lib/llm/shell/internal/llm.rb/lib/llm/provider.rb', line 196

def with(headers:)
  tap { (@headers ||= {}).merge!(headers) }
end