Class: LLM::Bot

Inherits: Object
Includes: Builder, Conversable
Defined in:
lib/llm/shell/internal/llm.rb/lib/llm/bot.rb,
lib/llm/shell/internal/llm.rb/lib/llm/bot/builder.rb,
lib/llm/shell/internal/llm.rb/lib/llm/bot/conversable.rb

Overview

LLM::Bot provides an object that can maintain a conversation. A conversation can use the chat completions API that all LLM providers support or the responses API that currently only OpenAI supports.

Examples:

#!/usr/bin/env ruby
require "llm"

llm  = LLM.openai(key: ENV["KEY"])
bot  = LLM::Bot.new(llm)
url  = "https://en.wikipedia.org/wiki/Special:FilePath/Cognac_glass.jpg"
msgs = bot.chat do |prompt|
  prompt.system "Your task is to answer all user queries"
  prompt.user ["Tell me about this URL", URI(url)]
  prompt.user ["Tell me about this PDF", File.open("handbook.pdf", "rb")]
  prompt.user "Are the URL and PDF similar to each other?"
end

# At this point, we execute a single request
msgs.each { print "[#{_1.role}] ", _1.content, "\n" }

Defined Under Namespace

Modules: Builder, Conversable, Prompt

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(provider, params = {}) ⇒ Bot

Returns a new instance of Bot.

Options Hash (params):

  • :model (String)

    Defaults to the provider’s default model

  • :schema (#to_json, nil)

    Defaults to nil

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil



# File 'lib/llm/shell/internal/llm.rb/lib/llm/bot.rb', line 50

def initialize(provider, params = {})
  @provider = provider
  @params = {model: provider.default_model, schema: nil}.compact.merge!(params)
  @messages = LLM::Buffer.new(provider)
end
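
For illustration, a minimal sketch of constructing a bot with these options. The model name, schema, and tools below are hypothetical placeholders, not defaults of the library:

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])

# All three options are optional; the model name here is an
# illustrative assumption and should be replaced with one your
# provider actually offers.
bot = LLM::Bot.new(
  llm,
  model: "gpt-4o-mini",   # overrides the provider's default model
  schema: nil,            # any object that responds to #to_json, or nil
  tools: nil              # an Array<LLM::Function>, or nil
)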

Instance Attribute Details

#messages ⇒ LLM::Buffer<LLM::Message> (readonly)

Returns an Enumerable for the messages in a conversation



# File 'lib/llm/shell/internal/llm.rb/lib/llm/bot.rb', line 38

def messages
  @messages
end

Instance Method Details

#chat(prompt, params = {}) ⇒ LLM::Bot
#chat(prompt, params) { ... } ⇒ LLM::Buffer

Maintain a conversation via the chat completions API

Overloads:

  • #chat(prompt, params = {}) ⇒ LLM::Bot

    Returns self (the bot)

  • #chat(prompt, params) { ... } ⇒ LLM::Buffer

    Returns messages

    Yields:

    • prompt Yields a prompt



# File 'lib/llm/shell/internal/llm.rb/lib/llm/bot.rb', line 69

def chat(prompt = nil, params = {})
  if block_given?
    params = prompt
    yield Prompt::Completion.new(self, params)
    messages
  elsif prompt.nil?
    raise ArgumentError, "wrong number of arguments (given 0, expected 1)"
  else
    params = {role: :user}.merge!(params)
    tap { async_completion(prompt, params) }
  end
end
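
For comparison with the block form shown in the overview, a sketch of the non-block form. Each call returns the bot itself, so calls can be chained; the request is only performed once the messages are read (the :system role below is an assumption):

llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Bot.new(llm)

# Each call queues a message and returns self, so calls chain.
bot.chat("Your task is to answer all user queries", role: :system)
   .chat("What is the capital of France?", role: :user)

# The single request is executed when the buffer is enumerated.
bot.messages.each { print "[#{_1.role}] ", _1.content, "\n" }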

#drain ⇒ Array<LLM::Message>

Also known as: flush

Drains the buffer and returns all messages as an array

Examples:

llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Bot.new(llm, stream: $stdout)
bot.chat("Hello", role: :user).flush


# File 'lib/llm/shell/internal/llm.rb/lib/llm/bot.rb', line 134

def drain
  messages.drain
end

#functions ⇒ Array<LLM::Function>

Returns an array of pending function calls (tool calls) requested by the assistant that have not yet been executed



# File 'lib/llm/shell/internal/llm.rb/lib/llm/bot.rb', line 120

def functions
  messages
    .select(&:assistant?)
    .flat_map(&:functions)
    .select(&:pending?)
end
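
A sketch of a simple tool-call loop built on top of this method. It assumes the bot was constructed with tools: [...] and that each pending LLM::Function can be executed with #call; check the LLM::Function documentation before relying on either assumption:

bot.chat("What's the weather in Paris?", role: :user)

# Keep executing pending tool calls and feeding the results back
# until the assistant stops requesting them.
until bot.functions.empty?
  bot.chat bot.functions.map(&:call)
end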

#inspect ⇒ String



# File 'lib/llm/shell/internal/llm.rb/lib/llm/bot.rb', line 111

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@provider=#{@provider.class}, @params=#{@params.inspect}, " \
  "@messages=#{@messages.inspect}>"
end

#respond(prompt, params = {}) ⇒ LLM::Bot
#respond(prompt, params) { ... } ⇒ LLM::Buffer

Maintain a conversation via the responses API

Overloads:

  • #respond(prompt, params = {}) ⇒ LLM::Bot

    Returns self (the bot)

  • #respond(prompt, params) { ... } ⇒ LLM::Buffer

    Note: Not all LLM providers support this API

    Returns messages

    Yields:

    • prompt Yields a prompt



# File 'lib/llm/shell/internal/llm.rb/lib/llm/bot.rb', line 96

def respond(prompt = nil, params = {})
  if block_given?
    params = prompt
    yield Prompt::Respond.new(self, params)
    messages
  elsif prompt.nil?
    raise ArgumentError, "wrong number of arguments (given 0, expected 1)"
  else
    params = {role: :user}.merge!(params)
    tap { async_response(prompt, params) }
  end
end
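
A sketch of the block form against the responses API, assuming an OpenAI provider (the only provider noted above as supporting it) and assuming Prompt::Respond exposes a #user method like Prompt::Completion does:

llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Bot.new(llm)

msgs = bot.respond do |prompt|
  prompt.user "Summarize the plot of Hamlet in two sentences"
end

# As with #chat, the request is performed when the messages are read.
msgs.each { print "[#{_1.role}] ", _1.content, "\n" }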

#usage ⇒ LLM::Object

Returns token usage for the conversation

Note:

This method returns token usage for the latest assistant message, and it returns an empty object if there are no assistant messages



# File 'lib/llm/shell/internal/llm.rb/lib/llm/bot.rb', line 146

def usage
  messages.find(&:assistant?)&.usage || LLM::Object.from_hash({})
end
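
A sketch of reading token usage after a conversation. The field names on the returned LLM::Object are provider-dependent, so the accessors below are illustrative assumptions rather than a guaranteed API:

llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Bot.new(llm)

# Force the request by draining the buffer (see #drain above).
bot.chat("Hello!", role: :user).drain

usage = bot.usage
# Illustrative, provider-dependent field names.
p usage.input_tokens
p usage.output_tokens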