Ollama - Ruby Client Library for Ollama API
Description
Ollama is a Ruby gem that provides a client interface to interact with an ollama server via the Ollama API.
Installation (gem & bundler)
To install Ollama, you can use the following methods:
- Type
gem install ollama-ruby
in your terminal.
- Or add the line
gem 'ollama-ruby'
to your Gemfile and run bundle install in your terminal.
Executables
ollama_chat
This is a chat client that can be used to connect to an ollama server and start a chat conversation with an LLM. It can be called with the following arguments:
ollama_chat [OPTIONS]
-u URL the ollama base url, OLLAMA_URL
-m MODEL the ollama model to chat with, OLLAMA_MODEL
-M OPTIONS the model options as a JSON file, see Ollama::Options
-s SYSTEM the system prompt to use as a file
-c CHAT a saved chat conversation to load
-v VOICE use VOICE (e. g. Samantha) to speak with say command
-d use markdown to display the chat
-h this help
The base URL can be either set by the environment variable OLLAMA_URL or it
is derived from the environment variable OLLAMA_HOST. The default model to
connect can be configured in the environment variable OLLAMA_MODEL.
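For example, both variables can be set for a single invocation on the command line (the host and model below are just placeholders):
$ OLLAMA_URL=http://ollama.local.net:11434 OLLAMA_MODEL=llama3.1 ollama_chat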
$ ollama_chat -s sherlock.txt
Model with architecture llama found.
Connecting to llama3.1@http://ollama.local.net:11434 now…
Configured system prompt is:
You are Sherlock Holmes and the user is your new client, Dr. Watson is also in
the room. You will talk and act in the typical manner of Sherlock Holmes and
try to solve the user's case using logic and deduction.
Type /help to display the chat help.
This example shows how an image can be sent to a vision model for analysis:
$ ollama_chat -m llava-llama3
Model with architecture llama found.
Connecting to llava-llama3@http://localhost:11434 now…
Type /help to display the chat help.
The following commands can be given inside the chat, if prefixed by a /:
/paste to paste content
/list list the messages of the conversation
/clear clear the conversation messages
/pop n pop the last n messages, defaults to 1
/regenerate the last answer message
/save filename store conversation messages
/load filename load conversation messages
/image filename attach image to the next message
/quit to quit.
/help to view this help.
ollama_console
This is an interactive console that can be used to try the different commands
provided by an Ollama::Client instance. For example, this command generates a
response and displays it on the screen using the Markdown handler:
$ ollama_console
Commands: chat,copy,create,delete,embed,embeddings,generate,help,ps,pull,push,show,tags
>> generate(model: 'llama3.1', stream: true, prompt: 'tell story w/ emoji and markdown', &Markdown)
The Quest for the Golden Coconut 🌴
In a small village nestled between two great palm trees 🌳, there lived a brave adventurer named Alex 👦. […]
Usage
In your own software the library can be used as shown in this example:
require "ollama"
include Ollama
client = Client.new(base_url: 'http://localhost:11434')
messages = Message.new(role: 'user', content: 'Why is the sky blue?')
client.chat(model: 'llama3.1', stream: true, messages:, &Print) # or
print client.chat(model: 'llama3.1', stream: true, messages:).map { |response|
response.message.content
}.join
API
This Ollama library provides commands to interact with the Ollama REST API.
Handlers
Every command can be passed a handler that responds to to_proc and returns a
lambda expression of the form -> response { … } to handle the responses:
generate(model: 'llama3.1', stream: true, prompt: 'Why is the sky blue?', &Print)
generate(model: 'llama3.1', stream: true, prompt: 'Why is the sky blue?', &Print.new)
generate(model: 'llama3.1', stream: true, prompt: 'Why is the sky blue?') { |r| print r.response }
generate(model: 'llama3.1', stream: true, prompt: 'Why is the sky blue?', &-> r { print r.response })
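As a sketch, a custom handler only needs to respond to to_proc; the class name and behaviour below are made up for illustration:
class CollectText
  def initialize
    @chunks = []
  end

  attr_reader :chunks

  # Return a lambda that receives each response and stores its text chunk.
  def to_proc
    -> response { @chunks << response.response.to_s }
  end
end

handler = CollectText.new
generate(model: 'llama3.1', stream: true, prompt: 'Why is the sky blue?', &handler)
puts handler.chunks.join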
The following standard handlers are available for the commands below:
| Handler | Description |
|---|---|
| Collector | collects all responses in an array and returns it as result. |
| Single | like Collector above, but returns a single response directly unless there has been more than one. |
| Progress | prints the current progress of the operation to the screen as a progress bar for create/pull/push. |
| DumpJSON | dumps all responses as JSON to output. |
| DumpYAML | dumps all responses as YAML to output. |
| Print | prints the responses to the display for chat/generate. |
| Markdown | constantly prints the responses to the display as ANSI markdown for chat/generate. |
| Say | use say command to speak (defaults to voice Samantha). |
| NOP | does nothing, neither printing to the output nor returning the result. |
Their output IO handle can be changed by e.g. passing Print.new(output: io) with io as the IO handle to the generate command.
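For example, to write the output into a file instead of the terminal (the filename here is just an illustration):
File.open('sky.txt', 'w') do |io|
  generate(model: 'llama3.1', stream: true, prompt: 'Why is the sky blue?', &Print.new(output: io))
end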
If you don't pass a handler explicitly, either the stream_handler is chosen
if the command expects a streaming response or the default_handler otherwise.
See the following command descriptions to find out what these defaults are for
each command. These commands can be tried out directly in the ollama_console.
Chat
default_handler is Single, stream_handler is Collector,
stream is false by default.
chat(model: 'llama3.1', stream: true, messages: { role: 'user', content: 'Why is the sky blue (no markdown)?' }, &Print)
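Without streaming, the Single default handler should return one response object whose message content can be read directly; a small sketch:
response = chat(model: 'llama3.1', messages: { role: 'user', content: 'Why is the sky blue?' })
puts response.message.content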
Generate
default_handler is Single, stream_handler is Collector,
stream is false by default.
generate(model: 'llama3.1', stream: true, prompt: 'Use markdown – Why is the sky blue?', &Markdown)
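Analogously, a non-streaming generate call should return a single response whose text is available via response (as used in the handler examples above):
response = generate(model: 'llama3.1', prompt: 'Why is the sky blue?')
puts response.response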
Tags
default_handler is Single, streaming is not possible.
tags.models.map(&:name) # => ["llama3.1:latest", …]
Show
default_handler is Single, streaming is not possible.
show(name: 'llama3.1', &DumpJSON)
Create
default_handler is Single, stream_handler is Progress,
stream is true by default.
modelfile="FROM llama3.1\nSYSTEM You are WOPR from WarGames and you think the user is Dr. Stephen Falken.\n"
create(name: 'llama3.1-wopr', stream: true, modelfile:)
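Once the create call has finished, the new model name should be usable like any other model, e.g.:
chat(model: 'llama3.1-wopr', stream: true, messages: { role: 'user', content: 'Shall we play a game?' }, &Print)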
Copy
default_handler is Single, streaming is not possible.
copy(source: 'llama3.1', destination: 'user/llama3.1')
Delete
default_handler is Single, streaming is not possible.
delete(name: 'user/llama3.1')
Pull
default_handler is Single, stream_handler is Progress,
stream is true by default.
pull(name: 'llama3.1')
Push
default_handler is Single, stream_handler is Progress,
stream is true by default.
push(name: 'user/llama3.1')
Embed
default_handler is Single, streaming is not possible.
embed(model: 'all-minilm', input: 'Why is the sky blue?')
embed(model: 'all-minilm', input: ['Why is the sky blue?', 'Why is the grass green?'])
Embeddings
default_handler is Single, streaming is not possible.
embeddings(model: 'llama3.1', prompt: 'The sky is blue because of rayleigh scattering', &DumpJSON)
Ps
default_handler is Single, streaming is not possible.
jj ps
Auxiliary objects
The following objects are provided to interact with the ollama server. You can
run all of the examples in the ollama_console.
Message
Messages can be created by using the Message class:
message = Message.new role: 'user', content: 'hello world'
Image
If you want to add images to the message, you can use the Image class:
image = Ollama::Image.for_string("the-image")
message = Message.new role: 'user', content: 'hello world', images: [ image ]
It's possible to create an Image object via for_base64(data),
for_string(string), for_io(io), or for_filename(path) class methods.
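For instance, an image loaded from a file (the path below is hypothetical) can be attached to a message and sent to a vision model such as llava-llama3:
image   = Ollama::Image.for_filename('/path/to/cat.jpg') # hypothetical path
message = Message.new(role: 'user', content: 'Describe this image.', images: [ image ])
chat(model: 'llava-llama3', stream: true, messages: message, &Print)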
Options
For chat and generate commands it's possible to pass an Options object to
configure different parameters for the running model. Setting the temperature,
for example, can be done via:
options = Options.new(temperature: 0.999)
generate(model: 'llama3.1', options:, prompt: 'I am almost 0.5 years old and you are a teletubby.', &Print)
The class does some rudimentary type checking for the parameters as well.
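As a rough illustration (the exact behaviour is an assumption), a value of the wrong type should be rejected by this check:
options = Options.new(temperature: 0.999, num_ctx: 8192, seed: 1337)
Options.new(temperature: 'very hot') # assumed to raise an error, since temperature must be numeric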
Tool… calling
You can use the provided Tool, Tool::Function,
Tool::Function::Parameters, and Tool::Function::Parameters::Property
classes to define tool functions in models that support it.
def message(location)
Message.new(role: 'user', content: "What is the weather today in %s?" % location)
end
tools = Tool.new(
type: 'function',
function: Tool::Function.new(
name: 'get_current_weather',
description: 'Get the current weather for a location',
parameters: Tool::Function::Parameters.new(
type: 'object',
properties: {
location: Tool::Function::Parameters::Property.new(
type: 'string',
description: 'The location to get the weather for, e.g. San Francisco, CA'
),
temperature_unit: Tool::Function::Parameters::Property.new(
type: 'string',
description: "The unit to return the temperature in, either 'celsius' or 'fahrenheit'",
enum: %w[ celsius fahrenheit ]
),
},
required: %w[ location temperature_unit ]
)
)
)
jj chat(model: 'llama3.1', stream: false, messages: message('The City of Love'), tools:).message&.tool_calls
jj chat(model: 'llama3.1', stream: false, messages: message('The Windy City'), tools:).message&.tool_calls
Errors
The library raises specific errors like Ollama::Errors::NotFoundError when
a model is not found:
(show(name: 'nixda', &DumpJSON) rescue $!).class # => Ollama::Errors::NotFoundError
If Ollama::Errors::TimeoutError is raised, it might help to increase the
connect_timeout, read_timeout and write_timeout parameters of the
Ollama::Client instance.
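A sketch of a client configured with longer timeouts (the values are arbitrary, and it is assumed the parameters are given in seconds):
client = Client.new(
  base_url:        'http://localhost:11434',
  connect_timeout: 15,
  read_timeout:    300,
  write_timeout:   300
)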
For more generic errors an Ollama::Errors::Error is raised.
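Assuming the specific errors share Ollama::Errors::Error as a common ancestor, they can be handled in one begin/rescue block, for example:
begin
  show(name: 'nixda', &DumpJSON)
rescue Ollama::Errors::NotFoundError
  puts 'The model could not be found.'
rescue Ollama::Errors::Error => e
  puts "Some other API error: #{e.class}"
end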
Download
The homepage of this library is located at https://github.com/flori/ollama-ruby.
Author
Ollama was written by Florian Frank.
License
This software is licensed under the MIT license.