OllamaChat - Ruby Chat Bot for Ollama

Description

ollama_chat is a chat client that connects to an Ollama server and lets you hold chat conversations with the LLMs it provides.

Installation (gem)

To install ollama_chat, you can type

gem install ollama_chat

in your terminal.
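
After installation the ollama_chat executable should be available on your
PATH; per the options described below, running it with -V prints the
installed version:

$ ollama_chat -V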

Usage

ollama_chat can be started with the following arguments:

Usage: ollama_chat [OPTIONS]

  -f CONFIG      config file to read
  -u URL         the ollama base url, OLLAMA_URL
  -m MODEL       the ollama model to chat with, OLLAMA_CHAT_MODEL, ?selector
  -s SYSTEM      the system prompt to use as a file, OLLAMA_CHAT_SYSTEM, ?selector
  -c CHAT        a saved chat conversation to load
  -C COLLECTION  name of the collection used in this conversation
  -D DOCUMENT    load document and add to embeddings collection (multiple)
  -M             use (empty) MemoryCache for this chat session
  -E             disable embeddings for this chat session
  -S             open a socket to receive input from ollama_chat_send
  -V             display the current version number and quit
  -h             this help

  Use `?selector` with `-m` or `-s` to filter options. Multiple matches
  will open a chooser dialog.
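
For example, to connect to a specific server and narrow the model choice with
a selector (host and pattern are placeholders; quoting the selector keeps the
shell from glob-expanding the ?):

$ ollama_chat -u http://ollama.local.net:11434 -m '?llama'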

The base URL is either set by the environment variable OLLAMA_URL or derived from the environment variable OLLAMA_HOST. The default model to chat with can be configured via the environment variable OLLAMA_CHAT_MODEL (see the -m option above).
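
A minimal way to preset both from the shell (URL and model name are just
examples) is:

$ export OLLAMA_URL=http://localhost:11434
$ export OLLAMA_CHAT_MODEL=llama3.1
$ ollama_chat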

The YAML config file is stored in $XDG_CONFIG_HOME/ollama_chat/config.yml and you can use it for more complex settings.
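
An alternative config file can also be chosen per invocation with the -f
option described above (the path is a placeholder):

$ ollama_chat -f ~/projects/foo/ollama_chat.yml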

Example: Setting a system prompt

Some settings can also be passed as arguments, e.g. if you want to choose a specific system prompt:

$ ollama_chat -s sherlock.txt
Model with architecture llama found.
Connecting to llama3.1@http://ollama.local.net:11434 now…
Configured system prompt is:
You are Sherlock Holmes and the user is your new client; Dr. Watson is also in
the room. You will talk and act in the typical manner of Sherlock Holmes and
try to solve the user's case using logic and deduction.

Type /help to display the chat help.
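
The system prompt file itself is plain text, so it can be created straight
from the shell, e.g. with a heredoc:

$ cat > sherlock.txt <<'EOF'
You are Sherlock Holmes and the user is your new client; Dr. Watson is also in
the room. You will talk and act in the typical manner of Sherlock Holmes and
try to solve the user's case using logic and deduction.
EOF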

Example: Using a multimodal model

This example shows how an image like this can be sent to the LLM for multimodal analysis:

[Image: a cat]

$ ollama_chat -m llava-llama3
Model with architecture llama found.
Connecting to llava-llama3@http://localhost:11434 now…
Type /help to display the chat help.
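
A session might then continue along these lines; the prompt and the image
path are illustrative, assuming the client picks up image paths mentioned in
the message:

> What do you see in this picture? ./cat.jpg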

Chat commands

The following commands can be given inside the chat when prefixed with a /:

/copy                           to copy last response to clipboard
/paste                          to paste content
/markdown                       toggle markdown output
/stream                         toggle stream output
/location                       toggle location submission
/voice [change]                 toggle voice output or change the voice
/list [n]                       list the last n / all conversation exchanges
/clear [what]                   clear what=messages|links|history|tags|all
/clobber                        clear the conversation, links, and collection
/drop [n]                       drop the last n exchanges, defaults to 1
/model                          change the model
/system [show]                  change/show system prompt
/regenerate                     regenerate the last answer message
/collection [clear|change]      change (default) collection or clear
/info                           show information for current session
/config                         output current configuration ("/Users/flori/.config/ollama_chat/config.yml")
/document_policy                pick a scan policy for document references
/think                          enable ollama think setting for models
/import source                  import the source's content
/summarize [n] source           summarize the source's content in n words
/embedding                      toggle embedding paused or not
/embed source                   embed the source's content
/web [n] query                  query web search & return n or 1 results
/links [clear]                  display (or clear) links used in the chat
/save filename                  store conversation messages
/load filename                  load conversation messages
/quit                           to quit
/help                           to view this help
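
For example, inside a running chat you could summarize a web page and then
store the conversation for later (URL and filename are placeholders):

/summarize 100 https://example.com/some-article
/save my_conversation

The saved messages can be restored with /load my_conversation or by starting
ollama_chat with the -c option.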

Using ollama_chat_send to send input to a running ollama_chat

You can send input to a running ollama_chat session (one started with the -S switch) from the shell by piping it into the ollama_chat_send executable:

$ echo "Why is the sky blue?" | ollama_chat_send

To send text from inside a vim buffer, you can define a function and a leader mapping like this:

" Send the selection / clipboard register to the running chat with <leader>o
map <leader>o :<C-U>call OllamaChatSend(@*)<CR>

function! OllamaChatSend(input)
  " Wrap the text in a fenced code block tagged with the buffer's filetype
  " and ask the model to await further instructions.
  let input = "Take note of the following code snippet (" . &filetype . ") **AND** await further instructions:\n\n```\n" . a:input . "\n```\n"
  " Hand the prompt to the running ollama_chat session via ollama_chat_send.
  call system('ollama_chat_send', input)
endfunction

Advanced Parameters for ollama_chat_send

The ollama_chat_send command supports additional parameters that enhance its functionality:

  • Terminal Input (-t): Sends input as terminal commands, enabling special
    commands like /import.

      $ echo "/import https://example.com/some-content" | ollama_chat_send -t

  • Wait for Response (-r): Enables two-way communication by waiting for and
    returning the server's response.

      $ response=$(echo "Tell me a joke." | ollama_chat_send -r)
      $ echo "$response"

  • Help (-h or --help): Displays usage information and available options.

      $ ollama_chat_send -h

These parameters provide greater flexibility in how you interact with ollama_chat, whether from the command line or from integrated tools like vim.
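
Because -r prints the model's reply to standard output, it composes with
ordinary shell redirection; the prompt and output filename here are
illustrative:

$ echo "Summarize our discussion so far." | ollama_chat_send -r > summary.txt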

Download

The homepage of this app is located at https://github.com/flori/ollama_chat.

Author

OllamaChat was written by Florian Frank.

License

This software is licensed under the MIT license.