Class: TinyOllama
- Inherits: Object
- Defined in: lib/tiny_ollama.rb
Overview
A tiny HTTP client for the /api/generate and /api/chat endpoints of Ollama. See also: ollama.com/
Instance Method Summary
-
#initialize(model:, format: nil, host: 'localhost', port: 11434, context_size: 2048, keep_alive: -1, stream: false) ⇒ TinyOllama
constructor
A good rule of thumb is to keep a .tiny_ollama.yml config in your project, parse it as YAML, and pass the parsed result in here.
-
#prompt(messages) ⇒ Object
Sends a request to POST /api/chat.
Constructor Details
#initialize(model:, format: nil, host: 'localhost', port: 11434, context_size: 2048, keep_alive: -1, stream: false) ⇒ TinyOllama
A good rule of thumb is to keep a .tiny_ollama.yml config in your project, parse it as YAML, and pass the parsed result in here. An example sketch follows the source listing below.
# File 'lib/tiny_ollama.rb', line 9

def initialize(
  model:,
  format: nil,
  host: 'localhost',
  port: 11434,
  context_size: 2048,
  keep_alive: -1,
  stream: false
)
  @model = model
  @host = host
  @port = port
  @context_size = context_size
  @keep_alive = keep_alive
  @stream = stream
  @format = format
end
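As an example, the config-file pattern described above might look like the following minimal sketch. It assumes a .tiny_ollama.yml whose keys mirror the constructor's keyword arguments; the file name and keys are a convention suggested by these docs, not enforced by the gem:

require 'yaml'
require 'tiny_ollama'

# .tiny_ollama.yml (illustrative contents):
#   model: llama3
#   context_size: 4096
#   stream: false

# Parse the YAML with symbol keys so it can be splatted into keyword arguments.
config = YAML.safe_load(File.read('.tiny_ollama.yml'), symbolize_names: true)
client = TinyOllama.new(**config)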
Instance Method Details
#prompt(messages) ⇒ Object
Sends a request to POST /api/chat.
messages: an array of hashes in the following format:

  [
    {
      "role": "system",
      "content": <optional message to override model instructions>,
    },
    {
      "role": "user",
      "content": <the first user message>,
    },
    {
      "role": "assistant",
      "content": <the LLM's first response>,
    },
    {
      "role": "user",
      "content": <the next user message>,
    },
  ]
NOTE: the messages parameter needs to include a system message if you want to override the model's default instructions.
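For example, a typical call might look like the sketch below; the model name and message contents are illustrative:

client = TinyOllama.new(model: 'llama3')

messages = [
  { role: 'system', content: 'Answer in one sentence.' },
  { role: 'user', content: 'Why is the sky blue?' }
]

# Returns the assistant's reply as a String.
client.prompt(messages)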
# File 'lib/tiny_ollama.rb', line 52

def prompt(messages)
  request_body = {
    model: @model,
    messages: messages,
    stream: @stream,
    format: @format,
    keep_alive: @keep_alive,
    options: {
      num_ctx: @context_size,
    }
  }.to_json

  uri = URI("http://#{@host}:#{@port}/api/chat")
  headers = { 'Content-Type' => 'application/json' }
  response = Net::HTTP.post(uri, request_body, headers)

  # Handle potential errors (e.g., non-200 responses)
  unless response.is_a?(Net::HTTPSuccess)
    raise TinyOllamaModelError, "Ollama API Error: #{response.code} - #{response.body}"
  end

  JSON.parse(response.body)['message']['content']
end
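Since #prompt raises TinyOllamaModelError for any non-2xx response, callers that want to degrade gracefully can rescue it. A small sketch; the logging and fallback behavior here are illustrative, not part of the gem:

begin
  reply = client.prompt(messages)
rescue TinyOllamaModelError => e
  # e.message carries the HTTP status code and response body from Ollama.
  warn "Ollama request failed: #{e.message}"
  reply = nil
end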