Class: LlmOrchestrator::OpenAI
Overview
OpenAI LLM provider implementation. Handles interactions with OpenAI's GPT models.
Instance Method Summary
- #generate(prompt, context: nil, **options) ⇒ Object
  rubocop:disable Metrics/MethodLength.
- #initialize(api_key: nil, model: nil, temperature: nil, max_tokens: nil) ⇒ OpenAI (constructor)
  A new instance of OpenAI.
Constructor Details
#initialize(api_key: nil, model: nil, temperature: nil, max_tokens: nil) ⇒ OpenAI
Returns a new instance of OpenAI.
```ruby
# File 'lib/llm_orchestrator/llm.rb', line 25

def initialize(api_key: nil, model: nil, temperature: nil, max_tokens: nil)
  super
  @api_key ||= LlmOrchestrator.configuration.openai.api_key
  @client = ::OpenAI::Client.new(access_token: @api_key)
  @model = model || LlmOrchestrator.configuration.openai.model
  @temperature = temperature || LlmOrchestrator.configuration.openai.temperature
  @max_tokens = max_tokens || LlmOrchestrator.configuration.openai.max_tokens
end
```
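Each constructor argument falls back to the global configuration when not passed explicitly. A minimal runnable sketch of that fallback pattern, using a hypothetical `OpenStruct` stand-in for `LlmOrchestrator.configuration.openai` rather than the real configuration object:

```ruby
require "ostruct"

# Hypothetical stand-in for LlmOrchestrator.configuration.openai
config = OpenStruct.new(
  api_key: "sk-from-config",
  model: "gpt-4o-mini",
  temperature: 0.7
)

# Mirrors the constructor's per-argument fallback: an explicitly passed
# value wins; a nil argument falls back to the configured default.
def resolve(explicit, default)
  explicit || default
end

model       = resolve("gpt-4", config.model)   # explicit argument wins
temperature = resolve(nil, config.temperature) # falls back to config

puts model        # "gpt-4"
puts temperature  # 0.7
```

The same `||` fallback applies independently to each of `api_key`, `model`, `temperature`, and `max_tokens`, so callers can override any subset of options.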
Instance Method Details
#generate(prompt, context: nil, **options) ⇒ Object
rubocop:disable Metrics/MethodLength
```ruby
# File 'lib/llm_orchestrator/llm.rb', line 35

def generate(prompt, context: nil, **options)
  messages = []
  messages << { role: "system", content: context } if context
  messages << { role: "user", content: prompt }

  response = @client.chat(
    parameters: {
      model: options[:model] || @model,
      messages: messages,
      temperature: options[:temperature] || @temperature
    }
  )

  response.dig("choices", 0, "message", "content")
end
```
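The method builds a chat-style messages array (an optional system message from `context`, then the user prompt) and extracts the first choice's content from the response hash. A runnable sketch of those two steps, using a stubbed response hash in the shape the code above expects rather than a live API call (`build_messages` is a hypothetical helper, not part of the gem):

```ruby
# Builds the messages array the same way #generate does:
# a system message only when context is given, then the user prompt.
def build_messages(prompt, context: nil)
  messages = []
  messages << { role: "system", content: context } if context
  messages << { role: "user", content: prompt }
  messages
end

# Stubbed response hash shaped like the value #generate digs into.
stub_response = {
  "choices" => [
    { "message" => { "role" => "assistant", "content" => "Hello!" } }
  ]
}

msgs = build_messages("Say hi", context: "You are terse.")
puts msgs.length                  # 2
puts msgs.first[:role]            # system

# Same extraction as the last line of #generate.
puts stub_response.dig("choices", 0, "message", "content") # Hello!
```

Note that `max_tokens` is stored by the constructor but not forwarded in the `parameters` hash above, so only `model` and `temperature` can be overridden per call via `**options`.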