Class: Boxcars::GeminiAi
- Includes:
- UnifiedObservability
- Defined in:
- lib/boxcars/engine/gemini_ai.rb
Overview
An engine that uses Gemini AI's API via an OpenAI-compatible interface.
Constant Summary
- DEFAULT_PARAMS =
{
  model: "gemini-1.5-flash-latest", # Default model for Gemini
  temperature: 0.1
  # max_tokens is often part of the request, not a fixed default here
}.freeze
- DEFAULT_NAME =
"GeminiAI engine"
- DEFAULT_DESCRIPTION =
"useful for when you need to use Gemini AI to answer questions. " \
"You should ask targeted questions"
Instance Attribute Summary
-
#batch_size ⇒ Object
readonly
Returns the value of attribute batch_size.
-
#llm_params ⇒ Object
readonly
Returns the value of attribute llm_params.
-
#model_kwargs ⇒ Object
readonly
Returns the value of attribute model_kwargs.
-
#prompts ⇒ Object
readonly
Returns the value of attribute prompts.
Attributes inherited from Engine
Class Method Summary
-
.gemini_client(gemini_api_key: nil) ⇒ Object
Returns an OpenAI-compatible client configured for the Gemini API.
Instance Method Summary
- #client(prompt:, inputs: {}, gemini_api_key: nil, **kwargs) ⇒ Object
-
#conversation_model?(_model_name) ⇒ Boolean
Gemini models are typically conversational.
- #default_params ⇒ Object
-
#initialize(name: DEFAULT_NAME, description: DEFAULT_DESCRIPTION, prompts: [], batch_size: 20, **kwargs) ⇒ GeminiAi
constructor
A new instance of GeminiAi.
- #run(question, **) ⇒ Object
Methods inherited from Engine
#extract_answer, #generate, #generation_info, #get_num_tokens
Constructor Details
#initialize(name: DEFAULT_NAME, description: DEFAULT_DESCRIPTION, prompts: [], batch_size: 20, **kwargs) ⇒ GeminiAi
Returns a new instance of GeminiAi.
# File 'lib/boxcars/engine/gemini_ai.rb', line 22

def initialize(name: DEFAULT_NAME, description: DEFAULT_DESCRIPTION, prompts: [], batch_size: 20, **kwargs)
  user_id = kwargs.delete(:user_id)
  @llm_params = DEFAULT_PARAMS.merge(kwargs)
  @prompts = prompts
  @batch_size = batch_size
  super(description:, name:, user_id:)
end
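The constructor folds caller-supplied keyword arguments over `DEFAULT_PARAMS` with `Hash#merge`, so caller values override the defaults while the frozen default hash is left untouched. A standalone sketch of that precedence (constants inlined here for illustration):

```ruby
# Sketch of the parameter merging done in #initialize: caller kwargs win,
# and the frozen defaults are never mutated.
DEFAULT_PARAMS = { model: "gemini-1.5-flash-latest", temperature: 0.1 }.freeze

kwargs = { temperature: 0.7, max_tokens: 256 } # caller-supplied overrides
llm_params = DEFAULT_PARAMS.merge(kwargs)

llm_params
# => { model: "gemini-1.5-flash-latest", temperature: 0.7, max_tokens: 256 }
```

Because `merge` returns a new hash, `DEFAULT_PARAMS` still holds `temperature: 0.1` afterwards.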
Instance Attribute Details
#batch_size ⇒ Object (readonly)
Returns the value of attribute batch_size.
# File 'lib/boxcars/engine/gemini_ai.rb', line 11

def batch_size
  @batch_size
end
#llm_params ⇒ Object (readonly)
Returns the value of attribute llm_params.
# File 'lib/boxcars/engine/gemini_ai.rb', line 11

def llm_params
  @llm_params
end
#model_kwargs ⇒ Object (readonly)
Returns the value of attribute model_kwargs.
# File 'lib/boxcars/engine/gemini_ai.rb', line 11

def model_kwargs
  @model_kwargs
end
#prompts ⇒ Object (readonly)
Returns the value of attribute prompts.
# File 'lib/boxcars/engine/gemini_ai.rb', line 11

def prompts
  @prompts
end
Class Method Details
.gemini_client(gemini_api_key: nil) ⇒ Object
Returns an OpenAI-compatible client configured for the Gemini API.
# File 'lib/boxcars/engine/gemini_ai.rb', line 31

def self.gemini_client(gemini_api_key: nil)
  access_token = Boxcars.configuration.gemini_api_key(gemini_api_key:)
  # NOTE: The OpenAI gem might not support `log_errors: true` for a custom uri_base.
  # It's a param for OpenAI::Client specific to their setup.
  ::OpenAI::Client.new(access_token:, uri_base: "https://generativelanguage.googleapis.com/v1beta/")
  # Removed /openai from uri_base as it's usually for OpenAI-specific paths on custom domains.
  # The Gemini endpoint might be directly at /v1beta/models/gemini...:generateContent
  # This might need adjustment based on how the OpenAI gem forms the full URL.
  # For direct generateContent, a different client or HTTP call might be needed if the OpenAI gem is too restrictive.
  # Assuming for now it's an OpenAI-compatible chat endpoint.
end
Instance Method Details
#client(prompt:, inputs: {}, gemini_api_key: nil, **kwargs) ⇒ Object
# File 'lib/boxcars/engine/gemini_ai.rb', line 48

def client(prompt:, inputs: {}, gemini_api_key: nil, **kwargs)
  start_time = Time.now
  response_data = { response_obj: nil, parsed_json: nil, success: false, error: nil, status_code: nil }
  current_params = @llm_params.merge(kwargs) # Use instance var @llm_params
  api_request_params = nil
  current_prompt_object = prompt.is_a?(Array) ? prompt.first : prompt

  begin
    clnt = GeminiAi.gemini_client(gemini_api_key:)
    api_request_params = _prepare_gemini_request_params(current_prompt_object, inputs, current_params)
    (api_request_params[:messages]) if Boxcars.configuration.log_prompts && api_request_params[:messages]
    _execute_and_process_gemini_call(clnt, api_request_params, response_data)
  rescue ::OpenAI::Error => e # Catch OpenAI gem errors if they apply
    response_data[:error] = e
    response_data[:success] = false
    response_data[:status_code] = e.http_status if e.respond_to?(:http_status)
  rescue StandardError => e # Catch other errors
    response_data[:error] = e
    response_data[:success] = false
  ensure
    duration_ms = ((Time.now - start_time) * 1000).round
    request_context = {
      prompt: current_prompt_object,
      inputs:,
      conversation_for_api: api_request_params&.dig(:messages) || [],
      user_id:
    }
    track_ai_generation(duration_ms:, current_params:, request_context:, response_data:, provider: :gemini)
  end

  # If there's an error, raise it to maintain backward compatibility with existing tests
  raise response_data[:error] if response_data[:error]

  response_data[:parsed_json]
end
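The key pattern in #client is that errors are captured into a result hash rather than escaping immediately, the `ensure` clause always emits tracking with the measured duration, and any captured error is re-raised only afterwards. A minimal standalone sketch of that control flow (`call_with_tracking` and the `events` sink are illustrative names, not part of the gem):

```ruby
# Sketch of #client's capture-track-reraise pattern: record the outcome in a
# hash, always emit a tracking event in `ensure`, then re-raise any error.
def call_with_tracking(&api_call)
  start_time = Time.now
  response_data = { parsed_json: nil, success: false, error: nil }
  events = [] # stands in for the observability sink (track_ai_generation)
  begin
    response_data[:parsed_json] = api_call.call
    response_data[:success] = true
  rescue StandardError => e
    response_data[:error] = e
  ensure
    # Runs on both success and failure, so every call is tracked.
    duration_ms = ((Time.now - start_time) * 1000).round
    events << { duration_ms: duration_ms, success: response_data[:success] }
  end
  # Re-raise after tracking, preserving the caller-visible failure.
  raise response_data[:error] if response_data[:error]

  [response_data[:parsed_json], events]
end
```

On success the parsed result and the tracking events come back together; on failure the tracking event is still recorded before the error propagates.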
#conversation_model?(_model_name) ⇒ Boolean
Gemini models are typically conversational.
# File 'lib/boxcars/engine/gemini_ai.rb', line 44

def conversation_model?(_model_name)
  true
end
#default_params ⇒ Object
# File 'lib/boxcars/engine/gemini_ai.rb', line 99

def default_params
  @llm_params # Use instance variable
end
#run(question, **) ⇒ Object
# File 'lib/boxcars/engine/gemini_ai.rb', line 91

def run(question, **)
  prompt = Prompt.new(template: question)
  response = client(prompt:, inputs: {}, **)
  answer = _extract_content_from_gemini_response(response)
  Boxcars.debug("Answer: #{answer}", :cyan)
  answer
end
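#run wraps a bare question in a Prompt, passes it through #client, and extracts the answer text from the parsed response. With the network call stubbed out, the flow can be sketched as follows (the `Prompt` struct, stub `client`, and `extract_content` here are simplified stand-ins, not the gem's real implementations):

```ruby
# Simplified stand-in for the gem's Prompt class.
Prompt = Struct.new(:template, keyword_init: true)

# Stubbed client: returns a parsed hash shaped like an OpenAI-compatible
# chat completion, instead of calling the Gemini API.
def client(prompt:, inputs: {})
  { "choices" => [{ "message" => { "content" => "Paris" } }] }
end

# Stand-in for _extract_content_from_gemini_response.
def extract_content(response)
  response.dig("choices", 0, "message", "content")
end

def run(question)
  prompt = Prompt.new(template: question)
  response = client(prompt: prompt)
  extract_content(response)
end

run("What is the capital of France?") # => "Paris"
```

The real method additionally logs the answer via Boxcars.debug and forwards extra keyword arguments to #client.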