Class: OpenRouter::ModelRegistry

Inherits: Object
Defined in: lib/open_router/model_registry.rb
Constant Summary

- API_BASE = "https://openrouter.ai/api/v1"
- CACHE_DIR = File.join(Dir.tmpdir, "openrouter_cache")
- CACHE_DATA_FILE = File.join(CACHE_DIR, "models_data.json")
- CACHE_METADATA_FILE = File.join(CACHE_DIR, "cache_metadata.json")
- MAX_CACHE_SIZE_MB = 50 (maximum cache size in megabytes)
Class Method Summary
- .all_models ⇒ Object
  Get all registered models (fetch from API if needed).
- .cache_stale? ⇒ Boolean
  Check if cache is stale based on TTL.
- .calculate_estimated_cost(model, input_tokens: 0, output_tokens: 0) ⇒ Object
  Calculate estimated cost for a request.
- .clear_cache! ⇒ Object
  Clear local cache (both files and memory).
- .determine_fallbacks(_model_id, _model_data) ⇒ Object
  Determine fallback models (simplified logic).
- .determine_performance_tier(model_data) ⇒ Object
  Determine performance tier based on pricing and capabilities.
- .ensure_cache_dir ⇒ Object
  Ensure cache directory exists and set up cleanup.
- .extract_capabilities(model_data) ⇒ Object
  Extract capabilities from model data.
- .fetch_and_cache_models ⇒ Object
  Get processed models (fetch if needed).
- .fetch_models_from_api ⇒ Object
  Fetch models from OpenRouter API.
- .find_best_model(requirements = {}) ⇒ Object
  Find the best model matching given requirements.
- .find_original_model_data(model_id) ⇒ Object
  Find original API model data by model ID.
- .get_fallbacks(model) ⇒ Object
  Get fallback models for a given model.
- .get_model_info(model) ⇒ Object
  Get detailed information about a model.
- .has_capability?(model, capability) ⇒ Boolean
  Check if a model has a specific capability.
- .model_exists?(model) ⇒ Boolean
  Check if a model exists in the registry.
- .models_meeting_requirements(requirements = {}) ⇒ Object
  Get all models that meet requirements (without sorting).
- .process_api_models(api_models) ⇒ Object
  Convert API model data to our internal format.
- .read_cache_if_fresh ⇒ Object
  Read cache only if it's fresh.
- .refresh! ⇒ Object
  Refresh models data from API.
- .write_cache_with_timestamp(models_data) ⇒ Object
  Write cache with timestamp metadata.
Class Method Details
.all_models ⇒ Object
Get all registered models (fetch from API if needed)
# File 'lib/open_router/model_registry.rb', line 260

def all_models
  @all_models ||= fetch_and_cache_models
end
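A minimal usage sketch. The returned Hash is keyed by model ID and holds the internal format built by .process_api_models; the model IDs shown in comments are illustrative, since the actual contents come from the live OpenRouter API:

  models = OpenRouter::ModelRegistry.all_models
  models.size            # number of models currently known
  models.keys.first(3)   # e.g. ["openai/gpt-4o", ...] depending on the API response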
.cache_stale? ⇒ Boolean
Check if cache is stale based on TTL
# File 'lib/open_router/model_registry.rb', line 54

def cache_stale?
  return true unless File.exist?(CACHE_METADATA_FILE)

  begin
    metadata = JSON.parse(File.read(CACHE_METADATA_FILE))
    cache_time = metadata["cached_at"]
    ttl = OpenRouter.configuration.cache_ttl

    return true unless cache_time

    Time.now.to_i - cache_time.to_i > ttl
  rescue JSON::ParserError, StandardError
    true # If we can't read metadata, consider cache stale
  end
end
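The TTL is read from OpenRouter.configuration.cache_ttl (in seconds). A hedged sketch of how staleness plays out; the writable setter on the configuration object is an assumption, not shown on this page:

  OpenRouter.configuration.cache_ttl = 3600   # assumption: cache_ttl has a setter
  OpenRouter::ModelRegistry.cache_stale?      # true until models are fetched and cached,
                                              # then false for the next hour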
.calculate_estimated_cost(model, input_tokens: 0, output_tokens: 0) ⇒ Object
Calculate estimated cost for a request
# File 'lib/open_router/model_registry.rb', line 265

def calculate_estimated_cost(model, input_tokens: 0, output_tokens: 0)
  model_info = get_model_info(model)
  return 0 unless model_info

  input_cost = (input_tokens / 1000.0) * model_info[:cost_per_1k_tokens][:input]
  output_cost = (output_tokens / 1000.0) * model_info[:cost_per_1k_tokens][:output]

  input_cost + output_cost
end
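A usage sketch. The rates come from the model's cached :cost_per_1k_tokens entry, so the result is only as accurate as the cached pricing data; the model ID is illustrative:

  cost = OpenRouter::ModelRegistry.calculate_estimated_cost(
    "openai/gpt-4o",          # illustrative model ID
    input_tokens: 1_200,
    output_tokens: 300
  )
  # 0 if the model is unknown, otherwise
  # (1_200 / 1000.0) * input_rate + (300 / 1000.0) * output_rate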
.clear_cache! ⇒ Object
Clear local cache (both files and memory)
# File 'lib/open_router/model_registry.rb', line 97

def clear_cache!
  FileUtils.rm_rf(CACHE_DIR) if Dir.exist?(CACHE_DIR)
  @processed_models = nil
  @all_models = nil
end
.determine_fallbacks(_model_id, _model_data) ⇒ Object
Determine fallback models (simplified logic)
# File 'lib/open_router/model_registry.rb', line 209

def determine_fallbacks(_model_id, _model_data)
  # For now, return empty array - could be enhanced with smart fallback logic
  []
end
.determine_performance_tier(model_data) ⇒ Object
Determine performance tier based on pricing and capabilities
# File 'lib/open_router/model_registry.rb', line 196

def determine_performance_tier(model_data)
  input_cost = model_data.dig("pricing", "prompt").to_f

  # Higher cost generally indicates premium models
  # Note: pricing is per token, not per 1k tokens
  if input_cost > 0.000001 # > $0.001 per 1k tokens (converted from per-token)
    :premium
  else
    :standard
  end
end
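A worked example of the threshold, using made-up pricing hashes shaped like the API data (this reads like an internal helper, so calling it directly may not be part of the public surface):

  # Prompt price is per token: 0.000002 $/token is about $0.002 per 1k tokens.
  premium  = { "pricing" => { "prompt" => "0.000002" } }   # above the 0.000001 cutoff
  standard = { "pricing" => { "prompt" => "0.0000005" } }  # below the cutoff

  OpenRouter::ModelRegistry.determine_performance_tier(premium)   # :premium
  OpenRouter::ModelRegistry.determine_performance_tier(standard)  # :standard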
.ensure_cache_dir ⇒ Object
Ensure cache directory exists and set up cleanup
# File 'lib/open_router/model_registry.rb', line 48

def ensure_cache_dir
  FileUtils.mkdir_p(CACHE_DIR) unless Dir.exist?(CACHE_DIR)
  setup_cleanup_hook
end
.extract_capabilities(model_data) ⇒ Object
Extract capabilities from model data
# File 'lib/open_router/model_registry.rb', line 169

def extract_capabilities(model_data)
  capabilities = [:chat] # All models support basic chat

  # Check for function calling support
  supported_params = model_data["supported_parameters"] || []
  if supported_params.include?("tools") && supported_params.include?("tool_choice")
    capabilities << :function_calling
  end

  # Check for structured output support
  if supported_params.include?("structured_outputs") || supported_params.include?("response_format")
    capabilities << :structured_outputs
  end

  # Check for vision support
  architecture = model_data["architecture"] || {}
  input_modalities = architecture["input_modalities"] || []
  capabilities << :vision if input_modalities.include?("image")

  # Check for large context support
  context_length = model_data["context_length"] || 0
  capabilities << :long_context if context_length > 100_000

  capabilities
end
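A sketch of the mapping, using a made-up API record containing only the fields this method inspects:

  model_data = {
    "supported_parameters" => ["tools", "tool_choice", "response_format"],
    "architecture"         => { "input_modalities" => ["text", "image"] },
    "context_length"       => 200_000
  }

  OpenRouter::ModelRegistry.extract_capabilities(model_data)
  # [:chat, :function_calling, :structured_outputs, :vision, :long_context]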
.fetch_and_cache_models ⇒ Object
Get processed models (fetch if needed)
# File 'lib/open_router/model_registry.rb', line 110

def fetch_and_cache_models
  # Try cache first (only if fresh)
  cached_data = read_cache_if_fresh

  if cached_data
    api_data = cached_data
  else
    # Cache is stale or doesn't exist, fetch from API
    api_data = fetch_models_from_api
    write_cache_with_timestamp(api_data)
  end

  @processed_models = process_api_models(api_data["data"])
end
.fetch_models_from_api ⇒ Object
Fetch models from OpenRouter API
# File 'lib/open_router/model_registry.rb', line 22

def fetch_models_from_api
  uri = URI("#{API_BASE}/models")

  # Use configurable timeout and SSL settings
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.verify_mode = OpenSSL::SSL::VERIFY_PEER
  http.read_timeout = OpenRouter.configuration.model_registry_timeout
  http.open_timeout = OpenRouter.configuration.model_registry_timeout

  request = Net::HTTP::Get.new(uri)
  response = http.request(request)

  unless response.code == "200"
    raise ModelRegistryError,
          "Failed to fetch models from OpenRouter API: #{response.message}"
  end

  JSON.parse(response.body)
rescue JSON::ParserError => e
  raise ModelRegistryError, "Failed to parse OpenRouter API response: #{e.message}"
rescue StandardError => e
  raise ModelRegistryError, "Network error fetching models: #{e.message}"
end
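Callers normally go through .all_models, but a direct call can be wrapped to handle the error it raises. The fully qualified error name OpenRouter::ModelRegistryError is assumed from the unqualified reference in the method body:

  begin
    raw = OpenRouter::ModelRegistry.fetch_models_from_api
    raw["data"].size   # the API wraps the model list in a "data" array
  rescue OpenRouter::ModelRegistryError => e
    warn "Could not refresh model list: #{e.message}"
  end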
.find_best_model(requirements = {}) ⇒ Object
Find the best model matching given requirements
# File 'lib/open_router/model_registry.rb', line 215

def find_best_model(requirements = {})
  candidates = models_meeting_requirements(requirements)
  return nil if candidates.empty?

  # If pick_newer is true, prefer newer models over cost
  if requirements[:pick_newer]
    candidates.max_by { |_, specs| specs[:created_at] }
  else
    # Sort by cost (cheapest first) as default strategy
    candidates.min_by { |_, specs| calculate_model_cost(specs, requirements) }
  end
end
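A usage sketch. With no requirements the cheapest candidate wins; :pick_newer switches the ranking to the created_at timestamp. Which other requirement keys are honoured depends on meets_requirements?, which is not shown on this page:

  # Cheapest model satisfying the (empty) requirements, as an [id, specs] pair:
  id, specs = OpenRouter::ModelRegistry.find_best_model

  # Prefer recency over cost:
  id, specs = OpenRouter::ModelRegistry.find_best_model(pick_newer: true)
  specs[:cost_per_1k_tokens] if id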
.find_original_model_data(model_id) ⇒ Object
Find original API model data by model ID
# File 'lib/open_router/model_registry.rb', line 126

def find_original_model_data(model_id)
  # Get raw models data (not processed)
  cached_data = read_cache_if_fresh

  if cached_data
    api_data = cached_data
  else
    api_data = fetch_models_from_api
    write_cache_with_timestamp(api_data)
  end

  raw_models = api_data["data"] || []
  raw_models.find { |model| model["id"] == model_id }
end
.get_fallbacks(model) ⇒ Object
Get fallback models for a given model
# File 'lib/open_router/model_registry.rb', line 236

def get_fallbacks(model)
  model_info = get_model_info(model)
  model_info ? model_info[:fallbacks] || [] : []
end
.get_model_info(model) ⇒ Object
Get detailed information about a model
# File 'lib/open_router/model_registry.rb', line 255

def get_model_info(model)
  all_models[model]
end
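The returned hash uses the internal format built by .process_api_models; the model ID below is illustrative:

  info = OpenRouter::ModelRegistry.get_model_info("openai/gpt-4o")  # illustrative ID
  if info
    info[:context_length]
    info[:capabilities]                # e.g. [:chat, :function_calling, ...]
    info[:cost_per_1k_tokens][:input]
  end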
.has_capability?(model, capability) ⇒ Boolean
Check if a model has a specific capability
# File 'lib/open_router/model_registry.rb', line 247

def has_capability?(model, capability)
  model_info = get_model_info(model)
  return false unless model_info

  model_info[:capabilities].include?(capability)
end
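Capability symbols are the ones produced by .extract_capabilities (:chat, :function_calling, :structured_outputs, :vision, :long_context). The model IDs are illustrative:

  OpenRouter::ModelRegistry.has_capability?("openai/gpt-4o", :vision)
  OpenRouter::ModelRegistry.has_capability?("unknown/model", :function_calling)  # false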
.model_exists?(model) ⇒ Boolean
Check if a model exists in the registry
# File 'lib/open_router/model_registry.rb', line 242

def model_exists?(model)
  all_models.key?(model)
end
.models_meeting_requirements(requirements = {}) ⇒ Object
Get all models that meet requirements (without sorting)
# File 'lib/open_router/model_registry.rb', line 229

def models_meeting_requirements(requirements = {})
  all_models.select do |_model, specs|
    meets_requirements?(specs, requirements)
  end
end
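Unlike .find_best_model, this returns every matching entry, unranked, as a Hash sliced from .all_models:

  matches = OpenRouter::ModelRegistry.models_meeting_requirements
  matches.each do |id, specs|
    puts "#{id}: #{specs[:performance_tier]}"
  end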
.process_api_models(api_models) ⇒ Object
Convert API model data to our internal format
# File 'lib/open_router/model_registry.rb', line 142

def process_api_models(api_models)
  models = {}

  api_models.each do |model_data|
    model_id = model_data["id"]

    models[model_id] = {
      name: model_data["name"],
      cost_per_1k_tokens: {
        input: model_data.dig("pricing", "prompt").to_f,
        output: model_data.dig("pricing", "completion").to_f
      },
      context_length: model_data["context_length"],
      capabilities: extract_capabilities(model_data),
      description: model_data["description"],
      supported_parameters: model_data["supported_parameters"] || [],
      architecture: model_data["architecture"],
      performance_tier: determine_performance_tier(model_data),
      fallbacks: determine_fallbacks(model_id, model_data),
      created_at: model_data["created"]
    }
  end

  models
end
.read_cache_if_fresh ⇒ Object
Read cache only if it’s fresh
# File 'lib/open_router/model_registry.rb', line 87

def read_cache_if_fresh
  return nil if cache_stale?
  return nil unless File.exist?(CACHE_DATA_FILE)

  JSON.parse(File.read(CACHE_DATA_FILE))
rescue JSON::ParserError
  nil
end
.refresh! ⇒ Object
Refresh models data from API
# File 'lib/open_router/model_registry.rb', line 104

def refresh!
  clear_cache!
  fetch_and_cache_models
end
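A quick sketch for forcing fresh data, for example after changing the cache TTL or when the on-disk cache is suspect:

  OpenRouter::ModelRegistry.refresh!      # drops the file and memory caches, then re-fetches
  OpenRouter::ModelRegistry.all_models    # now backed by the freshly cached data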
.write_cache_with_timestamp(models_data) ⇒ Object
Write cache with timestamp metadata
# File 'lib/open_router/model_registry.rb', line 71

def write_cache_with_timestamp(models_data)
  ensure_cache_dir

  # Write the actual models data
  File.write(CACHE_DATA_FILE, JSON.pretty_generate(models_data))

  # Write metadata with timestamp
  metadata = {
    "cached_at" => Time.now.to_i,
    "version" => "1.0",
    "source" => "openrouter_api"
  }
  File.write(CACHE_METADATA_FILE, JSON.pretty_generate(metadata))
end