Class: ActiveAgent::Providers::Common::Usage
- Defined in:
- lib/active_agent/providers/common/usage.rb
Overview
Normalizes token usage statistics across AI providers.
Providers return usage data in different formats with different field names. This model normalizes them into a consistent structure, automatically calculating total_tokens if not provided.
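For illustration, the normalization idea can be sketched with plain hashes (a minimal stand-in for the real class, not its actual API):

```ruby
# Two provider formats for the same statistics, mapped onto one shape.
# OpenAI Chat reports prompt_tokens/completion_tokens; Anthropic reports
# input_tokens/output_tokens and omits a total, which is then derived.
def normalize(usage)
  if usage.key?(:prompt_tokens) # OpenAI Chat-style field names
    input  = usage[:prompt_tokens]
    output = usage.fetch(:completion_tokens, 0)
  else                          # Anthropic-style field names
    input  = usage[:input_tokens]
    output = usage[:output_tokens]
  end
  { input_tokens: input,
    output_tokens: output,
    total_tokens: usage[:total_tokens] || input + output }
end

openai    = { prompt_tokens: 12, completion_tokens: 30, total_tokens: 42 }
anthropic = { input_tokens: 12, output_tokens: 30 }
normalize(openai) == normalize(anthropic) # => true
```

The actual class does this mapping via the .from_* constructors documented below.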
Instance Attribute Summary
- #audio_tokens ⇒ Integer?
Available from OpenAI: sum of prompt_tokens_details.audio_tokens and completion_tokens_details.audio_tokens.
- #cache_creation_tokens ⇒ Integer?
Available from Anthropic: cache_creation_input_tokens.
- #cached_tokens ⇒ Integer?
Available from OpenAI (prompt_tokens_details.cached_tokens or input_tokens_details.cached_tokens) and Anthropic (cache_read_input_tokens).
- #duration_ms ⇒ Integer?
Available from Ollama: total_duration, converted from nanoseconds.
- #input_tokens ⇒ Integer
Normalized from prompt_tokens (OpenAI Chat/Embeddings, OpenRouter), input_tokens (OpenAI Responses API, Anthropic), or prompt_eval_count (Ollama).
- #output_tokens ⇒ Integer
Normalized from completion_tokens (OpenAI Chat, OpenRouter), output_tokens (OpenAI Responses API, Anthropic), or eval_count (Ollama); always 0 for OpenAI Embeddings, which produce no output tokens.
- #provider_details ⇒ Hash
Preserves provider-specific information that doesn’t fit the normalized structure.
- #reasoning_tokens ⇒ Integer?
Available from OpenAI Chat (completion_tokens_details.reasoning_tokens) and OpenAI Responses (output_tokens_details.reasoning_tokens).
- #service_tier ⇒ String?
Available from Anthropic: service_tier (“standard”, “priority”, “batch”).
- #total_tokens ⇒ Integer
Automatically calculated as input_tokens + output_tokens if not provided by the provider.
Class Method Summary
- .calculate_tokens_per_second(tokens, duration_ns) ⇒ Float?
- .convert_nanoseconds_to_ms(nanoseconds) ⇒ Integer?
- .from_anthropic(usage_hash) ⇒ Usage
Creates a Usage object from Anthropic usage data.
- .from_ollama(usage_hash) ⇒ Usage
Creates a Usage object from Ollama usage data.
- .from_openai_chat(usage_hash) ⇒ Usage
Creates a Usage object from OpenAI Chat Completion usage data.
- .from_openai_embedding(usage_hash) ⇒ Usage
Creates a Usage object from OpenAI Embedding API usage data.
- .from_openai_responses(usage_hash) ⇒ Usage
Creates a Usage object from OpenAI Responses API usage data.
- .from_openrouter(usage_hash) ⇒ Usage
Creates a Usage object from OpenRouter usage data.
- .from_provider_usage(usage_hash) ⇒ Usage?
Auto-detects the provider format and creates a normalized Usage object.
Instance Method Summary
- #+(other) ⇒ Usage
Sums all token counts from two Usage objects.
- #initialize(attributes = {}) ⇒ Usage (constructor)
Automatically calculates total_tokens if not provided.
Methods inherited from BaseModel
#<=>, #==, attribute, #deep_compact, #deep_dup, delegate_attributes, drop_attributes, inherited, #inspect, keys, #merge!, required_attributes, #serialize, #to_h, #to_hash
Constructor Details
#initialize(attributes = {}) ⇒ Usage
Automatically calculates total_tokens if not provided.
# File 'lib/active_agent/providers/common/usage.rb', line 123

def initialize(attributes = {})
  super
  # Calculate total_tokens if not provided
  self.total_tokens ||= (input_tokens || 0) + (output_tokens || 0)
end
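The ||= fallback above keeps a provider-supplied total and only computes one when it was omitted. A standalone sketch of that logic (plain Ruby, not the actual class):

```ruby
# Mirrors the constructor's total_tokens fallback: a provider-supplied
# total wins; otherwise the total is derived from the two parts, with
# nil parts treated as zero.
def resolve_total(total, input, output)
  total || (input || 0) + (output || 0)
end

resolve_total(100, 40, 50)   # => 100 (provider value kept as-is)
resolve_total(nil, 40, 50)   # => 90  (calculated)
resolve_total(nil, nil, nil) # => 0
```

Note that a provider-supplied total is never recomputed, even when it disagrees with input_tokens + output_tokens.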
Instance Attribute Details
#audio_tokens ⇒ Integer?
Available from:
- OpenAI: sum of prompt_tokens_details.audio_tokens and completion_tokens_details.audio_tokens
# File 'lib/active_agent/providers/common/usage.rb', line 80
attribute :audio_tokens, :integer
#cache_creation_tokens ⇒ Integer?
Available from:
- Anthropic: cache_creation_input_tokens
# File 'lib/active_agent/providers/common/usage.rb', line 87
attribute :cache_creation_tokens, :integer
#cached_tokens ⇒ Integer?
Available from:
- OpenAI: prompt_tokens_details.cached_tokens or input_tokens_details.cached_tokens
- Anthropic: cache_read_input_tokens
# File 'lib/active_agent/providers/common/usage.rb', line 65
attribute :cached_tokens, :integer
#duration_ms ⇒ Integer?
Available from:
- Ollama: total_duration (converted from nanoseconds)
# File 'lib/active_agent/providers/common/usage.rb', line 101
attribute :duration_ms, :integer
#input_tokens ⇒ Integer
Normalized from:
- OpenAI Chat/Embeddings: prompt_tokens
- OpenAI Responses API: input_tokens
- Anthropic: input_tokens
- Ollama: prompt_eval_count
- OpenRouter: prompt_tokens
# File 'lib/active_agent/providers/common/usage.rb', line 39
attribute :input_tokens, :integer, default: 0
#output_tokens ⇒ Integer
Normalized from:
- OpenAI Chat: completion_tokens
- OpenAI Responses API: output_tokens
- Anthropic: output_tokens
- Ollama: eval_count
- OpenRouter: completion_tokens
- OpenAI Embeddings: 0 (no output tokens)
# File 'lib/active_agent/providers/common/usage.rb', line 51
attribute :output_tokens, :integer, default: 0
#provider_details ⇒ Hash
Preserves provider-specific information that doesn’t fit the normalized structure. Useful for debugging or provider-specific features.
# File 'lib/active_agent/providers/common/usage.rb', line 108
attribute :provider_details, default: -> { {} }
#reasoning_tokens ⇒ Integer?
Available from:
- OpenAI Chat: completion_tokens_details.reasoning_tokens
- OpenAI Responses: output_tokens_details.reasoning_tokens
# File 'lib/active_agent/providers/common/usage.rb', line 73
attribute :reasoning_tokens, :integer
#service_tier ⇒ String?
Available from:
- Anthropic: service_tier (“standard”, “priority”, “batch”)
# File 'lib/active_agent/providers/common/usage.rb', line 94
attribute :service_tier, :string
#total_tokens ⇒ Integer
Automatically calculated as input_tokens + output_tokens if not provided by provider.
# File 'lib/active_agent/providers/common/usage.rb', line 57
attribute :total_tokens, :integer
Class Method Details
.calculate_tokens_per_second(tokens, duration_ns) ⇒ Float?
# File 'lib/active_agent/providers/common/usage.rb', line 377

def self.calculate_tokens_per_second(tokens, duration_ns)
  return nil unless tokens && duration_ns && duration_ns > 0
  (tokens.to_f / (duration_ns / 1_000_000_000.0)).round(2)
end
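A worked example of the formula (standalone sketch): Ollama reports durations in nanoseconds, so the duration is divided by 1e9 to get seconds before computing the rate.

```ruby
# Same arithmetic as the method above: tokens divided by duration in
# seconds, rounded to two decimal places; nil or zero durations guard
# against division by zero.
def tokens_per_second(tokens, duration_ns)
  return nil unless tokens && duration_ns && duration_ns > 0
  (tokens.to_f / (duration_ns / 1_000_000_000.0)).round(2)
end

tokens_per_second(120, 2_000_000_000) # => 60.0 (120 tokens in 2 s)
tokens_per_second(120, 0)             # => nil
```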
.convert_nanoseconds_to_ms(nanoseconds) ⇒ Integer?
# File 'lib/active_agent/providers/common/usage.rb', line 368

def self.convert_nanoseconds_to_ms(nanoseconds)
  return nil unless nanoseconds
  (nanoseconds / 1_000_000.0).round
end
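Dividing by 1e6 as a Float and then rounding matters here (standalone sketch): integer division would truncate sub-millisecond durations to zero.

```ruby
# Same conversion as the method above: nanoseconds to milliseconds,
# rounded to the nearest integer; nil passes through for missing fields.
def ns_to_ms(nanoseconds)
  return nil unless nanoseconds
  (nanoseconds / 1_000_000.0).round
end

ns_to_ms(1_500_000) # => 2 (rounds, where integer division would give 1)
ns_to_ms(nil)       # => nil
```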
.from_anthropic(usage_hash) ⇒ Usage
Creates a Usage object from Anthropic usage data.
# File 'lib/active_agent/providers/common/usage.rb', line 257

def self.from_anthropic(usage_hash)
  return nil unless usage_hash

  usage = usage_hash.deep_symbolize_keys
  new(
    **usage.slice(:input_tokens, :output_tokens, :service_tier),
    input_tokens: usage[:input_tokens] || 0,
    output_tokens: usage[:output_tokens] || 0,
    cached_tokens: usage[:cache_read_input_tokens],
    cache_creation_tokens: usage[:cache_creation_input_tokens],
    provider_details: usage.slice(:cache_creation, :server_tool_use).compact
  )
end
.from_ollama(usage_hash) ⇒ Usage
Creates a Usage object from Ollama usage data.
# File 'lib/active_agent/providers/common/usage.rb', line 284

def self.from_ollama(usage_hash)
  return nil unless usage_hash

  usage = usage_hash.deep_symbolize_keys
  new(
    input_tokens: usage[:prompt_eval_count] || 0,
    output_tokens: usage[:eval_count] || 0,
    duration_ms: convert_nanoseconds_to_ms(usage[:total_duration]),
    provider_details: {
      load_duration_ms: convert_nanoseconds_to_ms(usage[:load_duration]),
      prompt_eval_duration_ms: convert_nanoseconds_to_ms(usage[:prompt_eval_duration]),
      eval_duration_ms: convert_nanoseconds_to_ms(usage[:eval_duration]),
      tokens_per_second: calculate_tokens_per_second(usage[:eval_count], usage[:eval_duration])
    }.compact
  )
end
.from_openai_chat(usage_hash) ⇒ Usage
Creates a Usage object from OpenAI Chat Completion usage data.
# File 'lib/active_agent/providers/common/usage.rb', line 168

def self.from_openai_chat(usage_hash)
  return nil unless usage_hash

  usage = usage_hash.deep_symbolize_keys
  prompt_details = usage[:prompt_tokens_details] || {}
  completion_details = usage[:completion_tokens_details] || {}
  audio_sum = [
    prompt_details[:audio_tokens],
    completion_details[:audio_tokens]
  ].compact.sum

  new(
    **usage.slice(:total_tokens),
    input_tokens: usage[:prompt_tokens] || 0,
    output_tokens: usage[:completion_tokens] || 0,
    cached_tokens: prompt_details[:cached_tokens],
    reasoning_tokens: completion_details[:reasoning_tokens],
    audio_tokens: audio_sum > 0 ? audio_sum : nil,
    provider_details: usage.slice(:prompt_tokens_details, :completion_tokens_details).compact
  )
end
.from_openai_embedding(usage_hash) ⇒ Usage
Creates a Usage object from OpenAI Embedding API usage data.
# File 'lib/active_agent/providers/common/usage.rb', line 201

def self.from_openai_embedding(usage_hash)
  return nil unless usage_hash

  usage = usage_hash.deep_symbolize_keys
  new(
    **usage.slice(:total_tokens),
    input_tokens: usage[:prompt_tokens] || 0,
    output_tokens: 0, # Embeddings don't generate output tokens
    provider_details: usage.except(:prompt_tokens, :total_tokens)
  )
end
.from_openai_responses(usage_hash) ⇒ Usage
Creates a Usage object from OpenAI Responses API usage data.
# File 'lib/active_agent/providers/common/usage.rb', line 227

def self.from_openai_responses(usage_hash)
  return nil unless usage_hash

  usage = usage_hash.deep_symbolize_keys
  input_details = usage[:input_tokens_details] || {}
  output_details = usage[:output_tokens_details] || {}

  new(
    **usage.slice(:input_tokens, :output_tokens, :total_tokens),
    input_tokens: usage[:input_tokens] || 0,
    output_tokens: usage[:output_tokens] || 0,
    cached_tokens: input_details[:cached_tokens],
    reasoning_tokens: output_details[:reasoning_tokens],
    provider_details: usage.slice(:input_tokens_details, :output_tokens_details).compact
  )
end
.from_openrouter(usage_hash) ⇒ Usage
Creates a Usage object from OpenRouter usage data.
OpenRouter uses the same format as OpenAI Chat Completion.
# File 'lib/active_agent/providers/common/usage.rb', line 315

def self.from_openrouter(usage_hash)
  from_openai_chat(usage_hash)
end
.from_provider_usage(usage_hash) ⇒ Usage?
Auto-detects the provider format and creates a normalized Usage object.
Detection is based on hash structure rather than native gem types because we cannot force-load all provider gems. This allows the framework to work with only the gems the user has installed.
# File 'lib/active_agent/providers/common/usage.rb', line 330

def self.from_provider_usage(usage_hash)
  return nil unless usage_hash.is_a?(Hash)

  usage = usage_hash.deep_symbolize_keys

  # Detect Ollama by presence of nanosecond duration fields
  if usage.key?(:total_duration)
    from_ollama(usage_hash)
  # Detect Anthropic by presence of cache_creation or service_tier
  elsif usage.key?(:cache_creation) || usage.key?(:service_tier)
    from_anthropic(usage_hash)
  # Detect OpenAI Responses API by input_tokens/output_tokens with details
  elsif usage.key?(:input_tokens) && usage.key?(:input_tokens_details)
    from_openai_responses(usage_hash)
  # Detect OpenAI Chat/OpenRouter by prompt_tokens/completion_tokens
  elsif usage.key?(:completion_tokens)
    from_openai_chat(usage_hash)
  # Detect OpenAI Embedding by prompt_tokens without completion_tokens
  elsif usage.key?(:prompt_tokens)
    from_openai_embedding(usage_hash)
  # Default to raw initialization
  else
    new(usage_hash)
  end
end
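The branch order matters: keys are checked from most provider-specific to most generic, so the Ollama and Anthropic branches must run before the shared prompt_tokens fallback. A standalone sketch of that detection logic (returning labels instead of Usage objects):

```ruby
# Mirrors the detection order of from_provider_usage: specific marker keys
# (total_duration, cache_creation/service_tier, *_tokens_details) are
# checked before the generic OpenAI-style keys.
def detect_format(usage)
  if usage.key?(:total_duration)                                       then :ollama
  elsif usage.key?(:cache_creation) || usage.key?(:service_tier)       then :anthropic
  elsif usage.key?(:input_tokens) && usage.key?(:input_tokens_details) then :openai_responses
  elsif usage.key?(:completion_tokens)                                 then :openai_chat
  elsif usage.key?(:prompt_tokens)                                     then :openai_embedding
  else                                                                      :unknown
  end
end

detect_format(total_duration: 5_000_000)              # => :ollama
detect_format(prompt_tokens: 8, completion_tokens: 2) # => :openai_chat
detect_format(prompt_tokens: 8)                       # => :openai_embedding
```

Swapping the last two branches, for example, would misclassify every Chat response that happened to include prompt_tokens, which is all of them.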
Instance Method Details
#+(other) ⇒ Usage
Sums all token counts from two Usage objects.
# File 'lib/active_agent/providers/common/usage.rb', line 141

def +(other)
  return self unless other

  self.class.new(
    input_tokens: self.input_tokens + other.input_tokens,
    output_tokens: self.output_tokens + other.output_tokens,
    total_tokens: self.total_tokens + other.total_tokens,
    cached_tokens: sum_optional(self.cached_tokens, other.cached_tokens),
    cache_creation_tokens: sum_optional(self.cache_creation_tokens, other.cache_creation_tokens),
    reasoning_tokens: sum_optional(self.reasoning_tokens, other.reasoning_tokens),
    audio_tokens: sum_optional(self.audio_tokens, other.audio_tokens)
  )
end
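The listing above relies on a private sum_optional helper whose source is not shown on this page. A plausible nil-preserving sketch (an assumption about its behavior, not the library's actual code): summing two nils stays nil, so "never reported" remains distinguishable from "reported as zero" after aggregation.

```ruby
# Hypothetical stand-in for the undocumented sum_optional helper:
# nil + nil => nil; otherwise nil is treated as 0 and the values are added.
def sum_optional(a, b)
  return nil if a.nil? && b.nil?
  (a || 0) + (b || 0)
end

sum_optional(nil, nil) # => nil
sum_optional(5, nil)   # => 5
sum_optional(5, 7)     # => 12
```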