Class: DSPy::Events::LLMEvent
- Inherits: T::Struct
  - Object
  - T::Struct
  - DSPy::Events::LLMEvent
- Defined in: lib/dspy/events/types.rb
Overview
LLM operation events following the OpenTelemetry GenAI (gen_ai.*) semantic conventions
Constant Summary
- VALID_PROVIDERS =
  T.let(
    ['openai', 'anthropic', 'google', 'azure', 'ollama', 'together', 'groq', 'cohere'].freeze,
    T::Array[String]
  )
Instance Method Summary
- #initialize(name:, provider:, model:, timestamp: Time.now, usage: nil, duration_ms: nil, temperature: nil, max_tokens: nil, stream: nil) ⇒ LLMEvent (constructor)
  A new instance of LLMEvent.
- #to_attributes ⇒ Object
- #to_otel_attributes ⇒ Object
Constructor Details
#initialize(name:, provider:, model:, timestamp: Time.now, usage: nil, duration_ms: nil, temperature: nil, max_tokens: nil, stream: nil) ⇒ LLMEvent
Returns a new instance of LLMEvent.
# File 'lib/dspy/events/types.rb', line 54

def initialize(name:, provider:, model:, timestamp: Time.now, usage: nil, duration_ms: nil, temperature: nil, max_tokens: nil, stream: nil)
  unless VALID_PROVIDERS.include?(provider.downcase)
    raise ArgumentError, "Invalid provider '#{provider}'. Must be one of: #{VALID_PROVIDERS.join(', ')}"
  end

  super(
    name: name,
    timestamp: timestamp,
    provider: provider.downcase,
    model: model,
    usage: usage,
    duration_ms: duration_ms,
    temperature: temperature,
    max_tokens: max_tokens,
    stream: stream
  )
end
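The provider validation can be exercised in isolation. The following is a minimal stand-in sketch in plain Ruby (no sorbet-runtime, no DSPy gem; the `SketchLLMEvent` class name is hypothetical, with `VALID_PROVIDERS` copied from the constant above) showing the validation and lowercase normalization behavior:

```ruby
# Illustrative stand-in mirroring LLMEvent's provider check; not the gem's actual class.
VALID_PROVIDERS = ['openai', 'anthropic', 'google', 'azure', 'ollama', 'together', 'groq', 'cohere'].freeze

class SketchLLMEvent
  attr_reader :name, :provider, :model

  def initialize(name:, provider:, model:)
    unless VALID_PROVIDERS.include?(provider.downcase)
      raise ArgumentError, "Invalid provider '#{provider}'. Must be one of: #{VALID_PROVIDERS.join(', ')}"
    end
    @name = name
    @provider = provider.downcase # normalized to lowercase, as in the real constructor
    @model = model
  end
end

event = SketchLLMEvent.new(name: 'llm.generate', provider: 'OpenAI', model: 'gpt-4o-mini')
puts event.provider # => "openai"

begin
  SketchLLMEvent.new(name: 'llm.generate', provider: 'not-a-provider', model: 'x')
rescue ArgumentError => e
  puts e.message
end
```

Note that mixed-case provider names like 'OpenAI' are accepted and stored lowercase, while unknown providers fail fast with a descriptive ArgumentError.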
Instance Method Details
#to_attributes ⇒ Object
# File 'lib/dspy/events/types.rb', line 91

def to_attributes
  result = to_otel_attributes.dup
  result[:timestamp] = timestamp.iso8601
  result[:provider] = provider
  result[:model] = model
  result[:duration_ms] = duration_ms if duration_ms
  result
end
#to_otel_attributes ⇒ Object
# File 'lib/dspy/events/types.rb', line 71

def to_otel_attributes
  attrs = {
    'gen_ai.system' => provider,
    'gen_ai.request.model' => model
  }

  if usage
    attrs['gen_ai.usage.prompt_tokens'] = usage.prompt_tokens
    attrs['gen_ai.usage.completion_tokens'] = usage.completion_tokens
    attrs['gen_ai.usage.total_tokens'] = usage.total_tokens
  end

  attrs['gen_ai.request.temperature'] = temperature if temperature
  attrs['gen_ai.request.max_tokens'] = max_tokens if max_tokens
  attrs['gen_ai.request.stream'] = stream if stream
  attrs['duration_ms'] = duration_ms if duration_ms

  attrs
end
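The attribute mapping above can be reproduced as a standalone function. This sketch uses an OpenStruct in place of the usage struct and hypothetical sample values; it shows which gen_ai.* keys appear and that nil optional fields are omitted:

```ruby
require 'ostruct'

# Standalone sketch of the gen_ai.* attribute mapping; sample values are illustrative.
def to_otel_attributes(provider:, model:, usage: nil, temperature: nil, max_tokens: nil, stream: nil, duration_ms: nil)
  attrs = {
    'gen_ai.system' => provider,
    'gen_ai.request.model' => model
  }

  if usage
    attrs['gen_ai.usage.prompt_tokens'] = usage.prompt_tokens
    attrs['gen_ai.usage.completion_tokens'] = usage.completion_tokens
    attrs['gen_ai.usage.total_tokens'] = usage.total_tokens
  end

  attrs['gen_ai.request.temperature'] = temperature if temperature
  attrs['gen_ai.request.max_tokens'] = max_tokens if max_tokens
  attrs['gen_ai.request.stream'] = stream if stream
  attrs['duration_ms'] = duration_ms if duration_ms

  attrs
end

usage = OpenStruct.new(prompt_tokens: 12, completion_tokens: 34, total_tokens: 46)
attrs = to_otel_attributes(
  provider: 'openai',
  model: 'gpt-4o-mini',
  usage: usage,
  temperature: 0.7,
  duration_ms: 250
)
puts attrs['gen_ai.usage.total_tokens'] # => 46
```

Because max_tokens and stream were not supplied, 'gen_ai.request.max_tokens' and 'gen_ai.request.stream' are absent from the result, so exporters only receive attributes that were actually set.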