Module: ActiveAgent::Providers::Instrumentation

Extended by:
ActiveSupport::Concern
Included in:
BaseProvider
Defined in:
lib/active_agent/providers/concerns/instrumentation.rb

Overview

Builds instrumentation event payloads for ActiveSupport::Notifications.

Extracts request parameters and response metadata for monitoring, debugging, and APM integration (New Relic, DataDog, etc.).

Event Payloads

Top-Level Events (overall request lifecycle):

prompt.active_agent

Initial: `{ model:, temperature:, max_tokens:, message_count:, has_tools:, stream: }`
Final: `{ usage: { input_tokens:, output_tokens:, total_tokens: }, finish_reason:, response_model:, response_id: }`
Note: Usage is cumulative across all API calls in multi-turn conversations

embed.active_agent

Initial: `{ model:, input_size:, encoding_format:, dimensions: }`
Final: `{ usage: { input_tokens:, total_tokens: }, embedding_count:, response_model:, response_id: }`

Provider-Level Events (per API call):

prompt.provider.active_agent

Initial: `{ model:, temperature:, max_tokens:, message_count:, has_tools:, stream: }`
Final: `{ usage: { input_tokens:, output_tokens:, total_tokens: }, finish_reason:, response_model:, response_id: }`
Note: Usage is per individual API call

embed.provider.active_agent

Initial: `{ model:, input_size:, encoding_format:, dimensions: }`
Final: `{ usage: { input_tokens:, total_tokens: }, embedding_count:, response_model:, response_id: }`
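The cumulative-versus-per-call distinction above can be sketched in plain Ruby: the top-level event's usage is the sum of every provider-level call's usage in a multi-turn conversation (token counts here are illustrative):

```ruby
# Usage reported by two individual provider-level API calls.
per_call_usage = [
  { input_tokens: 100, output_tokens: 20, total_tokens: 120 },
  { input_tokens: 140, output_tokens: 35, total_tokens: 175 }
]

# The top-level event reports the sum across all calls.
cumulative = per_call_usage.each_with_object(Hash.new(0)) do |usage, totals|
  usage.each { |key, count| totals[key] += count }
end
# cumulative => { input_tokens: 240, output_tokens: 55, total_tokens: 295 }
```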

Instance Method Summary

Instance Method Details

#instrumentation_prompt_payload(payload, request, response) ⇒ void

This method returns an undefined value.

Builds and merges payload data for prompt instrumentation events.

Populates both request parameters and response metadata for top-level and provider-level events. Token usage data is critical for APM cost tracking and performance monitoring.

Parameters:

  • payload (Hash)

    instrumentation payload to merge into

  • request (Request)

    request object with parameters

  • response (Common::PromptResponse)

    completed response with normalized data



# File 'lib/active_agent/providers/concerns/instrumentation.rb', line 46

def instrumentation_prompt_payload(payload, request, response)
  # message_count: prefer the request/input messages (pre-call), fall back to
  # response messages only if the request doesn't expose messages. New Relic
  # expects parameters[:messages] to be the request messages and computes
  # total message counts by adding response choices to that count.
  message_count = safe_access(request, :messages)&.size
  message_count = safe_access(response, :messages)&.size if message_count.nil?

  payload.merge!(trace_id: trace_id, message_count: message_count || 0, stream: !!safe_access(request, :stream))

  # Common parameters: prefer response-normalized values, then request
  payload[:model]       = safe_access(response, :model) || safe_access(request, :model)
  payload[:temperature] = safe_access(request, :temperature)
  payload[:max_tokens]  = safe_access(request, :max_tokens)
  payload[:top_p]       = safe_access(request, :top_p)

  # Tools / instructions
  if (tools_val = safe_access(request, :tools))
    payload[:has_tools]  = tools_val.respond_to?(:present?) ? tools_val.present? : !!tools_val
    payload[:tool_count] = tools_val&.size || 0
  end

  if (instr_val = safe_access(request, :instructions))
    payload[:has_instructions] = instr_val.respond_to?(:present?) ? instr_val.present? : !!instr_val
  end

  # Usage (normalized)
  if response.usage
    usage = response.usage
    payload[:usage] = {
      input_tokens:  usage.input_tokens,
      output_tokens: usage.output_tokens,
      total_tokens:  usage.total_tokens
    }

    payload[:usage][:cached_tokens]         = usage.cached_tokens         if usage.cached_tokens
    payload[:usage][:cache_creation_tokens] = usage.cache_creation_tokens if usage.cache_creation_tokens
    payload[:usage][:reasoning_tokens]      = usage.reasoning_tokens      if usage.reasoning_tokens
    payload[:usage][:audio_tokens]          = usage.audio_tokens          if usage.audio_tokens
  end

  # Response metadata
  payload[:finish_reason]  = safe_access(response, :finish_reason)
  payload[:response_model] = safe_access(response, :model)
  payload[:response_id]    = safe_access(response, :id)

  # Build messages list: prefer request messages; if unavailable use prior
  # response messages (all but the final generated message).
  if (req_msgs = safe_access(request, :messages)).is_a?(Array)
    payload[:messages] = req_msgs.map { |m| extract_message_hash(m, false) }
  else
    prior = safe_access(response, :messages)
    prior = prior[0...-1] if prior.is_a?(Array) && prior.size > 1
    if prior.is_a?(Array) && prior.any?
      payload[:messages] = prior.map { |m| extract_message_hash(m, false) }
    end
  end

  # Build a parameters hash that mirrors what New Relic's OpenAI
  # instrumentation expects. This makes it easy for APM adapters to
  # map our provider payload to their LLM event constructors.
  parameters = {}
  parameters[:model]       = payload[:model]       if payload[:model]
  parameters[:max_tokens]  = payload[:max_tokens]  if payload[:max_tokens]
  parameters[:temperature] = payload[:temperature] if payload[:temperature]
  parameters[:top_p]       = payload[:top_p]       if payload[:top_p]
  parameters[:stream]      = payload[:stream]
  parameters[:messages]    = payload[:messages]    if payload[:messages]

  # Include tools/instructions where available — New Relic ignores unknown keys,
  # but having them here makes the parameter shape closer to OpenAI's.
  if (tools = safe_access(request, :tools))
    parameters[:tools] = tools
  end
  if (instructions = safe_access(request, :instructions))
    parameters[:instructions] = instructions
  end

  payload[:parameters] = parameters

  # Attach raw response (provider-specific) so downstream APM integrations
  # can inspect the provider response if needed. Use the normalized raw_response
  # available on the Common::Response when possible.
  begin
    payload[:response_raw] = response.raw_response if response.respond_to?(:raw_response) && response.raw_response
  rescue StandardError
    # ignore
  end
end
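The method leans on a `safe_access` helper defined elsewhere in the concern; its implementation is not shown here, but its assumed semantics are: try a method call first, fall back to hash-style lookup, and return nil instead of raising when neither works. A sketch under those assumptions:

```ruby
# Hypothetical sketch of the safe_access helper used above (assumed
# semantics, not the library's actual implementation).
def safe_access(object, key)
  if object.respond_to?(key)
    object.public_send(key)   # method-style access (e.g. request.model)
  elsif object.respond_to?(:[])
    object[key]               # hash-style access (e.g. payload[:model])
  end
rescue StandardError
  nil                         # never raise from instrumentation
end

Request = Struct.new(:model, :temperature, keyword_init: true)
req = Request.new(model: "gpt-4o", temperature: 0.2)

safe_access(req, :model)                  # => "gpt-4o"
safe_access({ model: "gpt-4o" }, :model)  # => "gpt-4o"
safe_access(req, :missing_key)            # => nil, no NoMethodError
```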