Class: Tasker::Telemetry::LogBackend

Inherits:
Object
Includes:
Singleton
Defined in:
lib/tasker/telemetry/log_backend.rb

Overview

LogBackend provides thread-safe structured logging for events

This class implements Tasker's core logging system with thread-safe operations, automatic EventRouter integration, and structured log data collection. It complements Rails logging with structured event data suitable for log aggregation systems like ELK, Splunk, or Fluentd.

The backend follows the same singleton pattern as MetricsBackend for consistency and provides structured log data with correlation IDs and contextual information.

Examples:

Basic usage

backend = LogBackend.instance
backend.log_event('task.started', { task_id: '123', level: 'info' })

EventRouter integration

# Automatic log collection based on event routing
backend.handle_event('task.failed', { task_id: '123', error: 'timeout', level: 'error' })
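
Correlation ID usage (illustrative; the correlation identifier is resolved by the extract_correlation_id helper, and the payload key shown here is an assumption)

# Carrying a correlation identifier through related events
backend.handle_event('task.completed', {
  task_id: '123',
  correlation_id: 'req-abc-789',
  level: 'info'
})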

Constant Summary

LOG_LEVELS =

Log levels in order of severity

%w[debug info warn error fatal].freeze
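
As a sketch, the constant can be used to pre-validate a caller-supplied level, mirroring the check that #export and #recent_entries perform:

requested = 'error'
if Tasker::Telemetry::LogBackend::LOG_LEVELS.include?(requested)
  Tasker::Telemetry::LogBackend.instance.recent_entries(level: requested)
end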

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize ⇒ LogBackend

Returns a new instance of LogBackend.



# File 'lib/tasker/telemetry/log_backend.rb', line 41

def initialize
  @logs = Concurrent::Hash.new { |h, k| h[k] = Concurrent::Array.new }
  @created_at = Time.current
  @instance_id = Socket.gethostname
end
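
The default block is what keeps writes lock-free: reading a missing key lazily creates a Concurrent::Array bucket for that level. A minimal sketch of the same construct outside the class:

logs = Concurrent::Hash.new { |h, k| h[k] = Concurrent::Array.new }
logs['error'] << { message: 'boom' } # per-level bucket created on first access
logs[:all]    << { message: 'boom' } # separate chronological bucket
logs.keys                            # => ["error", :all]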

Instance Attribute Details

#created_at ⇒ Time (readonly)

Backend creation timestamp for monitoring

Returns:

  • (Time)

    When this backend was initialized



# File 'lib/tasker/telemetry/log_backend.rb', line 36

def created_at
  @created_at
end

#logs ⇒ Concurrent::Hash (readonly)

Core log storage for structured events. Uses Concurrent::Hash for thread-safe operations without locks.

Returns:

  • (Concurrent::Hash)

    Thread-safe log storage



# File 'lib/tasker/telemetry/log_backend.rb', line 32

def logs
  @logs
end

Instance Method Details

#export(level: nil) ⇒ Hash

Export all collected log data

Parameters:

  • level (String, nil) (defaults to: nil)

    Specific log level to export, or nil for all

Returns:

  • (Hash)

    Log data with metadata



# File 'lib/tasker/telemetry/log_backend.rb', line 96

def export(level: nil)
  logs_to_export = if level && LOG_LEVELS.include?(level.to_s)
                     { level.to_s => @logs[level.to_s].to_a }
                   else
                     @logs.each_with_object({}) { |(k, v), h| h[k.to_s] = v.to_a }
                   end

  {
    logs: logs_to_export,
    metadata: {
      backend: 'log',
      instance_id: @instance_id,
      created_at: @created_at.iso8601,
      exported_at: Time.current.iso8601,
      total_entries: @logs[:all]&.size || 0,
      level_counts: LOG_LEVELS.index_with { |l| @logs[l]&.size || 0 }
    }
  }
end
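
For illustration, exporting only error-level entries and reading the metadata produced above:

backend = Tasker::Telemetry::LogBackend.instance
report  = backend.export(level: 'error')
report[:logs]['error']            # => array of structured log entries
report[:metadata][:level_counts]  # => { "debug" => 0, ..., "fatal" => 0 }
report[:metadata][:exported_at]   # => ISO8601 timestamp string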

#handle_event(event_name, payload = {}) ⇒ Boolean

Handle an event from EventRouter and collect appropriate log data

This method is called by EventRouter when an event should be routed to the log backend. It automatically creates structured log entries based on event type and payload.

Examples:

Automatic usage via EventRouter

# EventRouter calls this automatically:
backend.handle_event('task.failed', {
  task_id: '123',
  error: 'Payment gateway timeout',
  context: { user_id: 456, amount: 100.0 }
})

Parameters:

  • event_name (String)

    The lifecycle event name

  • payload (Hash) (defaults to: {})

    Event payload with log data

Returns:

  • (Boolean)

    True if log data was collected successfully



# File 'lib/tasker/telemetry/log_backend.rb', line 65

def handle_event(event_name, payload = {})
  return false unless payload.is_a?(Hash)

  log_entry = {
    timestamp: Time.current.iso8601,
    event_name: event_name,
    level: determine_log_level(event_name),
    message: build_log_message(event_name, payload),
    payload: payload,
    instance_id: @instance_id,
    correlation_id: extract_correlation_id(payload)
  }

  # Store log entry by level for organized retrieval
  level = log_entry[:level]
  @logs[level] << log_entry

  # Also store in chronological order
  @logs[:all] << log_entry

  true
rescue StandardError => e
  # Log error but don't raise to prevent breaking the event flow
  Rails.logger&.error("LogBackend error handling #{event_name}: #{e.message}")
  false
end
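
The boolean return is advisory: non-Hash payloads are rejected and unexpected errors are logged and swallowed so the event flow is never interrupted.

backend.handle_event('task.started', { task_id: '123' })  # => true
backend.handle_event('task.started', nil)                 # => false (payload must be a Hash)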

#recent_entries(limit: 100, level: nil) ⇒ Array<Hash>

Get recent log entries

Parameters:

  • limit (Integer) (defaults to: 100)

    Number of recent entries to return

  • level (String, nil) (defaults to: nil)

    Specific log level to filter by

Returns:

  • (Array<Hash>)

    Recent log entries



# File 'lib/tasker/telemetry/log_backend.rb', line 140

def recent_entries(limit: 100, level: nil)
  entries = if level && LOG_LEVELS.include?(level.to_s)
              @logs[level.to_s].to_a
            else
              @logs[:all].to_a
            end

  entries.last(limit)
end
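
Typical usage, assuming entries have already been collected:

backend = Tasker::Telemetry::LogBackend.instance
backend.recent_entries(limit: 20, level: 'error')  # last 20 error-level entries
backend.recent_entries                             # last 100 entries across all levels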

#reset! ⇒ void

This method returns an undefined value.

Clear all log data (primarily for testing)



# File 'lib/tasker/telemetry/log_backend.rb', line 119

def reset!
  @logs.clear
end
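
A sketch of test-suite usage (the RSpec hook is an assumption; any framework's setup callback works the same way):

RSpec.configure do |config|
  config.before do
    Tasker::Telemetry::LogBackend.instance.reset!  # isolate log state between examples
  end
end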

#stats ⇒ Hash

Get log statistics

Returns:

  • (Hash)

    Statistics about collected logs



# File 'lib/tasker/telemetry/log_backend.rb', line 126

def stats
  {
    total_entries: @logs[:all]&.size || 0,
    level_counts: LOG_LEVELS.index_with { |level| @logs[level]&.size || 0 },
    backend_uptime: Time.current - @created_at,
    instance_id: @instance_id
  }
end