Class: ServerMetrics::Processes
- Inherits: Object
- Defined in: lib/server_metrics/collectors/processes.rb
Overview
Collects information on processes. Groups processes running under the same command and sums up their CPU & memory usage. CPU is calculated **since the last run**, as a percentage of the overall CPU available during that timeframe. See www.linuxquestions.org/questions/linux-general-1/per-process-cpu-utilization-557577/ for background on the approach.
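A minimal usage sketch (the require path and the 60-second interval are assumptions, not prescribed by this class). Because CPU is computed relative to the previous sample, the first run only establishes a baseline:

require 'server_metrics' # assumed require path

collector = ServerMetrics::Processes.new
collector.run          # first run: records the baseline; CPU figures are not yet meaningful
sleep 60               # wait out a sampling interval (illustrative)
report = collector.run

# report is keyed by command, e.g.:
# {"mysqld" => {:cmd => "mysqld", :count => 1, :cpu => 34, :memory => 2, :cmd_lines => [...]}, ...}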
Defined Under Namespace
Classes: Process
Class Method Summary
-
.from_hash(hash) ⇒ Object
For re-instantiating from a hash. Why not just use Marshal? A plain hash is a lot more manageable when written to the Scout agent's history file.
Instance Method Summary
-
#calculate_processes ⇒ Object
Called from #run.
-
#get_jiffies ⇒ Object
Relies on the /proc directory (/proc/timer_list).
-
#get_top_processes(order_by, num) ⇒ Object
Can only be called after @processes is set.
-
#initialize(options = {}) ⇒ Processes
constructor
A new instance of Processes.
-
#run ⇒ Object
Returns a hash keyed by command, e.g. {'mysqld' => { :cmd => "mysqld", :count => 1, :cpu => 34, :memory => 2, :cmd_lines => ["cmd args1", "cmd args2"] }, 'apache' => { ... } } (see the full example below).
-
#to_hash ⇒ Object
For persisting to a file; conforms to the same basic API as the Collectors do.
Constructor Details
#initialize(options = {}) ⇒ Processes
Returns a new instance of Processes.
# File 'lib/server_metrics/collectors/processes.rb', line 9

def initialize(options = {})
  @last_run
  @last_jiffies
  @last_process_list
end
Class Method Details
.from_hash(hash) ⇒ Object
For re-instantiating from a hash. Why not just use Marshal? A plain hash is a lot more manageable when written to the Scout agent's history file.
# File 'lib/server_metrics/collectors/processes.rb', line 140

def self.from_hash(hash)
  p = new(hash[:options])
  p.instance_variable_set('@last_run', hash[:last_run])
  p.instance_variable_set('@last_jiffies', hash[:last_jiffies])
  p.instance_variable_set('@last_process_list', hash[:last_process_list])
  p
end
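A hedged sketch of re-instantiating a collector from saved state; history_hash is a hypothetical variable holding the Hash that #to_hash previously wrote to the Scout agent's history file:

# history_hash is assumed to have been produced by #to_hash and read back from disk.
collector = ServerMetrics::Processes.from_hash(history_hash)
collector.run # CPU percentages are now measured relative to the restored @last_run / @last_jiffies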
Instance Method Details
#calculate_processes ⇒ Object
Called from #run. This method lists all the processes running on the server, groups them by command, and calculates CPU time for each process. Since CPU time has to be calculated relative to the last sample, the collector has to be run twice to get CPU data.
# File 'lib/server_metrics/collectors/processes.rb', line 54

def calculate_processes
  num_processors = ServerMetrics::SystemInfo.num_processors

  ## 1. get a list of all processes
  processes = Sys::ProcTable.ps.map { |p| ServerMetrics::Processes::Process.new(p) } # our Process object adds some behavior

  ## 2. loop through each process and calculate the CPU time.
  # The CPU values returned by ProcTable are cumulative for the life of the process, which is not what we want.
  # So, we rely on @last_process_list to make this calculation. If a process wasn't around last time, we use its
  # cumulative CPU time so far, which will be accurate enough.
  now = Time.now
  current_jiffies = get_jiffies
  if @last_run && @last_jiffies && @last_process_list
    elapsed_time = now - @last_run # in seconds
    elapsed_jiffies = current_jiffies - @last_jiffies
    if elapsed_time >= 1
      processes.each do |p|
        if last_cpu = @last_process_list[p.pid]
          p.recent_cpu = p.combined_cpu - last_cpu
        else
          p.recent_cpu = p.combined_cpu # this process wasn't around last time, so just use its cumulative CPU time so far
        end
        # a) p.recent_cpu / elapsed_jiffies = the CPU time this process used divided by the total "time slots" the CPU had available
        # b) * 100 turns it into a percentage
        # c) / num_processors normalizes for the number of processors in the system, so it reflects the CPU power available as a whole
        p.recent_cpu_percentage = ((p.recent_cpu.to_f / elapsed_jiffies.to_f) * 100.0) / num_processors.to_f
      end
    end
  end

  ## 3. group by command and aggregate the CPU
  grouped = {}
  processes.each do |proc|
    grouped[proc.comm] ||= {
      :cpu      => 0,
      :memory   => 0,
      :count    => 0,
      :cmdlines => []
    }
    grouped[proc.comm][:count] += 1
    grouped[proc.comm][:cpu] += proc.recent_cpu_percentage || 0
    if proc.respond_to?(:rss) # Mac doesn't return rss; Mac returns 0 for memory usage
      grouped[proc.comm][:memory] += proc.rss.to_f / 1024.0
    end
    grouped[proc.comm][:cmdlines] << proc.cmdline if !grouped[proc.comm][:cmdlines].include?(proc.cmdline)
  end # processes.each

  # {pid => cpu_snapshot, pid2 => cpu_snapshot ...}
  processes_to_store = processes.inject(Hash.new) do |hash, proc|
    hash[proc.pid] = proc.combined_cpu
    hash
  end

  @last_process_list = processes_to_store
  @last_jiffies = current_jiffies
  @last_run = now

  grouped
end
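A worked example of the percentage math in step 2 above (the numbers are illustrative):

# Suppose a process consumed 150 jiffies since the last sample, 600 jiffies
# elapsed on the system during that interval, and the machine has 4 processors.
recent_cpu      = 150
elapsed_jiffies = 600
num_processors  = 4

((recent_cpu.to_f / elapsed_jiffies) * 100.0) / num_processors
# => 6.25  -- the process used 6.25% of the machine's total CPU capacity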
#get_jiffies ⇒ Object
Relies on the /proc directory (/proc/timer_list). We need this because process CPU utilization is measured in jiffies. In order to calculate a process' % usage of total CPU resources, we need to know how many jiffies have passed. Unfortunately, the jiffy rate isn't fixed (it can vary between 100 and 250 per second), so we read the kernel's jiffy counter ourselves.
If /proc/timer_list isn't available, fall back to assuming 100 jiffies/second (10 milliseconds per jiffy).
# File 'lib/server_metrics/collectors/processes.rb', line 124

def get_jiffies
  if File.exist?('/proc/timer_list')
    `cat /proc/timer_list`.match(/^jiffies: (\d+)$/)[1].to_i
  else
    (Time.now.to_f * 100).to_i
  end
end
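A small sketch of how the jiffy counter is consumed by #calculate_processes (setup and timing are illustrative):

collector = ServerMetrics::Processes.new

j1 = collector.get_jiffies
sleep 1
j2 = collector.get_jiffies

j2 - j1 # roughly 100-250 on Linux, depending on the kernel; ~100 with the Time-based fallback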
#get_top_processes(order_by, num) ⇒ Object
Can only be called after @processes is set. Based on @processes, calculates the top num processes, ordered by order_by. Returns an array of hashes:
[{:cmd => "...", :cpu => 30.0, :memory => 100, :uid => 1, :cmdlines => []}, ...]
# File 'lib/server_metrics/collectors/processes.rb', line 115

def get_top_processes(order_by, num)
  @processes.map { |key, hash| {:cmd => key}.merge(hash) }.sort { |a, b| a[order_by] <=> b[order_by] }.reverse[0...num]
end
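A usage sketch; as noted above, @processes must already be populated, which #run does before calling this method (values shown are illustrative):

collector = ServerMetrics::Processes.new
collector.run # populates @processes

top_cpu    = collector.get_top_processes(:cpu, 5)    # five biggest CPU consumers
top_memory = collector.get_top_processes(:memory, 5) # five biggest memory consumers

top_cpu.first # => {:cmd => "mysqld", :cpu => 30.0, :memory => 100, :cmdlines => [...]}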
#run ⇒ Object
Returns a hash of the top processes by CPU and memory, keyed by command:
{'mysqld' =>
{
:cmd => "mysqld", # the command (without the path of arguments being run)
:count => 1, # the number of these processes (grouped by the above command)
:cpu => 34, # the total CPU usage of the processes
:memory => 2, # the total memory usage of the processes
:cmd_lines => ["cmd args1", "cmd args2"]
},
'apache' =>
{
....
}
}
# File 'lib/server_metrics/collectors/processes.rb', line 36

def run
  @processes = calculate_processes # returns a hash
  top_memory = get_top_processes(:memory, 10) # returns an array
  top_cpu    = get_top_processes(:cpu, 10)    # returns an array

  # combine the two and index by cmd. The indexing process will remove duplicates
  result = (top_cpu + top_memory).inject(Hash.new) { |temp_hash, process_hash| temp_hash[process_hash[:cmd]] = process_hash; temp_hash }

  # An alternate approach is to return a hash with two separate arrays. More explicit, but more verbose.
  # {
  #   :top_memory => get_top_processes(:memory, 10),
  #   :top_cpu    => get_top_processes(:cpu, 10)
  # }
end
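Because the returned hash merges the top-CPU and top-memory lists and indexes them by command, a caller might report on it like this (a sketch; the output format is hypothetical):

collector = ServerMetrics::Processes.new
report = collector.run

report.each do |cmd, stats|
  puts "#{cmd}: #{stats[:count]} process(es), #{stats[:cpu]}% CPU, #{stats[:memory]} memory"
end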
#to_hash ⇒ Object
For persisting to a file; conforms to the same basic API as the Collectors do. Why not just use Marshal? A plain hash is a lot more manageable when written to the Scout agent's history file.
# File 'lib/server_metrics/collectors/processes.rb', line 134

def to_hash
  {:last_run => @last_run, :last_jiffies => @last_jiffies, :last_process_list => @last_process_list}
end
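A sketch of persisting state between agent runs with #to_hash and .from_hash; the YAML format and file path are illustrative choices, not part of this library's API:

require 'yaml'

HISTORY_FILE = '/tmp/processes_history.yml' # hypothetical path

collector = ServerMetrics::Processes.new
collector.run

# Save the collector's state after a run...
File.write(HISTORY_FILE, YAML.dump(collector.to_hash))

# ...then, in a later agent run, restore it and sample again.
state = YAML.unsafe_load(File.read(HISTORY_FILE)) # YAML.load on Rubies older than 3.1
restored = ServerMetrics::Processes.from_hash(state)
restored.run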