Class: ServerMetrics::Processes

Inherits:
Object
Defined in:
lib/server_metrics/collectors/processes.rb

Overview

Collects information on processes. Groups processes running under the same command and sums their CPU & memory usage. CPU is calculated **since the last run**, and is a percentage of overall CPU usage during the time span since the instance was last run.

FAQ:

1) Top and htop show PIDs. Why doesn't this class? This class aggregates processes. So if you have 10 Apache processes running, it reports the total memory and CPU for all instances, and also reports that there are 10 processes.

2) Why are the process CPU numbers lower than in top/htop? We normalize CPU usage by the number of CPUs the server has; top and htop don't. So on an 8-CPU system, you'd expect these numbers to be almost an order of magnitude lower.

www.linuxquestions.org/questions/linux-general-1/per-process-cpu-utilization-557577/
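
A minimal usage sketch (assuming the gem is loaded with require 'server_metrics'; the sleep interval is only illustrative). Because CPU is measured since the previous run, the collector must be run twice before the CPU figures mean anything:

require 'server_metrics'   # assumption: the server_metrics gem is installed

processes = ServerMetrics::Processes.new

processes.run              # first run: primes the last-run snapshot; CPU values are 0
sleep 60                   # wait so there is a time span to measure CPU over
report = processes.run     # second run: CPU percentages are now meaningful

# report is keyed by command name, e.g. report['apache'][:count], report['apache'][:cpu]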

Defined Under Namespace

Classes: Process

Constant Summary

DEFAULT_PAGE_SIZE = 4096

Most common page size; used if the page size can't be retrieved. Units are bytes.

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(options = {}) ⇒ Processes

Returns a new instance of Processes.



# File 'lib/server_metrics/collectors/processes.rb', line 20

def initialize(options={})
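  # State carried between runs: the previous timestamp, jiffy count, and per-PID
  # CPU snapshot. These bare references are effectively no-ops (they evaluate to nil);
  # the values are populated at the end of calculate_processes.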
  @last_run
  @last_jiffies
  @last_process_list
  @proc_table_klass = ServerMetrics::SystemInfo.os =~ /linux/ ? SysLite::ProcTable : Sys::ProcTable # this is used in calculate_processes. On Linux, use our optimized version
end

Class Method Details

.from_hash(hash) ⇒ Object

For reinstantiating from a hash. Why not just use Marshal? This is a lot more manageable when written to the Scout agent's history file.



# File 'lib/server_metrics/collectors/processes.rb', line 139

def self.from_hash(hash)
  p=new(hash[:options])
  p.instance_variable_set('@last_run', hash[:last_run])
  p.instance_variable_set('@last_jiffies', hash[:last_jiffies])
  p.instance_variable_set('@last_process_list', hash[:last_process_list])
  p
end
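
A hedged sketch of the intended round trip: capture the collector's state with #to_hash, persist it however the agent prefers (the Scout agent writes it to its history file), then rebuild with .from_hash so the next run can still compute CPU deltas against the previous sample.

state = processes.to_hash   # {:last_run => ..., :last_jiffies => ..., :last_process_list => ...}

# ...later, possibly in a new agent process...
restored = ServerMetrics::Processes.from_hash(state)
restored.run                # CPU is calculated relative to the restored snapshot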

Instance Method Details

#calculate_processes ⇒ Object

Called from #run. This method lists all the processes running on the server, groups them by command, and calculates CPU time for each process. Since CPU time has to be calculated relative to the last sample, the collector has to be run twice to get CPU data.



# File 'lib/server_metrics/collectors/processes.rb', line 52

def calculate_processes
  ## 1. get a list of all processes
  processes = @proc_table_klass.ps.map{|p| ServerMetrics::Processes::Process.new(p) } # our Process object adds some behavior

  ## 2. loop through each process and calculate the CPU time.
  # The CPU values returned by ProcTable are cumulative for the life of the process, which is not what we want.
  # So, we rely on @last_process_list to make this calculation. If a process wasn't around last time, we use its cumulative CPU time so far, which will be accurate enough.
  now = Time.now
  current_jiffies = get_jiffies
  if @last_run && @last_jiffies && @last_process_list
    elapsed_time = now - @last_run # in seconds
    elapsed_jiffies = current_jiffies - @last_jiffies
    if elapsed_time >= 1
      processes.each do |p|
        if last_cpu = @last_process_list[p.pid]
          p.recent_cpu = p.combined_cpu - last_cpu
        else
          p.recent_cpu = p.combined_cpu # this process wasn't around last time, so just use the cumulative CPU time for its existence so far
        end
        # a) p.recent_cpu / elapsed_jiffies = the amount of CPU time this process has taken divided by the total "time slots" the CPU had available
        # b) * 100 ... this turns it into a percentage
        # c) / num_processors ... this normalizes for the number of processors in the system, so it reflects the share of the CPU power available on the machine as a whole
        p.recent_cpu_percentage = ((p.recent_cpu.to_f / elapsed_jiffies.to_f ) * 100.0) / num_processors.to_f
      end
    end
  end

  ## 3. group by command and aggregate the CPU
  grouped = {}
  processes.each do |proc|
    grouped[proc.comm] ||= {
        :cpu => 0,
        :memory => 0,
        :count => 0,
        :cmdlines => []
    }
    grouped[proc.comm][:count]    += 1
    grouped[proc.comm][:cpu]      += proc.recent_cpu_percentage || 0
    if proc.has?(:rss) # Mac doesn't return rss, so memory usage is reported as 0 there
      # rss is in pages; convert to bytes, then to MB
      grouped[proc.comm][:memory]   += (proc.rss.to_f*page_size) / 1024 / 1024
    end
    grouped[proc.comm][:cmdlines] << proc.cmdline if !grouped[proc.comm][:cmdlines].include?(proc.cmdline)
  end # processes.each

  # {pid => cpu_snapshot, pid2 => cpu_snapshot ...}
  processes_to_store = processes.inject(Hash.new) do |hash, proc|
    hash[proc.pid] = proc.combined_cpu
    hash
  end

  @last_process_list = processes_to_store
  @last_jiffies = current_jiffies
  @last_run = now

  grouped
end
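
To make the normalization concrete, here is a worked example of the formula above (the numbers are illustrative): a process that consumed 50 jiffies while 500 jiffies elapsed on a 4-processor machine used 2.5% of the machine's total CPU capacity.

recent_cpu      = 50    # jiffies this process consumed since the last run
elapsed_jiffies = 500   # jiffies elapsed across the same window
num_processors  = 4

((recent_cpu.to_f / elapsed_jiffies.to_f) * 100.0) / num_processors.to_f  #=> 2.5 (% of total CPU capacity)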

#get_jiffies ⇒ Object

We need this because the process CPU utilization is measured in jiffies. In order to calculate the process’ % usage of total CPU resources, we need to know how many jiffies have passed.

While the jiffy rate isn't fixed (it can vary between 100 and 250 per second), we assume 100 jiffies/second (10 milliseconds/jiffy) because that is the most common value.



# File 'lib/server_metrics/collectors/processes.rb', line 115

def get_jiffies
  (Time.now.to_f*100).to_i
end
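
A quick illustration of that assumption (values are approximate, since this is wall-clock time rather than real kernel jiffies): two calls five seconds apart differ by roughly 500.

j1 = (Time.now.to_f * 100).to_i   # same arithmetic as get_jiffies
sleep 5
(Time.now.to_f * 100).to_i - j1   #=> ~500 (100 assumed jiffies/second * 5 seconds)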

#num_processors ⇒ Object



# File 'lib/server_metrics/collectors/processes.rb', line 127

def num_processors
  @num_processors ||= ServerMetrics::SystemInfo.num_processors  
end

#page_size ⇒ Object

Sys::ProcTable.ps returns rss in pages, not in bytes. Returns the page size in bytes.



# File 'lib/server_metrics/collectors/processes.rb', line 121

def page_size
  @page_size ||= %x(getconf PAGESIZE).to_i
rescue
  @page_size = DEFAULT_PAGE_SIZE
end
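
For example, with the default 4096-byte page size, an rss of 25,600 pages works out to 100 MB; this is the same pages → bytes → MB conversion that calculate_processes performs (the rss value here is made up):

rss_pages = 25_600                          # as reported by ProcTable (pages)
page_size = 4096                            # bytes per page, from getconf PAGESIZE
(rss_pages * page_size) / 1024.0 / 1024.0   #=> 100.0 MB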

#run ⇒ Object

This is the main method to call. It returns a hash of processes, keyed by the executable name.

{
 'mysqld' =>
   {
    :cmd      => "mysqld",   # the command (without the path or arguments)
    :count    => 1,          # the number of these processes (grouped by the command above)
    :cpu      => 34,         # the percentage of the total computational resources available (across all cores/CPUs) that these processes are using
    :memory   => 2,          # the memory these processes are using, in MB
    :cmdlines => ["cmd args1", "cmd args2"]
   },
 'apache' =>
   {
    ....
   }
}



# File 'lib/server_metrics/collectors/processes.rb', line 44

def run
  @processes = calculate_processes # returns a hash
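  # copy each group's key (the command name) into the group itself under :cmd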
  @processes.keys.inject(@processes) { |processes, key| processes[key][:cmd] = key; processes }
end
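
A hedged example of consuming the return value, e.g. to report the five heaviest CPU consumers (assumes processes has already been run at least twice so the CPU figures are populated; the sorting and formatting are illustrative):

top_five = processes.run.sort_by { |_cmd, stats| -stats[:cpu] }.first(5)

top_five.each do |cmd, stats|
  puts format('%-15s %5.1f%% CPU  %8.1f MB  x%d', cmd, stats[:cpu], stats[:memory], stats[:count])
end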

#to_hash ⇒ Object

For persisting to a file; conforms to the same basic API as the Collectors do. Why not just use Marshal? This is a lot more manageable when written to the Scout agent's history file.



# File 'lib/server_metrics/collectors/processes.rb', line 133

def to_hash
  {:last_run=>@last_run, :last_jiffies=>@last_jiffies, :last_process_list=>@last_process_list}
end