Class: MachineLearningWorkbench::Optimizer::NaturalEvolutionStrategies::Base

Inherits: Object
Defined in:
lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb

Overview

Natural Evolution Strategies base class

Direct Known Subclasses

BDNES, RNES, SNES, XNES

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(ndims, obj_fn, opt_type, rseed: nil, mu_init: 0, sigma_init: 1, parallel_fit: false, rescale_popsize: 1, rescale_lrate: 1, utilities: nil, popsize: nil, lrate: nil) ⇒ Base

NES object initialization

Parameters:

  • ndims (Integer)

    number of parameters to optimize

  • obj_fn (#call)

    any object defining a #call method (Proc, lambda, custom class)

  • opt_type (:min, :max)

    select minimization / maximization of obj_fn

  • rseed (Integer) (defaults to: nil)

    seed for the random number generator, allowing deterministic execution when provided

  • mu_init (Numeric) (defaults to: 0)

    value to initialize the distribution’s mean

  • sigma_init (Numeric) (defaults to: 1)

    value to initialize the distribution’s covariance

  • parallel_fit (Boolean) (defaults to: false)

    whether the `obj_fn` should be passed all the individuals together. In the canonical case the fitness function always scores a single individual; in practical cases though it is easier to delegate the scoring parallelization to the external fitness function. Turning this to `true` will make the algorithm pass _an Array_ of individuals to the fitness function, rather than a single instance.

  • rescale_popsize (Float) (defaults to: 1)

    scaling for the default population size

  • rescale_lrate (Float) (defaults to: 1)

    scaling for the default learning rate

Raises:

  • (ArgumentError)


# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 23

def initialize ndims, obj_fn, opt_type, rseed: nil, mu_init: 0, sigma_init: 1, parallel_fit: false, rescale_popsize: 1, rescale_lrate: 1, utilities: nil, popsize: nil, lrate: nil
  raise ArgumentError, "opt_type: #{opt_type}" unless [:min, :max].include? opt_type
  raise ArgumentError, "obj_fn not callable: #{obj_fn}" unless obj_fn.respond_to? :call
  raise ArgumentError, "utilities only if popsize" if utilities && popsize.nil?
  raise ArgumentError, "wrong sizes" if utilities && utilities.size != popsize
  raise ArgumentError, "minimum popsize 5 for default utilities" if popsize&.<(5) && utilities.nil?
  @ndims, @opt_type, @obj_fn, @parallel_fit = ndims, opt_type, obj_fn, parallel_fit
  @rescale_popsize, @rescale_lrate = rescale_popsize, rescale_lrate # rescale defaults
  @utilities, @popsize, @lrate = utilities, popsize, lrate # if not set, defaults below
  @eye = NArray.eye(ndims)
  rseed ||= Random.new_seed
  # puts "NES rseed: #{rseed}"  # currently disabled
  @rng = Random.new rseed
  @best = [(opt_type==:max ? -1 : 1) * Float::INFINITY, nil]
  @last_fits = []
  initialize_distribution mu_init: mu_init, sigma_init: sigma_init
end
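The guard clauses above can be exercised without the gem; the following is an illustrative pure-Ruby re-statement of the constructor's argument checks (the `check` lambda is a sketch for this page, not a library API):

```ruby
# Illustrative re-statement of the constructor's validation rules.
check = lambda do |opt_type, obj_fn, utilities: nil, popsize: nil|
  raise ArgumentError, "opt_type: #{opt_type}" unless [:min, :max].include?(opt_type)
  raise ArgumentError, "obj_fn not callable: #{obj_fn}" unless obj_fn.respond_to?(:call)
  raise ArgumentError, "utilities only if popsize" if utilities && popsize.nil?
  raise ArgumentError, "wrong sizes" if utilities && utilities.size != popsize
  raise ArgumentError, "minimum popsize 5 for default utilities" if popsize&.<(5) && utilities.nil?
  :ok
end

check.call(:min, ->(ind) { ind.sum })                           # => :ok
check.call(:max, method(:puts), utilities: [1, 2], popsize: 2)  # => :ok (custom utilities lift the popsize floor)
```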

Instance Attribute Details

#best ⇒ Object (readonly)

Returns the value of attribute best.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def best
  @best
end

#eye ⇒ Object (readonly)

Returns the value of attribute eye.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def eye
  @eye
end

#last_fits ⇒ Object (readonly)

Returns the value of attribute last_fits.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def last_fits
  @last_fits
end

#mu ⇒ Object (readonly)

Returns the value of attribute mu.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def mu
  @mu
end

#ndims ⇒ Object (readonly)

Returns the value of attribute ndims.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def ndims
  @ndims
end

#obj_fn ⇒ Object (readonly)

Returns the value of attribute obj_fn.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def obj_fn
  @obj_fn
end

#opt_type ⇒ Object (readonly)

Returns the value of attribute opt_type.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def opt_type
  @opt_type
end

#parallel_fit ⇒ Object (readonly)

Returns the value of attribute parallel_fit.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def parallel_fit
  @parallel_fit
end

#rescale_lrate ⇒ Object (readonly)

Returns the value of attribute rescale_lrate.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def rescale_lrate
  @rescale_lrate
end

#rescale_popsize ⇒ Object (readonly)

Returns the value of attribute rescale_popsize.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def rescale_popsize
  @rescale_popsize
end

#rng ⇒ Object (readonly)

Returns the value of attribute rng.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def rng
  @rng
end

#sigma ⇒ Object (readonly)

Returns the value of attribute sigma.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 6

def sigma
  @sigma
end

Instance Method Details

#cmaes_lrate ⇒ Float

Magic numbers from CMA-ES (see `README` for citation)

Returns:

  • (Float)

    learning rate lower bound

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 77

def cmaes_lrate
  (3+Math.log(ndims)) / (5*Math.sqrt(ndims))
end
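For a concrete feel of this heuristic, it can be evaluated at a hypothetical problem size in plain Ruby (same formula as above):

```ruby
# Default-learning-rate heuristic evaluated at 100 parameters.
ndims = 100
lrate = (3 + Math.log(ndims)) / (5 * Math.sqrt(ndims))
# for 100 parameters the default learning rate is roughly 0.152
```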

#cmaes_popsize ⇒ Integer

Magic numbers from CMA-ES (see `README` for citation)

Returns:

  • (Integer)

    population size lower bound

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 83

def cmaes_popsize
  [5, 4 + (3*Math.log(ndims)).floor].max
end
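The population size grows logarithmically with the number of dimensions, with a floor of 5; a plain-Ruby restatement for two hypothetical problem sizes:

```ruby
# Default-popsize heuristic: logarithmic growth with a floor of 5.
popsize_for = ->(ndims) { [5, 4 + (3 * Math.log(ndims)).floor].max }

popsize_for.call(1)    # => 5  (the lower bound kicks in)
popsize_for.call(100)  # => 17 (4 + floor(3 * ln 100))
```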

#cmaes_utilities ⇒ NArray

Magic numbers from CMA-ES (see `README` for citation)

Returns:

  • (NArray)

    scale-invariant utilities

Raises:

  • (ArgumentError)

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 62

def cmaes_utilities
  # Algorithm equations are meant for fitness maximization
  # Match utilities with individuals sorted by INCREASING fitness
  raise ArgumentError, "Minimum `popsize` should be 5 (is #{popsize})" if popsize < 5
  log_range = (1..popsize).collect do |v|
    [0, Math.log(popsize.to_f/2 - 1) - Math.log(v)].max
  end
  total = log_range.reduce(:+)
  buf = 1.0/popsize
  vals = log_range.collect { |v| v / total - buf }.reverse
  NArray[vals]
end
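By construction these utilities are zero-sum and ascending, so the best individual (sorted last) receives the largest weight. This can be checked with the same arithmetic in plain Ruby:

```ruby
# Recompute the utilities for a population of 10 and check their properties.
popsize   = 10
log_range = (1..popsize).map { |v| [0, Math.log(popsize / 2.0 - 1) - Math.log(v)].max }
total     = log_range.sum
utils     = log_range.map { |v| v / total - 1.0 / popsize }.reverse
# utils sums to ~0 and increases monotonically: the last (best) slot gets the top weight
```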

#interface_methods ⇒ Object

Declaring interface methods - implement these in child class!

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 146

[:train, :initialize_distribution, :convergence].each do |mname|
  define_method mname do
    raise NotImplementedError, "Implement in child class!"
  end
end

#lrate ⇒ Object

Memoized automatic magic number. Initialization options allow rescaling or entirely overriding it. NOTE: doubling popsize and halving lrate often helps.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 58

def lrate;   @lrate     ||= cmaes_lrate * rescale_lrate end

#move_inds(inds) ⇒ NArray

Move standard normal samples to current distribution

Returns:

  • (NArray)

    samples translated and scaled to the current search distribution

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 99

def move_inds inds
  # TODO: can we reduce the transpositions?

  # multi_mu = NMatrix[*inds.rows.times.collect {mu.to_a}, dtype: dtype].transpose
  # (multi_mu + sigma.dot(inds.transpose)).transpose

  mu_tile = mu.tile(inds.shape.first, 1).transpose
  (mu_tile + sigma.dot(inds.transpose)).transpose
end
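Per individual, `move_inds` computes the affine map mu + sigma·z. A 2-D sketch in plain Ruby (hypothetical numbers, no NArray) makes the stretch-and-shift explicit:

```ruby
# Map one standard-normal sample z into the current search distribution.
mu    = [1.0, 2.0]
sigma = [[2.0, 0.0],
         [0.0, 0.5]]          # covariance factor (diagonal here for clarity)
z     = [1.0, -1.0]           # one standard-normal sample
ind   = mu.each_index.map do |i|
  mu[i] + sigma[i].each_with_index.sum { |s, j| s * z[j] }
end
# the sample lands at [3.0, 1.5]: stretched by sigma, then shifted by mu
```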

#popsize ⇒ Object

Memoized automatic magic number. Initialization options allow rescaling or entirely overriding it. NOTE: doubling popsize and halving lrate often helps.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 56

def popsize; @popsize   ||= Integer(cmaes_popsize * rescale_popsize) end

#sorted_inds ⇒ Object

Sorted individuals. NOTE: algorithm equations are meant for fitness maximization; utilities are matched with individuals sorted by INCREASING fitness, then the order is reversed for minimization.

Returns:

  • standard normal samples sorted by the respective individuals’ fitnesses

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 113

def sorted_inds
  # Xumo::NArray implements the Box-Muller, but no random seed (yet)
  samples = standard_normal_samples
  # samples = NArray.new([popsize, ndims]).rand_norm(0,1)
  inds = move_inds(samples)
  fits = parallel_fit ? obj_fn.call(inds) : inds.map(&obj_fn)
  # Quick cure for NaN fitnesses (reassign: `map` does not mutate in place)
  fits = fits.map { |x| x.nan? ? (opt_type==:max ? -1 : 1) * Float::INFINITY : x }
  @last_fits = fits # allows checking for stagnation

  # BUG IN NARRAY SORT!! ruby-numo/numo-narray#97
  # sort_idxs = fits.sort_index
  sort_idxs = fits.size.times.sort_by { |i| fits[i] }.to_na
  sort_idxs = sort_idxs.reverse if opt_type == :min
  this_best = [fits[sort_idxs[-1]], inds[sort_idxs[-1], true]]
  opt_cmp_fn = opt_type==:min ? :< : :>
  @best = this_best if this_best.first.send(opt_cmp_fn, best.first)

  samples[sort_idxs, true]
end
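The NaN-cure and sort-index logic can be traced with plain Ruby Arrays (illustrative fitness values, no NArray):

```ruby
# Trace the NaN cure plus ascending sort for a maximization run.
opt_type = :max
fits = [3.0, Float::NAN, 1.0, 2.0]
# NaN scores are pushed to the losing end before sorting
fits = fits.map { |x| x.nan? ? (opt_type == :max ? -1 : 1) * Float::INFINITY : x }
sort_idxs = fits.size.times.sort_by { |i| fits[i] }.to_a
sort_idxs = sort_idxs.reverse if opt_type == :min
# ascending fitness: index 1 (the cured NaN) first, index 0 (the best) last
sort_idxs  # => [1, 2, 3, 0]
```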

#standard_normal_sample ⇒ Float

Note:

Xumo::NArray implements this but no random seed selection yet

Box-Muller transform: generates standard (unit) normal distribution samples

Returns:

  • (Float)

    a single sample from a standard normal distribution

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 44

def standard_normal_sample
  rho = Math.sqrt(-2.0 * Math.log(rng.rand))
  theta = 2 * Math::PI * rng.rand
  tfn = rng.rand > 0.5 ? :cos : :sin
  rho * Math.send(tfn, theta)
end
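Since Box-Muller should yield zero-mean, unit-variance samples, a quick empirical sanity check of the same transform (seeded for determinism) is:

```ruby
# Draw 50k Box-Muller samples and check their first two moments.
rng = Random.new(42)
samples = Array.new(50_000) do
  rho   = Math.sqrt(-2.0 * Math.log(rng.rand))
  theta = 2 * Math::PI * rng.rand
  rho * (rng.rand > 0.5 ? Math.cos(theta) : Math.sin(theta))
end
mean     = samples.sum / samples.size
variance = samples.sum { |x| (x - mean)**2 } / samples.size
# mean ≈ 0, variance ≈ 1
```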

#standard_normal_samples ⇒ NArray

Note:

Xumo::NArray implements this but no random seed selection yet

Samples a standard normal distribution to construct a NArray of popsize multivariate samples of length ndims

Returns:

  • (NArray)

    standard normal samples

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 91

def standard_normal_samples
  NArray.zeros([popsize, ndims]).tap do |ret|
    ret.each_with_index { |_,*i| ret[*i] = standard_normal_sample }
  end
end

#utils ⇒ Object

Memoized automatic magic number. Initialization options allow rescaling or entirely overriding it. NOTE: doubling popsize and halving lrate often helps.

# File 'lib/machine_learning_workbench/optimizer/natural_evolution_strategies/base.rb', line 54

def utils;   @utilities ||= cmaes_utilities end