Class: NN

Inherits:
Object
Defined in:
lib/neuroevo/nn.rb

Overview

TODO: separate activation functions in class?

Direct Known Subclasses

FFNN, RNN

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(struct, act_fn: nil) ⇒ NN

Initialization


# File 'lib/neuroevo/nn.rb', line 25

def initialize struct, act_fn: nil
  @struct = struct
  @act_fn = self.class.act_fn(act_fn || :sigmoid)
  # @state holds the inputs, possibly the recurrent activations, and the bias;
  # it is the complete input for the next layer, hence sized from layer_row_sizes
  @state = layer_row_sizes.collect do |size|
    NMatrix.zeros([1, size], dtype: :float64)
  end
  # to this, append a matrix to hold the final network output
  @state.push NMatrix.zeros([1, nneurs(-1)], dtype: :float64)
  reset_state
end
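For intuition on the sizes the constructor works with, here is a plain-Ruby sketch (no NMatrix) of how a feedforward subclass would derive its layer shapes and weight count from `struct`. The `+1` per row accounts for the bias unit and is an assumption matching the feedforward case, not something this abstract class defines:

```ruby
# struct: number of inputs, then number of neurons per layer
struct = [2, 3, 1]

# feedforward assumption: each layer reads the previous layer's output plus one bias unit
layer_row_sizes = struct[0...-1].map { |n| n + 1 }   # inputs per layer
layer_col_sizes = struct.drop(1)                     # neurons per layer
layer_shapes    = layer_row_sizes.zip(layer_col_sizes)

# total weights: one per (input, neuron) pair in each layer
nweights = layer_shapes.sum { |rows, cols| rows * cols }
puts layer_shapes.inspect  # [[3, 3], [4, 1]]
puts nweights              # 3*3 + 4*1 = 13
```

A weight vector of exactly this length is what `#load_weights` later distributes across the layer matrices.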

Instance Attribute Details

#act_fn ⇒ Object (readonly)

act_fn: the activation function applied at each neuron; the good ol' sigmoid will do for starters.

# File 'lib/neuroevo/nn.rb', line 20

def act_fn
  @act_fn
end

#layers ⇒ Object (readonly)

layers: list of matrices, each holding the weights connecting a layer's inputs (rows) to its neurons (columns); each matrix's shape is hence [ninputs, nneurs].

# File 'lib/neuroevo/nn.rb', line 20

def layers
  @layers
end

#state ⇒ Object (readonly)

state: list of single-row matrices, each being (the output or) an input to the next layer. Each matrix is a concatenation of the previous layer's output (or the network inputs), possibly this layer's last activation (recurrence), and a bias (fixed `1`).

# File 'lib/neuroevo/nn.rb', line 20

def state
  @state
end

#struct ⇒ Object (readonly)

struct: the structure of the network: how many inputs (or neurons) in each layer, hence a list of integers.

# File 'lib/neuroevo/nn.rb', line 20

def struct
  @struct
end

Class Method Details

.act_fn(type, *args) ⇒ Object

Activation functions


# File 'lib/neuroevo/nn.rb', line 134

def self.act_fn type, *args
  fn = send(type,*args)
  lambda do |inputs|
    NMatrix.build([1, inputs.size], dtype: :float64) do |_,i|
      # single-row matrix, indices are columns
      fn.call inputs[i]
    end
  end
end
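`.act_fn` wraps a scalar function into one that maps over a whole single-row NMatrix. A plain-Array stand-in (hypothetical `act_fn_sketch`, no NMatrix) shows the same vectorization pattern:

```ruby
# wrap a scalar lambda into one applied element-wise over an Array of inputs
def act_fn_sketch(fn)
  lambda { |inputs| inputs.map { |x| fn.call(x) } }
end

step = lambda { |x| x > 0 ? 1.0 : 0.0 }
vec_step = act_fn_sketch(step)
p vec_step.call([-1.0, 0.0, 2.0])  # [0.0, 0.0, 1.0]
```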

.lecun_hyperbolicObject


# File 'lib/neuroevo/nn.rb', line 157

def self.lecun_hyperbolic
  # http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf -- Section 4.4
  lambda { |x| 1.7159 * Math.tanh(2.0*x/3.0) + 1e-3*x }
end

.logisticObject


# File 'lib/neuroevo/nn.rb', line 150

def self.logistic
  lambda { |x|
    exp = Math.exp(x)
    # saturate to 1.0 when Math.exp overflows to Infinity
    exp.infinite? ? 1.0 : exp / (1.0 + exp)
  }
end

.sigmoid(k = 0.5) ⇒ Object


# File 'lib/neuroevo/nn.rb', line 144

def self.sigmoid k=0.5
  # k is steepness: 0<k<1 is flatter, 1<k is steeper
  # flatter makes the activation less sensitive; better with a large number of inputs
  lambda { |x| 1.0 / (Math.exp(-k * x) + 1.0) }
end
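A quick sanity check of the three activation functions, with the lambdas copied out of the class methods above into pure Ruby (no NMatrix); note the logistic variant here saturates to 1.0 when `Math.exp` overflows:

```ruby
sigmoid  = lambda { |x| 1.0 / (Math.exp(-0.5 * x) + 1.0) }       # k = 0.5
logistic = lambda { |x|
  exp = Math.exp(x)
  exp.infinite? ? 1.0 : exp / (1.0 + exp)                        # overflow guard
}
lecun    = lambda { |x| 1.7159 * Math.tanh(2.0 * x / 3.0) + 1e-3 * x }

puts sigmoid.call(0)      # 0.5 -- centered at the origin
puts logistic.call(1000)  # 1.0 -- saturates instead of returning Infinity
puts lecun.call(0)        # 0.0 -- odd function, zero-centered
```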

Instance Method Details

#activate(input) ⇒ Object


# File 'lib/neuroevo/nn.rb', line 112

def activate input
  raise "Hell!" unless input.size == struct.first
  raise "Hell!" unless input.is_a? Array
  # load input in first state
  @state[0][0, 0..-2] = input
  # activate layers in sequence
  (0...nlayers).each do |i|
    act = activate_layer i
    @state[i+1][0,0...act.size] = act
  end
  return out
end
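Conceptually, each step of `#activate` appends the bias to the current signal, multiplies by the layer's weight matrix, and applies the activation. A minimal pure-Ruby sketch of that forward pass (hypothetical `forward`, nested Arrays instead of NMatrix, feedforward only):

```ruby
# layers: Array of weight matrices, each [ninputs+bias rows][nneurs columns]
def forward(input, layers, act)
  layers.reduce(input) do |signal, weights|
    with_bias = signal + [1.0]                  # fixed bias unit, as in @state
    ncols = weights.first.size
    (0...ncols).map do |j|                      # one weighted sum per neuron
      sum = with_bias.each_with_index.reduce(0.0) { |s, (x, i)| s + x * weights[i][j] }
      act.call(sum)
    end
  end
end

identity = lambda { |x| x }
layers = [[[1.0], [2.0], [0.5]]]                # 2 inputs + bias -> 1 neuron
p forward([1.0, 1.0], layers, identity)         # [3.5] = 1*1 + 1*2 + 1*0.5
```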

#biasObject

Activation


# File 'lib/neuroevo/nn.rb', line 108

def bias
  @bias ||= NMatrix[[1], dtype: :float64]
end

#deep_resetObject

method #deep_reset will be needed when playing with structure modification


# File 'lib/neuroevo/nn.rb', line 56

def deep_reset
  # reset memoization
  [:@layer_row_sizes, :@layer_col_sizes, :@nlayers, :@layer_shapes,
   :@nweights_per_layer, :@nweights].each do |sym|
     instance_variable_set sym, nil
  end
  reset_state
end
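The helpers memoize their results with `||=`, so `#deep_reset` clears the caches by nil-ing the backing instance variables via `instance_variable_set`. The pattern in isolation (hypothetical `Memo` class, with a call counter to make the recomputation visible):

```ruby
class Memo
  attr_reader :calls

  def shapes
    @shapes ||= expensive_shapes   # computed once, then cached
  end

  def expensive_shapes
    @calls = (@calls || 0) + 1
    [[3, 3], [4, 1]]
  end

  def deep_reset
    instance_variable_set :@shapes, nil   # drop the cache, as NN#deep_reset does
  end
end

m = Memo.new
m.shapes; m.shapes
p m.calls       # 1 -- second call hit the cache
m.deep_reset
m.shapes
p m.calls       # 2 -- recomputed after the reset
```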

#init_randomObject


# File 'lib/neuroevo/nn.rb', line 47

def init_random
  # Will only be used for testing, no sense optimizing it (NMatrix#rand)
  # Reusing #load_weights instead helps catching bugs
  load_weights nweights.times.collect { rand -1.0..1.0 }
end

#layer_col_sizesObject

number of neurons per layer (excludes input)


# File 'lib/neuroevo/nn.rb', line 81

def layer_col_sizes # number of neurons per layer (excludes input)
  @layer_col_sizes ||= struct.drop(1)
end

#layer_shapesObject

define #layer_row_sizes in child class: number of inputs per layer


# File 'lib/neuroevo/nn.rb', line 87

def layer_shapes
  @layer_shapes ||= layer_row_sizes.zip layer_col_sizes
end

#load_weights(weights) ⇒ Object


# File 'lib/neuroevo/nn.rb', line 95

def load_weights weights
  raise "Hell!" unless weights.size == nweights
  weights_iter = weights.each
  @layers = layer_shapes.collect do |shape|
    NMatrix.build(shape, dtype: :float64) { weights_iter.next }
  end
  reset_state
  return true
end

#nlayersObject


# File 'lib/neuroevo/nn.rb', line 73

def nlayers
  @nlayers ||= layer_shapes.size
end

#nneurs(nlay = nil) ⇒ Object


# File 'lib/neuroevo/nn.rb', line 91

def nneurs nlay=nil
  nlay.nil? ? struct.reduce(:+) : struct[nlay]
end

#nweightsObject


# File 'lib/neuroevo/nn.rb', line 65

def nweights
  @nweights ||= nweights_per_layer.reduce(:+)
end

#nweights_per_layerObject


# File 'lib/neuroevo/nn.rb', line 69

def nweights_per_layer
  @nweights_per_layer ||= layer_shapes.collect { |shape| shape.reduce(:*) }
end

#outObject


# File 'lib/neuroevo/nn.rb', line 125

def out
  state.last.to_flat_a # activation of output layer (as 1-dim Array)
end

#reset_stateObject


# File 'lib/neuroevo/nn.rb', line 38

def reset_state
  @state.each do |m| # state has only single-row matrices
    # reset all to zero
    m[0,0..-1] = 0
    # add bias to all but output
    m[0,-1] = 1 unless m.object_id == @state.last.object_id
  end
end

#layer_row_sizes, #activate_layer ⇒ Object

Interface to implement in child classes


# File 'lib/neuroevo/nn.rb', line 165

[:layer_row_sizes, :activate_layer].each do |sym|
  define_method sym do |*args|
    raise NotImplementedError, "Implement ##{sym} in child class!"
  end
end
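This `define_method` loop is Ruby's idiom for declaring abstract methods: each generated method raises until a subclass overrides it. A self-contained sketch of the pattern (hypothetical `AbstractNet` and `FlatNet`, standing in for NN and its subclasses):

```ruby
class AbstractNet
  # declare the abstract interface; calling either method raises until overridden
  [:layer_row_sizes, :activate_layer].each do |sym|
    define_method sym do |*args|
      raise NotImplementedError, "Implement ##{sym} in child class!"
    end
  end
end

class FlatNet < AbstractNet
  def layer_row_sizes
    [3, 4]   # hypothetical: previous layer size + 1 bias, per layer
  end
end

net = FlatNet.new
p net.layer_row_sizes   # [3, 4] -- overridden, no longer raises
begin
  net.activate_layer(0) # still abstract in this child
rescue NotImplementedError => e
  puts e.message
end
```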

#weightsObject


# File 'lib/neuroevo/nn.rb', line 77

def weights
  layers.collect &:true_to_a
end