Class: MachineLearningWorkbench::NeuralNetwork::Base

- Inherits: Object
- Defined in: lib/machine_learning_workbench/neural_network/base.rb
Overview
Neural Network base class
Instance Attribute Summary
- #act_fn ⇒ #call (readonly)
  Activation function, common to all neurons (for now).
- #act_fn_name ⇒ Object (readonly)
  Returns the value of attribute act_fn_name.
- #layers ⇒ Array<NArray> (readonly)
  List of matrices, each being the weights connecting a layer's inputs (rows) to a layer's neurons (columns), hence its shape is `[ninputs, nneurs]`. TODO: return a NArray after the usage of `#map` is figured out.
- #state ⇒ Array<NArray> (readonly)
  List of one-dimensional matrices, each an input to a layer, plus the output layer's output.
- #struct ⇒ Object (readonly)
  Returns the value of attribute struct.
Instance Method Summary
- #activate(input) ⇒ Array
  Activate the network on a given input.
- #deep_reset ⇒ Object
  Resets memoization: needed to play with structure modification.
- #init_random ⇒ Object
  Initialize the network with random weights.
- #initialize(struct, act_fn: nil) ⇒ Base (constructor)
  A new instance of Base.
- #interface_methods ⇒ Object
  Declares interface methods: implement these in child classes!
- #layer_col_sizes ⇒ Array
  Number of neurons per layer.
- #layer_shapes ⇒ Array<Array[Integer, Integer]>
  Shapes for the weight matrices, each corresponding to a layer.
- #lecun_hyperbolic ⇒ Object
  LeCun hyperbolic activation.
- #load_weights(weights) ⇒ true
  Loads a plain list of weights into the weight matrices (one per layer).
- #logistic ⇒ Object
  Traditional logistic.
- #nlayers ⇒ Integer
  Count the layers.
- #nneurs(nlay = nil) ⇒ Integer
  Count the neurons in a particular layer or in the whole network.
- #nweights ⇒ Integer
  Total weights in the network.
- #nweights_per_layer ⇒ Array<Integer>
  List of per-layer number of weights.
- #out ⇒ Array
  Extract and convert the output layer's activation.
- #relu ⇒ Object
  Rectified Linear Unit (ReLU).
- #reset_state ⇒ Object
  Reset the network to the initial state.
- #sigmoid(k = 0.5) ⇒ Object
  Traditional sigmoid with variable steepness.
- #weights ⇒ Array
  Returns the weight matrices as nested Arrays.
Constructor Details
#initialize(struct, act_fn: nil) ⇒ Base
Returns a new instance of Base.
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 29

def initialize struct, act_fn: nil
  @struct = struct
  @act_fn_name = act_fn || :sigmoid
  @act_fn = send(act_fn_name)
  # @state holds both inputs, possibly recurrency, and bias
  # it is a complete input for the next layer, hence size from layer sizes
  @state = layer_row_sizes.collect do |size|
    NArray.zeros [1, size]
  end
  # to this, append a matrix to hold the final network output
  @state.push NArray.zeros [1, nneurs(-1)]
  reset_state
end
```
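Note that `Base` cannot be instantiated directly: `#initialize` calls the abstract `#layer_row_sizes`, so construction goes through a concrete subclass. A minimal usage sketch, assuming the hypothetical `MinimalFFNN` subclass defined under #interface_methods below:

```ruby
# `MinimalFFNN` is a hypothetical subclass, sketched under #interface_methods.
net = MinimalFFNN.new [2, 3, 1]                 # 2 inputs, 3 hidden neurons, 1 output
net.act_fn_name                                 # => :sigmoid (the default)
net = MinimalFFNN.new [2, 3, 1], act_fn: :relu  # pick another activation
net.act_fn_name                                 # => :relu
```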
Instance Attribute Details
#act_fn ⇒ #call (readonly)
Activation function, common to all neurons (for now)
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 22

attr_reader :layers, :state, :act_fn, :act_fn_name, :struct
```
#act_fn_name ⇒ Object (readonly)
Returns the value of attribute act_fn_name.
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 22

def act_fn_name
  @act_fn_name
end
```
#layers ⇒ Array<NArray> (readonly)
List of matrices, each being the weights connecting a layer's inputs (rows) to a layer's neurons (columns), hence its shape is `[ninputs, nneurs]`. TODO: return a NArray after the usage of `#map` is figured out
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 22

def layers
  @layers
end
```
#state ⇒ Array<NArray> (readonly)
List of one-dimensional matrices, each an input to a layer, plus the output layer's output. The first element is the input to the first layer of the network, which is composed of the network's input, possibly the first layer's activation on the last input (recurrence), and a bias (fixed `1`). The second through second-to-last entries follow the same structure, but with the previous layer's output in place of the network's input. The last entry is the activation of the output layer, without additions since it is not used as an input by anyone. TODO: return a NArray after the usage of `#map` is figured out
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 22

attr_reader :layers, :state, :act_fn, :act_fn_name, :struct
```
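As a concrete illustration of this layout, a network with `struct` `[2, 3, 1]` holds three state matrices (a sketch, assuming the feed-forward row sizes of the hypothetical subclass under #interface_methods):

```ruby
# State shapes for struct [2, 3, 1] under the feed-forward assumption:
net.state[0].shape # => [1, 3]  (2 network inputs + 1 bias)
net.state[1].shape # => [1, 4]  (3 hidden activations + 1 bias)
net.state[2].shape # => [1, 1]  (1 output activation, nothing appended)
```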
#struct ⇒ Object (readonly)
Returns the value of attribute struct.
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 22

attr_reader :layers, :state, :act_fn, :act_fn_name, :struct
```
Instance Method Details
#activate(input) ⇒ Array
Activate the network on a given input
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 146

def activate input
  raise ArgumentError unless input.size == struct.first
  raise ArgumentError unless input.is_a? Array
  # load input in first state
  @state[0][0, 0..-2] = input
  # activate layers in sequence
  nlayers.times.each do |i|
    act = activate_layer i
    @state[i+1][0, 0...act.size] = act
  end
  return out
end
```
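A usage sketch (the subclass and weight values are illustrative assumptions):

```ruby
net = MinimalFFNN.new [2, 3, 1]        # hypothetical subclass from below
net.load_weights [0.1] * net.nweights  # arbitrary constant weights
net.activate [0.5, -0.3]               # => Array with the single output activation
```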
#deep_reset ⇒ Object
Resets memoization: needed to play with structure modification
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 62

def deep_reset
  # reset memoization
  [:@layer_row_sizes, :@layer_col_sizes, :@nlayers, :@layer_shapes,
   :@nweights_per_layer, :@nweights].each do |sym|
    instance_variable_set sym, nil
  end
  reset_state
end
```
#init_random ⇒ Object
Initialize the network with random weights
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 53

def init_random
  # Will only be used for testing, no sense optimizing it now (NArray#rand)
  # Reusing `#load_weights` instead helps catching bugs
  load_weights nweights.times.collect { rand(-1.0..1.0) }
end
```
#interface_methods ⇒ Object
Declares interface methods: implement these in child classes!
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 199

[:layer_row_sizes, :activate_layer].each do |sym|
  define_method sym do
    raise NotImplementedError, "Implement ##{sym} in child class!"
  end
end
```
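For illustration, a minimal hypothetical child class satisfying this interface could look as follows. This is a sketch under feed-forward assumptions (one bias entry per input row); the gem's actual subclasses may differ:

```ruby
# Hypothetical minimal feed-forward subclass (illustrative, not gem source).
class MinimalFFNN < MachineLearningWorkbench::NeuralNetwork::Base
  # Each layer's input row holds the previous layer's outputs plus one bias
  def layer_row_sizes
    @layer_row_sizes ||= struct.each_cons(2).collect { |prev, _| prev + 1 }
  end

  # Activate layer `i`: state row times weight matrix, through the activation
  def activate_layer i
    act_fn.call state[i].dot(layers[i])
  end
end
```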
#layer_col_sizes ⇒ Array
Number of neurons per layer. Although this implementation includes inputs in the layer counts, this method correctly ignores the inputs as not having neurons.
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 101

def layer_col_sizes
  @layer_col_sizes ||= struct.drop(1)
end
```
#layer_shapes ⇒ Array<Array[Integer, Integer]>
Shapes for the weight matrices, each corresponding to a layer
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 109

def layer_shapes
  @layer_shapes ||= layer_row_sizes.zip layer_col_sizes
end
```
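For example, with `struct` `[2, 3, 1]` and the feed-forward row sizes assumed in the sketch above:

```ruby
# Assuming layer_row_sizes == [3, 4] (inputs + bias per layer):
net.layer_col_sizes # => [3, 1]
net.layer_shapes    # => [[3, 3], [4, 1]]
net.nweights        # => 13 (9 + 4)
```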
#lecun_hyperbolic ⇒ Object
LeCun hyperbolic activation
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 187

def lecun_hyperbolic
  lambda { |x| 1.7159 * Numo::NMath.tanh(2.0*x/3.0) + 1e-3*x }
end
```
#load_weights(weights) ⇒ true
Loads a plain list of weights into the weight matrices (one per layer). Preserves order. Reuses allocated memory if available.
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 127

def load_weights weights
  raise ArgumentError unless weights.size == nweights
  weights_iter = weights.each
  @layers ||= layer_shapes.collect { |shape| NArray.zeros shape }
  layers.each do |narr|
    narr.each_with_index do |_val, *idxs|
      narr[*idxs] = weights_iter.next
    end
  end
  reset_state
  return true
end
```
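Since #weights returns the matrices as nested arrays in the same order #load_weights consumes them, a round-trip makes a handy sanity check (sketch, `net` as in the sketches above):

```ruby
flat = net.weights.flatten  # plain list, one entry per weight
flat.size == net.nweights   # => true
net.load_weights flat       # => true, matrices unchanged
```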
#logistic ⇒ Object
Traditional logistic
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 177

def logistic
  lambda { |x|
    exp = Numo::NMath.exp(x)
    # exp.infinite? ? exp : exp / (1.0 + exp)
    exp / (1.0 + exp)
  }
end
```
#nlayers ⇒ Integer
Count the layers. This is a computation helper, and for this implementation the inputs are considered as if they were a layer like the others.
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 86

def nlayers
  @nlayers ||= layer_shapes.size
end
```
#nneurs(nlay = nil) ⇒ Integer
Count the neurons in a particular layer or in the whole network.
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 118

def nneurs nlay=nil
  nlay.nil? ? struct.reduce(:+) : struct[nlay]
end
```
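For instance, with the `struct` `[2, 3, 1]` used in the sketches above:

```ruby
net.nneurs     # => 6  (sums the whole struct, inputs included)
net.nneurs(0)  # => 2  (input layer)
net.nneurs(-1) # => 1  (output layer)
```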
#nweights ⇒ Integer
Total weights in the network
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 73

def nweights
  @nweights ||= nweights_per_layer.reduce(:+)
end
```
#nweights_per_layer ⇒ Array<Integer>
List of per-layer number of weights
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 79

def nweights_per_layer
  @nweights_per_layer ||= layer_shapes.collect { |shape| shape.reduce(:*) }
end
```
#out ⇒ Array
Extract and convert the output layer’s activation
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 161

def out
  state.last.to_a.flatten
end
```
#relu ⇒ Object
Rectified Linear Unit (ReLU)
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 192

def relu
  lambda { |x| (x>0).all? && x || x.class.zeros(x.shape) }
end
```
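Note that, as written, this lambda is all-or-nothing rather than element-wise: `(x>0).all?` returns `x` unchanged only when every entry is positive, and a zero array of equal shape otherwise.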
#reset_state ⇒ Object
Reset the network to the initial state
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 44

def reset_state
  state.each do |s|
    s.fill 0    # reset state to zero
    s[0,-1] = 1 # add bias
  end
  state[-1][0,-1] = 0 # last layer has no bias
end
```
#sigmoid(k = 0.5) ⇒ Object
Traditional sigmoid with variable steepness
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 170

def sigmoid k=0.5
  # k is steepness: 0<k<1 is flatter, 1<k is steeper
  # flatter makes activation less sensitive, better with large number of inputs
  lambda { |x| 1.0 / (Numo::NMath.exp(-k * x) + 1.0) }
end
```
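As a quick check, any steepness maps zero to 0.5 and saturates to 1.0 for large inputs (`net` as in the sketches above):

```ruby
squash = net.sigmoid(2.0)              # returns a lambda
squash.call(Numo::DFloat[0.0, 100.0])  # => approximately [0.5, 1.0]
```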
#weights ⇒ Array
Returns the weight matrices as nested Arrays
```ruby
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 93

def weights
  layers.collect(&:to_a)
end
```