Class: MachineLearningWorkbench::NeuralNetwork::Base
Inherits: Object
Defined in: lib/machine_learning_workbench/neural_network/base.rb
Overview
Neural Network base class
Direct Known Subclasses
Instance Attribute Summary
- #act_fn ⇒ #call (readonly)
  Activation function, common to all neurons (for now).
- #act_fn_name ⇒ Object (readonly)
  Name of the activation function, e.g. :sigmoid.
- #layers ⇒ Array<NArray> (readonly)
  List of matrices, each being the weights connecting a layer's inputs (rows) to a layer's neurons (columns), hence its shape is `[ninputs, nneurs]`. TODO: return a NArray after the usage of `#map` is figured out.
- #state ⇒ Array<NArray> (readonly)
  A list of one-dimensional matrices, each an input to a layer, plus the output layer's output.
- #struct ⇒ Object (readonly)
  The network structure: list of layer sizes, from inputs to outputs.
Instance Method Summary
- #activate(input) ⇒ NArray
  Activate the network on a given input.
- #deep_reset ⇒ Object
  Resets memoization: needed to play with structure modification.
- #init_random ⇒ Object
  Initialize the network with random weights.
- #initialize(struct, act_fn: nil) ⇒ Base (constructor)
  A new instance of Base.
- #interface_methods ⇒ Object
  Declares the interface methods; implement these in the child class!
- #layer_col_sizes ⇒ Array
  Number of neurons per layer.
- #layer_shapes ⇒ Array<Array[Integer, Integer]>
  Shapes for the weight matrices, each corresponding to a layer.
- #lecun_hyperbolic ⇒ Object
  LeCun hyperbolic activation.
- #load_weights(weights) ⇒ true
  Loads a plain list of weights into the weight matrices (one per layer).
- #nlayers ⇒ Integer
  Count the layers.
- #nneurs(nlay = nil) ⇒ Integer
  Count the neurons in a particular layer or in the whole network.
- #nweights ⇒ Integer
  Total weights in the network.
- #nweights_per_layer ⇒ Array<Integer>
  List of per-layer number of weights.
- #out ⇒ NArray
  Extract and convert the output layer's activation.
- #relu ⇒ Object
  Rectified Linear Unit (ReLU).
- #reset_state ⇒ Object
  Reset the network to the initial state.
- #sigmoid(k = 0.5) ⇒ Object (also: #logistic)
  Traditional sigmoid (logistic) with variable steepness.
- #weights ⇒ Array<NArray>
  Returns the list of weight matrices (one per layer).
Constructor Details
#initialize(struct, act_fn: nil) ⇒ Base
Returns a new instance of Base. `struct` is the list of layer sizes, from inputs to outputs; `act_fn` names the activation function (default: :sigmoid).
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 29

def initialize struct, act_fn: nil
  @struct = struct
  @act_fn_name = act_fn || :sigmoid
  @act_fn = send(act_fn_name)
  # @state holds both inputs, possibly recurrency, and bias
  # it is a complete input for the next layer, hence size from layer sizes
  @state = layer_row_sizes.collect do |size|
    NArray.zeros [1, size]
  end
  # to this, append a matrix to hold the final network output
  @state.push NArray.zeros [1, nneurs(-1)]
  reset_state
end
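Base declares interface methods for its children (see #interface_methods below), so construction goes through a concrete subclass. A minimal usage sketch; the `FeedForward` subclass name is an assumption for illustration, not confirmed by this page:

require 'machine_learning_workbench'

# Hypothetical subclass; struct: 3 inputs, 5 hidden neurons, 2 outputs
net = MachineLearningWorkbench::NeuralNetwork::FeedForward.new [3, 5, 2], act_fn: :sigmoid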
Instance Attribute Details
#act_fn ⇒ #call (readonly)
Activation function, common to all neurons (for now).

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 22

attr_reader :layers, :state, :act_fn, :act_fn_name, :struct
#act_fn_name ⇒ Object (readonly)
Name of the activation function, e.g. :sigmoid.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 22

def act_fn_name
  @act_fn_name
end
#layers ⇒ Array<NArray> (readonly)
List of matrices, each being the weights connecting a layer's inputs (rows) to a layer's neurons (columns), hence its shape is `[ninputs, nneurs]`. TODO: return a NArray after the usage of `#map` is figured out.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 22

def layers
  @layers
end
#state ⇒ Array<NArray> (readonly)
A list of one-dimensional matrices, each an input to a layer, plus the output layer's output. The first element is the input to the first layer of the network: the network's input, possibly the first layer's activation on the previous input (recurrence), and a bias (fixed `1`). The second through second-to-last entries follow the same structure, but with the previous layer's output in place of the network's input. The last entry is the activation of the output layer, with nothing appended since it is not used as input by any layer. TODO: return a NArray after the usage of `#map` is figured out.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 22

attr_reader :layers, :state, :act_fn, :act_fn_name, :struct
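As a worked illustration for struct `[3, 5, 2]` in a feed-forward setting, where each row holds the layer's input plus a trailing bias (the exact row sizes depend on the subclass's #layer_row_sizes, so this layout is an assumption):

net.state[0].shape  # => [1, 4]   3 network inputs + bias
net.state[1].shape  # => [1, 6]   5 hidden activations + bias
net.state[2].shape  # => [1, 2]   2 outputs, nothing appended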
#struct ⇒ Object (readonly)
The network structure: list of layer sizes, from inputs to outputs.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 22

attr_reader :layers, :state, :act_fn, :act_fn_name, :struct
Instance Method Details
#activate(input) ⇒ NArray
Activate the network on a given input.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 145

def activate input
  raise ArgumentError unless input.size == struct.first
  # load input in first state
  state[0][0...struct.first] = input
  # activate layers in sequence
  nlayers.times.each do |i|
    act = activate_layer i
    state[i+1][0...act.size] = act
  end
  return out
end
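Continuing the hypothetical `net` from the constructor sketch, a full forward pass:

net.init_random                      # random weights in [-1, 1]
out = net.activate [0.2, -0.5, 0.9]  # input size must equal struct.first
out.shape                            # => [2], one value per output neuron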
#deep_reset ⇒ Object
Resets memoization: needed to play with structure modification
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 61

def deep_reset
  # reset memoization
  [:@layer_row_sizes, :@layer_col_sizes, :@nlayers, :@layer_shapes,
   :@nweights_per_layer, :@nweights].each do |sym|
    instance_variable_set sym, nil
  end
  reset_state
end
#init_random ⇒ Object
Initialize the network with random weights
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 53

def init_random
  # Reusing `#load_weights` instead helps catching bugs
  load_weights NArray.new(nweights).rand(-1, 1)
end
#interface_methods ⇒ Object
Declares the interface methods; implement these in the child class!

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 187

[:layer_row_sizes, :activate_layer].each do |sym|
  define_method sym do
    raise NotImplementedError, "Implement ##{sym} in child class!"
  end
end
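For concreteness, a minimal sketch of a child class satisfying this interface; the class name, the bias convention, and the use of Numo's #dot are illustrative assumptions, not the gem's actual subclasses:

class TinyFeedForward < MachineLearningWorkbench::NeuralNetwork::Base
  # one row per layer: the previous layer's size plus a bias unit (assumption)
  def layer_row_sizes
    @layer_row_sizes ||= struct.each_cons(2).collect { |prev, _| prev + 1 }
  end

  # activate one layer: input row vector times weight matrix
  def activate_layer i
    act_fn.call state[i].dot(layers[i])
  end
end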
#layer_col_sizes ⇒ Array
Number of neurons per layer. Although this implementation includes the inputs in the layer counts, this method correctly ignores the input layer as having no neurons.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 99

def layer_col_sizes
  @layer_col_sizes ||= struct.drop(1)
end
#layer_shapes ⇒ Array<Array[Integer, Integer]>
Shapes for the weight matrices, each corresponding to a layer.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 107

def layer_shapes
  @layer_shapes ||= layer_row_sizes.zip layer_col_sizes
end
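A worked example for the hypothetical struct `[3, 5, 2]`, again assuming feed-forward row sizes of inputs plus one bias:

net.layer_row_sizes  # => [4, 6]   subclass-defined (assumption)
net.layer_col_sizes  # => [5, 2]   struct.drop(1)
net.layer_shapes     # => [[4, 5], [6, 2]]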
#lecun_hyperbolic ⇒ Object
LeCun hyperbolic activation: the scaled tanh recommended in LeCun et al., "Efficient BackProp", plus a small linear term.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 175

def lecun_hyperbolic
  -> (vec) { 1.7159 * NMath.tanh(2.0*vec/3.0) + 1e-3*vec }
end
#load_weights(weights) ⇒ true
Loads a plain list of weights into the weight matrices (one per layer). Preserves order. Reuses allocated memory if available.
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 125

def load_weights weights
  raise ArgumentError unless weights.size == nweights
  weights = weights.to_na unless weights.kind_of? NArray
  from = 0
  @layers = layer_shapes.collect do |shape|
    to = from + shape.reduce(:*)
    lay_w = weights[from...to].reshape *shape
    from = to
    lay_w
  end
  reset_state
  return true
end
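A round-trip sketch, continuing the hypothetical [3, 5, 2] example (nweights = 4*5 + 6*2 = 32):

flat = [0.01] * net.nweights   # plain Ruby Array of 32 weights
net.load_weights flat          # => true
net.layers.collect(&:shape)    # => [[4, 5], [6, 2]]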
#nlayers ⇒ Integer
Count the layers. This is a computation helper; in this implementation the inputs are counted as if they were a layer like the others.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 85

def nlayers
  @nlayers ||= layer_shapes.size
end
#nneurs(nlay = nil) ⇒ Integer
Count the neurons in a particular layer or in the whole network.
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 116

def nneurs nlay=nil
  nlay.nil? ? struct.reduce(:+) : struct[nlay]
end
#nweights ⇒ Integer
Total weights in the network
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 72

def nweights
  @nweights ||= nweights_per_layer.reduce(:+)
end
#nweights_per_layer ⇒ Array<Integer>
List of per-layer number of weights
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 78

def nweights_per_layer
  @nweights_per_layer ||= layer_shapes.collect { |shape| shape.reduce(:*) }
end
#out ⇒ NArray
Extract and convert the output layer’s activation
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 159

def out
  state.last.flatten
end
#relu ⇒ Object
Rectified Linear Unit (ReLU)
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 180

def relu
  # NOTE: zeroes the whole vector unless every entry is positive,
  # rather than clipping negative entries element-wise
  -> (vec) { (vec>0).all? && vec || vec.class.zeros(vec.shape) }
end
#reset_state ⇒ Object
Reset the network to the initial state
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 44

def reset_state
  state.each do |s|
    s.fill 0  # reset state to zero
    s[-1] = 1 # add bias
  end
  state[-1][-1] = 0 # last layer has no bias
end
#sigmoid(k = 0.5) ⇒ Object Also known as: logistic
Traditional sigmoid (logistic) with variable steepness
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 166

def sigmoid k=0.5
  # k is steepness: 0<k<1 is flatter, k>1 is steeper
  # flatter makes activation less sensitive, better with large number of inputs
  -> (vec) { 1.0 / (NMath.exp(-k * vec) + 1.0) }
end
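Evaluating the returned lambda on the hypothetical `net` (assuming the page's NArray is Numo-backed):

fn = net.sigmoid 0.5
fn.call NArray[[0.0, 2.0, -2.0]]
# => approx [0.5, 0.73, 0.27]; larger k steepens the curve around 0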
#weights ⇒ Array<NArray>
Returns the list of weight matrices (an alias for #layers).

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 91

def weights
  layers
end