Class: MachineLearningWorkbench::NeuralNetwork::Base
- Inherits: Object
- Defined in:
- lib/machine_learning_workbench/neural_network/base.rb
Overview
Neural Network base class
Direct Known Subclasses
Instance Attribute Summary
- #act_fn ⇒ #call (readonly)
  Activation function, common to all neurons (for now).
- #act_fn_name ⇒ Object (readonly)
  Returns the value of attribute act_fn_name.
- #layers ⇒ Array<NArray> (readonly)
  List of matrices, each being the weights connecting a layer's inputs (rows) to a layer's neurons (columns), hence its shape is `[ninputs, nneurs]`. TODO: return a NArray after the usage of `#map` is figured out.
- #state ⇒ Array<NArray> (readonly)
  List of one-dimensional matrices, each an input to a layer, plus the output layer's output.
- #struct ⇒ Object (readonly)
  Returns the value of attribute struct.
Instance Method Summary
- #activate(input) ⇒ Array
  Activate the network on a given input.
- #deep_reset ⇒ Object
  Resets memoization: needed to play with structure modification.
- #init_random ⇒ Object
  Initialize the network with random weights.
- #initialize(struct, act_fn: nil, **act_fn_args) ⇒ Base (constructor)
  A new instance of Base.
- #interface_methods ⇒ Object
  Declares interface methods: implement in child class!
- #layer_col_sizes ⇒ Array
  Number of neurons per layer.
- #layer_shapes ⇒ Array<Array[Integer, Integer]>
  Shapes for the weight matrices, each corresponding to a layer.
- #lecun_hyperbolic ⇒ Object
  LeCun hyperbolic activation.
- #load_weights(weights) ⇒ true
  Loads a plain list of weights into the weight matrices (one per layer).
- #nlayers ⇒ Integer
  Count the layers.
- #nneurs(nlay = nil) ⇒ Integer
  Count the neurons in a particular layer or in the whole network.
- #nweights ⇒ Integer
  Total number of weights in the network.
- #nweights_per_layer ⇒ Array<Integer>
  List of per-layer weight counts.
- #out ⇒ NArray
  Extract and convert the output layer's activation.
- #relu ⇒ Object
  Rectified Linear Unit (ReLU) activation.
- #reset_state ⇒ Object
  Reset the network to the initial state.
- #sigmoid(steepness: 1) ⇒ Object (also: #logistic)
  Traditional sigmoid (logistic) with variable steepness.
- #weights ⇒ Array<NArray>
  Returns the weight matrices.
Constructor Details
#initialize(struct, act_fn: nil, **act_fn_args) ⇒ Base
Returns a new instance of Base.
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 30

def initialize struct, act_fn: nil, **act_fn_args
  @struct = struct
  @act_fn_name = act_fn || :sigmoid
  @act_fn = send act_fn_name, **act_fn_args
  # @state holds both inputs, possibly recurrency, and bias
  # it is a complete input for the next layer, hence size from layer sizes
  @state = layer_row_sizes.collect do |size|
    NArray.zeros [1, size]
  end
  # to this, append a matrix to hold the final network output
  @state.push NArray.zeros [1, nneurs(-1)]
  reset_state
end
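To visualize what the constructor allocates, here is a minimal plain-Ruby sketch of the `@state` layout. Nested arrays stand in for NArray, and `layer_row_sizes` is assumed to follow the feed-forward convention (each layer's input size plus one bias entry); actual subclasses may differ, e.g. by adding recurrent entries.

```ruby
# Hypothetical stand-in for the real allocation: plain arrays, no NArray.
# Assumption: each layer's input row holds the incoming values plus one bias.
struct = [2, 3, 1]                                  # 2 inputs, 3 hidden, 1 output

layer_row_sizes = struct[0...-1].map { |n| n + 1 }  # assumed FF variant => [3, 4]
state = layer_row_sizes.map { |size| Array.new(size, 0) }
state << Array.new(struct.last, 0)                  # output activation, no bias

state.map(&:size)                                   # => [3, 4, 1]
```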
Instance Attribute Details
#act_fn ⇒ #call (readonly)
Activation function, common to all neurons (for now).

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 23

attr_reader :layers, :state, :act_fn, :act_fn_name, :struct
#act_fn_name ⇒ Object (readonly)
Returns the value of attribute act_fn_name.
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 23

def act_fn_name
  @act_fn_name
end
#layers ⇒ Array<NArray> (readonly)
List of matrices, each being the weights connecting a layer's inputs (rows) to a layer's neurons (columns), hence its shape is `[ninputs, nneurs]`. TODO: return a NArray after the usage of `#map` is figured out.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 23

def layers
  @layers
end
#state ⇒ Array<NArray> (readonly)
It is a list of one-dimensional matrices, each an input to a layer, plus the output layer's output. The first element is the input to the first layer of the network, composed of the network's input, possibly the first layer's activation on the last input (recurrence), and a bias (fixed `1`). The second through second-to-last entries follow the same structure, but with the previous layer's output in place of the network's input. The last entry is the activation of the output layer, with no additions since it is not used as an input by anyone. TODO: return a NArray after the usage of `#map` is figured out.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 23

attr_reader :layers, :state, :act_fn, :act_fn_name, :struct
#struct ⇒ Object (readonly)
Returns the value of attribute struct.
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 23

attr_reader :layers, :state, :act_fn, :act_fn_name, :struct
Instance Method Details
#activate(input) ⇒ Array
Activate the network on a given input.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 147

def activate input
  raise ArgumentError unless input.size == struct.first
  # load input in first state
  state[0][0...struct.first] = input
  # activate layers in sequence
  nlayers.times.each do |i|
    act = activate_layer i
    state[i+1][0...act.size] = act
  end
  return out
end
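The flow above can be sketched in plain Ruby: nested arrays replace NArray, and a hypothetical per-layer dot product stands in for the subclass-provided `activate_layer`. The trailing `1` appended to each row plays the bias role described under `#state`.

```ruby
# Minimal feed-forward pass sketch (assumptions: plain arrays, sigmoid
# activation, bias as a fixed trailing 1 in each layer's input row).
sigmoid = ->(x) { 1.0 / (Math.exp(-x) + 1.0) }

struct = [2, 2, 1]
# one weight matrix per layer, shape [ninputs + bias, nneurs], all zeros here
layers = struct.each_cons(2).map do |nin, nout|
  Array.new(nin + 1) { Array.new(nout, 0.0) }
end

activate = lambda do |input|
  act = input
  layers.each do |w|
    row = act + [1]                       # append the bias entry
    act = (0...w.first.size).map do |j|   # one dot product per neuron
      sum = row.each_with_index.sum { |v, i| v * w[i][j] }
      sigmoid.(sum)
    end
  end
  act
end

activate.([0.3, 0.7])   # => [0.5], since all-zero weights give sigmoid(0)
```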
#deep_reset ⇒ Object
Resets memoization: needed to play with structure modification
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 63

def deep_reset
  # reset memoization
  [:@layer_row_sizes, :@layer_col_sizes, :@nlayers, :@layer_shapes,
   :@nweights_per_layer, :@nweights].each do |sym|
    instance_variable_set sym, nil
  end
  reset_state
end
#init_random ⇒ Object
Initialize the network with random weights
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 54

def init_random
  # Reusing `#load_weights` instead helps catching bugs
  deep_reset
  load_weights NArray.new(nweights).rand(-1, 1)
end
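`NArray#rand(-1, 1)` fills the array uniformly from [-1, 1). The same idea in plain Ruby (a hypothetical stand-in, with `nweights` hard-coded for a `struct` of `[2, 3, 1]` under the bias-row assumption):

```ruby
# Draw a flat list of uniform random weights in [-1, 1), as #init_random
# does before handing them to #load_weights.
nweights = 13   # assumed: struct [2, 3, 1] with bias rows => 3*3 + 4*1
weights  = Array.new(nweights) { rand(-1.0...1.0) }

weights.size                             # => 13
weights.all? { |w| w >= -1.0 && w < 1.0 }  # => true
```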
#interface_methods ⇒ Object
Declaring interface methods - implement in child class!
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 189

[:layer_row_sizes, :activate_layer].each do |sym|
  define_method sym do
    raise NotImplementedError, "Implement ##{sym} in child class!"
  end
end
#layer_col_sizes ⇒ Array
Number of neurons per layer. Although this implementation includes the inputs in the layer counts, this method correctly ignores the input layer, as it has no neurons.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 101

def layer_col_sizes
  @layer_col_sizes ||= struct.drop(1)
end
#layer_shapes ⇒ Array<Array[Integer, Integer]>
Shapes for the weight matrices, each corresponding to a layer
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 109

def layer_shapes
  @layer_shapes ||= layer_row_sizes.zip layer_col_sizes
end
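Putting `layer_col_sizes` and `layer_shapes` together for a concrete `struct`, with `layer_row_sizes` assumed to be the feed-forward variant (inputs plus one bias; the real value comes from the subclass):

```ruby
# Plain-Ruby walkthrough of the shape bookkeeping for struct [2, 3, 1].
struct = [2, 3, 1]

layer_row_sizes = struct[0...-1].map { |n| n + 1 }     # assumed FF => [3, 4]
layer_col_sizes = struct.drop(1)                       # => [3, 1]
layer_shapes    = layer_row_sizes.zip(layer_col_sizes) # => [[3, 3], [4, 1]]

nweights_per_layer = layer_shapes.map { |s| s.reduce(:*) } # => [9, 4]
nweights = nweights_per_layer.sum                          # => 13
```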
#lecun_hyperbolic ⇒ Object
LeCun hyperbolic activation
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 177

def lecun_hyperbolic
  -> (vec) { 1.7159 * NMath.tanh(2.0*vec/3.0) + 1e-3*vec }
end
#load_weights(weights) ⇒ true
Loads a plain list of weights into the weight matrices (one per layer). Preserves order. Reuses allocated memory if available.
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 127

def load_weights weights
  raise ArgumentError unless weights.size == nweights
  weights = weights.to_na unless weights.kind_of? NArray
  from = 0
  @layers = layer_shapes.collect do |shape|
    to = from + shape.reduce(:*)
    lay_w = weights[from...to].reshape *shape
    from = to
    lay_w
  end
  reset_state
  return true
end
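The slicing logic can be traced in plain Ruby: consume a flat weight list shape by shape, reshaping each slice into rows (nested arrays replace NArray, and `each_slice` stands in for `reshape` under row-major order):

```ruby
# Hypothetical plain-Ruby version of the per-layer slicing in #load_weights.
layer_shapes = [[3, 3], [4, 1]]   # e.g. struct [2, 3, 1] with bias rows
weights = (1..13).to_a            # 9 + 4 = 13 weights, easy to trace

from = 0
layers = layer_shapes.map do |nrows, ncols|
  to = from + nrows * ncols       # advance past this layer's weights
  slice = weights[from...to]
  from = to
  slice.each_slice(ncols).to_a    # reshape flat slice to [nrows, ncols]
end

layers[0]   # => [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
layers[1]   # => [[10], [11], [12], [13]]
```

Order is preserved: the first nine weights fill the first layer row by row, the remaining four fill the second.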
#nlayers ⇒ Integer
Count the layers. This is a computation helper; for this implementation, the inputs are considered as if they were a layer like the others.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 87

def nlayers
  @nlayers ||= layer_shapes.size
end
#nneurs(nlay = nil) ⇒ Integer
Count the neurons in a particular layer or in the whole network.
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 118

def nneurs nlay=nil
  nlay.nil? ? struct.reduce(:+) : struct[nlay]
end
#nweights ⇒ Integer
Total weights in the network
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 74

def nweights
  @nweights ||= nweights_per_layer.reduce(:+)
end
#nweights_per_layer ⇒ Array<Integer>
List of per-layer number of weights
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 80

def nweights_per_layer
  @nweights_per_layer ||= layer_shapes.collect { |shape| shape.reduce(:*) }
end
#out ⇒ NArray
Extract and convert the output layer’s activation
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 161

def out
  state.last.flatten
end
#relu ⇒ Object
Rectified Linear Unit (ReLU)
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 182

def relu
  -> (vec) { (vec>0).all? && vec || vec.class.zeros(vec.shape) }
end
#reset_state ⇒ Object
Reset the network to the initial state
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 45

def reset_state
  state.each do |s|
    s.fill 0  # reset state to zero
    s[-1] = 1 # add bias
  end
  state[-1][-1] = 0 # last layer has no bias
end
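The zero-fill plus bias logic, traced with plain nested arrays standing in for NArray (the `9`s are pretend leftover activations from a previous run):

```ruby
# Plain-Ruby trace of #reset_state on a state shaped like struct [2, 3, 1].
state = [[9, 9, 9], [9, 9, 9, 9], [9]]

state.each do |s|
  s.fill(0)         # zero the whole row
  s[-1] = 1         # set the trailing bias entry to 1
end
state[-1][-1] = 0   # undo the bias on the output row: it feeds no layer

state   # => [[0, 0, 1], [0, 0, 0, 1], [0]]
```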
#sigmoid(steepness: 1) ⇒ Object Also known as: logistic
Traditional sigmoid (logistic) with variable steepness
# File 'lib/machine_learning_workbench/neural_network/base.rb', line 168

def sigmoid steepness: 1
  # steepness: 0<s<1 is flatter, 1<s is steeper
  # flatter makes activation less sensitive, better with large number of inputs
  -> (vec) { 1.0 / (NMath.exp(-steepness * vec) + 1.0) }
end
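Scalar plain-Ruby versions of the three activation functions help check their shapes (the originals act elementwise on NArray vectors; the scalar `relu` below is the standard elementwise definition, not the vector trick shown above):

```ruby
# Scalar sketches of the gem's activations, using only Math from the stdlib.
sigmoid = ->(s) { ->(x) { 1.0 / (Math.exp(-s * x) + 1.0) } }
relu    = ->(x) { x > 0 ? x : 0 }
lecun   = ->(x) { 1.7159 * Math.tanh(2.0 * x / 3.0) + 1e-3 * x }

sigmoid.(1).(0)                    # => 0.5, the midpoint
sigmoid.(5).(1) > sigmoid.(1).(1)  # => true: higher steepness saturates faster
relu.(-2)                          # => 0
relu.(3)                           # => 3
```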
#weights ⇒ Array<NArray>
Returns the weight matrices.

# File 'lib/machine_learning_workbench/neural_network/base.rb', line 93

def weights
  layers
end