Class: Neuronet::FeedForward
- Inherits: Array
  - Object
  - Array
  - Neuronet::FeedForward
- Defined in: lib/neuronet.rb
Overview
A Feed Forward Network
Instance Attribute Summary
- #in ⇒ Object (readonly)
  Returns the value of attribute in.
- #learning ⇒ Object
  Returns the value of attribute learning.
- #out ⇒ Object (readonly)
  Returns the value of attribute out.
- #yang ⇒ Object (readonly)
  Returns the value of attribute yang.
- #yin ⇒ Object (readonly)
  Returns the value of attribute yin.
Instance Method Summary
- #exemplar(inputs, targets) ⇒ Object
  Trains an input/output pair.
- #initialize(layers) ⇒ FeedForward constructor
  I find it very useful to name certain layers: [0] @in the input layer, [1] @yin typically the first middle layer, [-2] @yang typically the last middle layer, [-1] @out the output layer.
- #input ⇒ Object
- #mu ⇒ Object
  Whatchamacallits? The learning constant is given different names…
- #muk(k = 1.0) ⇒ Object
  Given that the learning constant is initially set to 1/mu as defined above, muk gives a way to modify the learning constant by some factor, k.
- #num(n) ⇒ Object
  Given that the learning constant can be modified by some factor k with #muk, #num gives an alternate way to express the k factor in terms of some number n greater than 1, setting k to 1/sqrt(n).
- #output ⇒ Object
- #set(inputs) ⇒ Object
- #train!(targets) ⇒ Object
- #update ⇒ Object
Constructor Details
#initialize(layers) ⇒ FeedForward
I find it very useful to name certain layers: [0] @in the input layer, [1] @yin typically the first middle layer, [-2] @yang typically the last middle layer, [-1] @out the output layer.
# File 'lib/neuronet.rb', line 266

def initialize(layers)
  super(length = layers.length)
  @in = self[0] = Neuronet::InputLayer.new(layers[0])
  (1).upto(length-1){|index|
    self[index] = Neuronet::Layer.new(layers[index])
    self[index].connect(self[index-1])
  }
  @out = self.last
  @yin  = self[1]  # first middle layer
  @yang = self[-2] # last middle layer
  @learning = 1.0/mu
end
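As a minimal sketch of constructing a network and reading back the named layers (the [3, 4, 4, 3] layer sizes here are arbitrary, chosen only for illustration):

ff = Neuronet::FeedForward.new([3, 4, 4, 3])
ff.length    # => 4, since FeedForward subclasses Array (one element per layer)
ff.in        # => ff[0],  the input layer
ff.yin       # => ff[1],  the first middle layer
ff.yang      # => ff[-2], the last middle layer
ff.out       # => ff[-1], the output layer
ff.learning  # => 1.0/ff.mu, the initial default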
Instance Attribute Details
#in ⇒ Object (readonly)
Returns the value of attribute in.
# File 'lib/neuronet.rb', line 257

def in
  @in
end
#learning ⇒ Object
Returns the value of attribute learning.
# File 'lib/neuronet.rb', line 259

def learning
  @learning
end
#out ⇒ Object (readonly)
Returns the value of attribute out.
# File 'lib/neuronet.rb', line 257

def out
  @out
end
#yang ⇒ Object (readonly)
Returns the value of attribute yang.
# File 'lib/neuronet.rb', line 258

def yang
  @yang
end
#yin ⇒ Object (readonly)
Returns the value of attribute yin.
# File 'lib/neuronet.rb', line 258

def yin
  @yin
end
Instance Method Details
#exemplar(inputs, targets) ⇒ Object
trains an input/output pair
# File 'lib/neuronet.rb', line 295

def exemplar(inputs, targets)
  set(inputs)
  train!(targets)
end
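A sketch of how #exemplar might drive a training loop. Here ff is assumed to be a FeedForward network built as above, data is a hypothetical array of [inputs, targets] pairs already scaled to the network's working range, and the 20 passes are arbitrary:

20.times do
  data.each do |inputs, targets|
    ff.exemplar(inputs, targets)  # set(inputs) followed by train!(targets)
  end
end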
#input ⇒ Object
# File 'lib/neuronet.rb', line 300

def input
  @in.values
end
#mu ⇒ Object
Whatchamacallits? The learning constant is given different names, often some Greek letter. It's a small number less than one. Ideally, it divides the errors evenly among all contributors. Contributors are the neurons' biases and the connections' weights. Thus if one counts all the contributors as N, the learning constant should be at most 1/N. But there are other considerations, such as how noisy the data is. In any case, I'm calling this N value FeedForward#mu. 1/mu is used as the initial default value for the learning constant.
# File 'lib/neuronet.rb', line 231

def mu
  sum = 1.0
  1.upto(self.length-1) do |i|
    n, m = self[i-1].length, self[i].length
    sum += n + n*m
  end
  return sum
end
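To make the count concrete, here is a worked example for a hypothetical [2, 3, 1] topology (not anything prescribed by the library):

ff = Neuronet::FeedForward.new([2, 3, 1])
# i=1: n = 2, m = 3  ->  sum += 2 + 2*3   (8.0)
# i=2: n = 3, m = 1  ->  sum += 3 + 3*1   (6.0)
ff.mu        # => 15.0, that is 1.0 + 8.0 + 6.0
ff.learning  # => 1.0/15.0, the default set by the constructor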
#muk(k = 1.0) ⇒ Object
Given that the learning constant is initially set to 1/mu as defined above, muk gives a way to modify the learning constant by some factor, k. In theory, when there is no noise in the target data, k can be set to 1.0. If the data is noisy, k is set to some value less than 1.0.
# File 'lib/neuronet.rb', line 243

def muk(k=1.0)
  @learning = k/mu
end
#num(n) ⇒ Object
Given that the learning constant can be modified by some factor k with #muk, #num gives an alternate way to express the k factor in terms of some number n greater than 1, setting k to 1/sqrt(n). I believe that the optimal value for the learning constant for a training set of size n lies somewhere between #muk(1) and #num(n). Whereas the learning constant can be set too high, a value that is too low merely increases the training time.
# File 'lib/neuronet.rb', line 253

def num(n)
  muk(1.0/(Math.sqrt(n)))
end
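As a sketch, for a hypothetical training set of 100 exemplars the two adjustments bracket the learning constant like this:

ff.muk(1.0)  # learning = 1/mu, the noise-free upper end
ff.num(100)  # learning = (1/sqrt(100))/mu = 1/(10*mu), the lower end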
#output ⇒ Object
# File 'lib/neuronet.rb', line 304

def output
  @out.values
end
#set(inputs) ⇒ Object
# File 'lib/neuronet.rb', line 284

def set(inputs)
  @in.set(inputs)
  update
end
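Once trained, the network can be queried by combining #set and #output. A small sketch, assuming ff has two input neurons; the input values are placeholders:

ff.set([0.1, 0.2])      # feeds the inputs forward via update
prediction = ff.output  # => array of output layer values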
#train!(targets) ⇒ Object
# File 'lib/neuronet.rb', line 289

def train!(targets)
  @out.train(targets, @learning)
  update
end
#update ⇒ Object
# File 'lib/neuronet.rb', line 279

def update
  # update up the layers
  (1).upto(self.length-1){|index| self[index].partial}
end