Class: Neuronet::FeedForward

Inherits:
Array
  • Object
Defined in:
lib/neuronet.rb

Overview

A Feed Forward Network

Direct Known Subclasses

ScaledNetwork

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(layers) ⇒ FeedForward

I find it very useful to name certain layers:
  [0]   @in    Input Layer
  [1]   @yin   Typically the first middle layer
  [-2]  @yang  Typically the last middle layer
  [-1]  @out   Output Layer



# File 'lib/neuronet.rb', line 269

def initialize(layers)
  super(length = layers.length)
  @in = self[0] = Neuronet::InputLayer.new(layers[0])
  (1).upto(length-1){|index|
    self[index] = Neuronet::Layer.new(layers[index])
    self[index].connect(self[index-1])
  }
  @out = self.last
  @yin = self[1] # first middle layer
  @yang = self[-2] # last middle layer
  @learning = 1.0/mu
end
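
A minimal construction sketch (the [3, 4, 4, 2] layer sizes are made up for illustration), showing how the named layers line up with the array indices:

require 'neuronet'

net = Neuronet::FeedForward.new([3, 4, 4, 2])
net.in    # net[0],  the Neuronet::InputLayer
net.yin   # net[1],  the first middle layer
net.yang  # net[-2], the last middle layer
net.out   # net[-1], the output layer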

Instance Attribute Details

#in ⇒ Object (readonly)

Returns the value of attribute in.



# File 'lib/neuronet.rb', line 260

def in
  @in
end

#learning ⇒ Object

Returns the value of attribute learning.



# File 'lib/neuronet.rb', line 262

def learning
  @learning
end

#out ⇒ Object (readonly)

Returns the value of attribute out.



# File 'lib/neuronet.rb', line 260

def out
  @out
end

#yang ⇒ Object (readonly)

Returns the value of attribute yang.



# File 'lib/neuronet.rb', line 261

def yang
  @yang
end

#yin ⇒ Object (readonly)

Returns the value of attribute yin.



# File 'lib/neuronet.rb', line 261

def yin
  @yin
end

Instance Method Details

#exemplar(inputs, targets) ⇒ Object

Trains an input/output pair.



# File 'lib/neuronet.rb', line 298

def exemplar(inputs, targets)
  set(inputs)
  train!(targets)
end
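
A training-loop sketch built on #exemplar; the XOR-style data pairs are purely illustrative, and real-world data may need scaling first:

data = [
  [[0.0, 0.0], [0.0]],
  [[0.0, 1.0], [1.0]],
  [[1.0, 0.0], [1.0]],
  [[1.0, 1.0], [0.0]],
]
net = Neuronet::FeedForward.new([2, 3, 1])
2000.times do
  data.each { |inputs, targets| net.exemplar(inputs, targets) }
end
net.set([1.0, 0.0])
net.output  # forward-pass result after training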

#input ⇒ Object



# File 'lib/neuronet.rb', line 303

def input
  @in.values
end

#mu ⇒ Object

Whatchamacallits? The learning constant is given different names… often some Greek letter. It’s a small number less than one. Ideally, it divides the errors evenly among all contributors. Contributors are the neurons’ biases and the connections’ weights. Thus if one counts all the contributors as N, the learning constant should be at most 1/N. But there are other considerations, such as how noisy the data is. In any case, I’m calling this N value FeedForward#mu, and 1/mu is used as the initial default value for the learning constant.



# File 'lib/neuronet.rb', line 234

def mu
  sum = 1.0
  1.upto(self.length-1) do |i|
    n, m = self[i-1].length, self[i].length
    sum += n + n*m
  end
  return sum
end
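
A worked example of the sum #mu computes, using an illustrative [3, 4, 1] network:

net = Neuronet::FeedForward.new([3, 4, 1])
# sum starts at 1.0
# layers 0 -> 1:  n = 3, m = 4  =>  sum += 3 + 3*4   # 16.0
# layers 1 -> 2:  n = 4, m = 1  =>  sum += 4 + 4*1   # 24.0
net.mu        # => 24.0
net.learning  # => 1.0/24.0, the default set in #initialize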

#muk(k = 1.0) ⇒ Object

Given that the learning constant is initially set to 1/mu as defined above, muk gives a way to modify the learning constant by some factor, k. In theory, when there is no noise in the target data, k can be set to 1.0. If the data is noisy, k is set to some value less than 1.0.



# File 'lib/neuronet.rb', line 246

def muk(k=1.0)
  @learning = k/mu
end
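
For example, continuing the illustrative [3, 4, 1] network from above with noisy data:

net = Neuronet::FeedForward.new([3, 4, 1])
net.muk(0.5)  # noisy data, so k = 0.5
net.learning  # => 0.5/24.0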

#num(n) ⇒ Object

Given that the learning constant can be modified by some factor k with #muk, #num gives an alternate way to express the k factor in terms of some number n greater than 1, setting k to 1/sqrt(n). I believe that the optimal value for the learning constant for a training set of size n is somewhere between #muk(1) and #num(n). Whereas a learning constant that is too high is a real problem, one that is too low just increases the training time.



# File 'lib/neuronet.rb', line 256

def num(n)
  muk(1.0/(Math.sqrt(n)))
end
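
For example, with an illustrative training set of 100 pairs:

net = Neuronet::FeedForward.new([3, 4, 1])
net.num(100)  # k = 1/sqrt(100) = 0.1
net.learning  # => 0.1/24.0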

#output ⇒ Object



# File 'lib/neuronet.rb', line 307

def output
  @out.values
end

#set(inputs) ⇒ Object



# File 'lib/neuronet.rb', line 287

def set(inputs)
  @in.set(inputs)
  update
end

#train!(targets) ⇒ Object



# File 'lib/neuronet.rb', line 292

def train!(targets)
  @out.train(targets, @learning)
  update
end

#update ⇒ Object



# File 'lib/neuronet.rb', line 282

def update
  # update up the layers
  (1).upto(self.length-1){|index| self[index].partial}
end