Module: Neuronet
- Defined in:
- lib/neuronet.rb
Overview
Neuronet module
Defined Under Namespace
Modules: Brahma, BrahmaYang, Tao, TaoBrahma, TaoBrahmaYang, TaoYang, TaoYin, TaoYinYang, Yang, Yin, YinYang
Classes: Connection, FeedForward, Gaussian, InputLayer, Layer, LogNormal, Neuron, Node, Scale, ScaledNetwork
Constant Summary
- VERSION = '6.1.0'
- BZERO = 1.0/(1.0-2.0*squash(1.0))
- WONE = -2.0*BZERO
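From their definitions, BZERO and WONE look like the intercept and weight of the affine map b + w * activation that sends squash(1.0) back to 1.0 and squash(-1.0) back to -1.0. A quick standalone check of that property (an illustrative sketch, using the squash definition from below):

# Checking BZERO and WONE (illustrative sketch).
def squash(unsquashed)
  1.0 / (1.0 + Math.exp(-unsquashed))
end

bzero = 1.0 / (1.0 - 2.0 * squash(1.0)) # ≈ -2.1640
wone  = -2.0 * bzero                    # ≈  4.3280

# The map bzero + wone * activation inverts squash at ±1:
puts bzero + wone * squash(1.0)  # => 1.0  (up to float error)
puts bzero + wone * squash(-1.0) # => -1.0 (up to float error)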
Class Method Summary
- .noise ⇒ Object
Although the implementation is free to set all parameters for each neuron, Neuronet by default creates zeroed neurons.
- .squash(unsquashed) ⇒ Object
An artificial neural network uses a squash function to determine the activation value of a neuron.
- .unsquash(squashed) ⇒ Object
Class Method Details
.noise ⇒ Object
Although the implementation is free to set all parameters for each neuron, Neuronet by default creates zeroed neurons. Associations between inputs and outputs are trained, and neurons differentiate from each other randomly. Differentiation among neurons is achieved by noise in the back-propagation of errors. This noise is provided by Neuronet.noise. I chose rand + rand to give the noise an average value of 1.0 and a roughly bell-shaped (triangular) distribution.
# File 'lib/neuronet.rb', line 39

def self.noise
  rand + rand
end
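To see the shape of this noise (an illustrative sketch, not part of the library), sample rand + rand many times: the sum of two uniform [0, 1) samples has mean 1.0 and a triangular density on [0, 2) peaking at 1.0, a rough bell shape.

# Sampling the noise distribution (illustrative sketch).
samples = Array.new(100_000) { rand + rand }
puts format('mean ≈ %.3f', samples.sum / samples.size) # ≈ 1.000

# A coarse histogram shows the triangular, bell-like shape:
samples.group_by { |x| (x * 4).floor / 4.0 }.sort.each do |bin, xs|
  puts format('%.2f |%s', bin, '*' * (xs.size / 1_000))
end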
.squash(unsquashed) ⇒ Object
An artificial neural network uses a squash function to determine the activation value of a neuron. The squash function for Neuronet is the [Sigmoid function](en.wikipedia.org/wiki/Sigmoid_function), which sets the neuron's activation value between 0.0 and 1.0. This activation value is often thought of as on/off or true/false: for classification problems, activation values near 1.0 are considered true while activation values near 0.0 are considered false.

In Neuronet I make a distinction between the neuron's activation value and its representation to the problem. The activation attribute need never appear in an implementation of Neuronet; it is mapped back to its unsquashed value every time the implementation asks for the neuron's value. One should scale the problem so that most data points fall between -1 and 1, extremes stay within ±2, and nothing lies beyond ±3. Measuring in standard deviations from the mean is probably a good way to figure the scale of the problem.
# File 'lib/neuronet.rb', line 21

def self.squash(unsquashed)
  1.0 / (1.0 + Math.exp(-unsquashed))
end
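To illustrate that scaling advice (a hypothetical pre-processing sketch; none of these names are part of Neuronet's API), express each data point in standard deviations from the mean before squashing:

# Hypothetical pre-scaling sketch: express data points in standard
# deviations from the mean, so most fall between -1 and 1.
data = [12.0, 15.0, 14.0, 10.0, 13.0, 16.0, 11.0]
mean = data.sum / data.size
stddev = Math.sqrt(data.sum { |x| (x - mean)**2 } / data.size)
scaled = data.map { |x| (x - mean) / stddev }

# Squashed activations stay comfortably inside (0, 1):
scaled.each do |x|
  activation = 1.0 / (1.0 + Math.exp(-x)) # as in Neuronet.squash
  puts format('%6.3f -> %.3f', x, activation)
end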
.unsquash(squashed) ⇒ Object
# File 'lib/neuronet.rb', line 25

def self.unsquash(squashed)
  Math.log(squashed / (1.0 - squashed))
end