Class: NanoGPT::Layers::LayerNorm
- Inherits: Torch::NN::Module
  - Object
  - Torch::NN::Module
  - NanoGPT::Layers::LayerNorm
- Defined in: lib/nano_gpt/layers/layer_norm.rb
Overview
LayerNorm with an optional bias (PyTorch doesn't support bias: false directly, so the bias parameter is set to nil when disabled).
Instance Attribute Summary
- #bias ⇒ Object (readonly). Returns the value of attribute bias.
- #weight ⇒ Object (readonly). Returns the value of attribute weight.
Instance Method Summary
- #forward(input) ⇒ Object
- #initialize(ndim, bias: true) ⇒ LayerNorm (constructor). A new instance of LayerNorm.
Constructor Details
#initialize(ndim, bias: true) ⇒ LayerNorm
Returns a new instance of LayerNorm.
```ruby
# File 'lib/nano_gpt/layers/layer_norm.rb', line 9

def initialize(ndim, bias: true)
  super()
  @ndim = ndim
  @weight = Torch::NN::Parameter.new(Torch.ones(ndim))
  @bias = bias ? Torch::NN::Parameter.new(Torch.zeros(ndim)) : nil
end
```
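The constructor's bias handling can be illustrated in plain Ruby, with no torch dependency. This is a minimal sketch: `init_params` is a hypothetical helper that mirrors the ternary above, using arrays in place of `Torch.ones` / `Torch.zeros` parameters.

```ruby
# Sketch of the constructor's parameter setup: weight starts as ones(ndim);
# bias is zeros(ndim) when enabled, or nil when bias: false is passed.
def init_params(ndim, bias: true)
  weight = Array.new(ndim, 1.0)               # mirrors Torch.ones(ndim)
  b = bias ? Array.new(ndim, 0.0) : nil       # mirrors Torch.zeros(ndim) or nil
  [weight, b]
end

w, b = init_params(4, bias: false)
# w is [1.0, 1.0, 1.0, 1.0]; b is nil, so forward will skip the bias term
```

Starting weight at ones and bias at zeros makes the layer an identity transform after normalization at initialization, which is the standard LayerNorm init.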
Instance Attribute Details
#bias ⇒ Object (readonly)
Returns the value of attribute bias.
```ruby
# File 'lib/nano_gpt/layers/layer_norm.rb', line 7

def bias
  @bias
end
```
#weight ⇒ Object (readonly)
Returns the value of attribute weight.
```ruby
# File 'lib/nano_gpt/layers/layer_norm.rb', line 7

def weight
  @weight
end
```
Instance Method Details
#forward(input) ⇒ Object
```ruby
# File 'lib/nano_gpt/layers/layer_norm.rb', line 16

def forward(input)
  Torch::NN::Functional.layer_norm(input, [@ndim], weight: @weight, bias: @bias, eps: 1e-5)
end
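To show what `Functional.layer_norm` computes over the last dimension, here is a pure-Ruby sketch for a single vector (no torch dependency; `layer_norm_vec` is a hypothetical helper, using the same eps of 1e-5 and the biased variance that layer norm uses):

```ruby
# Normalize a vector to zero mean and unit variance over its elements,
# then scale by weight and shift by bias (skipped when bias is nil).
def layer_norm_vec(vec, weight, bias, eps: 1e-5)
  n = vec.size.to_f
  mean = vec.sum / n
  var = vec.sum { |v| (v - mean)**2 } / n   # biased (population) variance
  vec.each_with_index.map do |v, i|
    (v - mean) / Math.sqrt(var + eps) * weight[i] + (bias ? bias[i] : 0.0)
  end
end

x = [1.0, 2.0, 3.0, 4.0]
y = layer_norm_vec(x, [1.0] * 4, nil)
# mean 2.5, variance 1.25; output is approximately
# [-1.3416, -0.4472, 0.4472, 1.3416], summing to zero
```

With the default weight of ones and bias of zeros (or nil), the layer purely standardizes its input; training then learns per-feature scale and shift.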