Method: Secryst::TransformerDecoderLayer#initialize

Defined in:
lib/secryst/transformer.rb

#initialize(d_model, nhead, dim_feedforward: 2048, dropout: 0.1, activation: "relu") ⇒ TransformerDecoderLayer

TransformerDecoderLayer is made up of self-attention, multi-head (encoder-decoder) attention, and a feedforward network. This standard decoder layer is based on the paper "Attention Is All You Need" (Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. In Advances in Neural Information Processing Systems, pages 6000-6010). Users may modify this layer or implement it differently in their application; a sketch of how the sub-modules are wired together follows the argument list.

Args:

d_model: the number of expected features in the input (required).
nhead: the number of heads in the multi-head attention models (required).
dim_feedforward: the dimension of the feedforward network model (default=2048).
dropout: the dropout value (default=0.1).
activation: the activation function of the intermediate layer, "relu" or "gelu" (default="relu").
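
For orientation, here is a minimal sketch of the forward pass that #call runs over these sub-modules, assuming the standard post-norm residual ordering of the paper (the same ordering as PyTorch's TransformerDecoderLayer, which this port mirrors); the mask keywords and the destructured attention return value are assumptions about this port's API, and the key-padding masks are omitted for brevity:

def forward(tgt, memory, tgt_mask: nil, memory_mask: nil)
  # Sub-block 1: self-attention over the target, then residual + norm
  tgt2, _ = @self_attn.call(tgt, tgt, tgt, attn_mask: tgt_mask)
  tgt = @norm1.call(tgt + @dropout1.call(tgt2))
  # Sub-block 2: attention over the encoder memory, then residual + norm
  tgt2, _ = @multihead_attn.call(tgt, memory, memory, attn_mask: memory_mask)
  tgt = @norm2.call(tgt + @dropout2.call(tgt2))
  # Sub-block 3: position-wise feedforward, then residual + norm
  tgt2 = @linear2.call(@dropout.call(@activation.call(@linear1.call(tgt))))
  @norm3.call(tgt + @dropout3.call(tgt2))
end
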
Examples

decoder_layer = TransformerDecoderLayer.new(512, 8)
memory = Torch.rand(10, 32, 512)
tgt = Torch.rand(20, 32, 512)
out = decoder_layer.call(tgt, memory)
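
In practice a decoder layer is usually called with a causal (subsequent) target mask so each position can attend only to itself and earlier positions. A sketch, assuming the port keeps PyTorch's tgt_mask: keyword and that Torch.triu and Tensor#masked_fill are available in torch.rb; the mask construction below is illustrative, not part of the documented API:

sz = tgt.size(0)
# 1s strictly above the diagonal mark the future positions to hide
future = Torch.triu(Torch.ones(sz, sz), 1)
# Additive mask: -Infinity above the diagonal, 0 elsewhere
# (assumption: same convention as PyTorch's generate_square_subsequent_mask)
tgt_mask = future.masked_fill(future.eq(1), -Float::INFINITY)
out = decoder_layer.call(tgt, memory, tgt_mask: tgt_mask)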



# File 'lib/secryst/transformer.rb', line 298

def initialize(d_model, nhead, dim_feedforward: 2048, dropout: 0.1, activation: "relu")
  super()
  # Self-attention over the target sequence
  @self_attn = MultiheadAttention.new(d_model, nhead, dropout: dropout)
  # Encoder-decoder attention over the encoder memory
  @multihead_attn = MultiheadAttention.new(d_model, nhead, dropout: dropout)
  # Implementation of the position-wise feedforward model
  @linear1 = Torch::NN::Linear.new(d_model, dim_feedforward)
  @dropout = Torch::NN::Dropout.new(p: dropout)
  @linear2 = Torch::NN::Linear.new(dim_feedforward, d_model)

  # One LayerNorm and one Dropout per sub-block
  # (self-attention, encoder-decoder attention, feedforward)
  @norm1 = Torch::NN::LayerNorm.new(d_model)
  @norm2 = Torch::NN::LayerNorm.new(d_model)
  @norm3 = Torch::NN::LayerNorm.new(d_model)
  @dropout1 = Torch::NN::Dropout.new(p: dropout)
  @dropout2 = Torch::NN::Dropout.new(p: dropout)
  @dropout3 = Torch::NN::Dropout.new(p: dropout)

  # Resolve the "relu"/"gelu" string to a callable activation function
  @activation = _get_activation_fn(activation)
end
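
The helper _get_activation_fn is defined elsewhere in lib/secryst/transformer.rb and is not shown above. A minimal sketch of what it plausibly does, assuming torch.rb's Torch::NN::Functional provides relu and gelu; the case expression and error message are illustrative, not the verbatim Secryst source:

def _get_activation_fn(activation)
  case activation
  # Assumption: map the string to torch.rb's functional activations
  when "relu" then Torch::NN::Functional.method(:relu)
  when "gelu" then Torch::NN::Functional.method(:gelu)
  else
    raise ArgumentError, "activation should be relu/gelu, not #{activation}"
  end
end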