Method: Secryst::Transformer#initialize
- Defined in:
- lib/secryst/transformer.rb
#initialize(d_model: 512, nhead: 8, num_encoder_layers: 6, num_decoder_layers: 6, dim_feedforward: 2048, dropout: 0.1, activation: 'relu', custom_encoder: nil, custom_decoder: nil, input_vocab_size:, target_vocab_size:) ⇒ Transformer
A transformer model. Users can modify the attributes as needed. The architecture is based on the paper “Attention Is All You Need” (Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010). With the corresponding parameters, users can build a BERT-style model (arxiv.org/abs/1810.04805); see the configuration sketch after the argument list below.

Args:
d_model: the number of expected features in the encoder/decoder inputs (default=512).
nhead: the number of heads in the multi-head attention modules (default=8).
num_encoder_layers: the number of sub-encoder-layers in the encoder (default=6).
num_decoder_layers: the number of sub-decoder-layers in the decoder (default=6).
dim_feedforward: the dimension of the feedforward network model (default=2048).
dropout: the dropout value (default=0.1).
activation: the activation function of the encoder/decoder intermediate layers, 'relu' or 'gelu' (default='relu').
custom_encoder: custom encoder (default=nil).
custom_decoder: custom decoder (default=nil).
input_vocab_size: size of vocabulary for input sequence (number of different possible tokens).
target_vocab_size: size of vocabulary for target sequence (number of different possible tokens).
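As noted above, a roughly BERT-base-sized model can be expressed through these arguments. The sketch below uses the published BERT-base hyperparameters (d_model 768, 12 layers, 12 heads, feed-forward 3072, GELU); the vocabulary sizes are illustrative placeholders, and a decoder depth is given only because this class always builds a decoder, whereas BERT itself is encoder-only:

  bert_like = Transformer.new(
    d_model: 768,
    nhead: 12,
    num_encoder_layers: 12,
    num_decoder_layers: 12,      # BERT has no decoder; value chosen only to satisfy this class
    dim_feedforward: 3072,
    dropout: 0.1,
    activation: 'gelu',
    input_vocab_size: 30_000,    # illustrative
    target_vocab_size: 30_000    # illustrative
  )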
- Examples
  # input_vocab_size: and target_vocab_size: are required keywords; the values
  # and tensor shapes here are illustrative. The built-in encoder and decoder
  # are constructed with the vocabulary sizes, so src and tgt are presumably
  # token-index tensors of shape (sequence_length, batch_size).
  transformer_model = Transformer.new(nhead: 16, num_encoder_layers: 12,
                                      input_vocab_size: 100, target_vocab_size: 100)
  src = Torch.randint(0, 100, [10, 32])
  tgt = Torch.randint(0, 100, [20, 32])
  out = transformer_model.call(src, tgt)
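Since the source shown below ends with a Linear projection to target_vocab_size followed by LogSoftmax, out presumably holds log-probabilities over the target vocabulary for every target position. A minimal, hedged follow-up for greedy decoding, assuming torch.rb's Tensor#argmax and an output laid out as (target_length, batch_size, target_vocab_size):

  predicted_ids = out.argmax(-1)   # assumed result shape: (target_length, batch_size)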
# File 'lib/secryst/transformer.rb', line 28

def initialize(d_model: 512, nhead: 8, num_encoder_layers: 6,
               num_decoder_layers: 6, dim_feedforward: 2048, dropout: 0.1,
               activation: 'relu', custom_encoder: nil, custom_decoder: nil,
               input_vocab_size:, target_vocab_size:)
  super()

  if custom_encoder
    @encoder = custom_encoder
  else
    encoder_layers = num_encoder_layers.times.map {
      TransformerEncoderLayer.new(d_model, nhead,
        dim_feedforward: dim_feedforward, dropout: dropout, activation: activation)
    }
    encoder_norm = Torch::NN::LayerNorm.new(d_model)
    @encoder = TransformerEncoder.new(encoder_layers, encoder_norm, d_model, input_vocab_size, dropout)
  end

  if custom_decoder
    @decoder = custom_decoder
  else
    decoder_layers = num_decoder_layers.times.map {
      TransformerDecoderLayer.new(d_model, nhead,
        dim_feedforward: dim_feedforward, dropout: dropout, activation: activation)
    }
    decoder_norm = Torch::NN::LayerNorm.new(d_model)
    @decoder = TransformerDecoder.new(decoder_layers, decoder_norm, d_model, target_vocab_size, dropout)
  end

  @linear = Torch::NN::Linear.new(d_model, target_vocab_size)
  @softmax = Torch::NN::LogSoftmax.new(dim: -1)

  _reset_parameters()

  @d_model = d_model
  @nhead = nhead
end
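One design detail worth noting in the construction above: the encoder and decoder stacks are built with num_encoder_layers.times.map { ... }, so the block runs once per layer and each layer object gets its own parameters. A small standalone Ruby sketch (plain objects stand in for layers) of why map with a block is used rather than Array.new(n, layer):

  shared = Array.new(3, Object.new)        # three references to a single object
  shared.map(&:object_id).uniq.size        # => 1
  distinct = 3.times.map { Object.new }    # three independent objects
  distinct.map(&:object_id).uniq.size      # => 3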