Class: DNN::Models::Model

Inherits:
  Chain < Object
Defined in:
lib/dnn/core/models.rb

Overview

This class represents a neural network model.

Direct Known Subclasses

FixedModel, Sequential

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Methods inherited from Chain

#forward, #layers, #load_hash, #to_hash

Constructor Details

#initialize ⇒ Model

Returns a new instance of Model.



# File 'lib/dnn/core/models.rb', line 125

def initialize
  super
  @optimizer = nil
  @loss_func = nil
  @built = false
  @loss_weights = nil
  @callbacks = []
  @last_log = {}
  @early_stop_requested = false
end

Instance Attribute Details

#last_log ⇒ Object (readonly)

Returns the value of attribute last_log.



# File 'lib/dnn/core/models.rb', line 113

def last_log
  @last_log
end

#loss_weights ⇒ Object

Returns the value of attribute loss_weights.



# File 'lib/dnn/core/models.rb', line 112

def loss_weights
  @loss_weights
end

#optimizer ⇒ Object

Returns the value of attribute optimizer.



# File 'lib/dnn/core/models.rb', line 111

def optimizer
  @optimizer
end

Class Method Details

.load(file_name) ⇒ DNN::Models::Model

Load a model saved in marshal format.

Parameters:

  • file_name (String)

    File name of marshal model to load.

Returns:

  • (DNN::Models::Model)

    Loaded model.

# File 'lib/dnn/core/models.rb', line 118

def self.load(file_name)
  model = self.allocate
  loader = Loaders::MarshalLoader.new(model)
  loader.load(file_name)
  model
end
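
Usage sketch (the file name "trained_model.marshal" is hypothetical, assumed to have been written earlier with #save):

model = DNN::Models::Model.load("trained_model.marshal")
# The restored model can predict immediately or continue training.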

Instance Method Details

#add_callback(callback) ⇒ Object

Add a callback to the model.

Parameters:

  • callback (Callback)

    Callback object.



# File 'lib/dnn/core/models.rb', line 457

def add_callback(callback)
  callback.model = self
  @callbacks << callback
end
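
A usage sketch that registers a Callbacks::LambdaCallback built directly with the constructor shown in #add_lambda_callback below (the :before_epoch event name is an assumption):

callback = DNN::Callbacks::LambdaCallback.new(:before_epoch) { puts "starting an epoch" }
model.add_callback(callback)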

#add_lambda_callback(event) { ... } ⇒ Object

Add a lambda callback.

Parameters:

  • event (Symbol)

    Event on which to execute the callback.

Yields:

  • The block to register as the callback body.



# File 'lib/dnn/core/models.rb', line 465

def add_lambda_callback(event, &block)
  callback = Callbacks::LambdaCallback.new(event, &block)
  callback.model = self
  @callbacks << callback
end
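
The equivalent shorthand, as a sketch (assuming :after_epoch is a supported event and :loss is the log key written by #train_step):

model.add_lambda_callback(:after_epoch) do
  puts "last training loss: #{model.last_log[:loss]}"
end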

#built? ⇒ Boolean

Returns whether the model has already been built.

Returns:

  • (Boolean)

    True if the model has already been built.



# File 'lib/dnn/core/models.rb', line 518

def built?
  @built
end

#call(input_tensors) ⇒ Object



# File 'lib/dnn/core/models.rb', line 136

def call(input_tensors)
  output_tensors = forward(input_tensors)
  @built = true unless @built
  output_tensors
end

#call_callbacks(event) ⇒ Object



# File 'lib/dnn/core/models.rb', line 624

def call_callbacks(event)
  @callbacks.each do |callback|
    callback.send(event) if callback.respond_to?(event)
  end
end

#check_early_stop_requested ⇒ Object



# File 'lib/dnn/core/models.rb', line 610

def check_early_stop_requested
  if @early_stop_requested
    @early_stop_requested = false
    return true
  end
  false
end

#clean_layers ⇒ Object

Clean all layers.



# File 'lib/dnn/core/models.rb', line 523

def clean_layers
  layers.each(&:clean)
  if @loss_func.is_a?(Array)
    @loss_func.each do |lf|
      lf.clean
    end
  elsif @loss_func.is_a?(Losses::Loss)
    @loss_func.clean
  end
  @layers_cache = nil
end

#clear_callbacks ⇒ Object

Clear the callbacks registered for each event.



# File 'lib/dnn/core/models.rb', line 472

def clear_callbacks
  @callbacks = []
end

#copy ⇒ DNN::Models::Model

Return a copy of this model.

Returns:

  • (DNN::Models::Model)

    A copy of this model.

# File 'lib/dnn/core/models.rb', line 498

def copy
  Marshal.load(Marshal.dump(self))
end

#evaluate(x, y, batch_size: 100, need_accuracy: true) ⇒ Array

Evaluate the model and get the accuracy and loss on the test data.

Parameters:

  • x (Numo::SFloat)

    Input test data.

  • y (Numo::SFloat)

    Output test data.

  • batch_size (Integer) (defaults to: 100)

    Batch size used for one test.

  • need_accuracy (Boolean) (defaults to: true)

    Set true to compute the accuracy.

Returns:

  • (Array)

    Returns the test accuracy and mean loss in the form [accuracy, mean_loss]. If accuracy is not needed, returns [nil, mean_loss].



# File 'lib/dnn/core/models.rb', line 315

def evaluate(x, y, batch_size: 100, need_accuracy: true)
  Utils.check_input_data_type("x", x, Xumo::SFloat)
  Utils.check_input_data_type("y", y, Xumo::SFloat)
  evaluator = ModelEvaluator.new(self)
  evaluator.start_evaluate(x, y, batch_size: batch_size, need_accuracy: need_accuracy)
  evaluator.update while evaluator.evaluating?
  [@last_log[:test_accuracy], @last_log[:test_loss]]
end
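
Usage sketch (x_test and y_test are assumed to be prepared Numo::SFloat arrays):

accuracy, loss = model.evaluate(x_test, y_test, batch_size: 100)
puts "test accuracy: #{accuracy}, test loss: #{loss}"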

#evaluate_by_iterator(test_iterator, batch_size: 100, need_accuracy: true) ⇒ Array

Evaluate the model using an iterator.

Parameters:

  • test_iterator (DNN::Iterator)

    Iterator used for testing.

  • batch_size (Integer) (defaults to: 100)

    Batch size used for one test.

  • need_accuracy (Boolean) (defaults to: true)

    Set true to compute the accuracy.

Returns:

  • (Array)

    Returns the test accuracy and mean loss in the form [accuracy, mean_loss]. If accuracy is not needed, returns [nil, mean_loss].



# File 'lib/dnn/core/models.rb', line 330

def evaluate_by_iterator(test_iterator, batch_size: 100, need_accuracy: true)
  evaluator = ModelEvaluator.new(self)
  evaluator.start_evaluate_by_iterator(test_iterator, batch_size: batch_size, need_accuracy: need_accuracy)
  evaluator.update while evaluator.evaluating?
  [@last_log[:test_accuracy], @last_log[:test_loss]]
end

#get_all_params_data ⇒ Array

Get parameter data of all layers.

Returns:

  • (Array)

    Parameter data.



# File 'lib/dnn/core/models.rb', line 537

def get_all_params_data
  trainable_layers.map do |layer|
    layer.get_params.to_h do |key, param|
      [key, param.data]
    end
  end
end

#get_all_trainable_params ⇒ Object



# File 'lib/dnn/core/models.rb', line 618

def get_all_trainable_params
  layers.select { |layer| layer.is_a?(Layers::TrainableLayer) && layer.trainable }
        .map { |layer| layer.get_params.values }.flatten.compact
        .select(&:grad)
end

#get_layer(name) ⇒ DNN::Layers::Layer

Get a layer that the model holds, by name.

Parameters:

  • name (Symbol)

    The name of the layer to get.

Returns:

  • (DNN::Layers::Layer)

    The layer with the given name, or nil if no such layer exists.

# File 'lib/dnn/core/models.rb', line 511

def get_layer(name)
  layer = instance_variable_get("@#{name}")
  return layer if layer.is_a?(Layers::Layer) || layer.is_a?(Chain) || layer.is_a?(LayersList)
  nil
end
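
Because the lookup reads the instance variable named by the symbol, this is useful for model subclasses that store layers in instance variables. A minimal sketch with a hypothetical @dense1:

class MLP < DNN::Models::Model
  def initialize
    super
    @dense1 = DNN::Layers::Dense.new(128)
  end
end

MLP.new.get_layer(:dense1) # => the Dense layer held in @dense1 (nil if absent)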

#load_params(file_name) ⇒ Object

Load params saved in marshal format.

Parameters:

  • file_name (String)

    File name of the marshal params to load.



# File 'lib/dnn/core/models.rb', line 478

def load_params(file_name)
  loader = Loaders::MarshalLoader.new(self)
  loader.load(file_name)
end

#loss_func ⇒ Object



# File 'lib/dnn/core/models.rb', line 158

def loss_func
  @loss_func
end

#loss_func=(lfs) ⇒ Object



# File 'lib/dnn/core/models.rb', line 162

def loss_func=(lfs)
  if lfs.is_a?(Array)
    @loss_func = []
    lfs.each.with_index do |lf, i|
      unless lf.is_a?(Losses::Loss)
        raise TypeError, "loss_func[#{i}]:#{lf.class} is not an instance of DNN::Losses::Loss class."
      end
      @loss_func << lf
    end
  else
    @loss_func = lfs
  end
end
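
For a multi-output model, an array of losses can be assigned, one per output. A sketch using two loss classes from DNN::Losses:

model.loss_func = [DNN::Losses::SoftmaxCrossEntropy.new,
                   DNN::Losses::MeanSquaredError.new]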

#predict(x, use_loss_activation: true) ⇒ Object

Predict data.

Parameters:

  • x (Numo::SFloat)

    Input data.

  • use_loss_activation (Boolean) (defaults to: true)

    Use loss activation when loss has an activation.



# File 'lib/dnn/core/models.rb', line 415

def predict(x, use_loss_activation: true)
  Utils.check_input_data_type("x", x, Xumo::SFloat)
  DNN.learning_phase = false
  output_tensors = call(Tensor.convert(x))
  if output_tensors.is_a?(Array)
    lfs = @loss_func
    ary_output_tensors = output_tensors
  else
    lfs = [@loss_func]
    ary_output_tensors = [output_tensors]
  end
  ys = []
  ary_output_tensors.each.with_index do |out, i|
    y = out.data
    lf = lfs[i]
    if use_loss_activation && lf && lf.class.respond_to?(:activation)
      y = lf.class.activation(y)
    end
    ys << y
  end
  output_tensors.is_a?(Array) ? ys : ys.first
end
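
Usage sketch (x_test is assumed to be a batch of inputs as Numo::SFloat):

ys = model.predict(x_test)
# One output row per input sample; with a SoftmaxCrossEntropy loss and
# use_loss_activation: true, each row is a softmax probability vector.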

#predict1(x, use_loss_activation: true) ⇒ Object

Predict a single sample.

Parameters:

  • x (Numo::SFloat)

    Input data for a single sample, without the batch dimension.



# File 'lib/dnn/core/models.rb', line 440

def predict1(x, use_loss_activation: true)
  Utils.check_input_data_type("x", x, Xumo::SFloat)
  input = if x.is_a?(Array)
            x.map { |v| v.reshape(1, *v.shape) }
          else
            x.reshape(1, *x.shape)
          end
  y = predict(input, use_loss_activation: use_loss_activation)
  if y.is_a?(Array)
    y.map { |v| v[0, false] }
  else
    y[0, false]
  end
end
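
Sketch for a single sample (the Numo slice [0, false] takes the first sample without its batch dimension; predict1 adds the batch dimension back internally):

x1 = x_test[0, false] # e.g. shape [784] for a flattened image
y1 = model.predict1(x1)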

#request_early_stop ⇒ Object

Request that training stop early.



# File 'lib/dnn/core/models.rb', line 606

def request_early_stop
  @early_stop_requested = true
end
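
A sketch that stops training once the test loss falls below a threshold, using the :test_loss key written into last_log (the :after_epoch event name is an assumption):

model.add_lambda_callback(:after_epoch) do
  test_loss = model.last_log[:test_loss]
  model.request_early_stop if test_loss && test_loss < 0.05
end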

#save(file_name) ⇒ Object

Save the model in marshal format.

Parameters:

  • file_name (String)

    File name to save the model to.



# File 'lib/dnn/core/models.rb', line 485

def save(file_name)
  saver = Savers::MarshalSaver.new(self, include_model: true)
  saver.save(file_name)
end

#save_params(file_name) ⇒ Object

Save the params in marshal format.

Parameters:

  • file_name (String)

    File name to save the params to.



# File 'lib/dnn/core/models.rb', line 492

def save_params(file_name)
  saver = Savers::MarshalSaver.new(self, include_model: false)
  saver.save(file_name)
end
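
Round-trip sketch for both save flavors (file names are hypothetical):

model.save("model.marshal")         # whole model: structure and params
model.save_params("params.marshal") # params only
model.load_params("params.marshal") # restore params into a built model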

#set_all_params_data(params_data) ⇒ Object

Set parameter data of all layers.

Parameters:

  • params_data (Array)

    Parameter data obtained by get_all_params_data.



# File 'lib/dnn/core/models.rb', line 547

def set_all_params_data(params_data)
  trainable_layers.each.with_index do |layer, i|
    params_data[i].each do |(key, data)|
      layer.get_params[key].data = data
    end
  end
end
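
Sketch copying parameters between two models with the same layer structure (model_a and model_b are hypothetical):

params = model_a.get_all_params_data
model_b.set_all_params_data(params)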

#setup(optimizer, loss_func, loss_weights: nil) ⇒ Object

Set the optimizer and loss function on the model.

Parameters:

  • optimizer (DNN::Optimizers::Optimizer)

    Optimizer to use for learning.

  • loss_func (DNN::Losses::Loss)

    Loss function to use for learning.

  • loss_weights (Array | NilClass) (defaults to: nil)

    Weighting applied to each loss function's contribution when multiple losses are used.



# File 'lib/dnn/core/models.rb', line 146

def setup(optimizer, loss_func, loss_weights: nil)
  unless optimizer.is_a?(Optimizers::Optimizer)
    raise TypeError, "optimizer:#{optimizer.class} is not an instance of DNN::Optimizers::Optimizer class."
  end
  unless loss_func.is_a?(Losses::Loss) || loss_func.is_a?(Array)
    raise TypeError, "loss_func:#{loss_func.class} is not an instance of DNN::Losses::Loss or Array class."
  end
  @optimizer = optimizer
  self.loss_func = loss_func
  @loss_weights = loss_weights
end
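
A typical setup sketch with the library's Adam optimizer and softmax cross-entropy loss:

model.setup(DNN::Optimizers::Adam.new, DNN::Losses::SoftmaxCrossEntropy.new)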

#test_on_batch(x, y) ⇒ Float | Array

Test the model on a single batch.

Parameters:

  • x (Numo::SFloat | Array)

    Input test data.

  • y (Numo::SFloat | Array)

    Output test data.

Returns:

  • (Float | Array)

    Return loss value in the form of Float or Array.

Raises:

  • (DNN::DNNError)

    If the loss function has not been set up.

# File 'lib/dnn/core/models.rb', line 360

def test_on_batch(x, y)
  raise DNNError, "The model is not loss_func setup complete." unless @loss_func
  Utils.check_input_data_type("x", x, Xumo::SFloat)
  Utils.check_input_data_type("y", y, Xumo::SFloat)
  *, loss_data = test_on_batch_internal(x, y)
  if loss_data.is_a?(Array)
    loss_data.map { |v| Utils.to_f(v) }
  else
    Utils.to_f(loss_data)
  end
end

#test_step(x, y, need_accuracy: false) ⇒ Hash

Testing process to be performed in one step.

Parameters:

  • x (Numo::SFloat)

    Input test data.

  • y (Numo::SFloat)

    Output test data.

Returns:

  • (Hash)

    Hash of contents to be output to log.



# File 'lib/dnn/core/models.rb', line 341

def test_step(x, y, need_accuracy: false)
  output_data, loss_data = test_on_batch_internal(x, y)
  if loss_data.is_a?(Array)
    loss_value = []
    accuracy = []
    loss_data.each_index do |i|
      loss_value << Utils.to_f(loss_data[i])
      accuracy << accuracy(output_data[i], y[i]).to_f / y[i].shape[0]
    end
  else
    loss_value = Utils.to_f(loss_data)
  end
  { test_loss: loss_value, test_accuracy: accuracy(output_data, y) }
end

#to_cpu ⇒ DNN::Models::Model

Convert the parameters of the model and optimizer for CPU use.

Returns:

  • (DNN::Models::Model)

    Returns self.

# File 'lib/dnn/core/models.rb', line 557

def to_cpu
  params_data = get_all_params_data
  clean_layers
  set_all_params_data(params_data)
  trainable_layers.each do |layer|
    layer.get_params.each do |key, param|
      data = param.data
      if DNN.use_cumo? && data.is_a?(Cumo::NArray)
        param.data = Utils.cumo2numo(data)
      end
    end
  end
  @optimizer.status.each do |key, state|
    next unless state
    state.each do |param, data|
      if DNN.use_cumo? && data.is_a?(Cumo::NArray)
        state[param] = Utils.cumo2numo(data)
      end
    end
  end
  self
end

#to_gpu ⇒ DNN::Models::Model

Convert the parameters of the model and optimizer for GPU use.

Returns:

  • (DNN::Models::Model)

    Returns self.

# File 'lib/dnn/core/models.rb', line 582

def to_gpu
  params_data = get_all_params_data
  clean_layers
  set_all_params_data(params_data)
  trainable_layers.each do |layer|
    layer.get_params.each do |(key, param)|
      data = param.data
      if DNN.use_cumo? && data.is_a?(Numo::NArray)
        param.data = Utils.numo2cumo(data)
      end
    end
  end
  @optimizer.status.each do |(key, state)|
    next unless state
    state.each do |(param, data)|
      if DNN.use_cumo? && data.is_a?(Numo::NArray)
        state[param] = Utils.numo2cumo(data)
      end
    end
  end
  self
end
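
Sketch moving a model to the GPU only when the Cumo backend is active (DNN.use_cumo? appears in the source above):

model.to_gpu if DNN.use_cumo?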

#train(x, y, epochs, batch_size: 1, initial_epoch: 1, test: nil, verbose: true, need_accuracy: true, io: $stdout) ⇒ Object Also known as: fit

Start training. Set up the model before using this method.

Parameters:

  • x (Numo::SFloat)

    Input training data.

  • y (Numo::SFloat)

    Output training data.

  • epochs (Integer)

    Number of training epochs.

  • batch_size (Integer) (defaults to: 1)

    Batch size used for one training.

  • initial_epoch (Integer) (defaults to: 1)

    Initial epoch.

  • test (Array | NilClass) (defaults to: nil)

    To test the model after every epoch, specify [x_test, y_test]. To skip testing, specify nil.

  • verbose (Boolean) (defaults to: true)

    Set true to display the log. If false is set, the log is not displayed.

  • need_accuracy (Boolean) (defaults to: true)

    Set true to compute the accuracy.

  • io (IO) (defaults to: $stdout)

    Specifies the IO object to use for logging.



# File 'lib/dnn/core/models.rb', line 188

def train(x, y, epochs,
          batch_size: 1,
          initial_epoch: 1,
          test: nil,
          verbose: true,
          need_accuracy: true,
          io: $stdout)
  trainer = ModelTrainer.new(self)
  trainer.start_train(x, y, epochs,
                      batch_size: batch_size,
                      initial_epoch: initial_epoch,
                      test: test,
                      verbose: verbose,
                      need_accuracy: need_accuracy,
                      io: io)
  trainer.update while trainer.training?
end
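
End-to-end training sketch (x_train, y_train, x_test, y_test are assumed prepared Numo::SFloat arrays, e.g. flattened 784-dimensional images with one-hot labels):

model = DNN::Models::Sequential.new
model << DNN::Layers::InputLayer.new(784)
model << DNN::Layers::Dense.new(256)
model << DNN::Layers::ReLU.new
model << DNN::Layers::Dense.new(10)
model.setup(DNN::Optimizers::Adam.new, DNN::Losses::SoftmaxCrossEntropy.new)
model.train(x_train, y_train, 10, batch_size: 128, test: [x_test, y_test])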

#train_by_iterator(train_iterator, epochs, batch_size: 1, initial_epoch: 1, test: nil, verbose: true, need_accuracy: true, io: $stdout) ⇒ Object Also known as: fit_by_iterator

Start training with an iterator. Set up the model before using this method.

Parameters:

  • train_iterator (DNN::Iterator)

    Iterator used for training.

  • epochs (Integer)

    Number of training epochs.

  • batch_size (Integer) (defaults to: 1)

    Batch size used for one training.

  • initial_epoch (Integer) (defaults to: 1)

    Initial epoch.

  • test (Array | NilClass) (defaults to: nil)

    To test the model after every epoch, specify [x_test, y_test]. To skip testing, specify nil.

  • verbose (Boolean) (defaults to: true)

    Set true to display the log. If false is set, the log is not displayed.

  • need_accuracy (Boolean) (defaults to: true)

    Set true to compute the accuracy.

  • io (IO) (defaults to: $stdout)

    Specifies the IO object to use for logging.



# File 'lib/dnn/core/models.rb', line 219

def train_by_iterator(train_iterator, epochs,
                      batch_size: 1,
                      initial_epoch: 1,
                      test: nil,
                      verbose: true,
                      need_accuracy: true,
                      io: $stdout)
  trainer = ModelTrainer.new(self)
  trainer.start_train_by_iterator(train_iterator, epochs,
                                  batch_size: batch_size,
                                  initial_epoch: initial_epoch,
                                  test: test,
                                  verbose: verbose,
                                  need_accuracy: need_accuracy,
                                  io: io)
  trainer.update while trainer.training?
end

#train_on_batch(x, y) ⇒ Float | Array

Train on a single batch. Set up the model before using this method.

Parameters:

  • x (Numo::SFloat)

    Input training data.

  • y (Numo::SFloat)

    Output training data.

Returns:

  • (Float | Array)

    Return loss value in the form of Float or Array.

Raises:

  • (DNN::DNNError)

    If the optimizer or loss function has not been set up.

# File 'lib/dnn/core/models.rb', line 269

def train_on_batch(x, y)
  raise DNNError, "The model is not optimizer setup complete." unless @optimizer
  raise DNNError, "The model is not loss_func setup complete." unless @loss_func
  Utils.check_input_data_type("x", x, Xumo::SFloat)
  Utils.check_input_data_type("y", y, Xumo::SFloat)
  *, loss_data = train_on_batch_internal(x, y)
  if loss_data.is_a?(Array)
    loss_data.map { |v| Utils.to_f(v) }
  else
    Utils.to_f(loss_data)
  end
end
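
A hand-rolled training loop sketch built on train_on_batch (x_train and y_train assumed to be Numo::SFloat; the model must be set up first):

batch_size = 128
(x_train.shape[0] / batch_size).times do |step|
  i = step * batch_size
  x_batch = x_train[i...(i + batch_size), false] # slice one batch
  y_batch = y_train[i...(i + batch_size), false]
  loss = model.train_on_batch(x_batch, y_batch)
  puts "step #{step}: loss=#{loss}"
end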

#train_step(x, y, need_accuracy: false) ⇒ Hash

Implement the training process to be performed in one step.

Parameters:

  • x (Numo::SFloat)

    Input training data.

  • y (Numo::SFloat)

    Output training data.

  • need_accuracy (Boolean) (defaults to: false)

    Set true to compute the accuracy.

Returns:

  • (Hash)

    Hash of contents to be output to log.



# File 'lib/dnn/core/models.rb', line 244

def train_step(x, y, need_accuracy: false)
  output_data, loss_data = train_on_batch_internal(x, y)
  if loss_data.is_a?(Array)
    loss_value = []
    acc = [] if need_accuracy
    loss_data.each_index do |i|
      loss_value << Utils.to_f(loss_data[i])
      acc << accuracy(output_data[i], y[i]).to_f / y[i].shape[0] if need_accuracy
    end
  else
    loss_value = Utils.to_f(loss_data)
    acc = accuracy(output_data, y).to_f / y.shape[0] if need_accuracy
  end
  if need_accuracy
    { loss: loss_value, accuracy: acc }
  else
    { loss: loss_value }
  end
end

#trainable_layers ⇒ Array

Get all trainable layers.

Returns:

  • (Array)

    Array of all layers that have parameters.



# File 'lib/dnn/core/models.rb', line 504

def trainable_layers
  layers.select { |layer| layer.is_a?(Layers::TrainableLayer) }
end