Class: DNN::Models::Model

Inherits:
Object
Defined in:
lib/dnn/core/models.rb

Overview

This class represents a network model and provides the interface for training, evaluation, and prediction.

Direct Known Subclasses

Sequential

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Constructor Details

#initialize ⇒ Model

Returns a new instance of Model.



# File 'lib/dnn/core/models.rb', line 19

def initialize
  @optimizer = nil
  @loss_func = nil
  @last_link = nil
  @built = false
  @callbacks = {
    before_epoch: [],
    after_epoch: [],
    before_train_on_batch: [],
    after_train_on_batch: [],
    before_test_on_batch: [],
    after_test_on_batch: [],
  }
  @layers_cache = nil
end

Instance Attribute Details

#loss_func ⇒ Object

Returns the value of attribute loss_func.



# File 'lib/dnn/core/models.rb', line 7

def loss_func
  @loss_func
end

#optimizer ⇒ Object

Returns the value of attribute optimizer.



# File 'lib/dnn/core/models.rb', line 6

def optimizer
  @optimizer
end

Class Method Details

.load(file_name) ⇒ DNN::Models::Model

Load a marshaled model.

Parameters:

  • file_name (String)

    File name of the marshaled model to load.

Returns:

  • (DNN::Models::Model)

    The loaded model.

# File 'lib/dnn/core/models.rb', line 12

def self.load(file_name)
  model = self.new
  loader = Loaders::MarshalLoader.new(model)
  loader.load(file_name)
  model
end
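
For example, a model previously written with #save can be restored as follows (Sequential is used here as the documented subclass; the file name is hypothetical):

model = DNN::Models::Sequential.load("trained_model.marshal")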

Instance Method Details

#accuracy(x, y, batch_size: 100) ⇒ Array

Evaluate the model and get the accuracy and mean loss on test data.

Parameters:

  • x (Numo::SFloat)

    Input test data.

  • y (Numo::SFloat)

    Output test data.

  • batch_size (Integer) (defaults to: 100)

    Batch size used for one test.

Returns:

  • (Array)

    Returns the test data accuracy and mean loss in the form [accuracy, mean_loss].



# File 'lib/dnn/core/models.rb', line 166

def accuracy(x, y, batch_size: 100)
  check_xy_type(x, y)
  num_test_datas = x.is_a?(Array) ? x[0].shape[0] : x.shape[0]
  batch_size = batch_size >= num_test_datas ? num_test_datas : batch_size
  iter = Iterator.new(x, y, random: false)
  total_correct = 0
  sum_loss = 0
  max_steps = (num_test_datas.to_f / batch_size).ceil
  iter.foreach(batch_size) do |x_batch, y_batch|
    correct, loss_value = test_on_batch(x_batch, y_batch)
    total_correct += correct
    sum_loss += loss_value.is_a?(Xumo::SFloat) ? loss_value.mean : loss_value
  end
  mean_loss = sum_loss / max_steps
  [total_correct.to_f / num_test_datas, mean_loss]
end
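
A minimal usage sketch, assuming x_test and y_test are Numo::SFloat test arrays prepared elsewhere:

acc, mean_loss = model.accuracy(x_test, y_test, batch_size: 128)
puts "accuracy: #{acc}, mean loss: #{mean_loss}"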

#add_callback(event, callback) ⇒ Object

Add a callback function.

Parameters:

  • event (Symbol)

    Callback event. The following events can be used:
    before_epoch: proc called before one epoch of training.
    after_epoch: proc called after one epoch of training.
    before_train_on_batch: proc called before train-on-batch processing.
    after_train_on_batch: proc called after train-on-batch processing.
    before_test_on_batch: proc called before test-on-batch processing.
    after_test_on_batch: proc called after test-on-batch processing.

  • callback (Proc)

    The callback to register for the event.

Raises:

  • (DNN_UnknownEventError)

    If an unknown event is specified.

# File 'lib/dnn/core/models.rb', line 237

def add_callback(event, callback)
  raise DNN_UnknownEventError.new("Unknown event #{event}.") unless @callbacks.has_key?(event)
  @callbacks[event] << callback
end
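
For example, to log the loss after every training batch (per the source of #train_on_batch below, the after_train_on_batch callbacks are invoked with the loss value):

model.add_callback(:after_train_on_batch, proc { |loss| puts "batch loss: #{loss}" })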

#built? ⇒ Boolean

Returns whether the model has already been built.

Returns:

  • (Boolean)

    true if the model has already been built.



# File 'lib/dnn/core/models.rb', line 301

def built?
  @built
end

#clear_callbacks(event) ⇒ Object

Clear the callback functions registered for the given event.

Parameters:

  • event (Symbol)

    Callback event. The same events as for #add_callback can be used: before_epoch, after_epoch, before_train_on_batch, after_train_on_batch, before_test_on_batch, after_test_on_batch.

Raises:

  • (DNN_UnknownEventError)

    If an unknown event is specified.

# File 'lib/dnn/core/models.rb', line 250

def clear_callbacks(event)
  raise DNN_UnknownEventError.new("Unknown event #{event}.") unless @callbacks.has_key?(event)
  @callbacks[event] = []
end

#copy ⇒ DNN::Models::Model

Return a copy of this model.

Returns:

  • (DNN::Models::Model)

    A copy of this model.

# File 'lib/dnn/core/models.rb', line 263

def copy
  Marshal.load(Marshal.dump(self))
end

#get_layer(name) ⇒ DNN::Layers::Layer

Get a layer the model has by name.

Parameters:

  • name (Symbol)

    The name of the layer to get.

Returns:

  • (DNN::Layers::Layer)

    The layer with the given name, or nil if no layer matches.

# File 'lib/dnn/core/models.rb', line 296

def get_layer(name)
  layers.find { |layer| layer.name == name }
end
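
A minimal sketch (the layer name :dense1 is hypothetical; it must match a name assigned to one of the model's layers):

layer = model.get_layer(:dense1)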

#has_param_layers ⇒ Array

Get all layers that have parameters.

Returns:

  • (Array)

    Array of all layers that have parameters.

# File 'lib/dnn/core/models.rb', line 289

def has_param_layers
  layers.select { |layer| layer.is_a?(Layers::HasParamLayer) }
end

#layers ⇒ Array

Get all layers.

Returns:

  • (Array)

    Array of all layers.

Raises:

  • (DNN_Error)

    If the model has not been built yet.

# File 'lib/dnn/core/models.rb', line 269

def layers
  raise DNN_Error.new("This model is not built. You need build this model using predict or train.") unless built?
  return @layers_cache if @layers_cache
  layers = []
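  # Recursively walk the link graph backwards from @last_link, collecting layers in forward order.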
  get_layers = -> link do
    return unless link
    layers.unshift(link.layer)
    if link.is_a?(TwoInputLink)
      get_layers.(link.prev1)
      get_layers.(link.prev2)
    else
      get_layers.(link.prev)
    end
  end
  get_layers.(@last_link)
  @layers_cache = layers.uniq
end

#load_hash_params(hash) ⇒ Object

This method is provided for compatibility with v0.12.4. Load model parameters from a hash.

Parameters:

  • hash (Hash)

    Hash of model parameters to load.



# File 'lib/dnn/core/models.rb', line 38

def load_hash_params(hash)
  has_param_layers_params = hash[:params]
  has_param_layers_index = 0
  has_param_layers.uniq.each do |layer|
    hash_params = has_param_layers_params[has_param_layers_index]
    hash_params.each do |key, (shape, bin)|
      data = Xumo::SFloat.from_binary(bin).reshape(*shape)
      layer.get_params[key].data = data
    end
    has_param_layers_index += 1
  end
end

#load_json_params(json_str) ⇒ Object

This method is provided for compatibility with v0.12.4. Load model parameters from a JSON string.

Parameters:

  • json_str (String)

    JSON string of model parameters to load.



# File 'lib/dnn/core/models.rb', line 54

def load_json_params(json_str)
  hash = JSON.parse(json_str, symbolize_names: true)
  has_param_layers_params = hash[:params]
  has_param_layers_index = 0
  has_param_layers.uniq.each do |layer|
    hash_params = has_param_layers_params[has_param_layers_index]
    hash_params.each do |key, (shape, base64_param)|
      bin = Base64.decode64(base64_param)
      data = Xumo::SFloat.from_binary(bin).reshape(*shape)
      layer.get_params[key].data = data
    end
    has_param_layers_index += 1
  end
end

#predict(x) ⇒ Object

Predict data.

Parameters:

  • x (Numo::SFloat)

    Input data.



# File 'lib/dnn/core/models.rb', line 217

def predict(x)
  check_xy_type(x)
  forward(x, false)
end
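
A minimal sketch, assuming x_test is a batch of inputs as a Numo::SFloat:

out = model.predict(x_test)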

#predict1(x) ⇒ Object

Predict a single piece of data.

Parameters:

  • x (Numo::SFloat)

    Input data. Note that x is a single sample without a batch dimension.



# File 'lib/dnn/core/models.rb', line 224

def predict1(x)
  check_xy_type(x)
  predict(x.reshape(1, *x.shape))[0, false]
end
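
For example, to predict on the first sample of a hypothetical test set ([0, false] selects one sample without its batch dimension):

y = model.predict1(x_test[0, false])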

#save(file_name) ⇒ Object

Save the model in marshal format.

Parameters:

  • file_name (String)

    File name to save the model to.



# File 'lib/dnn/core/models.rb', line 257

def save(file_name)
  saver = Savers::MarshalSaver.new(self)
  saver.save(file_name)
end
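
A minimal sketch (the file name is hypothetical):

model.save("trained_model.marshal")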

#setup(optimizer, loss_func) ⇒ Object

Set the optimizer and loss function for the model.

Parameters:

  • optimizer (Optimizers::Optimizer)

    Optimizer to use for training.

  • loss_func (Losses::Loss)

    Loss function to use for training.

# File 'lib/dnn/core/models.rb', line 72

def setup(optimizer, loss_func)
  unless optimizer.is_a?(Optimizers::Optimizer)
    raise TypeError.new("optimizer:#{optimizer.class} is not an instance of DNN::Optimizers::Optimizer class.")
  end
  unless loss_func.is_a?(Losses::Loss)
    raise TypeError.new("loss_func:#{loss_func.class} is not an instance of DNN::Losses::Loss class.")
  end
  @optimizer = optimizer
  @loss_func = loss_func
end
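
A minimal setup sketch. Adam and SoftmaxCrossEntropy are assumed here as representative Optimizers::Optimizer and Losses::Loss subclasses; substitute whichever classes suit your task:

model.setup(DNN::Optimizers::Adam.new, DNN::Losses::SoftmaxCrossEntropy.new)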

#test_on_batch(x, y) ⇒ Array

Evaluate the model on a single batch.

Parameters:

  • x (Numo::SFloat)

    Input test data.

  • y (Numo::SFloat)

    Output test data.

Returns:

  • (Array)

    Returns the number of correct predictions and the loss value in the form [correct, loss_value].



# File 'lib/dnn/core/models.rb', line 187

def test_on_batch(x, y)
  call_callbacks(:before_test_on_batch)
  x = forward(x, false)
  correct = evaluate(x, y)
  loss_value = @loss_func.loss(x, y, layers)
  call_callbacks(:after_test_on_batch, loss_value)
  [correct, loss_value]
end
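
A minimal sketch, assuming x_batch and y_batch are single-batch Numo::SFloat arrays:

correct, loss = model.test_on_batch(x_batch, y_batch)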

#train(x, y, epochs, batch_size: 1, initial_epoch: 1, test: nil, verbose: true) ⇒ Object Also known as: fit

Start training. Set up the model with #setup before using this method.

Parameters:

  • x (Numo::SFloat)

    Input training data.

  • y (Numo::SFloat)

    Output training data.

  • epochs (Integer)

    Number of epochs to train.

  • initial_epoch (Integer) (defaults to: 1)

    Initial epoch.

  • batch_size (Integer) (defaults to: 1)

    Batch size used for one training.

  • test (Array | NilClass) (defaults to: nil)

    To test the model after every epoch, specify [x_test, y_test]. To skip testing, specify nil.

  • verbose (Boolean) (defaults to: true)

    Set true to display the training log; set false to suppress it.

Raises:

  • (DNN_Error)

    If the optimizer or loss function has not been set up.

# File 'lib/dnn/core/models.rb', line 93

def train(x, y, epochs,
          batch_size: 1,
          initial_epoch: 1,
          test: nil,
          verbose: true)
  raise DNN_Error.new("The model is not optimizer setup complete.") unless @optimizer
  raise DNN_Error.new("The model is not loss_func setup complete.") unless @loss_func
  check_xy_type(x, y)
  iter = Iterator.new(x, y)
  num_train_datas = x.is_a?(Array) ? x[0].shape[0] : x.shape[0]
  (initial_epoch..epochs).each do |epoch|
    call_callbacks(:before_epoch, epoch)
    puts "【 epoch #{epoch}/#{epochs}" if verbose
    iter.foreach(batch_size) do |x_batch, y_batch, index|
      loss_value = train_on_batch(x_batch, y_batch)
      if loss_value.is_a?(Xumo::SFloat)
        loss_value = loss_value.mean
      elsif loss_value.nan?
        puts "\nloss is nan" if verbose
        return
      end
      num_trained_datas = (index + 1) * batch_size
      num_trained_datas = num_trained_datas > num_train_datas ? num_train_datas : num_trained_datas
      log = "\r"
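    # Render a 40-character text progress bar for the batches processed so far.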
      40.times do |i|
        if i < num_trained_datas * 40 / num_train_datas
          log << "="
        elsif i == num_trained_datas * 40 / num_train_datas
          log << ">"
        else
          log << "_"
        end
      end
      log << "  #{num_trained_datas}/#{num_train_datas} loss: #{sprintf('%.8f', loss_value)}"
      print log if verbose
    end
    if test
      acc, test_loss = accuracy(test[0], test[1], batch_size: batch_size)
      print "  accuracy: #{acc}, test loss: #{sprintf('%.8f', test_loss)}" if verbose
    end
    puts "" if verbose
    call_callbacks(:after_epoch, epoch)
  end
end
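
A minimal end-to-end training sketch, assuming a model already set up via #setup and Numo::SFloat arrays prepared elsewhere:

model.train(x_train, y_train, 10, batch_size: 128, test: [x_test, y_test])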

#train_on_batch(x, y) ⇒ Float | Numo::SFloat

Train on a single batch. Set up the model with #setup before using this method.

Parameters:

  • x (Numo::SFloat)

    Input training data.

  • y (Numo::SFloat)

    Output training data.

Returns:

  • (Float | Numo::SFloat)

    The loss value, as a Float or Numo::SFloat.

Raises:

  • (DNN_Error)

    If the optimizer or loss function has not been set up.

# File 'lib/dnn/core/models.rb', line 146

def train_on_batch(x, y)
  raise DNN_Error.new("The model is not optimizer setup complete.") unless @optimizer
  raise DNN_Error.new("The model is not loss_func setup complete.") unless @loss_func
  check_xy_type(x, y)
  call_callbacks(:before_train_on_batch)
  x = forward(x, true)
  loss_value = @loss_func.loss(x, y, layers)
  dy = @loss_func.backward(x, y)
  backward(dy)
  @optimizer.update(layers)
  @loss_func.regularizers_backward(layers)
  call_callbacks(:after_train_on_batch, loss_value)
  loss_value
end
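
For finer control than #train, a manual training loop can be built on train_on_batch. A minimal sketch mirroring how #train itself uses Iterator (the model is assumed to be set up, and a fresh iterator is created per epoch since reuse of a consumed iterator is not shown above):

10.times do
  iter = DNN::Iterator.new(x_train, y_train)
  iter.foreach(128) do |x_batch, y_batch|
    loss = model.train_on_batch(x_batch, y_batch)
  end
end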