Class: SVMKit::PolynomialModel::FactorizationMachineRegressor

Inherits:
Object
Includes:
Base::BaseEstimator, Base::Regressor
Defined in:
lib/svmkit/polynomial_model/factorization_machine_regressor.rb

Overview

FactorizationMachineRegressor is a class that implements Factorization Machine with stochastic gradient descent (SGD) optimization.
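The fitted model is a degree-2 polynomial: a bias term, a linear term, and factorized pairwise interactions. As a sketch of the model value for a single sample (plain Ruby, no Numo dependency; `fm_predict` and its arguments are illustrative names, not part of the library):

```ruby
# Illustrative sketch of the degree-2 Factorization Machine model value
# for one sample x, given bias b, weight vector w, and factor rows v[f]:
#   y(x) = b + w.x + 0.5 * sum_f [ (sum_i v[f][i]*x[i])**2 - sum_i (v[f][i]*x[i])**2 ]
def fm_predict(x, bias, weight, factors)
  linear = bias + weight.zip(x).sum { |w, xi| w * xi }
  interaction = factors.sum do |vf|
    s  = vf.zip(x).sum { |v, xi| v * xi }       # sum_i v_fi * x_i
    sq = vf.zip(x).sum { |v, xi| (v * xi)**2 }  # sum_i (v_fi * x_i)^2
    s**2 - sq
  end
  linear + 0.5 * interaction
end
```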

Reference

    1. S. Rendle, “Factorization Machines with libFM,” ACM Transactions on Intelligent Systems and Technology, vol. 3 (3), pp. 57:1–57:22, 2012.

    2. S. Rendle, “Factorization Machines,” Proc. the 10th IEEE International Conference on Data Mining (ICDM’10), pp. 995–1000, 2010.

    3. I. Sutskever, J. Martens, G. Dahl, and G. Hinton, “On the importance of initialization and momentum in deep learning,” Proc. the 30th International Conference on Machine Learning (ICML’13), pp. 1139–1147, 2013.

    4. G. Hinton, N. Srivastava, and K. Swersky, “Lecture 6e rmsprop,” Neural Networks for Machine Learning, 2012.

Examples:

estimator =
  SVMKit::PolynomialModel::FactorizationMachineRegressor.new(
   n_factors: 10, reg_param_bias: 0.1, reg_param_weight: 0.1, reg_param_factor: 0.1,
   max_iter: 5000, batch_size: 50, random_seed: 1)
estimator.fit(training_samples, training_values)
results = estimator.predict(testing_samples)

Instance Attribute Summary

Attributes included from Base::BaseEstimator

#params

Instance Method Summary

Methods included from Base::Regressor

#score

Constructor Details

#initialize(n_factors: 2, reg_param_bias: 1.0, reg_param_weight: 1.0, reg_param_factor: 1.0, init_std: 0.01, learning_rate: 0.01, decay: 0.9, momentum: 0.9, max_iter: 1000, batch_size: 10, random_seed: nil) ⇒ FactorizationMachineRegressor

Create a new regressor with Factorization Machine.

Parameters:

  • n_factors (Integer) (defaults to: 2)

    The number of latent factors in the factor matrix.

  • reg_param_bias (Float) (defaults to: 1.0)

    The regularization parameter for bias term.

  • reg_param_weight (Float) (defaults to: 1.0)

    The regularization parameter for weight vector.

  • reg_param_factor (Float) (defaults to: 1.0)

    The regularization parameter for factor matrix.

  • init_std (Float) (defaults to: 0.01)

    The standard deviation of normal random number for initialization of factor matrix.

  • learning_rate (Float) (defaults to: 0.01)

    The learning rate for optimization.

  • decay (Float) (defaults to: 0.9)

    The discounting factor for RMSProp optimization.

  • momentum (Float) (defaults to: 0.9)

    The Nesterov momentum for optimization.

  • max_iter (Integer) (defaults to: 1000)

    The maximum number of iterations.

  • batch_size (Integer) (defaults to: 10)

    The size of the mini batches.

  • random_seed (Integer) (defaults to: nil)

    The seed value used to initialize the random generator.



# File 'lib/svmkit/polynomial_model/factorization_machine_regressor.rb', line 59

def initialize(n_factors: 2,
               reg_param_bias: 1.0, reg_param_weight: 1.0, reg_param_factor: 1.0, init_std: 0.01,
               learning_rate: 0.01, decay: 0.9, momentum: 0.9,
               max_iter: 1000, batch_size: 10, random_seed: nil)
  check_params_float(reg_param_bias: reg_param_bias, reg_param_weight: reg_param_weight,
                     reg_param_factor: reg_param_factor, init_std: init_std,
                     learning_rate: learning_rate, decay: decay, momentum: momentum)
  check_params_integer(n_factors: n_factors, max_iter: max_iter, batch_size: batch_size)
  check_params_type_or_nil(Integer, random_seed: random_seed)
  check_params_positive(n_factors: n_factors, reg_param_bias: reg_param_bias,
                        reg_param_weight: reg_param_weight, reg_param_factor: reg_param_factor,
                        learning_rate: learning_rate, decay: decay, momentum: momentum,
                        max_iter: max_iter, batch_size: batch_size)
  @params = {}
  @params[:n_factors] = n_factors
  @params[:reg_param_bias] = reg_param_bias
  @params[:reg_param_weight] = reg_param_weight
  @params[:reg_param_factor] = reg_param_factor
  @params[:init_std] = init_std
  @params[:learning_rate] = learning_rate
  @params[:decay] = decay
  @params[:momentum] = momentum
  @params[:max_iter] = max_iter
  @params[:batch_size] = batch_size
  @params[:random_seed] = random_seed
  @params[:random_seed] ||= srand
  @factor_mat = nil
  @weight_vec = nil
  @bias_term = nil
  @rng = Random.new(@params[:random_seed])
end

Instance Attribute Details

#bias_term ⇒ Numo::DFloat (readonly)

Return the bias term for Factorization Machine.

Returns:

  • (Numo::DFloat)

    (shape: [n_outputs])



# File 'lib/svmkit/polynomial_model/factorization_machine_regressor.rb', line 40

def bias_term
  @bias_term
end

#factor_mat ⇒ Numo::DFloat (readonly)

Return the factor matrix for Factorization Machine.

Returns:

  • (Numo::DFloat)

    (shape: [n_outputs, n_factors, n_features])



# File 'lib/svmkit/polynomial_model/factorization_machine_regressor.rb', line 32

def factor_mat
  @factor_mat
end

#rng ⇒ Random (readonly)

Return the random generator for random sampling.

Returns:

  • (Random)


# File 'lib/svmkit/polynomial_model/factorization_machine_regressor.rb', line 44

def rng
  @rng
end

#weight_vec ⇒ Numo::DFloat (readonly)

Return the weight vector for Factorization Machine.

Returns:

  • (Numo::DFloat)

    (shape: [n_outputs, n_features])



# File 'lib/svmkit/polynomial_model/factorization_machine_regressor.rb', line 36

def weight_vec
  @weight_vec
end

Instance Method Details

#fit(x, y) ⇒ FactorizationMachineRegressor

Fit the model with given training data.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The training data to be used for fitting the model.

  • y (Numo::DFloat)

    (shape: [n_samples, n_outputs]) The target values to be used for fitting the model.

Returns:

  • (FactorizationMachineRegressor)

    The learned regressor itself.

# File 'lib/svmkit/polynomial_model/factorization_machine_regressor.rb', line 96

def fit(x, y)
  check_sample_array(x)
  check_tvalue_array(y)
  check_sample_tvalue_size(x, y)

  n_outputs = y.shape[1].nil? ? 1 : y.shape[1]
  _n_samples, n_features = x.shape

  if n_outputs > 1
    @factor_mat = Numo::DFloat.zeros(n_outputs, @params[:n_factors], n_features)
    @weight_vec = Numo::DFloat.zeros(n_outputs, n_features)
    @bias_term = Numo::DFloat.zeros(n_outputs)
    n_outputs.times do |n|
      factor, weight, bias = single_fit(x, y[true, n])
      @factor_mat[n, true, true] = factor
      @weight_vec[n, true] = weight
      @bias_term[n] = bias
    end
  else
    @factor_mat, @weight_vec, @bias_term = single_fit(x, y)
  end

  self
end

#marshal_dump ⇒ Hash

Dump marshal data.

Returns:

  • (Hash)

    The marshal data about FactorizationMachineRegressor.



# File 'lib/svmkit/polynomial_model/factorization_machine_regressor.rb', line 138

def marshal_dump
  { params: @params,
    factor_mat: @factor_mat,
    weight_vec: @weight_vec,
    bias_term: @bias_term,
    rng: @rng }
end

#marshal_load(obj) ⇒ nil

Load marshal data.

Returns:

  • (nil)


# File 'lib/svmkit/polynomial_model/factorization_machine_regressor.rb', line 148

def marshal_load(obj)
  @params = obj[:params]
  @factor_mat = obj[:factor_mat]
  @weight_vec = obj[:weight_vec]
  @bias_term = obj[:bias_term]
  @rng = obj[:rng]
  nil
end
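Since both hooks are defined, Ruby's built-in Marshal can round-trip a fitted estimator: `Marshal.dump` invokes `marshal_dump`, and `Marshal.load` allocates the object and passes the dumped hash to `marshal_load`. A minimal sketch of the same pattern on a hypothetical stand-in class (`TinyEstimator` is not part of SVMKit):

```ruby
# Stand-in class demonstrating the marshal_dump/marshal_load pattern used by
# the estimator, which serializes its params and learned state the same way.
class TinyEstimator
  attr_reader :params, :weight_vec

  def initialize(params = {}, weight_vec = nil)
    @params = params
    @weight_vec = weight_vec
  end

  def marshal_dump
    { params: @params, weight_vec: @weight_vec }
  end

  def marshal_load(obj)
    @params = obj[:params]
    @weight_vec = obj[:weight_vec]
    nil
  end
end

original = TinyEstimator.new({ max_iter: 1000 }, [0.1, 0.2])
restored = Marshal.load(Marshal.dump(original))
```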

#predict(x) ⇒ Numo::DFloat

Predict values for samples.

Parameters:

  • x (Numo::DFloat)

    (shape: [n_samples, n_features]) The samples to predict the values.

Returns:

  • (Numo::DFloat)

    (shape: [n_samples, n_outputs]) Predicted values per sample.



# File 'lib/svmkit/polynomial_model/factorization_machine_regressor.rb', line 125

def predict(x)
  check_sample_array(x)
  linear_term = @bias_term + x.dot(@weight_vec.transpose)
  factor_term = if @weight_vec.shape[1].nil?
                  0.5 * (@factor_mat.dot(x.transpose)**2 - (@factor_mat**2).dot(x.transpose**2)).sum(0)
                else
                  0.5 * (@factor_mat.dot(x.transpose)**2 - (@factor_mat**2).dot(x.transpose**2)).sum(1).transpose
                end
  linear_term + factor_term
end
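The factor_term above uses the reformulation from Rendle's paper: the naive O(n²) pairwise-interaction sum equals 0.5 · Σ_f [(Σ_i v_fi x_i)² − Σ_i v_fi² x_i²], computable in O(k·n) per sample. A small sanity check of that identity in plain Ruby (illustrative values; no Numo dependency):

```ruby
# Check that the O(k*n) factor term used in #predict matches the naive
# O(n^2) pairwise-interaction sum.
x       = [1.0, 2.0, 3.0]
factors = [[0.1, 0.2, 0.3],   # factor row f = 0: v[0][i]
           [0.4, 0.5, 0.6]]   # factor row f = 1: v[1][i]

# Reformulated term: 0.5 * sum_f [(sum_i v_fi x_i)^2 - sum_i v_fi^2 x_i^2]
fast = 0.5 * factors.sum do |vf|
  s  = vf.zip(x).sum { |v, xi| v * xi }
  sq = vf.zip(x).sum { |v, xi| (v * xi)**2 }
  s**2 - sq
end

# Naive term: sum over i < j of <v_:,i, v_:,j> * x_i * x_j
n = x.size
naive = (0...n).sum do |i|
  ((i + 1)...n).sum do |j|
    dot = factors.sum { |vf| vf[i] * vf[j] }
    dot * x[i] * x[j]
  end
end
```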