Class: Rumale::Ensemble::GradientBoostingRegressor
- Inherits:
- Object
- Includes:
- Base::BaseEstimator, Base::Regressor
- Defined in:
- lib/rumale/ensemble/gradient_boosting_regressor.rb
Overview
GradientBoostingRegressor is a class that implements gradient tree boosting for regression. The class uses L2 (squared error) loss as the loss function.
Reference
- J. H. Friedman, “Greedy Function Approximation: A Gradient Boosting Machine,” Annals of Statistics, 29 (5), pp. 1189–1232, 2001.
- J. H. Friedman, “Stochastic Gradient Boosting,” Computational Statistics and Data Analysis, 38 (4), pp. 367–378, 2002.
- T. Chen and C. Guestrin, “XGBoost: A Scalable Tree Boosting System,” Proc. KDD’16, pp. 785–794, 2016.
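To illustrate the idea behind the references above: with L2 loss, the negative gradient is simply the residual y - F(x), so each boosting stage fits its new estimator to the current residuals and adds a learning-rate-scaled step. A self-contained toy sketch in plain Ruby (the idealized per-sample "estimator" below stands in for a regression tree; none of this is Rumale code):

```ruby
# Toy gradient boosting with L2 loss. An idealized base learner that
# predicts each residual exactly shrinks the error by a factor of
# (1 - learning_rate) per stage, so predictions converge toward y.
def boost(y, n_estimators:, learning_rate:)
  base = y.sum / y.size.to_f           # initial model: mean of the targets
  pred = Array.new(y.size, base)
  n_estimators.times do
    residuals = y.each_index.map { |i| y[i] - pred[i] }  # negative L2 gradient
    pred = pred.each_index.map { |i| pred[i] + learning_rate * residuals[i] }
  end
  pred
end

pred = boost([1.0, 2.0, 3.0], n_estimators: 50, learning_rate: 0.1)
# pred is now close to [1.0, 2.0, 3.0]
```

With 50 stages at learning rate 0.1, the remaining error is (0.9)^50 ≈ 0.5% of the initial residual, which is why many small steps with a modest learning rate are the usual configuration.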
Instance Attribute Summary
-
#estimators ⇒ Array<GradientTreeRegressor>
readonly
Return the set of estimators.
-
#feature_importances ⇒ Numo::DFloat
readonly
Return the importance for each feature.
-
#rng ⇒ Random
readonly
Return the random generator used for random selection of feature indices.
Attributes included from Base::BaseEstimator
Instance Method Summary
-
#apply(x) ⇒ Numo::Int32
Return the index of the leaf that each sample reached.
-
#fit(x, y) ⇒ GradientBoostingRegressor
Fit the model with given training data.
-
#initialize(n_estimators: 100, learning_rate: 0.1, reg_lambda: 0.0, subsample: 1.0, max_depth: nil, max_leaf_nodes: nil, min_samples_leaf: 1, max_features: nil, random_seed: nil) ⇒ GradientBoostingRegressor
constructor
Create a new regressor with gradient tree boosting.
-
#marshal_dump ⇒ Hash
Dump marshal data.
-
#marshal_load(obj) ⇒ nil
Load marshal data.
-
#predict(x) ⇒ Numo::DFloat
Predict values for samples.
Methods included from Base::Regressor
Constructor Details
#initialize(n_estimators: 100, learning_rate: 0.1, reg_lambda: 0.0, subsample: 1.0, max_depth: nil, max_leaf_nodes: nil, min_samples_leaf: 1, max_features: nil, random_seed: nil) ⇒ GradientBoostingRegressor
Create a new regressor with gradient tree boosting.
# File 'lib/rumale/ensemble/gradient_boosting_regressor.rb', line 56

def initialize(n_estimators: 100, learning_rate: 0.1, reg_lambda: 0.0, subsample: 1.0,
               max_depth: nil, max_leaf_nodes: nil, min_samples_leaf: 1,
               max_features: nil, random_seed: nil)
  check_params_type_or_nil(Integer, max_depth: max_depth, max_leaf_nodes: max_leaf_nodes,
                           max_features: max_features, random_seed: random_seed)
  check_params_integer(n_estimators: n_estimators, min_samples_leaf: min_samples_leaf)
  check_params_float(learning_rate: learning_rate, reg_lambda: reg_lambda, subsample: subsample)
  check_params_positive(n_estimators: n_estimators, learning_rate: learning_rate,
                        reg_lambda: reg_lambda, subsample: subsample,
                        max_depth: max_depth, max_leaf_nodes: max_leaf_nodes,
                        min_samples_leaf: min_samples_leaf, max_features: max_features)
  @params = {}
  @params[:n_estimators] = n_estimators
  @params[:learning_rate] = learning_rate
  @params[:reg_lambda] = reg_lambda
  @params[:subsample] = subsample
  @params[:max_depth] = max_depth
  @params[:max_leaf_nodes] = max_leaf_nodes
  @params[:min_samples_leaf] = min_samples_leaf
  @params[:max_features] = max_features
  @params[:random_seed] = random_seed
  @params[:random_seed] ||= srand
  @estimators = nil
  @base_predictions = nil
  @feature_importances = nil
  @rng = Random.new(@params[:random_seed])
end
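The constructor stores every hyperparameter in @params and derives the random generator from random_seed; when no seed is given, srand supplies one. A minimal sketch of this seeding pattern (build_rng is a hypothetical helper, not part of Rumale):

```ruby
# Hypothetical helper mirroring the seeding pattern above.
# An explicit seed yields a reproducible generator; a nil seed falls
# back to srand, which returns a fresh seed value.
def build_rng(random_seed = nil)
  random_seed ||= srand
  Random.new(random_seed)
end

build_rng(42).rand(1_000_000) == build_rng(42).rand(1_000_000)  # => true
```

Passing the same random_seed therefore makes feature subsampling reproducible across runs.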
Instance Attribute Details
#estimators ⇒ Array<GradientTreeRegressor> (readonly)
Return the set of estimators.
# File 'lib/rumale/ensemble/gradient_boosting_regressor.rb', line 31

def estimators
  @estimators
end
#feature_importances ⇒ Numo::DFloat (readonly)
Return the importance for each feature. The feature importances are calculated based on the numbers of times the feature is used for splitting.
# File 'lib/rumale/ensemble/gradient_boosting_regressor.rb', line 36

def feature_importances
  @feature_importances
end
#rng ⇒ Random (readonly)
Return the random generator used for random selection of feature indices.
# File 'lib/rumale/ensemble/gradient_boosting_regressor.rb', line 40

def rng
  @rng
end
Instance Method Details
#apply(x) ⇒ Numo::Int32
Return the index of the leaf that each sample reached.
# File 'lib/rumale/ensemble/gradient_boosting_regressor.rb', line 146

def apply(x)
  check_sample_array(x)
  n_outputs = @estimators.first.is_a?(Array) ? @estimators.size : 1
  leaf_ids = if n_outputs > 1
               Array.new(n_outputs) { |n| @estimators[n].map { |tree| tree.apply(x) } }
             else
               @estimators.map { |tree| tree.apply(x) }
             end
  Numo::Int32[*leaf_ids].transpose
end
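To make the leaf-index idea concrete, here is a toy sketch (plain Ruby, not Rumale code) of apply for a single one-split stump: each sample is routed to leaf 0 or 1 by the split threshold. In the real ensemble, each fitted tree contributes one such column of leaf ids per sample.

```ruby
# Toy stump: samples at or below the threshold reach leaf 0, others leaf 1.
threshold = 2.5
stump_apply = ->(samples) { samples.map { |v| v <= threshold ? 0 : 1 } }

stump_apply.call([1.0, 3.0, 2.5])  # => [0, 1, 0]
```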
#fit(x, y) ⇒ GradientBoostingRegressor
Fit the model with given training data.
# File 'lib/rumale/ensemble/gradient_boosting_regressor.rb', line 89

def fit(x, y)
  check_sample_array(x)
  check_tvalue_array(y)
  check_sample_tvalue_size(x, y)
  n_features = x.shape[1]
  @params[:max_features] = n_features if @params[:max_features].nil?
  @params[:max_features] = [[1, @params[:max_features]].max, n_features].min
  # train regressor.
  n_outputs = y.shape[1].nil? ? 1 : y.shape[1]
  @base_predictions = n_outputs > 1 ? y.mean(0) : y.mean
  @estimators = if n_outputs > 1
                  Array.new(n_outputs) do |n|
                    partial_fit(x, y[true, n], @base_predictions[n])
                  end
                else
                  partial_fit(x, y, @base_predictions)
                end
  # calculate feature importances.
  @feature_importances = Numo::DFloat.zeros(n_features)
  if n_outputs > 1
    n_outputs.times do |n|
      @estimators[n].each { |tree| @feature_importances += tree.feature_importances }
    end
  else
    @estimators.each { |tree| @feature_importances += tree.feature_importances }
  end
  self
end
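Before training, #fit normalizes max_features: nil falls back to all features, and any explicit value is clipped into the range 1..n_features. A standalone sketch of that clamping (resolve_max_features is a hypothetical helper, not Rumale code):

```ruby
# Mirrors the two max_features lines at the top of #fit:
# nil means "use all features"; other values are clamped to 1..n_features.
def resolve_max_features(max_features, n_features)
  max_features = n_features if max_features.nil?
  [[1, max_features].max, n_features].min
end

resolve_max_features(nil, 10)  # => 10
resolve_max_features(0, 10)    # => 1
resolve_max_features(99, 10)   # => 10
```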
#marshal_dump ⇒ Hash
Dump marshal data.
# File 'lib/rumale/ensemble/gradient_boosting_regressor.rb', line 159

def marshal_dump
  { params: @params,
    estimators: @estimators,
    base_predictions: @base_predictions,
    feature_importances: @feature_importances,
    rng: @rng }
end
#marshal_load(obj) ⇒ nil
Load marshal data.
# File 'lib/rumale/ensemble/gradient_boosting_regressor.rb', line 169

def marshal_load(obj)
  @params = obj[:params]
  @estimators = obj[:estimators]
  @base_predictions = obj[:base_predictions]
  @feature_importances = obj[:feature_importances]
  @rng = obj[:rng]
  nil
end
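These two hooks are what Ruby's built-in Marshal calls when serializing the regressor: marshal_dump supplies the state hash, and marshal_load restores it into a freshly allocated object (initialize is not run). A minimal round-trip sketch with a hypothetical class (not Rumale code):

```ruby
# Hypothetical class demonstrating the marshal_dump/marshal_load protocol.
class TinyModel
  attr_reader :params, :coef

  def initialize(params = {}, coef = nil)
    @params = params
    @coef = coef
  end

  # Called by Marshal.dump: return the state to serialize.
  def marshal_dump
    { params: @params, coef: @coef }
  end

  # Called by Marshal.load on a bare allocated object: restore the state.
  def marshal_load(obj)
    @params = obj[:params]
    @coef = obj[:coef]
    nil
  end
end

model = TinyModel.new({ n_estimators: 100 }, [0.5, -1.2])
copy = Marshal.load(Marshal.dump(model))
copy.params  # => { n_estimators: 100 }
```

The same round trip works for a fitted regressor, since every piece of learned state appears in the dumped hash.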
#predict(x) ⇒ Numo::DFloat
Predict values for samples.
# File 'lib/rumale/ensemble/gradient_boosting_regressor.rb', line 126

def predict(x)
  check_sample_array(x)
  n_samples = x.shape[0]
  n_outputs = @estimators.first.is_a?(Array) ? @estimators.size : 1
  if n_outputs > 1
    predicted = Numo::DFloat.ones(n_samples, n_outputs) * @base_predictions
    n_outputs.times do |n|
      @estimators[n].each { |tree| predicted[true, n] += tree.predict(x) }
    end
  else
    predicted = Numo::DFloat.ones(n_samples) * @base_predictions
    @estimators.each { |tree| predicted += tree.predict(x) }
  end
  predicted
end
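In the single-output case, the prediction is just the base value (the training-target mean) plus the sum of every tree's contribution. A toy single-sample sketch with lambdas standing in for fitted GradientTreeRegressor instances (not Rumale code; in Rumale the learning-rate scaling is already baked into the trained trees):

```ruby
base_prediction = 2.0
trees = [->(x) { 0.5 * x }, ->(x) { 0.25 * x }]  # stand-ins for fitted trees

# Additive model: start from the base value and fold in each tree's output,
# mirroring the single-output branch of #predict for one sample.
prediction = trees.reduce(base_prediction) { |pred, tree| pred + tree.call(4.0) }
prediction  # => 5.0
```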