Class: DSPy::Optimizers::GaussianProcess
- Inherits: Object
- Extended by: T::Sig
- Defined in: lib/dspy/optimizers/gaussian_process.rb
Overview
Pure Ruby Gaussian Process implementation for Bayesian optimization. No external LAPACK/BLAS dependencies are required.
Instance Method Summary
- #fit(x_train, y_train) ⇒ Object
- #initialize(length_scale: 1.0, signal_variance: 1.0, noise_variance: 1e-6) ⇒ GaussianProcess (constructor)
  A new instance of GaussianProcess.
- #predict(x_test, return_std: false) ⇒ Object
- #rbf_kernel(x1, x2) ⇒ Object
Constructor Details
#initialize(length_scale: 1.0, signal_variance: 1.0, noise_variance: 1e-6) ⇒ GaussianProcess
Returns a new instance of GaussianProcess.
# File 'lib/dspy/optimizers/gaussian_process.rb', line 15

def initialize(length_scale: 1.0, signal_variance: 1.0, noise_variance: 1e-6)
  @length_scale = length_scale
  @signal_variance = signal_variance
  @noise_variance = noise_variance
  @fitted = T.let(false, T::Boolean)
end
Instance Method Details
#fit(x_train, y_train) ⇒ Object
# File 'lib/dspy/optimizers/gaussian_process.rb', line 44

def fit(x_train, y_train)
  @x_train = x_train
  @y_train = Numo::DFloat[*y_train]

  # Compute kernel matrix
  k_matrix = rbf_kernel(x_train, x_train)

  # Add noise to diagonal for numerical stability
  n = k_matrix.shape[0]
  (0...n).each { |i| k_matrix[i, i] += @noise_variance }

  # Store inverted kernel matrix using simple LU decomposition
  @k_inv = matrix_inverse(k_matrix)
  @alpha = @k_inv.dot(@y_train)

  @fitted = true
end
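The algebra in #fit (kernel matrix, diagonal jitter, inversion, then α = K⁻¹y) can be reproduced with Ruby's standard-library Matrix class. This is an illustrative sketch with made-up data and hyperparameters fixed at their defaults, not the class's actual code path, which uses Numo and a private matrix_inverse helper:

```ruby
require 'matrix'

# Two 1-D training points and their targets (made-up data).
x_train = [[0.0], [1.0]]
y_train = [0.0, 1.0]
length_scale    = 1.0
signal_variance = 1.0
noise_variance  = 1e-6

# K[i, j] = sigma^2 * exp(-0.5 * d^2 / l^2), plus noise on the diagonal
# for numerical stability, as in #fit.
k = Matrix.build(2, 2) do |i, j|
  sqdist = x_train[i].zip(x_train[j]).sum { |a, b| (a - b)**2 }
  val = signal_variance * Math.exp(-0.5 * sqdist / length_scale**2)
  i == j ? val + noise_variance : val
end

k_inv = k.inverse
alpha = k_inv * Vector[*y_train]  # weights reused later for the predictive mean
```

Because α = K⁻¹y, multiplying K back onto α recovers the training targets (up to floating-point error), which is a quick sanity check on the inversion.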
#predict(x_test, return_std: false) ⇒ Object
# File 'lib/dspy/optimizers/gaussian_process.rb', line 63

def predict(x_test, return_std: false)
  raise "Gaussian Process not fitted" unless @fitted

  # Kernel between training and test points
  k_star = rbf_kernel(T.must(@x_train), x_test)

  # Predictive mean
  mean = k_star.transpose.dot(@alpha)

  return mean unless return_std

  # Predictive variance (simplified for small matrices)
  k_star_star = rbf_kernel(x_test, x_test)
  var_matrix = k_star_star - k_star.transpose.dot(@k_inv).dot(k_star)
  var = var_matrix.diagonal

  # Ensure positive variance (element-wise maximum)
  var = var.map { |v| [v, 1e-12].max }
  std = Numo::NMath.sqrt(var)

  [mean, std]
end
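The predictive equations used in #predict (mean = k*ᵀα, variance = k** − k*ᵀK⁻¹k*) can likewise be sketched with stdlib Matrix on a tiny made-up dataset. At a training point with near-zero noise, the mean should recover the training target and the variance should collapse toward zero:

```ruby
require 'matrix'

# Scalar RBF kernel with illustrative hyperparameters (l = sigma^2 = 1).
kern = ->(a, b) { Math.exp(-0.5 * a.zip(b).sum { |p, q| (p - q)**2 }) }

x_train = [[0.0], [2.0]]
y_train = Vector[1.0, -1.0]

# Fit: kernel matrix with diagonal jitter, inverse, and alpha weights.
k = Matrix.build(2, 2) { |i, j| kern.(x_train[i], x_train[j]) + (i == j ? 1e-6 : 0.0) }
k_inv = k.inverse
alpha = k_inv * y_train

x_test = [0.0]  # coincides with the first training point
k_star = Vector[*x_train.map { |x| kern.(x, x_test) }]

mean = k_star.inner_product(alpha)                        # predictive mean k*^T alpha
var  = kern.(x_test, x_test) - k_star.inner_product(k_inv * k_star)
std  = Math.sqrt([var, 1e-12].max)                        # clamp, as the class does
```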
#rbf_kernel(x1, x2) ⇒ Object
# File 'lib/dspy/optimizers/gaussian_process.rb', line 23

def rbf_kernel(x1, x2)
  # Convert to Numo arrays
  x1_array = Numo::DFloat[*x1]
  x2_array = Numo::DFloat[*x2]

  # Compute squared Euclidean distances manually
  n1, n2 = x1_array.shape[0], x2_array.shape[0]
  sqdist = Numo::DFloat.zeros(n1, n2)

  (0...n1).each do |i|
    (0...n2).each do |j|
      diff = x1_array[i, true] - x2_array[j, true]
      sqdist[i, j] = (diff ** 2).sum
    end
  end

  # RBF kernel: σ² * exp(-0.5 * d² / ℓ²)
  @signal_variance * Numo::NMath.exp(-0.5 * sqdist / (@length_scale ** 2))
end
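The kernel formula σ² · exp(−0.5 · d² / ℓ²) can be checked with a dependency-free sketch. The helper `rbf` below is illustrative (not part of the class) and uses the same default hyperparameters; note that k(x, x) equals signal_variance, since the squared distance is zero:

```ruby
# Scalar RBF kernel mirroring the formula above (illustrative helper,
# hyperparameters match the class defaults).
def rbf(x1, x2, signal_variance: 1.0, length_scale: 1.0)
  sqdist = x1.zip(x2).sum { |a, b| (a - b)**2 }
  signal_variance * Math.exp(-0.5 * sqdist / length_scale**2)
end

rbf([0.0, 0.0], [0.0, 0.0])  # identical points => kernel equals signal_variance (1.0)
rbf([0.0], [1.0])            # squared distance 1 => exp(-0.5), about 0.6065
```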