Module: NanoGPT::Device

Defined in:
lib/nano_gpt/device.rb

Overview

Device detection and management

Class Method Summary

  .auto ⇒ Object
    Auto-detect the best available device (priority: CUDA > MPS > CPU).

  .cuda_available? ⇒ Boolean
    Check if CUDA is available.

  .gpu?(device) ⇒ Boolean
    Check if device is GPU (CUDA or MPS).

  .info ⇒ Object
    Print device info.

  .mps_available? ⇒ Boolean
    Check if MPS (Metal Performance Shaders) is available.

  .type(device) ⇒ Object
    Get device type string (for optimizer configuration, etc.).

Class Method Details

.auto ⇒ Object

Auto-detect the best available device. Priority: CUDA > MPS > CPU.



# File 'lib/nano_gpt/device.rb', line 9

def auto
  return "cuda" if cuda_available?
  return "mps" if mps_available?

  "cpu"
end
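
A minimal usage sketch (not part of the generated docs): the returned string can be passed to torch.rb wherever a device name is accepted; the tensor below is purely illustrative.

device = NanoGPT::Device.auto
# => "cuda", "mps", or "cpu" depending on the host

# Hypothetical: place a tensor (or a model) on the selected device.
x = Torch.tensor([1.0, 2.0, 3.0]).to(device)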

.cuda_available? ⇒ Boolean

Check if CUDA is available

Returns:

  • (Boolean)


# File 'lib/nano_gpt/device.rb', line 17

def cuda_available?
  Torch::CUDA.available?
rescue StandardError
  false
end

.gpu?(device) ⇒ Boolean

Check if device is GPU (CUDA or MPS)

Returns:

  • (Boolean)


# File 'lib/nano_gpt/device.rb', line 43

def gpu?(device)
  %w[cuda mps].include?(type(device))
end
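
Usage sketch. Because the argument is normalized through .type, suffixed device strings also count as GPU:

device = NanoGPT::Device.auto        # e.g. "mps" on Apple Silicon
NanoGPT::Device.gpu?(device)         # => true for "cuda" or "mps", false for "cpu"
NanoGPT::Device.gpu?("cuda:0")       # => true, via the .type normalization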

.info ⇒ Object

Print device info



# File 'lib/nano_gpt/device.rb', line 48

def info
  puts "Device detection:"
  puts "  CUDA available: #{cuda_available?}"
  puts "  MPS available: #{mps_available?}"
  puts "  Auto-selected: #{auto}"
end
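
Calling this from an IRB session prints a report like the following (the values shown are illustrative, not guaranteed for any particular machine):

NanoGPT::Device.info
# Device detection:
#   CUDA available: false
#   MPS available: true
#   Auto-selected: mps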

.mps_available? ⇒ Boolean

Check if MPS (Metal Performance Shaders) is available. MPS is the GPU acceleration backend on Apple Silicon.

Returns:

  • (Boolean)


# File 'lib/nano_gpt/device.rb', line 25

def mps_available?
  # Try to create a tensor on MPS device
  Torch.tensor([1.0], device: "mps")
  true
rescue StandardError
  false
end
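
A hedged sketch of how the availability checks might feed a precision choice in calling code; the dtype mapping below is an assumption in the spirit of typical nanoGPT-style setups, not something this module prescribes.

# Assumption: prefer bfloat16 on CUDA, float16 on MPS, float32 on CPU.
dtype =
  if NanoGPT::Device.cuda_available?
    :bfloat16
  elsif NanoGPT::Device.mps_available?
    :float16
  else
    :float32
  end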

.type(device) ⇒ Object

Get device type string (for optimizer configuration, etc.)



# File 'lib/nano_gpt/device.rb', line 34

def type(device)
  case device.to_s
  when /cuda/ then "cuda"
  when /mps/ then "mps"
  else "cpu"
  end
end
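
Usage sketch showing the normalization this method performs; the fused-optimizer decision at the end is an assumption about how a trainer might use the result, mirroring common nanoGPT setups rather than anything defined in this module.

NanoGPT::Device.type("cuda:0")  # => "cuda"
NanoGPT::Device.type("mps")     # => "mps"
NanoGPT::Device.type("cpu")     # => "cpu"

# Assumption: key optimizer options off the normalized type,
# e.g. only enable a fused optimizer implementation on CUDA.
use_fused = NanoGPT::Device.type(NanoGPT::Device.auto) == "cuda"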