Class: JsonWaveform
- Inherits: Object
- Defined in:
  lib/json-waveform.rb,
  lib/json-waveform/version.rb
Defined Under Namespace
Classes: ArgumentError, RuntimeError
Constant Summary
- DEFAULT_OPTIONS = { :method => :peak, :samples => 1800, :amplitude => 1 }
- VERSION = "0.2.0".freeze
Class Method Summary
- .generate(source, options = {}) ⇒ Object
Generate a Waveform JSON file from the given filename with the given options.
Class Method Details
.generate(source, options = {}) ⇒ Object
Generate a Waveform JSON file from the given filename with the given options.
Available options (all optional) are:

:method => The method used to read sample frames. Available methods are
:peak and :rms. :peak is probably what you're used to seeing: it uses the
maximum amplitude per sample to generate the waveform, so the waveform
looks more dynamic. :rms gives a more fluid waveform and probably reflects
what you hear more accurately, but is typically less pronounced (see the
sketch after this list).
Can be :rms or :peak.
Default is :peak.

:samples => The number of samples wanted.
Default is 1800.

:amplitude => The amplitude of the final values.
Default is 1.

:auto_samples => Milliseconds per sample. When set, this overrides the
:samples value of the final waveform based on the length of the audio file.
Example:
100 => 1 sample per 100 msec; a one-minute audio file will result in 600 samples.
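The gem's private frames helper is not shown in this listing, so as a rough
sketch of how the two methods typically differ, assume "chunk" is an array of
raw sample values (-1.0..1.0) for one channel; the method and variable names
below are illustrative, not the gem's internals:

  # :peak -- the maximum absolute amplitude in the chunk.
  def peak(chunk)
    chunk.map(&:abs).max
  end

  # :rms -- the root mean square: sqrt of the mean of the squared amplitudes.
  def rms(chunk)
    Math.sqrt(chunk.inject(0.0) { |sum, s| sum + s * s } / chunk.size)
  end

  chunk = [0.1, -0.9, 0.2, 0.4]
  peak(chunk)  # => 0.9
  rms(chunk)   # => ~0.505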
Example:
JsonWaveform.generate("Kickstart My Heart.wav")
JsonWaveform.generate("Kickstart My Heart.wav", :method => :rms)
# File 'lib/json-waveform.rb', line 47

def generate(source, options = {})
  options = DEFAULT_OPTIONS.merge(options)

  raise ArgumentError.new("No source audio filename given, must be an existing sound file.") unless source
  raise RuntimeError.new("Source audio file '#{source}' not found.") unless File.exist?(source)

  if options[:auto_samples]
    RubyAudio::Sound.open(source) do |audio|
      options[:samples] = (audio.info.length * 1000 / options[:auto_samples].to_i).ceil
    end
  end

  # Frames gives the amplitudes for each channel, for our waveform we're
  # saying the "visual" amplitude is the average of the amplitude across all
  # the channels. This might be a little weird w/ the "peak" method if the
  # frames are very wide (i.e. the image width is very small) -- I *think*
  # the larger the frames are, the more "peaky" the waveform should get,
  # perhaps to the point of inaccurately reflecting the actual sound.
  samples = frames(source, options[:samples], options[:method]).collect do |frame|
    frame.inject(0.0) { |sum, peak| sum + peak } / frame.size
  end

  normalize(samples, options)
end
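The normalize helper called at the end is private and not shown in this
listing; a minimal sketch of what it plausibly does, assuming it simply
scales the averaged samples so that the loudest value maps to the :amplitude
option (the gem's actual implementation may differ):

  def normalize(samples, options)
    max = samples.max
    return samples if max.nil? || max.zero?
    samples.map { |s| s / max * options[:amplitude] }
  end

With the options documented above, a call such as
JsonWaveform.generate("Kickstart My Heart.wav", :auto_samples => 100) would
then produce roughly one averaged, normalized value per 100 msec of audio.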