Class: Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InputAudioConfig

Inherits: Object
Includes: Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/dialogflow_v2beta1/classes.rb,
lib/google/apis/dialogflow_v2beta1/representations.rb

Overview

Instructs the speech recognizer on how to process the audio content.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudDialogflowV2beta1InputAudioConfig

Returns a new instance of GoogleCloudDialogflowV2beta1InputAudioConfig.



# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17199

def initialize(**args)
   update!(**args)
end
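
As a sketch of typical construction (the encoding, sample rate, and language values below are illustrative assumptions for 16 kHz linear PCM English audio, not defaults):

require 'google/apis/dialogflow_v2beta1'

# Illustrative sketch: audio_encoding, sample_rate_hertz, and language_code
# are the required fields; any other attribute can be passed as a keyword
# argument as well.
config = Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InputAudioConfig.new(
  audio_encoding: 'AUDIO_ENCODING_LINEAR_16',
  sample_rate_hertz: 16_000,
  language_code: 'en-US',
  enable_automatic_punctuation: true
)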

Instance Attribute Details

#audio_encoding ⇒ String

Required. Audio encoding of the audio content to process. Corresponds to the JSON property audioEncoding

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17072

def audio_encoding
  @audio_encoding
end

#barge_in_config ⇒ Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1BargeInConfig

Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately get ready for receiving the responses for the current request. The barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled into two phases:

* No barge-in phase: goes first, and during it speech detection should not be carried out.

* Barge-in phase: follows the no barge-in phase; during it the API starts speech detection and may inform the client that an utterance has been detected. Note that a no-speech event is not expected in this phase.

The client provides this configuration in terms of the durations of those two phases. The durations are measured as audio length from the start of the input audio. The flow goes like below:

                                      --> Time

    without speech detection  | utterance only | utterance or no-speech event
                              |                |
              +-------------+ | +------------+ | +---------------+
    ----------+ no barge-in +-|-+  barge-in  +-|-+ normal period +-----------
              +-------------+ | +------------+ | +---------------+

A no-speech event is a response with END_OF_UTTERANCE without any transcript following up. Corresponds to the JSON property bargeInConfig



# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17095

def barge_in_config
  @barge_in_config
end
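
Continuing the construction sketch above, a hedged example of configuring the two phases; the BargeInConfig field names (no_barge_in_duration, total_duration) and the duration string format are assumptions based on the BargeInConfig message:

# Sketch: suppress speech detection for the first second of playback, then
# allow barge-in until five seconds in (total_duration spans both phases).
config.barge_in_config =
  Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1BargeInConfig.new(
    no_barge_in_duration: '1s',
    total_duration: '5s'
  )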

#default_no_speech_timeout ⇒ String

If set, use this no-speech timeout when the agent does not provide a no-speech timeout itself. Corresponds to the JSON property defaultNoSpeechTimeout

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17101

def default_no_speech_timeout
  @default_no_speech_timeout
end

#disable_no_speech_recognized_event ⇒ Boolean Also known as: disable_no_speech_recognized_event?

Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger NO_SPEECH_RECOGNIZED event to the Dialogflow agent. Corresponds to the JSON property disableNoSpeechRecognizedEvent

Returns:

  • (Boolean)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17108

def disable_no_speech_recognized_event
  @disable_no_speech_recognized_event
end

#enable_automatic_punctuation ⇒ Boolean Also known as: enable_automatic_punctuation?

Enable automatic punctuation option at the speech backend. Corresponds to the JSON property enableAutomaticPunctuation

Returns:

  • (Boolean)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17114

def enable_automatic_punctuation
  @enable_automatic_punctuation
end

#enable_word_info ⇒ Boolean Also known as: enable_word_info?

If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information. Corresponds to the JSON property enableWordInfo

Returns:

  • (Boolean)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17123

def enable_word_info
  @enable_word_info
end

#language_code ⇒ String

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. If not set, the language is inferred from the ConversationProfile.stt_config. Corresponds to the JSON property languageCode

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17134

def language_code
  @language_code
end

#model ⇒ String

Optional. Which Speech model to select for the given request. For more information, see Speech models. Corresponds to the JSON property model

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17141

def model
  @model
end

#model_variant ⇒ String

Which variant of the Speech model to use. Corresponds to the JSON property modelVariant

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17146

def model_variant
  @model_variant
end
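
Continuing the sketch above, and assuming the standard ModelVariant values (USE_BEST_AVAILABLE, USE_STANDARD, USE_ENHANCED):

# Sketch: request the phone_call Speech model and prefer its enhanced
# variant. Both values are illustrative, taken from the Speech models and
# ModelVariant documentation.
config.model = 'phone_call'
config.model_variant = 'USE_ENHANCED'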

#opt_out_conformer_model_migration ⇒ Boolean Also known as: opt_out_conformer_model_migration?

If true, the request will opt out of the STT conformer model migration. This field will be deprecated once force migration takes place in June 2024. Please refer to Dialogflow ES Speech model migration. Corresponds to the JSON property optOutConformerModelMigration

Returns:

  • (Boolean)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17154

def opt_out_conformer_model_migration
  @opt_out_conformer_model_migration
end

#phrase_hints ⇒ Array<String>

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext. Corresponds to the JSON property phraseHints

Returns:

  • (Array<String>)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17165

def phrase_hints
  @phrase_hints
end
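
Since phrase_hints is deprecated, a sketch of the equivalent speech_contexts form (the phrases and the boost value are illustrative assumptions):

# Deprecated form: a flat list of hint strings.
config.phrase_hints = ['Dialogflow', 'agent transfer']

# Preferred form: the same phrases wrapped in a SpeechContext, which also
# accepts an optional boost.
config.speech_contexts = [
  Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1SpeechContext.new(
    phrases: ['Dialogflow', 'agent transfer'],
    boost: 10.0
  )
]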

#phrase_sets ⇒ Array<String>

A collection of phrase set resources to use for speech adaptation. Corresponds to the JSON property phraseSets

Returns:

  • (Array<String>)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17170

def phrase_sets
  @phrase_sets
end

#sample_rate_hertz ⇒ Fixnum

Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details. Corresponds to the JSON property sampleRateHertz

Returns:

  • (Fixnum)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17177

def sample_rate_hertz
  @sample_rate_hertz
end

#single_utterance ⇒ Boolean Also known as: single_utterance?

If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance. Corresponds to the JSON property singleUtterance

Returns:

  • (Boolean)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17189

def single_utterance
  @single_utterance
end
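
For streaming methods, a one-line sketch continuing the construction example above:

# Sketch: close out recognition after a single detected utterance; per the
# note above, this takes precedence over
# StreamingDetectIntentRequest.single_utterance.
config.single_utterance = true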

#speech_contexts ⇒ Array<Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1SpeechContext>

Context information to assist speech recognition. See the Cloud Speech documentation for more details. Corresponds to the JSON property speechContexts



# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17197

def speech_contexts
  @speech_contexts
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 17204

def update!(**args)
  @audio_encoding = args[:audio_encoding] if args.key?(:audio_encoding)
  @barge_in_config = args[:barge_in_config] if args.key?(:barge_in_config)
  @default_no_speech_timeout = args[:default_no_speech_timeout] if args.key?(:default_no_speech_timeout)
  @disable_no_speech_recognized_event = args[:disable_no_speech_recognized_event] if args.key?(:disable_no_speech_recognized_event)
  @enable_automatic_punctuation = args[:enable_automatic_punctuation] if args.key?(:enable_automatic_punctuation)
  @enable_word_info = args[:enable_word_info] if args.key?(:enable_word_info)
  @language_code = args[:language_code] if args.key?(:language_code)
  @model = args[:model] if args.key?(:model)
  @model_variant = args[:model_variant] if args.key?(:model_variant)
  @opt_out_conformer_model_migration = args[:opt_out_conformer_model_migration] if args.key?(:opt_out_conformer_model_migration)
  @phrase_hints = args[:phrase_hints] if args.key?(:phrase_hints)
  @phrase_sets = args[:phrase_sets] if args.key?(:phrase_sets)
  @sample_rate_hertz = args[:sample_rate_hertz] if args.key?(:sample_rate_hertz)
  @single_utterance = args[:single_utterance] if args.key?(:single_utterance)
  @speech_contexts = args[:speech_contexts] if args.key?(:speech_contexts)
end
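
A sketch of in-place updates, continuing the construction example above; update! only touches the keys present in args, so other attributes keep their current values:

# Switch the same config to 8 kHz French audio without rebuilding it.
config.update!(language_code: 'fr-FR', sample_rate_hertz: 8_000)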