Class: Google::Cloud::Dialogflow::V2::StreamingDetectIntentRequest

Inherits:
Object
Extended by:
Protobuf::MessageExts::ClassMethods
Includes:
Protobuf::MessageExts
Defined in:
proto_docs/google/cloud/dialogflow/v2/session.rb

Overview

The top-level message sent by the client to the Sessions.StreamingDetectIntent method.

Multiple request messages should be sent in order:

  1. The first message must contain session and query_input, plus optionally query_params. If the client wants to receive an audio response, it should also contain output_audio_config. The message must not contain input_audio.
  2. If query_input was set to query_input.audio_config, all subsequent messages must contain input_audio to continue with Speech recognition. If you decide to detect an intent from text input instead, after Speech recognition has already started, send a message with query_input.text.

     However, note that:

     * Dialogflow will bill you for the audio duration so far.
     * Dialogflow discards all Speech recognition results in favor of the input text.
     * Dialogflow will use the language code from the first message.

After you have sent all input, you must half-close or abort the request stream.
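
A minimal sketch of this ordering with the generated Ruby client, assuming LINEAR16 audio at 16 kHz, an illustrative project and session ID, and a hypothetical audio.raw file; when the request enumerator is exhausted, the client half-closes the stream:

require "google/cloud/dialogflow/v2"

# Illustrative identifiers -- substitute your own project and session IDs.
session = "projects/my-project/agent/sessions/my-session-id"

client = Google::Cloud::Dialogflow::V2::Sessions::Client.new

# Build the ordered request stream lazily: a config-only first message,
# followed by audio-only messages.
requests = Enumerator.new do |yielder|
  yielder << Google::Cloud::Dialogflow::V2::StreamingDetectIntentRequest.new(
    session: session,
    query_input: Google::Cloud::Dialogflow::V2::QueryInput.new(
      audio_config: Google::Cloud::Dialogflow::V2::InputAudioConfig.new(
        audio_encoding:    :AUDIO_ENCODING_LINEAR_16,
        sample_rate_hertz: 16_000,
        language_code:     "en-US"
      )
    )
  )

  # Subsequent messages carry only input_audio; the audio across all
  # messages must stay under 1 minute.
  File.open "audio.raw", "rb" do |audio|
    while (chunk = audio.read 4096)
      yielder << Google::Cloud::Dialogflow::V2::StreamingDetectIntentRequest.new(
        input_audio: chunk
      )
    end
  end
end

client.streaming_detect_intent(requests).each do |response|
  puts response.query_result.fulfillment_text if response.query_result
end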

Instance Attribute Summary

Instance Attribute Details

#input_audio ⇒ ::String

Returns The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.

Returns:

  • (::String)

    The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.



# File 'proto_docs/google/cloud/dialogflow/v2/session.rb', line 359

class StreamingDetectIntentRequest
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
end
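
As a rough guard for the 1 minute cap described above, a small sketch, assuming (hypothetically) uncompressed LINEAR16 audio at 16 kHz, i.e. 2 bytes per sample:

# 2 bytes/sample * 16_000 samples/s * 60 s = 1_920_000 bytes of LINEAR16 audio.
MAX_STREAM_AUDIO_BYTES = 2 * 16_000 * 60

# Returns true while the next chunk still fits within the 1 minute budget.
def within_audio_budget?(bytes_sent, next_chunk)
  bytes_sent + next_chunk.bytesize <= MAX_STREAM_AUDIO_BYTES
end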

#output_audio_config ⇒ ::Google::Cloud::Dialogflow::V2::OutputAudioConfig

Returns Instructs the speech synthesizer how to generate the output audio. If this field is not set and the agent-level speech synthesizer is not configured, no output audio is generated.

Returns:

  • (::Google::Cloud::Dialogflow::V2::OutputAudioConfig)

    Instructs the speech synthesizer how to generate the output audio. If this field is not set and the agent-level speech synthesizer is not configured, no output audio is generated.

# File 'proto_docs/google/cloud/dialogflow/v2/session.rb', line 359

class StreamingDetectIntentRequest
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
end

#output_audio_config_mask ⇒ ::Google::Protobuf::FieldMask

Returns Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.

If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.

Returns:

  • (::Google::Protobuf::FieldMask)

    Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.

    If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.

# File 'proto_docs/google/cloud/dialogflow/v2/session.rb', line 359

class StreamingDetectIntentRequest
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
end
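
A sketch of how output_audio_config and output_audio_config_mask interact, assuming an agent that already has agent-level speech synthesis configured; the mask path shown is illustrative and assumes paths are given relative to output_audio_config:

require "google/cloud/dialogflow/v2"
require "google/protobuf/field_mask_pb"

query_input = Google::Cloud::Dialogflow::V2::QueryInput.new(
  text: Google::Cloud::Dialogflow::V2::TextInput.new(
    text: "what are your opening hours?", language_code: "en-US"
  )
)

output_audio_config = Google::Cloud::Dialogflow::V2::OutputAudioConfig.new(
  audio_encoding: :OUTPUT_AUDIO_ENCODING_LINEAR_16,
  synthesize_speech_config: Google::Cloud::Dialogflow::V2::SynthesizeSpeechConfig.new(
    speaking_rate: 1.25
  )
)

request = Google::Cloud::Dialogflow::V2::StreamingDetectIntentRequest.new(
  session:             "projects/my-project/agent/sessions/my-session-id",
  query_input:         query_input,
  output_audio_config: output_audio_config,
  # Without the mask below, output_audio_config would replace the agent-level
  # config in its entirety; with it, only the listed settings are overridden.
  output_audio_config_mask: Google::Protobuf::FieldMask.new(
    paths: ["synthesize_speech_config.speaking_rate"]
  )
)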

#query_input ⇒ ::Google::Cloud::Dialogflow::V2::QueryInput

Returns Required. The input specification. It can be set to:

  1. an audio config which instructs the speech recognizer how to process the speech audio,

  2. a conversational query in the form of text, or

  3. an event that specifies which intent to trigger.

Returns:

  • (::Google::Cloud::Dialogflow::V2::QueryInput)

    Required. The input specification. It can be set to:

    1. an audio config which instructs the speech recognizer how to process the speech audio,

    2. a conversational query in the form of text, or

    3. an event that specifies which intent to trigger.



# File 'proto_docs/google/cloud/dialogflow/v2/session.rb', line 359

class StreamingDetectIntentRequest
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
end
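
A sketch of the three forms of input listed above, using illustrative text, event name, and audio settings; only one of them may be set in a given QueryInput:

require "google/cloud/dialogflow/v2"

# 1. An audio config for streaming speech recognition.
audio_input = Google::Cloud::Dialogflow::V2::QueryInput.new(
  audio_config: Google::Cloud::Dialogflow::V2::InputAudioConfig.new(
    audio_encoding:    :AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz: 16_000,
    language_code:     "en-US"
  )
)

# 2. A conversational query in the form of text.
text_input = Google::Cloud::Dialogflow::V2::QueryInput.new(
  text: Google::Cloud::Dialogflow::V2::TextInput.new(
    text: "book a table for two", language_code: "en-US"
  )
)

# 3. An event that specifies which intent to trigger.
event_input = Google::Cloud::Dialogflow::V2::QueryInput.new(
  event: Google::Cloud::Dialogflow::V2::EventInput.new(
    name: "WELCOME", language_code: "en-US"
  )
)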

#query_params ⇒ ::Google::Cloud::Dialogflow::V2::QueryParameters

Returns The parameters of this query.

Returns:

  • (::Google::Cloud::Dialogflow::V2::QueryParameters)

    The parameters of this query.

# File 'proto_docs/google/cloud/dialogflow/v2/session.rb', line 359

class StreamingDetectIntentRequest
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
end

#session ⇒ ::String

Returns Required. The name of the session the query is sent to. Format of the session name: projects/<Project ID>/agent/sessions/<Session ID>, or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume the default 'draft' environment. If User ID is not specified, we use "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters.

Returns:

  • (::String)

    Required. The name of the session the query is sent to. Format of the session name: projects/<Project ID>/agent/sessions/<Session ID>, or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume the default 'draft' environment. If User ID is not specified, we use "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters.



# File 'proto_docs/google/cloud/dialogflow/v2/session.rb', line 359

class StreamingDetectIntentRequest
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
end
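
A sketch of building both session name formats by plain string interpolation, with illustrative project, environment, and user values; SecureRandom.uuid yields a 36-character ID, which sits exactly at the documented limit:

require "securerandom"

project_id = "my-project"       # illustrative project ID
session_id = SecureRandom.uuid  # 36 characters, within the limit

# Default ('draft') environment, default ("-") user:
session = "projects/#{project_id}/agent/sessions/#{session_id}"

# Specific environment and a (preferably hashed) user ID under 36 characters:
user_id = "u-#{SecureRandom.hex 8}"
session_with_env =
  "projects/#{project_id}/agent/environments/my-env/users/#{user_id}/sessions/#{session_id}"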

#single_utterance ⇒ ::Boolean

Returns Please use InputAudioConfig.single_utterance instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer detects a single spoken utterance in the input audio, and recognition ceases when it detects that the voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.

Returns:

  • (::Boolean)

    Please use InputAudioConfig.single_utterance instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer detects a single spoken utterance in the input audio, and recognition ceases when it detects that the voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.



# File 'proto_docs/google/cloud/dialogflow/v2/session.rb', line 359

class StreamingDetectIntentRequest
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
end
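
Given the deprecation note above, a sketch of the recommended alternative: setting single_utterance on the InputAudioConfig inside query_input (the encoding and sample rate here are illustrative):

require "google/cloud/dialogflow/v2"

query_input = Google::Cloud::Dialogflow::V2::QueryInput.new(
  audio_config: Google::Cloud::Dialogflow::V2::InputAudioConfig.new(
    audio_encoding:    :AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz: 16_000,
    language_code:     "en-US",
    single_utterance:  true # recognition stops after one spoken utterance
  )
)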