Class: Aws::TranscribeStreamingService::Types::StartMedicalStreamTranscriptionRequest

Inherits: Struct
  • Object
Includes: Structure
Defined in: lib/aws-sdk-transcribestreamingservice/types.rb

Overview

Constant Summary

SENSITIVE =
[]

Instance Attribute Summary

Instance Attribute Details

#audio_stream ⇒ Types::AudioStream

An encoded stream of audio blobs. Audio streams are encoded as either HTTP/2 or WebSocket data frames.

For more information, see [Transcribing streaming audio][1].

[1]: docs.aws.amazon.com/transcribe/latest/dg/streaming.html

Returns:

  • (Types::AudioStream)

# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end
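
For context, here is a minimal sketch of feeding this audio stream through the gem's async client. It assumes the SDK's generated event-stream classes and method names (Aws::TranscribeStreamingService::AsyncClient, EventStreams::AudioStream, EventStreams::MedicalTranscriptResultStream, signal_audio_event_event, on_transcript_event_event) and a hypothetical raw PCM file named visit.raw; verify the names against your installed gem version.

  require 'aws-sdk-transcribestreamingservice'

  # Sketch only: event-stream class and method names follow the SDK's
  # generated conventions; check them against your SDK version.
  client = Aws::TranscribeStreamingService::AsyncClient.new(region: 'us-east-1')

  input_stream  = Aws::TranscribeStreamingService::EventStreams::AudioStream.new
  output_stream = Aws::TranscribeStreamingService::EventStreams::MedicalTranscriptResultStream.new

  # Print each transcript as results arrive on the output stream.
  output_stream.on_transcript_event_event do |event|
    event.transcript.results.each do |result|
      puts result.alternatives.first.transcript unless result.alternatives.empty?
    end
  end

  async_resp = client.start_medical_stream_transcription(
    language_code: 'en-US',
    media_sample_rate_hertz: 16_000,
    media_encoding: 'pcm',
    specialty: 'PRIMARYCARE',
    type: 'CONVERSATION',
    input_event_stream_handler: input_stream,
    output_event_stream_handler: output_stream
  )

  # Send the audio as a sequence of audio-event blobs, then close the stream.
  File.open('visit.raw', 'rb') do |f|
    while (chunk = f.read(3200))
      input_stream.signal_audio_event_event(audio_chunk: chunk)
    end
  end
  input_stream.signal_end_stream

  async_resp.wait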

#content_identification_type ⇒ String

Labels all personal health information (PHI) identified in your transcript.

Content identification is performed at the segment level; PHI is flagged upon complete transcription of an audio segment.

For more information, see [Identifying personal health information (PHI) in a transcription][1].

[1]: docs.aws.amazon.com/transcribe/latest/dg/phi-id.html

Returns:

  • (String)


# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end
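
For illustration, a sketch of the request parameters that ask for PHI labeling; 'PHI' is the content identification value assumed here, and the flagged entities are expected to appear on the transcript alternatives in the output.

  # Sketch: request PHI flagging for each fully transcribed segment.
  params = {
    language_code: 'en-US',
    media_sample_rate_hertz: 16_000,
    media_encoding: 'pcm',
    specialty: 'PRIMARYCARE',
    type: 'CONVERSATION',
    content_identification_type: 'PHI'
  }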

#enable_channel_identification ⇒ Boolean

Enables channel identification in multi-channel audio.

Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.

If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript is not separated by channel.

For more information, see [Transcribing multi-channel audio][1].

[1]: docs.aws.amazon.com/transcribe/latest/dg/channel-id.html

Returns:

  • (Boolean)


# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end
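
As a sketch, the related parameters for a two-channel stream (for example, clinician on one channel and patient on the other); the channel count is supplied alongside channel identification.

  # Sketch: transcribe each channel of a two-channel stream independently.
  params = {
    language_code: 'en-US',
    media_sample_rate_hertz: 16_000,
    media_encoding: 'pcm',
    specialty: 'PRIMARYCARE',
    type: 'CONVERSATION',
    enable_channel_identification: true,
    number_of_channels: 2
  }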

#language_code ⇒ String

Specify the language code that represents the language spoken in your audio.

Amazon Transcribe Medical only supports US English (`en-US`).

Returns:

  • (String)


# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end

#media_encoding ⇒ String

Specify the encoding used for the input audio. Supported formats are:

  • FLAC

  • OPUS-encoded audio in an Ogg container

  • PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

For more information, see [Media formats][1].

[1]: docs.aws.amazon.com/transcribe/latest/dg/how-input.html#how-input-audio

Returns:

  • (String)


# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end

#media_sample_rate_hertz ⇒ Integer

The sample rate of the input audio (in hertz). Amazon Transcribe Medical supports a range from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.

Returns:

  • (Integer)


# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end
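
For signed 16-bit PCM, the sample rate fixes how many bytes of audio correspond to a given duration; the sketch below is a plain calculation (not an SDK call) that sizes audio-event blobs to roughly 100 ms of mono audio at 16,000 Hz.

  # Sketch: chunk sizing for 16-bit (2-byte) mono PCM at 16,000 Hz.
  sample_rate_hz   = 16_000
  bytes_per_sample = 2
  channels         = 1
  chunk_ms         = 100

  chunk_bytes = sample_rate_hz * bytes_per_sample * channels * chunk_ms / 1000
  # => 3200 bytes of raw PCM per 100 ms audio blob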

#number_of_channels ⇒ Integer

Specify the number of channels in your audio stream. Up to two channels are supported.

Returns:

  • (Integer)


# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end

#session_id ⇒ String

Specify a name for your transcription session. If you don’t include this parameter in your request, Amazon Transcribe Medical generates an ID and returns it in the response.

You can use a session ID to retry a streaming session.

Returns:

  • (String)


# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end
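
Because the same session ID can be reused to retry a stream, one approach is to generate it client-side up front and pass the identical value on retry; a sketch, assuming a UUID-shaped value is acceptable for this parameter.

  require 'securerandom'

  # Sketch: generate the session ID once so a retry can reuse it.
  session_id = SecureRandom.uuid

  params = {
    language_code: 'en-US',
    media_sample_rate_hertz: 16_000,
    media_encoding: 'pcm',
    specialty: 'PRIMARYCARE',
    type: 'DICTATION',
    session_id: session_id
  }
  # On a retriable failure, start a new stream with the same session_id.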

#show_speaker_label ⇒ Boolean

Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

For more information, see [Partitioning speakers (diarization)][1].

[1]: docs.aws.amazon.com/transcribe/latest/dg/diarization.html

Returns:

  • (Boolean)


# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end
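
When speaker partitioning is enabled, a sketch of reading the label from the output events, reusing the output_stream handler from the audio_stream example above and assuming the item-level speaker field exposed by this gem's medical result types.

  # Sketch: with show_speaker_label: true, read the speaker label per item.
  output_stream.on_transcript_event_event do |event|
    event.transcript.results.each do |result|
      result.alternatives.each do |alt|
        alt.items.each do |item|
          puts "#{item.speaker}: #{item.content}" if item.speaker
        end
      end
    end
  end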

#specialty ⇒ String

Specify the medical specialty contained in your audio.

Returns:

  • (String)


# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end

#type ⇒ String

Specify the type of input audio. For example, choose `DICTATION` for a provider dictating patient notes and `CONVERSATION` for a dialogue between a patient and a medical professional.

Returns:

  • (String)


# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end

#vocabulary_name ⇒ String

Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.

Returns:

  • (String)


# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end