Method: Aws::Rekognition::Client#detect_moderation_labels

Defined in:
lib/aws-sdk-rekognition/client.rb

#detect_moderation_labels(params = {}) ⇒ Types::DetectModerationLabelsResponse

Detects unsafe content in a specified JPEG or PNG format image. Use `DetectModerationLabels` to moderate images depending on your requirements. For example, you might want to filter images that contain nudity, but not images containing suggestive content.

To filter images, use the labels returned by `DetectModerationLabels` to determine which types of content are appropriate.

For information about moderation labels, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.

You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.

You can specify an adapter to use when retrieving label predictions by providing a `ProjectVersionArn` to the `ProjectVersion` argument.
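
A minimal sketch of both ways to supply the input image; the region, local file name, bucket, and object key here are placeholder assumptions, not values from this reference:


require 'aws-sdk-rekognition'

client = Aws::Rekognition::Client.new(region: 'us-east-1')

# Pass the image as raw bytes read from a local file; the SDK handles encoding.
resp = client.detect_moderation_labels({
  image: { bytes: File.binread('photo.jpg') }, # hypothetical local file
})

# Or reference an object already stored in an Amazon S3 bucket.
resp = client.detect_moderation_labels({
  image: {
    s3_object: {
      bucket: 'amzn-example-bucket', # hypothetical bucket
      name: 'uploads/photo.png',     # hypothetical object key
    },
  },
})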

Examples:

Request syntax with placeholder values


resp = client.detect_moderation_labels({
  image: { # required
    bytes: "data",
    s3_object: {
      bucket: "S3Bucket",
      name: "S3ObjectName",
      version: "S3ObjectVersion",
    },
  },
  min_confidence: 1.0,
  human_loop_config: {
    human_loop_name: "HumanLoopName", # required
    flow_definition_arn: "FlowDefinitionArn", # required
    data_attributes: {
      content_classifiers: ["FreeOfPersonallyIdentifiableInformation"], # accepts FreeOfPersonallyIdentifiableInformation, FreeOfAdultContent
    },
  },
  project_version: "ProjectVersionId",
})

Response structure


resp.moderation_labels #=> Array
resp.moderation_labels[0].confidence #=> Float
resp.moderation_labels[0].name #=> String
resp.moderation_labels[0].parent_name #=> String
resp.moderation_labels[0].taxonomy_level #=> Integer
resp.moderation_model_version #=> String
resp.human_loop_activation_output.human_loop_arn #=> String
resp.human_loop_activation_output.human_loop_activation_reasons #=> Array
resp.human_loop_activation_output.human_loop_activation_reasons[0] #=> String
resp.human_loop_activation_output.human_loop_activation_conditions_evaluation_results #=> String
resp.project_version #=> String
resp.content_types #=> Array
resp.content_types[0].confidence #=> Float
resp.content_types[0].name #=> String
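
As a hedged illustration of the filtering described above (assuming a client created as in the earlier sketch; the bucket, object key, and category names are examples, not values from this reference), the returned labels could be checked against a blocklist:


blocked_categories = ['Explicit Nudity', 'Violence'] # example top-level categories

resp = client.detect_moderation_labels({
  image: { s3_object: { bucket: 'amzn-example-bucket', name: 'uploads/photo.png' } },
  min_confidence: 60.0,
})

# Flag the image if any returned label, or its parent category, is blocked.
flagged = resp.moderation_labels.any? do |label|
  blocked_categories.include?(label.name) || blocked_categories.include?(label.parent_name)
end

puts(flagged ? 'Image requires review' : 'Image passed moderation')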

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :image (required, Types::Image)

    The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

    If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the `Bytes` field. For more information, see Images in the Amazon Rekognition developer guide.

  • :min_confidence (Float)

    Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn’t return any labels with a confidence level lower than this specified value.

    If you don’t specify `MinConfidence`, the operation returns labels with confidence values greater than or equal to 50 percent.

  • :human_loop_config (Types::HumanLoopConfig)

    Sets up the configuration for human evaluation, including the FlowDefinition the image will be sent to. See the sketch after this options list.

  • :project_version (String)

    Identifier for the custom adapter. Expects the ProjectVersionArn as a value. Use the CreateProject or CreateProjectVersion APIs to create a custom adapter.
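
A sketch combining MinConfidence with a human-review configuration, assuming a client created as in the earlier sketch; the bucket, object key, loop name, and flow definition ARN are hypothetical:


resp = client.detect_moderation_labels({
  image: { s3_object: { bucket: 'amzn-example-bucket', name: 'uploads/photo.png' } },
  min_confidence: 75.0, # only labels at or above 75% confidence are returned
  human_loop_config: {
    human_loop_name: 'moderation-review-loop',                                               # hypothetical
    flow_definition_arn: 'arn:aws:sagemaker:us-east-1:111122223333:flow-definition/example', # hypothetical
    data_attributes: {
      content_classifiers: ['FreeOfAdultContent'],
    },
  },
})

# The activation output is populated only when a human loop was triggered.
if resp.human_loop_activation_output && resp.human_loop_activation_output.human_loop_arn
  puts "Sent to human review: #{resp.human_loop_activation_output.human_loop_arn}"
end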

Returns:

  • (Types::DetectModerationLabelsResponse)

# File 'lib/aws-sdk-rekognition/client.rb', line 3386

def detect_moderation_labels(params = {}, options = {})
  req = build_request(:detect_moderation_labels, params)
  req.send_request(options)
end