Class: Aws::S3::Object
- Inherits: Object
- Extended by: Deprecations
- Defined in: lib/aws-sdk-s3/customizations/object.rb,
  lib/aws-sdk-s3/object.rb
Defined Under Namespace
Classes: Collection
Read-Only Attributes
- #accept_ranges ⇒ String
  Indicates that a range of bytes was specified.
- #archive_status ⇒ String
  The archive state of the head object.
- #bucket_key_enabled ⇒ Boolean
  Indicates whether the object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
- #bucket_name ⇒ String
- #cache_control ⇒ String
  Specifies caching behavior along the request/reply chain.
- #checksum_crc32 ⇒ String
  The base64-encoded, 32-bit CRC32 checksum of the object.
- #checksum_crc32c ⇒ String
  The base64-encoded, 32-bit CRC32C checksum of the object.
- #checksum_sha1 ⇒ String
  The base64-encoded, 160-bit SHA-1 digest of the object.
- #checksum_sha256 ⇒ String
  The base64-encoded, 256-bit SHA-256 digest of the object.
- #content_disposition ⇒ String
  Specifies presentational information for the object.
- #content_encoding ⇒ String
  Indicates what content encodings have been applied to the object, and thus what decoding mechanisms must be applied to obtain the media type referenced by the Content-Type header field.
- #content_language ⇒ String
  The language the content is in.
- #content_length ⇒ Integer
  Size of the body in bytes.
- #content_type ⇒ String
  A standard MIME type describing the format of the object data.
- #delete_marker ⇒ Boolean
  Specifies whether the object retrieved was (true) or was not (false) a delete marker.
- #etag ⇒ String
  An entity tag (ETag) is an opaque identifier assigned by a web server to a specific version of a resource found at a URL.
- #expiration ⇒ String
  If object expiration is configured (see `PutBucketLifecycleConfiguration`), the response includes this header.
- #expires ⇒ Time
  The date and time at which the object is no longer cacheable.
- #expires_string ⇒ String
- #key ⇒ String
- #last_modified ⇒ Time
  Date and time when the object was last modified.
- #metadata ⇒ Hash<String,String>
  A map of metadata to store with the object in S3.
- #missing_meta ⇒ Integer
  The number of metadata entries not returned in `x-amz-meta` headers.
- #object_lock_legal_hold_status ⇒ String
  Specifies whether a legal hold is in effect for this object.
- #object_lock_mode ⇒ String
  The Object Lock mode, if any, that's in effect for this object.
- #object_lock_retain_until_date ⇒ Time
  The date and time when the Object Lock retention period expires.
- #parts_count ⇒ Integer
  The count of parts this object has.
- #replication_status ⇒ String
  Amazon S3 can return this header if your request involves a bucket that is either a source or a destination in a replication rule.
- #request_charged ⇒ String
  If present, indicates that the requester was successfully charged for the request.
- #restore ⇒ String
  If the object is an archived object (an object whose storage class is GLACIER), the response includes this header if either the archive restoration is in progress (see RestoreObject) or an archive copy is already restored.
- #server_side_encryption ⇒ String
  The server-side encryption algorithm used when you store this object in Amazon S3 (for example, `AES256`, `aws:kms`, `aws:kms:dsse`).
- #sse_customer_algorithm ⇒ String
  If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that's used.
- #sse_customer_key_md5 ⇒ String
  If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.
- #ssekms_key_id ⇒ String
  If present, indicates the ID of the Key Management Service (KMS) symmetric encryption customer managed key that was used for the object.
- #storage_class ⇒ String
  Provides storage class information of the object.
- #version_id ⇒ String
  Version ID of the object.
- #website_redirect_location ⇒ String
  If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL.
Actions
- #copy_from(options = {}) ⇒ Types::CopyObjectOutput
- #delete(options = {}) ⇒ Types::DeleteObjectOutput
- #get(options = {}, &block) ⇒ Types::GetObjectOutput
- #head(options = {}) ⇒ Types::HeadObjectOutput
- #initiate_multipart_upload(options = {}) ⇒ MultipartUpload
- #put(options = {}) ⇒ Types::PutObjectOutput
- #restore_object(options = {}) ⇒ Types::RestoreObjectOutput
Associations
- #acl ⇒ ObjectAcl
- #bucket ⇒ Bucket
- #identifiers ⇒ Object (deprecated, private)
- #multipart_upload(id) ⇒ MultipartUpload
- #version(id) ⇒ ObjectVersion
Instance Method Summary
- #client ⇒ Client
- #copy_to(target, options = {}) ⇒ Object
  Copies this object to another object.
- #data ⇒ Types::HeadObjectOutput
  Returns the data for this Object.
- #data_loaded? ⇒ Boolean
  Returns `true` if this resource is loaded.
- #download_file(destination, options = {}) ⇒ Boolean
  Downloads a file in S3 to a path on disk.
- #exists?(options = {}) ⇒ Boolean
  Returns `true` if the Object exists.
- #initialize(*args) ⇒ Object (constructor)
  A new instance of Object.
- #load ⇒ self (also: #reload)
- #move_to(target, options = {}) ⇒ void
  Copies and deletes the current object.
- #presigned_post(options = {}) ⇒ PresignedPost
  Creates a PresignedPost that makes it easy to upload a file from a web browser directly to Amazon S3 using an HTML POST form with a file field.
- #presigned_request(method, params = {}) ⇒ String, Hash
  Allows you to create presigned URL requests for S3 operations.
- #presigned_url(method, params = {}) ⇒ String
  Generates a pre-signed URL for this object.
- #public_url(options = {}) ⇒ String
  Returns the public (un-signed) URL for this object.
- #size ⇒ Object
- #upload_file(source, options = {}) {|response| ... } ⇒ Boolean
  Uploads a file from disk to the current object in S3.
- #upload_stream(options = {}, &block) ⇒ Boolean
  Uploads a stream in a streaming fashion to the current object in S3.
- #wait_until(options = {}) {|resource| ... } ⇒ Resource (deprecated)
  Deprecated. Use Aws::S3::Client#wait_until instead.
- #wait_until_exists(options = {}, &block) ⇒ Object
- #wait_until_not_exists(options = {}, &block) ⇒ Object
Constructor Details
#initialize(bucket_name, key, options = {}) ⇒ Object
#initialize(options = {}) ⇒ Object

Returns a new instance of Object.

# File 'lib/aws-sdk-s3/object.rb', line 24
def initialize(*args)
  options = Hash === args.last ? args.pop.dup : {}
  @bucket_name = extract_bucket_name(args, options)
  @key = extract_key(args, options)
  @data = options.delete(:data)
  @client = options.delete(:client) || Client.new(options)
  @waiter_block_warned = false
end
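The constructor accepts either positional identifiers or a single options hash. A minimal, dependency-free sketch of the trailing-options idiom it relies on (the helper name `extract_args` is illustrative, not part of the SDK):

```ruby
# Sketch: if the last positional argument is a Hash, treat it as options,
# mirroring the `Hash === args.last ? args.pop.dup : {}` line in #initialize.
def extract_args(*args)
  options = Hash === args.last ? args.pop.dup : {}
  [args, options]
end

positional, opts = extract_args('my-bucket', 'my-key', { client: :stub })
# positional == ['my-bucket', 'my-key'], opts == { client: :stub }
```

Because the hash is `dup`ed, later `options.delete(:client)` calls inside the constructor do not mutate the caller's hash.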
Instance Method Details
#accept_ranges ⇒ String
Indicates that a range of bytes was specified.
# File 'lib/aws-sdk-s3/object.rb', line 59
def accept_ranges
  data[:accept_ranges]
end
#acl ⇒ ObjectAcl
# File 'lib/aws-sdk-s3/object.rb', line 2998
def acl
  ObjectAcl.new(
    bucket_name: @bucket_name,
    object_key: @key,
    client: @client
  )
end
#archive_status ⇒ String
The archive state of the head object.
Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 119
def archive_status
  data[:archive_status]
end
#bucket ⇒ Bucket
# File 'lib/aws-sdk-s3/object.rb', line 3007
def bucket
  Bucket.new(
    name: @bucket_name,
    client: @client
  )
end
#bucket_key_enabled ⇒ Boolean
Indicates whether the object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 351
def bucket_key_enabled
  data[:bucket_key_enabled]
end
#bucket_name ⇒ String
# File 'lib/aws-sdk-s3/object.rb', line 36
def bucket_name
  @bucket_name
end
#cache_control ⇒ String
Specifies caching behavior along the request/reply chain.
# File 'lib/aws-sdk-s3/object.rb', line 236
def cache_control
  data[:cache_control]
end
#checksum_crc32 ⇒ String
The base64-encoded, 32-bit CRC32 checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see [Checking object integrity][1] in the *Amazon S3 User Guide*.

[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums

# File 'lib/aws-sdk-s3/object.rb', line 148
def checksum_crc32
  data[:checksum_crc32]
end
#checksum_crc32c ⇒ String
The base64-encoded, 32-bit CRC32C checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see [Checking object integrity][1] in the *Amazon S3 User Guide*.

[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums

# File 'lib/aws-sdk-s3/object.rb', line 165
def checksum_crc32c
  data[:checksum_crc32c]
end
#checksum_sha1 ⇒ String
The base64-encoded, 160-bit SHA-1 digest of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see [Checking object integrity][1] in the *Amazon S3 User Guide*.

[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums

# File 'lib/aws-sdk-s3/object.rb', line 182
def checksum_sha1
  data[:checksum_sha1]
end
#checksum_sha256 ⇒ String
The base64-encoded, 256-bit SHA-256 digest of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see [Checking object integrity][1] in the *Amazon S3 User Guide*.

[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums

# File 'lib/aws-sdk-s3/object.rb', line 199
def checksum_sha256
  data[:checksum_sha256]
end
#content_disposition ⇒ String
Specifies presentational information for the object.
# File 'lib/aws-sdk-s3/object.rb', line 242
def content_disposition
  data[:content_disposition]
end
#content_encoding ⇒ String
Indicates what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
# File 'lib/aws-sdk-s3/object.rb', line 250
def content_encoding
  data[:content_encoding]
end
#content_language ⇒ String
The language the content is in.
# File 'lib/aws-sdk-s3/object.rb', line 256
def content_language
  data[:content_language]
end
#content_length ⇒ Integer
Size of the body in bytes.
# File 'lib/aws-sdk-s3/object.rb', line 131
def content_length
  data[:content_length]
end
#content_type ⇒ String
A standard MIME type describing the format of the object data.
# File 'lib/aws-sdk-s3/object.rb', line 262
def content_type
  data[:content_type]
end
#copy_from(options = {}) ⇒ Types::CopyObjectOutput
# File 'lib/aws-sdk-s3/customizations/object.rb', line 78
alias_method :copy_from, :copy_from
#copy_to(target, options = {}) ⇒ Object
Copies this object to another object. Use `multipart_copy: true` for large objects. This is required for objects that exceed 5GB.

Note: If you need to copy to a bucket in a different region, use #copy_from.

# File 'lib/aws-sdk-s3/customizations/object.rb', line 121
def copy_to(target, options = {})
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    ObjectCopier.new(self, options).copy_to(target, options)
  end
end
#data ⇒ Types::HeadObjectOutput
Returns the data for this Aws::S3::Object. Calls Client#head_object if #data_loaded? is `false`.

# File 'lib/aws-sdk-s3/object.rb', line 517
def data
  load unless @data
  @data
end
#data_loaded? ⇒ Boolean
# File 'lib/aws-sdk-s3/object.rb', line 525
def data_loaded?
  !!@data
end
#delete(options = {}) ⇒ Types::DeleteObjectOutput
# File 'lib/aws-sdk-s3/object.rb', line 1410
def delete(options = {})
  options = options.merge(
    bucket: @bucket_name,
    key: @key
  )
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.delete_object(options)
  end
  resp.data
end
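Every action method (#delete, #get, #head, #put, #restore_object) follows the same pattern: merge the caller's options with this object's identifiers before invoking the client. A plain-Ruby sketch of that merge (the helper name `with_identifiers` is illustrative):

```ruby
# Sketch of the options merge performed by the action methods: the
# caller's options are kept, and the resource identifiers are added.
# Note that merge wins for :bucket and :key, so a caller cannot
# accidentally redirect the call to a different object.
def with_identifiers(options, bucket_name, key)
  options.merge(bucket: bucket_name, key: key)
end

with_identifiers({ version_id: 'v1' }, 'my-bucket', 'my-key')
# => { version_id: 'v1', bucket: 'my-bucket', key: 'my-key' }
```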
#delete_marker ⇒ Boolean
Specifies whether the object retrieved was (true) or was not (false) a Delete Marker. If false, this response header does not appear in the response.
Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 53
def delete_marker
  data[:delete_marker]
end
#download_file(destination, options = {}) ⇒ Boolean
Downloads a file in S3 to a path on disk.
# small files (< 5MB) are downloaded in a single API call
obj.download_file('/path/to/file')

Files larger than 5MB are downloaded using the multipart method:

# large files are split into parts
# and the parts are downloaded in parallel
obj.download_file('/path/to/very_large_file')

You can provide a callback to monitor progress of the download:

# bytes and part_sizes are each an array with 1 entry per part
# part_sizes may not be known until the first bytes are retrieved
progress = Proc.new do |bytes, part_sizes, file_size|
  puts bytes.map.with_index { |b, i| "Part #{i+1}: #{b} / #{part_sizes[i]}" }.join(' ') + "Total: #{100.0 * bytes.sum / file_size}%"
end
obj.download_file('/path/to/file', progress_callback: progress)

# File 'lib/aws-sdk-s3/customizations/object.rb', line 552
def download_file(destination, options = {})
  downloader = FileDownloader.new(client: client)
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    downloader.download(
      destination,
      options.merge(bucket: bucket_name, key: key)
    )
  end
  true
end
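The percentage printed by the progress callback above is plain arithmetic over the per-part byte counts; a standalone sketch (helper name is illustrative):

```ruby
# Sketch of the progress math from the callback example above:
# total percent complete from per-part byte counts and the file size.
def percent_complete(bytes, file_size)
  100.0 * bytes.sum / file_size
end

percent_complete([100, 50], 300) # => 50.0
```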
#etag ⇒ String
An entity tag (ETag) is an opaque identifier assigned by a web server to a specific version of a resource found at a URL.
# File 'lib/aws-sdk-s3/object.rb', line 206
def etag
  data[:etag]
end
#exists?(options = {}) ⇒ Boolean
Returns `true` if the Object exists.

# File 'lib/aws-sdk-s3/object.rb', line 532
def exists?(options = {})
  begin
    wait_until_exists(options.merge(max_attempts: 1))
    true
  rescue Aws::Waiters::Errors::UnexpectedError => e
    raise e.error
  rescue Aws::Waiters::Errors::WaiterFailed
    false
  end
end
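#exists? converts a waiter outcome into a boolean: a successful single-attempt wait returns `true`, a waiter failure is swallowed as `false`, and unexpected errors re-raise. A dependency-free sketch of that control flow (the `WaiterFailed` class here is a stand-in for `Aws::Waiters::Errors::WaiterFailed`):

```ruby
# Stand-in for Aws::Waiters::Errors::WaiterFailed (illustrative only).
class WaiterFailed < StandardError; end

# Run the check; success means true, a waiter failure means false.
def exists_from(check)
  check.call
  true
rescue WaiterFailed
  false
end

exists_from(-> { :found })              # => true
exists_from(-> { raise WaiterFailed })  # => false
```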
#expiration ⇒ String
If the object expiration is configured (see [`PutBucketLifecycleConfiguration`][1]), the response includes this header. It includes the `expiry-date` and `rule-id` key-value pairs providing object expiration information. The value of the `rule-id` is URL-encoded.

Note: This functionality is not supported for directory buckets.

[1]: docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html

# File 'lib/aws-sdk-s3/object.rb', line 77
def expiration
  data[:expiration]
end
#expires ⇒ Time
The date and time at which the object is no longer cacheable.
# File 'lib/aws-sdk-s3/object.rb', line 268
def expires
  data[:expires]
end
#expires_string ⇒ String
# File 'lib/aws-sdk-s3/object.rb', line 273
def expires_string
  data[:expires_string]
end
#get(options = {}, &block) ⇒ Types::GetObjectOutput
# File 'lib/aws-sdk-s3/object.rb', line 1675
def get(options = {}, &block)
  options = options.merge(
    bucket: @bucket_name,
    key: @key
  )
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.get_object(options, &block)
  end
  resp.data
end
#head(options = {}) ⇒ Types::HeadObjectOutput
# File 'lib/aws-sdk-s3/object.rb', line 2984
def head(options = {})
  options = options.merge(
    bucket: @bucket_name,
    key: @key
  )
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.head_object(options)
  end
  resp.data
end
#identifiers ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
# File 'lib/aws-sdk-s3/object.rb', line 3038
def identifiers
  {
    bucket_name: @bucket_name,
    key: @key
  }
end
#initiate_multipart_upload(options = {}) ⇒ MultipartUpload
# File 'lib/aws-sdk-s3/object.rb', line 2160
def initiate_multipart_upload(options = {})
  options = options.merge(
    bucket: @bucket_name,
    key: @key
  )
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.create_multipart_upload(options)
  end
  MultipartUpload.new(
    bucket_name: @bucket_name,
    object_key: @key,
    id: resp.data.upload_id,
    client: @client
  )
end
#key ⇒ String
# File 'lib/aws-sdk-s3/object.rb', line 41
def key
  @key
end
#last_modified ⇒ Time
Date and time when the object was last modified.
# File 'lib/aws-sdk-s3/object.rb', line 125
def last_modified
  data[:last_modified]
end
#load ⇒ self Also known as: reload
Loads, or reloads #data for the current Aws::S3::Object. Returns `self`, making it possible to chain methods.

object.reload.data

# File 'lib/aws-sdk-s3/object.rb', line 502
def load
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.head_object(
      bucket: @bucket_name,
      key: @key
    )
  end
  @data = resp.data
  self
end
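#data and #load cooperate as a memoized fetch: #data calls #load only when nothing is cached, while #load (aliased #reload) always refreshes and returns `self` for chaining. A small plain-Ruby model of that contract (the `LazyResource` class is illustrative, not part of the SDK):

```ruby
# Minimal model of the #load / #data / #data_loaded? contract.
class LazyResource
  def initialize(&fetch)
    @fetch = fetch
    @data = nil
  end

  def load
    @data = @fetch.call
    self # returns self so calls can chain, like object.reload.data
  end
  alias reload load

  def data
    load unless @data # fetch only on first access
    @data
  end

  def data_loaded?
    !!@data
  end
end

calls = 0
r = LazyResource.new { calls += 1; { etag: 'abc' } }
r.data
r.data
# only one fetch happened; r.reload would force a second one
```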
#metadata ⇒ Hash<String,String>
A map of metadata to store with the object in S3.
# File 'lib/aws-sdk-s3/object.rb', line 303
def metadata
  data[:metadata]
end
#missing_meta ⇒ Integer
This is set to the number of metadata entries not returned in `x-amz-meta` headers. This can happen if you create metadata using an API like SOAP that supports more flexible metadata than the REST API. For example, using SOAP, you can create metadata whose values are not legal HTTP headers.

Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 220
def missing_meta
  data[:missing_meta]
end
#move_to(target, options = {}) ⇒ void
This method returns an undefined value.
Copies and deletes the current object. The object will only be deleted if the copy operation succeeds.
# File 'lib/aws-sdk-s3/customizations/object.rb', line 135
def move_to(target, options = {})
  copy_to(target, options)
  delete
end
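Because #move_to calls copy_to before delete, a failed copy raises before the delete ever runs, leaving the source object intact. A tiny sketch of that ordering guarantee (the `move` helper is illustrative):

```ruby
# Sketch: delete runs only after a successful copy, as in #move_to.
def move(copy, delete)
  copy.call
  delete.call
end

log = []
move(-> { log << :copied }, -> { log << :deleted })
# log == [:copied, :deleted]

begin
  move(-> { raise 'copy failed' }, -> { log << :deleted })
rescue RuntimeError
  # the delete lambda was never invoked; the "source" survives
end
```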
#multipart_upload(id) ⇒ MultipartUpload
# File 'lib/aws-sdk-s3/object.rb', line 3016
def multipart_upload(id)
  MultipartUpload.new(
    bucket_name: @bucket_name,
    object_key: @key,
    id: id,
    client: @client
  )
end
#object_lock_legal_hold_status ⇒ String
Specifies whether a legal hold is in effect for this object. This header is only returned if the requester has the `s3:GetObjectLegalHold` permission. This header is not returned if the specified version of this object has never had a legal hold applied. For more information about S3 Object Lock, see [Object Lock][1].

Note: This functionality is not supported for directory buckets.

[1]: docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html

# File 'lib/aws-sdk-s3/object.rb', line 485
def object_lock_legal_hold_status
  data[:object_lock_legal_hold_status]
end
#object_lock_mode ⇒ String
The Object Lock mode, if any, that's in effect for this object. This header is only returned if the requester has the `s3:GetObjectRetention` permission. For more information about S3 Object Lock, see [Object Lock][1].

Note: This functionality is not supported for directory buckets.

[1]: docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html

# File 'lib/aws-sdk-s3/object.rb', line 455
def object_lock_mode
  data[:object_lock_mode]
end
#object_lock_retain_until_date ⇒ Time
The date and time when the Object Lock retention period expires. This header is only returned if the requester has the `s3:GetObjectRetention` permission.

Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 467
def object_lock_retain_until_date
  data[:object_lock_retain_until_date]
end
#parts_count ⇒ Integer
The count of parts this object has. This value is only returned if you specify `partNumber` in your request and the object was uploaded as a multipart upload.

# File 'lib/aws-sdk-s3/object.rb', line 438
def parts_count
  data[:parts_count]
end
#presigned_post(options = {}) ⇒ PresignedPost
Creates a PresignedPost that makes it easy to upload a file from a web browser direct to Amazon S3 using an HTML post form with a file field.
See the PresignedPost documentation for more information.
# File 'lib/aws-sdk-s3/customizations/object.rb', line 149
def presigned_post(options = {})
  PresignedPost.new(
    client.config.credentials,
    client.config.region,
    bucket_name,
    { key: key, url: bucket.url }.merge(options)
  )
end
#presigned_request(method, params = {}) ⇒ String, Hash
Allows you to create presigned URL requests for S3 operations. This method returns a tuple containing the URL and the signed `X-Amz-*` headers to be used with the presigned URL.

# File 'lib/aws-sdk-s3/customizations/object.rb', line 293
def presigned_request(method, params = {})
  presigner = Presigner.new(client: client)
  if %w(delete head get put).include?(method.to_s)
    method = "#{method}_object".to_sym
  end
  presigner.presigned_request(
    method.downcase,
    params.merge(bucket: bucket_name, key: key)
  )
end
#presigned_url(method, params = {}) ⇒ String
Generates a pre-signed URL for this object.
# File 'lib/aws-sdk-s3/customizations/object.rb', line 220
def presigned_url(method, params = {})
  presigner = Presigner.new(client: client)
  if %w(delete head get put).include?(method.to_s)
    method = "#{method}_object".to_sym
  end
  presigner.presigned_url(
    method.downcase,
    params.merge(bucket: bucket_name, key: key)
  )
end
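Both presigned helpers normalize bare HTTP verbs into client operation names before delegating to the presigner. The mapping in isolation (the `expand_method` name is illustrative):

```ruby
# Sketch of the verb-to-operation mapping in #presigned_url and
# #presigned_request: bare verbs become "<verb>_object" symbols,
# while full operation names pass through unchanged.
def expand_method(method)
  m = method.to_s
  %w(delete head get put).include?(m) ? :"#{m}_object" : method
end

expand_method(:get)         # => :get_object
expand_method(:upload_part) # => :upload_part (already an operation name)
```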
#public_url(options = {}) ⇒ String
Returns the public (un-signed) URL for this object.
s3.bucket('bucket-name').object('obj-key').public_url
#=> "https://bucket-name.s3.amazonaws.com/obj-key"

To generate a virtual-hosted bucket URL, pass `virtual_host: true`. HTTPS is used unless `secure: false` is set. If the bucket name contains dots (.), you will need to set `secure: false`.

s3.bucket('my-bucket.com').object('key')
  .public_url(virtual_host: true)
#=> "https://my-bucket.com/key"

# File 'lib/aws-sdk-s3/customizations/object.rb', line 328
def public_url(options = {})
  url = URI.parse(bucket.url(options))
  url.path += '/' unless url.path[-1] == '/'
  url.path += key.gsub(/[^\/]+/) { |s| Seahorse::Util.uri_escape(s) }
  url.to_s
end
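#public_url escapes each path segment of the key while preserving the slashes between them. A plain-Ruby approximation using only the standard library (`Seahorse::Util.uri_escape` behaves similarly; this stand-in and the helper name are assumptions for illustration):

```ruby
require 'uri'

# Escape each non-slash run of the key, keeping '/' as a separator,
# mirroring the gsub in #public_url.
def escaped_key_path(key)
  key.gsub(/[^\/]+/) { |s| URI.encode_www_form_component(s).gsub('+', '%20') }
end

escaped_key_path('photos/summer 2024/img 1.png')
# => "photos/summer%202024/img%201.png"
```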
#put(options = {}) ⇒ Types::PutObjectOutput
# File 'lib/aws-sdk-s3/object.rb', line 2650
def put(options = {})
  options = options.merge(
    bucket: @bucket_name,
    key: @key
  )
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.put_object(options)
  end
  resp.data
end
#replication_status ⇒ String
Amazon S3 can return this header if your request involves a bucket that is either a source or a destination in a replication rule.
In replication, you have a source bucket on which you configure replication and destination bucket or buckets where Amazon S3 stores object replicas. When you request an object (`GetObject`) or object metadata (`HeadObject`) from these buckets, Amazon S3 will return the `x-amz-replication-status` header in the response as follows:

- **If requesting an object from the source bucket**, Amazon S3 will return the `x-amz-replication-status` header if the object in your request is eligible for replication.
  For example, suppose that in your replication configuration, you specify object prefix `TaxDocs`, requesting Amazon S3 to replicate objects with key prefix `TaxDocs`. Any objects you upload with this key name prefix, for example `TaxDocs/document1.pdf`, are eligible for replication. For any object request with this key name prefix, Amazon S3 will return the `x-amz-replication-status` header with value PENDING, COMPLETED, or FAILED indicating the object replication status.
- **If requesting an object from a destination bucket**, Amazon S3 will return the `x-amz-replication-status` header with value REPLICA if the object in your request is a replica that Amazon S3 created and there is no replica modification replication in progress.
- **When replicating objects to multiple destination buckets**, the `x-amz-replication-status` header acts differently. The header of the source object will only return a value of COMPLETED when replication is successful to all destinations. The header will remain at value PENDING until replication has completed for all destinations. If one or more destinations fails replication, the header will return FAILED.

For more information, see [Replication][1].

Note: This functionality is not supported for directory buckets.

[1]: docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html

# File 'lib/aws-sdk-s3/object.rb', line 430
def replication_status
  data[:replication_status]
end
#request_charged ⇒ String
If present, indicates that the requester was successfully charged for the request.
Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 381
def request_charged
  data[:request_charged]
end
#restore ⇒ String
If the object is an archived object (an object whose storage class is GLACIER), the response includes this header if either the archive restoration is in progress (see [RestoreObject][1]) or an archive copy is already restored.

If an archive copy is already restored, the header value indicates when Amazon S3 is scheduled to delete the object copy. For example:

`x-amz-restore: ongoing-request="false", expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"`

If the object restoration is in progress, the header returns the value `ongoing-request="true"`.

For more information about archiving objects, see [Transitioning Objects: General Considerations][2].

Note: This functionality is not supported for directory buckets. Only the S3 Express One Zone storage class is supported by directory buckets to store objects.

[1]: docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html
[2]: docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html#lifecycle-transition-general-considerations

# File 'lib/aws-sdk-s3/object.rb', line 109
def restore
  data[:restore]
end
#restore_object(options = {}) ⇒ Types::RestoreObjectOutput
# File 'lib/aws-sdk-s3/object.rb', line 2791
def restore_object(options = {})
  options = options.merge(
    bucket: @bucket_name,
    key: @key
  )
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.restore_object(options)
  end
  resp.data
end
#server_side_encryption ⇒ String
The server-side encryption algorithm used when you store this object in Amazon S3 (for example, `AES256`, `aws:kms`, `aws:kms:dsse`).

Note: For directory buckets, only server-side encryption with Amazon S3 managed keys (SSE-S3) (`AES256`) is supported.

# File 'lib/aws-sdk-s3/object.rb', line 297
def server_side_encryption
  data[:server_side_encryption]
end
#size ⇒ Object
# File 'lib/aws-sdk-s3/customizations/object.rb', line 6
alias size content_length
#sse_customer_algorithm ⇒ String
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.
Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 315
def sse_customer_algorithm
  data[:sse_customer_algorithm]
end
#sse_customer_key_md5 ⇒ String
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.
Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 328
def sse_customer_key_md5
  data[:sse_customer_key_md5]
end
#ssekms_key_id ⇒ String
If present, indicates the ID of the Key Management Service (KMS) symmetric encryption customer managed key that was used for the object.
Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 340
def ssekms_key_id
  data[:ssekms_key_id]
end
#storage_class ⇒ String
Provides storage class information of the object. Amazon S3 returns this header for all objects except for S3 Standard storage class objects.
For more information, see [Storage Classes][1].

Note: Directory buckets - Only the S3 Express One Zone storage class is supported by directory buckets to store objects.

[1]: docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html

# File 'lib/aws-sdk-s3/object.rb', line 370
def storage_class
  data[:storage_class]
end
#upload_file(source, options = {}) {|response| ... } ⇒ Boolean
Uploads a file from disk to the current object in S3.
# small files are uploaded in a single API call
obj.upload_file('/path/to/file')

Files larger than or equal to `:multipart_threshold` are uploaded using the Amazon S3 multipart upload APIs.

# large files are automatically split into parts
# and the parts are uploaded in parallel
obj.upload_file('/path/to/very_large_file')

The response of the S3 upload API is yielded if a block is given.

# API response will have etag value of the file
obj.upload_file('/path/to/file') do |response|
  etag = response.etag
end

You can provide a callback to monitor progress of the upload:

# bytes and totals are each an array with 1 entry per part
progress = Proc.new do |bytes, totals|
  puts bytes.map.with_index { |b, i| "Part #{i+1}: #{b} / #{totals[i]}" }.join(' ') + "Total: #{100.0 * bytes.sum / totals.sum}%"
end
obj.upload_file('/path/to/file', progress_callback: progress)

# File 'lib/aws-sdk-s3/customizations/object.rb', line 470
def upload_file(source, options = {})
  options = options.dup
  uploader = FileUploader.new(
    multipart_threshold: options.delete(:multipart_threshold),
    client: client
  )
  response = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    uploader.upload(
      source,
      options.merge(bucket: bucket_name, key: key)
    )
  end
  yield response if block_given?
  true
end
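The single-call vs. multipart decision reduces to a size comparison against `:multipart_threshold`. A sketch of that choice (the 100 MiB default and the helper name are assumptions for illustration):

```ruby
# Assumed default threshold for illustration; callers can override it,
# as upload_file does with options.delete(:multipart_threshold).
DEFAULT_MULTIPART_THRESHOLD = 100 * 1024 * 1024

def upload_strategy(size, threshold = DEFAULT_MULTIPART_THRESHOLD)
  size >= threshold ? :multipart : :single_put
end

upload_strategy(5 * 1024 * 1024)    # => :single_put
upload_strategy(200 * 1024 * 1024)  # => :multipart
```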
#upload_stream(options = {}, &block) ⇒ Boolean
Uploads a stream in a streaming fashion to the current object in S3.
Passed chunks are automatically split into multipart upload parts, and the parts are uploaded in parallel. This allows for streaming uploads that never touch the disk.
Note that this is known to have issues in JRuby until jruby-9.1.15.0, so avoid using this with older versions of JRuby.
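A minimal standalone sketch of the underlying chunking idea, using StringIO and a tiny hypothetical part size (real S3 multipart parts must be at least 5 MiB, except the last):

```ruby
require 'stringio'

PART_SIZE = 4 # hypothetical; real S3 parts are >= 5 MiB except the last

# Consume a stream in fixed-size chunks, as a multipart uploader would.
io = StringIO.new('abcdefghij')
parts = []
while (chunk = io.read(PART_SIZE))
  parts << chunk
end
parts # => ["abcd", "efgh", "ij"]
```

Each chunk would become one uploaded part; the final short chunk is allowed because S3 exempts the last part from the minimum-size rule.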
# File 'lib/aws-sdk-s3/customizations/object.rb', line 385

def upload_stream(options = {}, &block)
  uploading_options = options.dup
  uploader = MultipartStreamUploader.new(
    client: client,
    thread_count: uploading_options.delete(:thread_count),
    tempfile: uploading_options.delete(:tempfile),
    part_size: uploading_options.delete(:part_size)
  )
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    uploader.upload(
      uploading_options.merge(bucket: bucket_name, key: key),
      &block
    )
  end
  true
end
#version(id) ⇒ ObjectVersion
# File 'lib/aws-sdk-s3/object.rb', line 3027

def version(id)
  ObjectVersion.new(
    bucket_name: @bucket_name,
    object_key: @key,
    id: id,
    client: @client
  )
end
#version_id ⇒ String
Version ID of the object.
<note markdown="1"> This functionality is not supported for directory buckets.
</note>
# File 'lib/aws-sdk-s3/object.rb', line 230

def version_id
  data[:version_id]
end
#wait_until(options = {}) {|resource| ... } ⇒ Resource
Deprecated. Use [Aws::S3::Client] #wait_until instead.
The waiting operation is performed on a copy. The original resource remains unchanged.
Waiter polls an API operation until a resource enters a desired state.
## Basic Usage
The waiter polls until it succeeds, fails by entering a terminal state, or reaches a maximum number of attempts.
# polls in a loop until condition is true
resource.wait_until() {|resource| condition}
## Example
instance.wait_until(max_attempts:10, delay:5) do |instance|
instance.state.name == 'running'
end
## Configuration
You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. The waiting condition is set by passing a block to #wait_until:
# poll for ~25 seconds
resource.wait_until(max_attempts:5,delay:5) {|resource|...}
## Callbacks
You can be notified before each polling attempt and before each delay. If you throw `:success` or `:failure` from these callbacks, it will terminate the waiter.
started_at = Time.now
# poll for 1 hour, instead of a number of attempts
proc = Proc.new do |attempts, response|
throw :failure if Time.now - started_at > 3600
end
# disable max attempts
instance.wait_until(before_wait:proc, max_attempts:nil) {...}
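The termination mechanism itself can be sketched standalone with Ruby's catch/throw (the elapsed time below is simulated, not real polling):

```ruby
started_at = Time.now - 4000 # simulate a start over an hour in the past

# A before_wait-style callback: aborts once more than an hour has elapsed.
before_wait = proc do |attempts, response|
  throw :failure, :timed_out if Time.now - started_at > 3600
end

# The waiter wraps polling in a catch block, so the throw above unwinds it.
result = catch(:failure) do
  before_wait.call(1, nil) # the waiter would invoke this before each delay
  :kept_waiting
end
result # => :timed_out
```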
## Handling Errors
When a waiter is successful, it returns the Resource. When a waiter fails, it raises an error.
begin
resource.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
# resource did not enter the desired state in time
end
Options:
- `:max_attempts` (Integer, default 10) - maximum number of polling attempts
- `:delay` (Float, default 10) - delay between each attempt, in seconds
- `:before_attempt` (Proc) - callback invoked before each attempt
- `:before_wait` (Proc) - callback invoked before each wait
# File 'lib/aws-sdk-s3/object.rb', line 665

def wait_until(options = {}, &block)
  self_copy = self.dup
  attempts = 0
  options[:max_attempts] = 10 unless options.key?(:max_attempts)
  options[:delay] ||= 10
  options[:poller] = Proc.new do
    attempts += 1
    if block.call(self_copy)
      [:success, self_copy]
    else
      self_copy.reload unless attempts == options[:max_attempts]
      :retry
    end
  end
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    Aws::Waiters::Waiter.new(options).wait({})
  end
end
#wait_until_exists(options = {}, &block) ⇒ Object
# File 'lib/aws-sdk-s3/object.rb', line 549

def wait_until_exists(options = {}, &block)
  options, params = separate_params_and_options(options)
  waiter = Waiters::ObjectExists.new(options)
  yield_waiter_and_warn(waiter, &block) if block_given?
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    waiter.wait(params.merge(bucket: @bucket_name, key: @key))
  end
  Object.new({
    bucket_name: @bucket_name,
    key: @key,
    client: @client
  })
end
#wait_until_not_exists(options = {}, &block) ⇒ Object
# File 'lib/aws-sdk-s3/object.rb', line 570

def wait_until_not_exists(options = {}, &block)
  options, params = separate_params_and_options(options)
  waiter = Waiters::ObjectNotExists.new(options)
  yield_waiter_and_warn(waiter, &block) if block_given?
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    waiter.wait(params.merge(bucket: @bucket_name, key: @key))
  end
  Object.new({
    bucket_name: @bucket_name,
    key: @key,
    client: @client
  })
end
#website_redirect_location ⇒ String
If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.
<note markdown="1"> This functionality is not supported for directory buckets.
</note>
# File 'lib/aws-sdk-s3/object.rb', line 285

def website_redirect_location
  data[:website_redirect_location]
end