Class: AWS::S3::S3Object

Inherits:
Object
Includes:
DataOptions
Defined in:
lib/aws/s3/s3_object.rb

Overview

Represents an object in S3 identified by a key.

object = bucket.objects["key-to-my-object"]
object.key #=> 'key-to-my-object'

See ObjectCollection for more information on finding objects.

Writing and Reading S3Objects

obj = bucket.objects["my-text-object"]

obj.write("MY TEXT")
obj.read
#=> "MY TEXT"

obj.write(File.new("README.txt"))
obj.read
# should equal File.read("README.txt")

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(bucket, key, opts = {}) ⇒ S3Object

Returns a new instance of S3Object.
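
Examples:

Constructing an object handle directly (a sketch; normally you would obtain objects via bucket.objects["key"] instead)

object = AWS::S3::S3Object.new(bucket, "key-to-my-object")
object.key #=> 'key-to-my-object'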

Parameters:

  • bucket (Bucket)

    The bucket this object belongs to.

  • key (String)

    The object’s key.



# File 'lib/aws/s3/s3_object.rb', line 52

def initialize(bucket, key, opts = {})
  super
  @key = key
  @bucket = bucket
end

Instance Attribute Details

#bucket ⇒ Bucket (readonly)

Returns The bucket this object is in.

Returns:

  • (Bucket)

    The bucket this object is in.



# File 'lib/aws/s3/s3_object.rb', line 62

def bucket
  @bucket
end

#key ⇒ String (readonly)

Returns The object's unique key.

Returns:

  • (String)

    The object's unique key.



# File 'lib/aws/s3/s3_object.rb', line 59

def key
  @key
end

Instance Method Details

#==(other) ⇒ Boolean Also known as: eql?

Returns true if the other object belongs to the same bucket and has the same key.
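
Examples:

Comparing object handles (a sketch; the keys are placeholders)

bucket.objects["a"] == bucket.objects["a"] #=> true  (same bucket, same key)
bucket.objects["a"] == bucket.objects["b"] #=> false (different keys)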

Returns:

  • (Boolean)

    Returns true if the other object belongs to the same bucket and has the same key.



# File 'lib/aws/s3/s3_object.rb', line 71

def ==(other)
  other.kind_of?(S3Object) and other.bucket == bucket and other.key == key
end

#acl ⇒ AccessControlList

Returns the object’s access control list. This will be an instance of AccessControlList, plus an additional change method:

object.acl.change do |acl|
  # remove any grants to someone other than the bucket owner
  owner_id = object.bucket.owner.id
  acl.grants.reject! do |g|
    g.grantee.canonical_user_id != owner_id
  end
end

Note that changing the ACL is not an atomic operation; it fetches the current ACL, yields it to the block, and then sets it again. Therefore, it’s possible that you may overwrite a concurrent update to the ACL using this method.

Returns:

  • (AccessControlList)

# File 'lib/aws/s3/s3_object.rb', line 537

def acl
  acl = client.get_object_acl(
    :bucket_name => bucket.name,
    :key => key
  ).acl
  acl.extend ACLProxy
  acl.object = self
  acl
end

#acl=(acl) ⇒ Response

Sets the object’s access control list. acl can be:

  • An XML policy as a string (which is passed to S3 uninterpreted)

  • An AccessControlList object

  • Any object that responds to to_xml

  • Any Hash that is acceptable as an argument to AccessControlList#initialize.
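
Examples:

Copying an ACL from another object (a sketch; other_object is a placeholder for any other S3Object, and #acl returns an AccessControlList, one of the accepted types above)

object.acl = other_object.acl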

Parameters:

Returns:

  • (Response)


# File 'lib/aws/s3/s3_object.rb', line 557

def acl=(acl)
  client.set_object_acl(
    :bucket_name => bucket.name,
    :key => key,
    :acl => acl)
end

#content_length ⇒ Integer

Returns Size of the object in bytes.

Returns:

  • (Integer)

    Size of the object in bytes.



# File 'lib/aws/s3/s3_object.rb', line 107

def content_length
  head.content_length
end

#content_type ⇒ String

Note:

S3 does not compute the content-type; it reports the content-type that was provided when the object was uploaded.

Returns the content type as reported by S3; defaults to an empty string when no content type was provided during upload.

Returns:

  • (String)

    Returns the content type as reported by S3; defaults to an empty string when no content type was provided during upload.



# File 'lib/aws/s3/s3_object.rb', line 115

def content_type
  head.content_type
end

#copy_from(source, options = {}) ⇒ nil

Copies data from one S3 object to another.

S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
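
Examples:

Copying data into this object (a sketch; the bucket and key names are placeholders)

obj = bucket.objects["target-key"]
obj.copy_from("source-key")                                  # from the same bucket
obj.copy_from("source-key", :bucket_name => "other-bucket")  # from another bucket
obj.copy_from("source-key", :metadata => { "role" => "copy" })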

Parameters:

  • source (Mixed)
  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The name of the bucket the source object can be found in. Defaults to the current object’s bucket.

  • :bucket (Bucket)

    The bucket the source object can be found in. Defaults to the current object’s bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. When blank, the source's metadata is copied.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :version_id (String) — default: nil

    Causes the copy to read a specific version of the source object.

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 403

def copy_from source, options = {}

  copy_opts = { :bucket_name => bucket.name, :key => key }

  copy_opts[:copy_source] = case source
  when S3Object 
    "#{source.bucket.name}/#{source.key}"
  when ObjectVersion 
    copy_opts[:version_id] = source.version_id
    "#{source.object.bucket.name}/#{source.object.key}"
  else
    case 
    when options[:bucket]      then "#{options[:bucket].name}/#{source}"
    when options[:bucket_name] then "#{options[:bucket_name]}/#{source}"
    else "#{self.bucket.name}/#{source}"
    end
  end

  if options[:metadata]
    copy_opts[:metadata] = options[:metadata]
    copy_opts[:metadata_directive] = 'REPLACE'
  else
    copy_opts[:metadata_directive] = 'COPY'
  end

  copy_opts[:version_id] = options[:version_id] if options[:version_id]

  copy_opts[:storage_class] = 'REDUCED_REDUNDANCY' if
    options[:reduced_redundancy]

  client.copy_object(copy_opts)

  nil

end

#copy_to(target, options = {}) ⇒ nil

Copies data from the current object to another object in S3.

S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
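
Examples:

Copying this object elsewhere (a sketch; the bucket and key names are placeholders)

obj = bucket.objects["my-key"]
obj.copy_to("backups/my-key")                           # within the same bucket
obj.copy_to("my-key", :bucket_name => "backup-bucket")  # into another bucket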

Parameters:

  • target (S3Object, String)

    An S3Object, or the string key of an object to copy to.

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The name of the bucket the object should be copied into. Defaults to the current object’s bucket.

  • :bucket (Bucket)

    The bucket the target object should be copied into. Defaults to the current object’s bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. When blank, the source's metadata is copied.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 458

def copy_to target, options = {}

  unless target.is_a?(S3Object)

    bucket = case
    when options[:bucket] then options[:bucket]
    when options[:bucket_name] 
      Bucket.new(options[:bucket_name], :config => config)
    else self.bucket
    end

    target = S3Object.new(bucket, target)
  end

  copy_opts = options.dup
  copy_opts.delete(:bucket)
  copy_opts.delete(:bucket_name)

  target.copy_from(self, copy_opts)
  
end

#delete(options = {}) ⇒ Response

Deletes the object from its S3 bucket.

Parameters:

  • options (Hash) (defaults to: {})

    A customizable set of options.

Returns:

  • (Response)


# File 'lib/aws/s3/s3_object.rb', line 126

def delete options = {}
  options[:bucket_name] = bucket.name
  options[:key] = key
  client.delete_object(options)
end

#etag ⇒ String

Returns the object’s ETag.

Generally the ETag is the MD5 of the object. If the object was uploaded using a multipart upload, then this is the MD5 of all of the upload-part MD5s.

Returns:

  • (String)

    Returns the object’s ETag



# File 'lib/aws/s3/s3_object.rb', line 102

def etag
  head.etag
end

#head(options = {}) ⇒ Response

Performs a HEAD request against this object and returns an object with useful information about the object, including:

  • metadata (hash of user-supplied key-value pairs)

  • content_length (integer, number of bytes)

  • content_type (as sent to S3 when uploading the object)

  • etag (typically the object’s MD5)
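
Examples:

Inspecting an object with a HEAD request (a sketch; the values shown are placeholders)

resp = object.head
resp.content_length #=> e.g. 1024
resp.content_type   #=> e.g. "text/plain"
resp.etag           #=> e.g. "\"abc123\""
resp.metadata       #=> e.g. { "color" => "red" }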

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Which version of this object to make a HEAD request against.

Returns:

  • (Response)

    A head object response with metadata, content_length, content_type and etag.



# File 'lib/aws/s3/s3_object.rb', line 90

def head options = {}
  client.head_object(options.merge(
    :bucket_name => bucket.name, :key => key))
end

#metadata(options = {}) ⇒ ObjectMetadata

Returns an instance of ObjectMetadata representing the metadata for this object.
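
Examples:

Reading a metadata value (a sketch; assumes the object was written with :metadata => { "color" => "red" } and that ObjectMetadata supports hash-style lookup)

object.metadata["color"] #=> "red"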

Parameters:

  • options (Hash) (defaults to: {})

    A customizable set of options.

Returns:

  • (ObjectMetadata)

    Returns an instance of ObjectMetadata representing the metadata for this object.



# File 'lib/aws/s3/s3_object.rb', line 136

def metadata options = {}
  options[:config] = config
  ObjectMetadata.new(self, options)
end

#multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion

Performs a multipart upload. Use this if you have specific needs for how the upload is split into parts, or if you want to have more control over how the failure of an individual part upload is handled. Otherwise, #write is much simpler to use.

Examples:

Uploading an object in two parts

bucket.objects.myobject.multipart_upload do |upload|
  upload.add_part("a" * 5242880)
  upload.add_part("b" * 2097152)
end

Uploading parts out of order

bucket.objects.myobject.multipart_upload do |upload|
  upload.add_part("b" * 2097152, :part_number => 2)
  upload.add_part("a" * 5242880, :part_number => 1)
end

Aborting an upload after parts have been added

bucket.objects.myobject.multipart_upload do |upload|
  upload.add_part("b" * 2097152, :part_number => 2)
  upload.abort
end

Starting an upload and completing it later by ID

upload = bucket.objects.myobject.multipart_upload
upload.add_part("a" * 5242880)
upload.add_part("b" * 2097152)
id = upload.id

# later or in a different process
upload = bucket.objects.myobject.multipart_uploads[id]
upload.complete(:remote_parts)

Parameters:

  • options (Hash) (defaults to: {})

    Options for the upload.

Options Hash (options):

  • :metadata (Hash)

    A hash of metadata to be included with the object. These will be sent to S3 as headers prefixed with x-amz-meta.

  • :acl (Symbol)

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :reduced_redundancy (Boolean)

    If true, Reduced Redundancy Storage will be enabled for the uploaded object.

  • :cache_control (String)

    Can be used to specify caching behavior. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :content_disposition (String)

    Specifies presentational information for the object. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec19.5.1

  • :content_encoding (String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11

  • :content_type (Object)

    A standard MIME type describing the format of the object data.

Yield Parameters:

Returns:

  • (S3Object, ObjectVersion)

    If the bucket has versioning enabled, returns the ObjectVersion representing the version that was uploaded. If versioning is disabled, returns self.



# File 'lib/aws/s3/s3_object.rb', line 357

def multipart_upload(options = {})
  upload = multipart_uploads.create(options)

  if block_given?
    result = nil
    begin
      yield(upload)
    ensure
      result = upload.close
    end
    result
  else
    upload
  end
end

#multipart_uploads ⇒ ObjectUploadCollection

Returns an object representing the collection of uploads that are in progress for this object.

Examples:

Abort any in-progress uploads for the object:


object.multipart_uploads.each(&:abort)

Returns:

  • (ObjectUploadCollection)

    Returns an object representing the collection of uploads that are in progress for this object.



# File 'lib/aws/s3/s3_object.rb', line 379

def multipart_uploads
  ObjectUploadCollection.new(self)
end

#presigned_post(options = {}) ⇒ PresignedPost

Generates fields for a presigned POST to this object. This method adds a constraint that the key must match the key of this object. All options are sent to the PresignedPost constructor.
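
Examples:

Building a browser-based upload form (a sketch; assumes the returned PresignedPost exposes url and fields as described in its own documentation)

post = object.presigned_post
post.url    # the URL the HTML form should POST to
post.fields # hidden fields to include in the form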

Returns:

  • (PresignedPost)
See Also:



# File 'lib/aws/s3/s3_object.rb', line 662

def presigned_post(options = {})
  PresignedPost.new(bucket, options.merge(:key => key))
end

#public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS

Generates a public (not authenticated) URL for the object.
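
Examples:

Generating public URLs (a sketch)

object.public_url                   #=> an https:// URI for the object
object.public_url(:secure => false) #=> an http:// URI for the object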

Parameters:

  • options (Hash) (defaults to: {})

    Options for generating the URL.

Options Hash (options):

  • :secure (Boolean)

    Whether to generate a secure (HTTPS) URL or a plain HTTP url.

Returns:

  • (URI::HTTP, URI::HTTPS)


# File 'lib/aws/s3/s3_object.rb', line 650

def public_url(options = {})
  req = request_for_signing(options)
  build_uri(options[:secure] != false, req)
end

#read(options = {}, &blk) ⇒ Object

Fetches the object data from S3.

Examples:

Reading data as a string

object.write('some data')
object.read
#=> 'some data'
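
Reading a byte range (a sketch; assumes the object contains at least 100 bytes)

object.read(:range => 0..99) #=> the first 100 bytes of the object data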

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Reads data from a specific version of this object.

  • :if_unmodified_since (Time)

    Causes #read to return nil if the object was modified since the given time.

  • :if_modified_since (Time)

    Causes #read to return nil unless the object was modified since the given time.

  • :if_match (String)

    If specified, the method will return nil (and not fetch any data) unless the object's ETag matches the given value.

  • :range (Range)

    A byte range to read data from



# File 'lib/aws/s3/s3_object.rb', line 499

def read(options = {}, &blk)
  options[:bucket_name] = bucket.name
  options[:key] = key
  client.get_object(options).data
end

#url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS

Generates a presigned URL for an operation on this object. This URL can be used by a regular HTTP client to perform the desired operation without credentials and without changing the permissions of the object.

Examples:

Generate a url to read an object

bucket.objects.myobject.url_for(:read)

Generate a url to delete an object

bucket.objects.myobject.url_for(:delete)

Override response headers for reading an object

object = bucket.objects.myobject
url = object.url_for(:read, :response_content_type => "application/json")

Generate a url that expires in 10 minutes

bucket.objects.myobject.url_for(:read, :expires => 10*60)

Parameters:

  • method (Symbol, String)

    The HTTP verb or object method for which the returned URL will be valid. Valid values:

    • :get or :read

    • :put or :write

    • :delete

  • options (Hash) (defaults to: {})

    Additional options for generating the URL.

Options Hash (options):

  • :expires (Object)

    Sets the expiration time of the URL; after this time S3 will return an error if the URL is used. This can be an integer (to specify the number of seconds after the current time), a string (which is parsed as a date using Time#parse), a Time, or a DateTime object. This option defaults to one hour after the current time.

  • :secure (String)

    Whether to generate a secure (HTTPS) URL or a plain HTTP url.

  • :response_content_type (String)

    Sets the Content-Type header of the response when performing an HTTP GET on the returned URL.

  • :response_content_language (String)

    Sets the Content-Language header of the response when performing an HTTP GET on the returned URL.

  • :response_expires (String)

    Sets the Expires header of the response when performing an HTTP GET on the returned URL.

  • :response_cache_control (String)

    Sets the Cache-Control header of the response when performing an HTTP GET on the returned URL.

  • :response_content_disposition (String)

    Sets the Content-Disposition header of the response when performing an HTTP GET on the returned URL.

  • :response_content_encoding (String)

    Sets the Content-Encoding header of the response when performing an HTTP GET on the returned URL.

Returns:

  • (URI::HTTP, URI::HTTPS)


# File 'lib/aws/s3/s3_object.rb', line 631

def url_for(method, options = {})
  req = request_for_signing(options)

  method = http_method(method)
  expires = expiration_timestamp(options[:expires])
  req.add_param("AWSAccessKeyId", config.signer.access_key_id)
  req.add_param("Signature", signature(method, expires, req))
  req.add_param("Expires", expires)

  build_uri(options[:secure] != false, req)
end

#versions ⇒ ObjectVersionCollection

Returns a collection representing all the object versions for this object.

bucket.versioning_enabled? # => true
version = bucket.objects["mykey"].versions.latest


# File 'lib/aws/s3/s3_object.rb', line 148

def versions
  ObjectVersionCollection.new(self)
end

#write(options = {}) ⇒ S3Object, ObjectVersion
#write(data, options = {}) ⇒ S3Object, ObjectVersion

Writes data to the object in S3. This method will attempt to intelligently choose between uploading in one request and using #multipart_upload.

Unless versioning is enabled, any data currently in S3 at #key will be replaced.

You can pass :data or :file as the first argument or as options. Example usage:

obj = s3.buckets.mybucket.objects.mykey
obj.write("HELLO")
obj.write(:data => "HELLO")
obj.write(Pathname.new("myfile"))
obj.write(:file => "myfile")

# writes zero-length data
obj.write(:metadata => { "avg-rating" => "5 stars" })
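
# a sketch: uploading from an IO-like object; :content_length is required
# only when the IO does not respond to #size (StringIO does, so it is
# shown here purely for illustration)
require 'stringio'
io = StringIO.new("HELLO")
obj.write(io, :content_length => io.size)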

Parameters:

  • data

    The data to upload (see the :data option).

  • options (Hash) (defaults to: nil)

    Additional upload options.

Options Hash (options):

  • :data (Object)

    The data to upload. Valid values include:

    • A string

    • A Pathname object

    • Any object responding to read and eof?; the object must support the following access methods:

      read                     # all at once
      read(length) until eof?  # in chunks
      

      If you specify data this way, you must also include the :content_length option.

  • :file (String)

    Can be specified instead of :data; its value specifies the path of a file to upload.

  • :single_request (Boolean)

    If this option is true, the method will always generate exactly one request to S3 regardless of how much data is being uploaded.

  • :content_length (Integer)

    If provided, this option must match the total number of bytes written to S3 during the operation. This option is required if :data is an IO-like object without a size method.

  • :multipart_threshold (Integer)

    Specifies the maximum size in bytes of a single-request upload. If the data being uploaded is larger than this threshold, it will be uploaded using #multipart_upload.

  • :multipart_min_part_size (Integer)

    The minimum size of a part if a multi-part upload is used. S3 will reject non-final parts smaller than 5MB, and the default for this option is 5MB.

  • :metadata (Hash)

    A hash of metadata to be included with the object. These will be sent to S3 as headers prefixed with x-amz-meta.

  • :acl (Symbol)

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :storage_class (Symbol)

    Controls whether Reduced Redundancy Storage is enabled for the object. Valid values are :standard (the default) or :reduced_redundancy.

  • :cache_control (String)

    Can be used to specify caching behavior. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :content_disposition (String)

    Specifies presentational information for the object. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec19.5.1

  • :content_encoding (String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11

  • :content_type (Object)

    A standard MIME type describing the format of the object data.

Returns:

  • (S3Object, ObjectVersion)

    If the bucket has versioning enabled, returns the ObjectVersion representing the version that was uploaded. If versioning is disabled, returns self.



# File 'lib/aws/s3/s3_object.rb', line 254

def write(options_or_data = nil, options = nil)

  (data_options, put_options) =
    compute_put_options(options_or_data, options)

  if use_multipart?(data_options, put_options)
    put_options.delete(:multipart_threshold)
    multipart_upload(put_options) do |upload|
      each_part(data_options, put_options) do |part|
        upload.add_part(part)
      end
    end
  else
    opts = { :bucket_name => bucket.name, :key => key }
    resp = client.put_object(opts.merge(put_options).merge(data_options))
    if resp.version_id
      ObjectVersion.new(self, resp.version_id)
    else
      self
    end
  end
end