Class: AWS::S3::S3Object

Inherits:
Object
Includes:
DataOptions
Defined in:
lib/aws/s3/s3_object.rb

Overview

Represents an object in S3 identified by a key.

object = bucket.objects["key-to-my-object"]
object.key #=> 'key-to-my-object'

See ObjectCollection for more information on finding objects.

Writing and Reading S3Objects

obj = bucket.objects["my-text-object"]

obj.write("MY TEXT")
obj.read
#=> "MY TEXT"

obj.write(File.new("README.txt"))
obj.read
# should equal File.read("README.txt")

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(bucket, key, opts = {}) ⇒ S3Object

Returns a new instance of S3Object.

Parameters:

  • bucket (Bucket)

    The bucket this object belongs to.

  • key (String)

    The object’s key.



# File 'lib/aws/s3/s3_object.rb', line 45

def initialize(bucket, key, opts = {})
  super
  @key = key
  @bucket = bucket
end

Instance Attribute Details

#bucket ⇒ Bucket (readonly)

Returns The bucket this object is in.

Returns:

  • (Bucket)

    The bucket this object is in.



# File 'lib/aws/s3/s3_object.rb', line 55

def bucket
  @bucket
end

#key ⇒ String (readonly)

Returns the object's unique key.

Returns:

  • (String)

    The object's unique key.



# File 'lib/aws/s3/s3_object.rb', line 52

def key
  @key
end

Instance Method Details

#==(other) ⇒ Boolean Also known as: eql?

Returns true if the other object belongs to the same bucket and has the same key.

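For illustration, a minimal sketch of comparing object handles (assuming bucket is an existing Bucket; the keys below are hypothetical):

a = bucket.objects['my-key']
b = bucket.objects['my-key']

a == b                            #=> true
a.eql?(b)                         #=> true
a == bucket.objects['other-key']  #=> false
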
Returns:

  • (Boolean)

    Returns true if the other object belongs to the same bucket and has the same key.



# File 'lib/aws/s3/s3_object.rb', line 64

def ==(other)
  other.kind_of?(S3Object) and other.bucket == bucket and other.key == key
end

#acl ⇒ AccessControlList

Returns the object’s access control list. This will be an instance of AccessControlList, plus an additional change method:

object.acl.change do |acl|
  # remove any grants to someone other than the bucket owner
  owner_id = object.bucket.owner.id
  acl.grants.reject! do |g|
    g.grantee.canonical_user_id != owner_id
  end
end

Note that changing the ACL is not an atomic operation; it fetches the current ACL, yields it to the block, and then sets it again. Therefore, it’s possible that you may overwrite a concurrent update to the ACL using this method.

Returns:

  • (AccessControlList)

# File 'lib/aws/s3/s3_object.rb', line 742

def acl
  acl = client.get_object_acl(
    :bucket_name => bucket.name,
    :key => key
  ).acl
  acl.extend ACLProxy
  acl.object = self
  acl
end

#acl=(acl) ⇒ nil

Sets the object’s access control list. acl can be any of the following (see the example after this list):

  • An XML policy as a string (which is passed to S3 uninterpreted)

  • An AccessControlList object

  • Any object that responds to to_xml

  • Any Hash that is acceptable as an argument to AccessControlList#initialize.

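For example, a hedged sketch that copies an ACL from one object to another (#acl returns an AccessControlList, which responds to to_xml; the keys below are hypothetical):

template = bucket.objects['template-object']
bucket.objects['target-object'].acl = template.acl
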
Parameters:

  • acl (String, AccessControlList, Hash)

    The new access control list (any of the acceptable values listed above).
Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 761

def acl=(acl)
  client.set_object_acl(
    :bucket_name => bucket.name,
    :key => key,
    :acl => acl)
  nil
end

#content_lengthInteger

Returns Size of the object in bytes.

Returns:

  • (Integer)

    Size of the object in bytes.



# File 'lib/aws/s3/s3_object.rb', line 117

def content_length
  head.content_length
end

#content_typeString

Note:

S3 does not compute the content type; it reports the content type that was supplied when the object was uploaded.

Returns the content type as reported by S3; defaults to an empty string when no content type was provided during upload.

Returns:

  • (String)

    Returns the content type as reported by S3; defaults to an empty string when no content type was provided during upload.



# File 'lib/aws/s3/s3_object.rb', line 125

def content_type
  head.content_type
end

#copy_from(source, options = {}) ⇒ nil

Note:

This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from the source object. If you don’t specify any of these options when copying, the object will have the default values as described below.

Copies data from one S3 object to another.

S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.

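A minimal sketch (the bucket and key names below are hypothetical):

obj = bucket.objects['backups/my-object']
obj.copy_from('my-object')                                   # copy from a key in the same bucket
obj.copy_from('my-object', :bucket_name => 'source-bucket')  # copy from another bucket
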
Parameters:

  • source (Mixed)

    The object to copy data from. May be an S3Object, an ObjectVersion, or a string key.

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The name of the bucket the source object can be found in. Defaults to the current object’s bucket.

  • :bucket (Bucket)

    The bucket the source object can be found in. Defaults to the current object’s bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source’s metadata is copied.

  • :content_type (String)

    The content type of the copied object. Defaults to the source object’s content type.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :version_id (String) — default: nil

    Causes the copy to read a specific version of the source object.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :cache_control (String)

    Can be used to specify caching behavior. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 547

def copy_from source, options = {}

  copy_opts = { :bucket_name => bucket.name, :key => key }

  copy_opts[:copy_source] = case source
  when S3Object 
    "#{source.bucket.name}/#{source.key}"
  when ObjectVersion 
    copy_opts[:version_id] = source.version_id
    "#{source.object.bucket.name}/#{source.object.key}"
  else
    case 
    when options[:bucket]      then "#{options[:bucket].name}/#{source}"
    when options[:bucket_name] then "#{options[:bucket_name]}/#{source}"
    else "#{self.bucket.name}/#{source}"
    end
  end

  copy_opts[:metadata_directive] = 'COPY'

  if options[:metadata]
    copy_opts[:metadata] = options[:metadata]
    copy_opts[:metadata_directive] = 'REPLACE'
  end

  if options[:content_type]
    copy_opts[:content_type] = options[:content_type]
    copy_opts[:metadata_directive] = "REPLACE"
  end

  copy_opts[:acl] = options[:acl] if options[:acl]
  copy_opts[:version_id] = options[:version_id] if options[:version_id]
  copy_opts[:server_side_encryption] =
    options[:server_side_encryption] if
    options.key?(:server_side_encryption)
  copy_opts[:cache_control] = options[:cache_control] if 
    options[:cache_control]
  add_configured_write_options(copy_opts)

  if options[:reduced_redundancy]
    copy_opts[:storage_class] = 'REDUCED_REDUNDANCY'
  else
    copy_opts[:storage_class] = 'STANDARD'
  end

  client.copy_object(copy_opts)

  nil

end

#copy_to(target, options = {}) ⇒ S3Object

Note:

This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from this object to the new object. If you don’t specify any of these options when copying, the new object will have the default values as described below.

Copies data from the current object to another object in S3.

S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.

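A minimal sketch (the bucket and key names below are hypothetical):

obj = bucket.objects['my-object']
same_bucket_copy  = obj.copy_to('backups/my-object')
other_bucket_copy = obj.copy_to('my-object', :bucket_name => 'backup-bucket')
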
Parameters:

  • target (S3Object, String)

    An S3Object, or the string key of an object to copy to.

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The name of the bucket the object should be copied into. Defaults to the current object’s bucket.

  • :bucket (Bucket)

    The bucket the target object should be copied into. Defaults to the current object’s bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source’s metadata is copied.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

Returns:

  • (S3Object)

    Returns the copy (target) object.



# File 'lib/aws/s3/s3_object.rb', line 651

def copy_to target, options = {}

  unless target.is_a?(S3Object)

    bucket = case
    when options[:bucket] then options[:bucket]
    when options[:bucket_name] 
      Bucket.new(options[:bucket_name], :config => config)
    else self.bucket
    end

    target = S3Object.new(bucket, target)
  end

  copy_opts = options.dup
  copy_opts.delete(:bucket)
  copy_opts.delete(:bucket_name)

  target.copy_from(self, copy_opts)
  target
  
end

#delete(options = {}) ⇒ nil

Deletes the object from its S3 bucket.

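A minimal sketch (the key is hypothetical):

bucket.objects['my-object'].delete   #=> nil
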
Parameters:

  • options (Hash) (defaults to: {})

    A customizable set of options.

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 159

def delete options = {}
  options[:bucket_name] = bucket.name
  options[:key] = key
  client.delete_object(options)
  nil
end

#etag ⇒ String

Returns the object’s ETag.

Generally the ETag is the MD5 of the object. If the object was uploaded using multipart upload, then this is the MD5 of all of the upload part MD5s.

Returns:

  • (String)

    Returns the object’s ETag



# File 'lib/aws/s3/s3_object.rb', line 105

def etag
  head.etag
end

#exists? ⇒ Boolean

Returns true if the object exists in S3.

Returns:

  • (Boolean)

    Returns true if the object exists in S3.


# File 'lib/aws/s3/s3_object.rb', line 70

def exists?
  head
rescue Errors::NoSuchKey => e
  false
else
  true
end

#expiration_date ⇒ DateTime?

Returns:

  • (DateTime, nil)


# File 'lib/aws/s3/s3_object.rb', line 130

def expiration_date
  head.expiration_date
end

#expiration_rule_id ⇒ String?

Returns:

  • (String, nil)


# File 'lib/aws/s3/s3_object.rb', line 135

def expiration_rule_id
  head.expiration_rule_id
end

#head(options = {}) ⇒ Object

Performs a HEAD request against this object and returns an object with useful information about the object, including the following (see the example after this list):

  • metadata (hash of user-supplied key-value pairs)

  • content_length (integer, number of bytes)

  • content_type (as sent to S3 when uploading the object)

  • etag (typically the object’s MD5)

  • server_side_encryption (the algorithm used to encrypt the object on the server side, e.g. :aes256)

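A minimal sketch of inspecting the head response (the values shown are hypothetical):

h = bucket.objects['my-object'].head
h.content_length          #=> 1024
h.content_type            #=> "text/plain"
h.metadata                #=> { "color" => "red" }
h.server_side_encryption  #=> nil
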
Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Which version of this object to make a HEAD request against.

Returns:

  • A head object response with metadata, content_length, content_type, etag, and server_side_encryption.



# File 'lib/aws/s3/s3_object.rb', line 93

def head options = {}
  client.head_object(options.merge(
    :bucket_name => bucket.name, :key => key))
end

#last_modified ⇒ Time

Returns the object’s last modified time.

Returns:

  • (Time)

    Returns the object’s last modified time.



# File 'lib/aws/s3/s3_object.rb', line 112

def last_modified
  head.last_modified
end

#metadata(options = {}) ⇒ ObjectMetadata

Returns an instance of ObjectMetadata representing the metadata for this object.

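A hedged sketch, assuming ObjectMetadata's hash-style accessors ([] and []=); the key and values below are hypothetical:

meta = bucket.objects['my-object'].metadata
meta['color']            #=> "red"
meta['color'] = 'blue'   # updating metadata rewrites the object in place
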
Parameters:

  • options (Hash) (defaults to: {})

    A customizable set of options.

Returns:

  • (ObjectMetadata)

    Returns an instance of ObjectMetadata representing the metadata for this object.



# File 'lib/aws/s3/s3_object.rb', line 170

def metadata options = {}
  options[:config] = config
  ObjectMetadata.new(self, options)
end

#move_to(target, options = {}) ⇒ S3Object Also known as: rename_to

Moves an object to a new key.

This works by copying the object to a new key and then deleting the old object. This function returns the new object once this is done.

bucket = s3.buckets['old-bucket']
old_obj = bucket.objects['old-key']

# renaming an object returns a new object
new_obj = old_obj.move_to('new-key')

old_obj.key     #=> 'old-key'
old_obj.exists? #=> false

new_obj.key     #=> 'new-key'
new_obj.exists? #=> true

If you need to move an object to a different bucket, pass :bucket or :bucket_name.

obj = s3.buckets['old-bucket'].objects['old-key']
obj.move_to('new-key', :bucket_name => 'new_bucket')

If the copy succeeds but the delete then fails, an error will be raised.

Parameters:

  • target (String)

    The key to move this object to.

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The name of the bucket the object should be copied into. Defaults to the current object’s bucket.

  • :bucket (Bucket)

    The bucket the target object should be copied into. Defaults to the current object’s bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source’s metadata is copied.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

Returns:

  • (S3Object)

    Returns a new object with the new key.



# File 'lib/aws/s3/s3_object.rb', line 479

def move_to target, options = {}
  copy = copy_to(target, options)
  delete
  copy
end

#multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion

Performs a multipart upload. Use this if you have specific needs for how the upload is split into parts, or if you want to have more control over how the failure of an individual part upload is handled. Otherwise, #write is much simpler to use.

Examples:

Uploading an object in two parts

bucket.objects.myobject.multipart_upload do |upload|
  upload.add_part("a" * 5242880)
  upload.add_part("b" * 2097152)
end

Uploading parts out of order

bucket.objects.myobject.multipart_upload do |upload|
  upload.add_part("b" * 2097152, :part_number => 2)
  upload.add_part("a" * 5242880, :part_number => 1)
end

Aborting an upload after parts have been added

bucket.objects.myobject.multipart_upload do |upload|
  upload.add_part("b" * 2097152, :part_number => 2)
  upload.abort
end

Starting an upload and completing it later by ID

upload = bucket.objects.myobject.multipart_upload
upload.add_part("a" * 5242880)
upload.add_part("b" * 2097152)
id = upload.id

# later or in a different process
upload = bucket.objects.myobject.multipart_uploads[id]
upload.complete(:remote_parts)

Parameters:

  • options (Hash) (defaults to: {})

    Options for the upload.

Options Hash (options):

  • :metadata (Hash)

    A hash of metadata to be included with the object. These will be sent to S3 as headers prefixed with x-amz-meta. Each name, value pair must conform to US-ASCII.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :reduced_redundancy (Boolean) — default: false

    If true, Reduced Redundancy Storage will be enabled for the uploaded object.

  • :cache_control (String)

    Can be used to specify caching behavior. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :content_disposition (String)

    Specifies presentational information for the object. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec19.5.1

  • :content_encoding (String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11

  • :content_type (Object)

    A standard MIME type describing the format of the object data.

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

Yield Parameters:

  • upload (MultipartUpload)

    The upload to add parts to; it can also be aborted from within the block.
Returns:

  • (S3Object, ObjectVersion)

    If the bucket has versioning enabled, returns the ObjectVersion representing the version that was uploaded. If versioning is disabled, returns self.



# File 'lib/aws/s3/s3_object.rb', line 415

def multipart_upload(options = {})

  options = options.dup
  add_configured_write_options(options)

  upload = multipart_uploads.create(options)

  if block_given?
    begin
      yield(upload)
      upload.close
    rescue
      upload.abort
    end
  else
    upload
  end
end

#multipart_uploads ⇒ ObjectUploadCollection

Returns an object representing the collection of uploads that are in progress for this object.

Examples:

Abort any in-progress uploads for the object:


object.multipart_uploads.each(&:abort)

Returns:

  • (ObjectUploadCollection)

    Returns an object representing the collection of uploads that are in progress for this object.



# File 'lib/aws/s3/s3_object.rb', line 440

def multipart_uploads
  ObjectUploadCollection.new(self)
end

#presigned_post(options = {}) ⇒ PresignedPost

Generates fields for a presigned POST to this object. This method adds a constraint that the key must match the key of this object. All options are sent to the PresignedPost constructor.

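A hedged sketch of building a browser-based upload form (assuming PresignedPost's url and fields accessors; the key is hypothetical):

post = bucket.objects['uploads/image.png'].presigned_post
post.url      # the URL the HTML form should POST to
post.fields   # the hidden form fields to include with the upload
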
Returns:

  • (PresignedPost)

See Also:

  • PresignedPost


# File 'lib/aws/s3/s3_object.rb', line 872

def presigned_post(options = {})
  PresignedPost.new(bucket, options.merge(:key => key))
end

#public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS

Generates a public (not authenticated) URL for the object.

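A minimal sketch:

object.public_url                     #=> an HTTPS URI for the object
object.public_url(:secure => false)   #=> an HTTP URI for the object
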
Parameters:

  • options (Hash) (defaults to: {})

    Options for generating the URL.

Options Hash (options):

  • :secure (Boolean)

    Whether to generate a secure (HTTPS) URL or a plain HTTP URL.

Returns:

  • (URI::HTTP, URI::HTTPS)


# File 'lib/aws/s3/s3_object.rb', line 860

def public_url(options = {})
  req = request_for_signing(options)
  build_uri(options[:secure] != false, req)
end

#read(options = {}, &blk) ⇒ Object

Fetches the object data from S3.

Examples:

Reading data as a string

object.write('some data')
object.read
#=> 'some data'

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Reads data from a specific version of this object.

  • :if_unmodified_since (Time)

    If specified, the method will raise AWS::S3::Errors::PreconditionFailed if the object has been modified since the given time.

  • :if_modified_since (Time)

    If specified, the method will raise AWS::S3::Errors::NotModified if the object has not been modified since the given time.

  • :if_match (String)

    If specified, the method will raise AWS::S3::Errors::PreconditionFailed unless the object ETag matches the provided value.

  • :if_none_match (String)

    If specified, the method will raise AWS::S3::Errors::NotModified if the object ETag matches the provided value.

  • :range (Range)

    A byte range to read data from



# File 'lib/aws/s3/s3_object.rb', line 704

def read(options = {}, &blk)
  options[:bucket_name] = bucket.name
  options[:key] = key
  client.get_object(options).data
end

#reduced_redundancy=(value) ⇒ true, false

Note:

Changing the storage class of an object incurs a COPY operation.

Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).

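A minimal sketch (the key is hypothetical):

obj = bucket.objects['my-object']
obj.reduced_redundancy = true    # copies the object in place with the REDUCED_REDUNDANCY storage class
obj.reduced_redundancy = false   # copies it back to the STANDARD storage class
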
Parameters:

  • value (true, false)

    If this is true, the object will be copied in place and stored with reduced redundancy at a lower cost. Otherwise, the object will be copied and stored with the standard storage class.

Returns:

  • (true, false)

    The value parameter.



# File 'lib/aws/s3/s3_object.rb', line 888

def reduced_redundancy= value
  copy_from(key, :reduced_redundancy => value)
  value
end

#server_side_encryption ⇒ Symbol?

Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.

Returns:

  • (Symbol, nil)

    Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.



# File 'lib/aws/s3/s3_object.rb', line 142

def server_side_encryption
  head.server_side_encryption
end

#server_side_encryption? ⇒ true, false

Returns true if the object was stored using server side encryption.

Returns:

  • (true, false)

    Returns true if the object was stored using server side encryption.



# File 'lib/aws/s3/s3_object.rb', line 148

def server_side_encryption?
  !server_side_encryption.nil?
end

#url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS

Generates a presigned URL for an operation on this object. This URL can be used by a regular HTTP client to perform the desired operation without credentials and without changing the permissions of the object.

Examples:

Generate a url to read an object

bucket.objects.myobject.url_for(:read)

Generate a url to delete an object

bucket.objects.myobject.url_for(:delete)

Override response headers for reading an object

object = bucket.objects.myobject
url = object.url_for(:read, :response_content_type => "application/json")

Generate a url that expires in 10 minutes

bucket.objects.myobject.url_for(:read, :expires => 10*60)

Parameters:

  • method (Symbol, String)

    The HTTP verb or object method for which the returned URL will be valid. Valid values:

    • :get or :read

    • :put or :write

    • :delete

  • options (Hash) (defaults to: {})

    Additional options for generating the URL.

Options Hash (options):

  • :expires (Object)

    Sets the expiration time of the URL; after this time S3 will return an error if the URL is used. This can be an integer (to specify the number of seconds after the current time), a string (which is parsed as a date using Time#parse), a Time, or a DateTime object. This option defaults to one hour after the current time.

  • :secure (Boolean)

    Whether to generate a secure (HTTPS) URL or a plain HTTP URL.

  • :response_content_type (String)

    Sets the Content-Type header of the response when performing an HTTP GET on the returned URL.

  • :response_content_language (String)

    Sets the Content-Language header of the response when performing an HTTP GET on the returned URL.

  • :response_expires (String)

    Sets the Expires header of the response when performing an HTTP GET on the returned URL.

  • :response_cache_control (String)

    Sets the Cache-Control header of the response when performing an HTTP GET on the returned URL.

  • :response_content_disposition (String)

    Sets the Content-Disposition header of the response when performing an HTTP GET on the returned URL.

  • :response_content_encoding (String)

    Sets the Content-Encoding header of the response when performing an HTTP GET on the returned URL.

Returns:

  • (URI::HTTP, URI::HTTPS)


# File 'lib/aws/s3/s3_object.rb', line 836

def url_for(method, options = {})
  req = request_for_signing(options)

  method = http_method(method)
  expires = expiration_timestamp(options[:expires])
  req.add_param("AWSAccessKeyId", config.signer.access_key_id)
  req.add_param("versionId", options[:version_id]) if options[:version_id]
  req.add_param("Signature", signature(method, expires, req))
  req.add_param("Expires", expires)
  req.add_param("x-amz-security-token", config.signer.session_token) if
    config.signer.session_token

  build_uri(options[:secure] != false, req)
end

#versions ⇒ ObjectVersionCollection

Returns a collection representing all the object versions for this object.

bucket.versioning_enabled? # => true
version = bucket.objects["mykey"].versions.latest

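A hedged sketch of enumerating versions (assuming the collection is enumerable, like other collections in this SDK):

bucket.objects['mykey'].versions.each do |version|
  puts version.version_id
end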

# File 'lib/aws/s3/s3_object.rb', line 182

def versions
  ObjectVersionCollection.new(self)
end

#write(options = {}) ⇒ S3Object, ObjectVersion #write(data, options = {}) ⇒ S3Object, ObjectVersion

Writes data to the object in S3. This method will attempt to intelligently choose between uploading in one request and using #multipart_upload.

Unless versioning is enabled, any data currently in S3 at #key will be replaced.

You can pass :data or :file as the first argument or as options. Example usage:

obj = s3.buckets.mybucket.objects.mykey
obj.write("HELLO")
obj.write(:data => "HELLO")
obj.write(Pathname.new("myfile"))
obj.write(:file => "myfile")

# writes zero-length data
obj.write(:metadata => { "avg-rating" => "5 stars" })

Parameters:

  • data

    The data to upload (see the :data option).

  • options (Hash) (defaults to: nil)

    Additional upload options.

Options Hash (options):

  • :data (Object)

    The data to upload. Valid values include:

    • A string

    • A Pathname object

    • Any object responding to read and eof?; the object must support the following access methods:

      read                     # all at once
      read(length) until eof?  # in chunks
      

      If you specify data this way, you must also include the :content_length option.

  • :file (String)

    Can be specified instead of :data; its value specifies the path of a file to upload.

  • :single_request (Boolean)

    If this option is true, the method will always generate exactly one request to S3 regardless of how much data is being uploaded.

  • :content_length (Integer)

    If provided, this option must match the total number of bytes written to S3 during the operation. This option is required if :data is an IO-like object without a size method.

  • :multipart_threshold (Integer)

    Specifies the maximum size in bytes of a single-request upload. If the data being uploaded is larger than this threshold, it will be uploaded using #multipart_upload.

  • :multipart_min_part_size (Integer)

    The minimum size of a part if a multi-part upload is used. S3 will reject non-final parts smaller than 5MB, and the default for this option is 5MB.

  • :metadata (Hash)

    A hash of metadata to be included with the object. These will be sent to S3 as headers prefixed with x-amz-meta. Each name, value pair must conform to US-ASCII.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :storage_class (Symbol)

    Controls whether Reduced Redundancy Storage is enabled for the object. Valid values are :standard (the default) or :reduced_redundancy.

  • :cache_control (String)

    Can be used to specify caching behavior. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :content_disposition (String)

    Specifies presentational information for the object. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec19.5.1

  • :content_encoding (String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11

  • :content_type (Object)

    A standard MIME type describing the format of the object data.

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

Returns:

  • (S3Object, ObjectVersion)

    If the bucket has versioning enabled, returns the ObjectVersion representing the version that was uploaded. If versioning is disabled, returns self.



# File 'lib/aws/s3/s3_object.rb', line 299

def write(options_or_data = nil, options = nil)

  (data_options, put_options) =
    compute_put_options(options_or_data, options)

  add_configured_write_options(put_options)

  if use_multipart?(data_options, put_options)
    put_options.delete(:multipart_threshold)
    multipart_upload(put_options) do |upload|
      each_part(data_options, put_options) do |part|
        upload.add_part(part)
      end
    end
  else
    opts = { :bucket_name => bucket.name, :key => key }
    resp = client.put_object(opts.merge(put_options).merge(data_options))
    if resp.version_id
      ObjectVersion.new(self, resp.version_id)
    else
      self
    end
  end
end