Class: AWS::S3::S3Object

Inherits:
Object
Defined in:
lib/aws/s3/s3_object.rb

Overview

Represents an object in S3. Objects live in a bucket and have unique keys.

Getting Objects

You can get an object by its key.

s3 = AWS::S3.new
obj = s3.buckets['my-bucket'].objects['key'] # no request made

You can also get objects by enumerating the objects in a bucket.

bucket.objects.each do |obj|
  puts obj.key
end

See ObjectCollection for more information on finding objects.

Creating Objects

You create an object by writing to it. The following two expressions are equivalent.

obj = bucket.objects.create('key', 'data')
obj = bucket.objects['key'].write('data')

Writing Objects

To upload data to S3, you simply need to call #write on an object.

obj.write('Hello World!')
obj.read
#=> 'Hello World!'

Uploading Files

You can upload a file to S3 in a variety of ways. Given a path to a file (as a string) you can do any of the following:

# specify the data as a path to a file
obj.write(Pathname.new(path_to_file))

# also works this way
obj.write(:file => path_to_file)

# Also accepts an open file object
file = File.open(path_to_file, 'r')
obj.write(file)

All three examples above produce the same result. The file will be streamed to S3 in chunks. It will not be loaded entirely into memory.

Streaming Uploads

When you call #write with any IO-like object (must respond to #read and #eof?), it will be streamed to S3 in chunks.

While it is possible to determine the size of many IO objects, you may have to specify the :content_length of your IO object. If the exact size cannot be known, you may provide an :estimated_content_length instead. Depending on the size (actual or estimated) of your data, it will be uploaded in a single request or in multiple requests via #multipart_upload.
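
For example (a rough sketch; io stands in for any IO-like object you already have open):

# exact size known up front
obj.write(io, :content_length => 1024)

# exact size unknown; the estimate lets #write decide between a
# single request and a multipart upload
obj.write(io, :estimated_content_length => 250 * 1024 * 1024)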

You may also stream uploads to S3 using a block:

obj.write do |buffer, bytes|
  # writing fewer than the requested number of bytes to the buffer
  # will cause write to stop yielding to the block
end
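
As a rough sketch (assuming the block form accepts :content_length alongside the block), a local file could be streamed like this:

source = File.open('path/to/large-file', 'rb')
obj.write(:content_length => source.size) do |buffer, bytes|
  # copy up to `bytes` bytes into the buffer; writing fewer bytes
  # (for example at end of file) ends the upload
  chunk = source.read(bytes)
  buffer.write(chunk) if chunk
end
source.close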

Reading Objects

You can read an object directly using #read. Be warned, this will load the entire object into memory and is not recommended for large objects.

obj.write('abc')
puts obj.read
#=> abc

Streaming Downloads

If you want to stream an object from S3, you can pass a block to #read.

File.open('output', 'w') do |file|
  large_object.read do |chunk|
    file.write(chunk)
  end
end

Encryption

Amazon S3 can encrypt objects for you server-side. You can also use client-side encryption.

Server Side Encryption

Amazon S3 provides server side encryption for an additional cost. You can specify to use server side encryption when writing an object.

obj.write('data', :server_side_encryption => :aes256)

You can also make this the default behavior.

AWS.config(:s3_server_side_encryption => :aes256)

s3 = AWS::S3.new
s3.buckets['name'].objects['key'].write('abc') # will be encrypted

Client Side Encryption

Client side encryption utilizes envelope encryption, so that your keys are never sent to S3. You can use a symmetric key or an asymmetric key pair.

Symmetric Key Encryption

An AES key is used for symmetric encryption. The key can be 128, 192, or 256 bits in size. Start by generating a new key or reading a previously generated key.

# generate a new random key
my_key = OpenSSL::Cipher.new("AES-256-ECB").random_key

# read an existing key from disk
my_key = File.read("my_key.der")

Now you can encrypt locally and upload the encrypted data to S3. To do this, you need to provide your key.

obj = bucket.objects["my-text-object"]

# encrypt then upload data
obj.write("MY TEXT", :encryption_key => my_key)

# try to read the object without decrypting, oops
obj.read
#=> '.....'

Lastly, you can download and decrypt by providing the same key.

obj.read(:encryption_key => my_key)
#=> "MY TEXT"

Asymmetric Key Pair

An RSA key pair is used for asymmetric encryption. The public key is used for encryption and the private key is used for decryption. Start by generating a key.

my_key = OpenSSL::PKey::RSA.new(1024)

Provide your key to #write and the data will be encrypted before it is uploaded. Pass the same key to #read to decrypt the data when you download it.

obj = bucket.objects["my-text-object"]

# encrypt and upload the data
obj.write("MY TEXT", :encryption_key => my_key)

# download and decrypt the data
obj.read(:encryption_key => my_key)
#=> "MY TEXT"

Configuring storage locations

By default, encryption materials are stored in the object metadata. If you prefer, you can store the encryption materials in a separate object in S3. This object will have the same key with '.instruction' appended.

# new object, does not exist yet
obj = bucket.objects["my-text-object"]

# no instruction file present
bucket.objects['my-text-object.instruction'].exists?
#=> false

# store the encryption materials in the instruction file
# instead of obj#metadata
obj.write("MY TEXT",
  :encryption_key => MY_KEY,
  :encryption_materials_location => :instruction_file)

bucket.objects['my-text-object.instruction'].exists?
#=> true

If you store the encryption materials in an instruction file, you must tell #read this or it will fail to find your encryption materials.

# reading an encrypted file whose materials are stored in an
# instruction file, and not metadata
obj.read(:encryption_key => MY_KEY,
  :encryption_materials_location => :instruction_file)

Configuring default behaviors

You can configure a default key so that objects are automatically encrypted and decrypted for you. You can do this globally or for a single S3 interface.

# all objects uploaded/downloaded with this s3 object will be 
# encrypted/decrypted
s3 = AWS::S3.new(:s3_encryption_key => "MY_KEY")

# set the key to always encrypt/decrypt
AWS.config(:s3_encryption_key => "MY_KEY")

You can also configure the default storage location for the encryption materials.

AWS.config(:s3_encryption_materials_location => :instruction_file)

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(bucket, key, opts = {}) ⇒ S3Object

Returns a new instance of S3Object.

Parameters:

  • bucket (Bucket)

    The bucket this object belongs to.

  • key (String)

    The object’s key.



# File 'lib/aws/s3/s3_object.rb', line 245

def initialize(bucket, key, opts = {})
  super
  @key = key
  @bucket = bucket
end

Instance Attribute Details

#bucket ⇒ Bucket (readonly)

Returns The bucket this object is in.

Returns:

  • (Bucket)

    The bucket this object is in.



# File 'lib/aws/s3/s3_object.rb', line 255

def bucket
  @bucket
end

#key ⇒ String (readonly)

Returns the object's unique key.

Returns:

  • (String)

The object's unique key.



# File 'lib/aws/s3/s3_object.rb', line 252

def key
  @key
end

Instance Method Details

#==(other) ⇒ Boolean Also known as: eql?

Returns true if the other object belongs to the same bucket and has the same key.

Returns:

  • (Boolean)

    Returns true if the other object belongs to the same bucket and has the same key.



# File 'lib/aws/s3/s3_object.rb', line 264

def == other
  other.kind_of?(S3Object) and other.bucket == bucket and other.key == key
end

#acl ⇒ AccessControlList

Returns the object’s access control list. This will be an instance of AccessControlList, plus an additional change method:

object.acl.change do |acl|
  # remove any grants to someone other than the bucket owner
  owner_id = object.bucket.owner.id
  acl.grants.reject! do |g|
    g.grantee.canonical_user_id != owner_id
  end
end

Note that changing the ACL is not an atomic operation; it fetches the current ACL, yields it to the block, and then sets it again. Therefore, it’s possible that you may overwrite a concurrent update to the ACL using this method.

Returns:

  • (AccessControlList)


# File 'lib/aws/s3/s3_object.rb', line 1096

def acl

  resp = client.get_object_acl(:bucket_name => bucket.name, :key => key)

  acl = AccessControlList.new(resp.data)
  acl.extend ACLProxy
  acl.object = self
  acl

end

#acl=(acl) ⇒ nil

Sets the object's ACL (access control list). You can provide an ACL in a number of different formats.
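
For example, applying a canned ACL (one of the formats described below) to an object:

obj = bucket.objects['key']
obj.acl = :public_read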

Parameters:

  • acl (Symbol, String, Hash, AccessControlList)

    Accepts an ACL description in one of the following formats:

    Canned ACL

    S3 supports a number of canned ACLs for buckets and objects. These include:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read (object-only)

    • :bucket_owner_full_control (object-only)

    • :log_delivery_write (bucket-only)

    Here is an example of providing a canned ACL to a bucket:

    s3.buckets['bucket-name'].acl = :public_read
    

    ACL Grant Hash

    You can provide a hash of grants. The hash is composed of grants (keys) and grantees (values). Accepted grant keys are:

    • :grant_read

    • :grant_write

    • :grant_read_acp

    • :grant_write_acp

    • :grant_full_control

    Grantee strings (values) should be formatted like some of the following examples:

    id="8a6925ce4adf588a4532142d3f74dd8c71fa124b1ddee97f21c32aa379004fef"
    uri="http://acs.amazonaws.com/groups/global/AllUsers"
    emailAddress="[email protected]"
    

    You can provide a comma delimited list of multiple grantees in a single string. Please note the use of quotes inside the grantee string. Here is a simple example:

    {
  :grant_full_control => "emailAddress=\"user@example.com\", id=\"abc..mno\""
    }
    

    See the S3 API documentation for more information on formatting grants.

    AccessControlList Object

    You can build an ACL using the AccessControlList class and pass this object.

    acl = AWS::S3::AccessControlList.new
    acl.grant(:full_control).to(:canonical_user_id => "8a6...fef")
    acl #=> this is acceptable
    

    ACL XML String

    Lastly you can build your own ACL XML document and pass it as a string.

    <<-XML
      <AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <Owner>
          <ID>8a6...fef</ID>
          <DisplayName>owner-display-name</DisplayName>
        </Owner>
        <AccessControlList>
          <Grant>
            <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
              <ID>8a6...fef</ID>
              <DisplayName>owner-display-name</DisplayName>
            </Grantee>
            <Permission>FULL_CONTROL</Permission>
          </Grant>
        </AccessControlList>
      </AccessControlPolicy> 
    XML
    

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 1111

def acl=(acl)

  client_opts = {}
  client_opts[:bucket_name] = bucket.name
  client_opts[:key] = key

  client.put_object_acl(acl_options(acl).merge(client_opts))
  nil

end

#content_length ⇒ Integer

Returns Size of the object in bytes.

Returns:

  • (Integer)

    Size of the object in bytes.



# File 'lib/aws/s3/s3_object.rb', line 317

def content_length
  head[:content_length]
end

#content_type ⇒ String

Note:

S3 does not compute content-type. It reports the content-type as was reported during the file upload.

Returns the content type as reported by S3, defaults to an empty string when not provided during upload.

Returns:

  • (String)

    Returns the content type as reported by S3, defaults to an empty string when not provided during upload.



# File 'lib/aws/s3/s3_object.rb', line 325

def content_type
  head[:content_type]
end

#copy_from(source, options = {}) ⇒ nil

Note:

This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from the source object. If you don’t specify any of these options when copying, the object will have the default values as described below.

Copies data from one S3 object to another.

S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
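
A short sketch of typical calls (bucket and key names are placeholders):

obj = bucket.objects['target-key']

# copy from another object in the same bucket
obj.copy_from('source-key')

# copy from an object in a different bucket, storing the copy
# with reduced redundancy
obj.copy_from('source-key',
  :bucket_name => 'source-bucket',
  :reduced_redundancy => true)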

Parameters:

  • source (Mixed)
  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The name of the bucket the source object can be found in. Defaults to the current object’s bucket.

  • :bucket (Bucket)

    The bucket the source object can be found in. Defaults to the current object’s bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source's metadata is copied.

  • :content_type (String)

    The content type of the copied object. Defaults to the source object’s content type.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :version_id (String) — default: nil

    Causes the copy to read a specific version of the source object.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :client_side_encrypted (Boolean) — default: false

    Set to true when the object being copied was client-side encrypted. This is important so the encryption metadata will be copied.

  • :cache_control (String)

    Can be used to specify caching behavior. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :expires (String)

    The date and time at which the object is no longer cacheable.

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 843

def copy_from source, options = {}

  options = options.dup

  options[:copy_source] =
    case source
    when S3Object
      "#{source.bucket.name}/#{source.key}"
    when ObjectVersion
      options[:version_id] = source.version_id
      "#{source.object.bucket.name}/#{source.object.key}"
    else
      if options[:bucket]
        "#{options.delete(:bucket).name}/#{source}"
      elsif options[:bucket_name]
        "#{options.delete(:bucket_name)}/#{source}"
      else
        "#{self.bucket.name}/#{source}"
      end
    end

  if [:metadata, :content_disposition, :content_type, :cache_control,
    ].any? {|opt| options.key?(opt) }
  then
    options[:metadata_directive] = 'REPLACE'
  else
    options[:metadata_directive] ||= 'COPY'
  end

  # copies client-side encryption materials (from the metadata or
  # instruction file)
  if options.delete(:client_side_encrypted)
    copy_cse_materials(source, options)
  end

  add_sse_options(options)

  options[:storage_class] = options.delete(:reduced_redundancy) ?
    'REDUCED_REDUNDANCY' : 'STANDARD'

  options[:bucket_name] = bucket.name
  options[:key] = key

  client.copy_object(options)

  nil

end

#copy_to(target, options = {}) ⇒ S3Object

Note:

This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from this object to the new object. If you don’t specify any of these options when copying, the new object will have the default values as described below.

Copies data from the current object to another object in S3.

S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
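
For example (names are placeholders):

obj = bucket.objects['source-key']

# copy within the same bucket
copy = obj.copy_to('target-key')

# copy into a different bucket
copy = obj.copy_to('target-key', :bucket_name => 'other-bucket')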

Parameters:

  • target (S3Object, String)

    An S3Object, or the string key of an object to copy to.

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The name of the bucket the object should be copied into. Defaults to the current object’s bucket.

  • :bucket (Bucket)

    The bucket the target object should be copied into. Defaults to the current object’s bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source's metadata is copied.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :client_side_encrypted (Boolean) — default: false

    When true, the client-side encryption materials will be copied. Without this option, the key and iv are not guaranteed to be transferred to the new object.

  • :expires (String)

    The date and time at which the object is no longer cacheable.

Returns:

  • (S3Object)

    Returns the copy (target) object.



# File 'lib/aws/s3/s3_object.rb', line 953

def copy_to target, options = {}

  unless target.is_a?(S3Object)

    bucket = case
    when options[:bucket] then options[:bucket]
    when options[:bucket_name]
      Bucket.new(options[:bucket_name], :config => config)
    else self.bucket
    end

    target = S3Object.new(bucket, target)
  end

  copy_opts = options.dup
  copy_opts.delete(:bucket)
  copy_opts.delete(:bucket_name)

  target.copy_from(self, copy_opts)
  target

end

#delete(options = {}) ⇒ nil

Deletes the object from its S3 bucket.
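
For example:

obj = bucket.objects['key']
obj.delete

# if the object was client-side encrypted with an instruction file,
# the instruction file can be removed at the same time
obj.delete(:delete_instruction_file => true)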

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    If present, only the specified version of the object is deleted.

  • :delete_instruction_file (Boolean) — default: false

    When true, the client-side encryption instruction file (this object's key + '.instruction') is also deleted.

Returns:

  • (nil)


# File 'lib/aws/s3/s3_object.rb', line 394

def delete options = {}
  client.delete_object(options.merge(
    :bucket_name => bucket.name,
    :key => key))

  if options[:delete_instruction_file]
    client.delete_object(
      :bucket_name => bucket.name,
      :key => key + '.instruction')
  end

  nil

end

#etag ⇒ String

Returns the object’s ETag.

Generally the ETag is the MD5 of the object. If the object was uploaded using multipart upload, then the ETag is derived from the MD5s of the individual uploaded parts.

Returns:

  • (String)

    Returns the object’s ETag



# File 'lib/aws/s3/s3_object.rb', line 305

def etag
  head[:etag]
end

#exists? ⇒ Boolean

Returns true if the object exists in S3.

Returns:

  • (Boolean)

    Returns true if the object exists in S3.



# File 'lib/aws/s3/s3_object.rb', line 270

def exists?
  head
rescue Errors::NoSuchKey => e
  false
else
  true
end

#expiration_date ⇒ DateTime?

Returns:

  • (DateTime, nil)


# File 'lib/aws/s3/s3_object.rb', line 330

def expiration_date
  head[:expiration_date]
end

#expiration_rule_id ⇒ String?

Returns:

  • (String, nil)


# File 'lib/aws/s3/s3_object.rb', line 335

def expiration_rule_id
  head[:expiration_rule_id]
end

#head(options = {}) ⇒ Object

Performs a HEAD request against this object and returns an object with useful information about the object, including:

  • metadata (hash of user-supplied key-value pairs)

  • content_length (integer, number of bytes)

  • content_type (as sent to S3 when uploading the object)

  • etag (typically the object’s MD5)

  • server_side_encryption (the algorithm used to encrypt the object on the server side, e.g. :aes256)
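
As an illustrative sketch, the returned values can be accessed like a hash:

response = obj.head
response[:content_length] #=> object size in bytes
response[:content_type]   #=> e.g. 'text/plain'
response[:metadata]       #=> hash of user-supplied metadata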

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Which version of this object to make a HEAD request against.

Returns:

  • A head object response with metadata, content_length, content_type, etag and server_side_encryption.



# File 'lib/aws/s3/s3_object.rb', line 293

def head options = {}
  client.head_object(options.merge(
    :bucket_name => bucket.name, :key => key))
end

#last_modified ⇒ Time

Returns the object’s last modified time.

Returns:

  • (Time)

    Returns the object’s last modified time.



# File 'lib/aws/s3/s3_object.rb', line 312

def last_modified
  head[:last_modified]
end

#metadata(options = {}) ⇒ ObjectMetadata

Returns an instance of ObjectMetadata representing the metadata for this object.
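
A loose sketch of typical usage (ObjectMetadata supports hash-style access; note that updating metadata rewrites the object in S3):

meta = obj.metadata
meta['color']          # read a metadata value
meta['color'] = 'red'  # update metadata by copying the object in place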

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Returns the metadata for the specified version of the object.

Returns:

  • (ObjectMetadata)

    Returns an instance of ObjectMetadata representing the metadata for this object.



# File 'lib/aws/s3/s3_object.rb', line 434

def metadata options = {}
  options[:config] = config
  ObjectMetadata.new(self, options)
end

#move_to(target, options = {}) ⇒ S3Object Also known as: rename_to

Moves an object to a new key.

This works by copying the object to a new key and then deleting the old object. This function returns the new object once this is done.

bucket = s3.buckets['old-bucket']
old_obj = bucket.objects['old-key']

# renaming an object returns a new object
new_obj = old_obj.move_to('new-key')

old_obj.key     #=> 'old-key'
old_obj.exists? #=> false

new_obj.key     #=> 'new-key'
new_obj.exists? #=> true

If you need to move an object to a different bucket, pass :bucket or :bucket_name.

obj = s3.buckets['old-bucket'].objects['old-key']
obj.move_to('new-key', :bucket_name => 'new_bucket')

If the copy succeeds but the delete fails, an error will be raised.

Parameters:

  • target (String)

    The key to move this object to.

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :bucket_name (String)

    The name of the bucket the object should be copied into. Defaults to the current object’s bucket.

  • :bucket (Bucket)

    The bucket the target object should be copied into. Defaults to the current object’s bucket.

  • :metadata (Hash)

    A hash of metadata to save with the copied object. Each name, value pair must conform to US-ASCII. When blank, the source's metadata is copied.

  • :reduced_redundancy (Boolean) — default: false

    If true the object is stored with reduced redundancy in S3 for a lower cost.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :client_side_encrypted (Boolean) — default: false

    When true, the client-side encryption materials will be copied. Without this option, the key and iv are not guaranteed to be transferred to the new object.

  • :expires (String)

    The date and time at which the object is no longer cacheable.

Returns:

  • (S3Object)

    Returns a new object with the new key.



# File 'lib/aws/s3/s3_object.rb', line 768

def move_to target, options = {}
  copy = copy_to(target, options)
  delete
  copy
end

#multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion

Performs a multipart upload. Use this if you have specific needs for how the upload is split into parts, or if you want to have more control over how the failure of an individual part upload is handled. Otherwise, #write is much simpler to use.

Examples:

Uploading an object in two parts

bucket.objects['myobject'].multipart_upload do |upload|
  upload.add_part("a" * 5242880)
  upload.add_part("b" * 2097152)
end

Uploading parts out of order

bucket.objects['myobject'].multipart_upload do |upload|
  upload.add_part("b" * 2097152, :part_number => 2)
  upload.add_part("a" * 5242880, :part_number => 1)
end

Aborting an upload after parts have been added

bucket.objects['myobject'].multipart_upload do |upload|
  upload.add_part("b" * 2097152, :part_number => 2)
  upload.abort
end

Starting an upload and completing it later by ID

upload = bucket.objects['myobject'].multipart_upload
upload.add_part("a" * 5242880)
upload.add_part("b" * 2097152)
id = upload.id

# later or in a different process
upload = bucket.objects['myobject'].multipart_uploads[id]
upload.complete(:remote_parts)

Parameters:

  • options (Hash) (defaults to: {})

    Options for the upload.

Options Hash (options):

  • :metadata (Hash)

    A hash of metadata to be included with the object. These will be sent to S3 as headers prefixed with x-amz-meta. Each name, value pair must conform to US-ASCII.

  • :acl (Symbol) — default: private

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :reduced_redundancy (Boolean) — default: false

    If true, Reduced Redundancy Storage will be enabled for the uploaded object.

  • :cache_control (String)

    Can be used to specify caching behavior. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :content_disposition (String)

    Specifies presentational information for the object. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec19.5.1

  • :content_encoding (String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11

  • :content_type (Object)

    A standard MIME type describing the format of the object data.

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

Yield Parameters:

  • upload (MultipartUpload)

    The upload object; add parts to it with #add_part.

Returns:

  • (S3Object, ObjectVersion)

    If the bucket has versioning enabled, returns the ObjectVersion representing the version that was uploaded. If versioning is disabled, returns self.



# File 'lib/aws/s3/s3_object.rb', line 703

def multipart_upload(options = {})

  options = options.dup
  add_sse_options(options)

  upload = multipart_uploads.create(options)

  if block_given?
    begin
      yield(upload)
      upload.close
    rescue => e
      upload.abort
      raise e
    end
  else
    upload
  end
end

#multipart_uploads ⇒ ObjectUploadCollection

Returns an object representing the collection of uploads that are in progress for this object.

Examples:

Abort any in-progress uploads for the object:


object.multipart_uploads.each(&:abort)

Returns:

  • (ObjectUploadCollection)

    Returns an object representing the collection of uploads that are in progress for this object.



# File 'lib/aws/s3/s3_object.rb', line 729

def multipart_uploads
  ObjectUploadCollection.new(self)
end

#presigned_post(options = {}) ⇒ PresignedPost

Generates fields for a presigned POST to this object. This method adds a constraint that the key must match the key of this object. All options are sent to the PresignedPost constructor.
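
A rough sketch of how the result might be used (the url and fields readers on PresignedPost are assumed here):

post = obj.presigned_post

post.url     # the URL the form should be POSTed to
post.fields  # a hash of form fields to include alongside the file data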

Returns:

  • (PresignedPost)

See Also:

  • PresignedPost


# File 'lib/aws/s3/s3_object.rb', line 1242

def presigned_post(options = {})
  PresignedPost.new(bucket, options.merge(:key => key))
end

#public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS

Generates a public (not authenticated) URL for the object.
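
For example (the exact hostname depends on your configuration):

obj.public_url
#=> https://bucket-name.s3.amazonaws.com/key

obj.public_url(:secure => false)
#=> http://bucket-name.s3.amazonaws.com/key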

Parameters:

  • options (Hash) (defaults to: {})

    Options for generating the URL.

Options Hash (options):

  • :secure (Boolean)

    Whether to generate a secure (HTTPS) URL or a plain HTTP url.

Returns:

  • (URI::HTTP, URI::HTTPS)


# File 'lib/aws/s3/s3_object.rb', line 1230

def public_url(options = {})
  options[:secure] = config.use_ssl? unless options.key?(:secure)
  build_uri(request_for_signing(options), options)
end

#read(options = {}, &read_block) ⇒ Object

Note:

:range option cannot be used with client-side encryption

Note:

All decryption reads incur at least an extra HEAD operation.

Fetches the object data from S3. If you pass a block to this method, the data will be yielded to the block in chunks as it is read off the HTTP response.

Read an object from S3 in chunks

When downloading large objects it is recommended to pass a block to #read. Data will be yielded to the block as it is read off the HTTP response.

# read an object from S3 to a file
File.open('output.txt', 'w') do |file|
  bucket.objects['key'].read do |chunk|
    file.write(chunk)
  end
end

Reading an object without a block

When you omit the block argument to #read, the entire HTTP response is read and the object data is loaded into memory.

bucket.objects['key'].read
#=> 'object-contents-here'

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :version_id (String)

    Reads data from a specific version of this object.

  • :if_unmodified_since (Time)

    If specified, the method will raise AWS::S3::Errors::PreconditionFailed unless the object has not been modified since the given time.

  • :if_modified_since (Time)

    If specified, the method will raise AWS::S3::Errors::NotModified if the object has not been modified since the given time.

  • :if_match (String)

    If specified, the method will raise AWS::S3::Errors::PreconditionFailed unless the object ETag matches the provided value.

  • :if_none_match (String)

    If specified, the method will raise AWS::S3::Errors::NotModified if the object ETag matches the provided value.

  • :range (Range)

    A byte range to read data from

  • :encryption_key (OpenSSL::PKey::RSA, String) — default: nil

    If this option is set, the object will be decrypted using envelope encryption. The valid values are OpenSSL asymmetric keys (OpenSSL::PKey::RSA) or strings representing symmetric keys for an AES-128/192/256-ECB cipher. This value defaults to the value of :s3_encryption_key; for more information, see AWS.config.

    Symmetric keys can be generated like so:

    cipher = OpenSSL::Cipher.new('AES-256-ECB')
    key = cipher.random_key

    Asymmetric keys can also be generated like so:

    key = OpenSSL::PKey::RSA.new(KEY_SIZE)

  • :encryption_materials_location (Symbol) — default: :metadata

    Set this to :instruction_file if the encryption materials are not stored in the object metadata



# File 'lib/aws/s3/s3_object.rb', line 1050

def read options = {}, &read_block

  options[:bucket_name] = bucket.name
  options[:key] = key

  if should_decrypt?(options)
    get_encrypted_object(options, &read_block)
  else
    resp_data = get_object(options, &read_block)
    block_given? ? resp_data : resp_data[:data]
  end

end

#reduced_redundancy=(value) ⇒ true, false

Note:

Changing the storage class of an object incurs a COPY operation.

Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).
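
For example:

obj.reduced_redundancy = true   # copy in place using Reduced Redundancy Storage
obj.reduced_redundancy = false  # copy back to the standard storage class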

Parameters:

  • value (true, false)

    If this is true, the object will be copied in place and stored with reduced redundancy at a lower cost. Otherwise, the object will be copied and stored with the standard storage class.

Returns:

  • (true, false)

    The value parameter.



# File 'lib/aws/s3/s3_object.rb', line 1258

def reduced_redundancy= value
  copy_from(key, :reduced_redundancy => value)
  value
end

#restore(options = {}) ⇒ Boolean

Restores a temporary copy of an archived object from the Glacier storage tier. After the specified days, Amazon S3 deletes the temporary copy. Note that the object remains archived; Amazon S3 deletes only the restored copy.

Restoring an object does not occur immediately. Use #restore_in_progress? to check the status of the operation.
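
A small sketch of a typical restore request:

obj.restore(:days => 7)   # keep the restored copy available for 7 days
obj.restore_in_progress?  #=> true until the restore completes
obj.restored_object?      #=> true once the temporary copy is available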

Parameters:

  • options (Hash) (defaults to: {})

Options Hash (options):

  • :days (Integer) — default: 1

    The number of days to keep the restored copy available before Amazon S3 removes it.

Returns:

  • (Boolean)

    true if a restore can be initiated.

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 420

def restore options = {}
  options[:days] ||= 1

  client.restore_object(
    :bucket_name => bucket.name,
    :key => key, :days => options[:days])

  true
end

#restore_expiration_date ⇒ DateTime?

Returns:

  • (DateTime)

    the time when the temporarily restored object will be removed from S3. Note that the original object will remain available in Glacier.

  • (nil)

    if the object was not restored from an archived copy

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 366

def restore_expiration_date
  head[:restore_expiration_date]
end

#restore_in_progress? ⇒ Boolean

Returns whether a #restore operation is currently being performed on the object.

Returns:

  • (Boolean)

    whether a #restore operation is currently being performed on the object.

See Also:

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 356

def restore_in_progress?
  head[:restore_in_progress]
end

#restored_object? ⇒ Boolean

Returns whether the object is a temporary copy of an archived object in the Glacier storage class.

Returns:

  • (Boolean)

    whether the object is a temporary copy of an archived object in the Glacier storage class.

Since:

  • 1.7.2



# File 'lib/aws/s3/s3_object.rb', line 373

def restored_object?
  !!head[:restore_expiration_date]
end

#server_side_encryption ⇒ Symbol?

Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.

Returns:

  • (Symbol, nil)

    Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.



# File 'lib/aws/s3/s3_object.rb', line 342

def server_side_encryption
  head[:server_side_encryption]
end

#server_side_encryption? ⇒ true, false

Returns true if the object was stored using server side encryption.

Returns:

  • (true, false)

    Returns true if the object was stored using server side encryption.



# File 'lib/aws/s3/s3_object.rb', line 348

def server_side_encryption?
  !server_side_encryption.nil?
end

#url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS

Generates a presigned URL for an operation on this object. This URL can be used by a regular HTTP client to perform the desired operation without credentials and without changing the permissions of the object.

Examples:

Generate a url to read an object

bucket.objects['myobject'].url_for(:read)

Generate a url to delete an object

bucket.objects['myobject'].url_for(:delete)

Override response headers for reading an object

object = bucket.objects['myobject']
url = object.url_for(:read,
                     :response_content_type => "application/json")

Generate a url that expires in 10 minutes

bucket.objects['myobject'].url_for(:read, :expires => 10*60)

Parameters:

  • method (Symbol, String)

    The HTTP verb or object method for which the returned URL will be valid. Valid values:

    • :get or :read

    • :put or :write

    • :delete

  • options (Hash) (defaults to: {})

    Additional options for generating the URL.

Options Hash (options):

  • :expires (Object)

    Sets the expiration time of the URL; after this time S3 will return an error if the URL is used. This can be an integer (to specify the number of seconds after the current time), a string (which is parsed as a date using Time#parse), a Time, or a DateTime object. This option defaults to one hour after the current time.

  • :secure (Boolean) — default: true

    Whether to generate a secure (HTTPS) URL or a plain HTTP url.

  • :endpoint (String)

    Sets the hostname of the endpoint (overrides config.s3_endpoint).

  • :port (Integer)

    Sets the port of the endpoint (overrides config.s3_port).

  • :force_path_style (Boolean) — default: false

    Indicates whether the generated URL should place the bucket name in the path (true) or as a subdomain (false).

  • :response_content_type (String)

    Sets the Content-Type header of the response when performing an HTTP GET on the returned URL.

  • :response_content_language (String)

    Sets the Content-Language header of the response when performing an HTTP GET on the returned URL.

  • :response_expires (String)

    Sets the Expires header of the response when performing an HTTP GET on the returned URL.

  • :response_cache_control (String)

    Sets the Cache-Control header of the response when performing an HTTP GET on the returned URL.

  • :response_content_disposition (String)

    Sets the Content-Disposition header of the response when performing an HTTP GET on the returned URL.

  • :response_content_encoding (String)

    Sets the Content-Encoding header of the response when performing an HTTP GET on the returned URL.

Returns:

  • (URI::HTTP, URI::HTTPS)


# File 'lib/aws/s3/s3_object.rb', line 1200

def url_for(method, options = {})

  options[:secure] = config.use_ssl? unless options.key?(:secure)

  req = request_for_signing(options)

  method = http_method(method)
  expires = expiration_timestamp(options[:expires])
  req.add_param("AWSAccessKeyId",
                config.credential_provider.access_key_id)
  req.add_param("versionId", options[:version_id]) if options[:version_id]
  req.add_param("Signature", signature(method, expires, req))
  req.add_param("Expires", expires)
  req.add_param("x-amz-security-token",
                config.credential_provider.session_token) if
    config.credential_provider.session_token

  secure = options.fetch(:secure, config.use_ssl?)
  build_uri(req, options)
end

#versions ⇒ ObjectVersionCollection

Returns a collection representing all the object versions for this object.

bucket.versioning_enabled? # => true
version = bucket.objects["mykey"].versions.latest


# File 'lib/aws/s3/s3_object.rb', line 446

def versions
  ObjectVersionCollection.new(self)
end

#write(data, options = {}) ⇒ S3Object, ObjectVersion

Uploads data to the object in S3.

obj = s3.buckets['bucket-name'].objects['key']

# strings
obj.write("HELLO")

# files (by path)
obj.write(Pathname.new('path/to/file.txt'))

# file objects
obj.write(File.open('path/to/file.txt', 'r'))

# IO objects (must respond to #read and #eof?)
obj.write(io)

Multipart Uploads vs Single Uploads

This method will intelligently choose between uploading the file in a single request and using #multipart_upload. You can control this behavior by configuring the thresholds, and you can disable the multipart feature as well.

# always send the file in a single request
obj.write(file, :single_request => true)

# upload the file in parts if the total file size exceeds 100MB
obj.write(file, :multipart_threshold => 100 * 1024 * 1024)

Parameters:

  • data (String, Pathname, File, IO)

    The data to upload. This may be a:

    • String

    • Pathname

    • File

    • IO

    • Any object that responds to #read and #eof?.

  • options (Hash) (defaults to: {})

    Additional upload options.

Options Hash (options):

  • :content_length (Integer)

    If provided, this option must match the total number of bytes written to S3. This option is required when it is not possible to automatically determine the size of the data.

  • :estimated_content_length (Integer)

    When uploading data of unknown content length, you may specify this option to hint what mode of upload should take place. When :estimated_content_length exceeds the :multipart_threshold, then the data will be uploaded in parts, otherwise it will be read into memory and uploaded via Client#put_object.

  • :single_request (Boolean) — default: false

    When true, this method will always upload the data in a single request (via Client#put_object). When false, this method will choose between Client#put_object and #multipart_upload.

  • :multipart_threshold (Integer) — default: 16777216

    Specifies the maximum size (in bytes) of a single-request upload. If the data exceeds this threshold, it will be uploaded via #multipart_upload. The default threshold is 16MB and can be configured via AWS.config(:s3_multipart_threshold => …).

  • :multipart_min_part_size (Integer) — default: 5242880

    The minimum size of a part to upload to S3 when using #multipart_upload. S3 will reject parts smaller than 5MB (except the final part). The default is 5MB and can be configured via AWS.config(:s3_multipart_min_part_size => …).

  • :metadata (Hash)

    A hash of metadata to be included with the object. These will be sent to S3 as headers prefixed with x-amz-meta. Each name, value pair must conform to US-ASCII.

  • :acl (Symbol, String) — default: :private

    A canned access control policy. Valid values are:

    • :private

    • :public_read

    • :public_read_write

    • :authenticated_read

    • :bucket_owner_read

    • :bucket_owner_full_control

  • :grant_read (String)
  • :grant_write (String)
  • :grant_read_acp (String)
  • :grant_write_acp (String)
  • :grant_full_control (String)
  • :reduced_redundancy (Boolean) — default: false

    When true, this object will be stored with Reduced Redundancy Storage.

  • :cache_control (String)

    Can be used to specify caching behavior. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

  • :content_disposition (String)

    Specifies presentational information for the object. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec19.5.1

  • :content_encoding (String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. See www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11

  • :content_md5 (String)

    The base64 encoded content md5 of the data.

  • :content_type (Object)

    A standard MIME type describing the format of the object data.

  • :server_side_encryption (Symbol) — default: nil

    If this option is set, the object will be stored using server side encryption. The only valid value is :aes256, which specifies that the object should be stored using the AES encryption algorithm with 256 bit keys. By default, this option uses the value of the :s3_server_side_encryption option in the current configuration; for more information, see AWS.config.

  • :encryption_key (OpenSSL::PKey::RSA, String)

    Set this to encrypt the data client-side using envelope encryption. The key must be an OpenSSL asymmetric key or a symmetric key string (16, 24 or 32 bytes in length).

  • :encryption_materials_location (Symbol) — default: :metadata

    Set this to :instruction_file if you prefer to store the client-side encryption materials in a separate object in S3 instead of in the object metadata.

  • :expires (String)

    The date and time at which the object is no longer cacheable.

Returns:

  • (S3Object, ObjectVersion)

    If the bucket has versioning enabled, returns the ObjectVersion representing the version that was uploaded. If versioning is disabled, returns self.


# File 'lib/aws/s3/s3_object.rb', line 595

def write *args, &block

  options = compute_write_options(*args, &block)

  add_storage_class_option(options)
  add_sse_options(options)
  add_cse_options(options)

  if use_multipart?(options)
    write_with_multipart(options)
  else
    write_with_put_object(options)
  end

end