Class: Aliyun::OSS::Protocol

Inherits:
Object
Includes:
Logging
Defined in:
lib/aliyun/oss/protocol.rb

Overview

Protocol implements the low-level OSS Open API. Users should refer to Client for normal use.

Constant Summary

STREAM_CHUNK_SIZE =
16 * 1024

Constants included from Logging

Logging::DEFAULT_LOG_FILE, Logging::MAX_NUM_LOG, Logging::ROTATE_SIZE

Instance Method Summary

Methods included from Logging

#logger, set_log_file, set_log_level

Constructor Details

#initialize(config) ⇒ Protocol

Returns a new instance of Protocol.



# File 'lib/aliyun/oss/protocol.rb', line 20

def initialize(config)
  @config = config
  @http = HTTP.new(config)
end
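
A minimal construction sketch (not part of the original docs; the endpoint and credentials are placeholders, and it assumes Aliyun::OSS::Config accepts these keys):

  require 'aliyun/oss'

  # Build the low-level protocol client from a config object.
  config = Aliyun::OSS::Config.new(
    :endpoint => 'http://oss-cn-hangzhou.aliyuncs.com',
    :access_key_id => 'my-key-id',
    :access_key_secret => 'my-key-secret')
  protocol = Aliyun::OSS::Protocol.new(config)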

Instance Method Details

#abort_multipart_upload(bucket_name, object_name, txn_id) ⇒ Object

Note:

All uploaded parts are discarded after the abort. Parts that are still being uploaded when the abort happens may not be discarded; call abort_multipart_upload several times in that situation.

Abort a multipart uploading transaction

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • txn_id (String)

    the upload id



# File 'lib/aliyun/oss/protocol.rb', line 1134

def abort_multipart_upload(bucket_name, object_name, txn_id)
  logger.debug("Begin abort multipart upload, txn id: #{txn_id}")

  sub_res = {'uploadId' => txn_id}

  @http.delete(
    {:bucket => bucket_name, :object => object_name, :sub_res => sub_res})

  logger.debug("Done abort multipart: #{txn_id}.")
end

#append_object(bucket_name, object_name, position, opts = {}) {|HTTP::StreamWriter| ... } ⇒ Integer

Note:
  1. Cannot append to a “Normal Object”

  2. The position must equal the object’s size before the append

  3. The :content_type is only used when the object is created

Append to an object of a bucket. Create an “Appendable Object” if the object does not exist. A block is required to provide the appending data.

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • position (Integer)

    the position to append

  • opts (Hash) (defaults to: {})

    Options

Options Hash (opts):

  • :content_type (String)

    the HTTP Content-Type for the file. If not specified, the client will try to determine the type itself and fall back to HTTP::DEFAULT_CONTENT_TYPE if it fails to do so

  • :metas (Hash<Symbol, String>)

    key-value pairs that serve as the object meta which will be stored together with the object

Yields:

  • (HTTP::StreamWriter)

    a stream writer is yielded to the caller, which can write chunks of data to it in a streaming fashion

Returns:

  • (Integer)

    next position to append



# File 'lib/aliyun/oss/protocol.rb', line 551

def append_object(bucket_name, object_name, position, opts = {}, &block)
  logger.debug("Begin append object, bucket: #{bucket_name}, object: "\
                "#{object_name}, position: #{position}, options: #{opts}")

  sub_res = {'append' => nil, 'position' => position}
  headers = {'Content-Type' => opts[:content_type]}
  (opts[:metas] || {})
    .each { |k, v| headers["x-oss-meta-#{k.to_s}"] = v.to_s }

  h, _ = @http.post(
       {:bucket => bucket_name, :object => object_name, :sub_res => sub_res},
       {:headers => headers, :body => HTTP::StreamPayload.new(&block)})

  logger.debug('Done append object')

  wrap(h[:x_oss_next_append_position], &:to_i) || -1
end
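
A usage sketch (it assumes a `protocol` instance as in the constructor example; bucket and object names are placeholders). The first append starts at position 0, and each call returns the position for the next append:

  # Append two chunks to an appendable object.
  pos = protocol.append_object('my-bucket', 'my-log', 0,
                               :content_type => 'text/plain') do |stream|
    stream.write("first line\n")
  end
  protocol.append_object('my-bucket', 'my-log', pos) do |stream|
    stream.write("second line\n")
  end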

#batch_delete_objects(bucket_name, object_names, opts = {}) ⇒ Array<String>

Batch delete objects

Parameters:

  • bucket_name (String)

    the bucket name

  • object_names (Enumerator<String>)

    the object names

  • opts (Hash) (defaults to: {})

    options

Options Hash (opts):

  • :quiet (Boolean)

    indicates whether the server should return the delete result of the objects

  • :encoding (String)

    the encoding type for the object keys in the response body; only KeyEncoding::URL is supported now

Returns:

  • (Array<String>)

    the object names that have been successfully deleted, or an empty array if :quiet is true



# File 'lib/aliyun/oss/protocol.rb', line 881

def batch_delete_objects(bucket_name, object_names, opts = {})
  logger.debug("Begin batch delete object, bucket: #{bucket_name}, "\
               "objects: #{object_names}, options: #{opts}")

  sub_res = {'delete' => nil}

  # It may have invisible chars in object key which will corrupt
  # libxml. So we're constructing xml body manually here.
  body = '<?xml version="1.0"?>'
  body << '<Delete>'
  body << '<Quiet>' << (opts[:quiet]? true : false).to_s << '</Quiet>'
  object_names.each { |k|
    body << '<Object><Key>' << k << '</Key></Object>'
  }
  body << '</Delete>'

  query = {}
  query['encoding-type'] = opts[:encoding] if opts[:encoding]

  _, body = @http.post(
       {:bucket => bucket_name, :sub_res => sub_res},
       {:query => query, :body => body})

  deleted = []
  unless opts[:quiet]
    doc = parse_xml(body)
    encoding = get_node_text(doc.root, 'EncodingType')
    doc.css("Deleted").map do |n|
      deleted << get_node_text(n, 'Key') { |x| decode_key(x, encoding) }
    end
  end

  logger.debug("Done delete object")

  deleted
end
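
A usage sketch (protocol, bucket and object names are placeholders):

  # Delete several objects in one request; returns the keys actually deleted.
  deleted = protocol.batch_delete_objects(
    'my-bucket', ['tmp/a.txt', 'tmp/b.txt'], :quiet => false)
  puts deleted.inspect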

#complete_multipart_upload(bucket_name, object_name, txn_id, parts) ⇒ Object

Complete a multipart uploading transaction

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • txn_id (String)

    the upload id

  • parts (Array<Multipart::Part>)

    the uploaded parts

# File 'lib/aliyun/oss/protocol.rb', line 1102

def complete_multipart_upload(bucket_name, object_name, txn_id, parts)
  logger.debug("Begin complete multipart upload, "\
               "txn id: #{txn_id}, parts: #{parts.map(&:to_s)}")

  sub_res = {'uploadId' => txn_id}

  body = Nokogiri::XML::Builder.new do |xml|
    xml.CompleteMultipartUpload {
      parts.each do |p|
        xml.Part {
          xml.PartNumber p.number
          xml.ETag p.etag
        }
      end
    }
  end.to_xml

  @http.post(
    {:bucket => bucket_name, :object => object_name, :sub_res => sub_res},
    {:body => body})

  logger.debug("Done complete multipart upload: #{txn_id}.")
end
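
A sketch of a complete multipart flow using the methods on this class (protocol, names and chunk contents are placeholders; real parts normally need to be much larger than shown here):

  txn_id = protocol.initiate_multipart_upload('my-bucket', 'big-file')
  parts = []
  begin
    # Upload two parts; upload_part returns a Multipart::Part for each.
    ['chunk-1', 'chunk-2'].each_with_index do |chunk, i|
      part = protocol.upload_part('my-bucket', 'big-file', txn_id, i + 1) do |stream|
        stream.write(chunk)
      end
      parts << part
    end
    protocol.complete_multipart_upload('my-bucket', 'big-file', txn_id, parts)
  rescue
    # Discard whatever was uploaded if anything goes wrong.
    protocol.abort_multipart_upload('my-bucket', 'big-file', txn_id)
    raise
  end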

#copy_object(bucket_name, src_object_name, dst_object_name, opts = {}) ⇒ Hash

Copy an object in the bucket. The source object and the dest object must be in the same bucket.

Parameters:

  • bucket_name (String)

    the bucket name

  • src_object_name (String)

    the source object name

  • dst_object_name (String)

    the dest object name

  • opts (Hash) (defaults to: {})

    options

Options Hash (opts):

  • :acl (String)

    specify the dest object’s ACL. See ACL

  • :meta_directive (String)

    specify what to do with the object’s meta: copy or replace. See MetaDirective

  • :content_type (String)

    the HTTP Content-Type for the file. If not specified, the client will try to determine the type itself and fall back to HTTP::DEFAULT_CONTENT_TYPE if it fails to do so

  • :metas (Hash<Symbol, String>)

    key-value pairs that serve as the object meta which will be stored together with the object

  • :condition (Hash)

    preconditions to get the object. See #get_object

Returns:

  • (Hash)

    the copy result

    • :etag [String] the etag of the dest object

    • :last_modified [Time] the last modification time of the dest object



# File 'lib/aliyun/oss/protocol.rb', line 822

def copy_object(bucket_name, src_object_name, dst_object_name, opts = {})
  logger.debug("Begin copy object, bucket: #{bucket_name}, "\
               "source object: #{src_object_name}, dest object: "\
               "#{dst_object_name}, options: #{opts}")

  headers = {
    'x-oss-copy-source' =>
      @http.get_resource_path(bucket_name, src_object_name),
    'Content-Type' => opts[:content_type]
  }
  (opts[:metas] || {})
    .each { |k, v| headers["x-oss-meta-#{k.to_s}"] = v.to_s }

  {
    :acl => 'x-oss-object-acl',
    :meta_directive => 'x-oss-metadata-directive'
  }.each { |k, v| headers[v] = opts[k] if opts[k] }

  headers.merge!(get_copy_conditions(opts[:condition])) if opts[:condition]

  _, body = @http.put(
    {:bucket => bucket_name, :object => dst_object_name},
    {:headers => headers})

  doc = parse_xml(body)
  copy_result = {
    :last_modified => get_node_text(
      doc.root, 'LastModified') { |x| Time.parse(x) },
    :etag => get_node_text(doc.root, 'ETag')
  }.reject { |_, v| v.nil? }

  logger.debug("Done copy object")

  copy_result
end
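
A usage sketch (protocol and names are placeholders):

  # Copy within the same bucket and attach extra meta to the destination object.
  result = protocol.copy_object('my-bucket', 'src.txt', 'dst.txt',
                                :metas => {:author => 'alice'})
  puts "#{result[:etag]} #{result[:last_modified]}"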

#create_bucket(name, opts = {}) ⇒ Object

Create a bucket

Examples:

  Create a bucket located in the oss-cn-hangzhou region:

    create_bucket('my-bucket', :location => 'oss-cn-hangzhou')

Parameters:

  • name (String)

    the bucket name

  • opts (Hash) (defaults to: {})

    options

Options Hash (opts):

  • :location (String)

    the region where the bucket is located



# File 'lib/aliyun/oss/protocol.rb', line 96

def create_bucket(name, opts = {})
  logger.info("Begin create bucket, name: #{name}, opts: #{opts}")

  location = opts[:location]
  body = nil
  if location
    builder = Nokogiri::XML::Builder.new do |xml|
      xml.CreateBucketConfiguration {
        xml.LocationConstraint location
      }
    end
    body = builder.to_xml
  end

  @http.put({:bucket => name}, {:body => body})

  logger.info("Done create bucket")
end

#delete_bucket(name) ⇒ Object

Note:

it will fail if the bucket is not empty (i.e. it contains objects)

Delete a bucket

Parameters:

  • name (String)

    the bucket name



# File 'lib/aliyun/oss/protocol.rb', line 488

def delete_bucket(name)
  logger.info("Begin delete bucket: #{name}")

  @http.delete({:bucket => name})

  logger.info("Done delete bucket")
end

#delete_bucket_cors(name) ⇒ Object

Note:

this will delete all CORS rules of this bucket

Delete all bucket CORS rules

Parameters:

  • name (String)

    the bucket name



# File 'lib/aliyun/oss/protocol.rb', line 474

def delete_bucket_cors(name)
  logger.info("Begin delete bucket cors, bucket: #{name}")

  sub_res = {'cors' => nil}

  @http.delete({:bucket => name, :sub_res => sub_res})

  logger.info("Done delete bucket cors")
end

#delete_bucket_lifecycle(name) ⇒ Object

Note:

this will delete all lifecycle rules

Delete all lifecycle rules on the bucket

Parameters:

  • name (String)

    the bucket name



# File 'lib/aliyun/oss/protocol.rb', line 399

def delete_bucket_lifecycle(name)
  logger.info("Begin delete bucket lifecycle, name: #{name}")

  sub_res = {'lifecycle' => nil}
  @http.delete({:bucket => name, :sub_res => sub_res})

  logger.info("Done delete bucket lifecycle")
end

#delete_bucket_logging(name) ⇒ Object

Delete bucket logging settings, a.k.a. disable bucket logging

Parameters:

  • name (String)

    the bucket name



# File 'lib/aliyun/oss/protocol.rb', line 204

def delete_bucket_logging(name)
  logger.info("Begin delete bucket logging, name: #{name}")

  sub_res = {'logging' => nil}
  @http.delete({:bucket => name, :sub_res => sub_res})

  logger.info("Done delete bucket logging")
end

#delete_bucket_website(name) ⇒ Object

Delete bucket website settings

Parameters:

  • name (String)

    the bucket name



# File 'lib/aliyun/oss/protocol.rb', line 268

def delete_bucket_website(name)
  logger.info("Begin delete bucket website, name: #{name}")

  sub_res = {'website' => nil}
  @http.delete({:bucket => name, :sub_res => sub_res})

  logger.info("Done delete bucket website")
end

#delete_object(bucket_name, object_name) ⇒ Object

Delete an object from the bucket

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name



# File 'lib/aliyun/oss/protocol.rb', line 861

def delete_object(bucket_name, object_name)
  logger.debug("Begin delete object, bucket: #{bucket_name}, "\
               "object:  #{object_name}")

  @http.delete({:bucket => bucket_name, :object => object_name})

  logger.debug("Done delete object")
end

#get_access_key_id ⇒ String

Get user’s access key id

Returns:

  • (String)

    the access key id



# File 'lib/aliyun/oss/protocol.rb', line 1309

def get_access_key_id
  @config.access_key_id
end

#get_bucket_acl(name) ⇒ String

Get bucket acl

Parameters:

  • name (String)

    the bucket name

Returns:

  • (String)

    the acl of this bucket



# File 'lib/aliyun/oss/protocol.rb', line 134

def get_bucket_acl(name)
  logger.info("Begin get bucket acl, name: #{name}")

  sub_res = {'acl' => nil}
  _, body = @http.get({:bucket => name, :sub_res => sub_res})

  doc = parse_xml(body)
  acl = get_node_text(doc.at_css("AccessControlList"), 'Grant')
  logger.info("Done get bucket acl")

  acl
end

#get_bucket_cors(name) ⇒ Array<OSS::CORSRule>

Get bucket CORS rules

Parameters:

  • name (String)

    the bucket name

Returns:

  • (Array<OSS::CORSRule>)

    the CORS rules of the bucket

# File 'lib/aliyun/oss/protocol.rb', line 442

def get_bucket_cors(name)
  logger.info("Begin get bucket cors, bucket: #{name}")

  sub_res = {'cors' => nil}
  _, body = @http.get({:bucket => name, :sub_res => sub_res})

  doc = parse_xml(body)
  rules = []

  doc.css("CORSRule").map do |n|
    allowed_origins = n.css("AllowedOrigin").map(&:text)
    allowed_methods = n.css("AllowedMethod").map(&:text)
    allowed_headers = n.css("AllowedHeader").map(&:text)
    expose_headers = n.css("ExposeHeader").map(&:text)
    max_age_seconds = get_node_text(n, 'MaxAgeSeconds', &:to_i)

    rules << CORSRule.new(
      :allowed_origins => allowed_origins,
      :allowed_methods => allowed_methods,
      :allowed_headers => allowed_headers,
      :expose_headers => expose_headers,
      :max_age_seconds => max_age_seconds)
  end

  logger.info("Done get bucket cors")

  rules
end

#get_bucket_lifecycle(name) ⇒ Array<OSS::LifeCycleRule>

Get bucket lifecycle settings

Parameters:

  • name (String)

    the bucket name

Returns:

  • (Array<OSS::LifeCycleRule>)

    the bucket's lifecycle rules

# File 'lib/aliyun/oss/protocol.rb', line 369

def get_bucket_lifecycle(name)
  logger.info("Begin get bucket lifecycle, name: #{name}")

  sub_res = {'lifecycle' => nil}
  _, body = @http.get({:bucket => name, :sub_res => sub_res})

  doc = parse_xml(body)
  rules = doc.css("Rule").map do |n|
    days = n.at_css("Expiration Days")
    date = n.at_css("Expiration Date")

    if (days && date) || (!days && !date)
      fail ClientError, "We can only have one of Date and Days for expiry."
    end

    LifeCycleRule.new(
      :id => get_node_text(n, 'ID'),
      :prefix => get_node_text(n, 'Prefix'),
      :enable => get_node_text(n, 'Status') { |x| x == 'Enabled' },
      :expiry => days ? days.text.to_i : Date.parse(date.text)
    )
  end
  logger.info("Done get bucket lifecycle")

  rules
end

#get_bucket_logging(name) ⇒ BucketLogging

Get bucket logging settings

Parameters:

  • name (String)

    the bucket name

Returns:

  • (BucketLogging)

    the bucket logging settings

# File 'lib/aliyun/oss/protocol.rb', line 181

def get_bucket_logging(name)
  logger.info("Begin get bucket logging, name: #{name}")

  sub_res = {'logging' => nil}
  _, body = @http.get({:bucket => name, :sub_res => sub_res})

  doc = parse_xml(body)
  opts = {:enable => false}

  logging_node = doc.at_css("LoggingEnabled")
  opts.update(
    :target_bucket => get_node_text(logging_node, 'TargetBucket'),
    :target_prefix => get_node_text(logging_node, 'TargetPrefix')
  )
  opts[:enable] = true if opts[:target_bucket]

  logger.info("Done get bucket logging")

  BucketLogging.new(opts)
end

#get_bucket_referer(name) ⇒ BucketReferer

Get bucket referer

Parameters:

  • name (String)

    the bucket name

Returns:

  • (BucketReferer)

    the bucket referer settings

# File 'lib/aliyun/oss/protocol.rb', line 306

def get_bucket_referer(name)
  logger.info("Begin get bucket referer, name: #{name}")

  sub_res = {'referer' => nil}
  _, body = @http.get({:bucket => name, :sub_res => sub_res})

  doc = parse_xml(body)
  opts = {
    :allow_empty =>
      get_node_text(doc.root, 'AllowEmptyReferer', &:to_bool),
    :whitelist => doc.css("RefererList Referer").map(&:text)
  }

  logger.info("Done get bucket referer")

  BucketReferer.new(opts)
end

#get_bucket_website(name) ⇒ BucketWebsite

Get bucket website settings

Parameters:

  • name (String)

    the bucket name

Returns:

  • (BucketWebsite)

    the bucket website settings

# File 'lib/aliyun/oss/protocol.rb', line 248

def get_bucket_website(name)
  logger.info("Begin get bucket website, name: #{name}")

  sub_res = {'website' => nil}
  _, body = @http.get({:bucket => name, :sub_res => sub_res})

  opts = {:enable => true}
  doc = parse_xml(body)
  opts.update(
    :index => get_node_text(doc.at_css('IndexDocument'), 'Suffix'),
    :error => get_node_text(doc.at_css('ErrorDocument'), 'Key')
  )

  logger.info("Done get bucket website")

  BucketWebsite.new(opts)
end

#get_object(bucket_name, object_name, opts = {}) {|String| ... } ⇒ OSS::Object

Note:

Users can get the whole object or only part of it by specifying a bytes range;

Note:

Users can specify conditions for getting the object, such as if-modified-since, if-unmodified-since, if-match-etag and if-unmatch-etag. If the object fails to meet the conditions, it will not be returned;

Note:

Users can instruct the server to rewrite response headers such as content-type and content-encoding when getting the object by specifying the :rewrite options. The specified headers will be returned instead of the object's original properties.

Get an object from the bucket. A block is required to handle the object data chunks.

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • opts (Hash) (defaults to: {})

    options

Options Hash (opts):

  • :range (Array<Integer>)

    bytes range to get from the object, in the format: xx-yy

  • :condition (Hash)

    preconditions to get the object

    • :if_modified_since (Time) get the object if its modified time is later than specified

    • :if_unmodified_since (Time) get the object if it has not been modified since the specified time

    • :if_match_etag (String) get the object if its etag matches the specified value

    • :if_unmatch_etag (String) get the object if its etag does not match the specified value

  • :rewrite (Hash)

    response headers to rewrite

    • :content_type (String) the Content-Type header

    • :content_language (String) the Content-Language header

    • :expires (Time) the Expires header

    • :cache_control (String) the Cache-Control header

    • :content_disposition (String) the Content-Disposition header

    • :content_encoding (String) the Content-Encoding header

Yields:

  • (String)

    it gives the data chunks of the object to the block

Returns:

  • (OSS::Object)

    the object meta (the object data is handed to the block)

# File 'lib/aliyun/oss/protocol.rb', line 704

def get_object(bucket_name, object_name, opts = {}, &block)
  logger.debug("Begin get object, bucket: #{bucket_name}, "\
               "object: #{object_name}")

  range = opts[:range]
  conditions = opts[:condition]
  rewrites = opts[:rewrite]

  headers = {}
  headers['Range'] = get_bytes_range(range) if range
  headers.merge!(get_conditions(conditions)) if conditions

  sub_res = {}
  if rewrites
    [ :content_type,
      :content_language,
      :cache_control,
      :content_disposition,
      :content_encoding
    ].each do |k|
      key = "response-#{k.to_s.sub('_', '-')}"
      sub_res[key] = rewrites[k] if rewrites.key?(k)
    end
    sub_res["response-expires"] =
      rewrites[:expires].httpdate if rewrites.key?(:expires)
  end

  h, _ = @http.get(
       {:bucket => bucket_name, :object => object_name,
        :sub_res => sub_res},
       {:headers => headers}
     ) { |chunk| yield chunk if block_given? }

  metas = {}
  meta_prefix = 'x_oss_meta_'
  h.select { |k, _| k.to_s.start_with?(meta_prefix) }
    .each { |k, v| metas[k.to_s.sub(meta_prefix, '')] = v.to_s }

  obj = Object.new(
    :key => object_name,
    :type => h[:x_oss_object_type],
    :size => wrap(h[:content_length], &:to_i),
    :etag => h[:etag],
    :metas => metas,
    :last_modified => wrap(h[:last_modified]) { |x| Time.parse(x) },
    :content_type => h[:content_type])

  logger.debug("Done get object")

  obj
end
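
A sketch that streams the first kilobyte of an object into a local file (protocol, names and paths are placeholders):

  File.open('/tmp/download.bin', 'wb') do |file|
    protocol.get_object('my-bucket', 'my-object', :range => [0, 1024]) do |chunk|
      file.write(chunk)
    end
  end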

#get_object_acl(bucket_name, object_name) ⇒ Object

Get object acl

Returns:

  • (String)

    the object's acl. See ACL

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name



# File 'lib/aliyun/oss/protocol.rb', line 940

def get_object_acl(bucket_name, object_name)
  logger.debug("Begin get object acl, bucket: #{bucket_name}, "\
               "object: #{object_name}")

  sub_res = {'acl' => nil}
  _, body = @http.get(
       {bucket: bucket_name, object: object_name, sub_res: sub_res})

  doc = parse_xml(body)
  acl = get_node_text(doc.at_css("AccessControlList"), 'Grant')

  logger.debug("Done get object acl")

  acl
end

#get_object_cors(bucket_name, object_name, origin, method, headers = []) ⇒ CORSRule

Note:

this is usually used by browsers to make a “preflight” request

Get object CORS rule

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • origin (String)

    the Origin of the request

  • method (String)

    the method to request access: Access-Control-Request-Method

  • headers (Array<String>) (defaults to: [])

    the headers to request access: Access-Control-Request-Headers

Returns:

  • (CORSRule)

    the CORS rule of the object



# File 'lib/aliyun/oss/protocol.rb', line 966

def get_object_cors(bucket_name, object_name, origin, method, headers = [])
  logger.debug("Begin get object cors, bucket: #{bucket_name}, object: "\
               "#{object_name}, origin: #{origin}, method: #{method}, "\
               "headers: #{headers.join(',')}")

  h = {
    'Origin' => origin,
    'Access-Control-Request-Method' => method,
    'Access-Control-Request-Headers' => headers.join(',')
  }

  return_headers, _ = @http.options(
                    {:bucket => bucket_name, :object => object_name},
                    {:headers => h})

  logger.debug("Done get object cors")

  CORSRule.new(
    :allowed_origins => return_headers[:access_control_allow_origin],
    :allowed_methods => return_headers[:access_control_allow_methods],
    :allowed_headers => return_headers[:access_control_allow_headers],
    :expose_headers => return_headers[:access_control_expose_headers],
    :max_age_seconds => return_headers[:access_control_max_age]
  )
end

#get_object_meta(bucket_name, object_name, opts = {}) ⇒ OSS::Object

Note:

Users can specify conditions for getting the object, such as if-modified-since, if-unmodified-since, if-match-etag and if-unmatch-etag. If the object fails to meet the conditions, it will not be returned.

Get the object meta rather than the whole object.

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • opts (Hash) (defaults to: {})

    options

Options Hash (opts):

  • :condition (Hash)

    preconditions to get the object meta. The same as #get_object

Returns:

  • (OSS::Object)

    the object meta

# File 'lib/aliyun/oss/protocol.rb', line 768

def get_object_meta(bucket_name, object_name, opts = {})
  logger.debug("Begin get object meta, bucket: #{bucket_name}, "\
               "object: #{object_name}, options: #{opts}")

  headers = {}
  headers.merge!(get_conditions(opts[:condition])) if opts[:condition]

  h, _ = @http.head(
       {:bucket => bucket_name, :object => object_name},
       {:headers => headers})

  metas = {}
  meta_prefix = 'x_oss_meta_'
  h.select { |k, _| k.to_s.start_with?(meta_prefix) }
    .each { |k, v| metas[k.to_s.sub(meta_prefix, '')] = v.to_s }

  obj = Object.new(
    :key => object_name,
    :type => h[:x_oss_object_type],
    :size => wrap(h[:content_length], &:to_i),
    :etag => h[:etag],
    :metas => metas,
    :last_modified => wrap(h[:last_modified]) { |x| Time.parse(x) },
    :content_type => h[:content_type])

  logger.debug("Done get object meta")

  obj
end

#get_request_url(bucket, object = nil) ⇒ String

Get bucket/object url

Parameters:

  • bucket (String)

    the bucket name

  • object (String) (defaults to: nil)

    the object name

Returns:

  • (String)

    url for the bucket/object



# File 'lib/aliyun/oss/protocol.rb', line 1303

def get_request_url(bucket, object = nil)
  @http.get_request_url(bucket, object)
end

#initiate_multipart_upload(bucket_name, object_name, opts = {}) ⇒ String

Initiate a multipart uploading transaction

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • opts (Hash) (defaults to: {})

    options

Options Hash (opts):

  • :content_type (String)

    the HTTP Content-Type for the file. If not specified, the client will try to determine the type itself and fall back to HTTP::DEFAULT_CONTENT_TYPE if it fails to do so

  • :metas (Hash<Symbol, String>)

    key-value pairs that serve as the object meta which will be stored together with the object

Returns:

  • (String)

    the upload id (txn id) of the new transaction

# File 'lib/aliyun/oss/protocol.rb', line 1008

def initiate_multipart_upload(bucket_name, object_name, opts = {})
  logger.info("Begin initiate multipart upload, bucket: "\
              "#{bucket_name}, object: #{object_name}, options: #{opts}")

  sub_res = {'uploads' => nil}
  headers = {'Content-Type' => opts[:content_type]}
  (opts[:metas] || {})
    .each { |k, v| headers["x-oss-meta-#{k.to_s}"] = v.to_s }

  _, body = @http.post(
       {:bucket => bucket_name, :object => object_name,
        :sub_res => sub_res},
       {:headers => headers})

  doc = parse_xml(body)
  txn_id = get_node_text(doc.root, 'UploadId')

  logger.info("Done initiate multipart upload: #{txn_id}.")

  txn_id
end

#list_buckets(opts = {}) ⇒ Array<Bucket>, Hash

List all the buckets.

Parameters:

  • opts (Hash) (defaults to: {})

    options

Options Hash (opts):

  • :prefix (String)

    return only those buckets prefixed with it if specified

  • :marker (String)

    return only buckets whose names come after the marker (exclusive). All buckets are sorted alphabetically by name

  • :limit (Integer)

    return only the first N buckets if specified

Returns:

  • (Array<Bucket>, Hash)

    the returned buckets and a hash including the next tokens, which includes:

    • :prefix [String] the prefix used

    • :delimiter [String] the delimiter used

    • :marker [String] the marker used

    • :limit [Integer] the limit used

    • :next_marker [String] marker to continue list buckets

    • :truncated [Boolean] whether there are more buckets to be returned



# File 'lib/aliyun/oss/protocol.rb', line 43

def list_buckets(opts = {})
  logger.info("Begin list buckets, options: #{opts}")

  params = {
    'prefix' => opts[:prefix],
    'marker' => opts[:marker],
    'max-keys' => opts[:limit]
  }.reject { |_, v| v.nil? }

  _, body = @http.get( {}, {:query => params})
  doc = parse_xml(body)

  buckets = doc.css("Buckets Bucket").map do |node|
    Bucket.new(
      {
        :name => get_node_text(node, "Name"),
        :location => get_node_text(node, "Location"),
        :creation_time =>
          get_node_text(node, "CreationDate") { |t| Time.parse(t) }
      }, self
    )
  end

  more = {
    :prefix => 'Prefix',
    :limit => 'MaxKeys',
    :marker => 'Marker',
    :next_marker => 'NextMarker',
    :truncated => 'IsTruncated'
  }.reduce({}) { |h, (k, v)|
    value = get_node_text(doc.root, v)
    value.nil?? h : h.merge(k => value)
  }

  update_if_exists(
    more, {
      :limit => ->(x) { x.to_i },
      :truncated => ->(x) { x.to_bool }
    }
  )

  logger.info("Done list buckets, buckets: #{buckets}, more: #{more}")

  [buckets, more]
end
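
A pagination sketch that walks all buckets using the returned tokens (protocol is a placeholder; it assumes Bucket exposes a name reader, as suggested by the attributes it is constructed with):

  opts = {:limit => 100}
  loop do
    buckets, more = protocol.list_buckets(opts)
    buckets.each { |b| puts b.name }
    break unless more[:truncated]
    opts[:marker] = more[:next_marker]
  end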

#list_multipart_uploads(bucket_name, opts = {}) ⇒ Array<Multipart::Transaction>, Hash

Get a list of all the on-going multipart uploading transactions, that is, those that have been started and not yet completed or aborted.

Parameters:

  • bucket_name (String)

    the bucket name

  • opts (Hash) (defaults to: {})

    options:

Options Hash (opts):

  • :id_marker (String)

    return only those transactions with txn id after :id_marker

  • :key_marker (String)

    the object key marker for a multipart upload transaction.

    1. if :id_marker is not set, return only those transactions with object key after :key_marker;

    2. if :id_marker is set, return only those transactions with object key equal to :key_marker and txn id after :id_marker

  • :prefix (String)

    the prefix of the object key for a multipart upload transaction. If set, only return those transactions whose object keys are prefixed with it

  • :encoding (String)

    the encoding of object key in the response body. Only KeyEncoding::URL is supported now.

Returns:

  • (Array<Multipart::Transaction>, Hash)

    the returned transactions and a hash including next tokens, which includes:

    • :prefix [String] the prefix used

    • :limit [Integer] the limit used

    • :id_marker [String] the upload id marker used

    • :next_id_marker [String] upload id marker to continue list multipart transactions

    • :key_marker [String] the object key marker used

    • :next_key_marker [String] object key marker to continue list multipart transactions

    • :truncated [Boolean] whether there are more transactions to be returned

    • :encoding [String] the object key encoding used



# File 'lib/aliyun/oss/protocol.rb', line 1178

def list_multipart_uploads(bucket_name, opts = {})
  logger.debug("Begin list multipart uploads, "\
               "bucket: #{bucket_name}, opts: #{opts}")

  sub_res = {'uploads' => nil}
  params = {
    'prefix' => opts[:prefix],
    'upload-id-marker' => opts[:id_marker],
    'key-marker' => opts[:key_marker],
    'max-uploads' => opts[:limit],
    'encoding-type' => opts[:encoding]
  }.reject { |_, v| v.nil? }

  _, body = @http.get(
    {:bucket => bucket_name, :sub_res => sub_res},
    {:query => params})

  doc = parse_xml(body)

  encoding = get_node_text(doc.root, 'EncodingType')

  txns = doc.css("Upload").map do |node|
    Multipart::Transaction.new(
      :id => get_node_text(node, "UploadId"),
      :object => get_node_text(node, "Key") { |x| decode_key(x, encoding) },
      :bucket => bucket_name,
      :creation_time =>
        get_node_text(node, "Initiated") { |t| Time.parse(t) }
    )
  end || []

  more = {
    :prefix => 'Prefix',
    :limit => 'MaxUploads',
    :id_marker => 'UploadIdMarker',
    :next_id_marker => 'NextUploadIdMarker',
    :key_marker => 'KeyMarker',
    :next_key_marker => 'NextKeyMarker',
    :truncated => 'IsTruncated',
    :encoding => 'EncodingType'
  }.reduce({}) { |h, (k, v)|
    value = get_node_text(doc.root, v)
    value.nil?? h : h.merge(k => value)
  }

  update_if_exists(
    more, {
      :limit => ->(x) { x.to_i },
      :truncated => ->(x) { x.to_bool },
      :key_marker => ->(x) { decode_key(x, encoding) },
      :next_key_marker => ->(x) { decode_key(x, encoding) }
    }
  )

  logger.debug("Done list multipart transactions")

  [txns, more]
end

#list_objects(bucket_name, opts = {}) ⇒ Array<Objects>, Hash

List objects in a bucket.

Examples:

Assume we have the following objects:
   /foo/bar/obj1
   /foo/bar/obj2
   ...
   /foo/bar/obj9999999
   /foo/xxx/
Using 'foo/' as the prefix and '/' as the delimiter, the common
prefixes we get are 'foo/bar/' and 'foo/xxx/'. They are
coincidentally the sub-directories under 'foo/'. Using a
delimiter we avoid listing all the objects, whose number may
be large.

Parameters:

  • bucket_name (String)

    the bucket name

  • opts (Hash) (defaults to: {})

    options

Options Hash (opts):

  • :prefix (String)

    return only those objects whose keys are prefixed with it if specified

  • :marker (String)

    return only objects whose keys come after the marker (exclusive). All objects are sorted alphabetically by key

  • :limit (Integer)

    return only the first N objects if specified

  • :delimiter (String)

    the delimiter to get common prefixes of all objects

  • :encoding (String)

    the encoding of object key in the response body. Only KeyEncoding::URL is supported now.

Returns:

  • (Array<Objects>, Hash)

    the returned objects and a hash including the next tokens, which includes:

    • :common_prefixes [String] the common prefixes returned

    • :prefix [String] the prefix used

    • :delimiter [String] the delimiter used

    • :marker [String] the marker used

    • :limit [Integer] the limit used

    • :next_marker [String] marker to continue list objects

    • :truncated [Boolean] whether there are more objects to be returned



# File 'lib/aliyun/oss/protocol.rb', line 606

def list_objects(bucket_name, opts = {})
  logger.debug("Begin list object, bucket: #{bucket_name}, options: #{opts}")

  params = {
    'prefix' => opts[:prefix],
    'delimiter' => opts[:delimiter],
    'marker' => opts[:marker],
    'max-keys' => opts[:limit],
    'encoding-type' => opts[:encoding]
  }.reject { |_, v| v.nil? }

  _, body = @http.get({:bucket => bucket_name}, {:query => params})

  doc = parse_xml(body)

  encoding = get_node_text(doc.root, 'EncodingType')

  objects = doc.css("Contents").map do |node|
    Object.new(
      :key => get_node_text(node, "Key") { |x| decode_key(x, encoding) },
      :type => get_node_text(node, "Type"),
      :size => get_node_text(node, "Size", &:to_i),
      :etag => get_node_text(node, "ETag"),
      :last_modified =>
        get_node_text(node, "LastModified") { |x| Time.parse(x) }
    )
  end || []

  more = {
    :prefix => 'Prefix',
    :delimiter => 'Delimiter',
    :limit => 'MaxKeys',
    :marker => 'Marker',
    :next_marker => 'NextMarker',
    :truncated => 'IsTruncated',
    :encoding => 'EncodingType'
  }.reduce({}) { |h, (k, v)|
    value = get_node_text(doc.root, v)
    value.nil?? h : h.merge(k => value)
  }

  update_if_exists(
    more, {
      :limit => ->(x) { x.to_i },
      :truncated => ->(x) { x.to_bool },
      :delimiter => ->(x) { decode_key(x, encoding) },
      :marker => ->(x) { decode_key(x, encoding) },
      :next_marker => ->(x) { decode_key(x, encoding) }
    }
  )

  common_prefixes = []
  doc.css("CommonPrefixes Prefix").map do |node|
    common_prefixes << decode_key(node.text, encoding)
  end
  more[:common_prefixes] = common_prefixes unless common_prefixes.empty?

  logger.debug("Done list object. objects: #{objects}, more: #{more}")

  [objects, more]
end
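
A sketch that lists the 'sub-directories' under a prefix with a delimiter, following the example above (protocol and bucket name are placeholders; it assumes Object exposes a key reader):

  objects, more = protocol.list_objects('my-bucket',
                                        :prefix => 'foo/', :delimiter => '/')
  objects.each { |o| puts o.key }              # objects directly under foo/
  puts (more[:common_prefixes] || []).inspect  # e.g. ["foo/bar/", "foo/xxx/"]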

#list_parts(bucket_name, object_name, txn_id, opts = {}) ⇒ Array<Multipart::Part>, Hash

Get a list of parts that are successfully uploaded in a transaction.

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • txn_id (String)

    the upload id

  • opts (Hash) (defaults to: {})

    options:

Options Hash (opts):

  • :marker (Integer)

    the part number marker after which to return parts

  • :limit (Integer)

    the max number of parts to return

Returns:

  • (Array<Multipart::Part>, Hash)

    the returned parts and a hash including next tokens, which includes:

    • :marker [Integer] the marker used

    • :limit [Integer] the limit used

    • :next_marker [Integer] marker to continue list parts

    • :truncated [Boolean] whether there are more parts to be returned



# File 'lib/aliyun/oss/protocol.rb', line 1251

def list_parts(bucket_name, object_name, txn_id, opts = {})
  logger.debug("Begin list parts, bucket: #{bucket_name}, object: "\
               "#{object_name}, txn id: #{txn_id}, options: #{opts}")

  sub_res = {'uploadId' => txn_id}
  params = {
    'part-number-marker' => opts[:marker],
    'max-parts' => opts[:limit],
    'encoding-type' => opts[:encoding]
  }.reject { |_, v| v.nil? }

  _, body = @http.get(
    {:bucket => bucket_name, :object => object_name, :sub_res => sub_res},
    {:query => params})

  doc = parse_xml(body)
  parts = doc.css("Part").map do |node|
    Multipart::Part.new(
      :number => get_node_text(node, 'PartNumber', &:to_i),
      :etag => get_node_text(node, 'ETag'),
      :size => get_node_text(node, 'Size', &:to_i),
      :last_modified =>
        get_node_text(node, 'LastModified') { |x| Time.parse(x) })
  end || []

  more = {
    :limit => 'MaxParts',
    :marker => 'PartNumberMarker',
    :next_marker => 'NextPartNumberMarker',
    :truncated => 'IsTruncated',
    :encoding => 'EncodingType'
  }.reduce({}) { |h, (k, v)|
    value = get_node_text(doc.root, v)
    value.nil?? h : h.merge(k => value)
  }

  update_if_exists(
    more, {
      :limit => ->(x) { x.to_i },
      :truncated => ->(x) { x.to_bool }
    }
  )

  logger.debug("Done list parts, parts: #{parts}, more: #{more}")

  [parts, more]
end
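
A usage sketch (protocol, names and txn_id are placeholders; the number and etag readers are the ones used by complete_multipart_upload above):

  parts, more = protocol.list_parts('my-bucket', 'big-file', txn_id, :limit => 100)
  parts.each { |p| puts "part #{p.number}: #{p.etag}" }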

#put_bucket_acl(name, acl) ⇒ Object

Put bucket acl

Parameters:

  • name (String)

    the bucket name

  • acl (String)

    the bucket acl

See Also:



# File 'lib/aliyun/oss/protocol.rb', line 119

def put_bucket_acl(name, acl)
  logger.info("Begin put bucket acl, name: #{name}, acl: #{acl}")

  sub_res = {'acl' => nil}
  headers = {'x-oss-acl' => acl}
  @http.put(
    {:bucket => name, :sub_res => sub_res},
    {:headers => headers, :body => nil})

  logger.info("Done put bucket acl")
end

#put_bucket_lifecycle(name, rules) ⇒ Object

Put bucket lifecycle settings

Parameters:

  • name (String)

    the bucket name

  • rules (Array<OSS::LifeCycleRule>)

    the lifecycle rules

See Also:



# File 'lib/aliyun/oss/protocol.rb', line 329

def put_bucket_lifecycle(name, rules)
  logger.info("Begin put bucket lifecycle, name: #{name}, rules: "\
               "#{rules.map { |r| r.to_s }}")

  sub_res = {'lifecycle' => nil}
  body = Nokogiri::XML::Builder.new do |xml|
    xml.LifecycleConfiguration {
      rules.each do |r|
        xml.Rule {
          xml.ID r.id if r.id
          xml.Status r.enabled? ? 'Enabled' : 'Disabled'

          xml.Prefix r.prefix
          xml.Expiration {
            if r.expiry.is_a?(Date)
              xml.Date Time.utc(
                         r.expiry.year, r.expiry.month, r.expiry.day)
                        .iso8601.sub('Z', '.000Z')
            elsif r.expiry.is_a?(Fixnum)
              xml.Days r.expiry
            else
              fail ClientError, "Expiry must be a Date or Fixnum."
            end
          }
        }
      end
    }
  end.to_xml

  @http.put(
    {:bucket => name, :sub_res => sub_res},
    {:body => body})

  logger.info("Done put bucket lifecycle")
end
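
A usage sketch (protocol and bucket name are placeholders); it assumes LifeCycleRule accepts the same attribute keys that get_bucket_lifecycle builds it with (:id, :prefix, :enable, :expiry), where :expiry is either a number of days or a Date:

  # Expire everything under 'logs/' 30 days after creation.
  rule = Aliyun::OSS::LifeCycleRule.new(
    :prefix => 'logs/', :enable => true, :expiry => 30)
  protocol.put_bucket_lifecycle('my-bucket', [rule])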

#put_bucket_logging(name, logging) ⇒ Object

Put bucket logging settings

Parameters:

  • name (String)

    the bucket name

  • logging (BucketLogging)

    the bucket logging settings

# File 'lib/aliyun/oss/protocol.rb', line 150

def put_bucket_logging(name, logging)
  logger.info("Begin put bucket logging, "\
              "name: #{name}, logging: #{logging}")

  if logging.enabled? && !logging.target_bucket
    fail ClientError,
         "Must specify target bucket when enabling bucket logging."
  end

  sub_res = {'logging' => nil}
  body = Nokogiri::XML::Builder.new do |xml|
    xml.BucketLoggingStatus {
      if logging.enabled?
        xml.LoggingEnabled {
          xml.TargetBucket logging.target_bucket
          xml.TargetPrefix logging.target_prefix if logging.target_prefix
        }
      end
    }
  end.to_xml

  @http.put(
    {:bucket => name, :sub_res => sub_res},
    {:body => body})

  logger.info("Done put bucket logging")
end

#put_bucket_referer(name, referer) ⇒ Object

Put bucket referer

Parameters:

  • name (String)

    the bucket name

  • referer (BucketReferer)

    the bucket referer settings

# File 'lib/aliyun/oss/protocol.rb', line 280

def put_bucket_referer(name, referer)
  logger.info("Begin put bucket referer, "\
              "name: #{name}, referer: #{referer}")

  sub_res = {'referer' => nil}
  body = Nokogiri::XML::Builder.new do |xml|
    xml.RefererConfiguration {
      xml.AllowEmptyReferer referer.allow_empty?
      xml.RefererList {
        (referer.whitelist or []).each do |r|
          xml.Referer r
        end
      }
    }
  end.to_xml

  @http.put(
    {:bucket => name, :sub_res => sub_res},
    {:body => body})

  logger.info("Done put bucket referer")
end

#put_bucket_website(name, website) ⇒ Object

Put bucket website settings

Parameters:

  • name (String)

    the bucket name

  • website (BucketWebsite)

    the bucket website settings

# File 'lib/aliyun/oss/protocol.rb', line 216

def put_bucket_website(name, website)
  logger.info("Begin put bucket website, "\
              "name: #{name}, website: #{website}")

  unless website.index
    fail ClientError, "Must specify index to put bucket website."
  end

  sub_res = {'website' => nil}
  body = Nokogiri::XML::Builder.new do |xml|
    xml.WebsiteConfiguration {
      xml.IndexDocument {
        xml.Suffix website.index
      }
      if website.error
        xml.ErrorDocument {
          xml.Key website.error
        }
      end
    }
  end.to_xml

  @http.put(
    {:bucket => name, :sub_res => sub_res},
    {:body => body})

  logger.info("Done put bucket website")
end

#put_object(bucket_name, object_name, opts = {}) {|HTTP::StreamWriter| ... } ⇒ Object

Put an object to the specified bucket. A block is required to provide the object data.

Examples:

chunk = get_chunk
put_object('bucket', 'object') { |sw| sw.write(chunk) }

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • opts (Hash) (defaults to: {})

    Options

Options Hash (opts):

  • :content_type (String)

    the HTTP Content-Type for the file. If not specified, the client will try to determine the type itself and fall back to HTTP::DEFAULT_CONTENT_TYPE if it fails to do so

  • :metas (Hash<Symbol, String>)

    key-value pairs that serve as the object meta which will be stored together with the object

Yields:

  • (HTTP::StreamWriter)

    a stream writer is yielded to the caller, which can write chunks of data to it in a streaming fashion



# File 'lib/aliyun/oss/protocol.rb', line 514

def put_object(bucket_name, object_name, opts = {}, &block)
  logger.debug("Begin put object, bucket: #{bucket_name}, object: "\
               "#{object_name}, options: #{opts}")

  headers = {'Content-Type' => opts[:content_type]}
  (opts[:metas] || {})
    .each { |k, v| headers["x-oss-meta-#{k.to_s}"] = v.to_s }

  @http.put(
    {:bucket => bucket_name, :object => object_name},
    {:headers => headers, :body => HTTP::StreamPayload.new(&block)})

  logger.debug('Done put object')
end
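
A sketch that streams a local file up in fixed-size chunks (protocol, names and paths are placeholders):

  protocol.put_object('my-bucket', 'photo.jpg',
                      :content_type => 'image/jpeg') do |stream|
    File.open('/tmp/photo.jpg', 'rb') do |file|
      while (chunk = file.read(16 * 1024))
        stream.write(chunk)
      end
    end
  end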

#put_object_acl(bucket_name, object_name, acl) ⇒ Object

Put object acl

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • acl (String)

    the object’s ACL. See ACL



# File 'lib/aliyun/oss/protocol.rb', line 922

def put_object_acl(bucket_name, object_name, acl)
  logger.debug("Begin update object acl, bucket: #{bucket_name}, "\
               "object: #{object_name}, acl: #{acl}")

  sub_res = {'acl' => nil}
  headers = {'x-oss-object-acl' => acl}

  @http.put(
    {:bucket => bucket_name, :object => object_name, :sub_res => sub_res},
    {:headers => headers})

  logger.debug("Done update object acl")
end

#set_bucket_cors(name, rules) ⇒ Object

Set bucket CORS(Cross-Origin Resource Sharing) rules

Parameters:

  • name (String)

    the bucket name

  • rules (Array<OSS::CORSRule>)

    the CORS rules

See Also:



# File 'lib/aliyun/oss/protocol.rb', line 413

def set_bucket_cors(name, rules)
  logger.info("Begin set bucket cors, bucket: #{name}, rules: "\
               "#{rules.map { |r| r.to_s }.join(';')}")

  sub_res = {'cors' => nil}
  body = Nokogiri::XML::Builder.new do |xml|
    xml.CORSConfiguration {
      rules.each do |r|
        xml.CORSRule {
          r.allowed_origins.each { |x| xml.AllowedOrigin x }
          r.allowed_methods.each { |x| xml.AllowedMethod x }
          r.allowed_headers.each { |x| xml.AllowedHeader x }
          r.expose_headers.each { |x| xml.ExposeHeader x }
          xml.MaxAgeSeconds r.max_age_seconds if r.max_age_seconds
        }
      end
    }
  end.to_xml

  @http.put(
    {:bucket => name, :sub_res => sub_res},
    {:body => body})

  logger.info("Done delete bucket lifecycle")
end

#sign(string_to_sign) ⇒ String

Sign a string using the stored access key secret

Parameters:

  • string_to_sign (String)

    the string to sign

Returns:

  • (String)

    the signature

# File 'lib/aliyun/oss/protocol.rb', line 1316

def sign(string_to_sign)
  Util.sign(@config.access_key_secret, string_to_sign)
end

#upload_part(bucket_name, object_name, txn_id, part_no) {|HTTP::StreamWriter| ... } ⇒ Object

Upload a part in a multipart uploading transaction.

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • txn_id (String)

    the upload id

  • part_no (Integer)

    the part number

Yields:

  • (HTTP::StreamWriter)

    a stream writer is yielded to the caller, which can write chunks of data to it in a streaming fashion



# File 'lib/aliyun/oss/protocol.rb', line 1038

def upload_part(bucket_name, object_name, txn_id, part_no, &block)
  logger.debug("Begin upload part, bucket: #{bucket_name}, object: "\
               "#{object_name}, txn id: #{txn_id}, part No: #{part_no}")

  sub_res = {'partNumber' => part_no, 'uploadId' => txn_id}
  headers, _ = @http.put(
    {:bucket => bucket_name, :object => object_name, :sub_res => sub_res},
    {:body => HTTP::StreamPayload.new(&block)})

  logger.debug("Done upload part")

  Multipart::Part.new(:number => part_no, :etag => headers[:etag])
end

#upload_part_by_copy(bucket_name, object_name, txn_id, part_no, source_object, opts = {}) ⇒ Object

Upload a part in a multipart uploading transaction by copying from an existing object as the part’s content. It may copy only part of the object by specifying the bytes range to read.

Parameters:

  • bucket_name (String)

    the bucket name

  • object_name (String)

    the object name

  • txn_id (String)

    the upload id

  • part_no (Integer)

    the part number

  • source_object (String)

    the source object name to copy from

  • opts (Hash) (defaults to: {})

    options

Options Hash (opts):

  • :range (Array<Integer>)

    the bytes range to copy, in the format: [begin (inclusive), end (exclusive)]

  • :condition (Hash)

    preconditions to copy the object. See #get_object



# File 'lib/aliyun/oss/protocol.rb', line 1065

def upload_part_by_copy(
      bucket_name, object_name, txn_id, part_no, source_object, opts = {})
  logger.debug("Begin upload part by copy, bucket: #{bucket_name}, "\
               "object: #{object_name}, source object: #{source_object}"\
               "txn id: #{txn_id}, part No: #{part_no}, options: #{opts}")

  range = opts[:range]
  conditions = opts[:condition]

  if range && (!range.is_a?(Array) || range.size != 2)
    fail ClientError, "Range must be an array containing 2 Integers."
  end

  headers = {
    'x-oss-copy-source' =>
      @http.get_resource_path(bucket_name, source_object)
  }
  headers['Range'] = get_bytes_range(range) if range
  headers.merge!(get_copy_conditions(conditions)) if conditions

  sub_res = {'partNumber' => part_no, 'uploadId' => txn_id}

  headers, _ = @http.put(
    {:bucket => bucket_name, :object => object_name, :sub_res => sub_res},
    {:headers => headers})

  logger.debug("Done upload part by copy: #{source_object}.")

  Multipart::Part.new(:number => part_no, :etag => headers[:etag])
end
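
A usage sketch copying the first megabyte of an existing object as part 1 of a transaction (protocol, names and txn_id are placeholders):

  part = protocol.upload_part_by_copy(
    'my-bucket', 'big-copy', txn_id, 1, 'existing-object',
    :range => [0, 1024 * 1024])
  puts part.etag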