Method: Tus::Storage::S3#patch_file
Defined in: lib/tus/storage/s3.rb
#patch_file(uid, input, info = {}) ⇒ Object
Appends data to the specified upload in a streaming fashion, and returns the number of bytes it managed to save.
The data read from the input is first buffered in memory, and once 5MB (AWS S3's minimum allowed size for a multipart part) or more data has been retrieved, it is uploaded as the next multipart part. The method reads one chunk ahead, which lets it merge a final undersized chunk into the previous part and start reading the next chunk of input data as soon as possible, achieving streaming.
If a network error is raised during the upload to S3, the method stops reading further input data and returns the number of bytes that managed to get uploaded.
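For illustration, a caller could drive this method roughly as follows. This is a minimal sketch: the bucket name, upload ID, and input are placeholders, and it assumes the storage's companion read_info/update_info methods for loading and persisting the metadata hash that patch_file mutates (it appends to info["multipart_parts"]).

    require "stringio"
    require "tus/storage/s3"

    storage = Tus::Storage::S3.new(bucket: "my-app-uploads") # placeholder bucket

    uid   = "24e533e02ec3bc40c387f1a0e460e216"    # placeholder upload ID
    input = StringIO.new("a" * (6 * 1024 * 1024)) # 6MB, so at least one full 5MB part

    info = storage.read_info(uid)                      # metadata hash ("multipart_id", ...)
    bytes_saved = storage.patch_file(uid, input, info) # input only needs to respond to #read
    storage.update_info(uid, info)                     # persist the parts appended by patch_file

    # The caller advances the upload offset by the returned byte count.
    puts "saved #{bytes_saved} bytes"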
    # File 'lib/tus/storage/s3.rb', line 98

    def patch_file(uid, input, info = {})
      tus_info = Tus::Info.new(info)

      upload_id      = info["multipart_id"]
      part_offset    = info["multipart_parts"].count
      bytes_uploaded = 0

      jobs = []
      chunk = input.read(MIN_PART_SIZE)

      while chunk
        next_chunk = input.read(MIN_PART_SIZE)

        # merge next chunk into previous if it's smaller than minimum chunk size
        if next_chunk && next_chunk.bytesize < MIN_PART_SIZE
          chunk << next_chunk
          next_chunk.clear
          next_chunk = nil
        end

        # abort if chunk is smaller than 5MB and is not the last chunk
        if chunk.bytesize < MIN_PART_SIZE
          break if (tus_info.length && tus_info.offset) &&
                   chunk.bytesize + tus_info.offset < tus_info.length
        end

        begin
          info["multipart_parts"] << upload_part(chunk, uid, upload_id, part_offset += 1)
          bytes_uploaded += chunk.bytesize
        rescue Seahorse::Client::NetworkingError => exception
          warn "ERROR: #{exception.inspect} occurred during upload"
          break # ignore networking errors and return what client has uploaded so far
        end

        chunk.clear
        chunk = next_chunk
      end

      bytes_uploaded
    end
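The MIN_PART_SIZE constant and the upload_part private helper are not shown on this page. Below is a hedged sketch of what the helper plausibly does using aws-sdk-s3's resource interface; the object(key) accessor and the exact keys of the returned hash are assumptions, not verified gem source. The essential point, grounded in the code above, is that each part's number and ETag are recorded in info["multipart_parts"], since S3 requires both to complete a multipart upload.

    require "aws-sdk-s3"

    MIN_PART_SIZE = 5 * 1024 * 1024 # 5MB, S3's minimum size for a non-final part

    # Sketch of the private helper referenced above (assumed, not verified source):
    # uploads one part and returns the bookkeeping stored in info["multipart_parts"].
    def upload_part(body, key, upload_id, part_number)
      multipart_part = object(key).multipart_upload(upload_id).part(part_number)
      response = multipart_part.upload(body: body) # Aws::S3::MultipartUploadPart#upload

      { "part_number" => part_number, "etag" => response.etag }
    end

Here object(key) is assumed to return the Aws::S3::Object for the upload's key.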