Module: Canistor

Defined in:
lib/canistor.rb,
lib/canistor/plugin.rb,
lib/canistor/handler.rb,
lib/canistor/subject.rb,
lib/canistor/version.rb,
lib/canistor/storage/part.rb,
lib/canistor/authorization.rb,
lib/canistor/error_handler.rb,
lib/canistor/storage/bucket.rb,
lib/canistor/storage/object.rb,
lib/canistor/storage/upload.rb,
lib/canistor/storage/objects.rb,
lib/canistor/storage/delete_marker.rb,
lib/canistor/storage/bucket/settings.rb

Overview

Replacement for the HTTP handler in the AWS SDK that mocks all interaction with S3 just above the HTTP level.

The mock implementation is turned on by replacing the NetHttp handler that comes with the library with the Canistor handler.

Aws::S3::Client.remove_plugin(Seahorse::Client::Plugins::NetHttp)
Aws::S3::Client.add_plugin(Canistor::Plugin)

Canistor then needs to be configured with buckets and credentials to be useful. It can be configured either with the config method or by specifying the buckets one by one.

In the example below Canistor will have two accounts and four buckets. It also specifies which accounts can access the buckets.

Canistor.config(
  logger: Rails.logger,
  credentials: [
    {
      access_key_id: 'AKIAIXXXXXXXXXXXXXX1',
      secret_access_key: 'phRL+xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx1'
    },
    {
      access_key_id: 'AKIAIXXXXXXXXXXXXXX2',
      secret_access_key: 'phRL+xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx2'
    }
  ],
  buckets: {
    'us-east-1' => {
      'io-canistor-production-images' => {
        allow_access_keys: ['AKIAIXXXXXXXXXXXXXX1'],
        replicate_to: [
          'eu-central-1:io-canistor-production-images-replica'
        ],
        versioned: true
      },
      'io-canistor-production-books' => {
        allow_access_keys: ['AKIAIXXXXXXXXXXXXXX1', 'AKIAIXXXXXXXXXXXXXX2']
      }
    },
    'eu-central-1' => {
      'io-canistor-production-sales' => {
        allow_access_keys: ['AKIAIXXXXXXXXXXXXXX1']
      },
      'io-canistor-production-images-replica' => {
        allow_access_keys: ['AKIAIXXXXXXXXXXXXXX1'],
        versioned: true
      }
    }
  }
)
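Since config dispatches each key to the matching class-level setter (see .config below), the same setup can be expressed with individual assignments. A minimal sketch, assuming the attribute writers for logger and credentials and the buckets= method; the keys and bucket name are placeholders:

```ruby
# Each section passed to Canistor.config is forwarded to the
# corresponding class-level setter, so this is equivalent.
Canistor.logger = Rails.logger
Canistor.credentials = [
  {
    access_key_id: 'AKIAIXXXXXXXXXXXXXX1',
    secret_access_key: 'phRL+xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx1'
  }
]
Canistor.buckets = {
  'us-east-1' => {
    'io-canistor-production-images' => {
      allow_access_keys: ['AKIAIXXXXXXXXXXXXXX1']
    }
  }
}
```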

Canistor implements basic interaction with buckets and objects and verifies authentication information. It does not implement access control lists, so all accounts have full access to the buckets and objects.

It’s possible to turn on replication and versioning. Note that replication is instant in the mock, whereas on actual S3 it takes a while for objects to replicate. Not all of the replication and versioning API is implemented.

The mock can simulate a number of failures. These are triggered by setting the operation which needs to fail on the mock. For more information see [Canistor.fail].

In most cases you should configure the suite to clear the mock before running each example with [Canistor.clear].
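With RSpec, for example, that could look like the following sketch (this assumes RSpec; any test framework's per-example setup hook works the same way):

```ruby
RSpec.configure do |config|
  # Remove leftover objects and queued failures before every example
  # so tests don't leak state into each other. Credentials and bucket
  # configuration survive Canistor.clear.
  config.before(:each) do
    Canistor.clear
  end
end
```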

Defined Under Namespace

Modules: Storage

Classes: Authorization, ConfigurationError, ErrorHandler, Handler, Plugin, Subject

Constant Summary

SUPPORTED_FAILURES =
[
  :internal_server_error,
  :reset_connection,
  :bad_request,
  :fatal,
  :store
]
VERSION =
'0.5.3'

Class Attribute Summary

Class Method Summary

Class Attribute Details

.credentials ⇒ Object

Returns the value of attribute credentials.



# File 'lib/canistor.rb', line 87

def credentials
  @credentials
end

.fail(*operations) ⇒ Object (readonly)

The mock can simulate a number of failures. These are triggered by queuing the way we expect it to fail. Note that the AWS SDK already helps you recover from certain errors like :reset_connection. If you want these kinds of errors to trigger a failure you have to call #fail more times than the configured retry count.

 Canistor.fail(:reset_connection)
 Canistor.fail(
   :reset_connection,
   :reset_connection,
   :reset_connection,
   :reset_connection
 )

* reset_connection: Signals the library to handle a connection error
  (retryable)
* internal_server_error: Returns a 500 internal server error (retryable)
* bad_request: Returns a 400 bad request (not retryable)
* fatal: Signals the library to handle a fatal error (fatal)

A less common problem is when S3 reports a successful write but fails to store the file. This means the PUT to the bucket will be successful, but GET and HEAD on the object fail, because it’s not there.

Canistor.fail(:store)


# File 'lib/canistor.rb', line 209

def fail
  @fail
end

.fail_mutex ⇒ Object (readonly)

Returns the value of attribute fail_mutex.



# File 'lib/canistor.rb', line 89

def fail_mutex
  @fail_mutex
end

.logger ⇒ Object

Returns the value of attribute logger.



# File 'lib/canistor.rb', line 85

def logger
  @logger
end

.store ⇒ Object

Returns the value of attribute store.



# File 'lib/canistor.rb', line 86

def store
  @store
end

Class Method Details

.buckets=(buckets) ⇒ Object



# File 'lib/canistor.rb', line 149

def self.buckets=(buckets)
  buckets.each do |region, attributes|
    attributes.each do |bucket, options|
      bucket = create_bucket(region, bucket)
      bucket.update_settings(options)
      bucket
    end
  end
end

.clearObject

Clears the state of the mock. Leaves all the credentials and buckets but removes all objects and mocked responses.



# File 'lib/canistor.rb', line 239

def self.clear
  @fail = []
  @store.each do |region, buckets|
    buckets.each do |bucket_name, bucket|
      bucket.clear
    end
  end
end

.config(config) ⇒ Object



# File 'lib/canistor.rb', line 170

def self.config(config)
  config.each do |section, attributes|
    public_send("#{section}=", attributes)
  end
end

.create_bucket(region, bucket_name) ⇒ Object

Configures a bucket in the mock implementation. Use #update_settings on the Canistor::Storage::Bucket object returned by this method to configure who may access the bucket.



# File 'lib/canistor.rb', line 162

def self.create_bucket(region, bucket_name)
  store[region] ||= {}
  store[region][bucket_name] = Canistor::Storage::Bucket.new(
    region: region,
    name: bucket_name
  )
end
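For example, a single bucket could be set up like this (a sketch; the region, bucket name, and access key are placeholders):

```ruby
# Create the bucket in the mock store, then restrict which access
# keys may use it via the settings object mentioned above.
bucket = Canistor.create_bucket('eu-central-1', 'my-test-bucket')
bucket.update_settings(allow_access_keys: ['AKIAIXXXXXXXXXXXXXX1'])
```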

.find_bucket(region, bucket) ⇒ Object



# File 'lib/canistor.rb', line 111

def self.find_bucket(region, bucket)
  store.dig(region, bucket) || find_bucket_by_name_and_warn(region, bucket)
end

.find_bucket_by_name(bucket) ⇒ Object



# File 'lib/canistor.rb', line 127

def self.find_bucket_by_name(bucket)
  store.each do |_, buckets|
    if found = buckets[bucket]
      return found
    end
  end
  nil
end

.find_bucket_by_name_and_warn(region, bucket) ⇒ Object



# File 'lib/canistor.rb', line 115

def self.find_bucket_by_name_and_warn(region, bucket)
  found = find_bucket_by_name(bucket)
  return if found.nil?

  logger.info(
    "S3 client configured for \"#{region}\" but the bucket \"#{bucket}\" " \
    "is in \"#{found.region}\"; Please configure the proper region to " \
    "avoid multiple unnecessary redirects and signing attempts"
  ) if logger
  found
end

.find_credentials(authorization) ⇒ Object



# File 'lib/canistor.rb', line 97

def self.find_credentials(authorization)
  if authorization.access_key_id
    credentials.each do |attributes|
      if authorization.access_key_id == attributes[:access_key_id]
        return Aws::Credentials.new(
          attributes[:access_key_id],
          attributes[:secret_access_key]
        )
      end
    end
  end
  nil
end

.take_fail(operation, &block) ⇒ Object

Executes the block when the operation is in the failure queue and removes one instance of the operation.



# File 'lib/canistor.rb', line 225

def self.take_fail(operation, &block)
  fail_mutex.synchronize do
    if index = @fail.index(operation)
      begin
        block.call
      ensure
        @fail.delete_at(index)
      end
    end
  end
end
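The pattern here, a mutex-guarded queue where each matching call consumes exactly one queued entry, can be illustrated in isolation. This is a self-contained sketch of the same idea, not Canistor's own code:

```ruby
# A thread-safe failure queue: fail(*ops) enqueues operations, and
# take_fail(op) runs the block only if op is queued, consuming one
# queued instance even if the block raises.
class FailureQueue
  def initialize
    @queue = []
    @mutex = Mutex.new # Mutex ships with Ruby, no require needed
  end

  def fail(*operations)
    @mutex.synchronize { @queue.concat(operations) }
  end

  def take_fail(operation)
    @mutex.synchronize do
      if (index = @queue.index(operation))
        begin
          yield
        ensure
          # Remove exactly one queued instance of the operation.
          @queue.delete_at(index)
        end
      end
    end
  end

  def size
    @mutex.synchronize { @queue.size }
  end
end
```

This is why queuing :reset_connection four times, as in the .fail example above, survives the SDK's retries: each retried request consumes one entry, and the failure stops triggering once the queue runs dry.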