Class: Cache
Overview
Caching of objects in a Redis store
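A minimal usage sketch, assuming the `redis` gem provides the backend client; `record_id` and `expensive_lookup` are hypothetical placeholders for the caller's own key and code:

  require 'redis'
  require 'object/cache'

  Cache.backend = Redis.new(url: 'redis://localhost:6379')
  Cache.default_ttl = 600 # keep cached objects for ten minutes

  # the first call runs the block and stores the result; later calls from the
  # same call site with the same key return the cached copy
  value = Cache.new(record_id) { expensive_lookup(record_id) }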
Constant Summary
- VERSION =
'0.0.2'
Class Attribute Summary
- .backend ⇒ Object
Returns the value of attribute backend.
- .default_ttl ⇒ Object
Returns the value of attribute default_ttl.
Class Method Summary
- .delete(key) ⇒ Object
- .include?(key) ⇒ Boolean
- .new(key = nil, ttl: default_ttl) ⇒ Object
Finds the cached value for the given key, or calls the block and caches its result.
- .primary ⇒ Object
- .replica ⇒ Object
- .replicas ⇒ Object
- .update_cache(key, value, ttl: default_ttl) ⇒ Object
Class Attribute Details
.backend ⇒ Object
Returns the value of attribute backend.
# File 'lib/object/cache.rb', line 11

def backend
  @backend
end
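The backend can be a single client or, as the `primary`/`replicas` methods below suggest, a Hash that splits writes from reads. A sketch, assuming Redis clients (the URLs are placeholders):

  Cache.backend = Redis.new(url: 'redis://cache-1:6379')

  # or split writes (primary) from reads (replicas):
  Cache.backend = {
    primary:  Redis.new(url: 'redis://primary:6379'),
    replicas: [Redis.new(url: 'redis://replica-1:6379'),
               Redis.new(url: 'redis://replica-2:6379')]
  }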
.default_ttl ⇒ Object
Returns the value of attribute default_ttl.
# File 'lib/object/cache.rb', line 12

def default_ttl
  @default_ttl
end
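For example, to keep cached objects for one day unless a per-call `ttl:` is given (the value is in seconds, since it is passed to Redis SETEX):

  Cache.default_ttl = 86_400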
Class Method Details
.delete(key) ⇒ Object
# File 'lib/object/cache.rb', line 73

def delete(key)
  return false unless include?(key)

  primary.del(key)
  true
end
.include?(key) ⇒ Boolean
# File 'lib/object/cache.rb', line 67

def include?(key)
  replica.exists(key)
rescue
  false
end
.new(key = nil, ttl: default_ttl) ⇒ Object
Finds the correct value (based on the provided key) in the cache store, or calls the original block and stores its result in the cache.
The TTL of the cached content is set with the optional `ttl` named argument. If left blank, the `default_ttl` value is used.
The caching key is determined by creating a SHA1 digest of the provided key combined with the original code's file location and line number within that file. This makes it easy to provide short caching keys like UIDs or IDs and still get a unique key under which the data is stored.
The cache key can optionally be left blank. This should **only be done** if the data returned by the block never changes based on some form of input.
For example: caching an `Item` should always be done by providing a unique item identifier as the caching key, otherwise the cache will return the same item every time, even if a different item is stored the second time.
good:
Cache.new { 'hello world' } # stored object is always the same
Cache.new(item.id) { item } # stored item is namespaced using its id
bad:
Cache.new { item } # item is only stored once, and then always
# retrieved, even if it is a different item
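Passing a custom TTL per call works the same way (a sketch, reusing `item` from the examples above; the zero-TTL behaviour follows from `update_cache` below):

  Cache.new(item.id, ttl: 300) { item } # expires after five minutes
  Cache.new(item.id, ttl: 0)   { item } # stored without an expiry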
# File 'lib/object/cache.rb', line 46

def new(key = nil, ttl: default_ttl)
  return yield unless replica

  key = Digest::SHA1.hexdigest([key, Proc.new.source_location].flatten.join)[0..5]

  if (cached_value = replica.get(key)).nil?
    yield.tap { |value| update_cache(key, value, ttl: ttl) }
  else
    Marshal.load(cached_value)
  end
rescue TypeError
  # if `TypeError` is raised, the data could not be Marshal dumped. In that
  # case, delete anything left in the cache store, and get the data without
  # caching.
  #
  delete(key)
  yield
rescue
  yield
end
.primary ⇒ Object
# File 'lib/object/cache.rb', line 86

def primary
  backend.is_a?(Hash) ? backend[:primary] : backend
end
.replica ⇒ Object
# File 'lib/object/cache.rb', line 94

def replica
  replicas.sample
end
.replicas ⇒ Object
# File 'lib/object/cache.rb', line 90

def replicas
  [backend.is_a?(Hash) ? backend[:replicas] : backend].flatten
end
.update_cache(key, value, ttl: default_ttl) ⇒ Object
# File 'lib/object/cache.rb', line 80

def update_cache(key, value, ttl: default_ttl)
  return unless primary && (value = Marshal.dump(value))

  ttl.to_i.zero? ? primary.set(key, value) : primary.setex(key, ttl.to_i, value)
end
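As the guard above shows, values are Marshal-dumped before being written, and a `ttl` of `0` stores the value without an expiry while any other value uses SETEX. A sketch, with a placeholder key and value:

  Cache.update_cache('abc123', some_value)         # expires after default_ttl
  Cache.update_cache('abc123', some_value, ttl: 0) # stored without an expiry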