Class: MemCache
Overview
A Ruby implementation of the ‘memcached’ client interface.
Defined Under Namespace
Classes: ClientError, InternalError, MemCacheError, Server, ServerError
Constant Summary
- SVNRev =
SVN Revision
%q$Rev: 31 $
- SVNId =
SVN Id
%q$Id: memcache.rb 31 2004-11-13 17:32:54Z ged $
- SVNURL =
SVN URL
%q$URL: svn+ssh://svn.FaerieMUD.org/var/svn/RMemCache/trunk/lib/memcache.rb $
- DefaultCThreshold =
Default compression threshold.
10_000
- DefaultPort =
Default memcached port
11211
- DefaultServerWeight =
Default ‘weight’ value assigned to a server.
1
- MinCompressionRatio =
Compression ratio threshold: a compressed value is preferred over the uncompressed version only if its length is no more than this fraction of the uncompressed length.
0.80
- DefaultOptions =
Default constructor options
{
  :debug       => false,
  :c_threshold => DefaultCThreshold,
  :compression => true,
  :namespace   => nil,
  :readonly    => false,
  :urlencode   => true,
}
- F_SERIALIZED =
Storage flags
1
- F_COMPRESSED =
2
- F_ESCAPED =
4
- F_NUMERIC =
8
- CRLF =
Line-ending
"\r\n"
- SendFlags =
Flags to use for the BasicSocket#send call. Note that Ruby’s socket library doesn’t define MSG_NOSIGNAL, but if it ever does it’ll be used.
0
- GENERAL_ERROR =
Patterns for matching against server error replies
/^ERROR\r\n/
- CLIENT_ERROR =
/^CLIENT_ERROR\s+([^\r\n]+)\r\n/
- SERVER_ERROR =
/^SERVER_ERROR\s+([^\r\n]+)\r\n/
- ANY_ERROR =
Regexp::union( GENERAL_ERROR, CLIENT_ERROR, SERVER_ERROR )
- LINE_TERMINATOR =
Terminator regexps for the two styles of replies from memcached
Regexp::union( /\r\n$/, ANY_ERROR )
- MULTILINE_TERMINATOR =
Regexp::union( /^END\r\n$/, ANY_ERROR )
- StatConverters =
Callables to convert various parts of the server stats reply to appropriate object types.
{
  :__default__ => lambda {|stat| Integer(stat) },

  :version => lambda {|stat| stat }, # Already a String

  :rusage_user => lambda {|stat|
    seconds, microseconds = stat.split(/:/, 2)
    Float(seconds) + (Float(microseconds) / 1_000_000)
  },

  :rusage_system => lambda {|stat|
    seconds, microseconds = stat.split(/:/, 2)
    Float(seconds) + (Float(microseconds) / 1_000_000)
  }
}
Instance Attribute Summary

- #c_threshold ⇒ Object (also: #compression_threshold)
  The compression threshold setting, in bytes.
- #compression ⇒ Object
  Turn compression on or off temporarily.
- #debug ⇒ Object
  Debugging flag – when set to true, debugging output will be sent to $deferr.
- #hashfunc ⇒ Object
  The function (a Method or Proc object) which will be used to hash keys for determining where values are stored.
- #mutex ⇒ Object (readonly)
  The Sync mutex object for the cache.
- #namespace ⇒ Object
  The namespace that will be prepended to all keys set/fetched from the cache.
- #servers ⇒ Object
  The Array of MemCache::Server objects that represent the memcached instances the client will use.
- #stats ⇒ Object (readonly)
  Hash of counts of cache operations, keyed by operation (e.g., :delete, :flush_all, :set, :add, etc.).
- #stats_callback ⇒ Object
  Settable statistics callback – setting this to an object that responds to #call will cause it to be called once for each operation with the operation type (as a Symbol), and Struct::Tms objects created immediately before and after the operation.
- #times ⇒ Object (readonly)
  Hash of system/user time-tuples for each op.
Instance Method Summary

- #[]=(*args) ⇒ Object
  Index assignment method.
- #active? ⇒ Boolean
  Returns true if there is at least one active server for the receiver.
- #add(key, val, exptime = 0) ⇒ Object
  Like #set, but only stores the tuple if it doesn't already exist.
- #decr(key, val = 1) ⇒ Object
  Like #incr, but decrements.
- #delete(key, time = nil) ⇒ Object
  Delete the entry with the specified key, optionally at the specified time.
- #flush_all ⇒ Object (also: #clear)
  Mark all entries on all servers as expired.
- #get(*keys) ⇒ Object (also: #[])
  Fetch and return the values associated with the given keys from the cache.
- #get_hash(*keys) ⇒ Object
  Fetch and return the values associated with the given keys from the cache as a Hash object.
- #incr(key, val = 1) ⇒ Object
  Atomically increment the value associated with key by val.
- #initialize(*servers, &block) ⇒ MemCache (constructor)
  Create a new memcache object that will distribute gets and sets between the specified servers.
- #inspect ⇒ Object
  Return a human-readable version of the cache object.
- #readonly? ⇒ Boolean
  Returns true if the cache was created read-only.
- #replace(key, val, exptime = 0) ⇒ Object
  Like #set, but only stores the tuple if it already exists.
- #server_item_stats(servers = @servers) ⇒ Object
  Return item stats from the specified servers.
- #server_malloc_stats(servers = @servers) ⇒ Object
  Return malloc stats from the specified servers (not supported on all platforms).
- #server_map_stats(servers = @servers) ⇒ Object
  Return memory maps from the specified servers (not supported on all platforms).
- #server_reset_stats(servers = @servers) ⇒ Object
  Reset statistics on the given servers.
- #server_size_stats(servers = @servers) ⇒ Object
  Return item size stats from the specified servers.
- #server_slab_stats(servers = @servers) ⇒ Object
  Return slab stats from the specified servers.
- #server_stats(servers = @servers) ⇒ Object
  Return a hash of statistics hashes for each of the specified servers.
- #set(key, val, exptime = 0) ⇒ Object
  Unconditionally set the entry in the cache under the given key to value, returning true on success.
- #set_many(pairs) ⇒ Object
  Multi-set method; unconditionally set each key/value pair in pairs.
Constructor Details
#initialize(*servers, &block) ⇒ MemCache
Create a new memcache object that will distribute gets and sets between the specified servers. You can also pass one or more options as hash arguments. Valid options are:

- :compression
  Set the compression flag. See #use_compression? for more info.
- :c_threshold
  Set the compression threshold, in bytes. See #c_threshold for more info.
- :debug
  Send debugging output to the object specified as a value if it responds to #call, and to $deferr if set to anything else but false or nil.
- :namespace
  If specified, all keys will have the given value prepended before accessing the cache. Defaults to nil.
- :urlencode
  If this is set, all keys and values will be urlencoded. If this is not set, keys and/or values with certain characters in them may generate client errors when interacting with the cache, but the values used will be more compatible with those set by other clients. Defaults to true.
- :readonly
  If this is set, any attempt to write to the cache will generate an exception. Defaults to false.

If a block is given, it is used as the default hash function for determining which server the key (given as an argument to the block) is stored/fetched from.
# File 'lib/memcache.rb', line 192

def initialize( *servers, &block )
  opts = servers.pop if servers.last.is_a?( Hash )
  opts = DefaultOptions.merge( opts || {} )

  @debug       = opts[:debug]
  @c_threshold = opts[:c_threshold]
  @compression = opts[:compression]
  @namespace   = opts[:namespace]
  @readonly    = opts[:readonly]
  @urlencode   = opts[:urlencode]

  @buckets  = nil
  @hashfunc = block || lambda {|val| val.hash}
  @mutex    = Sync::new
  @reactor  = IO::Reactor::new

  # Stats is an auto-vivifying hash -- an access to a key that hasn't yet
  # been created generates a new stats subhash
  @stats = Hash::new {|hsh,k|
    hsh[k] = {:count => 0, :utime => 0.0, :stime => 0.0}
  }
  @stats_callback = nil

  self.servers = servers
end
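For illustration, a construction sketch. The host names, the 'host:port' string form for servers, and the require path are assumptions, not taken from this page; servers may also be given as MemCache::Server objects.

require 'memcache'

# Two memcached instances, an application namespace, and a larger
# compression threshold; the option Hash is merged over DefaultOptions.
cache = MemCache::new( 'cache1.example.com:11211',
                       'cache2.example.com:11211',
                       :namespace   => 'myapp',
                       :c_threshold => 25_000 )

# A block, if given, becomes the hash function used to pick the server
# for each key.
custom_cache = MemCache::new( 'cache1.example.com:11211' ) {|key| key.to_s.sum }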
Instance Attribute Details
#c_threshold ⇒ Object Also known as: compression_threshold
The compression threshold setting, in bytes. Values larger than this threshold will be compressed by #[]= (and #set) and decompressed by #[] (and #get).
# File 'lib/memcache.rb', line 241

def c_threshold
  @c_threshold
end
#compression ⇒ Object
Turn compression on or off temporarily.
# File 'lib/memcache.rb', line 245

def compression
  @compression
end
#debug ⇒ Object
Debugging flag – when set to true, debugging output will be sent to $deferr. If set to an object which supports either #<< or #call, debugging output will be sent to it via this method instead (#call being preferred). If set to false or nil, no debugging will be generated.
# File 'lib/memcache.rb', line 251

def debug
  @debug
end
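For illustration, a sketch of the three styles (cache is a MemCache instance; the lambda is a hypothetical sink):

cache.debug = true                               # debugging output goes to $deferr
cache.debug = lambda {|msg| $stderr.puts(msg) }  # responds to #call, so it receives the output instead
cache.debug = nil                                # no debugging output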
#hashfunc ⇒ Object
The function (a Method or Proc object) which will be used to hash keys for determining where values are stored.
# File 'lib/memcache.rb', line 255

def hashfunc
  @hashfunc
end
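The hash function can also be swapped after construction. A sketch, using Zlib.crc32 as an arbitrary stand-in for a key-hashing function:

require 'zlib'

# Pick servers by CRC32 of the key string instead of Ruby's Object#hash.
cache.hashfunc = lambda {|key| Zlib.crc32( key.to_s ) }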
#mutex ⇒ Object (readonly)
The Sync mutex object for the cache.
# File 'lib/memcache.rb', line 285

def mutex
  @mutex
end
#namespace ⇒ Object
The namespace that will be prepended to all keys set/fetched from the cache.
# File 'lib/memcache.rb', line 263

def namespace
  @namespace
end
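For example (a sketch; how the namespace and key are joined is internal to the client):

cache.namespace = 'myapp'
cache['motd'] = 'hello'     # stored under a namespaced key
cache.namespace = nil       # stop prefixing keys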
#servers ⇒ Object
The Array of MemCache::Server objects that represent the memcached instances the client will use.
# File 'lib/memcache.rb', line 259

def servers
  @servers
end
#stats ⇒ Object (readonly)
Hash of counts of cache operations, keyed by operation (e.g., :delete, :flush_all, :set, :add, etc.). Each value of the hash is another hash with statistics for the corresponding operation:

{
  :stime => <total system time of all calls>,
  :utime => <total user time of all calls>,
  :count => <number of calls>,
}
# File 'lib/memcache.rb', line 273

def stats
  @stats
end
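A sketch of reading the per-operation counters back out (cache is a MemCache instance):

cache.stats.each do |op, info|
  printf( "%-12s %6d calls  %.3fs user  %.3fs system\n",
          op, info[:count], info[:utime], info[:stime] )
end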
#stats_callback ⇒ Object
Settable statistics callback – setting this to an object that responds to #call will cause it to be called once for each operation with the operation type (as a Symbol), and Struct::Tms objects created immediately before and after the operation.
# File 'lib/memcache.rb', line 282

def stats_callback
  @stats_callback
end
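A sketch of a callback that reports user-CPU time per operation, computed from the two Struct::Tms snapshots:

cache.stats_callback = lambda do |op, before, after|
  $stderr.puts( "%s: %.6fs user CPU" % [op, after.utime - before.utime] )
end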
#times ⇒ Object (readonly)
Hash of system/user time-tuples for each op.
# File 'lib/memcache.rb', line 276

def times
  @times
end
Instance Method Details
#[]=(*args) ⇒ Object
Index assignment method. Supports slice-setting, e.g.:
cache[ :foo, :bar ] = 12, "darkwood"
This uses #set_many internally if there is more than one key, or #set if there is only one.
# File 'lib/memcache.rb', line 425

def []=( *args )
  raise MemCacheError, "no active servers" unless self.active?
  raise MemCacheError, "readonly cache" if self.readonly?

  # Use #set if there's only one pair
  if args.length <= 2
    self.set( *args )
  else
    # Args from a slice-style call like
    #   cache[ :foo, :bar ] = 1, 2
    # will be passed in like:
    #   ( :foo, :bar, [1, 2] )
    # so just shift the value part off, transpose them into a Hash and
    # pass them on to #set_many.
    vals = args.pop
    vals = [vals] unless vals.is_a?( Array ) # Handle [:a,:b] = 1
    pairs = Hash[ *([ args, vals ].transpose) ]
    self.set_many( pairs )
  end

  # It doesn't matter what this returns, as Ruby ignores it for some
  # reason.
  return nil
end
#active? ⇒ Boolean
Returns true if there is at least one active server for the receiver.
# File 'lib/memcache.rb', line 333

def active?
  not @servers.empty?
end
#add(key, val, exptime = 0) ⇒ Object
Like #set, but only stores the tuple if it doesn’t already exist.
# File 'lib/memcache.rb', line 452

def add( key, val, exptime=0 )
  raise MemCacheError, "no active servers" unless self.active?
  raise MemCacheError, "readonly cache" if self.readonly?

  @mutex.synchronize( Sync::EX ) {
    self.store( :add, key, val, exptime )
  }
end
#decr(key, val = 1) ⇒ Object
Like #incr, but decrements. Unlike #incr, underflow is checked, and new values are capped at 0. If the server value is 1, a decrement of 2 returns 0, not -1.
# File 'lib/memcache.rb', line 490

def decr( key, val=1 )
  raise MemCacheError, "no active servers" unless self.active?
  raise MemCacheError, "readonly cache" if self.readonly?

  @mutex.synchronize( Sync::EX ) {
    self.incrdecr( :decr, key, val )
  }
end
#delete(key, time = nil) ⇒ Object
Delete the entry with the specified key, optionally at the specified time.
# File 'lib/memcache.rb', line 502

def delete( key, time=nil )
  raise MemCacheError, "no active servers" unless self.active?
  raise MemCacheError, "readonly cache" if self.readonly?
  svr = nil

  res = @mutex.synchronize( Sync::EX ) {
    svr = self.get_server( key )
    cachekey = self.make_cache_key( key )

    self.add_stat( :delete ) do
      cmd = "delete %s%s" % [ cachekey, time ? " #{time.to_i}" : "" ]
      self.send( svr => cmd )
    end
  }

  res && res[svr].rstrip == "DELETED"
end
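For example (a sketch; the optional time argument is simply passed through to the server's delete command):

cache.delete( 'motd' )        # delete now
cache.delete( 'motd', 30 )    # delete with a time argument of 30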
#flush_all ⇒ Object Also known as: clear
Mark all entries on all servers as expired.
# File 'lib/memcache.rb', line 522

def flush_all
  raise MemCacheError, "no active servers" unless self.active?
  raise MemCacheError, "readonly cache" if self.readonly?

  res = @mutex.synchronize( Sync::EX ) {
    # Build commandset for servers that are alive
    servers = @servers.select {|svr| svr.alive? }
    cmds = self.make_command_map( "flush_all", servers )

    # Send them in parallel
    self.add_stat( :flush_all ) {
      self.send( cmds )
    }
  }

  !res.find {|svr,st| st.rstrip != 'OK'}
end
#get(*keys) ⇒ Object Also known as: []
Fetch and return the values associated with the given keys from the cache. Returns nil for any value that wasn't in the cache.
# File 'lib/memcache.rb', line 340

def get( *keys )
  raise MemCacheError, "no active servers" unless self.active?
  hash = nil

  @mutex.synchronize( Sync::SH ) {
    hash = self.fetch( :get, *keys )
  }

  return *(hash.values_at( *keys ))
end
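For example (a sketch; keys shown are illustrative):

cache['color'] = 'blue'

cache.get( 'color' )                           # => "blue"
color, shape = cache.get( 'color', 'shape' )   # multi-key get; shape is nil if unset
cache['color']                                 # #[] is an alias for #get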
#get_hash(*keys) ⇒ Object
Fetch and return the values associated with the given keys from the cache as a Hash object. Returns nil for any value that wasn't in the cache.
# File 'lib/memcache.rb', line 356

def get_hash( *keys )
  raise MemCacheError, "no active servers" unless self.active?
  return @mutex.synchronize( Sync::SH ) {
    self.fetch( :get_hash, *keys )
  }
end
#incr(key, val = 1) ⇒ Object
Atomically increment the value associated with key by val. Returns nil if the value doesn't exist in the cache, or the new value after incrementing if it does. val should be zero or greater. Overflow on the server is not checked. Beware of values approaching 2**32.
# File 'lib/memcache.rb', line 477

def incr( key, val=1 )
  raise MemCacheError, "no active servers" unless self.active?
  raise MemCacheError, "readonly cache" if self.readonly?

  @mutex.synchronize( Sync::EX ) {
    self.incrdecr( :incr, key, val )
  }
end
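A counter sketch, assuming the stored value is a number the server can increment:

cache.set( 'hits', 0 )
cache.incr( 'hits' )         # => 1
cache.incr( 'hits', 10 )     # => 11
cache.decr( 'hits', 100 )    # => 0, since #decr is capped at zero
cache.incr( 'no-such-key' )  # => nil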
#inspect ⇒ Object
Return a human-readable version of the cache object.
# File 'lib/memcache.rb', line 222

def inspect
  "<MemCache: %d servers/%s buckets: ns: %p, debug: %p, cmp: %p, ro: %p>" % [
    @servers.nitems,
    @buckets.nil? ? "?" : @buckets.nitems,
    @namespace,
    @debug,
    @compression,
    @readonly,
  ]
end
#readonly? ⇒ Boolean
Returns true if the cache was created read-only.
# File 'lib/memcache.rb', line 289

def readonly?
  @readonly
end
#replace(key, val, exptime = 0) ⇒ Object
Like #set, but only stores the tuple if it already exists.
# File 'lib/memcache.rb', line 463

def replace( key, val, exptime=0 )
  raise MemCacheError, "no active servers" unless self.active?
  raise MemCacheError, "readonly cache" if self.readonly?

  @mutex.synchronize( Sync::EX ) {
    self.store( :replace, key, val, exptime )
  }
end
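Together with #add, this gives the usual conditional-store pair; a sketch:

cache.add( 'greeting', 'hi' )         # stores only because the key is absent
cache.add( 'greeting', 'hello' )      # does nothing; the key already exists
cache.replace( 'greeting', 'hello' )  # stores only because the key exists
cache.replace( 'nonesuch', 'x' )      # does nothing; the key is absent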
#server_item_stats(servers = @servers) ⇒ Object
Return item stats from the specified servers.
# File 'lib/memcache.rb', line 633

def server_item_stats( servers=@servers )
  # Build commandset for servers that are alive
  asvrs = servers.select {|svr| svr.alive? }
  cmds = self.make_command_map( "stats items", asvrs )

  # Send them in parallel
  return self.add_stat( :server_stats_items ) do
    self.send( cmds ) do |svr,reply|
      self.parse_stats( reply )
    end
  end
end
#server_malloc_stats(servers = @servers) ⇒ Object
Return malloc stats from the specified servers (not supported on all platforms).
# File 'lib/memcache.rb', line 595

def server_malloc_stats( servers=@servers )
  # Build commandset for servers that are alive
  asvrs = servers.select {|svr| svr.alive? }
  cmds = self.make_command_map( "stats malloc", asvrs )

  # Send them in parallel
  return self.add_stat( :server_malloc_stats ) do
    self.send( cmds ) do |svr,reply|
      self.parse_stats( reply )
    end
  end
rescue MemCache::InternalError
  self.debug_msg( "One or more servers doesn't support 'stats malloc'" )
  return {}
end
#server_map_stats(servers = @servers) ⇒ Object
Return memory maps from the specified servers (not supported on all platforms).
# File 'lib/memcache.rb', line 577

def server_map_stats( servers=@servers )
  # Build commandset for servers that are alive
  asvrs = servers.select {|svr| svr.alive? }
  cmds = self.make_command_map( "stats maps", asvrs )

  # Send them in parallel
  return self.add_stat( :server_map_stats ) do
    self.send( cmds )
  end
rescue MemCache::ServerError => err
  self.debug_msg "%p doesn't support 'stats maps'" % err.server
  return {}
end
#server_reset_stats(servers = @servers) ⇒ Object
Reset statistics on the given servers.
# File 'lib/memcache.rb', line 560

def server_reset_stats( servers=@servers )
  # Build commandset for servers that are alive
  asvrs = servers.select {|svr| svr.alive? }
  cmds = self.make_command_map( "stats reset", asvrs )

  # Send them in parallel
  return self.add_stat( :server_reset_stats ) do
    self.send( cmds ) do |svr,reply|
      reply.rstrip == "RESET"
    end
  end
end
#server_size_stats(servers = @servers) ⇒ Object
Return item size stats from the specified servers.
# File 'lib/memcache.rb', line 649

def server_size_stats( servers=@servers )
  # Build commandset for servers that are alive
  asvrs = servers.select {|svr| svr.alive? }
  cmds = self.make_command_map( "stats sizes", asvrs )

  # Send them in parallel
  return self.add_stat( :server_stats_sizes ) do
    self.send( cmds ) do |svr,reply|
      reply.sub( /#{CRLF}END#{CRLF}/, '' ).split( /#{CRLF}/ )
    end
  end
end
#server_slab_stats(servers = @servers) ⇒ Object
Return slab stats from the specified servers.
# File 'lib/memcache.rb', line 614

def server_slab_stats( servers=@servers )
  # Build commandset for servers that are alive
  asvrs = servers.select {|svr| svr.alive? }
  cmds = self.make_command_map( "stats slabs", asvrs )

  # Send them in parallel
  return self.add_stat( :server_slab_stats ) do
    self.send( cmds ) do |svr,reply|
      ### :TODO: I could parse the results from this further to split
      ### out the individual slabs into their own sub-hashes, but this
      ### will work for now.
      self.parse_stats( reply )
    end
  end
end
#server_stats(servers = @servers) ⇒ Object
Return a hash of statistics hashes for each of the specified servers.
# File 'lib/memcache.rb', line 544

def server_stats( servers=@servers )
  # Build commandset for servers that are alive
  asvrs = servers.select {|svr| svr.alive?}
  cmds = self.make_command_map( "stats", asvrs )

  # Send them in parallel
  return self.add_stat( :server_stats ) do
    self.send( cmds ) do |svr,reply|
      self.parse_stats( reply )
    end
  end
end
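A sketch of reading a few standard memcached statistics back out, assuming the parsed per-server hashes are keyed by Symbols as the StatConverters constant suggests:

cache.server_stats.each do |server, stats|
  puts "#{server}: version #{stats[:version]}, uptime #{stats[:uptime]}s"
end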
#set(key, val, exptime = 0) ⇒ Object
Unconditionally set the entry in the cache under the given key to value, returning true on success. The optional exptime argument specifies an expiration time for the tuple, in seconds relative to the present if it's less than 60*60*24*30 (30 days), or as an absolute Unix time (e.g., Time#to_i) if greater. If exptime is 0, the entry will never expire.
# File 'lib/memcache.rb', line 383

def set( key, val, exptime=0 )
  raise MemCacheError, "no active servers" unless self.active?
  raise MemCacheError, "readonly cache" if self.readonly?
  rval = nil

  @mutex.synchronize( Sync::EX ) {
    rval = self.store( :set, key, val, exptime )
  }

  return rval
end
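For example (a sketch):

cache.set( 'motd', 'hello' )                         # never expires
cache.set( 'motd', 'hello', 300 )                    # expires 300 seconds from now
cache.set( 'motd', 'hello', Time.now.to_i + 86400 )  # expires at an absolute Unix time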
#set_many(pairs) ⇒ Object
Multi-set method; unconditionally set each key/value pair in pairs. The call to set each value is done synchronously, but until memcached supports a multi-set operation this is only a little more efficient than calling #set for each pair yourself.
# File 'lib/memcache.rb', line 400

def set_many( pairs )
  raise MemCacheError, "no active servers" unless self.active?
  raise MemCacheError, "readonly cache" if self.readonly?
  raise MemCacheError,
    "expected an object that responds to the #each_pair message" unless
    pairs.respond_to?( :each_pair )

  rvals = []

  # Just iterate over the pairs, setting them one-by-one until memcached
  # supports multi-set.
  @mutex.synchronize( Sync::EX ) {
    pairs.each_pair do |key, val|
      rvals << self.store( :set, key, val, 0 )
    end
  }

  return rvals
end
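For example (a sketch; pairs can be any object that responds to #each_pair, such as a Hash):

results = cache.set_many( 'a' => 1, 'b' => 2, 'c' => 3 )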