LHC Core Interceptors

The following set of interceptors can be used with LHC.

Cache Interceptor

Add the cache interceptor to your basic set of LHC interceptors.

  LHC.config.interceptors = [LHC::Caching]

Caching is not enabled by default, even after you have added the interceptor to your basic set of interceptors. If you want requests to be stored in and served from the cache, you have to enable caching per request.

  LHC.get('http://local.ch', cache: true)

You can also enable caching when configuring an endpoint in LHS.

  class Feedbacks < LHS::Service
    endpoint ':datastore/v2/feedbacks', cache: true
  end

Cache Interceptor Options

  LHC.get('http://local.ch', cache: true, cache_expires_in: 1.day, cache_race_condition_ttl: 15.seconds)

cache_expires_in lets the cache entry expire after the given duration.

Set the key that is used for caching by using the cache_key option. Every key is prefixed with LHC_CACHE:.
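
For example, to cache a request under a custom key (the key name below is just a placeholder; the interceptor prefixes it with LHC_CACHE:):

  LHC.get('http://local.ch', cache: true, cache_key: 'local-homepage')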

Setting cache_race_condition_ttl is useful when a cache entry is read very frequently and is under heavy load. When such an entry expires, several processes may fetch the data from the origin at the same time and then all try to write it to the cache. To avoid this, the first process that finds an expired cache entry bumps the cache expiration time by the value set in cache_race_condition_ttl while it regenerates the entry.

Monitoring Interceptor

Add the monitoring interceptor to your basic set of LHC interceptors.

  LHC.config.interceptors = [LHC::Monitoring]

You also have to configure statsd so that the monitoring interceptor can report metrics.

  LHC::Monitoring.statsd = <your-instance-of-statsd>
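
For example, in a Rails initializer, assuming the statsd-ruby gem (host and port are placeholders):

  # config/initializers/lhc.rb
  require 'statsd' # provided by the statsd-ruby gem

  LHC.config.interceptors = [LHC::Monitoring]
  LHC::Monitoring.statsd = Statsd.new('127.0.0.1', 8125)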

The monitoring interceptor reports all HTTP communication done with LHC. Every request attempt is always reported.

In case of a successful response, it reports the response code as a count and the response time as a gauge value.

  LHC.get('http://local.ch')

  "lhc.<app_name>.<env>.<host>.<http_method>.count", 1
  "lhc.<app_name>.<env>.<host>.<http_method>.200", 1
  "lhc.<app_name>.<env>.<host>.<http_method>.time", 43

In case your workers/processes are getting killed due to time constraints, you can detect the resulting deltas by comparing the "before_request" and "after_request" counts:

  "lhc.<app_name>.<env>.<host>.<http_method>.before_request", 1
  "lhc.<app_name>.<env>.<host>.<http_method>.after_request", 1

Timeouts are also reported:

  "lhc.<app_name>.<env>.<host>.<http_method>.timeout", 1

All dots in the host are replaced with underscores (_), because the dot is the default separator in Graphite.
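
For example, a GET request to http://local.ch is reported with local_ch as the host part:

  "lhc.<app_name>.<env>.local_ch.get.count", 1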

It is also possible to set the key for the monitoring interceptor on a per-request basis:

  LHC.get('http://local.ch', monitoring_key: 'local_website')

  "local_website.count", 1
  "local_website.200", 1
  "local_website.time", 43

If you use this approach, you need to add all namespaces (app, environment, etc.) to the key yourself.
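
For example, to keep the same structure as the default keys (the app name and environment below are placeholders):

  LHC.get('http://local.ch', monitoring_key: 'my_app.production.local_website')

  "my_app.production.local_website.count", 1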

Auth Interceptor

Add the auth interceptor to your basic set of LHC interceptors.

  LHC.config.interceptors = [LHC::Auth]

Bearer Authentication

  LHC.get('http://local.ch', auth: { bearer: -> { access_token } })

Adds the following header to the request:

  'Authorization': 'Bearer 123456'

Assuming the method access_token returns 123456 at the time the request is executed.
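
A minimal sketch, assuming access_token is a method you define that returns the current token (the token value is a placeholder):

  # access_token stands in for however your app retrieves the current token
  def access_token
    '123456'
  end

  LHC.get('http://local.ch', auth: { bearer: -> { access_token } })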

Basic Authentication

  LHC.get('http://local.ch', auth: { basic: { username: 'steve', password: 'can' } })

Adds the following header to the request:

  'Authorization': 'Basic c3RldmU6Y2Fu'

This is the Base64-encoded form of the credentials "steve:can" (username:password).
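
You can verify the encoding in plain Ruby:

  require 'base64'

  Base64.strict_encode64('steve:can') # => "c3RldmU6Y2Fu"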

Rollbar Interceptor

Forwards errors to Rollbar when exceptions occur during HTTP requests.

  LHC.config.interceptors = [LHC::Rollbar]
  LHC.get('http://local.ch')

If a request raises an exception, the interceptor forwards the request and response objects to Rollbar, which contain all necessary data.

Forward additional parameters

  LHC.get('http://local.ch', rollbar: { tracking_key: 'this particular request' })

License

GNU Affero General Public License Version 3.