["\n\n \n \n\n\n File: README\n \n — Documentation for typhoeus/typhoeus (master)\n \n\n\n \n\n \n\n \n\n\n\n\n \n\n \n\n \n\n \n\n\n \n \n
\n \n
\n
\n\n
\n
\n
\n \n
\n\n\n
\n \n Libraries »\n typhoeus/typhoeus (master)\n \n \n » \n Index » \n File: README\n \n
\n\n \n
\n
\n\n

Typhoeus

\n\n

Like a modern code version of the mythical beast with 100 serpent heads, Typhoeus runs HTTP requests in parallel while cleanly encapsulating handling logic.

\n\n

Example

\n\n

A single request:

\n\n
Typhoeus.get("www.example.com", followlocation: true)\n
\n\n

Parallel requests:

\n\n
hydra = Typhoeus::Hydra.new
10.times.map{ hydra.queue(Typhoeus::Request.new("www.example.com", followlocation: true)) }
hydra.run
\n\n

Installation

\n\n

Run:

\n\n
bundle add typhoeus
\n\n

Or install it yourself as:

\n\n
gem install typhoeus
\n\n

Project Tracking

\n\n\n\n

Usage

\n\n

Introduction

\n\n

The primary interface for Typhoeus comprises three classes: Request, Response, and Hydra. Request represents an HTTP request, Response represents an HTTP response, and Hydra manages making parallel HTTP connections.

\n\n
request = Typhoeus::Request.new(
  "www.example.com",
  method: :post,
  body: "this is a request body",
  params: { field1: "a field" },
  headers: { Accept: "text/html" }
)
\n\n

We can see from this that the first argument is the URL and the second is a hash of options. The options are all optional. The default for :method is :get.

\n\n

When you want to send URL parameters, you can use the :params hash to do so. Please note that if you need to send the request as x-www-form-urlencoded parameters, you need to use the :body hash instead: :params is for URL parameters and :body is for the request body.
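
As a minimal sketch of the distinction (the /search endpoint and the q parameter are hypothetical), :params ends up in the query string while :body is sent as the form-encoded request body:

# :params is appended to the URL as a query string:
#   GET http://www.example.com/search?q=typhoeus
Typhoeus.get("www.example.com/search", params: { q: "typhoeus" })

# :body is sent as the x-www-form-urlencoded body of the POST request
Typhoeus.post("www.example.com/search", body: { q: "typhoeus" })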

\n\n

Sending requests through the proxy

\n\n

Add a proxy url to the list of options:

\n\n
options = {proxy: 'http://myproxy.org'}
req = Typhoeus::Request.new(url, options)
\n\n

If your proxy requires authentication, add it with the proxyuserpwd option key:

\n\n
options = {proxy: 'http://proxyurl.com', proxyuserpwd: 'user:password'}
req = Typhoeus::Request.new(url, options)
\n\n

Note that proxyuserpwd is a colon-separated username and password, in the vein of the basic auth userpwd option.

\n\n

You can run the query either on its own or through the hydra:

\n\n
request.run\n#=> <Typhoeus::Response ... >\n
\n\n
hydra = Typhoeus::Hydra.hydra
hydra.queue(request)
hydra.run
\n\n

The response object will be set after the request is run.

\n\n
response = request.response
response.code
response.total_time
response.headers
response.body
\n\n

Making Quick Requests

\n\n

Typhoeus has some convenience methods for performing single HTTP requests. The arguments are the same as those you pass into the request constructor.

\n\n
Typhoeus.get("www.example.com")\nTyphoeus.head("www.example.com")\nTyphoeus.put("www.example.com/posts/1", body: "whoo, a body")\nTyphoeus.patch("www.example.com/posts/1", body: "a new body")\nTyphoeus.post("www.example.com/posts", body: { title: "test post", content: "this is my test"})\nTyphoeus.delete("www.example.com/posts/1")\nTyphoeus.options("www.example.com")\n
\n\n

Sending params in the body with PUT

\n\n

When using POST, the content type is automatically set to 'application/x-www-form-urlencoded'. That's not the case for any other method such as PUT, PATCH, HEAD and so on, irrespective of whether you send a body or not. To get the same result as POST, i.e. a hash in the body coming through as params on the receiving end, you need to set the content type as shown below:

\n\n
Typhoeus.put("www.example.com/posts/1",\n        headers: {'Content-Type'=> "application/x-www-form-urlencoded"},\n        body: {title:"test post updated title", content: "this is my updated content"}\n    )\n
\n\n

Handling HTTP errors

\n\n

You can query the response object to figure out whether the request was successful. Here's some example code that you might use to handle errors. The callbacks are executed right after the request is finished, so make sure to define them before running the request.

\n\n
request = Typhoeus::Request.new("www.example.com", followlocation: true)\n\nrequest.on_complete do |response|\n  if response.success?\n    # hell yeah\n  elsif response.timed_out?\n    # aw hell no\n    log("got a time out")\n  elsif response.code == 0\n    # Could not get an http response, something's wrong.\n    log(response.return_message)\n  else\n    # Received a non-successful http response.\n    log("HTTP request failed: " + response.code.to_s)\n  end\nend\n\nrequest.run\n
\n\n

This also works with serial (blocking) requests in the same fashion. Both serial and parallel requests return a Response object.

\n\n

Handling file uploads

\n\n

A File object can be passed as a param for a POST request to handle uploading files to the server. Typhoeus will upload the file as the original file name and use Mime::Types to set the content type.

\n\n
Typhoeus.post(
  "http://localhost:3000/posts",
  body: {
    title: "test post",
    content: "this is my test",
    file: File.open("thesis.txt", "r")
  }
)
\n\n

Streaming the response body

\n\n

Typhoeus can stream responses. When you're expecting a large response, set the on_body callback on a request. Typhoeus will yield to the callback with chunks of the response as they're read. When you set an on_body callback, Typhoeus will not store the complete response.

\n\n
downloaded_file = File.open 'huge.iso', 'wb'
request = Typhoeus::Request.new("www.example.com/huge.iso")
request.on_headers do |response|
  if response.code != 200
    raise "Request failed"
  end
end
request.on_body do |chunk|
  downloaded_file.write(chunk)
end
request.on_complete do |response|
  downloaded_file.close
  # Note that response.body is ""
end
request.run
\n\n

If you need to interrupt the stream halfway, you can return the :abort symbol from the on_body block, for example:

\n\n
request.on_body do |chunk|
  buffer << chunk
  :abort if buffer.size > 1024 * 1024
end
\n\n

This will properly stop the stream internally and avoid any memory leak which may happen if you interrupt with something like a return, throw or raise.

\n\n

Making Parallel Requests

\n\n

Generally, you should be running requests through hydra. Here is how that looks:

\n\n
hydra = Typhoeus::Hydra.hydra

first_request = Typhoeus::Request.new("http://example.com/posts/1")
first_request.on_complete do |response|
  third_url = response.body
  third_request = Typhoeus::Request.new(third_url)
  hydra.queue third_request
end
second_request = Typhoeus::Request.new("http://example.com/posts/2")

hydra.queue first_request
hydra.queue second_request
hydra.run # this is a blocking call that returns once all requests are complete
\n\n

The execution of that code goes something like this. The first and second requests are built and queued. When hydra is run, the first and second requests run in parallel. When the first request completes, the third request is built and queued, in this example based on the result of the first request. The moment it is queued, hydra starts executing it. Meanwhile the second request continues to run (or it could have completed before the first). Once the third request is done, hydra.run returns.

\n\n

How to get an array of response bodies back after executing a queue:

\n\n
hydra = Typhoeus::Hydra.new
requests = 10.times.map {
  request = Typhoeus::Request.new("www.example.com", followlocation: true)
  hydra.queue(request)
  request
}
hydra.run

responses = requests.map { |request|
  request.response.body
}
\n\n

hydra.run is a blocking call. You can also use the on_complete callback to handle each request as it completes:

\n\n
hydra = Typhoeus::Hydra.new
10.times do
  request = Typhoeus::Request.new("www.example.com", followlocation: true)
  request.on_complete do |response|
    # do_something_with response
  end
  hydra.queue(request)
end
hydra.run
\n\n

Making Parallel Requests with Faraday + Typhoeus

\n\n
require 'faraday'

conn = Faraday.new(:url => 'http://httppage.com') do |builder|
  builder.request  :url_encoded
  builder.response :logger
  builder.adapter  :typhoeus
end

# declare the variables before the block so they remain in scope afterwards
response1 = response2 = nil

conn.in_parallel do
  response1 = conn.get('/first')
  response2 = conn.get('/second')

  # these will return nil here since the
  # requests have not been completed
  response1.body
  response2.body
end

# after it has been completed the response information is fully available
# response1.status, etc
response1.body
response2.body
\n\n

Specifying Max Concurrency

\n\n

Hydra will also handle how many requests you can make in parallel. Things will get flaky if you try to make too many requests at the same time. The built-in limit is 200. When more requests than that are queued up, hydra will save them for later and start them as others finish. You can raise or lower the concurrency limit through the Hydra constructor.

\n\n
Typhoeus::Hydra.new(max_concurrency: 20)
\n\n

Memoization

\n\n

Hydra memoizes requests within a single run call, but you have to enable memoization. With it enabled, the following will result in a single request being issued; however, the on_complete handlers of both queued requests will be called.

\n\n
Typhoeus::Config.memoize = true

hydra = Typhoeus::Hydra.new(max_concurrency: 1)
2.times do
  hydra.queue Typhoeus::Request.new("www.example.com")
end
hydra.run
\n\n

With memoization disabled, this will result in two requests:

\n\n
Typhoeus::Config.memoize = false

hydra = Typhoeus::Hydra.new(max_concurrency: 1)
2.times do
  hydra.queue Typhoeus::Request.new("www.example.com")
end
hydra.run
\n\n

Caching

\n\n

Typhoeus includes built-in support for caching. In the following example, if there is a cache hit, the cached object is passed to the on_complete handler of the request object.

\n\n
class Cache
  def initialize
    @memory = {}
  end

  def get(request)
    @memory[request]
  end

  def set(request, response)
    @memory[request] = response
  end
end

Typhoeus::Config.cache = Cache.new

Typhoeus.get("www.example.com").cached?
#=> false
Typhoeus.get("www.example.com").cached?
#=> true
\n\n

For use with Dalli:

\n\n
require "typhoeus/cache/dalli"\n\ndalli = Dalli::Client.new(...)\nTyphoeus::Config.cache = Typhoeus::Cache::Dalli.new(dalli)\n
\n\n

For use with Rails:

\n\n
require "typhoeus/cache/rails"\n\nTyphoeus::Config.cache = Typhoeus::Cache::Rails.new\n
\n\n

For use with Redis:

\n\n
require "typhoeus/cache/redis"\n\nredis = Redis.new(...)\nTyphoeus::Config.cache = Typhoeus::Cache::Redis.new(redis)\n
\n\n

All three of these adapters take an optional keyword argument default_ttl, which sets a default TTL (in seconds) on cached responses for requests which do not have a cache TTL set.
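
For example, a minimal sketch using the Redis adapter with an illustrative 600-second default TTL (the TTL value and the Redis connection are placeholders, assuming a locally reachable Redis):

require "typhoeus/cache/redis"

# responses cached through this adapter fall back to a 600-second TTL
Typhoeus::Config.cache = Typhoeus::Cache::Redis.new(Redis.new, default_ttl: 600)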

\n\n

You may also selectively choose not to cache by setting cache to false on a request, or use a different adapter for a particular request:

\n\n
cache = Cache.new
Typhoeus.get("www.example.com", cache: cache)
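
And a sketch of opting a single request out of the configured cache entirely, per the note above:

# bypass the globally configured cache for this request only
Typhoeus.get("www.example.com", cache: false)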
\n\n

Direct Stubbing

\n\n

Hydra allows you to stub out specific urls and patterns to avoid hitting remote servers while testing.

\n\n
response = Typhoeus::Response.new(code: 200, body: "{'name' : 'paul'}")
Typhoeus.stub('www.example.com').and_return(response)

Typhoeus.get("www.example.com") == response
#=> true
\n\n

The queued request will hit the stub. You can also specify a regex to match urls.

\n\n
response = Typhoeus::Response.new(code: 200, body: "{'name' : 'paul'}")
Typhoeus.stub(/example/).and_return(response)

Typhoeus.get("www.example.com") == response
#=> true
\n\n

You may also specify an array for the stub to return sequentially.

\n\n
Typhoeus.stub('www.example.com').and_return([response1, response2])

Typhoeus.get('www.example.com') == response1 #=> true
Typhoeus.get('www.example.com') == response2 #=> true
\n\n

When testing, make sure to clear your expectations, or the stubs will persist between tests. The following can be included in your spec_helper.rb file to do this automatically.

\n\n
RSpec.configure do |config|
  config.before :each do
    Typhoeus::Expectation.clear
  end
end
\n\n

Timeouts

\n\n

No exceptions are raised on HTTP timeouts. You can check whether a request timed out with the following method:

\n\n
Typhoeus.get("www.example.com", timeout: 1).timed_out?\n
\n\n

Timed out responses also have their success? method return false.
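
A minimal sketch of checking both flags:

response = Typhoeus.get("www.example.com", timeout: 1)
if response.timed_out?
  # a timed-out response is never considered successful
  response.success? #=> false
end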

\n\n

There are two different timeouts available: timeout and connecttimeout. timeout is the time limit for the entire request in seconds. connecttimeout is the time limit for just the connection phase, again in seconds.
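
For example (the values here are arbitrary):

# allow 2 seconds to establish the connection and 5 seconds for the whole request
Typhoeus.get("www.example.com", connecttimeout: 2, timeout: 5)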

\n\n

There are two additional, more fine-grained options: timeout_ms and connecttimeout_ms. These options offer millisecond precision but are not always available (for instance, on Linux if nosignal is not set to true).

\n\n

When you pass a floating point timeout (or connecttimeout), Typhoeus will set timeout_ms for you if it has not been defined. The actual timeout values passed to curl will always be rounded up.
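
As a sketch of that behavior:

# a 0.25-second float timeout is passed to curl as timeout_ms = 250;
# the second call sets the millisecond option explicitly instead
Typhoeus.get("www.example.com", timeout: 0.25)
Typhoeus.get("www.example.com", timeout_ms: 250)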

\n\n

DNS timeouts of less than one second are not supported unless curl is compiled with an asynchronous resolver.

\n\n

The default timeout is 0 (zero) which means curl never times out during transfer. The default connecttimeout is 300 seconds. A connecttimeout of 0 will also result in the default connecttimeout of 300 seconds.

\n\n

Following Redirections

\n\n

Use followlocation: true, e.g.:

\n\n
Typhoeus.get("www.example.com", followlocation: true)\n
\n\n

Basic Authentication

\n\n
Typhoeus::Request.get("www.example.com", userpwd: "user:password")\n
\n\n

Compression

\n\n
Typhoeus.get("www.example.com", accept_encoding: "gzip")\n
\n\n

The above behaves differently from setting the header directly in the headers hash, e.g.:

\n\n
Typhoeus.get("www.example.com", headers: {"Accept-Encoding" => "gzip"})\n
\n\n

Setting the headers hash directly will not include the --compressed flag in the libcurl command, and therefore libcurl will not decompress the response. If you want the --compressed flag to be added automatically, set the :accept_encoding Typhoeus option.

\n\n

Cookies

\n\n
Typhoeus::Request.get("www.example.com", cookiefile: "/path/to/file", cookiejar: "/path/to/file")\n
\n\n

Here, cookiefile is a file to read cookies from, and cookiejar is a file to write received cookies to. If you just want cookies enabled, you need to pass the same filename for both options.

\n\n

Other CURL options

\n\n

Are available and documented here

\n\n

SSL

\n\n

SSL comes built into libcurl, so it's in Typhoeus as well. If you pass in a URL with "https" it should just work, assuming that you have your cert bundle in order and the server is verifiable. You must also have libcurl built with SSL support enabled. You can check that by running:

\n\n
curl --version
\n\n

Now, even if you have libcurl built with OpenSSL, you may still have a messed-up cert bundle, or you may be hitting a non-verifiable SSL server; in either case you'll have to disable peer verification to make SSL work, like this:

\n\n
Typhoeus.get("https://www.example.com", ssl_verifypeer: false)\n
\n\n

If you are getting "SSL: certificate subject name does not match target host\nname" from curl (ex:- you are trying to access to b.c.host.com when the\ncertificate subject is *.host.com). You can disable host verification. Like\nthis:

\n\n
# host checking enabled
Typhoeus.get("https://www.example.com", ssl_verifyhost: 2)
# host checking disabled
Typhoeus.get("https://www.example.com", ssl_verifyhost: 0)
\n\n

Verbose debug output

\n\n

It’s sometimes useful to see verbose output from curl. You can enable it on a per-request basis:

\n\n
Typhoeus.get("http://example.com", verbose: true)\n
\n\n

or globally:

\n\n
Typhoeus::Config.verbose = true
\n\n

Just remember that libcurl prints its debug output to the console (to STDERR), so you'll need to run your scripts from the console to see it.

\n\n

Default User Agent Header

\n\n

In many cases, all HTTP requests made by an application require the same User-Agent header to be set. Instead of supplying it on a per-request basis with a custom header, it is possible to override it for all requests using:

\n\n
Typhoeus::Config.user_agent = "custom user agent"
\n\n

Running the specs

\n\n

Running the specs should be as easy as:

\n\n
bundle install
bundle exec rake
\n\n

Semantic Versioning

\n\n

This project conforms to semver.

\n\n

LICENSE

\n\n

(The MIT License)

\n\n

Copyright © 2009-2010 Paul Dix

\n\n

Copyright © 2011-2012 David Balatero

\n\n

Copyright © 2012-2016 Hans Hasselberg

\n\n

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

\n\n

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

\n\n

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

\n
\n\n