GReactor

This is a native Ruby Generic Reactor with no dependencies.

This reactor can be used to:

  • Write a server - e.g. the GRHttp HTTP and WebSocket Server uses GReactor;
  • Write an event driven client; or
  • Add asynchronous event functionality to your existing application...

The documentation on rubydoc.info has more information about the GReactor's easy API.

GReactor vs. EventMachine?

Let's face it, EventMachine has its issues... some of them are quite bad, while others only affect specific features such as SSL... I'm not the first to notice these issues, nor the first to offer an alternative.

GReactor is a multi-threaded (and optionally multi-processed) EventMachine alternative (although you can also run GReactor as a single-threaded, single-process reactor).

Having a native Ruby reactor has a number of advantages and a number of disadvantages over the non-native EventMachine alternative. Also, the GReactor's API is designed to make your life as a programmer easier as well as provide you with all the options you might need.
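
For a quick taste, here is a minimal sketch of the different concurrency modes, using only the calls shown in the examples further down (the argument to `start` sets the number of worker threads):

require 'greactor'

# GR.start(1)                  # a single worker thread, single process
GR.start                       # default: a pool of worker threads, single process
# GR::Settings.set_forking 4   # optional: fork into a pool of processes (see below)
# ... set up listeners, timers or async tasks here ...
GR.join                        # the main thread hangs here until the reactor is stopped (i.e. ctrl+C)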

For most applications, performance is not the main issue to consider when choosing a reactor. Generally speaking, unless your app is limited to 'Hello World', performance is controlled by your app's code rather than by the speed with which IO events are fired (i.e. protocol parsing is much more expensive than the reactor).

Be that as it may, GReactor's performance might surprise you :-)

GReactor-based projects

If you use GReactor, please let me know.

The GReactor is used by the GRHttp HTTP and WebSocket Server.

Here is a multi-process and multi-threaded example (run in irb):

require 'grhttp'
GRHttp.start do
    GRHttp.listen {|request, response| 'Hello World!' }
    GR::Settings.set_forking 4 # optional GReactor forking
end
# # Benchmark using apache:
# $   ab -n 10000 -c 200 -k http://127.0.0.1:3000/
# # Benchmark using wrk:
# $   wrk -c400 -d10 -t12 http://localhost:3000/

The GReactor is also used by the Plezi HTTP and WebSocket WebApp Framework, which runs its services using the GRHttp server, all in pure Ruby code.

Here is a single-process and multi-threaded example for the framework's Hello World and websocket echo server (run in irb):

require 'plezi'

## no forking
# GR::Settings.set_forking false

class Ctrl
   def index
      "Hello World"
   end
   def on_message data
      # connect with a websocket client,
      # such as www.websocket.org/echo.html
      response << "Plezi Echo...\n>>> #{data}"
   end
end

listen port: 3000
route '*', Ctrl # catch all

exit
# # Benchmark using apache:
# $   ab -n 10000 -c 200 -k http://127.0.0.1:3000/
# # Benchmark using wrk:
# $   wrk -c400 -d10 -t12 http://localhost:3000/

Want to handle thousands of concurrent connections?

Every process has its limits. Depending on your OS, each process has an "open-file" limit (on my computer it's quite small, 256 open files), which also limits socket and other IO connections (they are counted as open files).
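
If you're curious what your own limit is, Ruby's standard library can tell you (this is plain Ruby, not a GReactor API):

soft, hard = Process.getrlimit(:NOFILE)  # the soft and hard 'open files' limits
puts "This process can hold up to #{soft} open files (hard limit: #{hard})."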

So, is our reactor doomed to be limited?

No! - Enter GReactor's forking settings.

If you keep your code scalable across processes (for example, if it could work on a number of Heroku's dynos concurrently), you can ask GReactor to fork itself and work using a pool of processes - if set, forking will automatically happen once you call GReactor.join.

Simply set:

number_of_processes = 8
GR::Settings.set_forking number_of_processes

Next time you call GReactor.join - Boom: multi-threaded process forking will provide you with instant expansion of both concurrency (forking allows you to use multiple cores on multi-core CPUs) and open-file limits (on my machine, I just went from 256 to 2,048 possible 'open files'... but on many machines you are now eligible for much more).
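
Putting it all together, a minimal sketch might look like the following (it reuses the `listen` callback and the default port shown in the Server Usage section below):

require 'greactor'

GR.start
GR.listen {|io| io.write ">>> Echoing: #{io.read}" } # echo on the default port (3000)
GR::Settings.set_forking 8                           # fork into 8 processes on `join`
GR.join { puts "Shutting down..." }                  # forking happens here; the main thread hangs until stopped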

Installation

Add this line to your application's Gemfile:

gem 'greactor'

And then execute:

$ bundle

Or install it yourself as:

$ gem install greactor

Server Usage

I'm too lazy to write much right now, so let's let some code do the talking.

Here is a very simple echo server, running only one thread over the default port (3000):

# Because the `GReactor.start` method receives a block,
# it will hang after the block is executed.
# The main thread will wait for a signal to stop the reactor (i.e. ctrl+C).
require 'greactor'
GR.start(1) do
    # The block given to the `listen` method acts as a callback.
    # The callback will receive a comfortable IO wrapper.
    GR.listen do |io|
        data = io.read
        io.write ">>> Echoing: #{data}"
        if data.match(/^(quit|exit|bye)[\r\n]+/i)
            io.write ">>> Goodbye.\r\n"
            io.close
        end
    end
end

Test with:

$ telnet localhost 3000

Here is a better example that showcases some features, such as timed and async events, as well as the way an actual server might use a Protocol class to handle the connections... again, the famous echo, this time over the default number of worker threads (8 threads):

require 'greactor'

# Create an Echo Protocol for our server.
# We will use the quick start template provided by the GReactor
class EchoProtocol < GR::Protocol
    def on_connect
        GR.info "Someone connetced to the echo server."
    end

    def on_message data
        send ">>> Echoing: #{data}"
        if data.match(/^(quit|exit|bye)/i)
            send ">>> Goodbye.\r\n"
            close
        end
    end

    def on_disconnect
        # just for fun, we can log asynchronously.
        GR.run_async { GR.info "Someone left the echo server." }
        # we can also delay an asynchronous task.
        GR.run_after(5) { GR.warn "It's been five seconds since they left and I miss them..."}
    end
end

GR.start

GR.listen timeout: 5, handler: EchoProtocol, port: 3000
GR.listen timeout: 5, handler: EchoProtocol, port: 3030, ssl: true

GR.on_shutdown { GR.info "This is called when the shutdown process ends." }
GR.on_shutdown { puts "We're done." }

# Let's auto-shutdown the server after a minute or so, shall we?
GR.run_after(60) { Process.kill("TERM", 0) }

# The next line will cause the main thread to hang as it waits to join the GReactor's threads.
GR.join {puts "\nThis is called when the shutdown process starts..."}

Client Usage

It's also possible to use GReactor to augment an existing server (a Rails/Sinatra/Rack app) by using it as an asynchronous task manager or an event-based IO manager (such as for WebSockets).

Managing Events and Timers

The following script demonstrates the GReactor's ability to act as an asynchronous task manager:

#!/usr/bin/env ruby
# encoding: UTF-8

require 'greactor'

# Start the reactor (at the moment, the reactor doesn't autostart).
GR.start

# Use `on_shutdown` to create a task to be executed before the application exits.
GR.on_shutdown { puts "This will automatically be called before the app exits." }


# Use `run_after` to create delayed tasks.
GR.run_after(10) { puts "This might be missed, as it isn't yet scheduled for execution " \
                        "and the application will probably exit before the 10 seconds are up." }


# Use `run_async` and `callback` to set a few tasks.
GR.run_async { sleep 3; puts "This was a blocking task." }
GR.run_async { puts "This will run in the next execution cycle." }
GR.callback GR, :info, "Log this asynchronously."
GR.callback(GR, :logger) {|logger| puts "This block received GReactor.logger's returned value: #{logger}"}


# As you can see, the thread continues while the tasks run in the background.
puts "The thread continues... Did it out-raced the task cycle?"

exit

Although IO objects are optional, please be aware that timers will not persist beyond the process's life (just like EventMachine timers). Timed events are only scheduled for execution once their time has come, which is why they might be missed if the process exits before they are scheduled to execute. If this is an issue, you will need to use a persistent database to store and collect these tasks.
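
If that matters for your application, one approach is to keep the task descriptions in your own persistent store and re-schedule them whenever the process boots. In the sketch below, `load_pending_tasks` and `run_task` are hypothetical helpers backed by your own database - only `GR.run_after` comes from GReactor:

require 'greactor'
GR.start

# `load_pending_tasks` and `run_task` are hypothetical application helpers;
# GReactor only provides the in-process timer.
load_pending_tasks.each do |task|
    delay = [task[:run_at] - Time.now, 0].max
    GR.run_after(delay) { run_task(task) }
end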

WebSockets and other IO clients

Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/boazsegev/reactor.

License

The gem is available as open source under the terms of the MIT License.