Class: Mongrel::HttpServer

Inherits: Object
Defined in:
lib/mongrel.rb

Overview

This is the main driver of Mongrel, while Mongrel::HttpParser and Mongrel::URIClassifier make up the majority of how the server functions. It's a very simple class: one thread accepts connections and a simple HttpServer#process_client method does the heavy lifting with the IO and Ruby.

You use it by doing the following:

server = HttpServer.new("0.0.0.0", 3000)
server.register("/stuff", MyNiftyHandler.new)
server.run.join

The last line can be just server.run if you don't want to join the thread it returns, but be aware that if nothing ever joins that thread, the Ruby process will simply exit once the main thread finishes.
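A sketch of the non-joining form (same hypothetical server and handler as above), where you hold on to the acceptor thread and join it once the rest of your setup is done:

server = HttpServer.new("0.0.0.0", 3000)
server.register("/stuff", MyNiftyHandler.new)
server.run                  # returns the acceptor thread immediately
# ... any other setup work goes here ...
server.acceptor.join        # keeps the process alive until the server stops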

Ruby’s thread implementation is “interesting” to say the least. Experiments with many different types of IO processing simply cannot make a dent in it. Future releases of Mongrel will find other creative ways to make threads faster, but don’t hold your breath until Ruby 1.9 is actually finally useful.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(host, port, num_processors = (2**30-1), timeout = 0) ⇒ HttpServer

Creates a working server on host:port (strange things happen if port isn't a Number). Use HttpServer#run to start the server and HttpServer#acceptor.join to join the thread that's processing incoming requests on the socket.

The optional num_processors argument is the maximum number of concurrent clients the server will process; any connection over this limit is closed immediately to maintain server processing performance. This may seem mean, but it is the most efficient way to deal with overload. Other schemes involve still parsing the client's request, which defeats the point of an overload handling system.

The timeout parameter is a sleep timeout (in hundredths of a second) that is placed between socket.accept calls in order to give the server a cheap throttle time. It defaults to 0, and when it is 0 the sleep is skipped entirely.

TODO: Find out if anyone actually uses the timeout option, since it seems to cause problems on FreeBSD.
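As a rough sketch of a tuned constructor call (the numbers and the handler are only placeholders), a server capped at 100 concurrent clients with a 0.1-second throttle between accepts would look like this:

server = HttpServer.new("0.0.0.0", 3000, 100, 10)  # at most 100 workers, 10/100ths of a second accept throttle
server.register("/stuff", MyNiftyHandler.new)
server.run.join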



# File 'lib/mongrel.rb', line 497

def initialize(host, port, num_processors=(2**30-1), timeout=0)
  @socket = TCPServer.new(host, port) 
  @classifier = URIClassifier.new
  @host = host
  @port = port
  @workers = ThreadGroup.new
  @timeout = timeout
  @num_processors = num_processors
  @death_time = 60
end

Instance Attribute Details

#acceptor ⇒ Object (readonly)

Returns the value of attribute acceptor.



# File 'lib/mongrel.rb', line 474

def acceptor
  @acceptor
end

#classifier ⇒ Object (readonly)

Returns the value of attribute classifier.



# File 'lib/mongrel.rb', line 476

def classifier
  @classifier
end

#host ⇒ Object (readonly)

Returns the value of attribute host.



# File 'lib/mongrel.rb', line 477

def host
  @host
end

#num_processors ⇒ Object (readonly)

Returns the value of attribute num_processors.



# File 'lib/mongrel.rb', line 480

def num_processors
  @num_processors
end

#port ⇒ Object (readonly)

Returns the value of attribute port.



# File 'lib/mongrel.rb', line 478

def port
  @port
end

#timeout ⇒ Object (readonly)

Returns the value of attribute timeout.



# File 'lib/mongrel.rb', line 479

def timeout
  @timeout
end

#workers ⇒ Object (readonly)

Returns the value of attribute workers.



# File 'lib/mongrel.rb', line 475

def workers
  @workers
end

Instance Method Details

#graceful_shutdown ⇒ Object

Performs a wait on all the currently running threads and kills any that take too long. Right now it just waits 60 seconds, but a future release will make this configurable. The @timeout setting extends this waiting period by that much more.



# File 'lib/mongrel.rb', line 612

def graceful_shutdown
  while reap_dead_workers("shutdown") > 0
    STDERR.print "Waiting for #{@workers.list.length} requests to finish, could take #{@death_time + @timeout} seconds."
    sleep @death_time / 10
  end
end

#process_client(client) ⇒ Object

Does the majority of the IO processing. It has been written in Ruby using about seven different IO processing strategies, and no matter how it's done the performance just does not improve. It is currently carefully constructed to make sure it gets the best possible performance, but anyone who thinks they can make it faster is more than welcome to take a crack at it.



# File 'lib/mongrel.rb', line 514

def process_client(client)
  begin
    parser = HttpParser.new
    params = {}
    request = nil
    data = client.readpartial(Const::CHUNK_SIZE)
    nparsed = 0

    # Assumption: nparsed will always be less since data will get filled with more
    # after each parsing.  If it doesn't get more then there was a problem
    # with the read operation on the client socket.  Effect is to stop processing when the
    # socket can't fill the buffer for further parsing.
    while nparsed < data.length
      nparsed = parser.execute(params, data, nparsed)

      if parser.finished?
        script_name, path_info, handlers = @classifier.resolve(params[Const::REQUEST_URI])

        if handlers
          params[Const::PATH_INFO] = path_info
          params[Const::SCRIPT_NAME] = script_name
          params[Const::REMOTE_ADDR] = params[Const::HTTP_X_FORWARDED_FOR] || client.peeraddr.last
          notifier = handlers[0].request_notify ? handlers[0] : nil

          # TODO: Find a faster/better way to carve out the range, preferably without copying.
          data = data[nparsed ... data.length] || ""

          request = HttpRequest.new(params, data, client, notifier)

          # in the case of large file uploads the user could close the socket, so skip those requests
          break if request.body == nil  # nil signals from HttpRequest::initialize that the request was aborted

          # request is good so far, continue processing the response
          response = HttpResponse.new(client)

          # Process each handler in registered order until we run out or one finalizes the response.
          handlers.each do |handler|
            handler.process(request, response)
            break if response.done or client.closed?
          end

          # And finally, if nobody closed the response off, we finalize it.
          unless response.done or client.closed? 
            response.finished
          end
        else
          # Didn't find it, return a stock 404 response.
          client.write(Const::ERROR_404_RESPONSE)
        end

        break #done
      else
        # Parser is not done, queue up more data to read and continue parsing
        data << client.readpartial(Const::CHUNK_SIZE)
        if data.length >= Const::MAX_HEADER
          raise HttpParserError.new("HEADER is longer than allowed, aborting client early.")
        end
      end
    end
  rescue EOFError,Errno::ECONNRESET,Errno::EPIPE,Errno::EINVAL,Errno::EBADF
    # ignored
  rescue HttpParserError
    STDERR.puts "#{Time.now}: BAD CLIENT (#{params[Const::HTTP_X_FORWARDED_FOR] || client.peeraddr.last}): #$!"
  rescue Errno::EMFILE
    reap_dead_workers('too many files')
  rescue Object
    STDERR.puts "#{Time.now}: ERROR: #$!"
  ensure
    client.close unless client.closed?
    request.body.delete if request and request.body.class == Tempfile
  end
end

#reap_dead_workers(reason = 'unknown') ⇒ Object

Used internally to kill off any worker threads that have taken too long to complete processing. Only called if there are too many processors currently servicing requests. It returns the count of workers still active after the reap is done, and it only runs if there are workers to reap.



# File 'lib/mongrel.rb', line 591

def reap_dead_workers(reason='unknown')
  if @workers.list.length > 0
    STDERR.puts "#{Time.now}: Reaping #{@workers.list.length} threads for slow workers because of '#{reason}'"
    mark = Time.now
    @workers.list.each do |w|
      w[:started_on] = Time.now if not w[:started_on]

      if mark - w[:started_on] > @death_time + @timeout
        STDERR.puts "Thread #{w.inspect} is too old, killing."
        w.raise(TimeoutError.new("Timed out thread."))
      end
    end
  end

  return @workers.list.length
end

#register(uri, handler, in_front = false) ⇒ Object

Simply registers a handler with the internal URIClassifier. When the URI matches the prefix of a request, your handler's HttpHandler#process method is called. See Mongrel::URIClassifier#register for more information.

If you set in_front=true then the passed-in handler is put at the front of the list for that URI; otherwise it's placed at the end.
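A short sketch of both forms (MyNiftyHandler is the handler from the overview; LoggingHandler is a hypothetical second handler used only for illustration):

server.register("/stuff", MyNiftyHandler.new)         # appended to the end of the chain
server.register("/stuff", LoggingHandler.new, true)   # in_front=true, so it runs first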



# File 'lib/mongrel.rb', line 665

def register(uri, handler, in_front=false)
  script_name, path_info, handlers = @classifier.resolve(uri)

  if not handlers
    @classifier.register(uri, [handler])
  else
    if path_info.length == 0 or (script_name == Const::SLASH and path_info == Const::SLASH)
      if in_front
        handlers.unshift(handler)
      else
        handlers << handler
      end
    else
      @classifier.register(uri, [handler])
    end
  end

  handler.listener = self
end

#run ⇒ Object

Runs the server. It returns the thread used, so you can "join" it. You can also access the #acceptor attribute to get the thread later.



# File 'lib/mongrel.rb', line 622

def run
  BasicSocket.do_not_reverse_lookup=true

  @acceptor = Thread.new do
    while true
      begin
        client = @socket.accept
        worker_list = @workers.list

        if worker_list.length >= @num_processors
          STDERR.puts "Server overloaded with #{worker_list.length} processors (#@num_processors max). Dropping connection."
          client.close
          reap_dead_workers("max processors")
        else
          thread = Thread.new { process_client(client) }
          thread.abort_on_exception = true
          thread[:started_on] = Time.now
          @workers.add(thread)

          sleep @timeout/100 if @timeout > 0
        end
      rescue StopServer
        @socket.close if not @socket.closed?
        break
      rescue Errno::EMFILE
        reap_dead_workers("too many open files")
        sleep 0.5
      end
    end

    graceful_shutdown
  end

  return @acceptor
end

#stop ⇒ Object

Stops the acceptor thread and then causes the worker threads to finish off the request queue before finally exiting.
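A sketch of how this might be wired up (the signal trap is an assumption for illustration, not something this class provides):

server = HttpServer.new("0.0.0.0", 3000)
server.register("/stuff", MyNiftyHandler.new)
acceptor = server.run
trap("INT") { server.stop }   # ask the acceptor thread to shut down on Ctrl-C
acceptor.join                 # returns once the acceptor exits and the workers are reaped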



# File 'lib/mongrel.rb', line 694

def stop
  stopper = Thread.new do 
    exc = StopServer.new
    @acceptor.raise(exc)
  end
  stopper.priority = 10
end

#unregister(uri) ⇒ Object

Removes any handlers registered at the given URI. See Mongrel::URIClassifier#unregister for more information. Remember, this removes them all, so the entire processing chain for that URI goes away.
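For example, assuming the server from the overview:

server.unregister("/stuff")   # every handler registered at /stuff is removed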



# File 'lib/mongrel.rb', line 688

def unregister(uri)
  @classifier.unregister(uri)
end