Module: AsyncRender
- Defined in:
- lib/async_render.rb,
lib/async_render/utils.rb,
lib/async_render/engine.rb,
lib/async_render/warmup.rb,
lib/async_render/current.rb,
lib/async_render/version.rb,
lib/async_render/executor.rb,
lib/async_render/controller.rb,
lib/async_render/middleware.rb,
lib/async_render/async_helper.rb,
lib/async_render/memoized_helper.rb,
lib/generators/async_render/install/install_generator.rb
Defined Under Namespace
Modules: AsyncHelper, Controller, Generators, MemoizedHelper, Utils, Warmup
Classes: Current, Engine, Executor, Middleware
Constant Summary
- PLACEHOLDER_PREFIX = "<!--ASYNC-PLACEHOLDER:".freeze
- PLACEHOLDER_SUFFIX = "-->".freeze
- PLACEHOLDER_TEMPLATE = "#{PLACEHOLDER_PREFIX}%s#{PLACEHOLDER_SUFFIX}".freeze
- VERSION = "0.1.1"
Class Variable Summary
- @@enabled = true
- @@timeout = 10
- @@executor = nil
- @@dump_state_proc = nil
- @@restore_state_proc = nil
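The placeholder constants point at how async fragments are represented in the response body: the template has a single %s slot, so each pending fragment can be marked with an HTML comment carrying its id, to be swapped for the finished markup later (presumably by the middleware). A minimal sketch; the fragment id here is purely illustrative:

```ruby
fragment_id = "a1b2c3" # hypothetical id for a fragment still being rendered
placeholder = format(AsyncRender::PLACEHOLDER_TEMPLATE, fragment_id)
placeholder # => "<!--ASYNC-PLACEHOLDER:a1b2c3-->"
```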
Class Method Summary
- .build_default_executor ⇒ Object
- .configure {|_self| ... } ⇒ Object
- .executor ⇒ Object
  Lazily build a conservative default executor sized to avoid DB pool contention.
- .memoized_cache ⇒ Object
  Global, process-local memoized cache for rendered fragments or values. NOTE: This persists across requests in the Ruby process.
- .reset_memoized_cache! ⇒ Object
Class Method Details
.build_default_executor ⇒ Object
```ruby
# File 'lib/async_render.rb', line 47

def self.build_default_executor
  # Heuristics: cap by AR pool size and RAILS_MAX_THREADS, with a sane upper bound.
  ar_pool_size = begin
    defined?(ActiveRecord) && ActiveRecord::Base.connection_pool&.size
  rescue StandardError
    nil
  end

  puma_max_threads = Integer(ENV["RAILS_MAX_THREADS"]) rescue nil

  # Defaults
  hard_cap = 16

  max_threads = [ ar_pool_size, puma_max_threads, hard_cap ].compact.min || 8
  min_threads = [ 2, max_threads ].min

  # Concurrent::ThreadPoolExecutor.new(
  #   min_threads: min_threads,
  #   max_threads: max_threads,
  #   idletime: 60,
  #   max_queue: 1_000,
  #   fallback_policy: :caller_runs
  # )
  #
  Concurrent::FixedThreadPool.new(
    max_threads,
    idletime: 60,
    max_queue: 1_000,
    fallback_policy: :caller_runs
  )
  #
  # Concurrent::CachedThreadPool.new(
  #   min_threads: min_threads,
  #   max_threads: max_threads,
  #   max_queue: 1_000,
  #   fallback_policy: :caller_runs
  # )
end
```
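To make the sizing heuristic concrete: with a typical 5-connection ActiveRecord pool and RAILS_MAX_THREADS=5, the pool is capped at the smallest of the three candidates, so heavy async rendering cannot starve the database connection pool. The numbers below are illustrative:

```ruby
ar_pool_size     = 5   # ActiveRecord::Base.connection_pool.size in this example
puma_max_threads = 5   # Integer(ENV["RAILS_MAX_THREADS"])
hard_cap         = 16

max_threads = [ ar_pool_size, puma_max_threads, hard_cap ].compact.min || 8 # => 5
min_threads = [ 2, max_threads ].min                                        # => 2
```

The :caller_runs fallback policy means that once the 1,000-task queue is full, new work runs inline on the calling thread instead of raising, degrading latency rather than failing requests.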
.configure {|_self| ... } ⇒ Object
```ruby
# File 'lib/async_render.rb', line 28

def self.configure
  yield self
end
```
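Since configure simply yields the module itself, options are set by calling writers on the yielded object, typically from an initializer. A sketch, assuming writers matching the @@enabled and @@timeout class variables exist (they are not shown in this summary):

```ruby
# config/initializers/async_render.rb (illustrative path)
AsyncRender.configure do |config|
  config.enabled = !Rails.env.test?  # assumed writer backing @@enabled
  config.timeout = 5                 # assumed writer backing @@timeout (default 10)
end
```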
.executor ⇒ Object
Lazily build a conservative default executor sized to avoid DB pool contention.
```ruby
# File 'lib/async_render.rb', line 33

def self.executor
  @@executor ||= build_default_executor
end
```
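Because the result is memoized into @@executor, the first caller builds the pool and every later call returns the same instance, so any replacement executor has to be configured before the first render. The pool is a standard concurrent-ruby executor and accepts work through #post. Illustrative only:

```ruby
pool = AsyncRender.executor        # builds the default pool on first access
pool.equal?(AsyncRender.executor)  # => true, the same memoized instance

# Work can be handed to the pool like any concurrent-ruby executor:
pool.post { Rails.logger.info("rendering a fragment off the request thread") }
```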
.memoized_cache ⇒ Object
Global, process-local memoized cache for rendered fragments or values. NOTE: This persists across requests in the Ruby process.
```ruby
# File 'lib/async_render.rb', line 39

def self.memoized_cache
  @memoized_cache ||= Concurrent::Map.new
end
```
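Since the cache is a Concurrent::Map, callers get thread-safe read-through memoization for free. A sketch of memoizing an expensive fragment across requests; the key and the helper are illustrative:

```ruby
html = AsyncRender.memoized_cache.compute_if_absent("sidebar/v1") do
  # Evaluated at most once per process until the cache is reset;
  # subsequent requests reuse the stored value.
  expensive_sidebar_markup # hypothetical helper that builds the HTML
end
```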
.reset_memoized_cache! ⇒ Object
```ruby
# File 'lib/async_render.rb', line 43

def self.reset_memoized_cache!
  @memoized_cache = Concurrent::Map.new
end
```
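Because the cache persists for the life of the Ruby process, resetting it between test examples keeps memoized values from leaking across cases. One possible RSpec hook:

```ruby
# spec/rails_helper.rb (illustrative)
RSpec.configure do |config|
  config.before(:each) { AsyncRender.reset_memoized_cache! }
end
```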