autoscaler's Introduction

Sidekiq Heroku Autoscaler

Sidekiq performs background jobs. While its threading model allows it to scale more easily than worker-per-process background systems, people running test or lightly loaded systems on Heroku still want to scale down to zero workers to avoid racking up charges.

Requirements

Tested on Ruby 2.1.7 and the Heroku Cedar stack.

Installation

gem install autoscaler

Getting Started

This gem uses Heroku's platform-api gem, which requires an OAuth token from Heroku. It also needs the Heroku app name. By default, these are specified through environment variables. You can also pass them to HerokuPlatformScaler explicitly.

AUTOSCALER_HEROKU_ACCESS_TOKEN=.....
AUTOSCALER_HEROKU_APP=....
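
For example, constructing the scaler with explicit values rather than the environment variables might look like this. This is a sketch: the positional order (process type, token, app name) mirrors the HerokuScaler constructor shown in an issue later in this document, so verify the exact signature against your gem version.

```ruby
require 'autoscaler/heroku_platform_scaler'

# Explicit construction instead of the AUTOSCALER_HEROKU_* variables.
# The positional order (process type, OAuth token, app name) is an
# assumption based on the HerokuScaler constructor; check the gem source.
scaler = Autoscaler::HerokuPlatformScaler.new(
  'worker',                  # Heroku process type to scale
  ENV['HEROKU_OAUTH_TOKEN'], # hypothetical variable holding the token
  'my-app-name'              # Heroku app name
)
```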

Install the middleware in your Sidekiq.configure_ blocks

require 'autoscaler/sidekiq'
require 'autoscaler/heroku_platform_scaler'

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Client, 'default' => Autoscaler::HerokuPlatformScaler.new
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuPlatformScaler.new, 60) # 60 second timeout
  end
end

Limits and Challenges

  • HerokuPlatformScaler includes an attempt at a current-worker cache that may be an overcomplication, and it doesn't work very well on the server.
  • Multiple scale-down loops may be started, particularly if there are multiple jobs queued when the server comes up. Heroku seems to handle multiple scale-down commands well.
  • The scale-down monitor is triggered on job completion (and server middleware is only run around jobs), so if the server never processes any jobs, it won't turn off.
  • The retry and schedule lists are considered; if you schedule a long-running task, the process will not scale down.
  • If background jobs trigger jobs in other scaled processes, note that you'll need config.client_middleware in your Sidekiq.configure_server block in order to scale up.
  • Exceptions raised while calling the Heroku API are caught and printed by default. See HerokuPlatformScaler#exception_handler to override.
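
For example, routing those caught exceptions to an error reporter instead of stdout might look like this. This is a sketch under the assumption that #exception_handler is a plain writer; MyErrorReporter is a hypothetical reporting hook, and you should check the gem source for your version.

```ruby
require 'autoscaler/heroku_platform_scaler'

# Replace the default print-and-continue behaviour with a custom handler.
# The writer name follows HerokuPlatformScaler#exception_handler above;
# MyErrorReporter stands in for your error-tracking service.
scaler = Autoscaler::HerokuPlatformScaler.new
scaler.exception_handler = lambda do |exception|
  MyErrorReporter.notify(exception)
end
```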

Experimental

Strategies

You can pass a scaling strategy object instead of the timeout to the server middleware. The object (or lambda) should respond to #call(system, idle_time) and return the desired number of workers. See lib/autoscaler/binary_scaling_strategy.rb for an example.
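
For instance, a strategy equivalent to a simple on/off rule can be a plain lambda. The `queued` and `workers` readers on the system snapshot are assumptions here; see lib/autoscaler/binary_scaling_strategy.rb for the real interface.

```ruby
# A stand-in for the queue-system snapshot the middleware passes in.
System = Struct.new(:queued, :workers)

# Return 1 worker while there is queued work, or while we are still
# inside the idle timeout; otherwise scale to zero.
binary_strategy = lambda do |system, idle_time|
  if system.queued > 0 || (system.workers > 0 && idle_time < 60)
    1
  else
    0
  end
end

binary_strategy.call(System.new(5, 1), 0)    # => 1 (work queued)
binary_strategy.call(System.new(0, 1), 120)  # => 0 (idle past timeout)
```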

Initial Workers

Call Client#set_initial_workers to start workers on main process startup; typically:

Autoscaler::Sidekiq::Client.add_to_chain(chain, 'default' => heroku).set_initial_workers

Worker caching

scaler.counter_cache = Autoscaler::CounterCacheRedis.new(Sidekiq.method(:redis))
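
The cache exists so that multiple processes (e.g. several Unicorn workers) share one cached worker count instead of each hitting the Heroku API. The interface can be sketched in plain Ruby; this is an illustration of the idea, not the gem's CounterCacheRedis code, and the two method names are assumptions based on the counter-cache classes referenced in this document.

```ruby
# Minimal time-expiring counter cache: #counter returns the cached value
# while it is fresh, otherwise refreshes it from the given block;
# #counter= stores a value and resets the freshness timestamp.
class SimpleCounterCache
  def initialize(ttl_seconds = 5)
    @ttl = ttl_seconds
    @value = nil
    @stamp = Time.at(0)
  end

  def counter(&fallback)
    return @value if @value && (Time.now - @stamp) < @ttl
    self.counter = fallback.call
  end

  def counter=(value)
    @stamp = Time.now
    @value = value
  end
end

cache = SimpleCounterCache.new(60)
cache.counter { 3 }  # => 3 (miss: refreshed from the block)
cache.counter { 99 } # => 3 (hit: cached value wins)
```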

Tests

The project is set up to run RSpec with Guard. It expects a redis instance on a custom port, which is started by the Guardfile.

The HerokuPlatformScaler is not tested by default because it makes live API requests. Specify AUTOSCALER_HEROKU_APP and AUTOSCALER_HEROKU_ACCESS_TOKEN on the command line, and then watch your app's logs.

AUTOSCALER_HEROKU_APP=... AUTOSCALER_HEROKU_ACCESS_TOKEN=... guard
heroku logs --app ...

Authors

Justin Love, @wondible, https://github.com/JustinLove

Contributors

Licence

Released under the MIT license.


autoscaler's Issues

Disable Autoscaler via ENV

Since Heroku is having lots of API issues today, I thought it should be possible to disable scaling temporarily in Autoscaler. ENV['AUTOSCALER_ENABLED'] = 'false' or something like that should skip the API checks and not scale at all. That way I can disable autoscaler and either leave the current workers alive or specify them manually.
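
A kill switch like the one requested above can be built in the initializer itself. This is a sketch; AUTOSCALER_ENABLED is a hypothetical variable, not one the gem reads.

```ruby
# Skip installing the middleware entirely when the flag is 'false', so no
# Heroku API calls are made and worker counts are left alone.
def autoscaler_enabled?(env = ENV)
  env.fetch('AUTOSCALER_ENABLED', 'true') != 'false'
end

# In Sidekiq.configure_client, the guard would wrap the chain.add call:
#   chain.add(Autoscaler::Sidekiq::Client, 'default' => heroku) if autoscaler_enabled?
```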

Project requires more active maintainer

I'm no longer working on the project which spawned this gem, and so I don't have a real test environment for it, or as much motivation to keep up with maintenance.

Restarting workers?

If I manually scale down my workers and there are jobs in the queue, should I expect autoscaler to restart my workers for me?

Or does something need to trigger the restart check, like the scheduling of a new job for a queue?

Also, when I deploy to heroku while my workers are processing jobs, they do not seem to restart automatically, which means they continue to run the old code they have loaded in memory. Should I expect them to restart? What do people do about this?

Number of scaled worker option

It would be great to be able to set the desired number of workers when scaling up, i.e.:

scaler = Autoscaler::HerokuScaler.new
scaler.wanted_workers = 4 # or some other variable name...

WDYT?

Does not scale down to 0

Scales from 0 to 1 but not back to 0 after work is complete and the timeout is reached. Using Unicorn with preload_app. Used in a staging environment with 'worker: bundle exec sidekiq -e $RACK_ENV -C ./config/sidekiq.yml' in the Procfile.

require 'sidekiq'
require 'autoscaler/sidekiq'
require 'autoscaler/heroku_scaler'
heroku = Autoscaler::HerokuScaler.new

Sidekiq.configure_client do |config|
  config.redis = { :size => 1 }
  config.client_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Client, 'default' => heroku
  end
end

Sidekiq.configure_server do |config|
  config.redis = { :size => 4 }
  config.server_middleware do |chain|
    chain.add(Autoscaler::Sidekiq::Server, heroku, 60)
  end
end

ThreadError: can't alloc thread

Looks like my error is related to #59

I am using autoscaler (1.0.0), and the scale down issue was supposedly fixed in 0.12.0

Here is my error log from Rollbar
ThreadError: can't alloc thread

File /app/vendor/bundle/ruby/2.5.0/gems/autoscaler-1.0.0/lib/autoscaler/sidekiq/thread_server.rb line 52 in new
File /app/vendor/bundle/ruby/2.5.0/gems/autoscaler-1.0.0/lib/autoscaler/sidekiq/thread_server.rb line 52 in wait_for_downscale
File /app/vendor/bundle/ruby/2.5.0/gems/autoscaler-1.0.0/lib/autoscaler/sidekiq/thread_server.rb line 26 in ensure in call
File /app/vendor/bundle/ruby/2.5.0/gems/autoscaler-1.0.0/lib/autoscaler/sidekiq/thread_server.rb line 26 in call
File /app/vendor/bundle/ruby/2.5.0/gems/sidekiq-5.2.5/lib/sidekiq/middleware/chain.rb line 130 in block in invoke
File /app/vendor/bundle/ruby/2.5.0/gems/rollbar-2.18.2/lib/rollbar/plugins/sidekiq/plugin.rb line 11 in call
File /app/vendor/bundle/ruby/2.5.0/gems/sidekiq-5.2.5/lib/sidekiq/middleware/chain.rb line 130 in block in invoke
File /app/vendor/bundle/ruby/2.5.0/gems/scout_apm-2.4.21/lib/scout_apm/background_job_integrations/sidekiq.rb line 68 in call

This is my sidekiq.rb scaling strategy:

require 'sidekiq'
require 'autoscaler/sidekiq'
require 'autoscaler/linear_scaling_strategy'
require 'autoscaler/heroku_platform_scaler'

if ENV['AUTOSCALER_HEROKU_ACCESS_TOKEN'].present?
  Sidekiq.configure_client do |config|
    config.redis = { :url => ENV["REDIS_URL"], :size => 1 }
    config.client_middleware do |chain|
      heroku   = Autoscaler::HerokuPlatformScaler.new
      strategy = Autoscaler::LinearScalingStrategy.new(max_workers: 9, worker_capacity: 1, min_factor: 0)
      Autoscaler::Sidekiq::Client
      .add_to_chain(chain, 'mailers'=>heroku, 'default'=>heroku, 'low'=>heroku, 'ahoy'=>heroku)
      .set_initial_workers(strategy)
    end
  end

  Sidekiq.configure_server do |config|
    config.redis = { :url => ENV["REDIS_URL"], :size => 3 }
    config.server_middleware do |chain|
      p "Setting up auto-scaledown"
      strategy = Autoscaler::LinearScalingStrategy.new(max_workers: 9, worker_capacity: 1, min_factor: 0)
      chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuPlatformScaler.new, strategy)
      # chain.add(Autoscaler::Sidekiq::Server, heroku, 60)  # 60 second timeout
    end
  end
end

Version 0.4.0 is broken as delayed_shutdown.rb is missing

I got the following error message:

rake aborted!
cannot load such file -- autoscaler/delayed_shutdown
/vagrant/config/initializers/sidekiq.rb:2:in `<top (required)>'

The current release does not contain the file mentioned.

I verified this by downloading the gem directly from http://rubygems.org/gems/autoscaler and unpacking it afterwards

Reason

https://github.com/JustinLove/autoscaler/blob/master/autoscaler.gemspec lists every single file which should be included, and the new file is missing.
Instead of naming every single file, you could use a wildcard, like here: https://github.com/plataformatec/simple_form/blob/master/simple_form.gemspec

multiple processes stuck in wait_or_scale loop when queue is empty

When multiple jobs hit the middleware at the same time, they will all sit in the wait_for_task_or_scale routine for the defined timeout to see if the queues remain empty. As seen in the log output below, the last two workers both decided to scale down at the same time. This time it didn't cause a problem, but it seems like it could, especially if we are running 20 or more workers that finish at the same time, all running the loop.

Not sure yet where to address this. Thanks.

2013-01-18T20:24:26+00:00 app[worker.1]: 2013-01-18T20:24:26Z 2 TID-g0iss TestWorker JID-edfccd1737ededd07219a55b INFO: start
2013-01-18T20:24:26+00:00 app[worker.1]: 2013-01-18T20:24:26Z 2 TID-iwtag TestWorker JID-61358ce245f64ae6ec49e957 INFO: start
2013-01-18T20:25:01+00:00 app[worker.1]: "Scaling worker to 0"
2013-01-18T20:25:01+00:00 app[worker.1]: "Scaling worker to 0"
2013-01-18T20:25:01+00:00 app[worker.1]: 2013-01-18T20:25:01Z 2 TID-iwtag TestWorker JID-61358ce245f64ae6ec49e957 INFO: done: 35.235 sec
2013-01-18T20:25:01+00:00 app[worker.1]: 2013-01-18T20:25:01Z 2 TID-g0iss TestWorker JID-edfccd1737ededd07219a55b INFO: done: 35.355 sec

Rename ENV vars

The new Heroku-16 stack reserves HEROKU_* ENV vars for Heroku. See https://devcenter.heroku.com/articles/heroku-16-stack#what-s-new

The HEROKU_ namespace is reserved for config vars set by the Heroku platform in order to offer functionality. If you have created HEROKU_ config vars, we suggest you change them when upgrading to Heroku-16, in order to avoid config var conflicts.

middleware not getting injected?

Hi -- I'm a new user of Discourse, which recommends autoscaler. I followed the directions here:

https://github.com/discourse/discourse/blob/master/docs/HEROKU.md#autoscaler

I noticed that sidekiq will get scaled from 0 to 1 when there is a new job, but then it never gets scaled down after that (I waited 23 minutes).

The first thing I explored was if the middleware is getting properly injected. It seems like it is not. (see below)

so my questions:

  1. Am I correct that the middleware is not getting injected properly?
  2. Is this the cause of the process not getting scaled down? If not, what might be?
  3. Why does scale-up work but not scale-down?

thanks!

➔ heroku run rake middleware
Running `rake middleware` attached to terminal... up, run.9895
use Rack::Cache::Discourse
use ActionDispatch::Static
use Rack::Lock
use Rack::Runtime
use Rack::MethodOverride
use ActionDispatch::RequestId
use SilenceLogger
use ActionDispatch::ShowExceptions
use ActionDispatch::DebugExceptions
use ActionDispatch::RemoteIp
use Rack::Sendfile
use ActionDispatch::Callbacks
use ActiveRecord::ConnectionAdapters::ConnectionManagement
use ActiveRecord::QueryCache
use ActionDispatch::Cookies
use ActionDispatch::Session::CookieStore
use ActionDispatch::Flash
use ActionDispatch::ParamsParser
use ActionDispatch::Head
use Rack::ConditionalGet
use Rack::ETag
use ActionDispatch::BestStandardsSupport
use MessageBus::Rack::Middleware
use OmniAuth::Builder
run Discourse::Application.routes

can't make autoscaler to work with multiple queues

Hi,
I have a Heroku application with 3 Sidekiq workers on 3 different queues and I'm trying to make it work with this great gem.

My workers are configured to use the following queues: "enrich", "mass_enrich" and "import" (all given as strings and not symbols)

That's my sidekiq.rb conf file:

require 'sidekiq'
require 'autoscaler/sidekiq'
require 'autoscaler/heroku_scaler'

heroku = nil
if ENV['HEROKU_APP']
  heroku = {}
  scaleable = %w[mass_enrich enrich import] - (ENV['ALWAYS'] || '').split(' ')
  scaleable.each do |queue|
    heroku[queue] = Autoscaler::HerokuScaler.new(
      queue,
      ENV['HEROKU_API_KEY'],
      ENV['HEROKU_APP'])
  end
end

Sidekiq.configure_client do |config|
  if heroku
    config.client_middleware do |chain|
      chain.add Autoscaler::Sidekiq::Client, heroku
    end
  end
end

# define HEROKU_PROCESS in the Procfile:
#
#    default: env HEROKU_PROCESS=default bundle exec sidekiq -r ./background/boot.rb
#    import:  env HEROKU_PROCESS=import bundle exec sidekiq -q import -c 1 -r /background/boot.rb

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    if heroku && ENV['HEROKU_PROCESS'] && heroku[ENV['HEROKU_PROCESS']]
      p "Setting up auto-scaledown"
      chain.add(Autoscaler::Sidekiq::Server, heroku[ENV['HEROKU_PROCESS']], 60, [ENV['HEROKU_PROCESS']])
    else
      p "Not scaleable"
    end
  end
end

and that's my Procfile:

web: bundle exec thin start -R config.ru -e $RAILS_ENV -p $PORT
mass_enrich: env HEROKU_PROCESS=mass_enrich bundle exec sidekiq -c 1 -q mass_enrich
enrich: env HEROKU_PROCESS=enrich bundle exec sidekiq -c 25 -q enrich
import: env HEROKU_PROCESS=import bundle exec sidekiq -c 1 -q import

My user can only send jobs to the mass_enrich queue. This worker in turn, after performing some filtering and normalization, will send some enrich jobs (can be hundreds or thousands). The EnrichWorker will perform the enrichment and might send an import job.
I've separated to 3 queues because I wanted to ensure that mass_enrichment requests are not stuck behind a lot of enrichments, and that import will always happen at a slow pace without hurting enrichment capacity.

What happens is that I send the mass_enrichment job and autoscaler starts the mass_enrich process. It handles the job, and I see that Sidekiq now has waiting jobs on the enrich queue, but then nothing happens.
When I manually (via rails console) send a dummy enrich job, the enrich process starts and handles all of them, but then the same happens with the import job.

Is there any problem with workers enqueuing jobs for other workers? Any other approaches to solve my problem?

Many thanks, and if I can help in any way, please feel free to ask. I'm not a ruby/rails expert and don't know the middleware layer very well but will be happy to try.

Zach

Autoscaling multiple workers

Is there any reason to NOT do this in sidekiq.rb?

Sidekiq.configure_client do |config|
  heroku = Autoscaler::HerokuScaler.new
  heroku.workers = 10
  config.redis = {url: ENV["REDISTOGO_URL"]}
  config.client_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Client, 'default' => heroku
  end
end

Seeing as you pay for Heroku workers on a pro rata basis, I'm assuming you'll pay the same amount overall but get the work done in a tenth of the time. Correct?

Spurious scale API calls

I was seeing "Scaling to 1", along with an API message, in the heroku logs when the process was already running.

How to use LinearScalingStrategy?

I'm not sure exactly how to set up my middleware chain to use the LinearScalingStrategy with DelayedShutdown. Here's what I've got so far, but it only ever scales to 1 worker dyno, even when I add 10+ jobs. Sidekiq's concurrency is set to 1.

# Only auto-scale on Heroku
if ENV['HEROKU_API_KEY'] and ENV['HEROKU_APP']
  Sidekiq.configure_client do |config|
    config.client_middleware do |chain|
      heroku   = Autoscaler::HerokuScaler.new
      strategy = Autoscaler::DelayedShutdown.new(
        Autoscaler::LinearScalingStrategy.new(10, 1),
        60
      )
      Autoscaler::Sidekiq::Client
        .add_to_chain(chain, 'default' => heroku)
        .set_initial_workers(strategy)
    end
  end

  Sidekiq.configure_server do |config|
    config.server_middleware do |chain|
      strategy = Autoscaler::DelayedShutdown.new(
        Autoscaler::LinearScalingStrategy.new(10, 1),
        60
      )
      chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuScaler, strategy)
    end
  end
end

What am I doing wrong?
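
For reference, the LinearScalingStrategy.new(10, 1) above maps queue depth to a worker count roughly like this. This is a sketch of the arithmetic, not the gem's internals, and the min_factor option mentioned elsewhere in this document is ignored here.

```ruby
# workers = ceil(queued jobs / per-worker capacity), clamped to [0, max].
def linear_workers(queued, max_workers, worker_capacity)
  [[(queued.to_f / worker_capacity).ceil, max_workers].min, 0].max
end

linear_workers(0, 10, 1)   # => 0  (nothing queued: scale to zero)
linear_workers(25, 10, 1)  # => 10 (demand above max: clamp to max)
```

With 10 queued jobs and a capacity of 1 per worker, this arithmetic would request all 10 dynos.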

Heroku platform API error

Getting the error below because Heroku shut down the legacy Platform API (v2):

Heroku::API::Errors::ErrorWithResponse: Expected(200) <=> Actual(410 Gone)
body: "{\"id\":\"gone\",\"error\":\"This version of the API has been Sunset.\\nPlease see https://devcenter.heroku.com/changelog-items/1147 for more information.\\n\"}">

support for Sidekiq 3.0.0

It seems like the terminal complains on bundle install that the version of sidekiq required by this gem is not compatible (it requires an older version of sidekiq). Can it be updated so we can use 3.0.0?

Handle heroku api downtimes

Today, the heroku api was down due to some problems with the database.
During this short downtime we lost a lot of background jobs, since enqueueing failed because autoscaler was not able to handle this situation.
Here is our backtrace:

Heroku::API::Errors::ErrorWithResponse: Expected(200) <=> Actual(503 Service Unavailable) body: ""

/vendor/bundle/ruby/2.1.0/gems/excon-0.31.0/lib/excon/middlewares/expects.rb:6 in "response_call"
/vendor/bundle/ruby/2.1.0/gems/excon-0.31.0/lib/excon/middlewares/response_parser.rb:26 in "response_call"
/vendor/bundle/ruby/2.1.0/gems/excon-0.31.0/lib/excon/connection.rb:398 in "response"
/vendor/bundle/ruby/2.1.0/gems/excon-0.31.0/lib/excon/connection.rb:268 in "request"
/vendor/bundle/ruby/2.1.0/gems/heroku-api-0.3.17/lib/heroku/api.rb:76 in "request"
/vendor/bundle/ruby/2.1.0/gems/heroku-api-0.3.17/lib/heroku/api/processes.rb:6 in "get_ps"
/vendor/bundle/ruby/2.1.0/bundler/gems/autoscaler-530b8575e123/lib/autoscaler/heroku_scaler.rb:62 in "heroku_get_workers"
/vendor/bundle/ruby/2.1.0/bundler/gems/autoscaler-530b8575e123/lib/autoscaler/heroku_scaler.rb:26 in "block in workers"
/vendor/bundle/ruby/2.1.0/bundler/gems/autoscaler-530b8575e123/lib/autoscaler/counter_cache_memory.rb:24 in "counter"
/vendor/bundle/ruby/2.1.0/bundler/gems/autoscaler-530b8575e123/lib/autoscaler/heroku_scaler.rb:26 in "workers"
/vendor/bundle/ruby/2.1.0/bundler/gems/autoscaler-530b8575e123/lib/autoscaler/sidekiq/client.rb:20 in "call"
/vendor/bundle/ruby/2.1.0/gems/sidekiq-2.17.3/lib/sidekiq/middleware/chain.rb:124 in "block in invoke"

Autoscaler should be able to ignore errors like this and just enqueue the job in redis.

Project maintainer should also be a user

I've stopped running the project that I built autoscaler for. I'm out of touch with real world issues, don't have a realistic test environment, and generally don't have a big incentive to keep up with things. Having a more active maintainer would be good.

Linear scaling strategy with long running jobs

Say I have 5 dynos with a worker capacity of 5, each performing 1 long-running job. The autoscaler will see that I only need 1 server and downscale to 1 server.

This effectively kills the long-running job on 4 out of 5 dynos (they still get retried, but have to be restarted from scratch).

Is there any way to mitigate this problem?

Ideally, I'd like to only issue downscales if nothing is active or queued even if I end up running my dynos for a bit longer.

Error when scaling down - Unable to create thread

WARN: ThreadError: can't create Thread: Resource temporarily unavailable
2015-12-23T15:51:14.908914+00:00 app[worker.1]: 3 TID-owpwn1y0o WARN: /app/vendor/bundle/ruby/2.2.0/gems/autoscaler-0.11.0/lib/autoscaler/sidekiq/thread_server.rb:32:in `initialize'

Occurred when reaching the timeout value and trying to decide whether to scale down, with more jobs left to process.

I am unable to replicate this now. I did some other config changes, so I restarted the dynos a few times. I also noticed that a similar error was logged from the Librato gem. Anyway, I just wanted to leave this here despite not having more info or being able to replicate it at the moment.

403 Forbidden when scaling worker

I'm having problems with the autoscaler gem. This is a new app I just picked up, and it seems it was working fine a month ago. This is the problematic stack trace:

2014-04-09T19:25:16.043768+00:00 app[web.1]: "Scaling sidekiq to 1"
2014-04-09T19:25:16.188542+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/excon-0.25.1/lib/excon/connection.rb:353:in `response'
2014-04-09T19:25:16.188542+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/excon-0.25.1/lib/excon/connection.rb:247:in `request'
2014-04-09T19:25:16.188542+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/excon-0.25.1/lib/excon/middlewares/expects.rb:6:in `response_call'
2014-04-09T19:25:16.188542+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/heroku-api-0.3.13/lib/heroku/api.rb:76:in `request'
2014-04-09T19:25:16.188542+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/autoscaler-0.2.1/lib/autoscaler/heroku_scaler.rb:38:in `workers='
2014-04-09T19:25:16.188542+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/heroku-api-0.3.13/lib/heroku/api/processes.rb:36:in `post_ps_scale'
2014-04-09T19:25:16.188312+00:00 app[web.1]: Expected(200) <=> Actual(403 Forbidden)
2014-04-09T19:25:16.188542+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/autoscaler-0.2.1/lib/autoscaler/sidekiq.rb:17:in `call'
2014-04-09T19:25:16.188542+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/sidekiq-2.10.1/lib/sidekiq/middleware/chain.rb:114:in `call'
2014-04-09T19:25:16.188542+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/sidekiq-2.10.1/lib/sidekiq/middleware/chain.rb:114:in `invoke'
2014-04-09T19:25:16.188921+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/sidekiq-2.10.1/lib/sidekiq/worker.rb:68:in `client_push'
2014-04-09T19:25:16.188921+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/sidekiq-2.10.1/lib/sidekiq/client.rb:113:in `process_single'
2014-04-09T19:25:16.188921+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/sidekiq-2.10.1/lib/sidekiq/extensions/generic_proxy.rb:17:in `method_missing'
2014-04-09T19:25:16.188921+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/activerecord-3.2.17/lib/active_record/associations/collection_proxy.rb:91:in `each'
2014-04-09T19:25:16.188921+00:00 app[web.1]: /app/vendor/bundle/ruby/2.0.0/gems/sidekiq-2.10.1/lib/sidekiq/client.rb:41:in `push'

The code around it is just a call to send an email via a worker. I'm receiving 403 Forbidden, as it seems we are unable to scale the sidekiq worker. I double-checked the HEROKU env variables and they seem fine. Is this related to changes in the API? What can I try? Thanks a lot.

Handling multiple queues with a single worker dyno

I know this topic has been talked about in two other issues, so I apologize for opening a third. I have been unable to find an answer on whether I can monitor three queues with a single worker, or if I am required to have a worker in my Procfile for each queue. I worry about db and redis connection limits, so I would really like to do this with a single dyno. Here is my sidekiq initializer:

unless Rails.env.test? || Rails.env.development?
  require 'autoscaler/sidekiq'
  require 'autoscaler/heroku_scaler'

  Sidekiq.configure_client do |config|
    config.client_middleware do |chain|
      heroku = Autoscaler::HerokuScaler.new
      chain.add Autoscaler::Sidekiq::Client, 'default' => heroku, 'mailers' => heroku, 'high_priority' => heroku
    end
    config.redis = { :size => 10 }
  end

  Sidekiq.configure_server do |config|
    config.server_middleware do |chain|
      chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuScaler.new, 60) # 60 second timeout
    end
    config.redis = { :size => 12 }
  end
end

I currently have three jobs in the mailers queue, but AutoScaler is not scaling up my worker to handle them.

Thanks for your time!

Exception handling when Heroku API call fails?

I'm wondering what happens when autoscaler hits an exception while attempting to do its thing. Here's my understanding:

  1. I enqueue something in my code
  2. Sidekiq attempts to enqueue that job
  3. Autoscaler jumps in at some point and attempts to tell the API to add a worker
  4. Exception thrown
  5. Exception bubbles back up to the top, stopping Sidekiq & my code in its tracks

Does the job end up getting enqueued in Sidekiq, or does that process fail because Autoscaler did? Is it worth trying to catch exceptions in Autoscaler since it isn't a "critical" process?

What led me to these questions is that I've been noticing weird Excon errors in honeybadger today. Here are the two exception messages I've gotten, both of which have backtraces that lead me to believe that it's happening when autoscaler attempts to hit the heroku api (each links to a gist of the backtrace):

Sorry, this is more just a dump of my thoughts, but hopefully it sparks a good conversation..

wait_for_task_or_scale should consider scheduled work queues as well

I have noticed my sidekiq process gets shut down when I still have jobs scheduled for the future. This only happens in a heroku environment, not locally.

Right now the autoscaler only considers the enqueued queues via the pending_work? method. However, it is possible for failed jobs to be in the RETRY set or for scheduled jobs to be in the SCHEDULED set.

I think the code could look like this in lib/autoscaler/sidekiq.rb

      def scheduled_work?
        ::Sidekiq.redis { |conn| conn.zcard("retry") > 0 || conn.zcard("scheduled") > 0 } 
      end

      def pending_work?
        queues.any? {|q| !empty?(q)}
      end

      def wait_for_task_or_scale
        loop do
          return if pending_work?
          return if scheduled_work?
          return @scaler.workers = 0 if idle?
          sleep(0.5)
        end
      end

Unfortunately I am not exactly sure how to test this with your current setup so I didn't put together a commit.

By the way, while looking at your tests I noticed you might want to define a redis connection so you aren't connecting to an existing localhost db that has workers present, which causes your jobs to fail.

Consider making scheduled jobs optional

@espen

I suggest adding a value which specifies the timeframe for which the autoscaler stays idle if there are scheduled jobs.

I.e. :schedule_limit => 5.minutes would cause autoscaler to scale down if the next scheduled job is 10 minutes from now, and to stay idle if the next job is 4 minutes from now.

Scheduled jobs must then either be processed whenever autoscaler scales up again or through Heroku Scheduler.
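
The proposed check could be sketched as follows. This is illustrative only: next_job_at would come from Sidekiq's scheduled set, and the names are hypothetical.

```ruby
# Stay alive only when the next scheduled job is due within the limit.
def keep_alive_for_schedule?(next_job_at, schedule_limit_seconds, now = Time.now)
  return false if next_job_at.nil?
  (next_job_at - now) <= schedule_limit_seconds
end

now = Time.now
keep_alive_for_schedule?(now + 4 * 60, 5 * 60, now)   # => true  (job in 4 min)
keep_alive_for_schedule?(now + 10 * 60, 5 * 60, now)  # => false (job in 10 min)
```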

Can't get my worker to startup when one of my 2 queues is updated

I have 2 queues, called mailchimp and default. They both share the same worker process, but that worker isn't getting started when I add jobs to them. They do seem to shut down properly if I start the worker manually.

Here is my Procfile

web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec sidekiq -C ./config/sidekiq.yml

Here is sidekiq.yml

:concurrency:   6
:queues:
  - [mailchimp, 1]
  - [default, 1]
:limits:
  mailchimp: 9

And here is my autoscaler.rb in config/initializers/

require 'sidekiq'
require 'autoscaler/sidekiq'
require 'autoscaler/heroku_scaler'

heroku = nil
if ENV['HEROKU_APP']
  heroku = Autoscaler::HerokuScaler.new
end

Sidekiq.configure_client do |config|
  if heroku
    config.client_middleware do |chain|
      chain.add Autoscaler::Sidekiq::Client, 'default' => heroku
    end
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    if heroku
      p "Setting up auto-scaledown"
      chain.add(Autoscaler::Sidekiq::Server, heroku, 60, %w[default mailchimp])
    else
      p "Not scaleable"
    end
  end
end

I'm guessing I'm just doing something wrong but I'm not sure what.

Sidekiq 5 support?

Does this gem support sidekiq 5?

Bundler could not find compatible versions for gem "sidekiq":
In snapshot (Gemfile.lock):
sidekiq (= 5.0.0)
In Gemfile:
sidekiq
autoscaler was resolved to 0.12.0, which depends on
sidekiq (~> 4.0)

Heroku API rate limit

I ran into an issue with the Heroku API rate limit today. I see the calls to the API are cached for 5 seconds, but when using Unicorn with multiple workers I assume this is only cached within each worker process. Thus there can be a lot of calls to the API. The rate limit is 1200 per hour. Any ideas on how to improve this?

time to release a new version?

The current version on rubygems doesn't seem to support sidekiq 3. I think it's the right time to upgrade the version on rubygems to the latest?

Autoscaler is passing an incorrect authentication "type" param to platform-api

Hi,

I instantiate a new Autoscaler::HerokuPlatformScaler object like this:
as = Autoscaler::HerokuPlatformScaler.new type: 'worker', token: 'my-token', app: 'my-app-name'

When I invoke the type method, I get a hash instead of the type "worker":
as.type
output: {:type=>"worker", :token=>"my-token", :app=>"my-app-name"}

This way, it passes the wrong arguments to the heroku_set_workers method, and I get the following error in production:

"Scaling worker to 1" Excon::Error::Unauthorized: Expected([200, 201, 202, 204, 206, 304]) <=> Actual(401 Unauthorized)

My environment:
Ruby: ruby 2.3.4p301
Rails: 3.2.22.5

Handling Multiple Queues

I have multiple queues (ie: default, mailer, etc...) within Sidekiq.

I have read the other issues concerning multiple queues, but they seem more complicated -- I wanted to know if there's a simpler way to do this?

For example:

chain.add Autoscaler::Sidekiq::Client, ['default','mailer'] => Autoscaler::HerokuScaler.new

I tried using an array, but it didn't work. Any advice for me (and future folks who encounter multiple-queue needs) is greatly appreciated!
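
The client middleware expects a Hash from queue name to scaler, so the simpler route is to reuse one scaler instance under several keys rather than an Array key; this mirrors the 'default'/'mailers' mapping shown elsewhere in this document.

```ruby
# Build the queue => scaler mapping; every queue shares the same scaler,
# so they all control the same worker dyno. Object.new stands in for
# Autoscaler::HerokuScaler.new purely for illustration.
scaler = Object.new
mapping = %w[default mailer].each_with_object({}) { |q, h| h[q] = scaler }

# chain.add Autoscaler::Sidekiq::Client, mapping
mapping.keys              # ["default", "mailer"]
mapping.values.uniq.size  # 1 (one shared scaler)
```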

don't want to use set_initial_workers

I have things working almost how I want them; however, I cannot figure out how to do what I have without using set_initial_workers.

Here is my initializer, any help would be greatly appreciated:

require 'autoscaler/heroku_platform_scaler'
require 'autoscaler/linear_scaling_strategy'


if ENV['HEROKU_ACCESS_TOKEN'] and ENV['HEROKU_APP']
  Sidekiq.configure_client do |config|
    config.client_middleware do |chain|
      heroku   = Autoscaler::HerokuPlatformScaler.new
      strategy = Autoscaler::DelayedShutdown.new(
        Autoscaler::LinearScalingStrategy.new(ENV['MAX_WORKERS'].to_i, ENV['SIDEKIQ_CONCURRENCY'].to_i),
        10
      )
      Autoscaler::Sidekiq::Client
      .add_to_chain(chain, 'files'=>heroku, 'rows'=>heroku, 'default'=>heroku)
      .set_initial_workers(strategy)
    end
  end

  Sidekiq.configure_server do |config|
    config.server_middleware do |chain|
      strategy = Autoscaler::DelayedShutdown.new(
        Autoscaler::LinearScalingStrategy.new(ENV['MAX_WORKERS'].to_i, ENV['SIDEKIQ_CONCURRENCY'].to_i),
        10
      )
      chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuPlatformScaler.new, strategy)
    end
  end
end
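If the initial worker count doesn't need to be set at client boot, the plain README-style client setup works without set_initial_workers — a sketch under that assumption, reusing the queue mapping from the initializer above:

```ruby
require 'autoscaler/sidekiq'
require 'autoscaler/heroku_platform_scaler'

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    heroku = Autoscaler::HerokuPlatformScaler.new
    # Without set_initial_workers, scale-up happens when jobs are enqueued
    # rather than being forced at boot time.
    chain.add Autoscaler::Sidekiq::Client,
              'files' => heroku, 'rows' => heroku, 'default' => heroku
  end
end
```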

Autoscaler doesn't start automatically

Hello,

I am using autoscaler with Sidekiq on Heroku. The autoscale feature doesn't seem to be working for me.

The issue is that it doesn't seem to start automatically. If I put something into the queue, nothing happens until I go to Heroku and manually scale the workers up to 1, or run 'bundle exec sidekiq -q default' from the Heroku CLI.

After it finishes a job, it seems to scale down okay, but it does not scale up for the next job I put into the queue.

Here is my Procfile:

web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec sidekiq -q default

I have HEROKU_APP and HEROKU_API_KEY set in my config variables (does it matter if I'm the app owner or not?)

Sidekiq seems to be functioning correctly as all my jobs seem to go into the queue, just not processed by background workers.

Any help would be greatly appreciated. I am on Ruby 2.0.0 and Rails 3.2.13.

Here is my sidekiq.rb initializer:

    if ENV['RAILS_ENV'] == 'production'
        Excon.defaults[:ssl_verify_peer] = true
    else
        Excon.defaults[:ssl_verify_peer] = false
    end

    require 'sidekiq'
    require 'autoscaler/sidekiq'
    require 'autoscaler/heroku_scaler'

    heroku = nil
    if ENV['HEROKU_APP']
         heroku = Autoscaler::HerokuScaler.new
    end

    Sidekiq.configure_client do |config|
        if heroku
            config.client_middleware do |chain|
                chain.add Autoscaler::Sidekiq::Client, 'default' => heroku
            end
        end
    end

    Sidekiq.configure_server do |config|
        config.server_middleware do |chain|
            if heroku
                p "[Sidekiq] Running on Heroku, autoscaler is used"
                chain.add(Autoscaler::Sidekiq::Server, heroku, 1) # NB: the third argument is in seconds, so this is a 1 second timeout
            else
                p "[Sidekiq] Running locally, so autoscaler isn't used"
            end
        end
    end
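One way to isolate whether the problem is the client middleware or the Heroku credentials is to exercise the scaler directly from a console on the app (heroku run console). This is a hypothetical diagnostic — it assumes the scaler exposes a `workers` accessor, so check it against the version of the gem you actually have installed:

```ruby
# Run inside `heroku run console` on the app.
# Hypothetical diagnostic (verify the accessor exists in your gem version):
require 'autoscaler/heroku_scaler'

scaler = Autoscaler::HerokuScaler.new
puts scaler.workers   # should print the current worker count; raises if
                      # HEROKU_APP / HEROKU_API_KEY are wrong or unauthorized
scaler.workers = 1    # should start a worker dyno if credentials are good
```

If this scales the dyno but enqueuing a job doesn't, the client middleware isn't running in the process doing the enqueuing (e.g. the initializer isn't loaded on the web dyno).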

Timeout::Error (execution expired)

First of all, thanks for sharing this great gem!

I'm trying to make it work in my app but with no success. Could you help me figure out whether it's a bug or I'm missing something?

When I perform MyMailer.delay.blah(1) it successfully scales to worker=1. Although the email goes out, the worker ends up with a strange Timeout::Error (execution expired), as follows:

execution expired
  /app/vendor/bundle/ruby/1.9.1/gems/autoscaler-0.0.3/lib/autoscaler/sidekiq.rb:65:in `sleep'

  -------------------------------
Backtrace:
-------------------------------

  /app/vendor/bundle/ruby/1.9.1/gems/autoscaler-0.0.3/lib/autoscaler/sidekiq.rb:65:in `sleep'
  /app/vendor/bundle/ruby/1.9.1/gems/autoscaler-0.0.3/lib/autoscaler/sidekiq.rb:65:in `block in wait_for_task_or_scale'
  /app/vendor/bundle/ruby/1.9.1/gems/autoscaler-0.0.3/lib/autoscaler/sidekiq.rb:62:in `loop'
  /app/vendor/bundle/ruby/1.9.1/gems/autoscaler-0.0.3/lib/autoscaler/sidekiq.rb:62:in `wait_for_task_or_scale'
  /app/vendor/bundle/ruby/1.9.1/gems/autoscaler-0.0.3/lib/autoscaler/sidekiq.rb:41:in `call'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/chain.rb:111:in `block in invoke'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/server/timeout.rb:11:in `block in call'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/server/timeout.rb:10:in `call'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/chain.rb:111:in `block in invoke'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/server/active_record.rb:6:in `call'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/chain.rb:111:in `block in invoke'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/server/retry_jobs.rb:49:in `call'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/chain.rb:111:in `block in invoke'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/server/logging.rb:11:in `block in call'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/logging.rb:22:in `with_context'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/server/logging.rb:7:in `call'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/chain.rb:111:in `block in invoke'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/chain.rb:114:in `call'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/middleware/chain.rb:114:in `invoke'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/processor.rb:44:in `block (2 levels) in process'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/processor.rb:80:in `stats'
  /app/vendor/bundle/ruby/1.9.1/gems/sidekiq-2.6.4/lib/sidekiq/processor.rb:43:in `block in process'
  /app/vendor/bundle/ruby/1.9.1/gems/celluloid-0.12.4/lib/celluloid/calls.rb:23:in `call'
  /app/vendor/bundle/ruby/1.9.1/gems/celluloid-0.12.4/lib/celluloid/calls.rb:23:in `public_send'
  /app/vendor/bundle/ruby/1.9.1/gems/celluloid-0.12.4/lib/celluloid/calls.rb:23:in `dispatch'
  /app/vendor/bundle/ruby/1.9.1/gems/celluloid-0.12.4/lib/celluloid/future.rb:18:in `block in initialize'
  /app/vendor/bundle/ruby/1.9.1/gems/celluloid-0.12.4/lib/celluloid/internal_pool.rb:48:in `call'
  /app/vendor/bundle/ruby/1.9.1/gems/celluloid-0.12.4/lib/celluloid/internal_pool.rb:48:in `block in create'

-------------------------------
Data:
-------------------------------

  * data: {:message=>
    {"retry"=>true,
     "queue"=>"default",
     "timeout"=>30,
     "class"=>"Sidekiq::Extensions::DelayedMailer",
     "args"=>["---\n- !ruby/class 'ContactMailer'\n- :contact\n- - 1\n"],
     "jid"=>"39ff2ff597db75548000ebb0",
     "error_message"=>"execution expired",
     "error_class"=>"Timeout::Error",
     "failed_at"=>2013-01-18 02:11:30 UTC,
     "retry_count"=>0}}

Thanks again!
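For what it's worth, the backtrace suggests a plausible cause: wait_for_task_or_scale sleeps inside Sidekiq's Timeout middleware, and the job data shows "timeout"=>30, so an autoscaler idle wait longer than 30 seconds would expire the job's own timeout. A hedged sketch of a workaround under that assumption — keep the scale-down wait shorter than the job timeout:

```ruby
require 'autoscaler/sidekiq'
require 'autoscaler/heroku_scaler'

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    # Keep the idle wait (third argument, seconds) below the job's 30s
    # timeout so the scale-down sleep cannot trip Timeout::Error.
    chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuScaler.new, 20)
  end
end
```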
