Comments (13)

pboling commented on June 19, 2024

I have a setup that is very similar. I am also unable to get autoscaler to start my workers, but it does shut them down just fine.

My Procfile looks like this:

  web: bundle exec rails server puma -p $PORT -e $RACK_ENV
  critical: env HEROKU_PROCESS=critical bundle exec sidekiq -c 4 -q critical,1
  default: env HEROKU_PROCESS=default bundle exec sidekiq -c 2 -q default,1

I have no idea what else to try. I'm considering switching back to HireFireApp.

JustinLove commented on June 19, 2024

@friendlycredit, I think the issue is that you have background jobs trigger jobs in a different worker - my application doesn't do this (my import jobs are triggered manually or by the scheduler). As a consequence, the example code doesn't configure scale-up on the background jobs. I believe that you need to add config.client_middleware to your Sidekiq.configure_server block.

JustinLove commented on June 19, 2024

@pboling can you open a separate issue to save us some confusion?

Please provide your sidekiq middleware configuration.

Another good place to start would be to try and get https://github.com/JustinLove/autoscaler_sample working, and then make a fork with your configuration that demonstrates the problem.

pboling commented on June 19, 2024

@JustinLove My issue seems to be literally the same as @friendlycredit's, so I am trying your advice to him now. Our jobs schedule other jobs too.

pboling commented on June 19, 2024

@JustinLove Just to verify there wasn't a typo - you are saying add config.client_middleware to the Sidekiq.configure_server block?

I already have a config.server_middleware block in my Sidekiq.configure_server block:

  config.server_middleware do |chain|
    chain.add(Autoscaler::Sidekiq::Server, heroku[ENV['HEROKU_PROCESS']], 120, [ENV['HEROKU_PROCESS']])
  end

Are you saying I need the config.client_middleware in addition to, or in place of, the config.server_middleware I have there now? Or was it a typo, and what I have is correct?

pboling commented on June 19, 2024

@JustinLove Here is my complete sidekiq initializer and Procfile.

JustinLove commented on June 19, 2024

Middleware and configuration are two separate things that often happen to be entangled.

The configurations are alternates depending on how the process is run: server runs when you start the Sidekiq CLI, client runs any other time. The configure methods are conveniences to streamline setting the configuration for each case. Their execution is mutually exclusive - one has yield if server? and the other yield unless server?
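
As a sketch of that split (based only on the behavior described here, not on the actual Sidekiq source), only the block matching the current process is ever yielded:

  require 'sidekiq'

  Sidekiq.configure_server do |config|
    # Yielded only inside a Sidekiq worker process (started via the sidekiq CLI).
  end

  Sidekiq.configure_client do |config|
    # Yielded in every other process: web dynos, rake tasks, the console, etc.
  end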

Middleware exists in two chains; the chains are disjoint, but both can be configured and running in the same process. The client chain is executed when pushing a job, and the server chain is executed when running a job.

So: when running a worker, the configure_server block is run and the configure_client block is not. Any process that schedules jobs for a worker other than itself needs a client_middleware with the autoscaler client installed.

If the duplication offends you, the version of Sidekiq I'm looking at passes Sidekiq as the config object, so you could set up the client_middleware outside of the configure blocks to have it run every time.
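
Putting those pieces together, here is a sketch of the kind of initializer being described, modeled on the autoscaler README of this era and on the Procfile above. The constructor arguments and the assumption that queue names match the Heroku process names are mine, so check them against the gem version actually in use:

  require 'sidekiq'
  require 'autoscaler/sidekiq'
  require 'autoscaler/heroku_scaler'

  # One scaler per Heroku process type, assuming process and queue names line
  # up (critical, default, low) and that HerokuScaler reads the app name and
  # API key from the environment.
  heroku = {}
  %w[critical default low].each do |type|
    heroku[type] = Autoscaler::HerokuScaler.new(type)
  end

  Sidekiq.configure_client do |config|
    # Web dynos and other non-worker processes scale workers up on push.
    config.client_middleware do |chain|
      chain.add Autoscaler::Sidekiq::Client, heroku
    end
  end

  Sidekiq.configure_server do |config|
    # A worker that pushes jobs to *other* workers also needs the client
    # middleware; this is the piece being pointed out above.
    config.client_middleware do |chain|
      chain.add Autoscaler::Sidekiq::Client, heroku
    end
    # Scale this worker's own process down once its queues stay empty.
    config.server_middleware do |chain|
      chain.add(Autoscaler::Sidekiq::Server,
                heroku[ENV['HEROKU_PROCESS']], 120, [ENV['HEROKU_PROCESS']])
    end
  end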

See also:

@pboling Are you no longer using low? It's in your Sidekiq config, but not in your Procfile.

pboling commented on June 19, 2024

@JustinLove That is correct - I removed low from the Procfile. I guess that could be my problem? It'll take me a while to digest your explanation. I'll take a look at those links. Thanks for the help!

pboling commented on June 19, 2024

@JustinLove does autoscaler scale workers when running foreman locally? Why, or why not? Is there any way to test that it is capable of scaling up without redeploying each time I want to try a new tweak?

I am still unable to scale up, but down is working fine. I have updated my gist with the current code.

UPDATE: I've been watching my queues. I'm not sure when default started working, but I just deployed a switch to the GitHub master (gem 'sidekiq', github: 'mperham/sidekiq') and I have seen my default queue scale up. So it is working partially. I am still having to manually start the critical queue. The low queue hasn't been empty yet, so I can't tell if it is being managed. Also, it seems as though the empty queues are no longer scaling down.

I think I may have something bad in my sidekiq commands in the Procfile, so reading up on that now.

JustinLove commented on June 19, 2024

Right now it only interacts with the Heroku API. The autoscaler_sample project has some notes on remote-controlling a Heroku instance from your local machine for testing purposes. I made HerokuScaler a separate object, so in principle you could write a scaler that spawned and killed processes locally.

I'll look at the gist later.

I'll have to review this, but I believe there is still some spooky action at a distance because only one active flag is being used; if your low queue is constantly cycling, it may be keeping the others up.

JustinLove commented on June 19, 2024

I made a sample configuration using your gist.

https://github.com/JustinLove/autoscaler_sample/tree/pboling

Things work fine in the basic case, so there are no fundamental configuration errors. I removed the final call to ActiveRecord since the sample has no database, and I didn't try running it with Puma.

I was able to prevent other processes from scaling down by hitting 'low' occasionally, so the theoretical entanglement has actually been observed. I was also seeing spurious scale-to-1 for the low process, so something is off there.

JustinLove commented on June 19, 2024

I just pushed 0.2.1 which should eliminate the known crosstalk issues.

There is also a StubScaler that can be used for local testing. The only thing it does is print a message, but you can check whether scaling is being triggered without a heroku puppet.
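
For anyone wanting to watch scale-up being triggered under foreman without touching Heroku at all, a hypothetical logging scaler along these lines could be swapped in wherever the Heroku scaler is constructed. The workers reader/writer interface is an assumption based on the scaler being a separate, pluggable object, so verify it against the gem source:

  # Hypothetical stand-in for local testing; like StubScaler, it only prints.
  class LoggingScaler
    attr_reader :workers

    def initialize(type)
      @type = type
      @workers = 0
    end

    # Instead of calling the Heroku API, just log the requested dyno count.
    def workers=(count)
      puts "scale #{@type} -> #{count}"
      @workers = count
    end
  end

  # Swap it in for the Heroku scalers when running under foreman:
  heroku = Hash.new { |hash, type| hash[type] = LoggingScaler.new(type) }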

pboling commented on June 19, 2024

Great! I'll give the new version a shot and report back. On the previous version I also saw strange random scale-to-1 messages when I could have sworn it was already scaled to 1, which I think is the same thing you mentioned.
