queueclassic / queue_classic
Simple, efficient worker queue for Ruby & PostgreSQL.
License: MIT License
While using the batch processing features of queue_classic, I ran into this problem:
NoMethodError: undefined method `database' for #<QC::Queue:0xce59a14 @name="default">
Since you've been so helpful, I thought I would keep it up...
I haven't seen this in production yet (though I haven't looked closely), but a couple of times now in dev I have noticed 1-2 jobs getting orphaned (left in a queue even with workers running).
I have one such queue and job now. Running select * from nightly_jobs;
shows:
id | details | locked_at
-------+-------------------------------------------------------------------------+----------------------------
28226 | {"job":"UsersHelper.update_user_facebook_information","params":[11678]} | 2011-06-08 17:31:33.547898
(1 row)
What can I do to get such a job worked? How might this happen? (obviously I was doing a lot of testing with killing workers, but I believe I only used SIGINT on the workers for this queue)
It'd be nice to let QC fall back to using the standard settings from database.yml
if DATABASE_URL
or QC_DATABASE_URL
aren't present, as is normal outside of the Heroku environment. It'd save having to use the rails-database-url
gem, which is currently causing some issues in our test environment.
Does that sound like a sensible thing to do? If so, I'll try and pull a patch together.
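One possible shape for such a fallback, sketched as a hypothetical helper (none of these names exist in QC): read a Rails-style database.yml and build a URL only when neither QC_DATABASE_URL nor DATABASE_URL is already set.

```ruby
require "yaml"

# Hypothetical sketch: derive a DATABASE_URL-style string from database.yml
# when neither QC_DATABASE_URL nor DATABASE_URL is present.
def database_url_from_yaml(yaml_text, env = "development", envvars = ENV)
  from_env = envvars["QC_DATABASE_URL"] || envvars["DATABASE_URL"]
  return from_env if from_env

  cfg  = YAML.load(yaml_text)[env]
  user = cfg["username"] ? "#{cfg['username']}@" : ""
  host = cfg["host"] || "localhost"
  "postgres://#{user}#{host}/#{cfg['database']}"
end

yaml = <<-YML
development:
  adapter: postgresql
  host: localhost
  username: app
  database: app_dev
YML

puts database_url_from_yaml(yaml, "development", {})
```

The env vars keep their precedence, so Heroku-style setups are unaffected.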
Is there a way to know how many times a failed job has been retried? I understand the need to keep the framework as simple and light as possible, and I read in the documentation how to relaunch a failed job, but that is of little use if there is no way to tell how many times the failed job has already been tried. A consistently failing job would just be retried forever.
master doesn't seem to pass on several of the rubies listed in the file
http://travis-ci.org/#!/ryandotsmith/queue_classic/builds/2048585
Is there a reason to keep 1.8.7, jruby and rbx?
With this Gemfile:
source :rubygems
gem "queue_classic", "2.0.0rc1"
…this code:
QC::Queries.load_functions
…gives this error:
/Users/paul/.rvm/gems/ruby-1.9.3-p0@qc_test/gems/queue_classic-2.0.0rc1/lib/queue_classic/queries.rb:40:in `initialize': No such file or directory - /Users/paul/.rvm/gems/ruby-1.9.3-p0@qc_test/gems/queue_classic-2.0.0rc1/lib/sql/ddl.sql (Errno::ENOENT)
…because the gem doesn't include the sql dir containing the ddl files:
ls -l $(bundle show queue_classic)
total 40
drwxr-xr-x 4 paul staff 136 Mar 27 12:54 lib
-rw-r--r-- 1 paul staff 18398 Mar 27 12:54 readme.md
drwxr-xr-x 5 paul staff 170 Mar 27 12:54 test
We are using Rails 3.0.7 and a recent master of QC. We are using the Rails cache with memcached.
We recently upgraded to a multi-tenant architecture and started running more than one worker per queue.
We ran into an issue where our objects were not deserialized from the Rails cache properly, because the model class wasn't loaded into that worker's object space yet.
The first worker would actually execute a code block with model(s) and store the result to the cache. Then, a different worker used the cached result which included model classes it didn't know how to deserialize.
To fix this, we require_dependency for all of our ActiveRecord model classes before we start our custom worker:
require 'queue_classic'
require "#{Rails.root}/lib/jobs/custom_worker.rb"
namespace :jobs do
desc "Work custom background jobs (specify QUEUE)"
task :custom_work => :environment do
# load all model classes for cache demarshalling
Dir.glob("#{Rails.root}/app/models/*.rb").each { |file| require_dependency file }
CustomWorker.new.start
end
end
We are not suggesting any changes to QC, but maybe there is a good place that issues like this could be documented so others don't have to solve them again.
I couldn't find a solution anywhere for making queue_classic write its logs to a file. Scrolls, which is used for logging, doesn't seem to have any examples either.
Could someone provide a working example?
I have been unable to find any of the previous documentation that explains how to extend a worker and override the handle_failure method, and I have been having trouble getting it to work (probably due to a noob mistake).
I needed to be able to puts the error messages to the console during development as I have been having issues debugging a completely unrelated error but it looks like the old handle_failure method has been updated in favor of one that calls a logging method.
I did enable the debug env var to get the log to output, but this didn't give me enough information.
Are there any plans to put this documentation up again, or is it somewhere that I have missed?
My current workaround is to use an older version of QC that uses the old handle_failure method.
Thanks,
S.
Most of the time, a job has a single id that refers to a row in a table. For example, if you need to process an order, the job payload would contain an "order_id" field that refers to orders.id. Ideally, the job's order_id column could be a foreign key to orders.id. This ensures that all the jobs reference valid data.
This also implies that you could have a table per type of job, with database constraints on the job tables that ensure that all the data in the job is correct.
Another example: I want a background job that checks a comment for spam. My spam_check_jobs table would look like:
create table spam_check_jobs (
id serial primary key,
comment_id integer not null references comments(id),
method_name text not null default 'ProcessSpam.process!',
locked_at timestamp
);
When that job is run, something similar to eval "#{method_name}(:comment_id => #{comment_id})"
is run.
This makes it impossible to insert invalid data into the jobs table.
Thoughts on how to best handle this?
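One safer alternative to eval, sketched here with a stand-in ProcessSpam module (hypothetical, not QC code): split the stored string into a constant and a method name and dispatch with public_send. Only method names written by trusted code should ever reach this point.

```ruby
# Stand-in for the real job class referenced by method_name.
module ProcessSpam
  def self.process!(args)
    "checked comment #{args[:comment_id]}"
  end
end

# Hypothetical dispatcher: "ProcessSpam.process!" -> constant lookup + send,
# instead of building and eval-ing a Ruby source string.
def run_job(method_name, params)
  const_name, meth = method_name.split(".", 2)
  Object.const_get(const_name).public_send(meth, params)
end

puts run_job("ProcessSpam.process!", :comment_id => 42)
```

Combined with the foreign-key constraints above, this keeps both the data and the dispatch path checked.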
One thing that has always bothered me in the queue_classic setup process is this line:
$ ruby -r queue_classic -e "QC::Database.new.load_functions"
Some people call this function in an irb session, others place it in a migration. There has got to be a better way.
One idea is to have the worker load the function when it is initialized. Producers do not need this function, so it is safe to make the worker responsible for loading its function.
We would have to create or replace the function each time we init a worker, so we should figure out some algorithm that loads the function only if it is not already there.
I see connection= in lib/queue_classic/conn.rb in github, but gem 2.0.3 is missing that method, and hence I can't complete the installation as per the documentation. Any idea why this might be?
What's the best way to use this with something like Capistrano? For now I have rake qc:work
in my deploy.rb, but I'm just wondering if I'm missing something.
The random offset selection code has a tiny order of operations related bug:
it should be:
SELECT TRUNC(random() * (top_boundary + 1)) INTO relative_top;
so that the possible results are [0..top_boundary] inclusive.
The way it is written now:
SELECT TRUNC(random() * top_boundary + 1) INTO relative_top;
results in [1..top_boundary]
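A quick sanity check of the two formulas, using Ruby's rand in place of Postgres' random() (TRUNC on a non-negative float behaves like floor):

```ruby
top_boundary = 9

# As written: random() * top_boundary + 1, then truncate.
as_written = Array.new(10_000) { (rand * top_boundary + 1).floor }
# As proposed: random() * (top_boundary + 1), then truncate.
fixed      = Array.new(10_000) { (rand * (top_boundary + 1)).floor }

# as_written never produces 0; fixed covers 0..top_boundary inclusive.
puts as_written.min
puts fixed.minmax.inspect
```

With the current formula, offset 0 (the head of the queue) is never selected.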
I have created a bin/worker.rb file at the root of my app, as mentioned in the documentation. But when I run my Rails server, it runs all the jobs and then hangs the server.
We've had our Gemfile pulling off master since 1.0 was broken (for setting QC_DATABASE_URL on Rails environment load).
We just started seeing this error:
QC::OkJson::Error (Hash key is not a string: :option_code):
Is there a stable version that we can set in our Gemfile that allows us to set QC_DATABASE_URL on Rails environment load AND supports symbols as hash keys? Or is there something we're missing?
2.0.0rc1 is trying to load lib/sql/ddl.sql
but it doesn't exist. The sql directory isn't there at all.
~/code/ruby/queue_classic: huntergillane $ ruby -r queue_classic -e "QC::Queries.load_functions"
I, [2012-03-18T13:26:58.811219 #31858] INFO -- : program=queue_classic log=true
/Users/huntergillane/.rvm/gems/ruby-1.9.3-p0@queue_classic/gems/queue_classic-2.0.0rc1/lib/queue_classic/queries.rb:40:in `initialize': No such file or directory - /Users/huntergillane/.rvm/gems/ruby-1.9.3-p0@queue_classic/gems/queue_classic-2.0.0rc1/lib/sql/ddl.sql (Errno::ENOENT)
from /Users/huntergillane/.rvm/gems/ruby-1.9.3-p0@queue_classic/gems/queue_classic-2.0.0rc1/lib/queue_classic/queries.rb:40:in `open'
from /Users/huntergillane/.rvm/gems/ruby-1.9.3-p0@queue_classic/gems/queue_classic-2.0.0rc1/lib/queue_classic/queries.rb:40:in `load_functions'
from -e:1:in `<main>'
Following the quick start directions.
queue_classic looks super promising. Looking forward to using it. Thanks!
db/migrations/add_queue_classic.rb
should be
db/migrate/add_queue_classic.rb
I was surprised when inserting into queue_classic_jobs didn't automatically issue a NOTIFY.
If I want to insert jobs into queue_classic_jobs without using the ruby gem, I'd need to run NOTIFY "default"
to get the worker notified by NOTIFY/LISTEN.
Should there be an insert trigger on queue_classic_jobs that automatically executes a NOTIFY?
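For illustration only, such a trigger might look like the following sketch (not part of QC; it assumes the 2.x schema where each row carries its queue name in a q_name column — adjust the channel expression if your schema differs):

```sql
-- Sketch: fire a NOTIFY on the queue's channel after every insert, so
-- non-Ruby producers wake up LISTENing workers automatically.
CREATE OR REPLACE FUNCTION queue_classic_notify() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify(NEW.q_name, '');
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER queue_classic_jobs_notify
AFTER INSERT ON queue_classic_jobs
FOR EACH ROW EXECUTE PROCEDURE queue_classic_notify();
```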
I have an instance where I cannot set the database_url env vars (TDDium insists on using sockets). Would it be feasible for QC to fall back to using ActiveRecord::Base.connection, if there is one, when the db_url won't resolve?
The only issue I can see at the moment is that the execute method will need changing to suit the different APIs.
Over the years, I have seen several feature requests submitted to queue_classic. Most of the time I have rejected these requests. My motivation for the rejections has always been based on the fact that I want to keep queue_classic as simple as possible. I want queue_classic to do one thing and do it well. queue_classic currently has ~500 LOC; this is a feature!
Now, I will not deny that the feature requests that were submitted provide real value. I am sure that the features requests would help solve real problems. To that end, I suggest we use a new project to keep these features. Sinatra has done this with good success.
potential qc-contrib features:
@lmarburger, @Bobbyw, @ClemensGruber, @apeckham, @joevandyk, what do you guys think?
This always raises a warning and is usually fine. However, it can create problems if a failure occurs later in the migration chain, as happened to me today.
I'm not entirely sure of the trigger; I'll try to work out the minimal steps to reproduce it. But I think what basically happened is that the COMMIT
within QC::Setup.create
has committed the creation of the queue_classic_jobs
table, but a later failure has caused active record to rollback and hence not marked the QC migration as complete. Subsequently, Rails will try and run the QC::Setup.create
migration which will fail because the queue_classic_jobs
table already exists.
My initial thoughts on a solution are to check whether a transaction is currently in progress within QC::Conn.transaction
and only transaction wrap if one is not in progress. Need to read up on the return statuses from http://deveiate.org/code/pg/PG/Connection.html#method-i-transaction_status and see if they can deliver the goods.
The current QC Rake tasks include ':environment' which is Rails specific.
We are using Rails 3.0.7 and rake 0.9.x (we have since switched back to 0.8.7 as it is more compatible with Rails and other gems).
After adding Queue Classic 0.3.1 to our Gemfile and running bundle install
, in order to get rake to recognize the Queue Classic tasks, I had to create lib/tasks/queue_classic.rb with this:
# load Queue Classic rake tasks
$VERBOSE = nil
load "#{Gem.searcher.find('queue_classic').full_gem_path}/lib/queue_classic/tasks.rb"
and load it in my Rakefile like this:
# load Queue Classic rake tasks
require File.expand_path('../lib/tasks/queue_classic.rb', __FILE__)
The Queue Classic quick start instructions do not mention this nor do I know of other gems that have had these issues (but that doesn't mean that they don't).
Did I do something wrong? Is it a rake 0.9.x issue?
I've been trying to pass long HTML strings through QC, and it's been hanging. I investigated a bit and it looks like OkJson is the problem. When passing in a hash containing the entire HTML of a Wikipedia page, OkJson takes about 10 minutes to encode, whereas Ruby's to_json takes 500 ms. OkJson is pretty awesome because of its portability, but it has made it infeasible to pass long strings as arguments to QC. That said, there may also be a better way to pass long strings to QC that I don't know about.
Thanks!
Can we apply inversion of control here, e.g. can I specify a class to handle logging instead of Scrolls?
I could try to hook Scrolls instead, but it has the same problem; since Scrolls is in my project only for QC, it seems like I should just intercept the data there.
FWIW, I'm already subclassing QC::Worker
and can override log
there, but QC::Worker
uses a direct call to QC.log_yield
, which I can't capture.
PostgreSQL has an option to connect using a unix socket. In that case we need to specify the path to the directory holding the unix socket files.
For example I have following config:
database.yml
development: &dev
adapter: postgresql
encoding: unicode
pool: 5
host: /home/mcduck/postgresql/9.1/sockets
port: 5434
database: duck_tales
queue_classic only offers DATABASE_URL for specifying the database connection, so I can't pass "/home/mcduck/postgresql/9.1/sockets" as the hostname, because it is not a valid hostname per the URI specification.
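One workaround sketch (not something QC supports out of the box): carry the socket directory in the URL's query string, where percent-encoding keeps URI parsing happy, and decode it back into libpq-style connection options before connecting. libpq's own URI syntax (PostgreSQL 9.2+) accepts the same shape, e.g. postgresql:///duck_tales?host=/home/mcduck/postgresql/9.1/sockets&port=5434.

```ruby
require "uri"

socket_dir = "/home/mcduck/postgresql/9.1/sockets"
# Empty authority, socket dir percent-encoded into the query string.
url = "postgres:///duck_tales?host=#{URI.encode_www_form_component(socket_dir)}&port=5434"

uri  = URI.parse(url)
opts = Hash[URI.decode_www_form(uri.query)]
opts["dbname"] = uri.path.sub("/", "")

# opts could then be handed to PG::Connection.new as a params hash.
puts opts.inspect
```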
If after a while, the user decides to remove QC from the app, the migrations become very complicated as QC will not be available in the namespace.
Please stick to standard migration methods.
In order to get our specs working once we switched from DelayedJob to QC, I created a helper module:
module QueueClassicSpecHelper
  QUEUES = [QC, $nightly_jobs]

  def work_all_background_jobs
    QUEUES.each do |queue|
      work_queue queue
    end
  end

  def clear_all_background_jobs
    QUEUES.each do |queue|
      clear_queue queue
    end
  end

  private

  def work_queue(queue)
    while queue.length > 0
      job = queue.dequeue
      begin
        job.work
      rescue Object => e
        raise e
      ensure
        queue.delete(job)
      end
    end
  end

  def clear_queue(queue)
    queue.delete_all
  end
end
where work_queue
was mostly a copy from QC itself. This will probably need to be updated once we upgrade, which introduces a somewhat annoying dependency.
Is there a better approach? Could something be added into QC so that it's easy to use with specs that need to run or delete jobs?
When I run rake qc:load_functions
I get this error:
"""
ERROR: type "queue_classic_jobs" does not exist
/Users/matt/.rvm/gems/ruby-1.9.2-p290-patch_require@karma/gems/queue_classic-1.0.0/lib/queue_classic/database.rb:56:in `exec'
/Users/matt/.rvm/gems/ruby-1.9.2-p290-patch_require@karma/gems/queue_classic-1.0.0/lib/queue_classic/database.rb:56:in `execute'
/Users/matt/.rvm/gems/ruby-1.9.2-p290-patch_require@karma/gems/queue_classic-1.0.0/lib/queue_classic/database.rb:91:in `load_functions'
"""
I see the comments about using that as a special return value, but I am not sure how to get Postgres to accept it.
postgresql 9.0.4
ruby 1.9.2
We are seeing behavior in some of our apps with Queue Classic (0.3.1) workers that makes it seem like queues aren't always being worked in a timely fashion.
In fact, it looks a lot like a bunch of older jobs get done right when we restart the workers (based on receiving a flood of emails that we send with background jobs).
Has anyone else seen this? How often do you restart your workers?
Google implies that this comes from Rails. In my non-Rails environment I get this:
rake qc:create
rake aborted!
Don't know how to build task 'environment'
I'll use the -e quickstart stuff, but perhaps something akin to that could replace the rake tasks? Not sure how to best support both Rails and not Rails, but I figured I would point this out in case it wasn't a known issue. Thanks!
As soon as I run bundle exec rake qc:work
the CPU on my Ubuntu EC2 machine goes to 100%, even with no tasks in the queue. Anyone else seeing such an issue?
Queue Classic should be entirely a postgres extension.
9.2 supports the json data type; it should be used for the args
column.
I got the error below when running the rake task.
Any ideas on how to debug this?
Sass is in the process of being separated from Haml,
and will no longer be bundled at all in Haml 3.2.0.
Please install the 'sass' gem if you want to use Sass.
** Invoke jobs:work (first_time)
** Invoke qc:work (first_time)
** Invoke environment (first_time)
** Invoke disable_rails_admin_initializer (first_time)
** Execute disable_rails_admin_initializer
** Execute environment
** Execute qc:work
lib=queue_classic error="#<PG::Error: ERROR: function lock_head(unknown, unknown) does not exist
LINE 1: SELECT * FROM lock_head($1, $2)
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
"
'
rake aborted!
ERROR: function lock_head(unknown, unknown) does not exist
LINE 1: SELECT * FROM lock_head($1, $2)
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/queue_classic-2.0.0/lib/queue_classic/conn.rb:9:in `exec'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/queue_classic-2.0.0/lib/queue_classic/conn.rb:9:in `execute'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/queue_classic-2.0.0/lib/queue_classic/queries.rb:15:in `lock_head'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/queue_classic-2.0.0/lib/queue_classic/queue.rb:16:in `lock'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/queue_classic-2.0.0/lib/queue_classic/worker.rb:107:in `lock_job'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/queue_classic-2.0.0/lib/queue_classic/worker.rb:87:in `work'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/queue_classic-2.0.0/lib/queue_classic/worker.rb:75:in `start'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/queue_classic-2.0.0/lib/queue_classic/tasks.rb:11:in `block (2 levels) in <top (required)>'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `call'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `block in execute'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `each'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `execute'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:158:in `block in invoke_with_call_chain'
/Users/user/.rvm/rubies/ruby-1.9.3-p0/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:151:in `invoke_with_call_chain'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:176:in `block in invoke_prerequisites'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:174:in `each'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:174:in `invoke_prerequisites'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:157:in `block in invoke_with_call_chain'
/Users/user/.rvm/rubies/ruby-1.9.3-p0/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:151:in `invoke_with_call_chain'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/task.rb:144:in `invoke'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/application.rb:116:in `invoke_task'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block (2 levels) in top_level'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `each'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block in top_level'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/application.rb:88:in `top_level'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/application.rb:66:in `block in run'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/lib/rake/application.rb:63:in `run'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/gems/rake-0.9.2.2/bin/rake:33:in `<top (required)>'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/bin/rake:23:in `load'
/Users/user/.rvm/gems/ruby-1.9.3-p0@blog/bin/rake:23:in
Tasks: TOP => jobs:work => qc:work
This might be something for the contrib project, but it would be nice to be able to spawn threads for each job.
The code in question: https://github.com/ryandotsmith/queue_classic/blob/master/lib/queue_classic/job.rb#L33-39
I know this may not seem like an issue, but consider the following. I have a method that takes an array, so the method takes one parameter. QC checks whether the param is an Array and then splats it, so each array element becomes an individual parameter, and I get an error: expected 1 param, got 9.
My humble opinion is that the params passed through QC should be the params you get back, although I can understand why people may like splatting arrays for their params.
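The complaint in plain Ruby: the worker effectively calls receiver.send(method, *params), so a single array argument is splatted into many positional arguments.

```ruby
# A job method that takes exactly one parameter (an array).
def update_user(ids)
  ids.size
end

params = [11, 22, 33]

update_user(params)     # one argument: fine
splat_failed = false
begin
  update_user(*params)  # splatted: three arguments, raises ArgumentError
rescue ArgumentError
  splat_failed = true
end

# Workaround under the current behavior: wrap the array once more so the
# splat unwraps back to a single argument.
update_user(*[params])

puts splat_failed
```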
QC uses methods like PGConn.connect
and PG::Connection#exec
which are synchronous and implemented in such a way that they ignore timeouts being hit when the higher-level QC operations are wrapped in timeout
blocks.
It might be worthwhile to consider using PG::Connection.connect_start
and PG::Connection#async_exec
with polling so such timeouts are respected. Current behavior could be maintained by polling until an explicit error is returned by way of status changes.
Are there any plans to add the ability to enqueue a job that will be run by every worker on a queue?
We have some architecture changes coming up that may require this and I wanted to see if it was planned or possible.
Running heroku run rake db:migrate gives the following error:
== AddQueueClassic: migrating ================================================
lib=queue_classic error="#<PG::Error: ERROR: language ...
HINT: Use CREATE LANGUAGE to load the language into the database.
"
rake aborted!
An error has occurred, this and all later migrations canceled:
ERROR: language "plpgsql" does not exist
HINT: Use CREATE LANGUAGE to load the language into the database.
Tasks: TOP => db:migrate
(See full trace by running task with --trace)
Am I the only one who gets lost in the readme?
Perhaps we should build a simple web site that organizes the contents of the readme.
Or instead of using markdown, we could use RST and take advantage of linking divs...
mkdir bin
emacs bin/worker  # paste the code from the README, changing 'your_app' to my 'app_name'
chmod +x bin/worker
bin/worker
bin/worker
/home/guy/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- app_name (LoadError)
from /home/deployer/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from bin/worker:7:in `<main>`
I should literally just need to change 'your_app'
to 'app_name',
right? Do I have to include the bin directory in application.rb, like you usually do for lib?
I'm sure I'm the only guy with this issue, but what can I paste to figure out what is going on? How do I debug? Rails.application.class.parent_name
verifies that I am using the right name too. I'm using RVM, but would that mess with the LOAD_PATH for the require?
Thoughts?
Hi,
I just finished setting up QC in Heroku to support Carrierwave workers with carrierwave_backgrounder.
The app seems to work fine in dev and production (Heroku). However, I've got an Errbit setup to monitor for uncaught exceptions on Heroku, and it seems that the first time I tried to use the worker, it threw an exception. Subsequent calls seem to have worked just fine (no further exceptions raised). It might be that this is something that happens only on the first try.
The backtrace looks like this:
PG::Error: ERROR: function lock_head(unknown, unknown) does not exist LINE 1: SELECT * FROM lock_head($1, $2) ^ HINT: No function matches the given name and argument types. You might need to add explicit type casts.
queue_classic-2.1.1/lib/queue_classic/conn.rb:13→ exec
queue_classic-2.1.1/lib/queue_classic/conn.rb:13→ block in execute
/[unknown source]→
queue_classic-2.1.1/lib/queue_classic/conn.rb:9→ execute
queue_classic-2.1.1/lib/queue_classic/queries.rb:15→ lock_head
queue_classic-2.1.1/lib/queue_classic/queue.rb:15→ lock
queue_classic-2.1.1/lib/queue_classic/worker.rb:76→ lock_job
queue_classic-2.1.1/lib/queue_classic/worker.rb:48→ work
queue_classic-2.1.1/lib/queue_classic/worker.rb:24→ start
queue_classic-2.1.1/lib/queue_classic/tasks.rb:14→ block (2 levels) in <top (required)>
rake-10.0.3/lib/rake/task.rb:228→ call
...
Any ideas?
The version of queue_classic on RubyGems is a little dated, any chance of getting it updated? Two issues in that version that have bit me are SQL escape problems and calling "SET application_name" by default. Obviously, I can use the Github version, but I prefer the stability of an official release when possible.
Thanks!
We have a background job which pretty much takes user-entered text as an argument. QC currently fails to insert arguments that contain single quotes. I get the following exception:
ActiveRecord::StatementInvalid: PG::Error: ERROR: syntax error at or near "avance"
This points exactly to the location of the '
. I did some research and noticed that OkJson
seems to work:
1.9.3p194 :012 > json = QC::OkJson.encode({'key' => 'val', 'other_key' => "other'val"})
=> "{\"key\":\"val\",\"other_key\":\"other'val\"}"
1.9.3p194 :014 > QC::OkJson.decode json
=> {"key"=>"val", "other_key"=>"other'val"}
I think we need to escape the single quote since it has a special meaning in SQL. I don't think that's an application-specific problem, and we should fix it in the library.
This is an urgent problem for us. I am willing to write up a PR but wanted to check with you first before we do the work.
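For reference, the SQL-literal rule at issue: a single quote inside a string literal is escaped by doubling it. The more robust fix is to bind parameters (PG::Connection#exec_params with $1 placeholders), so no escaping is needed at all; quote_sql_literal below is just a minimal sketch of the doubling rule, not QC code.

```ruby
# Sketch: escape a Ruby string for use as a SQL string literal by doubling
# embedded single quotes. Parameter binding is preferable in real code.
def quote_sql_literal(s)
  "'" + s.gsub("'", "''") + "'"
end

payload = %({"job":"Notifier.deliver","params":["l'avance"]})
puts quote_sql_literal(payload)
```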
LIMIT/OFFSET still has to scan all the tuples being skipped by OFFSET. That means read-only locks are taken (but those can be blocked by a concurrent UPDATE that touches the index) and additional tuples are scanned. This still is helpful because there's a good chance a worker will perform UPDATE on a tuple that is fairly "deep", but we could be much better.
My suggestion is to try using modulus and qualifying against the index. Unlike LIMIT/OFFSET, swathes of tuples can be skipped as those index pages will never be accessed. It has better concurrency characteristics and fetches fewer tuples, too. This requires each worker have a 'clock' value that increments, and ideally those clock numbers are well-distributed. As a degenerate case, consider the modulus of 1, which will always result in all workers always trying to lock the first element.
I suggest instead of using centralized coordination of the workers to parcel out their seed clock numbers to instead choose a random number in a large space (even, say a full uint32 range) and then do a random restart every few jobs to prevent systemic issues. The modulus can be tuned the same way the limit/offset can be, trading ordering for better concurrency.
It's hard for me to foresee a metric (besides some implementation complexity) by which this is not superior to random LIMIT/OFFSET in every way. (That doesn't mean such a trade-off does not exist; I just cannot predict it.)
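A toy model of the proposal (illustration only, not QC code): each worker holds a random clock value and only considers job ids congruent to it modulo a tunable divisor, so workers mostly probe disjoint slices of the queue.

```ruby
job_ids = (1..100).to_a
divisor = 4

# Each worker picks a random seed clock in a large space; in the real
# scheme these would be periodically re-randomized ("random restarts").
clocks  = Array.new(4) { rand(2**32) }

claimed = clocks.map do |clock|
  job_ids.select { |id| id % divisor == clock % divisor }
end

puts claimed.map(&:size).inspect
```

Unlike LIMIT/OFFSET, an index-qualified predicate like this lets whole swathes of tuples be skipped without ever reading them.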
We are managing the start, stop, restart, etc of our workers ourselves. When we do kill on the worker pid (send it a SIGTERM), it often ignores this. What is the appropriate way to stop the workers?
I have a job that sends emails via SMTP. When the SMTP server is unreachable (perhaps for maintenance), the job raises an exception. I know I can report the exception to Airbrake, but the email is lost. I'd like to retry a few hours later when the server is available.
How can I do this with QC? Is this the right tool for the task? I'd like it to be, because I like not having to run a separate service for the queue.
Thanks for queue_classic!
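One shape this could take, sketched with hypothetical helpers (QC has no built-in retry counter or delayed enqueue; the delay would need external scheduling, e.g. a not_before column the worker checks): count attempts in the job payload and re-enqueue with exponential backoff, giving up after a few tries.

```ruby
MAX_RETRIES = 5

# Exponential backoff in seconds: 60, 120, 240, ...
def retry_delay(attempt)
  (2**attempt) * 60
end

# Hypothetical failure handler: bump the attempt counter and hand the job
# back to some enqueue-with-delay callable, or give up past the limit.
def handle_failure(job, enqueue)
  attempt = job.fetch(:attempt, 0)
  if attempt < MAX_RETRIES
    enqueue.call(job.merge(attempt: attempt + 1), retry_delay(attempt))
  else
    :give_up
  end
end

# Toy enqueue that just records what it was asked to do.
log = []
enqueue = ->(job, delay) { log << [job[:attempt], delay] }
handle_failure({ method: "Mailer.send", attempt: 0 }, enqueue)
puts log.inspect
```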