beanstalkd / beaneater
The best way to interact with beanstalkd from within Ruby
Home Page: http://beanstalkd.github.com/beaneater
License: MIT License
I have a simple Rack application that does this:
require 'beaneater'

@beanstalk = Beaneater::Pool.new(['localhost:11300'])

if @beanstalk
  begin
    tube = @beanstalk.tubes["process_file"]
    work_file = original_path
    tube.put work_file, :ttr => 60 * 10
  rescue Beaneater::NotConnected => e
    # If errors are met when dealing with beanstalk, we disable the connection.
    @beanstalk = false
    logger.warn "Disabling the beanstalk connection"
  end
end
If I kill the beanstalk-server after my app has connected to it, I get this error:
NoMethodError: undefined method `chomp' for nil:NilClass
/Users/kasper/.rbenv/versions/1.9.3-p374/lib/ruby/gems/1.9.1/gems/beaneater-0.3.0/lib/beaneater/connection.rb:112:in `parse_response'
/Users/kasper/.rbenv/versions/1.9.3-p374/lib/ruby/gems/1.9.1/gems/beaneater-0.3.0/lib/beaneater/connection.rb:53:in `transmit'
/Users/kasper/.rbenv/versions/1.9.3-p374/lib/ruby/gems/1.9.1/gems/beaneater-0.3.0/lib/beaneater/pool.rb:111:in `block in transmit_to_rand'
/Users/kasper/.rbenv/versions/1.9.3-p374/lib/ruby/gems/1.9.1/gems/beaneater-0.3.0/lib/beaneater/pool.rb:141:in `safe_transmit'
/Users/kasper/.rbenv/versions/1.9.3-p374/lib/ruby/gems/1.9.1/gems/beaneater-0.3.0/lib/beaneater/pool.rb:109:in `transmit_to_rand'
/Users/kasper/.rbenv/versions/1.9.3-p374/lib/ruby/gems/1.9.1/gems/beaneater-0.3.0/lib/beaneater/pool_command.rb:40:in `method_missing'
/Users/kasper/.rbenv/versions/1.9.3-p374/lib/ruby/gems/1.9.1/gems/beaneater-0.3.0/lib/beaneater/tube/record.rb:39:in `block in put'
/Users/kasper/.rbenv/versions/1.9.3-p374/lib/ruby/gems/1.9.1/gems/beaneater-0.3.0/lib/beaneater/tube/record.rb:153:in `safe_use'
/Users/kasper/.rbenv/versions/1.9.3-p374/lib/ruby/gems/1.9.1/gems/beaneater-0.3.0/lib/beaneater/tube/record.rb:35:in `put'
/Users/kasper/projects/secret_project/lib/secret_project.rb:210:in `process_file'
/Users/kasper/projects/secret_project/lib/secret_project.rb:154:in `block in <class:SecretProject>'
It should raise the Beaneater::NotConnected exception, right?
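Until the client reliably raises Beaneater::NotConnected here, a stopgap is to rescue more broadly around the put. A minimal sketch (safe_enqueue is a hypothetical helper, not part of beaneater):

```ruby
# Hypothetical wrapper: treat any StandardError from the pool as a lost
# connection, since a dead server currently surfaces as NoMethodError.
def safe_enqueue(pool, tube_name, payload)
  pool.tubes[tube_name].put(payload, :ttr => 60 * 10)
  true
rescue StandardError => e
  warn "Disabling the beanstalk connection: #{e.class}"
  false
end
```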
We need good YARD docs for all methods in beaneater. Then we can generate YARD docs on rdoc.info and provide solid documentation for users. Example of good docs:
# Summary of what this does
#
# @!attribute [r] count
# @yield [a, b, c] Gives 3 random numbers to the block
# @param [type] Name description
# @option name [Types] option_key (default_value) description
# @return [type] description
# @raise [Types] description
# @example
# something.foo # => "test"
#
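For instance, a small helper documented to that standard might look like this (the method itself is hypothetical, shown only to illustrate the tag usage):

```ruby
# Builds the option hash passed to Tube#put.
#
# @param [Integer] ttr time-to-run in seconds before the job is released
# @param [Integer] pri job priority; lower values are reserved first
# @return [Hash] options in the shape Tube#put expects
# @example
#   put_options(600) # => {:ttr=>600, :pri=>65536}
def put_options(ttr, pri = 65_536)
  { :ttr => ttr, :pri => pri }
end
```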
Hash#slice was added in Ruby 2.5. On Ruby 2.3 it causes this error:
NoMethodError: undefined method `slice' for {}:Hash
/var/lib/gems/2.3.0/gems/beaneater-1.1.1/lib/beaneater/connection.rb:73:in `transmit'
/var/lib/gems/2.3.0/gems/beaneater-1.1.1/lib/beaneater/tube/collection.rb:33:in `transmit'
/var/lib/gems/2.3.0/gems/beaneater-1.1.1/lib/beaneater/tube/collection.rb:79:in `all'
If that's deliberate, OK, but I would expect an explicit mention in the README. If it's not (and it seems slice is the only such call in the whole codebase), you may want to re-think 13e9791
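If 2.3 support is meant to be kept, the call is easy to backport; a sketch of a minimal shim:

```ruby
# Emulate Hash#slice (added in Ruby 2.5) on older rubies.
unless Hash.method_defined?(:slice)
  class Hash
    def slice(*keys)
      keys.each_with_object({}) { |k, h| h[k] = self[k] if key?(k) }
    end
  end
end

{ :a => 1, :b => 2, :c => 3 }.slice(:a, :c) # => {:a=>1, :c=>3}
```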
I have a simple worker like this:
require 'subexec'
require 'beaneater'
@beanstalk = Beaneater::Pool.new(['localhost:11300'])
@beanstalk.jobs.register('do.job') do |job|
  puts "Error out!"
  STDOUT.flush
  sleep 3
  raise "errors?"
end
@beanstalk.jobs.process!
And this Procfile:
beanstalkd: beanstalkd -p 11300
worker_1: bundle exec ruby lib/project/beanstalk_worker.rb
worker_2: bundle exec ruby lib/project/beanstalk_worker.rb
[project (features/beanstalk_queue)]=> foreman start
15:08:31 beanstalkd.1 | started with pid 15270
15:08:31 worker_1.1 | started with pid 15271
15:08:31 worker_2.1 | started with pid 15272
15:08:35 worker_1.1 | Error out!
15:08:36 worker_2.1 | Error out!
Is it intentional that the workers don't raise the exceptions, or at least report them?
Check out retry_wrap, call_wrap in beanstalk connection.rb
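One way to make failures visible without changing the gem is to wrap the handler yourself; a hedged sketch (with_error_logging is a hypothetical helper):

```ruby
# Returns a handler that logs any exception before re-raising, so
# beaneater's own retry/bury logic still runs afterwards.
def with_error_logging(logger = $stderr)
  lambda do |job|
    begin
      yield job
    rescue StandardError => e
      logger.puts "job failed: #{e.class}: #{e.message}"
      raise
    end
  end
end

# @beanstalk.jobs.register('do.job', &with_error_logging { |job| work(job) })
```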
I have a bunch of tasks that belong to a user and I'd like to notify him via email when they are all done. My idea is to create a tube, watch it until it's empty, then delete it and send the email.
Is there a way to hook up with a "queue x is empty" event?
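The protocol has no server-side "tube is empty" event, so polling tube stats is the usual workaround. A sketch, with the stats shown as a plain hash for illustration:

```ruby
# True once a tube has fully drained: nothing ready, reserved, or delayed.
def tube_drained?(stats)
  [:current_jobs_ready, :current_jobs_reserved, :current_jobs_delayed].all? do |key|
    stats[key].to_i.zero?
  end
end

# With beaneater this might be polled like so (requires a live beanstalkd;
# tube name and send_completion_email are hypothetical):
#   tube = Beaneater.new('localhost:11300').tubes['user-42-tasks']
#   sleep 5 until tube_drained?(tube.stats)
#   send_completion_email
```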
Hi guys
I wanted to start using beaneater to interact with beanstalkd, but I was wondering if you provide methods for testing a worker.
While there is plenty of information about how to interact with beanstalkd, I could not find details on how to implement a production-grade worker with beaneater.
Mainly I'm concerned with how to stop a worker gracefully when, for example, it is restarted/stopped via systemd. As I understand it, I want to let the worker finish its current job before stopping it. How can I accomplish this?
My current approach looks like this. If the worker is currently doing work, it stops after the job. But if the worker is currently idle, it stops only after the next job is processed. What am I doing wrong?
Thanks for your help.
#!/usr/bin/env ruby
require 'rubygems'
require 'bundler/setup'
require 'beaneater'
beanstalk = Beaneater.new '127.0.0.1:11300'
tube_name = "app.default"
beanstalk.jobs.register(tube_name) do |job|
  # Do the actual work.
end

trap 'SIGTERM' do
  beanstalk.jobs.stop!
end

trap 'SIGINT' do
  beanstalk.jobs.stop!
end
beanstalk.jobs.process!
beanstalk.close
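The hang comes from process! blocking in reserve with no timeout, so stop! is only noticed after the next job arrives. Giving process! a reserve timeout lets the loop wake up and observe the flag. The mechanism in miniature (a simplified stand-in, not beaneater's actual code):

```ruby
# Worker loop that re-checks a stop flag between short blocking reserves.
# The block stands in for tube.reserve(timeout), returning nil on timeout.
def run_until_stopped(state)
  processed = 0
  until state[:stop]
    job = yield
    processed += 1 if job
  end
  processed
end

# With beaneater, the same idea is roughly:
#   trap('TERM') { beanstalk.jobs.stop! }
#   beanstalk.jobs.process!(:reserve_timeout => 5)
```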
Is anyone here already working on the ActiveJob adapter for Rails 4.2?
It doesn't look that hard to implement after looking at one of the other implementations.
Beaneater doesn't deal with newlines very well:
beanstalk = Beaneater::Pool.new(['localhost:11300'])
tube = beanstalk.tubes['my-tube']
payload = "foo\nbar"
tube.put payload
/usr/local/share/gems/gems/beaneater-0.2.2/lib/beaneater/connection.rb:116:in `parse_response': Response failed with: EXPECTED_CRLF (Beaneater::ExpectedCrlfError)
from /usr/local/share/gems/gems/beaneater-0.2.2/lib/beaneater/connection.rb:55:in `transmit'
from /usr/local/share/gems/gems/beaneater-0.2.2/lib/beaneater/pool.rb:111:in `block in transmit_to_rand'
from /usr/local/share/gems/gems/beaneater-0.2.2/lib/beaneater/pool.rb:141:in `safe_transmit'
from /usr/local/share/gems/gems/beaneater-0.2.2/lib/beaneater/pool.rb:109:in `transmit_to_rand'
from /usr/local/share/gems/gems/beaneater-0.2.2/lib/beaneater/pool_command.rb:40:in `method_missing'
from /usr/local/share/gems/gems/beaneater-0.2.2/lib/beaneater/tube/record.rb:39:in `block in put'
from /usr/local/share/gems/gems/beaneater-0.2.2/lib/beaneater/tube/record.rb:153:in `safe_use'
from /usr/local/share/gems/gems/beaneater-0.2.2/lib/beaneater/tube/record.rb:35:in `put'
from ./producer.rb:27:in `<main>'
(By the way, that's after patching errors.rb: http://fpaste.org/KKAV/)
I looked at what actually went over the wire: \n got converted into \r\n, but the job size was still payload.length.
Using "foo\r\nbar" doesn't work either; that gets converted to \r\r\n.
In collection.rb (https://github.com/beanstalkd/beaneater/blob/master/lib/beaneater/job/collection.rb#L98), when a StandardError is caught, the job needs to be checked before job.bury is called, just as in the ensure clause; otherwise we may try to bury a job that has already been deleted.
This may occur when
@beanstalk.jobs.register('whatever', :retry_on => [Timeout::Error]) do |job|
  process(job)
  screw_up_here_and_raise_some_exception()
end
in which case a weird Beaneater::NotFoundError: Response failed with: NOT_FOUND will percolate up from parse_response(cmd, res) when it tries to send the bury command (which is very confusing).
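The guard suggested above might look like this (safe_bury is a hypothetical helper; Job#exists? is the same check the ensure clause uses):

```ruby
# Only bury when the job still exists; a deleted job would otherwise turn
# into a confusing NOT_FOUND response from the bury command.
def safe_bury(job)
  job.bury if job && job.exists?
rescue Beaneater::NotFoundError
  nil # deleted out from under us between the check and the bury
end
```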
I'm looking for a shortcut to transmit_to_all: essentially a way to broadcast messages to anyone watching. My use case is transmitting progress for jobs, but I'll have several watchers on it.
Technically I'm getting stale watchers and the jobs are going nowhere. I'm trying to work that out (possibly an issue with ActionController::Live never disconnecting on a refresh).
I am not sure if this is related to #28, but we see something like the following in our logs:
17:59:12 jobs.1 | /Users/amos/.gem/ruby/2.1.3/gems/beaneater-0.3.3/lib/beaneater/job/collection.rb:107:in `rescue in block in process!': undefined method `bury' for nil:NilClass (NoMethodError)
17:59:12 jobs.1 | from /Users/amos/.gem/ruby/2.1.3/gems/beaneater-0.3.3/lib/beaneater/job/collection.rb:110:in `block in process!'
17:59:12 jobs.1 | from /Users/amos/.gem/ruby/2.1.3/gems/beaneater-0.3.3/lib/beaneater/job/collection.rb:92:in `loop'
17:59:12 jobs.1 | from /Users/amos/.gem/ruby/2.1.3/gems/beaneater-0.3.3/lib/beaneater/job/collection.rb:92:in `process!'
17:59:12 jobs.1 | from /Users/amos/Dev/memoways/kura/jobs/init.rb:100:in `work'
17:59:12 jobs.1 | from jobs/worker.rb:5:in `<main>'
17:59:12 jobs.1 | exited with code 1
I am not sure why reserve returns nil:
https://github.com/beanstalkd/beaneater/blob/master/lib/beaneater/job/collection.rb#L94
It seems commit 109b795 moved the reserve after the begin statement, which makes exceptions raised in reserve get caught by the rescue.
It says
puts "job value is #{job.body["key"]}!"
but job.body is a string. The example should be:
puts "job value is #{JSON.parse(job.body)["key"]}!"
Here's an example:
Beaneater::Tube.configure do |c|
  c.default_delay = 0
  c.default_priority = 65536
  c.default_ttr = 120
end
There is an issue with the pool concept, specifically with put and reserve. We should update beaneater_test (https://github.com/beanstalkd/beaneater/blob/master/test/beaneater_test.rb#L6) to test with multiple connections.
The issue is around how reserving works in this case:
@beanstalk = Beaneater::Pool.new(['127.0.0.1:11300', '127.0.0.1:11301'])
@tube = @beanstalk.tubes["foo"]
@tube.put("bar") # <-- put onto connection one
@tube.put("baz") # <-- put onto connection one
So now we have two jobs on connection one. If we now call
@tube.reserve
Reserve could randomly pick connection two, in which case there are no jobs and it would hang forever. There are other variations of this problem: for example, when two jobs fall onto one connection and one onto another, calling reserve will eventually start to hang even though the other connection still has jobs.
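As a stopgap until the pool routes reserves more intelligently, reserving with a short timeout and retrying avoids the indefinite hang (reserve_with_retry is a hypothetical helper):

```ruby
# Retry short timed reserves so a reserve routed to an idle connection
# returns with TIMED_OUT instead of blocking forever.
def reserve_with_retry(tube, attempts = 5, timeout = 1)
  attempts.times do
    begin
      return tube.reserve(timeout)
    rescue Beaneater::TimedOutError
      next # likely hit the connection without jobs; try another round
    end
  end
  nil
end
```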
RubyGems.org doesn't report a license for your gem. This is because it is not specified in the gemspec of your last release.
via e.g.
spec.license = 'MIT'
# or
spec.licenses = ['MIT', 'GPL-2']
Including a license in your gemspec is an easy way for rubygems.org and other tools to check how your gem is licensed. As you can imagine, scanning your repository for a LICENSE file or parsing the README, and then attempting to identify the license or licenses is much more difficult and more error prone. So, even for projects that already specify a license, including a license in your gemspec is a good practice. See, for example, how rubygems.org uses the gemspec to display the rails gem license.
There is even a License Finder gem to help companies/individuals ensure all gems they use meet their licensing needs. This tool depends on license information being available in the gemspec. This is an important enough issue that even Bundler now generates gems with a default 'MIT' license.
I hope you'll consider specifying a license in your gemspec. If not, please just close the issue with a nice message. In either case, I'll follow up. Thanks for your time!
Appendix:
If you need help choosing a license (sorry, I haven't checked your readme or looked for a license file), GitHub has created a license picker tool. Code without a license specified defaults to 'All rights reserved', denying others all rights to use of the code.
Here's a list of the license names I've found and their frequencies
p.s. In case you're wondering how I found you and why I made this issue, it's because I'm collecting stats on gems (I was originally looking for download data) and decided to collect license metadata, too, and make issues for gemspecs not specifying a license as a public service :). See the previous link or my blog post about this project for more information.
I'm using @beanstalk.jobs.process! to automatically process jobs. But I find that, after some time, my scripts stop getting any jobs.
Issuing the stats-tube command shows that there are x clients watching the tube, but there are 0 clients in the waiting state. The number of jobs keeps increasing, but none of the clients receives any job.
Sample output of stats-tube below:
stats-tube email.send.v1
OK 263
---
name: email.send.v1
current-jobs-urgent: 0
current-jobs-ready: 10
current-jobs-reserved: 0
current-jobs-delayed: 0
current-jobs-buried: 0
total-jobs: 196704
current-using: 0
current-watching: 1
current-waiting: 0
cmd-pause-tube: 0
pause: 0
pause-time-left: 0
If I stop and start my scripts, jobs get processed. Any input on this would be helpful.
As mentioned in the title, the process! method does not propagate the error or break from the while loop when there is a Beaneater::NotConnected exception, i.e. when beanstalkd goes down.
Steps to reproduce:
beanstalkd -l 192.168.50.21 -p 11300 &
@beanstalk = Beaneater.new('192.168.50.21:11300')
begin
  @beanstalk.jobs.register('my-tube') do |job|
    puts "Job: #{job.inspect}"
  end
  @beanstalk.jobs.process!({:reserve_timeout => 10})
rescue Exception => e
  puts "Exception: #{e.inspect}"
  @beanstalk.close
end
(see NotConnected, BadFormat, etc)
See errors.rb for a detailed rundown of errors, selected based on the message returned by the beanstalk queue.
Check:
I had to poke around in the source code to find out how to exit the process! loop cleanly. I'm not sure how much detail needs to be in the README, but it seems like at least a mention of AbortProcessingError is called for.
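For reference, raising Beaneater::AbortProcessingError from inside a handler is what breaks the process! loop. The mechanism in miniature (a simplified re-implementation for illustration, not beaneater's code):

```ruby
class AbortProcessingError < RuntimeError; end

# process! keeps handling jobs until a handler raises AbortProcessingError.
def process_loop(jobs)
  handled = []
  jobs.each do |job|
    begin
      raise AbortProcessingError if job == :stop
      handled << job
    rescue AbortProcessingError
      break
    end
  end
  handled
end

# In a real worker the handler would do something like (done? hypothetical):
#   beanstalk.jobs.register('my-tube') do |job|
#     handle(job)
#     raise Beaneater::AbortProcessingError if done?
#   end
```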
Automatically try to reconnect on Beaneater::NotConnected instead of throwing an error.
So I run a beanstalkd instance with:
beanstalkd -b ~/beanstore &
Then I have the following code:
require 'beaneater'

bean = Beaneater::Pool.new(['0.0.0.0:11300'])
tube = bean.tubes['msg-tube']
tube.put "5"
Which is pretty standard beaneater code, but the put method returns the following:
=> {:status=>"INSERTED", :body=>nil, :id=>"2", :connection=>#<Beaneater::Connection host="0.0.0.0" port=11300>}
I have a loop running in another process waiting on input, and it does receive a connection and tries to process the job, but since there is nothing in the body it fails.
Ideas
As per @kr's suggestion, if a connection in the pool has an error, it's probably best to close it, then attempt to reconnect periodically.
Also perhaps be able to add new connections to the pool as well?
I was trying to figure out why I got "lib/beaneater/job/record.rb:39:in `bury': undefined method `pri' for nil:NilClass (NoMethodError)" when using beaneater.
After some digging, it turns out my config.job_parser was faulty; the exception was caught by beaneater, which then tried to bury the job, and that fails like this because the job id is nil.
beaneater/lib/beaneater/job/collection.rb
Line 126 in c72df85
Maybe beaneater shouldn't rescue StandardError?
Hi there,
First, thanks for writing this excellent software - I've been using it for 10+ years and processed literally billions of jobs with it.
I recently started using weighted queues in another project which uses Sidekiq, and I really enjoy not having to worry about fine-tuning priorities and running multiple job processors to avoid queue starvation. It's nice to get a predictable amount of processing for every queue.
Is there any way to achieve something similar with beanstalkd? Essentially, selecting from job queues in a weighted random fashion?
I imagine I could rig up a system that uses weights to randomly select a queue, peek to see if there are jobs ready, and if so, reserve a job from that queue. Rough code below:
pipes = [{pipe: 'low_priority', weight: 1}, {pipe: 'medium_priority', weight: 2}, {pipe: 'high_priority', weight: 4}]
q_min = pipes.min_by { |q| q.dig(:weight) }.dig(:weight)
q_max = pipes.inject(0) { |r, q| r + q.dig(:weight) }
range = q_min..q_max

loop do
  ## Randomly select a queue based on weights and assign queue name to pipe_to_process
  q_rand = Random.rand(range)
  q_accumulate = 0
  pipe_to_process = pipes.find do |q|
    q_accumulate += q.dig(:weight)
    q_accumulate >= q_rand
  end
  pipe = pipe_to_process.dig(:pipe)

  unless beanstalk.tubes.find(pipe).peek(:ready).nil?
    puts "Getting job from #{pipe}"
    beanstalk.tubes.watch!(pipe)
    job = beanstalk.tubes.reserve(1)
    puts "Got job: #{job.id} : tube: #{pipe}"
    job.release delay: 5
  else
    puts "No jobs in #{pipe}"
  end
end
Not ideal to wait on a queue that might be empty (if running multiple processors). And when there are no jobs, it thrashes between all the queues. Any thoughts on better ways to get weighted queues?
When I try to call Beaneater.new with the env variable set I get this:
ArgumentError: wrong number of arguments (given 0, expected 1)
/path/vendor/bundle/ruby/2.5.0/gems/beaneater-1.0.0/lib/beaneater.rb:24:in `initialize'
I assume that's not intended. When I adjusted both of these:
/path/vendor/bundle/ruby/2.5.0/gems/beaneater-1.0.0/lib/beaneater/connection.rb: def initialize(address)
/path/vendor/bundle/ruby/2.5.0/gems/beaneater-1.0.0/lib/beaneater.rb: def initialize(address)
To be address = nil, it works. However, I don't know how we want to solve that problem. At the moment I'm solving it like this:
Beaneater.new(ENV['BEANSTALKD_URL'])
Which I doubt is the intended behavior. I'm assuming no one uses this ENV at this point as it seems to not work.
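One possible shape for the fix (a hypothetical patch sketch, not the shipped API): default the argument and fall back to the environment variable, then a local default.

```ruby
# Resolve the beanstalkd address the way the docs imply Beaneater.new should:
# explicit argument, then BEANSTALKD_URL, then a localhost default.
def resolve_address(address = nil)
  address || ENV['BEANSTALKD_URL'] || 'localhost:11300'
end

# Beaneater#initialize could then call resolve_address(address) internally.
```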
When you create a Beaneater::Pool with multiple connections, you sometimes see unexpected behavior on some boundary conditions.
I included some Ruby code below to demonstrate the situation.
The gist of the issue is Pool hides the fact that you are working with multiple connections. If you add data to a tube in one connection, calls such as Tube#stats might throw a NotFoundError when trying to retrieve stats on a tube that does not exist for the next connection. Or Jobs#find might return nil if checking on a connection that did not have a job of that id inserted.
require 'beaneater'
#beanstalk = Beaneater::Pool.new(['localhost:11300'])
beanstalk = Beaneater::Pool.new(['localhost:11300', 'localhost:11400'])
tube1 = beanstalk.tubes['test']
response = tube1.put '{"name":"A"}'
id1 = response[:id]
puts "Added Job #{id1}"
tube2 = beanstalk.tubes['test2']
response = tube2.put '{"name":"B"}'
id2 = response[:id]
puts "Added Job #{id2}"
puts beanstalk.tubes['test'].stats
puts beanstalk.tubes['test2'].stats
puts beanstalk.jobs.find(id1)
puts beanstalk.jobs.find(id2)
This was an issue with the original beanstalk-client gem as well.
I had a hideous monkey patch in beanstalkd_view to get around it: https://github.com/denniskuczynski/beanstalkd_view/blob/8e726f2a280538b5a8cc4be1f5428b3079b53e81/lib/beanstalkd_view/extensions/beanstalk-pool.rb
But perhaps with the new gem we can find a better solution.
Something along the lines of having Pool#safe_transmit always rescue from NotFoundErrors -- only throwing the exception if NotFound on all connections might work for most cases.
The new gem looks great by the way. I've already converted my beanstalkd_view gem to use it.
Let me know what you think,
Thanks,
Dennis
I'm using Ruby 2.6.6, and when I changed from beaneater 1.1.1 to 1.1.2, I got a JSON parse problem.
If I use a colon in one of the parameters, I get the error below (removing the colon succeeds):
[2022-10-14 15:46:09] #00 ERROR: Exception Backburner::Job::JobFormatInvalid -> Job body could not be parsed: #<JSON::ParserError: 859: unexpected token at '{"action_list":["reconfigure_journals"],"app_digest":"","brand":"toconline","destination_tube":"accounting-ops-reconfigure","entity_id":"265843","entity_schema":"pt999999990_c265843","entity_variable_ids":"","fiscal_year_name":"Exercício de 2022","id":"3376","module_mask":"511","notification_title":"Redefinir configuração: Exercício de 2022","product"_"toconline","role_mask"_"32772","sharded_schema"_"pt999999990_c265843","source_company_id"_63571,"source_prefix"_"y2022_12_","source_schema"_"pt221170391_97586","source_user_id"_null,"subentity_id"_"pt999999990_16_1#y2022_1_","subentity_prefix"_"y2022_1_","subentity_schema"_"pt999999990_16_1","ttr"_21600,"tube"_"job_controller","user_email"_"[email protected]","user_id"_"750063","validity"_21600,"x_brand"_"toconline","x_product"_"toconline"}
'>
/Users/joana/.rbenv/versions/2.6.6/lib/ruby/2.6.0/delegate.rb:85:in `call'
/Users/joana/.rbenv/versions/2.6.6/lib/ruby/2.6.0/delegate.rb:85:in `method_missing'
/Users/joana/.rbenv/versions/2.6.6/gemsets/toconline-rubies-jobs/gems/backburner-1.6.0/lib/backburner/job.rb:30:in `rescue in initialize'
/Users/joana/.rbenv/versions/2.6.6/gemsets/toconline-rubies-jobs/gems/backburner-1.6.0/lib/backburner/job.rb:22:in `initialize'
/Users/joana/.rbenv/versions/2.6.6/gemsets/toconline-rubies-jobs/gems/backburner-1.6.0/lib/backburner/worker.rb:178:in `new'
/Users/joana/.rbenv/versions/2.6.6/gemsets/toconline-rubies-jobs/gems/backburner-1.6.0/lib/backburner/worker.rb:178:in `reserve_job'
/Users/joana/work/sp-job/lib/sp/job/worker.rb:52:in `work_one_job'
/Users/joana/work/sp-job/lib/sp/job/worker.rb:30:in `block in start'
/Users/joana/work/sp-job/lib/sp/job/worker.rb:28:in `loop'
/Users/joana/work/sp-job/lib/sp/job/worker.rb:28:in `start'
/Users/joana/.rbenv/versions/2.6.6/gemsets/toconline-rubies-jobs/gems/backburner-1.6.0/lib/backburner/worker.rb:60:in `start'
/Users/joana/.rbenv/versions/2.6.6/gemsets/toconline-rubies-jobs/gems/backburner-1.6.0/lib/backburner.rb:40:in `work'
/Users/joana/work/toconline/jobs/toc-fast/toc-fast:47:in `<top (required)>'
With the following job submitted:
We should generate the 'site' based on the readme. Go to Admin, pick a theme and hit generate. Set that as the website for this gem.
Sometimes, it seems, when the workers are very busy we get segmentation faults.
/data/server/vendor/bundle/ruby/2.4.0/gems/beaneater-1.0.0/lib/beaneater/tube/collection.rb:106:in `watched': undefined method `map' for #<String:0x00000000074411f8> (NoMethodError)
Is the following returning a string instead of an array?
last_watched = transmit('list-tubes-watched')[:body]
Hi @nesquena, I wrote some code that collects a tube's jobs into an array.
This is my code that inserts jobs into an array:
beanstalk = Beaneater.new(Beaneater.configuration.beanstalkd_url)
tube = beanstalk.tubes["my-tube"]

@array = []
loop do
  @array << tube.reserve
  break if @array.size == 100
end
Is there a way to get the number of ready jobs in a tube?
I was thinking of implementing it like this:
loop do
  @array << tube.reserve
  break if @array.size == tube.size
end
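The ready count is available from tube stats, which would let the loop drain exactly the jobs present. A sketch, with the stats shown as a plain hash for illustration:

```ruby
# Number of jobs currently ready in a tube, read from its stats.
def ready_count(stats)
  stats[:current_jobs_ready].to_i
end

# With beaneater (live server needed), roughly:
#   count = beanstalk.tubes['my-tube'].stats.current_jobs_ready
#   count.times { @array << tube.reserve }
```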
Thanks,
Michael
I wrote a small script to multiplex a tube into multiple other tubes: https://github.com/martint17r/beanstalk-multiplex
I would like to harden it against data loss: when the script gets interrupted, it should still process the current job, and only after finishing it exit the processing loop and close the connection to beanstalkd.
If there is no job being processed, it would be best to close the connection and exit immediately.
I tried using trap(...) but it requires jumping through burning hoops to bring the signal into the processing loop.
What is the best way to achieve proper signal handling?
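A common pattern is to let the trap handler only flip a flag, and have the loop check it between short timed reserves. A runnable sketch of the flag mechanics, with the reserve/process step simulated:

```ruby
# The trap body stays tiny: it only sets a flag. The loop notices the
# flag between iterations, finishes the current job, then exits.
shutdown = false
trap('TERM') { shutdown = true }

jobs_done = 0
until shutdown
  # Stand-in for: job = tube.reserve(1); process(job); job.delete
  jobs_done += 1
  # Simulate systemd sending SIGTERM after a few jobs:
  Process.kill('TERM', Process.pid) if jobs_done == 3
  break if jobs_done > 1_000 # safety net for the simulation only
end
```

In a real worker, the loop body would reserve with a short timeout (rescuing Beaneater::TimedOutError) so the flag is checked at least once per timeout interval even when the tube is idle.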
Our system processes many jobs from the queue, and at times those jobs have not yet finished processing. There is a chance that our system will put jobs with the same name as jobs that are currently being processed.
Is there a beaneater check that will tell us that a job with the same name is already in the queue before we add it?
Thanks,
Michael
client = Beaneater.new
raises
beaneater-1.0.0/lib/beaneater.rb:24:in `initialize': wrong number of arguments (0 for 1) (ArgumentError)
But according to the documentation, it should default to the configuration or the environment variable.
Need to fix the issue where, if there are multiple beaneater (telnet) connections, responses from beanstalkd could get mixed together.
Stop trying to distinguish different responses with a regex (ref) and read the byte size instead.
I'm using beanstalkd as the primary message queue with Beaneater. In my scenario, every service gets its own beanstalkd tube.
As the system grows, there is an idea of using an optimized job serializer/parser per service to get better performance.
Any suggestions? :D