
mike-marcacci commented:

I actually haven't used it with ioredis, but I'll give that a go and let you know. Sounds like a good thing to add to the test suite.

On Jun 22, 2015, at 11:44 PM, Behrad wrote:

seems promising...

Have you tested this with ioredis on Redis 3.0 Cluster?

behrad commented:

Thank you @mike-marcacci.
Then how can you claim it works with Redis Cluster, since node_redis doesn't support Redis Cluster?


mike-marcacci commented:

Ah yes. I actually used node-redis specifically because it doesn't do any clustering on its own. The redlock algorithm needs to know about all of the nodes in the cluster so that it knows if a quorum is reached (that's the key part of making it distributed).

I just skimmed through the ioredis source and it looks like it should be trivial for me to detect whether redlock is passed a single redis node or a cluster, and then get the nodes from that cluster. I'll give this a shot over the weekend and get back to you.
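
For illustration, a minimal sketch of that detection, assuming ioredis; the getVotingNodes helper is hypothetical, not actual node-redlock code:

const Redis = require('ioredis');

// Given a client, return the set of nodes redlock would need to poll
// for a quorum.
function getVotingNodes(client) {
  if (client instanceof Redis.Cluster) {
    // ioredis exposes a cluster's known nodes; take just the masters.
    return client.nodes('master');
  }
  // A plain client counts as a single voting node.
  return [client];
}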


behrad commented:

Great, @mike-marcacci. That would make node-redlock a good candidate for use in https://github.com/Automattic/kue


mike-marcacci commented:

Hey @behrad, I got a chance this weekend to work through this and think I have solutions for all my major concerns. I'll see if I can get this implemented sometime this weekend!


behrad commented:

👍 @mike-marcacci


MaximGarber commented:

@behrad
About the cluster: as of redis-cluster 3.0, when we access a key from any node, we get a unique answer, right? The nodes communicate with each other on port 2XXXX. So I think when we construct a new Redlock, we can just configure one of the nodes and it will work.


behrad commented:

> About the cluster: as of redis-cluster 3.0, when we access a key from any node, we get a unique answer, right?

I think the answer is NOT always yes: http://redis.io/topics/cluster-tutorial#redis-cluster-consistency-guarantees


mike-marcacci commented:

@behrad is correct. As far as I know, it's not possible to force storage on a specific node using redis cluster; rather you must force a key slot that maps to the node.

Originally I misspoke when I said this supported redis "cluster" (as in redis-cluster 3.0) and have since clarified in the readme: it does support a "cluster" of unassociated redis nodes, or multiple true redis clusters that are each treated as individual nodes. The real difficulty with integrating with a single cluster is dealing with redis node additions and changes in the key slot map, which could theoretically consolidate multiple previously-redundant lock keys onto a single node; if that node were to fail, the locks' guarantees could break.
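
(For background: redis cluster picks a key's slot by hashing the whole key, unless the key contains a hash tag like {tag}, in which case only the tag is hashed. A minimal sketch of pinning lock keys to chosen slots; the slottedKey helper is hypothetical, not part of node-redlock:)

// Keys sharing a hash tag always map to the same key slot, e.g.
// CLUSTER KEYSLOT "lock:{vote-1}:foo" equals CLUSTER KEYSLOT "lock:{vote-1}:bar".
function slottedKey(tag, resource) {
  return 'lock:{' + tag + '}:' + resource;
}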

I've thought of a few ways around this:

1. Key Slot Configuration
The actual redlock algorithm (as currently implemented here) holds the redis nodes constant. A slight cluster-friendly modification to this would be to configure redlock with a list of key slots that are guaranteed to provide a meaningful quorum when distributed across the cluster. This makes cluster changes trivial, but does require that the cluster is never changed in a way that invalidates the configuration.

2. Automatic Reconfiguration
Another way to handle this is by automatically calculating the key slots to be used. However, to keep the guarantees, whenever the key slot map changes, all redlock instances will pause for the maximum duration that a lock can be set.

3. Offloading Key Slot Logic
Of course, this logic could be offloaded to a user-supplied method, but that's a bit of a cop-out.

4. Don't Bother
It's quite possible that this is a problem that doesn't need solving, because there's already a great fault-tolerant, scalable solution: use multiple masters or multiple clusters. Using redlock exactly as it is with 3 or more redis nodes provides excellent fault-tolerance; using it with 3 or more redis clusters provides excellent fault-tolerance AND excellent scalability, but does require quite a bit of redundancy (see the sketch after this list).
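
A minimal sketch of option 4, assuming ioredis and three hypothetical independent (non-clustered) redis masters; the hostnames and settings are examples only:

const Redis = require('ioredis');
const Redlock = require('redlock');

const redlock = new Redlock([
  new Redis({ host: 'redis-a.example.com', port: 6379 }),
  new Redis({ host: 'redis-b.example.com', port: 6379 }),
  new Redis({ host: 'redis-c.example.com', port: 6379 })
], { retryCount: 10, retryDelay: 200 }); // example settings, not required values

// A lock is only considered acquired once a majority (2 of 3) of the
// independent nodes accept it.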

I'm going to continue exploring all of these, but please let me know if you see any problems or an easier solution that I'm just missing.


mike-marcacci commented:

Hey guys! So, I put some good time into option #2 above, and I've come to think that the benefits don't justify the complexity of the feature and the difficulty of testing it. Given how easy and inexpensive it is to spin up new servers these days, I'm increasingly leaning toward option #4. In a multi-cluster setup, any changes to the shard maps (adding nodes, removing nodes, failover, etc.) are handled without impacting new or existing locks. It also provides a higher degree of redundancy and safety than using a single cluster, and shouldn't have any impact on scalability.
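
To make that concrete, here's a minimal sketch of the multi-cluster setup, assuming ioredis and three hypothetical independent clusters, each treated as a single redlock node:

const Redis = require('ioredis');
const Redlock = require('redlock');

// Each Redis.Cluster instance is one "node" in redlock's quorum, so
// shard-map changes inside any one cluster never change the vote count.
const redlock = new Redlock([
  new Redis.Cluster([{ host: 'a1.example.com', port: 7000 }]),
  new Redis.Cluster([{ host: 'b1.example.com', port: 7000 }]),
  new Redis.Cluster([{ host: 'c1.example.com', port: 7000 }])
]);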

I've added tests that use ioredis which worked out of the box, and I've added some recommendations to the readme for high-availability installations.

I think in general this makes the locking process much more transparent to your ops team, and keeps the redlock codebase much more maintainable.

Your thoughts?


behrad commented:

Great, @mike-marcacci. You mean this should work in a multi-master redis cluster as long as we don't add/remove nodes!?


mike-marcacci commented:

Hey @behrad, so sorry to leave this conversation stagnant for so long. Basically, because redis uses an eventual consistency model for replication, a single cluster cannot provide the same kind of guarantees possible with multiple independent redis instances (individual nodes or individual clusters). This isn't necessarily a problem for all use cases though, and in cases that are not extremely sensitive to early lock deletion in very rare failure scenarios, running redlock on a single cluster is perfectly fine and will provide high availability and great scaling.

However, since this is a locking library and people will probably use it in extremely sensitive situations, I just want to be absolutely clear that the safest way to run this is to have multiple independently authoritative redis instances.


mike-marcacci commented:

Hey @behrad, I'm going to go ahead and close this issue. Feel free to reopen it if you'd like!


ntquyen commented:

Hi @mike-marcacci,

I tested redlock's compatibility with redis cluster. The code below works well with a redis cluster:

const Redis = require('ioredis');
const Redlock = require('redlock');
const thunkify = require('thunkify');

// localhost:7000, localhost:7001, localhost:7002 are 3 master nodes
const redisClient = new Redis.Cluster([
  { host: 'localhost', port: 7000 },
  { host: 'localhost', port: 7001 },
  { host: 'localhost', port: 7002 }
]);

const redlock = new Redlock([redisClient], redlockConfig);

// thunkify the callback-style lock() so it can be yielded inside a co generator
const lockAsync = thunkify(redlock.lock).bind(redlock);
lock = yield lockAsync(uniqueKey, ttl);

But the code doesn't seem to work if I pass a single node running in non-cluster mode; it hangs in redlock.lock(). Looks like there is some issue with backward compatibility. There are scenarios where we need this, you know; in development, for example, I don't want to create a redis cluster just for the code to work.
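
For reference, the non-cluster setup that hangs looks roughly like this (hypothetical local instance on the default port, same redlockConfig as above):

const redisClient = new Redis({ host: 'localhost', port: 6379 });
const redlock = new Redlock([redisClient], redlockConfig);

const lockAsync = thunkify(redlock.lock).bind(redlock);
lock = yield lockAsync(uniqueKey, ttl); // hangs; never resolves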


mike-marcacci commented:

Hi @ntquyen - that's interesting. To help me track this down, could you tell me which redis client library you're using? It's tested with both node-redis and ioredis, but I wouldn't be surprised if others exist.

Also, just out of curiosity, does your thunkify method above provide anything that the returned promise doesn't? You should be able to just do:

var lock = yield redlock.lock(uniqueKey, ttl);


ntquyen commented:

@mike-marcacci Sorry, I should have included more info in my previous comment: I'm using ioredis as my client library.

I don't think there is any problem with thunkify. However, I just replaced it with your provided code, but it still hangs with a non-cluster node:

var lock = yield redlock.lock(uniqueKey, ttl);

