

Dyno

Build Status · Dev chat at https://gitter.im/Netflix/dynomite · Apache V2 License

Dyno encapsulates features necessary to scale a client application utilizing Dynomite.

See the blog post for introductory info.

See the wiki for documentation and examples.

Dyno Client Features

  • Connection pooling of persistent connections - reduces connection churn on the Dynomite server by reusing client connections.
  • Topology-aware (token-aware) load balancing, avoiding intermediate hops through a Dynomite coordinator node that does not own the requested data.
  • Application-specific request routing to Dynomite nodes based on local rack affinity.
  • Application resilience by intelligently failing over to remote racks when local Dynomite rack nodes fail.
  • Application resilience against network glitches by constantly monitoring connection health and recycling unhealthy connections.
  • Capability of surgically routing traffic away from any nodes that need to be taken offline for maintenance.
  • Flexible retry policies, such as exponential backoff.
  • Insight into connection pool metrics.
  • Highly configurable and pluggable connection pool components for implementing advanced features.
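To illustrate the token-aware routing described above, here is a minimal self-contained sketch (not Dyno's actual implementation; the class and method names are hypothetical): a key's hash is routed to the node owning the smallest token greater than or equal to it, wrapping around the ring.

```java
import java.util.TreeMap;

// Hypothetical sketch of token-aware routing on a token ring.
public class TokenRingSketch {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    public void addNode(long token, String node) {
        ring.put(token, node);
    }

    // Route a key hash to its owning node: first token >= hash, else wrap.
    public String nodeFor(long keyHash) {
        Long token = ring.ceilingKey(keyHash);
        if (token == null) {
            token = ring.firstKey(); // wrap around the ring
        }
        return ring.get(token);
    }

    public static void main(String[] args) {
        TokenRingSketch ring = new TokenRingSketch();
        ring.addNode(2147483647L, "node-a");
        ring.addNode(4294967294L, "node-b");
        System.out.println(ring.nodeFor(100L));        // node-a
        System.out.println(ring.nodeFor(3000000000L)); // node-b
        System.out.println(ring.nodeFor(4294967295L)); // wraps to node-a
    }
}
```

Routing this way means the client can send a request directly to the data owner instead of paying an extra hop through a coordinator.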

Build

Dyno comes with a Gradle wrapper.

git clone https://github.com/Netflix/dyno.git

cd dyno

./gradlew clean build

The gradlew script will download all dependencies automatically and then build Dyno.

Contributing

Thank you for your interest in contributing to the Dyno project. Please see the Contributing file for instructions on how to submit a pull request.

Tip: It is always a good idea to submit an issue to discuss a proposed feature before writing code.

Help

Need help getting up and running, or having problems with the code?

License

Licensed under the Apache License, Version 2.0

dyno's People

Contributors

akbarahmed, akhaku, alosix, anguenot, balajivenki, brharrington, davidwadden, ipapapa, jcacciatore, jhspaybar, jkschneider, kishorekasi, kowalczykbartek, marimcmurtrie, opuneet, pcting, racam, rpalcolea, rprevot, rspieldenner, rsrinivasannetflix, sghill, shailesh33, smukil, srikanthm-1, timiblossom, tiodollar, tomaslin, v1r3n, yijuchung


dyno's Issues

First of two hsetnx calls does not take effect as expected

In the following code, I insert two items into hashes using the Dyno client, going through Dynomite to Redis.
I have two JVMs running on different machines, each using a Dyno client to communicate with Redis.
Usually it works fine. But today both hsetnx calls completed without error, yet I can only find the updated content in BUILD_INFO (the second hsetnx); the first hsetnx seems to have had no effect.
Oddly, dynoClient.hgetAll(BUILD_ID) does return the updated content, but I cannot find it on the Redis server.
When I connect with redis-cli through Dynomite, I can only find the updated content in BUILD_INFO; there is no update for BUILD_ID.

dynoClient.hsetnx(BUILD_ID, buildUniqueKey, result);
logger.info(String.format("After add: %s", dynoClient.hgetAll(BUILD_ID)));
dynoClient.hsetnx(BUILD_INFO, buildUniqueKey, json);
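For reference, hsetnx only writes when the field is absent (per the Redis HSETNX semantics), so checking its return value can help narrow down cases like this. A plain-Java sketch of those semantics (hypothetical helper, not Dyno's API):

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of Redis HSETNX semantics: the field is written
// only if it does not already exist in the hash.
public class HsetnxSketch {
    private final Map<String, Map<String, String>> store = new HashMap<>();

    // Returns true if the field was set, false if it already existed.
    boolean hsetnx(String key, String field, String value) {
        Map<String, String> hash = store.computeIfAbsent(key, k -> new HashMap<>());
        return hash.putIfAbsent(field, value) == null;
    }

    Map<String, String> hgetAll(String key) {
        return store.getOrDefault(key, Map.of());
    }
}
```

With two JVMs racing, a second hsetnx on an already-existing field is a silent no-op (it returns 0), which is worth ruling out before suspecting replication.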

Detecting the RACK failure

Hi,
I am trying to understand how the Dyno client works. Does it handle health checking of all the racks and all the datacenters? Assume we have A1, B1, C1 as three Dynomite nodes running on RACK1, and A2, B2, C2 as Dynomite nodes running on RACK2. What happens if Dynomite node A1 crashes on RACK1? From the Dyno client's perspective, will it talk to B1, C1, and A2? Or will the Dyno client shift all the traffic to A2, B2, C2?

Thanks
Ajay
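The feature list above describes per-token failover rather than whole-rack failover; a rough sketch of that rule under those assumed semantics (all names hypothetical, not Dyno's code): for each token, prefer the owner in the local rack, and fall back to the same token's owner in a remote rack only if the local owner is down.

```java
import java.util.Comparator;
import java.util.List;
import java.util.NoSuchElementException;

// Hypothetical sketch: per-token fallback from local rack to remote racks.
public class RackFallbackSketch {
    record Node(String name, String rack, boolean up) {}

    // All nodes passed in own the same token; pick a local one first,
    // then any remote one that is up.
    static Node pickOwner(List<Node> ownersOfToken, String localRack) {
        return ownersOfToken.stream()
                .filter(Node::up)
                .sorted(Comparator.comparing((Node n) -> !n.rack().equals(localRack)))
                .findFirst()
                .orElseThrow(() -> new NoSuchElementException("no rack available for fallback"));
    }

    public static void main(String[] args) {
        List<Node> ownersOfA1Token = List.of(
                new Node("A1", "RACK1", false), // local owner is down
                new Node("A2", "RACK2", true));
        System.out.println(pickOwner(ownersOfA1Token, "RACK1").name()); // A2
    }
}
```

Under this rule, B1 and C1 would keep serving their own tokens, and only A1's tokens would fail over to A2 — traffic would not wholesale shift to RACK2.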

Exception when I set localZone and localDatacenter

I have Dynomite running on a remote server. The cluster has 1 rack with 2 nodes.

In the Dynomite server,

curl -X GET maritest.abc.com:22222/cluster_describe
{"dcs": [{"name":"pnp","racks": [{"name":"rack1","servers": [{"name":"0.0.0.0","host":"0.0.0.0","port":8103,
"token":4294967294
},{"name":"127.0.0.1","host":"127.0.0.1","port":8101,
"token":2147483647
}]}]}]}

In Client side, I use DynoJedisClient. I supply my own HostSupplier and TokenMapSupplier.

final Host host = new Host("maritest.abc.com", null, 8102, "rack1", "pnp", Host.Status.Up);
HostSupplier customHostSupplier = new HostSupplier() {
            @Override
            public List<Host> getHosts() {
                return Collections.singletonList(host);
            }
        };

String tokenMapJson = "
[{
 	\"token\": \"2147483647\",
 	\"hostname\": \"maritest.abc.com\",
 	\"zone\": \"rack1\",
 	\"dc\": \"pnp\"
 }, {
 	\"token\": \"4294967294\",
 	\"hostname\": \"maritest.abc.com\",
 	\"zone\": \"rack1\",
 	\"dc\": \"pnp\"
 }]
";
TokenMapSupplier tokenSupplier = new AbstractTokenMapSupplier() {
            @Override
            public String getTopologyJsonPayload(Set<Host> set) {
                return tokenMapJson;
            }
            @Override
            public String getTopologyJsonPayload(String hostname) {
                return tokenMapJson;
            }
        };

Then I instantiate DynoJedisClient

 dynoClient = new DynoJedisClient.Builder()
                .withApplicationName(appName)
                .withHostSupplier(customHostSupplier)
                .withTokenMapSupplier(tokenSupplier)
                .build();

In log, I get these warnings

WARN [2018-06-25 15:05:58,542] com.netflix.dyno.jedis.DynoJedisClient: DynoJedisClient for app=[webproxy] is configured for local rack affinity but cannot determine the local rack! DISABLING rack affinity for this instance. To make the client aware of the local rack either use ConnectionPoolConfigurationImpl.setLocalRack() when constructing the client instance or ensure EC2_AVAILABILTY_ZONE is set as an environment variable, e.g. run with -DEC2_AVAILABILITY_ZONE=us-east-1c [main]
WARN [2018-06-25 15:05:58,550] com.netflix.dyno.connectionpool.impl.lb.AbstractTokenMapSupplier: Local Datacenter was not defined [main]

So I set these two system properties before instantiating DynoJedisClient.

      System.setProperty("EC2_REGION", "pnp");
      System.setProperty("EC2_AVAILABILITY_ZONE", "rack1");

If I supply the above 2 variables, I get an Exception.

In Log

ERROR [2018-06-25 14:05:32,080] io.dropwizard.jersey.errors.LoggingExceptionMapper: Error handling a request: 4ec7ab819b9ca7eb [dw-51 - GET /redis/allkeys]
! com.netflix.dyno.connectionpool.exception.PoolOfflineException: PoolOfflineException: [host=Host [hostname=UNKNOWN, ipAddress=UNKNOWN, port=0, rack: UNKNOWN, datacenter: UNKNOW, status: Down, hashtag=null], latency=0(0), attempts=0]host pool is offline and no Racks available for fallback
! at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.getConnection(HostSelectionWithFallback.java:163)
! at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.getConnectionsToRing(HostSelectionWithFallback.java:256)
! at com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl.executeWithRing(ConnectionPoolImpl.java:366)
! at com.netflix.dyno.jedis.DynoJedisClient.d_keys(DynoJedisClient.java:2663)
! at com.netflix.dyno.jedis.DynoJedisClient.keys(DynoJedisClient.java:2643)

In Console

java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
at com.netflix.dyno.jedis.DynoJedisClient$Builder.startConnectionPool(DynoJedisClient.java:4059)
at com.netflix.dyno.jedis.DynoJedisClient$Builder.createConnectionPool(DynoJedisClient.java:4031)
at com.netflix.dyno.jedis.DynoJedisClient$Builder.buildDynoJedisClient(DynoJedisClient.java:4007)
at com.netflix.dyno.jedis.DynoJedisClient$Builder.build(DynoJedisClient.java:3936)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.calculateReplicationFactor(HostSelectionWithFallback.java:396)
at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.initWithHosts(HostSelectionWithFallback.java:353)
at com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl.initSelectionStrategy(ConnectionPoolImpl.java:627)
at com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl.start(ConnectionPoolImpl.java:526)
at com.netflix.dyno.jedis.DynoJedisClient$Builder.startConnectionPool(DynoJedisClient.java:4042)

Binary Keys: incorrect node selection

In file https://github.com/Netflix/dyno/blob/master/dyno-jedis/src/main/java/com/netflix/dyno/jedis/DynoJedisClient.java

you have the following constructor:

private BaseKeyOperation(final byte[] k, final OpName o) {
        	this.key = null;
        	this.binaryKey = null;
        	this.op = o;
}

which simply discards "k".

then in the Token Aware LB part (file https://github.com/Netflix/dyno/blob/master/dyno-core/src/main/java/com/netflix/dyno/connectionpool/impl/lb/TokenAwareSelection.java)
only the String keys are used to select a node:

@Override
	public HostConnectionPool<CL> getPoolForOperation(BaseOperation<CL, ?> op) throws NoAvailableHostsException {
		
		String key = op.getKey();
		HostToken hToken = this.getTokenForKey(key);
		
		HostConnectionPool<CL> hostPool = null;
		if (hToken != null) {
			hostPool = tokenPools.get(hToken.getToken());
		}
		
		if (hostPool == null) {
			throw new NoAvailableHostsException("Could not find host connection pool for key: " + key + ", hash: " +
                    tokenMapper.hash(key));
		}
		
		return hostPool;
}

so when binary keys are used it will always route to the first node, which defeats token-aware routing and is bad for performance.
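A minimal sketch of the fix this issue implies: hash the raw key bytes so binary keys route the same way String keys do, instead of being discarded. This is hypothetical code, not Dyno's (Dyno's actual hash function differs; FNV-1a is used here purely as a stand-in):

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: route binary keys by hashing their raw bytes,
// using the same path String keys take.
public class BinaryKeyHashSketch {
    // FNV-1a over raw bytes, as a stand-in for the real token hasher.
    static long hash(byte[] key) {
        long h = 0xcbf29ce484222325L;
        for (byte b : key) {
            h ^= (b & 0xff);
            h *= 0x100000001b3L;
        }
        return h;
    }

    public static void main(String[] args) {
        byte[] binaryKey = {0x01, 0x02, (byte) 0xff};
        String stringKey = "user:42";
        // Both key types go through the same byte-level hash,
        // so both get deterministic token-aware placement.
        System.out.println(hash(binaryKey));
        System.out.println(hash(stringKey.getBytes(StandardCharsets.UTF_8)));
    }
}
```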

PoolOfflineException: [host=Host [hostname=UNKNOWN, ipAddress=UNKNOWN ...

@ipapapa

Running dynomite locally single node with this config.

dyn_o_mite:
  datacenter: local-dc
  rack: rack1
  dyn_listen: 0.0.0.0:8101
  data_store: 0
  listen: 0.0.0.0:8102
  pem_key_file: conf/dynomite.pem  
  dyn_seed_provider: simple_provider
  servers:
  - 0.0.0.0:6379:1
  tokens: '100'

Running with dyno 1.5.7 using this code:

package com.github.diegopacheco.dynomite.dyno.connection.test;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Set;

import org.junit.Test;

import com.github.diegopacheco.dynomite.cluster.checker.DynomiteConfig;
import com.github.diegopacheco.dynomite.cluster.checker.parser.DynomiteNodeInfo;
import com.netflix.dyno.connectionpool.Host;
import com.netflix.dyno.connectionpool.HostSupplier;
import com.netflix.dyno.connectionpool.TokenMapSupplier;
import com.netflix.dyno.connectionpool.Host.Status;
import com.netflix.dyno.connectionpool.impl.RetryNTimes;
import com.netflix.dyno.connectionpool.impl.lb.AbstractTokenMapSupplier;
import com.netflix.dyno.contrib.ArchaiusConnectionPoolConfiguration;
import com.netflix.dyno.jedis.DynoJedisClient;

public class SimpleConnectionTest {
	
	@Test
	public void testConnection(){
		
		String clusterName = "local-cluster";
		
		DynomiteNodeInfo node = new DynomiteNodeInfo("127.0.0.1","8102","rack1","local-dc","100");
		
		DynoJedisClient dynoClient = new DynoJedisClient.Builder()
				.withApplicationName(DynomiteConfig.CLIENT_NAME)
	            .withDynomiteClusterName(clusterName)
	            .withCPConfig( new ArchaiusConnectionPoolConfiguration(DynomiteConfig.CLIENT_NAME)
	            					.withTokenSupplier(toTokenMapSupplier(Arrays.asList(node)))
	            					.setMaxConnsPerHost(1)
                                    .setConnectTimeout(2000)
                                    .setPoolShutdownDelay(0)
                                    .setFailOnStartupIfNoHosts(true)
                                    .setFailOnStartupIfNoHostsSeconds(2)
                                    .setMaxTimeoutWhenExhausted(2000)
                                    .setSocketTimeout(2000)
                                    .setRetryPolicyFactory(new RetryNTimes.RetryFactory(1))
	            )
	            .withHostSupplier(toHostSupplier(Arrays.asList(node)))
	            .build();

		dynoClient.set("Z", "200");
		System.out.println("Z: " + dynoClient.get("Z"));
		
	}
	
	private static TokenMapSupplier toTokenMapSupplier(List<DynomiteNodeInfo> nodes){
		StringBuilder jsonSB = new StringBuilder("[");
		int count = 0;
		for(DynomiteNodeInfo node: nodes){
			jsonSB.append(" {\"token\":\""+ node.getTokens() + "\",\"hostname\":\"" + node.getServer() + 
							"\",\"dc\":\"" +  node.getDc() 
							+ "\",\"rack\":\"" +  node.getRack()
							+ "\",\"zone\":\"" +  node.getDc()
							+ "\"} ");
			count++;
			if (count < nodes.size())
				jsonSB.append(" , ");
		}
		jsonSB.append(" ]\"");
		
	   final String json = jsonSB.toString();
	   TokenMapSupplier testTokenMapSupplier = new AbstractTokenMapSupplier() {
			    @Override
			    public String getTopologyJsonPayload(String hostname) {
			        return json;
			    }
				@Override
				public String getTopologyJsonPayload(Set<Host> activeHosts) {
					return json;
				}
		};
		return testTokenMapSupplier;
	}
	
	private static HostSupplier toHostSupplier(List<DynomiteNodeInfo> nodes){
		final List<Host> hosts = new ArrayList<Host>();
		
		for(DynomiteNodeInfo node: nodes){
			hosts.add(buildHost(node));
		}
		
		final HostSupplier customHostSupplier = new HostSupplier() {
		   @Override
		   public Collection<Host> getHosts() {
			   return hosts;
		   }
		};
		return customHostSupplier;
	}
	
	private static Host buildHost(DynomiteNodeInfo node){
		return new Host(node.getServer(),node.getServer(),8102,node.getRack(),node.getDc(),Status.Up);
	}
		
}

It was all working fine with dyno 1.5.1, but when I change to dyno 1.5.7 I get this exception:

com.netflix.dyno.connectionpool.exception.PoolOfflineException: PoolOfflineException: [host=Host [hostname=UNKNOWN, ipAddress=UNKNOWN, port=0, rack: UNKNOWN, datacenter: UNKNOW, status: Down], latency=0(0), attempts=0]host pool is offline and no Racks available for fallback
	at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.getConnection(HostSelectionWithFallback.java:161)
	at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.getConnectionUsingRetryPolicy(HostSelectionWithFallback.java:120)
	at com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl.executeWithFailover(ConnectionPoolImpl.java:292)
	at com.netflix.dyno.jedis.DynoJedisClient.d_set(DynoJedisClient.java:1233)
	at com.netflix.dyno.jedis.DynoJedisClient.set(DynoJedisClient.java:1223)
	at com.github.diegopacheco.dynomite.dyno.connection.test.SimpleConnectionTest.testConnection(SimpleConnectionTest.java:48)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:678)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)

LOGS

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/diego/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.21/be4b3c560a37e69b6c58278116740db28832232c/slf4j-simple-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/diego/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.21/7238b064d1aba20da2ac03217d700d91e02460fa/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
[main] WARN com.netflix.config.sources.URLConfigurationSource - No URLs will be polled as dynamic configuration sources.
[main] INFO com.netflix.config.sources.URLConfigurationSource - To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
[main] INFO com.netflix.config.DynamicPropertyFactory - DynamicPropertyFactory is initialized with configuration sources: com.netflix.config.ConcurrentCompositeConfiguration@56cbfb61
[main] INFO com.netflix.dyno.contrib.ArchaiusConnectionPoolConfiguration - Dyno configuration: CompressionStrategy = NONE
[main] WARN com.netflix.dyno.jedis.DynoJedisClient - DynoJedisClient for app=[DynomiteClusterChecker] is configured for local rack affinity but cannot determine the local rack! DISABLING rack affinity for this instance. To make the client aware of the local rack either use ConnectionPoolConfigurationImpl.setLocalRack() when constructing the client instance or ensure EC2_AVAILABILTY_ZONE is set as an environment variable, e.g. run with -DEC2_AVAILABILITY_ZONE=us-east-1c
[main] INFO com.netflix.dyno.jedis.DynoJedisClient - Starting connection pool for app DynomiteClusterChecker
[pool-3-thread-1] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - Adding host connection pool for host: Host [hostname=127.0.0.1, ipAddress=127.0.0.1, port=8102, rack: rack1, datacenter: local-dc, status: Up]
[pool-3-thread-1] INFO com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl - Priming connection pool for host:Host [hostname=127.0.0.1, ipAddress=127.0.0.1, port=8102, rack: rack1, datacenter: local-dc, status: Up], with conns:3
[pool-3-thread-1] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - Successfully primed 3 of 3 to Host [hostname=127.0.0.1, ipAddress=127.0.0.1, port=8102, rack: rack1, datacenter: local-dc, status: Up]
[main] WARN com.netflix.dyno.connectionpool.impl.lb.AbstractTokenMapSupplier - Local Datacenter was not defined
[main] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - registered mbean com.netflix.dyno.connectionpool.impl:type=MonitorConsole
[Thread-1] INFO com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl - Shutting down connection pool for host:Host [hostname=127.0.0.1, ipAddress=127.0.0.1, port=8102, rack: rack1, datacenter: local-dc, status: Up]
[Thread-1] WARN com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl - Failed to close connection for host: Host [hostname=127.0.0.1, ipAddress=127.0.0.1, port=8102, rack: rack1, datacenter: local-dc, status: Up] Unexpected end of stream.
[Thread-1] WARN com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl - Failed to close connection for host: Host [hostname=127.0.0.1, ipAddress=127.0.0.1, port=8102, rack: rack1, datacenter: local-dc, status: Up] Unexpected end of stream.
[Thread-1] WARN com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl - Failed to close connection for host: Host [hostname=127.0.0.1, ipAddress=127.0.0.1, port=8102, rack: rack1, datacenter: local-dc, status: Up] Unexpected end of stream.
[Thread-1] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - Remove host: Successfully removed host 127.0.0.1 from connection pool
[Thread-1] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - deregistered mbean com.netflix.dyno.connectionpool.impl:type=MonitorConsole

Cheers,
Diego Pacheco

Replication factor determination ignores inactive hosts

Hi,

When using Token Aware load balancing, the replication factor is checked during startup.

This validation is done by the HostSelectionWithFallback class (initWithHosts and calculateReplicationFactor methods):

public void initWithHosts(Map<Host, HostConnectionPool<CL>> hPools) {
       List<HostToken> allHostTokens = tokenSupplier.getTokens(hPools.keySet());
       // ....
       if (localSelector.isTokenAware() && localRack != null) {
              replicationFactor.set(calculateReplicationFactor(allHostTokens));
       }
       // ...
}

Unfortunately, the given collection "hPools" contains only hosts that are up. So if one host is unavailable during startup, Dyno throws an exception even for a symmetric configuration.
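The reporter's point can be sketched as follows (hypothetical names, not Dyno's code): if the replication factor is derived from all configured host tokens rather than only the up hosts, a node that happens to be down at startup still counts toward its token's owner set, and a symmetric topology stays symmetric.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: compute RF from ALL configured host tokens,
// so hosts that are down at startup still count.
public class ReplicationFactorSketch {
    record HostToken(long token, String host) {}

    static int replicationFactor(List<HostToken> allHostTokens) {
        Map<Long, Integer> ownersPerToken = new HashMap<>();
        for (HostToken ht : allHostTokens) {
            ownersPerToken.merge(ht.token(), 1, Integer::sum);
        }
        Set<Integer> counts = new HashSet<>(ownersPerToken.values());
        if (counts.size() != 1) {
            throw new IllegalStateException("Invalid asymmetric topology: " + ownersPerToken);
        }
        return counts.iterator().next();
    }

    public static void main(String[] args) {
        // Two racks with the same token layout => RF = 2,
        // regardless of which hosts are currently reachable.
        List<HostToken> tokens = List.of(
                new HostToken(100L, "a1-rack1"), new HostToken(100L, "a2-rack2"),
                new HostToken(200L, "b1-rack1"), new HostToken(200L, "b2-rack2"));
        System.out.println(replicationFactor(tokens)); // 2
    }
}
```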

Use getMaxTimeoutWhenExhausted() when borrowing connection from pool

Shouldn't getMaxTimeoutWhenExhausted() be used, rather than the connect timeout, when calling selectionStrategy.getConnection()? This already appears to be the case everywhere else in ConnectionPoolImpl.java, with the exception of getConnectionForOperation():

public <R> Connection<CL> getConnectionForOperation(BaseOperation<CL, R> baseOperation) {
- return selectionStrategy.getConnection(baseOperation, cpConfiguration.getConnectTimeout(), TimeUnit.MILLISECONDS);
+ return selectionStrategy.getConnection(baseOperation, cpConfiguration.getMaxTimeoutWhenExhausted(), TimeUnit.MILLISECONDS);   
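The distinction behind the suggested one-line change can be sketched as follows (hypothetical code, not Dyno's): connectTimeout bounds opening a new socket, while maxTimeoutWhenExhausted bounds how long a caller waits to borrow an existing connection from a pool whose connections are all busy.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: borrowing from an exhausted pool should be bounded
// by maxTimeoutWhenExhausted, not by the socket connect timeout.
public class BorrowTimeoutSketch {
    private final ArrayBlockingQueue<String> idleConnections;

    BorrowTimeoutSketch(int size) {
        idleConnections = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) idleConnections.offer("conn-" + i);
    }

    // Wait up to maxTimeoutWhenExhausted for an idle connection, then fail fast.
    String borrow(long maxTimeoutWhenExhaustedMs) throws InterruptedException {
        String conn = idleConnections.poll(maxTimeoutWhenExhaustedMs, TimeUnit.MILLISECONDS);
        if (conn == null) {
            throw new IllegalStateException(
                    "Pool exhausted after " + maxTimeoutWhenExhaustedMs + "ms");
        }
        return conn;
    }

    void release(String conn) {
        idleConnections.offer(conn);
    }
}
```

Using the connect timeout for the borrow wait conflates two unrelated limits, which is presumably why the issue singles out getConnectionForOperation().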

'dyno-jedis', 'dyno-redisson', 'dyno-memcache' etc. should not depend on dyno-contrib

Hey,

The fact that the backend implementations, 'dyno-jedis', 'dyno-redisson', 'dyno-memcache', etc., depend on 'dyno-contrib' brings dependency on Eureka (and underlying dependencies) which is AWS specific. ('eureka-client')

Not only is it undesirable to force AWS-specific dependencies on applications not running on AWS, it is also a rather heavy payload for a driver library to carry.

IMHO, all backend implementations should depend only on 'dyno-core', and the AWS-specific implementation should live in 'dyno-contrib' or an AWS-specific component.

Let me know if you would be open to discussing this, and whether you would like me to give it a try with a PR moving the AWS-specific code into 'dyno-contrib' to keep the backend implementations AWS-agnostic.

Thanks

J.

Replication Rack write/read not working

With v1.5.1, if the primary node in the local rack is down, writes and reads to the corresponding node in the replication rack (or another rack) do not work. This worked well prior to 1.5.1; the local rack affinity changes introduced a regression.

E.g., in my test case the key "ApZmMHHkWN" belongs to node2 in rack1 and also node2 in rack2. I brought down Dynomite (Redis is still up). When I write a new value or read a value for the key, the expected behavior is that, since node2 in rack1 is down, it should read from node2 in rack2. But I am getting the following exception.

Exception in thread "main" com.netflix.dyno.connectionpool.exception.FatalConnectionException: FatalConnectionException: [host=Host [hostname=UNKNOWN, ipAddress=UNKNOWN, port=0, rack: null, datacenter: null, status: Down], latency=0(0), attempts=1]redis.clients.jedis.exceptions.JedisDataException: ERR Peer: Peer Node is not connected
at com.netflix.dyno.jedis.JedisConnectionFactory$JedisConnection.execute(JedisConnectionFactory.java:104)
at com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl.executeWithFailover(ConnectionPoolImpl.java:294)
at com.netflix.dyno.jedis.DynoJedisClient.d_get(DynoJedisClient.java:340)
at com.netflix.dyno.jedis.DynoJedisClient.get(DynoJedisClient.java:334)
at ConfLoaded.main(ConfLoaded.java:31)
Caused by: redis.clients.jedis.exceptions.JedisDataException: ERR Peer: Peer Node is not connected
at redis.clients.jedis.Protocol.processError(Protocol.java:127)
at redis.clients.jedis.Protocol.process(Protocol.java:161)
at redis.clients.jedis.Protocol.read(Protocol.java:215)
at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:340)
at redis.clients.jedis.Connection.getBinaryBulkReply(Connection.java:259)
at redis.clients.jedis.Connection.getBulkReply(Connection.java:248)
at redis.clients.jedis.Jedis.get(Jedis.java:153)
at com.netflix.dyno.jedis.DynoJedisClient$9.execute(DynoJedisClient.java:343)
at com.netflix.dyno.jedis.DynoJedisClient$9.execute(DynoJedisClient.java:340)
at com.netflix.dyno.jedis.JedisConnectionFactory$JedisConnection.execute(JedisConnectionFactory.java:85)
... 4 more

Example code not working

Hi,

I've been trying to use the library but couldn't make it work. I get the following exception:

2014-11-04 14:01:22 WARN TokenMapSupplierImpl:107 - Could not get json response for token topology [Connection to http://localhost:8080 refused]
2014-11-04 14:01:22 INFO HostConnectionPoolImpl:162 - Shutting down connection pool for host:Host [name=localhost, port=8102, dc: localrack, status: Up]
2014-11-04 14:01:22 WARN HostConnectionPoolImpl:407 - Failed to close connection for host: Host [name=localhost, port=8102, dc: localrack, status: Up] Unknown reply:
Exception in thread "main" com.netflix.dyno.connectionpool.exception.NoAvailableHostsException: NoAvailableHostsException: [host=Host [name=UNKNOWN, port=0, dc: null, status: Down], latency=0(0), attempts=0]Token not found for key hash: 3303027599
at com.netflix.dyno.connectionpool.impl.hash.BinarySearchTokenMapper.getToken(BinarySearchTokenMapper.java:73)
at com.netflix.dyno.connectionpool.impl.lb.TokenAwareSelection.getPoolForOperation(TokenAwareSelection.java:85)
at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.getConnection(HostSelectionWithFallback.java:125)
at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.getConnection(HostSelectionWithFallback.java:114)
at com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl.executeWithFailover(ConnectionPoolImpl.java:299)
at com.netflix.dyno.jedis.DynoJedisClient.d_set(DynoJedisClient.java:946)
at com.netflix.dyno.jedis.DynoJedisClient.set(DynoJedisClient.java:941)
at testElastic.DynoTest.main(DynoTest.java:37)

Server Log:

src/dynomite -c conf/redis.yml
[Tue Nov 4 11:36:20 2014] dynomite.c:192 dynomite-0.1.19 built for Linux 3.13.0-39-generic x86_64 started on pid 18082
[Tue Nov 4 11:36:20 2014] dynomite.c:197 run, rabbit run / dig that hole, forget the sun / and when at last the work is done / don't sit down / it's time to dig another one
[Tue Nov 4 11:36:20 2014] dyn_stats.c:992 m 3 listening on '0.0.0.0:22222'
[Tue Nov 4 11:36:20 2014] dyn_proxy.c:211 p 7 listening on '127.0.0.1:8102' in redis pool 0 'dyn_o_mite' with 1 servers
[Tue Nov 4 11:36:20 2014] dyn_dnode_server.c:195 dyn: p 8 listening on '127.0.0.1:8101' in redis pool 0 'dyn_o_mite' with 1 servers
[Tue Nov 4 11:36:20 2014] dyn_dnode_server.c:286 dyn: accept on sd 10
[Tue Nov 4 11:36:20 2014] dyn_dnode_server.c:326 dyn: accepted c 10 on p 8 from '127.0.0.1:38887'
[Tue Nov 4 11:36:24 2014] dyn_proxy.c:341 accepted c 11 on p 7 from '127.0.0.1:48407'
[Tue Nov 4 11:36:24 2014] dyn_redis.c:1050 parsed unsupported command 'QUIT'
[Tue Nov 4 11:36:24 2014] dyn_core.c:284 close c 11 '127.0.0.1:48407' on event 00FF eof 0 done 0 rb 14 sb 0: Invalid argument
[Tue Nov 4 14:01:22 2014] dyn_proxy.c:341 accepted c 11 on p 7 from '127.0.0.1:53395'
[Tue Nov 4 14:01:22 2014] dyn_redis.c:1050 parsed unsupported command 'QUIT'
[Tue Nov 4 14:01:22 2014] dyn_core.c:284 close c 11 '127.0.0.1:53395' on event 00FF eof 0 done 0 rb 14 sb 0: Invalid argument

Server conf:

dyn_o_mite:
listen: 127.0.0.1:8102
env: network
datacenter: localdc
rack: localrack
dyn_listen: 127.0.0.1:8101
tokens: '3303027599'
servers:
- 127.0.0.1:6379:1
redis: true

Java Code:

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import com.netflix.dyno.connectionpool.Host;
import com.netflix.dyno.connectionpool.Host.Status;
import com.netflix.dyno.connectionpool.HostSupplier;
import com.netflix.dyno.jedis.DynoJedisClient;

public class DynoTest {

public static void main(String[] args) {
    final HostSupplier customHostSupplier = new HostSupplier() {

        final List<Host> hosts = new ArrayList<Host>();

        @Override
        public Collection<Host> getHosts() {

            hosts.add(new Host("localhost", 8102, Status.Up).setRack("localrack"));

            return hosts;
        }
    };


    DynoJedisClient dynoClient = new DynoJedisClient.Builder()
            .withApplicationName("MY APP")
            .withDynomiteClusterName("local")
            .withHostSupplier(customHostSupplier).build();

    try{
        dynoClient.set("foo", "puneetTest");
        System.out.println("Value: " + dynoClient.get("foo"));

    } finally{
        dynoClient.stopClient();
    }
}

}

Build fail

E:\dyno_master_cd6\dyno>gradlew clean --stacktrace

Configure project :
Inferred project: dyno, version: 1.6.4-SNAPSHOT

FAILURE: Build failed with an exception.

  • What went wrong:
    org/jfrog/gradle/plugin/artifactory/task/BuildInfoBaseTask

  • Try:
    Run with --info or --debug option to get more log output. Run with --scan to get full insights.

  • Exception is:
    java.lang.NoClassDefFoundError: org/jfrog/gradle/plugin/artifactory/task/BuildInfoBaseTask
    at nebula.plugin.netflixossproject.publishing.PublishingPlugin.apply(PublishingPlugin.groovy:51)
    at nebula.plugin.netflixossproject.publishing.PublishingPlugin.apply(PublishingPlugin.groovy)
    at org.gradle.api.internal.plugins.ImperativeOnlyPluginTarget.applyImperative(ImperativeOnlyPluginTarget.java:42)
    at org.gradle.api.internal.plugins.RuleBasedPluginTarget.applyImperative(RuleBasedPluginTarget.java:50)
    at org.gradle.api.internal.plugins.DefaultPluginManager.addPlugin(DefaultPluginManager.java:165)
    at org.gradle.api.internal.plugins.DefaultPluginManager.access$200(DefaultPluginManager.java:47)
    at org.gradle.api.internal.plugins.DefaultPluginManager$AddPluginBuildOperation.run(DefaultPluginManager.java:252)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
    at org.gradle.api.internal.plugins.DefaultPluginManager.doApply(DefaultPluginManager.java:144)
    at org.gradle.api.internal.plugins.DefaultPluginManager.addImperativePlugin(DefaultPluginManager.java:80)
    at org.gradle.api.internal.plugins.DefaultPluginManager.addImperativePlugin(DefaultPluginManager.java:86)
    at org.gradle.api.internal.plugins.DefaultPluginContainer.apply(DefaultPluginContainer.java:60)
    at org.gradle.api.plugins.PluginContainer$apply.call(Unknown Source)
    at nebula.plugin.netflixossproject.NetflixOssProjectPlugin.apply(NetflixOssProjectPlugin.groovy:63)
    at nebula.plugin.netflixossproject.NetflixOssProjectPlugin.apply(NetflixOssProjectPlugin.groovy)
    at org.gradle.api.internal.plugins.ImperativeOnlyPluginTarget.applyImperative(ImperativeOnlyPluginTarget.java:42)
    at org.gradle.api.internal.plugins.RuleBasedPluginTarget.applyImperative(RuleBasedPluginTarget.java:50)
    at org.gradle.api.internal.plugins.DefaultPluginManager.addPlugin(DefaultPluginManager.java:165)
    at org.gradle.api.internal.plugins.DefaultPluginManager.access$200(DefaultPluginManager.java:47)
    at org.gradle.api.internal.plugins.DefaultPluginManager$AddPluginBuildOperation.run(DefaultPluginManager.java:252)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
    at org.gradle.api.internal.plugins.DefaultPluginManager.doApply(DefaultPluginManager.java:144)
    at org.gradle.api.internal.plugins.DefaultPluginManager.apply(DefaultPluginManager.java:125)
    at org.gradle.plugin.use.internal.DefaultPluginRequestApplicator$3.run(DefaultPluginRequestApplicator.java:151)
    at org.gradle.plugin.use.internal.DefaultPluginRequestApplicator.applyPlugin(DefaultPluginRequestApplicator.java:225)
    at org.gradle.plugin.use.internal.DefaultPluginRequestApplicator.applyPlugins(DefaultPluginRequestApplicator.java:148)
    at org.gradle.configuration.DefaultScriptPluginFactory$ScriptPluginImpl.apply(DefaultScriptPluginFactory.java:179)
    at org.gradle.configuration.BuildOperationScriptPlugin$1.run(BuildOperationScriptPlugin.java:61)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
    at org.gradle.configuration.BuildOperationScriptPlugin.apply(BuildOperationScriptPlugin.java:58)
    at org.gradle.configuration.project.BuildScriptProcessor.execute(BuildScriptProcessor.java:41)
    at org.gradle.configuration.project.BuildScriptProcessor.execute(BuildScriptProcessor.java:26)
    at org.gradle.configuration.project.ConfigureActionsProjectEvaluator.evaluate(ConfigureActionsProjectEvaluator.java:34)
    at org.gradle.configuration.project.LifecycleProjectEvaluator.doConfigure(LifecycleProjectEvaluator.java:64)
    at org.gradle.configuration.project.LifecycleProjectEvaluator.access$100(LifecycleProjectEvaluator.java:34)
    at org.gradle.configuration.project.LifecycleProjectEvaluator$ConfigureProject.run(LifecycleProjectEvaluator.java:110)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
    at org.gradle.configuration.project.LifecycleProjectEvaluator.evaluate(LifecycleProjectEvaluator.java:50)
    at org.gradle.api.internal.project.DefaultProject.evaluate(DefaultProject.java:666)
    at org.gradle.api.internal.project.DefaultProject.evaluate(DefaultProject.java:135)
    at org.gradle.execution.TaskPathProjectEvaluator.configure(TaskPathProjectEvaluator.java:35)
    at org.gradle.execution.TaskPathProjectEvaluator.configureHierarchy(TaskPathProjectEvaluator.java:60)
    at org.gradle.configuration.DefaultBuildConfigurer.configure(DefaultBuildConfigurer.java:38)
    at org.gradle.initialization.DefaultGradleLauncher$ConfigureBuild.run(DefaultGradleLauncher.java:249)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
    at org.gradle.initialization.DefaultGradleLauncher.configureBuild(DefaultGradleLauncher.java:167)
    at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:126)
    at org.gradle.initialization.DefaultGradleLauncher.executeTasks(DefaultGradleLauncher.java:109)
    at org.gradle.internal.invocation.GradleBuildController$1.call(GradleBuildController.java:78)
    at org.gradle.internal.invocation.GradleBuildController$1.call(GradleBuildController.java:75)
    at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:152)
    at org.gradle.internal.invocation.GradleBuildController.doBuild(GradleBuildController.java:100)
    at org.gradle.internal.invocation.GradleBuildController.run(GradleBuildController.java:75)
    at org.gradle.tooling.internal.provider.ExecuteBuildActionRunner.run(ExecuteBuildActionRunner.java:28)
    at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35)
    at org.gradle.tooling.internal.provider.ValidatingBuildActionRunner.run(ValidatingBuildActionRunner.java:32)
    at org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner$1.run(RunAsBuildOperationBuildActionRunner.java:43)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
    at org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner.run(RunAsBuildOperationBuildActionRunner.java:40)
    at org.gradle.tooling.internal.provider.SubscribableBuildActionRunner.run(SubscribableBuildActionRunner.java:51)
    at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:47)
    at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:30)
    at org.gradle.launcher.exec.BuildTreeScopeBuildActionExecuter.execute(BuildTreeScopeBuildActionExecuter.java:39)
    at org.gradle.launcher.exec.BuildTreeScopeBuildActionExecuter.execute(BuildTreeScopeBuildActionExecuter.java:25)
    at org.gradle.tooling.internal.provider.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:80)
    at org.gradle.tooling.internal.provider.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:53)
    at org.gradle.tooling.internal.provider.ServicesSetupBuildActionExecuter.execute(ServicesSetupBuildActionExecuter.java:57)
    at org.gradle.tooling.internal.provider.ServicesSetupBuildActionExecuter.execute(ServicesSetupBuildActionExecuter.java:32)
    at org.gradle.tooling.internal.provider.GradleThreadBuildActionExecuter.execute(GradleThreadBuildActionExecuter.java:36)
    at org.gradle.tooling.internal.provider.GradleThreadBuildActionExecuter.execute(GradleThreadBuildActionExecuter.java:25)
    at org.gradle.tooling.internal.provider.ParallelismConfigurationBuildActionExecuter.execute(ParallelismConfigurationBuildActionExecuter.java:43)
    at org.gradle.tooling.internal.provider.ParallelismConfigurationBuildActionExecuter.execute(ParallelismConfigurationBuildActionExecuter.java:29)
    at org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:69)
    at org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:30)
    at org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:59)
    at org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:44)
    at org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:45)
    at org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:30)
    at org.gradle.launcher.daemon.server.exec.ExecuteBuild.doBuild(ExecuteBuild.java:67)
    at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36)
    at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
    at org.gradle.launcher.daemon.server.exec.WatchForDisconnection.execute(WatchForDisconnection.java:37)
    at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
    at org.gradle.launcher.daemon.server.exec.ResetDeprecationLogger.execute(ResetDeprecationLogger.java:26)
    at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
    at org.gradle.launcher.daemon.server.exec.RequestStopIfSingleUsedDaemon.execute(RequestStopIfSingleUsedDaemon.java:34)
    at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
    at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.call(ForwardClientInput.java:74)
    at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.call(ForwardClientInput.java:72)
    at org.gradle.util.Swapper.swap(Swapper.java:38)
    at org.gradle.launcher.daemon.server.exec.ForwardClientInput.execute(ForwardClientInput.java:72)
    at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
    at org.gradle.launcher.daemon.server.exec.LogAndCheckHealth.execute(LogAndCheckHealth.java:55)
    at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
    at org.gradle.launcher.daemon.server.exec.LogToClient.doBuild(LogToClient.java:62)
    at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36)
    at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
    at org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment.doBuild(EstablishBuildEnvironment.java:82)
    at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36)
    at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
    at org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy$1.run(StartBuildOrRespondWithBusy.java:50)
    at org.gradle.launcher.daemon.server.DaemonStateCoordinator$1.run(DaemonStateCoordinator.java:295)
    at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
    at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
    at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)

  • Get more help at https://help.gradle.org

BUILD FAILED in 1s

ConnectionPoolConfiguration.getConnectTimeout() not used when constructing Jedis() connection

The ConnectionPoolConfiguration.getConnectTimeout() value is not used when constructing a Jedis() connection. The only use of ConnectionPoolConfiguration.getConnectTimeout() seems to be an anomaly captured here: #51

JedisConnectionFactory does not have direct access to the pool configuration. It looks like the value could be exposed from HostConnectionPool, or a connection-timeout member could be added to Host. It could also be passed in the method signature:

com.netflix.dyno.jedis.JedisConnectionFactory.createConnection(HostConnectionPool<Jedis>, ConnectionObservor)

Failover case not working: not reading data from replica rack

I have set up two racks, each with one server, in a single data center, and configured the Dyno client with Dynomite to handle failovers. When both Redis instances are active, reads and writes work fine and the keys exist in both instances (i.e. the replica is created). However, when I stop one Redis instance locally and test again, the client does not read from the replica on the active instance; instead I get the following exception: com.netflix.dyno.connectionpool.exception.FatalConnectionException: FatalConnectionException: [host=Host [hostname=127.0.0.1, ipAddress=null, port=8102, rack: rack-1, datacenter: dc, status: Up, hashtag=null], latency=0(0), attempts=1]redis.clients.jedis.exceptions.JedisDataException: ERR Storage: Datastore refused connection
Can you please suggest a solution to this?

Maven published artifacts have pom.xml dependencies marked with `runtime` scope instead of `compile`.

Hey,

This issue makes the use of the Dyno libs, and other Netflix libs, very painful for the rest of us using maven because we have to exclude all the transitive dependencies of the Netflix libs and define (and maintain) custom dependency management overrides.

Last version of the Dyno libs without broken maven artifacts is 1.0.6 (2015/01/22):

For instance, dyno-core:

http://search.maven.org/#artifactdetails%7Ccom.netflix.dyno%7Cdyno-core%7C1.0.6%7Cjar

Latest Dyno libs in comparison with the broken compile scopes:

http://search.maven.org/#artifactdetails%7Ccom.netflix.dyno%7Cdyno-core%7C1.5.0%7Cjar

I see that people have been reporting this issue on Netflix Archaius:

Netflix/archaius#379

Any update for the Dyno libs?

Thanks.

J.

Support for MULTI - EXEC

Hi,

I am looking to port my existing jedis api calls to dyno jedis client. I have to execute transactions and was looking to see if the MULTI-EXEC feature is supported in the dyno client. I couldn't find this support. Can you please let me know how to achieve MULTI-EXEC commands using dyno client code?

Thanks,
Vasavi

Reset connection after a max age

In some network environments, especially the cloud, a keep-alive connection can become a zombie after some time. It would be great to prevent a zombie connection from crippling the 99th-percentile latency. We should have an additional background thread (or threads) to recycle connections once they exceed a maximum keep-alive age.
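The background-recycler idea above can be sketched with JDK primitives alone. This is a minimal, hypothetical sketch: AgedConnection and MaxAgeRecycler are illustrative stand-ins, not Dyno's connection types.

```java
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical minimal connection wrapper; names are illustrative, not Dyno's API.
class AgedConnection {
    final long createdAtMs = System.currentTimeMillis();
    volatile boolean closed = false;
    void close() { closed = true; }
    boolean olderThan(long maxAgeMs, long nowMs) { return nowMs - createdAtMs > maxAgeMs; }
}

public class MaxAgeRecycler {
    private final Queue<AgedConnection> pool = new ConcurrentLinkedQueue<>();
    private final long maxAgeMs;

    MaxAgeRecycler(long maxAgeMs) { this.maxAgeMs = maxAgeMs; }

    void register(AgedConnection c) { pool.add(c); }

    // One sweep: close and drop every connection past its max keep-alive age.
    void sweep(long nowMs) {
        for (Iterator<AgedConnection> it = pool.iterator(); it.hasNext(); ) {
            AgedConnection c = it.next();
            if (c.olderThan(maxAgeMs, nowMs)) {
                c.close();
                it.remove();
            }
        }
    }

    // Background scheduling, as the issue proposes.
    ScheduledExecutorService start(long periodMs) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(() -> sweep(System.currentTimeMillis()),
                periodMs, periodMs, TimeUnit.MILLISECONDS);
        return ses;
    }

    public static void main(String[] args) {
        MaxAgeRecycler r = new MaxAgeRecycler(1000);
        AgedConnection c = new AgedConnection();
        r.register(c);
        r.sweep(c.createdAtMs + 2000);   // well past the 1000ms max age
        System.out.println("closed=" + c.closed); // closed=true
    }
}
```

A real implementation would recycle (close and re-open) rather than just close, and would throttle how many connections it resets per sweep to avoid a thundering herd.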

Dyno Client with Redis Sentinel setup

I want to use the Netflix/Conductor service, which uses the Netflix/Dyno client to talk to a Redis instance. I would like to use a Redis Sentinel pool for HA purposes, and I am wondering if there are plans, or if anyone has managed, to get Dyno to work with a pool of hosts running Sentinel+Redis?

Dyno has a long timeout of 36s

@ipapapa @timiblossom

Dyno's default timeout looks to be 36s. If I try to connect to a wrong IP where Dynomite is not running, it takes Dyno 36s to realize it's a bad IP.

$ ./gradlew run
:compileJava
:processResources UP-TO-DATE
:classes
:run
log4j:WARN No appenders could be found for logger (com.netflix.config.sources.URLConfigurationSource).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
com.netflix.dyno.connectionpool.exception.PoolOfflineException: PoolOfflineException: [host=Host [hostname=UNKNOWN, ipAddress=UNKNOWN, port=0, rack: null, datacenter: null, status: Down], latency=0(0), attempts=0]host pool is offline and no Racks available for fallback
--
--
TIME TO RUN: 36 seconds
> Building 75% > :run

Dyno does not respect timeout parameters such as setConnectTimeout, setFailOnStartupIfNoHostsSeconds, setMaxTimeoutWhenExhausted, setSocketTimeout... Sample code:

        DynoJedisClient dynoClient = new DynoJedisClient.Builder()
                .withApplicationName("MY_APP")
                .withDynomiteClusterName("MY_CLUSTER")
                .withCPConfig(new ArchaiusConnectionPoolConfiguration("MY_APP")
                        .setPort(8101)
                        .withTokenSupplier(testTokenMapSupplier)
                        .setMaxConnsPerHost(100)
                        .setConnectTimeout(5)
                        .setFailOnStartupIfNoHosts(true)
                        .setFailOnStartupIfNoHostsSeconds(5)
                        .setMaxTimeoutWhenExhausted(5)
                        .setSocketTimeout(5)
                        .setRetryPolicyFactory(new RetryNTimes.RetryFactory(2)))
                .withHostSupplier(customHostSupplier)
                .build();

The whole code is here: https://github.com/diegopacheco/netflixoss-pocs/tree/master/dyno-timout

It would be nice if Dyno respected these timeouts and let us configure them; 36s is a very long timeout.

Cheers,
Diego Pacheco

ability to specify count hint for SCAN call

The default Redis COUNT for SCAN is 10, which results in a long iteration cycle since each call usually returns only 1-2 elements. To optimize this, the function should accept a count hint as well.
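To illustrate why the count hint matters, here is a toy, stdlib-only model of cursor iteration. The scan/iterate names are made up for this sketch; in real Jedis the hint would presumably be passed through something like ScanParams().count(n).

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of cursor-based SCAN over a fixed key set; shows how a larger
// COUNT hint shrinks the number of round trips. Not Dyno's API.
public class ScanCountDemo {

    static final List<String> KEYS = new ArrayList<>();
    static {
        for (int i = 0; i < 100; i++) KEYS.add("key:" + i);
    }

    // Returns keys[cursor, cursor+count) into `out`; next cursor (0 = done).
    static int scan(int cursor, int count, List<String> out) {
        int end = Math.min(cursor + count, KEYS.size());
        out.addAll(KEYS.subList(cursor, end));
        return end == KEYS.size() ? 0 : end;
    }

    // Full iteration; returns how many scan() round trips it took.
    static int iterate(int count) {
        List<String> out = new ArrayList<>();
        int cursor = 0, calls = 0;
        do {
            cursor = scan(cursor, count, out);
            calls++;
        } while (cursor != 0);
        return calls;
    }

    public static void main(String[] args) {
        System.out.println("count=10 -> " + iterate(10) + " calls"); // 10 calls
        System.out.println("count=50 -> " + iterate(50) + " calls"); // 2 calls
    }
}
```

With 100 keys, count=10 costs 10 round trips while count=50 costs 2; against a remote Dynomite node each round trip adds network latency, which is the motivation for the hint.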

Dual writes to support the failure recovery in writing to the secondary cluster

Currently, all writes (we can ignore the reads) to the new cluster in dual writes are fire-and-forget. This could cause some data loss when moving data in some cases. There are a couple of options to address this:
1. Storing failure requests somewhere and retry later
2. Fail the operations so that the application layer can retry
3. Guarantee the writes always successful (which is hard)
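Option 1 can be sketched with plain JDK collections. The RetryingShadowWriter name and the predicate-based "secondary" sink below are illustrative assumptions, not Dyno's dual-write API.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.BiPredicate;

// Sketch of option 1: buffer failed shadow writes and retry them later.
// The secondary cluster is modeled as a (key, value) -> success predicate.
public class RetryingShadowWriter {

    private final BiPredicate<String, String> secondary;
    private final Deque<String[]> failed = new ArrayDeque<>();

    public RetryingShadowWriter(BiPredicate<String, String> secondary) {
        this.secondary = secondary;
    }

    // Fire the shadow write; on failure, queue it instead of dropping it.
    public void write(String key, String value) {
        if (!secondary.test(key, value)) {
            failed.addLast(new String[] { key, value });
        }
    }

    // Drain the queue once; entries that fail again are re-queued.
    public int retryFailed() {
        int drained = 0;
        for (int n = failed.size(); n > 0; n--) {
            String[] kv = failed.pollFirst();
            if (secondary.test(kv[0], kv[1])) drained++;
            else failed.addLast(kv);
        }
        return drained;
    }

    public int pendingCount() { return failed.size(); }

    public static void main(String[] args) {
        final boolean[] up = { false };
        RetryingShadowWriter w = new RetryingShadowWriter((k, v) -> up[0]);
        w.write("key", "value");             // secondary down -> queued
        up[0] = true;                        // secondary recovers
        System.out.println(w.retryFailed()); // 1
    }
}
```

A production version would bound the queue and persist it (or surface failures to the application, option 2), since an in-memory buffer loses writes on process death.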

ConnectionPoolImpl addHost()

Hi,

I'm testing a two-node cluster in 1 DC with 2 racks. The configurations of the 2 nodes differ only by port number, so my Host setup is "host1", 7000 and "host2", 7002. I set up my ConnectionPoolConfiguration with default port 7000.

When I run my code and start testing the Dyno failover (I bring down "host1"), I noticed that it was actually connecting to "host2", but with the default port 7000 instead of 7002.

After some debugging, I noticed that the ConnectionPoolImpl class's addHost() method always sets the host's port from the ConnectionPoolConfiguration port:

Line 172: host.setPort(cpConfiguration.getPort());

Is there any intention behind this line of code? It clearly overwrites the host port I had defined, resulting in a failed connection.

Andy
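One possible guard for the overwrite described above, sketched with a stand-in Host class (not Dyno's), would be to apply the pool-level default only when the supplied host has no explicit port:

```java
// Sketch: apply the pool configuration's default port only when the supplied
// Host did not specify one. The Host class here is illustrative.
public class HostPortGuard {

    static class Host {
        final String name;
        int port;
        Host(String name, int port) { this.name = name; this.port = port; }
    }

    // Mirrors the role of ConnectionPoolImpl.addHost(), but preserves an
    // explicitly configured port instead of unconditionally overwriting it.
    static void applyDefaultPort(Host host, int configPort) {
        if (host.port <= 0) {       // port unset -> fall back to pool config
            host.port = configPort;
        }                           // explicit port (e.g. 7002) is kept
    }

    public static void main(String[] args) {
        Host h1 = new Host("host1", -1);    // no port given
        Host h2 = new Host("host2", 7002);  // explicit port
        applyDefaultPort(h1, 7000);
        applyDefaultPort(h2, 7000);
        System.out.println(h1.port + " " + h2.port); // 7000 7002
    }
}
```

This keeps the existing single-port behavior for hosts built from a token map while letting multi-port topologies like the one above work.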

How does dyno know about redis is unavailable?

Hi, here is my test case (I use the hiredis client):

  1. The Dyno client reads a key.
  2. Dynomite replies "ERR Storage: Connection refused" because its Redis is unavailable.

Because hiredis can still read something from Dynomite, the redisContext's err field is not set to ERROR and the redisReply is not NULL, so we get a wrong value.

My question is:
How does Dyno handle this situation?

switch to jdk 8 as compile platform breaks 1.7 compatibility

Although you have

sourceCompatibility = 1.7
targetCompatibility = 1.7

it seems the artifacts are now compiled with Java 8, which produces this particular error when the code is run with Java 7.

java.lang.NoSuchMethodError: java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
	at com.netflix.dyno.connectionpool.impl.hash.BinarySearchTokenMapper.initBinarySearch(BinarySearchTokenMapper.java:114)
	at com.netflix.dyno.connectionpool.impl.hash.BinarySearchTokenMapper.initSearchMecahnism(BinarySearchTokenMapper.java:78)
	at com.netflix.dyno.connectionpool.impl.lb.TokenAwareSelection.initWithHosts(TokenAwareSelection.java:66)
	at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.initWithHosts(HostSelectionWithFallback.java:348)
	at com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl.initSelectionStrategy(ConnectionPoolImpl.java:627)
	at com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl.start(ConnectionPoolImpl.java:526)
	at com.netflix.dyno.jedis.DynoJedisClient$Builder.startConnectionPool(DynoJedisClient.java:3521)
	at com.netflix.dyno.jedis.DynoJedisClient$Builder.createConnectionPool(DynoJedisClient.java:3509)
	at com.netflix.dyno.jedis.DynoJedisClient$Builder.buildDynoJedisClient(DynoJedisClient.java:3487)
	at com.netflix.dyno.jedis.DynoJedisClient$Builder.build(DynoJedisClient.java:3421)

If you continue to support 1.7, you have to compile with a 1.7 JDK.
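For context, the failure above comes from JDK 8's covariant ConcurrentHashMap.keySet() return type (KeySetView): code compiled on JDK 8 references a method descriptor that does not exist on Java 7. A source-level workaround, separate from the real fix of compiling against a 1.7 boot classpath, is to call keySet() through a Map-typed reference:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// When the receiver's static type is ConcurrentHashMap, JDK 8's compiler
// emits keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView; which
// is exactly the NoSuchMethodError seen on Java 7. Calling through the Map
// interface pins the descriptor to Map.keySet()Ljava/util/Set;, which
// exists on both.
public class KeySetCompat {

    public static void main(String[] args) {
        // Risky when compiled with a JDK 8 compiler but run on Java 7:
        // ConcurrentHashMap<String, Integer> chm = new ConcurrentHashMap<>();
        // Set<String> keys = chm.keySet();   // resolves to KeySetView on JDK 8

        // Safe: invokeinterface on Map.keySet(), present on Java 7 and 8.
        Map<String, Integer> map = new ConcurrentHashMap<>();
        map.put("token", 42);
        Set<String> keys = map.keySet();
        System.out.println(keys.contains("token")); // true
    }
}
```

This is only a mitigation for call sites like BinarySearchTokenMapper; cross-compiling with -bootclasspath (or building on a 1.7 JDK, as the reporter suggests) fixes the whole class of problems.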

Scatter Gather Operations

Hi, I read the wiki about scatter-gather operations and how an mset is divided and then woven back into a single result, but when I try to use it I get an org.apache.commons.lang.NotImplementedException.

Are there any plans to implement this soon? We could really use this feature as mset is essential for us.

Thanks!

How many Hosts do I need to instantiate?

Hi, I just started learning this. I am a bit confused about Host and the connection pool, and need some clarification.

I have Dynomite cluster in remote servers. See below. Details here: Netflix/dynomite#569

  • data center:"dc1"

    • rack1:"dc1_rack1"
      • node1 - host:216.11.111.1, token:2147483647
      • node2 - host:216.11.111.1, token:4294967294
  • data center:"dc2"

    • rack1:"dc2_rack1"
      • node1 - host:216.22.222.2, token: 1383429731
    • rack2:"dc2_rack2"
      • node1 - host: 216.22.222.2, token: 2147483647
      • node2 - host:216.22.222.2, token: 4294967294

I am writing client code with DynoJedis and have questions about this client-side code. I generate my own tokenMap:

[
  {
    "token": "2147483647",
    "hostname": "216.11.111.1",
    "zone": "dc1_rack1",
    "dc": "dc1"
  },
  {
    "token": "4294967294",
    "hostname": "216.11.111.1",
    "zone": "dc1_rack1",
    "dc": "dc1"
  },
  {
    "token": "1383429731",
    "hostname": "216.22.222.2",
    "zone": "dc2_rack1",
    "dc": "dc2"
  },
  {
    "token": "2147483647",
    "hostname": "216.22.222.2",
    "zone": "dc2_rack2",
    "dc": "dc2"
  },
  {
    "token": "4294967294",
    "hostname": "216.22.222.2",
    "zone": "dc2_rack2",
    "dc": "dc2"
  }
]
  1. How many Host objects do I have to instantiate? I have 2 data centers and total of 5 nodes.
    (a) Do I need only 1 Host object for a given cluster?
    (b) 2 Host objects because I use 2 hosts for this cluster?
    (c) 5 Host objects since I have 5 nodes? That is probably not the case.

  2. What should the port be when I instantiate the Host object? Should it be one of the ports the nodes listen on? I know the default port is 8102.

  3. Suppose only 1 Host is needed for the HostSupplier; doesn't that mean all requests from a client go to 1 node? I know that if the first node does not have the data it passes the request around, but if all initial requests go to 1 node, doesn't that node get too busy? Or is the connection pool smart enough to handle that complexity?

Thanks for help!!

Add a configuration option for cross-zone fallback behavior

Currently cross-zone fallbacks occur only when there is no dynomite server present in the local zone. Clients have requested the option to enable cross-zone fallbacks when an error occurs despite a dynomite server being UP in the local zone.
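A possible shape for such an option, sketched with an illustrative enum and selection predicate (not Dyno's actual configuration API):

```java
// Sketch of a configurable cross-zone fallback policy for the behavior
// requested above; names are illustrative.
public class FallbackPolicyDemo {

    enum CrossZoneFallback {
        ONLY_WHEN_NO_LOCAL_HOSTS,   // current behavior
        ON_ERROR                    // requested: fall back even if local is UP
    }

    // Decide whether a failed local request may be retried in a remote zone.
    static boolean canFallback(CrossZoneFallback policy,
                               boolean localZoneHasUpHosts,
                               boolean localRequestFailed) {
        switch (policy) {
            case ON_ERROR:
                return localRequestFailed || !localZoneHasUpHosts;
            case ONLY_WHEN_NO_LOCAL_HOSTS:
            default:
                return !localZoneHasUpHosts;
        }
    }

    public static void main(String[] args) {
        // Local host is UP but the request failed:
        System.out.println(canFallback(
                CrossZoneFallback.ONLY_WHEN_NO_LOCAL_HOSTS, true, true)); // false
        System.out.println(canFallback(
                CrossZoneFallback.ON_ERROR, true, true));                 // true
    }
}
```

The ON_ERROR mode would trade the extra cross-zone latency of the retry for higher availability, so it makes sense as an opt-in configuration rather than a new default.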

Connection health checking is not working for synchronous connection pool

Health checking for synchronous connection pool has been enabled in PR #112.

This mechanism doesn't actually work.

In ConnectionPoolHealthTracker, the method pingHostPool is executed periodically by a scheduler:

private void pingHostPool(HostConnectionPool<CL> hostPool) {
    for (Connection<CL> connection : hostPool.getAllConnections()) {
        try {
            connection.execPing();
        } catch (DynoException e) {
            trackConnectionError(hostPool, e);
        }
    }
}

But, for synchronous connection pool, the HostConnectionPoolImpl.getAllConnections method throws an exception:

public Collection<Connection<CL>> getAllConnections() {
    throw new RuntimeException("Not Implemented");
}

Connection pool configuration contains port and ignores supplied port from host

It is not clear if this is misunderstanding on my end or if this is the intended behavior but I see that your connection pool configuration includes a host port. That port is then used to overwrite the port in a supplied host:

ConnectionPoolImpl.java:

public boolean addHost(Host host, boolean refreshLoadBalancer) {
    host.setPort(cpConfiguration.getPort());
    // ...

In our deployed environment we would have N Redis instances on a single host. Following the documentation, we would then set up N nodes (dynomite + redis) on that host. This would mean N ports.

Please let me know what I am missing.

Support for Dynomite running over memcached

Hi,
Could someone please explain why we need a specific Dyno client for Dynomite running over memcached? I thought Dynomite has its own API, so the Dyno client talks directly to Dynomite (and it shouldn't be back-end specific).
I would appreciate any advice on how to use the Dyno client with Dynomite running over memcached.
Thanks

Client not working as expected when data center is set

https://github.com/Netflix/dyno/blob/master/dyno-core/src/main/java/com/netflix/dyno/connectionpool/impl/lb/HostSelectionWithFallback.java#L377

    String dataCenter = cpConfig.getLocalDataCenter();
    if (dataCenter == null) {
        dataCenter = localRack.substring(0, localRack.length() - 1);
    }

    for (HostToken hostToken : uniqueHostTokens) {
        if (hostToken.getHost().getRack().contains(dataCenter)) {
            // ...

This code seems a little off to me. If a data center value is provided, it is compared against the rack name of each host token.

The end result is that if the data center value is set but the rack names do not contain it, the host tokens are never found, which causes a connection error.
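One way the matching could be made more robust, sketched here with a stand-in host type (not Dyno's Host/HostToken classes), is to prefer an explicit data center attribute on the host and fall back to rack-name matching only when it is absent:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: match hosts to a data center by their own datacenter attribute
// when available, instead of relying on rack.contains(dataCenter).
public class DataCenterMatch {

    static class SimpleHost {
        final String rack;
        final String datacenter; // may be null when not supplied
        SimpleHost(String rack, String datacenter) {
            this.rack = rack;
            this.datacenter = datacenter;
        }
    }

    static boolean inDataCenter(SimpleHost host, String dataCenter) {
        if (host.datacenter != null) {
            return host.datacenter.equals(dataCenter);   // exact match
        }
        return host.rack.contains(dataCenter);           // legacy fallback
    }

    static List<SimpleHost> filter(List<SimpleHost> hosts, String dataCenter) {
        List<SimpleHost> out = new ArrayList<>();
        for (SimpleHost h : hosts) {
            if (inDataCenter(h, dataCenter)) out.add(h);
        }
        return out;
    }

    public static void main(String[] args) {
        List<SimpleHost> hosts = new ArrayList<>();
        hosts.add(new SimpleHost("dc2_rack1", "dc2"));      // explicit DC
        hosts.add(new SimpleHost("us-east-1c", null));      // rack-derived DC
        System.out.println(filter(hosts, "dc2").size());       // 1
        System.out.println(filter(hosts, "us-east-1").size()); // 1
    }
}
```

With this shape, a user-supplied getLocalDataCenter() value like "dc2" still matches hosts whose rack names ("dc2_rack1") happen to contain it, but no longer silently excludes hosts whose racks follow a different naming convention.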

Support mget in Dyno

Support mget. This has following requirements:

  • Support basic mget

  • Support mget with compression

  • Support scatter gather for token-aware mget. This is a little involved because mget has to split the request into different key sets based on token and query the individual token owners. Today Dynomite takes care of this splitting of requests and aggregating of responses.
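The scatter/gather step in the last requirement can be sketched with plain JDK collections; ownerOf() and the per-owner map below are stand-ins for the real token lookup and node round trips.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of token-aware scatter/gather for mget: split keys by owner,
// query each owner's key set once, then weave results back into the
// caller's original key order.
public class ScatterGatherMget {

    static final int NUM_OWNERS = 3;

    static int ownerOf(String key) {
        return Math.floorMod(key.hashCode(), NUM_OWNERS);  // toy token mapping
    }

    // store: owner -> (key -> value), simulating each node's data.
    static List<String> mget(Map<Integer, Map<String, String>> store, List<String> keys) {
        // Scatter: bucket keys by owning node.
        Map<Integer, List<String>> byOwner = new HashMap<>();
        for (String k : keys) {
            byOwner.computeIfAbsent(ownerOf(k), o -> new ArrayList<>()).add(k);
        }
        // One "round trip" per owner, collecting per-key results.
        Map<String, String> results = new HashMap<>();
        for (Map.Entry<Integer, List<String>> e : byOwner.entrySet()) {
            Map<String, String> node = store.getOrDefault(e.getKey(), new HashMap<>());
            for (String k : e.getValue()) {
                results.put(k, node.get(k));   // null if missing, like MGET
            }
        }
        // Gather: reassemble in the caller's original key order.
        List<String> out = new ArrayList<>(keys.size());
        for (String k : keys) out.add(results.get(k));
        return out;
    }

    public static void main(String[] args) {
        Map<Integer, Map<String, String>> store = new HashMap<>();
        for (String k : new String[] { "a", "b", "c" }) {
            store.computeIfAbsent(ownerOf(k), o -> new HashMap<>()).put(k, k.toUpperCase());
        }
        System.out.println(mget(store, java.util.Arrays.asList("a", "b", "c", "missing")));
        // [A, B, C, null]
    }
}
```

Compression support would slot into the per-key result handling, decompressing each value before it is woven back into the ordered response.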

Switching between multiple databases using Jedis client

Hi,

We have multiple databases in our redis instance. I did not see support for SELECT index in dynomite. Can you please confirm if creating and using multiple redis databases are supported by dynomite?

Thanks,
Vasavi

Client requests always going to same rack

Hi,
We created a 9-node cluster in one datacenter with 3 racks, each having 3 nodes following the example at https://github.com/Netflix/dynomite/wiki/Getting+Started#c-6-node-cluster-3-racks-with-2-nodes-on-each. The racks are in AWS in zones us-east-1b, us-east-1c, and us-east-1d.
We created a client in Java to connect to the above using the following maven dependency:

<dependency>
    <groupId>com.netflix.dyno</groupId>
    <artifactId>dyno-jedis</artifactId>
    <version>1.0.4</version>
</dependency>

We also created a test Java app to test failover that uses the client and writes and reads from Redis. However, we noticed that all writes and reads always go to one rack (us-east-1c) and if we shut down the 3 servers in that rack the client errors out in every write and read. This does not happen when we shut down the other 2 racks (us-east-1b, us-east-1d). Is this the intended behavior? Our expectation was that traffic gets routed to whatever racks are available regardless of what rack fails.

Finally, we found that we can set up the environment variable EC2_AVAILABILITY_ZONE to a specific zone (e.g., us-east-1b) while running our test, and that in this case our failover works (e.g., we can successfully shut down any of the racks and requests get routed to the other racks that are up). The only problem with this solution is that we will need to set up the EC2_AVAILABILITY_ZONE variable in every app servers connecting to Dyno. Is there a better way to make this work?

Thank you

infinite recursion in zrevrangeByScore for pipeline

   public Response<Set<String>> zrevrangeByScore(final String key, final double max,
           final double min, final int offset, final int count) {
        return new PipelineOperation<Set<String>>() {

            @Override
            Response<Set<String>> execute(Pipeline jedisPipeline) throws DynoException {
                // BUG: this re-enters the outer zrevrangeByScore() rather than
                // calling jedisPipeline.zrevrangeByScore(...), so it recurses forever.
                return zrevrangeByScore(key, max, min, offset, count);
            }

        }.execute(key, OpName.ZREVRANGEBYSCORE);
    }
