
dynamodb-cross-region-library's Introduction

[IMPORTANT] DynamoDB now provides server-side support for cross-region replication using Global Tables. Please use that instead of this client-side library. For more details about Global Tables, please see https://aws.amazon.com/dynamodb/global-tables/

DynamoDB Cross-Region Replication

The DynamoDB cross-region replication process consists of 2 distinct steps:

  • Step 1: Table copy (bootstrap) - copying existing data from source table to destination table
  • Step 2: Real-time updates (this component) - applying live DynamoDB stream records from the source table to the destination table

Requirements

  • Maven
  • JRE 1.7+
  • Pre-existing source and destination DynamoDB tables

Step 1 (Optional): Table copy (bootstrapping existing data)

This step is necessary if your source table contains existing data, and you would like to sync the data first. Please use the following steps to complete the table copy:

  1. (Optional) If your source table is not receiving live traffic, you may skip this step. Otherwise, if your source table is being continuously updated, you must enable DynamoDB Streams to record these live writes while the table copy is ongoing. Enable DynamoDB Streams on your source table with StreamViewType set to "New and old images". For more information on how to do this, please refer to our official DynamoDB Streams documentation.
  2. Check the read provisioned throughput (RCU) on your source table, and the write provisioned throughput (WCU) on your destination table. Ensure they are set high enough to allow table copy to complete well within 24 hours.
    • Rough calculation: table copy completion time (in seconds) ≈ number of items in source table * ceiling(average item size / 1 KB) / WCU of destination table (a worked example follows this list).
  3. Start the table copy process. There are a few options:
    • Use the Import/Export option available via the official AWS DynamoDB Console, which exports data to S3 and then imports it back into a different DynamoDB table. For more information, please refer to our official Import/Export documentation.
    • Use the custom Java tool from awslabs, also available on GitHub, which performs a parallel table scan and then writes the scanned items to the destination table.
    • Write your own tool to perform the table copy, essentially scanning items in the source table and using parallel PutItem calls to write items into the destination table.
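As a worked example of the rough calculation in step 2 (all numbers are hypothetical): a source table with 50 million items averaging 2.5 KB per item, copied into a destination table provisioned with 2,000 WCU, needs roughly 50,000,000 * ceiling(2.5 KB / 1 KB) / 2,000 = 75,000 seconds, or about 21 hours. That is uncomfortably close to the 24-hour stream retention window, so in this scenario you would raise the destination WCU before starting the copy.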

WARNING: If your source table has live writes, make sure the table copy process completes well within 24 hours, because DynamoDB Streams records are only available for 24 hours. If your table copy process takes more than 24 hours, you can potentially end up with inconsistent data across your tables!
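Enabling DynamoDB Streams with the required view type (needed in step 1 above and again in Step 2 below) can also be done from the command line. Below is a minimal sketch using the AWS CLI, with a placeholder table name and region; the official DynamoDB Streams documentation remains the authoritative reference:

    aws dynamodb update-table \
        --table-name mySourceTable \
        --region us-east-1 \
        --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES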

Step 2: Real-time updates (applying live stream records)

This step sets up a replication process that continuously consumes DynamoDB stream records from the source table and applies them to the destination table in real-time.

  1. Enable DynamoDB Streams on your source table with StreamViewType set to "New and old images". For more information on how to do this, please refer to our official DynamoDB Streams documentation.

  2. Build the library:

    mvn install
  3. This produces the executable jar in the target/ directory. To start the replication process:
    java -jar target/dynamodb-cross-region-replication-1.2.1.jar --sourceRegion <source_region> --sourceTable <source_table_name> --destinationRegion <destination_region> --destinationTable <destination_table_name>
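For example, to replicate a hypothetical table named ProductCatalog from us-east-1 to a table of the same name in eu-west-1:

    java -jar target/dynamodb-cross-region-replication-1.2.1.jar --sourceRegion us-east-1 --sourceTable ProductCatalog --destinationRegion eu-west-1 --destinationTable ProductCatalog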

Use the --help option to view all available arguments to the connector executable jar. The connector process accomplishes a few things:

  • Sets up a Kinesis Client Library (KCL) worker to consume the DynamoDB Stream of the source table
  • Uses a custom implementation of the Kinesis Connector Library to apply incoming stream records to the destination table in real-time
  • Creates a DynamoDB checkpoint table using the given or default taskName, used when restoring from crashes.
    • WARNING: Each replication process requires a different taskName. Overlapping names will result in strange, unpredictable behavior. Please also delete this DynamoDB checkpoint table if you wish to completely restart replication. See how a default taskName is calculated below in section "Advanced: running replication process across multiple machines".
  • Publishes default KCL CloudWatch metrics to report the number of records and bytes processed. For more information, please refer to the official KCL documentation. CloudWatch metric publishing can be disabled with the --dontPublishCloudwatch flag.
  • Produces logs locally according to the default log4j configuration file, which produces 2 separate log files: one for the KCL process and one for the rest of the connector application. You may use your own log4j.properties file to override these defaults. In addition, AWS CloudWatch offers a monitoring agent to automatically push local logs to your AWS CloudWatch account, if needed.
  • You can override the source and destination DynamoDB endpoints with the --sourceEndpoint and --destinationEndpoint command line arguments, and the DynamoDB Streams source endpoint with the --sourceStreamsEndpoint command line argument. The main use case for overriding an endpoint is to use DynamoDB Local on one or both ends of the replication pipeline, or for the KCL leases and checkpoints.
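As an example of endpoint overriding, the following sketch runs the connector against a DynamoDB Local instance on its default port (http://localhost:8000) as the source and a real table in us-west-2 as the destination. The table names are hypothetical, and the exact set of flags your version requires may differ, so check --help:

    java -jar target/dynamodb-cross-region-replication-1.2.1.jar --sourceEndpoint http://localhost:8000 --sourceRegion us-east-1 --sourceTable myTable --destinationRegion us-west-2 --destinationTable myTableReplica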

NOTE: More information on the design and internal structure of the connector library can be found in the design doc. Please note it is your responsibility to ensure the connector process is up and running at all times - replication stops as soon as the process is killed, though upon resuming the process automatically uses the checkpoint table in DynamoDB to restore progress.

Advanced: running replication process across multiple machines

With extremely large tables or tables with high throughput, it might be necessary to split the replication process across multiple machines. In this case, simply kick off the executable jar with the same command on each machine (i.e. one KCL worker per machine). The processes use the DynamoDB checkpoint table to coordinate and distribute work among them; as a result, it is essential that every process uses the same taskName. If you do not specify a taskName, the same default is computed for each process.

  • Default taskName = MD5 hash of (sourceTableRegion + sourceTableName + destinationTableRegion + destinationTableName)
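A minimal Java sketch of that rule, assuming plain string concatenation and UTF-8 encoding; the library's own implementation may differ in detail, so treat this only as an illustration of how the default name is derived:

    import java.math.BigInteger;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class DefaultTaskNameSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical values -- substitute your own regions and table names
            String sourceTableRegion = "us-east-1";
            String sourceTableName = "myTable";
            String destinationTableRegion = "us-west-2";
            String destinationTableName = "myTableReplica";

            // Concatenate in the documented order and take the MD5 hash
            String input = sourceTableRegion + sourceTableName
                    + destinationTableRegion + destinationTableName;
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(input.getBytes(StandardCharsets.UTF_8));

            // Print the digest as a 32-character hex string (leading zeros preserved)
            System.out.println(String.format("%032x", new BigInteger(1, digest)));
        }
    }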

Advanced: replicating multiple tables

Each instantiation of the jar executable is for a single replication path only (i.e. one source DynamoDB table to one destination DynamoDB table). To enable replication for multiple tables or create multiple replicas of the same table, a separate instantiation of the cross-region replication library is required. Some examples of replication setup:

Replication Scenario 1: One source table in us-east-1, one replica in each of us-west-2, us-west-1, and eu-west-1

  • Number of Processes Required: 3 cross-region replication processes: one from us-east-1 to us-west-2, one from us-east-1 to us-west-1, and one from us-east-1 to eu-west-1

Replication Scenario 2: Two source tables (table1 & table2) in us-east-1, both replicated separately to us-west-2

  • Number of Processes Required: 2 cross-region replication processes: one for table1 from us-east-1 to us-west-2, and one for table2 from us-east-1 to us-west-2

Can multiple cross-region replication processes run on the same machine?

  • Yes, feel free to launch multiple processes on the same machine to optimize resource usage. However, it is highly recommended that you monitor one process first to understand its CPU, memory, network and other resource footprint. In general, bigger tables and higher-throughput tables require more resources.

How can I ensure the process is always up and running?

How can I build the library and run tests?

Execute mvn clean verify -Pintegration-tests on the command line. This will download DynamoDB Local and run an integration test against the local instance with CloudWatch metrics disabled.

dynamodb-cross-region-library's People

Contributors

afitzgibbon, amcp, amey91, dependabot[bot], dymaws, hyandell, prithviramanathan, schwar


dynamodb-cross-region-library's Issues

wrong sourceDynamodbStreamsEndpoint

Should line 121 of CommandLineInterface.java use params.getSourceStreamsEndpoint() instead of params.getSourceEndpoint()?

So, it would end up being:

sourceDynamodbStreamsEndpoint = Optional.fromNullable(params.getSourceStreamsEndpoint());

instead of the current one

sourceDynamodbStreamsEndpoint = Optional.fromNullable(params.getSourceEndpoint());

At least, that change fixed an exception I got when trying it against AWS servers instead of DynamoDB Local.

Sample for multi master model

This video shows examples of the multi-master model very quickly and does not give much detail. I am also not able to find much detail anywhere.
  1. Where can I find the test application that was shown in the video?
  2. How do I get access to the DynamoDB toolbox console? This library does not have code for the UI.

Please help.

Replication server API throwing 500

When I run the replication server locally, I get a MessageBodyProviderNotFoundException exception when I try to call the API. Is the dependency list missing a json provider?

Example:

POST http://localhost:7000
{
"Command": "ListReplicationGroupsRequest",
"Arguments": {
  "ExclusiveStartReplicationGroupName": "",
  "Limit": 1
},
"Version": 1
}

I see the following exceptions in the stacktrace:

javax.servlet.ServletException: org.glassfish.jersey.message.internal.MessageBodyProviderNotFoundException: MessageBodyWriter not found for media type=application/octet-stream, type=class java.util.ArrayList, genericType=class java.util.ArrayList.
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:373)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:372)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:335)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:218)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:686)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
    at org.eclipse.jetty.server.Server.handle(Server.java:370)
    at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
    at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:960)
    at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1021)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
    at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
    at java.lang.Thread.run(Thread.java:745)
Caused by: 
org.glassfish.jersey.message.internal.MessageBodyProviderNotFoundException: MessageBodyWriter not found for media type=application/octet-stream, type=class java.util.ArrayList, genericType=class java.util.ArrayList.
    at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:227)
    at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:149)
    at org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:103)
    at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:149)
    at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:88)
    at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:149)
    at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1139)
    at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:562)
    at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:357)
    at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:347)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:258)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:318)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:235)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:983)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:359)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:372)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:335)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:218)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:686)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
    at org.eclipse.jetty.server.Server.handle(Server.java:370)
    at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
    at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:960)
    at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1021)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
    at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
    at java.lang.Thread.run(Thread.java:745)
2015-07-22 11:55:38.431:WARN:oejs.ServletHandler:/
org.glassfish.jersey.message.internal.MessageBodyProviderNotFoundException: MessageBodyWriter not found for media type=application/octet-stream, type=class java.util.ArrayList, genericType=class java.util.ArrayList.
    at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:227)
    at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:149)
    at org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:103)
    at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:149)
    at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:88)
    at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:149)
    at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1139)
    at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:562)
    at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:357)
    at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:347)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:258)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:318)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:235)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:983)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:359)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:372)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:335)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:218)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:686)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
    at org.eclipse.jetty.server.Server.handle(Server.java:370)
    at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
    at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:960)
    at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1021)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
    at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
    at java.lang.Thread.run(Thread.java:745)

Version conflict while building

Getting a build failure due to conflicts when trying to build the library from a fresh repo (after a LOOOONG time downloading all dependencies).

output of mvn install below:

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building DynamoDB Cross-region Replication 1.1.0
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.993 s
[INFO] Finished at: 2016-08-01T21:23:04-04:00
[INFO] Final Memory: 51M/1218M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project dynamodb-cross-region-replication: Could not resolve dependencies for project com.amazonaws:dynamodb-cross-region-replication:jar:1.1.0: Failed to collect dependencies for com.amazonaws:dynamodb-cross-region-replication:jar:1.1.0: Could not resolve version conflict among [com.amazonaws:aws-java-sdk-dynamodb:jar:[1.10.5.1,1.11.0), com.amazonaws:dynamodb-streams-kinesis-adapter:jar:[1.0.0,2.0.0) -> com.amazonaws:aws-java-sdk-dynamodb:jar:[1.11.7,2.0.0), com.amazonaws:dynamodb-streams-kinesis-adapter:jar:[1.0.0,2.0.0) -> com.amazonaws:amazon-kinesis-client:jar:[1.6.0,1.7.0) -> com.amazonaws:aws-java-sdk-dynamodb:jar:1.11.14] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException

Enhancement and Suggestion Needed

Hi,

After several weeks using cross-region replication library, I noticed several missing features here.

  1. Configurable CPU units and Memory size in replication console [important]

I noticed that DynamoDBReplicationConnector requires 256 CPU units and 512 MB of memory. However, an "Out of Memory" issue occurs frequently.

Reason OutOfMemoryError: Container killed due to memory usage


I've decided to add some instances to my autoscaling group, but the tasks are not distributed properly. As a consequence, there is a short outage (2-4 minutes) for each failure.

screen shot 2016-01-13 at 6 57 26 pm

Any workaround for this issue (without updating ECS task manually)?

  2. Issues in CloudFormation template [important]

I've consulted the ECS team (aws/amazon-ecs-agent#277 (comment)) and they suggested using ecs-init in your template.

In addition, the provided SSH keypair in CloudFormation is not working for the replication coordinator component. It only works for the connectors.


As a consequence, I cannot access my replication coordinator directly unless I modify the provided template.

  3. Configurable throughput for the KCL and metadata tables via the replication console

We can still change it from the DynamoDB page, but it would be better if we could set it from the replication console at the beginning.

  4. Add an option to replicate GSIs and LSIs from the master table via the replication console

  5. Add CloudFormation template support for t-class instances (since they require a private VPC)

Thank you!

Authentication details

Can you please mention what authentication this requires and how to set the credentials?
I am seeing this error after running the command line.

2016-11-11 11:19:11,085 FATAL com.amazonaws.services.dynamodbv2.streams.connectors.CommandLineInterface - com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain

How to solve the shard split?

I have a question about shard splits. As mentioned in the AWS documentation, a shard may be split into several shards. My question is: will only the latest shard split? For example, a shard that keeps receiving writes is split into two shards, A and B; A is sealed while B keeps receiving writes.
Is my understanding correct? Will a sealed shard split?

Support two way replication between two tables

I have a business case where the data needs to be replicated between two DynamoDB tables, keeping them synchronized both ways. The current cross-region replication library does not support this feature, but it is much needed.

I have made the required changes and introduced this feature. This issue should be labeled as an enhancement.

~Thanks

missing dependency DynamoDBLocal & dynamodb-streams-kinesis-adapter

After checking out the project, I'm unable to build due to missing dependencies:

[WARNING] The POM for com.amazonaws:DynamoDBLocal:jar:1.10.5.1 is missing, no dependency information available

[WARNING] The POM for com.amazonaws:dynamodb-streams-kinesis-adapter:jar:1.0.0 is missing, no dependency information available

I don't see these jars in maven central yet?

Kinesis NullPointer Exception

Hi,

I am attempting to set up a second replication group from an existing table. The copy appears to be working, and takes ~16 hours. After the copy finishes, the DynamoDBReplicationConnector task shows up, and is running.
It is not keeping the tables in sync.

I see this in the log:
ERROR com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShutdownTask - Application exception.
java.lang.NullPointerException
    at com.amazonaws.services.kinesis.connectors.KinesisClientLibraryPipelinedRecordProcessor.shutdown(KinesisClientLibraryPipelinedRecordProcessor.java:160)
    at com.amazonaws.services.kinesis.clientlibrary.lib.worker.V1ToV2RecordProcessorAdapter.shutdown(V1ToV2RecordProcessorAdapter.java:48)
    at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShutdownTask.call(ShutdownTask.java:94)
    at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:48)
    at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:23)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Also seeing this:
com.amazonaws.AmazonClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:478)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:302)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1581)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.putItem(AmazonDynamoDBClient.java:746)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBAsyncClient$20.call(AmazonDynamoDBAsyncClient.java:920)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBAsyncClient$20.call(AmazonDynamoDBAsyncClient.java:916)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
    at org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:226)
    at org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:195)
    at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.amazonaws.http.conn.ClientConnectionRequestFactory$Handler.invoke(ClientConnectionRequestFactory.java:70)
    at com.amazonaws.http.conn.$Proxy8.getConnection(Unknown Source)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:423)
    at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:706)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:467)

Replication Group disappeared from console

Hi,
I created a replication group yesterday, and it no longer shows up in the console. The replication appears to still be working properly between the tables.

I tried "restarting the app server" through beanstalk, but no luck. I can see the name of the group in the DynamoDBReplicationCoordinatorMetadata DynamoDB table.

Replication does not work after stopping and restarting replication using the DynamoDB Local demo

I ran the demo following the steps in the documentation, but replication does not work after I stop replication for the master-slave setup and start it again. I printed the details for each table on the master and replica as follows:

TN=>master
Desc=>{Table: {AttributeDefinitions: [{AttributeName: device_id,AttributeType: S}, {AttributeName: time,AttributeType: S}],TableName: master,KeySchema: [{AttributeName: device_id,KeyType: HASH}, {AttributeName: time,KeyType: RANGE}],TableStatus: ACTIVE,CreationDateTime: Wed Mar 11 14:50:15 CST 2015,ProvisionedThroughput: {LastIncreaseDateTime: Thu Jan 01 08:00:00 CST 1970,LastDecreaseDateTime: Thu Jan 01 08:00:00 CST 1970,NumberOfDecreasesToday: 0,ReadCapacityUnits: 10,WriteCapacityUnits: 10},TableSizeBytes: 101083,ItemCount: 722,StreamSpecification: {StreamEnabled: true,StreamViewType: NEW_AND_OLD_IMAGES},LatestStreamId: eb0a191797624dd3a48fa681d30612121426056615782ccf83}}
==========================
TN=>replica
Desc=>{Table: {AttributeDefinitions: [{AttributeName: device_id,AttributeType: S}, {AttributeName: time,AttributeType: S}],TableName: replica,KeySchema: [{AttributeName: device_id,KeyType: HASH}, {AttributeName: time,KeyType: RANGE}],TableStatus: ACTIVE,CreationDateTime: Wed Mar 11 14:50:16 CST 2015,ProvisionedThroughput: {LastIncreaseDateTime: Thu Jan 01 08:00:00 CST 1970,LastDecreaseDateTime: Thu Jan 01 08:00:00 CST 1970,NumberOfDecreasesToday: 0,ReadCapacityUnits: 10,WriteCapacityUnits: 10},TableSizeBytes: 13452,ItemCount: 114,StreamSpecification: {StreamEnabled: false,},}}
==========================

And below is the output of the demo process:

Mar 11, 2015 2:51:25 PM com.amazonaws.services.dynamodbv2.replication.manager.models.ReplicationGroupCoordinator stopReplication
INFO: Stopping replication: null
Mar 11, 2015 2:51:36 PM com.amazonaws.services.dynamodbv2.replication.manager.models.ReplicationGroupCoordinator startReplication
INFO: Starting replication: null

And now the replication group status is "BOOTSTRAPPING". The status has not changed even after 10 minutes.

Thanks

Create Fails, not seeing any logs?

I have created a replication group with an existing table and master. When I create, it says "WAITING" under the "replication connections", then eventually just says "CREATE FAILED"

Is there any place I should be looking for indications as to why it failed or why it's waiting? I don't see anything helpful in beanstalk logs.

Troubleshooting

Can you tell me how long it should take to replicate an 11 MB table? Where is the best place to troubleshoot via logs? Does an empty table need to be created beforehand? I get a table created with this name (DynamoDBCrossRegionReplication), but nothing else for over 6 hours.

com.amazonaws.AmazonServiceException InvalidSignatureException

I've verified the key pair is correct and valid. It's defined in ~/.aws/config

java -jar dynamodb-cross-region-replication-1.1.0.jar --sourceEndpoint dynamodb.us-east-1.amazonaws.com --sourceTable --destinationEndpoint dynamodb.us-west-1.amazonaws.com --destinationTable

com.amazonaws.AmazonServiceException: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.

The Canonical String for this request should have been
'POST
/

amz-sdk-invocation-id:6f137dc7-3c6e-45e3-99fb-0416dffeeae8
amz-sdk-retry:10/12800/
content-length:51
content-type:application/x-amz-json-1.0
host:dynamodb.us-east-1.amazonaws.com
user-agent:aws-sdk-java/1.10.77 Linux/4.4.11-23.53.amzn1.x86_64 OpenJDK_64-Bit_Server_VM/24.95-b01/1.7.0_101
x-amz-date:20160819T161205Z
x-amz-target:DynamoDB_20120810.DescribeTable

amz-sdk-invocation-id;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-date;x-amz-target
c861b51d332699336e6de18a9edc332105ab9ad2cd16a17afce7387cc6586a1d'

The String-to-Sign should have been
'AWS4-HMAC-SHA256
20160819T161205Z
20160819/us-east-1/dynamodb/aws4_request
b000cd77e2b2905c8a49622056d52854ace5435f3049c46924f11b100bc23c72'
(Service: AmazonDynamoDBv2; Status Code: 400; Error Code: InvalidSignatureException; Request ID: MGK51SAPODBMFDCJTK14ITU1VNVV4KQNSO5AEMVJF66Q9ASUAAJG)

Build failure

As I try to compile the source after my recent pull, I keep getting a build failure.

Can someone take a look and let me know?

[INFO] Reactor Summary:
[INFO]
[INFO] DynamoDB Connectors ................................ SUCCESS [ 43.707 s]
[INFO] DynamoDB Replication Coordinator ................... FAILURE [ 3.905 s]
[INFO] DynamoDB Replication Server ........................ SKIPPED
[INFO] DynamoDB Table Copy Client ......................... SKIPPED
[INFO] DynamoDB Table Copy Nanny .......................... SKIPPED
[INFO] DynamoDB Cross-region Replication Library .......... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 47.791 s
[INFO] Finished at: 2016-03-21T12:50:56+05:30
[INFO] Final Memory: 51M/1025M
[INFO] ----------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.3:compile (default-compile) on project dynamodb-replication-coordinator: Compilation failure: Compilation failure:
[ERROR] /dynamodb-cross-region-library/dynamodb-replication-coordinator/src/main/java/com/amazonaws/services/dynamodbv2/replication/AwsAccess.java:[23,45] package com.amazonaws.services.cloudformation does not exist
[ERROR] /dynamodb-cross-region-library/dynamodb-replication-coordinator/src/main/java/com/amazonaws/services/dynamodbv2/replication/AwsAccess.java:[24,45] package com.amazonaws.services.cloudformation does not exist
[ERROR] /dynamodb-cross-region-library/dynamodb-replication-coordinator/src/main/java/com/amazonaws/services/dynamodbv2/replication/AwsAccess.java:[29,34] package com.amazonaws.services.ecs does not exist
[ERROR] /dynamodb-cross-region-library/dynamodb-replication-coordinator/src/main/java/com/amazonaws/services/dynamodbv2/replication/AwsAccess.java:[30,34] package com.amazonaws.services.ecs does not exist
[ERROR] /dynamodb-cross-region-library/dynamodb-replication-coordinator/src/main/java/com/amazonaws/services/dynamodbv2/replication/AwsAccess.java:[55,31] cannot find symbol
[ERROR] symbol: class AmazonCloudFormation
[ERROR] location: class com.amazonaws.services.dynamodbv2.replication.AwsAccess
[ERROR] /dynamodb-cross-region-library/dynamodb-replication-coordinator/src/main/java/com/amazonaws/services/dynamodbv2/replication/AwsAccess.java:[60,31] cannot find symbol
[ERROR] symbol: class AmazonECS
[ERROR] location: class com.amazonaws.services.dynamodbv2.replication.AwsAccess
[ERROR] /dynamodb-cross-region-library/dynamodb-replication-coordinator/src/main/java/com/amazonaws/services/dynamodbv2/replication/AwsAccess.java:[111,25] cannot find symbol
[ERROR] symbol: class AmazonCloudFormation
[ERROR] location: class com.amazonaws.services.dynamodbv2.replication.AwsAccess
[ERROR] /dynamodb-cross-region-library/dynamodb-replication-coordinator/src/main/java/com/amazonaws/services/dynamodbv2/replication/AwsAccess.java:[127,25] cannot find symbol
[ERROR] symbol: class AmazonECS

(question not issue) Using Lambda

I am just starting to learn about dynamodb cross-region sync'ing, and I was wondering whether something like this could be done with Lambda?

connector/tablecopy task image missing from repository

The Dockerfile and scripts necessary to deploy an image for the tablecopy and connector tasks are not included in the repository. Ideally they would be included, similar to how the coordinator is included under cloud-formation/coordinatorContainer.

Currently, to deploy a custom jar for either of these components, the existing image must be pulled, updated, and then pushed as a new image, which feels a little messy. I would like to be able to build a new image without worrying about missing any configuration or resources.

how to set region for checkpoint table?

The library is creating checkpoint tables ("DynamoDBCrossRegionReplication*") in the default region (us-east-1 for me), and not in the region of the source (?) table.

I tried setting the AWS_REGION env var and AWS_DEFAULT_REGION; my ~/.aws/credentials does not have a default region... no matter which region endpoint I choose as the source, or what I put into the AWS_*REGION env vars, the checkpoint tables always end up in us-east-1.

how to configure this?

Cannot find 'pushd' in setup.sh script for demo

I ran the setup.sh script in the demo/target/dynamodb-cross-region-replication-demo directory and it throws "command not found" errors like this:

./setup.sh: 15: ./setup.sh: popd: not found
### Unpacking DynamoDB Local

./setup.sh: 20: ./setup.sh: pushd: not found

So I think this script should start with "#!/bin/bash". It works for me after that change.
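A sketch of the suggested change, assuming the rest of the script stays as-is (pushd and popd are bash builtins and are not available in a plain POSIX sh such as dash):

    #!/bin/bash
    # ...rest of setup.sh unchanged; pushd/popd now resolve as bash builtins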

Thanks

Replica table indexes

When a replication group is created, the indexes aren't copied from source to replicas.

ProjectionType Casting error

Casting error when creating a replication group whose tables have indexes.

2016-04-23 14:56:42,104 ERROR com.amazonaws.services.dynamodbv2.replication.server.api.DynamoDBReplicationGroupResource - Exception while processing coordinator command: {"Command": "CreateReplicationGroupRequest", "Arguments": {"KeySchema": [{"KeyType": "HASH", "AttributeName": "ApiKey_ID"}], "ReplicationGroupName": "RG_Erad_ApiKey", "ReplicationGroupMembers": {"arn:aws:dynamodb:us-west-2:939466447253:table/Erad_ApiKey": {"GlobalSecondaryIndexes": [{"ProvisionedThroughput": {"WriteCapacityUnits": 2, "ReadCapacityUnits": 3}, "IndexName": "Account_ID-index", "Projection": {"ProjectionType": "ALL"}, "KeySchema": [{"keyType": "HASH", "attributeName": "Account_ID"}]}, {"ProvisionedThroughput": {"WriteCapacityUnits": 2, "ReadCapacityUnits": 3}, "IndexName": "ApiKeyName-index", "Projection": {"ProjectionType": "ALL"}, "KeySchema": [{"keyType": "HASH", "attributeName": "ApiKeyName"}]}], "Endpoint": "https://dynamodb.us-west-2.amazonaws.com", "LocalSecondaryIndexes": [], "StreamEnabled": true, "ProvisionedThroughput": {"WriteCapacityUnits": 2, "ReadCapacityUnits": 3}, "Connectors": [], "ARN": "arn:aws:dynamodb:us-west-2:939466447253:table/Erad_ApiKey"}, "arn:aws:dynamodb:us-east-1:939466447253:table/Erad_ApiKey": {"GlobalSecondaryIndexes": [{"ProvisionedThroughput": {"WriteCapacityUnits": 2, "ReadCapacityUnits": 3}, "IndexName": "Account_ID-index", "Projection": {"ProjectionType": "ALL"}, "KeySchema": [{"keyType": "HASH", "attributeName": "Account_ID"}]}, {"ProvisionedThroughput": {"WriteCapacityUnits": 2, "ReadCapacityUnits": 3}, "IndexName": "ApiKeyName-index", "Projection": {"ProjectionType": "ALL"}, "KeySchema": [{"keyType": "HASH", "attributeName": "ApiKeyName"}]}], "Endpoint": "https://dynamodb.us-east-1.amazonaws.com", "LocalSecondaryIndexes": [], "StreamEnabled": false, "ProvisionedThroughput": {"WriteCapacityUnits": 2, "ReadCapacityUnits": 3}, "Connectors": [{"SourceTableArn": "arn:aws:dynamodb:us-west-2:939466447253:table/Erad_ApiKey", "SourceTableEndpoint": "https://dynamodb.us-west-2.amazonaws.com"}], "ARN": "arn:aws:dynamodb:us-east-1:939466447253:table/Erad_ApiKey", "TableCopyTask": {"SourceTableArn": "arn:aws:dynamodb:us-west-2:939466447253:table/Erad_ApiKey", "SourceTableEndpoint": "https://dynamodb.us-west-2.amazonaws.com"}}}, "ConnectorType": "SINGLE_MASTER_TO_READ_REPLICA", "AttributeDefinitions": [{"AttributeName": "Account_ID", "AttributeType": "S"}, {"AttributeName": "ApiKeyName", "AttributeType": "S"}, {"AttributeName": "ApiKey_ID", "AttributeType": "S"}]}} with error: java.lang.ClassCastException: java.lang.String cannot be cast to com.amazonaws.services.dynamodbv2.model.ProjectionType

Creating replication group silently fails if non-NEW_AND_OLD_IMAGES stream exists

In DynamoDBReplicationUtilities.createTableIfNotExists, the current stream specification is compared against the desired stream specification, and the table's config is updated if they don't match.

This works fine if streams aren't enabled in the first place. However, if streams are already enabled, but the stream type is something other than NEW_AND_OLD_IMAGES, the UpdateTable call fails with a ResourceInUse exception.

As a side note, this was pretty difficult to debug, because the resulting exception isn't actually printed - you just get the following log line in CloudWatch:

2015-09-06 12:37:41,108 ERROR com.amazonaws.services.dynamodbv2.replication.coordinator.state.DynamoDBReplicationGroupCreationStarted - Unable to create table for replication member with ARN: <arn>

throw grunt error

Hi

I ran the mvn install command to build the latest project, but it throws the following exception.


[INFO] Amazon DynamoDB Cross Region Replication Library .. SUCCESS [3.071s]
[INFO] Amazon DynamoDB Cross Region Replication Manager .. FAILURE [14.435s]
[INFO] Amazon DynamoDB Cross Region Replication Demo ..... SKIPPED
[INFO] Amazon DynamoDB Cross Region ...................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 17.800s
[INFO] Finished at: Tue Mar 10 17:59:47 CST 2015
[INFO] Final Memory: 17M/213M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.github.trecloux:yeoman-maven-plugin:0.1:build (default) on project dynamodb-cross-region-replication-manager: Error during : grunt --no-color: Process exited with an error: 6 (Exit value: 6) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command

[ERROR] mvn -rf :dynamodb-cross-region-replication-manager

npm version is 2.5.1
grunt-cli v0.1.13
grunt v0.4.5

Thanks.

[Feature Request / Enhancement] Traffic Compression to save money

Hey AWS Labs guys. You've done a terrific job with this utility for replicating DynamoDB tables across several regions. We've been using it for quite some time and it works just fine. Our DynamoDB tables are huge and constantly changing, so the traffic generated by the tool to keep the tables replicated is huge as well; we are talking about several TBs monthly, which is leaving its mark on the AWS bill.

It would be nice to be able to enable some sort of compression on the traffic received/sent from/to DynamoDB, so we would be able to save a lot of money.

I'd appreciate any feedback/comments on this request.

Thanks a lot

Missing commands for dynamodb-table-copy-utilities in ./bin

The table copy util readme contains the following steps:

  1. npm install
  2. Run commands in ./bin

After running npm install, a bin directory is not created, nor do I see any code implementing the command line client in the repository.

I see references in the Nanny lib for /opt/dynamodb-tablecopy/DynamoDBTableCopyUtilities/bin/copy_table but cannot find any relevant implementation.

Is the complete source for the command line client supposed to be contained in this repo or have I overlooked something?

cannot build 1.1.0

We can no longer build this project. We had a Jenkins job that was building this project, but when we try to build the 1.1.0 tag, it fails like so:

$ mvn install
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building DynamoDB Cross-region Replication 1.1.0
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 24.728 s
[INFO] Finished at: 2017-05-19T17:30:14-04:00
[INFO] Final Memory: 88M/1381M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project dynamodb-cross-region-replication: Could not resolve dependencies for project com.amazonaws:dynamodb-cross-region-replication:jar:1.1.0: Failed to collect dependencies for com.amazonaws:dynamodb-cross-region-replication:jar:1.1.0: Could not resolve version conflict among [com.amazonaws:aws-java-sdk-dynamodb:jar:[1.10.5.1,1.11.0), com.amazonaws:dynamodb-streams-kinesis-adapter:jar:[1.0.0,2.0.0) -> com.amazonaws:aws-java-sdk-dynamodb:jar:[1.11.115,2.0.0), com.amazonaws:dynamodb-streams-kinesis-adapter:jar:[1.0.0,2.0.0) -> com.amazonaws:amazon-kinesis-client:jar:[1.7.5,1.8.0) -> com.amazonaws:aws-java-sdk-dynamodb:jar:1.11.115] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
alexgray at Alex-MBP in ~/workspace/allrepos/dynamodb-cross-region-library on (no branch)△
$ mvn --version
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T11:41:47-05:00)
Maven home: /usr/local/Cellar/maven/3.3.9/libexec
Java version: 1.8.0_65, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.11.6", arch: "x86_64", family: "mac"
alexgray at Alex-MBP in ~/workspace/allrepos/dynamodb-cross-region-library on (no branch)△
$ java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)

Any ideas what could be happening?

Disabling CloudWatch Metrics

We are working on DynamoDB cross-region replication using the latest CRR library (1.2.1). We have replicated almost 100 tables from Ireland to Oregon by running the replication jar on autoscaled EC2 instances.
The current cross-region replication setup runs on EC2 instances that are configured to periodically post memory utilization data for the Auto Scaling group to AWS CloudWatch, using the custom monitoring scripts provided by AWS. This data is used to raise alarms that trigger the auto scaling policies to add/remove an instance based on a memory utilization threshold.
We are using the CloudWatch service to raise alarms based on memory utilization statistics provided by AWS CloudWatch custom metrics and monitoring scripts.
We are using the CRR jar 1.2.1 provided by AWS on GitHub for our current cross-region replication solution, which manages the streams for data replication internally on its own. It uses the IAM user dynamodbCrossReplication and calls the PutMetricData API on CloudWatch.
We want to reduce the CloudWatch charges caused by the PutMetricData API calls made by the IAM user dynamodbCrossReplication.
We would also like to mention that the autoscaled instances are configured on the basis of memory utilization statistics using AWS CloudWatch custom metrics and monitoring scripts.
As suggested by the source, one option is to disable the CloudWatch metrics by passing the --dontPublishCloudwatch flag in the jar command, as shown below:
java -jar dynamodb-cross-region-replication-1.2.1.jar --sourceRegion <source_region> --sourceTable <source_table_name> --destinationRegion <destination_region> --destinationTable <destination_table_name> --dontPublishCloudwatch
With this flag, the number of records and bytes processed will not be recorded when the jar is executed.

Alternatively, it looks like we are polling a specific metric at a very high frequency and would need to reduce the polling frequency, which can be done by changing the value of DEFAULT_PARENT_SHARD_POLL_INTERVAL_MILLIS in DynamoDBConnectorConstants.java (default 10000L, i.e. 10 seconds) so that the metric is polled less often (a sketch of such a change follows).
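A hedged sketch of what that change might look like, assuming the constant is a long value in milliseconds as described above (the surrounding class is not shown):

    // DynamoDBConnectorConstants.java (sketch): raise the parent-shard poll interval
    // from the default 10 seconds to 60 seconds so the metric is polled less often
    public static final long DEFAULT_PARENT_SHARD_POLL_INTERVAL_MILLIS = 60000L;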

Would either of the above-mentioned approaches affect the running replication processes in any way?

Please help us choose the best of these two solutions (or suggest another, if there is one) to reduce the CloudWatch charges without adversely affecting our current replication setup.

mvn install fails with npm dependency issue

I checked out this repo and started following the directions in your README.md
step 1: downloaded all the preview jars by running the shell script
step 2: ran mvn install from the git root directory

mvn install fails and generates the following log
npm ERR! System Darwin 13.4.0
npm ERR! command "node" "/usr/local/bin/npm" "install"
npm ERR! cwd /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
npm ERR! node -v v0.10.28
npm ERR! npm -v 2.0.0-beta.2
npm ERR! code EPEERINVALID
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR! /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo/npm-debug.log
npm ERR! not ok code 0
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Amazon DynamoDB Cross Region Replication Library .. SUCCESS [ 1.906 s]
[INFO] Amazon DynamoDB Cross Region Replication Manager .. FAILURE [ 2.936 s]
[INFO] Amazon DynamoDB Cross Region Replication Demo ..... SKIPPED
[INFO] Amazon DynamoDB Cross Region ...................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.983 s
[INFO] Finished at: 2015-02-19T13:28:59-08:00
[INFO] Final Memory: 17M/228M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.github.trecloux:yeoman-maven-plugin:0.1:build (default) on project dynamodb-cross-region-replication-manager: Error during : npm install: Process exited with an error: 1 (Exit value: 1) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :dynamodb-cross-region-replication-manager

and here's the npm-debug.log

cat /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo/npm-debug.log

0 info it worked if it ends with ok
1 verbose cli [ 'node', '/usr/local/bin/npm', 'install' ]
2 info using npm@2.0.0-beta.2
3 info using node@v0.10.28
4 verbose node symlink /usr/local/bin/node
5 verbose readDependencies using package.json deps
6 verbose install where, deps [ '/Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo',
6 verbose install [ 'aws-sdk',
6 verbose install 'coffee-script',
6 verbose install 'grunt',
6 verbose install 'grunt-autoprefixer',
6 verbose install 'grunt-bg-shell',
6 verbose install 'grunt-concurrent',
6 verbose install 'grunt-connect-proxy',
6 verbose install 'grunt-contrib-clean',
6 verbose install 'grunt-contrib-compass',
6 verbose install 'grunt-contrib-concat',
6 verbose install 'grunt-contrib-connect',
6 verbose install 'grunt-contrib-copy',
6 verbose install 'grunt-contrib-cssmin',
6 verbose install 'grunt-contrib-htmlmin',
6 verbose install 'grunt-contrib-imagemin',
6 verbose install 'grunt-contrib-jade',
6 verbose install 'grunt-contrib-jshint',
6 verbose install 'grunt-contrib-uglify',
6 verbose install 'grunt-contrib-watch',
6 verbose install 'grunt-curl',
6 verbose install 'grunt-filerev',
6 verbose install 'grunt-google-cdn',
6 verbose install 'grunt-if-missing',
6 verbose install 'grunt-karma',
6 verbose install 'grunt-newer',
6 verbose install 'grunt-ng-annotate',
6 verbose install 'grunt-ng-constant',
6 verbose install 'grunt-svgmin',
6 verbose install 'grunt-tar.gz',
6 verbose install 'grunt-usemin',
6 verbose install 'grunt-wiredep',
6 verbose install 'jshint-stylish',
6 verbose install 'karma',
6 verbose install 'karma-jasmine',
6 verbose install 'karma-ng-html2js-preprocessor',
6 verbose install 'karma-phantomjs-launcher',
6 verbose install 'load-grunt-tasks',
6 verbose install 'nconf',
6 verbose install 'time-grunt' ] ]
7 info preinstall [email protected]
8 verbose readDependencies using package.json deps
9 verbose already installed skipping grunt-concurrent@^0.5.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
10 verbose already installed skipping grunt-connect-proxy@^0.1.11 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
11 verbose already installed skipping grunt-contrib-clean@^0.5.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
12 verbose already installed skipping grunt-contrib-compass@^0.7.2 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
13 verbose already installed skipping grunt-contrib-concat@^0.4.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
14 verbose already installed skipping grunt-contrib-connect@^0.7.1 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
15 verbose already installed skipping grunt-contrib-copy@^0.5.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
16 verbose already installed skipping grunt-contrib-cssmin@^0.9.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
17 verbose already installed skipping grunt-contrib-htmlmin@^0.3.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
18 verbose already installed skipping grunt-contrib-imagemin@^0.7.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
19 verbose already installed skipping grunt-contrib-jade@^0.12.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
20 verbose already installed skipping grunt-contrib-jshint@^0.10.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
21 verbose already installed skipping grunt-contrib-uglify@^0.4.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
22 verbose already installed skipping grunt-contrib-watch@^0.6.1 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
23 verbose already installed skipping grunt-curl@^2.0.2 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
24 verbose already installed skipping grunt-filerev@^0.2.1 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
25 verbose already installed skipping grunt-google-cdn@^0.4.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
26 verbose already installed skipping grunt-if-missing@^1.0.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
27 verbose already installed skipping grunt-karma@~0.8.3 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
28 verbose already installed skipping grunt-newer@^0.7.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
29 verbose already installed skipping grunt-ng-annotate@^0.4.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
30 verbose already installed skipping grunt-ng-constant@^1.0.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
31 verbose already installed skipping grunt-svgmin@^0.4.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
32 verbose already installed skipping [email protected] /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
33 verbose already installed skipping grunt-usemin@^2.1.1 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
34 verbose already installed skipping grunt-wiredep@^1.7.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
35 verbose already installed skipping jshint-stylish@^0.2.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
36 verbose already installed skipping karma@~0.12.21 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
37 verbose already installed skipping karma-jasmine@^0.2.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
38 verbose already installed skipping karma-ng-html2js-preprocessor@^0.1.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
39 verbose already installed skipping karma-phantomjs-launcher@~0.1.4 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
40 verbose already installed skipping load-grunt-tasks@^0.4.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
41 verbose already installed skipping nconf@^0.6.9 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
42 verbose already installed skipping time-grunt@^0.3.1 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
43 verbose already installed skipping aws-sdk@^2.0.19 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
44 verbose already installed skipping grunt@^0.4.1 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
45 verbose already installed skipping coffee-script@^1.8.0 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
46 verbose already installed skipping grunt-autoprefixer@^0.7.2 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
47 verbose already installed skipping grunt-bg-shell@^2.3.1 /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
48 silly resolved []
49 info build /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
50 verbose linkStuff [ false,
50 verbose linkStuff false,
50 verbose linkStuff false,
50 verbose linkStuff '/Users/mapte/Lab/dynamodb-cross-region-library/replication-manager' ]
51 info linkStuff [email protected]
52 verbose linkBins [email protected]
53 verbose linkMans [email protected]
54 verbose rebuildBundles [email protected]
55 verbose rebuildBundles [ '.bin',
55 verbose rebuildBundles 'aws-sdk',
55 verbose rebuildBundles 'coffee-script',
55 verbose rebuildBundles 'grunt',
55 verbose rebuildBundles 'grunt-autoprefixer',
55 verbose rebuildBundles 'grunt-bg-shell',
55 verbose rebuildBundles 'grunt-concurrent',
55 verbose rebuildBundles 'grunt-connect-proxy',
55 verbose rebuildBundles 'grunt-contrib-clean',
55 verbose rebuildBundles 'grunt-contrib-compass',
55 verbose rebuildBundles 'grunt-contrib-concat',
55 verbose rebuildBundles 'grunt-contrib-connect',
55 verbose rebuildBundles 'grunt-contrib-copy',
55 verbose rebuildBundles 'grunt-contrib-cssmin',
55 verbose rebuildBundles 'grunt-contrib-htmlmin',
55 verbose rebuildBundles 'grunt-contrib-imagemin',
55 verbose rebuildBundles 'grunt-contrib-jade',
55 verbose rebuildBundles 'grunt-contrib-jshint',
55 verbose rebuildBundles 'grunt-contrib-uglify',
55 verbose rebuildBundles 'grunt-contrib-watch',
55 verbose rebuildBundles 'grunt-curl',
55 verbose rebuildBundles 'grunt-filerev',
55 verbose rebuildBundles 'grunt-google-cdn',
55 verbose rebuildBundles 'grunt-if-missing',
55 verbose rebuildBundles 'grunt-karma',
55 verbose rebuildBundles 'grunt-newer',
55 verbose rebuildBundles 'grunt-ng-annotate',
55 verbose rebuildBundles 'grunt-ng-constant',
55 verbose rebuildBundles 'grunt-svgmin',
55 verbose rebuildBundles 'grunt-tar.gz',
55 verbose rebuildBundles 'grunt-usemin',
55 verbose rebuildBundles 'grunt-wiredep',
55 verbose rebuildBundles 'jshint-stylish',
55 verbose rebuildBundles 'karma',
55 verbose rebuildBundles 'karma-jasmine',
55 verbose rebuildBundles 'karma-ng-html2js-preprocessor',
55 verbose rebuildBundles 'karma-phantomjs-launcher',
55 verbose rebuildBundles 'load-grunt-tasks',
55 verbose rebuildBundles 'nconf',
55 verbose rebuildBundles 'time-grunt' ]
56 info install [email protected]
57 info postinstall [email protected]
58 info prepublish [email protected]
59 error peerinvalid The package grunt does not satisfy its siblings' peerDependencies requirements!
59 error peerinvalid Peer [email protected] wants grunt@~0.4.2
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.1
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.1
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@^0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.1
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@^0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@>=0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.1
59 error peerinvalid Peer [email protected] wants [email protected]
59 error peerinvalid Peer [email protected] wants grunt@~0.4.1
59 error peerinvalid Peer [email protected] wants grunt@~0.4.1
59 error peerinvalid Peer [email protected] wants grunt@~0.4.1
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
59 error peerinvalid Peer [email protected] wants grunt@>=0.4.0
59 error peerinvalid Peer [email protected] wants grunt@~0.4.0
60 error System Darwin 13.4.0
61 error command "node" "/usr/local/bin/npm" "install"
62 error cwd /Users/mapte/Lab/dynamodb-cross-region-library/replication-manager/yo
63 error node -v v0.10.28
64 error npm -v 2.0.0-beta.2
65 error code EPEERINVALID
66 verbose exit [ 1, true ]

What is the correct set of dependencies that will work? I'm going to try adjusting the version numbers to resolve this, but it would be great if someone could take a look and confirm whether this is really an issue.
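
As a starting point for narrowing this down, a minimal diagnostic sketch (assuming npm 2.x, as in the log above) is to see which installed plugins constrain grunt and which grunt releases exist, so a single 0.4.x version inside every range can be pinned at the top level:

    npm ls grunt              # shows where grunt appears in the tree and flags invalid/unmet peer requirements
    npm view grunt versions   # lists published grunt releases, to pick a 0.4.x version that satisfies all ranges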

Seeing a lot of log entries with java.lang.NoSuchMethodError: com.amazonaws.services.cloudwatch.AmazonCloudWatch.putMetricData(Lcom/amazonaws/services/cloudwatch/model/PutMetricDataRequest;)Lcom/amazonaws/services/cloudwatch/model/PutMetricDataResult;

I'm seeing a lot of stack traces around AmazonCloudWatch.putMetricData in logs while running this library:

Steps to reproduce:

1. Clone this repo
2. mvn clean install
3. cd target/
4. java -jar dynamodb-cross-region-replication-1.1.0.jar --sourceEndpoint https://dynamodb.us-east-1.amazonaws.com --sourceTable --destinationEndpoint https://dynamodb.us-west-2.amazonaws.com --destinationTable

Replication does appear to work; however, kcl.log is full of these stack traces:

2016-08-14 17:01:37,044 ERROR com.amazonaws.services.kinesis.metrics.impl.CWPublisherRunnable - Caught exception thrown by metrics Publisher in CWPublisherRunnable
java.lang.NoSuchMethodError: com.amazonaws.services.cloudwatch.AmazonCloudWatch.putMetricData(Lcom/amazonaws/services/cloudwatch/model/PutMetricDataRequest;)Lcom/amazonaws/services/cloudwatch/model/PutMetricDataResult;
at com.amazonaws.services.kinesis.metrics.impl.DefaultCWMetricsPublisher.publishMetrics(DefaultCWMetricsPublisher.java:63)
at com.amazonaws.services.kinesis.metrics.impl.CWPublisherRunnable.runOnce(CWPublisherRunnable.java:144)
at com.amazonaws.services.kinesis.metrics.impl.CWPublisherRunnable.run(CWPublisherRunnable.java:90)
at java.lang.Thread.run(Thread.java:745)
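
This particular NoSuchMethodError usually indicates that two different AWS Java SDK versions ended up on the classpath (the KCL expects a putMetricData that returns PutMetricDataResult, while an older CloudWatch client that returns void was loaded). A quick way to check which SDK versions Maven is pulling in, as a sketch:

    # Show every com.amazonaws artifact on the build classpath and where it comes from.
    mvn dependency:tree -Dincludes=com.amazonaws

If mismatched aws-java-sdk versions show up, aligning them on a single version (or excluding the stale transitive one) should stop these stack traces.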

IAM policy template for replication?

It would be helpful to add an IAM policy template to the documentation covering the permissions this replication job requires, to simplify setup for new users.
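
For reference, a rough sketch of what such a template might cover is below. The region, account ID, table names, and checkpoint-table name are placeholders, and the exact action list should be verified against what the connector actually calls (it reads the source table's stream, writes to the destination table, maintains a KCL checkpoint table, and publishes CloudWatch metrics):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ReadSourceTableStream",
          "Effect": "Allow",
          "Action": ["dynamodb:DescribeStream", "dynamodb:GetRecords", "dynamodb:GetShardIterator", "dynamodb:ListStreams"],
          "Resource": "arn:aws:dynamodb:<source_region>:<account_id>:table/<source_table>/stream/*"
        },
        {
          "Sid": "WriteDestinationTable",
          "Effect": "Allow",
          "Action": ["dynamodb:DescribeTable", "dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:DeleteItem"],
          "Resource": "arn:aws:dynamodb:<destination_region>:<account_id>:table/<destination_table>"
        },
        {
          "Sid": "KclCheckpointTable",
          "Effect": "Allow",
          "Action": ["dynamodb:CreateTable", "dynamodb:DescribeTable", "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:DeleteItem", "dynamodb:Scan"],
          "Resource": "arn:aws:dynamodb:<source_region>:<account_id>:table/<kcl_checkpoint_table>"
        },
        {
          "Sid": "KclCloudWatchMetrics",
          "Effect": "Allow",
          "Action": "cloudwatch:PutMetricData",
          "Resource": "*"
        }
      ]
    }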

ElasticBeanstalk RED due to JDK7 end of life

Following the steps in the official guide here:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.Walkthrough.Step2.html

The Elastic Beanstalk application is RED due to the error below. The problem is that the Dockerfile used for the coordinator doesn't version-lock the ubuntu base image, so the latest images are built against xenial, which no longer includes openjdk-7-jre. Ref: https://github.com/awslabs/dynamodb-cross-region-library/blob/master/cloud-formation/coordinatorContainer/Dockerfile#L1

I am putting together a PR to fix this.
Edit: here it is #24

I'm more surprised that no one noticed until now.

Error:

 ---> Running in ae2ccaa9a39a
  Reading package lists...
  Building dependency tree...
  Package openjdk-7-jre is not available, but is referred to by another package.
  This may mean that the package is missing, has been obsoleted, or
  is only available from another source

  E: Package 'openjdk-7-jre' has no installation candidate
  The command '/bin/sh -c apt-get -y install openjdk-7-jre python-setuptools wget jq' returned a non-zero code: 100
  Failed to build Docker image aws_beanstalk/staging-app, retrying...
  Sending build context to Docker daemon 12.29 kB
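
The kind of fix the PR makes is to pin the base image to a release that still ships openjdk-7-jre. A minimal sketch of the first line of the Dockerfile (the exact tag chosen in the PR may differ):

    # Lock the base image instead of tracking the latest ubuntu,
    # since xenial and later no longer provide openjdk-7-jre.
    FROM ubuntu:14.04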

JVM heap size isn't adjusted based on memory constraints

I ran into issues with tasks consistently getting OOM killed when I created my ECS cluster using c3.8xlarge instances.

As I understand it, the default JVM behavior is to set the max heap size to 25% of the total system memory. However, when run under a docker memory constraint, it has no ability to detect that less memory than usual is available. On a c3.8xlarge instance, for example, this results in a max heap size of 15G, in spite of the container capping memory usage at 512M. With a max heap that large, the JVM is pretty lazy about GCs, so it runs out of memory within a minute or two.

This should be pretty easy to fix by just passing an -Xmx option explicitly (say, have start_connector.sh look for a JAVA_OPTS envvar?)
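
A minimal sketch of that idea, assuming a 512M container limit; the variable name, default heap size, and jar path are illustrative, not what the repository's start_connector.sh currently does:

    # start_connector.sh (sketch): honor JAVA_OPTS if set, otherwise cap the heap
    # well under the container's 512M limit so the JVM collects garbage instead of
    # growing until the container is OOM-killed.
    JAVA_OPTS="${JAVA_OPTS:--Xmx400m}"
    exec java $JAVA_OPTS -jar /path/to/dynamodb-cross-region-replication.jar "$@"

Passing the limit explicitly keeps heap sizing independent of how much memory the JVM thinks the host has, which is the root of the problem under a Docker memory constraint.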

Replication connector fails silently

Initial condition

I have 6 DynamoDB tables replicated across two different regions. After several weeks of running without encountering the problem reported here, I realized that the DynamoDB replication console does not replicate LSIs, so I needed to create the tables myself.

I removed all replication groups from the replication console and deleted all tables from the replica region. After that, I recreated fresh tables (with the same names as the previous ones) with LSIs. Then I recreated the replication groups using the existing tables in both regions.

Problem

After the DynamoDBTableCopy task finishes, the DynamoDBReplicationConnector is executed. However, at a certain point, replication for 3 of the 6 tables silently stopped.

screen shot 2016-01-07 at 11 18 24 am

As you can see in the screenshot, it works for an hour and a half before it fails. I've tried removing the replication group again, removing the KCL checkpoint tables for those replication groups, and recreating everything, to no avail.

In addition, I've tried increasing the number of workers from 1 to 3 (for each replication group, with 1 master and 1 replica). The problem keeps occurring, even with small throughput (< 5 operations per second).

Current condition

ECS still shows that my task is running properly.

screen shot 2016-01-07 at 11 14 27 am

I accessed the machine directly and ran docker ps inside it; the process is still running as of now.

screen shot 2016-01-07 at 11 17 00 am

screen shot 2016-01-07 at 11 17 55 am

I've checked the CloudWatch logs, but they do not show any error messages. The last error message I received is not related to this problem (judging by the timestamp, it is > 24 hours old).

screen shot 2016-01-07 at 11 15 34 am

The only pattern I notice is the leaseCounter in the KCL tables. In the failed groups, leaseCounter keeps increasing periodically until it reaches 2000 in less than 24 hours (~13 hours). In the running groups, leaseCounter is less than 700.

screen shot 2016-01-07 at 11 19 37 am

Any idea regarding the cause of this problem? Thank you.
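
One diagnostic worth trying, as a sketch (the table name and region are placeholders, and the attribute names assume the standard KCL lease-table schema): the KCL lease table records, per shard, which worker owns the lease and how the leaseCounter and checkpoint are advancing, so scanning it shows whether the stalled groups still have an active leaseOwner or whether checkpoints simply stopped moving.

    aws dynamodb scan --table-name <kcl_checkpoint_table> --region <region> \
        --query 'Items[].{shard:leaseKey.S,owner:leaseOwner.S,counter:leaseCounter.N,checkpoint:checkpoint.S}' \
        --output table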

Elastic Beanstalk set health to Red

Just after creating the stack, Elastic Beanstalk set the health to Red with the following error (screenshot and logs attached):

InvalidParameterValue: Not a valid Join function: Each argument must resolve to a string: {"Ref":"AWSEBInstanceLaunchWaitHandle"}. Verify that your policies allow you to perform 'cloudformation:ListStackResources'

BundleLogs-1461367573393.zip
screen shot 2016-04-22 at 20 25 57

Comparing Master to Replica

When running in production, we would like to be able to verify from time to time that a table matches its replica. We're not sure of the best way to approach this, since the data may be in flight during the comparison.
Is there any solution in place for comparing a table to its replica?
And if not, is there any suggested approach to this validation?
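
Nothing built-in ships with this library for validation, as far as the documentation goes. As a coarse first pass, one sketch using the AWS CLI (table names and regions are placeholders) is to compare total item counts on both sides; this is only a rough signal, because Scan counts are eventually consistent and in-flight replication makes the numbers drift transiently:

    # Rough consistency check: compare total item counts (placeholders in angle brackets).
    # Expect small transient differences while writes are still being replicated.
    aws dynamodb scan --table-name <source_table> --region <source_region> --select COUNT --output text --query Count
    aws dynamodb scan --table-name <destination_table> --region <destination_region> --select COUNT --output text --query Count

For an item-level check, the usual approach is to Scan both tables during a low-write window and diff items by primary key, re-checking any mismatches after the replication lag has had time to clear.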
