
spark-dynamodb's People

Contributors

breachsim, eduardoramirez, kwadhwa18, kylewm, sachee, sboora, timchan-lumoslabs, timkral, traviscrawford

spark-dynamodb's Issues

Does not work with EMR.

This connector does not work with most versions of EMR.

org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.RateLimiter.acquire(I)D
	at com.github.traviscrawford.spark.dynamodb.DynamoDBRelation$$anonfun$scan$1$$anonfun$apply$2$$anonfun$apply$1.apply$mcDI$sp(DynamoDBRelation.scala:138)
	at com.github.traviscrawford.spark.dynamodb.DynamoDBRelation$$anonfun$scan$1$$anonfun$apply$2$$anonfun$apply$1.apply(DynamoDBRelation.scala:137)
	at com.github.traviscrawford.spark.dynamodb.DynamoDBRelation$$anonfun$scan$1$$anonfun$apply$2$$anonfun$apply$1.apply(DynamoDBRelation.scala:137)
	at scala.Option.foreach(Option.scala:257)
	at com.github.traviscrawford.spark.dynamodb.DynamoDBRelation$$anonfun$scan$1$$anonfun$apply$2.apply(DynamoDBRelation.scala:137)
	at com.github.traviscrawford.spark.dynamodb.DynamoDBRelation$$anonfun$scan$1$$anonfun$apply$2.apply(DynamoDBRelation.scala:131)
	at scala.Option.foreach(Option.scala:257)
	at com.github.traviscrawford.spark.dynamodb.DynamoDBRelation$$anonfun$scan$1.apply(DynamoDBRelation.scala:131)
	at com.github.traviscrawford.spark.dynamodb.DynamoDBRelation$$anonfun$scan$1.apply(DynamoDBRelation.scala:115)
	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434) at
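
The java.lang.NoSuchMethodError on com.google.common.util.concurrent.RateLimiter.acquire suggests a Guava version conflict: the connector resolves Guava 18.0 (see the dependency listing in the "Can't install/run" issue below), while EMR typically puts an older Guava on the classpath first. A minimal, unverified sketch of one common workaround is to prefer the user-supplied jars when building the session; shading Guava into an uber-jar is another option.

// Sketch only: these are standard Spark settings, but this combination has not been
// verified against this connector on EMR.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("spark-dynamodb-job")                        // hypothetical app name
  .config("spark.driver.userClassPathFirst", "true")    // prefer user jars on the driver
  .config("spark.executor.userClassPathFirst", "true")  // and on the executors
  .getOrCreate()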

How to use with Gradle in a project?

What should I add to my build.gradle to import this? Is the artifact published anywhere? Sorry if this is obvious, I'm new to Java build tools and need to get this to work.
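
For reference, other issues on this page resolve the artifact from Maven under the coordinates com.github.traviscrawford:spark-dynamodb (for example via spark-submit --packages com.github.traviscrawford:spark-dynamodb:0.0.6, and v0.11 is mentioned as the latest in Maven), so a regular Gradle dependency on those coordinates, alongside your Spark dependencies, should be all that is needed; the exact version to pin is not stated here.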

filter_expression doesn't seem to parse correctly for UUIDs or fields with colons

ParsedFilterExpression seems to have trouble parsing UUIDs and values with colons. Example:
"datasetId = 19e34782-91c4-4143-aaee-2ba81ed0b206" or
"datasetId = dataId:19e34782-91c4-4143-aaee-2ba81ed0b206"

Neither of these parses correctly because of the regexes. Perhaps an "expression-attribute-values" option could be implemented, using the same mechanism AWS uses to allow direct identification of types?

Two failing tests as examples:

 it should "correctly parse equals expressions using a UUID" in {
    val parsedExpr = ParsedFilterExpression("name = 19e34782-91c4-4143-aaee-2ba81ed0b206")
    parsedExpr.expression should be ("#name = :name")
    parsedExpr.expressionNames should contain theSameElementsAs Map("#name" -> "name")
    parsedExpr.expressionValues should contain theSameElementsAs Map(":name" -> "19e34782-91c4-4143-aaee-2ba81ed0b206")
  }

  it should "correctly parse equals expressions which contains a colon" in {
    val parsedExpr = ParsedFilterExpression("name = name:19e34782-91c4-4143-aaee-2ba81ed0b206")
    parsedExpr.expression should be ("#name = :name")
    parsedExpr.expressionNames should contain theSameElementsAs Map("#name" -> "name")
    parsedExpr.expressionValues should contain theSameElementsAs Map(":name" -> "name:19e34782-91c4-4143-aaee-2ba81ed0b206")
  }

Can't write DataFrame result back to DynamoDB Table.

I am trying to write a DataFrame result back to DynamoDB, but I am getting the following issue.

scala> result.write.dynamodb("resultdata")
<console>:29: error: value dynamodb is not a member of org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row]
       result.write.dynamodb("resultdata")

@traviscrawford, has anyone tried this before, or is this something that is not implemented so far?

Reader parses schema but returns no rows

The result of the first command suggests it connects to DynamoDB and parses the schema, but there are no rows in the DataFrame, even though I'm sure the DynamoDB table is not empty.

scala> val users = spark.sqlContext.read.dynamodb("ap-southeast-1", "user")
users: org.apache.spark.sql.DataFrame = [account_name: string, avatar: string ... 24 more fields]

scala> users.count()
res15: Long = 0

Issues with pyspark 2.2 (throttling and filtering)

I'm moving this to a new case since I keep hitting different errors trying to use your plugin.

The command I am trying to use right now is this one:

df = spark.read.format("com.github.traviscrawford.spark.dynamodb").option("region", "us-west-2").option("table", "solr-product").load()
df.take(1)
df.count()

I've constantly been hitting throttling errors when trying to read my table. Additionally, I upped the read capacity to 5,000 and it still hits errors; I am not sure if that is the problem or something else.

It seems the command wants to load the entire table at once. I found that you can also pass server-side filter expressions, which I did, but I still ran into problems. This is the filter expression I was using:
df = spark.read.format("com.github.traviscrawford.spark.dynamodb").option("region", "us-west-2").option("filter_expression", "product_id = 4038609").option("table", "solr-product").load()

This is the error I got

18/01/05 20:40:23 INFO YarnClientSchedulerBackend: Application application_1515175129419_0011 has started running.
18/01/05 20:40:23 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 42939.
18/01/05 20:40:23 INFO NettyBlockTransferService: Server created on 172.16.88.191:42939
18/01/05 20:40:23 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/01/05 20:40:23 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 172.16.88.191, 42939, None)
18/01/05 20:40:23 INFO BlockManagerMasterEndpoint: Registering block manager 172.16.88.191:42939 with 413.9 MB RAM, BlockManagerId(driver, 172.16.88.191, 42939, None)
18/01/05 20:40:23 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 172.16.88.191, 42939, None)
18/01/05 20:40:23 INFO BlockManager: external shuffle service port = 7337
18/01/05 20:40:23 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 172.16.88.191, 42939, None)
18/01/05 20:40:24 INFO EventLoggingListener: Logging events to hdfs:///var/log/spark/apps/application_1515175129419_0011
18/01/05 20:40:24 INFO Utils: Using initial executors = 0, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
18/01/05 20:40:24 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
18/01/05 20:40:24 INFO SharedState: loading hive config file: file:/etc/spark/conf.dist/hive-site.xml
18/01/05 20:40:24 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('hdfs:///user/spark/warehouse').
18/01/05 20:40:24 INFO SharedState: Warehouse path is 'hdfs:///user/spark/warehouse'.
18/01/05 20:40:24 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
18/01/05 20:40:25 WARN CredentialsLegacyConfigLocationProvider: Found the legacy config profiles file at [/home/hadoop/.aws/config]. Please move it to the latest default location [~/.aws/credentials].
18/01/05 20:40:26 INFO CodeGenerator: Code generated in 165.389524 ms
18/01/05 20:40:26 INFO SparkContext: Starting job: json at DynamoDBRelation.scala:62
18/01/05 20:40:26 INFO DAGScheduler: Got job 0 (json at DynamoDBRelation.scala:62) with 2 output partitions
18/01/05 20:40:26 INFO DAGScheduler: Final stage: ResultStage 0 (json at DynamoDBRelation.scala:62)
18/01/05 20:40:26 INFO DAGScheduler: Parents of final stage: List()
18/01/05 20:40:26 INFO DAGScheduler: Missing parents: List()
18/01/05 20:40:26 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[4] at json at DynamoDBRelation.scala:62), which has no missing parents
18/01/05 20:40:26 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 9.2 KB, free 413.9 MB)
18/01/05 20:40:26 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 4.8 KB, free 413.9 MB)
18/01/05 20:40:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.16.88.191:42939 (size: 4.8 KB, free: 413.9 MB)
18/01/05 20:40:26 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1047
18/01/05 20:40:26 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[4] at json at DynamoDBRelation.scala:62) (first 15 tasks are for partitions Vector(0, 1))
18/01/05 20:40:26 INFO YarnScheduler: Adding task set 0.0 with 2 tasks
18/01/05 20:40:27 INFO ExecutorAllocationManager: Requesting 1 new executor because tasks are backlogged (new desired total will be 1)
18/01/05 20:40:30 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.16.88.135:57466) with ID 1
18/01/05 20:40:30 INFO ExecutorAllocationManager: New executor 1 has registered (new total is 1)
18/01/05 20:40:30 WARN TaskSetManager: Stage 0 contains a task of very large size (623 KB). The maximum recommended task size is 100 KB.
18/01/05 20:40:30 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, ip-172-16-88-135.us-west-2.compute.internal, executor 1, partition 0, PROCESS_LOCAL, 637966 bytes)
18/01/05 20:40:30 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, ip-172-16-88-135.us-west-2.compute.internal, executor 1, partition 1, PROCESS_LOCAL, 633356 bytes)
18/01/05 20:40:30 INFO BlockManagerMasterEndpoint: Registering block manager ip-172-16-88-135.us-west-2.compute.internal:42633 with 2.8 GB RAM, BlockManagerId(1, ip-172-16-88-135.us-west-2.compute.internal, 42633, None)
18/01/05 20:40:30 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-172-16-88-135.us-west-2.compute.internal:42633 (size: 4.8 KB, free: 2.8 GB)
18/01/05 20:40:31 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1013 ms on ip-172-16-88-135.us-west-2.compute.internal (executor 1) (1/2)
18/01/05 20:40:31 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1036 ms on ip-172-16-88-135.us-west-2.compute.internal (executor 1) (2/2)
18/01/05 20:40:31 INFO YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 
18/01/05 20:40:31 INFO DAGScheduler: ResultStage 0 (json at DynamoDBRelation.scala:62) finished in 4.214 s
18/01/05 20:40:31 INFO DAGScheduler: Job 0 finished: json at DynamoDBRelation.scala:62, took 4.346569 s
18/01/05 20:40:31 INFO DynamoDBRelation: Table solr-product contains 513643 items using 1147952212 bytes.
18/01/05 20:40:31 INFO DynamoDBRelation: Schema for tableName solr-product: StructType(StructField(ColorSwatches,StringType,true), StructField(IsBeauty,StringType,true), StructField(LTSDateEnd,StringType,true), StructField(LTSDateStart,StringType,true), StructField(LTSFlag,BooleanType,true), StructField(LTSPercentOff,StringType,true), StructField(LTSPrice,StringType,true), StructField(age_code,StringType,true), StructField(age_group,StringType,true), StructField(alt_image_url,StringType,true), StructField(alternate_view_count,LongType,true), StructField(archived,BooleanType,true), StructField(associated_style_id,StringType,true), StructField(associated_style_numbers,StringType,true), StructField(available_color_count,LongType,true), StructField(average_review,DoubleType,true), StructField(brand_display_name,StringType,true), StructField(brand_id,LongType,true), StructField(brand_name,StringType,true), StructField(buy_and_save,StringType,true), StructField(classifier,StringType,true), StructField(classifier_id,StringType,true), StructField(composite_classifier,StringType,true), StructField(custom_holiday_flag,BooleanType,true), StructField(date_created,StringType,true), StructField(date_go_live,StringType,true), StructField(date_image_modified,StringType,true), StructField(date_published,StringType,true), StructField(display_photo_id,StringType,true), StructField(doc_description,StringType,true), StructField(doc_name,StringType,true), StructField(doc_type,StringType,true), StructField(execution_id,LongType,true), StructField(fit_info,StringType,true), StructField(fit_recs_available,BooleanType,true), StructField(fit_type_description,StringType,true), StructField(fulfillment_available_percentage,LongType,true), StructField(gender,StringType,true), StructField(gender_code,StringType,true), StructField(gwp,StringType,true), StructField(image_url,StringType,true), StructField(internal_anniversary_flag,BooleanType,true), StructField(inv_conf,LongType,true), StructField(is_umap_enabled,BooleanType,true), StructField(keyword,StringType,true), StructField(last_modified,StringType,true), StructField(live_status,BooleanType,true), StructField(max_msrp,StringType,true), StructField(max_percent_off,LongType,true), StructField(max_price,DoubleType,true), StructField(med_video_url,StringType,true), StructField(min_msrp,StringType,true), StructField(min_percent_off,LongType,true), StructField(min_price,DoubleType,true), StructField(path_alias,StringType,true), StructField(photos,StringType,true), StructField(product_id,LongType,true), StructField(product_uri,StringType,true), StructField(ready_status,BooleanType,true), StructField(review_count,LongType,true), StructField(sale_max_price,DoubleType,true), StructField(sale_min_price,DoubleType,true), StructField(size_chart_name,StringType,true), StructField(size_info,StringType,true), StructField(special_copy,StringType,true), StructField(style_features,StringType,true), StructField(style_num,StringType,true), StructField(style_num_alias,StringType,true), StructField(subclassifier,StringType,true), StructField(subclassifier_id,LongType,true), StructField(title,StringType,true), StructField(u_map_start_date,StringType,true), StructField(umap_end_date,StringType,true))
18/01/05 20:40:31 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
18/01/05 20:40:31 INFO ContextCleaner: Cleaned accumulator 52
18/01/05 20:40:31 INFO CodeGenerator: Code generated in 28.685884 ms
18/01/05 20:40:31 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 172.16.88.191:42939 in memory (size: 4.8 KB, free: 413.9 MB)
18/01/05 20:40:31 INFO BlockManagerInfo: Removed broadcast_0_piece0 on ip-172-16-88-135.us-west-2.compute.internal:42633 in memory (size: 4.8 KB, free: 2.8 GB)
18/01/05 20:40:31 INFO CodeGenerator: Code generated in 12.303414 ms
18/01/05 20:40:31 INFO SparkContext: Starting job: count at NativeMethodAccessorImpl.java:0
18/01/05 20:40:31 INFO DAGScheduler: Registering RDD 12 (count at NativeMethodAccessorImpl.java:0)
18/01/05 20:40:31 INFO DAGScheduler: Got job 1 (count at NativeMethodAccessorImpl.java:0) with 1 output partitions
18/01/05 20:40:31 INFO DAGScheduler: Final stage: ResultStage 2 (count at NativeMethodAccessorImpl.java:0)
18/01/05 20:40:31 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1)
18/01/05 20:40:31 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1)
18/01/05 20:40:31 INFO DAGScheduler: Submitting ShuffleMapStage 1 (MapPartitionsRDD[12] at count at NativeMethodAccessorImpl.java:0), which has no missing parents
18/01/05 20:40:31 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 12.3 KB, free 413.9 MB)
18/01/05 20:40:31 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 6.2 KB, free 413.9 MB)
18/01/05 20:40:31 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.16.88.191:42939 (size: 6.2 KB, free: 413.9 MB)
18/01/05 20:40:31 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1047
18/01/05 20:40:31 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1 (MapPartitionsRDD[12] at count at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))
18/01/05 20:40:31 INFO YarnScheduler: Adding task set 1.0 with 1 tasks
18/01/05 20:40:31 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, ip-172-16-88-135.us-west-2.compute.internal, executor 1, partition 0, PROCESS_LOCAL, 8442 bytes)
18/01/05 20:40:31 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on ip-172-16-88-135.us-west-2.compute.internal:42633 (size: 6.2 KB, free: 2.8 GB)
18/01/05 20:42:02 WARN Errors: The following warnings have been detected: WARNING: The (sub)resource method stageData in org.apache.spark.status.api.v1.OneStageResource contains empty path annotation.

18/01/05 20:42:03 WARN ServletHandler: 
javax.servlet.ServletException: java.util.NoSuchElementException: None.get
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
	at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
	at org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
	at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
	at org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
	at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
	at org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
	at org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
	at org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
	at org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
	at org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
	at org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
	at org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
	at org.spark_project.jetty.server.Server.handle(Server.java:524)
	at org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
	at org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
	at org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
	at org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
	at org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
	at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
	at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
	at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
	at org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
	at org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.NoSuchElementException: None.get
	at scala.None$.get(Option.scala:347)
	at scala.None$.get(Option.scala:345)
	at org.apache.spark.status.api.v1.MetricHelper.submetricQuantiles(AllStagesResource.scala:313)
	at org.apache.spark.status.api.v1.AllStagesResource$$anon$1.build(AllStagesResource.scala:178)
	at org.apache.spark.status.api.v1.AllStagesResource$.taskMetricDistributions(AllStagesResource.scala:181)
	at org.apache.spark.status.api.v1.OneStageResource$$anonfun$taskSummary$1.apply(OneStageResource.scala:71)
	at org.apache.spark.status.api.v1.OneStageResource$$anonfun$taskSummary$1.apply(OneStageResource.scala:62)
	at org.apache.spark.status.api.v1.OneStageResource$$anonfun$withStageAttempt$1.apply(OneStageResource.scala:130)
	at org.apache.spark.status.api.v1.OneStageResource$$anonfun$withStageAttempt$1.apply(OneStageResource.scala:126)
	at org.apache.spark.status.api.v1.OneStageResource.withStage(OneStageResource.scala:97)
	at org.apache.spark.status.api.v1.OneStageResource.withStageAttempt(OneStageResource.scala:126)
	at org.apache.spark.status.api.v1.OneStageResource.taskSummary(OneStageResource.scala:62)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
	at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:205)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
	at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
	at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
	at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)
	... 28 more
18/01/05 20:42:03 WARN HttpChannel: //ip-172-16-88-191.us-west-2.compute.internal:4040/api/v1/applications/application_1515175129419_0011/stages/2/0/taskSummary?proxyapproved=true
javax.servlet.ServletException: java.util.NoSuchElementException: None.get
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
	at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
	at org.spark_project.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
	at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
	at org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
	at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
	at org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
	at org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
	at org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
	at org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
	at org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
	at org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
	at org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
	at org.spark_project.jetty.server.Server.handle(Server.java:524)
	at org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
	at org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
	at org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
	at org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
	at org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
	at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
	at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
	at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
	at org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
	at org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.NoSuchElementException: None.get
	at scala.None$.get(Option.scala:347)
	at scala.None$.get(Option.scala:345)
	at org.apache.spark.status.api.v1.MetricHelper.submetricQuantiles(AllStagesResource.scala:313)
	at org.apache.spark.status.api.v1.AllStagesResource$$anon$1.build(AllStagesResource.scala:178)
	at org.apache.spark.status.api.v1.AllStagesResource$.taskMetricDistributions(AllStagesResource.scala:181)
	at org.apache.spark.status.api.v1.OneStageResource$$anonfun$taskSummary$1.apply(OneStageResource.scala:71)
	at org.apache.spark.status.api.v1.OneStageResource$$anonfun$taskSummary$1.apply(OneStageResource.scala:62)
	at org.apache.spark.status.api.v1.OneStageResource$$anonfun$withStageAttempt$1.apply(OneStageResource.scala:130)
	at org.apache.spark.status.api.v1.OneStageResource$$anonfun$withStageAttempt$1.apply(OneStageResource.scala:126)
	at org.apache.spark.status.api.v1.OneStageResource.withStage(OneStageResource.scala:97)
	at org.apache.spark.status.api.v1.OneStageResource.withStageAttempt(OneStageResource.scala:126)
	at org.apache.spark.status.api.v1.OneStageResource.taskSummary(OneStageResource.scala:62)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
	at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:205)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
	at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
	at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
	at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)
	... 28 more
18/01/05 20:43:28 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 177440 ms on ip-172-16-88-135.us-west-2.compute.internal (executor 1) (1/1)
18/01/05 20:43:28 INFO YarnScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool 
18/01/05 20:43:28 INFO DAGScheduler: ShuffleMapStage 1 (count at NativeMethodAccessorImpl.java:0) finished in 177.441 s
18/01/05 20:43:28 INFO DAGScheduler: looking for newly runnable stages
18/01/05 20:43:28 INFO DAGScheduler: running: Set()
18/01/05 20:43:28 INFO DAGScheduler: waiting: Set(ResultStage 2)
18/01/05 20:43:28 INFO DAGScheduler: failed: Set()
18/01/05 20:43:28 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[15] at count at NativeMethodAccessorImpl.java:0), which has no missing parents
18/01/05 20:43:28 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 7.0 KB, free 413.9 MB)
18/01/05 20:43:28 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 3.7 KB, free 413.9 MB)
18/01/05 20:43:28 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 172.16.88.191:42939 (size: 3.7 KB, free: 413.9 MB)
18/01/05 20:43:28 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1047
18/01/05 20:43:28 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[15] at count at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))
18/01/05 20:43:28 INFO YarnScheduler: Adding task set 2.0 with 1 tasks
18/01/05 20:43:28 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 3, ip-172-16-88-135.us-west-2.compute.internal, executor 1, partition 0, NODE_LOCAL, 4737 bytes)
18/01/05 20:43:28 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on ip-172-16-88-135.us-west-2.compute.internal:42633 (size: 3.7 KB, free: 2.8 GB)
18/01/05 20:43:28 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 172.16.88.135:57466
18/01/05 20:43:28 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 173 bytes
18/01/05 20:43:28 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 3) in 68 ms on ip-172-16-88-135.us-west-2.compute.internal (executor 1) (1/1)
18/01/05 20:43:28 INFO YarnScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool 
18/01/05 20:43:28 INFO DAGScheduler: ResultStage 2 (count at NativeMethodAccessorImpl.java:0) finished in 0.069 s
18/01/05 20:43:28 INFO DAGScheduler: Job 1 finished: count at NativeMethodAccessorImpl.java:0, took 177.543464 s
0
18/01/05 20:43:29 INFO SparkContext: Invoking stop() from shutdown hook
18/01/05 20:43:29 INFO SparkUI: Stopped Spark web UI at http://ip-172-16-88-191.us-west-2.compute.internal:4040
18/01/05 20:43:29 INFO YarnClientSchedulerBackend: Interrupting monitor thread
18/01/05 20:43:29 INFO YarnClientSchedulerBackend: Shutting down all executors
18/01/05 20:43:29 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
18/01/05 20:43:29 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
18/01/05 20:43:29 INFO YarnClientSchedulerBackend: Stopped
18/01/05 20:43:29 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/01/05 20:43:29 INFO MemoryStore: MemoryStore cleared
18/01/05 20:43:29 INFO BlockManager: BlockManager stopped
18/01/05 20:43:29 INFO BlockManagerMaster: BlockManagerMaster stopped
18/01/05 20:43:29 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/01/05 20:43:29 INFO SparkContext: Successfully stopped SparkContext
18/01/05 20:43:29 INFO ShutdownHookManager: Shutdown hook called
18/01/05 20:43:29 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-7eaf86b4-d141-400e-8c40-e37045981e1c
18/01/05 20:43:29 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-7eaf86b4-d141-400e-8c40-e37045981e1c/pyspark-8b546929-ebdc-4b12-a81e-3845e181dd58
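
One thing worth trying, sketched below with option names taken from other issues on this page (rate_limit_per_segment, segments, page_size) rather than from verified documentation, is to cap the scan rate explicitly so the full-table scan does not consume all of the provisioned read capacity.

// Sketch only: option values are illustrative and the option names are assumed
// from the "Need recommendation" issue further down this page.
val df = spark.read
  .format("com.github.traviscrawford.spark.dynamodb")
  .option("region", "us-west-2")
  .option("table", "solr-product")
  .option("rate_limit_per_segment", "25")  // assumed per-segment read-rate cap
  .option("segments", "4")                 // assumed number of parallel scan segments
  .option("page_size", "100")
  .load()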

.dynamodb not found?

trying to use the DataFrame API

val users = sqlContext.read.dynamodb("users")

We are using Java in our project. When trying the following:
import com.github.traviscrawford.spark.dynamodb.*;
......
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(spark.sparkContext());
sqlContext.read()

It seems .dynamodb() is not a defined method. Is there something we are missing?
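
As far as I can tell, .dynamodb(...) is added to DataFrameReader by a Scala implicit that import com.github.traviscrawford.spark.dynamodb._ brings into scope, so it is not visible as a plain method from Java. The generic data source API used elsewhere on this page does not rely on that implicit and translates directly to Java (spark.read().format(...).option(...).load()); a sketch in Scala:

// Sketch only: region and table name are placeholders.
val users = sqlContext.read
  .format("com.github.traviscrawford.spark.dynamodb")
  .option("region", "us-east-1")
  .option("table", "users")
  .load()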

Read from DynamoDB JSON with Spark in Scala

Hi, I've been trying to read DynamoDB JSON (exported from DynamoDB and stored in a file system). However, I have not been able to find an appropriate way either to process this DynamoDB JSON format directly or to convert it to a normal JSON file.

Please note that the DynamoDB JSON file has a couple of nested levels as well.

Thank you.

Can't install/run

Hey there! Total noob to the whole spark scene so sorry if this is something super obvious.

I'm trying this on an EMR cluster with Spark as instructed, with no success. I even tried downloading the repo and running mvn install, with no success. Any idea what I'm missing?

Here's the output:

[hadoop@ip-xxxx spark-dynamodb]$ spark-shell --packages com.github.traviscrawford:spark-dynamodb:0.0.1-SNAPSHOT
Ivy Default Cache set to: /home/hadoop/.ivy2/cache
The jars for the packages stored in: /home/hadoop/.ivy2/jars
:: loading settings :: url = jar:file:/usr/lib/spark/lib/spark-assembly-1.6.1-hadoop2.7.2-amzn-2.jar!/org/apache/ivy/core/settings/ivysettings.xml
com.github.traviscrawford#spark-dynamodb added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
        confs: [default]
        found com.github.traviscrawford#spark-dynamodb;0.0.1-SNAPSHOT in local-m2-cache
        found com.amazonaws#aws-java-sdk-dynamodb;1.10.68 in local-m2-cache
        found com.amazonaws#aws-java-sdk-s3;1.10.68 in local-m2-cache
        found com.amazonaws#aws-java-sdk-kms;1.10.68 in local-m2-cache
        found com.amazonaws#aws-java-sdk-core;1.10.68 in local-m2-cache
        found commons-logging#commons-logging;1.1.3 in local-m2-cache
        found org.apache.httpcomponents#httpclient;4.3.6 in local-m2-cache
        found org.apache.httpcomponents#httpcore;4.3.3 in local-m2-cache
        found commons-codec#commons-codec;1.6 in local-m2-cache
        found com.fasterxml.jackson.core#jackson-databind;2.5.3 in local-m2-cache
        found com.fasterxml.jackson.core#jackson-annotations;2.5.0 in local-m2-cache
        found com.fasterxml.jackson.core#jackson-core;2.5.3 in local-m2-cache
        found com.fasterxml.jackson.dataformat#jackson-dataformat-cbor;2.5.3 in local-m2-cache
        found joda-time#joda-time;2.8.1 in local-m2-cache
        found com.google.guava#guava;18.0 in local-m2-cache
downloading file:/home/hadoop/.m2/repository/com/github/traviscrawford/spark-dynamodb/0.0.1-SNAPSHOT/spark-dynamodb-0.0.1-SNAPSHOT.jar ...
        [SUCCESSFUL ] com.github.traviscrawford#spark-dynamodb;0.0.1-SNAPSHOT!spark-dynamodb.jar (2ms)
downloading file:/home/hadoop/.m2/repository/com/amazonaws/aws-java-sdk-dynamodb/1.10.68/aws-java-sdk-dynamodb-1.10.68.jar ...
        [SUCCESSFUL ] com.amazonaws#aws-java-sdk-dynamodb;1.10.68!aws-java-sdk-dynamodb.jar (4ms)
downloading file:/home/hadoop/.m2/repository/com/google/guava/guava/18.0/guava-18.0.jar ...
        [SUCCESSFUL ] com.google.guava#guava;18.0!guava.jar(bundle) (5ms)
downloading file:/home/hadoop/.m2/repository/com/amazonaws/aws-java-sdk-s3/1.10.68/aws-java-sdk-s3-1.10.68.jar ...
        [SUCCESSFUL ] com.amazonaws#aws-java-sdk-s3;1.10.68!aws-java-sdk-s3.jar (3ms)
downloading file:/home/hadoop/.m2/repository/com/amazonaws/aws-java-sdk-core/1.10.68/aws-java-sdk-core-1.10.68.jar ...
        [SUCCESSFUL ] com.amazonaws#aws-java-sdk-core;1.10.68!aws-java-sdk-core.jar (3ms)
downloading file:/home/hadoop/.m2/repository/com/amazonaws/aws-java-sdk-kms/1.10.68/aws-java-sdk-kms-1.10.68.jar ...
        [SUCCESSFUL ] com.amazonaws#aws-java-sdk-kms;1.10.68!aws-java-sdk-kms.jar (2ms)
downloading file:/home/hadoop/.m2/repository/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar ...
        [SUCCESSFUL ] commons-logging#commons-logging;1.1.3!commons-logging.jar (1ms)
downloading file:/home/hadoop/.m2/repository/org/apache/httpcomponents/httpclient/4.3.6/httpclient-4.3.6.jar ...
        [SUCCESSFUL ] org.apache.httpcomponents#httpclient;4.3.6!httpclient.jar (2ms)
downloading file:/home/hadoop/.m2/repository/com/fasterxml/jackson/dataformat/jackson-dataformat-cbor/2.5.3/jackson-dataformat-cbor-2.5.3.jar ...
        [SUCCESSFUL ] com.fasterxml.jackson.dataformat#jackson-dataformat-cbor;2.5.3!jackson-dataformat-cbor.jar(bundle) (1ms)
downloading file:/home/hadoop/.m2/repository/joda-time/joda-time/2.8.1/joda-time-2.8.1.jar ...
        [SUCCESSFUL ] joda-time#joda-time;2.8.1!joda-time.jar (3ms)
downloading file:/home/hadoop/.m2/repository/org/apache/httpcomponents/httpcore/4.3.3/httpcore-4.3.3.jar ...
        [SUCCESSFUL ] org.apache.httpcomponents#httpcore;4.3.3!httpcore.jar (3ms)
:: resolution report :: resolve 6570ms :: artifacts dl 62ms
        :: modules in use:
        com.amazonaws#aws-java-sdk-core;1.10.68 from local-m2-cache in [default]
        com.amazonaws#aws-java-sdk-dynamodb;1.10.68 from local-m2-cache in [default]
        com.amazonaws#aws-java-sdk-kms;1.10.68 from local-m2-cache in [default]
        com.amazonaws#aws-java-sdk-s3;1.10.68 from local-m2-cache in [default]
        com.fasterxml.jackson.core#jackson-annotations;2.5.0 from local-m2-cache in [default]
        com.fasterxml.jackson.core#jackson-core;2.5.3 from local-m2-cache in [default]
        com.fasterxml.jackson.core#jackson-databind;2.5.3 from local-m2-cache in [default]
        com.fasterxml.jackson.dataformat#jackson-dataformat-cbor;2.5.3 from local-m2-cache in [default]
        com.github.traviscrawford#spark-dynamodb;0.0.1-SNAPSHOT from local-m2-cache in [default]
       com.google.guava#guava;18.0 from local-m2-cache in [default]
        commons-codec#commons-codec;1.6 from local-m2-cache in [default]
        commons-logging#commons-logging;1.1.3 from local-m2-cache in [default]
        joda-time#joda-time;2.8.1 from local-m2-cache in [default]
        org.apache.httpcomponents#httpclient;4.3.6 from local-m2-cache in [default]
        org.apache.httpcomponents#httpcore;4.3.3 from local-m2-cache in [default]
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   15  |   15  |   15  |   0   ||   15  |   11  |
        ---------------------------------------------------------------------

:: problems summary ::
:::: WARNINGS
                [NOT FOUND  ] com.fasterxml.jackson.core#jackson-databind;2.5.3!jackson-databind.jar(bundle) (0ms)

        ==== local-m2-cache: tried

          file:/home/hadoop/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.5.3/jackson-databind-2.5.3.jar

                [NOT FOUND  ] commons-codec#commons-codec;1.6!commons-codec.jar (0ms)

        ==== local-m2-cache: tried

          file:/home/hadoop/.m2/repository/commons-codec/commons-codec/1.6/commons-codec-1.6.jar

                [NOT FOUND  ] com.fasterxml.jackson.core#jackson-annotations;2.5.0!jackson-annotations.jar(bundle) (0ms)

        ==== local-m2-cache: tried

          file:/home/hadoop/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.5.0/jackson-annotations-2.5.0.jar

                [NOT FOUND  ] com.fasterxml.jackson.core#jackson-core;2.5.3!jackson-core.jar(bundle) (0ms)

        ==== local-m2-cache: tried

          file:/home/hadoop/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.5.3/jackson-core-2.5.3.jar

                ::::::::::::::::::::::::::::::::::::::::::::::

                ::              FAILED DOWNLOADS            ::

                :: ^ see resolution messages for details  ^ ::

                ::::::::::::::::::::::::::::::::::::::::::::::

                :: commons-codec#commons-codec;1.6!commons-codec.jar

                :: com.fasterxml.jackson.core#jackson-databind;2.5.3!jackson-databind.jar(bundle)

                :: com.fasterxml.jackson.core#jackson-annotations;2.5.0!jackson-annotations.jar(bundle)

                :: com.fasterxml.jackson.core#jackson-core;2.5.3!jackson-core.jar(bundle)

                ::::::::::::::::::::::::::::::::::::::::::::::



:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
Exception in thread "main" java.lang.RuntimeException: [download failed: commons-codec#commons-codec;1.6!commons-codec.jar, download failed: com.fasterxml.jackson.core#jackson-databind;2.5.3!jackson-databind.jar(bundle), download failed: com.fasterxml.jackson.core#jackson-annotations;2.5.0!jackson-annotations.jar(bun
dle), download failed: com.fasterxml.jackson.core#jackson-core;2.5.3!jackson-core.jar(bundle)]
        at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1068)
        at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:287)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:154)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

org.apache.spark.sql.AnalysisException: Table or view not found:

Hi,

We have a number of DynamoDB tables and I am doing a PoC with this library to access some of them. We actually use Databricks as the platform.

I'm doing a very simple query:

import com.github.traviscrawford.spark.dynamodb._
val data = sqlContext.sql("select id from purchaseorderline_uk")

but I get the error below:

org.apache.spark.sql.AnalysisException: Table or view not found: purchaseorderline_uk; line 1 pos 15

The table absolutely exists and the job assumes a role that gives it full DynamoDB rights. I also have internet access via the NAT gateway, so connectivity is fine.

Totally stumped on this seemingly simple query against a table that exists. Is there some configuration required that isn't documented (or that I just missed)? Has anyone had this before?

I'm using v0.11 (the latest) from Maven.

thanks!
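
Based on the usage shown in other issues on this page (not verified on Databricks): sqlContext.sql only sees tables registered in the Spark catalog, so the DynamoDB table has to be loaded through the connector and registered as a temporary view before it can be referenced by name in SQL.

// Sketch only, following the pattern from the "Getting Null Values" issue below.
import com.github.traviscrawford.spark.dynamodb._

val purchaseOrderLines = sqlContext.read.dynamodb("purchaseorderline_uk")
purchaseOrderLines.createOrReplaceTempView("purchaseorderline_uk")
val data = sqlContext.sql("select id from purchaseorderline_uk")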

Working dependencies?

What version of hadoop-aws does this work with? I keep running into conflicts with the aws dependencies between this and hadoop-aws. What working Spark, Hadoop, spark-dynamodb, and hadoop-aws versions can I use to run the simple example that writes to s3?

spark-submit \
  --class com.github.traviscrawford.spark.dynamodb.DynamoBackupJob \
  --packages com.github.traviscrawford:spark-dynamodb:0.0.5 '' \
  -table users -output s3://myBucket/users \
  -totalSegments 4 -rateLimit 100

Not able to write to s3

Hey, I have launched an EMR cluster with Spark, and on the master machine I am issuing the spark-submit command
spark-submit --packages com.github.traviscrawford:spark-dynamodb:0.0.6 --class com.github.traviscrawford.spark.dynamodb.DynamoBackupJob '' -table content -output s3://mybucket/dynamotest/file -totalSegments 4 -rateLimit 100 -region ap-southeast-1 /usr/lib/spark/jars/fake.jar

I finally get this error at the end:
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 10, ip----**.us-west-1.compute.internal): java.lang.RuntimeException: Stream '/jars/hadoop' was not found.

I just opened a normal spark-shell on the same machine and tried to write something to S3, and it worked properly.
I am not able to understand why the above command is not working, and what jar it is trying to find.

Using spark-dynamodb in unit tests

DynamoDB provides local instance http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html

Is there a way to read a DataFrame from local DynamoDB by providing an endpoint and credentials?

I tried the following code, but it throws an exception:

val optDF = spark.sqlContext.read.option("endpoint","http://localhost:12000").dynamodb("tempTable")

This throws the error:
java.lang.RuntimeException: com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Cannot do operations on a non-existent table (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: xxx-xxx)

The table is created using an AmazonDynamoDBClient connected to the local server:

amazonDynamoDB = new AmazonDynamoDBClient(new BasicAWSCredentials("access", "secret"))
amazonDynamoDB.setEndpoint("http://localhost:" + port)
amazonDynamoDB.createTable(...

The table ARN, when printed, is:
arn:aws:dynamodb:ddblocal:000000000000:table/tempTable
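
One possible explanation (an assumption, not verified here): DynamoDB Local keeps a separate database per access key and region unless it is started with the -sharedDb flag, and the ddblocal region in the printed ARN suggests the table was created under different credentials or a different region than the ones the connector's client resolves at read time, which would make the table invisible to the scan and produce the ResourceNotFoundException above.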

read_capacity_pct unused?

Greetings!

First run and this library works as advertised (thanks @traviscrawford for the open source contribution 👍), except the readCapacityPct option doesn't appear to be respected.

My snippet:

val df = sqlContext
  .read
  .option("read_capacity_pct", "20")  // <-- this should set usage to 20%, right?
  .schema(TableSchema)
  .dynamodb(TableName)
df.count

The issue I'm seeing is I watched the read unit consumption jump to ~85%, which won't fly in a production environment. Am I configuring the read_capacity_pct option correctly?

From what I see, readCapacityPct gets declared but is not used elsewhere.

Cheers.

filter_expression not working as expected.

Hi,

I have a DynamoDB table with offer_id as the hash key and created_on as the range key. I am trying to execute the Scala program below. The filter expression is somehow not working: it is showing me more records than expected. Am I doing something wrong here?

import com.github.traviscrawford.spark.dynamodb._
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

import org.apache.spark.sql._
import org.apache.spark.sql.types._
val schema = StructType(
  Seq(
    StructField("offer_id", StringType),
    StructField("created_on", StringType)
  )
)

val tblImpressionLog = sqlContext.read.option("filter_expression", "offer_id = 688").dynamodb("eu-central-1", "tblImpressionLog")

tblImpressionLog.show

Getting Null Values when using SparkSql

Hi,

I am loading a DynamoDB table and registering it for querying with Spark SQL.

For example:

//Step 1
val TableDf = spark.read.dynamodb("some table")

//Step 2
val TableDf = spark.read.dynamodb("some table").cache()

TableDf.createOrReplaceTempView("someTable")

val dataDF = spark.sql("select * from someTable")
// This shows all the values as saved in Dynamo Table
dataDF.show()

val newDF = spark.sql("select a,b,c from someTable")
// When using Step 1, this shows most columns as null
newDF.show()

But when I use Step 2, i.e. the cached DataFrame, to register the table, the data is shown as expected.

I have a big DynamoDB table and can't cache it!

ItemConverter to support DynamoDB List

https://github.com/traviscrawford/spark-dynamodb/blob/master/src/main/scala/com/github/traviscrawford/spark/dynamodb/ItemConverter.scala#L17

When it hits an unsupported type it immediately throws an exception. From your comments there I assume this is expected :)

17/08/01 12:41:06 ERROR DynamoDBRelation: Failed converting item to row: {"id":"xxxx","sections":["AAA","BBB","CCC"]}
scala.MatchError: ArrayType(StringType,true) (of class org.apache.spark.sql.types.ArrayType)
	at com.github.traviscrawford.spark.dynamodb.ItemConverter$$anonfun$1.apply(ItemConverter.scala:30)

I'm quite new to Scala but the following works for my current needs:

case _ =>
  field.dataType match {
    case ArrayType(StringType, true) => jsonFieldValue.extract[List[String]]
    case IntegerType => jsonFieldValue.extract[Int]
    case LongType => jsonFieldValue.extract[Long]
    case DoubleType => jsonFieldValue.extract[Double]
    case StringType => jsonFieldValue.extract[String]
  }

I wonder if you are planning to update the logic with something more robust.

Error mapping a nested format

Hello guys,

After loading my DynamoDB table using Scala, when I try to retrieve a row filtered on a specific value I get an error.

The full record format is:

{
  "agent_id": 14732,
  "called": "+185999999",
  "to_city": "null",
  "language": "english",
  "team_id": 15,
  "taskChannelUniqueName": "voice",
  "caller_country": "CA",
  "price": "-0.22880",
  "account_sid": "AC3bdb78f4397e3951251",
  "from": "+178049978523",
  "ivr_call_sid": "C983a7254cb0ade7862705",
  "travel_type": "international",
  "priceUnit": "USD",
  "called_zip": "null",
  "from_city": "EDMONTON",
  "caller_zip": "null",
  "from_state": "AB",
  "start_time": "2018-07-04 18:57:31",
  "site_id": "4",
  "status": "completed",
  "from_country": "CA",
  "original_task_sid": "XFT05514703a5c53ec0338a7ceb9cddf16c",
  "conference_sid": "CF6ffadad8e05be8b48d78c0704b21562c",
  "direct": "inbound",
  "to_country": "US",
  "conversations": {
    "handling_team_name": "Team 1",
    "conversation_attribute_1": "No",
    "hold_time": 12,
    "ivr_path": "other_inquiries",
    "ivr_time": 57,
    "conversation_id": "XDT55414703a5c53ec0338a7ceb9cddf16c",
    "segment": "CAe8e393775870c1ec4d765c2355009acd",
    "segment_link": [
      "https:\\/\\/d3dx7qtk6i17ic.cloudfront.net\\/production\\/2018\\/07\\/04\\/15\\/RE6eede8cf6d0a574a4f4564a32e555b38.wav"
    ],
    "external_contact": "JFLY TP",
    "case": 668132,
    "outcome": null,
    "queue": "Non-Revenue",
    "handling_team_id": 15,
    "in_business_hours": "Yes",
    "direction": "Inbound"
  },
  "to_state": "null",
  "products": {
    "product_attribute_1": "International",
    "brand": "JustFly",
    "product_attribute_2": "null"
  },
  "duration": "946",
  "booking_id": "108154082",
  "SourceApp": "ServPro",
  "call_sid": "CAb46c43e3e8a983a7254cb0ade7862705",
  "conversation_id": "XDT55414703a5c53ec0338a7ceb9cddf16c",
  "from_zip": "null",
  "customers": {
    "gender": "Male",
    "market_segment": "1st Time",
    "name": "Kofi Annor",
    "customer_id": 58484842,
    "business_value": "5596.19",
    "email": "[email protected]",
    "year_of_birth": 1521,
    "acquisition_date": "2017-12-15"
  },
  "department": "service",
  "external_contact": "JFLY TP",
  "call_type": "other_inquiries",
  "direction": "inbound",
  "support_case_id": 668132,
  "caller_state": "AB",
  "to_zip": "null",
  "called_country": "US",
  "end_time": "2018-07-04 19:13:17",
  "called_city": "null",
  "api_version": "2010-04-01",
  "called_state": "null",
  "agents": {
    "full_name": "Customer Name",
    "role": "agent_level_1",
    "agent_id": 14732,
    "manager": null,
    "location": null,
    "team_id": 15,
    "department": "service",
    "email": "[email protected]",
    "team_name": "Team 1"
  },
  "caller": "+19878874894",
  "caller_city": "EDMONTON",
  "to": "+15458779216"
}

Error:

at java.lang.Thread.run(Thread.java:748)
18/07/16 18:33:26 ERROR DynamoDBRelation: Failed converting item to row: {"conversation_id":"WT00014703a5c53ec0338a7ceb9cddf16c","agents":{"full_name":"Princess Naquita","role":"agent_level_1","agent_id":14732,"manager":"None","location":"None","team_id":15,"department":"service","email":"[email protected]","team_name":"Team 1"}}
scala.MatchError: StructType(StructField(agent_id,LongType,true), StructField(department,StringType,true), StructField(email,StringType,true), StructField(full_name,StringType,true), StructField(location,StringType,true), StructField(manager,StringType,true), StructField(role,StringType,true), StructField(team_id,LongType,true), StructField(team_name,StringType,true)) (of class org.apache.spark.sql.types.StructType)
at com.github.traviscrawford.spark.dynamodb.ItemConverter$$anonfun$1.apply(ItemConverter.scala:30)
at com.github.traviscrawford.spark.dynamodb.ItemConverter$$anonfun$1.apply(ItemConverter.scala:20)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at org.apache.spark.sql.types.StructType.foreach(StructType.scala:98)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at org.apache.spark.sql.types.StructType.map(StructType.scala:98)
at com.github.traviscrawford.spark.dynamodb.ItemConverter$.toRow(ItemConverter.scala:20)
at com.github.traviscrawford.spark.dynamodb.DynamoDBRelation$$anonfun$scan$1$$anonfun$7.apply(DynamoDBRelation.scala:131)
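
This looks like the same limitation as the ItemConverter issue above: the type match around ItemConverter.scala:30 has no case for nested StructType (or ArrayType) values. A rough, unverified sketch of how a recursive conversion could look, assuming the json4s values the existing code already works with (extractValue is a hypothetical helper, not part of the connector):

import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
import org.json4s._

// Sketch only: recursively convert a json4s value to the Spark type the schema expects.
def extractValue(json: JValue, dataType: DataType)(implicit formats: Formats): Any =
  dataType match {
    case StringType  => json.extract[String]
    case LongType    => json.extract[Long]
    case DoubleType  => json.extract[Double]
    case IntegerType => json.extract[Int]
    case BooleanType => json.extract[Boolean]
    case ArrayType(StringType, _) => json.extract[List[String]]
    case struct: StructType =>
      // Recurse into the nested object, one field at a time.
      Row.fromSeq(struct.fields.map(f => extractValue(json \ f.name, f.dataType)))
    case other =>
      throw new UnsupportedOperationException(s"Unsupported DynamoDB type: $other")
  }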

Need recommendation

I am using the following:

"rate_limit_per_segment": "10", "page_size": 100, "segments": 5

Our RCU is 100, but consumption always goes above 150 and DynamoDB throttles it.

Can you recommend appropriate parameters?
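
For what it is worth (a rough reading, not verified against the connector's internals): with 5 segments at a rate_limit_per_segment of 10, the nominal ceiling would be about 5 × 10 = 50 read units, so consumption above 150 suggests the limiter is not capping what each page of 100 items actually consumes. Lowering page_size, the number of segments, or the per-segment limit and re-measuring would be the obvious knobs to try.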

How do I set my AWS credentials?

Hey, there are no examples for setting the configuration attributes. How do I set my own AWS credentials for the sqlContext? Thanks!

Not really an issue in itself, but more documentation would be nice.
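
There do not appear to be connector-specific credential options documented on this page; judging by the CredentialsLegacyConfigLocationProvider warning in the log of the pyspark issue above, the connector relies on the AWS SDK default credential provider chain, so the standard SDK mechanisms (environment variables, ~/.aws/credentials, or an EC2 instance profile) should apply. As one sketch, the SDK's Java system properties can be set on the driver, though on a cluster the executors also need credentials (for example via an instance profile):

// Sketch only: standard AWS SDK v1 system properties, not connector options.
// Placeholder values; never hard-code real keys.
System.setProperty("aws.accessKeyId", "YOUR_ACCESS_KEY_ID")
System.setProperty("aws.secretKey", "YOUR_SECRET_KEY")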

Support write operation

Hi,

Writing to DynamoDB is currently not supported. Would it be possible to add it?

Thanks,
Hub
